I Built a Chatbot for My SpaceX Interview. They Never Saw It.
How I built a production-grade AI chatbot for a SpaceX interview — and what happened when the role disappeared before anyone saw it work.
I Built Something Nobody Asked For
I was deep in a SpaceX interview process. Multiple rounds in. The kind of process where you start rearranging your calendar and telling yourself not to get excited.
So naturally, I built a chatbot.
Not a ChatGPT wrapper. Not a "paste your resume and ask questions" toy. A production-grade AI system that knew my entire career, could answer technical questions about specific projects, and was tailored to the role I was interviewing for.
The chatbot used Google Gemini Flash Lite via the Vercel AI SDK, running on AWS Lambda with rate limiting, moderation, and profile-aware tool calls. It could look up specific work experiences, surface relevant projects, pull testimonials, and tailor every response to the SpaceX engineering role — all from structured resume data, not hallucinated filler.
The Architecture
The system had five layers. Each one existed because I'd seen the failure mode without it.
Context bundling. A build-time script compiled my resume data and profile configuration into a single JSON payload. The chatbot didn't freestyle about my experience. It pulled from the same structured dataset that powers this portfolio. Every claim traceable. Every metric real.
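A minimal sketch of what that build-time step can look like. The shapes and names here (`ResumeData`, `ProfileConfig`, `buildContextBundle`) are illustrative, not the actual portfolio code:

```typescript
// Illustrative build-time bundling: collect structured resume data and
// the active profile config into one serialized payload. Every field in
// the payload traces back to the same dataset the rendered site uses.

interface ResumeData {
  experiences: object[];
  projects: object[];
  testimonials: object[];
}

interface ProfileConfig {
  profileId: string;   // e.g. "spacex-platform" (hypothetical id)
  emphasize: string[]; // skills this profile should foreground
}

function buildContextBundle(resume: ResumeData, profile: ProfileConfig): string {
  // One JSON payload, generated once at build time, so the model can't
  // drift from what the portfolio itself displays.
  return JSON.stringify({
    profile,
    ...resume,
    generatedAt: new Date().toISOString(),
  });
}
```

At deploy time the output would be written into the build artifact (a plain `.json` file) rather than fetched at request time, which keeps every chatbot answer anchored to a fixed, reviewable dataset.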
Tool calls. The chatbot didn't just generate text. It called functions: experience lookup by company or skill, project deep-dives with architecture details, testimonial retrieval. When someone asked "What's your distributed systems experience?" it didn't summarize — it queried.
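The core of such a tool call is a query over the structured dataset, not text generation. A hedged sketch of the lookup logic (the types and `lookupExperience` function are hypothetical stand-ins):

```typescript
// Hypothetical core of an "experience lookup" tool: the model queries
// structured data by company or skill instead of summarizing from memory.

interface Experience {
  company: string;
  role: string;
  skills: string[];
  summary: string;
}

interface LookupArgs {
  company?: string;
  skill?: string;
}

function lookupExperience(dataset: Experience[], args: LookupArgs): Experience[] {
  return dataset.filter((e) => {
    const companyOk =
      !args.company ||
      e.company.toLowerCase() === args.company.toLowerCase();
    const skillOk =
      !args.skill ||
      e.skills.some((s) => s.toLowerCase() === args.skill!.toLowerCase());
    return companyOk && skillOk;
  });
}
```

In the Vercel AI SDK, a function like this would sit inside a tool's `execute` callback, with a schema describing the `company` and `skill` parameters so the model knows when and how to call it.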
Role-specific framing. The SpaceX profile wasn't a generic dump. It emphasized the work that mattered for that role: real-time systems, infrastructure at scale, platform engineering. Same resume data, different lens.
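One way to implement that lens, sketched with hypothetical names: the same experience list, reordered so the skills a given role cares about surface first.

```typescript
// Hypothetical "lens" over shared resume data: identical experiences,
// ranked so role-relevant skills come first for this profile.

interface Experience {
  company: string;
  skills: string[];
}

function applyProfileLens(
  experiences: Experience[],
  emphasize: string[]
): Experience[] {
  const relevance = (e: Experience) =>
    e.skills.filter((s) => emphasize.includes(s)).length;
  // Stable sort: most role-relevant experience first, original order
  // preserved among equally relevant entries. The data never changes,
  // only the framing.
  return [...experiences].sort((a, b) => relevance(b) - relevance(a));
}
```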
Moderation pipeline. Rate limiting per session. Token budgets so it couldn't be tricked into dumping my entire history in one response. Content filtering so it stayed professional. The kind of guardrails you build when you've seen what happens without them.
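A sketch of the guardrail pattern, assuming a per-session request window plus a lifetime token budget (the `SessionGuard` class and its limits are illustrative, not the production values):

```typescript
// Hypothetical per-session guardrails: a windowed request rate limit
// plus a running token budget, so one session can't be tricked into
// dumping the entire dataset through repeated or oversized responses.

interface SessionState {
  requests: number;
  tokensUsed: number;
  windowStart: number;
}

class SessionGuard {
  private sessions = new Map<string, SessionState>();
  private maxRequests: number;
  private windowMs: number;
  private tokenBudget: number;

  constructor(maxRequests = 20, windowMs = 60_000, tokenBudget = 8_000) {
    this.maxRequests = maxRequests; // requests allowed per window
    this.windowMs = windowMs;       // rolling window length
    this.tokenBudget = tokenBudget; // lifetime token cap per session
  }

  allow(sessionId: string, tokensRequested: number, now = Date.now()): boolean {
    let s = this.sessions.get(sessionId);
    if (!s || now - s.windowStart >= this.windowMs) {
      // New window: reset the request count, keep the token tally.
      s = { requests: 0, tokensUsed: s?.tokensUsed ?? 0, windowStart: now };
      this.sessions.set(sessionId, s);
    }
    if (s.requests >= this.maxRequests) return false;
    if (s.tokensUsed + tokensRequested > this.tokenBudget) return false;
    s.requests += 1;
    s.tokensUsed += tokensRequested;
    return true;
  }
}
```

Content filtering would run as a separate check on the message text before the model call; the budget logic above only covers the rate and token half of the pipeline.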
Lambda backend. Serverless, cheap to run, fast to cold-start. API Gateway in front. WAF rules. The same production posture I'd use for any customer-facing system.
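The handler shape for that posture is roughly the API Gateway proxy contract: JSON request in, status code and JSON body out. A hedged sketch with minimal local types standing in for the `aws-lambda` package's event types; the validation and reply logic here is illustrative:

```typescript
// Hypothetical shape of the Lambda entry point behind API Gateway.

interface ApiGatewayEvent {
  body: string | null;
  headers: Record<string, string | undefined>;
}

interface ApiGatewayResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

export async function handler(event: ApiGatewayEvent): Promise<ApiGatewayResult> {
  const json = (statusCode: number, payload: unknown): ApiGatewayResult => ({
    statusCode,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  if (!event.body) return json(400, { error: "missing request body" });

  let message: unknown;
  try {
    message = JSON.parse(event.body).message;
  } catch {
    return json(400, { error: "invalid JSON" });
  }
  if (typeof message !== "string") {
    return json(400, { error: "missing message field" });
  }

  // In the real system, rate limiting, moderation, and the Gemini call
  // via the Vercel AI SDK would run here before the response goes back.
  return json(200, { reply: `received: ${message}` });
}
```

WAF and API Gateway sit in front of this, so the handler only ever sees traffic that already passed the edge rules.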
What Happened
The position was eliminated. Cost savings. The chatbot was never seen by the hiring team. I had built a production AI system tailored to a specific role, and the role vanished before the interview process concluded. What happened next turned a dead-end project into the foundation of a portfolio that serves thousands of visitors.
I'm not going to pretend that didn't sting. You build something specifically for an opportunity, you put real engineering into it, and the opportunity evaporates. That's the job. That's how it goes sometimes.
AI-Directed Development
This is what I mean when I talk about AI-directed development. It's not "I used Copilot to autocomplete some functions." It's building AI into the product itself: not a feature bolted onto a static site, but a structural layer of the portfolio that makes everything else more accessible. The SpaceX chatbot became that layer, serving every visitor with conversational access to structured career data across 16 profile variants.
A hiring manager lands on the site. They have 90 seconds. They can read, or they can ask. The chatbot gives them a second path through the same information — conversational, targeted, fast.
That's the craft. Not the wrapper. The system underneath it.
The best engineering work doesn't always ship where you planned. Sometimes it ships somewhere better.
Try It
The chatbot is live on every page of this portfolio. Click the chat icon in the bottom corner. Ask it about my distributed systems work, or my AI experience, or what I built at any company on my resume. It pulls from the same structured dataset that powered the SpaceX version.
The AI engineering profile shows the full trajectory of this work. The platform engineering profile has the infrastructure perspective. And if you're curious about the Vercel AI SDK or Google Gemini that powers it, those docs are worth exploring.
The SpaceX hiring team never saw what I built for them. You can see it right now.
In This Series
- One Afternoon, 23 Backgrounds — The 23 canvas engines behind every page
- One Resume Is Not Enough — How YAML drives 16 portfolio variants
- I Built a Chatbot for My SpaceX Interview. They Never Saw It. — The chatbot that outlived the opportunity
- Why Everything Is Glass — The glassmorphism design system
- Ask ChatGPT Who Tyler Wall Is — Infrastructure and AI discoverability
Frequently Asked Questions
What tech stack did the SpaceX interview chatbot use?
The chatbot used Google Gemini Flash Lite via the Vercel AI SDK, deployed on AWS Lambda behind API Gateway. It included rate limiting, content moderation, token budgets, and profile-aware tool calls that let it answer questions about specific roles and experiences from a structured resume dataset.
Is the SpaceX chatbot the same one on Tyler Wall's portfolio?
The architecture is the same. The chatbot Tyler built for the SpaceX interview became the foundation of the portfolio's AI layer. Profile-aware context bundling, tool calls for experience lookup and project deep-dives, and the moderation pipeline all carried over directly.
Why build a chatbot for a job interview?
Tyler Wall's approach to AI-directed development means using AI as a core part of how he works and presents his work. A chatbot that could answer technical questions about his experience, pull relevant projects, and tailor responses to the SpaceX role demonstrated the approach better than any resume could.