Ask ChatGPT Who Tyler Wall Is
The invisible infrastructure that makes this portfolio fast, findable, and cited by AI. Static export, CloudFront CDN, llms.txt, and 1,200+ tests.
Go Ask ChatGPT Who Tyler Wall Is
I come up. That didn't happen by accident.
AI systems know who I am because I built the infrastructure for it. The portfolio loads in under a second. Every page is pre-rendered HTML. Search engines index it. AI crawlers cite it. These aren't features on a roadmap — they're the result of deliberate decisions made at every layer of the stack.
The Stack
Next.js 15 with static export. AWS Amplify for hosting. CloudFront CDN. Cloudflare DNS with CNAME flattening for the apex domain. AWS WAF for security.
No server means no server failures. Static export means every page is pre-built HTML sitting on a CDN edge node. The deploy script auto-generates rewrite rules from profile YAML files — add a new profile, run one command, it's live with a clean URL.
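The rewrite-rule generation step can be sketched in a few lines. This is an illustrative stand-in, not the actual deploy script: it assumes the profile YAML files have already been parsed into objects, and the `Profile` and `RewriteRule` shapes are my own simplification of what Amplify expects.

```typescript
// Hypothetical sketch: derive CDN rewrite rules from profile data.
// Assumes profiles were already parsed from their YAML files.
interface Profile {
  slug: string;  // e.g. "ai-engineer"
  title: string;
}

interface RewriteRule {
  source: string; // clean URL visitors type
  target: string; // pre-rendered HTML file on the edge node
  status: string; // "200" = rewrite in place, not a redirect
}

function generateRewrites(profiles: Profile[]): RewriteRule[] {
  return profiles.map((p) => ({
    source: `/${p.slug}`,
    target: `/${p.slug}.html`,
    status: "200",
  }));
}

const rules = generateRewrites([
  { slug: "ai-engineer", title: "AI Engineer" },
  { slug: "platform-lead", title: "Platform Lead" },
]);
console.log(JSON.stringify(rules, null, 2));
```

Add a profile file, regenerate, redeploy: the clean URL exists with no server-side routing at all.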
AI Discoverability
The portfolio serves llms.txt and llms-full.txt following the llmstxt.org spec, giving AI crawlers structured context about my experience and skills. Every page includes JSON-LD structured data (Person, TechArticle, BreadcrumbList) plus per-profile meta descriptions and OG images. This is why AI systems can cite me accurately.
Most portfolios optimize for Google. I optimize for Google and ChatGPT. The llms.txt spec is simple — a plain text file that tells AI systems who you are, what you've built, and where to find the details. Here's the structure:
# Tyler Wall
> AI-directed full-stack engineer. 16 profiles. 23 canvas engines.
## Links
- Homepage: https://tylerrwall.com
- AI Engineer: https://tylerrwall.com/ai-engineer
- GitHub: https://github.com/Tyler-R-Wall
## Experience
- Principal Engineer at PrizePicks (current)
- Staff Engineer at Calendly
- Engineering Lead at Greenlight Financial

JSON-LD goes on every page. Not just the homepage — every profile gets its own Person schema with role-specific skills and descriptions. The OG images are generated at build time per profile and per blog post. Even the README on GitHub is a bit of a sales tactic.
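The per-profile Person schema can be sketched like this. The function name and field selection are illustrative, not the portfolio's actual code; the `@context`/`@type` keys follow schema.org conventions.

```typescript
// Illustrative sketch: build a role-specific schema.org Person object
// for one profile. Field names here are examples, not the real schema code.
interface ProfileMeta {
  name: string;
  jobTitle: string;  // differs per profile: "AI Engineer", "Platform Lead", ...
  url: string;       // canonical URL for this profile
  skills: string[];  // role-specific skill list
}

function personJsonLd(profile: ProfileMeta): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Person",
    name: profile.name,
    jobTitle: profile.jobTitle,
    url: profile.url,
    knowsAbout: profile.skills,
  });
}
```

The resulting string is embedded in a `<script type="application/ld+json">` tag at build time, so crawlers see it without executing any JavaScript.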
The Knowledge Pipeline
Every profile pulls from one base file, data/resume-base.ts, with per-profile YAML layered on top. Flat files over a database: the data doesn't change often, and a flat file is easier to diff, review, and validate than a database row.
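The single-source pattern looks roughly like this. The data values and helper below are illustrative stand-ins for data/resume-base.ts, not its real contents:

```typescript
// Sketch: one base record feeds every profile view.
// `resumeBase` and `buildProfile` are hypothetical names for illustration.
const resumeBase = {
  name: "Tyler Wall",
  roles: [
    { company: "PrizePicks", title: "Principal Engineer", current: true },
    { company: "Calendly", title: "Staff Engineer", current: false },
  ],
  skills: ["TypeScript", "AWS", "RAG pipelines", "Canvas"],
};

// Each profile selects from the same base -- edit once, update everywhere.
function buildProfile(slug: string, keepSkill: (s: string) => boolean) {
  return {
    slug,
    name: resumeBase.name,
    roles: resumeBase.roles,
    skills: resumeBase.skills.filter(keepSkill),
  };
}

const aiEngineer = buildProfile(
  "ai-engineer",
  (s) => s === "TypeScript" || s.includes("RAG"),
);
```

Because every profile is a projection of the same record, a change to the base propagates everywhere at the next build; there is no second copy to fall out of sync.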
The chatbot has its own pipeline. Interview transcripts go through a RAG process to build context that Gemini can reference in real time, and Zod validates every response at runtime. LLM-generated JSON needs validation; without it, one malformed response takes down the chat.
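The portfolio uses Zod for this step; to keep the sketch dependency-free, here is the same idea as a plain TypeScript type guard. The `ChatAnswer` shape and function names are hypothetical:

```typescript
// Sketch: validate LLM-generated JSON at runtime before trusting it.
// The real code uses a Zod schema; this hand-rolled guard shows the idea.
interface ChatAnswer {
  answer: string;
  sources: string[];
}

function isChatAnswer(value: unknown): value is ChatAnswer {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.answer === "string" &&
    Array.isArray(v.sources) &&
    v.sources.every((s) => typeof s === "string")
  );
}

// Parse the raw model output, then reject anything off-schema.
function parseModelOutput(raw: string): ChatAnswer {
  const parsed: unknown = JSON.parse(raw);
  if (!isChatAnswer(parsed)) {
    throw new Error("Model returned JSON that does not match the schema");
  }
  return parsed;
}
```

With Zod the guard collapses to a schema definition plus `safeParse`, and the failure case carries a structured error instead of a thrown string.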
Sixteen profiles, one data source, zero drift.
Testing and Quality
Vitest. Over 1,200 tests across 60+ files. The portfolio is built to the same standard as the enterprise systems on my resume. Type safety with strict TypeScript. Path aliases. Automated builds that catch regressions before they ship.
The entire codebase is open source on GitHub. The infrastructure is itself a portfolio piece — proof that I build production systems, not demos.
The best infrastructure is the kind nobody notices. It just works, every time, for everyone.
See the Infrastructure
Everything described here is running right now. The default profile loads in under a second. The platform lead profile demonstrates the multi-profile system. The AI engineer profile shows how the same data source generates completely different presentations.
The invisible work is the real proof. Anyone can build a portfolio that looks good. Building one that's fast, findable, citable, tested, and open source — that's the job.
In This Series
- One Afternoon, 23 Backgrounds — The 23 canvas engines behind every page
- One Resume Is Not Enough — How YAML drives 16 portfolio variants
- Text Is Not Enough — The profile-aware AI chatbot
- Why Everything Is Glass — The glassmorphism design system
- Ask ChatGPT Who Tyler Wall Is — Infrastructure and AI discoverability
Frequently Asked Questions
What is llms.txt?
A spec from llmstxt.org for making websites readable to AI systems. It's a markdown file served at /llms.txt that describes the site's content, structure, and key facts in a format AI crawlers can parse. Think of it as robots.txt for language models instead of search engines.
Is the portfolio really statically rendered?
Yes. Next.js static export. Every page is pre-built HTML served from CloudFront CDN. No server-side rendering, no Lambda at page load, no database queries. The only dynamic part is the chat API, which runs on a separate Lambda behind API Gateway.
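In Next.js the static export is a config switch. A minimal sketch, with the caveat that the portfolio's actual next.config certainly has more fields than shown:

```typescript
// Minimal next.config sketch for a full static export (illustrative only).
const nextConfig = {
  output: "export",               // pre-render every route to HTML at build time
  images: { unoptimized: true },  // static export can't run the image optimizer
};

export default nextConfig;
```

With `output: "export"`, `next build` emits a directory of plain HTML, CSS, and JS that any CDN can serve; there is no Node process to keep alive.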
How many tests does the portfolio have?
Over 1,200 tests across 60+ files, run with Vitest. The count grows as I add features. Run npx vitest run on the repo to get the current number. I test the portfolio the same way I'd test a production system at work — because that's what it is.