Behind the Scenes
Built with Claude Code
This site was built entirely through conversation with Claude Code. Here's what that process looked like—and what it reveals about working with AI coding agents.
For Everyone
The Big Picture
This project started the way many do—with a folder full of scattered resources. LinkedIn exports, YouTube playlists, GitHub repos, blog posts across multiple platforms, speaking engagement histories. The kind of content that every professional accumulates over years but never has time to organize into anything cohesive.
The difference here is that the folder was shared with Claude Code, an AI coding agent. And instead of manually sorting, formatting, and assembling everything myself, I described what I wanted and let the agent do the work.
Claude Code read through the exported files, figured out their structures, and started building. It retrieved data from seven different platforms—parsing RSS feeds, calling APIs, scraping HTML where no API existed—and normalized everything into a consistent format. When it needed to understand how personal showcase websites are typically structured, it researched patterns online. When I pointed it at a website whose visual style I liked, it studied the design and adapted it.
The entire site—every page, every data pipeline, every deployment configuration—was built through this kind of back-and-forth conversation. I provided direction and judgment. The agent handled implementation at a pace no single developer could match.
The broader point is this: if you collect your resources in one place and give an AI coding agent access, it can analyze, research, and synthesize across all of them—producing websites, documents, presentations, or whatever artifact you need. That's a fundamentally different workflow from building things by hand.
7 — Content sources aggregated automatically
9+ — Pages designed and built through conversation
100% — Code written by Claude Code, directed by a human
For Technical Readers
Under the Hood
Data Engineering
AI coding agents are remarkably good at the unglamorous work of data engineering—retrieving, parsing, transforming, joining, and manipulating data from heterogeneous sources. This site pulls from seven of them, each requiring a different strategy: REST APIs where available, RSS feeds where offered, and HTML scraping where nothing else existed.
Across all seven sources, the agent handled the messy realities of heterogeneous data—malformed dates, missing fields, encoding issues—and produced consistent JSON schemas that the site's pages can consume uniformly.
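To make that concrete, here is a minimal sketch of the kind of normalization involved. The field names are hypothetical, not the site's actual schema; the point is the pattern of coercing messy inputs into one shape:

```javascript
// Hypothetical sketch: coerce a record from any source into one schema.
// Field names here are illustrative; the real scripts define their own.
function normalizeRecord(raw, source) {
  // Dates arrive in many formats (ISO strings, epoch seconds, or garbage);
  // fall back to null rather than letting one bad row break the build.
  let date = null;
  if (raw.date != null) {
    const parsed = new Date(
      typeof raw.date === 'number' ? raw.date * 1000 : raw.date
    );
    if (!Number.isNaN(parsed.getTime())) date = parsed.toISOString();
  }
  return {
    source,                           // which platform this came from
    title: (raw.title ?? '').trim(),  // missing fields become empty strings
    url: raw.url ?? raw.link ?? '',   // different sources use different keys
    date,
  };
}
```

Every page downstream can then assume the same keys exist, whatever the source looked like.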
The Refresh Scripts
Each external data source has its own refresh script: refresh:repos, refresh:youtube, refresh:chariot, refresh:medium, and curate:linkedin. Each uses the right tool for the job: the gh CLI for GitHub repos, yt-dlp for YouTube playlists, cheerio for scraping Chariot Solutions (which has no API), and RSS parsing for Medium. LinkedIn posts come from a manual CSV export that gets filtered by AI-related keywords. All scripts output to raw-data/*.csv; a csv-to-json step then converts them to JSON files in src/data/ for Astro to consume. Claude Code wrote every one of these scripts, choosing the appropriate retrieval strategy for each source.
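Whatever the retrieval strategy, the output step of each script has the same shape: flatten items to CSV rows and write them under raw-data/. A sketch of that shared half, with RFC 4180-style quoting (the field list and file name are assumptions, not the site's exact code):

```javascript
// Hypothetical sketch of a refresh script's shared CSV-output step.
// Escape one CSV field: quote if it contains a comma, quote, or
// newline, and double any embedded quotes (per RFC 4180).
function csvField(value) {
  const s = String(value ?? '');
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// Turn an array of item objects into CSV text with a header row.
function toCsv(items, columns) {
  const header = columns.map(csvField).join(',');
  const rows = items.map((item) =>
    columns.map((col) => csvField(item[col])).join(',')
  );
  return [header, ...rows].join('\n');
}

// A real script would then do something along the lines of:
//   fs.writeFileSync('raw-data/medium.csv', toCsv(items, ['title', 'url', 'date']));
```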
Research and Design
Claude Code didn't just write code. It researched how personal showcase websites are typically structured, laid out, and styled. When I pointed it at astro.build as a style reference, it studied the site's design language—the dark theme, the color system, the spacing—and adapted those patterns into the Tailwind configuration and component library that drives this site. Getting from zero to a deployed, styled website happened remarkably fast because the agent could both research and implement in the same session.
Mastodon Integration
The homepage's "Latest Thoughts" section takes a different approach from the other data sources. Instead of fetching at build time, it loads posts client-side from the Mastodon public API, which supports CORS and requires no authentication. This means the content is always live—no refresh scripts, no rebuild needed.
The widget randomly selects five posts from the ten most recent and displays them in a fading carousel with dot indicators. If the API is unreachable, the section gracefully hides itself.
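That client-side logic can be sketched as follows. The instance host and account ID are placeholders, and the endpoint shown is Mastodon's public accounts/:id/statuses API, which the text above notes is CORS-enabled and unauthenticated:

```javascript
// Hypothetical sketch of the "Latest Thoughts" widget's data logic.
// Pick n distinct posts at random via a partial Fisher-Yates shuffle.
function pickRandom(posts, n) {
  const pool = [...posts];
  for (let i = 0; i < Math.min(n, pool.length); i++) {
    const j = i + Math.floor(Math.random() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, n);
}

// In the browser: fetch the ten most recent public posts, pick five.
// 'mastodon.example' and ACCOUNT_ID are placeholders, not real values.
async function loadLatestThoughts() {
  try {
    const res = await fetch(
      'https://mastodon.example/api/v1/accounts/ACCOUNT_ID/statuses?limit=10'
    );
    if (!res.ok) throw new Error(res.statusText);
    return pickRandom(await res.json(), 5);
  } catch {
    return null; // caller hides the section gracefully
  }
}
```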
Monolog Integration
One of the more satisfying pieces was incorporating Monolog, a microblogging tool I built for myself. I originally vibe-coded it as a personal tool, then evolved it to be agent-friendly, adding informational help output from the CLI, especially around custom templates. Claude Code picked up the tool, understood its interface, built a custom template matching the site's theme, and wired it into the build pipeline. Multiple repos each maintain a posts.md file that gets aggregated and rendered at build time. The fact that an AI agent could adopt a tool I built, understand its conventions, and integrate it cleanly was a strong validation of designing tools with agent consumption in mind.
The Monolog Build Step
The site's build process does more than compile templates. npm run build triggers a build:updates step before Astro's static build kicks in. A Node.js script, build-updates.mjs, reads updates.config.yaml to determine which GitHub repos to pull from. For each repo, it fetches monolog/posts.md via the GitHub API (using gh api for authenticated access), along with any image assets from that repo's monolog directory. Posts are extracted, tagged with their source project, sorted chronologically, and merged into a single Markdown file. The merged file is piped through the Monolog CLI with a custom HTML template to produce the final updates page. This means the Updates page is always current with the latest posts from all tracked projects at every deploy.
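The tag-and-merge step can be sketched like this. The post shape, the sort direction (newest first), and the gh invocation in the comment are assumptions based on the description above, not the script's actual code:

```javascript
// Hypothetical sketch of the merge logic in build-updates.mjs.
// Each repo's monolog/posts.md would be fetched first, e.g. via:
//   gh api repos/OWNER/REPO/contents/monolog/posts.md --jq .content
// (the contents API returns base64, so the result is decoded),
// then parsed into post objects before reaching this function.
function mergePosts(postsByRepo) {
  const all = [];
  for (const [repo, posts] of Object.entries(postsByRepo)) {
    for (const post of posts) {
      // Tag each post with its source project before merging.
      all.push({ ...post, project: repo });
    }
  }
  // Sort chronologically, newest first (direction is an assumption).
  all.sort((a, b) => new Date(b.date) - new Date(a.date));
  return all;
}
```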
Project Structure
The codebase follows a clean separation between data retrieval, data storage, and presentation:
- src/pages/ — Astro file-based routing, one file per page
- src/components/ — Shared components (Header, Footer)
- src/layouts/ — BaseLayout with meta tags, JSON-LD structured data, and shared head content
- src/data/ — JSON data files consumed by pages, plus generated updates HTML
- src/styles/ — Global CSS with Tailwind v4 theme configuration
- scripts/ — Node.js scripts for data retrieval and processing
- raw-data/ — Source CSV files, versioned in Git for traceability
- templates/ — Monolog HTML template for the updates page
The agent chose this organization itself, keeping data retrieval, data storage, and presentation in distinct layers.
Tooling Choices
Claude Code selected each tool based on the specific requirements of each data source and the overall goals of the project:
- Astro — Static site generator that ships zero JavaScript by default, ideal for a content-heavy portfolio
- Tailwind CSS v4 — Utility-first styling with CSS-based theming (no config file needed in v4)
- Netlify — Hosting with Git-based deploys, built-in form handling, and CDN
- cheerio — Lightweight HTML/XML parser, used for scraping sources without APIs and parsing RSS feeds
- csv-parse — Robust CSV parsing for converting raw data exports to JSON
- yt-dlp — YouTube playlist metadata extraction without needing an API key
- gh CLI — Authenticated GitHub API access for fetching repo data and monolog posts
Deployment & Hosting
The site is hosted on Netlify with automatic deploys from Git. The agent configured the @astrojs/netlify adapter in astro.config.mjs and wired up the contact form to use Netlify Forms (data-netlify="true"), so no backend is needed; submissions are managed directly in the Netlify dashboard. The @astrojs/sitemap integration generates the XML sitemap automatically. Deployment is zero-config: push to Git, and Netlify builds and deploys.
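The adapter wiring amounts to a few lines in astro.config.mjs; something along these lines, where the site URL is a placeholder and the exact options may differ from the real config:

```javascript
// Sketch of astro.config.mjs; the site URL is a placeholder.
import { defineConfig } from 'astro/config';
import netlify from '@astrojs/netlify';
import sitemap from '@astrojs/sitemap';

export default defineConfig({
  site: 'https://example.com', // required for sitemap to emit absolute URLs
  adapter: netlify(),          // deploys Astro output to Netlify
  integrations: [sitemap()],   // generates the XML sitemap at build time
});
```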
SEO & Generative Engine Optimization
Claude Code also handled search optimization across two dimensions. For traditional SEO, it audited the site and implemented Open Graph tags, Twitter Cards, canonical URLs, a sitemap, robots.txt, JSON-LD structured data, and semantic HTML. For generative engine optimization (GEO), it structured content so that AI systems like ChatGPT, Perplexity, and Google AI Overviews can parse and surface it more easily, adding an llms.txt file, content freshness signals, and FAQ structured data.
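The FAQ structured data, for instance, is a schema.org FAQPage payload emitted as JSON-LD in the page head. A minimal sketch, where the helper name and the question text are invented for illustration:

```javascript
// Hypothetical sketch: build a schema.org FAQPage JSON-LD payload.
function faqJsonLd(faqs) {
  return {
    '@context': 'https://schema.org',
    '@type': 'FAQPage',
    mainEntity: faqs.map(({ question, answer }) => ({
      '@type': 'Question',
      name: question,
      acceptedAnswer: { '@type': 'Answer', text: answer },
    })),
  };
}

// Rendered into the page head roughly as:
//   <script type="application/ld+json">{JSON.stringify(faqJsonLd(faqs))}</script>
```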
The full rationale, implementation details, and references for every change are documented in SEO_AND_GEO.md.
The Takeaway
The Developer Marketing Problem
Altogether, this project demonstrates something practical: it's now remarkably easy for a developer to market themselves, capture project information, and produce content—without ever leaving their engineering flow. No context-switching to a CMS. No wrestling with a website builder. Just a conversation with an agent that understands your codebase, your data, and your intent.
Interested in how AI agents can accelerate your workflow?