Why I Built This
I built this site over the 2025 holiday break using AI coding agents. I've always wanted a personal website but abandoned multiple attempts over the years. I didn't want it to be boring or laggy, and I kept getting stuck in tutorial hell.
This past year, I started seriously looking into AI tools and LLMs for policy research, both out of personal interest and to help IRPP navigate these technologies. The tools got better, I got better at using them, and experiments became less daunting. Even when experiments don't work out, if you code alongside the agent, you learn more than from tutorials alone.
By December, I'd done enough side projects that I felt ready to try again, not only to settle the score with past attempts but also because I finally had some side projects to put on it.
The Knowledge Base Approach
I was annoyed at how often I needed to type out or collect basic context about myself and my projects. A global text file is too general, and retyping the details by hand is tiring and error-prone.
Inspired by how good models had gotten at tool-calling, I came up with what I thought was a great solution: codify all my information into a JSON database with a strict schema. That way I could have semantic answers based on text and programmatic ones when needed.
This part was easy and has paid off hugely. Two afternoons to collect old CVs and publications. One more for schema design and helper scripts. A few hours for documentation and LLM-aided enrichment via web scraping. The database now powers a custom chatbot that generates detailed context summaries for other projects. And because I'm already logging publications this way, the website updates automatically.
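To make the "strict schema plus programmatic answers" idea concrete, here is a minimal sketch in TypeScript. The entry shape, field names, and query helper are all hypothetical illustrations, not the actual schema:

```typescript
// Hypothetical shape of one knowledge-base entry (not the real schema).
interface KBEntry {
  id: string;
  type: "publication" | "project" | "bio";
  title: string;
  year: number;
  tags: string[];
  summary: string; // free text for semantic answers
}

// Programmatic query: filter by type (and optionally tag), newest first.
// This is the kind of deterministic lookup a tool-calling model can invoke.
function query(db: KBEntry[], type: KBEntry["type"], tag?: string): KBEntry[] {
  return db
    .filter((e) => e.type === type && (tag === undefined || e.tags.includes(tag)))
    .sort((a, b) => b.year - a.year);
}

// Toy data to show the shape in use.
const db: KBEntry[] = [
  { id: "p1", type: "publication", title: "Report A", year: 2023, tags: ["ai"], summary: "..." },
  { id: "p2", type: "publication", title: "Report B", year: 2025, tags: ["policy"], summary: "..." },
];
console.log(query(db, "publication").map((e) => e.title));
```

The point of the split is that free-text fields feed semantic questions ("what does this project do?") while typed fields answer programmatic ones ("list publications since 2024") without relying on the model's memory.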
From Plan to Execution
My original plan was to use this project to apply the AI-assisted workflow I'd been workshopping. I told myself it would be largely human-written. And the first version was, built with targeted tutorials from Claude and Gemini. At that point the site used Astro with minimal components and almost no formatting. I put it aside and started researching design options.
I didn't know what to look for, but I knew I wanted something less polished than a startup website, more bespoke than Squarespace, and kind of weird. I learned about minimalist and brutalist websites, looked at academic examples, collected notes, and commissioned prototypes from all the advanced models I had access to.
It dawned on me that my ambition had outpaced my skills, so I pivoted to my more traditional approach: Build scaffolding → Plan → Execute → Review → Research → Finalize. Projects begin by ensuring context is available, straightforward, and easy to follow: writing project docs, data contracts, verification scripts. Then I break the work into session-sized chunks, execute while keeping a dev log, and review code after each session. When I don't understand something, I look up docs, read tutorials, watch videos.
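A "verification script" here can be as small as a check that data files actually obey the contract before an agent session starts. A hypothetical sketch (the required fields are illustrative, not my actual contract):

```typescript
// Hypothetical data contract: every record must carry these fields.
const REQUIRED = ["id", "title", "year"] as const;

// Returns a list of human-readable violations; an empty array means
// the records satisfy the contract.
function verify(records: Record<string, unknown>[]): string[] {
  const errors: string[] = [];
  records.forEach((rec, i) => {
    for (const field of REQUIRED) {
      if (!(field in rec)) errors.push(`record ${i}: missing "${field}"`);
    }
  });
  return errors;
}
```

Running a check like this at the start of a session gives the agent (and me) a cheap, unambiguous signal that the scaffolding is intact.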
This approach is effective, but it has trade-offs. As the codebase grew, agents stopped getting it right every time. Good structure and documentation helped somewhat, but what mattered more was having—and communicating—a clear, self-contained task.
GitHub-based agents force this discipline: you compartmentalize assignment and review. They're more reliable, but less fun to explore with. I'm still figuring out the right balance.
How It Actually Works
The site was built with Claude (Sonnet and Opus) and Gemini, working through Claude Code in VS Code and Antigravity. The semantic analysis was added midway through and incorporated into the footer visualization and the tagging system.
It's an Astro site - static by default, with Svelte islands for the interactive bits: visualizations, keyboard navigation (press / to search, ? for help), and the footer's keyword map, where hovering an entry lights up its terms positioned by semantic similarity.
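"Positioned by semantic similarity" boils down to comparing keyword embeddings and placing similar terms near each other. A toy version of the similarity computation in TypeScript - the embeddings and the nearest-neighbour step are illustrative, not the site's actual pipeline:

```typescript
// Cosine similarity between two embedding vectors:
// 1.0 means same direction, 0.0 means unrelated (orthogonal).
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Find the most similar term to a target vector - the basic operation
// behind clustering related keywords together in a map like the footer's.
function nearest(target: number[], others: Map<string, number[]>): string {
  let best = "", bestSim = -Infinity;
  for (const [term, vec] of others) {
    const sim = cosine(target, vec);
    if (sim > bestSim) { bestSim = sim; best = term; }
  }
  return best;
}
```

Real layouts typically project high-dimensional embeddings down to 2D (e.g. with PCA or UMAP), but the pairwise similarity is the ingredient that makes hover-highlighting related terms feel coherent.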
I'm currently working on bringing in the PPI chyron (a live feed showing recent policy developments from the Passive Policy Intelligence scanner) and other experiments.