Create Three Spec Files Before Using Claude Code and Codex
I spent a year getting wildly inconsistent results from Claude Code and Codex. Three spec files, each with a distinct role, fixed it.
Simple thoughts on building, designing, and shipping.
Agents writing code is just the start. To review PRs and explain architecture to teammates, you need visualization tools.
Subscribing puts you in the top 0.3%. These five configurations — agents, teams, MCP, monitoring, automation — push you into the top 0.01%.
I classified every term I kept encountering while using Claude Code and Codex daily. Five groups emerged, and they map the entire system these tools run on.
I dug into SDK type definitions and system prompts for both tools. The 29 vs 7 gap isn't about feature count. It's about two fundamentally different answers to the same question: how should an AI coding agent interact with your system?
Someone benchmarked an LLM-written Rust reimplementation of SQLite. The gap between code that looks right and code that is right turned out to be five orders of magnitude.
Four projects shipped in the last two months show what happens when AI agents handle not just coding but earning, orchestrating, and running entire companies.
After a year of agent-assisted development, I found that structured spec files fixed the inconsistency problem better than any prompt technique.
I reverse-engineered how Codex handles context overflow compared to Claude Code. The answer involves AES encryption, session handover patterns, and KV cache tricks.
Shopify CEO Tobi Lütke built QMD, an open-source search engine. Connect it to Claude Code and every session gets persistent memory.
Every connection matters to me.
Feel free to reach out anytime.