Eight Hooks That Guarantee AI Agent Reliability
CLAUDE.md rules get followed about 80% of the time. Hooks get followed 100% of the time. After six months of testing, these are the eight I never removed.
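For context on why hooks fire every time while CLAUDE.md rules don't: a hook is a shell command registered in Claude Code's settings that runs deterministically at a lifecycle event (such as before a tool call), rather than an instruction the model may or may not heed. A minimal sketch, assuming Claude Code's `hooks` settings format — the `Bash` matcher and the script path are illustrative placeholders, not one of the article's eight:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/pre-bash-check.sh"
          }
        ]
      }
    ]
  }
}
```

Because the command runs unconditionally whenever the matcher fires, enforcement doesn't depend on the model remembering a rule.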