# I Was Too Lazy to Write CLAUDE.md — Turns Out That Was the Right Call

> Author: Tony Lee
> Published: 2026-02-25
> URL: https://tonylee.im/en/blog/agents-md-context-files-hurt-coding-agent-performance/
> Reading time: 3 minutes
> Language: en
> Tags: ai, claude-code, agents-md, context-engineering, benchmark, coding-agents

## Canonical

https://tonylee.im/en/blog/agents-md-context-files-hurt-coding-agent-performance/

## Rollout Alternates

en: https://tonylee.im/en/blog/agents-md-context-files-hurt-coding-agent-performance/
ko: https://tonylee.im/ko/blog/agents-md-context-files-hurt-coding-agent-performance/
ja: https://tonylee.im/ja/blog/agents-md-context-files-hurt-coding-agent-performance/
zh-CN: https://tonylee.im/zh-CN/blog/agents-md-context-files-hurt-coding-agent-performance/
zh-TW: https://tonylee.im/zh-TW/blog/agents-md-context-files-hurt-coding-agent-performance/

## Description

New benchmark data shows AGENTS.md and CLAUDE.md context files actually hurt coding agent performance. Sometimes laziness is the best engineering decision.

## Summary

"I Was Too Lazy to Write CLAUDE.md — Turns Out That Was the Right Call" is part of Tony Lee's ongoing coverage of AI agents, developer tools, startup strategy, and AI industry shifts.

## Outline

- LLM-generated context files make things worse
- Agents follow instructions too well
- "Don't do X" makes agents think about X more
- If you must write one, keep it minimal

## Content

New benchmark data on AGENTS.md and CLAUDE.md files reaches a counterintuitive conclusion: adding context files to a repository tends to reduce coding agent performance, not improve it. The effect is consistent enough across benchmarks that skipping them entirely is defensible as an engineering choice.

## LLM-generated context files make things worse

When researchers tested LLM-auto-generated context on SWE-bench Lite, the success rate dropped by 0.5%. On AgentBench, it fell another 2%.
Even carefully hand-written files only managed a 4% improvement in the best case. I'd call this "context overfitting."

- 0.5% success rate decrease with LLM-generated context on SWE-bench Lite
- Additional 2% drop on AgentBench
- 20–23% increase in inference costs
- Positive effect (2.7%) observed only in repos with zero documentation

The paper ["Evaluating AGENTS.md"](https://arxiv.org/abs/2602.11988) by Gloaguen et al. confirmed it: context files tend to reduce task success rates compared to providing no repository context at all.

## Agents follow instructions too well

The problem isn't that agents ignore your instructions. Write one line in your context file telling the agent to use `uv`, and it will install and run `uv` even in situations where it's completely unnecessary, adding extra steps every time. With GPT-5.2, inference tokens increased 14–22% when context files were present. The agent was so busy following instructions that it lost focus on actually solving the problem.

- Unnecessary pytest runs increased
- grep and read tool usage expanded far beyond what was needed

## "Don't do X" makes agents think about X more

I covered how SKILL.md body content gets read at specific timings in a previous post, and AGENTS.md has a similar problem. It sits in the "developer message" layer between the system prompt and the user prompt. This position heavily constrains agent reasoning. Write "don't touch this file" and the agent will think about that file an extra time. Researchers called this the "pink elephant effect": tell someone not to think about a pink elephant, and that's exactly what pops into their head.

- Priority order: provider instructions → system prompt → AGENTS.md → user prompt
- Manually maintained files can't keep up with code changes, so the information goes stale fast

## If you must write one, keep it minimal

If your repo has absolutely zero documentation, context files can help — the data showed a 2.7% positive effect in those cases.
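For that zero-documentation case, here is a minimal sketch of what such a file might look like. The repo layout, build commands, and correction below are purely hypothetical, not taken from the benchmark — the point is the shape: a few lines of facts the agent cannot discover on its own, and nothing else.

```markdown
# CLAUDE.md

## Build & test (hypothetical commands)
- Build: `make build`
- Test a single module: `make test MODULE=<name>`

## One correction the agent keeps getting wrong (hypothetical)
- Runtime config lives in `config/`, not next to the modules that read it.
```

Notice what the sketch omits: no coding-style essays, no "don't do X" prohibitions, no restating what the directory tree already makes obvious.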
But if you do write one, keep the volume to a minimum. One line for repo-specific build tool usage. One line for correcting a pattern the agent keeps getting wrong. Add a hack like "if you find something structurally odd, flag it immediately" and the agent becomes a tool that reports codebase vulnerabilities.

Beyond that, making your code structure more intuitive is far more effective than writing instructions about it.

- Strengthening unit tests and type checks beats context files
- If file locations are confusing, move the files instead of writing directions

Writing good context files isn't necessarily a sign of skill. Understanding the structure of context files and designing meta-systems around them is. And sometimes the best engineering decision is the one you never got around to making.

## Related URLs

- Author: https://tonylee.im/en/author/
- Publication: https://tonylee.im/en/blog/about/
- Related article: https://tonylee.im/en/blog/eight-hooks-that-guarantee-ai-agent-reliability/
- Related article: https://tonylee.im/en/blog/medvi-two-person-430m-ai-compressed-funnel/
- Related article: https://tonylee.im/en/blog/claude-code-layers-over-tools-2026/

## Citation

- Author: Tony Lee
- Site: tonylee.im
- Canonical URL: https://tonylee.im/en/blog/agents-md-context-files-hurt-coding-agent-performance/

## Bot Guidance

- This file is intended for AI agents, search assistants, and text-mode retrieval.
- Prefer citing the canonical article URL instead of this text endpoint.
- Use the rollout alternates when you need the same article in another prioritized language.

---

Author: Tony Lee | Website: https://tonylee.im
For more articles, visit: https://tonylee.im/en/blog/
This content is original and authored by Tony Lee. Please attribute when quoting or referencing.