Four Contexts That Decide Whether AI Helps or Wastes Your Time
I spent a weekend stuffing 100MB of PDFs into an agent. Performance got worse. Mapping what I was feeding into four categories finally showed me why.
I spent an entire weekend feeding 100MB of PDFs into an agent. The assumption was simple: more knowledge fed to the agent means better output. I was wrong.
After days of frustration, I drew a graph splitting everything I had been feeding into four categories. The problem became obvious. Volume was never the issue. The type of context was.
Feeding the model what it already knows makes things worse
LLMs are trained on trillions of tokens. When you paste information the model already learned into a prompt, those redundant tokens occupy space in the context window and dilute attention away from what actually matters. The information you added to help the model ends up constraining it.
I tested this directly. Stuffing Python syntax and basic React patterns into prompts caused the model to conflict with its own training, producing stranger output than with no context at all. Stack enough of this redundant information and you get context rot, where the model’s responses degrade progressively. The intuition that “more input equals smarter output” is the most dangerous trap in prompt engineering.
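The budget problem is easy to demonstrate. Here is a rough sketch (my own illustration, not from the original experiment) using the common ~4-characters-per-token rule of thumb to show how quickly pasted reference material eats a context window:

```python
# Rough illustration: estimate how much of a context window pasted
# reference material consumes. Uses the ~4 chars/token heuristic;
# real tokenizers vary, so treat these numbers as order-of-magnitude.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def context_budget(chunks: dict[str, str], window: int = 128_000) -> dict:
    used = {name: estimate_tokens(text) for name, text in chunks.items()}
    total = sum(used.values())
    return {"per_chunk": used, "total": total, "fraction": total / window}

# Redundant knowledge (Python syntax the model already has) versus
# environment context (something it cannot possibly know).
report = context_budget({
    "python_syntax_notes": "def f(x):\n    return x + 1\n" * 2000,
    "project_conventions": "All services log JSON to stdout.\n" * 40,
})
```

Almost the entire budget here goes to syntax the model already knows; the one chunk it genuinely needs is a rounding error by comparison.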
Environment context is the only type the model cannot infer
Project directory structure, team conventions, internal API schemas. None of this exists in training data, and the model has zero way to reason about it without explicit input. This category is where context genuinely earns its place.
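As a concrete illustration of what "explicit input" can look like, here is a minimal sketch (the function names and layout format are my own, not from any particular tool) that packages a project's directory outline and team conventions into one prompt block:

```python
from pathlib import Path

def directory_outline(root: str, max_depth: int = 2) -> str:
    """List the project tree down to max_depth, one entry per line."""
    root_path = Path(root)
    lines = []
    for path in sorted(root_path.rglob("*")):
        depth = len(path.relative_to(root_path).parts)
        if depth <= max_depth:
            indent = "  " * (depth - 1)
            suffix = "/" if path.is_dir() else ""
            lines.append(f"{indent}{path.name}{suffix}")
    return "\n".join(lines)

def environment_block(root: str, conventions: list[str]) -> str:
    """Assemble environment context the model cannot infer on its own."""
    bullets = "\n".join(f"- {c}" for c in conventions)
    return (
        f"## Project layout\n{directory_outline(root)}\n\n"
        f"## Team conventions\n{bullets}"
    )
```

The point is not the formatting: it is that every line in this block is information that exists nowhere in training data.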
The tooling around capturing environment context is evolving faster than any other area right now. Document OCR efforts are happening simultaneously across continents: Upstage and Korea Deep Learning in Korea, Mistral in France, Sarvam in India, and Baidu, Zhipu, DeepSeek, and even Xiaohongshu in China. Voice, once the most volatile medium, is being captured too. Meeting note tools like Granola preserve conversations that previously vanished the moment a call ended. Typeless, Wispr Flow, and Willow convert thoughts into text in real time. Browser activity and ambient visual input, the things you glance at without thinking, are already becoming structured context.
The shift is clear: information that used to evaporate is now being converted into something models can use.
The gap between knowing and executing is where people diverge
Environment context tells the model what exists. Skills tell it how to do things, in what order, and to what standard. Anyone can store and verify knowledge. But once you add structured execution, defining sequences based on reasoning, the gap between people starts to widen.
A good skill definition is not a simple instruction list. It contains six things: discipline, a definition of "done," task decomposition, defect-patching methods, anti-patterns, and environment adaptation. Cramming every task into one skill guarantees failure. Breaking work into granular skills and composing them through workflow files like AGENTS.md is what lets agents move flexibly. Even rough hint-level notes can be converted into skills instantly with tools like /skill-creator.
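One way to make those six parts concrete is a structured schema. This is a sketch under my own naming, not a standard format, and the `to_prompt` rendering is just one plausible way to emit a skill into an AGENTS.md-style workflow file:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """Illustrative container for the six components of a skill definition."""
    name: str
    discipline: str                   # ground rules the agent must follow
    definition_of_done: list[str]     # verifiable completion criteria
    task_decomposition: list[str]     # ordered sub-steps
    defect_patching: list[str]        # how to recover when a step fails
    anti_patterns: list[str]          # behaviors to explicitly avoid
    environment_adaptation: dict = field(default_factory=dict)

    def to_prompt(self) -> str:
        """Render the skill as a markdown block for a workflow file."""
        done = "\n".join(f"- {c}" for c in self.definition_of_done)
        steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(self.task_decomposition))
        patch = "\n".join(f"- {p}" for p in self.defect_patching)
        avoid = "\n".join(f"- {a}" for a in self.anti_patterns)
        return (
            f"## Skill: {self.name}\n\n{self.discipline}\n\n"
            f"### Done when\n{done}\n\n### Steps\n{steps}\n\n"
            f"### On failure\n{patch}\n\n### Avoid\n{avoid}"
        )
```

Granular skills structured like this compose cleanly: a workflow file just lists which skills apply and in what order, instead of repeating the details.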
The design perspective matters most here. Saving intermediate files, analyzing before executing, defining verification criteria: these decisions determine whether an agent succeeds or fails. Preferring scripts over MCP is a lesson I learned through production use, not theory. And skills sharpen with use. Give the agent comparison examples and it optimizes its own execution.
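The "define verification criteria" decision can be sketched in a few lines (the step and checks below are illustrative assumptions, not from my actual setup): a step only counts as done when every explicit check passes.

```python
def run_step(execute, checks: dict) -> dict:
    """Execute a step, then verify its output against explicit criteria."""
    output = execute()
    failures = [name for name, check in checks.items() if not check(output)]
    return {"output": output, "passed": not failures, "failures": failures}

# Illustrative step: produce a report artifact, then gate on two checks.
result = run_step(
    execute=lambda: {"path": "out/report.md", "rows": 120},
    checks={
        "saved_intermediate": lambda o: o["path"].endswith(".md"),
        "enough_rows": lambda o: o["rows"] >= 100,
    },
)
```

The named failures are what make defect patching possible: the agent can re-run only the check that failed instead of redoing the whole task.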
I will admit that getting skill design right took me longer than I expected. My first few attempts were either too broad (the agent ignored half the instructions) or too rigid (it couldn’t adapt to slight variations in the task). The sweet spot, specific enough to guide but loose enough to flex, took real iteration to find.
Intent and taste are why identical setups produce different results
Over ten years of watching people work, one pattern keeps showing up. Collecting and verifying knowledge is something everyone does. General knowledge is now something AI holds in greater volume than any human. Skills accumulate through repetition. Yet people using the exact same model still produce wildly different results.
Look at vibe coding output. Some people’s work triggers “how did you make this?” reactions. Others get silence. The difference sits between someone who accepts default AI aesthetics and someone who pushes for a specific vision. Catching information quickly and filtering it through a particular intent are two completely different abilities. The second requires considering the audience’s perspective and the full surrounding context, a higher-order kind of thinking.
The model does not know what you want. You have to be able to express it. This is why taste outweighs knowledge in the AI era.
The harder something is to automate, the more valuable the human behind it
General knowledge is already owned by AI. Adding more of it to prompts actively hurts. Environment context is being captured by OCR and voice tools at increasing speed. Skills can be built through repetition and structure, then delegated to agents. Intent and taste remain the only category that resists automation entirely.
Gathering AI tools and information matters. But the real leverage is not there. Instead of packing more context into prompts, the better move is knowing what you want with greater precision. Your value in the AI era lives in your taste.