Inside the $3.6B Secret Behind Manus: Why AI Agents Actually Fail
Meta acquired Manus for $3.6 billion. The secret wasn't a bigger model - it was context engineering. Here's what most AI agents get wrong.
Meta recently acquired Manus for roughly $3.6 billion. Manus was reliably processing millions of conversations per day - and the reason wasn’t a bigger model or a longer context window. It was an entirely different approach called context engineering.
Manus, as the leading general AI agent platform, had been emphasizing the importance of context engineering from the start. They even published detailed technical blog posts on the topic. Here’s a breakdown of what they figured out - and why it matters for anyone building or using AI agents today.
The Moment AI Starts Lying
Give an AI agent the task of researching 50 companies. By around the 8th or 9th item, it quietly stops doing real research and starts generating plausible-sounding content from nothing.
Manus calls this the fabrication threshold.
The problem is that these fabricated outputs are sophisticated enough that no human would catch them without manually verifying each one. At that point, the entire premise of automation collapses.
Bigger Memory Is Not the Answer
The intuitive fix is to expand the context window. In practice, that creates more problems than it solves.
- Lost in the middle: AI retains the beginning and end of long conversations but loses track of what’s in between.
- Exponential cost: Processing massive contexts is disproportionately expensive and slow.
- Cognitive ceiling: A single model cannot manage dozens of independent tasks simultaneously.
- Training bias: Models trained on short conversations rush toward premature summaries when given long inputs.
Manus didn’t try to patch these issues. They redesigned the architecture entirely.
Instead of one giant assistant, a main controller decomposes tasks and dispatches hundreds of sub-agents in parallel. Each sub-agent starts with a fresh, empty context and handles exactly one task. This is the same technique that experienced vibe coders use instinctively.
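The pattern above can be sketched in a few lines. This is a hypothetical illustration, not Manus's actual code: `call_llm` stands in for any chat-completion API, and the key detail is that each sub-agent's message list starts empty except for its single task.

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a real model call; echoes the task for demonstration."""
    return f"result for: {messages[-1]['content']}"

def run_subagent(task: str) -> str:
    # Fresh, empty context: a system prompt plus exactly one task.
    # Nothing from the controller's own history leaks in.
    messages = [
        {"role": "system", "content": "You handle exactly one task."},
        {"role": "user", "content": task},
    ]
    return call_llm(messages)

def controller(tasks: list[str]) -> list[str]:
    # The controller only decomposes work and collects results;
    # sub-agents run in parallel, each with its own clean context.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subagent, tasks))

results = controller([f"research company #{i}" for i in range(1, 4)])
```

Because each worker sees only one item, the 8th or 9th company gets the same fresh attention as the 1st - the fabrication threshold never comes into play.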
Don’t Hide Your Mistakes
The most counterintuitive finding: never erase failures and error traces from the context.
When an agent can see its own mistakes and error messages, it avoids repeating them. Removing errors from context is removing the opportunity to learn.
Genuine agentic behavior isn’t about being perfect on the first try. It’s about the ability to recover from errors.
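A minimal sketch of this idea, with an assumed `attempt` placeholder standing in for a model-driven tool call: on failure, the error trace is appended to the running history instead of being erased, so the next attempt can condition on it.

```python
def attempt(task: str, history: list[dict]) -> str:
    # Placeholder tool call: it "succeeds" only once at least one
    # error message is visible in its history, mimicking a model
    # that adapts after seeing its own failure.
    if not any(m["role"] == "error" for m in history):
        raise ValueError("rate limit exceeded")
    return f"done: {task}"

def run_with_error_memory(task: str, max_tries: int = 3):
    history = [{"role": "user", "content": task}]
    for _ in range(max_tries):
        try:
            return attempt(task, history), history
        except Exception as exc:
            # Keep the failure in context - never reset `history`.
            history.append({"role": "error", "content": str(exc)})
    return None, history

result, history = run_with_error_memory("fetch pricing page")
```

The tempting alternative - resetting `history` on each retry - would make every attempt identical to the first.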
The File System Is the Real Memory
Instead of relying on the model’s volatile memory, Manus uses the file system as the ultimate context store. Manus appears to have identified this pattern before Claude Skills launched - their blog post from July predates the feature.
The agent writes information to files the way a person takes notes, then reads them back when needed. Full web pages get saved, then compressed to just a URL with a restoration path - achieving effectively unlimited memory with zero information loss.
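A toy version of that save-and-restore cycle might look like the following (the `agent_memory` directory and the stub format are illustrative assumptions, not Manus internals):

```python
import hashlib
from pathlib import Path

STORE = Path("agent_memory")
STORE.mkdir(exist_ok=True)

def save_page(url: str, html: str) -> dict:
    # Write the full page to disk under a stable, URL-derived name.
    name = hashlib.sha256(url.encode()).hexdigest()[:16] + ".html"
    path = STORE / name
    path.write_text(html)
    # Only this small stub stays in the model's context window.
    return {"url": url, "restore_path": str(path)}

def restore_page(stub: dict) -> str:
    # Re-read the full content on demand - nothing was actually lost.
    return Path(stub["restore_path"]).read_text()

stub = save_page("https://example.com", "<html>full page text</html>")
```

The context holds a few dozen bytes per page instead of the page itself, yet the original remains one file read away.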
The AI That Talks to Itself
During complex tasks, Manus agents create and continuously update a todo.md file.
In tasks that average 50 tool calls, the agent keeps rewriting its objectives, pushing the global plan to the very end of the context. This ensures that primary goals always sit within the model’s most recent attention window.
It’s a psychological hack that maintains focus without any complex architectural changes - just structured self-recitation.
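The recitation loop reduces to a small helper. This is a sketch under the assumption that the context is a list of text chunks; after each tool call, the refreshed plan is appended at the very end:

```python
def recite(context: list[str], todos: list[tuple[str, bool]]) -> list[str]:
    # Rebuild todo.md from the current task state.
    lines = ["# todo.md"] + [
        f"- [{'x' if done else ' '}] {item}" for item, done in todos
    ]
    # Append the refreshed plan at the very end of the context,
    # inside the model's most recent attention window.
    return context + ["\n".join(lines)]

context = ["...earlier tool calls and observations..."]
todos = [("research company A", True), ("research company B", False)]
context = recite(context, todos)
```

After 50 tool calls, the global plan has been restated 50 times - and it is always the last thing the model reads before acting.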
Whoever Masters Context Masters the Agent
The reason Meta paid $3.6 billion is clear.
The secret to powerful AI agents was never model size or context window length. It was acknowledging fundamental limitations and engineering around them - the discipline Manus calls context engineering.
The shift is from a single massive assistant to a coordinated army of workers - and the future of AI agents depends on how well you can engineer one context at a time.