Updated Feb 18, 2026

Meta's $2.5B Manus Acquisition - Its Core Tech Is Now Open Source

The file-based memory system behind Manus's $2.5 billion valuation is now a free Claude Code skill. Here's why it matters for every AI agent builder.

If you’ve used AI agents for complex tasks, you’ve seen this: halfway through a long workflow, the agent suddenly starts doing something completely unrelated to your original request.

This isn’t a user error. It’s a structural limitation of large language models. And the company that solved it - Manus - was acquired by Meta for $2.5 billion. Now a developer has released the core principle as an open-source Claude Code skill, hitting nearly 1,000 GitHub stars within three days.

The Root Problem - Why AI Agents Forget Their Goals

LLMs operate within a context window - a fixed-size working memory.

  • As conversations grow longer, the original goal gets pushed out of the model’s active attention
  • Critical information fades beyond the attention mechanism’s effective range
  • The agent gradually drifts away from the initial request

This phenomenon is called goal drift. Once a task exceeds roughly 50 tool calls, some degree of drift becomes nearly inevitable.

Manus’s Solution - The File System as External Memory

Manus’s answer was surprisingly simple: make the AI take notes.

  • Use the file system as a persistent memory store for the agent
  • Bypass the physical limits of the context window entirely
  • Retrieve stored information on demand whenever the agent needs it

This approach is one form of context engineering - designing how information flows in and out of an LLM’s working memory.
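The idea can be reduced to two operations: write facts to disk as they arrive, and read them back on demand. Here is a minimal sketch of that pattern; the `remember` and `recall` helpers and the `notes.md` filename convention follow the skill's spirit but are illustrative, not part of any official API.

```python
from pathlib import Path

# Persistent memory lives on disk, not in the context window.
MEMORY = Path("notes.md")

def remember(fact: str) -> None:
    """Append a fact to the note file instead of keeping it in context."""
    with MEMORY.open("a", encoding="utf-8") as f:
        f.write(f"- {fact}\n")

def recall() -> str:
    """Load the stored notes back whenever the agent needs them."""
    return MEMORY.read_text(encoding="utf-8") if MEMORY.exists() else ""
```

Because the file outlives any single prompt, nothing the agent "remembers" this way can be pushed out of the context window.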

The Open-Source Implementation - A 3-File Memory System

The Claude Code skill called planning-with-files implements Manus’s principle using three markdown files.

  • task_plan.md - The master plan containing goals, progress steps, and error logs. The agent is instructed to read this file before every major decision
  • notes.md - A scratch pad for research results and intermediate data. Prevents context window overload
  • [deliverable].md - The final output file where completed work accumulates

The beauty is in the simplicity. No custom infrastructure, no database - just markdown files on disk.
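The three-file layout above can be scaffolded in a few lines. This sketch uses hypothetical templates (section names like "Errors" and the placeholder `report.md` for `[deliverable].md` are my assumptions, not the skill's exact contents):

```python
from pathlib import Path

# Hypothetical starter templates mirroring the skill's three-file layout.
TEMPLATES = {
    "task_plan.md": "# Task Plan\n\n## Goal\n\n## Steps\n\n## Errors\n",
    "notes.md": "# Notes\n",
    "report.md": "# Deliverable\n",  # stands in for [deliverable].md
}

def scaffold(workdir: str = ".") -> None:
    """Create the three memory files if they don't already exist."""
    for name, template in TEMPLATES.items():
        path = Path(workdir) / name
        if not path.exists():
            path.write_text(template, encoding="utf-8")
```

That is the entire "infrastructure": three text files any tool, editor, or human can inspect.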

The Core Mechanism - Re-Read the Plan Before Every Decision

The most important rule in this system is one sentence:

“Before any major decision, read the plan file.”

  • The LLM’s attention mechanism responds most strongly to the most recently ingested tokens
  • Reading task_plan.md right before a decision puts the original goal back among the most recently ingested tokens, where attention is strongest
  • This solves the problem not by expanding the context window, but by optimizing information placement within it

A longer context window is brute force. Strategic information positioning is engineering.
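The placement trick can be made concrete: inject the plan at the end of the prompt, immediately before the decision, so it sits in the freshest token positions. A minimal sketch (the `build_prompt` helper and its wording are assumptions for illustration):

```python
from pathlib import Path

PLAN = Path("task_plan.md")

def build_prompt(decision: str) -> str:
    """Place the plan right before the decision, where the most recently
    ingested tokens receive the strongest attention."""
    plan = PLAN.read_text(encoding="utf-8") if PLAN.exists() else "(no plan yet)"
    return (
        f"Current plan:\n{plan}\n\n"
        f"Next decision: {decision}\n"
        "Check the plan above before acting."
    )
```

The same goal text, positioned fifty messages earlier, would be largely ignored; positioned here, it dominates the model's attention.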

Error Handling - Breaking the Infinite Retry Loop

The second critical design choice is forced error logging.

  • When an error occurs, the agent must record it in the error section of task_plan.md
  • This forces the AI to explicitly acknowledge failures instead of silently retrying
  • The agent is guided toward plan revision rather than repeating the same mistake
  • Debug logs accumulate automatically as a side effect

Without this, agents tend to slam into the same wall repeatedly - burning tokens and context without making progress.
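Forced error logging can be sketched as two helpers: one that appends each failure to the plan file, and one that checks whether the same failure has already been logged. Both function names and the log format are hypothetical:

```python
from pathlib import Path

PLAN = Path("task_plan.md")

def log_error(step: str, error: str) -> None:
    """Make the agent acknowledge a failure by appending it to the plan
    file before any retry is attempted."""
    with PLAN.open("a", encoding="utf-8") as f:
        f.write(f"- step '{step}' failed: {error}\n")

def seen_before(step: str, error: str) -> bool:
    """Return True if this exact failure was already logged - a signal to
    revise the plan instead of retrying blindly."""
    if not PLAN.exists():
        return False
    return f"- step '{step}' failed: {error}" in PLAN.read_text(encoding="utf-8")
```

A repeated entry in the log is exactly the "same wall" signal: when `seen_before` returns True, the right move is a plan revision, not another retry.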

What This Means - A New Baseline for Agent Performance

The reason this hit nearly 1,000 stars in three days is clear: a multi-billion-dollar architectural insight is now accessible to anyone with a terminal.

The deeper lesson is that AI agent performance isn’t determined by model size or parameter count. It’s determined by memory architecture design - how you structure the flow of information around the model’s limitations.

The best agents aren’t the ones with the biggest brains. They’re the ones that know how to take notes.

Link: planning-with-files on GitHub
