Claude Code Just Reset Everyone's Weekly Limits to Zero — Here's What Happened
Quick take
A race condition between Auto Memory and context compaction in Claude Code v2.1.59–v2.1.61 broke prompt caching and corrupted sessions. Anthropic reset all weekly limits as compensation.
Your Claude Code weekly usage just got reset to zero. The whole quota, wiped clean and refilled.
I checked my usage earlier and did a double take. A weekly quota I had burned through over 80% of was suddenly back to zero. Turns out it was bug compensation.
Thariq, who leads Claude Code at Anthropic, posted an announcement about an hour ago: they reset weekly limits for all users. The backstory is worth understanding.
What Happened
Starting with v2.1.59, Claude Code shipped an Auto Memory feature. The problem? This feature and the existing context compaction system both tried to read and write to the same conversation store at the same time. The collision broke prompt caching, causing abnormally fast token consumption.
- Affected versions: v2.1.59 through v2.1.61
- Hotfix shipped: v2.1.62
- Resolution: full weekly limit reset for all users
The Symptoms Were Serious
This wasn’t just limits draining faster than expected. Users reported mid-session context corruption — earlier parts of conversations getting truncated or, worse, fragments from completely different sessions bleeding into the current one.
The `/compact` command itself produced broken results when Auto Memory was enabled. Think of it as "context entanglement": the AI started confusing the current conversation with past ones.
- Early parts of conversations vanishing mid-session
- Fragments from previous sessions appearing in the current one
- Message boundaries breaking when Auto Memory and Auto Compaction ran simultaneously
- Similar compaction bugs had appeared before in v2.1.47, v2.1.21, and v2.1.14
Root Cause: A Race Condition
The Auto Memory system and the context compaction logic were accessing the same message store concurrently. When their timing drifted, the auto-save mechanism would overwrite current state with stale data. This is a classic race condition, and the fact that similar compaction bugs keep resurfacing suggests an architectural issue rather than a one-off bug.
- Core issue: concurrent read/write conflict (race condition)
- Compounding factor: auto-save overwriting live state with outdated data
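To make the failure mode concrete, here's a minimal sketch of the stale-overwrite pattern described above. This is my own illustration of the general read-modify-write race, not Anthropic's actual code; the store layout, message names, and `compact` function are all hypothetical.

```python
# Hypothetical sketch: two subsystems do read-modify-write on one
# shared conversation store without coordination.
store = {"messages": ["m1", "m2", "m3", "m4"]}

def compact(snapshot):
    # Summarize older messages into one entry, keep the tail.
    return ["summary(m1..m3)"] + snapshot[-1:]

# 1. "Compaction" reads a snapshot of the store.
snapshot = list(store["messages"])

# 2. Meanwhile, "auto memory" appends a new message.
store["messages"].append("m5")

# 3. Compaction writes back a result computed from the stale snapshot.
store["messages"] = compact(snapshot)

# "m5" is gone: live state was overwritten with data derived from
# a snapshot taken before the concurrent write.
assert "m5" not in store["messages"]
```

The standard fixes are the boring ones: hold a lock across the whole read-modify-write, or version the store and reject a write-back whose snapshot version no longer matches (compare-and-swap).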
Prompt Caching Breaks More Easily Than You Think
Here’s the part worth paying attention to if you build with AI agents.
When prompt caching fails in an agent-based coding tool, both cost and speed collapse simultaneously. Anthropic themselves have acknowledged that “caching regresses surprisingly easily.” If you don’t design your agent architecture with cache stability in mind from the start, incidents like this will keep happening.
- Broken caching means 2–3x token consumption for the same work
- Locking down cache paths at the agent design stage isn’t optional — it’s essential
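The fragility is easier to see with a toy model. Prompt caches typically key on an exact prefix of the conversation, so rewriting anything early in the history invalidates everything cached after that point. The sketch below is my own simplification, not how Anthropic's cache actually works; `tokens_charged` and the flat per-message cost are invented for illustration.

```python
# Toy prefix cache: a conversation prefix is reusable only if it
# matches a previously-seen prefix byte-for-byte.
import hashlib

cache = set()

def prefix_key(messages):
    return hashlib.sha256("\x1e".join(messages).encode()).hexdigest()

def tokens_charged(messages, cost_per_msg=100):
    # Pay only for messages past the longest already-cached prefix.
    cached = 0
    for i in range(len(messages), 0, -1):
        if prefix_key(messages[:i]) in cache:
            cached = i
            break
    for i in range(1, len(messages) + 1):
        cache.add(prefix_key(messages[:i]))
    return cost_per_msg * (len(messages) - cached)

cost_cold = tokens_charged(["sys", "u1", "a1", "u2"])
# Cold start: nothing cached, pay for all 4 messages -> 400

cost_warm = tokens_charged(["sys", "u1", "a1", "u2", "a2", "u3"])
# Stable history: 4-message prefix reused, pay for 2 new ones -> 200

# Now a background process rewrites an early message, the way the
# buggy compaction did:
cost_broken = tokens_charged(
    ["sys", "SUMMARY", "a1", "u2", "a2", "u3", "a3", "u4"])
# Only the 1-message prefix still matches -> 700, nearly a cold start
```

One silent mutation near the top of the history and you're back to paying full price on every turn, which is exactly why the broken compaction showed up as runaway token consumption rather than a crash.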
What You Should Do Now
Tools are improving fast, but they’re also breaking fast. When you see an update notification, resist the urge to click immediately — check the patch notes first.
Your immediate action item: run `claude update` and verify you're on v2.1.62 or later.
Enjoy your freshly reset limits. Free tokens don’t come around often.