After using OpenCode extensively for some time, I found that incremental tinkering and patching could no longer meet my needs.
I had run into a fundamental architectural flaw: for vibe coding, a linear Chat History is not an asset; it is a burden.
The Pain Point: Stream Cannot Carry State
Current interaction models habitually dump the System Prompt + User History into the model all at once. As the number of dialogue turns increases, the drawbacks become glaringly obvious:
- Extremely Low Signal-to-Noise Ratio: The history is filled with trial-and-error, error messages, and code that has already been corrected. The AI must waste massive amounts of Attention searching for signals amidst the noise.
- Context Pollution: Old, erroneous lines of thought linger in the Context, causing the AI to repeatedly “go in circles” (get stuck in loops).
- Token Waste: To modify a single function, one may be forced to carry along ten rounds of irrelevant chit-chat.
Dialogue is a Stream, but programming essentially maintains a State.
Attempting to use a linear, ephemeral dialogue stream to maintain the complex, structured state of a project is likely the biggest architectural bottleneck in current vibe coding tools.
The Breakthrough: Manager-Worker Dual-Layer Architecture
To solve this problem, I am experimenting with a new architecture. Its core lies in decoupling the AI’s functions into a Manager Agent (The Brain/Project Manager) and an Execution Agent (The Hand/Engineer), and thoroughly abandoning “Chat History” as the core context.
In this architecture, context is no longer a messy stew but is strictly managed in layers.
Architecture Data Flow Diagram
```
Manager Agent (Brain)
├── Maintains: Level 1 Project Global State (Persisted)
└── Action: Distill
        │
        ▼
Execution Agent (Hand)
├── Receives: Level 2 Task Context (Minimal Complete Set)
├── Runs: Level 3 Execution Context (Dynamic Growth)
│   └── Action: Query on Demand (LSP/File Read)
└── Output: Code Changes -> Sync back to Manager
```
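The data flow above can be sketched as plain types. Every name here (`ProjectState`, `TaskContext`, `distill`, the keyword filter) is an illustrative assumption, not an OpenCode API:

```typescript
// Illustrative sketch of the dual-layer data flow; all names are hypothetical.

// Level 1: persisted global state held by the Manager.
interface ProjectState {
  fileTree: string[];       // e.g. ["src/auth.ts", "src/user.ts"]
  techStack: string[];
  requirements: string[];
}

// Level 2: the minimal, disposable brief the Manager hands to a Worker.
interface TaskContext {
  instruction: string;
  relevantFiles: string[];  // only what this specific task needs
}

// Distill: the Manager crops Level 2 out of Level 1. A crude keyword filter
// stands in for whatever real relevance logic the Manager would apply.
function distill(state: ProjectState, instruction: string, keywords: string[]): TaskContext {
  const relevantFiles = state.fileTree.filter(f => keywords.some(k => f.includes(k)));
  return { instruction, relevantFiles };
}
```

The point of the sketch is the direction of flow: the Worker never sees `ProjectState`, only the distilled `TaskContext`.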
Core Innovation: The Three-Level Context System
We divide context from macro to micro into three levels, each with a distinct lifecycle and responsibility:
Level 1: Project Context
- Holder: Manager Agent
- Content: The global topology of the project. This includes the file tree structure, dependency graphs, confirmed requirement lists, tech stack specifications, etc.
- Characteristics: Persisted, High Abstraction. It acts like a map; it doesn’t contain specific code details but knows “where everything is.”
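The "map, not code" idea can be made concrete with a minimal topology store. This is a sketch under assumed names, not the actual Manager implementation:

```typescript
// Hypothetical Level 1 store: like a map, it knows where everything is
// (files and their import edges) without holding any code bodies.
class ProjectContext {
  private deps = new Map<string, string[]>(); // file -> files it imports

  addFile(file: string, imports: string[] = []): void {
    this.deps.set(file, imports);
  }

  // Topology query: which files would be affected if `target` changes?
  dependentsOf(target: string): string[] {
    return Array.from(this.deps.entries())
      .filter(([, imports]) => imports.includes(target))
      .map(([file]) => file);
  }
}
```

A query like `dependentsOf` is what lets the Manager scope a task without ever reading file contents.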
Level 2: Task Context
- Holder: Generated by the Manager, passed to the Execution Agent.
- Content: The “Minimal Complete Set” required to execute the current specific task.
  - Bad Case: Throwing the entire `src` folder at the AI.
  - Good Case: "Modify the `login` function in `auth.ts`; only reference the `User` interface definition in `user.ts`."
- Characteristics: Disposable, Highly Distilled. This is context that has been "distilled" by the Manager, with all irrelevant noise removed.
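The good case above can be written out as data. The shape of the brief (`TaskBrief`, `onlySymbols`) is an assumption for illustration:

```typescript
// Hypothetical shape of a Level 2 brief; field names are assumptions.
interface TaskBrief {
  instruction: string;
  files: { path: string; onlySymbols?: string[] }[];
}

// The "good case": the Worker receives two narrow references
// instead of the entire src folder.
const fixLogin: TaskBrief = {
  instruction: "Modify the login function in auth.ts",
  files: [
    { path: "auth.ts", onlySymbols: ["login"] },
    { path: "user.ts", onlySymbols: ["User"] }, // reference only, not the whole file
  ],
};
```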
Level 3: Execution Context
- Holder: Execution Agent (Runtime)
- Content: The “Workbench” while the Worker is active. It contains the Task Context, but more importantly, it includes supplementary details dynamically queried by the Worker while writing code.
- Characteristics: Dynamic Growth, Load on Demand, Burn After Use. For example, when the Worker finds a missing type definition, it can actively read the definition via LSP tools rather than relying on the Manager’s guesses.
Dynamic Collaboration Process: From “Spoon-feeding” to “On-Demand”
Traditional RAG works like “I predict you need this, so I’ll stuff it all in.” The Three-Level Context architecture supports Lazy Loading.
Scenario Example:
When the Manager dispatches a “Fix Login Bug” task, the Execution Agent starts in a pristine environment.
- The Worker reads the `login` function and notices a call to an unknown `validate` method.
- The Worker actively initiates a Tool Call to read the definition of `validate` in `utils.ts`.
- The Worker fixes the code and runs tests.
- The task ends, the Execution Context is destroyed, and only the final code changes are synced back to the Project State.
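The four steps above can be sketched as a single task lifecycle. The two callbacks stand in for an LSP read tool and the Worker's edit-and-test step; all names are assumptions:

```typescript
// Hypothetical lifecycle for the "Fix Login Bug" scenario.
interface TaskResult {
  changedFiles: Record<string, string>;
  testsPassed: boolean;
}

function runTask(
  seedFiles: Record<string, string>,                      // the Level 2 brief
  readDefinition: (symbol: string) => string,             // on-demand tool call
  fixAndTest: (workbench: Record<string, string>) => TaskResult,
): TaskResult {
  // Steps 1-2: start pristine, then pull in only the definition the code references.
  const workbench = { ...seedFiles, validate: readDefinition("validate") };
  // Step 3: fix the code and run tests inside the ephemeral workbench.
  const result = fixAndTest(workbench);
  // Step 4: the workbench goes out of scope here; only the diff survives
  // and is synced back into the Manager's project state.
  return result;
}
```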
Core Advantages
- Drastic Reduction in Hallucinations: The Execution Agent always works in an extremely pure "vacuum environment." It sees no previous user complaints and no prior failed attempts; it sees only clear instructions and precise code snippets. The purer the input, the more deterministic the output.
- Infinite Context Window: Through the dynamic query mechanism of Level 3, the Worker doesn't need to load the entire project at the start. It can "reach out" to the Manager or the file system whenever needed. This makes handling super-large projects with tens of thousands of files possible, breaking the physical limits of the Context Window.
- Self-Correction and State Machines: The Manager Agent maintains State, not History. When a Worker completes a task, the Manager updates the project state; if a task fails, the Manager generates a new fix task based on the current state. This is a Finite State Machine (FSM) that constantly converges, rather than a dialogue stream that diverges infinitely.
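The converging FSM can be sketched in a few lines. The state shape and the retry policy here are assumptions chosen for illustration:

```typescript
// Hypothetical Manager loop as a converging finite state machine: each step
// derives the next open-task list from current state, never from chat history.
type Status = "working" | "done";

interface ManagerState {
  status: Status;
  openTasks: string[];
}

function step(state: ManagerState, taskSucceeded: boolean): ManagerState {
  if (state.status === "done" || state.openTasks.length === 0) {
    return { status: "done", openTasks: [] };
  }
  const [current, ...rest] = state.openTasks;
  if (taskSucceeded) {
    // Success: drop the task and converge toward "done".
    return { status: rest.length === 0 ? "done" : "working", openTasks: rest };
  }
  // Failure: generate a fresh fix task from the current state.
  return { status: "working", openTasks: [`fix: ${current}`, ...rest] };
}
```

Because every transition is a function of the current state alone, a failed attempt is replaced rather than accumulated, which is exactly what a dialogue stream cannot do.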
Comparison: From Static CLAUDE.md to Dynamic Agent State
To better understand the evolution of this architecture, we can look at the currently popular CLAUDE.md practice.
In existing best practices, developers maintain a CLAUDE.md file in the project root to record architectural norms, common commands, and code styles. This is actually a prototype of Level 1 (Project Context), but it has two fatal limitations:
- Maintenance Cost: CLAUDE.md relies on manual updates by human developers. Once code changes without the documentation syncing, the outdated context becomes "poison" that misleads the AI. In the Three-Level Context architecture, the Manager Agent is responsible for real-time updates to the project state, ensuring the "map" always matches the "terrain."
- Granularity Issues: CLAUDE.md is a "flat" file. Regardless of the task size, the AI is forced to read the entire file every time. In our architecture, the Manager dynamically crops out Level 2 (Task Context) from the global state.
  - CLAUDE.md Mode: "Here are all the rules of the project; figure it out yourself."
  - Three-Level Context Mode: "For this specific task, you must follow these few rules."
In short, CLAUDE.md is a static, human-maintained, read-only snapshot; the Three-Level Context architecture is a dynamic, Agent-maintained, living state. We are shifting the burden from “writing documentation for AI” to “enabling AI to maintain its own memory.”
Conclusion
As I delve deeper into vibe coding, I increasingly realize: Context construction is an art.
The human brain is accustomed to linear logical deduction, but AI is different; it relies on associative generalization based on massive knowledge.
Therefore, the core competitiveness of future vibe coding tools will no longer be just the model itself, but how to design an efficient context system—one that precisely “triggers” and “guides” the AI’s generalization capabilities to produce results that meet human expectations.