The Framework Trap
Reaching for agent frameworks (LangChain, CrewAI, AutoGen) before understanding what you actually need, and wrapping your logic in layers of abstraction that obscure the prompts, tool calls, and responses flowing through the system.
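What the abstraction layers hide is usually a short loop: send messages to the model, run any tool it requests, feed the result back, repeat. A minimal sketch of that loop, with a stubbed `call_llm` standing in for a real provider API call (the function names, message shapes, and routing logic here are illustrative, not any framework's actual internals):

```python
# Hypothetical stub standing in for a real LLM API call (e.g. an HTTP
# request to a provider endpoint). It returns either plain text or a
# tool-call request, so the loop below can exercise both paths.
def call_llm(messages):
    last = messages[-1]["content"]
    if "weather" in last:
        return {"tool": "get_weather", "arguments": {"city": "Paris"}}
    return {"text": f"Answer based on: {last}"}

def get_weather(city):
    return f"Sunny in {city}"  # stubbed tool implementation

TOOLS = {"get_weather": get_weather}

def run_agent(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    reply = call_llm(messages)
    # If the model requested a tool, run it and feed the result back
    # until the model returns a plain-text answer.
    while "tool" in reply:
        result = TOOLS[reply["tool"]](**reply["arguments"])
        messages.append({"role": "tool", "content": result})
        reply = call_llm(messages)
    return reply["text"]
```

When a framework misbehaves, this is the loop you end up reverse-engineering; when you own it, every prompt and tool result is a value you can print.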
Why It Happens
- Frameworks promise rapid prototyping and come with impressive demos
- It feels irresponsible to "reinvent the wheel"
- Teams want the confidence of a battle-tested library
- Framework documentation makes complex patterns look easy
What Goes Wrong
- Debugging becomes archaeology — when behavior is wrong, you're reverse-engineering the framework, not your logic
- Incorrect assumptions — Anthropic notes that "incorrect assumptions about what's under the hood are a common source of customer error"
- Abstraction lock-in — the framework's model of the world becomes your model, even when it doesn't fit
- Upgrade churn — frameworks evolve rapidly, and upgrades break your code on their release schedule, not yours
What to Do Instead
- Start with direct LLM API calls — Anthropic says "many patterns can be implemented in a few lines of code"
- Build the simplest thing that works before adding abstraction
- Use frameworks only after you understand the underlying mechanics well enough to debug them
- Prefer thin libraries (Instructor, Pydantic) over thick frameworks
- If you use a framework, understand what it's doing at the API level
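The "thin library" preference above mostly comes down to structured-output validation: checking that what the model returned matches a schema you declared, and failing loudly when it doesn't. A stdlib-only sketch of that idea (real code would typically use Pydantic models or Instructor; the `SCHEMA` and field names here are illustrative):

```python
import json

# Illustrative schema for a tool call. With Pydantic this would be a
# BaseModel; the point is that the contract is a few declared lines,
# not a framework.
SCHEMA = {"name": str, "arguments": dict}

def parse_tool_call(raw: str) -> dict:
    """Parse model output and validate it against the declared schema,
    raising immediately instead of passing malformed data downstream."""
    data = json.loads(raw)
    for field, expected_type in SCHEMA.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field!r}")
    return data

# Usage: a valid payload parses; a malformed one fails at the boundary.
call = parse_tool_call('{"name": "search", "arguments": {"q": "docs"}}')
```

Keeping validation this explicit is the whole trade: a few lines you can read and debug, instead of behavior buried under abstraction.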
Signs You Have This
- You can't explain what prompts your agent is actually sending
- Debugging requires reading framework source code
- You've written more framework configuration than actual logic
- Upgrading the framework broke your agent in ways you don't understand