The Framework Was Patching the Model, Not Helping You
LangChain made sense in 2023. The model limitations it was compensating for no longer exist
LangChain shipped at the right moment. Models in 2022 and 2023 were genuinely limited in ways that required scaffolding. Short context windows meant you needed careful document chunking and retrieval logic. Inconsistent tool use meant you needed chains to structure multi-step behavior. Memory had to be managed externally because models had none. LangChain handled all of that, and doing so was real work.
The models caught up. Claude Opus 4.6, GPT-5, and Gemini 3.1 Pro handle multi-step reasoning, tool selection, retrieval, and context management natively in ways the 2023 models did not. The scaffold was compensating for model limitations. Most of those limitations are gone.
What you get now, if you reach for LangChain by default, is an abstraction layer over a model that does not need abstracting. The debugging experience when something breaks is worse than with direct API calls, because you are now tracing through the framework’s logic to find the problem. The surface area is larger; the control is lower. Teams I know that removed it in favor of direct API calls saw their debugging time drop sharply.
This is not a criticism of LangChain as a project. It shipped a stable 1.0 in October 2025. LangGraph handles genuinely complex stateful multi-agent workflows well. LangSmith is a good observability tool. If you are building something that needs those specific capabilities, the ecosystem earns its weight.
The issue is that teams use LangChain as a default starting point when they do not have a specific reason to. They add a dependency they will spend months learning before discovering that the underlying model could have handled their use case with 200 lines of direct API calls. The abstraction was not solving their problem. It was deferring the work of understanding what they actually needed.
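To make "direct API calls" concrete, here is a minimal sketch using only the standard library. The endpoint, headers, and payload shape follow Anthropic’s public Messages API; the model name and the `"sk-..."` key are illustrative placeholders, and the request is built but not sent.

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"  # Anthropic Messages API endpoint

def build_request(prompt: str, api_key: str,
                  model: str = "claude-sonnet-4-5") -> urllib.request.Request:
    """Construct a plain HTTP request to the Messages API -- no framework involved."""
    payload = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": api_key,                  # your API key goes here
            "anthropic-version": "2023-06-01",     # required version header
            "content-type": "application/json",
        },
        method="POST",
    )

# Sending it is one more line: urllib.request.urlopen(build_request(...))
req = build_request("Summarize this document.", api_key="sk-...")
```

Everything the framework would wrap (retries, tool dispatch, conversation state) is a function you write and can step through in a debugger, which is the whole point of the comparison above.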
Before reaching for a framework, be specific about what it buys you. If the answer is ‘quicker to prototype,’ that is a legitimate reason. If the answer is ‘I have always used it,’ run the experiment of not using it. You will find out quickly whether the complexity is earned.

