- DATE:
- AUTHOR: The LangChain Team
Unified cost tracking for LLMs, tools, and retrieval
We’ve added full-stack cost tracking to LangSmith to make it easier to understand and monitor spend across complex agent applications.
LangSmith now automatically records token usage and derived costs for major model providers (OpenAI, Anthropic, and others with OpenAI-compatible responses). You can also submit custom cost data for any run type — including tools, retrieval steps, or non-linear pricing models — giving you a complete picture of where your compute budget is going.
What’s new
- Automatic cost derivation for LLM calls based on token counts and model pricing tables
- Provider-aware pricing, including multimodal token types and cache reads
- Manual cost submission for any run type (LLMs, tools, retrieval, custom operations)
- Token and cost breakdowns visible throughout the LangSmith UI:
  - In the trace tree
  - In project stats
  - In dashboards
- Model price map editor for adding custom models or overriding default pricing
How it works
Costs are computed in one of two ways:
- Automatically — when token counts, provider, and model name are present
- Manually — by submitting usage_metadata with custom cost fields
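The manual path boils down to attaching a usage_metadata payload to a run. A minimal sketch of what such a payload might look like — the key names here (input_tokens, total_cost, and so on) mirror common token-accounting fields and are illustrative assumptions, not necessarily the exact LangSmith schema:

```python
# Hypothetical shape of a manually submitted usage_metadata payload.
# Field names are illustrative assumptions; consult the LangSmith
# cost-tracking docs for the exact schema.
def build_usage_metadata(input_tokens: int, output_tokens: int,
                         input_cost: float, output_cost: float) -> dict:
    """Assemble token counts and custom costs for a single run."""
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": input_tokens + output_tokens,
        "input_cost": input_cost,
        "output_cost": output_cost,
        "total_cost": round(input_cost + output_cost, 6),
    }

usage = build_usage_metadata(1200, 350, input_cost=0.0036, output_cost=0.00525)
print(usage["total_tokens"])  # 1550
```

Because the payload is just structured data, the same shape works for tool calls, retrieval steps, or any custom operation with its own pricing.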
LangSmith includes pricing data for most OpenAI, Anthropic, and Gemini models out of the box. For other providers or custom pricing schemes, you can supply your own token counts and price mapping.
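For the automatic path, the derivation is essentially a lookup-and-multiply: token counts joined against a per-model price table. A rough sketch of that logic — the model names and per-million-token prices below are invented for illustration, not LangSmith's actual pricing data:

```python
# Illustrative price map in USD per 1M tokens. The entries are made up
# for this example, not LangSmith's real pricing table.
PRICE_MAP = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
}

def derive_cost(model: str, input_tokens: int, output_tokens: int):
    """Return the derived cost in USD, or None if the model is unpriced."""
    prices = PRICE_MAP.get(model)
    if prices is None:
        return None  # unknown model: fall back to manual cost submission
    cost = (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000
    return round(cost, 6)

print(derive_cost("gpt-4o", 1000, 500))  # 0.0075
```

The price map editor mentioned above plays the role of PRICE_MAP here: adding a custom model or overriding a default price changes the lookup, and every subsequent run is costed against the new entry.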
Why it matters
Building agents at scale introduces non-trivial usage-based costs across multiple components. LangSmith gives you a single, consistent view of spend across prompts, outputs, tools, and retrieval — enabling better monitoring, debugging, and optimization.
See the docs: https://docs.langchain.com/langsmith/cost-tracking