📊 LangChain Benchmarks for Python

DATE:
AUTHOR: The LangChain Team

LangChain Benchmarks is a Python package, with associated datasets, for experimenting with and benchmarking different cognitive architectures. Each benchmark task targets key functionality within common LLM applications, such as retrieval-based Q&A, extraction, and agent tool use.
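As an illustration of the workflow this enables, the sketch below installs the package and looks up a benchmark task from its registry. The task name shown is an example, and the exact API surface should be checked against the current docs:

```python
# pip install -U langchain-benchmarks
from langchain_benchmarks import registry  # registry of available benchmark tasks

# Look up a task by name (example name; see the docs for the current list)
task = registry["LangChain Docs Q&A"]
print(task.name)
print(task.description)
```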

For our first benchmark, we released a Q&A dataset over the LangChain Python documentation. See the accompanying blog post for our results and for instructions on testing your own cognitive architecture.
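To experiment against this dataset, the general pattern is to clone the public dataset into your own LangSmith workspace and evaluate your architecture there. The snippet below is a hedged sketch: `clone_public_dataset`, the task name, and the environment variable reflect the package's documented conventions but may differ in current versions.

```python
import os

from langchain_benchmarks import clone_public_dataset, registry

# LangSmith credentials are required to clone and evaluate (assumed env var name)
os.environ.setdefault("LANGCHAIN_API_KEY", "<your-langsmith-api-key>")

task = registry["LangChain Docs Q&A"]  # example task name

# Copy the public benchmark dataset into your own LangSmith workspace
clone_public_dataset(task.dataset_id, dataset_name=task.name)
```

From there, you can run your own chain or agent over the cloned dataset with LangSmith's evaluation tooling and compare your results against the leaderboard.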

Helpful resources: docs, repository, Q&A leaderboard.
