- DATE:
- AUTHOR: The LangChain Team
Run evaluators in the LangSmith UI
We've made evaluations even easier to set up! You can now define evaluators for datasets and tracing projects directly in the UI, no code required. After extensive testing and refinement, this update streamlines the setup process, especially for non-technical users like SMEs, PMs, and UX designers.
What’s New?
- Improved UI for LLM-as-a-judge evaluators – Easily configure and manage evaluators.
- Pre-built evaluators to get started fast – Includes Hallucination, Correctness, Conciseness, and Code Checker.
- Customization options – Fine-tune evaluators with custom prompts, variable mapping, and scoring criteria (see the sketch after this list).
- Few-shot learning integration – Improve evaluator accuracy by incorporating human feedback.
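If you're curious what an LLM-as-a-judge evaluator is doing behind the scenes, here's a minimal Python sketch of the three pieces you configure in the UI: a judge prompt, variable mapping, and scoring criteria. The `call_judge_model` function and the field names are hypothetical stand-ins, not LangSmith's implementation.

```python
# Minimal sketch of an LLM-as-a-judge evaluator. `call_judge_model` is a
# hypothetical stand-in for whatever chat model you pick in the UI.

JUDGE_PROMPT = """You are grading an AI answer for correctness.

Question: {question}
Reference answer: {reference}
Submitted answer: {submission}

Reply with a single word: CORRECT or INCORRECT."""


def call_judge_model(prompt: str) -> str:
    # Stand-in for a real chat-model call; a production evaluator would
    # send `prompt` to the configured judge model and return its reply.
    return "CORRECT"


def correctness_evaluator(run_inputs: dict, run_outputs: dict, example_outputs: dict) -> dict:
    # Variable mapping: connect fields from the run and the dataset
    # example to the placeholders in the judge prompt.
    prompt = JUDGE_PROMPT.format(
        question=run_inputs["question"],
        reference=example_outputs["answer"],
        submission=run_outputs["output"],
    )
    verdict = call_judge_model(prompt).strip().upper()
    # Scoring criteria: map the judge's verdict onto a numeric score.
    return {"key": "correctness", "score": 1 if verdict == "CORRECT" else 0}


# Few-shot integration, conceptually: prepend human-corrected judgments
# to JUDGE_PROMPT so the judge sees graded examples before scoring.

if __name__ == "__main__":
    print(correctness_evaluator(
        {"question": "What is 2 + 2?"},
        {"output": "4"},
        {"answer": "4"},
    ))
```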
How to Use It:
1. Select/Create an Evaluator – From a dataset, tracing project, or the playground.
2. Configure Settings – Choose a model, map variables, and customize scoring criteria.
3. Preview & Save – Ensure the setup aligns with your needs and apply it to experiments (a code-first equivalent is sketched below).
Read the docs for more info: https://docs.smith.langchain.com/evaluation/how_to_guides/llm_as_judge?mode=ui
Try it out in LangSmith today and let us know what you think! smith.langchain.com