Multimodal support in LangSmith

AUTHOR: The LangChain Team

LangSmith now supports images, PDFs, and audio files across the playground, annotation queues, and datasets — making it easier than ever to build, test, and evaluate multimodal applications.

LangSmith helps you bring real-world data into your workflows. You can now:

  • Attach files directly to dataset examples, eliminating the need for clunky base64 encoding

  • Speed up evaluations with faster upload/download of binary files

  • Visualize multimodal content right inside the LangSmith UI for smoother debugging and iteration
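For a concrete picture of the first bullet, here is a minimal sketch of attaching a binary file to a dataset example with the LangSmith Python SDK. It assumes `Client.create_example` accepts an `attachments` mapping of name to `(mime_type, raw_bytes)` pairs; the dataset name, field names, and receipt bytes are illustrative placeholders, and the API call only runs if credentials are configured.

```python
import os

# Assumption: the langsmith SDK's create_example takes an `attachments`
# mapping of name -> (mime_type, raw_bytes), so the raw file is uploaded
# directly with no base64 round-trip.
def as_attachment(path: str, mime_type: str) -> tuple[str, bytes]:
    """Package a local file as a (mime_type, bytes) attachment tuple."""
    with open(path, "rb") as f:
        return (mime_type, f.read())

# Placeholder bytes stand in for a real receipt image here.
receipt_bytes = b"\x89PNG...fake image bytes..."

attachments = {"receipt": ("image/png", receipt_bytes)}
inputs = {"question": "What is the total on this receipt?"}
outputs = {"total": "$42.10"}

# Only hit the API when an API key is actually set.
if os.environ.get("LANGSMITH_API_KEY"):
    from langsmith import Client

    client = Client()
    dataset = client.create_dataset("receipt-extraction-demo")  # hypothetical name
    client.create_example(
        dataset_id=dataset.id,
        inputs=inputs,
        outputs=outputs,
        attachments=attachments,  # binary file travels as-is
    )
```

The same `attachments` shape would carry a PDF (`application/pdf`) or an audio clip (`audio/mpeg`); only the MIME type changes.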

Watch it in action: how we evaluate a receipt-extracting agent

Ready to build?

Upload images, PDFs, or audio to your datasets and run your first multimodal eval.
