📊 Version-controlled prompts & running prompts over datasets in LangSmith

DATE:
AUTHOR: The LangChain Team

It’s been a little over a month since we GA’d LangSmith, and we’re so grateful for all the new users. We’ve crossed 100k signups. Thank you.

If you haven’t checked it out recently, here are some exciting new features we’ve shipped:

  • Version-controlled datasets: You can now tag a dataset at a moment in time, so that each of your test runs uses the same snapshot of examples, even if examples have since been added, removed, or changed.

  • Run a prompt over a dataset: In the Prompt Hub playground, you can now run a prompt over all the inputs of a dataset and see the results in the Datasets and Testing tab. This helps ensure your prompt works well over a wide range of inputs.

  • Custom model rates for cost tracking: Head to the Settings tab to add custom rates for any model whose cost you want to track. OpenAI models are supported with no setup, and you can now add rates for other models as well.

  • PII Masking: We now let you mask the input and output of a single trace. See docs. If you detect client side that a trace might contain PII, mask all the text before sending it to LangSmith (see the sketch below). We want to invest more in PII masking, but this should help you guard against the occasional call that contains PII and shouldn’t be shared with a third party.

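Masking happens on the client, before traces ever leave your application. As a rough sketch (plain Python with our own illustrative regexes and helpers, not part of the LangSmith SDK), you might redact obvious patterns from a trace’s inputs and outputs before handing them to whatever code sends runs to LangSmith:

```python
import re

# Illustrative patterns for two common PII types; extend for your own data.
# This is a minimal sketch, not an exhaustive PII detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def mask_pii(text: str) -> str:
    """Redact obvious PII from a string before it is sent to LangSmith."""
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text


def mask_payload(payload: dict) -> dict:
    """Apply masking to every string value in a trace's inputs or outputs."""
    return {k: mask_pii(v) if isinstance(v, str) else v for k, v in payload.items()}


if __name__ == "__main__":
    inputs = {"question": "Email me at jane.doe@example.com or call +1 (555) 123-4567"}
    print(mask_payload(inputs))
    # {'question': 'Email me at [EMAIL REDACTED] or call [PHONE REDACTED]'}
```

The exact logging call depends on your setup; the point is that only the masked payload ever leaves your environment, so the raw text is never stored in LangSmith.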