
`interrupt`: Simplifying human-in-the-loop agents

AUTHOR: The LangChain Team

Our latest feature in LangGraph, interrupt, makes building human-in-the-loop workflows easier.

Why Human-in-the-Loop?

Agents aren’t perfect, so keeping humans “in the loop” ensures better accuracy, oversight, and flexibility. LangGraph’s checkpointing system already supports this by allowing workflows to be paused, edited, and resumed seamlessly, even months later or on a different machine.

Meet interrupt

Inspired by Python's built-in input(), interrupt pauses the graph at the point where it is called, marks the run as interrupted, and saves its payload to the persistence layer:

response = interrupt("Your question here")

You can resume execution when ready:

graph.invoke(Command(resume="Your response here"), thread)  

Unlike input, interrupt is built for production: a paused graph holds no compute resources, and the workflow picks up exactly where it left off once resumed.
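
To make this concrete, here is a minimal end-to-end sketch of the pause/resume flow. It assumes a recent langgraph release where interrupt and Command are importable from langgraph.types and MemorySaver is used as the checkpointer; the state, node, and thread names are illustrative:

from typing import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.types import Command, interrupt

class State(TypedDict):
    answer: str

def ask_human(state: State) -> dict:
    # Pause the graph here; the payload is saved via the checkpointer and
    # surfaced to the caller. On resume, interrupt() returns the value
    # passed in Command(resume=...).
    response = interrupt("Your question here")
    return {"answer": response}

builder = StateGraph(State)
builder.add_node("ask_human", ask_human)
builder.add_edge(START, "ask_human")
builder.add_edge("ask_human", END)

# A checkpointer is required so the paused run can be persisted and resumed.
graph = builder.compile(checkpointer=MemorySaver())
thread = {"configurable": {"thread_id": "example-thread"}}

# The first invocation runs until the interrupt and returns the pending
# payload under the "__interrupt__" key.
print(graph.invoke({"answer": ""}, thread))

# Resume once the human has responded; execution picks up where it left off.
print(graph.invoke(Command(resume="Your response here"), thread))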

Use Cases

  • Approve/Reject: Review critical steps before execution (see the sketch after this list).

  • Edit State: Correct or enhance the graph state.

  • Review Tool Calls: Verify tool calls proposed by the LLM before they run.

  • Multi-Turn Conversations: Enable dynamic, interactive dialogues.

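For the approve/reject pattern, a review node might look roughly like the sketch below; the state fields and the "approve" convention are illustrative assumptions rather than a fixed API:

from typing import TypedDict
from langgraph.types import interrupt

class State(TypedDict):
    proposed_action: str
    status: str

def human_approval(state: State) -> dict:
    # Pause and show the reviewer the action the agent wants to take.
    decision = interrupt({
        "question": "Approve this action?",
        "proposed_action": state["proposed_action"],
    })
    # The value supplied via Command(resume=...) becomes `decision`.
    if decision == "approve":
        return {"status": "approved"}
    # Anything other than "approve" is treated as a rejection.
    return {"status": "rejected"}
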
Get Started

interrupt is available now in Python and JavaScript. Read the blog post to learn more: https://blog.langchain.dev/making-it-easier-to-build-human-in-the-loop-agents-with-interrupt/

Or watch our YouTube video walkthrough.
