Node-level caching in LangGraph

AUTHOR: The LangChain Team

You can now cache the results of individual nodes and tasks in LangGraph based on their input, avoiding redundant computation and speeding up execution.

How it works (see the example sketch after this list):

  • Define a cache backend at compile time (Graph API) or on the entrypoint (Functional API)

  • Set cache policies per node:

    • key_func to control cache key generation (defaults to a hash of the node's input)

    • ttl to control cache expiration in seconds (or leave it unset so cached entries never expire)
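
Here is a minimal sketch of how the pieces fit together with the Graph API, following the node-caching docs. The State schema and the expensive_node function are illustrative placeholders, and InMemoryCache is one of the available cache backends.

```python
# Minimal sketch of node-level caching with the Graph API.
# State and expensive_node are illustrative placeholders.
import time
from typing_extensions import TypedDict

from langgraph.cache.memory import InMemoryCache
from langgraph.graph import StateGraph, START, END
from langgraph.types import CachePolicy


class State(TypedDict):
    x: int
    result: int


def expensive_node(state: State) -> dict:
    time.sleep(2)  # stand-in for an expensive computation
    return {"result": state["x"] * 2}


builder = StateGraph(State)
# Cache this node's result for 120 seconds, keyed on its input (the default key_func).
builder.add_node("expensive_node", expensive_node, cache_policy=CachePolicy(ttl=120))
builder.add_edge(START, "expensive_node")
builder.add_edge("expensive_node", END)

# The cache backend is supplied when the graph is compiled.
graph = builder.compile(cache=InMemoryCache())

graph.invoke({"x": 5})  # executes the node (~2 seconds)
graph.invoke({"x": 5})  # same input within the TTL, so the result comes from the cache
```

If only part of a node's input should determine the cache key, a custom key_func can be passed to CachePolicy instead of relying on the default input hash.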

See the docs: https://langchain-ai.github.io/langgraph/concepts/low_level/#node-caching
