How to add cross-thread persistence to your graph#
In the previous guide you learned how to persist graph state across multiple interactions on a single thread. LangGraph also lets you persist data across multiple threads. For instance, you can store information about users (their names or preferences) in a shared memory and reuse it in new conversational threads.
In this guide, we will show how to construct and use a graph that has a shared memory implemented using the Store interface.
Note
Support for the Store API that is used in this guide was added in LangGraph v0.2.32.
Support for index and query arguments of the Store API that is used in this guide was added in LangGraph v0.2.54.
Setup#
First, let’s install the required packages and set our API keys:
%%capture --no-stderr
%pip install -U langchain_openai langgraph
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
_set_env("OPENAI_API_KEY")
!!! tip "Set up LangSmith for LangGraph development"
    Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started [here](https://docs.smith.langchain.com).
Define store#
In this example we will create a graph that can retrieve information about a user’s preferences. We will do so by defining a RedisStore - an object that can store data in Redis and query it. We will then pass the store object when compiling the graph. This allows each node in the graph to access the store: when you define node functions, you can define a store keyword argument, and LangGraph will automatically pass in the store object you compiled the graph with.
When storing objects using the Store interface you define two things:
- the namespace for the object, a tuple (similar to directories)
- the object key (similar to filenames)
In our example, we’ll be using ("memories", <user_id>) as the namespace and a random UUID as the key for each new memory.
Importantly, to determine the user, we will be passing user_id via the config keyword argument of the node function.
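The (namespace, key) addressing model can be sketched with a plain dict. Note this is an illustration only, not the real Store API — it just shows how a directory-like namespace tuple keeps each user’s memories separate:

```python
import uuid

# Illustration of the Store addressing scheme: every object lives at a
# (namespace, key) pair, where the namespace is a tuple and the key a string.
store: dict[tuple, dict] = {}

def put(namespace: tuple, key: str, value: dict) -> None:
    store[(namespace, key)] = value

def search(namespace: tuple) -> list[dict]:
    # Return all values under a namespace, like listing a directory
    return [v for (ns, _), v in store.items() if ns == namespace]

user_id = "1"
namespace = ("memories", user_id)  # directory-like namespace per user
put(namespace, str(uuid.uuid4()), {"data": "User name is Bob"})

print(search(namespace))            # [{'data': 'User name is Bob'}]
print(search(("memories", "2")))    # [] -- other users' namespaces are isolated
```

Because the user ID is part of the namespace, searching under a different user returns nothing, which is exactly the isolation we verify at the end of this guide.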
Let’s first define a RedisStore already populated with some memories about the users.
from langchain_openai import OpenAIEmbeddings
from langgraph.store.redis import RedisStore
from langgraph.store.base import IndexConfig
# Set up Redis connection
REDIS_URI = "redis://redis:6379"
# Create index configuration for vector search
index_config: IndexConfig = {
    "dims": 1536,
    "embed": OpenAIEmbeddings(model="text-embedding-3-small"),
    "ann_index_config": {
        "vector_type": "vector",
    },
    "distance_type": "cosine",
}
# Initialize the Redis store
redis_store = None
with RedisStore.from_conn_string(REDIS_URI, index=index_config) as s:
    s.setup()
    redis_store = s
00:35:11 langgraph.store.redis INFO Redis standalone client detected for RedisStore.
00:35:11 redisvl.index.index INFO Index already exists, not overwriting.
00:35:11 redisvl.index.index INFO Index already exists, not overwriting.
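The index configuration above is what makes query-based search possible: stored values are embedded, and a query is ranked against them by cosine similarity. A toy sketch of that ranking with made-up 3-dimensional vectors (the real store uses 1536-dimensional OpenAI embeddings; these numbers are invented for illustration):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity: dot product over the product of norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings of two stored memories
memories = {
    "User name is Bob":  [0.9, 0.1, 0.0],
    "User likes hiking": [0.1, 0.9, 0.2],
}
# Hypothetical embedding of the query "what is my name?"
query_vec = [0.8, 0.2, 0.1]

best = max(memories, key=lambda m: cosine_similarity(memories[m], query_vec))
print(best)  # "User name is Bob" ranks highest for this toy query
```

This is why `store.search(namespace, query=...)` later in the guide surfaces the name memory when the user asks "what is my name?".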
Create graph#
import uuid
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.redis import RedisSaver
from langgraph.store.base import BaseStore
model = ChatAnthropic(model="claude-sonnet-4-20250514")
# NOTE: we're passing the Store param to the node --
# this is the Store we compile the graph with
def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):
    user_id = config["configurable"]["user_id"]
    namespace = ("memories", user_id)
    memories = store.search(namespace, query=str(state["messages"][-1].content))
    info = "\n".join([d.value["data"] for d in memories])
    system_msg = f"You are a helpful assistant talking to the user. User info: {info}"

    # Store new memories if the user asks the model to remember
    last_message = state["messages"][-1]
    if "remember" in last_message.content.lower():
        memory = "User name is Bob"
        store.put(namespace, str(uuid.uuid4()), {"data": memory})

    response = model.invoke(
        [{"role": "system", "content": system_msg}] + state["messages"]
    )
    return {"messages": response}
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
# Set up Redis connection for checkpointer
REDIS_URI = "redis://redis:6379"
checkpointer = None
with RedisSaver.from_conn_string(REDIS_URI) as cp:
    cp.setup()
    checkpointer = cp
# NOTE: we're passing the store object here when compiling the graph
graph = builder.compile(checkpointer=checkpointer, store=redis_store)
00:35:11 langgraph.checkpoint.redis INFO Redis client is a standalone client
00:35:11 redisvl.index.index INFO Index already exists, not overwriting.
00:35:11 redisvl.index.index INFO Index already exists, not overwriting.
00:35:11 redisvl.index.index INFO Index already exists, not overwriting.
Note
If you're using LangGraph Platform or LangGraph Studio, you don't need to pass store when compiling the graph, since it's done automatically.
Run the graph!#
Now let’s specify a user ID in the config and tell the model our name:
config = {"configurable": {"thread_id": "1", "user_id": "1"}}
input_message = {"role": "user", "content": "Hi! Remember: my name is Bob"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================
Hi! Remember: my name is Bob
00:35:12 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
00:35:12 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
00:35:14 httpx INFO HTTP Request: POST https://api.anthropic.com/v1/messages "HTTP/1.1 200 OK"
================================== Ai Message ==================================
Hi Bob! Nice to meet you. I'll remember that your name is Bob. How are you doing today?
config = {"configurable": {"thread_id": "2", "user_id": "1"}}
input_message = {"role": "user", "content": "what is my name?"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================
what is my name?
00:35:15 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
00:35:17 httpx INFO HTTP Request: POST https://api.anthropic.com/v1/messages "HTTP/1.1 200 OK"
================================== Ai Message ==================================
Your name is Bob.
We can now inspect our Redis store and verify that we have in fact saved the memories for the user:
for memory in redis_store.search(("memories", "1")):
    print(memory.value)
{'data': 'User name is Bob'}
{'data': 'User name is Bob'}
{'data': 'User name is Bob'}
{'data': 'User name is Bob'}
Let’s now run the graph for another user to verify that the memories about the first user are self-contained:
config = {"configurable": {"thread_id": "3", "user_id": "2"}}
input_message = {"role": "user", "content": "what is my name?"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================
what is my name?
00:35:18 httpx INFO HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
00:35:19 httpx INFO HTTP Request: POST https://api.anthropic.com/v1/messages "HTTP/1.1 200 OK"
================================== Ai Message ==================================
I don't have any information about your name. Could you please tell me what you'd like me to call you?