# How to view and update past graph state
!!! tip "Prerequisites"

    This guide assumes familiarity with the following concepts:

    * [Time Travel](../../../concepts/time-travel)
    * [Breakpoints](../../../concepts/breakpoints)
    * [LangGraph Glossary](../../../concepts/low_level)
Once you start checkpointing your graphs, you can easily get or update the state of the agent at any point in time. This permits a few things:

* You can surface a state during an interrupt to a user to let them accept an action.
* You can rewind the graph to reproduce or avoid issues.
* You can modify the state to embed your agent into a larger system, or to let the user better control its actions.

The key methods used for this functionality are:

* `get_state`: fetch the values from the target config
* `update_state`: apply the given values to the target state

**Note:** this requires passing in a checkpointer.
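Before diving in, here is a toy sketch of how these two methods relate. This is plain Python for intuition only, not LangGraph's actual implementation: a checkpointer keyed by `thread_id` stores a list of snapshots, `get_state` reads the latest one, and `update_state` applies new values on top and saves a fresh checkpoint rather than mutating history.

```python
# Conceptual sketch (NOT LangGraph's real implementation): a checkpointer
# maps a thread_id to a list of state snapshots.
checkpoints = {}  # thread_id -> list of state snapshots


def save(config, state):
    thread_id = config["configurable"]["thread_id"]
    checkpoints.setdefault(thread_id, []).append(dict(state))


def get_state(config):
    # Fetch the most recent snapshot for the target config
    return checkpoints[config["configurable"]["thread_id"]][-1]


def update_state(config, values):
    # Apply the given values on top of the current state,
    # saving a NEW checkpoint instead of overwriting the old one
    new_state = {**get_state(config), **values}
    save(config, new_state)
    return new_state


config = {"configurable": {"thread_id": "thread-1"}}
save(config, {"messages": ["hi"]})
update_state(config, {"messages": ["hi", "hello!"]})
print(get_state(config)["messages"])  # → ['hi', 'hello!']
```

The real checkpointer also records parent pointers between checkpoints, which is what makes the replay and branching shown below possible.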
Below is a quick example.
## Setup
First, we need to install the required packages:

```python
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_openai
```
Next, we need to set API keys for OpenAI (the LLM we will use):

```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
```
## Build the agent
We can now build the agent. We will build a relatively simple ReAct-style agent that does tool calling. We will use OpenAI's models and fake tools (just for demo purposes).
```python
# Set up the tools
import uuid

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.graph import MessagesState, START
from langgraph.prebuilt import ToolNode
from langgraph.graph import END, StateGraph
from langgraph.checkpoint.redis import RedisSaver

# Set up Redis connection
REDIS_URI = "redis://redis:6379"
memory = None
with RedisSaver.from_conn_string(REDIS_URI) as cp:
    cp.setup()
    memory = cp


@tool
def play_song_on_spotify(song: str):
    """Play a song on Spotify"""
    # Call the Spotify API ...
    return f"Successfully played {song} on Spotify!"


@tool
def play_song_on_apple(song: str):
    """Play a song on Apple Music"""
    # Call the Apple Music API ...
    return f"Successfully played {song} on Apple Music!"


tools = [play_song_on_apple, play_song_on_spotify]
tool_node = ToolNode(tools)

# Set up the model
model = ChatOpenAI(model="gpt-4o-mini")
model = model.bind_tools(tools, parallel_tool_calls=False)
```
```python
# Define nodes and conditional edges


# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"


# Define the function that calls the model
def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# Define a new graph
workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `continue`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)

# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile(checkpointer=memory)
```
## Interacting with the Agent
We can now interact with the agent. Let’s ask it to play Taylor Swift’s most popular song:
```python
import uuid

from langchain_core.messages import HumanMessage

# Use a unique thread ID for a fresh start
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
input_message = HumanMessage(content="Can you play Taylor Swift's most popular song?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
```
## Checking history
Let’s browse the history of this thread, from start to finish.
```python
# Check the current state messages
current_state = app.get_state(config)
if current_state and current_state.values.get("messages"):
    print(f"Current state has {len(current_state.values['messages'])} messages:")
    for i, msg in enumerate(current_state.values["messages"]):
        msg_type = type(msg).__name__
        if msg_type == "HumanMessage":
            print(f"  {i}. Human: {msg.content}")
        elif msg_type == "AIMessage":
            if msg.tool_calls:
                print(f"  {i}. AI: Called {msg.tool_calls[0]['name']}")
            else:
                print(f"  {i}. AI: {msg.content[:100]}...")
        elif msg_type == "ToolMessage":
            print(f"  {i}. Tool Result: {msg.content[:50]}...")
else:
    print("No messages in current state")
```
```python
print("State history (newest to oldest):")
print("=" * 50)

all_states = []
for state in app.get_state_history(config):
    msg_count = len(state.values.get("messages", []))
    print(f"State {len(all_states)}:")
    print(f"  - Messages: {msg_count}")
    print(f"  - Next node(s): {state.next}")
    if state.next == ("action",):
        print("  ⚡ This is where we can intercept before tool execution")
    all_states.append(state)
    print("-" * 30)
```
## Replay a state
We can go back to any of these states and restart the agent from there! Let’s go back to right before the tool call gets executed.
```python
# Get the state right before the tool was called
# The states list is in reverse chronological order (newest first)
# We want index 2, which is the state right before the tool execution
if len(all_states) > 2:
    to_replay = all_states[2]  # The state with 2 messages, right before "action"
    print(f"Selected state: {to_replay.next}")
    print(f"Messages in state: {len(to_replay.values.get('messages', []))}")
    if to_replay.values.get("messages"):
        last_msg = to_replay.values["messages"][-1]
        if hasattr(last_msg, "tool_calls") and last_msg.tool_calls:
            print(f"Tool call found: {last_msg.tool_calls[0]['name']}")
else:
    to_replay = None
    print("Not enough states to replay")
```
```python
if to_replay:
    print("\nState values:")
    print(f"  Messages: {len(to_replay.values.get('messages', []))}")
    for i, msg in enumerate(to_replay.values.get("messages", [])):
        print(f"  Message {i}: {type(msg).__name__}")
        if hasattr(msg, "tool_calls") and msg.tool_calls:
            print(f"    - Tool call: {msg.tool_calls[0]['name']}({msg.tool_calls[0]['args']})")
else:
    print("No state to replay")
```
```python
if to_replay:
    print(f"\nNext steps from this state: {to_replay.next}")
    print("This state is right before the tool execution.")
else:
    print("No state to check")
```
To replay from this point, we just need to pass its config back to the agent. Notice that it resumes right where it left off, making a tool call.
```python
if to_replay:
    print("\nReplaying from selected state (resuming tool execution):")
    print("-" * 50)
    for event in app.stream(None, to_replay.config, stream_mode="values"):
        if "messages" in event:
            event["messages"][-1].pretty_print()
else:
    print("No state to replay from")
```
## Branch off a past state
Using LangGraph’s checkpointing, you can do more than just replay past states. You can branch off previous locations to let the agent explore alternate trajectories or to let a user “version control” changes in a workflow.
Let's show how to edit the state at a particular point in time. Let's update the state so that the song is played on Spotify instead of Apple Music:
```python
if to_replay and to_replay.values.get("messages"):
    # Get the last message in the state (the AI message with tool calls)
    last_message = to_replay.values["messages"][-1]

    # Check if it has tool calls
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        from langchain_core.messages import AIMessage

        # Create a new AIMessage with the modified tool call
        modified_message = AIMessage(
            content=last_message.content,
            tool_calls=[
                {
                    "name": "play_song_on_spotify",  # Changed from play_song_on_apple
                    "args": last_message.tool_calls[0]["args"],
                    "id": last_message.tool_calls[0]["id"],
                }
            ],
        )

        # Update the state with the modified message
        branch_config = app.update_state(
            to_replay.config,
            {"messages": [modified_message]},
        )
        print("✅ Updated tool call from 'play_song_on_apple' to 'play_song_on_spotify'")
        print(f"Branch config checkpoint: {branch_config['configurable']['checkpoint_id'][:8]}...")
    else:
        print("Last message doesn't have tool calls")
else:
    print("No valid state to replay or no messages in state")
```
We can then invoke the graph with this new `branch_config` to resume running from here with the changed state. We can see from the log that the tool was called with different input.
```python
if "branch_config" in locals():
    print("\n🎵 Running with modified tool call (Spotify instead of Apple):")
    print("-" * 50)
    for event in app.stream(None, branch_config, stream_mode="values"):
        if "messages" in event:
            event["messages"][-1].pretty_print()
else:
    print("No branch config available. Make sure the previous cell executed successfully.")
```
Alternatively, we could update the state to not even call a tool!
```python
from langchain_core.messages import AIMessage

if to_replay and to_replay.values.get("messages"):
    # Get the last message in the state (the AI message with tool calls)
    last_message = to_replay.values["messages"][-1]

    # Create a new message without tool calls
    new_message = AIMessage(
        content="It's quiet hours so I can't play any music right now! But 'Anti-Hero' is indeed a great song.",
        id=last_message.id if hasattr(last_message, "id") else None,
    )

    # Create another branch from the same checkpoint
    branch_config_2 = app.update_state(
        to_replay.config,
        {"messages": [new_message]},
    )
    print("✅ Created alternative branch without tool call")
    print(f"New branch checkpoint: {branch_config_2['configurable']['checkpoint_id'][:8]}...")
else:
    print("No valid state to create alternative branch")
```
```python
if "branch_config_2" in locals():
    branch_state = app.get_state(branch_config_2)
    print(f"\nAlternative branch state has {len(branch_state.values.get('messages', []))} messages")
    print(f"Last message: {branch_state.values['messages'][-1].content[:100]}...")
else:
    print("No branch config available")
```
```python
if "branch_state" in locals():
    print("\nBranch state values:")
    print(f"  Messages: {len(branch_state.values['messages'])}")
    for i, msg in enumerate(branch_state.values["messages"]):
        preview = msg.content[:50] if hasattr(msg, "content") else str(msg)[:50]
        print(f"  {i}. {type(msg).__name__}: {preview}...")
else:
    print("No branch state available")
```
```python
if "branch_state" in locals():
    print(f"\nNext steps for alternative branch: {branch_state.next}")
    if branch_state.next == ():
        print("✅ Graph execution complete - no tool was called in this branch")
else:
    print("No branch state available")
```
## Summary

This notebook demonstrated time-travel capabilities in LangGraph with Redis checkpointing:

1. **Running the agent**: We asked the agent to play Taylor Swift's most popular song, which triggered a tool call
2. **Viewing history**: We examined all checkpoints created during execution
3. **Selecting a checkpoint**: We identified the state right before the tool was executed
4. **Replaying**: We resumed execution from that checkpoint
5. **Branching - Option 1**: We modified the tool call to use Spotify instead of Apple Music
6. **Branching - Option 2**: We replaced the tool call entirely with a different response

This shows how you can:

* Navigate through execution history
* Replay from any checkpoint
* Create alternate execution branches by modifying state
* Build human-in-the-loop workflows with fine-grained control

All state management is handled by Redis, providing persistent, scalable checkpointing for production applications.
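The branching behavior above rests on one idea worth internalizing: checkpoints form a tree, not a list. The following is a conceptual sketch in plain Python (a hypothetical `put` helper, not the RedisSaver internals): each checkpoint records its parent, so calling `update_state` at a past checkpoint writes a sibling with the same parent rather than overwriting the original branch.

```python
import uuid

# Conceptual sketch of checkpoint branching: checkpoint_id -> record
# with a parent pointer and the state values at that point.
history = {}


def put(values, parent=None):
    # Write a new checkpoint; existing checkpoints are never mutated
    checkpoint_id = str(uuid.uuid4())
    history[checkpoint_id] = {"parent": parent, "values": values}
    return checkpoint_id


root = put({"messages": ["play a song"]})
apple = put({"messages": ["play a song", "call play_song_on_apple"]}, parent=root)
# Branching: a different checkpoint written against the SAME parent
spotify = put({"messages": ["play a song", "call play_song_on_spotify"]}, parent=root)

# Both branches share the root, and the original branch is untouched
print(history[apple]["parent"] == history[spotify]["parent"] == root)  # → True
```

Because no checkpoint is ever overwritten, the full execution history, including every branch, remains queryable, which is what `get_state_history` exposed earlier in this notebook.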