# How to Review Tool Calls
!!! tip "Prerequisites"
    This guide assumes familiarity with the following concepts:

    * [Tool calling](https://python.langchain.com/docs/concepts/tool_calling/)
    * [Human-in-the-loop](../../../concepts/human_in_the_loop)
    * [LangGraph Glossary](../../../concepts/low_level)
Human-in-the-loop (HIL) interactions are crucial for agentic systems. A common pattern is to add a human-in-the-loop step after certain tool calls. These tool calls often lead either to a function call or to saving some information. Examples include:

* A tool call to execute SQL, which will then be run by the tool
* A tool call to generate a summary, which will then be saved to the State of the graph

Note that this pattern relies on tool calls whether or not a tool is actually executed.
There are typically a few different interactions you may want to support here:

* Approve the tool call and continue
* Modify the tool call manually and then continue
* Give natural language feedback, and then pass that back to the agent
We can implement these in LangGraph using the [interrupt()][langgraph.types.interrupt] function. `interrupt` allows us to stop graph execution to collect input from a user and then continue execution with the collected input:
def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
    # this is the value we'll be providing via Command(resume=<human_review>)
    human_review = interrupt(
        {
            "question": "Is this correct?",
            # Surface tool calls for review
            "tool_call": tool_call,
        }
    )

    review_action, review_data = review_action, review_data = human_review

    # Approve the tool call and continue
    if review_action == "continue":
        return Command(goto="run_tool")

    # Modify the tool call manually and then continue
    elif review_action == "update":
        ...
        updated_message = get_updated_msg(review_data)
        return Command(goto="run_tool", update={"messages": [updated_message]})

    # Give natural language feedback, and then pass that back to the agent
    elif review_action == "feedback":
        ...
        feedback_msg = get_feedback_msg(review_data)
        return Command(goto="call_llm", update={"messages": [feedback_msg]})
## Setup

First, we need to install the required packages:
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic "httpx>=0.24.0,<1.0.0"
Next, we need to set the API key for Anthropic (the LLM we will use):
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
## Simple Usage

Let's set up a very simple graph that facilitates this. First, we will have an LLM call that decides what action to take. Then we go to a human review node. This node pauses the graph with interrupt and waits for human input; based on that input, we either route back to the LLM or on to the correct tool.
Let’s see this in action!
import uuid
from typing_extensions import Literal
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.checkpoint.redis import RedisSaver
from langgraph.types import Command, interrupt
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from IPython.display import Image, display
# Set up Redis connection
REDIS_URI = "redis://redis:6379"
memory = None
with RedisSaver.from_conn_string(REDIS_URI) as cp:
    cp.setup()
    memory = cp
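If you don't have a Redis instance available, any checkpointer works for this guide. A drop-in alternative (our suggestion, not part of the original setup) is LangGraph's in-memory saver:

from langgraph.checkpoint.memory import MemorySaver

# In-memory alternative to RedisSaver; state is lost when the process exits
memory = MemorySaver()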
@tool
def weather_search(city: str):
    """Search for the weather"""
    print("----")
    print(f"Searching for: {city}")
    print("----")
    return "Sunny!"


model = ChatAnthropic(model_name="claude-3-5-sonnet-latest").bind_tools(
    [weather_search]
)


class State(MessagesState):
    """Simple state."""


def call_llm(state):
    return {"messages": [model.invoke(state["messages"])]}
def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
    last_message = state["messages"][-1]

    # Handle Anthropic message format which uses content list with tool_use type
    tool_call = None
    if hasattr(last_message, "content") and isinstance(last_message.content, list):
        for part in last_message.content:
            if isinstance(part, dict) and part.get("type") == "tool_use":
                tool_call = {
                    "name": part.get("name"),
                    "args": part.get("input", {}),
                    "id": part.get("id"),
                    "type": "tool_call",
                }
                break

    # this is the value we'll be providing via Command(resume=<human_review>)
    human_review = interrupt(
        {
            "question": "Is this correct?",
            # Surface tool calls for review
            "tool_call": tool_call,
        }
    )

    review_action = human_review["action"]
    review_data = human_review.get("data")

    # if approved, call the tool
    if review_action == "continue":
        return Command(goto="run_tool")

    # update the AI message AND call tools
    elif review_action == "update":
        # For Anthropic format
        updated_content = []
        for part in last_message.content:
            if isinstance(part, dict) and part.get("type") == "tool_use":
                updated_part = part.copy()
                updated_part["input"] = review_data
                updated_content.append(updated_part)
            else:
                updated_content.append(part)

        updated_message = {
            "role": "assistant",
            "content": updated_content,
            "id": last_message.id,
        }
        return Command(goto="run_tool", update={"messages": [updated_message]})

    # provide feedback to LLM
    elif review_action == "feedback":
        # NOTE: we're adding feedback message as a ToolMessage
        # to preserve the correct order in the message history
        # (AI messages with tool calls need to be followed by tool messages)
        tool_message = {
            "role": "tool",
            # This is our natural language feedback
            "content": review_data,
            "name": tool_call["name"],
            "tool_call_id": tool_call["id"],
        }
        return Command(goto="call_llm", update={"messages": [tool_message]})
def run_tool(state):
    new_messages = []
    tools = {"weather_search": weather_search}

    # Handle different message formats
    last_message = state["messages"][-1]
    tool_calls = []

    # Handle Anthropic format
    if hasattr(last_message, "content") and isinstance(last_message.content, list):
        for part in last_message.content:
            if isinstance(part, dict) and part.get("type") == "tool_use":
                tool_calls.append(
                    {
                        "name": part.get("name"),
                        "args": part.get("input", {}),
                        "id": part.get("id"),
                    }
                )

    for tool_call in tool_calls:
        tool_name = tool_call["name"]
        if tool_name in tools:
            tool = tools[tool_name]
            result = tool.invoke(tool_call["args"])
            new_messages.append(
                {
                    "role": "tool",
                    "name": tool_call["name"],
                    "content": result,
                    "tool_call_id": tool_call["id"],
                }
            )
    return {"messages": new_messages}
def route_after_llm(state) -> Literal[END, "human_review_node"]:
    last_message = state["messages"][-1]

    # Check for Anthropic tool calls
    has_tool_calls = False
    if hasattr(last_message, "content") and isinstance(last_message.content, list):
        for part in last_message.content:
            if isinstance(part, dict) and part.get("type") == "tool_use":
                has_tool_calls = True
                break

    if has_tool_calls:
        return "human_review_node"
    else:
        return END
builder = StateGraph(State)
builder.add_node(call_llm)
builder.add_node(run_tool)
builder.add_node(human_review_node)
builder.add_edge(START, "call_llm")
builder.add_conditional_edges("call_llm", route_after_llm)
builder.add_edge("run_tool", "call_llm")
# Compile the graph with the checkpointer
graph = builder.compile(checkpointer=memory)

# View the graph
display(Image(graph.get_graph().draw_mermaid_png()))
## Example with no review

Let's look at an example where no review is required (because no tools are called):
!!! tip "Thread Management"
    When running examples multiple times, it's important to use unique thread IDs to avoid conflicts with previous runs.
    This is especially important when working with tool calls, as some LLM providers (like Anthropic) require proper
    message history with matching tool_use and tool_result pairs. Using uuid.uuid4() ensures each run gets a fresh state.
# Input
initial_input = {"messages": [{"role": "user", "content": "hi!"}]}
# Thread - use unique ID to avoid conflicts
thread = {"configurable": {"thread_id": str(uuid.uuid4())}}
# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="updates"):
    print(event)
    print("\n")
If we check the state, we can see that the graph has finished.
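A minimal way to verify this is to look at the `next` field of the saved state, which is empty when nothing is pending (the same check we use for pending executions below):

print(graph.get_state(thread).next)  # empty: nothing left to run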
## Example of approving tool

Let's now look at what it looks like to approve a tool call:
# Input
initial_input = {"messages": [{"role": "user", "content": "what's the weather in sf?"}]}
# Thread - use unique ID to avoid conflicts
thread = {"configurable": {"thread_id": str(uuid.uuid4())}}
# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="updates"):
    print(event)
    print("\n")
If we now check the graph state, we can see that it is waiting on human review:
print("Pending Executions!")
print(graph.get_state(thread).next)
To approve the tool call, we can just continue the thread with no edits. To do so, we need to let `human_review_node` know what value to use for the `human_review` variable we defined inside the node. We can provide this value by invoking the graph with a `Command(resume=<human_review>)` input. Since we're approving the tool call, we'll provide a resume value of `{"action": "continue"}` to navigate to the `run_tool` node:
for event in graph.stream(
    # provide value
    Command(resume={"action": "continue"}),
    thread,
    stream_mode="updates",
):
    print(event)
    print("\n")
## Edit Tool Call

Let's now say we want to edit the tool call, e.g., change some of the parameters (or even the tool called!), but then execute that tool.
# Input
initial_input = {"messages": [{"role": "user", "content": "what's the weather in sf?"}]}
# Thread - use unique ID to avoid conflicts
thread = {"configurable": {"thread_id": str(uuid.uuid4())}}
# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="updates"):
    print(event)
    print("\n")
print("Pending Executions!")
print(graph.get_state(thread).next)
To do this, we will use `Command` with a different resume value of `{"action": "update", "data": <tool call args>}`. This will do the following:

* combine the existing tool call with the user-provided tool call arguments and update the existing AI message with the new tool call
* navigate to the `run_tool` node with the updated AI message and continue execution
# Let's now continue executing from here
for event in graph.stream(
    Command(resume={"action": "update", "data": {"city": "San Francisco, USA"}}),
    thread,
    stream_mode="updates",
):
    print(event)
    print("\n")
## Give feedback to a tool call
Sometimes, you may not want to execute a tool call, but you also may not want to ask the user to manually modify it. In that case it may be better to get natural language feedback from the user. You can then insert this feedback as a mock RESULT of the tool call.

There are multiple ways to do this:

* You could add a new message to the state (representing the "result" of a tool call)
* You could add TWO new messages to the state - one representing an "error" from the tool call, the other a HumanMessage representing the feedback (a sketch of this variant follows below)

Both are similar in that they involve adding messages to the state. The main difference lies in the logic AFTER the human_review_node and how it handles different types of messages.
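As a rough sketch of the second option (not used in this guide), the "feedback" branch could return two messages instead of one. The helper below is hypothetical, assuming the same tool_call dict and review_data string that human_review_node already builds:

def feedback_as_two_messages(tool_call, review_data) -> Command:
    # Hypothetical helper: record an "error" tool result, then the
    # reviewer's feedback as a separate human message
    error_tool_message = {
        "role": "tool",
        "content": "Tool call was not executed; see user feedback.",
        "name": tool_call["name"],
        "tool_call_id": tool_call["id"],
    }
    human_feedback_message = {"role": "user", "content": review_data}
    return Command(
        goto="call_llm",
        update={"messages": [error_tool_message, human_feedback_message]},
    )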
For this example we will just add a single tool message representing the feedback (see the human_review_node implementation above). Let's see this in action!
# Input
initial_input = {"messages": [{"role": "user", "content": "what's the weather in sf?"}]}
# Thread - use unique ID to avoid conflicts
thread = {"configurable": {"thread_id": str(uuid.uuid4())}}
# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="updates"):
    print(event)
    print("\n")
print("Pending Executions!")
print(graph.get_state(thread).next)
To do this, we will use `Command` with a different resume value of `{"action": "feedback", "data": <feedback string>}`. This will do the following:

* create a new tool message that combines the existing tool call from the LLM with the user-provided feedback as content
* navigate to the `call_llm` node with the updated tool message and continue execution
# Let's now continue executing from here
for event in graph.stream(
    # provide our natural language feedback!
    Command(
        resume={
            "action": "feedback",
            "data": "User requested changes: use <city, country> format for location",
        }
    ),
    thread,
    stream_mode="updates",
):
    print(event)
    print("\n")
We can see that we now get to another interrupt - because it went back to the model and got an entirely new prediction of what to call. Let’s now approve this one and continue.
print("Pending Executions!")
print(graph.get_state(thread).next)
for event in graph.stream(
    Command(resume={"action": "continue"}), thread, stream_mode="updates"
):
    print(event)
    print("\n")
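As a final, optional check, the thread should now have nothing left pending, confirming the run completed after the approval:

# Expect an empty value: no pending nodes remain
print(graph.get_state(thread).next)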