How to migrate from legacy LangChain agents to LangGraph
Here we focus on how to move from legacy LangChain agents to LangGraph agents. LangChain agents (the AgentExecutor in particular) have multiple configuration parameters. In this notebook we will show how those parameters map to the LangGraph react agent executor using the createReactAgent prebuilt helper method.
For more information on how to build agentic workflows in LangGraph, check out the docs here.
Prerequisites
This how-to guide uses Anthropic’s "claude-3-haiku-20240307" as the LLM. If you are running this guide as a notebook, set your Anthropic API key to run it.
// process.env.ANTHROPIC_API_KEY = "sk-...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls...";
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
// process.env.LANGCHAIN_TRACING_V2 = "true";
// process.env.LANGCHAIN_PROJECT = "How to migrate: LangGraphJS";
Basic Usage
For basic creation and usage of a tool-calling ReAct-style agent, the functionality is the same. First, let’s define a model and tool(s), then we’ll use those to create an agent.
The tool function is available in @langchain/core version 0.2.7 and above. If you are on an older version of core, you should instantiate and use DynamicStructuredTool instead (see the sketch after the next code block).
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
});
const magicTool = tool(
async ({ input }: { input: number }) => {
return `${input + 2}`;
},
{
name: "magic_function",
description: "Applies a magic function to an input.",
schema: z.object({
input: z.number(),
}),
}
);
const tools = [magicTool];
const query = "what is the value of magic_function(3)?";
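As noted above, if you are on an older version of core, a minimal sketch of the same tool using DynamicStructuredTool might look like the following (the magicToolLegacy name is just for illustration):
import { DynamicStructuredTool } from "@langchain/core/tools";

// Hypothetical legacy equivalent of magicTool for older core versions.
const magicToolLegacy = new DynamicStructuredTool({
  name: "magic_function",
  description: "Applies a magic function to an input.",
  schema: z.object({
    input: z.number(),
  }),
  func: async ({ input }) => `${input + 2}`,
});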
For the LangChain AgentExecutor, we define a prompt with a placeholder for the agent’s scratchpad. The agent can be invoked as follows:
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createToolCallingAgent } from "langchain/agents";
import { AgentExecutor } from "langchain/agents";
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant"],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
const agent = createToolCallingAgent({ llm, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });
await agentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "The value of magic_function(3) is 5."
}
LangGraph’s off-the-shelf react agent executor manages a state that is defined by a list of messages. Similarly to the AgentExecutor, it will continue to process the list until there are no tool calls in the agent’s output. To kick it off, we input a list of messages. The output will contain the entire state of the graph: in this case, the conversation history and messages representing intermediate tool calls:
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { HumanMessage } from "@langchain/core/messages";
const app = createReactAgent({ llm, tools });
let agentOutput = await app.invoke({
messages: [new HumanMessage(query)],
});
console.log(agentOutput);
{
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what is the value of magic_function(3)?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what is the value of magic_function(3)?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [ [Object] ],
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
tool_calls: [ [Object] ],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [
{
type: "tool_use",
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
name: "magic_function",
input: [Object]
}
],
name: undefined,
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
response_metadata: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: [Object],
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
}
],
invalid_tool_calls: []
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
}
]
}
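Since the output contains the entire graph state, a common pattern (used throughout the rest of this guide) is to read the final response off the end of the messages array:
// The last message in the state is the agent's final answer.
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);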
const messageHistory = agentOutput.messages;
const newQuery = "Pardon?";
agentOutput = await app.invoke({
messages: [...messageHistory, new HumanMessage(newQuery)],
});
{
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what is the value of magic_function(3)?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what is the value of magic_function(3)?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [ [Object] ],
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
tool_calls: [ [Object] ],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [
{
type: "tool_use",
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
name: "magic_function",
input: [Object]
}
],
name: undefined,
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
response_metadata: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: [Object],
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
}
],
invalid_tool_calls: []
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "Pardon?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Pardon?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "I apologize for the confusion. Let me explain the steps I took to arrive at the result:\n" +
"\n" +
"1. You aske"... 52 more characters,
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_012yLSnnf1c64NWKS9K58hcN",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "I apologize for the confusion. Let me explain the steps I took to arrive at the result:\n" +
"\n" +
"1. You aske"... 52 more characters,
name: undefined,
additional_kwargs: {
id: "msg_012yLSnnf1c64NWKS9K58hcN",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 455, output_tokens: 137 }
},
response_metadata: {
id: "msg_012yLSnnf1c64NWKS9K58hcN",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 455, output_tokens: 137 }
},
tool_calls: [],
invalid_tool_calls: []
}
]
}
Prompt Templates
With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent.
With the LangGraph react agent executor, there is no prompt by default. You can achieve similar control over the agent in a few ways:
- Pass in a system message as input
- Initialize the agent with a system message
- Initialize the agent with a function to transform messages before passing them to the model
Let’s take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.
First up, using LangChain’s AgentExecutor:
const spanishPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant. Respond only in Spanish."],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
const spanishAgent = createToolCallingAgent({
llm,
tools,
prompt: spanishPrompt,
});
const spanishAgentExecutor = new AgentExecutor({
agent: spanishAgent,
tools,
});
await spanishAgentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "El valor de magic_function(3) es 5."
}
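In LangGraph, the first option, passing a system message as part of the input, requires no extra configuration. A minimal sketch reusing the app created earlier:
import { SystemMessage } from "@langchain/core/messages";

// Option 1: prepend a SystemMessage directly to the input messages.
agentOutput = await app.invoke({
  messages: [
    new SystemMessage("You are a helpful assistant. Respond only in Spanish."),
    new HumanMessage(query),
  ],
});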
Next, let’s pass a custom system message to the react agent executor. This can either be a string or a LangChain SystemMessage.
import { SystemMessage } from "@langchain/core/messages";
const systemMessage = "You are a helpful assistant. Respond only in Spanish.";
// This could also be a SystemMessage object
// const systemMessage = new SystemMessage("You are a helpful assistant. Respond only in Spanish.");
const appWithSystemMessage = createReactAgent({
llm,
tools,
messageModifier: systemMessage,
});
agentOutput = await appWithSystemMessage.invoke({
messages: [new HumanMessage(query)],
});
agentOutput.messages[agentOutput.messages.length - 1];
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "El valor de magic_function(3) es 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01P5VUYbBZoeMaReqBgqFJZa",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 444, output_tokens: 17 }
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "El valor de magic_function(3) es 5.",
name: undefined,
additional_kwargs: {
id: "msg_01P5VUYbBZoeMaReqBgqFJZa",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 444, output_tokens: 17 }
},
response_metadata: {
id: "msg_01P5VUYbBZoeMaReqBgqFJZa",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 444, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
}
We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages. We can do all types of arbitrary formatting of messages here. In this case, let’s just add a SystemMessage to the start of the list of messages.
import { BaseMessage, SystemMessage } from "@langchain/core/messages";
const modifyMessages = (messages: BaseMessage[]) => {
return [
new SystemMessage("You are a helpful assistant. Respond only in Spanish."),
...messages,
new HumanMessage("Also say 'Pandemonium!' after the answer."),
];
};
const appWithMessagesModifier = createReactAgent({
llm,
tools,
messageModifier: modifyMessages,
});
agentOutput = await appWithMessagesModifier.invoke({
messages: [new HumanMessage(query)],
});
console.log({
input: query,
output: agentOutput.messages[agentOutput.messages.length - 1].content,
});
{
input: "what is the value of magic_function(3)?",
output: "5. ¡Pandemonium!"
}
Memory
With LangChain’s AgentExecutor, you could add chat memory classes so the agent can engage in a multi-turn conversation.
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
const memory = new ChatMessageHistory();
const agentExecutorWithMemory = new RunnableWithMessageHistory({
runnable: agentExecutor,
getMessageHistory: () => memory,
inputMessagesKey: "input",
historyMessagesKey: "chat_history",
});
const config = { configurable: { sessionId: "test-session" } };
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "Hi, I'm polly! What's the output of magic_function of 3?" },
config
);
console.log(agentOutput.output);
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "Remember my name?" },
config
);
console.log("---");
console.log(agentOutput.output);
console.log("---");
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "what was that output again?" },
config
);
console.log(agentOutput.output);
The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the output is 5.
---
Okay, I remember your name is Polly.
---
So the output of the magic_function with an input of 3 is 5.
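Note that the getMessageHistory callback above ignores the session ID, so every session would share one history. A sketch of per-session histories (assuming an in-memory Map keyed by sessionId is acceptable for your use case):
// Hypothetical per-session store: one ChatMessageHistory per sessionId.
const sessionHistories = new Map<string, ChatMessageHistory>();

const agentExecutorWithSessions = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  getMessageHistory: (sessionId: string) => {
    if (!sessionHistories.has(sessionId)) {
      sessionHistories.set(sessionId, new ChatMessageHistory());
    }
    return sessionHistories.get(sessionId)!;
  },
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});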
In LangGraph
The equivalent to this type of memory in LangGraph is persistence and checkpointing. Add a checkpointer to the agent and you get chat memory for free. You’ll also need to pass a thread_id within the configurable field in the config parameter. Notice that we only pass one message into each request, but the model still has context from previous runs:
import { MemorySaver } from "@langchain/langgraph";
const memory = new MemorySaver();
const appWithMemory = createReactAgent({
llm,
tools,
checkpointSaver: memory,
});
const config = {
configurable: {
thread_id: "test-thread",
},
};
agentOutput = await appWithMemory.invoke(
{
messages: [
new HumanMessage(
"Hi, I'm polly! What's the output of magic_function of 3?"
),
],
},
config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");
agentOutput = await appWithMemory.invoke(
{
messages: [new HumanMessage("Remember my name?")],
},
config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");
agentOutput = await appWithMemory.invoke(
{
messages: [new HumanMessage("what was that output again?")],
},
config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the magic_function returns 5.
---
Ah yes, I remember your name is Polly! It's nice to meet you Polly.
---
So the magic_function returned an output of 5 for an input of 3.
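Because the checkpointer persists the full graph state per thread, you can also inspect what was saved. A sketch, assuming the compiled graph’s getState method:
// Retrieve the checkpointed state for this thread; the saved
// conversation lives under state.values.messages.
const state = await appWithMemory.getState(config);
console.log(state.values.messages.length);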
Iterating through steps
With LangChain’s AgentExecutor, you could iterate over the steps using the stream method:
const langChainStream = await agentExecutor.stream({ input: query });
for await (const step of langChainStream) {
console.log(step);
}
{
intermediateSteps: [
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "toolu_01KCJJ8kyiY5LV4RHbVPzK8v",
log: 'Invoking "magic_function" with {"input":3}\n' +
'[{"type":"tool_use","id":"toolu_01KCJJ8kyiY5LV4RHbVPzK8v"'... 46 more characters,
messageLog: [ [AIMessageChunk] ]
},
observation: "5"
}
]
}
{ output: "The value of magic_function(3) is 5." }
In LangGraph
In LangGraph, streaming intermediate steps is handled natively via the stream method.
const langGraphStream = await app.stream(
{ messages: [new HumanMessage(query)] },
{ streamMode: "updates" }
);
for await (const step of langGraphStream) {
console.log(step);
}
{
agent: {
messages: [
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [Array],
additional_kwargs: [Object],
tool_calls: [Array],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [ [Object] ],
name: undefined,
additional_kwargs: {
id: "msg_01WWYeJvJroT82QhJQZKdwSt",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
response_metadata: {
id: "msg_01WWYeJvJroT82QhJQZKdwSt",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
tool_calls: [ [Object] ],
invalid_tool_calls: []
}
]
}
}
{
tools: {
messages: [
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01X9pwxuroTWNVqiwQTL1U8C",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01X9pwxuroTWNVqiwQTL1U8C"
}
]
}
}
{
agent: {
messages: [
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: [Object],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_012kQPkxt2CrsFw4CsdfNTWr",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {
id: "msg_012kQPkxt2CrsFw4CsdfNTWr",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
tool_calls: [],
invalid_tool_calls: []
}
]
}
}
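The "updates" mode above emits only what each node changed. If you would rather see the full accumulated state after every step, LangGraph also supports streamMode: "values"; a minimal sketch:
// "values" emits the entire state (all messages so far) after each step.
const valuesStream = await app.stream(
  { messages: [new HumanMessage(query)] },
  { streamMode: "values" }
);
for await (const step of valuesStream) {
  console.log(step.messages.length);
}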
returnIntermediateSteps
Setting this parameter on AgentExecutor allows users to access intermediateSteps, which pairs agent actions (e.g., tool invocations) with their outcomes.
const agentExecutorWithIntermediateSteps = new AgentExecutor({
agent,
tools,
returnIntermediateSteps: true,
});
const result = await agentExecutorWithIntermediateSteps.invoke({
input: query,
});
console.log(result.intermediateSteps);
[
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "toolu_0126dJXbjwLC5daAScz8bw1k",
log: 'Invoking "magic_function" with {"input":3}\n' +
'[{"type":"tool_use","id":"toolu_0126dJXbjwLC5daAScz8bw1k"'... 46 more characters,
messageLog: [
AIMessageChunk {
lc_serializable: true,
lc_kwargs: [Object],
lc_namespace: [Array],
content: [Array],
name: undefined,
additional_kwargs: [Object],
response_metadata: {},
tool_calls: [Array],
invalid_tool_calls: [],
tool_call_chunks: [Array]
}
]
},
observation: "5"
}
]
By default, the react agent executor in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state.
agentOutput = await app.invoke({
messages: [new HumanMessage(query)],
});
console.log(agentOutput.messages);
[
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what is the value of magic_function(3)?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what is the value of magic_function(3)?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [
{
type: "tool_use",
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj",
name: "magic_function",
input: [Object]
}
],
additional_kwargs: {
id: "msg_01BhXyjA2PTwGC5J3JNnfAXY",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: [Object],
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj"
}
],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [
{
type: "tool_use",
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj",
name: "magic_function",
input: { input: 3 }
}
],
name: undefined,
additional_kwargs: {
id: "msg_01BhXyjA2PTwGC5J3JNnfAXY",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
response_metadata: {
id: "msg_01BhXyjA2PTwGC5J3JNnfAXY",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: { input: 3 },
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj"
}
],
invalid_tool_calls: []
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj"
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01ABtcXJ4CwMHphYYmffQZoF",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_01ABtcXJ4CwMHphYYmffQZoF",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {
id: "msg_01ABtcXJ4CwMHphYYmffQZoF",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
}
]
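If you only want the intermediate tool results out of that state, a sketch that filters by message type (using the _getType helper on messages) works:
// Keep only the ToolMessages, i.e. the intermediate tool results.
const toolSteps = agentOutput.messages.filter(
  (message) => message._getType() === "tool"
);
console.log(toolSteps);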
maxIterations
AgentExecutor implements a maxIterations parameter, whereas this is controlled via recursionLimit in LangGraph.
Note that in the LangChain AgentExecutor, an “iteration” includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results.
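For example, the maxIterations: 2 used below corresponds to recursionLimit: 2 * 2 + 1 = 5, which is exactly what the final code block in this section sets.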
If the recursion limit is reached, LangGraph raises a specific exception type, GraphRecursionError, that we can catch and manage similarly to AgentExecutor.
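// A deliberately broken tool: it always reports an error, so the agent
// keeps looping until an iteration or recursion limit is hit.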
const badMagicTool = tool(
async ({ input }) => {
return "Sorry, there was an error. Please try again.";
},
{
name: "magic_function",
description: "Applies a magic function to an input.",
schema: z.object({
input: z.string(),
}),
}
);
const badTools = [badMagicTool];
const spanishAgentExecutorWithMaxIterations = new AgentExecutor({
agent: createToolCallingAgent({
llm,
tools: badTools,
prompt: spanishPrompt,
}),
tools: badTools,
verbose: true,
maxIterations: 2,
});
await spanishAgentExecutorWithMaxIterations.invoke({ input: query });
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
"input": "what is the value of magic_function(3)?"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] Entering Chain run with input: {
"input": "what is the value of magic_function(3)?",
"steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] Entering Chain run with input: {
"input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] Entering Chain run with input: {
"input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] Entering Chain run with input: {
"input": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] [0ms] Exiting Chain run with output: {
"output": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] [1ms] Exiting Chain run with output: {
"agent_scratchpad": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] [1ms] Exiting Chain run with output: {
"input": "what is the value of magic_function(3)?",
"steps": [],
"agent_scratchpad": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] Entering Chain run with input: {
"input": "what is the value of magic_function(3)?",
"steps": [],
"agent_scratchpad": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] [0ms] Exiting Chain run with output: {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompt_values",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "You are a helpful assistant. Respond only in Spanish.",
"additional_kwargs": {},
"response_metadata": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
}
}
]
}
}
[llm/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] Entering LLM run with input: {
"messages": [
[
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "You are a helpful assistant. Respond only in Spanish.",
"additional_kwargs": {},
"response_metadata": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
}
}
]
]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] [1.56s] Exiting LLM run with output: {
"generations": [
[
{
"text": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessageChunk"
],
"kwargs": {
"content": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.",
"additional_kwargs": {
"id": "msg_011b4GnLtiCRnCzZiqUBAZeH",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 378,
"output_tokens": 59
}
},
"tool_call_chunks": [],
"tool_calls": [],
"invalid_tool_calls": [],
"response_metadata": {}
}
}
}
]
]
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] Entering Chain run with input: {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessageChunk"
],
"kwargs": {
"content": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.",
"additional_kwargs": {
"id": "msg_011b4GnLtiCRnCzZiqUBAZeH",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 378,
"output_tokens": 59
}
},
"tool_call_chunks": [],
"tool_calls": [],
"invalid_tool_calls": [],
"response_metadata": {}
}
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] [0ms] Exiting Chain run with output: {
"returnValues": {
"output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
},
"log": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] [1.56s] Exiting Chain run with output: {
"returnValues": {
"output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
},
"log": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
}
[chain/end] [1:chain:AgentExecutor] [1.56s] Exiting Chain run with output: {
"input": "what is the value of magic_function(3)?",
"output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
}
{
input: "what is the value of magic_function(3)?",
output: 'Lo siento, pero la función "magic_function" espera un parámetro de tipo "string", no un número enter'... 103 more characters
}
import { GraphRecursionError } from "@langchain/langgraph";
const RECURSION_LIMIT = 2 * 2 + 1;
const appWithBadTools = createReactAgent({ llm, tools: badTools });
try {
await appWithBadTools.invoke(
{
messages: [new HumanMessage(query)],
},
{
recursionLimit: RECURSION_LIMIT,
}
);
} catch (e) {
if (e instanceof GraphRecursionError) {
console.log("Recursion limit reached.");
} else {
throw e;
}
}
Recursion limit reached.