
Plan-and-Execute Agents


https://blog.langchain.com/planning-agents/

Over the past year, language-model-powered agents and state machines have emerged as a promising design pattern for creating flexible and effective AI-powered products.

At their core, agents use LLMs as general-purpose problem-solvers, connecting them with external resources to answer questions or accomplish tasks.

LLM agents typically have the following main steps:

  1. Propose action: the LLM generates text to respond directly to a user or to pass to a function.
  2. Execute action: your code invokes other software to do things like query a database or call an API.
  3. Observe: react to the response of the tool call by either calling another function or responding to the user.
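The three steps above can be sketched in plain Python with the model stubbed out. Everything here is an illustrative assumption (the `fake_llm` function, the `TOOLS` registry, and the message format are not a real LangChain API); the point is only the propose/execute/observe control flow:

```python
# Minimal sketch of the propose/execute/observe loop.
# The LLM is stubbed so the control flow is runnable on its own.

TOOLS = {"search": lambda q: "The current score is 24-21"}

def fake_llm(messages):
    # A real agent would call a chat model here; we hard-code one
    # tool call followed by a final answer once a tool result exists.
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "content": "Team X leads 24-21."}
    return {"type": "tool_call", "tool": "search",
            "args": "What is the current score of game X?"}

def run_agent(question, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        action = fake_llm(messages)                       # 1. propose action
        if action["type"] == "final":
            return action["content"]                      # respond to the user
        result = TOOLS[action["tool"]](action["args"])    # 2. execute action
        messages.append({"role": "tool", "content": result})  # 3. observe
    return "Step limit reached."

print(run_agent("What is the score of game X?"))
```

Note that each pass through the loop is one LLM call, which is exactly the cost the planning architectures below try to reduce.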

The ReAct agent is a great prototypical design for this, as it prompts the language model using a repeated thought, act, observation loop:

Thought: I should call Search() to see the current score of the game.
Act: Search("What is the current score of game X?")
Observation: The current score is 24-21
... (repeat N times)

A typical ReAct-style agent trajectory.

This takes advantage of chain-of-thought prompting to make a single action choice per step. While this can be effective for simple tasks, it has a couple of main downsides:

  1. It requires an LLM call for each tool invocation.
  2. The LLM only plans for 1 sub-problem at a time. This may lead to sub-optimal trajectories, since it isn't forced to "reason" about the whole task.

One way to overcome these two shortcomings is through an explicit planning step. Below are two such designs we have implemented in LangGraph.

Plan-And-Execute

🔗 Python Link

🔗 JS Link

Plan-and-execute Agent

Based loosely on Wang et al.'s paper on Plan-and-Solve Prompting and Yohei Nakajima's BabyAGI project, this simple architecture is emblematic of the planning-agent approach. It consists of two basic components:

  1. A planner, which prompts an LLM to generate a multi-step plan to complete a large task.
  2. Executor(s), which accept the user query and a step in the plan and invoke one or more tools to complete that task.

Once execution is completed, the agent is called again with a re-planning prompt, letting it decide whether to finish with a response or whether to generate a follow-up plan (if the first plan didn’t have the desired effect).

This agent design lets us avoid calling the large planner LLM for each tool invocation. However, it is still restricted to sequential tool calling, and it still uses an LLM call for each task, since the plan format doesn't support variable assignment to pass results between steps.
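The planner/executor/re-planner flow described above can be sketched as follows. The planner, executor, and replanner bodies are stubbed with canned answers (illustrative assumptions, not real LLM calls); in LangGraph each would be a graph node backed by a model, but the orchestration loop is the part this sketch demonstrates:

```python
# Sketch of the plan-and-execute control flow with stubbed components.

def plan(task):
    # Planner LLM: one up-front multi-step plan for the whole task.
    return ["look up the 2024 Australian Open winner",
            "look up that player's hometown"]

def execute(task, step):
    # Executor: a small tool-calling agent solving a single plan step.
    answers = {
        "look up the 2024 Australian Open winner": "Jannik Sinner",
        "look up that player's hometown": "Sexten, Italy",
    }
    return answers[step]

def replan(task, results):
    # Replanner LLM: either finish with a response or emit follow-up steps.
    if len(results) >= 2:
        return {"respond": results[-1]}
    return {"steps": []}  # no extra steps needed in this toy run

def plan_and_execute(task):
    steps, results = plan(task), []
    while steps:
        step = steps.pop(0)
        results.append(execute(task, step))
        outcome = replan(task, results)
        if "respond" in outcome:
            return outcome["respond"]
        steps.extend(outcome["steps"])
    return results[-1]

print(plan_and_execute("Where is the 2024 Australian Open winner from?"))
```

Only `plan` and `replan` would hit the large model; each `execute` step can use a smaller tool-calling agent, which is where the cost saving comes from.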

 

REF

https://www.bilibili.com/video/BV1vJ4m1s7Zn?spm_id_from=333.788.videopod.sections&vd_source=57e261300f39bf692de396b55bf8c41b

https://www.bilibili.com/video/BV1qa43z5EBJ/?spm_id_from=333.337.search-card.all.click&vd_source=57e261300f39bf692de396b55bf8c41b

https://github.com/MehdiRezvandehy/Multi-Step-Plan-and-Execute-Agents-with-LangGraph/blob/main/langgraph_plan_execute.ipynb

https://github.com/fanqingsong/langgraph-plan-and-react-agent

https://github.com/fanqingsong/plan-execute-langgraph

 

LANGCHAIN MEMORY

https://www.cnblogs.com/mangod/p/18243321

https://reference.langchain.com/python/langchain_core/prompts/#langchain_core.prompts.chat.ChatPromptTemplate

https://langchain-tutorials.com/lessons/langchain-essentials/lesson-6

from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

memory = ConversationBufferMemory(return_messages=True)
memory.load_memory_variables({})
memory.save_context({"input": "My name is Zhang San"}, {"output": "Hello, Zhang San"})
memory.load_memory_variables({})
memory.save_context({"input": "I am an IT programmer"}, {"output": "OK, got it"})
memory.load_memory_variables({})

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{user_input}"),
])
chain = prompt | model  # `model` is any chat model instance, e.g. ChatOpenAI()

user_input = "Do you know my name?"
history = memory.load_memory_variables({})["history"]
chain.invoke({"user_input": user_input, "history": history})

user_input = "What is the highest mountain in China?"
res = chain.invoke({"user_input": user_input, "history": history})
memory.save_context({"input": user_input}, {"output": res.content})

# Reload history so the turn just saved is visible to the next call.
history = memory.load_memory_variables({})["history"]
res = chain.invoke({"user_input": "What was the last question we discussed?", "history": history})
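At its core, ConversationBufferMemory is just a growing list of human/AI message pairs that gets replayed into the prompt at the `MessagesPlaceholder` slot on every turn. A dependency-free sketch of that behavior (the `BufferMemory` class below is an illustrative stand-in, not the LangChain class):

```python
# Dependency-free sketch of what ConversationBufferMemory does:
# store each turn, then replay the whole history into the next prompt.

class BufferMemory:
    def __init__(self):
        self.history = []  # list of (role, content) tuples

    def save_context(self, inputs, outputs):
        self.history.append(("human", inputs["input"]))
        self.history.append(("ai", outputs["output"]))

    def load_memory_variables(self):
        return {"history": list(self.history)}

memory = BufferMemory()
memory.save_context({"input": "My name is Zhang San"}, {"output": "Hello, Zhang San"})
memory.save_context({"input": "I am an IT programmer"}, {"output": "OK, noted"})

# A prompt template would splice this list in at the
# MessagesPlaceholder(variable_name="history") slot.
print(memory.load_memory_variables()["history"])
```

This also makes the buffer's main limitation visible: the history grows without bound, which is why LangChain also offers windowed and summarizing memory variants.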

 

https://docs.langchain.com/oss/python/langchain/messages

 
