
LLM Agentic Memory Systems


https://kickitlikeshika.github.io/2025/03/22/agentic-memory.html#1-working-memory

 

Introduction

Current AI systems, particularly those built around Large Language Models (LLMs), face a fundamental limitation: they lack true memory. While they can process information provided in their immediate context, they cannot naturally accumulate experiences over time. This creates several problems:

  1. Contextual Amnesia: Agents forget previous interactions with users, forcing repetitive explanations
  2. Inability to Learn: Without recording successes and failures, agents repeat the same mistakes
  3. Personalization Gaps: Agents struggle to adapt to individual users’ preferences and needs over time
  4. Efficiency Barriers: Valuable insights from past interactions are lost, requiring “reinvention of the wheel”

To address these limitations, we need to equip our AI agents with memory systems that capture not just what was said, but what was learned.

Types of Agentic Memory

Agentic memory systems can be categorized into several distinct types, each serving different purposes in enhancing agent capabilities:

1. Working Memory

Working memory represents the short-term, immediate context an agent uses for the current task. It’s analogous to human short-term memory or a computer’s RAM.

Characteristics:

  • Temporarily holds information needed for the current conversation
  • Limited in size due to context window constraints
  • Cleared or reset between different sessions or tasks

Example: When a user asks a series of related questions about a topic, working memory helps the agent maintain coherence throughout that specific conversation without requiring repetition of context.
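As a rough sketch (not tied to any particular framework), working memory can be modeled as a bounded buffer of recent conversation turns; the `WorkingMemory` class and its turn budget below are illustrative assumptions:

```python
from collections import deque

class WorkingMemory:
    """Holds the most recent turns of the current conversation,
    dropping the oldest once a size budget is exceeded."""

    def __init__(self, max_turns: int = 4):
        # deque evicts the oldest entry automatically when full,
        # mimicking a bounded context window
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, content: str):
        self.turns.append({"role": role, "content": content})

    def context(self):
        # The message list for the next model call: only what still fits
        return list(self.turns)

    def reset(self):
        # Cleared between sessions or tasks, like RAM between programs
        self.turns.clear()

wm = WorkingMemory(max_turns=4)
for i in range(6):
    wm.add("user", f"question {i}")
print(len(wm.context()))  # → 4, the oldest two turns were evicted
```

In a real agent the eviction policy is usually token-based rather than turn-based, but the principle is the same: working memory is small, current, and disposable.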

2. Episodic Memory

Episodic memory stores specific interactions or “episodes” that the agent has experienced. These are concrete, instance-based memories of conversations, including what was discussed and how the interaction unfolded.

Characteristics:

  • Records complete or summarized conversations
  • Includes metadata about the interaction (time, user, topic)
  • Searchable by semantic similarity to current context
  • Contains information about what worked well and what didn’t

Example: An agent remembers that when discussing transformers with a particular user last week, visual explanations were particularly effective, while mathematical formulas caused confusion.
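The characteristics above can be sketched as a small episode store; this is an illustrative toy (word-overlap scoring stands in for the embedding-based semantic search a production system would use):

```python
import time

class EpisodicMemory:
    """Stores individual interaction episodes with metadata and
    retrieves them by similarity to the current context."""

    def __init__(self):
        self.episodes = []

    def record(self, user: str, topic: str, summary: str, feedback: str):
        self.episodes.append({
            "time": time.time(),        # metadata about the interaction
            "user": user,
            "topic": topic,
            "summary": summary,
            "feedback": feedback,       # what worked well and what didn't
        })

    def recall(self, query: str, user: str, k: int = 1):
        # Toy similarity: word overlap between query and episode text.
        # A real system would compare embedding vectors instead.
        def score(ep):
            q = set(query.lower().split())
            doc = set((ep["topic"] + " " + ep["summary"]).lower().split())
            return len(q & doc)
        candidates = [ep for ep in self.episodes if ep["user"] == user]
        return sorted(candidates, key=score, reverse=True)[:k]

mem = EpisodicMemory()
mem.record("alice", "transformers",
           "explained attention with diagrams",
           "visuals worked; formulas caused confusion")
mem.record("alice", "databases", "walked through SQL joins", "examples worked")
best = mem.recall("how do transformers work", user="alice")[0]
print(best["feedback"])  # → visuals worked; formulas caused confusion
```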

3. Semantic Memory

Semantic memory stores general knowledge extracted from experiences, rather than the experiences themselves. It represents the “lessons learned” across many interactions.

Characteristics:

  • Abstracts patterns across multiple episodes
  • Represents generalized knowledge rather than specific instances
  • Often organized in structured forms (rules, principles, facts)
  • Evolves over time as more experiences accumulate

Example: After numerous interactions explaining technical concepts, an agent develops general principles about how to adapt explanations based on the user’s background.
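One hypothetical way to distill semantic memory from episodes: count which tactics succeeded across many interactions and keep only those with enough support. The episode schema and `min_support` threshold here are assumptions for illustration:

```python
from collections import Counter

def distill_principles(episodes, min_support=2):
    """Abstract generalized rules from many episodes: a tactic that
    succeeded repeatedly becomes a principle; specific instances are dropped."""
    successes = Counter()
    for ep in episodes:
        for tactic, worked in ep["tactics"].items():
            if worked:
                successes[tactic] += 1
    # Only patterns seen across multiple episodes graduate to semantic memory
    return [f"prefer {t}" for t, n in successes.items() if n >= min_support]

episodes = [
    {"user": "alice", "tactics": {"visual explanation": True, "heavy math": False}},
    {"user": "bob",   "tactics": {"visual explanation": True, "code example": True}},
    {"user": "carol", "tactics": {"visual explanation": True, "code example": True}},
]
print(distill_principles(episodes))
```

Because the rules are recomputed as episodes accumulate, the semantic memory evolves over time, matching the characteristic above.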

4. Procedural Memory

Procedural memory captures how to perform actions or processes. For AI agents, this translates to remembering effective strategies for solving problems.

Characteristics:

  • Stores successful action sequences and approaches
  • Focuses on “how” rather than “what”
  • Can be applied across different but similar situations

Example: An agent remembers the effective sequence of steps for debugging code issues, starting with checking syntax, then examining logic, and finally testing edge cases.
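A minimal sketch of procedural memory as a lookup of named action sequences (the `ProceduralMemory` class and task-type keys are hypothetical):

```python
class ProceduralMemory:
    """Stores named action sequences ("how"), retrievable for reuse
    on different but similar tasks."""

    def __init__(self):
        self.procedures = {}

    def store(self, task_type: str, steps: list[str]):
        self.procedures[task_type] = steps

    def retrieve(self, task_type: str) -> list[str]:
        # Returns an empty plan when no strategy has been learned yet
        return self.procedures.get(task_type, [])

pm = ProceduralMemory()
pm.store("debug_code", ["check syntax", "examine logic", "test edge cases"])
# Later, facing a similar debugging task, the agent replays the strategy
for step in pm.retrieve("debug_code"):
    print(step)
```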

 

 

mem0

https://docs.mem0.ai/examples/personal-travel-assistant

https://github.com/mem0ai/mem0/tree/main

import os

from openai import OpenAI
from mem0 import Memory

# Set the OpenAI API key
os.environ['OPENAI_API_KEY'] = "sk-xxx"

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-large"}
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "embedding_model_dims": 3072,
        }
    },
    "version": "v1.1",
}


class PersonalTravelAssistant:
    def __init__(self):
        self.client = OpenAI()
        self.memory = Memory.from_config(config)
        self.messages = [{"role": "system", "content": "You are a personal AI Assistant."}]

    def ask_question(self, question, user_id):
        # Fetch previous related memories
        previous_memories = self.search_memories(question, user_id=user_id)

        # Build the prompt
        system_message = "You are a personal AI Assistant."
        if previous_memories:
            prompt = f"{system_message}\n\nUser input: {question}\nPrevious memories: {', '.join(previous_memories)}"
        else:
            prompt = f"{system_message}\n\nUser input: {question}"

        # Generate response using the Responses API
        response = self.client.responses.create(
            model="gpt-4o",
            input=prompt
        )

        # Extract the answer from the response
        answer = response.output[0].content[0].text

        # Store the question in memory
        self.memory.add(question, user_id=user_id)
        return answer

    def get_memories(self, user_id):
        memories = self.memory.get_all(user_id=user_id)
        return [m['memory'] for m in memories['results']]

    def search_memories(self, query, user_id):
        memories = self.memory.search(query, user_id=user_id)
        return [m['memory'] for m in memories['results']]


# Usage example
user_id = "traveler_123"
ai_assistant = PersonalTravelAssistant()


def main():
    while True:
        question = input("Question: ")
        if question.lower() in ['q', 'exit']:
            print("Exiting...")
            break

        answer = ai_assistant.ask_question(question, user_id=user_id)
        print(f"Answer: {answer}")

        memories = ai_assistant.get_memories(user_id=user_id)
        print("Memories:")
        for memory in memories:
            print(f"- {memory}")
        print("-----")


if __name__ == "__main__":
    main()

 

Implementation

Use a vector database, and construct memory prompts or preference-bearing memory entries and store them.
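The idea above can be sketched end to end with a toy vector store: embed each memory, retrieve by cosine similarity, and fold the hits into a memory prompt. Everything here (`TinyVectorMemory`, the bag-of-words `bow_embed` standing in for a real embedding model) is illustrative, not from the linked notebook:

```python
from collections import Counter
import math

def bow_embed(text):
    # Stand-in for a real embedding model: bag-of-words counts
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorMemory:
    def __init__(self, embed_fn):
        self.embed = embed_fn
        self.items = []  # (vector, text) pairs

    def add(self, text):
        self.items.append((self.embed(text), text))

    def search(self, query, k=2):
        q = self.embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def memory_prompt(store, question):
    # Retrieved memories are placed ahead of the user input in the prompt
    lines = "\n".join(f"- {m}" for m in store.search(question))
    return f"Previous memories:\n{lines}\n\nUser input: {question}"

store = TinyVectorMemory(bow_embed)
store.add("user prefers window seats on flights")
store.add("user is vegetarian")
print(store.search("does the user prefer window or aisle seats?", k=1))
```

A real implementation replaces `bow_embed` with an embedding model and `TinyVectorMemory` with a vector database such as Qdrant, as in the mem0 config above.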

https://github.com/KickItLikeShika/agentic-memory/blob/main/agentic-memory.ipynb

 

A ReAct prompt is a form of procedural prompting

https://zhuanlan.zhihu.com/p/1931154686532105460

 

Memory as a tool

https://python.langchain.com/docs/versions/migrating_memory/long_term_memory_agent/

 

Summarization

https://github.com/fanqingsong/langgraph-memory-example

from typing import Literal

from langchain_core.messages import SystemMessage, HumanMessage, RemoveMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import START, StateGraph, MessagesState, END
from langgraph.prebuilt import ToolNode
from langchain_community.tools.tavily_search import TavilySearchResults

tavily_tool = TavilySearchResults(max_results=10)
tools = [tavily_tool]

# Define LLM with bound tools
llm = ChatOpenAI(model="gpt-4o")
llm_with_tools = llm.bind_tools(tools)
model = llm_with_tools


# State class to store messages and summary
class State(MessagesState):
    summary: str


# Define the logic to call the model
def call_model(state: State):
    # Get summary if it exists
    summary = state.get("summary", "")

    # If there is a summary, then we add it to the messages
    if summary:
        # Put the summary in a system message
        system_message = f"Summary of conversation earlier: {summary}"
        # Prepend the summary to any newer messages
        messages = [SystemMessage(content=system_message)] + state["messages"]
    else:
        messages = state["messages"]

    response = model.invoke(messages)
    return {"messages": response}


# Custom routing function from assistant
def route_assistant(state: State) -> Literal["tools", "summarize_conversation", "__end__"]:
    """Route from assistant based on tool calls and message count."""
    messages = state["messages"]
    last_message = messages[-1]

    # Check if the assistant called any tools
    if hasattr(last_message, "tool_calls") and len(last_message.tool_calls) > 0:
        return "tools"

    # No tools called - check if we should summarize
    if len(messages) > 6:
        return "summarize_conversation"

    # Otherwise end
    return END


# Routing function after tools
def route_after_tools(state: State) -> Literal["summarize_conversation", "assistant"]:
    """Route after tools execution."""
    messages = state["messages"]

    # If there are more than six messages, summarize
    if len(messages) > 6:
        return "summarize_conversation"

    # Otherwise go back to assistant
    return "assistant"


def summarize_conversation(state: State):
    # First get the summary if it exists
    summary = state.get("summary", "")

    # Create our summarization prompt
    if summary:
        # If a summary already exists, add it to the prompt
        summary_message = (
            f"This is a summary of the conversation to date: {summary}\n\n"
            "Extend the summary by taking into account the new messages above:"
        )
    else:
        # If no summary exists, just create a new one
        summary_message = "Create a summary of the conversation above:"

    # Add the prompt to our history
    messages = state["messages"] + [HumanMessage(content=summary_message)]
    response = model.invoke(messages)

    # Delete all but the 2 most recent messages and add our summary to the state
    delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
    return {"summary": response.content, "messages": delete_messages}


# Build graph
builder = StateGraph(State)  # Note: using State instead of MessagesState
builder.add_node("assistant", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_node("summarize_conversation", summarize_conversation)

# Add edges
builder.add_edge(START, "assistant")
# Route from assistant based on tool calls and message count
builder.add_conditional_edges("assistant", route_assistant)
# After tools, check if we should summarize or go back to assistant
builder.add_conditional_edges("tools", route_after_tools)
# After summarization, go back to assistant
builder.add_edge("summarize_conversation", "assistant")

# Compile graph
graph = builder.compile()

 

mem0 integration

import os

from langchain_core.messages import SystemMessage, RemoveMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import START, StateGraph, MessagesState, END
from langgraph.prebuilt import tools_condition, ToolNode
from langchain_community.tools.tavily_search import TavilySearchResults
from mem0 import MemoryClient

mem0 = MemoryClient(api_key=os.getenv("MEM0_API_KEY"))

tavily_tool = TavilySearchResults(max_results=10)
tools = [tavily_tool]

# Define LLM with bound tools
llm = ChatOpenAI(model="gpt-4o")
llm_with_tools = llm.bind_tools(tools)
model = llm_with_tools


class State(MessagesState):
    mem0_user_id: str


# Define the logic to call the model
def call_model(state: State):
    messages = state["messages"]
    user_id = state.get("mem0_user_id", "default_user_1")

    # Get only the last message (current user input)
    current_message = messages[-1]

    # Retrieve relevant memories based on the current message
    memories = mem0.search(current_message.content, user_id=user_id)

    context = "Relevant information from previous conversations:\n"
    for memory in memories:
        context += f"- {memory['memory']}\n"

    system_message = SystemMessage(content=f"""You are a helpful Assistant. Use the provided context to personalize your responses and remember user preferences and past interactions.
{context}""")

    # Only send system message + current user message to the LLM (no history)
    full_messages = [system_message, current_message]
    response = llm_with_tools.invoke(full_messages)

    # Store the interaction in Mem0 - use a list of messages
    mem0.add(
        messages=[
            {"role": "user", "content": current_message.content},
            {"role": "assistant", "content": response.content},
        ],
        user_id=user_id,
    )

    # Clear all previous messages except the current response
    # so we don't have to keep all the messages in the context
    messages_to_delete = [RemoveMessage(id=m.id) for m in messages]
    return {"messages": messages_to_delete + [response]}


# Build graph over State so mem0_user_id is part of the graph state
builder = StateGraph(State)
builder.add_node("assistant", call_model)
builder.add_node("tools", ToolNode(tools))

# Add edges
builder.add_edge(START, "assistant")
# Use tools_condition to route from assistant
builder.add_conditional_edges(
    "assistant",
    tools_condition,
    {
        "tools": "tools",  # If tools are called, go to tools
        END: END,          # If no tools, end
    }
)
# After tools execution, go back to assistant
builder.add_edge("tools", "assistant")

# Compile graph
graph = builder.compile()

 

https://github.com/fanqingsong/langgraph-mem0-agent

 

