
Building AI Agents In Action : Architectures, Algorithms, and Source Code Using LangGraph, FastAPI

Here is a comprehensive structure and content draft for the book “Building AI Agents In Action”. This manuscript focuses on the practical implementation of autonomous agents using the modern stack of LangGraph, FastAPI, Vue.js, and Docker.


Building AI Agents In Action : Architectures, Algorithms, and Source Code Using LangGraph, FastAPI, Vue, Docker

Author: [Photon AI]
Format: Technical / Educational


Table of Contents

Part I: The Architecture of Autonomy

  1. Introduction to Agentic Workflows: From Chatbots to Agents.
  2. The Tech Stack Demystified: Why LangGraph, FastAPI, and Vue?
  3. Designing Agentic Brains: State Machines vs. DAGs.

Part II: Backend & The Brain (LangGraph & FastAPI)
  4. Foundations of LangGraph: Nodes, Edges, and State.
  5. Building the Toolkit: Shell, File Ops, and Web Search.
  6. Browser Automation: Integrating browser-use for Web Agents.
  7. Creating the API: FastAPI for Real-Time Agent Streaming.
  8. Memory and Persistence: Checkpointing and RAG.

Part III: Frontend & Interaction (Vue.js)
  9. Visualizing Thought: Building a Streaming UI in Vue 3.
  10. Controlling the Agent: Human-in-the-Loop Interfaces.

Part IV: Deployment & Infrastructure (Docker)
  11. Sandboxing for Safety: Dockerizing Tool Execution.
  12. Production-Grade Deployment: Multi-Stage Builds and Orchestration.
  13. Security: Guardrails and Sandboxes in Production.


Chapter 1: Introduction to Agentic Workflows

(Excerpt)

The era of simple “prompt-response” AI is ending. We are entering the age of Agentic Workflows: systems that don’t just generate text but plan, reason, use tools, and execute code to achieve complex goals.

In this book, we move beyond theory. We will build a robust system capable of interacting with a file system, browsing the web autonomously, and executing shell commands—all wrapped in a secure Docker container and presented through a reactive Vue.js frontend.

The Modern Agent Stack

  • Orchestration: LangGraph. Unlike sequential chains, LangGraph allows cyclic graphs, enabling agents to loop, retry, and self-correct.
  • Backend: FastAPI. High performance, native async support, and a natural fit for Server-Sent Events (SSE) when streaming agent thoughts.
  • Frontend: Vue.js. Reactivity is key when visualizing an agent’s step-by-step reasoning.
  • Infrastructure: Docker. The safest practical way to run an agent with shell and file access permissions.

Chapter 4: Foundations of LangGraph

(Source Code Focus)

The core of our agent is the graph. In LangGraph, we define a State that circulates between Nodes.
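Before reaching for the library, it helps to see the core idea in plain Python: state is a dict that each node reads and returns a partial update to, and an edge decides which node runs next. A dependency-free sketch (the node names and routing are illustrative, not LangGraph API):

```python
# Minimal sketch of "state circulates between nodes": each node returns a
# partial update that is merged into the state; the "next_action" key plays
# the role of an edge, choosing the next node to run.

def plan(state):
    return {"next_action": "search" if "?" in state["question"] else "answer"}

def search(state):
    return {"evidence": f"results for: {state['question']}", "next_action": "answer"}

def answer(state):
    return {"answer": f"Based on {state.get('evidence', 'prior knowledge')}."}

NODES = {"plan": plan, "search": search, "answer": answer}

def run(state, entry="plan"):
    node = entry
    while node:
        state = {**state, **NODES[node](state)}   # merge the node's partial update
        node = state.pop("next_action", None)     # edge: decide where to go next
    return state

final = run({"question": "What is LangGraph?"})
```

LangGraph formalizes exactly these pieces: the dict becomes a typed `AgentState`, the functions become nodes, and the routing becomes conditional edges.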

Defining the Agent State

First, we define the data structure that our agent will pass around and update.

```python
# agent/state.py
from typing import Annotated, TypedDict, Sequence

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    # The add_messages function automatically handles message history
    messages: Annotated[Sequence[BaseMessage], add_messages]
    # Specific fields for tool execution tracking
    next_action: str
    user_intent: str
```

The Graph Architecture

We will build a “Supervisor” pattern where an LLM decides whether to call a tool (Search, Shell, Browser) or finish.
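The routing decision at the heart of the supervisor is small: inspect the last message and see whether the model requested a tool. A sketch of the `should_continue` logic, using a plain dict as a hypothetical stand-in for LangChain's message objects:

```python
# Sketch of the supervisor's routing function. In the real graph this would
# inspect the last AIMessage's tool_calls; here a plain dict stands in for it.

def should_continue(state):
    last = state["messages"][-1]
    # If the model requested one or more tools, route to the tool node;
    # otherwise the answer is final and the graph should end.
    return "continue" if last.get("tool_calls") else "end"
```

The string it returns ("continue" or "end") is exactly the key looked up in the conditional-edge mapping below.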

```python
# agent/graph.py
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI

from .state import AgentState
from .nodes import call_model, tool_node, should_continue

def create_graph():
    workflow = StateGraph(AgentState)

    # Initialize the LLM
    model = ChatOpenAI(model="gpt-4o", temperature=0)

    # Define Nodes
    workflow.add_node("agent", call_model)
    workflow.add_node("tools", tool_node)

    # Define Entry Point
    workflow.set_entry_point("agent")

    # Define Conditional Edges (the "brain")
    # After the agent acts, decide: do we stop, or call a tool?
    workflow.add_conditional_edges(
        "agent",
        should_continue,
        {
            "continue": "tools",
            "end": END,
        },
    )

    # Define Normal Edges
    # After a tool is used, go back to the agent to observe the result
    workflow.add_edge("tools", "agent")

    return workflow.compile()
```

Chapter 6: Browser Automation

(Integrating browser-use)

One of the most powerful capabilities of a modern agent is the ability to “see” and “click” the web. We will integrate the browser-use library as a LangChain tool.

The Browser Tool Wrapper

We need a safe wrapper that executes browser actions within a controlled headless instance.

```python
# tools/browser_tool.py
import asyncio

from langchain_core.tools import tool
from browser_use import Agent as BrowserAgent

@tool
def browse_website(url: str, objective: str) -> str:
    """Navigates to a URL and performs an objective using the browser.

    Args:
        url: The website URL.
        objective: What to achieve (e.g., "Find the price of the iPhone 15").
    """
    async def _run():
        # Initialize the browser agent, folding the target URL into the task
        agent = BrowserAgent(
            task=f"Go to {url}. {objective}",
            browser_config={"headless": True},  # Server-friendly
        )
        # Note: In production, manage the browser context lifecycle better
        result = await agent.run()
        return result.extracted_content or "Task completed, but no text extracted."

    try:
        # Run the async browser task in a sync context
        return asyncio.run(_run())
    except Exception as e:
        return f"Browser failed: {e}"
```

Chapter 7: Creating the API with FastAPI

(Streaming the Thoughts)

Agents take time, and users don’t want to stare at a black box for ten seconds. We must stream the tokens and intermediate steps back to the frontend using Server-Sent Events (SSE).
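The SSE wire format itself is just text: each event is a `data:` line terminated by a blank line, and the browser's `EventSource` fires `onmessage` once per frame. A stdlib-only sketch of framing a payload:

```python
import json

def sse_frame(payload: dict) -> str:
    """Serialize a payload as a single Server-Sent Events frame.

    A frame is a `data:` line followed by a blank line; the trailing
    "\n\n" is what delimits one event from the next on the wire.
    """
    return f"data: {json.dumps(payload)}\n\n"

frame = sse_frame({"type": "step", "node": "agent"})
```

The endpoint below emits exactly this shape for every graph step, plus a final `data: [DONE]` sentinel.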

The Streaming Endpoint

```python
# api/main.py
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

from .graph import create_graph

app = FastAPI()
graph = create_graph()

class UserRequest(BaseModel):
    message: str
    thread_id: str

@app.post("/chat")
async def chat_endpoint(request: UserRequest):
    config = {"configurable": {"thread_id": request.thread_id}}
    inputs = {"messages": [("user", request.message)]}

    async def event_generator():
        try:
            # Stream the graph execution
            async for event in graph.astream(inputs, config):
                # Parse different types of events (node execution, LLM tokens)
                for node_name, node_output in event.items():
                    if node_name != "__end__":
                        # Send JSON updates to the frontend
                        payload = {"type": "step", "node": node_name, "output": str(node_output)}
                        yield f"data: {json.dumps(payload)}\n\n"
        except Exception as e:
            yield f"data: {json.dumps({'type': 'error', 'message': str(e)})}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_generator(), media_type="text/event-stream")
```
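On the consuming side, decoding this stream is the mirror image: split on blank lines, strip the `data:` prefix, and stop at the sentinel. A sketch of the parsing the Vue client performs, shown here in Python:

```python
import json

def parse_sse(raw: str) -> list:
    """Decode a buffered SSE stream into a list of JSON events.

    Frames are separated by blank lines; the terminal "[DONE]" sentinel
    emitted by the endpoint is not JSON and marks the end of the stream.
    """
    events = []
    for frame in raw.split("\n\n"):
        if not frame.startswith("data: "):
            continue
        data = frame[len("data: "):]
        if data == "[DONE]":
            break
        events.append(json.loads(data))
    return events
```

This buffered version is a simplification: a real client parses incrementally as chunks arrive, which is exactly what `EventSource` does for you in the browser.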

Chapter 9: Visualizing Thought in Vue.js

(Source Code Focus)

The Vue component needs to handle an incoming stream of SSE data and render a “Chain of Thought” visualization.

The Agent Chat Component

```vue
<!-- src/components/AgentChat.vue -->
<template>
  <div class="chat-container">
    <div v-for="(msg, index) in history" :key="index" class="message">
      <div :class="msg.role">{{ msg.content }}</div>
      <!-- Visualization of agent steps (tools used) -->
      <div v-if="msg.steps" class="steps-log">
        <div v-for="(step, sIdx) in msg.steps" :key="sIdx" class="step-badge">
          🤖 <strong>{{ step.node }}</strong>: {{ formatOutput(step.output) }}
        </div>
      </div>
    </div>
  </div>
</template>

<script setup>
import { ref } from 'vue';

const history = ref([]);
let eventSource = null;

const startChat = async (message) => {
  history.value.push({ role: 'user', content: message, steps: [] });
  const currentMsgIndex = history.value.length - 1;

  // Standard fetch to initiate the stream (or EventSource directly)
  eventSource = new EventSource(`http://localhost:8000/chat?message=${message}`);

  eventSource.onmessage = (event) => {
    if (event.data === '[DONE]') {
      eventSource.close();
      return;
    }
    const data = JSON.parse(event.data);
    if (data.type === 'step') {
      // Append steps to the current message visualization
      if (!history.value[currentMsgIndex].steps) {
        history.value[currentMsgIndex].steps = [];
      }
      history.value[currentMsgIndex].steps.push(data);
    } else if (data.type === 'token') {
      // Append raw text to the content
      history.value[currentMsgIndex].content += data.content;
    }
  };
};

const formatOutput = (output) => {
  // Truncate long shell/browser outputs for UI cleanliness
  return output.length > 100 ? output.substring(0, 100) + '...' : output;
};
</script>

<style scoped>
.steps-log {
  background: #f4f4f4;
  padding: 10px;
  border-left: 3px solid #42b883;
  margin-top: 5px;
  font-family: monospace;
  font-size: 0.9em;
}
.step-badge {
  margin-bottom: 4px;
  color: #35495e;
}
</style>
```

Chapter 11: Sandbox Safety with Docker

(Deploy, Sandbox, Shell, File Ops)

This is the most critical part of the book. An agent that can run rm -rf or install malicious Python packages must be contained.

The Docker Strategy

We use a multi-stage Docker build.

  1. Builder Stage: Compiles the Vue frontend into static files.
  2. Runner Stage: A Python image that serves both the API (FastAPI) and the static frontend (Vue).
  3. Isolation: The agent tools (Shell, File Ops) run inside this container. If the agent goes rogue, it can only destroy the container, not the host server.

Dockerfile

```dockerfile
# --- Stage 1: Build the Vue Frontend ---
FROM node:18-alpine AS frontend-builder
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend/ .
RUN npm run build

# --- Stage 2: The Backend Runtime ---
FROM python:3.11-slim

# Install system dependencies needed for browser automation (Playwright/Selenium)
RUN apt-get update && apt-get install -y \
    wget \
    gnupg \
    procps \
    libnss3 \
    libnspr4 \
    libatk1.0-0 \
    libatk-bridge2.0-0 \
    libcups2 \
    libdrm2 \
    libxkbcommon0 \
    libxcomposite1 \
    libxdamage1 \
    libxfixes3 \
    libxrandr2 \
    libgbm1 \
    libasound2

WORKDIR /app

# Copy Python requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install Playwright browsers
RUN playwright install --with-deps chromium

# Copy the compiled Vue files from Stage 1
COPY --from=frontend-builder /app/frontend/dist ./static

# Copy backend code
COPY backend/ .

# Expose port
EXPOSE 8000

# Security principle: run as a non-root user
RUN useradd -m agentuser
USER agentuser

# Command to serve both API and static files
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Docker Compose for Orchestration

```yaml
# docker-compose.yml
version: '3.8'
services:
  agent-app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
      - LANGCHAIN_TRACING_V2=true
    volumes:
      # Mount a safe workspace directory for file ops
      - ./agent_workspace:/app/workspace
    restart: unless-stopped
```

Chapter 13: Production-Grade Security

(Summary)

When deploying agents with shell access, standard API security is not enough.

  1. The Allow-List Pattern: Do not let the LLM generate arbitrary shell commands. Force it to choose from a pre-defined list of Python functions (e.g., read_file, write_file, list_directory).
  2. Jailbreak Detection: Implement a middleware check in FastAPI that scores incoming prompts for jailbreak attempts before they reach the LLM.
  3. Resource Limits: Configure Docker (via Compose) to limit CPU and memory usage (deploy: resources: limits:). This prevents an agent stuck in an infinite loop from freezing your server.
  4. Ephemeral Containers: Ideally, spin up a fresh Docker container for every user session and destroy it when the session ends, so no “memory” of user data persists between sessions.
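The allow-list pattern from point 1 can be as small as a dict of vetted callables; anything the model names outside it is rejected before execution. A sketch (the registry contents are illustrative):

```python
import os

# Allow-list: the LLM may only name one of these vetted functions.
# It never emits raw shell strings, so commands like `rm -rf` are
# simply unrepresentable in its action space.
ALLOWED_TOOLS = {
    "read_file": lambda path: open(path).read(),
    "write_file": lambda path, content: open(path, "w").write(content),
    "list_directory": lambda path=".": os.listdir(path),
}

def dispatch(tool_name: str, **kwargs):
    """Execute a tool call only if it is on the allow-list."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool {tool_name!r} is not on the allow-list")
    return ALLOWED_TOOLS[tool_name](**kwargs)
```

In production the callables would also validate their arguments (e.g., confining paths to the mounted workspace), but the gatekeeping structure stays this simple.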

Appendix A: Putting It All Together

Project: “The DevOps Agent”

  • Goal: An agent that checks out a GitHub repo, runs its tests via the shell, and, if a test fails, reads the error logs, modifies the code with file ops, and re-runs the tests.
  • Implementation: Combines LangGraph’s loop capability, Docker’s isolation, and Vue’s real-time log streaming.
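The core of that repair loop fits in a few lines of control flow. A stdlib-only sketch, with `run_tests` and `apply_fix` as hypothetical stand-ins for the shell and file-op tools:

```python
def fix_until_green(run_tests, apply_fix, max_attempts=3):
    """Agentic repair loop: run the tests, and on failure feed the error
    log to a fixer, then re-run. `run_tests` returns (passed, log);
    `apply_fix` consumes the log. Both are injected tool callables, which
    keeps the loop itself trivially testable without a real repo."""
    for attempt in range(1, max_attempts + 1):
        passed, log = run_tests()
        if passed:
            return f"green after {attempt} attempt(s)"
        apply_fix(log)
    return "gave up: tests still failing"
```

In the full project this loop is expressed as the tools→agent back-edge in the LangGraph graph, with `max_attempts` enforced as a recursion limit rather than a Python `for`.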

This book structure provides a complete end-to-end guide, from the Python code running the logic to the JavaScript displaying the results, all wrapped in the safety of containerization.
