A 72-Hour Python Crash Course for Java Programmers
If you've written Java for a few years, you don't need to learn Python from scratch. One table covers the core syntax mapping; after that, you learn by doing. This article gives you that table, plus a hands-on exercise: rewriting a Java HttpClient call as an async streaming Python module that talks directly to an LLM API. Every LLM interaction in the rest of this column builds on that module.
2.1 The Core Syntax Mapping
The table below covers roughly 95% of what this column uses. Skim it once; there's no need to memorize anything.
Java → Python quick reference
| Scenario | Java | Python |
|---|---|---|
| Variable declaration | String name = "CSDN"; | name: str = "CSDN" |
| Constant | static final int MAX = 100; | MAX: int = 100 (uppercase by convention) |
| List | List<String> list = new ArrayList<>(); | list: list[str] = [] |
| Map | Map<String, Integer> map = new HashMap<>(); | m: dict[str, int] = {} |
| for loop | for (int i = 0; i < n; i++) | for i in range(n): |
| for-each | for (String s : list) | for s in list: |
| if-else | if (x > 0) { ... } else { ... } | if x > 0:\n ...\nelse:\n ... |
| Method | public String greet(String name) { return "Hi " + name; } | def greet(name: str) -> str:\n return f"Hi {name}" |
| null | null | None |
| Class | public class Dog { ... } | class Dog:\n def __init__(self, name: str):\n self.name = name |
| Constructor | public Dog(String name) { this.name = name; } | __init__, as shown above |
| Static method | public static void util() { ... } | @staticmethod\ndef util():\n ... |
| try-catch | try { ... } catch (Exception e) { ... } | try:\n ...\nexcept Exception as e:\n ... |
| try-with-resources | try (FileReader fr = new FileReader(...)) | with open("file") as f:\n ... |
| import | import java.util.List; | from typing import List |
| main | public static void main(String[] args) | if __name__ == "__main__": |
| String formatting | String.format("val: %d", val) | f"val: {val}" |
| List comprehension | list.stream().map(...).collect(...) | [item.process() for item in items] |
| lambda | (x) -> x * 2 | lambda x: x * 2 |
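To see how these rows fit together in practice, here is a small runnable snippet combining a class, `@staticmethod`, an f-string, a list comprehension, and the `__main__` guard. All names in it (`Dog`, `greet`, and so on) are illustrative, taken from the table rather than any real codebase:

```python
class Dog:
    def __init__(self, name: str):
        self.name = name  # instance field, like `this.name = name` in Java

    @staticmethod
    def species() -> str:
        return "canine"  # no instance needed, like a Java static method


def greet(name: str) -> str:
    return f"Hi {name}"  # f-string instead of String.format


if __name__ == "__main__":
    # List comprehension in place of stream().map().collect()
    dogs: list[Dog] = [Dog(n) for n in ["Rex", "Bella"]]
    name_lengths: dict[str, int] = {d.name: len(d.name) for d in dogs}
    print(greet("CSDN"))      # Hi CSDN
    print(name_lengths)       # {'Rex': 3, 'Bella': 5}
```

Note the type annotations (`str`, `list[Dog]`, `dict[str, int]`): unlike Java's types, they are not enforced at runtime, but they give you IDE completion and static checking via tools like mypy.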
2.2 Hands-On: Rewriting a Java HttpClient Call as an Async Streaming Python Client
In the chapters ahead we will call LLM APIs constantly, and the responses usually arrive as a stream (Server-Sent Events, SSE): the model pushes data chunk by chunk. That requires an HTTP client with async streaming support. It's doable in Java, but the code gets verbose; in Python it can be remarkably concise.
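To make the SSE framing concrete before looking at any client code: the stream arrives as `data: <json>` lines, terminated by a `data: [DONE]` sentinel. A minimal parsing sketch follows; the sample payloads are made up to mimic the chat-completions delta format, not captured from a real API:

```python
import json
from typing import Iterator


def parse_sse_lines(lines: list[str]) -> Iterator[str]:
    """Extract content deltas from 'data: ...' lines; stop at [DONE]."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comment lines
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            yield delta["content"]


# Made-up sample of what a streamed response looks like on the wire:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_lines(sample)))  # Hello
```

Both the Java and the Python clients below are essentially doing this loop, plus connection management.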
2.2.1 The Java Version (for Reference)
```java
// Java 11+ async streaming request (sketch)
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.openai.com/v1/chat/completions"))
        .header("Authorization", "Bearer " + API_KEY)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
        .build();

client.sendAsync(request, HttpResponse.BodyHandlers.ofLines())
        .thenAccept(response -> {
            response.body().forEach(line -> {
                if (line.startsWith("data: ")) {
                    String data = line.substring(6);
                    if (!data.equals("[DONE]")) {
                        // parse and handle the chunk
                    }
                }
            });
        });
```
Not terribly complex, but the chained calls get hard to follow, and every conversation turn means assembling all of this again.
2.2.2 The Same Thing in Python
We'll do this in two steps: first a working synchronous version, then an async generator that handles the streaming response while keeping the call site clean.
Step 1: install the dependency
```bash
pip install httpx
```
httpx is a modern Python HTTP client with synchronous, asynchronous, and HTTP/2 support, and a cleanly designed API.
Step 2: the synchronous version, just to get something working
```python
import httpx

API_KEY = "your-api-key"
BASE_URL = "https://api.openai.com/v1"


def chat_completion_sync(messages: list[dict]) -> str:
    """Synchronous request; returns the complete answer text."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": messages,
        "stream": False,
    }
    with httpx.Client(timeout=60.0) as client:
        response = client.post(
            f"{BASE_URL}/chat/completions",
            headers=headers,
            json=payload,
        )
        response.raise_for_status()
        data = response.json()
        return data["choices"][0]["message"]["content"]
```
A quick test:
```python
if __name__ == "__main__":
    msgs = [{"role": "user", "content": "Explain Spring Boot's advantages in one sentence"}]
    print(chat_completion_sync(msgs))
```
Step 3: the async streaming version that all later chapters build on
```python
import json
from typing import AsyncIterator

import httpx

API_KEY = "your-api-key"
BASE_URL = "https://api.openai.com/v1"


async def chat_completion_stream(
    messages: list[dict],
    model: str = "gpt-3.5-turbo",
) -> AsyncIterator[str]:
    """
    Async streaming request.

    Yields each text fragment as it arrives; the caller
    consumes the stream with `async for`.
    """
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": messages,
        "stream": True,
    }
    async with httpx.AsyncClient(timeout=120.0) as client:
        async with client.stream(
            "POST",
            f"{BASE_URL}/chat/completions",
            headers=headers,
            json=payload,
        ) as response:
            response.raise_for_status()
            async for line in response.aiter_lines():
                if not line.startswith("data: "):
                    continue
                data_str = line[6:].strip()
                if data_str == "[DONE]":
                    break
                try:
                    chunk = json.loads(data_str)
                    delta = chunk["choices"][0].get("delta", {})
                    content = delta.get("content", "")
                    if content:
                        yield content
                except Exception:
                    # Skip lines that fail to parse
                    continue
```
Usage example:
```python
import asyncio


async def main():
    msgs = [{"role": "user", "content": "Briefly describe Java's GC mechanism"}]
    full_response = ""
    async for token in chat_completion_stream(msgs):
        print(token, end="", flush=True)  # print token by token
        full_response += token
    print("\n\n--- end of full response ---")


asyncio.run(main())
```
2.2.3 Comparing the Two
For the same streaming request, the Python caller needs only an `async for`, and the code stays readable. httpx's `stream()` hands you an async line iterator directly, so there is no manual connection management and no hand-rolled SSE parsing. Every LLM interaction in the rest of this column builds on this wrapper.
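One practical benefit of the async-generator design is testability: any code written against `chat_completion_stream` can be exercised without a network call by swapping in a fake generator. The sketch below uses hypothetical names (`fake_stream`, `collect`) to show the pattern:

```python
import asyncio
from typing import AsyncIterator


async def fake_stream(tokens: list[str]) -> AsyncIterator[str]:
    """Stand-in for chat_completion_stream: yields canned tokens."""
    for t in tokens:
        await asyncio.sleep(0)  # yield control, mimicking network waits
        yield t


async def collect(stream: AsyncIterator[str]) -> str:
    """Drain an async token stream into one string."""
    parts: list[str] = []
    async for token in stream:
        parts.append(token)
    return "".join(parts)


print(asyncio.run(collect(fake_stream(["JVM ", "does ", "GC"]))))  # JVM does GC
```

Because the real client and the fake both satisfy `AsyncIterator[str]`, downstream code (UI rendering, logging, retry wrappers) never needs to know which one it is consuming.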
2.3 An Honest Word About Migration Cost
Python and Java differ mostly in how thick the syntactic sugar is poured. The design patterns, layering discipline, and exception-handling mindset you built up in Java carry over to Python unchanged; you just type less. The real time sink is domain knowledge: RAG fundamentals, embedding model selection, model performance evaluation. None of that is language-specific, and it is what this column is really about.
In the next installment, we will deploy our first open-source LLM locally and compare how models of different sizes perform on real Q&A.
Source code for this article: stage01_env/simple_chat_client.py
