AI Exploration on Win10 + WSL2 + Ubuntu 22.04 (Part 1)
- Architecture diagram
- Installing multiple Ubuntu distros under WSL2
- Installing CUDA, cuDNN, NCCL, and torch
- Deploying Ollama locally
- Deploying Llama.cpp locally
- Deploying OpenClaw locally
- Deploying CoPaw locally
Architecture diagram
Installing multiple Ubuntu distros under WSL2
The idea is to give each AI project its own distro, so experiments stay isolated and dependency conflicts are avoided.
1. Install Ubuntu 22.04
```bash
wsl --install -d Ubuntu-22.04
```
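To confirm the distro registered and is running under WSL2:

```bash
wsl -l -v
```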
2. Initial environment setup

```bash
sudo vim /etc/wsl.conf
```

```ini
[network]
hostname = <new-hostname>
generateHosts = false
generateResolvConf = false

[user]
default = root
```

```bash
sudo vi /etc/hosts
```

```
127.0.1.1 <new-hostname>.localdomain <new-hostname>
```

```bash
sudo vi /etc/resolv.conf
```

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```

```bash
sudo vi /etc/systemd/resolved.conf
```

```ini
[Resolve]
DNS=8.8.8.8
```

```bash
sudo systemctl restart systemd-resolved
sudo systemctl restart NetworkManager
```
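A quick sanity check that the static DNS configuration took effect (assumes systemd-resolved is active, which is the Ubuntu 22.04 default):

```bash
resolvectl status | grep -A2 'DNS Servers'
ping -c 2 google.com
```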
3. Update the system and install dependencies

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y net-tools network-manager zstd build-essential
sudo apt install -y cmake libcurl4-openssl-dev checkinstall git curl unzip
sudo ln -fs /bin/bash /bin/sh
```
4. Upgrade CMake to 3.28
```bash
wget https://github.com/Kitware/CMake/releases/download/v3.28.3/cmake-3.28.3-linux-x86_64.sh
chmod +x cmake-3.28.3-linux-x86_64.sh
sudo ./cmake-3.28.3-linux-x86_64.sh --skip-license --prefix=/usr/local

# Back up the old cmake link (optional, but recommended)
sudo mv /usr/bin/cmake /usr/bin/cmake.old
# Symlink the new version (pointing at /usr/local/bin/cmake)
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake
# Do the same for cpack, ctest, and friends (avoids errors later)
sudo mv /usr/bin/cpack /usr/bin/cpack.old
sudo ln -s /usr/local/bin/cpack /usr/bin/cpack
sudo mv /usr/bin/ctest /usr/bin/ctest.old
sudo ln -s /usr/local/bin/ctest /usr/bin/ctest

cmake --version
```
5. Export as a base image
```bash
# wsl --export [distro name] "[export target path]"
wsl --export Ubuntu-22.04 E:\WSL\Ubuntu-22.04.tar
```
6. Create new distros with WSL import
```bash
# wsl --import [distro name] "[distro path]" "[import source path]"
wsl --import Ubuntu-22.04-llamacpp "E:\WSL\Ubuntu-22.04-llamacpp" "E:\WSL\Ubuntu-22.04.tar"
```
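To enter the newly imported distro (the name matches the `--import` above):

```bash
wsl -d Ubuntu-22.04-llamacpp
```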
7. Map network ports to the host

```bash
netsh interface portproxy add v4tov4 listenport=[host listen port] listenaddress=0.0.0.0 connectport=[distro port] connectaddress=[distro IP]
```
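A filled-in sketch, assuming the distro reports 172.20.149.74 as its address (look it up with `hostname -I` inside the distro); run the `netsh` commands in an elevated prompt on the Windows host:

```bash
# Inside the distro: look up its IP address
hostname -I

# On the Windows host (elevated): forward host port 9180 to the distro
netsh interface portproxy add v4tov4 listenport=9180 listenaddress=0.0.0.0 connectport=9180 connectaddress=172.20.149.74
# Inspect or remove existing mappings
netsh interface portproxy show v4tov4
netsh interface portproxy delete v4tov4 listenport=9180 listenaddress=0.0.0.0
```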
8. Install a current Node.js and npm

```bash
# If an older Node.js was installed via apt, remove it first to avoid conflicts
sudo apt remove -y nodejs npm
sudo apt autoremove -y

# Download and install nvm (official script; check the nvm site for the latest version number)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
# A version number printed here means the install succeeded
nvm --version

# Install Node.js 20.x LTS (the matching npm comes with it)
nvm install 20
# Make 20.x the default so the version survives terminal restarts
nvm alias default 20
# Should print v20.x.x (e.g. v20.17.0)
node -v
# Should print the matching npm version (e.g. 10.8.2)
npm -v

# Install pnpm
npm install -g pnpm
pnpm --version
```
9. Install uv
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```
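A quick smoke test of the uv install; `uv venv` and `uv pip` are its standard subcommands (the `.venv` path and `requests` package are just examples):

```bash
uv --version
uv venv .venv
source .venv/bin/activate
uv pip install requests
```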
Installing CUDA, cuDNN, NCCL, and torch
1. Install the CUDA Toolkit
Check the CUDA version:
```bash
nvidia-smi
```
From https://developer.nvidia.com/cuda-downloads, download and run the runfile matching your OS and CUDA version.
```bash
# 13.2
wget https://developer.download.nvidia.com/compute/cuda/13.2.0/local_installers/cuda_13.2.0_595.45.04_linux.run
sudo sh cuda_13.2.0_595.45.04_linux.run
# 12.8
wget https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda_12.8.0_570.86.10_linux.run
sudo sh cuda_12.8.0_570.86.10_linux.run
# 12.9
wget https://developer.download.nvidia.com/compute/cuda/12.9.0/local_installers/cuda_12.9.0_575.51.03_linux.run
sudo sh cuda_12.9.0_575.51.03_linux.run
```
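On WSL2 the GPU driver comes from the Windows side, so only the toolkit belongs inside the distro. A non-interactive variant for the 12.8 runfile, using the installer's documented flags:

```bash
# --toolkit installs only the CUDA toolkit (no Linux driver); --silent skips the interactive menu
sudo sh cuda_12.8.0_570.86.10_linux.run --silent --toolkit
```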
2. Configure environment variables

```bash
# 13.2
echo 'export CUDA_HOME=/usr/local/cuda-13.2' >> ~/.bashrc
# 12.8
echo 'export CUDA_HOME=/usr/local/cuda-12.8' >> ~/.bashrc
# 12.9
echo 'export CUDA_HOME=/usr/local/cuda-12.9' >> ~/.bashrc

echo 'export PATH=$PATH:${CUDA_HOME}/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${CUDA_HOME}/lib64' >> ~/.bashrc
echo 'export PATH=$PATH:/home/ubuntu/.local/bin' >> ~/.bashrc
source ~/.bashrc
```
3. Check the nvcc version

```bash
nvcc --version
```
4. Install cuDNN
From https://developer.nvidia.com/rdp/cudnn-archive, download the cuDNN build matching your OS and CUDA version, then install it:
```bash
# Extract the archive
tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz

# Copy into the CUDA directory
# 13.2
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-13.2/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-13.2/lib64
# 12.8
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-12.8/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.8/lib64
# 12.9
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda12-archive/include/cudnn* /usr/local/cuda-12.9/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda12-archive/lib/libcudnn* /usr/local/cuda-12.9/lib64

# Fix file permissions
# 13.2
sudo chmod a+r /usr/local/cuda-13.2/include/cudnn*.h /usr/local/cuda-13.2/lib64/libcudnn*
# 12.8
sudo chmod a+r /usr/local/cuda-12.8/include/cudnn*.h /usr/local/cuda-12.8/lib64/libcudnn*
# 12.9
sudo chmod a+r /usr/local/cuda-12.9/include/cudnn*.h /usr/local/cuda-12.9/lib64/libcudnn*

# A version printed here means the install succeeded
cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
```
5. Install NCCL
From https://developer.nvidia.com/nccl/nccl-download, download and install the NCCL build matching your CUDA version.
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
# 13.2
sudo apt install libnccl2=2.30.3-1+cuda13.2 libnccl-dev=2.30.3-1+cuda13.2
# 12.8
sudo apt install libnccl2=2.26.2-1+cuda12.8 libnccl-dev=2.26.2-1+cuda12.8
# 12.9
sudo apt install libnccl2=2.30.3-1+cuda12.9 libnccl-dev=2.30.3-1+cuda12.9
```
6. Install torch
```bash
# 13.2
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
# 12.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
# 12.9
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu129
```
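A one-line sanity check before the full script below (works for any of the wheels above):

```bash
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```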
7. Verification script

```python
import torch
import platform

def get_system_info():
    return {
        "System": platform.system(),
        "Python version": platform.python_version(),
        "PyTorch version": torch.__version__,
        "CUDA available": torch.cuda.is_available(),
        "CUDA version": torch.version.cuda,
        "MPS available": hasattr(torch.backends, "mps") and torch.backends.mps.is_available(),
        "GPU": torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none",
    }

def test_mps():
    if not torch.backends.mps.is_available():
        if not torch.backends.mps.is_built():
            print("MPS unavailable: this PyTorch build was compiled without MPS support.")
        else:
            print("MPS unavailable: requires macOS 12.3+ and an MPS-enabled device.")
    else:
        mps_device = torch.device("mps")
        # Create a tensor on the mps device
        x = torch.ones(5, device=mps_device)
        # or: x = torch.ones(5, device="mps")
        # Any operation now runs on the GPU
        y = x * 2
        # Move a model to the mps device (YourFavoriteNet is a placeholder for your own model class)
        model = YourFavoriteNet()
        model.to(mps_device)
        # Every call now runs on the GPU
        pred = model(x)

if __name__ == "__main__":
    info = get_system_info()
    for k, v in info.items():
        print(f"{k}: {v}")
    test_mps()
```
Deploying Ollama locally
1. Install Ollama
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
2. Configure the service
```bash
sudo vi /etc/systemd/system/ollama.service
```

File contents:

```ini
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_ORIGINS=*"

[Install]
WantedBy=default.target
```

```bash
# Reload systemd configuration
sudo systemctl daemon-reload
# Start the service
sudo systemctl start ollama.service
# Check service status
sudo systemctl status ollama.service
# Enable the service at boot
sudo systemctl enable ollama.service
```
3. Pull a model
```bash
ollama pull qwen3.5:35b
```
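With the service up and a model pulled, the HTTP API can be exercised directly; `/api/tags` and `/api/generate` are standard Ollama endpoints on the default port 11434:

```bash
# List local models
curl http://localhost:11434/api/tags
# One-shot generation
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:35b",
  "prompt": "Hello, please introduce yourself",
  "stream": false
}'
```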
4. Install Nginx

```bash
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify Nginx is running; a healthy instance shows `active (running)`
sudo systemctl status nginx
```
5. Configure API key verification
```bash
sudo tee /etc/nginx/conf.d/ollama.conf <<'EOF'
server {
    listen 9180;
    location / {
        if ($http_authorization != "[API KEY]") {
            return 403;
        }
        proxy_pass http://localhost:11434;
    }
}
EOF
```
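After validating and reloading Nginx, the key check can be verified with curl. Note this config compares the raw `Authorization` header against the `[API KEY]` literal, so the client must send exactly that value:

```bash
sudo nginx -t && sudo systemctl reload nginx
# Without the key: expect 403
curl -i http://localhost:9180/api/tags
# With the key: expect 200
curl -i -H "Authorization: [API KEY]" http://localhost:9180/api/tags
```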
6. Set up the host port mapping

```bash
netsh interface portproxy add v4tov4 listenport=9180 listenaddress=0.0.0.0 connectport=9180 connectaddress=172.20.149.74
```
Deploying Llama.cpp locally
1. Install llama.cpp
```bash
# Clone the repository
cd /usr/local
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp

# Build with CMake (recommended)
mkdir build && cd build
# Enable CUDA at build time
cmake .. -DLLAMA_CUDA=ON
cmake --build . --config Release -j$(nproc)

echo 'export PATH=$PATH:/usr/local/llama.cpp/build/bin' >> ~/.bashrc
source ~/.bashrc
```
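A quick check that the CUDA-enabled binaries built; they report build details via `--version`:

```bash
/usr/local/llama.cpp/build/bin/llama-cli --version
```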
2. Install modelscope

```bash
sudo apt install python3-pip
pip install modelscope -i https://pypi.tuna.tsinghua.edu.cn/simple
echo 'export PATH=$PATH:/home/ubuntu/.local/bin' >> ~/.bashrc
source ~/.bashrc
```
3. Download a model from ModelScope
Find a suitable model at https://www.modelscope.cn/models.
```bash
# modelscope download --model [model repo] [file] --local_dir [download path]
modelscope download --model Qwen/Qwen3.5-27B-FP8 README.md --local_dir /usr/local/llama.cpp/build/models
```
4. Run the model
```bash
# Back in the build output directory
cd /usr/local/llama.cpp/build/bin

# Basic one-shot run
./llama-cli \
  -m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  -p "Hello, please introduce yourself" \
  -n 512

# Interactive chat mode
./llama-cli \
  -m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  --chat-template llama3 \
  -cnv

# Start the HTTP API server
./llama-server \
  -m ~/models/Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  --host 0.0.0.0 \
  --port 8080 \
  -c 4096
```
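`llama-server` exposes a health probe and an OpenAI-compatible API, so the server started above can be tested with curl:

```bash
# Liveness check
curl http://localhost:8080/health
# OpenAI-compatible chat completion
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```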
5. Configure the service

```bash
sudo vi /etc/systemd/system/llama-server.service
```
Set the API port to 9191:
```ini
[Unit]
Description=llama.cpp HTTP Server
After=network.target

[Service]
Type=simple
User=llama
Group=llama
WorkingDirectory=/usr/local/llama.cpp
ExecStart=/usr/local/llama.cpp/build/bin/llama-server \
  -m /usr/local/llama.cpp/build/models/model_file_name.gguf \
  --port 9191 \
  --host 0.0.0.0 \
  -c 163840 \
  -np 4 \
  --threads 12 \
  --cont-batching \
  -ngl 99999 \
  -b 4096
Restart=always
RestartSec=5
Environment=LD_LIBRARY_PATH=/usr/local/cuda/lib64
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
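The unit runs as user `llama`, which must exist before the service starts; a minimal setup sketch (the `useradd` flags create a no-login system account):

```bash
# Create the service account if it does not exist yet
sudo useradd -r -s /usr/sbin/nologin llama || true
sudo systemctl daemon-reload
sudo systemctl start llama-server.service
sudo systemctl enable llama-server.service
sudo systemctl status llama-server.service
```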
6. Install Nginx

```bash
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify Nginx is running; a healthy instance shows `active (running)`
sudo systemctl status nginx
```
7. Configure API key verification
```bash
sudo tee /etc/nginx/conf.d/llamacpp.conf <<'EOF'
server {
    listen 9280;
    location / {
        if ($http_authorization != "[API KEY]") {
            return 403;
        }
        proxy_pass http://localhost:9191;
    }
}
EOF
```
8. Set up the host port mapping
```bash
netsh interface portproxy add v4tov4 listenport=9280 listenaddress=0.0.0.0 connectport=9280 connectaddress=172.20.149.74
```
Deploying OpenClaw locally
Deploying CoPaw locally
1. Install CoPaw
```bash
curl -fsSL https://copaw.agentscope.io/install.sh | bash
```
2. Initialize CoPaw
```bash
/home/ubuntu/.local/bin/copaw init --defaults
```
3. Start CoPaw
```bash
/home/ubuntu/.local/bin/copaw app
```
4. Open the console
http://127.0.0.1:8088
5. Create a service
```bash
sudo tee /etc/systemd/system/copaw.service <<EOF
[Unit]
Description=CoPaw Inference Service
After=network.target

[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/.copaw
ExecStart=/home/ubuntu/.local/bin/copaw app
ExecStop=/home/ubuntu/.local/bin/copaw shutdown
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl start copaw.service
sudo systemctl enable copaw.service
```
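To confirm the unit came up and follow its logs (plain systemd tooling, nothing CoPaw-specific):

```bash
sudo systemctl status copaw.service
journalctl -u copaw.service -f
```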
6. Install Nginx

```bash
# Install Nginx
sudo apt install nginx -y
# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify Nginx is running; a healthy instance shows `active (running)`
sudo systemctl status nginx
```
7. Configure the proxy
CoPaw binds to 127.0.0.1 by default, and changes to its config file keep getting overwritten, so Nginx is used as a reverse proxy instead.
```bash
sudo tee /etc/nginx/conf.d/copaw.conf <<'EOF'
server {
    listen 18088;
    location / {
        proxy_pass http://localhost:8088;
    }
}
EOF
```
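After reloading Nginx, the proxy can be checked from inside the distro:

```bash
sudo nginx -t && sudo systemctl reload nginx
curl -I http://localhost:18088/
```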
8. Set up the host port mapping

```bash
netsh interface portproxy add v4tov4 listenport=18088 listenaddress=0.0.0.0 connectport=18088 connectaddress=172.20.149.74
```