
04 - Advanced Track: Natural Language Processing (NLP), Hugging Face in Practice

Hugging Face in practice: loading models, tokenizers, fine-tuning, and Pipelines

1. Overview of the Hugging Face Ecosystem

1.1 Core Components

import numpy as np
import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings('ignore')

print("=" * 60)
print("Hugging Face Transformers: the Swiss Army knife of NLP development")
print("=" * 60)

# Diagram of the Hugging Face ecosystem
fig, ax = plt.subplots(figsize=(12, 8))
ax.axis('off')

# Central node
center = plt.Circle((0.5, 0.5), 0.12, color='lightcoral', ec='black')
ax.add_patch(center)
ax.text(0.5, 0.5, 'Hugging\nFace', ha='center', va='center',
        fontsize=10, fontweight='bold')

# Surrounding libraries
libraries = {
    'Transformers': (0.15, 0.75),
    'Datasets': (0.85, 0.75),
    'Tokenizers': (0.15, 0.25),
    'Accelerate': (0.85, 0.25),
    'PEFT': (0.5, 0.85),
    'Gradio': (0.5, 0.15),
}
for lib, (x, y) in libraries.items():
    circle = plt.Circle((x, y), 0.08, color='lightblue', ec='black')
    ax.add_patch(circle)
    ax.text(x, y, lib, ha='center', va='center', fontsize=7)
    # Connect each library to the central node
    ax.annotate('', xy=(x, y), xytext=(0.5, 0.5),
                arrowprops=dict(arrowstyle='-', color='gray', lw=1, alpha=0.5))

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_title('The Hugging Face Ecosystem', fontsize=14)
plt.tight_layout()
plt.show()

print("\n💡 Core Hugging Face components:")
print(" - Transformers: model library (thousands of pretrained models)")
print(" - Datasets: dataset library (hundreds of public datasets)")
print(" - Tokenizers: high-performance tokenizers")
print(" - Accelerate: distributed training acceleration")
print(" - PEFT: parameter-efficient fine-tuning")
print(" - Gradio: quick demo deployment")
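
Before working through the rest of this article, it helps to confirm the stack is available locally. A minimal sanity check is sketched below; it assumes the six libraries named above are already installed (for example via pip), which the original does not state.

# Minimal version check for the ecosystem libraries named above.
# Assumption: the packages were installed beforehand, e.g.
#   pip install transformers datasets tokenizers accelerate peft gradio
import transformers
import datasets
import tokenizers
import accelerate
import peft
import gradio

for lib in (transformers, datasets, tokenizers, accelerate, peft, gradio):
    print(f"{lib.__name__}: {lib.__version__}")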

2. Loading Models and Tokenizers

2.1 Basic Loading

from transformers import AutoTokenizer, AutoModel, AutoModelForSequenceClassification
import torch

# 1. Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# 2. Load the base model
model = AutoModel.from_pretrained("bert-base-uncased")

# 3. Load a model with a task-specific head
classifier = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2
)

# 4. Inspect model information
print(f"Model type: {model.config.model_type}")
print(f"Hidden size: {model.config.hidden_size}")
print(f"Number of layers: {model.config.num_hidden_layers}")
print(f"Attention heads: {model.config.num_attention_heads}")
print(f"Parameter count: {sum(p.numel() for p in model.parameters()):,}")

# 5. Basic usage
text = "Hello, Hugging Face!"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(f"Output shape: {outputs.last_hidden_state.shape}")
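
A common next step, not shown above, is to move the model to a GPU when one is available and run inference without gradient tracking. A minimal sketch:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Pick a device and move the model there (falls back to CPU if no GPU is present)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

inputs = tokenizer("Hello, Hugging Face!", return_tensors="pt").to(device)
with torch.no_grad():   # inference only, no gradients needed
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, seq_len, 768) for bert-base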

2.2 Tokenizers in Detail

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# 1. Basic tokenization
text = "Hello, Hugging Face! How are you?"
tokens = tokenizer.tokenize(text)
print(f"Tokens: {tokens}")

# 2. Convert tokens to IDs
ids = tokenizer.convert_tokens_to_ids(tokens)
print(f"Token IDs: {ids}")

# 3. Full encoding
encoding = tokenizer(
    text,
    truncation=True,          # truncate long inputs
    padding='max_length',     # pad to max_length
    max_length=128,           # maximum sequence length
    return_tensors='pt'       # return PyTorch tensors
)
print(f"Input IDs shape: {encoding['input_ids'].shape}")
print(f"Attention mask shape: {encoding['attention_mask'].shape}")

# 4. Batch encoding
texts = ["First sentence.", "Second sentence.", "Third sentence."]
batch = tokenizer(
    texts,
    padding=True,
    truncation=True,
    return_tensors='pt'
)
print(f"Batch shape: {batch['input_ids'].shape}")

# 5. Decoding
decoded = tokenizer.decode(batch['input_ids'][0])
print(f"Decoded: {decoded}")

# 6. Special tokens
print(f"[CLS] ID: {tokenizer.cls_token_id}")
print(f"[SEP] ID: {tokenizer.sep_token_id}")
print(f"[PAD] ID: {tokenizer.pad_token_id}")
print(f"[MASK] ID: {tokenizer.mask_token_id}")
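
For token-level tasks such as NER it often helps to know which character span each token came from. Fast tokenizers (which AutoTokenizer loads by default for BERT) can return offset mappings; a minimal sketch, where the example sentence is just an illustration:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # fast tokenizer by default

encoding = tokenizer(
    "Hugging Face is based in New York.",
    return_offsets_mapping=True,   # (start, end) character span for every token
)
for token, (start, end) in zip(encoding.tokens(), encoding["offset_mapping"]):
    print(f"{token:>12s} -> characters [{start}, {end})")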

3. Pipeline: Ready to Use Out of the Box

3.1 Pipeline Basics

from transformers import pipeline
import numpy as np

# 1. Sentiment analysis
classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")
print(f"Sentiment analysis: {result}")

# 2. Text generation
generator = pipeline("text-generation", model="gpt2")
result = generator("Once upon a time", max_length=50, num_return_sequences=1)
print(f"Text generation: {result[0]['generated_text']}")

# 3. Question answering
qa = pipeline("question-answering")
result = qa(
    question="What is Hugging Face?",
    context="Hugging Face is a company that develops machine learning tools."
)
print(f"Question answering: {result}")

# 4. Named entity recognition
ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english")
result = ner("My name is John and I live in New York.")
for entity in result:
    print(f"{entity['word']}: {entity['entity']} (score: {entity['score']:.3f})")

# 5. Summarization
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = """Hugging Face Transformers provides thousands of pretrained models to
perform tasks on texts such as classification, information extraction,
question answering, summarization, translation, text generation and more."""
result = summarizer(text, max_length=30, min_length=10)
print(f"Summary: {result[0]['summary_text']}")

# 6. Translation
translator = pipeline("translation_en_to_fr", model="t5-small")
result = translator("Hello, how are you?")
print(f"Translation: {result[0]['translation_text']}")

# 7. Zero-shot classification
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "I love this movie!",
    candidate_labels=["positive", "negative", "neutral"]
)
print(f"Zero-shot classification: {result['labels'][0]} (score: {result['scores'][0]:.3f})")

# 8. Feature extraction
feature_extractor = pipeline("feature-extraction", model="bert-base-uncased")
features = feature_extractor("Hello world!")
print(f"Feature shape: {np.array(features).shape}")
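
Two practical details worth knowing: a pipeline accepts a list of inputs for batch inference, and the device argument places the model on a GPU (device=0 selects the first GPU, device=-1 is the CPU default). A small sketch, where the example texts are made up:

from transformers import pipeline

classifier = pipeline("sentiment-analysis", device=-1)   # use device=0 for the first GPU

# Passing a list runs every text through the same pipeline
texts = ["I love this product!", "This is the worst service ever.", "It was okay."]
for text, result in zip(texts, classifier(texts)):
    print(f"{text!r} -> {result['label']} ({result['score']:.3f})")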

3.2 Custom Pipelines

from transformers import Pipeline, AutoTokenizer, AutoModelForTokenClassification
import torch

class CustomNERPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        # Split kwargs into preprocess / forward / postprocess parameters
        preprocess_kwargs = {}
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs):
        # Tokenize the raw text
        return self.tokenizer(inputs, return_tensors="pt", truncation=True)

    def _forward(self, model_inputs):
        # Forward pass through the model
        outputs = self.model(**model_inputs)
        return outputs

    def postprocess(self, model_outputs):
        # Convert logits to per-token label IDs
        logits = model_outputs.logits
        predictions = torch.argmax(logits, dim=2)
        return predictions.tolist()

# Create the custom pipeline
ner_pipeline = CustomNERPipeline(
    model=AutoModelForTokenClassification.from_pretrained("bert-base-uncased"),
    tokenizer=AutoTokenizer.from_pretrained("bert-base-uncased")
)

# Use it
result = ner_pipeline("John lives in New York")
print(result)
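
If the custom class should also be reachable through the pipeline() factory, recent versions of Transformers expose a registry for that. The sketch below reuses the CustomNERPipeline defined above; the task name "custom-ner" is chosen here purely for illustration:

from transformers import AutoModelForTokenClassification, pipeline
from transformers.pipelines import PIPELINE_REGISTRY

# Register the class under an illustrative task name
PIPELINE_REGISTRY.register_pipeline(
    "custom-ner",
    pipeline_class=CustomNERPipeline,
    pt_model=AutoModelForTokenClassification,
)

ner = pipeline("custom-ner", model="bert-base-uncased")
print(ner("John lives in New York"))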

4. Fine-Tuning Models

4.1 The Full Fine-Tuning Workflow

from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
    DataCollatorWithPadding
)
from datasets import load_dataset
import numpy as np
import torch
from sklearn.metrics import accuracy_score, f1_score

# 1. Load the dataset
dataset = load_dataset("imdb")

# 2. Load tokenizer and model
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# 3. Preprocess the data
def preprocess_function(examples):
    return tokenizer(
        examples["text"],
        truncation=True,
        padding=True,
        max_length=512
    )

tokenized_dataset = dataset.map(preprocess_function, batched=True)

# 4. Data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# 5. Evaluation metrics
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {
        'accuracy': accuracy_score(labels, predictions),
        'f1': f1_score(labels, predictions)
    }

# 6. Training arguments
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    logging_dir="./logs",
    logging_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)

# 7. Create the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# 8. Train
trainer.train()

# 9. Save the model
model.save_pretrained("./my_model")
tokenizer.save_pretrained("./my_model")

# 10. Evaluate
eval_results = trainer.evaluate()
print(f"Evaluation results: {eval_results}")

# 11. Inference
def predict(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    outputs = model(**inputs)
    pred = torch.argmax(outputs.logits, dim=1).item()
    return "positive" if pred == 1 else "negative"

print(predict("This movie is great!"))
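
Fine-tuning on the full IMDB split takes a while, so during development it is common to smoke-test the loop on a small random subset first. A minimal sketch that reuses the names defined in the block above (the subset sizes are arbitrary):

# Shuffle and slice so one training run finishes quickly
small_train = tokenized_dataset["train"].shuffle(seed=42).select(range(2000))
small_eval = tokenized_dataset["test"].shuffle(seed=42).select(range(500))

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train,
    eval_dataset=small_eval,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()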

4.2 Working with the Datasets Library

from datasets import load_dataset, DatasetDict, concatenate_datasets

# 1. Load a built-in dataset
dataset = load_dataset("imdb")
print(f"Training set size: {len(dataset['train'])}")
print(f"Test set size: {len(dataset['test'])}")

# 2. Inspect a sample
sample = dataset["train"][0]
print(f"Sample: {sample['text'][:100]}...")
print(f"Label: {sample['label']}")

# 3. Dataset operations
# Filtering
filtered = dataset["train"].filter(lambda x: len(x["text"]) > 200)
print(f"After filtering: {len(filtered)}")

# Mapping
def add_prefix(example):
    example["text"] = "Review: " + example["text"]
    return example

dataset = dataset.map(add_prefix)

# 4. Splitting
splits = dataset["train"].train_test_split(test_size=0.1)
dataset = DatasetDict({
    "train": splits["train"],
    "validation": splits["test"],
    "test": dataset["test"]
})

# 5. Concatenating datasets
combined = concatenate_datasets([dataset["train"], dataset["validation"]])

# 6. Streaming (for very large datasets)
streaming_dataset = load_dataset("c4", "en", split="train", streaming=True)
for i, example in enumerate(streaming_dataset):
    if i >= 5:
        break
    print(example["text"][:100])

# 7. Saving to and loading from disk
dataset.save_to_disk("./my_dataset")
loaded_dataset = DatasetDict.load_from_disk("./my_dataset")

# 8. Dataset info
print(dataset)
print(f"Features: {dataset['train'].features}")
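
The examples above all pull datasets from the Hub, but local files can be loaded through the same API. A minimal sketch, where the file paths are placeholders:

from datasets import load_dataset

# "csv" can be swapped for "json", "text", "parquet", etc.
# The file names below are placeholders, not files shipped with this article.
local = load_dataset(
    "csv",
    data_files={"train": "train.csv", "test": "test.csv"},
)
print(local)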

5. Advanced Features

5.1 Mixed-Precision Training

from transformers import TrainingArguments

# Enable mixed precision in the Trainer
training_args = TrainingArguments(
    output_dir="./results",
    fp16=True,                        # enable FP16 mixed precision
    fp16_opt_level="O1",              # Apex optimization level (only used with Apex)
    per_device_train_batch_size=32,   # larger batches fit in memory
)

# Using the Accelerate library directly
from accelerate import Accelerator

accelerator = Accelerator(
    mixed_precision="fp16",           # "fp16", "bf16", or "no"
    gradient_accumulation_steps=4
)

# model, optimizer and dataloader are assumed to have been built beforehand
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

# Training loop
# (for gradient_accumulation_steps to take effect, the step would normally be
# wrapped in "with accelerator.accumulate(model):")
for batch in dataloader:
    with accelerator.autocast():
        outputs = model(**batch)
        loss = outputs.loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
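
On Ampere-or-newer NVIDIA GPUs, bfloat16 is often a safer choice than fp16 because it keeps fp32's dynamic range and avoids loss-scaling issues. A small sketch of the same two routes with bf16 instead; treat it as an assumption about your hardware support:

from transformers import TrainingArguments
from accelerate import Accelerator

# Trainer route: bf16 instead of fp16 (requires hardware support)
training_args = TrainingArguments(
    output_dir="./results",
    bf16=True,
    per_device_train_batch_size=32,
)

# Accelerate route: same idea via mixed_precision
accelerator = Accelerator(mixed_precision="bf16")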

5.2 Model Parallelism and Quantization

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 1. Model parallelism (shard a large model across devices)
model = AutoModelForCausalLM.from_pretrained(
    "gpt2-large",
    device_map="auto",   # automatically place layers on the available devices
    load_in_8bit=True,   # 8-bit quantization
)

# 2. 4-bit quantization (QLoRA-style loading)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto"
)

# 3. Gradient checkpointing (saves memory at the cost of recomputation)
model.gradient_checkpointing_enable()

# 4. Inspect how the model is distributed across devices
print(model.hf_device_map)
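
4-bit loading is usually paired with LoRA adapters (the QLoRA recipe), so that only a small set of added weights is trained while the quantized base stays frozen. A minimal sketch with the peft library, continuing from the 4-bit model above; the target module names are typical for LLaMA-style models and are an assumption here:

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Prepare the quantized base model for training (casts norms, enables input grads)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumption: LLaMA-style attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% of all parameters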

6. Saving and Sharing Models

6.1 Saving and Loading

from transformers import AutoModel, AutoTokenizer
import torch

# 1. Save locally (model and tokenizer come from the earlier sections)
model.save_pretrained("./my_model")
tokenizer.save_pretrained("./my_model")

# 2. Load from a local path
model = AutoModel.from_pretrained("./my_model")
tokenizer = AutoTokenizer.from_pretrained("./my_model")

# 3. Push to the Hub (requires logging in)
from huggingface_hub import notebook_login
notebook_login()   # paste your access token

model.push_to_hub("my-awesome-model")
tokenizer.push_to_hub("my-awesome-model")

# 4. Load from the Hub
model = AutoModel.from_pretrained("username/my-awesome-model")

# 5. Save the training state via the Trainer
trainer.save_model("./checkpoint")
trainer.save_state()

# 6. Save a full checkpoint (including the optimizer state)
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'epoch': epoch,
    'loss': loss,
}, "checkpoint.pt")
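
When a training run is interrupted, Trainer can pick up from the checkpoints it wrote under output_dir. A minimal sketch, reusing the trainer from section 4.1 (the explicit checkpoint directory name is just an example of what save_strategy produces):

# Resume from the most recent checkpoint found under training_args.output_dir
trainer.train(resume_from_checkpoint=True)

# Or resume from an explicit checkpoint directory
trainer.train(resume_from_checkpoint="./results/checkpoint-500")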

7. Summary

| Component | Role | Common methods |
| --- | --- | --- |
| AutoTokenizer | Tokenization | from_pretrained, tokenize, decode |
| AutoModel | Model loading | from_pretrained, save_pretrained |
| Pipeline | Task-level interface | pipeline, custom Pipeline subclasses |
| Trainer | Training | train, evaluate, predict |
| Datasets | Data handling | load_dataset, map, filter |

Best practices (a combined sketch of points 1 and 2 follows the list):

  1. Use AutoModel and AutoTokenizer so the right architecture and vocabulary are matched automatically
  2. Use Pipeline for quick validation
  3. Use Trainer for standard training runs
  4. Use Datasets for efficient data processing
  5. Use Accelerate for distributed training
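
As a recap of points 1 and 2, a minimal sketch that loads a model with the Auto classes and sanity-checks it through a pipeline before any fine-tuning work; the model name and example sentence are illustrative:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "distilbert-base-uncased-finetuned-sst-2-english"   # illustrative sentiment model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Wrap the pair in a pipeline for a quick end-to-end check
clf = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(clf("Hugging Face makes NLP development much easier."))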
