8. Examples


  1. Fine-tune a model with the Trainer API:

    
    
    ### Load the dataset ###
    from datasets import load_dataset
    from transformers import AutoTokenizer, DataCollatorWithPadding

    raw_datasets = load_dataset("glue", "mrpc")
    checkpoint = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)

    def tokenize_function(example):
        return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

    ### Define the TrainingArguments ###
    from transformers import TrainingArguments

    training_args = TrainingArguments("test-trainer", evaluation_strategy="epoch")

    ### Define the model ###
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    ### Define the Trainer ###
    import numpy as np  # needed for np.argmax in compute_metrics
    import evaluate
    from transformers import Trainer

    def compute_metrics(eval_preds):
        # evaluation function: convert logits to class ids and score them with the GLUE/MRPC metric
        metric = evaluate.load("glue", "mrpc")
        logits, labels = eval_preds
        predictions = np.argmax(logits, axis=-1)
        return metric.compute(predictions=predictions, references=labels)

    trainer = Trainer(
        model,
        training_args,
        train_dataset=tokenized_datasets["train"],
        eval_dataset=tokenized_datasets["validation"],
        data_collator=data_collator,
        tokenizer=tokenizer,
        compute_metrics=compute_metrics,
    )

    ### Train ###
    trainer.train()
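
    After training, the Trainer can also generate predictions directly. The following is a minimal sketch, not part of the original listing, assuming the objects defined above; trainer.predict returns the logits and label ids for the given dataset, which can then be scored with the same GLUE/MRPC metric.

    # Run prediction on the validation set and score it.
    import numpy as np
    import evaluate

    predictions = trainer.predict(tokenized_datasets["validation"])
    print(predictions.predictions.shape, predictions.label_ids.shape)

    preds = np.argmax(predictions.predictions, axis=-1)  # logits -> predicted class ids
    metric = evaluate.load("glue", "mrpc")
    print(metric.compute(predictions=preds, references=predictions.label_ids))
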
  2. Train without the Trainer API, writing the training loop by hand:

    
    
    ### Load the dataset ###
    from datasets import load_dataset
    from transformers import AutoTokenizer, DataCollatorWithPadding

    raw_datasets = load_dataset("glue", "mrpc")
    checkpoint = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)

    def tokenize_function(example):
        return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

    tokenized_datasets = tokenized_datasets.remove_columns(["sentence1", "sentence2", "idx"])  # drop columns the model does not accept
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")  # rename the label column
    tokenized_datasets.set_format("torch")  # return PyTorch tensors
    tokenized_datasets["train"].column_names  # ["attention_mask", "input_ids", "labels", "token_type_ids"]

    from torch.utils.data import DataLoader

    train_dataloader = DataLoader(
        tokenized_datasets["train"], shuffle=True, batch_size=8, collate_fn=data_collator
    )
    eval_dataloader = DataLoader(
        tokenized_datasets["validation"], batch_size=8, collate_fn=data_collator
    )

    ### Define the model ###
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    ### Define the optimizer and the learning-rate scheduler ###
    from transformers import AdamW
    from transformers import get_scheduler

    optimizer = AdamW(model.parameters(), lr=5e-5)
    num_epochs = 3
    num_training_steps = num_epochs * len(train_dataloader)
    lr_scheduler = get_scheduler(
        "linear",
        optimizer=optimizer,
        num_warmup_steps=0,
        num_training_steps=num_training_steps,
    )

    ### Training loop ###
    import torch

    device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
    model.to(device)

    from tqdm.auto import tqdm

    progress_bar = tqdm(range(num_training_steps))

    model.train()
    for epoch in range(num_epochs):
        for batch in train_dataloader:
            batch = {k: v.to(device) for k, v in batch.items()}
            outputs = model(**batch)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
            progress_bar.update(1)

    ### Evaluation ###
    import evaluate

    metric = evaluate.load("glue", "mrpc")

    model.eval()
    for batch in eval_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**batch)
        logits = outputs.logits
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    metric.compute()
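
    Once the loop finishes, you will usually want to persist the result. A minimal sketch, assuming the model and tokenizer from the loop above; the directory name "my-mrpc-model" is only an example.

    # Save the fine-tuned weights and the tokenizer so they can be reloaded later.
    output_dir = "my-mrpc-model"  # example path, adjust as needed
    model.save_pretrained(output_dir)
    tokenizer.save_pretrained(output_dir)

    # Reload them for inference with the generic Auto classes.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    reloaded_model = AutoModelForSequenceClassification.from_pretrained(output_dir)
    reloaded_tokenizer = AutoTokenizer.from_pretrained(output_dir)
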
  3. Speed up training with Accelerate: with the Accelerate library, only a few adjustments are needed to enable distributed training on multiple GPUs or TPUs. The changed lines are shown below:

    
    
    + from accelerate import Accelerator
      from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

    + accelerator = Accelerator()

      model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
      optimizer = AdamW(model.parameters(), lr=3e-5)

    - device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
    - model.to(device)

    + train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
    +     train_dataloader, eval_dataloader, model, optimizer
    + )

      num_epochs = 3
      num_training_steps = num_epochs * len(train_dataloader)
      lr_scheduler = get_scheduler(
          "linear",
          optimizer=optimizer,
          num_warmup_steps=0,
          num_training_steps=num_training_steps
      )

      progress_bar = tqdm(range(num_training_steps))

      model.train()
      for epoch in range(num_epochs):
          for batch in train_dataloader:
    -         batch = {k: v.to(device) for k, v in batch.items()}
              outputs = model(**batch)
              loss = outputs.loss
    -         loss.backward()
    +         accelerator.backward(loss)
              optimizer.step()
              lr_scheduler.step()
              optimizer.zero_grad()
              progress_bar.update(1)

    The first line to add is the Accelerator import. The second line instantiates an Accelerator object, which inspects the environment and initializes the appropriate distributed setup. Accelerate handles moving data between devices for you, so you can delete the line that places the model on the device (or, alternatively, use accelerator.device instead of device).
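
    In a distributed run, each process only sees its own shard of eval_dataloader, so the evaluation loop needs a small adjustment as well. The following is a hedged sketch, not part of the original: it assumes the objects prepared above and uses Accelerate's gather_for_metrics helper (available in recent Accelerate versions) to collect predictions and labels from all processes before updating the metric.

    # Distributed evaluation sketch: evaluate the local shard, then gather across processes.
    import torch
    import evaluate

    metric = evaluate.load("glue", "mrpc")
    model.eval()
    for batch in eval_dataloader:
        with torch.no_grad():
            outputs = model(**batch)
        predictions = torch.argmax(outputs.logits, dim=-1)
        # gather_for_metrics also drops the samples that were duplicated to pad the last batch
        predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"]))
        metric.add_batch(predictions=predictions, references=references)
    print(metric.compute())
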

    To use it in a distributed setup, run the following command:

    
    
    accelerate config

    This asks you a few configuration questions and dumps your answers into a configuration file that is used by the following command:

    
    
    accelerate launch train.py

    This launches the distributed training run.
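
    If you are working in a notebook (for example on Colab or Kaggle) instead of launching a script, Accelerate also provides notebook_launcher. A minimal sketch, assuming the training loop above has been wrapped in a function; training_function is a placeholder name used here for illustration.

    # Launch the (assumed) training_function on several processes from a notebook.
    from accelerate import notebook_launcher

    notebook_launcher(training_function, args=(), num_processes=2)
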

    To get the most out of the speedup offered by Cloud TPUs, we recommend padding your samples to a fixed length with the tokenizer's padding="max_length" and max_length arguments, as sketched below.
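
    A sketch of what that change to the tokenization step might look like; max_length=128 is an arbitrary example value, not something prescribed by the original text.

    # Pad every example to a fixed length so the TPU sees static shapes.
    def tokenize_function(example):
        return tokenizer(
            example["sentence1"],
            example["sentence2"],
            truncation=True,
            padding="max_length",  # pad to max_length instead of dynamic per-batch padding
            max_length=128,        # example value; choose a length that covers your data
        )

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
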
