How do I measure the performance of a pretrained HuggingFace language model?
I am pretraining a GPT2LMHeadModel using Trainer as follows:
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir=str(project_root / 'models/bn-gpt2/'),
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    fp16=True,
    optim="adafactor",
    eval_steps=400,           # run evaluation every 400 optimizer steps
    save_steps=800,
    warmup_steps=500,
    evaluation_strategy="steps",
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized_dataset['train'],
    eval_dataset=tokenized_dataset['test'],
)
trainer.train()
I want to measure the performance of my pretrained model using perplexity or accuracy metrics, both during and after training. I have found ways to measure these for individual sentences, but I cannot find a way to do this for the complete model. My goal is to create a next-word prediction model for my native language by training GPT-2 from scratch.
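For reference, this is roughly the kind of single-sentence calculation I have found (a minimal sketch; sentence_perplexity is my own placeholder name, and model/tokenizer are assumed to be my GPT2LMHeadModel and its matching tokenizer):

import math
import torch

def sentence_perplexity(sentence: str) -> float:
    # Score the sentence with the model itself: passing labels=input_ids
    # makes GPT2LMHeadModel return the mean cross-entropy (negative
    # log-likelihood) over the sentence's tokens.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # Perplexity is the exponential of the mean negative log-likelihood.
    return math.exp(outputs.loss.item())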
Comments (1)
If I understand it correctly, this tutorial shows how to calculate perplexity for the entire test set. As far as I can tell, they use the entire test corpus as one string joined by line breaks, which probably has to do with the fact that the perplexity calculation uses a sliding window that conditions on the text that came earlier in the corpus. I have not calculated perplexity for a model myself and am not an expert at this. In any case, you could average the sentence scores into a corpus score, although there may be issues both with the logic of that metric and with the weighting, since sentences can have different numbers of words; see this explanation.
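A rough sketch of that sliding-window calculation might look like the following. The names model, tokenizer, and test_text are mine, not from the tutorial: model is assumed to be your trained GPT2LMHeadModel, tokenizer its tokenizer, and test_text the whole test corpus joined into one string.

import torch

encodings = tokenizer(test_text, return_tensors="pt")
max_length = model.config.n_positions  # GPT-2 context window size
stride = 512                           # how far the window advances each step
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc   # only score tokens not scored before
    input_ids = encodings.input_ids[:, begin_loc:end_loc]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100    # mask context-only tokens from the loss

    with torch.no_grad():
        # loss is the mean negative log-likelihood over the unmasked targets
        nlls.append(model(input_ids, labels=target_ids).loss)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
print(f"corpus perplexity: {ppl.item():.2f}")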
Also, I'm not sure if you are already aware of this, but there is also a pretrained GPT-2 model available for Bengali on Hugging Face.