How to measure the performance of a pretrained HuggingFace language model?

Posted 2025-01-13 20:36:40

I am pretraining a GPT2LMHeadModel using Trainer as follows:

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir=str(project_root / 'models/bn-gpt2/'),
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    fp16=True,
    optim="adafactor",
    eval_steps=400,
    save_steps=800,
    warmup_steps=500,
    evaluation_strategy="steps",
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized_dataset['train'],
    eval_dataset=tokenized_dataset['test'],
)

trainer.train()

I want to measure the performance of the pretrained model using perplexity or accuracy metrics, both during and after training. I have found ways to compute these for individual sentences, but I cannot find a way to do it for the complete model. My goal is to train GPT-2 from scratch to build a next-word prediction model for my native language.
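For reference, here is a minimal sketch of how perplexity and next-token accuracy could be hooked into the Trainer setup above. It reuses model, training_args, data_collator, and tokenized_dataset from the snippet, and assumes the standard causal-LM collator so that eval_loss is the mean per-token cross-entropy; the compute_metrics / preprocess_logits_for_metrics functions and the metric name are illustrative, not the only way to do this:

import math

from transformers import Trainer

def preprocess_logits_for_metrics(logits, labels):
    # Keep only the predicted token ids so the full logits tensor is not
    # accumulated in memory over the whole eval set.
    if isinstance(logits, tuple):  # some models also return past_key_values etc.
        logits = logits[0]
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    # For a causal LM, the prediction at position i is for the token at i + 1,
    # so shift before comparing; -100 marks positions to ignore (padding).
    preds, labels = preds[:, :-1], labels[:, 1:]
    mask = labels != -100
    return {"next_token_accuracy": float((preds[mask] == labels[mask]).mean())}

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized_dataset['train'],
    eval_dataset=tokenized_dataset['test'],
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
trainer.train()

# eval_loss is the mean token-level cross-entropy, so perplexity on the eval
# set is simply its exponential; the same works for the eval losses logged at
# each eval_steps interval during training.
metrics = trainer.evaluate()
print("eval perplexity:", math.exp(metrics["eval_loss"]))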

Comments (1)

呢古 2025-01-20 20:36:40

If I understand it correctly, then this tutorial shows how to calculate perplexity for the entire test set. If I see it correctly, they use the entire test corpus as one string joined by line breaks, which probably has to do with the fact that the perplexity calculation uses a sliding window over the text that came earlier in the corpus. I personally have not calculated perplexity for a model yet and am not an expert at this. In any case, you could average the sentence scores into a corpus score, although there might be issues with the logic of how that metric works as well as with the weighting, since sentences can have a different number of words; see this explanation.
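Roughly, that calculation looks like the sketch below (I have not run this myself). It assumes the test corpus has already been tokenized into one long sequence, e.g. encodings = tokenizer("\n\n".join(test_texts), return_tensors="pt"), and that model is your trained GPT-2; the stride of 512 is illustrative:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

max_length = model.config.n_positions   # context window, 1024 for GPT-2
stride = 512                            # smaller stride = more overlapping context
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc    # only score tokens not scored in a previous window
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100     # mask the overlapping context so it is not scored again

    with torch.no_grad():
        # loss is the mean negative log-likelihood over the scored tokens,
        # so multiplying by trg_len (approximately) recovers the summed NLL.
        outputs = model(input_ids, labels=target_ids)
        nlls.append(outputs.loss * trg_len)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

# Corpus perplexity: exp(total NLL / number of tokens scored).
ppl = torch.exp(torch.stack(nlls).sum() / end_loc)
print("corpus perplexity:", ppl.item())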

Also, I'm not sure if you are already aware of this, but there is also a pretrained GPT-2 model for Bengali available on Hugging Face.
