Loading a saved model does not behave as expected when fine-tuning
I trained a PyTorch model; at the end of the first epoch the accuracy was 20% and the loss was 3.8. I trained it until the loss reached 3.2 and the accuracy was around 50%, and saved it like this:
torch.save(model.state_dict(), 'model.pth')
Then I load it like this:
model = Perceptron(input_size, output_size)
model.load_state_dict(torch.load("model.pth"))
model.to(device, dtype=torch.double)
When I start fine-tuning it on the same task, with the same optimizer and learning rate, I expect the loss to start at 3.2 and the accuracy at 50%, but it looks like the model has rolled back and starts again from a loss of 3.8 and an accuracy of 20%. Is something wrong with my code, or is there something I don't understand about fine-tuning a model?
1 Answer
First, because you want to fine-tune, the optimizer's state also needs to be saved. Saving only `model.state_dict()` discards the optimizer's per-parameter buffers (e.g. Adam's moment estimates or SGD's momentum), so the optimizer effectively restarts from scratch:
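A minimal sketch of saving such a checkpoint. The names (`model`, `optimizer`, `checkpoint.pth`) and the `nn.Linear` stand-in for the asker's `Perceptron` are illustrative, not from the original post:

```python
import torch
import torch.nn as nn

# Stand-ins for the asker's Perceptron model and its optimizer.
model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Save model AND optimizer state in one checkpoint dict, so the
# optimizer's internal buffers survive the save/load round trip.
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pth')
```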
And then:
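Restoring both state dicts before resuming training might look like this (again a sketch with illustrative names; the save step is included only so the snippet runs standalone — in practice `checkpoint.pth` was written during training):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for Perceptron(input_size, output_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Recreate the checkpoint here only to keep the snippet self-contained.
torch.save({'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict()}, 'checkpoint.pth')

# Restore both the model weights and the optimizer buffers.
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
model.train()  # back to training mode before fine-tuning
```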
Second, you need to set the random seed at the very beginning of the code, so that data shuffling, weight initialization, dropout, etc. are reproducible between runs:
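A sketch of seeding every relevant generator at the top of the script (the seed value 42 is illustrative):

```python
import random
import numpy as np
import torch

# Fix all relevant RNGs before any model or data code runs.
seed = 42  # illustrative value
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)  # harmless no-op without a GPU
```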