Evaluating BERT model parameters: requires_grad
I have a doubt regarding evaluation on the test set of my BERT model. During the eval phase, is param.requires_grad supposed to be True or False, independently of whether I did a full fine-tuning during training? My model is in model.eval() mode, but I want to be sure I am not forcing anything wrong in the Model() class when I call it for evaluation. Thanks!
if freeze_bert == 'True':
    for param in self.bert.parameters():
        param.requires_grad = False
        # logging.info('freeze_bert: {}'.format(freeze_bert))
        # logging.info('param.requires_grad: {}'.format(param.requires_grad))
if freeze_bert == 'False':
    for param in self.bert.parameters():
        param.requires_grad = True
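To see concretely why the flag does not matter at evaluation time, here is a minimal sketch (using a small nn.Linear as a hypothetical stand-in for the BERT backbone): model.eval() only changes layer behaviour such as dropout and batch-norm and leaves requires_grad untouched, while torch.no_grad() disables graph construction entirely, so evaluation is safe with either value of requires_grad.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the fine-tuned BERT backbone;
# any nn.Module behaves the same way.
model = nn.Linear(4, 2)

# After (full) fine-tuning, every parameter still requires gradients.
assert all(p.requires_grad for p in model.parameters())

# eval() switches layers such as dropout and batch-norm to inference
# behaviour; it does NOT touch requires_grad.
model.eval()
assert all(p.requires_grad for p in model.parameters())

# Wrapping the forward pass in torch.no_grad() disables graph
# construction, so the parameters' requires_grad flags are irrelevant
# during evaluation.
with torch.no_grad():
    out = model(torch.randn(3, 4))
assert not out.requires_grad
```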
1 Answer
If you freeze your model, then the parameters of the corresponding modules must not be updated, i.e. they should not require gradient computation: requires_grad=False. Note that nn.Module also has a requires_grad_ method. Ideally, freeze_bert would be a boolean, and you would simply set requires_grad for every parameter from that single flag.
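The suggestion above can be sketched as follows; nn.Linear stands in for self.bert (hypothetical), and requires_grad_ is the in-place nn.Module method mentioned in the answer:

```python
import torch.nn as nn

# Hypothetical stand-in for self.bert; requires_grad_ works the same on
# transformers' BertModel or any other nn.Module.
bert = nn.Linear(4, 4)

freeze_bert = True  # a real boolean instead of the strings 'True' / 'False'

# requires_grad_ sets requires_grad on every parameter of the module in
# place, replacing the two if-branches with a single call.
bert.requires_grad_(not freeze_bert)

frozen = all(not p.requires_grad for p in bert.parameters())
print(frozen)  # True when freeze_bert is True
```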