Implementing the Unlikelihood Training loss in PyTorch
I am trying to implement the Unlikelihood Training loss proposed in the research paper "Neural Text Degeneration with Unlikelihood Training". This loss is an updated version of the negative log-likelihood loss (NLLLoss).
The main idea of this loss is to penalize unwanted (negative-candidate) tokens during training, so the model learns to assign them low probability.
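If I read the paper correctly, the per-token objective combines the usual likelihood term for the gold token with an unlikelihood term over a set of negative candidates C^t (this is my paraphrase of the paper's formula):

L^t = -\log p_\theta(x_t \mid x_{<t}) - \alpha \sum_{c \in C^t} \log\bigl(1 - p_\theta(c \mid x_{<t})\bigr)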
This is my code:
import torch

def NLLLoss(logs, targets, c, alpha=0.1):
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        # out[i] = logs[i][targets[i]]  # the original NLLLoss implementation
        out[i] = alpha * (1 - logs[i][c[i]]) * logs[i][targets[i]]
    return -out.sum() / len(out)
The commented-out line is the original NLLLoss implementation. This code runs fine, but I was wondering: is this implementation correct?
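For reference, here is a minimal way the function might be driven; the shapes and the use of F.log_softmax are my assumptions about the intended inputs:

import torch
import torch.nn.functional as F

batch, vocab = 4, 10
logits = torch.randn(batch, vocab, requires_grad=True)
logs = F.log_softmax(logits, dim=-1)           # per-token log-probabilities
targets = torch.randint(0, vocab, (batch,))    # gold token ids
c = torch.randint(0, vocab, (batch,))          # negative-candidate token ids

loss = NLLLoss(logs, targets, c, alpha=0.1)
loss.backward()
print(loss.item())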
1 Answer
No, log(1 - x) does not equal 1 - log(x). The paper's unlikelihood term is log(1 - p(c)), not 1 - log p(c), and it is added to the gold-token likelihood term rather than multiplied with it. I think what you need is something along these lines.
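Here is a minimal sketch of the correction, assuming logs holds per-token log-probabilities (e.g. from F.log_softmax), so p = exp(log p); the clamp and the function name are my own additions:

import torch

def unlikelihood_loss(logs, targets, c, alpha=0.1):
    # logs:    (batch, vocab) log-probabilities, e.g. F.log_softmax(logits, dim=-1)
    # targets: (batch,) gold token ids
    # c:       (batch,) negative-candidate token ids to discourage
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        # likelihood term for the gold token: log p(target)
        likelihood = logs[i][targets[i]]
        # unlikelihood term for the negative candidate: log(1 - p(c)),
        # where p(c) = exp(log p(c)); the clamp avoids log(0) when p(c) is close to 1
        p_c = torch.exp(logs[i][c[i]])
        unlikelihood = torch.log(torch.clamp(1.0 - p_c, min=1e-12))
        out[i] = likelihood + alpha * unlikelihood
    return -out.sum() / len(out)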