Is there a way to flip the effect of the cross-entropy loss?
I have a language model, and I want to train it so that it does not generate a specific text. Thus, I have two losses: one that I want to decrease (loss1) and another that I want to increase (loss2):
loss1 = outputs['loss1']        # loss to minimize as usual
loss2 = 1 - outputs['loss2']    # intended to make this loss increase
loss = loss1 + loss2            # combined objective passed to the optimizer
My question is: is subtracting loss2 from 1 the correct way to make it increase instead of decrease?
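For context, here is a minimal sketch of how such a combined objective might sit inside a PyTorch-style training step. The names model, batch, and optimizer are placeholders and not part of the original question; it assumes the forward pass returns a dict containing the scalar tensors 'loss1' and 'loss2' described above.

def training_step(model, batch, optimizer):
    # Assumption: the forward pass returns {'loss1': tensor, 'loss2': tensor}
    outputs = model(**batch)
    loss1 = outputs['loss1']        # loss to minimize as usual
    loss2 = 1 - outputs['loss2']    # the question's formulation for the loss to increase
    loss = loss1 + loss2            # the optimizer minimizes this combined value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()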