Does adding my custom loss in PyTorch mess up autograd?

I'm trying to use two different losses: MSELoss for some of my labels and a custom loss for the others. I then sum these losses together before backprop. My model prints out the same loss after every epoch, so I must be doing something wrong. Any help is appreciated! I suspect my implementation is messing up PyTorch's autograd. See the code below:

mse_loss = torch.nn.MSELoss()
...
loss1 = mse_loss(preds[:,(0,1,3)], label[:,(0,1,3)])
print("loss1", loss1)
loss2 = my_custom_loss(preds[:,2], label[:,2])
print("loss2", loss2)
print("summing losses")
loss = sum([loss1, loss2]) # tensor + float = tensor
print("loss sum", loss)
loss = torch.autograd.Variable(loss, requires_grad=True)
print("loss after Variable(loss, requires_grad=True)", loss)

These print statements yield:

loss1 tensor(4946.1221, device='cuda:0', grad_fn=<MseLossBackward0>)
loss2 34.6672
summing losses
loss sum tensor(4980.7891, device='cuda:0', grad_fn=<AddBackward0>)
loss after Variable(loss, requires_grad=True) tensor(4980.7891, device='cuda:0', requires_grad=True)

My custom loss function is below:

def my_custom_loss(preds, label):
    angle_diff = preds - label
    # /2 to bring angle diff between -180<theta<180
    half_angle_diff = angle_diff.detach().cpu().numpy()/2
    sine_diff = np.sin(half_angle_diff)
    square_sum = np.nansum(sine_diff**2)
    return square_sum


Answer by 南…巷孤猫 (2025-02-15 21:19:17):

The reason you are not backpropagating through your second loss is that you haven't defined it as a differentiable operation. You should stick with PyTorch operators and avoid switching to NumPy: the detach().cpu().numpy() call cuts the result off from the computation graph.

Something like this will work:

def my_custom_loss(preds, label):
    half_angle_diff = (preds - label)/2
    sine_diff = torch.sin(half_angle_diff)
    square_sum = torch.nansum(sine_diff**2)
    return square_sum

You can check that your custom loss is differentiable with dummy inputs:

>>> preds = torch.rand(1,3,10,10, requires_grad=True)
>>> label = torch.rand(1,3,10,10)
>>> my_custom_loss(preds, label)
tensor(11.7584, grad_fn=<NansumBackward0>)

Notice the grad_fn attribute, which shows the output tensor is indeed attached to a computational graph, so you can backpropagate from it.
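As a quick sanity check (reusing the dummy preds and label from the snippet above), calling backward() on that output should populate preds.grad:

>>> loss = my_custom_loss(preds, label)
>>> loss.backward()
>>> preds.grad.shape
torch.Size([1, 3, 10, 10])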

Additionally, you should not use Variable, as it is now deprecated. Worse, wrapping the loss in Variable(loss, requires_grad=True) creates a new leaf tensor that is detached from your model's computation graph; you can see in your own output that grad_fn=<AddBackward0> was replaced by a bare requires_grad=True. Calling backward() on that leaf computes no gradients for your model parameters, which is why your loss never changes between epochs.
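Putting it together, a minimal sketch of the corrected training step might look like this (the model, optimizer, and data below are made-up placeholders just to make the example runnable; only the loss handling is the point):

import torch

def my_custom_loss(preds, label):
    # Pure PyTorch ops, so autograd tracks every step
    half_angle_diff = (preds - label)/2
    sine_diff = torch.sin(half_angle_diff)
    return torch.nansum(sine_diff**2)

# Hypothetical setup, for illustration only
model = torch.nn.Linear(8, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
mse_loss = torch.nn.MSELoss()
inputs = torch.rand(16, 8)
label = torch.rand(16, 4)

preds = model(inputs)
loss1 = mse_loss(preds[:,(0,1,3)], label[:,(0,1,3)])
loss2 = my_custom_loss(preds[:,2], label[:,2])
loss = loss1 + loss2  # tensor + tensor: grad_fn is preserved, no Variable needed

optimizer.zero_grad()
loss.backward()       # gradients flow through both loss terms
optimizer.step()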
