I am trying to implement the paper 'Learning with a Wasserstein Loss' (the link is https://arxiv.org/abs/1506.05439); more specifically, I am trying to implement Algorithm 1 on page 4 of the paper.
However, I don't know how to apply the result of Algorithm 1 to my model via optimizer.step(), e.g. with SGD.
For example, when we compute a loss with PyTorch, we can run a training step with code like the following.
optimizer.zero_grad()
loss.backward()
optimizer.step()
However, Algorithm 1 outputs the gradient of the Wasserstein loss with entropic regularization directly, rather than a scalar loss. Therefore I can't call loss.backward() the way I would after computing a loss.
How can I update my model using Algorithm 1?
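One approach I am considering (a minimal sketch, not taken from the paper): since Algorithm 1 returns the gradient of the loss with respect to the model's output, that gradient can be fed into autograd with `tensor.backward(gradient=...)`, which backpropagates an arbitrary upstream gradient through the network. The model, batch, and the random stand-in for Algorithm 1's output below are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)  # hypothetical model producing h(x)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(3, 10)    # hypothetical mini-batch
output = model(x)         # h(x); shape (3, 5), tracked by autograd

# Suppose Algorithm 1 produced dL/d(output); here we fake it with
# random values of the same shape just to show the mechanics.
loss_grad = torch.randn_like(output)

optimizer.zero_grad()
output.backward(gradient=loss_grad)  # backprop the external gradient
optimizer.step()                     # usual parameter update
```

If this is valid, the only change from the standard loop is replacing `loss.backward()` with `output.backward(gradient=loss_grad)`.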