Keras MeanSquaredError: computing the loss of each individual sample
I'm trying to get the MeanSquaredError of each individual sample in my tensors.
Here is some sample code to show my problem.
import numpy as np
import tensorflow as tf

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))
srcTF = tf.convert_to_tensor(src)
tgtTF = tf.convert_to_tensor(tgt)
print(srcTF, tgtTF)
lf = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)
flowResults = lf(srcTF, tgtTF)
print(flowResults)
Here are the shapes of the inputs and the result:
(2, 5, 10) (2, 5, 10)
(2, 5)
I want to keep all the original dimensions of my tensors, and just calculate loss on the individual samples. Is there a way to do this in TensorFlow?
Note that PyTorch's torch.nn.MSELoss(reduction='none') does exactly what I want, so is there an alternative that's more like that?
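In other words, the desired output is simply the elementwise squared error, keeping the full (2, 5, 10) shape. A quick NumPy sketch of the values I expect:

```python
import numpy as np

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))

# Desired result: per-element squared error, same shape as the inputs.
perSample = (src - tgt) ** 2
print(perSample.shape)  # (2, 5, 10)
```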
Answer (1)
Here is a way to do it:
I think the key here is what counts as a sample. Since the MSE is computed over the last axis, that axis is lost: it is what gets "reduced". Each point in the resulting (2, 5) tensor is the mean squared error over the 10 values in the last axis. So to get back the original shape, we essentially have to compute the MSE of each scalar, and for that we need to expand the dimensions: we treat (2, 5, 10) as batch dimensions and each scalar as its own sample/prediction, which is what tf.expand_dims(<tensor>, -1) accomplishes.
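A minimal sketch of that approach, reusing the variable names from the question:

```python
import numpy as np
import tensorflow as tf

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))
srcTF = tf.convert_to_tensor(src)
tgtTF = tf.convert_to_tensor(tgt)

lf = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)

# Add a trailing size-1 axis so each scalar becomes its own "sample";
# averaging over that axis is then just the per-element squared error.
flowResults = lf(tf.expand_dims(srcTF, -1), tf.expand_dims(tgtTF, -1))
print(flowResults.shape)  # (2, 5, 10)
```

The mean over a size-1 axis is an identity, so this returns exactly the elementwise squared errors, matching torch.nn.MSELoss(reduction='none').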