Keras MeanSquaredError: computing the loss of each individual sample

Published 2025-01-23 10:22:15


I'm trying to get the MeanSquaredError of each individual sample in my tensors.

Here is some sample code to show my problem.

import numpy as np
import tensorflow as tf

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))
srcTF = tf.convert_to_tensor(src)
tgtTF = tf.convert_to_tensor(tgt)
print(srcTF.shape, tgtTF.shape)

lf = tf.keras.losses.MeanSquaredError(reduction=tf.compat.v1.losses.Reduction.NONE)

flowResults = lf(srcTF, tgtTF)
print(flowResults.shape)

Here are the results:

(2, 5, 10) (2, 5, 10)
(2, 5)

I want to keep all the original dimensions of my tensors and just calculate the loss on the individual samples. Is there a way to do this in TensorFlow?
Note that PyTorch's torch.nn.MSELoss(reduction='none') does exactly what I want, so is there an alternative that's more like that?


Answer from 空城缀染半城烟沙 (2025-01-30 10:22:15):


Here is a way to do it:

mse = tf.keras.losses.MSE(tf.expand_dims(srcTF, axis=-1), tf.expand_dims(tgtTF, axis=-1))
print(mse.shape)  # TensorShape([2, 5, 10])

I think the key here is what counts as a sample. Since MSE is computed over the last axis, you lose that axis, because that is what gets "reduced": each entry of the (2, 5) result is the mean squared error over the 10 values in the last axis. So to get back the original shape we essentially have to take the MSE of each individual scalar, and for that we need to expand the dimensions. In effect, we are treating all of (2, 5, 10) as batch dimensions and each scalar as its own sample/prediction, which is what tf.expand_dims(<tensor>, -1) accomplishes: the new last axis has length 1, so the mean over it is just the squared error of that one value.
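The algebra behind the trick can be checked with a small NumPy sketch (NumPy stands in for TensorFlow here purely to illustrate the shapes; the random data is arbitrary):

```python
import numpy as np

src = np.random.uniform(size=(2, 5, 10))
tgt = np.random.uniform(size=(2, 5, 10))

# After expand_dims(..., -1) the last axis has length 1, so the mean
# over that axis is just the squared error of each individual scalar.
expanded = ((src[..., None] - tgt[..., None]) ** 2).mean(axis=-1)

# The same per-element loss computed directly -- this is what
# torch.nn.MSELoss(reduction='none') returns.
direct = (src - tgt) ** 2

print(expanded.shape)                 # (2, 5, 10)
print(np.allclose(expanded, direct))  # True
```

In TensorFlow itself, tf.math.squared_difference(srcTF, tgtTF) should give the same per-element result in one call, if you would rather avoid the expand_dims step.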
