How to combine multiple losses with a desired formula in TensorFlow


I have a CNN model with a single output neuron with sigmoid activation, so its value is between 0 and 1. I want to compute a combined loss for this particular output neuron.

I am using Mean Absolute Error and Mean Squared Error for this, and creating the loss like this:

loss = tf.keras.losses.MeanAbsoluteError() + tf.keras.losses.MeanSquaredError()

However, TensorFlow does not support adding loss objects together like this. Here is the error:

Traceback (most recent call last):
  File "run_kfold.py", line 189, in <module>
    loss = tf.keras.losses.MeanAbsoluteError() + tf.keras.losses.MeanSquaredError()
TypeError: unsupported operand type(s) for +: 'MeanAbsoluteError' and 'MeanSquaredError'

Can anyone suggest how to compute a combined loss for a particular output layer? This would also make it possible to combine multiple weighted losses, like this:

l_1 = 0.6
l_2 = 0.4
loss = l_1 * tf.keras.losses.MeanAbsoluteError() + l_2 * tf.keras.losses.MeanSquaredError()

I could then pass this loss variable to the model.compile() function:

model.compile(optimizer=opt,
              loss=loss,
              metrics=['accuracy', sensitivity, specificity, tf.keras.metrics.RootMeanSquaredError(name='rmse')])


Comments (1)

禾厶谷欠 2025-02-17 01:24:38


You can write a function that uses MeanAbsoluteError() and MeanSquaredError(), computes the combined loss, and returns it:

import tensorflow as tf

# model = your_model
...

def custom_loss(y_true, y_pred):
    # Weights for the two loss terms
    l_1 = 0.6
    l_2 = 0.4
    mae = tf.keras.losses.MeanAbsoluteError()
    mse = tf.keras.losses.MeanSquaredError()
    loss_mae = mae(y_true, y_pred)
    loss_mse = mse(y_true, y_pred)
    # Weighted sum of MAE and MSE
    total_loss = l_1 * loss_mae + l_2 * loss_mse
    return total_loss


model.compile(loss=custom_loss,
              optimizer='Adam')

model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS)
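
Alternatively, if you want the weights to be configurable (and the loss to carry its own name when the model is compiled or saved), you can subclass tf.keras.losses.Loss. The sketch below is a minimal illustration assuming the same 0.6/0.4 weighting; the class name WeightedMAEMSE and its keyword arguments are made up for this example. It uses the functional losses, which return per-sample values so Keras can apply its usual reduction:

import tensorflow as tf

class WeightedMAEMSE(tf.keras.losses.Loss):
    # Weighted sum of MAE and MSE; the default weights here are illustrative
    def __init__(self, w_mae=0.6, w_mse=0.4, name="weighted_mae_mse"):
        super().__init__(name=name)
        self.w_mae = w_mae
        self.w_mse = w_mse

    def call(self, y_true, y_pred):
        # The functional losses return per-sample values; Keras applies the reduction
        mae = tf.keras.losses.mean_absolute_error(y_true, y_pred)
        mse = tf.keras.losses.mean_squared_error(y_true, y_pred)
        return self.w_mae * mae + self.w_mse * mse

# Usage (opt, sensitivity and specificity are assumed to exist as in the question)
model.compile(optimizer=opt,
              loss=WeightedMAEMSE(w_mae=0.6, w_mse=0.4),
              metrics=['accuracy', sensitivity, specificity,
                       tf.keras.metrics.RootMeanSquaredError(name='rmse')])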