My custom TensorFlow loss function doesn't work. What is wrong?
I want to get this loss function working. It seems my code has a problem with batching: inside the loss function I cannot get the shape of the tensors. My code is
backbone = tf.keras.applications.resnet50.ResNet50(include_top=False, weights=None, input_shape=INPUT_SHAPE)
x = tf.keras.layers.Conv2D(filters=5, kernel_size=3, padding='same', activation='sigmoid')(backbone.output)
model = tf.keras.Model(inputs=backbone.input, outputs=x)

def custom_loss(y_true, y_pred):
    batch_loss = 0.0
    batch_cnt = len(y_true)
    for i in range(batch_cnt):
        tf.autograph.experimental.set_loop_options(shape_invariants=[(batch_loss, tf.TensorShape([None]))])
        y_true_unit = y_true[i]
        y_pred_unit = y_pred[i]
        loss = 0.0
        for j in range(18):
            for k in range(32):
                conf_true = y_true_unit[j, k, 0]
                cell_loss = tf.where(conf_true == 1, 5 * tf.math.abs(y_true_unit - y_pred_unit), 0.5 * tf.math.abs(conf_true - y_pred_unit[j, k, 0]))
                loss = tf.where(loss == 0, tf.identity(cell_loss), tf.math.add(loss, cell_loss))
        batch_loss = tf.where(batch_loss == 0, tf.identity(loss), tf.math.add(batch_loss, loss))
    return batch_loss / batch_cnt

sgd = tf.keras.optimizers.SGD(momentum=0.99)
model.compile(sgd, custom_loss)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5)
model.fit(
    train_batch,
    validation_data=val_batch,
    epochs=100,
    callbacks=[reduce_lr]
)
and the error is
ValueError: in user code:
File "C:\Users\user\anaconda3\lib\site-packages\keras\engine\training.py", line 1021, in train_function *
return step_function(self, iterator)
File "C:\Users\user\AppData\Local\Temp\ipykernel_5952\2961884429.py", line 4, in yolo_loss *
for i in range(batch_cnt):
ValueError: 'batch_loss' has shape () before the loop, which does not conform with the shape invariant (None,).
Answers (2)
You can't have complex logic inside a loss function: your loss function needs to be differentiable, and for loops and if statements prevent this. You need to write your loss function without for and if, or it will never work. To get an idea of the operations you can use and how to rewrite your loss function, check out the Keras backend. To give an idea: https://towardsdatascience.com/how-to-create-a-custom-loss-function-keras-3a89156ec69b
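To illustrate the advice, here is a minimal vectorized sketch of the loss above with the Python loops replaced by tensor operations. It assumes y_true and y_pred have shape (batch, 18, 32, 5) with channel 0 holding a confidence score, and it sums the per-channel error for "responsible" cells, which appears to be the intent of the original (the exact weighting is my assumption, not the asker's confirmed design):

```python
import tensorflow as tf

def custom_loss(y_true, y_pred):
    # Assumed shapes: (batch, 18, 32, 5); channel 0 is the confidence score.
    conf_true = y_true[..., 0:1]            # (batch, 18, 32, 1)
    abs_err = tf.math.abs(y_true - y_pred)  # element-wise error, all channels

    # Cells with conf == 1: weight the summed error of all channels by 5.
    pos_loss = 5.0 * tf.reduce_sum(abs_err, axis=-1, keepdims=True)
    # Other cells: penalize only the confidence channel, weighted by 0.5.
    neg_loss = 0.5 * abs_err[..., 0:1]

    # Select per-cell loss without any Python-level if/for.
    cell_loss = tf.where(conf_true == 1.0, pos_loss, neg_loss)

    # Average over cells and batch; works for any batch size.
    return tf.reduce_mean(cell_loss)
```

Because everything is a tensor op, AutoGraph no longer needs loop shape invariants and the function works for a dynamic batch dimension.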
I solved the problem with the code below, but I still don't know how it works.