val_loss and a manually computed loss produce different values
I have a CNN classification model that uses binary cross-entropy as its loss:
optimizer_instance = Adam(learning_rate=learning_rate, decay=learning_rate / 200)
model.compile(optimizer=optimizer_instance, loss='binary_crossentropy')
We save the best model, so the most recently saved model is the one that achieved the best val_loss:
es = EarlyStopping(monitor='val_loss', mode='min', verbose=0, patience=Config.LearningParameters.Patience)
modelPath = modelFileFolder + Config.LearningParameters.ModelFileName
checkpoint = keras.callbacks.ModelCheckpoint(modelPath, monitor='val_loss',
                                             save_best_only=True,
                                             save_weights_only=False, verbose=1)
callbacks = [checkpoint,es]
history = model.fit(x=training_generator,
batch_size=Config.LearningParameters.Batch_size,
epochs=Config.LearningParameters.Epochs,
validation_data=validation_generator,
callbacks=callbacks,
verbose=1)
During training, the logs showed that val_loss had decreased to 0.41. After training finished, we loaded the best model saved during training and predicted on the validation dataset. We then computed the BCE manually and got a completely different value of 2.335.
Here is the manual loss calculation:
bce = tf.keras.losses.BinaryCrossentropy()
binaryCSELoss = bce(y_valid, preds)
print("Calculated Val Loss is: " + str(binaryCSELoss))
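As a sanity check on the manual number, BCE can also be recomputed from scratch, independent of any framework. This is a minimal pure-Python sketch (the function name `bce_manual` and the epsilon value are our own choices, not part of Keras; Keras clips similarly but with its own epsilon):

```python
import math

def bce_manual(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy: -(y*log(p) + (1-y)*log(1-p)), averaged over samples."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident, correct predictions give a small loss
print(bce_manual([1, 0], [0.9, 0.1]))  # ~0.105
```

If this hand-rolled version agrees with `tf.keras.losses.BinaryCrossentropy()` on the same `y_valid` and `preds`, the discrepancy is not in the loss formula but in what is being fed to it (ordering, shape, or model state).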
Here is the end of the training log:
10/10 [==============================] - ETA: 0s - loss: 0.0778
Epoch 40: val_loss did not improve from 0.41081
10/10 [==============================] - 4s 399ms/step - loss: 0.0778 - val_loss: 0.5413
% of marked 1 in validation: [0.51580906 0.48419094]
% of marked 1 in Test: [0.51991504 0.480085 ]
---------------------------------
Calculated Val Loss is: 2.3350689765791395
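One possible contributor (an assumption about the averaging, not something these logs confirm): a loss reported as the plain mean of per-batch means differs from the mean over all samples whenever batches have unequal sizes, as a quick arithmetic check shows:

```python
# Hypothetical per-sample losses split into unequal batches
batches = [[0.1, 0.1, 0.1, 0.1], [0.9]]

batch_means = [sum(b) / len(b) for b in batches]
mean_of_batch_means = sum(batch_means) / len(batch_means)  # (0.1 + 0.9) / 2 = 0.5

all_samples = [x for b in batches for x in b]
overall_mean = sum(all_samples) / len(all_samples)         # 1.3 / 5 = 0.26

print(mean_of_batch_means, overall_mean)
```

A gap this arithmetic produces is usually modest, though, so it would not by itself explain 0.41 vs 2.335; a shuffling generator whose `y_valid` ordering no longer matches the order of `preds` is a more likely culprit for a gap that large.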
We thought it might have something to do with the fact that we use a data generator, so that the loss is computed on the batches separately; we therefore added another test in which we do not use a data generator:
history = model.fit(x=trainX, y=y_train,
epochs=Config.LearningParameters.Epochs,
validation_data=(validateion_x,y_valid),
callbacks=callbacks,
verbose=1)
predictions_cnn = model.predict(validateion_x)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)
binaryCSELoss = bce(y_valid, predictions_cnn)
valloss = binaryCSELoss.numpy()
print("binaryCSELoss logits=false on all Val Loss is: " + str(valloss))
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
binaryCSELoss = bce(y_valid, predictions_cnn)
valloss = binaryCSELoss.numpy()
print("binaryCSELoss logits=true on all Val Loss is: " + str(valloss))
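Note that the `from_logits=True` variant is only meaningful if the model outputs raw logits. Since `model.predict` here already returns probabilities from a sigmoid/softmax output layer, passing them with `from_logits=True` applies a sigmoid a second time, which shifts the result. A pure-Python sketch of that double-sigmoid effect (helper names are ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(y, p, eps=1e-7):
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

p = 0.9  # probability the model predicted for a sample with label y=1
y = 1

correct = bce(y, p)           # treat p as a probability: -log(0.9) ~ 0.105
double  = bce(y, sigmoid(p))  # treat p as a logit: sigmoid(0.9) ~ 0.711 -> larger loss

print(correct, double)
```

This is consistent with the two printed values below differing (0.635 vs 0.707): only the `from_logits=False` number is the one comparable to the logged val_loss.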
Here is the end of that training log. Again, the losses are different:
54/54 [==============================] - ETA: 0s - loss: 0.5015
Epoch 6: val_loss did not improve from 0.66096
54/54 [==============================] - 8s 144ms/step - loss: 0.5015 - val_loss: 1.9742
% of marked 1 in validation: [0.28723404 0.71276593]
% of marked 1 in Test: [0.52077866 0.47922137]
loading Model: E:\CnnModels\2022-06-03_11-53-53\model.h5
Backend TkAgg is interactive backend. Turning interactive mode on.
binaryCSELoss logits=false on all Val Loss is: 0.6353029
binaryCSELoss logits=true on all Val Loss is: 0.7070135
How is this possible?
Comments (1)
When BCE (binary cross-entropy) is applied to a binary output, the prediction is usually understood as the pair [1 - p, p]; that is, the output layer's two values together simply represent [1 - p, p], and the class with the maximum value is the predicted one.
Sample loss functions:
https://towardsdatascience.com/where-did-the-binary-cross-entropy-loss-function-come-from-ac3de349a715
https://www.tensorflow.org/api_docs/python/tf/keras/losses/Loss
The output layer's weights and bias have shape (192, 1); printing them shows how the values change as the loss evolves during training. The loss value read from logs['loss'] is what the training loop reports for evaluation, but for your comparison you first need to map the prediction format onto the label format before computing the loss.
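Reading the answer's [1 - p, p] point concretely: if the output layer produces two columns per sample while `y_valid` is a single column of 0/1 labels, the predictions must be reduced to the positive-class column before computing BCE, or the numbers will not be comparable. A hedged sketch under that assumption (all names here are ours, not from the question's code):

```python
import math

def bce_mean(y_true, probs, eps=1e-7):
    """Mean binary cross-entropy over single-column labels and probabilities."""
    total = 0.0
    for y, p in zip(y_true, probs):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Model output as [1-p, p] rows (two output units), labels as a single column
preds_two_col = [[0.2, 0.8], [0.7, 0.3]]
y_valid = [1, 0]

p_positive = [row[1] for row in preds_two_col]  # keep only the positive-class column
print(bce_mean(y_valid, p_positive))            # BCE against p, as intended
```

If the question's `preds` really are two-column while `y_valid` is one-column, passing them to `tf.keras.losses.BinaryCrossentropy()` unreduced would broadcast into a different computation than the one logged as val_loss, which could account for part of the gap.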