System information
Using TensorFlow 2.8.0
Describe the current behavior
After calling "history = model.fit(...)", values saved in history.history['loss'] and history.history['accuracy'] are different from the ones printed on-screen at every epoch during training.
I have a custom training loop whose train_step() function returns two losses: d_loss and g_loss. The values printed on-screen when I call model.fit() are:
Epoch 1/2
110/110 [==============================] - 33s 294ms/step - d_loss: -1.5941 - g_loss: 2.2890
Epoch 2/2
110/110 [==============================] - 32s 288ms/step - d_loss: -2.5873 - g_loss: 2.8013
But printing the recorded history gives different values:

print([round(x, 4) for x in history_init.history['d_loss']])
print([round(x, 4) for x in history_init.history['g_loss']])

OUTPUT:
d_loss: [-2.7844, -2.2764]
g_loss: [2.4266, 3.3268]
Describe the expected behavior
The values recorded in history.history should match the ones printed during training, as they do with previous TF versions and with other parameters.
This has been discussed before and a bug was raised, but the problem seems to have come back since TF 2.4.1: https://github.com/tensorflow/tensorflow/issues/48408
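One plausible cause of such a mismatch (an assumption here, not a confirmed diagnosis of this issue) is an aggregation difference: the Keras progress bar displays a running mean of the per-batch values returned by train_step(), while the History callback may end up recording a different aggregation (for example, the last batch's value) when train_step() returns raw per-batch tensors instead of the results of stateful tf.keras.metrics.Mean trackers. The following TensorFlow-free sketch uses a hypothetical RunningMean class to show how the two aggregations diverge:

```python
class RunningMean:
    """Mimics the stateful averaging a Keras Mean metric performs over an epoch."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count


# Made-up per-batch loss values for illustration.
per_batch_d_loss = [-1.0, -1.5, -2.3]

mean = RunningMean()
for v in per_batch_d_loss:
    mean.update(v)

# What the progress bar prints: the running mean over the epoch.
progbar_value = mean.result()
# What may be recorded instead: the last batch's raw value.
last_batch_value = per_batch_d_loss[-1]

print(round(progbar_value, 4))  # -1.6
print(last_batch_value)         # -2.3
```

If this is indeed the cause, the workaround commonly suggested in such cases is to track each loss with a tf.keras.metrics.Mean inside train_step() and return the metrics' result() values, so both the progress bar and the history read from the same stateful aggregation.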