TFF: evaluating a federated learning model, the loss value increases dramatically
I am trying to evaluate the federated learning model, following this tutorial, as in the code below:
test_data = test.create_tf_dataset_from_all_clients().map(reshape_data).batch(2)
test_data = test_data.map(lambda x: (x['x'], x['y']))

def evaluate(server_state):
    keras_model = create_keras_model()
    keras_model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
    )
    keras_model.set_weights(server_state)
    keras_model.evaluate(test_data)

server_state = federated_algorithm.initialize()
evaluate(server_state)
>>> 271/271 [==============================] - 1s 2ms/step - loss: 23.7232 - sparse_categorical_accuracy: 0.3173
After that, I train it for multiple rounds and then evaluate:
server_state = federated_algorithm.initialize()
for round in range(20):
    server_state = federated_algorithm.next(server_state, train_data)
evaluate(server_state)
>>> 271/271 [==============================] - 1s 2ms/step - loss: 5193926.5000 - sparse_categorical_accuracy: 0.4576
I see that the accuracy increased, but the loss value is very large. Why is that, and how can I fix it?
Also, how can I see the training results of every round?
2 Answers
This can happen if the model is predicting the correct classes but with lower confidence. E.g., for label0, if the ground truth is 1 and you predict 0.45, the accuracy measure counts it as a false negative; but if your model predicts 0.51, it is counted as a true positive, while the loss value barely changes. Similarly, if label1 is 0 and you predict 0.1, the loss will be low, but if the model predicts 0.4, the loss will be higher without affecting accuracy.
What you can check is how the average predictions are trending per epoch. That may point you to the issue.
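A minimal numeric sketch of that effect, using the same SparseCategoricalCrossentropy loss and accuracy metric as in the question (the probability values here are made up purely for illustration):

import tensorflow as tf

y_true = tf.constant([1])

confident = tf.constant([[0.05, 0.95]])      # correct class, high confidence
hesitant  = tf.constant([[0.49, 0.51]])      # correct class, barely over 0.5
way_off   = tf.constant([[0.9999, 0.0001]])  # wrong class, high confidence

scce = tf.keras.losses.SparseCategoricalCrossentropy()

# Accuracy treats the first two cases identically (both predict class 1) ...
print(tf.keras.metrics.sparse_categorical_accuracy(y_true, confident).numpy())  # [1.]
print(tf.keras.metrics.sparse_categorical_accuracy(y_true, hesitant).numpy())   # [1.]

# ... but cross-entropy grows as the probability assigned to the true class
# shrinks, with no upper bound, so loss can blow up while accuracy holds steady.
print(scce(y_true, confident).numpy())  # ~0.05
print(scce(y_true, hesitant).numpy())   # ~0.67
print(scce(y_true, way_off).numpy())    # ~9.2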
Answering the second part of your question: you could call evaluate inside the for loop to see the result after every round.
To see the result every 2nd round, you could use something like the following.
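A minimal sketch, reusing federated_algorithm, train_data, and the evaluate function defined in the question:

server_state = federated_algorithm.initialize()
for round_num in range(20):
    server_state = federated_algorithm.next(server_state, train_data)
    if round_num % 2 == 0:  # evaluate every 2nd round
        print('Round', round_num)
        evaluate(server_state)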
I hope that helps you keep track of your increasing-loss problem.