Neural network has 75% accuracy, but bad predictions

Posted 2025-02-08 09:51:08

I'm building a convolutional neural network to predict 5 emotions from a data set of faces.
After working on the construction of the weights I was able to get an accuracy of 75%:

score = model_2_emotion.evaluate(test_datagen.flow(X_test, Y_test, batch_size = 4))
print('Accuracy: {}'.format(score[1]))
308/308 [==============================] - 17s 56ms/step - loss: 0.6139 - accuracy: 0.7575
Accuracy: 0.7575264573097229

But model_2_emotion.predict(X_test) returns this array:

array([[0.6594997 , 0.00083318, 0.19473663, 0.08065161, 0.06427888],
       [0.6610887 , 0.0008383 , 0.19332188, 0.08035047, 0.06440066],
       [0.66172844, 0.00082645, 0.19264877, 0.08032911, 0.06446711],
       ...,
       [0.66067713, 0.00084266, 0.19318439, 0.08052441, 0.06477145],
       [0.66050553, 0.00085838, 0.19319515, 0.08056776, 0.06487323],
       [0.6602842 , 0.00084602, 0.19372217, 0.08054546, 0.06460217]],
      dtype=float32)

Here we can see it is only predicting the first emotion (the first column) "correctly", with an accuracy of 60%, and from this array I produce this heat map:

Heat map

I think something is wrong here, since it always ends up at the first emotion. I got 75% accuracy but bad predictions, so does anyone know what's going on?
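For reference, one way to turn this prediction array into class labels and a confusion matrix (a minimal sketch, assuming Y_test is one-hot encoded with 5 columns and scikit-learn is available):

import numpy as np
from sklearn.metrics import confusion_matrix

probs = model_2_emotion.predict(X_test)    # shape (n_samples, 5), one probability per emotion
y_pred = np.argmax(probs, axis=1)          # predicted emotion index per sample
y_true = np.argmax(Y_test, axis=1)         # true emotion index per sample

print(np.bincount(y_pred, minlength=5))    # how often each emotion is predicted
print(confusion_matrix(y_true, y_pred))    # the matrix visualised in the heat map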


Comments (1)

淡忘如思 2025-02-15 09:51:08

Looking at your confusion matrix (this is not called a heat map), it seems like your model is only predicting a single class and that your data is unbalanced.
How many samples do you have for each class (is the data set unbalanced)?
For how many epochs does your model train?
How many neurons does your network have in its last layer (it is supposed to have 5 neurons)? A quick check for the first and last questions is sketched below.
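A minimal sketch of those checks, assuming the training labels live in a one-hot array Y_train (a hypothetical name, chosen by analogy with the Y_test from the question):

import numpy as np

# Samples per class: a strongly skewed distribution means the data set is unbalanced.
classes, counts = np.unique(np.argmax(Y_train, axis=1), return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))

# The last layer should be a Dense layer with 5 units and a softmax activation.
model_2_emotion.summary()
print(model_2_emotion.layers[-1].get_config())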

Only by looking more closely at the data/problem (and at the train/test accuracy curves over epochs) could a better suggestion be made, but your problem seems to be under-/overfitting, and you would benefit from a better theoretical basis.

Take a look at any source about the bias-variance trade-off.

https://quantdare.com/mitigating-overfitting-neural-networks/

Here are some generic tips: get more data, improve preprocessing, improve the model (more layers, different kernel sizes, skip connections, batch normalization, different optimizers/learning rates, etc.).
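If the class counts do turn out to be skewed, one common mitigation is to weight the classes during training. A rough sketch, where X_train, Y_train and train_datagen are hypothetical names chosen by analogy with the test-set names in the question:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train_labels = np.argmax(Y_train, axis=1)        # assuming one-hot Y_train
weights = compute_class_weight(class_weight='balanced',
                               classes=np.unique(y_train_labels),
                               y=y_train_labels)

model_2_emotion.fit(train_datagen.flow(X_train, Y_train, batch_size=4),
                    epochs=30,                     # example value, tune as needed
                    class_weight=dict(enumerate(weights)),
                    validation_data=test_datagen.flow(X_test, Y_test, batch_size=4))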
