Transfer learning model with Keras: using metrics other than accuracy
I'm working on a binary classification model for leaves from the Swedish leaf dataset and thought transfer learning could be practical. I found this tutorial, but in the compile call I want to use metrics other than accuracy. When I try to add AUC or FP/FN/TP/TN, a ValueError is raised, claiming that the shape of the true y (None, 1) and the shape of y_pred (None, 2) are incompatible.
I fail to understand:
- why would y_pred have this shape?
- how can the accuracy be calculated, but not the parts of the confusion matrix?!
A solution without a reasoned explanation is also very welcome :)
import tensorflow as tf
import tensorflow_hub as hub

# Frozen MobileNetV2 feature extractor from TF Hub (no classification head)
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
pretrained_model_without_top_layer = hub.KerasLayer(
    feature_extractor_model, input_shape=(224, 224, 3), trainable=False)

classes_num = 2
model = tf.keras.Sequential([
    pretrained_model_without_top_layer,
    tf.keras.layers.Dense(classes_num)  # raw logits, output shape (None, 2)
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    # Adding TP/FP/TN/FN (or AUC) here raises the shape ValueError described above
    metrics=[['acc'], [tf.keras.metrics.TruePositives(),
                       tf.keras.metrics.FalsePositives(),
                       tf.keras.metrics.TrueNegatives(),
                       tf.keras.metrics.FalseNegatives()]])

model.fit(X_train_scaled, y_train, steps_per_epoch=9, epochs=5)
1 Answer
If you have two classes (e.g. cats and dogs) you could either encode them sparsely as 0 or 1, or one-hot encode them as [0,1] and [1,0].
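For concreteness, a tiny illustration of the two encodings (the cat/dog labels are made up for the example):

import tensorflow as tf

y_sparse = tf.constant([0, 1, 1, 0])      # sparse: one class index per sample (0 = cat, 1 = dog)
y_onehot = tf.one_hot(y_sparse, depth=2)  # one-hot: [[1,0], [0,1], [0,1], [1,0]]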
Your training data is sparsely encoded, so your loss is SparseCategoricalCrossentropy. Metrics work essentially like losses, so any metric you use also needs to accept sparse labels. In your case, just write a "custom" metric that accepts a sparse y_true, one-hot encodes it, and passes it on to the recall/precision/etc. metric.
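Following that idea, here is a minimal sketch of such a wrapper metric, assuming the two-unit logits model from the question. The class name OneHotMetric and its num_classes argument are just names chosen for this example, not part of Keras, and reset_state is the TF 2.5+ spelling (older versions call it reset_states):

import tensorflow as tf

class OneHotMetric(tf.keras.metrics.Metric):
    # Wraps a stateful Keras metric so it accepts sparse integer labels:
    # y_true is one-hot encoded to match y_pred's (None, num_classes) shape,
    # and the logits in y_pred are turned into probabilities with softmax.
    def __init__(self, base_metric, num_classes=2, name=None, **kwargs):
        super().__init__(name=name or "sparse_" + base_metric.name, **kwargs)
        self.base_metric = base_metric
        self.num_classes = num_classes

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.one_hot(tf.cast(tf.reshape(y_true, [-1]), tf.int32),
                            depth=self.num_classes)
        y_pred = tf.nn.softmax(y_pred, axis=-1)
        self.base_metric.update_state(y_true, y_pred, sample_weight)

    def result(self):
        return self.base_metric.result()

    def reset_state(self):
        self.base_metric.reset_state()

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["acc",
             OneHotMetric(tf.keras.metrics.AUC()),
             OneHotMetric(tf.keras.metrics.TruePositives()),
             OneHotMetric(tf.keras.metrics.FalseNegatives())])

Note that with one-hot labels every sample contributes one entry per class column, so the four confusion-matrix counts add up to twice the number of samples. For a strictly binary confusion matrix you could instead keep y_true sparse and pass only the class-1 probability, tf.nn.softmax(y_pred)[:, 1:], to the underlying metric.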