Why doesn't my model train with the Keras ImageDataGenerator?
I use the Keras API to train a CNN on CIFAR-10.
Here is my code:
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
conv_network = Input(shape=(32, 32, 3), name="img")
x = Conv2D(filters=32, kernel_size=(3,3), strides=2, activation="relu")(conv_network)
x = Conv2D(filters=64, kernel_size=(3,3), strides=2, activation="relu")(x)
x = Conv2D(filters=128, kernel_size=(3,3), strides=2, activation="relu")(x)
x = Flatten()(x)
x = Dense(1024, activation='relu')(x)
output = Dense(10, activation='softmax')(x)
model = tf.keras.Model(conv_network, output, name="convolutional_network")
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
I train my model using the following:
r = model.fit(x_train, y_train, epochs=25,validation_data=(x_test, y_test))
It trains successfully:
Epoch 1/25
1563/1563 [==============================] - 7s 4ms/step - loss: 1.7196 - accuracy: 0.4259 - val_loss: 1.3780 - val_accuracy: 0.5105
Epoch 2/25
1563/1563 [==============================] - 6s 4ms/step - loss: 1.2711 - accuracy: 0.5519 - val_loss: 1.2598 - val_accuracy: 0.5600
Epoch 3/25
1563/1563 [==============================] - 7s 4ms/step - loss: 1.1004 - accuracy: 0.6137 - val_loss: 1.2390 - val_accuracy: 0.5776
Epoch 4/25
1563/1563 [==============================] - 7s 4ms/step - loss: 0.9520 - accuracy: 0.6678 - val_loss: 1.2774 - val_accuracy: 0.5767
Epoch 5/25
1563/1563 [==============================] - 7s 4ms/step - loss: 0.7858 - accuracy: 0.7257 - val_loss: 1.3226 - val_accuracy: 0.5921
Epoch 6/25
1563/1563 [==============================] - 6s 4ms/step - loss: 0.6334 - accuracy: 0.7791 - val_loss: 1.5789 - val_accuracy: 0.5586
Epoch 7/25
1563/1563 [==============================] - 6s 4ms/step - loss: 0.5178 - accuracy: 0.8227 - val_loss: 1.7296 - val_accuracy: 0.5730
Epoch 8/25
1563/1563 [==============================] - 6s 4ms/step - loss: 0.4163 - accuracy: 0.8589 - val_loss: 2.0499 - val_accuracy: 0.5682
Epoch 9/25
1563/1563 [==============================] - 6s 4ms/step - loss: 0.3794 - accuracy: 0.8739 - val_loss: 2.0991 - val_accuracy: 0.5820
Epoch 10/25
1563/1563 [==============================] - 7s 4ms/step - loss: 0.3453 - accuracy: 0.8901 - val_loss: 2.3261 - val_accuracy: 0.5697
Now, when I train with an ImageDataGenerator that doesn't do any kind of augmentation, the predictions are random and it doesn't train at all:
datagen = ImageDataGenerator()
model.fit(datagen.flow(x_train, y_train, batch_size=32),
          steps_per_epoch=50000 / 32,
          epochs=10)
Results in:
Epoch 1/10
1562/1562 [==============================] - 7s 4ms/step - loss: 1.6822 - accuracy: 0.1010
Epoch 2/10
1562/1562 [==============================] - 7s 4ms/step - loss: 1.2881 - accuracy: 0.0982
Epoch 3/10
1562/1562 [==============================] - 7s 4ms/step - loss: 1.1302 - accuracy: 0.0987
Epoch 4/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.9817 - accuracy: 0.1001
Epoch 5/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.8215 - accuracy: 0.1011
Epoch 6/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.6760 - accuracy: 0.1000
Epoch 7/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.5445 - accuracy: 0.1005
Epoch 8/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.4660 - accuracy: 0.1006
Epoch 9/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.4048 - accuracy: 0.1002
Epoch 10/10
1562/1562 [==============================] - 7s 4ms/step - loss: 0.3641 - accuracy: 0.1006
What am I doing wrong here?
1 Answer
I found a solution after trial and error, but I still don't fully understand why my previous code didn't work.
What changed is that
If someone has a clear explanation of why it works now, I would be glad to hear it.
Also, is there a way to successfully train the model without using one-hot encoding?
Thank you
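The answer's mention of one-hot encoding suggests the change was to one-hot encode the labels with `to_categorical` and switch the loss to `categorical_crossentropy`. This is a hedged sketch of such a fix, not the answerer's actual code; it uses small synthetic data in place of the CIFAR-10 download so it runs quickly, and the model is a trimmed-down stand-in for the one in the question:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Synthetic stand-in for CIFAR-10: 256 RGB images, integer labels of shape (256, 1)
x_train = np.random.rand(256, 32, 32, 3).astype('float32')
y_train = np.random.randint(0, 10, size=(256, 1))

# One-hot encode the integer labels: shape (256, 1) -> (256, 10)
y_train_oh = tf.keras.utils.to_categorical(y_train, num_classes=10)

inputs = Input(shape=(32, 32, 3), name="img")
x = Conv2D(32, (3, 3), strides=2, activation='relu')(inputs)
x = Flatten()(x)
outputs = Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

# With one-hot labels, the loss becomes categorical_crossentropy
# (sparse_categorical_crossentropy expects integer labels)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

datagen = ImageDataGenerator()
history = model.fit(datagen.flow(x_train, y_train_oh, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=1, verbose=0)
```

As for training without one-hot encoding: keeping `sparse_categorical_crossentropy` but naming the metric explicitly as `metrics=['sparse_categorical_accuracy']` is a commonly suggested alternative, since the bare string `'accuracy'` has to be resolved by Keras to a concrete metric, and that resolution is one plausible place for a mismatch when labels of shape `(N, 1)` come through a generator.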