High val_loss and low val_accuracy when training a ResNet50 model
I've been training a ResNet50 model with some added layers of my own, but each epoch brings a higher val_loss while val_accuracy stays the same. I think the model is overfitting, but I'm not sure how to fix that. I'm using the FER2013 .jpg dataset to train and test the model.
Code:
base_model = tf.keras.applications.ResNet50(input_shape=(48,48,3), include_top=False, weights='imagenet')
#ResNet model with additional convolutional layers.
model = Sequential()
model.add(base_model)
model.add(Conv2D(32, kernel_size=(3,3), activation='relu', padding='same', input_shape=(48,48,3), data_format='channels_last'))
model.add(Conv2D(64,kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2),padding='same'))
model.add(Dropout(0.25))
model.add(Conv2D(128,kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model.add(Conv2D(128,kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
adam = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
model_info = model.fit(traindata,epochs=100,validation_data=testdata)
#base_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#model_info = base_model.fit(traindata, steps_per_epoch=449,epochs=100,validation_data=testdata,validation_steps=112)
I'm using a batch_size of 128; any help would be great.
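The snippet above never shows how `traindata` and `testdata` are built. A minimal sketch of one way to get them, assuming a hypothetical FER2013 folder layout with one sub-directory per emotion class (the directory tree below is fabricated on the fly so the sketch runs standalone; one-hot `label_mode='categorical'` matches the `categorical_crossentropy` loss used above):

```python
import os
import tempfile
import tensorflow as tf

# Hypothetical layout: fer2013/train/<class>/*.jpg — adjust to your dataset.
# Here we fabricate a tiny tree with random JPEGs so the sketch runs standalone.
root = os.path.join(tempfile.mkdtemp(), "train")
for cls in ("angry", "happy"):
    os.makedirs(os.path.join(root, cls))
    for i in range(4):
        img = tf.random.uniform((48, 48, 3), 0, 255, dtype=tf.int32)
        jpeg = tf.io.encode_jpeg(tf.cast(img, tf.uint8))
        tf.io.write_file(os.path.join(root, cls, f"{i}.jpg"), jpeg)

# One-hot ('categorical') labels match the categorical_crossentropy loss.
traindata = tf.keras.utils.image_dataset_from_directory(
    root, image_size=(48, 48), batch_size=128, label_mode="categorical")

images, labels = next(iter(traindata))
print(images.shape, labels.shape)
```

With only 8 dummy images, the single batch is smaller than `batch_size=128`; on the real dataset the batches would be 128 wide.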
Results of the first two epochs:
Epoch 1/100
98/98 [==============================] - 276s 3s/step - loss: 1.6894 - accuracy: 0.3039 - val_loss: 3.2897 - val_accuracy: 0.1737
Epoch 2/100
98/98 [==============================] - 342s 4s/step - loss: 1.4305 - accuracy: 0.3630 - val_loss: 13.5700 - val_accuracy: 0.1737
1 Answer
The output shape of the ResNet model seems to be (2, 2, 2048), so it does not make sense to apply Conv2D layers to it.
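One way to act on this, sketched here under the assumption that the goal is a 7-class FER classifier: drop the extra Conv2D/MaxPooling stack and pool the backbone's (2, 2, 2048) output directly with GlobalAveragePooling2D, freezing the pretrained backbone at first. (`weights=None` is used only so the sketch runs offline; in practice you would keep `weights='imagenet'`.)

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D

# weights=None so the sketch runs offline; in practice use weights='imagenet'.
base_model = tf.keras.applications.ResNet50(
    input_shape=(48, 48, 3), include_top=False, weights=None)
base_model.trainable = False  # freeze the pretrained backbone at first

model = Sequential([
    base_model,
    GlobalAveragePooling2D(),  # (2, 2, 2048) -> (2048,)
    Dropout(0.5),
    Dense(7, activation='softmax'),  # 7 FER2013 emotion classes
])
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              metrics=['accuracy'])
model.summary()
```

Freezing the backbone and training only the small head is one common way to reduce the kind of validation-loss blow-up shown above; the backbone can be unfrozen later for fine-tuning at a lower learning rate.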