DCNN Implementation
I am implementing the DCNN model from this conference paper.
The attached figure shows the model architecture:
The input image size is (256, 256, 12), and the total number of training images is 90.
There are 2 classification categories (pixelwise classification).
Here is my code:
(I am not sure how to achieve the 50% stride setting described in the paper.)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

cnn = Sequential()
cnn.add(Conv2D(100, (8,8), input_shape = (256, 256, 12), activation = 'relu', padding = 'SAME'))
cnn.add(Conv2D(50, (1,1), activation = 'relu'))
cnn.add(Conv2D(30, (1,1), activation = 'relu'))
cnn.add(Conv2D(15, (1,1), activation = 'relu'))
cnn.add(Conv2D(1, (1,1), activation = 'softmax'))
cnn.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath="dcnn_model.h5",
    verbose=1,
    save_weights_only=True,
    monitor='val_accuracy',
    mode='max',
    save_best_only=True)

cnn.fit(x_train, y_train,
        batch_size=5,
        epochs=30,
        shuffle=True,
        validation_data=(x_val, y_val),
        callbacks=[model_checkpoint_callback])
However, the accuracy and loss seem stuck at the same values.
Epoch 1/10
30/30 [==============================] - 10s 318ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00001: val_accuracy improved from -inf to 0.75150, saving model to dcnn_model.h5
Epoch 2/10
30/30 [==============================] - 10s 330ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00002: val_accuracy did not improve from 0.75150
Epoch 3/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00003: val_accuracy did not improve from 0.75150
Epoch 4/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00004: val_accuracy did not improve from 0.75150
Epoch 5/10
30/30 [==============================] - 10s 332ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00005: val_accuracy did not improve from 0.75150
Epoch 6/10
30/30 [==============================] - 10s 331ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00006: val_accuracy did not improve from 0.75150
Epoch 7/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00007: val_accuracy did not improve from 0.75150
Epoch 8/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
Epoch 00008: val_accuracy did not improve from 0.75150
Epoch 9/10
30/30 [==============================] - 10s 333ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515
How can I revise the code?
1 Answer
The architecture you have is not exactly like the paper, but the main issue is that you have a Conv2D with a softmax activation function instead of a sigmoid. If you have a binary classification problem, you should use the sigmoid activation function and 1 unit for the final layer, instead of softmax. You can't use softmax with 1 node, because when you use a softmax the sum of the probabilities has to be 1, so if you only have 1 node the probability will always be 1, which does not make sense.
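To see why the single-channel softmax cannot learn: softmax normalizes over the last axis, so with only one channel every output is exactly 1 regardless of the logit, and the gradients carry no signal. A quick demonstration (plain TensorFlow, independent of the paper):

import tensorflow as tf

# softmax over a single value is always 1, whatever the logit is
logits = tf.constant([[0.3], [-2.0], [5.7]])
print(tf.nn.softmax(logits, axis=-1).numpy())  # [[1.], [1.], [1.]]

Applying the suggested fix to the code in the question is then a one-line change. This is only a minimal sketch under the assumptions above: layer sizes, optimizer, and loss are kept from the question, and the final layer stays a 1x1 Conv2D rather than a Dense layer because the task is pixelwise:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

cnn = Sequential()
cnn.add(Conv2D(100, (8, 8), input_shape=(256, 256, 12), activation='relu', padding='same'))
cnn.add(Conv2D(50, (1, 1), activation='relu'))
cnn.add(Conv2D(30, (1, 1), activation='relu'))
cnn.add(Conv2D(15, (1, 1), activation='relu'))
# sigmoid gives each pixel an independent probability of the positive class
cnn.add(Conv2D(1, (1, 1), activation='sigmoid'))

# binary_crossentropy pairs with the per-pixel sigmoid output;
# y_train is assumed to have shape (num_images, 256, 256, 1) with 0/1 labels
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

With this change the output shape stays (256, 256, 1), so it still matches pixelwise binary labels, and the loss should start moving instead of staying frozen at one value.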