DCNN Implementation

Posted 2025-02-07 17:22:48


I am implementing the DCNN model in this conference paper.

The attached figure shows the model architecture.
The input image size is (256, 256, 12), and the total number of training images is 90.
There are 2 classification categories (pixel-wise classification).
[figure: model architecture]

Here is my code:
(I am not sure how to achieve the setting of 50% stride in the paper.)

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

cnn = Sequential()
cnn.add(Conv2D(100, (8,8), input_shape = (256, 256, 12), activation = 'relu', padding = 'SAME'))
cnn.add(Conv2D(50, (1,1), activation = 'relu'))
cnn.add(Conv2D(30, (1,1), activation = 'relu'))
cnn.add(Conv2D(15, (1,1), activation = 'relu'))
cnn.add(Conv2D(1, (1,1), activation = 'softmax'))

cnn.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath="dcnn_model.h5", 
    verbose=1,
    save_weights_only=True,
    monitor='val_accuracy',
    mode='max',
    save_best_only=True)

cnn.fit(x_train,y_train,
      batch_size=5,
      epochs=30,
      shuffle=True,
      validation_data=(x_val,y_val),
      callbacks=[model_checkpoint_callback])

However, the accuracy and loss seem stuck at the same values.

Epoch 1/10
30/30 [==============================] - 10s 318ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00001: val_accuracy improved from -inf to 0.75150, saving model to dcnn_model.h5
Epoch 2/10
30/30 [==============================] - 10s 330ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00002: val_accuracy did not improve from 0.75150
Epoch 3/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00003: val_accuracy did not improve from 0.75150
Epoch 4/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00004: val_accuracy did not improve from 0.75150
Epoch 5/10
30/30 [==============================] - 10s 332ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00005: val_accuracy did not improve from 0.75150
Epoch 6/10
30/30 [==============================] - 10s 331ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00006: val_accuracy did not improve from 0.75150
Epoch 7/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00007: val_accuracy did not improve from 0.75150
Epoch 8/10
30/30 [==============================] - 10s 334ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

Epoch 00008: val_accuracy did not improve from 0.75150
Epoch 9/10
30/30 [==============================] - 10s 333ms/step - loss: 3.5711 - accuracy: 0.7658 - val_loss: 3.7894 - val_accuracy: 0.7515

How can I revise the code?


枫以 (2025-02-14 17:22:48)


The architecture you have is not exactly like the paper's, but the main issue is that your final Conv2D uses a softmax activation instead of a sigmoid. For a binary classification problem you should use a sigmoid activation on a single output unit rather than softmax. You can't combine softmax with a single output node: softmax forces the probabilities to sum to 1, so with only one node the output is always 1, which carries no information.
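A minimal sketch of the fix, keeping the question's layer sizes and changing only the output activation (this assumes TensorFlow/Keras is available; whether the rest of the architecture matches the paper is not verified here):

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D

# Same layers as in the question; the 1-filter output layer now uses
# sigmoid, so each pixel gets an independent probability in [0, 1].
cnn = Sequential()
cnn.add(Conv2D(100, (8, 8), input_shape=(256, 256, 12),
               activation='relu', padding='same'))
cnn.add(Conv2D(50, (1, 1), activation='relu'))
cnn.add(Conv2D(30, (1, 1), activation='relu'))
cnn.add(Conv2D(15, (1, 1), activation='relu'))
cnn.add(Conv2D(1, (1, 1), activation='sigmoid'))  # sigmoid, not softmax

# binary_crossentropy pairs with the per-pixel sigmoid output.
cnn.compile(optimizer='adam', loss='binary_crossentropy',
            metrics=['accuracy'])
```

With this setup, `y_train` should be per-pixel 0/1 masks of shape (N, 256, 256, 1) to match the model's output.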
