TensorFlow CNN transfer learning: validation accuracy stays consistently low

Posted 2025-01-18 10:29:25


Hello, I have been trying to create two TensorFlow models to experiment with transfer learning. I trained a CNN model on lung X-ray images for pneumonia (2 classes) using the Kaggle chest X-ray dataset.

Here is my code

import tensorflow as tf
import numpy as np
from tensorflow import keras
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt

gen = ImageDataGenerator(rescale=1./255)

train_data = gen.flow_from_directory("/Users/saibalaji/Downloads/chest_xray/train", target_size=(500,500), batch_size=32, class_mode='binary')
test_data = gen.flow_from_directory("/Users/saibalaji/Downloads/chest_xray/test", target_size=(500,500), batch_size=32, class_mode='binary')

model = keras.Sequential()

# Convolutional layer and maxpool layer 1
model.add(keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(500,500,3)))
model.add(keras.layers.MaxPool2D(2,2))

# Convolutional layer and maxpool layer 2
model.add(keras.layers.Conv2D(64, (3,3), activation='relu'))
model.add(keras.layers.MaxPool2D(2,2))

# Convolutional layer and maxpool layer 3
model.add(keras.layers.Conv2D(128, (3,3), activation='relu'))
model.add(keras.layers.MaxPool2D(2,2))

# Convolutional layer and maxpool layer 4
model.add(keras.layers.Conv2D(128, (3,3), activation='relu'))
model.add(keras.layers.MaxPool2D(2,2))

# Flatten the resulting feature maps to a 1D array
model.add(keras.layers.Flatten())

# Hidden layer with 512 neurons and ReLU activation
model.add(keras.layers.Dense(512, activation='relu'))

# Output layer with a single neuron: 0 for Normal, 1 for Pneumonia.
# The sigmoid activation keeps the output between 0 and 1.
model.add(keras.layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# model.fit_generator is deprecated in TF 2.x; model.fit accepts the generators directly
hist = model.fit(train_data,
                 steps_per_epoch=163,
                 epochs=4,
                 validation_data=test_data)

I have saved the model in .h5 format.
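
For reference, a minimal sketch of the save step (assuming the pn.h5 file name that the second notebook loads below):

# Save the trained pneumonia model to HDF5 so the transfer-learning notebook
# can reload it; 'pn.h5' matches the load_model call in the next listing.
model.save('pn.h5')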

Then I created a new notebook, loaded the Alzheimer's disease data from Kaggle, and loaded my saved pneumonia model. I copied its layers to a new model except the last layer, then froze all layers in the new model as non-trainable. Then I added an output dense layer with 4 neurons for the 4 classes and trained only that last layer for 5 epochs. But the problem is that the validation accuracy stays constant at 35%. How can I improve it?

Here is my code for the Alzheimer's model

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
import matplotlib.pyplot as plt
import numpy as np

gen = ImageDataGenerator(rescale=1./255)

# class_mode defaults to 'categorical', so labels come out one-hot encoded
traindata = gen.flow_from_directory('/Users/saibalaji/Documents/TensorFlowProjects/ad/train', target_size=(500,500), batch_size=32)
testdata = gen.flow_from_directory('/Users/saibalaji/Documents/TensorFlowProjects/ad/test', target_size=(500,500), batch_size=32)

# Load the saved pneumonia model
model = keras.models.load_model('pn.h5')

nmodel = keras.models.Sequential()

# Add all layers except the last one
for layer in model.layers[0:-1]:
    nmodel.add(layer)

# Freeze the copied layers so only the new output layer is trained
for layer in nmodel.layers:
    layer.trainable = False

nmodel.add(keras.layers.Dense(units=4, name='dense_last'))

nmodel.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.002), loss='categorical_crossentropy', metrics=['accuracy'])

hist = nmodel.fit(x=traindata, validation_data=testdata, epochs=5, steps_per_epoch=160)

Here is my prediction code.

class_labels = []
for class_label, class_mode in traindata.class_indices.items():
    print(class_label)
    class_labels.append(class_label)

def predictimage(filepath):
    test_image = image.load_img(path=filepath, target_size=(500,500))
    image_array = image.img_to_array(test_image)
    image_array = image_array / 255
    print(image_array.shape)
    image_array_exp = np.expand_dims(image_array, axis=0)
    result = nmodel.predict(image_array_exp)
    print(result)
    plt.imshow(test_image)
    plt.xlabel(class_labels[np.argmax(result)])

I also noticed that it predicts only two of the classes, even though I changed the last layer to 4 neurons and changed the loss function.
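
As an illustrative check (not part of the original post), one way to confirm that collapse is to look at the argmax distribution over the validation predictions:

# Hypothetical diagnostic: count how often each of the four classes is
# selected by argmax on the validation generator.
preds = nmodel.predict(testdata)
print(np.unique(np.argmax(preds, axis=1), return_counts=True))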

Comments (1)

紫﹏色ふ单纯 2025-01-25 10:29:25

It looks like you didn't add an activation function in your last layer.
Maybe it would be helpful to use the softmax activation function.

nmodel.add(keras.layers.Dense(units=4, activation="softmax", name='dense_last'))
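
For context, a sketch of how that suggestion could slot into the question's transfer-learning code, assuming everything else stays as posted (flow_from_directory defaults to class_mode='categorical', which matches the categorical_crossentropy loss):

# Rebuild the head with a softmax output, then recompile and retrain.
# Assumes `model` is the loaded 'pn.h5' pneumonia model and traindata/testdata
# are the generators from the question.
nmodel = keras.models.Sequential()
for layer in model.layers[:-1]:
    layer.trainable = False   # keep the copied pneumonia features frozen
    nmodel.add(layer)

nmodel.add(keras.layers.Dense(units=4, activation="softmax", name='dense_last'))

nmodel.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
               loss='categorical_crossentropy',
               metrics=['accuracy'])

hist = nmodel.fit(traindata, validation_data=testdata, epochs=5, steps_per_epoch=160)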