How to do transfer learning with a custom image size using a TensorFlow Hub feature extractor together with ImageDataGenerator and flow_from_directory

Asked 2025-02-06 18:33:32


I am trying to learn how to perform feature extraction from a pre-trained model for a transfer learning task. I am currently trying to use the MobileNet v2 feature extractor from TensorFlow Hub, although the model's native input shape is (224, 224) while my images are 384x288x3. What I tried was:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator


IMG_SHAPE = (384, 288)
BATCH_SIZE = 32

train_dir = '/content/drive/MyDrive/dataset_split/Training'
test_dir = '/content/drive/MyDrive/dataset_split/Test'


train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)


training_dataset = train_datagen.flow_from_directory(train_dir, target_size=IMG_SHAPE,
                                                     batch_size=BATCH_SIZE, class_mode='categorical')


print("Testing Images: ")
test_data = test_datagen.flow_from_directory(test_dir, target_size=IMG_SHAPE,
                                             batch_size=BATCH_SIZE, class_mode='categorical')
    
mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"



def create_model(model_url, num_classes=2):
  feature_extractor_layer = hub.KerasLayer(model_url, trainable=False, name="feature_extractor_layer", input_shape=IMG_SHAPE)
  model = tf.keras.Sequential([feature_extractor_layer, layers.Dense(num_classes, activation="softmax", name="output_layer")])
  return model
        
mobilenet_model = create_model(mobilenet_url, num_classes=2)



mobilenet_model.compile(loss='categorical_crossentropy',
                             optimizer=tf.keras.optimizers.Adam(),
                             metrics=['accuracy'])


# create_tensorboard_callback is a custom helper defined elsewhere in my notebook.
history = mobilenet_model.fit(training_dataset, epochs=5, steps_per_epoch=len(training_dataset), validation_data=test_data,
                              validation_steps=len(test_data),
                              callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub",
                                                                     experiment_name="MobileNet_v2")])

I am getting the error at the following line:

mobilenet_model = create_model(mobilenet_url, num_classes=2)

The error stacktrace is the following:

ValueError: Exception encountered when calling layer "feature_extractor_layer" (type KerasLayer).

in user code:

    File "/usr/local/lib/python3.7/dist-packages/tensorflow_hub/keras_layer.py", line 237, in call  *
        result = smart_cond.smart_cond(training,

    ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
      Positional arguments (4 total):
        * Tensor("inputs:0", shape=(None, 224, 224), dtype=float32)
        * False
        * False
        * 0.99
      Keyword arguments: {}
    
     Expected these arguments to match one of the following 4 option(s):
    
    Option 1:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * True
        * False
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 2:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * True
        * True
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 3:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * False
        * True
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 4:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * False
        * False
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}


Call arguments received:
  • inputs=tf.Tensor(shape=(None, 224, 224), dtype=float32)
  • training=None

I'd like to know: how can I use my own image shape for feature extraction? And if that isn't possible, how can I properly feed images of this size into the feature extractor?
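
The detail that matters in the trace is the rank of the input: the hub layer received (None, 224, 224), a rank-3 tensor with no channel axis, while all four accepted signatures require rank-4 (None, 224, 224, 3). Passing a 2-tuple as input_shape to hub.KerasLayer is what drops the channel axis (the trace shown appears to come from a run with a (224, 224) tuple; with IMG_SHAPE = (384, 288) the received shape would be (None, 384, 288) and fail the same way). A minimal sketch of the rank difference, using hypothetical stand-alone inputs:

import tensorflow as tf

# input_shape=(H, W) yields a rank-3 batch with no channel axis;
# the hub SavedModel only accepts rank-4 (None, 224, 224, 3).
bad = tf.keras.Input(shape=(384, 288))      # KerasTensor of shape (None, 384, 288)
good = tf.keras.Input(shape=(224, 224, 3))  # KerasTensor of shape (None, 224, 224, 3)
print(bad.shape, good.shape)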


Answered by 月依秋水 on 2025-02-13 18:33:32


You need to resize the (384, 288) input to (224, 224) before it reaches mobilenet_v2, whose SavedModel signature is fixed at (None, 224, 224, 3). One way to do the resizing inside the model itself is to add a Lambda layer that wraps tf.image.resize:

def create_model(model_url, num_classes=2):
    # Accept images at their native size, then resize in-graph to the
    # 224x224x3 input that the MobileNet v2 feature extractor expects.
    inp = tf.keras.layers.Input((384, 288, 3))
    resize_img = tf.keras.layers.Lambda(lambda image: tf.image.resize(image, (224, 224)))

    feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                             name="feature_extractor_layer",
                                             input_shape=(224, 224, 3))

    model = tf.keras.Sequential([
        inp,
        resize_img,
        feature_extractor_layer,
        tf.keras.layers.Dense(num_classes,
                              activation="softmax",
                              name="output_layer")
    ])
    return model
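
On newer TensorFlow versions (roughly 2.6+; earlier 2.x exposes the same layer as tf.keras.layers.experimental.preprocessing.Resizing), a built-in Resizing layer is an alternative to the Lambda, and it avoids the serialization caveats Lambda layers can have when saving and reloading models. A sketch under that version assumption (create_model_resizing is just an illustrative name):

import tensorflow as tf
import tensorflow_hub as hub

def create_model_resizing(model_url, num_classes=2):
    # Same structure as above, but the resize step is a built-in layer
    # (tf.keras.layers.Resizing, TF 2.6+) instead of a Lambda.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(384, 288, 3)),
        tf.keras.layers.Resizing(224, 224),
        hub.KerasLayer(model_url, trainable=False,
                       name="feature_extractor_layer"),
        tf.keras.layers.Dense(num_classes, activation="softmax",
                              name="output_layer"),
    ])
    return model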

Example code:

import os
import numpy
from PIL import Image
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Build a small synthetic dataset of random 384x288 RGB images, laid out
# in the per-class directory structure that flow_from_directory expects.
for loc, rep in zip(['training', 'test'], [20, 10]):
    for idx, c in enumerate([f'c/{loc}/1/', f'c/{loc}/2/'] * rep):
        os.makedirs(c, exist_ok=True)  # make sure the class directory exists
        array = numpy.random.rand(384, 288, 3) * 255
        img = Image.fromarray(array.astype('uint8')).convert('RGB')
        img.save('{}img_{}.png'.format(c, idx))

IMG_SHAPE = (384, 288)
BATCH_SIZE = 32

train_dir = 'c/training'
test_dir = 'c/test'


train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)


training_dataset = train_datagen.flow_from_directory(train_dir, target_size=IMG_SHAPE,
                                                     batch_size=BATCH_SIZE, class_mode='categorical')


test_dataset = test_datagen.flow_from_directory(test_dir, target_size=IMG_SHAPE,
                                             batch_size=BATCH_SIZE, class_mode='categorical')
    
mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"



def create_model(model_url, num_classes=2):
    # Native-size input, resized in-graph to 224x224 for the feature extractor.
    inp = tf.keras.layers.Input((384, 288, 3))
    resize_img = tf.keras.layers.Lambda(lambda image: tf.image.resize(image, (224, 224)))

    feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                             name="feature_extractor_layer",
                                             input_shape=(224, 224, 3))

    model = tf.keras.Sequential([
        inp,
        resize_img,
        feature_extractor_layer,
        tf.keras.layers.Dense(num_classes,
                              activation="softmax",
                              name="output_layer")
    ])
    return model
        

# num_classes must match the number of class sub-directories that
# flow_from_directory discovers (the run recorded below found 3 classes).
mobilenet_model = create_model(mobilenet_url, num_classes=3)
mobilenet_model.compile(loss='categorical_crossentropy',
                        optimizer=tf.keras.optimizers.Adam(),
                        metrics=['accuracy'])
history = mobilenet_model.fit(training_dataset, epochs=5, steps_per_epoch=len(training_dataset),
                              validation_data=test_dataset, validation_steps=len(test_dataset))

Output:

Found 40 images belonging to 3 classes.
Found 20 images belonging to 3 classes.
Epoch 1/5
2/2 [==============================] - 18s 7s/step - loss: 0.9844 - accuracy: 0.5000 - val_loss: 0.8181 - val_accuracy: 0.5500
Epoch 2/5
2/2 [==============================] - 5s 4s/step - loss: 0.7603 - accuracy: 0.5250 - val_loss: 0.7505 - val_accuracy: 0.4500
Epoch 3/5
2/2 [==============================] - 4s 2s/step - loss: 0.7311 - accuracy: 0.4750 - val_loss: 0.7383 - val_accuracy: 0.4500
Epoch 4/5
2/2 [==============================] - 2s 1s/step - loss: 0.7099 - accuracy: 0.5250 - val_loss: 0.7220 - val_accuracy: 0.4500
Epoch 5/5
2/2 [==============================] - 2s 1s/step - loss: 0.6894 - accuracy: 0.5000 - val_loss: 0.7162 - val_accuracy: 0.5000
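
After training, a quick way to confirm that the model really accepts the native image size (a minimal sketch, assuming the mobilenet_model from the example above is still in scope):

import numpy as np

# Smoke test: the model takes native 384x288 RGB input directly,
# because the resize to 224x224 happens inside the graph.
sample = np.random.rand(1, 384, 288, 3).astype('float32')  # scaled to [0, 1] like the generators
probs = mobilenet_model.predict(sample)
print(probs.shape)  # (1, 3) here: one softmax row per input image
print(probs.sum())  # ~1.0, since the output layer is a softmax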