How to use a custom image size for transfer learning from a TensorFlow Hub model when using ImageDataGenerator and flow_from_directory
I am trying to learn how to perform feature extraction from a pre-trained model for a transfer learning task. I am currently trying to use the MobileNet v2 feature extractor from TensorFlow Hub, although that model's expected input shape is (224, 224) while my images are 384x288x3. What I tried doing was:
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SHAPE = (384, 288)
BATCH_SIZE = 32

train_dir = '/content/drive/MyDrive/dataset_split/Training'
test_dir = '/content/drive/MyDrive/dataset_split/Test'

train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)

training_dataset = train_datagen.flow_from_directory(train_dir, target_size=IMG_SHAPE,
                                                     batch_size=BATCH_SIZE, class_mode='categorical')
print("Testing Images: ")
test_data = test_datagen.flow_from_directory(test_dir, target_size=IMG_SHAPE,
                                             batch_size=BATCH_SIZE, class_mode='categorical')

mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"

def create_model(model_url, num_classes=2):
    feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                             name="feature_extractor_layer",
                                             input_shape=IMG_SHAPE)
    model = tf.keras.Sequential([feature_extractor_layer,
                                 layers.Dense(num_classes, activation="softmax",
                                              name="output_layer")])
    return model

mobilenet_model = create_model(mobilenet_url, num_classes=2)

mobilenet_model.compile(loss='categorical_crossentropy',
                        optimizer=tf.keras.optimizers.Adam(),
                        metrics=['accuracy'])

history = mobilenet_model.fit(training_dataset, epochs=5,
                              steps_per_epoch=len(training_dataset),
                              validation_data=test_data,
                              validation_steps=len(test_data),
                              callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub",
                                                                     experiment_name="MobileNet_v2")])
I am getting the error at the following line:
mobilenet_model = create_model(mobilenet_url, num_classes=2)
The error stack trace is the following:
ValueError: Exception encountered when calling layer "feature_extractor_layer" (type KerasLayer).
in user code:
File "/usr/local/lib/python3.7/dist-packages/tensorflow_hub/keras_layer.py", line 237, in call *
result = smart_cond.smart_cond(training,
ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
Positional arguments (4 total):
* Tensor("inputs:0", shape=(None, 224, 224), dtype=float32)
* False
* False
* 0.99
Keyword arguments: {}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (4 total):
* TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
* True
* False
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Option 2:
Positional arguments (4 total):
* TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
* True
* True
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Option 3:
Positional arguments (4 total):
* TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
* False
* True
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Option 4:
Positional arguments (4 total):
* TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
* False
* False
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Call arguments received:
• inputs=tf.Tensor(shape=(None, 224, 224), dtype=float32)
• training=None
I'd like to know how I can use my own image shape for feature extraction. And if that isn't possible, how can I properly feed images of these sizes into the feature extractor?
1 Answer
You need to resize IMG_SHAPE = (384, 288) down to (224, 224), the input size mobilenet_v2 expects. Note that the extractor also requires an explicit channel dimension: its concrete functions accept (None, 224, 224, 3), while input_shape=IMG_SHAPE only declares two dimensions, which is why a call with shape (None, 224, 224) fails. One way to do the resizing inside the model is to add a Lambda layer wrapping tf.image.resize. Example code:
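A minimal sketch of that fix, assuming the same model URL and two-class softmax head as in the question:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers

IMG_SHAPE = (384, 288)

mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"

def create_model(model_url, num_classes=2):
    # Lambda layer that resizes each incoming 384x288x3 batch to the
    # 224x224x3 input the hub SavedModel expects. Note the explicit
    # channel dimension in input_shape -- omitting it is what produced
    # the (None, 224, 224) vs. (None, 224, 224, 3) mismatch above.
    resize_layer = layers.Lambda(
        lambda images: tf.image.resize(images, (224, 224)),
        input_shape=IMG_SHAPE + (3,),
        name="resize_layer")
    feature_extractor_layer = hub.KerasLayer(
        model_url, trainable=False, name="feature_extractor_layer")
    return tf.keras.Sequential([
        resize_layer,
        feature_extractor_layer,
        layers.Dense(num_classes, activation="softmax", name="output_layer"),
    ])

mobilenet_model = create_model(mobilenet_url, num_classes=2)
mobilenet_model.summary()

With the resize layer in place, flow_from_directory can keep target_size=IMG_SHAPE; each batch is scaled to 224x224 on the fly inside the model. On newer TensorFlow releases, the built-in tf.keras.layers.Resizing layer is an alternative to the Lambda.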