MobileNet transfer learning: Grad-CAM
I am a newbie to all this so please be kind to this question :)
What I am trying to do is train a MobileNet classifier using transfer learning and then apply the Grad-CAM technique to understand what my model is looking at.
- I created a model
input_layer = tf.keras.layers.Input(shape=IMG_SHAPE)
x = preprocess_input(input_layer)
y = base_model(x)
y = tf.keras.layers.GlobalAveragePooling2D()(y)
y = tf.keras.layers.Dropout(0.2)(y)
outputs = tf.keras.layers.Dense(5)(y)
model = tf.keras.Model(inputs=input_layer, outputs=outputs)
model.summary()
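For reference, base_model and preprocess_input are not defined in the snippet above; judging from the summary below, which shows a frozen mobilenetv2_1.00_224 block and the RealDiv/Sub preprocessing ops, they are presumably set up roughly like this:

import tensorflow as tf

IMG_SHAPE = (224, 224, 3)

# Assumed setup, not shown in the original snippet: MobileNetV2 without its top,
# used as a frozen feature extractor (matches the 2,257,984 non-trainable params below).
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights='imagenet')
base_model.trainable = False

# Scales pixels to [-1, 1]; this is what shows up as the RealDiv/Sub ops in the summary.
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input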
model summary:
Model: "functional_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
tf_op_layer_RealDiv_1 (Tenso [(None, 224, 224, 3)] 0
_________________________________________________________________
tf_op_layer_Sub_1 (TensorFlo [(None, 224, 224, 3)] 0
_________________________________________________________________
mobilenetv2_1.00_224 (Functi (None, 7, 7, 1280) 2257984
_________________________________________________________________
global_average_pooling2d_1 ( (None, 1280) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 1280) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 6405
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________
- passed it to the Grad-CAM algorithm, but the Grad-CAM algorithm is not able to find the last convolutional layer
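As far as I can tell, the reason is that the whole MobileNet appears in the outer model as a single wrapped layer, so a lookup like model.get_layer('Conv_1') fails; the conv layers only become visible inside the nested sub-model, for example:

# The conv layers are not listed in the outer summary; they live inside the nested
# 'mobilenetv2_1.00_224' sub-model, so the outer model cannot find them by name.
inner = model.get_layer('mobilenetv2_1.00_224')   # the wrapped MobileNetV2
inner.summary()                                   # this summary does list the conv layers
last_conv = inner.get_layer('Conv_1')             # last Conv2D (layer name assumed from stock Keras MobileNetV2)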
Plausible solution:
If, instead of the encapsulated 'mobilenetv2_1.00_224' layer, the model contained the unwrapped MobileNet layers, the Grad-CAM algorithm would be able to find that last conv layer.
Problem:
I am not able to create a model that has the data augmentation and preprocessing layers added on top of the unwrapped MobileNet layers.
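For illustration only, the kind of workaround being considered can also be sketched without unwrapping anything: base_model already ends in the last conv feature map (None, 7, 7, 1280), and the head is just GlobalAveragePooling2D -> Dropout -> Dense, so Grad-CAM can be split into two stages. This is a rough sketch, not verified code; it reuses model and base_model from the snippet above, the head layer names come from the summary, it assumes preprocess_input is the standard MobileNetV2 one, and make_gradcam_heatmap is just an illustrative name:

import numpy as np
import tensorflow as tf

def make_gradcam_heatmap(img_array, model, base_model, pred_index=None):
    """img_array: a (1, 224, 224, 3) batch of raw RGB pixel values."""
    # Stage 2: conv feature map -> class scores, reusing the trained head layers
    # (layer names taken from the model summary above).
    head_input = tf.keras.Input(shape=(7, 7, 1280))
    x = head_input
    for name in ['global_average_pooling2d_1', 'dropout_1', 'dense_1']:
        x = model.get_layer(name)(x)
    head_model = tf.keras.Model(head_input, x)

    # Same scaling to [-1, 1] that the outer model applies (assumption: the
    # preprocess_input used above is tf.keras.applications.mobilenet_v2.preprocess_input).
    preprocessed = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.array(img_array, dtype=np.float32))

    with tf.GradientTape() as tape:
        # Stage 1: raw image -> last conv activations (the nested MobileNet's output).
        conv_output = base_model(preprocessed)
        tape.watch(conv_output)
        preds = head_model(conv_output)
        if pred_index is None:
            pred_index = tf.argmax(preds[0])
        class_channel = preds[:, pred_index]

    # Grad-CAM: channel weights are the spatially averaged gradients, followed by a
    # weighted sum over channels, ReLU, and normalisation to [0, 1].
    grads = tape.gradient(class_channel, conv_output)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    heatmap = tf.reduce_sum(conv_output[0] * pooled_grads, axis=-1)
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()

Calling make_gradcam_heatmap(img_batch, model, base_model) on a sample batch would then give a 7x7 map that can be resized and overlaid on the input image.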
Thanks in advance
Regards
Ankit
Comments (1)
@skruff see if this helps