Why does VGG16 report a smaller total number of parameters?
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D

pre_trained_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

model = Sequential()
model.add(pre_trained_model)            # VGG16 convolutional base (no FC head)
model.add(GlobalAveragePooling2D())     # (7, 7, 512) -> (512,)
model.add(Flatten())                    # no-op here; the output is already flat
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
The total number of parameters in VGG16 is 138 million. However, model.summary() reports only 14,977,857 parameters. Can anyone explain why there is a difference in the total parameter count? Even when I check the total number of parameters in pre_trained_model, it is not equal to 138 million.
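For reference, both counts mentioned above can be read off programmatically; count_params() is a standard Keras Model method (a minimal check, using the model built above):

print(pre_trained_model.count_params())  # conv base only: 14,714,688
print(model.count_params())              # conv base + custom head: 14,977,857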
You have include_top=False set, which drops the top FC layers of VGG16. If you set include_top=True and check pre_trained_model.summary(), you will see these lines at the bottom, and with them you have the desired 138M parameters:
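(Reproduced from the standard Keras VGG16; exact summary formatting may vary slightly across TensorFlow versions.)

fc1 (Dense)                  (None, 4096)              102764544
fc2 (Dense)                  (None, 4096)              16781312
predictions (Dense)          (None, 1000)              4097000
=================================================================
Total params: 138,357,544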
The lesson learned here: the majority of the parameters in this network actually come from the FC layers. Incidentally, this fact once again demonstrates how lightweight convolutional layers are compared with fully connected ones.
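A quick back-of-the-envelope check confirms both figures (a sketch; the layer widths are the standard VGG16 ones, and the 512-unit input to the custom head comes from GlobalAveragePooling2D):

# VGG16 convolutional base (include_top=False): 14,714,688 parameters
conv_base = 14_714_688

# FC head of the full VGG16 (include_top=True); fc1 sees 7*7*512 = 25088 inputs
fc1 = 25088 * 4096 + 4096           # 102,764,544
fc2 = 4096 * 4096 + 4096            #  16,781,312
predictions = 4096 * 1000 + 1000    #   4,097,000
print(conv_base + fc1 + fc2 + predictions)   # 138,357,544 -> the "138M" figure

# The custom head from the question (512 pooled features in):
dense_512 = 512 * 512 + 512         # 262,656
dense_1 = 512 * 1 + 1               # 513
print(conv_base + dense_512 + dense_1)       # 14,977,857 -> the reported count

Note that fc1 alone accounts for roughly 74% of the 138M total.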