Why does VGG16 report fewer total parameters?

Posted 2025-01-25 09:36:18

from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Flatten, Dense

pre_trained_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

model = Sequential()
model.add(pre_trained_model)
model.add(GlobalAveragePooling2D())
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()


The total number of parameters in VGG16 is 138 million. However, when I check the parameter count, it reports only 14,977,857. Can anyone explain why the total differs? Even when I check the total number of parameters in pre_trained_model alone, it is also not equal to 138 million.
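For reference, the reported 14,977,857 can be reproduced by hand from the layer shapes alone. This is a sketch, not Keras output: it assumes the standard figure of 14,714,688 parameters for the VGG16 conv base with include_top=False, and counts a Dense layer as weights plus biases.

```python
# Sketch: reproduce the reported 14,977,857 from layer shapes.
# Assumes the VGG16 conv base (include_top=False) has 14,714,688
# parameters, the standard Keras figure for this configuration.

def dense_params(n_in, n_out):
    """Parameters of a fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

conv_base = 14_714_688                  # VGG16 without the top FC layers
head = (
    0                                   # GlobalAveragePooling2D: no params
    + 0                                 # Flatten: no params
    + dense_params(512, 512)            # Dense(512) on 512 pooled channels
    + dense_params(512, 1)              # Dense(1) sigmoid output
)
print(conv_base + head)  # 14977857
```

So the custom head adds only about 263 thousand parameters on top of the conv base.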


Comments (1)

萌酱 2025-02-01 09:36:18


You have the include_top=False parameter set, which drops the top FC layers of VGG16. If you set include_top=True and check pre_trained_model.summary(), you will see these lines at the bottom:

 flatten (Flatten)           (None, 25088)             0
 fc1 (Dense)                 (None, 4096)              102764544
 fc2 (Dense)                 (None, 4096)              16781312
 predictions (Dense)         (None, 1000)              4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544

And now you have the desired 138M parameters.

The lesson learned here: the majority of the parameters in this network actually come from the FC layers. Incidentally, this fact once again demonstrates how lightweight convolutional layers are compared with FC ones.
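That split can be verified arithmetically. The sketch below tallies the parameters of each 3x3 conv and each FC layer; the channel widths follow the published VGG16 architecture, and the 25088 FC input comes from flattening the final 7x7x512 feature map of a 224x224x3 input.

```python
# Sketch: tally VGG16's parameters per layer to see where 138M comes from.
# Channel transitions follow the published VGG16 architecture.

def conv3x3_params(c_in, c_out):
    return 3 * 3 * c_in * c_out + c_out     # kernel weights + biases

def dense_params(n_in, n_out):
    return n_in * n_out + n_out             # weights + biases

# Conv channel transitions, block by block (2-2-3-3-3 layers).
conv_pairs = [(3, 64), (64, 64),
              (64, 128), (128, 128),
              (128, 256), (256, 256), (256, 256),
              (256, 512), (512, 512), (512, 512),
              (512, 512), (512, 512), (512, 512)]
conv_total = sum(conv3x3_params(a, b) for a, b in conv_pairs)

# Top FC layers: flatten of 7x7x512 = 25088 features, then fc1, fc2, predictions.
fc_total = (dense_params(25088, 4096)       # fc1: 102,764,544
            + dense_params(4096, 4096)      # fc2: 16,781,312
            + dense_params(4096, 1000))     # predictions: 4,097,000

print(conv_total)             # 14714688 -> what include_top=False keeps
print(conv_total + fc_total)  # 138357544 -> the full model
print(fc_total / (conv_total + fc_total))   # FC layers hold ~89% of parameters
```

So the three FC layers alone account for roughly 89% of the 138,357,544 total, which is exactly the gap the question observes.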
