Problem implementing InceptionV3 as a binary classifier - transfer learning with PyTorch
I'm having trouble getting Inception v3 to work as a feature extractor with a binary classifier in PyTorch. I updated both the primary and the auxiliary net of the Inception model to two classes, as shown below, but I run into an error:
import torch
import torch.nn as nn
from torchvision import models

# Parameters for Inception V3
num_classes = 2
model_ft = models.inception_v3(pretrained=True)
# set_parameter_requires_grad(model_ft, feature_extract)
# handle auxiliary net
num_ftrs = model_ft.AuxLogits.fc.in_features
model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
# handle primary net
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, num_classes)
# input_size = 299
# simulate data input
x = torch.rand([64, 3, 299, 299])
# create model with inception backbone
backbone = model_ft
num_filters = backbone.fc.in_features
layers = list(backbone.children())[:-1]
feature_extractor = nn.Sequential(*layers)
# use the pretrained model to classify damage (2 classes)
num_target_classes = 2
classifier = nn.Linear(num_filters, num_target_classes)
feature_extractor.eval()
with torch.no_grad():
    representations = feature_extractor(x).flatten(1)
    x = classifier(representations)
The error:
RuntimeError Traceback (most recent call last)
<ipython-input-54-c2be64b8a99e> in <module>()
11 feature_extractor.eval()
12 with torch.no_grad():
---> 13 representations = feature_extractor(x)
14 x = classifier(representations)
9 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
442 _pair(0), self.dilation, self.groups)
443 return F.conv2d(input, weight, bias, self.stride,
--> 444 self.padding, self.dilation, self.groups)
445
446 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [64, 2]
Before I updated the classes to 2 (when it was still 1000) I got the same error, just with size [64, 1000]. This method of creating a backbone and adding a classifier works for ResNet, but it doesn't work here. I think it's because of the auxiliary net structure, but I'm not sure how to update it to handle the dual outputs. Thanks!
Comments (1)
Building `feature_extractor` via the `children()` call at the line `layers = list(backbone.children())[:-1]` only brings the child modules over from `backbone` into `feature_extractor`; it does not bring over the operations performed in the `forward()` function. Let's take a look at the code below:
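(The snippet referred to here was not preserved on this page; the following is a minimal stand-in. The names `model` / `new_model` follow the answer's wording, but the module itself is hypothetical.)

```python
import torch
import torch.nn as nn

# Hypothetical minimal module: the flatten happens inside forward(),
# not inside any child module.
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        x = self.pool(self.conv(x))
        x = torch.flatten(x, 1)  # this op lives only in forward()
        return self.fc(x)

model = Model()
x = torch.rand(4, 3, 32, 32)
print(model(x).shape)  # torch.Size([4, 2])

# Rebuilding from children() copies the blocks but loses the flatten:
new_model = nn.Sequential(*list(model.children()))
failed = False
try:
    new_model(x)
except RuntimeError:
    failed = True  # fc receives a 4D [4, 8, 1, 1] tensor instead of [4, 8]
print("new_model failed:", failed)
```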
Modules `model` and `new_model` have the same blocks but do not work the same way. In `new_model`, the output from the pooling layer has not been squeezed yet, so the shape of the linear layer's input violates its assumption, which causes the error. In your case, the last few lines are redundant, and that is why you get the error: you already created a new `fc` in the InceptionV3 module at the line `model_ft.fc = nn.Linear(num_ftrs, num_classes)`. Therefore, replacing that last part with the code below should work fine: