RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[4, 1, 1080, 1920] to have 64 channels, but got 1 channels instead


I want to train a U-Net segmentation model on the German Asphalt Pavement Distress (GAPs) dataset. I'm trying to modify the model at https://github.com/khanhha/crack_segmentation to train on that dataset.

Here is the folder containing all the related files and folders:
https://drive.google.com/drive/folders/14NQdtMXokIixBJ5XizexVECn23Jh9aTM?usp=sharing

I modified the training file and renamed it "train_unet_GAPs.py". When I try to train on Colab using the following command:

!python /content/drive/Othercomputers/My\ Laptop/crack_segmentation_khanhha/crack_segmentation-master/train_unet_GAPs.py -data_dir "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/GAPs/" -model_dir /content/drive/Othercomputers/My\ Laptop/crack_segmentation_khanhha/crack_segmentation-master/model/ -model_type resnet101

I get the following error:

total images = 2410
create resnet101 model
Downloading: "https://download.pytorch.org/models/resnet101-63fe2227.pth" to /root/.cache/torch/hub/checkpoints/resnet101-63fe2227.pth
100% 171M/171M [00:00<00:00, 212MB/s]
Started training model from epoch 0
Epoch 0:   0% 0/2048 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/train_unet_GAPs.py", line 259, in <module>
    train(train_loader, model, criterion, optimizer, validate, args)
  File "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/train_unet_GAPs.py", line 118, in train
    masks_pred = model(input_var)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/unet/unet_transfer.py", line 224, in forward
    conv2 = self.conv2(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py", line 144, in forward
    out = self.conv1(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 447, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 444, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[4, 1, 1080, 1920] to have 64 channels, but got 1 channels instead
Epoch 0:   0% 0/2048 [00:08<?, ?it/s]

I think this is because the images in the GAPs dataset are grayscale (one channel), while ResNet expects RGB images with 3 channels.

How can I solve this issue? How can I modify the model to receive grayscale images instead of RGB images? I need help with that. I have no experience with PyTorch, and I think this implementation uses the built-in ResNet model.
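
To illustrate what I mean, here is a minimal reproduction on my side, using torchvision's built-in ResNet directly rather than the repository code:

import torch
import torchvision

resnet = torchvision.models.resnet101(pretrained=False)
x_gray = torch.rand(4, 1, 224, 224)   # a batch of 1-channel (grayscale) images
resnet(x_gray)  # raises a similar channel-mismatch RuntimeError:
                # "expected input[4, 1, 224, 224] to have 3 channels, but got 1 channels instead"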


Comments (1)

花开半夏魅人心 2025-02-13 16:23:50

I figured out a few things with your code.

According to the traceback, you are using a ResNet-based U-Net model.

Your current model's forward method is defined as:

def forward(self, x):
    #conv1 = self.conv1(x)
    #conv2 = self.conv2(conv1)
    conv2 = self.conv2(x)
    conv3 = self.conv3(conv2)
    conv4 = self.conv4(conv3)
    conv5 = self.conv5(conv4)
    ...

Your error comes from self.conv2(x), because conv2 expects an input with 64 channels. It means something is missing, or... commented out :)
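
For reference, self.conv2 wraps the encoder's layer1 (see the constructor below), and its first convolution is exactly the [64, 64, 1, 1] weight from the error message. A quick check, assuming torchvision's standard ResNet-101:

import torchvision

resnet101 = torchvision.models.resnet101(pretrained=False)
# layer1 is what the UNet wrapper exposes as self.conv2
print(resnet101.layer1[0].conv1)
# -> Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)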

By changing

    #conv1 = self.conv1(x)
    #conv2 = self.conv2(conv1)
    conv2 = self.conv2(x)

into

    conv1 = self.conv1(x)
    conv2 = self.conv2(conv1) 

This will fix the problem of 64 channels as input. But there is another problem:

Using an input of (B, 1, H, W), no matter what B, H, and W are, won't be possible with your current architecture. Why? Because of this:

resnet34 = torchvision.models.resnet34(pretrained=False)
resnet101 = torchvision.models.resnet101(pretrained=False)
resnet152 = torchvision.models.resnet152(pretrained=False)

print(resnet34.conv1)
-> Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)

print(resnet101.conv1)
-> Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)

print(resnet152.conv1)
-> Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)

In every case, ResNet's conv1 layer takes a 3-channel input.

Once you have made those modifications, you should also try your network with a dummy example like:

model = UNetResNet(34,num_classes=2)
out = model(torch.rand(4,3,1920,1920))
print(out.shape)
-> (4,2,1920,1920) | (batch_size, num_classes, H, W)

Why are the width and height the same here? Because your current architecture only supports square images.

For example :

-> (1080,1920) = dim mismatching during concatenation part
-> (1920,1920) = success
-> (108,192) = dim mismatching during concatenation part
-> (192,192) = success

Conclusion:

  • Modify your network to accept grayscale images if your dataset is made of grayscale images.
  • Preprocess your images so that Width = Height (a padding sketch follows below).
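
For the second point, a minimal padding sketch (my own suggestion, not code from the repository) using torch.nn.functional.pad:

import torch
import torch.nn.functional as F

def pad_to_square(img, value=0.0):
    # Zero-pad a (C, H, W) tensor on the right/bottom so that H == W.
    _, h, w = img.shape
    size = max(h, w)
    # F.pad pads the last two dims as (left, right, top, bottom)
    return F.pad(img, (0, size - w, 0, size - h), value=value)

x = torch.rand(1, 1080, 1920)
print(pad_to_square(x).shape)
# -> torch.Size([1, 1920, 1920])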

Edit (device mismatch) :

class UNetResNet(nn.Module):

    def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2,
                 pretrained=False, is_deconv=False):
        super().__init__()
        self.num_classes = num_classes
        self.dropout_2d = dropout_2d

        if encoder_depth == 34:
            self.encoder = torchvision.models.resnet34(pretrained=pretrained)
            bottom_channel_nr = 512
        elif encoder_depth == 101:
            self.encoder = torchvision.models.resnet101(pretrained=pretrained)
            bottom_channel_nr = 2048
        elif encoder_depth == 152:
            self.encoder = torchvision.models.resnet152(pretrained=pretrained)
            bottom_channel_nr = 2048
        else:
            raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented')

        self.pool = nn.MaxPool2d(2, 2)

        self.relu = nn.ReLU(inplace=True)

        #self.conv1 = nn.Sequential(self.encoder.conv1,
        #                           self.encoder.bn1,
        #                           self.encoder.relu,
        #                           self.pool)

        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False),  # 1 input channel for grayscale images; use 3 for RGB/BGR
            nn.BatchNorm2d(64),
            nn.ReLU(),
            self.pool,
        )
        
        self.conv2 = self.encoder.layer1

        self.conv3 = self.encoder.layer2

        self.conv4 = self.encoder.layer3

        self.conv5 = self.encoder.layer4

        self.center = DecoderBlockV2(bottom_channel_nr, num_filters * 8 * 2, num_filters * 8, is_deconv)
        self.dec5 = DecoderBlockV2(bottom_channel_nr + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
        self.dec4 = DecoderBlockV2(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8,
                                   is_deconv)
        self.dec3 = DecoderBlockV2(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2,
                                   is_deconv)
        self.dec2 = DecoderBlockV2(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,
                                   is_deconv)
        self.dec1 = DecoderBlockV2(num_filters * 2 * 2, num_filters * 2 * 2, num_filters, is_deconv)
        self.dec0 = ConvRelu(num_filters, num_filters)
        self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)

    def forward(self, x):
        conv1 = self.conv1(x)
        conv2 = self.conv2(conv1)
        conv3 = self.conv3(conv2)
        conv4 = self.conv4(conv3)
        conv5 = self.conv5(conv4)

        pool = self.pool(conv5)
        center = self.center(pool)

        dec5 = self.dec5(torch.cat([center, conv5], 1))

        dec4 = self.dec4(torch.cat([dec5, conv4], 1))
        dec3 = self.dec3(torch.cat([dec4, conv3], 1))
        dec2 = self.dec2(torch.cat([dec3, conv2], 1))
        dec1 = self.dec1(dec2)
        dec0 = self.dec0(dec1)

        return self.final(F.dropout2d(dec0, p=self.dropout_2d))
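
If you want to keep the benefit of the pretrained encoder weights instead of training the new first convolution from scratch, one common alternative (a sketch of mine, not part of the original repository) is to copy conv1's pretrained weights, averaged over the RGB channels, into a 1-channel convolution and use that inside self.conv1:

import torch
import torch.nn as nn
import torchvision

encoder = torchvision.models.resnet101(pretrained=True)
old_conv = encoder.conv1  # Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
new_conv = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
with torch.no_grad():
    # Average the RGB kernels into a single input channel: [64, 3, 7, 7] -> [64, 1, 7, 7]
    new_conv.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
encoder.conv1 = new_conv

# Sanity check with a grayscale-shaped batch
x = torch.rand(2, 1, 224, 224)
print(encoder(x).shape)
# -> torch.Size([2, 1000])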
