How to solve this problem: Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4). Kernel size can't be greater than actual input size

Posted on 2025-01-12 16:36:40


I have a problem: `Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4). Kernel size can't be greater than actual input size`

```python
import torch
import torch.nn as nn

# conv_block / tconv_block are helper factories defined elsewhere in my code;
# each wraps nn.Conv2d / nn.ConvTranspose2d with optional batch norm and activation.

def conv(c_in, c_out, batch_norm=True, activation="lrelu"):
    return conv_block(c_in, c_out, kernel=4, stride=2, pad=1, bias=False,
                      batch_norm=batch_norm, activation=activation, pool_type=None)

def tconv(c_in, c_out, batch_norm=True, activation="lrelu"):
    return tconv_block(c_in, c_out, kernel=4, stride=2, pad=1, bias=False,
                       batch_norm=batch_norm, activation=activation, pool_type=None)

class Discriminator(nn.Module):  # class header restored; it was lost in the paste
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            conv(3, 32, batch_norm=False),
            conv(32, 64),
            conv(64, 128),
            conv(128, 256),
            # final 4x4 convolution collapses the remaining feature map to 1x1
            conv_block(256, 1, kernel=4, stride=1, pad=0, bias=False,
                       activation=None, pool_type=None),
            nn.Flatten()
        )

    def forward(self, x):
        return self.conv(x)

    def clip_weights(self, vmin=-0.01, vmax=0.01):
        # WGAN-style weight clipping
        for p in self.parameters():
            p.data.clamp_(vmin, vmax)


class Generator(nn.Module):
    def __init__(self, z_dim):
        super().__init__()
        self.z_dim = z_dim
        self.tconv = nn.Sequential(
            tconv_block(z_dim, 512, kernel=4, stride=2, pad=1, bias=False,
                        activation="lrelu", pool_type=None),
            tconv(512, 256),
            tconv(256, 128),
            tconv(128, 64),
            tconv(64, 32),
            tconv(32, 3, activation="tanh", batch_norm=False)
        )

    def forward(self, x):
        return self.tconv(x)

    def generate(self, n, device):
        z = torch.randn((n, self.z_dim, 1, 1), device=device)
        return self.tconv(z)
```
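For reference, the error message means the final convolution received a 3x3 feature map, which is smaller than its 4x4 kernel. This can be checked with the standard Conv2d size formula, out = floor((in + 2\*pad - kernel) / stride) + 1. A minimal sketch (the `conv_out` helper and the 48x48 starting size are illustrative assumptions; 48 is simply the input resolution that works backwards to the 3x3 in the message):

```python
# Trace the spatial size through the discriminator's four stride-2 conv layers
# using the Conv2d output formula: out = (in + 2*pad - kernel) // stride + 1.

def conv_out(size, kernel=4, stride=2, pad=1):
    return (size + 2 * pad - kernel) // stride + 1

size = 48                   # assumed input resolution (consistent with the error)
for _ in range(4):          # conv(3, 32) ... conv(128, 256)
    size = conv_out(size)   # 48 -> 24 -> 12 -> 6 -> 3
print(size)                 # 3: the final conv's 4x4 kernel no longer fits
```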

Comments (1)

故事与诗 2025-01-19 16:36:40

```python
z = torch.randn((n, self.z_dim, 1, 1), device=device)
```

The code above generates an input noise tensor with spatial size (1, 1), which is too small for the model.

```python
z = torch.randn((n, self.z_dim, 10, 10), device=device)
```

Increasing the spatial size of the input tensor, as in the code above, should resolve the error.
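The effect of the z size on the generator's output resolution can be traced with the ConvTranspose2d size formula, out = (in - 1)\*stride - 2\*pad + kernel. A minimal sketch (the `tconv_out` helper is illustrative; the hyperparameters are taken from the `tconv` definition in the question):

```python
# Trace the spatial size through the generator's six stride-2 upsampling layers
# using the ConvTranspose2d output formula: out = (in - 1)*stride - 2*pad + kernel.

def tconv_out(size, kernel=4, stride=2, pad=1):
    return (size - 1) * stride - 2 * pad + kernel

for start in (1, 10):         # original 1x1 noise vs. the suggested 10x10
    size = start
    for _ in range(6):        # tconv_block(z_dim, 512) plus five tconv layers
        size = tconv_out(size)
    print(start, "->", size)  # 1 -> 64, 10 -> 640
```

Whatever z size is chosen, the resulting output resolution has to match the image size the discriminator is built for.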
