I have a problem: I get the error "Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4). Kernel size can't be greater than actual input size". Here is my code:
```
import torch
import torch.nn as nn

# conv_block / tconv_block are helper factories defined elsewhere (not shown here).

def conv(c_in, c_out, batch_norm=True, activation="lrelu"):
    return conv_block(c_in, c_out, kernel=4, stride=2, pad=1, bias=False,
                      batch_norm=batch_norm, activation=activation, pool_type=None)

def tconv(c_in, c_out, batch_norm=True, activation="lrelu"):
    return tconv_block(c_in, c_out, kernel=4, stride=2, pad=1, bias=False,
                       batch_norm=batch_norm, activation=activation, pool_type=None)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            conv(3, 32, batch_norm=False),
            conv(32, 64),
            conv(64, 128),
            conv(128, 256),
            conv_block(256, 1, kernel=4, stride=1, pad=0, bias=False,
                       activation=None, pool_type=None),
            nn.Flatten()
        )

    def forward(self, x):
        x = self.conv(x)
        return x

    def clip_weights(self, vmin=-0.01, vmax=0.01):
        for p in self.parameters():
            p.data.clamp_(vmin, vmax)

class Generator(nn.Module):
    def __init__(self, z_dim):
        super().__init__()
        self.z_dim = z_dim
        self.tconv = nn.Sequential(
            tconv_block(z_dim, 512, kernel=4, stride=2, pad=1, bias=False,
                        activation="lrelu", pool_type=None),
            tconv(512, 256),
            tconv(256, 128),
            tconv(128, 64),
            tconv(64, 32),
            tconv(32, 3, activation="tanh", batch_norm=False)
        )

    def forward(self, x):
        return self.tconv(x)

    def generate(self, n, device):
        z = torch.randn((n, self.z_dim, 1, 1), device=device)
        return self.tconv(z)
```
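For reference, this error is raised by `nn.Conv2d` whenever the padded input is smaller than the kernel. A minimal standalone snippet, independent of the model above, that triggers the same message:

```
import torch
import torch.nn as nn

# A 4x4 kernel with padding 1 applied to a 1x1 feature map: the padded
# input is only 3x3, which is smaller than the kernel, so PyTorch raises
# "Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4).
#  Kernel size can't be greater than actual input size".
conv = nn.Conv2d(3, 1, kernel_size=4, stride=2, padding=1)
tiny = torch.randn(1, 3, 1, 1)
out = conv(tiny)  # RuntimeError
```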
Answer:
The code above generates the input noise tensor with spatial size (1, 1), which is too small for the model. Increasing the spatial size of the input noise tensor should solve the error; one possible version of that change is sketched below.
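A minimal sketch of that change, assuming only the `generate` method is modified; the spatial size 4 is an arbitrary illustrative value, not one taken from the original answer:

```
def generate(self, n, device):
    # Sample noise with a larger spatial extent than the original 1x1
    # (4x4 here is an assumed example value), as the answer suggests.
    z = torch.randn((n, self.z_dim, 4, 4), device=device)
    return self.tconv(z)
```

Note that enlarging the noise also enlarges the generated images and the discriminator's output shape, so the rest of the training code may need to be adjusted to match.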