TypeError: linear(): argument 'input' (position 1) must be Tensor, not Dropout (PyTorch)

Posted on 2025-01-23 22:10:58

I have an autoencoder in torch and I want to add a dropout layer in the decoder (I am not sure where I should add the dropout). Below is a small example of the input data and the decoder function. Honestly, I don't know what I should do to fix the error. Could you please help me with that?

d_input   = torch.nn.Conv1d(1, 33, 10, stride=10)
mu_d      = nn.Linear(1485, 28)
log_var_d = nn.Linear(1485, 28)

def decode(self, z, y):
    indata      = torch.cat((z, y), 1)               # shape: [batch_size, 451]
    indata      = torch.reshape(indata, (-1, 1, 451))
    hidden      = torch.flatten(relu(d_input(indata)), start_dim=1)  # shape: [batch_size, 1485]
    hidden      = nn.Dropout(p=0.5)                  # <- bug: binds the Dropout module itself to hidden
    par_mu      = self.mu_d(hidden)                  # nn.Linear then receives a Dropout, raising the TypeError
    par_log_var = self.log_var_d(hidden)
    return par_mu, par_log_var

1 Comment

千年*琉璃梦 2025-01-30 22:10:58

torch.nn.Dropout is a module, not a function. You need to instantiate it first and then pass your tensor through the instance:

import torch
import torch.nn as nn
from torch.nn.functional import relu

d_input   = torch.nn.Conv1d(1, 33, 10, stride=10)
mu_d      = nn.Linear(1485, 28)
log_var_d = nn.Linear(1485, 28)
dropout   = nn.Dropout(p=0.5)   # instantiate the module once

def decode(self, z, y):
    indata      = torch.cat((z, y), 1)               # shape: [batch_size, 451]
    indata      = torch.reshape(indata, (-1, 1, 451))
    hidden      = torch.flatten(relu(d_input(indata)), start_dim=1)  # shape: [batch_size, 1485]
    hidden      = dropout(hidden)                    # call the instance on the tensor
    par_mu      = self.mu_d(hidden)
    par_log_var = self.log_var_d(hidden)
    return par_mu, par_log_var
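
For completeness, here is a minimal sketch of the same decoder written as an nn.Module (the class name and overall layout are hypothetical; the layer sizes are taken from the question). Registering the dropout as a submodule in __init__ means model.eval() automatically disables it at inference time and model.train() re-enables it:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical minimal decoder; layer sizes follow the question.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.d_input   = nn.Conv1d(1, 33, 10, stride=10)
        self.mu_d      = nn.Linear(1485, 28)
        self.log_var_d = nn.Linear(1485, 28)
        self.dropout   = nn.Dropout(p=0.5)   # registered as a submodule

    def decode(self, z, y):
        indata = torch.cat((z, y), 1)                 # shape: [batch_size, 451]
        indata = torch.reshape(indata, (-1, 1, 451))
        hidden = torch.flatten(F.relu(self.d_input(indata)), start_dim=1)  # shape: [batch_size, 1485]
        hidden = self.dropout(hidden)                 # a no-op after model.eval()
        return self.mu_d(hidden), self.log_var_d(hidden)

If you would rather not keep a module around, the functional form torch.nn.functional.dropout(hidden, p=0.5, training=self.training) does the same thing inline.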