RuntimeError: For unbatched 2-D input, hx and cx should also be 2-D but got (3-D, 3-D) tensors

Posted on 2025-02-06 21:27:44

Hey, I have some problems with my LSTM. I have 6 features and I'm sending all my data (290002 rows) through the LSTM at once (is this a good idea?).

My input is of size:

Training Shape torch.Size([290002, 1, 6]) torch.Size([290002, 1])

Testing Shape torch.Size([74998, 1, 6]) torch.Size([74998, 1])
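
For reference, here is a minimal mini-batching sketch using torch.utils.data instead of one 290002-row batch; the random tensors and batch_size=256 are placeholder assumptions, not from the post. A single full-data batch runs but is memory-hungry, and mini-batches are the more common setup:

import torch
from torch.utils.data import TensorDataset, DataLoader

# Placeholder tensors standing in for the real training data (shapes as above):
X_train = torch.randn(290002, 1, 6)
y_train = torch.randn(290002, 1)

loader = DataLoader(TensorDataset(X_train, y_train), batch_size=256, shuffle=True)
for xb, yb in loader:
    # xb: (256, 1, 6), yb: (256, 1) per step (the last batch may be smaller)
    ...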

My model:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable  # note: Variable is a deprecated no-op wrapper in modern PyTorch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class LSTM(nn.Module):
    def __init__(self, hidden_dim_LSTM, num_layers_LSTM, hidden1, drop):
        super(LSTM, self).__init__()

        self.hidden_dim_LSTM = hidden_dim_LSTM
        self.num_layers_LSTM = num_layers_LSTM
        self.hidden1 = hidden1
        self.drop = drop

        final_output_dim = 1

        #self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.num_layers, batch_first=True)
        self.lstm = nn.LSTM(6, hidden_size=hidden_dim_LSTM, num_layers=num_layers_LSTM, batch_first=True)
        self.fc1 = nn.Linear(in_features=hidden_dim_LSTM, out_features=hidden1)
        self.drop = nn.Dropout(drop)
        self.fc2 = nn.Linear(in_features=hidden1, out_features=final_output_dim)

    def forward(self, x):
        h_0 = Variable(torch.zeros(self.num_layers_LSTM, x.size(0), self.hidden_dim_LSTM)).requires_grad_().to(device)  # hidden state
        c_0 = Variable(torch.zeros(self.num_layers_LSTM, x.size(0), self.hidden_dim_LSTM)).requires_grad_().to(device)  # internal (cell) state
        # Propagate the input through the LSTM
        output, (hn, cn) = self.lstm(x, (h_0, c_0))  # lstm with input, hidden, and internal state
        hn = hn.view(-1, self.hidden_dim_LSTM)  # reshape for the dense layer
        out = F.relu(hn)
        out = self.fc1(out)
        out = self.drop(out)
        out = torch.relu(out)
        #out = self.drop(out)
        out = self.fc2(out)
        return out
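
Continuing from the class above, a quick sanity check of the shapes this model produces; the hyperparameter values here are arbitrary assumptions, not from the post:

model = LSTM(hidden_dim_LSTM=32, num_layers_LSTM=1, hidden1=16, drop=0.2).to(device)
xb = torch.randn(8, 1, 6).to(device)  # a small 3-D batch: (batch, seq, features)
print(model(xb).shape)                # torch.Size([8, 1]) -- one prediction per row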

When I start the training I get this error:
RuntimeError: For unbatched 2-D input, hx and cx should also be 2-D but got (3-D, 3-D) tensors
at the line: output, (hn, cn) = self.lstm(x, (h_0, c_0))
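
Some context on the error message: forward always builds h_0/c_0 as 3-D tensors, so this error suggests the LSTM received a 2-D (unbatched) x at some point, e.g. a batch whose batch dimension was squeezed away; with 2-D input, nn.LSTM expects 2-D hidden/cell states. A minimal sketch reproducing it, assuming PyTorch >= 1.11 (which added unbatched LSTM input) and an arbitrary hidden size of 32:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=6, hidden_size=32, num_layers=1, batch_first=True)

# Batched 3-D input with 3-D hidden/cell states: works.
x3 = torch.zeros(4, 1, 6)   # (batch, seq, features)
h3 = torch.zeros(1, 4, 32)  # (num_layers, batch, hidden)
c3 = torch.zeros(1, 4, 32)
out, (hn, cn) = lstm(x3, (h3, c3))

# Unbatched 2-D input with the same 3-D states: raises the reported error.
x2 = torch.zeros(1, 6)      # (seq, features) -- no batch dimension
try:
    lstm(x2, (h3, c3))
except RuntimeError as e:
    print(e)  # For unbatched 2-D input, hx and cx should also be 2-D ...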

I'm grateful for any help!
