AttributeError: 'tuple' object has no attribute 'dim'

Posted 2025-01-27 15:24:51

I am trying to build a Transformer network with the PyTorch library. The dataset that I am using is historical financial market data.

 x_train = torch.from_numpy(x_train_tfr)
 x_test = torch.from_numpy(x_test_tfr)
 y_train_tfr = torch.from_numpy(y_train_tfr)
 y_test_tfr = torch.from_numpy(y_test_tfr)

After data preparation, I use the code below to split x_train and y_train into 12 chunks:

x_train_split = torch.split(x_train_tfr, 12, dim=0)
y_train_split = torch.split(y_train_tfr, 12, dim=0)

Then I use the code below to train my model:

transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
src = x_train_split
tgt = y_train_split
out, state = transformer_model(src, tgt)

but the result is as below:

AttributeError                            Traceback (most recent call last)
<ipython-input-64-769f9734fa98> in <module>()
  3 src = x_train_split
  4 tgt = y_train_split
  ----> 5 out, state = transformer_model(src, tgt)

           1 frames
 /usr/local/lib/python3.7/dist-packages/torch/nn/modules/transformer.py in forward(self, src,tgt, src_mask, tgt_mask, memory_mask, src_key_padding_mask, tgt_key_padding_mask,memory_key_padding_mask)
   134         """
   135 
   --> 136         is_batched = src.dim() == 3
   137         if not self.batch_first and src.size(1) != tgt.size(1) and is_batched:
   138             raise RuntimeError("the batch number of src and tgt must be equal")

  AttributeError: 'tuple' object has no attribute 'dim'

How could I solve this error? Do I have to do anything extra before training my model?

Comments (1)

情丝乱 2025-02-03 15:24:51

Best practice would be to include the code that defines x_train_tfr and y_train_tfr. Clearly a single example from x_train_tfr is not a tensor, as you expect it to be, but is instead a tuple. Likely the dataset implementation returns both the data itself (a tensor) and some other information (i.e. metadata) that may be useful for some tasks, and these are returned together as a tuple.
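As a side note (not part of the original answer): the torch.split call shown in the question is itself enough to produce this error, because torch.split always returns a tuple of tensors rather than a tensor. A minimal check, using made-up stand-in data, makes this visible:

import torch

# Hypothetical stand-in for x_train_tfr: 100 rows, 8 features.
x = torch.randn(100, 8)
x_split = torch.split(x, 12, dim=0)    # chunks of length 12 along dim 0

print(type(x_split))                   # <class 'tuple'> -- not a tensor, so .dim() fails
print(len(x_split), x_split[0].shape)  # 9 chunks; the first is torch.Size([12, 8])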

If this is the case, you could easily get around this error by doing:

src = x_train_split[0]    # or whichever element of the tuple the data tensor is
tgt = y_train_split[0]    # index the target tuple the same way so it is also a tensor

# Note that the model expects a batch dimension, so you'll need to add one
# even if you only have one data example per batch.
# nn.Transformer's forward returns a single output tensor, not an (output, state) pair.
out = transformer_model(src.unsqueeze(0), tgt.unsqueeze(0))
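Beyond that indexing fix, here is a minimal, self-contained sketch of feeding one split chunk at a time to nn.Transformer. This is not from the original answer; the 512-feature width is an assumption chosen only so it matches the model's default d_model, and real market features would first need a linear projection to that width:

import torch
import torch.nn as nn

# Hypothetical data: 120 time steps, 512 features per step.
# 512 is assumed so it matches nn.Transformer's default d_model;
# real financial features would need nn.Linear(n_features, 512) first.
x = torch.randn(120, 512)
chunks = torch.split(x, 12, dim=0)    # tuple of (12, 512) tensors

model = nn.Transformer(nhead=16, num_encoder_layers=12)

# By default nn.Transformer expects inputs shaped (seq_len, batch, d_model),
# so add a batch dimension of size 1 to a single chunk.
src = chunks[0].unsqueeze(1)          # (12, 1, 512)
tgt = chunks[0].unsqueeze(1)          # (12, 1, 512)

out = model(src, tgt)                 # forward returns one tensor
print(out.shape)                      # torch.Size([12, 1, 512])

# Tensors made with torch.from_numpy are often float64; cast them with
# .float() so they match the model's float32 parameters.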