TypeError with DataLoader

Posted on 2025-02-08 16:35:38


I am using a very large dataset to test my model. To make testing fast, I want to build a data loader, but I am getting an error that I have not been able to solve for two days. Here is my code:

import torch
import pandas as pd
from torch.utils.data import Dataset, DataLoader
from transformers import BertTokenizer

PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)

class GPReviewDataset(Dataset):
    def __init__(self, Paragraph, target, tokenizer, max_len):
        self.Paragraph = Paragraph
        self.target = target
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.Paragraph)

    def __getitem__(self, item):
        Paragraph = str(self.Paragraph[item])
        target = self.target[item]
        encoding = self.tokenizer.encode_plus(
            Paragraph,
            add_special_tokens=True,
            max_length=self.max_len,
            return_token_type_ids=False,
            pad_to_max_length=True,
            return_attention_mask=True,
            return_tensors='pt',
        )
        return {
            'review_text': Paragraph,
            'input_ids': encoding['input_ids'].flatten(),
            'attention_mask': encoding['attention_mask'].flatten(),
            'targets': torch.tensor(target, dtype=torch.long)
        }


def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = GPReviewDataset(
        Paragraph=df.Paragraph.to_numpy(),
        target=df.target.to_numpy(),
        tokenizer=tokenizer,
        max_len=max_len
    )
    return DataLoader(
        ds,
        batch_size=batch_size,
        num_workers=4
    )


# Main script
paragraph = ['Image to PDF Converter. ', 'Test Test']
target = ['0', '1']
df = pd.DataFrame({'Paragraph': paragraph, 'target': target})

MAX_LEN = '512'
BATCH_SIZE = 1
train_data_loader1 = create_data_loader(df, tokenizer, MAX_LEN, BATCH_SIZE)
for d in train_data_loader1:
    print(d)

When I iterate over the data loader, I get this error:

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "<ipython-input-3-c4f87a4dbb48>", line 20, in __getitem__
    return_tensors='pt',
  File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 1069, in encode_plus
    return_special_tokens_mask=return_special_tokens_mask,
  File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 1365, in prepare_for_model
    if max_length and total_len > max_length:
TypeError: '>' not supported between instances of 'int' and 'str'

Can anyone help me? Also, can you give me tips on how to test my model on a large dataset? That is, what is a faster way to test my model on 3M data samples?


Comments (1)

萌无敌 2025-02-15 16:35:38


The error is exactly what the traceback says:

File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 1365, in prepare_for_model
    if max_length and total_len > max_length:
TypeError: '>' not supported between instances of 'int' and 'str'
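
In Python 3, ordering comparisons between int and str are not allowed, which is what prepare_for_model runs into when max_length arrives as the string '512'. A minimal illustration (total_len is a made-up value standing in for the token count the tokenizer computes):

total_len = 10        # int, computed internally by the tokenizer
max_length = '512'    # str, because MAX_LEN was passed as '512'

try:
    if max_length and total_len > max_length:
        print('would truncate')
except TypeError as e:
    print(e)  # '>' not supported between instances of 'int' and 'str'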

You should change your MAX_LEN from a string to an int:

# MAX_LEN='512'
MAX_LEN=512
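
Putting it together, here is a minimal sketch of the corrected driver code. It assumes GPReviewDataset, create_data_loader and tokenizer are defined exactly as in the question; the targets are also given as ints here so that torch.tensor(target, dtype=torch.long) does not fail on string labels later.

import pandas as pd

paragraph = ['Image to PDF Converter. ', 'Test Test']
target = [0, 1]                       # ints rather than the strings '0'/'1'
df = pd.DataFrame({'Paragraph': paragraph, 'target': target})

MAX_LEN = 512                         # int, not '512'
BATCH_SIZE = 1

train_data_loader1 = create_data_loader(df, tokenizer, MAX_LEN, BATCH_SIZE)
for d in train_data_loader1:
    print(d['input_ids'].shape, d['targets'])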