How do I truncate a token sequence from the left when applying max_length in the HuggingFace tokenizer?



In the HuggingFace tokenizer, applying the max_length argument specifies the length of the tokenized text. I believe it truncates the sequence to max_length-2 (if truncation=True) by cutting the excess tokens from the right. For the purposes of utterance classification, I need to cut the excess tokens from the left, i.e. the start of the sequence in order to preserve the last tokens. How can I do that?

from transformers import AutoTokenizer

train_texts = ['text 1', ...]
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
encodings = tokenizer(train_texts, max_length=128, truncation=True)


Comments (3)

流星番茄 2025-02-03 22:17:50


Tokenizers have a truncation_side parameter that controls exactly this.
See the docs.
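
For the setup in the question, a minimal sketch could look like the following (assuming a recent transformers version in which truncation_side is accepted by from_pretrained; otherwise set the attribute after loading, as the next answer shows):

from transformers import AutoTokenizer

# placeholder texts standing in for the question's train_texts
train_texts = ['first utterance', 'a second, much later utterance']

# drop excess tokens from the start of each sequence instead of the end
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', truncation_side='left')
encodings = tokenizer(train_texts, max_length=128, truncation=True)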

恰似旧人归 2025-02-03 22:17:50


Late answer:

Mutating the PreTrainedTokenizer.truncation_side attribute worked for me.

s = " ".join(str(i) for i in range(600))

tokenizer.truncation_side = "left" 

t = tokenizer(s, truncation=True)
tokenizer.decode(t.input_ids)
> '[CLS] 284 285 286 ... 597 598 599 [SEP]'

tokenizer.truncation_side = "right" 

t = tokenizer(s, truncation=True)
tokenizer.decode(t.input_ids)
> '[CLS] 0 1 2 ... 443 444 445 [SEP]'
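
The output above comes from a BERT-style tokenizer, hence the [CLS]/[SEP] markers; with the xlm-roberta-base tokenizer from the question the same attribute works, only the special tokens differ (<s> and </s>). A quick sketch under that assumption:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('xlm-roberta-base')
tok.truncation_side = 'left'
out = tok(" ".join(str(i) for i in range(600)), truncation=True, max_length=16)
print(tok.decode(out.input_ids))  # the printed string keeps the last numbers of the input
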
幸福%小乖 2025-02-03 22:17:50


I wrote a solution, though it is not very robust; I am still looking for a better way. It was tested with the models mentioned in the code.

from typing import Tuple
from transformers import AutoTokenizer

# also tested with: ufal/robeczech-base, Seznam/small-e-czech
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base', use_fast=False)
texts = ["Do not meddle in the affairs of wizards for they are unpredictable.", "Did you meddle?"]
encoded_input = tokenizer(texts)


def cut_seq_left(seq: list, max_length: int, special_ids: dict) -> Tuple[int,int]:
    # cut from left if longer. Keep special tokens.
    normal_idx = 0
    while seq[normal_idx] in special_ids and normal_idx < len(seq)-1:
        normal_idx += 1
    if normal_idx >= len(seq)-1:
        normal_idx = 1
        #raise Exception('normal_idx longer for seq:' + str(seq))
    rest_idx = normal_idx + len(seq) - max_length
    seq[:] = seq[0:normal_idx] + seq[rest_idx:]
    return normal_idx, rest_idx


def pad_seq_right(seq: list, max_length: int, pad_id: int):
    # pad if shorter
    seq.extend(pad_id for _ in range(max_length - len(seq)))


def get_pad_token(tokenizerr) -> str:
    specials = [t.lower() for t in tokenizerr.all_special_tokens]
    pad_candidates = [t for t in specials if 'pad' in t]
    if len(pad_candidates) < 1:
        raise Exception('Cannot find PAD token in: ' + str(tokenizerr.all_special_tokens))
    return tokenizerr.all_special_tokens[specials.index(pad_candidates[0])]


def cut_pad_encodings_left(encodingz, tokenizerr, max_length: int):
    specials = dict(zip(tokenizerr.all_special_ids, tokenizerr.all_special_tokens))
    pad_code = get_pad_token(tokenizerr)
    padd_idx = tokenizerr.all_special_tokens.index(pad_code)
    for i, e in enumerate(encodingz.data['input_ids']):
        if len(e) < max_length:
            pad_seq_right(e, max_length, tokenizerr.all_special_ids[padd_idx])
            pad_seq_right(encodingz.data['attention_mask'][i], max_length, 0)
            if 'token_type_ids' in encodingz.data:
                pad_seq_right(encodingz.data['token_type_ids'][i], max_length, 0)
        elif len(e) > max_length:
            fro, to = cut_seq_left(e, max_length, specials)
            encodingz.data['attention_mask'][i] = encodingz.data['attention_mask'][i][:fro] \
                                                  + encodingz.data['attention_mask'][i][to:]
            if 'token_type_ids' in encodingz.data:
                encodingz.data['token_type_ids'][i] = encodingz.data['token_type_ids'][i][:fro] \
                                                      + encodingz.data['token_type_ids'][i][to:]


cut_pad_encodings_left(encoded_input, tokenizer, 10) # returns nothing: works in-place
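
To sanity-check the in-place edit, one could decode the adjusted batch afterwards (a quick sketch, not part of the original answer):

# every sequence is now exactly 10 ids long: long inputs were cut on the left
# (after the leading special token), short inputs were padded on the right
for ids, mask in zip(encoded_input['input_ids'], encoded_input['attention_mask']):
    print(len(ids), len(mask), tokenizer.decode(ids))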