Adding new vocabulary tokens to a model and saving it for a downstream model

Posted 2025-01-13 14:49:04


Is the mean initialisation of the new tokens correct? Also, how should I save the new tokenizer (after adding new tokens to it) so that I can use it in a downstream model?

I train an MLM model after adding new tokens and initialising their embeddings with the mean of the original sub-word embeddings. How should I use the fine-tuned MLM model for a new classification task?

import torch
import transformers as tr

# joined_keywords (list of new words), device and trainer are defined earlier in my script

# original tokenizer, kept unchanged so the new words can still be split
# into their original sub-word pieces
tokenizer_org = tr.BertTokenizer.from_pretrained("/home/pc/bert_base_multilingual_uncased")

# working tokenizer that receives the new vocabulary entries
tokenizer = tr.BertTokenizer.from_pretrained("/home/pc/bert_base_multilingual_uncased")
tokenizer.add_tokens(joined_keywords)

model = tr.BertForMaskedLM.from_pretrained("/home/pc/bert_base_multilingual_uncased", return_dict=True)

# prepare input
text = ["Replace me by any text you'd like"]
encoded_input = tokenizer(text, truncation=True, padding=True, max_length=512, return_tensors="pt")
print(encoded_input)

# add embedding rows for the new vocab words
model.resize_token_embeddings(len(tokenizer))
weights = model.bert.embeddings.word_embeddings.weight

# initialize the new embedding rows as the mean of the original sub-word embeddings
with torch.no_grad():
    emb = []
    for word in joined_keywords:
        # first & last ids are the [CLS]/[SEP] specials; don't keep them
        tok_ids = tokenizer_org(word)["input_ids"][1:-1]
        tok_weights = weights[tok_ids]

        # average over the tokens of the original tokenization
        weight_mean = torch.mean(tok_weights, dim=0)
        emb.append(weight_mean)
    # the new rows sit at the end of the embedding matrix
    weights[-len(joined_keywords):, :] = torch.vstack(emb)

model.to(device)

trainer.save_model("/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1")

This saves the model, config, and training_args. How do I save the new tokenizer as well?
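
For the downstream classification step, this is roughly what I expect to do (only a sketch: the checkpoint path is the one used above, num_labels is a placeholder, and I am assuming the saved MLM checkpoint can be loaded into a sequence-classification class, which would re-initialise only the classification head):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1"

# assumes the enlarged tokenizer was saved into the same directory,
# e.g. with tokenizer.save_pretrained(ckpt)
tokenizer_clf = AutoTokenizer.from_pretrained(ckpt)

# reuse the fine-tuned encoder (including the resized embedding matrix);
# only the classification head would be newly initialised
model_clf = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)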


Comments (1)

椵侞 2025-01-20 14:49:04


What you are trying to do is a convenient way of adding new markers and information to raw text. Hugging Face provides several methods for this; I used what is, in my opinion, the simplest one.

from transformers import AutoTokenizer

BASE_MODEL = "distilbert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
print('Vocab size before manipulation: ', len(tokenizer))

# register the new markers as additional special tokens
special_tokens_dict = {'additional_special_tokens': ['[C1]','[C2]','[C3]','[C4]']}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print('Vocab size after manipulation: ', len(tokenizer))

# persist the modified tokenizer and reload it to verify
tokenizer.save_pretrained("./models/tokenizer/")
tokenizer2 = AutoTokenizer.from_pretrained("./models/tokenizer/")
print('Vocab size after saving and loading: ', len(tokenizer2))

output:

Vocab size before manipulation:  119547
Vocab size after manipulation:  119551
Vocab size after saving and loading:  119551

The big caveat: when you manipulate the tokenizer, you need to update the model's embedding layer accordingly, with something like model.resize_token_embeddings(len(tokenizer)).
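
Something like this, as a sketch under the same assumptions as above (the output directory is a placeholder; AutoModelForMaskedLM is used here only because the question fine-tunes an MLM):

from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(BASE_MODEL)

# grow the embedding matrix to the enlarged vocabulary;
# the rows for [C1]..[C4] start out randomly initialised
model.resize_token_embeddings(len(tokenizer))

# save model and tokenizer into the same directory so they stay in sync
model.save_pretrained("./models/with_new_tokens/")
tokenizer.save_pretrained("./models/with_new_tokens/")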
