resize_token_embeddings on a pretrained model with a different embedding size

Posted 2025-02-11 03:27:03


I would like to ask how to change the embedding size of a trained model.

I have a trained model, models/BERT-pretrain-1-step-5000.pkl.
Now I am adding a new token [TRA] to the tokeniser and trying to use resize_token_embeddings on the pretrained one.

from pytorch_pretrained_bert_inset import BertModel  # BertTokenizer
from transformers import AutoTokenizer
from torch.nn.utils.rnn import pad_sequence
import torch
import tqdm

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model_bert = BertModel.from_pretrained('bert-base-uncased', state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location=torch.device('cpu')))

#print(tokenizer.all_special_tokens) #--> ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
#print(tokenizer.all_special_ids)    #--> [100, 102, 0, 101, 103]

num_added_toks = tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model_bert.resize_token_embeddings(len(tokenizer))  # --> Embedding(30523, 768)
print('[TRA] token id: ', tokenizer.convert_tokens_to_ids('[TRA]'))  # --> 30522

But I encountered the error:

AttributeError: 'BertModel' object has no attribute 'resize_token_embeddings'

I assume that this is because the model_bert I loaded (BERT-pretrain-1-step-5000.pkl) has a different embedding size.
I would like to know if there is any way to make the embedding size of my modified tokeniser fit the model I would like to use as the initial weights.

Thanks a lot!!


Comments (1)

岁月染过的梦 2025-02-18 03:27:03


resize_token_embeddings is a Hugging Face Transformers method. You are using the BertModel class from pytorch_pretrained_bert_inset, which does not provide such a method. Looking at the code, it seems they copied the BERT code from Hugging Face some time ago.
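
As a quick sanity check (a small sketch, assuming the INSET package exposes BertModel as imported in your snippet; the exact module path may differ):

from pytorch_pretrained_bert_inset import BertModel

# See where the class actually comes from and whether it has the Transformers method
print(BertModel.__module__)                            # some module under pytorch_pretrained_bert_inset, not transformers
print(hasattr(BertModel, 'resize_token_embeddings'))   # False for the INSET copy, matching the AttributeError above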

You can either wait for an update from INSET (maybe create a GitHub issue) or write your own code to extend the word_embedding layer:

from torch import nn

# Grab the model's current word-embedding layer
embedding_layer = model.embeddings.word_embeddings

old_num_tokens, old_embedding_dim = embedding_layer.weight.shape

num_new_tokens = 1

# Create a new embedding layer with room for the extra token(s);
# the extra row(s) keep nn.Embedding's default random initialisation
new_embeddings = nn.Embedding(
    old_num_tokens + num_new_tokens, old_embedding_dim
)

# Move it to the same device and dtype as the old layer
new_embeddings.to(
    embedding_layer.weight.device,
    dtype=embedding_layer.weight.dtype,
)

# Copy over the old entries
new_embeddings.weight.data[:old_num_tokens, :] = embedding_layer.weight.data[
    :old_num_tokens, :
]

# Swap the new layer into the model
model.embeddings.word_embeddings = new_embeddings
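
Alternatively (a sketch, not part of the snippet above, and assuming your pickled state dict uses the same parameter names as the Hugging Face implementation, which it may not since INSET forked the code), you could load the checkpoint into the Transformers BertModel, which does provide resize_token_embeddings:

import torch
from transformers import AutoTokenizer, BertModel   # the Hugging Face class, not the INSET copy

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained(
    'bert-base-uncased',
    state_dict=torch.load('models/BERT-pretrain-1-step-5000.pkl', map_location='cpu'),
)

tokenizer.add_tokens(['[TRA]'], special_tokens=True)
model.resize_token_embeddings(len(tokenizer))   # Embedding(30523, 768)

In both cases the embedding row for [TRA] starts out untrained, so it only becomes meaningful after further fine-tuning.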