How can I output a list of each token's probabilities via model.generate?
Right now I have:
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
gen_tokens = model.generate(input_ids, do_sample=specifiedDoSample, output_scores=True, temperature=specifiedTemperature, max_new_tokens=specifiedNumTokens, repetition_penalty=specifiedRepetitionPenalty, top_p=specifiedTopP)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
This prints the generated text. However, I also want it to list the top N tokens at each step along with their probabilities (N being a number I specify), similar to OpenAI's beta playground where you can select "Show probabilities: Full spectrum". For example, if the prompt is "You are now a", the next token should be reported as something like {"vampire": 51%, "corpse": 32%, ...}.
What is the easiest way to do this with Hugging Face Transformers?
2 Answers
You need to add output_scores=True, return_dict_in_generate=True to the call to the generate method. This gives you a scores entry per generated token, containing a tensor of scores over the vocabulary (apply a softmax to get the probabilities) for each possible sequence in the beam search.
Look at generation_utils.py in the transformers source tree, starting at "def generate".
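As a rough sketch of how that fits the code in the question (reusing the question's model, tokenizer, input_ids, and specified* variables; top_n is an illustrative name, not something from the question):

import torch

# Same call as in the question, plus return_dict_in_generate=True
outputs = model.generate(
    input_ids,
    do_sample=specifiedDoSample,
    temperature=specifiedTemperature,
    max_new_tokens=specifiedNumTokens,
    repetition_penalty=specifiedRepetitionPenalty,
    top_p=specifiedTopP,
    output_scores=True,
    return_dict_in_generate=True,
)

top_n = 5  # how many candidates to show per step (illustrative)
# outputs.scores is a tuple with one (batch, vocab_size) tensor per generated token
for step, scores in enumerate(outputs.scores):
    probs = torch.softmax(scores[0], dim=-1)   # scores -> probabilities
    top_probs, top_ids = probs.topk(top_n)
    candidates = {
        tokenizer.decode(tok.item()): f"{p.item():.0%}"
        for tok, p in zip(top_ids, top_probs)
    }
    print(f"step {step}: {candidates}")

Note that the returned scores are the logits after any processing (temperature, top_p, repetition penalty), so the softmax reflects the distribution actually sampled from rather than the raw model probabilities.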
A potential workaround is in the thread https://github.com/huggingface/transformers/issues/10012.
Use beam search as described in the thread, with n beams where n is the number of probabilities you want to display, but looking only 1 token into the future. Then, according to a comment by mshuffett:
"I tried it and it worked perfectly. The next single token's probabilities now displayed correctly."
Alternatively, you can try the solutions described in https://github.com/huggingface/transformers/issues/16010. I haven't gotten around to those because they look slightly more involved than the easy workaround.
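A minimal sketch of that workaround, assuming the model, tokenizer, and input_ids from the question are in scope (n_probs and the length_penalty=0.0 choice are illustrative assumptions, not from the thread verbatim):

import torch

n_probs = 5  # how many next-token candidates to display (illustrative)

outputs = model.generate(
    input_ids,
    num_beams=n_probs,
    num_return_sequences=n_probs,
    max_new_tokens=1,    # look only one token into the future
    length_penalty=0.0,  # disable length normalization so each beam score is a raw log-prob
    output_scores=True,
    return_dict_in_generate=True,
)

# With a single generated token, each entry of sequences_scores is the
# log-probability of that next token; exponentiate to get a probability.
for seq, score in zip(outputs.sequences, outputs.sequences_scores):
    print(f"{tokenizer.decode(seq[-1].item())!r}: {torch.exp(score).item():.0%}")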