How to import an NLP model (the facebook/bart-large-mnli model) in Julia?

Posted on 2025-01-13 15:14:12


I would like to seek help importing the bart-large-mnli model for zero-shot classification in Julia.

Reference to the model: https://metatext.io/models/facebook-bart-large-mnli

This is the python example which I want to port to Julia:

from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)

Expected Output:

{'sequence': 'one day I will see the world', 
 'labels': ['travel', 'dancing', 'cooking'], 
 'scores': [0.9938650727272034, 0.0032738070003688335, 0.002861041808500886]
}

Please advise or suggest a solution for this scenario.
Looking forward to your responses. Thanks!

Comments (1)

冷血 2025-01-20 15:14:12


Not sure quite what your desired use case is, but if you just want access to the pretrained Hugging Face model's output in your Julia code, you can use PyCall.jl to call that Python code and return the dictionary you're interested in.
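
(This assumes PyCall.jl is built against a Python environment that already has transformers and a backend such as torch installed; a minimal setup sketch, where the interpreter path is only illustrative:)

using Pkg
ENV["PYTHON"] = "/path/to/python"   # illustrative: a Python that already has transformers installed
Pkg.add("PyCall")
Pkg.build("PyCall")                 # rebuild PyCall against that interpreter
using PyCall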

In Julia, run the Python code inside a py"""...""" block:

julia> py"""
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
output = classifier(sequence_to_classify, candidate_labels)
"""

The Python global variable output is then accessible as py"output" in Julia (the Python dict is automatically converted to a Julia Dict):

julia> py"output"
Dict{Any, Any} with 3 entries:
  "scores"   => [0.993865, 0.00327379, 0.00286104]
  "sequence" => "one day I will see the world"
  "labels"   => ["travel", "dancing", "cooking"]

You can also get it as a PyObject, without the automatic type conversion, by putting o after the string:

julia> py"output"o
PyObject {
  'sequence': 'one day I will see the world', 
  'labels': ['travel', 'dancing', 'cooking'], 
  'scores': [0.9938650727272034, 0.0032737923320382833, 0.002861042506992817]
}

You could also get the same result by importing the Python package transformers into Julia with PyCall's pyimport():

using PyCall
transformers = PyCall.pyimport("transformers")
classifier = transformers.pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
sequence_to_classify = "one day I will see the world"
candidate_labels = ["travel", "cooking", "dancing"]
output = classifier(sequence_to_classify, candidate_labels)

Now the Julia object output will be the Dict you want.
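
For repeated classifications you could wrap this in a small helper; a sketch under the same assumptions (the classify function name and the rounded scores in the final comment are only illustrative):

using PyCall

transformers = pyimport("transformers")
classifier = transformers.pipeline("zero-shot-classification",
                                   model="facebook/bart-large-mnli")

# Hypothetical helper: returns label => score pairs, highest score first
function classify(text::AbstractString, labels::Vector{String})
    out = classifier(text, labels)   # PyCall converts the returned Python dict to a Julia Dict
    return [l => s for (l, s) in zip(out["labels"], out["scores"])]
end

classify("one day I will see the world", ["travel", "cooking", "dancing"])
# ["travel" => 0.9939, "dancing" => 0.0033, "cooking" => 0.0029] (approximately)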
