OSError with a Huggingface model

Posted on 2025-01-14 06:06:12

I am trying to use a Hugging Face model (CAMeLBERT), but I am getting an error when loading the tokenizer.
Code:

from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-ca")
model = AutoModelForMaskedLM.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-ca")

Error:

OSError: Can't load config for 'CAMeL-Lab/bert-base-arabic-camelbert-ca'. Make sure that:

- 'CAMeL-Lab/bert-base-arabic-camelbert-ca' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'CAMeL-Lab/bert-base-arabic-camelbert-ca' is the correct path to a directory containing a config.json file

I couldn't run the model because of this error.

Comments (3)

吃兔兔 2025-01-21 06:06:12

The model_id from Hugging Face is valid and should work. What can cause a problem is a local folder named CAMeL-Lab/bert-base-arabic-camelbert-ca in your project. In that case, transformers resolves the identifier to the local folder instead of the online version, tries to load it, and fails if it is an empty folder or a model that was never fully trained.

If this is the problem in your case, avoid using the exact model_id as the output_dir in your training arguments: if you cancel a run before the model is fully trained and do not delete the folder manually, it will cause exactly this issue.

If this is not the problem, it might be a bug, and updating your transformers version as @dennlinger suggested is probably your best shot.
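
To illustrate the shadowing behavior, here is a minimal sketch (the model_id is the one from the question; the directory check itself is just plain os.path, not part of the transformers API):

import os

from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "CAMeL-Lab/bert-base-arabic-camelbert-ca"

# from_pretrained() resolves the identifier to a local directory first,
# so an empty or half-trained folder with this exact relative path shadows
# the Hub model and produces the OSError from the question.
if os.path.isdir(model_id):
    print(f"Warning: local folder '{model_id}' shadows the Hub model; "
          "rename or delete it, or train into a different output_dir.")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)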

厌倦 2025-01-21 06:06:12

Running pip install -U huggingface_hub fixed this problem for me.
It seems the Hugging Face Hub changed some logic on the backend side, so old client versions no longer work.
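
To confirm the upgrade actually took effect, a quick check (a minimal sketch; both packages expose __version__):

import huggingface_hub
import transformers

# Print the installed client versions; after `pip install -U huggingface_hub`
# the huggingface_hub version should be recent enough for the current Hub API.
print("huggingface_hub:", huggingface_hub.__version__)
print("transformers:", transformers.__version__)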

不知所踪 2025-01-21 06:06:12

I had the exact same problem with the model "msperka/aleph_bert_gimmel-finetuned-ner", which is also on Hugging Face.

I made sure I don't have a local directory with the same name. I installed huggingface_hub as suggested, and it still was not working.

The problem was simply a wrong version of the transformers and tokenizers packages. I installed the required versions as stated on the model page on HF and it works great!
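
As a sketch, pinning and loading might look like this (the version numbers are placeholders, not the real requirements from that model card, and the token-classification head is an assumption based on the "-ner" suffix):

# Placeholder versions -- read the actual ones from the model card:
#   pip install "transformers==4.x.y" "tokenizers==0.x.y"

from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "msperka/aleph_bert_gimmel-finetuned-ner"

# Assumes a token-classification head, as suggested by the "-ner" suffix.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)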
