Loading a Hugging Face tokenizer from Dropbox (or other cloud storage)
I have a classification model, and I have nearly finished turning it into a Streamlit app.
I have the embeddings and the model on Dropbox. I have successfully imported the embeddings, since they are a single file.
However, the call to AutoTokenizer.from_pretrained() takes a folder path containing various files, rather than a single file. The folder contains these files:
- config.json
- special_tokens_map.json
- tokenizer_config.json
- tokenizer.json
When using the tool locally, I would point the function at the folder and it would work.
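Roughly like this (the folder path is just illustrative):

```python
from transformers import AutoTokenizer

# Local usage: point from_pretrained at the folder holding the four files above.
# The path below is illustrative.
tokenizer = AutoTokenizer.from_pretrained("path/to/tokenizer_folder")
```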
However, I cannot point it at the folder on Dropbox, and as far as I can see I can only download individual files from Dropbox into Python, not a whole folder.
Is there a way to create a temporary folder in Python, or download all the files individually, and then run AutoTokenizer.from_pretrained() with all of them?
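This is the rough idea I have in mind. The Dropbox links below are placeholders; as far as I understand, a shared link needs `?dl=1` appended so it returns the raw file rather than the preview page:

```python
import tempfile
from pathlib import Path

import requests
from transformers import AutoTokenizer

# Placeholder direct-download links for the four tokenizer files on Dropbox
FILES = {
    "config.json": "https://www.dropbox.com/s/.../config.json?dl=1",
    "special_tokens_map.json": "https://www.dropbox.com/s/.../special_tokens_map.json?dl=1",
    "tokenizer_config.json": "https://www.dropbox.com/s/.../tokenizer_config.json?dl=1",
    "tokenizer.json": "https://www.dropbox.com/s/.../tokenizer.json?dl=1",
}

with tempfile.TemporaryDirectory() as tmpdir:
    # Download each file into the temporary folder
    for name, url in FILES.items():
        response = requests.get(url)
        response.raise_for_status()
        (Path(tmpdir) / name).write_bytes(response.content)

    # Point from_pretrained at the folder, exactly as when running locally
    tokenizer = AutoTokenizer.from_pretrained(tmpdir)
```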
Comments (1)
To get around this, I uploaded the model to HuggingFace so I could use it from there, i.e. load the tokenizer and model directly from the Hub.
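A minimal sketch of that (the repository id is a placeholder, and I'm assuming a sequence-classification head):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id on the Hugging Face Hub
REPO_ID = "your-username/your-model"

# from_pretrained accepts a Hub repo id the same way it accepts a local folder,
# downloading and caching the tokenizer and model files automatically.
tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForSequenceClassification.from_pretrained(REPO_ID)
```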