onnxruntime-gpu cannot find onnxruntime_providers_shared.dll when running from the project's PyInstaller-generated exe
Short: I run my model in PyCharm and it works on the GPU via the CUDAExecutionProvider. When I create an exe file of my project using PyInstaller, it no longer works.
Long & detailed:
In my project I train a TensorFlow model and successfully convert it to an ONNX file.
I then load it like so:

```python
import onnxruntime as ort

providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
model_session = ort.InferenceSession(model_path, providers=providers)
prediction = model_session.run(None, {"input_1": tile_batch})
```
This works and produces correct predictions.
Then I use pyinstaller to build an exe file of this project (which I can then distribute) like so:
```shell
pyinstaller --add-binary %MODEL_PATH%/model.onnx;./resources/ ^
    --clean ^
    --onefile .\src\<my project>\run.py
```
I then try to run this exe file on the same input parameters as before and I get this error message:
```
2022-02-25 17:34:58.4774662 [E:onnxruntime:Default, provider_bridge_ort.cc:937 onnxruntime::ProviderSharedLibrary::Ensure] LoadLibrary failed with error 126 "Das angegebene Modul wurde nicht gefunden." when trying to load "C:\Users\ME\AppData\Local\Temp\_MEI261442\onnxruntime\capi\onnxruntime_providers_shared.dll"
Traceback (most recent call last):
  File "src\integration_tooling\<myproject>\run.py", line 246, in <module>
    model_session = ort.InferenceSession(model_path, providers=providers)
  File "onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
  File "onnxruntime\capi\onnxruntime_inference_collection.py", line 379, in _create_inference_session
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:531 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
[13704] Failed to execute script 'run_onnx_denoise' due to unhandled exception!
```
This is unexpected, because it worked before being packaged into an exe.
The error message about CUDA is misleading: I have CUDA 11.4 (and a matching cuDNN) installed, and the corresponding two system variables are set correctly. Setting them incorrectly causes the previous (pre-exe) process to stop working, as expected. So CUDA is definitely installed correctly.
I suspect that the process of turning the project into an exe file somehow loses the ability to find the DLL. According to the PyInstaller documentation, the folder into which the exe is unpacked can be accessed via sys._MEIPASS. However, the onnxruntime interface doesn't let me intervene there, or I haven't figured out how.
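For resources bundled with --add-binary (like the model file itself), the usual pattern is a small helper that checks sys._MEIPASS at runtime. This is a minimal sketch of that recipe, not part of the original question; the fallback to the current directory is an assumption about the dev layout:

```python
import os
import sys

def resource_path(relative_path):
    """Resolve a bundled resource path.

    In a PyInstaller onefile exe, sys._MEIPASS points at the temporary
    extraction folder; during normal development it does not exist, so
    we fall back to the current working directory.
    """
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, relative_path)

# e.g. model_path = resource_path(os.path.join("resources", "model.onnx"))
```

Note this only helps locate resources you added yourself; it does not change where onnxruntime looks for its own provider DLLs.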
Any clues, anyone?
Comments (3)
Solved.
After I wrote the question I realized that the error message specifically said it couldn't find the DLL. Well, it turns out the message was spot on!
PyInstaller failed to bundle the two crucial DLLs into the single exe file. Adding explicit options to package them makes it work:
```shell
--add-binary venv/Lib/site-packages/onnxruntime/capi/onnxruntime_providers_cuda.dll;./onnxruntime/capi/ ^
--add-binary venv/Lib/site-packages/onnxruntime/capi/onnxruntime_providers_tensorrt.dll;./onnxruntime/capi/ ^
```
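Combined with the original command from the question, the full invocation would look roughly like this (the `venv/Lib/site-packages` paths assume the questioner's Windows venv layout; adjust to where onnxruntime is actually installed):

```shell
pyinstaller --add-binary %MODEL_PATH%/model.onnx;./resources/ ^
    --add-binary venv/Lib/site-packages/onnxruntime/capi/onnxruntime_providers_cuda.dll;./onnxruntime/capi/ ^
    --add-binary venv/Lib/site-packages/onnxruntime/capi/onnxruntime_providers_tensorrt.dll;./onnxruntime/capi/ ^
    --clean ^
    --onefile .\src\<my project>\run.py
```

The destination `./onnxruntime/capi/` matters: it mirrors the package layout onnxruntime expects inside the unpacked exe, which is where the error message showed it searching.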
Follow-up to @YourConscience's post: if anyone else comes across this, my tips for finding where onnxruntime is installed:

```shell
python -m site --user-site
```

or just:

```shell
pip show onnxruntime
```
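Along the same lines, the location of the `capi` folder (where the provider DLLs live) can be resolved from Python itself, so the --add-binary paths don't have to hard-code a venv layout. This is a sketch using the standard library; `package_capi_dir` is a hypothetical helper name:

```python
import importlib.util
import os

def package_capi_dir(package):
    """Return the capi directory of an installed package, or None.

    Handy for locating onnxruntime_providers_*.dll to pass to
    pyinstaller --add-binary without hard-coding a site-packages path.
    """
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        return None
    return os.path.join(os.path.dirname(spec.origin), "capi")

# e.g. package_capi_dir("onnxruntime") -> ...\site-packages\onnxruntime\capi
```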
I installed onnxruntime-gpu 1.18.0. Running the code directly in VS Code works fine, but packaging it as an exe produces the error below. How can I solve this problem? Error: RuntimeError: C:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.