RAPIDS.ai dependencies cuml and cudf cannot be found no matter how I install them

Posted on 2025-01-16 06:22:56


I have followed every version of the instructions on the AWS-EC2 setup for RAPIDS.ai: https://rapids.ai/cloud#AWS-EC2

I can confirm that I am using the exact instance type in the instructions, and following the steps exactly.

When I try to use the Docker approach, the --gpus all flag is not accepted.

When I try to use the conda approach, the install fails with the error:

PackageNotFoundError: Packages missing in current channels:

  - glibc

I have tried (many) different solutions suggested for both of these problems, but none of them seem to work. I really just need to test some Python code with cuml and cudf imports in a notebook. I've been at this for 7 hours (after giving up on my local machine and SageMaker).
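For reference, the conda route boils down to one conda create taken from the RAPIDS release selector, roughly like the sketch below; the channels are the standard RAPIDS ones, but treat the release and version pins as placeholders rather than the exact ones I ran:

$ conda create -n rapids -c rapidsai -c nvidia -c conda-forge \
    rapids=21.06 python=3.7 cudatoolkit=11.2

That create step is where the glibc error above appears.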


2 Answers

百变从容 2025-01-23 06:22:56


You note that the --gpus all flag is not accepted, which suggests that you do not have the NVIDIA Docker runtime installed.

I followed the instructions you linked and did run into an issue where the sudo yum install -y nvidia-docker2 command failed; I needed to temporarily disable an Amazon yum repo that was causing some conflicts, as outlined in this issue.

$ sudo yum-config-manager --disable amzn2-graphics

$ sudo yum install -y nvidia-docker2

$ sudo yum-config-manager --enable amzn2-graphics

Once I'd done that and run sudo systemctl restart docker, I was able to start the RAPIDS container.

$ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786     rapidsai/rapidsai:cuda11.2-runtime-ubuntu18.04-py3.7
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.download.nvidia.com/licenses/NVIDIA_Deep_Learning_Container_License.pdf

A JupyterLab server has been started!
To access it, visit http://localhost:8888 on your host machine.
Ensure the following arguments were added to "docker run" to expose the JupyterLab server to your host machine:
      -p 8888:8888 -p 8787:8787 -p 8786:8786
Make local folders visible by bind mounting to /rapids/notebooks/host
(rapids) root@be7253bb4fdb:/rapids/notebooks#
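
From that container prompt, a quick smoke test along these lines confirms the GPU is visible and that the cudf and cuml imports you need actually work (a minimal check run inside the container, not part of the official docs):

$ nvidia-smi
$ python -c "import cudf, cuml; print(cudf.__version__, cuml.__version__)"
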
童话 2025-01-23 06:22:56

Turns out, the first AMI suggested in the documentation is not compatible. Use the NVIDIA Deep Learning AMI instead.
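
If you want to sanity-check whichever AMI you pick before pulling the RAPIDS image, something like this should be enough (assuming the AMI ships the NVIDIA driver and Docker, as the NVIDIA Deep Learning AMI does):

$ nvidia-smi                     # driver should report the GPU
$ docker info | grep -i runtime  # should list an nvidia runtime once nvidia-docker2 is configured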
