GPU Cluster
Author: wuzewu@baidu.com
This tutorial demonstrates how to set up a GPU cluster.
First, we run the following command to launch a GPU cluster on port 8002.
xparl start --port 8002 --gpu_cluster
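To verify that the master has started, the xparl CLI also provides a status subcommand (the exact output may vary across PARL versions); run it on the master machine:
xparl status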
Then we add the GPU resources of a computation server to the cluster. Users should specify which GPUs to add with the --gpu argument.
The following command is an example that adds the first 4 GPUs to the cluster.
xparl connect --address ${CLUSTER_IP}:${CLUSTER_PORT} --gpu 0,1,2,3
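Additional computation servers can join the same cluster by running the same command against the master's address; the GPU indices refer to the GPUs of the machine that runs xparl connect. For instance, a second server could contribute its own first two GPUs (the indices here are an illustrative assumption about that machine's hardware):
xparl connect --address ${CLUSTER_IP}:${CLUSTER_PORT} --gpu 0,1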
Once the GPU cluster based on xparl has been established, we can leverage the parl.remote_class decorator to execute parallel computations. The number of GPUs allocated to each remote instance is specified by the n_gpu argument.
Here is an entry-level example to test the GPU cluster we have set up.
import parl
import os

# Connect to the cluster. Replace the IP and port with the actual address.
parl.connect("localhost:8002")

# Use a decorator to decorate a local class, which will be sent to a remote instance.
# n_gpu=2 means that this Actor will be allocated two GPU cards.
@parl.remote_class(n_gpu=2)
class Actor:
    def get_device(self):
        return os.environ['CUDA_VISIBLE_DEVICES']

    def step(self, a, b):
        return a + b

actor = Actor()

# Execute remotely and return the value of the CUDA_VISIBLE_DEVICES environment variable.
print(actor.get_device())
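Since every method of the decorated class executes on the remote instance, the step method defined above can be invoked like an ordinary method. The lines below, continuing from the snippet above, sketch this; creating a second Actor assumes the remaining two GPUs of the 4-GPU cluster are still free.

# Ordinary method calls are also executed on the remote instance.
print(actor.step(1, 2))  # prints 3, computed remotely

# Creating another actor claims the remaining two GPUs of the 4-GPU cluster
# (assuming no other job has taken them).
another_actor = Actor()
print(another_actor.get_device())  # a different pair of GPU ids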