GPU not used with d3rlpy
I am new to offline RL training with d3rlpy, and I use PyTorch. So I installed the CUDA 11.6 build as recommended in the PyTorch docs: pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116. I installed d3rlpy afterwards and ran the following example code:
from d3rlpy.algos import BC, DDPG, CRR, PLAS, PLASWithPerturbation, TD3PlusBC, IQL
import d3rlpy
import numpy as np
import glob
import time

# models
continuous_models = {
    "BehaviorCloning": BC,
    "DeepDeterministicPolicyGradients": DDPG,
    "CriticRegularizedRegression": CRR,
    "PolicyLatentActionSpace": PLAS,
    "PolicyLatentActionSpacePerturbation": PLASWithPerturbation,
    "TwinDelayedPlusBehaviorCloning": TD3PlusBC,
    "ImplicitQLearning": IQL,
}

# load dataset; data_batch is a *.h5 file created with d3rlpy
dataset = d3rlpy.dataset.MDPDataset.load(data_batch)

# preprocess
mean = np.mean(dataset.observations, axis=0, keepdims=True)
std = np.std(dataset.observations, axis=0, keepdims=True)
scaler = d3rlpy.preprocessing.StandardScaler(mean=mean, std=std)

# test models
for _model in continuous_models:
    the_model = continuous_models[_model](scaler=scaler)
    the_model.use_gpu = True
    the_model.build_with_dataset(dataset)
    the_model.fit(dataset=dataset.episodes,
                  n_steps_per_epoch=10800,
                  n_steps=54000,
                  logdir='./logs',
                  experiment_name=f"{_model}",
                  tensorboard_dir='logs',
                  save_interval=900,  # we don't want to save intermediate parameters
                  )
    # save model
    the_timestamp = int(time.time())
    the_model.save_model(f"./models/{_model}/{_model}_{the_timestamp}.pt")
The problem is that, despite setting use_gpu = True, none of the models actually uses the GPU. Using the PyTorch example code and checking torch.cuda.current_device(), I can see that PyTorch is set up correctly and detects the GPU. Any idea where to look to fix this? I am not sure whether this is a bug in d3rlpy, so I hesitate to raise it on GitHub :)
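A minimal version of that check looks like this (a sketch assuming a single CUDA device; the actual test script is not reproduced here):

import torch

print(torch.cuda.is_available())       # True when PyTorch can see a CUDA device
print(torch.cuda.current_device())     # index of the default CUDA device, e.g. 0
print(torch.cuda.get_device_name(0))   # human-readable name of device 0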
Comments (1)
You can try passing use_gpu = True as an argument along with scaler = scaler when you construct the model. The the_model object has no method called use_gpu, unlike build_with_dataset.
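Applied to the loop in the question, that would look roughly like this (assuming d3rlpy 1.x, where the algorithm constructors accept a use_gpu flag):

# pass use_gpu=True at construction time instead of setting an attribute afterwards
the_model = continuous_models[_model](scaler=scaler, use_gpu=True)
the_model.build_with_dataset(dataset)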