stable_baselines3 PPO learn(): process finished with exit code 139

Posted on 2025-02-04 04:09:04


I use stable_baselines3 PPO to train an agent on highway-fast-v0 (continuous action type). When I call ppo.learn(), the process aborts with "Process finished with exit code 139" and no other error message. The crash does not happen at the same timestep on each training run. How can I solve this?

import gym 
from stable_baselines3 import PPO
import warnings
warnings.filterwarnings('ignore')
# ==================================
#        Main script
# ==================================

def make_configure_env(**kwargs):
    env = gym.make(kwargs["id"])
    env.configure(kwargs["config"])
    env.reset()
    return env


env_kwargs = {
    'id': 'highway-fast-v0',
    'config': {
        "action": {
            "type": "ContinuousAction"
        }
    }
}
n_cpu = 6
batch_size = 64
env = make_configure_env(**env_kwargs)
env.reset()
model = PPO("MlpPolicy",
            env,
            policy_kwargs=dict(net_arch=[dict(pi=[256, 256], vf=[256, 256])]),
            n_steps=batch_size * 12 // n_cpu,
            batch_size=batch_size,
            n_epochs=10,
            learning_rate=5e-4,
            gamma=0.8,
            verbose=2,
            tensorboard_log="highway_ppo/")
# Train the agent
model.learn(total_timesteps=2e4)
# Save the agent
model.save("highway_ppo_continues/model")


Comments (1)

屌丝范 2025-02-11 04:09:04


Reading the code, I see that import highway_env is missing. I tried the same code with that import added, and it worked for me.
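A minimal sketch of the fix (assuming the highway-env package is installed): importing highway_env registers highway-fast-v0 and the other highway environments with gym, so gym.make can resolve the id before stable_baselines3 starts training.

import gym
import highway_env  # side effect: registers highway-fast-v0 with gym
from stable_baselines3 import PPO

env = gym.make("highway-fast-v0")
env.configure({"action": {"type": "ContinuousAction"}})
env.reset()

# Short training run to verify the process no longer dies with exit code 139.
model = PPO("MlpPolicy", env, verbose=2)
model.learn(total_timesteps=20_000)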
