How can I avoid this error when rendering OpenAI Gym (error: display Surface quit)?

Posted on 2025-01-17 05:18:06

I am trying to solve the mountain car problem in OpenAI Gym. env.render() works the first time, but when I try to render the simulation again after 2000 episodes it gives the error below (error: display Surface quit). How can I avoid this error?

I am using Windows and running the code in a Jupyter notebook.

import gym
import numpy as np 
import sys

#Create gym environment.
discount = 0.95
Learning_rate = 0.01
episodes = 25000
SHOW_EVERY = 2000
    
env = gym.make('MountainCar-v0')

discrete_os_size = [20] *len(env.observation_space.high)
discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/ discrete_os_size
q_table = np.random.uniform(low=-2, high=0, size=(discrete_os_size + [env.action_space.n]))

# convert continuous state to discrete state 
def get_discrete_state(state):
    discrete_State = (state - env.observation_space.low) / discrete_os_win_size
    return tuple(discrete_State.astype(int))




for episode in range(episodes):
    
    if episode % SHOW_EVERY == 0:
        render = True
        print(episode)
    else:
        render = False

    ds = get_discrete_state(env.reset())
    done = False
    while not done:
        action = np.argmax(q_table[ds])
        new_state, reward, done, _ = env.step(action)
        new_discrete_state = get_discrete_state(new_state)
        
        if episode % SHOW_EVERY == 0:
            env.render()


        if not done:
            max_future_q = np.max(q_table[new_discrete_state])
            current_q_value = q_table[ds + (action, )]
            new_q = (1-Learning_rate) * current_q_value + Learning_rate * (reward + 
                 discount * max_future_q )
            q_table[ds + (action, )] = new_q

        elif new_state[0] >= env.goal_position:
            q_table[ds + (action, )] = 0

        ds = new_discrete_state
        
   

    env.close()



Comments (1)

清晨说晚安 2025-01-24 05:18:06

I ran into the same problem. When you call env.close() it closes the environment, so to run it again you have to create a new environment. If you just want to keep running the same environment, comment out env.close().

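To make that concrete, here is a minimal sketch of the fix, assuming the classic gym step API (gym < 0.26) that the question uses: env.close() is called once, after the whole training loop, rather than inside it, so the display surface stays open for every rendered episode. The random action is only a placeholder standing in for the q_table lookup.

import gym

env = gym.make('MountainCar-v0')
episodes = 25000
SHOW_EVERY = 2000

for episode in range(episodes):
    state = env.reset()
    done = False
    while not done:
        # Placeholder policy; substitute the q_table argmax here.
        action = env.action_space.sample()
        state, reward, done, _ = env.step(action)
        # Render only the episodes you want to watch.
        if episode % SHOW_EVERY == 0:
            env.render()

# Close the render window once, after all episodes have finished,
# instead of inside the loop where it kills the display for later episodes.
env.close()

Recreating the environment with gym.make('MountainCar-v0') after each env.close() would also work, but closing once at the end is simpler.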
