ValueError: At least one stride in the given numpy array is negative, and tensors with negative strides are currently not supported

Posted 2025-02-05 15:53:43


I am writing code for autonomous driving using RL. I am using Stable Baselines3 and an OpenAI Gym environment. I was running the following code in a Jupyter notebook and it gives me the following error:

# Testing our model
episodes = 5  # test the environment 5 times
for episode in range(1, episodes + 1):  # loop through each episode
    obs = env.reset()  # initial observation
    done = False
    score = 0
    while not done:
        env.render()
        # rather than sampling a random action, pass the current observation
        # through the model to get the action with the best expected reward
        action, _ = model.predict(obs)  # returns the model's action and next state
        obs, reward, done, info = env.step(action)  # gives next state and reward
        # reward is 1 for every step, including the termination step
        score += reward
    print('Episode:{},Score:{}'.format(episode, score))
env.close()

Error

(screenshots of the full traceback were attached as images)

The link for the code that I have written is given below:
https://drive.google.com/file/d/1JBVmPLn-N1GCl_Rgb6-qGMpJyWvBaR1N/view?usp=sharing

The version of Python I am using is Python 3.8.13 in an Anaconda environment.
I am using the PyTorch CPU version and the OS is Windows 10.
Please help me solve this problem.

1 Answer

丢了幸福的猪 2025-02-12 15:53:44


Using .copy() on the numpy array should help (because PyTorch tensors can't handle negative strides):

action, _ = model.predict(obs.copy())

I haven't managed to get your notebook running quickly because of dependency problems, but I hit the same error with the AI2THOR simulator, and adding .copy() fixed it.
Maybe someone with more technical knowledge about numpy, torch or AI2THOR can explain in more detail why the error occurs.
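To make the mechanism concrete, here is a minimal sketch (assuming only numpy and PyTorch are installed; the array `a` is a made-up example, not data from the notebook). Slicing like `a[::-1]` creates a view with a negative stride rather than copying data; `torch.from_numpy` shares memory with the array and refuses such views, while `.copy()` materializes a fresh contiguous array with positive strides that it accepts. Some environments apparently return such flipped views as observations, which would explain why the error surfaces inside `model.predict(obs)`.

```python
import numpy as np
import torch

a = np.arange(6, dtype=np.float32).reshape(2, 3)
flipped = a[::-1]        # a view over the same memory, walked backwards
print(flipped.strides)   # the first stride is negative

try:
    # from_numpy wraps the array's memory without copying,
    # so it rejects arrays with negative strides
    torch.from_numpy(flipped)
except ValueError as e:
    print(e)

# .copy() allocates a new contiguous array (positive strides only),
# which from_numpy can wrap without error
t = torch.from_numpy(flipped.copy())
print(t)
```

This also suggests why the fix is cheap: the copy happens once per observation, on an array the environment already produced.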
