Access violation on glDelete*


I've got a strange problem here: I have a potentially large (up to 500 MB) 3D texture which is recreated several times per second. The size of the texture can change, so reusing the old texture is not always an option. The logical step to avoid runaway memory consumption is to delete the texture whenever it is no longer used (using glDeleteTextures), but the program soon crashes with a read or write access violation. The same thing happens in glDeleteBuffers when it is called on the buffer I use to update the texture.
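
For illustration, here is a minimal sketch of the allocate/upload/delete cycle just described (the function name, sizes, and formats are placeholders, not taken from the actual code; an extension loader such as GLEW is assumed):

#include <GL/glew.h>
#include <vector>

// One update cycle: create a fresh 3D texture, upload the new voxel data,
// render with it, then free it again.
void uploadVolume(int w, int h, int d, const std::vector<GLbyte>& voxels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                 GL_RGBA, GL_BYTE, &voxels[0]);

    // ... draw with the texture ...

    // Release the storage once the texture is no longer needed;
    // this is the call that triggers the access violation described above.
    glDeleteTextures(1, &tex);
}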

In my eyes this can't happen, as the glDelete* functions are pretty failsafe: if you give them a GL handle that does not name a corresponding object, they simply do nothing.
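
(A tiny illustration of that failsafe behavior; the values are made up:)

GLuint bogus = 12345;           // not a texture name we ever generated
glDeleteTextures(1, &bogus);    // silently ignored by a conforming driver
GLuint zero = 0;
glDeleteTextures(1, &zero);     // the name 0 is likewise ignored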

The interesting thing is that if I simply don't delete the textures and buffers, the program runs fine until it eventually runs out of memory on the graphics card.

This is running on Windows XP 32-bit with an NVIDIA GeForce 9500 GT on the 266.58 drivers; the programming language is C++ in Visual Studio 2005.

Update

Apparently glDelete* is not the only function affected. I just got violations in several other methods (which wasn't the case yesterday)... it looks like something is damn broken here.

Update 2

This shouldn't fail, should it?

template <> inline
Texture<GL_TEXTURE_3D>::Texture(
    GLint internalFormat,
    glm::ivec3 size,
    GLint border ) : Wrapper<detail::gl_texture>()
{
    glGenTextures(1,&object.t);

    std::vector<GLbyte> tmp(glm::compMul(size)*4);
    glTextureImage3DEXT(
        object,                 // texture
        GL_TEXTURE_3D,          // target
        0,                      // level
        internalFormat,         // internal format
        size.x, size.y, size.z, // size
        border,                 // border
        GL_RGBA,                // format
        GL_BYTE,                // type
        &tmp[0]);               // zero-initialized dummy data
}

fails with:

Exception (first chance) at 0x072c35c0: 0xC0000005: Access violation while writing to position 0x00000004.
Unhandled exception at 0x072c35c0 in Project.exe: 0xC0000005: Access violation while writing to position 0x00000004.

Best guess: something is messing up the program's memory?

Comments (2)

谁许谁一生繁华 2024-10-26 23:18:01


I don't know why glDelete would crash, but I am fairly certain you don't need it anyway, and that you are overcomplicating this.

glGenTextures creates a 'name' for your texture; glTexImage3D gives OpenGL some data to attach to that name. If my understanding is correct, there is no reason to delete the name when you no longer need the data.

Instead, you should simply call glTexImage3D again on the same texture name and trust that the driver will know that your old data is no longer needed. This allows you to respecify a new size each time, instead of specifying a maximum size first and then calling glTexSubImage3D, which would make actually using the data difficult since the texture would still retain its maximum size.
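
In C++ terms, the idea is roughly this (a sketch; 'tex' is assumed to have been created once with glGenTextures, and the format parameters are placeholders):

// Respecify the same texture name with a new size instead of deleting and
// regenerating it; the driver is expected to release the old storage.
void respecifyVolume(GLuint tex, int w, int h, int d, const void* voxels)
{
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                 GL_RGBA, GL_BYTE, voxels);
}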

Below is a silly test in Python (pyglet needed) that allocates a whole bunch of textures (just to check that the GPU memory-usage measurement in GPU-Z actually works), then reallocates new data to the same texture every frame, with a random new size and some random data, just to defeat any optimizations that might kick in if the data stayed constant.

It's (obviously) slow as hell, but it definitely shows, at least on my system (Windows Server 2003 x64, NVIDIA Quadro FX 1800, drivers 259.81), that GPU memory usage does NOT go up while looping over the reallocation of the texture.

import pyglet
from pyglet.gl import *
import random

def toGLArray(input):
    return (GLfloat*len(input))(*input)

w, h = 800, 600
AR = float(h)/float(w)
window = pyglet.window.Window(width=w, height=h, vsync=False, fullscreen=False)


def init():
    glActiveTexture(GL_TEXTURE1)
    tst_tex = GLuint()
    some_data = [11.0, 6.0, 3.2, 2.8, 2.2, 1.90, 1.80, 1.80, 1.70, 1.70,  1.60, 1.60, 1.50, 1.50, 1.40, 1.40, 1.30, 1.20, 1.10, 1.00]
    some_data = some_data * 1000*500

    # allocate a few useless textures just to see GPU memory load go up in GPU-Z
    for i in range(10):
        dummy_tex = GLuint()
        glGenTextures(1, dummy_tex)
        glBindTexture(GL_TEXTURE_2D, dummy_tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

    # our real test texture
    glGenTextures(1, tst_tex)
    glBindTexture(GL_TEXTURE_2D, tst_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, 1000, 0, GL_RGBA, GL_FLOAT, toGLArray(some_data))
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)

def world_update(dt):
    pass
pyglet.clock.schedule_interval(world_update, 0.015)

@window.event
def on_draw():
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    # randomize texture size and data
    size = random.randint(1, 1000)
    data = [random.randint(0, 100) for i in xrange(size)]
    data = data*1000*4

    # just to see our draw calls 'tick'
    print pyglet.clock.get_fps()

    # reallocate texture every frame
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1000, size, 0, GL_RGBA, GL_FLOAT, toGLArray(data))

def main():
    init()
    pyglet.app.run()

if __name__ == '__main__':
    main()
苄①跕圉湢 2024-10-26 23:18:01

Sprinkle glGetError() calls throughout your code. I would wager you are getting caught out by the fact that glDelete doesn't actually destroy the object; the object may remain in use for several more frames. As such, I suspect you are running out of memory (i.e. glGetError is returning GL_OUT_OF_MEMORY).
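
For example, a checking helper you could drop in after each GL call (a sketch; the macro name and reporting style are mine):

#include <cstdio>

// Report any pending GL error together with a caller-supplied label.
#define GL_CHECK(label)                                              \
    do {                                                             \
        GLenum err = glGetError();                                   \
        if (err != GL_NO_ERROR)                                      \
            std::fprintf(stderr, "%s: GL error 0x%04X\n",            \
                         label, (unsigned)err);                      \
    } while (0)

// Usage:
//   glDeleteTextures(1, &tex);
//   GL_CHECK("glDeleteTextures");  // watch for GL_OUT_OF_MEMORY (0x0505)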
