Can't get integer vertex attributes to work in GLSL 1.5


I'm using an OpenGL 3.2 context with GLSL 1.5 and for some reason integer attributes (of type int, uint, ivecX, or uvecX) are always being read as 0 in the vertex shader. I'm declaring them using:

in int name;

and I am binding the attributes using glVertexAttribIPointer (note the I), not glVertexAttribPointer (though that one doesn't work either). If I change them to floats instead, they work perfectly fine - the only code differences are the type in the vertex struct, the type in GLSL, and the IPointer function call instead of plain Pointer. I'm not getting any errors; the values are just all 0. If I hard-code integer values instead, it works fine, and integer uniforms work fine too. Also, built-in integers like gl_VertexID work perfectly; only the custom ones don't. I'm running an ATI Mobility Radeon HD 5870. I tried on another computer with a different GPU (unfortunately I'm not sure which GPU, but it was different from mine) and got the same results. Any ideas why this might be the case? Thanks.
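For concreteness, here is a minimal sketch of the kind of vertex shader I'm describing, kept as a C string literal (the identifiers are illustrative, not my exact shader):

/* Minimal GLSL 1.50 vertex shader with an integer input. Integer
   outputs cannot be interpolated, so they must be flat-qualified. */
static const char *vertexShaderSrc =
    "#version 150\n"
    "in vec2 pos;\n"
    "in int animFrames;\n"        /* the integer attribute in question */
    "flat out int frameCount;\n"  /* 'flat' is required for integer outputs */
    "void main() {\n"
    "    frameCount  = animFrames;\n"
    "    gl_Position = vec4( pos, 0.0, 1.0 );\n"
    "}\n";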

EDIT: Actually, it looks like they aren't 0; more likely they're random large uninitialized values... it's hard to tell, since I can't find any way to debug GLSL shaders. Anyway, some more info. Here is my vertex structure:

struct TileVertex {
    float pos[2];        /* byte offset  0 */
    float uv[2];         /* byte offset  8 */
    float width;         /* byte offset 16 */
    float pad;           /* byte offset 20 */
    int animFrames;      /* byte offset 24 */
    int animFrameLength; /* byte offset 28; sizeof( TileVertex ) == 32 */
};

animFrames and animFrameLength are the two integer values I'm trying to send to the shader. My call to glVertexAttribIPointer for animFrames is the following:

glVertexAttribIPointer( attribute.location, attribute.typeSize, attribute.baseType, (GLsizei)stride, bufferOffset( bufOffset + attribOffset ) );

where:

attribute.location = 1 (as determined by OpenGL)
attribute.typeSize = 1 (since it's a single int, not a vector)
attribute.baseType = 5124, which is GL_INT
stride = 32, which is sizeof( TileVertex )
bufferOffset() converts to a void pointer relative to NULL
bufOffset = 0 (my vertices start at the beginning of the VBO), and
attribOffset = 24, which is the offset of animFrames in the TileVertex struct
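Putting those pieces together, a self-contained sketch of the setup looks like this (assuming the TileVertex struct above and a GL 3.2 header/loader; bufferOffset is shown as one plausible implementation, and offsetof makes the 24-byte offset explicit):

#include <stddef.h>   /* offsetof */

/* The classic BUFFER_OFFSET idiom: a byte offset as the void pointer GL expects. */
static const void *bufferOffset( size_t bytes ) {
    return (const char *)NULL + bytes;
}

static void setupAnimFramesAttrib( GLuint location ) {
    glVertexAttribIPointer( location,
                            1,       /* typeSize: a single int, not a vector */
                            GL_INT,  /* baseType: 5124 */
                            (GLsizei)sizeof( struct TileVertex ),  /* stride: 32 */
                            bufferOffset( offsetof( struct TileVertex, animFrames ) ) );  /* 24 */
    glEnableVertexAttribArray( location );  /* without this the attribute never reads the array */
}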

EDIT: Thanks for the help so far, guys. So I tried using transform feedback, and things are making more sense now. If I set the int attrib's value to 1, in the shader it is:

1065353216 = 0x3F800000 = 1.0 in floating point

If I set it to 10, in the shader I get:

1092616192 = 0x41200000 = 10.0 in floating point

So it appears that the int attrib is being converted to float, then those bits are being interpreted as int in the shader, even though I'm specifying GL_INT and using IPointer instead of Pointer! As I understand it, IPointer is supposed to just leave the data in integer form and not convert it to a float.
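For anyone who wants to reproduce that test, the transform feedback capture I used looked roughly like this (prog, vertexCount, and the varying name are illustrative; transform feedback is core since OpenGL 3.0):

enum { vertexCount = 4 };   /* illustrative count */

/* prog: a program whose shaders are attached and which we are free to relink. */
static void captureFrameCount( GLuint prog ) {
    const char *varyings[] = { "frameCount" };  /* must name a vertex shader output */
    glTransformFeedbackVaryings( prog, 1, varyings, GL_INTERLEAVED_ATTRIBS );
    glLinkProgram( prog );   /* varying capture only takes effect at link time */
    glUseProgram( prog );

    GLuint tfBuf;
    glGenBuffers( 1, &tfBuf );
    glBindBuffer( GL_TRANSFORM_FEEDBACK_BUFFER, tfBuf );
    glBufferData( GL_TRANSFORM_FEEDBACK_BUFFER, vertexCount * sizeof( GLint ),
                  NULL, GL_STATIC_READ );
    glBindBufferBase( GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuf );

    glEnable( GL_RASTERIZER_DISCARD );          /* we only want the captured values, not pixels */
    glBeginTransformFeedback( GL_POINTS );
    glDrawArrays( GL_POINTS, 0, vertexCount );
    glEndTransformFeedback();
    glDisable( GL_RASTERIZER_DISCARD );

    GLint captured[ vertexCount ];
    glGetBufferSubData( GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof( captured ), captured );
    /* captured[0] came back as 1065353216 (0x3F800000) with the attribute set to 1 */
}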

EDIT:

Here are some more tests. For each test I am trying to pass the integer value 1 to an integer input in the shader:

glVertexAttribIPointer with GL_INT: shader values are 0x3F800000, which is 1.0 in floating point

seems to indicate that integer 1 is being converted to floating point 1.0, then being interpreted as an integer. This means that OpenGL either thinks that the source data is in floating point form (when it is actually in integer form), or it thinks that the shader inputs are floating point (when they are actually ints).
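That reading is easy to check on the CPU: reinterpreting the bits of 1.0f as a 32-bit integer gives exactly the value the shader reports. A standalone snippet:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main( void ) {
    float f = 1.0f;
    uint32_t bits;
    memcpy( &bits, &f, sizeof bits );  /* the well-defined way to reinterpret bits in C */
    printf( "0x%08" PRIX32 " = %" PRIu32 "\n", bits, bits );  /* 0x3F800000 = 1065353216 */
    return 0;
}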

glVertexAttribIPointer with GL_FLOAT: shader values are valid but weird floating point values, such as 0.0, 1.0, 4.0, 36.0... what the hell!?

no idea what this means. The only value I am passing is the integer 1, so I can't figure out why each value would be different, or why they would be valid floats! My logic in trying this was that if OpenGL was converting the integers to floats, maybe telling it they were already floats would avoid that, but apparently not.

glVertexAttribPointer with GL_INT: same result as glVertexAttribIPointer with GL_INT

this is the expected result. OpenGL converts the ints to floats, then passes them to the shader. This is what is supposed to happen since I didn't use the I version.

glVertexAttribPointer with GL_FLOAT: shader values are the integer 1 (the correct result)

this works because OpenGL 1) thinks the source data is in floating point form and 2) thinks the shader inputs are also in floating point form (they are actually int and int), and therefore applies no conversion, leaving int as int (or float as float, as far as it knows). This works, but it seems very hacky and unreliable, since I don't think there's a guarantee that CPU float to GPU float never requires a conversion (don't some GPUs use 16-bit floats? Maybe that's just pre-OpenGL 3, but still) - it just doesn't on my GPU.


Comments (3)

能否归途做我良人 2024-11-25 14:27:53


For one of the shaders (can't remember which), you need to use the varying keyword. Or maybe the attribute keyword. Later versions of GLSL use in and out instead.

I think you need:

attribute int name;

for data going to the vertex shader, and

varying int name;

for data going from vertex shader to fragment shader.

Also make sure to enable the shader attribute with glEnableVertexAttribArray.


I had some trouble getting int attributes working too, but what I did discover is that the type field (GL_INT or GL_FLOAT) passed to glVertexAttribPointer matches the data passed to it, not the data type in the shader. So I ended up using glVertexAttribPointer with GL_INT, which converted the ints on my host to floats in my shader; that was fine for me, because my attribute was position data and needed to be transformed by a floating-point vec2 anyway.

Probably you need glVertexAttribIPointerEXT to match your shader int attribute, and then also GL_INT if the host is supplying data as an int array.

溺深海 2024-11-25 14:27:53


I know this question is old, but maybe this information is still helpful:

GLint is defined as being 32 bits wide,
but the int in your C code may well be 64 bits.

This got me debugging for way too long.
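A cheap guard against that mismatch is to pin the sizes down at compile time, e.g. with C11 static assertions (a sketch; include whichever GL header your loader provides):

#include <GL/gl.h>

/* Fail the build if the assumptions behind the vertex layout are wrong. */
_Static_assert( sizeof( GLint ) == 4, "GLint must be 32 bits" );
_Static_assert( sizeof( int ) == sizeof( GLint ),
                "plain int does not match GLint; use GLint or int32_t in vertex structs" );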

还给你自由 2024-11-25 14:27:53


tl;dr:

https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glVertexAttribPointer.xhtml

glVertexAttribPointer will convert your input to float, even if the normalized argument is false.

glVertexAttribIPointer will only accept ints, and has no normalized parameter.
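In code, the difference looks like this (loc, stride, and offset are placeholders):

/* The I variant hands the integer bits to the shader untouched; the plain
   variant converts to float even with normalized == GL_FALSE. */
glVertexAttribPointer ( loc, 1, GL_INT, GL_FALSE, stride, offset );  /* shader sees float */
glVertexAttribIPointer( loc, 1, GL_INT,           stride, offset );  /* shader sees int   */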
