How to create a custom vertex format in OpenGL
I am writing my own engine using OpenTK (basically just OpenGL bindings for C#, gl* becomes GL.*) and I'm going to be storing a lot of vertex buffers with several thousand vertices in each. Therefore I need my own, custom vertex format, as a Vec3 with floats would simply take up too much space. (I'm talking about millions of vertices here)
What I want to do is to create my own vertex format with this layout:
Byte 0: Position X
Byte 1: Position Y
Byte 2: Position Z
Byte 3: Texture Coordinate X
Byte 4: Color R
Byte 5: Color G
Byte 6: Color B
Byte 7: Texture Coordinate Y
Here is the code in C# for the vertex:
public struct SmallBlockVertex
{
public byte PositionX;
public byte PositionY;
public byte PositionZ;
public byte TextureX;
public byte ColorR;
public byte ColorG;
public byte ColorB;
public byte TextureY;
}
A byte as position for each axis is plenty, as I only need 32^3 unique positions.
I have written my own vertex shader which takes two vec4's as inputs, one for each set of four bytes.
My vertex shader is this:
attribute vec4 pos_data;
attribute vec4 col_data;
uniform mat4 projection_mat;
uniform mat4 view_mat;
uniform mat4 world_mat;
void main()
{
vec4 position = pos_data * vec4(1.0, 1.0, 1.0, 0.0);
gl_Position = projection_mat * view_mat * world_mat * position;
}
To try and isolate the problem, I have made my vertex shader as simple as possible.
The code for compiling shaders is tested with immediate mode drawing, and it works, so it can't be that.
Here is my function which generates, sets up and fills the vertex buffer with data and establishes a pointer to the attributes.
public void SetData<VertexType>(VertexType[] vertices, int vertexSize) where VertexType : struct
{
GL.GenVertexArrays(1, out ArrayID);
GL.BindVertexArray(ArrayID);
GL.GenBuffers(1, out ID);
GL.BindBuffer(BufferTarget.ArrayBuffer, ID);
GL.BufferData<VertexType>(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * vertexSize), vertices, BufferUsageHint.StaticDraw);
GL.VertexAttribPointer(Shaders.PositionDataID, 4, VertexAttribPointerType.UnsignedByte, false, 4, 0);
GL.VertexAttribPointer(Shaders.ColorDataID, 4, VertexAttribPointerType.UnsignedByte, false, 4, 4);
}
From what I understand, this is the correct procedure to:
Generate a Vertex Array Object and bind it
Generate a Vertex Buffer and bind it
Fill the Vertex Buffer with data
Set the attribute pointers
Shaders.*DataID is set with this code after compiling and using the shader.
PositionDataID = GL.GetAttribLocation(shaderProgram, "pos_data");
ColorDataID = GL.GetAttribLocation(shaderProgram, "col_data");
And this is my render function:
void Render()
{
GL.UseProgram(Shaders.ChunkShaderProgram);
Matrix4 view = Constants.Engine_Physics.Player.ViewMatrix;
GL.UniformMatrix4(Shaders.ViewMatrixID, false, ref view);
//GL.Enable(EnableCap.DepthTest);
//GL.Enable(EnableCap.CullFace);
GL.EnableClientState(ArrayCap.VertexArray);
{
Matrix4 world = Matrix4.CreateTranslation(offset.Position);
GL.UniformMatrix4(Shaders.WorldMatrixID, false, ref world);
GL.BindVertexArray(ArrayID);
GL.BindBuffer(OpenTK.Graphics.OpenGL.BufferTarget.ArrayBuffer, ID);
GL.DrawArrays(OpenTK.Graphics.OpenGL.BeginMode.Quads, 0, Count / 4);
}
//GL.Disable(EnableCap.DepthTest);
//GL.Disable(EnableCap.CullFace);
GL.DisableClientState(ArrayCap.VertexArray);
GL.Flush();
}
Can anyone be so kind as to give me some pointers (no pun intended)? Am I doing this in the wrong order or is there some functions I need to call?
I've searched all over the web, but can't find one good tutorial or guide explaining how to implement custom vertices.
If you need any more information, please say so.
1 Answer
There is not much to making your own vertex format. It is all done in the glVertexAttribPointer calls.
First of all, you are using 4 as the stride parameter, but your vertex structure is 8 bytes wide, so there are 8 bytes from the start of one vertex to the next; the stride has to be 8 (in both calls, of course). The offsets are correct, but you should set the normalized flag to true for the colors, as you surely want them to be in the [0,1] range (I don't know if this should also be the case for the vertex positions).
Next, when using custom vertex attributes in shaders, you don't enable the deprecated fixed-function arrays (the gl...ClientState things). Instead you have to use glEnableVertexAttribArray and the corresponding glDisableVertexAttribArray calls.
And what does the Count / 4 in the glDrawArrays call mean? Keep in mind that the last parameter specifies the number of vertices, not the number of primitives (quads in your case). But maybe it's intended this way.
Besides these real errors, you should not use such a complicated vertex format that you have to decode it in the shader yourself. That's what the stride and offset parameters of glVertexAttribPointer are for. For example, you could redefine your vertex data a bit so that the bytes of each attribute are contiguous, give each attribute its own glVertexAttribPointer call, and then you don't have to extract the texture coordinate from the position and color yourself in the shader.
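A sketch of what that reordering could look like; the field order, Shaders.TextureDataID, and tex_data are illustrative names, not from the original code:

```csharp
// Reordered so each attribute's bytes are contiguous (still 8 bytes total):
public struct SmallBlockVertex
{
    public byte PositionX, PositionY, PositionZ;   // offsets 0..2
    public byte ColorR, ColorG, ColorB;            // offsets 3..5
    public byte TextureX, TextureY;                // offsets 6..7
}

// One pointer per attribute; stride is still 8, offsets follow the layout.
// normalized = true maps the bytes into [0,1] for the shader.
GL.VertexAttribPointer(Shaders.PositionDataID, 3, VertexAttribPointerType.UnsignedByte, false, 8, 0);
GL.VertexAttribPointer(Shaders.ColorDataID,    3, VertexAttribPointerType.UnsignedByte, true,  8, 3);
GL.VertexAttribPointer(Shaders.TextureDataID,  2, VertexAttribPointerType.UnsignedByte, true,  8, 6);

// and in the shader simply:
//   attribute vec3 pos_data;
//   attribute vec3 col_data;
//   attribute vec2 tex_data;
```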
And you should really think about whether your space requirements actually demand using bytes for vertex positions, as this severely limits the precision of your position data. Maybe shorts or half-precision floats would be a good compromise.
It should also not be necessary to call glBindBuffer in the render method, as the binding is only needed for glVertexAttribPointer and is saved in the VAO that gets activated by glBindVertexArray. You should also usually not call glFlush, as this is done anyway by the OS when the buffers are swapped (assuming you use double buffering). And last but not least, be sure your hardware supports all the features you are using (like VBOs and VAOs).
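Taken together, a trimmed render function might look like this (a sketch, assuming the attribute arrays were enabled in SetData while the VAO was bound):

```csharp
void Render()
{
    GL.UseProgram(Shaders.ChunkShaderProgram);

    Matrix4 view = Constants.Engine_Physics.Player.ViewMatrix;
    GL.UniformMatrix4(Shaders.ViewMatrixID, false, ref view);
    Matrix4 world = Matrix4.CreateTranslation(offset.Position);
    GL.UniformMatrix4(Shaders.WorldMatrixID, false, ref world);

    // Binding the VAO restores the buffer binding, attribute pointers and
    // enable flags; no glBindBuffer or client-state calls are needed.
    GL.BindVertexArray(ArrayID);
    // The last argument is the vertex count, not the quad count.
    GL.DrawArrays(BeginMode.Quads, 0, Count);
}
```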
EDIT: Actually, the enabled flags of the arrays are also stored in the VAO, so you can call glEnableVertexAttribArray in the SetData method (after creating and binding the VAO, of course), and the arrays then get enabled when you bind the VAO via glBindVertexArray in the render function. Oh, I just saw another error: when you bind the VAO in the render function, the enabled flags of the attribute arrays are overwritten by the state from the VAO, and since you did not enable them after creating the VAO, they are still disabled. So you will have to do it as said above and enable the arrays in the SetData method. Actually, in your case you might be lucky and the VAO may still be bound when you enable the arrays in the render function (as you didn't call glBindVertexArray(0)), but you shouldn't count on that.
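Putting the fixes together, the SetData method from the question might become something like this (a sketch keeping the original two-vec4 layout; only the stride, the color normalization flag, and the enables change):

```csharp
public void SetData<VertexType>(VertexType[] vertices, int vertexSize) where VertexType : struct
{
    GL.GenVertexArrays(1, out ArrayID);
    GL.BindVertexArray(ArrayID);

    GL.GenBuffers(1, out ID);
    GL.BindBuffer(BufferTarget.ArrayBuffer, ID);
    GL.BufferData<VertexType>(BufferTarget.ArrayBuffer,
        (IntPtr)(vertices.Length * vertexSize), vertices, BufferUsageHint.StaticDraw);

    // Stride is the size of a whole vertex (8 bytes), and the colors are
    // normalized so the shader sees them in [0,1].
    GL.VertexAttribPointer(Shaders.PositionDataID, 4, VertexAttribPointerType.UnsignedByte, false, 8, 0);
    GL.VertexAttribPointer(Shaders.ColorDataID, 4, VertexAttribPointerType.UnsignedByte, true, 8, 4);

    // Enable the arrays while the VAO is still bound, so the enable
    // flags are stored in it.
    GL.EnableVertexAttribArray(Shaders.PositionDataID);
    GL.EnableVertexAttribArray(Shaders.ColorDataID);
}
```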