Problems rendering vertices with VBOs - OpenGL
I am transferring my vertex array functions over to VBOs to increase the speed of my application.
Here was my original working vertex array rendering function:
void BSP::render()
{
    glFrontFace(GL_CCW);

    // Set up rendering states
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);

    // Draw
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, indices);

    // End of rendering - disable states
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Worked great!
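For reference, here is a Vertex layout consistent with those pointer calls; this is an assumption, since the question doesn't show the actual struct:

// Hypothetical layout implied by glVertexPointer(..., &vertices[0].x) and
// glTexCoordPointer(..., &vertices[0].u), both with stride sizeof(Vertex).
struct Vertex
{
    float x, y, z;  // position
    float u, v;     // texture coordinates
};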
Now I am moving them into VBOs, and my program actually causes my graphics card to stop responding. The setup of my vertices and indices is exactly the same.
New setup:
vboId is set up in bsp.h like so: GLuint vboId[2];
I get no error when I just run the createVBO() function!
void BSP::createVBO()
{
    // Generate buffers
    glGenBuffers(2, vboId);

    // Bind the first buffer (vertices)
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Now save indices data in buffer
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
}
And here is the rendering code for the VBOs. I am pretty sure the problem is in here. I just want to render what's in the VBO like I did with the vertex array.
Render:
void BSP::renderVBO()
{
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);         // for vertex coordinates
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]); // for indices

    // do same as vertex array except pointer
    glEnableClientState(GL_VERTEX_ARRAY);            // activate vertex coords array
    glVertexPointer(3, GL_FLOAT, 0, 0);              // last param is offset, not ptr

    // draw the bsp area
    glDrawElements(GL_TRIANGLES, numVertices, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));

    glDisableClientState(GL_VERTEX_ARRAY);           // deactivate vertex array

    // bind with 0, so, switch back to normal pointer operation
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Not sure what the error is, but I am pretty sure I have my rendering function wrong. I wish there were a more unified tutorial on this, as there are a bunch online, but they often contradict each other.
3 Answers
In addition to what Miro said (the GL_UNSIGNED_BYTE should be GL_UNSIGNED_SHORT), I don't think you want to use numVertices but numIndices, like in your non-VBO call.

Otherwise your code looks quite valid, and if this doesn't fix your problem, maybe the error is somewhere else.

And by the way, the BUFFER_OFFSET(i) thing is usually just a define for ((char*)0+(i)), so you can also just pass in the byte offset directly, especially when it's 0.

EDIT: Just spotted another one. If you use the exact data structures you use for the non-VBO version (which I assumed above), then you of course need to use sizeof(Vertex) as the stride parameter in glVertexPointer.
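Putting those fixes together, a sketch of how the render function might look, assuming the Vertex struct and GLushort indices from the non-VBO version (the texture-coordinate setup from the original render() is carried over as well):

void BSP::renderVBO()
{
    // Typical definition, as noted above:
    // #define BUFFER_OFFSET(i) ((char*)0 + (i))

    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // Same stride as the non-VBO call; the last argument is now a byte
    // offset into the bound buffer instead of a client-memory pointer.
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(offsetof(Vertex, u)));

    // numIndices (not numVertices), and GL_UNSIGNED_SHORT to match the
    // 16-bit indices used by the original glDrawElements call.
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Bind with 0 to switch back to normal pointer operation
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}

(offsetof comes from <cstddef>; it keeps the texture-coordinate offset correct even if the assumed Vertex layout changes.)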
If you are passing the same data to glDrawElements when you aren't using a VBO, and the same data into the VBO buffer, then the parameters should hardly differ. Yet without the VBO you've used GL_UNSIGNED_SHORT, and with the VBO you've used GL_UNSIGNED_BYTE. So I think the VBO call should look like the sketch below.

Also look at this tutorial, where VBO buffers are explained very well.
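A sketch of what that corrected call might look like, assuming 16-bit (GLushort) indices as in the non-VBO version, with the count also switched to numIndices as the other answer points out:

// The type must match what was uploaded to GL_ELEMENT_ARRAY_BUFFER:
// 16-bit indices -> GL_UNSIGNED_SHORT, counted as a number of indices.
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));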
How do you declare vertices and indices?

The size parameter to glBufferData should be the size of the buffer in bytes, and if you pass sizeof(vertices) it will return the total size of the declared array (not just what is allocated).

Try something like sizeof(Vertex)*numVertices and sizeof(indices[0])*numIndices instead.
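Following that suggestion, a sketch of the upload code with explicit byte counts (assuming numVertices and numIndices hold the element counts used by the draw calls):

void BSP::createVBO()
{
    glGenBuffers(2, vboId);

    // Size is an explicit byte count; sizeof(vertices) would give the size
    // of the pointer (or of the whole declared array), not the number of
    // bytes actually filled with vertex data.
    glBindBuffer(GL_ARRAY_BUFFER, vboId[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);

    // Upload the index data the same way.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[0]) * numIndices, indices, GL_STATIC_DRAW);
}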