What causes fluctuating execution times when presenting a renderbuffer? (OpenGL)
This is what happens: the drawGL function is called at the exact end of the frame thanks to a usleep, as suggested. This already maintains a steady framerate. The actual presentation of the renderbuffer takes place in drawGL(). Measuring how long this takes gives me fluctuating execution times, resulting in stutter in my animation. The timer uses mach_absolute_time, so it is extremely accurate. At the end of each frame I measure timeDifference. Yes, it is 1 millisecond on average, but it deviates a lot, ranging from 0.8 to 1.2 milliseconds, with peaks of more than 2 milliseconds.
Example:
// A timer calls tick roughly once per frame
- (void)tick
{
    [self drawGL];
}
- (void)drawGL
{
    uint64_t startTime = mach_absolute_time();
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    uint64_t endTime = mach_absolute_time();
    // Convert elapsed ticks to nanoseconds with mach_timebase_info
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    uint64_t timeDifference = (endTime - startTime) * timebase.numer / timebase.denom;
}
My understanding is that once the framebuffer has been created, presenting the renderbuffer should always take the same effort, regardless of the complexity of the frame. Is this true? And if not, how can I prevent this?
By the way, this is an example from an iPhone app, so we're talking about OpenGL ES here, though I don't think it's a platform-specific problem. If it is, then what is going on? Shouldn't this not be happening? And again, if so, how can I prevent it?
The deviations you encounter may be caused by many factors, including the OS scheduler kicking in and handing the CPU to another process, or similar issues. In fact, a typical person cannot tell the difference between 1 ms and 2 ms render times. Motion pictures run at 25 fps, which means each frame is shown for roughly 40 ms, and that looks fluid to the human eye.
As for the animation stuttering, you should examine how you maintain a constant animation speed. The most common approach I've seen looks roughly like this:
Or you could just pass lastFrameTime to updateAnimation every frame and interpolate between animation states. The result will be even smoother.
If you're already using something like the above, maybe you should look for culprits in other parts of your render loop. In Direct3D the costly operations were draw-primitive calls and render-state changes, so you might want to check the OpenGL analogues of those.
My favorite OpenGL expression of all time: "implementation specific". I think it applies here very well.
A quick search for mach_absolute_time turns up this article: Link
It looks like the precision of that timer on the iPhone is only 166.67 ns (and possibly worse).
While that may explain the large deviations, it doesn't explain why there is any difference at all.
The three main reasons are probably:
It is best not to rely on a high, constant frame rate for a number of reasons, the most important being that the OS may do something in the background that slows things down. It is better to sample a timer and work out how much time has passed each frame; this should ensure smooth animation.
Is it possible that the timer is not accurate to the sub-millisecond level, even though it returns decimal values in the 0.8 to 2.0 range?