iOS: Aliasing with OpenGL but not with Cocos2D?
I am new to OpenGL programming, but I am doing something very basic, and the difference in quality between my custom OpenGL code and Cocos2D is huge!
I am just trying to load an image and continuously rotate it every frame. With my code, I get a lot of flickering and sharp edges, while Cocos2D has it all nice and smooth.
I've set up 4x multi-sample anti-aliasing using Apple's recommended code for iOS 4, and still it looks very bad in comparison to Cocos2D without any MSAA.
You can see the differences here:
custom OpenGL code (with MSAA):
Cocos2D (without MSAA):
Does anyone know what I am missing to be able to achieve such smooth graphics? By looking at the Cocos2D code, I found some references that linked aliasing to GL_LINEAR. I've added GL_LINEAR parameters to my textures just like Cocos2D does, but it still looks equally bad.
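For reference, the Apple-recommended MSAA setup mentioned above looks roughly like this (a sketch, not the asker's actual code; the handles, `width`/`height`, and the on-screen `viewFramebuffer` are placeholders):

```c
/* Sketch of Apple's recommended 4x MSAA setup for OpenGL ES on iOS 4.
 * Buffer handles and dimensions are placeholders, not the question's code. */
GLuint msaaFramebuffer, msaaRenderbuffer;
glGenFramebuffers(1, &msaaFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);

glGenRenderbuffers(1, &msaaRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES,
                                      width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaRenderbuffer);

/* ... draw the scene into msaaFramebuffer each frame ... */

/* Resolve the multisampled buffer into the on-screen framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();
```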
1 Answer
Anti-aliasing does exactly what the name says: it prevents primitives from assuming aliases, such as a straight (diagonal) line turning into a staircase. Because anti-aliasing usually results in 'soft' edges, the term is sometimes used to apply to any algorithmic 'softening', but it's incorrect to do so.
Assuming your source texture already contains some anti-aliasing to render the curved edges of your car onto a pixel grid (so, if you opened the source PNG or whatever in an art program and zoomed in on the edges, you'd see some softness), I think that your code is failing to apply multisampling for whatever reason. If you zoom in and look at the top edge of the roof, check out the transition between the very top of the step one in from the right and the step to its left. The harsh dark colour at the top just spontaneously steps up a pixel. That's symptomatic of that edge being in the original texture and being copied out pixel by pixel.
GL_LINEAR is a filtering parameter that affects how OpenGL answers questions like 'if the source pixel at (0, 0) is one colour and the one at (0, 1) is another, then what colour is at (0, 0.5)?' If you apply linear filtering, then when you scale your texture above its native size, the extra pixels that OpenGL has to invent are created using a linear combination of the nearest source pixels. If you'd used GL_NEAREST, then each would be the colour of whichever source pixel is nearest. That's the difference between textures that scale up to look blurry and low contrast and textures that scale up to look like mosaics with obvious pixels. So it (usually) adds blurriness and softness to the image overall, but it isn't really anything to do with anti-aliasing.
With respect to why you're not getting anti-aliasing, two possible reasons spring to mind. You may have some error in your code, or you may simply be using a different algorithm from Cocos2D. Apple's hardware multisampling support arrived only in iOS 4, and Cocos2D predates that, so it may be sticking to a 'software' method (specifically, rendering the whole scene pixel-by-pixel at 4x the size, then getting the GPU to scale it down). The latter would be significantly slower but would prevent the hardware from attempting to optimise the process. One optimisation that some hardware sometimes applies is to multisample only at (approximately) the edges of geometry. That obviously wouldn't benefit you at all.
Another possibility is that you're scaling your image down when you draw (though it doesn't look like it) and Cocos2D is generating mip maps whereas you're not. Mip maps precompute certain scales of image and work from there when drawing to the screen. Doing it that way allows a more expensive algorithm to be applied and tends to lead to less aliasing.
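If that turns out to be the difference, the texture-side fix is small; a sketch of the relevant calls, assuming an OpenGL ES 2.0 context with the texture already bound and uploaded:

```c
/* Request trilinear minification so mip levels are actually used,
 * then have the driver build the mip chain from the base image.
 * In ES 2.0, glGenerateMipmap needs power-of-two dimensions unless
 * an NPOT extension is available. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
```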
Can you post some code?