Android: creating blurred textures using blending in OpenGL 1.1
Has anyone had much success on Android with creating blurred textures using blending to blur a texture?
I'm thinking of the technique described here but the crux is to take a loaded texture and then apply a blur to it so that the bound texture itself is blurred.
Comments (1)
"In-place blurring" is something a CPU can do, but on a GPU, which generally does things in parallel, you must have another image buffer as the render target.
Even with newer shaders, reads and writes from/to the same buffer can lead to corruption because they may be reordered. A related issue is that a Gaussian blur kernel, which can perform the blur in a single pass, depends on neighboring fragments that may already have been modified by the kernel applied at their own fragment coordinates.
If you don't have the 'framebuffer_object' extension available for rendering into renderbuffers or even textures (the latter additionally requires the 'render_texture' extension), you have to render into the back buffer as in the example and then either call glReadPixels() to read the image back and upload it to the source texture, or do a fast, direct glCopyTexImage2D() (OpenGL ES 1.1 has this). If the render target is too small, you can render multiple tiles.