How can I convert GLSL overlay blending to OpenGL ES 1.1?
The following is an implementation of an overlay blend algorithm in GLSL, drawn from the OpenGL® Shading Language, Third Edition:

19.6.12 Overlay

OVERLAY first computes the luminance of the base value. If the luminance value is less than 0.5, the blend and base values are multiplied together. If the luminance value is greater than 0.5, a screen operation is performed. The effect is that the base value is mixed with the blend value, rather than being replaced. This allows patterns and colors to overlay the base image, but shadows and highlights in the base image are preserved. A discontinuity occurs where luminance = 0.5. To provide a smooth transition, we actually do a linear blend of the two equations for luminance in the range [0.45, 0.55].
float luminance = dot(base, lumCoeff);
if (luminance < 0.45)
    result = 2.0 * blend * base;
else if (luminance > 0.55)
    result = white - 2.0 * (white - blend) * (white - base);
else {
    vec4 result1 = 2.0 * blend * base;
    vec4 result2 = white - 2.0 * (white - blend) * (white - base);
    result = mix(result1, result2, (luminance - 0.45) * 10.0);
}
How would I implement something similar in OpenGL ES 1.1 (targeting the iPhone 3G), without using shaders? Can I use a blend function or texture combine to implement this?
For the purposes of leaving an answer on the record and supposing there are no further optimisations you can make, you could:
1) Load luminance value into the alpha channel
Set up a framebuffer object, the same size as the original texture. Use glColorMask to enable or disable writing to different channels. First of all, enable red, green and blue channels and disable the alpha channel. Draw the texture normally. This will duplicate the texture's colour information.
Then enable the alpha channel and disable the red, green and blue channels. Use the dot3 extension (which has been supported on the iPhone since the beginning) to fill the target alpha channel with luminance values.
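As a rough sketch of step 1 (the combiner enums are core ES 1.1; `drawFullScreenQuad` and `baseTexture` are hypothetical, and a framebuffer object is assumed to be bound already): note that the DOT3 combiner computes 4 · Σ (s0 − 0.5)(s1 − 0.5), so a single stage cannot store plain `dot(base, lumCoeff)`. Encoding the weights as `w/4 + 0.5` stores clamp(luminance − 0.5) instead, and later alpha-test thresholds have to be chosen to match whatever biased value you settle on.

```c
/* Step 1 sketch: copy RGB, then write a luminance-derived alpha, into
 * an already-bound FBO. drawFullScreenQuad() is a hypothetical helper
 * that draws the given texture across the whole render target. */

/* Pass 1: colour channels only. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
drawFullScreenQuad(baseTexture);

/* Pass 2: alpha channel only, via the DOT3 combiner. With the
 * luminance weights (0.30, 0.59, 0.11) encoded as w/4 + 0.5, the
 * combiner output is clamp(luminance - 0.5), not raw luminance. */
static const GLfloat lumConst[4] = {
    0.30f / 4.0f + 0.5f,
    0.59f / 4.0f + 0.5f,
    0.11f / 4.0f + 0.5f,
    1.0f
};
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGBA); /* result replicated into alpha */
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, lumConst);
drawFullScreenQuad(baseTexture);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); /* restore */
```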
2) Split the texture into three textures, based on luminance
A simple scheme would be just to split at luminance = 0.5 and ignore the linear blend. If you were to do that, you could again use framebuffer objects to split the texture on the GPU. This time use the alpha test (glAlphaFunc, and be sure to enable it) to pass all areas with an alpha greater than 0.50 when drawing to one texture, and all areas with an alpha less than or equal to 0.50 when drawing to the other.
Although you can do only one alpha test per pixel, meaning that you can't separate out the range 0.45 to 0.55 in a single step, you could do that in two steps.
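A sketch of the simple 0.5 split, assuming the alpha channel already holds a luminance-derived value (`bindFramebufferForTexture`, `drawFullScreenQuad`, and the texture names are hypothetical; if step 1 stored a biased value such as luminance − 0.5, shift the 0.5 thresholds accordingly). The clear colours are chosen so that rejected texels end up with an alpha that fails the same test later, which makes the splits easy to mask again at composite time:

```c
/* Step 2 sketch: split the base texture into bright/dark pieces by
 * testing the stored alpha. bindFramebufferForTexture() is a
 * hypothetical helper attaching the named texture to the FBO. */
glEnable(GL_ALPHA_TEST);

/* Bright regions (screen branch); cleared texels keep alpha 0. */
bindFramebufferForTexture(brightTexture);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glAlphaFunc(GL_GREATER, 0.5f);
drawFullScreenQuad(baseWithLuminanceAlpha);

/* Dark regions (multiply branch); cleared texels keep alpha 1. */
bindFramebufferForTexture(darkTexture);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glAlphaFunc(GL_LEQUAL, 0.5f);
drawFullScreenQuad(baseWithLuminanceAlpha);

glDisable(GL_ALPHA_TEST);
```

Isolating the 0.45 to 0.55 transition band works the same way, but in two passes into the same target, since only one alpha test applies per draw: for example, pass everything above 0.45 first, then knock out everything above 0.55 with a second draw.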
3) Use normal blend modes to composite the two or three textures onto your framebuffer
You can subvert the lighting system to offset and scale the alpha channels during rendering if necessary.
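As a sketch of the compositing pass: the multiply branch, 2 · blend · base, maps exactly onto the "modulate 2×" blend function, while the book's 2× screen term has no single-pass fixed-function equivalent, so an ordinary screen blend is used here as an approximation. Both branch formulas are symmetric in blend and base, so either layer can sit in the framebuffer; the texture names are hypothetical, and the alpha tests assume the split textures' alpha channels still distinguish kept texels from cleared ones (for instance via the clear colour used when splitting):

```c
/* Step 3 sketch: the framebuffer holds the blend layer; the split base
 * pieces are drawn over it with per-branch blend functions. The alpha
 * test discards texels outside each split so they leave the
 * destination untouched instead of darkening it. */
glEnable(GL_BLEND);
glEnable(GL_ALPHA_TEST);

/* Dark split: dst' = src*dst + dst*src = 2 * blend * base. */
glAlphaFunc(GL_LEQUAL, 0.5f);
glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR);
drawFullScreenQuad(darkTexture);

/* Bright split: dst' = src + dst*(1 - src), ordinary screen, as an
 * approximation of white - 2*(white - blend)*(white - base). */
glAlphaFunc(GL_GREATER, 0.5f);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
drawFullScreenQuad(brightTexture);

glDisable(GL_ALPHA_TEST);
glDisable(GL_BLEND);
```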
Obviously you'd optimise by performing the steps that are identical on every draw just once, at startup; that probably means permanently storing what is currently one texture as two or three.