Drawing 2D image outlines with an OpenGL ES shader

Published 2024-12-28 23:39:34

I'm working on a 2D iOS game using OpenGL ES 2.0 and I'm wondering if it's possible to write a shader that will outline images with a glow. All the images are 2D sprites. The outlining shader examples I've seen are for 3D objects, so I'm not sure whether the technique carries over to 2D images.

Comments (1)

忆离笙 2025-01-04 23:39:34

Would you accept an edge detection filter (such as Sobel), producing an image like the one shown in the Wikipedia article on the Sobel operator, followed by a Gaussian blur on the result to soften the edges and give it more of a glow, with that image then composited onto your scene?
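
As a rough sketch of that final composite step (my own illustration, not part of the original answer; the sampler and varying names are assumptions), assuming the edge-detected-and-blurred image has been rendered into its own texture:

// Hypothetical composite pass: the scene and the blurred edge image are
// both assumed to be available as textures covering the same screen area.
varying mediump vec2 texCoordVarying;

uniform sampler2D sceneTex;   // the rendered scene
uniform sampler2D glowTex;    // edge detect followed by Gaussian blur

void main()
{
    mediump vec4 scene = texture2D(sceneTex, texCoordVarying);
    mediump vec4 glow  = texture2D(glowTex, texCoordVarying);

    // additive composite, clamped so the result stays displayable
    gl_FragColor = clamp(scene + glow, 0.0, 1.0);
}

Equivalently, you could skip the second sampler and just draw the glow texture over the already-rendered scene with additive blending (glBlendFunc(GL_ONE, GL_ONE)).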

In practice you could probably just reuse the 3D outlining shaders you've seen: although you could in theory inspect depth values (with some extension work in ES), every one I've seen is really just a 2D effect applied to the rendered image.

EDIT: on further consideration, the Laplacian may be a little easier to apply than the Sobel, because it can be done as a single simple convolution shader (as described in places like this). To be safe on mobile, though, you probably want to stick to 3x3 kernels at most, and to write a different shader for each effect rather than driving one shader with data. So, for example, a rough Gaussian blur written out at length:

// Assumed declarations for the identifiers used in the original snippet.
varying mediump vec2 texCoordVarying;   // texture coordinate from the vertex shader

uniform sampler2D tex2D;                // source texture
uniform mediump float width;            // texture width in pixels
uniform mediump float height;           // texture height in pixels

void main()
{
    mediump vec4 total = vec4(0.0);
    mediump vec4 grabPixel;

    // corner samples, weight 1
    total += texture2D(tex2D, texCoordVarying + vec2(-1.0 / width, -1.0 / height));
    total += texture2D(tex2D, texCoordVarying + vec2( 1.0 / width, -1.0 / height));
    total += texture2D(tex2D, texCoordVarying + vec2( 1.0 / width,  1.0 / height));
    total += texture2D(tex2D, texCoordVarying + vec2(-1.0 / width,  1.0 / height));

    // edge-adjacent samples, weight 2
    grabPixel = texture2D(tex2D, texCoordVarying + vec2(0.0, -1.0 / height));
    total += grabPixel * 2.0;

    grabPixel = texture2D(tex2D, texCoordVarying + vec2(0.0, 1.0 / height));
    total += grabPixel * 2.0;

    grabPixel = texture2D(tex2D, texCoordVarying + vec2(-1.0 / width, 0.0));
    total += grabPixel * 2.0;

    grabPixel = texture2D(tex2D, texCoordVarying + vec2(1.0 / width, 0.0));
    total += grabPixel * 2.0;

    // centre sample, weight 4
    grabPixel = texture2D(tex2D, texCoordVarying);
    total += grabPixel * 4.0;

    // normalise by the sum of the kernel weights (16)
    total *= 1.0 / 16.0;

    gl_FragColor = total;
}

A Laplacian edge detect ends up looking similar, just with different constants.
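
As a rough sketch of what that might look like (my addition, not part of the original answer), reusing the same tex2D, width, height and texCoordVarying names as the blur above, with the 8-neighbour 3x3 Laplacian kernel (centre weight 8, every neighbour -1); the edge strength is written out as a grey value that a later blur pass can soften into the glow:

// Hypothetical 3x3 Laplacian edge-detect pass, same identifiers as the
// blur shader above (assumed, not taken verbatim from the answer).
varying mediump vec2 texCoordVarying;

uniform sampler2D tex2D;
uniform mediump float width;
uniform mediump float height;

void main()
{
    mediump vec2 px = vec2(1.0 / width, 1.0 / height);

    // centre pixel weighted 8, all eight neighbours weighted -1
    mediump vec4 total = texture2D(tex2D, texCoordVarying) * 8.0;

    total -= texture2D(tex2D, texCoordVarying + vec2(-px.x, -px.y));
    total -= texture2D(tex2D, texCoordVarying + vec2( 0.0,  -px.y));
    total -= texture2D(tex2D, texCoordVarying + vec2( px.x, -px.y));
    total -= texture2D(tex2D, texCoordVarying + vec2(-px.x,  0.0));
    total -= texture2D(tex2D, texCoordVarying + vec2( px.x,  0.0));
    total -= texture2D(tex2D, texCoordVarying + vec2(-px.x,  px.y));
    total -= texture2D(tex2D, texCoordVarying + vec2( 0.0,   px.y));
    total -= texture2D(tex2D, texCoordVarying + vec2( px.x,  px.y));

    // collapse the per-channel response to a single edge strength
    mediump float edge = clamp(dot(abs(total.rgb), vec3(1.0 / 3.0)), 0.0, 1.0);

    gl_FragColor = vec4(vec3(edge), 1.0);
}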

As an optimisation, you should work out your relative sampling points in the vertex shader rather than the fragment shader as far as the limit on varyings allows, because doing so avoids dependent texture reads.
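
To make that concrete, here is a minimal sketch of the idea (my illustration; the attribute, uniform and varying names are assumptions, not from the answer). It uses the 4-connected Laplacian so that the centre plus four neighbour coordinates fit in five varying vectors, comfortably under the eight that ES 2.0 guarantees; every texture2D call in the fragment shader then reads an unmodified varying, so none of the fetches is a dependent texture read. The full 3x3 kernel would need nine coordinates, which is where the varying limit starts to bite.

// --- vertex shader (illustrative) ---
attribute vec4 position;
attribute vec2 texCoord;

uniform mat4 modelViewProjection;
uniform float width;
uniform float height;

varying vec2 centreCoord;
varying vec2 leftCoord;
varying vec2 rightCoord;
varying vec2 aboveCoord;
varying vec2 belowCoord;

void main()
{
    vec2 px = vec2(1.0 / width, 1.0 / height);

    // precompute all sample coordinates once per vertex
    centreCoord = texCoord;
    leftCoord   = texCoord - vec2(px.x, 0.0);
    rightCoord  = texCoord + vec2(px.x, 0.0);
    aboveCoord  = texCoord - vec2(0.0, px.y);
    belowCoord  = texCoord + vec2(0.0, px.y);

    gl_Position = modelViewProjection * position;
}

// --- fragment shader (illustrative) ---
varying mediump vec2 centreCoord;
varying mediump vec2 leftCoord;
varying mediump vec2 rightCoord;
varying mediump vec2 aboveCoord;
varying mediump vec2 belowCoord;

uniform sampler2D tex2D;

void main()
{
    // 4-connected Laplacian: centre weighted 4, the four edge
    // neighbours weighted -1 (corners omitted to save varyings)
    mediump vec4 total = texture2D(tex2D, centreCoord) * 4.0;
    total -= texture2D(tex2D, leftCoord);
    total -= texture2D(tex2D, rightCoord);
    total -= texture2D(tex2D, aboveCoord);
    total -= texture2D(tex2D, belowCoord);

    mediump float edge = clamp(dot(abs(total.rgb), vec3(1.0 / 3.0)), 0.0, 1.0);
    gl_FragColor = vec4(vec3(edge), 1.0);
}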

