How do I blur the result of a fragment shader?
I'm working on a shader that generates little clouds based on some mask images. Right now it works well, but I feel the result is missing something, and I thought a blur would be nice. I remember a basic blur algorithm where you convolve the image with a matrix of norm 1 (the larger the matrix, the stronger the blur). The thing is, I don't know how to treat the current output of the shader as an image. So basically I want to keep the shader as is, but make it blurry. Any ideas? How can I integrate the convolution algorithm into the shader? Or does anyone know of another algorithm?
Cg code:
float Luminance( float4 Color ){
    // Weighted sum of the color channels; the weights sum to 1.
    return 0.6 * Color.r + 0.3 * Color.g + 0.1 * Color.b;
}

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv_MainTex : TEXCOORD0;
};

float4 _MainTex_ST;

v2f vert(appdata_base v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv_MainTex = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}

sampler2D _MainTex;
sampler2D _Gradient;
sampler2D _NoiseO;
sampler2D _NoiseT;

float4 frag(v2f IN) : COLOR {
    half4 nO = tex2D (_NoiseO, IN.uv_MainTex);
    half4 nT = tex2D (_NoiseT, IN.uv_MainTex);
    float4 turbulence = nO + nT;
    float lum = Luminance(turbulence);
    half4 c = tex2D (_MainTex, IN.uv_MainTex);
    if (lum >= 1.0f){
        float pos = lum - 1.0f;
        if( pos > 0.98f ) pos = 0.98f;
        if( pos < 0.02f ) pos = 0.02f;
        // Must be float2(pos, pos); a bare (pos, pos) is the comma
        // operator and evaluates to a scalar.
        float2 texCord = float2(pos, pos);
        half4 turb = tex2D (_Gradient, texCord);
        //turb.a = 0.0f;
        return turb;
    }
    else return c;
}
1 Answer
It appears to me that this shader is emulating alpha testing between a backbuffer-like texture (passed via the sampler2D _MainTex) and a generated cloud luminance (represented by float lum) mapped onto a gradient. This makes things trickier, because you can't just fake a blur and let alpha blending take care of the rest; you'll also need to change your alpha testing routine to emulate an alpha blend instead, or restructure your rendering pipeline accordingly. We'll deal with blurring the clouds first.

The first question to ask yourself is whether you need a screen-space blur. Seeing the mechanics of this fragment shader, I would think not -- you want to blur the clouds on the actual model. Given that, it should be sufficient to blur the underlying textures to get a blurred result -- except you're emulating alpha clipping, so you'll get rough edges. The question is what to do about those rough edges. That's where alpha blending comes in.
You can emulate alpha blending with a lerp (linear interpolation) between the turb color and the c color, using the lerp() function (or its equivalent in whichever shader language you're using). You'll probably want something like return lerp(c, turb, 1 - pos); instead of return turb;. Expect to tweak this continually until you understand it and start getting the results you want (for example, you may prefer lerp(c, turb, 1 - pow(pos, 4))). In fact, you can try this last step (just adding the lerp) before modifying your textures, to get an idea of what alpha blending will do for you.
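In context, the end of frag() would change to something like this sketch (the 1 - pos falloff is just the starting point suggested above, to be tuned):

    if (lum >= 1.0f) {
        float pos = lum - 1.0f;
        if (pos > 0.98f) pos = 0.98f;
        if (pos < 0.02f) pos = 0.02f;
        half4 turb = tex2D(_Gradient, float2(pos, pos));
        // Blend toward the underlying color instead of hard-replacing it,
        // emulating an alpha blend over the gradient lookup.
        return lerp(c, turb, 1 - pos);
    }
    else return c;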
Edit: I hadn't considered the case where the _NoiseO and _NoiseT samplers are changing continually, so simply telling you to blur them was minimally useful advice. You can emulate blurring with a multi-tap filter. The simplest way is to take uniformly spaced samples, weight them, and sum them together to produce the final color. (Typically you'll want the weights themselves to sum to 1.)

That being said, you may or may not want to do this on the _NoiseO and _NoiseT textures themselves -- you may want to create a screen-space blur instead, which can look more interesting to a viewer. In that case the same concept applies, but you need to calculate the offset coordinates for each tap and then perform the weighted summation.

For example, if we went with the first case and wanted to sample from the _NoiseO sampler and blur it slightly, we could use this box filter (where all the weights are the same and sum to 1, thus computing an average):
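A minimal sketch of such a box filter in Cg; the 3x3 kernel size and the _BlurOffset uniform (which would hold one tap's spacing in UV space, e.g. 1/textureWidth and 1/textureHeight) are my own assumptions, not part of the original shader:

    float2 _BlurOffset; // hypothetical uniform: distance between taps in UV space

    half4 BlurNoiseO(float2 uv) {
        half4 sum = half4(0, 0, 0, 0);
        // Nine taps in a 3x3 grid around uv, each implicitly weighted 1/9
        // so the weights sum to 1 (a plain average, i.e. a box filter).
        for (int x = -1; x <= 1; x++) {
            for (int y = -1; y <= 1; y++) {
                sum += tex2D(_NoiseO, uv + float2(x, y) * _BlurOffset);
            }
        }
        return sum / 9.0;
    }

In frag() you'd then call BlurNoiseO(IN.uv_MainTex) instead of tex2D(_NoiseO, IN.uv_MainTex), and do the same for _NoiseT.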
Alternatively, if we wanted the entire cloud output to appear blurry, we'd wrap the cloud generation portion in a function and call that instead of tex2D() for the taps. The multi-tap filtering would look like:
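A sketch of that approach, reusing the hypothetical _BlurOffset uniform from above; factoring the cloud logic into a CloudColor() helper, and the five-tap cross pattern, are again my assumptions:

    half4 CloudColor(float2 uv) {
        // The per-pixel cloud logic from frag(), evaluated at an arbitrary uv,
        // including the lerp-based alpha-blend emulation from earlier.
        half4 nO = tex2D(_NoiseO, uv);
        half4 nT = tex2D(_NoiseT, uv);
        float lum = Luminance(nO + nT);
        half4 c = tex2D(_MainTex, uv);
        if (lum >= 1.0f) {
            float pos = clamp(lum - 1.0f, 0.02f, 0.98f);
            half4 turb = tex2D(_Gradient, float2(pos, pos));
            return lerp(c, turb, 1 - pos);
        }
        return c;
    }

    float4 frag(v2f IN) : COLOR {
        // Five equally weighted taps (1/5 each, summing to 1) in a cross pattern.
        half4 sum = CloudColor(IN.uv_MainTex);
        sum += CloudColor(IN.uv_MainTex + float2( _BlurOffset.x, 0));
        sum += CloudColor(IN.uv_MainTex + float2(-_BlurOffset.x, 0));
        sum += CloudColor(IN.uv_MainTex + float2(0,  _BlurOffset.y));
        sum += CloudColor(IN.uv_MainTex + float2(0, -_BlurOffset.y));
        return sum / 5.0;
    }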
However, this will be relatively slow to compute if you make the cloud function too complex. If you're bound by raster operations and texture reads (transferring texture/buffer data to and from memory), chances are this won't matter much unless you use a much more advanced blurring technique (such as successive downsampling through ping-ponged buffers, useful for blurs/filters that are expensive because they have many taps). But performance is an entirely separate consideration from just getting the look you want.