Block filters using fragment shaders
I was following this tutorial using Apple's OpenGL Shader Builder (a tool similar to Nvidia's FX Composer, but simpler).
I could easily apply the filters, but I don't understand whether they work correctly (and if so, how I can improve the output). Take the blur filter as an example: OpenGL itself does some image processing on the textures, so if they are displayed at a higher resolution than the original image, they are already blurred by OpenGL. Second, the blurred part is brighter than the unprocessed part, which I think makes no sense, since the filter only takes pixels from the direct neighborhood. That neighborhood is defined by
float step_w = (1.0/width);
which I don't quite understand: are the pixels indexed using floating-point values?
Blurred Image http://img218.imageshack.us/img218/6468/blurzt.png
Edit: I forgot to attach the exact code I used:
Fragment Shader
// Originally taken from: http://www.ozone3d.net/tutorials/image_filtering_p2.php#part_2
#define KERNEL_SIZE 9
float kernel[KERNEL_SIZE];
uniform sampler2D colorMap;
uniform float width;
uniform float height;
float step_w = (1.0/width);
float step_h = (1.0/height);
// float step_w = 20.0;
// float step_h = 20.0;
vec2 offset[KERNEL_SIZE];
void main(void)
{
    int i = 0;
    vec4 sum = vec4(0.0);

    offset[0] = vec2(-step_w, -step_h);  // south west
    offset[1] = vec2(0.0, -step_h);      // south
    offset[2] = vec2(step_w, -step_h);   // south east
    offset[3] = vec2(-step_w, 0.0);      // west
    offset[4] = vec2(0.0, 0.0);          // center
    offset[5] = vec2(step_w, 0.0);       // east
    offset[6] = vec2(-step_w, step_h);   // north west
    offset[7] = vec2(0.0, step_h);       // north
    offset[8] = vec2(step_w, step_h);    // north east

    // Gaussian kernel
    // 1 2 1
    // 2 4 2
    // 1 2 1
    kernel[0] = 1.0; kernel[1] = 2.0; kernel[2] = 1.0;
    kernel[3] = 2.0; kernel[4] = 4.0; kernel[5] = 2.0;
    kernel[6] = 1.0; kernel[7] = 2.0; kernel[8] = 1.0;

    // TODO make grayscale first
    // Laplacian filter
    // 0 1 0
    // 1 -4 1
    // 0 1 0
    /*
    kernel[0] = 0.0; kernel[1] = 1.0;  kernel[2] = 0.0;
    kernel[3] = 1.0; kernel[4] = -4.0; kernel[5] = 1.0;
    kernel[6] = 0.0; kernel[7] = 1.0;  kernel[8] = 0.0;
    */

    // Mean filter
    // 1 1 1
    // 1 1 1
    // 1 1 1
    /*
    kernel[0] = 1.0; kernel[1] = 1.0; kernel[2] = 1.0;
    kernel[3] = 1.0; kernel[4] = 1.0; kernel[5] = 1.0;
    kernel[6] = 1.0; kernel[7] = 1.0; kernel[8] = 1.0;
    */

    if (gl_TexCoord[0].s < 0.5)
    {
        // For every pixel, sample the neighboring pixels and sum them up
        for (i = 0; i < KERNEL_SIZE; i++)
        {
            // select the pixel at the corresponding offset
            vec4 tmp = texture2D(colorMap, gl_TexCoord[0].st + offset[i]);
            sum += tmp * kernel[i];
        }
        sum /= 16.0;  // sum of the Gaussian kernel weights
    }
    else if (gl_TexCoord[0].s > 0.51)
    {
        sum = texture2D(colorMap, gl_TexCoord[0].xy);
    }
    else  // Draw a red line
    {
        sum = vec4(1.0, 0.0, 0.0, 1.0);
    }

    gl_FragColor = sum;
}
Vertex Shader
void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
2 Answers
Texture coordinates conventionally range from (0,0) (bottom left) to (1,1) (top right), so they are in fact floats. If you have texture coordinates (u,v), the "original" pixel coordinates are computed as (u*textureWidth, v*textureHeight). If the resulting values are not integral numbers, there are different ways to handle that, e.g. taking the floor or ceil of the result in order to make the number integral. However, I think every shading language has a method to access a texture by its "original", i.e. integral, index.
@Nils, thanks for posting this code. I've been trying to figure out a simple way to do a convolution on the GPU for some time now.
I tried your code out and ran into the same dimming problem myself. Here's how I solved it:
1. Use the actual texture width, not the image width. The image usually gets re-sized to a power of 2 when the texture is bound in OpenGL.
2. Normalize the kernel by summing up all values in your kernel and dividing by that sum.
3. Don't convolve the illumination (the fourth component of the sample).
Here's a solution that doesn't have the dimming issue and that also bypasses the need for an offset array for 3x3 kernels.
I've included 8 kernels that worked for me without dimming.
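The answer's code and kernel list aren't included above; below is a minimal sketch of the approach it describes, reusing the question's uniforms: normalize by the kernel sum, keep the alpha channel unfiltered, and index the 3x3 neighbourhood with a nested loop instead of an offset array.
#version 120
// Sketch only: normalized 3x3 convolution without an offset array.
uniform sampler2D colorMap;
uniform float width;   // actual texture width in texels
uniform float height;  // actual texture height in texels

void main(void)
{
    // Gaussian kernel, row-major; any 3x3 kernel works the same way.
    float kernel[9];
    kernel[0] = 1.0; kernel[1] = 2.0; kernel[2] = 1.0;
    kernel[3] = 2.0; kernel[4] = 4.0; kernel[5] = 2.0;
    kernel[6] = 1.0; kernel[7] = 2.0; kernel[8] = 1.0;

    vec2 step = vec2(1.0 / width, 1.0 / height);  // one texel
    vec4 sum = vec4(0.0);
    float weight = 0.0;

    // Walk the 3x3 neighbourhood directly; no offset array needed.
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            float k = kernel[(y + 1) * 3 + (x + 1)];
            vec2 offs = vec2(float(x), float(y)) * step;
            sum += k * texture2D(colorMap, gl_TexCoord[0].st + offs);
            weight += k;
        }
    }

    // Normalize by the kernel sum so brightness is preserved, and keep
    // the original alpha (fourth component) instead of convolving it.
    sum /= weight;
    float alpha = texture2D(colorMap, gl_TexCoord[0].st).a;
    gl_FragColor = vec4(sum.rgb, alpha);
}
Dividing by the accumulated weight instead of a hard-coded 16.0 means the same shader stays correctly normalized when you swap in a different kernel.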