Pixel shader to project a texture onto an arbitrary quadrilateral
I just need to figure out a way, using a pixel shader, to project a texture onto an arbitrary user-defined quadrilateral.
The shader will accept the coordinates of the quadrilateral's four corners:
// Corner positions of the target quad (the defaults form the unit square).
/// <defaultValue>0,0</defaultValue>
float2 TopLeft : register(c0);
/// <defaultValue>1,0</defaultValue>
float2 TopRight : register(c1);
/// <defaultValue>0,1</defaultValue>
float2 BottomLeft : register(c2);
/// <defaultValue>1,1</defaultValue>
float2 BottomRight : register(c3);
I've tried a couple of interpolation algorithms, but couldn't manage to get it right.
Is there any sample you think I might be able to modify to get the desired result?
2 Answers
The issue here is that all modern 3D graphics hardware rasterizes triangles, not quadrilaterals. So if you ask Direct3D or OpenGL to render a quad, the API will internally split the quad into two triangles. This is not usually a problem since all modern rendering is perspective-correct, so the viewer should not be able to tell the difference between one quad and two triangles - the interpolation will be seamless.
The correct way to ask the API to perform this interpolation is to pass it the texture coordinates as per-vertex data, which the API/hardware will interpolate across each pixel. Your pixel shader will then have access to this per-pixel texture coordinate. I assume that when you use the term 'project' you mean simply mapping a rectangular texture onto a quad, and you are not referring to texture projection (which is like a spotlight shining a texture onto surfaces).
The bottom line here is that passing the quad's texture coordinates to the pixel shader is the wrong approach (unless you have some special reason why this is required).
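For concreteness, here is a minimal HLSL sketch of the per-vertex approach described above (the constant, struct, and sampler names are illustrative assumptions, not from the original post): the quad is drawn as two triangles, each vertex carries its own texture coordinate, the vertex shader passes it through, and the rasterizer interpolates it before the pixel shader samples the texture.

// Minimal sketch of the per-vertex approach; names are illustrative assumptions.
float4x4 WorldViewProjection : register(c0);

sampler2D QuadTexture : register(s0);

struct VS_INPUT
{
    float3 Position : POSITION;   // corner of the quad (two triangles share the diagonal)
    float2 TexCoord : TEXCOORD0;  // (0,0), (1,0), (0,1), (1,1) at the four corners
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

VS_OUTPUT VS(VS_INPUT input)
{
    VS_OUTPUT output;
    output.Position = mul(float4(input.Position, 1.0), WorldViewProjection);
    output.TexCoord = input.TexCoord;   // just passed through
    return output;
}

float4 PS(float2 texCoord : TEXCOORD0) : COLOR
{
    // texCoord arrives here already interpolated per pixel; no quad math is needed.
    return tex2D(QuadTexture, texCoord);
}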
There's a nice paper here describing your options (or this .ppt). Basically you need to define some barycentric coordinates across the quad, then interpolate fragment values as BC-weighted sums of the given vertex values.
Sorry, don't know of any code; the lack of direct support for quads on modern triangle-oriented HW (see voidstar69's answer) means they've rather gone out of fashion.
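If the mapping really does have to happen inside the pixel shader, with only the four corner constants from the question available, one concrete instance of the weighted-sum idea is bilinear coordinates: invert the quad's bilinear map at each pixel to recover (u, v), then sample the texture there. Below is a sketch along the lines of the well-known inverse-bilinear derivation; it assumes the source texture is bound to sampler register s0 and that the corners are expressed in the same normalized 0..1 space as the incoming texture coordinate, which are assumptions rather than details from the original post.

// Sketch: inverse bilinear mapping in the pixel shader. Assumes the corner constants
// from the question, a source texture on s0, and corners in the same 0..1 space as the
// incoming texture coordinate (assumptions, not from the original post).
// The branches below want ps_3_0 or later.
float2 TopLeft     : register(c0);
float2 TopRight    : register(c1);
float2 BottomLeft  : register(c2);
float2 BottomRight : register(c3);

sampler2D Input : register(s0);

float Cross2(float2 a, float2 b) { return a.x * b.y - a.y * b.x; }

// Solve p = A + u*(B - A) + v*(D - A) + u*v*(A - B + C - D) for (u, v),
// where A, B, C, D = TopLeft, TopRight, BottomRight, BottomLeft.
float2 InverseBilinear(float2 p, float2 A, float2 B, float2 C, float2 D)
{
    float2 e = B - A;
    float2 f = D - A;
    float2 g = A - B + C - D;
    float2 h = p - A;

    float k2 = Cross2(g, f);
    float k1 = Cross2(e, f) + Cross2(h, g);
    float k0 = Cross2(h, e);

    if (abs(k2) < 1e-5)   // opposite edges (nearly) parallel: the quadratic degenerates
        return float2((h.x * k1 + f.x * k0) / (e.x * k1 - g.x * k0), -k0 / k1);

    float w = k1 * k1 - 4.0 * k0 * k2;
    if (w < 0.0)
        return float2(-1.0, -1.0);            // no real solution: pixel is outside the quad
    w = sqrt(w);

    float v = (-k1 - w) / (2.0 * k2);
    float u = (h.x - f.x * v) / (e.x + g.x * v);
    if (u < 0.0 || u > 1.0 || v < 0.0 || v > 1.0)
    {
        v = (-k1 + w) / (2.0 * k2);           // try the other root of the quadratic
        u = (h.x - f.x * v) / (e.x + g.x * v);
    }
    return float2(u, v);
}

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float2 st = InverseBilinear(uv, TopLeft, TopRight, BottomRight, BottomLeft);
    if (any(st < 0.0) || any(st > 1.0))
        return float4(0, 0, 0, 0);            // outside the quad: transparent
    return tex2D(Input, st);
}

The bilinear weights (1-u)(1-v), u(1-v), uv, (1-u)v recovered this way are one particular choice of generalized barycentric coordinates for a quad, so this is a special case of the scheme the paper describes, not a replacement for it.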