Compared with SV_Position

Published on 2025-01-22 08:48:38


I have been trying to obtain the Z position of a vertex in the clip plane, i.e. its location in the depth buffer, but I have been observing weird behaviour affecting the result of UnityObjectToClipPos.

I have written a surface shader that colors vertices based on the depth. Here is the relevant code:

Tags { "RenderType"="Opaque" }
LOD 200
Cull off

CGPROGRAM
#pragma target 3.0
#pragma surface surf StandardSpecular alphatest:_Cutoff addshadow vertex:vert
#pragma debug

struct Input
{
    float depth;
};

float posClipZ(float3 vertex)
{
    float4 clipPos = UnityObjectToClipPos(vertex);
    float depth = clipPos.z / clipPos.w;
#if !defined(UNITY_REVERSED_Z)
    depth = depth * 0.5 + 0.5;
#endif
    return depth;
}

void vert(inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.depth = posClipZ(v.vertex);
}

void surf(Input IN, inout SurfaceOutputStandardSpecular o)
{
    o.Albedo.x = clamp(IN.depth, 0, 1);
    o.Alpha = 1;
}
ENDCG

Based on my understanding, UnityObjectToClipPos should return the position of the vertex in the camera's clip coordinates, and the Z coordinate should be (when converted from homogeneous coordinates) between 0 and 1. However, this is not what I am observing at all:

using UnityObjectToClipPos

This shows the camera intersecting a sphere. Notice that vertices near or behind the camera near clip plane actually have negative depth (I've checked that with other conversions to albedo color). It also seems that clipPos.z is actually constant most of the time, and only clipPos.w is changing.
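The observations are actually consistent with a reversed-Z, D3D-style projection, which Unity uses on modern graphics APIs: there clip.z is an affine function of eye depth that stays close to `near` over most of the view range, while clip.w equals the eye depth and grows linearly. The numeric sketch below assumes that matrix convention (the near/far values are hypothetical, not taken from the shader above):

```python
# Reversed-Z, D3D-style perspective projection (depth range 1 at near, 0 at
# far), assumed to match what Unity uses on modern APIs -- for illustration.
near, far = 0.3, 1000.0

def clip_zw(eye_z):
    """Return (clip.z, clip.w) for a point at eye-space depth eye_z."""
    clip_z = near * (far - eye_z) / (far - near)  # nearly constant when eye_z << far
    clip_w = eye_z                                # grows linearly with depth
    return clip_z, clip_w

# Note: eye_z < near gives ndc z > 1, and eye_z < 0 (behind the camera)
# gives a negative ndc z -- matching the negative depths observed above.
for z in (-1.0, 0.1, 0.3, 1.0, 10.0, 100.0, 1000.0):
    cz, cw = clip_zw(z)
    print(f"eye z={z:7.1f}  clip.z={cz:.5f}  clip.w={cw:8.1f}  ndc z={cz/cw:.5f}")
```

This reproduces both effects: clip.z barely moves between eye depths of 1 and 100 while clip.w changes by two orders of magnitude, and vertices at or behind the camera produce out-of-range or negative z/w.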

I've managed to hijack the generated fragment shader to add a SV_Position parameter, and this is what I actually expected to see in the first place:

using SV_Position

However, I don't want to use SV_Position, as I want to be able to calculate the depth in the vertex shader from other positions.

It seems like UnityObjectToClipPos is not suited for the task, as the depth obtained that way is not even monotonic.

So, how can I mimic the second image via depth calculated in the vertex shader? It should also be perfect regarding interpolation, so I suppose I will have to use UnityObjectToViewPos first in the vertex shader to get the linear depth, then scale it in the fragment shader accordingly.
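The interpolation concern is justified: surface-shader Input members are interpolated perspective-correctly, which reproduces eye-space depth exactly (eye z is linear across the triangle), but SV_Position's z/w is interpolated linearly in screen space (noperspective). The sketch below, using a hypothetical edge between two vertices and an assumed reversed-Z depth encoding, shows that perspective-correct interpolation of a per-vertex NDC depth does not reproduce the real buffer value, while screen-linear interpolation does:

```python
near, far = 0.3, 100.0  # hypothetical clip planes for illustration

def ndc_depth(eye_z):
    """Reversed-Z NDC depth (clip.z / clip.w) -- assumed convention."""
    return near * (far - eye_z) / ((far - near) * eye_z)

def persp_interp(a0, a1, w0, w1, t):
    """Perspective-correct interpolation of attribute a at screen parameter t."""
    return ((1 - t) * a0 / w0 + t * a1 / w1) / ((1 - t) / w0 + t / w1)

z0, z1, t = 1.0, 11.0, 0.5  # edge endpoints (eye depth), screen-space midpoint

true_z = persp_interp(z0, z1, z0, z1, t)  # eye depth at the fragment (exact)
true_d = ndc_depth(true_z)                # what the depth buffer really holds
pc_d = persp_interp(ndc_depth(z0), ndc_depth(z1), z0, z1, t)  # Input-style
sl_d = (1 - t) * ndc_depth(z0) + t * ndc_depth(z1)            # SV_Position-style
print(f"eye z={true_z:.5f}  true d={true_d:.5f}  "
      f"perspective-correct d={pc_d:.5f}  screen-linear d={sl_d:.5f}")
```

Perspective-correct interpolation recovers the fragment's eye depth exactly, but applied to NDC depth it overshoots noticeably; the screen-linear result matches the true buffer value, which is why passing eye depth through the interpolator and converting per-fragment is the robust approach.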


Comments (1)

一抹苦笑 2025-01-29 08:48:38


I am not completely sure why UnityObjectToClipPos didn't return anything useful, but it wasn't the right tool for the task anyway. The reason is that a vertex's depth is not linear in the depth buffer, so the actual distance from the camera has to be passed through first so that the depth of all the pixels between the vertices interpolates properly:

float posClipZ(float3 vertex)
{
    float3 viewPos = UnityObjectToViewPos(vertex);
    return -viewPos.z;
}

Once the fragment/surface shader is executed, LinearEyeDepth seems to be the proper function to retrieve the expected depth value:

void surf(Input IN, inout SurfaceOutputStandardSpecular o)
{
    o.Albedo.x = clamp(LinearEyeDepth(IN.depth), 0, 1);
    o.Alpha = 1;
}

Once again it is important not to use LinearEyeDepth inside the vertex shader, since the values will be interpolated incorrectly.
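For reference, LinearEyeDepth in UnityCG.cginc is defined as `1.0 / (_ZBufferParams.z * z + _ZBufferParams.w)`, and it inverts the non-linear depth-buffer encoding back to eye-space distance. The sketch below mirrors that function in Python, with the reversed-Z `_ZBufferParams` component values assumed from Unity's shader-variable documentation:

```python
near, far = 0.3, 100.0  # hypothetical clip planes

# _ZBufferParams.z and .w on reversed-Z platforms (assumed from Unity docs):
zbp_z = (far / near - 1.0) / far
zbp_w = 1.0 / far

def encode_depth(eye_z):
    """Reversed-Z depth-buffer value for an eye-space depth."""
    return near * (far - eye_z) / ((far - near) * eye_z)

def linear_eye_depth(d):
    """Mirror of UnityCG.cginc's LinearEyeDepth for reversed-Z."""
    return 1.0 / (zbp_z * d + zbp_w)

# The decode exactly inverts the encode across the whole depth range.
for z in (0.3, 1.0, 10.0, 100.0):
    print(f"eye z={z:6.1f}  buffer d={encode_depth(z):.5f}  "
          f"decoded={linear_eye_depth(encode_depth(z)):.5f}")
```

The round trip is exact, which is why converting depth per-fragment (rather than per-vertex) yields values that agree with the hardware depth buffer.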
