Getting vertices from the back buffer in XNA (HLSL)

Published 2024-08-11 09:01:51


Hello, and sorry for the obscure title :} I'll try to explain as best I can.

First of all, I am new to HLSL, but I understand the pipeline and related concepts. What I'm trying to do is use the GPU for general computation (GPGPU).

What I don't know is: how can I read* the vertices (that have been transformed by the vertex shader) back into my XNA application? I've read something about using the GPU's texture memory, but I can't find anything solid...

Thanks in advance for any info/tips! :-)

*Not sure if this is possible, because of the rasterizer and the pixel shader (if any); I mean, in the end it's all about pixels, right?


Comments (2)

雅心素梦 2024-08-18 09:01:51


As far as I know this isn't generally possible.

What exactly are you trying to do? There is probably another solution.

EDIT: Taking the comment into account. If all you want to do is general vector calculations on the GPU, try doing them in the pixel shader rather than the vertex shader.

So for example, say you want to take the dot product of two vectors. First we need to write the data into a texture:

// Data must be in the 0-1 range before writing into the texture,
// so you'll need to scale everything appropriately.
Vector4 a = new Vector4(1, 0, 1, 1);
Vector4 b = new Vector4(0, 1, 0, 0);

// Use SurfaceFormat.Vector4 so the texture format matches SetData<Vector4>.
Texture2D dataTexture = new Texture2D(device, 2, 1, 1, TextureUsage.None, SurfaceFormat.Vector4);
dataTexture.SetData<Vector4>(new Vector4[] { a, b });

So now we've got a 2x1 texture with the data in it. Render the texture using SpriteBatch and an effect:

Effect gpgpu = content.Load<Effect>("gpgpu"); // the effect defined below
gpgpu.CurrentTechnique = gpgpu.Techniques["DotProduct"];
gpgpu.Begin();
gpgpu.CurrentTechnique.Passes[0].Begin();
// Immediate sort mode makes the draw happen while the pass is active.
spriteBatch.Begin(SpriteBlendMode.None, SpriteSortMode.Immediate, SaveStateMode.None);
spriteBatch.Draw(dataTexture, new Rectangle(0, 0, 2, 1), Color.White);
spriteBatch.End();
gpgpu.CurrentTechnique.Passes[0].End();
gpgpu.End();
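One detail worth making explicit: SpriteBatch draws go to the current render target, so to read the results back you need to render into a RenderTarget2D rather than the back buffer. A minimal sketch, assuming the XNA 3.1 API and that `device` is your GraphicsDevice (`resultTarget` is a name introduced here for illustration):

```
// Create a 2x1 floating-point render target to receive the shader output.
// SurfaceFormat.Vector4 keeps full float precision (hardware support assumed).
RenderTarget2D resultTarget = new RenderTarget2D(device, 2, 1, 1, SurfaceFormat.Vector4);

device.SetRenderTarget(0, resultTarget);   // draw into the target...
// ... run the SpriteBatch/effect draw shown above here ...
device.SetRenderTarget(0, null);           // ...then resolve it

// The resolved texture now holds the shader's output pixels.
Texture2D results = resultTarget.GetTexture();
```

From here, `GetData` on the resolved texture behaves like the readback at the end of this answer.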

All we need now is the gpgpu effect used above. That's just a standard post-processing shader, looking something like this:

// SpriteBatch binds the drawn texture to sampler register s0.
sampler2D DataSampler : register(s0) = sampler_state
{
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 A = tex2D(DataSampler, texCoord);
    float4 B = tex2D(DataSampler, texCoord + float2(0.5, 0)); // 0.5 is the size of 1 pixel: 1 / textureWidth
    float d = dot(A, B);
    return float4(d, 0, 0, 0);
}

technique DotProduct
{
    pass Pass1
    {
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}

This will write the dot product of A and B into the first pixel, and the dot product of B with itself into the second pixel (the clamped sample reads B again). Then you can read these answers back (ignoring the useless ones):

// Assumes the draw above went into a 2x1 RenderTarget2D called renderTarget
// (SurfaceFormat.Vector4); the results are not written back into dataTexture.
Vector4[] v = new Vector4[2];
renderTarget.GetTexture().GetData(v);
float dotOfAandB = v[0].X; // the result is in the red channel
float dotOfBandB = v[1].X;

Tada!
There are a whole load of little issues with trying to do this on a larger scale; comment here and I'll try to help you with anything you run into :)
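To sketch one way of scaling this up, under the same assumptions as above (the names `pairCount`, `aVectors`, and `bVectors` are hypothetical): pack N vector pairs side by side in a 2N x 1 texture, so each even output pixel holds one dot product.

```
// Pack N vector pairs into one 2N x 1 texture; the shader's one-pixel
// texcoord offset then becomes 1.0f / (2 * pairCount) instead of 0.5.
Vector4[] pairs = new Vector4[2 * pairCount];
for (int i = 0; i < pairCount; i++)
{
    pairs[2 * i]     = aVectors[i];
    pairs[2 * i + 1] = bVectors[i];
}
Texture2D batchTexture = new Texture2D(device, 2 * pairCount, 1, 1, TextureUsage.None, SurfaceFormat.Vector4);
batchTexture.SetData<Vector4>(pairs);
```

After rendering, the wanted results sit at the even pixel indices; the odd ones mix neighbouring pairs and can be ignored, just like the useless values in the two-vector example.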

为你拒绝所有暧昧 2024-08-18 09:01:51

If you turn on the "Stream Output" stage, the outputs of your vertex shader will be stored in a memory buffer; later these values can be read back by the GPU or CPU as desired. (Note that stream output is a Direct3D 10 feature, and XNA, which targets Direct3D 9, does not expose it.)
