Multiple render targets not saving data
I'm using SlimDX, targeting DirectX 11 with shader model 4. I have a pixel shader "preProc" which processes my vertices and saves three textures of data: one for per-pixel normals, one for per-pixel position data, and one for color and depth (color takes up RGB and depth takes the alpha channel).
I then later use these textures in a postprocessing shader to implement Screen Space Ambient Occlusion; however, it seems none of the data is getting saved by the first shader.
Here's my pixel shader:
PS_OUT PS( PS_IN input )
{
    PS_OUT output;
    output.col = float4(0, 0, 0, 0);      // colour + depth target
    output.norm = float4(input.norm, 1);  // per-pixel normal
    output.pos = input.pos;               // per-pixel position
    return output;
}
which outputs the following struct:
struct PS_OUT
{
    float4 col  : SV_TARGET0;
    float4 norm : SV_TARGET1;
    float4 pos  : SV_TARGET2;
};
and takes the following struct for input:
struct PS_IN
{
    float4 pos  : SV_POSITION;
    float2 tex  : TEXCOORD0;
    float3 norm : TEXCOORD1;
};
However, in my postprocessing shader:
Texture2D renderTex    : register(t1);
Texture2D normalTex    : register(t2);
Texture2D positionTex  : register(t3);
Texture2D randomTex    : register(t4);
SamplerState samLinear : register(s0);

float4 PS(PS_IN input) : SV_Target
{
    return float4(getCol(input.tex));
}
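(getCol itself isn't shown in the question; judging from how it is called, it is presumably just a thin wrapper around renderTex, along the lines of this hypothetical sketch, which may differ from the real body:)

// Hypothetical reconstruction of getCol -- not shown in the question.
// Assumes the renderTex / samLinear declarations above.
float4 getCol(float2 tex)
{
    return renderTex.Sample(samLinear, tex);
}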
It simply outputs a light-blue screen (the colour I reset my render targets to at the start of each frame). getCol has been tested to work and returns a colour from the renderTex texture when only dealing with one render target. If I change the pixel shader to instead sample the randomTex texture (which my code previously loaded from a file, and which is not a render target), everything is rendered fine, so I am confident it is not my postprocessing shader.
In case it's my SlimDX code that's failing, here's what I do:
Creating my textures, ShaderResourceViews and RenderTargetViews:
Texture2DDescription textureDescription = new Texture2DDescription()
{
    Width = texWidth,
    Height = texHeight,
    MipLevels = 1,
    ArraySize = 3,
    Format = SlimDX.DXGI.Format.R32G32B32A32_Float,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Default,
};
texture = new Texture2D(device, textureDescription);
renderTargetViews = new RenderTargetView[3];
shaderResourceViews = new ShaderResourceView[3];
for (int i = 0; i < 3; i++)
{
    RenderTargetViewDescription renderTargetViewDescription = new RenderTargetViewDescription()
    {
        Format = textureDescription.Format,
        Dimension = RenderTargetViewDimension.Texture2D,
        MipSlice = 0,
    };
    renderTargetViews[i] = new RenderTargetView(device, texture, renderTargetViewDescription);
    ShaderResourceViewDescription shaderResourceViewDescription = new ShaderResourceViewDescription()
    {
        Format = textureDescription.Format,
        Dimension = ShaderResourceViewDimension.Texture2D,
        MostDetailedMip = 0,
        MipLevels = 1
    };
    shaderResourceViews[i] = new ShaderResourceView(device, texture, shaderResourceViewDescription);
}
Rendering to my multiple render targets:
private void renderToTexture(Shader shader)
{
    //set the vertex and pixel shaders
    context.VertexShader.Set(shader.VertexShader);
    context.PixelShader.Set(shader.PixelShader);
    //send texture data and a linear sampler to the shader
    context.PixelShader.SetShaderResource(texture, 0);
    context.PixelShader.SetSampler(samplerState, 0);
    //set the input assembler
    SetInputAssembler(shader);
    //reset the camera's constant buffer
    camera.ResetConstantBuffer();
    //set the render targets to the textures we will render to
    context.OutputMerger.SetTargets(depthStencilView, renderTargetViews);
    //clear the render targets and depth stencil
    foreach (RenderTargetView view in renderTargetViews)
    {
        context.ClearRenderTargetView(view, color);
    }
    context.ClearDepthStencilView(depthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
    //draw the scene
    DrawScene();
}
and then the function where I render my postprocessing shader to the screen:
private void renderTexture(Shader shader)
{
    //get a single quad to be the screen we render
    Mesh mesh = CreateScreenFace();
    //set vertex and pixel shaders
    context.VertexShader.Set(shader.VertexShader);
    context.PixelShader.Set(shader.PixelShader);
    //set the input assembler
    SetInputAssembler(shader);
    //point the render target to the screen
    context.OutputMerger.SetTargets(depthStencil, renderTarget);
    //send the rendered textures and a linear sampler to the shader
    context.PixelShader.SetShaderResource(shaderResourceViews[0], 1);
    context.PixelShader.SetShaderResource(shaderResourceViews[1], 2);
    context.PixelShader.SetShaderResource(shaderResourceViews[2], 3);
    context.PixelShader.SetShaderResource(random, 4);
    context.PixelShader.SetSampler(samplerState, 0);
    //clear the render target and depth stencil
    context.ClearRenderTargetView(renderTarget, new Color4(0.52734375f, 0.8046875f, 0.9765625f));
    context.ClearDepthStencilView(depthStencil, DepthStencilClearFlags.Depth, 1, 0);
    //set the vertex and index buffers from the quad
    context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer, Marshal.SizeOf(typeof(Vertex)), 0));
    context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UInt, 0);
    //draw the quad
    context.DrawIndexed(mesh.indices, 0, 0);
    //dispose of the buffers
    mesh.VertexBuffer.Dispose();
    mesh.IndexBuffer.Dispose();
}
EDIT: I've added the PIX function call output for a single frame of the current run:
Frame 40
//setup
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B66190, 0x0028F068)
<0x06BDA1D8> ID3D11DeviceContext::ClearDepthStencilView(0x06B66138, 1, 1.000f, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0028F010, 0x0028EFF8, 0x0028F00C --> 0x06BF8EE0)
CreateObject(D3D11 Buffer, 0x06BF8EE0)
<0x06BDA1D8> ID3D11DeviceContext::PSSetConstantBuffers(0, 1, 0x0028F084 --> { 0x06BF8EE0 })
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F8DEB58, 0x0F8DEB40, 0x0F8DEB54 --> 0x06BF8F68)
CreateObject(D3D11 Buffer, 0x06BF8F68)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F70EAD8, 0x0F70EAC0, 0x0F70EAD4 --> 0x06BF8FF0)
CreateObject(D3D11 Buffer, 0x06BF8FF0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FAAE9A8, 0x0FAAE990, 0x0FAAE9A4 --> 0x06BF9078)
CreateObject(D3D11 Buffer, 0x06BF9078)
<0x0059FF78> ID3D11Device::GetImmediateContext(0x06BDA1D8 --> 0x5BA8A8D8)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F8DEB58, 0x0F8DEB40, 0x0F8DEB54 --> 0x06BF9100)
CreateObject(D3D11 Buffer, 0x06BF9100)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F70EAD8, 0x0F70EAC0, 0x0F70EAD4 --> 0x06BF9188)
CreateObject(D3D11 Buffer, 0x06BF9188)
<0x06BDA1D8> ID3D11DeviceContext::Release()
<0x06BDA1D8> ID3D11DeviceContext::UpdateSubresource(0x06B59270, 0, NULL, 0x06287FA0, 0, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FAAE9A8, 0x0FAAE990, 0x0FAAE9A4 --> 0x06BF9210)
CreateObject(D3D11 Buffer, 0x06BF9210)
<0x06BDA1D8> ID3D11DeviceContext::VSSetShader(0x06B66298, NULL, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FC0E978, 0x0FC0E960, 0x0FC0E974 --> 0x06BF9298)
CreateObject(D3D11 Buffer, 0x06BF9298)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FE8EDE8, 0x0FE8EDD0, 0x0FE8EDE4 --> 0x06BF9320)
CreateObject(D3D11 Buffer, 0x06BF9320)
<0x06BDA1D8> ID3D11DeviceContext::PSSetShader(0x06B666F8, NULL, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FC0E978, 0x0FC0E960, 0x0FC0E974 --> 0x06BF93A8)
CreateObject(D3D11 Buffer, 0x06BF93A8)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FE8EDE8, 0x0FE8EDD0, 0x0FE8EDE4 --> 0x06BF9430)
CreateObject(D3D11 Buffer, 0x06BF9430)
<0x0059FF78> ID3D11Device::CreateInputLayout(0x0028EBE0, 3, 0x06286CB8, 152, 0x0028EBD8 --> 0x06BF9D68)
CreateObject(D3D11 Input Layout, 0x06BF9D68)
<0x06BDA1D8> ID3D11DeviceContext::IASetInputLayout(0x06BF9D68)
<0x06BDA1D8> ID3D11DeviceContext::IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
<0x0059FF78> ID3D11Device::GetImmediateContext(0x06BDA1D8 --> 0x5BA8A8D8)
<0x06BDA1D8> ID3D11DeviceContext::Release()
<0x06BDA1D8> ID3D11DeviceContext::VSSetConstantBuffers(0, 1, 0x0028F024 --> { 0x06B59270 })
<0x06BDA1D8> ID3D11DeviceContext::OMSetRenderTargets(3, 0x0028F004 --> { 0x06B65708, 0x06B657B8, 0x06B582E0 }, 0x06B66138)
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B65708, 0x0028EFEC)
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B657B8, 0x0028EFEC)
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B582E0, 0x0028EFEC)
<0x06BDA1D8> ID3D11DeviceContext::ClearDepthStencilView(0x06B66138, 1, 1.000f, 0)
//draw scene for preproc shader (this should output the three render targets)
//DRAW CALLS HIDDEN
<0x0059FF78> ID3D11Device::CreateBuffer(0x0028EE04, 0x0028EDEC, 0x0028EE00 --> 0x06BF94B8)
CreateObject(D3D11 Buffer, 0x06BF94B8)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0028EE04, 0x0028EDEC, 0x0028EE00 --> 0x06BF9540)
CreateObject(D3D11 Buffer, 0x06BF9540)
<0x06BDA1D8> ID3D11DeviceContext::VSSetShader(0x06B66BB8, NULL, 0)
<0x06BDA1D8> ID3D11DeviceContext::PSSetShader(0x06B66E50, NULL, 0)
<0x0059FF78> ID3D11Device::CreateInputLayout(0x0028EB64, 3, 0x05E988E0, 120, 0x0028EB5C --> 0x06BF9E28)
CreateObject(D3D11 Input Layout, 0x06BF9E28)
<0x06BDA1D8> ID3D11DeviceContext::IASetInputLayout(0x06BF9E28)
<0x06BDA1D8> ID3D11DeviceContext::IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
<0x06BDA1D8> ID3D11DeviceContext::OMSetRenderTargets(1, 0x0028EFC0 --> { 0x06B66190 }, 0x06B66138)
<0x06BDA1D8> ID3D11DeviceContext::PSSetShaderResources(1, 3, 0x0028EF3C --> { 0x06B65760, 0x06B58288, 0x06B58338 })
<0x06BDA1D8> ID3D11DeviceContext::PSSetShaderResources(4, 1, 0x0028EFC0 --> { 0x06B66FA0 })
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B66190, 0x0028EFA4)
<0x06BDA1D8> ID3D11DeviceContext::ClearDepthStencilView(0x06B66138, 1, 1.000f, 0)
<0x06BDA1D8> ID3D11DeviceContext::IASetVertexBuffers(0, 1, 0x0028EFAC --> { 0x06BF94B8 }, 0x0028EFB0, 0x0028EFB4)
<0x06BDA1D8> ID3D11DeviceContext::IASetIndexBuffer(0x06BF9540, DXGI_FORMAT_R16_UINT, 0)
//draw quad for post proc shader. This shader takes the three textures in, as well as a random texture, which is added in the second PSSetShaderResources call. The random texture outputs fine.
<0x06BDA1D8> ID3D11DeviceContext::DrawIndexed(6, 0, 0)
<0x06BF94B8> ID3D11Buffer::Release()
<0x06BF9540> ID3D11Buffer::Release()
<0x06B65B00> IDXGISwapChain::Present(0, 0)
EDIT2: I've been doing some reading, and perhaps I need to unbind the textures as render targets after the preProc pass before I pass them in as ShaderResourceViews to my postProcess shader. I assumed calling context.OutputMerger.SetTargets() would unbind all of the currently assigned render targets and then bind only the render targets specified in the function's parameters. If that isn't the case (I can't yet be sure whether it is or isn't), how would I go about unassigning render targets in SlimDX?
EDIT3: Ah, according to this MSDN page, calling OutputMerger.SetRenderTargets() "overrides all bounded render targets and the depth stencil target regardless of the number of render targets in ppRenderTargetViews", so all of my render targets are getting unbound automatically when I tell the OutputMerger to render to the screen. That leaves me back at square one.
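(For completeness, if explicit unbinding ever does turn out to be necessary, a minimal SlimDX sketch would look something like the following. This is an assumption on my part: it reuses the `context` and the slot numbers from the snippets above, and I have not verified it against every SlimDX release, though setting a null ShaderResourceView is the usual way to clear an input slot:)

// Minimal sketch: explicitly unbinding resources in SlimDX (assumed
// API usage, not taken from the question's code). Slots 1-3 match the
// PSSetShaderResources calls in renderTexture above.
for (int slot = 1; slot <= 3; slot++)
{
    context.PixelShader.SetShaderResource(null, slot);
}
// Bind a single null render target to detach the MRT textures
// from the output merger.
context.OutputMerger.SetTargets((RenderTargetView)null);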
ANSWER: Fixed it by discovering just how silly I am.
When I created my render targets I created a single Texture2DArray, but I was treating it like an array of separate Texture2D objects rather than one object. I have since altered my code to use an array of Texture2D objects, and it works very well.
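For anyone who hits the same problem, the corrected setup looks roughly like this: three independent Texture2D objects (ArraySize = 1), each with its own views. This is a sketch reconstructed from the description above, reusing the question's variable names rather than the author's exact final code:

// Sketch of the fix: an array of three separate Texture2D objects
// instead of a single Texture2DArray resource.
Texture2DDescription textureDescription = new Texture2DDescription()
{
    Width = texWidth,
    Height = texHeight,
    MipLevels = 1,
    ArraySize = 1,   // one slice per texture -- the key change
    Format = SlimDX.DXGI.Format.R32G32B32A32_Float,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Default,
};
textures = new Texture2D[3];
renderTargetViews = new RenderTargetView[3];
shaderResourceViews = new ShaderResourceView[3];
for (int i = 0; i < 3; i++)
{
    textures[i] = new Texture2D(device, textureDescription);
    // For a non-array texture the default views are sufficient.
    renderTargetViews[i] = new RenderTargetView(device, textures[i]);
    shaderResourceViews[i] = new ShaderResourceView(device, textures[i]);
}

Alternatively, the single Texture2DArray could presumably have been kept by creating per-slice views instead (RenderTargetViewDimension.Texture2DArray with FirstArraySlice = i and ArraySize = 1, plus the matching Texture2DArray settings on each ShaderResourceViewDescription); as originally written, the three Texture2D-dimension views all referenced the same data rather than three distinct slices.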