Multiple textures not displaying

Published 2024-11-25 13:33:42

I'm new to DirectX 10. I'm developing a Direct3D 10 application that blends two textures, each filled manually from user input. The current implementation is:

  1. Create two empty textures with usage D3D10_USAGE_STAGING.
  2. Create two shader resource views to bind to the pixel shader, since the shader needs them.
  3. Copy the textures to GPU memory by calling CopyResource.

The problem is that I can only see the first texture, not the second. It looks to me as if the binding doesn't work for the second texture.

I don't know what's wrong with it. Can anyone here shed some light on this?

Thanks, Marshall

The COverlayTexture class is responsible for creating the texture, creating the shader resource view, filling the texture with a bitmap mapped from another application, and binding the resource view to the pixel shader.

HRESULT COverlayTexture::Initialize(VOID)
{
    D3D10_TEXTURE2D_DESC texDesStaging;
    texDesStaging.Width = m_width;
    texDesStaging.Height = m_height;
    texDesStaging.Usage = D3D10_USAGE_STAGING;
    texDesStaging.BindFlags = 0;
    texDesStaging.ArraySize = 1;
    texDesStaging.MipLevels = 1;
    texDesStaging.SampleDesc.Count = 1;
    texDesStaging.SampleDesc.Quality = 0;
    texDesStaging.MiscFlags = 0;
    texDesStaging.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    texDesStaging.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
    HR( m_Device->CreateTexture2D( &texDesStaging, NULL, &m_pStagingResource ) );

    D3D10_TEXTURE2D_DESC texDesShader;
    texDesShader.Width = m_width;
    texDesShader.Height = m_height;
    texDesShader.BindFlags = D3D10_BIND_SHADER_RESOURCE;
    texDesShader.ArraySize = 1;
    texDesShader.MipLevels = 1;
    texDesShader.SampleDesc.Count = 1;
    texDesShader.SampleDesc.Quality = 0;
    texDesShader.MiscFlags = 0;
    texDesShader.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    texDesShader.Usage = D3D10_USAGE_DEFAULT;
    texDesShader.CPUAccessFlags = 0;
    HR( m_Device->CreateTexture2D( &texDesShader, NULL, &m_pShaderResource ) );

    D3D10_SHADER_RESOURCE_VIEW_DESC viewDesc;
    ZeroMemory( &viewDesc, sizeof( viewDesc ) );
    viewDesc.Format = texDesShader.Format;
    viewDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    viewDesc.Texture2D.MipLevels = texDesShader.MipLevels;
    HR( m_Device->CreateShaderResourceView( m_pShaderResource, &viewDesc, &m_pShaderResourceView ) );

    return S_OK;
}

HRESULT COverlayTexture::Render(VOID)
{
    // PSSetShaderResources expects a pointer to an array of view pointers
    m_Device->PSSetShaderResources(0, 1, &m_pShaderResourceView);

    D3D10_MAPPED_TEXTURE2D lockedRect;
    m_pStagingResource->Map( 0, D3D10_MAP_WRITE, 0, &lockedRect );

    // Fill in the texture with the bitmap mapped from shared memory view

    m_pStagingResource->Unmap(0);

    m_Device->CopyResource(m_pShaderResource, m_pStagingResource);

    return S_OK;
}
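The fill step elided by the comment above has one common pitfall: rows of a mapped texture are `RowPitch` bytes apart, which is usually wider than `width * 4`, so a single memcpy of the whole bitmap shears the image. A minimal pitch-aware copy might look like the sketch below (the function name and parameters are illustrative, not from the original code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a tightly packed 32-bit BGRA bitmap into a mapped texture whose rows
// are rowPitch bytes apart. rowPitch corresponds to lockedRect.RowPitch and
// may be larger than width * 4 due to driver alignment.
void CopyBitmapToMappedTexture(void* pMappedData, size_t rowPitch,
                               const uint8_t* bitmap, size_t width, size_t height)
{
    uint8_t* dst = static_cast<uint8_t*>(pMappedData);
    const size_t rowBytes = width * 4; // 4 bytes per BGRA pixel
    for (size_t y = 0; y < height; ++y)
        std::memcpy(dst + y * rowPitch, bitmap + y * rowBytes, rowBytes);
}
```

In the code above, `pMappedData` would be `lockedRect.pData` and `rowPitch` would be `lockedRect.RowPitch`.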

I use two instances of the COverlayTexture class. Each fills its own bitmap into its own texture, and they render in the sequence COverlayTexture[1], then COverlayTexture[0]:

COverlayTexture* pOverlayTexture[2];

for( int i = 1; i >= 0; i-- )
{
    pOverlayTexture[i]->Render();
}

The blend state in the FX file is defined as follows:

BlendState AlphaBlend
{
    AlphaToCoverageEnable = FALSE;
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    BlendOpAlpha = ADD;
    SrcBlendAlpha = ONE;
    DestBlendAlpha = ZERO;
    RenderTargetWriteMask[0] = 0x0f;
};
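With SRC_ALPHA / INV_SRC_ALPHA and BlendOp ADD, this state performs classic "over" blending: each output color is src.a * src.rgb + (1 - src.a) * dst.rgb. A small CPU-side sketch of that arithmetic (an illustrative helper, not part of the original code) makes the implication clear: if the pixel shader writes alpha = 0, the source contributes nothing, which is one way an overlay can become invisible.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// out = src.a * src.rgb + (1 - src.a) * dst.rgb, the blend the AlphaBlend
// state above performs per render-target pixel.
std::array<float, 3> BlendOver(const std::array<float, 4>& src, // rgba
                               const std::array<float, 3>& dst) // rgb
{
    std::array<float, 3> out;
    for (int i = 0; i < 3; ++i)
        out[i] = src[3] * src[i] + (1.0f - src[3]) * dst[i];
    return out;
}
```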

The pixel shader in the FX file is defined as follows:

Texture2D txDiffuse;
float4 PS(PS_INPUT input) : SV_Target
{
    float4 ret = txDiffuse.Sample(samLinear, input.Tex);
    return ret;
}

Thanks again.

Edit for Paulo:

Thanks a lot, Paulo. The question is which instance of the object should be bound to the alpha texture and which to the diffuse texture. As a test, I bound COverlayTexture[0] to the alpha texture and COverlayTexture[1] to the diffuse texture.

Texture2D txDiffuse[2];
float4 PS(PS_INPUT input) : SV_Target
{
    float4 ret = txDiffuse[1].Sample(samLinear, input.Tex);
    float alpha = txDiffuse[0].Sample(samLinear, input.Tex).x;

    return float4(ret.xyz, alpha);
}
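One thing worth double-checking in the shader above: for DXGI_FORMAT_B8G8R8A8_UNORM the bytes in memory are B, G, R, A, but Sample() still returns components as (x, y, z, w) = (R, G, B, A), so `.x` reads the red channel, not the alpha written into the bitmap. An illustrative CPU-side unpack mirroring that mapping (the helper is hypothetical, shown only to document the channel order):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Unpack one B8G8R8A8_UNORM pixel the way a shader sees it after Sample().
std::array<float, 4> UnpackBGRA(const uint8_t* px)
{
    return { px[2] / 255.0f,   // .x = R (third byte in memory)
             px[1] / 255.0f,   // .y = G
             px[0] / 255.0f,   // .z = B (first byte in memory)
             px[3] / 255.0f }; // .w = A (last byte in memory)
}
```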

I called PSSetShaderResources with both resource views:

g_pShaderResourceViews[0] = overlay[0].m_pShaderResourceView;
g_pShaderResourceViews[1] = overlay[1].m_pShaderResourceView;
m_Device->PSSetShaderResources(0, 2, g_pShaderResourceViews);

The result is that I don't see anything. I also tried the x, y, z, and w channels.



渔村楼浪 2024-12-02 13:33:42

Post some more code.

I'm not sure how you mean to mix these two textures. If you want to mix them in the pixel shader, you need to sample both of them and then add them together (or apply whatever operation you need).

How do you add the textures together? By setting an ID3D10BlendState, or in the pixel shader?

EDIT:

You don't need two textures in each class: if you want to write to your texture, its usage should be D3D10_USAGE_DYNAMIC. Done this way, the same texture can also serve as your shader resource, so the m_Device->CopyResource(m_pShaderResource, m_pStagingResource); step is no longer needed.
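A minimal sketch of that suggestion, reusing m_Device, m_width, and m_height from the question's code (the m_pTexture member name is illustrative). Note that dynamic textures must be mapped with D3D10_MAP_WRITE_DISCARD; plain D3D10_MAP_WRITE is only valid for staging resources:

```cpp
// One dynamic texture that is both CPU-writable and a shader resource.
D3D10_TEXTURE2D_DESC desc = {};
desc.Width            = m_width;
desc.Height           = m_height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D10_USAGE_DYNAMIC;        // CPU write, GPU read
desc.BindFlags        = D3D10_BIND_SHADER_RESOURCE; // usable by the shader
desc.CPUAccessFlags   = D3D10_CPU_ACCESS_WRITE;
HR( m_Device->CreateTexture2D( &desc, NULL, &m_pTexture ) );

// Dynamic resources require WRITE_DISCARD when mapping.
D3D10_MAPPED_TEXTURE2D mapped;
HR( m_pTexture->Map( 0, D3D10_MAP_WRITE_DISCARD, 0, &mapped ) );
// ... fill mapped.pData row by row, stepping by mapped.RowPitch ...
m_pTexture->Unmap( 0 );
```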

Since you're using alpha blending, you must control the alpha value the pixel shader outputs (the w component of the float4 the pixel shader returns).

Bind both textures to your pixel shader and use one texture's value as the alpha component:

Texture2D txDiffuse;
Texture2D txAlpha;
float4 PS(PS_INPUT input) : SV_Target
{
    float4 ret = txDiffuse.Sample(samLinear, input.Tex);
    float alpha = txAlpha.Sample(samLinear, input.Tex).x; // Choose the proper channel
    return float4(ret.xyz, alpha); // Alpha is the 4th component
}
