A-Frame: post-processing attempt works on desktop but fails in VR



I am attempting a simple approach to postprocessing in A-Frame (without using the three.js classes EffectComposer, etc., for simplicity). The approach seems standard:

  • create a new render target
  • render the scene into the target's texture
  • create a secondary scene containing a single quad, with a custom shader material that alters that texture in some way
  • use an orthographic camera to render the secondary scene into the main window (a plain three.js sketch of these steps follows this list)
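
For comparison, here is a minimal sketch of these four steps in plain three.js. This is not code from the example; it assumes that scene, camera, and renderer objects already exist, and it uses a pass-through fragment shader in place of the color shift:

const renderTarget = new THREE.WebGLRenderTarget(1024, 1024);

const postMaterial = new THREE.ShaderMaterial({
    uniforms: {
        tex: { value: renderTarget.texture },
    },
    vertexShader: `
    varying vec2 vUv;
    void main()
    {
        vUv = uv;
        // the 2x2 quad already spans clip space, so no camera transform is required
        gl_Position = vec4(position, 1.0);
    }
    `,
    fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tex;
    void main()
    {
        gl_FragColor = texture2D(tex, vUv);
    }
    `
});

// secondary scene: a single full-screen quad carrying the post-processing material
const quadScene = new THREE.Scene();
quadScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), postMaterial));
const quadCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

function animate()
{
    requestAnimationFrame(animate);

    // pass 1: render the main scene into the target's texture
    renderer.setRenderTarget(renderTarget);
    renderer.render(scene, camera);

    // pass 2: render the textured quad to the screen
    renderer.setRenderTarget(null);
    renderer.render(quadScene, quadCamera);
}
animate();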

I have set this up with an A-Frame component as follows (with the goal of working in VR, as seen by the code in the tick function):

AFRAME.registerComponent("color-shift", {
    init: function () 
    {
        // render the scene to this texture
        this.renderTarget0 = new THREE.WebGLRenderTarget(1024, 1024);
        this.renderTarget0.texture.magFilter = THREE.NearestFilter;
        this.renderTarget0.texture.minFilter = THREE.NearestFilter;
        this.renderTarget0.texture.generateMipmaps = false;

        let texture = this.renderTarget0.texture;

        let postMaterial = new THREE.ShaderMaterial( {

          uniforms: {
            tex: {type: "t", value: texture},
          },

          vertexShader: `
          varying vec2 vUv;
          void main() 
          {
              vUv = uv;
              gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
          }
          `,

          fragmentShader: `
          varying vec2 vUv;
          uniform sampler2D tex;
          void main() 
          {
              vec4 color = texture2D(tex, vUv);
              gl_FragColor = vec4(color.g, color.b, color.r, 1);
          }
          `
         });

        // separate scene #1 for texture post processing
        const quad = new THREE.Mesh(
            new THREE.PlaneGeometry(2, 2), postMaterial );

        this.rtScene = new THREE.Scene();
        this.rtScene.add(quad);
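        // a default OrthographicCamera spans -1..1 in x and y, which exactly frames the 2x2 quad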
        this.rtCamera = new THREE.OrthographicCamera();
        this.rtCamera.position.z = 0.5;
    },

    tick: function(t, dt)
    {
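       // note: A-Frame renders the scene *after* tick, into whatever render target
       //   is active at that point, so the texture drawn onto the quad below holds
       //   the main scene render from the previous frame
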
       // store XR settings
       const renderer = this.el.sceneEl.renderer;
       const currentRenderTarget = renderer.getRenderTarget();
       const currentXrEnabled = renderer.xr.enabled;
       const currentShadowAutoUpdate = renderer.shadowMap.autoUpdate;

       // temporarily disable XR
       renderer.xr.enabled = false;
       renderer.shadowMap.autoUpdate = false;

       // apply post-processing effects to previously rendered target texture,
       //   displayed on a quad, rendered to screen
       renderer.setRenderTarget(null);
       renderer.render(this.rtScene, this.rtCamera);

       // re-enable XR
       renderer.xr.enabled = currentXrEnabled;
       renderer.shadowMap.autoUpdate = currentShadowAutoUpdate;
       
       // render scene onto a texture the next time it renders
       renderer.setRenderTarget(this.renderTarget0); 
    }
});

The complete source code is at: https://github.com/stemkoski/A-Frame-Examples/blob/master/post-processing-test.html and a live version is at https://stemkoski.github.io/A-Frame-Examples/post-processing-test.html.

This example works perfectly as expected on desktop, resulting in a hue shift, but when entering VR mode, the screen is completely black.

I am not sure why this happens; the details of how rendering works in VR mode are a little confusing to me. I think that in VR mode the camera is actually an array of two perspective cameras, and that the renderer's render method draws the scene twice, once from each of these cameras, into a viewport corresponding to half of a render target texture, but I may very well be mistaken. I would like to capture the result of the render while in VR mode and then apply simple postprocessing to it, as in the shader above. How can I fix the code above to accomplish this?
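
For reference, the two-camera structure described above can be checked at runtime with a throwaway component like the rough sketch below (the component name is made up here, and it is not part of the original example). It assumes a reasonably recent three.js build in which renderer.xr.getCamera() takes no argument; older builds required the active camera to be passed in. Attaching it to any entity and entering VR logs the sub-camera count and per-eye viewports once.

AFRAME.registerComponent("log-xr-camera", {
    tick: function ()
    {
        const renderer = this.el.sceneEl.renderer;
        if (!renderer.xr.isPresenting || this.logged)
            return;

        // while presenting, the renderer substitutes an ArrayCamera containing
        //   one perspective camera per eye
        const xrCamera = renderer.xr.getCamera();
        console.log("sub-cameras:", xrCamera.cameras.length);

        // each sub-camera carries a viewport covering its half of the XR framebuffer
        xrCamera.cameras.forEach((cam, i) => console.log("viewport", i, cam.viewport));

        this.logged = true;
    }
});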
