Displaying multiple frames of a video element at the same time

Posted on 2024-12-08 17:24:10


Some background: graphics newbie here, have just dipped my toes into the world of 3D in the browser with mrdoob's excellent three.js. I intend to go through all the tuts at http://learningwebgl.com/ soon :)


I'd like to know how one would roughly go about re-creating something similar to:
http://yooouuutuuube.com/v/?width=192&height=120&vm=29755443&flux=0&direction=rand

My naive understanding of how yooouuutuuube works is as follows (see the sketch after this list):

  1. Create a massive BitmapData (larger than any reasonable browser window size).
  2. Determine the number of required rows / columns (across the entire BitmapData plane, not just the visible area) based on the width/height of the target video frame
  3. Copy pixels from the most recent video frame to a position on the BitmapData (based on the direction of movement)
  4. Iterate through every cell in the BitmapData, copying pixels from the cell that precedes it
  5. Scroll the entire BitmapData in the opposite direction to create the illusion of movement, with a Zoetrope-type effect
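To make these steps concrete, here is a minimal sketch of the grid on a single 2D canvas. Everything in it is an assumption made for illustration: the `video` element is taken to be already loaded and playing, and the cell size and grid dimensions are hard-coded instead of being derived from the viewport as step 2 describes.

    var video = document.querySelector( 'video' );       // assumed to be loaded and playing

    var cellWidth = 192, cellHeight = 120;                // size of one frame cell
    var cols = 8, rows = 6;                               // hypothetical grid dimensions (step 2)

    var buffer = document.createElement( 'canvas' );      // step 1: the big "BitmapData"
    buffer.width = cellWidth * cols;
    buffer.height = cellHeight * rows;
    var context = buffer.getContext( '2d' );

    setInterval( function () {

        // step 4: every cell takes the pixels of the cell before it, walking the
        // grid backwards in reading order so no cell is overwritten too early
        for ( var i = cols * rows - 1; i > 0; i -- ) {
            var sx = ( ( i - 1 ) % cols ) * cellWidth, sy = Math.floor( ( i - 1 ) / cols ) * cellHeight;
            var dx = ( i % cols ) * cellWidth,         dy = Math.floor( i / cols ) * cellHeight;
            context.drawImage( buffer, sx, sy, cellWidth, cellHeight, dx, dy, cellWidth, cellHeight );
        }

        // step 3: the newest video frame goes into the first cell
        context.drawImage( video, 0, 0, cellWidth, cellHeight );

    }, 1000 / 30 );

Step 5 would then be a matter of scrolling `buffer` (or whatever surface it is textured onto) one cell per tick in the opposite direction.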

I'd like to do this in WebGL as opposed to using Canvas so I can take advantage of post-processing using shaders (noise and color channel separation to mimic chromatic aberration).
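As an illustration of that kind of post-processing, the channel separation and noise could also live entirely in a fragment shader instead of being faked on the 2D canvas. A hypothetical GLSL sketch, written as a Three.js-style shader string (the `map` and `time` uniform names are assumptions, not part of the original setup):

    var fragmentShader = [
        'uniform sampler2D map;',
        'uniform float time;',
        'varying vec2 vUv;',
        'void main() {',
        '    vec2 shift = vec2( 0.003, 0.0 );',                   // per-channel UV offset
        '    float r = texture2D( map, vUv + shift ).r;',          // sample each channel at a
        '    float g = texture2D( map, vUv ).g;',                  // slightly different position
        '    float b = texture2D( map, vUv - shift ).b;',          // to fake chromatic aberration
        '    float n = fract( sin( dot( vUv + time, vec2( 12.9898, 78.233 ) ) ) * 43758.5453 );',
        '    gl_FragColor = vec4( vec3( r, g, b ) + 0.05 * ( n - 0.5 ), 1.0 );',
        '}'
    ].join( '\n' );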

Here's a screenshot of what I have so far:

[screenshot of the current result]

  • Three videos (same video, but separated into R, G and B channels) are drawn to a canvas 2D context. Each video is slightly offset in order to fake that chromatic aberration look.
  • A texture is created in Three.JS which references this canvas. This texture is updated every draw cycle.
  • A shader material is created in Three.JS which is linked to a fragment shader (which creates noise / scanlines)
  • This material is then applied to a number of 3D planes (see the sketch after this list).
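Roughly, the wiring described in these bullets might look like the sketch below (Three.js names as in the current API; the composited `canvas`, the `scene`, `camera` and `renderer`, and the `fragmentShader` string from the earlier sketch are all assumed to exist already):

    var texture = new THREE.Texture( canvas );

    var material = new THREE.ShaderMaterial( {
        uniforms: {
            map:  { value: texture },
            time: { value: 0.0 }
        },
        vertexShader: [
            'varying vec2 vUv;',
            'void main() {',
            '    vUv = uv;',
            '    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );',
            '}'
        ].join( '\n' ),
        fragmentShader: fragmentShader         // e.g. the aberration/noise shader sketched above
    } );

    var plane = new THREE.Mesh( new THREE.PlaneGeometry( 4, 3 ), material );
    scene.add( plane );

    function animate( t ) {
        requestAnimationFrame( animate );
        material.uniforms.time.value = t * 0.001;
        texture.needsUpdate = true;            // re-upload the canvas pixels every frame
        renderer.render( scene, camera );
    }
    requestAnimationFrame( animate );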

This works just fine for showing single frames of video, but I'd like to see if I could show multiple frames at once without needing to add additional geometry.

What would be the optimal way of going about such a task? Are there any concepts that I should be studying/investigating in further detail?


1 Answer

无风消散 2024-12-15 17:24:10
<html>
    <body>
        <script>

            var video = document.createElement( 'video' );
            video.autoplay = true;
            video.muted = true; // current autoplay policies require muted playback
            video.addEventListener( 'loadedmetadata', function ( event ) {

                var scale = 0.5;
                var width = video.videoWidth * scale;
                var height = video.videoHeight * scale;

                // how many scaled-down copies of the frame fit in the window
                var items_total = ( window.innerWidth * window.innerHeight ) / ( width * height );

                for ( var i = 0; i < items_total - 1; i ++ ) {

                    var canvas = document.createElement( 'canvas' );
                    canvas.width = width;
                    canvas.height = height;

                    canvas.context = canvas.getContext( '2d' );
                    canvas.context.scale( scale, scale );

                    document.body.appendChild( canvas );

                }

                setInterval( function () {

                    // move the last canvas to the front of the grid and stamp the
                    // current video frame onto it; the older frames trail behind
                    var child = document.body.insertBefore( document.body.lastChild, document.body.children[ 1 ] ); // children[ 0 ] == <script>
                    child.context.drawImage( video, 0, 0 );

                }, 1000 / 30 );

            }, false );
            video.src = 'video.ogv';

        </script>
    </body>
</html>