Using one video as a mask and canvas for another video

Published 2025-01-28 11:32:52

I have been trying to create a dynamic mask from one video onto another one. Neither video has an alpha channel (and even then, alpha in video currently has poor browser support), so I am trying to solve the issue with canvas. I have managed to get it done in very few cycles, but it still can't produce my output at a decent frame rate (it hits about 20 fps, and it should reach at least 25/30 to look smooth). It actually runs in Singular alongside other elements, so I need to make it more efficient.

Due to confidentiality I can't share the actual videos, but one contains the alpha mask (mask.mp4) in black and white (with shades of gray in between), while the other video's content can be anything that needs to be masked (video.mp4).

The code I want to improve is essentially this (assume all video and canvas elements are 1920x1080):

const video = document.getElementById( 'video' ); // <video src="video.mp4"></video>
const mask = document.getElementById( 'mask' ); // <video src="mask.mp4"></video>
const buffer = document.getElementById( 'buffer' ); // <canvas hidden></canvas>
const bufferCtx = buffer.getContext( '2d' );
const output = document.getElementById( 'output' ); // <canvas></canvas>
const outputCtx = output.getContext( '2d' );

const w = 1920; // all video and canvas elements are 1920x1080 (see above)
const h = 1080;

outputCtx.globalCompositeOperation = 'source-in';

function maskVideo(){
    
    // Draw the mask
    bufferCtx.drawImage( mask, 0, 0 );
    
    // Get image data
    const data = bufferCtx.getImageData( 0, 0, w, h );
     
    // Assign the grayscale value to the alpha in the image data
    for( let i = 0; i < data.data.length; i += 4 ){
    
        data.data[i+3] = data.data[i];
    
    }
    
    // Put in the data
    outputCtx.putImageData( data, 0, 0 );
    // Draw the masked video into the mask (see the globalCompositeOperation above)
    outputCtx.drawImage( video, 0, 0, w, h );
    
    window.requestAnimationFrame( maskVideo );

}

window.requestAnimationFrame( maskVideo );
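The per-pixel loop is the hot path here, and it can be reasoned about (or benchmarked) in isolation, since it only touches the raw RGBA buffer. A minimal standalone sketch of that step — the function name is mine, not part of the snippet — assuming the usual ImageData layout of [R, G, B, A, R, G, B, A, ...]:

```javascript
// Standalone version of the masking loop above: copy the red channel of an
// RGBA buffer into its alpha channel, so white stays opaque, black becomes
// transparent, and grays become partially transparent.
function redToAlpha( pixels ){
    for( let i = 0; i < pixels.length; i += 4 ){
        pixels[i + 3] = pixels[i];
    }
    return pixels;
}

// Two sample pixels: one white, one black, both initially opaque.
const sample = new Uint8ClampedArray([ 255, 255, 255, 255,  0, 0, 0, 255 ]);
redToAlpha( sample );
console.log( Array.from( sample ) ); // [255, 255, 255, 255, 0, 0, 0, 0]
```

Because only the red channel is read, this assumes the mask really is grayscale (R = G = B), which holds for the mask.mp4 described above.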

So my question is: is there a more efficient way of masking out black pixels from one video over another while both videos are playing? I am not currently worried about synchronization (the offset between the two streams is under a millisecond, far less than a single frame of the video itself). The goal is to improve the frame rate.

Here is the more expansive code snippet, which unfortunately doesn't run on Stack Overflow for obvious CORS and video-hosting reasons:

const video = document.getElementById( 'video' );
const mask = document.getElementById( 'mask' );
const buffer = document.getElementById( 'buffer' );
const bufferCtx = buffer.getContext( '2d' );
const output = document.getElementById( 'output' );
const outputCtx = output.getContext( '2d' );
const fpsOutput = document.getElementById( 'fps' );
const w = 1920;
const h = 1080;

let currentError;
let FPSCount = 0;

output.width =
buffer.width = w;
output.height =
buffer.height = h;

outputCtx.globalCompositeOperation = 'source-in';

function maskVideo(){

    if( !video.paused ){

        bufferCtx.drawImage( mask, 0, 0 );

        const data = bufferCtx.getImageData( 0, 0, w, h );

        for( let i = 0; i < data.data.length; i += 4 ){

            data.data[i+3] = data.data[i];

        }

        outputCtx.putImageData( data, 0, 0 );
        outputCtx.drawImage( video, 0, 0, w, h );

        FPSCount++;

    } else if( FPSCount ){

        fpsOutput.textContent = (FPSCount / video.duration * video.playbackRate).toFixed(2);
        FPSCount = 0;

    }

    window.requestAnimationFrame( maskVideo );

}

window.requestAnimationFrame( maskVideo );
html {
    font-size: 1vh;
    font-size: calc(var(--scale,1) * 1vh);
}
html, body {
    height: 100%;
}
body {
    padding: 0;
    margin: 0;
    overflow: hidden;
}
video {
    display: none;
}
canvas {
    position: fixed;
    top: 0;
    left: 0;
}
#fps {
    position: fixed;
    top: 10px;
    left: 10px;
    background: red;
    color: white;
    z-index: 5;
    font-size: 50px;
    margin: 0;
    font-family: Courier, monospace;
    font-variant-numeric: tabular-nums;
}
#fps:after {
    content: 'fps';
}
<video id="video" src="./video.mp4" preload muted hidden></video>
<video id="mask" src="./mask.mp4" preload muted hidden></video>
<canvas id="output"></canvas>
<canvas id="buffer" hidden></canvas>

<p id="fps" hidden></p>
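The fps readout above is an average over the whole clip: frames composited divided by wall-clock play time, which is video.duration / video.playbackRate. The same arithmetic as a standalone sketch (the helper name is mine):

```javascript
// Average fps as computed by the snippet above: frames drawn, divided by
// wall-clock playback time (duration / playbackRate).
function averageFps( frameCount, duration, playbackRate = 1 ){
    return ( frameCount / duration * playbackRate ).toFixed( 2 );
}

console.log( averageFps( 600, 30 ) );    // "20.00" — 600 frames over a 30 s clip
console.log( averageFps( 600, 30, 2 ) ); // "40.00" — same clip played at 2x speed
```

Note it only updates once the video pauses or ends, so it measures throughput over the run rather than instantaneous frame rate.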



Comments (1)

星星的軌跡 2025-02-04 11:32:52

One trick is to do the chroma key on a smaller version of the video.
On video, you generally won't notice that the mask has lower resolution than the actual visible footage, but the CPU will notice it has far fewer pixels to process.
You can even smooth the edges a bit by applying a small blur filter over the mask:

(async() => {
const video = document.getElementById( 'video' );
const mask = document.getElementById( 'mask' );
const buffer = document.getElementById( 'buffer' );
// since we're going to do a lot of readbacks on the context
// we should let the browser know,
// so that it doesn't move it from and to the GPU every time
// For this we use the { willReadFrequently: true } option
const bufferCtx = buffer.getContext( '2d', { willReadFrequently: true } );
const output = document.getElementById( 'output' );
const outputCtx = output.getContext( '2d' );


await video.play();
await mask.play();

const w = video.videoWidth;
const h = video.videoHeight;
output.width  = w;
output.height = h;
buffer.width  = mask.videoWidth  / 5; // do the chroma on a smaller version
buffer.height = mask.videoHeight / 5;

function maskVideo(){

    if( !video.paused ){

        bufferCtx.drawImage( mask, 0, 0, buffer.width, buffer.height );

        const data = bufferCtx.getImageData( 0, 0, buffer.width, buffer.height );

        for( let i = 0; i < data.data.length; i += 4 ) {
            data.data[i+3] = data.data[i];
        }
        // we put the new ImageData back on buffer
        // so that we can stretch it on the output context
        // alternatively we could have created an ImageBitmap from the ImageData
        bufferCtx.putImageData( data, 0, 0 );
        outputCtx.clearRect(0, 0, w, h);
        outputCtx.filter = "blur(1px)"; // smoothen the mask
        outputCtx.drawImage( buffer, 0, 0, w, h );
        outputCtx.globalCompositeOperation = 'source-in';
        outputCtx.filter = "none";
        outputCtx.drawImage( video, 0, 0, w, h );
        outputCtx.globalCompositeOperation = 'source-over';

    }

    window.requestAnimationFrame( maskVideo );

}

window.requestAnimationFrame( maskVideo );
})();
<video id="video" src="https://dl8.webmfiles.org/big-buck-bunny_trailer.webm" preload muted hidden loop></video>
<!-- not a great mask video example since it's really in shades of grays -->
<video id="mask" preload muted loop hidden src="https://upload.wikimedia.org/wikipedia/commons/6/64/Plan_9_from_Outer_Space_%281959%29.webm" crossorigin="anonymous"></video>
<canvas id="output"></canvas>
<canvas id="buffer" hidden></canvas>
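The loop above still keys on the red channel only, which is fine for a truly grayscale mask. If video compression's chroma subsampling leaves faint color tints in the "gray" pixels, weighting all three channels is safer. A hedged sketch (a standalone function, not part of the snippet above), using the Rec. 709 coefficients that modern luminance-to-alpha filter implementations roughly follow:

```javascript
// Variant of the masking loop that derives alpha from a Rec. 709 luminance
// weighting of all three channels instead of reading only red. The
// Uint8ClampedArray rounds and clamps the result on assignment.
function luminanceToAlpha( pixels ){
    for( let i = 0; i < pixels.length; i += 4 ){
        pixels[i + 3] = 0.2126 * pixels[i]
                      + 0.7152 * pixels[i + 1]
                      + 0.0722 * pixels[i + 2];
    }
    return pixels;
}

// White stays opaque, black goes transparent, even if channels drift apart.
const px = new Uint8ClampedArray([ 255, 255, 255, 255,  0, 0, 0, 255 ]);
luminanceToAlpha( px );
console.log( px[3], px[7] ); // 255 0
```

This costs a couple of extra multiplications per pixel, so on a clean grayscale mask the single-channel version remains the cheaper choice.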

Another solution, since your mask is black & white, is to use an SVG filter, notably an <feColorMatrix type="luminanceToAlpha">. This should leave all the work on the GPU side and remove the need for the intermediate buffer canvas; note, however, that Safari still doesn't support this option...

(async() => {
const video = document.getElementById( 'video' );
const mask = document.getElementById( 'mask' );
const output = document.getElementById( 'output' );
const outputCtx = output.getContext( '2d' );


await video.play();
await mask.play();

const w = output.width  = video.videoWidth;
const h = output.height = video.videoHeight;

function maskVideo(){

    outputCtx.clearRect(0, 0, w, h);
    outputCtx.filter = "url(#lumToAlpha)";
    outputCtx.drawImage( mask, 0, 0, w, h );
    outputCtx.filter = "none";
    outputCtx.globalCompositeOperation = 'source-in';
    outputCtx.drawImage( video, 0, 0, w, h );
    outputCtx.globalCompositeOperation = 'source-over';

    window.requestAnimationFrame( maskVideo );

}

window.requestAnimationFrame( maskVideo );
})();
<video id="video" src="https://dl8.webmfiles.org/big-buck-bunny_trailer.webm" preload muted hidden loop></video>
<video id="mask" preload muted loop hidden src="https://upload.wikimedia.org/wikipedia/commons/6/64/Plan_9_from_Outer_Space_%281959%29.webm" crossorigin="anonymous"></video>
<canvas id="output"></canvas>
<svg width=0 height=0 style=visibility:hidden;position:absolute>
  <filter id=lumToAlpha>
    <feColorMatrix type="luminanceToAlpha" />
  </filter>
</svg>

