Using one video as a mask for another video with canvas
I have been trying to create a dynamic mask from one video onto another one. Neither video has an alpha channel (and even if it did, alpha in video currently has poor browser support), so I am trying to solve the issue with canvas. I have managed to get it done in very few operations per frame, but it still can't sustain a decent frame rate in my output (it hits about 20 fps, and it should reach at least 25/30 to appear smooth). It is actually running in Singular alongside other elements, so I need to make it more efficient.
Due to confidentiality I can't share the actual videos, but one contains the alpha mask (mask.mp4) in black and white (and shades of gray in between), while the other contains whatever content needs to be masked (video.mp4).
The code that I want to improve is actually this (assume all videos and canvas elements are 1920x1080):
const video = document.getElementById( 'video' ); // <video src="video.mp4"></video>
const mask = document.getElementById( 'mask' ); // <video src="mask.mp4"></video>
const buffer = document.getElementById( 'buffer' ); // <canvas hidden></canvas>
const bufferCtx = buffer.getContext( '2d' );
const output = document.getElementById( 'output' ); // <canvas></canvas>
const outputCtx = output.getContext( '2d' );
const w = 1920; // all videos and canvases are 1920x1080
const h = 1080;
outputCtx.globalCompositeOperation = 'source-in';
function maskVideo(){
// Draw the mask
bufferCtx.drawImage( mask, 0, 0 );
// Get image data
const data = bufferCtx.getImageData( 0, 0, w, h );
// Assign the grayscale value to the alpha in the image data
for( let i = 0; i < data.data.length; i += 4 ){
data.data[i+3] = data.data[i];
}
// Put in the data
outputCtx.putImageData( data, 0, 0 );
// Draw the masked video into the mask (see the globalCompositeOperation above)
outputCtx.drawImage( video, 0, 0, w, h );
window.requestAnimationFrame( maskVideo );
}
window.requestAnimationFrame( maskVideo );
So my question is: is there any more efficient way of masking out black pixels from another video while both videos are running? I am currently not worried about concurrency (the difference between both streams is less than a millisecond, much less than an actual frame in the video itself). The goal is to improve framerates.
Here is the more expansive code snippet, which unfortunately doesn't run on Stack Overflow for obvious CORS and video-hosting reasons:
const video = document.getElementById( 'video' );
const mask = document.getElementById( 'mask' );
const buffer = document.getElementById( 'buffer' );
const bufferCtx = buffer.getContext( '2d' );
const output = document.getElementById( 'output' );
const outputCtx = output.getContext( '2d' );
const fpsOutput = document.getElementById( 'fps' );
const w = 1920;
const h = 1080;
let FPSCount = 0;
output.width =
buffer.width = w;
output.height =
buffer.height = h;
outputCtx.globalCompositeOperation = 'source-in';
function maskVideo(){
if( !video.paused ){
bufferCtx.drawImage( mask, 0, 0 );
const data = bufferCtx.getImageData( 0, 0, w, h );
for( let i = 0; i < data.data.length; i += 4 ){
data.data[i+3] = data.data[i];
}
outputCtx.putImageData( data, 0, 0 );
outputCtx.drawImage( video, 0, 0, w, h );
FPSCount++;
} else if( FPSCount ){
fpsOutput.textContent = (FPSCount / video.duration * video.playbackRate).toFixed(2);
FPSCount = 0;
}
window.requestAnimationFrame( maskVideo );
}
window.requestAnimationFrame( maskVideo );
html {
font-size: 1vh;
font-size: calc(var(--scale,1) * 1vh);
}
html, body {
height: 100%;
}
body {
padding: 0;
margin: 0;
overflow: hidden;
}
video {
display: none;
}
canvas {
position: fixed;
top: 0;
left: 0;
}
#fps {
position: fixed;
top: 10px;
left: 10px;
background: red;
color: white;
z-index: 5;
font-size: 50px;
margin: 0;
font-family: Courier, monospace;
font-variant-numeric: tabular-nums;
}
#fps:after {
content: 'fps';
}
<video id="video" src="./video.mp4" preload muted hidden></video>
<video id="mask" src="./mask.mp4" preload muted hidden></video>
<canvas id="output"></canvas>
<canvas id="buffer" hidden></canvas>
<p id="fps" hidden></p>
1 Answer
One trick is to do your chroma key on a smaller version of the video.
Generally, on video you won't really notice that the masking has lower quality than the actual visible video; the CPU, however, will have far fewer pixels to process.
You can even smooth the edges a bit by applying a small blur filter over the mask:
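A minimal sketch of this idea, assuming the same elements and variables as in the question (`video`, `mask`, `outputCtx`); the 480x270 buffer size, the 2px blur, and the `copyRedToAlpha` helper name are illustrative choices, not part of the original code:

```javascript
// Pure helper: copy the red channel of a grayscale mask into the alpha channel.
function copyRedToAlpha( pixels ){
  for( let i = 0; i < pixels.length; i += 4 ){
    pixels[i+3] = pixels[i];
  }
  return pixels;
}

// Browser-only wiring (guarded so the helper above stays usable on its own).
if( typeof document !== 'undefined' ){
  const w = 1920, h = 1080;
  // Process the mask at a quarter of the output size:
  // the per-pixel loop touches 1/16th of the pixels (480*270 vs 1920*1080).
  const sw = 480, sh = 270;
  const small = document.createElement( 'canvas' );
  small.width = sw;
  small.height = sh;
  const smallCtx = small.getContext( '2d' );

  function maskVideoSmall(){
    smallCtx.drawImage( mask, 0, 0, sw, sh ); // downscale while drawing
    const data = smallCtx.getImageData( 0, 0, sw, sh );
    copyRedToAlpha( data.data );
    smallCtx.putImageData( data, 0, 0 );

    // Scale the processed mask back up; a small blur hides the lost resolution.
    outputCtx.globalCompositeOperation = 'source-over';
    outputCtx.clearRect( 0, 0, w, h );
    outputCtx.filter = 'blur(2px)';
    outputCtx.drawImage( small, 0, 0, w, h );
    outputCtx.filter = 'none';

    // Keep the video only where the upscaled mask is opaque.
    outputCtx.globalCompositeOperation = 'source-in';
    outputCtx.drawImage( video, 0, 0, w, h );
    window.requestAnimationFrame( maskVideoSmall );
  }
  window.requestAnimationFrame( maskVideoSmall );
}
```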
Another solution, since your mask is black and white, is to use an SVG filter, notably <feColorMatrix type="luminanceToAlpha">. This should leave all the work on the GPU side and remove the need for the intermediary buffer canvas; note however that Safari still doesn't support that option...
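A sketch of the SVG-filter route, again assuming the same elements and variables as in the question; the filter id luminance-to-alpha is an illustrative name. The canvas references the SVG filter through ctx.filter = 'url(#...)', which is the part Safari doesn't support:

```html
<!-- Zero-size SVG holding the filter; luminanceToAlpha turns each pixel's
     luminance into its alpha value, on the GPU. -->
<svg width="0" height="0" style="position:absolute">
  <filter id="luminance-to-alpha">
    <feColorMatrix type="luminanceToAlpha"/>
  </filter>
</svg>
<script>
function maskVideoGPU(){
  // Draw the mask through the SVG filter: no getImageData, no pixel loop.
  outputCtx.globalCompositeOperation = 'source-over';
  outputCtx.clearRect( 0, 0, w, h );
  outputCtx.filter = 'url(#luminance-to-alpha)';
  outputCtx.drawImage( mask, 0, 0, w, h );
  outputCtx.filter = 'none';
  // Then keep the video only where the mask is opaque.
  outputCtx.globalCompositeOperation = 'source-in';
  outputCtx.drawImage( video, 0, 0, w, h );
  window.requestAnimationFrame( maskVideoGPU );
}
window.requestAnimationFrame( maskVideoGPU );
</script>
```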