Audio and video manipulation - Developer guides

The beauty of the web is that you can combine technologies to create new forms. Having native audio and video in the browser means we can use these data streams with technologies such as <canvas>, WebGL or the Web Audio API to modify audio and video directly, for example adding reverb/compression effects to audio, or grayscale/sepia filters to video. This article provides a reference to explain what you need to do.

Video manipulation

The ability to read the pixel values from each frame of a video can be very useful.

Video and canvas

The <canvas> element provides a surface for drawing graphics onto web pages; it is very powerful and can be coupled tightly with video.

The general technique is to:

  1. Write a frame from the <video> element to an intermediary <canvas> element.
  2. Read the data from the intermediary <canvas> element and manipulate it.
  3. Write the manipulated data to your "display" <canvas>.
  4. Pause and repeat.

For example, let's process a video to display it in greyscale. In this case, we'll show both the source video and the output greyscale frames. Ordinarily, if you were implementing a "play video in greyscale" feature, you'd probably add display: none to the style for the <video> element, to keep the source video from being drawn to the screen while showing only the canvas displaying the altered frames.

HTML

We can set up our video player and <canvas> element like this:

<video id="my-video" controls width="480" height="270" crossorigin="anonymous">
  <source src="http://jplayer.org/video/webm/Big_Buck_Bunny_Trailer.webm" type="video/webm">
  <source src="http://jplayer.org/video/m4v/Big_Buck_Bunny_Trailer.m4v" type="video/mp4">
</video>

<canvas id="my-canvas" width="480" height="270"></canvas>

JavaScript

This code handles altering the frames.

var processor = {
  timerCallback: function() {
    if (this.video.paused || this.video.ended) {
      return;
    }
    this.computeFrame();
    var self = this;
    setTimeout(function () {
      self.timerCallback();
    }, 16); // roughly 60 frames per second
  },

  doLoad: function() {
    this.video = document.getElementById("my-video");
    this.c1 = document.getElementById("my-canvas");
    this.ctx1 = this.c1.getContext("2d");
    var self = this;

    this.video.addEventListener("play", function() {
      self.width = self.video.width;
      self.height = self.video.height;
      self.timerCallback();
    }, false);
  },

  computeFrame: function() {
    this.ctx1.drawImage(this.video, 0, 0, this.width, this.height);
    var frame = this.ctx1.getImageData(0, 0, this.width, this.height);
    var l = frame.data.length / 4;

    for (var i = 0; i < l; i++) {
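      // Average the red, green and blue channels to get the grey value.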
      var grey = (frame.data[i * 4 + 0] + frame.data[i * 4 + 1] + frame.data[i * 4 + 2]) / 3;

      frame.data[i * 4 + 0] = grey;
      frame.data[i * 4 + 1] = grey;
      frame.data[i * 4 + 2] = grey;
    }
    this.ctx1.putImageData(frame, 0, 0);

    return;
  }
};  

Once the page has loaded you can call:

processor.doLoad();
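
For example, you could run it from the page's load event:

window.addEventListener("load", function() {
  processor.doLoad();
});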

Result

This is a pretty simple example showing how to manipulate video frames using a canvas. For efficiency, you should consider using requestAnimationFrame() instead of setTimeout() when running on browsers that support it.
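
For example, timerCallback() could be rewritten along these lines (a sketch; the rest of the processor object stays the same):

  timerCallback: function() {
    if (this.video.paused || this.video.ended) {
      return;
    }
    this.computeFrame();
    var self = this;
    // Let the browser schedule the next frame to match the display's refresh rate.
    requestAnimationFrame(function() {
      self.timerCallback();
    });
  },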

Note: Due to potential security issues, if your video is on a different domain than your code you'll need to enable CORS (Cross-Origin Resource Sharing) on your video server.

Video and WebGL

WebGL is a powerful API that uses canvas to draw hardware-accelerated 3D or 2D scenes. You can combine WebGL and the <video> element to create video textures, which means you can put video inside 3D scenes.

Note: You can find the source code of this demo on GitHub (see it live also).
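
The core technique is to re-upload the current video frame into a texture on every render pass. A minimal sketch, assuming you already have a WebGL context gl, a compiled shader program, and a playing <video> element stored in video:

var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Video frames are usually not power-of-two sized, so turn off
// mipmapping and clamp the texture to its edges.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function render() {
  // Copy the current video frame into the texture.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  // ...draw your textured geometry here...
  requestAnimationFrame(render);
}
requestAnimationFrame(render);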

Playback rate

We can also adjust the rate at which audio and video play, using a property of the <audio> and <video> elements called playbackRate. playbackRate is a number representing a multiplier to be applied to the rate of playback; for example, 0.5 represents half speed while 2 represents double speed.

Note that the playbackRate property works with both <audio> and <video>, but in both cases, it changes the playback speed but not the pitch. To manipulate the audio's pitch you need to use the Web Audio API. See the AudioBufferSourceNode.playbackRate property.

HTML

<video id="my-video" controls
       src="http://jplayer.org/video/m4v/Big_Buck_Bunny_Trailer.m4v">
</video>

JavaScript

var myVideo = document.getElementById('my-video');
myVideo.playbackRate = 2;

Note: Try the playbackRate example live.

Audio manipulation

playbackRate aside, to manipulate audio you'll typically use the Web Audio API.

Selecting an audio source

The Web Audio API can receive audio from a variety of sources, process it, and then send it back out to an AudioDestinationNode, which represents the output device the sound is sent to after processing.

Use the Web Audio node type that matches the source of your audio:

  • An audio track from an HTML <audio> or <video> element: MediaElementAudioSourceNode
  • A plain raw audio data buffer in memory: AudioBufferSourceNode
  • An oscillator generating a sine wave or other computed waveform: OscillatorNode
  • An audio track from WebRTC (such as the microphone input you can get using getUserMedia()): MediaStreamAudioSourceNode
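
As a rough sketch, each of these source types can be created from an AudioContext like so (the getUserMedia() call requires user permission and a secure context):

var context = new AudioContext();

// From an <audio> or <video> element already in the page:
var mediaSource = context.createMediaElementSource(document.getElementById("my-video"));

// From a raw audio buffer in memory (filled in later, e.g. via decodeAudioData()):
var bufferSource = context.createBufferSource();

// From a computed waveform:
var oscillator = context.createOscillator();

// From a WebRTC/getUserMedia() stream:
navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  var streamSource = context.createMediaStreamSource(stream);
});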

Audio filters

The Web Audio API offers a number of different filters and effects that can be applied to audio, for example using the BiquadFilterNode.

HTML

<video id="my-video" controls
       src="myvideo.mp4">
</video>

JavaScript

var context = new AudioContext(),
    audioSource = context.createMediaElementSource(document.getElementById("my-video")),
    filter = context.createBiquadFilter();
audioSource.connect(filter);
filter.connect(context.destination);

// Configure filter
filter.type = "lowshelf";
filter.frequency.value = 1000;
filter.gain.value = 25;

Note: Unless you have CORS enabled on the video server, your video should be on the same domain as your code to avoid security issues.

Common audio filters

These are some of the common types of audio filter you can apply:

  • Low Pass: Allows frequencies below the cutoff frequency to pass through and attenuates frequencies above the cutoff.
  • High Pass: Allows frequencies above the cutoff frequency to pass through and attenuates frequencies below the cutoff.
  • Band Pass: Allows a range of frequencies to pass through and attenuates the frequencies below and above this frequency range.
  • Low Shelf: Allows all frequencies through, but adds a boost (or attenuation) to the lower frequencies.
  • High Shelf: Allows all frequencies through, but adds a boost (or attenuation) to the higher frequencies.
  • Peaking: Allows all frequencies through, but adds a boost (or attenuation) to a range of frequencies.
  • Notch: Allows all frequencies through, except for a set of frequencies.
  • Allpass: Allows all frequencies through, but changes the phase relationship between the various frequencies.
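
For instance, using the BiquadFilterNode from the example above, a band pass filter centered on 440 Hz could be configured like this (a sketch):

filter.type = "bandpass";
filter.frequency.value = 440; // center of the passband, in Hz
filter.Q.value = 1; // controls how narrow the passband is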

Note: See BiquadFilterNode for more information.

Convolutions and impulses

It's also possible to apply impulse responses to audio using the ConvolverNode. An impulse response is the sound created after a brief impulse of sound (like a hand clap). An impulse response characterizes the environment in which the impulse was created (for example, an echo created by clapping your hands in a tunnel).

Example

var convolver = context.createConvolver();
convolver.buffer = this.impulseResponseBuffer;
// Connect the graph.
source.connect(convolver);
convolver.connect(context.destination);
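
The snippet above assumes this.impulseResponseBuffer has already been loaded. A sketch of one way to fill it, where "impulse.wav" is a placeholder URL for a recorded impulse response:

var self = this;
fetch("impulse.wav")
  .then(function(response) { return response.arrayBuffer(); })
  .then(function(data) { return context.decodeAudioData(data); })
  .then(function(buffer) { self.impulseResponseBuffer = buffer; });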

See this Codepen for an applied (but very, very silly; like, little kids will giggle kind of silly) example.

Spatial audio

We can also position audio using a panner node. A panner node (PannerNode) lets us define a source cone as well as positional and directional elements, all in 3D space defined using 3D Cartesian coordinates.

Example

var panner = context.createPanner();
panner.coneOuterGain = 0.2;
panner.coneOuterAngle = 120;
panner.coneInnerAngle = 0;

panner.connect(context.destination);
source.connect(panner);
source.start(0);

// Position the listener at the origin.
context.listener.setPosition(0, 0, 0);

Note: You can find an example on our GitHub repository (see it live also).
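
Positioning the source works the same way; for example, this sketch places the sound one unit to the listener's left:

panner.setPosition(-1, 0, 0);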

JavaScript codecs

It's also possible to manipulate audio at a low level using JavaScript. This can be useful if you want to write your own audio codecs.

Libraries currently exist for a number of formats, including AAC, ALAC, FLAC, and MP3.

Note: At Audiocogs, you can try out a few demos; Audiocogs also provides a framework, Aurora.js, which is intended to help you author your own codecs in JavaScript.
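
As a rough illustration, playing a file through Aurora.js looks something like this (a sketch based on the Aurora.js documentation; the URL is a placeholder):

var player = AV.Player.fromURL("http://example.com/audio.flac");
player.play();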

