BaseAudioContext.decodeAudioData() - Web APIs
The decodeAudioData() method of the BaseAudioContext interface asynchronously decodes audio file data contained in an ArrayBuffer. The ArrayBuffer is usually loaded from an XMLHttpRequest, WindowOrWorkerGlobalScope.fetch(), or FileReader. The decoded AudioBuffer is resampled to the AudioContext's sample rate, then passed to a callback or returned via a promise.
This is the preferred method of creating an audio source for the Web Audio API from an audio track. This method only works on complete file data, not on fragments of an audio file.
Syntax
Older callback syntax:
baseAudioContext.decodeAudioData(ArrayBuffer, successCallback, errorCallback);
Newer promise-based syntax:
Promise<decodedData> baseAudioContext.decodeAudioData(ArrayBuffer);
Parameters
- ArrayBuffer
- An ArrayBuffer containing the audio data to be decoded, usually grabbed from XMLHttpRequest, WindowOrWorkerGlobalScope.fetch(), or FileReader.
- successCallback
- A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decodedData (the decoded PCM audio data). Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated as you want.
- errorCallback
- An optional error callback, to be invoked if an error occurs while the audio data is being decoded.
Return value
Void, or a Promise object that fulfills with the decodedData.
Example
In this section we will first cover the older callback-based system and then the newer promise-based syntax.
Older callback syntax
In this example, the getData() function uses XHR to load an audio track, setting the responseType of the request to arraybuffer so that the response is returned as an ArrayBuffer, which we then store in the audioData variable. We then pass this buffer to decodeAudioData(); the success callback takes the successfully decoded PCM data, puts it into an AudioBufferSourceNode created using AudioContext.createBufferSource(), connects the source to AudioContext.destination, and sets it to loop.
The buttons in the example run getData() to load the track and start it playing, and stop it playing, respectively. When the stop() method is called on the source, the source is cleared out.
Note: You can run the example live (or view the source).
// define variables
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;

var pre = document.querySelector('pre');
var myScript = document.querySelector('script');
var play = document.querySelector('.play');
var stop = document.querySelector('.stop');

// use XHR to load an audio track, and
// decodeAudioData to decode it and stick it in a buffer.
// Then we put the buffer into the source
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();

  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';

  request.onload = function() {
    var audioData = request.response;

    audioCtx.decodeAudioData(audioData, function(buffer) {
        source.buffer = buffer;
        source.connect(audioCtx.destination);
        source.loop = true;
      },
      function(e) {
        // e is a DOMException describing the decoding failure
        console.error('Error with decoding audio data: ' + e.message);
      });
  };

  request.send();
}

// wire up buttons to stop and play audio
play.onclick = function() {
  getData();
  source.start(0);
  play.setAttribute('disabled', 'disabled');
};

stop.onclick = function() {
  source.stop(0);
  play.removeAttribute('disabled');
};

// dump script to pre element
pre.innerHTML = myScript.innerHTML;
Newer promise-based syntax
audioCtx.decodeAudioData(audioData).then(function(decodedData) {
  // use the decoded data here
});
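The promise form pairs naturally with fetch(). The sketch below is an illustration under stated assumptions: the file name 'track.ogg' and the loadTrack() helper are hypothetical, not part of this page's live example.

```javascript
// Hypothetical helper: fetch a file, decode it, and return a ready source node.
// 'track.ogg' is a placeholder URL.
function loadTrack(audioCtx, url) {
  return fetch(url)
    .then(function(response) {
      return response.arrayBuffer();
    })
    .then(function(arrayBuffer) {
      return audioCtx.decodeAudioData(arrayBuffer);
    })
    .then(function(decodedData) {
      var source = audioCtx.createBufferSource();
      source.buffer = decodedData;
      source.connect(audioCtx.destination);
      return source;
    });
}

// Usage: errors from both the fetch and the decode surface in one catch.
loadTrack(new AudioContext(), 'track.ogg')
  .then(function(source) { source.start(0); })
  .catch(function(e) { console.error('Unable to fetch or decode audio: ' + e.message); });
```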
Specifications
Specification | Status | Comment
---|---|---
Web Audio API, the definition of 'decodeAudioData()' in that specification. | Working Draft |
Browser compatibility