AudioWorkletProcessor - Web API Reference
The AudioWorkletProcessor interface of the Web Audio API represents the audio processing code behind a custom AudioWorkletNode. It lives in the AudioWorkletGlobalScope and runs on the Web Audio rendering thread. In turn, an AudioWorkletNode based on it runs on the main thread.
Constructor
AudioWorkletProcessor and its subclasses cannot be instantiated directly by user-supplied code. They are only created internally as their associated AudioWorkletNodes are created. The constructor of the deriving class is called with an options object, so you can perform custom initialization procedures; see the constructor page for details.
AudioWorkletProcessor()
- Creates a new instance of an AudioWorkletProcessor object.
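As a rough sketch of that initialization flow (the MyProcessor class, the registration name 'my-processor', and the customSetting field are made up for illustration, not part of the API), a deriving class might read its options like this:

// my-processor.js — runs in the AudioWorkletGlobalScope
class MyProcessor extends AudioWorkletProcessor {
  constructor (options) {
    super()
    // options.processorOptions carries whatever data was passed from the
    // AudioWorkletNode constructor on the main thread
    this.customSetting = options.processorOptions?.customSetting ?? 0
  }
  process (inputs, outputs, parameters) {
    // fill the outputs here
    return true
  }
}
registerProcessor('my-processor', MyProcessor)

On the main thread, the matching node would then be created with, for example, new AudioWorkletNode(audioContext, 'my-processor', { processorOptions: { customSetting: 42 } }).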
Properties
port Read only
- Returns a MessagePort used for bidirectional communication between the processor and the AudioWorkletNode it belongs to. The other end is available under the port property of the node.
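As a minimal sketch of that channel (the processor and its registration name 'my-processor' are hypothetical), messages can be exchanged in both directions:

// Inside the processor (AudioWorkletGlobalScope)
class MyProcessor extends AudioWorkletProcessor {
  constructor () {
    super()
    // Receive messages sent from the node on the main thread
    this.port.onmessage = (event) => {
      console.log(event.data) // e.g. { gain: 0.5 }
    }
    // Send a message back to the main thread
    this.port.postMessage('processor ready')
  }
  process (inputs, outputs) {
    return true
  }
}
registerProcessor('my-processor', MyProcessor)

// On the main thread, node.port is the other end of the channel
const node = new AudioWorkletNode(audioContext, 'my-processor')
node.port.onmessage = (event) => console.log(event.data) // 'processor ready'
node.port.postMessage({ gain: 0.5 })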
Methods
The AudioWorkletProcessor interface does not define any methods of its own. However, you must provide a process() method, which is called in order to process the audio stream.
Events
The AudioWorkletProcessor interface does not respond to any events.
Usage notes
Deriving classes
To define custom audio processing code you have to derive a class from the AudioWorkletProcessor interface. The derived class must define a process method, which is not defined on the interface itself. This method gets called for each block of 128 sample-frames and takes input and output arrays as well as the calculated values of custom AudioParams (if they are defined) as parameters. You can use the inputs and the audio parameter values to fill the outputs array, which by default holds silence.
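For instance, a minimal pass-through processor (the class and the registration name 'pass-through-processor' are made up here for illustration) could simply copy its input to its output:

class PassThroughProcessor extends AudioWorkletProcessor {
  process (inputs, outputs) {
    const input = inputs[0]
    const output = outputs[0]
    // Copy each 128-sample input channel to the matching output channel
    for (let channel = 0; channel < Math.min(input.length, output.length); channel++) {
      output[channel].set(input[channel])
    }
    // Returning true keeps the processor alive
    return true
  }
}
registerProcessor('pass-through-processor', PassThroughProcessor)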
Optionally, if you want custom AudioParams on your node, you can supply a parameterDescriptors property as a static getter on the processor. The array of AudioParamDescriptor-based objects returned is used internally to create the AudioParams during the instantiation of the AudioWorkletNode.
The resulting AudioParams reside in the parameters property of the node and can be automated using standard methods such as linearRampToValueAtTime. Their calculated values will be passed into the process() method of the processor for you to shape the node output accordingly.
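The sketch below ties these pieces together; the processor name 'gain-processor' and the customGain parameter are invented for the example, not part of the API.

// In the worklet file: declare a custom 'customGain' AudioParam
class GainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors () {
    return [{
      name: 'customGain',
      defaultValue: 1,
      minValue: 0,
      maxValue: 1,
      automationRate: 'a-rate'
    }]
  }
  process (inputs, outputs, parameters) {
    const input = inputs[0]
    const output = outputs[0]
    // parameters.customGain holds either 128 values (a-rate, automated)
    // or a single value (constant for the whole block)
    const gain = parameters.customGain
    for (let channel = 0; channel < Math.min(input.length, output.length); channel++) {
      for (let i = 0; i < input[channel].length; i++) {
        output[channel][i] = input[channel][i] * (gain.length > 1 ? gain[i] : gain[0])
      }
    }
    return true
  }
}
registerProcessor('gain-processor', GainProcessor)

On the main thread, the parameter can then be automated like any other AudioParam:

const gainNode = new AudioWorkletNode(audioContext, 'gain-processor')
gainNode.parameters.get('customGain')
  .linearRampToValueAtTime(0, audioContext.currentTime + 1)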
Processing audio
An outline of the steps involved in creating a custom audio processing algorithm:
- Create a separate file;
- In the file:
  - Extend the AudioWorkletProcessor class (see the "Deriving classes" section) and supply your own process() method in it;
  - Register the processor using the AudioWorkletGlobalScope.registerProcessor() method;
- Load the file using the addModule() method on your audio context's audioWorklet property;
- Create an AudioWorkletNode based on the processor. The processor will be instantiated internally by the AudioWorkletNode constructor.
- Connect the node to the other nodes.
Example
In the example below we create a custom AudioWorkletNode that outputs white noise.
First, we need to define a custom AudioWorkletProcessor, which will output white noise, and register it. Note that this should be done in a separate file.
// white-noise-processor.js
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    const output = outputs[0]
    // Fill every output channel with random samples in the range [-1, 1)
    output.forEach(channel => {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1
      }
    })
    // Returning true keeps the processor alive
    return true
  }
}

registerProcessor('white-noise-processor', WhiteNoiseProcessor)
Next, in our main script file we'll load the processor, create an instance of AudioWorkletNode, passing it the name of the processor, then connect the node to an audio graph.
const audioContext = new AudioContext()
await audioContext.audioWorklet.addModule('white-noise-processor.js')
const whiteNoiseNode = new AudioWorkletNode(audioContext, 'white-noise-processor')
whiteNoiseNode.connect(audioContext.destination)
Specifications
Specification | Status | Comment |
---|---|---|
Web Audio API AudioWorkletProcessor | Working Draft |
Browser compatibility
The compatibility table on this page is generated from structured data. If you'd like to contribute to the data, please check out https://github.com/mdn/browser-compat-data and send us a pull request.