BaseAudioContext - Web API Reference
The BaseAudioContext interface acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly; instead, you'd use its features via one of these two inheriting interfaces.

A BaseAudioContext can be a target of events, therefore it implements the EventTarget interface.
<div id="interfaceDiagram" style="display: inline-block; position: relative; width: 100%; padding-bottom: 11.666666666666666%; vertical-align: middle; overflow: hidden;"><svg style="display: inline-block; position: absolute; top: 0; left: 0;" viewbox="-50 0 600 70" preserveAspectRatio="xMinYMin meet"><a xlink:href="https://developer.mozilla.org/wiki/zh-CN/docs/Web/API/EventTarget" target="_top"><rect x="1" y="1" width="110" height="50" fill="#fff" stroke="#D4DDE4" stroke-width="2px" /><text x="56" y="30" font-size="12px" font-family="Consolas,Monaco,Andale Mono,monospace" fill="#4D4E53" text-anchor="middle" alignment-baseline="middle">EventTarget</text></a><polyline points="111,25 121,20 121,30 111,25" stroke="#D4DDE4" fill="none"/><line x1="121" y1="25" x2="151" y2="25" stroke="#D4DDE4"/><a xlink:href="/wiki/zh-CN/docs/Web/API/BaseAudioContext" target="_top"><rect x="151" y="1" width="160" height="50" fill="#F4F7F8" stroke="#D4DDE4" stroke-width="2px" /><text x="231" y="30" font-size="12px" font-family="Consolas,Monaco,Andale Mono,monospace" fill="#4D4E53" text-anchor="middle" alignment-baseline="middle">BaseAudioContext</text></a></svg></div>
a:hover text { fill: #0095DD; pointer-events: all;}
Properties

BaseAudioContext.audioWorklet (Read only)
- Returns the AudioWorklet object, used for creating custom AudioNodes with JavaScript processing.
BaseAudioContext.currentTime (Read only)
- Returns a double representing an ever-increasing hardware time in seconds, used for scheduling. It starts at 0.
BaseAudioContext.destination (Read only)
- Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
BaseAudioContext.listener (Read only)
- Returns the AudioListener object, used for 3D spatialization.
BaseAudioContext.sampleRate (Read only)
- Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample rate of an AudioContext cannot be changed.
BaseAudioContext.state (Read only)
- Returns the current state of the AudioContext. A sketch that reads each of these properties follows this list.
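To make the descriptions above concrete, here is a minimal sketch (ours, not from the original page) that reads each property; it assumes a browser environment where AudioContext is available:

var audioCtx = new AudioContext();

console.log(audioCtx.sampleRate);  // e.g. 44100 or 48000; fixed for the life of the context
console.log(audioCtx.currentTime); // seconds of elapsed hardware time, starting at 0
console.log(audioCtx.state);       // "suspended", "running", or "closed"

// destination is the graph's final output; connect nodes to it to be heard.
console.log(audioCtx.destination.numberOfInputs); // an AudioDestinationNode

Note that depending on the browser's autoplay policy, a newly created context may start in the "suspended" state.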
Event handlers
BaseAudioContext.onstatechange
- An event handler that runs when an event of type statechange has fired. This occurs when the AudioContext's state changes, due to the calling of one of the state-change methods (AudioContext.suspend, AudioContext.resume, or AudioContext.close). A short sketch follows.
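A minimal sketch (ours, not from the original reference) of wiring up onstatechange:

var audioCtx = new AudioContext();

// Log every state transition ("suspended", "running", "closed").
audioCtx.onstatechange = function() {
  console.log('Context state changed to: ' + audioCtx.state);
};

// Both of these calls fire a statechange event.
audioCtx.suspend().then(function() {
  return audioCtx.resume();
});

Because BaseAudioContext implements EventTarget, audioCtx.addEventListener('statechange', handler) works equally well.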
Methods
Also implements methods from the interface EventTarget.
BaseAudioContext.createBuffer()
- Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode (see Examples below).
BaseAudioContext.createConstantSource()
- Creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.
BaseAudioContext.createBufferSource()
- Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.
BaseAudioContext.createScriptProcessor()
- Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.
BaseAudioContext.createStereoPanner()
- Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
BaseAudioContext.createAnalyser()
- Creates an AnalyserNode, which can be used to expose audio time and frequency data and, for example, to create data visualisations.
BaseAudioContext.createBiquadFilter()
- Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
BaseAudioContext.createChannelMerger()
- Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
BaseAudioContext.createChannelSplitter()
- Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
BaseAudioContext.createConvolver()
- Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
BaseAudioContext.createDelay()
- Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
BaseAudioContext.createDynamicsCompressor()
- Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
BaseAudioContext.createGain()
- Creates a GainNode, which can be used to control the overall volume of the audio graph.
BaseAudioContext.createIIRFilter()
- Creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter that can be configured to act as various common filter types.
BaseAudioContext.createOscillator()
- Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
BaseAudioContext.createPanner()
- Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.
BaseAudioContext.createPeriodicWave()
- Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
BaseAudioContext.createWaveShaper()
- Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
BaseAudioContext.decodeAudioData()
- Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files (see Examples below).
BaseAudioContext.resume()
- Resumes the progression of time in an audio context that has previously been suspended/paused.
Examples
Basic audio context declaration:
var audioCtx = new AudioContext();
Cross-browser variant:
// Fall back to the prefixed constructor in older WebKit browsers.
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();

var oscillatorNode = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
var finish = audioCtx.destination;
// etc.
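The snippet above creates nodes but produces no sound; the following continuation (a sketch of ours, not from the original page) connects the graph and plays a two-second tone:

oscillatorNode.type = 'sine';          // waveform shape
oscillatorNode.frequency.value = 440;  // pitch in Hz (concert A)
gainNode.gain.value = 0.5;             // overall volume

// Wire the graph: oscillator -> gain -> destination.
oscillatorNode.connect(gainNode);
gainNode.connect(finish);

oscillatorNode.start();
oscillatorNode.stop(audioCtx.currentTime + 2); // stop two seconds later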
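Similarly, a hedged sketch of createBuffer() and createBufferSource(), filling a one-second mono buffer with white noise and playing it:

var frameCount = audioCtx.sampleRate; // one second's worth of frames
var noiseBuffer = audioCtx.createBuffer(1, frameCount, audioCtx.sampleRate);

// Fill the single channel with random samples in [-1, 1].
var data = noiseBuffer.getChannelData(0);
for (var i = 0; i < frameCount; i++) {
  data[i] = Math.random() * 2 - 1;
}

var noiseSource = audioCtx.createBufferSource();
noiseSource.buffer = noiseBuffer;
noiseSource.connect(audioCtx.destination);
noiseSource.start();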
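Finally, a sketch of decodeAudioData() following the XMLHttpRequest pattern described under Methods; 'track.mp3' is a hypothetical URL used only for illustration:

var request = new XMLHttpRequest();
request.open('GET', 'track.mp3', true); // hypothetical file
request.responseType = 'arraybuffer';

request.onload = function() {
  // Decode the complete file, then play the resulting AudioBuffer.
  audioCtx.decodeAudioData(request.response, function(decodedBuffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start();
  }, function(err) {
    console.error('decodeAudioData failed:', err);
  });
};

request.send();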
Specifications
Specification | Status | Comment
---|---|---
Web Audio API: BaseAudioContext | Working Draft |
Browser compatibility
Feature | Chrome | Edge | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit)
---|---|---|---|---|---|---
Basic support | (Yes) | (Yes) | (Yes) | No support | 15.0 [webkit] 22 | 6.0 [webkit]
baseLatency | 60 | ? | ? | ? | ? | ?
createConstantSource() | 56 | No support | 52 (52) | No support | 43 | No support
createStereoPanner() | 42 | (Yes) | 37.0 (37.0) | No support | No support | No support
onstatechange, state, suspend(), resume() | (Yes) | (Yes) | 40.0 (40.0) | No support | No support | 8.0
Unprefixed | (Yes) | (Yes) | | | |
Feature | Android Webview | Chrome for Android | Edge | Firefox Mobile (Gecko) | Firefox OS | IE Mobile | Opera Mobile | Safari Mobile
---|---|---|---|---|---|---|---|---
Basic support | (Yes) | (Yes) | (Yes) | (Yes) | 2.2 | No support | (Yes) | No support
baseLatency | 60 | 60 | ? | ? | ? | ? | ? | ?
createConstantSource() | 56 | 56 | No support | 52.0 (52) | No support | No support | No support | No support
createStereoPanner() | 42 | 42 | (Yes) | (Yes) | (Yes) | No support | No support | No support
onstatechange, state, suspend(), resume() | (Yes) | (Yes) | (Yes) | (Yes) | (Yes) | No support | No support | No support
Unprefixed | (Yes) | (Yes) | (Yes) | ? | ? | ? | 43 | ?