BaseAudioContext - Web APIs

The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces.

A BaseAudioContext can be a target of events; therefore, it implements the EventTarget interface.
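For example, both concrete context types expose the members documented below. A quick sketch (the OfflineAudioContext arguments here are arbitrary illustration values):

// Both concrete contexts inherit everything documented on this page.
const realtimeContext = new AudioContext();
// OfflineAudioContext(numberOfChannels, length, sampleRate)
const offlineContext = new OfflineAudioContext(2, 44100 * 40, 44100);

console.log(realtimeContext.sampleRate, offlineContext.sampleRate);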

  <div id="interfaceDiagram" style="display: inline-block; position: relative; width: 100%; padding-bottom: 11.666666666666666%; vertical-align: middle; overflow: hidden;"><svg style="display: inline-block; position: absolute; top: 0; left: 0;" viewbox="-50 0 600 70" preserveAspectRatio="xMinYMin meet"><a xlink:href="https://developer.mozilla.org/wiki/en-US/docs/Web/API/EventTarget" target="_top"><rect x="1" y="1" width="110" height="50" fill="#fff" stroke="#D4DDE4" stroke-width="2px" /><text  x="56" y="30" font-size="12px" font-family="Consolas,Monaco,Andale Mono,monospace" fill="#4D4E53" text-anchor="middle" alignment-baseline="middle">EventTarget</text></a><polyline points="111,25  121,20  121,30  111,25" stroke="#D4DDE4" fill="none"/><line x1="121" y1="25" x2="151" y2="25" stroke="#D4DDE4"/><a xlink:href="/wiki/en-US/docs/Web/API/BaseAudioContext" target="_top"><rect x="151" y="1" width="160" height="50" fill="#F4F7F8" stroke="#D4DDE4" stroke-width="2px" /><text  x="231" y="30" font-size="12px" font-family="Consolas,Monaco,Andale Mono,monospace" fill="#4D4E53" text-anchor="middle" alignment-baseline="middle">BaseAudioContext</text></a></svg></div>
  a:hover text { fill: #0095DD; pointer-events: all;}

Properties

BaseAudioContext.audioWorklet This is an experimental API that should not be used in production code. Read only Secure context
Returns the AudioWorklet object, which can be used to create and manage AudioNodes in which JavaScript code implementing the AudioWorkletProcessor interface is run in the background to process audio data.
BaseAudioContext.currentTime Read only
Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0.
BaseAudioContext.destination Read only
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
BaseAudioContext.listener Read only
Returns the AudioListener object, used for 3D spatialization.
BaseAudioContext.sampleRate Read only
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample-rate of an AudioContext cannot be changed.
BaseAudioContext.state Read only
Returns the current state of the AudioContext.
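
As a quick illustrative sketch, these read-only properties can be inspected directly on a concrete context:

const audioContext = new AudioContext();

console.log(audioContext.currentTime); // seconds elapsed on the context's clock
console.log(audioContext.sampleRate);  // e.g. 44100
console.log(audioContext.state);       // "suspended", "running", or "closed"
console.log(audioContext.destination); // the context's AudioDestinationNode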

Event handlers

BaseAudioContext.onstatechange
An event handler that runs when an event of type statechange has fired, i.e. when the context's state changes due to a call to one of the state-change methods (AudioContext.suspend, AudioContext.resume, or AudioContext.close).
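
Because BaseAudioContext implements EventTarget, the same event can also be observed with addEventListener; a minimal sketch:

const audioContext = new AudioContext();

audioContext.addEventListener("statechange", () => {
  // Fires whenever suspend(), resume(), or close() changes the state.
  console.log(`State is now: ${audioContext.state}`);
});

audioContext.suspend(); // logs "State is now: suspended"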

Methods

Also implements methods from the interface EventTarget.

BaseAudioContext.createAnalyser()
Creates an AnalyserNode, which can be used to expose audio time and frequency data, for example to create data visualisations.
BaseAudioContext.createBiquadFilter()
Creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
BaseAudioContext.createBuffer()
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.
BaseAudioContext.createBufferSource()
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.
BaseAudioContext.createConstantSource()
Creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.
BaseAudioContext.createChannelMerger()
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
BaseAudioContext.createChannelSplitter()
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
BaseAudioContext.createConvolver()
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
BaseAudioContext.createDelay()
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
BaseAudioContext.createDynamicsCompressor()
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
BaseAudioContext.createGain()
Creates a GainNode, which can be used to control the overall volume of the audio graph.
BaseAudioContext.createIIRFilter()
Creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter that can be configured as several different common filter types.
BaseAudioContext.createOscillator()
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
BaseAudioContext.createPanner()
Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.
BaseAudioContext.createPeriodicWave()
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
BaseAudioContext.createScriptProcessor() This deprecated API should no longer be used, but will probably still work.
Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.
BaseAudioContext.createStereoPanner()
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
BaseAudioContext.createWaveShaper()
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
BaseAudioContext.decodeAudioData()
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.
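
For illustration, here is a sketch using the promise-based form of decodeAudioData together with fetch rather than XMLHttpRequest; "viper.ogg" is a placeholder URL, not a file referenced by the original page:

const audioContext = new AudioContext();

// Fetch a complete audio file, decode it, and play it through the graph.
fetch("viper.ogg") // placeholder URL
  .then((response) => response.arrayBuffer())
  .then((arrayBuffer) => audioContext.decodeAudioData(arrayBuffer))
  .then((audioBuffer) => {
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioContext.destination);
    source.start();
  });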

Examples

Basic audio context declaration:

const audioContext = new AudioContext();

Cross-browser variant:

const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioContext = new AudioContext();

// Create some nodes from the context and grab a reference to its destination:
const oscillatorNode = audioContext.createOscillator();
const gainNode = audioContext.createGain();
const finish = audioContext.destination;
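
The example stops after creating the nodes; as a minimal continuation (an assumed addition, not part of the original example), the nodes could be connected and started like this:

// Wire oscillator -> gain -> speakers, then play a quiet tone for one second.
oscillatorNode.connect(gainNode);
gainNode.connect(finish);
gainNode.gain.value = 0.1; // keep the volume low for the demo
oscillatorNode.start();
oscillatorNode.stop(audioContext.currentTime + 1);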

Specifications

Specification: Web Audio API, the definition of 'BaseAudioContext' in that specification.
Status: Working Draft

