DefinitelyTyped

Variables

AudioContext: AudioContext

This interface represents a set of AudioNode objects and their connections. It allows for arbitrary routing of signals to the AudioDestinationNode (what the user ultimately hears). Nodes are created from the context and are then connected together. In most use cases, only a single AudioContext is used per document. An AudioContext is constructed as follows:

var context = new AudioContext();

public activeSourceCount: number

The number of AudioBufferSourceNodes that are currently playing.

public currentTime: number

This is a time in seconds which starts at zero when the context is created and increases in real-time. All scheduled times are relative to it. This is not a "transport" time which can be started, paused, and re-positioned. It is always moving forward. A GarageBand-like timeline transport system can be very easily built on top of this (in JavaScript). This time corresponds to an ever-increasing hardware timestamp.
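
For example, a previously created source node (hypothetical here) can be scheduled slightly ahead of the current time:

var startTime = context.currentTime + 0.1; // 100 ms ahead of "now"
source.start(startTime); // start() is assumed here; older implementations expose noteOn() instead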

public destination: AudioDestinationNode

An AudioDestinationNode with a single input representing the final destination for all audio (to be rendered to the audio hardware). All AudioNodes actively rendering audio will directly or indirectly connect to destination.

public listener: AudioListener

An AudioListener which is used for 3D spatialization.

public sampleRate: number

The sample rate (in sample-frames per second) at which the AudioContext handles audio. It is assumed that all AudioNodes in the context run at this rate. In making this assumption, sample-rate converters or "varispeed" processors are not supported in real-time processing.

public createAnalyser(): AnalyserNode

Creates an AnalyserNode.

Returns

AnalyserNode
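
A minimal usage sketch; fftSize, frequencyBinCount and getByteFrequencyData() belong to the AnalyserNode interface described elsewhere in the specification:

var analyser = context.createAnalyser();
analyser.fftSize = 2048;
var frequencyData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(frequencyData); // fills the array with the current frequency-domain data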

public createBiquadFilter(): BiquadFilterNode

Creates a BiquadFilterNode representing a second order filter which can be configured as one of several common filter types.

Returns

BiquadFilterNode
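
A minimal sketch; frequency and Q are AudioParams, and the source node is assumed to have been created elsewhere:

var filter = context.createBiquadFilter();
filter.frequency.value = 1000; // cutoff / centre frequency in Hz
filter.Q.value = 1;
source.connect(filter);
filter.connect(context.destination);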

public createBuffer(numberOfChannels: number, length: number, sampleRate: number): AudioBuffer

Creates an AudioBuffer of the given size. The audio data in the buffer will be zero-initialized (silent). An exception will be thrown if the numberOfChannels or sampleRate are out-of-bounds.

Parameters

  • numberOfChannels: number

    how many channels the buffer will have. An implementation must support at least 32 channels.

  • length: number

    the size of the buffer in sample-frames.

  • sampleRate: number

    the sample-rate of the linear PCM audio data in the buffer in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.

Returns

AudioBuffer
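
For example, a two-second stereo buffer can be created at the context's sample-rate and filled with noise through getChannelData():

var noiseBuffer = context.createBuffer(2, 2 * context.sampleRate, context.sampleRate);
var channelData = noiseBuffer.getChannelData(0); // Float32Array for the first channel
for (var i = 0; i < channelData.length; i++) {
    channelData[i] = Math.random() * 2 - 1; // white noise in the range [-1, 1]
}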

public createBuffer(buffer: ArrayBuffer, mixToMono: boolean): AudioBuffer

Creates an AudioBuffer given the audio file data contained in the ArrayBuffer. The ArrayBuffer can, for example, be loaded from an XMLHttpRequest's response attribute after setting the responseType to "arraybuffer". Audio file data can be in any of the formats supported by the audio element. The following steps must be performed:

  1. Decode the encoded buffer from the ArrayBuffer into linear PCM. If a decoding error is encountered due to the audio format not being recognized or supported, or because of corrupted/unexpected/inconsistent data then return NULL (and these steps will be terminated).
  2. If mixToMono is true, then mixdown the decoded linear PCM data to mono.
  3. Take the decoded (possibly mixed-down) linear PCM audio data, and resample it to the sample-rate of the AudioContext if it is different from the sample-rate of buffer. The final result will be stored in an AudioBuffer and returned as the result of this method.

Parameters

  • buffer: ArrayBuffer

    the audio file data (for example from a .wav file).

  • mixToMono: boolean

    whether a mixdown to mono will be performed. Normally, this would not be set.

Returns

AudioBuffer
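
A minimal sketch; the ArrayBuffer (hypothetical here) is assumed to have been loaded elsewhere, and decodeAudioData() below is generally preferred because it does not block the main thread:

var decodedBuffer = context.createBuffer(arrayBufferFromRequest, false); // arrayBufferFromRequest loaded elsewhere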

public createBufferSource(): AudioBufferSourceNode

Creates an AudioBufferSourceNode.

Returns

AudioBufferSourceNode
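
A minimal playback sketch; the AudioBuffer is assumed to have been created or decoded elsewhere, and start() is assumed here (older implementations expose noteOn() instead):

var source = context.createBufferSource();
source.buffer = decodedBuffer; // an AudioBuffer created or decoded elsewhere
source.connect(context.destination);
source.start(0);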

public createChannelMerger(numberOfInputs?: number): ChannelMergerNode

Creates a ChannelMergerNode representing a channel merger. An exception will be thrown for invalid parameter values.

Parameters

  • numberOfInputs?: number optional

    the number of inputs. Values of up to 32 must be supported. If not specified, then 6 will be used.

Returns

ChannelMergerNode

public createChannelSplitter(numberOfOutputs?: number): ChannelSplitterNode

Creates a ChannelSplitterNode representing a channel splitter. An exception will be thrown for invalid parameter values.

Parameters

  • numberOfOutputs?: number optional

    the number of outputs. Values of up to 32 must be supported. If not specified, then 6 will be used.

Returns

ChannelSplitterNode
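
For example, a splitter and a merger (created with createChannelMerger() above) can be combined to swap the channels of a stereo source; the source node is assumed to exist:

var splitter = context.createChannelSplitter(2);
var merger = context.createChannelMerger(2);
source.connect(splitter);
splitter.connect(merger, 0, 1); // the source's left channel feeds the merger's second input
splitter.connect(merger, 1, 0); // the source's right channel feeds the merger's first input
merger.connect(context.destination);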

public createConvolver(): ConvolverNode

Creates a ConvolverNode.

Returns

ConvolverNode
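
A minimal sketch; the impulse-response AudioBuffer and the source node are assumed to have been created elsewhere:

var convolver = context.createConvolver();
convolver.buffer = impulseResponseBuffer; // an AudioBuffer holding the impulse response
source.connect(convolver);
convolver.connect(context.destination);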

public createDelay(maxDelayTime?: number): DelayNode

Creates a DelayNode representing a variable delay line. The initial default delay time will be 0 seconds.

Parameters

  • maxDelayTime?: number optional

    the maximum delay time in seconds allowed for the delay line. If specified, this value must be greater than zero and less than three minutes or a NOT_SUPPORTED_ERR exception will be thrown.

Returns

DelayNode
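
A minimal sketch; delayTime is an AudioParam, and the source node is assumed to exist:

var delay = context.createDelay(5.0); // allow delays of up to 5 seconds
delay.delayTime.value = 0.25;         // 250 ms of delay
source.connect(delay);
delay.connect(context.destination);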

public createDynamicsCompressor(): DynamicsCompressorNode

Creates a DynamicsCompressorNode.

Returns

DynamicsCompressorNode
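
A minimal sketch; threshold and ratio are AudioParams, and the source node is assumed to exist:

var compressor = context.createDynamicsCompressor();
compressor.threshold.value = -24; // level in dB above which compression starts
compressor.ratio.value = 12;
source.connect(compressor);
compressor.connect(context.destination);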

public createGain(): GainNode

Creates a GainNode.

Returns

GainNode
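
A minimal sketch; gain is an AudioParam, and the source node is assumed to exist:

var gainNode = context.createGain();
gainNode.gain.value = 0.5; // attenuate the signal by half
source.connect(gainNode);
gainNode.connect(context.destination);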

public createMediaElementSource(mediaElement: HTMLMediaElement): MediaElementAudioSourceNode

Creates a MediaElementAudioSourceNode given an HTMLMediaElement. As a consequence of calling this method, audio playback from the HTMLMediaElement will be re-routed into the processing graph of the AudioContext.

Parameters

  • mediaElement: HTMLMediaElement

Returns

MediaElementAudioSourceNode
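
A minimal sketch, assuming an audio element is present in the document:

var audioElement = document.querySelector('audio');
var elementSource = context.createMediaElementSource(audioElement);
elementSource.connect(context.destination); // the element's audio now plays through the graph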

public createMediaStreamSource(mediaStream: any): MediaStreamAudioSourceNode

Creates a MediaStreamAudioSourceNode given a MediaStream. As a consequence of calling this method, audio playback from the MediaStream will be re-routed into the processing graph of the AudioContext.

Parameters

  • mediaStream: any

Returns

MediaStreamAudioSourceNode
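
A minimal sketch; the MediaStream is assumed to have been obtained elsewhere (for example from getUserMedia()):

var streamSource = context.createMediaStreamSource(mediaStream); // mediaStream obtained elsewhere
streamSource.connect(context.destination);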

public createOscillator(): OscillatorNode

Creates an OscillatorNode.

Returns

OscillatorNode
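
A minimal sketch; frequency is an AudioParam, and start() is assumed here (older implementations expose noteOn() instead):

var oscillator = context.createOscillator();
oscillator.frequency.value = 440; // A4
oscillator.connect(context.destination);
oscillator.start(0);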

public createPanner(): PannerNode

Creates a PannerNode.

Returns

PannerNode
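
A minimal sketch; setPosition() places the source relative to the context's listener, and the source node is assumed to exist:

var panner = context.createPanner();
panner.setPosition(1, 0, 0); // one unit to the listener's right
source.connect(panner);
panner.connect(context.destination);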

public createScriptProcessor(bufferSize: number, numberOfInputChannels?: number, numberOfOutputChannels?: number): ScriptProcessorNode

Creates a ScriptProcessorNode for direct audio processing using JavaScript. An exception will be thrown if bufferSize or numberOfInputChannels or numberOfOutputChannels are outside the valid range. It is invalid for both numberOfInputChannels and numberOfOutputChannels to be zero.

Parameters

  • bufferSize: number

    the buffer size in units of sample-frames. It must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the onaudioprocess event handler is called and how many sample-frames need to be processed each call. Lower values for bufferSize will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. The value chosen must carefully balance between latency and audio quality.

  • numberOfInputChannels?: number optional

    (defaults to 2) the number of channels for this node's input. Values of up to 32 must be supported.

  • numberOfOutputChannels?: number optional

    (defaults to 2) the number of channels for this node's output. Values of up to 32 must be supported.

Returns

ScriptProcessorNode
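
A minimal sketch; the onaudioprocess handler receives an event whose inputBuffer and outputBuffer are AudioBuffers, and the source node is assumed to exist:

var processor = context.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = function (event) {
    var input = event.inputBuffer.getChannelData(0);
    var output = event.outputBuffer.getChannelData(0);
    for (var i = 0; i < input.length; i++) {
        output[i] = input[i] * 0.5; // pass the input through at half volume
    }
};
source.connect(processor);
processor.connect(context.destination);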

public createWaveShaper(): WaveShaperNode

Creates a WaveShaperNode representing a non-linear distortion.

Returns

WaveShaperNode
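
A minimal sketch; curve is a Float32Array through which input samples are mapped, and the source node is assumed to exist:

var shaper = context.createWaveShaper();
shaper.curve = new Float32Array([-0.5, 0, 0.5]); // a simple soft-clipping curve
source.connect(shaper);
shaper.connect(context.destination);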

public createWaveTable(real: any, imag: any): WaveTable

Creates a WaveTable representing a waveform containing arbitrary harmonic content. The real and imag parameters must be of type Float32Array of equal lengths greater than zero and less than or equal to 4096 or an exception will be thrown. These parameters specify the Fourier coefficients of a Fourier series representing the partials of a periodic waveform. The created WaveTable will be used with an OscillatorNode and will represent a normalized time-domain waveform having maximum absolute peak value of 1. Another way of saying this is that the generated waveform of an OscillatorNode will have maximum peak value at 0dBFS. Conveniently, this corresponds to the full-range of the signal values used by the Web Audio API. Because the WaveTable will be normalized on creation, the real and imag parameters represent relative values.

Parameters

  • real: any

    an array of cosine terms (traditionally the A terms). In audio terminology, the first element (index 0) is the DC-offset of the periodic waveform and is usually set to zero. The second element (index 1) represents the fundamental frequency. The third element represents the first overtone, and so on.

  • imag: any

    an array of sine terms (traditionally the B terms). The first element (index 0) should be set to zero (and will be ignored) since this term does not exist in the Fourier series. The second element (index 1) represents the fundamental frequency. The third element represents the first overtone, and so on.

Returns

WaveTable
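
For example, a wave table containing only the fundamental sine partial could be built as follows; using the table with an oscillator via setWaveTable() is assumed from the OscillatorNode interface in the same draft:

var real = new Float32Array(2);
var imag = new Float32Array(2);
real[0] = 0; imag[0] = 0; // index 0: DC offset / unused sine term
real[1] = 0; imag[1] = 1; // index 1: the fundamental, a pure sine partial
var waveTable = context.createWaveTable(real, imag);
// oscillator.setWaveTable(waveTable); // oscillator created elsewhere with createOscillator()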

public decodeAudioData(audioData: ArrayBuffer, successCallback: any, errorCallback?: any)

Asynchronously decodes the audio file data contained in the ArrayBuffer. The ArrayBuffer can, for example, be loaded from an XMLHttpRequest's response attribute after setting the responseType to "arraybuffer". Audio file data can be in any of the formats supported by the audio element. The decodeAudioData() method is preferred over the createBuffer() from ArrayBuffer method because it is asynchronous and does not block the main JavaScript thread.

The following steps must be performed:

  1. Temporarily neuter the audioData ArrayBuffer in such a way that JavaScript code may not access or modify the data.
  2. Queue a decoding operation to be performed on another thread.
  3. The decoding thread will attempt to decode the encoded audioData into linear PCM. If a decoding error is encountered due to the audio format not being recognized or supported, or because of corrupted/unexpected/inconsistent data then the audioData neutered state will be restored to normal and the errorCallback will be scheduled to run on the main thread's event loop and these steps will be terminated.
  4. The decoding thread will take the result, representing the decoded linear PCM audio data, and resample it to the sample-rate of the AudioContext if it is different from the sample-rate of audioData. The final result (after possibly sample-rate converting) will be stored in an AudioBuffer.
  5. The audioData neutered state will be restored to normal.
  6. The successCallback function will be scheduled to run on the main thread's event loop given the AudioBuffer from step (4) as an argument.

Parameters

  • audioData: ArrayBuffer
  • successCallback: any
  • errorCallback?: any optional
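
A minimal sketch; the URL is hypothetical, the successCallback receives the decoded AudioBuffer, and the errorCallback is invoked on failure:

var request = new XMLHttpRequest();
request.open('GET', 'sound.wav', true); // hypothetical URL
request.responseType = 'arraybuffer';
request.onload = function () {
    context.decodeAudioData(request.response,
        function (decodedBuffer) { /* use the resulting AudioBuffer */ },
        function () { /* decoding failed */ });
};
request.send();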

webkitAudioContext: new() => AudioContext

constructor(): AudioContext

Returns

AudioContext

webkitOfflineAudioContext: new(numberOfChannels: number, length: number, sampleRate: number) => OfflineAudioContext

constructor(numberOfChannels: number, length: number, sampleRate: number): OfflineAudioContext

Returns

OfflineAudioContext
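
A minimal construction sketch, following the declared signature:

var offlineContext = new webkitOfflineAudioContext(2, 10 * 44100, 44100); // 2 channels, 10 seconds at 44.1 kHz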