Occurs when the state of processing the audio buffer in BufferSourceAudioTrack changes.
The state of processing the audio buffer:
"stopped"
: The SDK stops processing the audio buffer. Reasons may include: the SDK finishes processing the audio buffer, or you call stopProcessAudioBuffer.
"paused"
: The SDK pauses the processing of the audio buffer.
"playing"
: The SDK is processing the audio buffer.
The current state of audio processing, such as start, pause, or stop.
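For example, you can listen for this event to react when playback of the audio file ends. A minimal sketch, assuming an existing BufferSourceAudioTrack named track and a listener that receives the new state as its only argument:
track.on("source-state-change", (currentState) => {
  if (currentState === "stopped") {
    console.log("The SDK has stopped processing the audio buffer");
  }
});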
The total duration of the audio (seconds).
Whether a media track is playing on the webpage:
true
: The media track is playing on the webpage.
false
: The media track is not playing on the webpage.
Since
4.10.0
The destination of the current processing pipeline on the local audio track.
The source specified when creating an audio track.
The type of a media track:
"audio"
: Audio track.
"video"
: Video track.
Closes a local track and releases the audio and video resources that it occupies.
Once you close a local track, you can no longer reuse it.
Gets the progress (seconds) of the audio buffer processing.
The progress (seconds) of the audio buffer processing.
Gets all the listeners for a specified event.
The event name.
Gets a MediaStreamTrack object.
A MediaStreamTrack object.
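For example, you can feed the returned MediaStreamTrack into the Web Audio API for custom analysis. A sketch using only standard Web Audio calls, assuming an existing local audio track named track:
const mediaStreamTrack = track.getMediaStreamTrack();
const audioContext = new AudioContext();
// Wrap the track in a MediaStream and connect it to an analyser node
const sourceNode = audioContext.createMediaStreamSource(new MediaStream([mediaStreamTrack]));
const analyser = audioContext.createAnalyser();
sourceNode.connect(analyser);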
Gets the statistics of a local audio track.
DEPRECATED from v4.1.0. Use AgoraRTCClient.getLocalVideoStats and AgoraRTCClient.getLocalAudioStats instead.
Gets the ID of a media track, a unique identifier generated by the SDK.
The media track ID.
Gets the label of a local track.
The label that the SDK returns may include:
- The MediaDeviceInfo.label property, if the track is created by calling createMicrophoneAudioTrack or createCameraVideoTrack.
- The sourceId property, if the track is created by calling createScreenVideoTrack.
- The MediaStreamTrack.label property, if the track is created by calling createCustomAudioTrack or createCustomVideoTrack.
Gets the audio level of a local audio track.
The audio level. The value range is [0,1]. 1 is the highest audio level.
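For example, you can poll this method to drive a simple volume indicator. A minimal sketch, assuming an existing local audio track named track:
// Check the audio level every 500 ms
const timer = setInterval(() => {
  const level = track.getVolumeLevel(); // A value in the range [0, 1]
  console.log("Current audio level:", level);
}, 500);
// Call clearInterval(timer) when the indicator is no longer needed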
Removes the listener for a specified event.
The event name.
The callback that corresponds to the event listener.
Listens for a specified event once.
When the specified event happens, the SDK triggers the callback that you pass and then removes the listener.
The event name.
The callback to trigger.
Pauses processing the audio buffer.
Inserts a Processor into the local audio track.
The Processor instance. Each extension has a corresponding type of Processor.
The Processor instance.
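A typical use is chaining an extension's processor between the track and its processorDestination. A sketch, where processor stands for a Processor instance obtained from an extension:
// Pipeline: track -> processor -> processorDestination
track.pipe(processor).pipe(track.processorDestination);
Calling unpipe later removes the inserted Processor from the pipeline.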
Plays a local audio track.
When playing an audio track, you do not need to pass any DOM element.
Removes all listeners for a specified event.
The event name. If left empty, all listeners for all events are removed.
Resumes processing the audio buffer.
Jumps to a specified time point.
Note: This method is not supported on iOS.
The specified time point (seconds).
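For example, to jump to the midpoint of the audio file, you can combine this method with the duration property and getCurrentTime. A minimal sketch, assuming an existing BufferSourceAudioTrack named track:
// Seek to the midpoint of the audio file
track.seekAudioBuffer(track.duration / 2);
console.log("Current position:", track.getCurrentTime(), "seconds");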
Sets the callback for getting raw audio data in PCM format.
After you successfully set the callback, the SDK constantly returns the audio frames of a local audio track in this callback by using AudioBuffer.
You can set the frameSize parameter to determine the frame size in each callback, which affects the interval between the callbacks. The larger the frame size, the longer the interval between them.
track.setAudioFrameCallback((buffer) => {
  for (let channel = 0; channel < buffer.numberOfChannels; channel += 1) {
    // Float32Array with PCM data
    const currentChannelData = buffer.getChannelData(channel);
    console.log("PCM data in channel", channel, currentChannelData);
  }
}, 2048);
// ....
// Stop getting the raw audio data
track.setAudioFrameCallback(null);
The callback function for receiving the AudioBuffer object. If you set audioBufferCallback as null, the SDK stops getting raw audio data.
The number of samples of each audio channel that an AudioBuffer object contains. You can set frameSize as 256, 512, 1024, 2048, 4096, 8192, or 16384. The default value is 4096.
Since
4.0.0
Enables/Disables the track.
After a track is disabled, the SDK stops playing and publishing the track.
- Disabling a track does not trigger the LocalTrack.on("track-ended") event.
- If a track is published, disabling this track triggers the user-unpublished event on the remote client, and re-enabling this track triggers the user-published event.
- Do not call setEnabled and setMuted together.
Whether to enable the track:
true
: Enable the track.
false
: Disable the track.
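A minimal sketch of toggling a track with this method, assuming an existing local audio track named track:
// Disable the track: the SDK stops playing and publishing it
await track.setEnabled(false);
// Re-enable the track later
await track.setEnabled(true);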
Sends or stops sending the media data of the track.
Since
4.6.0
If the track is published, a successful call of setMuted(true) triggers the user-unpublished event on the remote client, and a successful call of setMuted(false) triggers the user-published event.
- Calling setMuted(true) does not stop capturing audio or video and takes a shorter time to take effect than setEnabled. For details, see What are the differences between setEnabled and setMuted?.
- Do not call setEnabled and setMuted together.
Whether to stop sending the media data of the track:
true
: Stop sending the media data of the track.
false
: Resume sending the media data of the track.
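A sketch of pausing and resuming the outgoing media with this method, assuming a published local audio track named track:
// Stop sending media data; capturing continues
await track.setMuted(true);
// Resume sending media data
await track.setMuted(false);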
Since
4.1.0
Note:
- As of v4.7.0, this method no longer takes effect. Use IRemoteAudioTrack.setPlaybackDevice instead.
- This method supports Chrome on desktop devices only. Other browsers throw a NOT_SUPPORTED error when calling this method.
Sets the playback device (speaker) for the remote audio stream.
The device ID, which can be retrieved by calling getPlaybackDevices.
Sets the volume of a local audio track.
The volume. The value ranges from 0 (mute) to 1000 (maximum). A value of 100 is the original volume. The volume change may not be obvious to the human ear. If the local track has been published, setting the volume affects the volume heard by remote users.
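For example, to play a local audio track at half of its original volume (a minimal sketch):
// 100 is the original volume, so 50 plays at half volume
track.setVolume(50);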
Starts processing the audio buffer.
Starting to process the audio buffer means that the processing unit in the SDK has received the audio data. If the audio track has been published, the remote user can hear the audio. Whether the local user can hear the audio depends on whether play has been called to send the audio data to the sound card.
Options for processing the audio buffer. See AudioSourceOptions.
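A sketch of the typical flow, assuming an already-joined AgoraRTCClient named client, an existing BufferSourceAudioTrack named bufferTrack, and that AudioSourceOptions accepts a loop flag as in the SDK typings:
// Start processing the audio buffer and loop it
bufferTrack.startProcessAudioBuffer({ loop: true });
// Publish so that remote users can hear the audio
await client.publish(bufferTrack);
// Play locally so that the local user can hear it as well
bufferTrack.play();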
Stops playing the media track.
Stops processing the audio buffer.
Since
4.10.0
Removes the Processor inserted into the local audio track.
Inherited from LocalAudioTrack, BufferSourceAudioTrack is an interface for the audio from a local audio file. It adds several functions for controlling the processing of the audio buffer, such as starting processing, stopping processing, and seeking to a specified time location.
You can create an audio track from an audio file by calling AgoraRTC.createBufferSourceAudioTrack.
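A minimal end-to-end sketch, assuming the 4.x package name agora-rtc-sdk-ng and a hypothetical file URL as the source:
import AgoraRTC from "agora-rtc-sdk-ng";

// Create a buffer source audio track from an audio file URL
const bufferTrack = await AgoraRTC.createBufferSourceAudioTrack({
  source: "https://example.com/audio.mp3",
});
// The track stays silent until the audio buffer is processed
bufferTrack.startProcessAudioBuffer();
bufferTrack.play();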