IAudioFrameObserver
The audio frame observer.
You can call registerAudioFrameObserver to register or unregister the IAudioFrameObserver audio frame observer.
onEarMonitoringAudioFrame
Gets the in-ear monitoring audio frame.
public abstract boolean onEarMonitoringAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
- Before joining the channel, you need to call registerAudioFrameObserver to register the audio frame observer, that is, to register the onEarMonitoringAudioFrame callback.
- To ensure that the obtained in-ear monitoring audio data meets expectations, Agora recommends that you choose one of the following two methods to set the in-ear monitoring audio data format:
- If you call setEarMonitoringAudioFrameParameters to set the acquired audio data format, the SDK calculates the sampling interval according to the parameters in this method, and triggers the onEarMonitoringAudioFrame callback according to the sampling interval.
- If you set the acquired audio data format in the return value of the getEarMonitoringAudioParams callback, the SDK calculates the sampling interval according to the return value of the callback, and triggers the onEarMonitoringAudioFrame callback according to the sampling interval.
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels:
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- Recording sample rate (Hz).
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the external audio frame. You can use this timestamp to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
Reserved for future use.
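When channels is 2, the samples in buffer are interleaved (left, right, left, right, and so on). The following self-contained sketch shows one way to split such a frame into per-channel arrays; the little-endian 16-bit PCM layout and the helper names are illustrative assumptions, not part of the SDK:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class Deinterleave {
    // Split an interleaved stereo frame into left/right sample arrays.
    static short[][] split(ByteBuffer buffer, int samplesPerChannel) {
        short[] left = new short[samplesPerChannel];
        short[] right = new short[samplesPerChannel];
        for (int i = 0; i < samplesPerChannel; i++) {
            left[i] = buffer.getShort();   // even positions -> left
            right[i] = buffer.getShort();  // odd positions -> right
        }
        return new short[][] { left, right };
    }

    public static void main(String[] args) {
        int samplesPerChannel = 4, channels = 2, bytesPerSample = 2;
        // Allocation mirrors the documented formula:
        // buffer size = samplesPerChannel x channels x bytesPerSample
        ByteBuffer buffer = ByteBuffer.allocate(samplesPerChannel * channels * bytesPerSample)
                                      .order(ByteOrder.LITTLE_ENDIAN);
        for (short s = 0; s < samplesPerChannel * channels; s++) buffer.putShort(s); // L0,R0,L1,R1,...
        buffer.flip();
        short[][] lr = split(buffer, samplesPerChannel);
        System.out.println(Arrays.toString(lr[0])); // [0, 2, 4, 6]
        System.out.println(Arrays.toString(lr[1])); // [1, 3, 5, 7]
    }
}
```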
onMixedAudioFrame
Retrieves the mixed captured and playback audio frame.
public abstract boolean onMixedAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
- Before joining the channel, you need to call registerAudioFrameObserver to register the audio frame observer, that is, to register the onMixedAudioFrame callback.
- This callback only reports single-channel data.
- To ensure that the data format of mixed captured and playback audio frame meets the expectations, Agora recommends that you choose one of the following two ways to set the data format:
- If you call setMixedAudioFrameParameters to set the audio data format, the SDK calculates the sampling interval according to the parameters in this method, and triggers the onMixedAudioFrame callback according to the sampling interval.
- If you set the audio data format in the return value of the getMixedAudioParams callback, the SDK calculates the sampling interval according to the return value of the callback, and triggers the onMixedAudioFrame callback according to the sampling interval.
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels:
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- Recording sample rate (Hz).
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the external audio frame. You can use this timestamp to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
Reserved for future use.
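Because this callback reports single-channel data, samplesPerChannel is the total number of samples in the frame, so the frame's duration follows directly from it and samplesPerSec. A hypothetical helper illustrating the arithmetic (the class and method names are assumptions for illustration):

```java
public class FrameDuration {
    // Duration of one mono frame: samplesPerChannel / samplesPerSec, in milliseconds.
    static double frameDurationMs(int samplesPerChannel, int samplesPerSec) {
        return 1000.0 * samplesPerChannel / samplesPerSec;
    }

    public static void main(String[] args) {
        // 441 samples at 44100 Hz is 10 ms of audio
        System.out.println(frameDurationMs(441, 44100)); // 10.0
    }
}
```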
onPlaybackAudioFrame
Gets the raw audio frame for playback.
public abstract boolean onPlaybackAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
- Before joining the channel, you need to call registerAudioFrameObserver to register the audio frame observer, that is, to register the onPlaybackAudioFrame callback.
- To ensure that the data format of audio frame for playback is as expected, Agora recommends that you choose one of the following two methods to set the audio data format:
- If you call setPlaybackAudioFrameParameters to set the audio data format, the SDK calculates the sampling interval according to the parameters in this method, and triggers the onPlaybackAudioFrame callback according to the sampling interval.
- If you set the audio data format in the return value of the getPlaybackAudioParams callback, the SDK calculates the sampling interval according to the return value of the callback, and triggers the onPlaybackAudioFrame callback according to the sampling interval.
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels:
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- Recording sample rate (Hz).
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the external audio frame. You can use this timestamp to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
Reserved for future use.
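If you post-process the playback frame, a common first step is converting the 16-bit samples to floats in [-1, 1]. A minimal self-contained sketch, assuming little-endian 16-bit PCM (bytesPerSample == 2); the class and method names are illustrative, not part of the SDK:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ToFloat {
    // Normalize 16-bit PCM samples to [-1, 1] by dividing by 32768.
    static float[] toFloat(ByteBuffer buffer, int totalSamples) {
        float[] out = new float[totalSamples];
        for (int i = 0; i < totalSamples; i++) out[i] = buffer.getShort() / 32768f;
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort((short) 16384).putShort((short) -32768);
        buf.flip();
        float[] f = toFloat(buf, 2);
        System.out.println(f[0]); // 0.5
        System.out.println(f[1]); // -1.0
    }
}
```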
onPlaybackAudioFrameBeforeMixing
Retrieves the audio frame of a specified user before mixing.
public abstract boolean onPlaybackAudioFrameBeforeMixing(int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
Parameters
- userId
- The user ID of the specified user.
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels:
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- Recording sample rate (Hz).
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the external audio frame. You can use this timestamp to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
Reserved for future use.
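Since this callback delivers one user's audio per invocation, per-user processing typically keys off userId. The sketch below tallies received bytes per user using the documented buffer size formula; the tally logic and names are illustrative assumptions, not part of the SDK:

```java
import java.util.HashMap;
import java.util.Map;

public class PerUserTally {
    // Accumulate bytes received per user; size follows
    // buffer size = samplesPerChannel x channels x bytesPerSample.
    static void onFrame(Map<Integer, Long> tally, int userId,
                        int samplesPerChannel, int channels, int bytesPerSample) {
        long size = (long) samplesPerChannel * channels * bytesPerSample;
        tally.merge(userId, size, Long::sum);
    }

    public static void main(String[] args) {
        Map<Integer, Long> tally = new HashMap<>();
        onFrame(tally, 42, 480, 1, 2); // 10 ms of 48 kHz mono 16-bit from user 42
        onFrame(tally, 42, 480, 1, 2);
        onFrame(tally, 7, 480, 2, 2);  // one stereo frame from user 7
        System.out.println(tally.get(42)); // 1920
        System.out.println(tally.get(7));  // 1920
    }
}
```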
onRecordAudioFrame
Gets the captured audio frame.
public abstract boolean onRecordAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type);
- Before joining the channel, you need to call registerAudioFrameObserver to register the audio frame observer, that is, to register the onRecordAudioFrame callback.
- To ensure that the data format of captured audio frame is as expected, Agora recommends that you choose one of the following two methods to set the audio data format:
- If you call setRecordingAudioFrameParameters to set the acquired audio data format, the SDK calculates the sampling interval according to the parameters in this method, and triggers the onRecordAudioFrame callback according to the sampling interval.
- If you set the acquired audio data format in the return value of the getRecordAudioParams callback, the SDK calculates the sampling interval according to the return value of the callback, and triggers the onRecordAudioFrame callback according to the sampling interval.
Parameters
- type
- The audio frame type.
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample. For example, each PCM audio sample usually takes up 16 bits (2 bytes).
- channels
- The number of channels:
- 1: Mono.
- 2: Stereo. If the channel uses stereo, the data is interleaved.
- samplesPerSec
- Recording sample rate (Hz).
- buffer
- The audio buffer. The buffer size = samplesPerChannel x channels x bytesPerSample.
- renderTimeMs
- The timestamp (ms) of the external audio frame. You can use this timestamp to synchronize audio and video frames in audio- and video-related scenarios, including scenarios that use external video sources.
- avsync_type
- Reserved for future use.
Returns
Reserved for future use.
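A typical use of the captured frame is level metering. The sketch below scans a frame for its peak 16-bit amplitude, assuming little-endian PCM with bytesPerSample == 2; the class and method names are illustrative, not part of the SDK:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PeakMeter {
    // Return the largest absolute sample value in the frame.
    static int peak(ByteBuffer buffer, int totalSamples) {
        int max = 0;
        for (int i = 0; i < totalSamples; i++) {
            max = Math.max(max, Math.abs(buffer.getShort()));
        }
        return max;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort((short) 100).putShort((short) -2000)
           .putShort((short) 300).putShort((short) 50);
        buf.flip();
        System.out.println(peak(buf, 4)); // 2000
    }
}
```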
getRecordAudioParams
Sets the audio format for the onRecordAudioFrame callback.
public abstract AudioParams getRecordAudioParams();
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the onRecordAudioFrame callback at that interval. The calculation formula is Sample interval (sec) = samplePerCall/(sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The format of the captured audio data. See AudioParams.
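The interval rule above can be checked numerically. Assuming a hypothetical return value with sampleRate 44100, one channel, and samplePerCall 441 or 1024 (plain arithmetic, no SDK types):

```java
public class RecordInterval {
    // Sample interval (sec) = samplePerCall / (sampleRate x channel)
    static double sampleIntervalSec(int samplesPerCall, int sampleRate, int channels) {
        return (double) samplesPerCall / ((long) sampleRate * channels);
    }

    public static void main(String[] args) {
        System.out.println(sampleIntervalSec(441, 44100, 1));          // 0.01, the minimum allowed
        System.out.println(sampleIntervalSec(1024, 44100, 1) >= 0.01); // true
    }
}
```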
getMixedAudioParams
Sets the audio format for the onMixedAudioFrame callback.
public abstract AudioParams getMixedAudioParams();
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the onMixedAudioFrame callback at that interval. The calculation formula is Sample interval (sec) = samplePerCall/(sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The format of the mixed captured and playback audio data. See AudioParams.
getPlaybackAudioParams
Sets the audio format for the onPlaybackAudioFrame callback.
public abstract AudioParams getPlaybackAudioParams();
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the onPlaybackAudioFrame callback at that interval. The calculation formula is Sample interval (sec) = samplePerCall/(sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The format of the audio data for playback. See AudioParams.
getEarMonitoringAudioParams
Sets the audio format for the onEarMonitoringAudioFrame callback.
public abstract AudioParams getEarMonitoringAudioParams();
- Since
- v4.0.1
You need to register the callback when calling the registerAudioFrameObserver method. After you successfully register the audio observer, the SDK triggers this callback, and you can set the audio format in the return value of this callback.
The SDK calculates the sampling interval according to the AudioParams you set in the return value of this callback, and triggers the onEarMonitoringAudioFrame callback at that interval. The calculation formula is Sample interval (sec) = samplePerCall/(sampleRate × channel).
Ensure that the sample interval ≥ 0.01 (s).
Returns
The format of the in-ear monitoring audio data. See AudioParams.