abstract boolean onRecordAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

abstract boolean onPlaybackAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

abstract boolean onMixedAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)

abstract boolean onPlaybackAudioFrameBeforeMixing(int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
|
◆ onRecordAudioFrame()
abstract boolean io.agora.rtc2.IAudioFrameObserver.onRecordAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
Retrieves the local user's raw audio data.
The SDK periodically triggers this callback according to the sample interval set by the setRecordingAudioFrameParameters method. You can retrieve the captured audio data of the local user from this callback.
- Parameters
  - type: The type of the audio frame.
  - samplesPerChannel: The number of samples per channel.
  - bytesPerSample: The number of bytes per sample. It is usually two bytes for PCM audio data.
  - channels: The number of audio channels:
    - 1: Mono.
    - 2: Stereo. The audio data is interleaved.
  - samplesPerSec: The audio sampling rate (Hz).
  - buffer: The audio data buffer. The size of the buffer = samplesPerChannel x channels x bytesPerSample.
  - renderTimeMs: The render timestamp of the audio frame, in ms. You can use this parameter for the following purposes:
    - Restore the order of the audio frames.
    - Synchronize audio and video frames in audio and video scenarios, including scenarios using external video sources.
  - avsync_type: Reserved parameter.
- Returns
- true: The audio frame is valid and sent back to the SDK.
- false: The audio frame is invalid and discarded.
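The buffer-size relationship above can be checked directly in the callback. The sketch below is illustrative only: `RecordFrameReader`, `expectedBytes`, and `readPcm16` are hypothetical helper names, not part of the SDK, and the code assumes the common case of 16-bit little-endian PCM (bytesPerSample == 2).

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical helpers showing how the callback parameters relate to the
// layout of the buffer delivered by onRecordAudioFrame.
public class RecordFrameReader {

    /** Expected buffer size: samplesPerChannel x channels x bytesPerSample. */
    static int expectedBytes(int samplesPerChannel, int channels, int bytesPerSample) {
        return samplesPerChannel * channels * bytesPerSample;
    }

    /** Reads the interleaved 16-bit PCM samples out of the callback buffer. */
    static short[] readPcm16(ByteBuffer buffer, int samplesPerChannel, int channels) {
        short[] samples = new short[samplesPerChannel * channels];
        // Duplicate so the SDK's buffer position is left untouched.
        buffer.duplicate().order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
        return samples;
    }

    public static void main(String[] args) {
        // A 10 ms mono frame at 48 kHz: 480 samples per channel, 2 bytes each.
        int samplesPerChannel = 480, channels = 1, bytesPerSample = 2;
        ByteBuffer frame = ByteBuffer.allocate(
                expectedBytes(samplesPerChannel, channels, bytesPerSample));
        System.out.println(frame.capacity());                          // 960
        System.out.println(readPcm16(frame, samplesPerChannel, channels).length); // 480
    }
}
```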
◆ onPlaybackAudioFrame()
abstract boolean io.agora.rtc2.IAudioFrameObserver.onPlaybackAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
Retrieves all the remote users' raw audio data.
The SDK periodically triggers this callback according to the sample interval set by the setPlaybackAudioFrameParameters method. You can retrieve the audio playback data of all the remote users from this callback.
- Parameters
  - type: The type of the audio frame.
  - samplesPerChannel: The number of samples per channel.
  - bytesPerSample: The number of bytes per sample. It is usually two bytes for PCM audio data.
  - channels: The number of audio channels:
    - 1: Mono.
    - 2: Stereo. The audio data is interleaved.
  - samplesPerSec: The audio sampling rate (Hz).
  - buffer: The audio data buffer. The size of the buffer = samplesPerChannel x channels x bytesPerSample.
  - renderTimeMs: The render timestamp of the audio frame, in ms. You can use this parameter for the following purposes:
    - Restore the order of the audio frames.
    - Synchronize audio and video frames in audio and video scenarios, including scenarios using external video sources.
  - avsync_type: Reserved parameter.
- Returns
- true: The audio frame is valid and sent back to the SDK.
- false: The audio frame is invalid and discarded.
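Because returning true sends the (possibly modified) frame back to the SDK, this callback is a natural place for in-place processing of playback audio. The sketch below is a minimal illustration, not SDK code: `PlaybackGain` and `applyGain` are hypothetical names, and it assumes 16-bit little-endian PCM (bytesPerSample == 2).

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Hypothetical sketch: scale a playback frame in place before returning true
// from onPlaybackAudioFrame.
public class PlaybackGain {

    /** Multiplies every interleaved sample by gain, clamping to the 16-bit range. */
    static void applyGain(ByteBuffer buffer, int samplesPerChannel, int channels, float gain) {
        // The ShortBuffer view writes through to the original buffer's contents.
        ShortBuffer pcm = buffer.duplicate().order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
        int total = samplesPerChannel * channels;
        for (int i = 0; i < total; i++) {
            int scaled = Math.round(pcm.get(i) * gain);
            pcm.put(i, (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled)));
        }
    }

    public static void main(String[] args) {
        // Two mono samples: 1000 and -2000, halved in place.
        ByteBuffer frame = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        frame.putShort(0, (short) 1000).putShort(2, (short) -2000);
        applyGain(frame, 2, 1, 0.5f);
        System.out.println(frame.getShort(0)); // 500
        System.out.println(frame.getShort(2)); // -1000
    }
}
```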
◆ onMixedAudioFrame()
abstract boolean io.agora.rtc2.IAudioFrameObserver.onMixedAudioFrame(int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
Retrieves the raw audio data of the local user and all the remote users.
The SDK periodically triggers this callback according to the sample interval set by the setMixedAudioFrameParameters method. You can retrieve the mixed audio data of the local and remote users from this callback.
- Parameters
  - type: The type of the audio frame.
  - samplesPerChannel: The number of samples per channel.
  - bytesPerSample: The number of bytes per sample. It is usually two bytes for PCM audio data.
  - channels: The number of audio channels:
    - 1: Mono.
    - 2: Stereo. The audio data is interleaved.
  - samplesPerSec: The audio sampling rate (Hz).
  - buffer: The audio data buffer. The size of the buffer = samplesPerChannel x channels x bytesPerSample.
  - renderTimeMs: The render timestamp of the audio frame, in ms. You can use this parameter for the following purposes:
    - Restore the order of the audio frames.
    - Synchronize audio and video frames in audio and video scenarios, including scenarios using external video sources.
  - avsync_type: Reserved parameter.
- Returns
- true: The audio frame is valid and sent back to the SDK.
- false: The audio frame is invalid and discarded.
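One use of the renderTimeMs parameter described above is restoring frame order, for example when recording the mixed stream. The sketch below is hypothetical (`MixedFrameRecorder`, `onFrame`, and `toPcm` are not SDK names): frames are copied out of the callback buffer keyed by their render timestamp, then concatenated in timestamp order.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical recorder: collect frames from onMixedAudioFrame and write
// them out in renderTimeMs order.
public class MixedFrameRecorder {
    private final TreeMap<Long, byte[]> frames = new TreeMap<>();

    /** Copies the frame's bytes out of the buffer, keyed by its render timestamp. */
    void onFrame(long renderTimeMs, ByteBuffer buffer, int frameBytes) {
        byte[] copy = new byte[frameBytes];
        buffer.duplicate().get(copy); // duplicate: leave the SDK's position alone
        frames.put(renderTimeMs, copy);
    }

    /** Concatenates all collected frames in timestamp order into one raw PCM stream. */
    byte[] toPcm() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Map.Entry<Long, byte[]> e : frames.entrySet()) {
            out.write(e.getValue(), 0, e.getValue().length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        MixedFrameRecorder rec = new MixedFrameRecorder();
        // Frames delivered out of order: the 20 ms frame arrives before the 0 ms frame.
        rec.onFrame(20, ByteBuffer.wrap(new byte[]{3, 4}), 2);
        rec.onFrame(0, ByteBuffer.wrap(new byte[]{1, 2}), 2);
        System.out.println(java.util.Arrays.toString(rec.toPcm())); // [1, 2, 3, 4]
    }
}
```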
◆ onPlaybackAudioFrameBeforeMixing()
abstract boolean io.agora.rtc2.IAudioFrameObserver.onPlaybackAudioFrameBeforeMixing(int userId, int type, int samplesPerChannel, int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type)
Retrieves the raw audio data of a specific remote user.
- Parameters
  - userId: The ID of the remote user whose audio frame this is.
  - type: The type of the audio frame.
  - samplesPerChannel: The number of samples per channel.
  - bytesPerSample: The number of bytes per sample. It is usually two bytes for PCM audio data.
  - channels: The number of audio channels:
    - 1: Mono.
    - 2: Stereo. The audio data is interleaved.
  - samplesPerSec: The audio sampling rate (Hz).
  - buffer: The audio data buffer. The size of the buffer = samplesPerChannel x channels x bytesPerSample.
  - renderTimeMs: The render timestamp of the audio frame, in ms. You can use this parameter for the following purposes:
    - Restore the order of the audio frames.
    - Synchronize audio and video frames in audio and video scenarios, including scenarios using external video sources.
  - avsync_type: Reserved parameter.
- Returns
- true: The audio frame is valid and sent back to the SDK.
- false: The audio frame is invalid and discarded.
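What distinguishes this callback from onPlaybackAudioFrame is the userId parameter, which lets you process each remote user's audio separately before mixing. The sketch below is hypothetical (`PerUserLevelMeter`, `onFrame`, and `peakOf` are illustrative names, not SDK API) and assumes 16-bit little-endian PCM: it tracks the peak sample level seen per user.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-user level meter fed from onPlaybackAudioFrameBeforeMixing.
public class PerUserLevelMeter {
    private final Map<Integer, Integer> peakByUser = new HashMap<>();

    /** Records the peak absolute 16-bit sample seen so far for this user. */
    void onFrame(int userId, ByteBuffer buffer, int samplesPerChannel, int channels) {
        ShortBuffer pcm = buffer.duplicate().order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
        int peak = peakByUser.getOrDefault(userId, 0);
        for (int i = 0; i < samplesPerChannel * channels; i++) {
            // Widen to int before abs() so Short.MIN_VALUE does not overflow.
            peak = Math.max(peak, Math.abs((int) pcm.get(i)));
        }
        peakByUser.put(userId, peak);
    }

    int peakOf(int userId) {
        return peakByUser.getOrDefault(userId, 0);
    }

    public static void main(String[] args) {
        PerUserLevelMeter meter = new PerUserLevelMeter();
        ByteBuffer frame = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        frame.putShort(0, (short) 300).putShort(2, (short) -700);
        meter.onFrame(42, frame, 2, 1);
        System.out.println(meter.peakOf(42)); // 700
    }
}
```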