This page shows you how to get raw audio data for pre- and post-processing.
During the audio transmission process, you can pre- and post-process the captured audio data to achieve the desired playback effect.
Agora provides the raw data function for you to process the audio data according to your scenarios. This function enables you to pre-process the captured audio signal before sending it to the encoder, or to post-process the decoded audio signal.
The following figure shows the call sequence you need to implement in your app for raw audio data:
Before proceeding, ensure that you have implemented basic real-time functions in your project. See Start a Call or Start Interactive Live Streaming.
To implement the raw audio data function in your project, refer to the following steps.
1. Create an IAudioFrameObserver object, and then call registerAudioFrameObserver to register an audio frame observer.
2. Call the methods prefixed with set to configure the format of the audio frame.
3. Implement the onRecordFrame, onPlaybackFrame, onPlaybackFrameBeforeMixing, and onMixedFrame callbacks. These callbacks capture and process the audio frames. If a callback returns false, the audio frame is not successfully processed.
// Call registerAudioFrameObserver to register the audio frame observer and pass in an IAudioFrameObserver object.
engine.registerAudioFrameObserver(new IAudioFrameObserver() {
    // Implement the onRecordFrame callback to get the captured audio frame
    @Override
    public boolean onRecordFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
        // If local loopback is enabled, play the captured frame locally
        if (isEnableLoopBack) {
            mAudioPlayer.play(samples, 0, numOfSamples * bytesPerSample);
        }
        return false;
    }

    // Implement the onPlaybackFrame callback to get the playback audio frame of all remote users after mixing
    @Override
    public boolean onPlaybackFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
        return false;
    }

    // Implement the onPlaybackFrameBeforeMixing callback to get the audio frame of a specified remote user before mixing
    @Override
    public boolean onPlaybackFrameBeforeMixing(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec, int uid) {
        return false;
    }

    // Implement the onMixedFrame callback to get the mixed captured and playback audio frame
    @Override
    public boolean onMixedFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
        return false;
    }
});

// Call the methods prefixed with set to configure the format of the audio frame captured by each callback.
engine.setRecordingAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
engine.setMixedAudioFrameParameters(SAMPLE_RATE, SAMPLES_PER_CALL);
engine.setPlaybackAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
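The snippet above refers to several names that are not part of the Agora SDK: the format constants SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, and SAMPLES_PER_CALL, the isEnableLoopBack flag (a boolean you control yourself, for example from a UI switch), and the mAudioPlayer helper. The following is a minimal sketch of how these pieces could be defined, assuming the raw frames are 16-bit little-endian PCM and using android.media.AudioTrack for local playback; the class, method names, and values are illustrative assumptions, not Agora APIs.

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Illustrative helpers assumed by the snippet above; not part of the Agora SDK.
public class RawAudioHelpers {
    // Example audio frame format, matching the set... calls above.
    // Choose values that fit your scenario.
    public static final int SAMPLE_RATE = 44100;        // sampling rate in Hz
    public static final int SAMPLE_NUM_OF_CHANNEL = 1;  // mono
    public static final int SAMPLES_PER_CALL = 4410;    // samples per callback (100 ms at 44.1 kHz)

    // Simple PCM player built on AudioTrack, used here for local loopback.
    public static class AudioPlayer {
        private final AudioTrack audioTrack;

        public AudioPlayer(int sampleRate, int channels) {
            int channelConfig = channels == 1
                    ? AudioFormat.CHANNEL_OUT_MONO
                    : AudioFormat.CHANNEL_OUT_STEREO;
            int bufferSize = AudioTrack.getMinBufferSize(
                    sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT);
            audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC,
                    sampleRate,
                    channelConfig,
                    AudioFormat.ENCODING_PCM_16BIT,
                    bufferSize,
                    AudioTrack.MODE_STREAM);
            audioTrack.play();
        }

        // Write size bytes of raw 16-bit PCM data, starting at offset, to the speaker.
        public void play(byte[] data, int offset, int size) {
            audioTrack.write(data, offset, size);
        }

        public void release() {
            audioTrack.stop();
            audioTrack.release();
        }
    }

    // Illustrative pre-processing step: halve the volume of a 16-bit
    // little-endian PCM buffer in place, for example inside onRecordFrame.
    public static void applyHalfGain(byte[] samples) {
        for (int i = 0; i + 1 < samples.length; i += 2) {
            short sample = (short) ((samples[i] & 0xFF) | (samples[i + 1] << 8));
            sample = (short) (sample / 2);
            samples[i] = (byte) (sample & 0xFF);
            samples[i + 1] = (byte) ((sample >> 8) & 0xFF);
        }
    }
}

applyHalfGain only illustrates the kind of in-place modification you can perform on the samples buffer when the frame is configured with RAW_AUDIO_FRAME_OP_MODE_READ_WRITE; adapt the callback's return value and your processing to your own scenario.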
This section includes in-depth information about the methods used on this page, and links to related pages.
Agora provides the following open-source sample project on GitHub: