The default Agora audio module interacts seamlessly with the devices your app runs on. The SDK enables you to add specialized audio features to your app using custom audio renderers.
This page shows you how to integrate a custom audio renderer into your app.
By default, the SDK uses the built-in audio modules of the device your app runs on for real-time communication. However, there are scenarios where the default modules do not meet your requirements and you may want to integrate a custom audio renderer instead.
To manage the processing and playback of audio frames when using a custom audio renderer, use methods from outside the Agora SDK.
The following figure shows how the audio data is transferred when you customize the audio renderer: you call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user, then use your own renderer to process and play it.

Before implementing custom audio rendering, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Call or Start Interactive Live Streaming.
This section shows you how to use a custom audio renderer. Follow this API call sequence to implement custom audio rendering:
Do not allow the SDK to use audio devices during SDK initialization.
// Do not allow the SDK to use audio devices
config.mEnableAudioDevice = false;
engine = RtcEngine.create(config);
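For context, the config object above is the RtcEngineConfig you build during engine initialization. A minimal sketch of the whole step might look like the following; the placeholder App ID and the empty event handler are illustrative assumptions, not part of this guide's sample code:

RtcEngineConfig config = new RtcEngineConfig();
config.mContext = getApplicationContext();               // your application context
config.mAppId = "<your App ID>";                          // placeholder: your Agora App ID
config.mEventHandler = new IRtcEngineEventHandler() {};  // your event handler
// Do not allow the SDK to use audio devices
config.mEnableAudioDevice = false;
engine = RtcEngine.create(config);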
Call setExternalAudioSink to enable and configure the custom audio renderer.
rtcEngine.setExternalAudioSink(
    true,   // Enables external audio rendering
    44100,  // Sampling rate (Hz). You can set this value to 16000, 32000, 44100, or 48000
    1       // The number of channels of the external audio sink. You can set this value to 1 or 2
);
Call joinChannel to set the channel media options and join a channel.
private ChannelMediaOptions option = new ChannelMediaOptions();
option.publishCustomAudioTrack = true;
engine.joinChannel(accessToken, channelId, 0, option);
After joining the channel, call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user. Use your own audio renderer to process the audio data, then play the rendered data.
private class FileThread implements Runnable {
    @Override
    public void run() {
        // mPull is a flag you control elsewhere; set it to false to stop pulling, for example when leaving the channel
        while (mPull) {
            // Buffer size for 10 ms of audio: 48000 samples/s * 2 bytes per sample * 1 channel * 10 ms
            int lengthInByte = 48000 / 1000 * 2 * 1 * 10;
            ByteBuffer frame = ByteBuffer.allocateDirect(lengthInByte);
            // Pulls the remote playback audio data from the SDK
            int ret = engine.pullPlaybackAudioFrame(frame, lengthInByte);
            byte[] data = new byte[frame.remaining()];
            frame.get(data, 0, data.length);
            // Writes the data to a local file or renders it with your own player
            FileIOUtils.writeFileFromBytesByChannel("/sdcard/agora/pull_48k.pcm", data, true, true);
            try {
                // Pulls every 10 ms to match the frame length above
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
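The sample above writes the pulled PCM data to a local file. In a real app, you start the pull thread after joining the channel and hand the bytes to your own renderer. The following is a minimal sketch that renders the pulled data with Android's AudioTrack; the 48 kHz, mono, 16-bit parameters are assumptions that must match the values you pass to setExternalAudioSink and use in the pull loop:

// Start pulling after joining the channel; set mPull to false before leaving the channel
new Thread(new FileThread()).start();

// A minimal PCM renderer built on Android's AudioTrack (a sketch, not part of the Agora SDK)
int bufferSize = AudioTrack.getMinBufferSize(
        48000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audioTrack = new AudioTrack(
        AudioManager.STREAM_MUSIC,      // stream type
        48000,                          // sampling rate (Hz)
        AudioFormat.CHANNEL_OUT_MONO,   // one channel
        AudioFormat.ENCODING_PCM_16BIT, // 16-bit PCM
        bufferSize,
        AudioTrack.MODE_STREAM);
audioTrack.play();

// Inside the pull loop, replace the file write with:
// audioTrack.write(data, 0, data.length);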
Alternatively, you can implement custom audio rendering with the raw audio data APIs. Before proceeding, ensure that you have implemented the raw audio data function in your project. For details, see Raw Audio Data.
To use the raw audio data APIs for custom audio rendering, retrieve the audio data with the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, or onPlaybackAudioFrameBeforeMixing callback, then use your own audio renderer to process and play the retrieved data.
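As a rough sketch of this pattern, the snippet below registers an audio frame observer and renders the playback audio with your own player. The exact onPlaybackAudioFrame parameter list varies by SDK version, so treat the signature shown here as an assumption and verify it against the Raw Audio Data guide for your SDK:

// Sketch only: verify the callback signature against IAudioFrameObserver in your SDK version,
// and implement the remaining observer callbacks as the interface requires
engine.registerAudioFrameObserver(new IAudioFrameObserver() {
    @Override
    public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
                                        int bytesPerSample, int channels, int samplesPerSec,
                                        ByteBuffer buffer, long renderTimeMs, int avSyncType) {
        byte[] pcm = new byte[buffer.remaining()];
        buffer.get(pcm);                      // copy the PCM data out of the SDK buffer
        audioTrack.write(pcm, 0, pcm.length); // render with your own player, for example the AudioTrack sketched above
        return false;                         // return value follows the IAudioFrameObserver contract of your SDK version
    }
    // ... other IAudioFrameObserver callbacks ...
});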
This section includes in-depth information about the methods you used on this page and links to related pages.
Agora provides an open-source demo project on GitHub. You can view the source code on GitHub or download the project to try it out.