The default Agora audio module interacts seamlessly with the devices your app runs on. The SDK enables you to add specialized audio features to your app using custom audio renderers.
This page shows you how to integrate your custom audio renderer in your app.
By default, the SDK uses the default audio module of the device your app runs on for real-time communication. However, there are scenarios where you may want to integrate a custom audio renderer, for example, when your app has its own audio module or you want to use a non-default playback device.
To manage the processing and playback of audio frames when using a custom audio renderer, use methods from outside the Agora SDK.
Before implementing custom audio rendering, ensure that you have implemented the raw audio data function in your project. For details, see Raw Audio Data.
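As a sketch of this prerequisite, the raw audio data function boils down to registering a frame observer with the engine so that decoded PCM frames are delivered to your code. In the example below, `RtcEngineStub` and `AudioFrameObserver` are simplified, hypothetical stand-ins that mirror the shape of the SDK's engine and frame-observer interface; the names, signatures, and return conventions are assumptions for illustration, not the actual Agora API.

```java
// Illustrative stand-ins, NOT the real Agora classes.
interface AudioFrameObserver {
    // Mirrors the shape of a playback-frame callback: raw PCM plus format info.
    boolean onPlaybackAudioFrame(byte[] samples, int sampleRate, int channels);
}

class RtcEngineStub {
    private AudioFrameObserver observer;

    // Mirrors registerAudioFrameObserver: after registration, the engine
    // delivers raw playback frames to the observer.
    int registerAudioFrameObserver(AudioFrameObserver obs) {
        this.observer = obs;
        return 0; // 0 = success, following the SDK's int-return convention
    }

    // Simulates the engine pushing one decoded playback frame.
    void deliverPlaybackFrame(byte[] pcm, int sampleRate, int channels) {
        if (observer != null) {
            observer.onPlaybackAudioFrame(pcm, sampleRate, channels);
        }
    }
}
```

With the real SDK you would register the observer once, after engine initialization and before joining a channel, so that no frames are missed.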
To implement a custom audio renderer in your project, refer to the following steps.
Retrieve the audio frames reported by the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, or onPlaybackAudioFrameBeforeMixing callback, then independently render and play them with your custom audio renderer.

This section contains in-depth information about the methods used on this page, as well as links to related pages.
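The retrieval step above can be sketched as follows. The class copies each frame out of the callback into a queue that a separate playback loop drains; the method name mirrors onPlaybackAudioFrame, but the class itself and its `nextFrame` helper are hypothetical, not part of the Agora API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a custom renderer: frames arrive on the SDK callback thread
// and are drained by your own playback thread.
class CustomAudioRenderer {
    private final Queue<byte[]> frames = new ArrayDeque<>();

    // Body you would place inside onPlaybackAudioFrame: copy the PCM data,
    // because the engine may reuse the underlying buffer after the callback.
    synchronized boolean onPlaybackAudioFrame(byte[] samples, int sampleRate, int channels) {
        frames.add(samples.clone());
        return true; // signal that the frame was consumed
    }

    // Called by your playback loop (for example, an Android AudioTrack
    // writer) to fetch the next frame; returns null when none is pending.
    synchronized byte[] nextFrame() {
        return frames.poll();
    }
}
```

Copying in the callback and draining elsewhere keeps the SDK callback fast, which matters because audio callbacks fire every few milliseconds.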
Agora provides the following open-source sample projects on GitHub that implement custom audio rendering: