The default Agora audio module interacts seamlessly with the devices your app runs on. The SDK enables you to add specialized audio features to your app using a custom audio source.
By default, the SDK uses the built-in audio modules of the device your app runs on for real-time communication. However, in some scenarios you may want to integrate a custom audio capturer instead.
When you use a custom audio source, you manage the capture and processing of audio frames yourself, using methods from outside the Agora SDK.
The following figure shows the call sequence you need to implement in your app for a custom audio source:
The following figure shows how the audio data is transferred when you customize the audio source:
Call pushAudioFrame to send the captured audio frames to the SDK.

Before proceeding, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Call or Start Interactive Live Streaming.
To implement a custom audio source in your project, refer to the following steps.
Before calling joinChannel, call setExternalAudioSource to specify the custom audio source.
// Specifies the custom audio source
m_rtcEngine->setExternalAudioSource(true, m_capAudioInfo.sampleRate, m_capAudioInfo.channels);
// The local user joins the channel
ChannelMediaOptions option;
option.autoSubscribeAudio = true;
option.autoSubscribeVideo = true;
m_rtcEngine->joinChannel("Your token", szChannelId.c_str(), 0, option);
Implement audio capture and processing yourself using methods from outside the SDK.
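For example, a minimal capture sketch might read interleaved 16-bit PCM data from a local file in 10 ms chunks. The readAudioChunk helper and CapturedChunk structure below are hypothetical and shown only for illustration; in a real app the samples would come from your own capturer or a third-party audio library.

// Hypothetical capture helper: reads one 10 ms chunk of interleaved
// 16-bit PCM from an already opened std::ifstream. Replace this with
// your own capture pipeline.
#include <cstdint>
#include <fstream>
#include <vector>

struct CapturedChunk {
    std::vector<int16_t> samples; // interleaved PCM samples
    int sampleRate = 0;
    int channels = 0;
};

bool readAudioChunk(std::ifstream& pcmFile, int sampleRate, int channels,
                    CapturedChunk& chunk) {
    // One 10 ms frame holds sampleRate / 100 samples per channel.
    const size_t samplesPerChunk = static_cast<size_t>(sampleRate / 100) * channels;
    chunk.samples.resize(samplesPerChunk);
    chunk.sampleRate = sampleRate;
    chunk.channels = channels;
    pcmFile.read(reinterpret_cast<char*>(chunk.samples.data()),
                 samplesPerChunk * sizeof(int16_t));
    return pcmFile.gcount() ==
           static_cast<std::streamsize>(samplesPerChunk * sizeof(int16_t));
}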
Call pushAudioFrame to send the audio frames to the SDK for later use.
mediaEngine->pushAudioFrame(AUDIO_RECORDING_SOURCE, &m_audioFrame);
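The mediaEngine pointer used above is typically obtained from the engine through queryInterface, and the AudioFrame structure must be filled in before each push. The following is a minimal sketch of that setup, assuming the 3.x-style C++ header IAgoraMediaEngine.h; field and enum names may differ slightly in other SDK versions, and pcmBuffer is a hypothetical placeholder for your captured 16-bit PCM data.

// Sketch: obtain the media engine and push one externally captured frame.
// m_rtcEngine and m_capAudioInfo come from the earlier snippets; pcmBuffer
// is a hypothetical pointer to one 10 ms frame of interleaved 16-bit PCM.
agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
mediaEngine.queryInterface(m_rtcEngine, agora::rtc::AGORA_IID_MEDIA_ENGINE);

agora::media::IAudioFrameObserver::AudioFrame m_audioFrame;
m_audioFrame.type = agora::media::IAudioFrameObserver::FRAME_TYPE_PCM16;
m_audioFrame.samplesPerSec = m_capAudioInfo.sampleRate;
m_audioFrame.channels = m_capAudioInfo.channels;
m_audioFrame.bytesPerSample = 2;                        // 16-bit PCM
m_audioFrame.samples = m_capAudioInfo.sampleRate / 100; // 10 ms of samples per channel
m_audioFrame.buffer = pcmBuffer;
m_audioFrame.renderTimeMs = 0;

// Call this roughly every 10 ms from your capture loop.
mediaEngine->pushAudioFrame(AUDIO_RECORDING_SOURCE, &m_audioFrame);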
This section includes in-depth information about the methods used on this page, and links to related pages.
Agora provides an open-source demo project on GitHub. You can view the source code on GitHub or download the project to try it out.