The default Agora audio module interacts seamlessly with the devices your app runs on. The SDK enables you to add specialized audio features to your app using custom audio renderers.
This page shows you how to integrate your custom audio renderer in your app.
By default, the SDK uses the default audio module of the device your app runs on for real-time communication. However, there are scenarios where you may want to integrate a custom audio renderer, for example, when your app has its own audio module or needs finer control over how audio is played.
To manage the processing and playback of audio frames when using a custom audio renderer, use methods from outside the Agora SDK.
The following figure shows how the audio data is transferred when you customize the audio renderer: your app calls pullAudioFrame to retrieve the audio data sent by a remote user.
Before implementing custom audio rendering, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Call or Start Interactive Live Streaming.
This section shows you how to use custom audio renderers.
Use the custom audio renderer APIs in the following call sequence:
When initializing the SDK, do not allow it to use audio devices.
RtcEngineContext context;
// Do not allow the SDK to use audio devices
context.enableAudioDevice = false;
int ret = m_rtcEngine->initialize(context);
Before calling joinChannel, call setExternalAudioSink to enable and configure the custom audio renderer.
// Enable the custom audio renderer
// The sampling rate (Hz) can be set to 16000, 32000, 44100, or 48000
// The number of channels of the external audio source can be set to 1 or 2
nRet = m_rtcEngine->setExternalAudioSink(m_renderAudioInfo.sampleRate, m_renderAudioInfo.channels);
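The snippets on this page assume that m_renderAudioInfo holds the playback format your renderer expects. A minimal sketch of such a holder follows; the struct and its field names are illustrative assumptions, not part of the SDK.
// Hypothetical holder for the external renderer's playback format
struct RenderAudioInfo {
    int sampleRate = 48000; // 16000, 32000, 44100, or 48000 Hz
    int channels = 2;       // 1 (mono) or 2 (stereo)
};
RenderAudioInfo m_renderAudioInfo;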
After joining the channel, call pullAudioFrame to retrieve the audio data sent by a remote user. Use your own audio renderer to process the audio data, then play the rendered data.
void CAgoraCaptureAduioDlg::PullAudioFrameThread(CAgoraCaptureAduioDlg * self)
{
    int nRet = 0;
    agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
    mediaEngine.queryInterface(self->m_rtcEngine, AGORA_IID_MEDIA_ENGINE);
    IAudioFrameObserver::AudioFrame audioFrame;
    audioFrame.avsync_type = 0; // Reserved parameter
    audioFrame.bytesPerSample = TWO_BYTES_PER_SAMPLE;
    audioFrame.type = agora::media::IAudioFrameObserver::FRAME_TYPE_PCM16;
    audioFrame.channels = self->m_renderAudioInfo.channels;
    // Pull 10 ms of audio at a time: sampleRate / 100 samples per channel
    audioFrame.samplesPerChannel = self->m_renderAudioInfo.sampleRate / 100;
    audioFrame.samplesPerSec = self->m_renderAudioInfo.sampleRate;
    // The buffer must hold one frame of samples for every channel
    SIZE_T nSize = audioFrame.samplesPerChannel * audioFrame.channels * audioFrame.bytesPerSample;
    audioFrame.buffer = new BYTE[nSize];
    while (self->m_extenalRenderAudio)
    {
        // Pulls the remote audio data
        nRet = mediaEngine->pullAudioFrame(&audioFrame);
        if (nRet != 0)
        {
            // No frame is ready yet; wait briefly and retry
            Sleep(10);
            continue;
        }
        // Pass the pulled data to the custom renderer for playback
        self->m_audioRender.Render((BYTE*)audioFrame.buffer, nSize);
    }
    delete[] (BYTE*)audioFrame.buffer; // Matches the new BYTE[] allocation
}
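A brief sketch of how this pull thread might be started and stopped; the std::thread member m_pullThread shown here is an illustrative assumption, while m_extenalRenderAudio is the loop flag used above. Start the thread after the local user joins the channel, and clear the flag before leaving so the loop exits cleanly.
#include <thread>

// Start pulling audio after the local user joins the channel
m_extenalRenderAudio = TRUE;
m_pullThread = std::thread(&CAgoraCaptureAduioDlg::PullAudioFrameThread, this);

// Stop the thread cleanly before leaving the channel
m_extenalRenderAudio = FALSE;
if (m_pullThread.joinable())
    m_pullThread.join();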
Alternatively, take the following steps to use the raw audio data APIs for custom audio rendering: call registerAudioFrameObserver to register an audio frame observer, retrieve the audio data from the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, or onPlaybackAudioFrameBeforeMixing callback, and then play the data with your own audio renderer, as sketched below.
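A minimal sketch of the registration step, assuming myObserver is your own class that implements agora::media::IAudioFrameObserver (the exact callback signatures depend on your SDK version):
agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
mediaEngine.queryInterface(m_rtcEngine, AGORA_IID_MEDIA_ENGINE);
// In callbacks such as onPlaybackAudioFrame, hand audioFrame.buffer
// to your own renderer instead of letting the SDK play it
int nRet = mediaEngine->registerAudioFrameObserver(&myObserver);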
This section includes in-depth information about the methods you used on this page, and links to related pages.
Agora provides an open-source demo project on GitHub. You can view the source code on GitHub or download the project to try it out.