This page shows you how to get raw audio data for pre- and post-processing.
During audio transmission, you can pre- and post-process the captured audio data to achieve the desired playback effect. Agora provides the raw data function so that you can process the audio data according to your scenario: it enables you to pre-process the captured audio signal before it is sent to the encoder, or to post-process the audio signal after it is decoded.
The following figure shows the call sequence you need to implement in your app for raw audio data:
Before proceeding, ensure that you have implemented basic real-time functions in your project. See Start a Call or Start Interactive Live Streaming.
To implement the raw audio data function in your project, refer to the following steps:

1. Before joining a channel, create an IAudioFrameObserver object, and then call registerAudioFrameObserver to register an audio frame observer.
2. Call the methods prefixed with set, such as setRecordingAudioFrameParameters, to configure the format of the audio frames.
3. Implement the onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, and onMixedAudioFrame callbacks. These callbacks capture and process the audio frames. If a callback returns false, the audio frame is not successfully processed.

The following sample code shows how to register the observer and implement the callbacks:

BOOL CAgoraOriginalAudioDlg::RegisterAudioFrameObserver(BOOL bEnable, IAudioFrameObserver *audioFrameObserver)
{
    agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
    // Query the AGORA_IID_MEDIA_ENGINE interface in the engine.
    mediaEngine.queryInterface(m_rtcEngine, agora::rtc::AGORA_IID_MEDIA_ENGINE);
    int nRet = 0;
    if (mediaEngine.get() == NULL)
        return FALSE;
    if (bEnable)
        // Register the audio frame observer and pass in an IAudioFrameObserver object.
        nRet = mediaEngine->registerAudioFrameObserver(audioFrameObserver);
    else
        // Unregister the audio frame observer.
        nRet = mediaEngine->registerAudioFrameObserver(NULL);
    return nRet == 0 ? TRUE : FALSE;
}
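For example, you can tie registration to the channel life cycle as follows. This is a minimal usage sketch, assuming m_audioFrameObserver is a COriginalAudioProcFrameObserver member variable of the dialog class (not part of the snippet above):

// Register the observer before joining the channel so the first frames are observed.
RegisterAudioFrameObserver(TRUE, &m_audioFrameObserver);
// ... join the channel and transmit audio ...
// Unregister the observer when leaving the channel.
RegisterAudioFrameObserver(FALSE, NULL);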
// Implement the onRecordAudioFrame callback.
bool COriginalAudioProcFrameObserver::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
{
    // Total buffer size in bytes: 2 bytes per 16-bit sample, per channel.
    SIZE_T nSize = audioFrame.channels * audioFrame.samplesPerChannel * 2;
    DWORD timestamp = GetTickCount();
    short *pBuffer = (short *)audioFrame.buffer;
    // Double the volume of each 16-bit sample, clamping to avoid overflow.
    for (SIZE_T i = 0; i < nSize / 2; i++)
    {
        if (pBuffer[i] * 2 > 32767) {
            pBuffer[i] = 32767;
        }
        else if (pBuffer[i] * 2 < -32768) {
            pBuffer[i] = -32768;
        }
        else {
            pBuffer[i] *= 2;
        }
    }
#ifdef _DEBUG
    CString strInfo;
    strInfo.Format(_T("audio frame buffer size: %u, timestamp: %u\n"), (unsigned int)nSize, (unsigned int)timestamp);
    OutputDebugString(strInfo);
    audioFrame.renderTimeMs = timestamp;
#endif
    return true;
}
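The sample above doubles the recording volume. As another illustration (a minimal sketch, not part of the OriginalAudio sample), the body of onRecordAudioFrame could instead mute the captured audio by zeroing the buffer, assuming the same 16-bit PCM format:

bool COriginalAudioProcFrameObserver::onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame)
{
    // Zero out every sample so that silence is sent to the encoder.
    memset(audioFrame.buffer, 0, audioFrame.channels * audioFrame.samplesPerChannel * 2);
    return true;
}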
// Implement the onPlaybackAudioFrame callback.
bool COriginalAudioProcFrameObserver::onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame)
{
    return true;
}
// Implement the onMixedAudioFrame callback.
bool COriginalAudioProcFrameObserver::onMixedAudioFrame(const char* channelId, AudioFrame& audioFrame)
{
    return true;
}
// Implement the onPlaybackAudioFrameBeforeMixing callback.
bool COriginalAudioProcFrameObserver::onPlaybackAudioFrameBeforeMixing(const char* channelId, rtc::uid_t uid, AudioFrame& audioFrame)
{
    return true;
}
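The callbacks above are member functions of an observer class derived from IAudioFrameObserver. The following is a minimal declaration sketch; depending on your SDK version, the interface may declare additional pure virtual methods that you must also override:

class COriginalAudioProcFrameObserver : public agora::media::IAudioFrameObserver
{
public:
    bool onRecordAudioFrame(const char* channelId, AudioFrame& audioFrame) override;
    bool onPlaybackAudioFrame(const char* channelId, AudioFrame& audioFrame) override;
    bool onMixedAudioFrame(const char* channelId, AudioFrame& audioFrame) override;
    bool onPlaybackAudioFrameBeforeMixing(const char* channelId, rtc::uid_t uid, AudioFrame& audioFrame) override;
};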
// Call the methods prefixed with set to configure the format of the audio frame captured by each callback.
// Parameters: sample rate (Hz), number of channels, operation mode, and samples per callback.
m_rtcEngine->setRecordingAudioFrameParameters(44100, 2, RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, 1024);
m_rtcEngine->setPlaybackAudioFrameParameters(44100, 2, RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, 1024);
// Parameters: sample rate (Hz) and number of channels.
m_rtcEngine->setPlaybackAudioFrameBeforeMixingParameters(44100, 2);
// Parameters: sample rate (Hz), number of channels, and samples per callback.
m_rtcEngine->setMixedAudioFrameParameters(44100, 2, 1024);
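With these settings, each read-write callback delivers 1024 samples per channel at 44.1 kHz, that is, 1024 / 44100 ≈ 23 ms of audio per callback.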
This section contains in-depth information about the methods used on this page, along with links to related pages.
Agora provides the following open-source sample project on GitHub: OriginalAudio