v4.0.1 was released on September 29, 2022.
This release deletes the sourceType parameter in enableDualStreamMode [3/3] and enableDualStreamModeEx, as well as the enableDualStreamMode [2/3] method. Because the SDK now supports enabling dual-stream mode for video sources captured either by the SDK or by custom capture, you no longer need to specify the video source type.
1. In-ear monitoring
This release adds support for in-ear monitoring. You can call enableInEarMonitoring
to enable the in-ear monitoring function.
After successfully enabling the in-ear monitoring function, you can call registerAudioFrameObserver
to register the audio observer, and the SDK triggers the onEarMonitoringAudioFrame
callback to report the audio frame data. You can use your own audio effect processing module to pre-process the audio frame data of the in-ear monitoring to implement custom audio effects. Agora recommends that you choose one of the following two methods to set the audio data format of the in-ear monitoring:
- Call the setEarMonitoringAudioFrameParameters method to set the audio data format of in-ear monitoring. The SDK calculates the sampling interval based on the parameters in this method, and triggers the onEarMonitoringAudioFrame callback based on the sampling interval.
- Implement the getEarMonitoringAudioParams callback. The SDK calculates the sampling interval based on the return value of the callback, and triggers the onEarMonitoringAudioFrame callback based on the sampling interval.

To adjust the in-ear monitoring volume, you can call setInEarMonitoringVolume.
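The in-ear monitoring flow above can be sketched as follows. This is an illustrative C++ fragment, not a complete program: it assumes the Agora C++ SDK headers and an initialized engine pointer, and it simplifies parameter lists (check the API reference for the exact signatures on your platform).

```cpp
// Sketch: enable in-ear monitoring and pre-process its audio frames.
class EarMonitorObserver : public agora::media::IAudioFrameObserver {
public:
  // The SDK delivers each in-ear monitoring audio frame here.
  bool onEarMonitoringAudioFrame(AudioFrame& frame) override {
    // Run your own audio effect module over frame.buffer here.
    return true;
  }
  // ... implement the remaining IAudioFrameObserver callbacks ...
};

EarMonitorObserver observer;
engine->enableInEarMonitoring(true);            // turn on in-ear monitoring
engine->registerAudioFrameObserver(&observer);  // receive onEarMonitoringAudioFrame
// Option 1: fix the delivered audio format (sample rate, channels, ...):
engine->setEarMonitoringAudioFrameParameters(/* e.g. 48000 Hz, 2 channels, ... */);
engine->setInEarMonitoringVolume(80);           // adjust the monitoring volume
```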
2. Local network connection types
To make it easier for users to know the connection type of the local network at any stage, this release adds the getNetworkType
method. You can use this method to get the type of network connection in use, including UNKNOWN, DISCONNECTED, LAN, WIFI, 2G, 3G, 4G, 5G. When the local network connection type changes, the SDK triggers the onNetworkTypeChanged
callback to report the current network connection type.
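A minimal sketch of both approaches, assuming an initialized engine and a registered event handler (method names come from the notes; the signatures are simplified):

```cpp
// Poll the current connection type on demand:
int type = engine->getNetworkType();  // e.g. UNKNOWN, DISCONNECTED, LAN, WIFI, 2G-5G

// Or react to changes in your IRtcEngineEventHandler subclass:
void onNetworkTypeChanged(agora::rtc::NETWORK_TYPE type) /* override */ {
  // Adapt bitrate, UI hints, etc. to the new connection type.
}
```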
3. Audio stream filter
This release introduces filtering audio streams based on volume. Once this function is enabled, the Agora server ranks all audio streams by volume and transports 3 audio streams with the highest volumes to the receivers by default. The number of audio streams to be transported can be adjusted; you can contact support@agora.io to adjust this number according to your scenarios.
Meanwhile, Agora allows publishers to choose whether the audio streams they publish are filtered based on volume. Streams that are not filtered bypass this filter mechanism and are transported directly to the receivers. In scenarios with many publishers, enabling this function helps reduce bandwidth and device system pressure on the receivers.
4. Dual-stream mode
This release optimizes the dual-stream mode. You can now call enableDualStreamMode and enableDualStreamModeEx both before and after joining a channel.
The implementation of subscribing to a low-quality video stream is expanded. The SDK enables the low-quality video stream auto mode on the sender by default (the SDK does not send low-quality video streams). You can follow these steps to enable sending low-quality video streams:

- Call setRemoteVideoStreamType or setRemoteDefaultVideoStreamType on the receiver to initiate a low-quality video stream request.
- If you want to modify this default behavior, call setDualStreamMode [1/2] or setDualStreamMode [2/2] and set the mode parameter to DISABLE_SIMULCAST_STREAM (never send low-quality video streams) or ENABLE_SIMULCAST_STREAM (always send low-quality video streams).
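The default flow and its override can be sketched like this (illustrative fragment; `remoteUid` and `engine` are assumed to exist):

```cpp
// Receiver: request the low-quality stream of a remote user.
// In auto mode, this also makes the sender start sending it.
engine->setRemoteVideoStreamType(remoteUid, agora::rtc::VIDEO_STREAM_LOW);

// Sender: override the auto mode explicitly if needed.
engine->setDualStreamMode(agora::rtc::ENABLE_SIMULCAST_STREAM);  // always send low-quality
// ... or DISABLE_SIMULCAST_STREAM to never send low-quality streams.
```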
5. Loopback device
The SDK uses the playback device as the loopback device by default. Since 4.0.1, you can specify a loopback device separately and publish the captured audio to the remote end.
- setLoopbackDevice: Specifies the loopback device. If you do not want the current playback device to be the loopback device, you can call this method to specify another device as the loopback device.
- getLoopbackDevice: Gets the current loopback device.
- followSystemLoopbackDevice: Sets whether the loopback device follows the default playback device of the system.

6. Spatial audio effect
This release adds the following features applicable to spatial audio effect scenarios, which can effectively enhance the user's sense of presence in virtual interactive scenarios.

- Sound insulation area: You can set a sound insulation area and sound attenuation parameter by calling setZones. When the sound source (which can be a user or the media player) and the listener are on opposite sides of a sound insulation area, the listener experiences an attenuation effect similar to that of sound encountering a building partition in the real environment. You can also set the sound attenuation parameter for the media player and the user, respectively, by calling setPlayerAttenuation and setRemoteAudioAttenuation, and specify whether to use that setting to force an override of the sound attenuation parameter in setZones.
- Doppler sound: You can enable the Doppler effect through the enable_doppler parameter in SpatialAudioParams, and the receiver experiences noticeable tonal changes in the event of a high-speed relative displacement between the sound source and receiver (such as in a racing game scenario).
- Headphone equalizer: You can call the setHeadphoneEQPreset method to improve the listening experience on headphones.

Improvements

1. Video information change callback
This release optimizes the trigger logic of onVideoSizeChanged, which can now also be triggered to report the local video size change when startPreview is called separately.
2. First video frame rendering
This release speeds up the first video frame rendering time to improve the video experience.
This release fixed the following issues:

- When stopPreview was called to disable the local video preview, the virtual background that had been set up was occasionally invalidated.
- When the camera capture configuration in CameraCapturerConfiguration was inconsistent with that set in setVideoEncoderConfiguration, the aspect ratio of the local video preview was not rendered according to the latter setting.
- When setVideoEncoderConfigurationEx was called in the channel to increase the resolution of the video, it occasionally failed.

Added
- enableInEarMonitoring
- setEarMonitoringAudioFrameParameters
- onEarMonitoringAudioFrame
- setInEarMonitoringVolume
- getEarMonitoringAudioParams
- getNetworkType
- setRecordingDeviceVolume
- isAudioFilterable in ChannelMediaOptions
- setDualStreamMode [1/2]
- setDualStreamMode [2/2]
- setDualStreamModeEx
- SIMULCAST_STREAM_MODE
- setLoopbackDevice
- getLoopbackDevice
- followSystemLoopbackDevice
- setZones
- setPlayerAttenuation
- setRemoteAudioAttenuation
- muteRemoteAudioStream
- SpatialAudioParams
- setHeadphoneEQPreset
- HEADPHONE_EQUALIZER_PRESET

Modified

- enableDualStreamMode [1/3]
- enableDualStreamMode [3/3]
- enableDualStreamModeEx

Deprecated

- startEchoTest [2/3]

Deleted

- enableDualStreamMode [2/3]

v4.0.0 was released on September 15, 2022.
Integration change
This release has optimized the implementation of some features, resulting in incompatibility with v3.7.0. The following are the main features with compatibility changes:
After upgrading the SDK, you need to update the code in your app according to your business scenarios. For details, see Migrate from v3.7.0 to v4.0.0.
1. Multiple media tracks
This release supports one IRtcEngine instance simultaneously collecting multiple audio and video sources and publishing them to remote users by setting RtcEngineEx and ChannelMediaOptions.

- After calling joinChannel to join the first channel, call joinChannelEx multiple times to join multiple channels, and publish the specified stream to different channels through different user IDs (localUid) and ChannelMediaOptions settings.
- You can simultaneously publish multiple sets of video streams captured by multiple cameras or screen sharing by setting publishSecondaryCameraTrack and publishSecondaryScreenTrack in ChannelMediaOptions.

This release adds the createCustomVideoTrack method to implement custom video capture. You can refer to the following steps to publish multiple custom captured video streams in the channel:

1. Call createCustomVideoTrack to create a video track and get the video track ID.
2. In ChannelMediaOptions, set the customVideoTrackId parameter to the ID of the video track you want to publish, and set publishCustomVideoTrack to true.
3. Call pushVideoFrame, and specify videoTrackId as the ID of the custom video track in step 2 in order to publish the corresponding custom video source in multiple channels.

You can also experience the following features with the multi-channel capability:

- Publish multiple sets of audio and video streams to remote users through different user IDs (uid).
- Mix multiple audio streams and publish them to remote users through a user ID (uid).
- Combine multiple video streams and publish them to remote users through a user ID (uid).

2. Ultra HD resolution (Beta)
In order to improve the interactive video experience, the SDK optimizes the whole process of video capture, encoding, decoding and rendering, and now supports 4K resolution. The improved FEC (Forward Error Correction) algorithm enables adaptive switches according to the frame rate and number of video frame packets, which further reduces the video stuttering rate in 4K scenes.
Additionally, you can set the encoding resolution to 4K (3840 × 2160) and the frame rate to 60 fps when calling SetVideoEncoderConfiguration
. The SDK supports automatic fallback to the appropriate resolution and frame rate if your device does not support 4K.
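For example, a 4K request might look like the following sketch (an illustrative fragment assuming the Agora C++ SDK and an initialized engine; the SDK falls back automatically on unsupported devices):

```cpp
agora::rtc::VideoEncoderConfiguration config;
config.dimensions = agora::rtc::VideoDimensions(3840, 2160);  // 4K
config.frameRate  = 60;                                       // 60 fps
engine->setVideoEncoderConfiguration(config);
```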
3. Built-in media player
To make it easier for users to integrate the Agora SDK and reduce the SDK's package size, this release introduces the Agora media player. After calling the createMediaPlayer
method to create a media player object, you can then call the methods in the IMediaPlayer
class to experience a series of functions, such as playing local and online media files, preloading a media file, changing the CDN route for playing according to your network conditions, or sharing the audio and video streams being played with remote users.
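A minimal usage sketch (the URL is a placeholder; error handling and the ready-state callback are omitted):

```cpp
// Create the built-in media player and start playback.
auto player = engine->createMediaPlayer();
player->open("https://example.com/sample.mp4", /* startPos */ 0);
// After the player reports that opening completed:
player->play();
```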
4. Brand-new AI noise reduction
The SDK supports a new version of AI noise reduction (in comparison to the basic AI noise reduction in v3.7.0). The new AI noise reduction has better vocal fidelity, cleaner noise suppression, and adds a dereverberation option.
5. Ultra-high audio quality
To make the audio clearer and restore more details, this release adds the ULTRA_HIGH_QUALITY_VOICE
enumeration. In scenarios that mainly feature the human voice, such as chat or singing, you can call setVoiceBeautifierPreset
and use this enumeration to experience ultra-high audio quality.
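For instance (a one-line sketch, assuming an initialized engine):

```cpp
// Apply ultra-high audio quality for voice-centric scenarios such as singing.
engine->setVoiceBeautifierPreset(agora::rtc::ULTRA_HIGH_QUALITY_VOICE);
```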
6. Spatial audio
You can set spatial audio for the remote user as follows:

- Use the ILocalSpatialAudioEngine class to implement spatial audio by calculating the spatial coordinates of the remote user. You need to call updateSelfPosition and updateRemotePosition to update the spatial coordinates of the local and remote users, respectively, so that the local user can hear the spatial audio effect of the remote user.

You can also set spatial audio for the media player as follows:

- Use the ILocalSpatialAudioEngine class to implement spatial audio. You need to call updateSelfPosition and updatePlayerPositionInfo to update the spatial coordinates of the local user and the media player, respectively, so that the local user can hear the spatial audio effect of the media player.

7. Real-time chorus
This release gives real-time chorus the following abilities:
This release adds the AUDIO_SCENARIO_CHORUS
enumeration in AUDIO_SCENARIO_TYPE
. With this enumeration, users can experience ultra-low latency in real-time chorus when the network conditions are good.
8. Extensions from the Agora extensions marketplace
In order to enhance the real-time audio and video interactive activities based on the Agora SDK, this release supports the one-stop solution for the extensions from the Agora extensions marketplace:
9. Enhanced channel management
To meet the channel management requirements of various business scenarios, this release adds the following functions to the ChannelMediaOptions
structure:
Set ChannelMediaOptions
when calling joinChannel
or joinChannelEx
to specify the publishing and subscription behavior of a media stream, for example, whether to publish video streams captured by cameras or screen sharing, and whether to subscribe to the audio and video streams of remote users. After joining the channel, call updateChannelMediaOptions
to update the settings in ChannelMediaOptions
at any time, for example, to switch the published audio and video sources.
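A sketch of this flow, using option fields named in this release (other field names and the exact joinChannel signature may differ by platform):

```cpp
agora::rtc::ChannelMediaOptions options;
options.publishCameraTrack = true;     // publish camera video on join
options.autoSubscribeAudio = true;     // subscribe to remote audio
options.autoSubscribeVideo = true;     // subscribe to remote video
engine->joinChannel(token, "demo_channel", localUid, options);

// Later, switch the published source from camera to screen sharing:
options.publishCameraTrack = false;
options.publishScreenTrack = true;
engine->updateChannelMediaOptions(options);
```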
10. Screen sharing
This release optimizes the screen sharing function. You can enable this function in the following ways.
- Call the StartScreenCaptureByDisplayId method before joining a channel, and then call JoinChannel [2/2] to join a channel and set publishScreenTrack or publishSecondaryScreenTrack to true.
- Call the StartScreenCaptureByDisplayId method after joining a channel, and then call UpdateChannelMediaOptions to set publishScreenTrack or publishSecondaryScreenTrack to true.

11. Subscription allowlists and blocklists
This release introduces subscription allowlists and blocklists for remote audio and video streams. You can add a user ID to the allowlist for streams you want to subscribe to, or add a user ID to the blocklist for streams you do not wish to receive. You can experience this feature through the following APIs, and in scenarios that involve multiple channels, you can call the corresponding methods in the IRtcEngineEx interface:

- SetSubscribeAudioBlacklist: Sets the audio subscription blocklist.
- SetSubscribeAudioWhitelist: Sets the audio subscription allowlist.
- SetSubscribeVideoBlacklist: Sets the video subscription blocklist.
- SetSubscribeVideoWhitelist: Sets the video subscription allowlist.

If a user is added to both a blocklist and an allowlist at the same time, only the blocklist takes effect.
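For example (a sketch; the uid values are placeholders, and in the C++ API the method names start lowercase):

```cpp
agora::rtc::uid_t blocked[] = {1001};
agora::rtc::uid_t allowed[] = {2002, 2003};
engine->setSubscribeAudioBlacklist(blocked, 1);  // never receive 1001's audio
engine->setSubscribeVideoWhitelist(allowed, 2);  // only receive these users' video
```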
12. Set audio scenarios
To make it easier to change audio scenarios, this release adds the SetAudioScenario
method. For example, if you want to change the audio scenario from AUDIO_SCENARIO_DEFAULT
to AUDIO_SCENARIO_GAME_STREAMING
when you are in a channel, you can call this method.
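For example (a one-line sketch, assuming an initialized engine):

```cpp
// Switch to the game-streaming audio scenario while in a channel.
engine->setAudioScenario(agora::rtc::AUDIO_SCENARIO_GAME_STREAMING);
```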
13. Local video mixing
This release adds a series of APIs supporting local video mixing functions. You can mix multiple video streams into one video stream locally. Common scenarios are as follows:
You can call the startLocalVideoTranscoder
method to start local video mixing and call the stopLocalVideoTranscoder
method to stop local video mixing. After the local video mixing starts, you can call updateLocalTranscoderConfiguration
to update the local video mixing configuration.
14. Video device management
Video capture devices can support multiple video formats, each supporting a different combination of video frame width, video frame height, and frame rate.
This release adds the numberOfCapabilities
and getCapability
methods for getting the number of video formats supported by the video capture device and the details of the video frames in the specified video format. When calling the startPrimaryCameraCapture
or startSecondaryCameraCapture
method to capture video using the camera, you can use the specified video format.
The SDK chooses an appropriate video format according to your settings in VideoEncoderConfiguration, so normally you should not need to use these new methods.

Improvements

1. Fast channel switching
Through the LeaveChannel and JoinChannel methods, this release achieves the same channel switching speed as SwitchChannel in v3.7.0, so you no longer need to call a separate SwitchChannel method.
2. Push external video frames
This release supports pushing video frames in I422 format. You can call the pushVideoFrame
[1/2] method to push such video frames to the SDK.
3. Voice pitch of the local user
This release adds voicePitch
in AudioVolumeInfo
of onAudioVolumeIndication
. You can use voicePitch
to get the local user's voice pitch and perform business functions such as rating for singing.
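Reading the pitch can be sketched in the volume indication callback (uid 0 denotes the local user in this callback; the exact signature may vary by platform):

```cpp
void onAudioVolumeIndication(const agora::rtc::AudioVolumeInfo* speakers,
                             unsigned int speakerNumber, int totalVolume) /* override */ {
  for (unsigned int i = 0; i < speakerNumber; ++i) {
    if (speakers[i].uid == 0) {                // the local user
      double pitch = speakers[i].voicePitch;   // voice pitch of the local user
      // Use pitch for features such as singing scoring.
    }
  }
}
```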
4. Video preview
This release improves the implementation logic of startPreview
. You can call the startPreview
method to enable video preview at any time.
5. Video types of subscription
You can call the setRemoteDefaultVideoStreamType
method to choose the video stream type when subscribing to streams.