Type definition
This page lists all the type definitions of the Android API.
AreaCode
The region for connection, which is the region where the server that the SDK connects to is located.
Enumerator
- AREA_CODE_CN
- Mainland China.
- AREA_CODE_NA
- North America.
- AREA_CODE_EU
- Europe.
- AREA_CODE_AS
- Asia, excluding Mainland China.
- AREA_CODE_JP
- Japan.
- AREA_CODE_IN
- India.
- AREA_CODE_GLOB
- Global.
AudioCodecProfileType
Self-defined audio codec profile.
Enumerator
- LC_AAC
- 0: (Default) LC-AAC.
- HE_AAC
- 1: HE-AAC.
- HE_AAC_V2
- 2: HE-AAC v2.
AudioDualMonoMode
The channel mode.
Enumerator
- AUDIO_DUAL_MONO_STEREO
- 0: Original mode.
- AUDIO_DUAL_MONO_L
- 1: Left channel mode. This mode replaces the audio of the right channel with the audio of the left channel, which means the user can only hear the audio of the left channel.
- AUDIO_DUAL_MONO_R
- 2: Right channel mode. This mode replaces the audio of the left channel with the audio of the right channel, which means the user can only hear the audio of the right channel.
- AUDIO_DUAL_MONO_MIX
- 3: Mixed channel mode. This mode mixes the audio of the left channel and the right channel, which means the user can hear the audio of the left channel and the right channel at the same time.
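As an illustration of how the three non-default modes transform a stereo frame, here is a standalone sketch (not SDK code) operating on interleaved 16-bit PCM samples laid out as [L0, R0, L1, R1, ...]. The mix mode here averages the two channels to avoid clipping; the SDK's exact mixing gain is internal.

```java
// Standalone illustration (not SDK code) of how each AudioDualMonoMode
// transforms interleaved 16-bit stereo PCM laid out as [L0, R0, L1, R1, ...].
public class DualMonoDemo {
    public static final int STEREO = 0; // AUDIO_DUAL_MONO_STEREO
    public static final int LEFT = 1;   // AUDIO_DUAL_MONO_L
    public static final int RIGHT = 2;  // AUDIO_DUAL_MONO_R
    public static final int MIX = 3;    // AUDIO_DUAL_MONO_MIX

    public static short[] apply(short[] interleaved, int mode) {
        short[] out = interleaved.clone();
        for (int i = 0; i + 1 < out.length; i += 2) {
            short l = interleaved[i];
            short r = interleaved[i + 1];
            if (mode == LEFT) {
                out[i + 1] = l;                      // both ears hear the left channel
            } else if (mode == RIGHT) {
                out[i] = r;                          // both ears hear the right channel
            } else if (mode == MIX) {
                short mixed = (short) ((l + r) / 2); // average the two channels
                out[i] = mixed;
                out[i + 1] = mixed;
            }                                        // STEREO: leave the frame unchanged
        }
        return out;
    }
}
```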
AUDIO_EQUALIZATION_BAND_FREQUENCY
The midrange frequency for audio equalization.
Enumerator
- AUDIO_EQUALIZATION_BAND_31
- 0: 31 Hz
- AUDIO_EQUALIZATION_BAND_62
- 1: 62 Hz
- AUDIO_EQUALIZATION_BAND_125
- 2: 125 Hz
- AUDIO_EQUALIZATION_BAND_250
- 3: 250 Hz
- AUDIO_EQUALIZATION_BAND_500
- 4: 500 Hz
- AUDIO_EQUALIZATION_BAND_1K
- 5: 1 kHz
- AUDIO_EQUALIZATION_BAND_2K
- 6: 2 kHz
- AUDIO_EQUALIZATION_BAND_4K
- 7: 4 kHz
- AUDIO_EQUALIZATION_BAND_8K
- 8: 8 kHz
- AUDIO_EQUALIZATION_BAND_16K
- 9: 16 kHz
AudioMixingDualMonoMode
The channel mode.
Enumerator
- AUDIO_MIXING_DUAL_MONO_AUTO
- 0: Original mode.
- AUDIO_MIXING_DUAL_MONO_L
- 1: Left channel mode. This mode replaces the audio of the right channel with the audio of the left channel, which means the user can only hear the audio of the left channel.
- AUDIO_MIXING_DUAL_MONO_R
- 2: Right channel mode. This mode replaces the audio of the left channel with the audio of the right channel, which means the user can only hear the audio of the right channel.
- AUDIO_MIXING_DUAL_MONO_MIX
- 3: Mixed channel mode. This mode mixes the audio of the left channel and the right channel, which means the user can hear the audio of the left channel and the right channel at the same time.
AUDIO_REVERB_TYPE
Audio reverberation types.
Enumerator
- AUDIO_REVERB_DRY_LEVEL
- 0: The level of the dry signal (dB). The value is between -20 and 10.
- AUDIO_REVERB_WET_LEVEL
- 1: The level of the early reflection signal (wet signal) (dB). The value is between -20 and 10.
- AUDIO_REVERB_ROOM_SIZE
- 2: The room size of the reflection. The value is between 0 and 100.
- AUDIO_REVERB_WET_DELAY
- 3: The length of the initial delay of the wet signal (ms). The value is between 0 and 200.
- AUDIO_REVERB_STRENGTH
- 4: The reverberation strength. The value is between 0 and 100.
AudioSampleRateType
The audio sampling rate of the stream to be pushed to the CDN.
Enumerator
- AUDIO_SAMPLE_RATE_32000
- 32000: 32 kHz
- AUDIO_SAMPLE_RATE_44100
- 44100: 44.1 kHz
- AUDIO_SAMPLE_RATE_48000
- 48000: (Default) 48 kHz
DEGRADATION_PREFERENCE
Video degradation preferences when the bandwidth is a constraint.
Enumerator
- MAINTAIN_QUALITY
-
0: (Default) Prefers to reduce the video frame rate while maintaining video quality during video encoding under limited bandwidth. This degradation preference is suitable for scenarios where video quality is prioritized.
Attention: In the COMMUNICATION channel profile, the resolution of the video sent may change, so remote users need to handle this issue. See onVideoSizeChanged.
- MAINTAIN_FRAMERATE
- 1: Prefers to reduce the video quality while maintaining the video frame rate during video encoding under limited bandwidth. This degradation preference is suitable for scenarios where smoothness is prioritized and video quality is allowed to be reduced.
- MAINTAIN_BALANCED
-
2: Reduces the video frame rate and video quality simultaneously during video encoding under limited bandwidth. MAINTAIN_BALANCED degrades each less severely than MAINTAIN_QUALITY and MAINTAIN_FRAMERATE do, and this preference is suitable for scenarios where both smoothness and video quality are a priority.
Attention: The resolution of the video sent may change, so remote users need to handle this issue. See onVideoSizeChanged.
- MAINTAIN_RESOLUTION
- 3: Reduces the video frame rate preferentially during video encoding under limited bandwidth, in order to maintain the video resolution.
ENCRYPTION_ERROR_TYPE
Encryption error type.
Enumerator
- ENCRYPTION_ERROR_INTERNAL_FAILURE
- 0: Internal reason.
- ENCRYPTION_ERROR_DECRYPTION_FAILURE
- 1: Decryption errors. Ensure that the receiver and the sender use the same encryption mode and key.
- ENCRYPTION_ERROR_ENCRYPTION_FAILURE
- 2: Encryption errors.
EncryptionMode
The built-in encryption mode.
Agora recommends using the AES_128_GCM2 or AES_256_GCM2 encryption mode. These two modes support the use of salt for higher security.
Enumerator
- AES_128_XTS
- 1: 128-bit AES encryption, XTS mode.
- AES_128_ECB
- 2: 128-bit AES encryption, ECB mode.
- AES_256_XTS
- 3: 256-bit AES encryption, XTS mode.
- SM4_128_ECB
- 4: 128-bit SM4 encryption, ECB mode.
- AES_128_GCM
- 5: 128-bit AES encryption, GCM mode.
- AES_256_GCM
- 6: 256-bit AES encryption, GCM mode.
- AES_128_GCM2
- 7: (Default) 128-bit AES encryption, GCM mode. This encryption mode requires the setting of salt (encryptionKdfSalt).
- AES_256_GCM2
- 8: 256-bit AES encryption, GCM mode. This encryption mode requires the setting of salt (encryptionKdfSalt).
- MODE_END
- Enumerator boundary.
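The two GCM2 modes require a 32-byte encryptionKdfSalt, which an app server typically delivers Base64-encoded. A minimal decoding-and-validation sketch; the helper class and its name are illustrative, not part of the SDK:

```java
import java.util.Base64;

// Decodes a Base64-encoded key-derivation salt and checks its length before
// it is handed to the SDK. Illustrative helper, not an SDK class.
public class SaltDecoder {
    public static byte[] decodeSalt(String base64Salt) {
        byte[] salt = Base64.getDecoder().decode(base64Salt);
        if (salt.length != 32) {
            throw new IllegalArgumentException(
                "encryptionKdfSalt must be 32 bytes, got " + salt.length);
        }
        return salt;
    }
}
```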
ExternalVideoSourceType
The external video frame encoding type.
Enumerator
- VIDEO_FRAME
- 0: The video frame is not encoded.
- ENCODED_VIDEO_FRAME
- 1: The video frame is encoded.
FRAME_RATE
Video frame rate.
Enumerator
- FRAME_RATE_FPS_1
- 1: 1 fps
- FRAME_RATE_FPS_7
- 7: 7 fps
- FRAME_RATE_FPS_10
- 10: 10 fps
- FRAME_RATE_FPS_15
- 15: 15 fps
- FRAME_RATE_FPS_24
- 24: 24 fps
- FRAME_RATE_FPS_30
- 30: 30 fps
HEADPHONE_EQUALIZER_PRESET
Preset headphone equalizer types.
- Since
- v4.0.1
Enumerator
- HEADPHONE_EQUALIZER_OFF
- The headphone equalizer is disabled, and the original audio is heard.
- HEADPHONE_EQUALIZER_OVEREAR
- An equalizer is used for headphones.
- HEADPHONE_EQUALIZER_INEAR
- An equalizer is used for in-ear headphones.
LogLevel
The output log level of the SDK.
Enumerator
- LOG_LEVEL_NONE
- 0: Do not output any log information.
- LOG_LEVEL_INFO
- 0x0001: (Default) Output FATAL, ERROR, WARN, and INFO level log information. We recommend setting your log filter to this level.
- LOG_LEVEL_WARN
- 0x0002: Output FATAL, ERROR, and WARN level log information.
- LOG_LEVEL_ERROR
- 0x0004: Output FATAL and ERROR level log information.
- LOG_LEVEL_FATAL
- 0x0008: Output FATAL level log information.
MediaPlayerError
Error codes of the media player.
Enumerator
- PLAYER_ERROR_NONE
- 0: No error.
- PLAYER_ERROR_INVALID_ARGUMENTS
- -1: Invalid arguments.
- PLAYER_ERROR_INTERNAL
- -2: Internal error.
- PLAYER_ERROR_NO_RESOURCE
- -3: No resource.
- PLAYER_ERROR_INVALID_MEDIA_SOURCE
- -4: Invalid media resource.
- PLAYER_ERROR_UNKNOWN_STREAM_TYPE
- -5: The media stream type is unknown.
- PLAYER_ERROR_OBJ_NOT_INITIALIZED
- -6: The object is not initialized.
- PLAYER_ERROR_CODEC_NOT_SUPPORTED
- -7: The codec is not supported.
- PLAYER_ERROR_VIDEO_RENDER_FAILED
- -8: Invalid renderer.
- PLAYER_ERROR_INVALID_STATE
- -9: An error with the internal state of the player occurs.
- PLAYER_ERROR_URL_NOT_FOUND
- -10: The URL of the media resource cannot be found.
- PLAYER_ERROR_INVALID_CONNECTION_STATE
- -11: Invalid connection between the player and the Agora Server.
- PLAYER_ERROR_SRC_BUFFER_UNDERFLOW
- -12: The playback buffer is insufficient.
- PLAYER_ERROR_INTERRUPTED
- -13: The playback is interrupted.
- PLAYER_ERROR_NOT_SUPPORTED
- -14: The SDK does not support the method being called.
- PLAYER_ERROR_TOKEN_EXPIRED
- -15: The authentication information of the media resource is expired.
- PLAYER_ERROR_UNKNOWN
- -17: An unknown error.
MediaPlayerEvent
Media player events.
Enumerator
- PLAYER_EVENT_UNKNOWN
- -1: An unknown event.
- PLAYER_EVENT_SEEK_BEGIN
- 0: The player begins to seek to a new playback position.
- PLAYER_EVENT_SEEK_COMPLETE
- 1: The player finishes seeking to a new playback position.
- PLAYER_EVENT_SEEK_ERROR
- 2: An error occurs when seeking to a new playback position.
- PLAYER_EVENT_AUDIO_TRACK_CHANGED
- 5: The audio track used by the player has been changed.
- PLAYER_EVENT_BUFFER_LOW
- 6: The currently buffered data is not enough to support playback.
- PLAYER_EVENT_BUFFER_RECOVER
- 7: The currently buffered data is just enough to support playback.
- PLAYER_EVENT_FREEZE_START
- 8: The audio or video playback freezes.
- PLAYER_EVENT_FREEZE_STOP
- 9: The audio or video playback resumes without freezing.
- PLAYER_EVENT_SWITCH_BEGIN
- 10: The player starts switching the media resource.
- PLAYER_EVENT_SWITCH_COMPLETE
- 11: Media resource switching is complete.
- PLAYER_EVENT_SWITCH_ERROR
- 12: Media resource switching error.
- PLAYER_EVENT_FIRST_DISPLAYED
- 13: The first video frame is rendered.
- PLAYER_EVENT_REACH_CACHE_FILE_MAX_COUNT
- 14: The cached media files reach the limit in number.
- PLAYER_EVENT_REACH_CACHE_FILE_MAX_SIZE
- 15: The cached media files reach the limit in aggregate storage space.
MediaPlayerMetadataType
The type of media metadata.
Enumerator
- PLAYER_METADATA_TYPE_UNKNOWN
- 0: The type is unknown.
- PLAYER_METADATA_TYPE_SEI
- 1: The type is SEI.
MediaPlayerState
The playback state.
Enumerator
- PLAYER_STATE_UNKNOWN
- -1: The player state is unknown.
- PLAYER_STATE_IDLE
- 0: The default state. The media player returns this state code before you open the media resource or after you stop the playback.
- PLAYER_STATE_OPENING
- The SDK is opening the media resource.
- PLAYER_STATE_OPEN_COMPLETED
- The media resource is opened successfully.
- PLAYER_STATE_PLAYING
- The media resource is playing.
- PLAYER_STATE_PAUSED
- The playback is paused.
- PLAYER_STATE_PLAYBACK_COMPLETED
- The playback finishes.
- PLAYER_STATE_PLAYBACK_ALL_LOOPS_COMPLETED
- The loop playback finishes.
- PLAYER_STATE_STOPPED
- The playback stops.
- PLAYER_STATE_FAILED
- 100: The media player fails to play the media resource.
MediaSourceType
Media source type.
Enumerator
- AUDIO_PLAYOUT_SOURCE
- 0: Audio playback device.
- AUDIO_RECORDING_SOURCE
- 1: Audio capturing device.
- PRIMARY_CAMERA_SOURCE
- 2: The primary camera.
- SECONDARY_CAMERA_SOURCE
- 3: The secondary camera.
- UNKNOWN_MEDIA_SOURCE
- 100: Unknown media source.
MediaStreamType
The type of the media stream.
Enumerator
- STREAM_TYPE_UNKNOWN
- 0: The type is unknown.
- STREAM_TYPE_VIDEO
- 1: The video stream.
- STREAM_TYPE_AUDIO
- 2: The audio stream.
- STREAM_TYPE_SUBTITLE
- 3: The subtitle stream.
ORIENTATION_MODE
Video output orientation mode.
Enumerator
- ORIENTATION_MODE_ADAPTIVE
-
0: (Default) The output video always follows the orientation of the captured video. The receiver takes the rotational information passed on from the video encoder. This mode applies to scenarios where video orientation can be adjusted on the receiver.
- If the captured video is in landscape mode, the output video is in landscape mode.
- If the captured video is in portrait mode, the output video is in portrait mode.
- ORIENTATION_FIXED_LANDSCAPE
- 1: In this mode, the SDK always outputs videos in landscape (horizontal) mode. If the captured video is in portrait mode, the video encoder crops it to fit the output. This applies to situations where the receiving end cannot process the rotational information, such as CDN live streaming.
- ORIENTATION_FIXED_PORTRAIT
- 2: In this mode, the SDK always outputs videos in portrait (vertical) mode. If the captured video is in landscape mode, the video encoder crops it to fit the output. This applies to situations where the receiving end cannot process the rotational information, such as CDN live streaming.
MediaPlayerPreloadEvent
Events that occur when media resources are preloaded.
Enumerator
- PLAYER_PRELOAD_EVENT_BEGIN
- 0: Starts preloading media resources.
- PLAYER_PRELOAD_EVENT_COMPLETE
- 1: Preloading media resources is complete.
- PLAYER_PRELOAD_EVENT_ERROR
- 2: An error occurs when preloading media resources.
STREAM_PUBLISH_STATE
The publishing state.
Enumerator
- PUB_STATE_IDLE
- 0: The initial publishing state after joining the channel.
- PUB_STATE_NO_PUBLISHED
-
1: Fails to publish the local stream. Possible reasons:
- The local user calls muteLocalAudioStream(true) or muteLocalVideoStream(true) to stop sending local media streams.
- The local user calls disableAudio or disableVideo to disable the local audio or video module.
- The local user calls enableLocalAudio(false) or enableLocalVideo(false) to disable the local audio or video capture.
- The role of the local user is audience.
- PUB_STATE_PUBLISHING
- 2: Publishing.
- PUB_STATE_PUBLISHED
- 3: Publishes successfully.
VideoCodecProfileType
Video codec profile types.
Enumerator
- BASELINE
- 66: Baseline video codec profile; generally used for video calls on mobile phones.
- MAIN
- 77: Main video codec profile; generally used in mainstream electronics such as MP4 players, portable video players, PSP, and iPads.
- HIGH
- 100: (Default) High video codec profile; generally used in high-resolution live streaming or television.
VideoCodecType
The codec type of the output video.
Enumerator
- H264
- 1: (Default) H.264.
- H265
- 2: H.265.
MIRROR_MODE_TYPE
Video mirror mode.
Enumerator
- MIRROR_MODE_AUTO
- 0: (Default) The SDK determines the mirror mode.
- MIRROR_MODE_ENABLED
- 1: Enable mirror mode.
- MIRROR_MODE_DISABLED
- 2: Disable mirror mode.
VideoSourceType
The capture type of the custom video source.
Enumerator
- VIDEO_SOURCE_CAMERA_PRIMARY
- (Default) The primary camera.
- VIDEO_SOURCE_CAMERA_SECONDARY
- The secondary camera.
- VIDEO_SOURCE_SCREEN_PRIMARY
- The primary screen.
- VIDEO_SOURCE_SCREEN_SECONDARY
- The secondary screen.
- VIDEO_SOURCE_CUSTOM
- The custom video source.
- VIDEO_SOURCE_MEDIA_PLAYER
- The video source from the media player.
- VIDEO_SOURCE_RTC_IMAGE_PNG
- The video source is a PNG image.
- VIDEO_SOURCE_RTC_IMAGE_JPEG
- The video source is a JPEG image.
- VIDEO_SOURCE_RTC_IMAGE_GIF
- The video source is a GIF image.
- VIDEO_SOURCE_REMOTE
- The video source is remote video acquired by the network.
- VIDEO_SOURCE_TRANSCODED
- A transcoded video source.
- VIDEO_SOURCE_UNKNOWN
- An unknown video source.
AdvancedAudioOptions
The advanced options for audio.
public class AdvancedAudioOptions {
  public enum AgoraAudioProcessChannels {
    AGORA_AUDIO_MONO_PROCESSING(1),
    AGORA_AUDIO_STEREO_PROCESSING(2);

    private int value;

    private AgoraAudioProcessChannels(int v) {
      value = v;
    }

    public int getValue() {
      return this.value;
    }
  }

  public AgoraAudioProcessChannels audioProcessingChannels;

  public AdvancedAudioOptions(AgoraAudioProcessChannels channels) {
    audioProcessingChannels = channels;
  }

  public AdvancedAudioOptions() {
    audioProcessingChannels = AgoraAudioProcessChannels.AGORA_AUDIO_MONO_PROCESSING;
  }
}
Attributes
- audioProcessingChannels
- The number of channels for audio preprocessing. See AgoraAudioProcessChannels.
AgoraAudioProcessChannels
The number of channels for audio preprocessing.
In scenarios that require enhanced realism, such as concerts, local users might need to capture stereo audio and send stereo signals to remote users. For example, the singer, guitarist, and drummer are standing in different positions on the stage. The audio capture device captures their stereo audio and sends stereo signals to remote users, who can then hear the song, guitar, and drum from different directions as if they were in the auditorium.
- Preprocessing: call setAdvancedAudioOptions and set audioProcessingChannels in AdvancedAudioOptions to AGORA_AUDIO_STEREO_PROCESSING (2).
- Post-processing: call setAudioProfile [2/2] and set profile to MUSIC_STANDARD_STEREO (3) or MUSIC_HIGH_QUALITY_STEREO (5).
- The stereo setting only takes effect when the SDK uses the media volume. See Volume type.
Enumerator
- AGORA_AUDIO_MONO_PROCESSING
- 1: (Default) Mono.
- AGORA_AUDIO_STEREO_PROCESSING
- 2: Stereo.
AgoraFacePositionInfo
The information of the detected human face.
public static class AgoraFacePositionInfo {
  public int x;
  public int y;
  public int width;
  public int height;
  public int distance;
}
Attributes
- x
-
The x-coordinate (px) of the human face in the local video. Taking the top left corner of the captured video as the origin, the x coordinate represents the horizontal displacement of the top left corner of the human face relative to the origin.
- y
-
The y-coordinate (px) of the human face in the local video. Taking the top left corner of the captured video as the origin, the y coordinate represents the relative longitudinal displacement of the top left corner of the human face to the origin.
- width
-
The width (px) of the human face in the captured video.
- height
-
The height (px) of the human face in the captured video.
- distance
-
The distance between the human face and the device screen (cm).
AudioEncodedFrameObserverConfig
Observer settings for encoded audio.
public class AudioEncodedFrameObserverConfig {
public int postionType;
public int encodingType;
public AudioEncodedFrameObserverConfig() {
postionType = Constants.AUDIO_FILE_RECORDING_PLAYBACK;
encodingType = Constants.AUDIO_ENCODING_TYPE_OPUS_48000_MEDIUM;
}
}
Attributes
- postionType
-
The observation position for the encoded audio:
- AUDIO_ENCODED_FRAME_OBSERVER_POSITION_MIC(1): Only encode the audio of the local user.
- AUDIO_ENCODED_FRAME_OBSERVER_POSITION_PLAYBACK(2): Only encode the audio of all remote users.
- AUDIO_ENCODED_FRAME_OBSERVER_POSITION_MIXED(3): Encode the mixed audio of the local and all remote users.
- encodingType
-
Audio encoding type:
- AUDIO_ENCODING_TYPE_AAC_16000_LOW: AAC encoding format, 16000 Hz sampling rate, low quality. A file with an audio duration of 10 minutes is approximately 1.2 MB after encoding.
- AUDIO_ENCODING_TYPE_AAC_16000_MEDIUM: AAC encoding format, 16000 Hz sampling rate, medium quality. A file with an audio duration of 10 minutes is approximately 2 MB after encoding.
- AUDIO_ENCODING_TYPE_AAC_32000_LOW: AAC encoding format, 32000 Hz sampling rate, low quality. A file with an audio duration of 10 minutes is approximately 1.2 MB after encoding.
- AUDIO_ENCODING_TYPE_AAC_32000_MEDIUM: AAC encoding format, 32000 Hz sampling rate, medium quality. A file with an audio duration of 10 minutes is approximately 2 MB after encoding.
- AUDIO_ENCODING_TYPE_AAC_32000_HIGH: AAC encoding format, 32000 Hz sampling rate, high quality. A file with an audio duration of 10 minutes is approximately 3.5 MB after encoding.
- AUDIO_ENCODING_TYPE_AAC_48000_MEDIUM: AAC encoding format, 48000 Hz sampling rate, medium quality. A file with an audio duration of 10 minutes is approximately 2 MB after encoding.
- AUDIO_ENCODING_TYPE_AAC_48000_HIGH: AAC encoding format, 48000 Hz sampling rate, high quality. A file with an audio duration of 10 minutes is approximately 3.5 MB after encoding.
- AUDIO_ENCODING_TYPE_OPUS_16000_LOW: OPUS encoding format, 16000 Hz sampling rate, low quality. A file with an audio duration of 10 minutes is approximately 2 MB after encoding.
- AUDIO_ENCODING_TYPE_OPUS_16000_MEDIUM: OPUS encoding format, 16000 Hz sampling rate, medium quality. A file with an audio duration of 10 minutes is approximately 2 MB after encoding.
- AUDIO_ENCODING_TYPE_OPUS_48000_MEDIUM: OPUS encoding format, 48000 Hz sampling rate, medium quality. A file with an audio duration of 10 minutes is approximately 2 MB after encoding.
- AUDIO_ENCODING_TYPE_OPUS_48000_HIGH: OPUS encoding format, 48000 Hz sampling rate, high quality. A file with an audio duration of 10 minutes is approximately 3.5 MB after encoding.
AudioFrame
Raw audio data.
public class AudioFrame {
  public byte[] bytes;
  public int sampleRataHz;
  public int bytesPerSample;
  public int channelNums;
  public int samplesPerChannel;
  public long timestamp;

  @CalledByNative
  public AudioFrame(byte[] bytes, int sampleRataHz, int bytesPerSample, int channelNums,
      int samplesPerChannel, long timestamp) {
    this.sampleRataHz = sampleRataHz;
    this.bytesPerSample = bytesPerSample;
    this.channelNums = channelNums;
    this.samplesPerChannel = samplesPerChannel;
    this.timestamp = timestamp;
    this.bytes = bytes;
  }

  @CalledByNative
  public byte[] getBytes() {
    return bytes;
  }

  @CalledByNative
  public int getBytesPerSample() {
    return bytesPerSample;
  }

  @CalledByNative
  public int getChannelNums() {
    return channelNums;
  }

  @CalledByNative
  public int getSampleRataHz() {
    return sampleRataHz;
  }

  @CalledByNative
  public int getSamplesPerChannel() {
    return samplesPerChannel;
  }

  @CalledByNative
  public long getTimestamp() {
    return timestamp;
  }

  @Override
  public String toString() {
    return "AudioFrame{sampleRataHz=" + sampleRataHz + ", bytesPerSample=" + bytesPerSample
        + ", channelNums=" + channelNums + ", samplesPerChannel=" + samplesPerChannel
        + ", timestamp=" + timestamp + '}';
  }
}
Attributes
- samplesPerChannel
- The number of samples per channel in the audio frame.
- bytesPerSample
- The number of bytes per audio sample, which is usually 16-bit (2 bytes).
- channelNums
-
The number of audio channels (the data are interleaved if it is stereo).
- 1: Mono.
- 2: Stereo.
- sampleRataHz
- The sample rate (Hz) of the audio frame.
- bytes
-
The data buffer of the audio frame. When the audio frame uses a stereo channel, the data buffer is interleaved.
The size of the data buffer is as follows: buffer = samplesPerChannel × channelNums × bytesPerSample.
- timestamp
- The timestamp (ms) of the audio frame.
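The buffer-size formula can be checked with a short calculation; for example, a 10 ms stereo frame at 48 kHz with 16-bit samples occupies 480 × 2 × 2 = 1920 bytes. A standalone sketch, not SDK code:

```java
// Computes the AudioFrame data-buffer size:
// bytes = samplesPerChannel × channels × bytesPerSample.
public class AudioFrameSize {
    public static int bufferSize(int samplesPerChannel, int channels, int bytesPerSample) {
        return samplesPerChannel * channels * bytesPerSample;
    }
}
```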
AudioParams
Audio data format.
public class AudioParams {
  public int sampleRate = 0;
  public int channel = 0;
  public int mode = Constants.RAW_AUDIO_FRAME_OP_MODE_READ_ONLY;
  public int samplesPerCall = 0;

  @CalledByNative
  public AudioParams(int sampleRate, int channelCnt, int mode, int samplesPerCall) {
    this.sampleRate = sampleRate;
    this.channel = channelCnt;
    this.mode = mode;
    this.samplesPerCall = samplesPerCall;
  }
}
- getRecordAudioParams: Sets the audio data format for the onRecordAudioFrame callback.
- getPlaybackAudioParams: Sets the audio data format for the onPlaybackAudioFrame callback.
- getMixedAudioParams: Sets the audio data format for the onMixedAudioFrame callback.
- getEarMonitoringAudioParams: Sets the audio data format for the onEarMonitoringAudioFrame callback.
- The SDK calculates the sampling interval through the samplesPerCall, sampleRate, and channel parameters in AudioParams, and triggers the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, and onEarMonitoringAudioFrame callbacks according to the sampling interval.
- Sample interval (sec) = samplePerCall/(sampleRate × channel).
- Ensure that the sample interval ≥ 0.01 (s).
Attributes
- sampleRate
- The audio sample rate (Hz), which can be set as one of the following values:
- 8000.
- (Default) 16000.
- 32000.
- 44100.
- 48000.
- channel
- The number of audio channels, which can be set as either of the following values:
- 1: (Default) Mono.
- 2: Stereo.
- mode
- The use mode of the audio data, which can be set as either of the following values:
- RAW_AUDIO_FRAME_OP_MODE_READ_ONLY(0): Read-only mode. For example, users acquire the data with the Agora SDK and then start the media push.
- RAW_AUDIO_FRAME_OP_MODE_READ_WRITE(2): Read and write mode. For example, users who have their own audio-effect processing module can perform voice pre-processing, such as a voice change.
- samplesPerCall
- The number of samples, such as 1024 for the media push.
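The sampling-interval rule above can be expressed as a small helper; this is illustrative, not part of the SDK:

```java
// Sample interval (seconds) = samplesPerCall / (sampleRate × channel),
// per the AudioParams note; the result must be ≥ 0.01 s.
public class SampleInterval {
    public static double intervalSec(int samplesPerCall, int sampleRate, int channel) {
        return (double) samplesPerCall / (sampleRate * channel);
    }

    public static boolean isValid(int samplesPerCall, int sampleRate, int channel) {
        return intervalSec(samplesPerCall, sampleRate, channel) >= 0.01;
    }
}
```

For example, 1024 samples per call at 16000 Hz mono gives a 0.064 s interval, which satisfies the constraint.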
AgoraRhythmPlayerConfig
The metronome configuration.
public class AgoraRhythmPlayerConfig {
  public int beatsPerMeasure;
  public int beatsPerMinute;

  public AgoraRhythmPlayerConfig() {
    this.beatsPerMeasure = 4;
    this.beatsPerMinute = 60;
  }

  @CalledByNative
  public int getBeatsPerMeasure() {
    return beatsPerMeasure;
  }

  @CalledByNative
  public int getBeatsPerMinute() {
    return beatsPerMinute;
  }
}
Attributes
- beatsPerMeasure
- The number of beats per measure, which ranges from 1 to 9. The default value is 4, which means that each measure contains one downbeat and three upbeats.
- beatsPerMinute
- The beat speed (beats/minute), which ranges from 60 to 360. The default value is 60, which means that the metronome plays 60 beats in one minute.
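The beatsPerMinute value maps directly to the interval between beats; the default of 60 beats per minute yields one beat per second. An illustrative helper, not SDK code:

```java
// Milliseconds between metronome beats for a given beats-per-minute value.
public class BeatInterval {
    public static long beatIntervalMs(int beatsPerMinute) {
        return 60_000L / beatsPerMinute;
    }
}
```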
AudioRecordingConfiguration
Recording configuration.
public class AudioRecordingConfiguration {
  public String filePath;
  public int sampleRate;
  public boolean codec;
  public int fileRecordOption;
  public int quality;
  int recordingChannel;

  public AudioRecordingConfiguration() {
    sampleRate = 32000;
    codec = true;
    fileRecordOption = Constants.AUDIO_FILE_RECORDING_MIXED;
    quality = Constants.AUDIO_RECORDING_QUALITY_MEDIUM;
    recordingChannel = 1;
  }
}
Attributes
- filePath
- The absolute path (including the filename extension) of the recording file. For example: content://com.android.providers.media.documents/document/audio%203A14441.
Attention: Ensure that the directory for the recording file exists and is writable.
- codec
- Whether to encode the audio data:
- true: Encode audio data in AAC.
- false: (Default) Do not encode audio data, but save the recorded audio data directly.
- sampleRate
- Recording sample rate (Hz).
- 16000
- (Default) 32000
- 44100
- 48000
Attention: If you set this parameter to 44100 or 48000, Agora recommends recording WAV files, or AAC files with quality set as AUDIO_RECORDING_QUALITY_MEDIUM or AUDIO_RECORDING_QUALITY_HIGH, for better recording quality.
- fileRecordOption
-
Recording content:
- AUDIO_FILE_RECORDING_MIC (1): Only records the audio of the local user.
- AUDIO_FILE_RECORDING_PLAYBACK (2): Only records the audio of all remote users.
- AUDIO_FILE_RECORDING_MIXED (3): (Default) Records the mixed audio of the local and all remote users.
- quality
-
Recording quality:
- AUDIO_RECORDING_QUALITY_LOW(0): Low quality. For example, the size of an AAC file with a sample rate of 32,000 Hz and a recording duration of 10 minutes is around 1.2 MB.
- AUDIO_RECORDING_QUALITY_MEDIUM(1): (Default) Medium quality. For example, the size of an AAC file with a sample rate of 32,000 Hz and a recording duration of 10 minutes is around 2 MB.
- AUDIO_RECORDING_QUALITY_HIGH(2): High quality. For example, the size of an AAC file with a sample rate of 32,000 Hz and a recording duration of 10 minutes is around 3.5 MB.
- AUDIO_RECORDING_QUALITY_ULTRA_HIGH(3): Ultra high quality. The sample rate is 32 kHz, and the file size is around 7.5 MB after 10 minutes of recording.
Attention: This parameter applies to AAC files only.
- recordingChannel
- The audio channel of recording. The parameter supports the following values:
- 1: (Default) Mono.
- 2: Stereo.
Note: The actual recorded audio channel is related to the audio channel that you capture.
- If the captured audio is mono and recordingChannel is 2, the recorded audio is dual-channel data copied from the mono data, not stereo.
- If the captured audio is dual-channel and recordingChannel is 1, the recorded audio is mono data mixed down from the dual-channel data.
AudioSpectrumInfo
The audio spectrum data.
public class AudioSpectrumInfo {
  private float[] audioSpectrumData;
  private int dataLength;
}
Attributes
- audioSpectrumData
-
The audio spectrum data. Agora divides the audio frequency into 256 frequency domains and reports the energy value of each frequency domain through this parameter. The value range of each energy value is [-300, 1], and the unit is dBFS.
- dataLength
- The audio spectrum data length is 256.
AudioVolumeInfo
The volume information of users.
public static class AudioVolumeInfo {
  public int uid;
  public int volume;
  public int vad;
  public double voicePitch;
}
Attributes
- uid
-
The user ID.
- In the local user's callback, uid = 0.
- In the remote users' callback, uid is the user ID of a remote user whose instantaneous volume is one of the three highest.
- volume
- The volume of the user. The value ranges between 0 (lowest volume) and 255 (highest volume). If the user calls startAudioMixing [2/2], the value of volume is the volume after audio mixing.
- vad
-
Voice activity status of the local user.
- 0: The local user is not speaking.
- 1: The local user is speaking.
Attention:
- The vad parameter does not report the voice activity status of remote users. In a remote user's callback, the value of vad is always 1.
- To use this parameter, you must set reportVad to true when calling enableAudioVolumeIndication.
- voicePitch
-
The voice pitch of the local user. The value ranges between 0.0 and 4000.0.
Attention: The voicePitch parameter does not report the voice pitch of remote users. In the remote users' callback, the value of voicePitch is always 0.0.
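A typical use of this class is picking the loudest speaker from the array delivered by the onAudioVolumeIndication callback. The sketch below uses a minimal stand-in for the class fields; it is illustrative, not SDK code:

```java
// Picks the loudest speaker from an AudioVolumeInfo-style array.
public class LoudestSpeaker {
    // Minimal stand-in for the AudioVolumeInfo fields used here; real code
    // receives instances of the SDK class shown above.
    public static class AudioVolumeInfo {
        public int uid;
        public int volume; // 0 (lowest) to 255 (highest)
    }

    public static AudioVolumeInfo of(int uid, int volume) {
        AudioVolumeInfo info = new AudioVolumeInfo();
        info.uid = uid;
        info.volume = volume;
        return info;
    }

    // Returns the uid of the speaker with the highest instantaneous volume,
    // or -1 for an empty array. uid == 0 denotes the local user.
    public static int loudestUid(AudioVolumeInfo[] speakers) {
        int bestUid = -1;
        int bestVolume = -1;
        for (AudioVolumeInfo s : speakers) {
            if (s.volume > bestVolume) {
                bestVolume = s.volume;
                bestUid = s.uid;
            }
        }
        return bestUid;
    }
}
```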
BeautyOptions
Image enhancement options.
public class BeautyOptions {
  public static final int LIGHTENING_CONTRAST_LOW = 0;
  public static final int LIGHTENING_CONTRAST_NORMAL = 1;
  public static final int LIGHTENING_CONTRAST_HIGH = 2;

  public int lighteningContrastLevel;
  public float lighteningLevel;
  public float smoothnessLevel;
  public float rednessLevel;
  public float sharpnessLevel;
}
Attributes
- lighteningContrastLevel
-
The contrast level, used with the lighteningLevel parameter. The larger the value, the greater the contrast between light and dark.
- LIGHTENING_CONTRAST_LOW(0): Low contrast level.
- LIGHTENING_CONTRAST_NORMAL(1): Normal contrast level.
- LIGHTENING_CONTRAST_HIGH(2): High contrast level.
- lighteningLevel
-
The brightening level, in the range [0.0,1.0], where 0.0 means the original brightening. The default value is 0.6. The higher the value, the greater the degree of brightening.
- smoothnessLevel
-
The smoothness level, in the range [0.0,1.0], where 0.0 means the original smoothness. The default value is 0.5. The higher the value, the greater the smoothness level.
- rednessLevel
-
The redness level, in the range [0.0,1.0], where 0.0 means the original redness. The default value is 0.1. The higher the value, the greater the redness level.
- sharpnessLevel
-
The sharpness level, in the range [0.0,1.0], where 0.0 means the original sharpness. The default value is 0.3. The larger the value, the greater the sharpness level.
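Since all four level parameters share the [0.0, 1.0] range, it can be convenient to clamp input before assigning it to a BeautyOptions instance. An illustrative helper, not part of the SDK:

```java
// Clamps a beauty level into the [0.0, 1.0] range accepted by
// lighteningLevel, smoothnessLevel, rednessLevel, and sharpnessLevel.
public class BeautyLevels {
    public static float clampLevel(float level) {
        return Math.max(0.0f, Math.min(1.0f, level));
    }
}
```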
CacheStatistics
Statistics about the media files being cached.
public class CacheStatistics {
  @CalledByNative
  public CacheStatistics() {
    fileSize = 0;
    cacheSize = 0;
    downloadSize = 0;
  }

  private long fileSize;
  private long cacheSize;
  private long downloadSize;
}
Attributes
- fileSize
- The size (bytes) of the media file being played.
- cacheSize
- The size (bytes) of the media file that you want to cache.
- downloadSize
- The size (bytes) of the media file that has been downloaded.
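These byte counts are commonly combined into a download-progress indicator. A standalone sketch, not SDK code:

```java
// Download progress as a percentage, derived from CacheStatistics-style
// byte counts; returns 0 when the file size is unknown (zero).
public class CacheProgress {
    public static int progressPercent(long downloadSize, long fileSize) {
        if (fileSize <= 0) {
            return 0;
        }
        return (int) (downloadSize * 100 / fileSize);
    }
}
```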
CameraCapturerConfiguration
The camera capturer preference.
public class CameraCapturerConfiguration {
  public enum CAMERA_DIRECTION {
    CAMERA_REAR(0),
    CAMERA_FRONT(1);

    private int value;

    private CAMERA_DIRECTION(int v) {
      value = v;
    }

    public int getValue() {
      return this.value;
    }
  }

  public CAMERA_DIRECTION cameraDirection;

  static public class CaptureFormat {
    public int width;
    public int height;
    public int fps;
  }

  public CaptureFormat captureFormat;
}
Attributes
- cameraDirection
- The camera direction. See CAMERA_DIRECTION.
- captureFormat
- See CaptureFormat.
CAMERA_DIRECTION
The camera direction.
Enumerator
- CAMERA_REAR
- The rear camera.
- CAMERA_FRONT
- The front camera.
CaptureFormat
The format of the video frame.
static public class CaptureFormat {
  public int width;
  public int height;
  public int fps;
}
Attributes
- width
- The width (px) of the video frame.
- height
- The height (px) of the video frame.
- fps
- The video frame rate (fps).
ChannelMediaInfo
The definition of ChannelMediaInfo.
public class ChannelMediaInfo {
public String channelName = null;
public String token = null;
public int uid = 0;
}
Attributes
- channelName
- The channel name.
- token
- The token that enables the user to join the channel.
- uid
- The user ID.
ChannelMediaOptions
The channel media options.
public class ChannelMediaOptions { public Boolean publishCameraTrack; public Boolean publishScreenCaptureVideo; public Boolean publishScreenCaptureAudio; public Boolean publishCustomAudioTrack; public Boolean publishDirectCustomAudioTrack; public Boolean publishCustomVideoTrack; public Boolean publishEncodedVideoTrack; public Boolean publishMediaPlayerAudioTrack; public Boolean publishMediaPlayerVideoTrack; public Boolean publishRhythmPlayerTrack; public Integer publishMediaPlayerId; public Boolean publishMicrophoneTrack; public Boolean autoSubscribeAudio; public Boolean autoSubscribeVideo; public Boolean enableAudioRecordingOrPlayout; public Integer clientRoleType; public Integer audienceLatencyLevel; public Integer defaultVideoStreamType; public Integer channelProfile; public Integer audioDelayMs; public Integer mediaPlayerAudioDelayMs; public String token; public Boolean enableBuiltInMediaEncryption; public Integer publishCustomAudioSourceId; public Integer customVideoTrackId; public Boolean isAudioFilterable; public Boolean isInteractiveAudience; }
Note: Multiple publishing options can be set as true at the same time, but only one of publishCameraTrack, publishScreenCaptureVideo, publishCustomVideoTrack, or publishEncodedVideoTrack can be set as true.
Attributes
- publishCameraTrack
- Whether to publish the video captured by the camera:
true
: (Default) Publish the video captured by the camera.false
: Do not publish the video captured by the camera.
- publishMicrophoneTrack
- Whether to publish the audio captured by the microphone:
true
: (Default) Publish the audio captured by the microphone.false
: Do not publish the audio captured by the microphone.
Note: As of v4.0.0, the parameter name is changed from publishAudioTrack to publishMicrophoneTrack. - publishScreenCaptureVideo
-
Whether to publish the video captured from the screen:
true
: Publish the video captured from the screen.false
: (Default) Do not publish the video captured from the screen.
- publishScreenCaptureAudio
-
Whether to publish the audio captured from the screen:
true
: Publish the audio captured from the screen.false
: (Default) Do not publish the audio captured from the screen.
- publishCustomAudioTrack
- Whether to publish the audio captured from a custom source:
true
: Publish the audio captured from the custom source.false
: (Default) Do not publish the audio captured from the custom source.
- publishCustomAudioSourceId
- The ID of the custom audio source to publish. The default value is 0.
If you have set sourceNumber in setExternalAudioSource [2/2] to a value greater than 1, the SDK creates the corresponding number of custom audio tracks and assigns an ID to each audio track, starting from 0.
- publishCustomVideoTrack
- Whether to publish the video captured from a custom source:
true
: Publish the video captured from the custom source.false
: (Default) Do not publish the video captured from the custom source.
- publishEncodedVideoTrack
- Whether to publish the encoded video:
true
: Publish the encoded video.false
: (Default) Do not publish the encoded video.
- publishMediaPlayerAudioTrack
- Whether to publish the audio from the media player:
true
: Publish the audio from the media player.false
: (Default) Do not publish the audio from the media player.
- publishMediaPlayerVideoTrack
- Whether to publish the video from the media player:
true
: Publish the video from the media player.false
: (Default) Do not publish the video from the media player.
- autoSubscribeAudio
- Whether to automatically subscribe to all remote audio streams when the user joins a channel:
true
: (Default) Automatically subscribe to all remote audio streams.false
: Do not automatically subscribe to any remote audio streams.
- autoSubscribeVideo
- Whether to automatically subscribe to all remote video streams when the user joins the channel:
true
: (Default) Automatically subscribe to all remote video streams.false
: Do not automatically subscribe to any remote video streams.
- enableAudioRecordingOrPlayout
- Whether to enable audio capturing or playback:
true
: (Default) Enable audio capturing or playback.false
: Do not enable audio capturing or playback.
- publishMediaPlayerId
- The ID of the media player to be published. The default value is 0.
- clientRoleType
-
The user role:
- CLIENT_ROLE_BROADCASTER(1): Host.
- CLIENT_ROLE_AUDIENCE(2): Audience.
- audienceLatencyLevel
- The latency level of an audience member in interactive live streaming.
- AUDIENCE_LATENCY_LEVEL_LOW_LATENCY(1): Low latency.
- AUDIENCE_LATENCY_LEVEL_ULTRA_LOW_LATENCY(2): (Default) Ultra low latency.
- defaultVideoStreamType
-
The default video-stream type:
- VIDEO_STREAM_HIGH(0): High-quality stream, that is, a high-resolution and high-bitrate video stream.
- VIDEO_STREAM_LOW(1): Low-quality stream, that is, a low-resolution and low-bitrate video stream.
- channelProfile
-
The channel profile.
- CHANNEL_PROFILE_COMMUNICATION(0): Communication. Use this profile when there are only two users in the channel.
- CHANNEL_PROFILE_LIVE_BROADCASTING(1): Live streaming. Use this profile when there are more than two users in the channel.
- CHANNEL_PROFILE_GAME(2): This profile is deprecated.
- CHANNEL_PROFILE_CLOUD_GAMING(3): Interaction. The scenario is optimized for latency. Use this profile if the use case requires frequent interactions between users.
- token
-
(Optional) The token generated on your server for authentication. See Authenticate Your Users with Token.
CAUTION:- This parameter takes effect only when calling updateChannelMediaOptions or updateChannelMediaOptionsEx.
- Ensure that the App ID, channel name, and user name used for creating the token are the same as those used by the create [2/2] method for initializing the RTC engine, and those used by the joinChannel [2/2] and joinChannelEx methods for joining the channel.
- publishRhythmPlayerTrack
- Whether to publish the sound of a metronome to remote users:
true
: (Default) Publish the sound of the metronome. Both the local user and remote users can hear the metronome.false
: Do not publish the sound of the metronome. Only the local user can hear the metronome.
- isInteractiveAudience
- Whether to enable interactive mode:
true
: Enable interactive mode. Once this mode is enabled and the user role is set as audience, the user can receive remote video streams with low latency.false
: (Default) Do not enable interactive mode. If this mode is disabled, the user receives the remote video streams in default settings.
Attention:- This parameter only applies to scenarios involving cohosting across channels. The cohosts need to call the joinChannelEx method to join the other host's channel as an audience member, and set isInteractiveAudience to
true
. - This parameter takes effect only when the user role is CLIENT_ROLE_AUDIENCE.
- customVideoTrackId
- The video track ID returned by calling the createCustomVideoTrack method. The default value is 0.
- isAudioFilterable
- Whether the audio stream being published is filtered according to the volume algorithm:
true
: (Default) The audio stream is filtered. If the audio stream filter is not enabled, this setting does not take effect.false
: The audio stream is not filtered.
Attention: If you need to enable this function, contact support@agora.io.
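A typical way to use these options is to pass them when joining a channel. The sketch below assumes an initialized RtcEngine named `engine` and a `token` string from your server; adjust the options to your scenario.

```java
// Sketch: join as a host, publishing microphone and camera,
// and subscribing to all remote streams.
ChannelMediaOptions options = new ChannelMediaOptions();
options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
options.channelProfile = Constants.CHANNEL_PROFILE_LIVE_BROADCASTING;
options.publishMicrophoneTrack = true;
options.publishCameraTrack = true;
options.autoSubscribeAudio = true;
options.autoSubscribeVideo = true;
engine.joinChannel(token, "demo_channel", 0, options);
```

Options left unset keep their default values, so you only need to assign the fields your scenario changes.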
ChannelMediaRelayConfiguration
The definition of ChannelMediaRelayConfiguration.
public class ChannelMediaRelayConfiguration {
private ChannelMediaInfo srcInfo = null;
private Map<String, ChannelMediaInfo> destInfos = null;
public ChannelMediaRelayConfiguration() {
destInfos = new HashMap<String, ChannelMediaInfo>();
srcInfo = new ChannelMediaInfo(null, null, 0);
}
public void setSrcChannelInfo(ChannelMediaInfo srcInfo) {
this.srcInfo = srcInfo;
}
}
Attributes
- srcInfo
-
The information of the source channel ChannelMediaInfo. It contains the following members:
channelName
: The name of the source channel. The default value isNULL
, which means the SDK applies the name of the current channel.uid
: The unique ID to identify the relay stream in the source channel. The default value is 0, which means the SDK generates a randomuid
. You must set it as 0.token
: Thetoken
for joining the source channel. It is generated with thechannelName
anduid
you set insrcInfo
.- If you have not enabled the App Certificate, set this parameter as the default value
NULL
, which means the SDK applies the App ID. - If you have enabled the App Certificate, you must use the
token
generated with thechannelName
anduid
, and theuid
must be set as 0.
- destInfos
-
The information of the destination channel ChannelMediaInfo. It contains the following members:
channelName
: The name of the destination channel.uid
: The unique ID to identify the relay stream in the destination channel. The value ranges from 0 to (2³²-1). The default value is 0, which means the SDK generates a random UID. Do not set this parameter as the UID of the host in the destination channel, and ensure that this UID is different from any other UID in the channel.token
: Thetoken
for joining the destination channel. It is generated with thechannelName
anduid
you set indestInfos
.- If you have not enabled the App Certificate, set this parameter as the default value
NULL
, which means the SDK applies the App ID. - If you have enabled the App Certificate, you must use the
token
generated with thechannelName
anduid
.
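A relay configuration might be assembled as follows. This sketch assumes an initialized RtcEngine named `engine`, a `destToken` generated for the destination channel, and a `setDestChannelInfo(String, ChannelMediaInfo)` setter that is not shown in the snippet above.

```java
// Sketch: relay the host's stream from the current channel to "destChannel".
ChannelMediaRelayConfiguration config = new ChannelMediaRelayConfiguration();
// null channel name and uid 0: use the current channel as the source.
config.setSrcChannelInfo(new ChannelMediaInfo(null, null, 0));
config.setDestChannelInfo("destChannel", new ChannelMediaInfo("destChannel", destToken, 0));
engine.startChannelMediaRelay(config);
```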
ContentInspectConfig
Configuration of video screenshot and upload.
public class ContentInspectConfig {
public final static int CONTENT_INSPECT_TYPE_INVALID = 0;
public final static int CONTENT_INSPECT_TYPE_SUPERVISE = 2;
public static final int MAX_CONTENT_INSPECT_MODULE_COUNT = 32;
public String extraInfo;
public ContentInspectModule[] modules;
public int moduleCount;
public static class ContentInspectModule {
public int type;
public int interval;
public ContentInspectModule() {
type = CONTENT_INSPECT_TYPE_INVALID;
interval = 0;
}
}
}
Parameters
- CONTENT_INSPECT_TYPE_INVALID
- (Default) No actual function. Do not set type to this value.
- CONTENT_INSPECT_TYPE_SUPERVISE
- Video screenshot and upload. The SDK takes screenshots of videos sent by local users and uploads them.
- extraInfo
-
Additional information on the video content (maximum length: 1024 Bytes).
The SDK sends the screenshots and additional information on the video content to the Agora server. Once the video screenshot and upload process is completed, the Agora server sends the additional information and the callback notification to your server.
- modules
-
Functional module. See ContentInspectModule.
A maximum of 32 ContentInspectModule instances can be configured, and the value range of moduleCount is an integer in [1,32].
Attention: A function module can only be configured with one instance at most. Currently only the video screenshot and upload function is supported. - moduleCount
- The number of functional modules, that is, the number of configured ContentInspectModule instances, which must be the same as the number of instances configured in modules. The maximum number is 32.
ContentInspectModule
A structure used to configure the frequency of video screenshot and upload.
public static class ContentInspectModule {
public int type;
public int interval;
public ContentInspectModule() {
type = CONTENT_INSPECT_TYPE_INVALID;
interval = 0;
}
}
Attributes
- type
-
Types of functional modules:
- CONTENT_INSPECT_TYPE_INVALID(0): (Default) This module has no actual function. Do not set to this value.
- CONTENT_INSPECT_TYPE_MODERATION(1): Video content moderation. SDK takes screenshots, inspects video content of the video stream in the channel, and uploads the screenshots and moderation results.
- CONTENT_INSPECT_TYPE_SUPERVISE(2): Video screenshot and upload. SDK takes screenshots of the video stream in the channel and uploads them.
- interval
- The frequency (s) of video screenshot and upload. The value must be greater than 0. The default value is 0, which means the SDK does not take screenshots. Agora recommends setting the value as 10; you can also adjust it according to your business needs.
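Putting the two classes together, a screenshot-and-upload configuration might look like this. The sketch assumes an initialized RtcEngine named `engine` and an `enableContentInspect(boolean, ContentInspectConfig)` method; treat both as assumptions.

```java
// Sketch: take and upload a screenshot of the local video every 10 seconds.
ContentInspectConfig config = new ContentInspectConfig();
ContentInspectConfig.ContentInspectModule module = new ContentInspectConfig.ContentInspectModule();
module.type = ContentInspectConfig.CONTENT_INSPECT_TYPE_SUPERVISE;
module.interval = 10; // seconds between screenshots
config.modules = new ContentInspectConfig.ContentInspectModule[] {module};
config.moduleCount = 1; // must match the number of configured modules
engine.enableContentInspect(true, config);
```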
ClientRoleOptions
The detailed options of a user.
public class ClientRoleOptions {
public int audienceLatencyLevel;
@CalledByNative
public int getAudienceLatencyLevel() {
return audienceLatencyLevel;
}
}
Attributes
- audienceLatencyLevel
- The latency level of an audience member in interactive live streaming.
- AUDIENCE_LATENCY_LEVEL_LOW_LATENCY(1): Low latency.
- AUDIENCE_LATENCY_LEVEL_ULTRA_LOW_LATENCY(2): (Default) Ultra low latency.
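For example, an app might switch a user to the audience role with ultra-low latency. The sketch assumes an initialized RtcEngine named `engine` and the Constants values shown elsewhere in this reference.

```java
// Sketch: become an audience member with ultra-low latency.
ClientRoleOptions options = new ClientRoleOptions();
options.audienceLatencyLevel = Constants.AUDIENCE_LATENCY_LEVEL_ULTRA_LOW_LATENCY;
engine.setClientRole(Constants.CLIENT_ROLE_AUDIENCE, options);
```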
ColorEnhanceOptions
The color enhancement options.
public class ColorEnhanceOptions {
  public float strengthLevel;
  public float skinProtectLevel;
  public ColorEnhanceOptions() {
    strengthLevel = 0.5f;
    skinProtectLevel = 1f;
  }
  public ColorEnhanceOptions(float strength, float skinProtect) {
    strengthLevel = strength;
    skinProtectLevel = skinProtect;
  }
}
Attributes
- strengthLevel
- The level of color enhancement. The value range is [0.0, 1.0], where 0.0 means no color enhancement is applied to the video. The higher the value, the higher the level of color enhancement. The default value is 0.5.
- skinProtectLevel
- The level of skin tone protection. The value range is [0.0, 1.0], where 0.0 means no skin tone protection. The higher the value, the higher the level of skin tone protection. The default value is 1.0.
- When the level of color enhancement is higher, the portrait skin tone can be significantly distorted, so you need to set the level of skin tone protection.
- When the level of skin tone protection is higher, the color enhancement effect can be slightly reduced.
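As a sketch, the options can be applied via a setColorEnhanceOptions call; this assumes an initialized RtcEngine named `engine` and that such a method accepts an enable flag plus the options object.

```java
// Sketch: moderate color enhancement with full skin-tone protection.
ColorEnhanceOptions options = new ColorEnhanceOptions(0.5f, 1.0f);
engine.setColorEnhanceOptions(true, options);
```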
DataStreamConfig
The configurations for the data stream.
public class DataStreamConfig {
  public boolean syncWithAudio = false;
  public boolean ordered = false;
}
The following table shows the SDK behaviors under different parameter settings:
syncWithAudio |
ordered |
SDK behaviors |
---|---|---|
false |
false |
The SDK triggers the onStreamMessage callback immediately after the receiver receives a data packet. |
true |
false |
If the data packet delay is within the audio delay, the SDK triggers the onStreamMessage callback when the synchronized audio packet is played out. If the data packet delay exceeds the audio delay, the SDK triggers the onStreamMessage callback as soon as the data packet is received. |
false |
true |
If the delay of a data packet is less than five seconds, the SDK corrects the order of the data packet. If the delay of a data packet exceeds five seconds, the SDK discards the data packet. |
true |
true |
If the delay of the data packet is within the range of the audio delay, the SDK corrects the order of the data packet. If the delay of a data packet exceeds the audio delay, the SDK discards this data packet. |
Attributes
- syncWithAudio
-
Whether to synchronize the data packet with the published audio packet.
true
: Synchronize the data packet with the audio packet.false
: Do not synchronize the data packet with the audio packet.
- ordered
-
Whether the SDK guarantees that the receiver receives the data in the sent order.
true
: Guarantee that the receiver receives the data in the sent order.false
: Do not guarantee that the receiver receives the data in the sent order.
Attention: Do not set this parameter as true if you need the receiver to receive the data packet immediately.
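The table above can be exercised as follows. This sketch assumes an initialized RtcEngine named `engine` with `createDataStream(DataStreamConfig)` and `sendStreamMessage(int, byte[])` methods.

```java
// Sketch: create an ordered, audio-synchronized data stream and send a message.
DataStreamConfig config = new DataStreamConfig();
config.syncWithAudio = true;
config.ordered = true;
int streamId = engine.createDataStream(config); // negative value indicates an error
if (streamId >= 0) {
  engine.sendStreamMessage(streamId, "hello".getBytes());
}
```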
DeviceInfo
The audio device information.
public class DeviceInfo {
public boolean isLowLatencyAudioSupported;
@CalledByNative
public DeviceInfo(boolean isLowLatencyAudioSupported) {
this.isLowLatencyAudioSupported = isLowLatencyAudioSupported;
}
}
Attributes
- isLowLatencyAudioSupported
- Whether the audio device supports ultra-low-latency capture and playback:
true
: The device supports ultra-low-latency capture and playback.false
: The device does not support ultra-low-latency capture and playback.
EchoTestConfiguration
The configuration of the audio and video call loop test.
public class EchoTestConfiguration {
  public SurfaceView view = null;
  public boolean enableAudio = true;
  public boolean enableVideo = true;
  public String token = null;
  public String channelId = null;
  @CalledByNative
  public EchoTestConfiguration(SurfaceView view, boolean enableAudio, boolean enableVideo, String token, String channelId) {
    this.view = view;
    this.enableAudio = enableAudio;
    this.enableVideo = enableVideo;
    this.token = token;
    this.channelId = channelId;
  }
  @CalledByNative
  public EchoTestConfiguration() {
    this.view = null;
    this.enableAudio = true;
    this.enableVideo = true;
    this.token = null;
    this.channelId = null;
  }
}
Attributes
- view
- The view used to render the local user's video. This parameter is only applicable to scenarios testing video devices, that is, when enableVideo is true.
- enableAudio
- Whether to enable the audio device for the loop test:
- true: (Default) Enable the audio device. To test the audio device, set this parameter as true.
- false: Disable the audio device.
- enableVideo
- Whether to enable the video device for the loop test:
- true: (Default) Enable the video device. To test the video device, set this parameter as true.
- false: Disable the video device.
- token
- The token used to secure the audio and video call loop test. If you do not enable App Certificate in Agora Console, you do not need to pass a value in this parameter; if you have enabled App Certificate in Agora Console, you must pass a token in this parameter; the
uid
used when you generate the token must be 0xFFFFFFFF, and the channel name used must be the channel name that identifies each audio and video call loop tested. For server-side token generation, see Authenticate Your Users with Tokens. - channelId
- The channel name that identifies each audio and video call loop. To ensure proper loop test functionality, the channel name passed in to identify each loop test cannot be the same when users of the same project (App ID) perform audio and video call loop tests on different devices.
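A loop test might be configured as below. The sketch assumes an initialized RtcEngine named `engine`, a SurfaceView named `localSurfaceView`, and a token named `echoTestToken`; it also assumes startEchoTest accepts this configuration object and that stopEchoTest ends the test.

```java
// Sketch: run an audio and video loop test on a dedicated test channel.
EchoTestConfiguration config = new EchoTestConfiguration();
config.view = localSurfaceView;      // renders the local video during the test
config.enableAudio = true;
config.enableVideo = true;
config.token = echoTestToken;        // required only if App Certificate is enabled
config.channelId = "echo_test_demo"; // must be unique per concurrent test
engine.startEchoTest(config);
// After verifying that you can see and hear yourself:
engine.stopEchoTest();
```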
EncodedVideoFrameInfo
Information about externally encoded video frames.
public class EncodedVideoFrameInfo { public int codecType; public int width; public int height; public int framesPerSecond; public int frameType; public int rotation; public int trackId; public long captureTimeMs; public int uid; public int streamType; public EncodedVideoFrameInfo() { codecType = Constants.VIDEO_CODEC_H264; width = 0; height = 0; framesPerSecond = 0; frameType = Constants.VIDEO_FRAME_TYPE_BLANK_FRAME; rotation = Constants.VIDEO_ORIENTATION_0; trackId = 0; captureTimeMs = 0; uid = 0; streamType = Constants.VIDEO_STREAM_HIGH; } @CalledByNative public EncodedVideoFrameInfo(int codecType, int width, int height, int framesPerSecond, int frameType, int rotation, int trackId, long captureTimeMs, int uid, int streamType) { this.codecType = codecType; this.width = width; this.height = height; this.framesPerSecond = framesPerSecond; this.frameType = frameType; this.rotation = rotation; this.trackId = trackId; this.captureTimeMs = captureTimeMs; this.uid = uid; this.streamType = streamType; } @CalledByNative public int getCodecType() { return codecType; } @CalledByNative public int getWidth() { return width; } @CalledByNative public int getHeight() { return height; } @CalledByNative public int getFramesPerSecond() { return framesPerSecond; } @CalledByNative public int getFrameType() { return frameType; } @CalledByNative public int getRotation() { return rotation; } @CalledByNative public int getTrackId() { return trackId; } @CalledByNative public long getCaptureTimeMs() { return captureTimeMs; } @CalledByNative public int getUid() { return uid; } @CalledByNative public int getStreamType() { return streamType; } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("codecType=").append(codecType); sb.append(" width=").append(width); sb.append(" height=").append(height); sb.append(" framesPerSecond=").append(framesPerSecond); sb.append(" frameType=").append(frameType); sb.append(" rotation=").append(rotation); sb.append(" 
trackId=").append(trackId); sb.append(" captureTimeMs=").append(captureTimeMs); sb.append(" uid=").append(uid); sb.append(" streamType=").append(streamType); return sb.toString(); } }
Attributes
- codecType
- The codec type of the video:
- 1: VP8
- 2: (Default) H264.
- width
- Width (pixel) of the video frame.
- height
- Height (pixel) of the video frame.
- framesPerSecond
-
The number of video frames per second.
When this parameter is not
0
, you can use it to calculate the Unix timestamp of externally encoded video frames. - frameType
- The video frame type.
- 0: (Default) VIDEO_FRAME_TYPE_BLANK_FRAME, a blank frame.
- 3: VIDEO_FRAME_TYPE_KEY_FRAME, a key frame.
- 4: VIDEO_FRAME_TYPE_DELTA_FRAME, a Delta frame.
- 5: VIDEO_FRAME_TYPE_B_FRAME, a B frame.
- 6: VIDEO_FRAME_TYPE_UNKNOW, an unknown frame.
- rotation
- Rotation information of the video frame, as the following:
- 0: (Default) 0 degree.
- 90: 90 degrees.
- 180: 180 degrees.
- 270: 270 degrees.
- trackId
- Reserved for future use.
- captureTimeMs
- The Unix timestamp (ms) for capturing the external encoded video frames.
- uid
- The user ID to push the externally encoded video frame.
- streamType
- The type of video streams.
EncryptionConfig
Built-in encryption configurations.
public class EncryptionConfig {
  public enum EncryptionMode {
    AES_128_XTS(1),
    AES_128_ECB(2),
    AES_256_XTS(3),
    SM4_128_ECB(4),
    AES_128_GCM(5),
    AES_256_GCM(6),
    AES_128_GCM2(7),
    AES_256_GCM2(8),
    MODE_END(9);
    private int value;
    private EncryptionMode(int v) {
      value = v;
    }
    public int getValue() {
      return this.value;
    }
  }
  public EncryptionMode encryptionMode = EncryptionMode.AES_128_GCM2;
  public String encryptionKey = null;
  public final byte[] encryptionKdfSalt = new byte[32];
}
Attributes
- encryptionMode
-
The built-in encryption mode. See EncryptionMode. Agora recommends using
AES_128_GCM2
orAES_256_GCM2
encrypted mode. These two modes support the use of salt for higher security. - encryptionKey
-
Encryption key in string type with unlimited length. Agora recommends using a 32-byte key.
Attention: If you do not set an encryption key or set it asNULL
, you cannot use the built-in encryption, and the SDK returns-2
. - encryptionKdfSalt
-
Salt, 32 bytes in length. Agora recommends that you use OpenSSL to generate salt on the server side. See Media Stream Encryption for details.
Attention: This parameter takes effect only inAES_128_GCM2
orAES_256_GCM2
encrypted mode. In this case, ensure that this parameter is not0
.
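An encryption setup might look like this. The sketch assumes an initialized RtcEngine named `engine` with an `enableEncryption(boolean, EncryptionConfig)` method, plus a `keyFromServer` string and a 32-byte `saltFromServer` array delivered from your server; all of these names are placeholders.

```java
// Sketch: enable built-in encryption in AES_128_GCM2 mode.
EncryptionConfig config = new EncryptionConfig();
config.encryptionMode = EncryptionConfig.EncryptionMode.AES_128_GCM2;
config.encryptionKey = keyFromServer; // a 32-byte key is recommended
// Copy the 32-byte salt generated on your server into the config.
System.arraycopy(saltFromServer, 0, config.encryptionKdfSalt, 0, 32);
engine.enableEncryption(true, config);
```

All users in the channel must use the same mode, key, and salt for encrypted communication to succeed.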
AgoraVideoFrame
The external video frame.
public class AgoraVideoFrame { public static final int FORMAT_NONE = -1; public static final int FORMAT_TEXTURE_2D = 10; public static final int FORMAT_TEXTURE_OES = 11; public static final int FORMAT_I420 = 1; public static final int FORMAT_BGRA = 2; public static final int FORMAT_NV21 = 3; public static final int FORMAT_RGBA = 4; public static final int FORMAT_I422 = 16; public static final int BUFFER_TYPE_NONE = -1; public static final int BUFFER_TYPE_BUFFER = 1; public static final int BUFFER_TYPE_ARRAY = 2; public static final int BUFFER_TYPE_TEXTURE = 3; public int format; public long timeStamp; public int stride; public int height; public int textureID; public boolean syncMode; public float[] transform; public javax.microedition.khronos.egl.EGLContext eglContext11; public android.opengl.EGLContext eglContext14; public byte[] buf; public int cropLeft; public int cropTop; public int cropRight; public int cropBottom; public int rotation; }
- Deprecated:
- This class is deprecated.
Attributes
- format
- The format of the video data:
- 10: TEXTURE_2D
- 11: TEXTURE_OES, usually the data captured by the camera is in this format.
- 1: I420
- 2: BGRA
- 3: NV21
- 4: RGBA
- 16: I422
- buf
- Video frame buffer.
- stride
- Line spacing of the incoming video frame, which must be in pixels instead of bytes. For textures, it is the width of the texture.
- height
- Height of the incoming video frame.
- textureID
- Texture ID of the frame. This parameter only applies to video data in Texture format.
- syncMode
- Whether to enable the synchronization mode. When enabled, the SDK waits until the texture processing is complete. This parameter only applies to video data in Texture format.
true
: Enable sync mode.false
: Disable sync mode.
- transform
- Additional transform of Texture frames. This parameter only applies to video data in Texture format.
- eglContext11
- EGLContext11. This parameter only applies to video data in Texture format.
- eglContext14
- EGLContext14. This parameter only applies to video data in Texture format.
- The MetaData buffer. The default value is NULL. This parameter only applies to video data in Texture format.
- The MetaData size. The default value is 0. This parameter only applies to video data in Texture format.
. - cropLeft
- Raw data related parameter. The number of pixels trimmed from the left. The default value is 0.
- cropTop
- Raw data related parameter. The number of pixels trimmed from the top. The default value is 0.
- cropRight
- Raw data related parameter. The number of pixels trimmed from the right. The default value is 0.
- cropBottom
- Raw data related parameter. The number of pixels trimmed from the bottom. The default value is 0.
- rotation
- Raw data related parameter. The clockwise rotation of the video frame. You can set the rotation angle as 0, 90, 180, or 270. The default value is 0.
- timeStamp
- Timestamp (ms) of the incoming video frame. An incorrect timestamp results in frame loss or unsynchronized audio and video.
ImageTrackOptions
Image configurations
public class ImageTrackOptions {
  private String imageUrl;
  private int fps;
  public String getImageUrl() {
    return imageUrl;
  }
  public int getFps() {
    return fps;
  }
  public ImageTrackOptions(String url, int fps) {
    this.imageUrl = url;
    this.fps = fps;
  }
}
Attributes
- imageUrl
- The URL of the image that you want to use to replace the video feeds. The image must be in PNG format. This method supports adding an image from the local absolute or relative file path.
- fps
- The frame rate of the video streams being published. The value range is [1,30]. The default value is 1.
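A usage sketch, assuming an initialized RtcEngine named `engine` with an `enableVideoImageSource(boolean, ImageTrackOptions)` method; the file path is a placeholder.

```java
// Sketch: replace the published camera feed with a local PNG at 1 fps.
ImageTrackOptions options = new ImageTrackOptions("/sdcard/placeholder.png", 1);
engine.enableVideoImageSource(true, options);
```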
LastmileProbeConfig
Configurations of the last-mile network test.
public class LastmileProbeConfig {
  public boolean probeUplink;
  public boolean probeDownlink;
  public int expectedUplinkBitrate;
  public int expectedDownlinkBitrate;
  public LastmileProbeConfig() {}
}
Attributes
- probeUplink
-
Sets whether to test the uplink network. Some users, for example, the audience members in a LIVE_BROADCASTING channel, do not need such a test.
true
: Test the uplink network.false
: Do not test the uplink network.
- probeDownlink
-
Sets whether to test the downlink network:
true
: Test the downlink network.false
: Do not test the downlink network.
- expectedUplinkBitrate
- The expected maximum uplink bitrate (bps) of the local user. The value range is [100000, 5000000]. Agora recommends referring to setVideoEncoderConfiguration to set the value.
- expectedDownlinkBitrate
- The expected maximum downlink bitrate (bps) of the local user. The value range is [100000,5000000].
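A probe test might be started as follows. The sketch assumes an initialized RtcEngine named `engine` with `startLastmileProbeTest` and `stopLastmileProbeTest` methods; the bitrate values are illustrative.

```java
// Sketch: probe both uplink and downlink before joining a channel.
LastmileProbeConfig config = new LastmileProbeConfig();
config.probeUplink = true;
config.probeDownlink = true;
config.expectedUplinkBitrate = 1130000;   // bps, aligned with the intended encoder bitrate
config.expectedDownlinkBitrate = 1130000; // bps
engine.startLastmileProbeTest(config);
// Results are reported asynchronously; stop the test once they arrive.
```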
LastmileProbeOneWayResult
Results of the uplink or downlink last-mile network test.
public static class LastmileProbeOneWayResult {
  public int packetLossRate;
  public int jitter;
  public int availableBandwidth;
}
Attributes
- packetLossRate
- The packet loss rate (%).
- jitter
- The network jitter (ms).
- availableBandwidth
- The estimated available bandwidth (bps).
LastmileProbeResult
Results of the uplink and downlink last-mile network tests.
public static class LastmileProbeResult {
  public static class LastmileProbeOneWayResult {
    public int packetLossRate;
    public int jitter;
    public int availableBandwidth;
  }
  public short state;
  public int rtt;
  public LastmileProbeOneWayResult uplinkReport = new LastmileProbeOneWayResult();
  public LastmileProbeOneWayResult downlinkReport = new LastmileProbeOneWayResult();
}
Attributes
- state
-
The status of the last-mile network tests, which includes:
- LASTMILE_PROBE_RESULT_COMPLETE(1): The last-mile network probe test is complete.
- LASTMILE_PROBE_RESULT_INCOMPLETE_NO_BWE(2): The last-mile network probe test is incomplete because the bandwidth estimation is not available. One possible reason is that testing resources are temporarily limited.
- LASTMILE_PROBE_RESULT_UNAVAILABLE(3): The last-mile network probe test is not carried out, probably due to poor network conditions.
- uplinkReport
- Results of the uplink last-mile network test. See LastmileProbeOneWayResult.
- downlinkReport
- Results of the downlink last-mile network test. See LastmileProbeOneWayResult.
- rtt
- The round-trip time (ms).
LeaveChannelOptions
The options for leaving a channel.
public class LeaveChannelOptions {
  public boolean stopAudioMixing;
  public boolean stopAllEffect;
  public boolean stopMicrophoneRecording;
}
Attributes
- stopAudioMixing
- Whether to stop playing and mixing the music file when a user leaves the channel.
true
: (Default) Stop playing and mixing the music file.false
: Do not stop playing and mixing the music file.
- stopAllEffect
- Whether to stop playing all audio effects when a user leaves the channel.
true
: (Default) Stop playing all audio effects.false
: Do not stop playing any audio effect.
- stopMicrophoneRecording
- Whether to stop microphone recording when a user leaves the channel.
true
: (Default) Stop microphone recording.false
: Do not stop microphone recording.
LiveTranscoding
Transcoding configurations for Media Push.
public class LiveTranscoding { public enum AudioSampleRateType { TYPE_32000(32000), TYPE_44100(44100), TYPE_48000(48000); private int value; private AudioSampleRateType(int v) { value = v; } public static int getValue(AudioSampleRateType type) { return type.value; } } public enum VideoCodecProfileType { BASELINE(66), MAIN(77), HIGH(100); private int value; private VideoCodecProfileType(int v) { value = v; } public static int getValue(VideoCodecProfileType type) { return type.value; } } public enum AudioCodecProfileType { LC_AAC(0), HE_AAC(1), HE_AAC_V2(2); private int value; private AudioCodecProfileType(int v) { value = v; } public static int getValue(AudioCodecProfileType type) { return type.value; } } public enum VideoCodecType { H264(1), H265(2); private int value; private VideoCodecType(int v) { value = v; } public static int getValue(VideoCodecType type) { return type.value; } } public int width; public int height; public int videoBitrate; public int videoFramerate; @Deprecated public boolean lowLatency; public int videoGop; public AgoraImage watermark; private ArrayList<AgoraImage> watermarkList; public void addWatermark(AgoraImage watermark) { if (watermarkList == null) { watermarkList = new ArrayList<AgoraImage>(); } watermarkList.add(watermark); } public boolean removeWatermark(AgoraImage watermark) { if (watermarkList == null) { return false; } return watermarkList.remove(watermark); } public ArrayList<AgoraImage> getWatermarkList() { return watermarkList; } public AgoraImage backgroundImage; private ArrayList<AgoraImage> backgroundImageList; public void addBackgroundImage(AgoraImage backgroundImage) { if (backgroundImageList == null) { backgroundImageList = new ArrayList<AgoraImage>(); } backgroundImageList.add(backgroundImage); } public boolean removeBackgroundImage(AgoraImage backgroundImage) { if (backgroundImageList == null) { return false; } return backgroundImageList.remove(backgroundImage); } public 
ArrayList<AgoraImage> getBackgroundImageList() { return backgroundImageList; } public AudioSampleRateType audioSampleRate; public int audioBitrate; public int audioChannels; public AudioCodecProfileType audioCodecProfile; public VideoCodecProfileType videoCodecProfile; public VideoCodecType videoCodecType; @Deprecated public int userCount; @Deprecated public int backgroundColor; public String userConfigExtraInfo; @Deprecated public String metadata; private Map<Integer, TranscodingUser> transcodingUsers; private Map<String, Boolean> advancedFeatures; public void setAdvancedFeatures(String featureName, Boolean opened) { advancedFeatures.put(featureName, opened); } public Map<String, Boolean> getAdvancedFeatures() { return advancedFeatures; } public static class TranscodingUser { public int uid; public String userId; public int x; public int y; public int width; public int height; public int zOrder; public float alpha; public int audioChannel; public TranscodingUser() { alpha = 1; } } public LiveTranscoding() { width = 360; height = 640; videoBitrate = 400; videoCodecProfile = VideoCodecProfileType.HIGH; videoCodecType = VideoCodecType.H264; videoGop = 30; videoFramerate = 15; watermark = new AgoraImage(); backgroundImage = new AgoraImage(); lowLatency = false; audioSampleRate = AudioSampleRateType.TYPE_44100; audioBitrate = 48; audioChannels = 1; audioCodecProfile = AudioCodecProfileType.LC_AAC; transcodingUsers = new HashMap<>(); advancedFeatures = new HashMap<>(); backgroundColor = 0xFF000000; userConfigExtraInfo = null; metadata = null; } public int addUser(TranscodingUser user) { if (user == null || user.uid == 0) { return -Constants.ERR_INVALID_ARGUMENT; } transcodingUsers.put(user.uid, user); userCount = transcodingUsers.size(); return Constants.ERR_OK; } public final ArrayList<TranscodingUser> getUsers() { Collection<TranscodingUser> values = transcodingUsers.values(); return new ArrayList<>(values); } public void 
setUsers(ArrayList<TranscodingUser> users) { transcodingUsers.clear(); if (users != null) { for (TranscodingUser user : users) { transcodingUsers.put(user.uid, user); } } userCount = transcodingUsers.size(); } public void setUsers(Map<Integer, TranscodingUser> users) { transcodingUsers.clear(); if (users != null) { transcodingUsers.putAll(users); } userCount = transcodingUsers.size(); } public int removeUser(int uid) { if (!transcodingUsers.containsKey(uid)) return -Constants.ERR_INVALID_ARGUMENT; transcodingUsers.remove(uid); userCount = transcodingUsers.size(); return Constants.ERR_OK; } public int getUserCount() { return transcodingUsers.size(); } public int getBackgroundColor() { return this.backgroundColor; } public void setBackgroundColor(int color) { this.backgroundColor = color; } public void setBackgroundColor(int red, int green, int blue) { this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); } @Deprecated public int getRed() { return (backgroundColor >> 16) & 0x0ff; } @Deprecated public int getGreen() { return (backgroundColor >> 8) & 0x0ff; } @Deprecated public int getBlue() { return backgroundColor & 0x0ff; } @Deprecated public void setRed(int red) { int green = getGreen(); int blue = getBlue(); this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); } @Deprecated public void setGreen(int green) { int red = getRed(); int blue = getBlue(); this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); } @Deprecated public void setBlue(int blue) { int red = getRed(); int green = getGreen(); this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); } }
Attributes
- width
-
The width of the video in pixels. The default value is 360.
- When pushing video streams to the CDN, the value range of width is [64,1920]. If the value is less than 64, the Agora server automatically adjusts it to 64; if the value is greater than 1920, the Agora server automatically adjusts it to 1920.
- When pushing audio streams to the CDN, set width and height to 0.
- height
-
The height of the video in pixels. The default value is 640.
- When pushing video streams to the CDN, the value range of height is [64,1080]. If the value is less than 64, the Agora server automatically adjusts it to 64; if the value is greater than 1080, the Agora server automatically adjusts it to 1080.
- When pushing audio streams to the CDN, set width and height to 0.
- videoBitrate
-
Bitrate of the output video stream for Media Push in Kbps. The default value is 400 Kbps.
Set this parameter according to the Video profile table. If you set a bitrate beyond the proper range, the SDK automatically adapts it to a value within the range.
- videoFramerate
-
Frame rate (in fps) of the output video stream set for Media Push. The default value is 15, and the value range is (0,30].
Attention: The Agora server adjusts any value over 30 to 30.
- lowLatency
-
- Deprecated
- This parameter is deprecated.
Latency mode:
- true: Low latency with unassured quality.
- false: (Default) High latency with assured quality.
- videoGop
- GOP (Group of Pictures) of the video frames for Media Push, in number of frames. The default value is 30.
- videoCodecProfile
-
Video codec profile type for Media Push. Set it as 66, 77, or 100 (default). See VideoCodecProfileType for details.
Attention: If you set this parameter to any other value, Agora adjusts it to the default value.
- videoCodecType
- Video codec type for Media Push. See VideoCodecType.
- transcodingUsers
-
Manages the user layout configuration in the Media Push. Agora supports a maximum of 17 transcoding users in a Media Push channel. See TranscodingUser.
- userConfigExtraInfo
-
Reserved property. Extra user-defined information to send SEI for the H.264/H.265 video stream to the CDN client. Maximum length: 4096 bytes. For more information on SEI, see SEI-related questions.
- backgroundColor
-
- Deprecated
- This parameter is deprecated. Use setBackgroundColor [1/2] instead.
- userCount
-
- Deprecated
- This parameter is deprecated. Use getUserCount instead.
The number of users in the video mixing. The value range is [0,17].
- metadata
-
- Deprecated
- This parameter is deprecated.
The metadata sent to the CDN client.
- watermark
-
- Deprecated
- Please use addWatermark instead.
The watermark on the live video. The image format needs to be PNG. See AgoraImage.
- backgroundImage
-
- Deprecated
- Use addBackgroundImage instead.
The background image on the live video. The image format needs to be PNG. See AgoraImage.
- audioSampleRate
-
The audio sampling rate (Hz) of the output media stream. See AudioSampleRateType.
- audioBitrate
-
Bitrate (Kbps) of the audio output stream for Media Push. The default value is 48, and the highest value is 128.
- audioChannels
-
- The number of audio channels for Media Push. Agora recommends choosing 1 (mono) or 2 (stereo) audio channels. Special players are required if you choose 3, 4, or 5.
- 1: (Default) Mono.
- 2: Stereo.
- 3: Three audio channels.
- 4: Four audio channels.
- 5: Five audio channels.
- audioCodecProfile
- Audio codec profile type for Media Push. See AudioCodecProfileType.
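Putting these attributes together, a minimal configuration for a two-host side-by-side layout can be sketched as follows. This is a sketch, not a complete integration: the import path is an assumption, while the class, field, and method names are the ones documented on this page.

```java
// Sketch only: assumes the Agora RTC SDK is on the classpath.
import io.agora.rtc2.live.LiveTranscoding;
import io.agora.rtc2.live.LiveTranscoding.TranscodingUser;

class TranscodingSetup {
    static LiveTranscoding buildSideBySideLayout(int leftUid, int rightUid) {
        LiveTranscoding transcoding = new LiveTranscoding();
        transcoding.width = 720;          // [64,1920] when pushing video
        transcoding.height = 640;         // [64,1080] when pushing video
        transcoding.videoBitrate = 400;   // Kbps, per the video profile table
        transcoding.setBackgroundColor(0x000000); // black background

        // Left half of the canvas.
        TranscodingUser left = new TranscodingUser();
        left.uid = leftUid;
        left.x = 0;
        left.y = 0;
        left.width = 360;
        left.height = 640;

        // Right half of the canvas.
        TranscodingUser right = new TranscodingUser();
        right.uid = rightUid;
        right.x = 360;
        right.y = 0;
        right.width = 360;
        right.height = 640;

        transcoding.addUser(left);
        transcoding.addUser(right);
        return transcoding;
    }
}
```

The resulting object is what you would pass to the Media Push API; addUser returns a negative error code if a user is null or has uid 0, so production code should check its return value.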
addBackgroundImage
Adds a background image.
public void addBackgroundImage(AgoraImage backgroundImage) { if (backgroundImageList == null) { backgroundImageList = new ArrayList<AgoraImage>(); } backgroundImageList.add(backgroundImage); }
This method only supports adding one background image at a time. If you need to add more than one background image, call this method multiple times.
The total number of watermarks and background images can range from 0 to 10.
Parameters
- backgroundImage
- The background image to add to the live video. The image must be in PNG format. See AgoraImage.
addUser
Adds a user for video mixing during the CDN live streaming.
public int addUser(TranscodingUser user) {
  if (user == null || user.uid == 0) {
    return -Constants.ERR_INVALID_ARGUMENT;
  }
  transcodingUsers.put(user.uid, user);
  userCount = transcodingUsers.size();
  return Constants.ERR_OK;
}
Parameters
- user
- The transcoding user. See TranscodingUser for details.
Returns
- 0: Success.
- < 0: Failure.
addWatermark
Adds a watermark.
public void addWatermark(AgoraImage watermark) { if (watermarkList == null) { watermarkList = new ArrayList<AgoraImage>(); } watermarkList.add(watermark); }
This method only supports adding one watermark every time. If you need to add more than one watermark, call this method multiple times.
The total number of watermarks and background images can range from 0 to 10.
Parameters
- watermark
- The watermark on the live video. Watermark images must be in the PNG format. See AgoraImage.
getAdvancedFeatures
Gets the status of the advanced features of streaming with transcoding.
public Map<String, Boolean> getAdvancedFeatures() { return advancedFeatures; }
If you want to enable the advanced features of streaming with transcoding, contact support@agora.io.
Returns
The advanced feature names, including LBHQ (high-quality video with a lower bitrate) and VEO (optimized video encoder), and whether the feature is enabled.
getBackgroundColor
Gets the background color in hex.
public int getBackgroundColor() { return this.backgroundColor; }
Returns
The background color to set in RGB hex value.
getBackgroundImageList
Gets the list of background images.
public ArrayList<AgoraImage> getBackgroundImageList() { return backgroundImageList; }
Returns
The list of background images. See AgoraImage for details.
getBlue
Gets the background color's blue component.
@Deprecated public int getBlue() { return backgroundColor & 0x0ff; }
- Deprecated:
- This method is deprecated.
Returns
Background color's blue component.
getGreen
Gets the background color's green component.
@Deprecated public int getGreen() { return (backgroundColor >> 8) & 0x0ff; }
- Deprecated:
- This method is deprecated.
Returns
Background color's green component.
getRed
Gets the background color's red component.
@Deprecated public int getRed() { return (backgroundColor >> 16) & 0x0ff; }
- Deprecated:
- This method is deprecated.
Returns
Background color's red component.
getUserCount
Gets the number of users transcoded in the CDN live streaming.
public int getUserCount() { return transcodingUsers.size(); }
Returns
The number of users transcoded in the CDN live streaming.
getUsers
Gets the user list in the CDN live streaming.
public final ArrayList<TranscodingUser> getUsers() { Collection<TranscodingUser> values = transcodingUsers.values(); return new ArrayList<>(values); }
This method retrieves all users in the CDN live streaming. The user list returned by this method is read-only and should not be modified.
Returns
The user list. See TranscodingUser for details.
getWatermarkList
Gets the watermark list.
public ArrayList<AgoraImage> getWatermarkList() { return watermarkList; }
Returns
The watermark list. See AgoraImage for details.
removeBackgroundImage
Removes a background image from the background image list.
public boolean removeBackgroundImage(AgoraImage backgroundImage) { if (backgroundImageList == null) { return false; } return backgroundImageList.remove(backgroundImage); }
This method only supports removing one background image at a time. If you need to remove more than one background image, call this method multiple times.
Parameters
- backgroundImage
- The background image to remove from the live video. The image must be in PNG format. See AgoraImage.
Returns
Whether the background image is removed:
- true: The background image is removed.
- false: The background image is not removed.
removeUser
Removes a user from video mixing during the CDN live streaming.
public int removeUser(int uid) { if (!transcodingUsers.containsKey(uid)) return -Constants.ERR_INVALID_ARGUMENT; transcodingUsers.remove(uid); userCount = transcodingUsers.size(); return Constants.ERR_OK; }
Parameters
- uid
- The ID of the user to be removed.
Returns
- 0: Success.
- < 0: Failure.
removeWatermark
Removes a watermark from the watermark list.
public boolean removeWatermark(AgoraImage watermark) { if (watermarkList == null) { return false; } return watermarkList.remove(watermark); }
This method only supports removing one watermark at a time. If you need to remove more than one watermark, call this method multiple times.
Parameters
- watermark
- The watermark on the live video. Watermark images must be in the PNG format. See AgoraImage.
Returns
Whether the watermark is removed:
- true: The watermark is removed.
- false: The watermark is not removed.
setAdvancedFeatures
Sets whether to enable the advanced features of streaming with transcoding.
public void setAdvancedFeatures(String featureName, Boolean opened) { advancedFeatures.put(featureName, opened); }
If you want to enable the advanced features of streaming with transcoding, contact support@agora.io.
Parameters
- featureName
- The feature names, including LBHQ (high-quality video with a lower bitrate) and VEO (optimized video encoder).
- opened
- Whether to enable the advanced features of streaming with transcoding:
- true: Enable the advanced features.
- false: (Default) Do not enable the advanced features.
setBackgroundColor [1/2]
Sets the background color of the CDN live stream.
public void setBackgroundColor(int color) { this.backgroundColor = color; }
Parameters
- color
- Sets the background color of the CDN live stream in the format of RGB hex. Do not include # in the value. For example, 0xFFB6C1 is light pink. The default value is 0x000000 (black).
setBackgroundColor [2/2]
Sets the background color in RGB format.
public void setBackgroundColor(int red, int green, int blue) { this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); }
Parameters
- red
- Red component.
- green
- Green component.
- blue
- Blue component.
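The component packing used by this overload (and the component getters above) can be verified with plain Java, independent of the SDK:

```java
public class BackgroundColorDemo {
    // Mirrors setBackgroundColor [2/2]: pack 8-bit red/green/blue
    // components into the RGB hex value used by setBackgroundColor [1/2].
    public static int packRgb(int red, int green, int blue) {
        return (red << 16) | (green << 8) | blue;
    }

    // Mirrors getRed/getGreen/getBlue: extract a single component.
    public static int red(int color)   { return (color >> 16) & 0xff; }
    public static int green(int color) { return (color >> 8) & 0xff; }
    public static int blue(int color)  { return color & 0xff; }

    public static void main(String[] args) {
        // 0xFFB6C1 is the light-pink example from setBackgroundColor [1/2].
        int lightPink = packRgb(255, 182, 193);
        System.out.println(Integer.toHexString(lightPink)); // prints "ffb6c1"
    }
}
```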
setBlue
Sets the background color's blue component.
@Deprecated public void setBlue(int blue) { int red = getRed(); int green = getGreen(); this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); }
- Deprecated:
- This method is deprecated.
Parameters
- blue
- Background color's blue component.
setGreen
Sets the background color's green component.
@Deprecated public void setGreen(int green) { int red = getRed(); int blue = getBlue(); this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); }
- Deprecated:
- This method is deprecated.
Parameters
- green
- Background color's green component.
setRed
Sets the background color's red component.
@Deprecated public void setRed(int red) { int green = getGreen(); int blue = getBlue(); this.backgroundColor = (red << 16) | (green << 8) | (blue << 0); }
- Deprecated:
- This method is deprecated.
Parameters
- red
- Background color's red component.
setUsers [1/2]
Sets the users in batches in the CDN live streaming.
public void setUsers(ArrayList<TranscodingUser> users) { transcodingUsers.clear(); if (users != null) { for (TranscodingUser user : users) { transcodingUsers.put(user.uid, user); } } userCount = transcodingUsers.size(); }
This method sets all users involved in the CDN live stream. This method replaces the old user data with the new TranscodingUser data.
Parameters
- users
- All users involved in the CDN live streaming. See TranscodingUser for details.
setUsers [2/2]
Sets the users in batches in the CDN live streaming.
public void setUsers(Map<Integer, TranscodingUser> users) { transcodingUsers.clear(); if (users != null) { transcodingUsers.putAll(users); } userCount = transcodingUsers.size(); }
This method sets all users involved in the CDN live stream. This method replaces the old user data with the new TranscodingUser data.
Parameters
- users
- All users involved in the CDN live streaming. See TranscodingUser for details.
LocalAudioStats
Local audio statistics.
public static class LocalAudioStats { public int numChannels; public int sentSampleRate; public int sentBitrate; public int internalCodec; public int txPacketLossRate; public int audioDeviceDelay; };
Attributes
- numChannels
- The number of audio channels.
- sentSampleRate
- The sampling rate (Hz) of sending the local user's audio stream.
- sentBitrate
- The average bitrate (Kbps) of sending the local user's audio stream.
- txPacketLossRate
- The packet loss rate (%) from the local client to the Agora server before applying the anti-packet loss strategies.
- internalCodec
- The internal payload codec.
- audioDeviceDelay
- The delay of the audio device module when playing or recording audio.
LocalVideoStats
The statistics of the local video stream.
public static class LocalVideoStats { public int uid; public int sentBitrate; public int sentFrameRate; public int captureFrameRate; public int captureFrameWidth; public int captureFrameHeight; public int regulatedCaptureFrameRate; public int regulatedCaptureFrameWidth; public int regulatedCaptureFrameHeight; public int encoderOutputFrameRate; public int rendererOutputFrameRate; public int targetBitrate; public int targetFrameRate; public int qualityAdaptIndication; public int encodedBitrate; public int encodedFrameWidth; public int encodedFrameHeight; public int encodedFrameCount; public int codecType; public int txPacketLossRate; public int captureBrightnessLevel; }
Attributes
- uid
- The user ID of the local user.
- sentBitrate
-
The actual bitrate (Kbps) while sending the local video stream.Attention: This value does not include the bitrate for resending the video after packet loss.
- sentFrameRate
- The actual frame rate (fps) while sending the local video stream.Attention: This value does not include the frame rate for resending the video after packet loss.
- captureFrameRate
- The frame rate (fps) for capturing the local video stream.
- captureFrameWidth
- The width (px) for capturing the local video stream.
- captureFrameHeight
- The height (px) for capturing the local video stream.
- regulatedCaptureFrameRate
- The frame rate (fps) adjusted by the built-in video capture adapter (regulator) of the SDK for capturing the local video stream. The regulator adjusts the frame rate of the video captured by the camera according to the video encoding configuration.
- regulatedCaptureFrameWidth
- The width (px) adjusted by the built-in video capture adapter (regulator) of the SDK for capturing the local video stream. The regulator adjusts the height and width of the video captured by the camera according to the video encoding configuration.
- regulatedCaptureFrameHeight
- The height (px) adjusted by the built-in video capture adapter (regulator) of the SDK for capturing the local video stream. The regulator adjusts the height and width of the video captured by the camera according to the video encoding configuration.
- encoderOutputFrameRate
- The output frame rate (fps) of the local video encoder.
- rendererOutputFrameRate
- The output frame rate (fps) of the local video renderer.
- targetBitrate
- The target bitrate (Kbps) of the current encoder. This is an estimate made by the SDK based on the current network conditions.
- targetFrameRate
- The target frame rate (fps) of the current encoder.
- qualityAdaptIndication
- The quality adaption of the local video stream in the reported interval (based on the target frame rate and target bitrate).
- ADAPT_NONE(0): The local video quality stays the same.
- ADAPT_UP_BANDWIDTH(1): The local video quality improves because the network bandwidth increases.
- ADAPT_DOWN_BANDWIDTH(2): The local video quality deteriorates because the network bandwidth decreases.
- encodedBitrate
-
The bitrate (Kbps) while encoding the local video stream.Attention: This value does not include the bitrate for resending the video after packet loss.
- encodedFrameWidth
- The width of the encoded video (px).
- encodedFrameHeight
- The height of the encoded video (px).
- encodedFrameCount
- The number of sent video frames, represented by an aggregate value.
- codecType
- The codec type of the local video.
- VIDEO_CODEC_VP8(1): VP8.
- VIDEO_CODEC_H264(2): (Default) H.264.
- txPacketLossRate
- The video packet loss rate (%) from the local client to the Agora server before applying the anti-packet loss strategies.
- captureBrightnessLevel
- The brightness level of the video image captured by the local camera.
- CAPTURE_BRIGHTNESS_LEVEL_INVALID(-1): The SDK does not detect the brightness level of the video image. Wait a few seconds to get the brightness level from captureBrightnessLevel in the next callback.
- CAPTURE_BRIGHTNESS_LEVEL_NORMAL(0): The brightness level of the video image is normal.
- CAPTURE_BRIGHTNESS_LEVEL_BRIGHT(1): The brightness level of the video image is too bright.
- CAPTURE_BRIGHTNESS_LEVEL_DARK(2): The brightness level of the video image is too dark.
LogConfig
Configuration of Agora SDK log files.
public static class LogConfig { public String filePath; public int fileSizeInKB; public int level = Constants.LogLevel.getValue(Constants.LogLevel.LOG_LEVEL_INFO); }
Attributes
- filePath
-
The complete path of the log files. Ensure that the path for the log file exists and is writable. You can use this parameter to rename the log files.
The default path is /storage/emulated/0/Android/data/<packagename>/files/agorasdk.log.
- fileSizeInKB
- The size (KB) of an agorasdk.log file. The value range is [128,1024]. The default value is 1,024 KB. If you set fileSizeInKB to a value lower than 128 KB, the SDK adjusts it to 128 KB; if you set it to a value higher than 1,024 KB, the SDK adjusts it to 1,024 KB.
- level
-
The output level of the SDK log file. See LogLevel.
For example, if you set the log level to WARN, the SDK outputs the logs within levels FATAL, ERROR, and WARN.
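The fileSizeInKB adjustment described above amounts to a simple clamp into [128,1024], which can be sketched as:

```java
public class LogFileSize {
    // Sketch of the documented adjustment for fileSizeInKB:
    // values below 128 KB are raised to 128, values above 1024 KB
    // are lowered to 1024, and everything in between is kept.
    public static int clampFileSizeInKB(int requested) {
        return Math.max(128, Math.min(1024, requested));
    }

    public static void main(String[] args) {
        System.out.println(clampFileSizeInKB(64));   // prints 128
        System.out.println(clampFileSizeInKB(512));  // prints 512
        System.out.println(clampFileSizeInKB(4096)); // prints 1024
    }
}
```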
LowLightEnhanceOptions
The low-light enhancement options.
public class LowLightEnhanceOptions { public static final int LOW_LIGHT_ENHANCE_AUTO = 0; public static final int LOW_LIGHT_ENHANCE_MANUAL = 1; public static final int LOW_LIGHT_ENHANCE_LEVEL_HIGH_QUALITY = 0; public static final int LOW_LIGHT_ENHANCE_LEVEL_FAST = 1; public int lowlightEnhanceMode; public int lowlightEnhanceLevel; public LowLightEnhanceOptions() { lowlightEnhanceMode = LOW_LIGHT_ENHANCE_AUTO; lowlightEnhanceLevel = LOW_LIGHT_ENHANCE_LEVEL_HIGH_QUALITY; } public LowLightEnhanceOptions(int mode, int level) { lowlightEnhanceMode = mode; lowlightEnhanceLevel = level; } }
Attributes
- level
- The low-light enhancement level.
- LOW_LIGHT_ENHANCE_LEVEL_HIGH_QUALITY(0): (Default) Promotes video quality during low-light enhancement. It processes the brightness, details, and noise of the video image. The performance consumption is moderate, the processing speed is moderate, and the overall video quality is optimal.
- LOW_LIGHT_ENHANCE_LEVEL_FAST(1): Promotes performance during low-light enhancement. It processes the brightness and details of the video image. The processing speed is faster.
- mode
- The low-light enhancement mode.
- LOW_LIGHT_ENHANCE_AUTO(0): (Default) Automatic mode. The SDK automatically enables or disables the low-light enhancement feature according to the ambient light to compensate for the lighting level or prevent overexposure, as necessary.
- LOW_LIGHT_ENHANCE_MANUAL(1): Manual mode. Users need to enable or disable the low-light enhancement feature manually.
MediaRecorderConfiguration
Configurations for the local audio and video recording.
public static class MediaRecorderConfiguration { public String storagePath; public int containerFormat = CONTAINER_MP4; public int streamType = STREAM_TYPE_BOTH; public int maxDurationMs = 120000; public int recorderInfoUpdateInterval = 0; public MediaRecorderConfiguration(String storagePath, int containerFormat, int streamType, int maxDurationMs, int recorderInfoUpdateInterval) { this.storagePath = storagePath; this.containerFormat = containerFormat; this.streamType = streamType; this.maxDurationMs = maxDurationMs; this.recorderInfoUpdateInterval = recorderInfoUpdateInterval; } }
Attributes
- storagePath
- The absolute path (including the filename extensions) of the recording file. For example:
- Windows:
C:\Users\<user_name>\AppData\Local\Agora\<process_name>\example.mp4
- iOS:
/App Sandbox/Library/Caches/example.mp4
- macOS:
/Library/Logs/example.mp4
- Android:
/storage/emulated/0/Android/data/<package name>/files/example.mp4
Attention: Ensure that the directory for the recording file exists and is writable.
- containerFormat
- The format of the recording file. Only CONTAINER_MP4 is supported.
- streamType
- The recording content:
- STREAM_TYPE_AUDIO: Only audio.
- STREAM_TYPE_VIDEO: Only video.
- STREAM_TYPE_BOTH: (Default) Audio and video.
- maxDurationMs
- The maximum recording duration, in milliseconds. The default value is 120000.
- recorderInfoUpdateInterval
- The interval (ms) of updating the recording information. The value range is [1000,10000]. Based on the value you set in this parameter, the SDK triggers the onRecorderInfoUpdated callback to report the updated recording information.
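For example, recording both audio and video on Android could be configured as below. This is a sketch using the constructor signature shown above; the storage path is illustrative, and the defining class of the CONTAINER_MP4 and STREAM_TYPE_BOTH constants is assumed (they appear unqualified in the class definition).

```java
// Sketch only: assumes MediaRecorderConfiguration and the CONTAINER_/STREAM_TYPE_
// constants are imported from the Agora SDK.
String path = "/storage/emulated/0/Android/data/<package name>/files/example.mp4";
MediaRecorderConfiguration config = new MediaRecorderConfiguration(
        path,              // storagePath: directory must exist and be writable
        CONTAINER_MP4,     // containerFormat: only MP4 is supported
        STREAM_TYPE_BOTH,  // streamType: (default) audio and video
        120000,            // maxDurationMs: default maximum duration
        1000);             // recorderInfoUpdateInterval: [1000,10000] ms
```

With recorderInfoUpdateInterval set to 1000, the SDK triggers onRecorderInfoUpdated roughly once per second.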
MediaPlayerSource
Information related to the media file to be played and the playback scenario configurations.
public class MediaPlayerSource {
public MediaPlayerSource() {
this.startPos = 0;
this.enableCache = false;
this.url = null;
this.uri = null;
this.autoPlay = true;
this.provider = null;
}
String url;
String uri;
long startPos;
boolean autoPlay;
Boolean isAgoraSource;
Boolean isLiveSource;
boolean enableCache;
IMediaPlayerCustomDataProvider provider;
}
Attributes
- url
-
The URL of the media file to be played.
Note: If you need to open a custom media resource, you do not have to pass in a value to url.
- uri
- The URI (Uniform Resource Identifier) of the media file.
- startPos
- The starting position (ms) for playback. The default value is 0.
- autoPlay
- Whether to enable autoplay once the media file is opened:
- true: (Default) Enable autoplay.
- false: Disable autoplay.
Note: If autoplay is disabled, you need to call the play method to play a media file after it is opened.
- enableCache
- Whether to cache the media file when it is being played:
- true: Enable caching.
- false: (Default) Disable caching.
Note:
- If you need to enable caching, pass in a value to uri; otherwise, caching is based on the url of the media file.
- If you enable this function, the Media Player caches part of the media file being played on your local device, and you can play the cached media file without an internet connection. The statistics about the media file being cached are updated every second after the media file is played. See CacheStatistics.
- isAgoraSource
- Whether the media resource to be opened is a live stream or on-demand video distributed through the Media Broadcast service:
- true: The media resource is a live stream or on-demand video distributed through the Media Broadcast service.
- false: (Default) The media resource is not a live stream or on-demand video distributed through the Media Broadcast service.
Note: If you need to open a live stream or on-demand video distributed through the Media Broadcast service, pass in the URL of the media resource to url, and set isAgoraSource as true; otherwise, you do not need to set the isAgoraSource parameter.
- isLiveSource
- Whether the media resource to be opened is a live stream:
- true: The media resource is a live stream.
- false: (Default) The media resource is not a live stream.
If the media resource you want to open is a live stream, Agora recommends that you set this parameter as true so that the live stream can be loaded more quickly.
Note: If the media resource you open is not a live stream, but you set isLiveSource as true, the media resource is not loaded more quickly.
- provider
-
The callback for custom media resource files. See IMediaPlayerCustomDataProvider.
Note:If you need to open a custom media resource, such as an encrypted media file, pass in a value to provider rather than to url.
MediaStreamInfo
The detailed information of the media stream.
public class MediaStreamInfo { private int streamIndex; private int mediaStreamType; private String codecName; private String language; private int videoFrameRate; private int videoBitRate; private int videoWidth; private int videoHeight; private int videoRotation; private int audioSampleRate; private int audioChannels; private int audioBytesPerSample; private long duration; public MediaStreamInfo() {} }
Attributes
- streamIndex
- The index of the media stream.
- mediaStreamType
- The type of the media stream.
- STREAM_TYPE_UNKNOWN(0): The type is unknown.
- STREAM_TYPE_VIDEO(1): The video stream.
- STREAM_TYPE_AUDIO(2): The audio stream.
- STREAM_TYPE_SUBTITLE(3): The subtitle stream.
- codecName
- The codec of the media stream.
- language
- The language of the media stream.
- videoFrameRate
- This parameter only takes effect for video streams, and indicates the video frame rate (fps).
- videoBitRate
- This parameter only takes effect for video streams, and indicates the video bitrate (bps).
- videoWidth
- This parameter only takes effect for video streams, and indicates the video width (pixel).
- videoHeight
- This parameter only takes effect for video streams, and indicates the video height (pixel).
- videoRotation
- This parameter only takes effect for video streams, and indicates the video rotation angle.
- audioSampleRate
- This parameter only takes effect for audio streams, and indicates the audio sample rate (Hz).
- audioChannels
- This parameter only takes effect for audio streams, and indicates the audio channel number.
- audioBytesPerSample
- This parameter only takes effect for audio streams, and indicates the bit number of each audio sample.
- duration
- The total duration (s) of the media stream.
PlayerUpdatedInfo
Information related to the media player.
public class PlayerUpdatedInfo { public String playerId; public String deviceId; public CacheStatistics cacheStatistics; }
Attributes
- playerId
- The ID of a media player.
- deviceId
- The ID of a device.
- cacheStatistics
-
The statistics about the media file being cached.
If you call the openWithMediaSource method and set enableCache as
true
, the statistics about the media file being cached is updated every second after the media file is played. See CacheStatistics.
RecorderInfo
The information about the file that is recorded.
public class RecorderInfo {
public String fileName;
public int durationMs;
public int fileSize;
@CalledByNative
public RecorderInfo(String fileName, int durationMs, int fileSize) {
this.fileName = fileName;
this.durationMs = durationMs;
this.fileSize = fileSize;
}
}
Attributes
- fileName
- The absolute path of the recording file.
- durationMs
- The recording duration (ms).
- fileSize
- The size (bytes) of the recording file.
Rectangle
The location of the target area relative to the screen or window. If you do not set this parameter, the SDK selects the whole screen or window.
public static class Rectangle { public int x = 0; public int y = 0; public int width = 0; public int height = 0; public Rectangle() { x = 0; y = 0; width = 0; height = 0; } public Rectangle(int x_, int y_, int width_, int height_) { x = x_; y = y_; width = width_; height = height_; } };
Attributes
- x
- The horizontal offset from the top-left corner.
- y
- The vertical offset from the top-left corner.
- width
- The width of the target area.
- height
- The height of the target area.
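Because the definition above is self-contained, its use can be demonstrated directly; here the type is reproduced verbatim inside a wrapper class to show selecting a 640x360 area offset by (100, 50) from the top-left corner:

```java
public class RectangleDemo {
    // Reproduced from the Rectangle definition above.
    public static class Rectangle {
        public int x = 0;
        public int y = 0;
        public int width = 0;
        public int height = 0;
        public Rectangle() {}
        public Rectangle(int x_, int y_, int width_, int height_) {
            x = x_; y = y_; width = width_; height = height_;
        }
    }

    public static void main(String[] args) {
        // Target a 640x360 region whose top-left corner sits 100 px right
        // and 50 px down from the screen's top-left corner.
        Rectangle region = new Rectangle(100, 50, 640, 360);
        System.out.println(region.x + "," + region.y + " "
                + region.width + "x" + region.height); // prints "100,50 640x360"
    }
}
```

Leaving all four fields at 0 (the no-argument constructor) corresponds to capturing the whole screen or window.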
RemoteAudioStats
Audio statistics of the remote user.
public static class RemoteAudioStats { public int uid; public int quality; public int networkTransportDelay; public int jitterBufferDelay; public int audioLossRate; public int numChannels; public int receivedSampleRate; public int receivedBitrate; public int totalFrozenTime; public int frozenRate; public int mosValue; public long totalActiveTime; public long publishDuration; public long qoeQuality; public int qualityChangedReason; }
Attributes
- uid
- The user ID of the remote user.
- quality
-
The quality of the audio stream sent by the user.
- QUALITY_UNKNOWN(0): The quality is unknown.
- QUALITY_EXCELLENT(1): The quality is excellent.
- QUALITY_GOOD(2): The network quality seems excellent, but the bitrate can be slightly lower than excellent.
- QUALITY_POOR(3): Users can feel the communication is slightly impaired.
- QUALITY_BAD(4): Users cannot communicate smoothly.
- QUALITY_VBAD(5): The quality is so bad that users can barely communicate.
- QUALITY_DOWN(6): The network is down, and users cannot communicate at all.
- networkTransportDelay
- The network delay (ms) from the sender to the receiver.
- jitterBufferDelay
-
The network delay (ms) from the audio receiver to the jitter buffer.Attention: When the receiving end is an audience member and audienceLatencyLevel of ClientRoleOptions is 1, this parameter does not take effect.
- audioLossRate
- The frame loss rate (%) of the remote audio stream in the reported interval.
- numChannels
- The number of audio channels.
- receivedSampleRate
- The sampling rate of the received audio stream in the reported interval.
- receivedBitrate
- The average bitrate (Kbps) of the received audio stream in the reported interval.
- totalFrozenTime
- The total freeze time (ms) of the remote audio stream after the remote user joins the channel. In a session, audio freeze occurs when the audio frame loss rate reaches 4%.
- frozenRate
- The total audio freeze time as a percentage (%) of the total time when the audio is available. The audio is considered available when the remote user neither stops sending the audio stream nor disables the audio module after joining the channel.
- totalActiveTime
-
The total active time (ms) between the start of the audio call and the callback of the remote user.
The active time refers to the total duration of the remote user without the mute state.
- publishDuration
-
The total duration (ms) of the remote audio stream.
- qoeQuality
-
The Quality of Experience (QoE) of the local user when receiving a remote audio stream.
- EXPERIENCE_QUALITY_GOOD(0): The QoE of the local user is good.
- EXPERIENCE_QUALITY_BAD(1): The QoE of the local user is poor.
- qualityChangedReason
-
Reasons why the QoE of the local user when receiving a remote audio stream is poor.
- EXPERIENCE_REASON_NONE(0): No reason, indicating a good QoE of the local user.
- REMOTE_NETWORK_QUALITY_POOR(1): The remote user's network quality is poor.
- LOCAL_NETWORK_QUALITY_POOR(2): The local user's network quality is poor.
- WIRELESS_SIGNAL_POOR(4): The local user's Wi-Fi or mobile network signal is weak.
- WIFI_BLUETOOTH_COEXIST(8): The local user enables both Wi-Fi and Bluetooth, and their signals interfere with each other. As a result, audio transmission quality is undermined.
- mosValue
-
The quality of the remote audio stream in the reported interval. The quality is determined by the Agora real-time audio MOS (Mean Opinion Score) measurement method. The return value range is [0, 500]. Dividing the return value by 100 gets the MOS score, which ranges from 0 to 5. The higher the score, the better the audio quality.
The subjective perception of audio quality corresponding to the Agora real-time audio MOS scores is as follows:
- Greater than 4: Excellent. The audio sounds clear and smooth.
- From 3.5 to 4: Good. The audio has some perceptible impairment but still sounds clear.
- From 3 to 3.5: Fair. The audio freezes occasionally and requires attentive listening.
- From 2.5 to 3: Poor. The audio sounds choppy and requires considerable effort to understand.
- From 2 to 2.5: Bad. The audio has occasional noise. Consecutive audio dropouts occur, resulting in some information loss. The users can communicate only with difficulty.
- Less than 2: Very bad. The audio has persistent noise. Consecutive audio dropouts are frequent, resulting in severe information loss. Communication is nearly impossible.
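As a quick illustration, the mapping above can be expressed as a small helper. This is a hypothetical utility, not part of the SDK; it converts the raw mosValue (0 to 500) into the perceived quality by dividing by 100:

```java
// Hypothetical helper (not an SDK API): maps the raw mosValue reported in
// RemoteAudioStats to the subjective quality categories listed above.
public class MosRating {
    public static String rate(int mosValue) {
        double mos = mosValue / 100.0; // e.g. 423 -> a MOS score of 4.23
        if (mos > 4) return "Excellent";
        if (mos >= 3.5) return "Good";
        if (mos >= 3) return "Fair";
        if (mos >= 2.5) return "Poor";
        if (mos >= 2) return "Bad";
        return "Very bad";
    }
}
```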
RemoteVideoStats
Statistics of the remote video stream.
public static class RemoteVideoStats { public int uid; public int delay; public int width; public int height; public int receivedBitrate; public int decoderOutputFrameRate; public int rendererOutputFrameRate; public int frameLossRate; public int packetLossRate; public int rxStreamType; public int totalFrozenTime; public int frozenRate; public int avSyncTimeMs; public long totalActiveTime; public long publishDuration; public int superResolutionType; }
Attributes
- uid
- The user ID of the remote user sending the video stream.
- delay
-
- Deprecated:
- In scenarios where audio and video are synchronized, you can get the video delay data from networkTransportDelay and jitterBufferDelay in RemoteAudioStats.
The video delay (ms).
- width
- The width (pixels) of the video.
- height
- The height (pixels) of the video.
- receivedBitrate
- The bitrate (Kbps) of the remote video received since the last count.
- decoderOutputFrameRate
- The frame rate (fps) of decoding the remote video.
- rendererOutputFrameRate
- The frame rate (fps) of rendering the remote video.
- frameLossRate
- The frame loss rate (%) of the remote video.
- packetLossRate
- The packet loss rate (%) of the remote video after using the anti-packet-loss technology.
- rxStreamType
- The type of the video stream.
- VIDEO_STREAM_HIGH(0): High-quality stream, that is, a high-resolution and high-bitrate video stream.
- VIDEO_STREAM_LOW(1): Low-quality stream, that is, a low-resolution and low-bitrate video stream.
- totalFrozenTime
- The total freeze time (ms) of the remote video stream after the remote user joins the channel. In a video session where the frame rate is set to no less than 5 fps, video freeze occurs when the time interval between two adjacent renderable video frames is more than 500 ms.
- frozenRate
- The total video freeze time as a percentage (%) of the total time the video is available. The video is considered available as long as the remote user neither stops sending the video stream nor disables the video module after joining the channel.
- totalActiveTime
-
The total active time (ms) of the video.
As long as the remote user or host neither stops sending the video stream nor disables the video module after joining the channel, the video is available.
- publishDuration
-
The total duration (ms) of the remote video stream.
- superResolutionType
- The state of super resolution:
- >0: Super resolution is enabled.
- =0: Super resolution is not enabled.
- avSyncTimeMs
- The amount of time (ms) that the audio is ahead of the video. Attention: If this value is negative, the audio is lagging behind the video.
RemoteVoicePositionInfo
The spatial position of the remote user or the media player.
public class RemoteVoicePositionInfo { public float[] position; public float[] forward; public RemoteVoicePositionInfo() { position = new float[] {0.0f, 0.0f, 0.0f}; forward = new float[] {0.0f, 0.0f, 0.0f}; } }
Attributes
- position
- The coordinates in the world coordinate system. This parameter is an array of length 3, and the three values represent the front, right, and top coordinates in turn.
- forward
- The unit vector of the x axis in the coordinate system. This parameter is an array of length 3, and the three values represent the front, right, and top coordinates in turn.
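A minimal sketch of how these attributes fit together. The class definition from above is reproduced here so the snippet stands alone; the example values (a source 2 m in front and 1 m to the right of the listener, facing back toward it) are purely illustrative:

```java
// RemoteVoicePositionInfo as defined above, plus a hypothetical demo()
// helper that places a remote voice relative to the listener.
public class RemoteVoicePositionInfo {
    public float[] position;
    public float[] forward;
    public RemoteVoicePositionInfo() {
        position = new float[] {0.0f, 0.0f, 0.0f};
        forward = new float[] {0.0f, 0.0f, 0.0f};
    }
    public static RemoteVoicePositionInfo demo() {
        RemoteVoicePositionInfo info = new RemoteVoicePositionInfo();
        // The three values are front, right, and top coordinates, in meters.
        info.position = new float[] {2.0f, 1.0f, 0.0f};
        // Unit vector of the source's x axis: here, facing the listener.
        info.forward = new float[] {-1.0f, 0.0f, 0.0f};
        return info;
    }
}
```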
RtcEngineConfig
Configurations for the RtcEngine instance.
public class RtcEngineConfig { public Context mContext; public String mAppId; public Constants.AreaCode mAreaCode; public IAgoraEventHandler mEventHandler; public int mChannelProfile; public int mAudioScenario; public List<String> mExtensionList; public IMediaExtensionObserver mExtensionObserver; public LogConfig mLogConfig; public String mNativeLibPath; public static class LogConfig { public String filePath; public int fileSizeInKB; public int level = Constants.LogLevel.getValue(Constants.LogLevel.LOG_LEVEL_INFO); @CalledByNative("LogConfig") public String getFilePath() { return filePath; } @CalledByNative("LogConfig") public int getFileSize() { return fileSizeInKB; } @CalledByNative("LogConfig") public int getLevel() { return level; } } public RtcEngineConfig() { mContext = null; mAppId = ""; mChannelProfile = Constants.CHANNEL_PROFILE_LIVE_BROADCASTING; mEventHandler = null; mAudioScenario = Constants.AUDIO_SCENARIO_HIGH_DEFINITION; mAreaCode = Constants.AreaCode.AREA_CODE_GLOB; mExtensionList = new ArrayList<>(); mExtensionObserver = null; mLogConfig = new LogConfig(); } public void addExtension(String providerName) { mExtensionList.add(providerName); } @CalledByNative public Context getContext() { return mContext; } @CalledByNative public String getAppId() { return mAppId; } @CalledByNative public int getChannelProfile() { return mChannelProfile; } @CalledByNative public int getAudioScenario() { return mAudioScenario; } @CalledByNative public int getAreaCode() { return Constants.AreaCode.getValue(mAreaCode); } @CalledByNative public IMediaExtensionObserver getExtensionObserver() { return mExtensionObserver; } @CalledByNative public LogConfig getLogConfig() { return mLogConfig; } }
Attributes
- mEventHandler
- The event handler for RtcEngine. See IRtcEngineEventHandler.
- mAppId
- The App ID issued by Agora for your project. Only users in apps with the same App ID can join the same channel and communicate with each other. An App ID can only be used to create one RtcEngine instance. To change your App ID, call destroy to destroy the current RtcEngine instance, and then create a new one.
- mContext
-
The context of the Android Activity.
- mNativeLibPath
-
The storage directory for the .so files. The directory must be a valid and private directory of the app, which can be obtained using Context.getDir().
- If you set this parameter, the SDK automatically loads the .so files from the directory you specify, so that the app dynamically loads the required .so files at runtime, thereby reducing the package size.
- If you do not set this parameter or set it to null, the SDK loads the .so files from the app's default nativeLibraryPath when compiling the app, which increases the package size compared with the previous method.
Attention:
- This method applies when you integrate the SDK manually, but not when you integrate the SDK through Maven Central or JitPack.
- Ensure that the specified directory exists; otherwise, RtcEngine initialization fails.
- mChannelProfile
-
The channel profile.
- CHANNEL_PROFILE_COMMUNICATION(0): Communication. Use this profile when there are only two users in the channel.
- CHANNEL_PROFILE_LIVE_BROADCASTING(1): Live streaming. Use this profile when there are more than two users in the channel.
- CHANNEL_PROFILE_GAME(2): This profile is deprecated.
- CHANNEL_PROFILE_CLOUD_GAMING(3): Interaction. The scenario is optimized for latency. Use this profile if the use case requires frequent interactions between users.
- mAudioScenario
- The audio scenario. Under different audio scenarios, the device uses different volume types.
- AUDIO_SCENARIO_DEFAULT(0): Automatic scenario match, where the SDK chooses the appropriate audio quality according to the user role and audio route.
- AUDIO_SCENARIO_GAME_STREAMING(3): High-quality audio scenario, where users mainly play music.
- AUDIO_SCENARIO_CHATROOM(5): Chatroom scenario, where users need to frequently switch the user role or mute and unmute the microphone. In this scenario, audience members receive a pop-up window requesting permission to use the microphone.
- AUDIO_SCENARIO_CHORUS(7): Real-time chorus scenario, where users have good network conditions and require ultra-low latency. Attention: Before using this enumeration, call getAudioDeviceInfo to check whether the audio device supports ultra-low-latency capture and playback. You can experience ultra-low latency only on audio devices that support it (isLowLatencyAudioSupported = true).
- AUDIO_SCENARIO_MEETING(8): Meeting scenario that mainly involves the human voice.
- mAreaCode
- The region for connection. This is an advanced feature and applies to scenarios that have regional restrictions. For details on supported regions, see AreaCode. The area codes support bitwise operation.
- mLogConfig
-
The log files that the SDK outputs. See LogConfig.
By default, the SDK generates five SDK log files and five API call log files with the following rules:
- The SDK log files are agorasdk.log, agorasdk.1.log, agorasdk.2.log, agorasdk.3.log, and agorasdk.4.log.
- The API call log files are agoraapi.log, agoraapi.1.log, agoraapi.2.log, agoraapi.3.log, and agoraapi.4.log.
- The default size of each SDK log file is 1,024 KB; the default size of each API call log file is 2,048 KB. All log files are encoded in UTF-8.
- The SDK writes the latest logs to agorasdk.log or agoraapi.log.
- When agorasdk.log is full, the SDK processes the log files in the following order:
- Delete the agorasdk.4.log file (if any).
- Rename agorasdk.3.log to agorasdk.4.log.
- Rename agorasdk.2.log to agorasdk.3.log.
- Rename agorasdk.1.log to agorasdk.2.log.
- Create a new agorasdk.log file.
- The overwrite rules for the agoraapi.log file are the same as for agorasdk.log.
- mExtensionList
- Extension libraries.
- mExtensionObserver
- The IMediaExtensionObserver instance.
addExtension
Adds the extension.
public void addExtension(String providerName) { mExtensionList.add(providerName); }
Parameters
- providerName
- The name of the extension to add.
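Putting the configuration together, the following is a minimal initialization sketch. It assumes the Agora SDK is on the classpath; the context variable and App ID placeholder are your own, and the anonymous IRtcEngineEventHandler is left empty for brevity:

```java
// Sketch (not runnable standalone -- requires the Agora SDK):
RtcEngineConfig config = new RtcEngineConfig();
config.mContext = applicationContext;   // your Android Context
config.mAppId = "<YOUR_APP_ID>";        // the App ID issued by Agora
config.mChannelProfile = Constants.CHANNEL_PROFILE_LIVE_BROADCASTING;
config.mEventHandler = new IRtcEngineEventHandler() {};
config.mLogConfig.fileSizeInKB = 2048;  // enlarge each SDK log file
RtcEngine engine = RtcEngine.create(config);
```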
RtcConnection
Contains connection information.
public class RtcConnection { public enum CONNECTION_STATE_TYPE { CONNECTION_STATE_NOT_INITIALIZED(0), CONNECTION_STATE_DISCONNECTED(1), CONNECTION_STATE_CONNECTING(2), CONNECTION_STATE_CONNECTED(3), CONNECTION_STATE_RECONNECTING(4), CONNECTION_STATE_FAILED(5); private int value; private CONNECTION_STATE_TYPE(int v) { value = v; } public static int getValue(CONNECTION_STATE_TYPE type) { return type.value; } } public String channelId; public int localUid; public String localUserAccount; public RtcConnection() { channelId = null; localUserAccount = null; localUid = Constants.DEFAULT_CONNECTION_ID; } public RtcConnection(String channelId, int uid) { this.channelId = channelId; this.localUid = uid; } @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("channelId=").append(channelId); sb.append("localUid=").append(localUid); sb.append("localUserAccount=").append(localUserAccount); return sb.toString(); } }
Attributes
- channelId
- The channel name.
- localUid
- The ID of the local user.
- localUserAccount
- The user account of the local user.
AgoraImage
Image properties.
public class AgoraImage { public String url; public int x; public int y; public int width; public int height; public int zOrder; public double alpha; public AgoraImage() { this.url = null; this.x = 0; this.y = 0; this.width = 0; this.height = 0; this.zOrder = 0; this.alpha = 1.0; } public AgoraImage(String url) { this.url = url; this.x = 0; this.y = 0; this.width = 0; this.height = 0; this.zOrder = 0; this.alpha = 1.0; } }
This class sets the properties of the watermark and background images in the live video.
Attributes
- url
- The HTTP/HTTPS URL address of the image in the live video. The maximum length of this parameter is 1024 bytes.
- x
- The x coordinate (pixel) of the image on the video frame (taking the upper left corner of the video frame as the origin).
- y
- The y coordinate (pixel) of the image on the video frame (taking the upper left corner of the video frame as the origin).
- width
- The width (pixel) of the image on the video frame.
- height
- The height (pixel) of the image on the video frame.
- zOrder
- The layer index of the watermark or background image. When you use the watermark array to add a watermark or multiple watermarks, you must pass a value to zOrder in the range [1,255]; otherwise, the SDK reports an error. In other cases, zOrder can optionally be passed in the range [0,255], with 0 being the default value. 0 means the bottom layer and 255 means the top layer.
- alpha
- The transparency of the watermark or background image. The value ranges between 0.0 and 1.0:
- 0.0: Completely transparent.
- 1.0: (Default) Opaque.
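For illustration, the sketch below reproduces the class above and configures it as a half-transparent watermark in the upper-left corner. The demoWatermark() helper, URL, and geometry are hypothetical placeholder values:

```java
// AgoraImage as defined above, plus a hypothetical demoWatermark() helper.
public class AgoraImage {
    public String url;
    public int x;
    public int y;
    public int width;
    public int height;
    public int zOrder;
    public double alpha;
    public AgoraImage() {
        this.url = null; this.x = 0; this.y = 0;
        this.width = 0; this.height = 0; this.zOrder = 0; this.alpha = 1.0;
    }
    public AgoraImage(String url) {
        this.url = url; this.x = 0; this.y = 0;
        this.width = 0; this.height = 0; this.zOrder = 0; this.alpha = 1.0;
    }
    public static AgoraImage demoWatermark() {
        AgoraImage img = new AgoraImage("https://example.com/logo.png");
        img.x = 10;          // 10 px from the left edge of the video frame
        img.y = 10;          // 10 px from the top edge
        img.width = 160;
        img.height = 90;
        img.zOrder = 1;      // watermarks must use the range [1, 255]
        img.alpha = 0.5;     // half transparent
        return img;
    }
}
```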
RtcStats
Statistics of the channel.
public static class RtcStats { public int totalDuration; public int txBytes; public int rxBytes; public int txKBitRate; public int txAudioBytes; public int rxAudioBytes; public int txVideoBytes; public int rxVideoBytes; public int rxKBitRate; public int txAudioKBitRate; public int rxAudioKBitRate; public int txVideoKBitRate; public int rxVideoKBitRate; public int lastmileDelay; public double cpuTotalUsage; public int gatewayRtt; public double cpuAppUsage; public int users; public int connectTimeMs; public int txPacketLossRate; public int rxPacketLossRate; public double memoryAppUsageRatio; public double memoryTotalUsageRatio; public int memoryAppUsageInKbytes; }
Attributes
- totalDuration
- Call duration of the local user in seconds, represented by an aggregate value.
- txBytes
- Total number of bytes transmitted, represented by an aggregate value.
- rxBytes
- Total number of bytes received, represented by an aggregate value.
- txAudioBytes
- Total number of audio bytes sent, represented by an aggregate value.
- txVideoBytes
- The total number of video bytes sent, represented by an aggregate value.
- rxAudioBytes
- The total number of audio bytes received, represented by an aggregate value.
- rxVideoBytes
- The total number of video bytes received, represented by an aggregate value.
- txKBitRate
- The transmission bitrate (Kbps), represented by an instantaneous value.
- rxKBitRate
- The receiving bitrate (Kbps), represented by an instantaneous value.
- rxAudioKBitRate
- Audio receive bitrate (Kbps), represented by an instantaneous value.
- txAudioKBitRate
- Audio transmission bitrate (Kbps), represented by an instantaneous value.
- rxVideoKBitRate
- Video receive bitrate (Kbps), represented by an instantaneous value.
- txVideoKBitRate
- Video transmission bitrate (Kbps), represented by an instantaneous value.
- lastmileDelay
- The client-to-server delay (ms).
- txPacketLossRate
- The packet loss rate (%) from the client to the Agora server before applying the anti-packet-loss algorithm.
- rxPacketLossRate
- The packet loss rate (%) from the Agora server to the client before using the anti-packet-loss method.
- users
- The number of users in the channel.
- cpuAppUsage
- Application CPU usage (%). Attention:
- The value of cpuAppUsage is always reported as 0 in the onLeaveChannel callback.
- As of Android 8.1, you cannot get the CPU usage from this attribute due to system limitations.
- cpuTotalUsage
-
The system CPU usage (%).
Attention:
- The value of cpuTotalUsage is always reported as 0 in the onLeaveChannel callback.
- As of Android 8.1, you cannot get the CPU usage from this attribute due to system limitations.
- connectTimeMs
- The duration (ms) from when the SDK starts connecting to when the connection is established. A reported value of 0 is invalid.
- gatewayRtt
- The round-trip time delay (ms) from the client to the local router.
Note: On Android, to get gatewayRtt, ensure that you add the android.permission.ACCESS_WIFI_STATE permission after </application> in the AndroidManifest.xml file of your project.
- memoryAppUsageRatio
-
The memory ratio occupied by the app (%).
Attention: This value is for reference only. Due to system limitations, you may not get this value.
- memoryTotalUsageRatio
-
The memory occupied by the system (%).
Attention: This value is for reference only. Due to system limitations, you may not get this value.
- memoryAppUsageInKbytes
-
The memory size occupied by the app (KB).
Attention: This value is for reference only. Due to system limitations, you may not get this value.
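These statistics are typically read in the onRtcStats callback of IRtcEngineEventHandler. A sketch, assuming the Agora SDK (the fields logged are a selection, not a complete list):

```java
// Sketch (requires the Agora SDK): log a few channel statistics each
// time the SDK reports them.
IRtcEngineEventHandler handler = new IRtcEngineEventHandler() {
    @Override
    public void onRtcStats(RtcStats stats) {
        Log.d("RtcStats", "duration=" + stats.totalDuration + "s"
                + " users=" + stats.users
                + " lastmileDelay=" + stats.lastmileDelay + "ms"
                + " txLoss=" + stats.txPacketLossRate + "%");
    }
};
```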
ScreenCaptureParameters
Screen sharing configurations.
public class ScreenCaptureParameters { public static class VideoCaptureParameters { public int bitrate = 0; public int framerate = 15; public int width = 1280; public int height = 720; public int contentHint = Constants.SCREEN_CAPTURE_CONTENT_HINT_MOTION; @CalledByNative("VideoCaptureParameters") public int getBitrate() { return bitrate; } @CalledByNative("VideoCaptureParameters") public int getFramerate() { return framerate; } @CalledByNative("VideoCaptureParameters") public int getWidth() { return width; } @CalledByNative("VideoCaptureParameters") public int getHeight() { return height; } @CalledByNative("VideoCaptureParameters") public int getContentHint() { return contentHint; } @Override public String toString() { return "VideoCaptureParameters{" + "bitrate=" + bitrate + ", framerate=" + framerate + ", width=" + width + ", height=" + height + ", contentHint=" + contentHint + '}'; } } public static class AudioCaptureParameters { public int sampleRate = 16000; public int channels = 2; public int captureSignalVolume = 100; @CalledByNative("AudioCaptureParameters") public int getSampleRate() { return sampleRate; } @CalledByNative("AudioCaptureParameters") public int getChannels() { return channels; } @CalledByNative("AudioCaptureParameters") public int getCaptureSignalVolume() { return captureSignalVolume; } @Override public String toString() { return "AudioCaptureParameters{" + "sampleRate=" + sampleRate + ", channels=" + channels + ", captureSignalVolume=" + captureSignalVolume + '}'; } } public boolean captureAudio = false; public VideoCaptureParameters videoCaptureParameters = new VideoCaptureParameters(); public boolean captureVideo = true; public AudioCaptureParameters audioCaptureParameters = new AudioCaptureParameters(); @CalledByNative public boolean isCaptureAudio() { return captureAudio; } @CalledByNative public VideoCaptureParameters getVideoCaptureParameters() { return videoCaptureParameters; } @CalledByNative public boolean isCaptureVideo() { return captureVideo; } @CalledByNative public AudioCaptureParameters getAudioCaptureParameters() { return audioCaptureParameters; } @Override public String toString() { return "ScreenCaptureParameters{" + "captureAudio=" + captureAudio + ", videoCaptureParameters=" + videoCaptureParameters + ", captureVideo=" + captureVideo + ", audioCaptureParameters=" + audioCaptureParameters + '}'; } }
Attributes
- captureAudio
- Determines whether to capture system audio during screen sharing:
- true: Capture system audio.
- false: (Default) Do not capture system audio.
Note: Due to system limitations, capturing system audio is only available on Android API level 29 (Android 10) and later.
- captureVideo
- Whether to capture the screen during screen sharing:
- true: (Default) Capture the screen.
- false: Do not capture the screen.
Note: Due to system limitations, screen capture is only available on Android API level 21 (Android 5) and later.
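A configuration sketch, assuming the Agora SDK (the specific values, and the startScreenCapture call at the end, are illustrative):

```java
// Sketch (requires the Agora SDK): share the screen with system audio
// on Android 10+, preferring smoothness for motion-intensive content.
ScreenCaptureParameters params = new ScreenCaptureParameters();
params.captureVideo = true;
params.captureAudio = true;   // takes effect only on API level 29+
params.videoCaptureParameters.framerate = 15;
params.videoCaptureParameters.contentHint =
        Constants.SCREEN_CAPTURE_CONTENT_HINT_MOTION;
params.audioCaptureParameters.captureSignalVolume = 80;
rtcEngine.startScreenCapture(params);
```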
VideoCaptureParameters
The video configuration for the shared screen stream.
public static class VideoCaptureParameters { public int bitrate = 0; public int framerate = 15; public int width = 1280; public int height = 720; public int contentHint = Constants.SCREEN_CAPTURE_CONTENT_HINT_MOTION; @CalledByNative("VideoCaptureParameters") public int getBitrate() { return bitrate; } @CalledByNative("VideoCaptureParameters") public int getFramerate() { return framerate; } @CalledByNative("VideoCaptureParameters") public int getWidth() { return width; } @CalledByNative("VideoCaptureParameters") public int getHeight() { return height; } @CalledByNative("VideoCaptureParameters") public int getContentHint() { return contentHint; } @Override public String toString() { return "VideoCaptureParameters{" + "bitrate=" + bitrate + ", framerate=" + framerate + ", width=" + width + ", height=" + height + ", contentHint=" + contentHint + '}'; } }
Only available in scenarios where captureVideo is true.
Attributes
- width
- The width (px) of the video encoding resolution. The default value is 1280. If the aspect ratio of width to height is different from that of the screen, the SDK adjusts the video encoding resolution according to the following rules (take width × height of 1280 × 720 as an example):
- When the width and height of the screen are both lower than the set width and height, the SDK uses the screen resolution for video encoding. For example, if the screen is 640 × 360, the SDK uses 640 × 360 for video encoding.
- When the width or height of the screen is higher than the set width or height, the SDK uses the maximum values that do not exceed the set width and height while maintaining the aspect ratio of the screen. For example, if the screen is 2000 × 1500, the SDK uses 960 × 720 for video encoding.
Note:- The billing for the screen sharing stream is based on the value of dimensions. When you do not pass in a value, Agora bills you at 1280 × 720; when you pass in a value, Agora bills you at that value. See Pricing.
- The value of this parameter does not indicate the orientation mode of the output video. For how to set the video orientation, see ORIENTATION_MODE.
- Whether the 720p resolution or above can be supported depends on the device. If the device cannot support 720p, the frame rate will be lower than the set value.
- height
- The height (px) of the video encoding resolution. The default value is 720. If the aspect ratio of width to height is different from that of the screen, the SDK adjusts the video encoding resolution according to the following rules (take width × height of 1280 × 720 as an example):
- When the width and height of the screen are both lower than the set width and height, the SDK uses the screen resolution for video encoding. For example, if the screen is 640 × 360, the SDK uses 640 × 360 for video encoding.
- When the width or height of the screen is higher than the set width or height, the SDK uses the maximum values that do not exceed the set width and height while maintaining the aspect ratio of the screen. For example, if the screen is 2000 × 1500, the SDK uses 960 × 720 for video encoding.
Note:- The billing for the screen sharing stream is based on the value of dimensions. When you do not pass in a value, Agora bills you at 1280 × 720; when you pass in a value, Agora bills you at that value. See Pricing.
- The value of this parameter does not indicate the orientation mode of the output video. For how to set the video orientation, see ORIENTATION_MODE.
- Whether the 720p resolution or above can be supported depends on the device. If the device cannot support 720p, the frame rate will be lower than the set value.
- framerate
- The video encoding frame rate (fps). The default value is 15.
- bitrate
- The video encoding bitrate (Kbps).
- contentHint
- The content hint for screen sharing.
- SCREEN_CAPTURE_CONTENT_HINT_NONE(0): (Default) No content hint.
- SCREEN_CAPTURE_CONTENT_HINT_MOTION(1): Motion-intensive content. Choose this option if you prefer smoothness or when you are sharing a video clip, movie, or video game.
- SCREEN_CAPTURE_CONTENT_HINT_DETAILS(2): Motionless content. Choose this option if you prefer sharpness or when you are sharing a picture, PowerPoint slides, or texts.
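The width/height adjustment rules described above can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not an SDK API:

```java
// Hypothetical helper: computes the encoding resolution the SDK would
// choose, per the rules above. If the screen fits within the set
// dimensions, it is used as-is; otherwise it is scaled down, keeping its
// aspect ratio, so that neither side exceeds the set dimensions.
public class ScreenEncodeSize {
    public static int[] adjust(int screenW, int screenH, int setW, int setH) {
        if (screenW <= setW && screenH <= setH) {
            return new int[] {screenW, screenH};
        }
        double scale = Math.min((double) setW / screenW,
                                (double) setH / screenH);
        return new int[] {(int) Math.round(screenW * scale),
                          (int) Math.round(screenH * scale)};
    }
}
```
For instance, a 2000 × 1500 screen with the default 1280 × 720 setting yields 960 × 720, matching the example in the text.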
AudioCaptureParameters
The audio configuration for the shared screen stream.
public static class AudioCaptureParameters { public int sampleRate = 16000; public int channels = 2; public int captureSignalVolume = 100; @CalledByNative("AudioCaptureParameters") public int getSampleRate() { return sampleRate; } @CalledByNative("AudioCaptureParameters") public int getChannels() { return channels; } @CalledByNative("AudioCaptureParameters") public int getCaptureSignalVolume() { return captureSignalVolume; } @Override public String toString() { return "AudioCaptureParameters{" + "sampleRate=" + sampleRate + ", channels=" + channels + ", captureSignalVolume=" + captureSignalVolume + '}'; } }
Only available in scenarios where captureAudio is true.
Attributes
- sampleRate
- Audio sample rate (Hz). The default value is 16000.
- channels
- The number of audio channels. The default value is 2, which means stereo.
- captureSignalVolume
- The volume of the captured system audio. The value range is [0, 100]. The default value is 100.
SegmentationProperty
Processing properties for background images.
public class SegmentationProperty { public static final int SEG_MODEL_AI = 1; public static final int SEG_MODEL_GREEN = 2; public int modelType; public float greenCapacity; public SegmentationProperty(int modelType, float greenCapacity) { this.modelType = modelType; this.greenCapacity = greenCapacity; } public SegmentationProperty() { this.modelType = SEG_MODEL_AI; this.greenCapacity = 0.5f; } }
Attributes
- modelType
- The type of algorithm to use for background processing.
- SEG_MODEL_AI(1): (Default) Use the algorithm suitable for all scenarios.
- SEG_MODEL_GREEN(2): Use the algorithm designed specifically for scenarios with a green screen background.
- greenCapacity
-
The range of accuracy for identifying green colors (different shades of green) in the view. The value range is [0,1], and the default value is 0.5. The larger the value, the wider the range of identifiable shades of green. When the value is too large, the edges of the portrait and green areas within the portrait may also be detected. Agora recommends that you dynamically adjust this value according to the actual effect.
Note: This parameter only takes effect when modelType is set to SEG_MODEL_GREEN.
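A minimal sketch of the two configuration paths, reproducing the class above so the snippet stands alone; the greenScreenDemo() helper and the 0.7 capacity value are illustrative:

```java
// SegmentationProperty as defined above, plus a hypothetical helper
// configuring the green-screen algorithm with a wider green range.
public class SegmentationProperty {
    public static final int SEG_MODEL_AI = 1;
    public static final int SEG_MODEL_GREEN = 2;
    public int modelType;
    public float greenCapacity;
    public SegmentationProperty(int modelType, float greenCapacity) {
        this.modelType = modelType;
        this.greenCapacity = greenCapacity;
    }
    public SegmentationProperty() {
        this.modelType = SEG_MODEL_AI;   // default: general-purpose model
        this.greenCapacity = 0.5f;
    }
    public static SegmentationProperty greenScreenDemo() {
        // greenCapacity only takes effect with SEG_MODEL_GREEN.
        return new SegmentationProperty(SEG_MODEL_GREEN, 0.7f);
    }
}
```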
SimulcastStreamConfig
The configuration of the low-quality video stream.
public class SimulcastStreamConfig { public VideoEncoderConfiguration.VideoDimensions dimensions; public int bitrate; public int framerate; public SimulcastStreamConfig() { this.dimensions = new VideoEncoderConfiguration.VideoDimensions(-1, -1); this.bitrate = -1; this.framerate = 5; } public SimulcastStreamConfig( VideoEncoderConfiguration.VideoDimensions dimensions, int bitrate, int framerate) { this.dimensions = dimensions; this.bitrate = bitrate; this.framerate = framerate; } }
Attributes
- dimensions
- The video dimension. See VideoDimensions. The default value is 160 × 120.
- bitrate
- The video bitrate (Kbps) of the low-quality stream. The default value is 65.
- framerate
- The capture frame rate (fps) of the local video. The default value is 5.
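A construction sketch, assuming the Agora SDK (the enableDualStreamMode call at the end is one possible place to pass this configuration; the values are the documented defaults):

```java
// Sketch (requires the Agora SDK): a 160 x 120, 65 Kbps, 5 fps
// low-quality stream for dual-stream mode.
SimulcastStreamConfig lowStream = new SimulcastStreamConfig(
        new VideoEncoderConfiguration.VideoDimensions(160, 120),
        /* bitrate */ 65,
        /* framerate */ 5);
rtcEngine.enableDualStreamMode(true, lowStream);
```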
SpatialAudioParams
The spatial audio parameters.
public class SpatialAudioParams { public Double speaker_azimuth; public Double speaker_elevation; public Double speaker_distance; public Integer speaker_orientation; public Boolean enable_blur; public Boolean enable_air_absorb; public Double speaker_attenuation; public Boolean enable_doppler; @CalledByNative public Double getSpeakerAzimuth() { return speaker_azimuth; } @CalledByNative public Double getSpeakerElevation() { return speaker_elevation; } @CalledByNative public Double getSpeakerDistance() { return speaker_distance; } @CalledByNative public Integer getSpeakerOrientation() { return speaker_orientation; } @CalledByNative public Boolean getBlurFlag() { return enable_blur; } @CalledByNative public Boolean getAirAbsorbFlag() { return enable_air_absorb; } @CalledByNative public Double getSpeakerAttenuation() { return speaker_attenuation; } @CalledByNative public Boolean getDopplerFlag() { return enable_doppler; } }
Attributes
- speaker_azimuth
- The azimuth angle of the remote user or media player relative to the local user. The value range is [0,360], and the unit is degrees, as follows:
- 0: (Default) 0 degrees, which means directly in front on the horizontal plane.
- 90: 90 degrees, which means directly to the left on the horizontal plane.
- 180: 180 degrees, which means directly behind on the horizontal plane.
- 270: 270 degrees, which means directly to the right on the horizontal plane.
- 360: 360 degrees, which means directly in front on the horizontal plane.
- speaker_elevation
- The elevation angle of the remote user or media player relative to the local user. The value range is [-90,90], and the unit is degrees, as follows:
- 0: (Default) 0 degrees, which means that the horizontal plane is not rotated.
- -90: -90 degrees, which means that the horizontal plane is rotated 90 degrees downwards.
- 90: 90 degrees, which means that the horizontal plane is rotated 90 degrees upwards.
- speaker_distance
- The distance of the remote user or media player relative to the local user. The value range is [1,50], and the unit is meters. The default value is 1 meter.
- speaker_orientation
- The orientation of the remote user or media player relative to the local user. The value range is [0,180], and the unit is degrees, as follows:
- 0: (Default) 0 degrees, which means that the sound source and listener face the same direction.
- 180: 180 degrees, which means that the sound source and listener face each other.
- enable_blur
- Whether to enable audio blurring:
- true: Enable audio blurring.
- false: (Default) Disable audio blurring.
- enable_air_absorb
- Whether to enable air absorption, that is, to simulate the attenuation of sound transmitted through air: over a given transmission distance, high-frequency sound attenuates quickly while low-frequency sound attenuates slowly.
- true: (Default) Enable air absorption. Make sure that the value of speaker_attenuation is not 0; otherwise, this setting does not take effect.
- false: Disable air absorption.
- speaker_attenuation
- The sound attenuation coefficient of the remote user or media player. The value range is [0,1]. The values are as follows:
- 0: Broadcast mode. The volume and timbre are not attenuated with distance; they sound the same to the local user regardless of distance.
- (0,0.5): Weak attenuation mode, where the volume and timbre only have a weak attenuation during the propagation, and the sound can travel farther than that in a real environment. enable_air_absorb needs to be enabled at the same time.
- 0.5: (Default) Simulates the attenuation of the volume in the real environment; the effect is equivalent to not setting the speaker_attenuation parameter.
- (0.5,1]: Strong attenuation mode, where volume and timbre attenuate rapidly during the propagation. enable_air_absorb needs to be enabled at the same time.
- enable_doppler
- Whether to enable the Doppler effect: When there is a relative displacement between the sound source and the receiver of the sound source, the tone heard by the receiver changes.
- true: Enable the Doppler effect.
- false: (Default) Disable the Doppler effect.
CAUTION:- This parameter is suitable for scenarios where the sound source is moving at high speed (for example, racing games). It is not recommended for common audio and video interactive scenarios (for example, voice chat, cohosting, or online KTV).
- When this parameter is enabled, Agora recommends that you set a regular period (such as 30 ms), and then call the updatePlayerPositionInfo, updateSelfPosition, and updateRemotePosition methods to continuously update the relative distance between the sound source and the receiver. The following factors can cause the Doppler effect to be unpredictable or the sound to be jittery: the period of updating the distance is too long, the updating period is irregular, or the distance information is lost due to network packet loss or delay.
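The parameters above can be combined as follows. SpatialAudioParamsSketch is a hypothetical stand-in whose fields mirror the attribute list in this section; the real SDK class may differ in naming and packaging.

```java
// Hypothetical stand-in for the spatial audio parameters documented above;
// field names mirror the attribute list, not necessarily the real SDK class.
class SpatialAudioParamsSketch {
    Integer speakerOrientation; // [0,180] degrees relative to the listener
    Boolean enableBlur;
    Boolean enableAirAbsorb;
    Double speakerAttenuation;  // [0,1]
    Boolean enableDoppler;
}

public class SpatialAudioDemo {
    public static void main(String[] args) {
        SpatialAudioParamsSketch p = new SpatialAudioParamsSketch();
        p.speakerOrientation = 180;  // source and listener face each other
        p.speakerAttenuation = 0.3;  // weak attenuation mode: (0, 0.5)
        p.enableAirAbsorb = true;    // required for weak/strong attenuation to take effect
        p.enableDoppler = false;     // only for fast-moving sources, e.g. racing games
        // Weak/strong attenuation requires a non-zero coefficient plus air absorption.
        assert p.speakerAttenuation > 0 && p.enableAirAbsorb;
        System.out.println("attenuation=" + p.speakerAttenuation);
    }
}
```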
SpatialAudioZone
Sound insulation area settings.
public class SpatialAudioZone {
  public int zoneSetId;
  public float[] position;
  public float[] forward;
  public float[] right;
  public float[] up;
  public float forwardLength;
  public float rightLength;
  public float upLength;
  public float audioAttenuation;

  public SpatialAudioZone() {
    zoneSetId = -1;
    position = new float[] {0.0f, 0.0f, 0.0f};
    forward = new float[] {0.0f, 0.0f, 0.0f};
    right = new float[] {0.0f, 0.0f, 0.0f};
    up = new float[] {0.0f, 0.0f, 0.0f};
    forwardLength = 0.0f;
    rightLength = 0.0f;
    upLength = 0.0f;
    audioAttenuation = 0.0f;
  }

  @CalledByNative public float[] getPosition() { return position; }
  @CalledByNative public float[] getForward() { return forward; }
  @CalledByNative public float[] getRight() { return right; }
  @CalledByNative public float[] getUp() { return up; }
  @CalledByNative public int getZoneSetId() { return zoneSetId; }
  @CalledByNative public float getForwardLength() { return forwardLength; }
  @CalledByNative public float getRightLength() { return rightLength; }
  @CalledByNative public float getUpLength() { return upLength; }
  @CalledByNative public float getAudioAttenuation() { return audioAttenuation; }
}
- Since
- v4.0.1
Attributes
- zoneSetId
- The ID of the sound insulation area.
- position
- The spatial center point of the sound insulation area. This parameter is an array of length 3, and the three values represent the front, right, and top coordinates in turn.
- forward
- Starting at position, the forward unit vector. This parameter is an array of length 3, and the three values represent the front, right, and top coordinates in turn.
- right
- Starting at position, the right unit vector. This parameter is an array of length 3, and the three values represent the front, right, and top coordinates in turn.
- up
- Starting at position, the up unit vector. This parameter is an array of length 3, and the three values represent the front, right, and top coordinates in turn.
- forwardLength
- The entire sound insulation area is regarded as a cube; this represents the length of the forward side in the unit length of the game engine.
- rightLength
- The entire sound insulation area is regarded as a cube; this represents the length of the right side in the unit length of the game engine.
- upLength
- The entire sound insulation area is regarded as a cube; this represents the length of the up side in the unit length of the game engine.
- audioAttenuation
- The sound attenuation coefficient when users within the sound insulation area communicate with external users. The value range is [0,1]. The values are as follows:
- 0: Broadcast mode, where the volume and timbre are not attenuated with distance, and the volume and timbre heard by local users do not change regardless of distance.
- (0,0.5): Weak attenuation mode, where the volume and timbre are only weakly attenuated during propagation, and the sound can travel farther than in a real environment.
- 0.5: (Default) Simulates the attenuation of the volume in the real environment; the effect is equivalent to not setting the audioAttenuation parameter.
- (0.5,1]: Strong attenuation mode (default value is 1), where the volume and timbre attenuate rapidly during propagation.
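To make the cube model concrete, the following hypothetical helper (not part of the SDK) checks whether a point lies inside an axis-aligned zone whose side lengths are forwardLength, rightLength, and upLength, centered at position:

```java
// Hypothetical helper illustrating the cube model described above: the zone is
// centered at `center`, with full side lengths fwdLen, rightLen, upLen along the
// forward/right/up axes. Assumes an axis-aligned zone for simplicity.
public class ZoneContains {
    static boolean contains(float[] center, float fwdLen, float rightLen, float upLen,
                            float[] point) {
        return Math.abs(point[0] - center[0]) <= fwdLen / 2
            && Math.abs(point[1] - center[1]) <= rightLen / 2
            && Math.abs(point[2] - center[2]) <= upLen / 2;
    }

    public static void main(String[] args) {
        float[] center = {0f, 0f, 0f};
        // A 10 x 10 x 3 room (game-engine units), centered at the origin.
        System.out.println(contains(center, 10f, 10f, 3f, new float[] {4f, -4f, 1f})); // true
        System.out.println(contains(center, 10f, 10f, 3f, new float[] {6f, 0f, 0f}));  // false
    }
}
```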
SrcInfo
Information about the video bitrate of the media resource being played.
public class SrcInfo {
  private int bitrateInKbps;
  private String name;
  public SrcInfo() {}
}
Attributes
- bitrateInKbps
- The video bitrate (Kbps) of the media resource being played.
- name
- The name of the media resource.
TranscodingUser
Transcoding configurations of each host.
public static class TranscodingUser {
public int uid;
public String userId;
public int x;
public int y;
public int width;
public int height;
public int zOrder;
public float alpha;
public int audioChannel;
public TranscodingUser() {
alpha = 1;
}
}
Attributes
- uid
-
The user ID of the host.
- x
- The x coordinate (pixel) of the host's video on the output video frame (taking the upper left corner of the video frame as the origin). The value range is [0, width], where width is the width set in LiveTranscoding.
- y
- The y coordinate (pixel) of the host's video on the output video frame (taking the upper left corner of the video frame as the origin). The value range is [0, height], where height is the height set in LiveTranscoding.
- width
- The width (pixel) of the host's video.
- height
- The height (pixel) of the host's video.
- zOrder
-
The layer index number of the host's video. The value range is [0, 100].
- 0: (Default) The host's video is the bottom layer.
- 100: The host's video is the top layer.
Attention:- If the value is less than 0 or greater than 100, the ERR_INVALID_ARGUMENT error is returned.
- Setting zOrder to 0 is supported.
- alpha
-
The transparency of the host's video. The value range is [0.0,1.0].
- 0.0: Completely transparent.
- 1.0: (Default) Opaque.
- audioChannel
-
The audio channel used by the host's audio in the output audio. The default value is 0, and the value range is [0, 5].
- 0: (Recommended) The default setting, which supports up to two channels and depends on the host's upstream.
- 1: The host's audio uses the FL audio channel. If the host's upstream uses multiple audio channels, the Agora server mixes them into mono first.
- 2: The host's audio uses the FC audio channel. If the host's upstream uses multiple audio channels, the Agora server mixes them into mono first.
- 3: The host's audio uses the FR audio channel. If the host's upstream uses multiple audio channels, the Agora server mixes them into mono first.
- 4: The host's audio uses the BL audio channel. If the host's upstream uses multiple audio channels, the Agora server mixes them into mono first.
- 5: The host's audio uses the BR audio channel. If the host's upstream uses multiple audio channels, the Agora server mixes them into mono first.
- 0xFF or a value greater than 5: The host's audio is muted, and the Agora server removes the host's audio.
Attention: If the value is not 0, a special player is required.
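The x/y/width/height semantics above can be sketched with a simple side-by-side layout. TranscodingUserSketch is a local stand-in mirroring the fields of TranscodingUser, not the SDK class itself:

```java
// Sketch of laying out two hosts side by side on a 720 x 640 mixed canvas,
// matching the x/y/width/height semantics documented above.
class TranscodingUserSketch {
    int uid, x, y, width, height, zOrder, audioChannel;
    float alpha = 1f; // default, matching the TranscodingUser constructor shown above
}

public class TranscodingLayout {
    public static void main(String[] args) {
        int canvasWidth = 720, canvasHeight = 640;

        TranscodingUserSketch left = new TranscodingUserSketch();
        left.uid = 1001;
        left.x = 0;
        left.y = 0;
        left.width = canvasWidth / 2;
        left.height = canvasHeight;

        TranscodingUserSketch right = new TranscodingUserSketch();
        right.uid = 1002;
        right.x = canvasWidth / 2; // starts where the left half ends
        right.y = 0;
        right.width = canvasWidth / 2;
        right.height = canvasHeight;

        // Each host's region must stay within the LiveTranscoding canvas.
        assert left.x + left.width <= canvasWidth;
        assert right.x + right.width <= canvasWidth;
        System.out.println("right host starts at x=" + right.x); // x=360
    }
}
```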
UplinkNetworkInfo
The uplink network information.
public static class UplinkNetworkInfo {
  public int video_encoder_target_bitrate_bps;
}
Attributes
- video_encoder_target_bitrate_bps
- The target video encoder bitrate (bps).
UserAudioSpectrumInfo
Audio spectrum information of the remote user.
public class UserAudioSpectrumInfo {
  private int uid;
  private AudioSpectrumInfo audioSpectrumInfo;
}
Attributes
- uid
- The user ID of the remote user.
- audioSpectrumInfo
-
Audio spectrum information of the remote user. See AudioSpectrumInfo.
UserInfo
The information of the user.
public class UserInfo {
public int uid;
public String userAccount;
@CalledByNative
public UserInfo(int uid, String userAccount) {
this.uid = uid;
this.userAccount = userAccount;
}
}
Attributes
- uid
- The user ID.
- userAccount
- User account.
VideoCanvas
Attributes of video canvas object.
public class VideoCanvas {
  public static final int RENDER_MODE_HIDDEN = 1;
  public static final int RENDER_MODE_FIT = 2;
  public static final int RENDER_MODE_ADAPTIVE = 3;
  public View view;
  public int renderMode;
  public int mirrorMode;
  public int sourceType;
  public int sourceId;
  public int uid;
}
Attributes
- view
- Video display window.
- renderMode
-
- RENDER_MODE_HIDDEN(1): Hidden mode. Uniformly scale the video until it fills the visible boundaries (cropped). One dimension of the video may have clipped contents.
- RENDER_MODE_FIT(2): Fit mode. Uniformly scale the video until one of its dimensions fits the boundary (zoomed to fit). Areas that are not filled due to the disparity in the aspect ratio are filled with black.
- RENDER_MODE_ADAPTIVE(3): This mode is deprecated.
- mirrorMode
-
- MIRROR_MODE_AUTO(0): (Default) The mirror mode determined by the SDK. If you use a front camera, the SDK enables the mirror mode by default; if you use a rear camera, the SDK disables the mirror mode by default.
- MIRROR_MODE_ENABLED(1): Enable the mirror mode.
- MIRROR_MODE_DISABLED(2): Disable the mirror mode.
Attention:- For the mirror mode of the local video view: If you use a front camera, the SDK enables the mirror mode by default; if you use a rear camera, the SDK disables the mirror mode by default.
- For the remote user: The mirror mode is disabled by default.
- uid
- The user ID.
- sourceType
- The type of the video source, see VideoSourceType.
- sourceId
- The ID of the video source.
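The difference between RENDER_MODE_HIDDEN and RENDER_MODE_FIT comes down to which scale factor is chosen. This is explanatory arithmetic, not SDK code:

```java
// Illustrates RENDER_MODE_FIT vs RENDER_MODE_HIDDEN with plain aspect-ratio math.
public class RenderModeMath {
    // Returns {width, height} of a srcW x srcH video scaled into a dstW x dstH view.
    static int[] scale(int srcW, int srcH, int dstW, int dstH, boolean fit) {
        double sx = (double) dstW / srcW;
        double sy = (double) dstH / srcH;
        double s = fit ? Math.min(sx, sy)   // FIT: whole video visible, may letterbox
                       : Math.max(sx, sy);  // HIDDEN: view filled, video may be cropped
        return new int[] {(int) Math.round(srcW * s), (int) Math.round(srcH * s)};
    }

    public static void main(String[] args) {
        int[] fit = scale(1280, 720, 720, 720, true);     // 720 x 405, black bars
        int[] hidden = scale(1280, 720, 720, 720, false); // 1280 x 720, sides cropped
        System.out.println(fit[0] + "x" + fit[1] + " vs " + hidden[0] + "x" + hidden[1]);
    }
}
```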
VideoDenoiserOptions
Video noise reduction options.
public class VideoDenoiserOptions {
  public static final int VIDEO_DENOISER_AUTO = 0;
  public static final int VIDEO_DENOISER_MANUAL = 1;
  public static final int VIDEO_DENOISER_LEVEL_HIGH_QUALITY = 0;
  public static final int VIDEO_DENOISER_LEVEL_FAST = 1;
  public static final int VIDEO_DENOISER_LEVEL_STRENGTH = 2;
  public int denoiserMode;
  public int denoiserLevel;

  public VideoDenoiserOptions() {
    denoiserMode = VIDEO_DENOISER_AUTO;
    denoiserLevel = VIDEO_DENOISER_LEVEL_HIGH_QUALITY;
  }

  public VideoDenoiserOptions(int mode, int level) {
    denoiserMode = mode;
    denoiserLevel = level;
  }
}
Attributes
- denoiserLevel
- Video noise reduction level.
- VIDEO_DENOISER_LEVEL_HIGH_QUALITY(0): (Default) Promotes video quality during video noise reduction. It processes the brightness, details, and noise of the video image. The performance consumption is moderate, the processing speed is moderate, and the overall video quality is optimal.
- VIDEO_DENOISER_LEVEL_FAST(1): Promotes reducing performance consumption during video noise reduction. It prioritizes reducing performance consumption over video noise reduction quality. The performance consumption is lower, and the video noise reduction speed is faster. To avoid a noticeable shadowing effect (shadows trailing behind moving objects) in the processed video, Agora recommends that you use this setting when the camera is fixed.
- VIDEO_DENOISER_LEVEL_STRENGTH(2): Enhanced video noise reduction. It prioritizes video noise reduction quality over reducing performance consumption. The performance consumption is higher, the video noise reduction speed is slower, and the video noise reduction quality is better. If VIDEO_DENOISER_LEVEL_HIGH_QUALITY is not enough for your video noise reduction needs, you can use this enumerator.
- denoiserMode
- Video noise reduction mode.
- VIDEO_DENOISER_AUTO(0): (Default) Automatic mode. The SDK automatically enables or disables the video noise reduction feature according to the ambient light.
- VIDEO_DENOISER_MANUAL(1): Manual mode. Users need to enable or disable the video noise reduction feature manually.
VideoDimensions
The video dimension.
static public class VideoDimensions {
  public int width;
  public int height;
  public VideoDimensions(int width, int height) {
    this.width = width;
    this.height = height;
  }
  public VideoDimensions() {
    this.width = 0;
    this.height = 0;
  }
}
Attributes
- width
-
The width (pixels) of the video.
- height
- The height (pixels) of the video.
VideoEncoderConfiguration
Video encoder configurations.
public class VideoEncoderConfiguration { static public class VideoDimensions { public int width; public int height; public VideoDimensions(int width, int height) { this.width = width; this.height = height; } public VideoDimensions() { this.width = 0; this.height = 0; } } public final static VideoDimensions VD_120x120 = new VideoDimensions(120, 120); public final static VideoDimensions VD_160x120 = new VideoDimensions(160, 120); public final static VideoDimensions VD_180x180 = new VideoDimensions(180, 180); public final static VideoDimensions VD_240x180 = new VideoDimensions(240, 180); public final static VideoDimensions VD_320x180 = new VideoDimensions(320, 180); public final static VideoDimensions VD_240x240 = new VideoDimensions(240, 240); public final static VideoDimensions VD_320x240 = new VideoDimensions(320, 240); public final static VideoDimensions VD_424x240 = new VideoDimensions(424, 240); public final static VideoDimensions VD_360x360 = new VideoDimensions(360, 360); public final static VideoDimensions VD_480x360 = new VideoDimensions(480, 360); public final static VideoDimensions VD_640x360 = new VideoDimensions(640, 360); public final static VideoDimensions VD_480x480 = new VideoDimensions(480, 480); public final static VideoDimensions VD_640x480 = new VideoDimensions(640, 480); public final static VideoDimensions VD_840x480 = new VideoDimensions(840, 480); public final static VideoDimensions VD_960x720 = new VideoDimensions(960, 720); public final static VideoDimensions VD_1280x720 = new VideoDimensions(1280, 720); public final static VideoDimensions VD_1920x1080 = new VideoDimensions(1920, 1080); public final static VideoDimensions VD_2540x1440 = new VideoDimensions(2540, 1440); public final static VideoDimensions VD_3840x2160 = new VideoDimensions(3840, 2160); public enum FRAME_RATE { FRAME_RATE_FPS_1(1), FRAME_RATE_FPS_7(7), FRAME_RATE_FPS_10(10), FRAME_RATE_FPS_15(15), FRAME_RATE_FPS_24(24), FRAME_RATE_FPS_30(30), FRAME_RATE_FPS_60(60); private int 
value; private FRAME_RATE(int v) { value = v; } public int getValue() { return this.value; } } public enum ORIENTATION_MODE { ORIENTATION_MODE_ADAPTIVE(0), ORIENTATION_MODE_FIXED_LANDSCAPE(1), ORIENTATION_MODE_FIXED_PORTRAIT(2); private int value; private ORIENTATION_MODE(int v) { value = v; } public int getValue() { return this.value; } } public enum DEGRADATION_PREFERENCE { MAINTAIN_QUALITY(0), MAINTAIN_FRAMERATE(1), MAINTAIN_BALANCED(2), MAINTAIN_RESOLUTION(3), DISABLED(100); private int value; private DEGRADATION_PREFERENCE(int v) { value = v; } public int getValue() { return this.value; } } public enum MIRROR_MODE_TYPE { MIRROR_MODE_AUTO(0), MIRROR_MODE_ENABLED(1), MIRROR_MODE_DISABLED(2); private int value; private MIRROR_MODE_TYPE(int v) { value = v; } public int getValue() { return this.value; } } public static final int STANDARD_BITRATE = 0; public static final int COMPATIBLE_BITRATE = -1; public static final int DEFAULT_MIN_BITRATE = -1; public static final int DEFAULT_MIN_FRAMERATE = -1; public static final int DEFAULT_MIN_BITRATE_EQUAL_TO_TARGET_BITRATE = -2; public VideoDimensions dimensions; public int frameRate; public int minFrameRate; public int bitrate; public int minBitrate; public ORIENTATION_MODE orientationMode; public DEGRADATION_PREFERENCE degradationPrefer; public MIRROR_MODE_TYPE mirrorMode; public VideoEncoderConfiguration() { this.dimensions = new VideoDimensions(640, 480); this.frameRate = FRAME_RATE.FRAME_RATE_FPS_15.getValue(); this.minFrameRate = DEFAULT_MIN_FRAMERATE; this.bitrate = STANDARD_BITRATE; this.minBitrate = DEFAULT_MIN_BITRATE; this.orientationMode = ORIENTATION_MODE.ORIENTATION_MODE_ADAPTIVE; this.degradationPrefer = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY; this.mirrorMode = MIRROR_MODE_TYPE.MIRROR_MODE_DISABLED; } public VideoEncoderConfiguration(VideoDimensions dimensions, FRAME_RATE frameRate, int bitrate, ORIENTATION_MODE orientationMode) { this.dimensions = dimensions; this.frameRate = frameRate.getValue(); 
this.minFrameRate = DEFAULT_MIN_FRAMERATE; this.bitrate = bitrate; this.minBitrate = DEFAULT_MIN_BITRATE; this.orientationMode = orientationMode; this.degradationPrefer = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY; this.mirrorMode = MIRROR_MODE_TYPE.MIRROR_MODE_DISABLED; } public VideoEncoderConfiguration(VideoDimensions dimensions, FRAME_RATE frameRate, int bitrate, ORIENTATION_MODE orientationMode, MIRROR_MODE_TYPE mirrorMode) { this.dimensions = dimensions; this.frameRate = frameRate.getValue(); this.minFrameRate = DEFAULT_MIN_FRAMERATE; this.bitrate = bitrate; this.minBitrate = DEFAULT_MIN_BITRATE; this.orientationMode = orientationMode; this.degradationPrefer = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY; this.mirrorMode = mirrorMode; } public VideoEncoderConfiguration( int width, int height, FRAME_RATE frameRate, int bitrate, ORIENTATION_MODE orientationMode) { this.dimensions = new VideoDimensions(width, height); this.frameRate = frameRate.getValue(); this.minFrameRate = DEFAULT_MIN_FRAMERATE; this.bitrate = bitrate; this.minBitrate = DEFAULT_MIN_BITRATE; this.orientationMode = orientationMode; this.degradationPrefer = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY; this.mirrorMode = MIRROR_MODE_TYPE.MIRROR_MODE_DISABLED; } public VideoEncoderConfiguration(int width, int height, FRAME_RATE frameRate, int bitrate, ORIENTATION_MODE orientationMode, MIRROR_MODE_TYPE mirrorMode) { this.dimensions = new VideoDimensions(width, height); this.frameRate = frameRate.getValue(); this.minFrameRate = DEFAULT_MIN_FRAMERATE; this.bitrate = bitrate; this.minBitrate = DEFAULT_MIN_BITRATE; this.orientationMode = orientationMode; this.degradationPrefer = DEGRADATION_PREFERENCE.MAINTAIN_QUALITY; this.mirrorMode = mirrorMode; } }
Attributes
- dimensions
-
Users can set the resolution by themselves, or directly select the desired resolution from the following list:
- VD_120x120: The video resolution is 120 × 120.
- VD_160x120: The video resolution is 160 × 120.
- VD_180x180: The video resolution is 180 × 180.
- VD_240x180: The video resolution is 240 × 180.
- VD_320x180: The video resolution is 320 × 180.
- VD_240x240: The video resolution is 240 × 240.
- VD_320x240: The video resolution is 320 × 240.
- VD_424x240: The video resolution is 424 × 240.
- VD_360x360: The video resolution is 360 × 360.
- VD_480x360: The video resolution is 480 × 360.
- VD_640x360: The video resolution is 640 × 360.
- VD_480x480: The video resolution is 480 × 480.
- VD_640x480: The video resolution is 640 × 480.
- VD_840x480: The video resolution is 840 × 480.
- VD_960x720: The video resolution is 960 × 720.
- VD_1280x720: The video resolution is 1280 × 720.
- VD_1920x1080: The video resolution is 1920 × 1080.
- VD_2540x1440: The video resolution is 2540 × 1440.
- VD_3840x2160: The video resolution is 3840 × 2160.
Attention:- Whether the 720p resolution or above can be supported depends on the device. If the device cannot support 720p, the frame rate will be lower than the set value.
- The default value is 640 × 360.
- frameRate
- The frame rate (fps) of the encoding video frame. The default value is 15. See FRAME_RATE.
- minFrameRate
- The minimum encoding frame rate of the video. The default value is -1.
- bitrate
-
The encoding bitrate (Kbps) of the video.
- STANDARD_BITRATE: (Recommended) Standard bitrate mode. In this mode, the video bitrate is twice the base bitrate.
- COMPATIBLE_BITRATE: Adaptive bitrate mode. In this mode, the video bitrate is the same as the base bitrate. If you choose this mode in the LIVE_BROADCASTING profile, the video frame rate may be lower than the set value.
- minBitrate
-
The minimum encoding bitrate (Kbps) of the video.
The SDK automatically adjusts the encoding bitrate to adapt to the network conditions. Using a value greater than the default value forces the video encoder to output high-quality images but may cause more packet loss and sacrifice the smoothness of the video transmission. Unless you have special requirements for image quality, Agora does not recommend changing this value.
Attention: This parameter only applies to the interactive streaming profile. - orientationMode
- The orientation mode of the encoded video. See ORIENTATION_MODE.
- degradationPrefer
- Video degradation preference under limited bandwidth. See DEGRADATION_PREFERENCE.
- mirrorMode
-
Sets the mirror mode of the published local video stream. It only affects the video that the remote user sees. See MIRROR_MODE_TYPE.
Attention: By default, the video is not mirrored.
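The defaults described above can be summarized in a small sketch. EncoderConfigSketch is a local stand-in mirroring the values set in the VideoEncoderConfiguration constructor, not the SDK class itself:

```java
// Local stand-in mirroring the defaults in the VideoEncoderConfiguration
// constructor shown above; not the SDK class itself.
public class EncoderConfigSketch {
    static final int STANDARD_BITRATE = 0;   // sentinel: the SDK picks the bitrate
    static final int DEFAULT_MIN_BITRATE = -1;
    int width = 640, height = 480;           // default dimensions (640 x 480)
    int frameRate = 15;                      // FRAME_RATE_FPS_15
    int bitrate = STANDARD_BITRATE;
    int minBitrate = DEFAULT_MIN_BITRATE;

    public static void main(String[] args) {
        EncoderConfigSketch cfg = new EncoderConfigSketch();
        // A common live-streaming setup: 720p at 30 fps, standard bitrate.
        cfg.width = 1280;
        cfg.height = 720;
        cfg.frameRate = 30;
        System.out.println(cfg.width + "x" + cfg.height + "@" + cfg.frameRate + "fps");
    }
}
```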
VideoFrame
Configurations of the video frame.
public interface Buffer extends RefCounted { @CalledByNative("Buffer") int getWidth(); @CalledByNative("Buffer") int getHeight(); @Override @CalledByNative("Buffer") void retain(); @Override @CalledByNative("Buffer") void release(); @CalledByNative("Buffer") Buffer cropAndScale( int cropX, int cropY, int cropWidth, int cropHeight, int scaleWidth, int scaleHeight); @CalledByNative("Buffer") @Nullable Buffer mirror(int frameRotation); @CalledByNative("Buffer") @Nullable Buffer rotate(int frameRotation); @CalledByNative("Buffer") @Nullable Buffer transform(int cropX, int cropY, int cropWidth, int cropHeight, int scaleWidth, int scaleHeight, int frameRotation); } public interface ColorSpace { enum Range { Invalid(0), Limited(1), Full(2), Derived(3); private final int range; private Range(int range) { this.range = range; } public int getRange() { return range; }; } enum Matrix { RGB(0), BT709(1), Unspecified(2), FCC(4), BT470BG(5), SMPTE170M(6), SMPTE240M(7), YCOCG(8), BT2020_NCL(9), BT2020_CL(10), SMPTE2085(11), CDNCLS(12), CDCLS(13), BT2100_ICTCP(14); private final int matrix; private Matrix(int matrix) { this.matrix = matrix; } public int getMatrix() { return matrix; }; } enum Transfer { BT709(1), Unspecified(2), GAMMA22(4), GAMMA28(5), SMPTE170M(6), SMPTE240M(7), LINEAR(8), LOG(9), LOG_SQRT(10), IEC61966_2_4(11), BT1361_ECG(12), IEC61966_2_1(13), BT2020_10(14), BT2020_12(15), SMPTEST2084(16), SMPTEST428(17), ARIB_STD_B67(18); private final int transfer; private Transfer(int transfer) { this.transfer = transfer; } public int getTransfer() { return transfer; } } enum Primary { BT709(1), Unspecified(2), BT470M(4), BT470BG(5), kSMPTE170M(6), kSMPTE240M(7), kFILM(8), kBT2020(9), kSMPTEST428(10), kSMPTEST431(11), kSMPTEST432(12), kJEDECP22(22); private final int primary; private Primary(int primary) { this.primary = primary; } public int getPrimary() { return primary; } } Range getRange(); Matrix getMatrix(); Transfer getTransfer(); Primary getPrimary(); } public enum 
SourceType { kFrontCamera, kBackCamera, kUnspecified, } private Buffer buffer; private int rotation; private long timestampNs; private ColorSpace colorSpace; private SourceType sourceType; private float sampleAspectRatio; private byte[] alphaBuffer; public VideoFrame(Buffer buffer, int rotation, long timestampNs) { this(buffer, rotation, timestampNs, null, null, 1.0f, SourceType.kUnspecified.ordinal()); } @CalledByNative public VideoFrame(Buffer buffer, int rotation, long timestampNs, ColorSpace colorSpace, byte[] alphaBuffer, float sampleAspectRatio, int sourceType) { if (buffer == null) { throw new IllegalArgumentException("buffer not allowed to be null"); } if (rotation % 90 != 0) { throw new IllegalArgumentException("rotation must be a multiple of 90"); } this.buffer = buffer; this.rotation = rotation; this.timestampNs = timestampNs; this.colorSpace = colorSpace; this.alphaBuffer = alphaBuffer; this.sampleAspectRatio = sampleAspectRatio; this.sourceType = SourceType.values()[sourceType]; } @CalledByNative public SourceType getSourceType() { return sourceType; } public float getSampleAspectRatio() { return sampleAspectRatio; } @CalledByNative public Buffer getBuffer() { return buffer; } @CalledByNative public int getRotation() { return rotation; } @CalledByNative public long getTimestampNs() { return timestampNs; } public int getRotatedWidth() { if (rotation % 180 == 0) { return buffer.getWidth(); } return buffer.getHeight(); } public int getRotatedHeight() { if (rotation % 180 == 0) { return buffer.getHeight(); } return buffer.getWidth(); } public void replaceBuffer(Buffer buffer, int rotation, long timestampNs) { release(); this.buffer = buffer; this.rotation = rotation; this.timestampNs = timestampNs; } public ColorSpace getColorSpace() { return colorSpace; } public byte[] getAlphaBuffer() { return alphaBuffer; } @Override public void retain() { buffer.retain(); } @Override @CalledByNative public void release() { buffer.release(); } }
The video data format is YUV420. Note that the buffer provides a pointer to a pointer. This interface cannot modify the pointer of the buffer, but it can modify the content of the buffer.
Attributes
- buffer
-
Buffer data. See the Buffer interface for the methods associated with this parameter. CAUTION: This parameter cannot be null; otherwise, an error occurs.
- rotation
- The clockwise rotation of the video frame before rendering. Supported values include 0, 90, 180, and 270 degrees.
- timestampNs
- The timestamp (ns) of a video frame.
- colorSpace
- The color space of a video frame. See VideoColorSpace.
- sourceType
- When using the SDK to capture video, this indicates the type of the video source.
- kFrontCamera: The front camera.
- kBackCamera: The rear camera.
- kUnspecified: (Default) The video source type is unknown.
- sampleAspectRatio
- The aspect ratio of a single pixel, which is the ratio of the width to the height of each pixel.
- alphaBuffer
-
Indicates the output data of the portrait segmentation algorithm, which is consistent with the size of the video frame. The value range of each pixel is [0,255], where 0 represents the background; 255 represents the foreground (portrait).
In the custom video renderer scenario, you can use this parameter to render the video background with various effects, such as transparency, a solid color, a picture, or a video. Note: To use this parameter, contact technical support.
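The rotation handling in getRotatedWidth() and getRotatedHeight() above reduces to a width/height swap for 90- and 270-degree rotations, as this standalone sketch shows:

```java
// Demonstrates the rotation logic used by VideoFrame.getRotatedWidth()/Height():
// a 90- or 270-degree rotation swaps width and height before rendering.
public class RotationDemo {
    static int rotatedWidth(int width, int height, int rotation) {
        return (rotation % 180 == 0) ? width : height;
    }

    static int rotatedHeight(int width, int height, int rotation) {
        return (rotation % 180 == 0) ? height : width;
    }

    public static void main(String[] args) {
        System.out.println(rotatedWidth(1280, 720, 90));   // 720
        System.out.println(rotatedHeight(1280, 720, 90));  // 1280
        System.out.println(rotatedWidth(1280, 720, 180));  // 1280
    }
}
```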
VirtualBackgroundSource
The custom background image.
public class VirtualBackgroundSource {
  public static final int BACKGROUND_COLOR = 1;
  public static final int BACKGROUND_IMG = 2;
  public static final int BACKGROUND_BLUR = 3;
  public static final int BLUR_DEGREE_LOW = 1;
  public static final int BLUR_DEGREE_MEDIUM = 2;
  public static final int BLUR_DEGREE_HIGH = 3;
  public int backgroundSourceType;
  public int color;
  public String source = null;
  public int blurDegree;

  public VirtualBackgroundSource(int backgroundSourceType, int color, String source, int blurDegree) {
    this.backgroundSourceType = backgroundSourceType;
    this.color = color;
    this.source = source;
    this.blurDegree = blurDegree;
  }

  public VirtualBackgroundSource() {
    this.backgroundSourceType = BACKGROUND_COLOR;
    this.color = 0xffffff;
    this.source = "";
    this.blurDegree = BLUR_DEGREE_HIGH;
  }
}
Attributes
- backgroundSourceType
- The type of the custom background image.
- BACKGROUND_COLOR(1): (Default) The background image is a solid color.
- BACKGROUND_IMG(2): The background image is a file in PNG or JPG format.
- BACKGROUND_BLUR(3): The background image is the blurred background.
- color
- The color of the custom background image. The format is a hexadecimal integer defined by RGB, without the # sign, such as 0xFFB6C1 for light pink. The default value is 0xFFFFFF, which signifies white. The value range is [0x000000, 0xFFFFFF]. If the value is invalid, the SDK replaces the original background image with a white background image.Attention: This parameter takes effect only when the type of the custom background image is BACKGROUND_COLOR.
- source
- The local absolute path of the custom background image. PNG and JPG formats are supported. If the path is invalid, the SDK replaces the original background image with a white background image.Attention: This parameter takes effect only when the type of the custom background image is BACKGROUND_IMG.
- blurDegree
- The degree of blurring applied to the custom background image.
- BLUR_DEGREE_LOW(1): The degree of blurring applied to the custom background image is low. The user can almost see the background clearly.
- BLUR_DEGREE_MEDIUM(2): The degree of blurring applied to the custom background image is medium. It is difficult for the user to recognize details in the background.
- BLUR_DEGREE_HIGH(3): The degree of blurring applied to the custom background image is high. The user can barely see any distinguishing features in the background.
Attention: This parameter takes effect only when the type of the custom background image is BACKGROUND_BLUR.
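The three background types can be sketched as follows. VirtualBackgroundSourceSketch is a local stand-in mirroring the class above, with field values matching the defaults in its no-argument constructor:

```java
// Local stand-in mirroring VirtualBackgroundSource above; defaults follow the
// no-argument constructor shown in the doc.
class VirtualBackgroundSourceSketch {
    static final int BACKGROUND_COLOR = 1;
    static final int BACKGROUND_IMG = 2;
    static final int BACKGROUND_BLUR = 3;
    static final int BLUR_DEGREE_HIGH = 3;
    int backgroundSourceType = BACKGROUND_COLOR;
    int color = 0xffffff; // white, hexadecimal RGB without the '#' sign
    String source = "";
    int blurDegree = BLUR_DEGREE_HIGH;
}

public class BackgroundDemo {
    public static void main(String[] args) {
        // Solid light-pink background (color takes effect only for BACKGROUND_COLOR).
        VirtualBackgroundSourceSketch pink = new VirtualBackgroundSourceSketch();
        pink.backgroundSourceType = VirtualBackgroundSourceSketch.BACKGROUND_COLOR;
        pink.color = 0xFFB6C1;

        // Blurred background (color and source are ignored for BACKGROUND_BLUR).
        VirtualBackgroundSourceSketch blur = new VirtualBackgroundSourceSketch();
        blur.backgroundSourceType = VirtualBackgroundSourceSketch.BACKGROUND_BLUR;
        blur.blurDegree = VirtualBackgroundSourceSketch.BLUR_DEGREE_HIGH;

        assert pink.color >= 0x000000 && pink.color <= 0xffffff;
        System.out.println(Integer.toHexString(pink.color)); // ffb6c1
    }
}
```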
WatermarkOptions
Configurations of the watermark image.
public class WatermarkOptions {
  public static class Rectangle {
    public int x = 0;
    public int y = 0;
    public int width = 0;
    public int height = 0;
    public Rectangle() {}
    public Rectangle(int x_, int y_, int width_, int height_) {
      x = x_;
      y = y_;
      width = width_;
      height = height_;
    }
  }
  public boolean visibleInPreview = true;
  public Rectangle positionInLandscapeMode = new Rectangle();
  public Rectangle positionInPortraitMode = new Rectangle();
}
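Positioning a watermark with these options can be sketched as follows. RectangleSketch is a local stand-in for the nested Rectangle class above; the 1280 x 720 frame size is an assumption for illustration:

```java
// RectangleSketch mirrors the nested WatermarkOptions.Rectangle class above;
// this sketch places a 200 x 100 watermark 16 px from the top-left corner.
class RectangleSketch {
    int x, y, width, height;
    RectangleSketch(int x, int y, int width, int height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }
}

public class WatermarkDemo {
    public static void main(String[] args) {
        RectangleSketch landscape = new RectangleSketch(16, 16, 200, 100);
        // The watermark should fit inside the landscape frame, e.g. 1280 x 720.
        assert landscape.x + landscape.width <= 1280;
        assert landscape.y + landscape.height <= 720;
        System.out.println(landscape.width + "x" + landscape.height);
    }
}
```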