Developers Guide
Let's start building video communication features!
UserAgent
The UserAgent is the entry point to the ApiRTC CPaaS. It is the first object to instantiate when implementing a front-end application, and it represents the local user who will participate in the conversation.
The UserAgent can be either anonymous or identified.
Identification is done through a JWT retrieved from an authentication service.
Read the Authentication page for more details on how to authenticate.
Read the UserAgent reference page.
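A minimal instantiation sketch, assuming the apiRTC global from the script include and a placeholder API key:

```javascript
// Create a UserAgent representing the local user.
// 'apzkey:myDemoApiKey' is a placeholder: use your own API key.
const userAgent = new apiRTC.UserAgent({
  uri: 'apzkey:myDemoApiKey'
});
```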
MediaDevices
The UserAgent's mediaDeviceChanged event can be listened to in order to get notified of the list of devices available to the browser:
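For instance (getUserMediaDevices is shown here as the usual accessor; check the UserAgent reference for the exact call):

```javascript
userAgent.on('mediaDeviceChanged', () => {
  // Retrieve the updated list of media devices
  const mediaDevices = userAgent.getUserMediaDevices();
  console.log('mediaDevices:', mediaDevices);
});
```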
Here is what the mediaDevices object looks like:
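An illustrative shape (actual ids and labels depend on the machine; the exact field names are assumptions):

```json
{
  "audioinput": {
    "b2fa76": { "id": "b2fa76", "type": "audioinput", "label": "Internal Microphone" }
  },
  "audiooutput": {
    "5e73a1": { "id": "5e73a1", "type": "audiooutput", "label": "Internal Speakers" }
  },
  "videoinput": {
    "c91d4e": { "id": "c91d4e", "type": "videoinput", "label": "FaceTime HD Camera" }
  }
}
```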
This is useful to propose a list of available media devices to the user.
Session
A Session instance represents a connection to the ApiRTC CPaaS. A Session is configured by an API key and a declared UserAgent.
A session handles all the interactions of participants, including video/audio streams and data exchanges, for one Enterprise identified by its API key.
A Session object is obtained through the UserAgent.register(options) method. Some options (of type RegisterInformation) control the authentication mechanisms.
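A minimal sketch, assuming the anonymous case with no extra options:

```javascript
userAgent.register()
  .then((session) => {
    console.log('Connected, session:', session);
  })
  .catch((error) => {
    console.error('Registration failed:', error);
  });
```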
Read the Authentication page for more details on how to authenticate.
See the Session reference page.
Contacts
Contacts are the participants in a Session. They can be authenticated or anonymous.
See the Contact reference page.
Data exchange
Data can be exchanged between Contacts using the Contact.sendData(object) method:
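For example (the payload object is free-form; sendData returns a Promise):

```javascript
contact.sendData({ message: 'Hello!' })
  .then(() => {
    console.log('Data sent');
  })
  .catch((error) => {
    console.error('Sending data failed:', error);
  });
```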
To receive the data, listen on the Session's contactData event:
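A sketch, assuming the event data exposes the sender Contact and the sent content (check the Session reference for the exact shape):

```javascript
session.on('contactData', (event) => {
  // event.sender: the Contact that sent the data (assumed field name)
  // event.content: the object passed to sendData (assumed field name)
  console.log('Received from', event.sender, ':', event.content);
});
```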
Presence group
Each Contact entering a Session can join presence groups, segmenting the connected users into subcategories.
For example, an employee can enter a Session and join the "Operator" and "Available" groups, while a customer joins the "Customer" group.
To make a user connect within some groups, set RegisterInformation.groups in the UserAgent.register(options) options, or use the Session.joinGroup method.
Joining a group as a participant activates the session's contactListUpdate event for this group. Alternatively, you can subscribe to a group's events without joining it, using the Session.subscribeToGroup method.
If the current participant does not subscribe to or join a group, they will not receive any events regarding that group's changes.
The data object associated to the Session.contactListUpdate event has joinedGroup and leftGroup properties that carry information on which Contact joined or left which group:
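A listener sketch (the exact shape of joinedGroup/leftGroup is an assumption: maps from group name to lists of Contacts):

```javascript
session.on('contactListUpdate', (updatedContacts) => {
  // Assumed shape: { groupName: [Contact, ...], ... }
  for (const group of Object.keys(updatedContacts.joinedGroup)) {
    console.log('Contacts joined', group, ':', updatedContacts.joinedGroup[group]);
  }
  for (const group of Object.keys(updatedContacts.leftGroup)) {
    console.log('Contacts left', group, ':', updatedContacts.leftGroup[group]);
  }
});
```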
UserData
UserData is a class holding a data object that stores values associated to a UserAgent. Make sure to call UserData.setProp(key, value) to set a property.
Once connected to a Session, call userData.setToSession() to notify the other connected peers of UserData property changes through the Session.contactListUpdate event.
For that purpose, the data object associated to the Session.contactListUpdate event has a userDataChanged property, which is an array of the Contacts whose UserData has changed.
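A sketch, assuming the UserData instance is obtained through userAgent.getUserData() (check the UserAgent reference for the exact accessor):

```javascript
// Set a property on the local user's UserData
const userData = userAgent.getUserData(); // assumed accessor
userData.setProp('nickname', 'Alice');
// Propagate the change to the other connected peers
userData.setToSession();

// On the other peers' side:
session.on('contactListUpdate', (updatedContacts) => {
  updatedContacts.userDataChanged.forEach((contact) => {
    console.log('UserData changed for contact:', contact.getId());
  });
});
```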
Conversation
A conversation is the way to gather participants and exchange media: text messages, audio/video streams, files...
Whenever there are 2 or more participants, a conversation can take place.
getOrCreateConversation
To get a Conversation instance, use the Session's getOrCreateConversation(name, options) method.
The name is of string type, without any constraint.
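For example:

```javascript
const conversation = session.getOrCreateConversation('myRoom');
```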
Options
Every participant must enable the moderationEnabled option to have consistent moderation applied throughout the Conversation.
Details on the Mesh Mode
The mesh mode enables peer-to-peer connections across participants, without going through a stream routing server (called an SFU, for Selective Forwarding Unit).
Mesh mode multiplies the streams sent by each participant. As upload bandwidth is often lower than download bandwidth, the network connection can become shaky as the number of participants grows.
If meshModeEnabled is true when setting the conversation mode, the stream exchanges will remain in P2P until:
the number of participants goes over 4,
or too much packet loss is detected for one participant.
The conversation will then automatically switch to star topology mode, using the ApiRTC SFU infrastructure.
Setting both meshModeEnabled and meshOnlyEnabled to true forces the conversation to remain mesh-only, whatever the connection events.
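A sketch of a mesh-only conversation (the option names are the ones documented above):

```javascript
const conversation = session.getOrCreateConversation('myRoom', {
  meshModeEnabled: true,  // start in P2P mesh
  meshOnlyEnabled: true   // never switch to the SFU star topology
});
```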
Join Conversation
Conversation.join() makes the local user enter the conversation. Note that this method returns a Promise, and one must wait for it to be fulfilled before doing anything else on the conversation.
A good practice is to register all required Conversation event listeners before calling the join method:
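For example, with the events documented further down:

```javascript
// Register listeners first...
conversation.on('streamListChanged', (streamInfo) => { /* see Remote Streams */ });
conversation.on('streamAdded', (stream) => { /* display the stream */ });
conversation.on('streamRemoved', (stream) => { /* remove the stream */ });

// ...then join
conversation.join()
  .then(() => {
    console.log('Joined the conversation');
  })
  .catch((error) => {
    console.error('Join failed:', error);
  });
```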
Leave Conversation
Conversation.leave() makes the local user leave the conversation. All ongoing streams are automatically closed.
A good practice is to destroy the Conversation after leaving it (except if you want to be able to join it again afterwards).
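A sketch (destroy() is assumed to be the cleanup call mentioned above; check the Conversation reference):

```javascript
conversation.leave().then(() => {
  // Release the Conversation instance (assumed cleanup method)
  conversation.destroy();
});
```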
Conversation Moderation
Moderation allows a group of users (moderators) to control the conversation's access to other participants.
Activation
To activate moderation for a conversation, every party (moderator or not) must explicitly set the moderationEnabled option to true when calling getOrCreateConversation.
Additionally, a moderator participant sets the moderator option to true as well when calling getOrCreateConversation.
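For example:

```javascript
// On every participant's side:
const conversation = session.getOrCreateConversation('myRoom', {
  moderationEnabled: true
});

// On the moderator's side:
const moderatedConversation = session.getOrCreateConversation('myRoom', {
  moderationEnabled: true,
  moderator: true
});
```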
Joining process
If the local user is a moderator, join() resolves immediately.
If the local user is not a moderator, join() only resolves once a moderator allows them in. In the meantime, the user is put in a waiting room.
Waiting Room
The waiting room is a presence group associated to the conversation. It allows identifying participants who are currently waiting for a moderation decision.
The contactJoinedWaitingRoom and contactLeftWaitingRoom events are triggered upon the arrival and departure of a user to/from the waiting room:
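A sketch, assuming the event data is the waiting Contact:

```javascript
conversation.on('contactJoinedWaitingRoom', (contact) => {
  // A contact is waiting for a moderation decision
  console.log('Waiting:', contact.getId());
});
conversation.on('contactLeftWaitingRoom', (contact) => {
  console.log('No longer waiting:', contact.getId());
});
```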
Then the moderator can allow or deny a contact to enter the conversation:
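A sketch, assuming allowEntry/denyEntry as the moderation methods (check the Conversation reference for the exact names):

```javascript
// Allow the contact into the conversation
conversation.allowEntry(contact); // assumed method name
// ...or reject them
conversation.denyEntry(contact);  // assumed method name
```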
Eject
Moderators have the ability to eject another participant from the conversation.
To get notified of a participant's ejection, listen on the participantEjected event. The event data object carries a self boolean, set to true if the current local user is the ejected participant.
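A sketch (eject() is assumed to take the Contact to remove; the participantEjected event and its self flag are documented above):

```javascript
// Moderator side: eject a participant (assumed method name)
conversation.eject(contact);

// Any participant: get notified of ejections
conversation.on('participantEjected', (event) => {
  if (event.self === true) {
    console.log('You have been ejected from the conversation');
  }
});
```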
Record the conversation
The ApiRTC platform allows recording a composite video of a conversation. The video is composed of all the streams exchanged in the conversation and is stored in ApiRTC's database.
To start recording a conversation:
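For example:

```javascript
conversation.startRecording()
  .then((recordingInfo) => {
    console.log('Recording started:', recordingInfo);
  })
  .catch((error) => {
    console.error('startRecording failed:', error);
  });
```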
Refer to Conversation.startRecording(options) for details on the available options.
Example of recordingInfo (RecordingInfo) data:
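An illustrative shape (mediaURL is documented below; the other field names are assumptions):

```json
{
  "mediaURL": "https://.../recording.mp4",
  "recordedFileName": "recording.mp4",
  "roomName": "myRoom"
}
```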
To stop recording a conversation:
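For example:

```javascript
conversation.stopRecording();
```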
Once the recording is stopped, the ApiRTC platform will process it and make it available for download. To get notified when a record is available, listen to the recordAvailable event of the Conversation instance:
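For example:

```javascript
conversation.on('recordAvailable', (recordingInfo) => {
  // Download (or display) the composite video
  console.log('Record available at:', recordingInfo.mediaURL);
});
```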
When the video is available, you can use the RecordingInfo.mediaURL to download it.
Speaker detection
To display which participant is currently talking in a Conversation, enable the feature:
Then, listen on the audioAmplitude event:
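For example:

```javascript
conversation.on('audioAmplitude', (amplitudeInfo) => {
  console.log('amplitude info:', amplitudeInfo);
});
```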
The event data object (amplitudeInfo) holds the following information:
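An illustrative shape (amplitude and isSpeaking are documented below; streamId is an assumed field):

```json
{
  "streamId": "6515583",
  "amplitude": 102.36,
  "isSpeaking": true
}
```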
When the participant speaks and the amplitude goes over the threshold configured when enabling the feature, an event with isSpeaking set to true is fired.
When the participant stops speaking, the event is fired again with the initial amplitude value that triggered the event, but this time with isSpeaking set to false.
QoS statistics
The Conversation event callStatsUpdate provides statistics on the quality of service of media stream exchanges.
Depending on whether the stream is sent or received, the event data object (callStats) holds the following information:
For a local stream, QoS information on the sent media
For a remote stream, QoS information on the received media
The callStats.streamId is useful to associate the data with the corresponding stream.
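A listener sketch (the exact set of QoS fields is described in the reference):

```javascript
conversation.on('callStatsUpdate', (callStats) => {
  // Use callStats.streamId to find the corresponding stream
  console.log('QoS stats for stream', callStats.streamId, ':', callStats);
});
```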
See the Stream reference page for more information.
Stream
Local Streams
Camera
Acquiring a camera local stream is done through the UserAgent.createStream(options) method. The browser asks the user to choose among the available media devices. The Promise resolves with a Stream instance.
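For example:

```javascript
userAgent.createStream({
  constraints: { audio: true, video: true }
})
  .then((stream) => {
    console.log('Local stream created:', stream);
  })
  .catch((error) => {
    console.error('createStream failed:', error);
  });
```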
All the possible options for the createStream method can be found on the CreateStreamOptions reference page.
The constraints option is of type MediaStreamConstraints. See the Stream constraints section for more info.
Screen sharing
Acquiring a screen sharing local stream is done through a Stream static method:
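A sketch, assuming the Stream.createDisplayMediaStream static method (check the Stream reference for the exact signature):

```javascript
apiRTC.Stream.createDisplayMediaStream({}, false)
  .then((screenStream) => {
    console.log('Screen sharing stream:', screenStream);
  })
  .catch((error) => {
    console.error('Screen sharing failed:', error);
  });
```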
Publish/Unpublish
Publishing a local stream makes it available for remote peer participants to subscribe and view it.
The local user (UserAgent) must have joined the conversation before publishing a stream.
Conversation.publish(localStream, options) can optionally take a PublishOptions object as second parameter to control the publication. Please check the reference for details on PublishOptions.
Unpublishing a local stream makes it unavailable for remote peer participants to subscribe to, and stops sending the media stream to peers.
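For example:

```javascript
// After conversation.join() has resolved:
conversation.publish(localStream)
  .then((publishedStream) => {
    console.log('Stream published:', publishedStream);
  });

// Later, stop making it available:
conversation.unpublish(localStream);
```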
Remote Streams
Handle remote streams availability
ApiRTC triggers the Conversation.streamListChanged event when stream availability changes.
This event is triggered:
once for each existing stream, when the participant joins the Conversation
every time a new stream is published to or unpublished from the Conversation
The data object carried by the Conversation.streamListChanged event is a StreamInfo: this is not a Stream yet.
The streamInfo.contact.getId() and streamInfo.streamId can be useful to identify which remote peer published the stream.
Subscribe to a remote stream
A remote stream is subscribed to using the Conversation.subscribeToStream(streamId) method. It takes the id of the stream provided in the StreamInfo data object:
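A typical pattern (listEventType and isRemote follow the usual StreamInfo fields seen in ApiRTC tutorials; check the reference for the exact shape):

```javascript
conversation.on('streamListChanged', (streamInfo) => {
  console.log('Published by contact:', streamInfo.contact.getId());
  if (streamInfo.listEventType === 'added' && streamInfo.isRemote === true) {
    conversation.subscribeToStream(streamInfo.streamId);
  }
});
```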
Be mindful that without subscribing to a stream's events, you will not be notified of stream updates and termination.
Conversation.subscribeToStream(streamId, options) can optionally take SubscribeOptions as a second parameter to control the subscription.
Unsubscribe from a remote stream
Unsubscribing from a remote stream is done with the Conversation.unsubscribeToStream(streamId) method.
Manage media streams
Once a stream has been subscribed to, ApiRTC notifies the application with an actual Stream instance through the streamAdded event.
This event is triggered every time the actual media stream becomes available to be displayed.
Whenever a media stream becomes unavailable, ApiRTC notifies the conversation with a streamRemoved event.
A media stream may encounter technical issues, or be subject to network optimizations requiring a change of the actual Stream instance. In this case the streamRemoved event is also fired, prior to another streamAdded event carrying the new instance.
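A typical display pattern, using the helpers described below (the addInDiv/removeFromDiv parameters follow the usual tutorial usage; check the Stream reference):

```javascript
conversation.on('streamAdded', (stream) => {
  // Add a <video> element inside the existing <div id="remote-container">
  stream.addInDiv('remote-container', 'remote-media-' + stream.streamId, {}, false);
});
conversation.on('streamRemoved', (stream) => {
  stream.removeFromDiv('remote-container', 'remote-media-' + stream.streamId);
});
```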
Stream display
In order to display or remove media element in DOM, you can use our helpers:
Stream.addInDiv()
andStream.removeFromDiv()
to add/remove a<video>
element within an existing<div>
Stream.attachToElement(domElement)
to directly attach to a<video>
element.
Our helpers handle some device specificities and can help avoid media playback issues (for instance with Safari on iOS).
Audio/Video Mute
To control local or remote stream audio/video mute, use the following Stream methods:
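A sketch, assuming the usual mute helpers (the method names are assumptions; check the Stream reference):

```javascript
// Assumed helper names:
stream.muteAudio();
stream.unmuteAudio();
stream.muteVideo();
stream.unmuteVideo();
```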
An evolution was made in apiRTC version 5.0.1 to reflect the standard: the mute state is managed by the enabled/disabled attribute at the application level.
Stream constraints
Constraints are the camera properties that can be set: resolution, brightness, contrast, frameRate, saturation, torch, zoom.
Capabilities are the supported properties and their value ranges. Settings are the current property values.
ApiRTC allows accessing constraints, capabilities and settings on both local and remote streams, using the same methods. This means you can easily control both local and remote devices.
The Stream.applyConstraints(constraints) method returns a Promise, resolved when all the constraints are applied:
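For example:

```javascript
stream.applyConstraints({ video: { frameRate: 15 } })
  .then(() => {
    console.log('Constraints applied');
  })
  .catch((error) => {
    console.error('applyConstraints failed:', error);
  });
```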
Note that the constraints parameter is of type MediaStreamConstraints.
Stream.getConstraints() returns a Promise with all the properties that were modified and their current values:
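For example:

```javascript
stream.getConstraints().then((constraints) => {
  console.log('Modified properties:', constraints);
});
```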
Constraint values depend on the device capabilities. For example, on smartphones with multiple back cameras, the torch property is sometimes only attached to one of the cameras.
In addition, the supported properties can be queried using Stream.getCapabilities(), which returns a Promise with the accepted value ranges:
Example of a capabilities data object:
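An illustrative shape (the frameRate range matches the example discussed below; the other properties are assumptions and depend on the device):

```json
{
  "audio": {},
  "video": {
    "frameRate": { "min": 0, "max": 30 },
    "width": { "min": 1, "max": 1280 },
    "height": { "min": 1, "max": 720 }
  }
}
```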
In this example, the video.frameRate property may be set between 0 and 30.
getCapabilities() may not work with all browsers. Also, the returned capabilities may differ from one device to another.
Finally, the property values can be checked with Stream.getSettings(), which returns all the current settings in a Promise:
Example of a settings data object:
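An illustrative shape (note the absence of zoom, discussed below; width/height are assumptions):

```json
{
  "video": {
    "frameRate": 30,
    "width": 1280,
    "height": 720
  }
}
```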
In this example, video.frameRate is a supported property and its actual value is 30.
video.zoom is not a supported property for this combination of device/camera/browser, as it is not present in the returned object.
Stream Transformation
Audio filters: noiseReduction - applyAudioProcessor()
The noise reduction feature is available as of apiRTC 5.0.0.
ApiRTC allows creating a stream with a noise reduction filter.
Check the noise reduction tutorial.
The applyAudioProcessor helper manages the different stream states for you (i.e. switching from noiseReduction back to normal mode).
To start the noise reduction process on a Stream:
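A sketch, assuming 'noiseReduction' as the processor type (check the Stream reference for the accepted values):

```javascript
stream.applyAudioProcessor('noiseReduction')
  .then((streamWithEffect) => {
    // Use streamWithEffect instead of the base stream
    console.log('Noise reduction applied');
  })
  .catch((error) => {
    console.error('applyAudioProcessor failed:', error);
  });
```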
This method returns a streamWithEffect Stream object; it is an encapsulation of the base stream with a noise reduction filter applied to it.
This means that the base stream and the streamWithEffect stream are still linked:
If the base stream audio is muted, the streamWithEffect stream audio will be muted too.
If the base stream is released, the streamWithEffect stream will be released too.
Both streams need to be handled by the application while the noise reduction process is going on.
To stop the noise reduction process:
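A sketch, assuming 'none' restores the unprocessed audio:

```javascript
stream.applyAudioProcessor('none').then((restoredStream) => {
  console.log('Noise reduction stopped');
});
```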
If an error occurs during the applyAudioProcessor() process, apiRTC will reject the promise but will try to restore the stream with the previous effect.
The error description is available in the ApiRTC JS Library Reference.
Additionally, ApiRTC gives you access to the Stream.startNoiseReduction and Stream.stopNoiseReduction methods.
Background subtraction: blur, background image - applyVideoProcessor()
ApiRTC allows creating a background-blurred stream, or adding a background image, based on an original stream.
Have you checked the blur application tutorial?
The applyVideoProcessor helper manages the different stream states (i.e. switching from blur to background image...).
To start the blur process on a stream:
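A sketch, assuming 'blur' as the processor type (check the Stream reference for the accepted values):

```javascript
stream.applyVideoProcessor('blur')
  .then((streamWithEffect) => {
    // Use streamWithEffect instead of the base stream
    console.log('Blur applied');
  })
  .catch((error) => {
    console.error('applyVideoProcessor failed:', error);
  });
```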
This method returns a streamWithEffect Stream object; it is an encapsulation of the base stream with a blur filter applied to it.
This means that the base stream and the streamWithEffect stream are still linked:
If the base stream audio is muted, the streamWithEffect stream audio will be muted too.
If the base stream is released, the streamWithEffect stream will be released too.
Both streams need to be handled by the application while the background subtraction process is going on.
Use the stream with effect as a local stream:
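For example, publish it to the conversation:

```javascript
conversation.publish(streamWithEffect);
```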
To stop the blur process:
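A sketch, assuming 'none' restores the original video:

```javascript
stream.applyVideoProcessor('none').then((restoredStream) => {
  console.log('Blur stopped');
});
```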
Additionally, ApiRTC gives you access to Stream.blur(), Stream.unblur(), Stream.backgroundImage() and Stream.unBackgroundImage().
Whiteboard
The whiteboard component enables participants to interact together with:
lines (pen)
shapes (arrow, rectangle or ellipse)
texts
and also an eraser (eraser)
Line weights and colors, as well as text size, are configurable. Undo & redo functions are available (whiteboardClient.undo and whiteboardClient.redo). The whiteboard area can be zoomed in and out (whiteboardClient.setScale) and shifted around (whiteboardClient.setOffset). The whiteboard can be erased at once with the whiteboardClient.deleteHistory function.
Adding a whiteboard to a web page takes a few lines:
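A hedged sketch: the calls marked as assumed are illustrative placeholders, while the functions in parentheses above are the documented ones (see the whiteboard reference for the exact setup):

```javascript
// Obtain the whiteboard client from the session (assumed accessor)
const whiteboardClient = session.getWhiteboardClient();

// Bind the whiteboard to an existing container element (assumed helper)
whiteboardClient.startNewWhiteboard('whiteboard-container');

// Documented controls:
whiteboardClient.setScale(1.5);             // zoom
whiteboardClient.setOffset({ x: 0, y: 0 }); // shift (argument shape assumed)
whiteboardClient.undo();
whiteboardClient.redo();
whiteboardClient.deleteHistory();           // erase everything at once
```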
See the whiteboard in action in the following GitHub repos: