Jibri is the recording service for jitsi-meet; it can also stream a conference to an RTMP endpoint. It is a server-side recording component that loads the meeting in a Chrome instance rendered in a virtual framebuffer, then captures and encodes the output with ffmpeg. Jibri's main limitation is that one instance can only record a single
meeting at a time, and it needs to run on a separate VM. Hence, to scale the service we need a running instance for each meeting that is being recorded or streamed. This article discusses how to achieve this using AWS autoscaling groups. If you intend to go for on-premise servers you would need
to create a cluster using Kubernetes, which we will discuss in another post.
AWS EC2 - All the servers will be c5.xlarge EC2 instances (chosen for their CPU and network capabilities)
AWS EC2 Autoscaling groups - These groups handle the autoscaling of the servers. We can set scale-up and scale-down policies, create launch configurations, and adjust parameters such as the maximum and minimum capacities of the group.
We will explain more in a bit.
AWS S3 Buckets - S3 buckets will be used to store the recordings
AWS Cloudwatch - Cloudwatch monitors the servers and fires an alarm when the defined thresholds are reached.
- Install Jitsi-Meet on an EC2 server
- Install Jibri on another EC2 server and set up a cron job to push the CloudWatch metric described in the next step. Add a second cron job that checks the status of the Jibri service:
if the service is busy, enable scale-in protection on the instance; if it is not busy, remove the protection. Protection ensures that the server won't be shut down when the autoscaling group requests it to.
Also set up a post-recording (finalize) script to move the recording from the server to a preferred storage location such as AWS S3
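The status-check cron job above can be sketched as follows. The group name, script name, and metric details are our own placeholder choices; the script assumes Jibri's local HTTP health endpoint on port 2222, so adjust to match your deployment:

```shell
#!/bin/bash
# jibri-protect.sh - run from cron every minute (hypothetical name).
# Toggles scale-in protection on this instance based on Jibri's status.

ASG_NAME="jibri-asg"   # assumed autoscaling group name

# Returns success (0) when the health JSON reports a BUSY Jibri
is_busy() {
  echo "$1" | grep -q '"busyStatus":"BUSY"'
}

toggle_protection() {
  local instance_id health flag
  # This instance's ID, from the EC2 instance metadata service
  instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  # Jibri's health endpoint (assumed default port/path)
  health=$(curl -s http://localhost:2222/jibri/api/v1.0/health)
  if is_busy "$health"; then
    flag="--protected-from-scale-in"      # recording: keep this instance alive
  else
    flag="--no-protected-from-scale-in"   # idle: allow scale-in to reclaim it
  fi
  aws autoscaling set-instance-protection \
    --instance-ids "$instance_id" \
    --auto-scaling-group-name "$ASG_NAME" \
    "$flag"
}

# The finalize step (pointed to by Jibri's config) can then push the
# recording to S3; the bucket name here is hypothetical:
#   aws s3 cp /srv/recordings s3://my-jibri-recordings/ --recursive

# Only touch AWS when invoked with "run", so the helpers can be tested safely
if [ "${1:-}" = "run" ]; then toggle_protection; fi
```

Scale-in protection is a native autoscaling group feature, so no extra bookkeeping is needed on the group side: protected instances are simply skipped during scale-in.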
- Create a custom metric in CloudWatch using the AWS CLI. Each server reports 0 to the metric when its Jibri is busy and 1 when it is free
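The metric-publishing cron job can be sketched like this; the namespace, metric name, and region are our own placeholder choices:

```shell
#!/bin/bash
# jibri-metric.sh - run from cron every minute (hypothetical name).
# Publishes 1 to a custom CloudWatch metric when this Jibri is free, 0 when busy.

# Map Jibri's health JSON to the metric value: 1 = free (IDLE), 0 = busy
metric_value() {
  if echo "$1" | grep -q '"busyStatus":"IDLE"'; then echo 1; else echo 0; fi
}

publish() {
  local health
  # Jibri's health endpoint (assumed default port/path)
  health=$(curl -s http://localhost:2222/jibri/api/v1.0/health)
  aws cloudwatch put-metric-data \
    --namespace "Jibri" \
    --metric-name "FreeServers" \
    --value "$(metric_value "$health")" \
    --region us-east-1
}

# Only call AWS when invoked with "run", so the helper can be tested safely
if [ "${1:-}" = "run" ]; then publish; fi
```

CloudWatch creates the custom metric automatically on the first `put-metric-data` call, so no separate creation step is needed.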
- Create an AMI from the Jibri server
- Create a launch configuration in EC2 - this configuration is used to spin up new servers when scaling up.
For the configuration use the AMI created in the previous step.
Choose the instance type as per your requirements (we recommend c5.xlarge or above).
Fill in the other settings accordingly.
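The same launch configuration can be created from the AWS CLI; every name and ID below is a placeholder to substitute with your own:

```shell
# Launch configuration for new Jibri servers (all values hypothetical)
aws autoscaling create-launch-configuration \
  --launch-configuration-name jibri-launch-config \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.xlarge \
  --key-name my-keypair \
  --security-groups sg-0123456789abcdef0
```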
- Create an autoscaling group
- Choose the above launch configuration
- Choose the VPC and subnets in which you are going to set this up
- Select the group size keeping in mind your traffic conditions. For example, if you do not wish to have more than 10 servers in the group, set the “Maximum Capacity” to 10.
- Skip the scaling policy for now
- Create a notification for all event types
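The group-creation steps above can also be done from the AWS CLI. A sketch, with the group name, subnets, and sizes as placeholder assumptions:

```shell
# Autoscaling group for Jibri servers (all values hypothetical)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name jibri-asg \
  --launch-configuration-name jibri-launch-config \
  --min-size 2 \
  --max-size 10 \
  --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```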
- Configure CloudWatch to sum the free-server metric across all servers over a one-minute period. Then configure it to fire an alarm when the summed metric is above a desired value (i.e. the maximum number of free servers)
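As a sketch, the scale-down alarm described above could look like the following (a mirrored alarm with `LessThanThreshold` handles scale-up); the names, threshold, and policy ARN are our assumptions:

```shell
# Fire when more than 3 Jibri servers have been free over the last minute
# (values hypothetical; the policy ARN comes from put-scaling-policy)
aws cloudwatch put-metric-alarm \
  --alarm-name jibri-scale-down \
  --namespace "Jibri" \
  --metric-name "FreeServers" \
  --statistic Sum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 3 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "$SCALE_DOWN_POLICY_ARN"
```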
- The autoscaling group captures the alarm and scales the system up or down
- The CloudWatch metric is updated every minute with the number of servers that are free
- CloudWatch triggers an alarm when the metric is below a certain number (i.e. the system needs to scale up) or above a certain number (i.e. the system needs to scale down), which is then captured by the autoscaling group
- On a scale-up alarm, the autoscaling group spins up new servers using the launch configuration
- On a scale-down alarm, the autoscaling group tries to shut down unprotected servers. (Servers with recordings in progress remain protected because of the status-check script)
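The alarms are wired to the group through scaling policies: each `put-scaling-policy` call returns a PolicyARN that is passed to the corresponding alarm's `--alarm-actions`. A minimal sketch with simple scaling policies (names and adjustments are our assumptions):

```shell
# Add two instances on a scale-up alarm (values hypothetical)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name jibri-asg \
  --policy-name jibri-scale-up \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 2

# Remove one instance on a scale-down alarm; protected (busy) instances
# are skipped by the group
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name jibri-asg \
  --policy-name jibri-scale-down \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment=-1
```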
This system is cost effective because it shuts down servers when they are not needed, but on the other hand, since it takes time to spin up new servers, it won't be able to handle an instant surge in
traffic. We discussed how Jibri autoscaling can be handled in AWS. The same can be achieved with Kubernetes as well, which we will discuss in another post.
If you want to set up a scaling system of Jibri servers, please contact us through email@example.com. We provide WebRTC development services.