Deploy Django Channels + Websockets on AWS Elastic Beanstalk using Gunicorn, Supervisor & Redis Elasticache

Implementing this feature proved nothing less than a monumental task. From sifting through decade-old AWS documentation (seriously, it’s 2021, Amazon, please update your docs) to piecing together information from endless StackOverflow posts, getting here was a grind. In this article, I highlight the challenges I faced and the solutions I came up with to get a fully asynchronous chat module and real-time data feed set up as part of a project I’m currently working on, using a Django-React stack deployed on AWS.

Before we dig in, let’s briefly touch on the technologies involved:

Django Channels: Django Channels brings asynchronous support to Django through the Python ASGI specification. Simply put, it lets you extend Django beyond HTTP to handle other protocols such as WebSockets and HTTP long-polling.

WebSocket Protocol (WS): The WebSocket protocol is a communication protocol that, like HTTP, runs over TCP at the application layer. Unlike HTTP’s request/response model, it allows bi-directional communication between client and server. This makes WebSockets pretty cool: instead of repeatedly querying the server for a response as you would with REST, a WebSocket simply “hooks” into your server and keeps a pathway open for data transfer in both directions (ideal for apps that need to be real-time).

AWS Elastic Beanstalk: AWS EB is essentially a wrapper around multiple other AWS services (EC2, Elastic Load Balancing, Route 53, etc.) that lets you easily deploy your web application to the cloud.

Gunicorn: For my application, I’m using Gunicorn, a Python WSGI HTTP server for UNIX, to serve regular HTTP traffic. The setup described here is therefore specific to Gunicorn.

Supervisor: Supervisor is a process control system that lets you monitor and control UNIX processes. This matters in our case because we need to run our Daphne server as a separate process so that it can handle our WebSocket connections seamlessly.

For the application that I am currently working on, I wanted to set up two components:

  • a real-time chat application (whereby users currently logged into the platform can talk to other users)
  • a real-time data feed that continually receives data from the server and pushes it to connected clients (the key point being that the client simply “gets” the data without having to ask for it, as it would in a REST request/response cycle; this is exactly what WebSockets enable).
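For context, the chat half of this boils down to a Channels consumer that joins a per-room group and re-broadcasts incoming messages to everyone in that group. Here is a minimal sketch; the class, app and group names are my own placeholders, not the actual project code:

```python
# chat/consumers.py - minimal sketch of a group-broadcast chat consumer
import json

from channels.generic.websocket import AsyncWebsocketConsumer


class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.room = self.scope["url_route"]["kwargs"]["room"]
        self.group = f"chat_{self.room}"
        # Join this room's group on the channel layer (Redis in production)
        await self.channel_layer.group_add(self.group, self.channel_name)
        await self.accept()

    async def disconnect(self, code):
        await self.channel_layer.group_discard(self.group, self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        # Fan the message out to everyone in the room
        await self.channel_layer.group_send(
            self.group,
            {"type": "chat.message", "message": json.loads(text_data)["message"]},
        )

    async def chat_message(self, event):
        # Handler for "chat.message" events delivered via the group
        await self.send(text_data=json.dumps({"message": event["message"]}))
```

The data feed works the same way, except the server-side code (rather than a client) calls `group_send` whenever new data arrives.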

After meddling with this for a day or so, I was able to implement the above in a local environment. However, things got really confusing once I started the migration to AWS. Here are a few things I had to learn the hard way:

  • Almost everything you will find in the AWS docs about EB right now is written for Amazon Linux 1 machines (for which standard support ended on 31 Dec 2020). This is very misleading and makes much of the AWS documentation useless, since the newer Amazon Linux 2 machines have some fundamental differences (such as the application paths you need to configure).
  • As this was my first time using AWS, I decided to stick to the default recommended Elastic Beanstalk setup (in hindsight, I wish I had done some prior research). This proved costly, as I later learned that the Classic Load Balancer that comes with the default setup does not support WebSockets. Instead, I had to migrate to an Application Load Balancer, which does. This is important to keep in mind.
  • In a local environment, Django Channels uses an in-memory channel layer. However, this is not feasible once your application is live: each process gets its own in-memory layer, so messages would never cross processes or instances. Instead, we need a shared channel layer that will also scale. Provisioning a Redis cluster using AWS Elasticache is the way to go in this regard.
  • We will use Gunicorn as our WSGI web server, which will take care of all HTTP/S requests. But what about WebSocket requests? That’s where Daphne comes in. Daphne is a production-ready ASGI server that complements Django Channels. Setting up Daphne on AWS was a real pain though, as it also requires Supervisor (i.e. the supervisor daemon, a.k.a. supervisord), which was itself a pain to set up on AWS’s latest Amazon Linux 2 machines.

But have no fear! I am here to share my knowledge with you all so that you may implement your features and close that ticket in no time!

1) Install Python Packages via requirements.txt


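As a sketch, the packages this setup relies on are the ones discussed above (this exact list is my assumption based on the tools used in this guide; pin versions as appropriate for your project):

```
channels
channels-redis
daphne
gunicorn
supervisor
```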
I’m assuming you already have all the other packages your project needs (including the AWS CLI client on your machine). Include the above in your requirements.txt, as the EC2 instances will need to install these during the deployment phase to successfully get everything up and running.

2) AWS Load Balancer Configuration + Procfile

AWS EB Load Balancer Set up

The above configuration simply says the following:

If an incoming request is received over the HTTP/HTTPS protocol on port 80/443, direct it to the default web server process.

If an incoming request uses the WebSocket protocol (or has a path with /ws/ in it) on port 80/443, forward it to port 5000, which is where we will configure our Daphne server to handle it.
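To make the rule concrete, here is a toy sketch (plain Python, not AWS code) of the decision the listener rules above encode, using this article’s port numbers:

```python
# Toy model of the ALB listener rules: /ws/ paths go to the Daphne
# process on port 5000, everything else to Gunicorn on port 8000.
def target_port(path: str) -> int:
    if path.startswith("/ws/"):
        return 5000  # websocket process (Daphne)
    return 8000      # default web process (Gunicorn)

print(target_port("/ws/chat/room1/"))  # 5000
print(target_port("/api/messages/"))   # 8000
```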

Since WebSocket connections are long-lived, remember to set the ‘Stickiness’ setting under ‘Processes’ to ‘enabled’ in the ‘Load Balancer’ section of the EB Configuration tab.
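If you prefer keeping this in code rather than clicking through the console, the same process settings can be expressed in an `.ebextensions` config file. This is a sketch; the process name `websocket` and the values shown are assumptions matching the setup described above:

```yaml
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    Port: '8000'
    Protocol: HTTP
  aws:elasticbeanstalk:environment:process:websocket:
    Port: '5000'
    Protocol: HTTP
    StickinessEnabled: 'true'
    StickinessLBCookieDuration: '86400'
```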

Next up, we will configure our process commands. We need the following commands to fire up our Daphne (ASGI) and Gunicorn (WSGI) servers. Add the following lines to your Procfile (put this file in the root directory of your Django application). This ensures that our application runs under servers of our own choosing rather than EB’s defaults.

```
web: gunicorn --bind :8000 --workers 3 --threads 2 <project_name>.wsgi:application
websocket: daphne -b :: -p 5000 <project_name>.asgi:application
```

Remember to replace <project_name> with your actual project name! If you’ve done this correctly, then you should see something like “Successfully deployed application to instances using commands found in Procfile…” pop up in your events tab when you deploy your application.
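The `<project_name>.asgi:application` entry in the Procfile assumes an ASGI module that routes HTTP and WebSocket traffic to the right handlers. As a sketch (the `chat` app, consumer name and URL pattern here are my own placeholders; adapt them to your project):

```python
# <project_name>/asgi.py - sketch; replace <project_name> and the chat app names
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<project_name>.settings")

# Initialise Django's HTTP handling before importing anything that touches models
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack               # noqa: E402
from channels.routing import ProtocolTypeRouter, URLRouter  # noqa: E402
from django.urls import path                                # noqa: E402

from chat.consumers import ChatConsumer  # noqa: E402  (hypothetical app module)

application = ProtocolTypeRouter({
    "http": django_asgi_app,              # regular requests
    "websocket": AuthMiddlewareStack(     # /ws/ traffic, served by Daphne
        URLRouter([
            path("ws/chat/<str:room>/", ChatConsumer.as_asgi()),
        ])
    ),
})
```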

3) Configure AWS EB Environment + Load Balancer Security Groups To Handle Traffic

Load Balancer Security Groups

Inbound Rules for Load Balancer
Outbound Rules For Load Balancer

My Load Balancer accepts traffic from everywhere on all ports and directs them to my EB security group.

Elastic Beanstalk Security Groups

Inbound Rules For EB
Outbound Rules For EB

My EB application is configured to accept incoming traffic from my Load Balancer on ports 80, 6379 (the Redis port) and 5000.

The above security group configs ensure that only traffic picked up by the load balancer reaches the EB instances. Furthermore, we open up the Redis port so that our instances can talk to the Elasticache cluster backing our channel layer.

4) Provision Redis Cluster As In-Memory Cache

Once the setup is complete, grab the host name from the ‘Primary Endpoint’ attribute, head on over to your Django settings file (settings.py) and make the following changes:

```python
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            "hosts": [("<primary_endpoint_url>", 6379)],
        },
    },
}
```
Remember to replace <primary_endpoint_url> with the correct value! This directs your application to use the Redis cluster as its channel layer which, unlike the default in-memory implementation, is shared across all your processes and instances.

5) Install, Setup & Configure Supervisor

Now that our overall environment configuration is set up, we simply need to write a “hook” script that initialises the supervisor daemon to spin up our Daphne process on our EC2 instances. Ideally, you’d want to do this on a single EC2 instance via the leader_only: true attribute. However, you can have it run on all your instances if you wish.

Simply add the following script under .ebextensions and redeploy your EB application. I found this configuration in a StackOverflow post. Big up to our fellow developer zeros-and-ones for posting it; all credit for this goes to him!

After supervisor is installed via requirements, this start-up script helps turn the daemon service on. Furthermore, it also ensures that every time the machine reboots (for example, during deployment), the service restarts. The sole purpose of supervisor in our case is to use it to run our Daphne server (which also gets installed via requirements).

```yaml
files:
  /usr/local/etc/supervisord.conf:
    mode: "000755"
    owner: root
    group: root
    content: |
      [unix_http_server]
      file=/tmp/supervisor.sock   ; (the path to the socket file)

      [supervisord]
      logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
      logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
      logfile_backups=10           ; (num of main logfile rotation backups;default 10)
      loglevel=info                ; (log level;default info; others: debug,warn,trace)
      pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
      nodaemon=false               ; (start in foreground if true;default false)
      minfds=1024                  ; (min. avail startup file descriptors;default 1024)
      minprocs=200                 ; (min. avail process descriptors;default 200)

      [rpcinterface:supervisor]
      supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

      [supervisorctl]
      serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket

      [include]
      files = /usr/local/etc/*.conf

      [inet_http_server]
      port = 127.0.0.1:9001

  /etc/init.d/supervisord:
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Get into root mode
      sudo su

      # Source function library
      . /etc/rc.d/init.d/functions

      # Source system settings
      if [ -f /etc/sysconfig/supervisord ]; then
          . /etc/sysconfig/supervisord
      fi

      # Path to the supervisorctl script, server binary,
      # and short-form for messages.
      supervisorctl=/usr/local/bin/supervisorctl
      supervisord=${SUPERVISORD-/usr/local/bin/supervisord}
      prog=supervisord
      pidfile=${PIDFILE-/tmp/supervisord.pid}
      lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
      STOP_TIMEOUT=${STOP_TIMEOUT-60}
      OPTIONS="${OPTIONS--c /usr/local/etc/supervisord.conf}"
      RETVAL=0

      start() {
          echo -n $"Starting $prog: "
          daemon --pidfile=${pidfile} $supervisord $OPTIONS
          RETVAL=$?
          echo
          if [ $RETVAL -eq 0 ]; then
              touch ${lockfile}
              $supervisorctl $OPTIONS status
          fi
          return $RETVAL
      }

      stop() {
          echo -n $"Stopping $prog: "
          killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
      }

      reload() {
          echo -n $"Reloading $prog: "
          LSB=1 killproc -p $pidfile $supervisord -HUP
          RETVAL=$?
          echo
          if [ $RETVAL -eq 7 ]; then
              failure $"$prog reload"
          else
              $supervisorctl $OPTIONS status
          fi
      }

      restart() {
          stop
          start
      }

      case "$1" in
          start)
              start
              ;;
          stop)
              stop
              ;;
          status)
              status -p ${pidfile} $supervisord
              RETVAL=$?
              [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
              ;;
          restart)
              restart
              ;;
          condrestart|try-restart)
              if status -p ${pidfile} $supervisord >&/dev/null; then
                  stop
                  start
              fi
              ;;
          force-reload|reload)
              reload
              ;;
          *)
              echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
              RETVAL=2
      esac
      exit $RETVAL

commands:
  01_start_supervisor:
    command: '/etc/init.d/supervisord restart'
    leader_only: true
```
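One piece the config above relies on but doesn’t show is the Supervisor program entry for Daphne itself: the `[include]` section picks up any `/usr/local/etc/*.conf` file. A sketch of such a file (the file name, working directory, project name and port here are my assumptions, chosen to match the rest of this setup) could look like:

```ini
; /usr/local/etc/daphne.conf - picked up via the [include] section above
[program:daphne]
command=daphne -b :: -p 5000 <project_name>.asgi:application
directory=/var/app/current
autostart=true
autorestart=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
```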

And that’s about it! Assuming your backend and frontend WebSocket code is functioning correctly (both Django Channels and a frontend WebSocket library like W3CWebSocket), your application should be handling both HTTPS and WSS requests correctly after deployment. I hope this helps whoever needs it! Good luck, and feel free to share your comments! Happy coding!

Just writing about stuff I got stuck with, so that you guys don’t have to struggle with the same things.
