Introduction
In my ongoing effort to learn DevOps tools and practices, today I'd like to share my experience setting up scaling for the FastAPI back end of my notes app, using Docker Compose along with Nginx, the ubiquitous web server, as a load balancer.
Setup
To get started, I made a new git branch of my project's repo. This wasn't strictly necessary, since this is a relatively simple project that I work on alone, but I wanted to get more practice with an important part of the git ecosystem. To make the new branch and immediately check it out, I ran git checkout -b scaling.
I then confirmed that I could still build and run my existing Dockerfile locally, noting that the uvicorn web server was listening on port 8000 inside the container.
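For reference, that check looks roughly like this (a sketch; the image tag notes-api is just a placeholder name I'm using here):

# Build the image from the existing Dockerfile and run it, publishing uvicorn's port.
docker build -t notes-api .
docker run --rm -p 8000:8000 notes-api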
Docker Compose
I created a new file, docker-compose.yaml, and added the following:
name: notes
services:
  api:
    build: .
    command: sh -c "uvicorn main:app --port=8000 --host=0.0.0.0"
    ports:
      - "8000:8000"
    volumes:
      - ./keys.env:/app/keys.env
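Before running anything, docker compose config is a handy sanity check; it validates the file and prints the fully resolved configuration:

# Validate and print the resolved Compose configuration (run from the project directory).
docker compose config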
Then, I tested it out by running docker compose up, which worked.
Now, I could scale up the number of containers easily by adding the --scale flag, along with the name of the service and the number of containers to spin up: docker compose up --scale api=3. This didn't work.
This was expected, though, because I knew you cannot bind multiple services (containers in this case) to a single host port. To make this work, I removed the static port mapping from docker-compose.yaml like so:
ports:
  - "8000"
Now, Docker would map a random open host port to each container, which it did. I also added -d to detach from the console output, and I ran docker ps to show the next hurdle to beat: each container is now on a separate random port that you can visit individually in a browser, but there isn't a single address that points to them all at once.
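Put together, the commands look like this (the --format string is just one way to trim the docker ps output):

# Start three api containers in the background; Docker picks a free host port for each.
docker compose up --scale api=3 -d

# Show each container's host-to-container port mappings.
docker ps --format "table {{.Names}}\t{{.Ports}}"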
We need to put a load balancer in front of them.
Nginx
The cool thing about using Docker is that we can simply add Nginx as another service in our docker-compose.yaml definitions. Here is what I added to do that:
nginx:
  image: nginx:latest
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
  depends_on:
    - api
  ports:
    - "80:80"
Along with that modification, I needed to create the nginx.conf file that I referenced in the volumes section:
events {
    worker_connections 1000;
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://api:8000;
        }
    }
}
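One thing to be aware of: nginx resolves the proxy_pass hostname when it loads the config, so a config test only passes somewhere the api name resolves. Once the stack is up (next step), you can run the test inside the nginx container itself:

# Run nginx's built-in config test inside the running nginx service.
docker compose exec nginx nginx -t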
I reran docker compose up --scale api=3 -d and it seemed to work in the browser, but I couldn't tell which container I was reaching, to make sure it was balancing. To give this visibility, I added the following to the main.py file of my FastAPI app:
import socket

@app.get("/")
async def root():
    # Inside a container, the hostname is the container ID, so each
    # response shows which instance handled the request.
    return {"Container ID": socket.gethostname()}
I reran docker compose up, but found that it wasn't rebuilding the image. To fix this and force a rebuild, I added --build, giving docker compose up --scale api=3 -d --build.
It worked!
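If you'd rather verify from the terminal than the browser, a quick loop like this (just a sketch, assuming Nginx is listening on port 80 locally) should show the container ID changing between requests:

# Hit the app through Nginx several times; the returned container ID should rotate.
for i in 1 2 3 4 5 6; do
  curl -s http://localhost/
  echo
done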
Curiosity
In writing this up, I realized I didn’t know why the load balancing worked. How did Nginx know where to find each app server instance? I also couldn’t find anyone who really explained it in the tutorials I used. So, I followed what I saw in the configuration files along with what I know about networking.
I looked at the line I had added to the nginx.conf file that said proxy_pass http://api:8000; and noticed that the hostname api in that URL must have something to do with it. So, after poking around the running containers for a while, I put it together when I ran a dig command in the container running Nginx.
It was a simple round-robin DNS entry that must have been created by Docker - pretty cool! I found the same DNS entries in each of the app containers, but they didn't function as a round robin; presumably because of the default order of DNS resolution, each returned its own local IP only.
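If you want to reproduce that lookup without installing tools in the Nginx container, one way (a sketch; it assumes the default network name notes_default that Compose derives from the project name notes) is to run a throwaway container on the same network and query the api name:

# Resolve the "api" service name from inside the Compose network.
# bind-tools provides dig on Alpine; expect one A record per running api container.
docker run --rm --network notes_default alpine \
  sh -c "apk add --no-cache bind-tools > /dev/null && dig +short api"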
Docker Compose gotcha
I just wanted to point out one thing that gave me difficulty, in case it helps anyone else. I was having trouble with one of my docker compose commands (build). It took me a while to realize that I was using a deprecated version of Compose, even though I had the latest version of Docker. This is because they changed it from a separate program, which you ran using docker-compose (note the dash), to a sub-command of the main program that you run using docker compose. Somehow I had both versions on my system.
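A quick way to see which one you're actually running:

# Compose v2 is a plugin invoked as a docker sub-command.
docker compose version

# The legacy standalone v1 tool, if it's still installed, is a separate executable.
docker-compose version
which docker-compose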
Conclusion
This was a really fun one and surprisingly easy. I finished the main implementation so quickly that I thought I might continue on to the deployment stage, where I'm currently hosting the app on AWS ECS, and make sure the GitHub Actions I set up last time still worked. But it occurred to me that I already had a lot to write up just to show how I got it running locally. Taking on something as big as learning DevOps tools and practices can only happen incrementally.
I’m also not sure that I would use Docker Compose and Nginx for the AWS deployment. It might make more sense to use the AWS native solutions - something I look forward to researching next time.
Cheers!
Rick
Resources
- Flask Load Balancing Using Nginx and Docker (DevGuyAhnaf) - https://www.youtube.com/watch?v=42Q65H8ch7U