Docker Swarm and Docker Stack in AWS

John MacLean
9 min read · May 9, 2023


Photo by FLY:D on Unsplash

Introduction

Imagine having the power to effortlessly scale and manage your applications, delivering a seamless experience for your users even as the demand grows. Docker Swarm and Docker Stack, two complementary technologies, can help you achieve this with ease.

Docker Swarm is a powerful orchestration platform that allows you to deploy and manage multiple instances of your application, called containers, across a group of machines, known as a swarm. The true magic of Docker Swarm lies in its ability to automatically distribute and balance the workload, ensuring optimal performance and high availability. In other words, it’s like having a team of skilled traffic conductors directing each container to the right destination, providing a fast lane for your applications.

Docker Stack takes this a step further by simplifying the management of complex multi-service applications. Think of it as a blueprint that outlines your application’s components and requirements. With Docker Stack, you can describe your application using an easy-to-read text file, and then deploy and manage the entire stack with just a few commands. This not only saves you time but also ensures that your application’s infrastructure remains consistent and easy to maintain.

Together, Docker Swarm and Docker Stack empower you to build scalable, high-performing applications that can easily adapt to changing demands. Whether you’re a small business owner or an enterprise IT leader, these tools can help you deliver a top-notch user experience, all while keeping complexity at bay.

In this article, we’ll dive deeper into the world of Docker Swarm and Docker Stack, exploring real-world examples and best practices to help you harness their full potential.

Pre-requisites

  1. I’m going to refer back to one of my early articles: Use the AWS CLI to Customise and Automate Launching of your EC2 Instance. We need to launch 4 EC2 instances and install Docker on them, so I thought automating this via the AWS Command Line Interface would be efficient, rather than repetitive manual clicking via the AWS Console.
  2. A non-root AWS IAM account with enough rights to perform the required actions.
  3. A development environment. This time I am just using Windows PowerShell with the AWS CLI installed and configured.
  4. A basic understanding of Docker commands.
  5. Finally, an understanding of AWS services and architecture is most helpful at this stage.

The Challenge!

Basic

Using AWS, create a Docker Swarm that consists of one manager and three worker nodes.

Verify the cluster is working by deploying the following tiered architecture:

  • a service based on the Redis docker image with 4 replicas.
  • a service based on the Apache docker image with 10 replicas.
  • a service based on the Postgres docker image with 1 replica.

Advanced

Create a Docker Stack using the Basic project requirements. Ensure no stacks run on the manager (administrative) node.

Setting Up Our Security Group

The Docker documentation lists the ports that need to be opened for Docker Swarm to work:

  • Port 2377 TCP for communication with and between manager nodes.
  • Port 7946 TCP/UDP for overlay network node discovery.
  • Port 4789 UDP (configurable) for overlay network traffic.
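These rules can also be added from the terminal with the AWS CLI, in keeping with the rest of this article. A sketch, assuming your own security group id and a wide-open 0.0.0.0/0 source that you would tighten for production:

```shell
# Port 2377 TCP: cluster management traffic
aws ec2 authorize-security-group-ingress --group-id <your security group id> --protocol tcp --port 2377 --cidr 0.0.0.0/0
# Port 7946 TCP and UDP: overlay network node discovery
aws ec2 authorize-security-group-ingress --group-id <your security group id> --protocol tcp --port 7946 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <your security group id> --protocol udp --port 7946 --cidr 0.0.0.0/0
# Port 4789 UDP: overlay network (VXLAN) traffic
aws ec2 authorize-security-group-ingress --group-id <your security group id> --protocol udp --port 4789 --cidr 0.0.0.0/0
```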

I’m just going to reuse an existing security group I had set up for SSH and HTTP traffic.

So go to the AWS Console, select Security Groups from the left hand menu, click the group you want and press the Edit inbound rules button.

Click Add rule and add each of the required Docker Swarm port access rules.

Once all the rules are in place, click Save rules.

Now we are ready to spin up some EC2 instances.

Creating AWS EC2 Instances with Docker

We are just going to use the AWS CLI to launch 4 EC2 instances at once. We will create user data to ensure Docker is installed automatically once the instances are launched.

#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user

The user data is essentially a script that runs as root once the instance launches. As it runs as root, I didn't actually need sudo in front of the commands!

Save the contents shown above into a file called docker_inst.sh.

aws ec2 run-instances --image-id ami-0a242269c4b530c5e --count 4 `
    --instance-type t2.micro --key-name <your key pair> `
    --subnet-id <your subnet id> `
    --security-group-ids <your security group id> `
    --user-data file://docker_inst.sh `
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=LUITProjectWeek17}]'

Then, in the same directory as docker_inst.sh on our development environment terminal, we run the command above.

NOTE: the command is split across lines with PowerShell backtick continuation characters to keep it readable. If you are running it from a bash shell instead, replace each trailing backtick with a backslash, or join everything onto a single line.

I copied the image id from my preferred region via the AWS Console and used my own key pair, subnet id and security group id — again all details can be copied from the AWS Console.

A couple of minutes later and all our instances are shown in the EC2 Dashboard and are up and available.
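As a quick sanity check from the same terminal, we can list the new instances by the Name tag we applied and confirm they are running. A sketch using the tag value from the run-instances command above:

```shell
aws ec2 describe-instances --filters "Name=tag:Name,Values=LUITProjectWeek17" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].[InstanceId,PublicIpAddress]" --output table
```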

Initialize Docker Swarm

We will use instance id i-014f3241fb3e9fc50 as our manager node.

Take a note of its public IPv4 address from the AWS Console and then we can SSH into it via our development machine’s terminal.

Ooh, some nice new login ASCII art on the Amazon Linux 2023 image!

Running docker version confirms that Docker is installed and working.

docker swarm init --advertise-addr <manager instance IP address>

NOTE: make sure you use the private IPv4 address of your manager instance.

I used the public IPv4 address initially, as shown in the screenshot, and had to redo everything when there was no connectivity between the manager and worker nodes.

Anyway, if successful, you will see a docker swarm join command output to the screen. Copy that as you will need to run it on all your worker nodes.
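If you misplace that output, there is no need to re-initialize anything: the manager can reprint the worker join command, including the token, at any time:

```shell
# Run on the manager node; prints the full docker swarm join command for workers
docker swarm join-token worker
```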

Run docker node ls on the manager and we can see it's the leader and there are no workers at present.

Join Worker Nodes to Swarm

Run the docker swarm join command you copied earlier on each instance you wish to be a worker node.

If successful, you will get near instant confirmation.

Once all our workers are in place, we can go back to the manager and run docker node ls to get confirmation that our swarm is set up.

Deploy our Services

Deploying the services is pretty straightforward for this example.

Each service is set up using the same command structure.

All we need to do is specify a service name, how many replicas we require and the Docker image we require for the service.

Each command is run on the manager node.

So as per the Challenge! requirements:

docker service create --name redis --replicas 4 redis:latest
docker service create --name apache --replicas 10 httpd:latest
docker service create --name postgres --replicas 1 --env POSTGRES_PASSWORD=mysecretpassword postgres:latest
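As an aside, replica counts are not set in stone. If demand changes later, a service can be rescaled in place from the manager node without redeploying it; the target numbers below are purely illustrative:

```shell
# Rescale the apache and redis services to new (hypothetical) replica counts
docker service scale apache=5 redis=2
```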

That’s all there is to it. Although we better check everything was set up correctly!

Checking the Swarm Works

There are a few useful Docker commands we can run to make sure our swarm is functioning as it should.

docker node ls — well, we’ve seen this before, but another check after we deploy our services and all nodes are ready and available.

docker service ls — this command lists all the services running in our swarm. We can check the defined and current number of replicas running.

docker service ps <service name> — this command allows us a more detailed view of each replica within a service. Here we are checking the apache service for example. We can see each replica is running and they are well distributed across our nodes.

docker service logs <service name> — as the command suggests, this outputs the relevant log for a given service, which can be useful in troubleshooting. I chose redis in this example and used the --tail 10 option to display only the last 10 lines of the log. Mainly because screenshots of log files aren’t that thrilling…

Well, we have confirmed our swarm is working perfectly, so the Basic stage of our Challenge! is complete and it’s time to tidy up before the Advanced stage.

Docker Swarm Cleanup

We will reuse the same manager and worker node EC2 instances for our Advanced task in creating a Docker Stack, but first we need to clear out our swarm to ensure we have a clean environment and avoid any conflicts.

docker service rm redis apache postgres

We first run this command on our manager node to remove all our deployed services.

docker swarm leave --force

Next we run this command on our manager node (the --force flag is required because it is a manager) and then run docker swarm leave on all worker nodes.

Finally, running the docker service ls and docker node ls commands on our manager node now returns an error telling us the node is no longer part of a swarm, confirming the swarm has been torn down.

Creating a Docker Stack — Advanced Stage

A YAML (YAML Ain’t Markup Language) file is a simple, easy-to-read text format that helps organize and manage information. In the context of Docker Stack, we use a YAML file to describe the various parts of our application: its services, their settings, and how they relate to each other. By defining this structure in a YAML file, we can quickly deploy and manage our application, ensuring consistency and simplifying maintenance.

version: '3.9'
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 4
      placement:
        constraints:
          - node.role != manager

  apache:
    image: httpd:latest
    deploy:
      replicas: 10
      placement:
        constraints:
          - node.role != manager

  postgres:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role != manager

Firstly we create a docker_stack.yml file on our manager node and paste in the contents shown above.

The node.role != manager placement constraint in each service prevents its tasks from being scheduled on the manager node.

docker swarm init --advertise-addr <manager node private IP address>

We need to define our manager and worker nodes again after tearing everything down previously.

As before, run the docker swarm init command on the manager node and copy the docker swarm join command from the output.

Run the docker swarm join command on all worker nodes.

Next we go back to the manager node.

docker stack deploy --compose-file docker_stack.yml mydockerstack

This command is pretty self-explanatory: it deploys the services defined in our YAML file as a stack called mydockerstack.

docker stack services mydockerstack

We can see the stack was created successfully, again by running this command on our manager node.

docker node ps self --filter "desired-state=running"

Lastly, we run this on the manager node to confirm that no tasks are running on it.
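An equivalent check from the manager is to list every task in the stack together with the node it was scheduled on; each line should name a worker instance only. The format placeholders below are standard docker stack ps fields:

```shell
docker stack ps mydockerstack --format "{{.Name}} {{.Node}} {{.CurrentState}}"
```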

The final part of our Advanced task is successful!

Conclusion

Harnessing the power of Docker Swarm and Docker Stack, we’ve simplified the complex, turned the daunting into the doable. These tools are the secret sauce to managing and scaling applications, ensuring they run smoothly and efficiently, no matter the demand.

Thank you for diving into this journey with me. Got questions or thoughts to share? Please let me know!

Until next time dear reader, take care!
