Introduction to Docker: Containers - Part 2

John MacLean
6 min read · Jul 13, 2023



Introduction

Well, dear reader, here we are looking at Docker once more.

This article is a follow-on from my earlier Introduction to Docker: Containers escapade! Now, that article did not go well due to issues I had using an AWS Cloud9 machine as my development environment. Although I eventually got the task completed, I think I was scarred for life and can only now return to face the next step!

Thanks to my clever coaches at Level Up In Tech, it looks like the Linux image for Cloud9 was the root cause of my problems.

Therefore, this time we’re moving on to a vanilla Ubuntu AWS EC2 image, which will hopefully result in a smoother experience.

I’ll say a little more about the setup in the Pre-requisites section.

This article will concentrate on the Advanced stage of the challenge. For all the background on the challenge and an explanation of some Docker terms, please refer to my previous article linked above.

Pre-requisites

  1. A non-root AWS IAM account with enough rights to perform the required actions.
  2. A development environment. This time I am using an AWS EC2 instance with a vanilla Ubuntu install. This is a t2.micro instance again and I have not increased the hard drive size from default. I also have VSCode installed on my local Windows workstation and set up with the Remote SSH plugin to connect to the instance.
  3. The Docker engine installed on your development environment. I’m not going to go over that in detail, as the install process will vary depending on your own setup. However, if you wish to install on Ubuntu, you can refer to this article. Note that I installed using the convenience script, so go to that section in the article to check it out. It’s really straightforward; the short snippet after this list shows the gist of it.
  4. A basic understanding of Docker commands.
  5. Finally, knowledge of Git and GitHub would be handy. You can check out some of my previous articles to help with that.
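
For reference, the convenience script route boils down to downloading and running Docker’s official install script (do give it a quick read before running anything piped in from the internet):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh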

The Challenge!

Scenario:

A software development team is working on a Python-based application. The team needs to simplify their development environment setup and ensure consistent execution of their code across different machines.

The team can create a Dockerfile for Boto3/Python that utilizes either the Ubuntu or Python official images available on Docker Hub. The Dockerfile can be customized to include any necessary libraries or dependencies required by the team’s Python application.

Advanced tasks:

  • Create three Dockerfiles that automatically connect to GitHub.
  • Each Dockerfile should connect to a different GitHub repo.
  • Place one container on a network called Development.
  • Place the other two on a network called Production.
  • Verify the container on the Development network cannot communicate with the other containers.
  • Verify the containers on the Production network can communicate with each other.

Creating our Dockerfiles

This time, we are keeping things really simple and constructing three very basic files, each of which clones a separate GitHub repository as follows:

Dockerfile1

FROM ubuntu:latest

RUN apt-get update && \
    apt-get install -y git iputils-ping

WORKDIR /myapp

RUN git clone https://github.com/TopKoda/gold-code.git

CMD tail -f /dev/null

Dockerfile2

FROM ubuntu:latest

RUN apt-get update && \
    apt-get install -y git iputils-ping

WORKDIR /myapp

RUN git clone https://github.com/TopKoda/Docker.git

CMD tail -f /dev/null

Dockerfile3

FROM ubuntu:latest

RUN apt-get update && \
    apt-get install -y git iputils-ping

WORKDIR /myapp

RUN git clone https://github.com/TopKoda/topkoda.git

CMD tail -f /dev/null

We install git so we can clone our repositories, and iputils-ping so we have the ping command available for testing later on.

The CMD tail -f /dev/null at the end of each file is a common way to start a dummy process that does nothing and never exits, which keeps the container running indefinitely.
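
As an aside, the same trick can also be written in Docker’s exec form, which runs the command directly rather than through a shell; either works fine for our purposes:

CMD ["tail", "-f", "/dev/null"]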

Building our Images

Once our files are created, we go to the directory where they are located and run our build command.

Note that we need to use sudo on Ubuntu unless our user has been added to the docker group.

sudo docker build -t image1 -f Dockerfile1 .

Everything looks good for the first image and we can see it cloned the first GitHub repository.

Let’s repeat the command (changing the image name) for the next two builds.

sudo docker build -t image2 -f Dockerfile2 .
sudo docker build -t image3 -f Dockerfile3 .

As you can see, both builds were successful and each has a different repository.
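
If you want to double-check before moving on, listing the local images should show image1, image2 and image3 sitting alongside the ubuntu base image:

sudo docker images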

Creating our Networks

The next step is really straightforward — creating our Development and Production networks.

Simply enter the following commands:

sudo docker network create Development
sudo docker network create Production

Done!
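
If you’d like to confirm they exist, listing the networks will show Development and Production alongside Docker’s default bridge, host and none networks:

sudo docker network ls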

Running the Docker Containers

Now we want to run each image in a separate Docker container and attach the containers to their respective networks.

We’ll put the container for image1 on Development and the containers for the other two images on Production.

sudo docker run --name container1 --network Development -d image1
sudo docker run --name container2 --network Production -d image2
sudo docker run --name container3 --network Production -d image3

sudo docker container ls will confirm our containers are running…

…and everything looks good at this stage.
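
For an extra sanity check, inspecting each network shows which containers are attached to it; Development should list only container1, while Production should list container2 and container3:

sudo docker network inspect Development
sudo docker network inspect Production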

Testing

Now we are going to check the communication between our containers.

To do that, we’ll ping between containers. To get the IP addresses of our containers, we need to run this rather convoluted command:

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <your_container_id>

We got each container ID from the sudo docker container ls command we ran earlier.
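
Incidentally, docker inspect also accepts container names, so a quick shell loop (using the names we chose when running the containers) prints all three addresses in one go:

for c in container1 container2 container3; do
  echo -n "$c: "
  sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$c"
done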

So container1 has an IP address of 172.18.0.2, container2 has an IP address of 172.19.0.2 and container3 has an IP address of 172.19.0.3.

container1 is on our Development network, so we can use the docker exec command to run ping from inside that container and try to ping the other containers on the Production network.

If the containers are properly isolated, these commands should fail.

sudo docker exec container1 ping -c 3 172.19.0.2
sudo docker exec container1 ping -c 3 172.19.0.3

Indeed, they failed, so the first test passed.

Finally, to verify that the two Production containers can communicate with each other, we can ping from one to the other as follows:

sudo docker exec container2 ping -c 3 172.19.0.3
sudo docker exec container3 ping -c 3 172.19.0.2

This time the commands succeed, confirming successful communication between the two containers in the Production network.
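
As an aside, user-defined bridge networks like Production come with built-in DNS resolution between containers, so the same check also works by container name, with no IP lookup needed:

sudo docker exec container2 ping -c 3 container3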

That’s it — our task has been successfully completed!

Tidy Up

The last thing to do is some housekeeping.

To clean up the Docker environment, we will run the following commands:

sudo docker rm -f container1 container2 container3
sudo docker rmi image1 image2 image3
sudo docker network rm Development Production
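
If you prefer a broader sweep, Docker’s prune command will also clear out any remaining stopped containers, unused networks and dangling images (it asks for confirmation before deleting anything):

sudo docker system prune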

Remember to power off your AWS EC2 instance too!

Conclusion

This scenario shows a simple way of getting code into Docker containers directly from GitHub and creating and using Docker networks for communication.

Note however, that it is not typical to clone repositories directly in a Dockerfile for a production environment. Usually, you would separate the building of your application and the creation of your Docker image into separate steps in your CI/CD pipeline.
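
To make that concrete, here is a minimal sketch of the more typical pattern, where the CI job has already checked out the code and the Dockerfile simply copies it into the image. The requirements.txt and app.py names are placeholders for illustration:

FROM python:3.11-slim

WORKDIR /myapp

# The CI job has already checked out the repo next to this Dockerfile
COPY . /myapp

# Install dependencies (requirements.txt is a hypothetical example)
RUN pip install --no-cache-dir -r requirements.txt

# Run the (hypothetical) application entry point
CMD ["python", "app.py"]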

Anyway, at least this task went smoothly, so I’m happy and I hope you are too.

Thank you once again for taking the time to check out my article. Please feel free to leave any feedback or questions!
