Introduction to Docker: Containers

John MacLean
9 min read · May 4, 2023



Introduction

Let me start by saying: please read the Conclusion. I would no longer recommend using an AWS t2.micro machine for working with Docker, even if you extend the hard drive.

This is a last-minute edit added as a warning: given the issues I have had with my Cloud9 machine and Docker, I would advise installing VSCode and Docker Desktop on your local machine, or keeping a dedicated local Linux virtual machine, for working with Docker instead.

Anyway, onto the article as I intended it to be…

In the world of software development, creating a consistent environment across different machines can be challenging. Docker and containers are game-changers in this aspect, offering an easy way to streamline workflows and ensure reliable execution of code, regardless of the underlying system.

Docker is an open-source platform that simplifies the process of building, deploying, and managing applications by using containers. Containers are lightweight, standalone, and portable units that encapsulate an application, its dependencies, and runtime configuration.

While containers are sometimes compared to virtual machines, this comparison is not entirely accurate; unlike virtual machines, which run full operating systems and emulate hardware, containers share the host operating system and provide lightweight, application-specific environments, resulting in better performance and reduced resource usage.

By using Docker containers, developers can ensure that their applications run consistently across different machines, eliminating the age-old “it works on my machine” problem. Containers make it easy to manage dependencies, version control, and portability, significantly enhancing team collaboration and productivity.

Pre-requisites

  1. A non-root AWS IAM account with enough rights to perform the required actions.
  2. A development environment. As before, I am using AWS Cloud9 which is lightweight and integrates seamlessly with AWS services. I have been using Cloud9 for Python development and have made a small tweak shown in the next section to allow it to cope with Docker.
  3. The Docker engine installed on your development environment. I’m not going to go over that in this article as the install process will vary depending on your own setup. However, if you wish to install on an Amazon Linux 2 image (which the Cloud9 machine is), you can refer to this article; a typical install sequence is also sketched just after this list.
  4. A basic understanding of Docker commands.
  5. Finally, knowledge of Git and GitHub would be handy. You can check out some of my previous articles to help with that.
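As mentioned in point 3, if you want to install the Docker engine on an Amazon Linux 2 machine yourself, a typical sequence looks something like the sketch below. This is a minimal sketch rather than the exact steps from the article I referenced; note you will need to log out and back in after the usermod step for the group change to take effect.

# Install and start the Docker engine on Amazon Linux 2
sudo yum update -y
sudo amazon-linux-extras install -y docker
sudo systemctl start docker
sudo systemctl enable docker

# Allow the default Cloud9 user to run docker without sudo
sudo usermod -aG docker ec2-user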

Cloud9 Machine Prep

My Cloud9 machine is a free tier t2.micro machine and some of my colleagues were reporting issues with resources running out when using Docker.

So, I copied this hard drive resize script to my Cloud9 machine and ran it to double the capacity of the main disk, /dev/xvda1, from 10 GB to 20 GB.
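I won’t reproduce the script here, but once the underlying EBS volume has been enlarged, the core of such a resize on Amazon Linux 2 typically looks like the sketch below. This assumes partition 1 on /dev/xvda with an xfs root filesystem, which is what the Cloud9 image uses.

# Grow partition 1 of /dev/xvda to fill the enlarged volume
sudo growpart /dev/xvda 1

# Grow the xfs root filesystem to fill the partition
sudo xfs_growfs -d /

# For an ext4 filesystem you would use resize2fs instead:
# sudo resize2fs /dev/xvda1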

I’m not going to go through executing a Bash script in this article, but if you would like a tutorial, please refer to the Executing the script section of one of my earlier Linux articles: link

The Challenge!

Scenario:

A software development team is working on a Python-based application. The team needs to simplify their development environment setup and ensure consistent execution of their code across different machines.

The team can create a Dockerfile for Boto3/Python that utilizes either the Ubuntu or Python official images available on Docker Hub. The Dockerfile can be customized to include any necessary libraries or dependencies required by the team’s Python application.

Foundational tasks:

  • Build a Dockerfile for Boto3/Python.
  • Use either the Ubuntu or Python official images on Docker Hub.
  • Download any three repos to your local host.
  • Create three containers.
  • Each container should have a bind mount to one of the repo directories.
  • Log into each container and verify access to each repo directory.

Build a Dockerfile

So the first step is to create our Dockerfile.

In the Cloud9 terminal, go into your preferred directory and enter touch Dockerfile. This will create an empty file in that directory.

Note, the file must be named exactly Dockerfile, with a capital D and all the other characters lowercase.

This is because Docker expects the build file to be named Dockerfile (case sensitive) by default. If you run the docker build command without explicitly specifying a build file, it will look for Dockerfile in the build context; if yours is named differently, the build will fail unless you point to the file with the -f flag.
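For example (myimage and Dockerfile.dev are just placeholder names):

# Default: docker build looks for a file named Dockerfile
docker build -t myimage .

# A differently named build file must be passed explicitly with -f
docker build -t myimage -f Dockerfile.dev .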

Next enter vim Dockerfile to open the file in the Vim text editor. Press i to enter insert mode and write your Dockerfile contents.

Press the Escape key to exit Vim’s insert mode and type :wq to save the file and exit the editor.

# Use the official Python image
FROM python:3.10

# Set the working directory
WORKDIR /myapp

# Copy requirements.txt into the container
COPY requirements.txt .

# Install the required packages
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code into the container
#COPY . .

# Expose the port the application will run on
#EXPOSE 8000

# Run the application
#CMD ["python", "myapp.py"]

The Dockerfile we need for our scenario is actually very simple, which is why I commented out the last three instructions. I left them in to illustrate what you might include had we been creating a large-scale Python application.

We don’t really need to set a working directory in this example either, but again it’s a useful illustration of what you may want to do if you’re working with a real application.

So the only point of note left in our file is the use of a requirements.txt file.

You can install any required packages directly in the Dockerfile, but it’s considered best practice to use a requirements.txt file, as it has a couple of key advantages:

  1. It makes it easier to manage and share the dependencies required for the project.
  2. It ensures that all team members and environments use the same versions of the dependencies, which helps avoid inconsistencies and potential bugs related to different package versions.
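As an aside, if you already have the packages installed in a local environment, a common way to generate this file is pip freeze, which pins every installed package to its exact version:

pip freeze > requirements.txt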

We create our requirements.txt file in the same directory and in the same manner as our Dockerfile.

boto3==1.26.0

As we want our developers to have access to Boto3, all we need in our requirements.txt file is to reference a specific version.

Download 3 Repositories

git clone https://github.com/TopKoda/gold-code.git
git clone https://github.com/TopKoda/psychic-octo-goggles.git
git clone https://github.com/TopKoda/topkoda.git

This stage is straightforward. We choose 3 GitHub repositories and clone them using the commands above entered into our Cloud9 terminal.

Once complete, the repos are downloaded into their own directories.

Building our Image and Creating Containers

Let’s build our image using the following command structure:

docker build -t repository_name[:tag_name] .

The -t switch allows us to specify a name for our build and an optional tag.

The name and tag are helpful when you want to run a container based on the image, push the image to a registry like Docker Hub, or manage multiple versions of the same image.

The easy-to-miss . at the end of the command is the build context: it tells Docker to use the current working directory, and the Dockerfile within it, as the source for the build.

For this example, I will just specify a name and no tag.
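Using the image name that appears in the docker run commands later on, the build command is:

docker build -t jmac_boto3_python .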

If no tag is specified, Docker will automatically assign the latest tag to the built image.

Running the docker image ls command shows us all available images and we can confirm our new image was created successfully.

Now, let’s create our first container.

docker run -it --name gold_code_container -v "$(pwd)/gold-code:/myapp" jmac_boto3_python /bin/bash

The command structure ensures the container has a bind mount to the respective repo directory via the -v flag. The $(pwd) substitution supplies the absolute path of the current directory, which Docker requires for bind-mount sources.

The containers will start with an interactive shell session, allowing us to access and manipulate the contents of the mounted directories.
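If you want to double-check the mount from the host side, docker inspect can print a container’s mounts; for example, for the container we just created:

# Show the mounts configured on the container
docker inspect -f '{{ json .Mounts }}' gold_code_container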

We can list the contents of our container’s /myapp folder.

Then if we exit the container and list the contents of the gold-code repo directory, we can confirm the files and folders match and therefore the bind mount was successful.
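Condensed, that check looks like this (the exact file listing will of course depend on the repo):

# Inside the container: list the mounted repo directory
ls /myapp
exit

# Back on the host: the same files should appear in the repo
ls gold-code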

docker run -it --name psychic_octo_goggles_container -v "$(pwd)/psychic-octo-goggles:/myapp" jmac_boto3_python /bin/bash
docker run -it --name topkoda_container -v "$(pwd)/topkoda:/myapp" jmac_boto3_python /bin/bash

We’ll just create our other two containers in exactly the same way.

Our psychic_octo_goggles_container is confirmed successful…

…and so is our topkoda_container.

Oh, before we tidy up, I forgot to check that our requirements.txt was processed correctly. Remember, we wanted Boto3 installed on our image.

If I start an interactive shell session on one of our containers, I can run pip show boto3 and the output confirms Boto3 is indeed installed and the version is the one we specified.
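Because the containers were started with an interactive shell and have since exited, one way back in is docker start followed by docker exec, using gold_code_container as an example:

# Restart the stopped container and attach a new shell
docker start gold_code_container
docker exec -it gold_code_container /bin/bash

# Inside the container: confirm the pinned Boto3 version
pip show boto3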

Tidy Up!

Given we’re on AWS, we don’t want to leave anything running, using up our precious resources and potentially incurring costs. So some housekeeping is in order before we sign off.

We can use the docker container ls command to see if we have any running containers.

In my case, our topkoda_container was still running, so we can just use the docker container stop command to stop it.

Now we just use the docker container rm command and pass in our container names to remove them all.
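Putting the housekeeping together, with the container names as created above:

# Stop the container that is still running
docker container stop topkoda_container

# Remove all three containers
docker container rm gold_code_container psychic_octo_goggles_container topkoda_container

# Optionally, remove the image as well
# docker image rm jmac_boto3_python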

And we’re done!

Conclusion

Well, this task seemed simple enough, but it turned into quite the nightmare.

I was halfway through the article, about to do my build. It was late, so I just shut my machine down and signed off for the night.

The next morning, I powered on my machine and it wouldn’t connect to the Cloud9 IDE. The EC2 instance was powered on and apparently responsive, but Cloud9 would not connect.

Of course I did the obvious and power-cycled it. But to no avail. Then followed many, many hours of troubleshooting, trawling forum posts and, ultimately, frustration.

I tried increasing the instance size multiple times (currently running a t3.micro which is overkill!).

The strange thing is, I had the same issue if I created another Cloud9 environment in the same region. If I created a Cloud9 environment in another region, it would launch successfully.

So it looked like a potential issue with my VPC. Yet I was using the default VPC that I had been using for weeks and I hadn’t changed it.

I felt (and still feel) Docker was the culprit, but I couldn’t conclusively prove it. I did find this AWS troubleshooting article which states that Docker can potentially cause an IP conflict with your VPC, but my VPC CIDRs and Docker bridge CIDRs seemed different.

Anyway, I messed around with my security groups and got SSH access to my instance and uninstalled Docker. After that, I still couldn’t connect to Cloud9, so I shut down for the evening again.

And the next morning? Lo and behold, I got into Cloud9 straight away. So I never conclusively found the root cause, but other colleagues have had intermittent connectivity or slowness issues with a t2.micro machine with Docker installed, hence I will not recommend that combination.

I reinstalled Docker to complete this article, but I wonder what will happen tomorrow after the machine is shut down again? Feel free to reach out and ask me if you want to know…

What a saga eh? I’m glad to see the back of this one to be honest!

Until next time dear reader, best wishes and thank you for reading!
