Master Ansible Playbook to Install Docker on Ubuntu 18.04

Introduction

Automating the installation and setup of Docker on remote Ubuntu servers can save time and reduce errors. With Ansible, a powerful automation tool, you can easily create playbooks to streamline the process of configuring Docker containers on Ubuntu 18.04 systems. This guide walks you through creating a playbook that installs necessary packages, sets up Docker, and deploys containers across multiple servers, ensuring consistency and efficiency. Whether you’re managing a single server or many, using Ansible for this task will help eliminate manual configuration and improve your workflow.

What is Ansible?

Ansible is a tool used to automate server setup and management. It helps users automate tasks like installing software and managing servers remotely. With Ansible, you can define a set of instructions in a playbook, which can be reused to configure servers consistently without manual intervention.
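For example, once Ansible is installed on your control machine and your inventory and SSH access are set up, you can check that every managed server is reachable with a single ad-hoc command (sammy here stands in for whichever remote user you connect with):

$ ansible all -m ping -u sammy

If each host answers with pong, Ansible can reach your servers and you’re ready to start writing a playbook.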

Step 1 — Preparing your Playbook

The playbook.yml file is where you define all your tasks. A task is the smallest thing you can automate using an Ansible playbook, and it helps you carry out different operations on your target systems. Each task usually does one job, like installing a package, copying files, or configuring system services.

To start making your playbook, just open your favorite text editor and create the playbook.yml file with this command:

$ nano playbook.yml

This will open a blank YAML file. YAML, which stands for “YAML Ain’t Markup Language,” is a format that’s easy for humans to read and is often used for writing configuration files. One thing to keep in mind is that YAML is super picky about indentation, so a small mistake can cause errors.

Before you get into adding tasks to your playbook, begin by adding this basic setup to your file:


---
- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d

Let’s break this down:

  • hosts: all: This part tells Ansible which servers to target with the playbook. The all value means the playbook will run on all the servers listed in your inventory file.
  • become: true: This tells Ansible to run all tasks with root privileges (basically like using sudo). You’ll need this for tasks that require admin access, like installing software or changing system settings.
  • vars: The vars section is where you define variables that will be used throughout the playbook. This makes your playbook super flexible because you can adjust things without digging through the whole file.

Here’s what the variables mean:

  • container_count: This is how many containers you want to create. Adjust this number based on how many you need.
  • default_container_name: This sets the default name for your containers. It helps keep things organized, especially if you’re creating multiple containers.
  • default_container_image: This defines which Docker image you’ll use when creating containers. In this case, it’s set to use the ubuntu image. You can swap this out for any other Docker image you prefer.
  • default_container_command: This is the command that will run in each container when it’s created. By default, it’s set to sleep 1d, which keeps each container alive for one day. You can change this to any other command you need.

If you want to see the playbook once it’s finished, skip ahead to Step 5, where the complete YAML file is shown.
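Note that hosts: all relies on your Ansible inventory. If you haven’t created one yet, a minimal inventory file might look something like the following sketch, where the group name and the example IP addresses are placeholders for your own servers:

[servers]
server1 ansible_host=203.0.113.111
server2 ansible_host=203.0.113.112

With an inventory like this in place, the playbook runs against both hosts; the -l flag shown in Step 6 lets you limit a run to a single server.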

Step 2 — Adding Packages Installation Tasks to your Playbook

By default, tasks in an Ansible playbook are executed one after the other, meaning they run sequentially in the order you’ve written them. This setup ensures that each task is finished before the next one starts. It’s important to know that the order of tasks really matters; the result of one task can affect the next, so you need to think carefully about the flow of your tasks. This feature is super helpful because it lets you manage dependencies—one task won’t kick off until the previous one is done.

Also, every task in Ansible can work on its own, which is really nice. It makes tasks reusable in different playbooks. This means you don’t have to rewrite the same code over and over again. Once you set up something like installing packages in one playbook, you can reuse it anywhere else, saving you time and keeping things consistent across your setup.
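As a loose sketch of that idea, you could keep the package-installation tasks from this step in their own file and pull them into any playbook that needs them with the import_tasks directive (install_packages.yml is a hypothetical file name used only for illustration):

- hosts: all
  become: true
  tasks:
    - import_tasks: install_packages.yml

This guide keeps everything in a single playbook.yml for simplicity, but that option is there once your setup grows.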

Now, let’s go ahead and add the first tasks to your playbook for installing some essential packages. First up, we’ll install aptitude, a tool that interacts with the Linux package manager. Then, we’ll install the required system packages. These packages are important for setting up Docker and all the dependencies you’ll need for managing containers. The configuration below will make sure that Ansible installs these packages on your server, checks for any updates, and ensures you’re always working with the latest versions.

Here’s how you can write this part of the playbook:


  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: true
    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true

Let’s break down what’s happening here:

  • Install aptitude: This task installs aptitude, which is Ansible’s preferred tool over the default apt package manager. Aptitude makes managing packages easier, especially when dealing with dependencies.
  • Install required system packages: This task installs the necessary packages that your server will need for Docker and other configuration tasks. The list of packages includes:
  • apt-transport-https: This allows the server to securely download packages over HTTPS.
  • ca-certificates: This makes sure the system has the right certificates for secure communication.
  • curl: This tool helps transfer data with URLs and is often needed for downloading files or communicating with remote servers.
  • software-properties-common: This package provides helpful utilities for managing repositories on Ubuntu.
  • python3-pip: This is a package installer for Python, which is needed to manage Python dependencies.
  • virtualenv: A tool to create isolated Python environments, which can be really useful in different setups.
  • python3-setuptools: This helps with packaging and distributing Python software.

By using the apt module in Ansible, you’re telling it to install these packages on your system. The apt module works with the apt package manager, which is perfect for managing packages on Ubuntu and other Debian-based systems. The state: latest directive ensures that you’re always installing the most up-to-date versions, and update_cache: true makes sure the apt cache is updated before it starts installing the packages.

This setup will guarantee that the right packages are always installed and up-to-date each time the playbook runs. You’ll automate the setup process for Docker containerization, so you don’t have to worry about doing it manually each time. Ansible takes care of all the installation, ensuring everything is configured the right way.
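Before moving on, it can be worth validating what you have so far. Ansible can check the playbook’s syntax without touching any servers, and a dry run reports what would change without actually changing it (server1 and sammy are the example host and user used later in this guide; substitute your own):

$ ansible-playbook playbook.yml --syntax-check
$ ansible-playbook playbook.yml --check -l server1 -u sammy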


Step 3 — Adding Docker Installation Tasks to your Playbook

In this step, you’re going to tweak your playbook to install the latest version of Docker directly from the official Docker repository. Docker, as you probably know, is one of the most widely used container platforms. It’s great for running applications in isolated environments, or containers, which are super useful when you’re looking to keep things organized. Getting Docker up and running on your servers is key to making sure everything is fresh and running smoothly.

First, we’re going to add the Docker GPG key to your server. Think of the GPG key like a security check that makes sure the Docker packages you’re installing are legitimate and haven’t been messed with. So, what we’re doing here is fetching this key from a secure URL and adding it to the server’s keyring. This ensures that when Docker packages are downloaded in the future, they’re coming from the official source and haven’t been tampered with.

Next up, we add the Docker repository to your server’s list of package sources. This is how your server knows where to pull the latest Docker packages from. You’ll specify the repository URL for Ubuntu systems, and, in this case, we’re going with “bionic,” which refers to Ubuntu 18.04. By using Ansible’s apt_repository module, we make sure the Docker repository is added correctly to your system, so you don’t have to manually handle this.

Once the repository is added, the local apt package index needs to be refreshed so the package manager knows about the new Docker packages that are now available. In this playbook that isn’t a separate task: the update_cache: true option on the install task does the equivalent of running apt update right before the installation.

After that, Ansible is instructed to install the docker-ce (Community Edition) package. By adding state: latest, we ensure that we’re getting the newest version of Docker. The update_cache: true part makes sure that any changes to the package list are taken into account during the installation process.

Finally, the last task in this part of the playbook installs the Docker module for Python using pip. This Python module helps your Python scripts interact with Docker. If you plan to automate Docker container management within Python, this is going to be your best friend.

Here’s what this part of the playbook would look like:


    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present
    - name: Update apt and install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true
    - name: Install Docker Module for Python
      pip:
        name: docker

Here’s a breakdown of each task:

  • Add Docker GPG apt Key: This ensures your server can verify Docker packages, so you know they’re the real deal.
  • Add Docker Repository: This task adds Docker’s repository, which allows you to install Docker from the official source, keeping everything legit.
  • Update apt and install docker-ce: This task installs Docker Community Edition and ensures that your server is using the most recent stable version.
  • Install Docker Module for Python: This installs the Python module that lets you control Docker through Python scripts.

When all of these tasks are executed, Docker will be installed smoothly on your system. And the best part? You won’t have to manually handle any of it—thanks to Ansible automating the whole process. This makes Docker installation repeatable and hassle-free for all your future setups!
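If you want to confirm the installation once these tasks have run, you can do it with a quick ad-hoc command instead of logging in manually (server1 and sammy again stand in for your own host and remote user):

$ ansible server1 -u sammy -m command -a "docker --version"

The command module simply executes docker --version on the remote host and prints the result.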


Step 4 — Adding Docker Image and Container Tasks to your Playbook

In this step, you’re going to roll up your sleeves and start creating your Docker containers. The first task is to pull the Docker image you want to use as the base for your containers. By default, Docker gets its images from Docker Hub, which is basically a giant online library of container images for all kinds of applications and services. You can think of it as a ready-to-go inventory of Docker environments, all set up for you to use. The image you choose will determine the environment inside your containers, including all the necessary dependencies and settings.

Once you have the image, the next step is to create the Docker containers using that image. These containers will be set up according to the variables you’ve already defined in your playbook. Here’s a breakdown of the tasks:

Pull Docker Image

This part of the task uses the docker_image Ansible module to pull the image from Docker Hub. In the playbook, the name parameter specifies which image to grab, and source: pull makes sure the image is pulled from the Docker registry.


    - name: Pull default Docker image
      docker_image:
        name: "{{ default_container_image }}"
        source: pull

In this case, the default_container_image variable holds the name of the image you want to use, and it could be something like Ubuntu or CentOS, depending on what you’re working with. You can change default_container_image to any image name that fits your project.

Create Docker Containers

Once the image is pulled, the next task is to create one or more containers based on the image you’ve just downloaded. The docker_container Ansible module is used for this, and it’s where you define the configuration for each container. The variables from earlier in your playbook, like default_container_name, default_container_image, and default_container_command, will control how each container is set up.


    - name: Create default containers
      docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}

Here’s how each part works:

  • name: This dynamically generates the name for each container. It combines the default_container_name with the item variable, which represents the current iteration in the loop. So each container gets a unique name based on its position in the sequence.
  • image: This tells Docker which image to use when creating the container. The image is pulled from Docker Hub as defined in the previous task.
  • command: This is the command that runs when the container starts. By default, it’s set to sleep 1d, which keeps the container alive for one day. You can change this to run whatever command you need inside the container.
  • state: The state is set to present, which means the container will be created if it doesn’t already exist, but not started.
  • with_sequence: This part is crucial because it creates a loop that runs the task container_count times, which is the number of containers you want. The item variable ensures that each container in the sequence gets a unique name.

The with_sequence loop is super helpful because it lets you automate the creation of multiple containers without having to repeat the task for each one. Instead, you define how many containers you want at the top of your playbook, and Ansible handles the rest, ensuring each container gets its own name based on the loop iteration.

This method of container creation is not only efficient but also really flexible. You can easily scale up the number of containers you need without manually tweaking the playbook every time. It’s all automated, and you don’t have to worry about a thing!
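As a side note, recent Ansible versions also support the more general loop keyword. A roughly equivalent version of the same task, shown here only as a sketch and not part of the playbook built in this guide, could use range() to generate the numbers:

    - name: Create default containers
      docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      loop: "{{ range(1, container_count | int + 1) | list }}"

Both forms produce containers numbered 1 through container_count; this guide sticks with with_sequence.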


Step 5 — Reviewing your Complete Playbook

Once you’ve added all your tasks, it’s time to take a step back and review everything in your playbook to make sure it’s all set up correctly. This is the moment to double-check everything, especially the little details you might’ve customized, like the number of containers you want to create or the Docker image you’re using.

Here’s an example of how your playbook should look when you’re all done:


---
- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d
  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: true
    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true
    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present
    - name: Update apt and install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true
    - name: Install Docker Module for Python
      pip:
        name: docker
    - name: Pull default Docker image
      docker_image:
        name: "{{ default_container_image }}"
        source: pull
    - name: Create default containers
      docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}

This is the full playbook, and as you can see, it includes everything you need to get Docker up and running on your target servers. Let’s break it down a little bit more:

Hosts: The hosts section specifies which machines this playbook will be applied to. In this case, it’s set to “all,” which means it’ll apply to all of the servers in your Ansible inventory.

Become: The become: true line is important because it tells Ansible to run everything with root (admin) privileges, which you’ll need to install packages and do other Docker-related tasks.

Vars: Here, we define variables to make things more flexible. You can easily change the number of containers you want to create, or pick a different Docker image or startup command for your containers. It’s all in one place!
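Because these values live in vars, you can also override them for a single run without editing the file, using ansible-playbook’s -e (extra vars) flag. The values below are just an illustration:

$ ansible-playbook playbook.yml -e "container_count=2 default_container_image=debian"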

The Tasks:

  • Aptitude and System Packages: The first part installs some necessary tools and packages, like curl and python3-pip, to make sure your system is ready for Docker.
  • Docker Setup: Next up, we add the Docker GPG key (to verify Docker packages) and the Docker repository (so we can download Docker), and finally, install Docker and the Python Docker module.
  • Docker Containers: After Docker is installed, we pull the Docker image you’ve chosen and then create your containers based on that image. Each container gets set up with the specific configuration you’ve defined.

Customization:

  • You could use the docker_image module to push your custom Docker images to Docker Hub.
  • You could also update the docker_container task to set up more complex container networks or tweak other settings.
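As one hedged example of that second point, the docker_container module accepts ports and networks parameters, so a customized task might look roughly like this (the container name, image, port mapping, and network name are all placeholders, and app_net is assumed to already exist, for instance created with the docker_network module):

    - name: Create web container
      docker_container:
        name: web1
        image: nginx
        state: started
        ports:
          - "8080:80"
        networks:
          - name: app_net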

Just a heads-up, YAML files are picky about indentation. If something goes wrong, it might be due to incorrect spacing, so make sure your indentations are consistent. For YAML, the standard is to use two spaces for each indent level. If you run into any errors, check your spacing first, and you’ll likely spot the issue.

Once everything looks good, save your playbook, exit your text editor, and you’re ready to roll! You’re all set to run your playbook and automate your Docker setup.


Step 6 — Running your Playbook

Now that you’ve reviewed and fine-tuned your playbook, it’s time to run it on your server or servers. Typically, most playbooks are set up to run on all servers in your Ansible inventory by default, but if you want to run it on a specific server, you can easily specify that. For example, if you want to run your playbook on server1 and connect using the sammy user, you can use the following command:

$ ansible-playbook playbook.yml -l server1 -u sammy

Let’s break down this command a bit so you can see how it works:

  • -l flag: This specifies the server (or group of servers) where the playbook will run. In this case, it’s limiting the execution to server1.
  • -u flag: This flag tells Ansible which user to log in as. So, in this case, sammy is the user Ansible will use to log into the remote server and run the commands.
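Two more flags can be handy depending on your setup: -i points Ansible at a specific inventory file, and -K (short for --ask-become-pass) prompts for the sudo password if your remote user needs one. For example (the inventory file name here is just an assumption about your layout):

$ ansible-playbook -i inventory playbook.yml -l server1 -u sammy -K

Either way, the playbook output looks much the same.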

Once the command runs, you should see something like this:

Output
changed: [server1]
TASK [Create default containers] *****************************************************************************************************************
changed: [server1] => (item=1)
changed: [server1] => (item=2)
changed: [server1] => (item=3)
changed: [server1] => (item=4)

PLAY RECAP ***************************************************************************************************************************************
server1               : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Output Explanation

  • changed: [server1]: This tells you that a task made changes on server1. Each changed line corresponds to a task that modified the server’s configuration.
  • TASK [Create default containers]: This is the task that’s being executed. In this case, it’s the creation of your Docker containers.
  • changed: [server1] => (item=1), etc.: This shows the creation of each individual container. The item refers to the number of the container in the loop defined in your playbook (like container 1, container 2, and so on).
  • PLAY RECAP: This section gives you a summary of your playbook’s run. Here’s what each part means:
  • ok=9: Nine tasks were successfully executed.
  • changed=8: Eight tasks made changes to the server.
  • unreachable=0: No servers were unreachable.
  • failed=0: No tasks failed.
  • skipped=0: No tasks were skipped.
  • rescued=0: No tasks needed to be rescued.
  • ignored=0: No tasks were ignored.

Verifying the Container Creation

Once your playbook finishes running, you’ll want to verify that the containers were actually created. Here’s how to check:

SSH into your server using the sammy user:

$ ssh sammy@your_remote_server_ip

Then, list the Docker containers to see if they were created successfully:

$ sudo docker ps -a

You should see output similar to this:

Output
CONTAINER ID     IMAGE          COMMAND          CREATED         STATUS          PORTS       NAMES
a3fe9bfb89cf     ubuntu         "sleep 1d"       5 minutes ago   Created                     docker4
8799c16cde1e     ubuntu         "sleep 1d"       5 minutes ago   Created                     docker3
ad0c2123b183     ubuntu         "sleep 1d"       5 minutes ago   Created                     docker2
b9350916ffd8     ubuntu         "sleep 1d"       5 minutes ago   Created                     docker1

Each line represents a container, and the names (like docker1, docker2, and so on) are assigned based on the loop in your playbook. The “Created” status means the containers were built from the image but haven’t been started, because the playbook uses state: present, which only creates them. Once a container is started, its sleep 1d command keeps it running for a day.
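If you’d like to see one of them actually running, you can start it by hand while you’re logged in, for example:

$ sudo docker start docker1
$ sudo docker ps

The second command lists only running containers, so docker1 should now appear with an “Up” status.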

Confirmation of Successful Execution

If you see the containers listed like above and there are no failures in your playbook output, that’s your confirmation that everything worked just as expected. The tasks were executed correctly, and your containers are good to go!


Conclusion

In conclusion, automating the installation and setup of Docker on remote Ubuntu 18.04 servers using Ansible offers significant advantages in terms of efficiency and consistency. By creating an Ansible playbook, you can streamline the process of installing Docker, setting up necessary packages, and managing containers across multiple servers. This not only reduces human errors but also ensures that your server configurations remain uniform and easily repeatable. As automation tools like Ansible continue to evolve, mastering these skills will remain essential for IT professionals seeking to enhance operational workflows. In the future, expect even more advanced features from Ansible to make container management even more seamless.
