Automate Docker Setup with Ansible on Ubuntu 22.04

Introduction

Automating the Docker setup with Ansible on Ubuntu 22.04 can save you time and reduce errors across multiple servers. Ansible, a powerful automation tool, simplifies the process of configuring Docker, from installing packages to managing containers. This guide walks you through creating a playbook to streamline your Docker installation, ensuring consistency and efficiency every time you set up new servers. Whether you’re working with multiple machines or just a few, Ansible can help automate repetitive tasks, allowing you to focus on more complex configurations. Let’s dive into how you can use Ansible to automate Docker setup seamlessly on Ubuntu 22.04.

What is Ansible?

Ansible is a tool used to automate tasks like setting up servers and installing software. It helps to ensure that tasks are done consistently and without human error. With Ansible, you can write a simple script to automatically set up servers, install necessary software, and manage containers, making the process quicker and more reliable.

Prerequisites

Alright, before we get into the fun of automating things with Ansible and Docker, let’s go over the key things you’ll need to get started. First on the list is your control node. Think of it as the captain of the ship, calling the shots and making sure everything runs smoothly. Your control node will be an Ubuntu 22.04 machine, where Ansible is installed and ready to do its thing. The big thing to remember here is that your control node must be able to connect to your Ansible hosts using SSH keys—kind of like a secret handshake to keep things secure between the two.

Next up, for this setup to go off without a hitch, your control node needs to have a regular user account, and this user must have sudo permissions. This is important because Ansible needs the ability to run commands with admin rights to make changes to the system. Oh, and don’t forget about security—make sure your firewall is turned on to keep your control node safe. If you’re unsure about setting up the firewall, no worries—just follow the steps in the Initial Server Setup guide.

Once the control node is ready, it’s time to set up one or more Ansible hosts. These are the remote Ubuntu 22.04 servers that your control node will manage. You should have already gone through the automated server setup guide to get these hosts ready for automation.

Before you get all excited to run your playbook, there’s one final check to make sure everything is lined up correctly. You’ll need to verify that your Ansible control node can connect and run commands on your Ansible hosts. If you’re not totally sure the connection is working, you can test it. Just follow Step 3 in the guide on how to install and configure Ansible on Ubuntu 22.04. This step will confirm that everything is communicating properly, which is crucial for making sure your playbook runs smoothly across all your servers.
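If you want a quick way to run that check from the control node, an ad-hoc Ansible ping is a common approach (the `all` group and the `sammy` user shown here are assumptions—substitute your own inventory group and remote user):

```shell
# From the control node: ask every host in the inventory to respond.
# A "pong" reply from each host means SSH and Python are working.
$ ansible all -m ping -u sammy
```

If any host fails to respond, revisit your SSH key setup before moving on to the playbook.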

What Does this Playbook Do?

Imagine you’re at the helm of your server, ready to dive into the world of Docker, but setting everything up manually feels like trying to navigate through a stormy sea. Well, with this Ansible playbook, it’s like you’re building a boat that sails smoothly on its own every time. No more repeating steps or getting tangled up in technical details. Once you’ve set up this playbook, it’s your reliable guide, automating the entire Docker setup process for you. Here’s the best part: every time you run it, it handles all the setup and gets Docker running with containers ready to go.

Let’s walk through exactly what this playbook does:

Install Aptitude

First up, we have Aptitude. Now, you might be thinking, “What’s Aptitude?” It’s basically a better version of the regular apt package manager. Why? Because it handles package management with fewer hiccups and more flexibility. The playbook makes sure Aptitude is installed and up to date, so you don’t have to worry about outdated tools getting in the way.

Install Required System Packages

The playbook doesn’t miss a beat. It installs all the necessary system packages—tools like curl, ca-certificates, and various Python libraries. These are the building blocks for getting Docker and everything it needs up and running. The best part? They’re always kept up to date, so you don’t have to stress about security or compatibility issues.

Install Docker GPG APT Key

Now, we’re adding the Docker GPG key to your system. This is like locking the door before entering a house. It makes sure the Docker packages you’re about to install are from a trusted source—straight from Docker’s official site. Once the key is in place, you can feel confident knowing you’re getting secure, verified software.

Add Docker Repository

Next, the playbook adds Docker’s official repository to your APT sources. This opens the door to the latest version of Docker, ensuring your server gets the freshest, most secure version available. No need to worry about old, outdated versions sneaking in.

Install Docker

Here comes the big moment—the actual Docker installation. This step installs the latest stable version of Docker on your server. Once it’s done, you’re all set to start creating and managing containers like a pro. No more dealing with the hassle of manual installation.

Install Python Docker Module via Pip

This next step is where things get really exciting. The playbook installs the Python Docker module using pip, which means you can now interact with Docker through Python scripts. This is a game-changer because it allows you to automate container management with just a few lines of code. It saves you time and effort, making the whole process smoother.

Pull Default Docker Image

Now that Docker is set up, the playbook pulls the default Docker image from Docker Hub using the image you set in the default_container_image variable. Want to change it up? No problem. You can easily swap the image if you need something different for your project, giving you full flexibility.

Create Containers

With the image downloaded, the playbook moves on to creating containers. The number of containers is determined by the container_count variable, and each container is set up according to your specifications. It will run the command you set in the default_container_command , ensuring each container does exactly what you want it to do.

Once the playbook is done, you’ll have Docker containers running on your Ansible hosts, each created exactly how you want them. These containers will follow the rules you’ve set in the playbook, so every time you run it, you get the same, reliable setup. Whether you’re running it on one server or multiple, the playbook ensures everything is consistent and efficient.

Ready to get started? All you need to do is log into your Ansible control node with a user who has sudo privileges, and then run the playbook. Before you know it, your Docker setup will be automated, making managing containers a breeze every time you need it.

For more details, refer to the Ansible Overview and Usage.

Step 1 — Preparing your Playbook

Alright, let’s get started. Imagine you’re in charge of a whole fleet of servers, ready to get them all running smoothly without having to manually do everything yourself. That’s where Ansible comes in, and it’s time to create your playbook.yml file. Think of the playbook as your blueprint, where you lay out all the tasks that Ansible will carry out to get everything set up. In Ansible, a task is like a single action, a small step towards reaching your bigger goal. These tasks will automatically get your servers into the configuration you want.

To start, open your favorite text editor—whether it’s something simple like nano or a more advanced editor—and create a new file called playbook.yml. Here’s the command you can use to open it in nano:

$ nano playbook.yml

This opens up a blank page—an empty YAML file where you can start working your magic. Now, before diving into the specifics of tasks, let’s set up a few basic declarations. Think of these as the building blocks of your playbook.

Here’s what you’ll start with:


- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d

Let’s break it down so you know exactly what each part does:

  • hosts: This line tells Ansible which servers the playbook will target. Setting it to all means the playbook will run on all the servers listed in your Ansible inventory file. But, if you only want it to run on a specific server or a group of servers, you can change this to suit your needs.
  • become: This tells Ansible to use sudo (or root privileges) to run the commands. This is important because many tasks you’re automating (like installing Docker, for example) need administrative permissions. By setting this to true, you’re basically telling Ansible, “Go ahead and run these commands as root!”
  • vars: Here’s where you define variables that can be reused throughout your playbook. Variables are super handy because, instead of changing the value every time in each task, you just change it at the top, and it’ll automatically apply wherever it’s used. Let’s take a look at the variables we’ve defined here:
  • container_count: This is the number of Docker containers you want to create. By default, it’s set to 4, but if you need more, just change this number to what you need.
  • default_container_name: This is the name for your containers. The default is set to “docker,” but feel free to call them whatever you like for your project.
  • default_container_image: The base Docker image for your containers. By default, it’s set to “ubuntu,” but you can easily change this to another image from Docker Hub—whether you need a Node.js image or a Python environment, it’s all customizable.
  • default_container_command: This is the command that will run inside each container once it’s created. The default is sleep 1d, which keeps the container idling for one day. You can change this to anything you want, like starting a web server or running a background task.

Before you add more tasks, here’s a handy tip: YAML files are super picky about indentation. If your indentation’s off, the playbook won’t run as expected. Always use two spaces per indentation level. Keep an eye on that, and you should be good to go!

And there you have it! You’ve set the stage by defining your hosts, variables, and making sure Ansible can run tasks with sudo privileges. Now, you’re ready to move on to the next steps—adding the tasks that will automate everything for you!

Step 2 — Adding Packages Installation Tasks to your Playbook

Now that we’ve got the basic structure of your playbook set up, it’s time to get into the nitty-gritty and add the essential tasks that will ensure everything is set up properly. Here’s the thing: in Ansible, tasks are executed one by one, like a well-organized assembly line. The playbook will go through the tasks from top to bottom, making sure each one finishes before the next one starts. This is important, especially when one task depends on the previous one being completed first.

For example, installing Docker or Python needs certain packages to be installed beforehand. If you skip a step, things can get a bit messy, and that’s why task order is so important. The best part is that the tasks in this playbook can be reused in different projects, making your automation process more efficient and flexible.

Let’s start by looking at two key tasks—installing aptitude and the required system packages for setting up Docker.

First Up, Install Aptitude:

You’ll begin by installing aptitude, which is a powerful tool for managing packages in Linux. Now, you might be wondering, “Why not just stick with the default apt?” Well, here’s why: Aptitude is preferred by Ansible because it handles package dependencies in a more flexible and automated way. The playbook will make sure to install the latest version of aptitude and update the package cache, ensuring it pulls the most current data about available packages.

Here’s how you would set this up in your playbook:


tasks:
  - name: Install aptitude
    apt:
      name: aptitude
      state: latest
      update_cache: true

This tells Ansible to install aptitude and make sure it’s always up-to-date. The update_cache: true part ensures that aptitude gets the most recent info about available packages.

Now, Install the Required System Packages:

The next task is to install all the important system packages that Docker and Python need to run smoothly. These include some important packages:

  • apt-transport-https: This allows apt to fetch packages over HTTPS, which is crucial for secure installations, especially when dealing with repositories.
  • ca-certificates: Ensures your system can verify SSL connections, so you can trust the packages you’re downloading over HTTPS.
  • curl: A handy tool for transferring data from one system to another, often used for downloading files or repositories.
  • software-properties-common: This tool helps you manage software repositories and package sources, giving you more flexibility in handling installations.
  • python3-pip: The package manager for Python. You’ll need it to install libraries required for container management.
  • virtualenv: A tool for creating isolated Python environments, making sure your projects don’t conflict with each other.
  • python3-setuptools: A library for package development and distribution in Python, essential for working with Python packages.

Here’s the code to define this step in the playbook:


- name: Install required system packages
  apt:
    pkg:
      - apt-transport-https
      - ca-certificates
      - curl
      - software-properties-common
      - python3-pip
      - virtualenv
      - python3-setuptools
    state: latest
    update_cache: true

This will make sure that Ansible installs these packages automatically in their latest versions, ensuring you have everything you need for Docker and Python to run without any issues.

Why Aptitude Over apt?

Let’s pause and talk a bit more about aptitude. While you could use the default apt package manager, aptitude is often the preferred choice because it handles complex dependencies better and offers a friendlier interface. Think of it as reaching for a power tool instead of a hand tool: both get the job done, but one handles the tricky cases with less effort. And if aptitude is not available on your server, Ansible will simply fall back to using apt, so you’re covered either way.

Benefits of Automation:

By defining these tasks in your playbook, you’re setting up an automated process that works in the background. You won’t have to manually install each package or worry about keeping track of the latest versions. The playbook will handle everything for you, saving you time and reducing the chance of errors. Plus, the state: latest option ensures that each package is installed in its most up-to-date version, keeping your environment secure and efficient.

The beauty of this setup is that you can also customize your playbook later. For example, if you need Node.js or Java for your project, all you need to do is add them to the list of packages in the playbook.

With Ansible running the show, installing and managing system packages becomes super easy—just one more reason why automation is such a game changer!

Note: You can customize the packages according to your project requirements by adding them to the playbook.


Step 3 — Adding Docker Installation Tasks to your Playbook

Now that we’ve got the basics down, it’s time to dive into the exciting part—installing Docker. If you’ve been thinking about automating your Docker setup, this is where the magic really happens. In this step, you’ll be adding tasks to your Ansible playbook that will automatically install Docker on your server. No more clicking through terminals or worrying about outdated versions. This playbook makes sure every server gets the latest Docker features and security patches, all without you having to do anything.

Let’s break it down step by step:

First Task: Add the Docker GPG Key

Before we even start thinking about installing Docker, we need to make sure we’re getting the real deal—verified, authentic Docker packages. So, the first thing the playbook will do is add the Docker GPG key to your system. Think of this key as a fingerprint—it helps verify the integrity of the Docker packages and ensures they haven’t been tampered with. This is a key security measure that guarantees you’re getting official, safe Docker software, and not something that could cause trouble later on.

Here’s how you’ll set that up in your playbook:


- name: Add Docker GPG apt Key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

The state: present ensures the key is added to your system, preventing any issues when Docker installs. Think of it as the first step in securing the entire process.
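One caveat worth knowing: the apt_key module is deprecated in newer Ansible releases, because the underlying apt-key tool is deprecated on Debian and Ubuntu. The tasks below sketch the keyring-file alternative; the destination path and task names are illustrative assumptions, not part of this guide’s playbook:

```yaml
- name: Ensure the APT keyring directory exists
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    mode: "0755"

- name: Download Docker's GPG key to a keyring file
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"
```

If you go this route, the repository entry in the next task would also need a signed-by=/etc/apt/keyrings/docker.asc option so APT knows which key verifies it.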

Next, Add the Docker Repository

Once we’ve added the key, it’s time to tell Ubuntu where to find the latest Docker packages. We do this by adding the official Docker repository to your APT sources list. This repository is like a catalog where the freshest Docker versions and related tools are stored, ready for installation. Adding this repository means your system can automatically access the newest and most stable Docker releases directly from Docker’s official server.

Here’s the code that takes care of that:


- name: Add Docker Repository
  apt_repository:
    repo: deb https://download.docker.com/linux/ubuntu jammy stable
    state: present

Once this task is executed, your system will be connected to Docker’s official repository, and you won’t have to worry about outdated packages sneaking in.

Update APT and Install Docker

Now, let’s get to the fun part—actually installing Docker. But before we do that, we need to make sure APT knows about the new repository we just added. So, the playbook runs an APT update to refresh the list of available packages. After that, it installs Docker Community Edition (also known as docker-ce). This is the latest stable version of Docker, so your containers will be running the best and most secure version available.

Here’s how this is written in your playbook:


- name: Update apt and install docker-ce
  apt:
    name: docker-ce
    state: latest
    update_cache: true

With state: latest, we’re making sure you always get the most up-to-date version of Docker. No worries about outdated tools here!

Finally, Install the Docker Module for Python

Docker is awesome on its own, but let’s take it a step further. If you want to control Docker through Python scripts, you’ll need the Python Docker module. This module allows you to interact with Docker from within your Python code, which opens up a whole new world of automation. Whether you’re managing containers, pulling images, or handling Docker networks, this Python module is your ticket to more programmatically controlled Docker management.

Here’s how you add it to your playbook:


- name: Install Docker Module for Python
  pip:
    name: docker

Once this task runs, your system will be ready to automate container management using Python. This is perfect for anyone who wants to go beyond the basics and manage Docker with custom scripts.
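As a taste of what that unlocks, here is a minimal Python sketch (separate from the playbook) that lists containers via the SDK. It assumes the docker pip package is installed and a Docker daemon is reachable on the machine where it runs; the availability check is just defensive scaffolding:

```python
import importlib.util


def docker_sdk_available() -> bool:
    """Return True if the Docker SDK for Python (the "docker" pip package) is importable."""
    return importlib.util.find_spec("docker") is not None


if docker_sdk_available():
    import docker

    try:
        client = docker.from_env()  # connects to the local Docker daemon
        for container in client.containers.list(all=True):
            print(container.name, container.status)
    except docker.errors.DockerException as exc:
        print(f"Docker daemon not reachable: {exc}")
else:
    print("Docker SDK not installed; install it with: pip install docker")
```

Run it on an Ansible host after the playbook completes and you should see the containers the playbook created.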

Putting it All Together

By the time this playbook finishes running, you’ll have Docker installed on your server, set up properly, and running the latest version. You’ve also added the Python Docker module, so now you can manage containers programmatically. This is the kind of automation that makes life so much easier for sysadmins—it’s reliable, repeatable, and makes sure everything stays consistent across your servers.

And here’s the magic part: you don’t need to do this manually every time. Once you’ve set up your playbook, you can run it on as many servers as you like. Sit back, relax, and let Ansible do all the heavy lifting for you.

Step 4 — Adding Docker Image and Container Tasks to your Playbook

Now that everything is set up, let’s bring the power of Docker to life. In this step, we’re going to focus on creating your Docker containers. You’ve already got the playbook ready, and now we’ll pull the Docker image you want to use and start spinning up containers.

Here’s the deal: Docker images are like blueprints for containers. Normally, Docker pulls its images from the official Docker Hub repository, but if you want to pull from another repository or use a custom image, that’s totally possible. The great thing about Ansible is that it lets you customize all of this, giving you flexibility while still keeping things automated.

Once the image is pulled, Ansible will create containers based on the settings you’ve already defined in your playbook. It’s like saying, “Here’s what I want, now go make it happen.”

Here’s the code you’ll add to your playbook:


tasks:
  - name: Pull default Docker image
    community.docker.docker_image:
      name: "{{ default_container_image }}"
      source: pull
  - name: Create default containers
    community.docker.docker_container:
      name: "{{ default_container_name }}{{ item }}"
      image: "{{ default_container_image }}"
      command: "{{ default_container_command }}"
      state: present
    with_sequence: count={{ container_count }}

Let’s break this down so you can see how it works:

Task 1: Pull the Default Docker Image

The first task pulls the Docker image you specified with the default_container_image variable. This image will serve as the base for the containers you’ll create. Normally, this image comes from Docker Hub, but you can set it to pull from any other repository or registry you want.

Here’s how that’s done:


- name: Pull default Docker image
  community.docker.docker_image:
    name: "{{ default_container_image }}"
    source: pull

This task ensures that the image gets downloaded to your system and is ready for the next step—creating containers. Ansible takes care of all the technical work behind the scenes, so you don’t have to worry about pulling the image manually or dealing with any errors.

Task 2: Create Default Containers

Now that the image is on your system, it’s time to create the actual Docker containers. The docker_container module in Ansible is like the chef in the kitchen—it takes the ingredients (the image) and follows the recipe (your settings) to create the finished product (the container).

Here’s how it works: the playbook uses the with_sequence directive to create as many containers as you’ve specified with the container_count variable. So, if you set container_count to 4, it will create four containers. Each container gets a unique name by combining the default_container_name with the loop number (e.g., docker1, docker2, etc.).

Here’s the code for this task:


- name: Create default containers
  community.docker.docker_container:
    name: "{{ default_container_name }}{{ item }}"
    image: "{{ default_container_image }}"
    command: "{{ default_container_command }}"
    state: present
  with_sequence: count={{ container_count }}

Each container is given a name based on the loop’s iteration number. The command field runs the command you’ve set in the default_container_command variable inside each container. By default, the playbook runs sleep 1d, which keeps the container idling for one day, but you can change that to do something more useful, like running a web server or launching a service.

Additional Notes

The loop ( with_sequence ) is where the magic happens. It lets you define how many containers to create by adjusting the container_count variable. Want 10 containers instead of 4? No problem—just change the number in the variable, and the loop will automatically adjust.
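If it helps to see the naming concretely, this small Python sketch (not part of the playbook) mirrors what with_sequence does—counting from 1 up to container_count and appending each number to the name prefix:

```python
# Mirror the with_sequence loop from the playbook: with the default
# variables, it yields the container names docker1 through docker4.
container_count = 4
default_container_name = "docker"

container_names = [
    f"{default_container_name}{item}"
    for item in range(1, container_count + 1)
]
print(container_names)  # ['docker1', 'docker2', 'docker3', 'docker4']
```

Bump container_count to 10 and the same loop produces docker1 through docker10—no other changes needed.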

This task only creates containers that don’t already exist. If you want to update a container (like changing its image or tweaking some settings), you can adjust the task to handle that.
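For instance, if you change default_container_image later and want existing containers rebuilt from the new image, one possible tweak is the module’s recreate option (shown here as a sketch—verify it against the community.docker.docker_container documentation for your version):

```yaml
- name: Create default containers (recreating any that already exist)
  community.docker.docker_container:
    name: "{{ default_container_name }}{{ item }}"
    image: "{{ default_container_image }}"
    command: "{{ default_container_command }}"
    state: present
    recreate: true   # replace an existing container with this name
  with_sequence: count={{ container_count }}
```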

By the end of this step, you’ll have Docker containers up and running, each one configured automatically and consistently. This is where the Ansible playbook really shines. You can create multiple containers across multiple servers with just a single command.

Why This Matters

The ability to automatically create and configure containers across your infrastructure is a huge win. Whether you’re scaling up to handle more traffic, testing a new version of your application, or making sure every server is set up the same way, this playbook guarantees consistency and reliability. Plus, it makes everything repeatable, so you don’t have to do the same work over and over again. You can just adjust a few settings and hit “go.”

This automation also makes managing your containers much easier. Whether you’re running a small team or handling a massive environment, this playbook can grow with you. Just tweak a few parameters, and you’re ready to roll.

For more information, check out the Automating Docker Containers Using Ansible webinar.

Step 5 — Reviewing your Complete Playbook

By now, you’ve put together a solid playbook to automate the setup and management of Docker containers. You’ve added several steps and tasks, but let’s take a moment to step back and look at the full picture. At this stage, your playbook should look something like this—though it might have a few small tweaks depending on your project.

Here’s an example of what your playbook might look like once it’s all set up:


- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d
  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: true
    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true
    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu jammy stable
        state: present
    - name: Update apt and install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true
    - name: Install Docker Module for Python
      pip:
        name: docker
    - name: Pull default Docker image
      community.docker.docker_image:
        name: "{{ default_container_image }}"
        source: pull
    - name: Create default containers
      community.docker.docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}

Breaking It Down:

At this point, your playbook is doing some really powerful stuff. Let’s dive into each section so you can make sure everything is working properly:

  • hosts: all – This line tells Ansible to apply the playbook to every server in your inventory. If you only want to target a specific group or just one server, you can change this line. It’s flexible, so you’re in control of where it runs.
  • become: true – This is where Ansible gets its “superpowers.” By setting it to true, you’re telling Ansible to run all the tasks with sudo privileges. This is necessary for installing software and making system-level changes, like installing Docker. Without it, no Docker for you!

vars: Container Settings

This section defines key variables that make your playbook super flexible:

  • container_count: The number of containers you want to create. By default, it’s set to 4, but feel free to change this based on your needs.
  • default_container_name: The name of your containers. The default is “docker,” but you can change this to whatever you want.
  • default_container_image: The Docker image for your containers. By default, it’s set to “ubuntu,” but you can swap it out for any other image you need.
  • default_container_command: The command that will run inside each container. By default, it’s set to sleep 1d, which just keeps the container idling for one day. But you can change this to whatever command you need, like starting a web server or running an app.

tasks: What Ansible Will Do

Now, let’s look at all the steps Ansible will take to set up Docker on your servers:

  • Install Aptitude: This installs Aptitude, a package manager that Ansible prefers over apt. It’s better at handling dependencies and is a bit more user-friendly.
  • Install Required System Packages: This installs all the necessary packages like curl, apt-transport-https, and python3-pip. These are needed to run Docker and interact with it via Python.
  • Add Docker GPG Key: This task adds the Docker GPG key to your system, which ensures that the packages you’re installing come from a trusted source. It’s like locking the front door to make sure no one sneaks in any bad packages.
  • Add Docker Repository: This adds Docker’s official repository to your system’s list of sources, so your server can access the latest Docker versions and tools directly from Docker’s own servers.
  • Update apt and Install Docker: This updates the package list and installs the latest version of Docker CE (Community Edition). With the state: latest, you’re always getting the most up-to-date version.
  • Install Docker Module for Python: This step installs the Python Docker module using pip, so you can control Docker from your Python scripts.
  • Pull Default Docker Image: This pulls the Docker image you specified earlier from Docker Hub, ensuring that the right version is available for container creation.
  • Create Default Containers: The final step creates your containers. It uses a loop to create as many containers as you’ve specified with container_count. Each container is given a name based on the loop iteration (like docker1, docker2, etc.), and runs the command you set in default_container_command.

A Few Final Notes: Customizing the Playbook

Feel free to adjust this playbook to suit your needs. Want to push images to Docker Hub instead of pulling them? You can do that with the docker_image module. Need to set up more advanced container features, like networking or storage? Ansible has modules for that, too! This playbook is flexible, so you can mold it to fit almost any Docker automation task.
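As a sketch of the push case, the docker_image module can take a locally available image and push it to a registry. The repository name below is a placeholder—replace it with your own Docker Hub namespace—and the task assumes the host is already authenticated to the registry (for example, via the community.docker.docker_login module):

```yaml
- name: Tag and push a local image to Docker Hub
  community.docker.docker_image:
    name: "{{ default_container_image }}"
    repository: your_dockerhub_user/your_image_name   # placeholder
    push: true
    source: local   # use the image already present on the host
```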

YAML Indentation

One thing to keep in mind with YAML is its picky nature when it comes to indentation. It’s like trying to fold a fitted sheet—it has to be just right. If you run into any errors, double-check your indentation. Use two spaces per level, and you’ll be good to go.

Now that you’ve made sure everything looks good, you’re ready to save your playbook and let Ansible do the heavy lifting. With just a single command, you’ll automate your entire Docker setup process, and you’ll be able to create containers with ease, every time.

Make sure your indentation is consistent in YAML files to avoid errors!

Step 6 — Running your Playbook

Alright, you’ve made it this far, and your playbook is ready to go! Now it’s time to see all your hard work in action. The first step? Running your Ansible playbook on one or more of your servers. By default, Ansible is set to run on every server in your inventory, but sometimes, you may just want to target a specific server. No problem! In this case, we’ll run the playbook on server1. The best part about this is that you can connect as a specific user, like sammy, ensuring you have the right permissions for the task at hand.

Here’s the command to run it:

$ ansible-playbook playbook.yml -l server1 -u sammy

Let’s break down what this command does:

  • -l server1 : This flag tells Ansible to only run the playbook on server1. If you wanted to run it on a different server or a group of servers, you could change this. But for now, we’re focused on just one server.
  • -u sammy : This flag tells Ansible which user to log in as when connecting to the server. You need to make sure that the user has the right privileges to run tasks (usually, that means sudo access). In this case, sammy is the user we’re using.
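For reference, the values you pass to -l and -u correspond to entries in your Ansible inventory file. A minimal inventory sketch might look like this (the IP address is a placeholder):

```ini
[servers]
server1 ansible_host=203.0.113.10

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```

With this inventory, `-l server1` matches the host entry by name, and you could target the whole `[servers]` group instead by dropping the -l flag entirely.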

Now, let’s talk about what happens once the playbook starts running.

Expected Output

When everything runs smoothly, you’ll see output in your terminal like this:

Output
...
TASK [Create default containers] *****************************************************************************************************************
changed: [server1] => (item=1)
changed: [server1] => (item=2)
changed: [server1] => (item=3)
changed: [server1] => (item=4)
PLAY RECAP ***************************************************************************************************************************************
server1              : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Let’s break that down:

  • ok=9 : This means 9 tasks completed without issues (a task that makes a change also counts as ok). Everything ran smoothly.
  • changed=8 : Eight of those tasks actually modified the server's state. So, it’s good news—stuff has been configured as needed.
  • unreachable=0 : No issues with connecting to the target server. The connection was solid.
  • failed=0 : This is exactly what you want to see—no failures. If there had been any errors, they’d be listed here, but everything ran fine.
  • skipped=0 : No tasks were skipped. Every step in the playbook was executed.
  • rescued=0 and ignored=0 : These show that no special handling or ignored tasks were needed. It was a clean, smooth run.

Verifying Container Creation

After the playbook finishes, it’s time to double-check that everything was created as expected. To do that, log into the remote server and check the containers you just set up.

Log in to the server: Use SSH to log into the server where the playbook ran. Replace your_remote_server_ip with the actual IP address of the server:

$ ssh sammy@your_remote_server_ip

Check the containers: Once logged in, list all the Docker containers on the server by running this command:

$ sudo docker ps -a

The output should show all your containers, and you should see something like this:

Output
CONTAINER ID   IMAGE    COMMAND     CREATED       STATUS     PORTS    NAMES
a3fe9bfb89cf   ubuntu   "sleep 1d"  5 minutes ago Created             docker4
8799c16cde1e   ubuntu   "sleep 1d"  5 minutes ago Created             docker3
ad0c2123b183   ubuntu   "sleep 1d"  5 minutes ago Created             docker2
b9350916ffd8   ubuntu   "sleep 1d"  5 minutes ago Created             docker1

Each container will have its CONTAINER ID , IMAGE (e.g., ubuntu), the COMMAND (which is sleep 1d for now), and NAMES (like docker1, docker2, etc.). Note that the STATUS column reads Created rather than Up—the playbook creates the containers without starting them. This confirms that everything worked and those containers were successfully created.

What Does All This Mean?

If you see your containers listed on the server, it means Docker is up and running, and the playbook executed successfully. It’s all automated now, thanks to Ansible. You don’t have to manually install Docker or set up containers anymore. That’s all taken care of by your playbook, and you can repeat it as many times as you want across any number of servers.

Now that your containers are up and running, you can configure them or run applications inside them. The possibilities are endless, and the best part? You just saved a ton of time with automation.

So, what’s next? Well, now that you’ve automated the Docker setup and container creation, you can keep building on it. Need more containers? Just change the count. Need a new image? Adjust the settings. Ansible will handle the rest. You’re all set to scale, automate, and manage Docker environments like a pro.
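Scaling up is usually just a variable change. Assuming the vars block defined earlier in the playbook, bumping the count and swapping the image might look like this:

```yaml
vars:
  container_count: 8                    # was 4; the loop now creates docker1..docker8
  default_container_name: docker
  default_container_image: debian       # swapped from ubuntu as an example
  default_container_command: sleep 1d
```

Re-running the playbook after an edit like this is safe: tasks that are already satisfied report ok instead of changed, and only the new containers get created.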

Conclusion

Automating Docker setup with Ansible on Ubuntu 22.04 offers significant time-saving benefits and ensures consistency across multiple servers. By creating an efficient playbook, you can streamline the entire installation process, from installing necessary packages to managing Docker containers. Not only does this reduce human error, but it also enhances scalability and reliability, particularly for repetitive tasks. As automation continues to shape the IT landscape, mastering tools like Ansible will be essential for efficient server management. Whether you’re setting up Docker for the first time or scaling your infrastructure, Ansible remains a powerful tool to simplify and speed up the process. Keep exploring Ansible’s capabilities, as its integration with other technologies will continue to evolve, improving your workflow even further.
