
Master Docker Installation and Management on Ubuntu 20.04
Introduction
Installing and managing Docker on Ubuntu 20.04 can significantly streamline your development and containerization workflow. In this tutorial, we’ll guide you through the process of setting up Docker, including Docker Compose for multi-container management and Docker Desktop for a GUI-based development environment. Whether you’re a beginner or looking to optimize your setup, you’ll learn how to execute Docker commands, manage containers, and work with Docker images on Ubuntu. We’ll also explore optional configurations, like running Docker without sudo, and provide troubleshooting tips for a smooth experience.
What is Docker?
Docker is a platform that allows you to easily manage and run applications in isolated environments called containers. Containers are lightweight, portable, and use fewer resources compared to traditional virtual machines. With Docker, you can package applications and their dependencies, ensuring they run consistently across different systems. It simplifies the process of setting up, running, and sharing applications by using containerization technology.
Step 1 — Installing Docker
Let me tell you, installing Docker on Ubuntu can sometimes feel like a bit of a puzzle, especially if you want to make sure you’re using the most up-to-date version. The version that comes with Ubuntu’s official repository isn’t always the newest one, which means you might miss out on some cool features, updates, and security fixes. To get the freshest version of Docker, it’s always a good idea to install it directly from Docker’s official repository. That way, you won’t miss anything!
First things first, let’s make sure everything on your system is up to date and ready to go. Open your terminal and run this command:
$ sudo apt update
This will update your list of available packages. Next, you’ll need to install a few helper packages. These are like the backstage crew making sure everything runs smoothly, allowing Ubuntu’s package manager (apt) to fetch and install packages securely over HTTPS. To install them, run this:
$ sudo apt install apt-transport-https ca-certificates curl software-properties-common
Next up, we need to add the GPG key for the Docker repository. Think of this key like a seal of approval that ensures anything you download from Docker is authentic and safe. You can add it by running:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Alright, now let’s tell your system where to find Docker’s packages. You’ll add Docker’s official repository to your APT sources with this command:
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
Once you add the repository, your package database will automatically refresh to include Docker’s packages. But before we move forward, let’s double-check that we’re pulling Docker from the right source—the official Docker repository, and not from the default Ubuntu one. To do this, run:
$ apt-cache policy docker-ce
You should see something like this (the version number might be different based on the latest release):
docker-ce:
Installed: (none)
Candidate: 5:19.03.9~3-0~ubuntu-focal
Version table:
5:19.03.9~3-0~ubuntu-focal 500
500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
Notice that Docker CE (the Docker Community Edition) isn’t installed yet, but the version ready to install is coming from Docker’s official repository, confirming that everything’s set up correctly.
Now, let’s finish up the installation. Run the final command to install Docker:
$ sudo apt install docker-ce
Once the installation is done, Docker should be installed and ready to go. The Docker daemon—the part that runs your containers—will start automatically and be set up to launch every time your system starts. To make sure everything is running smoothly, check Docker’s status by running:
$ sudo systemctl status docker
You should see output like this, confirming that Docker is up and running:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-05-19 17:00:41 UTC; 17s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 24321 (dockerd)
Tasks: 8
Memory: 46.4M
CGroup: /system.slice/docker.service
└─24321 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
From here, you can see that Docker is up and running. Also, alongside the Docker service, you’ll have the Docker command-line tool (the Docker client) installed, which you’ll use to manage containers, images, and all the cool Docker stuff you’ll be doing.
Now, stay tuned! In the next part of this tutorial, we’ll jump into how to use the Docker command to start working with containers and images. Let’s get ready for the next step in your Docker journey!
Docker Installation Guide for Ubuntu
Step 2 — Executing the Docker Command Without Sudo (Optional)
Here’s the thing about Docker: by default, it needs a bit of extra power to do its thing. The Docker command needs administrative privileges, so it can only be run by the root user or someone who’s part of the docker group. This docker group isn’t just some random thing—it’s created automatically when you install Docker to help manage permissions and ensure that only the right people can interact with Docker containers.
So, if you try running a Docker command and you’re not in that group, or if you forget to use sudo , you’re probably going to get an error message like this:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host? See 'docker run --help'.
That’s because, without those elevated privileges, Docker can’t talk to the Docker daemon, which is the engine that actually runs the containers. But don’t worry—there’s an easy fix for that.
If you don’t want to type sudo every time you use Docker, you can just add yourself to the docker group. This lets you run Docker commands without needing to prepend sudo every time, which, let’s be honest, is pretty convenient.
To add yourself to the group, run this command:
sudo usermod -aG docker ${USER}
This command updates your user’s group membership and adds you to the docker group. After running it, you’ll need to either log out and log back in, or, if you’re in a hurry, you can apply the change immediately by running this:
su - ${USER}
Once you do that, you’ll be prompted to enter your password to confirm the change. After everything’s set up, you can check if it worked by running:
groups
If everything went well, you should see something like this, confirming that you’re now part of the docker group:
sammy sudo docker
Now, if you want to add someone else to the docker group (say, you’re helping a buddy out), you can run the same command but specify their username like this:
sudo usermod -aG docker username
And just like that, they’re in the club. From now on, you can run Docker commands without worrying about using sudo .
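If you want a quick sanity check that the group change actually took effect, try a harmless read-only command without sudo, for example:
$ docker ps
If it prints a (possibly empty) container list instead of a permission error, you’re all set.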
This tutorial assumes you’re running Docker commands as a user in the docker group, but hey, if you don’t want to join the group (totally your call), you can always just keep using sudo with each Docker command.
Now that we’ve got that sorted, let’s dive into how to actually use the Docker command to do all the cool things Docker can do!
For more detailed instructions, check out the How to Install and Use Docker on Ubuntu 20.04 guide.
Step 3 — Using the Docker Command
Using Docker is a bit like being the captain of a ship—you’re the one in charge, giving the directions and telling it where to go. The way you communicate with Docker is through its command-line interface (CLI), and that means passing a series of options, commands, and arguments. It’s pretty simple once you get the hang of it. Here’s the general structure for the commands you’ll use:
docker [option] [command] [arguments]
But you might be thinking, “Where do I even start?” Well, if you want to see all the cool commands Docker has to offer, you can just type:
docker
This will give you a whole list of available subcommands, and trust me, you’ll use these a lot. Here’s a sneak peek at some of the most important ones you’ll want to get familiar with:
- attach : Need to connect to a running container? This one’s your go-to for attaching local input, output, and error streams.
- build : This command lets you turn a Dockerfile into an image.
- commit : Think of this like “save as”—it lets you create a new image based on the changes you made to a running container.
- cp : This command helps you copy files or folders between a container and your local system. Great for backup or sharing data.
- create : If you want to create a new container, this is the one you need.
- exec : This command lets you run a command inside a running container. It’s perfect when you need to interact with a container that’s already running.
- history : Curious about the history of an image? This one shows you the version history of a specific image.
- images : You’ll use this to list all the Docker images you have on your system. It’s like your personal image gallery.
- inspect : Want to get into the details of a Docker object? This tool gives you low-level information on containers, images, and more.
- logs : If something’s gone wrong inside a container and you’re trying to figure out what, this command fetches the logs, which can be super helpful for debugging.
And there’s so much more where that came from! Docker’s list of commands is huge, and the best way to get familiar with them is by using them. But don’t worry, we’ll take it one step at a time.
Here’s a more detailed look at a few other commands:
- run : This is the command you’ll use when you want to create a container and run it right away. It’s like saying, “Create and go!”
- start : If you’ve stopped a container and want to bring it back to life, use docker start .
- stop : On the flip side, if you need to stop a running container, use this command.
- pull : Want to download an image from a registry like Docker Hub? That’s what this command is for.
- push : After you’ve made changes, you might want to share your image. This one lets you push your image to a Docker registry for others to use.
- stats : Curious about how much memory or CPU your container is using? This command will give you a live stream of resource usage.
- rm : Finished with a container? Use this to remove it from your system.
There are plenty of other commands, but you get the idea: Docker has a command for nearly everything.
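To see how a few of these fit together, here’s a quick sketch of a container’s lifecycle. The container name test and the sleep 300 command are just placeholders to keep a container alive long enough to inspect it:
$ docker pull ubuntu
$ docker run -d --name test ubuntu sleep 300
$ docker stats --no-stream test
$ docker stop test
$ docker rm test
The -d flag runs the container in the background, so the subsequent commands can inspect, stop, and remove it by name.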
Now, you might be wondering, “How do I learn more about each specific command?” Well, it’s simple! Just type:
docker docker-subcommand --help
This will give you all the options and details for the specific subcommand you want to use. Plus, for a general look at your Docker setup—like the version you’re running or the configuration—just run:
docker info
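To make the --help pattern concrete, here’s one example, asking for the stop subcommand’s options:
$ docker stop --help
This prints the usage line and every flag docker stop accepts.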
In the next parts of this guide, we’ll go deeper into some of these key commands. But don’t worry, I’ll walk you through the most important ones step by step. By the end, you’ll feel like a Docker pro and know exactly how to manage Docker containers, images, and all the cool things Docker Compose can do. Stay tuned!
For a deeper understanding of Docker, you can check out more resources on What is a container?.
Step 4 — Working with Docker Images
Let me take you on a journey into the world of Docker images, which, in a way, are like the blueprints of the applications running inside containers. Imagine Docker images as the carefully designed floor plans, and the containers as the buildings that get constructed from them. Now, Docker doesn’t just create these blueprints from scratch—it gets them from Docker Hub, the public registry where all the magic happens. Docker Hub is like the ultimate warehouse for Docker images, full of pre-built containers waiting for you to bring them to life. And the best part? Anyone—yes, anyone—can upload their Docker images here. So, you’re not just stuck with the basics; you get access to a wide range of applications, operating systems, and development environments, all in one place. Pretty cool, right?
Now, let’s put your Docker installation to the test. How do we know it’s working? Simple—let’s try this little trick. Run this command:
$ docker run hello-world
What happens next will confirm that Docker is up and running smoothly. Here’s what you’ll see:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:6a65f928fb91fcfbc963f7aa6d57c8eeb426ad9a20c7ee045538ef34847f44f1
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
What’s going on here? Well, Docker couldn’t find the hello-world image on your system, so it automatically reached out to Docker Hub, grabbed the image, and ran it. And voila, you get the message: “Hello from Docker!” If you see that, it means your installation is working just fine.
But let’s say you want to get even more adventurous and explore other Docker images on Docker Hub. You don’t have to search manually—use the power of the docker search command to find exactly what you’re looking for. Try this:
$ docker search ubuntu
You’ll get a list of available images like this:
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 10908 [OK]
dorowu/ubuntu-desktop-lxde-vnc Docker image to provide HTML5 VNC interface … 428 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 244 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with "headless" VNC session… 218 [OK]
Here’s a fun fact: The column labeled OFFICIAL tells you whether Docker itself supports the image (marked with [OK]). Official images are generally more reliable, updated, and well-maintained, which is why they’re usually your safest bet.
Once you’ve found the image you like—let’s say you’ve decided on the official Ubuntu image—you can download it using the docker pull command. It’s as simple as:
$ docker pull ubuntu
What happens next? Docker will pull the latest version of Ubuntu from Docker Hub and download it to your system. Here’s what the output might look like:
Using default tag: latest
latest: Pulling from library/ubuntu
d51af753c3d3: Pull complete
fc878cd0a91c: Pull complete
6154df8ff988: Pull complete
fee5db0ff82f: Pull complete
Digest: sha256:747d2dbbaaee995098c9792d99bd333c6783ce56150d1b11e333bbceed5c54d7
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
That’s Docker working its magic, downloading the official Ubuntu image to your local machine. Now, you’re ready to run it! To see all the images you’ve downloaded, just type:
$ docker images
This will show you a list of all your images, complete with details like the image ID, size, and when it was created. Here’s an example of what you might see:
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 1d622ef86b13 3 weeks ago 73.9MB
hello-world latest bf756fb1ae65 4 months ago 13.3kB
What happens next is exciting: you can now run a container based on the images you’ve downloaded. Docker will first check if it already has the image locally. If not, it’ll fetch it for you automatically.
And here’s a neat little trick: As you work with Docker, you’ll often modify your containers, adding new software or making tweaks. If you want to keep those changes, you can create a new image from your container, using the changes you’ve made. This new image can then be shared with others by pushing it to Docker Hub, so they can use it too!
In the next parts of this guide, we’ll dive deeper into managing containers, modifying images, and sharing your creations with others. But for now, enjoy the power of Docker at your fingertips!
Refer to the Docker Documentation: Getting Started for more details.
Step 5 — Running a Docker Container
Alright, so you’ve already run the simple hello-world container—nothing fancy, just a quick test to make sure Docker is doing its thing. But containers? Oh, they’re capable of so much more! They’re like the lightweight, nimbler cousins of traditional virtual machines. They use far fewer resources and are perfect for all sorts of development and testing tasks.
Now let’s take it up a notch and dive into something a bit more practical. Let’s fire up a container based on the latest Ubuntu image. Instead of just running a simple test, you’ll actually be interacting with the container like a pro. Here’s the command that gets us started:
$ docker run -it ubuntu
Once you hit enter, your terminal will look a bit different. That’s the -it flags at work: -i keeps standard input open and -t allocates a pseudo-terminal, so you’ve effectively stepped inside the container. Your prompt will change to something like this:
root@d9b100f2f636:/#
The “root” part? That means you’re logged in as the root user inside the container, giving you full control. And that long string of numbers and letters, “d9b100f2f636,” is your container’s ID. You’ll need this ID later if you want to manage or remove the container, because it uniquely identifies the container. Pretty cool, huh?
Now that you’re inside, you can execute commands just like you would on any Linux system. Let’s say you want to update the package database within the container. You don’t even need to worry about sudo because, remember, you’re the root user here. Just type:
apt update
Done! Now that the package database is up-to-date, let’s install something useful—like Node.js. You’re going to love this. Installing it is as simple as running:
apt install nodejs
And there you have it! Node.js is now installed in your container directly from the official Ubuntu repository. How do you know it worked? You can check the version by running:
node -v
The terminal will reply with something like:
v10.19.0
Now, here’s something important to remember: anything you do inside this container—whether you’re installing software, tweaking settings, or updating the system—only affects the container itself. Those changes won’t impact your host system, and, if you delete the container, those changes go with it. It’s like a sandbox environment, totally isolated and self-contained.
When you’re ready to leave the container, simply type:
exit
In the next step, we’ll cover how to manage these containers—how to start, stop, and remove them when they’re no longer needed. This is a crucial part of keeping your Docker environment organized and efficient. Trust me, you’ll want to know this!
For further details, check out the Ubuntu Docker Container Usage Guide.
Step 6 — Managing Docker Containers
So, you’ve been working with Docker for a while now. You’ve created some containers, played around with a few images, and now you’ve got a bunch of containers hanging around. Some are running, and others are just sitting there, idle. Here’s the thing: As time goes on, managing these containers becomes essential if you want to keep your system clean and efficient. It’s not all that different from cleaning out your closet; you need to keep things organized to make sure everything runs smoothly.
The beauty of Docker is that it gives you the tools to manage containers like a pro, even when you’ve got many running at once. Let’s start by seeing which containers are still alive and kicking.
To list out all the containers that are currently running, you simply need to run the command:
$ docker ps
This command will show you a list of all active containers. If nothing is currently running, you’ll only see the header row, like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Now, if you remember, we’ve already created two containers in this tutorial. One came from the hello-world image, and the other from the ubuntu image. These containers might not be running anymore, but don’t worry—they’re still there, lurking on your system.
If you want to see all containers, whether they’re active or have exited, you’ll need to throw in the -a flag, like so:
$ docker ps -a
And now, you’ll get a full list, including the ones that have exited, along with their statuses:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c08a7a0d0e4 ubuntu "/bin/bash" 2 minutes ago Exited (0) 8 seconds ago quizzical_mcnulty
a707221a5f6c hello-world "/hello" 6 minutes ago Exited (0) 6 minutes ago youthful_curie
See that? You’ve got the quizzical_mcnulty and youthful_curie containers sitting there, even though they’ve stopped running.
If you want to focus on the most recently created container, just use the -l flag to check out the latest one:
$ docker ps -l
The output will tell you exactly which container was created most recently:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c08a7a0d0e4 ubuntu "/bin/bash" 2 minutes ago Exited (0) 40 seconds ago quizzical_mcnulty
What if you want to fire up a container that’s stopped? No problem! Just use the docker start command followed by the container ID or name. Let’s take the quizzical_mcnulty container as an example:
$ docker start quizzical_mcnulty
Once you’ve started it, check to make sure it’s running by using docker ps again. The output should show that it’s up and running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c08a7a0d0e4 ubuntu "/bin/bash" 3 minutes ago Up 5 seconds quizzical_mcnulty
What if you want to stop a running container? Easy! You use docker stop , followed by the container name or ID. So, if you’re done with quizzical_mcnulty , you can stop it like this:
$ docker stop quizzical_mcnulty
And just like that, your container will stop. You can run docker ps again to verify that it’s no longer running.
Eventually, you might decide that you don’t need a container anymore. If that’s the case, you can remove it to free up some resources. Use the docker rm command, followed by either the container ID or name. For example, to remove the youthful_curie container, which came from the hello-world image, you’d run:
$ docker rm youthful_curie
This will completely remove the container. If you’re unsure which container to remove, just run docker ps -a again, and it’ll give you the full list to choose from.
Now, let’s talk about naming containers. When you’re managing multiple containers, it can get a bit tricky to remember which one is which. The solution? Use the --name flag when you create a new container. This way, you can assign a specific name that’s easy to remember. You can also add the --rm flag if you want the container to automatically delete itself after it stops. Here’s how you’d create a new container named mycontainer from the ubuntu image and have it remove itself after stopping:
$ docker run --name mycontainer --rm ubuntu
This is super helpful if you’re working with temporary containers. No mess, no fuss.
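One caveat worth knowing: run exactly like that, the ubuntu image’s default bash process exits immediately because no terminal is attached, so --rm will clean the container up right away. For an interactive throwaway container, combine it with -it (same placeholder name as above):
$ docker run -it --name mycontainer --rm ubuntu
When you type exit inside, the container stops and deletes itself in one go.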
Lastly, Docker makes it super easy to modify and reuse containers. Say you’ve customized a container by installing applications or making changes. You can save those changes as a new image using the docker commit command. This new image can then be used to spin up new containers with the same setup, or you can share it with others by pushing it to a Docker registry.
In the next section of this tutorial, we’ll dive into the details of committing those changes from a running container into a brand-new image. It’s like taking a snapshot of all your hard work so you can use it again and again.
Docker Resources: What is a Container?
Step 7 — Committing Changes in a Container to a Docker Image
Imagine this: You’ve started a Docker container from an image, and now you’re inside, making all sorts of changes—installing Node.js , tweaking settings, and getting everything just how you want it. It feels a bit like working in a clean, empty room where you get to build things from scratch, right? But here’s the catch: any changes you make in that container are temporary. Once you’re done and the container is destroyed (with a simple docker rm command), all that work will be gone. It’s like building a sandcastle at the beach—beautiful while it lasts, but gone as soon as the tide comes in.
So, what if you could save that work? What if you could turn that customized container into a reusable template, so you could start up more containers with the same setup whenever you want? Well, that’s where committing changes comes in. You can take the current state of your container and save it as a new Docker image—no more worrying about losing your changes when the container is wiped out.
Here’s how you do it.
Let’s say you’ve just installed Node.js inside your Ubuntu container. You’re proud of it, and it’s working perfectly. But now, you want to reuse that container as a base for future containers. You don’t want to start from scratch again, right? So, what you need to do is commit those changes and turn them into a new image. With the magic of docker commit , you can do just that. It’s like taking a snapshot of your container’s current state and turning it into a reusable template.
Here’s the command you’ll use to commit your changes:
$ docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name
The -m flag lets you add a commit message. This is super useful for keeping track of what changes you made so you (or anyone else) can understand what’s inside the image.
The -a flag lets you specify the author—whether it’s your name or your Docker username.
The container_id is the unique identifier of the container you’re working with. You can find it by running docker ps -a .
The repository/new_image_name is the name you want to give your new image. If you’re planning to share this image with others, this is where you’d put your Docker Hub username or a custom repository name.
Let’s say your Docker Hub username is sammy , and your container ID is d9b100f2f636 . After installing Node.js in your container, you’d run this:
$ docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs
What happens now? Docker will take that container’s state— Node.js and all—and save it as a new image called sammy/ubuntu-nodejs . This new image will be stored locally on your computer, so you can use it to create new containers with all those great changes you made.
Now, to make sure your image was created, you can list all the Docker images on your system with this command:
$ docker images
Here’s what the output might look like:
REPOSITORY TAG IMAGE ID CREATED SIZE
sammy/ubuntu-nodejs latest 7c1f35226ca6 7 seconds ago 179MB
Boom! There it is, the new ubuntu-nodejs image you just created, all ready to be spun up into a container with Ubuntu and Node.js pre-installed. You’ll notice the size difference between this new image and the original one. The new image is a little bigger, thanks to Node.js being added.
The beauty of this? Now, every time you want to run a container with Ubuntu and Node.js , you can just pull that image and start a fresh container with the exact same setup. No more installing Node.js every time!
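If you want to confirm the snapshot worked, you could start a fresh container from the new image (using the sammy/ubuntu-nodejs name from the example above) and check Node.js right away:
$ docker run -it sammy/ubuntu-nodejs
Inside the new container, node -v should print the version immediately, with no installation step needed.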
But, hey, we’re just scratching the surface here. This tutorial focused on committing changes to create a new image, but there’s more you can do. You can automate the process of creating images using something called a Dockerfile. A Dockerfile is a script that tells Docker exactly how to build an image, step by step—kind of like writing a recipe for creating your perfect container. But that’s a whole other story for another time!
Next up, we’re going to explore how to share this shiny new image with the world (or at least with your team). By pushing it to a Docker registry, others can pull the image and create their own containers with the same setup. It’s all about sharing the love… and the containers!
Understanding Docker Images and Containers
Step 8 — Pushing Docker Images to a Docker Repository
Alright, so you’ve just created a shiny new Docker image—maybe it’s an Ubuntu container with Node.js installed, or maybe it’s a custom setup you’ve been working on. Now, you’re thinking: “How do I share this with the world (or at least my team)?” Well, here’s the fun part: Docker makes it super easy to push your custom image to Docker Hub or any other Docker registry, so you can collaborate, share, and reuse your work. Let’s walk through the process of pushing your Docker image. You might want to share it with just a few people, or maybe you’re ready to share it with everyone. Either way, the steps are pretty simple.
Logging into Docker Hub
Before you can share your masterpiece, you need to log in to Docker Hub (or any other Docker registry you’re using). Think of Docker Hub as your personal cloud-based library of Docker images, where everything you push gets stored. To log in, you’ll use the docker login command. Here’s how it goes:
$ docker login -u docker-registry-username
Once you hit enter, Docker will ask for your password. It’s like unlocking the door to your Docker Hub account. When you enter the correct password, Docker saves your login details, so you don’t need to log in again for future pushes. Now you’re ready to share your image!
Tagging the Image
Here’s a quick side note: If your Docker registry username is different from the local username you used when creating your image, you need to tag your image with the right name. Think of it like renaming a file so it matches the place you’re going to upload it to.
For example, let’s say your image is called sammy/ubuntu-nodejs , but your Docker Hub username is docker-registry-username . You’d need to tag your image like this:
$ docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs
This command tags your sammy/ubuntu-nodejs image with your Docker Hub username, getting it ready to upload to your repository.
Pushing the Image
Alright, the moment you’ve been waiting for—pushing your image! Now that your image is tagged and ready to go, it’s time to upload it to Docker Hub. Here’s how you do it:
$ docker push docker-registry-username/ubuntu-nodejs
This command tells Docker to upload your image to your Docker Hub account. If your image is big, don’t worry if it takes a bit of time. Docker will send the image in layers, like stacking pieces of a puzzle. Here’s what the output might look like:
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
Each of those lines represents a different layer of your image. Once all of them are pushed, your image is officially on Docker Hub, ready for the world to pull down.
Verifying the Push
So, how do you know everything went smoothly? After pushing your image, head over to your Docker Hub account and check your repositories. You should see your image there, waiting for anyone to pull and use. It’s like uploading a new app to an app store—now it’s available for everyone!
But what if something goes wrong and you get an error like this?
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required
This usually means Docker couldn’t authenticate you properly with Docker Hub. No worries! Just run the docker login command again, make sure you enter the correct password, and try pushing the image once more.
Pulling the Image
Now, let’s say you—or someone else—wants to use your shiny new image. Here’s the magic command for pulling it down from Docker Hub:
$ docker pull sammy/ubuntu-nodejs
This command will grab your image from Docker Hub and download it to the local machine. Once it’s done, the image is ready to be used in a new container. How cool is that?
In this step, you’ve learned how to push Docker images to Docker Hub, making it easier to share your custom images with others or use them across different systems. You’ve opened up the door to collaboration and streamlined your workflow, so go ahead—start pushing your custom images and let others enjoy the magic you’ve created!
Pushing Docker Images to Docker Hub
Docker vs Docker Compose
Imagine this: you’re building an app and need to set up multiple services—let’s say a web server, a database, and a caching system. You’re using Docker to containerize everything, but now you’re running into a bit of a headache. Each service needs its own container, and managing all of them individually is starting to feel like a real chore. Here’s the thing—Docker is great for handling one container at a time, but what if you need more? That’s where Docker Compose comes in.
Think of Docker as the individual parts of a car—each container is a unique, self-contained piece of the machine. But to make the car run smoothly, those parts need to be connected. Docker Compose is like the mechanic that helps put everything together, making sure each part is in sync and running efficiently.
The Problem: Managing Multiple Containers with Docker
Running individual containers is pretty straightforward with Docker. You spin up one container, install your app, and voilà—you’re good to go. But when you need more than just one container working together—like when your app needs both a web server and a database—it starts to get a bit trickier. You end up manually linking containers, tweaking settings, and ensuring everything runs as it should. Trust me, it’s a lot more work than it seems.
Here’s where Docker Compose comes to the rescue. Instead of manually managing each container, Docker Compose lets you define and orchestrate multi-container applications with ease. You can use a simple YAML file ( docker-compose.yml ) to describe the services you need. This means you can set up and manage everything from one place, and Docker Compose will handle the rest.
The Magic of Docker Compose
With Docker Compose, you don’t have to run multiple containers by hand or worry about how to link them together. Instead, you define everything in a configuration file, and when you’re ready to go, you simply type:
$ docker-compose up
This command will read the docker-compose.yml file, create the necessary containers, set up the networking, and handle all the dependencies automatically. It’s like setting up a new game, where Docker Compose is the game master—everything is prepared and set up for you to play.
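To make that concrete, here’s a minimal sketch of what a docker-compose.yml might look like, assuming a two-service app with an Nginx web server and a Redis cache (the service names, images, and port mapping are purely illustrative):
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - cache
  cache:
    image: redis:latest
Running docker-compose up in the directory containing this file would start both containers on a shared network, so web can reach cache by its service name. When you’re done, docker-compose down stops and removes everything the file created.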
When to Use Docker Compose
Docker Compose is the secret weapon for developers working with complex applications. Whether you’re setting up a local development environment or preparing a staging setup, Docker Compose makes managing multiple interconnected containers a breeze. For example, imagine you’re building a web app that needs a web server, a database, and a caching layer. With Docker Compose, you define all these services in one file, and it takes care of networking them together, so you don’t have to.
Here’s a quick look at how Docker CLI (Command Line Interface) compares with Docker Compose:
Feature | Docker CLI | Docker Compose |
---|---|---|
Usage | Single container operations | Multi-container orchestration |
Configuration | CLI commands | YAML configuration file |
Dependency Handling | Manual | Handles linked services automatically |
Best Use Case | Testing isolated containers | Local development and staging setups |
As you can see, Docker CLI is great for running and managing single containers, but Docker Compose really shines when it comes to orchestrating a bunch of interconnected containers. It’s the ideal tool for managing multi-service applications, where each container has a role to play.
Wrapping It Up
If you’re working on complex applications and need a way to manage several containers in a neat, organized manner, Docker Compose is your best friend. It simplifies container orchestration and ensures that all your services are talking to each other just like they should.
And hey, if you want to dive deeper into Docker Compose, there are some great resources out there that’ll help you get it up and running on your Ubuntu system. It’s a total game-changer for streamlining multi-container setups and making your workflow smoother.
Now go ahead and give Docker Compose a try—trust me, you’ll wonder how you ever lived without it!
Using Docker Compose on Ubuntu
Troubleshooting Common Docker Installation Issues
You’ve decided to give Docker a try on your system, eager to see what containers can do for you. But as soon as you run a command, things don’t go as planned—Docker isn’t working. Don’t worry, it’s not the end of the world! Here are some common issues you might run into during installation and the easy fixes to get you back on track.
Problem 1: docker: command not found
Ah, the “command not found” error—classic! This one’s pretty common and happens when Docker’s command-line interface (CLI) isn’t set up properly in your system’s PATH. In simple terms, your computer doesn’t know where to find the Docker command because something went wrong during the installation.
Fix: No need to stress! You can either reinstall Docker or just make sure the /usr/bin directory is in your PATH. Reinstalling is usually the quickest fix. To do this, run the following command to ensure everything is installed correctly:
$ sudo apt install docker-ce docker-ce-cli containerd.io
This command will install the Docker Engine ( docker-ce ), the Docker CLI ( docker-ce-cli ), and containerd ( containerd.io ). This should make the Docker command work and fix the error. Try again, and it should be all set!
Problem 2: Cannot connect to the Docker daemon
Imagine you’re all excited to use Docker, but you run into the “Cannot connect to the Docker daemon” error. This usually means that Docker isn’t running, or you don’t have the right permissions to communicate with Docker’s background process (the Docker daemon).
Fix: First, start the Docker service. It’s simple:
$ sudo systemctl start docker
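If you also want the daemon to come back automatically after a reboot (optional, but typical on servers), enable the service as well:
$ sudo systemctl enable docker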
Next, let’s make sure you can run Docker commands without needing to use sudo every time. To do that, add your user to the Docker group by running:
$ sudo usermod -aG docker $USER
This lets you run Docker commands as a regular user. To apply the change, log out and back in (you don’t need to restart the computer). After that, run:
$ docker info
If Docker is set up properly, this will show you all the details about your Docker installation. You’re good to go!
Problem 3: GPG Key or Repository Error
Ah, the dreaded GPG key or repository error! This happens when Docker’s repository is misconfigured or when the GPG key used to secure the downloads has changed. This is common when Docker updates their repositories or GPG keys.
Fix: Don’t stress about it! All you need to do is update your setup with the latest repository and GPG key. Docker’s official documentation has up-to-date steps for handling this. Here’s the thing: Docker is always improving, so it’s important to make sure you’re using the current repository and key for your system.
If you’re on Ubuntu 22.04, you might need to follow version-specific instructions to make sure everything works. To keep things running smoothly, you can also automate the installation using tools like Ansible. These tools can help set up Docker automatically, reducing errors and simplifying the process. For a step-by-step guide, check out How To Use Ansible to Install and Set Up Docker on Ubuntu.
By following these steps, you should be all set to fix common Docker installation issues. But if something’s still not working, don’t panic. Docker has a huge community full of helpful troubleshooting tips. So when in doubt, check the forums or Docker’s official support docs. You’ve got this!
Docker Installation Documentation
Installing Docker Using a Dockerfile
So, you’re diving into the world of Docker and want to make your installations a bit smoother. Maybe you’re setting up multiple systems or just need a consistent environment across your team. The best way to do this is by using a Dockerfile. It’s a super handy tool that automates the whole Docker setup process, making things quicker, easier, and more efficient. Think of it like a recipe card that ensures every system gets the exact same result, every time.
Now, let me walk you through how to set up Docker on an Ubuntu 20.04 base image using a Dockerfile. Here’s how it works:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
    apt-get update && \
    apt-get install -y docker-ce docker-ce-cli containerd.io
Breaking It Down:
- FROM ubuntu:20.04: Think of this like picking the foundation for your project. In this case, we’re starting with the Ubuntu 20.04 base image. Docker will begin building from this point. (The ENV DEBIAN_FRONTEND=noninteractive line that follows just keeps apt from hanging on interactive prompts during the build.)
- RUN apt-get update: Here, we’re refreshing the package lists to make sure everything is up to date. It’s like clearing the cobwebs out before starting your work—keeps things fresh and ready to go.
- Install prerequisites: We need to install some important packages to ensure Docker runs smoothly. These let apt fetch software securely over HTTPS, and software-properties-common provides the add-apt-repository command used a couple of steps below.
apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release software-properties-common
- Add Docker GPG key: To make sure the Docker packages we’re about to download are legit, we download and add Docker’s GPG key. This way, we’re only trusting packages from Docker’s official source.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
- Add Docker repository: Now, we tell Ubuntu where to find Docker’s official packages. It’s like giving Ubuntu the map to the treasure. We add Docker’s repository to the list of trusted sources.
add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
- Install Docker: Finally, we install Docker, Docker CLI, and containerd, which are the tools we need to run and manage containers. This is the heart of your setup, and once it’s done, you’ll be ready to start creating and managing Docker containers.
apt-get install -y docker-ce docker-ce-cli containerd.io
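With the Dockerfile saved, you’d build the image from the directory that contains it. The -t flag tags the result, and my-docker-base is just a placeholder name for this sketch:
$ docker build -t my-docker-base .
Docker executes each instruction in order and caches the resulting layers, so rebuilds after small changes run much faster.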
Automating Docker Installation with Ansible
If you’re managing several servers, you might get tired of repeating this process manually every time. This is where Ansible comes in. Ansible is an automation tool that makes software deployment and configuration management super easy.
With Ansible, you can write a playbook—a simple script that automates Docker’s installation process. By using Ansible, you ensure that every machine gets the exact same setup with minimal effort. You won’t have to worry about any machine-specific differences or forgetting steps—Ansible handles it all for you.
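As a rough illustration, a minimal playbook might look like the sketch below. This is a simplified outline rather than the full setup from the linked guide; it assumes the Docker apt repository is already configured on the target machines, and the docker_hosts group name is a placeholder:
- hosts: docker_hosts
  become: true
  tasks:
    - name: Install Docker packages   # assumes Docker's apt repo is already set up
      apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
        state: present
        update_cache: true
    - name: Make sure Docker is running and enabled at boot
      service:
        name: docker
        state: started
        enabled: true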
Plus, if you’re managing a large infrastructure, Ansible gives you more control over your configurations and ensures all your systems have the latest version of Docker. It’s the best way to keep everything consistent, especially when you’re deploying Docker across many systems.
If you’re curious about this, take a look at our guide on How To Use Ansible to Install and Set Up Docker on Ubuntu. The guide shows you how to create an Ansible playbook that automates the Docker installation and configuration, and it also covers best practices to make sure your systems stay scalable and consistent.
Wrapping It All Up
Using both a Dockerfile and Ansible together will make managing Docker even easier. When you’re working with multiple servers or complex deployment workflows, automating Docker installations is a game-changer. Whether you’re dealing with just one system or hundreds, this combo is all about efficiency, consistency, and control. It’s the perfect solution for ensuring that Docker is installed and configured the same way every time.
How to Uninstall Docker on Ubuntu
So, you’ve decided that Docker is no longer needed on your Ubuntu system, or maybe you’re just getting ready for a fresh start with a reinstall. Either way, it’s time to remove Docker and its components from your system. But don’t worry, it’s not as complicated as it sounds. In fact, it’s pretty simple. Let’s go step by step to make sure you get everything cleaned up the right way.
Step 1: Purge Docker Packages
First things first: we need to remove the core Docker packages. These include the Docker Engine ( docker-ce ), Docker CLI ( docker-ce-cli ), and containerd ( containerd.io ). This will uninstall Docker, but here’s the catch: it won’t remove Docker’s configuration files or any stored data. So, you’ll still have some leftover files hanging around. To start the removal process, run this command:
$ sudo apt purge docker-ce docker-ce-cli containerd.io
This will take care of the main Docker components, but we’re not done just yet!
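Optionally, you can also let apt sweep up dependencies that were pulled in for Docker and are no longer needed, a standard cleanup step:
$ sudo apt autoremove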
Step 2: Remove Docker’s Data
Now that the Docker packages are removed, let’s make sure nothing is left behind. Docker keeps some stuff around, like images, containers, volumes, and networks. Even after you remove the software, these things can stick around and take up space.
To get rid of all the Docker-related data, you’ll need to manually delete its data directories. These directories store Docker’s images, containers, and other files. Run these commands to get rid of everything:
$ sudo rm -rf /var/lib/docker
$ sudo rm -rf /var/lib/containerd
Once you run these, Docker’s images, containers, and configuration files will be completely wiped out. You’ve officially cleared out all of Docker’s clutter from your system.
Step 3: Remove Docker Group (Optional)
If you’re the thorough type and want to clean up every trace of Docker, you can also remove the Docker user group. This step isn’t necessary, but if you’re not planning on using Docker again anytime soon, it might be a good idea to tidy things up a bit more.
To remove the Docker group, run this command:
$ sudo groupdel docker
This will remove the Docker group from your system. But remember, this step is optional—only do it if you really want to go the extra mile.
Step 4: Verify the Removal
Now, let’s double-check that Docker is truly gone. You can verify its removal by checking the status of Docker’s service. Run this command:
$ sudo systemctl status docker
If Docker has been properly uninstalled, you should see that the service is either inactive or not found at all. If it’s still running for some reason, something went wrong earlier, and you’ll need to go back and make sure everything was removed.
By following these steps, you’ll have completely uninstalled Docker from your Ubuntu system, along with all its associated data. You’ve cleared the way for a fresh start—whether you’re freeing up space, upgrading Docker, or just tidying up. And if you ever decide to reinstall Docker in the future, you’ll be starting from scratch, which will make the process smoother.
For further details on installing Docker, refer to the official guide.
Conclusion
In this tutorial, we’ve walked through the essential steps for installing and managing Docker on Ubuntu 20.04. By now, you should be comfortable setting up Docker, executing commands, managing containers, and working with Docker images. We’ve also explored Docker Compose for handling multi-container setups and introduced Docker Desktop as a powerful GUI option for development. Whether you’re optimizing your container management or troubleshooting common issues, these insights will help you make the most of Docker on Ubuntu.
Looking ahead, Docker’s capabilities continue to evolve, especially with tools like Docker Compose and Docker Desktop making it easier to manage complex applications. As the Docker ecosystem grows, staying up-to-date with the latest features will ensure your workflows remain efficient and scalable. Keep experimenting with these tools, and you’ll unlock even more possibilities for containerized development.