Getting Started with Docker

This entry is part 1 of 3 in the series Easy Containerization with Docker

In the summer of 2014, the Linux Foundation found Docker to be the second most popular cloud project among open source cloud professionals, second only to the OpenStack cloud platform. At the time, Docker had only been in existence for about a year, and it had reached its 1.0 milestone just a few weeks earlier. Docker containers had definitely caught the attention of the open source software community and beyond. With Docker comes the promise of a world where applications come in compact, efficient and secure containers that can be easily deployed, managed and updated: portable applications that can run in the cloud because, in theory, they can run in any environment that supports Docker.

Docker containers are different from virtual machines because you can start with a small base image instead of a whole operating system, and then add only the software that the application needs to run.

There are several big advantages to having everything that the application needs to run stored in a container:

  1. Docker containers comprise just the application and its dependencies. Because a containerized application carries its dependencies with it, it doesn’t have to depend on the operating system to supply the libraries, executables and other components it needs;
  2. Docker containers are more portable and efficient than virtual machines. Because they are typically much smaller than virtual machines, containers consume less memory when they run, eat up less storage when they are at rest and are more efficient to move around.
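To make this concrete, here is a hypothetical Dockerfile for a tiny containerized application. The base image tag and file names are made up for illustration; they are not from any real project:

```dockerfile
# Start from a small base image instead of a whole operating system
FROM fedora:22

# Add only the software the application needs to run
RUN yum install -y python

# Copy in the (hypothetical) application and declare how to start it
COPY app.py /opt/app.py
CMD ["python", "/opt/app.py"]
```

Everything the application depends on is baked into the resulting image, so the host only needs to provide the Docker service itself.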

Here I will try to explain the benefits of running your application in a Docker container. We will take a look at Docker’s history, decompose Docker to look at its components, and learn about the other initiatives that have grown around the Docker project.

Delivering Applications with Docker

Applications have evolved from running directly on computer hardware, to running in virtual machines, and now to running in containers. In the old days, if you wanted to install an application, you would put it right on the hardware, and everything would be fine until something went wrong. If you needed to take down that machine, or demand somehow grew beyond what that machine could do, you were in trouble: you would pretty much be stuck reinstalling the machine and probably moving all your data over. When applications began moving into virtual machines, people got more flexibility in how they ran their applications. You could have an application like a web server running inside a virtual machine, and if something went wrong with the host it was running on, that virtual machine could simply be moved to another host. You could also spin up more virtual machines if you needed extra capacity.

Nowadays we have the concept of containers, which is what we are talking about here with Docker. Containers take another step in the evolution of running applications by paring an application down to the smallest possible amount of material needed to run it in an environment in which it is going to work. One way a container does that is by containing only the files and features needed to run the specific application. This makes containers lighter than VMs. It is also more flexible and secure than running directly on hardware: you can take a container and move it to a different environment, and the underlying operating system really doesn’t matter; as long as it supports the Docker service, the application should be able to run just fine in that new environment.

Containers bring their dependencies with them, so when you install an application inside a container you don’t have to worry about whether the operating system has the right versions of your libraries or other components. The application should just run inside the container, again as long as the operating system supports Docker containers.

There are a lot of environments that can deploy containers, for example:

  • Any full-blown Linux distribution that happens to support Docker: Red Hat Enterprise Linux, Fedora, Ubuntu, etc.
  • Lean operating systems built for containers: CoreOS, Project Atomic

Docker is an open source project managed by Docker, Inc. As the project states on its website: “Docker is a platform for developers and sysadmins to develop, ship, and run applications.” Docker was originally released in March 2013 by dotCloud, Inc., which later became Docker, Inc. in October 2013. Development continued until June 2014, when the official Docker 1.0 release came out. Lots of other companies are behind Docker as well. For example, Docker, Inc. has an alliance with Red Hat, which has incorporated Docker technology into RHEL Atomic as well as OpenShift as a development platform. Docker also has alliances with Google, Amazon, and other cloud providers.

What Makes Up Docker?

To the user, Docker is primarily the docker command, which:

  • runs as a daemon process that provides the Docker service: typically, in a Linux operating system, whatever init system the distribution uses will start up the Docker service, and it will run as a daemon in the background waiting for requests to come in;
  • runs as a command with many options for working with Docker images and containers: you type the docker command, add an option to it like run, images or logs, and use it to do all the things you need to do to your images and containers.
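As an illustrative sketch, a short session with the docker command might look like the following. These commands assume the Docker service is installed and running, and the container name “web1” is hypothetical:

```shell
# download an image from a registry
docker pull fedora

# list the images stored locally
docker images

# start a container from that image, in the background
docker run -d --name web1 fedora sleep 300

# view the output of the running container
docker logs web1
```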

Behind Docker is the Docker index, also referred to as the Docker Hub. It contains more than 14,000 containerized applications that you can use. By default you download and upload images by pulling them from and pushing them to the Docker Hub, but if you want to control your own software environment, you can set up a private registry, keep your own images there and use them from there. As I just mentioned, Docker uses push and pull to get images to and from those registries.
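For example, moving an image into a private registry is typically a pull, re-tag and push sequence. The registry address registry.example.com:5000 below is purely hypothetical, and the commands assume a running Docker daemon:

```shell
# pull the image from the public Docker Hub
docker pull fedora

# re-tag it so it points at the private registry
docker tag fedora registry.example.com:5000/fedora

# push it to the private registry
docker push registry.example.com:5000/fedora
```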

What Is Being Developed along with Docker?

One of the main kinds of tools being developed right now are tools for orchestrating containers. The idea behind these orchestration tools is that they allow multiple containers to work together in tightly coupled sets. They also allow you to run groups of containers across multiple hosts, so, depending on your workload, you can move containers around and still have them speak to each other across hosts.

There are also container-ready operating systems being developed to deploy containers into the cloud. One example is Project Atomic. Project Atomic produces lean operating systems built from RPM packages, so you can get versions of Project Atomic based on Red Hat Enterprise Linux, Fedora or CentOS. The idea is that you have a small operating system, built just for running containers, on which to run your containers.

Also being developed right now are new methods of virtual networking, storage and other supporting technologies. These are going to make it more interesting for people who want to be able to put together complex container applications and have them be able to communicate on virtual networks and also be able to tap into network storage as well.

Getting Docker Software

Choosing a Docker Build Environment

Docker software packages have been created for many different open source and proprietary operating systems.

When you’re choosing between different environments in which to build your Docker containers, there are several things you should consider:

  1. The operating system. Docker is probably run most often on Linux systems, such as Fedora, Red Hat Enterprise Linux, Ubuntu, CentOS, Debian, Gentoo, Arch Linux and openSUSE. All of those include Docker in their distributions. Docker can also be run on proprietary operating systems, such as Mac OS X and Microsoft Windows, and in cloud environments: Google Cloud Platform, Amazon EC2, and others;
  2. The packaging method (.deb versus .rpm). If you’re familiar with a particular Linux distribution, there’s a good chance that a binary version of Docker is already available for it. If not, you can pick a similar distribution. For example, if your Linux distribution uses RPM-style packages, consider Fedora, Red Hat Enterprise Linux or CentOS. If you’re comfortable with .deb-style packages, you can try Docker on Ubuntu, Debian or their many derivatives.
  3. Level of support. Docker is a relatively new technology and, if you plan to use it to deploy mission-critical applications, you should consider working with a distribution that is committed to keeping up with the latest Docker versions in a supported release. For example, Docker was added to the official Ubuntu 14.04 Long Term Support release; likewise, Red Hat added it to its supported Red Hat Enterprise Linux 7 and Atomic products.
  4. Latest versus more stable but supported. As Docker matures, some distributions will choose to ship the latest features as they become available, while others will seek to publish more stable versions of Docker. For example, when you are trying out Docker you might choose the latest Fedora or a non-LTS Ubuntu release to get the latest Docker version. When you’re ready for serious deployment, Red Hat Enterprise Linux and Ubuntu LTS may be better choices for you.
  5. Available software development tools. Just being able to quickly put your favorite application in a Docker container might be good if you’re using Docker for personal applications. However, if you’re developing containers for critical services, they need to be maintained over time. Look for a distribution that integrates and tests development tools and orchestration tools that play well with Docker. You also want to be able to do version control and manage new features and security patches.

To get you started I’ll show you how to set up Docker and some related tools in Fedora, Ubuntu and Red Hat Enterprise Linux.

Set up Docker in Fedora

Fedora is the free, bleeding-edge Linux distribution sponsored by Red Hat. New releases of Fedora come out about every 6 months, and they’re used as a proving ground for new software as it becomes available. I recommend you install the most recent version of Fedora if you want to try out Docker there.

To get started with Docker on Fedora you need to install the docker-io package. Once it is installed, you can begin pulling Docker images, building your own images and managing containers that you have started from those images. For this demonstration I downloaded the Fedora 22 Desktop Live ISO and installed it. Docker doesn’t require a desktop system, so I could just as easily have installed a base system and worked entirely from the command line.

The Docker package for Fedora is called docker-io so as not to conflict with a package named docker that shipped in an earlier desktop release. To install the Docker package, run:

sudo yum install docker-io

Once docker-io is installed, run “rpm -ql docker-io | less” to take a look at the contents of the package. This will give you a sense of what you can do with Docker. Let’s take a look at some of the files inside that package.

The “/etc/bash_completion.d/docker.bash” file provides bash completion, which is very helpful when you’re typing in things like container names or container IDs. You can type the first couple of letters and bash will complete them for you, which is very useful for these long names.

The “/etc/sysconfig/docker” file contains options that are passed to the Docker daemon when it runs. By default, all it contains is a “--selinux-enabled” option to start with.
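As a rough sketch, the file looks something like this on a Fedora 22 system (exact contents can differ between package versions):

```shell
# /etc/sysconfig/docker
# Options passed to the Docker daemon at startup
OPTIONS=--selinux-enabled
```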

The “/etc/udev/rules.d/80-docker.rules” file contains information to create devices that are needed for the Docker daemon to communicate with containers.

The docker binary is located at “/usr/bin/docker“. That’s the only command that comes with this particular package, and it is used to do almost everything you need to do to create and use Docker images.

The “/usr/lib/systemd/system/docker.service” file is the systemd unit file used to start, stop and query the Docker service. Inside this file you can see which daemon is actually run: “/usr/bin/docker“, the same command you run on the command line, is run as the daemon. You can also see that it includes the “/etc/sysconfig/docker” file when it runs, and that the service starts after the network target. So basically it tells you what happens when the service starts and stops.
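Simplified, the core of that unit file looks roughly like the following. This is a sketch, not the exact file, which varies between Docker versions:

```ini
[Unit]
Description=Docker Application Container Engine
After=network.target

[Service]
# pull OPTIONS in from /etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker
# run the same binary you use on the command line, in daemon mode
ExecStart=/usr/bin/docker -d $OPTIONS

[Install]
WantedBy=multi-user.target
```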

The “/usr/share/doc/docker-io” directory contains some files that might interest you: change logs, information about the maintainer, and two README files you can take a look at.

Most of the documentation for Docker, however, is in man pages. In “/usr/share/man/man1/” you can see that there are Docker man pages for each of the subcommands associated with Docker: docker-ps, docker-build, docker-cp, docker-diff, etc. If you want to look at one of these man pages, you just type, for example, “man docker-push” and that will show you the Docker push man page.

One more configuration file is “/usr/share/vim/vimfiles/doc/dockerfile.txt”, which tells the vim command how to format a file that’s in Dockerfile format. That way, any time you open a Dockerfile, the colors in it will indicate the formatting of the file and you will be able to see if you make any syntax errors.

With the docker-io package in place, you can go and start using the docker command.

Set up Docker in Ubuntu

Ubuntu is a popular distribution with many Linux enthusiasts. Besides offering a popular standard desktop, Ubuntu is also available in lightweight desktop variants such as Xubuntu, and in special spins for education and multimedia, among other things. Although Ubuntu releases a new version about every 6 months, most of which are supported for 9 months, some releases are designated as long-term support (LTS) releases, and those are supported for 5 years.

To use Docker in Ubuntu, start with the latest version of Ubuntu that’s available. For this demonstration I’m using Ubuntu 15.04 Desktop Edition to start with.

To install Docker on that system start by updating the package lists for your system by typing the following:

sudo apt-get update

If you don’t do this you won’t necessarily get the latest Ubuntu Docker package.

Next, to install Docker, you must install the docker.io package, which is named slightly differently than in other distributions:

sudo apt-get install docker.io

This will also pull in any dependent packages that are needed; it adds a group called “docker” and actually starts the Docker service. Just to double-check that the service is working, you can type:

sudo service docker status

Once the package is installed, you can list its contents using the command:

sudo dpkg-query -L docker.io | less

There are some differences between the Ubuntu package and the Fedora package: there are many fewer man pages in this distribution of Docker, but there are a lot more shell scripts in the “contrib” directory that you can use to create containers and do other things that are useful with Docker.

With the Docker package installed and the Docker service up and running you’re now ready to start using Docker to create your own containers, run your own containers and create your own images.

Set up Docker in Red Hat Enterprise Linux

Red Hat Enterprise Linux is the subscription-based Linux distribution from Red Hat. Its goal is to offer a stable, tested Linux distribution that’s available with different levels of customer support from Red Hat. Like Fedora, Red Hat Enterprise Linux is an RPM-based Linux distribution. However, there are a few steps that differ from Fedora when you install Red Hat Enterprise Linux and add Docker; these have to do with the subscription service. Assuming you have Red Hat Enterprise Linux subscriptions and an account on the Red Hat customer portal, here are the steps you can follow to subscribe that system and add the Docker software to it.

First you need to install Red Hat Enterprise Linux 7 Server Edition. Docker isn’t available with desktop or workstation editions. Next you need to use Subscription Manager to enable your Red Hat subscription and enable the repositories you need to get the Docker package, as well as related packages you will need later. To do that, type the following:

subscription-manager register --username=your-username --auto-attach

This will register your system in the Red Hat Network. The next thing you need to do is enable the proper repositories with the following commands:

# enable the optional rpms repository
subscription-manager repos --enable=rhel-7-server-optional-rpms

# enable the extras rpms repository
subscription-manager repos --enable=rhel-7-server-extras-rpms

With the optional and extras repositories enabled you’ll be able to install all the packages you need for Docker in Red Hat Enterprise Linux 7.

In Red Hat Enterprise Linux the name of the package is simply docker. To install it you just type:

yum install docker

This will go out and grab the Docker package and any other packages that are needed to go along with that particular package to get it to work.

With Docker installed, you start it with the “systemctl start docker.service” command. To make sure that it starts up every time your system boots, type “systemctl enable docker.service“. To check the Docker status, type “systemctl status docker.service“.

To check out the contents of that package, you can use the RPM command as you did in Fedora:

rpm -ql docker

and you’re going to see pretty much the same files that you had in Fedora, almost identical to what is in the docker-io package.

As I’ve mentioned earlier Docker is available with operating systems other than those that I’ve demonstrated here. For instructions on installing Docker on those operating systems and cloud environments refer to the installation documentation.

Choose a Docker run-time environment

When you’re developing applications, you probably want a full-featured operating system, one that includes all the tools you need to create, manage, debug and deploy containerized applications. For a Docker runtime environment, however, you probably want an operating system that is a bit more lightweight.

Docker containers can run on a full-blown Linux system; however, you can also run containers in cloud environments or other virtual environments, where you should consider lighter-weight operating systems. Several operating systems were designed specifically to run containers in cloud or other virtualized environments. These lean distributions offer several advantages:

  • because they are small, they are easier to move around than virtual machines based on full-blown Linux distributions
  • they take up less disk space
  • because they’re designed for cloud deployment, they include the features needed to run containers but not much else
  • because they contain fewer features, there’s also less opportunity for exploits.

Project Atomic and CoreOS are two examples of operating systems built for running containers.

The Atomic Project was created to build lean operating systems for running containers on RPM-based distributions. These include Fedora, Red Hat Enterprise Linux and CentOS. Project Atomic can run on public cloud providers, on cloud platforms like OpenStack and Red Hat Enterprise Virtualization, directly on hardware, or from PXE booting.

CoreOS can run in cloud providers such as Google Compute Engine, Amazon EC2 and Rackspace Cloud. It can also run directly on hardware or be booted using the PXE and iPXE protocols.

Set up a Fedora Atomic host

An Atomic host is an operating system created specifically for running containers. Like containers themselves, an Atomic host starts out light by including only the features needed to run containers. It stays lean by doing atomic-style upgrades, which keeps upgrade sizes really small and allows you to roll back to the state before an upgrade at any time.

To get a Fedora Atomic image, go to the Fedora download site, select the qcow2 image and download it to your system. Once the image is downloaded, you’ll need to create a couple of data files so that you can attach the qcow2 image to a virtual machine. First, create a file called “meta-data”, in which you specify an instance ID and the hostname for the system once it’s running:

instance-id: FedoraAtomic
local-hostname: fedoraatomic

Then create another file, “user-data”, containing the password information needed to log into the system (the file must begin with the #cloud-config line so that cloud-init recognizes it):

#cloud-config
password: atomic
chpasswd: {expire: False}
ssh_pwauth: True

To generate an atomic ISO image you’re going to need the “genisoimage” command, so type:

yum install genisoimage
genisoimage -output fedoraatomic.iso -volid cidata -joliet -rock user-data meta-data

This will install the “genisoimage” package and then create the ISO image called “fedoraatomic.iso“. Now you can run it as a virtual machine on your local system:

virt-install --import --name atomic0 --ram 4096 --disk path=./{path_to_qcow2_downloaded_file},format=qcow2,bus=virtio --disk path=./fedoraatomic.iso,device=cdrom --network bridge=virbr0

This command imports the Atomic image, names it “atomic0”, assigns the given amount of RAM to it and runs it as a virtual machine. Once you have the Atomic image installed and running on your system, it’s a good idea to first run the atomic upgrade command, so as the root user, once you log into Atomic, type:

atomic upgrade

and this will download all the latest bits for that particular image to your system and include them in the Atomic image. After the upgrade, you can simply reboot and you will come up running the latest software inside your Atomic image. After you log into your system, type

systemctl status docker.service

and you will see that the Docker service is already running inside your Atomic host. Now you’re ready to run containers.
