How to Use Ansible to Install and Set Up Docker on Ubuntu 18.04

Introduction

With the popularization of containerized applications and microservices, server automation now plays an essential role in systems administration. It is also a way to establish standard procedures for new servers and reduce human error.

This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.

While you can complete this setup manually, using a configuration management tool like Ansible to automate the process will save you time and establish standard procedures that can be repeated across tens to hundreds of nodes. Ansible offers a simple architecture that doesn’t require special software to be installed on nodes, and it provides a robust set of features and built-in modules which facilitate writing automation scripts.

Pre-Flight Check

In order to execute the automated setup provided by the playbook discussed in this guide, you’ll need:

Testing Connectivity to Nodes

To make sure Ansible is able to execute commands on your nodes, run the following command from your Ansible Control Node:

  • ansible -m ping all

This command will use Ansible’s built-in ping module to run a connectivity test on all nodes from your default inventory file, connecting as the current system user. The ping module will test whether:

  • your Ansible hosts are accessible;
  • your Ansible Control Node has valid SSH credentials;
  • your hosts are able to run Ansible modules using Python.
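The first of those checks, host reachability, can be sketched as a plain TCP probe of the SSH port. This is a simplified illustration only, not what the ping module actually runs; the reachable function and the host name in the comment are hypothetical:

```python
import socket

def reachable(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a host from your inventory (hypothetical name).
# print(reachable("server1"))
```

The real ping module goes further: it also logs in over SSH and executes a small Python payload on the remote host, which is why a pong reply confirms all three conditions at once.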

If you installed and configured Ansible correctly, you will get output similar to this:

Output
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Once you get a pong reply back from a host, it means you’re ready to run Ansible commands and playbooks on that server.

Note: If you are unable to get a successful response back from your servers, check our Ansible Cheat Sheet Guide for more information on how to run Ansible commands with custom connection options.

What Does this Playbook Do?

This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 18.04.

Running this playbook will perform the following actions on your Ansible hosts:

  1. Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
  2. Install the required system packages.
  3. Install the Docker GPG APT key.
  4. Add the official Docker repository to the apt sources.
  5. Install Docker.
  6. Install the Python Docker module via pip.
  7. Pull the default image specified by default_container_image from Docker Hub.
  8. Create the number of containers defined by the create_containers variable, each using the image defined by default_container_image, and execute the command defined in default_container_command in each new container.

Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.
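The naming used in the last step mirrors Ansible's with_sequence loop: each container name is default_container_name with the sequence number appended. A minimal sketch of the names produced by the playbook's defaults:

```python
# Reproduce the container names the playbook generates from its default variables.
create_containers = 4
default_container_name = "docker"

names = [f"{default_container_name}{i}" for i in range(1, create_containers + 1)]
print(names)  # → ['docker1', 'docker2', 'docker3', 'docker4']
```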

How to Use this Playbook

To get started, we’ll download the contents of the playbook to your Ansible Control Node. For your convenience, the contents of the playbook are also included in the next section of this guide.

Use curl to download this playbook from the command line:

  • curl -L https://raw.githubusercontent.com/do-community/ansible-playbooks/master/docker/ubuntu1804.yml -o docker_ubuntu.yml

This will download the contents of the playbook to a file named docker_ubuntu.yml in your current working directory. You can examine the contents of the playbook by opening the file with your command-line editor of choice:

  • nano docker_ubuntu.yml

Once you’ve opened the playbook file, you should notice a section named vars with variables that require your attention:

docker_ubuntu.yml
. . .
vars:
  create_containers: 4
  default_container_name: docker
  default_container_image: ubuntu
  default_container_command: sleep 1d
. . .

Here’s what these variables mean:

  • create_containers: The number of containers to create.
  • default_container_name: Default container name.
  • default_container_image: Default Docker image to be used when creating containers.
  • default_container_command: Default command to run on new containers.

Once you’re done updating the variables inside docker_ubuntu.yml, save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.

You’re now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on all servers from your inventory, by default. We can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. To execute the playbook only on server1, you can use the following command:

  • ansible-playbook docker_ubuntu.yml -l server1

You will get output similar to this:

Output
...
TASK [Add Docker GPG apt Key] ****************************************
changed: [server1]

TASK [Add Docker Repository] *****************************************
changed: [server1]

TASK [Update apt and install docker-ce] ******************************
changed: [server1]

TASK [Install Docker Module for Python] ******************************
changed: [server1]

TASK [Pull default Docker image] *************************************
changed: [server1]

TASK [Create default containers] *************************************
changed: [server1] => (item=1)
changed: [server1] => (item=2)
changed: [server1] => (item=3)
changed: [server1] => (item=4)

PLAY RECAP ***********************************************************
server1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

When the playbook is finished running, log in via SSH to the server provisioned by Ansible and run docker ps -a to check if the containers were successfully created:

  • sudo docker ps -a

You should see output similar to this:

Output
CONTAINER ID   IMAGE    COMMAND      CREATED         STATUS    PORTS   NAMES
a3fe9bfb89cf   ubuntu   "sleep 1d"   5 minutes ago   Created           docker4
8799c16cde1e   ubuntu   "sleep 1d"   5 minutes ago   Created           docker3
ad0c2123b183   ubuntu   "sleep 1d"   5 minutes ago   Created           docker2
b9350916ffd8   ubuntu   "sleep 1d"   5 minutes ago   Created           docker1

This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

The Playbook Contents

You can find the Docker playbook featured in this tutorial in the ansible-playbooks repository within the DigitalOcean Community GitHub organization. To copy or download the script contents directly, click the Raw button towards the top of the script, or click here to view the raw contents directly.

The full contents are also included here for your convenience:

docker_ubuntu.yml
---
- hosts: all
  become: true
  vars:
    create_containers: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d

  tasks:
    - name: Install aptitude using apt
      apt: name=aptitude state=latest update_cache=yes force_apt_get=yes

    - name: Install required system packages
      apt: name={{ item }} state=latest update_cache=yes
      loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools' ]

    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu bionic stable
        state: present

    - name: Update apt and install docker-ce
      apt: update_cache=yes name=docker-ce state=latest

    - name: Install Docker Module for Python
      pip:
        name: docker

    # Pull image specified by variable default_container_image from Docker Hub
    - name: Pull default Docker image
      docker_image:
        name: "{{ default_container_image }}"
        source: pull

    # Creates the number of containers defined by the variable create_containers, using default values
    - name: Create default containers
      docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ create_containers }}

Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub or the docker_container module to set up container networks.

Conclusion

Automating your infrastructure setup can not only save you time, but it also helps to ensure that your servers will follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams’ development processes.

In this guide, we demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.

If you’d like to include other tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.

How To Install WordPress With Docker Compose

Introduction

WordPress is a free and open-source Content Management System (CMS) built on a MySQL database with PHP processing. Thanks to its extensible plugin architecture and templating system, and the fact that most of its administration can be done through the web interface, WordPress is a popular choice when creating different types of websites, from blogs to product pages to eCommerce sites.

Running WordPress typically involves installing a LAMP (Linux, Apache, MySQL, and PHP) or LEMP (Linux, Nginx, MySQL, and PHP) stack, which can be time-consuming. However, by using tools like Docker and Docker Compose, you can simplify the process of setting up your preferred stack and installing WordPress. Instead of installing individual components by hand, you can use images, which standardize things like libraries, configuration files, and environment variables, and run these images in containers, isolated processes that run on a shared operating system. Additionally, by using Compose, you can coordinate multiple containers — for example, an application and database — to communicate with one another.

In this tutorial, you will build a multi-container WordPress installation. Your containers will include a MySQL database, an Nginx web server, and WordPress itself. You will also secure your installation by obtaining TLS/SSL certificates with Let’s Encrypt for the domain you want associated with your site. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

Prerequisites

To follow this tutorial, you will need:

  • A server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
  • Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.
  • Docker Compose installed on your server, following Step 1 of How To Install Docker Compose on Ubuntu 18.04.
  • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
  • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

    • An A record with example.com pointing to your server’s public IP address.
    • An A record with www.example.com pointing to your server’s public IP address.

Step 1 — Defining the Web Server Configuration

Before running any containers, our first step will be to define the configuration for our Nginx web server. Our configuration file will include some WordPress-specific location blocks, along with a location block to direct Let’s Encrypt verification requests to the Certbot client for automated certificate renewals.

First, create a project directory for your WordPress setup called wordpress and navigate to it:

  • mkdir wordpress && cd wordpress

Next, make a directory for the configuration file:

  • mkdir nginx-conf

Open the file with nano or your favorite editor:

  • nano nginx-conf/nginx.conf

In this file, we will add a server block with directives for our server name and document root, and location blocks to direct the Certbot client’s request for certificates, PHP processing, and static asset requests.

Paste the following code into the file. Be sure to replace example.com with your own domain name:

~/wordpress/nginx-conf/nginx.conf
server {
        listen 80;
        listen [::]:80;

        server_name example.com www.example.com;

        index index.php index.html index.htm;

        root /var/www/html;

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }

        location / {
                try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass wordpress:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
        }

        location ~ /\.ht {
                deny all;
        }

        location = /favicon.ico {
                log_not_found off; access_log off;
        }
        location = /robots.txt {
                log_not_found off; access_log off; allow all;
        }

        location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                expires max;
                log_not_found off;
        }
}

Our server block includes the following information:

Directives:

  • listen: This tells Nginx to listen on port 80, which will allow us to use Certbot’s webroot plugin for our certificate requests. Note that we are not including port 443 yet — we will update our configuration to include SSL once we have successfully obtained our certificates.
  • server_name: This defines your server name and the server block that should be used for requests to your server. Be sure to replace example.com in this line with your own domain name.
  • index: The index directive defines the files that will be used as indexes when processing requests to your server. We’ve modified the default order of priority here, moving index.php in front of index.html so that Nginx prioritizes files called index.php when possible.
  • root: Our root directive names the root directory for requests to our server. This directory, /var/www/html, is created as a mount point at build time by instructions in our WordPress Dockerfile. These Dockerfile instructions also ensure that the files from the WordPress release are mounted to this volume.

Location Blocks:

  • location ~ /.well-known/acme-challenge: This location block will handle requests to the .well-known directory, where Certbot will place a temporary file to validate that the DNS for our domain resolves to our server. With this configuration in place, we will be able to use Certbot’s webroot plugin to obtain certificates for our domain.
  • location /: In this location block, we’ll use a try_files directive to check for files that match individual URI requests. Instead of returning a 404 Not Found status as a default, however, we’ll pass control to WordPress’s index.php file with the request arguments.
  • location ~ \.php$: This location block will handle PHP processing and proxy these requests to our wordpress container. Because our WordPress Docker image will be based on the php:fpm image, we will also include configuration options that are specific to the FastCGI protocol in this block. Nginx requires an independent PHP processor for PHP requests: in our case, these requests will be handled by the php-fpm processor that’s included with the php:fpm image. Additionally, this location block includes FastCGI-specific directives, variables, and options that will proxy requests to the WordPress application running in our wordpress container, set the preferred index for the parsed request URI, and parse URI requests.
  • location ~ /\.ht: This block will handle .htaccess files since Nginx won’t serve them. The deny all directive ensures that .htaccess files will never be served to users.
  • location = /favicon.ico, location = /robots.txt: These blocks ensure that requests to /favicon.ico and /robots.txt will not be logged.
  • location ~* \.(css|gif|ico|jpeg|jpg|js|png)$: This block turns off logging for static asset requests and ensures that these assets are highly cacheable, as they are typically expensive to serve.
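The routing performed by the location / block can be sketched as pure logic: try the literal URI, then the URI as a directory, then fall back to WordPress’s front controller with the original query string. This is a simplified model of try_files, and resolve and the sample paths are hypothetical:

```python
def resolve(uri, args, existing):
    """Model of: try_files $uri $uri/ /index.php$is_args$args"""
    if uri in existing:            # a real file matches the request
        return uri
    if uri + "/" in existing:      # a directory matches
        return uri + "/"
    # Otherwise hand the request to WordPress with the query string intact.
    return "/index.php" + (f"?{args}" if args else "")

print(resolve("/style.css", "", {"/style.css"}))   # → /style.css
print(resolve("/hello-world", "p=1", set()))       # → /index.php?p=1
```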

For more information about FastCGI proxying, see Understanding and Implementing FastCGI Proxying in Nginx. For information about server and location blocks, see Understanding Nginx Server and Location Block Selection Algorithms.

Save and close the file when you are finished editing. If you used nano, do so by pressing CTRL+X, Y, then ENTER.

With your Nginx configuration in place, you can move on to creating environment variables to pass to your application and database containers at runtime.

Step 2 — Defining Environment Variables

Your database and WordPress application containers will need access to certain environment variables at runtime in order for your application data to persist and be accessible to your application. These variables include both sensitive and non-sensitive information: sensitive values for your MySQL root password and application database user and password, and non-sensitive information for your application database name and host.

Rather than setting all of these values in our Docker Compose file (the main file that contains information about how our containers will run), we can set the sensitive values in an .env file and restrict its circulation. This will prevent these values from being copied into our project repositories and exposed publicly.
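Compose reads an .env file as simple KEY=value lines. A rough sketch of the parsing, with blank lines and comments skipped; parse_env_file is a hypothetical helper, not Compose’s actual parser:

```python
def parse_env_file(text):
    """Parse KEY=value lines into a dict, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env

sample = """MYSQL_ROOT_PASSWORD=your_root_password
MYSQL_USER=your_wordpress_database_user
"""
print(parse_env_file(sample))
```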

In your main project directory, ~/wordpress, open a file called .env:

  • nano .env

The confidential values that we will set in this file include a password for our MySQL root user, and a username and password that WordPress will use to access the database.

Add the following variable names and values to the file. Remember to supply your own values here for each variable:

~/wordpress/.env
MYSQL_ROOT_PASSWORD=your_root_password
MYSQL_USER=your_wordpress_database_user
MYSQL_PASSWORD=your_wordpress_database_password

We have included a password for the root administrative account, as well as our preferred username and password for our application database.

Save and close the file when you are finished editing.

Because your .env file contains sensitive information, you will want to ensure that it is included in your project’s .gitignore and .dockerignore files, which tell Git and Docker what files not to copy to your Git repositories and Docker images, respectively.

If you plan to work with Git for version control, initialize your current working directory as a repository with git init:

  • git init

Then open a .gitignore file:

  • nano .gitignore

Add .env to the file:

~/wordpress/.gitignore
.env 

Save and close the file when you are finished editing.

Likewise, it’s a good precaution to add .env to a .dockerignore file, so that it doesn’t end up on your containers when you are using this directory as your build context.

Open the file:

  • nano .dockerignore

Add .env to the file:

~/wordpress/.dockerignore
.env 

Below this, you can optionally add files and directories associated with your application’s development:

~/wordpress/.dockerignore
.env
.git
docker-compose.yml
.dockerignore

Save and close the file when you are finished.

With your sensitive information in place, you can now move on to defining your services in a docker-compose.yml file.

Step 3 — Defining Services with Docker Compose

Your docker-compose.yml file will contain the service definitions for your setup. A service in Compose is a running container, and service definitions specify information about how each container will run.

Using Compose, you can define different services in order to run multi-container applications, since Compose allows you to link these services together with shared networks and volumes. This will be helpful for our current setup since we will create different containers for our database, WordPress application, and web server. We will also create a container to run the Certbot client in order to obtain certificates for our webserver.

To begin, open the docker-compose.yml file:

  • nano docker-compose.yml

Add the following code to define your Compose file version and db database service:

~/wordpress/docker-compose.yml
version: '3'

services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network

The db service definition contains the following options:

  • image: This tells Compose what image to pull to create the container. We are pinning the mysql:8.0 image here to avoid future conflicts as the mysql:latest image continues to be updated. For more information about version pinning and avoiding dependency conflicts, see the Docker documentation on Dockerfile best practices.
  • container_name: This specifies a name for the container.
  • restart: This defines the container restart policy. The default is no, but we have set the container to restart unless it is stopped manually.
  • env_file: This option tells Compose that we would like to add environment variables from a file called .env, located in our build context. In this case, the build context is our current directory.
  • environment: This option allows you to add additional environment variables, beyond those defined in your .env file. We will set the MYSQL_DATABASE variable equal to wordpress to provide a name for our application database. Because this is non-sensitive information, we can include it directly in the docker-compose.yml file.
  • volumes: Here, we’re mounting a named volume called dbdata to the /var/lib/mysql directory on the container. This is the standard data directory for MySQL on most distributions.
  • command: This option specifies a command to override the default CMD instruction for the image. In our case, we will add an option to the Docker image’s standard mysqld command, which starts the MySQL server on the container. This option, --default-authentication-plugin=mysql_native_password, sets the --default-authentication-plugin system variable to mysql_native_password, specifying which authentication mechanism should govern new authentication requests to the server. Since PHP and therefore our WordPress image won’t support MySQL’s newer authentication default, we must make this adjustment in order to authenticate our application database user.
  • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.
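When both env_file and environment are present, Compose merges the two sets of variables, and entries under environment win on key conflicts. Roughly, and treating each as a plain dict for illustration:

```python
# Hypothetical sketch of how Compose combines env sources for the db service.
from_env_file = {"MYSQL_ROOT_PASSWORD": "your_root_password"}
from_environment = {"MYSQL_DATABASE": "wordpress"}

# environment entries override env_file entries on conflicting keys.
container_env = {**from_env_file, **from_environment}
print(container_env["MYSQL_DATABASE"])  # → wordpress
```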

Next, below your db service definition, add the definition for your wordpress application service:

~/wordpress/docker-compose.yml
...
  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - app-network

In this service definition, we are naming our container and defining a restart policy, as we did with the db service. We’re also adding some options specific to this container:

  • depends_on: This option ensures that our containers will start in order of dependency, with the wordpress container starting after the db container. Our WordPress application relies on the existence of our application database and user, so expressing this order of dependency will enable our application to start properly.
  • image: For this setup, we are using the 5.1.1-fpm-alpine WordPress image. As discussed in Step 1, using this image ensures that our application will have the php-fpm processor that Nginx requires to handle PHP processing. This is also an alpine image, derived from the Alpine Linux project, which will help keep our overall image size down. For more information about the benefits and drawbacks of using alpine images and whether or not this makes sense for your application, see the full discussion under the Image Variants section of the Docker Hub WordPress image page.
  • env_file: Again, we specify that we want to pull values from our .env file, since this is where we defined our application database user and password.
  • environment: Here, we’re using the values we defined in our .env file, but we’re assigning them to the variable names that the WordPress image expects: WORDPRESS_DB_USER and WORDPRESS_DB_PASSWORD. We’re also defining a WORDPRESS_DB_HOST, which will be the MySQL server running on the db container that’s accessible on MySQL’s default port, 3306. Our WORDPRESS_DB_NAME will be the same value we specified in the MySQL service definition for our MYSQL_DATABASE: wordpress.
  • volumes: We are mounting a named volume called wordpress to the /var/www/html mountpoint created by the WordPress image. Using a named volume in this way will allow us to share our application code with other containers.
  • networks: We’re also adding the wordpress container to the app-network network.
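A DB host value containing a colon is treated as a host plus port pair, which is why db:3306 points WordPress at the MySQL service on its default port. The split can be sketched as follows; split_db_host is a hypothetical helper, not WordPress code:

```python
def split_db_host(value):
    """Split 'host:port' into (host, port); port is None when absent."""
    host, sep, port = value.partition(":")
    return (host, int(port) if sep else None)

print(split_db_host("db:3306"))  # → ('db', 3306)
```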

Next, below the wordpress application service definition, add the following definition for your webserver Nginx service:

~/wordpress/docker-compose.yml
...
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network

Again, we’re naming our container and making it dependent on the wordpress container in order of starting. We’re also using an alpine image — the 1.15.12-alpine Nginx image.

This service definition also includes the following options:

  • ports: This exposes port 80 to enable the configuration options we defined in our nginx.conf file in Step 1.
  • volumes: Here, we are defining a combination of named volumes and bind mounts:
    • wordpress:/var/www/html: This will mount our WordPress application code to the /var/www/html directory, the directory we set as the root in our Nginx server block.
    • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
    • certbot-etc:/etc/letsencrypt: This will mount the relevant Let’s Encrypt certificates and keys for our domain to the appropriate directory on the container.

And again, we’ve added this container to the app-network network.
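Compose decides between a bind mount and a named volume from the source of each short-syntax volumes entry: a source written as a path (starting with /, ./, ../, or ~/) is bind mounted from the host, while a bare name refers to a named volume. A sketch of that rule, with mount_type as a hypothetical helper:

```python
def mount_type(source):
    """Classify a Compose short-syntax volume source (sketch of the rule)."""
    # Paths (absolute or relative) become bind mounts; bare names become volumes.
    if source.startswith(("/", "./", "../", "~/")):
        return "bind"
    return "volume"

for src in ["wordpress", "./nginx-conf", "certbot-etc"]:
    print(src, "->", mount_type(src))
```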

Finally, below your webserver definition, add your last service definition for the certbot service. Be sure to replace the email address and domain names listed here with your own information:

~/wordpress/docker-compose.yml
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com

This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc and the application code in wordpress.

Again, we’ve used depends_on to specify that the certbot container should be started once the webserver service is running.

We’ve also included a command option that specifies a subcommand to run with the container’s default certbot command. The certonly subcommand will obtain a certificate with the following options:

  • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.
  • --webroot-path: This specifies the path of the webroot directory.
  • --email: Your preferred email for registration and recovery.
  • --agree-tos: This specifies that you agree to ACME’s Subscriber Agreement.
  • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
  • --staging: This tells Certbot that you would like to use Let’s Encrypt’s staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let’s Encrypt’s rate limits documentation.
  • -d: This allows you to specify domain names you would like to apply to your request. In this case, we’ve included example.com and www.example.com. Be sure to replace these with your own domain.

Below the certbot service definition, add your network and volume definitions:

~/wordpress/docker-compose.yml
...
volumes:
  certbot-etc:
  wordpress:
  dbdata:

networks:
  app-network:
    driver: bridge

Our top-level volumes key defines the volumes certbot-etc, wordpress, and dbdata. When Docker creates volumes, the contents of the volume are stored in a directory on the host filesystem, /var/lib/docker/volumes/, that’s managed by Docker. The contents of each volume then get mounted from this directory to any container that uses the volume. In this way, it’s possible to share code and data between containers.

The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network without exposing any ports to the outside world. Thus, our db, wordpress, and webserver containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

The finished docker-compose.yml file will look like this:

~/wordpress/docker-compose.yml
version: '3'

services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network

  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - app-network

  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network

  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com

volumes:
  certbot-etc:
  wordpress:
  dbdata:

networks:
  app-network:
    driver: bridge

Save and close the file when you are finished editing.

With your service definitions in place, you are ready to start the containers and test your certificate requests.

Step 4 — Obtaining SSL Certificates and Credentials

We can start our containers with the docker-compose up command, which will create and run our containers in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

Create the containers with docker-compose up and the -d flag, which will run the db, wordpress, and webserver containers in the background:

  • docker-compose up -d

You will see output confirming that your services have been created:

Output
Creating db ... done
Creating wordpress ... done
Creating webserver ... done
Creating certbot ... done

Using docker-compose ps, check the status of your services:

  • docker-compose ps

If everything was successful, your db, wordpress, and webserver services will be Up and the certbot container will have exited with a 0 status message:

Output
  Name                 Command               State           Ports
-------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp
wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp

If you see anything other than Up in the State column for the db, wordpress, or webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

  • docker-compose logs service_name

You can now check that your certificates have been mounted to the webserver container with docker-compose exec:

  • docker-compose exec webserver ls -la /etc/letsencrypt/live

If your certificate requests were successful, you will see output like this:

Output
total 16
drwx------ 3 root root 4096 May 10 15:45 .
drwxr-xr-x 9 root root 4096 May 10 15:45 ..
-rw-r--r-- 1 root root  740 May 10 15:45 README
drwxr-xr-x 2 root root 4096 May 10 15:45 example.com

Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

Open docker-compose.yml:

  • nano docker-compose.yml

Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition will now look like this:

~/wordpress/docker-compose.yml
...
  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
...

You can now run docker-compose up to recreate the certbot container. We will also include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

  • docker-compose up --force-recreate --no-deps certbot

You will see output indicating that your certificate request was successful:

Output
Recreating certbot ... done
Attaching to certbot
certbot      | Saving debug log to /var/log/letsencrypt/letsencrypt.log
certbot      | Plugins selected: Authenticator webroot, Installer None
certbot      | Renewing an existing certificate
certbot      | Performing the following challenges:
certbot      | http-01 challenge for example.com
certbot      | http-01 challenge for www.example.com
certbot      | Using the webroot path /var/www/html for all unmatched domains.
certbot      | Waiting for verification...
certbot      | Cleaning up challenges
certbot      | IMPORTANT NOTES:
certbot      |  - Congratulations! Your certificate and chain have been saved at:
certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
certbot      |    Your key file has been saved at:
certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
certbot      |    Your cert will expire on 2019-08-08. To obtain a new or tweaked
certbot      |    version of this certificate in the future, simply run certbot
certbot      |    again. To non-interactively renew *all* of your certificates, run
certbot      |    "certbot renew"
certbot      |  - Your account credentials have been saved in your Certbot
certbot      |    configuration directory at /etc/letsencrypt. You should make a
certbot      |    secure backup of this folder now. This configuration directory will
certbot      |    also contain certificates and private keys obtained by Certbot so
certbot      |    making regular backups of this folder is ideal.
certbot      |  - If you like Certbot, please consider supporting our work by:
certbot      |
certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
certbot      |    Donating to EFF:                    https://eff.org/donate-le
certbot      |
certbot exited with code 0

With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

Step 5 — Modifying the Web Server Configuration and Service Definition

Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS, specifying our SSL certificate and key locations, and adding security parameters and headers.

Since you are going to recreate the webserver service to include these additions, you can stop it now:

  • docker-compose stop webserver

Before we modify the configuration file itself, let’s first get the recommended Nginx security parameters from Certbot using curl:

  • curl -sSLo nginx-conf/options-ssl-nginx.conf https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf

This command will save these parameters in a file called options-ssl-nginx.conf, located in the nginx-conf directory.

Next, remove the Nginx configuration file you created earlier:

  • rm nginx-conf/nginx.conf

Open another version of the file:

  • nano nginx-conf/nginx.conf

Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

~/wordpress/nginx-conf/nginx.conf
server {
        listen 80;
        listen [::]:80;

        server_name example.com www.example.com;

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }

        location / {
                rewrite ^ https://$host$request_uri? permanent;
        }
}

server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name example.com www.example.com;

        index index.php index.html index.htm;

        root /var/www/html;

        server_tokens off;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        include /etc/nginx/conf.d/options-ssl-nginx.conf;

        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications

        location / {
                try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass wordpress:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
        }

        location ~ /\.ht {
                deny all;
        }

        location = /favicon.ico {
                log_not_found off;
                access_log off;
        }

        location = /robots.txt {
                log_not_found off;
                access_log off;
                allow all;
        }

        location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                expires max;
                log_not_found off;
        }
}

The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that redirects HTTP requests to HTTPS.
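To see what that rewrite does, here is an illustrative sketch in plain shell (nginx is not involved): it expands the same variables nginx would use when building the redirect target for a hypothetical request.

```shell
# Illustrative only: how nginx expands "rewrite ^ https://$host$request_uri? permanent;"
# for a request to http://example.com/blog?page=2. Plain shell variables stand
# in for the nginx $host and $request_uri variables.
host="example.com"              # nginx $host
request_uri="/blog?page=2"      # nginx $request_uri
echo "301 redirect to: https://${host}${request_uri}"
```

Any HTTP request that is not an ACME challenge is answered with a permanent redirect to the equivalent HTTPS URL.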

The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04.

This block also includes our SSL certificate and key locations, along with the recommended Certbot security parameters that we saved to nginx-conf/options-ssl-nginx.conf.

Additionally, we’ve included some security headers that will enable us to get A ratings on things like the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out — enable this only if you understand the implications and have assessed its “preload” functionality.
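As a quick sanity check, you could grep your configuration for the headers you expect before deploying. The sketch below is illustrative only: it runs against a stand-in temporary file; substitute your real ~/wordpress/nginx-conf/nginx.conf when checking an actual setup.

```shell
# Illustrative: verify that expected hardening headers appear in a config file.
# CONF points at a stand-in temp file here, not the tutorial's real nginx.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
EOF

result=""
for header in X-Frame-Options X-Content-Type-Options Referrer-Policy; do
    if grep -q "$header" "$CONF"; then
        result="$result $header:present"
    else
        result="$result $header:missing"
    fi
done
echo "$result"
rm -f "$CONF"
```

In this stand-in file, Referrer-Policy is reported missing, which is exactly the kind of omission the check is meant to catch.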

Our root and index directives are also located in this block, as are the rest of the WordPress-specific location blocks discussed in Step 1.

Once you have finished editing, save and close the file.

Before recreating the webserver service, you will need to add a 443 port mapping to your webserver service definition.

Open your docker-compose.yml file:

  • nano docker-compose.yml

In the webserver service definition, add the following port mapping:

~/wordpress/docker-compose.yml
...
  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network

The docker-compose.yml file will look like this when finished:

~/wordpress/docker-compose.yml
version: '3'

services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network

  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - app-network

  webserver:
    depends_on:
      - wordpress
    image: nginx:1.15.12-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
    networks:
      - app-network

  certbot:
    depends_on:
      - webserver
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - wordpress:/var/www/html
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com

volumes:
  certbot-etc:
  wordpress:
  dbdata:

networks:
  app-network:
    driver: bridge

Save and close the file when you are finished editing.

Recreate the webserver service:

  • docker-compose up -d --force-recreate --no-deps webserver

Check your services with docker-compose ps:

  • docker-compose ps

You should see output indicating that your db, wordpress, and webserver services are running:

Output
  Name                 Command               State                     Ports
----------------------------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp

With your containers running, you can now complete your WordPress installation through the web interface.

Step 6 — Completing the Installation Through the Web Interface

With our containers running, we can finish the installation through the WordPress web interface.

In your web browser, navigate to your server’s domain. Remember to substitute example.com here with your own domain name:

https://example.com 

Select the language you would like to use:

WordPress Language Selector

After clicking Continue, you will land on the main setup page, where you will need to pick a name for your site and a username. It’s a good idea to choose a memorable username here (rather than “admin”) and a strong password. You can use the password that WordPress generates automatically or create your own.

Finally, you will need to enter your email address and decide whether or not you want to discourage search engines from indexing your site:

WordPress Main Setup Page

Clicking on Install WordPress at the bottom of the page will take you to a login prompt:

WordPress Login Screen

Once logged in, you will have access to the WordPress administration dashboard:

WordPress Main Admin Dashboard

With your WordPress installation complete, you can now take steps to ensure that your SSL certificates will renew automatically.

Step 7 — Renewing Certificates

Let’s Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will create a cron job to periodically run a script that will renew our certificates and reload our Nginx configuration.

First, open a script called ssl_renew.sh:

  • nano ssl_renew.sh

Add the following code to the script to renew your certificates and reload your web server configuration. Remember to replace the example username here with your own non-root username:

~/wordpress/ssl_renew.sh
#!/bin/bash

COMPOSE="/usr/local/bin/docker-compose --no-ansi"

cd /home/sammy/wordpress/
$COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver

This script first assigns the docker-compose binary to a variable called COMPOSE, and specifies the --no-ansi option, which will run docker-compose commands without ANSI control characters. It then changes to the ~/wordpress project directory and runs the following docker-compose commands:

  • docker-compose run: This will start a certbot container and override the command provided in our certbot service definition. Instead of using the certonly subcommand, we’re using the renew subcommand here, which will renew certificates that are close to expiring. We’ve included the --dry-run option here to test our script.
  • docker-compose kill: This will send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.
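The key design choice in the script is the && between the two commands: the reload only happens if the renewal exits successfully. The sketch below is illustrative only, using stand-in shell functions instead of real docker-compose calls to show that short-circuit behavior.

```shell
# Illustrative only: mimic the script's "renew && reload" pattern with
# stand-in functions rather than real docker-compose commands.
renew()  { return 0; }              # pretend the renewal exited with status 0
reload() { echo "reloading nginx"; }

# Same short-circuit logic as:
#   $COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver
ok_run=$(renew && reload)

renew_fail() { return 1; }          # pretend the renewal failed
fail_run=$(renew_fail && reload || echo "renewal failed; nginx not reloaded")

echo "$ok_run"
echo "$fail_run"
```

If the renewal fails, Nginx keeps serving the existing (still valid) certificate rather than being reloaded for no reason.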

Close the file when you are finished editing. Make it executable:

  • chmod +x ssl_renew.sh

Next, open your root crontab file to run the renewal script at a specified interval:

  • sudo crontab -e

If this is your first time editing this file, you will be asked to choose an editor:

Output
no crontab for root - using an empty one

Select an editor.  To change later, run 'select-editor'.
  1. /bin/nano        <---- easiest
  2. /usr/bin/vim.basic
  3. /usr/bin/vim.tiny
  4. /bin/ed

Choose 1-4 [1]:
...

At the bottom of the file, add the following line:

crontab
...
*/5 * * * * /home/sammy/wordpress/ssl_renew.sh >> /var/log/cron.log 2>&1

This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.
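If the five cron fields are unfamiliar, the illustrative sketch below pulls the entry above apart in plain shell so you can see which field controls which part of the schedule (minute, hour, day of month, month, day of week, then the command).

```shell
# Illustrative: split the crontab entry above into its five schedule fields
# plus the command. set -f disables globbing so the '*' characters survive
# word-splitting instead of matching filenames.
set -f
entry='*/5 * * * * /home/sammy/wordpress/ssl_renew.sh'
set -- $entry
minute=$1; hour=$2; dom=$3; month=$4; dow=$5; cmd=$6
set +f

echo "minute=$minute hour=$hour day-of-month=$dom month=$month day-of-week=$dow"
echo "command=$cmd"
```

Here `*/5` in the minute field with `*` everywhere else means "every five minutes, every hour, every day."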

After five minutes, check cron.log to see whether or not the renewal request has succeeded:

  • tail -f /var/log/cron.log

You should see output confirming a successful renewal:

Output
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

crontab
...
0 12 * * * /home/sammy/wordpress/ssl_renew.sh >> /var/log/cron.log 2>&1

You will also want to remove the --dry-run option from your ssl_renew.sh script:

~/wordpress/ssl_renew.sh
#!/bin/bash

COMPOSE="/usr/local/bin/docker-compose --no-ansi"

cd /home/sammy/wordpress/
$COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver

Your cron job will ensure that your Let’s Encrypt certificates don’t lapse by renewing them when they are eligible. You can also set up log rotation with the Logrotate utility to rotate and compress your log files.

Conclusion

In this tutorial, you used Docker Compose to create a WordPress installation with an Nginx web server. As part of this workflow, you obtained TLS/SSL certificates for the domain you want associated with your WordPress site. Additionally, you created a cron job to renew these certificates when necessary.

As additional steps to improve site performance and redundancy, you can consult the following articles on delivering and backing up WordPress assets:

If you are interested in exploring a containerized workflow with Kubernetes, you can also check out:


How To Optimize Docker Images for Production

The author selected Code.org to receive a donation as part of the Write for DOnations program.

Introduction

In a production environment, Docker makes it easy to build, deploy, and run applications inside containers. Containers let developers gather an application and all of its core needs and dependencies into a single package that you can turn into a Docker image and replicate. Docker images are built from Dockerfiles. A Dockerfile is a file where you define what the image will look like, which base operating system it will have, and which commands will run inside it.

Large Docker images can lengthen the time it takes to build and send images between clusters and cloud providers. If, for example, you have a gigabyte-sized image to push every time one of your developers triggers a build, the throughput you create on your network will add up during the CI/CD process, making your application sluggish and ultimately costing you resources. Because of this, Docker images suited for production should only have the bare necessities installed.
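To put a rough number on that cost, here is a back-of-the-envelope sketch with illustrative values (the one-gigabyte image from the example above and an assumed 100 Mbit/s link), ignoring compression and layer caching:

```shell
# Illustrative estimate: seconds needed to transfer an image over the network.
# Both numbers are assumptions for the sake of the example.
image_mb=1000        # a one-gigabyte image, as in the example above
link_mbit=100        # assumed 100 Mbit/s link between host and registry
seconds=$(( image_mb * 8 / link_mbit ))
echo "~${seconds}s per push at ${link_mbit} Mbit/s"
```

Over dozens of builds a day, that per-push delay compounds quickly, which is why trimming image size pays off in CI/CD pipelines.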

There are several ways to decrease the size of Docker images to optimize them for production. First of all, these images don't usually need build tools to run their applications, so there's no need to include them. By using a multi-stage build process, you can use intermediate images to compile and build the code, install dependencies, and package everything into the smallest size possible, then copy the final version of your application into an empty image without build tools. Additionally, you can use an image with a tiny base, such as Alpine Linux. Alpine is a Linux distribution well suited for production because it contains only the bare necessities that your application needs to run.

In this tutorial, you'll optimize Docker images in a few simple steps, making them smaller, faster, and better suited for production. You'll build images for a sample Go API in several different Docker containers, starting with Ubuntu and language-specific images, then moving on to the Alpine distribution. You will also use multi-stage builds to optimize your images for production. The end goal of this tutorial is to show the size difference between default Ubuntu images and their optimized counterparts, and to show the advantage of multi-stage builds. After reading this tutorial, you'll be able to apply these techniques to your own projects and CI/CD pipelines.

Note: This tutorial uses an API written in Go as an example. This simple API will give you a clear understanding of how you would approach optimizing Go microservices with Docker images. Although this tutorial uses a Go API, you can apply this process to almost any programming language.

Prerequisites

Before you begin, you will need:

Step 1 — Downloading the Sample Go API

Before optimizing your Docker image, you must first download the sample API that you will build your Docker images from. Using a simple Go API will showcase all the key steps of building and running an application inside a Docker container. This tutorial uses Go because it's a compiled language like C++ or Java, but unlike them, it has a very small footprint.

On your server, begin by cloning the sample Go API:

  • git clone https://github.com/do-community/mux-go-api.git

Once you have cloned the project, you'll have a directory named mux-go-api on your server. Move into this directory with cd:

  • cd mux-go-api

This will be the home directory for your project. You will build your Docker images from this directory. Inside, you'll find the source code for an API written in Go in the api.go file. Although this API is minimal and has only a few endpoints, it will be appropriate for simulating a production-ready API for the purposes of this tutorial.

Now that you have downloaded the sample Go API, you're ready to build a base Ubuntu Docker image, against which you can compare the later, optimized images.

Step 2 — Building a Base Ubuntu Image

For your first Docker image, it will be useful to see what it looks like when you start with an Ubuntu base image. This will package your sample API in an environment similar to the software you're already running on your Ubuntu server. Inside the image, you'll install the various packages and modules you need to run your application. You will find, however, that this process creates a rather heavy Ubuntu image that will affect build time and the code readability of your Dockerfile.

Start by writing a Dockerfile that instructs Docker to create an Ubuntu image, install Go, and run the sample API. Make sure to create the Dockerfile in the directory of the cloned repo. If you cloned to the home directory, it should be $HOME/mux-go-api.

Make a new file called Dockerfile.ubuntu. Open it in nano or your favorite text editor:

  • nano ~/mux-go-api/Dockerfile.ubuntu

In this Dockerfile, you'll define an Ubuntu image and install Golang. Then you'll proceed to install the needed dependencies and build the binary. Add the following contents to Dockerfile.ubuntu:

~/mux-go-api/Dockerfile.ubuntu
FROM ubuntu:18.04

RUN apt-get update -y \
  && apt-get install -y git gcc make golang-1.10

ENV GOROOT /usr/lib/go-1.10
ENV PATH $GOROOT/bin:$PATH
ENV GOPATH /root/go
ENV APIPATH /root/go/src/api

WORKDIR $APIPATH
COPY . .

RUN \
    go get -d -v \
  && go install -v \
  && go build

EXPOSE 3000
CMD ["./api"]

Starting from the top, the FROM command specifies which base operating system the image will have. Then the RUN command installs the Go language during the creation of the image. ENV sets the specific environment variables the Go compiler needs in order to work properly. WORKDIR specifies the directory where we want to copy over the code, and the COPY command takes the code from the directory where Dockerfile.ubuntu lives and copies it into the image. The final RUN command installs the Go dependencies needed for the source code to compile and run the API.

Note: Using the && operators to string together RUN commands is important in optimizing Dockerfiles, because every RUN command creates a new layer, and every new layer increases the size of the final image.
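To make the layering point concrete, here is an illustrative comparison (not part of this tutorial's Dockerfile): both fragments install the same packages, but the first produces three image layers while the second produces one.

```Dockerfile
# Three RUN instructions -> three image layers:
RUN apt-get update -y
RUN apt-get install -y git
RUN apt-get install -y gcc

# One chained RUN instruction -> a single layer:
RUN apt-get update -y \
    && apt-get install -y git gcc
```

Chaining also ensures that apt-get update and apt-get install run in the same layer, so the package index is never stale relative to the install step.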

Save and exit the file. Now you can run the build command to create a Docker image from the Dockerfile you just made:

  • docker build -f Dockerfile.ubuntu -t ubuntu .

The build command builds an image from a Dockerfile. The -f flag specifies that you want to build from the Dockerfile.ubuntu file, while -t stands for tag, meaning you're tagging the image with the name ubuntu. The final dot represents the current context where Dockerfile.ubuntu is located.

This will take a while, so feel free to take a break. Once the build is done, you'll have an Ubuntu image ready to run your API. But the final size of the image might not be ideal; anything above a few hundred MB for this API would be considered an overly large image.

Run the following command to list all Docker images and find the size of your Ubuntu image:

  • docker images

You'll see output showing the image you just built:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              61b2096f6871        33 seconds ago      636MB
. . .

As highlighted in the output, this image has a size of 636MB for a basic Golang API, a number that may vary slightly from machine to machine. Over multiple builds, this large size will significantly affect deployment times and network throughput.

In this section, you built an Ubuntu image with all the needed Go tools and dependencies to run the API you cloned in Step 1. In the next section, you'll use a pre-built, language-specific Docker image to simplify your Dockerfile and streamline the build process.

Step 3 — Building a Language-Specific Base Image

Pre-built images are ordinary base images that users have modified to include tools for a specific situation. Users can then push these images to the Docker Hub image repository, allowing other users to use the shared image instead of having to write their own individual Dockerfiles. This is a common process in production situations, and you can find various pre-built images on Docker Hub for almost any use case. In this step, you'll build your sample API using a Go-specific image that already has the compiler and dependencies installed.

With pre-built base images that already contain the tools you need to build and run your app, you can cut down the build time significantly. Because you're starting with a base that has all the needed tools pre-installed, you can skip adding them to your Dockerfile, making it look a lot cleaner and ultimately decreasing the build time.

Go ahead and create another Dockerfile and name it Dockerfile.golang. Open it in your text editor:

  • nano ~/mux-go-api/Dockerfile.golang

This file will be significantly more concise than the previous one, because it has all the Go-specific dependencies, tools, and compiler pre-installed.

Now, add the following lines:

~/mux-go-api/Dockerfile.golang
FROM golang:1.10

WORKDIR /go/src/api
COPY . .

RUN \
    go get -d -v \
    && go install -v \
    && go build

EXPOSE 3000
CMD ["./api"]

Starting from the top, you'll find that the FROM statement is now golang:1.10. This means Docker will fetch a pre-built Go image from Docker Hub that has all the needed Go tools already installed.

Now, once again, build the Docker image with:

  • docker build -f Dockerfile.golang -t golang .

Check the final size of the image with the following command:

  • docker images

This will yield output similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
golang              latest              eaee5f524da2        40 seconds ago      744MB
. . .

Although the Dockerfile itself is more efficient and the build time is shorter, the total image size has actually increased. The pre-built Golang image is around 744MB, a significant amount.

This is the preferred way to build Docker images. It gives you a base image that the community has approved as the standard to use for the specified language, in this case Go. However, to make an image ready for production, you need to cut away the parts that the running application doesn't need.

Keep in mind that using these heavy images is fine when you're unsure about your needs. Feel free to use them both as throwaway containers and as the base for building other images. For development or testing purposes, where you don't need to think about sending images through the network, it's perfectly fine to use heavy images. But if you want to optimize deployments, you need to try your best to make your images as small as possible.

Now that you've tested a language-specific image, you can move on to the next step, in which you'll use the lightweight Alpine Linux distribution as a base image to slim down your Docker image.

Step 4 — Building Base Alpine Images

One of the easiest ways to optimize your Docker images is to use smaller base images. Alpine is a lightweight Linux distribution designed for security and resource efficiency. The Alpine Docker image uses musl libc and BusyBox to stay compact, requiring no more than 8MB in a container to run. The tiny size is due to binary packages being thinned out and split, giving you more control over what you install, which keeps the environment as small and efficient as possible.

The process of creating an Alpine image is similar to how you created the Ubuntu image in Step 2. First, create a new file called Dockerfile.alpine:

  • nano ~/mux-go-api/Dockerfile.alpine

Now add this snippet:

~/mux-go-api/Dockerfile.alpine
FROM alpine:3.8

RUN apk add --no-cache \
    ca-certificates \
    git \
    gcc \
    musl-dev \
    openssl \
    go

ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
ENV APIPATH $GOPATH/src/api
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" "$APIPATH" && chmod -R 777 "$GOPATH"

WORKDIR $APIPATH
COPY . .

RUN \
    go get -d -v \
    && go install -v \
    && go build

EXPOSE 3000
CMD ["./api"]

Here you're adding the apk add command to use Alpine's package manager to install Go and all the libraries it requires. As with the Ubuntu image, you need to set the environment variables as well.

Go ahead and build the image:

  • docker build -f Dockerfile.alpine -t alpine .

Once more, check the image size:

  • docker images

You'll receive output similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
alpine              latest              ee35a601158d        30 seconds ago      426MB
. . .

The size has dropped down to around 426MB.

The reduced size of the Alpine base image has shrunk the final image size, but there are a few more things you can do to make it even smaller.

Next, try using a prebuilt Alpine image for Go. This will make the Dockerfile shorter, and will also cut down the size of the final image. Because the prebuilt Alpine image for Go is built with Go compiled from source, its footprint is significantly smaller.

Start by creating a new file called Dockerfile.golang-alpine:

  • nano ~/mux-go-api/Dockerfile.golang-alpine

Add the following contents to the file:

~/mux-go-api/Dockerfile.golang-alpine
FROM golang:1.10-alpine3.8

RUN apk add --no-cache --update git

WORKDIR /go/src/api
COPY . .

RUN go get -d -v \
  && go install -v \
  && go build

EXPOSE 3000
CMD ["./api"]

The only differences between Dockerfile.golang-alpine and Dockerfile.alpine are the FROM command and the first RUN command. Now, the FROM command specifies a golang image with the 1.10-alpine3.8 tag, and RUN only has a single command for the Git installation. You need Git for the go get command to work in the second RUN command at the bottom of Dockerfile.golang-alpine.

Build the image with the following command:

  • docker build -f Dockerfile.golang-alpine -t golang-alpine .

Get your list of images:

  • docker images

You will receive the following output:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
golang-alpine       latest              97103a8b912b        49 seconds ago      288MB

Now the image size is down to around 288MB.

Even though you've managed to cut the size down a lot, there's one last thing you can do to get the image ready for production. It's called a multi-stage build. With multi-stage builds, you can use one image to build the application while using another, lighter image to package the compiled application for production, a process you'll run through in the next step.

Step 5 — Excluding Build Tools with a Multi-Stage Build

Ideally, images that you run in production shouldn't have any build tools installed or dependencies that are redundant for the production application to run. You can remove these from the final Docker image by using multi-stage builds. This works by building the binary, in other words, the compiled Go application, in an intermediate container, then copying it over to an empty container that doesn't have any unnecessary dependencies.

Start by creating another file called Dockerfile.multistage:

  • nano ~/mux-go-api/Dockerfile.multistage

What you'll add here will be familiar. Start off by adding the exact same code as in Dockerfile.golang-alpine. But this time, also add a second image where you'll copy the binary from the first image.

~/mux-go-api/Dockerfile.multistage
FROM golang:1.10-alpine3.8 AS multistage

RUN apk add --no-cache --update git

WORKDIR /go/src/api
COPY . .

RUN go get -d -v \
  && go install -v \
  && go build

##

FROM alpine:3.8
COPY --from=multistage /go/bin/api /go/bin/
EXPOSE 3000
CMD ["/go/bin/api"]

Save and close the file. Here you have two FROM commands. The first is identical to Dockerfile.golang-alpine, except for having an additional AS multistage in the FROM command. This gives the stage the name multistage, which you then reference at the bottom of the Dockerfile.multistage file. In the second FROM command, you take a bare alpine base image and COPY the compiled Go application from the multistage image into it. This process further reduces the final image size, making it ready for production.

Run the build with the following command:

  • docker build -f Dockerfile.multistage -t prod .

Check the image size now, after using a multi-stage build:

  • docker images

You'll find two new images instead of just one:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
prod                latest              82fc005abc40        38 seconds ago      11.3MB
<none>              <none>              d7855c8f8280        38 seconds ago      294MB
. . .

The <none> image is the multistage image built with the FROM golang:1.10-alpine3.8 AS multistage command. It's only an intermediary used to build and compile the Go application, while the prod image in this context is the final image that contains only the compiled Go application.

From an initial 744MB, you've shaved the image size down to around 11.3MB. Keeping track of a tiny image like this and sending it over the network to your production servers will be much easier than with an image of over 700MB, and it will save you significant resources in the long run.
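Those savings compound across every push and pull. As a quick sanity check of the figures above (744MB for the prebuilt golang image versus roughly 11.3MB for the multi-stage build), the reduction works out to about 98.5%:

```shell
# Compare the initial golang image size with the final multi-stage size (in MB,
# taken from the docker images outputs above).
initial=744
final=11.3
awk -v a="$initial" -v b="$final" \
    'BEGIN { printf "reduction: %.1f%%\n", (a - b) / a * 100 }'
# → reduction: 98.5%
```

In other words, the production image is less than 2% of the size you started with.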

Conclusion

In this tutorial, you optimized Docker images for production using different base Docker images and an intermediate image to compile and build the code. This way, you've packaged your sample API into the smallest size possible. You can use these techniques to improve the build and deployment speed of your Docker applications and any CI/CD pipelines you may have.

If you're interested in learning more about building applications with Docker, check out our How To Build a Node.js Application with Docker tutorial. For more conceptual information on optimizing containers, see Building Optimized Containers for Kubernetes.

DigitalOcean Community Tutorials

How To Install and Use Docker on Ubuntu 18.04

A previous version of this tutorial was written by finid.

Introduction

Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They're similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

This tutorial explains how to install and use Docker Community Edition (CE) on Ubuntu 18.04. You'll install Docker, work with containers and images, and push an image to a Docker Repository.

Prerequisites

To follow this tutorial, you will need the following:

  • One Ubuntu 18.04 server set up by following the Ubuntu 18.04 initial server setup guide, including a sudo non-root user and a firewall.
  • An account on Docker Hub if you wish to create your own images and push them to Docker Hub, as shown in Steps 7 and 8.

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

First, update your existing list of packages:

  • sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

  • sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

  • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo:

  • sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

  • apt-cache policy docker-ce

You'll see output like this, although the version number for Docker may be different:

Output of apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu
  Version table:
     18.03.1~ce~3-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 18.04 (bionic).

Finally, install Docker:

  • sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

  • sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 10096 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           ├─10096 /usr/bin/dockerd -H fd://
           └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

Step 2 — Executing the Docker Command Without sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  • sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

  • su - ${USER}

You will be prompted to enter your user's password to continue.

Confirm that your user is now added to the docker group by typing:

  • id -nG
Output
sammy sudo docker

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

  • sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Let's explore the docker command next.

Step 3 — Using the Docker Command

Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

  • docker [option] [command] [arguments]

To view all available subcommands, type:

  • docker

As of Docker 18, the complete list of available subcommands includes:

Output
attach      Attach local standard input, output, and error streams to a running container
build       Build an image from a Dockerfile
commit      Create a new image from a container's changes
cp          Copy files/folders between a container and the local filesystem
create      Create a new container
diff        Inspect changes to files or directories on a container's filesystem
events      Get real time events from the server
exec        Run a command in a running container
export      Export a container's filesystem as a tar archive
history     Show the history of an image
images      List images
import      Import the contents from a tarball to create a filesystem image
info        Display system-wide information
inspect     Return low-level information on Docker objects
kill        Kill one or more running containers
load        Load an image from a tar archive or STDIN
login       Log in to a Docker registry
logout      Log out from a Docker registry
logs        Fetch the logs of a container
pause       Pause all processes within one or more containers
port        List port mappings or a specific mapping for the container
ps          List containers
pull        Pull an image or a repository from a registry
push        Push an image or a repository to a registry
rename      Rename a container
restart     Restart one or more containers
rm          Remove one or more containers
rmi         Remove one or more images
run         Run a command in a new container
save        Save one or more images to a tar archive (streamed to STDOUT by default)
search      Search the Docker Hub for images
start       Start one or more stopped containers
stats       Display a live stream of container(s) resource usage statistics
stop        Stop one or more running containers
tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top         Display the running processes of a container
unpause     Unpause all processes within one or more containers
update      Update configuration of one or more containers
version     Show the Docker version information
wait        Block until one or more containers stop, then print their exit codes

To view the options available to a specific command, type:

  • docker docker-subcommand --help

To view system-wide information about Docker, use:

  • docker info

Let's explore some of these commands. We'll start by working with images.

Step 4 — Working with Docker Images

Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry maintained by Docker, the company behind the project. Anyone can create and host their Docker images on Docker Hub, so most applications and Linux distributions you'll need for your work already have images hosted there.

To check whether you can access and download images from Docker Hub, type:

  • docker run hello-world

The output will indicate that Docker is working correctly:

Output
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and ran the application within the container, displaying the message.

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

  • docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

Output
NAME                                                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                                                    Ubuntu is a Debian-based Linux operating sys…   7917    [OK]
dorowu/ubuntu-desktop-lxde-vnc                            Ubuntu with openssh-server and NoVNC            193                [OK]
rastasheep/ubuntu-sshd                                    Dockerized SSH service, built on top of offi…   156                [OK]
ansible/ubuntu14.04-ansible                               Ubuntu 14.04 LTS with ansible                   93                 [OK]
ubuntu-upstart                                            Upstart is an event-based replacement for th…   87      [OK]
neurodebian                                               NeuroDebian provides neuroscience research s…   50      [OK]
ubuntu-debootstrap                                        debootstrap --variant=minbase --components=m…   38      [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5      ubuntu-16-nginx-php-phpmyadmin-mysql-5          36                 [OK]
nuagebec/ubuntu                                           Simple always updated Ubuntu docker images w…   23                 [OK]
tutum/ubuntu                                              Simple Ubuntu docker images with SSH access     18
i386/ubuntu                                               Ubuntu is a Debian-based Linux operating sys…   13
ppc64le/ubuntu                                            Ubuntu is a Debian-based Linux operating sys…   12
1and1internet/ubuntu-16-apache-php-7.0                    ubuntu-16-apache-php-7.0                        10                 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mariadb-10   ubuntu-16-nginx-php-phpmyadmin-mariadb-10       6                  [OK]
eclipse/ubuntu_jdk8                                       Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   6                  [OK]
codenvy/ubuntu_jdk8                                       Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   4                  [OK]
darksheer/ubuntu                                          Base Ubuntu Image -- Updated hourly             4                  [OK]
1and1internet/ubuntu-16-apache                            ubuntu-16-apache                                3                  [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4         ubuntu-16-nginx-php-5.6-wordpress-4             3                  [OK]
1and1internet/ubuntu-16-sshd                              ubuntu-16-sshd                                  1                  [OK]
pivotaldata/ubuntu                                        A quick freshening-up of the base Ubuntu doc…   1
1and1internet/ubuntu-16-healthcheck                       ubuntu-16-healthcheck                           0                  [OK]
pivotaldata/ubuntu-gpdb-dev                               Ubuntu images for GPDB development              0
smartentry/ubuntu                                         ubuntu with smartentry                          0                  [OK]
ossobv/ubuntu
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand.

Execute the following command to download the official ubuntu image to your computer:

  • docker pull ubuntu

You'll see the following output:

Output
Using default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

To see the images that have been downloaded to your computer, type:

  • docker images

The output should look similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              113a43faa138        4 weeks ago         81.2MB
hello-world         latest              e38bc07ac18e        2 months ago        1.85kB

As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Let's look at how to run containers in more detail.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they're similar to virtual machines, only more resource-friendly.

As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

  • docker run -it ubuntu

Your command prompt should change to reflect the fact that you're now working inside the container, and should take this form:

Output
root@d9b100f2f636:/#

Note the container id in the command prompt. In this example, it is d9b100f2f636. You'll need that container ID later to identify the container when you want to remove it.

Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo, because you're operating inside the container as the root user:

  • apt update

Then install any application in it. Let's install Node.js:

  • apt install nodejs

This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

  • node -v

You'll see the version number displayed in your terminal:

Output
v8.10.0

Any changes you make inside the container only apply to that container.

To exit the container, type exit at the prompt.

Let's look at managing the containers on our system next.

Step 6 — Managing Docker Containers

After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:

  • docker ps

You will see output similar to the following:

Output
CONTAINER ID IMAGE COMMAND CREATED

In this tutorial, you started two containers: one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

To view all containers — active and inactive — run docker ps with the -a switch:

  • docker ps -a

You'll see output similar to this:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 8 minutes ago                           sharp_volhard
01c950718166        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       festive_williams

To view the latest container you created, pass it the -l switch:

  • docker ps -l

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 10 minutes ago                       sharp_volhard

To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636:

  • docker start d9b100f2f636

The container will start, and you can use docker ps to see its status:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Up 8 seconds                            sharp_volhard  

To stop a running container, use docker stop, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is sharp_volhard:

  • docker stop sharp_volhard

Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it:

  • docker rm festive_williams

You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it's stopped. See the docker run --help command for more information on these options and others.

Containers can be turned into images which you can use to build new containers. Let's look at how that works.

Step 7 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. But you might want to reuse this Node.js container as the basis for new images later.

Then commit the changes to a new Docker image instance using the following command.

  • docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

  • docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs

When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

  • docker images

You'll see output like this:

Output
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
sammy/ubuntu-nodejs   latest              7c1f35226ca6        7 seconds ago       179MB
ubuntu                latest              113a43faa138        4 weeks ago         81.2MB
hello-world           latest              e38bc07ac18e        2 months ago        1.85kB

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that NodeJS was installed. So next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.
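Purely as an illustration (Dockerfiles are covered in detail in other tutorials), the manual steps from Step 5 — start from Ubuntu, update the package database, and install Node.js — could be expressed in a Dockerfile roughly like this. This is a sketch, not the method used in this tutorial, and the ubuntu tag shown is an assumption:

```dockerfile
# Hypothetical Dockerfile reproducing the manual ubuntu-nodejs setup from Step 5
FROM ubuntu:latest

# The same commands run interactively earlier, now automated at build time
RUN apt update && apt install -y nodejs
```

Building this with docker build would produce an image comparable to the one created with docker commit, but in a repeatable, scripted way.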

Now let's share the new image with others so they can create containers from it.

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

To push your image, first log into Docker Hub:

  • docker login -u docker-registry-username

You'll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

  • docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

Then you may push your own image:

  • docker push docker-registry-username/docker-image-name

To push the ubuntu-nodejs image to the sammy repository, the command would be:

  • docker push sammy/ubuntu-nodejs

The process may take some time to complete as it uploads the image, but when completed, the output will look like this:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...

After pushing an image to a registry, it should be listed on your account's dashboard, as shown in the image below.

New Docker image listing on Docker Hub

If a push attempt results in an error of this sort, then you likely did not log in:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

Conclusion

In this tutorial, you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.


How To Install and Use Docker on Ubuntu 18.04

A previous version of this tutorial was written by finid.

Introduction

Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They're similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

In this tutorial, you'll install and use Docker Community Edition (CE) on Ubuntu 18.04. You'll install Docker itself, work with containers and images, and push an image to a Docker Repository.

Prerequisites

To follow this tutorial, you will need the following:

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

First, update your existing list of packages:

  • sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

  • sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

  • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo:

  • sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

  • apt-cache policy docker-ce

You'll see output like this, although the version number for Docker may be different:

Output of apt-cache policy docker-ce
docker-ce:   Installed: (none)   Candidate: 18.03.1~ce~3-0~ubuntu   Version table:      18.03.1~ce~3-0~ubuntu 500         500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages 

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 18.04 (bionic).
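If you want to verify this check from a script rather than by eye, the Candidate line can be pulled out of the apt-cache output with a little awk. A minimal sketch, with the sample output hardcoded for illustration; on a real host you would pipe `apt-cache policy docker-ce` directly:

```shell
# Extract the Candidate version from sample `apt-cache policy docker-ce` output.
# The variable below holds a hardcoded sample; on a real host, replace it with
# the live command: apt-cache policy docker-ce | awk '/Candidate:/ {print $2}'
sample='docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu'
printf '%s\n' "$sample" | awk '/Candidate:/ {print $2}'
# → 18.03.1~ce~3-0~ubuntu
```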

Finally, install Docker:

  • sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

  • sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago Docs: https://docs.docker.com Main PID: 10096 (dockerd) Tasks: 16 CGroup: /system.slice/docker.service ├─10096 /usr/bin/dockerd -H fd:// └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml
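For scripted provisioning you may prefer a one-line health check over reading the full status output. A hedged sketch, which assumes systemd and falls back to a message where it is absent:

```shell
# Print the Docker service state ("active", "inactive", ...) without paging
# through the full `systemctl status` output. `|| true` keeps the script
# going even when the unit is not active or systemd is not running.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active docker || true
else
  echo "systemctl not available on this system"
fi
```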

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

Paso 2 — Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  • sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

  • su - ${USER}

You will be prompted to enter your user's password to continue.

Confirm that your user is now added to the docker group by typing:

  • id -nG
Output
sammy sudo docker

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

  • sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.
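To verify the group membership from a script instead of reading the `id -nG` output by hand, you can test the group list directly. A small sketch, assuming a POSIX shell:

```shell
# Check whether the current shell user is already in the docker group,
# so you know whether sudo is still required for docker commands.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group; use sudo or add the user with usermod -aG docker"
fi
```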

Let's explore the docker command next.

Paso 3 — Using the Docker Command

Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

  • docker [option] [command] [arguments]

To view all available subcommands, type:

  • docker

As of Docker 18, the complete list of available subcommands includes:

Output
attach Attach local standard input, output, and error streams to a running container build Build an image from a Dockerfile commit Create a new image from a container's changes cp Copy files/folders between a container and the local filesystem create Create a new container diff Inspect changes to files or directories on a container's filesystem events Get real time events from the server exec Run a command in a running container export Export a container's filesystem as a tar archive history Show the history of an image images List images import Import the contents from a tarball to create a filesystem image info Display system-wide information inspect Return low-level information on Docker objects kill Kill one or more running containers load Load an image from a tar archive or STDIN login Log in to a Docker registry logout Log out from a Docker registry logs Fetch the logs of a container pause Pause all processes within one or more containers port List port mappings or a specific mapping for the container ps List containers pull Pull an image or a repository from a registry push Push an image or a repository to a registry rename Rename a container restart Restart one or more containers rm Remove one or more containers rmi Remove one or more images run Run a command in a new container save Save one or more images to a tar archive (streamed to STDOUT by default) search Search the Docker Hub for images start Start one or more stopped containers stats Display a live stream of container(s) resource usage statistics stop Stop one or more running containers tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE top Display the running processes of a container unpause Unpause all processes within one or more containers update Update configuration of one or more containers version Show the Docker version information wait Block until one or more containers stop, then print their exit codes

To view the options available to a specific command, type:

  • docker docker-subcommand --help

To view system-wide information about Docker, use:

  • docker info

Let's explore some of these commands. We'll start by working with images.

Paso 4 — Working with Docker Images

Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you'll need will have images hosted there.

To check whether you can access and download images from Docker Hub, type:

  • docker run hello-world

The output will indicate that Docker is working correctly:

Output
Unable to find image 'hello-world:latest' locally latest: Pulling from library/hello-world 9bb5a5d4561a: Pull complete Digest: sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d Status: Downloaded newer image for hello-world:latest Hello from Docker! This message shows that your installation appears to be working correctly. ...

Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image, and the application within the container executed, displaying the message.

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

  • docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

Output
NAME DESCRIPTION STARS OFFICIAL AUTOMATED ubuntu Ubuntu is a Debian-based Linux operating sys… 7917 [OK] dorowu/ubuntu-desktop-lxde-vnc Ubuntu with openssh-server and NoVNC 193 [OK] rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 156 [OK] ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 93 [OK] ubuntu-upstart Upstart is an event-based replacement for th… 87 [OK] neurodebian NeuroDebian provides neuroscience research s… 50 [OK] ubuntu-debootstrap debootstrap --variant=minbase --components=m… 38 [OK] 1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 36 [OK] nuagebec/ubuntu Simple always updated Ubuntu docker images w… 23 [OK] tutum/ubuntu Simple Ubuntu docker images with SSH access 18 i386/ubuntu Ubuntu is a Debian-based Linux operating sys… 13 ppc64le/ubuntu Ubuntu is a Debian-based Linux operating sys… 12 1and1internet/ubuntu-16-apache-php-7.0 ubuntu-16-apache-php-7.0 10 [OK] 1and1internet/ubuntu-16-nginx-php-phpmyadmin-mariadb-10 ubuntu-16-nginx-php-phpmyadmin-mariadb-10 6 [OK] eclipse/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 6 [OK] codenvy/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 4 [OK] darksheer/ubuntu Base Ubuntu Image -- Updated hourly 4 [OK] 1and1internet/ubuntu-16-apache ubuntu-16-apache 3 [OK] 1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4 ubuntu-16-nginx-php-5.6-wordpress-4 3 [OK] 1and1internet/ubuntu-16-sshd ubuntu-16-sshd 1 [OK] pivotaldata/ubuntu A quick freshening-up of the base Ubuntu doc… 1 1and1internet/ubuntu-16-healthcheck ubuntu-16-healthcheck 0 [OK] pivotaldata/ubuntu-gpdb-dev Ubuntu images for GPDB development 0 smartentry/ubuntu ubuntu with smartentry 0 [OK] ossobv/ubuntu ...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand.

Execute the following command to download the official ubuntu image to your computer:

  • docker pull ubuntu

You'll see the following output:

Output
Using default tag: latest latest: Pulling from library/ubuntu 6b98dfc16071: Pull complete 4001a1209541: Pull complete 6319fc68c576: Pull complete b24603670dc3: Pull complete 97f170c87c6f: Pull complete Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d Status: Downloaded newer image for ubuntu:latest

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

To see the images that have been downloaded to your computer, type:

  • docker images

The output should look similar to the following:

Output
REPOSITORY TAG IMAGE ID CREATED SIZE ubuntu latest 113a43faa138 4 weeks ago 81.2MB hello-world latest e38bc07ac18e 2 months ago 1.85kB
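If you need the image list in machine-readable form, the human-oriented table above can be reduced to repository:tag pairs with awk. A sketch using a hardcoded sample of the output (on a real host, pipe `docker images` itself; Docker also offers a `--format` flag for this):

```shell
# Reduce sample `docker images` output to repository:tag pairs.
# The sample is hardcoded for illustration; pipe the real command instead.
sample='REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
ubuntu              latest    113a43faa138   4 weeks ago    81.2MB
hello-world         latest    e38bc07ac18e   2 months ago   1.85kB'
printf '%s\n' "$sample" | awk 'NR > 1 {print $1 ":" $2}'
# → ubuntu:latest
# → hello-world:latest
```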

As you will see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Let's look at how to run containers in more detail.

Paso 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

  • docker run -it ubuntu

Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:

Output
root@d9b100f2f636:/#

Note the container id in the command prompt. In this example, it is d9b100f2f636. You'll need that container ID later to identify the container when you want to remove it.

Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo, because you're operating inside the container as the root user:

  • apt update

Then install any application in it. Let's install Node.js:

  • apt install nodejs

This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

  • node -v

You'll see the version number displayed in your terminal:

Output
v8.10.0

Any changes you make inside the container only apply to that container.

To exit the container, type exit at the prompt.

Let's look at managing the containers on our system next.

Paso 6 — Managing Docker Containers

After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:

  • docker ps

You will see output similar to the following:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

In this tutorial, you started two containers: one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

To view all containers, both active and inactive, run docker ps with the -a switch:

  • docker ps -a

You'll see output similar to this:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 8 minutes ago                           sharp_volhard
01c950718166        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       festive_williams

To view the latest container you created, pass it the -l switch:

  • docker ps -l
Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 10 minutes ago                       sharp_volhard

To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636:

  • docker start d9b100f2f636

The container will start, and you can use docker ps to see its status:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Up 8 seconds                            sharp_volhard  

To stop a running container, use docker stop, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is sharp_volhard:

  • docker stop sharp_volhard

Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it.

  • docker rm festive_williams
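Removals are often done in bulk. The sketch below builds the docker rm invocation from a list of container IDs; the IDs are hardcoded for illustration, and the echo only prints the command rather than running it. On a real host you would feed it `docker ps -aq --filter status=exited` and drop the echo:

```shell
# Build a bulk-removal command from a list of exited-container IDs.
# Hardcoded sample IDs; in practice: docker ps -aq --filter status=exited
ids='d9b100f2f636
01c950718166'
printf '%s\n' "$ids" | xargs echo docker rm
# → docker rm d9b100f2f636 01c950718166
```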

You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it's stopped. See the docker run help command for more information on these options and others.
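The two switches combine nicely for disposable test containers. A hedged sketch (the container name quick-test is arbitrary, and the snippet is guarded so it degrades gracefully on hosts where the Docker daemon is unreachable):

```shell
# Run a named, self-removing container: --rm deletes it on exit, --name
# makes it easy to identify while it runs. Guarded for hosts without Docker.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run --rm --name quick-test ubuntu echo "hello from a disposable container"
else
  echo "Docker daemon not available; skipping the example run"
fi
```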

Containers can be turned into images which you can use to build new containers. Let's look at how that works.

Paso 7 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.

Then commit the changes to a new Docker image instance using the following command.

  • docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

  • docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs

When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

  • docker images

You'll see output like this:

Output
REPOSITORY TAG IMAGE ID CREATED SIZE sammy/ubuntu-nodejs latest 7c1f35226ca6 7 seconds ago 179MB ubuntu latest 113a43faa138 4 weeks ago 81.2MB hello-world latest e38bc07ac18e 2 months ago 1.85kB

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made, and in this example, the change was that NodeJS was installed. So next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.

Now let's share the new image with others so they can create containers from it.
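For reference, a minimal Dockerfile reproducing roughly what the commit in Paso 7 captured might look like the sketch below. This is hypothetical: Dockerfiles are beyond this tutorial's scope, and the exact package set apt-get pulls in may differ from what you installed interactively.

```dockerfile
# Hypothetical Dockerfile: an Ubuntu 18.04 base image with Node.js installed,
# similar to the committed sammy/ubuntu-nodejs image from Paso 7.
# Build it with, e.g.: docker build -t sammy/ubuntu-nodejs .
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nodejs
```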

Paso 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

To push your image, first log into Docker Hub.

  • docker login -u docker-registry-username

You'll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

  • docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs
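The retagging rule in the note is mechanical: keep the image's short name and swap in the registry username. A sketch in plain shell (docker-registry-username is the same placeholder used in the note):

```shell
# Derive the registry-qualified tag from a local image name.
# "docker-registry-username" is a placeholder, as in the note above.
local_image="sammy/ubuntu-nodejs"
registry_user="docker-registry-username"
new_tag="${registry_user}/${local_image#*/}"   # strip the local user prefix
echo "$new_tag"
# → docker-registry-username/ubuntu-nodejs
```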

Then you may push your own image using:

  • docker push docker-registry-username/docker-image-name

To push the ubuntu-nodejs image to the sammy repository, the command would be:

  • docker push sammy/ubuntu-nodejs

The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs] e3fbbfb44187: Pushed 5f70bf18a086: Pushed a3b5c80a4eba: Pushed 7f18b442972b: Pushed 3ce512daaf78: Pushed 7aae4540b42d: Pushed ...

After pushing an image to a registry, it should be listed on your account's dashboard, as shown in the image below.

New Docker image listing on Docker Hub

If a push attempt results in an error of this sort, then you likely did not log in:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs] e3fbbfb44187: Preparing 5f70bf18a086: Preparing a3b5c80a4eba: Preparing 7f18b442972b: Preparing 3ce512daaf78: Preparing 7aae4540b42d: Waiting unauthorized: authentication required

Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

Conclusion

In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.

DigitalOcean Community Tutorials