How To Set Up Flask with MongoDB and Docker

The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

Introduction

Developing web applications can become complex and time-consuming when you build and maintain a number of different technologies. Considering lighter-weight options designed to reduce complexity and time-to-production for your application can result in a more flexible and scalable solution. As a micro web framework built on Python, Flask provides an extensible way for developers to grow their applications through extensions that can be integrated into projects. To continue the scalability of a developer's tech stack, MongoDB is a NoSQL database designed to scale and work with frequent changes. Developers can use Docker to simplify the process of packaging and deploying their applications.

Docker Compose has further simplified the development environment by allowing you to define your infrastructure, including your application services, network volumes, and bind mounts, in a single file. Using Docker Compose provides ease of use over running multiple docker container run commands: it allows you to define all your services in a single Compose file, and with a single command you create and start all the services from your configuration. This also means your container infrastructure can be kept under version control. Docker Compose uses a project name to isolate environments from each other, which allows you to run multiple environments on a single host.

In this tutorial you will build, package, and run your to-do web application with Flask, Nginx, and MongoDB inside of Docker containers. You will define the entire stack configuration in a docker-compose.yml file, along with configuration files for Python, MongoDB, and Nginx. Flask requires a web server to serve HTTP requests, so you will also use Gunicorn, which is a Python WSGI HTTP Server, to serve the application. Nginx acts as a reverse proxy server that forwards requests to Gunicorn for processing.

Prerequisites

To follow this tutorial, you will need the following:

Step 1 — Writing the Stack Configuration in Docker Compose

Building your applications on Docker allows you to version infrastructure easily depending on configuration changes you make in Docker Compose. The infrastructure can be defined in a single file and built with a single command. In this step, you will set up the docker-compose.yml file to run your Flask application.

The docker-compose.yml file lets you define your application infrastructure as individual services. The services can be connected to each other and each can have a volume attached to it for persistent storage. Volumes are stored in a part of the host filesystem managed by Docker (/var/lib/docker/volumes/ on Linux).

Volumes are the best way to persist data in Docker, as the data in the volumes can be exported or shared with other applications. For additional information about sharing data in Docker, you can refer to How To Share Data Between the Docker Container and the Host.
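
If you ever want to inspect those volume mountpoints programmatically, here is a minimal sketch using the Docker SDK for Python. The SDK itself (installed with pip install docker) is an assumption of this sketch and is not used anywhere else in this tutorial:

import docker

# Connect to the local Docker daemon using the environment's configuration.
client = docker.from_env()
for volume in client.volumes.list():
    # On Linux, Mountpoint typically sits under /var/lib/docker/volumes/.
    print(volume.name, volume.attrs['Mountpoint'])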

To get started, create a directory for the application in the home directory on your server:

  • mkdir flaskapp

Move into the newly created directory:

  • cd flaskapp

Next, create the docker-compose.yml file:

  • nano docker-compose.yml

The docker-compose.yml file starts with a version number that identifies the Docker Compose file version. Docker Compose file version 3 targets Docker Engine version 1.13.0+, which is a prerequisite for this setup. You will also add the services tag that you will define in the next step:

docker-compose.yml
version: '3'
services:

You will now define flask as the first service in your docker-compose.yml file. Add the following code to define the Flask service:

docker-compose.yml
. . .
  flask:
    build:
      context: app
      dockerfile: Dockerfile
    container_name: flask
    image: digitalocean.com/flask-python:3.6
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: 5000
      MONGODB_DATABASE: flaskdb
      MONGODB_USERNAME: flaskuser
      MONGODB_PASSWORD: your_mongodb_password
      MONGODB_HOSTNAME: mongodb
    volumes:
      - appdata:/var/www
    depends_on:
      - mongodb
    networks:
      - frontend
      - backend

The build property defines the context of the build: in this case, the app folder that will contain the Dockerfile.

You use the container_name property to define a name for each container. The image property specifies the image name and what the Docker image will be tagged as. The restart property defines how the container should be restarted—in your case it is unless-stopped. This means your containers will only be stopped when the Docker Engine is stopped/restarted or when you explicitly stop the containers. The benefit of using the unless-stopped property is that the containers will start automatically once the Docker Engine is restarted or any error occurs.

The environment property contains the environment variables that are passed to the container. You need to provide a secure password for the environment variable MONGODB_PASSWORD. The volumes property defines the volumes the service is using; here, the appdata volume is mounted inside the container at the /var/www directory. The depends_on property defines a service that Flask needs to function properly: the flask service depends on mongodb, since the mongodb service acts as the database for your application. depends_on ensures that the flask service starts only after the mongodb service has started.

The networks property specifies frontend and backend as the networks the flask service will have access to.

With the flask service defined, you’re ready to add the MongoDB configuration to the file. In this example, you will use the official 4.0.8 version mongo image. Add the following code to your docker-compose.yml file following the flask service:

docker-compose.yml
. . .
  mongodb:
    image: mongo:4.0.8
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: your_mongodb_root_password
      MONGO_INITDB_DATABASE: flaskdb
      MONGODB_DATA_DIR: /data/db
      MONGODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db
    networks:
      - backend

The container_name for this service is mongodb with a restart policy of unless-stopped. You use the command property to define the command that will be executed when the container is started. The command mongod --auth will disable logging into the MongoDB shell without credentials, which will secure MongoDB by requiring authentication.

The environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD create a root user with the given credentials, so be sure to replace the placeholder with a strong password.

MongoDB stores its data in /data/db by default, so the data in the /data/db folder will be written to the named volume mongodbdata for persistence. As a result, you won't lose your databases in the event of a restart. The mongodb service does not expose any ports, so the service will only be accessible through the backend network.

Next, you will define the web server for your application. Add the following code to your docker-compose.yml file to configure Nginx:

docker-compose.yml
. . .
  webserver:
    build:
      context: nginx
      dockerfile: Dockerfile
    image: digitalocean.com/webserver:latest
    container_name: webserver
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_NAME: "webserver"
      APP_DEBUG: "false"
      SERVICE_NAME: "webserver"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - nginxdata:/var/log/nginx
    depends_on:
      - flask
    networks:
      - frontend

Here you have defined the context of the build, which is the nginx folder containing the Dockerfile. With the image property, you specify the image used to tag and run the container. The ports property configures the Nginx service to be publicly accessible through :80 and :443, and volumes mounts the nginxdata volume inside the container at the /var/log/nginx directory.

With depends_on, you've specified that the webserver service depends on the flask service. Finally, the networks property grants the webserver service access to the frontend network.

Next, you will create bridge networks to allow the containers to communicate with each other. Append the following lines to the end of your file:

docker-compose.yml
. . .
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

You defined two networks—frontend and backend—for the services to connect to. The front-end services, such as Nginx, will connect to the frontend network since it needs to be publicly accessible. Back-end services, such as MongoDB, will connect to the backend network to prevent unauthorized access to the service.

Next, you will use volumes to persist the database, application, and configuration files. Since your application will use the databases and files, it is imperative to persist the changes made to them. The volumes are managed by Docker and stored on the filesystem. Add this code to the docker-compose.yml file to configure the volumes:

docker-compose.yml
. . .
volumes:
  mongodbdata:
    driver: local
  appdata:
    driver: local
  nginxdata:
    driver: local

The volumes section declares the volumes that the application will use to persist data. Here you have defined the volumes mongodbdata, appdata, and nginxdata for persisting your MongoDB databases, Flask application data, and the Nginx web server logs, respectively. All of these volumes use the local driver to store the data locally. Without these volumes, data like your MongoDB databases and Nginx web server logs would be lost when you restart the containers.

Your complete docker-compose.yml file will look like this:

docker-compose.yml
version: '3'
services:

  flask:
    build:
      context: app
      dockerfile: Dockerfile
    container_name: flask
    image: digitalocean.com/flask-python:3.6
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: 5000
      MONGODB_DATABASE: flaskdb
      MONGODB_USERNAME: flaskuser
      MONGODB_PASSWORD: your_mongodb_password
      MONGODB_HOSTNAME: mongodb
    volumes:
      - appdata:/var/www
    depends_on:
      - mongodb
    networks:
      - frontend
      - backend

  mongodb:
    image: mongo:4.0.8
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: your_mongodb_root_password
      MONGO_INITDB_DATABASE: flaskdb
      MONGODB_DATA_DIR: /data/db
      MONGODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db
    networks:
      - backend

  webserver:
    build:
      context: nginx
      dockerfile: Dockerfile
    image: digitalocean.com/webserver:latest
    container_name: webserver
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_NAME: "webserver"
      APP_DEBUG: "false"
      SERVICE_NAME: "webserver"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - nginxdata:/var/log/nginx
    depends_on:
      - flask
    networks:
      - frontend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

volumes:
  mongodbdata:
    driver: local
  appdata:
    driver: local
  nginxdata:
    driver: local

Save the file and exit the editor after verifying your configuration.

You’ve defined the Docker configuration for your entire application stack in the docker-compose.yml file. You will now move on to writing the Dockerfiles for Flask and the web server.

Step 2 — Writing the Flask and Web Server Dockerfiles

With Docker, you can build containers to run your applications from a file called a Dockerfile. A Dockerfile enables you to create custom images that install the software required by your application and configure your containers based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

In this step, you’ll write the Dockerfiles for the Flask and web server services. To get started, create the app directory for your Flask application:

  • mkdir app

Next, create the Dockerfile for your Flask app in the app directory:

  • nano app/Dockerfile

Add the following code to the file to customize your Flask container:

app/Dockerfile
FROM python:3.6.8-alpine3.9

LABEL MAINTAINER="FirstName LastName <example@domain.com>"

ENV GROUP_ID=1000 \
    USER_ID=1000

WORKDIR /var/www/

In this Dockerfile, you are creating an image on top of the 3.6.8-alpine3.9 image that is based on Alpine 3.9 with Python 3.6.8 pre-installed.

The ENV directive defines the environment variables for the group and user IDs.
The Linux Standard Base (LSB) specifies that UIDs and GIDs 0-99 are statically allocated by the system, UIDs 100-999 are supposed to be allocated dynamically for system users and groups, and UIDs 1000-59999 are supposed to be allocated dynamically for user accounts. Keeping this in mind, you can safely assign a UID and GID of 1000; you can also change the UID/GID by updating GROUP_ID and USER_ID to match your requirements.

The WORKDIR directive defines the working directory for the container. Be sure to replace the LABEL MAINTAINER field with your name and email address.

Add the following code block to copy the Flask application into the container and install the necessary dependencies:

app/Dockerfile
. . .
ADD ./requirements.txt /var/www/requirements.txt
RUN pip install -r requirements.txt
ADD . /var/www/
RUN pip install gunicorn

This code uses the ADD directive to copy files from the local app directory to the /var/www directory in the container, and the RUN directive to install Gunicorn and the packages specified in the requirements.txt file, which you will create later in the tutorial. Copying requirements.txt and installing the dependencies before copying the rest of the application lets Docker cache the dependency layer across rebuilds.

The following code block adds a new user and group and initializes the application:

app/Dockerfile
. . .
RUN addgroup -g $GROUP_ID www
RUN adduser -D -u $USER_ID -G www www -s /bin/sh

USER www

EXPOSE 5000

CMD [ "gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "wsgi"]

By default, Docker containers run as the root user. The root user has access to everything in the system, so the implications of a security breach can be disastrous. To mitigate this security risk, this will create a new user and group that will only have access to the /var/www directory.

This code first uses the addgroup command to create a new group named www. The -g flag sets the group ID to the GROUP_ID variable (1000) defined earlier in the Dockerfile.

The adduser -D -u $USER_ID -G www www -s /bin/sh line creates a www user with a user ID of 1000, as defined by the ENV variable. The -D flag creates the user without assigning a password, the -s flag sets the default login shell to /bin/sh, and the -G flag sets the user's initial login group to www, which was created by the previous command.

The USER directive specifies that programs in the container run as the www user. Gunicorn will listen on :5000, so you open this port with the EXPOSE directive.

Finally, the CMD [ "gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "wsgi"] line runs the command to start the Gunicorn server with four workers listening on port 5000. The number should generally be between two and four workers per core on the server; the Gunicorn documentation recommends (2 x $num_cores) + 1 as the number of workers to start with.
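
As a quick worked example of that formula, this optional Python snippet (not one of the project files) computes the suggested starting point on whatever machine runs it:

import multiprocessing

# Gunicorn's documented starting point: (2 x $num_cores) + 1 workers.
workers = (2 * multiprocessing.cpu_count()) + 1
print('Suggested Gunicorn workers:', workers)  # e.g. 5 on a 2-core server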

Your completed Dockerfile will look like the following:

app/Dockerfile
FROM python:3.6.8-alpine3.9

LABEL MAINTAINER="FirstName LastName <example@domain.com>"

ENV GROUP_ID=1000 \
    USER_ID=1000

WORKDIR /var/www/

ADD ./requirements.txt /var/www/requirements.txt
RUN pip install -r requirements.txt
ADD . /var/www/
RUN pip install gunicorn

RUN addgroup -g $GROUP_ID www
RUN adduser -D -u $USER_ID -G www www -s /bin/sh

USER www

EXPOSE 5000

CMD [ "gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "wsgi"]

Save the file and exit the text editor.

Next, create a new directory to hold your Nginx configuration:

  • mkdir nginx

Then create the Dockerfile for your Nginx web server in the nginx directory:

  • nano nginx/Dockerfile

Add the following code to the file to create the Dockerfile that will build the image for your Nginx container:

nginx/Dockerfile
FROM alpine:latest

LABEL MAINTAINER="FirstName LastName <example@domain.com>"

RUN apk --update add nginx && \
    ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log && \
    mkdir /etc/nginx/sites-enabled/ && \
    mkdir -p /run/nginx && \
    rm -rf /etc/nginx/conf.d/default.conf && \
    rm -rf /var/cache/apk/*

COPY conf.d/app.conf /etc/nginx/conf.d/app.conf

EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]

This Nginx Dockerfile uses an alpine base image, which is a tiny Linux distribution with a minimal attack surface built for security.

In the RUN directive you install nginx and create symbolic links to publish the error and access logs to standard error (/dev/stderr) and standard output (/dev/stdout). Publishing logs to standard error and output is a best practice since containers are ephemeral: the logs are shipped to docker logs, and from there you can forward them to a logging service like the Elastic stack for persistence. After this, commands remove the default.conf file and /var/cache/apk/* to reduce the size of the resulting image. Executing all of these commands in a single RUN directive decreases the number of layers in the image, which also reduces the size of the resulting image.

The COPY directive copies the app.conf web server configuration into the container. The EXPOSE directive documents that the container listens on ports :80 and :443, as your application will run on :80, with :443 as the secure port.

Finally, the CMD directive defines the command to start the Nginx server.

Save the file and exit the text editor.

Now that the Dockerfile is ready, you are ready to configure the Nginx reverse proxy to route traffic to the Flask application.

Step 3 — Configuring the Nginx Reverse Proxy

In this step, you will configure Nginx as a reverse proxy to forward requests to Gunicorn on :5000. A reverse proxy server is used to direct client requests to the appropriate back-end server. It provides an additional layer of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Get started by creating the nginx/conf.d directory:

  • mkdir nginx/conf.d

To configure Nginx, you need to create an app.conf file with the following configuration in the nginx/conf.d/ folder. The app.conf file contains the configuration that the reverse proxy needs to forward the requests to Gunicorn.

  • nano nginx/conf.d/app.conf

Put the following contents into the app.conf file:

nginx/conf.d/app.conf
upstream app_server {
    server flask:5000;
}

server {
    listen 80;
    server_name _;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    client_max_body_size 64M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        gzip_static on;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_buffering off;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}

This will first define the upstream server, which is commonly used to specify a web or app server for routing or load balancing.

Your upstream server, app_server, defines the server address with the server directive, using the flask container name and port 5000.

The configuration for the Nginx web server is defined in the server block. The listen directive defines the port number on which your server will listen for incoming requests. The error_log and access_log directives define the files for writing logs. The proxy_pass directive is used to set the upstream server for forwarding the requests to http://app_server.

Save and close the file.

With the Nginx web server configured, you can move on to creating the Flask to-do API.

Step 4 — Creating the Flask To-do API

Now that you’ve built out your environment, you’re ready to build your application. In this step, you will write a to-do API application that will save and display to-do notes sent in from a POST request.

Get started by creating the requirements.txt file in the app directory:

  • nano app/requirements.txt

This file is used to install the dependencies for your application. The implementation of this tutorial will use Flask, Flask-PyMongo, and requests. Add the following to the requirements.txt file:

app/requirements.txt
Flask==1.0.2
Flask-PyMongo==2.2.0
requests==2.20.1

Save the file and exit the editor after entering the requirements.

Next, create the app.py file to contain the Flask application code in the app directory:

  • nano app/app.py

In your new app.py file, enter in the code to import the dependencies:

app/app.py
import os
from flask import Flask, request, jsonify
from flask_pymongo import PyMongo

The os package is used to read the environment variables. From the flask library you import the Flask, request, and jsonify objects to instantiate the application, handle requests, and send JSON responses, respectively. From flask_pymongo you import the PyMongo object to interact with MongoDB.

Next, add the code needed to connect to MongoDB:

app/app.py
. . .
application = Flask(__name__)

application.config["MONGO_URI"] = 'mongodb://' + os.environ['MONGODB_USERNAME'] + ':' + os.environ['MONGODB_PASSWORD'] + '@' + os.environ['MONGODB_HOSTNAME'] + ':27017/' + os.environ['MONGODB_DATABASE']

mongo = PyMongo(application)
db = mongo.db

Flask(__name__) creates the application object and stores it in the application variable. Next, the code builds the MongoDB connection string from the environment variables using os.environ. Passing the application object to the PyMongo() constructor gives you the mongo object, which in turn gives you the db object from mongo.db.
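
If you find the string concatenation hard to read, an equivalent f-string version is sketched below. This is optional and assumes nothing beyond the Python 3.6 image already used in this tutorial:

# An equivalent, arguably more readable way to build the same connection string.
application.config["MONGO_URI"] = (
    f"mongodb://{os.environ['MONGODB_USERNAME']}:{os.environ['MONGODB_PASSWORD']}"
    f"@{os.environ['MONGODB_HOSTNAME']}:27017/{os.environ['MONGODB_DATABASE']}"
)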

Now you will add the code to create an index message:

app/app.py
. . .
@application.route('/')
def index():
    return jsonify(
        status=True,
        message='Welcome to the Dockerized Flask MongoDB app!'
    )

The @application.route('/') defines the / GET route of your API. Here your index() function returns a JSON string using the jsonify method.

Next, add the /todo route to list all to-do’s:

app/app.py
. . .
@application.route('/todo')
def todo():
    _todos = db.todo.find()

    item = {}
    data = []
    for todo in _todos:
        item = {
            'id': str(todo['_id']),
            'todo': todo['todo']
        }
        data.append(item)

    return jsonify(
        status=True,
        data=data
    )

The @application.route('/todo') decorator defines the /todo GET route of your API, which returns the to-dos in the database. The db.todo.find() method returns all the to-dos in the database. You then iterate over _todos, building an item that includes only the id and todo fields from each object and appending it to a data array, and finally return the array as JSON.

Next, add the code for creating the to-do:

app/app.py
. . .
@application.route('/todo', methods=['POST'])
def createTodo():
    data = request.get_json(force=True)
    item = {
        'todo': data['todo']
    }
    db.todo.insert_one(item)

    return jsonify(
        status=True,
        message='To-do saved successfully!'
    ), 201

The @application.route('/todo') decorator with methods=['POST'] defines the /todo POST route of your API, which creates a to-do note in the database. The request.get_json(force=True) call gets the JSON that you post to the route, and item builds the JSON that will be saved as the to-do. The db.todo.insert_one(item) call inserts the item into the database. After the to-do is saved in the database, you return a JSON response with a status code of 201 CREATED.

Now you add the code to run the application:

app/app.py
. . .
if __name__ == "__main__":
    ENVIRONMENT_DEBUG = os.environ.get("APP_DEBUG", True)
    ENVIRONMENT_PORT = os.environ.get("APP_PORT", 5000)
    application.run(host='0.0.0.0', port=ENVIRONMENT_PORT, debug=ENVIRONMENT_DEBUG)

The condition __name__ == "__main__" checks whether the module is the entry point of your program rather than being imported by another module. If it is, the code inside the if block runs the application with application.run(host='0.0.0.0', port=ENVIRONMENT_PORT, debug=ENVIRONMENT_DEBUG).

Before that, you get the values for ENVIRONMENT_DEBUG and ENVIRONMENT_PORT from the environment variables using os.environ.get(), passing the key as the first parameter and the default value as the second. The application.run() call then sets the host, port, and debug values for the application.

The completed app.py file will look like this:

app/app.py
import os
from flask import Flask, request, jsonify
from flask_pymongo import PyMongo

application = Flask(__name__)

application.config["MONGO_URI"] = 'mongodb://' + os.environ['MONGODB_USERNAME'] + ':' + os.environ['MONGODB_PASSWORD'] + '@' + os.environ['MONGODB_HOSTNAME'] + ':27017/' + os.environ['MONGODB_DATABASE']

mongo = PyMongo(application)
db = mongo.db

@application.route('/')
def index():
    return jsonify(
        status=True,
        message='Welcome to the Dockerized Flask MongoDB app!'
    )

@application.route('/todo')
def todo():
    _todos = db.todo.find()

    item = {}
    data = []
    for todo in _todos:
        item = {
            'id': str(todo['_id']),
            'todo': todo['todo']
        }
        data.append(item)

    return jsonify(
        status=True,
        data=data
    )

@application.route('/todo', methods=['POST'])
def createTodo():
    data = request.get_json(force=True)
    item = {
        'todo': data['todo']
    }
    db.todo.insert_one(item)

    return jsonify(
        status=True,
        message='To-do saved successfully!'
    ), 201

if __name__ == "__main__":
    ENVIRONMENT_DEBUG = os.environ.get("APP_DEBUG", True)
    ENVIRONMENT_PORT = os.environ.get("APP_PORT", 5000)
    application.run(host='0.0.0.0', port=ENVIRONMENT_PORT, debug=ENVIRONMENT_DEBUG)

Save the file and exit the editor.

Next, create the wsgi.py file in the app directory.

  • nano app/wsgi.py

The wsgi.py file creates an application object (or callable) so that the server can use it. Each time a request comes in, the server uses this application object to run the application's request handlers upon parsing the URL.
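
To make the idea of an "application object" concrete, the following optional sketch (not one of this tutorial's project files) shows the bare WSGI interface that Gunicorn expects: a callable taking the request environment and a start_response function. Flask's application object implements this same interface for you:

def application(environ, start_response):
    # environ is a dict of CGI-style request variables; start_response is a
    # callback used to begin the HTTP response. Flask hides this plumbing.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a bare WSGI callable\n']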

Put the following contents into the wsgi.py file, save the file, and exit the text editor:

app/wsgi.py
from app import application

if __name__ == "__main__":
    application.run()

This wsgi.py file imports the application object from the app.py file and creates an application object for the Gunicorn server.

The to-do app is now in place, so you’re ready to start running the application in containers.

Step 5 — Building and Running the Containers

Now that you have defined all of the services in your docker-compose.yml file and their configurations, you can start the containers.

Since the services are defined in a single file, you need to issue a single command to start the containers, create the volumes, and set up the networks. This command also builds the image for your Flask application and the Nginx web server. Run the following command to build the containers:

  • docker-compose up -d

When you run the command for the first time, it will download all of the necessary Docker images, which can take some time. Once the images are downloaded and stored on your local machine, docker-compose will create your containers. The -d flag detaches the process so that the containers run in the background.

Use the following command to list the running containers once the build process is complete:

  • docker ps

You will see output similar to the following:

Output
CONTAINER ID   IMAGE                               COMMAND                  CREATED       STATUS       PORTS                                      NAMES
f20e9a7fd2b9   digitalocean.com/webserver:latest   "nginx -g 'daemon of…"   2 weeks ago   Up 2 weeks   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   webserver
3d53ea054517   digitalocean.com/flask-python:3.6   "gunicorn -w 4 --bin…"   2 weeks ago   Up 2 weeks   5000/tcp                                   flask
96f5a91fc0db   mongo:4.0.8                         "docker-entrypoint.s…"   2 weeks ago   Up 2 weeks   27017/tcp                                  mongodb

The CONTAINER ID is a unique identifier used to access containers. IMAGE shows the image name for the given container. The NAMES field shows the service name under which the container was created; like the CONTAINER ID, it can be used to access the container. Finally, STATUS provides information on the state of the container: whether it's running, restarting, or stopped.

You’ve used the docker-compose command to build your containers from your configuration files. In the next step, you will create a MongoDB user for your application.

Step 6 — Creating a User for Your MongoDB Database

By default, MongoDB allows users to log in without credentials and grants unlimited privileges. In this step, you will secure your MongoDB database by creating a dedicated user to access it.

To do this, you will need the root username and password that you set in the docker-compose.yml file environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD for the mongodb service. In general, it’s better to avoid using the root administrative account when interacting with the database. Instead, you will create a dedicated database user for your Flask application, as well as a new database that the Flask app will be allowed to access.

To create a new user, first start an interactive shell on the mongodb container:

  • docker exec -it mongodb bash

The docker exec command runs a command inside a running container, and the -it flag gives you an interactive shell inside it.

Once inside the container, log in to the MongoDB root administrative account:

  • mongo -u mongodbuser -p

You will be prompted for the password that you entered as the value for the MONGO_INITDB_ROOT_PASSWORD variable in the docker-compose.yml file. Note that the MONGO_INITDB_* variables only take effect when the database is initialized on an empty data volume; changing them later and re-running docker-compose up -d will not update the credentials of an existing database.

Run the show dbs; command to list all databases:

  • show dbs;

You will see the following output:

Output
admin   0.000GB
config  0.000GB
local   0.000GB

The admin database is a special database that grants administrative permissions to users. If a user has read access to the admin database, they will have read and write permissions to all other databases. Since the output lists the admin database, the user has access to this database and can therefore read and write to all other databases.

Saving the first to-do note will automatically create the MongoDB database. MongoDB allows you to switch to a database that does not exist using the use database command, and it only creates the database once a document is saved to a collection. Therefore, the database is not created here; that will happen when you save your first to-do note from the API. Execute the use command to switch to the flaskdb database:

  • use flaskdb

Next, create a new user that will be allowed to access this database:

  • db.createUser({user: 'flaskuser', pwd: 'your password', roles: [{role: 'readWrite', db: 'flaskdb'}]})
  • exit

This command creates a user named flaskuser with readWrite access to the flaskdb database. Be sure to use a secure password in the pwd field. The user and pwd here are the values you defined in the docker-compose.yml file under the environment variables section for the flask service.

Log in to the authenticated database with the following command:

  • mongo -u flaskuser -p your password --authenticationDatabase flaskdb

Now that you have added the user, log out of the database.

  • exit

And finally, exit the container:

  • exit

You’ve now configured a dedicated database and user account for your Flask application. The database components are ready, so now you can move on to running the Flask to-do app.

Step 7 — Running the Flask To-do App

Now that your services are configured and running, you can test your application by navigating to http://your_server_ip in a browser. Additionally, you can run curl to see the JSON response from Flask:

  • curl -i http://your_server_ip

You will receive the following response:

Output
{"message":"Welcome to the Dockerized Flask MongoDB app!","status":true}

The configuration for the Flask application is passed to the application from the docker-compose.yml file. The configuration regarding the database connection is set using the MONGODB_* variables defined in the environment section of the flask service.

To test everything out, create a to-do note using the Flask API. You can do this with a POST curl request to the /todo route:

  • curl -i -H "Content-Type: application/json" -X POST -d '{"todo": "Dockerize Flask application with MongoDB backend"}' http://your_server_ip/todo

This request results in a response with a status code of 201 CREATED when the to-do item is saved to MongoDB:

Output
{"message":"To-do saved successfully!","status":true}

You can list all of the to-do notes from MongoDB with a GET request to the /todo route:

  • curl -i http://your_server_ip/todo
Output
{"data":[{"id":"5c9fa25591cb7b000a180b60","todo":"Dockerize Flask application with MongoDB backend"}],"status":true}

With this, you have Dockerized a Flask API with a MongoDB backend and Nginx as a reverse proxy, deployed to your server. For a production environment you can use sudo systemctl enable docker to ensure the Docker service starts automatically at boot.

Conclusion

In this tutorial, you deployed a Flask application with Docker, MongoDB, Nginx, and Gunicorn. You now have a functioning, modern, stateless API application that can be scaled. Although you can achieve the same result with a series of commands like docker container run, the docker-compose.yml file simplifies your job, as this stack can be put into version control and updated as necessary.

From here you can also take a look at our further Python Framework tutorials.


How To Install and Use Docker on CentOS 7

Introduction

Docker is an application that makes it simple and easy to run application processes in a container, which is like a virtual machine, only more portable, easier to use, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

There are two methods for installing Docker on CentOS 7. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.

In this tutorial, you'll learn how to install and use Docker on an existing installation of CentOS 7.

Prerequisites

Note: Docker requires a 64-bit version of CentOS 7, as well as a kernel version equal to or greater than 3.10. The default 64-bit CentOS 7 Droplet meets these requirements.

All the commands in this tutorial should be run as a non-root user. If root access is required for a command, it will be preceded by sudo. The Initial Server Setup with CentOS 7 guide explains how to add users and give them sudo access.

Step 1 — Installing Docker

The Docker installation package available in the official CentOS 7 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

But first, let's update the package database:

  • sudo yum check-update

Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:

  • curl -fsSL https://get.docker.com/ | sh

After installation has completed, start the Docker daemon:

  • sudo systemctl start docker

Verify that it's running:

  • sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)

Lastly, make sure it starts at every server reboot:

  • sudo systemctl enable docker

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, running the docker command requires root privileges; that is, you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  • sudo usermod -aG docker $(whoami)

You will need to log out of the Droplet and back in as the same user to enable this change.

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

  • sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Step 3 — Using the Docker Command

With Docker installed and working, now's the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:

  • docker [option] [command] [arguments]

To view all available subcommands, type:

  • docker

As of Docker 1.11.1, the complete list of available subcommands includes:

Output
attach    Attach to a running container
build     Build an image from a Dockerfile
commit    Create a new image from a container's changes
cp        Copy files/folders between a container and the local filesystem
create    Create a new container
diff      Inspect changes on a container's filesystem
events    Get real time events from the server
exec      Run a command in a running container
export    Export a container's filesystem as a tar archive
history   Show the history of an image
images    List images
import    Import the contents from a tarball to create a filesystem image
info      Display system-wide information
inspect   Return low-level information on a container or image
kill      Kill a running container
load      Load an image from a tar archive or STDIN
login     Log in to a Docker registry
logout    Log out from a Docker registry
logs      Fetch the logs of a container
network   Manage Docker networks
pause     Pause all processes within a container
port      List port mappings or a specific mapping for the CONTAINER
ps        List containers
pull      Pull an image or a repository from a registry
push      Push an image or a repository to a registry
rename    Rename a container
restart   Restart a container
rm        Remove one or more containers
rmi       Remove one or more images
run       Run a command in a new container
save      Save one or more images to a tar archive
search    Search the Docker Hub for images
start     Start one or more stopped containers
stats     Display a live stream of container(s) resource usage statistics
stop      Stop a running container
tag       Tag an image into a repository
top       Display the running processes of a container
unpause   Unpause all processes within a container
update    Update configuration of one or more containers
version   Show the Docker version information
volume    Manage Docker volumes
wait      Block until a container stops, then print its exit code

To view the options available to a specific command, type:

  • docker docker-subcommand --help

To view system-wide information, use:

  • docker info

Step 4 — Working with Docker Images

Docker containers are run from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their images on Docker Hub, so most applications and Linux distributions you'll need to run Docker containers have images that are hosted on Docker Hub.

To check whether you can access and download images from Docker Hub, type:

  • docker run hello-world

The output, which should include the following, should indicate that Docker is working correctly:

Output
Hello from Docker.
This message shows that your installation appears to be working correctly.
…

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the CentOS image, type:

  • docker search centos

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

Output
NAME                            DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
centos                          The official build of CentOS.                   2224    [OK]
jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8…     22                 [OK]
jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M…     17                 [OK]
million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h…     11                 [OK]
nimmis/java-centos              This is docker images of CentOS 7 with dif…     10                 [OK]
torusware/speedus-centos        Always updated official CentOS docker imag…     8                  [OK]
nickistre/centos-lamp           LAMP on centos setup                            3                  [OK]
…

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image you would like to use, you can download it to your computer using the pull subcommand, like so:

  • docker pull centos

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. If an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it:

  • docker run centos

To see the images that have been downloaded to your computer, type:

  • docker images

The output should look similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              778a53015523        5 weeks ago         196.7 MB
hello-world         latest              94df4f0ce8a4        2 weeks ago         967 B

As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which can then be sent (pushed is the technical term) to Docker Hub or other Docker registries.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, though, and they can be interactive. After all, they're similar to virtual machines, only easier to use.

As an example, let's run a container using the latest CentOS image. The combination of the -i and -t switches gives you interactive shell access into the container:

  • docker run -it centos

Your command prompt should change to reflect the fact that you're now working inside the container, and should take this form:

Output
[root@59839a1b7de2 /]#

Important: Note the container ID in the command prompt. In the example above, it's 59839a1b7de2.

Now you can run any command inside the container. For example, let's install the MariaDB server in the running container. There's no need to prefix any command with sudo, because you're operating inside the container with root privileges:

  • yum install mariadb-server

Step 6 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just as you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing the MariaDB server inside the CentOS container, you now have a container running an image, but the container is different from the image you used to create it.

To save the state of the container as a new image, first exit from it:

  • exit

Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:

  • docker commit -m "What you did to the image" -a "Author Name" container-id repository/new_image_name

For example:

  • docker commit -m "added mariadb-server" -a "Sunday Ogwu-Chinuwa" 59839a1b7de2 finid/centos-mariadb

Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so it can be reviewed and used by you and others.

After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:

  • docker images

The output should be of this sort:

Output
REPOSITORY             TAG       IMAGE ID       CREATED          SIZE
finid/centos-mariadb   latest    23390430ec73   6 seconds ago    424.6 MB
centos                 latest    778a53015523   5 weeks ago      196.7 MB
hello-world            latest    94df4f0ce8a4   2 weeks ago      967 B

In the example above, centos-mariadb is the new image, which was derived from the existing CentOS image from Docker Hub. The size difference reflects the changes that were made; in this example, the change was that the MariaDB server was installed. So next time you need to run a container using CentOS with the MariaDB server pre-installed, you can just use the new image. Images can also be built from what's called a Dockerfile, but that's a more involved process that is well outside the scope of this article. We'll explore it in a future article.

Step 7 — Listing Docker Containers

After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:

  • docker ps

You will see output similar to the following:

Output
CONTAINER ID   IMAGE     COMMAND       CREATED       STATUS       PORTS   NAMES
f7c79cc556dd   centos    "/bin/bash"   3 hours ago   Up 3 hours           silly_spence

To view all containers, active and inactive, pass it the -a switch:

  • docker ps -a

To view the latest container you created, pass it the -l switch:

  • docker ps -l

Stopping a running or active container is as simple as typing:

  • docker stop container-id

The container-id can be found in the output of the docker ps command.

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push an image to Docker Hub.

To create an account on Docker Hub, register at Docker Hub. Afterwards, to push your image, first log into Docker Hub. You'll be prompted to authenticate:

  • docker login -u docker-registry-username

If you specified the correct password, authentication should succeed. Then you can push your own image using:

  • docker push docker-registry-username/docker-image-name

It will take some time to complete, and when done, the output will look like this:

Output
The push refers to a repository [docker.io/finid/centos-mariadb]
670194edfaf5: Pushed
5f70bf18a086: Mounted from library/centos
6a6c96337be1: Mounted from library/centos
…

After pushing an image to a registry, it should be listed on your account's dashboard, as shown in the image below.

Docker image listing on Docker Hub

If a push attempt results in an error of this sort, then you likely did not log in:

Output
The push refers to a repository [docker.io/finid/centos-mariadb]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in, then repeat the push attempt.

Conclusion

There's a lot more to Docker than has been covered in this article, but this should be enough to get you started working with it on CentOS 7. Like most open source projects, Docker is built from a fast-developing codebase, so make a habit of visiting the project's blog page for the latest information.

Also check out the other Docker tutorials in the DigitalOcean Community.


How To Install and Use Docker Compose on CentOS 7

Introduction

Docker is a great tool for automating the deployment of Linux applications inside software containers, but to really take full advantage of its potential, it's best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention having to talk to each other) can quickly become unwieldy.

The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team decided to make Docker Compose based on the Fig source, which is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up links and volumes within containers.

In this tutorial, you will install the latest version of Docker Compose to help you manage multi-container applications and explore the basic commands of the software.

Docker and Docker Compose Concepts

Using Docker Compose requires a combination of a bunch of different Docker concepts in one, so before we get started, let's review some of the various concepts involved. If you're already familiar with Docker concepts like volumes, links, and port forwarding, you may want to go ahead and skip to the next section.

Docker Images

Each Docker container is a local instance of a Docker image. You can think of a Docker image as a complete Linux installation. Usually a minimal installation contains only the bare minimum of packages needed to run the image. These images use the kernel of the host system, but since they're running inside a Docker container and only see their own file system, it's perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice versa).

Most Docker images are distributed via Docker Hub, which is maintained by the Docker team. Most popular open source projects have a corresponding image uploaded to the Docker Registry, which you can use to deploy the software. When possible, it's best to grab "official" images, since they're guaranteed by the Docker team and follow Docker best practices.

Communication Between Docker Images

Docker containers are isolated from the host machine, meaning that by default the host machine has no access to the file system inside the container, nor any means of communicating with it via the network. This can make configuring and working with the image running inside a Docker container difficult.

Docker has three primary ways to work around this. The first and most common is to have Docker specify environment variables that will be set inside the container. The code running inside the Docker container will then check the values of these environment variables on startup and use them to configure itself properly.

Another commonly used method is a Docker data volume. Docker volumes come in two flavors: internal and shared.

Specifying an internal volume just means that for a folder you specify for a particular Docker container, the data will persist when the container is removed. For example, if you wanted to make sure your log files persisted, you might specify an internal /var/log volume.

A shared volume maps a folder inside a Docker container onto a folder on the host machine. This allows you to easily share files between the Docker container and the host machine.

The third way to communicate with a Docker container is over the network. Docker allows communication between different containers via links, as well as port forwarding, allowing you to forward ports from inside the Docker container to ports on the host server. For example, you can create a link to allow your WordPress and MariaDB containers to talk to each other and use port forwarding to expose WordPress to the outside world so that users can connect to it.
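
As a hypothetical illustration of these three mechanisms side by side, the following sketch uses the Docker SDK for Python (pip install docker, an assumption of this sketch; this tutorial itself only uses the docker and docker-compose CLIs) to start an nginx container with an environment variable, a shared volume, and a forwarded port:

import docker

client = docker.from_env()
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    # 1. Environment variables set inside the container.
    environment={"APP_ENV": "prod"},
    # 2. A shared volume: host folder /srv/site mapped read-only into the container.
    volumes={"/srv/site": {"bind": "/usr/share/nginx/html", "mode": "ro"}},
    # 3. Port forwarding: host port 8080 forwarded to container port 80.
    ports={"80/tcp": 8080},
)
print(container.name)
container.stop()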

Prerequisites

To follow this article, you will need the following:

Once these requirements are met, you'll be ready to move ahead.

Step 1 — Installing Docker Compose

To get the latest release, consult the Docker docs and install Docker Compose from the binary in Docker's GitHub repository.

Check the current release and, if necessary, update it in the command below:

  • sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Next, set the permissions to make the binary executable:

  • sudo chmod +x /usr/local/bin/docker-compose

Then, verify that the installation was successful by checking the version:

  • docker-compose --version

This will print out the version you installed:

Output
docker-compose version 1.23.2, build 1110ad01

Now that you have Docker Compose installed, you're ready to run a "Hello World" example.

Passo 2 — Executando um Container com o Docker Compose

O registro público do Docker, o Docker Hub, inclui uma imagem simples “Hello World” para demonstração e teste. Ela ilustra a configuração mínima necessária para executar um container usando o Docker Compose: um arquivo YAML que chama uma única imagem.

Primeiro, crie um diretório para o nosso arquivo YAML:

  • mkdir hello-world

Em seguida, mude para o diretório:

  • cd hello-world

Agora crie o arquivo YAML usando seu editor de texto favorito. Este tutorial usará o vi:

  • vi docker-compose.yml

Entre no modo de inserção, pressionando i, depois coloque o seguinte conteúdo no arquivo:

docker-compose.yml
my-test:
  image: hello-world

The first line will be part of the container name. The second line specifies which image to use to create the container. When you run the command docker-compose up, it will look for a local image by the name specified, hello-world.

With this in place, hit ESC to leave insert mode. Enter :x then ENTER to save and exit the file.

To look for images manually on your system, use the docker images command:

  • docker images

When there are no local images at all, only the column headings display:

Output
REPOSITORY TAG IMAGE ID CREATED SIZE

Now, while still in the ~/hello-world directory, execute the following command to create the container:

  • docker-compose up

The first time we run the command, if there's no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:

Output
Pulling my-test (hello-world:)…
latest: Pulling from library/hello-world
1b930d010525: Pull complete
. . .

After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:

Output
. . .
Creating helloworld_my-test_1…
Attaching to helloworld_my-test_1
my-test_1  |
my-test_1  | Hello from Docker.
my-test_1  | This message shows that your installation appears to be working correctly.
my-test_1  |
. . .

Then it will print an explanation of what it did:

Output
. . .
my-test_1  | To generate this message, Docker took the following steps:
my-test_1  |  1. The Docker client contacted the Docker daemon.
my-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
my-test_1  |     (amd64)
my-test_1  |  3. The Docker daemon created a new container from that image which runs the
my-test_1  |     executable that produces the output you are currently reading.
my-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1  |     to your terminal.
. . .

Docker containers only run as long as the command is active, so once hello finished running, the container stopped. Consequently, when you look at active processes, the column headers will appear, but the hello-world container won't be listed because it's not running:

  • docker ps
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Use the -a flag to show all containers, not just the active ones:

  • docker ps -a
Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
50a99a0beebd        hello-world         "/hello"            3 minutes ago       Exited (0) 3 minutes ago                       hello-world_my-test_1

Now that you've tested running a container, you can move on to exploring some of the basic Docker Compose commands.

Step 3 — Learning Docker Compose Commands

To get you started with Docker Compose, this section will go over the general commands that the docker-compose tool supports.

The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine; just make one directory for each container and one docker-compose.yml file for each directory.

So far you've been running docker-compose up on your own, from which you can use CTRL-C to shut the container down. This allows debug messages to be displayed in the terminal window. This isn't ideal, though; when running in production it is more robust to have docker-compose act more like a service. One simple way to do this is to add the -d option when you up your session:

  • docker-compose up -d

docker-compose will now run in the background.
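
Because the containers are now detached from your terminal, their output no longer appears there. If you need it back, the docker-compose logs command retrieves it; the optional -f flag follows the logs as they are written, much like tail -f:

  • docker-compose logs -f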

To show your group of Docker containers (both stopped and currently running), use the following command:

  • docker-compose ps -a

If a container is stopped, the State will be listed as Exited, as shown in the following example:

Output
         Name            Command   State   Ports
------------------------------------------------
hello-world_my-test_1    /hello    Exit 0

A running container will show Up:

Output
     Name              Command           State        Ports
---------------------------------------------------------------
nginx_nginx_1    nginx -g daemon off;    Up      443/tcp, 80/tcp

To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file that you used to start the Docker group:

  • docker-compose stop

Note: docker-compose kill is also available if you need to shut things down more forcefully.

In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch, you can use the rm command to fully delete all the containers that make up your container group:

  • docker-compose rm
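
If the old information you want to discard lives in anonymous volumes attached to those containers, you can pass the -v flag so that rm deletes those volumes as well:

  • docker-compose rm -v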

If you try any of these commands from a directory other than the one that contains a Docker container and a .yml file, it will return an error:

Output
ERROR: Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?

Supported filenames: docker-compose.yml, docker-compose.yaml

This section has covered the basics of how to manipulate containers with Docker Compose. If you needed greater control over your containers, you could access the filesystem of the Docker container and work from a command prompt inside your container, a process described in the next section.

Step 4 — Accessing the Docker Container Filesystem

To work on the command prompt inside a container and access its filesystem, you can use the docker exec command.

The "Hello World" example exits after it runs, so to test out docker exec, start a container that will keep running. For the purposes of this tutorial, use the Nginx image from Docker Hub.

Create a new directory named nginx and move into it:

  • mkdir ~/nginx
  • cd ~/nginx

Next, make a docker-compose.yml file in your new directory and open it in a text editor:

  • vi docker-compose.yml

Next, add the following lines to the file:

~/nginx/docker-compose.yml
nginx:
  image: nginx

Save the file and exit. Start the Nginx container as a background process with the following command:

  • docker-compose up -d

Docker Compose will download the Nginx image and start the container in the background.

Now you will need the CONTAINER ID for the container. List all of the containers that are running with the following command:

  • docker ps

You will see something similar to the following:

Output of `docker ps`
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b86b6699714c        nginx               "nginx -g 'daemon of…"   20 seconds ago      Up 19 seconds       80/tcp              nginx_nginx_1

If you wanted to make a change to the filesystem inside this container, you'd take its ID (in this example, b86b6699714c) and use docker exec to start a shell inside the container:

  • docker exec -it b86b6699714c /bin/bash

The -t option opens up a terminal, and the -i option makes it interactive. /bin/bash opens a bash shell to the running container.

You will then see a bash prompt for the container similar to:

root@b86b6699714c:/# 

From here, you can work from the command prompt inside your container. Keep in mind, however, that unless you are in a directory that is saved as part of a data volume, your changes will disappear as soon as the container is removed or recreated. Also remember that most Docker images are created with minimal Linux installs, so some of the command line utilities and tools you are used to may not be present.

Conclusion

You've now installed Docker Compose, tested your installation by running a "Hello World" example, and explored some basic commands.

While the "Hello World" example confirmed your installation, its simple configuration does not show one of the main benefits of Docker Compose: being able to bring a group of Docker containers up and down all at the same time. To see the power of Docker Compose in action, check out How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose and How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04. Although these tutorials are geared toward Ubuntu 16.04 and 18.04, the steps can be adapted for CentOS 7.


How To Install and Use Docker on Debian 10

Introduction

Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

In this tutorial, you’ll install and use Docker Community Edition (CE) on Debian 10. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

Prerequisites

To follow this tutorial, you will need the following:

  • One Debian 10 server set up by following the Debian 10 initial server setup guide, including a sudo non-root user and a firewall.
  • An account on Docker Hub if you wish to create your own images and push them to Docker Hub, as shown in Steps 7 and 8.

Step 1 — Installing Docker

The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

First, update your existing list of packages:

  • sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

  • sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common

Then add the GPG key for the official Docker repository to your system:

  • curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

  • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

Next, update the package database with the Docker packages from the newly added repo:

  • sudo apt update

Make sure you are about to install from the Docker repo instead of the default Debian repo:

  • apt-cache policy docker-ce

You’ll see output like this, although the version number for Docker may be different:

Output of apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 5:18.09.7~3-0~debian-buster
  Version table:
     5:18.09.7~3-0~debian-buster 500
        500 https://download.docker.com/linux/debian buster/stable amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Debian 10 (buster).

Finally, install Docker:

  • sudo apt install docker-ce

Docker is now installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

  • sudo systemctl status docker

The output will be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-07-08 15:11:19 UTC; 58s ago
     Docs: https://docs.docker.com
 Main PID: 5709 (dockerd)
    Tasks: 8
   Memory: 31.6M
   CGroup: /system.slice/docker.service
           └─5709 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Installing Docker gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.

Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  • sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

  • su - ${USER}

You will be prompted to enter your user’s password to continue.

Confirm that your user is now added to the docker group by typing:

  • id -nG
Output
sammy sudo docker

If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

  • sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Let’s explore the docker command next.

Step 3 — Using the Docker Command

Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

  • docker [option] [command] [arguments]

To view all available subcommands, type:

  • docker

As of Docker 18, the complete list of available subcommands includes:

Output
attach      Attach local standard input, output, and error streams to a running container
build       Build an image from a Dockerfile
commit      Create a new image from a container's changes
cp          Copy files/folders between a container and the local filesystem
create      Create a new container
diff        Inspect changes to files or directories on a container's filesystem
events      Get real time events from the server
exec        Run a command in a running container
export      Export a container's filesystem as a tar archive
history     Show the history of an image
images      List images
import      Import the contents from a tarball to create a filesystem image
info        Display system-wide information
inspect     Return low-level information on Docker objects
kill        Kill one or more running containers
load        Load an image from a tar archive or STDIN
login       Log in to a Docker registry
logout      Log out from a Docker registry
logs        Fetch the logs of a container
pause       Pause all processes within one or more containers
port        List port mappings or a specific mapping for the container
ps          List containers
pull        Pull an image or a repository from a registry
push        Push an image or a repository to a registry
rename      Rename a container
restart     Restart one or more containers
rm          Remove one or more containers
rmi         Remove one or more images
run         Run a command in a new container
save        Save one or more images to a tar archive (streamed to STDOUT by default)
search      Search the Docker Hub for images
start       Start one or more stopped containers
stats       Display a live stream of container(s) resource usage statistics
stop        Stop one or more running containers
tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top         Display the running processes of a container
unpause     Unpause all processes within one or more containers
update      Update configuration of one or more containers
version     Show the Docker version information
wait        Block until one or more containers stop, then print their exit codes

To view the options available to a specific command, type:

  • docker docker-subcommand --help
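
For example, to list the options available to the ps subcommand, you would type:

  • docker ps --help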

To view system-wide information about Docker, use:

  • docker info

Let’s explore some of these commands. We’ll start by working with images.

Step 4 — Working with Docker Images

Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need will have images hosted there.

To check whether you can access and download images from Docker Hub, type:

  • docker run hello-world

The output will indicate that Docker is working correctly:

Output
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:41a65640635299bab090f783209c1e3a3f11934cf7756b09cb2f1e02147c6ed8
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

  • docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

Output
NAME                                                   DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                                                 Ubuntu is a Debian-based Linux operating sys…   9704    [OK]
dorowu/ubuntu-desktop-lxde-vnc                         Docker image to provide HTML5 VNC interface …   319                [OK]
rastasheep/ubuntu-sshd                                 Dockerized SSH service, built on top of offi…   224                [OK]
consol/ubuntu-xfce-vnc                                 Ubuntu container with "headless" VNC session…   183                [OK]
ubuntu-upstart                                         Upstart is an event-based replacement for th…   99      [OK]
ansible/ubuntu14.04-ansible                            Ubuntu 14.04 LTS with ansible                   97                 [OK]
neurodebian                                            NeuroDebian provides neuroscience research s…   57      [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5   ubuntu-16-nginx-php-phpmyadmin-mysql-5          50                 [OK]
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull subcommand.

Execute the following command to download the official ubuntu image to your computer:

  • docker pull ubuntu

You’ll see the following output:

Output
Using default tag: latest
latest: Pulling from library/ubuntu
5b7339215d1d: Pull complete
14ca88e9f672: Pull complete
a31c3b1caad4: Pull complete
b054a26005b7: Pull complete
Digest: sha256:9b1702dcfe32c873a770a32cfd306dd7fc1c4fd134adfb783db68defc8894b3c
Status: Downloaded newer image for ubuntu:latest

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

To see the images that have been downloaded to your computer, type:

  • docker images

The output should look similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              4c108a37151f        2 weeks ago         64.2MB
hello-world         latest              fce289e99eb9        6 months ago        1.84kB

As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Let’s look at how to run containers in more detail.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let’s run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

  • docker run -it ubuntu

Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:

Output
root@d9b100f2f636:/#

Note the container ID in the command prompt. In this example, it is d9b100f2f636. You'll need that container ID later to identify the container when you want to remove it.

Now you can run any command inside the container. For example, let’s update the package database inside the container. You don’t need to prefix any command with sudo, because you’re operating inside the container as the root user:

  • apt update

Then install any application in it. Let’s install Node.js:

  • apt install nodejs

This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

  • node -v

You’ll see the version number displayed in your terminal:

Output
v8.10.0

Any changes you make inside the container only apply to that container.

To exit the container, type exit at the prompt.

Let’s look at managing the containers on our system next.

Step 6 — Managing Docker Containers

After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:

  • docker ps

You will see output similar to the following:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

In this tutorial, you started two containers; one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

To view all containers, both active and inactive, run docker ps with the -a switch:

  • docker ps -a

You’ll see output similar to this:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                      PORTS               NAMES
d42d0bbfbd35        ubuntu              "/bin/bash"         About a minute ago   Exited (0) 20 seconds ago                       friendly_volhard
0740844d024c        hello-world         "/hello"            3 minutes ago        Exited (0) 3 minutes ago                        elegant_neumann

To view the most recently created container, pass the -l switch:

  • docker ps -l

Output
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                      PORTS               NAMES
d42d0bbfbd35        ubuntu              "/bin/bash"         About a minute ago   Exited (0) 34 seconds ago                       friendly_volhard

To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d42d0bbfbd35:

  • docker start d42d0bbfbd35

The container will start, and you can use docker ps to see its status:

CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
d42d0bbfbd35        ubuntu              "/bin/bash"         About a minute ago   Up 8 seconds                            friendly_volhard

To stop a running container, use docker stop, followed by the container ID or name. This time, we’ll use the name that Docker assigned the container, which is friendly_volhard:

  • docker stop friendly_volhard

Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it:

  • docker rm elegant_neumann

You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it's stopped. See the output of docker run --help for more information on these options and others.
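
As a quick illustration of those two switches together, the following starts a disposable container under a hypothetical name, then stops it, at which point --rm removes it automatically:

  • docker run -d --rm --name nginx-test nginx  # nginx-test is a hypothetical name
  • docker stop nginx-test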

Containers can be turned into images which you can use to build new containers. Let’s look at how that works.

Step 7 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.

To do so, commit the changes to a new Docker image instance using the following command:

  • docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

For example, for the user sammy, with the container ID of d42d0bbfbd35, the command would be:

  • docker commit -m "added Node.js" -a "sammy" d42d0bbfbd35 sammy/ubuntu-nodejs

When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

  • docker images

You’ll see output like this:

Output
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
sammy/ubuntu-nodejs   latest              d441c62350b4        10 seconds ago      152MB
ubuntu                latest              4c108a37151f        2 weeks ago         64.2MB
hello-world           latest              fce289e99eb9        6 months ago        1.84kB

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made; in this example, the change was that Node.js was installed. So next time you need to run a container using Ubuntu with Node.js pre-installed, you can just use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.
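
Still, as a rough sketch of what that automation looks like (a hypothetical file, not a step in this tutorial), a Dockerfile reproducing the image committed above might contain:

Dockerfile
# Start from the official ubuntu base image
FROM ubuntu
# Install Node.js non-interactively when the image is built
RUN apt-get update && apt-get install -y nodejs

Running docker build -t sammy/ubuntu-nodejs . in the directory containing this file would then produce a comparable image without any interactive steps.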

Now let’s share the new image with others so they can create containers from it.

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

To push your image, first log into Docker Hub.

  • docker login -u docker-registry-username

You’ll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

  • docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

Then you may push your own image using:

  • docker push docker-registry-username/docker-image-name

To push the ubuntu-nodejs image to the sammy repository, the command would be:

  • docker push sammy/ubuntu-nodejs

The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...

After pushing an image to a registry, it should be listed on your account's dashboard, like the one shown in the image below.

New Docker image listing on Docker Hub

If a push attempt results in an error of this sort, then you likely did not log in:

Output
The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

Conclusion

In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.
