Django Weblog: Unauthenticated Remote Code Execution on djangoci.com

Yesterday the Django Security and Operations teams were made aware of a remote code execution vulnerability in the Django Software Foundation’s Jenkins infrastructure, used to run tests on the Django code base for GitHub pull requests and release branches. In this blog post, the teams want to outline the course of events.

Impact

The Django Security and Operations teams want to assure users that at no point was there any risk of malicious Django releases being issued or uploaded to PyPI or the Django Project website. Official Django releases have always been issued manually by releasers. Nor was there any risk to any user data related to the Django Project website or the Django bug tracker.

Timeline

On May 14th, 2019 at 07:48 UTC the Django Security team was made aware by Ai Ho, through its HackerOne project, that Django’s Continuous Integration service was susceptible to a remote code execution vulnerability allowing unauthenticated users to execute arbitrary code.

At 08:01 UTC, the Django Security team acknowledged the report and took immediate steps to mitigate the issue by shutting down the primary Jenkins server. The Jenkins master server was shut down by 08:10 UTC.

At 08:45 UTC, the Operations team started provisioning a new server. In cases of a compromised server, it is almost always impractical to clean it up. Starting with a fresh, clean installation is a considerably better and safer approach.

At 14:59 UTC, the new Jenkins master server was up and running again, with some configuration still needed before Jenkins jobs could run. That work was completed about 10 minutes later, at 15:09 UTC.

At 15:44 UTC, Jenkins started running tests against GitHub pull requests again.

At 16:00 UTC, the Operations team discussed the necessity of revoking various Let’s Encrypt certificates or keys. Since there was no indication that either the account or the certificate’s private key had been exposed, it was deemed sufficient to rely on the auto-expiration of the Let’s Encrypt certificate. Nevertheless, a new private key for the djangoci.com certificate was generated during the bootstrapping of the new Jenkins master server.

At 16:50 UTC, the Jenkins Windows nodes were working again and started to process jobs.

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com or HackerOne, and not via Django’s Trac instance or the django-developers list. Please see our security policies for further information.

Planet Python

How To Install and Configure Zabbix to Securely Monitor Remote Servers on Ubuntu 18.04

The author selected the Open Source Initiative to receive a donation as part of the Write for DOnations program.

Introduction

Zabbix is open-source monitoring software for networks and applications. It offers real-time monitoring of thousands of metrics collected from servers, virtual machines, network devices, and web applications. These metrics can help you determine the current health of your IT infrastructure and detect problems with hardware or software components before customers complain. Useful information is stored in a database so you can analyze data over time and improve the quality of provided services, or plan upgrades of your equipment.

Zabbix uses several options for collecting metrics, including agentless monitoring of user services and client-server architecture. To collect server metrics, it uses a small agent on the monitored client to gather data and send it to the Zabbix server. Zabbix supports encrypted communication between the server and connected clients, so your data is protected while it travels over insecure networks.

The Zabbix server stores its data in a relational database powered by MySQL, PostgreSQL, or Oracle. You can also store historical data in NoSQL databases like Elasticsearch and TimescaleDB. Zabbix provides a web interface so you can view data and configure system settings.

In this tutorial, you will configure two machines. One will be configured as the server, and the other as a client that you’ll monitor. The server will use a MySQL database to record monitoring data and use Apache to serve the web interface.

Prerequisites

To follow this tutorial, you will need:

  • Two Ubuntu 18.04 servers set up by following the Initial Server Setup Guide for Ubuntu 18.04, including a non-root user with sudo privileges and a firewall configured with ufw. On one server, you will install Zabbix; this tutorial will refer to this as the Zabbix server. It will monitor your second server; this second server will be referred to as the second Ubuntu server.

  • The server that will run the Zabbix server needs Apache, MySQL, and PHP installed. Follow this guide to configure those on your Zabbix server.

Additionally, because the Zabbix Server is used to access valuable information about your infrastructure that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged. You can follow the Let’s Encrypt on Ubuntu 18.04 guide to obtain the free TLS/SSL certificate.

Step 1 — Installing the Zabbix Server

First, you need to install Zabbix on the server where you installed MySQL, Apache, and PHP. Log into this machine as your non-root user:

  • ssh sammy@zabbix_server_ip_address

Zabbix is available in Ubuntu’s package manager, but it’s outdated, so use the official Zabbix repository to install the latest stable version. Download and install the repository configuration package:

  • wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.2-1+bionic_all.deb
  • sudo dpkg -i zabbix-release_4.2-1+bionic_all.deb

You will see the following output:

Output
Selecting previously unselected package zabbix-release.
(Reading database ... 61483 files and directories currently installed.)
Preparing to unpack zabbix-release_4.2-1+bionic_all.deb ...
Unpacking zabbix-release (4.2-1+bionic) ...
Setting up zabbix-release (4.2-1+bionic) ...

Update the package index so the new repository is included:

  • sudo apt update

Then install the Zabbix server and web frontend with MySQL database support:

  • sudo apt install zabbix-server-mysql zabbix-frontend-php

Also, install the Zabbix agent, which will let you collect data about the Zabbix server status itself.

  • sudo apt install zabbix-agent

Before you can use Zabbix, you have to set up a database to hold the data that the Zabbix server will collect from its agents. You can do this in the next step.

Step 2 — Configuring the MySQL Database for Zabbix

You need to create a new MySQL database and populate it with some basic information in order to make it suitable for Zabbix. You’ll also create a specific user for this database so Zabbix isn’t logging into MySQL with the root account.

Log into MySQL as the root user using the root password that you set up during the MySQL server installation:

  • mysql -uroot -p

Create the Zabbix database with UTF-8 character support:

  • create database zabbix character set utf8 collate utf8_bin;

Then create a user that the Zabbix server will use, give it access to the new database, and set the password for the user:

  • grant all privileges on zabbix.* to zabbix@localhost identified by 'your_zabbix_mysql_password';

Then apply these new permissions:

  • flush privileges;

That takes care of the user and the database. Exit out of the database console.

  • quit;

Next you have to import the initial schema and data. The Zabbix installation provided you with a file that sets this up.

Run the following command to set up the schema and import the data into the zabbix database. Use zcat since the data in the file is compressed.

  • zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -p zabbix

Enter the password for the zabbix MySQL user that you configured when prompted.

This command will not output any errors if it was successful. If you see the error ERROR 1045 (28000): Access denied for user zabbix@'localhost' (using password: YES), make sure you used the password for the zabbix user and not the root user.
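
If you want to double-check that the import succeeded before moving on, you can list a few of the newly created tables. This optional sanity check assumes the zabbix database and user created above:

  • mysql -uzabbix -p zabbix -e "show tables;" | head -n 5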

In order for the Zabbix server to use this database, you need to set the database password in the Zabbix server configuration file. Open the configuration file in your preferred text editor. This tutorial will use nano:

  • sudo nano /etc/zabbix/zabbix_server.conf

Look for the following section of the file:

/etc/zabbix/zabbix_server.conf
### Option: DBPassword
#       Database password. Ignored for SQLite.
#       Comment this line if no password is used.
#
# Mandatory: no
# Default:
# DBPassword=

These comments in the file explain how to connect to the database. You need to set the DBPassword value in the file to the password for your database user. Add this line below those comments to configure the database:

/etc/zabbix/zabbix_server.conf
...
DBPassword=your_zabbix_mysql_password

Save and close zabbix_server.conf by pressing CTRL+X, followed by Y and then ENTER if you’re using nano.

That takes care of the Zabbix server configuration. Next, you will make some modifications to your PHP setup in order for the Zabbix web interface to work properly.

Step 3 — Configuring PHP for Zabbix

The Zabbix web interface is written in PHP and requires some special PHP server settings. The Zabbix installation process created an Apache configuration file that contains these settings. It is located in the directory /etc/zabbix and is loaded automatically by Apache. You need to make a small change to this file, so open it up with the following:

  • sudo nano /etc/zabbix/apache.conf

The file contains PHP settings that meet the necessary requirements for the Zabbix web interface. However, the timezone setting is commented out by default. To make sure that Zabbix uses the correct time, you need to set the appropriate timezone.

/etc/zabbix/apache.conf
...
<IfModule mod_php7.c>
    php_value max_execution_time 300
    php_value memory_limit 128M
    php_value post_max_size 16M
    php_value upload_max_filesize 2M
    php_value max_input_time 300
    php_value always_populate_raw_post_data -1
    # php_value date.timezone Europe/Riga
</IfModule>

Uncomment the timezone line, highlighted in the preceding code block, and change it to your timezone. You can use this list of supported time zones to find the right one for you. Then save and close the file.
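
If you prefer to make this change from the command line instead of an editor, a sed one-liner can uncomment the line and set a value in one step. This is only a sketch: America/New_York is a placeholder for your own timezone, and it assumes the commented line still matches the default shown above:

  • sudo sed -i 's|# php_value date.timezone Europe/Riga|php_value date.timezone America/New_York|' /etc/zabbix/apache.conf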

Now restart Apache to apply these new settings.

  • sudo systemctl restart apache2

You can now start the Zabbix server.

  • sudo systemctl start zabbix-server

Then check whether the Zabbix server is running properly:

  • sudo systemctl status zabbix-server

You will see the following status:

Output
● zabbix-server.service - Zabbix Server
   Loaded: loaded (/lib/systemd/system/zabbix-server.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-04-05 08:50:54 UTC; 3s ago
  Process: 16497 ExecStart=/usr/sbin/zabbix_server -c $CONFFILE (code=exited, status=0/SUCCESS)
  ...
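
If the status is anything other than active (running), the server log is the first place to look. The Ubuntu packages write it to /var/log/zabbix/zabbix_server.log by default, though the path may differ if you changed the LogFile option:

  • sudo tail -n 20 /var/log/zabbix/zabbix_server.log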

Finally, enable the server to start at boot time:

  • sudo systemctl enable zabbix-server

The server is set up and connected to the database. Next, set up the web frontend.

Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow this tutorial now to obtain a free SSL certificate for Apache on Ubuntu 18.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

Step 4 — Configuring Settings for the Zabbix Web Interface

The web interface lets you see reports and add hosts that you want to monitor, but it needs some initial setup before you can use it. Launch your browser and go to the address http://zabbix_server_name/zabbix/. On the first screen, you will see a welcome message. Click Next step to continue.

On the next screen, you will see the table that lists all of the prerequisites to run Zabbix.

Prerequisites

All of the values in this table must be OK, so verify that they are. Be sure to scroll down and look at all of the prerequisites. Once you’ve verified that everything is ready to go, click Next step to proceed.

The next screen asks for database connection information.

DB Connection

You told the Zabbix server about your database, but the Zabbix web interface also needs access to the database to manage hosts and read data. Therefore enter the MySQL credentials you configured in Step 2 and click Next step to proceed.

On the next screen, you can leave the options at their default values.

Zabbix Server Details

The Name is optional; it is used in the web interface to distinguish one server from another in case you have several monitoring servers. Click Next step to proceed.

The next screen will show the pre-installation summary so you can confirm everything is correct.

Summary

Click Next step to proceed to the final screen.

The web interface setup is complete! This process creates the configuration file /usr/share/zabbix/conf/zabbix.conf.php which you could back up and use in the future. Click Finish to proceed to the login screen. The default user is Admin and the password is zabbix.

Before you log in, set up the Zabbix agent on your second Ubuntu server.

Step 5 — Installing and Configuring the Zabbix Agent

Now you need to configure the agent software that will send monitoring data to the Zabbix server.

Log in to the second Ubuntu server:

  • ssh sammy@second_ubuntu_server_ip_address

Then, just like on the Zabbix server, run the following commands to install the repository configuration package:

  • wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.2-1+bionic_all.deb
  • sudo dpkg -i zabbix-release_4.2-1+bionic_all.deb

Next, update the package index:

  • sudo apt update

Then install the Zabbix agent:

  • sudo apt install zabbix-agent

Zabbix supports certificate-based encryption, but setting up a certificate authority is beyond the scope of this tutorial. Instead, you will use pre-shared keys (PSK) to secure the connection between the server and the agent.

First, generate a PSK:

  • sudo sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk"

Show the key so you can copy it somewhere. You will need it to configure the host.

  • cat /etc/zabbix/zabbix_agentd.psk

The key will look something like this:

Output
12eb854dea38ac9ee7d1ded2d74cee6262b0a56710f6946f7913d674ab82cdd4
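
Because this key is a shared secret, you may also want to make sure only the agent can read it. A minimal sketch, assuming the zabbix system user was created by the package installation:

  • sudo chown zabbix:zabbix /etc/zabbix/zabbix_agentd.psk
  • sudo chmod 640 /etc/zabbix/zabbix_agentd.psk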

Now edit the Zabbix agent settings to set up its secure connection to the Zabbix server. Open the agent configuration file in your text editor:

  • sudo nano /etc/zabbix/zabbix_agentd.conf

Each setting within this file is documented via informative comments throughout the file, but you only need to edit some of them.

First you have to edit the IP address of the Zabbix server. Find the following section:

/etc/zabbix/zabbix_agentd.conf
...
### Option: Server
#       List of comma delimited IP addresses (or hostnames) of Zabbix servers.
#       Incoming connections will be accepted only from the hosts listed here.
#       If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally.
#
# Mandatory: no
# Default:
# Server=

Server=127.0.0.1
...

Change the default value to the IP of your Zabbix server:

/etc/zabbix/zabbix_agentd.conf
...
Server=zabbix_server_ip_address
...

Next, find the section that configures the secure connection to the Zabbix server and enable pre-shared key support. Find the TLSConnect section, which looks like this:

/etc/zabbix/zabbix_agentd.conf
...
### Option: TLSConnect
#       How the agent should connect to server or proxy. Used for active checks.
#       Only one value can be specified:
#               unencrypted - connect without encryption
#               psk         - connect using TLS and a pre-shared key
#               cert        - connect using TLS and a certificate
#
# Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
# Default:
# TLSConnect=unencrypted
...

Then add this line to configure pre-shared key support:

/etc/zabbix/zabbix_agentd.conf
...
TLSConnect=psk
...

Next, locate the TLSAccept section, which looks like this:

/etc/zabbix/zabbix_agentd.conf
...
### Option: TLSAccept
#       What incoming connections to accept.
#       Multiple values can be specified, separated by comma:
#               unencrypted - accept connections without encryption
#               psk         - accept connections secured with TLS and a pre-shared key
#               cert        - accept connections secured with TLS and a certificate
#
# Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
# Default:
# TLSAccept=unencrypted
...

Configure incoming connections to support pre-shared keys by adding this line:

/etc/zabbix/zabbix_agentd.conf
...
TLSAccept=psk
...

Next, find the TLSPSKIdentity section, which looks like this:

/etc/zabbix/zabbix_agentd.conf
...
### Option: TLSPSKIdentity
#       Unique, case sensitive string used to identify the pre-shared key.
#
# Mandatory: no
# Default:
# TLSPSKIdentity=
...

Choose a unique name to identify your pre-shared key by adding this line:

/etc/zabbix/zabbix_agentd.conf
...
TLSPSKIdentity=PSK 001
...

You’ll use this as the PSK ID when you add your host through the Zabbix web interface.

Then set the option that points to your previously created pre-shared key. Locate the TLSPSKFile option:

/etc/zabbix/zabbix_agentd.conf
...
### Option: TLSPSKFile
#       Full pathname of a file containing the pre-shared key.
#
# Mandatory: no
# Default:
# TLSPSKFile=
...

Add this line to point the Zabbix agent to your PSK file you created:

/etc/zabbix/zabbix_agentd.conf
...
TLSPSKFile=/etc/zabbix/zabbix_agentd.psk
...

Save and close the file. Now you can restart the Zabbix agent and set it to start at boot time:

  • sudo systemctl restart zabbix-agent
  • sudo systemctl enable zabbix-agent

For good measure, check that the Zabbix agent is running properly:

  • sudo systemctl status zabbix-agent

You will see the following status, indicating the agent is running:

Output
● zabbix-agent.service - Zabbix Agent
   Loaded: loaded (/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-04-05 09:03:04 UTC; 1s ago
   ...

The agent will listen on port 10050 for connections from the server. Configure UFW to allow connections to this port:

  • sudo ufw allow 10050/tcp

You can learn more about UFW in How To Set Up a Firewall with UFW on Ubuntu 18.04.
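
If you want to be stricter, you can limit that rule so only the Zabbix server can reach the agent port, instead of any host. This is optional; substitute your own Zabbix server’s IP address:

  • sudo ufw allow from zabbix_server_ip_address to any port 10050 proto tcp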

Your agent is now ready to send data to the Zabbix server. But in order to use it, you have to link to it from the server’s web console. In the next step, you will complete the configuration.

Step 6 — Adding the New Host to the Zabbix Server

Installing an agent on a server you want to monitor is only half of the process. Each host you want to monitor needs to be registered on the Zabbix server, which you can do through the web interface.

Log in to the Zabbix Server web interface by navigating to the address http://zabbix_server_name/zabbix/.

The Zabbix login screen

When you have logged in, click on Configuration, and then Hosts in the top navigation bar. Then click the Create host button in the top right corner of the screen. This will open the host configuration page.

Creating a host

Adjust the Host name and IP address to reflect the host name and IP address of your second Ubuntu server, then add the host to a group. You can select an existing group, for example Linux servers, or create your own group. The host can be in multiple groups. To do this, enter the name of an existing or new group in the Groups field and select the desired value from the proposed list.

Once you’ve added the group, click the Templates tab.

Adding a template to the host

Type Template OS Linux in the Search field and then click Add to add this template to the host.

Next, navigate to the Encryption tab. Select PSK for both Connections to host and Connections from host. Then set PSK identity to PSK 001, which is the value of the TLSPSKIdentity setting of the Zabbix agent you configured previously. Then set PSK value to the key you generated for the Zabbix agent. It’s the one stored in the file /etc/zabbix/zabbix_agentd.psk on the agent machine.

Setting up the encryption

Finally, click the Add button at the bottom of the form to create the host.

You will see your new host in the list. Wait for a minute and reload the page to see green labels indicating that everything is working fine and the connection is encrypted.

Zabbix shows your new host
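
If the availability icon stays red instead of turning green, you can test the PSK handshake directly from the Zabbix server. The sketch below assumes you install the zabbix-get utility on the server and copy the agent’s PSK file there; a reply of 1 means the agent answered over the encrypted connection:

  • sudo apt install zabbix-get
  • zabbix_get -s second_ubuntu_server_ip_address -k agent.ping --tls-connect psk --tls-psk-identity "PSK 001" --tls-psk-file /path/to/copied/zabbix_agentd.psk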

If you have additional servers you need to monitor, log in to each host, install the Zabbix agent, generate a PSK, configure the agent, and add the host to the web interface following the same steps you followed to add your first host.

The Zabbix server is now monitoring your second Ubuntu server. Now, set up email notifications to be notified about problems.

Step 7 — Configuring Email Notifications

Zabbix automatically supports several types of notifications: email, Jabber, SMS, etc. You can also use alternative notification methods, such as Telegram or Slack. You can see the full list of integrations here.

The simplest communication method is email, and this tutorial will configure notifications for this media type.

Click on Administration, and then Media types in the top navigation bar. You will see the list of all media types. Click on Email.

Adjust the SMTP options according to the settings provided by your email service. This tutorial uses Gmail’s SMTP capabilities to set up email notifications; if you would like more information about setting this up, see How To Use Google’s SMTP Server.


Note: If you use 2-Step Verification with Gmail, you need to generate an App Password for Zabbix. You don’t need to remember it; you’ll only have to enter it once during setup. You will find instructions on how to generate this password in the Google Help Center.

You can also choose the message format—HTML or plain text. Finally, click the Update button at the bottom of the form to update the email parameters.

Setting up email

Now, create a new user. Click on Administration, and then Users in the top navigation bar. You will see the list of users. Then click the Create user button in the top right corner of the screen. This will open the user configuration page.

Creating a user

Enter the new username in the Alias field and set up a new password. Next, add the user to the administrator’s group. Type Zabbix administrators in the Groups field and select it from the proposed list.

Once you’ve added the group, click the Media tab and click on the Add underlined link. You will see a pop-up window.

Adding an email

Enter your email address in the Send to field. You can leave the rest of the options at the default values. Click the Add button at the bottom to submit.

Now navigate to the Permissions tab. Select Zabbix Super Admin from the User type drop-down menu.

Finally, click the Add button at the bottom of the form to create the user.

Now you need to enable notifications. Click on the Configuration tab, and then Actions in the top navigation bar. You will see a pre-configured action, which is responsible for sending notifications to all Zabbix administrators. You can review and change the settings by clicking on its name. For the purposes of this tutorial, use the default parameters. To enable the action, click on the red Disabled link in the Status column.

Now you are ready to receive alerts. In the next step, you will generate one to test your notification setup.

Step 8 — Generating a Test Alert

In this step, you will generate a test alert to ensure everything is connected. By default, Zabbix keeps track of the amount of free disk space on your server. It automatically detects all disk mounts and adds the corresponding checks. This discovery is executed every hour, so you need to wait a while for the notification to be triggered.

Create a temporary file that’s large enough to trigger Zabbix’s file system usage alert. To do this, log in to your second Ubuntu server if you’re not already connected.

  • ssh sammy@second_ubuntu_server_ip_address

Next, determine how much free space you have on the server. You can use the df command to find out:

  • df -h

The command df will report the disk space usage of your file system, and the -h will make the output human-readable. You’ll see output like the following:

Output
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.2G   23G   5% /

In this case, the free space is 23GB. Your free space may differ.

Use the fallocate command, which allows you to pre-allocate or de-allocate space to a file, to create a file that takes up more than 80% of the available disk space. This will be enough to trigger the alert:

  • fallocate -l 20G /tmp/temp.img

After around an hour, Zabbix will trigger an alert about the amount of free disk space and will run the action you configured, sending the notification message. You can check your inbox for the message from the Zabbix server. You will see a message like:

Output
Problem started at 10:37:54 on 2019.04.05
Problem name: Free disk space is less than 20% on volume /
Host: Second Ubuntu server
Severity: Warning
Original problem ID: 34

You can also navigate to the Monitoring tab, and then Dashboard to see the notification and its details.

Main dashboard

Now that you know the alerts are working, delete the temporary file you created so you can reclaim your disk space:

  • rm -f /tmp/temp.img

After a minute, Zabbix will send the recovery message and the alert will disappear from the main dashboard.

Conclusion

In this tutorial, you learned how to set up a simple and secure monitoring solution which will help you monitor the state of your servers. It can now warn you of problems, and you have the opportunity to analyze the processes occurring in your IT infrastructure.

To learn more about setting up monitoring infrastructure, check out How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04 and How To Gather Infrastructure Metrics with Metricbeat on Ubuntu 18.04.

DigitalOcean Community Tutorials

How To Allow Remote Access to MySQL

Many websites and applications start off with their web server and database backend hosted on the same machine. With time, though, a setup like this can become cumbersome and difficult to scale. A common solution is to separate these functions by setting up a remote database, allowing the server and database to grow at their own pace on their own machines.

One of the more common problems that users run into when trying to set up a remote MySQL database is that their MySQL instance is only configured to listen for local connections. This is MySQL’s default setting, but it won’t work for a remote database setup since MySQL must be able to listen for an external IP address where the server can be reached. To enable this, open up your mysqld.cnf file:

  • sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Navigate to the line that begins with the bind-address directive. It will look like this:

/etc/mysql/mysql.conf.d/mysqld.cnf
. . .
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address            = 127.0.0.1
. . .

By default, this value is set to 127.0.0.1, meaning that the server will only look for local connections. You will need to change this directive to reference an external IP address. For the purposes of troubleshooting, you could set this directive to a wildcard IP address, either *, ::, or 0.0.0.0:

/etc/mysql/mysql.conf.d/mysqld.cnf
. . .
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address            = 0.0.0.0
. . .

Note: If you’re running MySQL 8+, the bind-address directive will not be in the mysqld.cnf file by default. In this case, add the following highlighted line to the bottom of the file:

/etc/mysql/mysql.conf.d/mysqld.cnf
. . .
[mysqld]
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
log-error       = /var/log/mysql/error.log
bind-address            = 0.0.0.0

After changing this line, save and close the file and then restart the MySQL service:

  • sudo systemctl restart mysql
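
Before testing from another machine, you can optionally confirm on the database server itself that MySQL is now listening on the new address. A quick check, assuming the ss utility (part of iproute2) is available:

  • sudo ss -tlnp | grep 3306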

Following this, try accessing your database remotely from another machine:

  • mysql -u user -h database_server_ip -p

If you’re able to access your database, it confirms that the bind-address directive in your configuration file was the issue. Please note, though, that setting bind-address to 0.0.0.0 is insecure as it allows connections to your server from any IP address. On the other hand, if you’re still unable to access the database remotely, then something else may be causing the issue. In either case, you may find it helpful to follow our guide on How To Set Up a Remote Database to Optimize Site Performance with MySQL on Ubuntu 18.04 to set up a more secure remote database configuration.
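
If you do keep a wildcard bind-address, consider at least restricting which hosts can reach the MySQL port at the firewall level. A minimal sketch using UFW, substituting the IP address of the machine that legitimately needs access:

  • sudo ufw allow from remote_client_ip to any port 3306 proto tcp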

DigitalOcean Community Tutorials

How to Install, Run, and Connect to Jupyter Notebook on a Remote Server

The author selected the Apache Software Foundation to receive a $100 donation as part of the Write for DOnations program.

Introduction

Jupyter Notebook is an open-source, interactive web application that allows you to write and run computer code in more than 40 programming languages, including Python, R, Julia, and Scala. A product from Project Jupyter, Jupyter Notebook is useful for iterative coding as it allows you to write a small snippet of code, run it, and return the result.

Jupyter Notebook provides the ability to create notebook documents, referred to simply as “notebooks”. Notebooks created from the Jupyter Notebook are shareable, reproducible research documents which include rich text elements, equations, code and their outputs (figures, tables, interactive plots). Notebooks can also be exported into raw code files, HTML or PDF documents, or used to create interactive slideshows or web pages.

This article will walk you through how to install and configure the Jupyter Notebook application on an Ubuntu 18.04 web server and how to connect to it from your local computer. Additionally, we will also go over how to use Jupyter Notebook to run some example Python code.

Prerequisites

To complete this tutorial, you will need:

  • One Ubuntu 18.04 server set up with a non-root sudo user, and with Python 3, pip, and a Python virtual environment installed.

Additionally, if your local computer is running Windows, you will need to install PuTTY on it in order to establish an SSH tunnel to your server. Follow our guide on How to Create SSH Keys with PuTTY on Windows to download and install PuTTY.

Step 1 — Installing Jupyter Notebook

Since notebooks are used to write, run and see the result of small snippets of code, you will first need to set up the programming language support. Jupyter Notebook uses a language-specific kernel, a computer program that runs and introspects code. Jupyter Notebook has many kernels in different languages, the default being IPython. In this tutorial, you will set up Jupyter Notebook to run Python code through the IPython kernel.

Assuming that you followed the tutorials linked in the Prerequisites section, you should have Python 3, pip and a virtual environment installed. The examples in this guide follow the convention used in the prerequisite tutorial on installing Python 3, which names the virtual environment “my_env”, but you should feel free to rename it.

Begin by activating the virtual environment:

  • source my_env/bin/activate

Following this, your prompt will be prefixed with the name of your environment.

Now that you’re in your virtual environment, go ahead and install Jupyter Notebook:

  • python3 -m pip install jupyter

If the installation was successful, you will see an output similar to the following:

Output
. . . Successfully installed MarkupSafe-1.0 Send2Trash-1.5.0 backcall-0.1.0 bleach-2.1.3 decorator-4.3.0 entrypoints-0.2.3 html5lib-1.0.1 ipykernel-4.8.2 ipython-6.4.0 ipython-genutils-0.2.0 ipywidgets-7.2.1 jedi-0.12.0 jinja2-2.10 jsonschema-2.6.0 jupyter-1.0.0 jupyter-client-5.2.3 jupyter-console-5.2.0 jupyter-core-4.4.0 mistune-0.8.3 nbconvert-5.3.1 nbformat-4.4.0 notebook-5.5.0 pandocfilters-1.4.2 parso-0.2.0 pexpect-4.5.0 pickleshare-0.7.4 prompt-toolkit-1.0.15 ptyprocess-0.5.2 pygments-2.2.0 python-dateutil-2.7.3 pyzmq-17.0.0 qtconsole-4.3.1 simplegeneric-0.8.1 six-1.11.0 terminado-0.8.1 testpath-0.3.1 tornado-5.0.2

With that, Jupyter Notebook has been installed onto your server. Next, we will go over how to run the application.

Step 2 — Running the Jupyter Notebook

Jupyter Notebook must be run from your VPS so that you can connect to it from your local machine using an SSH Tunnel and your favorite web browser.

To run the Jupyter Notebook server, enter the following command:

  • jupyter notebook

After running this command, you will see output similar to the following:

Output
[I 19:46:22.031 NotebookApp] Writing notebook server cookie secret to /home/sammy/.local/share/jupyter/runtime/notebook_cookie_secret
[I 19:46:22.365 NotebookApp] Serving notebooks from local directory: /home/sammy/environments
[I 19:46:22.365 NotebookApp] 0 active kernels
[I 19:46:22.366 NotebookApp] The Jupyter Notebook is running at:
[I 19:46:22.366 NotebookApp] http://localhost:8888/?token=Example_Jupyter_Token_3cadb8b8b7005d9a46ca4d6675
[I 19:46:22.366 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 19:46:22.366 NotebookApp] No web browser found: could not locate runnable browser.
[C 19:46:22.367 NotebookApp]

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=Example_Jupyter_Token_3cadb8b8b7005d9a46ca4d6675

You might notice in the output that there is a No web browser found warning. This is to be expected, since the application is running on a server and you likely haven’t installed a web browser onto it. This guide will go over how to connect to the Notebook on the server using SSH tunneling in the next section.
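
If you would rather avoid that warning entirely, you can tell Jupyter up front not to look for a browser and, optionally, which port to bind. These are standard Jupyter Notebook flags; the port shown is just the default:

  • jupyter notebook --no-browser --port=8888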

For now, exit the Jupyter Notebook by pressing CTRL+C followed by y, and then pressing ENTER to confirm:

Output
Shutdown this notebook server (y/[n])? y
[C 20:05:47.654 NotebookApp] Shutdown confirmed
[I 20:05:47.654 NotebookApp] Shutting down 0 kernels

Then log out of the server by using the exit command:

  • exit

You’ve just run Jupyter Notebook on your server. However, in order to access the application and start working with notebooks, you’ll need to connect to the application using SSH tunneling and a web browser on your local computer.

Step 3 — Connecting to the Jupyter Notebook Application with SSH Tunneling

SSH tunneling is a simple and fast way to connect to the Jupyter Notebook application running on your server. Secure shell (more commonly known as SSH) is a network protocol which enables you to connect to a remote server securely over an unsecured network.

The SSH protocol includes a port forwarding mechanism that allows you to tunnel certain applications running on a specific port number on a server to a specific port number on your local computer. We will learn how to securely “forward” the Jupyter Notebook application running on your server (on port 8888, by default) to a port on your local computer.

The method you use for establishing an SSH tunnel will depend on your local computer’s operating system. Jump to the subsection below that is most relevant for your machine.

Note: It’s possible to set up and install the Jupyter Notebook using the DigitalOcean Web Console, but connecting to the application via an SSH tunnel must be done through the terminal or with PuTTY.

SSH Tunneling using macOS or Linux

If your local computer is running Linux or macOS, it’s possible to establish an SSH tunnel just by running a single command.

ssh is the standard command to open an SSH connection, but when used with the -L directive, you can specify that a given port on the local host (that is, your local machine) will be forwarded to a given host and port on the remote host (in this case, your server). This means that whatever is running on the specified port on the remote server (8888, Jupyter Notebook’s default port) will appear on the specified port on your local computer (8000 in the example command).

To establish your own SSH tunnel, run the following command. Feel free to change port 8000 to one of your choosing if, for example, 8000 is in use by another process. It is recommended that you use a port greater than or equal to 8000, as those port numbers are unlikely to be used by another process. Be sure to include your own server’s IP address and the name of your server’s non-root user:

  • ssh -L 8000:localhost:8888 sammy@your_server_ip

If there are no errors from this command, it will log you into your remote server. From there, activate the virtual environment:

  • source ~/environments/my_env/bin/activate

Then run the Jupyter Notebook application:

  • jupyter notebook

To connect to Jupyter Notebook, use your favorite web browser to navigate to the local port on the local host: http://localhost:8000. Now that you’re connected to Jupyter Notebook, continue on to Step 4 to learn how to use it.

SSH Tunneling using Windows and PuTTY

PuTTY is an open-source SSH client for Windows which can be used to connect to your server. After downloading and installing PuTTY on your Windows machine (as described in the prerequisite tutorial), open the program and enter your server URL or IP address, as shown here:

Enter server URL or IP into Putty

Next, click + SSH at the bottom of the left pane, and then click Tunnels. In this window, enter the port that you want to use to access Jupyter on your local machine (8000). It is recommended to use a port greater than or equal to 8000, as those port numbers are unlikely to be used by another process. If 8000 is used by another process, though, select a different, unused port number. Next, set the destination as localhost:8888, since port 8888 is the one that Jupyter Notebook is running on. Then click the Add button and the ports should appear in the Forwarded ports field:

Configure SSH tunnel in Putty

Finally, click the Open button. This will both connect your machine to the server via SSH and tunnel the desired ports. If no errors show up, go ahead and activate your virtual environment:

  • source ~/environments/my_env/bin/activate

Then run Jupyter Notebook:

  • jupyter notebook

Next, navigate to the local port in your favorite web browser, for example http://localhost:8000 (or whatever port number you chose), to connect to the Jupyter Notebook instance running on the server. Now that you’re connected to Jupyter Notebook, continue on to Step 4 to learn how to use it.

Step 4 — Using Jupyter Notebook

When accessed through a web browser, Jupyter Notebook provides a Notebook Dashboard which acts as a file browser and gives you an interface for creating, editing and exploring notebooks. Think of these notebooks as documents (saved with a .ipynb file extension) which you populate with any number of individual cells. Each cell holds an interactive text editor which can be used to run code or write rendered text. Additionally, notebooks allow you to write and run equations, include other rich media, such as images or interactive plots, and they can be exported and shared in various formats (.ipynb, .pdf, .py). To illustrate some of these functions, we’ll create a notebook file from the Notebook Dashboard, write a simple text block with an equation, and run some basic Python 3 code.

By this point you should have connected to the server using an SSH tunnel and started the Jupyter Notebook application from your server. After navigating to http://localhost:8000, you will be presented with a login page:

Jupyter Notebook login screen

In the Password or token field at the top, enter the token shown in the output after you ran jupyter notebook from your server:

Output
[I 20:35:17.004 NotebookApp] Writing notebook server cookie secret to /run/user/1000/jupyter/notebook_cookie_secret
[I 20:35:17.314 NotebookApp] Serving notebooks from local directory: /home/sammy
[I 20:35:17.314 NotebookApp] 0 active kernels
[I 20:35:17.315 NotebookApp] The Jupyter Notebook is running at:
[I 20:35:17.315 NotebookApp] http://localhost:8888/?token=Example_Jupyter_Token_3cadb8b8b7005d9a46ca4d6675
[I 20:35:17.315 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 20:35:17.315 NotebookApp] No web browser found: could not locate runnable browser.
[C 20:35:17.316 NotebookApp]
. . .

Alternatively, you can copy that URL from your terminal output and paste it into your browser’s address bar.
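
If you would rather log in with a password than paste the token each time, Jupyter can store a hashed password for you. Run this once on the server (it writes the hash to your Jupyter configuration) and restart the notebook afterwards; this step is optional:

  • jupyter notebook password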

Jupyter Notebook will automatically show all of the files and folders stored in the directory from which it’s run. Create a new notebook file by clicking New then Python 3 at the top-right of the Notebook Dashboard:

Create a new Python3 notebook

Within this new notebook, change the first cell to accept markdown syntax by clicking Cell > Cell Type > Markdown on the navigation bar at the top. In addition to markdown, this Cell Type also allows you to write equations in LaTeX. For example, type the following into the cell after changing it to markdown:

# Simple Equation

Let us now implement the following equation in Python:

$$
y = x^2
$$

where $x = 2$

To turn the markdown into rich text, press CTRL + ENTER and the following should be the result:

Turn sample equation into rich text

You can use the markdown cells to make notes and document your code.

Now, let’s implement a simple equation and print the result. Click Insert > Insert Cell Below to insert a cell. In this new cell, enter the following code:

x = 2
y = x*x
print(y)

To run the code, press CTRL + ENTER, and the following will be the result:

Solve sample equation

These are some relatively simple examples of what you can do with Jupyter Notebook. However, it is a very powerful application with many potential use cases. From here, you can add some Python libraries and use the notebook as you would with any other Python development environment.

Conclusion

You should now be able to write reproducible Python code and text using the Jupyter Notebook running on a remote server. To get a quick tour of Jupyter Notebook, click Help in the top navigation bar and select User Interface Tour as shown here:

Finding Jupyter Notebook help tour

If you’re interested, we encourage you to learn more about Jupyter Notebook by going through the Project Jupyter documentation. Additionally, you can build on what you learned in this tutorial by learning how to code in Python 3.

DigitalOcean Community Tutorials

How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 18.04

Introduction

Docker Machine is a tool that makes it easy to provision and manage multiple Docker hosts remotely from your personal computer. Such servers are commonly referred to as Dockerized hosts and are used to run Docker containers.

While Docker Machine can be installed on a local or a remote system, the most common approach is to install it on your local computer (native installation or virtual machine) and use it to provision Dockerized remote servers.

Though Docker Machine can be installed on most Linux distributions as well as macOS and Windows, in this tutorial, you’ll install it on your local machine running Ubuntu 18.04 and use it to provision Dockerized DigitalOcean Droplets. If you don’t have a local Ubuntu 18.04 machine, you can follow these instructions on any Ubuntu 18.04 server.

Prerequisites

To follow this tutorial, you will need the following:

  • A local machine or server running Ubuntu 18.04 with Docker installed. See How To Install and Use Docker on Ubuntu 18.04 for instructions.
  • A DigitalOcean API token. If you don’t have one, generate it using this guide. When you generate a token, be sure that it has read-write scope. That is the default, so if you do not change any options while generating it, it will have read-write capabilities.

Step 1 — Installing Docker Machine

In order to use Docker Machine, you must first install it locally. On Ubuntu, this means downloading a handful of scripts from the official Docker repository on GitHub.

To download and install the Docker Machine binary, type:

  • wget https://github.com/docker/machine/releases/download/v0.15.0/docker-machine-$(uname -s)-$(uname -m)

The name of the file should be docker-machine-Linux-x86_64. Rename it to docker-machine to make it easier to work with:

  • mv docker-machine-Linux-x86_64 docker-machine

Make it executable:

  • chmod +x docker-machine

Move or copy it to the /usr/local/bin directory so that it will be available as a system command:

  • sudo mv docker-machine /usr/local/bin

Check the version, which will indicate that it’s properly installed:

  • docker-machine version

You’ll see output similar to this, displaying the version number and build:

Output
docker-machine version 0.15.0, build b48dc28d

Docker Machine is installed. Let’s install some additional helper tools to make Docker Machine easier to work with.

Step 2 — Installing Additional Docker Machine Scripts

There are three Bash scripts in the Docker Machine GitHub repository you can install to make working with the docker and docker-machine commands easier. When installed, these scripts provide command completion and prompt customization.

In this step, you’ll install these three scripts into the /etc/bash_completion.d directory on your local machine by downloading them directly from the Docker Machine GitHub repository.

Note: Before downloading and installing a script from the internet in a system-wide location, you should inspect the script’s contents first by viewing the source URL in your browser.

The first script allows you to see the active machine in your prompt. This comes in handy when you are working with and switching between multiple Dockerized machines. The script is called docker-machine-prompt.bash. Download it:

  • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine-prompt.bash -O /etc/bash_completion.d/docker-machine-prompt.bash

To complete the installation of this file, you’ll have to modify the value for the PS1 variable in your .bashrc file. The PS1 variable is a special shell variable used to modify the Bash command prompt. Open ~/.bashrc in your editor:

  • nano ~/.bashrc

Within that file, there are three lines that begin with PS1. They should look just like these:

~/.bashrc
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
...
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
...
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"

For each line, insert $(__docker_machine_ps1 " [%s]") near the end, as shown in the following example:

~/.bashrc
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$(__docker_machine_ps1 " [%s]")\$ '
...
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(__docker_machine_ps1 " [%s]")\$ '
...
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$(__docker_machine_ps1 " [%s]")$PS1"

Save and close the file.

The second script is called docker-machine-wrapper.bash. It adds a use subcommand to the docker-machine command, making it significantly easier to switch between Docker hosts. To download it, type:

  • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine-wrapper.bash -O /etc/bash_completion.d/docker-machine-wrapper.bash

The third script is called docker-machine.bash. It adds bash completion for docker-machine commands. Download it using:

  • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine.bash -O /etc/bash_completion.d/docker-machine.bash

To apply the changes you’ve made so far, close, then reopen your terminal. If you’re logged into the machine via SSH, exit the session and log in again, and you’ll have command completion for the docker and docker-machine commands.
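
If you would rather not close your terminal, you can also source the scripts directly into the current shell; a sketch, assuming they were downloaded to the paths used above:

  • source /etc/bash_completion.d/docker-machine-prompt.bash
  • source /etc/bash_completion.d/docker-machine-wrapper.bash
  • source /etc/bash_completion.d/docker-machine.bash
  • source ~/.bashrc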

Let’s test things out by creating a new Docker host with Docker Machine.

Step 3 — Provisioning a Dockerized Host Using Docker Machine

Now that you have Docker and Docker Machine running on your local machine, you can provision a Dockerized Droplet on your DigitalOcean account using Docker Machine’s docker-machine create command. If you’ve not done so already, assign your DigitalOcean API token to an environment variable:

  • export DOTOKEN=your-api-token

NOTE: This tutorial uses DOTOKEN as the bash variable for the DO API token. The variable name does not have to be DOTOKEN, and it does not have to be in all caps.

To make the variable permanent, put it in your ~/.bashrc file. This step is optional, but it is necessary if you want the value to persist across shell sessions.

Open that file with nano:

  • nano ~/.bashrc

Add this line to the file:

~/.bashrc
export DOTOKEN=your-api-token 

To activate the variable in the current terminal session, type:

  • source ~/.bashrc

To call the docker-machine create command successfully you must specify the driver you wish to use, as well as a machine name. The driver is the adapter for the infrastructure you’re going to create. There are drivers for cloud infrastructure providers, as well as drivers for various virtualization platforms.

We’ll use the digitalocean driver. Depending on the driver you select, you’ll need to provide additional options to create a machine. The digitalocean driver requires the API token (or the variable that evaluates to it) as its argument, along with the name for the machine you want to create.

To create your first machine, type this command to create a DigitalOcean Droplet called docker-01:

  • docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-01

You’ll see this output as Docker Machine creates the Droplet:

Output
...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env docker-01

Docker Machine creates an SSH key pair for the new host so it can access the server remotely. The Droplet is provisioned with an operating system and Docker is installed. When the command is complete, your Docker Droplet is up and running.

To see the newly-created machine from the command line, type:

  • docker-machine ls

The output will be similar to this, indicating that the new Docker host is running:

Output
NAME        ACTIVE   DRIVER         STATE     URL                         SWARM   DOCKER        ERRORS
docker-01   -        digitalocean   Running   tcp://209.97.155.178:2376           v18.06.1-ce

Now let’s look at how to specify the operating system when we create a machine.

Step 4 — Specifying the Base OS and Droplet Options When Creating a Dockerized Host

By default, the base operating system used when creating a Dockerized host with Docker Machine is supposed to be the latest Ubuntu LTS. However, at the time of this publication, the docker-machine create command is still using Ubuntu 16.04 LTS as the base operating system, even though Ubuntu 18.04 is the latest LTS edition. So if you need to run Ubuntu 18.04 on a recently-provisioned machine, you’ll have to specify Ubuntu along with the desired version by passing the --digitalocean-image flag to the docker-machine create command.

For example, to create a machine using Ubuntu 18.04, type:

  • docker-machine create --driver digitalocean --digitalocean-image ubuntu-18-04-x64 --digitalocean-access-token $DOTOKEN docker-ubuntu-1804

You’re not limited to a version of Ubuntu. You can create a machine using any operating system supported on DigitalOcean. For example, to create a machine using Debian 8, type:

  • docker-machine create --driver digitalocean --digitalocean-image debian-8-x64 --digitalocean-access-token $DOTOKEN docker-debian

To provision a Dockerized host using CentOS 7 as the base OS, specify centos-7-0-x64 as the image name, like so:

  • docker-machine create --driver digitalocean --digitalocean-image centos-7-0-x64 --digitalocean-access-token $DOTOKEN docker-centos7

The base operating system is not the only choice you have. You can also specify the size of the Droplet. By default, it is the smallest Droplet, which has 1 GB of RAM, a single CPU, and a 25 GB SSD.

Find the size of the Droplet you want to use by looking up the corresponding slug in the DigitalOcean API documentation.
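
If you prefer to query the API directly instead of browsing the documentation, the sizes endpoint lists the available slugs. A rough sketch, assuming your token is still exported as DOTOKEN and that the compact JSON response is simply filtered with grep:

  • curl -s -H "Authorization: Bearer $DOTOKEN" "https://api.digitalocean.com/v2/sizes" | grep -o '"slug":"[^"]*"'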

For example, to provision a machine with 2 GB of RAM, two CPUs, and a 60 GB SSD, use the slug s-2vcpu-2gb:

  • docker-machine create --driver digitalocean --digitalocean-size s-2vcpu-2gb --digitalocean-access-token $DOTOKEN docker-03

To see all the flags specific to creating a Docker Machine using the DigitalOcean driver, type:

  • docker-machine create --driver digitalocean -h

Tip: If you refresh the Droplet page of your DigitalOcean dashboard, you will see the new machines you created using the docker-machine command.

Now let’s explore some of the other Docker Machine commands.

Step 5 — Executing Additional Docker Machine Commands

You’ve seen how to provision a Dockerized host using the create subcommand, and how to list the hosts available to Docker Machine using the ls subcommand. In this step, you’ll learn a few more useful subcommands.

To obtain detailed information about a Dockerized host, use the inspect subcommand, like so:

  • docker-machine inspect docker-01

The output includes lines like the ones in the following output. The Image line reveals the version of the Linux distribution used and the Size line indicates the size slug:

Output
...
{
    "ConfigVersion": 3,
    "Driver": {
        "IPAddress": "203.0.113.71",
        "MachineName": "docker-01",
        "SSHUser": "root",
        "SSHPort": 22,
        ...
        "Image": "ubuntu-16-04-x64",
        "Size": "s-1vcpu-1gb",
        ...
    },
...

To print the connection configuration for a host, type:

  • docker-machine config docker-01

The output will be similar to this:

Output
--tlsverify
--tlscacert="/home/kamit/.docker/machine/certs/ca.pem"
--tlscert="/home/kamit/.docker/machine/certs/cert.pem"
--tlskey="/home/kamit/.docker/machine/certs/key.pem"
-H=tcp://203.0.113.71:2376

The last line in the output of the docker-machine config command reveals the IP address of the host, but you can also get that piece of information by typing:

  • docker-machine ip docker-01

If you need to power down a remote host, you can use docker-machine to stop it:

  • docker-machine stop docker-01

Verify that it is stopped:

  • docker-machine ls

The output shows that the status of the machine has changed:

Output
NAME        ACTIVE   DRIVER         STATE     URL   SWARM   DOCKER    ERRORS
docker-01   -        digitalocean   Stopped         Unknown

To start it again, use the start subcommand:

  • docker-machine start docker-01

Then review its status again:

  • docker-machine ls

You will see that the STATE is now set to Running for the host:

Output
NAME        ACTIVE   DRIVER         STATE     URL                       SWARM   DOCKER        ERRORS
docker-01   -        digitalocean   Running   tcp://203.0.113.71:2376           v18.06.1-ce

Next let’s look at how to interact with the remote host using SSH.

Step 6 — Executing Commands on a Dockerized Host via SSH

At this point, you’ve been getting information about your machines, but you can do more than that. For example, you can execute native Linux commands on a Docker host by using the ssh subcommand of docker-machine from your local system. This section explains how to perform ssh commands via docker-machine as well as how to open an SSH session to a Dockerized host.

Assuming that you’ve provisioned a machine with Ubuntu as the operating system, execute the following command from your local system to update the package database on the Docker host:

  • docker-machine ssh docker-01 apt-get update

You can even apply available updates using:

  • docker-machine ssh docker-01 apt-get upgrade

Not sure what kernel your remote Docker host is using? Type the following:

  • docker-machine ssh docker-01 uname -r

Finally, you can log in to the remote host with the docker-machine ssh command:

  • docker-machine ssh docker-01

You’ll be logged in as the root user and you’ll see something similar to the following:

Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

14 packages can be updated.
10 updates are security updates.

Log out by typing exit to return to your local machine.

Next, we’ll direct Docker’s commands at our remote host.

Step 7 — Activating a Dockerized Host

Activating a Docker host connects your local Docker client to that system, which makes it possible to run normal docker commands on the remote system.

First, use Docker Machine to create a new Docker host called docker-ubuntu using Ubuntu 18.04:

  • docker-machine create --driver digitalocean --digitalocean-image ubuntu-18-04-x64 --digitalocean-access-token $DOTOKEN docker-ubuntu

To activate a Docker host, type the following command:

  • eval $(docker-machine env machine-name)

Alternatively, you can activate it by using this command:

  • docker-machine use machine-name

Tip: When working with multiple Docker hosts, the docker-machine use command is the easiest method of switching from one to the other.

After typing any of these commands, your prompt will change to indicate that your Docker client is pointing to the remote Docker host. It will take the following form, with the name of the host at the end of the prompt:

username@localmachine:~ [docker-01]$

Now any docker command you type at this command prompt will be executed on that remote host.
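
A quick way to confirm which Docker Engine you are talking to is to ask the daemon to identify itself: while the remote host is active, this reports the Droplet’s hostname and server version rather than your local machine’s. The --format template here is only a sketch using standard docker info fields:

  • docker info --format 'Name: {{.Name}}  Server version: {{.ServerVersion}}'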

Execute docker-machine ls again:

  • docker-machine ls

You’ll see an asterisk under the ACTIVE column for docker-01:

Output
NAME        ACTIVE   DRIVER         STATE     URL                       SWARM   DOCKER        ERRORS
docker-01   *        digitalocean   Running   tcp://203.0.113.71:2376           v18.06.1-ce

To exit from the remote Docker host, type the following:

  • docker-machine use -u

Your prompt will no longer show the active host.

Now let’s create containers on the remote machine.

Step 8 — Creating Docker Containers on a Remote Dockerized Host

So far, you have provisioned a Dockerized Droplet on your DigitalOcean account and you’ve activated it — that is, your Docker client is pointing to it. The next logical step is to spin up containers on it. As an example, let’s try running the official Nginx container.

Use docker-machine use to select your remote machine:

  • docker-machine use docker-01

Now execute this command to run an Nginx container on that machine:

  • docker run -d -p 8080:80 --name httpserver nginx

In this command, we’re mapping port 80 in the Nginx container to port 8080 on the Dockerized host so that we can access the default Nginx page from anywhere.

Once the container is running, you will be able to access the default Nginx page by pointing your web browser to http://docker_machine_ip:8080.
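
You can also check this from your local terminal without looking up the IP by hand, since docker-machine ip prints it. A quick sketch, assuming curl is installed locally:

  • curl -I http://$(docker-machine ip docker-01):8080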

While the Docker host is still activated (as seen by its name in the prompt), you can list the images on that host:

  • docker images

The output includes the Nginx image you just used:

Output
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   71c43202b8ac   3 hours ago   109MB

You can also list the active or running containers on the host:

  • docker ps

If the Nginx container you ran in this step is the only active container, the output will look like this:

Output
CONTAINER ID   IMAGE   COMMAND                  CREATED              STATUS              PORTS                  NAMES
d3064c237372   nginx   "nginx -g 'daemon of…"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp   httpserver

If you intend to create containers on a remote machine, your Docker client must be pointing to it — that is, it must be the active machine in the terminal that you’re using. Otherwise you’ll be creating the container on your local machine. Again, let your command prompt be your guide.

Docker Machine can create and manage remote hosts, and it can also remove them.

Step 9 — Removing Docker Hosts

You can use Docker Machine to remove a Docker host you’ve created. Use the docker-machine rm command to remove the docker-01 host you created:

  • docker-machine rm docker-01

The Droplet is deleted along with the SSH key created for it. List the hosts again:

  • docker-machine ls

This time, you won’t see the docker-01 host listed in the output. And if you’ve only created one host, you won’t see any output at all.

Be sure to execute the command docker-machine use -u to point your local Docker daemon back to your local machine.

Step 10 — Disabling Crash Reporting (Optional)

By default, whenever an attempt to provision a Dockerized host using Docker Machine fails, or Docker Machine crashes, some diagnostic information is sent to a Docker account on Bugsnag. If you’re not comfortable with this, you can disable the reporting by creating an empty file called no-error-report in your local computer’s .docker/machine directory.

To create the file, type:

  • touch ~/.docker/machine/no-error-report

If provisioning fails or Docker Machine crashes after this, check the ~/.docker/machine directory for error messages.

Conclusion

You’ve installed Docker Machine and used it to provision multiple Docker hosts on DigitalOcean remotely from your local system. From here you should be able to provision as many Dockerized hosts on your DigitalOcean account as you need.

For more on Docker Machine, visit the official documentation page. The three Bash scripts downloaded in this tutorial are hosted on this GitHub page.

DigitalOcean Community Tutorials