How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose

Introduction

There are multiple ways to enhance the flexibility and security of your Node.js application. Using a reverse proxy like Nginx offers you the ability to load balance requests, cache static content, and implement Transport Layer Security (TLS). Enabling encrypted HTTPS on your server ensures that communication to and from your application remains secure.

Implementing a reverse proxy with TLS/SSL on containers involves a different set of procedures from working directly on a host operating system. For example, if you were obtaining certificates from Let’s Encrypt for an application running on a server, you would install the required software directly on your host. Containers allow you to take a different approach. Using Docker Compose, you can create containers for your application, your web server, and the Certbot client that will enable you to obtain your certificates. By following these steps, you can take advantage of the modularity and portability of a containerized workflow.

In this tutorial, you will deploy a Node.js application with an Nginx reverse proxy using Docker Compose. You will obtain TLS/SSL certificates for the domain associated with your application and ensure that it receives a high security rating from SSL Labs. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

Prerequisites

To follow this tutorial, you will need:

  • An Ubuntu 18.04 server, a non-root user with sudo privileges, and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
  • Docker and Docker Compose installed on your server. For guidance on installing Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04. For guidance on installing Compose, follow Step 1 of How To Install Docker Compose on Ubuntu 18.04.
  • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
  • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

    • An A record with example.com pointing to your server’s public IP address.
    • An A record with www.example.com pointing to your server’s public IP address.

Step 1 — Cloning and Testing the Node Application

As a first step, we will clone the repository with the Node application code, which includes the Dockerfile that we will use to build our application image with Compose. We can first test the application by building and running it with the docker run command, without a reverse proxy or SSL.

In your non-root user’s home directory, clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.

Clone the repository into a directory called node_project:

  • git clone https://github.com/do-community/nodejs-image-demo.git node_project

Change to the node_project directory:

  • cd node_project

In this directory, there is a Dockerfile that contains instructions for building a Node application using the Docker node:10 image and the contents of your current project directory. You can look at the contents of the Dockerfile by typing:

  • cat Dockerfile
Output
FROM node:10

RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app

COPY package*.json ./

RUN npm install

COPY . .

COPY --chown=node:node . .

USER node

EXPOSE 8080

CMD [ "node", "app.js" ]

These instructions build a Node image by copying the project code from the current directory to the container and installing dependencies with npm install. They also take advantage of Docker’s caching and image layering by separating the copy of package.json and package-lock.json, containing the project’s listed dependencies, from the copy of the rest of the application code. Finally, the instructions specify that the container will be run as the non-root node user with the appropriate permissions set on the application code and node_modules directories.

For more information about this Dockerfile and Node image best practices, please see the complete discussion in Step 3 of How To Build a Node.js Application with Docker.

To test the application without SSL, you can build and tag the image using docker build and the -t flag. We will call the image node-demo, but you are free to name it something else:

  • docker build -t node-demo .

Once the build process is complete, you can list your images with docker images:

  • docker images

You will see the following output, confirming the application image build:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
node-demo           latest              23961524051d        7 seconds ago       896MB
node                10                  8a752d5af4ce        10 days ago         894MB

Next, create the container with docker run. We will include three flags with this command:

  • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
  • -d: This runs the container in the background.
  • --name: This allows us to give the container a memorable name.

Run the following command to create and start the container:

  • docker run --name node-demo -p 80:8080 -d node-demo

Inspect your running containers with docker ps:

  • docker ps

You will see output confirming that your application container is running:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
4133b72391da        node-demo           "node app.js"       17 seconds ago      Up 16 seconds       0.0.0.0:80->8080/tcp   node-demo

You can now visit your domain to test your setup: http://example.com. Remember to replace example.com with your own domain name. Your application will display the following landing page:

Application Landing Page
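If you prefer to check from the command line, a quick request with curl should also return an HTTP 200 response from the application. This is an optional sanity check; remember to substitute your own domain:

  • curl -I http://example.com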

Now that you have tested the application, you can stop the container and remove the images. Use docker ps again to get your CONTAINER ID:

  • docker ps
Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
4133b72391da        node-demo           "node app.js"       17 seconds ago      Up 16 seconds       0.0.0.0:80->8080/tcp   node-demo

Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:

  • docker stop 4133b72391da

You can now remove the stopped container and all of the images, including unused and dangling images, with docker system prune and the -a flag:

  • docker system prune -a

Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.

With your application image tested, you can move on to building the rest of your setup with Docker Compose.

Step 2 — Defining the Web Server Configuration

With our application Dockerfile in place, we can create a configuration file to run our Nginx container. We will start with a minimal configuration that will include our domain name, document root, proxy information, and a location block to direct Certbot’s requests to the .well-known directory, where it will place a temporary file to validate that the DNS for our domain resolves to our server.

First, create a directory in the current project directory for the configuration file:

  • mkdir nginx-conf

Open the file with nano or your favorite editor:

  • nano nginx-conf/nginx.conf

Add the following server block to proxy user requests to your Node application container and to direct Certbot’s requests to the .well-known directory. Be sure to replace example.com with your own domain name:

~/node_project/nginx-conf/nginx.conf
server {
        listen 80;
        listen [::]:80;

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        server_name example.com www.example.com;

        location / {
                proxy_pass http://nodejs:8080;
        }

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }
}

This server block will allow us to start the Nginx container as a reverse proxy, which will pass requests to our Node application container. It will also allow us to use Certbot’s webroot plugin to obtain certificates for our domain. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.
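For illustration, during validation Certbot writes a token file under .well-known/acme-challenge/ in the webroot, and Let's Encrypt's servers fetch that file over plain HTTP. Once your containers are running in Step 4, you can confirm that Nginx serves files from this path with a quick manual test like the following. The test file name here is made up purely for illustration:

  • docker-compose exec webserver sh -c 'mkdir -p /var/www/html/.well-known/acme-challenge && echo ok > /var/www/html/.well-known/acme-challenge/test-file'
  • curl http://example.com/.well-known/acme-challenge/test-file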

Once you have finished editing, save and close the file. To learn more about Nginx server and location block algorithms, please refer to this article on Understanding Nginx Server and Location Block Selection Algorithms.

With the web server configuration details in place, we can move on to creating our docker-compose.yml file, which will allow us to create our application services and the Certbot container we will use to obtain our certificates.

Step 3 — Creating the Docker Compose File

The docker-compose.yml file will define our services, including the Node application and web server. It will specify details like named volumes, which will be critical to sharing SSL credentials between containers, as well as network and port information. It will also allow us to define the commands to run when our containers are created. This file is the central resource that defines how our services will work together.

Open the file in your current directory:

  • nano docker-compose.yml

First, define the application service:

~/node_project/docker-compose.yml
version: '3'

services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped

The nodejs service definition includes the following:

  • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag (a brief sketch of this alternative follows this list).
  • context: This defines the build context for the application image build. In this case, it’s the current project directory.
  • dockerfile: This specifies the Dockerfile that Compose will use for the build — the Dockerfile you looked at in Step 1.
  • image, container_name: These apply names to the image and container.
  • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
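For reference, if you were pulling a prebuilt image from a registry instead of building locally, the service definition might look something like the following sketch. The username and tag here are placeholders, not part of this tutorial's setup:

  nodejs:
    image: your_dockerhub_username/nodejs-image-demo:latest
    container_name: nodejs
    restart: unless-stopped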

Note that we are not including bind mounts with this service, since our setup is focused on deployment rather than development. For more information, please see the Docker documentation on bind mounts and volumes.

To enable communication between the application and web server containers, we will also add a bridge network called app-network below the restart definition:

~/node_project/docker-compose.yml
services:
  nodejs:
...
    networks:
      - app-network

A user-defined bridge network like this enables communication between containers on the same Docker daemon host. This streamlines traffic and communication within your application, since it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, you can be selective about opening only the ports you need to expose your frontend services.
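Once the stack is running in Step 4, you can see which containers are attached to this network with docker network inspect. Compose prefixes the network name with the project directory name by default, so the command below assumes the project lives in node_project:

  • docker network inspect node_project_app-network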

Next, define the webserver service:

~/node_project/docker-compose.yml
...
  webserver:
    image: nginx:latest
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - nodejs
    networks:
      - app-network

Some of the settings we defined for the nodejs service remain the same, but we’ve also made the following changes:

  • image: This tells Compose to pull the latest Nginx image from Docker Hub.
  • ports: This exposes port 80 to enable the configuration options we’ve defined in our Nginx configuration.

We have also specified the following named volumes and bind mounts:

  • web-root:/var/www/html: This will add our site’s static assets, copied to a volume called web-root, to the /var/www/html directory on the container.
  • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
  • certbot-etc:/etc/letsencrypt: This will mount the relevant Let’s Encrypt certificates and keys for our domain to the appropriate directory on the container.
  • certbot-var:/var/lib/letsencrypt: This mounts Let’s Encrypt’s default working directory to the appropriate directory on the container.

Next, add the configuration options for the certbot container. Be sure to replace the domain and email information with your own domain name and contact email:

~/node_project/docker-compose.yml
...
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com

This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc, the Let’s Encrypt working directory in certbot-var, and the application code in web-root.

Again, we’ve used depends_on to specify that the certbot container should be started once the webserver service is running.

We’ve also included a command option that specifies the command to run when the container is started. It includes the certonly subcommand with the following options:

  • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication.
  • --webroot-path: This specifies the path of the webroot directory.
  • --email: Your preferred email for registration and recovery.
  • --agree-tos: This specifies that you agree to ACME’s Subscriber Agreement.
  • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
  • --staging: This tells Certbot that you would like to use Let’s Encrypt’s staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let’s Encrypt’s rate limits documentation.
  • -d: This allows you to specify domain names you would like to apply to your request. In this case, we’ve included example.com and www.example.com. Be sure to replace these with your own domain preferences.

As a final step, add the volume and network definitions. Be sure to replace the username here with your own non-root user:

~/node_project/docker-compose.yml
...
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/sammy/node_project/views/
      o: bind

networks:
  app-network:
    driver: bridge

Our named volumes include our Certbot certificate and working directory volumes, and the volume for our site’s static assets, web-root. In most cases, the default driver for Docker volumes is the local driver, which on Linux accepts options similar to the mount command. Thanks to this, we are able to specify a list of driver options with driver_opts that mount the views directory on the host, which contains our application’s static assets, to the volume at runtime. The directory contents can then be shared between containers. For more information about the contents of the views directory, please see Step 2 of How To Build a Node.js Application with Docker.
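If you want to confirm where a named volume lives on the host once the stack is up, docker volume inspect will show its mountpoint and driver options. As with the network above, Compose prefixes volume names with the project name by default, so this example assumes a node_project prefix:

  • docker volume inspect node_project_web-root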

The docker-compose.yml file will look like this when finished:

~/node_project/docker-compose.yml
version: '3'

services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    networks:
      - app-network

  webserver:
    image: nginx:latest
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - nodejs
    networks:
      - app-network

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com

volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/sammy/node_project/views/
      o: bind

networks:
  app-network:
    driver: bridge

With the service definitions in place, you are ready to start the containers and test your certificate requests.

Step 4 — Obtaining SSL Certificates and Credentials

We can start our containers with docker-compose up, which will create and run our containers and services in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

Create the services with docker-compose up and the -d flag, which will run the nodejs and webserver containers in the background:

  • docker-compose up -d

You will see output confirming that your services have been created:

Output
Creating nodejs ... done
Creating webserver ... done
Creating certbot ... done

Using docker-compose ps, check the status of your services:

  • docker-compose ps

If everything was successful, your nodejs and webserver services should be Up and the certbot container will have exited with a 0 status message:

Output
  Name                 Command               State          Ports
------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp

If you see anything other than Up in the State column for the nodejs and webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

  • docker-compose logs service_name

You can now check that your credentials have been mounted to the webserver container with docker-compose exec:

  • docker-compose exec webserver ls -la /etc/letsencrypt/live

If your request was successful, you will see output like this:

Output
total 16
drwx------ 3 root root 4096 Dec 23 16:48 .
drwxr-xr-x 9 root root 4096 Dec 23 16:48 ..
-rw-r--r-- 1 root root  740 Dec 23 16:48 README
drwxr-xr-x 2 root root 4096 Dec 23 16:48 example.com

Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

Open docker-compose.yml:

  • nano docker-compose.yml

Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition should now look like this:

~/node_project/docker-compose.yml
...
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
...

You can now run docker-compose up with the --force-recreate option to recreate the certbot container and its relevant volumes. We will also include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

  • docker-compose up --force-recreate --no-deps certbot

You will see output indicating that your certificate request was successful:

Output
certbot      | IMPORTANT NOTES:
certbot      |  - Congratulations! Your certificate and chain have been saved at:
certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
certbot      |    Your key file has been saved at:
certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
certbot      |    Your cert will expire on 2019-03-26. To obtain a new or tweaked
certbot      |    version of this certificate in the future, simply run certbot
certbot      |    again. To non-interactively renew *all* of your certificates, run
certbot      |    "certbot renew"
certbot      |  - Your account credentials have been saved in your Certbot
certbot      |    configuration directory at /etc/letsencrypt. You should make a
certbot      |    secure backup of this folder now. This configuration directory will
certbot      |    also contain certificates and private keys obtained by Certbot so
certbot      |    making regular backups of this folder is ideal.
certbot      |  - If you like Certbot, please consider supporting our work by:
certbot      |
certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
certbot      |    Donating to EFF:                    https://eff.org/donate-le
certbot      |
certbot exited with code 0

With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

Step 5 — Modifying the Web Server Configuration and Service Definition

Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS and specifying our SSL certificate and key locations. It will also involve specifying our Diffie-Hellman group, which we will use for Perfect Forward Secrecy.

Since you are going to recreate the webserver service to include these additions, you can stop it now:

  • docker-compose stop webserver

Next, create a directory in your current project directory for your Diffie-Hellman key:

  • mkdir dhparam

Generate your key with the openssl command:

  • sudo openssl dhparam -out /home/sammy/node_project/dhparam/dhparam-2048.pem 2048

It will take a few moments to generate the key.

To add the relevant Diffie-Hellman and SSL information to your Nginx configuration, first remove the Nginx configuration file you created earlier:

  • rm nginx-conf/nginx.conf

Open another version of the file:

  • nano nginx-conf/nginx.conf

Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

~/node_project/nginx-conf/nginx.conf
server {
        listen 80;
        listen [::]:80;
        server_name example.com www.example.com;

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }

        location / {
                rewrite ^ https://$host$request_uri? permanent;
        }
}

server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name example.com www.example.com;

        server_tokens off;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        ssl_buffer_size 8k;

        ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;

        ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
        ssl_prefer_server_ciphers on;

        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;

        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        location / {
                try_files $uri @nodejs;
        }

        location @nodejs {
                proxy_pass http://nodejs:8080;
                add_header X-Frame-Options "SAMEORIGIN" always;
                add_header X-XSS-Protection "1; mode=block" always;
                add_header X-Content-Type-Options "nosniff" always;
                add_header Referrer-Policy "no-referrer-when-downgrade" always;
                add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
                # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
                # enable strict transport security only if you understand the implications
        }

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;
}

The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that directs HTTP requests to the root directory to HTTPS.

The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04. This block also includes a series of options to ensure that you are using up-to-date SSL protocols and ciphers and that OCSP stapling is turned on. OCSP stapling allows you to offer a time-stamped response from your certificate authority during the initial TLS handshake, which can speed up the authentication process.

The block also specifies your SSL and Diffie-Hellman credentials and key locations.

Finally, we’ve moved the proxy pass information into this block, including a location block with a try_files directive that points requests to our aliased Node.js application container, and a location block for that alias, which includes security headers that will enable us to get A ratings on the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out; enable it only if you understand the implications and have assessed its “preload” functionality.

Once you have finished editing, save and close the file.

Before recreating the webserver service, you will need to add a few things to the service definition in your docker-compose.yml file, including relevant port information for HTTPS and a Diffie-Hellman volume definition.

Open the file:

  • nano docker-compose.yml

In the webserver service definition, add the following port mapping and the dhparam named volume:

~/node_project/docker-compose.yml
...
  webserver:
    image: nginx:latest
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - nodejs
    networks:
      - app-network

Next, add the dhparam volume to your volumes definitions:

~/node_project/docker-compose.yml
...
volumes:
  ...
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/sammy/node_project/dhparam/
      o: bind

Similarly to the web-root volume, the dhparam volume will mount the Diffie-Hellman key stored on the host to the webserver container.

Save and close the file when you are finished editing.

Recreate the webserver service:

  • docker-compose up -d --force-recreate --no-deps webserver

Check your services with docker-compose ps:

  • docker-compose ps

You should see output indicating that your nodejs and webserver services are running:

Output
  Name                 Command               State                     Ports
----------------------------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp

Finally, you can visit your domain to ensure that everything is working as expected. Navigate your browser to https://example.com, making sure to substitute example.com with your own domain name. You will see the following landing page:

Application Landing Page

You should also see the lock icon in your browser’s security indicator. If you would like, you can navigate to the SSL Labs Server Test landing page or the Security Headers server test landing page. The configuration options we’ve included should earn your site an A rating on both.
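If you would like to confirm a few of these settings from your terminal before (or instead of) running the external scanners, the following optional checks can help. The first prints the response headers added in the Nginx configuration, and the second shows the negotiated protocol and, if stapling is working, an OCSP response. Replace example.com with your domain; the exact output will vary:

  • curl -I https://example.com
  • openssl s_client -connect example.com:443 -servername example.com -status < /dev/null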

Step 6 — Renewing Certificates

Let’s Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will schedule a cron job using a script that will renew our certificates and reload our Nginx configuration.

Open a script called ssl_renew.sh in your project directory:

  • nano ssl_renew.sh

Add the following code to the script to renew your certificates and reload your web server configuration:

~/node_project/ssl_renew.sh
#!/bin/bash

/usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew --dry-run \
&& /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver

In addition to specifying the location of our docker-compose binary, we also specify the location of our docker-compose.yml file in order to run docker-compose commands. In this case, we are using docker-compose run to start a certbot container and to override the command provided in our service definition with another: the renew subcommand, which will renew certificates that are close to expiring. We’ve included the --dry-run option here to test our script.

The script then uses docker-compose kill to send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.

Close the file when you are finished editing. Make it executable:

  • chmod +x ssl_renew.sh
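If you would like, you can also run the script once by hand to confirm that the dry run completes before scheduling it. Depending on how Docker is set up for your user, you may need to prefix the command with sudo:

  • ./ssl_renew.sh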

Next, open your root crontab file to run the renewal script at a specified interval:

  • sudo crontab -e

If this is your first time editing this file, you will be asked to choose an editor:

crontab
no crontab for root - using an empty one

Select an editor.  To change later, run 'select-editor'.
  1. /bin/ed
  2. /bin/nano        <---- easiest
  3. /usr/bin/vim.basic
  4. /usr/bin/vim.tiny

Choose 1-4 [2]:
...

At the bottom of the file, add the following line:

crontab
...
*/5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1

This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.

After five minutes, check cron.log to see whether or not the renewal request has succeeded:

  • tail -f /var/log/cron.log

You should see output confirming a successful renewal:

Output
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Killing webserver ... done

You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

crontab
...
0 12 * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1

You will also want to remove the --dry-run option from your ssl_renew.sh script:

~/node_project/ssl_renew.sh
#!/bin/bash

/usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew \
&& /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver

Your cron job will ensure that your Let’s Encrypt certificates don’t lapse by renewing them when they are eligible.

Conclusion

You have used containers to set up and run a Node application with an Nginx reverse proxy. You have also secured SSL certificates for your application’s domain and set up a cron job to renew these certificates when necessary.

If you are interested in learning more about Let’s Encrypt plugins, please see our articles on using the Nginx plugin or the standalone plugin.

You can also learn more about Docker Compose by looking at the Compose documentation, which is a great resource for learning more about multi-container applications.


gamingdirectional: Increase the points that need to win the game

What is up, buddy? In this article, we will continue to edit our pygame project by increasing the difficulty of winning the game. 1) We will double the points needed to win the game. 2) We will also increase the damage to 3 points instead of 1 when the player gets hit by a missile from the horizontally moving enemy ship. We only need to update one file to make those changes…


Django Weblog: Django security releases issued: 2.1.5, 2.0.10, and 1.11.18

In accordance with our security release policy, the Django team is issuing Django 1.11.18, Django 2.0.10, and Django 2.1.5. These releases address the security issue detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2019-3498: Content spoofing possibility in the default 404 page

An attacker could craft a malicious URL that could make spoofed content appear on the default page generated by the django.views.defaults.page_not_found() view.

The URL path is no longer displayed in the default 404 template and the request_path context variable is now quoted to fix the issue for custom templates that use the path.

Affected supported versions

  • Django master branch
  • Django 2.1
  • Django 2.0
  • Django 1.11

Per our supported versions policy, Django 1.10 and older are no longer supported.

Resolution

Patches to resolve the issue have been applied to Django’s master branch and the 2.1, 2.0, and 1.11 release branches. The patches may be obtained from the following changesets:

The following releases have been issued: Django 1.11.18, Django 2.0.10, and Django 2.1.5.

The PGP key ID used for these releases is Tim Graham: 1E8ABDC773EDE252.

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django’s Trac instance, Django’s GitHub repositories, or the django-developers list. Please see our security policies for further information.

This issue was publicly reported through a GitHub pull request, so we fixed the issue as soon as possible without going through the usual prenotification process.


How To Install and Secure Memcached on Ubuntu 18.04

Introduction

Memory object caching systems like Memcached can optimize back-end database performance by temporarily storing information in memory, retaining frequently or recently requested records. In this way, they reduce the number of direct requests to your databases.

Because systems like Memcached can contribute to denial of service attacks if improperly configured, it is important to secure your Memcached servers. In this guide, we will cover how to protect your Memcached server by binding your installation to a local or private network interface and creating an authorized user for your Memcached instance.

Prerequisites

This tutorial assumes that you have a server set up with a non-root sudo user and a basic firewall. If that is not the case, follow an Ubuntu 18.04 initial server setup guide to create the user and enable the firewall first.

With these prerequisites in place, you will be ready to install and secure your Memcached server.

Step 1 — Installing Memcached from the Official Repositories

If you don’t already have Memcached installed on your server, you can install it from the official Ubuntu repositories. First, make sure that your local package index is updated:

  • sudo apt update

Next, install the official package as follows:

  • sudo apt install memcached

We can also install libmemcached-tools, a library that provides several tools to work with your Memcached server:

  • sudo apt install libmemcached-tools

Memcached should now be installed as a service on your server, along with tools that will allow you to test its connectivity. We can now move on to securing its configuration settings.

Step 2 — Securing Memcached Configuration Settings

To ensure that our Memcached instance is listening on the local interface 127.0.0.1, we will check the default setting in the configuration file located at /etc/memcached.conf. The current version of Memcached that ships with Ubuntu and Debian has the -l parameter set to the local interface, which prevents denial of service attacks from the network. We can inspect this setting to ensure that it is set correctly.

You can open /etc/memcached.conf with nano:

  • sudo nano /etc/memcached.conf

To inspect the interface setting, find the following line in the file:

/etc/memcached.conf
. . .
-l 127.0.0.1
. . .

If you see the default setting of -l 127.0.0.1, then there is no need to modify this line. If you do modify this setting to be more open, then it is a good idea to also disable UDP, as it is more likely to be exploited in denial of service attacks. To disable UDP (while leaving TCP unaffected), add the following option to the bottom of this file:

/etc/memcached.conf
. . .
-U 0

Save and close the file when you are done.

Restart your Memcached service to apply your changes:

  • sudo systemctl restart memcached

Verify that Memcached is currently bound to the local interface and listening only for TCP connections by typing:

  • sudo netstat -plunt

You should see the following output:

Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
. . .
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      2279/memcached
. . .

This confirms that memcached is bound to the 127.0.0.1 address using only TCP.

Step 3 — Adding Authorized Users

To add authenticated users to your Memcached service, it is possible to use Simple Authentication and Security Layer (SASL), a framework that de-couples authentication procedures from application protocols. We will enable SASL within our Memcached configuration file and then move on to adding a user with authentication credentials.

Configuring SASL Support

We can first test the connectivity of our Memcached instance with the memcstat command. This will help us establish that SASL and user authentication are enabled after we make changes to our configuration files.

To check that Memcached is up and running, type the following:

  • memcstat --servers="127.0.0.1"

You should see output like the following:

Output
Server: 127.0.0.1 (11211)
         pid: 2279
         uptime: 65
         time: 1546620611
         version: 1.5.6
. . .

Now we can move on to enabling SASL. First, we will add the -S parameter to /etc/memcached.conf. Open the file again:

  • sudo nano /etc/memcached.conf

At the bottom of the file, add the following:

/etc/memcached.conf
. . .
-S

Next, find and uncomment the -vv option, which will provide verbose output to /var/log/memcached. The uncommented line should look like this:

/etc/memcached.conf
. . .
-vv

Save and close the file.

Restart the Memcached service:

  • sudo systemctl restart memcached

Next, we can take a look at the logs to be sure that SASL support has been enabled:

  • sudo journalctl -u memcached

You should see the following line, indicating that SASL support has been initialized:

Output
. . .
Jan 04 16:51:12 memcached systemd-memcached-wrapper[2310]: Initialized SASL.
. . .

We can check the connectivity again, but because SASL has been initialized, this command should fail without authentication:

  • memcstat --servers="127.0.0.1"

This command should not produce output. We can type the following to check its status:

  • echo $?

$? will always return the exit code of the last command that exited. Typically, anything besides 0 indicates process failure. In this case, we should see an exit status of 1, which tells us that the memcstat command failed.

Adding an Authenticated User

Now we can download sasl2-bin, a package that contains administrative programs for the SASL user database. This will allow us to create our authenticated user:

  • sudo apt install sasl2-bin

Next, we will create the directory and file that Memcached will check for its SASL configuration settings:

  • sudo mkdir /etc/sasl2
  • sudo nano /etc/sasl2/memcached.conf

Add the following to the SASL configuration file:

/etc/sasl2/memcached.conf
mech_list: plain
log_level: 5
sasldb_path: /etc/sasl2/memcached-sasldb2

In addition to specifying our logging level, we will set mech_list to plain, which tells Memcached that it should use its own password file and verify a plaintext password. We will also specify the path to the user database file that we will create next. Save and close the file when you are finished.

Now we will create a SASL database with our user credentials. We will use the saslpasswd2 command to make a new entry for our user in our database using the -c option. Our user will be sammy here, but you can replace this name with your own user. Using the -f option, we will specify the path to our database, which will be the path we set in /etc/sasl2/memcached.conf:

  • sudo saslpasswd2 -a memcached -c -f /etc/sasl2/memcached-sasldb2 sammy

You will be asked to type and re-verify a password of your choosing.

Finally, we will give the memcache user ownership over the SASL database:

  • sudo chown memcache:memcache /etc/sasl2/memcached-sasldb2

Restart the Memcached service:

  • sudo systemctl restart memcached

Running memcstat again will confirm whether or not our authentication process worked. This time we will run it with our authentication credentials:

  • memcstat --servers="127.0.0.1" --username=sammy --password=your_password

You should see output like the following:

Output
Server: 127.0.0.1 (11211)
         pid: 2772
         uptime: 31
         time: 1546621072
         version: 1.5.6 Ubuntu
. . .

Our Memcached service is now successfully running with SASL support and user authentication.

Step 4 — Allowing Access Over the Private Network (Optional)

We have covered how to configure Memcached to listen on the local interface, which can prevent denial of service attacks by protecting the Memcached interface from exposure to outside parties. There may be instances where you will need to allow access from other servers, however. In this case, you can adjust your configuration settings to bind Memcached to the private network interface.

Note: We will cover how to configure firewall settings using UFW in this section, but it is also possible to use DigitalOcean Cloud Firewalls to create these settings. For more information on setting up DigitalOcean Cloud Firewalls, see our Introduction to DigitalOcean Cloud Firewalls. To learn more about how to limit incoming traffic to particular machines, check out the section of this tutorial on applying firewall rules using tags and server names and our discussion of firewall tags.

Limiting IP Access With Firewalls

Before you adjust your configuration settings, it is a good idea to set up firewall rules to limit the machines that can connect to your Memcached server. You will need to know the client server’s private IP address to configure your firewall rules. For more information about private networking on DigitalOcean, please see this overview of DigitalOcean private networking.

If you are using the UFW firewall, you can limit access to your Memcached instance by typing the following:

  • sudo ufw allow from client_server_private_IP/32 to any port 11211

You can find out more about UFW firewalls by reading our ufw essentials guide.

With these changes in place, you can adjust the Memcached service to bind to your server’s private networking interface.

Binding Memcached to the Private Network Interface

Now that your firewall is in place, you can adjust the Memcached configuration to bind to your server’s private networking interface instead of 127.0.0.1.

We can open the /etc/memcached.conf file again by typing:

  • sudo nano /etc/memcached.conf

Inside, find the -l 127.0.0.1 line that you checked or modified earlier, and change the address to match your server’s private networking interface:

/etc/memcached.conf
. . .
-l memcached_server_private_IP
. . .

Save and close the file when you are finished.

Next, restart the Memcached service:

  • sudo systemctl restart memcached

Check your new settings with netstat to confirm the change:

  • sudo netstat -plunt
Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address                        Foreign Address         State       PID/Program name
. . .
tcp        0      0 memcached_server_private_IP:11211    0.0.0.0:*               LISTEN      2912/memcached
. . .

Test connectivity from your external client to ensure that you can still reach the service. It is a good idea to also check access from a non-authorized client to ensure that your firewall rules are effective.
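For example, from an authorized client server with libmemcached-tools installed, a check along the following lines should succeed, while the same command from a machine outside your firewall rule should not. The address and credentials here are placeholders for your own values:

  • memcstat --servers="memcached_server_private_IP" --username=sammy --password=your_password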

Conclusion

In this tutorial, we have covered how to secure your Memcached server by configuring it to bind to your local or private network interface and by enabling SASL authentication.

To learn more about Memcached, check out the project documentation. For more information about how to work with Memcached, see our tutorial on How To Install and Use Memcache on Ubuntu 14.04.


The Importance of Diversity in Tech

InterWorks APAC team at Summit

Every December, the whole InterWorks family gathers in Stillwater, Oklahoma, for a company Summit. Each year, it offers a chance to catch up with friends, meet new ones and strategize about the next year. It’s a great time, filled with friendly competition and valuable conversations around the technologies we’re using, lessons learned from trainings and projects, and InterWorks overall, including how we can continue doing great work with great people.

Diversity in Tech

My favorite breakout session from the Summit was titled Diversity in Tech. The session opened by highlighting why diversity is important, as Keith Dykstra emphasized in his blog post for Viz for Social Good, “Dear Tech People.” Diverse teams solve problems more effectively and are simply a good thing to have as we work toward building a more inclusive world.

We talked about industry trends (Mavis Liu’s viz on Gender Diversity in Tech Giants was a great place to start) and had a space to share ideas and experiences around diversity initiatives. All areas of our business were represented, along with all levels of leadership, to create 30+ voices speaking about what diversity means to them and how we at InterWorks can champion it. While we’re a tech company, we’re also a company that has the best people—and creating a company that supports all of those people allows us to innovate and continue doing the best work.

Diversity in Tech presentation

Above: Brandi White, Director of Employee Experience, presenting the Diversity in Tech session.

The session, like the rest of the Summit, also created a space for connections and conversations that may not have otherwise occurred. After the session, I talked to a few people who participated and asked them to share some of their insights or takeaways. Below are some of their responses:

Diversity in Tech Session Takeaways

Zion Spencer | Marketing Analytics Manager 

“The Diversity in Tech session at this year’s Summit was one of the most encouraging discussions that I’ve seen in all my years of working at InterWorks. Given the type of people that are drawn to calling a place like InterWorks their professional home, it made sense why this became one of the most talked-about sessions at Summit this year. We are people that don’t want to just practice equality. We want to make it known that equality is one of the most important pursuits as individuals and as an organization. Our past experiences have informed us that the only way to prevent discrimination and see diversity flourish is by actively giving examples of how a workplace is safe for people of all backgrounds. It makes me proud to know that InterWorks is serious about taking those steps.”

Rob Curtis | Analytics Consultant / ANZ Practice Lead 

“Our goal should be to create a world where people of all backgrounds are valued for their ideas and cherished for their diversity. Let’s start by trying to create that ideal at InterWorks.”

Bill Barnes | Data Engineer 

“The more we step into these discussions, the easier it is to keep having them. The phrase ‘We can’t know what we don’t know’ is a very real thing, and it helps to see things from someone else’s point of view.”

Debbie Yu | Analytics Consultant 

“I was really appreciative of having a space to hear from my colleagues, many of whom shared vulnerable experiences. It was also really clear that people wanted to continue having dialogue about the topic since we all wanted to keep talking after the session was over. This conversation alone has helped me create stronger friendships with colleagues I didn’t know as well before, and that has me feeling energized and thankful.”

Ravi Nemani | Analytics Consultant 

“Addressing diversity could take both a top-down approach and a grass-roots approach within an organization. I’m glad InterWorks has taken steps to use language that is much more inclusive, such as phrasing on the website. Our company is a reflection of the community and the world we live in, so having this dialogue is important to continue bringing a variety of perspectives and experiences to the table.”

Brenden Goetz | Analytics Consultant 

“We have our own individual experiences and lenses through which we see the world, and it takes a lot of intentional effort to recognize that other people experience the world differently than you do. Maybe this is obvious, but it struck me so hard to finally see clearly that sometimes people don’t know or don’t care because it’s not an issue for them. We have different groups of people experiencing different things, and we need to do a better job of recognizing, celebrating and caring for those experiences.”

Next Steps toward Inclusion

Many of my sentiments reflect those of my colleagues above: having discussions around creating a workplace that is supportive and welcoming to everyone allows us to see and learn from other people’s perspectives. Plus, it opens up a dialogue that makes us an even stronger and more inclusive company.

This session, and the Summit overall, stirred my excitement for 2019. We’ve got exciting partnerships and projects coming up, and we’ll tackle those as a diverse team, with ideas and people you’ll be hard-pressed to find elsewhere.

InterWorks APAC team at Summit

Above: Me with some of the InterWorks APAC team’s support at our annual Summit. 
