How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DigitalOcean Kubernetes

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

A Docker registry is a storage and content delivery system for named Docker images, which are the industry standard for containerized applications. A private Docker registry allows you to securely share your images within your team or organization with more flexibility and control when compared to public ones. By hosting your private Docker registry directly in your Kubernetes cluster, you can achieve higher speeds, lower latency, and better availability, all while having control over the registry.

The underlying registry storage is delegated to external drivers. The default storage system is the local filesystem, but you can swap this for a cloud-based storage driver. DigitalOcean Spaces is an S3-compatible object storage designed for developer teams and businesses that want a scalable, simple, and affordable way to store and serve vast amounts of data, and is very suitable for storing Docker images. It has a built-in CDN network, which can greatly reduce latency when frequently accessing images.

In this tutorial, you’ll deploy your private Docker registry to your DigitalOcean Kubernetes cluster using Helm, backed up by DigitalOcean Spaces for storing data. You’ll create API keys for your designated Space, install the Docker registry to your cluster with custom configuration, configure Kubernetes to properly authenticate with it, and test it by running a sample deployment on the cluster. At the end of this tutorial, you’ll have a secure, private Docker registry installed on your DigitalOcean Kubernetes cluster.

Prerequisites

Before you begin this tutorial, you’ll need:

  • Docker installed on the machine that you’ll access your cluster from. For Ubuntu 18.04, visit How To Install and Use Docker on Ubuntu 18.04; you only need to complete the first step. For other distributions, visit Docker’s website.

  • A DigitalOcean Kubernetes cluster with your connection configuration set as the kubectl default. Instructions on how to configure kubectl appear under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

  • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

  • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To set these up, complete steps 1 and 2 of How To Install Software on Kubernetes Clusters with the Helm Package Manager.

  • The Nginx Ingress Controller and Cert-Manager installed on the cluster. For a guide on how to do this, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

  • A domain name with two DNS A records pointed to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to create A records. In this tutorial, we’ll refer to the A records as registry.example.com and k8s-test.example.com.

Step 1 — Configuring and Installing the Docker Registry

In this step, you will create a configuration file for the registry deployment and install the Docker registry to your cluster with the given config using the Helm package manager.

During this tutorial, you will use a configuration file called chart_values.yaml to override some of the default settings for the Docker registry Helm chart. Helm calls its packages charts; these are sets of files that outline a related selection of Kubernetes resources. You’ll edit the settings to specify DigitalOcean Spaces as the underlying storage system and enable HTTPS access by wiring up Let’s Encrypt TLS certificates.
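If you’d like to review the chart’s default values before overriding them, Helm can print them for you. This assumes the default stable repository set up during the Helm installation from the prerequisites:

  • helm inspect values stable/docker-registry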

As part of the prerequisites, you created the echo1 and echo2 services and an echo_ingress Ingress for testing purposes. You will not need these in this tutorial, so you can now delete them.

Start off by deleting the ingress by running the following command:

  • kubectl delete -f echo_ingress.yaml

Then, delete the two test services:

  • kubectl delete -f echo1.yaml && kubectl delete -f echo2.yaml

The kubectl delete command accepts the file to delete when passed the -f parameter.
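Alternatively, you can delete resources by type and name rather than by file. For example, assuming the Ingress defined in echo_ingress.yaml is named echo-ingress, as in the prerequisite tutorial, this command would have the same effect:

  • kubectl delete ingress echo-ingress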

Create a folder that will serve as your workspace:

  • mkdir ~/k8s-registry

Navigate to it by running:

  • cd ~/k8s-registry

Now, using your text editor, create your chart_values.yaml file:

  • nano chart_values.yaml

Add the following lines, ensuring you replace the highlighted lines with your details:

chart_values.yaml
ingress:
  enabled: true
  hosts:
    - registry.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "30720m"
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - registry.example.com

storage: s3

secrets:
  htpasswd: ""
  s3:
    accessKey: "your_space_access_key"
    secretKey: "your_space_secret_key"

s3:
  region: your_space_region
  regionEndpoint: your_space_region.digitaloceanspaces.com
  secure: true
  bucket: your_space_name

The first block, ingress, configures the Kubernetes Ingress that will be created as part of the Helm chart deployment. The Ingress object maps outside HTTP/HTTPS routes to internal services in the cluster, allowing communication from the outside. The overridden values are:

  • enabled: set to true to enable the Ingress.
  • hosts: a list of hosts from which the Ingress will accept traffic.
  • annotations: a list of metadata that provides further direction to other parts of Kubernetes on how to treat the Ingress. You set the Ingress Controller to nginx, the Let’s Encrypt cluster issuer to the production variant (letsencrypt-prod), and tell the nginx controller to accept files with a max size of 30 GB, which is a sensible limit for even the largest Docker images.
  • tls: this subcategory configures Let’s Encrypt HTTPS. You populate the hosts list, which defines the secure hosts from which this Ingress will accept HTTPS traffic, with your example domain name.

Then, you set the storage system to s3; the other available option is filesystem. Here s3 indicates using a remote storage system compatible with the industry-standard Amazon S3 API, which DigitalOcean Spaces fulfills.

In the next block, secrets, you configure keys for accessing your DigitalOcean Space under the s3 subcategory. Finally, in the s3 block, you configure the parameters specifying your Space.

Save and close your file.
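If you’d like to verify your Space credentials before deploying the chart, one optional way is to list the Space with an S3-compatible client such as s3cmd, assuming you have it installed and configured with the same access and secret keys:

  • s3cmd ls s3://your_space_name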

Now, if you haven’t already done so, set up your A records to point to the Load Balancer you created as part of the Nginx Ingress Controller installation in the prerequisite tutorial. To see how to set your DNS on DigitalOcean, see How to Manage DNS Records.

Next, ensure your Space isn’t empty. The Docker registry won’t run at all if you don’t have any files in your Space. To get around this, upload a file. Navigate to the Spaces tab, find your Space, click the Upload File button, and upload any file you’d like. You could upload the configuration file you just created.

Empty file uploaded to empty Space

Before installing anything via Helm, you need to refresh its cache. This will fetch the latest information about your chart repositories. To do so, run the following command:

  • helm repo update

Now, you’ll deploy the Docker registry chart with this custom configuration via Helm by running:

  • helm install stable/docker-registry -f chart_values.yaml --name docker-registry

You’ll see the following output:

Output
NAME: docker-registry
...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
docker-registry-config  1     1s

==> v1/Pod(related)
NAME                              READY  STATUS             RESTARTS  AGE
docker-registry-54df68fd64-l26fb  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                    TYPE    DATA  AGE
docker-registry-secret  Opaque  3     1s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
docker-registry  ClusterIP  10.245.131.143  <none>       5000/TCP  1s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
docker-registry  0/1    1           0          1s

==> v1beta1/Ingress
NAME             HOSTS                 ADDRESS  PORTS    AGE
docker-registry  registry.example.com           80, 443  1s

NOTES:
1. Get the application URL by running these commands:
  https://registry.example.com/

Helm lists all the resources it created as a result of the Docker registry chart deployment. The registry is now accessible from the domain name you specified earlier.
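If you’d like to watch the registry pod reach the Running state before moving on, you can optionally list it by the app label the chart applies to its pods:

  • kubectl get pods -l app=docker-registry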

You’ve configured and deployed a Docker registry on your Kubernetes cluster. Next, you will test the availability of the newly deployed Docker registry.

Step 2 — Testing Pushing and Pulling

In this step, you’ll test your newly deployed Docker registry by pushing and pulling images to and from it. Currently, the registry is empty. To have something to push, you need to have an image available on the machine you’re working from. Let’s use the mysql Docker image.

Start off by pulling mysql from the Docker Hub:

  • sudo docker pull mysql

Your output will look like this:

Output
Using default tag: latest
latest: Pulling from library/mysql
27833a3ba0a5: Pull complete
...
e906385f419d: Pull complete
Digest: sha256:a7cf659a764732a27963429a87eccc8457e6d4af0ee9d5140a3b56e74986eed7
Status: Downloaded newer image for mysql:latest

You now have the image available locally. To inform Docker where to push it, you’ll need to tag it with the host name, like so:

  • sudo docker tag mysql registry.example.com/mysql

Then, push the image to the new registry:

  • sudo docker push registry.example.com/mysql

This command will run successfully and indicate that your new registry is properly configured and accepting traffic — including pushing new images. If you see an error, double check your steps against steps 1 and 2.

To test pulling from the registry cleanly, first delete the local mysql images with the following command:

  • sudo docker rmi registry.example.com/mysql && sudo docker rmi mysql

Then, pull it from the registry:

  • sudo docker pull registry.example.com/mysql

This command will take a few seconds to complete. If it runs successfully, that means your registry is working correctly. If it shows an error, double check what you have entered against the previous commands.

You can list Docker images available locally by running the following command:

  • sudo docker images

You’ll see output listing the images available on your local machine, along with their ID and date of creation.
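You can also narrow the listing down to a single repository by passing its name, for example:

  • sudo docker images registry.example.com/mysql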

Your Docker registry is configured. You’ve pushed an image to it and verified you can pull it down. Now let’s add authentication so only authorized users can access the images.

Step 3 — Adding Account Authentication and Configuring Kubernetes Access

In this step, you’ll set up username and password authentication for the registry using the htpasswd utility.

The htpasswd utility comes with the Apache web server; you can use it to create files that store usernames and passwords for basic authentication of HTTP users. The format of htpasswd files is username:hashed_password (one pair per line), and it is portable enough for other programs to use as well.
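For illustration, a file with a single user named sammy would contain one line similar to the following (the hash shown is only a placeholder; real bcrypt hashes generated by htpasswd begin with $2y$):

htpasswd_file
sammy:$2y$05$your_bcrypt_hash_here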

To make htpasswd available on the system, you’ll need to install it by running:

  • sudo apt install apache2-utils -y

Note:
If you’re running this tutorial from a Mac, you’ll need to use the following command to make htpasswd available on your machine:

  • docker run --rm -v ${PWD}:/app -it httpd htpasswd -b -c /app/htpasswd_file sammy password

Create the htpasswd_file by executing the following command:

  • touch htpasswd_file

Add a username and password combination to htpasswd_file:

  • htpasswd -B htpasswd_file username

Docker requires the password to be hashed using the bcrypt algorithm, which is why we pass the -B parameter. The bcrypt algorithm is a password hashing function based on the Blowfish block cipher, with a work factor parameter that specifies how expensive the hash computation will be.
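If you’d like a costlier hash, the apache2-utils version of htpasswd also accepts a -C flag that sets the bcrypt work factor (valid values are 4 to 17, with 5 as the default). For example:

  • htpasswd -B -C 12 htpasswd_file username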

Remember to replace username with your desired username. When run, htpasswd will ask you for the accompanying password and add the combination to htpasswd_file. You can repeat this command for as many users as you wish to add.

Now, show the contents of htpasswd_file by running the following command:

  • cat htpasswd_file

Select and copy the contents shown.

To add authentication to your Docker registry, you’ll need to edit chart_values.yaml and add the contents of htpasswd_file to the htpasswd variable.

Open chart_values.yaml for editing:

  • nano chart_values.yaml

Find the line that looks like this:

chart_values.yaml
  htpasswd: "" 

Edit it to match the following, replacing htpasswd_file_contents with the contents you copied from the htpasswd_file:

chart_values.yaml
  htpasswd: |-
    htpasswd_file_contents

Be careful with the indentation: each line of the file contents must have four spaces before it.

Once you’ve added your contents, save and close the file.

To propagate the changes to your cluster, run the following command:

  • helm upgrade docker-registry stable/docker-registry -f chart_values.yaml

The output will be similar to that shown when you first deployed your Docker registry:

Output
Release "docker-registry" has been upgraded. Happy Helming!
LAST DEPLOYED: ...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
docker-registry-config  1     3m8s

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
docker-registry-6c5bb7ffbf-ltnjv  1/1    Running  0         3m7s

==> v1/Secret
NAME                    TYPE    DATA  AGE
docker-registry-secret  Opaque  4     3m8s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
docker-registry  ClusterIP  10.245.128.245  <none>       5000/TCP  3m8s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
docker-registry  1/1    1           1          3m8s

==> v1beta1/Ingress
NAME             HOSTS                 ADDRESS        PORTS    AGE
docker-registry  registry.example.com  159.89.215.50  80, 443  3m8s

NOTES:
1. Get the application URL by running these commands:
  https://registry.example.com/

This command calls Helm and instructs it to upgrade an existing release, in your case docker-registry, with its chart defined in stable/docker-registry in the chart repository, after applying the chart_values.yaml file.

Now, you’ll try pulling an image from the registry again:

  • sudo docker pull registry.example.com/mysql

The output will look like the following:

Output
Using default tag: latest
Error response from daemon: Get https://registry.example.com/v2/mysql/manifests/latest: no basic auth credentials

It correctly failed because you provided no credentials. This means that your Docker registry authorizes requests correctly.

To log in to the registry, run the following command:

  • sudo docker login registry.example.com

Remember to replace registry.example.com with your domain address. It will prompt you for a username and password. If it shows an error, double check what your htpasswd_file contains. You must define the username and password combination in the htpasswd_file, which you created earlier in this step.

To test the login, you can try to pull again by running the following command:

  • sudo docker pull registry.example.com/mysql

The output will look similar to the following:

Output
Using default tag: latest
latest: Pulling from mysql
Digest: sha256:f2dc118ca6fa4c88cde5889808c486dfe94bccecd01ca626b002a010bb66bcbe
Status: Image is up to date for registry.example.com/mysql:latest

You’ve now configured Docker and can log in securely. To configure Kubernetes to log in to your registry, run the following command:

  • sudo kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/sammy/.docker/config.json --type=kubernetes.io/dockerconfigjson

You will see the following output:

Output
secret/regcred created

This command creates a secret in your cluster with the name regcred, takes the contents of the JSON file where Docker stores the credentials, and parses it as dockerconfigjson, which defines a registry credential in Kubernetes.
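If you’d like to double check what was stored, you can inspect the Secret; its .dockerconfigjson field holds your base64-encoded Docker credentials:

  • kubectl get secret regcred --output=yaml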

You’ve used htpasswd to create a login config file, configured the registry to authenticate requests, and created a Kubernetes secret containing the login credentials. Next, you will test the integration between your Kubernetes cluster and registry.

Step 4 — Testing Kubernetes Integration by Running a Sample Deployment

In this step, you’ll run a sample deployment with an image stored in the in-cluster registry to test the connection between your Kubernetes cluster and registry.

In the last step, you created a secret, called regcred, containing login credentials for your private registry. It may contain login credentials for multiple registries, in which case you’ll have to update the Secret accordingly.

In the pod definition, you can specify which Secret Kubernetes should use when pulling container images by setting the imagePullSecrets field. This is necessary whenever the Docker registry requires authentication.
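As a minimal sketch, a standalone pod pulling the mysql image you pushed in Step 2 would reference the Secret like this (the pod and container names here are hypothetical; you’ll see the same imagePullSecrets field in the deployment below):

pod-sketch.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
  - name: app
    image: registry.example.com/mysql
  imagePullSecrets:
  - name: regcred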

You’ll now deploy a sample Hello World image from your private Docker registry to your cluster. First, in order to push it, you’ll pull it to your machine by running the following command:

  • sudo docker pull paulbouwer/hello-kubernetes:1.5

Then, tag it by running:

  • sudo docker tag paulbouwer/hello-kubernetes:1.5 registry.example.com/paulbouwer/hello-kubernetes:1.5

Finally, push it to your registry:

  • sudo docker push registry.example.com/paulbouwer/hello-kubernetes:1.5

Delete it from your machine as you no longer need it locally:

  • sudo docker rmi registry.example.com/paulbouwer/hello-kubernetes:1.5

Now, you’ll deploy the sample Hello World application. First, create a new file, hello-world.yaml, using your text editor:

  • nano hello-world.yaml

Next, you’ll define a Service and an Ingress to make the app accessible from outside of the cluster. Add the following lines, replacing the highlighted lines with your domains:

hello-world.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s-test.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: registry.example.com/paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred

First, you define the Ingress for the Hello World deployment, which you will route through the Load Balancer that the Nginx Ingress Controller owns. Then, you define a service that can access the pods created in the deployment. In the actual deployment spec, you specify the image as the one located in your registry and set imagePullSecrets to regcred, which you created in the previous step.

Save and close the file. To deploy this to your cluster, run the following command:

  • kubectl apply -f hello-world.yaml

You’ll see the following output:

Output
ingress.extensions/hello-kubernetes-ingress created
service/hello-kubernetes created
deployment.apps/hello-kubernetes created

You can now navigate to your test domain — the second A record, k8s-test.example.com in this tutorial. You will see the Kubernetes Hello world! page.

Hello World page

The Hello World page lists some environment information, like the Linux kernel version and the internal ID of the pod the request was served from. You can also access your Space via the web interface to see the images you’ve worked with in this tutorial.
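If you’d like to confirm that all three replicas are serving the page, you can list the pods by the label set in the deployment:

  • kubectl get pods -l app=hello-kubernetes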

If you want to delete this Hello World deployment after testing, run the following command:

  • kubectl delete -f hello-world.yaml

You’ve created a sample Hello World deployment to test if Kubernetes is properly pulling images from your private registry.

Conclusion

You have now successfully deployed your own private Docker registry on your DigitalOcean Kubernetes cluster, using DigitalOcean Spaces as the storage layer underneath. There is no practical limit to how many images you can store: Spaces can scale to hold them while providing the same security and robustness. In production, though, you should always strive to optimize your Docker images as much as possible; take a look at the How To Optimize Docker Images for Production tutorial.

How To Migrate a Docker Compose Workflow to Kubernetes

Introduction

When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

  • Extracting necessary configuration information from your code.
  • Offloading your application’s state.
  • Packaging your application for repeated use.

You will also have written service definitions that specify how your container images should run.

To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process to Kubernetes is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.

In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Node.js application with a MongoDB database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

Prerequisites

Step 1 — Installing kompose

To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.18.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

  • curl -L https://github.com/kubernetes/kompose/releases/download/v1.18.0/kompose-linux-amd64 -o kompose

For details about installing on non-Linux systems, please refer to the installation instructions.

Make the binary executable:

  • chmod +x kompose

Move it to your PATH:

  • sudo mv ./kompose /usr/local/bin/kompose

To verify that it has been installed properly, you can do a version check:

  • kompose version

If the installation was successful, you will see output like the following:

Output
1.18.0 (06a2e56)

With kompose installed and ready to use, you can now clone the Node.js project code that you will be translating to Kubernetes.

Step 2 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

Clone the repository into a directory called node_project:

  • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

  • cd node_project

The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a MongoDB database.

For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

The project directory includes a Dockerfile with instructions for building the application image. Let’s build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it node-kubernetes or a name of your own choosing:

  • docker build -t your_dockerhub_username/node-kubernetes .

The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

  • docker images

You will see the following output:

Output
REPOSITORY                                TAG        IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-kubernetes   latest     9c6f897e1fbc   3 seconds ago   90MB
node                                      10-alpine  94f3c8956482   12 days ago     71MB

Next, log in to the Docker Hub account you created in the prerequisites:

  • docker login -u your_dockerhub_username

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.
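If you’re curious, you can take a look at this file; it holds an auths entry keyed by registry address, with your credentials stored base64-encoded, so treat it as sensitive:

  • cat ~/.docker/config.json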

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

  • docker push your_dockerhub_username/node-kubernetes

You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

Step 3 — Translating Compose Services to Kubernetes Objects with kompose

Our Docker Compose file, here called docker-compose.yaml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application’s database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

First, we will need to modify some of the definitions in our docker-compose.yaml file to work with Kubernetes. We will include a reference to our newly-built application image in our nodejs service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we’ll redefine both containers’ restart policies to be in line with the behavior Kubernetes expects.

Open the file with nano or your favorite editor:

  • nano docker-compose.yaml

The current definition for the nodejs application service looks like this:

~/node_project/docker-compose.yaml
...
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "80:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
...

Make the following edits to your service definition:

  • Use your node-kubernetes image instead of the local Dockerfile.
  • Change the container restart policy from unless-stopped to always.
  • Remove the volumes list and the command instruction.

The finished service definition will now look like this:

~/node_project/docker-compose.yaml
...
services:
  nodejs:
    image: your_dockerhub_username/node-kubernetes
    container_name: nodejs
    restart: always
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "80:8080"
    networks:
      - app-network
...

Next, scroll down to the db service definition. Here, change the restart policy for the service to always and remove the env_file entry. Instead of using values from the .env file, we will pass the values for our MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD to the database container using the Secret we will create in Step 4.

The db service definition will now look like this:

~/node_project/docker-compose.yaml
...
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
...

Finally, at the bottom of the file, remove the node_modules volume from the top-level volumes key. The key will now look like this:

~/node_project/docker-compose.yaml
...
volumes:
  dbdata:

Save and close the file when you are finished editing.

Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Node.js Application for Development With Docker Compose for a longer explanation of this file.

In that tutorial, we added .env to our .gitignore file to ensure that it would not copy to version control. This means that it did not copy over when we cloned the node-mongo-docker-dev repository in Step 2 of this tutorial. We will therefore need to recreate it now.

Create the file:

  • nano .env

kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the nodejs service definition in our Compose file, we will add only the MONGO_DB database name and the MONGO_PORT. We will assign the database username and password separately when we manually create a Secret object in Step 4.

Add the following port and database name information to the .env file. Feel free to rename your database if you would like:

~/node_project/.env
MONGO_PORT=27017
MONGO_DB=sharkinfo

Save and close the file when you are finished editing.

You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

  • Create yaml files based on the service definitions in your docker-compose.yaml file with kompose convert.
  • Create Kubernetes objects directly with kompose up.
  • Create a Helm chart with kompose convert -c.

For now, we will convert our service definitions to yaml files and then add to and revise the files kompose creates.

Convert your service definitions to yaml files with the following command:

  • kompose convert

You can also name specific or multiple Compose files using the -f flag.
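For example, if you kept environment-specific overrides in a second file (the file names here are hypothetical), you could convert both at once:

  • kompose convert -f docker-compose.yaml -f docker-compose.override.yaml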

After you run this command, kompose will output information about the files it has created:

Output
INFO Kubernetes file "nodejs-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "dbdata-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nodejs-deployment.yaml" created
INFO Kubernetes file "nodejs-env-configmap.yaml" created

These include yaml files with specs for the Node application Service, Deployment, and ConfigMap, as well as for the dbdata PersistentVolumeClaim and MongoDB database Deployment.

These files are a good starting point, but in order for our application’s functionality to match the setup described in Containerizing a Node.js Application for Development With Docker Compose we will need to make a few additions and changes to the files kompose has generated.

Step 4 — Creating Kubernetes Secrets

In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database username and password.

The first step in manually creating a Secret will be to convert your username and password to base64, an encoding scheme that allows you to uniformly transmit data, including binary data.

Convert your database username:

  • echo -n 'your_database_username' | base64

Note down the value you see in the output.

Next, convert your password:

  • echo -n 'your_database_password' | base64

Take note of the value in the output here as well.
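As a quick sanity check of the encoding, the name sammy would encode like this:

  • echo -n 'sammy' | base64

Output
c2FtbXk=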

Open a file for the Secret:

  • nano secret.yaml

Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

  • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

In general, it is a good idea to validate your syntax before creating resources with kubectl.

Add the following code to the file to create a Secret that will define your MONGO_USERNAME and MONGO_PASSWORD using the encoded values you just created. Be sure to replace the dummy values here with your encoded username and password:

~/node_project/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

We have named the Secret object mongo-secret, but you are free to name it anything you would like.

Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.

With secret.yaml written, our next step will be to ensure that our application and database Pods both use the values we added to the file. Let’s start by adding references to the Secret to our application Deployment.

Open the file called nodejs-deployment.yaml:

  • nano nodejs-deployment.yaml

The file’s container specifications include the following environment variables defined under the env key:

~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME

We will need to add references to our Secret to the MONGO_USERNAME and MONGO_PASSWORD variables listed here, so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our nodejs-env ConfigMap, as is the case with the values for MONGO_DB and MONGO_PORT, we’ll include a secretKeyRef key to point to the values in our mongo-secret secret.

Add the following Secret references to the MONGO_USERNAME and MONGO_PASSWORD variables:

~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME

Save and close the file when you are finished editing.

Next, we’ll add the same values to the db-deployment.yaml file.

Open the file for editing:

  • nano db-deployment.yaml

In this file, we will add references to our Secret for the following variable keys: MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD. The mongo image makes these variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the database container starts.

Using the values we set in our Secret ensures that we will have an application user with root privileges on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

Under the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables, add references to the Secret values:

~/node_project/db-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME
        image: mongo:4.1.8-xenial
...

Save and close the file when you are finished editing.

With your Secret in place, you can move on to creating your database Service and ensuring that your application container only attempts to connect to the database once it is fully set up and initialized.

Step 5 — Creating the Database Service and an Application Init Container

Now that we have our Secret, we can move on to creating our database Service and an Init Container that will poll this Service to ensure that our application only attempts to connect to the database once the database startup tasks, including creating the MONGO_INITDB user and password, are complete.

For a discussion of how to implement this functionality in Compose, please see Step 4 of Containerizing a Node.js Application for Development with Docker Compose.

Open a file to define the specs for the database Service:

  • nano db-service.yaml

Add the following code to the file to define the Service:

~/node_project/db-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: db
  name: db
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    io.kompose.service: db
status:
  loadBalancer: {}

The selector that we have included here will match this Service object with our database Pods, which have been defined with the label io.kompose.service: db by kompose in the db-deployment.yaml file. We’ve also named this service db.

Save and close the file when you are finished editing.
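Once you create your objects in Step 7, you can verify that this selector matches your database Pod by listing the Service's endpoints. If the selector works, the database Pod's IP address will appear in the ENDPOINTS column:

  • kubectl get endpoints db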

Next, let’s add an Init Container field to the containers array in nodejs-deployment.yaml. This will create an Init Container that we can use to delay our application container from starting until the db Service has been created with a Pod that is reachable. This is one of the possible uses for Init Containers; to learn more about other use cases, please see the official documentation.

Open the nodejs-deployment.yaml file:

  • nano nodejs-deployment.yaml

Within the Pod spec and alongside the containers array, we are going to add an initContainers field with a container that will poll the db Service.

Add the following code below the ports and resources fields and above the restartPolicy in the nodejs containers array:

~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      ...
        name: nodejs
        ports:
        - containerPort: 8080
        resources: {}
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
...

This Init Container uses the BusyBox image, a lightweight image that includes many UNIX utilities. In this case, we’ll use the netcat utility to poll whether or not the Pod associated with the db Service is accepting TCP connections on port 27017.

This container command replicates the functionality of the wait-for script that we removed from our docker-compose.yaml file in Step 3. For a longer discussion of how and why our application used the wait-for script when working with Compose, please see Step 4 of Containerizing a Node.js Application for Development with Docker Compose.

Init Containers run to completion; in our case, this means that our Node application container will not start until the database container is running and accepting connections on port 27017. The db Service definition allows us to guarantee this functionality regardless of the exact location of the database container, which is mutable.
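Once you create your objects in Step 7, you can observe this ordering in real time. The --watch flag tells kubectl to stream status changes, so you will see the nodejs Pod remain in the Init:0/1 status until the db Pod is Running:

  • kubectl get pods --watch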

Save and close the file when you are finished editing.

With your database Service created and your Init Container in place to control the startup order of your containers, you can move on to checking the storage requirements in your PersistentVolumeClaim and exposing your application service using a LoadBalancer.

Step 6 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

First, let’s modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application’s state.

To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com, which provisions DigitalOcean Block Storage.

We can check this by typing:

  • kubectl get storageclass

If you are working with a DigitalOcean cluster, you will see the following output:

Output
NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   76m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.
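If you would like more detail about the default StorageClass, including its provisioner and reclaim policy, you can describe it. On a DigitalOcean cluster, for example:

  • kubectl describe storageclass do-block-storage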

When kompose created dbdata-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

Open dbdata-persistentvolumeclaim.yaml:

  • nano dbdata-persistentvolumeclaim.yaml

Replace the storage value with 1Gi:

~/node_project/dbdata-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: dbdata
  name: dbdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

Also note that the accessModes value of ReadWriteOnce means that the volume provisioned as a result of this Claim can be mounted as read-write by only a single node. Please see the documentation for more information about different access modes.

Save and close the file when you are finished.

Next, open nodejs-service.yaml:

  • nano nodejs-service.yaml

We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

Within the Service spec, specify LoadBalancer as the Service type:

~/node_project/nodejs-service.yaml
apiVersion: v1
kind: Service
...
spec:
  type: LoadBalancer
  ports:
...

When we create the nodejs Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

Save and close the file when you are finished editing.

With all of our files in place, we are ready to start and test the application.

Step 7 — Starting and Accessing the Application

It’s time to create our Kubernetes objects and test that our application is working as expected.

To create the objects we’ve defined, we’ll use kubectl create with the -f flag, which will allow us to specify the files that kompose created for us, along with the files we wrote. Run the following command to create the Node application and MongoDB database Services and Deployments, along with your Secret, ConfigMap, and PersistentVolumeClaim:

  • kubectl create -f nodejs-service.yaml,nodejs-deployment.yaml,nodejs-env-configmap.yaml,db-service.yaml,db-deployment.yaml,dbdata-persistentvolumeclaim.yaml,secret.yaml

You will see the following output indicating that the objects have been created:

Output
service/nodejs created
deployment.extensions/nodejs created
configmap/nodejs-env created
service/db created
deployment.extensions/db created
persistentvolumeclaim/dbdata created
secret/mongo-secret created

To check that your Pods are running, type:

  • kubectl get pods

You don’t need to specify a Namespace here, since we have created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag when running this command, along with the name of your Namespace.
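For example, if you had created your objects in a hypothetical Namespace called your_namespace, the first command below would list them; the second lists Pods across all Namespaces:

  • kubectl get pods -n your_namespace
  • kubectl get pods --all-namespaces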

You will see the following output while your db container is starting and your application Init Container is running:

Output
NAME                      READY     STATUS              RESTARTS   AGE
db-679d658576-kfpsl       0/1       ContainerCreating   0          10s
nodejs-6b9585dc8b-pnsws   0/1       Init:0/1            0          10s

Once that container has run and your application and database containers have started, you will see this output:

Output
NAME                      READY     STATUS    RESTARTS   AGE
db-679d658576-kfpsl       1/1       Running   0          54s
nodejs-6b9585dc8b-pnsws   1/1       Running   0          54s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

Note:
If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

  • kubectl describe pods your_pod
  • kubectl logs your_pod

With your containers running, you can now access the application. To get the IP for the LoadBalancer, type:

  • kubectl get svc

You will see the following output:

Output
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
db           ClusterIP      10.245.189.250   <none>        27017/TCP      93s
kubernetes   ClusterIP      10.245.0.1       <none>        443/TCP        25m12s
nodejs       LoadBalancer   10.245.15.56     your_lb_ip    80:30729/TCP   93s

The EXTERNAL-IP associated with the nodejs Service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.
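Rather than re-running the command until the IP appears, you can watch the Service; kubectl will print an updated line as soon as the load balancer is provisioned:

  • kubectl get svc nodejs --watch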

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

You should see the following landing page:

Application Landing Page

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character:

Shark Info Form

In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

Filled Shark Form

Click on the Submit button. You will see a page with this shark information displayed back to you:

Shark Output

You now have a single instance setup of a Node.js application with a MongoDB database running on a Kubernetes cluster.

Conclusion

The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can continue to extend and refine these manifests to meet your production requirements.


How To Build and Deploy a Flask Application Using Docker on Ubuntu 18.04

The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

Introduction

Docker is an open-source application that allows administrators to create, manage, deploy, and replicate applications using containers. A container can be thought of as a package that houses the dependencies an application requires to run at the operating system level. This means that each application deployed using Docker lives in an environment of its own and its requirements are handled separately.

Flask is a web micro-framework that is built on Python. It is called a micro-framework because it does not require specific tools or plug-ins to run. The Flask framework is lightweight and flexible, yet highly structured, which makes it a popular choice for building web applications in Python.

Deploying a Flask application with Docker will allow you to replicate the application across different servers with minimal reconfiguration.

In this tutorial, you will create a Flask application and deploy it with Docker. This tutorial will also cover how to update an application after deployment.

Prerequisites

To follow this tutorial, you will need an Ubuntu 18.04 server with a non-root user that has sudo privileges, and Docker installed on that server.

Step 1 — Setting Up the Flask Application

To get started, you will create a directory structure that will hold your Flask application. This tutorial will create a directory called TestApp in /var/www, but you can modify the command to name it whatever you’d like.

  • sudo mkdir /var/www/TestApp

Move in to the newly created TestApp directory:

  • cd /var/www/TestApp

Next, create the base folder structure for the Flask application:

  • sudo mkdir -p app/static app/templates

The -p flag indicates that mkdir will create a directory and all parent directories that don’t exist. In this case, mkdir will create the app parent directory in the process of making the static and templates directories.

The app directory will contain all files related to the Flask application such as its views and blueprints. Views are the code you write to respond to requests to your application. Blueprints create application components and support common patterns within an application or across multiple applications.

The static directory is where assets such as images, CSS, and JavaScript files live. The templates directory is where you will put the HTML templates for your project.

Now that the base folder structure is complete, create the files needed to run the Flask application. First, create an __init__.py file inside the app directory. This file tells the Python interpreter that the app directory is a package and should be treated as such.

Run the following command to create the file:

  • sudo nano app/__init__.py

Packages in Python allow you to group modules into logical namespaces or hierarchies. This approach enables the code to be broken down into individual and manageable blocks that perform specific functions.

Next, you will add code to the __init__.py that will create a Flask instance and import the logic from the views.py file, which you will create after saving this file. Add the following code to your new file:

/var/www/TestApp/app/__init__.py
from flask import Flask
app = Flask(__name__)
from app import views

Once you’ve added that code, save and close the file.

With the __init__.py file created, you’re ready to create the views.py file in your app directory. This file will contain most of your application logic.

  • sudo nano app/views.py

Next, add the code to your views.py file. This code will return the hello world! string to users who visit your web page:

/var/www/TestApp/app/views.py
from app import app

@app.route('/')
def home():
   return "hello world!"

The @app.route line above the function is called a decorator. Decorators modify the function that follows them. In this case, the decorator tells Flask which URL will trigger the home() function. The hello world text returned by the home function will be displayed to the user in the browser.

With the views.py file in place, you’re ready to create the uwsgi.ini file. This file will contain the uWSGI configurations for our application. uWSGI is a deployment option for Nginx that is both a protocol and an application server; the application server can serve uWSGI, FastCGI, and HTTP protocols.

To create this file, run the following command:

  • sudo nano uwsgi.ini

Next, add the following content to your file to configure the uWSGI server:

/var/www/TestApp/uwsgi.ini
[uwsgi]
module = main
callable = app
master = true

This code defines the module that the Flask application will be served from. In this case, this is the main.py file, referenced here as main. The callable option instructs uWSGI to use the app instance exported by the main application. The master option allows your application to keep running, so there is little downtime even when reloading the entire application.

Next, create the main.py file, which is the entry point to the application. The entry point instructs uWSGI on how to interact with the application.

  • sudo nano main.py

Next, copy and paste the following into the file. This imports the Flask instance named app from the application package that was previously created.

/var/www/TestApp/main.py
from app import app 

Finally, create a requirements.txt file to specify the dependencies that the pip package manager will install to your Docker deployment:

  • sudo nano requirements.txt

Add the following line to add Flask as a dependency:

/var/www/TestApp/requirements.txt
Flask==1.0.2

This specifies the version of Flask to be installed. At the time of writing this tutorial, 1.0.2 is the latest Flask version. You can check for updates at the official website for Flask.

Save and close the file. You have successfully set up your Flask application and are ready to set up Docker.

Step 2 — Setting Up Docker

In this step you will create two files, Dockerfile and start.sh, to create your Docker deployment. The Dockerfile is a text document that contains the commands used to assemble the image. The start.sh file is a shell script that will build an image and create a container from the Dockerfile.

First, create the Dockerfile.

  • sudo nano Dockerfile

Next, add your desired configuration to the Dockerfile. These commands specify how the image will be built, and what extra requirements will be included.

/var/www/TestApp/Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
RUN apk --update add bash nano
ENV STATIC_URL /static
ENV STATIC_PATH /var/www/app/static
COPY ./requirements.txt /var/www/requirements.txt
RUN pip install -r /var/www/requirements.txt

In this example, the Docker image will be built off an existing image, tiangolo/uwsgi-nginx-flask, which you can find on DockerHub. This particular Docker image is a good choice over others because it supports a wide range of Python versions and OS images.

The first two lines specify the parent image that you'll use to run the application and install the bash command processor and the nano text editor. ENV STATIC_URL /static is an environment variable specific to this Docker image. It defines the static folder where all assets such as images, CSS files, and JavaScript files are served from.

The last two lines copy the requirements.txt file into the container and then run pip install to install the dependencies it specifies.

Save and close the file after adding your configuration.

With your Dockerfile in place, you’re almost ready to write your start.sh script that will build the Docker container. Before writing the start.sh script, first make sure that you have an open port to use in the configuration. To check if a port is free, run the following command:

  • sudo nc localhost 56733 < /dev/null; echo $?

If the output of the command above is 1, then the port is free and usable. Otherwise, you will need to select a different port to use in your start.sh configuration file.
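If nc is not available on your system, or you would like a second check, you can also list listening sockets with the ss utility, which ships with modern Ubuntu releases. No output means the port is free:

  • sudo ss -tuln | grep 56733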

Once you’ve found an open port to use, create the start.sh script:

  • sudo nano start.sh

The start.sh script is a shell script that will build an image from the Dockerfile and create a container from the resulting Docker image. Add your configuration to the new file:

/var/www/TestApp/start.sh
#!/bin/bash
app="docker.test"
docker build -t ${app} .
docker run -d -p 56733:80 \
  --name=${app} \
  -v $PWD:/app ${app}

The first line is called a shebang. It specifies that the file should be executed with bash. The next line specifies the name you want to give the image and container and saves it as a variable named app. The next line instructs Docker to build an image from your Dockerfile located in the current directory. This will create an image called docker.test in this example.

The last three lines create a new container named docker.test that is exposed at port 56733. Finally, they mount the present directory into the /app directory of the container.

You use the -d flag to start a container in daemon mode, or as a background process. You include the -p flag to bind a port on the server to a particular port on the Docker container. In this case, you are binding port 56733 to port 80 on the Docker container. The -v flag specifies a Docker volume to mount on the container, and in this case, you are mounting the entire project directory to the /app folder on the Docker container.

Execute the start.sh script to create the Docker image and build a container from the resulting image:

  • sudo bash start.sh

Once the script finishes running, use the following command to list all running containers:

  • sudo docker ps

You will receive output that shows the containers:

Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 58b05508f4dd docker.test "/entrypoint.sh /sta…" 12 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:56733->80/tcp docker.test

You will find that the docker.test container is running. Now that it is running, visit the IP address at the specified port in your browser: http://ip-address:56733

You’ll see a page similar to the following:

the home page
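You can also verify the deployment from the server itself. docker port prints the port mappings created by the -p flag, and curl fetches the same page over the mapped port, assuming the 56733 binding from start.sh:

  • sudo docker port docker.test
  • curl http://localhost:56733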

In this step you have successfully deployed your Flask application on Docker. Next, you will use templates to display content to users.

Step 3 — Serving Template Files

Templates are files that display static and dynamic content to users who visit your application. In this step, you will create an HTML template to create a home page for the application.

Start by creating a home.html file in the app/templates directory:

  • sudo nano app/templates/home.html

Add the code for your template. This code will create an HTML5 page that contains a title and some text.

/var/www/TestApp/app/templates/home.html
<!doctype html>
<html lang="en-us">

<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <title>Welcome home</title>
</head>

<body>
  <h1>Home Page</h1>
  <p>This is the home page of our application.</p>
</body>
</html>

Save and close the file once you’ve added your template.

Next, modify the app/views.py file to serve the newly created file:

  • sudo nano app/views.py

First, add the following line at the beginning of your file to import the render_template method from Flask. This method parses an HTML file to render a web page to the user.

/var/www/TestApp/app/views.py
from flask import render_template
...

At the end of the file, you will also add a new route to render the template file. This code specifies that users are served the contents of the home.html file whenever they visit the /template route on your application.

/var/www/TestApp/app/views.py
...

@app.route('/template')
def template():
    return render_template('home.html')

The updated app/views.py file will look like this:

/var/www/TestApp/app/views.py
from flask import render_template
from app import app

@app.route('/')
def home():
    return "Hello world!"

@app.route('/template')
def template():
    return render_template('home.html')

Save and close the file when done.

In order for these changes to take effect, you will need to stop and restart the Docker container. Run the following command to restart it:

  • sudo docker stop docker.test && sudo docker start docker.test

Visit your application at http://your-ip-address:56733/template to see the new template being served.

homepage
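You can also fetch the new route from the server itself with curl, assuming the same 56733 port binding:

  • curl http://localhost:56733/template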

In this step, you created an HTML template file and served it to visitors of your application. In the next step, you will see how the changes you make to your application can take effect without having to restart the Docker container.

Step 4 — Updating the Application

Sometimes you will need to make changes to the application, whether it is installing new requirements, updating the Docker container, or HTML and logic changes. In this section, you will configure touch-reload to make these changes without needing to restart the Docker container.

Python autoreloading watches the entire file system for changes and refreshes the application when it detects a change. Autoreloading is discouraged in production because it can become resource intensive very quickly. In this step, you will use touch-reload to watch for changes to a particular file and reload when the file is updated or replaced.

To implement this, start by opening your uwsgi.ini file:

  • sudo nano uwsgi.ini

Next, add the highlighted line to the end of the file:

/var/www/TestApp/uwsgi.ini
[uwsgi]
module = main
callable = app
master = true
touch-reload = /app/uwsgi.ini

This specifies a file that will be modified to trigger an entire application reload. Once you’ve made the changes, save and close the file.

To demonstrate this, make a small change to your application. Start by opening your app/views.py file:

  • sudo nano app/views.py

Replace the string returned by the home function:

/var/www/TestApp/app/views.py
from flask import render_template
from app import app

@app.route('/')
def home():
    return "<b>There has been a change</b>"

@app.route('/template')
def template():
    return render_template('home.html')

Save and close the file after you’ve made a change.

Next, if you open your application’s homepage at http://ip-address:56733, you will notice that the changes are not reflected. This is because the condition for reload is a change to the uwsgi.ini file. To reload the application, use touch to activate the condition:

  • sudo touch uwsgi.ini

Reload the application homepage in your browser again. You will find that the application has incorporated the changes:

Homepage Updated

In this step, you set up a touch-reload condition to update your application after making changes.

Conclusion

In this tutorial, you created and deployed a Flask application to a Docker container. You also configured touch-reload to refresh your application without needing to restart the container.

With your new application on Docker, you can now scale with ease. To learn more about using Docker, check out their official documentation.


How To Optimize Docker Images for Production

The author selected Code.org to receive a donation as part of the Write for DOnations program.

Introduction

In a production environment, Docker makes it easy to create, deploy, and run applications inside of containers. Containers let developers gather applications and all their core necessities and dependencies into a single package that you can turn into a Docker image and replicate. Docker images are built from Dockerfiles. The Dockerfile is a file where you define what the image will look like, what base operating system it will have, and which commands will run inside of it.

Large Docker images can lengthen the time it takes to build and send images between clusters and cloud providers. If, for example, you have a gigabyte-sized image to push every time one of your developers triggers a build, the throughput you create on your network will add up during the CI/CD process, making your application sluggish and ultimately costing you resources. Because of this, Docker images suited for production should only have the bare necessities installed.

There are several ways to decrease the size of Docker images to optimize for production. First off, these images don’t usually need build tools to run their applications, and so there’s no need to add them at all. By using a multi-stage build process, you can use intermediate images to compile and build the code, install dependencies, and package everything into the smallest size possible, then copy over the final version of your application to an empty image without build tools. Additionally, you can use an image with a tiny base, like Alpine Linux. Alpine is a suitable Linux distribution for production because it only has the bare necessities that your application needs to run.

In this tutorial, you’ll optimize Docker images in a few simple steps, making them smaller, faster, and better suited for production. You’ll build images for a sample Go API in several different Docker containers, starting with Ubuntu and language-specific images, then moving on to the Alpine distribution. You will also use multi-stage builds to optimize your images for production. The end goal of this tutorial is to show the size difference between using default Ubuntu images and optimized counterparts, and to show the advantage of multi-stage builds. After reading through this tutorial, you’ll be able to apply these techniques to your own projects and CI/CD pipelines.

Note: This tutorial uses an API written in Go as an example. This simple API will give you a clear understanding of how you would approach optimizing Go microservices with Docker images. Even though this tutorial uses a Go API, you can apply this process to almost any programming language.

Prerequisites

Before you start, you will need an Ubuntu 18.04 server with a non-root user that has sudo privileges, and Docker installed on that server.

Step 1 — Downloading the Sample Go API

Before optimizing your Docker image, you must first download the sample API that you will build your Docker images from. Using a simple Go API will showcase all the key steps of building and running an application inside a Docker container. This tutorial uses Go because it’s a compiled language like C++ or Java, but unlike them, has a very small footprint.

On your server, begin by cloning the sample Go API:

  • git clone https://github.com/do-community/mux-go-api.git

Once you have cloned the project, you will have a directory named mux-go-api on your server. Move into this directory with cd:

  • cd mux-go-api

This will be the home directory for your project. You will build your Docker images from this directory. Inside, you will find the source code for an API written in Go in the api.go file. Although this API is minimal and has only a few endpoints, it will be appropriate for simulating a production-ready API for the purposes of this tutorial.

Now that you have downloaded the sample Go API, you are ready to build a base Ubuntu Docker image, against which you can compare the later, optimized Docker images.

Step 2 — Building a Base Ubuntu Image

For your first Docker image, it will be useful to see what it looks like when you start out with a base Ubuntu image. This will package your sample API in an environment similar to the software you’re already running on your Ubuntu server. Inside the image, you will install the various packages and modules you need to run your application. You will find, however, that this process creates a rather heavy Ubuntu image that will affect build time and the code readability of your Dockerfile.

Start by writing a Dockerfile that instructs Docker to create an Ubuntu image, install Go, and run the sample API. Make sure to create the Dockerfile in the directory of the cloned repo. If you cloned to the home directory it should be $HOME/mux-go-api.

Make a new file called Dockerfile.ubuntu. Open it up in nano or your favorite text editor:

  • nano ~/mux-go-api/Dockerfile.ubuntu

In this Dockerfile, you’ll define an Ubuntu image and install Golang. Then you’ll proceed to install the needed dependencies and build the binary. Add the following contents to Dockerfile.ubuntu:

~/mux-go-api/Dockerfile.ubuntu
FROM ubuntu:18.04

RUN apt-get update -y \
  && apt-get install -y git gcc make golang-1.10

ENV GOROOT /usr/lib/go-1.10
ENV PATH $GOROOT/bin:$PATH
ENV GOPATH /root/go
ENV APIPATH /root/go/src/api

WORKDIR $APIPATH
COPY . .

RUN \
  go get -d -v \
  && go install -v \
  && go build

EXPOSE 3000
CMD ["./api"]

Starting from the top, the FROM command specifies which base operating system the image will have. Then the RUN command installs the Go language during the creation of the image. ENV sets the specific environment variables the Go compiler needs in order to work properly. WORKDIR specifies the directory where we want to copy over the code, and the COPY command takes the code from the directory where Dockerfile.ubuntu is and copies it over into the image. The final RUN command installs Go dependencies needed for the source code to compile and run the API.

Note: Using the && operators to string together RUN commands is important in optimizing Dockerfiles, because every RUN command will create a new layer, and every new layer increases the size of the final image.

Save and exit the file. Now you can run the build command to create a Docker image from the Dockerfile you just made:

  • docker build -f Dockerfile.ubuntu -t ubuntu .

The build command builds an image from a Dockerfile. The -f flag specifies that you want to build from the Dockerfile.ubuntu file, while -t stands for tag, meaning you’re tagging it with the name ubuntu. The final dot represents the current context where Dockerfile.ubuntu is located.

This will take a while, so feel free to take a break. Once the build is done, you’ll have an Ubuntu image ready to run your API. But the final size of the image might not be ideal; anything above a few hundred MB for this API would be considered an overly large image.

Run the following command to list all Docker images and find the size of your Ubuntu image:

  • docker images

You’ll see output showing the image you just created:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              61b2096f6871        33 seconds ago      636MB
. . .

As is highlighted in the output, this image has a size of 636MB for a basic Golang API, a number that may vary slightly from machine to machine. Over multiple builds, this large size will significantly affect deployment times and network throughput.
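If you would like a more compact comparison as you build the images in the following steps, docker images accepts a Go-template format string that prints only the fields you ask for:

  • docker images --format "{{.Repository}}: {{.Size}}"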

In this section, you built an Ubuntu image with all the needed Go tools and dependencies to run the API you cloned in Step 1. In the next section, you’ll use a pre-built, language-specific Docker image to simplify your Dockerfile and streamline the build process.

Step 3 — Building a Language-Specific Base Image

Pre-built images are ordinary base images that users have modified to include situation-specific tools. Users can then push these images to the Docker Hub image repository, allowing other users to use the shared image instead of having to write their own individual Dockerfiles. This is a common process in production situations, and you can find various pre-built images on Docker Hub for almost any use case. In this step, you’ll build your sample API using a Go-specific image that already has the compiler and dependencies installed.

With pre-built base images already containing the tools you need to build and run your app, you can cut down the build time significantly. Because you’re starting with a base that has all needed tools pre-installed, you can skip adding these to your Dockerfile, making it look a lot cleaner and ultimately decreasing the build time.

Go ahead and create another Dockerfile and name it Dockerfile.golang. Open it up in your text editor:

  • nano ~/mux-go-api/Dockerfile.golang

This file will be significantly more concise than the previous one because it has all the Go-specific dependencies, tools, and compiler pre-installed.

Now, add the following lines:

~/mux-go-api/Dockerfile.golang
FROM golang:1.10

WORKDIR /go/src/api
COPY . .

RUN \
    go get -d -v \
    && go install -v \
    && go build

EXPOSE 3000
CMD ["./api"]

Starting from the top, you’ll find that the FROM statement is now golang:1.10. This means Docker will fetch a pre-built Go image from Docker Hub that has all the needed Go tools already installed.

Now, once again, build the Docker image with:

  • docker build -f Dockerfile.golang -t golang .

Check the final size of the image with the following command:

  • docker images

This will yield output similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
golang              latest              eaee5f524da2        40 seconds ago      744MB
. . .

Even though the Dockerfile itself is more efficient and the build time is shorter, the total image size actually increased. The pre-built Golang image is around 744MB, a significant amount.

This is the preferred way to build Docker images. It gives you a base image which the community has approved as the standard to use for the specified language, in this case Go. However, to make an image ready for production, you need to cut away parts that the running application does not need.

Keep in mind that using these heavy images is fine when you are unsure about your needs. Feel free to use them both as throwaway containers as well as the base for building other images. For development or testing purposes, where you don’t need to think about sending images through the network, it’s perfectly fine to use heavy images. But if you want to optimize deployments, then you need to try your best to make your images as tiny as possible.

Now that you have tested a language-specific image, you can move on to the next step, in which you will use the lightweight Alpine Linux distribution as a base image to make your Docker image lighter.

Step 4 — Building Base Alpine Images

One of the easiest steps to optimize your Docker images is to use smaller base images. Alpine is a lightweight Linux distribution designed for security and resource efficiency. The Alpine Docker image uses musl libc and BusyBox to stay compact, requiring no more than 8MB in a container to run. The tiny size is due to binary packages being thinned out and split, giving you more control over what you install, which keeps the environment as small and efficient as possible.

The process of creating an Alpine image is similar to how you created the Ubuntu image in Step 2. First, create a new file called Dockerfile.alpine:

  • nano ~/mux-go-api/Dockerfile.alpine

Now add this snippet:

~/mux-go-api/Dockerfile.alpine
FROM alpine:3.8

RUN apk add --no-cache \
    ca-certificates \
    git \
    gcc \
    musl-dev \
    openssl \
    go

ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
ENV APIPATH $GOPATH/src/api
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" "$APIPATH" && chmod -R 777 "$GOPATH"

WORKDIR $APIPATH
COPY . .

RUN \
    go get -d -v \
    && go install -v \
    && go build

EXPOSE 3000
CMD ["./api"]

Here you’re adding the apk add command to use Alpine’s package manager to install Go and all libraries it requires. As with the Ubuntu image, you need to set the environment variables as well.

Go ahead and build the image:

  • docker build -f Dockerfile.alpine -t alpine .

Once again, check the image size:

  • docker images

You will receive output similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
alpine              latest              ee35a601158d        30 seconds ago      426MB
. . .

The size has gone down to around 426MB.

The small size of the Alpine base image has reduced the final image size, but there are a few more things you can do to make it even smaller.

Next, try using a pre-built Alpine image for Go. This will make the Dockerfile shorter, and will also cut down the size of the final image. Because the pre-built Alpine image for Go is built with Go compiled from source, its footprint is significantly smaller.

Start by creating a new file called Dockerfile.golang-alpine:

  • nano ~/mux-go-api/Dockerfile.golang-alpine

Add the following contents to the file:

~/mux-go-api/Dockerfile.golang-alpine
FROM golang:1.10-alpine3.8

RUN apk add --no-cache --update git

WORKDIR /go/src/api
COPY . .

RUN go get -d -v \
  && go install -v \
  && go build

EXPOSE 3000
CMD ["./api"]

The only differences between Dockerfile.golang-alpine and Dockerfile.alpine are the FROM command and the first RUN command. Now, the FROM command specifies a golang image with the 1.10-alpine3.8 tag, and RUN only has a command for installing Git. You need Git for the go get command to work in the second RUN command at the bottom of Dockerfile.golang-alpine.

Build the image with the following command:

  • docker build -f Dockerfile.golang-alpine -t golang-alpine .

Retrieve your list of images:

  • docker images

You will receive the following output:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
golang-alpine       latest              97103a8b912b        49 seconds ago      288MB

Now the image size is down to around 288MB.

Even though you’ve managed to cut down the size a lot, there’s one last thing you can do to get the image ready for production. It’s called a multi-stage build. By using multi-stage builds, you can use one image to build the application while using another, lighter image to package the compiled application for production, a process you will run through in the next step.

Step 5 — Excluding Build Tools with a Multi-Stage Build

Ideally, images that you run in production shouldn’t have any build tools installed or dependencies that are redundant for the production application to run. You can remove these from the final Docker image by using multi-stage builds. This works by building the binary, or in other terms, the compiled Go application, in an intermediate container, then copying it over to an empty container that doesn’t have any unnecessary dependencies.

Start by creating another file called Dockerfile.multistage:

  • nano ~/mux-go-api/Dockerfile.multistage

What you’ll add here will be familiar. Start out by adding the exact same code as with Dockerfile.golang-alpine. But this time, also add a second image where you’ll copy the binary from the first image.

~/mux-go-api/Dockerfile.multistage
FROM golang:1.10-alpine3.8 AS multistage

RUN apk add --no-cache --update git

WORKDIR /go/src/api
COPY . .

RUN go get -d -v \
  && go install -v \
  && go build

##

FROM alpine:3.8
COPY --from=multistage /go/bin/api /go/bin/
EXPOSE 3000
CMD ["/go/bin/api"]

Save and close the file. Here you have two FROM commands. The first is identical to Dockerfile.golang-alpine, except for having an additional AS multistage in the FROM command. This will give it a name of multistage, which you will then reference in the bottom part of the Dockerfile.multistage file. In the second FROM command, you’ll take a base alpine image and COPY over the compiled Go application from the multistage image into it. This process will further cut down the size of the final image, making it ready for production.

Run the build with the following command:

  • docker build -f Dockerfile.multistage -t prod .

Check the image size now, after using a multi-stage build.

  • docker images

You will find two new images instead of only one:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
prod                latest              82fc005abc40        38 seconds ago      11.3MB
<none>              <none>              d7855c8f8280        38 seconds ago      294MB
. . .

The <none> image is the multistage image built with the FROM golang:1.10-alpine3.8 AS multistage command. It’s only an intermediary used to build and compile the Go application, while the prod image in this context is the final image which only contains the compiled Go application.
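The <none> image is what Docker calls a dangling image. Once you no longer need it as build cache, you can remove it; note that this command removes all dangling images on the host, not just this one:

  • sudo docker image prune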

From an initial 744MB, you’ve now shaved down the image size to around 11.3MB. Keeping track of a tiny image like this and sending it over the network to your production servers will be much easier than with an image of over 700MB, and will save you significant resources in the long run.

Conclusion

In this tutorial, you optimized Docker images for production using different base Docker images and an intermediate image to compile and build the code. This way, you have packaged your sample API into the smallest size possible. You can use these techniques to improve build and deployment speed of your Docker applications and any CI/CD pipeline you may have.

If you are interested in learning more about building applications with Docker, check out our How To Build a Node.js Application with Docker tutorial. For more conceptual information on optimizing containers, see Building Optimized Containers for Kubernetes.
