How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DigitalOcean Kubernetes

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

A Docker registry is a storage and content delivery system for named Docker images, which are the industry standard for containerized applications. A private Docker registry allows you to securely share your images within your team or organization with more flexibility and control when compared to public ones. By hosting your private Docker registry directly in your Kubernetes cluster, you can achieve higher speeds, lower latency, and better availability, all while having control over the registry.

The underlying registry storage is delegated to external drivers. The default storage system is the local filesystem, but you can swap this for a cloud-based storage driver. DigitalOcean Spaces is an S3-compatible object storage designed for developer teams and businesses that want a scalable, simple, and affordable way to store and serve vast amounts of data, and is very suitable for storing Docker images. It has a built-in CDN network, which can greatly reduce latency when frequently accessing images.

In this tutorial, you’ll deploy your private Docker registry to your DigitalOcean Kubernetes cluster using Helm, backed up by DigitalOcean Spaces for storing data. You’ll create API keys for your designated Space, install the Docker registry to your cluster with custom configuration, configure Kubernetes to properly authenticate with it, and test it by running a sample deployment on the cluster. At the end of this tutorial, you’ll have a secure, private Docker registry installed on your DigitalOcean Kubernetes cluster.

Prerequisites

Before you begin this tutorial, you’ll need:

  • Docker installed on the machine that you’ll access your cluster from. For Ubuntu 18.04, visit How To Install and Use Docker on Ubuntu 18.04; you only need to complete the first step. For other distributions, visit Docker’s website.

  • A DigitalOcean Kubernetes cluster with its connection configuration set as the kubectl default. Instructions on how to configure kubectl appear under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

  • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

  • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete steps 1 and 2 of How To Install Software on Kubernetes Clusters with the Helm Package Manager.

  • The Nginx Ingress Controller and Cert-Manager installed on the cluster. For a guide on how to do this, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

  • A domain name with two DNS A records pointed to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to create A records. In this tutorial, we’ll refer to the A records as registry.example.com and k8s-test.example.com.

Step 1 — Configuring and Installing the Docker Registry

In this step, you will create a configuration file for the registry deployment and install the Docker registry to your cluster with the given config using the Helm package manager.

In this tutorial, you will use a configuration file called chart_values.yaml to override some of the default settings for the Docker registry Helm chart. Helm calls its packages charts; these are sets of files that outline a related selection of Kubernetes resources. You’ll edit the settings to specify DigitalOcean Spaces as the underlying storage system and to enable HTTPS access by wiring up Let’s Encrypt TLS certificates.

As part of the prerequisites, you created the echo1 and echo2 services and an echo_ingress ingress for testing purposes; you will not need these in this tutorial, so you can now delete them.

Start off by deleting the ingress by running the following command:

  • kubectl delete -f echo_ingress.yaml

Then, delete the two test services:

  • kubectl delete -f echo1.yaml && kubectl delete -f echo2.yaml

The kubectl delete command accepts the file to delete when passed the -f parameter.

Create a folder that will serve as your workspace:

  • mkdir ~/k8s-registry

Navigate to it by running:

  • cd ~/k8s-registry

Now, using your text editor, create your chart_values.yaml file:

  • nano chart_values.yaml

Add the following lines, ensuring you replace the highlighted lines with your details:

chart_values.yaml
ingress:
  enabled: true
  hosts:
    - registry.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "30720m"
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - registry.example.com

storage: s3

secrets:
  htpasswd: ""
  s3:
    accessKey: "your_space_access_key"
    secretKey: "your_space_secret_key"

s3:
  region: your_space_region
  regionEndpoint: your_space_region.digitaloceanspaces.com
  secure: true
  bucket: your_space_name

The first block, ingress, configures the Kubernetes Ingress that will be created as a part of the Helm chart deployment. The Ingress object makes outside HTTP/HTTPS routes point to internal services in the cluster, thus allowing communication from the outside. The overridden values are:

  • enabled: set to true to enable the Ingress.
  • hosts: a list of hosts from which the Ingress will accept traffic.
  • annotations: metadata that provides further direction to other parts of Kubernetes on how to treat the Ingress. You set the Ingress controller to nginx, set the Let’s Encrypt cluster issuer to the production variant (letsencrypt-prod), and tell the Nginx controller to accept files with a max size of 30 GB, which is a sensible limit for even the largest Docker images.
  • tls: this subcategory configures Let’s Encrypt HTTPS. You populate its hosts list, which defines the secure hosts this Ingress will accept HTTPS traffic for, with your example domain name.

Then, you set the storage driver to s3; the other available option is filesystem. Here, s3 indicates using a remote storage system compatible with the industry-standard Amazon S3 API, which DigitalOcean Spaces provides.

In the next block, secrets, you configure keys for accessing your DigitalOcean Space under the s3 subcategory. Finally, in the s3 block, you configure the parameters specifying your Space.

Save and close your file.

Now, if you haven’t already done so, set up your A records to point to the Load Balancer you created as part of the Nginx Ingress Controller installation in the prerequisite tutorial. To see how to set your DNS on DigitalOcean, see How to Manage DNS Records.

Next, ensure your Space isn’t empty. The Docker registry won’t run if there are no files in your Space. To get around this, upload a file: navigate to the Spaces tab, find your Space, click the Upload File button, and upload any file you’d like. You could upload the configuration file you just created.

Empty file uploaded to empty Space

Before installing anything via Helm, you need to refresh its cache to pull in the latest information about your chart repository. To do this, run the following command:

  • helm repo update

Now, you’ll deploy the Docker registry chart with this custom configuration via Helm by running:

  • helm install stable/docker-registry -f chart_values.yaml --name docker-registry

You’ll see the following output:

Output
NAME:   docker-registry
...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
docker-registry-config  1     1s

==> v1/Pod(related)
NAME                              READY  STATUS             RESTARTS  AGE
docker-registry-54df68fd64-l26fb  0/1    ContainerCreating  0         1s

==> v1/Secret
NAME                    TYPE    DATA  AGE
docker-registry-secret  Opaque  3     1s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
docker-registry  ClusterIP  10.245.131.143  <none>       5000/TCP  1s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
docker-registry  0/1    1           0          1s

==> v1beta1/Ingress
NAME             HOSTS                 ADDRESS  PORTS    AGE
docker-registry  registry.example.com           80, 443  1s

NOTES:
1. Get the application URL by running these commands:
  https://registry.example.com/

Helm lists all the resources it created as a result of the Docker registry chart deployment. The registry is now accessible from the domain name you specified earlier.

You’ve configured and deployed a Docker registry on your Kubernetes cluster. Next, you will test the availability of the newly deployed Docker registry.

Step 2 — Testing Pushing and Pulling

In this step, you’ll test your newly deployed Docker registry by pushing and pulling images to and from it. Currently, the registry is empty. To have something to push, you need to have an image available on the machine you’re working from. Let’s use the mysql Docker image.

Start off by pulling mysql from the Docker Hub:

  • sudo docker pull mysql

Your output will look like this:

Output
Using default tag: latest
latest: Pulling from library/mysql
27833a3ba0a5: Pull complete
...
e906385f419d: Pull complete
Digest: sha256:a7cf659a764732a27963429a87eccc8457e6d4af0ee9d5140a3b56e74986eed7
Status: Downloaded newer image for mysql:latest

You now have the image available locally. To inform Docker where to push it, you’ll need to tag it with the host name, like so:

  • sudo docker tag mysql registry.example.com/mysql

Then, push the image to the new registry:

  • sudo docker push registry.example.com/mysql

If this command runs successfully, your new registry is properly configured and accepting traffic, including pushed images. If you see an error, double check your work against Steps 1 and 2.
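The rule Docker used to pick the push destination can be sketched in a few lines. This is a simplified illustration of the naming convention, not Docker’s actual parser:

```python
# Simplified sketch of Docker's image-name rule (not the real parser): if the
# segment before the first "/" contains a "." or ":" (or is "localhost"), it is
# treated as the registry host; otherwise the image is assumed to live on Docker Hub.
def parse_image_ref(ref):
    head, sep, rest = ref.partition("/")
    if sep and ("." in head or ":" in head or head == "localhost"):
        registry, name = head, rest
    else:
        registry, name = "docker.io", ref
    name, _, tag = name.partition(":")
    return registry, name, tag or "latest"

print(parse_image_ref("mysql"))                       # ('docker.io', 'mysql', 'latest')
print(parse_image_ref("registry.example.com/mysql"))  # ('registry.example.com', 'mysql', 'latest')
```

This is why tagging the image with your domain was enough to redirect the push away from Docker Hub.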

To test pulling from the registry cleanly, first delete the local mysql images with the following command:

  • sudo docker rmi registry.example.com/mysql && sudo docker rmi mysql

Then, pull it from the registry:

  • sudo docker pull registry.example.com/mysql

This command will take a few seconds to complete. If it runs successfully, that means your registry is working correctly. If it shows an error, double check what you have entered against the previous commands.

You can list Docker images available locally by running the following command:

  • sudo docker images

You’ll see output listing the images available on your local machine, along with their ID and date of creation.

Your Docker registry is configured: you’ve pushed an image to it and verified that you can pull it down. Now let’s add authentication so only authorized users can access your images.

Step 3 — Adding Account Authentication and Configuring Kubernetes Access

In this step, you’ll set up username and password authentication for the registry using the htpasswd utility.

The htpasswd utility comes from the Apache web server; you can use it to create files that store usernames and passwords for basic HTTP authentication. The format of htpasswd files is one username:hashed_password pair per line, which is portable enough for other programs to use as well.
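As an illustration of that portability, the format can be read with a few lines of code. This is a minimal sketch, and the sample entry below is made up:

```python
# Minimal reader for the htpasswd format: one username:hashed_password per line.
def parse_htpasswd(text):
    users = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank lines and comments
            continue
        user, _, hashed = line.partition(":")
        users[user] = hashed
    return users

sample = "sammy:$2y$05$examplesaltexamplehashexamplehash"
print(parse_htpasswd(sample))
```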

To make htpasswd available on the system, you’ll need to install it by running:

  • sudo apt install apache2-utils -y

Note:
If you’re running this tutorial from a Mac, you’ll need to use the following command to make htpasswd available on your machine:

  • docker run --rm -v ${PWD}:/app -it httpd htpasswd -b -c /app/htpasswd_file sammy password

Create the htpasswd_file by executing the following command:

  • touch htpasswd_file

Add a username and password combination to htpasswd_file:

  • htpasswd -B htpasswd_file username

Docker requires the password to be hashed using the bcrypt algorithm, which is why you pass the -B parameter. The bcrypt algorithm is a password hashing function based on the Blowfish block cipher, with a work factor parameter that specifies how expensive the hash computation will be.

Remember to replace username with your desired username. When run, htpasswd will ask you for the accompanying password and add the combination to htpasswd_file. You can repeat this command for as many users as you wish to add.
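If you’re curious what htpasswd -B actually wrote, a bcrypt entry has a fixed layout that can be unpacked like this (the hash value below is illustrative, not a real credential):

```python
# Illustrative breakdown of a bcrypt hash: $<ident>$<cost>$<22-char salt><31-char digest>.
# A work factor ("cost") of 10 means roughly 2**10 key-expansion rounds.
def parse_bcrypt(hashed):
    _, ident, cost, body = hashed.split("$")
    return {"ident": ident, "cost": int(cost), "salt": body[:22], "digest": body[22:]}

info = parse_bcrypt("$2y$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy")
print(info["cost"])  # 10
```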

Now, show the contents of htpasswd_file by running the following command:

  • cat htpasswd_file

Select and copy the contents shown.

To add authentication to your Docker registry, you’ll need to edit chart_values.yaml and add the contents of htpasswd_file in the htpasswd variable.

Open chart_values.yaml for editing:

  • nano chart_values.yaml

Find the line that looks like this:

chart_values.yaml
  htpasswd: "" 

Edit it to match the following, replacing htpasswd_file_contents with the contents you copied from htpasswd_file:

chart_values.yaml
  htpasswd: |-
    htpasswd_file_contents

Be careful with the indentation: each line of the file contents must have four spaces before it.
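If you’d rather not indent by hand, a tiny helper can pad the contents for you. This is a hypothetical convenience, not a required step of the tutorial:

```python
# Pad every line of the htpasswd file contents with four spaces so the result
# can be pasted directly under the "htpasswd: |-" block scalar in chart_values.yaml.
def indent_for_yaml(text, spaces=4):
    pad = " " * spaces
    return "\n".join(pad + line for line in text.strip().splitlines())

print(indent_for_yaml("sammy:hash1\nalice:hash2"))
```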

Once you’ve added your contents, save and close the file.

To propagate the changes to your cluster, run the following command:

  • helm upgrade docker-registry stable/docker-registry -f chart_values.yaml

The output will be similar to that shown when you first deployed your Docker registry:

Output
Release "docker-registry" has been upgraded. Happy Helming!
LAST DEPLOYED: ...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
docker-registry-config  1     3m8s

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
docker-registry-6c5bb7ffbf-ltnjv  1/1    Running  0         3m7s

==> v1/Secret
NAME                    TYPE    DATA  AGE
docker-registry-secret  Opaque  4     3m8s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
docker-registry  ClusterIP  10.245.128.245  <none>       5000/TCP  3m8s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
docker-registry  1/1    1           1          3m8s

==> v1beta1/Ingress
NAME             HOSTS                 ADDRESS        PORTS    AGE
docker-registry  registry.example.com  159.89.215.50  80, 443  3m8s

NOTES:
1. Get the application URL by running these commands:
  https://registry.example.com/

This command calls Helm and instructs it to upgrade an existing release, in your case docker-registry, with its chart defined in stable/docker-registry in the chart repository, after applying the chart_values.yaml file.

Now, you’ll try pulling an image from the registry again:

  • sudo docker pull registry.example.com/mysql

The output will look like the following:

Output
Using default tag: latest
Error response from daemon: Get https://registry.example.com/v2/mysql/manifests/latest: no basic auth credentials

It failed because you provided no credentials; this means that your Docker registry now correctly requires authentication.

To log in to the registry, run the following command:

  • sudo docker login registry.example.com

Remember to replace registry.example.com with your domain address. It will prompt you for a username and password. If it shows an error, double check what your htpasswd_file contains. You must define the username and password combination in the htpasswd_file, which you created earlier in this step.

To test the login, you can try to pull again by running the following command:

  • sudo docker pull registry.example.com/mysql

The output will look similar to the following:

Output
Using default tag: latest
latest: Pulling from mysql
Digest: sha256:f2dc118ca6fa4c88cde5889808c486dfe94bccecd01ca626b002a010bb66bcbe
Status: Image is up to date for registry.example.com/mysql:latest

You’ve now configured Docker and can log in securely. To configure Kubernetes to log in to your registry, run the following command:

  • sudo kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/sammy/.docker/config.json --type=kubernetes.io/dockerconfigjson

You will see the following output:

Output
secret/regcred created

This command creates a secret in your cluster with the name regcred, takes the contents of the JSON file where Docker stores the credentials, and parses it as dockerconfigjson, which defines a registry credential in Kubernetes.
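For reference, the structure being wrapped is small. This sketch rebuilds an equivalent dockerconfigjson payload from example credentials (the values are placeholders, not your real keys):

```python
# The .docker/config.json layout that regcred wraps: a map from registry host
# to a base64-encoded "username:password" pair under the "auths" key.
import base64
import json

def make_dockerconfigjson(registry, username, password):
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({"auths": {registry: {"auth": auth}}})

print(make_dockerconfigjson("registry.example.com", "sammy", "password"))
```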

You’ve used htpasswd to create a login config file, configured the registry to authenticate requests, and created a Kubernetes secret containing the login credentials. Next, you will test the integration between your Kubernetes cluster and registry.

Step 4 — Testing Kubernetes Integration by Running a Sample Deployment

In this step, you’ll run a sample deployment with an image stored in the in-cluster registry to test the connection between your Kubernetes cluster and registry.

In the last step, you created a secret, called regcred, containing login credentials for your private registry. It may contain login credentials for multiple registries, in which case you’ll have to update the Secret accordingly.

You can specify which Secret Kubernetes should use when pulling containers by setting imagePullSecrets in the pod definition. This is necessary whenever the Docker registry requires authentication.

You’ll now deploy a sample Hello World image from your private Docker registry to your cluster. First, in order to push it, you’ll pull it to your machine by running the following command:

  • sudo docker pull paulbouwer/hello-kubernetes:1.5

Then, tag it by running:

  • sudo docker tag paulbouwer/hello-kubernetes:1.5 registry.example.com/paulbouwer/hello-kubernetes:1.5

Finally, push it to your registry:

  • sudo docker push registry.example.com/paulbouwer/hello-kubernetes:1.5

Delete it from your machine as you no longer need it locally:

  • sudo docker rmi registry.example.com/paulbouwer/hello-kubernetes:1.5

Now, you’ll deploy the sample Hello World application. First, create a new file, hello-world.yaml, using your text editor:

  • nano hello-world.yaml

Next, you’ll define a Service and an Ingress to make the app accessible to outside of the cluster. Add the following lines, replacing the highlighted lines with your domains:

hello-world.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s-test.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: registry.example.com/paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred

First, you define the Ingress for the Hello World deployment, which you will route through the Load Balancer that the Nginx Ingress Controller owns. Then, you define a service that can access the pods created in the deployment. In the actual deployment spec, you specify the image as the one located in your registry and set imagePullSecrets to regcred, which you created in the previous step.

Save and close the file. To deploy this to your cluster, run the following command:

  • kubectl apply -f hello-world.yaml

You’ll see the following output:

Output
ingress.extensions/hello-kubernetes-ingress created
service/hello-kubernetes created
deployment.apps/hello-kubernetes created

You can now navigate to your test domain — the second A record, k8s-test.example.com in this tutorial. You will see the Kubernetes Hello world! page.

Hello World page

The Hello World page lists some environment information, like the Linux kernel version and the internal ID of the pod the request was served from. You can also access your Space via the web interface to see the images you’ve worked with in this tutorial.

If you want to delete this Hello World deployment after testing, run the following command:

  • kubectl delete -f hello-world.yaml

You’ve created a sample Hello World deployment to test if Kubernetes is properly pulling images from your private registry.

Conclusion

You have now successfully deployed your own private Docker registry on your DigitalOcean Kubernetes cluster, using DigitalOcean Spaces as the underlying storage layer. There is no practical limit to how many images you can store: Spaces can scale to hold them, all while providing the same security and robustness. In production, though, you should always strive to optimize your Docker images as much as possible; take a look at the How To Optimize Docker Images for Production tutorial.

DigitalOcean Community Tutorials

How To Set Up a Private Docker Registry on Top of DigitalOcean Spaces and Use It with DO Kubernetes

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

A Docker registry is a storage and content delivery system for named Docker images, which are the industry standard for containerized applications. A private Docker registry allows you to securely share your images within your team or organization with more flexibility and control when compared to public ones. By hosting your private Docker registry directly in your Kubernetes cluster, you can achieve higher speeds, lower latency, and better availability, all while having control over the registry.

The underlying registry storage is delegated to external drivers. The default storage system is the local filesystem, but you can swap this for a cloud-based storage driver. DigitalOcean Spaces is an S3-compatible object storage designed for developer teams and businesses that want a scalable, simple, and affordable way to store and serve vast amounts of data, and is very suitable for storing Docker images. It has a built-in CDN network, which can greatly reduce latency when frequently accessing images.

In this tutorial, you’ll deploy your private Docker registry to your DigitalOcean Kubernetes cluster using Helm, backed up by DigitalOcean Spaces for storing data. You’ll create API keys for your designated Space, install the Docker registry to your cluster with custom configuration, configure Kubernetes to properly authenticate with it, and test it by running a sample deployment on the cluster. At the end of this tutorial, you’ll have a secure, private Docker registry installed on your DigitalOcean Kubernetes cluster.

Prerequisites

Before you begin this tutorial, you’ll need:

  • Docker installed on the machine that you’ll access your cluster from. For Ubuntu 18.04 visit How To Install and Use Docker on Ubuntu 18.04. You only need to complete the first step. Otherwise visit Docker’s website for other distributions.

  • A DigitalOcean Kubernetes cluster with your connection configuration configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step shown when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

  • A DigitalOcean Space with API keys (access and secret). To learn how to create a DigitalOcean Space and API keys, see How To Create a DigitalOcean Space and API Key.

  • The Helm package manager installed on your local machine, and Tiller installed on your cluster. Complete steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager. You only need to complete the first two steps.

  • The Nginx Ingress Controller and Cert-Manager installed on the cluster. For a guide on how to do this, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

  • A domain name with two DNS A records pointed to the DigitalOcean Load Balancer used by the Ingress. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to create A records. In this tutorial, we’ll refer to the A records as registry.example.com and k8s-test.example.com.

Step 1 — Configuring and Installing the Docker Registry

In this step, you will create a configuration file for the registry deployment and install the Docker registry to your cluster with the given config using the Helm package manager.

During the course of this tutorial, you will use a configuration file called chart_values.yaml to override some of the default settings for the Docker registry Helm chart. Helm calls its packages, charts; these are sets of files that outline a related selection of Kubernetes resources. You’ll edit the settings to specify DigitalOcean Spaces as the underlying storage system and enable HTTPS access by wiring up Let’s Encrypt TLS certificates.

As part of the prerequisite, you would have created the echo1 and echo2 services and an echo_ingress ingress for testing purposes; you will not need these in this tutorial, so you can now delete them.

Start off by deleting the ingress by running the following command:

  • kubectl delete -f echo_ingress.yaml

Then, delete the two test services:

  • kubectl delete -f echo1.yaml && kubectl delete -f echo2.yaml

The kubectl delete command accepts the file to delete when passed the -f parameter.

Create a folder that will serve as your workspace:

  • mkdir ~/k8s-registry

Navigate to it by running:

  • cd ~/k8s-registry

Now, using your text editor, create your chart_values.yaml file:

  • nano chart_values.yaml

Add the following lines, ensuring you replace the highlighted lines with your details:

chart_values.yaml
ingress:   enabled: true   hosts:     - registry.example.com   annotations:     kubernetes.io/ingress.class: nginx     certmanager.k8s.io/cluster-issuer: letsencrypt-prod     nginx.ingress.kubernetes.io/proxy-body-size: "30720m"   tls:     - secretName: letsencrypt-prod       hosts:         - registry.example.com  storage: s3  secrets:   htpasswd: ""   s3:     accessKey: "your_space_access_key"     secretKey: "your_space_secret_key"  s3:   region: your_space_region   regionEndpoint: your_space_region.digitaloceanspaces.com   secure: true   bucket: your_space_name 

The first block, ingress, configures the Kubernetes Ingress that will be created as a part of the Helm chart deployment. The Ingress object makes outside HTTP/HTTPS routes point to internal services in the cluster, thus allowing communication from the outside. The overridden values are:

  • enabled: set to true to enable the Ingress.
  • hosts: a list of hosts from which the Ingress will accept traffic.
  • annotations: a list of metadata that provides further direction to other parts of Kubernetes on how to treat the Ingress. You set the Ingress Controller to nginx, the Let’s Encrypt cluster issuer to the production variant (letsencrypt-prod), and tell the nginx controller to accept files with a max size of 30 GB, which is a sensible limit for even the largest Docker images.
  • tls: this subcategory configures Let’s Encrypt HTTPS. You populate the hosts list that defines from which secure hosts this Ingress will accept HTTPS traffic with our example domain name.

Then, you set the file system storage to s3 — the other available option would be filesystem. Here s3 indicates using a remote storage system compatible with the industry-standard Amazon S3 API, which DigitalOcean Spaces fulfills.

In the next block, secrets, you configure keys for accessing your DO Space under the s3 subcategory. Finally, in the s3 block, you configure the parameters specifying your Space.

Save and close your file.

Now, if you haven’t already done so, set up your A records to point to the Load Balancer you created as part of the Nginx Ingress Controller installation in the prerequisite tutorial. To see how to set your DNS on DigitalOcean, see How to Manage DNS Records.

Next, ensure your Space isn’t empty. The Docker registry won’t run at all if you don’t have any files in your Space. To get around this, upload a file. Navigate to the Spaces tab, find your Space, click the Upload File button, and upload any file you’d like. You could upload the configuration file you just created.

Empty file uploaded to empty Space

Before installing anything via Helm, you need to refresh its cache. This will update the latest information about your chart repository. To do this run the following command:

  • helm repo update

Now, you’ll deploy the Docker registry chart with this custom configuration via Helm by running:

  • helm install stable/docker-registry -f chart_values.yaml --name docker-registry

You’ll see the following output:

Output
NAME: docker-registry ... NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==> v1/ConfigMap NAME DATA AGE docker-registry-config 1 1s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE docker-registry-54df68fd64-l26fb 0/1 ContainerCreating 0 1s ==> v1/Secret NAME TYPE DATA AGE docker-registry-secret Opaque 3 1s ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE docker-registry ClusterIP 10.245.131.143 <none> 5000/TCP 1s ==> v1beta1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE docker-registry 0/1 1 0 1s ==> v1beta1/Ingress NAME HOSTS ADDRESS PORTS AGE docker-registry registry.example.com 80, 443 1s NOTES: 1. Get the application URL by running these commands: https://registry.example.com/

Helm lists all the resources it created as a result of the Docker registry chart deployment. The registry is now accessible from the domain name you specified earlier.

You’ve configured and deployed a Docker registry on your Kubernetes cluster. Next, you will test the availability of the newly deployed Docker registry.

Step 2 — Testing Pushing and Pulling

In this step, you’ll test your newly deployed Docker registry by pushing and pulling images to and from it. Currently, the registry is empty. To have something to push, you need to have an image available on the machine you’re working from. Let’s use the mysql Docker image.

Start off by pulling mysql from the Docker Hub:

  • sudo docker pull mysql

Your output will look like this:

Output
Using default tag: latest latest: Pulling from library/mysql 27833a3ba0a5: Pull complete ... e906385f419d: Pull complete Digest: sha256:a7cf659a764732a27963429a87eccc8457e6d4af0ee9d5140a3b56e74986eed7 Status: Downloaded newer image for mysql:latest

You now have the image available locally. To inform Docker where to push it, you’ll need to tag it with the host name, like so:

  • sudo docker tag mysql registry.example.com/mysql

Then, push the image to the new registry:

  • sudo docker push registry.example.com/mysql

This command will run successfully and indicate that your new registry is properly configured and accepting traffic — including pushing new images. If you see an error, double check your steps against steps 1 and 2.

To test pulling from the registry cleanly, first delete the local mysql images with the following command:

  • sudo docker rmi registry.example.com/mysql && sudo docker rmi mysql

Then, pull it from the registry:

  • sudo docker pull registry.example.com/mysql

This command will take a few seconds to complete. If it runs successfully, that means your registry is working correctly. If it shows an error, double check what you have entered against the previous commands.

You can list Docker images available locally by running the following command:

  • sudo docker images

You’ll see output listing the images available on your local machine, along with their ID and date of creation.

Your Docker registry is configured. You’ve pushed an image to it and verified you can pull it down. Now let’s add authentication so only certain people can access the code.

Step 3 — Adding Account Authentication and Configuring Kubernetes Access

In this step, you’ll set up username and password authentication for the registry using the htpasswd utility.

The htpasswd utility comes from the Apache web server project; you can use it to create files that store usernames and passwords for basic authentication of HTTP users. The format of htpasswd files is username:hashed_password (one entry per line), which is portable enough to allow other programs to use it as well.
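To make the format concrete, here is a minimal Python sketch that parses htpasswd-style lines into a username-to-hash mapping. The sample hash below is a made-up placeholder, not a real credential:

```python
def parse_htpasswd(text):
    """Return a dict mapping usernames to their hashed passwords."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        # Split only on the first colon; the hash itself follows it.
        username, _, hashed = line.partition(":")
        entries[username] = hashed
    return entries

sample = "sammy:$2y$05$placeholderhashvalue"
print(parse_htpasswd(sample))  # {'sammy': '$2y$05$placeholderhashvalue'}
```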

To make htpasswd available on the system, you’ll need to install it by running:

  • sudo apt install apache2-utils -y

Note:
If you’re running this tutorial from a Mac, you’ll need to use the following command to make htpasswd available on your machine:

  • docker run --rm -v ${PWD}:/app -it httpd htpasswd -b -c /app/htpasswd_file sammy password

Create the file that will hold the credentials, htpasswd_file, by executing the following command:

  • touch htpasswd_file

Add a username and password combination to htpasswd_file:

  • htpasswd -B htpasswd_file username

Docker requires the password to be hashed using the bcrypt algorithm, which is why we pass the -B parameter. The bcrypt algorithm is a password hashing function based on the Blowfish block cipher, with a work factor parameter that specifies how expensive the hash function will be.

Remember to replace username with your desired username. When run, htpasswd will ask you for the accompanying password and add the combination to htpasswd_file. You can repeat this command for as many users as you wish to add.

Now, show the contents of htpasswd_file by running the following command:

  • cat htpasswd_file

Select and copy the contents shown.

To add authentication to your Docker registry, you’ll need to edit chart_values.yaml and add the contents of htpasswd_file in the htpasswd variable.

Open chart_values.yaml for editing:

  • nano chart_values.yaml

Find the line that looks like this:

chart_values.yaml
  htpasswd: "" 

Edit it to match the following, replacing htpasswd\_file\_contents with the contents you copied from the htpasswd_file:

chart_values.yaml
  htpasswd: |-
    htpasswd_file_contents

Be careful with the indentation: each line of the file contents must have four spaces before it.
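If you’d rather not indent the lines by hand, a small sketch using the standard textwrap module produces the correctly indented block (the hash values are placeholders):

```python
import textwrap

# Prefix every line of the htpasswd file contents with four spaces so the
# result can be pasted directly under the htpasswd key in chart_values.yaml.
contents = "sammy:$2y$05$placeholderhash\nother:$2y$05$anotherhash"
print(textwrap.indent(contents, "    "))
```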

Once you’ve added your contents, save and close the file.

To propagate the changes to your cluster, run the following command:

  • helm upgrade docker-registry stable/docker-registry -f chart_values.yaml

The output will be similar to that shown when you first deployed your Docker registry:

Output
Release "docker-registry" has been upgraded. Happy Helming!
LAST DEPLOYED: ...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
docker-registry-config  1     3m8s

==> v1/Pod(related)
NAME                              READY  STATUS   RESTARTS  AGE
docker-registry-6c5bb7ffbf-ltnjv  1/1    Running  0         3m7s

==> v1/Secret
NAME                    TYPE    DATA  AGE
docker-registry-secret  Opaque  4     3m8s

==> v1/Service
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
docker-registry  ClusterIP  10.245.128.245  <none>       5000/TCP  3m8s

==> v1beta1/Deployment
NAME             READY  UP-TO-DATE  AVAILABLE  AGE
docker-registry  1/1    1           1          3m8s

==> v1beta1/Ingress
NAME             HOSTS                 ADDRESS        PORTS    AGE
docker-registry  registry.example.com  159.89.215.50  80, 443  3m8s

NOTES:
1. Get the application URL by running these commands:
  https://registry.example.com/

This command calls Helm and instructs it to upgrade an existing release, in your case docker-registry, with its chart defined in stable/docker-registry in the chart repository, after applying the chart_values.yaml file.

Now, you’ll try pulling an image from the registry again:

  • sudo docker pull registry.example.com/mysql

The output will look like the following:

Output
Using default tag: latest
Error response from daemon: Get https://registry.example.com/v2/mysql/manifests/latest: no basic auth credentials

It correctly failed because you provided no credentials. This means that your Docker registry authorizes requests correctly.

To log in to the registry, run the following command:

  • sudo docker login registry.example.com

Remember to replace registry.example.com with your domain address. It will prompt you for a username and password. If it shows an error, double check what your htpasswd_file contains. You must define the username and password combination in the htpasswd_file, which you created earlier in this step.

To test the login, you can try to pull again by running the following command:

  • sudo docker pull registry.example.com/mysql

The output will look similar to the following:

Output
Using default tag: latest
latest: Pulling from mysql
Digest: sha256:f2dc118ca6fa4c88cde5889808c486dfe94bccecd01ca626b002a010bb66bcbe
Status: Image is up to date for registry.example.com/mysql:latest

You’ve now configured Docker and can log in securely. To configure Kubernetes to log in to your registry, run the following command:

  • sudo kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/sammy/.docker/config.json --type=kubernetes.io/dockerconfigjson

You will see the following output:

Output
secret/regcred created

This command creates a secret in your cluster with the name regcred, takes the contents of the JSON file where Docker stores the credentials, and parses it as dockerconfigjson, which defines a registry credential in Kubernetes.
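The structure the regcred secret wraps is simple: an auths map keyed by registry host, with a base64-encoded "username:password" pair under auth. The following Python sketch reproduces that structure with placeholder values:

```python
import base64
import json

def docker_config(registry, username, password):
    """Build a minimal dockerconfigjson-style structure (placeholder values)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"auths": {registry: {"auth": token}}}

config = docker_config("registry.example.com", "sammy", "password")
print(json.dumps(config, indent=2))
```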

You’ve used htpasswd to create a login config file, configured the registry to authenticate requests, and created a Kubernetes secret containing the login credentials. Next, you will test the integration between your Kubernetes cluster and registry.

Step 4 — Testing Kubernetes Integration by Running a Sample Deployment

In this step, you’ll run a sample deployment with an image stored in the in-cluster registry to test the connection between your Kubernetes cluster and registry.

In the last step, you created a secret, called regcred, containing login credentials for your private registry. It may contain login credentials for multiple registries, in which case you’ll have to update the Secret accordingly.

You can specify which secret Kubernetes should use when pulling containers in the pod definition by specifying imagePullSecrets. This step is necessary when the Docker registry requires authentication.

You’ll now deploy a sample Hello World image from your private Docker registry to your cluster. First, in order to push it, you’ll pull it to your machine by running the following command:

  • sudo docker pull paulbouwer/hello-kubernetes:1.5

Then, tag it by running:

  • sudo docker tag paulbouwer/hello-kubernetes:1.5 registry.example.com/paulbouwer/hello-kubernetes:1.5

Finally, push it to your registry:

  • sudo docker push registry.example.com/paulbouwer/hello-kubernetes:1.5

Delete it from your machine as you no longer need it locally:

  • sudo docker rmi registry.example.com/paulbouwer/hello-kubernetes:1.5

Now, you’ll deploy the sample Hello World application. First, create a new file, hello-world.yaml, using your text editor:

  • nano hello-world.yaml

Next, you’ll define a Service and an Ingress to make the app accessible from outside the cluster. Add the following lines, replacing the highlighted lines with your domains:

hello-world.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s-test.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: registry.example.com/paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred

First, you define the Ingress for the Hello World deployment, which you will route through the Load Balancer that the Nginx Ingress Controller owns. Then, you define a service that can access the pods created in the deployment. In the actual deployment spec, you specify the image as the one located in your registry and set imagePullSecrets to regcred, which you created in the previous step.

Save and close the file. To deploy this to your cluster, run the following command:

  • kubectl apply -f hello-world.yaml

You’ll see the following output:

Output
ingress.extensions/hello-kubernetes-ingress created
service/hello-kubernetes created
deployment.apps/hello-kubernetes created

You can now navigate to your test domain — the second A record, k8s-test.example.com in this tutorial. You will see the Kubernetes Hello world! page.

Hello World page

The Hello World page lists some environment information, like the Linux kernel version and the internal ID of the pod the request was served from. You can also access your Space via the web interface to see the images you’ve worked with in this tutorial.

If you want to delete this Hello World deployment after testing, run the following command:

  • kubectl delete -f hello-world.yaml

You’ve created a sample Hello World deployment to test if Kubernetes is properly pulling images from your private registry.

Conclusion

You have now successfully deployed your own private Docker registry on your DigitalOcean Kubernetes cluster, using DigitalOcean Spaces as the storage layer underneath. There is no limit to how many images you can store: Spaces can extend infinitely, while providing the same security and robustness. In production, though, you should always strive to optimize your Docker images as much as possible; take a look at the How To Optimize Docker Images for Production tutorial.


How to Set Up a Scalable Django App with DigitalOcean Managed Databases and Spaces

Introduction

Django is a powerful web framework that can help you get your Python application or website off the ground quickly. It includes several convenient features like an object-relational mapper, a Python API, and a customizable administrative interface for your application. It also includes a caching framework and encourages clean app design through its URL Dispatcher and Template system.

Out of the box, Django includes a minimal web server for testing and local development, but it should be paired with a more robust serving infrastructure for production use cases. Django is often rolled out with an Nginx web server to handle static file requests and HTTPS redirection, and a Gunicorn WSGI server to serve the app.

In this guide, we will augment this setup by offloading static files like Javascript and CSS stylesheets to DigitalOcean Spaces, and optionally delivering them using a Content Delivery Network, or CDN, which stores these files closer to end users to reduce transfer times. We’ll also use a DigitalOcean Managed PostgreSQL database as our data store to simplify the data layer and avoid having to manually configure a scalable PostgreSQL database.

Prerequisites

Before you begin with this guide, you should have the following available to you:

Step 1 — Installing Packages from the Ubuntu Repositories

To begin, we’ll download and install all of the items we need from the Ubuntu repositories. We’ll use the Python package manager pip to install additional components a bit later.

We need to first update the local apt package index and then download and install the packages.

In this guide, we’ll use Django with Python 3. To install the necessary libraries, log in to your server and type:

  • sudo apt update
  • sudo apt install python3-pip python3-dev libpq-dev curl postgresql-client

This will install pip, the Python development files needed to build Gunicorn, the libpq header files needed to build the Psycopg PostgreSQL Python adapter, and the PostgreSQL command-line client.

Hit Y and then ENTER when prompted to begin downloading and installing the packages.

Next, we’ll configure the database to work with our Django app.

Step 2 — Creating the PostgreSQL Database and User

We’ll now create a database and database user for our Django application.

To begin, grab the Connection Parameters for your cluster by navigating to Databases from the Cloud Control Panel, and clicking into your database. You should see a Connection Details box containing some parameters for your cluster. Note these down.

Back on the command line, log in to your cluster using these credentials and the psql PostgreSQL client we just installed:

  • psql -U doadmin -h host -p port -d database

When prompted, enter the password displayed alongside the doadmin Postgres username, and hit ENTER.

You will be given a PostgreSQL prompt from which you can manage the database.

First, create a database for your project called polls:

  • CREATE DATABASE polls;

Note: Every Postgres statement must end with a semicolon, so make sure that your command ends with one if you are experiencing issues.

We can now switch to the polls database:

  • \c polls;

Next, create a database user for the project. Make sure to select a secure password:

  • CREATE USER myprojectuser WITH PASSWORD 'password';

We’ll now modify a few of the connection parameters for the user we just created. This will speed up database operations so that the correct values do not have to be queried and set each time a connection is established.

We are setting the default encoding to UTF-8, which Django expects. We are also setting the default transaction isolation scheme to “read committed”, which blocks reads from uncommitted transactions. Lastly, we are setting the timezone. By default, our Django projects will be set to use UTC. These are all recommendations from the Django project itself.

Enter the following commands at the PostgreSQL prompt:

  • ALTER ROLE myprojectuser SET client_encoding TO 'utf8';
  • ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed';
  • ALTER ROLE myprojectuser SET timezone TO 'UTC';

Now we can give our new user access to administer our new database:

  • GRANT ALL PRIVILEGES ON DATABASE polls TO myprojectuser;

When you are finished, exit out of the PostgreSQL prompt by typing:

  • \q

Your Django app is now ready to connect to and manage this database.

In the next step, we’ll install virtualenv and create a Python virtual environment for our Django project.

Step 3 — Creating a Python Virtual Environment for your Project

Now that we’ve set up our database to work with our application, we’ll create a Python virtual environment that will isolate this project’s dependencies from the system’s global Python installation.

To do this, we first need access to the virtualenv command. We can install this with pip.

Upgrade pip and install the package by typing:

  • sudo -H pip3 install --upgrade pip
  • sudo -H pip3 install virtualenv

With virtualenv installed, we can create a directory to store our Python virtual environments and make one to use with the Django polls app.

Create a directory called envs and navigate into it:

  • mkdir envs
  • cd envs

Within this directory, create a Python virtual environment called polls by typing:

  • virtualenv polls

This will create a directory called polls within the envs directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for our project.

Before we install our project’s Python requirements, we need to activate the virtual environment. You can do that by typing:

  • source polls/bin/activate

Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (polls)user@host:~/envs$ .

With your virtual environment active, install Django, Gunicorn, and the psycopg2 PostgreSQL adaptor with the local instance of pip:

Note: When the virtual environment is activated (when your prompt has (polls) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment’s copy of the tool is always named pip, regardless of the Python version.

  • pip install django gunicorn psycopg2-binary

You should now have all of the software you need to run the Django polls app. In the next step, we’ll create a Django project and install this app.

Step 4 — Creating the Polls Django Application

We can now set up our sample application. In this tutorial, we’ll use the Polls demo application from the Django documentation. It consists of a public site that allows users to view polls and vote in them, and an administrative control panel that allows the admin to modify, create, and delete polls.

In this guide, we’ll skip through the tutorial steps, and simply clone the final application from the DigitalOcean Community django-polls repo.

If you’d like to complete the steps manually, create a directory called django-polls in your home directory and navigate into it:

  • cd
  • mkdir django-polls
  • cd django-polls

From there, you can follow the Writing your first Django app tutorial from the official Django documentation. When you’re done, skip to Step 5.

If you just want to clone the finished app, navigate to your home directory and use git to clone the django-polls repo:

  • cd
  • git clone https://github.com/do-community/django-polls.git

cd into it, and list the directory contents:

  • cd django-polls
  • ls

You should see the following objects:

Output
LICENSE README.md manage.py mysite polls templates

manage.py is the main command-line utility used to manipulate the app. polls contains the polls app code, and mysite contains project-scope code and settings. templates contains custom template files for the administrative interface. To learn more about the project structure and files, consult Creating a Project from the official Django documentation.

Before running the app, we need to adjust its default settings and connect it to our database.

Step 5 — Adjusting the App Settings

In this step, we’ll modify the Django project’s default configuration to increase security, connect Django to our database, and collect static files into a local directory.

Begin by opening the settings file in your text editor:

  • nano ~/django-polls/mysite/settings.py

Start by locating the ALLOWED_HOSTS directive. This defines a list of the addresses or domain names that you want to use to connect to the Django instance. An incoming request with a Host header not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.

In the square brackets, list the IP addresses or domain names associated with your Django server. Each item should be listed in quotations with entries separated by a comma. Your list will also include localhost, since you will be proxying connections through a local Nginx instance. If you wish to include requests for an entire domain and any subdomains, prepend a period to the beginning of the entry.

In the snippet below, there are a few commented out examples that demonstrate what these entries should look like:

~/django-polls/mysite/settings.py
. . .

# The simplest case: just add the domain name(s) and IP addresses of your Django server
# ALLOWED_HOSTS = [ 'example.com', '203.0.113.5']
# To respond to 'example.com' and any subdomains, start the domain with a dot
# ALLOWED_HOSTS = ['.example.com', '203.0.113.5']
ALLOWED_HOSTS = ['your_server_domain_or_IP', 'second_domain_or_IP', . . ., 'localhost']

. . .
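The matching rule behind the leading-dot syntax can be sketched in a few lines. This is a simplified illustration of the documented behavior (Django’s actual implementation also handles ports and the `*` wildcard):

```python
def host_allowed(host, allowed):
    """Return True if host matches any ALLOWED_HOSTS-style pattern."""
    for pattern in allowed:
        if pattern.startswith("."):
            # A leading dot matches the domain itself and any subdomain.
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False

allowed = ['.example.com', '203.0.113.5', 'localhost']
print(host_allowed("www.example.com", allowed))  # True
print(host_allowed("evil.com", allowed))         # False
```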

Next, find the section of the file that configures database access. It will start with DATABASES. The configuration in the file is for a SQLite database. We already created a PostgreSQL database for our project, so we need to adjust these settings.

We will tell Django to use the psycopg2 database adaptor we installed with pip, instead of the default SQLite engine. We’ll also reuse the Connection Parameters referenced in Step 2. You can always find this information from the Managed Databases section of the DigitalOcean Cloud Control Panel.

Update the file with your database settings: the database name (polls), the database username, the database user’s password, and the database host and port. Be sure to replace the database-specific values with your own information:

~/django-polls/mysite/settings.py
. . .

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'polls',
        'USER': 'myprojectuser',
        'PASSWORD': 'password',
        'HOST': 'managed_db_host',
        'PORT': 'managed_db_port',
    }
}

. . .

Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. This is necessary so that Nginx can handle requests for these items. The following line tells Django to place them in a directory called static in the base project directory:

~/django-polls/mysite/settings.py
. . .

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')

Save and close the file when you are finished.

At this point, you’ve configured the Django project’s database, security, and static files settings. If you followed the polls tutorial from the start and did not clone the GitHub repo, you can move on to Step 6. If you cloned the GitHub repo, there remains one additional step.

The Django settings file contains a SECRET_KEY variable that is used to create hashes for various Django objects. It’s important that it is set to a unique, unpredictable value. The SECRET_KEY variable has been scrubbed from the GitHub repository, so we’ll create a new one using a function built-in to the django Python package called get_random_secret_key(). From the command line, open up a Python interpreter:

  • python

You should see the following output and prompt:

Output
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Import the get_random_secret_key function from the Django package, then call the function:

  • from django.core.management.utils import get_random_secret_key
  • get_random_secret_key()

Copy the resulting key to your clipboard.

Exit the Python interpreter by pressing CTRL+D.
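If you prefer a dependency-free alternative, a key of the same shape can be generated with Python’s standard secrets module. This is a sketch that mimics the output of Django’s helper (50 characters drawn from lowercase letters, digits, and punctuation); Django’s own get_random_secret_key() remains the canonical approach:

```python
import secrets

# Character set matching what Django's helper draws from.
chars = "abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)"

# Build a 50-character key using a cryptographically secure RNG.
secret_key = "".join(secrets.choice(chars) for _ in range(50))
print(secret_key)
```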

Next, open up the settings file in your text editor once again:

  • nano ~/django-polls/mysite/settings.py

Locate the SECRET_KEY variable and paste in the key you just generated:

~/django-polls/mysite/settings.py
. . .

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'your_secret_key_here'

. . .

Save and close the file.

We’ll now test the app locally using the Django development server to ensure that everything’s been correctly configured.

Step 6 — Testing the App

Before we run the Django development server, we need to use the manage.py utility to create the database schema and collect static files into the STATIC_ROOT directory.

Navigate into the project’s base directory, and create the initial database schema in our PostgreSQL database using the makemigrations and migrate commands:

  • cd django-polls
  • ./manage.py makemigrations
  • ./manage.py migrate

makemigrations will create the migrations, or database schema changes, based on the changes made to Django models. migrate will apply these migrations to the database schema. To learn more about migrations in Django, consult Migrations from the official Django documentation.

Create an administrative user for the project by typing:

  • ./manage.py createsuperuser

You will have to select a username, provide an email address, and choose and confirm a password.

We can collect all of the static content into the directory location we configured by typing:

  • ./manage.py collectstatic

The static files will then be placed in a directory called static within your project directory.

If you followed the initial server setup guide, you should have a UFW firewall protecting your server. In order to test the development server, we’ll have to allow access to the port we’ll be using.

Create an exception for port 8000 by typing:

  • sudo ufw allow 8000

Testing the App Using the Django Development Server

Finally, you can test your project by starting the Django development server with this command:

  • ./manage.py runserver 0.0.0.0:8000

In your web browser, visit your server’s domain name or IP address followed by :8000 and the polls path:

  • http://server_domain_or_IP:8000/polls

You should see the Polls app interface:

Polls App Interface

To check out the admin interface, visit your server’s domain name or IP address followed by :8000 and the administrative interface’s path:

  • http://server_domain_or_IP:8000/admin

You should see the Polls app admin authentication window:

Polls Admin Auth Page

Enter the administrative username and password you created with the createsuperuser command.

After authenticating, you can access the Polls app’s administrative interface:

Polls Admin Main Interface

When you are finished exploring, hit CTRL-C in the terminal window to shut down the development server.

Testing the App Using Gunicorn

The last thing we want to do before offloading static files is test Gunicorn to make sure that it can serve the application. We can do this by entering our project directory and using gunicorn to load the project’s WSGI module:

  • gunicorn --bind 0.0.0.0:8000 mysite.wsgi

This will start Gunicorn on the same interface that the Django development server was running on. You can go back and test the app again.

Note: The admin interface will not have any of the styling applied since Gunicorn does not know how to find the static CSS content responsible for this.

We passed Gunicorn a module by specifying the relative directory path to Django’s wsgi.py file, the entry point to our application. This file defines a function called application, which Gunicorn uses to communicate with the application. To learn more about the WSGI specification, click here.

When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

We’ll now offload the application’s static files to DigitalOcean Spaces.

Step 7 — Offloading Static Files to DigitalOcean Spaces

At this point, Gunicorn can serve our Django application but not its static files. Usually we’d configure Nginx to serve these files, but in this tutorial we’ll offload them to DigitalOcean Spaces using the django-storages plugin. This allows you to easily scale Django by centralizing its static content and freeing up server resources. In addition, you can deliver this static content using the DigitalOcean Spaces CDN.

For a full guide on offloading Django static files to Object storage, consult How to Set Up Object Storage with Django.

Installing and Configuring django-storages

We’ll begin by installing the django-storages Python package. The django-storages package provides Django with the S3Boto3Storage storage backend that uses the boto3 library to upload files to any S3-compatible object storage service.

To start, install the django-storages and boto3 Python packages using pip:

  • pip install django-storages boto3

Next, open your app’s Django settings file again:

  • nano ~/django-polls/mysite/settings.py

Navigate down to the INSTALLED_APPS section of the file, and append storages to the list of installed apps:

~/django-polls/mysite/settings.py
. . .

INSTALLED_APPS = [
    . . .
    'django.contrib.staticfiles',
    'storages',
]

. . .

Scroll further down the file to the STATIC_URL setting we previously modified. We’ll now overwrite these values and append new S3Boto3Storage backend parameters. Delete the code you entered earlier and add the following blocks, which include access and location information for your Space. Remember to replace the highlighted values here with your own information:

~/django-polls/mysite/settings.py
. . .

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/

AWS_ACCESS_KEY_ID = 'your_spaces_access_key'
AWS_SECRET_ACCESS_KEY = 'your_spaces_secret_key'

AWS_STORAGE_BUCKET_NAME = 'your_space_name'
AWS_S3_ENDPOINT_URL = 'spaces_endpoint_URL'
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
AWS_LOCATION = 'static'
AWS_DEFAULT_ACL = 'public-read'

STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

STATIC_URL = '{}/{}/'.format(AWS_S3_ENDPOINT_URL, AWS_LOCATION)
STATIC_ROOT = 'static/'

We define the following configuration items:

  • AWS_ACCESS_KEY_ID: The Access Key ID for the Space, which you created in the tutorial prerequisites. If you didn’t create a set of Access Keys, consult Sharing Access to Spaces with Access Keys.
  • AWS_SECRET_ACCESS_KEY: The secret key for the DigitalOcean Space.
  • AWS_STORAGE_BUCKET_NAME: Your DigitalOcean Space name.
  • AWS_S3_ENDPOINT_URL: The endpoint URL used to access the object storage service. For DigitalOcean, this will be something like https://nyc3.digitaloceanspaces.com, depending on the Space region.
  • AWS_S3_OBJECT_PARAMETERS: Sets the cache control headers on static files.
  • AWS_LOCATION: Defines a directory within the object storage bucket where all static files will be placed.
  • AWS_DEFAULT_ACL: Defines the access control list (ACL) for the static files. Setting it to public-read ensures that the files are publicly accessible to end users.
  • STATICFILES_STORAGE: Sets the storage backend Django will use to offload static files. This backend should work with any S3-compatible backend, including DigitalOcean Spaces.
  • STATIC_URL: Specifies the base URL that Django should use when generating URLs for static files. Here, we combine the endpoint URL and the static files subdirectory to construct a base URL for static files.
  • STATIC_ROOT: Specifies where to collect static files locally before copying them to object storage.
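To see how the STATIC_URL value above is assembled, here is the same expression evaluated with sample values (substitute your own Space’s region endpoint):

```python
# Sample values; replace with your Space's actual endpoint and location.
AWS_S3_ENDPOINT_URL = "https://nyc3.digitaloceanspaces.com"
AWS_LOCATION = "static"

# Django will prepend this base URL to every static file path it generates.
STATIC_URL = "{}/{}/".format(AWS_S3_ENDPOINT_URL, AWS_LOCATION)
print(STATIC_URL)  # https://nyc3.digitaloceanspaces.com/static/
```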

From now on, when you run collectstatic, Django will upload your app’s static files to the Space. When you start Django, it’ll begin serving static assets like CSS and JavaScript from this Space.

Before we test that this is all functioning correctly, we need to configure Cross-Origin Resource Sharing (CORS) headers for our Spaces files or access to certain static assets may be denied by your web browser.

Configuring CORS Headers

CORS headers tell the web browser that an application running at one domain can access scripts or resources located at another. In this case, we need to allow cross-origin resource sharing for our Django server’s domain so that requests for static files in the Space are not denied by the web browser.

To begin, navigate to the Settings page of your Space using the Cloud Control Panel:

Screenshot of the Settings tab

In the CORS Configurations section, click Add.

CORS advanced settings

Here, under Origin, enter the wildcard origin, *.

Warning: When you deploy your app into production, be sure to change this value to your exact origin domain (including the http:// or https:// protocol). Leaving this as the wildcard origin is insecure, and we do this here only for testing purposes since setting the origin to http://example.com:8000 (using a nonstandard port) is currently not supported.

Under Allowed Methods, select GET.

Click on Add Header, and in text box that appears, enter Access-Control-Allow-Origin.

Set Access Control Max Age to 600 so that the header we just created expires every 10 minutes.

Click Save Options.

From now on, objects in your Space will contain the appropriate Access-Control-Allow-Origin response headers, allowing modern secure web browsers to fetch these files across domains.

At this point, you can optionally enable the CDN for your Space, which will serve these static files from a distributed network of edge servers. To learn more about CDNs, consult Using a CDN to Speed Up Static Content Delivery. This can significantly improve web performance. If you don’t want to enable the CDN for your Space, skip ahead to the next section, Testing Spaces Static File Delivery.

Enabling CDN (Optional)

To activate static file delivery via the DigitalOcean Spaces CDN, begin by enabling the CDN for your DigitalOcean Space. To learn how to do this, consult How to Enable the Spaces CDN from the DigitalOcean product documentation.

Once you’ve enabled the CDN for your Space, navigate to it using the Cloud Control Panel. You should see a new Endpoints link under your Space name:

List of Space Endpoints

These endpoints should contain your Space name.

Notice the addition of a new Edge endpoint. This endpoint routes requests for Spaces objects through the CDN, serving them from the edge cache as much as possible. Note down this Edge endpoint, as we’ll use it to configure the django-storages plugin.

Next, edit your app’s Django settings file once again:

  • nano ~/django-polls/mysite/settings.py

Navigate down to the Static Files section we recently modified. Add the AWS_S3_CUSTOM_DOMAIN parameter to configure the django-storages plugin CDN endpoint and update the STATIC_URL parameter to use this new CDN endpoint:

~/django-polls/mysite/settings.py
. . .

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/

# Moving static assets to DigitalOcean Spaces as per:
# https://www.digitalocean.com/community/tutorials/how-to-set-up-object-storage-with-django
AWS_ACCESS_KEY_ID = 'your_spaces_access_key'
AWS_SECRET_ACCESS_KEY = 'your_spaces_secret_key'

AWS_STORAGE_BUCKET_NAME = 'your_space_name'
AWS_S3_ENDPOINT_URL = 'spaces_endpoint_URL'
AWS_S3_CUSTOM_DOMAIN = 'spaces_edge_endpoint_URL'
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
AWS_LOCATION = 'static'
AWS_DEFAULT_ACL = 'public-read'

STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

STATIC_URL = '{}/{}/'.format(AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
STATIC_ROOT = 'static/'

Here, replace the spaces_edge_endpoint_URL with the Edge endpoint you just noted down, truncating the https:// prefix. For example, if the Edge endpoint URL is https://example.sfo2.cdn.digitaloceanspaces.com, AWS_S3_CUSTOM_DOMAIN should be set to example.sfo2.cdn.digitaloceanspaces.com.
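As a quick sanity check, you can reproduce how the STATIC_URL value is assembled from these two settings. This is a standalone sketch using the example Edge endpoint above, not your real Space:

```python
# Standalone sketch: how settings.py builds STATIC_URL from the CDN settings.
# The domain is the example Edge endpoint from this tutorial, not a real Space.
AWS_S3_CUSTOM_DOMAIN = 'example.sfo2.cdn.digitaloceanspaces.com'
AWS_LOCATION = 'static'

STATIC_URL = '{}/{}/'.format(AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
print(STATIC_URL)  # example.sfo2.cdn.digitaloceanspaces.com/static/
```

Note that the scheme (https://) is intentionally omitted from AWS_S3_CUSTOM_DOMAIN, which is why the instructions above have you truncate it.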

When you’re done, save and close the file.

When you start Django, it will now serve static content using the CDN for your DigitalOcean Space.

Testing Spaces Static File Delivery

We’ll now test that Django is correctly serving static files from our DigitalOcean Space.

Navigate to your Django app directory:

  • cd ~/django-polls

From here, run collectstatic to collect and upload static files to your DigitalOcean Space:

  • python manage.py collectstatic

You should see the following output:

Output
You have requested to collect static files at the destination
location as specified in your settings.

This will overwrite existing files!
Are you sure you want to do this?

Type 'yes' to continue, or 'no' to cancel:

Type yes and hit ENTER to confirm.

You should then see output like the following:

Output
121 static files copied.

This confirms that Django successfully uploaded the polls app static files to your Space. You can navigate to your Space using the Cloud Control Panel, and inspect the files in the static directory.

Next, we’ll verify that Django is rewriting the appropriate URLs.

Start the Gunicorn server:

  • gunicorn --bind 0.0.0.0:8000 mysite.wsgi

In your web browser, visit your server’s domain name or IP address followed by :8000 and /admin:

http://server_domain_or_IP:8000/admin 

You should once again see the Polls app admin authentication window, this time with correct styling.

Now, use your browser’s developer tools to inspect the page contents and reveal the source file storage locations.

To do this using Google Chrome, right-click the page, and select Inspect.

You should see the following window:

Chrome Dev Tools Window

From here, click on Sources in the toolbar. In the list of source files in the left-hand pane, you should see /admin/login under your Django server’s domain, and static/admin under your Space’s CDN endpoint. Within static/admin, you should see both the css and fonts directories.

This confirms that CSS stylesheets and fonts are correctly being served from your Space’s CDN.

When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.

You can disable your active Python virtual environment by entering deactivate:

  • deactivate

Your prompt should return to normal.

At this point you’ve successfully offloaded static files from your Django server, and are serving them from object storage. We can now move on to configuring Gunicorn to start automatically as a system service.

Step 8 — Creating systemd Socket and Service Files for Gunicorn

In Step 6 we tested that Gunicorn can interact with our Django application, but we should implement a more robust way of starting and stopping the application server. To accomplish this, we’ll make systemd service and socket files.

The Gunicorn socket will be created at boot and will listen for connections. When a connection occurs, systemd will automatically start the Gunicorn process to handle the connection.

Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:

  • sudo nano /etc/systemd/system/gunicorn.socket

Inside, we will create a [Unit] section to describe the socket, a [Socket] section to define the socket location, and an [Install] section to make sure the socket is created at the right time. Add the following code to the file:

/etc/systemd/system/gunicorn.socket
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target

Save and close the file when you are finished.

Next, create and open a systemd service file for Gunicorn with sudo privileges in your text editor. The service filename should match the socket filename with the exception of the extension:

  • sudo nano /etc/systemd/system/gunicorn.service

Start with the [Unit] section, which specifies metadata and dependencies. We’ll put a description of our service here and tell the init system to only start this after the networking target has been reached. Because our service relies on the socket from the socket file, we need to include a Requires directive to indicate that relationship:

/etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

Next, we’ll open up the [Service] section. We’ll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We’ll give group ownership to the www-data group so that Nginx can communicate easily with Gunicorn.

We’ll then map out the working directory and specify the command to use to start the service. In this case, we’ll have to specify the full path to the Gunicorn executable, which is installed within our virtual environment. We will bind the process to the Unix socket we created within the /run directory so that the process can communicate with Nginx. We log all data to standard output so that the journald process can collect the Gunicorn logs. We can also specify any optional Gunicorn tweaks here, like the number of worker processes. Here, we run Gunicorn with 3 worker processes.
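The worker count of 3 used below follows a rule of thumb from the Gunicorn documentation: (2 × number of CPU cores) + 1, which yields 3 on a single-core server. A small sketch of that calculation:

```python
# Rule of thumb from the Gunicorn docs: (2 x cores) + 1 worker processes.
import multiprocessing

def suggested_workers(cores=None):
    """Return a reasonable Gunicorn worker count for the given core count."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1

print(suggested_workers(cores=1))  # 3 -- matches --workers 3 on a 1-core server
```

Treat this as a starting point; the right worker count for your app depends on its workload, so tune it based on observed performance.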

Add the following Service section to the file. Be sure to replace the username listed here with your own username:

/etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/django-polls
ExecStart=/home/sammy/envs/polls/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          mysite.wsgi:application

Finally, we’ll add an [Install] section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:

/etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/django-polls
ExecStart=/home/sammy/envs/polls/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          mysite.wsgi:application

[Install]
WantedBy=multi-user.target

With that, our systemd service file is complete. Save and close it now.

We can now start and enable the Gunicorn socket. This will create the socket file at /run/gunicorn.sock now and at boot. When a connection is made to that socket, systemd will automatically start the gunicorn.service to handle it:

  • sudo systemctl start gunicorn.socket
  • sudo systemctl enable gunicorn.socket

We can confirm that the operation was successful by checking for the socket file.

Checking for the Gunicorn Socket File

Check the status of the process to find out whether it started successfully:

  • sudo systemctl status gunicorn.socket

You should see the following output:

Output
Failed to dump process list, ignoring: No such file or directory
● gunicorn.socket - gunicorn socket
   Loaded: loaded (/etc/systemd/system/gunicorn.socket; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-05 19:19:16 UTC; 1h 22min ago
   Listen: /run/gunicorn.sock (Stream)
   CGroup: /system.slice/gunicorn.socket

Mar 05 19:19:16 django systemd[1]: Listening on gunicorn socket.

Next, check for the existence of the gunicorn.sock file within the /run directory:

  • file /run/gunicorn.sock
Output
/run/gunicorn.sock: socket

If the systemctl status command indicated that an error occurred, or if you do not find the gunicorn.sock file in the directory, it’s an indication that the Gunicorn socket was not created correctly. Check the Gunicorn socket’s logs by typing:

  • sudo journalctl -u gunicorn.socket

Take another look at your /etc/systemd/system/gunicorn.socket file to fix any problems before continuing.

Testing Socket Activation

Currently, if you’ve only started the gunicorn.socket unit, the gunicorn.service will not be active, since the socket has not yet received any connections. You can check this by typing:

  • sudo systemctl status gunicorn
Output
● gunicorn.service - gunicorn daemon
   Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

To test the socket activation mechanism, we can send a connection to the socket through curl by typing:

  • curl --unix-socket /run/gunicorn.sock localhost

You should see the HTML output from your application in the terminal. This indicates that Gunicorn has started and is able to serve your Django application. You can verify that the Gunicorn service is running by typing:

  • sudo systemctl status gunicorn
Output
● gunicorn.service - gunicorn daemon
   Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-05 20:43:56 UTC; 1s ago
 Main PID: 19074 (gunicorn)
    Tasks: 4 (limit: 4915)
   CGroup: /system.slice/gunicorn.service
           ├─19074 /home/sammy/envs/polls/bin/python3 /home/sammy/envs/polls/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock mysite.wsgi:application
           ├─19098 /home/sammy/envs/polls/bin/python3 /home/sammy/envs/polls/bin/gunicorn
. . .

Mar 05 20:43:56 django systemd[1]: Started gunicorn daemon.
Mar 05 20:43:56 django gunicorn[19074]: [2019-03-05 20:43:56 +0000] [19074] [INFO] Starting gunicorn 19.9.0
. . .
Mar 05 20:44:15 django gunicorn[19074]: - - [05/Mar/2019:20:44:15 +0000] "GET / HTTP/1.1" 301 0 "-" "curl/7.58.0"

If the output from curl or the output of systemctl status indicates that a problem occurred, check the logs for additional details:

  • sudo journalctl -u gunicorn

You can also check your /etc/systemd/system/gunicorn.service file for problems. If you make changes to this file, be sure to reload the daemon to reread the service definition and restart the Gunicorn process:

  • sudo systemctl daemon-reload
  • sudo systemctl restart gunicorn

Make sure you troubleshoot any issues before continuing on to configuring the Nginx server.

Step 9 — Configuring Nginx HTTPS and Gunicorn Proxy Passing

Now that Gunicorn is set up in a more robust fashion, we need to configure Nginx to encrypt connections and hand off traffic to the Gunicorn process.

If you followed the prerequisites and set up Nginx with Let’s Encrypt, you should already have a server block file corresponding to your domain available to you in Nginx’s sites-available directory. If not, follow How To Secure Nginx with Let’s Encrypt on Ubuntu 18.04 and return to this step.

Before we edit this example.com server block file, we’ll first remove the default server block file that gets rolled out by default after installing Nginx:

  • sudo rm /etc/nginx/sites-enabled/default

We’ll now modify the example.com server block file to pass traffic to Gunicorn instead of the default index.html page configured in the prerequisite step.

Open the server block file corresponding to your domain in your editor:

  • sudo nano /etc/nginx/sites-available/example.com

You should see something like the following:

/etc/nginx/sites-available/example.com
server {

        root /var/www/example.com/html;
        index index.html index.htm index.nginx-debian.html;

        server_name example.com www.example.com;

        location / {
                try_files $uri $uri/ =404;
        }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        listen 80;
        listen [::]:80;

        server_name example.com www.example.com;
    return 404; # managed by Certbot

}

This is a combination of the default server block file created in How to Install Nginx on Ubuntu 18.04 as well as additions appended automatically by Let’s Encrypt. We are going to delete the contents of this file and write a new configuration that redirects HTTP traffic to HTTPS, and forwards incoming requests to the Gunicorn socket we created in the previous step.

If you’d like, you can make a backup of this file using cp. Quit your text editor and create a backup called example.com.old:

  • sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/example.com.old

Now, reopen the file and delete its contents. We’ll build the new configuration block by block.

Begin by pasting in the following block, which redirects HTTP requests at port 80 to HTTPS:

/etc/nginx/sites-available/example.com
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://example.com$request_uri;
}

Here we listen for HTTP IPv4 and IPv6 requests on port 80 and send a 301 response header to redirect the request to HTTPS port 443 using the example.com domain. This will also redirect direct HTTP requests to the server’s IP address.

After this block, append the following block of config code that handles HTTPS requests for the example.com domain:

/etc/nginx/sites-available/example.com
. . .

server {
    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    server_name example.com www.example.com;

    # Let's Encrypt parameters
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        proxy_pass         http://unix:/run/gunicorn.sock;
        proxy_redirect     off;

        proxy_set_header   Host              $http_host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto https;
    }
}

Here, we first listen on port 443 for requests hitting the example.com and www.example.com domains.

Next, we provide the same Let’s Encrypt configuration included in the default server block file, which specifies the location of the SSL certificate and private key, as well as some additional security parameters.

The location = /favicon.ico line instructs Nginx to ignore any problems with finding a favicon.

The last location / block instructs Nginx to hand off requests to the Gunicorn socket configured in Step 8. In addition, it adds headers to inform the upstream Django server that a request has been forwarded and to provide it with various request properties.

After you’ve pasted in those two configuration blocks, the final file should look something like this:

/etc/nginx/sites-available/example.com
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://example.com$request_uri;
}

server {
        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;
        server_name example.com www.example.com;

        # Let's Encrypt parameters
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        location = /favicon.ico { access_log off; log_not_found off; }

        location / {
          proxy_pass         http://unix:/run/gunicorn.sock;
          proxy_redirect     off;

          proxy_set_header   Host              $http_host;
          proxy_set_header   X-Real-IP         $remote_addr;
          proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
          proxy_set_header   X-Forwarded-Proto https;
        }
}

Save and close the file when you are finished.

Test your Nginx configuration for syntax errors by typing:

  • sudo nginx -t

If your configuration is error-free, restart Nginx by typing:

  • sudo systemctl restart nginx

You should now be able to visit your server’s domain or IP address to view your application. Your browser should be using a secure HTTPS connection to connect to the Django backend.

To completely secure our Django project, we need to add a couple of security parameters to its settings.py file. Reopen this file in your editor:

  • nano ~/django-polls/mysite/settings.py

Scroll to the bottom of the file, and add the following parameters:

~/django-polls/mysite/settings.py
. . .

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True

These settings tell Django that you have enabled HTTPS on your server, and instruct it to use “secure” cookies. To learn more about these settings, consult the SSL/HTTPS section of Security in Django.
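To see what SECURE_PROXY_SSL_HEADER does, here is a simplified sketch (not Django’s actual source code) of how the setting is consulted when Django decides whether a proxied request arrived over HTTPS:

```python
# Simplified sketch of how Django uses SECURE_PROXY_SSL_HEADER to decide
# whether a proxied request is secure. The tuple matches the setting added
# above, but this function is an illustration, not Django's implementation.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

def is_secure(request_meta):
    """Treat the request as HTTPS when Nginx set X-Forwarded-Proto: https."""
    header, expected = SECURE_PROXY_SSL_HEADER
    return request_meta.get(header) == expected

print(is_secure({'HTTP_X_FORWARDED_PROTO': 'https'}))  # True
print(is_secure({}))                                   # False
```

This is why the Nginx configuration above sets proxy_set_header X-Forwarded-Proto https: without it, Django would see only the plain-HTTP connection from the local socket and, with SECURE_SSL_REDIRECT enabled, could loop redirecting to HTTPS.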

When you’re done, save and close the file.

Finally, restart Gunicorn:

  • sudo systemctl restart gunicorn

At this point, you have configured Nginx to redirect HTTP requests and hand off these requests to Gunicorn. HTTPS should now be fully enabled for your Django project and app. If you’re running into errors, this discussion on troubleshooting Nginx and Gunicorn may help.

Warning: As stated in Configuring CORS Headers, be sure to change the Origin from the wildcard * domain to your domain name (https://example.com in this guide) before making your app accessible to end users.

Conclusion

In this guide, you set up and configured a scalable Django application running on an Ubuntu 18.04 server. This setup can be replicated across multiple servers to create a highly available architecture. Furthermore, this app and its config can be containerized using Docker or another container runtime to ease deployment and scaling. These containers can then be deployed into a container cluster like Kubernetes. In an upcoming tutorial series, we will explore how to containerize and modernize this Django polls app so that it can run in a Kubernetes cluster.

In addition to static files, you may also wish to offload your Django Media files to object storage. To learn how to do this, consult Using Amazon S3 to Store your Django Site’s Static and Media Files. You might also consider compressing static files to further optimize their delivery to end users. To do this, you can use a Django plugin like Django compressor.

DigitalOcean Community Tutorials

How to Speed Up WordPress Asset Delivery Using DigitalOcean Spaces CDN

Introduction

Implementing a CDN, or Content Delivery Network, to deliver your WordPress site’s static assets can greatly decrease your servers’ bandwidth usage as well as speed up page load times for geographically dispersed users. WordPress static assets include images, CSS stylesheets, and JavaScript files. Leveraging a system of edge servers distributed worldwide, a CDN caches copies of your site’s static assets across its network to reduce the distance between end users and this bandwidth-intensive content.

In a previous Solutions guide, How to Store WordPress Assets on DigitalOcean Spaces, we covered offloading a WordPress site’s Media Library (where images and other site content gets stored) to DigitalOcean Spaces, a highly redundant object storage service. We did this using the DigitalOcean Spaces Sync plugin, which automatically syncs WordPress uploads to your Space, allowing you to delete these files from your server and free up disk space.

In this Solutions guide, we’ll extend this procedure by rewriting Media Library asset URLs. This forces users’ browsers to download static assets directly from the DigitalOcean Spaces CDN, a geographically distributed set of cache servers optimized for delivering static content. We’ll go over how to enable the CDN for Spaces, how to rewrite links to serve your WordPress assets from the CDN, and finally how to test that your website’s assets are being correctly delivered by the CDN.

Additionally, we’ll demonstrate how to implement Media Library offload and link rewriting using two popular paid WordPress plugins: WP Offload Media and Media Library Folders Pro. You should choose the plugin that suits your production needs best.

Prerequisites

Before you begin this tutorial, you should have a running WordPress installation on top of a LAMP or LEMP stack. You should also have WP-CLI installed on your WordPress server, which you can learn to set up by following these instructions.

To offload your Media Library, you’ll need a DigitalOcean Space and an access key pair:

  • To learn how to create a Space, consult the Spaces product documentation.
  • To learn how to create an access key pair and upload files to your Space using the open source s3cmd tool, consult s3cmd 2.x Setup, also on the DigitalOcean product documentation site.

There are a few WordPress plugins that you can use to offload your WordPress assets:

  • DigitalOcean Spaces Sync is a free and open-source WordPress plugin for offloading your Media Library to a DigitalOcean Space. You can learn how to do this in How To Store WordPress Assets on DigitalOcean Spaces.
  • WP Offload Media is a paid plugin that copies files from your WordPress Media Library to DigitalOcean Spaces and rewrites URLs to serve the files from the CDN. With the Assets Pull addon, it can identify assets (CSS, JS, images, etc) used by your site (for example by WordPress themes) and also serve these from CDN.
  • Media Library Folders Pro is another paid plugin that helps you organize your Media Library assets, as well as offload them to DigitalOcean Spaces.

For testing purposes, be sure to have a modern web browser such as Google Chrome or Firefox installed on your client (e.g. laptop) computer.

Once you have a running WordPress installation and have created a DigitalOcean Space, you’re ready to enable the CDN for your Space and begin with this guide.

Enabling Spaces CDN

We’ll begin this guide by enabling the CDN for your DigitalOcean Space. This will not affect the availability of existing objects. With the CDN enabled, objects in your Space will be “pushed out” to edge caches across the content delivery network, and a new CDN endpoint URL will be made available to you. To learn more about how CDNs work, consult Using a CDN to Speed Up Static Content Delivery.

First, enable the CDN for your Space by following How to Enable the Spaces CDN.

Navigate back to your Space and reload the page. You should see a new Endpoints link under your Space name:

Endpoints Link

These endpoints should contain your Space name. We’re using wordpress-offload in this tutorial.

Notice the addition of the new Edge endpoint. This endpoint routes requests for Spaces objects through the CDN, serving them from the edge cache as much as possible. Note down this Edge endpoint, which you’ll use to configure your WordPress plugin in future steps.

Now that you have enabled the CDN for your Space, you’re ready to begin configuring your asset offload and link rewriting plugin.

If you’re using DigitalOcean Spaces Sync and continuing from How to Store WordPress Assets on DigitalOcean Spaces, begin reading from the following section. If you’re not using Spaces Sync, skip to either the WP Offload Media section or the Media Library Folders Pro section, depending on the plugin you choose to use.

Spaces Sync Plugin

If you’d like to use the free and open-source DigitalOcean Spaces Sync and CDN Enabler plugins to serve your files from the CDN’s edge caches, follow the steps outlined in this section.

We’ll begin by ensuring that our WordPress installation and Spaces Sync plugin are configured correctly and are serving assets from DigitalOcean Spaces.

Modifying Spaces Sync Plugin Configuration

Continuing from How To Store WordPress Assets on DigitalOcean Spaces, your Media Library should be offloaded to your DigitalOcean Space and your Spaces Sync plugin settings should look as follows:

Sync Cloud Only

We are going to make some minor changes to ensure that our configuration allows us to offload WordPress themes and other directories, beyond the wp-content/uploads Media Library folder.

First, we’re going to modify the Full URL-path to files field so that the Media Library files are served from our Space’s CDN and not locally from the server. This setting essentially rewrites links to Media Library assets, changing them from file links hosted locally on your WordPress server, to file links hosted on the DigitalOcean Spaces CDN.

Recall the Edge endpoint you noted down in the Enabling Spaces CDN step.

In this tutorial, the Space’s name is wordpress-offload and the Space’s CDN endpoint is:

https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com 

Now, in the Spaces Sync plugin settings page, replace the URL in the Full URL-path to files field with your Spaces CDN endpoint, followed by /wp-content/uploads.

In this tutorial, using the above Spaces CDN endpoint, the full URL would be:

https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads 
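Putting these pieces together, a rewritten Media Library URL is simply the CDN endpoint plus the storage prefix and the file’s upload path. A short sketch using this tutorial’s example Space (the filename is hypothetical):

```python
# Sketch: composing the CDN URL for an offloaded Media Library file.
# The endpoint matches this tutorial's example Space; the filename is made up.
cdn_endpoint = 'https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com'
storage_prefix = '/wp-content/uploads'
filename = '2019/03/sample-image.png'  # hypothetical upload

url = '{}{}/{}'.format(cdn_endpoint, storage_prefix, filename)
print(url)
# https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads/2019/03/sample-image.png
```

This mirrors what the plugin does when it rewrites links: the local wp-content/uploads prefix is swapped for the CDN endpoint while the rest of the path stays the same.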

Next, for the Local path field, enter the full path to the wp-content/uploads directory on your WordPress server. In this tutorial, the path to the WordPress installation on the server is /var/www/html/, so the full path to uploads would be /var/www/html/wp-content/uploads.

Note: If you’re continuing from How To Store WordPress Assets on DigitalOcean Spaces, this guide will slightly modify the path to files in your Space to enable you to optionally offload themes and other wp-content assets. You should clear out your Space before doing this, or alternatively you can transfer existing files into the correct wp-content/uploads Space directory using s3cmd.

In the Storage prefix field, we’re going to enter /wp-content/uploads, which will ensure that we build the correct wp-content directory hierarchy so that we can offload other WordPress directories to this Space.

Filemask can remain wildcarded with *, unless you’d like to exclude certain files.

It’s not necessary to check the Store files only in the cloud and delete… option; only check this box if you’d like to delete the Media Library assets from your server after they’ve been successfully uploaded to your DigitalOcean Space.

Your final settings should look something like this:

Final Spaces Sync Settings

Be sure to replace the above values with the values corresponding to your WordPress installation and Spaces configuration.

Finally, hit Save Changes.

You should see a Settings saved box appear at the top of your screen, confirming that the Spaces Sync plugin settings have successfully been updated.

Future WordPress Media Library uploads should now be synced to your DigitalOcean Space, and served using the Spaces Content Delivery Network.

In this step, we did not offload the WordPress theme or other wp-content assets. To learn how to transfer these assets to Spaces and serve them using the Spaces CDN, skip to Offload Additional Assets.

To verify and test that your Media Library uploads are being delivered from the Spaces CDN, skip to Test CDN Caching.

WordPress Offload Media Plugin

The DeliciousBrains WordPress Offload Media plugin allows you to quickly and automatically upload your Media Library assets to DigitalOcean Spaces and rewrite links to these assets so that you can deliver them directly from Spaces or via the Spaces CDN. In addition, the Assets Pull addon allows you to quickly offload additional WordPress assets like JS, CSS, and font files in combination with a pull CDN. Setting up this addon is beyond the scope of this guide, but to learn more you can consult the DeliciousBrains documentation.

We’ll begin by installing and configuring the WP Offload Media plugin for a sample WordPress site.

Installing WP Offload Media Plugin

To begin, you must purchase a copy of the plugin on the DeliciousBrains plugin site. Choose the appropriate version depending on the number of assets in your Media Library, and support and feature requirements for your site.

After going through checkout, you’ll be brought to a post-purchase site with a download link for the plugin and a license key. The download link and license key will also be sent to you at the email address you provided when purchasing the plugin.

Download the plugin and navigate to your WordPress site’s admin interface (https://your_site_url/wp-admin). Log in if necessary. From here, hover over Plugins and click on Add New.

Click Upload Plugin at the top of the page, then Choose File, and select the zip archive you just downloaded.

Click Install Now, and then Activate Plugin. You’ll be brought to WordPress’s plugin admin interface.

From here, navigate to the WP Offload Media plugin’s settings page by clicking Settings under the plugin name.

You’ll be brought to the following screen:

WP Offload Media Configuration

Click the radio button next to DigitalOcean Spaces. You’ll now be prompted to either configure your Spaces Access Key in the wp-config.php file (recommended), or directly in the web interface (the latter will store your Spaces credentials in the WordPress database).

We’ll configure our Spaces Access Key in wp-config.php.

Log in to your WordPress server via the command line, and navigate to your WordPress root directory (in this tutorial, this is /var/www/html). From here, open up wp-config.php in your favorite editor:

  • sudo nano wp-config.php

Scroll down to the line that says /* That's all, stop editing! Happy blogging. */, and before it insert the following lines containing your Spaces Access Key pair (to learn how to generate an access key pair, consult the Spaces product docs):

wp-config.php
. . .

define( 'AS3CF_SETTINGS', serialize( array(
    'provider' => 'do',
    'access-key-id' => 'your_access_key_here',
    'secret-access-key' => 'your_secret_key_here',
) ) );

/* That's all, stop editing! Happy blogging. */

. . .

Once you’re done editing, save and close the file. The changes will take effect immediately.

Back in the WP Offload Media plugin admin interface, select the radio button next to Define access keys in wp-config.php and hit Save Changes.

You should be brought to the following interface:

WP Offload Bucket Selection

On this configuration page, select the appropriate region for your Space using the Region dropdown and enter your Space name next to Bucket (in this tutorial, our Space is called wordpress-offload).

Then, hit Save Bucket.

You’ll be brought to the main WP Offload Media configuration page. At the top you should see the following warning box:

WP Offload License

Click on enter your license key, and on the subsequent page enter the license key found in your email receipt or on the checkout page and hit Activate License.

If you entered your license key correctly, you should see License activated successfully.

Now, navigate back to the main WP Offload Media configuration page by clicking on Media Library at the top of the window.

At this point, WP Offload Media has successfully been configured for use with your DigitalOcean Space. You can now begin offloading assets and delivering them using the Spaces CDN.

Configuring WP Offload Media

Now that you’ve linked WP Offload Media with your DigitalOcean Space, you can begin offloading assets and configuring URL rewriting to deliver media from the Spaces CDN.

You should see the following configuration options on the main WP Offload Media configuration page:

WP Offload Main Nav

These defaults should work fine for most use cases. If your Media Library exists at a nonstandard path within your WordPress directory, enter the path in the text box under the Path option.

If you’d like to change asset URLs so that they are served directly from Spaces and not your WordPress server, ensure the toggle is set to On next to Rewrite Media URLs.

To deliver Media Library assets using the Spaces CDN, ensure you’ve enabled the CDN for your Space (see Enable Spaces CDN to learn how) and have noted down the URL for the Edge endpoint. Hit the toggle next to Custom Domain (CNAME), and in the text box that appears, enter the CDN Edge endpoint URL, without the https:// prefix.

In this guide the Spaces CDN endpoint is:

https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com 

So here we enter:

 wordpress-offload.nyc3.cdn.digitaloceanspaces.com 

To improve security, we’ll force HTTPS for requests to Media Library assets (now served using the CDN) by setting the Force HTTPS toggle to On.

You can optionally clear out files that have been offloaded to Spaces from your WordPress server to free up disk space. To do this, hit On next to Remove Files From Server.

Once you’ve finished configuring WP Offload Media, hit Save Changes at the bottom of the page to save your settings.

The URL Preview box should display a URL containing your Spaces CDN endpoint. It should look something like the following:

https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads/2018/09/21211354/photo.jpg

This URL indicates that WP Offload Media has been successfully configured to deliver Media Library assets using the Spaces CDN. If the path doesn’t contain cdn, ensure that you correctly entered the Edge endpoint URL and not the Origin URL.
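You can also check the rewritten URL from the command line. The sketch below assumes this tutorial's example CDN endpoint and a hypothetical asset path; substitute a URL from your own URL Preview box or Media Library:

```shell
# Fetch only the response headers (-s silent, -I headers only) for an
# offloaded asset and filter for the status line plus the headers that
# identify CDN delivery. The URL is a hypothetical example.
curl -sI "https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads/2018/09/photo.jpg" \
  | grep -iE '^(HTTP|cache-control|content-type)'
```

A 200 status line together with Cache-Control and Content-Type headers indicates the asset is reachable through the CDN endpoint.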

At this point, WP Offload Media has been set up to deliver your Media Library using Spaces CDN. Any future uploads to your Media Library will be automatically copied over to your DigitalOcean Space and served using the CDN.

You can now bulk offload existing assets in your Media Library using the built-in upload tool.

Offloading Media Library

We’ll use the plugin’s built-in “Upload Tool” to offload existing files in our WordPress Media Library.

On the right-hand side of the main WP Offload Media configuration page, you should see the following box:

WP Offload Upload Tool

Click Offload Now to upload your Media Library files to your DigitalOcean Space.

If the upload procedure gets interrupted, the box will change to display the following:

WP Offload Upload Tool 2

Hit Offload Remaining Now to transfer the remaining files to your DigitalOcean Space.

Once you’ve offloaded the remaining items from your Media Library, you should see the following new boxes:

WP Offload Success

At this point you’ve offloaded your Media Library to your Space and are delivering the files to users using the Spaces CDN.

At any point in time, you can download the files back to your WordPress server from your Space by hitting Download Files.

You can also clear out your DigitalOcean Space by hitting Remove Files. Before doing this, ensure that you’ve first downloaded the files back to your WordPress server from Spaces.

In this step, we learned how to offload our WordPress Media Library to DigitalOcean Spaces and rewrite links to these Library assets using the WP Offload Media plugin.

To offload additional WordPress assets like themes and JavaScript files, you can use the Assets Pull addon or consult the Offload Additional Assets section of this guide.

To verify and test that your Media Library uploads are being delivered from the Spaces CDN, skip to Test CDN Caching.

Media Library Folders Pro and CDN Enabler Plugins

The MaxGalleria Media Library Folders Pro plugin is a convenient WordPress plugin that allows you to better organize your WordPress Media Library assets. In addition, the free Spaces addon allows you to bulk offload your Media Library assets to DigitalOcean Spaces, and rewrite URLs to those assets to serve them directly from object storage. You can then enable the Spaces CDN and use the Spaces CDN endpoint to serve your library assets from the distributed delivery network. To accomplish this last step, you can use the CDN Enabler plugin to rewrite CDN endpoint URLs for your Media Library assets.

We’ll begin by installing and configuring the Media Library Folders Pro (MLFP) plugin, as well as the MLFP Spaces addon. We’ll then install and configure the CDN Enabler plugin to deliver Media Library assets using the Spaces CDN.

Installing MLFP Plugin

After purchasing the MLFP plugin, you should have received an email containing your MaxGalleria account credentials as well as a plugin download link. Click on the plugin download link to download the MLFP plugin zip archive to your local computer.

Once you’ve downloaded the archive, log in to your WordPress site’s administration interface (https://your_site_url/wp-admin), and navigate to Plugins and then Add New in the left-hand sidebar.

From the Add Plugins page, click Upload Plugin and then select the zip archive you just downloaded.

Click Install Now to complete the plugin installation, and from the Installing Plugin screen, click Activate Plugin to activate MLFP.

You should then see a Media Library Folders Pro menu item appear in the left-hand sidebar. Click it to go to the Media Library Folders Pro interface. Covering the plugin’s various features is beyond the scope of this guide, but to learn more, you can consult the MaxGalleria site and forums.

We’ll now activate the plugin. Click into Settings under the MLFP menu item, and enter your license key next to the License Key text box. You can find your MLFP license key in the email sent to you when you purchased the plugin. Hit Save Changes and then Activate License. Next, hit Update Settings.

Your MLFP plugin is now active, and you can use it to organize existing or new Media Library assets for your WordPress site.

We’ll now install and configure the Spaces addon plugin so that you can offload and serve these assets from DigitalOcean Spaces.

Installing MLFP Spaces Addon Plugin and Offload Media Library

To install the Spaces Addon, log in to your MaxGalleria account. You can find your account credentials in an email sent to you when you purchased the MLFP plugin.

Navigate to the Addons page in the top menu bar and scroll down to Media Sources. From here, click into the Media Library Folders Pro S3 and Spaces option.

From this page, scroll down to the Pricing section and select the option that suits the size of your WordPress Media Library (for Media Libraries with 3000 images or less, the addon is free).

After completing the addon “purchase,” you can navigate back to your account page (by clicking the Account link in the top menu bar), from which the addon plugin will now be available.

Click on the Media Library Folders Pro S3 image and the plugin download should begin.

Once the download completes, navigate back to your WordPress administration interface, and install the downloaded plugin using the same method as above, by clicking Upload Plugin. Once again, hit Activate Plugin to activate the plugin.

You will likely receive a warning about configuring access keys in your wp-config.php file. We’ll configure these now.

Log in to your WordPress server using the console or SSH, and navigate to your WordPress root directory (in this tutorial, this is /var/www/html). From here, open up wp-config.php in your favorite editor:

  • sudo nano wp-config.php

Scroll down to the line that says /* That's all, stop editing! Happy blogging. */, and before it insert the following lines containing your Spaces Access Key pair and a plugin configuration option (to learn how to generate an access key pair, consult the Spaces product docs):

wp-config.php
. . .

define( 'MF_AWS_ACCESS_KEY_ID', 'your_access_key_here' );
define( 'MF_AWS_SECRET_ACCESS_KEY', 'your_secret_key_here' );
define( 'MF_CLOUD_TYPE', 'do' );

/* That's all, stop editing! Happy blogging. */

. . .

Once you’re done editing, save and close the file.

Now, navigate to your DigitalOcean Space from the Cloud Control Panel, and create a folder called wp-content by clicking on New Folder.

From here, navigate back to the WordPress administration interface, and click into Media Library Folders Pro and then S3 & Spaces Settings in the sidebar.

The warning banner about configuring access keys should now have disappeared. If it’s still present, you should double check your wp-config.php file for any typos or syntax errors.

In the License Key text box, enter the license key that was emailed to you after purchasing the Spaces addon. Note that this license key is different from the MLFP license key. Hit Save Changes and then Activate License.

Once activated, you should see the following configuration pane:

MLFP Spaces Addon Configuration

From here, click Select Image Bucket & Region to select your DigitalOcean Space. Then select the correct region for your Space and hit Save Bucket Selection.

You’ve now successfully connected the Spaces offload plugin to your DigitalOcean Space. You can begin offloading your WordPress Media Library assets.

The Use files on the cloud server checkbox allows you to specify where Media Library assets will be served from. If you check the box, assets will be served from DigitalOcean Spaces, and URLs to images and other Media Library objects will be correspondingly rewritten. If you plan on using the Spaces CDN to serve your Media Library assets, do not check this box, as the plugin will use the Spaces Origin endpoint and not the CDN Edge endpoint. We will configure CDN link rewriting in a future step.

Click the Remove files from local server box to delete local Media Library assets once they’ve been successfully uploaded to DigitalOcean Spaces.

The Remove individual downloaded files from the cloud server checkbox should be used when bulk downloading files from Spaces to your WordPress server. If checked, these files will be deleted from Spaces after successfully downloading to your WordPress server. We can ignore this option for now.

Since we’re configuring the plugin for use with the Spaces CDN, leave the Use files on the cloud server box unchecked, and hit Copy Media Library to the cloud server to sync your site’s WordPress Media Library to your DigitalOcean Space.

You should see a progress box appear, followed by an Upload complete. message indicating that the Media Library sync has concluded successfully.

Navigate to your DigitalOcean Space to confirm that your Media Library files have been copied to your Space. They should be available in the uploads subdirectory of the wp-content directory you created earlier in this step.

Once your files are available in your Space, you’re ready to move on to configuring the Spaces CDN.

Installing CDN Enabler Plugin to Deliver Assets from Spaces CDN

To use the Spaces CDN to serve your now offloaded files, first ensure that you’ve enabled the CDN for your Space.

Once the CDN has been enabled for your Space, you can now install and configure the CDN Enabler WordPress plugin to rewrite links to your Media Library assets. The plugin will rewrite links to these assets so that they are served from the Spaces CDN endpoint.

To install CDN Enabler, you can either use the Plugins menu from the WordPress administration interface, or install the plugin directly from the command line. We’ll demonstrate the latter procedure here.

First, log in to your WordPress server. Then, navigate to your plugins directory:

  • cd /var/www/html/wp-content/plugins

Be sure to replace the above path with the path to your WordPress installation.

From the command line, use the wp-cli interface to install the plugin:

  • wp plugin install cdn-enabler

Now, activate the plugin:

  • wp plugin activate cdn-enabler

Back in the WordPress Admin Area, under Settings, you should see a new link to CDN Enabler settings. Click into CDN Enabler.

You should see the following settings screen:

CDN Enabler Settings

Modify the displayed fields as follows:

  • CDN URL: Enter the Spaces Edge endpoint, which you can find from the Spaces Dashboard. In this tutorial, this is https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com
  • Included Directories: Enter wp-content/uploads. We’ll learn how to serve other wp-content directories in the Offload Additional Assets section.
  • Exclusions: Leave the default .php
  • Relative Path: Leave the box checked
  • CDN HTTPS: Enable it by checking the box
  • Leave the remaining two fields blank

Then, hit Save Changes to save these settings and enable them for your WordPress site.

At this point you’ve successfully offloaded your WordPress site’s Media Library to DigitalOcean Spaces and are serving them to end users using the CDN.

In this step, we did not offload the WordPress theme or other wp-content assets. To learn how to transfer these assets to Spaces and serve them using the Spaces CDN, skip to Offload Additional Assets.

To verify and test that your Media Library uploads are being delivered from the Spaces CDN, skip to Test CDN Caching.

Offloading Additional Assets (Optional)

In previous sections of this guide, we’ve learned how to offload our site’s WordPress Media Library to Spaces and serve these files using the Spaces CDN. In this section, we’ll cover offloading and serving additional WordPress assets like themes, JavaScript files, and fonts.

Most of these static assets live inside the wp-content directory (which contains the themes subdirectory). To offload and rewrite URLs for this directory, we’ll use CDN Enabler, an open-source plugin developed by KeyCDN.

If you’re using the WP Offload Media plugin, you can use the Assets Pull addon to serve these files using a pull CDN. Installing and configuring this addon is beyond the scope of this guide. To learn more, consult the DeliciousBrains product page.

First, we’ll install CDN Enabler. We’ll then copy our WordPress themes over to Spaces, and finally configure CDN Enabler to deliver these using the Spaces CDN.

If you’ve already installed CDN Enabler in a previous step, skip to Step 2.

Step 1 — Installing CDN Enabler

To install CDN Enabler, log in to your WordPress server. Then, navigate to your plugins directory:

  • cd /var/www/html/wp-content/plugins

Be sure to replace the above path with the path to your WordPress installation.

From the command line, use the wp-cli interface to install the plugin:

  • wp plugin install cdn-enabler

Now, activate the plugin:

  • wp plugin activate cdn-enabler

Back in the WordPress Admin Area, under Settings, you should see a new link to CDN Enabler settings. Click into CDN Enabler.

You should see the following settings screen:

CDN Enabler Settings

At this point you’ve successfully installed CDN Enabler. We’ll now upload our WordPress themes to Spaces.

Step 2 — Uploading Static WordPress Assets to Spaces

In this tutorial, to demonstrate a basic plugin configuration, we’re only going to serve wp-content/themes, the WordPress directory containing WordPress themes’ PHP, JavaScript, HTML, and image files. You can optionally extend this process to other WordPress directories, like wp-includes, and even the entire wp-content directory.

The theme used by the WordPress installation in this tutorial is twentyseventeen, the default theme for a fresh WordPress installation at the time of writing. You can repeat these steps for any other theme or WordPress content.

First, we’ll upload our theme to our DigitalOcean Space using s3cmd. If you haven’t yet configured s3cmd, consult the DigitalOcean Spaces Product Documentation.

Navigate to your WordPress installation’s wp-content directory:

  • cd /var/www/html/wp-content

From here, upload the themes directory to your DigitalOcean Space using s3cmd. Note that at this point you can choose to upload only a single theme, but for simplicity and to offload as much content as possible from our server, we will upload all the themes in the themes directory to our Space.

We’ll use find to build a list of non-PHP (therefore cacheable) files, which we’ll then pipe to s3cmd to upload to Spaces. We’ll exclude CSS stylesheets as well in this first command as we need to set the text/css MIME type when uploading them.

  • find themes/ -type f -not \( -name '*.php' -or -name '*.css' \) | xargs -I{} s3cmd put --acl-public {} s3://wordpress-offload/wp-content/{}

Here, we instruct find to search for files within the themes/ directory, and ignore .php and .css files. We then use xargs -I{} to iterate over this list, executing s3cmd put for each file, and set the file’s permissions in Spaces to public using --acl-public.
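Before running the upload, you can dry-run the same find predicate to see exactly which files it would select. The sketch below uses a throwaway directory and made-up file names rather than a real theme:

```shell
# Build a scratch themes/ tree with a mix of file types.
mkdir -p /tmp/themes-demo/themes/twentyseventeen
cd /tmp/themes-demo
touch themes/twentyseventeen/index.php \
      themes/twentyseventeen/style.css \
      themes/twentyseventeen/script.js \
      themes/twentyseventeen/screenshot.png

# Same predicate as the upload command: every file except .php and .css.
find themes/ -type f -not \( -name '*.php' -or -name '*.css' \) | sort
# → themes/twentyseventeen/screenshot.png
# → themes/twentyseventeen/script.js
```

The .php files stay behind because they must execute on the server, and the .css files are excluded here only so they can be uploaded separately with the correct MIME type in the next command.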

Next, we’ll do the same for CSS stylesheets, adding the --mime-type="text/css" flag to set the text/css MIME type for the stylesheets on Spaces. This will ensure that Spaces serves your theme’s CSS files using the correct Content-Type: text/css HTTP header:

  • find themes/ -type f -name '*.css' | xargs -I{} s3cmd put --acl-public --mime-type="text/css" {} s3://wordpress-offload/wp-content/{}

Again, be sure to replace wordpress-offload in the above command with your Space name.

Now that we’ve uploaded our theme, let’s verify that it can be found at the correct path in our Space. Navigate to your Space using the DigitalOcean Cloud Control Panel.

Enter the wp-content directory, followed by the themes directory. You should see your theme’s directory here. If you don’t, verify your s3cmd configuration and re-upload your theme to your Space.

Now that our theme lives in our Space, and we’ve set the correct metadata, we can begin serving its files using CDN Enabler and the DigitalOcean Spaces CDN.

Navigate back to the WordPress Admin Area and click into Settings and then CDN Enabler.

Here, modify the displayed fields as follows:

  • CDN URL: Enter the Spaces Edge endpoint, which you can find from the Spaces Dashboard. In this tutorial, this is https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com
  • Included Directories: If you’re not using the MLFP plugin, this should be wp-content/themes. If you are, this should be wp-content/uploads,wp-content/themes
  • Exclusions: Leave the default .php
  • Relative Path: Leave the box checked
  • CDN HTTPS: Enable it by checking the box
  • Leave the remaining two fields blank

Your final settings should look something like this:

CDN Enabler Final Settings

Hit Save Changes to save these settings and enable them for your WordPress site.

At this point you’ve successfully offloaded your WordPress site’s theme assets to DigitalOcean Spaces and are serving them to end users using the CDN. We can confirm this using Chrome’s DevTools, following the procedure described below.

Using the CDN Enabler plugin, you can repeat this process for other WordPress directories, like wp-includes, and even the entire wp-content directory.

Testing CDN Caching

In this section, we’ll demonstrate how to determine where your WordPress assets are being served from (e.g. your host server or the CDN) using Google Chrome’s DevTools.

Step 1 — Adding Sample Image to Media Library to Test Syncing

To begin, we’ll first upload a sample image to our Media Library, and verify that it’s being served from the DigitalOcean Spaces CDN servers. You can upload an image using the WordPress Admin web interface, or using the wp-cli command-line tool. In this guide, we’ll use wp-cli to upload the sample image.

Log in to your WordPress server using the command line, and navigate to the home directory for the non-root user you’ve configured. In this tutorial, we’ll use the user sammy.

  • cd

From here, use curl to download the DigitalOcean logo to your Droplet (if you already have an image you’d like to test with, skip this step):

  • curl https://assets.digitalocean.com/logos/DO_Logo_horizontal_blue.png > do_logo.png

Now, use wp-cli to import the image to your Media Library:

  • wp media import --path=/var/www/html/ /home/sammy/do_logo.png

Be sure to replace /var/www/html with the correct path to the directory containing your WordPress files.

You may see some warnings, but the output should end in the following:

Output
Imported file '/home/sammy/do_logo.png' as attachment ID 10.
Success: Imported 1 of 1 items.

This output indicates that the test image has been copied to the WordPress Media Library and also uploaded to your DigitalOcean Space by your preferred offload plugin.

Navigate to your DigitalOcean Space to confirm:

Spaces Upload Success
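If you have s3cmd configured, you can also confirm the upload from the command line instead of the control panel. The Space name below is the one assumed throughout this tutorial; replace it with your own:

```shell
# List every object in the Space (-r / --recursive) and filter
# for the test image we just imported with wp-cli.
s3cmd ls --recursive s3://wordpress-offload/ | grep do_logo
```

A matching line in the output confirms the offload plugin synced the file; its exact path depends on which plugin you're using.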

This indicates that your offload plugin is functioning as expected and automatically syncing WordPress uploads to your DigitalOcean Space. Note that the exact path to your Media Library uploads in the Space will depend on the plugin you’re using to offload your WordPress files.

Next, we will verify that this file is being served using the Spaces CDN, and not from the server running WordPress.

Step 2 — Inspecting Asset URL

From the WordPress admin area (https://your_domain/wp-admin), navigate to Pages in the left-hand side navigation menu.

We will create a sample page containing our uploaded image to determine where it’s being served from. You can also run this test by adding the image to an existing page on your WordPress site.

From the Pages screen, click into Sample Page, or any existing page. You can alternatively create a new page.

In the page editor, click on Add Media, and select the DigitalOcean logo (or other image you used to test this procedure).

An Attachment Details pane should appear on the right-hand side of your screen. From this pane, add the image to the page by clicking on Insert into page.

Now, back in the page editor, click on either Publish (if you created a new sample page) or Update (if you added the image to an existing page) in the Publish box on the right-hand side of your screen.

Now that the page has successfully been updated to contain the image, navigate to it by clicking on the Permalink under the page title. You’ll be brought to this page in your web browser.

For the purposes of this tutorial, the following steps will assume that you’re using Google Chrome, but you can use most modern web browsers to run a similar test.

From the rendered page preview in your browser, right click on the image and click on Inspect:

Inspect Menu

A DevTools window should pop up, highlighting the img asset in the page’s HTML:

DevTools Output

You should see the CDN endpoint for your DigitalOcean Space in this URL (in this tutorial, our Spaces CDN endpoint is https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com), indicating that the image asset is being served from the DigitalOcean Spaces CDN edge cache.

This confirms that your Media Library uploads are being synced to your DigitalOcean Space and served using the Spaces CDN.

Step 3 — Inspecting Asset Response Headers

From the DevTools window, we’ll run one final test. Click on Network in the toolbar at the top of the window.

Once in the blank Network window, follow the displayed instructions to reload the page.

The page assets should populate in the window. Locate your test image in the list of page assets:

Chrome DevTools Asset List

Once you’ve located your test image, click into it to open an additional information pane. Within this pane, click on Headers to show the response headers for this asset:

Response Headers

You should see a Cache-Control HTTP header in the response, which confirms that this image was served through the Spaces CDN.
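The same header check can be scripted with curl, which is handy for spot-checking without a browser. The URL below assumes this tutorial's CDN endpoint and a hypothetical upload path; substitute the image URL you copied from DevTools:

```shell
# Request headers only and show the caching-related ones the
# CDN edge returns for the asset.
curl -sI "https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads/do_logo.png" \
  | grep -iE '^(cache-control|etag)'
```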

Step 4 — Inspecting URLs for Theme Assets (Optional)

If you offloaded your wp-themes (or other) directory as described in Offload Additional Assets, you should perform the following brief check to verify that your theme’s assets are being served from the Spaces CDN.

Navigate to your WordPress site in Google Chrome, and right-click anywhere in the page. In the menu that appears, click on Inspect.

You’ll once again be brought to the Chrome DevTools interface.

Chrome DevTools Interface

From here, click into Sources.

In the left-hand pane, you should see a list of your WordPress site’s assets. Scroll down to your CDN endpoint, and expand the list by clicking the small arrow next to the endpoint name:

DevTools Site Asset List

Observe that your WordPress theme’s header image, JavaScript, and CSS stylesheet are now being served from the Spaces CDN.

Conclusion

In this tutorial, we’ve shown how to offload static content from your WordPress server to DigitalOcean Spaces, and serve this content using the Spaces CDN. In most cases, this should reduce bandwidth on your host infrastructure and speed up page loads for end users, especially those located further away geographically from your WordPress server.

We demonstrated how to offload and serve both Media Library and themes assets using the Spaces CDN, but these steps can be extended to further unload the entire wp-content directory, as well as wp-includes.

Implementing a CDN to deliver static assets is just one way to optimize your WordPress installation. Other plugins like W3 Total Cache can further speed up page loads and improve the SEO of your site. A helpful tool to measure your page load speed and improve it is Google’s PageSpeed Insights. Another helpful tool that provides a waterfall breakdown of request and response times as well as suggested optimizations is Pingdom.

To learn more about Content Delivery Networks and how they work, consult Using a CDN to Speed Up Static Content Delivery.

DigitalOcean Community Tutorials