How To Deploy a PHP Application with Kubernetes on Ubuntu 16.04

The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

Introduction

Kubernetes is an open source container orchestration system. It allows you to create, update, and scale containers without worrying about downtime.

To run a PHP application, Nginx acts as a proxy to PHP-FPM. Containerizing this setup in a single container can be a cumbersome process, but Kubernetes will help manage both services in separate containers. Using Kubernetes will allow you to keep your containers reusable and swappable, and you will not have to rebuild your container image every time there’s a new version of Nginx or PHP.

In this tutorial, you will deploy a PHP 7 application on a Kubernetes cluster with Nginx and PHP-FPM running in separate containers. You will also learn how to keep your configuration files and application code outside the container image using DigitalOcean’s Block Storage system. This approach will allow you to reuse the Nginx image for any application that needs a web/proxy server by passing a configuration volume, rather than rebuilding the image.

Prerequisites

Step 1 — Creating the PHP-FPM and Nginx Services

In this step, you will create the PHP-FPM and Nginx services. A service allows access to a set of pods from within the cluster. Services within a cluster can communicate directly through their names, without the need for IP addresses. The PHP-FPM service will allow access to the PHP-FPM pods, while the Nginx service will allow access to the Nginx pods.

Since Nginx pods will proxy the PHP-FPM pods, you will need to tell the service how to find them. Instead of using IP addresses, you will take advantage of Kubernetes’ automatic service discovery to use human-readable names to route requests to the appropriate service.

To create the service, you will create an object definition file. Every Kubernetes object definition is a YAML file that contains at least the following items:

  • apiVersion: The version of the Kubernetes API that the definition belongs to.
  • kind: The Kubernetes object this file represents. For example, a pod or service.
  • metadata: This contains the name of the object along with any labels that you may wish to apply to it.
  • spec: This contains configuration specific to the kind of object you are creating, such as the container image or the ports on which the container will be accessible.

First, you will create a directory to hold your Kubernetes object definitions.

SSH to your master node and create the definitions directory:

  • mkdir definitions

Navigate to the newly created definitions directory:

  • cd definitions

Make your PHP-FPM service by creating a php_service.yaml file:

  • nano php_service.yaml

Set kind as Service to specify that this object is a service:

php_service.yaml
...
apiVersion: v1
kind: Service

Name the service php since it will provide access to PHP-FPM:

php_service.yaml
...
metadata:
  name: php

You will logically group different objects with labels. In this tutorial, you will use labels to group the objects into “tiers”, such as frontend or backend. The PHP pods will run behind this service, so you will label it as tier: backend.

php_service.yaml
...
  labels:
    tier: backend

A service determines which pods to access by using selector labels. A pod that matches these labels will be serviced, independent of whether the pod was created before or after the service. You will add labels for your pods later in the tutorial.

Use the tier: backend label to assign the pod into the backend tier. You will also add the app: php label to specify that this pod runs PHP. Add these two labels after the metadata section.

php_service.yaml
...
spec:
  selector:
    app: php
    tier: backend

Next, specify the port used to access this service. You will use port 9000 in this tutorial. Add it to the php_service.yaml file under spec:

php_service.yaml
...
  ports:
  - protocol: TCP
    port: 9000

Your completed php_service.yaml file will look like this:

php_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php
    tier: backend
  ports:
  - protocol: TCP
    port: 9000

Hit CTRL + o to save the file, and then CTRL + x to exit nano.

Now that you’ve created the object definition for your service, to run the service you will use the kubectl apply command along with the -f argument and specify your php_service.yaml file.

Create your service:

  • kubectl apply -f php_service.yaml

This output confirms the service creation:

Output
service/php created

Verify that your service is running:

  • kubectl get svc

You will see your PHP-FPM service running:

Output
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
php          ClusterIP   10.100.59.238   <none>        9000/TCP   5m

There are various service types that Kubernetes supports. Your php service uses the default service type, ClusterIP. This service type assigns an internal IP and makes the service reachable only from within the cluster.
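
Other service types expose pods differently. For example, a NodePort service opens a port on every node in the cluster. As a hypothetical sketch (not used in this tutorial), the php service’s spec could be changed like this:

```yaml
# Hypothetical variant only -- this tutorial keeps the default ClusterIP type.
spec:
  type: NodePort    # expose the service on a port of every node (30000-32767 by default)
  selector:
    app: php
    tier: backend
  ports:
  - protocol: TCP
    port: 9000      # cluster-internal port; Kubernetes assigns the node port
```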

Now that the PHP-FPM service is ready, you will create the Nginx service. Create and open a new file called nginx_service.yaml with the editor:

  • nano nginx_service.yaml

This service will target Nginx pods, so you will name it nginx. You will also add a tier: backend label as it belongs in the backend tier:

nginx_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend

Similar to the php service, target the pods with the selector labels app: nginx and tier: backend. Make this service accessible on port 80, the default HTTP port.

nginx_service.yaml
...
spec:
  selector:
    app: nginx
    tier: backend
  ports:
  - protocol: TCP
    port: 80

The Nginx service will be publicly accessible to the internet from your Droplet’s public IP address, which you can find in your DigitalOcean Cloud Panel. Replace your_public_ip accordingly. Under spec.externalIPs, add:

nginx_service.yaml
...
spec:
  externalIPs:
  - your_public_ip

Your nginx_service.yaml file will look like this:

nginx_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  selector:
    app: nginx
    tier: backend
  ports:
  - protocol: TCP
    port: 80
  externalIPs:
  - your_public_ip

Save and close the file. Create the Nginx service:

  • kubectl apply -f nginx_service.yaml

You will see the following output when the service is running:

Output
service/nginx created

You can view all running services by executing:

  • kubectl get svc

You will see both the PHP-FPM and Nginx services listed in the output:

Output
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    13m
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     50s
php          ClusterIP   10.100.59.238   <none>           9000/TCP   8m

Please note, if you want to delete a service you can run:

  • kubectl delete svc/service_name

Now that you’ve created your PHP-FPM and Nginx services, you will need to specify where to store your application code and configuration files.

Step 2 — Installing the DigitalOcean Storage Plug-In

Kubernetes provides different storage plug-ins that can create the storage space for your environment. In this step, you will install the DigitalOcean storage plug-in to create block storage on DigitalOcean. Once the installation is complete, it will add a storage class named do-block-storage that you will use to create your block storage.

You will first configure a Kubernetes Secret object to store your DigitalOcean API token. Secret objects are used to share sensitive information, like SSH keys and passwords, with other Kubernetes objects within the same namespace. Namespaces provide a way to logically separate your Kubernetes objects.

Open a file named secret.yaml with the editor:

  • nano secret.yaml

You will name your Secret object digitalocean and add it to the kube-system namespace. The kube-system namespace is the default namespace for Kubernetes’ internal services and is also used by the DigitalOcean storage plug-in to launch various components.

secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system

Instead of a spec key, a Secret uses a data or stringData key to hold the required information. The data parameter holds base64 encoded data that is automatically decoded when retrieved. The stringData parameter holds non-encoded data that is automatically encoded during creation or updates, and does not output the data when retrieving Secrets. You will use stringData in this tutorial for convenience.
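
A quick way to see the encoding Kubernetes applies is to reproduce it locally with the base64 utility. This example uses the placeholder value from this tutorial, not a real token:

```shell
# Encode the value the way Kubernetes stores it under `data`:
printf '%s' 'your-api-token' | base64
# Prints: eW91ci1hcGktdG9rZW4=

# Decode it back, as you would when inspecting a stored Secret:
printf '%s' 'eW91ci1hcGktdG9rZW4=' | base64 -d
# Prints: your-api-token
```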

Add the access-token as stringData:

secret.yaml
...
stringData:
  access-token: your-api-token

Save and exit the file.

Your secret.yaml file will look like this:

secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean
  namespace: kube-system
stringData:
  access-token: your-api-token

Create the secret:

  • kubectl apply -f secret.yaml

You will see this output upon Secret creation:

Output
secret/digitalocean created

You can view the secret with the following command:

  • kubectl -n kube-system get secret digitalocean

The output will look similar to this:

Output
NAME           TYPE     DATA   AGE
digitalocean   Opaque   1      41s

The Opaque type means that this Secret holds arbitrary, unstructured key-value data. You can read more about it on the Secret design spec. The DATA field shows the number of items stored in this Secret. In this case, it shows 1 because you have a single key stored.

Now that your Secret is in place, install the DigitalOcean block storage plug-in:

  • kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.3.0.yaml

You will see output similar to the following:

Output
storageclass.storage.k8s.io/do-block-storage created
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
service/csi-attacher-doplugin created
statefulset.apps/csi-attacher-doplugin created
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
service/csi-provisioner-doplugin created
statefulset.apps/csi-provisioner-doplugin created
serviceaccount/csi-doplugin created
clusterrole.rbac.authorization.k8s.io/csi-doplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-doplugin created
daemonset.apps/csi-doplugin created

Now that you have installed the DigitalOcean storage plug-in, you can create block storage to hold your application code and configuration files.

Step 3 — Creating the Persistent Volume

With your Secret in place and the block storage plug-in installed, you are now ready to create your Persistent Volume. A Persistent Volume, or PV, is block storage of a specified size that lives independently of a pod’s life cycle. Using a Persistent Volume will allow you to manage or update your pods without worrying about losing your application code. A Persistent Volume is accessed by using a PersistentVolumeClaim, or PVC, which mounts the PV at the required path.

Open a file named code_volume.yaml with your editor:

  • nano code_volume.yaml

Name the PVC code by adding the following parameters and values to your file:

code_volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code

The spec for a PVC contains the following items:

  • accessModes, which vary by use case. These are:
    • ReadWriteOnce – mounts the volume as read-write by a single node
    • ReadOnlyMany – mounts the volume as read-only by many nodes
    • ReadWriteMany – mounts the volume as read-write by many nodes
  • resources – the storage space that you require

DigitalOcean block storage is only mounted to a single node, so you will set the accessModes to ReadWriteOnce. This tutorial will guide you through adding a small amount of application code, so 1GB will be plenty in this use case. If you plan on storing a larger amount of code or data on the volume, you can modify the storage parameter to fit your requirements. You can increase the amount of storage after volume creation, but shrinking the disk is not supported.

code_volume.yaml
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
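
If you later need more space, you can raise the request in this file and re-apply it, assuming the storage class supports volume expansion. A hypothetical later edit growing the claim to 10GB:

```yaml
# Hypothetical later edit: grow the claim. Shrinking back down is not supported.
...
  resources:
    requests:
      storage: 10Gi
```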

Next, specify the storage class that Kubernetes will use to provision the volumes. You will use the do-block-storage class created by the DigitalOcean block storage plug-in.

code_volume.yaml
...
  storageClassName: do-block-storage

Your code_volume.yaml file will look like this:

code_volume.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: do-block-storage

Save and exit the file.

Create the code PersistentVolumeClaim using kubectl:

  • kubectl apply -f code_volume.yaml

The following output tells you that the object was successfully created, and you are ready to mount your 1GB PVC as a volume.

Output
persistentvolumeclaim/code created

To view available Persistent Volumes (PV):

  • kubectl get pv

You will see your PV listed:

Output
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS       REASON   AGE
pvc-ca4df10f-ab8c-11e8-b89d-12331aa95b13   1Gi        RWO            Delete           Bound    default/code   do-block-storage            2m

The fields above are an overview of your configuration file, except for Reclaim Policy and Status. The Reclaim Policy defines what is done with the PV after the PVC accessing it is deleted. Delete removes the PV from Kubernetes as well as the DigitalOcean infrastructure. You can learn more about the Reclaim Policy and Status from the Kubernetes PV documentation.
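
If you would rather keep the underlying volume after its claim is deleted, you can create your own storage class with a Retain policy. The sketch below is hypothetical: the name delete-safe-storage is illustrative, and the provisioner value assumes the name registered by version 0.3.0 of the plug-in:

```yaml
# Hypothetical StorageClass that keeps volumes after their claim is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delete-safe-storage             # illustrative name
provisioner: com.digitalocean.csi.dobs  # assumed provisioner name for v0.3.0
reclaimPolicy: Retain
```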

You’ve successfully created a Persistent Volume using the DigitalOcean block storage plug-in. Now that your Persistent Volume is ready, you will create your pods using a Deployment.

Step 4 — Creating a PHP-FPM Deployment

In this step, you will learn how to use a Deployment to create your PHP-FPM pod. Deployments provide a uniform way to create, update, and manage pods by using ReplicaSets. If an update does not work as expected, a Deployment will automatically roll back its pods to a previous image.

The Deployment spec.selector key will list the labels of the pods it will manage. It will also use the template key to create the required pods.

This step will also introduce the use of Init Containers. Init Containers run one or more commands before the regular containers specified under the pod’s template key. In this tutorial, your Init Container will fetch a sample index.php file from GitHub Gist using wget. These are the contents of the sample file:

index.php
<?php
echo phpinfo();

To create your Deployment, open a new file called php_deployment.yaml with your editor:

  • nano php_deployment.yaml

This Deployment will manage your PHP-FPM pods, so you will name the Deployment object php. The pods belong to the backend tier, so you will group the Deployment into this group by using the tier: backend label:

php_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend

For the Deployment spec, you will specify how many copies of this pod to create by using the replicas parameter. The number of replicas will vary depending on your needs and available resources. You will create one replica in this tutorial:

php_deployment.yaml
...
spec:
  replicas: 1

This Deployment will manage pods that match the app: php and tier: backend labels. Under the selector key, add:

php_deployment.yaml
...
  selector:
    matchLabels:
      app: php
      tier: backend

Next, the Deployment spec requires a template for your pod’s object definition. This template defines the specification that the Deployment uses to create each pod. First, you will add the labels that were specified for the php service selectors and the Deployment’s matchLabels. Add app: php and tier: backend under template.metadata.labels:

php_deployment.yaml
...
  template:
    metadata:
      labels:
        app: php
        tier: backend

A pod can have multiple containers and volumes, but each will need a name. You can selectively mount volumes to a container by specifying a mount path for each volume.

First, specify the volumes that your containers will access. You created a PVC named code to hold your application code, so name this volume code as well. Under spec.template.spec.volumes, add the following:

php_deployment.yaml
...
    spec:
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: code

Next, specify the container you want to run in this pod. You can find various images on the Docker store, but in this tutorial you will use the php:7-fpm image.

Under spec.template.spec.containers, add the following:

php_deployment.yaml
...
      containers:
      - name: php
        image: php:7-fpm

Next, you will mount the volumes that the container requires access to. This container will run your PHP code, so it will need access to the code volume. You will also use mountPath to specify /code as the mount point.

Under spec.template.spec.containers.volumeMounts, add:

php_deployment.yaml
...
        volumeMounts:
        - name: code
          mountPath: /code

Now that you have mounted your volume, you need to get your application code on the volume. You may have previously used FTP/SFTP or cloned the code over an SSH connection to accomplish this, but this step will show you how to copy the code using an Init Container.

Depending on the complexity of your setup process, you can either use a single initContainer to run a script that builds your application, or you can use one initContainer per command. Make sure that the volumes are mounted to the initContainer.
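
For instance, a hypothetical two-step setup could split the work into one Init Container per command; they run serially, each to completion, in the order listed (the names, commands, and URL below are illustrative):

```yaml
# Hypothetical sketch: two Init Containers, run one after the other.
      initContainers:
      - name: download
        image: busybox
        command: ["wget", "-O", "/code/index.php", "https://example.com/index.php"]
        volumeMounts:
        - name: code
          mountPath: /code
      - name: set-permissions
        image: busybox
        command: ["chmod", "-R", "755", "/code"]
        volumeMounts:
        - name: code
          mountPath: /code
```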

In this tutorial, you will use a single Init Container with busybox to download the code. busybox is a small image that contains the wget utility that you will use to accomplish this.

Under spec.template.spec, add your initContainer and specify the busybox image:

php_deployment.yaml
...
      initContainers:
      - name: install
        image: busybox

Your Init Container will need access to the code volume so that it can download the code in that location. Under spec.template.spec.initContainers, mount the volume code at the /code path:

php_deployment.yaml
...
        volumeMounts:
        - name: code
          mountPath: /code

Each Init Container needs to run a command. Your Init Container will use wget to download the code from GitHub into the /code working directory. The -O option gives the downloaded file a name, and you will name this file index.php.

Note: Be sure to trust the code you’re pulling. Before pulling it to your server, inspect the source code to ensure you are comfortable with what the code does.

Under the install container in spec.template.spec.initContainers, add these lines:

php_deployment.yaml
...
        command:
        - wget
        - "-O"
        - "/code/index.php"
        - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

Your completed php_deployment.yaml file will look like this:

php_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
      tier: backend
  template:
    metadata:
      labels:
        app: php
        tier: backend
    spec:
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: code
      containers:
      - name: php
        image: php:7-fpm
        volumeMounts:
        - name: code
          mountPath: /code
      initContainers:
      - name: install
        image: busybox
        volumeMounts:
        - name: code
          mountPath: /code
        command:
        - wget
        - "-O"
        - "/code/index.php"
        - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php

Save the file and exit the editor.

Create the PHP-FPM Deployment with kubectl:

  • kubectl apply -f php_deployment.yaml

You will see the following output upon Deployment creation:

Output
deployment.apps/php created

To summarize, this Deployment will start by downloading the specified images. It will then request the PersistentVolume from your PersistentVolumeClaim and serially run your initContainers. Once complete, the containers will run and mount the volumes to the specified mount point. Once all of these steps are complete, your pod will be up and running.

You can view your Deployment by running:

  • kubectl get deployments

You will see the output:

Output
NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
php    1         1         1            0           19s

This output can help you understand the current state of the Deployment. A Deployment is one of the controllers that maintains a desired state. The template you created specifies that the DESIRED state will have 1 replica of the pod named php. The CURRENT field indicates how many replicas are running, so this should match the DESIRED state. You can read about the remaining fields in the Kubernetes Deployments documentation.

You can view the pods that this Deployment started with the following command:

  • kubectl get pods

The output of this command varies depending on how much time has passed since creating the Deployment. If you run it shortly after creation, the output will likely look like this:

Output
NAME                   READY   STATUS     RESTARTS   AGE
php-86d59fd666-bf8zd   0/1     Init:0/1   0          9s

The columns represent the following information:

  • Ready: The number of containers in the pod that are ready, out of the total number of containers.
  • Status: The status of the pod. Init indicates that the Init Containers are running. In this output, 0 out of 1 Init Containers have finished running.
  • Restarts: How many times the pod’s containers have restarted. This number will increase if any of your Init Containers fail; the Deployment will restart them until the pod reaches its desired state.

Depending on the complexity of your startup scripts, it can take a couple of minutes for the status to change to PodInitializing:

Output
NAME                   READY   STATUS            RESTARTS   AGE
php-86d59fd666-lkwgn   0/1     PodInitializing   0          39s

This means the Init Containers have finished and the containers are initializing. If you run the command when all of the containers are running, you will see the pod status change to Running.

Output
NAME                   READY   STATUS    RESTARTS   AGE
php-86d59fd666-lkwgn   1/1     Running   0          1m

You now see that your pod is running successfully. If your pod doesn’t start, you can debug with the following commands:

  • View detailed information of a pod:
  • kubectl describe pods pod-name
  • View logs generated by a pod:
  • kubectl logs pod-name
  • View logs for a specific container in a pod:
  • kubectl logs pod-name container-name

Your application code is mounted and the PHP-FPM service is now ready to handle connections. You can now create your Nginx Deployment.

Step 5 — Creating the Nginx Deployment

In this step, you will use a ConfigMap to configure Nginx. A ConfigMap holds your configuration in a key-value format that you can reference in other Kubernetes object definitions. This approach will grant you the flexibility to reuse or swap the image with a different Nginx version if needed. Updating the ConfigMap will automatically replicate the changes to any pod mounting it.

Create a nginx_configMap.yaml file for your ConfigMap with your editor:

  • nano nginx_configMap.yaml

Name the ConfigMap nginx-config and label it tier: backend to group it with the other backend objects:

nginx_configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend

Next, you will add the data for the ConfigMap. Name the key config and add the contents of your Nginx configuration file as the value. You can use the example Nginx configuration from this tutorial.

Because Kubernetes can route requests to the appropriate host for a service, you can enter the name of your PHP-FPM service in the fastcgi_pass parameter instead of its IP address. Add the following to your nginx_configMap.yaml file:

nginx_configMap.yaml
...
data:
  config : |
    server {
      index index.php index.html;
      error_log  /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
          try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass php:9000;
          fastcgi_index index.php;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

Your nginx_configMap.yaml file will look like this:

nginx_configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  labels:
    tier: backend
data:
  config : |
    server {
      index index.php index.html;
      error_log  /var/log/nginx/error.log;
      access_log /var/log/nginx/access.log;
      root /code;

      location / {
          try_files $uri $uri/ /index.php?$query_string;
      }

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass php:9000;
          fastcgi_index index.php;
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }

Save the file and exit the editor.

Create the ConfigMap:

  • kubectl apply -f nginx_configMap.yaml

You will see the following output:

Output
configmap/nginx-config created

You’ve finished creating your ConfigMap and can now build your Nginx Deployment.

Start by opening a new nginx_deployment.yaml file in the editor:

  • nano nginx_deployment.yaml

Name the Deployment nginx and add the label tier: backend:

nginx_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend

Specify that you want one replica in the Deployment spec. This Deployment will manage pods with labels app: nginx and tier: backend. Add the following parameters and values:

nginx_deployment.yaml
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend

Next, add the pod template. You need to use the same labels that you added for the Deployment selector.matchLabels. Add the following:

nginx_deployment.yaml
...
  template:
    metadata:
      labels:
        app: nginx
        tier: backend

Give Nginx access to the code PVC that you created earlier. Under spec.template.spec.volumes, add:

nginx_deployment.yaml
...
    spec:
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: code

Pods can mount a ConfigMap as a volume. Specifying a key and a file name will create a file whose content is the key’s value. To use the ConfigMap, set path to the name of the file that will hold the key’s contents. You want to create a file named site.conf from the key config. Under spec.template.spec.volumes, add the following:

nginx_deployment.yaml
...
      - name: config
        configMap:
          name: nginx-config
          items:
          - key: config
            path: site.conf

Warning: If a file is not specified, the contents of the key will replace the mountPath of the volume. This means that if a path is not explicitly specified, you will lose all content in the destination folder.
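
For comparison, omitting items entirely would create one file per key, each named after its key. A hypothetical equivalent mount of this ConfigMap would then produce a file named config rather than site.conf:

```yaml
# Hypothetical: without `items`, each key becomes a file named after the key.
      - name: config
        configMap:
          name: nginx-config
```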

Next, you will specify the image to create your pod from. This tutorial will use the nginx:1.7.9 image for stability, but you can find other Nginx images on the Docker store. Also, make Nginx available on port 80. Under spec.template.spec add:

nginx_deployment.yaml
...
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Nginx and PHP-FPM need to access the file at the same path, so mount the code volume at /code:

nginx_deployment.yaml
...
        volumeMounts:
        - name: code
          mountPath: /code

The nginx:1.7.9 image will automatically load any configuration files under the /etc/nginx/conf.d directory. Mounting the config volume in this directory will create the file /etc/nginx/conf.d/site.conf. Under volumeMounts add the following:

nginx_deployment.yaml
...
        - name: config
          mountPath: /etc/nginx/conf.d

Your nginx_deployment.yaml file will look like this:

nginx_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend
  template:
    metadata:
      labels:
        app: nginx
        tier: backend
    spec:
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: code
      - name: config
        configMap:
          name: nginx-config
          items:
          - key: config
            path: site.conf
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: code
          mountPath: /code
        - name: config
          mountPath: /etc/nginx/conf.d

Save the file and exit the editor.

Create the Nginx Deployment:

  • kubectl apply -f nginx_deployment.yaml

The following output indicates that your Deployment is now created:

Output
deployment.apps/nginx created

List your Deployments with this command:

  • kubectl get deployments

You will see the Nginx and PHP-FPM Deployments:

Output
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           16s
php     1         1         1            1           7m

List the pods managed by both of the Deployments:

  • kubectl get pods

You will see the pods that are running:

Output
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7bf5476b6f-zppml   1/1     Running   0          32s
php-86d59fd666-lkwgn     1/1     Running   0          7m

Now that all of the Kubernetes objects are active, you can visit the Nginx service on your browser.

List the running services:

  • kubectl get services -o wide

Get the External IP for your Nginx service:

Output
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    39m   <none>
nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     27m   app=nginx,tier=backend
php          ClusterIP   10.100.59.238   <none>           9000/TCP   34m   app=php,tier=backend

On your browser, visit your server by typing in http://your_public_ip. You will see the output of phpinfo(), confirming that your Kubernetes services are up and running.

Conclusion

In this guide, you containerized the PHP-FPM and Nginx services so that you can manage them independently. This approach will not only improve the scalability of your project as you grow, but will also allow you to use resources efficiently. You also stored your application code on a volume so that you can easily update your services in the future.

DigitalOcean Community Tutorials

How To Install YunoHost on Debian 9

The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.

Introduction

YunoHost is an open-source platform that facilitates the seamless installation and configuration of self-hosted web applications, including webmail clients, password managers, and even WordPress sites. Self-hosting webmail and other applications provides privacy and control over your personal information. YunoHost allows you to configure settings, create users, and self-host your own applications from its graphical user interface. A marketplace of applications is available through YunoHost to add to your hosting environment. The frontend UI acts as a homepage for all of your applications.

In this tutorial, you will install and configure YunoHost on a server running Debian 9. To achieve this, you will configure your DNS records using DigitalOcean, secure your YunoHost instance with Let’s Encrypt, and install your chosen web applications.

Prerequisites

Step 1 — Installing YunoHost

In this step, you will install YunoHost using the official installation script. YunoHost provides this open-source script, which guides you through installing and configuring everything necessary for YunoHost to operate.

Before you download the install script, move into a temporary directory. Because files in /tmp are deleted on reboot, the script will be cleaned up automatically once you've installed YunoHost and no longer need it:

  • cd /tmp

Next, run the following command to download the official install script from YunoHost:

  • wget -O yunohost https://install.yunohost.org/

This command downloads the script and saves it to the current directory as a file called yunohost.

Now you can run the script with sudo:

  • sudo /bin/bash yunohost

When asked to overwrite configuration files, select yes.

You will then see a Post-installation screen confirming YunoHost’s installation.

Post-Installation Screen: YunoHost packages have been installed successfully! Prompts to begin post-installation process.

Select Yes to proceed to the post-installation process.

When asked to enter the Main domain, enter the domain name you want to use to access your YunoHost instance. Then choose and enter a secure password for the administrator account.

You have now installed YunoHost on your server. In the next step, you will log in to your fresh YunoHost instance to configure and manage domains.

Step 2 — Configuring DNS

Now that you have YunoHost installed, you can access the admin panel for the first time. You will set up the domain where you would like to host YunoHost by configuring its DNS records.

To start, type either the IP address of your server or the domain name you chose in the last step into your web browser. You’ll see a screen warning that your connection is not private.

This Connection Is Not Private

The connection is not yet secure because YunoHost uses a self-signed certificate by default. You can visit the site anyway since you’ll secure your site with Let’s Encrypt in the next step.

Now, enter the admin password you set in the previous step to access YunoHost’s admin panel.

Admin Panel

In order for YunoHost to function properly, you will configure the DNS settings for your domain name. From the admin panel, navigate to the Domains section and select your domain name. You’ll now see the Operations page where you can access the DNS configuration settings.

Domain Section

Select the DNS configuration button. YunoHost will display a sample zone file for your domain. You’ll use this file to configure the records for your domain.

sample zone file

To start configuring your DNS records, access your domain host. This tutorial walks through configuring DNS records via DigitalOcean’s control panel.

Log in to your DigitalOcean account and click on Networking in the menu. Enter your YunoHost domain in the Domain field and click Add Domain.

You’ll be taken to your domain name’s edit page. On this page, you’ll see the fields where you can add the YunoHost records.

DigitalOcean DNS record create page

There will be three NS records already set up, which specify that DigitalOcean's name servers are providing DNS services for your domain. You can now add the following records using the sample file provided by YunoHost:

  • Create two new A records:

    • Enter @ for the name, choose your Droplet or IP address in the Will Direct To box, and leave the TTL at 3600.
    • Enter * for the name, choose your Droplet or IP address in the Will Direct To box, and leave the TTL at 3600.
  • Create two new SRV records:

    • Enter _xmpp-client._tcp for the hostname, 5222 for the port, 0 for the priority, 5 for the weight, and change the TTL to 3600.
    • Enter _xmpp-server._tcp for the hostname, 5269 for the port, 0 for the priority, 5 for the weight, and change the TTL to 3600.
  • Create three new CNAME records:

    • Enter muc for the hostname, @ in the is an alias of box, and set the TTL to 3600.
    • Enter pubsub for the hostname, @ in the is an alias of box, and set the TTL to 3600.
    • Enter vjud for the hostname, @ in the is an alias of box, and set the TTL to 3600.

For your Mail configuration, create the following records:

  • An MX record with @ for the hostname, your domain name for the mail server, a priority of 10, and the TTL at 3600.
  • Three new TXT records:
    • Copy the TXT string that starts with "v=spf1", including the double quotes, from the sample zone file into the value box, enter @ for the hostname, and leave the TTL at 3600.
    • Copy the long TXT string, including the double quotes, from the sample zone file into the value box, enter mail._domainkey for the hostname, and leave the TTL at 3600.
    • Copy the TXT string that looks like "v=DMARC1; p=none", including the double quotes, from the sample zone file into the value box, enter _dmarc for the hostname, and leave the TTL at 3600.

And finally, for Let’s Encrypt, configure the following record:

  • Create a new CAA record:
    • Enter @ for the hostname, add letsencrypt.org to the authority granted for box, set tag to issue, flags to 128, and set the TTL to 3600.
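Put together in standard zone-file notation, the records above look roughly like the following. This is only an illustrative sketch: 203.0.113.10 stands in for your server's IP, example.com for your domain, and the TXT values must be copied verbatim from your YunoHost sample zone file.

```
@                  3600 IN A     203.0.113.10
*                  3600 IN A     203.0.113.10
_xmpp-client._tcp  3600 IN SRV   0 5 5222 example.com.
_xmpp-server._tcp  3600 IN SRV   0 5 5269 example.com.
muc                3600 IN CNAME example.com.
pubsub             3600 IN CNAME example.com.
vjud               3600 IN CNAME example.com.
@                  3600 IN MX    10 example.com.
@                  3600 IN TXT   "v=spf1 ..."
mail._domainkey    3600 IN TXT   "v=DKIM1; ..."
_dmarc             3600 IN TXT   "v=DMARC1; p=none"
@                  3600 IN CAA   128 issue "letsencrypt.org"
```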

Once you have added all of the DNS records you’ll see a list on your domain’s control panel. You can also read this guide for more information on managing your records through the DigitalOcean control panel.

List of records set up

You have configured all the DNS records necessary for the YunoHost services to work. In the next step you’ll secure your connection by installing Let’s Encrypt.

Step 3 — Installing Let’s Encrypt

In this step you will configure an SSL certificate via Let’s Encrypt to ensure that your connection is secured by encrypted HTTPS each time you or users log in to your site. YunoHost includes a function to install Let’s Encrypt to your domain through the user interface.

In the Domains section of the admin panel, select your domain name again. Navigate down to the Operations section. From here, under Manage SSL certificates, select SSL certificates. You'll see an option to Install a Let's Encrypt certificate; select it to install the certificate.

You will now have a Let’s Encrypt certificate installed for your domain. You will no longer see the warning messages when you visit your domain or IP address. Your Let’s Encrypt certificate will automatically renew by default. To manually renew your Let’s Encrypt certificate or revert to a self-signed certificate in the future, you can use this Operations page.

Manage SSL Certificates

You have configured and secured your domain. In the next section you'll set up a new user and email account to begin installing applications on your YunoHost instance.

Step 4 — Installing Applications

YunoHost provides the ability to install a number of pre-packaged web applications alongside each other. To begin installing and using applications, you need to create a regular, non-admin user and email account. You can do this through the admin panel.

From the root of the admin panel, navigate to the Users section.

Select the green New user button to the right of your screen. Enter the desired credentials for the new user in the fields provided.

New User page with fields for username, email, etc.

You’ve finished creating the user. By default, this user already has an associated email address, which you can access through any IMAP email client. Alternatively, you can install a webmail client on YunoHost to accomplish this, which you will do as part of this tutorial.

You have configured all of YunoHost’s basic functions and created a user, complete with an email account. You can now access the applications through the admin panel that are ready for installation. In this tutorial, you’ll install Rainloop, a lightweight webmail app, but you can follow these instructions to install any of the available applications.

Navigate to the Applications section of the admin panel. From here, you can select and install any of the official applications.

Applications page. List of applications in alphabetical order, ready for installation.

Select Rainloop from the list. You will see some configuration options for the application.

Rainloop Configuration Options

  • Label for Rainloop: You can choose what to enter here; the application displays this label to users on YunoHost’s home screen.
  • Choose a domain for Rainloop: Enter the domain name that will host the application.
  • Choose a path for Rainloop: Set the URL path for the application, like /rainloop. If you’d like it to be at the root of the domain, simply enter /. Keep in mind that if you do so, you will not be able to use any other applications with that domain.
  • Is it a public application?: Choose whether you want the application to be accessible to the public, or only to logged-in users.
  • Enter a strong password for the ‘admin’ user: Enter a password for the admin user of the application.
  • Do you want to add YunoHost users to the recipients suggestions?: “Yes” here will result in the application suggesting other users’ email addresses and names as recipients when composing emails.
  • Select default language: Select your preferred language.

Once finished, click the green Install button.

You’ve installed Rainloop. Open a new browser tab and navigate to the path you chose for the application (example.com/rainloop). You will see the Rainloop main dashboard.

Rainloop main screen.

You can repeat Step 4 to create more users and install further applications as you wish.

In the Applications section of the admin panel, it is also possible to install custom applications from third parties by pulling from GitHub repositories.

You now have a secure YunoHost instance configured on your server.

Conclusion

In this tutorial you have installed YunoHost on your server, created an email account, and installed an application. You have a central place to host all your applications alongside each other, including a webmail client to check your email. See the YunoHost website for a full list of applications, both official and unofficial. Also see the official Troubleshooting guide that provides information on services, configuration, and upgrades to YunoHost.


How to Manually Set Up a Prisma Server on Ubuntu 18.04

The author selected the Electronic Frontier Foundation to receive a donation as part of the Write for DOnations program.

Introduction

Prisma is a data layer that replaces traditional object-relational mapping tools (ORMs) in your application. Offering support for both building GraphQL servers as well as REST APIs, Prisma simplifies database access with a focus on type safety and enables declarative database migrations. Type safety helps reduce potential code errors and inconsistencies, while the declarative database migrations allow you to store your datamodel in version control. These features help developers reduce time spent focused on setting up database access, migrations, and data management workflows.

You can deploy the Prisma server, which acts as a proxy for your database, in a number of ways and host it either remotely or locally. Through the Prisma service you can access your data and connect to your database with the GraphQL API, which allows realtime operations and the ability to create, update, and delete data. GraphQL is a query language for APIs that allows users to send queries to access the exact data they require from their server. The Prisma server is a standalone component that sits on top of your database.

In this tutorial you will manually install a Prisma server on Ubuntu 18.04 and run a test GraphQL query in the GraphQL Playground. You will host your Prisma setup code and development locally — where you will actually build your application — while running Prisma on your remote server. By running through the installation manually, you will have a deeper understanding and customizability of the underlying infrastructure of your setup.

While this tutorial covers the manual steps for deploying Prisma on an Ubuntu 18.04 server, you can also accomplish this in a more automated way with Docker Machine by following this tutorial on Prisma’s site.

Note: The setup described in this section does not include features you would normally expect from production-ready servers, such as automated backups and active failover.

Prerequisites

To complete this tutorial, you will need:

Step 1 — Starting the Prisma Server

The Prisma CLI is the primary tool used to deploy and manage your Prisma services. To start the services, you need to set up the required infrastructure, which includes the Prisma server and a database for it to connect to.

Docker Compose allows you to manage and run multi-container applications. You’ll use it to set up the infrastructure required for the Prisma service.

You will begin by creating the docker-compose.yml file to store the Prisma service configuration on your server. You’ll use this file to automatically spin up Prisma and an associated database and to configure the necessary details, all in one step. This file defines the passwords for your databases, so be sure to replace the values for managementApiSecret and MYSQL_ROOT_PASSWORD with something secure. Run the following command to create and edit the docker-compose.yml file:

  • sudo nano docker-compose.yml

Add the following content to the file to define the services and volumes for the Prisma setup:

docker-compose.yml
version: "3"
services:
  prisma:
    image: prismagraphql/prisma:1.20
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: prisma
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql:

This configuration does the following:

  • It launches two services: prisma and mysql.
  • It pulls in the Prisma image pinned to version 1.20, the latest release as of this writing.
  • It sets the ports Prisma will be available on and specifies all of the credentials to connect to the MySQL database in the databases section.

The docker-compose.yml file sets up the managementApiSecret, which prevents others from accessing your data with knowledge of your endpoint. If you are using this tutorial for anything but a test deployment, you should change the managementApiSecret value to something more secure. When you do, be sure to remember it so that you can enter it later during the prisma init process.

This file also pulls in the MySQL Docker image and sets those credentials as well. For the purposes of this tutorial, this Docker Compose file spins up a MySQL image, but you can also use PostgreSQL with Prisma. Both Docker images are available on Docker hub:
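As a sketch of the PostgreSQL alternative, the mysql service in docker-compose.yml could be swapped for a postgres service along the following lines. This is illustrative only; the connector, host, port, and credentials under PRISMA_CONFIG would need to change to match:

```yaml
# Hypothetical replacement for the mysql service and volume:
  postgres:
    image: postgres:10.3
    restart: always
    environment:
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
```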

Save and exit the file.

Now that you have saved all of the details, you can start the Docker containers. The -d command tells the containers to run in detached mode, meaning they’ll run in the background:

  • sudo docker-compose up -d

This will fetch the Docker images for both prisma and mysql. You can verify that the Docker containers are running with the following command:

  • sudo docker ps

You will see an output that looks similar to this:

CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS                    NAMES
24f4dd6222b1   prismagraphql/prisma:1.20   "/bin/sh -c /app/sta…"   15 seconds ago   Up 1 second     0.0.0.0:4466->4466/tcp   root_prisma_1
d8cc3a393a9f   mysql:5.7                   "docker-entrypoint.s…"   15 seconds ago   Up 13 seconds   3306/tcp                 root_mysql_1

With your Prisma server and database set up, you are now ready to work locally to deploy the Prisma service.

Step 2 — Installing Prisma Locally

The Prisma server provides the runtime environments for your Prisma services. Now that you have your Prisma server started, you can deploy your Prisma service. You will run these steps locally, not on your server.

To start, create a separate folder to contain all of the Prisma files:

  • mkdir prisma

Then move into that folder:

  • cd prisma

You can install Prisma with Homebrew if you’re using MacOS. To do this, run the following command to add the Prisma repository:

  • brew tap prisma/prisma

You can then install Prisma with the following command:

  • brew install prisma

Or alternately, with npm:

  • npm install -g prisma

With Prisma installed locally, you are ready to bootstrap the new Prisma service.

Step 3 — Creating the Configuration for a New Prisma Service

After the installation, you can use prisma init to create the file structure for a new Prisma database API, which generates the files necessary to build your application with Prisma. Your endpoint will automatically be in the prisma.yml file, and datamodel.prisma will already contain a sample datamodel that you can query in the next step. The datamodel serves as the basis for your Prisma API and specifies the model for your application. At this point, you are only creating the files and the sample datamodel. You are not making any changes to the database until you run prisma deploy later in this step.

Now you can run the following command locally to create the new file structure:

  • prisma init hello-world

After you run this command you will see an interactive prompt. When asked, select Use other server and press ENTER:

Output
Set up a new Prisma server or deploy to an existing server?

  You can set up Prisma for local development (based on docker-compose)
  Use existing database    Connect to existing database
  Create new database      Set up a local database using Docker

  Or deploy to an existing Prisma server:
  Demo server              Hosted demo environment incl. database (requires login)
❯ Use other server         Manually provide endpoint of a running Prisma server

You will then provide the endpoint of the server that is acting as the Prisma server. It will look something like http://SERVER_IP_ADDRESS:4466. It is important that the endpoint begins with http (or https) and includes the port number.

Output
Enter the endpoint of your Prisma server http://SERVER_IP_ADDRESS:4466
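As a quick sanity check before you paste the endpoint into the prompt, you can verify its shape (scheme plus explicit port) with a small shell test. This is only a sketch; the IP address is illustrative:

```shell
# The endpoint must start with http(s) and include the port number.
endpoint="http://203.0.113.10:4466"   # illustrative; substitute your server's IP

if printf '%s\n' "$endpoint" | grep -Eq '^https?://[^/:]+:[0-9]+$'; then
  echo "endpoint format OK"
else
  echo "endpoint is missing a scheme or port" >&2
fi
```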

For the management API secret, enter in the phrase or password that you indicated earlier in the configuration file:

Output
Enter the management API secret my-secret

For the subsequent options, you can accept the defaults by pressing ENTER for the service name and service stage:

Output
Choose a name for your service hello-world
Choose a name for your stage dev

You will also be given a choice on a programming language for the Prisma client. In this case, you can choose your preferred language. You can read more about the client here.

Output
Select the programming language for the generated Prisma client (Use arrow keys)
❯ Prisma TypeScript Client
  Prisma Flow Client
  Prisma JavaScript Client
  Prisma Go Client
  Don't generate

Once you have completed the prompt, you will see the following output that confirms the selections you made:

Output
Created 3 new files:

  prisma.yml           Prisma service definition
  datamodel.prisma     GraphQL SDL-based datamodel (foundation for database)
  .env                 Env file including PRISMA_API_MANAGEMENT_SECRET

Next steps:

  1. Open folder: cd hello-world
  2. Deploy your Prisma service: prisma deploy
  3. Read more about deploying services: http://bit.ly/prisma-deploy-services

Move into the hello-world directory:

  • cd hello-world

Sync these changes to your server with prisma deploy. This sends the information to the Prisma server from your local machine and creates the Prisma service on the Prisma server:

  • prisma deploy

Note: Running prisma deploy again will update your Prisma service.

Your output will look something like:

Output
Creating stage dev for service hello-world ✔
Deploying service `hello-world` to stage 'dev' to server 'default' 468ms

Changes:

  User (Type)
  + Created type `User`
  + Created field `id` of type `GraphQLID!`
  + Created field `name` of type `String!`
  + Created field `updatedAt` of type `DateTime!`
  + Created field `createdAt` of type `DateTime!`

Applying changes 716ms

Your Prisma GraphQL database endpoint is live:

  HTTP:  http://SERVER_IP_ADDRESS:4466/hello-world/dev
  WS:    ws://SERVER_IP_ADDRESS:4466/hello-world/dev

The output shows that Prisma has updated your database according to your datamodel (created in the prisma init step) with a type User. Types are an essential part of a datamodel; they represent an item from your application, and each type contains multiple fields. For your datamodel the associated fields describing the user are: the user’s ID, name, time they were created, and time they were updated.
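For reference, the sample datamodel that prisma init generates declares a User type along the following lines. This is a sketch and your generated file may differ slightly; in Prisma 1, the createdAt and updatedAt fields shown in the deploy output are system fields that Prisma manages for you:

```graphql
type User {
  id: ID! @unique
  name: String!
}
```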

If you run into issues at this stage and get a different output, double check that you entered all of the fields correctly during the interactive prompt. You can do so by reviewing the contents of the prisma.yml file.

With your Prisma service running, you can connect to two different endpoints:

  • The management interface, available at http://SERVER_IP_ADDRESS:4466/management, where you can manage and deploy Prisma services.

  • The GraphQL API for your Prisma service, available at http://SERVER_IP_ADDRESS:4466/hello-world/dev.

GraphQL API exploring _Your Project_

You have successfully set up and deployed your Prisma server. You can now explore queries and mutations in GraphQL.

Step 4 — Running an Example Query

To explore more of what Prisma can do, you can experiment with the GraphQL Playground, an open-source GraphQL integrated development environment (IDE) that runs on your server. To access it, visit the endpoint from the previous step in your browser:

http://SERVER_IP_ADDRESS:4466/hello-world/dev 

A mutation is a GraphQL term that describes a way to modify — create, update, or delete (CRUD) — data in the backend via GraphQL. You can send a mutation to create a new user and explore the functionality. To do this, run the following mutation in the left-hand side of the page:

mutation {
  createUser(data: { name: "Alice" }) {
    id
    name
  }
}

Once you press the play button, you will see the results on the right-hand side of the page.
GraphQL Playground Creating a New User

Subsequently, if you want to look up a user by using the ID column in the database, you can run the following query:

query {
  user(where: { id: "cjkar2d62000k0847xuh4g70o" }) {
    id
    name
  }
}
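Prisma also generates a plural query for each type, so to list every user created so far you can run the following (a sketch assuming the default generated API):

```graphql
query {
  users {
    id
    name
  }
}
```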

You now have a Prisma server and service up and running on your server, and you have run test queries in GraphQL’s IDE.

Conclusion

You have a functioning Prisma setup on your server. You can see some additional Prisma use cases and next steps in the Getting Started Guide or explore Prisma’s feature set in the Prisma Docs. Once you have completed all of the steps in this tutorial, you have a number of options to verify your connection to the database, one possibility is using the Prisma Client.


How To Ensure Code Quality with SonarQube on Ubuntu 18.04

The author selected Internet Archive to receive a donation as part of the Write for DOnations program.

Introduction

Code quality is an approximation of how useful and maintainable a specific piece of code is. Quality code will make the task of maintaining and expanding your application easier. It helps ensure that fewer bugs are introduced when you make required changes in the future.

SonarQube is an open-source tool that assists in code quality analysis and reporting. It scans your source code looking for potential bugs, vulnerabilities, and maintainability issues, and then presents the results in a report which will allow you to identify potential issues in your application.

The SonarQube tool itself is made up of two parts: a scanner, an application installed locally on the developer’s machine to perform the code analysis, and a centralized server for record-keeping and reporting. A single SonarQube server instance can support multiple scanners, enabling you to centralize code quality reports from many developers in a single place.

In this guide, you will deploy a SonarQube server and scanner to analyze your code and create code quality reports. Then you’ll perform a test on your machine by scanning some example code with the SonarQube scanner.

Prerequisites

Before you begin this guide you’ll need the following:

Step 1 — Preparing for the Install

You need to complete a few steps to prepare for the SonarQube installation. SonarQube is a Java application that will run as a service, and since you shouldn’t run services as the root user, you’ll create another system user specifically to run the SonarQube service. After that, you’ll create the installation directory and set its permissions, and then you’ll create a MySQL database and user for SonarQube.

First, create the sonarqube user:

  • sudo adduser --system --no-create-home --group --disabled-login sonarqube

This user will only be used to run the SonarQube service, so this creates a system user that can’t log in to the server directly.

Next, create the directory to install SonarQube into:

  • sudo mkdir /opt/sonarqube

SonarQube releases are packaged in a zipped format, so install the unzip utility, which will allow you to extract those files:

  • sudo apt-get install unzip

Next, you will create a database and credentials that SonarQube will use. Log in to the MySQL server as the root user:

  • sudo mysql -u root -p

Then create the SonarQube database:

  • CREATE DATABASE sonarqube;

Now create the credentials that SonarQube will use to access the database.

  • CREATE USER sonarqube@'localhost' IDENTIFIED BY 'some_secure_password';

Then grant permissions so that the newly created user can make changes to the SonarQube database:

  • GRANT ALL ON sonarqube.* to sonarqube@'localhost';

Then apply the permission changes and exit the MySQL console:

  • FLUSH PRIVILEGES;
  • EXIT;

Now that you have the user and directory in place, you will download and install the SonarQube server.

Step 2 — Downloading and Installing SonarQube

Start by changing the current working directory to the SonarQube installation directory:

  • cd /opt/sonarqube

Then, head over to the SonarQube downloads page and grab the download link for SonarQube 7.5 Community Edition, the version this tutorial uses. The page offers many versions and flavors of SonarQube for download.

After getting the link, download the file:

  • sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-7.5.zip

Unzip the file:

  • sudo unzip sonarqube-7.5.zip

Once the files extract, delete the downloaded zip file, as you no longer need it:

  • sudo rm sonarqube-7.5.zip

Finally, update the permissions so that the sonarqube user will own these files, and be able to read and write files in this directory:

  • sudo chown -R sonarqube:sonarqube /opt/sonarqube

Now that all the files are in place, we can move on to configuring the SonarQube server.

Step 3 — Configuring the SonarQube Server

We’ll need to edit a few things in the SonarQube configuration file. Namely:

  • We need to specify the username and password that the SonarQube server will use for the database connection.
  • We also need to tell SonarQube to use MySQL for our back-end database.
  • We’ll tell SonarQube to run in server mode, which will yield improved performance.
  • We’ll also tell SonarQube to only listen on the local network address since we will be using a reverse proxy.

Start by opening the SonarQube configuration file:

  • sudo nano sonarqube-7.5/conf/sonar.properties

First, change the username and password that SonarQube will use to access the database to the username and password you created for MySQL:

/opt/sonarqube/sonarqube-7.5/conf/sonar.properties
    ...
    sonar.jdbc.username=sonarqube
    sonar.jdbc.password=some_secure_password
    ...

Next, tell SonarQube to use MySQL as the database driver:

/opt/sonarqube/sonarqube-7.5/conf/sonar.properties
    ...
    sonar.jdbc.url=jdbc:mysql://localhost:3306/sonarqube?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance&useSSL=false
    ...

As this instance of SonarQube will run as a dedicated server, you will add the -server option to activate SonarQube’s server mode, which will help maximize performance.

Nginx will handle the communication between the SonarQube clients and your server, so you will tell SonarQube to only listen to the local address.

/opt/sonarqube/sonarqube-7.5/conf/sonar.properties
    ...
    sonar.web.javaAdditionalOpts=-server
    sonar.web.host=127.0.0.1
    ...

Once you have updated those values, save and close the file.

Next, you will use Systemd to configure SonarQube to run as a service so that it will start automatically upon a reboot.

Create the service file:

  • sudo nano /etc/systemd/system/sonarqube.service

Add the following content to the file which specifies how the SonarQube service will start and stop:

/etc/systemd/system/sonarqube.service
[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking

ExecStart=/opt/sonarqube/sonarqube-7.5/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/sonarqube-7.5/bin/linux-x86-64/sonar.sh stop

User=sonarqube
Group=sonarqube
Restart=always

[Install]
WantedBy=multi-user.target

You can learn more about systemd unit files in Understanding Systemd Units and Unit Files.

Close and save the file, then start the SonarQube service:

  • sudo service sonarqube start

Check the status of the SonarQube service to ensure that it has started and is running as expected:

  • service sonarqube status

If the service has successfully started, you’ll see a line that says “Active” similar to this:

● sonarqube.service - SonarQube service
   Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset
   Active: active (running) since Sat 2019-01-05 19:00:00 UTC; 2s ago

Next, configure the SonarQube service to start automatically on boot:

  • sudo systemctl enable sonarqube

At this point, the SonarQube server will take a few minutes to fully initialize. You can check if the server has started by querying the HTTP port:

  • curl http://127.0.0.1:9000

Once the initialization process is complete, you can move on to the next step.

Step 4 — Configuring the Reverse Proxy

Now that we have the SonarQube server running, it’s time to configure Nginx, which will be the reverse proxy and HTTPS terminator for our SonarQube instance.

Start by creating a new Nginx configuration file for the site:

  • sudo nano /etc/nginx/sites-enabled/sonarqube

Add this configuration so that Nginx will route incoming traffic to SonarQube:

/etc/nginx/sites-enabled/sonarqube
server {
    listen 80;
    server_name sonarqube.example.com;

    location / {
        proxy_pass http://127.0.0.1:9000;
    }
}

Save and close the file.

Next, make sure your configuration file has no syntax errors:

  • sudo nginx -t

If you see errors, fix them and run sudo nginx -t again. Once there are no errors, restart Nginx:

  • sudo service nginx restart

For a quick test, you can now visit http://sonarqube.example.com in your web browser. You’ll be greeted with the SonarQube web interface.

Now we’ll use Let’s Encrypt to create HTTPS certificates for our installation so that data will be securely transferred between the server and your local machine. Use certbot to create the certificate for Nginx:

  • sudo certbot --nginx -d sonarqube.example.com

If this is your first time requesting a Let’s Encrypt certificate, Certbot will prompt you for your email address and ask you to accept the EULA. Enter your email and accept the agreement to continue.

Certbot will then ask how you’d like to configure your security settings. Select the option to redirect all requests to HTTPS. This will ensure that all communication between clients and the SonarQube server gets encrypted.

Now that we’re done setting up the reverse proxy, we can move on to securing our SonarQube server.

Step 5 — Securing SonarQube

SonarQube ships with a default administrator username and password of admin. This default is well known, so you’ll want to change the password to something more secure.

Start by visiting the URL of your installation, and log in using the default credentials. If prompted to start a tutorial, simply click Skip this tutorial to get to the dashboard.

Once logged in, click the Administration tab, select Security from the drop-down list, and then select Users:

SonarQube users administration tab

From here, click on the small cog on the right of the “Administrator” account row, then click on “Change password”. Be sure to change the password to something that’s easy to remember but hard to guess.

Now, from the same page, create a normal user that you can use to create projects and submit analysis results to your server. Click on the Create User button at the top-right of the page:
SonarQube new user dialog

Then create a token for the new user by clicking on the button in the Tokens column and giving the token a name. You’ll need this token later when you invoke the code scanner, so be sure to write it down in a safe place.

Finally, you may notice that the SonarQube instance is wide-open to the world, and anyone could view analysis results and your source code. This setting is highly insecure, so we’ll configure SonarQube to only allow logged-in users access to the dashboard. From the same Administration tab, click on Configuration, then General Settings, and then Security on the left pane. Flip the switch that says Force user authentication to enable authentication, then click on the Save button below the switch.

SonarQube Force authentication switch

Now that you’re done setting up the server, let’s set up the SonarQube scanner.

Step 6 — Setting Up the Code Scanner

SonarQube’s code scanner is a separate package that you can install on a different machine than the one running the SonarQube server, such as your local development workstation or a continuous delivery server. There are packages available for Windows, macOS, and Linux, which you can find at the SonarQube website.

In this tutorial, you’ll install the code scanner on the same server that hosts our SonarQube server.

Start by creating a directory for the scanner:

  • sudo mkdir /opt/sonarscanner

Then change into that directory:

  • cd /opt/sonarscanner

Download the SonarQube scanner for Linux using wget:

  • sudo wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.2.0.1227-linux.zip

Next, extract the scanner:

  • sudo unzip sonar-scanner-cli-3.2.0.1227-linux.zip

Then delete the zip archive file:

  • sudo rm sonar-scanner-cli-3.2.0.1227-linux.zip

After that, you’ll need to modify a few settings to get the scanner working with your server install. Open the configuration file for editing:

  • sudo nano sonar-scanner-3.2.0.1227-linux/conf/sonar-scanner.properties

First, tell the scanner where it should submit the code analysis results. Un-comment the line starting with sonar.host.url and set it to the URL of your SonarQube server:

/opt/sonarscanner/sonar-scanner-3.2.0.1227-linux/conf/sonar-scanner.properties
    sonar.host.url=https://sonarqube.example.com 

Save and close the file. Now make the scanner binary executable:

  • sudo chmod +x sonar-scanner-3.2.0.1227-linux/bin/sonar-scanner

Then create a symbolic link so that you can call the scanner without specifying the path:

  • sudo ln -s /opt/sonarscanner/sonar-scanner-3.2.0.1227-linux/bin/sonar-scanner /usr/local/bin/sonar-scanner

Now that the scanner is set up, we’re ready to run our first code scan.

Step 7 — Running a Test Scan on SonarQube Example Projects

If you’d like to poke around with SonarQube to see what it can do, you might consider running a test scan on the SonarQube example projects. These are example projects created by the SonarQube team that contain many issues that SonarQube will then detect and report.

Create a new working directory in your home directory, then change to the directory:

  • cd ~
  • mkdir sonar-test && cd sonar-test

Download the example project:

  • wget https://github.com/SonarSource/sonar-scanning-examples/archive/master.zip

Unzip the project and delete the archive file:

  • unzip master.zip
  • rm master.zip

Next, switch to the example project directory:

  • cd sonar-scanning-examples-master/sonarqube-scanner

Run the scanner, passing it the token you created earlier:

  • sonar-scanner -D sonar.login=your_token_here

This will take a while. Once the scan is complete, you’ll see something like this on the console:

INFO: Task total time: 14.128 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 21.776s
INFO: Final Memory: 17M/130M
INFO: ------------------------------------------------------------------------

The example project’s report will now be on the SonarQube dashboard like so:

SonarQube Dashboard

Now that you’ve confirmed that the SonarQube server and scanner work with the test code, you can use SonarQube to analyze your own code.

Step 8 — Running a Scan on Your Own Code

To have SonarQube analyze your own code, start by transferring your project to the server, or follow Step 6 to install and configure the SonarQube scanner on your workstation and configure it to point to your SonarQube server.

Then, in your project’s root directory, create a SonarQube configuration file:

  • nano sonar-project.properties

You’ll use this file to tell SonarQube a few things about your project.

First, define a project key, which is a unique ID for the project. You can use anything you’d like, but this ID must be unique for your SonarQube instance:

sonar-project.properties
# Unique ID for this project
sonar.projectKey=foobar:hello-world

...

Then, specify the project name and version so that SonarQube will display this information in the dashboard:

sonar-project.properties
...

sonar.projectName=Hello World Project
sonar.projectVersion=1.0

...

Finally, tell SonarQube where to look for the code files. Note that this is relative to the directory in which the configuration file resides. Set it to the current directory:

sonar-project.properties
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
sonar.sources=.

Save and close the file.
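Taken together, the fragments above produce a complete configuration file like the following (the project key, name, and version shown here are the illustrative values from this tutorial; substitute your own):

```
# Unique ID for this project
sonar.projectKey=foobar:hello-world

sonar.projectName=Hello World Project
sonar.projectVersion=1.0

# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
sonar.sources=.
```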

You’re ready to run a code quality analysis on your own code. Run sonar-scanner again, passing it your token:

  • sonar-scanner -D sonar.login=your_token_here

Once the scan is complete, you’ll see a summary screen similar to this:

INFO: Task total time: 5.417 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 9.659s
INFO: Final Memory: 39M/112M
INFO: ------------------------------------------------------------------------

The project’s code quality report will now be on the SonarQube dashboard.

Conclusion

In this tutorial, you’ve set up a SonarQube server and scanner for code quality analysis. Now you can make sure that your code is easily maintainable by simply running a scan — SonarQube will tell you where the potential problems might be!

From here, you might want to read the SonarQube Scanner documentation to learn how to run analysis on your local development machine or as part of your build process.


How To Build a Node.js Application with Docker

Introduction

The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter-weight alternative to virtual machines. Though containers are not new, they offer benefits, including process isolation and environment standardization, that are growing in importance as more developers use distributed application architectures.

When building and scaling an application with Docker, the starting point is typically creating an image for your application, which you can then run in a container. The image includes your application code, libraries, configuration files, environment variables, and runtime. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.

In this tutorial, you will create an application image for a static website that uses the Express framework and Bootstrap. You will then build a container using that image and push it to Docker Hub for future use. Finally, you will pull the stored image from your Docker Hub repository and build another container, demonstrating how you can recreate and scale your application.

Prerequisites

To follow this tutorial, you will need:

Step 1 — Installing Your Application’s Dependencies

To create your image, you will first need to produce your application files, which you can then copy to your container. These files will include your application’s static content, code, and dependencies.

First, create a directory for your project in your non-root user’s home directory. We will call ours node_project, but you should feel free to replace this with something else:

  • mkdir node_project

Navigate to this directory:

  • cd node_project

This will be the root directory of the project.

Next, create a package.json file with your project’s dependencies and other identifying information. Open the file with nano or your favorite editor:

  • nano package.json

Add the following information about the project, including its name, author, license, entrypoint, and dependencies. Be sure to replace the author information with your own name and contact details:

~/node_project/package.json
{
  "name": "nodejs-image-demo",
  "version": "1.0.0",
  "description": "nodejs image demo",
  "author": "Sammy the Shark <sammy@example.com>",
  "license": "MIT",
  "main": "app.js",
  "keywords": [
    "nodejs",
    "bootstrap",
    "express"
  ],
  "dependencies": {
    "express": "^4.16.4"
  }
}

This file includes the project name, author, and license under which it is being shared. npm recommends making your project name short and descriptive, and avoiding duplicates in the npm registry. We’ve listed the MIT license in the license field, permitting the free use and distribution of the application code.

Additionally, the file specifies:

  • "main": The entrypoint for the application, app.js. You will create this file next.

  • "dependencies": The project dependencies, in this case Express 4.16.4 or above.

Though this file does not list a repository, you can add one by following these guidelines on adding a repository to your package.json file. This is a good addition if you are versioning your application.
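For reference, a repository entry inside package.json would look like this (the GitHub URL here is hypothetical; substitute your own repository):

```json
"repository": {
  "type": "git",
  "url": "https://github.com/your_username/nodejs-image-demo.git"
}
```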

Save and close the file when you’ve finished making changes.

To install your project’s dependencies, run the following command:

  • npm install

This will install the packages you’ve listed in your package.json file in your project directory.

We can now move on to building the application files.

Step 2 — Creating the Application Files

We will create a website that offers users information about sharks. Our application will have a main entrypoint, app.js, and a views directory that will include the project’s static assets. The landing page, index.html, will offer users some preliminary information and a link to a page with more detailed shark information, sharks.html. In the views directory, we will create both the landing page and sharks.html.

First, open app.js in the main project directory to define the project’s routes:

  • nano app.js

The first part of the file will create the Express application and Router objects, and define the base directory, port, and host as variables:

~/node_project/app.js
var express = require("express");
var app = express();
var router = express.Router();

var path = __dirname + '/views/';
const PORT = 8080;
const HOST = '0.0.0.0';

The require function loads the express module, which we then use to create the app and router objects. The router object will perform the routing function of the application, and as we define HTTP method routes we will add them to this object to define how our application will handle requests.

This section of the file also sets a few variables, path, PORT, and HOST:

  • path: Defines the base directory, which will be the views subdirectory within the current project directory.

  • HOST: Defines the address that the application will bind to and listen on. Setting this to 0.0.0.0, or all IPv4 addresses, matches Docker’s default behavior of exposing containers to 0.0.0.0 unless otherwise instructed.

  • PORT: Tells the app to listen on and bind to port 8080.

Next, set the routes for the application using the router object:

~/node_project/app.js
...

router.use(function (req,res,next) {
  console.log("/" + req.method);
  next();
});

router.get("/",function(req,res){
  res.sendFile(path + "index.html");
});

router.get("/sharks",function(req,res){
  res.sendFile(path + "sharks.html");
});

The router.use function loads a middleware function that will log the router’s requests and pass them on to the application’s routes. These are defined in the subsequent functions, which specify that a GET request to the base project URL should return the index.html page, while a GET request to the /sharks route should return sharks.html.

Finally, mount the router middleware and the application’s static assets and tell the app to listen on port 8080:

~/node_project/app.js
...

app.use(express.static(path));
app.use("/", router);

app.listen(8080, function () {
  console.log('Example app listening on port 8080!')
})

The finished app.js file will look like this:

~/node_project/app.js
var express = require("express");
var app = express();
var router = express.Router();

var path = __dirname + '/views/';
const PORT = 8080;
const HOST = '0.0.0.0';

router.use(function (req,res,next) {
  console.log("/" + req.method);
  next();
});

router.get("/",function(req,res){
  res.sendFile(path + "index.html");
});

router.get("/sharks",function(req,res){
  res.sendFile(path + "sharks.html");
});

app.use(express.static(path));
app.use("/", router);

app.listen(8080, function () {
  console.log('Example app listening on port 8080!')
})

Save and close the file when you are finished.

Next, let’s add some static content to the application. Start by creating the views directory:

  • mkdir views

Open the landing page file, index.html:

  • nano views/index.html

Add the following code to the file, which will import Bootstrap and create a jumbotron component with a link to the more detailed sharks.html info page:

~/node_project/views/index.html
<!DOCTYPE html>
<html lang="en">

<head>
    <title>About Sharks</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <link href="css/styles.css" rel="stylesheet">
    <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>

<body>
    <nav class="navbar navbar-dark navbar-static-top navbar-expand-md">
        <div class="container">
            <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"><span class="sr-only">Toggle navigation</span>
            </button> <a class="navbar-brand" href="#">Everything Sharks</a>
            <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                <ul class="nav navbar-nav mr-auto">
                    <li class="active nav-item"><a href="/" class="nav-link">Home</a>
                    </li>
                    <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                    </li>
                </ul>
            </div>
        </div>
    </nav>
    <div class="jumbotron">
        <div class="container">
            <h1>Want to Learn About Sharks?</h1>
            <p>Are you ready to learn about sharks?</p>
            <br>
            <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
            </p>
        </div>
    </div>
    <div class="container">
        <div class="row">
            <div class="col-lg-6">
                <h3>Not all sharks are alike</h3>
                <p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.
                </p>
            </div>
            <div class="col-lg-6">
                <h3>Sharks are ancient</h3>
                <p>There is evidence to suggest that sharks lived up to 400 million years ago.
                </p>
            </div>
        </div>
    </div>
</body>

</html>

The top-level navbar here allows users to toggle between the Home and Sharks pages. In the navbar-nav subcomponent, we are using Bootstrap’s active class to indicate the current page to the user. We’ve also specified the routes to our static pages, which match the routes we defined in app.js:

~/node_project/views/index.html
...
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
   <ul class="nav navbar-nav mr-auto">
      <li class="active nav-item"><a href="/" class="nav-link">Home</a>
      </li>
      <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
      </li>
   </ul>
</div>
...

Additionally, we’ve created a link to our shark information page in our jumbotron’s button:

~/node_project/views/index.html
...
<div class="jumbotron">
   <div class="container">
      <h1>Want to Learn About Sharks?</h1>
      <p>Are you ready to learn about sharks?</p>
      <br>
      <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
      </p>
   </div>
</div>
...

There is also a link to a custom style sheet in the header:

~/node_project/views/index.html
...
<link href="css/styles.css" rel="stylesheet">
...

We will create this style sheet at the end of this step.

Save and close the file when you are finished.

With the application landing page in place, we can create our shark information page, sharks.html, which will offer interested users more information about sharks.

Open the file:

  • nano views/sharks.html

Add the following code, which imports Bootstrap and the custom style sheet and offers users detailed information about certain sharks:

~/node_project/views/sharks.html
<!DOCTYPE html>
<html lang="en">

<head>
    <title>About Sharks</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <link href="css/styles.css" rel="stylesheet">
    <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
</head>
<nav class="navbar navbar-dark navbar-static-top navbar-expand-md">
    <div class="container">
        <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"><span class="sr-only">Toggle navigation</span>
        </button> <a class="navbar-brand" href="/">Everything Sharks</a>
        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
            <ul class="nav navbar-nav mr-auto">
                <li class="nav-item"><a href="/" class="nav-link">Home</a>
                </li>
                <li class="active nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                </li>
            </ul>
        </div>
    </div>
</nav>
<div class="jumbotron text-center">
    <h1>Shark Info</h1>
</div>
<div class="container">
    <div class="row">
        <div class="col-lg-6">
            <p>
                <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                </div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
            </p>
        </div>
        <div class="col-lg-6">
            <p>
                <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
            </p>
        </div>
    </div>
</div>

</html>

Note that in this file, we again use the active class to indicate the current page.

Save and close the file when you are finished.

Finally, create the custom CSS style sheet that you’ve linked to in index.html and sharks.html by first creating a css folder in the views directory:

  • mkdir views/css

Open the style sheet:

  • nano views/css/styles.css

Add the following code, which will set the desired color and font for our pages:

~/node_project/views/css/styles.css
.navbar {
    margin-bottom: 0;
    background: #000000;
}

body {
    background: #000000;
    color: #ffffff;
    font-family: 'Merriweather', sans-serif;
}

h1, h2 {
    font-weight: bold;
}

p {
    font-size: 16px;
    color: #ffffff;
}

.jumbotron {
    background: #0048CD;
    color: white;
    text-align: center;
}

.jumbotron p {
    color: white;
    font-size: 26px;
}

.btn-primary {
    color: #fff;
    border-color: white;
    margin-bottom: 5px;
}

img, video, audio {
    margin-top: 20px;
    max-width: 80%;
}

div.caption {
    float: left;
    clear: both;
}

In addition to setting the font and color, this file also limits the size of the images by specifying a max-width of 80%. This will prevent them from taking up more room than we would like on the page.

Save and close the file when you are finished.

With your application files in place and your project dependencies installed, you are ready to start the application.

If you followed the initial server setup tutorial in the prerequisites, you will have an active firewall permitting only SSH traffic. To permit traffic to port 8080, run:

  • sudo ufw allow 8080

To start the application, make sure that you are in your project’s root directory:

  • cd ~/node_project

Start the application with node app.js:

  • node app.js

Navigate your browser to http://your_server_ip:8080. You will see the following landing page:

Click on the Get Shark Info button. You will see the following information page:

You now have an application up and running. When you are ready, quit the server by typing CTRL+C. We can now move on to creating the Dockerfile that will allow us to recreate and scale this application as desired.

Step 3 — Writing the Dockerfile

Your Dockerfile specifies what will be included in your application container when it is executed. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.

Following these guidelines on building optimized containers, we will make our image as efficient as possible by minimizing the number of image layers and restricting the image’s function to a single purpose: recreating our application files and static content.

In your project’s root directory, create the Dockerfile:

  • nano Dockerfile

Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application, which will form the starting point of the application build.

We will use the node:10 image, since, at the time of writing, this is the recommended LTS version of Node.js. Add the following FROM instruction to set the application’s base image:

~/node_project/Dockerfile
FROM node:10 

This image includes Node.js and npm. Each Dockerfile must begin with a FROM instruction.

By default, the Docker Node image includes a non-root node user that you can use to avoid running your application container as root. It is a recommended security practice to avoid running containers as root and to restrict capabilities within the container to only those required to run its processes. We will therefore use the node user’s home directory as the working directory for our application and set it as our user inside the container. For more information about best practices when working with the Docker Node image, see this best practices guide.

To fine-tune the permissions on our application code in the container, we will create the node_modules subdirectory in /home/node along with the app directory. Creating these directories will ensure that they have the permissions we want, which will be important when we create local node modules in the container with npm install. In addition to creating these directories, we will set ownership on them to our node user:

~/node_project/Dockerfile
... RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app 

For more information on the utility of consolidating RUN instructions, see this discussion of how to manage container layers.

Next, set the working directory of the application to /home/node/app:

~/node_project/Dockerfile
... WORKDIR /home/node/app 

If a WORKDIR is not set, Docker will create one by default, so it is a good idea to set it explicitly.

Next, copy the package.json and package-lock.json (for npm 5+) files:

~/node_project/Dockerfile
... COPY package*.json ./ 

Adding this COPY instruction before running npm install or copying the application code allows us to take advantage of Docker’s caching mechanism. At each stage in the build, Docker will check to see whether it has a layer cached for that particular instruction. If we change package.json, this layer will be rebuilt, but if we don’t, this instruction will allow Docker to use the existing image layer and skip reinstalling our node modules.

After copying the project dependencies, we can run npm install:

~/node_project/Dockerfile
... RUN npm install 

Copy your application code to the working directory of the application in the container:

~/node_project/Dockerfile
... COPY . . 

To ensure that the application files are owned by the non-root node user, copy the permissions from your application directory to the directory in the container:

~/node_project/Dockerfile
... COPY --chown=node:node . . 

Set the user to node:

~/node_project/Dockerfile
... USER node 

Finally, expose port 8080 on the container and start the application:

~/node_project/Dockerfile
...

EXPOSE 8080

CMD [ "node", "app.js" ]

EXPOSE does not publish the port, but instead functions as a way of documenting which ports on the container will be published at runtime. CMD runs the command to start the application, in this case node app.js. Note that there should only be one CMD instruction in each Dockerfile. If you include more than one, only the last will take effect.
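As a quick illustration of that last point, consider this hypothetical throwaway Dockerfile (not part of this tutorial's project) that includes two CMD instructions:

```Dockerfile
# Hypothetical example: both CMD lines are valid syntax,
# but only the final CMD is used when a container starts.
FROM node:10

CMD [ "echo", "this CMD is ignored" ]

CMD [ "node", "app.js" ]
```

Running a container from an image built with this file would execute node app.js; the first CMD is silently discarded.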

There are many things you can do with the Dockerfile. For a complete list of instructions, see Docker’s Dockerfile reference documentation.

The complete Dockerfile will look like this:

~/node_project/Dockerfile
FROM node:10

RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app

COPY package*.json ./

RUN npm install

COPY . .

COPY --chown=node:node . .

USER node

EXPOSE 8080

CMD [ "node", "app.js" ]

Save and close the file when you are finished editing.

Before building the application image, let’s add a .dockerignore file. Working in a similar way to a .gitignore file, .dockerignore specifies which files and directories in your project directory should not be copied over to your container.

Open the .dockerignore file:

  • nano .dockerignore

Inside the file, add your local node modules, npm logs, Dockerfile, and .dockerignore file:

~/node_project/.dockerignore
node_modules
npm-debug.log
Dockerfile
.dockerignore

If you are working with Git, then you will also want to add your .git directory and .gitignore file.
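With those additions, the extended file would look like this:

```
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
.gitignore
```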

Save and close the file when you are finished.

You are now ready to build the application image using the docker build command. Using the -t flag with docker build will allow you to tag the image with a memorable name. Because we are going to push the image to Docker Hub, let’s include our Docker Hub username in the tag. We will tag the image as nodejs-image-demo, but feel free to replace this with a name of your own choosing. Remember to also replace your_dockerhub_username with your own Docker Hub username:

  • docker build -t your_dockerhub_username/nodejs-image-demo .

The . specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

  • docker images

You will see the following output:

Output
REPOSITORY                                  TAG       IMAGE ID       CREATED          SIZE
your_dockerhub_username/nodejs-image-demo   latest    1c723fb2ef12   8 seconds ago    895MB
node                                        10        f09e7c96b6de   17 hours ago     893MB

It is now possible to create a container with this image using docker run. We will include three flags with this command:

  • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.

  • -d: This runs the container in the background.

  • --name: This allows us to give the container a memorable name.

Run the following command to build the container:

  • docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

Once your container is up and running, you can inspect a list of your running containers with docker ps:

  • docker ps

You will see the following output:

Output
CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   8 seconds ago   Up 7 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

With your container running, you can now visit your application by navigating your browser to http://your_server_ip. You will see your application landing page once again.

Now that you have created an image for your application, you can push it to Docker Hub for future use.

Step 4 — Using a Repository to Work with Images

By pushing your application image to a registry like Docker Hub, you make it available for subsequent use as you build and scale your containers. We will demonstrate how this works by pushing the application image to a repository and then using the image to recreate our container.

The first step to pushing the image is to log in to the Docker Hub account you created in the prerequisites:

  • docker login -u your_dockerhub_username -p your_dockerhub_password

Logging in this way will create a ~/.docker/config.json file in your user's home directory with your Docker Hub credentials.
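A quick way to check that the login succeeded is to look for a Docker Hub entry in that file. The sketch below runs against simulated file contents (the auth value is a placeholder, not real credentials; in practice you would read ~/.docker/config.json itself):

```shell
# Simulated contents of ~/.docker/config.json after `docker login`
config='{"auths":{"https://index.docker.io/v1/":{"auth":"c2FtcGxlOmNyZWRz"}}}'

# An entry for the Docker Hub registry URL means credentials were stored
if printf '%s' "$config" | grep -q 'index.docker.io'; then
  echo "Docker Hub credentials found"
fi
```

If the file has no such entry, docker push will fail with an authentication error, so this is worth confirming before pushing.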

You can now push the application image to Docker Hub using the tag you created earlier, your_dockerhub_username/nodejs-image-demo:

  • docker push your_dockerhub_username/nodejs-image-demo

Let's test the utility of the image registry by destroying our current application container and image and rebuilding them with the image in our repository.

First, list your running containers:

  • docker ps

You will see the following output:

Output
CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   3 minutes ago   Up 3 minutes   0.0.0.0:80->8080/tcp   nodejs-image-demo

Using the CONTAINER ID listed in your output, stop the running application container. Be sure to replace the highlighted ID below with your own CONTAINER ID:

  • docker stop e50ad27074a7
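Copying IDs by hand is error-prone, and the ID can instead be looked up from the NAMES column (docker stop also accepts the container name directly, so docker stop nodejs-image-demo would work as well). As a sketch, the lookup below runs against a captured sample of the docker ps output rather than a live daemon:

```shell
# Captured sample of `docker ps` output (in practice, pipe the real command)
ps_output='CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   3 minutes ago   Up 3 minutes   0.0.0.0:80->8080/tcp   nodejs-image-demo'

# The NAMES column is the last field; match it and print the first field (the ID)
container_id=$(printf '%s\n' "$ps_output" | awk '$NF == "nodejs-image-demo" {print $1}')
echo "$container_id"
```

The header row is skipped automatically because its last field is NAMES, which does not match the container name.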

List all of your images with the -a flag:

  • docker images -a

You will see the following output with the name of your image, your_dockerhub_username/nodejs-image-demo, along with the node image and the other images from your build:

Output
REPOSITORY                                  TAG       IMAGE ID       CREATED         SIZE
your_dockerhub_username/nodejs-image-demo   latest    1c723fb2ef12   7 minutes ago   895MB
<none>                                      <none>    e039d1b9a6a0   7 minutes ago   895MB
<none>                                      <none>    dfa98908c5d1   7 minutes ago   895MB
<none>                                      <none>    b9a714435a86   7 minutes ago   895MB
<none>                                      <none>    51de3ed7e944   7 minutes ago   895MB
<none>                                      <none>    5228d6c3b480   7 minutes ago   895MB
<none>                                      <none>    833b622e5492   8 minutes ago   893MB
<none>                                      <none>    5c47cc4725f1   8 minutes ago   893MB
<none>                                      <none>    5386324d89fb   8 minutes ago   893MB
<none>                                      <none>    631661025e2d   8 minutes ago   893MB
node                                        10        f09e7c96b6de   17 hours ago    893MB
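The `<none>` rows are dangling intermediate layers left behind by the build, which is what a targeted docker image prune would remove. As a sketch of how they could be singled out (run here against a trimmed sample of the listing, not live Docker output):

```shell
# Trimmed sample of `docker images -a` output (in practice, pipe the real command)
images='your_dockerhub_username/nodejs-image-demo latest 1c723fb2ef12 7 minutes ago 895MB
<none> <none> e039d1b9a6a0 7 minutes ago 895MB
node 10 f09e7c96b6de 17 hours ago 893MB'

# Print the IDs of dangling (<none>) images only
dangling=$(printf '%s\n' "$images" | awk '$1 == "<none>" {print $3}')
echo "$dangling"
```

Here we will instead use docker system prune -a, which removes all unused images and stopped containers in one pass rather than only the dangling layers.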

Remove the stopped container and all of the images, including unused or dangling images, with the following command:

  • docker system prune -a

Enter y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.

You have now removed both the container running your application image and the image itself. For more information on removing Docker containers, images, and volumes, please see How To Remove Docker Images, Containers, and Volumes.

With all of your images and containers deleted, you can now pull the application image from Docker Hub:

  • docker pull your_dockerhub_username/nodejs-image-demo

List your images once again:

  • docker images

You will see your application image:

Output
REPOSITORY                                  TAG       IMAGE ID       CREATED          SIZE
your_dockerhub_username/nodejs-image-demo   latest    1c723fb2ef12   11 minutes ago   895MB

You can now rebuild your container using the command from Step 3:

  • docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

List your running containers:

  • docker ps

Output
CONTAINER ID   IMAGE                                       COMMAND         CREATED        STATUS         PORTS                  NAMES
f6bc2f50dff6   your_dockerhub_username/nodejs-image-demo   "node app.js"   4 seconds ago   Up 3 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

Visit http://your_server_ip once again to view your running application.

Conclusion

In this tutorial you created a static web application with Express and Bootstrap, as well as a Docker image for this application. You used this image to create a container and pushed the image to Docker Hub. From there, you were able to destroy your image and container and recreate them using your Docker Hub repository.

If you are interested in learning more about working with tools like Docker Compose and Docker Machine to create multi-container setups, you can consult the following guides:

For general tips on working with container data, see:

If you are interested in other Docker-related topics, please see our complete library of Docker tutorials.

By Kathleen Juell

DigitalOcean Community Tutorials