How To Scale a Node.js Application with MongoDB Using Helm


Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.


To complete this tutorial, you will need:

Step 1 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

Clone the repository into a directory called node_project:

  • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

  • cd node_project

The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a MongoDB database.

For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

When we deploy the Helm mongodb-replicaset chart, it will create:

  • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
  • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

  • nano db.js

Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node’s process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

The constants for the connection URI and the URI string itself currently look like this:

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

...

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
...

In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

Add MONGO_REPLICASET to both the URI constant object and the connection string:

...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_REPLICASET
} = process.env;

...

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
...

Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
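The effect of this change can be sketched outside the cluster. The sample values below are placeholders (in the cluster, the real values are injected through a Secret and a ConfigMap created later in this tutorial); note how the comma-separated hostnames and the replicaSet option land in the final URI:

```javascript
// Illustrative values only; in Kubernetes these come from process.env.
const env = {
  MONGO_USERNAME: 'sammy',
  MONGO_PASSWORD: 'password',
  MONGO_HOSTNAME: 'mongo-0.mongo,mongo-1.mongo,mongo-2.mongo',
  MONGO_PORT: '27017',
  MONGO_DB: 'sharkinfo',
  MONGO_REPLICASET: 'db'
};

const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_REPLICASET
} = env;

// Hosts without an explicit port fall back to the default MongoDB port,
// so appending :27017 after the last hostname is sufficient.
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;

console.log(url);
```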

Save and close the file when you are finished editing.

With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

  • docker build -t your_dockerhub_username/node-replicas .

The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

  • docker images

You will see the following output:

REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
node                                    10-alpine   aa57b0242b33   6 days ago      71MB

Next, log in to the Docker Hub account you created in the prerequisites:

  • docker login -u your_dockerhub_username

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user’s home directory with your Docker Hub credentials.

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

  • docker push your_dockerhub_username/node-replicas

You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

Step 2 — Creating Secrets for the MongoDB Replica Set

The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

  • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
  • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

First, let’s create the keyfile. We will use the openssl command with the rand option to generate a 756-byte random string for the keyfile:

  • openssl rand -base64 756 > key.txt

The output generated by the command will be base64-encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.

You can now create a Secret called keyfilesecret using this file with kubectl create:

  • kubectl create secret generic keyfilesecret --from-file=key.txt

This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

You will see the following output indicating that your Secret has been created:

secret/keyfilesecret created

Remove key.txt:

  • rm key.txt

Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

Convert your database username:

  • echo -n 'your_database_username' | base64

Note down the value you see in the output.

Next, convert your password:

  • echo -n 'your_database_password' | base64

Take note of the value in the output here as well.
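The two echo commands above can be mirrored in Node, which is a handy way to double-check an encoded value before pasting it into a manifest. The sammy value here is purely illustrative:

```javascript
// Equivalent of `echo -n 'value' | base64`: encode the raw bytes of the
// string as base64. The -n flag matters in the shell version because a
// trailing newline would change the encoded result.
const encode = (s) => Buffer.from(s, 'utf8').toString('base64');
const decode = (s) => Buffer.from(s, 'base64').toString('utf8');

console.log(encode('sammy'));         // c2FtbXk=
console.log(decode(encode('sammy'))); // sammy
```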

Open a file for the Secret:

  • nano secret.yaml

Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

  • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

In general, it is a good idea to validate your syntax before creating resources with kubectl.

Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  user: your_encoded_username
  password: your_encoded_password

Here, we’re using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

Save and close the file when you are finished editing.

Create the Secret object with the following command:

  • kubectl create -f secret.yaml

You will see the following output:

secret/mongo-secret created

Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we’ve just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

  • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
  • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
  • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage — which we can check by typing:

  • kubectl get storageclass

If you are working with a DigitalOcean cluster, you will see the following output:

NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   21m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

  • nano mongodb-values.yaml

You will set values in this file that will do the following:

  • Enable authorization.
  • Reference your keyfilesecret and mongo-secret objects.
  • Specify 1Gi for your PersistentVolumes.
  • Set your replica set name to db.
  • Specify 3 replicas for the set.
  • Pin the mongo image to the latest version at the time of writing: 4.1.9.

Paste the following code into the file:

replicas: 3
port: 27017
replicaSetName: db
podDisruptionBudget: {}
auth:
  enabled: true
  existingKeySecret: keyfilesecret
  existingAdminSecret: mongo-secret
imagePullSecrets: []
installImage:
  repository: unguiculus/mongodb-install
  tag: 0.7
  pullPolicy: Always
copyConfigImage:
  repository: busybox
  tag: 1.29.3
  pullPolicy: Always
image:
  repository: mongo
  tag: 4.1.9
  pullPolicy: Always
extraVars: {}
metrics:
  enabled: false
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: /metrics
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}
podAnnotations: {}
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true
init:
  resources: {}
  timeout: 900
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
  enabled: true
  #storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
  enabled: false
configmap: {}
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1

The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. In our case, because we are leaving this value undefined, the chart will choose the default provisioner, which for a DigitalOcean cluster is dobs.csi.digitalocean.com.

Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by only a single node. Please see the documentation for more information about different access modes.

To learn more about the other parameters included in the file, see the configuration table included with the repo.

Save and close the file when you are finished editing.

Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

  • helm repo update

This will get the latest chart information from the stable repository.

Finally, install the chart with the following command:

  • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

  • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we’ve specified. We’ve pointed to these options by including the -f flag and our mongodb-values.yaml file.

Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

NAME:   mongo
LAST DEPLOYED: Tue Apr 16 21:51:05 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                               DATA  AGE
mongo-mongodb-replicaset-init      1     1s
mongo-mongodb-replicaset-mongodb   1     1s
mongo-mongodb-replicaset-tests     1     0s
...

You can now check on the creation of your Pods with the following command:

  • kubectl get pods

You will see output like the following as the Pods are being created:

NAME                         READY   STATUS     RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running    0          67s
mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod’s containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

Once the Pods have been created and all of their associated containers are running, you will see this output:

NAME                         READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
mongo-mongodb-replicaset-1   1/1     Running   0          94s
mongo-mongodb-replicaset-2   1/1     Running   0          36s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

  • kubectl describe pods your_pod
  • kubectl logs your_pod

Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry comprised of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names:

  • kubectl get statefulset
NAME                       READY   AGE
mongo-mongodb-replicaset   3/3     4m2s
  • kubectl get svc
NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                        ClusterIP                <none>        443/TCP     42m
mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

This means that the first member of our StatefulSet will have the following DNS entry:

mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
Because we need our application to connect to each MongoDB instance, it’s essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.
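The naming pattern above can be expressed as a small helper. This is purely an illustration using the names created by our chart release, not something the deployment itself requires:

```javascript
// Derive the stable DNS name of each StatefulSet Pod from the pattern
// $(statefulset-name)-$(ordinal).$(service-name).$(namespace).svc.cluster.local.
const podDnsNames = (statefulSet, service, namespace, replicas) =>
  Array.from({ length: replicas }, (_, ordinal) =>
    `${statefulSet}-${ordinal}.${service}.${namespace}.svc.cluster.local`);

// The StatefulSet and Headless Service created by the mongodb-replicaset
// chart share the same name, and we deployed into the default namespace.
const hosts = podDnsNames(
  'mongo-mongodb-replicaset',
  'mongo-mongodb-replicaset',
  'default',
  3
);

// Joined with commas, this is the value we will use for MONGO_HOSTNAME.
console.log(hosts.join(','));
```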

With your database instances up and running, you are ready to create the chart for your Node application.

Step 4 — Creating a Custom Application Chart and Configuring Parameters

We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

First, create a new chart directory called nodeapp with the following command:

  • helm create nodeapp

This will create a directory called nodeapp in your ~/node_project folder with the following resources:

  • A Chart.yaml file with basic information about your chart.
  • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
  • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
  • A templates/ directory with the template files that will generate Kubernetes manifests.
  • A templates/tests/ directory for test files.
  • A charts/ directory for any charts that this chart depends on.

The first file we will modify out of these default files is values.yaml. Open that file now:

  • nano nodeapp/values.yaml

The values that we will set here include:

  • The number of replicas.
  • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
  • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
  • The targetPort to specify the port on the Pod where our application will be exposed.

We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

Configure the following values in the values.yaml file:

# Default values for nodeapp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: your_dockerhub_username/node-replicas
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: LoadBalancer
  port: 80
  targetPort: 8080
...

Save and close the file when you are finished editing.

Next, open a secret.yaml file in the nodeapp/templates directory:

  • nano nodeapp/templates/secret.yaml

In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

Add the following code to the file:

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

Save and close the file when you are finished.

Next, open a file to create a ConfigMap for your application:

  • nano nodeapp/templates/configmap.yaml

In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"
  MONGO_PORT: "27017"
  MONGO_DB: "sharkinfo"
  MONGO_REPLICASET: "db"

Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

Save and close the file when you are finished editing.

With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

Step 5 — Integrating Environment Variables into Your Helm Deployment

With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

Open the application Deployment template for editing:

  • nano nodeapp/templates/deployment.yaml

Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        ports:

Next, add the following keys to the list of env variables:

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              key: MONGO_USERNAME
              name: {{ .Release.Name }}-auth
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MONGO_PASSWORD
              name: {{ .Release.Name }}-auth
        - name: MONGO_HOSTNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_HOSTNAME
              name: {{ .Release.Name }}-config
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: {{ .Release.Name }}-config
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: {{ .Release.Name }}-config
        - name: MONGO_REPLICASET
          valueFrom:
            configMapKeyRef:
              key: MONGO_REPLICASET
              name: {{ .Release.Name }}-config

Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      ...

Next, let’s modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

  • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
  • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod’s container and listening on port 8080. If the status code for the response is at least 200 and below 400, then the kubelet will conclude that the container is healthy. Otherwise, for a 4xx or 5xx status, the kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.
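The kubelet's success rule for httpGet probes can be summarized in a one-line predicate. This is a sketch of the behavior described above, not code the chart uses:

```javascript
// An httpGet probe succeeds when the response status code is
// greater than or equal to 200 and strictly less than 400.
const probeSucceeds = (statusCode) => statusCode >= 200 && statusCode < 400;

console.log(probeSucceeds(200)); // true  (healthy response)
console.log(probeSucceeds(301)); // true  (redirects still pass)
console.log(probeSucceeds(404)); // false (endpoint missing)
console.log(probeSucceeds(500)); // false (server error)
```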

Add the following modification to the stated path for the liveness and readiness probes:

apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      livenessProbe:
        httpGet:
          path: /sharks
          port: http
      readinessProbe:
        httpGet:
          path: /sharks
          port: http

Save and close the file when you are finished editing.

You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

  • helm install --name nodejs ./nodeapp

Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

You will see the following output indicating that your release has been created:

NAME:   nodejs
LAST DEPLOYED: Wed Apr 17 18:10:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME            DATA  AGE
nodejs-config   4     1s

==> v1/Deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nodejs-nodeapp   0/3     3            0           1s
...

Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

Check the status of your Pods:

  • kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          57m
mongo-mongodb-replicaset-1        1/1     Running   0          56m
mongo-mongodb-replicaset-2        1/1     Running   0          55m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

Once your Pods are up and running, check your Services:

  • kubectl get svc
NAME                              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes                        ClusterIP                   <none>        443/TCP        96m
mongo-mongodb-replicaset          ClusterIP      None         <none>        27017/TCP      58m
mongo-mongodb-replicaset-client   ClusterIP      None         <none>        27017/TCP      58m
nodejs-nodeapp                    LoadBalancer                your_lb_ip    80:31518/TCP   3m22s

The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

You should see the following landing page:

Application Landing Page

Now that your replicated application is working, let’s add some test data to ensure that replication is working between members of the replica set.

Step 6 — Testing MongoDB Replication

With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

First, make sure you have navigated your browser to the application landing page:

Application Landing Page

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character:

Shark Info Form

In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

Filled Shark Form

Click on the Submit button. You will see a page with this shark information displayed back to you:

Shark Output

Now head back to the shark information form by clicking on Sharks in the top navigation bar:

Shark Info Form

Enter a new shark of your choosing. We’ll go with Whale Shark and Large:

Enter New Shark

Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

Complete Shark Collection

Let’s check that the data we’ve entered has been replicated between the primary and secondary members of our replica set.

Get a list of your Pods:

  • kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          74m
mongo-mongodb-replicaset-1        1/1     Running   0          73m
mongo-mongodb-replicaset-2        1/1     Running   0          72m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

  • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

When prompted, enter the password associated with this username:

MongoDB shell version v4.1.9
Enter password:

You will be dropped into an administrative shell:

MongoDB server version: 4.1.9

Welcome to the MongoDB shell.
...
db:PRIMARY>

Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

  • rs.isMaster()

You will see output like the following, indicating the hostname of the primary:

db:PRIMARY> rs.isMaster()
{
        "hosts" : [
                "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
        ],
        ...
        "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
        ...

Next, switch to your sharkinfo database:

  • use sharkinfo
switched to db sharkinfo

List the collections in the database:

  • show collections

Output the documents in the collection:

  • db.sharks.find()

You will see the following output:

{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

Exit the MongoDB Shell:

  • exit

Now that we have checked the data on our primary, let’s check that it’s being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

  • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:

  • db.setSlaveOk(1)

Switch to the sharkinfo database:

  • use sharkinfo
switched to db sharkinfo

Permit the read operation of the documents in the sharks collection:

  • db.setSlaveOk(1)

Output the documents in the collection:

  • db.sharks.find()

You should now see the same information that you saw when running this method on your primary instance:

db:SECONDARY> db.sharks.find()
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

This output confirms that your application data is being replicated between the members of your replica set.


You have now deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm’s stable repository and other chart repositories.

As you move toward production, consider implementing the following:

To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.

DigitalOcean Community Tutorials

How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.


Docker is the most common containerization software used today. It enables developers to easily package apps along with their environments, which allows for quicker iteration cycles and better resource efficiency, while providing the same desired environment on each run. Docker Compose is a container orchestration tool that simplifies running multi-container applications. It allows you to run multiple interconnected containers at the same time. Instead of manually running containers, orchestration tools give developers the ability to control, scale, and extend containers together.

The benefits of using Nginx as a front-end web server are its performance, configurability, and TLS termination, which frees the app from completing these tasks. The nginx-proxy is an automated system for Docker containers that greatly simplifies the process of configuring Nginx to serve as a reverse proxy. Its Let’s Encrypt add-on can accompany the nginx-proxy to automate the generation and renewal of certificates for proxied containers.

In this tutorial, you will deploy an example Go web application with gorilla/mux as the request router and Nginx as the web server, all inside Docker containers, orchestrated by Docker Compose. You’ll use nginx-proxy with the Let’s Encrypt add-on as the reverse proxy. At the end of this tutorial, you will have deployed a Go web app accessible at your domain with multiple routes, using Docker and secured with Let’s Encrypt certificates.


Step 1 — Creating an Example Go Web App

In this step, you will set up your workspace and create a simple Go web app, which you’ll later containerize. The Go app will use the powerful gorilla/mux request router, chosen for its flexibility and speed.

Start off by logging in as sammy:

  • ssh sammy@your_server_ip

For this tutorial, you’ll store all data under ~/go-docker. Run the following command to do this:

  • mkdir ~/go-docker

Navigate to it:

  • cd ~/go-docker

You’ll store your example Go web app in a file named main.go. Create it using your text editor:

  • nano main.go

Add the following lines:

package main

import (
    "fmt"
    "net/http"

    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()

    r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "<h1>This is the homepage. Try /hello and /hello/Sammy\n</h1>")
    })

    r.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "<h1>Hello from Docker!\n</h1>")
    })

    r.HandleFunc("/hello/{name}", func(w http.ResponseWriter, r *http.Request) {
        vars := mux.Vars(r)
        title := vars["name"]

        fmt.Fprintf(w, "<h1>Hello, %s!\n</h1>", title)
    })

    http.ListenAndServe(":80", r)
}

You first import net/http and gorilla/mux packages, which provide HTTP server functionality and routing.

The gorilla/mux package implements an easier and more powerful request router and dispatcher, while at the same time maintaining interface compatibility with the standard router. Here, you instantiate a new mux router and store it in variable r. Then, you define three routes: /, /hello, and /hello/{name}. The first (/) serves as the homepage and you include a message for the page. The second (/hello) returns a greeting to the visitor. For the third route (/hello/{name}) you specify that it should take a name as a parameter and show a greeting message with the name inserted.

At the end of your file, you start the HTTP server with http.ListenAndServe and instruct it to listen on port 80, using the router you configured.

Save and close the file.

Before running your Go app, you first need to compile and pack it for execution inside a Docker container. Go is a compiled language, so before a program can run, the compiler translates the programming code into executable machine code.

You’ve set up your workspace and created an example Go web app. Next, you will deploy nginx-proxy with an automated Let’s Encrypt certificate provision.

Step 2 — Deploying nginx-proxy with Let’s Encrypt

It’s important that you secure your app with HTTPS. To accomplish this, you’ll deploy nginx-proxy via Docker Compose, along with its Let’s Encrypt add-on. This secures Docker containers proxied using nginx-proxy, and takes care of securing your app through HTTPS by automatically handling TLS certificate creation and renewal.

You’ll be storing the Docker Compose configuration for nginx-proxy in a file named nginx-proxy-compose.yaml. Create it by running:

  • nano nginx-proxy-compose.yaml

Add the following lines to the file:

version: '2'

services:
  nginx-proxy:
    restart: always
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
      - "/etc/nginx/certs"

  letsencrypt-nginx-proxy-companion:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "nginx-proxy"

Here you’re defining two containers: one for nginx-proxy and one for its Let’s Encrypt add-on (letsencrypt-nginx-proxy-companion). For the proxy, you specify the image jwilder/nginx-proxy, expose and map HTTP and HTTPS ports, and finally define volumes that will be accessible to the container for persisting Nginx-related data.

In the second block, you name the image for the Let’s Encrypt add-on configuration. Then, you configure access to Docker’s socket by defining a volume and then the existing volumes from the proxy container to inherit. Both containers have the restart property set to always, which instructs Docker to always keep them up (in the case of a crash or a system reboot).

Save and close the file.

Deploy the nginx-proxy by running:

  • docker-compose -f nginx-proxy-compose.yaml up -d

Docker Compose accepts a custom-named file via the -f flag. The up command runs the containers, and the -d flag (detached mode) instructs Compose to run the containers in the background.

Your final output will look like this:

Creating network "go-docker_default" with the default driver
Pulling nginx-proxy (jwilder/nginx-proxy:)...
latest: Pulling from jwilder/nginx-proxy
a5a6f2f73cd8: Pull complete
2343eb083a4e: Pull complete
...
Digest: sha256:619f390f49c62ece1f21dfa162fa5748e6ada15742e034fb86127e6f443b40bd
Status: Downloaded newer image for jwilder/nginx-proxy:latest
Pulling letsencrypt-nginx-proxy-companion (jrcs/letsencrypt-nginx-proxy-companion:)...
latest: Pulling from jrcs/letsencrypt-nginx-proxy-companion
...
Creating go-docker_nginx-proxy_1                       ... done
Creating go-docker_letsencrypt-nginx-proxy-companion_1 ... done

You’ve deployed nginx-proxy and its Let’s Encrypt companion using Docker Compose. Next, you’ll create a Dockerfile for your Go web app.

Step 3 — Dockerizing the Go Web App

In this section, you will create a Dockerfile containing instructions on how Docker will create an immutable image for your Go web app. Docker builds an immutable app image—similar to a snapshot of the container—using the instructions found in the Dockerfile. The image’s immutability guarantees the same environment each time a container, based on the particular image, is run.

Create the Dockerfile with your text editor:

  • nano Dockerfile

Add the following lines:

FROM golang:alpine AS build
RUN apk --no-cache add gcc g++ make git
WORKDIR /go/src/app
COPY . .
RUN go get ./...
RUN GOOS=linux go build -ldflags="-s -w" -o ./bin/web-app ./main.go

FROM alpine:3.9
RUN apk --no-cache add ca-certificates
WORKDIR /usr/bin
COPY --from=build /go/src/app/bin /go/bin
EXPOSE 80
ENTRYPOINT /go/bin/web-app --port 80

This Dockerfile has two stages. The first stage uses the golang:alpine base, which contains pre-installed Go on Alpine Linux.

Then you install gcc, g++, make, and git as the necessary compilation tools for your Go app. You set the working directory to /go/src/app, which is under the default GOPATH. You also copy the content of the current directory into the container. The first stage concludes with recursively fetching the packages used in the code and compiling main.go for release without symbol and debug info (by passing -ldflags="-s -w"). When you compile a Go program, it keeps a separate part of the binary that is used for debugging; however, this extra information increases the size of the binary and is not necessary to preserve when deploying to a production environment.

The second stage bases itself on alpine:3.9 (Alpine Linux 3.9). It installs trusted CA certificates, copies the compiled app binaries from the first stage to the current image, exposes port 80, and sets the app binary as the image entry point.

Save and close the file.

You’ve created a Dockerfile for your Go app that will fetch its packages, compile it for release, and run it upon container creation. In the next step, you will create the Docker Compose yaml file and test the app by running it in Docker.

Step 4 — Creating and Running the Docker Compose File

Now, you’ll create the Docker Compose config file and write the necessary configuration for running the Docker image you created in the previous step. Then, you will run it and check that it works correctly. In general, the Docker Compose config file specifies the containers, their settings, and the networks and volumes that the app requires. It also lets you start and stop these elements together as one unit.

You will be storing the Docker Compose configuration for the Go web app in a file named go-app-compose.yaml. Create it by running:

  • nano go-app-compose.yaml

Add the following lines to this file:

version: '2'
services:
  go-web-app:
    restart: always
    build:
      dockerfile: Dockerfile
      context: .
    environment:
      - VIRTUAL_HOST=your_domain
      - LETSENCRYPT_HOST=your_domain

Remember to set both environment variables to your domain name. Save and close the file.

This Docker Compose configuration contains one container (go-web-app), which will be your Go web app. It builds the app using the Dockerfile you’ve created in the previous step, and takes the current directory, which contains the source code, as the context for building. Furthermore, it sets two environment variables: VIRTUAL_HOST and LETSENCRYPT_HOST. nginx-proxy uses VIRTUAL_HOST to know from which domain to accept the requests. LETSENCRYPT_HOST specifies the domain name for generating TLS certificates, and must be the same as VIRTUAL_HOST, unless you specify a wildcard domain.

Now, you’ll run your Go web app in the background via Docker Compose with the following command:

  • docker-compose -f go-app-compose.yaml up -d

Your final output will look like the following:

Creating network "go-docker_default" with the default driver
Building go-web-app
Step 1/12 : FROM golang:alpine AS build
 ---> b97a72b8e97d
...
Successfully tagged go-docker_go-web-app:latest
WARNING: Image for service go-web-app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating go-docker_go-web-app_1 ... done

If you review the output presented after running the command, Docker logged every step of building the app image according to the configuration in your Dockerfile.

You can now navigate to https://your_domain in your browser to see your homepage. At your web app’s home address, you’re seeing the page served by the / route you defined in the first step.

This is the homepage. Try /hello and /hello/Sammy

Now navigate to https://your_domain/hello. You will see the message you defined in your code for the /hello route from Step 1.

Hello from Docker!

Finally, try appending a name to your web app’s address to test the other route, like https://your_domain/hello/Sammy:

Hello, Sammy!

Note: In the case that you receive an error about invalid TLS certificates, wait a few minutes for the Let’s Encrypt add-on to provision the certificates. If you are still getting errors after a short time, double check what you’ve entered against the commands and configuration shown in this step.

You’ve created the Docker Compose file and written configuration for running your Go app inside a container. To finish, you navigated to your domain to check that the gorilla/mux router setup is serving requests to your Dockerized Go web app correctly.


You have now successfully deployed your Go web app with Docker and Nginx on Ubuntu 18.04. With Docker, maintaining applications becomes less of a hassle, because the environment the app is executed in is guaranteed to be the same each time it’s run. The gorilla/mux package has excellent documentation and offers more sophisticated features, such as naming routes and serving static files. For more control over the Go HTTP server module, such as defining custom timeouts, visit the official docs.


Mike Driscoll: Creating a GUI Application for NASA’s API with wxPython

Growing up, I have always found the universe and space in general to be exciting. It is fun to dream about what worlds remain unexplored. I also enjoy seeing photos from other worlds or thinking about the vastness of space. What does this have to do with Python though? Well, the National Aeronautics and Space Administration (NASA) has a web API that allows you to search their image library.

You can read all about it on their website.

The NASA website recommends getting an Application Programming Interface (API) key. If you go to that website, the form that you will fill out is nice and short.

Technically, you do not need an API key to make requests against NASA’s services. However they do have rate limiting in place for developers who access their site without an API key. Even with a key, you are limited to a default of 1000 requests per hour. If you go over your allocation, you will be temporarily blocked from making requests. You can contact NASA to request a higher rate limit though.
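As a back-of-the-envelope check, a default quota of 1,000 requests per hour works out to one request every 3.6 seconds if you space them evenly. A tiny sketch of that arithmetic (min_spacing_seconds is a hypothetical helper, not part of NASA’s API):

```python
def min_spacing_seconds(requests_per_hour: int) -> float:
    # Evenly spacing requests keeps a client under an hourly quota:
    # 3600 seconds in an hour divided by the allowed number of requests.
    return 3600 / requests_per_hour

# With the default limit of 1000 requests/hour, wait at least
# min_spacing_seconds(1000) = 3.6 seconds between calls.
```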

Interestingly, the documentation doesn’t really say how many requests you can make without an API key.

The API documentation disagrees with NASA’s Image API documentation about which endpoints to hit, which makes working with their website a bit confusing.

For example, you will see the API documentation talking about this URL:

https://api.nasa.gov
But in the Image API documentation, the API root is:

https://images-api.nasa.gov
For the purposes of this tutorial, you will be using the latter.

This article is adapted from my book:

Creating GUI Applications with wxPython

Purchase now on Leanpub

Using NASA’s API

When you start out using an unfamiliar API, it is always best to begin by reading the documentation for that interface. Another approach is to do a quick Internet search and see if there is a Python package that wraps your target API. Unfortunately, there do not seem to be any maintained NASA libraries for Python. When this happens, you get to create your own.

To get started, try reading the NASA Images API document.

Their API documentation isn’t very long, so it shouldn’t take you very long to read or at least skim it.

The next step is to take that information and try playing around with their API.

Here are the first few lines of an experiment at accessing their API:

import requests
from urllib.parse import urlencode, quote_plus

base_url = 'https://images-api.nasa.gov/search'
search_term = 'apollo 11'
desc = 'moon landing'
media = 'image'
query = {'q': search_term, 'description': desc, 'media_type': media}
full_url = base_url + '?' + urlencode(query, quote_via=quote_plus)

r = requests.get(full_url)
data = r.json()

If you run this in a debugger, you can print out the JSON that is returned.

Here is a snippet of what was returned:

'items': [{'data': [{'center': 'HQ',
                     'date_created': '2009-07-18T00:00:00Z',
                     'description': 'On the eve of the fortieth anniversary '
                                    "of Apollo 11's first human landing on "
                                    'the Moon, Apollo 11 crew member, Buzz '
                                    'Aldrin speaks during a lecture in honor '
                                    'of Apollo 11 at the National Air and '
                                    'Space Museum in Washington, Sunday, '
                                    'July 19, 2009. Guest speakers included '
                                    'Former NASA Astronaut and U.S. Senator '
                                    'John Glenn, NASA Mission Control '
                                    'creator and former NASA Johnson Space '
                                    'Center director Chris Kraft and the '
                                    'crew of Apollo 11.  Photo Credit: '
                                    '(NASA/Bill Ingalls)',
                     'keywords': ['Apollo 11',
                                  'Apollo 40th Anniversary',
                                  'Buzz Aldrin',
                                  'National Air and Space Museum (NASM)',
                                  'Washington, DC'],
                     'location': 'National Air and Space Museum',
                     'media_type': 'image',
                     'nasa_id': '200907190008HQ',
                     'photographer': 'NASA/Bill Ingalls',
                     'title': 'Glenn Lecture With Crew of Apollo 11'}],
           'href': '',
           'links': [{'href': '',
                      'rel': 'preview',
                      'render': 'image'}]}

Now that you know what the format of the JSON is, you can try parsing it a bit.

Let’s add the following lines of code to your Python script:

item = data['collection']['items'][0]
nasa_id = item['data'][0]['nasa_id']
asset_url = 'https://images-api.nasa.gov/asset/' + nasa_id
image_request = requests.get(asset_url)
image_json = image_request.json()
image_urls = [url['href'] for url in image_json['collection']['items']]
print(image_urls)

This will extract the first item in the list of items from the JSON response. Then you can extract the nasa_id, which is required to get all the images associated with this particular result. Now you can add that nasa_id to a new URL end point and make a new request.

The request for the image JSON returns this:

{'collection': {'href': '',
                'items': [{'href': ''},
                          {'href': ''},
                          {'href': ''},
                          {'href': ''},
                          {'href': ''},
                          {'href': ''}],
                'version': '1.0'}}

The last two lines in your Python code will extract the URLs from the JSON. Now you have all the pieces you need to write a basic user interface!

Designing the User Interface

There are many different ways you could design your image downloading application. You will be doing what is simplest as that is almost always the quickest way to create a prototype. The nice thing about prototyping is that you end up with all the pieces you will need to create a useful application. Then you can take your knowledge and either enhance the prototype or create something new with the knowledge you have gained.

Here’s a mockup of what you will be attempting to create:

NASA Image Search Mockup

As you can see, you will want an application with the following features:

  • A search bar
  • A widget to hold the search results
  • A way to display an image when a result is chosen
  • The ability to download the image

Let’s learn how to create this user interface now!

Creating the NASA Search Application

Rapid prototyping is an idea in which you will create a small, runnable application as quickly as you can. Rather than spending a lot of time getting all the widgets laid out, let’s add them from top to bottom in the application. This will give you something to work with more quickly than creating a series of nested sizers will.

Let’s start by creating a script called

import os
import requests
import wx

from download_dialog import DownloadDialog
from ObjectListView import ObjectListView, ColumnDefn
from urllib.parse import urlencode, quote_plus

Here you import a few new items that you haven’t seen yet. The first is the requests package. This is a handy package for downloading files and doing things on the Internet with Python. Many developers feel that it is better than Python’s own urllib. You will need to install it to use it, though. You will also need to install ObjectListView.

Here is how you can do that with pip:

pip install requests ObjectListView

The other new piece is the imports from urllib.parse. You will be using this module for encoding URL parameters. Lastly, DownloadDialog is a class for a small dialog that you will be creating for downloading NASA images.

Since you will be using ObjectListView in this application, you will need a class to represent the objects in that widget:

class Result:

    def __init__(self, item):
        data = item['data'][0]
        self.title = data['title']
        self.location = data.get('location', '')
        self.nasa_id = data['nasa_id']
        self.description = data['description']
        self.photographer = data.get('photographer', '')
        self.date_created = data['date_created']
        self.item = item

        if item.get('links'):
            try:
                self.thumbnail = item['links'][0]['href']
            except:
                self.thumbnail = ''

The Result class is what you will be using to hold the data that makes up each row in your ObjectListView. The item parameter is a portion of the JSON that you receive from NASA as a response to your query. In this class, you parse out the information you require.

In this case, you want the following fields:

  • Title
  • Location of image
  • NASA’s internal ID
  • Description of the photo
  • The photographer’s name
  • The date the image was created
  • The thumbnail URL

Some of these items aren’t always included in the JSON response, so you will use the dictionary’s get() method to return an empty string in those cases.
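For example, here is how the optional fields behave with get() on a trimmed-down result entry. The dictionary below is illustrative; the field names match the list above, and the values are taken from the sample JSON shown earlier:

```python
# A pared-down 'data' entry; 'location' and 'photographer' are missing,
# as they sometimes are in real responses.
data = {
    'title': 'Glenn Lecture With Crew of Apollo 11',
    'nasa_id': '200907190008HQ',
}

title = data['title']                  # always present, so index directly
location = data.get('location', '')    # optional: fall back to empty string
photographer = data.get('photographer', '')
```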

Now let’s start working on the UI:

class MainPanel(wx.Panel):

    def __init__(self, parent):
        super().__init__(parent)
        self.search_results = []
        self.max_size = 300
        self.paths = wx.StandardPaths.Get()
        font = wx.Font(12, wx.SWISS, wx.NORMAL, wx.NORMAL)

        main_sizer = wx.BoxSizer(wx.VERTICAL)

The MainPanel is where the bulk of your code will be. Here you do some housekeeping: you create a search_results list to hold Result objects when the user does a search, set the max_size of the thumbnail image and the font to be used, create the main sizer, and get some StandardPaths as well.

Now let’s add the following code to the __init__():

txt = 'Search for images on NASA'
label = wx.StaticText(self, label=txt)
main_sizer.Add(label, 0, wx.ALL, 5)
self.search = wx.SearchCtrl(
    self, style=wx.TE_PROCESS_ENTER, size=(-1, 25))
self.search.Bind(wx.EVT_SEARCHCTRL_SEARCH_BTN, self.on_search)
self.search.Bind(wx.EVT_TEXT_ENTER, self.on_search)
main_sizer.Add(self.search, 0, wx.EXPAND)

Here you create a header label for the application using wx.StaticText. Then you add a wx.SearchCtrl, which is very similar to a wx.TextCtrl except that it has special buttons built into it. You also bind the search button’s click event (EVT_SEARCHCTRL_SEARCH_BTN) and EVT_TEXT_ENTER to a search related event handler (on_search).

The next few lines add the search results widget:

self.search_results_olv = ObjectListView(
    self, style=wx.LC_REPORT | wx.SUNKEN_BORDER)
self.search_results_olv.SetEmptyListMsg("No Results Found")
self.search_results_olv.Bind(wx.EVT_LIST_ITEM_SELECTED,
                             self.on_selection)
main_sizer.Add(self.search_results_olv, 1, wx.EXPAND)
self.update_search_results()

This code sets up the ObjectListView in much the same way as some of my other articles use it. You customize the empty message by calling SetEmptyListMsg() and you also bind the widget to EVT_LIST_ITEM_SELECTED so that you do something when the user selects a search result.

Now let’s add the rest of the code to the __init__() method:

main_sizer.AddSpacer(30)
self.title = wx.TextCtrl(self, style=wx.TE_READONLY)
self.title.SetFont(font)
main_sizer.Add(self.title, 0, wx.ALL|wx.EXPAND, 5)
img = wx.Image(240, 240)
self.image_ctrl = wx.StaticBitmap(self,
                                  bitmap=wx.Bitmap(img))
main_sizer.Add(self.image_ctrl, 0, wx.CENTER|wx.ALL, 5)
download_btn = wx.Button(self, label='Download Image')
download_btn.Bind(wx.EVT_BUTTON, self.on_download)
main_sizer.Add(download_btn, 0, wx.ALL|wx.CENTER, 5)

self.SetSizer(main_sizer)

These final few lines of code add a title text control and an image widget that will update when a result is selected. You also add a download button to allow the user to select which image size they would like to download. NASA usually gives several different versions of the image from thumbnail all the way up to the original TIFF image.

The first event handler to look at is on_download():

def on_download(self, event):
    selection = self.search_results_olv.GetSelectedObject()
    if selection:
        with DownloadDialog(selection) as dlg:
            dlg.ShowModal()

Here you call GetSelectedObject() to get the user’s selection. If the user hasn’t selected anything, then this method exits. On the other hand, if the user has selected an item, then you instantiate the DownloadDialog and show it to the user to allow them to download something.

Now let’s learn how to do a search:

def on_search(self, event):
    search_term = event.GetString()
    if search_term:
        query = {'q': search_term, 'media_type': 'image'}
        full_url = base_url + '?' + urlencode(query, quote_via=quote_plus)
        r = requests.get(full_url)
        data = r.json()
        self.search_results = []
        for item in data['collection']['items']:
            if item.get('data') and len(item.get('data')) > 0:
                data = item['data'][0]
                if data['title'].strip() == '':
                    # Skip results with blank titles
                    continue
                result = Result(item)
                self.search_results.append(result)
        self.update_search_results()

The on_search() event handler will get the string that the user has entered into the search control, or an empty string if nothing was entered. Assuming that the user actually enters something to search for, you use NASA's general search parameter, q, and hard-code the media_type to image. Then you encode the query into a properly formatted URL and use requests.get() to request a JSON response.
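The query-to-URL step can be tried on its own. Here is a standalone sketch of that encoding, assuming NASA's image search endpoint as the base URL:

```python
from urllib.parse import urlencode, quote_plus

# NASA's image search endpoint (an assumption mirroring the app's base_url)
base_url = 'https://images-api.nasa.gov/search'

def build_search_url(search_term):
    # Same encoding as on_search(): quote_plus turns spaces into '+'
    query = {'q': search_term, 'media_type': 'image'}
    return base_url + '?' + urlencode(query, quote_via=quote_plus)

print(build_search_url('apollo 11'))
# → https://images-api.nasa.gov/search?q=apollo+11&media_type=image
```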

Next you attempt to loop over the results of the search. Note that if no data is returned, this code will fail and an exception will be thrown. But if you do get data, then you will need to parse it to get the bits and pieces you need.

You will skip items that don’t have the title field set. Otherwise you will create a Result object and add it to the search_results list. At the end of the method, you tell your UI to update the search results.
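The filtering rules above can be exercised without the GUI. This sketch runs the same checks against a fabricated payload shaped like NASA's search response (the sample data is made up purely for illustration):

```python
def extract_titled_items(payload):
    # Keep only items that have a data entry with a non-blank title,
    # mirroring the loop in on_search()
    results = []
    for item in payload['collection']['items']:
        if item.get('data') and len(item.get('data')) > 0:
            data = item['data'][0]
            if data['title'].strip() == '':
                continue  # Skip results with blank titles
            results.append(item)
    return results

sample = {'collection': {'items': [
    {'data': [{'title': 'Apollo 11 Launch'}]},
    {'data': [{'title': '   '}]},  # blank title: skipped
    {'data': []},                  # no data entries: skipped
]}}
print(len(extract_titled_items(sample)))  # → 1
```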

Before we get to that function, you will need to create on_selection():

def on_selection(self, event):
    selection = self.search_results_olv.GetSelectedObject()
    self.title.SetValue(f'{selection.title}')
    if selection.thumbnail:
        self.update_image(selection.thumbnail)
    else:
        img = wx.Image(240, 240)
        self.image_ctrl.SetBitmap(wx.Bitmap(img))
        self.Refresh()
        self.Layout()

Once again, you get the selected item, but this time you take that selection and update the title text control with the selection’s title text. Then you check to see if there is a thumbnail and update that accordingly if there is one. When there is no thumbnail, you set it back to an empty image as you do not want it to keep showing a previously selected image.

The next method to create is update_image():

def update_image(self, url):
    filename = url.split('/')[-1]
    tmp_location = os.path.join(self.paths.GetTempDir(), filename)
    r = requests.get(url)
    with open(tmp_location, "wb") as thumbnail:
        thumbnail.write(r.content)

    if os.path.exists(tmp_location):
        img = wx.Image(tmp_location, wx.BITMAP_TYPE_ANY)
        W = img.GetWidth()
        H = img.GetHeight()
        if W > H:
            NewW = self.max_size
            NewH = self.max_size * H / W
        else:
            NewH = self.max_size
            NewW = self.max_size * W / H
        img = img.Scale(NewW, NewH)
    else:
        img = wx.Image(240, 240)

    self.image_ctrl.SetBitmap(wx.Bitmap(img))
    self.Refresh()
    self.Layout()

The update_image() method accepts a URL as its sole argument. It splits the filename off of this URL and joins it to the computer's temp directory to create a download location. Your code then downloads the image and checks that the file saved correctly. If it did, the thumbnail is loaded and scaled down according to the max_size that you set; otherwise you fall back to a blank image.

The last couple of lines call Refresh() and Layout() on the panel so that the widgets appear correctly.
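The aspect-ratio arithmetic in update_image() is easy to check in isolation. Here is a small sketch of it, with max_size defaulting to the 300 set in the panel's __init__():

```python
def scaled_size(width, height, max_size=300):
    # Shrink the longer side to max_size and scale the other side
    # proportionally, exactly as update_image() does
    if width > height:
        return max_size, max_size * height / width
    return max_size * width / height, max_size

print(scaled_size(640, 480))  # → (300, 225.0)
print(scaled_size(480, 640))  # → (225.0, 300)
```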

Finally you need to create the last method:

def update_search_results(self):
    self.search_results_olv.SetColumns([
        ColumnDefn("Title", "left", 250, "title"),
        ColumnDefn("Description", "left", 350, "description"),
        ColumnDefn("Photographer", "left", 100, "photographer"),
        ColumnDefn("Date Created", "left", 150, "date_created")
    ])
    self.search_results_olv.SetObjects(self.search_results)

Here you define the columns to show in the ObjectListView widget and then load the current search results into it. The frame code, which creates the frame, sets the title and initial size, adds the panel and shows the frame, is not reproduced here.

This is what the main UI will look like:

NASA Image Search Main App

Now let’s learn what goes into making a download dialog!

The Download Dialog

The download dialog will allow the user to download one or more of the images that they have selected. There are almost always at least two versions of every image and sometimes five or six.

The first piece of code to learn about is the first few lines:

# download_dialog.py

import requests
import wx

wildcard = "All files (*.*)|*.*"

Here you once again import requests and set up a wildcard that you will use when saving the images.

Now let’s create the dialog’s __init__():

class DownloadDialog(wx.Dialog):

    def __init__(self, selection):
        super().__init__(None, title='Download images')
        self.paths = wx.StandardPaths.Get()
        main_sizer = wx.BoxSizer(wx.VERTICAL)
        self.list_box = wx.ListBox(self, choices=[], size=wx.DefaultSize)
        urls = self.get_image_urls(selection)
        if urls:
            choices = {url.split('/')[-1]: url for url in urls if 'jpg' in url}
            for choice in choices:
                self.list_box.Append(choice, choices[choice])
        main_sizer.Add(self.list_box, 1, wx.EXPAND|wx.ALL, 5)

        save_btn = wx.Button(self, label='Save')
        save_btn.Bind(wx.EVT_BUTTON, self.on_save)
        main_sizer.Add(save_btn, 0, wx.ALL|wx.CENTER, 5)
        self.SetSizer(main_sizer)

In this example, you create a new reference to StandardPaths and add a wx.ListBox. The list box will hold the variants of the photos that you can download. It will also automatically add a scrollbar should there be too many results to fit on-screen at once. You call get_image_urls() with the passed-in selection object to get a list of URLs. Then you loop over the URLs and keep the ones that have jpg in their name. This does mean that you miss out on alternate image file types, such as PNG or TIFF.

This gives you an opportunity to enhance the code yourself. The reason that you are filtering the URLs is that the results usually have non-image URLs in the mix, and you probably don't want to present those as downloadable, as that would confuse the user.
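You can see the filtering dict comprehension in action with a few sample URLs (the URLs below are made up for illustration):

```python
urls = [
    'http://images-assets.nasa.gov/image/demo/demo~orig.jpg',
    'http://images-assets.nasa.gov/image/demo/demo~thumb.jpg',
    'http://images-assets.nasa.gov/image/demo/metadata.json',
]

# Map each filename to its full URL, keeping only the jpg variants
choices = {url.split('/')[-1]: url for url in urls if 'jpg' in url}
print(sorted(choices))  # → ['demo~orig.jpg', 'demo~thumb.jpg']
```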

The last widget to be added is the “Save” button. You could add a “Cancel” button as well, but the dialog has an exit button along the top that works, so it’s not required.

Now it’s time to learn what get_image_urls() does:

def get_image_urls(self, item):
    # Query NASA's asset endpoint for all the files behind this image
    asset_url = f'https://images-api.nasa.gov/asset/{item.nasa_id}'
    image_request = requests.get(asset_url)
    image_json = image_request.json()
    try:
        image_urls = [url['href'] for url in image_json['collection']['items']]
    except:
        image_urls = []
    return image_urls

This helper takes the selected item, requests the list of asset URLs for it from NASA's API using its nasa_id, and returns an empty list if the response cannot be parsed.

The next piece is the on_save() event handler, which is activated when the user presses the “Save” button. When the user tries to save something without selecting an item in the list box, GetSelection() will return -1. Should that happen, you show them a MessageDialog to tell them that they might want to select something. When they do select something, you will show them a wx.FileDialog that allows them to choose where to save the file and what to call it.

The event handler calls the save() method, so that is your next project:

def save(self, path):
    selection = self.list_box.GetSelection()
    r = requests.get(
        self.list_box.GetClientData(selection))
    try:
        with open(path, "wb") as image:
            image.write(r.content)

        message = 'File saved successfully'
        with wx.MessageDialog(None, message=message,
                              caption='Save Successful',
                              style=wx.ICON_INFORMATION) as dlg:
            dlg.ShowModal()
    except:
        message = 'File failed to save!'
        with wx.MessageDialog(None, message=message,
                              caption='Save Failed',
                              style=wx.ICON_ERROR) as dlg:
            dlg.ShowModal()

Here you get the selection again and use the requests package to download the image. Note that there is no check to make sure that the user has added an extension, let alone the right extension. You can add that yourself when you get a chance.

Anyway, when the file is finished downloading, you will show the user a message letting them know.

If an exception occurs, you can show them a dialog that lets them know that too!

Here is what the download dialog looks like:

NASA Image Download Dialog

Now let’s add some new functionality!

Adding Advanced Search

There are several fields that you can use to help narrow your search. However you don’t want to clutter your user interface with them unless the user really wants to use those filters. To allow for that, you can add an “Advanced Search” option.

Adding this option requires you to rearrange your code a bit, so let's copy your main script and the download dialog module to a new folder called version_2.

Now rename your main script to main.py so that it is obvious which script is the main entry point for your program. To make things more modular, you will extract your search results into their own class and put the advanced search in a separate class. This means that you will have three panels in the end:

  • The main panel
  • The search results panel
  • The advanced search panel

Here is what the main dialog will look like when you are finished:

NASA Image Search with Advanced Search Option

Let’s go over each of these separately.

The main.py Script

The main module is your primary entry point for your application. An entry point is the code that your user will run to launch your application. It is also the script that you would use if you were to bundle up your application into an executable.

Let’s take a look at how your main module starts out:

# main.py

import wx

from advanced_search import AdvancedSearch
from regular_search import RegularSearch
from pubsub import pub


class MainPanel(wx.Panel):

    def __init__(self, parent):
        super().__init__(parent)
        pub.subscribe(self.update_ui, 'update_ui')

        self.main_sizer = wx.BoxSizer(wx.VERTICAL)
        search_sizer = wx.BoxSizer()

This example imports both of your search-related panels:

  • AdvancedSearch
  • RegularSearch

It also uses pubsub to subscribe to an update topic.

Let’s find out what else is in the __init__():

txt = 'Search for images on NASA'
label = wx.StaticText(self, label=txt)
self.main_sizer.Add(label, 0, wx.ALL, 5)
self.search = wx.SearchCtrl(
    self, style=wx.TE_PROCESS_ENTER, size=(-1, 25))
self.search.Bind(wx.EVT_SEARCHCTRL_SEARCH_BTN, self.on_search)
self.search.Bind(wx.EVT_TEXT_ENTER, self.on_search)
search_sizer.Add(self.search, 1, wx.EXPAND)

self.advanced_search_btn = wx.Button(self, label='Advanced Search',
                                     size=(-1, 25))
self.advanced_search_btn.Bind(wx.EVT_BUTTON, self.on_advanced_search)
search_sizer.Add(self.advanced_search_btn, 0, wx.ALL, 5)
self.main_sizer.Add(search_sizer, 0, wx.EXPAND)

Here you add the title for the page along with the search control widget as you did before. You also add the new Advanced Search button and use a new sizer to contain the search widget and the button. You then add that sizer to your main sizer.

Now let’s add the panels:

self.search_panel = RegularSearch(self)
self.advanced_search_panel = AdvancedSearch(self)
self.advanced_search_panel.Hide()
self.main_sizer.Add(self.search_panel, 1, wx.EXPAND)
self.main_sizer.Add(self.advanced_search_panel, 1, wx.EXPAND)

In this example, you instantiate the RegularSearch and the AdvancedSearch panels. Since the RegularSearch is the default, you hide the AdvancedSearch from the user on startup.

Now let’s update on_search():

def on_search(self, event):
    search_term = event.GetString()
    if search_term:
        query = {'q': search_term, 'media_type': 'image'}
        pub.sendMessage('search_results', query=query)

The on_search() method will get called when the user presses “Enter / Return” on their keyboard or when they press the search button icon in the search control widget. If the user has entered a search string into the search control, a search query will be constructed and then sent off using pubsub.

Let’s find out what happens when the user presses the Advanced Search button:

def on_advanced_search(self, event):
    self.search_panel.Hide()
    self.advanced_search_btn.Hide()
    self.advanced_search_panel.Show()
    self.main_sizer.Layout()

When on_advanced_search() fires, it hides the regular search panel and the advanced search button. Next, it shows the advanced search panel and calls Layout() on the main_sizer. This will cause the panels to switch out and resize to fit properly within the frame.

The last method to create is update_ui():

def update_ui(self):
    """
    Hide advanced search and re-show original screen

    Called by pubsub when advanced search is invoked
    """
    self.advanced_search_panel.Hide()
    self.search_panel.Show()
    self.advanced_search_btn.Show()
    self.main_sizer.Layout()

The update_ui() method is called when the user does an Advanced Search. This method is invoked by pubsub. It will do the reverse of on_advanced_search() and un-hide all the widgets that were hidden when the advanced search panel was shown. It will also hide the advanced search panel.

The frame code is the same as it was before, so it is not shown here.

Let’s move on and learn how the regular search panel is created!

The regular_search.py Script

The regular_search module is your refactored module that contains the ObjectListView that will show your search results. It also has the Download button on it.

The following methods / classes will not be covered as they are the same as in the previous iteration:

  • on_download()
  • on_selection()
  • update_image()
  • update_search_results()
  • The Result class

Let’s get started by seeing how the first few lines in the module are laid out:

# regular_search.py

import os
import requests
import wx

from download_dialog import DownloadDialog
from ObjectListView import ObjectListView, ColumnDefn
from pubsub import pub
from urllib.parse import urlencode, quote_plus

base_url = 'https://images-api.nasa.gov/search'

Here you have all the imports you had in the original script from version_1. You also have the base_url that you need to make requests to NASA’s image API. The only new import is for pubsub.

Let’s go ahead and create the RegularSearch class:

class RegularSearch(wx.Panel):

    def __init__(self, parent):
        super().__init__(parent)
        self.search_results = []
        self.max_size = 300
        font = wx.Font(12, wx.SWISS, wx.NORMAL, wx.NORMAL)
        main_sizer = wx.BoxSizer(wx.VERTICAL)
        self.paths = wx.StandardPaths.Get()
        pub.subscribe(self.load_search_results, 'search_results')

        self.search_results_olv = ObjectListView(
            self, style=wx.LC_REPORT | wx.SUNKEN_BORDER)
        self.search_results_olv.SetEmptyListMsg("No Results Found")
        self.search_results_olv.Bind(wx.EVT_LIST_ITEM_SELECTED,
                                     self.on_selection)
        main_sizer.Add(self.search_results_olv, 1, wx.EXPAND)
        self.update_search_results()

This code will initialize the search_results list to an empty list and set the max_size of the image. It also sets up a sizer and the ObjectListView widget that you use for displaying the search results to the user. The code is actually quite similar to the first iteration of the code when all the classes were combined.

Here is the rest of the code for the __init__():

main_sizer.AddSpacer(30)
self.title = wx.TextCtrl(self, style=wx.TE_READONLY)
self.title.SetFont(font)
main_sizer.Add(self.title, 0, wx.ALL|wx.EXPAND, 5)
img = wx.Image(240, 240)
self.image_ctrl = wx.StaticBitmap(self,
                                  bitmap=wx.Bitmap(img))
main_sizer.Add(self.image_ctrl, 0, wx.CENTER|wx.ALL, 5)
download_btn = wx.Button(self, label='Download Image')
download_btn.Bind(wx.EVT_BUTTON, self.on_download)
main_sizer.Add(download_btn, 0, wx.ALL|wx.CENTER, 5)

self.SetSizer(main_sizer)

The first item here is to add a spacer to the main_sizer. Then you add the title and the img related widgets. The last widget to be added is still the download button.

Next, you will need to write a new method:

def reset_image(self):
    img = wx.Image(240, 240)
    self.image_ctrl.SetBitmap(wx.Bitmap(img))
    self.Refresh()

The reset_image() method is for resetting the wx.StaticBitmap back to an empty image. This can happen when the user uses the regular search first, selects an item and then decides to do an advanced search. Resetting the image prevents the user from seeing a previously selected item and potentially confusing the user.

The last method you need to add is load_search_results():

def load_search_results(self, query):
    full_url = base_url + '?' + urlencode(query, quote_via=quote_plus)
    r = requests.get(full_url)
    data = r.json()
    self.search_results = []
    for item in data['collection']['items']:
        if item.get('data') and len(item.get('data')) > 0:
            data = item['data'][0]
            if data['title'].strip() == '':
                # Skip results with blank titles
                continue
            result = Result(item)
            self.search_results.append(result)
    self.update_search_results()
    self.reset_image()

The load_search_results() method is called using pubsub. Both the main and the advanced_search modules call it by passing in a query dictionary. Then you encode that dictionary into a formatted URL. Next you use requests to send a JSON request and you then extract the results. This is also where you call reset_image() so that when a new set of results loads, there is no result selected.

Now you are ready to create an advanced search!

The advanced_search.py Script

The advanced_search module is a wx.Panel that has all the widgets you need to do an advanced search against NASA’s API. If you read their documentation, you will find that there are around a dozen filters that can be applied to a search.

Let’s start at the top:

class AdvancedSearch(wx.Panel):

    def __init__(self, parent):
        super().__init__(parent)

        self.main_sizer = wx.BoxSizer(wx.VERTICAL)

        self.free_text = wx.TextCtrl(self)
        self.ui_helper('Free text search:', self.free_text)
        self.nasa_center = wx.TextCtrl(self)
        self.ui_helper('NASA Center:', self.nasa_center)
        self.description = wx.TextCtrl(self)
        self.ui_helper('Description:', self.description)
        self.description_508 = wx.TextCtrl(self)
        self.ui_helper('Description 508:', self.description_508)
        self.keywords = wx.TextCtrl(self)
        self.ui_helper('Keywords (separate with commas):',
                       self.keywords)

The code to set up the various filters is all pretty similar. You create a text control for the filter, then you pass it into ui_helper() along with a string that is a label for the text control widget. Repeat until you have all the filters in place.

Here are the rest of the filters:

self.location = wx.TextCtrl(self)
self.ui_helper('Location:', self.location)
self.nasa_id = wx.TextCtrl(self)
self.ui_helper('NASA ID:', self.nasa_id)
self.photographer = wx.TextCtrl(self)
self.ui_helper('Photographer:', self.photographer)
self.secondary_creator = wx.TextCtrl(self)
self.ui_helper('Secondary photographer:', self.secondary_creator)
self.title = wx.TextCtrl(self)
self.ui_helper('Title:', self.title)
search = wx.Button(self, label='Search')
search.Bind(wx.EVT_BUTTON, self.on_search)
self.main_sizer.Add(search, 0, wx.ALL | wx.CENTER, 5)

self.SetSizer(self.main_sizer)

At the end, you set the sizer to the main_sizer. Note that not all the filters that are in NASA’s API are implemented in this code. For example, I didn’t add media_type because this application will be hard-coded to only look for images. However if you wanted audio or video, you could update this application for that. I also didn’t include the year_start and year_end filters. Feel free to add those if you wish.

Now let’s move on and create the ui_helper() method:

def ui_helper(self, label, textctrl):
    sizer = wx.BoxSizer()
    lbl = wx.StaticText(self, label=label, size=(150, -1))
    sizer.Add(lbl, 0, wx.ALL, 5)
    sizer.Add(textctrl, 1, wx.ALL | wx.EXPAND, 5)
    self.main_sizer.Add(sizer, 0, wx.EXPAND)

The ui_helper() takes in label text and the text control widget. It then creates a wx.BoxSizer and a wx.StaticText. The wx.StaticText is added to the sizer, as is the passed-in text control widget. Finally the new sizer is added to the main_sizer and then you’re done. This is a nice way to reduce repeated code.

The last item to create in this class is on_search():

def on_search(self, event):
    query = {'q': self.free_text.GetValue(),
             'media_type': 'image',
             'center': self.nasa_center.GetValue(),
             'description': self.description.GetValue(),
             'description_508': self.description_508.GetValue(),
             'keywords': self.keywords.GetValue(),
             'location': self.location.GetValue(),
             'nasa_id': self.nasa_id.GetValue(),
             'photographer': self.photographer.GetValue(),
             'secondary_creator': self.secondary_creator.GetValue(),
             'title': self.title.GetValue()}
    pub.sendMessage('update_ui')
    pub.sendMessage('search_results', query=query)

When the user presses the Search button, this event handler gets called. It creates the search query based on what the user has entered into each of the fields. Then the handler will send out two messages using pubsub. The first message will update the UI so that the advanced search is hidden and the search results are shown. The second message will actually execute the search against NASA’s API.
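One possible refinement, not part of the article's code, is to drop the filters the user left blank before sending the query, which keeps the request URL short:

```python
def prune_query(query):
    # Keep only the filters that actually have a value
    return {key: value for key, value in query.items() if value}

query = {'q': 'apollo', 'media_type': 'image', 'center': '', 'title': ''}
print(prune_query(query))  # → {'q': 'apollo', 'media_type': 'image'}
```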

Here is what the advanced search page looks like:

NASA Image Search with Advanced Search Page

Now let’s update the download dialog.

The download_dialog.py Script

The download dialog requires only a couple of minimal changes. Basically, you need to add an import of Python's os module and then update the save() method.

Add the following lines to the beginning of the function:

def save(self, path):
    _, ext = os.path.splitext(path)
    if ext.lower() != '.jpg':
        path = f'{path}.jpg'

This code was added to account for the case where the user does not specify the extension of the image in the saved file name.
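You can verify the behavior of that check with os.path.splitext() on a few sample paths:

```python
import os

def with_jpg_extension(path):
    # Mirror the check added to save(): append .jpg when it is missing
    _, ext = os.path.splitext(path)
    if ext.lower() != '.jpg':
        path = f'{path}.jpg'
    return path

print(with_jpg_extension('moon'))      # → moon.jpg
print(with_jpg_extension('moon.JPG'))  # → moon.JPG
print(with_jpg_extension('moon.png'))  # → moon.png.jpg
```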

Wrapping Up

This article covered a lot of fun new information. You learned one approach for working with an open API that doesn’t have a Python wrapper already around it. You discovered the importance of reading the API documentation and then added a user interface to that API. Then you learned how to parse JSON and download images from the Internet.

While it is not covered here, Python has a json module that you could use as well.

Here are some ideas for enhancing this application:

  • Caching search results
  • Downloading thumbnails in the background
  • Downloading links in the background

You could use threads to download the thumbnails and the larger images as well as for doing the web requests in general. This would improve the performance of your application. You may have noticed that the application became slightly unresponsive, depending on your Internet connectivity. This is because when it is doing a web request or downloading a file, it blocks the UI’s main loop. You should give threads a try if you find that sort of thing bothersome.


How To Build and Deploy a Flask Application Using Docker on Ubuntu 18.04

The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.


Docker is an open-source application that allows administrators to create, manage, deploy, and replicate applications using containers. Containers can be thought of as a package that houses dependencies that an application requires to run at an operating system level. This means that each application deployed using Docker lives in an environment of its own and its requirements are handled separately.

Flask is a web micro-framework that is built on Python. It is called a micro-framework because it does not require specific tools or plug-ins to run. The Flask framework is lightweight and flexible, yet highly structured, making it preferred over other frameworks.

Deploying a Flask application with Docker will allow you to replicate the application across different servers with minimal reconfiguration.

In this tutorial, you will create a Flask application and deploy it with Docker. This tutorial will also cover how to update an application after deployment.


To follow this tutorial, you will need the following:

Step 1 — Setting Up the Flask Application

To get started, you will create a directory structure that will hold your Flask application. This tutorial will create a directory called TestApp in /var/www, but you can modify the command to name it whatever you’d like.

  • sudo mkdir /var/www/TestApp

Move in to the newly created TestApp directory:

  • cd /var/www/TestApp

Next, create the base folder structure for the Flask application:

  • sudo mkdir -p app/static app/templates

The -p flag indicates that mkdir will create a directory and all parent directories that don’t exist. In this case, mkdir will create the app parent directory in the process of making the static and templates directories.

The app directory will contain all files related to the Flask application such as its views and blueprints. Views are the code you write to respond to requests to your application. Blueprints create application components and support common patterns within an application or across multiple applications.

The static directory is where assets such as images, CSS, and JavaScript files live. The templates directory is where you will put the HTML templates for your project.

Now that the base folder structure is complete, create the files needed to run the Flask application. First, create an __init__.py file inside the app directory. This file tells the Python interpreter that the app directory is a package and should be treated as such.

Run the following command to create the file:

  • sudo nano app/__init__.py

Packages in Python allow you to group modules into logical namespaces or hierarchies. This approach enables the code to be broken down into individual and manageable blocks that perform specific functions.

Next, you will add code to the __init__.py file that will create a Flask instance and import the logic from the views.py file, which you will create after saving this file. Add the following code to your new file:

from flask import Flask
app = Flask(__name__)
from app import views

Once you’ve added that code, save and close the file.

With the __init__.py file created, you're ready to create the views.py file in your app directory. This file will contain most of your application logic.

  • sudo nano app/views.py

Next, add the code to your views.py file. This code will return the hello world! string to users who visit your web page:

from app import app

@app.route('/')
def home():
   return "hello world!"

The @app.route line above the function is called a decorator. Decorators modify the function that follows them. In this case, the decorator tells Flask which URL will trigger the home() function. The hello world text returned by the home function will be displayed to the user in the browser.
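Decorators aren't specific to Flask; they are a general Python feature. Here is a minimal, generic example of a decorator wrapping a function (the names are purely illustrative):

```python
def shout(func):
    # A decorator receives a function and returns a replacement for it
    def wrapper():
        return func().upper()
    return wrapper

@shout
def greet():
    return 'hello world!'

print(greet())  # → HELLO WORLD!
```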

With the views.py file in place, you're ready to create the uwsgi.ini file. This file will contain the uWSGI configurations for our application. uWSGI is a deployment option for Nginx that is both a protocol and an application server; the application server can serve the uWSGI, FastCGI, and HTTP protocols.

To create this file, run the following command:

  • sudo nano uwsgi.ini

Next, add the following content to your file to configure the uWSGI server:

[uwsgi]
module = main
callable = app
master = true

This code defines the module that the Flask application will be served from. In this case, this is the main.py file, referenced here as main. The callable option instructs uWSGI to use the app instance exported by the main application. The master option allows your application to keep running, so there is little downtime even when reloading the entire application.

Next, create the main.py file, which is the entry point to the application. The entry point instructs uWSGI on how to interact with the application.

  • sudo nano main.py

Next, copy and paste the following into the main.py file. This imports the Flask instance named app from the previously created application package.

from app import app 

Finally, create a requirements.txt file to specify the dependencies that the pip package manager will install to your Docker deployment:

  • sudo nano requirements.txt

Add the following line to add Flask as a dependency:

Flask==1.0.2
This specifies the version of Flask to be installed. At the time of writing this tutorial, 1.0.2 is the latest Flask version. You can check for updates at the official website for Flask.

Save and close the file. You have successfully set up your Flask application and are ready to set up Docker.

Step 2 — Setting Up Docker

In this step you will create two files, Dockerfile and start.sh, to create your Docker deployment. The Dockerfile is a text document that contains the commands used to assemble the image. The start.sh file is a shell script that will build an image and create a container from the Dockerfile.

First, create the Dockerfile.

  • sudo nano Dockerfile

Next, add your desired configuration to the Dockerfile. These commands specify how the image will be built, and what extra requirements will be included.

FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
RUN apk --update add bash nano
ENV STATIC_URL /static
ENV STATIC_PATH /var/www/app/static
COPY ./requirements.txt /var/www/requirements.txt
RUN pip install -r /var/www/requirements.txt

In this example, the Docker image will be built off an existing image, tiangolo/uwsgi-nginx-flask, which you can find on DockerHub. This particular Docker image is a good choice over others because it supports a wide range of Python versions and OS images.

The first two lines specify the parent image that you'll use to run the application and install the bash command processor and the nano text editor. ENV STATIC_URL /static is an environment variable specific to this Docker image. It defines the static folder where all assets such as images, CSS files, and JavaScript files are served from.

The last two lines copy the requirements.txt file into the container and then run pip against it to install the specified dependencies.

Save and close the file after adding your configuration.

With your Dockerfile in place, you’re almost ready to write your script that will build the Docker container. Before writing the script, first make sure that you have an open port to use in the configuration. To check if a port is free, run the following command:

  • sudo nc localhost 56733 < /dev/null; echo $?

If the output of the command above is 1, then the port is free and usable. Otherwise, you will need to select a different port to use in your configuration file.
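If you prefer, the same check can be done from Python with the standard socket module; a refused connection means nothing is listening, so the port is free (this is a sketch, not part of the tutorial's files):

```python
import socket

def port_is_free(port, host='localhost'):
    # connect_ex() returns 0 when something accepts the connection,
    # and a nonzero error code when the port is unused
    with socket.socket() as sock:
        sock.settimeout(1)
        return sock.connect_ex((host, port)) != 0

print(port_is_free(56733))
```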

Once you’ve found an open port to use, create the script:

  • sudo nano start.sh

The start.sh script is a shell script that will build an image from the Dockerfile and create a container from the resulting Docker image. Add your configuration to the new file:

#!/bin/bash
app="docker.test"
docker build -t ${app} .
docker run -d -p 56733:80 \
  --name=${app} \
  -v $PWD:/app ${app}

The first line is called a shebang. It specifies that this is a bash file that should be executed as commands. The next line specifies the name you want to give the image and container, and saves it in a variable named app. The next line instructs Docker to build an image from your Dockerfile located in the current directory. This will create an image called docker.test in this example.

The last three lines create a new container named docker.test that is exposed at port 56733. Finally, they link the present directory to the /app directory of the container.

You use the -d flag to start a container in daemon mode, or as a background process. You include the -p flag to bind a port on the server to a particular port on the Docker container. In this case, you are binding port 56733 to port 80 on the Docker container. The -v flag specifies a Docker volume to mount on the container, and in this case, you are mounting the entire project directory to the /app folder on the Docker container.

Execute the script to create the Docker image and build a container from the resulting image:

  • sudo bash start.sh

Once the script finishes running, use the following command to list all running containers:

  • sudo docker ps

You will receive output that shows the containers:

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
58b05508f4dd        docker.test         "/entrypoint.sh /sta…"   12 seconds ago      Up 3 seconds        443/tcp, 0.0.0.0:56733->80/tcp   docker.test

You will find that the docker.test container is running. Now that it is running, visit the IP address at the specified port in your browser: http://ip-address:56733

You’ll see a page similar to the following:

the home page

In this step you have successfully deployed your Flask application on Docker. Next, you will use templates to display content to users.

Step 3 — Serving Template Files

Templates are files that display static and dynamic content to users who visit your application. In this step, you will create an HTML template for the application's home page.

Start by creating a home.html file in the app/templates directory:

  • sudo nano app/templates/home.html

Add the code for your template. This code will create an HTML5 page that contains a title and some text.

<!doctype html>
<html lang="en-us">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="x-ua-compatible" content="ie=edge">
    <title>Welcome home</title>
  </head>
  <body>
    <h1>Home Page</h1>
    <p>This is the home page of our application.</p>
  </body>
</html>

Save and close the file once you’ve added your template.

Next, modify the app/views.py file to serve the newly created file:

  • sudo nano app/views.py

First, add the following line at the beginning of your file to import the render_template method from Flask. This method parses an HTML file to render a web page to the user.

from flask import render_template
...

At the end of the file, you will also add a new route to render the template file. This code specifies that users are served the contents of the home.html file whenever they visit the /template route on your application.

...

@app.route('/template')
def template():
    return render_template('home.html')

The updated app/views.py file will look like this:

from flask import render_template
from app import app

@app.route('/')
def home():
    return "Hello world!"

@app.route('/template')
def template():
    return render_template('home.html')

Save and close the file when done.

In order for these changes to take effect, you will need to stop and restart the Docker container. Run the following command to restart it:

  • sudo docker stop docker.test && sudo docker start docker.test

Visit your application at http://your-ip-address:56733/template to see the new template being served.


In this step, you’ve created a template file to serve to visitors of your application. In the next step you will see how the changes you make to your application can take effect without having to restart the Docker container.

Step 4 — Updating the Application

Sometimes you will need to make changes to the application, whether it is installing new requirements, updating the Docker container, or HTML and logic changes. In this section, you will configure touch-reload to make these changes without needing to restart the Docker container.

Python autoreloading watches the entire file system for changes and refreshes the application when it detects a change. Autoreloading is discouraged in production because it can become resource intensive very quickly. In this step, you will use touch-reload to watch for changes to a particular file and reload when the file is updated or replaced.
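To illustrate the mechanism, here is a rough Python sketch of the mtime-based check that touch-reload relies on. This is illustrative only, not uWSGI's actual implementation:

```python
import os

class TouchWatcher:
    """Track a file's modification time; changed() reports True once
    after each update -- a sketch of the touch-reload idea."""

    def __init__(self, path):
        self.path = path
        self.last = os.path.getmtime(path)

    def changed(self):
        current = os.path.getmtime(self.path)
        if current != self.last:
            self.last = current
            return True
        return False
```

Running `touch uwsgi.ini` simply bumps the file's modification time, which is exactly what a check like this detects.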

To implement this, start by opening your uwsgi.ini file:

  • sudo nano uwsgi.ini

Next, add the touch-reload line to the end of the file:

module = main
callable = app
master = true
touch-reload = /app/uwsgi.ini

This specifies a file that will be modified to trigger an entire application reload. Once you’ve made the changes, save and close the file.

To demonstrate this, make a small change to your application. Start by opening your app/views.py file:

  • sudo nano app/views.py

Replace the string returned by the home function:

from flask import render_template
from app import app

@app.route('/')
def home():
    return "<b>There has been a change</b>"

@app.route('/template')
def template():
    return render_template('home.html')

Save and close the file after you’ve made a change.

Next, if you open your application’s homepage at http://ip-address:56733, you will notice that the changes are not reflected. This is because the condition for reload is a change to the uwsgi.ini file. To reload the application, use touch to activate the condition:

  • sudo touch uwsgi.ini

Reload the application homepage in your browser again. You will find that the application has incorporated the changes:

Homepage Updated

In this step, you set up a touch-reload condition to update your application after making changes.


In this tutorial, you created and deployed a Flask application to a Docker container. You also configured touch-reload to refresh your application without needing to restart the container.

With your new application on Docker, you can now scale with ease. To learn more about using Docker, check out their official documentation.

DigitalOcean Community Tutorials

Abhijeet Pal: Building A Blog Application With Django

In this tutorial, we’ll build a blog application with Django 2.1 that allows users to create, edit, and delete posts. The homepage will list all blog posts, and there will be a dedicated detail page for each individual post. Django is capable of much more advanced projects, but building a blog is an excellent first step toward getting a good grasp of the framework. The purpose of this chapter is to give you a general idea of how Django works.

Here is a sneak peek of what we are going to make.

Building a blog with django 2.1

Before kicking off, I hope you already have a brief idea about the framework we are going to use for this project; if not, read the following article: Django – Web Framework For Perfectionists.


Django is an open-source web framework, written in Python, that follows the model-view-template architectural pattern, so Python needs to be installed on your machine. A significant update to Python several years ago created a big split between Python versions: Python 2, the legacy version, and Python 3, the version in active development.

Since Python 3 is the current version in active development and is considered the future of Python, Django rolled out a significant update, and all releases after Django 2.0 are only compatible with Python 3.x. This tutorial is therefore strictly for Python 3.x. Make sure you have Python 3 installed on your machine; if not, follow the guides below.

Windows Users

Mac And Unix Users

Creating And Activating A Virtual Environment

While building Python projects, it’s good practice to work in virtual environments to keep your project and its dependencies isolated on your machine. There is an entire article on the importance of virtual environments; check it out here: How To Create A Virtual Environment for Python

Windows Users

cd Desktop
virtualenv django
cd django
Scripts\activate.bat

Mac and Unix Users

mkdir django
cd django
python3 -m venv django
source django/bin/activate

Now you should see (django) prefixed in your terminal, which indicates that the virtual environment is successfully activated; if not, go through the guide again.

Installing Django In The Virtual Environment

If you have already installed Django, you can skip this section and jump straight to the Setting Up The Project section. To install Django in your virtual environment, run the command below:

pip install Django

This will install the latest version of Django in our virtual environment. To learn more about Django installation, read: How To Install Django

Note – You must install a version of Django greater than 2.0

Setting Up The Project

In your workspace create a directory called mysite and navigate into it.

cd Desktop
mkdir mysite
cd mysite

Now run the following command in your shell to create a Django project.

django-admin startproject mysite

This will generate a project structure with several directories and python scripts.

├── mysite
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py

To know more about the function of the files read: Starting A Django Project

Next, we need to create a Django application called blog. A Django application exists to perform a particular task; you create specific applications that are responsible for providing your site’s desired functionality.

Navigate into the directory where the manage.py script exists and run the command below.

cd mysite
python manage.py startapp blog

This will create an app named blog in our project.

├── db.sqlite3
├── mysite
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── manage.py
└── blog
    ├── __init__.py
    ├── admin.py
    ├── apps.py
    ├── migrations
    │   └── __init__.py
    ├── models.py
    ├── tests.py
    └── views.py

Now we need to inform Django that a new application has been created. Open your mysite/settings.py file and scroll to the installed apps section, which should list some already installed apps.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

Now add the newly created app blog at the bottom and save it.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'blog',
]

Next, apply the initial migrations.

python manage.py migrate

This will apply all the unapplied migrations on the SQLite database which comes along with the Django installation.

Let’s test our configuration by running Django’s built-in development server.

python manage.py runserver

Open your browser and go to http://127.0.0.1:8000/. If everything went well, you should see this page.

Starting A Django Project
Database Models

Now we will define the data models for our blog. A model is a Python class that subclasses django.db.models.Model, in which each attribute represents a database field. Through this subclassing we automatically get everything within django.db.models.Model and can add additional fields and methods as desired. We will have a Post model in our database to store posts. Open the blog/models.py file and add the following code.

from django.db import models
from django.contrib.auth.models import User

STATUS = (
    (0, "Draft"),
    (1, "Publish")
)

class Post(models.Model):
    title = models.CharField(max_length=200, unique=True)
    slug = models.SlugField(max_length=200, unique=True)
    author = models.ForeignKey(User, on_delete=models.CASCADE, related_name='blog_posts')
    updated_on = models.DateTimeField(auto_now=True)
    content = models.TextField()
    created_on = models.DateTimeField(auto_now_add=True)
    status = models.IntegerField(choices=STATUS, default=0)

    class Meta:
        ordering = ['-created_on']

    def __str__(self):
        return self.title

At the top, we import models and then create Post as a subclass of models.Model. Like any typical blog, each blog post has a title, slug, author name, and timestamps recording when the article was published and last updated.

Notice how we declared a tuple for STATUS of a post to keep draft and published posts separated when we render them out with templates.

The Meta class inside the model contains metadata. We tell Django to sort results in the created_on field in descending order by default when we query the database. We specify descending order using the negative prefix. By doing so, posts published recently will appear first.
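Outside Django, the same newest-first ordering can be expressed in plain Python. This is only an illustration of the concept, with made-up post data:

```python
from datetime import datetime

# Hypothetical posts standing in for database rows
posts = [
    {"title": "First post", "created_on": datetime(2019, 1, 1)},
    {"title": "Latest post", "created_on": datetime(2019, 3, 1)},
    {"title": "Middle post", "created_on": datetime(2019, 2, 1)},
]

# ordering = ['-created_on'] means: sort descending by created_on
newest_first = sorted(posts, key=lambda p: p["created_on"], reverse=True)
print([p["title"] for p in newest_first])
# ['Latest post', 'Middle post', 'First post']
```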

The __str__() method is the default human-readable representation of the object. Django will use it in many places, such as the administration site.
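The effect of __str__() can be seen with an ordinary Python class; this toy example is independent of Django:

```python
class Draft:
    """Toy class showing how __str__ controls the object's display."""

    def __init__(self, title):
        self.title = title

    def __str__(self):
        # str(obj) and print(obj) both use this representation
        return self.title

print(str(Draft("My first post")))  # My first post
```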

Now that our new database model is created we need to create a new migration record for it and migrate the change into our database.

(django) $ python manage.py makemigrations
(django) $ python manage.py migrate

Now we are done with the database.

Creating An Administration Site

We will create an admin panel to create and manage Posts. Fortunately, Django comes with an inbuilt admin interface for such tasks.

In order to use the Django admin, we first need to create a superuser by running the following command in the prompt.

python manage.py createsuperuser

You will be prompted to enter a username, email, and password. Note that for security reasons the password won’t be visible as you type.

Username (leave blank to use 'user'): admin
Email address:
Password:
Password (again):

Enter any details; you can always change them later. After that, rerun the development server and go to http://127.0.0.1:8000/admin/.

python manage.py runserver

You should see a login page; enter the details you provided for the superuser.

Django Admin login

After you log in you should see a basic admin panel with Groups and Users models which come from Django authentication framework located in django.contrib.auth.

Building a Blog application with Django
Still, we can’t create posts from the panel yet; we need to add the Post model to our admin.

Adding Models To The Administration Site

Open the blog/admin.py file and register the Post model there as follows.

from django.contrib import admin
from .models import Post

admin.site.register(Post)

Save the file and refresh the page you should see the Posts model there.

Building Blog application with django admin
Now let’s create our first blog post. Click the Add icon beside Post, which will take you to another page where you can create a post. Fill in the respective form fields and create your first ever post.

Writing blog Post with Django
Once you are done with the post, save it. You will be redirected to the post list page with a success message at the top.

Building a Blog Application With Django

Though it does the work, we can customize how data is displayed in the administration panel. Open the admin.py file again and replace its contents with the code below.

from django.contrib import admin
from .models import Post

class PostAdmin(admin.ModelAdmin):
    list_display = ('title', 'slug', 'status', 'created_on')
    list_filter = ("status",)
    search_fields = ['title', 'content']
    prepopulated_fields = {'slug': ('title',)}

admin.site.register(Post, PostAdmin)

This will make our admin dashboard more efficient. Now if you visit the post list, you will see more details about the Post.

Creating a Blog Application With Django

Note that I have added a few posts for testing.

The list_display attribute does what its name suggests: it displays the properties mentioned in the tuple in the post list for each post.

If you look at the right, there is a filter that filters posts by their status; this is provided by the list_filter attribute.

And now we have a search bar at the top of the list, which searches the database using the search_fields attribute. The last attribute, prepopulated_fields, populates the slug: now if you create a post, the slug will automatically be filled in based on your title.
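The slug that gets pre-filled is essentially a URL-safe version of the title. Here is a rough plain-Python sketch of that transformation; Django's real implementation lives in django.utils.text.slugify and handles more cases:

```python
import re

def slugify(title):
    """Rough sketch: lowercase, drop punctuation, and collapse
    whitespace and underscores into single hyphens."""
    slug = title.lower()
    slug = re.sub(r"[^\w\s-]", "", slug)            # drop punctuation
    slug = re.sub(r"[\s_]+", "-", slug).strip("-")  # spaces -> hyphens
    return slug

print(slugify("Building A Blog Application With Django"))
# building-a-blog-application-with-django
```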

Now that our database model is complete we need to create the necessary views, URLs, and templates so we can display the information on our web application.

Building Views

A Django view is just a Python function that receives a web request and returns a web response. We’re going to use class-based views, then map URLs for each view and create an HTML template for the data returned from the views.

Open the blog/views.py file and start coding.

from django.views import generic
from .models import Post

class PostList(generic.ListView):
    model = Post
    template_name = 'index.html'

class PostDetail(generic.DetailView):
    model = Post
    template_name = 'post_detail.html'

The built-in ListView, a generic class-based view, renders a list of the objects of the specified model; we just need to specify the template. Similarly, DetailView provides a detailed view of a given object of the model at the provided template.

Adding URL patterns for Views

We need to map URLs to the views we made above. When a user requests a page on your web app, the Django controller looks for the corresponding view via the urls.py file and then returns the HTML response, or a 404 not found error if no match is found.

Create a urls.py file in your blog application directory and add the following code.

from . import views
from django.urls import path

urlpatterns = [
    path('', views.PostList.as_view(), name='home'),
    path('<slug:slug>/', views.PostDetail.as_view(), name='post_detail'),
]

We mapped URL patterns to our views using the path function. The first pattern takes an empty string, denoted by '', and returns the result generated by the PostList view, which is essentially a list of posts for our homepage. Last, we pass an optional name parameter, which names the view and will later be used in templates.

Names are optional, but it is good practice to give views unique, memorable names; this makes our work easier while designing templates and helps keep things organized as the number of URLs grows.

Next, we have the generalized expression for the PostDetail view, which resolves the slug (a string consisting of ASCII letters, numbers, hyphens, or underscores). Django uses angle brackets < > to capture values from the URL and returns the matching post detail page.
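Under the hood, the slug path converter matches a simple character class. The regex below mirrors the pattern of Django's SlugConverter (see django.urls.converters) and can be tried on its own:

```python
import re

# The character class Django's <slug:...> converter accepts:
# letters, numbers, hyphens, and underscores
SLUG_PATTERN = re.compile(r"^[-a-zA-Z0-9_]+$")

print(bool(SLUG_PATTERN.match("my-first-post")))  # True
print(bool(SLUG_PATTERN.match("hello world!")))   # False
```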

Now we need to include these blog URLs in the actual project. To do so, open the mysite/urls.py file.

from django.contrib import admin
from django.urls import path

urlpatterns = [
    path('admin/', admin.site.urls),
]

Now first import the include function, then add a path for the new urls.py file to the urlpatterns list.

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('blog.urls')),
]

Now all requests to the site root will be handled directly by the blog app.

Creating Templates For The Views

We are done with the models and views; now we need to make templates to render the results to our users. To use Django templates, we first need to configure the template settings.

Create a directory called templates in the base directory. Now open the project’s settings.py file and, just below BASE_DIR, add the path to the templates directory as follows.

TEMPLATES_DIR = os.path.join(BASE_DIR,'templates') 

Now in settings.py, scroll to TEMPLATES, which should look like this.

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

Now add the newly created TEMPLATES_DIR to DIRS.

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        # Add 'TEMPLATES_DIR' here
        'DIRS': [TEMPLATES_DIR],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

Now save and close the file; we are done with the configuration.

Django makes it possible to separate Python and HTML: the Python goes in views and the HTML goes in templates. Django has a powerful template language that allows you to specify how data is displayed; it is based on template tags, template variables, and template filters.

I’ll start off with a base.html file and an index.html file that inherits from it. Later, when we add templates for the homepage and post detail pages, they too can inherit from base.html.

Let’s start with the base.html file, which will contain elements common to every page of the blog, like the navbar and footer. We are also using Bootstrap for the UI and the Roboto font.

<!DOCTYPE html>
<html>
  <head>
    <title>Django Central</title>
    <link
      href="https://fonts.googleapis.com/css?family=Roboto:400,700"
      rel="stylesheet">
    <meta name="google" content="notranslate" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link
      rel="stylesheet"
      href="https://stackpath.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css"
      integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm"
      crossorigin="anonymous"
    />
  </head>

  <body>
    <style>
      body {
        font-family: "Roboto", sans-serif;
        font-size: 17px;
        background-color: #fdfdfd;
      }
      .shadow {
        box-shadow: 0 4px 2px -2px rgba(0, 0, 0, 0.1);
      }
      .btn-danger {
        color: #fff;
        background-color: #f00000;
        border-color: #dc281e;
      }
      .masthead {
        background: #3398E1;
        height: auto;
        padding-bottom: 15px;
        box-shadow: 0 16px 48px #E3E7EB;
        padding-top: 10px;
      }
    </style>

    <!-- Navigation -->
    <nav class="navbar navbar-expand-lg navbar-light bg-light shadow" id="mainNav">
      <div class="container-fluid">
        <a class="navbar-brand" href="{% url 'home' %}">Django central</a>
        <button
          class="navbar-toggler navbar-toggler-right"
          type="button"
          data-toggle="collapse"
          data-target="#navbarResponsive"
          aria-controls="navbarResponsive"
          aria-expanded="false"
          aria-label="Toggle navigation"
        >
          <span class="navbar-toggler-icon"></span>
        </button>
        <div class="collapse navbar-collapse" id="navbarResponsive">
          <ul class="navbar-nav ml-auto">
            <li class="nav-item text-black">
              <a class="nav-link text-black font-weight-bold" href="#">About</a>
            </li>
            <li class="nav-item text-black">
              <a class="nav-link text-black font-weight-bold" href="#">Policy</a>
            </li>
            <li class="nav-item text-black">
              <a class="nav-link text-black font-weight-bold" href="#">Contact</a>
            </li>
          </ul>
        </div>
      </div>
    </nav>

    {% block content %}
    <!-- Content Goes here -->
    {% endblock content %}

    <!-- Footer -->
    <footer class="py-3 bg-grey">
      <p class="m-0 text-dark text-center">Copyright &copy; Django Central</p>
    </footer>
  </body>
</html>

This is a regular HTML file, except for the tags inside curly braces; these are called template tags.

The {% url 'home' %} tag returns an absolute path reference; it generates a link to the home view, which is also the list view for posts.

The {% block content %} tag defines a block that can be overridden by child templates; this is where the content from the other HTML files will get injected.

Next, we will make a small sidebar widget that will be shared by pages across the site. Create a sidebar.html file in the templates directory; each page template includes it inside its sidebar block, which makes it available on any page inheriting the base file.

{% block sidebar %}

<style>
  .card {
    box-shadow: 0 16px 48px #E3E7EB;
  }
</style>

<!-- Sidebar Widgets Column -->
<div class="col-md-4 float-right">
  <div class="card my-4">
    <h5 class="card-header">About Us</h5>
    <div class="card-body">
      <p class="card-text">This awesome blog is made on the top of our Favourite full stack Framework 'Django', follow up the tutorial to learn how we made it..!</p>
      <a href="" class="btn btn-danger">Know more!</a>
    </div>
  </div>
</div>

{% endblock sidebar %}

Next, create the index.html file of our blog; that’s the homepage.

{% extends "base.html" %}
{% block content %}
<style>
  body {
    font-family: "Roboto", sans-serif;
    font-size: 18px;
    background-color: #fdfdfd;
  }
  .head_text {
    color: white;
  }
  .card {
    box-shadow: 0 16px 48px #E3E7EB;
  }
</style>

<header class="masthead">
  <div class="overlay"></div>
  <div class="container">
    <div class="row">
      <div class="col-md-8 col-md-10 mx-auto">
        <div class="site-heading">
          <h3 class="site-heading my-4 mt-3 text-white">Welcome to my awesome Blog</h3>
          <p class="text-light">We Love Django As much as you do..! &nbsp;</p>
        </div>
      </div>
    </div>
  </div>
</header>

<div class="container">
  <div class="row">
    <!-- Blog Entries Column -->
    <div class="col-md-8 mt-3 left">
      {% for post in post_list %}
      <div class="card mb-4">
        <div class="card-body">
          <h2 class="card-title">{{ post.title }}</h2>
          <p class="card-text text-muted h6">{{ post.author }} | {{ post.created_on }}</p>
          <p class="card-text">{{ post.content|slice:":200" }}</p>
          <a href="{% url 'post_detail' post.slug %}" class="btn btn-primary">Read More &rarr;</a>
        </div>
      </div>
      {% endfor %}
    </div>
    {% block sidebar %}
    {% include 'sidebar.html' %}
    {% endblock sidebar %}
  </div>
</div>
{% endblock %}

With the {% extends %} template tag, we tell Django to inherit from the base.html template. Then, we are filling the content blocks of the base template with content.

Notice we are using a for loop in the HTML; that’s the power of Django templates, which make HTML dynamic. The loop iterates through the posts and displays their title, date, author, and body, including a link in the title to the canonical URL of the post.

In the body of the post, we are also using a template filter to limit the excerpt to the first 200 characters. Template filters allow you to modify variables for display and look like {{ variable|filter }}.
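To see what a filter like slice does under the hood, here is a small plain-Python sketch; Django's real filter lives in django.template.defaultfilters and is more robust:

```python
def slice_filter(value, spec):
    """Apply a Python slice written as a string, e.g. ":200"."""
    # ":200" -> [None, 200] -> slice(None, 200)
    parts = [int(p) if p else None for p in spec.split(":")]
    return value[slice(*parts)]

print(slice_filter("A long blog post body...", ":6"))  # A long
```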

Now run the server and visit http://127.0.0.1:8000/; you will see the homepage of our blog.

blog made with django

Looks good..!

You might have noticed I have imported some dummy content to fill the page; you can do the same. Now let’s make an HTML template for the detailed view of our posts.

Next, create a post_detail.html file in the templates directory and paste the HTML below into it.

{% extends 'base.html' %}
{% block content %}

<div class="container">
  <div class="row">
    <div class="col-md-8 card mb-4 mt-3 left top">
      <div class="card-body">
        <h1>{% block title %} {{ object.title }} {% endblock title %}</h1>
        <p class="text-muted">{{ post.author }} | {{ post.created_on }}</p>
        <p class="card-text">{{ object.content | safe }}</p>
      </div>
    </div>
    {% block sidebar %} {% include 'sidebar.html' %} {% endblock sidebar %}
  </div>
</div>

{% endblock content %}

At the top, we specify that this template inherits from base.html. Then we display the body from our context object, which DetailView makes accessible as object.

Now visit the homepage and click on read more, it should redirect you to the post detail page.

Blog detail page with django

We have come to the end of this tutorial. Thank you for reading thus far. This post is just the tip of the iceberg considering the number of things you could do with Django.

We have built a basic blog application from scratch! Using the Django admin we can create, edit, or delete content; we used Django’s class-based views; and at the end, we made beautiful templates to render it all.

If you are stuck at any step, refer to this GitHub repo

The post Building A Blog Application With Django appeared first on Django Central.
