Catalin George Festila: Python 3.7.3 : Using the pycryptodome python module.

This Python module can be used with Python 3. More information can be found here. PyCryptodome is a self-contained Python package of low-level cryptographic primitives. It supports Python 2.6 and 2.7, Python 3.4 and newer, and PyPy. Installing this Python module is easy with the pip tool:

C:\Python373\Scripts>pip install pycryptodome
Collecting pycryptodome
…
Installing collected packages:
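
As a quick illustration of the kind of low-level primitive the package provides, here is a minimal AES-GCM encrypt/decrypt round trip (a sketch of ours, not code from the original post):

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                      # 128-bit AES key
cipher = AES.new(key, AES.MODE_GCM)             # GCM gives authenticated encryption
ciphertext, tag = cipher.encrypt_and_digest(b"attack at dawn")

# Decrypt: reuse the key and the nonce the cipher generated
decipher = AES.new(key, AES.MODE_GCM, nonce=cipher.nonce)
plaintext = decipher.decrypt_and_verify(ciphertext, tag)  # raises ValueError if tampered
print(plaintext)
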
Planet Python

Python Anywhere: Using MongoDB on PythonAnywhere with MongoDB Atlas


This requires a paid PythonAnywhere account

Lots of people want to use MongoDB with PythonAnywhere; we don’t have support for it built into the system, but it’s actually pretty easy to use with a database provided by MongoDB Atlas — and as Atlas is a cloud service provided by Mongo’s creators, it’s probably a good option anyway 🙂

If you’re experienced with MongoDB and Atlas, then our help page has all of the details you need for connecting to them from our systems.

But if you’d just like to dip your toe in the water and find out what all of this MongoDB stuff is about, this blog post explains step-by-step how to get started so that you can try it out.

Prerequisites

The first important thing to mention is that you’ll need a paid PythonAnywhere account to access Atlas. Free accounts can only access the external Internet using HTTP or HTTPS, and unfortunately MongoDB uses its own protocol which is quite different to those.

Apart from that, in order to follow along you’ll need at least a basic understanding of PythonAnywhere, of writing website code, and of databases in general — if you’ve been through our Beginners’ Flask and MySQL tutorial you should be absolutely fine.

Signing up to MongoDB Atlas

Unsurprisingly, the first step in using Atlas is to sign up for an account (if you don’t already have one). Go to their site and click the “Try Free” button at the top right. That will take you to a page with the signup form; fill in the appropriate details, and sign up.

Atlas’ user interface may change a little after we publish this post (their sign-up welcome page has already changed between our first draft yesterday and the publication today!), so we won’t give super-detailed screenshots showing what to click, but as of this writing, they present you with a “Welcome” window that has some “help getting started” buttons. You can choose those if you prefer, but we’ll assume that you’ve clicked the “I don’t need help getting started” button. That should land you on a page that looks like this, which is where you create a new MongoDB cluster.

Creating your first cluster and adding a user

A MongoDB cluster is essentially the same as a database server for something more traditional like MySQL or PostgreSQL. It’s called a cluster because it can potentially spread over multiple computers, so it can in theory scale up much more easily than an SQL database.

To create the cluster:

  • Choose the “AWS” cloud provider, then select the region that is closest to where your PythonAnywhere account is hosted:
    • If your account is on our global site at www.pythonanywhere.com, choose the us-east-1 region.
    • If your account is on our EU site at eu.pythonanywhere.com, choose the eu-central-1 region.
  • Click the “Create cluster” button at the bottom of the page.

This will take you to a page describing your cluster. Initially it will have text saying something like “Your cluster is being created” — wait until that has disappeared, and you’ll have a page that will look something like this:

Now we need to add a user account that we’ll use to connect to the cluster from Python:

  • Click the “Database Access” link in the “Security” section of the menu on the left-hand side, which will take you to a page where you can administer users.
  • Click the “Add new user” button on the top right of the pane that appears.
  • Enter a username and a password for the new user (make sure you keep a note of these somewhere).
  • The “User Privileges” should be “Read and write to any database”.
  • The “Save as a temporary user” checkbox should not be checked.

Click the button to create the user, and you’ll come back to the user admin page, with your new user in the list.

Setting up the whitelist

Access to your MongoDB cluster is limited to computers on a whitelist; this provides an extra level of security beyond the username/password combination we just specified.

Just to get started, we’ll create a whitelist that comprises every IP address on the Internet — that is, we won’t have any restrictions at all. While this is not ideal in the long run, it makes taking the first steps much easier. You can tighten things up later — more information about that at the end of this post.

Here’s how to configure the whitelist to allow global access:

  • Click on the “Network Access” link in the “Security” section of the menu on the left-hand side, which will take you to the page where you manage stuff like whitelists.
  • Click the “Add IP Address” button near the top right of the page.
  • On the window that pops up, click the “Allow access from anywhere” button. This will put “0.0.0.0/0” in the “Whitelist entry” field — this is the CIDR notation for “all addresses on the Internet” (see the quick check after this list).
  • Put something like “Everywhere” in the “Comment” field just as a reminder as to what the whitelist entry means, and leave the “Save as temporary whitelist” checkbox unchecked.
  • Click the “Confirm” button.
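
Here’s the quick check promised above: Python’s ipaddress module can confirm exactly what “0.0.0.0/0” covers (a small sketch; the sample address is arbitrary):

import ipaddress

everywhere = ipaddress.ip_network("0.0.0.0/0")
print(everywhere.num_addresses)                           # 4294967296, i.e. every IPv4 address
print(ipaddress.ip_address("203.0.113.5") in everywhere)  # True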

Getting the connection string

Now we have a MongoDB cluster running, and it’s time to connect to it from PythonAnywhere. The first step is to get the connection string:

  • Click on the “Clusters” link in the “Atlas” section of the menu on the left-hand side, which will take you back to the page you got when you first created the cluster:

  • Click the “Connect” button just underneath the name of the cluster (probably “Cluster0”).
  • You’ll get a window with different connection options; click the “Connect your application” option.
  • In “Choose your driver version”, select a “Driver” of “Python” and a version of “3.6 or later”.
  • Once you’ve done that, in the “Connection String Only” section, you’ll see something like mongodb+srv://admin:<password>@cluster0-zakwe.mongodb.net/test?retryWrites=true&w=majority

  • Copy that string (there’s a button to do that for you) and paste it somewhere safe for later use. You’ll note that it has <password> in it; you should replace that (including the angle brackets) with the actual password that you configured for your user earlier on.
  • Now you can close the popup with the button in the bottom right.

Connecting to the cluster from a PythonAnywhere console

Next, go to PythonAnywhere, and log in if necessary. The first thing we need to do here is make sure that we have the correct packages installed to connect to MongoDB — we’ll be using Python 3.7 in this example, so we need to use the pip3.7 command. Start a Bash console, and run:

pip3.7 install --user --upgrade pymongo dnspython 

Once that’s completed, let’s connect to the cluster from a command line. Run ipython3.7 in your console, and when you have a prompt, import pymongo and connect to the cluster:

import pymongo
client = pymongo.MongoClient("<the atlas connection string>")

…replacing <the atlas connection string> with the actual connection string we got from the Atlas site earlier, with <password> replaced with the password you used when setting up the user.

That’s created a connection object, but hasn’t actually connected to the cluster yet — pymongo only does that on an as-needed basis.

A good way to connect and at least make sure that what we’ve done so far has worked is to ask the cluster for a list of the databases that it currently has:

client.list_database_names() 

If all is well, you’ll get a result like this:

['admin', 'local'] 

…just a couple of default databases created by MongoDB itself for its own internal use.

Now let’s add some data. MongoDB is much more free-form than a relational database like MySQL or PostgreSQL. There’s no such thing as a table, or a row — instead, a database is just a bunch of things called “collections”, each of which is comprised of a set of “documents” — and the documents are just objects, linking keys to values.

That’s all a bit abstract; I find a useful way to imagine it is that a MongoDB database is like a directory on a disk; it contains a number of subdirectories (collections), and each of those contains a number of files (each one being a document). The files just store JSON data — basically, Python dictionary objects.

Alternatively, you can see it by comparison with an SQL database:

  • A MongoDB database is the same kind of thing as an SQL database
  • A MongoDB collection is a bit like a table — it’s meant to hold a set of similar kinds of things — but it’s not so restrictive, and defines no columns.
  • A MongoDB document is kind of like a row in such a table, but it’s not constrained to any specific set of columns — each document could in theory be very different to all of the others. It’s just best practice for all of the documents in a collection to be broadly similar.

For this tutorial, we’re going to create one super-simple database, called “blog”. Unsurprisingly for a blog, it will contain a number of “post” documents, each of which will contain a title, body, a slug (for the per-article URL) and potentially more information.

The first question is, how do we create a database? Neatly, we don’t need to explicitly do anything — if we refer to a database that doesn’t exist, MongoDB will create it for us — and likewise, if we refer to a collection inside the database that doesn’t exist, it will create that for us too. So in order to add the first document to the “posts” collection in the “blog” database we just do this:

db = client.blog
db.posts.insert_one({"title": "My first post", "body": "This is the body of my first blog post", "slug": "first-post"})

No need for CREATE DATABASE or CREATE TABLE statements — it’s all implicit! IPython will print out the string representation of the MongoDB result object that was returned by the insert_one method; something like this:

<pymongo.results.InsertOneResult at 0x7f9ae871ea88> 
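
That result object is more useful than its repr suggests — for example, it exposes the _id that MongoDB generated for the new document. A small sketch (the document here is a throwaway example, and your ObjectId will differ):

result = db.posts.insert_one({"title": "Scratch post", "body": "Just checking the result object", "slug": "scratch-post"})
print(result.inserted_id)   # e.g. ObjectId('5d0395dcbf76b2ab4ed67948')
print(result.acknowledged)  # True when the cluster confirmed the write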

Let’s add a few more posts:

db.posts.insert_one({"title": "Another post", "body": "Let's try another post", "slug": "another-post", "extra-data": "something"}) db.posts.insert_one({"title": "Blog Post III", "body": "The blog post is back in another summer blockbuster", "slug": "yet-another-post", "author": "John Smith"}) 

Now we can inspect the posts that we have:

for post in client.blog.posts.find():
    print(post)

You’ll get something like this:

{'_id': ObjectId('5d0395dcbf76b2ab4ed67948'), 'title': 'My first post', 'body': 'This is the body of my first blog post', 'slug': 'first-post'}
{'_id': ObjectId('5d039611bf76b2ab4ed67949'), 'title': 'Another post', 'body': "Let's try another post", 'slug': 'another-post', 'extra-data': 'something'}
{'_id': ObjectId('5d039619bf76b2ab4ed6794a'), 'title': 'Blog Post III', 'body': 'The blog post is back in another summer blockbuster', 'slug': 'yet-another-post', 'author': 'John Smith'}

The find function is a bit like a SELECT statement in SQL; with no parameters, it’s like a SELECT with no WHERE clause, and it just returns a cursor that allows us to iterate over every document in the collection. Let’s try it with a more restrictive query, and just print out one of the defined values in the object we inserted:

for post in client.blog.posts.find({"title": {"$  eq": "Another post"}}):     print(post["body"]) 

You’ll get something like this:

Let's try another post 

MongoDB’s query language is very rich — you can see a list of the query operators here. We won’t go into any more detail here — there are many excellent MongoDB tutorials on the Internet, so if you google for “mongodb python tutorial” you’re bound to find something useful!
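
As a small taste of those operators, here are two queries against the posts we created above ($regex and $exists are standard MongoDB query operators; the exact matches of course depend on your data):

# Posts whose title contains the word "post", case-insensitively
for post in client.blog.posts.find({"title": {"$regex": "post", "$options": "i"}}):
    print(post["title"])

# Posts that carry the optional "author" field we added to one document
for post in client.blog.posts.find({"author": {"$exists": True}}):
    print(post["title"], "by", post["author"])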

Now we’ve connected to a MongoDB database, and created some data, so let’s do something with that.

Connecting to the cluster from a Flask website

The next step is to connect to our cluster from a website’s code. We’ll use Flask for this — because it’s database-agnostic, it’s a better fit for MongoDB than Django, which is quite tied to the SQL model of representing data.

We’re also going to use a Flask extension called Flask-PyMongo to make our connections — the raw PyMongo package has a few problems with the way PythonAnywhere runs websites, and while there are ways around those (see the help page), the Flask extension handles everything smoothly for us. So, in your Bash console, exit IPython and run

pip3.7 install --user Flask-PyMongo 

Once that’s done, let’s create a website: head over to the “Web” page inside PythonAnywhere, and create yourself a Python 3.7 Flask app.

When it’s been created, edit the flask_app.py file that was auto-generated for you, and replace the contents with this:

from flask import Flask, render_template
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config["MONGO_URI"] = "<the atlas connection string>"

mongo = PyMongo(app)

@app.route('/')
def index():
    return render_template("blog.html", posts=mongo.db.posts.find())

@app.route('/<post_slug>')
def item(post_slug):
    return render_template("blog.html", posts=mongo.db.posts.find({"slug": post_slug}))

…replacing <the atlas connection string> with the connection string as before, with one change — the original connection string will have /test in it, like this:

mongodb+srv://admin:iW8qWskQGJcEpZdu9ZUt@cluster0-zakwe.mongodb.net/test?retryWrites=true&w=majority 

That /test means “connect to the database called test on the cluster”. We’ve put our data into a database called blog, so just replace the test with blog, so that it looks like this:

mongodb+srv://admin:iW8qWskQGJcEpZdu9ZUt@cluster0-zakwe.mongodb.net/blog?retryWrites=true&w=majority 

All of the rest of the code in that file should be pretty obvious if you’re familiar with Flask — the MongoDB-specific stuff is very similar to the code we ran in a console earlier, and also to the way we would connect to MySQL via SQLAlchemy. The only really new thing is the abbreviated syntax for searching for an exact match:

mongo.db.posts.find({"slug": post_slug}) 

…is just a shorter way of saying this:

mongo.db.posts.find({"slug": {"$  eq": post_slug}}) 

To go with this Flask app, we need a template file called blog.html in a new templates subdirectory of the directory containing flask_app.py — here’s something basic that will work:

<html>
    <head>
        <meta charset="utf-8">
        <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" integrity="sha512-dTfge/zgoMYpP7QbHy4gWMEGsbsdZeCXz7irItjcC3sPUFtf0kuFbDz/ixG7ArTxmDjLXDmezHubeNikyKGVyQ==" crossorigin="anonymous">
        <title>My blog</title>
    </head>

    <body>
        <div class="container">
            <div class="row">
                <h1><a href="/">My Blog</a></h1>
            </div>

            {% for post in posts %}
                <div class="row">
                    <h2><a href="/{{ post.slug }}">{{ post.title }}</a></h2>
                    <p>
                    {{ post.body }}
                    </p>
                </div>
            {% endfor %}

        </div>
    </body>
</html>

Note that there’s one small difference between the way we reference the post document here and the way we’d do it in Python code — in Python we have to write (for example) post["title"], while in a Flask template we can use the shorter post.title.

Once you’ve created that file, reload the website using the button on the “Web” page, and you should see a website with your blog posts on it:

Let’s add a new post: keep the tab showing your website open, but in another tab go to your Bash console, start ipython3.7 again, connect to the database, and add a new post:

import pymongo
client = pymongo.MongoClient("<the atlas connection string>")
db = client.blog
db.posts.insert_one({"title": "Blog Post Goes Forth", "body": "...but I thought we were coding Python?", "slug": "bad-blackadder-nerd-joke"})

Head back to the tab showing the site, and hit the browser’s refresh button — your new post will appear!

All done!

So now we have a working super-simple blog running on PythonAnywhere, backed by an Atlas MongoDB cluster.

The one remaining issue is that the whitelist we specified is a little broad. If someone gets hold of your MongoDB username and password, they can access the database. It’s possible to set things up so that you have an initially-empty whitelist, and then every time you run your code, it automatically whitelists the IP address it’s running on using the Atlas API — that’s a slightly more advanced topic, though, so if you want to learn about that, head over to our MongoDB help page.

We hope this post has been useful — if you have any questions or comments, please leave them below. Also, if there are other things you’d like to connect to from PythonAnywhere that you think could benefit from having a blog post explaining how to do it, please do let us know!

Planet Python

How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm

Introduction

Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

Prerequisites

To complete this tutorial, you will need:

Step 1 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

Clone the repository into a directory called node_project:

  • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

  • cd node_project

The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a MongoDB database.

For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

When we deploy the Helm mongodb-replicaset chart, it will create:

  • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
  • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

  • nano db.js

Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node’s process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

The constants for the connection URI and the URI string itself currently look like this:

~/node_project/db.js
...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

...

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
...

In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

Add MONGO_REPLICASET to both the URI constant object and the connection string:

~/node_project/db.js
...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_REPLICASET
} = process.env;

...
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
...

Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
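
For comparison, here is roughly what the same URI construction looks like from Python with pymongo, assuming the same environment variables are set (a sketch of ours, not part of the application code):

import os
import pymongo

url = (
    f"mongodb://{os.environ['MONGO_USERNAME']}:{os.environ['MONGO_PASSWORD']}"
    f"@{os.environ['MONGO_HOSTNAME']}:{os.environ['MONGO_PORT']}"
    f"/{os.environ['MONGO_DB']}"
    f"?replicaSet={os.environ['MONGO_REPLICASET']}&authSource=admin"
)
client = pymongo.MongoClient(url)  # MONGO_HOSTNAME may hold a comma-separated host list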

Save and close the file when you are finished editing.

With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

  • docker build -t your_dockerhub_username/node-replicas .

The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

  • docker images

You will see the following output:

Output
REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
node                                    10-alpine   aa57b0242b33   6 days ago      71MB

Next, log in to the Docker Hub account you created in the prerequisites:

  • docker login -u your_dockerhub_username

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user’s home directory with your Docker Hub credentials.

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

  • docker push your_dockerhub_username/node-replicas

You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

Step 2 — Creating Secrets for the MongoDB Replica Set

The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

  • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
  • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

First, let’s create the keyfile. We will use the openssl command with the rand option to generate a 756-byte random string for the keyfile:

  • openssl rand -base64 756 > key.txt

The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.
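
If you would rather not rely on openssl, the same keyfile can be produced in Python (a sketch; either approach yields a base64 string within MongoDB’s accepted length):

import base64
import secrets

# 756 random bytes, base64-encoded, written to key.txt just like the openssl command
key = base64.b64encode(secrets.token_bytes(756)).decode("ascii")
with open("key.txt", "w") as f:
    f.write(key)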

You can now create a Secret called keyfilesecret using this file with kubectl create:

  • kubectl create secret generic keyfilesecret --from-file=key.txt

This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

You will see the following output indicating that your Secret has been created:

Output
secret/keyfilesecret created

Remove key.txt:

  • rm key.txt

Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

Convert your database username:

  • echo -n 'your_database_username' | base64

Note down the value you see in the output.

Next, convert your password:

  • echo -n 'your_database_password' | base64

Take note of the value in the output here as well.
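
The same conversion can be done in Python if that is more convenient. Note that the value must not include a trailing newline, which is why the shell commands above use echo -n (the credentials below are placeholders):

import base64

print(base64.b64encode(b"your_database_username").decode())
print(base64.b64encode(b"your_database_password").decode())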

Open a file for the Secret:

  • nano secret.yaml

Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

  • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

In general, it is a good idea to validate your syntax before creating resources with kubectl.
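
If you have PyYAML available, a quick local syntax check is also possible before involving kubectl at all (a sketch; it assumes you have installed the library with pip install pyyaml):

import yaml  # pip install pyyaml

with open("secret.yaml") as f:
    yaml.safe_load(f)  # raises yaml.YAMLError on tabs or bad indentation
print("YAML parsed cleanly")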

Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

~/node_project/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  user: your_encoded_username
  password: your_encoded_password

Here, we’re using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

Save and close the file when you are finished editing.

Create the Secret object with the following command:

  • kubectl create -f secret.yaml

You will see the following output:

Output
secret/mongo-secret created

Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we’ve just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

  • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
  • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
  • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com (DigitalOcean Block Storage), which we can check by typing:

  • kubectl get storageclass

If you are working with a DigitalOcean cluster, you will see the following output:

Output
NAME                         PROVISIONER                 AGE
do-block-storage (default)   dobs.csi.digitalocean.com   21m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

  • nano mongodb-values.yaml

You will set values in this file that will do the following:

  • Enable authorization.
  • Reference your keyfilesecret and mongo-secret objects.
  • Specify 1Gi for your PersistentVolumes.
  • Set your replica set name to db.
  • Specify 3 replicas for the set.
  • Pin the mongo image to the latest version at the time of writing: 4.1.9.

Paste the following code into the file:

~/node_project/mongodb-values.yaml
replicas: 3
port: 27017
replicaSetName: db
podDisruptionBudget: {}
auth:
  enabled: true
  existingKeySecret: keyfilesecret
  existingAdminSecret: mongo-secret
imagePullSecrets: []
installImage:
  repository: unguiculus/mongodb-install
  tag: 0.7
  pullPolicy: Always
copyConfigImage:
  repository: busybox
  tag: 1.29.3
  pullPolicy: Always
image:
  repository: mongo
  tag: 4.1.9
  pullPolicy: Always
extraVars: {}
metrics:
  enabled: false
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: /metrics
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}
podAnnotations: {}
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true
init:
  resources: {}
  timeout: 900
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
  enabled: true
  #storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
  enabled: false
configmap: {}
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1

The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. In our case, because we are leaving this value undefined, the chart will choose the default provisioner — in our case, dobs.csi.digitalocean.com.

Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted read-write by only a single node. Please see the documentation for more information about different access modes.

To learn more about the other parameters included in the file, see the configuration table included with the repo.

Save and close the file when you are finished editing.

Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

  • helm repo update

This will get the latest chart information from the stable repository.

Finally, install the chart with the following command:

  • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

  • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we’ve specified. We’ve pointed to these options by including the -f flag and our mongodb-values.yaml file.

Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

Output
NAME:   mongo
LAST DEPLOYED: Tue Apr 16 21:51:05 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                               DATA  AGE
mongo-mongodb-replicaset-init      1     1s
mongo-mongodb-replicaset-mongodb   1     1s
mongo-mongodb-replicaset-tests     1     0s
...

You can now check on the creation of your Pods with the following command:

  • kubectl get pods

You will see output like the following as the Pods are being created:

Output
NAME                         READY   STATUS     RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running    0          67s
mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod’s containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

Once the Pods have been created and all of their associated containers are running, you will see this output:

Output
NAME                         READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
mongo-mongodb-replicaset-1   1/1     Running   0          94s
mongo-mongodb-replicaset-2   1/1     Running   0          36s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

Note:
If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

  • kubectl describe pods your_pod
  • kubectl logs your_pod

Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry comprised of the following elements: $(statefulset name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names:

  • kubectl get statefulset
Output
NAME                       READY   AGE
mongo-mongodb-replicaset   3/3     4m2s
  • kubectl get svc
Output
NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

This means that the first member of our StatefulSet will have the following DNS entry:

mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local 

Because we need our application to connect to each MongoDB instance, it’s essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.
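
Since we will need all three of those hostnames again shortly, here is a small sketch that expands the pattern for our setup (the names assume the mongo release and default namespace used in this tutorial):

statefulset_name = service_name = "mongo-mongodb-replicaset"
namespace = "default"

hosts = [
    f"{statefulset_name}-{ordinal}.{service_name}.{namespace}.svc.cluster.local"
    for ordinal in range(3)
]
print(",".join(hosts))  # the comma-separated list we'll reuse for MONGO_HOSTNAME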

With your database instances up and running, you are ready to create the chart for your Node application.

Step 4 — Creating a Custom Application Chart and Configuring Parameters

We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

First, create a new chart directory called nodeapp with the following command:

  • helm create nodeapp

This will create a directory called nodeapp in your ~/node_project folder with the following resources:

  • A Chart.yaml file with basic information about your chart.
  • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
  • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
  • A templates/ directory with the template files that will generate Kubernetes manifests.
  • A templates/tests/ directory for test files.
  • A charts/ directory for any charts that this chart depends on.

The first file we will modify out of these default files is values.yaml. Open that file now:

  • nano nodeapp/values.yaml

The values that we will set here include:

  • The number of replicas.
  • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
  • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
  • The targetPort to specify the port on the Pod where our application will be exposed.

We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

Configure the following values in the values.yaml file:

~/node_project/nodeapp/values.yaml
# Default values for nodeapp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: your_dockerhub_username/node-replicas
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: LoadBalancer
  port: 80
  targetPort: 8080
...

Save and close the file when you are finished editing.

Next, open a secret.yaml file in the nodeapp/templates directory:

  • nano nodeapp/templates/secret.yaml

In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

Add the following code to the file:

~/node_project/nodeapp/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

Save and close the file when you are finished.

Next, open a file to create a ConfigMap for your application:

  • nano nodeapp/templates/configmap.yaml

In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

~/node_project/nodeapp/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"
  MONGO_PORT: "27017"
  MONGO_DB: "sharkinfo"
  MONGO_REPLICASET: "db"

Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.

Save and close the file when you are finished editing.

With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

Step 5 — Integrating Environment Variables into Your Helm Deployment

With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

Open the application Deployment template for editing:

  • nano nodeapp/templates/deployment.yaml

Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        ports:

Next, add the following keys to the list of env variables:

~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              key: MONGO_USERNAME
              name: {{ .Release.Name }}-auth
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MONGO_PASSWORD
              name: {{ .Release.Name }}-auth
        - name: MONGO_HOSTNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_HOSTNAME
              name: {{ .Release.Name }}-config
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: {{ .Release.Name }}-config
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: {{ .Release.Name }}-config
        - name: MONGO_REPLICASET
          valueFrom:
            configMapKeyRef:
              key: MONGO_REPLICASET
              name: {{ .Release.Name }}-config

Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      ...

Next, let’s modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

  • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.
  • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod’s container and listening on port 8080. If the status code for the response is between 200 and 400, then the kubelet will conclude that the container is healthy. Otherwise, in the case of a 400 or 500 status, kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.
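
To make that concrete, this is roughly the check the kubelet performs, expressed in Python (a sketch only; it assumes you have the app forwarded to localhost:8080, e.g. with kubectl port-forward):

import urllib.error
import urllib.request

try:
    resp = urllib.request.urlopen("http://localhost:8080/sharks", timeout=1)
    healthy = 200 <= resp.getcode() < 400  # kubelet treats 200-399 as healthy
except urllib.error.URLError:
    healthy = False                        # connection refused, timeout, or 4xx/5xx
print("healthy" if healthy else "unhealthy")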

Add the following modification to the stated path for the liveness and readiness probes:

~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      livenessProbe:
        httpGet:
          path: /sharks
          port: http
      readinessProbe:
        httpGet:
          path: /sharks
          port: http

Save and close the file when you are finished editing.

You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

  • helm install --name nodejs ./nodeapp

Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

You will see the following output indicating that your release has been created:

Output
NAME:   nodejs
LAST DEPLOYED: Wed Apr 17 18:10:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
nodejs-config  4     1s

==> v1/Deployment
NAME            READY  UP-TO-DATE  AVAILABLE  AGE
nodejs-nodeapp  0/3    3           0          1s
...

Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

Check the status of your Pods:

  • kubectl get pods
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          57m
mongo-mongodb-replicaset-1        1/1     Running   0          56m
mongo-mongodb-replicaset-2        1/1     Running   0          55m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

Once your Pods are up and running, check your Services:

  • kubectl get svc
Output
NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

You should see the following landing page:

Application Landing Page

Now that your replicated application is working, let’s add some test data to ensure that replication is working between members of the replica set.

Step 6 — Testing MongoDB Replication

With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

First, make sure you have navigated your browser to the application landing page:

Application Landing Page

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character:

Shark Info Form

In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

Filled Shark Form

Click on the Submit button. You will see a page with this shark information displayed back to you:

Shark Output

Now head back to the shark information form by clicking on Sharks in the top navigation bar:

Shark Info Form

Enter a new shark of your choosing. We’ll go with Whale Shark and Large:

Enter New Shark

Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

Complete Shark Collection

Let’s check that the data we’ve entered has been replicated between the primary and secondary members of our replica set.

Get a list of your Pods:

  • kubectl get pods
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          74m
mongo-mongodb-replicaset-1        1/1     Running   0          73m
mongo-mongodb-replicaset-2        1/1     Running   0          72m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

  • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

When prompted, enter the password associated with this username:

Output
MongoDB shell version v4.1.9
Enter password:

You will be dropped into an administrative shell:

Output
MongoDB server version: 4.1.9
Welcome to the MongoDB shell.
...
db:PRIMARY>

Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

  • rs.isMaster()

You will see output like the following, indicating the hostname of the primary:

Output
db:PRIMARY> rs.isMaster()
{
        "hosts" : [
                "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
        ],
        ...
        "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
        ...

Next, switch to your sharkinfo database:

  • use sharkinfo
Output
switched to db sharkinfo

List the collections in the database:

  • show collections
Output
sharks

Output the documents in the collection:

  • db.sharks.find()

You will see the following output:

Output
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 } { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

Exit the MongoDB Shell:

  • exit

Now that we have checked the data on our primary, let’s check that it’s being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

  • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:

  • db.setSlaveOk(1)

Switch to the sharkinfo database:

  • use sharkinfo
Output
switched to db sharkinfo


Output the documents in the collection:

  • db.sharks.find()

You should now see the same information that you saw when running this method on your primary instance:

Output
db:SECONDARY> db.sharks.find()
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

This output confirms that your application data is being replicated between the members of your replica set.

Conclusion

You have now deployed a replicated, highly available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm’s stable repository and other chart repositories.

As you move toward production, consider implementing the following:

To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.

DigitalOcean Community Tutorials

How To Automatically Manage DNS Records From DigitalOcean Kubernetes Using ExternalDNS

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

When deploying web apps to Kubernetes, you usually use Services and Ingresses to expose apps beyond the cluster at your desired domain. This involves manually configuring not only the Ingress, but also the DNS records at your provider, which can be a time-consuming and error-prone process. This can become an obstacle as your application grows in complexity; when the external IP changes, it is necessary to update the DNS records accordingly.

To overcome this, the Kubernetes sig-network team created ExternalDNS for the purpose of automatically managing external DNS records from within a Kubernetes cluster. Once deployed, ExternalDNS works in the background and requires almost no additional configuration. Whenever a Service or Ingress is created or changed, ExternalDNS will update the records right away.

In this tutorial, you will install ExternalDNS to your DigitalOcean Kubernetes cluster via Helm and configure it to use DigitalOcean as your DNS provider. Then, you will deploy a sample web app with an Ingress and use ExternalDNS to point it to your domain name. In the end, you will have an automated DNS record managing system in place for both Services and Ingresses.

Prerequisites

  • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

  • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

  • The Nginx Ingress Controller installed on your cluster using Helm in order to use ExternalDNS with Ingress Resources. To do this, follow How to Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm. You’ll need to set the publishService property to true as per the instructions in Step 2.

  • A DigitalOcean API key (Personal Access Token) with read and write permissions. To create one, visit How to Create a Personal Access Token.

  • A fully registered domain name. This tutorial will use echo.example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

Step 1 — Installing ExternalDNS Using Helm

In this section, you will install ExternalDNS to your cluster using Helm and configure it to work with the DigitalOcean DNS service.

In order to override some of the default settings of the ExternalDNS Helm chart, you’ll need to create a values.yaml file that you’ll pass in to Helm during installation. On the machine you use to access your cluster in the prerequisites, create the file by running:

  • nano externaldns-values.yaml

Add the following lines:

externaldns-values.yaml
rbac:
  create: true

provider: digitalocean

digitalocean:
  apiToken: your_api_token

interval: "1m"

policy: sync # or upsert-only

# domainFilters: [ 'example.com' ]

In the first block, you enable RBAC (Role Based Access Control) manifest creation, which must be enabled on RBAC-enabled Kubernetes clusters like DigitalOcean. In the next line, you set the DNS service provider to DigitalOcean. Then, in the next block, you’ll add your DigitalOcean API token by replacing your_api_token.

The next line sets the interval at which ExternalDNS will poll for changes to Ingresses and Services. You can set it to a lower value to propagate changes to your DNS faster.
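For instance, to poll more frequently, you could set a shorter interval in externaldns-values.yaml (an illustrative value; very short intervals mean more calls to the DigitalOcean API):

interval: "15s"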

The policy setting determines whether ExternalDNS will only insert DNS records (upsert-only) or create and delete them as needed (sync). Fortunately, since version 0.3, ExternalDNS supports the concept of ownership by creating accompanying TXT records in which it stores information about the domains it creates, limiting its scope of action to only those it created.
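As an illustration (not one of the tutorial’s steps), once ExternalDNS has created records for the domain used later in this tutorial, you could inspect the ownership marker with a TXT lookup. The exact value depends on your ExternalDNS version and owner ID, but it looks something like this:

  • dig +short TXT echo.example.com
Output
"heritage=external-dns,external-dns/owner=default"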

The domainFilters parameter is used for limiting the domains that ExternalDNS can manage. You can uncomment it and enter your domains in the form of a string array, but this isn’t necessary.

When you’ve finished editing, save and close the file.

Now, install ExternalDNS to your cluster by running the following command:

  • helm install stable/external-dns --name external-dns -f externaldns-values.yaml

The output will look similar to the following:

Output
NAME:   external-dns
LAST DEPLOYED: ...
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                           READY  STATUS             RESTARTS  AGE
external-dns-69c545655f-xqjjf  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME          TYPE    DATA  AGE
external-dns  Opaque  1     0s

==> v1/Service
NAME          TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
external-dns  ClusterIP  10.245.47.69  <none>       7979/TCP  0s

==> v1/ServiceAccount
NAME          SECRETS  AGE
external-dns  1        0s

==> v1beta1/ClusterRole
NAME          AGE
external-dns  0s

==> v1beta1/ClusterRoleBinding
NAME          AGE
external-dns  0s

==> v1beta1/Deployment
NAME          READY  UP-TO-DATE  AVAILABLE  AGE
external-dns  0/1    1           0          0s

NOTES:
...

You can verify the ExternalDNS creation by running the following command:

  • kubectl --namespace=default get pods -l "app=external-dns,release=external-dns" -w
Output
NAME                            READY   STATUS              RESTARTS   AGE
external-dns-69bfcf8ccb-7j4hp   0/1     ContainerCreating   0          3s

You’ve installed ExternalDNS to your Kubernetes cluster. Next, you will deploy an example web app, expose it using an Nginx Ingress, and let ExternalDNS automatically point your domain name to the appropriate Load Balancer.

Step 2 — Deploying and Exposing an Example Web App

In this section, you will deploy a dummy web app to your cluster in order to expose it using your Ingress. Then you’ll set up ExternalDNS to automatically configure DNS records for you. In the end, you will have DNS records for your domain pointed to the Load Balancer of the Ingress.

The dummy web app you’ll deploy is http-echo by Hashicorp. It is an in-memory web server that echoes back the message you give it. You’ll store its Kubernetes manifests in a file named echo.yaml. Create it and open it for editing:

  • nano echo.yaml

Add the following lines to your file:

echo.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  replicas: 3
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: hashicorp/http-echo
        args:
        - "-text=Echo!"
        ports:
        - containerPort: 5678

In this configuration, you define an Ingress, a Service, and a Deployment. The Deployment consists of three replicas of the http-echo app, with a custom message (Echo!) passed in. The Service is defined to allow access to the Pods in the Deployment via port 80. The Ingress is configured to expose the Service at your domain.

Replace echo.example.com with your domain, then save and close the file.

Now there is no need for you to configure the DNS records for the domain manually. ExternalDNS will do so automatically, as soon as you apply the configuration to Kubernetes.

To apply the configuration, run the following command:

  • kubectl create -f echo.yaml

You’ll see the following output:

Output
ingress.extensions/echo-ingress created service/echo created deployment.apps/echo created

You’ll need to wait a short amount of time for ExternalDNS to notice the changes and create the appropriate DNS records. The interval setting in the Helm chart governs how long you’ll need to wait for DNS record creation; in externaldns-values.yaml, you set the interval to 1 minute.

You can visit your DigitalOcean Control Panel to see an A and TXT record.

Control Panel - Generated DNS Records
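If you prefer the command line to the Control Panel, you can also check for the new A record with dig (assuming dig is installed on your local machine):

  • dig +short echo.example.com A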

Once the specified time interval has passed, access your domain using curl:

  • curl echo.example.com

You’ll see the following output:

Output
Echo!

This message confirms you’ve configured ExternalDNS and created the necessary DNS records to point to the Load Balancer of the Nginx Ingress Controller. If you see an error message, give the records some more time to propagate and try again. You can also access your domain from your browser, where you’ll see Echo!.

You’ve tested ExternalDNS by deploying an example app with an Ingress. You can also observe the new DNS records in your DigitalOcean Control Panel. In the next step, you’ll point your domain at a Service directly, without an Ingress.

Step 3 — (Optional) Exposing the App Using a Service

In this optional section, you’ll use Services with ExternalDNS instead of Ingresses. ExternalDNS allows you to make different Kubernetes resources available to DNS servers. Using Services is a similar process to Ingresses with the configuration modified for this alternate resource.

Note: Following this step will delete the DNS records you’ve just created.

Since you’ll be customizing the Service contained in echo.yaml, you won’t need the echo-ingress anymore. Delete it using the following command:

  • kubectl delete ing echo-ingress

The output will be:

Output
ingress.extensions/echo-ingress deleted

ExternalDNS will delete the existing DNS records it created in the previous step. In the remainder of the step, you can use the same domain you have used before.

Next, open the echo.yaml file for editing:

  • nano echo.yaml

Replace the file contents with the following lines:

echo.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: echo.example.com
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  replicas: 3
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: hashicorp/http-echo
        args:
        - "-text=Echo!"
        ports:
        - containerPort: 5678

You’ve removed the Ingress used in the previous setup from the file and changed the Service type to LoadBalancer. Furthermore, you’ve added an annotation specifying the domain name for ExternalDNS.

Apply the changes to your cluster by running the following command:

  • kubectl apply -f echo.yaml

The output will be:

Output
service/echo configured deployment.apps/echo configured

You can watch the Service’s Load Balancer become available by running:

  • kubectl get svc echo -w

You will see output similar to the following:

Output
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
echo   LoadBalancer   10.245.81.235   <pending>     80:31814/TCP   8s
...

As in the previous step, you’ll need to wait some time for the DNS records to be created and propagated. Once that is done, curl the domain you specified:

  • curl echo.example.com

The output will be the same as the previous step:

Output
Echo!

If you get an error, wait a little longer, or you can try a different domain. Since DNS records are cached on client systems, it may take a long time for the changes to actually propagate.
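One way to sidestep your local cache while you wait (illustrative; assumes dig is installed) is to query one of DigitalOcean’s own nameservers directly:

  • dig +short echo.example.com @ns1.digitalocean.com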

In this step, you created a Service (of type LoadBalancer) and pointed it to your domain name using ExternalDNS.

Conclusion

ExternalDNS works silently in the background and provides a friction-free experience. Your Kubernetes cluster has just become the central source of truth regarding the domains. You won’t have to manually update DNS records anymore.

The real power of ExternalDNS will become apparent when creating testing environments from a Continuous Delivery system. If you want to set up one such system on your Kubernetes cluster, visit How To Set Up a CD Pipeline with Spinnaker on DigitalOcean Kubernetes.

DigitalOcean Community Tutorials

Learn PyQt: Using the PyQt5 ModelView Architecture to build a simple Todo app

As you start to build more complex applications with PyQt5 you’ll likely come across issues keeping widgets in sync with your data.

Data stored in widgets (e.g. a simple QListWidget) is not readily available to manipulate from Python — changes require you to get an item, get the data, and then set it back. The default solution to this is to keep an external data representation in Python, and then either duplicate updates to both the data and the widget, or simply rewrite the whole widget from the data. This can get ugly quickly, and results in a lot of boilerplate just for fiddling with the data.
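As a rough sketch of the fiddling involved (illustrative only; assumes a QListWidget named list_widget and a parallel Python list named data):

# Keeping a QListWidget and a separate Python list in sync by hand.
row = 0
data[row] = "Updated text"         # 1. update the Python-side data
item = list_widget.item(row)       # 2. fetch the matching widget item
item.setText(data[row])            # 3. push the new data back into the widget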

Thankfully Qt has a solution for this — ModelViews. ModelViews are a powerful alternative to the standard display widgets, which use a regular model interface to interact with data sources — from simple data structures to external databases. This isolates your data, allowing it to be kept in any structure you like, while the view takes care of presentation and updates.

This tutorial introduces the key aspects of Qt’s ModelView architecture and uses it to build a simple desktop Todo application in PyQt5.

Model View Controller

Model–View–Controller (MVC) is an architectural pattern used for developing user interfaces which divides an application into three interconnected parts. This separates the internal representation of data from how information is presented to and accepted from the user.

The MVC design pattern decouples three major components —

  • Model holds the data structure which the app is working with.
  • View is any representation of information as shown to the user, whether graphical or tabular. Multiple views of the same data model are allowed.
  • Controller accepts input from the user, transforming it into commands for the model or view.

In Qt land the distinction between the View & Controller gets a little murky. Qt accepts input events from the user (via the OS) and delegates these to the widgets (Controller) to handle. However, widgets also handle presentation of the current state to the user, putting them squarely in the View. Rather than agonize over where to draw the line, in Qt-speak the View and Controller are instead merged together, creating a Model/ViewController architecture — called “Model View” for simplicity’s sake.

Importantly, the distinction between the data and how it is presented is preserved.

The Model View

The Model acts as the interface between the data store and the ModelView. The Model holds the data (or a reference to it) and presents this data through a standardised API which Views then consume and present to the user. Multiple Views can share the same data, presenting it in completely different ways.

You can use any “data store” for your model, including for example a standard Python list or dictionary, or a database (via e.g. SQLAlchemy) — it’s entirely up to you.

The two parts are essentially responsible for —

  1. The model stores the data, or a reference to it, and returns individual records or ranges of records, together with associated metadata or display instructions.
  2. The view requests data from the model and displays what is returned on the widget.

There is an in-depth discussion of the Qt architecture in the documentation.

A simple Model View — a Todo List

To demonstrate how to use the ModelViews in practice, we’ll put together a very simple implementation of a desktop Todo List. This will consist of a QListView for the list of items, a QLineEdit to enter new items, and a set of buttons to add, delete, or mark items as done.

The UI

The simple UI was laid out using Qt Creator and saved as mainwindow.ui. The .ui file and all the other parts can be downloaded here.

Designing a Simple Todo app in Qt Creator

The running app is shown below.

The running Todo GUI (nothing works yet)

The widgets available in the interface were given the IDs shown in the table below.

objectName Type Description
todoView QListView The list of current todos
todoEdit QLineEdit The text input for creating a new todo item
addButton QPushButton Create the new todo, adding it to the todos list
deleteButton QPushButton Delete the current selected todo, removing it from the todos list
completeButton QPushButton Mark the current selected todo as done

We’ll use these identifiers to hook up the application logic later.

The Model

We define our custom model by subclassing from a base implementation, allowing us to focus on the parts unique to our model. Qt provides a number of different model bases, including those with support for multidimensional data (think spreadsheet).

But for this example we only need a simple list for our data and are displaying the result to a QListView. The matching base model for this is QAbstractListModel. The outline definition for our model is shown below.

class TodoModel(QtCore.QAbstractListModel):
    def __init__(self, *args, todos=None, **kwargs):
        super(TodoModel, self).__init__(*args, **kwargs)
        self.todos = todos or []

    def data(self, index, role):
        if role == Qt.DisplayRole:
            # See below for the data structure.
            status, text = self.todos[index.row()]
            # Return the todo text only.
            return text

    def rowCount(self, index):
        return len(self.todos)

The .todos variable is our data store, and the two methods rowCount() and data() are standard Model methods we must implement for a list model. We’ll go through these in turn below.

.todos list

The data store for our model is .todos, a simple Python list in which we’ll store tuples of values in the format [(bool, str), (bool, str), (bool, str)], where bool is the done state of a given entry and str is the text of the todo.

We initialise self.todos to an empty list on startup, unless a list is passed in via the todos keyword argument.

self.todos = todos or [] will set self.todos to the value of the provided todos variable if it is truthy (i.e. anything other than an empty list, the boolean False, or None, the default value); otherwise it will be set to the empty list [].
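For example, in an illustrative interpreter session:

>>> todos = None
>>> todos or []          # falsy, so the default empty list is used
[]
>>> todos = [(False, 'an item')]
>>> todos or []          # truthy, so the passed-in list is used
[(False, 'an item')]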

To create an instance of this model we can simply do —

model = TodoModel()   # create an empty todo list 

Or to pass in an existing list —

todos = [(False, 'an item'), (False, 'another item')]
model = TodoModel(todos=todos)  # todos is keyword-only, since it follows *args

.rowCount()

The .rowCount() method is called by the view to get the number of rows in the current data. This is required for the view to know the maximum index it can request from the data store (rowCount() - 1). Since we’re using a Python list as our data store, the return value for this is simply the len() of the list.

.data()

This is the core of your model, which handles requests for data from the view and returns the appropriate result. It receives two parameters index and role.

index is the position/coordinates of the data which the view is requesting, accessible via two methods .row() and .column(), which give the position in each dimension.

For our QListView the column is always 0 and can be ignored, but you would need to use this for 2D data in a spreadsheet view.

role is a flag indicating the type of data the view is requesting. This is because the .data() method actually has more responsibility than just the core data. It also handles requests for style information, tooltips, status bars, etc. — basically anything that could be informed by the data itself.

The naming of Qt.DisplayRole is a bit weird, but this indicates that the view is asking us “please give me data for display”. There are other roles which the data can receive for styling requests or requesting data in “edit-ready” format.

Role Value Description
Qt.DisplayRole 0 The key data to be rendered in the form of text. (QString)
Qt.DecorationRole 1 The data to be rendered as a decoration in the form of an icon. (QColor, QIcon or QPixmap)
Qt.EditRole 2 The data in a form suitable for editing in an editor. (QString)
Qt.ToolTipRole 3 The data displayed in the item’s tooltip. (QString)
Qt.StatusTipRole 4 The data displayed in the status bar. (QString)
Qt.WhatsThisRole 5 The data displayed for the item in “What’s This?” mode. (QString)
Qt.SizeHintRole 13 The size hint for the item that will be supplied to views. (QSize)

For a full list of available roles that you can receive see the Qt ItemDataRole documentation. Our todo list will only be using Qt.DisplayRole and Qt.DecorationRole.
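To give a flavour of how another role would slot in, here is a sketch (not part of the tutorial app) of the same data() method also answering Qt.ToolTipRole:

    def data(self, index, role):
        status, text = self.todos[index.row()]
        if role == Qt.DisplayRole:
            return text
        if role == Qt.ToolTipRole:
            # Illustrative only: text shown when the user hovers over the item.
            return "Done" if status else "Still to do"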

Basic implementation

Below is the basic stub application needed to load the UI and display it. We’ll add our model code and application logic to this base.

import sys
from PyQt5 import QtCore, QtGui, QtWidgets, uic
from PyQt5.QtCore import Qt

qt_creator_file = "mainwindow.ui"
Ui_MainWindow, QtBaseClass = uic.loadUiType(qt_creator_file)


class TodoModel(QtCore.QAbstractListModel):
    def __init__(self, *args, todos=None, **kwargs):
        super(TodoModel, self).__init__(*args, **kwargs)
        self.todos = todos or []

    def data(self, index, role):
        if role == Qt.DisplayRole:
            status, text = self.todos[index.row()]
            return text

    def rowCount(self, index):
        return len(self.todos)


class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        QtWidgets.QMainWindow.__init__(self)
        Ui_MainWindow.__init__(self)
        self.setupUi(self)
        self.model = TodoModel()
        self.todoView.setModel(self.model)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

We define our TodoModel as before, and initialise the MainWindow object. In the __init__ for the MainWindow we create an instance of our todo model and set this model on the todoView. Save this file as todo.py and run it with —

python3 todo.py  

While there isn’t much to see yet, the QListView and our model are actually working — if you add some default data you’ll see it appear in the list.

self.model = TodoModel(todos=[(False, 'my first todo')]) 
QListView showing hard-coded todo item

You can keep adding items manually like this and they will show up in order in the QListView. Next we’ll make it possible to add items from within the application.

First create a new method on the MainWindow named add. This is our callback which will take care of adding the current text from the input as a new todo. Connect this method to the addButton.pressed signal at the end of the __init__ block.

class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        QtWidgets.QMainWindow.__init__(self)
        Ui_MainWindow.__init__(self)
        self.setupUi(self)
        self.model = TodoModel()
        self.todoView.setModel(self.model)
        # Connect the button.
        self.addButton.pressed.connect(self.add)

    def add(self):
        """
        Add an item to our todo list, getting the text from the QLineEdit .todoEdit
        and then clearing it.
        """
        text = self.todoEdit.text()
        if text:  # Don't add empty strings.
            # Access the list via the model.
            self.model.todos.append((False, text))
            # Trigger refresh.
            self.model.layoutChanged.emit()
            # Empty the input.
            self.todoEdit.setText("")

In the add method, notice the line self.model.layoutChanged.emit(). Here we’re emitting a model signal .layoutChanged to let the view know that the shape of the data has been altered. This triggers a refresh of the entirety of the view. If you omit this line, the todo will still be added but the QListView won’t update.

If just the data is altered, but the number of rows/columns is unaffected, you can use the .dataChanged signal to let Qt know about this. This signal also takes a top-left and a bottom-right location in the data, to avoid redrawing the entire view.
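For example, to signal that a single row has changed (a sketch; .index() is the standard model method for building a QModelIndex, with column always 0 in a list model):

ix = self.model.index(row, 0)
# Top-left and bottom-right are the same index for a single item.
self.model.dataChanged.emit(ix, ix)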

Hooking up the other actions

We can now connect the rest of the buttons’ signals and add helper functions for performing the delete and complete operations. We add the button signals in the __init__ block as before.

        self.addButton.pressed.connect(self.add)
        self.deleteButton.pressed.connect(self.delete)
        self.completeButton.pressed.connect(self.complete)

Then define a new delete method as follows —

    def delete(self):
        indexes = self.todoView.selectedIndexes()
        if indexes:
            # Indexes is a list of a single item in single-select mode.
            index = indexes[0]
            # Remove the item and refresh.
            del self.model.todos[index.row()]
            self.model.layoutChanged.emit()
            # Clear the selection (as it is no longer valid).
            self.todoView.clearSelection()

We use self.todoView.selectedIndexes() to get the indexes (actually a list of a single item, as we’re in single-selection mode) and then use the .row() as an index into our list of todos on our model. We delete the indexed item using Python’s del operator, and then trigger a layoutChanged signal because the shape of the data has been modified.

Finally, we clear the active selection, since the item it relates to may now be out of bounds (if you had selected the last item).

You could try to make this smarter, and select the last item in the list instead.

The complete method looks like this —

    def complete(self):
        indexes = self.todoView.selectedIndexes()
        if indexes:
            index = indexes[0]
            row = index.row()
            status, text = self.model.todos[row]
            self.model.todos[row] = (True, text)
            # .dataChanged takes top-left and bottom-right, which are equal
            # for a single selection.
            self.model.dataChanged.emit(index, index)
            # Clear the selection (as it is no longer valid).
            self.todoView.clearSelection()

This uses the same indexing as for delete, but this time we fetch the item from the model .todos list and then replace the status with True.

We have to do this fetch-and-replace, as our data is stored as Python tuples which cannot be modified.
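You can see why in an illustrative interpreter session:

>>> todo = (False, 'feed the sharks')
>>> todo[0] = True
Traceback (most recent call last):
  ...
TypeError: 'tuple' object does not support item assignment
>>> todo = (True, todo[1])   # build a replacement tuple instead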

The key difference here vs. standard Qt widgets is that we make changes directly to our data, and simply need to notify Qt that some change has occurred — updating the widget state is handled automatically.

Using Qt.DecorationRole

If you run the application now you should find that adding and deleting both work, but while completing items is working, there is no indication of it in the view. We need to update our model to provide the view with an indicator to display when an item is complete. The updated model is shown below.

tick = QtGui.QImage('tick.png')


class TodoModel(QtCore.QAbstractListModel):
    def __init__(self, *args, todos=None, **kwargs):
        super(TodoModel, self).__init__(*args, **kwargs)
        self.todos = todos or []

    def data(self, index, role):
        if role == Qt.DisplayRole:
            _, text = self.todos[index.row()]
            return text

        if role == Qt.DecorationRole:
            status, _ = self.todos[index.row()]
            if status:
                return tick

    def rowCount(self, index):
        return len(self.todos)

We’re using a tick icon, tick.png, to indicate completed items, which we load into a QImage object named tick. In the model we’ve implemented a handler for the Qt.DecorationRole which returns the tick icon for rows whose status is True (for complete).

The icon I'm using is taken from the Fugue set by p.yusukekamiyamane

Instead of an icon you can also return a color, e.g. QtGui.QColor('green') which will be drawn as solid square.

Running the app you should now be able to mark items as complete.

Todos Marked Complete

A persistent data store

Our todo app works nicely, but it has one fatal flaw — it forgets your todos as soon as you close the application. While thinking you have nothing to do when you do may contribute to short-term feelings of Zen, long term it’s probably a bad idea.

The solution is to implement some sort of persistent data store. The simplest approach is a simple file store, where we load items from a JSON or Pickle file at startup, and write back on changes.

To do this we define two new methods on our MainWindow class — load and save. These load data from a JSON file named data.json (if it exists, ignoring the error if it doesn’t) into self.model.todos, and write the current self.model.todos out to the same file, respectively.

    def load(self):
        try:
            with open('data.json', 'r') as f:
                self.model.todos = json.load(f)
        except Exception:
            pass

    def save(self):
        with open('data.json', 'w') as f:
            json.dump(self.model.todos, f)

To persist the changes to the data we need to add the .save() handler to the end of any method that modifies the data, and the .load() handler to the __init__ block after the model has been created.

The final code looks like this —

import sys
import json
from PyQt5 import QtCore, QtGui, QtWidgets, uic
from PyQt5.QtCore import Qt

qt_creator_file = "mainwindow.ui"
Ui_MainWindow, QtBaseClass = uic.loadUiType(qt_creator_file)
tick = QtGui.QImage('tick.png')


class TodoModel(QtCore.QAbstractListModel):
    def __init__(self, *args, todos=None, **kwargs):
        super(TodoModel, self).__init__(*args, **kwargs)
        self.todos = todos or []

    def data(self, index, role):
        if role == Qt.DisplayRole:
            _, text = self.todos[index.row()]
            return text

        if role == Qt.DecorationRole:
            status, _ = self.todos[index.row()]
            if status:
                return tick

    def rowCount(self, index):
        return len(self.todos)


class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        QtWidgets.QMainWindow.__init__(self)
        Ui_MainWindow.__init__(self)
        self.setupUi(self)
        self.model = TodoModel()
        self.load()
        self.todoView.setModel(self.model)
        self.addButton.pressed.connect(self.add)
        self.deleteButton.pressed.connect(self.delete)
        self.completeButton.pressed.connect(self.complete)

    def add(self):
        """
        Add an item to our todo list, getting the text from the QLineEdit .todoEdit
        and then clearing it.
        """
        text = self.todoEdit.text()
        if text:  # Don't add empty strings.
            # Access the list via the model.
            self.model.todos.append((False, text))
            # Trigger refresh.
            self.model.layoutChanged.emit()
            # Empty the input.
            self.todoEdit.setText("")
            self.save()

    def delete(self):
        indexes = self.todoView.selectedIndexes()
        if indexes:
            # Indexes is a list of a single item in single-select mode.
            index = indexes[0]
            # Remove the item and refresh.
            del self.model.todos[index.row()]
            self.model.layoutChanged.emit()
            # Clear the selection (as it is no longer valid).
            self.todoView.clearSelection()
            self.save()

    def complete(self):
        indexes = self.todoView.selectedIndexes()
        if indexes:
            index = indexes[0]
            row = index.row()
            status, text = self.model.todos[row]
            self.model.todos[row] = (True, text)
            # .dataChanged takes top-left and bottom-right, which are equal
            # for a single selection.
            self.model.dataChanged.emit(index, index)
            # Clear the selection (as it is no longer valid).
            self.todoView.clearSelection()
            self.save()

    def load(self):
        try:
            with open('data.json', 'r') as f:
                self.model.todos = json.load(f)
        except Exception:
            pass

    def save(self):
        with open('data.json', 'w') as f:
            json.dump(self.model.todos, f)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

If the data in your application has the potential to get large or more complex, you may prefer to use an actual database to store it. In this case the model will wrap the interface to the database and query it directly for data to display. I’ll cover how to do this in an upcoming tutorial.

For another interesting example of a QListView see this example media player application. It uses the Qt built-in QMediaPlaylist as the datastore, with the contents displayed to a QListView.

Planet Python