Red Hat Developers: Use the Kubernetes Python client from your running Red Hat OpenShift pods

Red Hat OpenShift is part of the Cloud Native Computing Foundation (CNCF) Certified Program, ensuring portability and interoperability for your container workloads. This also allows you to use Kubernetes tools, like kubectl, to interact with an OpenShift cluster, and you can rest assured that all the APIs you know and love are right there at your fingertips.

The Kubernetes Python client is another great tool for interacting with an OpenShift cluster, allowing you to perform actions on Kubernetes resources with Python code. It also has applications within a cluster. We can configure a Python application running on OpenShift to consume the OpenShift API, and list and create resources. We could then create containerized batch jobs from the running application, or a custom service monitor, for example. It sounds a bit like “OpenShift inception,” using the OpenShift API from services created using the OpenShift API.

In this article, we’ll create a Flask application running on OpenShift. This application will use the Kubernetes Python client to interact with the OpenShift API, list other pods in the project, and display them back to the user.

You’ll need a couple of things to follow along:

  • An OpenShift cluster
  • A working knowledge of Python

Let’s get started!

Setup

I’ve created a template to alleviate a lot of the boilerplate, so let’s clone it down:

git clone https://github.com/shaneboulden/openshift-client-demo
cd openshift-client-demo

You can create a new app on your OpenShift cluster using the provided template and see the application running:

oc new-app openshift_deploy/ocd.yaml 

If you do an oc get routes, you’ll be able to see the route that’s been created. For now, if you select the Pods menu item you’ll just get some placeholder text. We’ll fix this shortly 🙂

[Image: the Pods page showing placeholder text]

Configure the Kubernetes Python client

Listing pods is trivial once we have our client configured, and, fortunately, we can use a little Kubernetes Python client magic to configure this easily with the correct service account token.

Usually, we’d configure a Kubernetes client using a kubeconfig file, which contains the token and hostname required to create API requests. The Kubernetes Python client also provides a method, load_incluster_config(), which replaces the kubeconfig file when the code runs inside a pod: it uses the environment variables and mount points available to the pod to find the service account token and build the API URLs.

There’s another huge benefit to using load_incluster_config()—our code is now portable. We can take this same application to any Kubernetes cluster, assume nothing about hostnames or network addresses, and easily construct API requests using this awesome little method.
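To see why this matters, here’s a minimal sketch (not from the demo repo) of a common pattern: try the in-cluster configuration first and fall back to a local kubeconfig, so the same code also runs from a developer workstation:

from kubernetes import client, config

try:
    # Inside a pod: use the mounted service account token and API environment variables
    config.load_incluster_config()
except config.ConfigException:
    # Outside the cluster: fall back to the local kubeconfig file (~/.kube/config)
    config.load_kube_config()

v1 = client.CoreV1Api()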

Let’s configure our application to use the load_incluster_config() method. First, we need to import the client and config objects; you can verify this in the ocd.py file:

from kubernetes import client, config 

We can now use that magic method to configure the client:

config.load_incluster_config()
v1 = client.CoreV1Api()

That’s it! This is all of the code we need to be able to interact with the OpenShift API from running pods.

Use the Kubernetes Downward API

I’m going to introduce something new here, and yes, it’s another “OpenShift-inception” concept. We’re going to use the list_namespaced_pod method to list pod details; you can find all of the methods available in the documentation. To use this method, we need to pass the current namespace (project) to the Kubernetes client object. But wait, how do we get the namespace for our pod, from inside the running pod?

This is where another awesome Kubernetes API comes into play. It’s called the Downward API and allows us to access metadata about our pod from inside the running pod. To expose information from the Downward API to our pod, we can use environment variables. If you look at the template, you’ll see the following in the ‘env’ section:

- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace

Bring it all together

Now let’s get back to our /pods route in the ocd.py file. The last thing we need to do is to pass the namespace of the app to the Kubernetes client. We have our environment variable configured to use the downward API already, so let’s pass it in:

pods = v1.list_namespaced_pod(namespace=os.environ["POD_NAMESPACE"]) 
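Put together, the /pods route only needs a few lines. Here is a simplified sketch of how it can look; the actual ocd.py in the repo may differ, and the pods.html template name is just an assumption for illustration:

import os

from flask import Flask, render_template
from kubernetes import client, config

app = Flask(__name__)

# Configure the client from the pod's service account
config.load_incluster_config()
v1 = client.CoreV1Api()

@app.route("/pods")
def pods():
    # POD_NAMESPACE is injected by the Downward API (see the template's env section)
    pod_list = v1.list_namespaced_pod(namespace=os.environ["POD_NAMESPACE"])
    names = [pod.metadata.name for pod in pod_list.items]
    return render_template("pods.html", pods=names)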

Ensure you’re in the top-level project directory (i.e., you can see the README) and start a build from the local directory:

oc start-build openshift-client-demo --from-dir=. 

When you next visit the route and select the Pods menu, you’ll be able to see all of the pods for the current namespace:

[Image: the Pods page listing the pods in the current namespace]

I hope you’ve enjoyed this short introduction to the Kubernetes Python client. If you want to explore a little deeper, you can look at creating resources. There’s an example here that looks at creating containerized batch jobs from API POSTs.



Planet Python

Test and Code: 74: Technical Interviews: Preparing For, What to Expect, and Tips for Success – Derrick Mar

In this episode, I talk with Derrick Mar, CTO and co-founder of Pathrise.
This is the episode you need to listen to to get ready for software interviews.

  • We discuss four aspects of technical interviews that interviewers are looking for:

    • communication
    • problem solving
    • coding
    • verification
  • How to practice for the interview.

  • Techniques for synchronizing with the interviewer and asking for hints.

  • Even how to ask the recruiter or hiring manager how to prepare for the interview.

If you or anyone you know has a software interview coming up, this episode will help you both feel more comfortable about the interview before you show up, and give you concrete tips on how to do better during the interview.

Special Guest: Derrick Mar.

Sponsored By:

  • Python Testing with pytest: Simple, Rapid, Effective, and Scalable. The fastest way to learn pytest. From 0 to expert in under 200 pages. (http://amzn.to/2E6cYZ9)
  • Patreon Supporters: Help support the show with as little as $1 per month. Funds help pay for expenses associated with the show. (https://www.patreon.com/testpodcast)

Support Test & Code – Software Testing, Development, Python (https://www.patreon.com/testpodcast)

Links:

  • 72: Technical Interview Fixes – April Wensel (https://testandcode.com/72)
  • Pathrise (https://www.pathrise.com/)

Planet Python

codingdirectional: Growth of a Population

Hello and welcome back. In this episode we are going to solve a Python-related problem on Codewars. Before we start, I just want to say that this post is about Python programming; you are welcome to leave comments below this post, but only if they are related to the solution. Kindly do not leave any comment under this article that has nothing to do with Python programming. Thank you.

In a small town, the population is p0 = 1000 at the beginning of a year. The population regularly increases by 2 percent per year and, moreover, 50 new inhabitants per year come to live in the town. How many years does the town need to see its population become greater than or equal to p = 1200 inhabitants? In this solution, the percentage part of the equation is rounded to the nearest integer. Below is the entire solution to this question.

At the end of the first year there will be:
1000 + 1000 * 0.02 + 50 => 1070 inhabitants

At the end of the 2nd year there will be:
1070 + 1070 * 0.02 + 50 => 1141 inhabitants (number of inhabitants is an integer)

At the end of the 3rd year there will be:
1141 + 1141 * 0.02 + 50 => 1213

It will need 3 entire years.

So how are we going to turn the above population equation into a function?

def nb_year(p0, percent, aug, p):
    # p0 is the current population, percent is the yearly growth rate in percent,
    # aug is the number of new inhabitants per year, and p is the target population
    perc = round(p0 * percent / 100)
    total_population = p0 + perc + aug
    year = 1
    while total_population < p:
        perc = round(total_population * percent / 100)
        total_population = total_population + perc + aug
        year += 1
    return year
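As a quick sanity check (an illustrative call, not part of the original post), the worked example above can be reproduced like this:

print(nb_year(1000, 2, 50, 1200))  # prints 3, matching the three years worked out above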

Simple solution, hope you do enjoy this post. We will start a new project soon so stay tuned!

Planet Python

Portals for Tableau New Feature Spotlight: Portal Backups


Major League Tornadoes

© InterWorks 2019 – All Rights Reserved. Modified "Tornado Icon" (https://game-icons.net/1x1/lorc/tornado.html) by Lorc is licensed under CC BY 3.0.

The new season of Major League Tornadoes is getting kicked off around the InterWorks headquarters, so disaster recovery is on our minds. Portals for Tableau has had the ability to export various pieces of its data for quite some time. However, it could not provide a proper backup since it lacked features such as the ability to export files, logos, etc.

The new full portal backup system rectifies that issue. These backups will export the database structure and data, the portal code and all uploaded files.

Creating a New Portal Backup

To take a new backup, navigate to Backend > Settings > Import/Export > Full Backup tab, and click on the Take New Backup button. When you do, you will be able to watch the status as the new backup is built. The portal will even show you up-to-date stats on how much free space you have available on your portal server to ensure you have room. These stats will be refreshed periodically while the backup is being built if you like to follow along and keep score in the stands.

When your new backup is complete, you can click on it to download the zip archive and store it for safe keeping. You also have the ability to remove old backups that are no longer needed to free up space for new ones.

Scheduling Backups for Your Portals for Tableau

You even have the option to schedule backups to make life easier. To avoid scheduled backups piling up and maxing out your server’s storage, you can also configure how many backups to retain. As new backups are created, old ones will be purged. By default, a weekly backup will take place and it will retain the two latest ones.

With this new backup system, you can rest a little easier the next time the Twisters come to your data center’s town and the game is a total blowout.

[Image: creating a new backup in Portals for Tableau]


InterWorks