How To Construct For Loops in Go

Introduction

In computer programming, a loop is a code structure that loops around to repeatedly execute a piece of code, often until some condition is met. Using loops in computer programming allows you to automate and repeat similar tasks multiple times. Imagine if you had a list of files that you needed to process, or if you wanted to count the number of lines in an article. You would use a loop in your code to solve these types of problems.

In Go, a for loop implements the repeated execution of code based on a loop counter or loop variable. Unlike other programming languages that have multiple looping constructs such as while, do, and so on, Go has only the for loop. This serves to make your code clearer and more readable, since you do not have to worry about multiple strategies to achieve the same looping construct. This enhanced readability and decreased cognitive load during development will also make your code less prone to error than in other languages.

In this tutorial, you will learn how Go’s for loop works, including the three major variations of its use. We’ll start by showing how to create different types of for loops, followed by how to loop through sequential data types in Go. We’ll end by explaining how to use nested loops.

Declaring ForClause and Condition Loops

In order to account for a variety of use cases, there are three distinct ways to create for loops in Go, each with their own capabilities. These are to create a for loop with a Condition, a ForClause, or a RangeClause. In this section, we will explain how to declare and use the ForClause and Condition variants.

Let’s look at how we can use a for loop with the ForClause first.

A ForClause loop is defined as having an initial statement, followed by a condition, and then a post statement. These are arranged in the following syntax:

for [ Initial Statement ] ; [ Condition ] ; [ Post Statement ] {
    [Action]
}

To explain what the preceding components do, let’s look at a for loop that increments through a specified range of values using the ForClause syntax:

for i := 0; i < 5; i++ {
    fmt.Println(i)
}

Let’s break this loop down and identify each part.

The first part of the loop is i := 0. This is the initial statement:

for i := 0; i < 5; i++ {
    fmt.Println(i)
}

It states that we are declaring a variable called i, and setting the initial value to 0.

Next is the condition:

for i := 0; i < 5; i++ {
    fmt.Println(i)
}

In this condition, we stated that while i is less than the value of 5, the loop should continue looping.

Finally, we have the post statement:

for i := 0; i < 5; i++ {
    fmt.Println(i)
}

In the post statement, we increment the loop variable i up by one each time an iteration occurs using the i++ increment operator.

When we run this program, the output looks like this:

Output
0
1
2
3
4

The loop ran 5 times. Initially, it set i to 0, and then checked whether i was less than 5. Since the value of i was less than 5, the loop executed the action of fmt.Println(i). After each iteration, the post statement i++ ran, incrementing the value of i by 1 before the condition was checked again.

Note: Keep in mind that in programming we tend to begin at index 0, so that is why although 5 numbers are printed out, they range from 0-4.

We aren’t limited to starting at 0 or ending at a specified value. We can assign any value to our initial statement, and also stop at any value in our post statement. This allows us to create any desired range to loop through:

for i := 20; i < 25; i++ {
    fmt.Println(i)
}

Here, the iteration goes from 20 (inclusive) to 25 (exclusive), so the output looks like this:

Output
20
21
22
23
24

We can also use our post statement to increment at different values. This is similar to step in other languages:

First, let’s use a post statement with a positive value:

for i := 0; i < 15; i += 3 {
    fmt.Println(i)
}

In this case, the for loop counts from 0 up to (but not including) 15 in increments of 3, so only every third number is printed, like so:

Output
0
3
6
9
12

We can also use a negative value for our post statement argument to iterate backwards, but we’ll have to adjust our initial statement and condition arguments accordingly:

for i := 100; i > 0; i -= 10 {
    fmt.Println(i)
}

Here, we set i to an initial value of 100, use the condition i > 0 to stop once the value reaches 0, and use the post statement to decrement the value by 10 with the -= operator. The loop begins at 100 and decreases by 10 with each iteration until the condition fails, so the last value printed is 10. We can see this occur in the output:

Output
100
90
80
70
60
50
40
30
20
10

You can also exclude the initial statement and the post statement from the for syntax, and only use the condition. This is what is known as a Condition loop:

i := 0
for i < 5 {
    fmt.Println(i)
    i++
}

This time, we declared the variable i separately from the for loop in the preceding line of code. The loop only has a condition clause that checks to see if i is less than 5. As long as the condition evaluates to true, the loop will continue to iterate.

Sometimes you may not know the number of iterations you will need to complete a certain task. In that case, you can omit all statements, and use the break keyword to exit execution:

for {
    if someCondition {
        break
    }
    // do action here
}

An example of this may be if we are reading from an indeterminately sized structure like a buffer and we don’t know when we will be done reading:

buffer.go
package main

import (
    "bytes"
    "fmt"
    "io"
)

func main() {
    buf := bytes.NewBufferString("one\ntwo\nthree\nfour\n")

    for {
        line, err := buf.ReadString('\n')
        if err != nil {
            if err == io.EOF {
                fmt.Print(line)
                break
            }
            fmt.Println(err)
            break
        }
        fmt.Print(line)
    }
}

In the preceding code, buf := bytes.NewBufferString("one\ntwo\nthree\nfour\n") declares a buffer with some data. Because we don’t know when the buffer will finish reading, we create a for loop with no clause. Inside the for loop, we use line, err := buf.ReadString('\n') to read a line from the buffer and check to see if there was an error reading from the buffer. If there was, we address the error and use the break keyword to exit the for loop. With these break points, you do not need to include a condition to stop the loop.

In this section, we learned how to declare a ForClause loop and use it to iterate through a known range of values. We also learned how to use a Condition loop to iterate until a specific condition was met. Next, we’ll learn how the RangeClause is used for iterating through sequential data types.

Looping Through Sequential Data Types with RangeClause

It is common in Go to use for loops to iterate over the elements of sequential or collection data types like slices, arrays, and strings. To make it easier to do so, we can use a for loop with RangeClause syntax. While you can loop through sequential data types using the ForClause syntax, the RangeClause is cleaner and easier to read.

Before we look at using the RangeClause, let’s look at how we can iterate through a slice by using the ForClause syntax:

main.go
package main

import "fmt"

func main() {
    sharks := []string{"hammerhead", "great white", "dogfish", "frilled", "bullhead", "requiem"}

    for i := 0; i < len(sharks); i++ {
        fmt.Println(sharks[i])
    }
}

Running this will give the following output, printing out each element of the slice:

Output
hammerhead
great white
dogfish
frilled
bullhead
requiem

Now, let’s use the RangeClause to perform the same set of actions:

main.go
package main

import "fmt"

func main() {
    sharks := []string{"hammerhead", "great white", "dogfish", "frilled", "bullhead", "requiem"}

    for i, shark := range sharks {
        fmt.Println(i, shark)
    }
}

In this case, we are printing out each item in the list. Though we used the variables i and shark, we could have given them any other valid variable names and we would receive the same output:

Output
0 hammerhead
1 great white
2 dogfish
3 frilled
4 bullhead
5 requiem

When using range on a slice, it will always return two values. The first value is the index of the current iteration of the loop, and the second is the value at that index. In this case, for the first iteration, the index was 0 and the value was hammerhead.

Sometimes we only want the values inside the slice elements, not the index. However, if we change the preceding code to print only the value, we will receive a compile-time error:

main.go
package main

import "fmt"

func main() {
    sharks := []string{"hammerhead", "great white", "dogfish", "frilled", "bullhead", "requiem"}

    for i, shark := range sharks {
        fmt.Println(shark)
    }
}
Output
src/range-error.go:8:6: i declared and not used

Because i is declared in the for loop, but never used, the compiler will respond with the error of i declared and not used. This is the same error that you will receive in Go any time you declare a variable and don’t use it.

Because of this, Go has the blank identifier, which is an underscore (_). In a for loop, you can use the blank identifier to ignore any value returned from the range keyword. In this case, we want to ignore the index, which is the first value returned.

main.go
package main

import "fmt"

func main() {
    sharks := []string{"hammerhead", "great white", "dogfish", "frilled", "bullhead", "requiem"}

    for _, shark := range sharks {
        fmt.Println(shark)
    }
}
Output
hammerhead
great white
dogfish
frilled
bullhead
requiem

This output shows that the for loop iterated through the slice of strings, and printed each item from the slice without the index.

You can also use range to add items to a list:

main.go
package main

import "fmt"

func main() {
    sharks := []string{"hammerhead", "great white", "dogfish", "frilled", "bullhead", "requiem"}

    for range sharks {
        sharks = append(sharks, "shark")
    }

    fmt.Printf("%q\n", sharks)
}
Output
["hammerhead" "great white" "dogfish" "frilled" "bullhead" "requiem" "shark" "shark" "shark" "shark" "shark" "shark"]

Here, we have appended a placeholder string of "shark" once for each item in the original sharks slice. Because the range clause is evaluated once, at the start of the loop, appending inside the loop does not cause it to run forever.

Notice that we didn’t have to use the blank identifier _ to ignore any of the return values from the range operator. Go allows us to leave out the entire declaration portion of the range statement if we don’t need to use either of the return values.

We can also use the range operator to fill in values of a slice:

main.go
package main

import "fmt"

func main() {
    integers := make([]int, 10)
    fmt.Println(integers)

    for i := range integers {
        integers[i] = i
    }

    fmt.Println(integers)
}

In this example, the slice integers is initialized with ten zero values, but the for loop sets all of the values in the slice like so:

Output
[0 0 0 0 0 0 0 0 0 0]
[0 1 2 3 4 5 6 7 8 9]

The first time we print the value of the slice integers, we see all zeros. Then we iterate through each index and set its value to the current index. When we print the value of integers a second time, we see that the elements now hold the values 0 through 9.

We can also use the range operator to iterate through each character in a string:

main.go
package main

import "fmt"

func main() {
    sammy := "Sammy"

    for _, letter := range sammy {
        fmt.Printf("%c\n", letter)
    }
}
Output
S
a
m
m
y

When iterating through a map, range will return both the key and the value:

main.go
package main

import "fmt"

func main() {
    sammyShark := map[string]string{"name": "Sammy", "animal": "shark", "color": "blue", "location": "ocean"}

    for key, value := range sammyShark {
        fmt.Println(key + ": " + value)
    }
}
Output
color: blue
location: ocean
name: Sammy
animal: shark

Note: It is important to note that the order in which map items are returned by range is randomized. Each time you run this program you may get a different result.

Now that we have learned how to iterate over sequential data with range for loops, let’s look at how to use loops inside of loops.

Nested For Loops

Loops can be nested in Go, as they can with other programming languages. Nesting is when we have one construct inside of another. In this case, a nested loop is a loop that occurs within another loop. These can be useful when you would like to have a looped action performed on every element of a data set.

Nested loops are structurally similar to nested if statements. They are constructed like so:

for {
    [Action]
    for {
        [Action]
    }
}

The program first encounters the outer loop, executing its first iteration. This first iteration triggers the inner, nested loop, which then runs to completion. Then the program returns back to the top of the outer loop, completing the second iteration and again triggering the nested loop. Again, the nested loop runs to completion, and the program returns back to the top of the outer loop until the sequence is complete or a break or other statement disrupts the process.

Let’s implement a nested for loop so we can take a closer look. In this example, the outer loop will iterate through a slice of integers called numList, and the inner loop will iterate through a slice of strings called alphaList.

main.go
package main

import "fmt"

func main() {
    numList := []int{1, 2, 3}
    alphaList := []string{"a", "b", "c"}

    for _, i := range numList {
        fmt.Println(i)
        for _, letter := range alphaList {
            fmt.Println(letter)
        }
    }
}

When we run this program, we’ll receive the following output:

Output
1
a
b
c
2
a
b
c
3
a
b
c

The output illustrates that the program completes the first iteration of the outer loop by printing 1, which then triggers completion of the inner loop, printing a, b, c consecutively. Once the inner loop has completed, the program returns to the top of the outer loop, prints 2, then again prints the inner loop in its entirety (a, b, c), etc.

Nested for loops can be useful for iterating through items within slices composed of slices. In a slice composed of slices, if we use just one for loop, the program will output each internal list as an item:

main.go
package main

import "fmt"

func main() {
    ints := [][]int{
        []int{0, 1, 2},
        []int{-1, -2, -3},
        []int{9, 8, 7},
    }

    for _, i := range ints {
        fmt.Println(i)
    }
}
Output
[0 1 2]
[-1 -2 -3]
[9 8 7]

In order to access each individual item of the internal slices, we’ll implement a nested for loop:

main.go
package main

import "fmt"

func main() {
    ints := [][]int{
        []int{0, 1, 2},
        []int{-1, -2, -3},
        []int{9, 8, 7},
    }

    for _, i := range ints {
        for _, j := range i {
            fmt.Println(j)
        }
    }
}
Output
0
1
2
-1
-2
-3
9
8
7

When we use a nested for loop here, we are able to iterate over the individual items contained in the slices.

Conclusion

In this tutorial we learned how to declare and use for loops to solve repetitive tasks in Go. We also learned the three different variations of a for loop and when to use them. To learn more about for loops and how to control their flow, read Using Break and Continue Statements When Working with Loops in Go.

DigitalOcean Community Tutorials

How to Manage DigitalOcean and Kubernetes Infrastructure with Pulumi

The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

Introduction

Pulumi is a tool for creating, deploying, and managing infrastructure using code written in general purpose programming languages. It supports automating all of DigitalOcean’s managed services—such as Droplets, managed databases, DNS records, and Kubernetes clusters—in addition to application configuration. Deployments are performed from an easy-to-use command-line interface that also integrates with a wide variety of popular CI/CD systems.

Pulumi supports multiple languages but in this tutorial you will use TypeScript, a statically typed version of JavaScript that uses the Node.js runtime. This means you will get IDE support and compile-time checking that will help to ensure you’ve configured the right resources, used correct slugs, etc., while still being able to access any NPM modules for utility tasks.

In this tutorial, you will provision a DigitalOcean Kubernetes cluster, a load balanced Kubernetes application, and a DigitalOcean DNS domain that makes your application available at a stable domain name of your choosing. This can all be provisioned in 60 lines of infrastructure-as-code and a single pulumi up command-line gesture. After this tutorial, you’ll be ready to productively build powerful cloud architectures using Pulumi infrastructure-as-code that leverages the full surface area of DigitalOcean and Kubernetes.

Prerequisites

To follow this tutorial, you will need:

  • A DigitalOcean Account to deploy resources to. If you do not already have one, register here.
  • A DigitalOcean API Token to perform automated deployments. Generate a personal access token here and keep it handy as you’ll use it in Step 2.
  • Because you’ll be creating and using a Kubernetes cluster, you’ll need to install kubectl. Don’t worry about configuring it further — you’ll do that later.
  • You will write your infrastructure-as-code in TypeScript, so you will need Node.js 8 or later. Download it here or install it using your system’s package manager.
  • You’ll use Pulumi to deploy infrastructure, so you’ll need to install the open source Pulumi SDK.
  • To perform the optional Step 5, you will need a domain name configured to use DigitalOcean nameservers. This guide explains how to do this for your registrar of choice.

Step 1 — Scaffolding a New Project

The first step is to create a directory that will store your Pulumi project. This directory will contain the source code for your infrastructure definitions, in addition to metadata files describing the project and its NPM dependencies.

First, create the directory:

  • mkdir do-k8s

Next, move in to the newly created directory:

  • cd do-k8s

From now on, run commands from your newly created do-k8s directory.

Next, create a new Pulumi project. There are different ways to accomplish this, but the easiest way is to use the pulumi new command with the typescript project template. This command will first prompt you to log in to Pulumi so that your project and deployment state are saved, and will then create a simple TypeScript project in the current directory:

  • pulumi new typescript -y

Here you have passed the -y option to the new command, which tells it to accept default project options. For example, the project name is taken from the current directory’s name, and so will be do-k8s. If you’d like to choose different values for your project options, simply omit the -y.

After running the command, list the contents of the directory with ls:

  • ls

The following files will now be present:

Output
Pulumi.yaml index.ts node_modules package-lock.json package.json tsconfig.json

The primary file you’ll be editing is index.ts. Although this tutorial only uses this single file, you can organize your project any way you see fit using Node.js modules. This tutorial also describes one step at a time, leveraging the fact that Pulumi can detect and incrementally deploy only what has changed. If you prefer, you can just populate the entire program, and deploy it all in one go using pulumi up.

Now that you’ve scaffolded your new project, you are ready to add the dependencies needed to follow the tutorial.

Step 2 — Adding Dependencies

The next step is to install and add dependencies on the DigitalOcean and Kubernetes packages. First, install them using NPM:

  • npm install @pulumi/digitalocean @pulumi/kubernetes

This will download the NPM packages and Pulumi plugins, and save them as dependencies.

Next, open the index.ts file with your favorite editor. This tutorial will use nano:

  • nano index.ts

Replace the contents of your index.ts with the following:

index.ts
import * as digitalocean from "@pulumi/digitalocean";
import * as kubernetes from "@pulumi/kubernetes";

This makes the full contents of these packages available to your program. If you type "digitalocean." using an IDE that understands TypeScript and Node.js, you should see a list of DigitalOcean resources supported by this package, for instance.

Save and close the file after adding the content.

Note: We will be using a subset of what’s available in those packages. For complete documentation of resources, properties, and associated APIs, please refer to the relevant API documentation for the @pulumi/digitalocean and @pulumi/kubernetes packages.

Next, you will configure your DigitalOcean token so that Pulumi can provision resources in your account:

  • pulumi config set digitalocean:token YOUR_TOKEN_HERE --secret

Notice the --secret flag, which uses Pulumi’s encryption service to encrypt your token, ensuring that it is stored in cyphertext. If you prefer, you can use the DIGITALOCEAN_TOKEN environment variable instead, but you’ll need to remember to set it every time you update your program, whereas using configuration automatically stores and uses it for your project.

In this step you added the necessary dependencies and configured your API token with Pulumi so that you can provision your Kubernetes cluster.

Step 3 — Provisioning a Kubernetes Cluster

Now you’re ready to create a DigitalOcean Kubernetes cluster. Get started by reopening the index.ts file:

  • nano index.ts

Add these lines at the end of your index.ts file:

index.ts
…
const cluster = new digitalocean.KubernetesCluster("do-cluster", {
    region: digitalocean.Regions.SFO2,
    version: "latest",
    nodePool: {
        name: "default",
        size: digitalocean.DropletSlugs.DropletS2VPCU2GB,
        nodeCount: 3,
    },
});

export const kubeconfig = cluster.kubeConfigs[0].rawConfig;

This new code allocates an instance of digitalocean.KubernetesCluster and sets a number of properties on it. This includes using the sfo2 region slug, the latest supported version of Kubernetes, the s-2vcpu-2gb Droplet size slug, and states your desired count of three Droplet instances. Feel free to change any of these, but be aware that DigitalOcean Kubernetes is only available in certain regions at the time of this writing. You can refer to the product documentation for updated information about region availability.

For a complete list of properties you can configure on your cluster, please refer to the KubernetesCluster API documentation.

The final line in that code snippet exports the resulting Kubernetes cluster’s kubeconfig file so that it’s easy to use. Exported variables are printed to the console and also accessible to tools. You will use this momentarily to access your cluster from standard tools like kubectl.

Now you’re ready to deploy your cluster. To do so, run pulumi up:

  • pulumi up

This command takes the program, generates a plan for creating the infrastructure described, and carries out a series of steps to deploy those changes. This works for the initial creation of infrastructure in addition to being able to diff and update your infrastructure when subsequent updates are made. In this case, the output will look something like this:

Output
Previewing update (dev):

     Type                                       Name         Plan
 +   pulumi:pulumi:Stack                        do-k8s-dev   create
 +   └─ digitalocean:index:KubernetesCluster    do-cluster   create

Resources:
    + 2 to create

Do you want to perform this update?
> yes
  no
  details

This says that proceeding with the update will create a single Kubernetes cluster named do-cluster. The yes/no/details prompt allows us to confirm that this is the desired outcome before any changes are actually made. If you select details, a full list of resources and their properties will be shown. Choose yes to begin the deployment:

Output
Updating (dev):

     Type                                       Name         Status
 +   pulumi:pulumi:Stack                        do-k8s-dev   created
 +   └─ digitalocean:index:KubernetesCluster    do-cluster   created

Outputs:
    kubeconfig: "…"

Resources:
    + 2 created

Duration: 6m5s

Permalink: https://app.pulumi.com/…/do-k8s/dev/updates/1

It takes a few minutes to create the cluster, but then it will be up and running, and the full kubeconfig will be printed out to the console. Save the kubeconfig to a file:

  • pulumi stack output kubeconfig > kubeconfig.yml

And then use it with kubectl to perform any Kubernetes command:

  • KUBECONFIG=./kubeconfig.yml kubectl get nodes

You will receive output similar to the following:

Output
NAME           STATUS   ROLES    AGE     VERSION
default-o4sj   Ready    <none>   4m5s    v1.14.2
default-o4so   Ready    <none>   4m3s    v1.14.2
default-o4sx   Ready    <none>   3m37s   v1.14.2

At this point you’ve set up infrastructure-as-code and have a repeatable way to bring up and configure new DigitalOcean Kubernetes clusters. In the next step, you will build on top of this to define the Kubernetes infrastructure in code and learn how to deploy and manage it similarly.

Step 4 — Deploying an Application to Your Cluster

Next, you will describe a Kubernetes application’s configuration using infrastructure-as-code. This will consist of three parts:

  1. A Provider object, which tells Pulumi to deploy Kubernetes resources to the DigitalOcean cluster, rather than the default of whatever kubectl is configured to use.
  2. A Kubernetes Deployment, which is the standard Kubernetes way of deploying a Docker container image that is replicated across any number of Pods.
  3. A Kubernetes Service, which is the standard way to tell Kubernetes to load balance access across a target set of Pods (in this case, the Deployment above).

This is a fairly standard reference architecture for getting up and running with a load balanced service in Kubernetes.

To deploy all three of these, open your index.ts file again:

  • nano index.ts

Once the file is open, append this code to the end of the file:

index.ts
…
const provider = new kubernetes.Provider("do-k8s", { kubeconfig });

const appLabels = { "app": "app-nginx" };
const app = new kubernetes.apps.v1.Deployment("do-app-dep", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 5,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "nginx",
                    image: "nginx",
                }],
            },
        },
    },
}, { provider });

const appService = new kubernetes.core.v1.Service("do-app-svc", {
    spec: {
        type: "LoadBalancer",
        selector: app.spec.template.metadata.labels,
        ports: [{ port: 80 }],
    },
}, { provider });

export const ingressIp = appService.status.loadBalancer.ingress[0].ip;

This code is similar to standard Kubernetes configuration, and the behavior of objects and their properties is equivalent, except that it’s written in TypeScript alongside your other infrastructure declarations.

Save and close the file after making the changes.

Just like before, run pulumi up to preview and then deploy the changes:

  • pulumi up

After selecting yes to proceed, the CLI will print out detailed status updates, including diagnostics around Pod availability, IP address allocation, and more. This will help you understand why your deployment might be taking time to complete or getting stuck.

The full output will look something like this:

Output
Updating (dev):

     Type                              Name         Status
     pulumi:pulumi:Stack               do-k8s-dev
 +   ├─ pulumi:providers:kubernetes    do-k8s       created
 +   ├─ kubernetes:apps:Deployment     do-app-dep   created
 +   └─ kubernetes:core:Service        do-app-svc   created

Outputs:
  + ingressIp : "157.230.199.202"

Resources:
    + 3 created
    2 unchanged

Duration: 2m52s

Permalink: https://app.pulumi.com/…/do-k8s/dev/updates/2

After this completes, notice that the desired number of Pods are running:

  • KUBECONFIG=./kubeconfig.yml kubectl get pods
Output
NAME                                   READY   STATUS    RESTARTS   AGE
do-app-dep-vyf8k78z-758486ff68-5z8hk   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-8982s   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-94k7b   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-cqm4c   1/1     Running   0          1m
do-app-dep-vyf8k78z-758486ff68-lx2d7   1/1     Running   0          1m

Similar to how the program exports the cluster’s kubeconfig file, this program also exports the Kubernetes service’s resulting load balancer’s IP address. Use this to curl the endpoint and see that it is up and running:

  • curl $(pulumi stack output ingressIp)
Output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

From here, you can easily edit and redeploy your application infrastructure. For example, try changing the replicas: 5 line to say replicas: 7, and then rerun pulumi up:

  • pulumi up

Notice that it just shows what has changed, and that selecting details displays the precise diff:

Output
Previewing update (dev):

     Type                              Name         Plan     Info
     pulumi:pulumi:Stack               do-k8s-dev
 ~   └─ kubernetes:apps:Deployment     do-app-dep   update   [diff: ~spec]

Resources:
    ~ 1 to update
    4 unchanged

Do you want to perform this update? details
  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:dev::do-k8s::pulumi:pulumi:Stack::do-k8s-dev]
  ~ kubernetes:apps/v1:Deployment: (update)
      [id=default/do-app-dep-vyf8k78z]
      [urn=urn:pulumi:dev::do-k8s::kubernetes:apps/v1:Deployment::do-app-dep]
      [provider=urn:pulumi:dev::do-k8s::pulumi:providers:kubernetes::do-k8s::80f36105-337f-451f-a191-5835823df9be]
    ~ spec: {
        ~ replicas: 5 => 7
      }

Now you have both a fully functioning Kubernetes cluster and a working application. With your application up and running, you may want to configure a custom domain to use with your application. The next step will guide you through configuring DNS with Pulumi.

Step 5 — Creating a DNS Domain (Optional)

Although the Kubernetes cluster and application are up and running, the application’s address is dependent upon the whims of automatic IP address assignment by your cluster. As you adjust and redeploy things, this address might change. In this step, you will see how to assign a custom DNS name to the load balancer IP address so that it’s stable even as you subsequently change your infrastructure.

Note: To complete this step, ensure you have a domain using DigitalOcean’s DNS nameservers, ns1.digitalocean.com, ns2.digitalocean.com, and ns3.digitalocean.com. Instructions to configure this are available in the Prerequisites section.

To configure DNS, open the index.ts file and append the following code to the end of the file:

index.ts
…
const domain = new digitalocean.Domain("do-domain", {
    name: "your_domain",
    ipAddress: ingressIp,
});

This code creates a new DNS entry with an A record that refers to your Kubernetes service’s IP address. Replace your_domain in this snippet with your chosen domain name.

It is common to want additional sub-domains, like www, to point at the web application. This is easy to accomplish using a DigitalOcean DNS record. To make this example more interesting, also add a CNAME record that points www.your_domain.com to your_domain.com:

index.ts
…
const cnameRecord = new digitalocean.DnsRecord("do-domain-cname", {
    domain: domain.name,
    type: "CNAME",
    name: "www",
    value: "@",
});

Save and close the file after making these changes.

Finally, run pulumi up to deploy the DNS changes to point at your existing application and cluster:

Output
Updating (dev):

     Type                             Name             Status
     pulumi:pulumi:Stack              do-k8s-dev
 +   ├─ digitalocean:index:Domain     do-domain        created
 +   └─ digitalocean:index:DnsRecord  do-domain-cname  created

Resources:
    + 2 created
    5 unchanged

Duration: 6s

Permalink: https://app.pulumi.com/…/do-k8s/dev/updates/3

After the DNS changes have propagated, you will be able to access your content at your custom domain:

  • curl www.your_domain.com

You will receive output similar to the following:

Output
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

With that, you have successfully set up a new DigitalOcean Kubernetes cluster, deployed a load balanced Kubernetes application to it, and given that application’s load balancer a stable domain name using DigitalOcean DNS, all in 60 lines of code and a pulumi up command.

The next step will guide you through removing the resources if you no longer need them.

Step 6 — Removing the Resources (Optional)

Before concluding the tutorial, you may want to destroy all of the resources created above. This will ensure you don’t get charged for resources that aren’t being used. If you prefer to keep your application up and running, feel free to skip this step.

Run the following command to destroy the resources. Be careful using this, as it cannot be undone!

  • pulumi destroy

Just as with the up command, destroy displays a preview and prompt before taking action:

Output
Previewing destroy (dev):

     Type                                      Name             Plan
 -   pulumi:pulumi:Stack                       do-k8s-dev       delete
 -   ├─ digitalocean:index:DnsRecord           do-domain-cname  delete
 -   ├─ digitalocean:index:Domain              do-domain        delete
 -   ├─ kubernetes:core:Service                do-app-svc       delete
 -   ├─ kubernetes:apps:Deployment             do-app-dep       delete
 -   ├─ pulumi:providers:kubernetes            do-k8s           delete
 -   └─ digitalocean:index:KubernetesCluster   do-cluster       delete

Resources:
    - 7 to delete

Do you want to perform this destroy?  yes > no  details

Assuming this is what you want, select yes and watch the deletions occur:

Output
Destroying (dev):

     Type                                      Name             Status
 -   pulumi:pulumi:Stack                       do-k8s-dev       deleted
 -   ├─ digitalocean:index:DnsRecord           do-domain-cname  deleted
 -   ├─ digitalocean:index:Domain              do-domain        deleted
 -   ├─ kubernetes:core:Service                do-app-svc       deleted
 -   ├─ kubernetes:apps:Deployment             do-app-dep       deleted
 -   ├─ pulumi:providers:kubernetes            do-k8s           deleted
 -   └─ digitalocean:index:KubernetesCluster   do-cluster       deleted

Resources:
    - 7 deleted

Duration: 7s

Permalink: https://app.pulumi.com/…/do-k8s/dev/updates/4

At this point, nothing remains: the DNS entries are gone, and the Kubernetes cluster, along with the application running inside of it, is gone. The permalink is still available, so you can still go back and see the full history of updates for this stack. This could help you recover if the destruction was a mistake, since the service keeps full state history for all resources.

If you’d like to destroy your project in its entirety, remove the stack:

  • pulumi stack rm

You will receive output asking you to confirm the deletion by typing in the stack’s name:

Output
This will permanently remove the 'dev' stack! Please confirm that this is what you'd like to do by typing ("dev"):

Unlike the destroy command, which deletes the cloud infrastructure resources, removing a stack completely erases the full history of your stack from Pulumi's purview.

Conclusion

In this tutorial, you’ve deployed DigitalOcean infrastructure resources—a Kubernetes cluster and a DNS domain with A and CNAME records—in addition to the Kubernetes application configuration that uses this cluster. You have done so using infrastructure-as-code written in a familiar programming language, TypeScript, that works with existing editors, tools, and libraries, and leverages existing communities and packages. You’ve done it all using a single command line workflow for doing deployments that span your application and infrastructure.

From here, there are a number of next steps you might take.

The entire sample from this tutorial is available on GitHub. For extensive details about how to use Pulumi infrastructure-as-code in your own projects today, check out the Pulumi Documentation, Tutorials, or Getting Started guides. Pulumi is open source and free to use.

DigitalOcean Community Tutorials

How to Add and Delete Users on Ubuntu 18.04

Introduction

Adding and removing users on a Linux system is one of the most important system administration tasks to familiarize yourself with. When you create a new system, you are often only given access to the root account by default.

While running as the root user gives you complete control over a system and its users, it is also dangerous and can be destructive. For common system administration tasks, it is a better idea to add an unprivileged user and carry out those tasks without root privileges. You can also create additional unprivileged accounts for any other users you may have on your system. Each user on a system should have their own separate account.

For tasks that require administrator privileges, there is a tool installed on Ubuntu systems called sudo. Briefly, sudo allows you to run a command as another user, including users with administrative privileges. In this guide we will cover how to create user accounts, assign sudo privileges, and delete users.

Prerequisites

To follow along with this guide, you will need:

Adding a User

If you are signed in as the root user, you can create a new user at any time by typing:

  • adduser newuser

If you are signed in as a non-root user who has been given sudo privileges, you can add a new user by typing:

  • sudo adduser newuser

Either way, you will be asked a series of questions. The procedure will be:

  • Assign and confirm a password for the new user
  • Enter any additional information about the new user. This is entirely optional and can be skipped by hitting ENTER if you don’t wish to utilize these fields.
  • Finally, you’ll be asked to confirm that the information you provided was correct. Enter Y to continue.

Your new user is now ready for use. You can now log in using the password that you entered.

If you need your new user to have access to administrative functionality, continue on to the next section.

Granting a User Sudo Privileges

If your new user should have the ability to execute commands with root (administrative) privileges, you will need to give the new user access to sudo. Let’s examine two approaches to this problem: adding the user to a pre-defined sudo user group, and specifying privileges on a per-user basis in sudo’s configuration.

Adding the New User to the Sudo Group

By default, sudo on Ubuntu 18.04 systems is configured to extend full privileges to any user in the sudo group.

You can see what groups your new user is in with the groups command:

  • groups newuser
Output
newuser : newuser

By default, a new user is only in their own group, which adduser creates along with the user profile. A user and their own group share the same name. In order to add the user to a new group, we can use the usermod command:

  • usermod -aG sudo newuser

The -aG option here tells usermod to add the user to the listed groups.

Specifying Explicit User Privileges in /etc/sudoers

As an alternative to putting your user in the sudo group, you can use the visudo command, which opens a configuration file called /etc/sudoers in the system’s default editor, and explicitly specify privileges on a per-user basis.

Using visudo is the only recommended way to make changes to /etc/sudoers, because it locks the file against multiple simultaneous edits and performs a sanity check on its contents before overwriting the file. This helps to prevent a situation where you misconfigure sudo and are prevented from fixing the problem because you have lost sudo privileges.

If you are currently signed in as root, type:

  • visudo

If you are signed in as a non-root user with sudo privileges, type:

  • sudo visudo

Traditionally, visudo opened /etc/sudoers in the vi editor, which can be confusing for inexperienced users. By default on new Ubuntu installations, visudo will instead use nano, which provides a more convenient and accessible text editing experience. Use the arrow keys to move the cursor, and search for the line that looks like this:

/etc/sudoers
root    ALL=(ALL:ALL) ALL 

Below this line, add the following line. Be sure to change newuser to the name of the user profile that you would like to grant sudo privileges to:

/etc/sudoers
root    ALL=(ALL:ALL) ALL
newuser ALL=(ALL:ALL) ALL

Add a new line like this for each user that should be given full sudo privileges. When you are finished, you can save and close the file by hitting CTRL+X, followed by Y, and then ENTER to confirm.

Testing Your User’s Sudo Privileges

Now, your new user is able to execute commands with administrative privileges.

When signed in as the new user, you can execute commands as your regular user by typing commands as normal:

  • some_command

You can execute the same command with administrative privileges by typing sudo ahead of the command:

  • sudo some_command

You will be prompted to enter the password of the regular user account you are signed in as.

Deleting a User

In the event that you no longer need a user, it is best to delete the old account.

You can delete the user itself, without deleting any of their files, by typing the following command as root:

  • deluser newuser

If you are signed in as another non-root user with sudo privileges, you could instead type:

  • sudo deluser newuser

If, instead, you want to delete the user’s home directory when the user is deleted, you can issue the following command as root:

  • deluser --remove-home newuser

If you’re running this as a non-root user with sudo privileges, you would instead type:

  • sudo deluser --remove-home newuser

If you had previously configured sudo privileges for the user you deleted, you may want to remove the relevant line again by typing:

  • visudo

Or use this if you are a non-root user with sudo privileges:

  • sudo visudo
root    ALL=(ALL:ALL) ALL
newuser ALL=(ALL:ALL) ALL   # DELETE THIS LINE

This will prevent a new user created with the same name from being accidentally given sudo privileges.

Conclusion

You should now have a fairly good handle on how to add and remove users from your Ubuntu 18.04 system. Effective user management will allow you to separate users and give them only the access that they require to do their jobs.

For more information about how to configure sudo, check out our guide on how to edit the sudoers file here.

DigitalOcean Community Tutorials

How To Benchmark a Redis Server on Ubuntu 18.04

Introduction

Benchmarking is an important practice when it comes to analyzing the overall performance of database servers. It's useful for identifying bottlenecks as well as opportunities for improvement within those systems.

Redis is an in-memory data structure store that can be used as a database, cache, and message broker. It supports everything from simple to complex data structures, including hashes, strings, sorted sets, bitmaps, and geospatial data, among other types. In this guide, we will demonstrate how to benchmark the performance of a Redis server running on Ubuntu 18.04, using a few different tools and methods.

Prerequisites

To follow this guide, you will need:

Note: The commands demonstrated in this tutorial were executed on a dedicated Redis server running on a 4GB DigitalOcean Droplet.

Using the Included redis-benchmark Tool

Redis comes with a benchmark tool called redis-benchmark. This program can be used to simulate an arbitrary number of clients connecting at the same time and performing actions on the server, measuring how long it takes for the requests to be completed. The resulting data will give you an idea of the average number of requests that your Redis server is able to handle per second.

The following list details some of the common command options used with redis-benchmark:

  • -h: Redis host. Default is 127.0.0.1.
  • -p: Redis port. Default is 6379.
  • -a: If your server requires authentication, you can use this option to provide the password.
  • -c: Number of clients (parallel connections) to simulate. Default value is 50.
  • -n: How many requests to make. Default is 100000.
  • -d: Data size for SET and GET values, measured in bytes. Default is 3.
  • -t: Run only a subset of tests. For instance, you can use -t get,set to benchmark the GET and SET commands.
  • -q: Quiet mode, shows only the average requests per second information.

For example, if you want to check the average number of requests per second that your local Redis server can handle, you can use:

  • redis-benchmark -q

You will get output similar to this, but with different numbers:

Output
PING_INLINE: 85178.88 requests per second
PING_BULK: 83056.48 requests per second
SET: 72202.16 requests per second
GET: 94607.38 requests per second
INCR: 84961.77 requests per second
LPUSH: 78988.94 requests per second
RPUSH: 88652.48 requests per second
LPOP: 87950.75 requests per second
RPOP: 80971.66 requests per second
SADD: 80192.46 requests per second
HSET: 84317.03 requests per second
SPOP: 78125.00 requests per second
LPUSH (needed to benchmark LRANGE): 84175.09 requests per second
LRANGE_100 (first 100 elements): 52383.45 requests per second
LRANGE_300 (first 300 elements): 21547.08 requests per second
LRANGE_500 (first 450 elements): 14471.78 requests per second
LRANGE_600 (first 600 elements): 9383.50 requests per second
MSET (10 keys): 71225.07 requests per second

You can also limit the tests to a subset of commands of your choice using the -t parameter. The following command shows the averages for the GET and SET commands only:

  • redis-benchmark -t set,get -q
Output
SET: 76687.12 requests per second
GET: 82576.38 requests per second

The default options will use 50 parallel connections to create 100000 requests to the Redis server. If you want to increase the number of parallel connections to simulate a peak in usage, you can use the -c option for that:

  • redis-benchmark -t set,get -q -c 1000

Because this will use 1000 concurrent connections instead of the default 50, you should expect a decrease in performance:

Output
SET: 69444.45 requests per second
GET: 70821.53 requests per second

If you want detailed information in the output, you can remove the -q option. The following command will use 100 parallel connections to run 1000000 SET requests on the server:

  • redis-benchmark -t set -c 100 -n 1000000

You will get output similar to this:

Output
====== SET ======
  1000000 requests completed in 11.29 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

95.22% <= 1 milliseconds
98.97% <= 2 milliseconds
99.86% <= 3 milliseconds
99.95% <= 4 milliseconds
99.99% <= 5 milliseconds
99.99% <= 6 milliseconds
100.00% <= 7 milliseconds
100.00% <= 8 milliseconds
100.00% <= 8 milliseconds
88605.35 requests per second

The default settings use 3 bytes for key values. You can change this with the -d option. The following command will benchmark GET and SET commands using 1MB key values:

  • redis-benchmark -t set,get -d 1000000 -n 1000 -q

Because the server is working with a much bigger payload this time, a significant decrease of performance is expected:

Output
SET: 1642.04 requests per second
GET: 822.37 requests per second

It is important to realize that even though these numbers are useful as a quick way to evaluate the performance of a Redis instance, they don't represent the maximum throughput a Redis instance can sustain. By using pipelining, applications can send multiple commands at once in order to improve the number of requests per second the server can handle. With redis-benchmark, you can use the -P option to simulate real-world applications that make use of this Redis feature.

To compare the difference, first run the redis-benchmark command with default values and no pipelining, for the GET and SET tests:

  • redis-benchmark -t get,set -q
Output
SET: 86281.27 requests per second
GET: 89847.26 requests per second

The next benchmark will run the same tests, but will pipeline 8 commands together:

  • redis-benchmark -t get,set -q -P 8
Output
SET: 653594.81 requests per second
GET: 793650.75 requests per second

As you can see from the output, there is a substantial performance improvement with the use of pipelining.

Checking Latency with redis-cli

If you'd like a simple measurement of the average time a request takes to receive a response, you can use the Redis client to check for the average server latency. In the context of Redis, latency is a measure of how long a ping command takes to receive a response from the server.

The following command will show real-time latency stats for your Redis server:

  • redis-cli --latency

You'll get output similar to this, showing an increasing number of samples and a variable average latency:

Output
min: 0, max: 1, avg: 0.18 (970 samples)

This command will keep running indefinitely. You can stop it with a CTRL+C.

To monitor latency over a certain period of time, you can use:

  • redis-cli --latency-history

This will track latency averages over time, with a configurable interval that is set to 15 seconds by default. You will get output similar to this:

Output
min: 0, max: 1, avg: 0.18 (1449 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.16 (1449 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.17 (1444 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.17 (1446 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.16 (1444 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.17 (1445 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.16 (1445 samples) -- 15.01 seconds range
…

Because the Redis server in our example is idle, there's not much variation between latency samples. If you have a peak in usage, however, this should be reflected as an increase in latency within the results.

If you would like to measure the system latency only, you can use --intrinsic-latency for that. The intrinsic latency is inherent to the environment, depending on factors such as hardware, kernel, server neighbors, and other factors that aren't controlled by Redis.

You can see the intrinsic latency as a baseline for your overall Redis performance. The following command will check the intrinsic system latency, running a test for 30 seconds:

  • redis-cli --intrinsic-latency 30

You should get output similar to this:

Output
…
498723744 total runs (avg latency: 0.0602 microseconds / 60.15 nanoseconds per run).
Worst run took 22975x longer than the average latency.

Comparing both latency tests can be helpful for identifying hardware or system bottlenecks that could affect the performance of your Redis server. Considering that the total latency for a request to our example server averages 0.18 microseconds to complete, an intrinsic latency of 0.06 microseconds means that one third of the total request time is spent by the system in processes that aren't controlled by Redis.
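The arithmetic behind that comparison can be checked with a short snippet, using the example averages from the runs above (a sketch for illustration only; these values are specific to the example server):

```go
package main

import "fmt"

func main() {
	// Example averages from the latency tests above, in microseconds.
	totalLatency := 0.18
	intrinsicLatency := 0.06

	// Fraction of each request spent on work outside of Redis' control.
	fraction := intrinsicLatency / totalLatency
	fmt.Printf("%.0f%% of the request time is intrinsic latency\n", fraction*100)
}
```

Running this prints the share of request time attributable to the environment, which for these numbers works out to about one third.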

Using the Memtier Benchmark Tool

Memtier is a high-throughput benchmark tool for Redis and Memcached created by Redis Labs. Although very similar to redis-benchmark in various aspects, Memtier has several configuration options that can be tuned to better emulate the kind of load you might expect on your Redis server, in addition to offering cluster support.

To get Memtier installed on your server, you will need to compile the software from its source code. First, install the dependencies necessary to compile the code:

  • sudo apt-get install build-essential autoconf automake libpcre3-dev libevent-dev pkg-config zlib1g-dev

Next, go to your home directory and clone the memtier_benchmark project from its Github repository:

  • cd
  • git clone https://github.com/RedisLabs/memtier_benchmark.git

Navigate to the project directory and run the autoreconf command to generate the application configuration scripts:

  • cd memtier_benchmark
  • autoreconf -ivf

Run the configure script in order to generate the application artifacts required for compiling:

  • ./configure

Now run make to compile the application:

  • make

Once the build is finished, you can test the executable with:

  • ./memtier_benchmark --version

This will give you output like the following:

Output
memtier_benchmark 1.2.17
Copyright (C) 2011-2017 Redis Labs Ltd.
This is free software.  You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.

The following list contains some of the most common options used with the memtier_benchmark command:

  • -s: Server host. Default is localhost.
  • -p: Server port. Default is 6379.
  • -a: Authenticate requests using the provided password.
  • -n: Number of requests per client (default is 10000).
  • -c: Number of clients (default is 50).
  • -t: Number of threads (default is 4).
  • --pipeline: Enable pipelining.
  • --ratio: Ratio between SET and GET commands, default is 1:10.
  • --hide-histogram: Hides detailed output information.

Most of these options are very similar to the options present in redis-benchmark, but Memtier tests performance in a different way. To better simulate common real-world environments, the default benchmark performed by memtier_benchmark will test for GET and SET requests only, on a ratio of 1 to 10. With 10 GET operations for each SET operation in the test, this arrangement is more representative of a common web application using Redis as a database or cache. You can adjust the ratio value with the --ratio option.

The following command runs memtier_benchmark with its default settings, while providing only high-level output information:

  • ./memtier_benchmark --hide-histogram

Note: if you have configured your Redis server to require authentication, you should provide the -a option along with your Redis password to the memtier_benchmark command:

  • ./memtier_benchmark --hide-histogram -a your_redis_password

You will see output similar to this:

Output
…
4         Threads
50        Connections per thread
10000     Requests per client

ALL STATS
=========================================================================
Type         Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
-------------------------------------------------------------------------
Sets         8258.50          ---          ---      2.19800       636.05
Gets        82494.28     41483.10     41011.18      2.19800      4590.88
Waits           0.00          ---          ---      0.00000          ---
Totals      90752.78     41483.10     41011.18      2.19800      5226.93

According to this run of memtier_benchmark, our Redis server can execute about 90 thousand operations per second at a 1:10 SET/GET ratio.

It is important to note that each benchmark tool has its own algorithm for performance testing and data presentation. For that reason, it's normal to have slightly different results on the same server, even when using similar settings.

Conclusion

In this guide, we demonstrated how to execute benchmark tests on a Redis server using two distinct tools: the included redis-benchmark, and the memtier_benchmark tool developed by Redis Labs. We also saw how to check the server latency using redis-cli. Based on the data obtained from these tests, you will have a better understanding of what to expect from your Redis server in terms of performance, and what the bottlenecks of your current setup are.

DigitalOcean Community Tutorials

How To Define and Call Functions in Go

Introduction

A function is a section of code that, once defined, can be reused. Functions are used to make your code easier to understand by breaking it into small, understandable tasks that can be used more than once throughout your program.

Go ships with a powerful standard library that has many predefined functions. Ones that you are probably already familiar with from the fmt package are:

  • fmt.Println() which will print objects to standard out (most likely your terminal).
  • fmt.Printf() which will allow you to format your printed output.

Function names include parentheses and may include parameters.

In this tutorial, we’ll go over how to define your own functions to use in your coding projects.

Defining a Function

Let’s start with turning the classic “Hello, World!” program into a function.

We’ll create a new text file in our text editor of choice, and call the program hello.go. Then, we’ll define the function.

A function is defined by using the func keyword. This is then followed by a name of your choosing and a set of parentheses that hold any parameters the function will take (they can be empty). The lines of function code are enclosed in curly brackets {}.

In this case, we’ll define a function named hello():

hello.go
func hello() {} 

This sets up the initial statement for creating a function.

From here, we’ll add a second line to provide the instructions for what the function does. In this case, we’ll be printing Hello, World! to the console:

hello.go
func hello() {
    fmt.Println("Hello, World!")
}

Our function is now fully defined, but if we run the program at this point, nothing will happen since we didn’t call the function.

So, inside of our main() function block, let’s call the function with hello():

hello.go
package main

import "fmt"

func main() {
    hello()
}

func hello() {
    fmt.Println("Hello, World!")
}

Now, let’s run the program:

  • go run hello.go

You’ll receive the following output:

Output
Hello, World!

Notice that we also introduced a function called main(). The main() function is a special function that tells the compiler where the program should start. For any program that you want to be executable (a program that can be run from the command line), you will need a main() function. The main() function must appear only once, must be in the main package, and cannot receive or return any arguments, as in the following example:

main.go
package main

import "fmt"

func main() {
    fmt.Println("this is the main section of the program")
}

Functions can be more complicated than the hello() function we defined. We can use for loops, conditional statements, and more within our function block.

For example, the following function uses a conditional statement to check if the input for the name variable contains a vowel, then uses a for loop to iterate over the letters in the name string.

names.go
package main

import (
    "fmt"
    "strings"
)

func main() {
    names()
}

func names() {
    fmt.Println("Enter your name:")

    var name string
    fmt.Scanln(&name)
    // Check whether name has a vowel
    for _, v := range strings.ToLower(name) {
        if v == 'a' || v == 'e' || v == 'i' || v == 'o' || v == 'u' {
            fmt.Println("Your name contains a vowel.")
            return
        }
    }
    fmt.Println("Your name does not contain a vowel.")
}

The names() function we define here sets up a name variable with input, and then sets up a conditional statement within a for loop. This shows how code can be organized within a function definition. However, depending on what we intend with our program and how we want to set up our code, we may want to define the conditional statement and the for loop as two separate functions.
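For instance, the vowel check could be factored out into its own function. The following sketch uses a hypothetical hasVowel helper (not from the original tutorial) to show that refactoring:

```go
package main

import (
	"fmt"
	"strings"
)

// hasVowel reports whether s contains at least one vowel.
func hasVowel(s string) bool {
	for _, v := range strings.ToLower(s) {
		if v == 'a' || v == 'e' || v == 'i' || v == 'o' || v == 'u' {
			return true
		}
	}
	return false
}

func main() {
	if hasVowel("Sammy") {
		fmt.Println("Your name contains a vowel.")
	} else {
		fmt.Println("Your name does not contain a vowel.")
	}
}
```

Splitting the logic this way makes the vowel check reusable and testable on its own.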

Defining functions within a program makes our code modular and reusable so that we can call the same functions without rewriting them.

Working with Parameters

So far we have looked at functions with empty parentheses that do not take arguments, but we can define parameters in function definitions within their parentheses.

A parameter is a named entity in a function definition, specifying an argument that the function can accept. In Go, you must specify the data type for each parameter.

Let’s create a program that repeats a word a specified number of times. It will take a string parameter called word and an int parameter called reps for the number of times to repeat the word.

repeat.go
package main

import "fmt"

func main() {
    repeat("Sammy", 5)
}

func repeat(word string, reps int) {
    for i := 0; i < reps; i++ {
        fmt.Print(word)
    }
}

We passed the value Sammy in for the word parameter, and 5 for the reps parameter. These values correspond with each parameter in the order they were given. The repeat function has a for loop that will iterate the number of times specified by the reps parameter. For each iteration, the value of the word parameter is printed.

Here is the output of the program:

Output
SammySammySammySammySammy

If you have a set of parameters that are all the same value, you can omit specifying the type each time. Let’s create a small program that takes in parameters x, y, and z that are all int values. We’ll create a function that adds the parameters together in different configurations. The sums of these will be printed by the function. Then we’ll call the function and pass numbers into the function.

add_numbers.go
package main

import "fmt"

func main() {
    addNumbers(1, 2, 3)
}

func addNumbers(x, y, z int) {
    a := x + y
    b := x + z
    c := y + z
    fmt.Println(a, b, c)
}

When we created the function signature for addNumbers, we did not need to specify the type each time, but only at the end.

We passed the number 1 in for the x parameter, 2 in for the y parameter, and 3 in for the z parameter. These values correspond with each parameter in the order they are given.

The program is doing the following math based on the values we passed to the parameters:

a = 1 + 2
b = 1 + 3
c = 2 + 3

The function also prints a, b, and c, and based on this math we would expect a to be equal to 3, b to be 4, and c to be 5. Let’s run the program:

  • go run add_numbers.go
Output
3 4 5

When we pass 1, 2, and 3 as parameters to the addNumbers() function, we receive the expected output.

Parameters are arguments that are typically defined as variables within function definitions. They are assigned values when you call the function, passing the arguments into it.

Returning a Value

You can pass a parameter value into a function, and a function can also produce a value.

A function can produce a value with the return statement, which will exit a function and optionally pass an expression back to the caller. The return data type must be specified as well.

So far, we have used the fmt.Println() statement instead of the return statement in our functions. Let’s create a program that instead of printing will return a variable.

In a new text file called double.go, we’ll create a program that doubles the parameter x and returns the variable y. We issue a call to print the result variable, which is formed by running the double() function with 3 passed into it:

double.go
package main

import "fmt"

func main() {
    result := double(3)
    fmt.Println(result)
}

func double(x int) int {
    y := x * 2
    return y
}

We can run the program and see the output:

  • go run double.go
Output
6

The integer 6 is returned as output, which is what we would expect by multiplying 3 by 2.

If a function specifies a return, you must provide a return as part of the code. If you do not, you will receive a compilation error.

We can demonstrate this by commenting out the line with the return statement:

double.go
package main

import "fmt"

func main() {
    result := double(3)
    fmt.Println(result)
}

func double(x int) int {
    y := x * 2
    // return y
}

Now, let’s run the program again:

  • go run double.go
Output
./double.go:13:1: missing return at end of function

Without using the return statement here, the program cannot compile.

Functions exit immediately when they hit a return statement, even if they are not at the end of the function:

return_loop.go
package main

import "fmt"

func main() {
    loopFive()
}

func loopFive() {
    for i := 0; i < 25; i++ {
        fmt.Print(i)
        if i == 5 {
            // Stop function at i == 5
            return
        }
    }
    fmt.Println("This line will not execute.")
}

Here we iterate through a for loop, and tell the loop to run 25 iterations. However, inside the for loop, we have a conditional if statement that checks to see if the value of i is equal to 5. If it is, we issue a return statement. Because we are in the loopFive function, any return at any point in the function will exit the function. As a result, we never get to the last line in this function to print the statement This line will not execute..

Using the return statement within the for loop ends the function, so the line that is outside of the loop will not run. If, instead, we had used a break statement, only the loop would have exited at that time, and the last fmt.Println() line would run.

The return statement exits a function, and may return a value if specified in the function signature.
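As a related note, Go also lets you name return values in the function signature; a bare return statement then returns the current values of those named variables. The following is a brief sketch of this feature, and the triple function is our own example rather than part of the programs above:

```go
package main

import "fmt"

// triple names its return value y in the signature, so the
// bare return statement returns whatever y currently holds.
func triple(x int) (y int) {
	y = x * 3
	return
}

func main() {
	fmt.Println(triple(3))
}
```

Named returns can make short functions easier to read, but in longer functions an explicit return value is usually clearer.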

Returning Multiple Values

More than one return value can be specified for a function. Let’s examine the repeat.go program and make it return two values. The first will be the repeated value and the second will be an error if the reps parameter is not a value greater than 0:

repeat.go
package main

import "fmt"

func main() {
    val, err := repeat("Sammy", -1)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(val)
}

func repeat(word string, reps int) (string, error) {
    if reps <= 0 {
        return "", fmt.Errorf("invalid value of %d provided for reps. value must be greater than 0.", reps)
    }
    var value string
    for i := 0; i < reps; i++ {
        value = value + word
    }
    return value, nil
}

The first thing the repeat function does is check to see if the reps argument is a valid value. Any value that is not greater than 0 will cause an error. Since we passed in the value of -1, this branch of code will execute. Notice that when we return from the function, we have to provide both the string and error return values. Because the provided arguments resulted in an error, we will pass back a blank string for the first return value, and the error for the second return value.

In the main() function, we can receive both return values by declaring two new variables, val and err. Because there could be an error in the return, we want to check to see if we received an error before continuing on with our program. In this example, we did receive an error. We print out the error and return out of the main() function to exit the program.

If there was not an error, we would print out the return value of the function.

Note: It is considered best practice to return no more than two or three values from a function. Additionally, you should always return any error as the last return value.

Running the program will result in the following output:

Output
invalid value of -1 provided for reps. value must be greater than 0.

In this section we reviewed how we can use the return statement to return multiple values from a function.
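When a function returns multiple values, Go requires you to account for each one at the call site. If you only need some of the values, you can discard the rest with the blank identifier _. Here is a short sketch reusing the repeat function from above:

```go
package main

import "fmt"

func repeat(word string, reps int) (string, error) {
	if reps <= 0 {
		return "", fmt.Errorf("invalid value of %d provided for reps. value must be greater than 0.", reps)
	}
	var value string
	for i := 0; i < reps; i++ {
		value = value + word
	}
	return value, nil
}

func main() {
	// Discard the error with the blank identifier. This is only
	// appropriate when you are certain the arguments are valid;
	// in most code you should check the error as shown earlier.
	val, _ := repeat("Sammy", 3)
	fmt.Println(val)
}
```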

Conclusion

Functions are code blocks of instructions that perform actions within a program, helping to make our code reusable and modular.

To learn more about how to make your code more modular, you can read our guide on How To Write Packages in Go.

DigitalOcean Community Tutorials