Understanding Arrays in Go

Introduction

An array in Go is an ordered sequence of elements that has its capacity defined at creation time. Once an array has been allocated, its size can no longer be changed. Because the size of an array is static, the data structure only needs to allocate memory once, as opposed to a variable length data structure that must dynamically allocate memory so that it can become larger or smaller in the future.

Although the fixed length of arrays can make them somewhat rigid to work with, the one-time memory allocation can increase the speed and performance of your program. Because of this, developers typically use arrays when optimizing programs. In Go, slices are the variable length version of arrays. Slices provide more flexibility and constitute what you would think of as arrays in other languages.

In this article, you will learn how to declare an array, how to call individual elements using indexing, how to slice the array into smaller sets, and the difference between an array and a slice in Go.

Defining an Array

Arrays are defined by declaring the size of the array in brackets [ ], followed by the data type of the elements. An array in Go must have all its elements be the same data type. After the data type, you can declare the individual values of the array elements in curly brackets { }.

The following is the general schema for declaring an array:

[capacity]data_type{element_values} 

Note: It is important to remember that every declaration of a new array creates a distinct type. So, although [2]int and [3]int both have integer elements, their differing capacities make their data types incompatible.
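To make this concrete, here is a short illustrative sketch (the pair and triple variable names are just examples, not from the original) showing that arrays of different lengths are distinct, incompatible types even when they share an element type:

package main

import "fmt"

func main() {
    pair := [2]int{1, 2}
    triple := [3]int{1, 2, 3}

    // The next line would not compile, because [2]int and [3]int
    // are different types:
    // pair = triple

    fmt.Println(pair, triple)
}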

If you do not declare the values of the array’s elements, every element defaults to its zero value, which means that the elements of the array will be empty. For integers, this is represented by 0, and for strings it is represented by an empty string.

For example, the following array numbers has three integer elements that do not yet have a value:

var numbers [3]int 

If you printed numbers, you would receive the following output:

Output
[0 0 0]

If you would like to assign the values of the elements when you create the array, place the values in curly brackets. An array of strings with set values looks like this:

[4]string{"blue coral", "staghorn coral", "pillar coral", "elkhorn coral"} 

You can store an array in a variable and print it out:

coral := [4]string{"blue coral", "staghorn coral", "pillar coral", "elkhorn coral"}
fmt.Println(coral)

Running a program with the preceding lines would give you the following output:

Output
[blue coral staghorn coral pillar coral elkhorn coral]

Notice that there is no delineation between the elements in the array when it is printed, making it difficult to tell where one element ends and another begins. Because of this, it is sometimes helpful to use the fmt.Printf function instead, which can format strings before printing them to the screen. Provide the %q verb with this command to instruct the function to put quotation marks around the values:

fmt.Printf("%q\n", coral) 

This will result in the following:

Output
["blue coral" "staghorn coral" "pillar coral" "elkhorn coral"]

Now each item is quoted. The \n escape sequence instructs the formatter to add a line return at the end.

With a general idea of how to declare arrays and what they consist of, you can now move on to learning how to specify elements in an array with indexes.

Indexing Arrays

Each element in an array can be called individually through indexing. Each element corresponds to an index number, which is an int value starting from the index number 0 and counting up.

For the coral array from the earlier example, the index breakdown looks like this:

“blue coral”      “staghorn coral”      “pillar coral”      “elkhorn coral”
0                 1                     2                   3

The first element, the string "blue coral", starts at index 0, and the array ends at index 3 with the element "elkhorn coral".

You can call a discrete element of the array by referring to its index number in brackets after the variable in which the array is stored:

fmt.Println(coral[2]) 

This will print the following:

Output
pillar coral

The index numbers for this array range from 0 to 3, so to call any of the elements individually and assign them a value, you could refer to the index numbers like this:

coral[0] = "blue coral" coral[1] = "staghorn coral" coral[2] = "pillar coral" coral[3] = "elkhorn coral" 

If you call the array coral with an index number greater than 3, it will be out of range, and Go will consider the action invalid:

fmt.Println(coral[22]) 
Output
invalid array index 22 (out of bounds for 4-element array)

When indexing an array, you must always use a positive number. Unlike some languages that let you index backwards with a negative number, doing that in Go will result in an error:

fmt.Println(coral[-1]) 
Output
invalid array index -1 (index must be non-negative)
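Since negative indexes are not available, a common way to reach the last element is to compute its index from the array’s length with the built-in len() function (covered later in this tutorial). A brief sketch using the coral array:

fmt.Println(coral[len(coral)-1])

This would print elkhorn coral, the element at index 3.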

Now that you know how to work with individual elements in an array, you can learn how to slice arrays to select a range of elements.

Slicing Arrays

By using index numbers to determine beginning and endpoints, you can call a subsection of the values within an array. This is called slicing the array. You can do this by creating a range of index numbers separated by a colon, in the form of [first_index:second_index].

Let’s say you would like to just print the middle items of coral, without the first and last element. You can do this by creating a slice starting at index 1 and ending just before index 3:

fmt.Println(coral[1:3]) 

Running a program with this line would yield the following:

Output
[staghorn coral pillar coral]

When creating a slice, as in [1:3], the first number is where the slice starts (inclusive), and the second number is the sum of the first number and the total number of elements you would like to retrieve:

array[starting_index : (starting_index + length_of_slice)] 

In this instance, you called the second element (or index 1) as the starting point, and called two elements in total. This is how the calculation would look:

array[1 : (1 + 2)] 

Which is how you arrived at this notation:

coral[1:3] 

If you want to set the beginning or end of the array as a starting or end point of the slice, you can omit one of the numbers in the array[first_index:second_index] syntax. For example, if you want to print the first three items of the array coral — which would be "blue coral", "staghorn coral", and "pillar coral" — you can do so by typing:

fmt.Println(coral[:3]) 

This will print:

Output
[blue coral staghorn coral pillar coral]

This printed the beginning of the array, stopping right before index 3.

To include all the items at the end of an array, you would reverse the syntax:

fmt.Println(coral[1:]) 

This would give the following:

Output
[staghorn coral pillar coral elkhorn coral]

This section discussed calling individual parts of an array by slicing out subsections. Next, you’ll learn a specific function that Go uses for arrays: len().

Array Functions

In Go, len() is a built-in function made to help you work with arrays. Like with strings, you can calculate the length of an array by using len() and passing in the array as a parameter.

For example, to find how many elements are in the coral array, you would use:

len(coral) 

If you print out the length for the array coral, you’ll receive the following output:

Output
4

This shows that the array has a length of 4, returned as an int value, which is correct because the array coral has four elements:

coral := [4]string{"blue coral", "staghorn coral", "pillar coral", "elkhorn coral"} 

If you create an array of integers with more elements, you could use the len() function on this as well:

numbers := [13]int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
fmt.Println(len(numbers))

This would result in the following output:

Output
13

Although these example arrays have relatively few items, the len() function is especially useful when determining how many elements are in very large arrays.
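In practice, len() often appears as the bound of a for loop that visits every element by index. A short sketch, assuming the coral array from earlier in this tutorial:

for i := 0; i < len(coral); i++ {
    fmt.Println(i, coral[i])
}

This prints each index alongside its element, and keeps working without changes if the array is declared with a different fixed size.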

Now that you know how to use len() to output the length of arrays, you can learn how arrays differ from another common data structure: slices.

How Arrays Differ from Slices

As mentioned before, the primary way in which arrays are different from slices is that the size of an array cannot be modified. This means that while you can change the values of elements in an array, you can’t make the array larger or smaller after it has been defined. A slice, on the other hand, can alter its length.

Let’s consider your coral array:

coral := [4]string{"blue coral", "staghorn coral", "pillar coral", "elkhorn coral"} 

Say you want to add the item "black coral" to this array. If you try to use the append() function with the array by typing:

coral = append(coral, "black coral") 

You will receive an error as your output:

Output
first argument to append must be slice; have [4]string

If you create an array and decide that you need it to have a variable length, you can convert it to a slice. To convert an array to a slice, use the slicing process you learned in the Slicing Arrays section of this tutorial, except this time select the entire slice by omitting both of the index numbers that would determine the endpoints:

coral[:] 

Keep in mind that you can’t convert the variable coral to a slice itself, since once a variable is defined in Go, its type can’t be changed. To work around this, you can copy the entire contents of the array into a new variable as a slice:

coralSlice := coral[:] 

If you printed coralSlice, you would receive the following output:

Output
[blue coral staghorn coral pillar coral elkhorn coral]

Now, try to use append() with the newly converted slice:

newSlice := append(coralSlice, "black coral")
fmt.Printf("%q\n", newSlice)

This will output the slice with the added element:

Output
["blue coral" "staghorn coral" "pillar coral" "elkhorn coral" "black coral"]

Conclusion

In this tutorial, you learned that the array data type is a sequenced data type with a fixed length, which makes it faster for Go to process at the cost of flexibility. Arrays also can help with communication on a team of developers: When others collaborate with you on your code, your use of arrays will convey to them that you don’t intend for their lengths to be changed.

With this data type in your tool box, you can now go more in-depth learning the variable length version of this structure: slices.

DigitalOcean Community Tutorials

How To Install Go on Debian 10

Introduction

Go, also known as golang, is a modern, open-source programming language developed by Google. Go tries to make software development safe, fast and approachable to help you build reliable and efficient software.

This tutorial will guide you through downloading and installing Go from its official binary release, as well as compiling and executing a “Hello, World!” program on a Debian 10 server.

Prerequisites

To complete this tutorial, you will need access to a Debian 10 server and a non-root user with sudo privileges, as described in Initial Server Setup with Debian 10.

Step 1 — Downloading Go

In this step, we’ll download Go onto your server.

First, ensure your apt package index is up to date using the following command:

  • sudo apt update

Now install curl so you will be able to grab the latest Go release:

  • sudo apt install curl

Next, visit the official Go downloads page and find the URL for the current binary release’s tarball. Make sure you copy the link for the latest version that is compatible with a 64-bit architecture.

From your home directory, use curl to retrieve the tarball:

  • curl -O https://dl.google.com/go/go1.12.7.linux-amd64.tar.gz

Although the tarball came from a genuine source, it is best practice to verify both the authenticity and integrity of items downloaded from the internet. This verification certifies that the file was neither tampered with nor corrupted during the download process. The sha256sum command produces a unique 256-bit hash:

  • sha256sum go1.12.7.linux-amd64.tar.gz
Output
66d83bfb5a9ede000e33c6579a91a29e6b101829ad41fffb5c5bb6c900e109d9  go1.12.7.linux-amd64.tar.gz

Compare the hash in your output to the checksum value on the Go download page. If they match, then it is safe to conclude that the download is legitimate.

With Go downloaded and the integrity of the file validated, let’s proceed with the installation.

Step 2 — Installing Go

We’ll now use tar to extract the tarball. The following flags are used to instruct tar how to extract, view, and operate on the downloaded tarball:

  • The x flag tells it that we want to extract files from a tarball
  • The v flag indicates that we want verbose output, including a list of the files being extracted
  • The f flag tells tar that we’ll specify a filename to operate on

Now let’s put it all together and run the command to extract the package:

  • tar xvf go1.12.7.linux-amd64.tar.gz

You should now have a directory called go in your home directory. Recursively change the owner and group of this directory to root, and move it to /usr/local:

  • sudo chown -R root:root ./go
  • sudo mv go /usr/local

Note: Although /usr/local/go is the officially-recommended location, some users may prefer or require different paths.

At this point, using Go would require specifying the full path to its install location in the command line. To make interacting with Go more user-friendly, we will set a few paths.

Step 3 — Setting Go Paths

In this step, we’ll set some paths in your environment.

First, open your profile file, where you will set the paths that tell your system where to find Go and your workspace:

  • nano ~/.profile

At the end of the file, add the following lines:

export GOPATH=$HOME/work
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

If you chose a different installation location for Go, then you should add the following lines to this file instead of the lines shown above. In this example, we are adding the lines that would be required if you installed Go in your home directory:

export GOROOT=$HOME/go
export GOPATH=$HOME/work
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

With the appropriate lines pasted into your profile, save and close the file.

Next, refresh your profile by running:

  • source ~/.profile

With the Go installation in place and the necessary environment paths set, let’s confirm that our setup works by composing a short program.

Step 4 — Testing Your Installation

Now that Go is installed and the paths are set for your server, you can ensure that Go is working as expected.

Create a new directory for your Go workspace, which is where Go will build its files:

  • mkdir $HOME/work

Then, create a directory hierarchy in this folder so that you will be able to create your test file. We’ll use the directory my_project as an example:

  • mkdir -p work/src/my_project/hello

Next, you can create a traditional “Hello World” Go file:

  • nano ~/work/src/my_project/hello/hello.go

Inside your editor, add the following code to the file, which declares the main package, imports the fmt formatted I/O package, and defines a main function that prints “Hello, World!” when run:

~/work/src/my_project/hello/hello.go
package main

import "fmt"

func main() {
    fmt.Printf("Hello, World!\n")
}

When it runs, this program will print Hello, World!, indicating that Go programs are compiling correctly.

Save and close the file, then compile it by invoking the go install command:

  • go install my_project/hello

With the program compiled, you can run it by executing the command:

  • hello

Go is successfully installed and functional if you see the following output:

Output
Hello, World!

You can determine where the compiled hello binary is installed by using the which command:

  • which hello
Output
/home/sammy/work/bin/hello

The “Hello, World!” program established that you have a Go development environment.

Conclusion

By downloading and installing the latest Go package and setting its paths, you now have a system to use for Go development. To learn more about working with Go, see our development series How To Code in Go. You can also consult the official documentation on How to Write Go Code.

DigitalOcean Community Tutorials

How To Create a Self-Signed SSL Certificate for Nginx on Debian 10

Introduction

TLS, or transport layer security, and its predecessor SSL, which stands for secure sockets layer, are web protocols used to wrap normal traffic in a protected, encrypted wrapper.

Using this technology, servers can send traffic safely between the server and clients without the possibility of the messages being intercepted by outside parties. The certificate system also assists users in verifying the identity of the sites that they are connecting with.

In this guide, we will show you how to set up a self-signed SSL certificate for use with an Nginx web server on a Debian 10 server.

Note: A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.

A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where the encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate. To learn how to set up a free trusted certificate with the Let’s Encrypt project, consult How to Secure Nginx with Let’s Encrypt on Debian 10.

Prerequisites

Before you begin, you should have a Debian 10 server set up with a non-root user with sudo privileges and a firewall, as described in the initial server setup guide, and the Nginx web server installed, as covered in the prerequisite tutorial on installing Nginx.

Step 1 — Creating the SSL Certificate

TLS/SSL works by using a combination of a public certificate and a private key. The SSL key is kept secret on the server and is used to encrypt content sent to clients. The SSL certificate is publicly shared with anyone requesting the content. It can be used to decrypt the content signed by the associated SSL key.

We can create a self-signed key and certificate pair with OpenSSL in a single command:

  • sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt

You will be asked a series of questions. Before we go over that, let’s take a look at what is happening in the command we are issuing:

  • openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.
  • req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. The “X.509” is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
  • -x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
  • -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Nginx to be able to read the file without user intervention when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
  • -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
  • -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
  • -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
  • -out: This tells OpenSSL where to place the certificate that we are creating.

As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.

Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name associated with your server or your server’s public IP address.

The entirety of the prompts will look something like this:

Output
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:your_domain_or_server_IP_address
Email Address []:admin@your_domain.com

Both of the files you created will be placed in the appropriate subdirectories of the /etc/ssl directory.

While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

We can do this by typing:

  • sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096

This will take a while, but when it’s done you will have a strong DH group at /etc/nginx/dhparam.pem that you can use in your configuration.

Step 2 — Configuring Nginx to Use SSL

We have created our key and certificate files under the /etc/ssl directory. Now we just need to modify our Nginx configuration to take advantage of these.

We will make a few adjustments to our configuration.

  1. We will create a configuration snippet containing our SSL key and certificate file locations.
  2. We will create a configuration snippet containing strong SSL settings that can be used with any certificates in the future.
  3. We will adjust our Nginx server blocks to handle SSL requests and use the two snippets above.

This method of configuring Nginx will allow us to keep clean server blocks and put common configuration segments into reusable modules.

Creating a Configuration Snippet Pointing to the SSL Key and Certificate

First, let’s create a new Nginx configuration snippet in the /etc/nginx/snippets directory.

To properly distinguish the purpose of this file, let’s call it self-signed.conf:

  • sudo nano /etc/nginx/snippets/self-signed.conf

Within this file, we need to set the ssl_certificate directive to our certificate file and the ssl_certificate_key to the associated key. Add the following lines to the file:

/etc/nginx/snippets/self-signed.conf
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

When you’ve added those lines, save and close the file.

Creating a Configuration Snippet with Strong Encryption Settings

Next, we will create another snippet that will define some SSL settings. This will set Nginx up with a strong SSL cipher suite and enable some advanced features that will help keep our server secure.

The parameters we will set can be reused in future Nginx configurations, so we will give the file a generic name:

  • sudo nano /etc/nginx/snippets/ssl-params.conf

To set up Nginx SSL securely, we will be using the recommendations by Remy van Elst on the Cipherli.st site. This site is designed to provide easy-to-consume encryption settings for popular software.

Note: The suggested settings on the Cipherli.st site offer strong security. Sometimes this comes at the cost of greater client compatibility. If you need to support older clients, there is an alternative list that you can access by clicking the link on the page labeled Yes, give me a ciphersuite that works with legacy / old software. That list can be substituted for the items below.

The choice of which config you use will depend largely on what you need to support. They both will provide great security.

For our purposes, we can copy the provided settings in their entirety. We just need to make a few small modifications.

First, we will add our preferred DNS resolver for upstream requests. We will use Google’s for this guide.

Second, we will comment out the line that sets the strict transport security header. Before uncommenting this line, you should take a moment to read up on HTTP Strict Transport Security, or HSTS, and specifically its “preload” functionality. Preloading HSTS provides increased security, but can have far-reaching consequences if accidentally enabled or enabled incorrectly.

Copy the following into your ssl-params.conf snippet file:

/etc/nginx/snippets/ssl-params.conf
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_timeout  10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable strict transport security for now. You can uncomment the following
# line if you understand the implications.
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

Because we are using a self-signed certificate, SSL stapling will not be used. Nginx will output a warning but continue to operate correctly.

Save and close the file when you are finished.

Adjusting the Nginx Configuration to Use SSL

Now that we have our snippets, we can adjust our Nginx configuration to enable SSL.

We will assume in this guide that you are using a custom server block configuration file in the /etc/nginx/sites-available directory, as outlined in Step 5 of the prerequisite tutorial on installing Nginx. We will use /etc/nginx/sites-available/your_domain for this example. Substitute your configuration filename/domain name as needed.

Before we go any further, let’s back up our current configuration file:

  • sudo cp /etc/nginx/sites-available/your_domain /etc/nginx/sites-available/your_domain.bak

Now, open the configuration file to make adjustments:

  • sudo nano /etc/nginx/sites-available/your_domain

If you followed the prerequisites, your server block will look like this:

/etc/nginx/sites-available/your_domain
server {
    listen 80;
    listen [::]:80;

    root /var/www/your_domain/html;
    index index.html index.htm index.nginx-debian.html;

    server_name your_domain www.your_domain;

    location / {
            try_files $uri $uri/ =404;
    }
}

Your file may be in a different order, and instead of the root and index directives you may have some location, proxy_pass, or other custom configuration statements. This is ok, as we only need to update the listen directives and include our SSL snippets. We will modify this existing server block to serve SSL traffic on port 443, and then create a new server block to respond on port 80 and automatically redirect traffic to port 443.

Note: We will use a 302 redirect until we have verified that everything is working properly. Afterwards, we can change this to a permanent 301 redirect.

In your existing configuration file, update the two listen statements to use port 443 and SSL, and then include the two snippet files we created in previous steps:

/etc/nginx/sites-available/your_domain
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    root /var/www/your_domain/html;
    index index.html index.htm index.nginx-debian.html;

    server_name your_domain www.your_domain;

    . . .
}

Next, paste a second server block into the configuration file, after the closing bracket (}) of the first block:

/etc/nginx/sites-available/your_domain
. . .
server {
    listen 80;
    listen [::]:80;

    server_name your_domain www.your_domain;

    return 302 https://$server_name$request_uri;
}

This is a bare-bones configuration that listens on port 80 and performs the redirect to HTTPS. Save and close the file when you are finished editing.

Step 3 — Adjusting the Firewall

If you have the ufw firewall enabled, as recommended by the prerequisite guides, you’ll need to adjust the settings to allow for SSL traffic. Luckily, Nginx registers a few profiles with ufw upon installation.

We can see the available profiles by typing:

  • sudo ufw app list

You should see a list like this:

Output
Available applications:
. . .
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
. . .

You can see the current setting by typing:

  • sudo ufw status

If you followed the prerequisites, it will look like this, meaning that only HTTP traffic is allowed to the web server:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx HTTP                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

To additionally let in HTTPS traffic, we can allow the “Nginx Full” profile and then delete the redundant “Nginx HTTP” profile allowance:

  • sudo ufw allow 'Nginx Full'
  • sudo ufw delete allow 'Nginx HTTP'

Your status should now look like this:

  • sudo ufw status
Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)

With our firewall configured properly, we can move on to testing our Nginx configuration.

Step 4 — Enabling the Changes in Nginx

Now that we’ve made our changes and adjusted our firewall, we can restart Nginx to implement our new changes.

First, we should check to make sure that there are no syntax errors in our files. We can do this by typing:

  • sudo nginx -t

If everything is successful, you will get a result that looks like this:

Output
nginx: [warn] "ssl_stapling" ignored, issuer certificate not found for certificate "/etc/ssl/certs/nginx-selfsigned.crt"
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Note the warning in the beginning. As discussed earlier, this particular setting throws a warning since our self-signed certificate can’t use SSL stapling. This is expected and our server can still encrypt connections correctly.

If your output matches the above, your configuration file has no syntax errors. We can safely restart Nginx to implement our changes:

  • sudo systemctl restart nginx

With our Nginx configuration tested, we can move on to testing our setup.

Step 5 — Testing Encryption

We’re now ready to test our SSL server.

Open your web browser and type https:// followed by your server’s domain name or IP into the address bar:

https://your_domain_or_server_IP 

Because the certificate we created isn’t signed by one of your browser’s trusted certificate authorities, you will likely see a scary-looking warning like the one below (the following appears when using Google Chrome):

Nginx self-signed cert warning

This is expected and normal. We are only interested in the encryption aspect of our certificate, not the third party validation of our host’s authenticity. Click “ADVANCED” and then the link provided to proceed to your host:

Nginx self-signed override

You should be taken to your site. If you look in the browser address bar, you will see a lock with an “x” over it. In this case, this just means that the certificate cannot be validated. It is still encrypting your connection.

If you configured Nginx with two server blocks, automatically redirecting HTTP content to HTTPS, you can also check whether the redirect functions correctly:

http://server_domain_or_IP 

If this results in the same icon, this means that your redirect worked correctly.

Step 6 — Changing to a Permanent Redirect

If your redirect worked correctly and you are sure you want to allow only encrypted traffic, you should modify the Nginx configuration to make the redirect permanent.

Open your server block configuration file again:

  • sudo nano /etc/nginx/sites-available/your_domain

Find the return 302 and change it to return 301:

/etc/nginx/sites-available/your_domain
    return 301 https://$server_name$request_uri;

Save and close the file.

Check your configuration for syntax errors:

  • sudo nginx -t

When you’re ready, restart Nginx to make the redirect permanent:

  • sudo systemctl restart nginx

Conclusion

You have configured your Nginx server to use strong encryption for client connections. This will allow you to serve requests securely, and will prevent outside parties from reading your traffic.

DigitalOcean Community Tutorials

How To Install MariaDB on Debian 10

Introduction

MariaDB is an open-source database management system, commonly used as an alternative for the MySQL portion of the popular LAMP (Linux, Apache, MySQL, PHP/Python/Perl) stack. It is intended to be a drop-in replacement for MySQL, and Debian now ships only with MariaDB packages. If you attempt to install MySQL server related packages, you’ll receive the compatible MariaDB replacement versions instead.

The short version of this installation guide consists of these three steps:

  • Update your package index using apt
  • Install the mariadb-server package using apt. The package also pulls in related tools to interact with MariaDB
  • Run the included mysql_secure_installation security script to restrict access to the server
  • sudo apt update
  • sudo apt install mariadb-server
  • sudo mysql_secure_installation

This tutorial will explain how to install MariaDB version 10.3 on a Debian 10 server, and verify that it is running and has a safe initial configuration.

Prerequisites

To follow this tutorial, you will need:

  • One Debian 10 server set up by following the Debian 10 initial server setup guide, including a non-root user with sudo privileges

Step 1 — Installing MariaDB

On Debian 10, MariaDB version 10.3 is included in the APT package repositories by default. It is marked as the default MySQL variant by the Debian MySQL/MariaDB packaging team.

To install it, update the package index on your server with apt:

  • sudo apt update

Then install the package:

  • sudo apt install mariadb-server

These commands will install MariaDB, but will not prompt you to set a password or make any other configuration changes. Because the default configuration leaves your installation of MariaDB insecure, we will use a script that the mariadb-server package provides to restrict access to the server and remove unused accounts.

Step 2 — Configuring MariaDB

For new MariaDB installations, the next step is to run the included security script. This script changes some of the less secure default options. We will use it to block remote root logins and to remove unused database users.

Run the security script:

  • sudo mysql_secure_installation

This will take you through a series of prompts where you can make some changes to your MariaDB installation’s security options. The first prompt will ask you to enter the current database root password. Since we have not set one up yet, press ENTER to indicate “none”.

The next prompt asks you whether you’d like to set up a database root password. Type N and then press ENTER. In Debian, the root account for MariaDB is tied closely to automated system maintenance, so we should not change the configured authentication methods for that account. Doing so would make it possible for a package update to break the database system by removing access to the administrative account. Later, we will cover how to optionally set up an additional administrative account for password access if socket authentication is not appropriate for your use case.

From there, you can press Y and then ENTER to accept the defaults for all the subsequent questions. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MariaDB immediately respects the changes you have made.

Step 3 — (Optional) Adjusting User Authentication and Privileges

In Debian systems running MariaDB 10.3, the root MariaDB user is set to authenticate using the unix_socket plugin by default rather than with a password. This allows for some greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) administrative rights.

Because the server uses the root account for tasks like log rotation and starting and stopping the server, it is best not to change the root account’s authentication details. Changing credentials in the /etc/mysql/debian.cnf configuration file may work initially, but package updates could potentially overwrite those changes. Instead of modifying the root account, the package maintainers recommend creating a separate administrative account for password-based access.

To do so, we will create a new account called admin with the same capabilities as the root account, but configured for password authentication. To do this, open up the MariaDB prompt from your terminal:

  • sudo mysql

Now, we will create a new user with root privileges and password-based access. Change the username and password to match your preferences:

MariaDB [(none)]> GRANT ALL ON *.* TO 'admin'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION; 

Flush the privileges to ensure that they are saved and available in the current session:

MariaDB [(none)]> FLUSH PRIVILEGES; 

Following this, exit the MariaDB shell:

MariaDB [(none)]> exit 

Finally, let’s test the MariaDB installation.

Step 4 — Testing MariaDB

When installed from the default repositories, MariaDB should start running automatically. To test this, check its status.

  • sudo systemctl status mariadb

You’ll receive output that is similar to the following:

Output
● mariadb.service - MariaDB 10.3.15 database server
   Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-07-12 20:35:29 UTC; 47min ago
     Docs: man:mysqld(8)
           https://mariadb.com/kb/en/library/systemd/
 Main PID: 2036 (mysqld)
   Status: "Taking your SQL requests now..."
    Tasks: 30 (limit: 2378)
   Memory: 76.1M
   CGroup: /system.slice/mariadb.service
           └─2036 /usr/sbin/mysqld

Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: Phase 6/7: Checking and upgrading tables
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: Running 'mysqlcheck' with connection arguments: --socket='/var/run/mysqld/mysqld.sock' --host='localhost' --socket='/var/run/mysqld/mysqld.sock' --host='localhost' --socket='/var/run/mysqld/mysqld.sock'
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: # Connecting to localhost...
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: # Disconnecting from localhost...
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: Processing databases
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: information_schema
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: performance_schema
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: Phase 7/7: Running 'FLUSH PRIVILEGES'
Jul 12 20:35:29 deb-mariadb1 /etc/mysql/debian-start[2074]: OK
Jul 12 20:35:30 deb-mariadb1 /etc/mysql/debian-start[2132]: Triggering myisam-recover for all MyISAM tables and aria-recover for all Aria tables

If MariaDB isn’t running, you can start it with the command sudo systemctl start mariadb.

For an additional check, you can try connecting to the database using the mysqladmin tool, which is a client that lets you run administrative commands. For example, this command says to connect to MariaDB as root and return the version using the Unix socket:

  • sudo mysqladmin version

You should receive output similar to this:

Output
mysqladmin  Ver 9.1 Distrib 10.3.15-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Server version          10.3.15-MariaDB-1
Protocol version        10
Connection              Localhost via UNIX socket
UNIX socket             /var/run/mysqld/mysqld.sock
Uptime:                 48 min 14 sec

Threads: 7  Questions: 474  Slow queries: 0  Opens: 177  Flush tables: 1  Open tables: 31  Queries per second avg: 0.163

If you configured a separate administrative user with password authentication, you could perform the same operation by typing:

  • mysqladmin -u admin -p version

This means that MariaDB is up and running and that your user is able to authenticate successfully.

Conclusion

In this guide you installed MariaDB to act as an SQL server. During the installation process you also secured the server. Optionally, you also created a separate user to ensure administrative access to MariaDB across package updates.

Now that you have a running and secure MariaDB server, here are some examples of next steps that you can take to work with the server:

You can also incorporate MariaDB into a larger application stack:

DigitalOcean Community Tutorials

How to Install and use Composer on Debian 10

Introduction

Composer is a popular dependency management tool for PHP, created mainly to facilitate installation and updates for project dependencies. It will check which other packages a specific project depends on and install them for you, using the appropriate versions according to the project requirements. Composer is also commonly used to bootstrap new projects based on popular PHP frameworks, such as Symfony and Laravel.

In this guide, we’ll see how to install and use Composer on a Debian 10 server.

Prerequisites

To complete this tutorial, you will need one Debian 10 server set up by following the Debian 10 initial server setup guide, including a regular user with sudo privileges.

Step 1 — Installing the Dependencies

Before you can download and install Composer, we’ll ensure your server has all dependencies installed.

First, update the package manager cache by running:

  • sudo apt update

Now, let’s install the dependencies. We’ll need curl in order to download Composer and php-cli for installing and running it. The php-mbstring package is necessary to provide functions for a library we’ll be using. git is used by Composer for downloading project dependencies, and unzip for extracting zipped packages. Everything can be installed with the following command:

  • sudo apt install curl php-cli php-mbstring git unzip

With the prerequisites installed, we can install Composer itself.

Step 2 — Downloading and Installing Composer

Composer provides an installer, written in PHP. We’ll download it, verify that it’s not corrupted, and then use it to install Composer.

Make sure you’re in your home directory, then retrieve the installer using curl:

  • cd ~
  • curl -sS https://getcomposer.org/installer -o composer-setup.php

Next, verify that the installer matches the SHA-384 hash for the latest installer found on the Composer Public Keys / Signatures page. Copy the hash from that page and store it as a shell variable:

  • HASH=48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5

Make sure that you substitute the latest hash for the highlighted value.

Now execute the following PHP script to verify that the installation script is safe to run:

  • php -r "if (hash_file('SHA384', 'composer-setup.php') === '$ HASH') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"

You’ll see the following output.

Output
Installer verified 

If you see Installer corrupt, then you’ll need to download the installation script again and double check that you’re using the correct hash. Then run the command to verify the installer again. Once you have a verified installer, you can continue.

To install Composer globally, use the following command, which will download and install Composer as a system-wide command named composer, under /usr/local/bin:

  • sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer

You’ll see the following output:

Output
All settings correct for using Composer
Downloading...

Composer (version 1.8.6) successfully installed to: /usr/local/bin/composer
Use it: php /usr/local/bin/composer

To test your installation, run:

  • composer

And you’ll see output displaying Composer’s version and arguments, similar to this:

Output
   ______
  / ____/___  ____ ___  ____  ____  ________  _____
 / /   / __ \/ __ `__ \/ __ \/ __ \/ ___/ _ \/ ___/
/ /___/ /_/ / / / / / / /_/ / /_/ (__  )  __/ /
\____/\____/_/ /_/ /_/ .___/\____/____/\___/_/
                    /_/

Composer version 1.8.6 2019-06-11 15:03:05

Usage:
  command [options] [arguments]

Options:
  -h, --help                     Display this help message
  -q, --quiet                    Do not output any message
  -V, --version                  Display this application version
      --ansi                     Force ANSI output
      --no-ansi                  Disable ANSI output
  -n, --no-interaction           Do not ask any interactive question
      --profile                  Display timing and memory usage information
      --no-plugins               Whether to disable plugins.
  -d, --working-dir=WORKING-DIR  If specified, use the given directory as working directory.
  -v|vv|vvv, --verbose           Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
. . .

This verifies that Composer installed successfully on your system and is available system-wide.

Note: If you prefer to have separate Composer executables for each project you host on this server, you can install it locally, on a per-project basis. Users of NPM will be familiar with this approach. This method is also useful when your system user doesn’t have permission to install software system-wide.

To do this, use the command php composer-setup.php. This will generate a composer.phar file in your current directory, which can be executed with the ./composer.phar command.

Now let’s look at using Composer to manage PHP dependencies.

Step 3 — Using Composer in a PHP Project

PHP projects often depend on external libraries, and managing those dependencies and their versions can be tricky. Composer solves that by tracking your dependencies and making it easy for others to install them.

In order to use Composer in your project, you’ll need a composer.json file. The composer.json file tells Composer which dependencies it needs to download for your project, and which versions of each package are allowed to be installed. This is extremely important to keep your project consistent and avoid installing unstable versions that could potentially cause backwards compatibility issues.

You don’t need to create this file manually – it’s easy to run into syntax errors when you do so. Composer auto-generates the composer.json file when you add a dependency to your project using the require command. You can add additional dependencies in the same way, without the need to manually edit this file.

The process of using Composer to install a package as dependency in a project involves the following steps:

  • Identify what kind of library the application needs.
  • Research a suitable open source library on Packagist.org, the official package repository for Composer.
  • Choose the package you want to depend on.
  • Run composer require to include the dependency in the composer.json file and install the package.

Let’s try this out with a demo application.

The goal of this application is to transform a given sentence into a URL-friendly string – a slug. This is commonly used to convert page titles to URL paths (like the final portion of the URL for this tutorial).

Let’s start by creating a directory for our project. We’ll call it slugify:

  • cd ~
  • mkdir slugify
  • cd slugify

Now it’s time to search Packagist.org for a package that can help us generate slugs. If you search for the term “slug” on Packagist, you’ll get a result similar to this:

Packagist Search

You’ll see two numbers on the right side of each package in the list. The number on the top represents how many times the package was installed, and the number on the bottom shows how many times a package was starred on GitHub. You can reorder the search results based on these numbers (look for the two icons on the right side of the search bar). Generally speaking, packages with more installations and more stars tend to be more stable, since so many people are using them. It’s also important to check the package description for relevance to make sure it’s what you need.

We need a simple string-to-slug converter. From the search results, the package cocur/slugify seems to be a good match, with a reasonable amount of installations and stars.

Packages on Packagist have a vendor name and a package name. Each package has a unique identifier (a namespace) in the same format GitHub uses for its repositories, in the form vendor/package. The library we want to install uses the namespace cocur/slugify. You need the namespace in order to require the package in your project.

Now that you know exactly which package you want to install, run composer require to include it as a dependency and also generate the composer.json file for the project:

  • composer require cocur/slugify

You’ll see the following output as Composer downloads the dependency:

Output
Using version ^3.2 for cocur/slugify
./composer.json has been created
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 1 install, 0 updates, 0 removals
  - Installing cocur/slugify (v3.2): Downloading (100%)
Writing lock file
Generating autoload files

As you can see from the output, Composer automatically decided which version of the package to use. If you check your project’s directory now, it will contain two new files: composer.json and composer.lock, and a vendor directory:

  • ls -l
Output
total 12
-rw-r--r-- 1 sammy sammy   59 jul 15 13:53 composer.json
-rw-r--r-- 1 sammy sammy 2952 jul 15 13:53 composer.lock
drwxr-xr-x 4 sammy sammy 4096 jul 15 13:53 vendor

The composer.lock file is used to store information about which versions of each package are installed, and ensure the same versions are used if someone else clones your project and installs its dependencies. The vendor directory is where the project dependencies are located. The vendor folder doesn’t need to be committed into version control – you only need to include the composer.json and composer.lock files.

When installing a project that already contains a composer.json file, run composer install in order to download the project’s dependencies.

Let’s take a quick look at version constraints. If you check the contents of your composer.json file, you’ll see something like this:

  • cat composer.json
Output
{ "require": { "cocur/slugify": "^3.2" } }

You might notice the special character ^ before the version number in composer.json. Composer supports several different constraints and formats for defining the required package version, in order to provide flexibility while also keeping your project stable. The caret (^) operator used by the auto-generated composer.json file is the recommended operator for maximum interoperability, following semantic versioning. In this case, it defines 3.2 as the minimum compatible version, and allows updates to any future version below 4.0.

Generally speaking, you won’t need to tamper with version constraints in your composer.json file. However, some situations might require that you manually edit the constraints–for instance, when a major new version of your required library is released and you want to upgrade, or when the library you want to use doesn’t follow semantic versioning.

Here are some examples to give you a better understanding of how Composer version constraints work:

Constraint    Meaning             Example Versions Allowed
^1.0          >= 1.0 < 2.0        1.0, 1.2.3, 1.9.9
^1.1.0        >= 1.1.0 < 2.0      1.1.0, 1.5.6, 1.9.9
~1.0          >= 1.0 < 2.0.0      1.0, 1.4.1, 1.9.9
~1.0.0        >= 1.0.0 < 1.1      1.0.0, 1.0.4, 1.0.9
1.2.1         1.2.1               1.2.1
1.*           >= 1.0 < 2.0        1.0.0, 1.4.5, 1.9.9
1.2.*         >= 1.2 < 1.3        1.2.0, 1.2.3, 1.2.9

For a more in-depth view of Composer version constraints, see the official documentation.

Next, let’s look at how to load dependencies automatically with Composer.

Step 4 — Including the Autoload Script

Since PHP itself doesn’t automatically load classes, Composer provides an autoload script that you can include in your project to get autoloading for free. This makes it much easier to work with your dependencies.

The only thing you need to do is include the vendor/autoload.php file in your PHP scripts before any class instantiation. This file is automatically generated by Composer when you add your first dependency.

Let’s try it out in our application. Create the file test.php and open it in your text editor:

  • nano test.php

Add the following code which brings in the vendor/autoload.php file, loads the cocur/slugify dependency, and uses it to create a slug:

test.php
<?php
require __DIR__ . '/vendor/autoload.php';

use Cocur\Slugify\Slugify;

$slugify = new Slugify();

echo $slugify->slugify('Hello World, this is a long sentence and I need to make a slug from it!');

Save the file and exit your editor.

Now run the script:

  • php test.php

This produces the output hello-world-this-is-a-long-sentence-and-i-need-to-make-a-slug-from-it.

Dependencies need updates when new versions come out, so let’s look at how to handle that.

Step 5 — Updating Project Dependencies

Whenever you want to update your project dependencies to more recent versions, run the update command:

  • composer update

This will check for newer versions of the libraries you required in your project. If a newer version is found and it’s compatible with the version constraint defined in the composer.json file, Composer will replace the previous version installed. The composer.lock file will be updated to reflect these changes.

You can also update one or more specific libraries by specifying them like this:

  • composer update vendor/package vendor2/package2

Be sure to commit the changes to your composer.json and composer.lock files after you update your dependencies, so that whoever is working on the project has access to the same package versions.

Conclusion

Composer is a powerful tool every PHP developer should have in their utility belt. In this tutorial you installed Composer on Debian 10 and used it in a simple project. You now know how to install and update dependencies.

Beyond providing an easy and reliable way for managing project dependencies, it also establishes a new de facto standard for sharing and discovering PHP packages created by the community.
