Automating Initial Server Setup with Ansible on Ubuntu 18.04

Introduction

When you first create a new Ubuntu 18.04 server, there are a few configuration steps that you should take early on as part of the basic setup. These steps will increase the security and usability of your server and serve as a solid foundation for subsequent actions.

While you can complete these steps manually, automating the process will save you time and eliminate human error. With the popularization of containerized applications and microservices, server automation now plays an essential role in systems administration. It is also a way to establish standard procedures for new servers.

This guide explains how to use Ansible to automate the steps contained in our Initial Server Setup Guide. Ansible is a modern configuration management tool that can be used to automate the provisioning and configuration of remote systems.

Pre-Flight Check

In order to execute the automated setup provided by the playbook we’re discussing in this guide, you’ll need:

  • Ansible installed either on your local machine or on a remote server that you have set up as an Ansible Control Node. You can follow Step 1 of the tutorial How to Install and Configure Ansible on Ubuntu 18.04 to get this set up.
  • Root access to one or more Ubuntu 18.04 servers that will be managed by Ansible.

Before running a playbook, it’s important to make sure Ansible is able to connect to your servers via SSH and run Ansible modules using Python. The next two sections cover how to set up your Ansible inventory to include your servers and how to run ad-hoc Ansible commands to test for connectivity and valid credentials.

Inventory File

The inventory file contains information about the hosts you’ll manage with Ansible. You can include anywhere from one to several hundred servers in your inventory file, and hosts can be organized into groups and subgroups. The inventory file is also often used to set variables that will be valid only for certain hosts or groups, to be used within playbooks and templates. Some variables can also affect the way a playbook is run, like the ansible_python_interpreter variable that we’ll see in a moment.

To inspect the contents of your default Ansible inventory, open the /etc/ansible/hosts file using your command-line editor of choice, on your local machine or an Ansible Control Node:

  • sudo nano /etc/ansible/hosts

Note: Some Ansible installations won’t create a default inventory file. If the file doesn’t exist on your system, you can create a new file at /etc/ansible/hosts or provide a custom inventory path using the -i parameter when running commands and playbooks.
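If you do use a custom inventory location, the Ansible commands in this guide accept it the same way. As a quick illustration, here is a minimal sketch of a connectivity test pointed at a hypothetical inventory file at ~/ansible/hosts:

# Test connectivity using a custom inventory file instead of /etc/ansible/hosts
# (~/ansible/hosts is a hypothetical path; adjust it to your setup)
ansible all -i ~/ansible/hosts -m ping -u root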

The default inventory file provided by the Ansible installation contains a number of examples that you can use as references for setting up your inventory. The following example defines a group named servers with three different servers in it, each identified by a custom alias: server1, server2, and server3:

/etc/ansible/hosts
[servers]
server1 ansible_host=203.0.113.111
server2 ansible_host=203.0.113.112
server3 ansible_host=203.0.113.113

[servers:vars]
ansible_python_interpreter=/usr/bin/python3

The servers:vars subgroup sets the ansible_python_interpreter host parameter that will be valid for all hosts included in the servers group. This parameter makes sure the remote server uses the /usr/bin/python3 Python 3 executable instead of /usr/bin/python (Python 2.7), which is not present on recent Ubuntu versions.

To finish setting up your inventory file, replace the highlighted IPs with the IP addresses of your servers. When you’re finished, save and close the file by pressing CTRL+X then y to confirm changes and then ENTER.

Now that your inventory file is ready, it’s time to test connectivity to your nodes.

Testing Connectivity

After setting up the inventory file to include your servers, it’s time to check if Ansible is able to connect to these servers and run commands via SSH. For this guide, we will be using the Ubuntu root account because that’s typically the only account available by default on newly created servers. This playbook will create a new non-root user with sudo privileges that you should use in subsequent interactions with the remote server.

From your local machine or Ansible Control Node, run:

  • ansible -m ping all -u root

This command will use the built-in ping Ansible module to run a connectivity test on all nodes from your default inventory, connecting as root. The ping module will test:

  • if hosts are accessible;
  • if you have valid SSH credentials;
  • if hosts are able to run Ansible modules using Python.
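Ad-hoc commands like this accept any host pattern from your inventory, not just all. As a minimal sketch, assuming the servers group and the server1 alias defined in the example inventory, you could scope the test like this:

# Ping only the hosts in the [servers] group
ansible servers -m ping -u root

# Ping a single host by its inventory alias
ansible server1 -m ping -u root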

If instead of key-based authentication you’re using password-based authentication to connect to remote servers, you should provide the additional parameter -k to the Ansible command, so that it will prompt you for the password of the connecting user.

  • ansible -m ping all -u root -k

Note: Keep in mind that some servers might have additional security measures against password-based authentication as the root user, and in some cases you might be required to manually log in to the server to change the initial root password.

You should get output similar to this:

Output
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If this is the first time you’re connecting to these servers via SSH, you’ll be asked to confirm the authenticity of the hosts you’re connecting to via Ansible. When prompted, type yes and then hit Enter to confirm.

Once you get a “pong” reply back from a host, it means you’re ready to run Ansible commands and playbooks on that server.
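Beyond ping, ad-hoc commands can run arbitrary one-off commands on your hosts, which makes for a handy final check before moving on to playbooks. A minimal sketch using Ansible’s default command module via the -a flag:

# Run a one-off command on every host in the inventory, connecting as root
ansible all -a "uptime" -u root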

What Does this Playbook Do?

This Ansible playbook provides an alternative to manually running through the procedure outlined in the Ubuntu 18.04 initial server setup guide and the guide on setting up SSH keys on Ubuntu 18.04.

Running this playbook will cause the following actions to be performed:

  1. The administrative group wheel is created and then configured for passwordless sudo.
  2. A new administrative user is created within that group, using the name specified by the create_user variable.
  3. A public SSH key is copied from the location defined by the variable copy_local_key, and added to the authorized_keys file for the user created in the previous step.
  4. Password-based authentication is disabled for the root user.
  5. The local apt package index is updated and basic packages defined by the variable sys_packages are installed.
  6. The UFW firewall is configured to allow only SSH connections and deny any other requests.

For more information about each of the steps included in this playbook, please refer to our Ubuntu 18.04 initial server setup guide.

Once the playbook has finished running, you’ll be able to log in to the server using the newly created sudo account.
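If you want to verify the new account afterwards, here is a minimal sketch, assuming the default sammy user and a server at the example address 203.0.113.111:

# Log in with the newly created administrative account
ssh sammy@203.0.113.111

# Once logged in, confirm that passwordless sudo works; this should print "root"
sudo whoami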

How to Use this Playbook

To get started, we’ll download the contents of the playbook to your Ansible Control Node. This can be either your local machine, or a remote server where you have Ansible installed and your inventory set up.

For your convenience, the contents of the playbook are also included in a further section of this guide.

To download this playbook from the command-line, you can use curl:

  • curl -L https://raw.githubusercontent.com/do-community/ansible-playbooks/master/initial_server_setup/ubuntu1804.yml -o initial_server_setup.yml

This will download the contents of the playbook to a file named initial_server_setup.yml in your current working directory. You can examine the contents of the playbook by opening the file with your command-line editor of choice:

  • sudo nano initial_server_setup.yml

Once you’ve opened the playbook file, you should notice a section named vars with three distinct variables that require your attention:

  • create_user: The name of the non-root user account to create and grant sudo privileges to. Our example uses sammy, but you can use whichever username you’d like.
  • copy_local_key: Local path to a valid SSH public key to set up as an authorized key for the new non-root sudo account. The default value points to the current local user’s public key located at ~/.ssh/id_rsa.pub. If you don’t have a key pair yet, see the sketch after this list.
  • sys_packages: A list of basic system packages that will be installed using the package manager tool apt.
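If you don’t have a key pair at that default location yet, you can generate one on your Ansible Control Node before running the playbook. A minimal sketch using the OpenSSH defaults:

# Create a 4096-bit RSA key pair; accepting the defaults writes the public key
# to ~/.ssh/id_rsa.pub, the path expected by copy_local_key
ssh-keygen -t rsa -b 4096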

Once you’re done updating the variables inside initial_server_setup.yml, save and close the file.

You’re now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on all servers from your inventory, by default. We can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. To execute the playbook only on server1, you can use the following command:

  • ansible-playbook initial_server_setup.yml -l server1

You will get output similar to this:

Output
PLAY [all] ***************************************************************

TASK [Make sure we have a 'wheel' group] *********************************
changed: [server1]

TASK [Allow 'wheel' group to have passwordless sudo] *********************
changed: [server1]

TASK [Create a new regular user with sudo privileges] ********************
changed: [server1]

TASK [Set authorized key for remote user] ********************************
changed: [server1]

TASK [Disable password authentication for root] **************************
changed: [server1]

TASK [Update apt] ********************************************************
changed: [server1]

TASK [Install required system packages] **********************************
ok: [server1]

TASK [UFW - Allow SSH connections] ***************************************
changed: [server1]

TASK [UFW - Deny all other incoming traffic by default] ******************
changed: [server1]

PLAY RECAP ***************************************************************
server1                    : ok=9    changed=8    unreachable=0    failed=0

Once the playbook execution is finished, you’ll be able to log in to the server with:

  • ssh sammy@server_domain_or_IP

Remember to replace sammy with the user defined by the create_user variable, and server_domain_or_IP with your server’s hostname or IP address.

In case you have set a custom public key with the copy_local_key variable, you’ll need to provide an extra parameter specifying the location of its private key counterpart:

  • ssh sammy@server_domain_or_IP -i ~/.ssh/ansible_controller_key

After logging in to the server, you can check the UFW firewall’s active rules to confirm that it’s properly configured:

  • sudo ufw status

You should get output similar to this:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)

This means that the UFW firewall has successfully been enabled. Since this was the last task in the playbook, it confirms that the playbook was fully executed on this server.
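As an extra spot check, you can also confirm that password-based root logins were disabled by dumping the effective SSH daemon settings. A minimal sketch; after the playbook runs, this should print prohibit-password:

# Print the active sshd configuration and filter for the root login policy
sudo sshd -T | grep -i permitrootlogin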

The Playbook Contents

You can find the initial server setup playbook in the ansible-playbooks repository in the DigitalOcean Community GitHub organization. To copy or download the script contents directly, click the Raw button towards the top of the script, or click here to view the raw contents directly.

The full contents are also included here for convenience:

initial_server_setup.yml
---
- hosts: all
  remote_user: root
  gather_facts: false
  vars:
    create_user: sammy
    copy_local_key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
    sys_packages: [ 'curl', 'vim', 'git', 'ufw' ]

  tasks:
    - name: Make sure we have a 'wheel' group
      group:
        name: wheel
        state: present

    - name: Allow 'wheel' group to have passwordless sudo
      lineinfile:
        path: /etc/sudoers
        state: present
        regexp: '^%wheel'
        line: '%wheel ALL=(ALL) NOPASSWD: ALL'
        validate: '/usr/sbin/visudo -cf %s'

    - name: Create a new regular user with sudo privileges
      user:
        name: "{{ create_user }}"
        groups: wheel
        shell: /bin/bash

    - name: Set authorized key for remote user
      authorized_key:
        user: "{{ create_user }}"
        state: present
        key: "{{ copy_local_key }}"

    - name: Disable password authentication for root
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin prohibit-password'

    - name: Update apt
      apt: update_cache=yes

    - name: Install required system packages
      apt: name={{ sys_packages }} state=latest

    - name: UFW - Allow SSH connections
      ufw:
        rule: allow
        name: OpenSSH

    - name: UFW - Deny all other incoming traffic by default
      ufw:
        state: enabled
        policy: deny
        direction: incoming

Feel free to modify this playbook or include new tasks to best suit your individual needs within your own workflow.

Conclusion

Automating the initial server setup can save you time, while also making sure your servers will follow a standard configuration that can be improved and customized to your needs. With the distributed nature of modern applications and the need for more consistency between different staging environments, automation like this becomes a necessity.

In this guide, we demonstrated how to use Ansible for automating the initial tasks that should be executed on a fresh server, such as creating a non-root user with sudo access, enabling UFW and disabling remote root login.

If you’d like to include new tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.


How to Install and Secure phpMyAdmin with Nginx on an Ubuntu 18.04 server

Introduction

While many users need the functionality of a database system like MySQL, interacting with the system solely from the MySQL command-line client requires familiarity with the SQL language, so it may not be the preferred interface for some.

phpMyAdmin was created so that users can interact with MySQL through an intuitive web interface, running alongside a PHP development environment. In this guide, we’ll discuss how to install phpMyAdmin on top of an Nginx server, and how to configure the server for increased security.

Note: There are important security considerations when using software like phpMyAdmin, since it runs on the database server, it deals with database credentials, and it enables a user to easily execute arbitrary SQL queries on your database. Because phpMyAdmin is a widely-deployed PHP application, it is frequently targeted for attack. We will go over some security measures you can take in this tutorial so that you can make informed decisions.

Prerequisites

Before you get started with this guide, you’ll need the following available to you:

Because phpMyAdmin handles authentication using MySQL credentials, it is strongly advisable to install an SSL/TLS certificate to enable encrypted traffic between server and client. If you do not have an existing domain configured with a valid certificate, you can follow this guide on securing Nginx with Let’s Encrypt on Ubuntu 18.04.

Warning: If you don’t have an SSL/TLS certificate installed on the server and you still want to proceed, please consider enforcing access via SSH Tunnels as explained in Step 5 of this guide.

Once you have met these prerequisites, you can go ahead with the rest of the guide.

Step 1 — Installing phpMyAdmin

The first thing we need to do is install phpMyAdmin on the LEMP server. We’re going to use the default Ubuntu repositories to achieve this goal.

Let’s start by updating the server’s package index with:

  • sudo apt update

Now you can install phpMyAdmin with:

  • sudo apt install phpmyadmin

During the installation process, you will be prompted to choose the web server (either Apache or Lighttpd) to configure. Because we are using Nginx as the web server, we shouldn’t make a choice here. Press TAB and then OK to advance to the next step.

Next, you’ll be prompted whether to use dbconfig-common for configuring the application database. Select Yes. This will set up the internal database and administrative user for phpMyAdmin. You will be asked to define a new password for the phpmyadmin MySQL user. You can also leave it blank and let phpMyAdmin randomly create a password.

The installation will now finish. For the Nginx web server to find and serve the phpMyAdmin files correctly, we’ll need to create a symbolic link from the installation files to Nginx’s document root directory:

  • sudo ln -s /usr/share/phpmyadmin /var/www/html

Your phpMyAdmin installation is now operational. To access the interface, go to your server’s domain name or public IP address followed by /phpmyadmin in your web browser:

https://server_domain_or_IP/phpmyadmin 

phpMyAdmin login screen

As mentioned before, phpMyAdmin handles authentication using MySQL credentials, which means you should use the same username and password you would normally use to connect to the database via console or via an API. If you need help creating MySQL users, check this guide on How To Manage an SQL Database.
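For example, a dedicated non-root login for phpMyAdmin can be created from the command line. The following is a minimal sketch with placeholder credentials and a hypothetical example_db database, assuming your system root user can access MySQL via sudo:

# Create a regular MySQL user for phpMyAdmin logins (replace the placeholders)
sudo mysql -e "CREATE USER 'sammy'@'localhost' IDENTIFIED BY 'password';"

# Grant that user privileges on a specific database only (example_db is hypothetical)
sudo mysql -e "GRANT ALL PRIVILEGES ON example_db.* TO 'sammy'@'localhost';"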

Note: Logging into phpMyAdmin as the root MySQL user is discouraged because it represents a significant security risk. We’ll see how to disable root login in a subsequent step of this guide.

Your phpMyAdmin installation should be completely functional at this point. However, by installing a web interface, we’ve exposed our MySQL database server to the outside world. Because of phpMyAdmin’s popularity, and the large amounts of data it may provide access to, installations like these are common targets for attacks. In the following sections of this guide, we’ll see a few different ways in which we can make our phpMyAdmin installation more secure.

Step 2 — Changing phpMyAdmin’s Default Location

One of the most basic ways to protect your phpMyAdmin installation is by making it harder to find. Bots will scan for common paths, like phpmyadmin, pma, admin, mysql and such. Changing the interface’s URL from /phpmyadmin to something non-standard will make it much harder for automated scripts to find your phpMyAdmin installation and attempt brute-force attacks.

With our phpMyAdmin installation, we’ve created a symbolic link pointing to /usr/share/phpmyadmin, where the actual application files are located. To change phpMyAdmin’s interface URL, we will rename this symbolic link.

First, let’s navigate to the Nginx document root directory and list the files it contains to get a better sense of the change we’ll make:

  • cd /var/www/html/
  • ls -l

You’ll receive the following output:

Output
total 8
-rw-r--r-- 1 root root 612 Apr  8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root  21 Apr  8 15:36 phpmyadmin -> /usr/share/phpmyadmin

The output shows that we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we’d like. This will in turn change phpMyAdmin’s access URL, which can help obscure the endpoint from bots hardcoded to search common endpoint names.

Choose a name that obscures the purpose of the endpoint. In this guide, we’ll name our endpoint /nothingtosee, but you should choose an alternate name. To accomplish this, we’ll rename the link:

  • sudo mv phpmyadmin nothingtosee
  • ls -l

After running the above commands, you’ll receive this output:

Output
total 8
-rw-r--r-- 1 root root 612 Apr  8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root  21 Apr  8 15:36 nothingtosee -> /usr/share/phpmyadmin

Now, if you go to the old URL, you’ll get a 404 error:

https://server_domain_or_IP/phpmyadmin 

phpMyAdmin 404 error

Your phpMyAdmin interface will now be available at the new URL we just configured:

https://server_domain_or_IP/nothingtosee 

phpMyAdmin login screen

By obfuscating phpMyAdmin’s real location on the server, you’re securing its interface against automated scans and manual brute-force attempts.

Step 3 — Disabling Root Login

On MySQL as well as within regular Linux systems, the root account is a special administrative account with unrestricted access to the system. In addition to being a privileged account, it’s a known login name, which makes it an obvious target for brute-force attacks. To minimize risks, we’ll configure phpMyAdmin to deny any login attempts coming from the user root. This way, even if you provide valid credentials for the user root, you’ll still get an “access denied” error and won’t be allowed to log in.

Because we chose to use dbconfig-common to configure and store phpMyAdmin settings, the default configuration is currently stored in the database. We’ll need to create a new config.inc.php file to define our custom settings.

Even though the PHP files for phpMyAdmin are located inside /usr/share/phpmyadmin, the application uses configuration files located at /etc/phpmyadmin. We will create a new custom settings file inside /etc/phpmyadmin/conf.d, and name it pma_secure.php:

  • sudo nano /etc/phpmyadmin/conf.d/pma_secure.php

The following configuration file contains the necessary settings to disable passwordless logins (AllowNoPassword set to false) and root login (AllowRoot set to false):

/etc/phpmyadmin/conf.d/pma_secure.php
<?php

# PhpMyAdmin Settings
# This should be set to a random string of at least 32 chars
$cfg['blowfish_secret'] = '3!#32@3sa(+=_4?),5XP_:U%%8sdfSdg43yH#{o';

$i=0;
$i++;

$cfg['Servers'][$i]['auth_type'] = 'cookie';
$cfg['Servers'][$i]['AllowNoPassword'] = false;
$cfg['Servers'][$i]['AllowRoot'] = false;

?>

Save the file when you’re done editing by pressing CTRL + X then y to confirm changes and ENTER. The changes will apply automatically. If you reload the login page now and try to log in as root, you will get an Access Denied error:

access denied

Root login is now prohibited on your phpMyAdmin installation. This security measure will block brute-force scripts from trying to guess the root database password on your server. Moreover, it will enforce the usage of less-privileged MySQL accounts for accessing phpMyAdmin’s web interface, which by itself is an important security practice.

Step 4 — Creating an Authentication Gateway

Hiding your phpMyAdmin installation on an unusual location might sidestep some automated bots scanning the network, but it’s useless against targeted attacks. To better protect a web application with restricted access, it’s generally more effective to stop attackers before they can even reach the application. This way, they’ll be unable to use generic exploits and brute-force attacks to guess access credentials.

In the specific case of phpMyAdmin, it’s even more important to keep the login interface locked away. By keeping it open to the world, you’re offering a brute-force platform for attackers to guess your database credentials.

Adding an extra layer of authentication to your phpMyAdmin installation enables you to increase security. Users will be required to pass through an HTTP authentication prompt before ever seeing the phpMyAdmin login screen. Most web servers, including Nginx, provide this capability natively.

To set this up, we first need to create a password file to store the authentication credentials. Nginx requires that passwords be encrypted using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.

To create an encrypted password, type:

  • openssl passwd

You will be prompted to enter and confirm the password that you wish to use. The utility will then display an encrypted version of the password that will look something like this:

Output
O5az.RSPzd.HE

Copy this value, as you will need to paste it into the authentication file we’ll be creating.

Now, create an authentication file. We’ll call this file pma_pass and place it in the Nginx configuration directory:

  • sudo nano /etc/nginx/pma_pass

In this file, you’ll specify the username you would like to use, followed by a colon (:), followed by the encrypted version of the password you received from the openssl passwd utility.

We are going to name our user sammy, but you should choose a different username. The file should look like this:

/etc/nginx/pma_pass
sammy:O5az.RSPzd.HE 

Save and close the file when you’re done.
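If you need additional logins later, each one is simply another username:password_hash line in the same file. A minimal sketch appending a hypothetical second user named bob without opening an editor:

# Append another user to the password file; openssl will prompt for the new password
echo "bob:$(openssl passwd)" | sudo tee -a /etc/nginx/pma_pass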

Now we’re ready to modify the Nginx configuration file. For this guide, we’ll use the configuration file located at /etc/nginx/sites-available/example.com. You should use the relevant Nginx configuration file for the web location where phpMyAdmin is currently hosted. Open this file in your text editor to get started:

  • sudo nano /etc/nginx/sites-available/example.com

Locate the server block, and the location / section within it. We need to create a new location section within this block to match phpMyAdmin’s current path on the server. In this guide, phpMyAdmin’s location relative to the web root is /nothingtosee:

/etc/nginx/sites-available/example.com
server {
    . . .

        location / {
                try_files $uri $uri/ =404;
        }

        location /nothingtosee {
                # Settings for phpMyAdmin will go here
        }

    . . .
}

Within this block, we’ll need to set up two different directives: auth_basic, which defines the message that will be displayed on the authentication prompt, and auth_basic_user_file, pointing to the file we just created. This is how your configuration file should look when you’re finished:

/etc/nginx/sites-available/example.com
server {
    . . .

        location /nothingtosee {
                auth_basic "Admin Login";
                auth_basic_user_file /etc/nginx/pma_pass;
        }

    . . .
}

Save and close the file when you’re done. To check if the configuration file is valid, you can run:

  • sudo nginx -t

The following output is expected:

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

To activate the new authentication gate, you must reload the web server:

  • sudo systemctl reload nginx

Now, if you visit the phpMyAdmin URL in your web browser, you should be prompted for the username and password you added to the pma_pass file:

https://server_domain_or_IP/nothingtosee 

Nginx authentication page

Once you enter your credentials, you’ll be taken to the standard phpMyAdmin login page.

Note: If refreshing the page does not work, you may have to clear your cache or use a different browser session if you’ve already been using phpMyAdmin.

In addition to providing an extra layer of security, this gateway will help keep your MySQL logs clean of spammy authentication attempts.

Step 5 — Setting Up Access via Encrypted Tunnels (Optional)

For increased security, it is possible to lock down your phpMyAdmin installation to authorized hosts only. You can whitelist authorized hosts in your Nginx configuration file, so that any request coming from an IP address that is not on the list will be denied.

Even though this feature alone can be enough in some use cases, it’s not always the best long-term solution, mainly due to the fact that most people don’t access the Internet from static IP addresses. As soon as you get a new IP address from your Internet provider, you’ll be unable to get to the phpMyAdmin interface until you update the Nginx configuration file with your new IP address.

For a more robust long-term solution, you can use IP-based access control to create a setup in which users will only have access to your phpMyAdmin interface if they’re accessing from either an authorized IP address or localhost via SSH tunneling. We’ll see how to set this up in the sections below.

Combining IP-based access control with SSH tunneling greatly increases security because it fully blocks access coming from the public internet (except for authorized IPs), in addition to providing a secure channel between user and server through the use of encrypted tunnels.

Setting Up IP-Based Access Control on Nginx

On Nginx, IP-based access control can be defined in the corresponding location block of a given site, using the directives allow and deny. For instance, if we want to only allow requests coming from a given host, we should include the following two lines, in this order, inside the relevant location block for the site we would like to protect:

allow hostname_or_IP;
deny all;

You can allow as many hosts as you want; you only need to include one allow line for each authorized host or IP address inside the respective location block for the site you’re protecting. The directives will be evaluated in the same order as they are listed, until a match is found or the request is finally denied due to the deny all directive.

We’ll now configure Nginx to only allow requests coming from localhost or your current IP address. First, you’ll need to know the current public IP address your local machine is using to connect to the Internet. There are various ways to obtain this information; for simplicity, we’re going to use the service provided by ipinfo.io. You can either open the URL https://ipinfo.io/ip in your browser, or run the following command from your local machine:

  • curl https://ipinfo.io/ip

You should get a simple IP address as output, like this:

Output
203.0.113.111

That is your current public IP address. We’ll configure phpMyAdmin’s location block to only allow requests coming from that IP, in addition to localhost. We’ll need to edit once again the configuration block for phpMyAdmin inside /etc/nginx/sites-available/example.com.

Open the Nginx configuration file using your command-line editor of choice:

  • sudo nano /etc/nginx/sites-available/example.com

Because we already have an access rule within our current configuration, we need to combine it with IP-based access control using the directive satisfy all. This way, we can keep the current HTTP authentication prompt for increased security.

This is how your phpMyAdmin Nginx configuration should look after you’re done editing:

/etc/nginx/sites-available/example.com
server {
    . . .

    location /nothingtosee {
        satisfy all; # requires both conditions

        allow 203.0.113.111; # allow your IP
        allow 127.0.0.1; # allow localhost via SSH tunnels
        deny all; # deny all other sources

        auth_basic "Admin Login";
        auth_basic_user_file /etc/nginx/pma_pass;
    }

    . . .
}

Remember to replace nothingtosee with the actual path where phpMyAdmin can be found, and the highlighted IP address with your current public IP address.

Save and close the file when you’re done. To check if the configuration file is valid, you can run:

  • sudo nginx -t

The following output is expected:

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now reload the web server so the changes take effect:

  • sudo systemctl reload nginx

Because your IP address is explicitly listed as an authorized host, your access shouldn’t be disturbed. Anyone else trying to access your phpMyAdmin installation will now get a 403 error (Forbidden):

https://server_domain_or_IP/nothingtosee 

403 error

In the next section, we’ll see how to use SSH tunneling to access the web server through local requests. This way, you’ll still be able to access phpMyAdmin’s interface even when your IP address changes.

Accessing phpMyAdmin Through an Encrypted Tunnel

SSH tunneling works as a way of redirecting network traffic through encrypted channels. By running an ssh command similar to what you would use to log into a server, you can create a secure “tunnel” between your local machine and that server. All traffic coming in on a given local port can now be redirected through the encrypted tunnel and use the remote server as a proxy, before reaching out to the internet. It’s similar to what happens when you use a VPN (Virtual Private Network), however SSH tunneling is much simpler to set up.

We’ll use SSH tunneling to proxy our requests to the remote web server running phpMyAdmin. By creating a tunnel between your local machine and the server where phpMyAdmin is installed, you can redirect local requests to the remote web server, and what’s more important, traffic will be encrypted and requests will reach Nginx as if they’re coming from localhost. This way, no matter what IP address you’re connecting from, you’ll be able to securely access phpMyAdmin’s interface.

Because the traffic between your local machine and the remote web server will be encrypted, this is a safe alternative for situations where you can’t have an SSL/TLS certificate installed on the web server running phpMyAdmin.

From your local machine, run this command whenever you need access to phpMyAdmin:

  • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N

Let’s examine each part of the command:

  • user: the SSH user to connect to the server where phpMyAdmin is running
  • server_domain_or_IP: the SSH host where phpMyAdmin is running
  • -L 8000:localhost:80: redirects HTTP traffic on port 8000
  • -L 8443:localhost:443: redirects HTTPS traffic on port 8443
  • -N: do not execute remote commands

Note: This command will block the terminal until interrupted with a CTRL+C, in which case it will end the SSH connection and stop the packet redirection. If you’d prefer to run this command in background mode, you can use the SSH option -f.
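For example, the same tunnel can be opened in the background by adding -f, and closed later by terminating the matching ssh process. A minimal sketch:

# Open the tunnel in the background (-f) without running remote commands (-N)
ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N -f

# When you're done, find and end the background tunnel
pkill -f "8000:localhost:80"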

Now, go to your browser and replace server_domain_or_IP with localhost:PORT, where PORT is either 8000 for HTTP or 8443 for HTTPS:

http://localhost:8000/nothingtosee 
https://localhost:8443/nothingtosee 

phpMyAdmin login screen

Note: If you’re accessing phpMyAdmin via https, you might get an alert message questioning the security of the SSL certificate. This happens because the domain name you’re using (localhost) doesn’t match the address registered within the certificate (domain where phpMyAdmin is actually being served). It is safe to proceed.

All requests on localhost:8000 (HTTP) and localhost:8443 (HTTPS) are now being redirected through a secure tunnel to your remote phpMyAdmin application. Not only have you increased security by disabling public access to your phpMyAdmin, you also protected all traffic between your local computer and the remote server by using an encrypted tunnel to send and receive data.

If you’d like to enforce the usage of SSH tunneling for anyone who wants access to your phpMyAdmin interface (including you), you can do that by removing any other authorized IPs from the Nginx configuration file, leaving 127.0.0.1 as the only allowed host to access that location. Considering nobody will be able to make direct requests to phpMyAdmin, it is safe to remove HTTP authentication in order to simplify your setup. This is how your configuration file would look in such a scenario:

/etc/nginx/sites-available/example.com
server {
    . . .

    location /nothingtosee {
        allow 127.0.0.1; # allow localhost only
        deny all; # deny all other sources
    }

    . . .
}

Once you reload Nginx’s configuration with sudo systemctl reload nginx, your phpMyAdmin installation will be locked down and users will be required to use SSH tunnels in order to access phpMyAdmin’s interface via redirected requests.

Conclusion

In this tutorial, we saw how to install phpMyAdmin on Ubuntu 18.04 running Nginx as the web server. We also covered advanced methods to secure a phpMyAdmin installation on Ubuntu, such as disabling root login, creating an extra layer of authentication, and using SSH tunneling to access a phpMyAdmin installation via local requests only.

After completing this tutorial, you should be able to manage your MySQL databases from a reasonably secure web interface. This user interface exposes most of the functionality available via the MySQL command line. You can browse databases and schema, execute queries, and create new data sets and structures.


How To Build and Deploy a GraphQL Server with Node.js and MongoDB on Ubuntu 18.04

The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

Introduction

GraphQL was publicly released by Facebook in 2015 as a query language for APIs that makes it easy to query and mutate data from different data collections. From a single endpoint, you can query and mutate multiple data sources with a single POST request. GraphQL solves some of the common design flaws in REST API architectures, such as situations where the endpoint returns more information than you actually need. Also, when using REST APIs, you may need to send requests to multiple REST endpoints to collect all the information you require, a situation known as the n+1 problem. An example of this would be when you want to show a user's information, but need to collect data such as personal details and addresses from different endpoints.

These problems don’t apply to GraphQL as it has only one endpoint, which can return data from multiple collections. The data it returns depends on the query that you send to this endpoint. In this query you define the structure of the data you want to receive, including any nested data collections. In addition to a query, you can also use a mutation to change data on a GraphQL server, and a subscription to watch for changes in the data. For more information about GraphQL and its concepts, you can visit the documentation on the official website.

As GraphQL is a query language with a lot of flexibility, it combines especially well with document-based databases like MongoDB. Both technologies are based on hierarchical, typed schemas and are popular within the JavaScript community. Also, MongoDB’s data is stored as JSON objects, so no additional parsing is necessary on the GraphQL server.

In this tutorial, you’ll build and deploy a GraphQL server with Node.js that can query and mutate data from a MongoDB database that is running on Ubuntu 18.04. At the end of this tutorial, you’ll be able to access data in your database by using a single endpoint, both by sending requests to the server directly through the terminal and by using the pre-made GraphiQL playground interface. With this playground you can explore the contents of the GraphQL server by sending queries, mutations, and subscriptions. Also, you can find visual representations of the schemas that are defined for this server.

At the end of this tutorial, you’ll use the GraphiQL playground to quickly interface with your GraphQL server:

The GraphiQL playground in action

Prerequisites

Before you begin this guide you’ll need the following:

Step 1 — Setting Up the MongoDB Database

Before creating the GraphQL server, make sure your database is configured correctly, has authentication enabled, and is filled with sample data. For this you need to connect to the Ubuntu 18.04 server running the MongoDB database from your command prompt. All steps in this tutorial will take place on this server.

After you’ve established the connection, run the following command to check if MongoDB is active and running on your server:

  • sudo systemctl status mongodb

You’ll see the following output in your terminal, indicating the MongoDB database is actively running:

Output
● mongodb.service - An object/document-oriented database
   Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-02-23 12:23:03 UTC; 1 months 13 days ago
     Docs: man:mongod(1)
 Main PID: 2388 (mongod)
    Tasks: 25 (limit: 1152)
   CGroup: /system.slice/mongodb.service
           └─2388 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

Before creating the database where you’ll store the sample data, you need to create an admin user first, since regular users are scoped to a specific database. You can do this by executing the following command that opens the MongoDB shell:

  • mongo

With the MongoDB shell you’ll get direct access to the MongoDB database and can create users or databases and query data. Inside this shell, execute the following command that will add a new admin user to MongoDB. You can replace the highlighted keywords with your own username and password combination, but don’t forget to write them down somewhere.

  • use admin
  • db.createUser({
  • user: "admin_username",
  • pwd: "admin_password",
  • roles: [{ role: "root", db: "admin"}]
  • })

The first line of the preceding command selects the database called admin, which is the database where all the admin roles are stored. With the method db.createUser() you can create the actual user and define its username, password, and roles.

Executing this command will return:

Output
Successfully added user: {
        "user" : "admin_username",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}

You can now close the MongoDB shell by typing exit.

Next, log in to the MongoDB shell again, but this time with the newly created admin user:

  • mongo -u "admin_username" -p "admin_password" --authenticationDatabase "admin"

This command will open the MongoDB shell as a specific user, where the -u flag specifies the username and the -p flag the password of that user. The extra --authenticationDatabase flag specifies the database that holds the user’s credentials, which for the admin user is admin.

Next, you’ll switch to a new database and then use the db.createUser() method to create a new user with permissions to make changes to this database. Replace the highlighted sections with your own information, making sure to write these credentials down.

Run the following command in the MongoDB shell:

  • use database_name
  • db.createUser({
  • user: "username",
  • pwd: "password",
  • roles: ["readWrite"]
  • })

This will return the following:

Output
Successfully added user: {
        "user" : "username",
        "roles" : [
                "readWrite"
        ]
}

After creating the database and user, fill this database with sample data that can be queried by the GraphQL server later on in this tutorial. For this, you can use the bios collection sample from the MongoDB website. By executing the commands in the following code snippet you’ll insert a smaller version of this bios collection dataset into your database. You can replace the highlighted sections with your own information, but for the purposes of this tutorial, name the collection bios:

  • db.bios.insertMany([
  • {
  • "_id" : 1,
  • "name" : {
  • "first" : "John",
  • "last" : "Backus"
  • },
  • "birth" : ISODate("1924-12-03T05:00:00Z"),
  • "death" : ISODate("2007-03-17T04:00:00Z"),
  • "contribs" : [
  • "Fortran",
  • "ALGOL",
  • "Backus-Naur Form",
  • "FP"
  • ],
  • "awards" : [
  • {
  • "award" : "W.W. McDowell Award",
  • "year" : 1967,
  • "by" : "IEEE Computer Society"
  • },
  • {
  • "award" : "National Medal of Science",
  • "year" : 1975,
  • "by" : "National Science Foundation"
  • },
  • {
  • "award" : "Turing Award",
  • "year" : 1977,
  • "by" : "ACM"
  • },
  • {
  • "award" : "Draper Prize",
  • "year" : 1993,
  • "by" : "National Academy of Engineering"
  • }
  • ]
  • },
  • {
  • "_id" : ObjectId("51df07b094c6acd67e492f41"),
  • "name" : {
  • "first" : "John",
  • "last" : "McCarthy"
  • },
  • "birth" : ISODate("1927-09-04T04:00:00Z"),
  • "death" : ISODate("2011-12-24T05:00:00Z"),
  • "contribs" : [
  • "Lisp",
  • "Artificial Intelligence",
  • "ALGOL"
  • ],
  • "awards" : [
  • {
  • "award" : "Turing Award",
  • "year" : 1971,
  • "by" : "ACM"
  • },
  • {
  • "award" : "Kyoto Prize",
  • "year" : 1988,
  • "by" : "Inamori Foundation"
  • },
  • {
  • "award" : "National Medal of Science",
  • "year" : 1990,
  • "by" : "National Science Foundation"
  • }
  • ]
  • }
  • ]);

This code block is an array consisting of multiple objects that contain information about successful scientists from the past. After running these commands to enter this collection into your database, you’ll receive the following message indicating the data was added:

Output
{
        "acknowledged" : true,
        "insertedIds" : [
                1,
                ObjectId("51df07b094c6acd67e492f41")
        ]
}

After seeing the success message, you can close the MongoDB shell by typing exit. Next, configure the MongoDB installation to have authorization enabled so only authenticated users can access the data. To edit the configuration of the MongoDB installation, open the file containing the settings for this installation:

  • sudo nano /etc/mongodb.conf

Uncomment the highlighted line in the following code to enable authorization:

/etc/mongodb.conf
...
# Turn on/off security.  Off is currently the default
#noauth = true
auth = true
...

In order to make these changes active, restart MongoDB by running:

  • sudo systemctl restart mongodb

Make sure the database is running again by executing the command:

  • sudo systemctl status mongodb

This will yield output similar to the following:

Output
● mongodb.service - An object/document-oriented database
   Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-02-23 12:23:03 UTC; 1 months 13 days ago
     Docs: man:mongod(1)
 Main PID: 2388 (mongod)
    Tasks: 25 (limit: 1152)
   CGroup: /system.slice/mongodb.service
           └─2388 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

To make sure that your user can connect to the database you just created, try opening the MongoDB shell as an authenticated user with the command:

  • mongo -u "username" -p "password" --authenticationDatabase "database_name"

This uses the same flags as before, only this time the --authenticationDatabase is set to the database you’ve created and filled with the sample data.
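To confirm that the sample data is readable with these credentials without staying in the shell, you can also run a one-off query. A minimal sketch using the shell’s --eval flag, with the same placeholder credentials and database name:

# Count the documents in the bios collection; this should print 2
mongo -u "username" -p "password" --authenticationDatabase "database_name" database_name --eval "db.bios.count()"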

Now you’ve successfully added an admin user and another user that has read/write access to the database with the sample data. Also, the database has authorization enabled meaning you need a username and password to access it. In the next step you’ll create the GraphQL server that will be connected to this database later in the tutorial.

Step 2 — Creating the GraphQL Server

With the database configured and filled with sample data, it’s time to create a GraphQL server that can query and mutate this data. For this you’ll use Express and express-graphql, which both run on Node.js. Express is a lightweight framework to quickly create Node.js HTTP servers, and express-graphql provides middleware to make it possible to quickly build GraphQL servers.

The first step is to make sure your machine is up to date:

  • sudo apt update

Next, install Node.js on your server by running the following commands. Together with Node.js you’ll also install npm, a package manager for JavaScript that runs on Node.js.

  • sudo apt install nodejs npm

After following the installation process, check if the Node.js version you’ve just installed is v8.10.0 or higher:

  • node -v

This will return the following:

Output
v8.10.0

To initialize a new JavaScript project, run the following commands on the server as a sudo user, and replace the highlighted keywords with a name for your project.

First move into the root directory of your server:

  • cd

Once there, create a new directory named after your project:

  • mkdir project_name

Move into this directory:

  • cd project_name

Finally, initialize a new npm package with the following command:

  • sudo npm init -y

After running npm init -y you’ll receive a success message that the following package.json file was created:

Output
Wrote to /home/username/project_name/package.json:

{
  "name": "project_name",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

Note: You can also execute npm init without the -y flag, after which you would answer multiple questions to set up the project name, author, etc. You can enter the details or just press enter to proceed.

Now that you’ve initialized the project, install the packages you need to set up the GraphQL server:

  • sudo npm install --save express express-graphql graphql

Create a new file called index.js and subsequently open this file by running:

  • sudo nano index.js

Next, add the following code block into the newly created file to set up the GraphQL server:

index.js
const express = require('express');
const graphqlHTTP = require('express-graphql');
const { buildSchema } = require('graphql');

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// Provide resolver functions for your schema fields
const resolvers = {
  hello: () => 'Hello world!'
};

const app = express();
app.use('/graphql', graphqlHTTP({
  schema,
  rootValue: resolvers
}));
app.listen(4000);

console.log(`🚀 Server ready at http://localhost:4000/graphql`);

This code block consists of several parts that are all important. First you describe the schema of the data that is returned by the GraphQL API:

index.js
...
// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);
...

The type Query defines what queries can be executed and in which format it will return the result. As you can see, the only query defined is hello that returns data in a String format.

The next section establishes the resolvers, where data is matched to the schemas that you can query:

index.js
...
// Provide resolver functions for your schema fields
const resolvers = {
  hello: () => 'Hello world!'
};
...

These resolvers are directly linked to schemas, and return the data that matches these schemas.

The final part of this code block initializes the GraphQL server, creates the API endpoint with Express, and describes the port on which the GraphQL endpoint is running:

index.js
...
const app = express();
app.use('/graphql', graphqlHTTP({
  schema,
  rootValue: resolvers
}));
app.listen(4000);

console.log(`🚀 Server ready at http://localhost:4000/graphql`);

After you have added these lines, save and exit from index.js.

Next, to actually run the GraphQL server you need to run the file index.js with Node.js. This can be done manually from the command line, but it’s common practice to set up the package.json file to do this for you.

Open the package.json file:

  • sudo nano package.json

Add the following highlighted line to this file:

package.json
{
  "name": "project_name",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

Save and exit the file.

To start the GraphQL server, execute the following command in the terminal:

  • npm start

Once you run this, the terminal prompt will disappear, and a message will appear to confirm the GraphQL server is running:

Output
🚀 Server ready at http://localhost:4000/graphql

If you now open up another terminal session, you can test if the GraphQL server is running by executing the following command. This sends a curl POST request with a JSON body after the --data flag that contains your GraphQL query to the local endpoint:

  • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ hello }" }' http://localhost:4000/graphql

This will execute the query as it’s described in the GraphQL schema in your code and return data in a predictable JSON format that is equal to the data as it’s returned in the resolvers:

Output
{ "data": { "hello": "Hello world!" } }

Note: In case the Express server crashes or gets stuck, you need to manually kill the node process that is running on the server. To kill all such processes, you can execute the following:

  • killall node

After which, you can restart the GraphQL server by running:

  • npm start

In this step you’ve created the first version of the GraphQL server that is now running on a local endpoint that can be accessed on your server. Next, you’ll connect your resolvers to the MongoDB database.

Step 3 — Connecting to the MongoDB Database

With the GraphQL server in order, you can now set up the connection with the MongoDB database that you configured and filled with data before and create a new schema that matches this data.

To be able to connect to MongoDB from the GraphQL server, install the JavaScript package for MongoDB from npm:

  • sudo npm install --save mongodb

Once this has been installed, open up index.js in your text editor:

  • sudo nano index.js

Next, add the following highlighted code to index.js just after the imported dependencies and fill the highlighted values with your own connection details to the local MongoDB database. The username, password, and database_name are those that you created in the first step of this tutorial.

index.js
const express = require('express');
const graphqlHTTP = require('express-graphql');
const { buildSchema } = require('graphql');
const { MongoClient } = require('mongodb');

const context = () => MongoClient.connect('mongodb://username:password@localhost:27017/database_name', { useNewUrlParser: true })
  .then(client => client.db('database_name'));
...

These lines add the connection to the local MongoDB database to a function called context. This context function will be available to every resolver, which is why you use this to set up database connections.

Next, in your index.js file, add the context function to the initialization of the GraphQL server by inserting the following highlighted lines:

index.js
... const app = express(); app.use('/graphql', graphqlHTTP({   schema,   rootValue: resolvers,   context })); app.listen(4000);  console.log(`🚀 Server ready at http://localhost:4000/graphql`); 

Now you can call this context function from your resolvers, and thereby read variables from the MongoDB database. If you look back to the first step of this tutorial, you can see which values are present in the database. From here, define a new GraphQL schema that matches this data structure. Overwrite the previous value for the constant schema with the following highlighted lines:

index.js
...
// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    bios: [Bio]
  }
  type Bio {
    name: Name,
    title: String,
    birth: String,
    death: String,
    awards: [Award]
  }
  type Name {
    first: String,
    last: String
  }
  type Award {
    award: String,
    year: Float,
    by: String
  }
`);
...

The type Query has changed and now returns a collection of the new type Bio. This new type consists of several fields, including two other non-scalar types, Name and Award, meaning these types don’t match a predefined format like String or Float. For more information on defining GraphQL schemas you can look at the documentation for GraphQL.

Also, since the resolvers tie the data from the database to the schema, update the code for the resolvers when you make changes to the schema. Create a new resolver that is called bios, which is equal to the Query that can be found in the schema and the name of the collection in the database. Note that, in this case, the name of the collection in db.collection('bios') is bios, but that this would change if you had assigned a different name to your collection.

Add the following highlighted line to index.js:

index.js
...
// Provide resolver functions for your schema fields
const resolvers = {
  bios: (args, context) => context().then(db => db.collection('bios').find().toArray())
};
...

This function will use the context function, which you can use to retrieve variables from the MongoDB database. Once you have made these changes to the code, save and exit index.js.

In order to make these changes active, you need to restart the GraphQL server. You can stop the current process by using the keyboard combination CTRL + C and start the GraphQL server by running:

  • npm start

Now you’re able to use the updated schema and query the data that is inside the database. If you look at the schema, you’ll see that the Query for bios returns a list of the type Bio, which in turn includes the type Name.

To return all the first and last names for all the bios in the database, send the following request to the GraphQL server in a new terminal window:

  • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last } } }" }' http://localhost:4000/graphql

This again will return a JSON object that matches the structure of the schema:

Output
{"data":{"bios":[{"name":{"first":"John","last":"Backus"}},{"name":{"first":"John","last":"McCarthy"}}]}}

You can retrieve more fields from the bios by extending the query with any of the fields described in the Bio type.
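For example, the following request (the selected fields are just one possible combination) also asks for the title and the awards of each bio:

  • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last }, title, awards { award, year } } }" }' http://localhost:4000/graphql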

Also, you can retrieve a bio by specifying an id. In order to do this you need to add another type to the Query type and extend the resolvers. To do this, open index.js in your text editor:

  • sudo nano index.js

Add the following highlighted lines of code:

index.js
...
// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    bios: [Bio]
    bio(id: Int): Bio
  }

  ...
`);

// Provide resolver functions for your schema fields
const resolvers = {
  bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
  bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id }))
};
...

Save and exit the file.

In the terminal that is running your GraphQL server, press CTRL + C to stop it from running, then execute the following to restart it:

  • npm start

In another terminal window, execute the following GraphQL request:

  • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bio(id: 1) { name { first, last } } }" }' http://localhost:4000/graphql

This returns the entry for the bio that has an id equal to 1:

Output
{ "data": { "bio": { "name": { "first": "John", "last": "Backus" } } } }

Being able to query data from a database is not the only feature of GraphQL; you can also change the data in the database. To do this, open up index.js:

  • sudo nano index.js

Next to the type Query you can also use the type Mutation, which allows you to mutate the database. To use this type, add it to the schema and also create input types by inserting these highlighted lines:

index.js
...
// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    bios: [Bio]
    bio(id: Int): Bio
  }
  type Mutation {
    addBio(input: BioInput) : Bio
  }
  input BioInput {
    name: NameInput
    title: String
    birth: String
    death: String
  }
  input NameInput {
    first: String
    last: String
  }
...

These input types define which variables can be used as inputs, which you can access in the resolvers and use to insert a new document in the database. Do this by adding the following lines to index.js:

index.js
...
// Provide resolver functions for your schema fields
const resolvers = {
  bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
  bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id })),
  addBio: (args, context) => context().then(db => db.collection('bios').insertOne({ name: args.input.name, title: args.input.title, death: args.input.death, birth: args.input.birth })).then(response => response.ops[0])
};
...

Just as with the resolvers for regular queries, you need to return a value from the resolver in index.js. In the case of a Mutation where the type Bio is mutated, you would return the value of the mutated bio.

At this point, your index.js file will contain the following lines:

index.js
const express = require('express');
const graphqlHTTP = require('express-graphql');
const { buildSchema } = require('graphql');
const { MongoClient } = require('mongodb');

const context = () => MongoClient.connect('mongodb://username:password@localhost:27017/database_name', { useNewUrlParser: true })
  .then(client => client.db('database_name'));

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    bios: [Bio]
    bio(id: Int): Bio
  }
  type Mutation {
    addBio(input: BioInput) : Bio
  }
  input BioInput {
    name: NameInput
    title: String
    birth: String
    death: String
  }
  input NameInput {
    first: String
    last: String
  }
  type Bio {
    name: Name,
    title: String,
    birth: String,
    death: String,
    awards: [Award]
  }
  type Name {
    first: String,
    last: String
  },
  type Award {
    award: String,
    year: Float,
    by: String
  }
`);

// Provide resolver functions for your schema fields
const resolvers = {
  bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
  bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id })),
  addBio: (args, context) => context().then(db => db.collection('bios').insertOne({ name: args.input.name, title: args.input.title, death: args.input.death, birth: args.input.birth })).then(response => response.ops[0])
};

const app = express();
app.use('/graphql', graphqlHTTP({
  schema,
  rootValue: resolvers,
  context
}));
app.listen(4000);

console.log(`🚀 Server ready at http://localhost:4000/graphql`);

Save and exit index.js.

To check if your new mutation is working, restart the GraphQL server by pressing CTRL + C and running npm start in the terminal that is running your GraphQL server, then open another terminal session to execute the following curl request. Just as with the curl request for queries, the body in the --data flag will be sent to the GraphQL server. The highlighted parts will be added to the database:

  • curl -X POST -H "Content-Type: application/json" --data '{ "query": "mutation { addBio(input: { name: { first: \"test\", last: \"user\" } }) { name { first, last } } }" }' http://localhost:4000/graphql

This returns the following result, meaning you just inserted a new bio into the database:

Output
{ "data": { "addBio": { "name": { "first": "test", "last": "user" } } } }

In this step, you created the connection with MongoDB and the GraphQL server, allowing you to retrieve and mutate data from this database by executing GraphQL queries. Next, you’ll expose this GraphQL server for remote access.

Step 4 — Allowing Remote Access

Having set up the database and the GraphQL server, you can now configure the GraphQL server to allow remote access. For this you’ll use Nginx, which you set up in the prerequisite tutorial How to install Nginx on Ubuntu 18.04. This Nginx configuration can be found in the /etc/nginx/sites-available/example.com file, where example.com is the server name you added in the prerequisite tutorial.

Open this file for editing, replacing example.com with your domain name:

  • sudo nano /etc/nginx/sites-available/example.com

In this file you can find a server block that listens to port 80, where you’ve already set up a value for server_name in the prerequisite tutorial. Inside this server block, change the value for root to be the directory in which you created the code for the GraphQL server and add index.js as the index. Also, within the location block, set a proxy_pass so you can use your server’s IP or a custom domain name to refer to the GraphQL server:

/etc/nginx/sites-available/example.com
server {
  listen 80;
  listen [::]:80;

  root /project_name;
  index index.js;

  server_name example.com;

  location / {
    proxy_pass http://localhost:4000/graphql;
  }
}

Make sure there are no Nginx syntax errors in this configuration file by running:

  • sudo nginx -t

You will receive the following output:

Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

When there are no errors found for the configuration file, restart Nginx:

  • sudo systemctl restart nginx

Now you will be able to access your GraphQL server from any terminal session by executing the following command, replacing example.com with either your server’s IP address or your custom domain name:

  • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last } } }" }' http://example.com

This will return the same JSON object as in the previous step, including any additional data you might have added by using a mutation:

Output
{"data":{"bios":[{"name":{"first":"John","last":"Backus"}},{"name":{"first":"John","last":"McCarthy"}},{"name":{"first":"test","last":"user"}}]}}

Now that you have made your GraphQL server accessible remotely, make sure your GraphQL server doesn’t go down when you close the terminal or the server restarts. This way, your MongoDB database will be accessible via the GraphQL server whenever you want to make a request.

To do this, use the npm package forever, a CLI tool that ensures that your command line scripts run continuously, or get restarted in case of any failure.

Install forever with npm:

  • sudo npm install forever -g

Once the installation finishes, add a deploy script to the package.json file:

package.json
{   "name": "project_name",   "version": "1.0.0",   "description": "",   "main": "index.js",   "scripts": {     "start": "node index.js",     "deploy": "forever start --minUptime 2000 --spinSleepTime 5 index.js",     "test": "echo \"Error: no test specified\" && exit 1"   },   ... 

To start the GraphQL server with forever enabled, run the following command:

  • npm run deploy

This will start the index.js file containing the GraphQL server with forever, ensuring it keeps running with a minimum uptime of 2000 milliseconds and a wait of 5 milliseconds between restarts in case of a failure. The GraphQL server will now run continuously in the background, so you no longer need to keep a separate terminal tab open to serve requests.
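If you want to confirm that forever is managing the process, it ships with a list command that shows every script it is currently running:

  • forever list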

You’ve now created a GraphQL server that is using MongoDB to store data and is set up to allow access from a remote server. In the next step you’ll enable the GraphiQL playground, which will make it easier for you to inspect the GraphQL server.

Step 5 — Enabling GraphiQL Playground

Being able to send cURL requests to the GraphQL server is great, but it would be faster to have a user interface that can execute GraphQL requests immediately, especially during development. For this you can use GraphiQL, an interface supported by the package express-graphql.

To enable GraphiQL, edit the file index.js:

  • sudo nano index.js

Add the following highlighted lines:

index.js
const app = express();
app.use('/graphql', graphqlHTTP({
  schema,
  rootValue: resolvers,
  context,
  graphiql: true
}));
app.listen(4000);

console.log(`🚀 Server ready at http://localhost:4000/graphql`);

Save and exit the file.

In order for these changes to become visible, make sure to stop forever by executing:

  • forever stop index.js

Next, start forever again so the latest version of your GraphQL server is running:

  • npm run deploy

Open a browser at the URL http://example.com, replacing example.com with your domain name or your server IP. You will see the GraphiQL playground, where you can type GraphQL requests.

The initial screen for the GraphiQL playground

On the left side of this playground you can type the GraphQL queries and mutations, while the output will be shown on the right side of the playground. To test if this is working, type the following query on the left side:

query {
  bios {
    name {
      first
      last
    }
  }
}

This will output the same result on the right side of the playground, again in JSON format:

The GraphiQL playground in action

Now you can send GraphQL requests using the terminal and the GraphiQL playground.
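Mutations work in the playground as well. For example, typing the following on the left side (the name values here are just sample data) would insert another bio through the addBio mutation you defined earlier:

mutation {
  addBio(input: { name: { first: "Grace", last: "Hopper" } }) {
    name {
      first
      last
    }
  }
}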

Conclusion

In this tutorial you’ve set up a MongoDB database and retrieved and mutated data from this database using GraphQL, Node.js, and Express for the server. Additionally, you configured Nginx to allow remote access to this server. Not only can you send requests to this GraphQL server directly, you can also use GraphiQL as a visual, in-browser GraphQL interface.

If you want to learn about GraphQL, you can watch a recording of my presentation on GraphQL at NDC {London} or visit the website howtographql.com for tutorials about GraphQL. To study how GraphQL interacts with other technologies, check out the tutorial on How to Manually Set Up a Prisma Server on Ubuntu 18.04, and for more information on building applications with MongoDB, see How To Build a Blog with Nest.js, MongoDB, and Vue.js.

DigitalOcean Community Tutorials

Unpacking the Built-in Scripts of Tableau Server

By now, most of us have utilized the tableau-server-obliterate.cmd script that comes with version 2018.2 and beyond (if you haven’t, you are about to find out what that is). But what do the rest of those built-in scripts do!?

These are, by default, located under the C:\Program Files\Tableau\Tableau Server\packages\scripts.<version>\ folder. If you navigate via command line to that directory, you can run any one of the scripts. Let’s walk through them one by one:

Disable-coordination-service-authentication

Usage: disable-coordination-service-authentication.cmd -y -y -y

Use Case: When troubleshooting with Tableau Support

  • This tool is for recovery from error conditions and should not be used unless instructed by technical support.
  • It disables the internally used authentication scheme to access Tableau Server Coordination Service.
  • This script must be run as the Administrator. The server node on which this script is run must have the correct credentials to authenticate with Tableau Server Coordination Service in its current state. If unsure, you can try running this script from each one of your nodes until one of them succeeds.

Local-configuration

Usage: local-configuration.cmd get -k <configuration>

Use Case: To determine the setting of any configuration set for your Tableau Server. The list of options can be found here.

  • Example: local-configuration.cmd get -k features.DesktopReporting
  • This tool allows you to grab the value of any configuration set for Tableau Server using the corresponding key.

Move-tsm-controller

Usage: move-tsm-controller.cmd -n <node-id>

Use Case: If your initial node fails, you will need to run this command first from one of your additional nodes. After you have successfully moved your TSM controller to another node, you may utilize TSM commands to reconfigure your topology and add your licensing service. You will need to reactivate your license in an initial node failover scenario.

  • This tool is for moving the TSM Controller to the specified node.
  • The service will be stopped, if running, and unavailable during the move. It will be restarted on the specified node after the move is completed.
  • This script can be run from any node in the Tableau Server cluster.
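If you are unsure of the node IDs in your cluster, and TSM is still reachable, you can list them beforehand with the tsm CLI:

  • tsm topology list-nodes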

Refresh-environment-variables

Usage: refresh-environment-variables.cmd

Use Case: This is called from within the other scripts to grab the up-to-date environment variables, ensuring the scripts are using the correct paths and version number.

  • It updates the local copy of the TABLEAU_SERVER_* environment variables from the registry.
  • It queries the registry for each existing setting and applies the value if found; if a setting is not found, the corresponding environment variable is cleared. You can check the result as shown in the example after this list.
  • It looks for four entries:
    • TABLEAU_SERVER_CONFIG_NAME
    • TABLEAU_SERVER_DATA_DIR
    • TABLEAU_SERVER_DATA_DIR_VERSION
    • TABLEAU_SERVER_INSTALL_DIR
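After running the script, you can verify the refreshed values from the same command prompt session, for example:

  • echo %TABLEAU_SERVER_DATA_DIR%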

Start-administrative-services

Usage: start-administrative-services.cmd

Use Case: If you find TSM or any of the TSM services are unavailable, this will make sure all necessary services are running.

  • This tool is for starting all TSM administrative services.
  • The services started are:
    • Tableau Server License Manager
    • Tableau Server Service Manager (tabsvc)
    • Tableau Server Administration Controller
    • Tableau Server Administration Agent
    • Tableau Server Coordination Service (zookeeper)
    • Tableau Server Client File Service

Stop-administrative-services

Usage: stop-administrative-services.cmd

Use Case: If you need to restart all TSM services, you can utilize this in conjunction with start-administrative-services, as shown in the example after the list below.

  • This tool is for stopping all TSM administrative services.
  • The services stopped are:
    • Tableau Server License Manager
    • Tableau Server Service Manager (tabsvc)
    • Tableau Server Administration Controller
    • Tableau Server Administration Agent
    • Tableau Server Coordination Service (zookeeper)
    • Tableau Server Client File Service
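For example, a full restart of the TSM administrative services from the Scripts directory could look like this:

  • stop-administrative-services.cmd
  • start-administrative-services.cmd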

Tableau-server-obliterate

Usage: tableau-server-obliterate.cmd [-q] [-l] -y -y -y

Use Case: To completely remove ALL Tableau Server instances and files from the server; good for a clean slate.

  • This script will stop/kill any running Tableau Server processes and remove all Server-related files. No data or configuration is retained except files related to licensing. This is destructive to all data pertaining to Tableau Server and should only be used to clean the machine.
  • If you have a cluster environment, you must run this on every node in the cluster.
  • This script must be run as the Administrator.
  • This script will:
    • Deactivate License fulfillment key
    • Stop and Remove TSM services
    • Silently uninstall packages
    • Delete the data directory
    • Remove the Tableau Server Manager Certificates
    • Remove the environment variables from the Registry
  • You have the options:
    • -y = Yes, perform this action. Must be specified three times to confirm the action is desired.
    • -q = Quiet mode. Do not display progress UI when uninstalling Tableau Server packages.
    • -l = Also delete licensing files and data. This command will attempt to deactivate licenses before deleting licenses. If in doubt, please use tsm licenses deactivate before running this script.

Upgrade-tsm

Usage: upgrade-tsm.cmd

Use Case: Use this during the upgrade process from version 2018.2 onward; it points TSM to the newly installed version of Tableau Server.

  • Example: upgrade-tsm.cmd
    • Performs the upgrade using the backup file in the location given by the TABLEAU_SERVER_DATA_DIR environment variable for the restore
  • Example: upgrade-tsm.cmd --backup-path <filepath>
    • Performs the upgrade using the specified backup file for the restore
  • Parameters:
    • -u, --username <USER>
      • TSM administrator user name
    • -p, --password <PASSWORD>
      • TSM administrator password
    • -sp, --service-runas-password <PASSWORD>
      • Service runas user password
  • You will be prompted to run this script from the upgrade wizard when performing the upgrade.
  • Tableau Server must be stopped prior to running this command.
  • The script grabs the pg_version from the pgsql package to be installed.

The post Unpacking the Built-in Scripts of Tableau Server appeared first on InterWorks.

InterWorks

How To Install the Apache Web Server on CentOS 7

Introduction

The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software.

In this guide, you will install an Apache web server with virtual hosts on your CentOS 7 server.

Prerequisites

You will need the following to complete this guide:

  • One CentOS 7 server with a non-root user that has sudo privileges. You can set this up by following an initial server setup guide for CentOS 7.

Step 1 — Installing Apache

Apache is available within CentOS’s default software repositories, which means you can install it with the yum package manager.

As the non-root sudo user configured in the prerequisites, update the local Apache httpd package index to reflect the latest upstream changes:

  • sudo yum update httpd

Once the packages are updated, install the Apache package:

  • sudo yum install httpd

After confirming the installation, yum will install Apache and all required dependencies. Once the installation completes, you are ready to start the service.

Step 2 — Checking your Web Server

Apache does not automatically start on CentOS once the installation completes. You will need to start the Apache process manually:

  • sudo systemctl start httpd

Verify that the service is running with the following command:

  • sudo systemctl status httpd

You will see an active status when the service is running:

Output
Redirecting to /bin/systemctl status httpd.service
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-02-20 01:29:08 UTC; 5s ago
     Docs: man:httpd(8)
           man:apachectl(8)
 Main PID: 1290 (httpd)
   Status: "Processing requests..."
   CGroup: /system.slice/httpd.service
           ├─1290 /usr/sbin/httpd -DFOREGROUND
           ├─1291 /usr/sbin/httpd -DFOREGROUND
           ├─1292 /usr/sbin/httpd -DFOREGROUND
           ├─1293 /usr/sbin/httpd -DFOREGROUND
           ├─1294 /usr/sbin/httpd -DFOREGROUND
           └─1295 /usr/sbin/httpd -DFOREGROUND
...

As you can see from this output, the service appears to have started successfully. However, the best way to test this is to request a page from Apache.

You can access the default Apache landing page to confirm that the software is running properly through your IP address. If you do not know your server’s IP address, you can get it a few different ways from the command line.

Type this at your server’s command prompt:

  • hostname -I

This command will display all of the host’s network addresses, so you will get back a few IP addresses separated by spaces. You can try each in your web browser to see if they work.

Alternatively, you can use curl to request your IP from icanhazip.com, which will give you your public IPv4 address as seen from another location on the internet:

  • curl -4 icanhazip.com

When you have your server’s IP address, enter it into your browser’s address bar:

http://your_server_ip 

You’ll see the default CentOS 7 Apache web page:

Default Apache page for CentOS 7

This page indicates that Apache is working correctly. It also includes some basic information about important Apache files and directory locations. Now that the service is installed and running, you can use different systemctl commands to manage the service.

Step 3 — Managing the Apache Process

Now that you have your web server up and running, let’s go over some basic management commands.

To stop your web server, type:

  • sudo systemctl stop httpd

To start the web server when it is stopped, type:

  • sudo systemctl start httpd

To stop and then start the service again, type:

  • sudo systemctl restart httpd

If you are simply making configuration changes, Apache can often reload without dropping connections. To do this, use this command:

  • sudo systemctl reload httpd

By default, Apache is configured to start automatically when the server boots. If this is not what you want, disable this behavior by typing:

  • sudo systemctl disable httpd

To re-enable the service to start up at boot, type:

  • sudo systemctl enable httpd

Apache will now start automatically when the server boots again.

The default configuration for Apache will allow your server to host a single website. If you plan on hosting multiple domains on your server, you will need to configure virtual hosts on your Apache web server.

Step 4 — Setting Up Virtual Hosts

When using the Apache web server, you can use virtual hosts (similar to server blocks in Nginx) to encapsulate configuration details and host more than one domain from a single server. In this step, you will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.

Apache on CentOS 7 has one server block enabled by default that is configured to serve documents from the /var/www/html directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, you will create a directory structure within /var/www for the example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn’t match any other sites.

Create the html directory for example.com as follows, using the -p flag to create any necessary parent directories:

  • sudo mkdir -p /var/www/example.com/html

Create an additional directory to store log files for the site:

  • sudo mkdir -p /var/www/example.com/log

Next, assign ownership of the html directory using the $USER environment variable:

  • sudo chown -R $USER:$USER /var/www/example.com/html

Make sure that your web root has the default permissions set:

  • sudo chmod -R 755 /var/www

Next, create a sample index.html page using vi or your favorite editor:

  • sudo vi /var/www/example.com/html/index.html

Press i to switch to INSERT mode and add the following sample HTML to the file:

/var/www/example.com/html/index.html
<html>
  <head>
    <title>Welcome to Example.com!</title>
  </head>
  <body>
    <h1>Success! The example.com virtual host is working!</h1>
  </body>
</html>

Save and close the file by pressing ESC, typing :wq, and pressing ENTER.

With your site directory and sample index file in place, you are almost ready to create the virtual host files. Virtual host files specify the configuration of your separate sites and tell the Apache web server how to respond to various domain requests.

Before you create your virtual hosts, you will need to create a sites-available directory to store them in. You will also create the sites-enabled directory that tells Apache that a virtual host is ready to serve to visitors. The sites-enabled directory will hold symbolic links to virtual hosts that we want to publish. Create both directories with the following command:

  • sudo mkdir /etc/httpd/sites-available /etc/httpd/sites-enabled

Next, you will tell Apache to look for virtual hosts in the sites-enabled directory. To accomplish this, edit Apache’s main configuration file and add a line declaring an optional directory for additional configuration files:

  • sudo vi /etc/httpd/conf/httpd.conf

Add this line to the end of the file:

IncludeOptional sites-enabled/*.conf 

Save and close the file when you are done adding that line. Now that you have your virtual host directories in place, you will create your virtual host file.

Start by creating a new file in the sites-available directory:

  • sudo vi /etc/httpd/sites-available/example.com.conf

Add in the following configuration block, and change the example.com domain to your domain name:

/etc/httpd/sites-available/example.com.conf
<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example.com/html
    ErrorLog /var/www/example.com/log/error.log
    CustomLog /var/www/example.com/log/requests.log combined
</VirtualHost>

This will tell Apache where to find the root directory that holds the publicly accessible web documents. It also tells Apache where to store error and request logs for this particular site.

Save and close the file when you are finished.

Now that you have created the virtual host files, you will enable them so that Apache knows to serve them to visitors. To do this, create a symbolic link for each virtual host in the sites-enabled directory:

  • sudo ln -s /etc/httpd/sites-available/example.com.conf /etc/httpd/sites-enabled/example.com.conf

Your virtual host is now configured and ready to serve content. Before restarting the Apache service, let’s make sure that SELinux has the correct policies in place for your virtual hosts.

Step 5 — Adjusting SELinux Permissions for Virtual Hosts

SELinux is configured to work with the default Apache configuration. Since you set up a custom log directory in the virtual hosts configuration file, you will receive an error if you attempt to start the Apache service. To resolve this, you need to update the SELinux policies to allow Apache to write to the necessary files. SELinux brings heightened security to your CentOS 7 environment, so it is not recommended to completely disable the kernel module.

There are different ways to set policies based on your environment’s needs, as SELinux allows you to customize your security level. This step will cover two methods of adjusting Apache policies: universally and on a specific directory. Adjusting policies on directories is more secure, and is therefore the recommended approach.

Adjusting Apache Policies Universally

Setting the Apache policy universally will tell SELinux to treat all Apache processes identically by using the httpd_unified boolean. While this approach is more convenient, it will not give you the same level of control as an approach that focuses on a file or directory policy.

Run the following command to set a universal Apache policy:

  • sudo setsebool -P httpd_unified 1

The setsebool command changes SELinux boolean values. The -P flag will update the boot-time value, making this change persist across reboots. httpd_unified is the boolean that will tell SELinux to treat all Apache processes as the same type, so you enabled it with a value of 1.
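To verify the change, you can query the boolean’s current value with getsebool; after running the command above it should report that httpd_unified is on:

  • getsebool httpd_unified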

Adjusting Apache Policies on a Directory

Individually setting SELinux permissions for the /var/www/example.com/log directory will give you more control over your Apache policies, but may also require more maintenance. Since this option is not universally setting policies, you will need to manually set the context type for any new log directories specified in your virtual host configurations.

First, check the context type that SELinux gave the /var/www/example.com/log directory:

  • sudo ls -dZ /var/www/example.com/log/

This command lists and prints the SELinux context of the directory. You will see output similar to the following:

Output
drwxr-xr-x. root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/example.com/log/

The current context is httpd_sys_content_t, which tells SELinux that the Apache process can only read files created in this directory. In this tutorial, you will change the context type of the /var/www/example.com/log directory to httpd_log_t. This type will allow Apache to generate and append to web application log files:

  • sudo semanage fcontext -a -t httpd_log_t "/var/www/example.com/log(/.*)?"

Next, use the restorecon command to apply these changes and have them persist across reboots:

  • sudo restorecon -R -v /var/www/example.com/log

The -R flag runs this command recursively, meaning it will update any existing files to use the new context. The -v flag will print the context changes the command made. You will see the following output confirming the changes:

Output
restorecon reset /var/www/example.com/log context unconfined_u:object_r:httpd_sys_content_t:s0->unconfined_u:object_r:httpd_log_t:s0

You can list the contexts once more to see the changes:

  • sudo ls -dZ /var/www/example.com/log/

The output reflects the updated context type:

Output
drwxr-xr-x. root root unconfined_u:object_r:httpd_log_t:s0 /var/www/example.com/log

Now that the /var/www/example.com/log directory is using the httpd_log_t type, you are ready to test your virtual host configuration.

Once the SELinux context has been updated with either method, Apache will be able to write to the /var/www/example.com/log directory. You can now successfully restart the Apache service:

  • sudo systemctl restart httpd

List the contents of the /var/www/example.com/log directory to see if Apache created the log files:

  • ls -lZ /var/www/example.com/log

You’ll see that Apache was able to create the error.log and requests.log files specified in the virtual host configuration:

Output
-rw-r--r--. 1 root root 0 Feb 26 22:54 error.log
-rw-r--r--. 1 root root 0 Feb 26 22:54 requests.log

Now that you have your virtual host set up and SELinux permissions updated, Apache will serve your domain name. You can test this by navigating to http://example.com, where you should see something like this:

Success! The example.com virtual host is working!

This confirms that your virtual host is successfully configured and serving content. Repeat Steps 4 and 5 to create new virtual hosts with SELinux permissions for additional domains.

Conclusion

In this tutorial, you installed and managed the Apache web server. Now that you have your web server installed, you have many options for the type of content you can serve and the technologies you can use to create a richer experience.

If you’d like to build out a more complete application stack, you can look at this article on how to configure a LAMP stack on CentOS 7.

DigitalOcean Community Tutorials