How To Deploy a Resilient Go Application to DigitalOcean Kubernetes

The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

Introduction

Docker is a containerization tool used to provide applications with a filesystem holding everything they need to run, ensuring that the software has a consistent runtime environment and will behave the same way regardless of where it is deployed. Kubernetes is a cloud platform for automating the deployment, scaling, and management of containerized applications.

By leveraging Docker, you can deploy an application on any system that supports Docker with the confidence that it will always work as intended. Kubernetes, meanwhile, allows you to deploy your application across multiple nodes in a cluster. Additionally, it handles key tasks such as bringing up new containers should any of your containers crash. Together, these tools simplify the process of deploying an application, allowing you to focus on development.

In this tutorial, you will build a sample application written in Go and get it up and running locally on your development machine. Then you'll containerize the application with Docker, deploy it to a Kubernetes cluster, and create a load balancer that will serve as the public-facing entry point to your application.

Prerequisites

Before you begin this tutorial, you will need the following:

  • A development server or local machine from which you will deploy the application. Although the instructions in this guide will largely work for most operating systems, this tutorial assumes that you have access to an Ubuntu 18.04 system configured with a non-root user with sudo privileges, as described in our Initial Server Setup with Ubuntu 18.04 tutorial.
  • The docker command-line tool installed on your development machine. To install it, follow Steps 1 and 2 of our tutorial on How To Install and Use Docker on Ubuntu 18.04.
  • The kubectl command-line tool installed on your development machine. To install it, follow this guide from the official Kubernetes documentation.
  • A free Docker Hub account to which you will push your Docker image. To set this up, visit the Docker Hub website, click the Get Started button at the top-right of the page, and follow the registration instructions.
  • A Kubernetes cluster. You can provision a DigitalOcean Kubernetes cluster by following our Kubernetes Quickstart guide. You can still complete this tutorial if you provision your cluster from another cloud provider. Wherever you procure your cluster, be sure to set up a configuration file and to ensure that you can connect to the cluster from your development server.

Step 1 — Building a Sample Web Application in Go

In this step, you will build a sample application written in Go. Once you containerize this app with Docker, it will serve My Awesome Go App in response to requests to your server's IP address at port 3000.

Get started by updating your server's package lists if you haven't done so recently:

  • sudo apt update

Then install Go by running:

  • sudo apt install golang

Next, make sure you're in your home directory and create a new directory which will contain all of your project files:

  • cd && mkdir go-app

Then navigate to this new directory:

  • cd go-app/

Use nano or your preferred text editor to create a file named main.go, which will contain the code for your Go application:

  • nano main.go

The first line in any Go source file is always a package statement that defines which code bundle the file belongs to. For executable files like this one, the package statement must point to the main package:

go-app/main.go
package main

Following that, add an import statement where you can list all the libraries the application will need. Here, include fmt, which handles formatted text input and output, and net/http, which provides HTTP client and server implementations:

go-app/main.go
package main

import (
    "fmt"
    "net/http"
)

Next, define a homePage function which will take in two arguments: http.ResponseWriter and a pointer to http.Request. In Go, a ResponseWriter interface is used to construct an HTTP response, while http.Request is an object representing an incoming request. Thus, this block reads incoming HTTP requests and then constructs a response:

go-app/main.go
. . .

import (
    "fmt"
    "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}

After this, add a setupRoutes function which will map incoming requests to their intended HTTP handler functions. In the body of this setupRoutes function, add a mapping of the / route to your newly defined homePage function. This tells the application to print the My Awesome Go App message even for requests made to unknown endpoints:

go-app/main.go
. . .

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
    http.HandleFunc("/", homePage)
}

And finally, add the following main function. This will print out a string indicating that your application has started. It will then call the setupRoutes function before it starts listening for and serving your Go application on port 3000.

go-app/main.go
. . .

func setupRoutes() {
    http.HandleFunc("/", homePage)
}

func main() {
    fmt.Println("Go Web App Started on Port 3000")
    setupRoutes()
    http.ListenAndServe(":3000", nil)
}

After adding these lines, this is how the final file will look:

go-app/main.go
package main

import (
    "fmt"
    "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
    http.HandleFunc("/", homePage)
}

func main() {
    fmt.Println("Go Web App Started on Port 3000")
    setupRoutes()
    http.ListenAndServe(":3000", nil)
}

Save and close this file. If you created it with nano, do so by pressing CTRL + X, Y, then ENTER.
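
Note: As written, main discards the error value returned by http.ListenAndServe, so a failure to bind the port (for example, if something else is already listening on 3000) would make the program exit silently. If you'd like the process to report that failure, a minimal variant of the file wraps the call with log.Fatal. This is an optional refinement and is not required by the rest of the tutorial:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
    http.HandleFunc("/", homePage)
}

func main() {
    fmt.Println("Go Web App Started on Port 3000")
    setupRoutes()
    // ListenAndServe blocks until the server stops; it only returns on an
    // unrecoverable error, which log.Fatal prints before exiting.
    if err := http.ListenAndServe(":3000", nil); err != nil {
        log.Fatal(err)
    }
}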

Next, run the application using the following go run command. This will compile the code in your main.go file and run it locally on your development machine:

  • go run main.go
Output
Go Web App Started on Port 3000

This output confirms that the application is working as expected. It will run indefinitely, however, so close it by pressing CTRL + C.
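
If you'd like to verify the response without a browser, you can optionally send a request from a second terminal while the app is running. Since the homePage handler writes the string My Awesome Go App, that is what curl should print back:

  • curl http://localhost:3000
Output
My Awesome Go App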

Throughout this guide, you will use this sample application to experiment with Docker and Kubernetes. To that end, continue reading to learn how to containerize your application with Docker.

Step 2 — Dockerizing Your Go Application

In its current state, the Go application you just created is only running on your development server. In this step, you'll make it portable by containerizing it with Docker, which will allow it to run on any machine that supports Docker containers. You will build a Docker image and push it to a central public repository on Docker Hub. This way, your Kubernetes cluster can pull the image back down and deploy it as a container within the cluster.

The first step towards containerizing your application is to create a special script called a Dockerfile. A Dockerfile typically contains a list of instructions and arguments that run in sequential order so as to automatically perform certain actions on a base image or create a new one.

Note: In this step, you will configure a simple Docker container that builds and runs your Go application in a single stage. If, in the future, you want to reduce the size of the container where your Go applications run in production, you may want to look into multi-stage builds.
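
For reference, a multi-stage variant of the Dockerfile built in this step might look like the sketch below. It compiles the binary in a full golang image and copies only the compiled result into a small base image; the alpine:3.9 run-stage tag is an illustrative assumption, and the single-stage Dockerfile below is all this tutorial requires:

# Build stage: compile the Go binary using the full toolchain image
FROM golang:1.12.0-alpine3.9 AS builder
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .

# Run stage: start from a minimal base and copy in only the binary
FROM alpine:3.9
COPY --from=builder /app/main /app/main
CMD ["/app/main"]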

Create a new file named Dockerfile:

  • nano Dockerfile

At the top of the file, specify the base image needed for the Go app:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9

Then, create an app directory within the container that will hold the application's source files:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app

Beneath that, add the following line which copies everything in the root directory into the app directory:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app

Next, add the following line which changes the working directory to app, meaning that all the commands that follow in this Dockerfile will run from that location:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app

Add a line instructing Docker to run the go build -o main command, which compiles the binary executable of the Go app:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .

Then, add the final line, which will run the binary executable:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]

Save and close the file after adding these lines.

Now that you have this Dockerfile in the root of your project, you can create a Docker image based off of it using the following docker build command. This command includes the -t flag which, when passed the value go-web-app, will name the Docker image go-web-app and tag it.

Note: In Docker, tags allow you to convey information specific to a given image, such as its version number. The following command doesn't provide a specific tag, so Docker will tag the image with its default tag: latest. If you want to give an image a custom tag, you would append the image name with a colon and the tag of your choice, like so:

  • docker build -t sammy/image_name:tag_name .

Tagging an image like this can give you greater control over your images. For example, you could deploy an image tagged v1.1 to production, but deploy another tagged v1.2 to your pre-production or testing environment.

The final argument you'll pass is the path: .. This specifies that you wish to build the Docker image from the contents of the current working directory. Also, be sure to update sammy to your Docker Hub username:

  • docker build -t sammy/go-web-app .

This build command will read all of the lines in your Dockerfile, execute them in order, and then cache them, allowing future builds to run much more quickly:

Output
. . .
Successfully built 521679ff78e5
Successfully tagged sammy/go-web-app:latest

Once this command finishes building, you will be able to see your image when you run the docker images command, like so:

  • docker images
Output
REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
sammy/go-web-app    latest    4ee6cf7a8ab4   3 seconds ago   355MB

Next, use the following command to create and start a container based on the image you just built. This command includes the -it flag, which specifies that the container will run in interactive mode. It also includes the -p flag, which maps the port on which the Go application runs on your development machine (port 3000) to port 3000 in your Docker container:

  • docker run -it -p 3000:3000 sammy/go-web-app
Output
Go Web App Started on Port 3000

If nothing else is running on that port, you'll be able to see the application in action by opening a browser and navigating to the following URL:

http://your_server_ip:3000

Note: If you are following this tutorial from your local machine instead of a server, visit the application at the following URL instead:

http://localhost:3000

Your containerized Go App

Once you've checked that the application works as expected in your browser, stop it by pressing CTRL + C in your terminal.

When you deploy your containerized application to your Kubernetes cluster, you'll need to be able to pull the image from a centralized location. To that end, you can push your newly created image to your Docker Hub image repository.

Run the following command to log in to Docker Hub from your terminal:

  • docker login

This will prompt you for your Docker Hub username and password. After entering them correctly, you will see Login Succeeded in the command's output.

After logging in, push your new image up to Docker Hub using the docker push command, like so:

  • docker push sammy/go-web-app

Once this command completes successfully, you will be able to open up your Docker Hub account and see your Docker image there.

Now that you've pushed your image to a central location, you're ready to deploy it to your Kubernetes cluster. First, though, we will walk through a brief process that will make it much less tedious to run kubectl commands.

Step 3 — Improving Usability for kubectl

At this point, you've created a functioning Go application and containerized it with Docker. However, the application still isn't publicly accessible. To resolve this, you will deploy your new Docker image to your Kubernetes cluster using the kubectl command-line tool. Before doing that, though, let's make a small change to the Kubernetes configuration file that will make running kubectl commands less laborious.

By default, when you run commands with the kubectl command-line tool, you have to specify the path of the cluster configuration file using the --kubeconfig flag. However, if your configuration file is named config and is stored in a directory named ~/.kube, kubectl will know where to look for the configuration file and will be able to pick it up without the --kubeconfig flag pointing to it.

To that end, if you haven't already done so, create a new directory called ~/.kube:

  • mkdir ~/.kube

Then move your cluster configuration file to this directory, renaming it config in the process:

  • mv clusterconfig.yaml ~/.kube/config
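
Note: If you'd rather leave the configuration file where it is, kubectl can also read its path from the KUBECONFIG environment variable, so exporting that variable has the same effect as moving the file. The path below is illustrative, and the rest of this tutorial assumes you moved the file as shown above:

  • export KUBECONFIG=~/clusterconfig.yaml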

Moving forward, you won't have to specify the location of your cluster's configuration file when you run kubectl, as the command will be able to find it now that it's in the default location. Test out this behavior by running the following get nodes command:

  • kubectl get nodes

This will display all of the nodes that reside within your Kubernetes cluster. In the context of Kubernetes, a node is a server or a worker machine onto which one or more pods can be deployed:

Output
NAME                                        STATUS    ROLES     AGE       VERSION
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfd   Ready     <none>    1m        v1.13.5
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfi   Ready     <none>    1m        v1.13.5
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfv   Ready     <none>    1m        v1.13.5

With that, you're ready to move on and deploy your application to your Kubernetes cluster. You will do this by creating two Kubernetes objects: one that will deploy the application to some pods in your cluster, and another that will create a load balancer, providing an access point to your application.

Step 4 — Creating a Deployment

RESTful resources make up all the persistent entities within a Kubernetes system, and in this context they're commonly referred to as Kubernetes objects. It's helpful to think of Kubernetes objects as the work orders you submit to Kubernetes: you list what resources you need and how they should work, and then Kubernetes will constantly work to ensure that they exist in your cluster.

One kind of Kubernetes object, known as a deployment, is a set of identical, indistinguishable pods. In Kubernetes, a pod is a grouping of one or more containers which are able to communicate over the same shared network and interact with the same shared storage. A deployment runs more than one replica of the parent application at a time and automatically replaces any instances that fail, ensuring that your application is always available to serve user requests.

In this step, you'll create a Kubernetes object description file, also known as a manifest, for a deployment. This manifest will contain all of the configuration details needed to deploy your Go app to your cluster.

Begin by creating a deployment manifest in the root directory of your project: go-app/. For small projects such as this one, keeping them in the root directory minimizes complexity. For larger projects, however, it may be beneficial to store your manifests in a separate subdirectory to keep everything organized.

Create a new file called deployment.yml:

  • nano deployment.yml

Different versions of the Kubernetes API contain different object definitions, so at the top of this file you must define the apiVersion you're using to create this object. For the purpose of this tutorial, you will be using the apps/v1 grouping, as it contains many of the core Kubernetes object definitions you'll need in order to create a deployment. Add a field below apiVersion describing the kind of Kubernetes object you're creating. In this case, you're creating a Deployment:

go-app/deployment.yml
---
apiVersion: apps/v1
kind: Deployment

Then define the metadata for your deployment. A metadata field is required for every Kubernetes object, as it contains information such as the unique name of the object. This name is useful, as it allows you to distinguish different deployments from one another and identify them using human-readable names:

go-app/deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app

Next, you'll build out the spec block of your deployment.yml. A spec field is a requirement for every Kubernetes object, but its precise format differs for each type of object. In the case of a deployment, it can contain information such as the number of replicas you want to run. In Kubernetes, a replica is the number of pods you want to run in your cluster. Here, set the number of replicas to 5:

go-app/deployment.yml
. . .
metadata:
  name: go-web-app
spec:
  replicas: 5

Next, create a selector block nested under the spec block. This will serve as a label selector for your pods. Kubernetes uses label selectors to define how the deployment finds the pods it must manage.

Within this selector block, define matchLabels and add the name label. Essentially, the matchLabels field tells Kubernetes which pods the deployment applies to. In this example, the deployment will apply to any pods named go-web-app:

go-app/deployment.yml
. . .
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app

After this, add a template block. Every deployment creates a set of pods using the labels specified in a template block. The first subfield in this block is metadata, which contains the labels that will be applied to all of the pods in this deployment. These labels are key/value pairs that are used as identifying attributes of Kubernetes objects. When you define your service later on, you can specify that you want all the pods with this name label to be grouped under that service. Set this name label to go-web-app:

go-app/deployment.yml
. . .
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
  template:
    metadata:
      labels:
        name: go-web-app

The second part of this template block is the spec block. This one is different from the spec block you added earlier, as it applies only to the pods created by the template block, rather than the whole deployment.

Within this spec block, add a containers field and once again define a name attribute. This name field defines the name of any containers created by this particular deployment. Below that, define the image you want to pull down and deploy. Be sure to change sammy to your own Docker Hub username:

go-app/deployment.yml
. . .
  template:
    metadata:
      labels:
        name: go-web-app
    spec:
      containers:
      - name: application
        image: sammy/go-web-app

Following that, add an imagePullPolicy field set to IfNotPresent, which will direct the deployment to only pull an image if it has not already done so before. Then, lastly, add a ports block. There, define the containerPort, which should match the port number that your Go application listens on. In this case, the port number is 3000:

go-app/deployment.yml
. . .
    spec:
      containers:
      - name: application
        image: sammy/go-web-app
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3000

The full version of your deployment.yml will look like this:

go-app/deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
  template:
    metadata:
      labels:
        name: go-web-app
    spec:
      containers:
      - name: application
        image: sammy/go-web-app
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3000

Save and close the file.

Next, apply your new deployment with the following command:

  • kubectl apply -f deployment.yml
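
To confirm that the deployment was created and that its pods are coming up, you can optionally run kubectl get deployments. Output similar to the following, showing 5 of 5 replicas ready, indicates a healthy rollout (the exact column layout varies by kubectl version):

  • kubectl get deployments
Output
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
go-web-app   5/5     5            5           1m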

Note: For more information on all of the configuration available to your deployments, check out the official Kubernetes documentation here: Kubernetes Deployments

In the next step, you will create another kind of Kubernetes object that will manage how you access the pods in your new deployment. This service will create a load balancer, which will then expose a single IP address, and requests to that IP address will be distributed across the replicas in your deployment. This service will also handle port forwarding rules so that you can access your application over HTTP.

Step 5 — Creating a Service

Now that you have a successful Kubernetes deployment, you're ready to expose your application to the outside world. In order to do this, you'll need to define another kind of Kubernetes object: a service. This service will expose the same port on all of your cluster's nodes. Your nodes will then forward any incoming traffic on that port to the pods running your application.

Note: For clarity, we will define this service object in a separate file. However, it is possible to group multiple resource manifests in the same YAML file, as long as they're separated by ---. See this page of the Kubernetes documentation for more details.

Create a new file called service.yml:

  • nano service.yml

Start this file off by again defining the apiVersion and kind fields in a similar fashion to your deployment.yml file. This time, point the apiVersion field to v1, the Kubernetes API commonly used for services:

go-app/service.yml
---
apiVersion: v1
kind: Service

Next, add the name of your service in a metadata block, as you did in deployment.yml. This could be anything you like, but for clarity, we will call it go-web-service:

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service

Then, create a spec block. This spec block will be different from the one included in your deployment, and it will contain this service's type, as well as the port forwarding configuration and the selector.

Add a field defining this service's type and set it to LoadBalancer. This will automatically provision a load balancer that will act as the main entry point to your application.

Warning: The method for creating a load balancer outlined in this step will only work for Kubernetes clusters provisioned from cloud providers that also support external load balancers. Additionally, be aware that provisioning a load balancer from a cloud provider will incur additional costs. If this is a concern for you, you may want to look into exposing an external IP address using an Ingress.

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer

Then, add a ports block where you'll define how you want your apps to be accessed. Nested within this block, add the following fields:

  • name, pointing to http
  • port, pointing to port 80
  • targetPort, pointing to port 3000

This will take incoming HTTP requests on port 80 and forward them to the targetPort of 3000. This targetPort is the same port on which your Go application is running:

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 3000

Lastly, add a selector block, as you did in the deployment.yml file. This selector block is important, as it maps any deployed pods named go-web-app to this service:

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 3000
  selector:
    name: go-web-app

After adding these lines, save and close the file. Following that, apply this service to your Kubernetes cluster, once again using the kubectl apply command like so:

  • kubectl apply -f service.yml

This command will apply the new Kubernetes service and create a load balancer. The load balancer will serve as the public-facing entry point to your application running within the cluster.

To view the application, you will need the new load balancer's IP address. Find it by running the following command:

  • kubectl get services
Output
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
go-web-service   LoadBalancer   10.245.107.189   203.0.113.20   80:30533/TCP   10m
kubernetes       ClusterIP      10.245.0.1       <none>         443/TCP        3h4m

You may have more than one service running, but find the one labeled go-web-service. Find the EXTERNAL-IP column and copy the IP address associated with go-web-service. In this example output, that IP address is 203.0.113.20. Then, paste the IP address into the URL bar of your browser to view the application running on your Kubernetes cluster.
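
You can also test the entry point from the command line. Assuming the load balancer has finished provisioning, a plain HTTP request to its external IP should return the message written by your handler:

  • curl http://203.0.113.20
Output
My Awesome Go App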

Note: When Kubernetes creates a load balancer in this manner, it does so asynchronously. Consequently, the kubectl get services command's output may show the LoadBalancer's EXTERNAL-IP address remaining in a <pending> state for some time after you run kubectl apply. If that's the case, wait a few minutes and try re-running the command to ensure that the load balancer was created and is functioning as expected.

The load balancer will take in the request on port 80 and forward it to one of the pods running within your cluster.

Your working Go App!

With that, you've created a Kubernetes service coupled with a load balancer, giving you a single, stable entry point to your application.

Conclusion

In this tutorial, you built a Go application, containerized it with Docker, and then deployed it to a Kubernetes cluster. You then created a load balancer that provides a resilient entry point to this application, ensuring that it remains highly available even if one of the nodes in your cluster fails. You can use this tutorial to deploy your own Go application to a Kubernetes cluster, or continue learning other Kubernetes and Docker concepts with the sample application you created in Step 1.

Moving forward, you could map your load balancer's IP address to a domain name that you control so that you can access the application through a human-readable web address rather than the load balancer IP. Additionally, other Kubernetes tutorials on our site may be of interest to you.

Finally, if you'd like to learn more about Go, we encourage you to check out our series on How To Code in Go.


How To Set Up the code-server Cloud IDE Platform on DigitalOcean Kubernetes

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

With developer tools moving to the cloud, creation and adoption of cloud IDE (Integrated Development Environment) platforms is growing. Cloud IDEs allow for real-time collaboration between developer teams to work in a unified development environment that minimizes incompatibilities and enhances productivity. Accessible through web browsers, cloud IDEs are available from every type of modern device. Another advantage of a cloud IDE is the possibility to leverage the power of a cluster, which can greatly exceed the processing power of a single development computer.

code-server is Microsoft Visual Studio Code running on a remote server and accessible directly from your browser. Visual Studio Code is a modern code editor with integrated Git support, a code debugger, smart autocompletion, and customizable and extensible features. This means that you can use various devices, running different operating systems, and always have a consistent development environment on hand.

In this tutorial, you will set up the code-server cloud IDE platform on your DigitalOcean Kubernetes cluster and expose it at your domain, secured with Let’s Encrypt certificates. In the end, you’ll have Microsoft Visual Studio Code running on your Kubernetes cluster, available via HTTPS and protected by a password.

Prerequisites

  • A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.

  • The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.

  • The Nginx Ingress Controller and Cert-Manager installed on your cluster using Helm in order to expose code-server using Ingress Resources. To do this, follow How to Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm.

  • A fully registered domain name to host code-server, pointed at the Load Balancer used by the Nginx Ingress. This tutorial will use code-server.your_domain throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice. This domain name must differ from the one used in the How To Set Up an Nginx Ingress on DigitalOcean Kubernetes prerequisite tutorial.

Step 1 — Installing And Exposing code-server

In this section, you’ll install code-server to your DigitalOcean Kubernetes cluster and expose it at your domain, using the Nginx Ingress controller. You will also set up a password for admittance.

You’ll store the deployment configuration on your local machine, in a file named code-server.yaml. Create it using the following command:

  • nano code-server.yaml

Add the following lines to the file:

code-server.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: code-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: code-server
  namespace: code-server
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: code-server.your_domain
    http:
      paths:
      - backend:
          serviceName: code-server
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: code-server
  namespace: code-server
spec:
  ports:
  - port: 80
    targetPort: 8443
  selector:
    app: code-server
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: code-server
  name: code-server
  namespace: code-server
spec:
  selector:
    matchLabels:
      app: code-server
  replicas: 1
  template:
    metadata:
      labels:
        app: code-server
    spec:
      containers:
      - image: codercom/code-server
        imagePullPolicy: Always
        name: code-server
        args: ["--allow-http"]
        ports:
        - containerPort: 8443
        env:
        - name: PASSWORD
          value: "your_password"

This configuration defines a Namespace, a Deployment, a Service, and an Ingress. The Namespace is called code-server and separates the code-server installation from the rest of your cluster. The Deployment consists of one replica of the codercom/code-server Docker image, and an environment variable named PASSWORD that specifies the password for access.

The code-server Service internally exposes the pod (created as a part of the Deployment) at port 80. The Ingress defined in the file specifies that the Ingress Controller is nginx, and that the code-server.your_domain domain will be served from the Service.

Remember to replace your_password with your desired password, and code-server.your_domain with your desired domain, pointed to the Load Balancer of the Nginx Ingress Controller.

Then, create the configuration in Kubernetes by running the following command:

  • kubectl create -f code-server.yaml

You’ll see the following output:

Output
namespace/code-server created
ingress.extensions/code-server created
service/code-server created
deployment.extensions/code-server created

You can watch the code-server pod become available by running:

  • kubectl get pods -w -n code-server

The output will look like:

Output
NAME                          READY     STATUS              RESTARTS   AGE
code-server-f85d9bfc9-j7hq6   0/1       ContainerCreating   0          1m

As soon as the status becomes Running, code-server has finished installing to your cluster.

Navigate to your domain in your browser. You’ll see the login prompt for code-server.

code-server login prompt

Enter the password you set in code-server.yaml and press Enter IDE. You’ll enter code-server and immediately see its editor GUI.

code-server GUI

You’ve installed code-server to your Kubernetes cluster and made it available at your domain. You have also verified that it requires you to log in with a password. Now, you’ll move on to secure it with free Let’s Encrypt certificates using Cert-Manager.

Step 2 — Securing the code-server Deployment

In this section, you will secure your code-server installation by applying Let’s Encrypt certificates to your Ingress, which Cert-Manager will automatically create. After completing this step, your code-server installation will be accessible via HTTPS.

Open code-server.yaml for editing:

  • nano code-server.yaml

Add the highlighted lines to your file, making sure to replace the example domain with your own:

code-server.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: code-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: code-server
  namespace: code-server
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - code-server.your_domain
    secretName: codeserver-prod
  rules:
  - host: code-server.your_domain
    http:
      paths:
      - backend:
          serviceName: code-server
          servicePort: 80
...

First, you specify that the cluster-issuer that this Ingress will use to provision certificates will be letsencrypt-prod, created as a part of the prerequisites. Then, you specify the domains that will be secured under the tls section, as well as your name for the Secret holding them.

Apply the changes to your Kubernetes cluster by running the following command:

  • kubectl apply -f code-server.yaml

You’ll need to wait a few minutes for Let’s Encrypt to provision your certificate. In the meantime, you can track its progress by looking at the output of the following command:

  • kubectl describe certificate codeserver-prod -n code-server

When it finishes, the end of the output will look similar to this:

Output
Events:
  Type    Reason              Age      From          Message
  ----    ------              ----     ----          -------
  Normal  Generated           2m49s    cert-manager  Generated new private key
  Normal  GenerateSelfSigned  2m49s    cert-manager  Generated temporary self signed certificate
  Normal  OrderCreated        2m49s    cert-manager  Created Order resource "codeserver-prod-4279678953"
  Normal  OrderComplete       2m14s    cert-manager  Order "codeserver-prod-4279678953" completed successfully
  Normal  CertIssued          2m14s    cert-manager  Certificate issued successfully

You can now refresh your domain in your browser. You’ll see the padlock to the left of the address bar in your browser signifying that the connection is secure.

In this step, you have configured the Ingress to secure your code-server deployment. Now, you can review the code-server user interface.

Step 3 — Exploring the code-server Interface

In this section, you’ll explore some of the features of the code-server interface. Since code-server is Visual Studio Code running in the cloud, it has the same interface as the standalone desktop edition.

On the left-hand side of the IDE, there is a vertical row of six buttons opening the most commonly used features in a side panel known as the Activity Bar.

code-server GUI - Sidepanel

This bar is customizable so you can move these views to a different order or remove them from the bar. By default, the first view opens the Explorer panel that provides tree-like navigation of the project’s structure. You can manage your folders and files here—creating, deleting, moving, and renaming them as necessary. The next view provides access to a search and replace functionality.

Following this, in the default order, is your view of source control systems like Git. Visual Studio Code also supports other source control providers, and you can find further instructions for source control workflows with the editor in this documentation.

Git dropdown menu with version control actions

The debugger option on the Activity Bar provides all the common actions for debugging in the panel. Visual Studio Code comes with built-in support for the Node.js runtime debugger and any language that transpiles to JavaScript. For other languages, you can install extensions for the required debugger. You can save debugging configurations in the launch.json file.

Debugger View with launch.json open

The final view in the Activity Bar provides a menu to access available extensions on the Marketplace.

code-server GUI - Tabs

The central part of the GUI is your editor, which you can separate by tabs for your code editing. You can change your editing view to a grid system or to side-by-side files.

Editor Grid View

After creating a new file through the File menu, an empty file will open in a new tab, and once saved, the file's name will be viewable in the Explorer side panel. To create a folder, right-click on the Explorer sidebar and click New Folder. You can expand a folder by clicking on its name, and you can drag and drop files and folders to upper parts of the hierarchy to move them to a new location.

code-server GUI - New Folder

You can gain access to a terminal by pressing CTRL+SHIFT+`, or by clicking on Terminal in the upper menu and selecting New Terminal. The terminal will open in a lower panel, and its working directory will be set to the project's workspace, which contains the files and folders shown in the Explorer side panel.

You’ve explored a high-level overview of the code-server interface and reviewed some of the most commonly used features.

Conclusion

You now have code-server, a versatile cloud IDE, installed on your DigitalOcean Kubernetes cluster. You can work on your source code and documents with it individually or collaborate with your team. Running a cloud IDE on your cluster also puts the cluster's processing power at your disposal for testing, downloads, and other computationally intensive work. For further information, see the Visual Studio Code documentation on additional features and detailed instructions on other components of code-server.


How to Set Up a Prometheus, Grafana and Alertmanager Monitoring Stack on DigitalOcean Kubernetes

Introduction

Along with tracing and logging, monitoring and alerting are essential components of a Kubernetes observability stack. Setting up monitoring for your DigitalOcean Kubernetes cluster allows you to track your resource usage and analyze and debug application errors.

A monitoring system usually consists of a time-series database that houses metric data and a visualization layer. In addition, an alerting layer creates and manages alerts, handing them off to integrations and external services as necessary. Finally, one or more components generate or expose the metric data that will be stored, visualized, and processed for alerts by the stack.

One popular monitoring solution is the open-source Prometheus, Grafana, and Alertmanager stack, deployed alongside kube-state-metrics and node_exporter to expose cluster-level Kubernetes object metrics as well as machine-level metrics like CPU and memory usage.

Rolling out this monitoring stack on a Kubernetes cluster requires configuring individual components, manifests, Prometheus metrics, and Grafana dashboards, which can take some time. The DigitalOcean Kubernetes Cluster Monitoring Quickstart, released by the DigitalOcean Community Developer Education team, contains fully defined manifests for a Prometheus-Grafana-Alertmanager cluster monitoring stack, as well as a set of preconfigured alerts and Grafana dashboards. It can help you get up and running quickly, and forms a solid foundation from which to build your observability stack.

In this tutorial, we’ll deploy this preconfigured stack on DigitalOcean Kubernetes, access the Prometheus, Grafana, and Alertmanager interfaces, and describe how to customize it.

Prerequisites

Before you begin, you’ll need a DigitalOcean Kubernetes cluster available to you, and the following tools installed in your local development environment:

  • The kubectl command-line interface installed on your local machine and configured to connect to your cluster. You can read more about installing and configuring kubectl in its official documentation.
  • The git version control system installed on your local machine. To learn how to install git on Ubuntu 18.04, consult How To Install Git on Ubuntu 18.04.
  • The Coreutils base64 tool installed on your local machine. If you’re using a Linux machine, this will most likely already be installed. If you’re using OS X, you can use openssl base64, which comes installed by default.

Note: The Cluster Monitoring Quickstart has only been tested on DigitalOcean Kubernetes clusters. To use the Quickstart with other Kubernetes clusters, some modification to the manifest files may be necessary.

Step 1 — Cloning the GitHub Repository and Configuring Environment Variables

To start, clone the DigitalOcean Kubernetes Cluster Monitoring GitHub repository onto your local machine using git:

  • git clone git@github.com:do-community/doks-monitoring.git

Then, navigate into the repo:

  • cd doks-monitoring

You should see the following directory structure:

  • ls
Output
LICENSE README.md changes.txt manifest

The manifest directory contains Kubernetes manifests for all of the monitoring stack components, including Service Accounts, Deployments, StatefulSets, ConfigMaps, etc. To learn more about these manifest files and how to configure them, skip ahead to Configuring the Monitoring Stack.

If you just want to get things up and running, begin by setting the APP_INSTANCE_NAME and NAMESPACE environment variables, which will be used to configure a unique name for the stack’s components and configure the Namespace into which the stack will be deployed:

  • export APP_INSTANCE_NAME=sammy-cluster-monitoring
  • export NAMESPACE=default

In this tutorial, we set APP_INSTANCE_NAME to sammy-cluster-monitoring, which will prepend all of the monitoring stack Kubernetes object names. You should substitute in a unique descriptive prefix for your monitoring stack. We also set the Namespace to default. If you’d like to deploy the monitoring stack to a Namespace other than default, ensure that you first create it in your cluster:

  • kubectl create namespace "$NAMESPACE"

You should see the following output:

Output
namespace/sammy created

In this case, the NAMESPACE environment variable was set to sammy. Throughout the rest of the tutorial we’ll assume that NAMESPACE has been set to default.

Now, use the base64 command to base64-encode a secure Grafana password. Be sure to substitute a password of your choosing for your_grafana_password:

  • export GRAFANA_GENERATED_PASSWORD="$(echo -n 'your_grafana_password' | base64)"

If you're using macOS, you can substitute the openssl base64 command, which comes installed by default.
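
In that case, the equivalent invocation simply swaps base64 for openssl base64:

  • export GRAFANA_GENERATED_PASSWORD="$(echo -n 'your_grafana_password' | openssl base64)"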

At this point, you’ve grabbed the stack’s Kubernetes manifests and configured the required environment variables, so you’re now ready to substitute the configured variables into the Kubernetes manifest files and create the stack in your Kubernetes cluster.

Step 2 — Creating the Monitoring Stack

The DigitalOcean Kubernetes Monitoring Quickstart repo contains manifests for the following monitoring, scraping, and visualization components:

  • Prometheus is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using PromQL, a time series data query language. Prometheus will be deployed into the cluster as a StatefulSet with 2 replicas that uses Persistent Volumes with DigitalOcean Block Storage. In addition, a preconfigured set of Prometheus Alerts, Rules, and Jobs will be stored as a ConfigMap. To learn more about these, skip ahead to the Prometheus section of Configuring the Monitoring Stack.
  • Alertmanager, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or PagerDuty. Alertmanager will be installed as a StatefulSet with 2 replicas. To learn more about Alertmanager, consult Alerting from the Prometheus docs.
  • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data. Grafana will be installed as a StatefulSet with one replica. In addition, a preconfigured set of Dashboards generated by kubernetes-mixin will be stored as a ConfigMap.
  • kube-state-metrics is an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. These metrics are served as plaintext on HTTP endpoints and consumed by Prometheus. kube-state-metrics will be installed as an auto-scalable Deployment with one replica.
  • node-exporter, a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics like CPU and memory usage to Prometheus. These metrics are also served as plaintext on HTTP endpoints and consumed by Prometheus. node-exporter will be installed as a DaemonSet.

By default, along with scraping metrics generated by node-exporter, kube-state-metrics, and the other components listed above, Prometheus will be configured to scrape metrics from the following components:

  • kube-apiserver, the Kubernetes API server.
  • kubelet, the primary node agent that interacts with kube-apiserver to manage Pods and containers on a node.
  • cAdvisor, a node agent that discovers running containers and collects their CPU, memory, filesystem, and network usage metrics.

To learn more about configuring these components and Prometheus scraping jobs, skip ahead to Configuring the Monitoring Stack. We’ll now substitute the environment variables defined in the previous step into the repo’s manifest files, and concatenate the individual manifests into a single master file.

Begin by using awk and envsubst to fill in the APP_INSTANCE_NAME, NAMESPACE, and GRAFANA_GENERATED_PASSWORD variables in the repo’s manifest files. After substituting in the variable values, the files will be combined and saved into a master manifest file called sammy-cluster-monitoring_manifest.yaml.

  • awk 'FNR==1 {print "---"}{print}' manifest/* \
  • | envsubst '$APP_INSTANCE_NAME $NAMESPACE $GRAFANA_GENERATED_PASSWORD' \
  • > "${APP_INSTANCE_NAME}_manifest.yaml"

You should consider storing this file in version control so that you can track changes to the monitoring stack and roll back to previous versions. If you do this, be sure to scrub the admin-password variable from the file so that you don’t check your Grafana password into version control.

Now that you’ve generated the master manifest file, use kubectl apply -f to apply the manifest and create the stack in the Namespace you configured:

  • kubectl apply -f "${APP_INSTANCE_NAME}_manifest.yaml" --namespace "${NAMESPACE}"

You should see output similar to the following:

Output
serviceaccount/alertmanager created
configmap/sammy-cluster-monitoring-alertmanager-config created
service/sammy-cluster-monitoring-alertmanager-operated created
service/sammy-cluster-monitoring-alertmanager created
. . .
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/sammy-cluster-monitoring-prometheus-config created
service/sammy-cluster-monitoring-prometheus created
statefulset.apps/sammy-cluster-monitoring-prometheus created

You can track the stack’s deployment progress using kubectl get all. Once all of the stack components are RUNNING, you can access the preconfigured Grafana dashboards through the Grafana web interface.
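
For example, the following invocation lists every object in the stack, scoped to the Namespace you configured:

  • kubectl get all --namespace "${NAMESPACE}"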

Step 3 — Accessing Grafana and Exploring Metrics Data

The Grafana Service manifest exposes Grafana as a ClusterIP Service, which means that it’s only accessible via a cluster-internal IP address. To access Grafana outside of your Kubernetes cluster, you can either use kubectl patch to update the Service in-place to a public-facing type like NodePort or LoadBalancer, or kubectl port-forward to forward a local port to a Grafana Pod port. In this tutorial we’ll forward ports, so you can skip ahead to Forwarding a Local Port to Access the Grafana Service. The following section on exposing Grafana externally is included for reference purposes.

Exposing the Grafana Service using a Load Balancer (optional)

If you’d like to create a DigitalOcean Load Balancer for Grafana with an external public IP, use kubectl patch to update the existing Grafana Service in-place to the LoadBalancer Service type:

  • kubectl patch svc "$APP_INSTANCE_NAME-grafana" \
  • --namespace "$NAMESPACE" \
  • -p '{"spec": {"type": "LoadBalancer"}}'

The kubectl patch command allows you to update Kubernetes objects in-place to make changes without having to re-deploy the objects. You can also modify the master manifest file directly, adding a type: LoadBalancer parameter to the Grafana Service spec. To learn more about kubectl patch and Kubernetes Service types, you can consult the Update API Objects in Place Using kubectl patch and Services resources in the official Kubernetes docs.

After running the above command, you should see the following:

Output
service/sammy-cluster-monitoring-grafana patched
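
If you instead modify the master manifest directly, the relevant fragment of the Grafana Service would look roughly like the sketch below (surrounding fields elided); after editing, re-apply the file with kubectl apply -f as in Step 2:

. . .
kind: Service
metadata:
  name: sammy-cluster-monitoring-grafana
spec:
  type: LoadBalancer
. . .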

It may take several minutes to create the Load Balancer and assign it a public IP. You can track its progress using the following command with the -w flag to watch for changes:

  • kubectl get service "$APP_INSTANCE_NAME-grafana" -w

Once the DigitalOcean Load Balancer has been created and assigned an external IP address, you can fetch its external IP using the following commands:

  • SERVICE_IP=$(kubectl get svc $APP_INSTANCE_NAME-grafana \
  • --namespace $NAMESPACE \
  • --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
  • echo "http://${SERVICE_IP}/"

You can now access the Grafana UI by navigating to http://SERVICE_IP/.

Forwarding a Local Port to Access the Grafana Service

If you don’t want to expose the Grafana Service externally, you can also forward local port 3000 into the cluster directly to a Grafana Pod using kubectl port-forward.

  • kubectl port-forward --namespace ${NAMESPACE} ${APP_INSTANCE_NAME}-grafana-0 3000

You should see the following output:

Output
Forwarding from 127.0.0.1:3000 -> 3000 Forwarding from [::1]:3000 -> 3000

This will forward local port 3000 to containerPort 3000 of the Grafana Pod sammy-cluster-monitoring-grafana-0. To learn more about forwarding ports into a Kubernetes cluster, consult Use Port Forwarding to Access Applications in a Cluster.

Visit http://localhost:3000 in your web browser. You should see the following Grafana login page:

Grafana Login Page

To log in, use the default username admin (if you haven’t modified the admin-user parameter), and the password you configured in Step 1.

You’ll be brought to the following Home Dashboard:

Grafana Home Page

In the left-hand navigation bar, select the Dashboards button, then click on Manage:

Grafana Dashboard Tab

You’ll be brought to the following dashboard management interface, which lists the dashboards configured in the dashboards-configmap.yaml manifest:

Grafana Dashboard List

These dashboards are generated by kubernetes-mixin, an open-source project that allows you to create a standardized set of cluster monitoring Grafana dashboards and Prometheus alerts. To learn more, consult the kubernetes-mixin GitHub repo.

Click in to the Kubernetes / Nodes dashboard, which visualizes CPU, memory, disk, and network usage for a given node:

Grafana Nodes Dashboard

Describing how to use these dashboards is outside of this tutorial's scope, but you can consult the kubernetes-mixin GitHub repo and the official Grafana documentation to learn more.

In the next step, we’ll follow a similar process to connect to and explore the Prometheus monitoring system.

Step 4 — Accessing Prometheus and Alertmanager

To connect to the Prometheus Pods, we can use kubectl port-forward to forward a local port. If you’re done exploring Grafana, you can close the port-forward tunnel by hitting CTRL-C. Alternatively, you can open a new shell and create a new port-forward connection.

Begin by listing running Pods in the default namespace:

  • kubectl get pod -n default

You should see the following Pods:

Output
sammy-cluster-monitoring-alertmanager-0                      1/1       Running   0          17m
sammy-cluster-monitoring-alertmanager-1                      1/1       Running   0          15m
sammy-cluster-monitoring-grafana-0                           1/1       Running   0          16m
sammy-cluster-monitoring-kube-state-metrics-d68bb884-gmgxt   2/2       Running   0          16m
sammy-cluster-monitoring-node-exporter-7hvb7                 1/1       Running   0          16m
sammy-cluster-monitoring-node-exporter-c2rvj                 1/1       Running   0          16m
sammy-cluster-monitoring-node-exporter-w8j74                 1/1       Running   0          16m
sammy-cluster-monitoring-prometheus-0                        1/1       Running   0          16m
sammy-cluster-monitoring-prometheus-1                        1/1       Running   0          16m

We are going to forward local port 9090 to port 9090 of the sammy-cluster-monitoring-prometheus-0 Pod:

  • kubectl port-forward --namespace ${NAMESPACE} sammy-cluster-monitoring-prometheus-0 9090

You should see the following output:

Output
Forwarding from 127.0.0.1:9090 -> 9090 Forwarding from [::1]:9090 -> 9090

This indicates that local port 9090 is being forwarded successfully to the Prometheus Pod.

Visit http://localhost:9090 in your web browser. You should see the following Prometheus Graph page:

Prometheus Graph Page

From here you can use PromQL, the Prometheus query language, to select and aggregate time series metrics stored in its database. To learn more about PromQL, consult Querying Prometheus from the official Prometheus docs.

In the Expression field, type kubelet_node_name and hit Execute. You should see a list of time series with the metric kubelet_node_name that reports the Nodes in your Kubernetes cluster. You can see which node generated the metric and which job scraped the metric in the metric labels:

Prometheus Query Results
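
As one more illustration of PromQL, the following expression sums five-minute CPU usage rates per node, drawing on the container metrics that the cAdvisor job scrapes (described at the beginning of Step 2). This query is an illustrative aside rather than something later steps depend on:

sum by (instance) (rate(container_cpu_usage_seconds_total[5m]))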

Finally, in the top navigation bar, click on Status and then Targets to see the list of targets Prometheus has been configured to scrape. You should see a list of targets corresponding to the list of monitoring endpoints described at the beginning of Step 2.

To learn more about Prometheus and how to query your cluster metrics, consult the official Prometheus docs.

To connect to Alertmanager, which manages Alerts generated by Prometheus, we’ll follow a process similar to the one we used to connect to Prometheus. In general, you can explore Alertmanager Alerts by clicking into Alerts in the Prometheus top navigation bar.

To connect to the Alertmanager Pods, we will once again use kubectl port-forward to forward a local port. If you’re done exploring Prometheus, you can close the port-forward tunnel by hitting CTRL-C or open a new shell to create a new connection.

We are going to forward local port 9093 to port 9093 of the sammy-cluster-monitoring-alertmanager-0 Pod:

  • kubectl port-forward --namespace ${NAMESPACE} sammy-cluster-monitoring-alertmanager-0 9093

You should see the following output:

Output
Forwarding from 127.0.0.1:9093 -> 9093
Forwarding from [::1]:9093 -> 9093

This indicates that local port 9093 is being forwarded successfully to an Alertmanager Pod.

Visit http://localhost:9093 in your web browser. You should see the following Alertmanager Alerts page:

Alertmanager Alerts Page

From here, you can explore firing alerts and optionally silencing them. To learn more about Alertmanager, consult the official Alertmanager documentation.

In the next step, you’ll learn how to optionally configure and scale some of the monitoring stack components.

Step 5 — Configuring the Monitoring Stack (optional)

The manifests included in the DigitalOcean Kubernetes Cluster Monitoring Quickstart repository can be modified to use different container images, different numbers of Pod replicas, different ports, and customized configuration files.

In this step, we’ll provide a high-level overview of each manifest’s purpose, and then demonstrate how to scale Prometheus up to 3 replicas by modifying the master manifest file.

To begin, navigate into the manifests subdirectory in the repo, and list the directory’s contents:

  • cd manifests
  • ls
Output
alertmanager-0serviceaccount.yaml
alertmanager-configmap.yaml
alertmanager-operated-service.yaml
alertmanager-service.yaml
. . .
node-exporter-ds.yaml
prometheus-0serviceaccount.yaml
prometheus-configmap.yaml
prometheus-service.yaml
prometheus-statefulset.yaml

Here you’ll find manifests for the different monitoring stack components. To learn more about specific parameters in the manifests, click into the links and consult the comments included throughout the YAML files:

Alertmanager

Grafana

kube-state-metrics

node-exporter

Prometheus

  • prometheus-0serviceaccount.yaml: The Prometheus Service Account, ClusterRole and ClusterRoleBinding.
  • prometheus-configmap.yaml: A ConfigMap that contains three configuration files:

    • alerts.yaml: Contains a preconfigured set of alerts generated by kubernetes-mixin (which was also used to generate the Grafana dashboards). To learn more about configuring alerting rules, consult Alerting Rules from the Prometheus docs.
    • prometheus.yaml: Prometheus’s main configuration file. Prometheus has been preconfigured to scrape all the components listed at the beginning of Step 2. Configuring Prometheus goes beyond the scope of this article, but to learn more, you can consult Configuration from the official Prometheus docs.
    • rules.yaml: A set of Prometheus recording rules that enable Prometheus to compute frequently needed or computationally expensive expressions, and save their results as a new set of time series. These are also generated by kubernetes-mixin, and configuring them goes beyond the scope of this article. To learn more, you can consult Recording Rules from the official Prometheus documentation; a hypothetical example of the rule file format appears after this list.
  • prometheus-service.yaml: The Service that exposes the Prometheus StatefulSet.

  • prometheus-statefulset.yaml: The Prometheus StatefulSet, configured with 2 replicas. This parameter can be scaled depending on your needs.
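To give a sense of the recording rule format mentioned above, here is a minimal, hypothetical rules file. It is not one of the rules shipped in rules.yaml; it simply precomputes each node’s five-minute CPU utilisation (assuming node-exporter exposes node_cpu_seconds_total) and saves the result as a new time series:

groups:
- name: example-recording-rules
  rules:
  # Hypothetical rule: store 1 - idle CPU fraction per node as a new series
  - record: instance:node_cpu_utilisation:rate5m
    expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))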

Example: Scaling Prometheus

To demonstrate how to modify the monitoring stack, we’ll scale the number of Prometheus replicas from 2 to 3.

Open the sammy-cluster-monitoring_manifest.yaml master manifest file using your editor of choice:

  • nano sammy-cluster-monitoring_manifest.yaml

Scroll down to the Prometheus StatefulSet section of the manifest:

Output
. . .
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: sammy-cluster-monitoring-prometheus
  labels: &Labels
    k8s-app: prometheus
    app.kubernetes.io/name: sammy-cluster-monitoring
    app.kubernetes.io/component: prometheus
spec:
  serviceName: "sammy-cluster-monitoring-prometheus"
  replicas: 2
  podManagementPolicy: "Parallel"
  updateStrategy:
    type: "RollingUpdate"
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
. . .

Change the number of replicas from 2 to 3:

Output
. . .
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: sammy-cluster-monitoring-prometheus
  labels: &Labels
    k8s-app: prometheus
    app.kubernetes.io/name: sammy-cluster-monitoring
    app.kubernetes.io/component: prometheus
spec:
  serviceName: "sammy-cluster-monitoring-prometheus"
  replicas: 3
  podManagementPolicy: "Parallel"
  updateStrategy:
    type: "RollingUpdate"
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
. . .

When you’re done, save and close the file.

Apply the changes using kubectl apply -f:

  • kubectl apply -f sammy-cluster-monitoring_manifest.yaml --namespace default

You can track progress using kubectl get pods. Using this same technique, you can update many of the Kubernetes parameters and much of the configuration for this observability stack.
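For example, the following command lists the Pods in the default namespace used throughout this tutorial, letting you watch the third Prometheus replica come up:

  • kubectl get pods --namespace default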

Conclusion

In this tutorial, you installed a Prometheus, Grafana, and Alertmanager monitoring stack into your DigitalOcean Kubernetes cluster with a standard set of dashboards, Prometheus rules, and alerts.

You may also choose to deploy this monitoring stack using the Helm Kubernetes package manager. To learn more, consult How to Set Up DigitalOcean Kubernetes Cluster Monitoring with Helm and Prometheus. One additional way to get this stack up and running is to use the DigitalOcean Marketplace Kubernetes Monitoring Stack solution, currently in beta.

The DigitalOcean Kubernetes Cluster Monitoring Quickstart repository is heavily based on and modified from Google Cloud Platform’s click-to-deploy Prometheus solution. A full manifest of modifications and changes from the original repository can be found in the Quickstart repo’s changes.txt file.


Python Machine Learning Projects — A DigitalOcean eBook

Machine Learning Projects: Python eBook in EPUB format

Machine Learning Projects: Python eBook in PDF format

Machine Learning Projects: Python eBook in Mobi format

Introduction to the eBook

As machine learning is increasingly leveraged to find patterns, conduct analysis, and make decisions — sometimes without final input from humans who may be impacted by these findings — it is crucial to invest in bringing more stakeholders into the fold. This book of Python projects in machine learning tries to do just that: to equip the developers of today and tomorrow with tools they can use to better understand, evaluate, and shape machine learning to help ensure that it is serving us all.

This book will set you up with a Python programming environment if you don’t have one already, then provide you with a conceptual understanding of machine learning in the chapter “An Introduction to Machine Learning.” What follows next are three Python machine learning projects. They will help you create a machine learning classifier, build a neural network to recognize handwritten digits, and give you a background in deep reinforcement learning through building a bot for Atari.

These chapters originally appeared as articles on DigitalOcean Community, written by members of the international software developer community. If you are interested in contributing to this knowledge base, consider proposing a tutorial to the Write for DOnations program. DigitalOcean offers payment to authors and provides a matching donation to tech-focused nonprofits.

Other Books in this Series

If you are learning Python or are looking for reference material, you can download our free Python eBook, How To Code in Python 3.

For other programming languages and DevOps engineering articles, check out our knowledge base of over 2,100 tutorials.

Download the eBook

You can download the eBook in either the EPUB, PDF, or Mobi format by following the links below.

How to Deploy a Resilient Go Application to DigitalOcean Kubernetes

The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

Introduction

Docker is a containerization tool used to provide applications with a filesystem holding everything they need to run, ensuring that the software will have a consistent run-time environment and will behave the same way regardless of where it is deployed. Kubernetes is a cloud platform for automating the deployment, scaling, and management of containerized applications.

By leveraging Docker, you can deploy an application on any system that supports Docker with the confidence that it will always work as intended. Kubernetes, meanwhile, allows you to deploy your application across multiple nodes in a cluster. Additionally, it handles key tasks such as bringing up new containers should any of your containers crash. Together, these tools streamline the process of deploying an application, allowing you to focus on development.

In this tutorial, you will build an example application written in Go and get it up and running locally on your development machine. Then you’ll containerize the application with Docker, deploy it to a Kubernetes cluster, and create a load balancer that will serve as the public-facing entry point to your application.

Prerequisites

Before you begin this tutorial, you will need the following:

  • A development server or local machine from which you will deploy the application. Although the instructions in this guide will largely work for most operating systems, this tutorial assumes that you have access to an Ubuntu 18.04 system configured with a non-root user with sudo privileges, as described in our Initial Server Setup for Ubuntu 18.04 tutorial.
  • The docker command-line tool installed on your development machine. To install this, follow Steps 1 and 2 of our tutorial on How to Install and Use Docker on Ubuntu 18.04.
  • The kubectl command-line tool installed on your development machine. To install this, follow this guide from the official Kubernetes documentation.
  • A free account on Docker Hub to which you will push your Docker image. To set this up, visit the Docker Hub website, click the Get Started button at the top-right of the page, and follow the registration instructions.
  • A Kubernetes cluster. You can provision a DigitalOcean Kubernetes cluster by following our Kubernetes Quickstart guide. You can still complete this tutorial if you provision your cluster from another cloud provider. Wherever you procure your cluster, be sure to set up a configuration file and ensure that you can connect to the cluster from your development server.

Step 1 — Building a Sample Web Application in Go

In this step, you will build a sample application written in Go. Once you containerize this app with Docker, it will serve My Awesome Go App in response to requests to your server’s IP address at port 3000.

Get started by updating your server’s package lists if you haven’t done so recently:

  • sudo apt update

Then install Go by running:

  • sudo apt install golang

Next, make sure you’re in your home directory and create a new directory which will contain all of your project files:

  • cd && mkdir go-app

Then navigate to this new directory:

  • cd go-app/

Use nano or your preferred text editor to create a file named main.go which will contain the code for your Go application:

  • nano main.go

The first line in any Go source file is always a package statement that defines which code bundle the file belongs to. For executable files like this one, the package statement must point to the main package:

go-app/main.go
package main 

Following that, add an import statement where you can list all the libraries the application will need. Here, include fmt, which handles formatted text input and output, and net/http, which provides HTTP client and server implementations:

go-app/main.go
package main

import (
  "fmt"
  "net/http"
)

Next, define a homePage function which will take in two arguments: http.ResponseWriter and a pointer to http.Request. In Go, a ResponseWriter interface is used to construct an HTTP response, while http.Request is an object representing an incoming request. Thus, this block reads incoming HTTP requests and then constructs a response:

go-app/main.go
. . .

import (
  "fmt"
  "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
  fmt.Fprintf(w, "My Awesome Go App")
}

After this, add a setupRoutes function which will map incoming requests to their intended HTTP handler functions. In the body of this setupRoutes function, add a mapping of the / route to your newly defined homePage function. This tells the application to print the My Awesome Go App message even for requests made to unknown endpoints:

go-app/main.go
. . .

func homePage(w http.ResponseWriter, r *http.Request) {
  fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
  http.HandleFunc("/", homePage)
}

And finally, add the following main function. This will print out a string indicating that your application has started. It will then call the setupRoutes function before listening and serving your Go application on port 3000.

go-app/main.go
. . .

func setupRoutes() {
  http.HandleFunc("/", homePage)
}

func main() {
  fmt.Println("Go Web App Started on Port 3000")
  setupRoutes()
  http.ListenAndServe(":3000", nil)
}

After adding these lines, this is how the final file will look:

go-app/main.go
package main

import (
  "fmt"
  "net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
  fmt.Fprintf(w, "My Awesome Go App")
}

func setupRoutes() {
  http.HandleFunc("/", homePage)
}

func main() {
  fmt.Println("Go Web App Started on Port 3000")
  setupRoutes()
  http.ListenAndServe(":3000", nil)
}

Save and close this file. If you created this file using nano, do so by pressing CTRL + X, Y, then ENTER.

Next, run the application using the following go run command. This will compile the code in your main.go file and run it locally on your development machine:

  • go run main.go
Output
Go Web App Started on Port 3000

This output confirms that the application is working as expected. It will run indefinitely, however, so close it by pressing CTRL + C.
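Note: Whenever the app is running, you can also check its response from a second terminal with curl; it should print My Awesome Go App:

  • curl http://localhost:3000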

Throughout this guide, you will use this sample application to experiment with Docker and Kubernetes. To that end, continue reading to learn how to containerize your application with Docker.

Step 2 — Dockerizing Your Go Application

In its current state, the Go application you just created is only running on your development server. In this step, you’ll make this new application portable by containerizing it with Docker. This will allow it to run on any machine that supports Docker containers. You will build a Docker image and push it to a central public repository on Docker Hub. This way, your Kubernetes cluster can pull the image back down and deploy it as a container within the cluster.

The first step towards containerizing your application is to create a special script called a Dockerfile. A Dockerfile typically contains a list of instructions and arguments that run in sequential order so as to automatically perform certain actions on a base image or create a new one.

Note: In this step, you will configure a simple Docker container that will build and run your Go application in a single stage. If, in the future, you want to reduce the size of the container where your Go applications will run in production, you may want to look into multi-stage builds.
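As a rough sketch of what that could look like (this multi-stage Dockerfile is illustrative only and is not used in this tutorial), the first stage compiles the binary and the second copies just the compiled result into a much smaller base image:

# Build stage: compile the Go binary using the full golang image
FROM golang:1.12.0-alpine3.9 AS builder
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .

# Run stage: copy only the compiled binary into a minimal image
FROM alpine:3.9
COPY --from=builder /app/main /app/main
CMD ["/app/main"]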

Create a new file named Dockerfile:

  • nano Dockerfile

At the top of the file, specify the base image needed for the Go app:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9 

Then create an app directory within the container that will hold the application’s source files:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app

Below that, add the following line which copies everything in the root directory into the app directory:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app

Next, add the following line which changes the working directory to app, meaning that all the following commands in this Dockerfile will be run from that location:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app

Add a line instructing Docker to run the go build -o main command, which compiles the binary executable of the Go app:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .

Then add the final line, which will run the binary executable:

go-app/Dockerfile
FROM golang:1.12.0-alpine3.9
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]

Save and close the file after adding these lines.

Now that you have this Dockerfile in the root of your project, you can create a Docker image based off of it using the following docker build command. This command includes the -t flag which, when passed the value go-web-app, will name the Docker image go-web-app and tag it.

Note: In Docker, tags allow you to convey information specific to a given image, such as its version number. The following command doesn’t provide a specific tag, so Docker will tag the image with its default tag: latest. If you want to give an image a custom tag, you would append the image name with a colon and the tag of your choice, like so:

  • docker build -t sammy/image_name:tag_name .

Tagging an image like this can give you greater control over your images. For example, you could deploy an image tagged v1.1 to production, but deploy another tagged v1.2 to your pre-production or testing environment.

The final argument you’ll pass is the path: .. This specifies that you wish to build the Docker image from the contents of the current working directory. Also, be sure to update sammy to your Docker Hub username:

  • docker build -t sammy/go-web-app .

This build command will read all of the lines in your Dockerfile, execute them in order, and then cache them, allowing future builds to run much faster:

Output
. . .
Successfully built 521679ff78e5
Successfully tagged go-web-app:latest

Once this command finishes building the image, you will be able to see it when you run the docker images command like so:

  • docker images
Output
REPOSITORY         TAG      IMAGE ID       CREATED         SIZE
sammy/go-web-app   latest   4ee6cf7a8ab4   3 seconds ago   355MB

Next, use the following command to create and start a container based on the image you just built. This command includes the -it flag, which specifies that the container will run in interactive mode. It also has the -p flag, which maps port 3000, where the Go application is running on your development machine, to port 3000 in your Docker container:

  • docker run -it -p 3000:3000 sammy/go-web-app
Output
Go Web App Started on Port 3000

If there is nothing else running on that port, you’ll be able to see the application in action by opening up a browser and navigating to the following URL:

http://your_server_ip:3000 

Note: If you’re following this tutorial from your local machine instead of a server, visit the application by instead going to the following URL:

http://localhost:3000 

Your containerized Go App

After checking that the application works as expected in your browser, stop it by pressing CTRL + C in your terminal.

When you deploy your containerized application to your Kubernetes cluster, you’ll need to be able to pull the image from a centralized location. To that end, you can push your newly created image to your Docker Hub image repository.

Run the following command to log in to Docker Hub from your terminal:

  • docker login

This will prompt you for your Docker Hub username and password. After entering them correctly, you will see Login Succeeded in the command’s output.

After logging in, push your new image up to Docker Hub using the docker push command, like so:

  • docker push sammy/go-web-app

Once this command has successfully completed, you will be able to open up your Docker Hub account and see your Docker image there.

Now that you’ve pushed your image to a central location, you’re ready to deploy it to your Kubernetes cluster. First, though, we will walk through a brief process that will make it much less tedious to run kubectl commands.

Step 3 — Improving Usability for kubectl

By this point, you’ve created a functioning Go application and containerized it with Docker. However, the application still isn’t publicly accessible. To resolve this, you will deploy your new Docker image to your Kubernetes cluster using the kubectl command line tool. Before doing this, though, let’s make a small change to the Kubernetes configuration file that will help to make running kubectl commands less laborious.

By default, when you run commands with the kubectl command-line tool, you have to specify the path of the cluster configuration file using the --kubeconfig flag. However, if your configuration file is named config and is stored in a directory named ~/.kube, kubectl will know where to look for the configuration file and will be able to pick it up without the --kubeconfig flag pointing to it.
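Note: If you would rather keep the configuration file somewhere else, you can instead point the KUBECONFIG environment variable at it. The path below is only a placeholder for wherever your file lives:

  • export KUBECONFIG=~/clusterconfig.yaml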

To that end, if you haven’t already done so, create a new directory called ~/.kube:

  • mkdir ~/.kube

Then move your cluster configuration file to this directory, and rename it config in the process:

  • mv clusterconfig.yaml ~/.kube/config

Moving forward, you won’t need to specify the location of your cluster’s configuration file when you run kubectl, as the command will be able to find it now that it’s in the default location. Test out this behavior by running the following get nodes command:

  • kubectl get nodes

This will display all of the nodes that reside within your Kubernetes cluster. In the context of Kubernetes, a node is a server or a worker machine on which one or more pods can be deployed:

Output
NAME                                        STATUS   ROLES    AGE   VERSION
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfd   Ready    <none>   1m    v1.13.5
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfi   Ready    <none>   1m    v1.13.5
k8s-1-13-5-do-0-nyc1-1554148094743-1-7lfv   Ready    <none>   1m    v1.13.5

With that, you’re ready to move on and deploy your application to your Kubernetes cluster. You will do this by creating two Kubernetes objects: one that will deploy the application to some pods in your cluster and another that will create a load balancer, providing an access point to your application.

Step 4 — Creating a Deployment

RESTful resources make up all the persistent entities within a Kubernetes system, and in this context they’re commonly referred to as Kubernetes objects. It’s helpful to think of Kubernetes objects as the work orders you submit to Kubernetes: you list what resources you need and how they should work, and then Kubernetes will constantly work to ensure that they exist in your cluster.

One kind of Kubernetes object, known as a deployment, is a set of identical, indistinguishable pods. In Kubernetes, a pod is a grouping of one or more containers which are able to communicate over the same shared network and interact with the same shared storage. A deployment runs more than one replica of the parent application at a time and automatically replaces any instances that fail, ensuring that your application is always available to serve user requests.

In this step, you’ll create a Kubernetes object description file, also known as a manifest, for a deployment. This manifest will contain all of the configuration details needed to deploy your Go app to your cluster.

Begin by creating a deployment manifest in the root directory of your project: go-app/. For small projects such as this one, keeping them in the root directory minimizes the complexity. For larger projects, however, it may be beneficial to store your manifests in a separate subdirectory so as to keep everything organized.

Create a new file called deployment.yml:

  • nano deployment.yml

Different versions of the Kubernetes API contain different object definitions, so at the top of this file you must define the apiVersion you’re using to create this object. For the purpose of this tutorial, you will be using the apps/v1 grouping as it contains many of the core Kubernetes object definitions that you’ll need in order to create a deployment. Add a field below apiVersion describing the kind of Kubernetes object you’re creating. In this case, you’re creating a Deployment:

go-app/deployment.yml
---
apiVersion: apps/v1
kind: Deployment

Then define the metadata for your deployment. A metadata field is required for every Kubernetes object as it contains information such as the unique name of the object. This name is useful as it allows you to distinguish different deployments from one another and identify them using names that are human-readable:

go-app/deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app

Next, you’ll build out the spec block of your deployment.yml. A spec field is a requirement for every Kubernetes object, but its precise format differs for each type of object. In the case of a deployment, it can contain information such as the number of replicas you want to run. In Kubernetes, a replica is one copy of a pod running in your cluster. Here, set the number of replicas to 5:

go-app/deployment.yml
. . .
metadata:
  name: go-web-app
spec:
  replicas: 5

Next, create a selector block nested under the spec block. This will serve as a label selector for your pods. Kubernetes uses label selectors to define how the deployment finds the pods which it must manage.

Within this selector block, define matchLabels and add the name label. Essentially, the matchLabels field tells Kubernetes what pods the deployment applies to. In this example, the deployment will apply to any pods with the name go-web-app:

go-app/deployment.yml
. . .
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app

After this, add a template block. Every deployment creates a set of pods using the labels specified in a template block. The first subfield in this block is metadata which contains the labels that will be applied to all of the pods in this deployment. These labels are key/value pairs that are used as identifying attributes of Kubernetes objects. When you define your service later on, you can specify that you want all the pods with this name label to be grouped under that service. Set this name label to go-web-app:

go-app/deployment.yml
. . .
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
  template:
    metadata:
      labels:
        name: go-web-app

The second part of this template block is the spec block. This is different from the spec block you added previously, as this one applies only to the pods created by the template block, rather than the whole deployment.

Within this spec block, add a containers field and once again define a name attribute. This name field defines the name of any containers created by this particular deployment. Below that, define the image you want to pull down and deploy. Be sure to change sammy to your own Docker Hub username:

go-app/deployment.yml
. . .
  template:
    metadata:
      labels:
        name: go-web-app
    spec:
      containers:
      - name: application
        image: sammy/go-web-app

Following that, add an imagePullPolicy field set to IfNotPresent which will direct the deployment to only pull an image if it has not already done so before. Then, lastly, add a ports block. There, define the containerPort which should match the port number that your Go application listens on. In this case, the port number is 3000:

go-app/deployment.yml
. . .
    spec:
      containers:
      - name: application
        image: sammy/go-web-app
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3000

The full version of your deployment.yml will look like this:

go-app/deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
spec:
  replicas: 5
  selector:
    matchLabels:
      name: go-web-app
  template:
    metadata:
      labels:
        name: go-web-app
    spec:
      containers:
      - name: application
        image: sammy/go-web-app
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3000

Save and close the file.

Next, apply your new deployment with the following command:

  • kubectl apply -f deployment.yml

Note: For more information on all of the configuration available to you for deployments, please check out the official Kubernetes documentation here: Kubernetes Deployments
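Before moving on, you can confirm that the deployment was created and that its pods are starting; the go-web-app deployment should eventually report 5 ready replicas:

  • kubectl get deployments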

In the next step, you’ll create another kind of Kubernetes object which will manage how you access the pods that exist in your new deployment. This service will create a load balancer which will then expose a single IP address, and requests to this IP address will be distributed to the replicas in your deployment. This service will also handle port forwarding rules so that you can access your application over HTTP.

Step 5 — Creating a Service

Now that you have a successful Kubernetes deployment, you’re ready to expose your application to the outside world. In order to do this, you’ll need to define another kind of Kubernetes object: a service. This service will expose the same port on all of your cluster’s nodes. Your nodes will then forward any incoming traffic on that port to the pods running your application.

Note: For clarity, we will define this service object in a separate file. However, it is possible to group multiple resource manifests in the same YAML file, as long as they’re separated by ---. See this page from the Kubernetes documentation for more details.

Create a new file called service.yml:

  • nano service.yml

Start this file off by again defining the apiVersion and the kind fields in a similar fashion to your deployment.yml file. This time, point the apiVersion field to v1, the Kubernetes API commonly used for services:

go-app/service.yml
---
apiVersion: v1
kind: Service

Next, add the name of your service in a metadata block as you did in deployment.yml. This could be anything you like, but for clarity we will call it go-web-service:

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service

Next, create a spec block. This spec block will be different than the one included in your deployment, and it will contain the type of this service, as well as the port forwarding configuration and the selector.

Add a field defining this service’s type and set it to LoadBalancer. This will automatically provision a load balancer that will act as the main entry point to your application.

Warning: The method for creating a load balancer outlined in this step will only work for Kubernetes clusters provisioned from cloud providers that also support external load balancers. Additionally, be advised that provisioning a load balancer from a cloud provider will incur additional costs. If this is a concern for you, you may want to look into exposing an external IP address using an Ingress.

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer

Then add a ports block where you’ll define how you want your apps to be accessed. Nested within this block, add the following fields:

  • name, pointing to http
  • port, pointing to port 80
  • targetPort, pointing to port 3000

This will take incoming HTTP requests on port 80 and forward them to the targetPort of 3000. This targetPort is the same port on which your Go application is running:

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 3000

Lastly, add a selector block as you did in the deployment.yml file. This selector block is important, as it maps any deployed pods named go-web-app to this service:

go-app/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 3000
  selector:
    name: go-web-app

After adding these lines, save and close the file. Following that, apply this service to your Kubernetes cluster by once again using the kubectl apply command like so:

  • kubectl apply -f service.yml

This command will apply the new Kubernetes service as well as create a load balancer. This load balancer will serve as the public-facing entry point to your application running within the cluster.

To view the application, you will need the new load balancer’s IP address. Find it by running the following command:

  • kubectl get services
Output
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
go-web-service   LoadBalancer   10.245.107.189   203.0.113.20   80:30533/TCP   10m
kubernetes       ClusterIP      10.245.0.1       <none>         443/TCP        3h4m

You may have more than one service running, but find the one labeled go-web-service. Find the EXTERNAL-IP column and copy the IP address associated with go-web-service. In this example output, this IP address is 203.0.113.20. Then, paste the IP address into the URL bar of your browser to view the application running on your Kubernetes cluster.

Note: When Kubernetes creates a load balancer in this manner, it does so asynchronously. Consequently, the kubectl get services command’s output may show the EXTERNAL-IP address of the LoadBalancer remaining in a <pending> state for some time after running the kubectl apply command. If this is the case, wait a few minutes and try re-running the command to ensure that the load balancer was created and is functioning as expected.
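Rather than re-running the command manually, you can also watch the service until the external IP is assigned by using kubectl’s --watch flag:

  • kubectl get services --watch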

The load balancer will take in the request on port 80 and forward it to one of the pods running within your cluster.
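You can also test this entry point from the command line with curl, substituting your own load balancer IP for the example address used above; the response should be My Awesome Go App:

  • curl http://203.0.113.20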

Your working Go App!

With that, you’ve created a Kubernetes service coupled with a load balancer, giving you a single, stable entry point to your application.

Conclusion

In this tutorial, you’ve built a Go application, containerized it with Docker, and then deployed it to a Kubernetes cluster. You then created a load balancer that provides a resilient entry point to this application, ensuring that it will remain highly available even if one of the nodes in your cluster fails. You can use this tutorial to deploy your own Go application to a Kubernetes cluster, or continue learning other Kubernetes and Docker concepts with the sample application you created in Step 1.

Moving forward, you could map your load balancer’s IP address to a domain name that you control so that you can access the application through a human-readable web address rather than the load balancer IP. Additionally, the following Kubernetes tutorials may be of interest to you:

Finally, if you’d like to learn more about Go, we encourage you to check out our series on How To Code in Go.
