Innovation

Deploying Your App Using Shipa and Azure Pipelines

IaC with TF, Shipa, and Azure Pipelines

In this article, you will learn how to deploy an app to Kubernetes, in an IaC way, using Shipa and Azure Pipelines. Shipa is a unique product that solves one of the main issues developers face while building Cloud Native applications on Kubernetes: for most new developers, learning Kubernetes quickly is a difficult task, and that is where Shipa comes to the rescue. Shipa provides a platform that abstracts the underlying Kubernetes objects, enabling developers to focus on application development rather than on learning the entire, complex Kubernetes ecosystem. If you would like to know more about Shipa and its configuration, check out my previous articles 1/2, which explain the basic setup, configuration, and sample code.

The entire code repository for the example is hosted on Azure Repos. To illustrate the workflow, we will deploy a containerized application to a Kubernetes cluster using the Shipa Terraform provider, fully automated with Azure Pipelines.

Pre-requisites

  • Azure DevOps Services (Register)
  • Shipa CLI — You can download the CLI from here
  • Shipa Cloud (Register)
  • Azure Portal, Azure CLI, Azure Blob Storage service(Backend for Terraform)
  • A Kubernetes Cluster to Deploy

Platform Setup

A few infrastructure components need to be set up initially:

  1. Azure Blob Storage

You need an Azure account to create the storage. I am using Blob Storage as the Terraform backend. Use the Azure CLI to create a Service Principal, which enables us to create resources on Azure programmatically. The steps are as follows:

a. Azure Service Principal Creation

az ad sp create-for-rbac -n "Azure-Admin" --role Owner --scopes /subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup1}

b. Export Service principal credentials from the query above

export AZURE_SUBSCRIPTION_ID=''
export AZURE_TENANT_ID=''
export AZURE_CLIENT_ID=''
export AZURE_CLIENT_SECRET=''
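The values come from the JSON that `az ad sp create-for-rbac` prints. As a rough sketch of the mapping (the placeholder values below are made up for illustration):

```shell
#!/bin/bash
# Example JSON as printed by `az ad sp create-for-rbac` (placeholder values):
SP_JSON='{"appId": "00000000-0000-0000-0000-000000000000", "password": "secret", "tenant": "11111111-1111-1111-1111-111111111111"}'

# appId -> client id, password -> client secret, tenant -> tenant id
export AZURE_CLIENT_ID=$(echo "$SP_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["appId"])')
export AZURE_CLIENT_SECRET=$(echo "$SP_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["password"])')
export AZURE_TENANT_ID=$(echo "$SP_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["tenant"])')
echo "client_id=$AZURE_CLIENT_ID"
```

The subscription ID is not part of that output; you can fetch it separately with `az account show --query id -o tsv`.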

c. Use the script below to create the Azure Blob Storage account, and copy the storage account key printed at the end

#!/bin/bash
RESOURCE_GROUP_NAME=rg-experiments-sea
STORAGE_ACCOUNT_NAME=terraformblobstoragedev
CONTAINER_NAME=terraform
LOCATION=southeastasia

# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location $LOCATION

# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob

# Get storage account key
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)

# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY

echo "storage_account_name: $STORAGE_ACCOUNT_NAME"
echo "container_name: $CONTAINER_NAME"
echo "access_key: $ACCOUNT_KEY"
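With these outputs in hand, the TF scripts can point their backend at this container. A minimal sketch of the backend block, assuming the names from the script above (the state blob name `terraform.tfstate` is my own choice):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-experiments-sea"
    storage_account_name = "terraformblobstoragedev"
    container_name       = "terraform"
    key                  = "terraform.tfstate" # name of the state blob; pick your own
  }
}
```

Rather than committing the key to the repo, the access key can be supplied at `terraform init` time through the `ARM_ACCESS_KEY` environment variable.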

2. Azure Pipelines

Azure Pipelines is one of the services in the Azure DevOps Services suite, and it helps you orchestrate CI/CD. For building and deploying your application, you may use self-hosted agents or Microsoft-hosted agents. In this case, I am using custom-built, containerized Azure DevOps agents with the necessary packages (the Terraform CLI and Azure CLI). The agents run as pods in a Kubernetes cluster. The image repository is on GitHub and the images are stored in the GitHub Container Registry. The basic setup steps for Azure DevOps agents can be found in the Microsoft docs here.

Below is a reference for the pod template that I am using for running the agents:

Gist:

https://gist.githubusercontent.com/mysticrenji/adb267cc3283268a494911e7c721936c/raw/8d239459df69a0f28ee608c467430c68fc4c2789/azure_devops_agents_pod.yaml

3. Shipa CLI

Your machine requires the Shipa CLI to fetch the auth token used to communicate with Shipa Cloud. After registering an account on Shipa Cloud, you can log in with your credentials in the CLI to retrieve the auth token. The token will later be used in the TF script.

shipa target add shipa-cloud target.shipa.cloud --set-current
shipa login
shipa token show
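The token from `shipa token show` then feeds the provider configuration in the TF scripts. A sketch of what that can look like; the provider source address and the `host`/`token` attribute names are assumptions here, so check them against the Shipa provider documentation before use:

```hcl
terraform {
  required_providers {
    shipa = {
      source = "shipa-corp/shipa" # verify the exact source address in the registry
    }
  }
}

provider "shipa" {
  host  = "https://target.shipa.cloud:8081" # Shipa Cloud target; port is an assumption
  token = var.shipa_token                   # value from `shipa token show`
}
```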

4. Kubernetes Cluster

For Shipa to safely manage and deploy applications to Kubernetes, it requires some specific CRDs to be installed. For a quick install, you may use the one-liner below. Feel free to explore the contents of the Gist before applying it.

Gist:

https://gist.githubusercontent.com/brunoa19/25505efd56d46472b092adaf83fe56dd/raw/ba1a0a024237c90dd47266735323d882d07e4c4d/shipa-service.sh

After applying it, please take note of the cluster address, token, and certificate. We will use these values in the TF variables so that the Shipa provider can communicate with and manage the Kubernetes cluster.
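One straightforward way to carry those three values into the TF scripts is as plain input variables. The variable names below are my own choice for illustration, not names required by the provider:

```hcl
variable "cluster_address" {
  type        = string
  description = "Kubernetes API server address noted after running the Shipa service script"
}

variable "cluster_token" {
  type        = string
  sensitive   = true
  description = "Service account token Shipa uses to talk to the cluster"
}

variable "cluster_certificate" {
  type        = string
  sensitive   = true
  description = "CA certificate of the cluster"
}
```

Marking the token and certificate as `sensitive` keeps them out of plan output.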

Automating your IaC

I am using Azure Pipelines to deploy the policy frameworks and the application to the Kubernetes cluster. The pipelines are in YAML format, and it is easy to add stages with the built-in editor. You can see the code in the Azure Repos service.

Azure Repos

Please note: if you would like to dig deeper into the TF scripts used in the example, refer to this section.

Variables

There are quite a few variable groups used in the pipeline, for the Terraform configuration and for the application/policy deployment values. Variable groups in Azure Pipelines are collections of variables grouped together for ease of injection into the pipeline.

Variable group for Terraform/Shipa config
Variable group for application/policy changes

Pipelines

Basically, we have 4 different stages in the pipeline.

Stages in the IaC Azure Pipelines
  • First Stage — Terraform Config Validation
    This stage initializes the providers, and lints and validates the Terraform scripts.
  • Second Stage — Terraform Plan
    A plan is generated and pushed as an artifact in Azure Pipelines. The variables are injected into the stage as environment variables, which Terraform automatically maps to the corresponding variables in the TF script. The TF_VAR_ prefix is used because I wanted to pass the values separately at each stage. This plan is used in the next stage for validation.
  • Third Stage — Terraform Plan Validation
    The Terraform plan is peer-reviewed, and manual approval is required to proceed to the next stage. This extra check confirms that the plan will apply correctly, without deleting any existing resources.
  • Fourth Stage — Terraform Apply
    In this stage, the plan is applied to the Kubernetes Cluster.
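The TF_VAR_ mechanism from the second stage can be sketched as below; the variable name `app_name` is a hypothetical example, not one of the actual pipeline variables:

```shell
#!/bin/bash
# Terraform automatically maps the environment variable TF_VAR_app_name
# onto `variable "app_name" {}` in the TF script, with no -var flags needed.
export TF_VAR_app_name="sample-app"

# terraform plan -out=tfplan   # would pick up app_name from the environment
echo "TF_VAR_app_name=$TF_VAR_app_name"
```

In the pipeline, the exports come from the variable groups instead of literals.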

The YAML file for the pipeline is below. The contents are ok to read ;), though there are a few repetitions of variables since I pass them at each stage. You may also have noticed that I initialize the Terraform providers at each stage. The reason is that the agent and workspace differ at each stage, since everything runs inside Kubernetes and I didn't allow data to be persisted on the Kubernetes nodes.
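Schematically, the four stages follow the shape below; the stage and job names are illustrative rather than the exact ones from my pipeline:

```yaml
stages:
  - stage: Validate
    jobs:
      - job: validate
        steps:
          - script: terraform init && terraform validate
  - stage: Plan
    dependsOn: Validate
    jobs:
      - job: plan
        steps:
          - script: terraform init && terraform plan -out=tfplan
          - publish: tfplan      # push the plan as a pipeline artifact
            artifact: tfplan
  - stage: PlanValidation
    dependsOn: Plan
    jobs:
      - job: waitForValidation
        pool: server             # ManualValidation runs in an agentless job
        steps:
          - task: ManualValidation@0
            inputs:
              instructions: 'Review the Terraform plan before apply'
  - stage: Apply
    dependsOn: PlanValidation
    jobs:
      - job: apply
        steps:
          - download: current    # fetch the plan artifact from the Plan stage
            artifact: tfplan
          - script: terraform init && terraform apply tfplan
```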

Gist:

https://gist.githubusercontent.com/mysticrenji/9fa89b4d98ee7a38fec10f265bc858b2/raw/a1919740799f5606af46725f851523aed8469225/azure-pipelines.yaml

Conclusion

To wrap up: through this article, you have learned how to use Shipa to deploy your applications/frameworks in an IaC way, and how to use Azure Pipelines to run the deployment consistently whenever the code changes, following CI/CD principles. As an extension of this, I will experiment with the functionality of the Shipa CLI and use it to deploy through Azure Pipelines. That is for another time :). Thanks for reading; feedback is appreciated.

References

  1. https://dev.azure.com/renjiravindranathan/app-release-shipa
  2. https://github.com/mysticrenji/azdevopsagent_dockerized
  3. https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser