GitOps for developers today
The idea of fully managing applications, in addition to infrastructure, using a Git-based workflow, known as GitOps, has been gaining a lot of traction recently. We are seeing an increasing number of users connect their Shipa accounts with tools such as ArgoCD and FluxCD.
Based on that interest, we conducted multiple user interviews to understand the challenges teams face when implementing GitOps, especially those introduced for or faced by their developers. Two of the main challenges that surfaced were:
- Defining and deploying applications with GitOps is very different on Linux VMs than on Kubernetes, which introduces complexity.
- Developers lack a centralized application portal to observe and support their applications, no matter where they are deployed.
While GitOps is a very desirable continuous deployment method, it is no secret that the tools mentioned above are primarily used for deploying applications on Kubernetes. Let's face it: you probably still have plenty of virtual machines that your developers deploy applications to.
The goal for DevOps and platform engineers is to build an extensible platform that allows their teams to adopt technologies such as GitOps without degrading the developer experience, but this Kubernetes-only limitation makes that difficult.
So the question is, how do you extend GitOps beyond Kubernetes without increasing complexity?
Working towards a solution
The solution proposed here allows you to:
- Define a standard application definition that can be leveraged by GitOps tools, such as ArgoCD and FluxCD, to deploy to Kubernetes and Linux-based servers or virtual machines.
- Provide all the GitOps goodies, such as drift control, ensuring deployed apps always match the desired state declared in Git.
- Give developers an Internal Developer Platform (IDP) to manage and support their applications post-deployment, whether deployed on Kubernetes or Linux-based servers.
GitOps workflow components
Before we talk about how to implement this solution, let's understand the components involved:
- Shipa Cloud: An Application as Code platform that helps you implement a standard application definition used by any pipeline. You can create a free account here: https://apps.shipa.cloud.
- K8s GitOps Cluster: The central “GitOps” cluster where I installed both ArgoCD and Shipa's provider. You can find more information on how to install the Shipa provider here: Shipa Provider Install.
- K8s-app1 Repo: A sample application repo with my application definition. We will use this definition to deploy this application to Kubernetes using ArgoCD. You can find and clone/fork the app definition here: git
- Sn-app2 Repo: Another sample application repo with a similar application definition. We will use this definition to deploy this application to an Amazon EC2 machine using ArgoCD. You can find and clone/fork the app definition here: git
- K8s App Cluster: A Kubernetes cluster hosted on GKE connected to a Shipa policy framework. We will deploy our K8s-app1 application to this cluster.
- EC2 Machine: A Linux machine hosted on AWS EC2 and connected to a Shipa policy framework. We will deploy our sn-app2 application to this server.
For this example, we assume you are already familiar with the concept of Shipa policy frameworks. If not, you can find detailed info here:
- Creating policy frameworks: Framework Management (shipa.io)
- Binding policy frameworks to clusters: Connecting Clusters (shipa.io)
- Binding policy frameworks to Linux servers: Shipa Node Management
Standard application definition for GitOps
As mentioned above, you can fork the application definition for both applications from here:
Keeping in mind that we will deploy these applications to completely different environments, let's inspect the definitions for each:
Here are the definitions expressed in each file, aside from the Kubernetes boilerplate YAML:
- Name: The name you want to use for your application or service
- Team Owner: In case you are part of multiple teams, which team will own the app
- Framework: Which policy framework should this application be bound to on Shipa
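To make those three fields concrete, here is a minimal sketch of what such an application definition could look like. The `apiVersion`, `kind`, and exact field names below are illustrative assumptions, not the authoritative Shipa schema; refer to the Shipa provider documentation for the real resource spec.

```yaml
# Hypothetical sketch of an application creation definition.
# apiVersion, kind, and field names are assumptions for illustration;
# check the Shipa provider docs for the actual schema.
apiVersion: shipa.crossplane.io/v1alpha1
kind: App
metadata:
  name: k8s-app1
spec:
  forProvider:
    name: k8s-app1        # the name you want for your application
    teamOwner: dev-team   # which of your teams owns the app
    framework: gitops-fw  # the Shipa policy framework to bind to
```

Because the definition only references a policy framework by name, the same shape works whether that framework is bound to a Kubernetes cluster or to a Linux server.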
That’s it! And what’s the difference between the definition of the application to be deployed on Kubernetes vs. the application to be deployed on EC2? None!
With the application creation defined and understood, let’s check the application deployment definition:
Again, aside from the Kubernetes boilerplate, we have:
- App: The name of the app you chose in the previous file
- Image: The image address that should be deployed for your application or service
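As a companion sketch, the deployment definition carrying those two fields might look like this. Again, the `apiVersion`, `kind`, field names, and sample image are illustrative assumptions rather than the exact Shipa provider schema.

```yaml
# Hypothetical sketch of an application deployment definition.
# Resource names and the sample image are placeholders.
apiVersion: shipa.crossplane.io/v1alpha1
kind: AppDeploy
metadata:
  name: k8s-app1-deploy
spec:
  forProvider:
    app: k8s-app1                         # the app name chosen in the previous file
    image: docker.io/example/sample:latest  # the image to deploy (placeholder)
```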
Those are the basic requirements to have an application deployed. You can find additional definitions such as port, CNAME, network policy, and more here: Application Management.
As noted before, there is no difference in definition between the two apps.
By introducing a standard application definition, DevOps teams can freely move workloads from EC2 to GCE, from a VMware VM to Kubernetes, and so on, without impacting developers in any way. It's a win for DevOps teams: you can evolve your infrastructure quickly while giving developers a consistent experience.
Deploying the applications
As mentioned earlier, we assume you already have ArgoCD installed on your cluster. If not, you can find more information here: ArgoCD (shipa.io)
With ArgoCD installed, and its UI available, we can then create and deploy our applications.
Yeah, yeah, I know you can create a definition file and use patterns such as app of apps, but to keep things simple, we will use ArgoCD's UI here.
Deploying the k8s-app1 application
Using ArgoCD’s UI, I define my application using some of the basic options:
Nothing special there, but one thing you will notice is the Cluster URL in the Destination section.
I use https://kubernetes.default.svc instead of pointing ArgoCD at the specific cluster I want to deploy this application to. ArgoCD communicates locally with the Shipa provider, which sends the requests to the Shipa Cloud API. Hence there is no need to configure multiple clusters in ArgoCD, or to run multiple ArgoCD instances spread across clusters.
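For reference, the configuration created through the UI corresponds roughly to an ArgoCD Application manifest like the one below. The repository URL, names, and sync options are placeholders I chose for illustration; only the local-cluster destination is the point being made.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: k8s-app1
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/k8s-app1.git  # placeholder repo
    targetRevision: HEAD
    path: .
  destination:
    # Local cluster: ArgoCD applies the manifests here, where the Shipa
    # provider picks them up and forwards requests to the Shipa Cloud API.
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true  # drift control: revert changes made outside Git
      prune: true     # remove resources deleted from Git
```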
Once I click on CREATE, I can see the app as healthy on the ArgoCD dashboard:
If I check my Shipa Cloud dashboard, I can see my application created, deployed, and with all the necessary information I need to manage it:
From the Shipa dashboard, the developers in my team or I can see the application lifecycle information, connect it to incident management tools, see logs, and more.
How much Kubernetes or underlying infrastructure knowledge do they need to deploy and manage the application? Zero!
Deploying the sn-app2 application
We can use the same ArgoCD UI config as before to deploy the sn-app2 to EC2:
Once I click on CREATE, I can see the application with a healthy status on my ArgoCD dashboard:
Just as with the previously deployed application, we can see our sn-app2 application and manage it through Shipa's dashboard:
As noted before, the underlying infrastructure that you decide to use does not impact how developers in your team deploy and manage their applications. They have a consistent experience while you are free to implement GitOps beyond Kubernetes.
Although we used ArgoCD, GKE, and EC2 for this example, you are free to swap components and use FluxCD, AKS, and others. That's the main point of this solution: you can choose the components that best fit your architecture while keeping a consistent experience.
In upcoming posts, we will cover additional topics, such as defining policy-as-code with a consistent experience and definition across multiple tools, including GitOps. Stay tuned!