Another post inspired by our weekly internal enablement office hours [should we open this up to the public?] and a few conversations at DevOps Days Atlanta. Talking about the Kubernetes experience can elicit some sighs. In our weekly enablement office hours, the blunt question was asked: “well, why is Kubernetes so difficult?”. Industry thought leaders like to say that Kubernetes is a platform for building other platforms, and its proliferation has opened up new opportunities at cloud-native scale. But like anything in computer science, you never get rid of complexity; you just shift complexity around. Let’s take a look at some history, exactly why Kubernetes can be difficult, and how you can improve your Developer Experience [DX] on K8s.
The Road to Kubernetes
Projects that heavily influenced Kubernetes were Facebook’s Tupperware, Google’s Borg, and Microsoft’s Apollo. These projects were workload and job schedulers for massive cloud-scale workloads and core to each firm’s compute strategy. Kubernetes was born from the influence of these projects.
The project moved notoriously fast, especially in its early years, which is not unusual for a fast-growing and rapidly evolving project. In the “BK” era, i.e. the time Before Kubernetes, workloads and application infrastructure looked like the below [which is for our upcoming SUSE Master Class]. Each component had a separate lifecycle, configuration, and syntax to interact with. Clustering was very application-specific.
With Kubernetes, i.e. the “K” era, the markup language YAML can now replace a lot of application infrastructure decisions. Need more than one instance of your application? Just increase the replica count from one to two in the below YAML. This is the declarative nature of Kubernetes: author a given state, and Kubernetes will try, to the best of its ability and given resources, to meet your desired state.
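As a minimal sketch of what that looks like (the application and image names are illustrative), here is a Deployment whose replica count has been bumped from one to two:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # illustrative name
spec:
  replicas: 2                 # was 1; Kubernetes reconciles toward this desired state
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder image
```

Apply this manifest and the control plane continuously works to keep two Pods running, replacing any that fail; that reconciliation loop is the declarative model in action.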
With multiple pieces of the application and application infrastructure stack now describable in YAML, a single language that both development and operations can use, all of our problems are solved with Kubernetes, right?
Solve All My Problems K8s!
Let’s say the year is 2010 and you are an eCommerce company having some logistics problems. Just fire up Hadoop and all your business problems are solved, right? Well, Hadoop by itself wouldn’t magically solve your logistics problems just by being able to process large amounts of data. Similarly, simply leveraging Kubernetes today will not make all of your problems go away.
There are many capabilities of Kubernetes that draw users in and increase adoption. Kubernetes workloads are more portable; you can run Kubernetes on a multitude of infrastructures, from public clouds to bare metal, so going from one cluster to the next should, in theory, be easier. At its root, Kubernetes is a container orchestrator. Containers themselves are designed to be more portable and achieve higher densities than virtual machines.
As described in the previous section, expressing everything in a common language, i.e. YAML, does help both sides of the table, development and operations, speak something of a shared language. There are some inherent challenges, though: what happened before Kubernetes does not magically go away with Kubernetes.
Challenges with Kubernetes
Kubernetes is a powerful platform but also carries a large amount of complexity. An inception problem with configuration-based systems is who authors the YAML, and how expertise gets disseminated across domains. There are even entire Twitter accounts dedicated to the challenges of Kubernetes.
Take one facet of non-functional requirements for applications, say networking. Your networking policies and business controls did not change overnight when you introduced Kubernetes. What was baked in during the “BK” era now has to be adopted, and even evolved, into the Kubernetes world. And since this is all YAML, why get a networking engineer involved when a software developer can write the same YAML?
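To make that concrete, here is a hedged sketch of the kind of YAML that encodes a networking business control (all names, labels, and ports below are hypothetical): a NetworkPolicy that only allows the frontend to talk to a payments service. Whether this actually matches the firewall rules your organization had in the BK era is exactly the question a networking engineer should be answering.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-frontend   # illustrative policy name
spec:
  podSelector:
    matchLabels:
      app: payments               # hypothetical workload this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend       # only Pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080              # placeholder service port
```

A developer can certainly type this out, but deciding which flows should be allowed is a networking and compliance decision, not a syntax exercise.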
Typically, software engineers such as myself do not have deep networking skills. Engineering is about specialty, and collaboration is key. This challenge repeats over and over with most non-functional requirements: scaling, security, storage, etc.
Kubernetes also has a very pluggable architecture. If you don’t like an opinion baked into your cluster, you can change it. Want to add another Ingress Controller like NGINX? You are more than free to do that. Looking at the CNCF Landscape, engineers are inundated with choices. A duality of users starts to appear in Kubernetes: those who maintain the platform, ensuring capacity and uptime, and those whose workloads need to get onto the cluster. Even though the universal language is YAML, that is still a split-brain/split-org problem.
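For example, once an NGINX Ingress Controller has been installed on a cluster, routing traffic to a workload is just more YAML. A sketch, assuming the controller is registered under the ingress class `nginx` (the hostname and Service name here are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress            # illustrative name
spec:
  ingressClassName: nginx         # assumes an NGINX Ingress Controller is installed
  rules:
    - host: my-app.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app      # hypothetical Service fronting the workload
                port:
                  number: 80
```

The platform team decides which controller to run and maintain; the application team writes manifests like this one. Both artifacts are YAML, yet they live on opposite sides of the split-org problem.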
The challenges in Kubernetes boil down to two parts, no different than for any other new platform looking to be adopted: how expertise gets disseminated across the system, and how to maintain the requirements, policies, and business controls that were in place in the BK era. If this feels daunting, don’t worry, Shipa is here to help.
Tame the Chaos of Complexity with Shipa
With Shipa, you can simply deploy to Kubernetes in a matter of moments without having to wade into decisions about networking, storage, auto-scaling, and deployment schemas. Shipa’s purpose is to make Kubernetes an afterthought for developers.
Simply provide an image, or build from source in your favorite language, and Shipa will take care of the rest. Tame complexity for your internal customers today with Shipa.