These solutions help you run containerized workloads without having to dive into all the intricacies of cluster management and worry about K8s updates and patches.
But the transition to Managed Kubernetes inevitably raises a question for customers: how do you organize the move from your current technology stack correctly, at the lowest cost and with the least impact on users?
In this article I will cover:
- what migration strategies to Managed Kubernetes exist,
- what determines the complexity and duration of a migration,
- what stages it consists of,
- what problems may arise during the migration and how to deal with them.
Types Of Migration
Two main criteria determine the migration strategy and its complexity: how developed the DevOps culture is in the company and what state the applications are in at the moment. Several scenarios are possible:
The Company Has Not Used Orchestrators And Containers Before; Applications Are Mostly Monolithic
The most challenging starting point for migration: it requires partial or complete refactoring of the applications and containerizing them.
The Company Did Not Use Orchestrators, But The Applications Are Built On Top Of Microservices And Containers
A slightly more straightforward option than the previous one. The preparatory work has already been done; what remains is to configure and deploy the Kubernetes cluster and write Kubernetes manifests for deploying the application.
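For teams in this situation, a minimal Deployment manifest is often the first thing to write. Here is a sketch (the name `web-app`, the registry, and the port are hypothetical placeholders for your existing container image):

```yaml
# Minimal Deployment manifest for an already-containerized service
# (all names, the image reference, and the port are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0.0  # your existing image
          ports:
            - containerPort: 8080
```

In practice you would add a Service (and usually an Ingress) on top of this to expose the application, but the Deployment above is the core building block.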
The Company Used An Orchestrator Other Than Kubernetes: Docker Swarm, Mesos/Marathon, Nomad, Etc.
Another problematic scenario, though it can be considered a little easier than the previous two because the team already has experience with containers and orchestrators. However, keep in mind that the concepts and technologies behind other orchestrators can differ significantly from those used in K8s. For example, Kubernetes does not support the two-level scheduling implemented in Mesos. These and other differences will have to be taken into account when rewriting YAML files.
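To give a feel for the rewriting involved, here is a sketch of how a Docker Swarm service with `deploy.replicas: 2` might translate into a Kubernetes Deployment (the service name `api` and the image are hypothetical; the mapping is illustrative, not mechanical):

```yaml
# Rough Kubernetes translation of a Swarm service that declared
# `deploy.replicas: 2` (names and image are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2            # Swarm: deploy.replicas
  selector:
    matchLabels:
      app: api           # Swarm has no label selectors; K8s requires them
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0
```

Some Swarm concepts, such as `restart_policy.condition`, have no direct one-to-one equivalent: a Deployment's pods always use `restartPolicy: Always`, so such behavior has to be rethought rather than copied over.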
The Company Used One Of The Kubernetes Distributions: Rancher, OpenShift And Others
While this migration option is easier than migrating from another orchestrator, it is still not trivial. Rancher, OpenShift, and other similar tools are Kubernetes-based, but there can be significant differences between them and vanilla K8s. For example, the DeploymentConfig object used to deploy services in OpenShift differs from the Kubernetes Deployment. The situation is similar with other distributions.
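To illustrate, here is a sketch of an OpenShift DeploymentConfig with comments marking the parts that differ from a vanilla `apps/v1` Deployment (the name `web` and the image are hypothetical):

```yaml
# OpenShift DeploymentConfig sketch; comments note differences
# from a vanilla Kubernetes Deployment (names/image hypothetical)
apiVersion: apps.openshift.io/v1   # a Deployment uses apps/v1
kind: DeploymentConfig             # a Deployment uses kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:            # plain label map; a Deployment requires selector.matchLabels
    app: web
  triggers:            # ConfigChange/ImageChange triggers do not exist in vanilla K8s
    - type: ConfigChange
  strategy:
    type: Rolling      # a Deployment calls this strategy RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
```

Migrating such objects to vanilla K8s means converting them to Deployments and replacing triggers with your CI/CD pipeline's own rollout logic.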
The Company Used “Vanilla” Kubernetes On-Premise Or In The Cloud (And Wants To Change The Provider, For Example, For Financial Reasons)
The simplest migration scenario: you move an existing cluster deployed on-premises with kops or kubeadm, or in another cloud. As a rule, it requires only minor adjustments to YAML files and switching traffic to new addresses. The situation can become more complicated if some proprietary services of the previous cloud provider are involved.
But this will most likely have nothing to do with Kubernetes itself, since providers try to keep their managed solutions portable and compatible with the open-source version of K8s.
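The "minor adjustments" are usually the provider-specific bits of your manifests. A sketch of two common ones (the storage class and load-balancer annotation shown are AWS examples; your provider's values will differ):

```yaml
# Provider-specific fields that typically need changing when
# switching clouds (values here are illustrative AWS examples)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: gp2   # AWS-specific class; replace with the new provider's class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # load-balancer annotations are provider-specific and must be replaced
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Auditing manifests for `storageClassName`, load-balancer annotations, and ingress classes before the cutover usually covers most of the provider lock-in at the YAML level.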
Thus, the transition is easiest when the company has already used K8s (locally or in the cloud) and hardest when there was no containerization at all.
However, having DevOps professionals experienced with Kubernetes can shift the ranking described above. Typically, the level of DevOps culture correlates with the use of containerization and orchestration: in the scenarios I have described, it increases from option 1 to option 5. But there are exceptions. For example, migrating from "pure" containerization (without orchestrators) with sufficient in-house K8s expertise can be much easier than migrating from other orchestrators or Kubernetes distributions.