
2 Common Mistakes When Implementing Kubernetes And How To Avoid Them


Dmitry Lazarenko, Product Director at VK Cloud Solutions (formerly MCS), talks about typical strategic miscalculations when implementing Kubernetes in large projects.

There are unobvious difficulties and nuances of implementation and operation; there are advantages that go underused and, as a result, cost money. And there is a global shortage of Kubernetes expertise: the technology is hard to learn properly from a book, and qualified specialists are hard to hire.

This complexity is compounded by the well-known hype, which makes it harder to take a rational decision about adopting Kubernetes. Oddly enough, Kubernetes is often implemented purely formally, just to have it, in the hope that things will somehow get better on their own.

I hope this note will help you find your bearings and make a choice you will later remember with pride (rather than regret it, curse yourself, and try to assemble a time machine to reverse the decision).

So, welcome to the world of Kubernetes implementation pitfalls, and of how to get the promised magic in each case: a shorter Time-to-Market, faster releases, and uninterrupted operation of applications.

Mistake 1: Not Preparing To Work with Stateful Applications

Kubernetes was initially developed for Stateless applications, which process data but do not store anything, such as typical web applications.

Persistent Volumes (PV) – data stores attached to pods – can be used to deploy data-storing Stateful applications on Kubernetes. From the application's point of view, a PV is mounted inside the container as a local directory, so the logic of working with data does not have to change. At the same time, thanks to Kubernetes, the storage is available to the application on any cluster node. To use PVs, you will need to set up storage and configure Kubernetes to work with it.
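
For illustration, here is a minimal sketch of this approach: a PersistentVolumeClaim is requested and mounted into a pod, so the application keeps writing to what it sees as a local directory. The claim name, image, and storage class below are placeholders, not tied to any specific platform.

```yaml
# Hypothetical example: names, image, and storage class are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # storageClassName: csi-ceph-rbd   # depends on what your cluster or provider offers
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0     # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app             # the app sees the PV as a local directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

With a claim like this, Kubernetes reattaches the volume wherever the pod is scheduled, which is exactly what makes the storage available on any cluster node.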

Alternatively, you can redesign the application itself to store data not locally but in separate databases or object storage, which you can either deploy yourself or consume as ready-made services (aaS).
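
A hedged sketch of this alternative: the application itself holds no data, and the connection details for an external database and object storage are injected from a Secret. All names, keys, and endpoints here are illustrative assumptions.

```yaml
# Hypothetical example: Secret keys, hosts, and credentials are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: app-backing-services
type: Opaque
stringData:
  DATABASE_URL: postgres://app:password@postgres.example.internal:5432/app
  S3_ENDPOINT: https://object-storage.example.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-app
spec:
  replicas: 3                                  # safe to scale: pods hold no data themselves
  selector:
    matchLabels:
      app: stateless-app
  template:
    metadata:
      labels:
        app: stateless-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0  # placeholder image
          envFrom:
            - secretRef:
                name: app-backing-services     # state lives outside the pods
```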

When running such Stateful storage, you also have to solve the problem of its redundancy. This is a complex task that is difficult to solve on your own, and such a system is difficult to maintain. Our Kubernetes aaS, however, already ships with integrated PVs out of the box, based on CEPH block storage, virtual file storage, or SSD/NVMe disks connected via iSCSI to each compute server. Object storage and databases are also available on the same platform in the aaS format.
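
If you do run the stateful part inside the cluster, a StatefulSet with volumeClaimTemplates is the usual way to give each replica its own PV. A minimal sketch follows; the storage class name is an assumption and depends on what the platform provides (for example, a CEPH-backed class in a managed Kubernetes), and data-level replication still has to be handled by the database itself.

```yaml
# Hypothetical example: names, image, and storage class are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: registry.example.com/db:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:                        # a dedicated PV is provisioned per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: csi-ceph-rbd         # assumed class name
        resources:
          requests:
            storage: 20Gi
```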

Mistake 2: Not Adapting The Architecture Of Monolithic Applications To Work In Kubernetes

This mistake is about dropping Kubernetes onto a monolithic project without adapting the infrastructure, the way it operates, and the way it is managed.

Kubernetes is optimal for applications with a loosely coupled microservices architecture – with clear module boundaries and independent deployment, development, and scaling. Such applications are convenient to package in containers and manage through an orchestrator. Accordingly, Kubernetes should not be used with a monolithic architecture or with workloads that have many dependencies: updating such applications alone takes hours.

First, monoliths like to store data next to themselves. And, as we have already discussed, the monolith will have to be taught to work with Stateful storage.

Second, the monolith will have to be taught to work in several instances:

  • Instances must be able to work with and access data stored separately from the monolith.
  • For caching, a monolith usually has Memcached or Redis deployed next to the application, which the application accesses locally. When you move such an application to Kubernetes, you need a separate Redis cluster, and the application must work with it precisely as with a cluster, which classical monoliths cannot do (see the first sketch after this list).
  • In Kubernetes, fault tolerance, balancing, and Self-Healing are achieved by the fact that application instances can easily be moved to other nodes. It’s different with a monolith: even if it runs in multiple instances, client applications can usually keep working only with the instance they started with, and moving it to another node resets the session. The application must therefore learn to handle such switches, or Sticky Sessions must be configured on the balancer (see the Ingress sketch after this list).
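
A small sketch of the caching point above: instead of a cache process living next to the application, the application is pointed at a separate Redis cluster by its in-cluster Service name. The Service name, namespace, and port are assumptions, and the application's Redis client library has to support cluster mode.

```yaml
# Hypothetical example: the Service name, namespace, and port are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-cache-config
data:
  # The app connects to the Redis cluster by its in-cluster DNS name,
  # not to a cache process running "next to" it on the same host.
  REDIS_ADDR: redis-cluster.cache.svc.cluster.local:6379
  REDIS_MODE: cluster        # the client library must support cluster mode
```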
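
And a sketch of the Sticky Session option: if the application cannot yet survive switching between instances, cookie-based session affinity can be configured on the balancer. The example uses the NGINX Ingress Controller's affinity annotations; the host, service name, and cookie parameters are placeholders.

```yaml
# Hypothetical example: host, service name, and cookie settings are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monolith
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"             # pin a client to one instance
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monolith
                port:
                  number: 80
```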
