Industry Use Case For K8S!

Zoro
4 min read · Mar 14, 2021

Kubernetes is an open-source system for distributing and orchestrating workloads across clustered servers in a data center. It keeps resources available and accessible and balances execution across the many servers running at the same time.
As a distributed processing mechanism, Kubernetes lets any number of servers of different types, regardless of location, share workloads for a common client at the same time. Those workloads are then presented to clients as resources. Kubernetes uses this framework to let a client system connect to services over the network, transfer data, and receive responses.
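To make that model concrete, here is a minimal sketch using the official Kubernetes Python client: a Deployment runs the workload on the cluster, and a Service exposes it to clients as a single network endpoint. The application name, image, and port (demo-app, nginx, port 80) are illustrative assumptions, not details from any particular setup.

```python
# A minimal sketch: run a workload as a Deployment and expose it to
# clients as a Service, using the official Kubernetes Python client.
# "demo-app", the nginx image, and port 80 are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

labels = {"app": "demo-app"}

# The workload: a Deployment that keeps two replicas of the container running.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=80)],
                )
            ]),
        ),
    ),
)

# The client-facing side: a Service that load-balances traffic across
# whichever pods currently carry the "app: demo-app" label.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

The same objects are more commonly written as YAML manifests and applied with kubectl; the client library is used here only to keep the example self-contained.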
Many data points indicate that Kubernetes is increasingly being embraced for its IT infrastructure benefits. Originally developed by Google, Kubernetes is now part of the CNCF (Cloud Native Computing Foundation) and can be used on-premises or in the cloud.

Kubernetes makes deploying applications more cost-effective because it takes less IT manpower to manage them. It also makes software more portable, allowing applications to be moved quickly between different clouds and internal environments.

Kubernetes provides an organization with the following features:

Multi-cloud versatility: As more businesses adopt multi-cloud strategies, Kubernetes lets them run any application on any public cloud provider or on a mix of public and private clouds. It is a smart way to avoid vendor lock-in and to place each workload on the cloud best suited to it.
Time to Market: Kubernetes helps a development team split into smaller units that each concentrate on a single, smaller microservice, which shortens time to market. APIs between these microservices greatly reduce the cross-team communication required, which is a real advantage for your IT organization. Kubernetes also manages large applications made up of many containers, handling service discovery so the containers can find and talk to one another, and it can provision storage from a number of vendors, including Azure and AWS.
Scalability and Accountability: Performance and scalability are two key factors in an application's success. As the workload rises, Kubernetes acts as a critical management framework that scales up the application and its infrastructure, and it scales them back down when the load decreases. Kubernetes' autoscaling can act on resource usage as well as custom metrics; see the sketch after this list.

Migration to the cloud that works: Rehosting, replatforming, and refactoring are all possible with Kubernetes, which provides a smooth transition of your application from on-premises to the cloud.

Cost Optimization: Because Kubernetes bin-packs containers densely onto shared nodes and scales workloads down when demand drops, it raises the utilization of the underlying infrastructure, cutting cloud spend as well as the operational effort needed to run applications.
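As an illustration of the autoscaling point above, here is a hedged sketch (again with the Kubernetes Python client; the deployment name and thresholds are assumptions for the example) that attaches a HorizontalPodAutoscaler to a Deployment so its replica count follows CPU load.

```python
# A hedged sketch: attach a HorizontalPodAutoscaler to an existing
# Deployment so Kubernetes keeps it between 2 and 10 replicas while
# targeting 70% average CPU utilization. "demo-app" and the numbers
# are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="demo-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="demo-app"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Scaling on custom metrics, as mentioned above, would use the autoscaling/v2 API objects rather than the v1 ones shown here.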

Spotify's Kubernetes Use Case!

Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users around the world. Its mission is to inspire creators and deliver a genuinely immersive listening experience for all of its current listeners, and hopefully for future ones as well.

Spotify, an early adopter of microservices and Docker, ran containerized microservices across its fleet of VMs with a homegrown container orchestration framework called Helios.
By late 2017, it was obvious that “having a small team working on the features was just not as effective as implementing anything with a much larger community’s support”.
They saw an incredible community that had sprung up around Kubernetes and wanted to be a part of it.
Helios had fewer features than Kubernetes, and the team wanted to benefit from the higher velocity and lower costs Kubernetes offered while aligning with the rest of the industry on best practices and tooling. At the same time, they wanted to contribute their expertise and influence to the rapidly growing Kubernetes community. The migration could run in parallel with Helios and go smoothly because Kubernetes fit very nicely as a complement to, and now a replacement for, Helios.
The team spent most of 2018 working through the core technology issues a migration required; the migration itself began late that year and was a major focus in 2019. By then, a small portion of the fleet had been moved to Kubernetes.
The largest service currently running on Kubernetes handles about 10 million requests per second as an aggregate and benefits greatly from autoscaling. Previously, teams would have to wait up to an hour to create a new service and obtain an operational host on which to run it in development; with Kubernetes they can do so in seconds or minutes. Furthermore, thanks to Kubernetes' bin-packing and multi-tenancy capabilities, CPU utilization has improved two- to threefold on average.

Thanks for reading!
