Source: securityboulevard.com – Author: Danielle Cook
We had so many great questions about Kubernetes in the Enterprise in our recent Cloud Native Now webinar that I wanted to share more of the discussion. Mike Vizard, Chief Content Officer at Techstrong Group; Maz Tamadon, Director of Product and Solution Marketing at Kasten by Veeam; Frank J. Ohlhorst, Editor at Large; Mostafa Radwan, Principal at Road Clouds; and Alex Nauda, CTO at Nobl9, joined me, co-chair of the CNCF Cartografos Working Group and VP at Fairwinds. The first six questions, which covered provisioning, frameworks for using Kubernetes, getting started, and more, including how platform teams and DevOps work together, are answered in an earlier post. This post explores Day One, Two, and Three operations, deploying stateful apps, moving apps to Kubernetes, managed services, and how artificial intelligence and machine learning fit into Kubernetes.
1. Day One vs. Day Two vs. Day Three Operations
Often, teams new to Kubernetes are so focused on getting it up and running that they don’t consider day two and day three operations. Unfortunately, that can lead to a lot of unnecessary confusion and complexity. The Cloud Native Computing Foundation (CNCF) created the Cartografos Working Group, which developed the Cloud Native Maturity Model to help adopters and end users navigate the cloud-native ecosystem and the CNCF landscape.
Based on end user experiences, the model outlines five stages of maturity:
- Level 1: Build — you have a baseline cloud-native implementation in place in pre-production
- Level 2: Operate — you have established a cloud-native foundation and are moving to production
- Level 3: Scale — you have increased competency and are defining processes for scale
- Level 4: Improve — your security, policy, and governance are improving across your environment
- Level 5: Optimize — you are reviewing earlier decisions and monitoring applications and infrastructure for optimization opportunities
Using these levels as a guide, teams adopting Kubernetes can see everything that needs to be decided at levels two and three, and at level five they can revisit earlier decisions and consider how to optimize operations. This type of guidance helps organizations understand the different levels of maturity beyond day one.
2. Deploying Stateful Applications
In its early days, Kubernetes focused primarily on truly stateless applications, programs that do not save client data from one session for use in the next session with that client. Today, at the enterprise level, more and more stateful applications are being run on Kubernetes. In enterprise workloads, it is critical to store information about customers, prospects, products, and more in a database (and, of course, there could be many databases in an enterprise environment). Applications may have many different data sets inside and outside the cluster. Protecting the application and making sure it is running (and can keep running) makes these workloads complex. However, there is plenty of support for running stateful apps, including operators, storage drivers, and other Kubernetes features.
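As a rough sketch of what that looks like in practice, the example below uses the official Kubernetes Python client to define a StatefulSet whose replicas each request their own PersistentVolumeClaim through a volumeClaimTemplate; the names, image, and storage size are hypothetical placeholders, and the headless Service it references is assumed to exist.

```python
# Minimal sketch: a StatefulSet whose replicas each get their own
# PersistentVolumeClaim via volumeClaimTemplates, provisioned by whatever
# storage driver backs the default StorageClass. Names, image, and sizes
# are illustrative placeholders only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

statefulset = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="orders-db"),
    spec=client.V1StatefulSetSpec(
        service_name="orders-db",  # headless Service assumed to exist
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "orders-db"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-db"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="db",
                        image="postgres:16",  # placeholder image
                        volume_mounts=[
                            client.V1VolumeMount(
                                name="data",
                                mount_path="/var/lib/postgresql/data",
                            )
                        ],
                    )
                ]
            ),
        ),
        volume_claim_templates=[
            client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": "10Gi"}  # placeholder size
                    ),
                ),
            )
        ],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(
    namespace="default", body=statefulset
)
```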
In the Kubernetes space, teams need to account for all the different pieces, artifacts, secrets, config maps, and more to ensure mission-critical applications are resilient enough to weather issues. The application must be able to recover if something happens, such as a ransomware attack or an outage. To do business in the cloud native world, these apps and services must be designed to recover quickly from an incident.
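To make that concrete, here is a minimal sketch (again with the Python client; all names and values are hypothetical) of creating a Secret and a ConfigMap and wiring them into a pod template, the kind of supporting objects that also have to be captured in any backup and recovery plan.

```python
# Minimal sketch: create a Secret and a ConfigMap that a workload consumes
# via envFrom. These objects also need to be included in backup/restore
# plans alongside the data itself. Names and values are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespaced_secret(
    namespace="default",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="orders-db-credentials"),
        string_data={"username": "orders", "password": "change-me"},
    ),
)

core.create_namespaced_config_map(
    namespace="default",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="orders-app-config"),
        data={"LOG_LEVEL": "info", "DB_HOST": "orders-db"},
    ),
)

# In the pod template, reference both so the app picks them up as env vars.
env_from = [
    client.V1EnvFromSource(
        secret_ref=client.V1SecretEnvSource(name="orders-db-credentials")
    ),
    client.V1EnvFromSource(
        config_map_ref=client.V1ConfigMapEnvSource(name="orders-app-config")
    ),
]
```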
3. Moving Apps to Kubernetes
Stateless applications are a lot easier to manage, scale, and deploy in Kubernetes. When you start moving stateful applications, you need to provide persistent access to data and ensure that you can keep those applications updated consistently, which adds a lot more complexity.
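As one illustration of the update side of that complexity, the sketch below (Python client; the StatefulSet name and namespace are hypothetical) patches a StatefulSet’s update strategy to a partitioned RollingUpdate so a new version can be rolled out to a subset of replicas first.

```python
# Minimal sketch: stage a stateful rollout by patching the update strategy.
# With RollingUpdate plus a partition, only replicas whose ordinal is >= the
# partition value are updated, so you can verify the new version on a few
# pods before lowering the partition. Name/namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "updateStrategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"partition": 2},  # update ordinals 2..N-1 first
        }
    }
}

# A dict body is sent by the client as a merge patch against the object.
apps.patch_namespaced_stateful_set(
    name="orders-db", namespace="default", body=patch
)
```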
If you’re moving a legacy application to the cloud or writing an app from scratch, it’s important to understand whether a stateless application can do what you need or whether you need to build a stateful application. Remember, too, that while it may be easy to deploy an app, and there may be tools to help you deploy and monitor it, there is often a lack of skills when it comes to troubleshooting.
Assessing an App’s Fitness for Kubernetes
Kubernetes is not for every organization or team. It may not be a good fit for a small team unless they are skilled in Kubernetes, run multiple clusters, and are comfortable with day one and day two operations. And some stateful apps may not run in Kubernetes at all, so you need to continue running them where they are instead, but you still need to update them so they can integrate and interact with the Kubernetes apps in the cluster.
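One common way to let in-cluster apps reach a system that stays outside the cluster is an ExternalName Service, sketched below with the Python client; the service and host names are hypothetical placeholders.

```python
# Minimal sketch: an ExternalName Service gives in-cluster apps a stable,
# cluster-local DNS name that resolves to a system still running outside
# Kubernetes (for example, a legacy database or ERP). Hostnames are
# illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="legacy-erp"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        external_name="erp.datacenter.example.com",  # placeholder external host
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```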
Application Modernization
Many enterprises are adopting Kubernetes strategically because of the high degree of automation, the declarative infrastructure, and the ability to version control everything, where everything is code and the entire deployment is repeatable. This shift helps organizations modernize infrastructure and apps and prepare for the future, but there are other impacts of moving to Kubernetes.
Enterprises want to leverage the cloud, beginning by lifting and shifting applications over to cloud environments. Unfortunately, you can’t achieve the flexibility and agility of the cloud if you don’t re-architect or refactor your application. Kubernetes is a cloud-native architecture based on containers, microservices, and, of course, the cloud, used by a vast community of people. Whether you want to refactor your application or you’re born in the cloud and starting from scratch, the choice is clear: it makes sense to implement microservices, package apps in containers, and run and orchestrate them using Kubernetes.
Moving from Bare Metal
Typically, questions about bare metal servers are related to legacy applications. There is no way to simply lift and shift these apps into the cloud and benefit from the cloud model in terms of agility, flexibility, and cost. Enterprises must decide what to do with each legacy application; the choices are to change it, refactor it, or determine whether it is no longer needed or can be replaced by a commercially available solution. If you want to get to the cloud in the right way, you need to refactor, create, or design from scratch.
4. Kubernetes Managed Services
In some organizations, it makes sense to have an internal team that manages Kubernetes; in others, it makes more sense to use a managed service. Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (Amazon EKS), and Azure Kubernetes Service (AKS) are all useful and offer many pre-packaged tools. Other organizations offer software and services that provide expertise across all the cloud vendors as well as across different open source tools. It depends on the needs and comfort level of your organization.
Many organizations choose to use at least two public cloud vendors because they don’t want to be tied to a single one for business or technology reasons. It’s important to have choices based on what’s best for certain apps and services as well. For example, some may choose to put AI workloads on Google Cloud and more traditional online transactional processing (OLTP) workloads on Amazon Web Services (AWS). In case of a cloud service failure or a need to move due to a business-related problem, it helps to have different cloud service providers (CSPs) in place. For many using Kubernetes to build and deploy applications, one of the benefits of going cloud native is portability: if an application runs in one Kubernetes environment, it can probably run in another with minimal changes.
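As a small illustration of that portability, the sketch below (Python client; the kubeconfig context names and manifest file are hypothetical) applies the same Deployment manifest to two clusters, for example one on GKE and one on EKS, just by switching contexts.

```python
# Minimal sketch: create the same Deployment in two clusters (e.g., one GKE,
# one EKS) by pointing the client at different kubeconfig contexts.
# Context names and the manifest file are illustrative placeholders.
import yaml
from kubernetes import client, config

with open("deployment.yaml") as f:  # a plain, portable Deployment manifest
    manifest = yaml.safe_load(f)

for context in ("gke-prod", "eks-prod"):  # placeholder kubeconfig contexts
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client=api_client)
    apps.create_namespaced_deployment(namespace="default", body=manifest)
```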
5. ML, Generative AI, and Kubernetes
When it comes to Kubernetes, the first major use case for artificial intelligence (AI) models and machine learning (ML) is likely related to troubleshooting, tech support, or modeling. In the future, these tools may be able to identify particular trends, such as how your clusters are performing or which applications are doing what. There’s still a lot to be done on the AI front when it comes to taking over the management and troubleshooting of a complex application, but these tools may become excellent assistants for the individuals or teams managing Kubernetes environments.
Growing Kubernetes Adoption in the Enterprise
Kubernetes is still a young technology, but the past year has shown many enterprise organizations deploying more workloads into production. If your enterprise is committed to implementing Kubernetes and other cloud-native technologies, make sure you research cloud native solutions that will provide guardrails that help your developers deploy apps and services quickly without putting reliability, security, and cost efficiency at risk.
Learn more about multi-cluster deployments at scale. Register for KubeCrash, a free, virtual, one-day event on October 18 with crash courses in cloud native open source technology.
Watch the full Cloud Native Now discussion on demand here: Kubernetes in the Enterprise.
*** This is a Security Bloggers Network syndicated blog from Fairwinds | Blog authored by Danielle Cook. Read the original post at: https://www.fairwinds.com/blog/day-2-ops-stateful-apps-kubernetes-enterprise-experts
Original Post URL: https://securityboulevard.com/2023/09/diving-in-to-day-2-ops-stateful-apps-more-with-kubernetes-experts/