Source: securityboulevard.com – Author: Abdul Akhter
As Kubernetes continues to mature, so do the tools we use to manage it. In this blog post, we’ll explore the process of upgrading from Kubernetes Operations (kOps) to Amazon Elastic Kubernetes Service (EKS), focusing on the technical aspects and considerations involved.
Background
Currently, many organizations are running Kubernetes on Kubernetes Operations (more familiarly called kOps). However, Kubernetes itself is continually being updated. One example of a recent significant change is the complete removal of Canal CNI (Container Network Interface) support in Kubernetes 1.28. This change presents a challenge because it requires teams either to do a live in-place migration to a new CNI or to build new clusters with a replacement CNI and then move the workloads over. As teams approach this upgrade, it’s important to consider different options, plan for a smooth transition, and choose a platform that simplifies future upgrades.
kOps was an early popular choice for many organizations that were early Kubernetes adopters, because it made initial cluster setup easy and let teams manage clusters effectively. Since kOps was first deployed, however, many managed Kubernetes services have emerged that make it easier to get the most out of Kubernetes without all the heavy lifting.
Why Move to EKS?
While the CNI change is the immediate catalyst for this transition for many organizations, moving to EKS offers several benefits:
- Standardization: EKS provides a standard platform for all Amazon Web Services (AWS) clients, simplifying tooling development and upgrades.
- Faster Upgrades: AWS manages the control plane, making cluster upgrades quicker and easier.
- Enhanced Security: EKS leverages AWS’s shared responsibility model, improving overall cluster security.
- IRSA Support: EKS natively supports AWS Identity and Access Management (IAM) Roles for Service Accounts (IRSA), offering a more secure alternative to kube2iam.
Some organizations may be concerned that changing CNIs will be more disruptive, with a higher mean time to resolution (MTTR) for issues that come up. However, in this case it’s safe to build new clusters with the replacement CNI, because you can keep everything running smoothly in kOps until you’re sure the new EKS clusters (and CNI) are working as intended.
Technical Migration Process
Here’s a more detailed look at the migration process from kOps to EKS. We’d recommend using Infrastructure as Code (IaC), such as Terraform, to ensure consistent configuration across environments. This will also allow you to catch anything that drifts from your IaC due to manual changes via the user interface (UI) or otherwise. The guide below assumes you are using the AWS EKS Terraform module for cluster creation.
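For orientation, here is a minimal sketch of how that module might be invoked. The source and input names follow the community terraform-aws-modules/eks module; the module version, VPC variables, and node group settings are illustrative assumptions to adapt to your environment.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0" # illustrative; pin to the version you have validated

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.31"

  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids

  # Managed node groups are expanded by the module into aws_eks_node_group resources
  eks_managed_node_groups = {
    "default-01" = {
      instance_types = ["m5a.large"]
      min_size       = 1
      max_size       = 10
      desired_size   = 1
    }
  }
}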
1. Prepare
- Review current cluster configuration, including addons and custom resources.
- Document all workloads, their resource requirements, and any specific node affinities or tolerations.
- Identify any kOps-specific features in use that may need alternatives in EKS.
2. Create EKS Clusters
- Use AWS CloudFormation or Terraform to define the EKS cluster infrastructure.
- Configure the VPC, subnets, and security groups as per requirements.
The Terraform sample code block below is the output from the AWS EKS module; it would create an EKS cluster named my-eks-cluster running Kubernetes version 1.31, associated with a specific IAM role, deployed in specified subnets, and with comprehensive control plane logging enabled.
resource "aws_eks_cluster" "main" {
  name     = "my-eks-cluster"
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.31"
  # ...

  vpc_config {
    subnet_ids = var.subnet_ids
  }
  # ...

  # Enable control plane logging
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}
3. Managed-nodegroup Configuration
Define node groups with appropriate instance types and scaling configurations.
The Terraform sample code block below is the output from the AWS EKS module that would create an EKS managed node group named default-01 running on our my-eks-cluster Kubernetes cluster, with minimum size, maximum size, and instance types defined. You can also configure other details, such as attached disks and image type.
resource "aws_eks_node_group" "example" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "default-01"
  # ...

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 1
  }
  # ...

  update_config {
    max_unavailable = 1
  }

  instance_types = ["m5a.large"]
}
4. CNI Configuration
Deploy a compatible CNI. AWS-VPC-CNI is the default, but you may want to consider alternatives, such as Calico, depending on your organization’s requirements.
The Terraform sample code block below is the output from the AWS EKS module that deploys the AWS-VPC-CNI plugin on every node in the cluster. This is responsible for IP address management and network interface configuration for pods in our cluster.
resource "aws_eks_addon" "vpc_cni" {
  cluster_name                = aws_eks_cluster.main.name
  addon_name                  = "vpc-cni"
  addon_version               = "v1.19.2-eksbuild.1"
  resolve_conflicts_on_update = "PRESERVE"
}
5. Addon Migration
- Install and configure necessary addons in the new EKS cluster (a Terraform sketch follows this list).
- Ensure compatibility of addons with the EKS Kubernetes version and the chosen CNI.
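As an illustration, the block below installs two common add-ons, CoreDNS and kube-proxy, as EKS managed add-ons. Leaving addon_version unset is an assumption here so that AWS selects the default version matching the cluster’s Kubernetes version.
resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "coredns"
  # No addon_version set, so AWS picks the default for the cluster's Kubernetes version
}

resource "aws_eks_addon" "kube_proxy" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "kube-proxy"
}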
6. IRSA Setup
Configure IRSA for workloads that require AWS IAM permissions.
This sample annotation associates the ServiceAccount with an AWS IAM role. When a pod uses this ServiceAccount, it can assume the specified IAM role and inherit its permissions. This allows pods to securely access AWS services without needing to manage AWS credentials within the pod or use instance-level IAM roles. It’s a more granular and secure way to manage permissions for containerized applications running in EKS.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE_NAME
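On the AWS side, the role referenced in that annotation needs a trust policy that allows the cluster’s OIDC identity provider to assume it. The Terraform below is a minimal sketch built on the aws_eks_cluster.main resource from earlier; the role name, namespace, and ServiceAccount name are placeholders matching the annotation above.
# Register the cluster's OIDC issuer as an IAM identity provider (required for IRSA)
data "tls_certificate" "eks" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  url             = aws_eks_cluster.main.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
}

# Trust policy that only allows the my-service-account ServiceAccount in the default namespace
data "aws_iam_policy_document" "irsa_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_eks_cluster.main.identity[0].oidc[0].issuer, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:my-service-account"]
    }
  }
}

resource "aws_iam_role" "my_service_account" {
  name               = "IAM_ROLE_NAME" # placeholder; must match the role in the annotation above
  assume_role_policy = data.aws_iam_policy_document.irsa_trust.json
}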
7. Workload Migration
- Use tools such as kubectl or rok8s-scripts (Helm-based) to deploy workloads to the new EKS cluster, or ideally deploy via a GitOps pattern using something like ArgoCD or Flux (see the sketch after this list).
- Verify functionality and performance in the new environment.
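If you take the GitOps route, one way to bootstrap ArgoCD on the new cluster is via the Terraform Helm provider, as in the sketch below. The chart repository and chart name are the upstream Argo Helm project defaults; the release name and namespace are assumptions, and the helm provider itself must be configured to target the new EKS cluster.
# Bootstrap ArgoCD on the new EKS cluster (requires the helm provider to point at this cluster)
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}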
8. DNS Cutover
- Update DNS records to point to the new EKS cluster’s ingress or load balancer.
- Consider scheduling a maintenance window and using a blue-green deployment strategy for minimal downtime.
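For the blue-green cutover mentioned above, one low-risk option (assuming your records live in Amazon Route 53) is weighted routing: keep most traffic on the kOps ingress and gradually shift weight to EKS. The zone, record name, and ingress hostname variables below are placeholders.
# Weighted records for a gradual kOps-to-EKS cutover; adjust weights as confidence grows
resource "aws_route53_record" "app_kops" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "kops"
  records        = [var.kops_ingress_hostname]

  weighted_routing_policy {
    weight = 90 # most traffic stays on the existing kOps ingress initially
  }
}

resource "aws_route53_record" "app_eks" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "CNAME"
  ttl            = 60
  set_identifier = "eks"
  records        = [var.eks_ingress_hostname]

  weighted_routing_policy {
    weight = 10 # shift a small share to the new EKS ingress, then increase
  }
}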
9. Validation and Monitoring
- Implement thorough testing of all migrated workloads.
- Set up monitoring and logging for the new EKS cluster using CloudWatch or other preferred tools.
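As a concrete starting point, the control plane logs enabled earlier are delivered to the CloudWatch log group /aws/eks/<cluster-name>/cluster. The sketch below creates that log group with a retention policy so audit and API logs don’t accumulate indefinitely; the 90-day retention value is an assumption.
# EKS delivers control plane logs to /aws/eks/<cluster-name>/cluster in CloudWatch Logs
resource "aws_cloudwatch_log_group" "eks_control_plane" {
  name              = "/aws/eks/my-eks-cluster/cluster"
  retention_in_days = 90 # assumed retention; adjust to your compliance requirements
}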
10. Decommissioning
Once all workloads are successfully migrated and validated, plan for the decommissioning of the old kOps cluster.
Potential Challenges and Mitigation Strategies
- Networking Differences: EKS may have different networking configurations. Ensure all required ports and protocols are allowed in security groups.
- Storage: If using Amazon Elastic Block Store (Amazon EBS) volumes, ensure they are in the same availability zone (AZ) as the EKS nodes. Consider using Amazon Elastic File System (Amazon EFS) for cross-AZ persistence (a CSI driver sketch follows this list).
- Resource Quotas: Verify that AWS account limits can accommodate the new EKS cluster alongside the existing kOps cluster during migration.
- Downtime Concerns: Use strategies, such as blue-green deployments or canary releases, to minimize downtime during the cutover.
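Related to the storage point above, persistent volumes on EKS are provisioned through CSI drivers, which you will likely need to install explicitly. The sketch below adds the EBS and EFS CSI drivers as EKS managed add-ons; versions are omitted so AWS selects defaults, and in practice the drivers also need an IRSA role with the appropriate AWS managed policies (not shown here).
# Install the EBS and EFS CSI drivers as EKS managed add-ons
resource "aws_eks_addon" "ebs_csi" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "aws-ebs-csi-driver"
  # In practice, also set service_account_role_arn to an IRSA role with the EBS CSI policy
}

resource "aws_eks_addon" "efs_csi" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "aws-efs-csi-driver"
}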
Ready to Migrate to EKS?
Migrating from kOps to EKS is not an insignificant undertaking, but it offers numerous benefits, including faster and easier cluster upgrades, increased security (the EKS control plane is managed by AWS and its nodes are not directly accessible to users), and workload identity management via IRSA. By following this technical guide and carefully planning each step, organizations can ensure a smooth transition from kOps to EKS, setting themselves up for improved Kubernetes operations in the long term.
If you’d like to move from kOps to EKS, but don’t have bandwidth for the project, Fairwinds can help. Fairwinds has extensive experience with this migration, and can make the move simple for your team.
*** This is a Security Bloggers Network syndicated blog from Fairwinds | Blog authored by Abdul Akhter. Read the original post at: https://www.fairwinds.com/blog/migrating-from-kops-to-eks-a-technical-guide-for-when-why-to-switch
Original Post URL: https://securityboulevard.com/2025/02/migrating-from-kops-to-eks-a-technical-guide-for-when-why-to-switch/