Introduction
Amazon Elastic Kubernetes Service (EKS) offers flexibility in managing worker nodes through managed and self-managed node groups. In this guide, we will walk through creating and configuring a self-managed node group with custom max_pods settings, which can be crucial for optimizing resource utilization and cluster performance.
Prerequisites
- AWS CLI configured with appropriate permissions
- kubectl installed and configured
- eksctl installed
- An existing EKS cluster
- AWS Systems Manager Parameter Store access
Understanding max_pods
Before we dive in, it’s important to understand what max_pods means in the context of Kubernetes:
- max_pods determines the maximum number of pods that can run on a single node
- The default value varies based on the instance type
- Setting an appropriate max_pods value is crucial for:
  - Network performance
  - Resource utilization
  - Cluster stability
Creating a Self-Managed Node Group
Step 1: Get Cluster Information
First, retrieve the cluster information that will be used in the bootstrap script:
```bash
# Store cluster name
export CLUSTER_NAME="your-cluster-name"

# Get cluster endpoint
export API_SERVER_URL=$(aws eks describe-cluster \
  --name ${CLUSTER_NAME} \
  --query "cluster.endpoint" \
  --output text)

# Get cluster CA certificate
export B64_CLUSTER_CA=$(aws eks describe-cluster \
  --name ${CLUSTER_NAME} \
  --query "cluster.certificateAuthority.data" \
  --output text)
```
Step 2: Create Node AWS IAM Role
```bash
# Create IAM role for nodes
aws iam create-role \
  --role-name EKSNodeRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "ec2.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }
    ]
  }'

# Attach required policies
aws iam attach-role-policy \
  --role-name EKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
  --role-name EKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy \
  --role-name EKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

# Create the instance profile referenced by the launch template
# and add the role to it
aws iam create-instance-profile \
  --instance-profile-name EKSNodeInstanceProfile

aws iam add-role-to-instance-profile \
  --instance-profile-name EKSNodeInstanceProfile \
  --role-name EKSNodeRole
```
Step 3: Create a Launch Template
Create a launch template that includes our custom max_pods configuration. Note how we disable the AMI's default calculation with `--use-max-pods false` in bootstrap.sh and then set the value explicitly through the kubelet's `--max-pods` flag:
```bash
# Get the latest EKS-optimized Amazon Linux 2 AMI ID from SSM Parameter Store
# (replace 1.29 with your cluster's Kubernetes version)
export AMI_ID=$(aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.29/amazon-linux-2/recommended/image_id \
  --query "Parameter.Value" \
  --output text)

# Write the bootstrap script used as instance user data
cat << EOF > user-data.sh
#!/bin/bash
/etc/eks/bootstrap.sh ${CLUSTER_NAME} \\
  --b64-cluster-ca ${B64_CLUSTER_CA} \\
  --apiserver-endpoint ${API_SERVER_URL} \\
  --dns-cluster-ip 10.100.0.10 \\
  --use-max-pods false \\
  --kubelet-extra-args '--max-pods=110 --node-labels=node-type=self-managed' \\
  --container-runtime containerd
EOF

# Create the launch template definition
cat << EOF > launch-template-data.json
{
  "ImageId": "${AMI_ID}",
  "InstanceType": "t3.large",
  "IamInstanceProfile": {
    "Name": "EKSNodeInstanceProfile"
  },
  "BlockDeviceMappings": [
    {
      "DeviceName": "/dev/xvda",
      "Ebs": {
        "VolumeSize": 20,
        "VolumeType": "gp3"
      }
    }
  ],
  "UserData": "$(base64 -w 0 user-data.sh)",
  "TagSpecifications": [
    {
      "ResourceType": "instance",
      "Tags": [
        {
          "Key": "Name",
          "Value": "EKS-Self-Managed-Node"
        }
      ]
    }
  ]
}
EOF

aws ec2 create-launch-template \
  --launch-template-name eks-self-managed-node \
  --version-description 1 \
  --launch-template-data file://launch-template-data.json
```
Step 4: Create Auto Scaling Group
```bash
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name eks-self-managed-nodes \
  --launch-template LaunchTemplateName=eks-self-managed-node,Version='$Latest' \
  --min-size 1 \
  --max-size 5 \
  --desired-capacity 3 \
  --vpc-zone-identifier "subnet-xxxxx,subnet-yyyyy" \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned,PropagateAtLaunch=true" \
         "Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true"
```
Step 5: Enable Nodes to Join the Cluster
Create the node authentication ConfigMap to allow nodes to join the cluster:
```bash
# Get the node role ARN
NODE_ROLE_ARN=$(aws iam get-role \
  --role-name EKSNodeRole \
  --query 'Role.Arn' \
  --output text)

# Create the aws-auth ConfigMap
cat << EOF > aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${NODE_ROLE_ARN}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

# Apply the ConfigMap
kubectl apply -f aws-auth-cm.yaml
```
Verifying the Configuration
After deployment, verify your node group configuration:
```bash
# Check nodes in your cluster
kubectl get nodes

# Verify the pod capacity set by max-pods
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'

# Check node labels
kubectl get nodes --show-labels | grep "node-type=self-managed"

# View pod capacity across all nodes
kubectl get nodes -o json | jq '.items[].status.capacity.pods'
```
Best Practices
- Instance Type Selection
  - Choose instance types based on your workload requirements
  - Consider CPU, memory, and networking requirements
- max_pods Calculation
  - Use Amazon’s formula: (Number of ENIs × (IPv4 addresses per ENI – 1)) + 2
  - Consider the network interface limits of your instance type
- Monitoring and Scaling
  - Set up Amazon CloudWatch alarms for node metrics
  - Implement horizontal pod autoscaling
  - Monitor pod scheduling failures
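Amazon’s formula above can be sketched in a few lines of shell. The ENI and IP-per-ENI figures vary by instance type and should be checked against the EC2 documentation; the ones below are illustrative values for two common types:

```shell
# Default pod limit per the AWS VPC CNI formula:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
max_pods() {
  local enis=$1 ips_per_eni=$2
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

echo "t3.large:  $(max_pods 3 12)"   # 3 * 11 + 2 = 35
echo "m5.xlarge: $(max_pods 4 15)"   # 4 * 14 + 2 = 58
```

Comparing these defaults against your typical pod sizes helps decide whether overriding max_pods (as we did with 110) actually buys you denser packing, or just risks IP exhaustion.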
Conclusion
In this guide, we created a self-managed EKS node group with a custom max_pods setting, from the node IAM role through the launch template and Auto Scaling Group to cluster authentication. Remember to monitor your cluster’s behavior and adjust settings based on your specific workload requirements.
Drop a query if you have any questions regarding Amazon EKS and we will get back to you quickly.
About CloudThat
CloudThat is an award-winning company and the first in India to offer cloud training and consulting services worldwide. As a Microsoft Solutions Partner, AWS Advanced Tier Training Partner, and Google Cloud Platform Partner, CloudThat has empowered over 850,000 professionals through 600+ cloud certifications winning global recognition for its training excellence including 20 MCT Trainers in Microsoft’s Global Top 100 and an impressive 12 awards in the last 8 years. CloudThat specializes in Cloud Migration, Data Platforms, DevOps, IoT, and cutting-edge technologies like Gen AI & AI/ML. It has delivered over 500 consulting projects for 250+ organizations in 30+ countries as it continues to empower professionals and enterprises to thrive in the digital-first world.
FAQs
1. Can I modify max_pods after the node group is created?
ANS: – You cannot modify max_pods for existing nodes. You will need to:
- Create a new launch template version with updated max_pods
- Update your Auto Scaling Group to use the new template version
- Gradually replace old nodes with new ones
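The steps above can be sketched with the AWS CLI. The `run` helper only prints each command, so nothing executes against your account (drop the `echo` to run for real), and `updated-launch-template-data.json` is a hypothetical file holding the launch template data with the new user data:

```shell
# Dry-run helper: prints each command instead of executing it
run() { echo "+ $*"; }

# 1. Publish a new launch template version with the updated max_pods user data
run aws ec2 create-launch-template-version \
  --launch-template-name eks-self-managed-node \
  --source-version 1 \
  --launch-template-data file://updated-launch-template-data.json

# 2. Point the Auto Scaling Group at the latest version
run aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name eks-self-managed-nodes \
  --launch-template 'LaunchTemplateName=eks-self-managed-node,Version=$Latest'

# 3. Gradually replace old nodes via a rolling instance refresh
run aws autoscaling start-instance-refresh \
  --auto-scaling-group-name eks-self-managed-nodes \
  --preferences '{"MinHealthyPercentage":90}'
```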
2. How does max_pods affect cluster autoscaling?
ANS: – max_pods influences how the cluster autoscaler makes scaling decisions by:
- Determining the maximum pod capacity per node
- Affecting when new nodes are added based on pending pods
- Impacting resource utilization calculations
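As a toy illustration of that interaction, here is the arithmetic the autoscaler effectively performs when pending pods exceed cluster capacity (the numbers are made up for the example):

```shell
# With max_pods=110 on each of 3 nodes, the cluster can place at most
# 330 pods; pending pods beyond that drive a scale-out decision.
max_pods=110
nodes=3
pending_pods=400

capacity=$(( max_pods * nodes ))
deficit=$(( pending_pods - capacity ))
# Ceiling division: extra nodes needed to absorb the deficit
extra_nodes=$(( (deficit + max_pods - 1) / max_pods ))

echo "capacity=${capacity} pending=${pending_pods} extra_nodes=${extra_nodes}"
# capacity=330 pending=400 extra_nodes=1
```

A lower max_pods would shrink per-node capacity and trigger scale-out sooner, which is why the value should be set deliberately rather than left mismatched between the kubelet and the autoscaler's expectations.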

WRITTEN BY Aditi Agarwal