
How to Create an EKS Cluster on AWS

Here’s a clean, console-first workflow to spin up a current Amazon EKS cluster (Kubernetes 1.33 as of this writing) and verify it end-to-end, with optional CLI equivalents sketched along the way. I’ll also point out the 2025-specific gotchas (AL2 deprecation, default add-ons, Pod Identity). Citations sit right after each section.


0) Prereqs (one-time)

  • Permissions: You’ll need IAM rights to create EKS clusters, roles, VPC resources, and EC2 node groups.
  • Network: Have a VPC with at least 2 subnets in different AZs (3 private + 3 public is the common pattern). EKS requires subnets in ≥2 AZs and recommends using private subnets for nodes; a quick CLI check is sketched after this list. (AWS Documentation)
  • Kubernetes version: EKS supports up to v1.33 now (note: v1.32 was the last to support AL2). From 1.33 onward use AL2023 or Bottlerocket for node AMIs. (AWS Documentation)
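
If you want to sanity-check the networking prerequisite above from CloudShell before you start, here’s a small sketch (the VPC ID is a placeholder):

# List the subnets in your target VPC, grouped by AZ, with their free IP counts
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query "Subnets[].{AZ:AvailabilityZone,Subnet:SubnetId,FreeIPs:AvailableIpAddressCount}" \
  --output table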

1) Create the two IAM roles (cluster + node) in the IAM console

A. Cluster role (eksClusterRole)
IAM → Roles → Create role → AWS Service → EKS → EKS – Cluster → Next → name it eksClusterRole → Create. This attaches AmazonEKSClusterPolicy and sets the correct trust policy for eks.amazonaws.com. (AWS Documentation)

B. Node role (AmazonEKSNodeRole)
IAM → Roles → Create role → AWS Service → EC2 → Next → attach:

  • AmazonEKSWorkerNodePolicy
  • AmazonEC2ContainerRegistryPullOnly (the newer policy name; replaces AmazonEC2ContainerRegistryReadOnly seen in older guides)

Don’t attach AmazonEKS_CNI_Policy here unless you’re intentionally not using IRSA/Pod Identity for the VPC CNI. AWS recommends giving the CNI its own role via IRSA/Pod Identity.
Name it AmazonEKSNodeRole → Create. (AWS Documentation)
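
If you’d rather script this step than click through IAM, here’s a CLI sketch that mirrors the two console flows above (same role names; the trust policies are the standard eks.amazonaws.com and ec2.amazonaws.com ones):

# A. Cluster role
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name eksClusterRole \
  --assume-role-policy-document file://eks-cluster-trust.json
aws iam attach-role-policy --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

# B. Node role
cat > eks-node-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name AmazonEKSNodeRole \
  --assume-role-policy-document file://eks-node-trust.json
aws iam attach-role-policy --role-name AmazonEKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy --role-name AmazonEKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly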


2) Create the cluster in the EKS console

EKS console → Clusters → Create cluster:

  1. Name: e.g., prod-eks-133
  2. Kubernetes version: 1.33 (default/recommended). (AWS Documentation)
  3. Cluster service role: eksClusterRole (from step 1A). (AWS Documentation)
  4. Secrets encryption (recommended): pick a KMS key for etcd secrets encryption.
  5. Networking: choose your VPC and select at least 2 private subnets (add public subnets if you plan internet-facing LBs). EKS requires ≥6 free IPs per subnet (16+ recommended). (AWS Documentation)
  6. Cluster endpoint access: “Public and private” (restrict the public access CIDRs) or “Private only” if you have network reachability into the VPC (e.g., VPN or Direct Connect).
  7. Control plane logs (recommended): enable api, audit, authenticator, controllerManager, scheduler.
  8. Add-ons: When you create via console, EKS auto-adds VPC CNI, CoreDNS, kube-proxy as EKS add-ons (managed). You can adjust versions post-create. EBS CSI driver is optional here; you can add it later. Create. (AWS Documentation)

Provisioning the control plane takes a few minutes. The doc’s “Create a cluster” page captures the flow and AZ capacity note. (AWS Documentation)
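
If you want the same step scripted, here’s a hedged CLI equivalent (the account ID, subnet IDs, and KMS key ARN are placeholders; the cluster name matches the example above):

# Create the control plane
aws eks create-cluster \
  --name prod-eks-133 \
  --kubernetes-version 1.33 \
  --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-aaa111,subnet-bbb222,endpointPublicAccess=true,endpointPrivateAccess=true \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:REGION:111122223333:key/KEY_ID"}}]' \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'

# Wait until the control plane is ACTIVE
aws eks wait cluster-active --name prod-eks-133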


3) Add compute: Managed node group (console)

Cluster → Compute tab → Add node group:

  1. Name: mng-al2023
  2. Node IAM role: AmazonEKSNodeRole (from step 1B). (AWS Documentation)
  3. AMI family: Amazon Linux 2023 (or Bottlerocket); do not use AL2 on 1.33. (AWS Documentation)
  4. Instance type: start with t3.large (adjust to your workloads).
  5. Size: min/desired/max (e.g., 2/3/6).
  6. Subnets: select the private subnets.
  7. Remote access: optional; leave off unless you need SSH.
  8. Create and wait for nodes to register.
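
The same node group, scripted (account ID and subnet IDs are placeholders):

# Managed node group on the AL2023 AMI family
aws eks create-nodegroup \
  --cluster-name prod-eks-133 \
  --nodegroup-name mng-al2023 \
  --node-role arn:aws:iam::111122223333:role/AmazonEKSNodeRole \
  --subnets subnet-aaa111 subnet-bbb222 subnet-ccc333 \
  --instance-types t3.large \
  --ami-type AL2023_x86_64_STANDARD \
  --scaling-config minSize=2,desiredSize=3,maxSize=6

# Wait for the nodes to register
aws eks wait nodegroup-active --cluster-name prod-eks-133 --nodegroup-name mng-al2023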

4) Verify add-ons (console)

Cluster → Add-ons tab:

  • Confirm Amazon VPC CNI, kube-proxy, and CoreDNS are Installed.
  • Click each and select the Recommended version if an update is suggested.
  • (Optional but strongly recommended) Install EBS CSI driver add-on for dynamic PersistentVolumes. (AWS Documentation)
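
The same checks (and the optional EBS CSI install) from the CLI, as a sketch; the service-account role ARN is a placeholder and must already exist with AmazonEBSCSIDriverPolicy attached:

# See which add-ons are installed and at what version
aws eks list-addons --cluster-name prod-eks-133
aws eks describe-addon --cluster-name prod-eks-133 --addon-name vpc-cni

# Optional: install the EBS CSI driver add-on with its own IAM role (IRSA or Pod Identity)
aws eks create-addon \
  --cluster-name prod-eks-133 \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole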

If you’ll publish Services publicly via Ingress/ALB, plan to install the AWS Load Balancer Controller (Helm) and ensure subnets are tagged (kubernetes.io/role/elb=1 for public, …/internal-elb=1 for private). This part is not built-in to EKS by default. (AWS Documentation)
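
If you go that route, the subnet tags can also be applied from the CLI (subnet IDs are placeholders):

# Public subnets: internet-facing load balancers
aws ec2 create-tags --resources subnet-aaa111 subnet-bbb222 \
  --tags Key=kubernetes.io/role/elb,Value=1

# Private subnets: internal load balancers
aws ec2 create-tags --resources subnet-ccc333 subnet-ddd444 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1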


5) (Modern auth) EKS Pod Identity (optional but preferred over classic IRSA)

  • Enable the EKS Pod Identity Agent add-on (Cluster → Add-ons → Get more add-ons → EKS Pod Identity Agent → Install).
  • Then create Pod Identity associations to map a service account to an IAM role per workload. This is the current recommended path for pods to access AWS APIs, and it simplifies policy reuse. (AWS Documentation)
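
Both pieces can be done from the CLI too; a sketch assuming a workload in the default namespace with a service account named app-sa and a pre-created IAM role (both are placeholders):

# Install the Pod Identity Agent as an EKS add-on
aws eks create-addon --cluster-name prod-eks-133 --addon-name eks-pod-identity-agent

# Map a Kubernetes service account to an IAM role
aws eks create-pod-identity-association \
  --cluster-name prod-eks-133 \
  --namespace default \
  --service-account app-sa \
  --role-arn arn:aws:iam::111122223333:role/app-pod-role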

6) “Check” the cluster (fastest via AWS CloudShell from the console)

From the AWS console header, open CloudShell in the same Region:

# 1) Merge kubeconfig for your cluster
aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>

# 2) See nodes & system pods
kubectl get nodes -o wide
kubectl get pods -A

# 3) Smoke test: run nginx and expose it
kubectl create deploy hello-nginx --image=nginx
kubectl expose deploy hello-nginx --port=80 --type=ClusterIP
kubectl port-forward deploy/hello-nginx 8080:80 &
# Then curl the forwarded port from the same CloudShell session
curl http://127.0.0.1:8080

The update-kubeconfig step and kubectl connectivity are the official flow. (AWS Documentation)

Optional (Service LoadBalancer test):
If you didn’t install the AWS Load Balancer Controller and just want an external IP fast, you can kubectl expose with type=LoadBalancer (this provisions an AWS load balancer via the legacy provider). For production Ingress/ALB, install the AWS Load Balancer Controller and use Ingress resources. (AWS Documentation)
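
A quick sketch of that test (it provisions a billable AWS load balancer, so delete the Service when you’re done):

# Expose the test deployment via the legacy in-tree load balancer provider
kubectl expose deploy hello-nginx --name=hello-nginx-lb --port=80 --type=LoadBalancer

# Watch for the EXTERNAL-IP / hostname, then curl it
kubectl get svc hello-nginx-lb -w

# Remove the load balancer afterwards
kubectl delete svc hello-nginx-lb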


2025 tips & gotchas (read this!)

  • Kubernetes 1.33: pick AL2023 or Bottlerocket for nodes (AL2 is not released for 1.33). (AWS Documentation)
  • Default add-ons: when created via console, VPC CNI + CoreDNS + kube-proxy are installed as EKS add-ons (managed). Keep them on Recommended versions. (AWS Documentation)
  • Subnet tags for ALB: If/when you install the AWS Load Balancer Controller, tag subnets (kubernetes.io/role/elb=1 for public, …/internal-elb=1 for private) or specify subnets on the Ingress annotation. (kubernetes-sigs.github.io)
  • Pod to AWS access: Prefer EKS Pod Identity over older IRSA flow for new clusters; install the Pod Identity Agent and create associations per service account. (AWS Documentation)
  • Node role policies: Use AmazonEKSWorkerNodePolicy + AmazonEC2ContainerRegistryPullOnly; attach CNI policy via the CNI’s own service account role (IRSA/Pod Identity), not to the node role, unless you intentionally choose otherwise. (AWS Documentation)

Clean up (to avoid costs)

  • Delete test resources: kubectl delete deploy/hello-nginx svc/hello-nginx.
  • Compute tab → delete Node group(s).
  • Cluster → Delete (after node groups are gone).
    General flow and constraints are in the create/cleanup docs. (AWS Documentation)
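
The same cleanup from the CLI, using the names from this walkthrough:

# Delete the node group first, then the cluster
aws eks delete-nodegroup --cluster-name prod-eks-133 --nodegroup-name mng-al2023
aws eks wait nodegroup-deleted --cluster-name prod-eks-133 --nodegroup-name mng-al2023
aws eks delete-cluster --name prod-eks-133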
