
Kubernetes Tutorials: Traffic-Driven Autoscaling on Kubernetes

Comparing KEDA, HPA, VPA & Custom Adapters for Real-World Scaling with Cost, Complexity & Best Practices

| Category | KEDA (Event-driven) | Prometheus-based (Adapter/HPA) | Datadog-based (Cluster Agent) | CloudWatch-based (Adapter/HPA) |
|---|---|---|---|---|
| Primary Function | Event-driven autoscaler (creates HPAs dynamically from external metrics like ALB, SQS, Kafka, etc.) | Uses HPA with Prometheus metrics via an adapter (e.g., kube-metrics-adapter or prometheus-adapter) | Uses Datadog metrics (via the Cluster Agent) as external metrics for HPA | Uses CloudWatch metrics (via the AWS CloudWatch Metrics Adapter) for HPA |
| Metric Source | Multiple external sources: CloudWatch, Prometheus, SQS, Kafka, HTTP, etc. (50+ scalers) | Prometheus time-series metrics (scraped from exporters or apps) | Datadog platform metrics (ingested from AWS, custom apps, APM) | AWS CloudWatch (e.g., ALB metrics, RDS, SQS, Lambda, etc.) |
| Data Flow Model | Pull metrics or events → internal HPA → scale | Prometheus scrapes → adapter → Kubernetes Metrics API → HPA | Datadog agent → Cluster Agent → External Metrics API → HPA | CloudWatch Adapter → External Metrics API → HPA |
| Setup Complexity | 🟢 Medium (Helm + a few YAMLs; no exporter needed) | 🔵 Medium-High (needs Prometheus + adapter configuration) | 🟣 Medium (if Datadog is already deployed) | 🟠 Medium (adapter installation + IAM + mappings) |
| Integration with ALB Traffic | ✅ Native (via CloudWatch scaler – uses RequestCountPerTarget, TargetResponseTime) | ⚠️ Requires a Prometheus CloudWatch exporter (YACE or similar) | ✅ Native (Datadog already pulls ALB metrics) | ✅ Native (direct access to ALB metrics) |
| Supports Scale-to-Zero | ✅ Yes | ❌ No (HPA cannot scale to zero) | ❌ No | ❌ No |
| Responsiveness / Latency | ~30–60 seconds (depends on CloudWatch polling) | ~15–30 seconds (depends on scrape interval) | ~30–60 seconds (depends on Datadog ingestion) | ~60 seconds (CloudWatch metric delay) |
| Operational Cost | 💲 Low (CloudWatch API calls only) | 💲💲 Medium (Prometheus infra + storage + exporter costs) | 💲💲💲 High (Datadog licensing per host/container) | 💲 Low (CloudWatch API calls) |
| Infrastructure Overhead | Lightweight (one KEDA controller) | Heavy (Prometheus, exporters, adapter) | Moderate (Datadog Cluster Agent) | Moderate (adapter deployment) |
| Ease of Maintenance | 🟢 Easy – one Helm upgrade for all namespaces | 🔵 Moderate – maintain adapter & Prometheus | 🟣 Easy if Datadog is already managed | 🟠 Moderate – periodic IAM & adapter updates |
| EKS Auto Mode Compatibility | ✅ Fully compatible – scales pods; NodePools handle nodes | ✅ Compatible | ✅ Compatible | ✅ Compatible |
| Multi-Namespace Scaling | ✅ Native support (scoped per namespace) | ✅ Supported | ✅ Supported | ✅ Supported |
| Security / IAM | Uses IRSA or static keys for AWS APIs | No AWS permissions required (depends on Prometheus) | Uses Datadog API key & IAM integration | Uses IRSA for AWS CloudWatch read access |
| Supported Triggers / Metrics | 50+ sources (CloudWatch, Kafka, RabbitMQ, HTTP, Redis, MySQL, etc.) | Limited to Prometheus metrics | Limited to Datadog metrics | Limited to AWS metrics |
| Scales on Events (not metrics) | ✅ Yes | ❌ No | ❌ No | ❌ No |
| Can Combine Multiple Triggers | ✅ Yes (multi-trigger scaling rules) | ⚠️ Only via complex PromQL expressions | ⚠️ Limited (Datadog composite metrics) | ⚠️ Limited (one metric per HPA) |
| Recommended For | Event-driven / traffic-based workloads (ALB, queues, web APIs) | Resource- or app-metric-based workloads | Organizations using Datadog for monitoring & APM | AWS-centric workloads without Prometheus |
| Learning Curve | 🟢 Low | 🔵 Medium | 🟣 Low | 🟠 Medium |
| Vendor Lock-in | Low (open source) | Low (OSS ecosystem) | High (Datadog SaaS) | Medium (AWS-only) |
| Community & Ecosystem | Very active (CNCF Graduated project) | Large (K8s ecosystem standard) | Proprietary (Datadog documentation) | AWS-maintained (moderate community) |
| Use with WAF + ALB | ✅ Seamless (uses ALB target-group metrics directly) | ⚠️ Needs an exporter for ALB metrics | ✅ Seamless (Datadog ALB integration) | ✅ Seamless (ALB metrics native in CloudWatch) |
| Example Metric | CloudWatch → RequestCountPerTarget, TargetResponseTime | Prometheus → nginx_ingress_controller_requests_total | Datadog → aws.applicationelb.request_count | CloudWatch → RequestCountPerTarget |
| Scale Behavior Visualization | KEDA Metrics API + Grafana dashboards | Prometheus / Grafana | Datadog dashboards | CloudWatch dashboards |
| Maturity (as of 2025) | ⭐⭐⭐⭐⭐ (CNCF Graduated) | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Overall Recommendation (for EKS + ALB) | ✅✅✅ Best Option | ✅ Good (if Prometheus is already in place) | ⚙️ Suitable for Datadog-native orgs | ✅ Good fallback if KEDA is not allowed |

🌐 1 | Architecture Overview

Flow:
Client → DNS → WAF → ALB → TargetGroup → EKS Service/Pods

| Layer | Purpose | Key AWS / K8s Component |
|---|---|---|
| Edge Security | Filter malicious traffic | AWS WAF (Web ACL) |
| Load Balancing | Distribute inbound requests | ALB (AWS Load Balancer Controller) |
| Routing | Path/host-based dispatch to namespaces | Kubernetes Ingress |
| Compute | Run workloads | EKS Pods/Deployments |
| Node Capacity | Provision nodes automatically | EKS Auto Mode NodePools (Karpenter) |
| Autoscaling Brain | Adjust replicas dynamically | KEDA / HPA / VPA / Custom Adapter |

With EKS Auto Mode, AWS manages node scaling.
Your responsibility is pod-level scaling — deciding how many replicas each service needs based on traffic or resource metrics.


🧩 2 | Namespace-Scoped Design Pattern

  • Each microservice (e.g., booking, auth, medical, telematics) lives in its own namespace.
  • Each namespace has its Ingress, Service, Deployment, ConfigMap, and autoscaler objects.
  • Optionally, multiple namespaces can share one ALB via alb.ingress.kubernetes.io/group.name to save cost while keeping per-namespace isolation.

⚙️ 3 | Ingress & WAF Setup (Shared ALB Example)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: booking-ing
  namespace: booking
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-edge
    alb.ingress.kubernetes.io/group.order: "20"
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:ap-northeast-1:111111111111:regional/webacl/mywebacl/abcd1234
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /booking
        pathType: Prefix
        backend:
          service:
            name: booking-svc
            port:
              number: 80

Each namespace can repeat this pattern using different paths (/auth, /legal, etc.) but share the same group.name → one ALB under one WAF.
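
For illustration, here is a sketch of a second Ingress in a hypothetical auth namespace that joins the same ALB group; all names are placeholders, and only the namespace, path, group order, and backend service differ from the booking example above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-ing
  namespace: auth
  annotations:
    kubernetes.io/ingress.class: alb
    # Same group.name -> merged into the same ALB (and its WAF Web ACL)
    alb.ingress.kubernetes.io/group.name: shared-edge
    alb.ingress.kubernetes.io/group.order: "30"
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /auth
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 80

The scheme and WAF annotations are omitted here on purpose: group-level settings generally only need to be declared on one Ingress in the group, and the controller resolves conflicts between members by group.order.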


🚀 4 | Autoscaling Options for Pod-Level Control

Below are five viable mechanisms for pod autoscaling inside EKS.

| # | Method | Scaling Source | Scales to Zero | Works with ALB Metrics | Typical Latency | Setup Time | Maint. Effort | Approx. Cost* | Skill Level |
|---|---|---|---|---|---|---|---|---|---|
| 1 | KEDA | External events (CloudWatch ALB, SQS, Prometheus, etc.) | ✅ | ✅ (native scaler) | 30–60 s | ⚙️ Medium | 🧩 Low (once installed) | 💲💲 CloudWatch API calls | Intermediate |
| 2 | HPA | CPU / memory / custom metrics | ❌ | ⚠️ via adapter | 15–30 s | ⚙️ Low | 🧩 Low | 💲 Free | Beginner |
| 3 | VPA | Internal resource usage | ❌ | ❌ | N/A | ⚙️ Medium | 🧩 Low | 💲 Free | Intermediate |
| 4 | Custom Metric Adapter | Prometheus / CloudWatch | ❌ | ✅ with manual mapping | 45–60 s | ⚙️ High | 🧩 High | 💲💲 Metrics infra | Advanced |
| 5 | Manual Scaling | Human input | N/A | N/A | N/A | ⚙️ Instant | 🧩 High (opex) | 💲 None | Basic |

* Cost = relative AWS service charges + operational overhead


🧮 5 | Detailed Analysis of Each Approach

🔹 A | KEDA (Event-Driven Autoscaler)

How it works:
KEDA reads external metrics (CloudWatch ALB RequestCountPerTarget, TargetResponseTime, SQS depth, PromQL queries, etc.) and creates an internal HPA.

Pros

  • Supports 50+ scalers (AWS, Azure, Kafka, Prometheus, etc.; see the sketch at the end of this subsection).
  • Scales to zero during idle.
  • Simple YAML (ScaledObject) per Deployment.
  • Works seamlessly with EKS Auto Mode and NodePools.
  • Natively integrates with CloudWatch ALB metrics.

Cons

  • Extra component to operate.
  • CloudWatch polling → small metric costs and ≈ 1 min delay.
  • Needs IRSA permissions for CloudWatch API.

Setup time: ≈ 1 hr (Helm install + ScaledObject YAMLs)
Maintenance: Low (central Helm upgrade + namespace YAMLs)
Recommended for: Multi-namespace EKS clusters with real-traffic scaling.
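
As a taste of that scaler breadth, a queue-based trigger is just another entry in a ScaledObject's triggers list. This hypothetical fragment scales on SQS backlog (the queue URL is a placeholder; the full ALB-based setup is walked through in section 6):

triggers:
- type: aws-sqs-queue
  authenticationRef:
    name: alb-cw-auth
  metadata:
    queueURL: https://sqs.ap-northeast-1.amazonaws.com/<ACCOUNT_ID>/booking-jobs
    queueLength: "5"          # target messages per replica
    awsRegion: ap-northeast-1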


🔹 B | HPA (Native Horizontal Pod Autoscaler)

How it works:
Built into Kubernetes; scales based on CPU and memory by default.
Can also use custom metrics with an adapter.

Pros

  • Native, stable, zero extra components.
  • Predictable behavior and fine-grained control.

Cons

  • Default metrics = CPU / memory only.
  • Cannot scale to zero.
  • Needs a metric adapter to use ALB metrics.
  • Not event-driven; reactive after load hits CPU.

Setup time: ≈ 30 min
Maintenance: Minimal
Recommended for: Steady workloads or CPU-bound apps.
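
For reference, a minimal CPU-based HPA (autoscaling/v2) for the booking service might look like the sketch below. It assumes metrics-server is installed in the cluster, and the names and 70% target are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: booking-hpa
  namespace: booking
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: booking-deployment
  minReplicas: 2        # HPA floor; cannot be zero
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70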


🔹 C | VPA (Vertical Pod Autoscaler)

How it works:
Adjusts CPU and memory requests/limits per pod automatically.

Pros

  • Prevents over/under-provisioning.
  • Complements KEDA/HPA.

Cons

  • No replica count scaling.
  • Not suited for traffic bursts.

Setup time: ≈ 45 min
Maintenance: Low
Recommended for: Batch or steady apps to optimize resources.
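
A minimal sketch, assuming the VPA components (recommender, updater, admission controller) are already installed in the cluster; the Deployment name is illustrative:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: booking-vpa
  namespace: booking
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: booking-deployment
  updatePolicy:
    updateMode: "Auto"   # "Off" only records recommendations without evicting pods

Starting with updateMode "Off" is a common way to observe recommendations for a while before letting VPA evict and resize pods.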


🔹 D | Custom Metric Adapters (Prometheus / CloudWatch)

How it works:
Deploy an external-metrics adapter exposing selected metrics to HPA.
HPA then scales on those metrics.

Pros

  • Fine control; use any metric you own.
  • Integrates into existing monitoring plane.

Cons

  • Complex to deploy and maintain.
  • Harder to debug.
  • No scale-to-zero.
  • Usually delayed by scrape interval + adapter polling.

Setup time: 1 – 2 hrs
Maintenance: High
Recommended for: Large orgs with centralized Prometheus or Datadog.
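
As a sketch of the moving parts with prometheus-adapter (the metric name and threshold are illustrative): an adapter rule turns a raw counter into a per-second rate exposed through the custom metrics API, and an HPA then consumes it as a Pods metric.

# prometheus-adapter rule (values.yaml fragment)
rules:
  custom:
  - seriesQuery: 'nginx_ingress_controller_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'

# HPA metrics stanza consuming the renamed metric
metrics:
- type: Pods
  pods:
    metric:
      name: nginx_ingress_controller_requests_per_second
    target:
      type: AverageValue
      averageValue: "100"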


🔹 E | Manual Scaling

kubectl scale deployment <name> --replicas=N

Pros: 100 % control, simple to understand.
Cons: No automation; wastes capacity; high operational risk.
Use only for: testing or stable low-traffic sites.


💡 6 | KEDA Setup Walkthrough (for EKS + ALB)

  1. Install KEDA

     helm repo add kedacore https://kedacore.github.io/charts
     helm repo update
     helm install keda kedacore/keda -n keda --create-namespace

  2. Enable IRSA (for CloudWatch access)

     eksctl utils associate-iam-oidc-provider --cluster my-eks --approve

  3. IAM policy (CloudWatch read access)

     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": [
             "cloudwatch:GetMetricData",
             "cloudwatch:GetMetricStatistics",
             "cloudwatch:ListMetrics",
             "cloudwatch:DescribeAlarms"
           ],
           "Resource": "*"
         }
       ]
     }

  4. ServiceAccount + TriggerAuthentication

     apiVersion: v1
     kind: ServiceAccount
     metadata:
       name: svc-traffic-autoscale
       namespace: booking
       annotations:
         eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/eks-traffic-autoscale
     ---
     apiVersion: keda.sh/v1alpha1
     kind: TriggerAuthentication
     metadata:
       name: alb-cw-auth
       namespace: booking
     spec:
       podIdentity:
         provider: aws

  5. ScaledObject

     apiVersion: keda.sh/v1alpha1
     kind: ScaledObject
     metadata:
       name: booking-traffic
       namespace: booking
     spec:
       scaleTargetRef:
         name: booking-deployment
       minReplicaCount: 2
       maxReplicaCount: 30
       triggers:
       - type: aws-cloudwatch
         authenticationRef:
           name: alb-cw-auth
         metadata:
           namespace: AWS/ApplicationELB
           metricName: RequestCountPerTarget
           dimensionName: TargetGroup
           dimensionValue: targetgroup/k8s-xyz/abc123456
           metricStat: Sum
           metricStatPeriod: "60"
           metricUnit: Count
           targetMetricValue: "100"
           awsRegion: ap-northeast-1

  6. Observe scaling

     kubectl get hpa -n booking
     kubectl get pods -n booking -w
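
One practical gotcha: the dimensionValue above is not the target group's full ARN but its trailing targetgroup/<name>/<id> portion. Assuming the AWS CLI is configured for the right account and region, you can list the ARNs and copy that suffix:

aws elbv2 describe-target-groups --query 'TargetGroups[].TargetGroupArn' --output text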

📈 7 | Performance & Cost Considerations

| Factor | KEDA | HPA | VPA | Custom Adapter |
|---|---|---|---|---|
| Responsiveness | 30–60 s | 15–30 s | N/A | 45–60 s |
| Infra Cost | Low (CloudWatch polling) | None | None | Medium (Prometheus infra) |
| Setup Overhead | Medium | Low | Medium | High |
| Maintenance | Low | Low | Low | High |
| Complexity | Medium | Low | Low | High |
| Best For | Traffic / event-driven | CPU / memory | Resource tuning | Centralized metrics |
| Scale-to-Zero | ✅ | ❌ | ❌ | ❌ |

🧠 8 | Decision Matrix

| Requirement | Best Choice | Reason |
|---|---|---|
| Real ALB traffic scaling | KEDA | Direct CloudWatch integration |
| CPU/memory-bound apps | HPA | Native, simple autoscaler |
| Optimize pod resources over time | VPA | Adjusts requests/limits |
| Central metrics team wants Prometheus-based control | Custom Adapter + HPA | Full metric plane |
| Low-traffic or manual control | Manual | No automation needed |

🧰 9 | Combining Approaches

A production-grade EKS stack often mixes them:

| Layer | Tool | Role |
|---|---|---|
| Replica Scaling | KEDA + HPA | Respond to traffic & CPU |
| Resource Tuning | VPA | Adjust limits automatically |
| Node Scaling | EKS Auto Mode (NodePools) | Provide capacity |
| Monitoring | CloudWatch + AMP + Grafana | Visibility into metrics |
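
In KEDA terms, "KEDA + HPA" usually means one ScaledObject with multiple triggers, since KEDA generates the underlying HPA for you (do not attach a separate HPA to the same Deployment). A hedged sketch combining a CPU trigger with the ALB trigger from section 6; note that including a CPU trigger rules out scale-to-zero:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: booking-traffic-cpu
  namespace: booking
spec:
  scaleTargetRef:
    name: booking-deployment
  minReplicaCount: 2
  maxReplicaCount: 30
  triggers:
  - type: cpu                  # resource-based guardrail
    metricType: Utilization
    metadata:
      value: "70"
  - type: aws-cloudwatch       # traffic-based trigger (see section 6)
    authenticationRef:
      name: alb-cw-auth
    metadata:
      namespace: AWS/ApplicationELB
      metricName: RequestCountPerTarget
      dimensionName: TargetGroup
      dimensionValue: targetgroup/k8s-xyz/abc123456
      metricStat: Sum
      metricStatPeriod: "60"
      targetMetricValue: "100"
      awsRegion: ap-northeast-1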

🔒 10 | Security and Auth Notes

  • Keep Firebase OIDC authentication at the pod level (not on the ALB listener); this avoids ALB authentication redirect limits.
  • Enable IRSA for KEDA & pods requiring AWS API access.
  • WAF rules protect ALB from volumetric attacks before KEDA reacts.
  • Monitor 5xx errors + TargetResponseTime to guard against scaling loops.
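
For the IRSA point, here is a minimal eksctl sketch that binds a CloudWatch read policy (such as the one from section 6) to KEDA's operator service account; the cluster name, policy name, and account ID are placeholders:

eksctl create iamserviceaccount \
  --cluster my-eks \
  --namespace keda \
  --name keda-operator \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/eks-cloudwatch-read \
  --approve \
  --override-existing-serviceaccounts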

🧭 11 | Final Recommendation

For your multi-namespace, WAF-protected, ALB-routed EKS cluster running in EKS Auto Mode, KEDA is the best fit for traffic-driven autoscaling:

  • Event-driven and responsive to real user load.
  • Scales independently per namespace/service.
  • Integrates cleanly with EKS NodePools for capacity.
  • Minimizes cost via scale-to-zero and fine-grained rules.

Use HPA as a fallback for CPU-based logic, VPA for optimization, and custom adapters only when you already maintain Prometheus or Datadog metric infrastructure.


🏁 Summary Matrix

| Dimension | Best Fit |
|---|---|
| Speed to implement | HPA |
| Responsiveness to traffic | KEDA |
| Ease of maintenance | KEDA / HPA |
| Cost efficiency | KEDA (scale-to-zero) |
| Complex metric logic | Custom Adapter |
| Resource tuning | VPA |

📚 References & Further Reading

  • AWS Blog – Autoscaling EKS with KEDA & CloudWatch
  • AWS Docs – EKS Auto Mode & NodePools
  • KEDA Docs – CloudWatch Scaler
  • SpectroCloud – Kubernetes Autoscaling Patterns: HPA, VPA & KEDA


Final Takeaway:
If you need hands-off, event-driven, traffic-aware, namespace-isolated scaling for an ALB-fronted EKS cluster, KEDA + EKS Auto Mode (NodePools) is the modern production-grade combination, balancing performance, cost, and operational simplicity for any multi-service cloud platform.

