Category
Networking and CDN
1. Introduction
Server Load Balancer (SLB) on Alibaba Cloud is a managed load balancing service that distributes inbound traffic across multiple backend servers to improve availability, scalability, and application responsiveness.
In simple terms, you place Server Load Balancer (SLB) in front of your application servers (for example, ECS instances). Users connect to SLB instead of connecting directly to a single server. SLB then forwards each request to a healthy backend server based on the listener protocol and routing rules you configure.
Technically, Server Load Balancer (SLB) provides Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS) load balancing capabilities (depending on the SLB type you choose) with health checks, backend server groups, session persistence (for applicable protocols), TLS termination (for HTTPS), and integrations with common Alibaba Cloud services such as ECS, VPC, CloudMonitor, ActionTrail, and Log Service (SLS). SLB instances can be Internet-facing or internal (VPC) and are deployed as managed infrastructure so you don’t have to run and patch load balancer VMs yourself.
The problem SLB solves is the “single server bottleneck” and “single point of failure” problem: when one instance fails, traffic can be routed to other healthy instances; when traffic increases, you can scale out by adding more backends without changing the client endpoint.
Naming note (important): Alibaba Cloud’s load balancing offerings have evolved. “Server Load Balancer” is the product family, and in many Alibaba Cloud materials you will see different load balancer types (such as Classic Load Balancer (CLB), Application Load Balancer (ALB), and Network Load Balancer (NLB)) under the broader load balancing portfolio. This tutorial uses Server Load Balancer (SLB) as the primary service name (as requested) and calls out where behavior depends on the specific load balancer type. Verify the latest naming and feature mapping in the official docs if your console shows ALB/NLB/CLB as separate services.
2. What is Server Load Balancer (SLB)?
Official purpose: Server Load Balancer (SLB) is Alibaba Cloud’s managed service for distributing incoming traffic across multiple backend endpoints to increase application availability and scale.
Core capabilities
- Traffic distribution across backend servers for supported protocols (commonly TCP, UDP, HTTP, HTTPS; exact support depends on SLB type).
- Health checks to detect unhealthy backends and stop sending them traffic.
- Backend server groups (server pools) to organize and scale backends.
- TLS/SSL termination for HTTPS listeners (capability and certificate workflow depend on type).
- Session persistence (sticky sessions) for applicable listener types.
- Access control (such as ACLs/security group controls, depending on implementation and type).
- Observability via metrics and logs using Alibaba Cloud monitoring/logging services.
Major components (common model)
While terminology varies slightly by load balancer type, you will commonly work with:
– SLB instance: The load balancer endpoint (Internet-facing or internal).
– Listeners: Define protocol/port, forwarding behavior, and optional features (health checks, persistence, TLS).
– Backend server groups: A set of backend endpoints (often ECS instances, ENIs, or IP-based backends, depending on the SLB type and configuration).
– Health checks: Active probing configuration to determine backend health.
– Routing/forwarding rules: L7 routing (host/path) for HTTP/HTTPS where supported.
– Certificates: For HTTPS/TLS termination (managed via Alibaba Cloud certificate services or uploaded certificates, depending on the workflow).
Service type and scope
- Service type: Managed load balancing (Networking and CDN category).
- Scope: Typically regional. You create SLB resources in a specific Alibaba Cloud region. High availability is generally achieved by multi-zone deployment within a region (exact behavior depends on the SLB type and selected zones).
- Networking model: Works inside a VPC for internal load balancing, and can expose a public endpoint for Internet-facing load balancing.
How SLB fits into the Alibaba Cloud ecosystem
SLB commonly sits at the edge of your VPC or as an internal traffic distribution layer:
– Fronts ECS workloads (web apps, APIs, game servers, TCP services).
– Integrates with VPC subnets (vSwitches), security groups, and route tables.
– Sends metrics to CloudMonitor and audit events to ActionTrail.
– Can push access logs or logs/metrics to Log Service (SLS) (verify the exact logging integration for your chosen load balancer type in official docs).
– Works alongside complementary services such as CDN, WAF, Anti-DDoS, and Global Accelerator-like products (availability and product names vary—verify in official Alibaba Cloud docs for your region).
3. Why use Server Load Balancer (SLB)?
Business reasons
- Higher uptime and better user experience: Reduce downtime by avoiding a single backend server dependency.
- Simpler scaling: Add or remove backend capacity without changing the public endpoint or client configuration.
- Faster time to production: Managed service reduces operational burden (patching, failover design, HA setup).
Technical reasons
- Health-based routing: Automatic failover away from unhealthy nodes.
- Protocol flexibility: Support for L4 and L7 balancing (depending on type).
- TLS offload (HTTPS termination): Centralize certificate management and reduce CPU load on app servers.
- Layer 7 routing (where supported): Route requests to different services based on hostname/path.
Operational reasons
- Centralized traffic control: One place to tune timeouts, connection limits, and routing behavior.
- Monitoring and logging: Standard metrics and logs enable faster troubleshooting.
- Automation: SLB is API-driven; integrates well with IaC and DevOps workflows (Terraform/CLI/SDK—verify current provider support and resource coverage).
Security/compliance reasons
- Reduced direct exposure: Backends can remain private in a VPC behind an internal SLB, limiting public ingress.
- Consistent TLS posture: Centralize cipher/TLS configuration and certificate rotations.
- Auditability: Changes are trackable via cloud audit mechanisms such as ActionTrail.
Scalability/performance reasons
- Horizontal scale-out: Add many backend servers to handle more traffic.
- Connection distribution: Better handling of concurrent connections than a single server.
- High availability constructs: Managed HA within the region (verify specifics for your SLB type).
When teams should choose SLB
- You need one stable endpoint for multiple backends.
- You want health checks and managed failover.
- You need Internet-facing entry with controlled backend exposure.
- You need L7 routing and TLS termination (where supported).
When teams should not choose SLB
- Single instance, low-traffic apps where the complexity/cost is unnecessary (a simple ECS with an EIP might be enough).
- Cross-region active-active requirements that SLB alone doesn’t cover (you may need DNS-based routing, GSLB/traffic manager, or a global acceleration service—verify Alibaba Cloud options).
- Ultra-custom proxy logic requiring bespoke filters/modules (a self-managed reverse proxy might be more flexible, though more operationally complex).
- Protocols/features not supported by your chosen SLB type (always confirm listener feature support in official docs).
4. Where is Server Load Balancer (SLB) used?
Industries
- E-commerce and retail (web storefronts, checkout APIs)
- FinTech and payments (API gateways, internal microservices)
- Gaming (TCP/UDP services, matchmaking endpoints)
- Media and streaming (API endpoints, origin balancing)
- SaaS (multi-tenant application front doors)
- Education (LMS portals, exam platforms)
- Enterprise IT (internal apps, line-of-business services)
Team types
- DevOps and SRE teams managing reliability and scaling
- Platform engineering teams building reusable application platforms
- Security teams enforcing standardized ingress controls
- Application teams deploying microservices and web apps
- Network teams managing VPC ingress/egress topology
Workloads
- Public web applications and APIs
- Private microservices behind internal load balancers
- Stateful apps that need controlled stickiness (with caution)
- TCP-based enterprise services (message brokers, custom protocols)
- Blue/green or canary deployments (when L7 routing supports it)
Architectures
- 2-tier (SLB → ECS app servers)
- 3-tier (SLB → web tier → internal SLB → app tier → DB)
- Microservices (Internet SLB/ALB → API services; internal SLB for east-west traffic where relevant)
- Kubernetes ingress patterns (often using ALB/Ingress controller patterns; verify current Alibaba Cloud Kubernetes integration guidance)
Real-world deployment contexts
- Production: Multi-zone, autoscaling, robust health checks, logs/metrics enabled, TLS managed centrally.
- Dev/Test: Minimal instances, smaller bandwidth, simplified health checks, short-lived environments to control cost.
5. Top Use Cases and Scenarios
Below are realistic SLB scenarios. Exact feasibility can depend on whether you use CLB/ALB/NLB under the SLB umbrella—verify supported features for your chosen type.
1) Highly available web front end (HTTP)
- Problem: A single web server fails or becomes overloaded.
- Why SLB fits: Distributes HTTP requests across multiple ECS instances and uses health checks.
- Example: Two ECS instances run NGINX; SLB listens on port 80 and forwards to both.
2) HTTPS termination for web apps
- Problem: Managing TLS certificates on every ECS is error-prone.
- Why SLB fits: Central certificate management and TLS termination at SLB (if supported by your type).
- Example: An e-commerce site terminates TLS at SLB and forwards HTTP to internal app servers.
3) L7 routing for microservices (host/path-based)
- Problem: Multiple services need one public entry point.
- Why SLB fits: L7 routing rules can route `/api/*` to API backends and `/static/*` to another pool (feature depends on ALB/CLB L7 capabilities).
- Example: `api.example.com` routes to the API pool; `www.example.com` routes to the web pool.
4) Internal load balancing for private services
- Problem: You want private service discovery without exposing instances publicly.
- Why SLB fits: Internal SLB provides stable private IP/endpoint within VPC.
- Example: An internal SLB fronts a payments microservice used by multiple internal apps.
5) TCP load balancing for custom protocols
- Problem: You have a TCP service needing horizontal scale and failover.
- Why SLB fits: L4 listeners distribute TCP connections across backends with health checks.
- Example: A proprietary TCP telemetry collector runs on 6 ECS instances.
6) UDP load balancing for latency-sensitive workloads
- Problem: UDP services need scale without manual routing.
- Why SLB fits: NLB/CLB UDP support (depending on type) can distribute UDP flows.
- Example: A multiplayer game service uses UDP listeners to spread traffic.
7) Blue/green deployments (two backend pools)
- Problem: Deployments cause downtime or risky cutovers.
- Why SLB fits: Switch backend weights or routing rules to gradually move traffic (capabilities depend on type).
- Example: Shift 10% traffic to the new version, monitor errors, then shift 100%.
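The gradual shift above comes down to weighted selection. A minimal sketch of the idea in shell (the pool names and 10/90 weights are illustrative; real SLB weight semantics depend on the scheduler your listener uses):

```bash
# Illustrative weight-based split: "green" (new version) weight 10,
# "blue" (current version) weight 90 -> roughly 10% of requests go green.
# Not an SLB API call; it only models what the scheduler does with weights.
pick_backend() {               # $1 = a request counter (stands in for scheduler state)
  blue_w=90; green_w=10
  r=$(( $1 % (blue_w + green_w) ))
  if [ "$r" -lt "$green_w" ]; then echo green; else echo blue; fi
}

green=0; blue=0; i=0
while [ "$i" -lt 100 ]; do
  if [ "$(pick_backend "$i")" = "green" ]; then
    green=$((green + 1))
  else
    blue=$((blue + 1))
  fi
  i=$((i + 1))
done
echo "green=$green blue=$blue"   # 100 requests split 10/90 by weight
```

To complete the cutover you would raise the green weight in stages (10 → 50 → 100) while watching error rates, then drain blue by setting its weight to 0.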
8) Centralized access control with IP allowlists
- Problem: Admin endpoints should only be reachable from corporate IPs.
- Why SLB fits: Listener ACLs/controls (where supported) plus security groups reduce exposure.
- Example: `/admin` is routed to a restricted backend and only allowed from office IPs.
9) Simplified scaling with Auto Scaling
- Problem: Traffic spikes require rapid capacity changes.
- Why SLB fits: SLB works with ECS scaling patterns; backends can be added/removed without changing clients.
- Example: A flash sale triggers scale-out of ECS instances behind SLB.
10) Multi-tier architectures with both Internet and internal SLB
- Problem: Separate public ingress from internal east-west traffic.
- Why SLB fits: Internet-facing SLB for users; internal SLB for app-to-app traffic.
- Example: Public SLB → web tier → internal SLB → app tier.
11) API gateway-like fronting (basic)
- Problem: Multiple API services need one endpoint, but not full API management.
- Why SLB fits: L7 routing and TLS offload can provide a basic front door.
- Example: Route `/v1/` and `/v2/` to different server groups.
12) Migration from single ECS to scalable architecture
- Problem: Monolithic app on one ECS needs HA without redesign.
- Why SLB fits: “Lift-and-shift” scale-out by cloning the instance into multiple backends.
- Example: Bake an image, deploy 3 ECS instances, put SLB in front.
6. Core Features
Feature availability depends on the specific SLB type (CLB/ALB/NLB). Always confirm in the official feature matrix for your region.
1) Internet-facing and internal load balancers
- What it does: Creates a public endpoint (Internet SLB) or a private endpoint inside VPC (internal SLB).
- Why it matters: Lets you design both public ingress and private service-to-service traffic.
- Practical benefit: Backends can remain private even for Internet workloads.
- Caveat: Internet-facing configurations often incur Internet bandwidth/data transfer costs; internal traffic is still billed depending on network path and products used—verify your billing model.
2) Layer 4 load balancing (TCP/UDP)
- What it does: Distributes transport-layer connections/packets to backends.
- Why it matters: Supports non-HTTP protocols and high-performance connection distribution.
- Practical benefit: Works for database proxies, game servers, and custom TCP services.
- Caveat: L4 does not provide HTTP-level routing (paths/hosts). Client IP preservation and proxy protocol behavior vary by product/type—verify.
3) Layer 7 load balancing (HTTP/HTTPS)
- What it does: Terminates/understands HTTP(S) and forwards requests based on HTTP properties.
- Why it matters: Enables host/path routing, redirects, header-based routing (advanced features often in ALB).
- Practical benefit: One load balancer can front multiple web services.
- Caveat: Feature richness differs between CLB and ALB; confirm supported rule types.
4) Health checks
- What it does: Probes backend servers to determine availability.
- Why it matters: Prevents sending traffic to failed instances.
- Practical benefit: Automatic failover during crashes, deploys, or network issues.
- Caveat: Misconfigured health checks can cause “all backends unhealthy.” Ensure security groups and app endpoints allow health check traffic.
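Health-check misconfigurations are easier to debug if you keep the threshold model in mind: a backend is not marked down on a single failed probe. A small sketch of that logic (the 3-failure/3-success thresholds are illustrative defaults; the real values are configurable per listener, so check yours):

```bash
# Illustrative health-check state machine: 3 consecutive failures mark a
# backend Unhealthy; 3 consecutive successes mark it Healthy again.
# (Thresholds are examples; SLB lets you tune them per listener.)
state="Healthy"; ok=0; fail=0

observe() {   # $1 = result of one probe: "pass" or "fail"
  if [ "$1" = "pass" ]; then
    ok=$((ok + 1)); fail=0
    if [ "$state" = "Unhealthy" ] && [ "$ok" -ge 3 ]; then state="Healthy"; fi
  else
    fail=$((fail + 1)); ok=0
    if [ "$state" = "Healthy" ] && [ "$fail" -ge 3 ]; then state="Unhealthy"; fi
  fi
}

for probe in pass fail fail fail pass pass pass; do
  observe "$probe"
  echo "probe=$probe -> $state"
done
```

This is also why recovery after a deploy is not instant: the backend must pass several consecutive probes before traffic returns.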
5) Backend server groups / pools
- What it does: Organizes backends into groups; attaches a group to listeners/rules.
- Why it matters: Clean separation of services and environments.
- Practical benefit: Easier scaling and safer deployments (blue/green via separate groups).
- Caveat: Backend types (ECS, ENI, IP) and cross-VPC possibilities depend on SLB type—verify.
6) Session persistence (sticky sessions)
- What it does: Routes a client to the same backend for a period (cookie-based or source-IP based depending on protocol).
- Why it matters: Useful for legacy apps that store session state locally.
- Practical benefit: Reduces “logged out” issues without shared session storage.
- Caveat: Sticky sessions can reduce effective load distribution and complicate scaling; prefer stateless apps when possible.
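Source-IP persistence can be pictured as hashing the client address to a backend index. A toy sketch of that mapping (this is not SLB's actual algorithm, which is not documented at this level; the IPs and pool size are made up):

```bash
# Toy source-IP stickiness: hash the client IP, pick a backend by modulo.
# The same IP always maps to the same backend while the pool size is stable.
pick_backend_for_ip() {
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)   # stable CRC of the IP string
  echo "backend-$(( (h % 2) + 1 ))"               # 2 backends in the pool
}

first=$(pick_backend_for_ip 203.0.113.10)
second=$(pick_backend_for_ip 203.0.113.10)
other=$(pick_backend_for_ip 198.51.100.7)
echo "203.0.113.10 -> $first (repeat: $second), 198.51.100.7 -> $other"
```

The toy model also makes the caveat visible: change the pool size and many clients get remapped, which is one reason stateless backends scale more cleanly than sticky ones.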
7) TLS/SSL termination for HTTPS
- What it does: Handles TLS handshake and encryption at the load balancer.
- Why it matters: Simplifies certificate rotation and offloads compute from app servers.
- Practical benefit: Centralized control of TLS versions/ciphers (depending on product support).
- Caveat: Ensure compliance with your organization’s TLS policy; confirm support for modern TLS versions and certificate formats in official docs.
8) Access control and traffic filtering (where supported)
- What it does: Restricts inbound traffic using allow/deny lists or related controls.
- Why it matters: Reduces attack surface.
- Practical benefit: Quick blocks for known bad IPs; restrict admin endpoints.
- Caveat: For robust application-layer protection, use Alibaba Cloud WAF and DDoS protections; SLB controls are not a full security suite.
9) Cross-zone load balancing / high availability patterns
- What it does: Balances across backends in multiple zones (depending on SLB type and configuration).
- Why it matters: Reduces blast radius from zonal failures.
- Practical benefit: Higher availability for critical apps.
- Caveat: Ensure your backends and dependencies are also multi-zone.
10) Monitoring and logging integrations
- What it does: Provides metrics and (optionally) access logs integrated with Alibaba Cloud observability services.
- Why it matters: You need visibility into latency, errors, backend health, and traffic.
- Practical benefit: Faster incident response; capacity planning.
- Caveat: Log retention and ingestion costs can be significant in high-traffic environments.
7. Architecture and How It Works
High-level architecture
SLB sits between clients and backend servers:
1. Client resolves the SLB endpoint (IP/DNS).
2. Client connects to the SLB listener (e.g., TCP:443).
3. SLB performs protocol handling (TLS termination for HTTPS, HTTP routing if L7).
4. SLB selects a healthy backend from the server group.
5. SLB forwards traffic to the backend.
6. SLB collects metrics/logs and exposes them to monitoring systems.
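Steps 4 and 5 above can be sketched as filtering the pool by health and then cycling through what remains. A minimal round-robin model (the IPs and the "failed" backend are made up for illustration):

```bash
# Model of backend selection: drop unhealthy members, round-robin the rest.
backends=(10.0.1.11 10.0.1.12 10.0.1.13)

is_healthy() { [ "$1" != "10.0.1.12" ]; }   # pretend 10.0.1.12 failed its probes

healthy=()
for b in "${backends[@]}"; do
  is_healthy "$b" && healthy+=("$b")
done

for req in 1 2 3 4; do
  target=${healthy[$(( (req - 1) % ${#healthy[@]} ))]}
  echo "request $req -> $target"
done
```

Note that the unhealthy member is skipped entirely rather than given fewer requests; traffic returns to it only after health checks pass again.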
Request/data/control flow
- Data plane (traffic): Client → SLB listener → backend server.
- Control plane (management): You configure SLB via Alibaba Cloud Console, API, CLI, SDK, or IaC tools. These changes update the SLB configuration and propagate to the managed infrastructure.
Integrations with related services (common)
- ECS: Typical backend compute.
- VPC/vSwitch: Network placement of internal SLB and backends.
- Security Groups: Protect ECS backends; allow health check and listener traffic.
- CloudMonitor: Metrics and alerts.
- ActionTrail: Audit trail for configuration changes.
- Log Service (SLS): Access logs and analytics (verify per SLB type).
- Certificate services: Manage and deploy certificates for HTTPS listeners (verify current product name and workflow).
Dependency services
- VPC and vSwitch are foundational for most deployments.
- ECS instances or other backend endpoints must be reachable from SLB.
- DNS (Alibaba Cloud DNS or external DNS) maps your domain to SLB endpoint if you use custom domains.
Security/authentication model
- Control plane access is governed by Resource Access Management (RAM).
- Use RAM users/roles and least-privilege policies for creating and modifying SLB resources.
- Network security is enforced with VPC security constructs (security groups, NACLs where applicable) plus SLB listener controls.
Networking model
- Internet SLB: Has a public endpoint; forwards to private backends in your VPC.
- Internal SLB: Has a private IP; accessible only within the VPC (or via connected networks such as CEN/VPN/Express Connect, depending on your topology).
Monitoring/logging/governance considerations
- Define SLOs around availability, latency, and error rates.
- Enable metrics and alerts early (backend unhealthy count, 4xx/5xx rates, latency).
- Use consistent tagging for cost allocation and ownership.
Simple architecture diagram (Mermaid)
```mermaid
flowchart LR
  U[Users/Clients] -->|HTTP/HTTPS| SLB["Server Load Balancer (SLB)"]
  SLB --> ECS1[ECS Backend 1]
  SLB --> ECS2[ECS Backend 2]
  SLB --> ECS3[ECS Backend 3]
```
Production-style architecture diagram (Mermaid)
```mermaid
flowchart TB
  subgraph Internet
    Users[Users]
    DNS["DNS: app.example.com"]
  end
  subgraph AlibabaCloudVPC[Alibaba Cloud VPC]
    SLBpub[Internet-facing SLB]
    subgraph WebTier[Web Tier - Multi-zone]
      ECSa["ECS Web A (Zone A)"]
      ECSb["ECS Web B (Zone B)"]
    end
    SLBint[Internal SLB]
    subgraph AppTier[App Tier - Multi-zone]
      APPa["ECS App A (Zone A)"]
      APPb["ECS App B (Zone B)"]
    end
    DB[(Managed DB service / ECS DB)]
  end
  Users --> DNS --> SLBpub
  SLBpub --> ECSa
  SLBpub --> ECSb
  ECSa --> SLBint
  ECSb --> SLBint
  SLBint --> APPa
  SLBint --> APPb
  APPa --> DB
  APPb --> DB
  SLBpub -.metrics.-> CM[CloudMonitor]
  SLBpub -.audit.-> AT[ActionTrail]
  SLBpub -.logs.-> SLS["Log Service (SLS)"]
```
8. Prerequisites
Before starting the lab and production work, ensure the following.
Account and billing
- An active Alibaba Cloud account with billing enabled.
- A payment method or credits available (SLB and ECS are billable).
- If your organization uses a resource directory or multi-account structure, ensure you’re operating in the correct account.
Permissions (RAM)
You need permissions to:
– Create/modify/delete SLB resources
– Create/modify/delete VPC/vSwitch resources
– Create/modify/delete ECS instances and security groups
– Allocate or use public Internet bandwidth/EIP, depending on workflow
If you’re in an enterprise environment, request a RAM policy granting least-privilege access. If you are unsure which exact actions are required, verify in official docs for “SLB RAM policy” and “ECS/VPC permissions”.
Tools
- Alibaba Cloud Console access
- Optional: SSH client (macOS/Linux terminal or Windows Terminal/PuTTY)
- Optional: Alibaba Cloud CLI (`aliyun`) if you want to script (not required for this tutorial)
Region availability
- Choose a region close to your users.
- Ensure SLB type you want (CLB/ALB/NLB) is supported in the region. Availability can vary—verify in official docs.
Quotas and limits
Common limits can include:
– Number of SLB instances per region
– Number of listeners per instance
– Number of backend servers per server group
– ACL sizes and rule counts
Exact quotas can change and vary by account/region. Check the Quotas/limits section in official SLB documentation for current values and how to request increases.
Prerequisite services
- VPC with at least one vSwitch
- ECS instances in the VPC to act as backends
- Security groups configured to allow required traffic
9. Pricing / Cost
Alibaba Cloud SLB pricing is usage-based and differs by load balancer type (for example, CLB vs ALB vs NLB) and by region. Do not assume one universal price.
Pricing dimensions (typical)
Depending on SLB type and billing mode, you may see charges such as:
– Instance fee / capacity fee: A base cost for the SLB instance and selected performance/spec level.
– LCU-based billing (common in application load balancing models): Charges based on "load balancer capacity units" that represent combinations of new connections, active connections, bandwidth, and rule evaluations. (Exact definition is product-specific—verify in official pricing docs.)
– Bandwidth / data transfer (especially for Internet-facing SLB): Pay-by-bandwidth (fixed Mbps) or pay-by-data-transfer (per GB) options may be available depending on region and product.
– Feature add-ons: Some advanced capabilities may affect cost (for example, enhanced logging, reserved capacity, or premium editions—verify).
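As a back-of-envelope exercise, a monthly estimate is essentially instance-hours plus data transfer. All rates below are hypothetical placeholders, not Alibaba Cloud prices; substitute real numbers from the official pricing page for your region and SLB type:

```bash
# Hypothetical rates (in cents) purely for illustration -- NOT real SLB prices.
instance_fee_cents_per_hour=2    # placeholder base instance fee
egress_cents_per_gb=8            # placeholder pay-by-data-transfer rate
hours_per_month=720
egress_gb=500

instance_cents=$(( instance_fee_cents_per_hour * hours_per_month ))
egress_cents=$(( egress_cents_per_gb * egress_gb ))
total_cents=$(( instance_cents + egress_cents ))
printf 'instance=$%d.%02d egress=$%d.%02d total=$%d.%02d per month\n' \
  $((instance_cents / 100)) $((instance_cents % 100)) \
  $((egress_cents / 100)) $((egress_cents % 100)) \
  $((total_cents / 100)) $((total_cents % 100))
```

Even with made-up rates, the shape of the result is typical: for public-facing workloads the egress term usually dominates the instance fee.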
Free tier
Alibaba Cloud offerings sometimes have free trials or promotional credits, but SLB free tiers are not guaranteed and can change. Verify in official pricing pages and promotions for your account/region.
Key cost drivers
- Public Internet egress: The biggest cost driver for many public apps.
- Traffic volume and concurrency: Higher QPS, more concurrent connections, and more rules can increase LCU or capacity charges.
- Number of load balancers: Per-environment (dev/test/stage/prod) duplication.
- Logging volume: Access logs to SLS can add ingestion and storage costs.
- Backend compute scaling: SLB makes scaling easy, but ECS costs increase as you add instances.
Hidden or indirect costs
- Cross-zone traffic: Some architectures can increase inter-zone data transfer costs (billing depends on Alibaba Cloud networking pricing—verify).
- Idle but running environments: SLB instances and public bandwidth allocations can incur costs even at low traffic.
- Certificate management: Paid certificates or certificate management features (if not using free certificates) may add cost.
How to optimize cost
- Prefer internal SLB for east-west traffic where possible.
- Use pay-by-data-transfer for unpredictable traffic (if available) or pay-by-bandwidth for steady traffic—choose based on traffic pattern and pricing in your region.
- Right-size SLB type: use L4 when you don’t need L7 features; use ALB only when you need advanced L7 routing.
- Keep access logs with a sensible retention policy; aggregate/ship only what you need.
- Use autoscaling on backends to avoid overprovisioning ECS.
Example low-cost starter estimate (conceptual)
A small dev environment typically includes:
– 1 Internet-facing SLB (smallest suitable spec or entry capacity tier)
– 2 small ECS instances as backends
– Minimal Internet data transfer (light testing)
– Basic monitoring enabled
Because prices vary by region, SLB type, and billing mode, estimate using official sources:
– SLB pricing page (official): https://www.alibabacloud.com/product/slb (navigate to pricing from here)
– Pricing calculator (official): https://www.alibabacloud.com/pricing/calculator (or the region-specific calculator link shown in the console)
Example production cost considerations
For production, plan for:
– Multi-zone backends (and possibly multiple SLBs: Internet + internal)
– Higher LCU/capacity usage due to peaks
– Significant Internet egress
– Logging/monitoring retention
– WAF/Anti-DDoS services in front of SLB (additional cost)
– Multi-environment duplication (staging, DR)
10. Step-by-Step Hands-On Tutorial
This lab builds a small, real SLB setup: an Internet-facing load balancer distributing HTTP traffic across two ECS instances running NGINX. It is designed to be low-risk and easy to clean up.
Notes:
– Console wording can vary by region and by whether your account shows "SLB/CLB/ALB/NLB" separately. Choose the option that corresponds to Classic Load Balancer (CLB) or the SLB type that supports a simple HTTP listener and ECS backends in your region.
– If any screen differs, follow the closest equivalent and verify in official docs.
Objective
Create Server Load Balancer (SLB) to distribute HTTP requests across two ECS instances and verify that traffic is balanced.
Lab Overview
You will:
1. Create a VPC, vSwitch, and security group
2. Create two ECS instances and install NGINX
3. Create an Internet-facing SLB instance
4. Configure an HTTP listener and add ECS instances as backends
5. Validate balancing and health checks
6. Clean up all resources to stop charges
Step 1: Create a VPC and vSwitch
Console path (typical): VPC Console → VPCs → Create VPC
- Choose a Region for all resources (example: a region close to you).
- Create a VPC with an IPv4 CIDR (example: `10.0.0.0/16`).
- Create a vSwitch in one zone with a subnet CIDR (example: `10.0.1.0/24`).
Expected outcome – A new VPC and vSwitch exist in your chosen region.
Verification – In the VPC console, confirm the VPC status is Available and the vSwitch is present.
Step 2: Create a security group for the backends
Console path (typical): ECS Console → Network & Security → Security Groups → Create
- Create a security group in the same region and VPC.
- Add inbound rules:
– Allow TCP 22 from your IP (for SSH administration).
– Allow TCP 80 from the SLB or from `0.0.0.0/0` for the lab.
– More secure option: if the console provides SLB health check IP ranges or an SLB security group reference, restrict to those. Otherwise, allow `0.0.0.0/0` temporarily and tighten later.
Expected outcome – Security group exists with inbound rules for SSH and HTTP.
Verification – View inbound rules and confirm ports 22 and 80 are allowed.
Step 3: Create two ECS instances (backend servers)
Console path (typical): ECS Console → Instances → Create Instance
Create two ECS instances with these common settings:
– Network: select your VPC and vSwitch
– Security group: the one you created
– Public IP:
  – For a private backend design, you do not need public IPs; you can administer via a bastion.
  – For this beginner lab, you may assign public IPs temporarily for SSH (or use ECS Workbench if available).
– OS: choose a Linux distribution you're comfortable with (Ubuntu/CentOS/Alibaba Cloud Linux). Commands below assume Ubuntu/Debian; RHEL-like alternatives are provided.
– Authentication: SSH key pair preferred (or password if necessary).
Name them:
– slb-backend-1
– slb-backend-2
Expected outcome – Two running ECS instances in the same VPC.
Verification
– Instances show Running and have private IP addresses in 10.0.1.0/24.
Step 4: Install and configure NGINX on both ECS instances
SSH into each instance and install NGINX. Use a unique page per instance so you can see which backend responded.
Option A: Ubuntu/Debian
On backend 1:
```bash
sudo apt-get update
sudo apt-get install -y nginx
echo "Hello from backend-1" | sudo tee /var/www/html/index.html
sudo systemctl enable nginx
sudo systemctl restart nginx
```
On backend 2:
```bash
sudo apt-get update
sudo apt-get install -y nginx
echo "Hello from backend-2" | sudo tee /var/www/html/index.html
sudo systemctl enable nginx
sudo systemctl restart nginx
```
Option B: RHEL/CentOS/Alibaba Cloud Linux (verify package manager)
```bash
sudo yum install -y nginx
echo "Hello from backend-1" | sudo tee /usr/share/nginx/html/index.html
sudo systemctl enable nginx
sudo systemctl restart nginx
```
Expected outcome – Each ECS serves a simple HTML page on port 80.
Verification From your workstation (if the instance has a public IP) or from another host inside the VPC:
```bash
curl -s http://<BACKEND_1_PRIVATE_OR_PUBLIC_IP>/
curl -s http://<BACKEND_2_PRIVATE_OR_PUBLIC_IP>/
```
You should see different text for each backend.
Step 5: Create an Internet-facing Server Load Balancer (SLB) instance
Console path (typical): Server Load Balancer Console → Instances → Create
- Choose the SLB type that supports a basic HTTP listener and ECS backends in your region (often Classic Load Balancer (CLB) for this kind of simple lab).
- Select:
  – Region: same as ECS/VPC
  – Network type: Internet
  – VPC: select your VPC
  – vSwitch/zone: follow console guidance (some products ask for multiple zones)
- Billing:
  – Choose a low-cost mode suitable for a short lab (for example, pay-as-you-go).
  – For Internet bandwidth, choose the smallest practical setting and a billing method appropriate for low traffic (availability varies by region).
Create the instance.
Expected outcome – SLB instance is created with a public endpoint (IP or DNS name).
Verification – SLB console shows the instance as Active/Running and displays a public address.
Step 6: Add backend servers to SLB
Console path (typical): SLB instance → Backend Servers / Server Group
- Add `slb-backend-1` and `slb-backend-2` as backend servers.
- Set a weight for each backend (e.g., both 100) to distribute evenly.
Expected outcome – Both ECS instances are registered as backends.
Verification – In SLB backend list, both appear attached. Health may still be unknown until the listener is created.
Step 7: Create an HTTP listener on port 80
Console path (typical): SLB instance → Listeners → Add Listener (HTTP)
Configure:
– Listener protocol: HTTP
– Listener port: 80
– Backend port: 80
– Health check: Enabled
– Path: /
– Expected HTTP codes: often http_2xx or similar (choose default)
– Scheduling algorithm: choose a default (e.g., round-robin). Exact names vary.
– Session persistence: Disabled for this lab (so you can observe distribution more easily)
Save/apply configuration.
Expected outcome – SLB listens on port 80 and starts health-checking backends.
Verification – Listener status shows Running. – Backend health shows Healthy for both (may take 1–2 minutes).
Step 8: Test load balancing behavior
From your laptop:
```bash
for i in {1..10}; do curl -s http://<SLB_PUBLIC_IP_OR_DOMAIN>/; echo; done
```
Expected outcome
– Output alternates between:
– Hello from backend-1
– Hello from backend-2
If you consistently see only one backend, check whether:
– session persistence is enabled
– your client is reusing connections (each separate curl invocation opens a new connection, but browsers and keep-alive HTTP clients can pin to one backend)
– weights are uneven
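To quantify the distribution rather than eyeballing it, pipe the responses through `sort | uniq -c`. The snippet below simulates the responses so it runs offline; against the real endpoint, generate `$responses` with the curl loop from above instead of the `printf`:

```shell
# Count how often each backend answered. Offline simulation; for the real
# test, generate the variable with:
#   responses=$(for i in {1..20}; do curl -s http://<SLB_PUBLIC_IP_OR_DOMAIN>/; echo; done)
responses=$(printf 'Hello from backend-1\nHello from backend-2\nHello from backend-2\nHello from backend-1\n')
echo "$responses" | sort | uniq -c
```

A heavily skewed count points back at the same causes listed above: persistence, connection reuse, or uneven weights.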
Validation
Use these checks to confirm the setup is correct:
- Listener reachable:

  ```bash
  curl -I http://<SLB_PUBLIC_IP_OR_DOMAIN>/
  ```

  Expect HTTP `200 OK` (or `301`/`302` depending on the default NGINX config).
- Backends healthy in console: SLB console shows both backends healthy.
- Backend ports open: the security group allows inbound TCP 80 (for the lab, from `0.0.0.0/0` or from SLB where supported).
Troubleshooting
Common issues and fixes:
- Backends show Unhealthy
  - Confirm NGINX is running:

    ```bash
    sudo systemctl status nginx
    ```

  - Confirm port 80 is listening:

    ```bash
    sudo ss -lntp | grep :80 || sudo netstat -lntp | grep :80
    ```

  - Confirm the security group allows inbound TCP 80.
  - Confirm the health check path `/` returns 200:

    ```bash
    curl -I http://127.0.0.1/
    ```
- SLB public endpoint times out
  - Ensure the SLB listener is created and in Running state.
  - Verify the Internet-facing SLB has a public address assigned.
  - Check whether your local network blocks outbound port 80.
- Only one backend responds
  - Disable session persistence.
  - Ensure both backends are healthy.
  - Ensure both backends have a non-zero weight.
- SSH not accessible
  - The security group inbound TCP 22 rule should allow your IP.
  - If you did not assign public IPs, use a bastion host or the ECS Workbench feature (if available).
Cleanup
To avoid ongoing charges, delete resources in this order:
- Delete SLB listener(s) (if required by console)
- Delete SLB instance
- Delete ECS instances `slb-backend-1` and `slb-backend-2`
- Delete security group (after instances are gone)
- Delete vSwitch
- Delete VPC
- If you allocated any EIP or reserved bandwidth resources separately, release them as well.
Expected outcome – No SLB/ECS/VPC resources remain from the lab, minimizing cost.
11. Best Practices
Architecture best practices
- Design for multi-zone: Place backends across at least two zones where possible; ensure the SLB type supports cross-zone behavior as you expect.
- Use internal SLB for east-west traffic: Keep internal service traffic private and reduce attack surface.
- Separate tiers: Use one Internet SLB for the public edge and internal SLBs for app tiers when architectures grow.
- Stateless where possible: Store sessions in shared stores (Redis, DB) instead of relying on stickiness.
IAM/security best practices
- Enforce least privilege with RAM:
- Separate roles for network admins vs application deployers.
- Restrict who can modify listeners and backend pools.
- Require MFA for privileged accounts.
- Use ActionTrail to audit changes to SLB resources.
Cost best practices
- Choose the right billing model for your traffic pattern (bandwidth vs traffic, capacity vs LCU).
- Scale backends automatically but set sane min/max to prevent runaway compute cost.
- Enable logs selectively; keep retention minimal and archive if needed.
Performance best practices
- Tune health checks:
  - Use a lightweight endpoint (e.g., `/healthz`) that checks dependencies appropriately.
  - Avoid overly aggressive intervals that cause noise or load.
- Use appropriate idle timeouts and keep-alive settings (listener-specific).
- For HTTP apps, avoid large headers and excessive redirects; SLB may have header size or request limits depending on type—verify.
Reliability best practices
- Implement graceful deregistration during deployments:
- Drain connections before removing a backend (capability depends on SLB type).
- Use multiple backends and plan capacity for N+1.
- Monitor backend health and error rates with alerts.
Operations best practices
- Tag SLB resources: `env=prod|staging|dev`, `owner=team-name`, `app=service-name`, `cost-center=...`
- Standardize naming: `slb-{app}-{env}-{region}`
- Define runbooks:
- “Backends unhealthy”
- “Latency spike”
- “TLS certificate renewal”
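A trivial helper following the `slb-{app}-{env}-{region}` scheme; the app/env/region values below are examples, not values from this lab:

```shell
# Build a standard SLB name from the slb-{app}-{env}-{region} scheme.
# app/env/region values are illustrative examples.
app=portal; env=prod; region=cn-hangzhou
name="slb-${app}-${env}-${region}"
echo "$name"
```

Generating names this way (in scripts or IaC) keeps consoles and cost reports searchable by app, environment, and region.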
Governance best practices
- Use policy and review controls around:
- public exposure (Internet vs internal)
- allowed ports
- TLS policies
- Maintain an inventory of Internet-facing endpoints.
12. Security Considerations
Identity and access model
- SLB configuration is controlled by RAM.
- Use separate roles for:
- Creating/deleting SLB instances
- Modifying listeners and certificates
- Modifying backend server groups
- Restrict high-risk actions (like deleting load balancers or opening listeners to `0.0.0.0/0`) to a small admin group.
Encryption
- For HTTPS endpoints, terminate TLS at SLB (if supported) or pass-through TLS at L4 (if your architecture requires end-to-end encryption).
- Manage certificates carefully:
- Prefer managed certificate services and rotation workflows.
- Track expiration dates with alerts.
- For internal traffic, consider whether mTLS is needed—SLB may not provide mTLS for all types; verify.
Network exposure
- Prefer internal SLB for non-public services.
- Use security groups to:
- Allow only SLB-to-backend traffic on backend ports.
- Restrict SSH to your admin IPs (or use bastion/SSM-like alternatives).
- For Internet SLB, consider adding:
- WAF (for HTTP/HTTPS) and
- Anti-DDoS services, depending on your threat model (confirm Alibaba Cloud’s current products and integration patterns).
Secrets handling
- Do not store secrets in user data scripts or in world-readable files.
- Use a secrets manager product if available in your environment (verify Alibaba Cloud options) or a secure vault solution.
Audit/logging
- Enable ActionTrail for audit events.
- Collect SLB access logs (where supported) into SLS with:
- least retention necessary
- restricted access and immutable storage policies if required by compliance
Compliance considerations
- Confirm data residency requirements: SLB is regional; keep traffic termination and logs within required regions.
- For regulated workloads, document:
- TLS policy
- logging retention
- change management and approvals for listener changes
Common security mistakes
- Leaving Internet listeners open to all when not needed.
- Allowing backend instances to be publicly reachable directly (bypassing SLB).
- Not rotating TLS certificates.
- Overly permissive RAM policies granting wildcard access.
Secure deployment recommendations
- Use private backends without public IPs where possible.
- Restrict backend security group inbound to:
- SLB source addresses (where available)
- or VPC CIDR ranges as appropriate
- Standardize listener ports (80/443) and redirect HTTP→HTTPS if your policy requires.
- Add WAF for public web apps.
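One way to verify an HTTP→HTTPS redirect policy is to inspect the response headers for a `Location` pointing at `https://`. The snippet simulates the headers so it runs offline; in a real check, capture them with `curl -sI http://<SLB_PUBLIC_IP_OR_DOMAIN>/` instead:

```shell
# Verify an HTTP->HTTPS redirect from response headers. Offline simulation;
# for the real check, capture headers with:
#   headers=$(curl -sI http://<SLB_PUBLIC_IP_OR_DOMAIN>/)
headers='HTTP/1.1 301 Moved Permanently
Location: https://example.com/'
echo "$headers" | grep -qi '^location: https://' && echo "redirects to HTTPS"
```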
13. Limitations and Gotchas
Because SLB is a family of products (CLB/ALB/NLB), limitations vary. Confirm your exact type’s limits in official docs.
Common real-world gotchas include:
- Quota ceilings: Listener counts, backend counts, and rule counts can be limited per instance or per region.
- Feature mismatch by type:
- L7 routing and advanced rules may require ALB.
- High-performance L4 might be NLB.
- Legacy behavior might be CLB.
- Health check pitfalls: If security groups block probes or your health endpoint is slow, SLB marks instances unhealthy.
- Client IP preservation: How the real client IP is forwarded (headers like `X-Forwarded-For` for HTTP, proxy protocol for TCP) depends on SLB type and configuration—verify and test.
- Connection reuse and stickiness: Even without sticky sessions, keep-alive can cause repeated requests to hit one backend in some cases.
- Timeout defaults: Idle timeouts and backend response timeouts can lead to 502/504-like errors; tune based on app behavior.
- Certificate workflow differences: Upload vs managed certificates; SNI support; TLS policies differ by product and region.
- Logging availability/cost: Access logs may require enabling a feature and can add noticeable cost at scale.
- Public bandwidth billing surprises: Public SLB can incur charges even with moderate use if bandwidth is reserved or if egress is high.
14. Comparison with Alternatives
Alternatives within Alibaba Cloud
- Application Load Balancer (ALB): Generally preferred for advanced HTTP/HTTPS L7 routing and modern app patterns.
- Network Load Balancer (NLB): Typically for high-performance L4 workloads.
- Classic Load Balancer (CLB): Often used for traditional L4/L7 patterns and broad compatibility; may be considered “classic” compared to newer ALB/NLB offerings.
- CDN + origin: For static-heavy web delivery; CDN is not a load balancer but reduces origin load.
- WAF: Not a load balancer, but often deployed in front of SLB for security.
Alternatives in other clouds (conceptual equivalents)
- AWS: Elastic Load Balancing (ALB/NLB/CLB)
- Azure: Azure Load Balancer / Application Gateway
- Google Cloud: Cloud Load Balancing

Use these for conceptual mapping; implementation details differ.
Open-source / self-managed alternatives
- NGINX / HAProxy on ECS instances
- Envoy proxy fleets

These provide flexibility but require you to manage scaling, HA, patching, and failure recovery.
Comparison table
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Alibaba Cloud Server Load Balancer (SLB) – CLB | Traditional L4/L7 balancing for ECS | Mature patterns, managed HA, simple listener model | May lack some modern L7 features compared to ALB | You need a straightforward HTTP/HTTPS/TCP/UDP LB with broad compatibility |
| Alibaba Cloud Server Load Balancer (SLB) – ALB | Advanced HTTP/HTTPS routing, microservices ingress | Rich L7 routing, modern app patterns | May be more complex/costly than basic LB | You need host/path routing, multiple services behind one entry, advanced L7 features |
| Alibaba Cloud Server Load Balancer (SLB) – NLB | High-performance L4, large connection volumes | Efficient L4 balancing, scalable | Not an HTTP router | You need TCP/UDP performance and don’t need L7 routing |
| Self-managed NGINX/HAProxy on ECS | Maximum customization | Full control over proxy logic | High operational burden; HA is on you | You must run custom modules/logic not supported by managed SLB |
| CDN in front of origin | Static content acceleration | Reduced origin load, global edge caching | Not a replacement for LB | Your primary challenge is static delivery and latency, not backend distribution |
| AWS/Azure/GCP load balancers | Multi-cloud standards | Deep integration in their ecosystems | Not relevant inside Alibaba Cloud without redesign | You are deploying in those clouds or standardizing multi-cloud patterns |
15. Real-World Example
Enterprise example: Internal and external balancing for a regulated portal
- Problem: A large enterprise runs a customer portal and multiple internal services. They need high availability, controlled access, centralized TLS, audit trails, and predictable operations.
- Proposed architecture:
- Internet-facing SLB terminates HTTPS for the portal.
- Web tier ECS instances in multiple zones.
- Internal SLB fronts application services.
- Private database tier (managed DB or ECS-based).
- CloudMonitor alerts and ActionTrail auditing.
- WAF in front of the Internet SLB for OWASP protections (verify the recommended integration path).
- Why SLB was chosen:
- Managed HA and health checks reduce operational load.
- Centralized TLS and consistent ingress policies.
- Works natively with VPC and ECS.
- Expected outcomes:
- Improved uptime during instance failures.
- Safer deployments via backend pool updates.
- Better observability of traffic and backend health.
Startup/small-team example: Fast launch of a scalable API
- Problem: A small team has one API server and needs to scale for a marketing launch without downtime.
- Proposed architecture:
- One Internet-facing SLB with HTTP/HTTPS listeners.
- Two or more ECS instances behind it, built from the same image.
- Optional Auto Scaling to add/remove ECS based on CPU/QPS signals.
- Why SLB was chosen:
- Simple way to add redundancy without building custom HA.
- Allows rolling upgrades by removing one backend at a time.
- Expected outcomes:
- Minimal downtime during deploys.
- Stable endpoint for mobile/web clients.
- Predictable scaling path as usage grows.
16. FAQ
- Is Server Load Balancer (SLB) the same as ALB/NLB/CLB?
  SLB is commonly used as the broader load balancing product name, while ALB/NLB/CLB refer to specific load balancer types. What you can do depends on the type. Verify the current console and documentation for your account/region.
- Should I choose an Internet-facing or internal SLB?
  Use Internet-facing for public endpoints; use internal for private services inside a VPC. Many production architectures use both.
- Do I need public IPs on backend ECS instances?
  Usually no. Backends can be private in a VPC. You only need a public IP for direct administration, and even that can be avoided using bastions or management tools.
- How does SLB know a backend is down?
  Health checks probe each backend on a configured port/path. Unhealthy backends are removed from rotation until healthy again.
- What causes “all backends unhealthy”?
  Common causes: security group blocks health checks, wrong backend port, app not listening, wrong health check path, firewall on instance, or backend in a different network.
- Does SLB support HTTPS?
  Typically yes for L7 load balancing. TLS termination workflows differ by SLB type; confirm certificate format and SNI support in official docs.
- Can SLB route by URL path or hostname?
  This is an L7 feature. Availability and depth of routing rules depend on whether you use ALB or CLB L7 capabilities.
- How do I preserve the real client IP?
  For HTTP/HTTPS, look for `X-Forwarded-For` support. For TCP, look for proxy protocol support. Exact behavior depends on SLB type—verify and test.
- Can SLB balance traffic to IP addresses instead of ECS instances?
  Some load balancer types support IP-based backends or ENI-based targets. Confirm the supported backend target types for your chosen SLB type.
- Does SLB work with Kubernetes on Alibaba Cloud?
  Alibaba Cloud Kubernetes (ACK) commonly integrates with cloud load balancers for ingress and services. The recommended approach depends on the ACK version and whether ALB ingress is used—verify in ACK docs.
- What’s the difference between L4 and L7 load balancing?
  L4 balances TCP/UDP connections without understanding HTTP. L7 understands HTTP/HTTPS and can route based on HTTP properties.
- How do I do blue/green or canary with SLB?
  Use separate backend groups or weighted backends (if supported) and shift traffic gradually. For complex routing, ALB is often a better fit—verify capabilities.
- Can I restrict access to only certain IP ranges?
  You can restrict at multiple layers: SLB ACLs (where supported), security groups, and WAF. For strong security, combine these controls.
- How do I monitor SLB?
  Use CloudMonitor metrics, configure alerts for backend health, latency, and traffic spikes, and enable logs to SLS if needed.
- What are common cost surprises with SLB?
  Public Internet egress, reserved bandwidth billing, high LCU usage (for ALB-style billing), and log ingestion/storage costs.
- Can I use SLB for WebSocket?
  WebSocket is an HTTP upgrade; support depends on the L7 implementation. Verify support in official docs for your SLB type.
- How do I migrate from a single ECS public IP to SLB?
  Create SLB, add the ECS as a backend, test, then update DNS to point to SLB. After validation, remove public exposure of the backend.
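On the client-IP question above: where `X-Forwarded-For` is supported, the leftmost entry is conventionally the original client and later entries are intermediate proxies. A parsing sketch (the header value is an example):

```shell
# Extract the original client IP from an X-Forwarded-For header value.
# Leftmost entry is the client; later entries are intermediate proxies.
# Only trust this header when your own load balancer sets or sanitizes it.
xff="203.0.113.7, 10.0.0.5, 10.0.0.9"
client_ip=${xff%%,*}     # strip everything from the first comma onward
echo "$client_ip"
```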
17. Top Online Resources to Learn Server Load Balancer (SLB)
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official product page | Alibaba Cloud Server Load Balancer (SLB) | Entry point for capabilities, editions, and navigation to docs/pricing: https://www.alibabacloud.com/product/slb |
| Official documentation | SLB Documentation (Alibaba Cloud Docs) | Authoritative setup, concepts, and API references. Start from the product page docs link or search “Alibaba Cloud SLB documentation”. |
| Official pricing | SLB Pricing | Region- and type-specific pricing details; always validate here: https://www.alibabacloud.com/product/slb (follow pricing link) |
| Official calculator | Alibaba Cloud Pricing Calculator | Build estimates without guessing: https://www.alibabacloud.com/pricing/calculator |
| Official observability docs | CloudMonitor Documentation | How to alert on SLB metrics: https://www.alibabacloud.com/help/en/cloudmonitor |
| Official audit docs | ActionTrail Documentation | Audit changes to SLB configuration: https://www.alibabacloud.com/help/en/actiontrail |
| Official logging docs | Log Service (SLS) Documentation | Central logging pipelines and retention planning: https://www.alibabacloud.com/help/en/sls |
| Official compute docs | ECS Documentation | Backend server configuration patterns: https://www.alibabacloud.com/help/en/ecs |
| Official networking docs | VPC Documentation | Subnets, routing, and private networking for SLB deployments: https://www.alibabacloud.com/help/en/vpc |
| Trusted IaC ecosystem | Terraform Alibaba Cloud Provider | Infrastructure-as-code patterns for SLB (resource coverage varies by type; verify): https://registry.terraform.io/providers/aliyun/alicloud/latest |
18. Training and Certification Providers
- DevOpsSchool.com
  – Suitable audience: DevOps engineers, SREs, cloud engineers, architects
  – Likely learning focus: Cloud DevOps practices, infrastructure automation, operations fundamentals
  – Mode: Check website
  – Website: https://www.devopsschool.com/
- ScmGalaxy.com
  – Suitable audience: DevOps and SCM practitioners, build/release engineers
  – Likely learning focus: SCM, CI/CD, DevOps tooling, process and implementation patterns
  – Mode: Check website
  – Website: https://www.scmgalaxy.com/
- CloudOpsNow.in
  – Suitable audience: Cloud operations and platform teams
  – Likely learning focus: Cloud operations, monitoring, reliability, automation basics
  – Mode: Check website
  – Website: https://www.cloudopsnow.in/
- SreSchool.com
  – Suitable audience: SREs, operations engineers, reliability-focused architects
  – Likely learning focus: SRE practices, incident management, monitoring, reliability engineering
  – Mode: Check website
  – Website: https://www.sreschool.com/
- AiOpsSchool.com
  – Suitable audience: Ops teams exploring AIOps, monitoring automation
  – Likely learning focus: AIOps concepts, operations analytics, automation approaches
  – Mode: Check website
  – Website: https://www.aiopsschool.com/
19. Top Trainers
- RajeshKumar.xyz
  – Likely specialization: DevOps/cloud training and guidance (verify on site)
  – Suitable audience: Beginners to intermediate DevOps/cloud learners
  – Website: https://rajeshkumar.xyz/
- devopstrainer.in
  – Likely specialization: DevOps tooling and practices (verify on site)
  – Suitable audience: DevOps engineers, build/release engineers, students
  – Website: https://www.devopstrainer.in/
- devopsfreelancer.com
  – Likely specialization: DevOps consulting/training-style services (verify on site)
  – Suitable audience: Teams needing short-term expertise or mentoring
  – Website: https://www.devopsfreelancer.com/
- devopssupport.in
  – Likely specialization: DevOps support, troubleshooting, and enablement (verify on site)
  – Suitable audience: Operations and DevOps teams needing ongoing support
  – Website: https://www.devopssupport.in/
20. Top Consulting Companies
- cotocus.com
  – Likely service area: Cloud/DevOps consulting (verify service catalog on site)
  – Where they may help: Architecture reviews, migrations, CI/CD, operational maturity
  – Consulting use case examples: Designing scalable ingress with SLB, setting up monitoring/alerts, cost optimization reviews
  – Website: https://cotocus.com/
- DevOpsSchool.com
  – Likely service area: DevOps consulting and enablement (verify offerings)
  – Where they may help: Training-led transformation, platform automation, SRE/DevOps practices
  – Consulting use case examples: Building standardized SLB + ECS reference architectures, IaC enablement, deployment runbooks
  – Website: https://www.devopsschool.com/
- DEVOPSCONSULTING.IN
  – Likely service area: DevOps consulting (verify offerings)
  – Where they may help: Toolchain implementation, operations processes, cloud adoption
  – Consulting use case examples: Production readiness review for SLB-based architectures, governance/tagging setup, incident response processes
  – Website: https://www.devopsconsulting.in/
21. Career and Learning Roadmap
What to learn before SLB
- Networking basics: IP, CIDR, DNS, TCP vs UDP, TLS
- HTTP fundamentals: methods, status codes, headers
- Alibaba Cloud basics:
- ECS fundamentals (instances, images, security groups)
- VPC fundamentals (VPC, vSwitch, routing)
- Basic IAM with RAM
What to learn after SLB
- Advanced routing and ingress patterns (ALB features, Kubernetes ingress on ACK)
- WAF and DDoS protection patterns for Internet-facing workloads
- Observability:
- CloudMonitor alert design
- Log Service (SLS) queries and dashboards
- Automation:
- Terraform for Alibaba Cloud
- CI/CD pipelines for infrastructure and app deploys
- Resilience engineering:
- multi-zone/multi-region design
- DR strategies (DNS failover, backup/restore)
Job roles that use SLB
- Cloud engineer
- DevOps engineer
- Site Reliability Engineer (SRE)
- Platform engineer
- Solutions architect
- Security engineer (ingress and exposure controls)
Certification path (if available)
Alibaba Cloud certification programs and availability change over time. Check the official Alibaba Cloud certification portal for current tracks relevant to networking and architecture, and look for modules covering load balancing, VPC, and security.
Project ideas for practice
- Build an HA web tier with SLB + Auto Scaling and measure failover time.
- Implement HTTPS with managed certificates and HTTP→HTTPS redirects.
- Create internal SLB for a microservices tier and restrict access using security groups.
- Implement blue/green deployment using two backend server groups and controlled cutover.
- Add CloudMonitor alerts for backend unhealthy and high latency; create an incident runbook.
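For the failover-time project, a polling loop like the one below gives a rough measurement: probe the endpoint at a fixed interval and count failed probes during a simulated backend failure. The probe here is stubbed (it fails on ticks 3–5) so the sketch runs offline; in the real lab, replace it with a curl against your SLB endpoint.

```shell
# Measure how long the endpoint stays unavailable during failover by
# polling once per tick and counting failed probes. The probe is stubbed
# (fails on ticks 3-5) so this runs offline; in the real lab replace it with:
#   curl -fsS --max-time 2 http://<SLB_PUBLIC_IP_OR_DOMAIN>/ >/dev/null
probe() {
  case "$1" in
    3|4|5) return 1 ;;   # simulated outage window
    *)     return 0 ;;
  esac
}
down=0
for tick in 1 2 3 4 5 6 7 8; do
  probe "$tick" || down=$((down + 1))
done
echo "unavailable for $down of 8 polls"
```

Multiplying the failed-poll count by the polling interval approximates the client-visible outage window during a backend failure.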
22. Glossary
- SLB (Server Load Balancer): Alibaba Cloud managed load balancing family/service used to distribute traffic across backends.
- CLB (Classic Load Balancer): A classic/traditional load balancer type under Alibaba Cloud’s portfolio (verify current positioning in your region).
- ALB (Application Load Balancer): L7-focused load balancer type with advanced HTTP/HTTPS routing (verify features and pricing).
- NLB (Network Load Balancer): L4-focused load balancer type optimized for TCP/UDP performance (verify features and pricing).
- Listener: A configuration that defines protocol/port and how SLB accepts and forwards traffic.
- Backend server: A server that receives forwarded traffic (often an ECS instance).
- Server group / backend pool: A collection of backends associated with a listener or rule.
- Health check: A probe to determine whether a backend is healthy and should receive traffic.
- Session persistence (stickiness): A mechanism to keep a client routed to the same backend.
- TLS termination: Decrypting HTTPS at the load balancer and forwarding HTTP to backends.
- VPC: Virtual Private Cloud; private network boundary for your Alibaba Cloud resources.
- vSwitch: Subnet within a VPC.
- Security group: Stateful firewall policy for ECS instances.
- CloudMonitor: Alibaba Cloud monitoring service for metrics and alerts.
- ActionTrail: Alibaba Cloud audit logging for API actions and configuration changes.
- SLS (Log Service): Alibaba Cloud log ingestion, storage, and analysis service.
- Egress: Outbound data transfer to the Internet (often a major cost driver).
23. Summary
Server Load Balancer (SLB) on Alibaba Cloud (Networking and CDN category) is the managed way to distribute inbound traffic across multiple backends for higher availability, better scalability, and simpler operations. It fits at the edge of your VPC for Internet ingress and internally for service-to-service balancing. The most important cost drivers are public Internet egress, the SLB type/capacity billing model (instance/capacity/LCU depending on product), and logging volume. The most important security practices are least-privilege RAM access, keeping backends private, tight security group rules, and strong TLS/certificate management.
Use SLB when you need a stable endpoint with health checks and managed failover; avoid it when a single small instance is sufficient or when your requirements are cross-region/global without additional traffic management services. Next, deepen skills by exploring ALB/NLB/CLB differences in the official docs and practicing production patterns: multi-zone backends, HTTPS, observability, and automated scaling.