{"id":19,"date":"2026-04-12T13:20:50","date_gmt":"2026-04-12T13:20:50","guid":{"rendered":"https:\/\/www.devopsschool.com\/tutorials\/alibaba-cloud-elastic-gpu-service-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-computing\/"},"modified":"2026-04-12T13:20:50","modified_gmt":"2026-04-12T13:20:50","slug":"alibaba-cloud-elastic-gpu-service-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-computing","status":"publish","type":"post","link":"https:\/\/www.devopsschool.com\/tutorials\/alibaba-cloud-elastic-gpu-service-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-computing\/","title":{"rendered":"Alibaba Cloud Elastic GPU Service Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Computing"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Category<\/h2>\n\n\n\n<p>Computing<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction<\/h2>\n\n\n\n<p>Elastic GPU Service is Alibaba Cloud\u2019s GPU-accelerated computing offering used to run workloads that need massively parallel processing\u2014most commonly AI\/ML training and inference, graphics rendering, and high-performance computing (HPC). In Alibaba Cloud documentation, \u201cElastic GPU Service\u201d is closely associated with (and in practice delivered through) GPU-accelerated <strong>Elastic Compute Service (ECS)<\/strong> instance families.<\/p>\n\n\n\n<p>In simple terms: you create a GPU-enabled virtual machine, connect to it over the network, install (or use a prebuilt) GPU software stack, and run your GPU applications. You pay based on the chosen billing method (for example, pay-as-you-go or subscription), the selected instance type (GPU model\/quantity, vCPU, RAM), storage, and network.<\/p>\n\n\n\n<p>Technically, Elastic GPU Service uses ECS infrastructure with attached GPU hardware. 
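<\/p>\n\n\n\n<p>To make the \u201cinstall (or use a prebuilt) GPU software stack\u201d step concrete: on a Linux GPU instance you typically confirm the driver with <code>nvidia-smi<\/code>, and in scripts you parse its CSV output. The sketch below is a minimal illustration\u2014the sample output line is made up, and the GPU model and memory values depend on the instance type you choose:<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Confirm the driver sees the GPU (run on the instance):\n#   nvidia-smi\n# Machine-readable CSV output for scripts and monitoring agents:\n#   nvidia-smi --query-gpu=name,utilization.gpu,memory.used,memory.total --format=csv,noheader\n\n# Parse a sample line of that CSV output (illustrative values):\nline=\"NVIDIA A10, 37 %, 4096 MiB, 22731 MiB\"\n\n# Field 2 is GPU utilization; strip the trailing \" %\" to get a bare number\nutil=$(printf '%s' \"$line\" | awk -F', ' '{print $2}' | tr -d ' %')\necho \"GPU utilization: ${util}%\"\n<\/code><\/pre>\n\n\n\n<p>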
It integrates with foundational Alibaba Cloud services such as <strong>VPC<\/strong>, <strong>Security Groups<\/strong>, <strong>Elastic IP (EIP)<\/strong>, <strong>CloudMonitor<\/strong>, <strong>ActionTrail<\/strong>, <strong>Resource Access Management (RAM)<\/strong>, and storage services like <strong>ESSD disks<\/strong>, <strong>OSS<\/strong>, and <strong>NAS<\/strong>. You get the familiar VM lifecycle (create, stop, start, snapshot, image, scale) while leveraging GPUs for throughput.<\/p>\n\n\n\n<p>The problem it solves: enabling teams to access GPUs on-demand without buying and operating physical GPU servers\u2014while still retaining VM-level control for drivers, runtimes, and performance tuning.<\/p>\n\n\n\n<blockquote>\n<p>Naming note (verify in official docs): Alibaba Cloud uses \u201cElastic GPU Service\u201d as a product documentation entry, while GPU compute is provisioned via ECS GPU instance families. Always confirm the latest positioning, supported instance families, and regions in official Alibaba Cloud documentation before standardizing an internal platform design.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">2. 
What is Elastic GPU Service?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Official purpose<\/h3>\n\n\n\n<p>Elastic GPU Service provides GPU-accelerated compute capacity on Alibaba Cloud so customers can run workloads that benefit from GPU parallelism\u2014AI, visualization, rendering, video processing, scientific computing, and more.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Core capabilities<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Provision GPU-enabled instances (VMs) with different GPU models and sizes (availability varies by region).<\/li>\n<li>Run standard OS images (Linux\/Windows) and install GPU drivers and frameworks.<\/li>\n<li>Integrate into VPC networking with security groups, private subnets (vSwitches), and optional public access via EIP.<\/li>\n<li>Support common ECS operations: scaling via instance replacement, creating custom images, snapshots, monitoring, and automation.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Major components (as used in real deployments)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>ECS GPU instance<\/strong>: the VM that includes one or more physical GPUs (and corresponding vCPU\/RAM).<\/li>\n<li><strong>System disk and data disks<\/strong>: typically ESSD (performance SSD) volumes for OS and datasets.<\/li>\n<li><strong>VPC, vSwitch, security group<\/strong>: networking and firewall boundaries.<\/li>\n<li><strong>EIP \/ NAT Gateway \/ SLB (optional)<\/strong>: controlled ingress\/egress and service publishing.<\/li>\n<li><strong>RAM users\/roles\/policies<\/strong>: access control and least privilege.<\/li>\n<li><strong>CloudMonitor + ActionTrail<\/strong>: metrics\/alarms and audit trails.<\/li>\n<li><strong>OSS\/NAS (optional)<\/strong>: data lake and shared storage for models, artifacts, and datasets.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Service type<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Infrastructure-as-a-Service (IaaS)<\/strong> GPU compute delivered as 
<strong>ECS<\/strong> instances.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scope (regional\/zonal\/account)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Regional<\/strong> service with <strong>zonal<\/strong> instance placement (availability depends on region\/zone capacity).<\/li>\n<li><strong>Account-scoped<\/strong> resources under your Alibaba Cloud account; access controlled via RAM.<\/li>\n<li>Certain attributes (instance type availability, GPU models, quotas) are <strong>region\/zone dependent<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How it fits into the Alibaba Cloud ecosystem<\/h3>\n\n\n\n<p>Elastic GPU Service is a compute building block. Typical ecosystem pairings:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Compute orchestration<\/strong>: Auto Scaling (to scale stateless inference) and\/or ACK (Kubernetes) with GPU nodes (verify GPU support and scheduling in your ACK version\/region).<\/li>\n<li><strong>Data<\/strong>: OSS for object storage; NAS for shared POSIX-like storage.<\/li>\n<li><strong>Security<\/strong>: RAM, KMS (for secrets\/keys), security groups, VPC isolation.<\/li>\n<li><strong>Operations<\/strong>: CloudMonitor for metrics\/alarms; ActionTrail for auditing; Log Service (SLS) for log centralization (verify exact integration pattern you choose).<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">3. 
Why use Elastic GPU Service?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Business reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Faster time-to-value<\/strong>: spin up GPU capacity in minutes rather than procuring hardware.<\/li>\n<li><strong>Elasticity<\/strong>: match GPU spend to demand (training bursts, rendering deadlines, seasonal inference load).<\/li>\n<li><strong>Global deployment<\/strong>: place GPU compute nearer users or data (where regions support GPU capacity).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Technical reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GPU acceleration<\/strong>: dramatically higher throughput for parallel workloads.<\/li>\n<li><strong>VM-level control<\/strong>: choose OS, install drivers, pin framework versions, tune performance.<\/li>\n<li><strong>Workload fit<\/strong>: supports diverse stacks (CUDA-based workloads, ML frameworks, rendering engines), subject to driver and GPU compatibility.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operational reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Standard ECS lifecycle<\/strong>: familiar instance operations, disk snapshots, images, monitoring.<\/li>\n<li><strong>Repeatability<\/strong>: bake golden images with drivers and frameworks; use Infrastructure as Code (IaC) (for example, Terraform\u2014verify provider resources and versions).<\/li>\n<li><strong>Isolation<\/strong>: per-instance isolation fits teams that require dedicated environments.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/compliance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Network isolation<\/strong> with VPC and security groups.<\/li>\n<li><strong>Centralized access control<\/strong> via RAM; audit via ActionTrail.<\/li>\n<li><strong>Encryption options<\/strong> for disks and objects (verify which encryption modes you enable and in which regions).<\/li>\n<\/ul>\n\n\n\n<h3 
class=\"wp-block-heading\">Scalability\/performance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Scale <strong>out<\/strong> (more instances for inference\/render farms) or scale <strong>up<\/strong> (larger GPU instance types) depending on your application.<\/li>\n<li>Place instances near data sources (OSS endpoints, NAS) to reduce latency and egress.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should choose it<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You need <strong>GPU acceleration<\/strong> and want <strong>VM control<\/strong> over drivers and runtime.<\/li>\n<li>Your workload can be packaged into an image or reproducible bootstrap scripts.<\/li>\n<li>You want a stepping stone between managed AI platforms and bare metal.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When they should not choose it<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You do not actually need GPUs (many \u201cAI\u201d workloads run fine on CPU; profile first).<\/li>\n<li>You want a fully managed training\/inference platform with minimal infrastructure management\u2014consider Alibaba Cloud AI platform services instead (for example, PAI offerings; verify best-fit product).<\/li>\n<li>You have strict requirements for a specific GPU model\/feature and it is not available in your target region\/zone or quota.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Where is Elastic GPU Service used?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Industries<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Media &amp; entertainment (rendering, transcoding acceleration)<\/li>\n<li>Retail and e-commerce (recommendation inference, vision search)<\/li>\n<li>Manufacturing (computer vision QC, digital twins)<\/li>\n<li>Healthcare\/life sciences (imaging inference, research compute)<\/li>\n<li>Finance (risk modeling acceleration, NLP inference)<\/li>\n<li>Education &amp; research (GPU labs, coursework environments)<\/li>\n<li>Gaming (asset rendering, AI NPC training)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Team types<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ML engineering and data science teams<\/li>\n<li>Platform engineering teams building shared GPU platforms<\/li>\n<li>DevOps\/SRE teams running GPU-backed services<\/li>\n<li>Research teams running iterative experiments<\/li>\n<li>Media pipeline teams operating render farms<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Workloads<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Deep learning training (batch jobs)<\/li>\n<li>Deep learning inference (online services)<\/li>\n<li>GPU-accelerated ETL or feature processing<\/li>\n<li>Rendering (frame\/scene rendering)<\/li>\n<li>Simulation and scientific computing<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Architectures<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Single-instance experimentation<\/li>\n<li>Multi-instance distributed jobs (framework-dependent; verify networking requirements)<\/li>\n<li>Inference microservices behind load balancers<\/li>\n<li>Batch pipelines that read from OSS and write results back<\/li>\n<li>Kubernetes clusters with GPU nodes (ACK)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Production vs dev\/test usage<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Dev\/test<\/strong>: smaller GPU instances, spot\/preemptible when acceptable, short-lived experiments, per-branch 
environments.<\/li>\n<li><strong>Production<\/strong>: stable instance families, reserved\/subscription capacity, multi-zone design where possible, strong monitoring, and well-defined patching and image pipelines.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">5. Top Use Cases and Scenarios<\/h2>\n\n\n\n<p>Below are realistic scenarios where Elastic GPU Service is commonly used. Availability of GPU models and instance families varies by region\/zone\u2014verify in official docs and the console.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) ML model training on demand<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Training jobs need GPUs for hours\/days, but idle GPUs are expensive.<\/li>\n<li><strong>Why this fits<\/strong>: Provision GPU instances only during training windows; use OSS\/NAS for datasets.<\/li>\n<li><strong>Example<\/strong>: A team launches a GPU VM nightly to retrain a demand forecast model, then shuts it down.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) Real-time inference API for computer vision<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: CPU inference latency is too high for image classification\/detection.<\/li>\n<li><strong>Why this fits<\/strong>: GPU instances reduce inference latency and increase throughput.<\/li>\n<li><strong>Example<\/strong>: A retail app uses GPU instances behind a load balancer to classify product images in real time.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) Batch inference at scale<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Running inference on millions of images\/videos takes too long on CPU.<\/li>\n<li><strong>Why this fits<\/strong>: GPU batch processing + OSS data lake improves throughput.<\/li>\n<li><strong>Example<\/strong>: A media company runs nightly GPU batch inference to tag scenes for search.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) 
Rendering farm (animation\/3D)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Rendering frames locally is too slow; deadlines require parallel rendering.<\/li>\n<li><strong>Why this fits<\/strong>: Scale out GPU instances for a render burst, then release them.<\/li>\n<li><strong>Example<\/strong>: A studio launches 50 GPU instances for a weekend render push and tears them down Monday.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) GPU-accelerated video processing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Video transcoding and filters are compute-heavy.<\/li>\n<li><strong>Why this fits<\/strong>: GPU acceleration can increase throughput per node (framework\/codec dependent).<\/li>\n<li><strong>Example<\/strong>: A streaming platform uses GPU instances to accelerate transcoding pipelines.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Interactive data science workstation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Data scientists need a consistent GPU environment without local setup pain.<\/li>\n<li><strong>Why this fits<\/strong>: A GPU VM with preinstalled drivers and Jupyter stack provides a reproducible workstation.<\/li>\n<li><strong>Example<\/strong>: A team standardizes on a golden GPU image and gives each scientist an isolated VM.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">7) NLP inference for chat\/semantic search<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Large transformer inference is slow on CPU and expensive at scale.<\/li>\n<li><strong>Why this fits<\/strong>: GPUs improve token throughput (model\/framework dependent).<\/li>\n<li><strong>Example<\/strong>: A SaaS company serves embeddings and reranking models on GPU VMs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) Scientific simulation \/ HPC kernels<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Numerical kernels run for 
days on CPU.<\/li>\n<li><strong>Why this fits<\/strong>: GPU acceleration reduces runtime for compatible codes.<\/li>\n<li><strong>Example<\/strong>: A research lab runs GPU-enabled simulations, storing outputs in OSS.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Game development asset pipelines<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Texture\/asset generation and validation pipelines need acceleration.<\/li>\n<li><strong>Why this fits<\/strong>: GPU VMs can run pipeline tooling and scale with CI workloads.<\/li>\n<li><strong>Example<\/strong>: CI triggers GPU-based asset validation jobs during release cycles.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) Proof-of-concept for GPU migration<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: On-prem GPU apps need a safe cloud POC before migration.<\/li>\n<li><strong>Why this fits<\/strong>: VM-level control mirrors on-prem patterns; lift-and-shift is straightforward.<\/li>\n<li><strong>Example<\/strong>: A company clones an on-prem inference stack to a GPU ECS instance to validate performance.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">6. Core Features<\/h2>\n\n\n\n<blockquote>\n<p>Note: Exact feature names and availability can vary by region and by ECS instance family. 
Confirm in the Elastic GPU Service and ECS documentation.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">GPU-accelerated ECS instances<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Provides ECS instance types that include one or more GPUs.<\/li>\n<li><strong>Why it matters<\/strong>: GPU hardware enables parallel computation for ML and graphics.<\/li>\n<li><strong>Practical benefit<\/strong>: Faster training\/inference or rendering compared to CPU-only instances.<\/li>\n<li><strong>Caveats<\/strong>: GPU models and counts vary; not all regions\/zones have the same capacity.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Multiple billing methods (via ECS)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Typically supports pay-as-you-go and subscription; some regions may support preemptible\/spot (verify).<\/li>\n<li><strong>Why it matters<\/strong>: Aligns cost with workload patterns.<\/li>\n<li><strong>Practical benefit<\/strong>: Use subscription for steady inference; pay-as-you-go for bursty experiments.<\/li>\n<li><strong>Caveats<\/strong>: Spot\/preemptible instances can be reclaimed; design for interruption.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Image-based provisioning and automation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Use public images, custom images, or snapshots to standardize GPU environments.<\/li>\n<li><strong>Why it matters<\/strong>: GPU stacks are sensitive to driver\/CUDA\/framework version compatibility.<\/li>\n<li><strong>Practical benefit<\/strong>: Golden images reduce \u201cworks on my machine\u201d issues.<\/li>\n<li><strong>Caveats<\/strong>: Driver updates can break ABI compatibility\u2014pin versions and test.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">VPC networking and Security Groups<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Place GPU instances into private 
networks; control inbound\/outbound with security group rules.<\/li>\n<li><strong>Why it matters<\/strong>: Many GPU workloads handle sensitive datasets and models.<\/li>\n<li><strong>Practical benefit<\/strong>: Minimize public exposure; use bastion or VPN for admin access.<\/li>\n<li><strong>Caveats<\/strong>: Misconfigured security groups are a common source of exposure.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Elastic storage options (ECS disks, OSS, NAS)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Attach high-performance system\/data disks; integrate with OSS\/NAS for datasets and artifacts.<\/li>\n<li><strong>Why it matters<\/strong>: ML pipelines are often data-bound, not compute-bound.<\/li>\n<li><strong>Practical benefit<\/strong>: Keep datasets in OSS, cache locally on ESSD for hot reads.<\/li>\n<li><strong>Caveats<\/strong>: Data transfer and storage requests can add cost; plan caching strategy.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring and auditing (CloudMonitor, ActionTrail)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Collect instance metrics, set alarms, and track API actions.<\/li>\n<li><strong>Why it matters<\/strong>: GPUs are expensive; you need visibility and governance.<\/li>\n<li><strong>Practical benefit<\/strong>: Alert on idle instances, high costs, and suspicious operations.<\/li>\n<li><strong>Caveats<\/strong>: GPU-level metrics may require in-guest tooling (for example, via <code>nvidia-smi<\/code> and an exporter); verify what CloudMonitor provides natively for your instance family.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integration with container and orchestration platforms (workload-dependent)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: GPU ECS instances can act as nodes for containerized workloads (for example, Kubernetes via ACK).<\/li>\n<li><strong>Why it matters<\/strong>: Many modern 
ML inference stacks run in containers.<\/li>\n<li><strong>Practical benefit<\/strong>: Standard deployment, rolling updates, autoscaling patterns.<\/li>\n<li><strong>Caveats<\/strong>: GPU scheduling requires correct device plugins and runtime configuration; validate with your ACK version and documentation.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">7. Architecture and How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">High-level architecture<\/h3>\n\n\n\n<p>At a high level, Elastic GPU Service workloads run on GPU-enabled ECS instances within your VPC:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Control plane<\/strong>: You create and manage instances using Alibaba Cloud console\/API\/CLI. IAM is enforced via RAM.<\/li>\n<li><strong>Network plane<\/strong>: Instances connect through VPC and security groups. Public access is typically through an EIP (or via NAT\/Bastion\/VPN).<\/li>\n<li><strong>Data plane<\/strong>: Workloads read datasets and models from OSS\/NAS and write results back. 
Local disks provide low-latency scratch space.<\/li>\n<li><strong>Observability<\/strong>: CloudMonitor collects ECS metrics; additional agents\/exporters can push GPU metrics and logs to Log Service (implementation-dependent).<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Request\/data\/control flow<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Provisioning<\/strong>: User\/API \u2192 ECS\/Elastic GPU Service \u2192 allocate host with GPU \u2192 attach disks\/network \u2192 boot OS.<\/li>\n<li><strong>Runtime<\/strong>:\n<ul>\n<li>App requests \u2192 (optional) SLB\/API Gateway \u2192 GPU inference service on ECS.<\/li>\n<li>Data reads \u2192 OSS\/NAS \u2192 cached to local disk \u2192 processed on GPU \u2192 outputs stored to OSS\/DB.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Operations<\/strong>:\n<ul>\n<li>Metrics \u2192 CloudMonitor; logs \u2192 Log Service (if configured).<\/li>\n<li>Audit events \u2192 ActionTrail.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integrations with related services (common patterns)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>VPC + Security Groups<\/strong>: network segmentation and micro-perimeter.<\/li>\n<li><strong>EIP \/ NAT Gateway<\/strong>: controlled outbound internet for package installs; controlled inbound admin access.<\/li>\n<li><strong>OSS<\/strong>: dataset\/model artifact storage and sharing.<\/li>\n<li><strong>NAS<\/strong>: shared filesystem for multi-instance workloads (throughput\/latency characteristics vary; verify).<\/li>\n<li><strong>ACK (Kubernetes)<\/strong>: GPU nodes for containerized inference\/training (verify support and GPU scheduling details).<\/li>\n<li><strong>RAM<\/strong>: least privilege, MFA, and role-based access.<\/li>\n<li><strong>KMS<\/strong>: encrypt secrets and data keys (verify service integration options).<\/li>\n<li><strong>CloudMonitor + ActionTrail<\/strong>: monitoring, alerting, auditing.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Dependency 
services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ECS (primary compute resource manager)<\/li>\n<li>VPC, vSwitch, Security Groups<\/li>\n<li>Storage (cloud disks; optional OSS\/NAS)<\/li>\n<li>IAM (RAM)<\/li>\n<li>Monitoring\/audit (CloudMonitor, ActionTrail)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/authentication model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Human\/API access<\/strong>: RAM users, RAM roles, policies; use MFA and least privilege.<\/li>\n<li><strong>Instance-to-service access<\/strong>: use instance RAM roles (where supported) to access OSS\/other APIs without long-lived keys (verify your pattern in official docs).<\/li>\n<li><strong>Network security<\/strong>: security groups and private subnets; optionally bastion host.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Networking model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Instances run in a <strong>VPC<\/strong> and <strong>vSwitch<\/strong> (subnet).<\/li>\n<li>Ingress\/egress governed by <strong>security group<\/strong> rules.<\/li>\n<li>Public internet access typically via:<\/li>\n<li><strong>EIP<\/strong> bound to the instance, or<\/li>\n<li>NAT Gateway for outbound-only access, with bastion\/VPN for admin.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring\/logging\/governance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Baseline: CPU\/memory\/disk\/network metrics in CloudMonitor.<\/li>\n<li>Add-on: GPU utilization\/temperature metrics via in-guest exporters (implementation-dependent).<\/li>\n<li>Centralize logs to Log Service for retention and querying (optional).<\/li>\n<li>Use ActionTrail for auditing instance create\/modify\/delete operations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Simple architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart LR\n  User[Engineer \/ Data Scientist] --&gt;|Console\/API| ECS[ECS: Elastic GPU Service Instance]\n  ECS --&gt; 
VPC[VPC + vSwitch]\n  ECS --&gt;|Read\/Write| OSS[\"Object Storage Service (OSS)\"]\n  ECS --&gt;|Metrics| CM[CloudMonitor]\n  ECS --&gt;|Audit events| AT[ActionTrail]\n  User --&gt;|SSH via EIP| EIP[Elastic IP]\n  EIP --&gt; ECS\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Production-style architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart TB\n  subgraph VPC1[\"VPC (Production)\"]\n    subgraph ZoneA[Zone A]\n      SLB[Server Load Balancer] --&gt; INF1[\"GPU ECS Inference #1\"]\n      SLB --&gt; INF2[\"GPU ECS Inference #2\"]\n    end\n    subgraph ZoneB[Zone B]\n      SLB --&gt; INF3[\"GPU ECS Inference #3\"]\n    end\n\n    INF1 --&gt; NAS[\"NAS (shared model cache)\"]\n    INF2 --&gt; NAS\n    INF3 --&gt; NAS\n\n    INF1 --&gt; OSS[\"OSS (datasets\/models\/artifacts)\"]\n    INF2 --&gt; OSS\n    INF3 --&gt; OSS\n\n    INF1 --&gt; SLS[\"Log Service (SLS)\"]\n    INF2 --&gt; SLS\n    INF3 --&gt; SLS\n  end\n\n  Users[Clients] --&gt;|HTTPS| SLB\n  Sec[RAM + KMS + Security Groups] -. govern .-&gt; VPC1\n  CM[CloudMonitor Alarms] --&gt; Ops[Ops\/SRE On-call]\n  AT[ActionTrail] --&gt; SecOps[Security Review \/ Audit]\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">8. 
Prerequisites<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Account and billing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>An active <strong>Alibaba Cloud account<\/strong> with a valid billing method.<\/li>\n<li>Billing enabled for ECS and related services (VPC is typically free; EIP, disks, OSS requests\/storage can incur cost\u2014verify).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Permissions \/ IAM (RAM)<\/h3>\n\n\n\n<p>Minimum recommended:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ability to create\/manage ECS instances, VPC resources, security groups, disks, and EIPs.<\/li>\n<li>Read access to monitoring\/audit services.<\/li>\n<li>If using OSS\/NAS, permissions to access those resources.<\/li>\n<\/ul>\n\n\n\n<p>Practical guidance:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use <strong>RAM users<\/strong> for humans, not root account credentials.<\/li>\n<li>Use least-privilege policies; scope by region\/resource groups where possible (verify RAM policy capabilities in your account).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Tools (optional but recommended)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SSH client:\n<ul>\n<li>macOS\/Linux: <code>ssh<\/code><\/li>\n<li>Windows: Windows Terminal + OpenSSH or PuTTY<\/li>\n<\/ul>\n<\/li>\n<li>(Optional) Alibaba Cloud CLI:\n<ul>\n<li>Alibaba Cloud CLI documentation (verify current install steps): https:\/\/www.alibabacloud.com\/help<\/li>\n<\/ul>\n<\/li>\n<li>(Optional) Docker for containerized GPU apps (installed on the instance).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Region availability<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>GPU instance families are not available in every region\/zone and may have capacity constraints.<\/li>\n<li>Confirm:\n<ul>\n<li>Supported regions and zones<\/li>\n<li>Available GPU instance families and GPU models<\/li>\n<li>Whether spot\/preemptible is available<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>Use the official Elastic GPU Service and ECS instance type pages for confirmation (see Resources section).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Quotas\/limits<\/h3>\n\n\n\n<p>Common 
quota considerations (verify exact limits in your account\/region):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>vCPU and instance quotas per region<\/li>\n<li>GPU instance quotas per region\/zone<\/li>\n<li>EIP quota<\/li>\n<li>Disk quota and snapshot quota<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Prerequisite services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>VPC<\/strong> and <strong>vSwitch<\/strong> (subnet)<\/li>\n<li><strong>Security Group<\/strong><\/li>\n<li>Optional: <strong>EIP<\/strong> (for SSH), <strong>OSS\/NAS<\/strong> (for data), <strong>Log Service<\/strong> (for logs)<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">9. Pricing \/ Cost<\/h2>\n\n\n\n<p>Elastic GPU Service cost is primarily the cost of <strong>GPU-enabled ECS instances<\/strong>, plus associated storage and networking.<\/p>\n\n\n\n<blockquote>\n<p>Do not rely on static numbers in articles\u2014GPU pricing is region- and instance-family dependent and changes over time. Use official pricing pages and calculators.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Pricing dimensions (typical)<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Instance type<\/strong> (major driver)\n   &#8211; GPU model and count\n   &#8211; vCPU and memory size\n   &#8211; Instance family generation<\/li>\n<li><strong>Billing method<\/strong>\n   &#8211; Pay-as-you-go (hourly\/second-level granularity depends on ECS rules\u2014verify)\n   &#8211; Subscription (reserved capacity for a term)\n   &#8211; Preemptible\/spot (if available; interruptible)<\/li>\n<li><strong>Storage<\/strong>\n   &#8211; System disk and data disks (ESSD categories and size)\n   &#8211; Snapshots (snapshot storage and API usage)<\/li>\n<li><strong>Network<\/strong>\n   &#8211; EIP (public IP) charges\n   &#8211; Internet outbound bandwidth charges (billing model depends on EIP settings)\n   &#8211; Cross-region traffic (if any)<\/li>\n<li><strong>Data services<\/strong>\n   &#8211; OSS storage 
(GB-month), requests, and data transfer\n   &#8211; NAS capacity\/throughput billing model (verify)<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Free tier<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>GPU instances are typically <strong>not<\/strong> part of free tiers. Verify Alibaba Cloud promotions\/free trials for your account\/region.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost drivers (what surprises people)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Idle GPU instances<\/strong>: the most common and expensive mistake.<\/li>\n<li><strong>Always-on EIPs<\/strong>: public IP resources accrue cost even when you\u2019re not actively SSH\u2019d.<\/li>\n<li><strong>Large disks<\/strong>: oversized ESSD volumes or snapshots kept forever.<\/li>\n<li><strong>OSS request costs<\/strong>: repeated small reads\/writes during training can add up.<\/li>\n<li><strong>Data egress<\/strong>: moving datasets out of Alibaba Cloud can be costly.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network\/data transfer implications<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Keeping data in-region (OSS + ECS in same region) typically reduces latency and avoids cross-region transfer charges.<\/li>\n<li>Pulling large container images and packages over the internet increases bandwidth usage; consider local mirrors or image caching.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How to optimize cost<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prefer <strong>short-lived<\/strong> GPU instances for experiments; shut down and release when done.<\/li>\n<li>Use <strong>custom images<\/strong> with preinstalled drivers\/frameworks to reduce bootstrapping time.<\/li>\n<li>Store datasets in <strong>OSS<\/strong> and cache on local disks only when needed.<\/li>\n<li>Evaluate <strong>spot\/preemptible<\/strong> for fault-tolerant training and batch inference (design for interruption).<\/li>\n<li>For steady production inference, compare 
<strong>subscription<\/strong> vs pay-as-you-go break-even (use the calculator).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Example low-cost starter estimate (no fabricated prices)<\/h3>\n\n\n\n<p>A \u201cstarter lab\u201d cost typically includes:\n&#8211; 1 small GPU ECS instance (pay-as-you-go) for 1\u20132 hours\n&#8211; 40\u2013100 GB system disk (ESSD)\n&#8211; Minimal EIP bandwidth for SSH and package installs<\/p>\n\n\n\n<p>Because GPU SKUs and EIP billing vary by region, calculate using:\n&#8211; ECS pricing page \/ calculator (see official resources)\n&#8211; The console\u2019s \u201cBuy\u201d page cost estimate for the chosen instance type<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example production cost considerations<\/h3>\n\n\n\n<p>For a production inference service, model your monthly cost around:\n&#8211; N GPU instances (often multiple for HA and rolling deployments)\n&#8211; SLB (if used), EIP\/NAT Gateway\n&#8211; Disk snapshots, image storage, logs (SLS), OSS model storage\n&#8211; Headroom for scaling during peak<\/p>\n\n\n\n<p>A practical approach:\n&#8211; Start with load testing to measure <strong>requests\/sec per GPU instance<\/strong>\n&#8211; Convert peak QPS into instance count with a utilization target (for example, 60\u201370% GPU utilization)\n&#8211; Then compare pay-as-you-go vs subscription pricing.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Official pricing sources (use these)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ECS pricing: https:\/\/www.alibabacloud.com\/product\/ecs (navigate to pricing)  <\/li>\n<li>Alibaba Cloud Pricing Calculator (verify current URL in official site navigation): https:\/\/www.alibabacloud.com\/pricing  <\/li>\n<\/ul>\n\n\n\n<p>If you find a dedicated Elastic GPU Service pricing page for your region, prefer that over secondary sources.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">10. 
Step-by-Step Hands-On Tutorial<\/h2>\n\n\n\n<p>This lab creates a GPU-enabled ECS instance (Elastic GPU Service), verifies the GPU is accessible, runs a simple GPU validation, and then cleans up to avoid ongoing cost.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Objective<\/h3>\n\n\n\n<p>Provision a low-cost, short-lived Elastic GPU Service instance on Alibaba Cloud, install\/verify the NVIDIA driver stack (or confirm it\u2019s already present), and validate GPU compute availability with <code>nvidia-smi<\/code> and a small CUDA\/container test.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lab Overview<\/h3>\n\n\n\n<p>You will:\n1. Create networking (VPC, vSwitch) and a security group.\n2. Launch a GPU ECS instance and connect over SSH.\n3. Install the NVIDIA driver (if required) and validate GPU visibility.\n4. Optionally run a container-based GPU workload.\n5. Set up basic monitoring and then clean up all resources.<\/p>\n\n\n\n<blockquote>\n<p>Important: GPU availability is region\/zone dependent. If you cannot find a GPU instance type in your zone, choose a different zone\/region.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Create a VPC and vSwitch (subnet)<\/h3>\n\n\n\n<p><strong>Console path (typical):<\/strong>\n1. Go to the Alibaba Cloud Console.\n2. Navigate to <strong>VPC<\/strong>.\n3. Create a <strong>VPC<\/strong>:\n   &#8211; IPv4 CIDR example: <code>10.0.0.0\/16<\/code>\n4. 
Create a <strong>vSwitch<\/strong> in a specific zone:\n   &#8211; CIDR example: <code>10.0.1.0\/24<\/code><\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; You have a VPC and one vSwitch ready for instance placement.<\/p>\n\n\n\n<p><strong>Verification<\/strong>\n&#8211; In the VPC console, confirm the VPC and vSwitch show \u201cAvailable\u201d.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Create a Security Group with minimal inbound rules<\/h3>\n\n\n\n<p>Create a security group in the same region\/VPC.<\/p>\n\n\n\n<p><strong>Inbound rules (recommended for the lab)<\/strong>\n&#8211; SSH (TCP 22) from <strong>your public IP only<\/strong> (preferred)\n  &#8211; If you cannot restrict to one IP (for example, dynamic IP), use a temporary broader range and tighten later.<\/p>\n\n\n\n<p><strong>Optional (only if you serve a web app in this lab)<\/strong>\n&#8211; HTTP\/HTTPS from specific sources<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; A security group that allows you to SSH into the instance.<\/p>\n\n\n\n<p><strong>Verification<\/strong>\n&#8211; Confirm the inbound rule exists and is scoped properly.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 3: Create an SSH key pair (recommended)<\/h3>\n\n\n\n<p><strong>Console path (typical):<\/strong>\n&#8211; ECS \u2192 Network &amp; Security \u2192 Key Pairs (naming may vary)<\/p>\n\n\n\n<p>Create a key pair and download the private key file (<code>.pem<\/code>).<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; You have a key pair available to attach to the instance.<\/p>\n\n\n\n<p><strong>Verification<\/strong>\n&#8211; Confirm the key pair is listed in your region.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Launch a GPU ECS instance (Elastic GPU Service)<\/h3>\n\n\n\n<p><strong>Console path (typical):<\/strong>\n&#8211; ECS \u2192 
Instances \u2192 Create Instance<\/p>\n\n\n\n<p>Choose:\n&#8211; <strong>Region\/Zone<\/strong>: pick the zone where GPU instances are available.\n&#8211; <strong>Instance type<\/strong>: choose any <strong>GPU-accelerated<\/strong> instance type shown in the wizard.\n  &#8211; The console may show GPU family names and specs. Select a smaller option for cost.\n&#8211; <strong>Image<\/strong>: Ubuntu LTS is a good default for driver installation.\n  &#8211; If the console offers a GPU-optimized image with drivers, you can use it; <strong>verify what it includes<\/strong> in the image description.\n&#8211; <strong>System disk<\/strong>: ESSD, 40\u2013100 GB for the lab.\n&#8211; <strong>Network<\/strong>: select your VPC\/vSwitch.\n&#8211; <strong>Security group<\/strong>: select the one you created.\n&#8211; <strong>Login credential<\/strong>: choose your key pair.\n&#8211; <strong>Public IP<\/strong>:\n  &#8211; Option A: allocate an <strong>EIP<\/strong> and bind it to the instance (common for labs).\n  &#8211; Option B: enable a public IPv4 if offered in the wizard (region-dependent).\n  &#8211; Prefer EIP so you can release it explicitly during cleanup.<\/p>\n\n\n\n<p>Create the instance.<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; A running GPU ECS instance.<\/p>\n\n\n\n<p><strong>Verification<\/strong>\n&#8211; ECS console shows instance state as <strong>Running<\/strong>.\n&#8211; You can see the public IP (EIP) address assigned.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: SSH to the instance<\/h3>\n\n\n\n<p>From your local machine (Linux\/macOS), set permissions and connect:<\/p>\n\n\n\n<pre><code class=\"language-bash\">chmod 600 ~\/Downloads\/your-key.pem\nssh -i ~\/Downloads\/your-key.pem ubuntu@&lt;EIP_or_Public_IP&gt;\n<\/code><\/pre>\n\n\n\n<p>If you used a different image, the default username might differ (for example, <code>root<\/code>). 
Check the console connection instructions.<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; You are logged into the instance shell.<\/p>\n\n\n\n<p><strong>Verification<\/strong><\/p>\n\n\n\n<pre><code class=\"language-bash\">uname -a\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: Check whether GPU is visible (and whether drivers are installed)<\/h3>\n\n\n\n<p>Run:<\/p>\n\n\n\n<pre><code class=\"language-bash\">lspci | grep -i nvidia || true\nnvidia-smi || true\n<\/code><\/pre>\n\n\n\n<p>Interpretation:\n&#8211; If <code>lspci<\/code> shows an NVIDIA device but <code>nvidia-smi<\/code> fails, drivers may not be installed.\n&#8211; If <code>nvidia-smi<\/code> works, drivers are installed and the GPU is visible.<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; You confirm whether GPU hardware is present and whether drivers are installed.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 7: Install NVIDIA drivers (Ubuntu example)<\/h3>\n\n\n\n<blockquote>\n<p>Driver installation is the most variable part because it depends on GPU model, kernel version, and image contents. Follow Alibaba Cloud\u2019s official GPU driver guidance for your instance family whenever possible. 
If your image already includes drivers, skip this step.<\/p>\n<\/blockquote>\n\n\n\n<p>Update packages:<\/p>\n\n\n\n<pre><code class=\"language-bash\">sudo apt-get update\nsudo apt-get -y install ubuntu-drivers-common\n<\/code><\/pre>\n\n\n\n<p>List recommended drivers:<\/p>\n\n\n\n<pre><code class=\"language-bash\">ubuntu-drivers devices\n<\/code><\/pre>\n\n\n\n<p>Install the recommended driver (<code>autoinstall<\/code> selects the driver marked \u201crecommended\u201d in the previous output):<\/p>\n\n\n\n<pre><code class=\"language-bash\">sudo ubuntu-drivers autoinstall\n<\/code><\/pre>\n\n\n\n<p>Reboot:<\/p>\n\n\n\n<pre><code class=\"language-bash\">sudo reboot\n<\/code><\/pre>\n\n\n\n<p>Reconnect via SSH and run:<\/p>\n\n\n\n<pre><code class=\"language-bash\">nvidia-smi\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; <code>nvidia-smi<\/code> outputs GPU model, driver version, and current GPU utilization.<\/p>\n\n\n\n<p><strong>Verification<\/strong>\n&#8211; <code>nvidia-smi<\/code> returns exit code 0 and prints a GPU table.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 8 (Optional): Run a GPU smoke test using a container<\/h3>\n\n\n\n<p>This is a practical way to validate that:\n&#8211; The driver is functional\n&#8211; The runtime can access the GPU<\/p>\n\n\n\n<p>You need Docker installed. Install Docker (Ubuntu):<\/p>\n\n\n\n<pre><code class=\"language-bash\">sudo apt-get -y install docker.io\nsudo usermod -aG docker $USER\nnewgrp docker\n<\/code><\/pre>\n\n\n\n<p>Now, GPU containers require the NVIDIA container runtime. 
The exact install steps vary; follow NVIDIA\u2019s official instructions and verify compatibility with your driver and OS:\n&#8211; NVIDIA Container Toolkit: https:\/\/docs.nvidia.com\/datacenter\/cloud-native\/container-toolkit\/latest\/install-guide.html<\/p>\n\n\n\n<p>After installing NVIDIA container runtime, test with a CUDA base image (image tags vary; pick a current one compatible with your setup):<\/p>\n\n\n\n<pre><code class=\"language-bash\">docker run --rm --gpus all nvidia\/cuda:12.4.1-base-ubuntu22.04 nvidia-smi\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; The container prints <code>nvidia-smi<\/code> output from inside the container.<\/p>\n\n\n\n<p><strong>Verification<\/strong>\n&#8211; The container sees the GPU(s) and prints the same (or similar) output as the host.<\/p>\n\n\n\n<blockquote>\n<p>If pulling images is slow\/expensive, consider stopping at Step 7. Container image downloads can increase outbound bandwidth cost.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 9: (Optional) Basic monitoring checks<\/h3>\n\n\n\n<p>In the console:\n&#8211; Open <strong>CloudMonitor<\/strong> \u2192 ECS monitoring.\n&#8211; Confirm CPU\/network metrics are visible.\n&#8211; For GPU utilization, rely on <code>nvidia-smi<\/code> for the lab unless you have an established GPU exporter pipeline.<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; You can see baseline instance metrics and confirm the instance is healthy.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Validation<\/h3>\n\n\n\n<p>Run these checks:<\/p>\n\n\n\n<pre><code class=\"language-bash\"># GPU visibility\nnvidia-smi\n\n# Kernel\/driver sanity\nlsmod | grep -i nvidia || true\n\n# Disk and memory checks\ndf -h\nfree -h\n<\/code><\/pre>\n\n\n\n<p>What \u201cgood\u201d looks like:\n&#8211; <code>nvidia-smi<\/code> shows at least one GPU and no fatal errors.\n&#8211; 
Disk has free space.\n&#8211; No unusual system load at idle.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Troubleshooting<\/h3>\n\n\n\n<p>Common issues and realistic fixes:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>No GPU instance types available in your zone<\/strong>\n   &#8211; <strong>Cause<\/strong>: capacity or region limitations.\n   &#8211; <strong>Fix<\/strong>: choose a different zone\/region; request quota\/capacity via Alibaba Cloud support if needed.<\/p>\n<\/li>\n<li>\n<p><strong><code>nvidia-smi: command not found<\/code><\/strong>\n   &#8211; <strong>Cause<\/strong>: drivers not installed (or PATH not set).\n   &#8211; <strong>Fix<\/strong>: install drivers (Step 7) or use a GPU-optimized image (verify its contents).<\/p>\n<\/li>\n<li>\n<p><strong><code>nvidia-smi<\/code> fails after driver install<\/strong>\n   &#8211; <strong>Possible causes<\/strong>: mismatched driver\/kernel, secure boot constraints (less common on cloud), incomplete install.\n   &#8211; <strong>Fix<\/strong>:<\/p>\n<ul>\n<li>Re-check recommended driver version via <code>ubuntu-drivers devices<\/code><\/li>\n<li>Ensure you rebooted<\/li>\n<li>Review <code>\/var\/log\/syslog<\/code> and <code>dmesg | grep -i nvidia<\/code><\/li>\n<li>Consider using Alibaba Cloud\u2019s recommended driver\/CUDA guidance for that instance family (preferred)<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>SSH connection timeout<\/strong>\n   &#8211; <strong>Cause<\/strong>: security group missing port 22, wrong source IP, or no public route.\n   &#8211; <strong>Fix<\/strong>: verify inbound rule, verify EIP binding, confirm the instance has public connectivity.<\/p>\n<\/li>\n<li>\n<p><strong>Docker GPU test fails: \u201ccould not select device driver\u201d<\/strong>\n   &#8211; <strong>Cause<\/strong>: NVIDIA container runtime not installed\/configured.\n   &#8211; <strong>Fix<\/strong>: install NVIDIA Container Toolkit and configure Docker 
runtime per NVIDIA docs.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Cleanup<\/h3>\n\n\n\n<p>To avoid ongoing charges, delete resources in this order:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Terminate\/release the ECS instance<\/strong>\n   &#8211; ECS \u2192 Instances \u2192 Select instance \u2192 Release<\/li>\n<li><strong>Release EIP<\/strong> (if you allocated one)\n   &#8211; VPC \u2192 EIPs \u2192 Release<\/li>\n<li><strong>Delete unused disks\/snapshots<\/strong>\n   &#8211; ECS \u2192 Disks \/ Snapshots (ensure nothing remains billable)<\/li>\n<li><strong>Delete security group<\/strong> (optional)<\/li>\n<li><strong>Delete vSwitch and VPC<\/strong> (optional, if created only for this lab)<\/li>\n<\/ol>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; No GPU instances, no EIPs, and no unattached billable disks remain.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">11. 
Best Practices<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Architecture best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Separate training and inference<\/strong> environments:<\/li>\n<li>Training: bursty, batch, interruption-tolerant (spot where acceptable).<\/li>\n<li>Inference: stable, HA, scaled behind load balancers.<\/li>\n<li><strong>Keep data close to compute<\/strong>:<\/li>\n<li>Same region for OSS\/NAS and GPU instances.<\/li>\n<li><strong>Use immutable images<\/strong>:<\/li>\n<li>Build golden images with pinned driver\/CUDA\/framework versions.<\/li>\n<li><strong>Design for scale-out<\/strong>:<\/li>\n<li>Prefer horizontal scaling for inference when possible; keep instances stateless and load models from shared storage.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">IAM\/security best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use <strong>RAM users<\/strong> with MFA; avoid using root credentials.<\/li>\n<li>Use <strong>RAM roles<\/strong> for instances to access OSS or other APIs without embedding long-lived keys (verify exact feature availability and configuration).<\/li>\n<li>Scope permissions by <strong>resource group<\/strong>, <strong>region<\/strong>, and <strong>tags<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Implement \u201c<strong>idle shutdown<\/strong>\u201d automation for dev\/test GPU instances.<\/li>\n<li>Use budgets\/alerts:<\/li>\n<li>CloudMonitor alarms for running instances beyond expected windows<\/li>\n<li>Billing center budgets (verify feature availability in your account)<\/li>\n<li>Right-size:<\/li>\n<li>Track GPU utilization; if consistently low, move to a smaller GPU SKU or CPU inference.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Match GPU model to workload:<\/li>\n<li>Inference vs training, FP16\/BF16 support, memory 
requirements (verify GPU capabilities).<\/li>\n<li>Avoid I\/O bottlenecks:<\/li>\n<li>Use local ESSD as cache\/scratch for training.<\/li>\n<li>Use OSS multipart downloads or prefetching where appropriate.<\/li>\n<li>Pin software versions:<\/li>\n<li>Driver \u2194 CUDA \u2194 framework compatibility is critical.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Reliability best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>For production inference:<\/li>\n<li>Deploy across zones when available.<\/li>\n<li>Use health checks and rolling updates.<\/li>\n<li>For batch training:<\/li>\n<li>Checkpoint frequently to OSS so jobs can resume after interruption.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operations best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Standardize:<\/li>\n<li>naming conventions (project-env-role-index)<\/li>\n<li>tags (Owner, CostCenter, Environment, DataSensitivity)<\/li>\n<li>Patch management:<\/li>\n<li>Update images in CI and roll out via instance replacement.<\/li>\n<li>Observability:<\/li>\n<li>Centralize logs; alert on OOM, disk full, high error rates.<\/li>\n<li>Collect GPU utilization metrics via agents if you operate at scale.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Governance\/tagging\/naming best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enforce tags via policy\/process:<\/li>\n<li>Owner<\/li>\n<li>Application<\/li>\n<li>Environment<\/li>\n<li>Cost center<\/li>\n<li>Keep separate accounts or resource groups for dev\/test vs production.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">12. 
Security Considerations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Identity and access model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use <strong>RAM<\/strong> for:<\/li>\n<li>Human access (RAM users, SSO if available)<\/li>\n<li>Programmatic access (RAM roles, access keys with rotation policies)<\/li>\n<li>Prefer instance roles over embedding credentials in code.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Encryption<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>At rest<\/strong>:<\/li>\n<li>Enable disk encryption where supported\/required (verify ECS disk encryption options and constraints).<\/li>\n<li>Encrypt OSS buckets\/objects as required (server-side encryption options\u2014verify).<\/li>\n<li><strong>In transit<\/strong>:<\/li>\n<li>Use SSH for admin, TLS for APIs, HTTPS endpoints for OSS access.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network exposure<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Avoid public SSH whenever possible:<\/li>\n<li>Use a bastion host, VPN, or private connectivity (best practice).<\/li>\n<li>If you must use EIP:<\/li>\n<li>Restrict security group inbound rules to your IP.<\/li>\n<li>Consider changing SSH port only as a minor hardening measure\u2014real security comes from IP restriction and keys.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secrets handling<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Do not store access keys in code or custom images.<\/li>\n<li>Use environment injection from a secrets store if available (for example, KMS-based patterns; verify Alibaba Cloud-native secrets solutions you adopt).<\/li>\n<li>Rotate credentials and audit usage.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Audit\/logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enable <strong>ActionTrail<\/strong> and retain logs according to your policy.<\/li>\n<li>Collect OS logs and application logs to a centralized service (SLS) for incident response 
(implementation-dependent).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data residency: choose regions aligned with data requirements.<\/li>\n<li>Access logging: ensure administrative actions are traceable.<\/li>\n<li>Least privilege: restrict who can create GPU instances (cost and data risk).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Common security mistakes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Opening SSH (22) to <code>0.0.0.0\/0<\/code><\/li>\n<li>Leaving EIPs attached to instances permanently<\/li>\n<li>Using shared SSH keys across the organization<\/li>\n<li>Running workloads as root inside the VM without controls<\/li>\n<li>Copying datasets to local disks and forgetting to wipe\/decommission properly<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secure deployment recommendations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Private subnets + NAT Gateway for outbound-only traffic.<\/li>\n<li>Bastion host with strong authentication and session recording (if required).<\/li>\n<li>Golden images with pre-hardened baseline and CIS-like settings (where applicable).<\/li>\n<li>Use separate resource groups\/accounts for different sensitivity levels.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">13. Limitations and Gotchas<\/h2>\n\n\n\n<blockquote>\n<p>Confirm exact values and availability in official docs; limits vary by region, instance family, and account quotas.<\/p>\n<\/blockquote>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Regional\/zone capacity constraints<\/strong>: GPU capacity can be scarce. 
Plan procurement early for production.<\/li>\n<li><strong>Quota limitations<\/strong>: GPU instance quotas may be low by default and require increases.<\/li>\n<li><strong>Driver\/framework compatibility<\/strong>: CUDA, cuDNN, TensorRT, and ML frameworks have strict version compatibility.<\/li>\n<li><strong>Spot\/preemptible interruptions<\/strong>: great for cost but requires checkpointing and fault tolerance.<\/li>\n<li><strong>Data gravity and egress<\/strong>: moving large datasets across regions or out of cloud costs money and time.<\/li>\n<li><strong>GPU monitoring<\/strong>: Cloud-native metrics may not include detailed GPU utilization; you may need in-guest monitoring.<\/li>\n<li><strong>Image baking complexity<\/strong>: maintaining a secure, patched GPU image pipeline requires discipline and testing.<\/li>\n<li><strong>Kubernetes GPU scheduling complexity<\/strong> (if using ACK): device plugin, node labeling\/taints, and runtime setup require careful validation.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">14. 
Comparison with Alternatives<\/h2>\n\n\n\n<p>Elastic GPU Service is best viewed as \u201cGPU-enabled ECS.\u201d Alternatives include other Alibaba Cloud services that abstract infrastructure, other cloud providers\u2019 GPU VMs, or self-managed on-prem GPU clusters.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Option<\/th>\n<th>Best For<\/th>\n<th>Strengths<\/th>\n<th>Weaknesses<\/th>\n<th>When to Choose<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Alibaba Cloud Elastic GPU Service (GPU ECS)<\/strong><\/td>\n<td>Teams needing VM control for GPU workloads<\/td>\n<td>Full OS control, integrates with VPC\/RAM\/OSS, flexible stacks<\/td>\n<td>You manage drivers, patching, scaling patterns<\/td>\n<td>You want GPU compute with VM-level flexibility<\/td>\n<\/tr>\n<tr>\n<td><strong>Alibaba Cloud ECS (CPU-only)<\/strong><\/td>\n<td>Non-GPU workloads, lightweight inference<\/td>\n<td>Lower cost, simpler ops<\/td>\n<td>Not suitable for heavy training\/inference\/rendering<\/td>\n<td>When profiling shows CPU is sufficient<\/td>\n<\/tr>\n<tr>\n<td><strong>Alibaba Cloud ACK with GPU nodes<\/strong> (uses GPU ECS nodes)<\/td>\n<td>Containerized GPU inference\/training<\/td>\n<td>Orchestration, rolling updates, scaling patterns<\/td>\n<td>More operational complexity; GPU scheduling setup<\/td>\n<td>When you standardize on Kubernetes for deployment<\/td>\n<\/tr>\n<tr>\n<td><strong>Alibaba Cloud PAI (managed AI platform offerings)<\/strong><\/td>\n<td>Managed training\/inference pipelines<\/td>\n<td>Higher-level abstractions, potentially less ops<\/td>\n<td>Less low-level control; product fit varies<\/td>\n<td>When you want managed ML workflows over raw VMs<\/td>\n<\/tr>\n<tr>\n<td><strong>AWS EC2 GPU instances<\/strong><\/td>\n<td>Multi-cloud or AWS-native stacks<\/td>\n<td>Broad ecosystem, mature tooling<\/td>\n<td>Different IAM\/networking model; migration effort<\/td>\n<td>When the rest of your platform is on 
AWS<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure GPU VMs<\/strong><\/td>\n<td>Azure-native ML\/VDI<\/td>\n<td>Tight integration with Azure services<\/td>\n<td>Different tooling and costs<\/td>\n<td>When you\u2019re standardized on Azure<\/td>\n<\/tr>\n<tr>\n<td><strong>Google Cloud GPU VMs<\/strong><\/td>\n<td>GCP-native ML\/data stacks<\/td>\n<td>Integration with GCP data\/AI services<\/td>\n<td>Different networking\/IAM<\/td>\n<td>When your data platform is on GCP<\/td>\n<\/tr>\n<tr>\n<td><strong>On-prem GPU servers<\/strong><\/td>\n<td>Fixed high utilization, strict data locality<\/td>\n<td>Full control, potentially lower long-term cost at high utilization<\/td>\n<td>CapEx, capacity planning, ops burden<\/td>\n<td>When GPUs are near-100% utilized and data must stay on-prem<\/td>\n<\/tr>\n<tr>\n<td><strong>Self-managed Kubernetes + GPUs<\/strong> (anywhere)<\/td>\n<td>Custom platform engineering<\/td>\n<td>Portable patterns<\/td>\n<td>High complexity<\/td>\n<td>When you have a platform team and portability is critical<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">15. 
Real-World Example<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Enterprise example: Visual quality inspection in manufacturing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Multiple factories produce high-resolution images; CPU inference can\u2019t meet latency\/throughput, and the company needs strong network isolation.<\/li>\n<li><strong>Proposed architecture<\/strong><\/li>\n<li>GPU ECS instances (Elastic GPU Service) run inference services.<\/li>\n<li>VPC with private subnets; access via internal SLB.<\/li>\n<li>OSS stores images and model artifacts; NAS caches frequently used models.<\/li>\n<li>CloudMonitor alarms on instance health; ActionTrail for audit.<\/li>\n<li><strong>Why Elastic GPU Service was chosen<\/strong><\/li>\n<li>VM-level control to pin driver\/CUDA\/framework versions for validation.<\/li>\n<li>Predictable performance and easier compliance alignment than ad-hoc desktops.<\/li>\n<li><strong>Expected outcomes<\/strong><\/li>\n<li>Lower per-image inference time.<\/li>\n<li>Centralized governance and auditability.<\/li>\n<li>Faster rollout of updated models using golden images and rolling instance replacement.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Startup\/small-team example: GPU-backed semantic search MVP<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: A startup needs fast embedding generation and reranking for search; they can\u2019t justify on-prem GPUs.<\/li>\n<li><strong>Proposed architecture<\/strong><\/li>\n<li>One small GPU ECS instance for inference.<\/li>\n<li>OSS bucket for model artifacts and logs.<\/li>\n<li>EIP only for admin access; app served behind a managed load balancer when scaling.<\/li>\n<li><strong>Why Elastic GPU Service was chosen<\/strong><\/li>\n<li>Quick provisioning and pay-as-you-go experimentation.<\/li>\n<li>Simple VM deployment without building a full Kubernetes platform on day one.<\/li>\n<li><strong>Expected outcomes<\/strong><\/li>\n<li>MVP 
performance meets product needs.<\/li>\n<li>Ability to scale out by cloning the instance image and adding a load balancer.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">16. FAQ<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>Is Elastic GPU Service different from ECS?<\/strong><br\/>\n   In practice, Elastic GPU Service is delivered through GPU-enabled ECS instance families. Think of it as the GPU compute capability of ECS. Verify current product positioning in Alibaba Cloud docs.<\/p>\n<\/li>\n<li>\n<p><strong>Which GPU models are available?<\/strong><br\/>\n   Availability depends on region\/zone and instance family. Check the ECS instance type list in your region and the Elastic GPU Service documentation.<\/p>\n<\/li>\n<li>\n<p><strong>Can I use pay-as-you-go billing?<\/strong><br\/>\n   Typically yes (as an ECS billing method). Availability of subscription\/spot\/preemptible varies\u2014verify in the console for your region.<\/p>\n<\/li>\n<li>\n<p><strong>Do GPU instances include NVIDIA drivers by default?<\/strong><br\/>\n   Some images may include drivers, others do not. Always verify the selected image description and test with <code>nvidia-smi<\/code>.<\/p>\n<\/li>\n<li>\n<p><strong>How do I verify the GPU is working?<\/strong><br\/>\n   Run <code>nvidia-smi<\/code>. For deeper validation, run a small CUDA sample or a container test with the NVIDIA runtime.<\/p>\n<\/li>\n<li>\n<p><strong>Can I use Docker containers with the GPU?<\/strong><br\/>\n   Yes, but you must configure the NVIDIA container runtime\/toolkit and ensure driver compatibility. Follow NVIDIA\u2019s official documentation.<\/p>\n<\/li>\n<li>\n<p><strong>Can I use Kubernetes (ACK) with GPUs?<\/strong><br\/>\n   Many teams use GPU ECS instances as Kubernetes worker nodes. 
You must configure GPU scheduling and device plugins; verify ACK GPU guidance for your region\/version.<\/p>\n<\/li>\n<li>\n<p><strong>What storage is best for ML datasets?<\/strong><br\/>\n   Common pattern: store datasets and artifacts in OSS, cache hot data on local ESSD, and optionally use NAS for shared filesystem needs.<\/p>\n<\/li>\n<li>\n<p><strong>What are the main cost risks?<\/strong><br\/>\n   Leaving GPU instances running idle, paying for EIPs, and data egress. Set budgets and automate shutdown for non-production.<\/p>\n<\/li>\n<li>\n<p><strong>How do I secure SSH access?<\/strong><br\/>\n   Restrict security group inbound to your IP, use key-based auth, and ideally use a bastion host or VPN rather than public SSH.<\/p>\n<\/li>\n<li>\n<p><strong>How do I scale an inference service?<\/strong><br\/>\n   Keep inference nodes stateless, load models from OSS\/NAS, place them behind SLB, and scale out by adding instances or using Auto Scaling (pattern depends on your stack).<\/p>\n<\/li>\n<li>\n<p><strong>How do I handle spot\/preemptible interruptions?<\/strong><br\/>\n   Design training to checkpoint frequently to OSS and resume. For inference, use multiple instances and graceful draining.<\/p>\n<\/li>\n<li>\n<p><strong>Can I snapshot a GPU instance and replicate it?<\/strong><br\/>\n   You can create custom images\/snapshots like normal ECS, but ensure driver licensing\/compatibility and validate after cloning.<\/p>\n<\/li>\n<li>\n<p><strong>Do I get GPU utilization metrics in CloudMonitor?<\/strong><br\/>\n   Baseline ECS metrics are available. Detailed GPU metrics often require in-guest tooling\/exporters. Verify what CloudMonitor provides natively for your instance family.<\/p>\n<\/li>\n<li>\n<p><strong>What\u2019s the safest way to keep driver\/CUDA consistent?<\/strong><br\/>\n   Use a golden image pipeline: build \u2192 test \u2192 publish. 
Pin versions and document compatibility matrices.<\/p>\n<\/li>\n<li>\n<p><strong>Is Elastic GPU Service suitable for regulated workloads?<\/strong><br\/>\n   It can be, if you design for encryption, access controls, auditing, and region residency requirements. Confirm compliance needs and Alibaba Cloud controls with official documentation and your compliance team.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">17. Top Online Resources to Learn Elastic GPU Service<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Resource Type<\/th>\n<th>Name<\/th>\n<th>Why It Is Useful<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Official documentation<\/td>\n<td>Elastic GPU Service documentation (Alibaba Cloud Help Center) \u2013 https:\/\/www.alibabacloud.com\/help<\/td>\n<td>Primary reference for capabilities, regions, and guidance (verify the Elastic GPU Service section)<\/td>\n<\/tr>\n<tr>\n<td>Official documentation<\/td>\n<td>ECS documentation \u2013 https:\/\/www.alibabacloud.com\/help\/en\/ecs<\/td>\n<td>GPU instances are provisioned as ECS; this covers instance lifecycle, networking, disks, images<\/td>\n<\/tr>\n<tr>\n<td>Official documentation<\/td>\n<td>VPC documentation \u2013 https:\/\/www.alibabacloud.com\/help\/en\/vpc<\/td>\n<td>Networking fundamentals for secure GPU deployments<\/td>\n<\/tr>\n<tr>\n<td>Official documentation<\/td>\n<td>RAM documentation \u2013 https:\/\/www.alibabacloud.com\/help\/en\/ram<\/td>\n<td>Access control, least privilege policies, roles<\/td>\n<\/tr>\n<tr>\n<td>Official documentation<\/td>\n<td>CloudMonitor documentation \u2013 https:\/\/www.alibabacloud.com\/help\/en\/cloudmonitor<\/td>\n<td>Monitoring\/alarms for ECS and related resources<\/td>\n<\/tr>\n<tr>\n<td>Official documentation<\/td>\n<td>ActionTrail documentation \u2013 https:\/\/www.alibabacloud.com\/help\/en\/actiontrail<\/td>\n<td>Audit logging of control-plane 
actions<\/td>\n<\/tr>\n<tr>\n<td>Official pricing<\/td>\n<td>Alibaba Cloud pricing overview \u2013 https:\/\/www.alibabacloud.com\/pricing<\/td>\n<td>Pricing entry point; use it to reach ECS pricing and calculator<\/td>\n<\/tr>\n<tr>\n<td>Official product page<\/td>\n<td>Elastic Compute Service (ECS) product page \u2013 https:\/\/www.alibabacloud.com\/product\/ecs<\/td>\n<td>Background, billing methods, and entry point to pricing\/specs<\/td>\n<\/tr>\n<tr>\n<td>Official docs (data)<\/td>\n<td>OSS documentation \u2013 https:\/\/www.alibabacloud.com\/help\/en\/oss<\/td>\n<td>Best practices for dataset\/model storage<\/td>\n<\/tr>\n<tr>\n<td>Official architecture<\/td>\n<td>Alibaba Cloud Architecture Center \u2013 https:\/\/www.alibabacloud.com\/architecture<\/td>\n<td>Reference architectures; verify GPU-specific patterns available<\/td>\n<\/tr>\n<tr>\n<td>External (vendor official)<\/td>\n<td>NVIDIA Container Toolkit install guide \u2013 https:\/\/docs.nvidia.com\/datacenter\/cloud-native\/container-toolkit\/latest\/install-guide.html<\/td>\n<td>Required to run GPU workloads in Docker containers reliably<\/td>\n<\/tr>\n<tr>\n<td>Community (use with care)<\/td>\n<td>Trusted GitHub examples for CUDA\/PyTorch\/TensorFlow<\/td>\n<td>Helps with smoke tests; ensure they match your driver\/CUDA versions and security policies<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">18. Training and Certification Providers<\/h2>\n\n\n\n<p>Below are training providers (neutral listing). 
Confirm current course catalogs and delivery modes on their websites.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>DevOpsSchool.com<\/strong>\n   &#8211; <strong>Suitable audience<\/strong>: DevOps engineers, SREs, platform teams, cloud engineers\n   &#8211; <strong>Likely learning focus<\/strong>: cloud operations, DevOps practices, infrastructure automation\n   &#8211; <strong>Mode<\/strong>: check website\n   &#8211; <strong>Website<\/strong>: https:\/\/www.devopsschool.com\/<\/p>\n<\/li>\n<li>\n<p><strong>ScmGalaxy.com<\/strong>\n   &#8211; <strong>Suitable audience<\/strong>: software engineers, DevOps learners, build\/release engineers\n   &#8211; <strong>Likely learning focus<\/strong>: SCM, CI\/CD, DevOps tooling foundations\n   &#8211; <strong>Mode<\/strong>: check website\n   &#8211; <strong>Website<\/strong>: https:\/\/www.scmgalaxy.com\/<\/p>\n<\/li>\n<li>\n<p><strong>CloudOpsNow.in<\/strong>\n   &#8211; <strong>Suitable audience<\/strong>: cloud operations engineers, sysadmins moving to cloud\n   &#8211; <strong>Likely learning focus<\/strong>: cloud operations, monitoring, reliability practices\n   &#8211; <strong>Mode<\/strong>: check website\n   &#8211; <strong>Website<\/strong>: https:\/\/www.cloudopsnow.in\/<\/p>\n<\/li>\n<li>\n<p><strong>SreSchool.com<\/strong>\n   &#8211; <strong>Suitable audience<\/strong>: SREs, operations, platform engineering\n   &#8211; <strong>Likely learning focus<\/strong>: SRE principles, observability, incident response\n   &#8211; <strong>Mode<\/strong>: check website\n   &#8211; <strong>Website<\/strong>: https:\/\/www.sreschool.com\/<\/p>\n<\/li>\n<li>\n<p><strong>AiOpsSchool.com<\/strong>\n   &#8211; <strong>Suitable audience<\/strong>: operations teams adopting AIOps, monitoring\/automation engineers\n   &#8211; <strong>Likely learning focus<\/strong>: AIOps concepts, automation, operational analytics\n   &#8211; <strong>Mode<\/strong>: check website\n   &#8211; <strong>Website<\/strong>: 
https:\/\/www.aiopsschool.com\/<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">19. Top Trainers<\/h2>\n\n\n\n<p>Listed as training resources\/platforms (neutral listing). Verify specific trainer profiles and offerings on each site.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>RajeshKumar.xyz<\/strong>\n   &#8211; <strong>Likely specialization<\/strong>: check website (often individual trainer branding)\n   &#8211; <strong>Suitable audience<\/strong>: learners seeking guided training\/mentorship\n   &#8211; <strong>Website<\/strong>: https:\/\/rajeshkumar.xyz\/<\/p>\n<\/li>\n<li>\n<p><strong>devopstrainer.in<\/strong>\n   &#8211; <strong>Likely specialization<\/strong>: DevOps tools and practices training\n   &#8211; <strong>Suitable audience<\/strong>: DevOps engineers, CI\/CD learners\n   &#8211; <strong>Website<\/strong>: https:\/\/www.devopstrainer.in\/<\/p>\n<\/li>\n<li>\n<p><strong>devopsfreelancer.com<\/strong>\n   &#8211; <strong>Likely specialization<\/strong>: DevOps freelancing\/services and training resources\n   &#8211; <strong>Suitable audience<\/strong>: teams and individuals seeking practical DevOps support\n   &#8211; <strong>Website<\/strong>: https:\/\/www.devopsfreelancer.com\/<\/p>\n<\/li>\n<li>\n<p><strong>devopssupport.in<\/strong>\n   &#8211; <strong>Likely specialization<\/strong>: DevOps support services and training\n   &#8211; <strong>Suitable audience<\/strong>: ops teams and DevOps practitioners\n   &#8211; <strong>Website<\/strong>: https:\/\/www.devopssupport.in\/<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">20. Top Consulting Companies<\/h2>\n\n\n\n<p>Neutral listing based on provided names. 
Verify service lines and case studies directly with each company.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>cotocus.com<\/strong>\n   &#8211; <strong>Likely service area<\/strong>: cloud\/DevOps consulting (verify on website)\n   &#8211; <strong>Where they may help<\/strong>: architecture reviews, cloud migrations, ops automation\n   &#8211; <strong>Consulting use case examples<\/strong>:<\/p>\n<ul>\n<li>Designing a secure VPC layout for GPU workloads<\/li>\n<li>Building an image pipeline for GPU instances<\/li>\n<li><strong>Website<\/strong>: https:\/\/cotocus.com\/<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>DevOpsSchool.com<\/strong>\n   &#8211; <strong>Likely service area<\/strong>: DevOps consulting and corporate training (verify on website)\n   &#8211; <strong>Where they may help<\/strong>: CI\/CD, infrastructure automation, operational readiness\n   &#8211; <strong>Consulting use case examples<\/strong>:<\/p>\n<ul>\n<li>Implementing IaC for ECS GPU fleets<\/li>\n<li>Setting up monitoring\/alerting standards for GPU services<\/li>\n<li><strong>Website<\/strong>: https:\/\/www.devopsschool.com\/<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>DEVOPSCONSULTING.IN<\/strong>\n   &#8211; <strong>Likely service area<\/strong>: DevOps consulting services (verify on website)\n   &#8211; <strong>Where they may help<\/strong>: DevOps transformations, reliability improvements, automation\n   &#8211; <strong>Consulting use case examples<\/strong>:<\/p>\n<ul>\n<li>Hardening SSH\/bastion patterns for production GPU instances<\/li>\n<li>Cost optimization for dev\/test GPU usage<\/li>\n<li><strong>Website<\/strong>: https:\/\/www.devopsconsulting.in\/<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">21. 
Career and Learning Roadmap<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn before Elastic GPU Service<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Linux fundamentals<\/strong>: SSH, systemd, logs, package managers, storage.<\/li>\n<li><strong>Networking<\/strong>: VPC concepts, subnets, routing, security groups, NAT.<\/li>\n<li><strong>Cloud basics on Alibaba Cloud<\/strong>:\n<ul>\n<li>ECS instance lifecycle<\/li>\n<li>RAM and least privilege<\/li>\n<li>OSS basics<\/li>\n<\/ul>\n<\/li>\n<li><strong>GPU basics<\/strong>:\n<ul>\n<li>What drivers do<\/li>\n<li>CUDA conceptually (even if you don\u2019t write CUDA kernels)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn after<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Image pipelines<\/strong>: Packer or similar image build workflows (verify your toolchain).<\/li>\n<li><strong>Containers for GPU<\/strong>: Docker + NVIDIA runtime; container security.<\/li>\n<li><strong>Orchestration<\/strong>: ACK Kubernetes GPU scheduling, autoscaling patterns.<\/li>\n<li><strong>MLOps<\/strong> (if ML workloads):\n<ul>\n<li>Model registry patterns, CI for models, canary deploys, drift monitoring (tooling varies).<\/li>\n<\/ul>\n<\/li>\n<li><strong>Observability at scale<\/strong>: centralized logs, tracing, metrics pipelines.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Job roles that use it<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud engineer \/ infrastructure engineer<\/li>\n<li>DevOps engineer \/ SRE<\/li>\n<li>ML engineer \/ MLOps engineer<\/li>\n<li>Data scientist (advanced users managing their own GPU environments)<\/li>\n<li>Platform engineer building shared compute platforms<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Certification path (if available)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Alibaba Cloud certifications change over time and vary by region. 
Verify current Alibaba Cloud certification tracks on the official site:<\/li>\n<li>https:\/\/edu.alibabacloud.com\/ (verify current certification pages and availability)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Project ideas for practice<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build a golden GPU image with pinned driver + CUDA + PyTorch.<\/li>\n<li>Create an inference service on a GPU ECS instance and publish via SLB.<\/li>\n<li>Implement a training job that checkpoints to OSS and resumes after interruption.<\/li>\n<li>Build a cost-control script that shuts down idle GPU instances after N minutes.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">22. Glossary<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Elastic GPU Service<\/strong>: Alibaba Cloud offering for GPU-accelerated compute, typically provisioned via GPU-enabled ECS instances.<\/li>\n<li><strong>ECS (Elastic Compute Service)<\/strong>: Alibaba Cloud virtual machine service used to run compute workloads.<\/li>\n<li><strong>GPU<\/strong>: Graphics Processing Unit; excels at parallel workloads.<\/li>\n<li><strong>VPC (Virtual Private Cloud)<\/strong>: logically isolated virtual network in Alibaba Cloud.<\/li>\n<li><strong>vSwitch<\/strong>: subnet within a VPC in a specific zone.<\/li>\n<li><strong>Security Group<\/strong>: stateful virtual firewall controlling inbound\/outbound traffic to instances.<\/li>\n<li><strong>EIP (Elastic IP)<\/strong>: static public IP that can be bound to cloud resources.<\/li>\n<li><strong>ESSD<\/strong>: enterprise SSD cloud disk types for ECS (performance varies by category).<\/li>\n<li><strong>OSS (Object Storage Service)<\/strong>: Alibaba Cloud object storage for datasets, models, artifacts.<\/li>\n<li><strong>NAS<\/strong>: managed shared file storage service (POSIX-like access).<\/li>\n<li><strong>RAM (Resource Access Management)<\/strong>: Alibaba Cloud IAM service for users, roles, and 
policies.<\/li>\n<li><strong>KMS (Key Management Service)<\/strong>: service for managing encryption keys (verify integrations you use).<\/li>\n<li><strong>CloudMonitor<\/strong>: Alibaba Cloud monitoring and alerting service.<\/li>\n<li><strong>ActionTrail<\/strong>: Alibaba Cloud audit logging of API actions.<\/li>\n<li><strong>Golden image<\/strong>: prebuilt VM image with standardized configuration and software.<\/li>\n<li><strong>CUDA<\/strong>: NVIDIA GPU computing platform and API ecosystem.<\/li>\n<li><strong><code>nvidia-smi<\/code><\/strong>: NVIDIA tool to display GPU status, driver info, and utilization.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">23. Summary<\/h2>\n\n\n\n<p>Elastic GPU Service on Alibaba Cloud (Computing) provides GPU-accelerated compute primarily through GPU-enabled ECS instances. It matters because GPUs are often the difference between feasible and impractical runtimes for AI training\/inference, rendering, and parallel compute workloads.<\/p>\n\n\n\n<p>Architecturally, treat it as a secure, VPC-isolated GPU VM layer that integrates with OSS\/NAS for data, RAM for access control, and CloudMonitor\/ActionTrail for operations and governance. Cost-wise, focus on the big drivers: instance type selection, running time (avoid idle), storage, and network egress\/EIP usage. Security-wise, prioritize least privilege RAM policies, private networking, restricted SSH, encryption, and audit trails.<\/p>\n\n\n\n<p>Use Elastic GPU Service when you need GPU acceleration with VM-level control and reproducibility; consider higher-level managed platforms when you want less infrastructure management. 
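<\/p>\n\n\n\n<p>The \u201cavoid idle\u201d point is easy to automate for non-production machines. Below is a minimal sketch of the decision logic only, assuming per-minute GPU utilization samples (for example from <code>nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits<\/code>); the sampling loop and the ECS stop call are left out:<\/p>\n\n

```python
# Hedged sketch of an idle-shutdown policy for a non-production GPU instance.
# The thresholds are illustrative assumptions; tune them to your workloads.
def should_shutdown(util_samples, idle_threshold=5, idle_minutes=30):
    """Return True if the last `idle_minutes` utilization samples (percent,
    one per minute) are all below `idle_threshold`.

    With fewer samples than `idle_minutes`, history is insufficient and the
    instance is kept running.
    """
    if len(util_samples) < idle_minutes:
        return False  # not enough history yet; don't stop a fresh instance
    return all(u < idle_threshold for u in util_samples[-idle_minutes:])
```

\n\n<p>In practice a timer collects the samples and, when the function returns True, an ECS stop request (or a plain <code>shutdown<\/code>) ends the billing for compute time.<\/p>\n\n<p>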
Next, deepen your skills by building a golden image pipeline and\u2014if you operate at scale\u2014validating GPU orchestration patterns with ACK and robust monitoring.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Computing<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2,5],"tags":[],"class_list":["post-19","post","type-post","status-publish","format-standard","hentry","category-alibaba-cloud","category-computing"],"_links":{"self":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/19","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/comments?post=19"}],"version-history":[{"count":0,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/19\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/media?parent=19"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/categories?post=19"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/tags?post=19"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}