{"id":216,"date":"2026-04-13T05:46:16","date_gmt":"2026-04-13T05:46:16","guid":{"rendered":"https:\/\/www.devopsschool.com\/tutorials\/aws-iot-analytics-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-internet-of-things-iot\/"},"modified":"2026-04-13T05:46:16","modified_gmt":"2026-04-13T05:46:16","slug":"aws-iot-analytics-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-internet-of-things-iot","status":"publish","type":"post","link":"https:\/\/www.devopsschool.com\/tutorials\/aws-iot-analytics-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-internet-of-things-iot\/","title":{"rendered":"AWS IoT Analytics Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Internet of Things (IoT)"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Category<\/h2>\n\n\n\n<p>Internet of Things (IoT)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction<\/h2>\n\n\n\n<p>AWS IoT Analytics is an AWS service designed to help you collect, process, store, and analyze Internet of Things (IoT) device data at scale\u2014without building and operating a full custom data pipeline from scratch.<\/p>\n\n\n\n<p>In simple terms: devices produce noisy telemetry (JSON messages, sensor readings, status events). 
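<\/p>\n\n\n\n<p>For example, a single device reading often arrives as a small JSON document like this (field names and values are illustrative, not a required schema):<\/p>\n\n\n\n<pre><code class=\"language-json\">{\n  \"deviceId\": \"sensor-0042\",\n  \"timestamp\": \"2026-04-01T12:00:00Z\",\n  \"temperature\": 21.7,\n  \"humidity\": 48,\n  \"firmware\": \"1.4.2\"\n}\n<\/code><\/pre>\n\n\n\n<p>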
AWS IoT Analytics helps you bring that data in, clean and transform it, store it in a query-friendly way, and then run analytics (SQL or custom container-based processing) to produce datasets you can use in dashboards or machine learning.<\/p>\n\n\n\n<p>Technically, AWS IoT Analytics provides managed IoT-specific ingestion endpoints (\u201cchannels\u201d), transformation workflows (\u201cpipelines\u201d), durable storage (\u201cdata stores\u201d), and analytics outputs (\u201cdatasets\u201d), with integrations into the broader AWS data and analytics ecosystem (Amazon S3, Amazon QuickSight, AWS IoT Core rules, AWS Lambda, and\u2014depending on how you analyze\u2014services like Amazon SageMaker).<\/p>\n\n\n\n<p>The problem it solves: IoT telemetry is high-volume, time-oriented, and often messy (missing values, inconsistent units, out-of-order timestamps, duplicated messages). Teams commonly waste weeks building plumbing and data-quality logic. AWS IoT Analytics packages common IoT data engineering patterns into a managed service, letting you focus on insights and applications.<\/p>\n\n\n\n<blockquote>\n<p>Service lifecycle note: AWS has announced the end of support for AWS IoT Analytics, so treat it as a legacy option rather than the foundation of a new build. Verify current service availability, end-of-support dates, and recommended migration paths in the official AWS documentation and AWS \u201cWhat\u2019s New\u201d before starting a new production build.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">2. 
What is AWS IoT Analytics?<\/h2>\n\n\n\n<p>AWS IoT Analytics is a managed service whose official purpose is to make it easier to run analytics on IoT device data by providing purpose-built components for ingestion, processing, storage, and dataset generation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Core capabilities<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ingest IoT messages<\/strong> into a managed entry point (channels) either directly via the service API or via AWS IoT Core rules.<\/li>\n<li><strong>Transform and enrich data<\/strong> using pipelines (filtering, selecting attributes, math transforms, adding attributes, enriching from device registry\/shadow where applicable, invoking AWS Lambda, etc.).<\/li>\n<li><strong>Persist data<\/strong> in a managed data store designed for IoT workloads and downstream analytics.<\/li>\n<li><strong>Create datasets<\/strong> from stored data using SQL queries or custom container-based processing, and deliver dataset content for use by BI\/ML tools.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Major components (conceptual model)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Channel<\/strong>: The ingestion buffer\/entry point for messages.<\/li>\n<li><strong>Pipeline<\/strong>: A sequence of processing steps (\u201cactivities\u201d) applied to channel messages.<\/li>\n<li><strong>Data store<\/strong>: The durable storage for processed messages.<\/li>\n<li><strong>Dataset<\/strong>: A defined query or processing job that produces an analysis-ready output (dataset content).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Service type<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Managed IoT data ingestion + transformation + analytics dataset service<\/strong> (not a general-purpose stream processor, not a time-series database, not a full data lakehouse).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scope and locality<\/h3>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><strong>Regional service<\/strong> in practice: you create channels\/pipelines\/data stores\/datasets in a specific AWS Region within your AWS account. Data residency and latency depend on the Region you choose.<br\/>\n  Verify exact Region availability in the official documentation for AWS IoT Analytics.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How it fits into the AWS ecosystem<\/h3>\n\n\n\n<p>AWS IoT Analytics commonly sits between these layers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Device connectivity\/ingestion<\/strong>: AWS IoT Core (MQTT topics, rules engine), or direct ingestion to IoT Analytics APIs.<\/li>\n<li><strong>Transformation\/enrichment<\/strong>: IoT Analytics pipelines and\/or AWS Lambda.<\/li>\n<li><strong>Storage and analytics<\/strong>: IoT Analytics data stores and datasets; optional delivery to Amazon S3 for a broader analytics stack (Amazon Athena, Amazon QuickSight, Amazon EMR, AWS Glue, Amazon SageMaker).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">3. Why use AWS IoT Analytics?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Business reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Faster time to insight<\/strong>: pre-built IoT ingestion and data preparation primitives reduce engineering lead time.<\/li>\n<li><strong>Lower operational overhead<\/strong>: a managed service can reduce the burden of running streaming infrastructure and custom ETL jobs.<\/li>\n<li><strong>Better data quality<\/strong>: pipelines encourage consistent transformations and standardized schemas across devices and fleets.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Technical reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>IoT-specific processing model<\/strong>: designed around device telemetry patterns (small JSON messages, high frequency, occasional duplicates).<\/li>\n<li><strong>SQL-based dataset creation<\/strong>: lets analysts and engineers create repeatable datasets from stored telemetry.<\/li>\n<li><strong>Optional 
custom processing<\/strong>: container-based dataset jobs support advanced transformations when SQL isn\u2019t enough (verify current dataset types in docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operational reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Repeatable pipelines<\/strong>: pipeline activities are defined declaratively and can be version-controlled.<\/li>\n<li><strong>Integration with CloudWatch and CloudTrail<\/strong>: supports observability and auditability (verify exact metrics\/log events available for your setup).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/compliance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>IAM-based access control<\/strong>: control who can ingest, transform, query, and export.<\/li>\n<li><strong>Encryption<\/strong>: supports encryption at rest and in transit (verify KMS options and defaults in official docs).<\/li>\n<li><strong>Audit<\/strong>: AWS CloudTrail can record management API actions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scalability\/performance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Managed scaling<\/strong> for ingestion and processing within service quotas.<\/li>\n<li><strong>Decoupled stages<\/strong> (channel \u2192 pipeline \u2192 data store \u2192 dataset) reduce the need for custom backpressure handling in many cases.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should choose AWS IoT Analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You have <strong>IoT telemetry<\/strong> that needs <strong>cleaning\/enrichment<\/strong> and you want a managed path to <strong>queryable datasets<\/strong>.<\/li>\n<li>You want to integrate IoT telemetry into <strong>dashboards<\/strong> or <strong>ML<\/strong> workflows without assembling a complex ETL stack first.<\/li>\n<li>You need a clear separation of raw ingestion, processing logic, durable storage, and dataset 
outputs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should not choose AWS IoT Analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You primarily need a <strong>time-series database<\/strong> optimized for ad hoc time-range queries and downsampling (consider purpose-built time-series databases; in AWS, evaluate Amazon Timestream or other options depending on requirements).<\/li>\n<li>You need <strong>low-latency real-time stream analytics<\/strong> (evaluate Amazon Managed Service for Apache Flink, formerly Amazon Kinesis Data Analytics, or AWS Lambda\/Kinesis patterns).<\/li>\n<li>You already have a mature <strong>data lakehouse<\/strong> (S3 + Iceberg\/Hudi\/Delta + Glue\/Athena\/EMR) and prefer to standardize everything there\u2014IoT Analytics may be redundant.<\/li>\n<li>Your use case is mostly <strong>industrial asset modeling and OT integration<\/strong> (evaluate AWS IoT SiteWise).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">4. Where is AWS IoT Analytics used?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Industries<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Manufacturing (machine telemetry, OEE-like metrics pipelines)<\/li>\n<li>Energy and utilities (smart meters, substation monitoring)<\/li>\n<li>Transportation and logistics (fleet telemetry, cold-chain sensors)<\/li>\n<li>Smart buildings (HVAC sensors, occupancy\/air quality)<\/li>\n<li>Retail (refrigeration, footfall sensors, device health)<\/li>\n<li>Healthcare devices (telemetry and operational monitoring; ensure compliance requirements are met)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Team types<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>IoT platform teams building standardized telemetry ingestion<\/li>\n<li>Data engineering teams that need managed IoT ETL<\/li>\n<li>Analytics\/BI teams consuming curated datasets<\/li>\n<li>ML engineering teams using curated IoT features for training<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Workloads and architectures<\/h3>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><strong>IoT Core \u2192 IoT Analytics<\/strong> for MQTT ingestion + rules-based routing<\/li>\n<li>Direct device\/application ingestion to IoT Analytics when IoT Core is not used<\/li>\n<li>IoT Analytics \u2192 S3 \u2192 Athena\/QuickSight for BI<\/li>\n<li>IoT Analytics \u2192 datasets \u2192 SageMaker workflows (where applicable)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Real-world deployment contexts<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Large fleets (thousands to millions of devices) with standardized message schemas<\/li>\n<li>Multi-tenant device platforms (separate channels\/pipelines per tenant or per device class)<\/li>\n<li>Regulated environments requiring audit trails for data processing steps<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Production vs dev\/test usage<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Dev\/test<\/strong>: prototype pipelines, validate transformations, create small scheduled datasets.<\/li>\n<li><strong>Production<\/strong>: enforce naming\/tagging conventions, least-privilege IAM, encryption policies, retention rules, and cost controls; integrate monitoring and alarms.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">5. 
Top Use Cases and Scenarios<\/h2>\n\n\n\n<p>Below are realistic scenarios where AWS IoT Analytics is commonly a fit.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) Fleet health monitoring dataset<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: You need a daily fleet-wide report of device connectivity and error rates.<\/li>\n<li><strong>Why this service fits<\/strong>: Pipelines standardize and clean telemetry; datasets generate scheduled summaries.<\/li>\n<li><strong>Example<\/strong>: Every night, create a dataset showing % online devices, top error codes, and firmware versions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) Sensor data normalization (units + schema)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Devices send temperatures in mixed units (C\/F) and inconsistent field names.<\/li>\n<li><strong>Why this service fits<\/strong>: Pipeline transformations can standardize fields and values before storage.<\/li>\n<li><strong>Example<\/strong>: Convert all temps to Celsius; rename <code>tempF<\/code>\/<code>tempC<\/code> into <code>temperature_c<\/code>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) Detecting missing\/late telemetry for SLA reporting<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Some devices stop reporting; you need reports for SLA and operations.<\/li>\n<li><strong>Why this service fits<\/strong>: Store cleaned telemetry and generate datasets that compute last-seen timestamps per device.<\/li>\n<li><strong>Example<\/strong>: Create a dataset listing devices with no telemetry in the last 2 hours\/day.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) Cold-chain compliance analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: You must prove goods stayed within temperature ranges during transit.<\/li>\n<li><strong>Why this service fits<\/strong>: Pipelines can remove noisy readings and datasets can compute 
time-in-range metrics.<\/li>\n<li><strong>Example<\/strong>: Daily dataset per shipment: duration outside threshold, min\/max temperature, stop locations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) Predictive maintenance feature generation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: ML models require engineered features (rolling averages, counts, deltas).<\/li>\n<li><strong>Why this service fits<\/strong>: Datasets can produce curated training tables; container datasets can compute custom features (verify dataset type support).<\/li>\n<li><strong>Example<\/strong>: Generate features: vibration RMS over last N windows, mean motor current, anomaly counts.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Device firmware rollout analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Track firmware adoption and correlate with crash rates.<\/li>\n<li><strong>Why this service fits<\/strong>: Enrich telemetry with firmware metadata and produce daily adoption datasets.<\/li>\n<li><strong>Example<\/strong>: Dataset groups by firmware version and outputs crash rate trends.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">7) Smart building energy optimization reporting<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Compare energy usage to occupancy and weather.<\/li>\n<li><strong>Why this service fits<\/strong>: Centralize telemetry, generate datasets for BI.<\/li>\n<li><strong>Example<\/strong>: Hourly dataset joining sensor readings with derived occupancy metrics.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) IoT event quality control (deduplication + filtering)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Duplicate messages inflate costs and distort analytics.<\/li>\n<li><strong>Why this service fits<\/strong>: Pipelines can apply filtering and transformations; you can enforce minimal schema and drop invalid 
records.<\/li>\n<li><strong>Example<\/strong>: Drop records missing <code>deviceId<\/code> or <code>timestamp<\/code>; keep only message types you care about.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Multi-tenant IoT analytics for SaaS platforms<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: A SaaS IoT platform needs per-customer analytics outputs.<\/li>\n<li><strong>Why this service fits<\/strong>: Separate pipelines\/data stores per tenant or partition in datasets (architecture-dependent).<\/li>\n<li><strong>Example<\/strong>: Create datasets per tenant with scheduled exports to their S3 prefixes.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) Operational dashboards for manufacturing lines<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Build daily\/shift reports on machine state transitions and downtime.<\/li>\n<li><strong>Why this service fits<\/strong>: Pipelines can normalize state transitions; datasets produce shift-level aggregates.<\/li>\n<li><strong>Example<\/strong>: Dataset calculates downtime minutes by reason code per line per shift.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">11) Edge-to-cloud telemetry consolidation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Multiple edge gateways send aggregated data; you need one consistent analytics store.<\/li>\n<li><strong>Why this service fits<\/strong>: Channels unify ingestion; pipelines enforce a common format.<\/li>\n<li><strong>Example<\/strong>: Gateways publish aggregated metrics every minute; IoT Analytics produces hourly KPIs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">12) Compliance\/audit-friendly processing traceability<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: You need to show how raw telemetry becomes curated datasets.<\/li>\n<li><strong>Why this service fits<\/strong>: Pipeline definitions are explicit and can be reviewed and audited 
alongside CloudTrail logs.<\/li>\n<li><strong>Example<\/strong>: Documented pipeline steps + dataset SQL queries support internal audits.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">6. Core Features<\/h2>\n\n\n\n<p>Features below are described in practical terms. If you need exact limits, API shapes, and newest behaviors, verify in the official AWS IoT Analytics documentation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Channels (ingestion)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Provides a managed entry point for device messages.<\/li>\n<li><strong>Why it matters<\/strong>: Decouples ingestion from processing; simplifies routing from IoT Core or direct API calls.<\/li>\n<li><strong>Practical benefit<\/strong>: You can ingest data consistently even as downstream processing changes.<\/li>\n<li><strong>Caveats<\/strong>: Channels and ingestion are subject to service quotas and payload constraints (verify in docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Pipelines (data processing workflow)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Applies a sequence of activities to messages (e.g., filter, select attributes, transform values, enrich, invoke Lambda, then store).<\/li>\n<li><strong>Why it matters<\/strong>: Turns raw telemetry into standardized, analytics-ready records.<\/li>\n<li><strong>Practical benefit<\/strong>: Central place to implement \u201cdata contract\u201d rules (required fields, type conversions, computed attributes).<\/li>\n<li><strong>Caveats<\/strong>: Complex enrichments or heavy computations may be better in downstream systems or container datasets, depending on latency\/cost constraints.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Pipeline activities (common transformation building blocks)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Lets you implement common transformations without writing full custom 
code.<\/li>\n<li><strong>Why it matters<\/strong>: Reduces operational risk vs. custom ETL code.<\/li>\n<li><strong>Practical benefit<\/strong>: Faster iteration and easier review of data logic.<\/li>\n<li><strong>Caveats<\/strong>: Exact activity list and behavior should be verified in docs; some enrichments may require IoT Core registry\/shadow integration and correct IAM permissions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Data stores (durable storage)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Stores processed messages for querying and dataset generation.<\/li>\n<li><strong>Why it matters<\/strong>: Creates a stable, queryable source of truth for processed telemetry.<\/li>\n<li><strong>Practical benefit<\/strong>: Separates processed analytics storage from raw ingestion.<\/li>\n<li><strong>Caveats<\/strong>: Retention, encryption, and storage costs must be managed.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Datasets (repeatable analytics outputs)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Defines how to generate curated outputs from the data store (often SQL-based; some setups support custom container processing\u2014verify).<\/li>\n<li><strong>Why it matters<\/strong>: Gives you repeatable, scheduled, and shareable \u201canalysis tables\u201d.<\/li>\n<li><strong>Practical benefit<\/strong>: Downstream dashboards and ML can rely on stable dataset schemas.<\/li>\n<li><strong>Caveats<\/strong>: Dataset generation can scan large amounts of data\u2014watch cost and performance.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Dataset content delivery \/ export<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Produces dataset \u201ccontent\u201d that can be retrieved via API (often as pre-signed URLs) and\/or delivered to destinations like Amazon S3 (verify supported delivery options).<\/li>\n<li><strong>Why it matters<\/strong>: Bridges IoT 
Analytics outputs to the rest of your data platform.<\/li>\n<li><strong>Practical benefit<\/strong>: Easy to integrate with Athena\/QuickSight\/Glue by writing outputs to S3.<\/li>\n<li><strong>Caveats<\/strong>: S3 storage and request costs apply; dataset scheduling frequency impacts cost.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integration with AWS IoT Core (rules engine)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: IoT Core rules can route MQTT messages into IoT Analytics channels.<\/li>\n<li><strong>Why it matters<\/strong>: IoT Core is often the connectivity layer; rules provide flexible routing and filtering.<\/li>\n<li><strong>Practical benefit<\/strong>: No device changes required\u2014route topics to analytics centrally.<\/li>\n<li><strong>Caveats<\/strong>: IoT Core has its own pricing and quotas; rule misconfiguration can duplicate or drop data.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">AWS Lambda integration (enrichment\/custom logic)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Pipelines can invoke Lambda for custom transforms.<\/li>\n<li><strong>Why it matters<\/strong>: Lets you implement logic not covered by built-in activities.<\/li>\n<li><strong>Practical benefit<\/strong>: Custom parsing, mapping, lookup, validation.<\/li>\n<li><strong>Caveats<\/strong>: Adds cost and potential latency; ensure retries\/idempotency.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring and auditing (CloudWatch\/CloudTrail)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Supports operational visibility and audit trails of API actions.<\/li>\n<li><strong>Why it matters<\/strong>: Production systems need alerting, troubleshooting data, and access auditing.<\/li>\n<li><strong>Practical benefit<\/strong>: Helps detect ingestion failures, dataset job failures, permission issues.<\/li>\n<li><strong>Caveats<\/strong>: Exact metrics and log 
locations vary\u2014verify in docs and in your account.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">7. Architecture and How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">High-level architecture<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Ingestion<\/strong>: Telemetry arrives either:\n<ul class=\"wp-block-list\">\n<li>from <strong>AWS IoT Core<\/strong> via an IoT rule action to an IoT Analytics channel, or<\/li>\n<li>directly to IoT Analytics via ingestion APIs (e.g., batch put).<\/li>\n<\/ul>\n<\/li>\n<li><strong>Processing<\/strong>: A <strong>pipeline<\/strong> reads messages from the channel and applies transformations and enrichment steps.<\/li>\n<li><strong>Storage<\/strong>: The pipeline writes processed records into a <strong>data store<\/strong>.<\/li>\n<li><strong>Analytics output<\/strong>: A <strong>dataset<\/strong> runs (on demand or on a schedule) to produce curated output from the data store.<\/li>\n<li><strong>Consumption<\/strong>: Dataset content is retrieved via API or delivered to <strong>Amazon S3<\/strong>, then used by BI\/ML tools.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Data\/control flow<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Control plane<\/strong>: Create and manage channels, pipelines, data stores, datasets (via console\/CLI\/SDK). 
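<\/li>\n<\/ul>\n\n\n\n<p>Concretely, the processing stage is declared as an ordered list of pipeline activities. A minimal sketch of a <code>CreatePipeline<\/code> activity list (resource names, attribute names, and thresholds here are placeholder assumptions; verify the exact activity schema in the AWS IoT Analytics API reference):<\/p>\n\n\n\n<pre><code class=\"language-json\">[\n  {\"channel\": {\"name\": \"ingest\", \"channelName\": \"lab_channel\", \"next\": \"drop_invalid\"}},\n  {\"filter\": {\"name\": \"drop_invalid\", \"filter\": \"tempF &gt; -40 AND tempF &lt; 185\", \"next\": \"to_celsius\"}},\n  {\"math\": {\"name\": \"to_celsius\", \"attribute\": \"temperature_c\", \"math\": \"(tempF - 32) \/ 1.8\", \"next\": \"store\"}},\n  {\"datastore\": {\"name\": \"store\", \"datastoreName\": \"lab_datastore\"}}\n]\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>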
CloudTrail can log these actions.<\/li>\n<li><strong>Data plane<\/strong>: Device messages flow through channel \u2192 pipeline \u2192 data store; dataset generation reads from data store and writes dataset content.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integrations with related services<\/h3>\n\n\n\n<p>Common integrations (choose based on architecture):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AWS IoT Core<\/strong>: device connectivity (MQTT), rules engine for routing.<\/li>\n<li><strong>Amazon S3<\/strong>: dataset exports and long-term storage.<\/li>\n<li><strong>AWS Lambda<\/strong>: custom transforms\/enrichment.<\/li>\n<li><strong>Amazon QuickSight<\/strong>: dashboards (often via S3\/Athena patterns).<\/li>\n<li><strong>Amazon Athena + AWS Glue<\/strong>: query dataset outputs stored in S3.<\/li>\n<li><strong>Amazon SageMaker<\/strong>: model development using exported datasets (verify best practice patterns in docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/authentication model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>IAM policies<\/strong> govern management and data plane actions.<\/li>\n<li>Devices typically authenticate to <strong>IoT Core<\/strong> using X.509 certificates; IoT Core rules then deliver to IoT Analytics.<\/li>\n<li>If ingesting directly to IoT Analytics APIs, clients use <strong>AWS credentials<\/strong> (IAM users\/roles), commonly via STS-assumed roles for applications.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Networking model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AWS IoT Analytics endpoints are AWS service endpoints in a Region.<\/li>\n<li>Public internet access is possible by default for API calls; private connectivity options (VPC endpoints\/PrivateLink) vary by service and Region\u2014<strong>verify in Amazon VPC endpoint documentation<\/strong> for AWS IoT Analytics availability.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring\/logging\/governance<\/h3>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><strong>CloudTrail<\/strong>: audit who created\/modified\/deleted IoT Analytics resources, who ran dataset jobs, etc.<\/li>\n<li><strong>CloudWatch<\/strong>: service metrics (where available), alarms, dashboards; Lambda logs if Lambda is used.<\/li>\n<li><strong>Tagging<\/strong>: tag channels\/pipelines\/data stores\/datasets for cost allocation and governance (verify tag support for each resource type in docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Simple architecture diagram<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart LR\n  D[IoT Devices] --&gt;|MQTT| IOTC[AWS IoT Core]\n  IOTC --&gt;|Rule action| CH[IoT Analytics Channel]\n  CH --&gt; PL[IoT Analytics Pipeline]\n  PL --&gt; DS[IoT Analytics Data Store]\n  DS --&gt; DT[IoT Analytics Dataset]\n  DT --&gt; CON[Consumers: BI\/ML\/Apps]\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Production-style architecture diagram<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart TB\n  subgraph Edge[\"Edge \/ Field\"]\n    DEV[Devices &amp; Gateways]\n  end\n\n  subgraph AWS[\"AWS Region\"]\n    IOTC[\"AWS IoT Core&lt;br\/&gt;Auth (X.509), MQTT\"]\n    RULES[IoT Core Rules Engine]\n    CH[IoT Analytics Channel]\n    PL[\"IoT Analytics Pipeline&lt;br\/&gt;Filter\/Transform\/Enrich\"]\n    LAMBDA[\"AWS Lambda&lt;br\/&gt;Custom enrichment\"]\n    DS[\"IoT Analytics Data Store&lt;br\/&gt;Encrypted at rest\"]\n    DATASET[\"IoT Analytics Dataset&lt;br\/&gt;Scheduled SQL or container\"]\n    S3[\"Amazon S3&lt;br\/&gt;Dataset exports \/ data lake\"]\n    GLUE[AWS Glue Data Catalog]\n    ATHENA[Amazon Athena]\n    QS[Amazon QuickSight]\n    CW[\"Amazon CloudWatch&lt;br\/&gt;Metrics\/Alarms\"]\n    CT[\"AWS CloudTrail&lt;br\/&gt;Audit logs\"]\n    KMS[\"AWS KMS&lt;br\/&gt;Keys\/Policies\"]\n  end\n\n  DEV --&gt; IOTC\n  IOTC --&gt; RULES\n  RULES --&gt; CH\n  CH --&gt; PL\n  PL --&gt;|optional| LAMBDA\n  LAMBDA --&gt; PL\n  PL --&gt; DS\n  DS --&gt; DATASET\n  DATASET --&gt; S3\n  S3 --&gt; GLUE --&gt; ATHENA --&gt; QS\n\n  PL -.metrics\/logs.-&gt; CW\n  DATASET 
-.events.-&gt; CW\n  IOTC -.audit.-&gt; CT\n  CH -.audit.-&gt; CT\n  PL -.audit.-&gt; CT\n  DS -.audit.-&gt; CT\n  DATASET -.audit.-&gt; CT\n  DS -.encrypt.-&gt; KMS\n  S3 -.encrypt.-&gt; KMS\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8. Prerequisites<\/h2>\n\n\n\n<p>Before starting the lab and using AWS IoT Analytics, you need:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AWS account and billing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>An <strong>AWS account<\/strong> with <strong>billing enabled<\/strong>.<\/li>\n<li>Ability to create IAM roles\/policies and AWS IoT Analytics resources.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Permissions \/ IAM<\/h3>\n\n\n\n<p>For a lab, you typically need permissions to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Create\/delete IoT Analytics channels, pipelines, data stores, datasets.<\/li>\n<li>Put messages into a channel (data plane).<\/li>\n<li>Create dataset content and fetch dataset content.<\/li>\n<li>Read CloudWatch logs\/metrics (optional).<\/li>\n<\/ul>\n\n\n\n<p>AWS provides managed policies for IoT Analytics in many accounts (names can change). For least privilege, prefer custom IAM policies scoped to the resources you create. 
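<\/p>\n\n\n\n<p>As an illustrative starting point only (the Region, account ID, and action list below are assumptions to adapt; confirm exact action names in the IAM service authorization reference for AWS IoT Analytics):<\/p>\n\n\n\n<pre><code class=\"language-json\">{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iotanalytics:Create*\",\n        \"iotanalytics:Describe*\",\n        \"iotanalytics:List*\",\n        \"iotanalytics:Delete*\",\n        \"iotanalytics:BatchPutMessage\",\n        \"iotanalytics:GetDatasetContent\"\n      ],\n      \"Resource\": \"arn:aws:iotanalytics:us-east-1:123456789012:*\"\n    }\n  ]\n}\n<\/code><\/pre>\n\n\n\n<p>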
If you must use managed policies for learning, use them temporarily and remove afterward.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Tools<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AWS CLI v2<\/strong> installed and configured:<\/li>\n<li>https:\/\/docs.aws.amazon.com\/cli\/latest\/userguide\/getting-started-install.html<\/li>\n<li>Configure credentials: <code>aws configure<\/code> (or SSO-based config)<\/li>\n<li>Optional: <code>curl<\/code> to download dataset content from a pre-signed URL.<\/li>\n<li>Optional: <code>jq<\/code> for JSON parsing in terminal examples.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Region availability<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Choose a Region where AWS IoT Analytics is available.<br\/>\n  Verify Region support in official documentation (service endpoints\/Region table).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Quotas \/ limits<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AWS IoT Analytics has service quotas (resource counts, message sizes, throughput, dataset schedules, etc.).<br\/>\n  Check <strong>Service Quotas<\/strong> in the AWS Console and the IoT Analytics documentation for up-to-date limits.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Prerequisite services (optional)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AWS IoT Core<\/strong> is optional for this tutorial (we\u2019ll ingest directly via IoT Analytics APIs to keep the lab small).<\/li>\n<li><strong>Amazon S3 \/ Athena \/ QuickSight<\/strong> are optional if you extend the lab to BI.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">9. Pricing \/ Cost<\/h2>\n\n\n\n<p>AWS IoT Analytics pricing is usage-based. 
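<\/p>\n\n\n\n<p>Before looking at individual dimensions, it helps to size the workload with rough arithmetic. A worked example (all inputs are illustrative assumptions, not AWS rates):<\/p>\n\n\n\n<pre><code>devices                     = 10,000\nmessages per device         = 1 per minute\npayload size                = 0.5 KB\n\nmonthly ingestion           = 10,000 * 1 * 60 * 24 * 30 * 0.5 KB\n                            = 216,000,000 KB (about 216 GB per month)\n\ndaily ingestion             = about 7.2 GB\nstorage at 90-day retention = 7.2 GB * 90 = about 648 GB\n<\/code><\/pre>\n\n\n\n<p>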
Exact rates vary by Region and can change, so do not hardcode numbers in planning documents.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Official pricing page and calculator<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AWS IoT Analytics pricing: https:\/\/aws.amazon.com\/iot-analytics\/pricing\/<\/li>\n<li>AWS Pricing Calculator: https:\/\/calculator.aws\/#\/<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Typical pricing dimensions (verify exact dimensions on pricing page)<\/h3>\n\n\n\n<p>Common cost drivers usually include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data ingestion \/ message processing<\/strong>: charges based on volume of data ingested and\/or processed through the service.<\/li>\n<li><strong>Data store storage<\/strong>: charges for storing data over time (GB-month).<\/li>\n<li><strong>Dataset generation \/ query processing<\/strong>: charges related to dataset jobs and the amount of data scanned\/processed.<\/li>\n<li><strong>Data transfer<\/strong>: standard AWS data transfer rules apply:\n<ul class=\"wp-block-list\">\n<li>Intra-Region service-to-service transfer may be free or discounted depending on services and paths (verify).<\/li>\n<li>Data egress to the internet is generally charged.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Free tier<\/h3>\n\n\n\n<p>AWS IoT Analytics has historically had a free tier\/trial style offer in some contexts, but <strong>you must verify current free tier availability and terms<\/strong> on the pricing page or the AWS Free Tier page: https:\/\/aws.amazon.com\/free\/<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Hidden or indirect costs<\/h3>\n\n\n\n<p>Even if IoT Analytics costs are small in a lab, real deployments often add:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AWS IoT Core<\/strong> costs (connectivity, messaging, rules) if used for ingestion.<\/li>\n<li><strong>AWS Lambda<\/strong> costs (invocations\/duration) if used for pipeline enrichment.<\/li>\n<li><strong>Amazon S3<\/strong> costs for dataset exports (storage, PUT\/GET requests, lifecycle transitions).<\/li>\n<li><strong>Athena<\/strong> query costs (data scanned).<\/li>\n<li><strong>QuickSight<\/strong> user licensing and SPICE capacity (if used).<\/li>\n<li><strong>KMS<\/strong> costs (key usage and API calls) if using customer-managed keys heavily.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost optimization strategies<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Filter early<\/strong>: drop invalid\/unneeded messages in pipelines before storing them.<\/li>\n<li><strong>Normalize schemas<\/strong>: consistent schemas reduce reprocessing and downstream complexity.<\/li>\n<li><strong>Use retention and lifecycle policies<\/strong>:\n<ul class=\"wp-block-list\">\n<li>Retention in IoT Analytics data stores (if configurable).<\/li>\n<li>Lifecycle rules in S3 for exported datasets.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Control dataset schedules<\/strong>: run datasets only as frequently as needed.<\/li>\n<li><strong>Avoid scanning too much history<\/strong>: use time filters\/partition strategies in dataset queries where possible.<\/li>\n<li><strong>Sample in development<\/strong>: ingest a subset of devices during pipeline iteration.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Example low-cost starter estimate (conceptual)<\/h3>\n\n\n\n<p>A small lab setup typically incurs minimal charges if you:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ingest only a few KB\/MB of sample data.<\/li>\n<li>Keep a single data store with short-lived data.<\/li>\n<li>Run datasets on-demand once or twice.<\/li>\n<\/ul>\n\n\n\n<p>Because rates vary, calculate with the AWS Pricing Calculator using:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Expected daily ingestion volume (MB\/GB per day).<\/li>\n<li>Retention days.<\/li>\n<li>Dataset run frequency and estimated data scanned.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Example production cost considerations<\/h3>\n\n\n\n<p>For production fleets:\n&#8211; <strong>Ingestion volume<\/strong> is usually the largest driver (devices \u00d7 messages\/min \u00d7 payload size).\n&#8211; <strong>Retention<\/strong> multiplies storage costs.\n&#8211; <strong>Dataset scanning<\/strong> 
can become significant if you create many datasets that scan large time ranges.\nA common approach is to use IoT Analytics for curation and then export curated data into an S3 data lake with partitioning and lifecycle controls.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">10. Step-by-Step Hands-On Tutorial<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Objective<\/h3>\n\n\n\n<p>Build a minimal, real AWS IoT Analytics pipeline that:\n1. Creates a channel, pipeline, and data store.\n2. Ingests sample IoT telemetry into the channel using the AWS CLI.\n3. Creates a SQL dataset and generates dataset content.\n4. Downloads the dataset output to verify results.\n5. Cleans up all resources.<\/p>\n\n\n\n<p>This lab avoids AWS IoT Core to keep setup simple and low-cost, while still using core AWS IoT Analytics components.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lab Overview<\/h3>\n\n\n\n<p>You will create:\n&#8211; <strong>Channel<\/strong>: <code>lab_channel<\/code>\n&#8211; <strong>Data store<\/strong>: <code>lab_datastore<\/code>\n&#8211; <strong>Pipeline<\/strong>: <code>lab_pipeline<\/code> (channel \u2192 datastore)\n&#8211; <strong>Dataset<\/strong>: <code>lab_dataset<\/code> (SQL query selecting recent records)<\/p>\n\n\n\n<p>You will then send a few JSON messages (temperature readings) via the IoT Analytics <strong>BatchPutMessage<\/strong> API.<\/p>\n\n\n\n<p><strong>Estimated time<\/strong>: 30\u201360 minutes<br\/>\n<strong>Cost<\/strong>: Minimal for small test data, but not zero. Delete resources after.<\/p>\n\n\n\n<blockquote>\n<p>Names must be unique within your account\/Region for some resource types. 
If a name is taken, add a suffix like <code>-&lt;yourinitials&gt;-01<\/code>.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Choose a Region and configure environment variables<\/h3>\n\n\n\n<p>Pick a Region where AWS IoT Analytics is available.<\/p>\n\n\n\n<pre><code class=\"language-bash\">export AWS_REGION=\"us-east-1\"   # change if needed\nexport AWS_PAGER=\"\"\n\n# Resource names\nexport CHANNEL_NAME=\"lab_channel\"\nexport DATASTORE_NAME=\"lab_datastore\"\nexport PIPELINE_NAME=\"lab_pipeline\"\nexport DATASET_NAME=\"lab_dataset\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: Your shell is set up to reuse consistent names.<\/p>\n\n\n\n<p><strong>Verification<\/strong>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws sts get-caller-identity\naws configure get region\n<\/code><\/pre>\n\n\n\n<p>If your CLI Region differs from <code>AWS_REGION<\/code>, either set <code>AWS_DEFAULT_REGION<\/code> or pass <code>--region \"$AWS_REGION\"<\/code> on each command.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Create an IoT Analytics channel<\/h3>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics create-channel \\\n  --channel-name \"$CHANNEL_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: Channel is created.<\/p>\n\n\n\n<p><strong>Verification<\/strong>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics describe-channel \\\n  --channel-name \"$CHANNEL_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 3: Create an IoT Analytics data store<\/h3>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics create-datastore \\\n  --datastore-name \"$DATASTORE_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: Data store is 
created.<\/p>\n\n\n\n<p><strong>Verification<\/strong>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics describe-datastore \\\n  --datastore-name \"$DATASTORE_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Create a pipeline (channel \u2192 datastore)<\/h3>\n\n\n\n<p>A pipeline is a list of activities. The simplest useful pipeline reads from a channel and stores into a datastore.<\/p>\n\n\n\n<p>Create a file named <code>pipeline-activities.json<\/code>. Note that the heredoc below is quoted (<code>'EOF'<\/code>), so shell variables are not expanded and the channel and datastore names are hardcoded; if you chose different names in Step 1, edit them in this file:<\/p>\n\n\n\n<pre><code class=\"language-bash\">cat &gt; pipeline-activities.json &lt;&lt; 'EOF'\n[\n  {\n    \"channel\": {\n      \"name\": \"from_channel\",\n      \"channelName\": \"lab_channel\",\n      \"next\": \"to_datastore\"\n    }\n  },\n  {\n    \"datastore\": {\n      \"name\": \"to_datastore\",\n      \"datastoreName\": \"lab_datastore\"\n    }\n  }\n]\nEOF\n<\/code><\/pre>\n\n\n\n<p>Now create the pipeline:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics create-pipeline \\\n  --pipeline-name \"$PIPELINE_NAME\" \\\n  --pipeline-activities file:\/\/pipeline-activities.json \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: The pipeline exists and will begin processing new messages arriving in the channel.<\/p>\n\n\n\n<p><strong>Verification<\/strong>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics describe-pipeline \\\n  --pipeline-name \"$PIPELINE_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<blockquote>\n<p>Note: In real deployments, you\u2019ll add activities (filter\/select\/math\/lambda\/enrich) between channel and datastore.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: Ingest sample IoT telemetry messages into the channel<\/h3>\n\n\n\n<p>You will send a small batch of messages. 
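Shell quoting plus base64 can be fiddly, so as a cross-check here is the same payload preparation sketched in Python. The `encode_payload` helper is just illustrative, and the commented-out boto3 call (which requires AWS credentials) shows where raw bytes would go instead.

```python
import base64
import json
import time

def encode_payload(record: dict) -> str:
    """Serialize a telemetry record to compact JSON and base64-encode it --
    the form the AWS CLI expects for the binary payload field."""
    raw = json.dumps(record, separators=(",", ":")).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")  # no embedded newlines

message = {
    "deviceId": "device-001",
    "timestamp_ms": int(time.time() * 1000),
    "temperature_c": 21.5,
    "status": "ok",
}
entry = {"messageId": "m1", "payload": encode_payload(message)}
print(entry)

# With boto3 you pass raw bytes and the SDK handles the wire encoding
# (requires AWS credentials and the channel created in this lab):
# import boto3
# client = boto3.client("iotanalytics")
# client.batch_put_message(
#     channelName="lab_channel",
#     messages=[{"messageId": "m1",
#                "payload": json.dumps(message).encode("utf-8")}],
# )
```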
Each message includes a <code>messageId<\/code> and a JSON payload.<\/p>\n\n\n\n<p>Create a file named <code>messages.json<\/code>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">NOW_MS=$(python3 - &lt;&lt; 'PY'\nimport time\nprint(int(time.time()*1000))\nPY\n)\n\ncat &gt; messages.json &lt;&lt; EOF\n{\n  \"channelName\": \"$CHANNEL_NAME\",\n  \"messages\": [\n    {\n      \"messageId\": \"m1\",\n      \"payload\": \"$(printf '{\"deviceId\":\"device-001\",\"timestamp_ms\":%s,\"temperature_c\":21.5,\"status\":\"ok\"}' \"$NOW_MS\" | base64 | tr -d '\\n')\"\n    },\n    {\n      \"messageId\": \"m2\",\n      \"payload\": \"$(printf '{\"deviceId\":\"device-001\",\"timestamp_ms\":%s,\"temperature_c\":22.1,\"status\":\"ok\"}' \"$((NOW_MS+1000))\" | base64 | tr -d '\\n')\"\n    },\n    {\n      \"messageId\": \"m3\",\n      \"payload\": \"$(printf '{\"deviceId\":\"device-002\",\"timestamp_ms\":%s,\"temperature_c\":19.9,\"status\":\"ok\"}' \"$((NOW_MS+2000))\" | base64 | tr -d '\\n')\"\n    }\n  ]\n}\nEOF\n<\/code><\/pre>\n\n\n\n<p>Send the batch:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics batch-put-message \\\n  --cli-input-json file:\/\/messages.json \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: API returns a result; the failures array should be empty.<\/p>\n\n\n\n<p><strong>Verification<\/strong>:\n&#8211; If the command returns failures, review them (common issues are payload encoding and permissions).\n&#8211; Give the pipeline a short time to process messages (a minute or two in small labs).<\/p>\n\n\n\n<blockquote>\n<p>Payload requirement: <code>payload<\/code> is binary; AWS CLI expects base64-encoded bytes. That\u2019s why we base64-encode the JSON strings. The <code>tr -d '\\n'<\/code> removes the line wrapping that GNU <code>base64<\/code> inserts for long inputs, which would otherwise break the generated JSON.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: Create a dataset (SQL query) to read from the data store<\/h3>\n\n\n\n<p>Datasets define the query\/processing that produces dataset content. 
For a beginner lab, we\u2019ll create a simple SQL dataset.<\/p>\n\n\n\n<p>Create a file named <code>dataset.json<\/code>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">cat &gt; dataset.json &lt;&lt; 'EOF'\n{\n  \"datasetName\": \"lab_dataset\",\n  \"actions\": [\n    {\n      \"actionName\": \"select_all\",\n      \"queryAction\": {\n        \"sqlQuery\": \"SELECT * FROM lab_datastore\"\n      }\n    }\n  ]\n}\nEOF\n<\/code><\/pre>\n\n\n\n<p>Create the dataset:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics create-dataset \\\n  --cli-input-json file:\/\/dataset.json \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: Dataset definition is created.<\/p>\n\n\n\n<p><strong>Verification<\/strong>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics describe-dataset \\\n  --dataset-name \"$DATASET_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<blockquote>\n<p>If your datastore name differs, update the SQL query accordingly. SQL syntax and supported functions can vary\u2014verify supported SQL in the AWS IoT Analytics documentation.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 7: Generate dataset content (run the dataset)<\/h3>\n\n\n\n<p>Create dataset content (this is the job run that materializes results):<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics create-dataset-content \\\n  --dataset-name \"$DATASET_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: A dataset content job starts.<\/p>\n\n\n\n<p><strong>Verification<\/strong> (poll until succeeded):<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics list-dataset-contents \\\n  --dataset-name \"$DATASET_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p>Look for the latest entry and check its <code>status<\/code>. 
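To script the wait rather than re-running the command by hand, a generic poll-until-terminal helper works. This is a sketch: the status lookup is stubbed out, and the exact status strings (e.g. CREATING vs. RUNNING) should be confirmed against the API reference.

```python
import time

def wait_for_dataset(fetch_status, timeout_s=300, poll_s=15):
    """Poll a status-returning callable until it reports a terminal
    state or the timeout expires. In a real script, fetch_status would
    wrap `aws iotanalytics list-dataset-contents` (or the boto3 call)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(poll_s)
    raise TimeoutError("dataset content did not finish in time")

# Demo with a stub that succeeds on the third poll:
statuses = iter(["CREATING", "CREATING", "SUCCEEDED"])
print(wait_for_dataset(lambda: next(statuses), poll_s=0))  # SUCCEEDED
```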
If it\u2019s still <code>RUNNING<\/code>, wait 15\u201330 seconds and try again.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 8: Download the dataset content and inspect it<\/h3>\n\n\n\n<p>Get the dataset content details:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics get-dataset-content \\\n  --dataset-name \"$DATASET_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p>The response typically includes one or more entries with a <code>dataURI<\/code> (often a pre-signed URL) and a <code>fileName<\/code>.<\/p>\n\n\n\n<p>If you have <code>jq<\/code>, extract the URL:<\/p>\n\n\n\n<pre><code class=\"language-bash\">DATA_URI=$(aws iotanalytics get-dataset-content \\\n  --dataset-name \"$DATASET_NAME\" \\\n  --region \"$AWS_REGION\" | jq -r '.entries[0].dataURI')\n\necho \"$DATA_URI\"\n<\/code><\/pre>\n\n\n\n<p>Download it:<\/p>\n\n\n\n<pre><code class=\"language-bash\">curl -L \"$DATA_URI\" -o lab_dataset_output\nfile lab_dataset_output\n<\/code><\/pre>\n\n\n\n<p>Depending on output format and compression, you may need to unzip:<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Try listing as zip (if applicable)\npython3 - &lt;&lt; 'PY'\nimport zipfile\np=\"lab_dataset_output\"\nif zipfile.is_zipfile(p):\n    z=zipfile.ZipFile(p)\n    print(\"ZIP contains:\", z.namelist())\nelse:\n    print(\"Not a zip file (this is fine).\")\nPY\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: You can retrieve the dataset output file and see records corresponding to your ingested messages.<\/p>\n\n\n\n<blockquote>\n<p>Output format can vary. Some configurations return CSV; some may return JSON or a compressed file. 
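One way to take the guesswork out is a small content sniffer. This is an illustrative sketch, not an official list of IoT Analytics output formats.

```python
import json
import os
import tempfile
import zipfile

def sniff(path):
    """Best-effort identification of a downloaded dataset file."""
    with open(path, "rb") as f:
        head = f.read(2)
    if zipfile.is_zipfile(path):
        return "zip"
    if head == b"\x1f\x8b":  # gzip magic bytes
        return "gzip"
    try:
        with open(path, "r", encoding="utf-8") as f:
            json.load(f)
        return "json"
    except (ValueError, UnicodeDecodeError):
        return "text/csv (probably)"

# Demo on a throwaway CSV file standing in for lab_dataset_output:
demo = os.path.join(tempfile.mkdtemp(), "lab_dataset_output")
with open(demo, "w") as f:
    f.write("deviceId,timestamp_ms,temperature_c\ndevice-001,0,21.5\n")
print(sniff(demo))  # text/csv (probably)
```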
Verify dataset output formats in official docs.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Validation<\/h3>\n\n\n\n<p>Use this checklist:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>Channel exists<\/strong><\/p>\n<pre><code class=\"language-bash\">aws iotanalytics describe-channel --channel-name \"$CHANNEL_NAME\" --region \"$AWS_REGION\"\n<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Pipeline exists<\/strong><\/p>\n<pre><code class=\"language-bash\">aws iotanalytics describe-pipeline --pipeline-name \"$PIPELINE_NAME\" --region \"$AWS_REGION\"\n<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Data store exists<\/strong><\/p>\n<pre><code class=\"language-bash\">aws iotanalytics describe-datastore --datastore-name \"$DATASTORE_NAME\" --region \"$AWS_REGION\"\n<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Dataset run succeeded<\/strong><\/p>\n<pre><code class=\"language-bash\">aws iotanalytics list-dataset-contents --dataset-name \"$DATASET_NAME\" --region \"$AWS_REGION\"\n<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Dataset output is downloadable<\/strong><\/p>\n<ul>\n<li><code>get-dataset-content<\/code> returns a valid <code>dataURI<\/code>.<\/li>\n<li><code>curl<\/code> downloads a non-empty file.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Troubleshooting<\/h3>\n\n\n\n<p>Common issues and fixes:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>AccessDeniedException<\/strong>\n   &#8211; Cause: IAM user\/role lacks required IoT Analytics permissions.\n   &#8211; Fix: Attach the correct permissions for <code>iotanalytics:*<\/code> actions used in the lab (create\/describe\/delete resources, batch-put-message, create-dataset-content, get-dataset-content). 
Prefer least privilege in production.<\/p>\n<\/li>\n<li>\n<p><strong><code>batch-put-message<\/code> failures<\/strong>\n   &#8211; Cause: Payload not base64-encoded, message too large, invalid channel name, or throttling.\n   &#8211; Fix:<\/p>\n<ul>\n<li>Ensure <code>payload<\/code> is base64 of the raw JSON bytes.<\/li>\n<li>Keep messages small for the lab.<\/li>\n<li>Retry with fewer messages per batch if throttled.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Dataset content stuck in RUNNING\/FAILED<\/strong>\n   &#8211; Cause: SQL query issues, dataset permissions, service-side delays.\n   &#8211; Fix:<\/p>\n<ul>\n<li>Check dataset definition (<code>describe-dataset<\/code>).<\/li>\n<li>Simplify the SQL query.<\/li>\n<li>Wait and retry.<\/li>\n<li>Verify in CloudWatch (where available) and check service quotas.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Downloaded output file unreadable<\/strong>\n   &#8211; Cause: Output is compressed or in a different format.\n   &#8211; Fix:<\/p>\n<ul>\n<li>Inspect the file type (<code>file lab_dataset_output<\/code>).<\/li>\n<li>Attempt unzip or treat as CSV\/text depending on content.<\/li>\n<li>Verify dataset output format settings in docs.<\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Resource name collisions<\/strong>\n   &#8211; Cause: Resource name already exists.\n   &#8211; Fix: Add a unique suffix to names and update JSON files accordingly.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Cleanup<\/h3>\n\n\n\n<p>Delete resources to stop charges.<\/p>\n\n\n\n<p>Delete dataset contents (optional; not always necessary) and dataset:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics delete-dataset \\\n  --dataset-name \"$DATASET_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p>Delete pipeline:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics delete-pipeline \\\n  --pipeline-name \"$PIPELINE_NAME\" \\\n  --region 
\"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p>Delete data store:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics delete-datastore \\\n  --datastore-name \"$DATASTORE_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p>Delete channel:<\/p>\n\n\n\n<pre><code class=\"language-bash\">aws iotanalytics delete-channel \\\n  --channel-name \"$CHANNEL_NAME\" \\\n  --region \"$AWS_REGION\"\n<\/code><\/pre>\n\n\n\n<p>Remove local files:<\/p>\n\n\n\n<pre><code class=\"language-bash\">rm -f pipeline-activities.json dataset.json messages.json lab_dataset_output\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>: All lab resources are removed.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">11. Best Practices<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Architecture best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Separate raw vs curated data<\/strong>:<\/li>\n<li>Use IoT Analytics pipelines to curate data for analytics.<\/li>\n<li>Export curated datasets to S3 if you need a broader analytics ecosystem.<\/li>\n<li><strong>Design for schema evolution<\/strong>:<\/li>\n<li>Add new fields in a backward-compatible way.<\/li>\n<li>Version your message schemas and transformation logic.<\/li>\n<li><strong>Use multiple pipelines for different device classes<\/strong>:<\/li>\n<li>Separate high-frequency telemetry from low-frequency status events to optimize cost and query patterns.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">IAM\/security best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Least privilege IAM<\/strong>:<\/li>\n<li>Separate roles for ingestion, pipeline management, dataset execution, and export access.<\/li>\n<li><strong>Use dedicated roles for automation<\/strong>:<\/li>\n<li>CI\/CD role to deploy resources; runtime roles for apps to ingest.<\/li>\n<li><strong>Restrict dataset export locations<\/strong>:<\/li>\n<li>If exporting to S3, restrict to specific 
buckets\/prefixes.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Filter and compress the stream<\/strong>: drop fields you don\u2019t use.<\/li>\n<li><strong>Tune dataset schedules<\/strong>: avoid frequent full scans.<\/li>\n<li><strong>Use retention and lifecycle<\/strong>:<\/li>\n<li>Data store retention (if supported\/configured).<\/li>\n<li>S3 lifecycle for exported datasets.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Keep telemetry payloads small<\/strong>: avoid embedding large blobs in messages.<\/li>\n<li><strong>Avoid heavy Lambda transforms on every message<\/strong>: consider batch processing or dataset container jobs for expensive computations.<\/li>\n<li><strong>Partition downstream<\/strong>: if exporting to S3, partition by date\/device class to reduce Athena scan costs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Reliability best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Idempotency<\/strong>: design message IDs and ingestion to handle retries without duplicates (where possible).<\/li>\n<li><strong>Backpressure strategy<\/strong>: understand quotas and throttling behaviors; implement retry with exponential backoff in producers.<\/li>\n<li><strong>Multi-Region<\/strong>: if you need DR, plan for cross-Region replication at the data lake layer (often S3) rather than assuming native replication.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operations best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Tag everything<\/strong> (where supported): <code>env<\/code>, <code>team<\/code>, <code>app<\/code>, <code>cost-center<\/code>, <code>data-classification<\/code>.<\/li>\n<li><strong>Use CloudTrail<\/strong> for audit and alert on risky changes (e.g., dataset delivery destinations).<\/li>\n<li><strong>Create dashboards and 
alarms<\/strong>:<\/li>\n<li>Pipeline\/dataset failures (where metrics exist).<\/li>\n<li>Ingestion throttles.<\/li>\n<li>Lambda errors (if used).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Governance\/naming\/tagging best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Naming convention example:<\/li>\n<li><code>org-env-domain-iota-channel-telemetry-v1<\/code><\/li>\n<li><code>org-env-domain-iota-pipeline-clean-v1<\/code><\/li>\n<li><code>org-env-domain-iota-datastore-curated-v1<\/code><\/li>\n<li><code>org-env-domain-iota-dataset-daily-kpis-v1<\/code><\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">12. Security Considerations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Identity and access model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AWS IoT Analytics uses <strong>IAM<\/strong> for authorization.<\/li>\n<li>If ingesting via <strong>IoT Core<\/strong>, device identities are handled by IoT Core (certificates\/policies), and a rule action delivers data onward.<\/li>\n<li>Use separate IAM roles for:<\/li>\n<li>Admin\/provisioning (create\/update\/delete resources)<\/li>\n<li>Producers\/ingestors (batch put message \/ channel ingestion)<\/li>\n<li>Analysts (dataset execution and retrieval)<\/li>\n<li>Export jobs (writing to S3)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Encryption<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>In transit<\/strong>: AWS service endpoints use TLS.<\/li>\n<li><strong>At rest<\/strong>: data stores and dataset outputs typically support encryption; KMS integration is common across AWS storage services.<br\/>\n  Confirm the exact encryption behavior and KMS configuration options in the AWS IoT Analytics docs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network exposure<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>API endpoints are generally public AWS endpoints.<\/li>\n<li>For private connectivity, check for <strong>VPC endpoints\/PrivateLink<\/strong> support for AWS IoT Analytics in 
your Region (verify in official VPC endpoint documentation).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secrets handling<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Do not embed AWS access keys in device firmware or client apps.<\/li>\n<li>Use:<\/li>\n<li><strong>IoT Core device certificates<\/strong> for devices, and\/or<\/li>\n<li><strong>Temporary credentials<\/strong> via STS for apps\/services running in AWS (EC2\/ECS\/EKS\/Lambda) using IAM roles.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Audit\/logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enable and monitor <strong>CloudTrail<\/strong> for:<\/li>\n<li>Resource changes (pipelines, datasets, delivery destinations)<\/li>\n<li>Dataset executions<\/li>\n<li>Centralize logs in a dedicated security account if using AWS Organizations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Classify telemetry data (PII, location, health data).<\/li>\n<li>Apply appropriate retention and access controls.<\/li>\n<li>For regulated workloads, validate that your Region and service support your compliance requirements (HIPAA, GDPR, etc.)\u2014this is architecture- and contract-dependent.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Common security mistakes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Overly broad IAM policies (<code>iotanalytics:*<\/code> on <code>*<\/code>) in production.<\/li>\n<li>Exporting datasets to broadly accessible S3 buckets.<\/li>\n<li>Missing encryption and bucket policies on S3 exports.<\/li>\n<li>No alerting on pipeline\/dataset failures and no audit review on changes.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secure deployment recommendations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use <strong>least privilege<\/strong> and <strong>resource-level permissions<\/strong> where supported.<\/li>\n<li>Encrypt S3 exports with <strong>SSE-KMS<\/strong> and restrict KMS key 
usage.<\/li>\n<li>Use <strong>separate AWS accounts<\/strong> for dev\/test\/prod.<\/li>\n<li>Implement <strong>change management<\/strong> for pipeline and dataset definitions (IaC + code review).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">13. Limitations and Gotchas<\/h2>\n\n\n\n<p>Always confirm the latest limits and behaviors in official docs, but plan for these common realities:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Service quotas exist<\/strong>: maximum number of channels\/pipelines\/data stores\/datasets per account\/Region, ingestion throughput, dataset scheduling frequency, message sizes.<\/li>\n<li><strong>SQL feature set is not identical to Athena<\/strong>: dataset SQL may support a subset\/different dialect\u2014verify supported syntax and functions.<\/li>\n<li><strong>Dataset jobs can be expensive<\/strong>: frequent schedules + large scans can increase cost quickly.<\/li>\n<li><strong>Schema drift<\/strong>: IoT payloads often change; without strict validation, downstream datasets can break or become inconsistent.<\/li>\n<li><strong>Duplicates and out-of-order data<\/strong>: IoT networks are unreliable; design pipelines\/datasets for imperfect data.<\/li>\n<li><strong>Debugging data issues<\/strong>: without a raw \u201clanding zone\u201d (e.g., S3 raw archive), it can be harder to reprocess from original messages. Consider storing raw data elsewhere if reprocessing\/audit is required.<\/li>\n<li><strong>Multi-tenant isolation<\/strong>: per-tenant separation can be done, but it\u2019s an architecture decision\u2014avoid mixing tenant data unless you have robust partitioning and access controls.<\/li>\n<li><strong>Regional constraints<\/strong>: not all Regions have identical feature support; verify endpoints and supported integrations.<\/li>\n<li><strong>Export formats and delivery behaviors<\/strong> can surprise you (compression, file naming, output structure). 
Validate outputs early.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">14. Comparison with Alternatives<\/h2>\n\n\n\n<p>AWS IoT Analytics is one option in a broader IoT and analytics landscape.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Key alternatives to evaluate<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Within AWS<\/strong><\/li>\n<li>AWS IoT Core (ingestion\/routing, not analytics storage)<\/li>\n<li>AWS IoT SiteWise (industrial asset modeling and time-series data for industrial equipment)<\/li>\n<li>Amazon Timestream (purpose-built time-series database)<\/li>\n<li>Amazon Kinesis (streaming ingestion + processing)<\/li>\n<li>AWS Glue + Amazon S3 + Amazon Athena (data lake ETL\/query)<\/li>\n<li>Amazon MSK (Kafka) + Spark\/Flink (self-managed or managed streaming)<\/li>\n<li><strong>Other clouds<\/strong><\/li>\n<li>Azure IoT Hub + Azure Stream Analytics + ADX (Azure Data Explorer)<\/li>\n<li>(GCP note) Google Cloud IoT Core was retired; equivalent solutions typically use Pub\/Sub + Dataflow + BigQuery.<\/li>\n<li><strong>Open-source\/self-managed<\/strong><\/li>\n<li>InfluxDB \/ TimescaleDB for time-series<\/li>\n<li>Kafka + Flink\/Spark + Iceberg for pipeline and lakehouse<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Comparison table<\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Option<\/th>\n<th>Best For<\/th>\n<th>Strengths<\/th>\n<th>Weaknesses<\/th>\n<th>When to Choose<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>AWS IoT Analytics<\/strong><\/td>\n<td>Managed IoT data prep and dataset generation<\/td>\n<td>Purpose-built IoT pipeline primitives; curated datasets; integrates with IoT Core and S3<\/td>\n<td>Not a general lakehouse; SQL dialect\/behavior differs from Athena; quotas\/cost for dataset scans<\/td>\n<td>You want a managed IoT-to-dataset workflow with minimal custom ETL<\/td>\n<\/tr>\n<tr>\n<td><strong>AWS IoT Core<\/strong><\/td>\n<td>Device connectivity and message 
routing<\/td>\n<td>MQTT, device auth, rules engine, integrations<\/td>\n<td>Not designed for analytics storage\/query<\/td>\n<td>You need secure ingestion and routing; pair with analytics\/storage services<\/td>\n<\/tr>\n<tr>\n<td><strong>AWS IoT SiteWise<\/strong><\/td>\n<td>Industrial\/OT asset modeling<\/td>\n<td>Asset models, metrics, industrial connectors<\/td>\n<td>Focused on industrial context; not a general IoT analytics ETL<\/td>\n<td>You need asset-centric modeling and industrial telemetry management<\/td>\n<\/tr>\n<tr>\n<td><strong>Amazon Timestream<\/strong><\/td>\n<td>Time-series storage\/query<\/td>\n<td>Fast time-range queries; time-series functions<\/td>\n<td>Not an IoT ETL pipeline; ingestion and transforms handled elsewhere<\/td>\n<td>You need time-series DB semantics and query performance<\/td>\n<\/tr>\n<tr>\n<td><strong>S3 + Glue + Athena<\/strong><\/td>\n<td>Data lake analytics<\/td>\n<td>Open formats, broad ecosystem, cost controls via lifecycle<\/td>\n<td>More DIY for IoT cleansing; needs partitioning and ETL design<\/td>\n<td>You want maximum flexibility and standard lake patterns<\/td>\n<\/tr>\n<tr>\n<td><strong>Kinesis + Lambda\/Flink<\/strong><\/td>\n<td>Real-time stream processing<\/td>\n<td>Low-latency processing; flexible real-time actions<\/td>\n<td>More components to operate; costs can rise with throughput<\/td>\n<td>You need real-time decisions\/alerts, not just datasets<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure IoT Hub + ADX<\/strong><\/td>\n<td>Azure-centric IoT analytics<\/td>\n<td>Strong integration in Azure; powerful analytics<\/td>\n<td>Different ecosystem; migration effort<\/td>\n<td>Your platform is standardized on Azure<\/td>\n<\/tr>\n<tr>\n<td><strong>InfluxDB\/TimescaleDB (self-managed)<\/strong><\/td>\n<td>Custom time-series needs<\/td>\n<td>Full control; specialized querying<\/td>\n<td>Ops burden, scaling, HA, security<\/td>\n<td>You need full control and accept operational 
overhead<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">15. Real-World Example<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Enterprise example: global logistics cold-chain analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: A logistics enterprise monitors millions of temperature readings per day across shipments. They must prove compliance (time within temperature range) and investigate excursions quickly.<\/li>\n<li><strong>Proposed architecture<\/strong>:<\/li>\n<li>Devices \u2192 AWS IoT Core (MQTT)<\/li>\n<li>IoT Core rules \u2192 AWS IoT Analytics channel<\/li>\n<li>IoT Analytics pipeline:<ul>\n<li>Filter invalid readings<\/li>\n<li>Normalize units and timestamps<\/li>\n<li>Enrich with shipment metadata (via Lambda or registry mapping\u2014verify best approach)<\/li>\n<\/ul>\n<\/li>\n<li>IoT Analytics data store for curated telemetry<\/li>\n<li>Scheduled datasets:<ul>\n<li>Daily per-shipment compliance summary<\/li>\n<li>Exception lists (excursions &gt; threshold)<\/li>\n<\/ul>\n<\/li>\n<li>Export datasets to S3; query with Athena; dashboards in QuickSight<\/li>\n<li><strong>Why AWS IoT Analytics was chosen<\/strong>:<\/li>\n<li>Managed IoT data preparation patterns reduce custom ETL.<\/li>\n<li>Repeatable dataset generation supports audits and reporting.<\/li>\n<li><strong>Expected outcomes<\/strong>:<\/li>\n<li>Faster compliance reporting and fewer manual data-cleaning steps.<\/li>\n<li>Consistent KPI definitions across regions and teams.<\/li>\n<li>Improved operational visibility into sensor health and shipment risks.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Startup\/small-team example: smart building MVP analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: A startup builds an MVP for smart building monitoring (CO\u2082, temperature, humidity). 
They need weekly usage and anomaly reports without hiring a full data engineering team.<\/li>\n<li><strong>Proposed architecture<\/strong>:<\/li>\n<li>Devices \u2192 (either IoT Core or direct ingestion API, depending on device capability)<\/li>\n<li>AWS IoT Analytics pipeline to standardize schema and drop malformed messages<\/li>\n<li>Data store retains 30\u201390 days of curated telemetry<\/li>\n<li>Weekly datasets exported to S3 and visualized in QuickSight<\/li>\n<li><strong>Why AWS IoT Analytics was chosen<\/strong>:<\/li>\n<li>Faster to implement than building a full pipeline with Kinesis + custom ETL.<\/li>\n<li>SQL datasets allow quick iteration of reporting logic.<\/li>\n<li><strong>Expected outcomes<\/strong>:<\/li>\n<li>MVP dashboards in days, not weeks.<\/li>\n<li>Clear understanding of sensor reliability and customer usage patterns.<\/li>\n<li>Straightforward growth path by exporting to S3 for more advanced analytics later.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">16. FAQ<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>Is AWS IoT Analytics the same as AWS IoT Core?<\/strong><br\/>\n   No. AWS IoT Core is primarily for device connectivity, authentication, and message routing. AWS IoT Analytics focuses on processing, storing, and producing analytics datasets from IoT data.<\/p>\n<\/li>\n<li>\n<p><strong>Do I need AWS IoT Core to use AWS IoT Analytics?<\/strong><br\/>\n   Not always. You can ingest data directly to AWS IoT Analytics APIs using AWS credentials. 
IoT Core is common for device connectivity, but not mandatory for every architecture.<\/p>\n<\/li>\n<li>\n<p><strong>What are the main building blocks of AWS IoT Analytics?<\/strong><br\/>\n   Channels, pipelines, data stores, and datasets.<\/p>\n<\/li>\n<li>\n<p><strong>What is a channel in AWS IoT Analytics?<\/strong><br\/>\n   A channel is an ingestion entry point for messages before processing.<\/p>\n<\/li>\n<li>\n<p><strong>What does a pipeline do?<\/strong><br\/>\n   A pipeline applies processing steps (activities) to messages and typically writes results into a data store.<\/p>\n<\/li>\n<li>\n<p><strong>What is a data store used for?<\/strong><br\/>\n   It stores processed IoT messages for querying and dataset generation.<\/p>\n<\/li>\n<li>\n<p><strong>What is a dataset in AWS IoT Analytics?<\/strong><br\/>\n   A dataset defines a repeatable analytics job (often SQL-based) that produces dataset content you can download or export.<\/p>\n<\/li>\n<li>\n<p><strong>Can AWS IoT Analytics run transformations like unit conversions?<\/strong><br\/>\n   Yes\u2014commonly via pipeline activities or Lambda integration.<\/p>\n<\/li>\n<li>\n<p><strong>Can I export AWS IoT Analytics results to Amazon S3?<\/strong><br\/>\n   Commonly yes via dataset delivery mechanisms, but verify current delivery options and configuration in official docs.<\/p>\n<\/li>\n<li>\n<p><strong>How do I visualize IoT Analytics data in QuickSight?<\/strong><br\/>\n   A common pattern is exporting dataset outputs to S3, cataloging with Glue, querying with Athena, and then connecting QuickSight to Athena.<\/p>\n<\/li>\n<li>\n<p><strong>How do I handle schema changes in device payloads?<\/strong><br\/>\n   Version schemas, validate required fields in pipelines, and maintain backward compatibility. Consider routing different schema versions to different pipelines\/data stores.<\/p>\n<\/li>\n<li>\n<p><strong>Is AWS IoT Analytics a time-series database?<\/strong><br\/>\n   Not exactly. 
It supports IoT analytics workflows, but if you need specialized time-series query performance and functions, evaluate purpose-built time-series databases.<\/p>\n<\/li>\n<li>\n<p><strong>How do I secure ingestion without long-lived access keys on devices?<\/strong><br\/>\n   Use AWS IoT Core with device certificates for device authentication. For applications running in AWS, use IAM roles and temporary credentials.<\/p>\n<\/li>\n<li>\n<p><strong>What are the biggest cost risks with AWS IoT Analytics?<\/strong><br\/>\n   High ingestion volume, long retention, and frequent dataset runs that scan large time ranges. Also factor in connected services like IoT Core, S3, Athena, QuickSight, and Lambda.<\/p>\n<\/li>\n<li>\n<p><strong>How do I troubleshoot failed dataset runs?<\/strong><br\/>\n   Validate SQL syntax and the dataset definition, check service quotas, confirm IAM permissions, and review CloudWatch\/CloudTrail signals where available.<\/p>\n<\/li>\n<li>\n<p><strong>Can I do real-time alerting with AWS IoT Analytics?<\/strong><br\/>\n   AWS IoT Analytics is typically used for analytics and dataset generation rather than sub-second alerting. For real-time alerting, consider IoT Core rules, Lambda, or streaming analytics services.<\/p>\n<\/li>\n<li>\n<p><strong>Should I store raw telemetry in AWS IoT Analytics?<\/strong><br\/>\n   Often you store curated data in IoT Analytics and keep a raw archive in S3 (or another store) for reprocessing and audits. Your compliance and reprocessing needs drive this decision.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">17. 
Top Online Resources to Learn AWS IoT Analytics<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Resource Type<\/th>\n<th>Name<\/th>\n<th>Why It Is Useful<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Official Documentation<\/td>\n<td>AWS IoT Analytics Developer Guide<\/td>\n<td>Authoritative details on channels, pipelines, data stores, datasets, APIs, limits<\/td>\n<\/tr>\n<tr>\n<td>Official Pricing<\/td>\n<td>AWS IoT Analytics Pricing<\/td>\n<td>Up-to-date pricing dimensions and Region-specific rates<\/td>\n<\/tr>\n<tr>\n<td>Pricing Tool<\/td>\n<td>AWS Pricing Calculator<\/td>\n<td>Model ingestion, storage, and dataset run costs for your expected usage<\/td>\n<\/tr>\n<tr>\n<td>Official CLI Reference<\/td>\n<td>AWS CLI Command Reference (<code>iotanalytics<\/code>)<\/td>\n<td>Copy-paste CLI commands and parameter definitions<\/td>\n<\/tr>\n<tr>\n<td>Official Architecture<\/td>\n<td>AWS Architecture Center<\/td>\n<td>Patterns for IoT ingestion, analytics, data lakes, and security best practices<\/td>\n<\/tr>\n<tr>\n<td>Official IoT Docs<\/td>\n<td>AWS IoT Core Documentation<\/td>\n<td>If integrating via rules engine and MQTT ingestion<\/td>\n<\/tr>\n<tr>\n<td>Security\/Audit<\/td>\n<td>AWS CloudTrail Documentation<\/td>\n<td>How to audit IoT Analytics management actions<\/td>\n<\/tr>\n<tr>\n<td>Monitoring<\/td>\n<td>Amazon CloudWatch Documentation<\/td>\n<td>Metrics, alarms, dashboards for operations<\/td>\n<\/tr>\n<tr>\n<td>Official Videos<\/td>\n<td>AWS YouTube Channel<\/td>\n<td>Service overviews and architecture talks (search \u201cAWS IoT Analytics\u201d)<\/td>\n<\/tr>\n<tr>\n<td>Samples<\/td>\n<td>AWS Samples on GitHub (search)<\/td>\n<td>Reference implementations and patterns; validate repository ownership and recency<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<p>Helpful starting URLs:\n&#8211; AWS IoT Analytics docs: https:\/\/docs.aws.amazon.com\/iotanalytics\/\n&#8211; AWS IoT Analytics pricing: 
https:\/\/aws.amazon.com\/iot-analytics\/pricing\/\n&#8211; AWS Pricing Calculator: https:\/\/calculator.aws\/#\/\n&#8211; AWS Architecture Center: https:\/\/aws.amazon.com\/architecture\/\n&#8211; AWS IoT Core docs: https:\/\/docs.aws.amazon.com\/iot\/<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">18. Training and Certification Providers<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Institute<\/th>\n<th>Suitable Audience<\/th>\n<th>Likely Learning Focus<\/th>\n<th>Mode<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>DevOpsSchool.com<\/td>\n<td>Beginners to working engineers<\/td>\n<td>AWS, DevOps, cloud operations, hands-on labs<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.devopsschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>ScmGalaxy.com<\/td>\n<td>Students and early-career professionals<\/td>\n<td>DevOps fundamentals, tooling, SDLC, automation<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.scmgalaxy.com\/<\/td>\n<\/tr>\n<tr>\n<td>CloudOpsNow.in<\/td>\n<td>Cloud ops and platform teams<\/td>\n<td>Cloud operations, deployment, monitoring, reliability<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.cloudopsnow.in\/<\/td>\n<\/tr>\n<tr>\n<td>SreSchool.com<\/td>\n<td>SREs, DevOps, operations engineers<\/td>\n<td>Reliability engineering, observability, incident response<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.sreschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>AiOpsSchool.com<\/td>\n<td>Ops and platform teams adopting AIOps<\/td>\n<td>AIOps concepts, automation, monitoring\/analytics<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.aiopsschool.com\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">19. 
Top Trainers<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Platform\/Site<\/th>\n<th>Likely Specialization<\/th>\n<th>Suitable Audience<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>RajeshKumar.xyz<\/td>\n<td>DevOps\/cloud training and guidance (verify offerings)<\/td>\n<td>Beginners to intermediate engineers<\/td>\n<td>https:\/\/rajeshkumar.xyz\/<\/td>\n<\/tr>\n<tr>\n<td>devopstrainer.in<\/td>\n<td>DevOps training programs (verify course catalog)<\/td>\n<td>Engineers and students<\/td>\n<td>https:\/\/www.devopstrainer.in\/<\/td>\n<\/tr>\n<tr>\n<td>devopsfreelancer.com<\/td>\n<td>Freelance DevOps help\/training (verify services)<\/td>\n<td>Teams needing short-term coaching<\/td>\n<td>https:\/\/www.devopsfreelancer.com\/<\/td>\n<\/tr>\n<tr>\n<td>devopssupport.in<\/td>\n<td>DevOps support and training (verify scope)<\/td>\n<td>Ops\/DevOps teams<\/td>\n<td>https:\/\/www.devopssupport.in\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">20. 
Top Consulting Companies<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Company<\/th>\n<th>Likely Service Area<\/th>\n<th>Where They May Help<\/th>\n<th>Consulting Use Case Examples<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>cotocus.com<\/td>\n<td>Cloud\/DevOps consulting (verify exact portfolio)<\/td>\n<td>Architecture reviews, implementation support<\/td>\n<td>IoT pipeline design review; cost optimization assessment; security hardening<\/td>\n<td>https:\/\/cotocus.com\/<\/td>\n<\/tr>\n<tr>\n<td>DevOpsSchool.com<\/td>\n<td>DevOps and cloud consulting\/training<\/td>\n<td>Delivery acceleration, platform engineering<\/td>\n<td>Implementing AWS IoT ingestion + analytics patterns; CI\/CD for IoT infrastructure<\/td>\n<td>https:\/\/www.devopsschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>DEVOPSCONSULTING.IN<\/td>\n<td>DevOps consulting services (verify offerings)<\/td>\n<td>DevOps transformation and operations<\/td>\n<td>Observability setup; IAM hardening; deployment automation for AWS services<\/td>\n<td>https:\/\/devopsconsulting.in\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">21. 
Career and Learning Roadmap<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn before AWS IoT Analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AWS fundamentals<\/strong>: IAM, Regions, networking basics, CloudWatch, CloudTrail<\/li>\n<li><strong>IoT fundamentals<\/strong>: telemetry patterns, MQTT basics, device identity concepts<\/li>\n<li><strong>Data basics<\/strong>: JSON schemas, timestamps, partitioning, data retention<\/li>\n<li><strong>Security basics<\/strong>: least privilege, encryption, key management basics (KMS)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn after AWS IoT Analytics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AWS IoT Core deep dive<\/strong>: fleet provisioning, policies, rules engine patterns<\/li>\n<li><strong>Data lake patterns<\/strong>: S3 + Glue + Athena; partition strategies; lifecycle policies<\/li>\n<li><strong>BI and dashboards<\/strong>: QuickSight + Athena; KPIs and semantic layers<\/li>\n<li><strong>Streaming and real-time analytics<\/strong>: Kinesis, Lambda, Apache Flink<\/li>\n<li><strong>Time-series databases<\/strong>: Amazon Timestream or alternatives<\/li>\n<li><strong>ML for IoT<\/strong>: feature engineering, SageMaker pipelines, model monitoring<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Job roles that use it<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>IoT Solutions Architect<\/li>\n<li>Cloud Solutions Engineer<\/li>\n<li>Data Engineer (IoT)<\/li>\n<li>DevOps \/ Platform Engineer supporting IoT platforms<\/li>\n<li>SRE supporting data pipelines<\/li>\n<li>Security Engineer reviewing IoT data platforms<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Certification path (AWS)<\/h3>\n\n\n\n<p>AWS certifications are role-based rather than service-specific. 
Common relevant options:\n&#8211; AWS Certified Cloud Practitioner (foundational)\n&#8211; AWS Certified Solutions Architect \u2013 Associate\/Professional\n&#8211; AWS Certified Developer \u2013 Associate\n&#8211; AWS Certified Data Engineer \u2013 Associate (if applicable to your path; verify current AWS cert lineup)\n&#8211; AWS Certified Security \u2013 Specialty<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Project ideas for practice<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build an end-to-end device simulator \u2192 IoT Core \u2192 IoT Analytics \u2192 S3 \u2192 Athena dashboard.<\/li>\n<li>Implement schema validation and \u201cquarantine\u201d routing for invalid messages.<\/li>\n<li>Create daily and hourly datasets and compare cost\/performance tradeoffs.<\/li>\n<li>Add Lambda enrichment that tags telemetry with site metadata and measure latency\/cost impact.<\/li>\n<li>Export curated datasets to S3 and build an Athena table + QuickSight dashboard.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">22. 
Glossary<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Internet of Things (IoT)<\/strong>: Network of physical devices that collect and exchange data.<\/li>\n<li><strong>Telemetry<\/strong>: Time-stamped measurements or events sent from devices (e.g., temperature, battery).<\/li>\n<li><strong>Channel (AWS IoT Analytics)<\/strong>: Ingestion entry point for messages.<\/li>\n<li><strong>Pipeline (AWS IoT Analytics)<\/strong>: A sequence of processing steps applied to ingested messages.<\/li>\n<li><strong>Activity (pipeline activity)<\/strong>: A single processing step within a pipeline (filter, transform, enrich, etc.).<\/li>\n<li><strong>Data store (AWS IoT Analytics)<\/strong>: Durable storage for processed IoT messages.<\/li>\n<li><strong>Dataset (AWS IoT Analytics)<\/strong>: A definition of how to generate an analytics output from stored data, often via SQL.<\/li>\n<li><strong>Dataset content<\/strong>: The materialized output generated when a dataset runs.<\/li>\n<li><strong>AWS IoT Core rule<\/strong>: A routing rule that can filter and send MQTT messages to AWS services.<\/li>\n<li><strong>IAM<\/strong>: AWS Identity and Access Management; controls permissions.<\/li>\n<li><strong>KMS<\/strong>: AWS Key Management Service; manages encryption keys.<\/li>\n<li><strong>CloudTrail<\/strong>: Service that logs AWS API calls for auditing.<\/li>\n<li><strong>CloudWatch<\/strong>: Monitoring service for metrics, logs, and alarms.<\/li>\n<li><strong>Least privilege<\/strong>: Security principle of granting only the permissions needed.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">23. 
Summary<\/h2>\n\n\n\n<p>AWS IoT Analytics is an AWS Internet of Things (IoT) service for ingesting device telemetry, processing and enriching it through pipelines, storing curated data in data stores, and producing repeatable analytics outputs through datasets.<\/p>\n\n\n\n<p>It matters because IoT data is messy and high-volume; AWS IoT Analytics provides managed building blocks to standardize telemetry and generate queryable datasets without assembling a full custom ETL platform from scratch.<\/p>\n\n\n\n<p>Architecturally, it often fits behind AWS IoT Core (for connectivity) and in front of S3\/Athena\/QuickSight (for broad analytics). Cost is driven by ingestion volume, storage retention, and how frequently\/expensively datasets scan data. Security depends on strong IAM boundaries, encryption choices (often KMS-backed), and controlled exports to S3.<\/p>\n\n\n\n<p>Use AWS IoT Analytics when you want a managed IoT data preparation and dataset workflow. Consider alternatives when you need real-time stream analytics, a dedicated time-series database, or a standardized lakehouse approach.<\/p>\n\n\n\n<p>Next learning step: integrate AWS IoT Core rules with AWS IoT Analytics, export curated datasets to Amazon S3, and query them with Athena to build a complete IoT analytics pipeline.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Internet of Things 
(IoT)<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20,31],"tags":[],"class_list":["post-216","post","type-post","status-publish","format-standard","hentry","category-aws","category-internet-of-things-iot"],"_links":{"self":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/216","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/comments?post=216"}],"version-history":[{"count":0,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/216\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/media?parent=216"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/categories?post=216"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/tags?post=216"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}