{"id":357,"date":"2026-04-13T19:03:53","date_gmt":"2026-04-13T19:03:53","guid":{"rendered":"https:\/\/www.devopsschool.com\/tutorials\/azure-ai-metrics-advisor-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-ai-machine-learning\/"},"modified":"2026-04-13T19:03:53","modified_gmt":"2026-04-13T19:03:53","slug":"azure-ai-metrics-advisor-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-ai-machine-learning","status":"publish","type":"post","link":"https:\/\/www.devopsschool.com\/tutorials\/azure-ai-metrics-advisor-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-ai-machine-learning\/","title":{"rendered":"Azure AI Metrics Advisor Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for AI + Machine Learning"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Category<\/h2>\n\n\n\n<p>AI + Machine Learning<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction<\/h2>\n\n\n\n<p>Azure AI Metrics Advisor is an Azure AI service designed to monitor time-series metrics, detect anomalies automatically, and help you diagnose why those anomalies happened. It\u2019s commonly used for business KPIs (revenue, conversions), operational metrics (CPU, latency), and product metrics (active users, signups) where you want early warning and rapid root-cause clues without building and maintaining custom anomaly detection pipelines.<\/p>\n\n\n\n<p>In simple terms: you connect a metric source (like Azure SQL Database, Azure Data Explorer, or files in Azure Storage), tell Azure AI Metrics Advisor how often data arrives, and it continuously looks for unusual behavior. 
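<\/p>\n\n\n\n<p>As a toy illustration only\u2014hypothetical code, <em>not<\/em> the service\u2019s actual algorithm\u2014the core idea of \u201cseasonality-aware expected value vs. observed value\u201d can be sketched like this:<\/p>\n\n\n\n

```python
# Toy sketch only (hypothetical; NOT the service's real model): compare the
# latest value of a daily metric against history for the same weekday, so
# normal weekday-vs-weekend swings are not flagged as anomalies.
from statistics import mean, pstdev

def looks_anomalous(history, value, weekday, sensitivity=3.0):
    # history: list of (weekday, value) pairs from previous periods
    same_day = [v for d, v in history if d == weekday]
    expected = mean(same_day)
    band = sensitivity * pstdev(same_day)
    return abs(value - expected) > band

# Mondays hover around 100; Sundays around 52. A Monday reading of 55 is
# unusual even though the same number would be normal on a Sunday.
hist = [(0, 98), (0, 103), (0, 101), (6, 52), (6, 55), (6, 50)]
print(looks_anomalous(hist, 55, weekday=0))  # Monday  -> True
print(looks_anomalous(hist, 55, weekday=6))  # Sunday  -> False
```

\n\n\n\n<p>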
When something looks wrong (a sudden drop, spike, change in pattern, or deviation from expected seasonality), it creates an incident and can notify your team.<\/p>\n\n\n\n<p>Technically, Azure AI Metrics Advisor pulls metric data on a schedule from supported data sources, models historical patterns (including seasonality and trends), detects anomalies at the time-series level (including multi-dimensional slicing), groups anomalies into incidents, and supports root cause analysis to identify contributing dimensions. It exposes management and investigation features via the Metrics Advisor web portal and APIs\/SDKs.<\/p>\n\n\n\n<p>The main problem it solves is the \u201csignal-to-noise + time-to-detection + time-to-diagnosis\u201d challenge for metrics monitoring: traditional static thresholds don\u2019t adapt to seasonality, and custom ML solutions take time and expertise to build. Azure AI Metrics Advisor offers a managed approach to anomaly detection and triage for metric time series.<\/p>\n\n\n\n<blockquote>\n<p>Important naming\/status note (verify in official docs): Microsoft has rebranded many Cognitive Services under the \u201cAzure AI services\u201d umbrella. The service is widely known as \u201cMetrics Advisor\u201d and appears in documentation as \u201cAzure AI Metrics Advisor\u201d in many places. Also verify the current lifecycle status (active vs. retirement\/legacy) in the latest Azure product documentation and Azure Updates before starting a new long-term implementation.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">2. 
What is Azure AI Metrics Advisor?<\/h2>\n\n\n\n<p>Azure AI Metrics Advisor is a managed anomaly detection and diagnostics service for time-series metrics in Azure\u2019s <strong>AI + Machine Learning<\/strong> portfolio (Azure AI services).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Official purpose<\/h3>\n\n\n\n<p>Its purpose is to:\n&#8211; Ingest (typically via scheduled pull) time-series metric data from supported data sources\n&#8211; Detect anomalies automatically (spikes, dips, trend changes, level shifts)\n&#8211; Group anomalies into incidents and provide investigation workflows\n&#8211; Assist with root cause analysis using dimensional breakdowns\n&#8211; Notify operators through configurable alerts and hooks<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Core capabilities (what it does)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Continuous metric monitoring with configurable frequency and detection sensitivity<\/li>\n<li>Multi-dimensional analysis (e.g., metric by region, SKU, channel)<\/li>\n<li>Incident management (grouping anomalies across time series)<\/li>\n<li>Root cause exploration (dimension contribution analysis)<\/li>\n<li>Alerting via hooks (for example, email and webhooks\u2014verify exact hook types in the current docs)<\/li>\n<li>Feedback loop (mark anomalies as true\/false to refine results\u2014verify availability)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Major components (conceptual model)<\/h3>\n\n\n\n<p>While exact naming can differ slightly across portal\/API versions, the service generally includes:\n&#8211; <strong>Metrics Advisor resource<\/strong>: the Azure resource you provision (endpoint + keys\/AAD)\n&#8211; <strong>Metrics Advisor portal<\/strong>: web UI for configuration and investigation\n&#8211; <strong>Data feed<\/strong>: definition of where data comes from, schema, and ingestion schedule\n&#8211; <strong>Metric &amp; dimensions<\/strong>: measures (numeric values) and attributes used to slice the 
data\n&#8211; <strong>Detection configuration<\/strong>: anomaly detection settings (sensitivity, conditions, series-level options)\n&#8211; <strong>Alert configuration<\/strong>: routing rules for notifications\n&#8211; <strong>Hooks\/notification channels<\/strong>: how alerts are delivered (email\/webhook, etc.\u2014verify current list)\n&#8211; <strong>Incidents and anomalies<\/strong>: detected issues and grouped events<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Service type<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Managed Azure AI service<\/strong> (PaaS). You don\u2019t manage servers or ML infrastructure.<\/li>\n<li>Accessed through:\n<ul>\n<li>Azure Portal (resource creation)<\/li>\n<li>Metrics Advisor portal (configuration and investigation)<\/li>\n<li>REST APIs and SDKs (automation\/integration)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scope and locality<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Provisioned as an Azure resource<\/strong> in a specific <strong>subscription<\/strong> and <strong>resource group<\/strong>.<\/li>\n<li><strong>Region-bound<\/strong>: you choose a region during provisioning. 
Data residency and latency considerations apply.<\/li>\n<li>Networking and identity depend on how you connect data sources and how you expose the endpoint (public endpoint is typical; private networking options\u2014if available\u2014must be verified in current docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How it fits into the Azure ecosystem<\/h3>\n\n\n\n<p>Azure AI Metrics Advisor typically sits between:\n&#8211; <strong>Your metric stores<\/strong> (Azure SQL Database, Azure Data Explorer, Azure Storage, etc.)\n&#8211; <strong>Your operations tooling<\/strong> (email, ticketing, ChatOps, incident response, dashboards)<\/p>\n\n\n\n<p>It complements (not replaces) Azure Monitor:\n&#8211; <strong>Azure Monitor<\/strong> is excellent for Azure resource telemetry, logs, and alerting with thresholds\/KQL-based detection.\n&#8211; <strong>Azure AI Metrics Advisor<\/strong> focuses on <strong>time-series anomaly detection and diagnostic workflows<\/strong> for arbitrary business and product metrics, especially multi-dimensional metrics.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">3. 
Why use Azure AI Metrics Advisor?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Business reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Detect revenue-impacting or customer-impacting issues earlier (before dashboards are manually checked).<\/li>\n<li>Reduce time spent manually tuning alert thresholds for seasonal metrics (weekday\/weekend, campaigns).<\/li>\n<li>Improve response time by surfacing likely contributing dimensions (region, product, channel).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Technical reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Managed anomaly detection without building custom ML pipelines.<\/li>\n<li>Works well for <strong>multi-dimensional<\/strong> time-series data (metric sliced by multiple attributes).<\/li>\n<li>Supports scheduled ingestion and continuous monitoring patterns.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operational reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Centralizes anomaly triage: incidents, timelines, series breakdown, and alert routing.<\/li>\n<li>Helps reduce alert fatigue by grouping anomalies and using adaptive models rather than static thresholds.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/compliance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Runs as an Azure service with Azure identity and governance patterns.<\/li>\n<li>Can integrate with Azure RBAC and organizational policies (exact authentication options vary\u2014verify in docs).<\/li>\n<li>Supports auditing via Azure platform logging options where available (verify diagnostic log support for this resource type).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scalability\/performance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Designed to monitor many time series without you provisioning compute.<\/li>\n<li>Scales through service limits and quotas (verify quotas\/limits in official docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should choose 
it<\/h3>\n\n\n\n<p>Choose Azure AI Metrics Advisor when:\n&#8211; You have time-series metrics that are noisy or seasonal, and static thresholds produce too many false alarms.\n&#8211; You need multi-dimensional slicing and root cause hints.\n&#8211; You want a managed Azure-native service rather than running your own anomaly detection stack.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should not choose it<\/h3>\n\n\n\n<p>Avoid or reconsider when:\n&#8211; You only need simple threshold alerts (Azure Monitor alerts may be simpler and cheaper).\n&#8211; Your data source is unsupported or cannot be exposed to the service securely within your constraints.\n&#8211; You need fully custom anomaly models, feature engineering, or model explainability beyond what the service provides (consider Azure Machine Learning).\n&#8211; Your organization requires strict private networking only and the service cannot meet that requirement (verify private networking support and options).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Where is Azure AI Metrics Advisor used?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Industries<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>E-commerce and retail: conversion rate, cart abandonment, payment success rate<\/li>\n<li>Fintech: fraud signals, transaction success rate, latency, throughput<\/li>\n<li>SaaS: signups, churn, feature adoption, API error rates<\/li>\n<li>Manufacturing\/IoT: sensor metrics (when summarized into time-series aggregations)<\/li>\n<li>Media and gaming: concurrency, streaming quality metrics, engagement<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Team types<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SRE\/Operations: reduce time to detect and diagnose production issues<\/li>\n<li>Data engineering\/analytics: monitor KPI pipelines and data freshness<\/li>\n<li>Product analytics: detect unexpected user behavior changes<\/li>\n<li>Finance\/revenue ops: monitor billing and revenue indicators<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Workloads and architectures<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data platforms: Azure SQL \/ ADX \/ ADLS feeding KPI dashboards<\/li>\n<li>Microservices: service-level metrics exported to a store and monitored as KPIs<\/li>\n<li>ETL\/ELT pipelines: monitor aggregates emitted by ADF\/Synapse\/Databricks jobs<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Real-world deployment contexts<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Production monitoring<\/strong>: continuous detection and alerting integrated with incident response.<\/li>\n<li><strong>Dev\/test validation<\/strong>: validate anomaly detection configs and reduce false positives before production rollout.<\/li>\n<li><strong>KPI governance<\/strong>: formalize which metrics matter, who owns them, and what \u201cabnormal\u201d looks like.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">5. 
Top Use Cases and Scenarios<\/h2>\n\n\n\n<p>Below are realistic scenarios where Azure AI Metrics Advisor is commonly applied.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) Revenue KPI anomaly detection<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Daily revenue drops unexpectedly, but it\u2019s masked by normal weekday\/weekend seasonality.<\/li>\n<li><strong>Why it fits:<\/strong> Learns seasonality and detects drops relative to expected values.<\/li>\n<li><strong>Scenario:<\/strong> Revenue by region and channel dips in one region; incident highlights \u201cRegion=EU\u201d as top contributor.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) Conversion funnel monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Checkout conversion fluctuates; fixed thresholds create noise.<\/li>\n<li><strong>Why it fits:<\/strong> Detects pattern changes and level shifts beyond normal volatility.<\/li>\n<li><strong>Scenario:<\/strong> Conversion rate drops only for \u201cDevice=Android\u201d; root cause analysis suggests a segment regression.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) API success rate and latency anomalies (business-impact view)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> A small latency increase causes a big drop in signups; resource metrics alone don\u2019t show impact.<\/li>\n<li><strong>Why it fits:<\/strong> Monitors business metrics alongside technical metrics and correlates incidents.<\/li>\n<li><strong>Scenario:<\/strong> Signups drop while p95 latency spikes; alerts route to on-call with incident context.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) Data pipeline health via \u201cdata completeness\u201d metrics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> ETL job succeeds, but output data is partially missing.<\/li>\n<li><strong>Why it fits:<\/strong> Monitors derived metrics (row counts, null rates) rather than 
job status.<\/li>\n<li><strong>Scenario:<\/strong> \u201cOrders ingested per hour\u201d drops for one source system; anomaly triggers investigation.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) Marketing campaign performance monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Campaign traffic normally spikes; you want to detect when a spike is abnormally low (underperforming).<\/li>\n<li><strong>Why it fits:<\/strong> Detects deviations from expected spike magnitude.<\/li>\n<li><strong>Scenario:<\/strong> Paid traffic is 40% below expected during a scheduled campaign window.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Fraud or risk indicator monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> A fraud score aggregate slowly trends upward.<\/li>\n<li><strong>Why it fits:<\/strong> Detects trend changes and sustained anomalies.<\/li>\n<li><strong>Scenario:<\/strong> \u201cChargebacks per 10k transactions\u201d rises gradually; incident triggers risk team review.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">7) Store\/branch performance monitoring (multi-dimensional)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Thousands of branches; you can\u2019t set thresholds per branch.<\/li>\n<li><strong>Why it fits:<\/strong> Multi-dimensional time series monitoring across branch IDs.<\/li>\n<li><strong>Scenario:<\/strong> Incident groups anomalies across a subset of branches in one region.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) Inventory and supply chain anomaly detection<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Inventory levels fluctuate; you need early detection of abnormal depletion.<\/li>\n<li><strong>Why it fits:<\/strong> Learns patterns per SKU\/warehouse.<\/li>\n<li><strong>Scenario:<\/strong> A specific warehouse shows abnormal inventory drop for a SKU\u2014possible shrinkage or upstream 
issue.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Customer support and ops workload forecasting<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Ticket volume spikes unexpectedly, causing SLA risk.<\/li>\n<li><strong>Why it fits:<\/strong> Detects spikes relative to historical patterns.<\/li>\n<li><strong>Scenario:<\/strong> \u201cTickets per hour by category\u201d spikes for \u201cPayments\u201d; alert routes to support lead.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) SLA\/SLO leading indicator monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> You meet SLA today, but leading indicators show deterioration.<\/li>\n<li><strong>Why it fits:<\/strong> Detects subtle shifts before hard SLA breach.<\/li>\n<li><strong>Scenario:<\/strong> \u201cError budget burn rate\u201d metric becomes anomalous; on-call acts before SLA breach.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">11) Billing and usage anomaly detection<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> A bug causes usage to be undercounted (or overcounted), impacting invoices.<\/li>\n<li><strong>Why it fits:<\/strong> Detects anomalies in usage aggregates, segmented by plan\/tenant.<\/li>\n<li><strong>Scenario:<\/strong> \u201cDaily billable events\u201d spikes for one tenant; incident supports rapid containment.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">12) Experiment and feature flag monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> A new feature rollout impacts engagement in a specific cohort.<\/li>\n<li><strong>Why it fits:<\/strong> Detects cohort-level deviation with dimensions (cohort, experimentId).<\/li>\n<li><strong>Scenario:<\/strong> \u201cSessions per user\u201d dips for \u201cCohort=NewUsers\u201d; rollback triggered.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">6. 
Core Features<\/h2>\n\n\n\n<blockquote>\n<p>Feature availability can vary by region\/version and may evolve. Verify the current Azure AI Metrics Advisor documentation for the latest supported capabilities.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Data feeds (scheduled metric ingestion)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Defines a connection to a metric source, query\/file path, schema mapping, and ingestion frequency.<\/li>\n<li><strong>Why it matters:<\/strong> A correct data feed design is the foundation for accurate anomaly detection.<\/li>\n<li><strong>Practical benefit:<\/strong> Automated recurring pulls reduce manual data export and reduce operational overhead.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Supported data sources and authentication methods are constrained; ensure your data source and network\/security model are compatible.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Multi-dimensional metrics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Lets you define dimensions (e.g., region, product, channel) so the service can monitor many time series under one metric definition.<\/li>\n<li><strong>Why it matters:<\/strong> Most real-world metrics need slicing to pinpoint where the issue is.<\/li>\n<li><strong>Practical benefit:<\/strong> Automatically identifies affected segments without you creating separate monitors.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> High-cardinality dimensions can increase cost and complexity; design dimensions intentionally.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Anomaly detection configurations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Controls sensitivity, boundary conditions, and detection logic for your metrics.<\/li>\n<li><strong>Why it matters:<\/strong> Different metrics require different detection behavior.<\/li>\n<li><strong>Practical benefit:<\/strong> 
Reduces false positives\/negatives and aligns detection with business expectations.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Misconfigured sensitivity is a common cause of alert fatigue.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Incident grouping<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Groups anomalies across related time series into an incident.<\/li>\n<li><strong>Why it matters:<\/strong> Operators act on incidents, not thousands of individual anomalies.<\/li>\n<li><strong>Practical benefit:<\/strong> Improved triage workflow and fewer noisy notifications.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Grouping behavior may not match every team\u2019s incident taxonomy; plan integrations accordingly.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Root cause analysis (dimension contribution)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Helps identify which dimension values contributed most to an incident.<\/li>\n<li><strong>Why it matters:<\/strong> Reduces time-to-diagnosis for multi-dimensional metrics.<\/li>\n<li><strong>Practical benefit:<\/strong> Quickly points to a failing region\/SKU\/channel, narrowing the search space.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Root-cause results depend on data quality, dimension design, and statistical signals; treat as guidance, not certainty.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Alerting and hooks<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Sends notifications when anomalies\/incidents occur, based on alert rules.<\/li>\n<li><strong>Why it matters:<\/strong> Detection without notification doesn\u2019t reduce time-to-response.<\/li>\n<li><strong>Practical benefit:<\/strong> Integrates with operational workflows (email\/webhook patterns).<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Hook types and authentication options vary; verify supported integrations 
and secure webhook handling.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Metrics Advisor portal (investigation UI)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Provides dashboards for incidents, anomaly timelines, drill-down by dimensions, and configuration management.<\/li>\n<li><strong>Why it matters:<\/strong> Speeds up human investigation and tuning.<\/li>\n<li><strong>Practical benefit:<\/strong> Non-ML users can manage detection and interpret incidents.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Portal access and role management must be aligned to your org\u2019s identity and governance policies.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">APIs and SDKs<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Automates creation of data feeds, configurations, and alerts; integrates anomalies into other systems.<\/li>\n<li><strong>Why it matters:<\/strong> Infrastructure-as-code and repeatability are critical for production operations.<\/li>\n<li><strong>Practical benefit:<\/strong> CI\/CD-friendly onboarding of new metrics and environments.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> API surface area and auth methods should be validated in the latest SDK docs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Feedback\/annotation (if available in your version)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Lets users label anomalies (true\/false) and add context.<\/li>\n<li><strong>Why it matters:<\/strong> Improves operational record-keeping and can support tuning.<\/li>\n<li><strong>Practical benefit:<\/strong> Better post-incident reviews and iterative improvement.<\/li>\n<li><strong>Limitations\/caveats:<\/strong> Not all workflows support automated learning from feedback; verify behavior.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">7. 
Architecture and How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">High-level architecture<\/h3>\n\n\n\n<p>At a high level, Azure AI Metrics Advisor:\n1. Connects to your metric store (via a configured data feed).\n2. Ingests metric values on a schedule.\n3. Builds\/updates statistical models for each time series (often per dimension combination).\n4. Detects anomalies and groups them into incidents.\n5. Offers investigation tooling and sends alerts via hooks.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Data flow vs control flow<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Control plane:<\/strong> Create the Azure AI Metrics Advisor resource, manage access, configure data feeds, detection, and alerts.<\/li>\n<li><strong>Data plane:<\/strong> The service reads metric data from your data source, performs detection, stores incident\/anomaly metadata, and emits notifications.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integrations with related Azure services (common patterns)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data sources:<\/strong> Azure SQL Database, Azure Data Explorer, Azure Storage\/ADLS Gen2 (CSV), and other supported sources (verify current list).<\/li>\n<li><strong>Alert routing:<\/strong> Email\/webhook; webhooks can call Azure Functions or Logic Apps to create tickets or post to ChatOps.<\/li>\n<li><strong>Dashboards:<\/strong> Power BI, Azure Managed Grafana, or internal dashboards can visualize the same metrics; Metrics Advisor focuses on anomalies\/incidents.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Dependency services (typical)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A metric store (SQL\/ADX\/Storage)<\/li>\n<li>Identity provider (Microsoft Entra ID \/ Azure AD)<\/li>\n<li>Notification endpoints (email systems, webhooks, Functions, Logic Apps)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/authentication model (typical)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>To 
Metrics Advisor APIs:<\/strong> Key-based auth and\/or Microsoft Entra ID (verify supported methods in current docs).<\/li>\n<li><strong>To data sources:<\/strong> Often connection strings\/credentials; some sources may support Entra ID-based auth. Treat these credentials as secrets and store them securely.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Networking model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Commonly accessed via a public endpoint over HTTPS.<\/li>\n<li>Private networking options (Private Link) may or may not be supported for this service or in all regions\u2014<strong>verify in official docs<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring\/logging\/governance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Monitor:\n<ul>\n<li>Data feed ingestion success\/failures<\/li>\n<li>Alert delivery success\/failures<\/li>\n<li>API usage and throttling<\/li>\n<\/ul>\n<\/li>\n<li>Governance:\n<ul>\n<li>Tag resources (env, owner, cost center)<\/li>\n<li>Use separate environments (dev\/test\/prod)<\/li>\n<li>Control portal access and credentials<\/li>\n<\/ul>\n<\/li>\n<li>Azure Monitor integration (metrics\/diagnostics) should be validated for this resource type.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Simple architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart LR\n  A[\"Metric Source&lt;br\/&gt;(Azure SQL \/ ADX \/ Storage)\"] --&gt;|Scheduled pull| B[Azure AI Metrics Advisor]\n  B --&gt; C[\"Anomaly Detection&lt;br\/&gt;+ Incidents\"]\n  C --&gt; D[\"Alerts (Email\/Webhook)\"]\n  C --&gt; E[\"Metrics Advisor Portal&lt;br\/&gt;Investigation\"]\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Production-style architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart TB\n  subgraph Data[Data Platform]\n    SQL[(\"Azure SQL Database&lt;br\/&gt;KPI Tables\")]\n    ADX[(\"Azure Data Explorer&lt;br\/&gt;Aggregates\")]\n    STG[(\"ADLS Gen2 \/ Blob&lt;br\/&gt;CSV Exports\")]\n  end\n\n  subgraph AI[AI + Machine Learning]\n    MA[\"Azure AI Metrics Advisor&lt;br\/&gt;(Resource + Portal)\"]\n  end\n\n  subgraph Ops[Operations &amp; Response]\n    LA[\"Logic Apps \/ Azure Functions&lt;br\/&gt;Webhook Handler\"]\n    ITSM[\"ITSM\/Ticketing System&lt;br\/&gt;(e.g., ServiceNow\/Jira)\"]\n    CHAT[\"ChatOps&lt;br\/&gt;(Teams\/Slack via connector)\"]\n    EMAIL[Email Distribution List]\n    SIEM[\"Microsoft Sentinel&lt;br\/&gt;(optional)\"]\n  end\n\n  SQL --&gt;|Data feed| MA\n  ADX --&gt;|Data feed| MA\n  STG --&gt;|Data feed| MA\n\n  MA --&gt;|Incidents + Alerts| EMAIL\n  MA --&gt;|Webhook| LA\n  LA --&gt; ITSM\n  LA --&gt; CHAT\n\n  MA --&gt;|\"Audit\/Diagnostics (verify support)\"| SIEM\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8. Prerequisites<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Azure account and subscription<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>An active <strong>Azure subscription<\/strong> with billing enabled.<\/li>\n<li>Ability to create resources in a resource group.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Permissions \/ IAM roles<\/h3>\n\n\n\n<p>You typically need:\n&#8211; <strong>Contributor<\/strong> (or Owner) on the target resource group to create resources.\n&#8211; Permissions to create and manage <strong>Azure AI Metrics Advisor<\/strong> (Cognitive Services resource type).\n&#8211; Permissions on the <strong>data source<\/strong>:\n  &#8211; For Azure SQL Database: ability to create tables and read data (SELECT) for the query used by the data feed.\n  &#8211; For Storage\/ADLS: read access to the container\/path holding metric files.<\/p>\n\n\n\n<p>If using Microsoft Entra ID authentication for APIs or data sources, ensure your org policy allows it and roles are assigned appropriately. <strong>Verify exact roles required in official docs.<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Billing requirements<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure AI Metrics Advisor is billed based on usage. 
You must have a payment method and ensure the subscription is not restricted.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Tools (recommended)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Portal: https:\/\/portal.azure.com\/<\/li>\n<li>Azure CLI: https:\/\/learn.microsoft.com\/cli\/azure\/install-azure-cli<\/li>\n<li>SQL client for the lab:\n<ul>\n<li><code>sqlcmd<\/code> (cross-platform) or Azure Data Studio<\/li>\n<li>For SQL Server\/Azure SQL, install instructions: https:\/\/learn.microsoft.com\/sql\/tools\/sqlcmd\/sqlcmd-utility<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Region availability<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure AI Metrics Advisor is not available in all regions. Confirm supported regions in the Azure Portal resource creation UI and official docs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Quotas\/limits<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Expect limits such as maximum number of data feeds, metrics, and time series monitored. <strong>Verify current quotas<\/strong> in official docs before production rollouts.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Prerequisite services for the lab<\/h3>\n\n\n\n<p>This tutorial\u2019s hands-on lab uses:\n&#8211; Azure SQL Database (as a metric source)\n&#8211; Azure AI Metrics Advisor resource<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">9. Pricing \/ Cost<\/h2>\n\n\n\n<blockquote>\n<p>Do not rely on static blog prices. Pricing varies by region and can change. 
Always confirm with the official pricing page and the Azure Pricing Calculator.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Official pricing references<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Pricing page (verify current URL and whether the service is listed under \u201cMetrics Advisor\u201d or \u201cAzure AI services\u201d):<br\/>\n  https:\/\/azure.microsoft.com\/pricing\/<br\/>\n  (Search within Azure Pricing for \u201cMetrics Advisor\u201d.)<\/li>\n<li>Azure Pricing Calculator: https:\/\/azure.microsoft.com\/pricing\/calculator\/<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Pricing dimensions (typical model)<\/h3>\n\n\n\n<p>Azure AI Metrics Advisor pricing is generally based on monitored usage such as:\n&#8211; <strong>Number of time series<\/strong> monitored (a \u201ctime series\u201d is typically a unique combination of metric + dimension values)\n&#8211; <strong>Frequency of ingestion<\/strong> and monitoring cadence\n&#8211; Potentially <strong>API calls<\/strong> or other operations (verify)\n&#8211; Potentially separate charges for advanced diagnostic capabilities (verify)<\/p>\n\n\n\n<p>Because the exact meters and units can change, <strong>verify the meters on the current official pricing page<\/strong> for your region.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Free tier<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Some Azure AI services have free tiers; for Azure AI Metrics Advisor, availability may vary over time. 
<strong>Verify free tier availability<\/strong> in the pricing page.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Key cost drivers<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Cardinality of dimensions<\/strong><br\/>\n   A metric like <code>Revenue<\/code> with dimensions <code>Region (20)<\/code> \u00d7 <code>Channel (10)<\/code> \u00d7 <code>SKU (500)<\/code> can explode into 100,000 time series.<\/li>\n<li><strong>Number of metrics<\/strong><br\/>\n   Monitoring 50 metrics vs 5 metrics changes the total time-series volume.<\/li>\n<li><strong>Ingestion frequency<\/strong><br\/>\n   Hourly vs daily ingestion increases processing and can increase billable usage.<\/li>\n<li><strong>Retention and analysis horizon<\/strong><br\/>\n   If the service retains more data for modeling (implementation-dependent), it may impact cost\/limits.<\/li>\n<li><strong>Alert volume and integrations<\/strong><br\/>\n   Webhook endpoints (Functions\/Logic Apps) have their own costs.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Hidden or indirect costs<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data source costs<\/strong>: Azure SQL\/ADX compute and storage costs to host and query the metrics.<\/li>\n<li><strong>Query costs<\/strong>:<\/li>\n<li>Azure SQL: DTU\/vCore consumption for frequent read queries.<\/li>\n<li>ADX: query costs depending on cluster sizing and query frequency.<\/li>\n<li><strong>Networking<\/strong>: outbound data transfer from the data source or integration endpoints can add cost depending on architecture.<\/li>\n<li><strong>Operational overhead<\/strong>: time spent tuning detection configs and managing alert routing.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network\/data transfer implications<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If Metrics Advisor pulls data from sources across regions, you may incur cross-region data transfer and increased latency. 
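<\/li>\n<\/ul>\n\n\n\n<p>Returning to the cost drivers above: the dimension-cardinality arithmetic is worth running before enabling any data feed. The sketch below is a back-of-the-envelope helper only; the exact billing definition of a \u201ctime series\u201d must still be verified on the official pricing page.<\/p>\n\n\n\n

```python
# Estimate billable time series for a data feed. Assumes one time series
# per metric per unique combination of dimension values -- verify this
# definition against the official pricing page before budgeting.
from math import prod

def time_series_count(num_metrics: int, dimension_cardinalities: list) -> int:
    """Metrics multiplied by the cross-product of all dimension values."""
    return num_metrics * prod(dimension_cardinalities)

# Starter estimate from this tutorial: 3 metrics x Region(5)
print(time_series_count(3, [5]))            # 15

# Cardinality explosion: Revenue by Region(20) x Channel(10) x SKU(500)
print(time_series_count(1, [20, 10, 500]))  # 100000
```

\n\n\n\n<p>If the count comes out in the tens of thousands, revisit which dimensions the monitor actually needs.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>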
Prefer same-region designs when possible.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How to optimize cost<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Start with <strong>coarser frequency<\/strong> (daily) and move to hourly only where required.<\/li>\n<li>Reduce dimension cardinality:<\/li>\n<li>Monitor at the right aggregation level (e.g., by region and channel, not SKU, unless necessary).<\/li>\n<li>Use separate monitors:<\/li>\n<li>A subset of high-value SKUs\/tenants may justify higher granularity.<\/li>\n<li>Ensure queries are efficient:<\/li>\n<li>Pre-aggregate metrics into a narrow \u201cfact table\u201d with indexed timestamp + dimensions.<\/li>\n<li>Implement alert routing rules to reduce noise and prevent downstream automation costs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Example low-cost starter estimate (method, not numbers)<\/h3>\n\n\n\n<p>To estimate:\n1. Pick 1\u20133 metrics (e.g., <code>Orders<\/code>, <code>Revenue<\/code>, <code>ConversionRate<\/code>).\n2. Choose 1\u20132 dimensions with limited values (e.g., <code>Region<\/code> with 5 values).\n3. Compute time series count: <code>metrics \u00d7 region_values<\/code> \u2192 <code>3 \u00d7 5 = 15<\/code> time series.\n4. Choose daily ingestion frequency.\n5. 
Plug the time-series count and frequency into the <strong>Azure Pricing Calculator<\/strong> (or pricing page meters).<\/p>\n\n\n\n<p>This usually keeps the proof-of-concept low-cost while you validate value.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example production cost considerations<\/h3>\n\n\n\n<p>In production, costs commonly grow due to:\n&#8211; Higher frequency ingestion (hourly or every few minutes)\n&#8211; Large dimension sets (tenantId\/customerId, SKU, endpoint)\n&#8211; More environments (dev\/test\/prod)\n&#8211; More teams onboarding their KPIs<\/p>\n\n\n\n<p>A common production approach is a <strong>tiered monitoring strategy<\/strong>:\n&#8211; Tier 1: high-level KPIs at daily\/hourly frequency (low cardinality)\n&#8211; Tier 2: drill-down metrics for key segments (moderate cardinality)\n&#8211; Tier 3: ad-hoc investigations (handled via analytics tooling rather than continuous monitoring)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">10. Step-by-Step Hands-On Tutorial<\/h2>\n\n\n\n<p>This lab builds a working anomaly detection loop using Azure SQL Database as the metric store and Azure AI Metrics Advisor for monitoring and alerting. It\u2019s designed to be executable and relatively low-cost, but always review pricing before running.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Objective<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Create an Azure AI Metrics Advisor resource<\/li>\n<li>Create a simple KPI table in Azure SQL Database with time-series values<\/li>\n<li>Configure a data feed in Metrics Advisor to ingest the KPI<\/li>\n<li>Configure anomaly detection and an alert<\/li>\n<li>Inject an anomaly and verify that it is detected<\/li>\n<li>Clean up resources<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Lab Overview<\/h3>\n\n\n\n<p>You will:\n1. Provision Azure SQL Database and load sample time-series KPI data.\n2. Provision Azure AI Metrics Advisor.\n3. Create a Metrics Advisor data feed that queries the KPI data.\n4. 
Configure detection and alerting.\n5. Add an outlier data point and verify an incident\/anomaly.\n6. Remove resources.<\/p>\n\n\n\n<p><strong>Expected outcome:<\/strong> A working monitor that detects a spike\/drop and triggers an alert (at minimum, an anomaly\/incident visible in the portal; alert delivery depends on your hook configuration and email policies).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Create a resource group<\/h3>\n\n\n\n<p>You can do this in the Azure Portal or with Azure CLI.<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Variables (edit)\nRG=\"rg-metricsadvisor-lab\"\nLOC=\"eastus\"\n\naz group create --name \"$RG\" --location \"$LOC\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> Resource group exists in your subscription.<\/p>\n\n\n\n<p><strong>Verification:<\/strong><\/p>\n\n\n\n<pre><code class=\"language-bash\">az group show --name \"$RG\" --query \"{name:name, location:location}\" -o table\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Create an Azure SQL Database (lab data source)<\/h3>\n\n\n\n<blockquote>\n<p>Cost note: Azure SQL pricing varies by tier. Choose a low-cost option suitable for a short lab (for example, a small DTU tier or low vCore\/serverless if available). 
Verify current options in your region.<\/p>\n<\/blockquote>\n\n\n\n<p>Create SQL logical server + database:<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Variables (edit)\nSQL_SERVER=\"sqlma$(openssl rand -hex 3)\"   # must be globally unique\nSQL_ADMIN=\"sqladminuser\"\nSQL_PASSWORD='Replace-With-A-Strong-Password!123'\nSQL_DB=\"kpidb\"\n\naz sql server create \\\n  --name \"$SQL_SERVER\" \\\n  --resource-group \"$RG\" \\\n  --location \"$LOC\" \\\n  --admin-user \"$SQL_ADMIN\" \\\n  --admin-password \"$SQL_PASSWORD\"\n\naz sql db create \\\n  --resource-group \"$RG\" \\\n  --server \"$SQL_SERVER\" \\\n  --name \"$SQL_DB\" \\\n  --service-objective \"S0\"\n<\/code><\/pre>\n\n\n\n<blockquote>\n<p>If <code>S0<\/code> is not available or you want cheaper options, list SKUs and choose an appropriate one:<\/p>\n<\/blockquote>\n\n\n\n<pre><code class=\"language-bash\">az sql db list-editions --location \"$LOC\" -o table\n<\/code><\/pre>\n\n\n\n<p>Allow your client IP and (optionally) Azure services:<\/p>\n\n\n\n<pre><code class=\"language-bash\">MYIP=$(curl -s https:\/\/api.ipify.org)\n\naz sql server firewall-rule create \\\n  --resource-group \"$RG\" \\\n  --server \"$SQL_SERVER\" \\\n  --name \"AllowMyIP\" \\\n  --start-ip-address \"$MYIP\" \\\n  --end-ip-address \"$MYIP\"\n\n# Optional (common for labs): allow Azure services\naz sql server firewall-rule create \\\n  --resource-group \"$RG\" \\\n  --server \"$SQL_SERVER\" \\\n  --name \"AllowAzureServices\" \\\n  --start-ip-address 0.0.0.0 \\\n  --end-ip-address 0.0.0.0\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> SQL server and database are created and reachable from your machine.<\/p>\n\n\n\n<p><strong>Verification:<\/strong><\/p>\n\n\n\n<pre><code class=\"language-bash\">az sql db show --resource-group \"$RG\" --server \"$SQL_SERVER\" --name \"$SQL_DB\" \\\n  --query \"{db:name, status:status, sku:sku.name}\" -o table\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 
class=\"wp-block-heading\">Step 3: Create a KPI table and load sample time-series data<\/h3>\n\n\n\n<p>Connect using <code>sqlcmd<\/code> (or Azure Data Studio). With <code>sqlcmd<\/code>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">sqlcmd -S \"${SQL_SERVER}.database.windows.net\" -d \"$SQL_DB\" -U \"$SQL_ADMIN\" -P \"$SQL_PASSWORD\" -N -C -Q \"SELECT @@VERSION;\"\n<\/code><\/pre>\n\n\n\n<p>Create a table and insert sample data. This example creates <strong>hourly<\/strong> revenue values for 14 days for two regions. You can adjust to daily if you prefer.<\/p>\n\n\n\n<pre><code class=\"language-bash\">sqlcmd -S \"${SQL_SERVER}.database.windows.net\" -d \"$SQL_DB\" -U \"$SQL_ADMIN\" -P \"$SQL_PASSWORD\" -N -C &lt;&lt;'SQL'\nSET NOCOUNT ON;\n\nIF OBJECT_ID('dbo.KpiRevenueHourly') IS NOT NULL\n  DROP TABLE dbo.KpiRevenueHourly;\n\nCREATE TABLE dbo.KpiRevenueHourly (\n  ts          DATETIME2(0) NOT NULL,\n  region      NVARCHAR(20) NOT NULL,\n  revenue     FLOAT        NOT NULL,\n  CONSTRAINT PK_KpiRevenueHourly PRIMARY KEY (ts, region)\n);\n\n;WITH n AS (\n  SELECT TOP (24*14)\n    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i\n  FROM sys.all_objects a CROSS JOIN sys.all_objects b\n),\nt AS (\n  SELECT\n    DATEADD(HOUR, i, DATEADD(DAY, -14, SYSUTCDATETIME())) AS ts_utc,\n    i\n  FROM n\n),\nbase AS (\n  SELECT\n    ts_utc,\n    CASE WHEN (DATEPART(HOUR, ts_utc) BETWEEN 8 AND 20) THEN 1.2 ELSE 0.8 END AS hour_factor,\n    CASE WHEN DATENAME(WEEKDAY, ts_utc) IN ('Saturday','Sunday') THEN 0.85 ELSE 1.0 END AS weekend_factor\n  FROM t\n)\nINSERT INTO dbo.KpiRevenueHourly(ts, region, revenue)\nSELECT\n  b.ts_utc AS ts,\n  r.region,\n  -- base seasonal pattern + mild noise\n  (CASE WHEN r.region='us' THEN 1000 ELSE 700 END) * b.hour_factor * b.weekend_factor\n  + (ABS(CHECKSUM(NEWID())) % 50) AS revenue\nFROM base b\nCROSS JOIN (VALUES ('us'), ('eu')) r(region);\n\nSELECT COUNT(*) AS rows_loaded FROM 
dbo.KpiRevenueHourly;\nSQL\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> Table created with ~672 rows (14 days \u00d7 24 hours \u00d7 2 regions = 672).<\/p>\n\n\n\n<p><strong>Verification query:<\/strong><\/p>\n\n\n\n<pre><code class=\"language-bash\">sqlcmd -S \"${SQL_SERVER}.database.windows.net\" -d \"$SQL_DB\" -U \"$SQL_ADMIN\" -P \"$SQL_PASSWORD\" -N -C -Q \\\n\"SELECT TOP 5 * FROM dbo.KpiRevenueHourly ORDER BY ts DESC, region;\"\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Create an Azure AI Metrics Advisor resource<\/h3>\n\n\n\n<p>Create the resource in the Azure Portal (recommended for beginners) because the portal will also show you the correct endpoint and the link to the Metrics Advisor portal.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Azure Portal \u2192 <strong>Create a resource<\/strong><\/li>\n<li>Search for <strong>Azure AI Metrics Advisor<\/strong> (or <strong>Metrics Advisor<\/strong>)<\/li>\n<li>Create the resource in:\n   &#8211; Subscription: your subscription\n   &#8211; Resource group: <code>rg-metricsadvisor-lab<\/code>\n   &#8211; Region: same as SQL if possible\n   &#8211; Name: e.g., <code>ma-kpi-lab<\/code>\n   &#8211; Pricing tier: choose what\u2019s available (verify)<\/li>\n<\/ol>\n\n\n\n<p>After deployment, open the resource and locate:\n&#8211; <strong>Endpoint<\/strong>\n&#8211; <strong>Keys<\/strong> (if using key-based auth)<\/p>\n\n\n\n<p><strong>Expected outcome:<\/strong> Metrics Advisor resource is deployed and you can open the Metrics Advisor portal from the resource.<\/p>\n\n\n\n<p><strong>Verification:<\/strong>\n&#8211; Azure Portal shows resource status as <strong>Succeeded<\/strong>.\n&#8211; You can see endpoint\/keys in the resource.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: Open the Metrics Advisor portal and add a data feed<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li>In the Azure 
Portal, open your <strong>Azure AI Metrics Advisor<\/strong> resource.<\/li>\n<li>Select <strong>Open Metrics Advisor portal<\/strong> (wording may vary).<\/li>\n<li>In the portal, create a <strong>Data feed<\/strong>.<\/li>\n<\/ol>\n\n\n\n<p>Choose <strong>Azure SQL Database<\/strong> as the data source (if supported\u2014verify current supported sources in the UI\/docs).<\/p>\n\n\n\n<p>You\u2019ll typically provide:\n&#8211; <strong>Server<\/strong>: <code>${SQL_SERVER}.database.windows.net<\/code>\n&#8211; <strong>Database<\/strong>: <code>kpidb<\/code>\n&#8211; Authentication:\n  &#8211; SQL username\/password (lab)\n  &#8211; Or Entra ID-based auth (preferred for production if supported\u2014verify)\n&#8211; A <strong>query<\/strong> that returns:\n  &#8211; Timestamp column\n  &#8211; One or more dimension columns\n  &#8211; One or more metric columns<\/p>\n\n\n\n<p>Example query (use UTC timestamps consistently):<\/p>\n\n\n\n<pre><code class=\"language-sql\">SELECT\n  ts,\n  region,\n  revenue\nFROM dbo.KpiRevenueHourly\nWHERE ts &gt;= DATEADD(DAY, -14, SYSUTCDATETIME())\n<\/code><\/pre>\n\n\n\n<p>Then set:\n&#8211; <strong>Granularity:<\/strong> Hourly\n&#8211; <strong>Ingestion time offset:<\/strong> If your timestamps are UTC, keep offset consistent (verify portal setting)\n&#8211; <strong>Start time:<\/strong> earliest timestamp in the table\n&#8211; <strong>Timezone:<\/strong> choose carefully; mismatches can look like missing data<\/p>\n\n\n\n<p><strong>Expected outcome:<\/strong> Data feed is created and initial ingestion starts or is scheduled.<\/p>\n\n\n\n<p><strong>Verification:<\/strong>\n&#8211; In the data feed details, check ingestion status.\n&#8211; Confirm the portal shows metric <code>revenue<\/code> and dimension <code>region<\/code>.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: Configure anomaly detection<\/h3>\n\n\n\n<p>In the Metrics Advisor portal:\n1. 
Go to the metric under your data feed.\n2. Create or edit an <strong>anomaly detection configuration<\/strong>:\n   &#8211; Start with default sensitivity.\n   &#8211; Ensure it\u2019s enabled for the metric.\n3. Save the configuration.<\/p>\n\n\n\n<p><strong>Expected outcome:<\/strong> The service begins evaluating ingested points for anomalies.<\/p>\n\n\n\n<p><strong>Verification:<\/strong>\n&#8211; You can view a chart of the time series.\n&#8211; You can see expected band\/bounds (if shown) and anomaly markers (once detection runs).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 7: Create an alert configuration and hook<\/h3>\n\n\n\n<p>Create a hook and alert routing:\n1. Create a <strong>hook<\/strong> (notification channel).\n   &#8211; If email is supported, add your email.\n   &#8211; If webhook is supported, use an endpoint you control (an Azure Function HTTP trigger is a good option).\n2. Create an <strong>alert configuration<\/strong>:\n   &#8211; Select the detection configuration\n   &#8211; Select which severity or anomaly types should alert\n   &#8211; Attach the hook<\/p>\n\n\n\n<p><strong>Expected outcome:<\/strong> Alerts will be sent when anomalies\/incidents are generated (subject to detection and alert rules).<\/p>\n\n\n\n<p><strong>Verification:<\/strong>\n&#8211; The alert configuration shows as enabled.\n&#8211; Hook test (if available) succeeds.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 8: Inject an anomaly (spike or drop) into the data<\/h3>\n\n\n\n<p>Insert an outlier for the most recent hour for region <code>eu<\/code> (a sudden drop). 
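<\/p>\n\n\n\n<p>Before running the update, it is worth sanity-checking that the injected value will stand out from the natural variance of the series (see the \u201cNo anomalies detected\u201d troubleshooting item later). A minimal sketch; the history values below merely mimic the shape of the lab's generated <code>eu<\/code> data rather than being read from the database:<\/p>\n\n\n\n

```python
# Check that the injected point (revenue = 10.0) is an extreme outlier
# relative to typical values. History mimics the lab's "eu" generator:
# base 700 x hour factor x weekend factor + additive noise. In practice,
# SELECT the last few days of rows from dbo.KpiRevenueHourly instead.
import statistics

history = [700 * hour_f * weekend_f + noise
           for hour_f in (0.8, 1.2)         # off-peak vs peak hours
           for weekend_f in (0.85, 1.0)     # weekend vs weekday
           for noise in (0, 25, 49)]        # mild additive noise
injected = 10.0

z = (injected - statistics.mean(history)) / statistics.stdev(history)
print(f"z-score of injected point: {z:.1f}")
assert abs(z) > 3, "outlier may be too mild for the detector to flag"
```

\n\n\n\n<p>A point several standard deviations outside the historical band gives the detector (and you) an unambiguous signal for this test.<\/p>\n\n\n\n<p>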
Use <code>sqlcmd<\/code>:<\/p>\n\n\n\n<pre><code class=\"language-bash\">sqlcmd -S \"${SQL_SERVER}.database.windows.net\" -d \"$SQL_DB\" -U \"$SQL_ADMIN\" -P \"$SQL_PASSWORD\" -N -C &lt;&lt;'SQL'\nDECLARE @t DATETIME2(0) = DATEADD(HOUR, DATEDIFF(HOUR, 0, SYSUTCDATETIME()), 0);\n\n-- Upsert the point to an extreme low value\nMERGE dbo.KpiRevenueHourly AS target\nUSING (SELECT @t AS ts, N'eu' AS region, 10.0 AS revenue) AS src\nON target.ts = src.ts AND target.region = src.region\nWHEN MATCHED THEN UPDATE SET revenue = src.revenue\nWHEN NOT MATCHED THEN INSERT (ts, region, revenue) VALUES (src.ts, src.region, src.revenue);\n\nSELECT * FROM dbo.KpiRevenueHourly WHERE ts=@t AND region='eu';\nSQL\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> The latest hour\u2019s <code>eu<\/code> revenue is now extremely low compared to history.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 9: Trigger ingestion \/ wait for the next run and review anomalies<\/h3>\n\n\n\n<p>Depending on your ingestion schedule:\n&#8211; If the portal supports <strong>manual refresh\/backfill<\/strong> for a data feed, run it for the latest window.\n&#8211; Otherwise, wait for the next scheduled ingestion.<\/p>\n\n\n\n<p>Then:\n1. Go to <strong>Incidents<\/strong> (or anomaly dashboard).\n2. Filter by your metric and time range.\n3. 
Inspect the incident and drill into dimension <code>region<\/code>.<\/p>\n\n\n\n<p><strong>Expected outcome:<\/strong> You see an anomaly (and often an incident) around the injected timestamp, especially for <code>region=eu<\/code>.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Validation<\/h3>\n\n\n\n<p>Use this checklist:\n&#8211; Data feed ingestion shows success for the most recent time window.\n&#8211; The metric chart displays the recent point.\n&#8211; An anomaly marker appears at or near the injected timestamp for <code>region=eu<\/code>.\n&#8211; An incident is created or the anomaly is listed in anomaly results.\n&#8211; If alerting is configured and enabled, you receive an email\/webhook notification.<\/p>\n\n\n\n<p>If you do not receive alerts but you do see the anomaly in the portal, the detection is working; focus troubleshooting on hook configuration and alert rules.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Troubleshooting<\/h3>\n\n\n\n<p>Common issues and fixes:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>No data ingested \/ ingestion failed<\/strong>\n   &#8211; Check SQL firewall rules (client IP and \u201cAllow Azure services\u201d for labs).\n   &#8211; Confirm credentials and that the user can <code>SELECT<\/code> from the table.\n   &#8211; Validate the query returns rows for the selected time range.\n   &#8211; Confirm timestamp column type and timezone assumptions.<\/p>\n<\/li>\n<li>\n<p><strong>Missing data or misaligned time buckets<\/strong>\n   &#8211; Confirm granularity (hourly vs daily).\n   &#8211; Ensure timestamps align to hour boundaries if required by your configuration.\n   &#8211; Check time zone settings in the data feed.<\/p>\n<\/li>\n<li>\n<p><strong>No anomalies detected<\/strong>\n   &#8211; You may need more historical data for modeling (add more days).\n   &#8211; Reduce detection threshold (increase sensitivity).\n   &#8211; Ensure the 
outlier is extreme enough relative to variance.\n   &#8211; Confirm you\u2019re viewing the correct dimension slice (<code>eu<\/code>).<\/p>\n<\/li>\n<li>\n<p><strong>Alerts not received<\/strong>\n   &#8211; Verify hook configuration and that your email system didn\u2019t quarantine messages.\n   &#8211; If using webhook, check endpoint logs (Function\/Logic App runs).\n   &#8211; Confirm the alert configuration is linked to the detection config and is enabled.\n   &#8211; Check alert rules: severity filters might exclude the anomaly.<\/p>\n<\/li>\n<li>\n<p><strong>Throttling or API errors<\/strong>\n   &#8211; Reduce automation frequency; back off and retry.\n   &#8211; Verify quotas and limits in official docs.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Cleanup<\/h3>\n\n\n\n<p>To avoid ongoing charges, delete the resource group:<\/p>\n\n\n\n<pre><code class=\"language-bash\">az group delete --name \"$RG\" --yes --no-wait\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> SQL Database and the Metrics Advisor resource are removed (deletion completes asynchronously).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">11. 
Best Practices<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Architecture best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Design metric tables for monitoring<\/strong>:<\/li>\n<li>Narrow schema: timestamp + dimensions + numeric measures<\/li>\n<li>Pre-aggregated to the monitoring granularity (hour\/day)<\/li>\n<li>Indexed on (timestamp, dimensions) for fast reads<\/li>\n<li>Keep metric sources and Metrics Advisor in the <strong>same Azure region<\/strong> where possible to reduce latency and cross-region transfer.<\/li>\n<li>Use <strong>tiered monitoring<\/strong>:<\/li>\n<li>High-level KPIs always-on<\/li>\n<li>Drill-down metrics selectively enabled for high-value segments<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">IAM\/security best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prefer <strong>Microsoft Entra ID<\/strong> authentication where supported (APIs and data sources). If not supported for your use case, use keys\/secrets carefully.<\/li>\n<li>Restrict who can:<\/li>\n<li>Create\/modify data feeds (these contain data source credentials)<\/li>\n<li>Modify detection settings (impacts alert behavior)<\/li>\n<li>Manage hooks (webhooks can exfiltrate data if misused)<\/li>\n<li>Use least privilege for data source access (read-only for monitoring queries).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Control <strong>dimension cardinality<\/strong> intentionally.<\/li>\n<li>Start with daily granularity; move to hourly only when justified.<\/li>\n<li>Monitor only the KPIs that drive action; avoid \u201cmonitor everything\u201d.<\/li>\n<li>Optimize data source query cost (indexes, pre-aggregation, materialized views where applicable).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ensure stable ingestion:<\/li>\n<li>Avoid long-running queries<\/li>\n<li>Avoid querying raw 
event tables; query aggregates<\/li>\n<li>Keep detection configs consistent across environments to avoid drift.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Reliability best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Implement alert routing fallback:<\/li>\n<li>If webhook fails, also notify an email list (or vice versa) if supported.<\/li>\n<li>Regularly review ingestion failures and alert delivery failures.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operations best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Establish an \u201canomaly playbook\u201d:<\/li>\n<li>What counts as actionable?<\/li>\n<li>Who is on call for each metric group?<\/li>\n<li>What is the escalation path?<\/li>\n<li>Run periodic tuning sessions:<\/li>\n<li>Review false positives\/negatives<\/li>\n<li>Adjust sensitivity and dimension scopes<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Governance\/tagging\/naming best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Naming convention example:<\/li>\n<li>Resource group: <code>rg-&lt;app&gt;-&lt;env&gt;-ma<\/code><\/li>\n<li>Metrics Advisor: <code>ma-&lt;app&gt;-&lt;env&gt;<\/code><\/li>\n<li>Data feed: <code>&lt;env&gt;-&lt;domain&gt;-&lt;metricgroup&gt;<\/code><\/li>\n<li>Tagging:<\/li>\n<li><code>env<\/code>, <code>owner<\/code>, <code>costCenter<\/code>, <code>dataClassification<\/code>, <code>app<\/code>, <code>managedBy<\/code><\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">12. Security Considerations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Identity and access model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Resource access<\/strong> is controlled by Azure RBAC at the Azure resource level.<\/li>\n<li><strong>Portal-level roles<\/strong> within the Metrics Advisor portal may exist (admin\/viewer style). 
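<\/li>\n<\/ul>\n\n\n\n<p>For key-based programmatic access, Metrics Advisor has historically required two keys sent as HTTP headers: the Azure resource key plus an API key generated in the Metrics Advisor portal. A hedged sketch; verify the exact header names in the current REST API reference before relying on them, and note that the environment variable names here are arbitrary:<\/p>\n\n\n\n

```python
# Build auth headers for direct REST calls to a Metrics Advisor endpoint.
# ASSUMPTION: the two header names reflect the historical dual-key scheme
# (Azure resource key + portal-generated API key); confirm them in the
# current REST reference. Keys are read from the environment (variable
# names are illustrative) so they never land in source control.
import os

def auth_headers() -> dict:
    return {
        "Ocp-Apim-Subscription-Key": os.environ["MA_SUBSCRIPTION_KEY"],
        "x-api-key": os.environ["MA_API_KEY"],
        "Content-Type": "application/json",
    }

# Demo placeholders only -- export real keys before actual use.
os.environ.setdefault("MA_SUBSCRIPTION_KEY", "<azure-resource-key>")
os.environ.setdefault("MA_API_KEY", "<metrics-advisor-api-key>")
print(sorted(auth_headers()))
```

\n\n\n\n<ul class=\"wp-block-list\">\n<li>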
Align portal roles with RBAC and operational responsibilities.<\/li>\n<li>For programmatic access, use:<\/li>\n<li><strong>Key-based auth<\/strong> (protect keys as secrets)<\/li>\n<li><strong>Entra ID<\/strong> auth if supported by the service\/API version (verify in docs)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Encryption<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data in transit uses HTTPS.<\/li>\n<li>Data at rest is managed by Azure (service-managed). For customer-managed keys (CMK) support, <strong>verify<\/strong> whether Metrics Advisor supports CMK in your region and SKU.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network exposure<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If the service uses public endpoints, restrict access where possible:<\/li>\n<li>Use organizational controls, conditional access, and limited admin access.<\/li>\n<li>For private connectivity (Private Link), <strong>verify availability<\/strong> for Azure AI Metrics Advisor.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secrets handling<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data feed credentials (SQL usernames\/passwords, storage keys, service principals) are sensitive.<\/li>\n<li>Store secrets in <strong>Azure Key Vault<\/strong> where possible and use integration patterns supported by the service (if direct Key Vault references are not supported, ensure secure operational handling).<\/li>\n<li>Rotate keys and credentials periodically.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Audit\/logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enable Azure platform logging\/diagnostics when available for the resource type (verify diagnostic settings support).<\/li>\n<li>Log webhook receiver activity (Functions\/Logic Apps) and store logs in Log Analytics.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Determine data classification of your metrics (PII rarely 
belongs in Metrics Advisor; keep metrics aggregated).<\/li>\n<li>Confirm data residency requirements are met by the region you choose.<\/li>\n<li>Review Microsoft compliance offerings for Azure AI services relevant to your organization (verify current compliance scope).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Common security mistakes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Using overly privileged SQL accounts for data feeds.<\/li>\n<li>Allowing anyone to create or edit hooks (data leakage risk via webhooks).<\/li>\n<li>Hardcoding keys in application code or scripts.<\/li>\n<li>Monitoring granular user-level identifiers unnecessarily (high cardinality + privacy risk).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secure deployment recommendations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Separate dev\/test\/prod subscriptions or resource groups.<\/li>\n<li>Use read-only data source credentials for data feeds.<\/li>\n<li>Use webhooks that require authentication and validate payloads.<\/li>\n<li>Review alerts for sensitive information before sending to broad email lists.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">13. Limitations and Gotchas<\/h2>\n\n\n\n<blockquote>\n<p>Verify current limits and supported features in the official documentation, as these can change.<\/p>\n<\/blockquote>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Service lifecycle changes:<\/strong> Azure AI services sometimes get rebranded, merged, or retired. 
Verify Azure AI Metrics Advisor\u2019s current lifecycle status.<\/li>\n<li><strong>Data source support is limited:<\/strong> Not every database\/store is supported as a first-class connector.<\/li>\n<li><strong>High-cardinality explosion:<\/strong> Dimensions like <code>tenantId<\/code> or <code>userId<\/code> can create massive numbers of time series and cost.<\/li>\n<li><strong>Timezone and granularity mismatch:<\/strong> The most common cause of \u201cmissing data\u201d and false anomalies.<\/li>\n<li><strong>Cold start \/ insufficient history:<\/strong> Detection quality improves with enough historical data.<\/li>\n<li><strong>Query cost and throttling:<\/strong> Frequent ingestion queries can load your SQL\/ADX systems.<\/li>\n<li><strong>Alert fatigue:<\/strong> Poorly tuned sensitivity or too many metrics can overwhelm teams.<\/li>\n<li><strong>Webhook security:<\/strong> Webhooks can become an exfiltration path if not locked down.<\/li>\n<li><strong>Environment parity:<\/strong> Detection configs that work in production may not work in dev due to low traffic and sparse data.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">14. Comparison with Alternatives<\/h2>\n\n\n\n<p>Azure AI Metrics Advisor fits a specific niche: managed anomaly detection and diagnostics for time-series metrics. 
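<\/p>\n\n\n\n<p>For a sense of what the self-managed end of that spectrum involves, here is a deliberately naive rolling z-score detector; real alternatives (ADX series functions, Prophet, and similar) add the seasonality handling this sketch omits:<\/p>\n\n\n\n

```python
# Naive self-managed detector: flag points deviating more than `threshold`
# standard deviations from a trailing window. No seasonality modeling --
# a strong daily or weekly cycle would produce false positives here.
import statistics

def rolling_zscore_anomalies(values, window=24, threshold=3.0):
    """Return indices whose value is a >threshold-sigma outlier vs the trailing window."""
    flagged = []
    for i in range(window, len(values)):
        win = values[i - window:i]
        mean, sd = statistics.mean(win), statistics.stdev(win)
        if sd > 0 and abs(values[i] - mean) / sd > threshold:
            flagged.append(i)
    return flagged

series = [100 + (i % 5) for i in range(48)]  # flat-ish synthetic hourly series
series[40] = 10                              # injected drop
print(rolling_zscore_anomalies(series))      # [40]
```

\n\n\n\n<p>Everything around this function (scheduling, alert routing, incident grouping, a UI) is what the managed service provides and what the self-managed route would otherwise have to build.<\/p>\n\n\n\n<p>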
Depending on your needs, alternatives may be a better fit.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Comparison table<\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Option<\/th>\n<th>Best For<\/th>\n<th>Strengths<\/th>\n<th>Weaknesses<\/th>\n<th>When to Choose<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Azure AI Metrics Advisor<\/strong><\/td>\n<td>KPI anomaly detection + incident\/root cause workflows<\/td>\n<td>Managed anomaly detection; multi-dimensional analysis; investigation portal; alert hooks<\/td>\n<td>Limited connectors; cost scales with time series; lifecycle considerations (verify)<\/td>\n<td>When you need anomalies + diagnostics for business\/product metrics at scale<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure Monitor (Metrics\/Logs\/Alerts)<\/strong><\/td>\n<td>Infrastructure\/app monitoring for Azure resources and logs<\/td>\n<td>Native Azure telemetry; KQL; strong alerting and integration; mature ops ecosystem<\/td>\n<td>Threshold tuning can be hard for seasonal KPIs; root cause across dimensions is manual<\/td>\n<td>When monitoring Azure resources, logs, and service health with operational alerting<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure Data Explorer (ADX) anomaly functions<\/strong><\/td>\n<td>Custom analytics + anomaly detection in query layer<\/td>\n<td>Powerful time-series analytics; flexible; integrates with dashboards<\/td>\n<td>You build workflows and alerting; more engineering effort<\/td>\n<td>When you already use ADX and want custom detection logic with full control<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure Machine Learning (custom models)<\/strong><\/td>\n<td>Highly customized anomaly detection<\/td>\n<td>Maximum flexibility; custom features\/models; MLOps<\/td>\n<td>Higher complexity; requires ML engineering and ongoing maintenance<\/td>\n<td>When managed service detection doesn\u2019t meet accuracy\/explainability requirements<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure AI Anomaly 
Detector<\/strong><\/td>\n<td>Single-series anomaly detection via API (verify current status)<\/td>\n<td>Simple API for anomaly detection<\/td>\n<td>Not a full monitoring + incident portal; multi-dimensional workflows may be limited<\/td>\n<td>When you want API-based anomaly detection embedded into your app<\/td>\n<\/tr>\n<tr>\n<td><strong>AWS Lookout for Metrics<\/strong><\/td>\n<td>Managed KPI anomaly detection on AWS<\/td>\n<td>AWS-native connectors and workflows<\/td>\n<td>Cloud lock-in; different data sources; migration effort<\/td>\n<td>When your data and ops are primarily on AWS<\/td>\n<\/tr>\n<tr>\n<td><strong>Google Cloud Monitoring + custom detection<\/strong><\/td>\n<td>Monitoring in GCP<\/td>\n<td>Native monitoring ecosystem<\/td>\n<td>KPI anomaly workflows may require custom work<\/td>\n<td>When your workloads are primarily on GCP<\/td>\n<\/tr>\n<tr>\n<td><strong>Open-source (Prophet, Kats, ADTK) self-managed<\/strong><\/td>\n<td>Full control, offline\/batch detection<\/td>\n<td>No service lock-in; customizable<\/td>\n<td>You operate pipelines, scaling, alerting, UI<\/td>\n<td>When you can invest in platform engineering and need portability<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">15. Real-World Example<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Enterprise example: Global e-commerce KPI monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> A global retailer has revenue, orders, and payment success KPIs by region\/channel. 
They struggle with alert fatigue from static thresholds and slow diagnosis when a subset of regions fails.<\/li>\n<li><strong>Proposed architecture:<\/strong><\/li>\n<li>Aggregations computed hourly into Azure Data Explorer or Azure SQL Database<\/li>\n<li>Azure AI Metrics Advisor data feeds ingest <code>Orders<\/code>, <code>Revenue<\/code>, <code>PaymentSuccessRate<\/code> with dimensions <code>Region<\/code>, <code>Channel<\/code>, <code>PaymentProvider<\/code><\/li>\n<li>Webhook alerts to Logic Apps:<ul>\n<li>Create an incident ticket in ITSM<\/li>\n<li>Post a notification to the on-call channel<\/li>\n<\/ul>\n<\/li>\n<li>Operations dashboard uses existing BI\/Grafana; Metrics Advisor used for anomalies\/incidents<\/li>\n<li><strong>Why chosen:<\/strong><\/li>\n<li>Managed anomaly detection reduces threshold maintenance<\/li>\n<li>Root cause analysis highlights which provider\/region\/channel contributes most<\/li>\n<li><strong>Expected outcomes:<\/strong><\/li>\n<li>Faster detection of partial outages (one provider\/region)<\/li>\n<li>Reduced false positives vs fixed thresholds<\/li>\n<li>Improved mean time to diagnose (MTTD\/MTTR)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Startup\/small-team example: SaaS signups and activation monitoring<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> A SaaS startup monitors signups and activation rate. 
Marketing campaigns create natural spikes; static alerts generate noise, and the team misses real drops.<\/li>\n<li><strong>Proposed architecture:<\/strong><\/li>\n<li>Daily aggregates in Azure SQL Database<\/li>\n<li>Azure AI Metrics Advisor monitors <code>Signups<\/code>, <code>ActivationRate<\/code> by <code>Channel<\/code> and <code>Country<\/code> (low cardinality)<\/li>\n<li>Email alerts to founders + on-call engineer<\/li>\n<li><strong>Why chosen:<\/strong><\/li>\n<li>Minimal engineering effort vs building custom detection<\/li>\n<li>Easy triage in the portal<\/li>\n<li><strong>Expected outcomes:<\/strong><\/li>\n<li>Earlier detection of onboarding regressions<\/li>\n<li>Fewer noisy alerts during campaigns<\/li>\n<li>Clearer ownership and actionability for KPI alerts<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">16. FAQ<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>What kind of data does Azure AI Metrics Advisor analyze?<\/strong><br\/>\n   Time-series metric data (numeric measures over time), often with dimensions for slicing (region, product, channel).<\/p>\n<\/li>\n<li>\n<p><strong>Is Azure AI Metrics Advisor the same as Azure Monitor?<\/strong><br\/>\n   No. Azure Monitor is a broad monitoring platform for Azure resources and logs. Azure AI Metrics Advisor focuses on anomaly detection and diagnostics for time-series metrics, often business KPIs.<\/p>\n<\/li>\n<li>\n<p><strong>Do I need machine learning expertise to use it?<\/strong><br\/>\n   Not for basic usage. You configure data feeds and detection settings; the service provides managed modeling. ML expertise helps when tuning and designing metrics\/dimensions.<\/p>\n<\/li>\n<li>\n<p><strong>What is a \u201ctime series\u201d in pricing terms?<\/strong><br\/>\n   Typically, it\u2019s a unique metric combined with a specific dimension value set (e.g., Revenue for Region=EU, Channel=Paid). 
Verify the exact definition on the pricing page.<\/p>\n<\/li>\n<li>\n<p><strong>How much history do I need before anomalies are reliable?<\/strong><br\/>\n   More history usually improves modeling (especially for seasonality). If you only have a small amount of data, expect less reliable detection and more tuning.<\/p>\n<\/li>\n<li>\n<p><strong>Can it detect seasonal anomalies (like \u201clower than expected Monday traffic\u201d)?<\/strong><br\/>\n   That is a primary use case. It\u2019s designed to detect deviations from expected patterns rather than absolute thresholds.<\/p>\n<\/li>\n<li>\n<p><strong>Can it monitor near-real-time metrics?<\/strong><br\/>\n   It supports scheduled ingestion at defined granularity. \u201cNear-real-time\u201d depends on supported ingestion frequency and your data availability. Verify minimum granularity\/frequency in official docs.<\/p>\n<\/li>\n<li>\n<p><strong>Can I use it with Power BI directly?<\/strong><br\/>\n   Metrics Advisor is not a BI tool. You can monitor the same underlying dataset that Power BI uses, but the integration is typically indirect via the data source.<\/p>\n<\/li>\n<li>\n<p><strong>Does it support private networking (Private Link)?<\/strong><br\/>\n   Possibly for some Azure AI services, but support can vary. Verify Azure AI Metrics Advisor private networking support in official docs for your region.<\/p>\n<\/li>\n<li>\n<p><strong>How do I avoid alert fatigue?<\/strong><br\/>\n   Limit dimension cardinality, tune sensitivity, use incident grouping, and create routing rules so only actionable anomalies notify humans.<\/p>\n<\/li>\n<li>\n<p><strong>What happens if the data feed query returns late or missing points?<\/strong><br\/>\n   You may see \u201cmissing data\u201d issues or false anomalies. Ensure ingestion offsets\/time zones match how your data lands.<\/p>\n<\/li>\n<li>\n<p><strong>Can I automate configuration with Terraform or CLI?<\/strong><br\/>\n   Resource provisioning can be automated via IaC. 
Portal-level configurations (data feeds, configs) may require APIs\/SDKs. Verify the current API support and provider support.<\/p>\n<\/li>\n<li>\n<p><strong>Can I send alerts to Teams\/Slack?<\/strong><br\/>\n   If webhooks are supported, you can send them to a Logic App\/Function that posts to Teams\/Slack. Verify hook capabilities and secure the endpoint.<\/p>\n<\/li>\n<li>\n<p><strong>Is it suitable for per-user monitoring?<\/strong><br\/>\n   Usually not. Per-user IDs create massive cardinality and privacy risk. Prefer aggregated metrics.<\/p>\n<\/li>\n<li>\n<p><strong>What\u2019s the recommended way to structure metric tables?<\/strong><br\/>\n   Use a narrow fact table: timestamp, dimension columns, and numeric measures at the monitoring granularity (hour\/day), indexed for fast reads.<\/p>\n<\/li>\n<li>\n<p><strong>Can it monitor multiple metrics from one query?<\/strong><br\/>\n   Often yes (one timestamp + dimensions + multiple measure columns). Verify data feed schema rules in the docs.<\/p>\n<\/li>\n<li>\n<p><strong>How do I handle deployments across dev\/test\/prod?<\/strong><br\/>\n   Use separate resources\/environments, keep configs versioned (via APIs\/IaC where supported), and validate detection settings before promoting.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">17. 
Top Online Resources to Learn Azure AI Metrics Advisor<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Resource Type<\/th>\n<th>Name<\/th>\n<th>Why It Is Useful<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Official documentation<\/td>\n<td>Azure AI Metrics Advisor documentation (Learn) \u2014 https:\/\/learn.microsoft.com\/azure\/ai-services\/metrics-advisor\/<\/td>\n<td>Canonical docs for concepts, connectors, APIs, and portal workflows<\/td>\n<\/tr>\n<tr>\n<td>Official overview<\/td>\n<td>Overview page (verify current URL under Learn) \u2014 https:\/\/learn.microsoft.com\/azure\/ai-services\/metrics-advisor\/overview<\/td>\n<td>Best starting point for purpose, workflow, and terminology<\/td>\n<\/tr>\n<tr>\n<td>Official pricing<\/td>\n<td>Azure Pricing (search \u201cMetrics Advisor\u201d) \u2014 https:\/\/azure.microsoft.com\/pricing\/<\/td>\n<td>Official pricing meters and regional availability<\/td>\n<\/tr>\n<tr>\n<td>Pricing calculator<\/td>\n<td>Azure Pricing Calculator \u2014 https:\/\/azure.microsoft.com\/pricing\/calculator\/<\/td>\n<td>Build scenario-based estimates without guessing prices<\/td>\n<\/tr>\n<tr>\n<td>SDK docs<\/td>\n<td>Azure SDK for Python\/Java\/.NET (search \u201cMetrics Advisor\u201d on Learn) \u2014 https:\/\/learn.microsoft.com\/azure\/developer\/<\/td>\n<td>Shows authentication and automation patterns (verify current SDK status)<\/td>\n<\/tr>\n<tr>\n<td>REST API reference<\/td>\n<td>Azure REST API reference (search \u201cMetrics Advisor\u201d) \u2014 https:\/\/learn.microsoft.com\/rest\/api\/<\/td>\n<td>API details for automation (verify current API version)<\/td>\n<\/tr>\n<tr>\n<td>Samples<\/td>\n<td>GitHub (search Microsoft samples for Metrics Advisor) \u2014 https:\/\/github.com\/Azure-Samples<\/td>\n<td>Practical code samples; validate repo freshness and compatibility<\/td>\n<\/tr>\n<tr>\n<td>Architecture guidance<\/td>\n<td>Azure Architecture Center \u2014 
https:\/\/learn.microsoft.com\/azure\/architecture\/<\/td>\n<td>Patterns for monitoring, alerting, and data platforms that commonly feed KPI monitoring<\/td>\n<\/tr>\n<tr>\n<td>Product updates<\/td>\n<td>Azure Updates \u2014 https:\/\/azure.microsoft.com\/updates\/<\/td>\n<td>Track lifecycle changes, region rollouts, and retirements (critical for long-term planning)<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">18. Training and Certification Providers<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Institute<\/th>\n<th>Suitable Audience<\/th>\n<th>Likely Learning Focus<\/th>\n<th>Mode<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>DevOpsSchool.com<\/td>\n<td>DevOps engineers, SREs, cloud engineers<\/td>\n<td>Azure ops, monitoring patterns, automation, integrations<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.devopsschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>ScmGalaxy.com<\/td>\n<td>Beginners to intermediate engineers<\/td>\n<td>DevOps fundamentals, tooling, cloud basics<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.scmgalaxy.com\/<\/td>\n<\/tr>\n<tr>\n<td>CLoudOpsNow.in<\/td>\n<td>Cloud operations teams<\/td>\n<td>Cloud operations practices, monitoring and reliability<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.cloudopsnow.in\/<\/td>\n<\/tr>\n<tr>\n<td>SreSchool.com<\/td>\n<td>SREs and platform teams<\/td>\n<td>SRE practices, incident response, reliability engineering<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.sreschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>AiOpsSchool.com<\/td>\n<td>Ops + data\/AI practitioners<\/td>\n<td>AIOps concepts, anomaly detection for operations<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.aiopsschool.com\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">19. 
Top Trainers<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Platform\/Site<\/th>\n<th>Likely Specialization<\/th>\n<th>Suitable Audience<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>RajeshKumar.xyz<\/td>\n<td>DevOps\/cloud training content (verify specifics)<\/td>\n<td>Engineers seeking hands-on guidance<\/td>\n<td>https:\/\/rajeshkumar.xyz\/<\/td>\n<\/tr>\n<tr>\n<td>devopstrainer.in<\/td>\n<td>DevOps training platform (verify offerings)<\/td>\n<td>Beginners to intermediate DevOps learners<\/td>\n<td>https:\/\/www.devopstrainer.in\/<\/td>\n<\/tr>\n<tr>\n<td>devopsfreelancer.com<\/td>\n<td>Freelance DevOps guidance (treat as a platform)<\/td>\n<td>Teams needing short-term expert help<\/td>\n<td>https:\/\/www.devopsfreelancer.com\/<\/td>\n<\/tr>\n<tr>\n<td>devopssupport.in<\/td>\n<td>DevOps support\/training platform (verify services)<\/td>\n<td>Ops teams needing troubleshooting help<\/td>\n<td>https:\/\/www.devopssupport.in\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">20. 
Top Consulting Companies<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Company Name<\/th>\n<th>Likely Service Area<\/th>\n<th>Where They May Help<\/th>\n<th>Consulting Use Case Examples<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>cotocus.com<\/td>\n<td>Cloud\/DevOps consulting (verify portfolio)<\/td>\n<td>Architecture, automation, operational readiness<\/td>\n<td>KPI monitoring rollout, alerting integration, secure webhook design<\/td>\n<td>https:\/\/cotocus.com\/<\/td>\n<\/tr>\n<tr>\n<td>DevOpsSchool.com<\/td>\n<td>DevOps\/cloud consulting and training<\/td>\n<td>Implementations, enablement, operational processes<\/td>\n<td>Onboarding metrics, building incident playbooks, DevOps\/SRE coaching<\/td>\n<td>https:\/\/www.devopsschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>DEVOPSCONSULTING.IN<\/td>\n<td>DevOps consulting services (verify service catalog)<\/td>\n<td>CI\/CD, monitoring, reliability practices<\/td>\n<td>Integrating anomaly alerts with ITSM\/ChatOps, governance and access controls<\/td>\n<td>https:\/\/www.devopsconsulting.in\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">21. 
Career and Learning Roadmap<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn before Azure AI Metrics Advisor<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure fundamentals:<\/li>\n<li>Resource groups, regions, RBAC, managed identities<\/li>\n<li>Data fundamentals:<\/li>\n<li>Time-series basics (granularity, seasonality, missing data)<\/li>\n<li>SQL querying and indexing for aggregation tables<\/li>\n<li>Monitoring fundamentals:<\/li>\n<li>SLI\/SLO concepts, alert fatigue, incident response basics<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn after Azure AI Metrics Advisor<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Monitor + Log Analytics for infrastructure and application telemetry<\/li>\n<li>Data platform skills:<\/li>\n<li>Azure Data Explorer time-series analytics<\/li>\n<li>Data pipelines (ADF\/Synapse\/Databricks) for generating KPI tables<\/li>\n<li>Automation:<\/li>\n<li>SDK\/API-based configuration<\/li>\n<li>Logic Apps\/Functions for alert routing and ITSM integration<\/li>\n<li>MLOps (optional):<\/li>\n<li>Azure Machine Learning for custom anomaly models when needed<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Job roles that use it<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Engineer \/ DevOps Engineer<\/li>\n<li>Site Reliability Engineer (SRE)<\/li>\n<li>Data Engineer \/ Analytics Engineer<\/li>\n<li>Platform Engineer<\/li>\n<li>Solutions Architect<\/li>\n<li>Product Analyst \/ Growth Engineer (when monitoring product KPIs)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Certification path (Azure)<\/h3>\n\n\n\n<p>Azure certifications change frequently; choose ones aligned with your role:\n&#8211; AZ-900 (Azure Fundamentals)\n&#8211; AZ-104 (Azure Administrator)\n&#8211; AZ-305 (Azure Solutions Architect)\n&#8211; DP-203 (Data Engineering on Azure)\nFor AI-focused paths, review Azure AI certifications available at the time. 
<strong>Verify current certification offerings<\/strong> on Microsoft Learn.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Project ideas for practice<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build a \u201cKPI monitoring warehouse\u201d in Azure SQL\/ADX with hourly aggregates and monitor 10 KPIs.<\/li>\n<li>Integrate anomaly alerts into a Logic App that:<\/li>\n<li>Opens a ticket<\/li>\n<li>Posts to Teams<\/li>\n<li>Enriches with a link to the affected dashboard<\/li>\n<li>Create a cost-optimized monitoring plan:<\/li>\n<li>Compare dimension cardinality options and document tradeoffs<\/li>\n<li>Implement a \u201ctuning loop\u201d:<\/li>\n<li>Weekly review of anomalies<\/li>\n<li>Adjust sensitivity and document outcomes<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">22. Glossary<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Anomaly:<\/strong> A data point or pattern that deviates from expected behavior.<\/li>\n<li><strong>Incident:<\/strong> A grouped set of anomalies that represent a broader event requiring investigation.<\/li>\n<li><strong>Metric:<\/strong> A numeric measure tracked over time (revenue, error rate, latency).<\/li>\n<li><strong>Dimension:<\/strong> An attribute used to segment a metric (region, SKU, channel).<\/li>\n<li><strong>Time series:<\/strong> A sequence of metric values for a specific metric + dimension combination over time.<\/li>\n<li><strong>Granularity:<\/strong> The time bucket size (hourly, daily).<\/li>\n<li><strong>Seasonality:<\/strong> Repeating patterns over time (daily cycles, weekly cycles).<\/li>\n<li><strong>Alert hook:<\/strong> A notification channel configuration used to deliver alerts (email\/webhook\u2014verify).<\/li>\n<li><strong>Cardinality:<\/strong> The number of distinct values in a dimension; impacts how many time series exist.<\/li>\n<li><strong>Ingestion:<\/strong> The process of pulling\/reading metric data from the source into the service on a schedule.<\/li>\n<li><strong>False 
positive:<\/strong> Alert\/anomaly detected when nothing actionable is wrong.<\/li>\n<li><strong>False negative:<\/strong> A real issue that was not detected by the system.<\/li>\n<li><strong>SLI\/SLO:<\/strong> Service Level Indicator \/ Objective\u2014operational reliability metrics and targets.<\/li>\n<li><strong>KPI:<\/strong> Key Performance Indicator\u2014business or operational metric tracked for performance.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">23. Summary<\/h2>\n\n\n\n<p>Azure AI Metrics Advisor is a managed <strong>AI + Machine Learning<\/strong> service in <strong>Azure<\/strong> that continuously monitors time-series metrics, detects anomalies, groups them into incidents, and helps diagnose contributing dimensions. It matters because many real-world KPIs are seasonal and multi-dimensional, making static thresholds noisy and manual triage slow.<\/p>\n\n\n\n<p>Architecturally, it fits between your metric stores (Azure SQL\/ADX\/Storage) and your incident workflows (email\/webhooks\/automation). Cost is primarily driven by the number of time series (dimension cardinality), ingestion frequency, and the scale of monitored metrics\u2014so start small, aggregate wisely, and expand intentionally. From a security standpoint, apply least privilege to data sources, protect keys\/secrets, secure webhook endpoints, and validate whether private networking and Entra ID authentication meet your requirements.<\/p>\n\n\n\n<p>Use Azure AI Metrics Advisor when you need managed anomaly detection plus investigation workflows for KPIs. 
If you only need simple thresholds, Azure Monitor may be simpler; if you need fully custom models, Azure Machine Learning or ADX-based analytics may be better.<\/p>\n\n\n\n<p>Next step: implement the lab in this guide, then productionize it by standardizing metric tables, defining ownership and incident playbooks, and automating onboarding and alert routing through APIs and Azure-native automation.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI + Machine Learning<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,40],"tags":[],"class_list":["post-357","post","type-post","status-publish","format-standard","hentry","category-ai-machine-learning","category-azure"],"_links":{"self":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/357","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/comments?post=357"}],"version-history":[{"count":0,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/357\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/media?parent=357"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/categories?post=357"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/tags?post=357"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}