{"id":380,"date":"2026-04-13T20:54:23","date_gmt":"2026-04-13T20:54:23","guid":{"rendered":"https:\/\/www.devopsschool.com\/tutorials\/azure-data-share-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-analytics\/"},"modified":"2026-04-13T20:54:23","modified_gmt":"2026-04-13T20:54:23","slug":"azure-data-share-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-analytics","status":"publish","type":"post","link":"https:\/\/www.devopsschool.com\/tutorials\/azure-data-share-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-analytics\/","title":{"rendered":"Azure Data Share Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Analytics"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Category<\/h2>\n\n\n\n<p>Analytics<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction<\/h2>\n\n\n\n<p>Azure Data Share is an Azure service designed to <strong>share data securely and repeatedly<\/strong> with other people, teams, or organizations\u2014without building custom pipelines for every partner.<\/p>\n\n\n\n<p>In simple terms: <strong>a data provider publishes a share<\/strong>, <strong>a data consumer accepts an invitation<\/strong>, and Azure Data Share helps deliver the data to the consumer\u2019s chosen destination in Azure. You can deliver <strong>one-time snapshots<\/strong> or <strong>scheduled updates<\/strong>, so consumers keep getting refreshed data without ongoing manual exports.<\/p>\n\n\n\n<p>In more technical terms, Azure Data Share is a <strong>managed data-sharing orchestration service<\/strong>. It uses Azure identity (Microsoft Entra ID) for access control and coordinates <strong>dataset publication, invitations, subscriptions, and snapshot-based delivery<\/strong> to supported Azure data stores (for example, Azure Storage and some Azure SQL-based services). 
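<\/p>\n\n\n\n<p>As a concrete sketch of the provider-side flow (create an account, create a share, invite a consumer), the Azure CLI offers an optional <code>datashare<\/code> extension. Treat the commands below as illustrative only: the extension\u2019s commands and flags can change, and every resource name shown is a placeholder; verify against the current CLI reference before use.<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Illustrative provider-side flow (assumes the optional \"datashare\" CLI extension)\naz extension add --name datashare\n\n# 1) Create a Data Share account in a supported region\naz datashare account create --resource-group provider-rg --name provider-datashare --location eastus\n\n# 2) Create a snapshot-based (copy-based) share in that account\naz datashare create --resource-group provider-rg --account-name provider-datashare --name sales-share --share-kind \"CopyBased\" --description \"Weekly curated sales snapshot\"\n\n# 3) Invite a consumer via their Entra ID email address\naz datashare invitation create --resource-group provider-rg --account-name provider-datashare --share-name sales-share --name sales-invitation --target-email \"analyst@consumer.example\"\n<\/code><\/pre>\n\n\n\n<p>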
It focuses on <strong>governed distribution<\/strong>, not transformation\u2014meaning it\u2019s not an ETL tool.<\/p>\n\n\n\n<p>The problem it solves: teams often need to share data across <strong>subscriptions, tenants, and external partners<\/strong>. Common approaches (SAS tokens, ad-hoc exports, FTP, bespoke copy scripts, custom ADF pipelines) are error-prone, hard to audit, and difficult to operate at scale. Azure Data Share provides a structured, repeatable way to share datasets with clearer governance and operational visibility.<\/p>\n\n\n\n<blockquote>\n<p>Service status note: <strong>Azure Data Share is an active Azure service<\/strong> as of the latest generally available documentation. Always verify current availability and any recent changes in the official docs and Azure Updates before adopting it broadly in production.<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">2. What is Azure Data Share?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Official purpose<\/h3>\n\n\n\n<p>Azure Data Share enables organizations to <strong>share data with other organizations and users<\/strong> in a controlled manner. 
The service is intended to reduce friction in inter-team and inter-company data distribution by providing a managed sharing model (provider \u2192 consumer).<\/p>\n\n\n\n<p>Official documentation: https:\/\/learn.microsoft.com\/azure\/data-share\/<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Core capabilities (what it does)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Create shares<\/strong> (provider side) that package one or more datasets.<\/li>\n<li><strong>Invite recipients<\/strong> (consumers) using Microsoft Entra ID identities.<\/li>\n<li><strong>Enable consumers to subscribe<\/strong> and map shared datasets to their Azure destinations.<\/li>\n<li><strong>Deliver data snapshots<\/strong> (and scheduled updates, depending on supported dataset types and configuration).<\/li>\n<li>Provide a central place to manage and audit sharing activities (at least through Azure resource management and logs; deeper details depend on configuration\u2014verify in official docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Major components<\/h3>\n\n\n\n<p>Azure Data Share typically involves:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data Share account<\/strong>: The Azure resource you create to manage shares and invitations.<\/li>\n<li><strong>Share (provider)<\/strong>: A logical container that includes datasets and invitation settings.<\/li>\n<li><strong>Dataset<\/strong>: A pointer\/configuration to the data being shared from a supported source (for example, a storage container or a SQL table).<\/li>\n<li><strong>Invitation<\/strong>: An invitation sent to a recipient (consumer).<\/li>\n<li><strong>Share subscription (consumer)<\/strong>: The consumer\u2019s accepted subscription that maps datasets to a destination in the consumer\u2019s environment.<\/li>\n<li><strong>Snapshot \/ synchronization<\/strong>: The delivery action (manual or scheduled) that copies or syncs data to the consumer destination (exact behavior varies by dataset 
type\u2014verify supported data stores and snapshot behavior in the docs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Service type<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Managed orchestration service<\/strong> for data sharing (control plane + coordinated data movement for supported sources).<\/li>\n<li>Not a database, not an ETL engine, and not a general-purpose file transfer service.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scope and geography<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A <strong>Data Share account is created in an Azure region<\/strong> (regional Azure resource).<\/li>\n<li>Data sources and destinations may be in the same or different regions depending on supported scenarios and constraints. <strong>Verify region and cross-region\/cross-tenant behaviors<\/strong> in the official \u201csupported data stores\u201d and \u201climits\u201d documentation.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How it fits into the Azure ecosystem (Analytics context)<\/h3>\n\n\n\n<p>Azure Data Share sits in the <strong>Analytics<\/strong> category because it supports:\n&#8211; <strong>Data product distribution<\/strong> across business units and partner ecosystems.\n&#8211; Reliable delivery of curated datasets into analytics destinations (storage, SQL-based analytics systems).\n&#8211; Repeatable refresh of shared datasets without reinventing pipelines.<\/p>\n\n\n\n<p>Common adjacent services:\n&#8211; <strong>Azure Storage \/ ADLS Gen2<\/strong> for lake-based sharing targets.\n&#8211; <strong>Azure Synapse Analytics \/ Azure SQL<\/strong> for relational\/warehouse sharing scenarios.\n&#8211; <strong>Azure Data Factory \/ Synapse Pipelines<\/strong> when you need transformation and complex workflows (Azure Data Share is primarily for sharing\/delivery, not transformation).\n&#8211; <strong>Microsoft Purview<\/strong> for governance and cataloging (integration and workflows should be verified in official docs for 
your environment).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">3. Why use Azure Data Share?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Business reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Faster partner onboarding<\/strong>: Share curated datasets with suppliers, customers, or subsidiaries without long custom engineering cycles.<\/li>\n<li><strong>Clear ownership<\/strong>: Data producers publish \u201cofficial\u201d datasets with controlled refresh.<\/li>\n<li><strong>Repeatable delivery<\/strong>: Scheduled snapshots can reduce recurring manual exports.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Technical reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Standardized sharing model<\/strong> (provider \u2192 consumer) with invitations and subscriptions.<\/li>\n<li><strong>Supports multiple datasets per share<\/strong> so consumers can receive a coherent dataset package.<\/li>\n<li><strong>Snapshot-based delivery<\/strong> can be easier to reason about than continuous streaming when consumers need stable, point-in-time data.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operational reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Central management<\/strong>: Shares, recipients, and subscriptions are managed as Azure resources.<\/li>\n<li><strong>Reduced ad-hoc scripts<\/strong>: Avoid fragile copy scripts and one-off pipelines for each partner.<\/li>\n<li><strong>Environment separation<\/strong>: Share across subscriptions\/tenants while keeping provider systems isolated.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security \/ compliance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Uses <strong>Microsoft Entra ID<\/strong> for identity and access patterns.<\/li>\n<li>Helps replace insecure patterns like emailing files, unmanaged SFTP accounts, or long-lived tokens.<\/li>\n<li>Enables clearer auditing and governance (through Azure logs and 
resource history; exact depth depends on your logging configuration).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scalability \/ performance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Designed for <strong>repeatable distribution<\/strong> to multiple consumers.<\/li>\n<li>Lets providers manage <strong>one share<\/strong> consumed by multiple subscribers.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should choose Azure Data Share<\/h3>\n\n\n\n<p>Choose Azure Data Share when you need:\n&#8211; <strong>Productized dataset sharing<\/strong> to internal teams or external organizations.\n&#8211; <strong>Scheduled snapshot delivery<\/strong> to supported Azure destinations.\n&#8211; A managed approach rather than building and operating custom copy pipelines for each consumer.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should not choose Azure Data Share<\/h3>\n\n\n\n<p>Avoid (or reconsider) Azure Data Share if you need:\n&#8211; <strong>Complex transformations, joins, enrichment, or workflows<\/strong> (use Azure Data Factory \/ Synapse Pipelines \/ Databricks).\n&#8211; <strong>Real-time streaming<\/strong> sharing (consider Event Hubs, Kafka, or streaming platforms).\n&#8211; Data sharing to consumers who <strong>cannot receive data into supported Azure services<\/strong> (you may need alternate delivery channels).\n&#8211; Fine-grained row\/column-level sharing at query time (consider database-native sharing patterns or dedicated data products; capabilities differ by platform).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Where is Azure Data Share used?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Industries<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Finance<\/strong>: regulatory reporting, risk datasets, market\/reference data distribution.<\/li>\n<li><strong>Healthcare &amp; life sciences<\/strong>: sharing de-identified datasets for research collaborations (subject to compliance and data handling policies).<\/li>\n<li><strong>Retail &amp; e-commerce<\/strong>: vendor scorecards, sell-through, inventory and promotions datasets.<\/li>\n<li><strong>Manufacturing &amp; supply chain<\/strong>: supplier data exchange, demand forecasts, quality metrics.<\/li>\n<li><strong>Energy &amp; utilities<\/strong>: operational metrics and partner analytics datasets.<\/li>\n<li><strong>Software\/SaaS<\/strong>: sharing customer usage exports or benchmark datasets into customer-owned Azure environments.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Team types<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data platform teams distributing curated datasets.<\/li>\n<li>Analytics engineering teams sharing \u201cgold\u201d datasets.<\/li>\n<li>Security and governance teams defining and enforcing sharing patterns.<\/li>\n<li>Partner engineering teams enabling B2B data exchange.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Workloads and architectures<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Lakehouse\/lake architectures where the consumer wants data in <strong>ADLS Gen2<\/strong> or <strong>Azure Storage<\/strong> for analytics engines (Synapse, Databricks, Fabric\u2014verify target requirements).<\/li>\n<li>Data warehouse distribution scenarios where consumers ingest into <strong>SQL-based analytics<\/strong>.<\/li>\n<li>Cross-subscription \u201cdata mesh\u201d patterns where domains publish datasets for internal consumers.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Real-world deployment contexts<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Hub-and-spoke 
data distribution<\/strong>: central data producer shares to multiple downstream teams.<\/li>\n<li><strong>Partner data exchange<\/strong>: enterprise shares data with vendors and receives counterpart datasets (often with separate shares in each direction).<\/li>\n<li><strong>M&amp;A or multi-tenant enterprises<\/strong>: sharing data across tenants with controlled invitations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Production vs dev\/test<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Dev\/test<\/strong>: great for validating sharing workflows, permissions, and dataset mappings using small sample datasets.<\/li>\n<li><strong>Production<\/strong>: requires tighter governance\u2014data classification, least privilege, monitoring, and lifecycle controls for snapshots and destinations.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">5. Top Use Cases and Scenarios<\/h2>\n\n\n\n<p>Below are realistic scenarios where Azure Data Share is commonly a good fit.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) Partner data distribution (B2B analytics)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: You need to deliver curated datasets to external partners reliably.<\/li>\n<li><strong>Why Azure Data Share fits<\/strong>: Invitation\/subscription model formalizes sharing; scheduled snapshots keep data current.<\/li>\n<li><strong>Example<\/strong>: A retailer shares weekly product performance datasets with a brand partner into the partner\u2019s Azure Storage account.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) Internal \u201cdata product\u201d publishing across subscriptions<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Domain teams in different subscriptions need trusted datasets without direct access to producer storage\/accounts.<\/li>\n<li><strong>Why it fits<\/strong>: Shares provide a controlled distribution 
surface.<\/li>\n<li><strong>Example<\/strong>: The finance domain publishes a \u201cdaily revenue snapshot\u201d share consumed by BI teams in separate subscriptions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) Multi-subsidiary enterprise sharing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Subsidiaries operate in separate tenants\/subscriptions but need standardized datasets.<\/li>\n<li><strong>Why it fits<\/strong>: Supports cross-tenant invitation patterns using Microsoft Entra ID (often via B2B).<\/li>\n<li><strong>Example<\/strong>: HQ shares a master customer reference dataset monthly to regional business units.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) Data exchange for supply chain optimization<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Suppliers and manufacturers need to exchange demand forecasts and inventory levels.<\/li>\n<li><strong>Why it fits<\/strong>: Snapshot-based data exchange gives a clear, auditable \u201cas-of\u201d dataset.<\/li>\n<li><strong>Example<\/strong>: A manufacturer shares weekly forecast tables; suppliers share weekly capacity tables back.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) Secure replacement for SAS-token-based sharing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Teams share storage data using ad-hoc SAS tokens, which become hard to rotate and audit.<\/li>\n<li><strong>Why it fits<\/strong>: Managed sharing process reduces token sprawl and improves governance.<\/li>\n<li><strong>Example<\/strong>: Replace \u201csend SAS URL by email\u201d with Data Share invitations and subscriptions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Publishing reference data (slowly changing datasets)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Many teams need the same reference tables (e.g., currency rates, product taxonomy).<\/li>\n<li><strong>Why it fits<\/strong>: Scheduled 
delivery ensures broad consistency.<\/li>\n<li><strong>Example<\/strong>: Data platform team publishes reference datasets weekly to multiple consumer subscriptions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">7) Standardizing dataset delivery for regulated reporting<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Regulators or auditors require periodic extracts with consistent format and timing.<\/li>\n<li><strong>Why it fits<\/strong>: Snapshot delivery helps ensure consistent point-in-time datasets.<\/li>\n<li><strong>Example<\/strong>: A bank shares monthly risk exposure snapshots to a controlled reporting subscription.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) Sharing curated training datasets for ML teams<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: ML teams need consistent training data extracts that match governance rules.<\/li>\n<li><strong>Why it fits<\/strong>: Providers can publish curated datasets; consumers receive into their own storage.<\/li>\n<li><strong>Example<\/strong>: Central team publishes de-identified feature tables monthly to multiple ML squads.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Data marketplace-like internal distribution<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Teams don\u2019t know what data exists or how to request it.<\/li>\n<li><strong>Why it fits<\/strong>: Azure Data Share can be part of an internal publishing workflow (cataloging and discovery may require additional tooling\u2014verify).<\/li>\n<li><strong>Example<\/strong>: A \u201cdata products\u201d program uses Data Share as the delivery mechanism once access is approved.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) Dev\/test data distribution (controlled snapshots)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Test environments need consistent datasets but should not have full production 
access.<\/li>\n<li><strong>Why it fits<\/strong>: Provide sanitized snapshots into dev\/test destinations.<\/li>\n<li><strong>Example<\/strong>: Weekly sanitized datasets shared to dev subscription for integration testing.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">11) Cross-team modernization (legacy exports \u2192 managed shares)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: Legacy systems export flat files to shared network locations.<\/li>\n<li><strong>Why it fits<\/strong>: Move to Azure Storage and publish via Data Share with governance.<\/li>\n<li><strong>Example<\/strong>: Replace a nightly FTP drop with a share delivering data to consumer storage accounts.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">12) Publishing a \u201csingle source of truth\u201d dataset to BI teams<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: BI teams copy data independently, leading to mismatched numbers.<\/li>\n<li><strong>Why it fits<\/strong>: Provider publishes one authoritative dataset with a defined refresh cadence.<\/li>\n<li><strong>Example<\/strong>: A centralized KPI dataset delivered daily to BI workspaces\u2019 underlying storage.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">6. Core Features<\/h2>\n\n\n\n<blockquote>\n<p>Important: Azure Data Share capabilities depend on the <strong>supported data stores<\/strong> and the chosen dataset type. 
Always confirm supported sources\/targets and snapshot behavior in the official docs:\nhttps:\/\/learn.microsoft.com\/azure\/data-share\/<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">1) Data Share accounts (management boundary)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Provides a regional Azure resource to manage shares, invitations, and subscriptions.<\/li>\n<li><strong>Why it matters<\/strong>: Centralizes administration and access management.<\/li>\n<li><strong>Practical benefit<\/strong>: You can apply Azure RBAC, tags, policies, and logging patterns around a dedicated resource.<\/li>\n<li><strong>Caveat<\/strong>: Regional resource; confirm region availability and any cross-region constraints.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) Shares and datasets (provider-side packaging)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Lets a provider bundle one or more datasets into a share.<\/li>\n<li><strong>Why it matters<\/strong>: Consumers receive a cohesive data package rather than scattered files\/tables.<\/li>\n<li><strong>Practical benefit<\/strong>: Consistency and easier operational ownership (one share, many consumers).<\/li>\n<li><strong>Caveat<\/strong>: Dataset types are limited to supported sources; confirm current list in docs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) Invitations and subscriptions (consumer onboarding)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Provider invites recipients; consumers accept and create a subscription mapping to destinations.<\/li>\n<li><strong>Why it matters<\/strong>: Establishes a clear handshake and permission boundary.<\/li>\n<li><strong>Practical benefit<\/strong>: Reduces accidental oversharing and clarifies who is receiving what.<\/li>\n<li><strong>Caveat<\/strong>: Recipient identity and tenant configuration must align (B2B and guest users may require governance 
controls\u2014verify your Entra settings).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) Snapshot-based delivery (manual and scheduled)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Delivers point-in-time copies of shared datasets to consumer destinations; can be scheduled for periodic refresh.<\/li>\n<li><strong>Why it matters<\/strong>: Stable, repeatable data delivery is often sufficient for analytics and reporting.<\/li>\n<li><strong>Practical benefit<\/strong>: Consumers can process snapshots with predictable cadence.<\/li>\n<li><strong>Caveat<\/strong>: This is not streaming. Snapshot frequency, incremental behavior, and scheduling constraints vary by dataset type\u2014verify.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) Incremental updates (where supported)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Reduces repeated full copies by delivering only changes since the last snapshot (for certain dataset types\/configurations).<\/li>\n<li><strong>Why it matters<\/strong>: Saves time and potentially cost for large datasets.<\/li>\n<li><strong>Practical benefit<\/strong>: Faster refreshes and reduced duplicate storage.<\/li>\n<li><strong>Caveat<\/strong>: Not universally available for every source; confirm supported incremental mechanics in docs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Destination mapping (consumer control)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Consumers select where the incoming data lands (for example, a destination storage container).<\/li>\n<li><strong>Why it matters<\/strong>: Consumers maintain control of their environment and security boundary.<\/li>\n<li><strong>Practical benefit<\/strong>: Consumers can integrate with their analytics stack immediately.<\/li>\n<li><strong>Caveat<\/strong>: Destination permissions must be granted correctly (RBAC\/ACL). 
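<\/li>\n<\/ul>\n\n\n\n<p>A typical destination setup grants a blob data role on the consumer\u2019s landing container. The sketch below uses standard <code>az role assignment create<\/code> syntax, but the correct assignee (for example, the Data Share resource\u2019s managed identity versus an admin user) and the exact role depend on your dataset type and the official setup instructions; all IDs and names are placeholders.<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Grant write access on the destination container (IDs\/names are placeholders;\n# verify the required assignee and role for your dataset type)\naz role assignment create --assignee \"&lt;principal-object-id&gt;\" --role \"Storage Blob Data Contributor\" --scope \"\/subscriptions\/&lt;sub-id&gt;\/resourceGroups\/consumer-rg\/providers\/Microsoft.Storage\/storageAccounts\/consumerstore\/blobServices\/default\/containers\/landing\"\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>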
Misconfiguration is a common source of failures.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">7) Azure RBAC integration<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Uses Azure role-based access control for management operations and, depending on connected sources\/targets, for data access operations.<\/li>\n<li><strong>Why it matters<\/strong>: Aligns with standard Azure governance.<\/li>\n<li><strong>Practical benefit<\/strong>: Least privilege patterns; separation between provider admins and consumer admins.<\/li>\n<li><strong>Caveat<\/strong>: Data plane permissions (like Storage Blob Data Contributor) are distinct from management plane permissions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) Azure resource governance (tags, policy, locks)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Because Data Share accounts and related items are Azure resources, you can apply governance controls.<\/li>\n<li><strong>Why it matters<\/strong>: Large organizations need consistent controls across environments.<\/li>\n<li><strong>Practical benefit<\/strong>: Enforce naming\/tagging, restrict regions, and control who can create shares.<\/li>\n<li><strong>Caveat<\/strong>: Azure Policy coverage varies by resource type\u2014verify available policies for Microsoft.DataShare.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Auditing via Azure activity logs (and diagnostics where available)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Management actions are recorded in Azure Activity Log; diagnostic logs may be available depending on resource support.<\/li>\n<li><strong>Why it matters<\/strong>: You need traceability for \u201cwho shared what and when.\u201d<\/li>\n<li><strong>Practical benefit<\/strong>: Support investigations, compliance evidence, and operations troubleshooting.<\/li>\n<li><strong>Caveat<\/strong>: Data-plane copy details may not be fully 
represented in activity logs; design additional operational telemetry where needed.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) API\/automation (ARM\/REST, templates)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does<\/strong>: Supports infrastructure-as-code patterns to provision and manage Data Share resources.<\/li>\n<li><strong>Why it matters<\/strong>: Repeatable deployments and environment parity (dev\/test\/prod).<\/li>\n<li><strong>Practical benefit<\/strong>: Integrate with CI\/CD.<\/li>\n<li><strong>Caveat<\/strong>: Tooling depth can vary; verify current ARM schema and CLI\/SDK support in official docs.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">7. Architecture and How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">High-level architecture<\/h3>\n\n\n\n<p>Azure Data Share implements a provider\/consumer model:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Provider<\/strong> creates a Data Share account and a share.<\/li>\n<li>Provider adds <strong>datasets<\/strong> referencing supported sources.<\/li>\n<li>Provider sends <strong>invitations<\/strong> to consumers (via Entra identity\/email).<\/li>\n<li><strong>Consumer<\/strong> creates a Data Share account and accepts the invitation.<\/li>\n<li>Consumer configures <strong>destination mapping<\/strong> (where shared data should land).<\/li>\n<li>Provider triggers a <strong>snapshot<\/strong> (or a schedule triggers it).<\/li>\n<li>Data is delivered to consumer destination (implementation depends on dataset type and configuration).<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Control flow vs data flow<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Control plane<\/strong>: invitations, subscriptions, snapshot triggers, and configuration are handled as Azure resource operations.<\/li>\n<li><strong>Data plane<\/strong>: underlying copy\/sync into consumer destinations; permissions to read sources 
and write destinations must be correct.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integrations with related services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Microsoft Entra ID<\/strong>: identities for invitations and access.<\/li>\n<li><strong>Azure Storage \/ ADLS Gen2<\/strong>: common source and destination for analytics files.<\/li>\n<li><strong>Azure SQL \/ Synapse (SQL)<\/strong>: common relational dataset sharing pattern (confirm supported variants).<\/li>\n<li><strong>Azure Monitor<\/strong>: activity logs and diagnostics where available.<\/li>\n<li><strong>Microsoft Purview<\/strong> (optional): governance\/catalog workflows often complement sharing (verify integration points for your scenario).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/authentication model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Identity<\/strong>: Microsoft Entra ID users and service principals (where supported) manage resources through Azure RBAC.<\/li>\n<li><strong>Data access<\/strong>: Dataset sources\/destinations require appropriate data-plane roles (for Storage, roles like <em>Storage Blob Data Reader\/Contributor<\/em> are commonly relevant). Exact required roles depend on the dataset type and sharing workflow\u2014verify the official dataset-specific setup instructions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Networking model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Data Share is a managed service. 
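<\/li>\n<\/ul>\n\n\n\n<p>For example, a locked-down source storage account is often configured with a default-deny firewall plus the \u201ctrusted Microsoft services\u201d bypass. The flags below are standard <code>az storage account update<\/code> options (the account names are placeholders); whether Azure Data Share is actually covered by the bypass for your dataset type must be confirmed in the official networking guidance.<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Deny public network access by default, but allow trusted Azure services\n# (verify that Data Share qualifies for the bypass in your scenario)\naz storage account update --resource-group provider-rg --name providerstore --default-action Deny --bypass AzureServices\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>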
For sources and destinations with network restrictions (storage firewalls, private endpoints), you must ensure the service can access the data.<\/li>\n<li>Whether the \u201ctrusted Microsoft services\u201d bypass applies, or whether private endpoints or additional network configuration are required, depends on the data store and configuration\u2014<strong>verify the latest official guidance<\/strong> for your dataset types.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring\/logging\/governance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Track<\/strong>:\n<ul>\n<li>Share creation\/changes (Activity Log)<\/li>\n<li>Invitation events and subscription acceptance<\/li>\n<li>Snapshot execution outcomes (in the Data Share UI\/portal and logs where available)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Governance<\/strong>:\n<ul>\n<li>Tagging and naming standards<\/li>\n<li>Approval workflows outside the service (ticketing, access review)<\/li>\n<li>Data classification and DLP policies (outside Data Share)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Simple architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart LR\n  P[\"Provider Team\"] --&gt;|Creates share + datasets| DS1[\"Azure Data Share Account (Provider)\"]\n  DS1 --&gt;|Sends invitation| C[\"Consumer Team\"]\n  C --&gt;|Accepts invitation + maps destination| DS2[\"Azure Data Share Account (Consumer)\"]\n  DS1 --&gt;|Triggers snapshot| MOVE[\"Managed delivery\"]\n  SRC[(\"Source: Azure Storage \/ SQL \/ Synapse - supported types\")] --&gt; MOVE --&gt; DEST[(\"Destination: Consumer Azure Storage \/ SQL - supported types\")]\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Production-style architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart TB\n  subgraph Provider_Subscription[\"Provider Subscription \/ Tenant\"]\n    PUsers[\"Provider Admins\/Publishers\"]\n    PDS[\"Azure Data Share Account\"]\n    PLake[(\"ADLS Gen2 \/ Azure Storage - curated zone\")]\n    PSQL[(\"Azure SQL \/ Synapse SQL - curated tables\")]\n    KV1[\"Key Vault (for provider apps, optional)\"]\n    MON1[\"Azure Monitor - Activity Log + Alerts\"]\n  end\n\n  subgraph Consumer_Subscription[\"Consumer Subscription \/ Tenant\"]\n    CUsers[\"Consumer Admins\/Operators\"]\n    CDS[\"Azure Data Share Account\"]\n    CLake[(\"ADLS Gen2 \/ Azure Storage - landing zone\")]\n    DW[(\"Synapse\/SQL\/Databricks\/Fabric - downstream analytics\")]\n    MON2[\"Azure Monitor - Logs + Alerts\"]\n  end\n\n  PUsers --&gt; PDS\n  PDS --&gt;|Datasets reference| PLake\n  PDS --&gt;|Datasets reference| PSQL\n\n  PDS --&gt;|\"Invitation (Entra ID)\"| CUsers\n  CUsers --&gt; CDS\n  CDS --&gt;|Destination mapping| CLake\n\n  PDS --&gt;|Snapshot schedule\/trigger| TRANSFER[\"Snapshot Delivery\"]\n  PLake --&gt; TRANSFER --&gt; CLake\n  PSQL --&gt; TRANSFER --&gt; CLake\n\n  MON1 --- PDS\n  MON2 --- CDS\n  CLake --&gt; DW\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">8. Prerequisites<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Azure account and subscription<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>An active <strong>Azure subscription<\/strong> for the provider.<\/li>\n<li>An active <strong>Azure subscription<\/strong> for the consumer (can be the same subscription for lab\/testing).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Identity \/ tenant requirements<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Microsoft Entra ID tenant access for both provider and consumer identities.<\/li>\n<li>Ability to receive\/accept invitations (email identity must align with the intended Entra user).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Permissions \/ IAM roles (practical minimums)<\/h3>\n\n\n\n<p>Exact roles vary by dataset type; commonly required:<\/p>\n\n\n\n<p><strong>Provider side<\/strong>\n&#8211; Azure RBAC on the Data Share account\/resource group: <strong>Contributor<\/strong> (or a more restrictive custom role) to create shares\/invitations.\n&#8211; Data-plane permissions on source:\n  
&#8211; For Azure Storage: typically <strong>Storage Blob Data Reader<\/strong> (or higher) on the source container\/account, depending on mechanism.\n  &#8211; For SQL-based sources: permissions to read the tables\/views being shared (exact permissions depend on supported configuration).<\/p>\n\n\n\n<p><strong>Consumer side<\/strong>\n&#8211; Azure RBAC on consumer Data Share account: <strong>Contributor<\/strong> to create subscriptions and dataset mappings.\n&#8211; Data-plane permissions on destination:\n  &#8211; For Azure Storage: typically <strong>Storage Blob Data Contributor<\/strong> on destination container\/account.<\/p>\n\n\n\n<blockquote>\n<p>Always validate the exact role requirements with the official dataset-specific documentation for Azure Data Share because data-plane access patterns can differ by source\/destination type and by service updates.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Billing requirements<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure subscription with billing enabled for:<\/li>\n<li>Azure Data Share usage (per pricing model)<\/li>\n<li>Destination storage\/compute usage<\/li>\n<li>Any data transfer\/egress where applicable<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Tools<\/h3>\n\n\n\n<p>For the hands-on lab:\n&#8211; Azure Portal access\n&#8211; Optional: Azure CLI (<code>az<\/code>) for creating storage and uploading sample files:\n  &#8211; Install: https:\/\/learn.microsoft.com\/cli\/azure\/install-azure-cli<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Region availability<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Data Share is not available in every region.<\/li>\n<li>Confirm availability in your target region before designing production architecture:<\/li>\n<li>Docs: https:\/\/learn.microsoft.com\/azure\/data-share\/<\/li>\n<li>Also check the Azure \u201cProducts available by region\u201d page: 
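https:\/\/azure.microsoft.com\/explore\/global-infrastructure\/products-by-region\/<\/li>\n<\/ul>\n\n\n\n<p>You can also query the regions the <code>Microsoft.DataShare<\/code> resource provider registers for its <code>accounts<\/code> resource type (a quick CLI sketch; requires a signed-in Azure CLI session, and the authoritative list remains the pages above):<\/p>\n\n\n\n<pre><code class=\"language-bash\"># List regions where Data Share accounts can be created\naz provider show --namespace Microsoft.DataShare \\\n  --query \"resourceTypes[?resourceType=='accounts'].locations | [0]\" -o tsv\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Full region list: 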
https:\/\/azure.microsoft.com\/explore\/global-infrastructure\/products-by-region\/<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Quotas \/ limits<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Limits exist for shares, invitations, snapshots, dataset sizes\/types, and schedules.<\/li>\n<li>Review the official limits documentation before production rollout. If limits are not clearly listed for your scenario, <strong>verify in official docs<\/strong> and test with representative data volume.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Prerequisite services for the lab<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Two Azure Storage accounts (provider source + consumer destination)<\/li>\n<li>Two Data Share accounts (provider + consumer)<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">9. Pricing \/ Cost<\/h2>\n\n\n\n<p>Azure Data Share is a managed service with usage-based billing. The exact meters and rates can change and can be region-dependent.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Official pricing resources<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Data Share pricing page: https:\/\/azure.microsoft.com\/pricing\/details\/data-share\/<\/li>\n<li>Azure Pricing Calculator: https:\/\/azure.microsoft.com\/pricing\/calculator\/<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Pricing dimensions (how you are billed)<\/h3>\n\n\n\n<p>From a cost-design perspective, expect charges in these buckets:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Azure Data Share service meters<\/strong>\n   &#8211; Charges related to shares\/snapshots\/transactions (exact meters and names vary\u2014confirm on the pricing page).<\/li>\n<li><strong>Underlying data platform costs<\/strong>\n   &#8211; Source and destination <strong>Azure Storage<\/strong> capacity and transactions.\n   &#8211; SQL\/Synapse compute\/storage if those are used as sources\/targets.<\/li>\n<li><strong>Network and data 
transfer<\/strong>\n   &#8211; Data movement can incur bandwidth charges depending on region boundaries and whether data crosses certain network billing boundaries.\n   &#8211; <strong>Egress to the public internet<\/strong> is usually a major cost driver if data leaves Azure; many Azure-to-Azure flows remain on Azure backbone, but billing depends on endpoints and regions\u2014verify with Azure bandwidth pricing.<\/li>\n<\/ol>\n\n\n\n<p>Bandwidth pricing reference: https:\/\/azure.microsoft.com\/pricing\/details\/bandwidth\/<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Free tier<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If a free tier exists or changes over time, it will be listed on the official pricing page. If not listed, assume <strong>no free tier<\/strong> and design a pilot with small datasets.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Primary cost drivers<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Snapshot frequency<\/strong> (hourly vs daily vs weekly).<\/li>\n<li><strong>Data size per snapshot<\/strong> (GB\/TB transferred).<\/li>\n<li><strong>Number of consumers<\/strong> (each consumer subscription may cause separate deliveries and storage growth).<\/li>\n<li><strong>Destination retention<\/strong> (how long consumers keep delivered snapshots).<\/li>\n<li><strong>Cross-region\/cross-tenant<\/strong> distribution patterns and any associated data transfer charges.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Hidden or indirect costs to plan for<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Duplicate storage<\/strong>: snapshot-based sharing often means data exists in multiple places (provider + each consumer).<\/li>\n<li><strong>Downstream compute<\/strong>: consumers may run jobs after every snapshot (Databricks\/Synapse\/Fabric), increasing compute spend.<\/li>\n<li><strong>Operations<\/strong>: monitoring, alerting, and incident response time if sharing becomes business-critical.<\/li>\n<\/ul>\n\n\n\n<h3 
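class=\"wp-block-heading\">Example: lifecycle rule for snapshot retention<\/h3>\n\n\n\n<p>One practical lever for the retention-related storage cost above is an Azure Storage lifecycle rule on the consumer\u2019s landing container. A minimal sketch (the account, resource group, 30-day window, and <code>received-data\/incoming\/<\/code> prefix are placeholder assumptions; validate the policy schema against the Azure Storage lifecycle management docs):<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Policy: delete blobs under the landing prefix 30 days after last modification\ncat &gt; lifecycle-policy.json &lt;&lt; 'EOF'\n{\n  \"rules\": [\n    {\n      \"enabled\": true,\n      \"name\": \"expire-old-snapshots\",\n      \"type\": \"Lifecycle\",\n      \"definition\": {\n        \"actions\": {\n          \"baseBlob\": { \"delete\": { \"daysAfterModificationGreaterThan\": 30 } }\n        },\n        \"filters\": {\n          \"blobTypes\": [ \"blockBlob\" ],\n          \"prefixMatch\": [ \"received-data\/incoming\/\" ]\n        }\n      }\n    }\n  ]\n}\nEOF\n\naz storage account management-policy create \\\n  --account-name \"&lt;consumer-storage-account&gt;\" \\\n  --resource-group \"&lt;consumer-resource-group&gt;\" \\\n  --policy @lifecycle-policy.json\n<\/code><\/pre>\n\n\n\n<h3 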
class=\"wp-block-heading\">How to optimize cost<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prefer <strong>lower snapshot frequency<\/strong> for large datasets unless business requires more.<\/li>\n<li>Share <strong>incrementally<\/strong> where supported instead of full snapshots.<\/li>\n<li>Partition datasets (for example, by date) so consumers can process only what they need.<\/li>\n<li>Avoid unnecessary cross-region distribution.<\/li>\n<li>Apply lifecycle management policies on consumer destinations (for example, Storage lifecycle rules) to delete or archive old snapshots.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Example low-cost starter estimate (illustrative, no fabricated prices)<\/h3>\n\n\n\n<p>A small pilot might look like:\n&#8211; 1 provider share\n&#8211; 1 consumer subscription\n&#8211; 1 dataset in Azure Storage\n&#8211; 1 snapshot\/day\n&#8211; 1\u20135 GB per snapshot<\/p>\n\n\n\n<p>Cost components to estimate using the calculator:\n&#8211; Azure Data Share snapshot\/service charges (per pricing page)\n&#8211; Destination Storage:\n  &#8211; capacity growth (GB stored)\n  &#8211; write transactions\n&#8211; Network charges (if cross-region)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example production cost considerations<\/h3>\n\n\n\n<p>A production roll-out might include:\n&#8211; 10\u201350 datasets, 5\u201320 consumers\n&#8211; 50\u2013500 GB\/day snapshots per consumer (or larger)\n&#8211; Multiple regions<\/p>\n\n\n\n<p>Key planning points:\n&#8211; Multiply <strong>data volume by number of consumers<\/strong> (each consumer may receive its own copy).\n&#8211; Model retention (30\/90\/365 days) for storage cost.\n&#8211; Budget for operational overhead and alerting.\n&#8211; Validate whether your network architecture triggers bandwidth charges.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">10. 
Step-by-Step Hands-On Tutorial<\/h2>\n\n\n\n<p>This lab uses <strong>Azure Storage<\/strong> as both the source and destination because it\u2019s a common and cost-effective way to learn Azure Data Share safely.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Objective<\/h3>\n\n\n\n<p>Create a <strong>provider<\/strong> Azure Data Share that publishes files from a source Storage container, invite a <strong>consumer<\/strong>, accept the invitation, and deliver a <strong>snapshot<\/strong> to a consumer destination container.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lab Overview<\/h3>\n\n\n\n<p>You will:\n1. Create two resource groups (provider + consumer).\n2. Create two Storage accounts (source + destination) and upload a sample file.\n3. Create two Azure Data Share accounts (provider + consumer).\n4. Create a share and dataset (provider).\n5. Send and accept an invitation (consumer).\n6. Configure destination mapping and trigger a snapshot.\n7. Validate that the file is delivered to the consumer container.\n8. Clean up all resources.<\/p>\n\n\n\n<p><strong>Expected end state<\/strong>: A file uploaded to the provider container is copied to the consumer container via Azure Data Share snapshot delivery.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Create resource groups<\/h3>\n\n\n\n<p>Use Azure CLI (optional). 
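If you have not signed in yet, do so first (a standard CLI prerequisite, not specific to Data Share):<\/p>\n\n\n\n<pre><code class=\"language-bash\">az login\naz account show -o table   # confirm the active subscription\n<\/code><\/pre>\n\n\n\n<p>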
You can also do this in the Azure Portal.<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Set your subscription (optional)\naz account show\n# az account set --subscription \"&lt;SUBSCRIPTION_ID&gt;\"\n\n# Variables\nLOCATION=\"eastus\"   # choose a region where Azure Data Share is available\nRG_PROVIDER=\"rg-datashare-provider\"\nRG_CONSUMER=\"rg-datashare-consumer\"\n\naz group create -n \"$RG_PROVIDER\" -l \"$LOCATION\"\naz group create -n \"$RG_CONSUMER\" -l \"$LOCATION\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; Two resource groups exist in the chosen region.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Azure Portal \u2192 Resource groups \u2192 confirm both groups exist.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Create provider (source) Storage account and upload a sample file<\/h3>\n\n\n\n<p>Create a Storage account and a container, then upload a small CSV.<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Storage account names must be globally unique and 3-24 lowercase letters\/numbers.\nSA_PROVIDER=\"sadsshareprov$RANDOM\"\nCONTAINER_PROVIDER=\"source-data\"\n\naz storage account create \\\n  -n \"$SA_PROVIDER\" \\\n  -g \"$RG_PROVIDER\" \\\n  -l \"$LOCATION\" \\\n  --sku Standard_LRS \\\n  --kind StorageV2\n\naz storage container create \\\n  --name \"$CONTAINER_PROVIDER\" \\\n  --account-name \"$SA_PROVIDER\" \\\n  --auth-mode login\n<\/code><\/pre>\n\n\n\n<p>Create a small sample CSV locally:<\/p>\n\n\n\n<pre><code class=\"language-bash\">cat &gt; sales_sample.csv &lt;&lt; 'EOF'\ndate,region,product,units,amount\n2026-01-01,NA,WidgetA,10,250.00\n2026-01-02,EU,WidgetB,5,140.00\nEOF\n<\/code><\/pre>\n\n\n\n<p>Upload it:<\/p>\n\n\n\n<pre><code class=\"language-bash\">az storage blob upload \\\n  --account-name \"$SA_PROVIDER\" \\\n  --container-name \"$CONTAINER_PROVIDER\" \\\n  --name \"sales\/sales_sample.csv\" \\\n  --file \"sales_sample.csv\" \\\n  --auth-mode 
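login\n\n# Optional sanity check: confirm the blob landed (assumes your signed-in\n# identity holds a data-plane read role, e.g. Storage Blob Data Reader)\naz storage blob list --account-name \"$SA_PROVIDER\" \\\n  --container-name \"$CONTAINER_PROVIDER\" --prefix \"sales\/\" \\\n  --query \"[].name\" -o tsv --auth-mode 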
login\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; A blob <code>sales\/sales_sample.csv<\/code> exists in the provider container.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Portal \u2192 Storage account \u2192 Containers \u2192 <code>source-data<\/code> \u2192 confirm file exists.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 3: Create consumer (destination) Storage account and container<\/h3>\n\n\n\n<pre><code class=\"language-bash\">SA_CONSUMER=\"sadssharecons$RANDOM\"\nCONTAINER_CONSUMER=\"received-data\"\n\naz storage account create \\\n  -n \"$SA_CONSUMER\" \\\n  -g \"$RG_CONSUMER\" \\\n  -l \"$LOCATION\" \\\n  --sku Standard_LRS \\\n  --kind StorageV2\n\naz storage container create \\\n  --name \"$CONTAINER_CONSUMER\" \\\n  --account-name \"$SA_CONSUMER\" \\\n  --auth-mode login\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; Consumer destination container is ready (empty).<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Portal \u2192 consumer storage account \u2192 container exists.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Create Azure Data Share accounts (provider and consumer)<\/h3>\n\n\n\n<p>Azure Data Share account creation is simplest via the Azure Portal.<\/p>\n\n\n\n<p><strong>Provider Data Share account<\/strong>\n1. Portal \u2192 <strong>Create a resource<\/strong>\n2. Search for <strong>Azure Data Share<\/strong>\n3. Create <strong>Data Share account<\/strong>\n4. 
Choose:\n   &#8211; Subscription: your subscription\n   &#8211; Resource group: <code>rg-datashare-provider<\/code>\n   &#8211; Name: <code>dsa-provider-&lt;unique&gt;<\/code>\n   &#8211; Region: same region as your lab (recommended)<\/p>\n\n\n\n<p><strong>Consumer Data Share account<\/strong>\nRepeat creation:\n&#8211; Resource group: <code>rg-datashare-consumer<\/code>\n&#8211; Name: <code>dsa-consumer-&lt;unique&gt;<\/code><\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; Two Data Share accounts exist: one for provider, one for consumer.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Portal \u2192 search \u201cData Share accounts\u201d and confirm both show up.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: Provider creates a share and adds a dataset<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Portal \u2192 Open <strong>provider Data Share account<\/strong><\/li>\n<li>Go to <strong>Shares<\/strong> (or <strong>Sent shares<\/strong>, depending on portal wording)<\/li>\n<li>Select <strong>+ Create<\/strong><\/li>\n<li>Provide:\n   &#8211; <strong>Share name<\/strong>: <code>share-sales-files<\/code>\n   &#8211; (Optional) Description: \u201cSample sales dataset via Storage snapshot\u201d<\/li>\n<\/ol>\n\n\n\n<p>Now add a dataset:\n1. Open the created share\n2. Select <strong>Datasets<\/strong> \u2192 <strong>+ Add dataset<\/strong>\n3. Choose dataset type for <strong>Azure Storage<\/strong> (wording may vary such as \u201cAzure Blob Storage\u201d \/ \u201cAzure Data Lake Storage Gen2\u201d depending on account type)\n4. 
Select the provider Storage account and container:\n   &#8211; Storage account: your provider account\n   &#8211; Container: <code>source-data<\/code>\n   &#8211; Path or folder: select <code>sales\/<\/code> if supported, otherwise share at container level (follow portal options)<\/p>\n\n\n\n<blockquote>\n<p>If the portal offers multiple dataset granularities (container vs folder vs file), choose the smallest that matches the lab goal.<\/p>\n<\/blockquote>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; The share contains a dataset referencing the provider storage location.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; The dataset appears under the share\u2019s dataset list.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: Provider creates an invitation to the consumer<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li>In the provider share, select <strong>Invitations<\/strong> \u2192 <strong>+ Add<\/strong><\/li>\n<li>\n<p>Enter the <strong>recipient email<\/strong> address (the identity that will accept the invitation)\n   &#8211; For a lab in a single tenant, use your own user email (or another user in the directory).\n   &#8211; For a true cross-organization test, use an external user configured via B2B (requires tenant settings and governance).<\/p>\n<\/li>\n<li>\n<p>Send the invitation.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; Invitation is created and sent.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Portal \u2192 provider share \u2192 Invitations shows status (for example \u201cSent\u201d).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 7: Consumer accepts the invitation and creates a subscription<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Portal \u2192 Open <strong>consumer Data Share account<\/strong><\/li>\n<li>Navigate to <strong>Received invitations<\/strong><\/li>\n<li>Select the invitation and click 
<strong>Accept<\/strong><\/li>\n<li>When prompted, create a <strong>share subscription<\/strong>:\n   &#8211; Subscription name: <code>sub-sales-files<\/code>\n   &#8211; Then continue to the destination-mapping configuration described in Step 8 below.<\/li>\n<\/ol>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; A share subscription exists on the consumer side.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Consumer Data Share account \u2192 <strong>Share subscriptions<\/strong> shows <code>sub-sales-files<\/code>.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 8: Consumer maps dataset to destination container<\/h3>\n\n\n\n<p>Within the consumer subscription:\n1. Open the subscription <code>sub-sales-files<\/code>\n2. Go to <strong>Dataset mappings<\/strong> (or similar section)\n3. For the Storage dataset, configure destination:\n   &#8211; Destination Storage account: your consumer storage account\n   &#8211; Destination container: <code>received-data<\/code>\n   &#8211; Destination path: choose a folder like <code>incoming\/<\/code> if available<\/p>\n\n\n\n<p>Ensure the consumer identity has permission to write to the destination container (data-plane):\n&#8211; If using your user identity: assign <strong>Storage Blob Data Contributor<\/strong> on the destination storage account or container.<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; Dataset mapping is configured successfully.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Mapping status indicates configured\/ready.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 9: Trigger a snapshot (provider or consumer, depending on workflow)<\/h3>\n\n\n\n<p>Depending on portal workflow and dataset type:\n&#8211; Provider may <strong>trigger a snapshot<\/strong> from the share.\n&#8211; Consumer may <strong>initiate synchronization<\/strong> from the subscription.<\/p>\n\n\n\n<p>Look for actions such as:\n&#8211; <strong>Trigger 
snapshot<\/strong>\n&#8211; <strong>Start snapshot<\/strong>\n&#8211; <strong>Synchronize<\/strong>\n&#8211; <strong>Run now<\/strong><\/p>\n\n\n\n<p>Trigger it once.<\/p>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; Snapshot run starts and completes successfully.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; In the Data Share portal pages, find <strong>Snapshots<\/strong> \/ <strong>History<\/strong> and confirm a successful run.\n&#8211; Check consumer destination container for the delivered file(s).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Validation<\/h3>\n\n\n\n<p>Confirm the file arrived in consumer storage:<\/p>\n\n\n\n<pre><code class=\"language-bash\">az storage blob list \\\n  --account-name \"$SA_CONSUMER\" \\\n  --container-name \"$CONTAINER_CONSUMER\" \\\n  --auth-mode login \\\n  --query \"[].name\" -o tsv\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; You should see a path that includes <code>sales_sample.csv<\/code> (the exact folder structure may differ based on mapping settings).<\/p>\n\n\n\n<p>Download it to validate content:<\/p>\n\n\n\n<pre><code class=\"language-bash\">az storage blob download \\\n  --account-name \"$SA_CONSUMER\" \\\n  --container-name \"$CONTAINER_CONSUMER\" \\\n  --name \"incoming\/sales\/sales_sample.csv\" \\\n  --file \"downloaded_sales_sample.csv\" \\\n  --auth-mode login\n<\/code><\/pre>\n\n\n\n<p>If your mapping used a different path, adjust the blob name accordingly.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Troubleshooting<\/h3>\n\n\n\n<p>Common issues and fixes:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>Invitation not visible on consumer side<\/strong>\n   &#8211; Ensure the consumer is signed into the correct tenant\/account.\n   &#8211; Confirm the invitation was sent to the exact email that matches the consumer\u2019s Entra identity.\n   &#8211; In cross-tenant cases, 
confirm B2B guest access settings and that the recipient accepted tenant invitation (if required).<\/p>\n<\/li>\n<li>\n<p><strong>Snapshot fails due to permissions<\/strong>\n   &#8211; Provider side: verify read access to the source dataset location.\n   &#8211; Consumer side: verify write access to the destination container.\n   &#8211; For Storage, ensure roles like <strong>Storage Blob Data Contributor<\/strong> are assigned at the right scope and that role assignment propagation has completed.<\/p>\n<\/li>\n<li>\n<p><strong>Storage firewall \/ networking restrictions<\/strong>\n   &#8211; If source\/destination storage accounts restrict networks, the managed service may be unable to access them.\n   &#8211; Review official guidance for Azure Data Share + Storage firewall compatibility and required configuration. If needed for lab simplicity, temporarily allow access from trusted services or open to selected networks (prefer least privilege and revert after testing).<\/p>\n<\/li>\n<li>\n<p><strong>Snapshot completes but files not where expected<\/strong>\n   &#8211; Review dataset mapping destination path.\n   &#8211; Confirm whether the dataset was shared at container level or folder level; the landing folder structure may differ.<\/p>\n<\/li>\n<li>\n<p><strong>Region availability errors<\/strong>\n   &#8211; Choose a region where Azure Data Share is available and supported for your dataset type.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Cleanup<\/h3>\n\n\n\n<p>To avoid ongoing costs, delete both resource groups:<\/p>\n\n\n\n<pre><code class=\"language-bash\">az group delete -n \"$RG_PROVIDER\" --yes --no-wait\naz group delete -n \"$RG_CONSUMER\" --yes --no-wait\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome<\/strong>\n&#8211; All created resources (Storage accounts, Data Share accounts, and related objects) are removed.<\/p>\n\n\n\n<p><strong>Verify<\/strong>\n&#8211; Portal \u2192 Resource 
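groups \u2192 confirm deletion has started.<\/p>\n\n\n\n<p>You can also poll from the CLI; <code>az group exists<\/code> prints <code>false<\/code> once a group is fully removed (deletion runs asynchronously because of <code>--no-wait<\/code>):<\/p>\n\n\n\n<pre><code class=\"language-bash\">az group exists -n \"$RG_PROVIDER\"\naz group exists -n \"$RG_CONSUMER\"\n<\/code><\/pre>\n\n\n\n<p>Re-check later in Portal \u2192 Resource 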
groups \u2192 confirm both are deleted.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">11. Best Practices<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Architecture best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Treat shares as data products<\/strong>: define owners, SLAs, and refresh cadence.<\/li>\n<li>Prefer <strong>curated zones<\/strong> as sources (gold\/silver datasets) rather than raw ingestion zones.<\/li>\n<li>Use <strong>stable schemas and folder conventions<\/strong> so consumers are not broken by provider reorganizations.<\/li>\n<li>Design for <strong>consumer independence<\/strong>: consumers should be able to process snapshots without provider-side coordination.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">IAM \/ security best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Apply <strong>least privilege<\/strong>:<\/li>\n<li>Separate roles: share publishers vs share administrators.<\/li>\n<li>Grant only required data-plane permissions to source\/destination.<\/li>\n<li>For external sharing:<\/li>\n<li>Use governed B2B processes (access reviews, sponsor\/owner, expiration where applicable).<\/li>\n<li>Prefer dedicated \u201cpartner sharing\u201d subscriptions with strict policies.<\/li>\n<li>Use <strong>resource locks<\/strong> carefully to prevent accidental deletion of critical Data Share accounts (balance with operational needs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Minimize snapshot frequency for large data unless required.<\/li>\n<li>Share <strong>incremental changes<\/strong> where supported.<\/li>\n<li>Apply lifecycle policies on destination to manage retention cost.<\/li>\n<li>Avoid distributing full datasets to many consumers when a <strong>centralized analytics workspace<\/strong> could serve multiple internal teams (trade-off: consumer autonomy vs centralized 
cost).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Partition large file-based datasets (date-based folders) so snapshots and downstream jobs are manageable.<\/li>\n<li>Keep shared datasets consistent and avoid frequent rename\/move operations that can cause large re-copies.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Reliability best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Define operational ownership: who investigates snapshot failures?<\/li>\n<li>Use alerting based on:<\/li>\n<li>Snapshot failure events (from portal history and logs)<\/li>\n<li>Missing expected data in destination (consumer-side validation jobs)<\/li>\n<li>Create runbooks for common failures (permissions, network restrictions).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operations best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Standardize naming:<\/li>\n<li>Data Share account: <code>dsa-&lt;env&gt;-&lt;team&gt;<\/code><\/li>\n<li>Share: <code>share-&lt;domain&gt;-&lt;dataset&gt;<\/code><\/li>\n<li>Subscription: <code>sub-&lt;consumer&gt;-&lt;dataset&gt;<\/code><\/li>\n<li>Tag resources with: owner, cost center, data classification, environment, business domain.<\/li>\n<li>Capture metadata outside the service if needed: data dictionary, schema version, PII classification, and intended use.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Governance best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use Azure Policy where possible to enforce:<\/li>\n<li>Approved regions<\/li>\n<li>Mandatory tags<\/li>\n<li>Restrictions on public storage access for destinations<\/li>\n<li>Establish an approval workflow for creating external invitations.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">12. 
Security Considerations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Identity and access model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure Data Share is managed through <strong>Azure RBAC<\/strong>.<\/li>\n<li>Sharing involves two security planes:\n  1. <strong>Management plane<\/strong> permissions to create\/manage shares and subscriptions.\n  2. <strong>Data plane<\/strong> permissions to read source data and write destination data (e.g., Storage RBAC\/ACL, SQL permissions).<\/li>\n<\/ul>\n\n\n\n<p>Recommendations:\n&#8211; Separate provider roles:\n  &#8211; Publishers can manage shares but not change core infrastructure.\n  &#8211; Storage admins manage source container access.\n&#8211; Ensure consumers only have access to their own destination landing zones.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Encryption<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure services like Storage and SQL typically encrypt data at rest by default (service-managed keys), with options for customer-managed keys in many cases.<\/li>\n<li>In transit encryption is generally supported via TLS for service endpoints.<\/li>\n<li>Confirm encryption posture for your chosen data stores and compliance requirements.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network exposure<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Storage accounts can be configured with public endpoints, firewall rules, and private endpoints.<\/li>\n<li>Data Share delivery may require network accessibility between the managed service and storage endpoints; if you enforce strict private connectivity, validate compatibility in official docs.<\/li>\n<li>Avoid \u201copen to all networks\u201d in production; prefer least exposure and documented patterns.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secrets handling<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Avoid embedding storage keys in scripts or distributing them to partners.<\/li>\n<li>Prefer identity-based access (Entra ID) and 
managed governance patterns.<\/li>\n<li>If automation is required, use managed identities\/service principals with least privilege and store secrets in <strong>Azure Key Vault<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Audit\/logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enable and retain:<\/li>\n<li>Azure Activity Logs for Data Share accounts and relevant storage accounts<\/li>\n<li>Diagnostic settings where supported (to Log Analytics\/Event Hub\/Storage)<\/li>\n<li>For compliance, maintain records of:<\/li>\n<li>Who approved invitations<\/li>\n<li>Who owns each share\/subscription<\/li>\n<li>Data classification and lawful basis for sharing<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>For regulated data (PII\/PHI\/PCI):<\/li>\n<li>Share only de-identified\/minimized datasets where possible<\/li>\n<li>Apply contractual controls and data processing agreements as required<\/li>\n<li>Implement retention limits on consumer side<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Common security mistakes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sharing raw or sensitive datasets without classification and approval.<\/li>\n<li>Over-granting destination write permissions (e.g., contributor at subscription scope).<\/li>\n<li>Leaving partner invitations unmanaged (no access reviews or expiration processes).<\/li>\n<li>Ignoring storage firewall\/private endpoint compatibility until late in deployment.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secure deployment recommendations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use separate subscriptions\/resource groups for:<\/li>\n<li>Provider data share operations<\/li>\n<li>Partner-facing datasets<\/li>\n<li>Enforce required tags and logging via policy.<\/li>\n<li>Implement periodic access reviews for external recipients.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 
class=\"wp-block-heading\">13. Limitations and Gotchas<\/h2>\n\n\n\n<blockquote>\n<p>Limitations can change. Always validate against official docs for Azure Data Share supported data stores and limits: https:\/\/learn.microsoft.com\/azure\/data-share\/<\/p>\n<\/blockquote>\n\n\n\n<p>Common limitations and operational gotchas include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Supported data stores are limited<\/strong>: Azure Data Share works only with specific Azure sources\/targets. Confirm the current list and supported configurations.<\/li>\n<li><strong>Not real-time<\/strong>: Snapshot-based sharing is not streaming; consumers will see data only after snapshots complete.<\/li>\n<li><strong>Data duplication<\/strong>: Each consumer typically receives its own copy in their destination, increasing storage costs.<\/li>\n<li><strong>Schema and breaking changes<\/strong>: If provider changes folder structures or table schemas, consumers can break. Version your datasets.<\/li>\n<li><strong>Cross-tenant complexities<\/strong>: External recipients may require B2B onboarding and tenant governance alignment.<\/li>\n<li><strong>Permissions propagation delays<\/strong>: RBAC role assignments can take time to propagate; snapshot runs may fail immediately after changes.<\/li>\n<li><strong>Networking constraints<\/strong>: Storage firewalls\/private endpoints can prevent delivery unless configured correctly (verify exact requirements).<\/li>\n<li><strong>Operational visibility<\/strong>: Management actions are visible in Azure logs, but detailed data-plane diagnostics may require additional tooling and validation jobs.<\/li>\n<li><strong>Consumer-side responsibilities<\/strong>: Consumers must manage retention, downstream processing, and costs after data lands.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">14. 
Comparison with Alternatives<\/h2>\n\n\n\n<p>Azure Data Share is best understood as a <strong>managed sharing orchestration<\/strong> tool. Alternatives vary depending on whether you need transformation, marketplace distribution, or query-time sharing.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Option<\/th>\n<th>Best For<\/th>\n<th>Strengths<\/th>\n<th>Weaknesses<\/th>\n<th>When to Choose<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Azure Data Share<\/strong><\/td>\n<td>Repeatable dataset sharing to supported Azure destinations<\/td>\n<td>Invitation\/subscription model; scheduled snapshots; Azure governance alignment<\/td>\n<td>Limited sources\/targets; snapshot-based not real-time; duplicates data<\/td>\n<td>When you need governed B2B\/B2B2B distribution with repeatable refresh<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure Data Factory \/ Synapse Pipelines<\/strong><\/td>\n<td>ETL\/ELT, orchestration, complex workflows<\/td>\n<td>Powerful transformations, connectors, monitoring, CI\/CD<\/td>\n<td>You must build\/operate pipelines per partner; governance and invitations are DIY<\/td>\n<td>When you need transformation + custom delivery logic<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure Storage SAS \/ signed URLs<\/strong><\/td>\n<td>Quick ad-hoc file sharing<\/td>\n<td>Simple, fast to start<\/td>\n<td>Token sprawl; difficult auditing; rotation risk<\/td>\n<td>Short-lived, controlled ad-hoc access (not recommended for scalable partner sharing)<\/td>\n<\/tr>\n<tr>\n<td><strong>Database-native sharing (platform-specific)<\/strong><\/td>\n<td>Query-time sharing within a platform<\/td>\n<td>Can avoid duplication; fine-grained access (platform-dependent)<\/td>\n<td>Locks you into a platform; may not fit all consumers<\/td>\n<td>When consumers are on the same data platform and need query-time access<\/td>\n<\/tr>\n<tr>\n<td><strong>AWS Data Exchange<\/strong><\/td>\n<td>Data product distribution in AWS ecosystems<\/td>\n<td>Marketplace-style 
distribution<\/td>\n<td>AWS-centric; different governance model<\/td>\n<td>If your consumers are primarily on AWS and need marketplace workflows<\/td>\n<\/tr>\n<tr>\n<td><strong>Google Cloud Analytics Hub \/ BigQuery sharing<\/strong><\/td>\n<td>Sharing datasets in BigQuery ecosystems<\/td>\n<td>Strong BigQuery-native sharing<\/td>\n<td>Google Cloud-centric<\/td>\n<td>If your data and consumers are primarily on GCP\/BigQuery<\/td>\n<\/tr>\n<tr>\n<td><strong>Snowflake Secure Data Sharing<\/strong><\/td>\n<td>Cross-account sharing within Snowflake<\/td>\n<td>Near-zero copy sharing; governance within Snowflake<\/td>\n<td>Requires Snowflake for both parties<\/td>\n<td>When both producer and consumer are Snowflake users<\/td>\n<\/tr>\n<tr>\n<td><strong>Delta Sharing (open protocol)<\/strong><\/td>\n<td>Cross-platform sharing for lakehouse data<\/td>\n<td>Open protocol; supports various platforms<\/td>\n<td>Requires setup and operational ownership; feature parity varies<\/td>\n<td>When you need open, cross-platform sharing and can operate it<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">15. Real-World Example<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Enterprise example: Manufacturing partner data exchange<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: A manufacturer must deliver weekly demand forecasts to 30 suppliers and receive weekly capacity confirmations. 
Each supplier wants data delivered into their own Azure environment.<\/li>\n<li><strong>Proposed architecture<\/strong>\n<ul>\n<li>Provider hosts curated forecast datasets in ADLS Gen2.<\/li>\n<li>Provider publishes one share per data product (forecast, inventory, shipment performance).<\/li>\n<li>Suppliers accept invitations and map datasets to their own landing containers.<\/li>\n<li>Suppliers run validation + ingestion pipelines after each snapshot.<\/li>\n<li>Provider monitors snapshot success rates and maintains a partner onboarding runbook.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Why Azure Data Share was chosen<\/strong>\n<ul>\n<li>Repeatable snapshot delivery to many partners.<\/li>\n<li>Centralized management of invitations and subscriptions.<\/li>\n<li>Reduces bespoke pipeline work per supplier.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Expected outcomes<\/strong>\n<ul>\n<li>Faster partner onboarding (days instead of weeks).<\/li>\n<li>Reduced operational incidents from custom copy scripts.<\/li>\n<li>Improved auditability of who receives what data.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Startup\/small-team example: SaaS customer data export<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem<\/strong>: A SaaS startup needs to deliver weekly usage exports to enterprise customers who require data to land in their Azure Storage for their analytics.<\/li>\n<li><strong>Proposed architecture<\/strong>\n<ul>\n<li>Startup exports aggregated usage files (CSV\/Parquet) to a \u201ccustomer exports\u201d storage container.<\/li>\n<li>Create one share per customer (or per plan tier), inviting the customer\u2019s Entra identity.<\/li>\n<li>Customer maps to their own destination container and runs their BI pipeline.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Why Azure Data Share was chosen<\/strong>\n<ul>\n<li>Simple managed sharing model without building custom per-customer ADF pipelines.<\/li>\n<li>Customers control their destination and retention.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Expected outcomes<\/strong>\n<ul>\n<li>Lower engineering overhead for exports.<\/li>\n<li>More consistent customer delivery and fewer manual support tickets.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">16. FAQ<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1) Is Azure Data Share an ETL service?<\/h3>\n\n\n\n<p>No. Azure Data Share is primarily for <strong>sharing\/delivering datasets<\/strong>. For transformation and complex pipelines, use Azure Data Factory, Synapse Pipelines, or Databricks.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2) Does Azure Data Share support real-time streaming?<\/h3>\n\n\n\n<p>Generally, it is <strong>snapshot-based<\/strong>. If you need real-time streaming, consider Event Hubs, Kafka, or a streaming analytics platform.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3) Can I share data with external organizations?<\/h3>\n\n\n\n<p>Yes, using invitations to recipient identities. Cross-tenant sharing often requires Microsoft Entra B2B and governance alignment. Verify your tenant settings and the service\u2019s cross-tenant requirements.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4) Do consumers need an Azure subscription?<\/h3>\n\n\n\n<p>Typically yes, because consumers usually create a Data Share account and configure Azure destinations. For non-Azure consumers, you\u2019ll likely need other delivery mechanisms.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">5) Who pays for storage of received data?<\/h3>\n\n\n\n<p>Consumers generally pay for their <strong>destination storage<\/strong> and downstream processing. Providers pay for their source storage and any Data Share service charges on their side (billing specifics should be verified on the pricing page).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">6) Can I share only a folder inside a container?<\/h3>\n\n\n\n<p>Depending on dataset type and portal options, you may be able to share at different granularities (container\/folder\/file). 
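<\/p>\n\n\n\n<p>One way to reason about folder-level granularity is as a prefix match over object paths: a folder-scoped dataset covers everything under that folder. The sketch below is plain Python with hypothetical paths, not an Azure Data Share API; confirm the actual granularity options in the official docs:<\/p>\n\n\n\n
```python
# Hypothetical illustration only: folder-level dataset granularity modeled
# as a prefix match over object paths. Plain Python, not an Azure API;
# the paths below are made up for the example.

def objects_in_shared_folder(paths, folder):
    """Return the object paths a folder-scoped dataset would cover."""
    prefix = folder.rstrip("/") + "/"
    return [p for p in paths if p.startswith(prefix)]

# Example container listing (hypothetical paths).
container_paths = [
    "forecasts/2024/week01.parquet",
    "forecasts/2024/week02.parquet",
    "inventory/2024/week01.parquet",
]

# Sharing the "forecasts" folder covers only the forecast files.
print(objects_in_shared_folder(container_paths, "forecasts"))
```
\n\n\n\n<p>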
Verify current dataset options in the portal and docs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">7) Can I revoke access after sharing?<\/h3>\n\n\n\n<p>You can stop sharing by removing subscriptions\/invitations or disabling future snapshots. However, <strong>data already delivered<\/strong> to consumer destinations remains under the consumer\u2019s control. Design contracts and governance accordingly.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">8) Does Azure Data Share copy data or provide query access?<\/h3>\n\n\n\n<p>Commonly it <strong>delivers snapshots<\/strong> (copies) into consumer destinations. Some ecosystems support \u201cin-place\u201d or query-time sharing, but availability depends on dataset type\u2014verify official docs for current capabilities.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">9) Can multiple consumers subscribe to the same share?<\/h3>\n\n\n\n<p>Yes. A single provider share can typically be consumed by multiple recipients.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">10) How do I monitor snapshot failures?<\/h3>\n\n\n\n<p>Use the Azure Portal\u2019s Data Share history views plus Azure monitoring\/logging (Activity Log, diagnostic settings where available). For production, add consumer-side checks to confirm expected files\/tables arrived.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">11) Does Azure Data Share work across regions?<\/h3>\n\n\n\n<p>Often yes, but cross-region delivery may have constraints and cost implications. Verify supported cross-region configurations and bandwidth charges.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">12) Is Microsoft Purview required?<\/h3>\n\n\n\n<p>No. Purview is a separate governance service. 
It can complement sharing, but Azure Data Share can be used without it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">13) What\u2019s the difference between a share and a subscription?<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Share<\/strong>: created by the provider; contains datasets and invitations.<\/li>\n<li><strong>Subscription<\/strong>: created by the consumer when accepting; maps datasets to destinations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">14) Can I automate Azure Data Share creation?<\/h3>\n\n\n\n<p>Yes, via Azure resource management APIs\/templates where supported. Confirm current ARM\/REST schemas and tooling support in the official docs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">15) What\u2019s the biggest operational risk?<\/h3>\n\n\n\n<p>Misconfigured permissions and network restrictions are common causes of delivery failures. Establish runbooks and pre-flight checks for source\/destination access.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">17. 
Top Online Resources to Learn Azure Data Share<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Resource Type<\/th>\n<th>Name<\/th>\n<th>Why It Is Useful<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Official Documentation<\/td>\n<td>Azure Data Share documentation<\/td>\n<td>Canonical concepts, how-to guides, supported data stores, and limits: https:\/\/learn.microsoft.com\/azure\/data-share\/<\/td>\n<\/tr>\n<tr>\n<td>Official Pricing<\/td>\n<td>Azure Data Share pricing<\/td>\n<td>Current meters and billing details: https:\/\/azure.microsoft.com\/pricing\/details\/data-share\/<\/td>\n<\/tr>\n<tr>\n<td>Pricing Tool<\/td>\n<td>Azure Pricing Calculator<\/td>\n<td>Estimate total solution cost: https:\/\/azure.microsoft.com\/pricing\/calculator\/<\/td>\n<\/tr>\n<tr>\n<td>Product Availability<\/td>\n<td>Products by region<\/td>\n<td>Confirm regional availability: https:\/\/azure.microsoft.com\/explore\/global-infrastructure\/products-by-region\/<\/td>\n<\/tr>\n<tr>\n<td>Official Azure CLI<\/td>\n<td>Azure CLI install<\/td>\n<td>Used to build lab prerequisites: https:\/\/learn.microsoft.com\/cli\/azure\/install-azure-cli<\/td>\n<\/tr>\n<tr>\n<td>Bandwidth Pricing<\/td>\n<td>Azure bandwidth pricing<\/td>\n<td>Understand transfer costs: https:\/\/azure.microsoft.com\/pricing\/details\/bandwidth\/<\/td>\n<\/tr>\n<tr>\n<td>Cloud Governance<\/td>\n<td>Azure Policy documentation<\/td>\n<td>Enforce governance for Data Share resources and storage: https:\/\/learn.microsoft.com\/azure\/governance\/policy\/<\/td>\n<\/tr>\n<tr>\n<td>Identity<\/td>\n<td>Microsoft Entra ID documentation<\/td>\n<td>Understand B2B and access governance: https:\/\/learn.microsoft.com\/entra\/<\/td>\n<\/tr>\n<tr>\n<td>Monitoring<\/td>\n<td>Azure Monitor documentation<\/td>\n<td>Logs\/metrics and diagnostic settings patterns: https:\/\/learn.microsoft.com\/azure\/azure-monitor\/<\/td>\n<\/tr>\n<tr>\n<td>Community (use carefully)<\/td>\n<td>Microsoft Q&amp;A for Azure Data 
Share<\/td>\n<td>Practical troubleshooting patterns (validate against official docs): https:\/\/learn.microsoft.com\/answers\/topics\/azure-data-share.html<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">18. Training and Certification Providers<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Institute<\/th>\n<th>Suitable Audience<\/th>\n<th>Likely Learning Focus<\/th>\n<th>Mode<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>DevOpsSchool.com<\/td>\n<td>Engineers, architects, ops teams<\/td>\n<td>Azure + DevOps + cloud operations fundamentals that support services like Azure Data Share<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.devopsschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>ScmGalaxy.com<\/td>\n<td>DevOps learners, build\/release engineers<\/td>\n<td>DevOps, automation, CI\/CD practices relevant to IaC and cloud operations<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.scmgalaxy.com\/<\/td>\n<\/tr>\n<tr>\n<td>CloudOpsNow.in<\/td>\n<td>Cloud ops teams, SRE\/ops-focused learners<\/td>\n<td>Cloud operations, monitoring, reliability practices<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.cloudopsnow.in\/<\/td>\n<\/tr>\n<tr>\n<td>SreSchool.com<\/td>\n<td>SREs, platform engineers<\/td>\n<td>Reliability engineering, incident response, operational maturity<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.sreschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>AiOpsSchool.com<\/td>\n<td>Ops + analytics practitioners<\/td>\n<td>AIOps concepts, monitoring analytics, operational automation<\/td>\n<td>Check website<\/td>\n<td>https:\/\/www.aiopsschool.com\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">19. 
Top Trainers<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Platform\/Site<\/th>\n<th>Likely Specialization<\/th>\n<th>Suitable Audience<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>RajeshKumar.xyz<\/td>\n<td>Cloud\/DevOps training content (verify specific offerings)<\/td>\n<td>Beginners to intermediate engineers<\/td>\n<td>https:\/\/www.rajeshkumar.xyz\/<\/td>\n<\/tr>\n<tr>\n<td>devopstrainer.in<\/td>\n<td>DevOps-focused training (verify Azure coverage)<\/td>\n<td>DevOps engineers and admins<\/td>\n<td>https:\/\/www.devopstrainer.in\/<\/td>\n<\/tr>\n<tr>\n<td>devopsfreelancer.com<\/td>\n<td>Independent consulting\/training platform (verify services)<\/td>\n<td>Teams seeking practical DevOps enablement<\/td>\n<td>https:\/\/www.devopsfreelancer.com\/<\/td>\n<\/tr>\n<tr>\n<td>devopssupport.in<\/td>\n<td>DevOps support\/training (verify specialization)<\/td>\n<td>Ops\/DevOps teams needing hands-on support<\/td>\n<td>https:\/\/www.devopssupport.in\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">20. 
Top Consulting Companies<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Company Name<\/th>\n<th>Likely Service Area<\/th>\n<th>Where They May Help<\/th>\n<th>Consulting Use Case Examples<\/th>\n<th>Website URL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>cotocus.com<\/td>\n<td>Cloud\/DevOps consulting (verify specific offerings)<\/td>\n<td>Architecture, cloud operations, implementation support<\/td>\n<td>Designing governed partner data sharing workflows; building landing zones; operational runbooks<\/td>\n<td>https:\/\/www.cotocus.com\/<\/td>\n<\/tr>\n<tr>\n<td>DevOpsSchool.com<\/td>\n<td>Training + consulting (verify scope)<\/td>\n<td>Cloud enablement, DevOps processes, implementation acceleration<\/td>\n<td>Setting up IaC pipelines for Data Share resources; governance patterns; operational training<\/td>\n<td>https:\/\/www.devopsschool.com\/<\/td>\n<\/tr>\n<tr>\n<td>DEVOPSCONSULTING.IN<\/td>\n<td>DevOps consulting (verify services)<\/td>\n<td>DevOps\/SRE practices and cloud automation<\/td>\n<td>CI\/CD + IaC for Azure resource provisioning; monitoring and incident readiness<\/td>\n<td>https:\/\/www.devopsconsulting.in\/<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">21. 
Career and Learning Roadmap<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn before Azure Data Share<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure fundamentals:\n<ul>\n<li>Resource groups, subscriptions, regions<\/li>\n<li>Azure RBAC and Microsoft Entra ID basics<\/li>\n<\/ul>\n<\/li>\n<li>Data fundamentals:\n<ul>\n<li>Files vs tables, partitions, formats (CSV\/Parquet)<\/li>\n<li>Data lake concepts (landing\/bronze\/silver\/gold)<\/li>\n<\/ul>\n<\/li>\n<li>Azure Storage essentials:\n<ul>\n<li>Containers, blobs, ADLS Gen2 namespaces<\/li>\n<li>Storage RBAC vs ACLs, lifecycle policies<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn after Azure Data Share<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Data ingestion and transformation:\n<ul>\n<li>Azure Data Factory \/ Synapse Pipelines<\/li>\n<li>Databricks or Synapse Spark<\/li>\n<\/ul>\n<\/li>\n<li>Governance:\n<ul>\n<li>Microsoft Purview (catalog, lineage, classification)<\/li>\n<\/ul>\n<\/li>\n<li>Security hardening:\n<ul>\n<li>Private endpoints and network design for data platforms<\/li>\n<li>Key Vault and managed identities<\/li>\n<\/ul>\n<\/li>\n<li>Operations:\n<ul>\n<li>Azure Monitor, Log Analytics, alerting, dashboards<\/li>\n<li>Cost management and chargeback models<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Job roles that use it<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud solutions architect<\/li>\n<li>Data platform engineer<\/li>\n<li>Analytics engineer<\/li>\n<li>Partner integration engineer<\/li>\n<li>Cloud security engineer (governance of external sharing)<\/li>\n<li>Platform engineer (landing zones and guardrails)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Certification path (general Azure)<\/h3>\n\n\n\n<p>There is no widely recognized standalone certification dedicated only to Azure Data Share. 
Relevant Microsoft certification tracks often include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Azure fundamentals and administrator paths<\/li>\n<li>Azure data engineer \/ analytics-focused paths<\/li>\n<\/ul>\n\n\n\n<p>Verify the latest Microsoft certification offerings here:\nhttps:\/\/learn.microsoft.com\/credentials\/<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Project ideas for practice<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build a \u201cdata product\u201d share catalog:\n<ul>\n<li>Three shares: reference data, daily KPIs, weekly aggregates<\/li>\n<li>Two consumers: BI team and ML team<\/li>\n<\/ul>\n<\/li>\n<li>Implement governance:\n<ul>\n<li>Mandatory tags, naming, and policy rules for storage + data share accounts<\/li>\n<\/ul>\n<\/li>\n<li>Reliability drill:\n<ul>\n<li>Simulate permission failure and build a runbook + alerting workflow<\/li>\n<\/ul>\n<\/li>\n<li>Cost optimization exercise:\n<ul>\n<li>Compare full daily snapshot vs incremental patterns (where supported) and model storage retention impacts<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">22. 
Glossary<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Azure Data Share account<\/strong>: The Azure resource used to manage shares, invitations, and subscriptions.<\/li>\n<li><strong>Provider<\/strong>: The party that publishes data via a share.<\/li>\n<li><strong>Consumer<\/strong>: The party that accepts an invitation and receives data into a destination.<\/li>\n<li><strong>Share<\/strong>: Provider-defined container for datasets and invitations.<\/li>\n<li><strong>Dataset<\/strong>: A configured reference to a supported source dataset (files\/tables depending on type).<\/li>\n<li><strong>Invitation<\/strong>: Provider-initiated request for a recipient to subscribe to a share.<\/li>\n<li><strong>Subscription (share subscription)<\/strong>: Consumer-side acceptance and configuration of how\/where to receive shared datasets.<\/li>\n<li><strong>Snapshot<\/strong>: A point-in-time delivery of shared data to the consumer destination.<\/li>\n<li><strong>RBAC<\/strong>: Role-Based Access Control in Azure for management actions.<\/li>\n<li><strong>Data plane<\/strong>: Permissions and operations on the actual data (for example, Storage blobs, SQL tables).<\/li>\n<li><strong>B2B<\/strong>: Business-to-business identity collaboration, commonly via Microsoft Entra B2B guest users.<\/li>\n<li><strong>ADLS Gen2<\/strong>: Azure Data Lake Storage Gen2, Azure Storage with hierarchical namespace features for analytics.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">23. 
Summary<\/h2>\n\n\n\n<p>Azure Data Share is an Azure Analytics service for <strong>governed, repeatable dataset sharing<\/strong> between a provider and one or more consumers using <strong>invitations, subscriptions, and snapshot-based delivery<\/strong> to supported Azure data stores.<\/p>\n\n\n\n<p>It matters because organizations frequently need to share curated data across teams, subscriptions, and external partners\u2014yet ad-hoc exports and token-based access are difficult to secure, audit, and operate. Azure Data Share provides a structured sharing workflow that fits well into Azure governance and identity patterns.<\/p>\n\n\n\n<p>Cost-wise, plan for <strong>Data Share service charges<\/strong>, plus the bigger indirect drivers: <strong>duplicate storage across consumers<\/strong>, snapshot frequency, and potential <strong>data transfer<\/strong> charges. Security-wise, focus on <strong>least privilege<\/strong>, external identity governance, network restrictions compatibility, and strong auditing.<\/p>\n\n\n\n<p>Use Azure Data Share when you need <strong>managed dataset distribution<\/strong> with repeatable refresh. 
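<\/p>\n\n\n\n<p>The cost drivers discussed above can be sketched as a rough back-of-the-envelope model: stored bytes scale with dataset size, consumer count, and retained snapshots. The sketch below is plain Python; the per-GB price is an illustrative placeholder, not an Azure rate, so use the Azure Pricing Calculator for real estimates:<\/p>\n\n\n\n
```python
# Back-of-the-envelope model for snapshot-based sharing costs: each consumer
# keeps its own copy, so stored bytes scale with consumer count and the
# number of retained snapshots. The price below is an illustrative
# placeholder, NOT an Azure rate.

def monthly_storage_gb(dataset_gb, consumers, snapshots_retained):
    """Total GB held across all consumer destinations."""
    return dataset_gb * consumers * snapshots_retained

def monthly_storage_cost(dataset_gb, consumers, snapshots_retained,
                         price_per_gb=0.02):  # placeholder $/GB-month
    """Rough monthly storage cost across consumers at a placeholder rate."""
    return monthly_storage_gb(dataset_gb, consumers, snapshots_retained) * price_per_gb

# 50 GB dataset, 30 suppliers, last 4 weekly snapshots retained:
print(monthly_storage_gb(50, 30, 4))    # 6000 GB across consumers
print(monthly_storage_cost(50, 30, 4))  # 120.0 at the placeholder rate
```
\n\n\n\n<p>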
Choose ETL or streaming services when you need transformation or real-time delivery.<\/p>\n\n\n\n<p>Next learning step: review the official documentation and supported data stores for Azure Data Share, then expand this lab by adding governance controls (Azure Policy\/tags), monitoring\/alerts, and a consumer-side validation pipeline.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Analytics<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21,40,7],"tags":[],"class_list":["post-380","post","type-post","status-publish","format-standard","hentry","category-analytics","category-azure","category-storage"],"_links":{"self":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/380","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/comments?post=380"}],"version-history":[{"count":0,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/380\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/media?parent=380"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/categories?post=380"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/tags?post=380"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}