Category
Networking
1. Introduction
Azure Virtual Network Manager is an Azure Networking control-plane service that helps you centrally manage connectivity and security rules across many Azure virtual networks (VNets) at scale.
In simple terms: instead of manually creating and maintaining VNet peerings and duplicating security rules in dozens (or hundreds) of places, you define your intent once (for example, “all production VNets can talk to each other” or “all spokes connect to this hub”) and Azure Virtual Network Manager applies it consistently.
Technically, Azure Virtual Network Manager lets you:
- Group VNets (for example, by environment, region, business unit, or application)
- Define connectivity configurations (mesh or hub-and-spoke patterns)
- Define security admin rules (central allow/deny rules enforced across many VNets)
- Deploy these configurations to selected Azure regions and keep them consistent as your network grows
The main problem it solves is operational sprawl: when your Azure estate grows, hand-managing peering, routing intent, and “baseline” security rules becomes error-prone, inconsistent, and difficult to audit. Azure Virtual Network Manager provides a scalable, policy-like way to implement and govern networking intent.
Service name note: As of the latest Microsoft documentation, the service name is Azure Virtual Network Manager. Verify the latest naming and feature scope in the official documentation: https://learn.microsoft.com/azure/virtual-network-manager/
2. What is Azure Virtual Network Manager?
Official purpose (conceptually): Azure Virtual Network Manager is designed to help you centrally manage virtual network connectivity and network security across your Azure environment, using a hub-and-spoke or mesh topology model and centrally governed security admin rules.
Core capabilities
- Network grouping: Organize VNets into network groups so you can apply connectivity/security intent to a set of VNets.
- Connectivity configurations: Define network connectivity intent (commonly mesh or hub-and-spoke) that Azure Virtual Network Manager turns into the required VNet peerings.
- Security admin configurations: Define centralized security admin rules (allow/deny) that apply across VNets in a network group, providing consistent guardrails that complement local NSGs.
- Scoped governance: Apply intent across a defined scope (for example, one or more subscriptions, or management group scope—verify exact scoping options in your tenant).
- Regional deployments: Deploy configurations to selected Azure regions so that Azure Virtual Network Manager applies/updates the configuration for VNets in those regions.
Major components (high-level)
- Network Manager resource: The top-level Azure resource representing your Azure Virtual Network Manager instance.
- Scope: Which subscriptions/management groups the Network Manager can manage.
- Network groups: A collection of VNets (static membership and/or dynamic membership depending on feature availability—verify in official docs).
- Configurations:
- Connectivity configuration
- Security admin configuration
- Deployment: The act of applying a configuration to selected Azure regions.
Service type
- Control plane / orchestration service (not a packet-forwarding data plane).
- It creates/updates underlying Azure networking resources (for example, VNet peerings) and applies centrally managed security constructs.
Scope and geography (practical view)
- You create an Azure Virtual Network Manager resource in a specific region (metadata location), but it can manage VNets across the defined scope.
- Connectivity/security intent is deployed to regions. The “deployment” concept is important operationally: defining configuration is not the same as applying it.
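The split between defining and deploying can be pictured with a tiny state model. This is a conceptual Python sketch of the idea only, not an Azure API:

```python
# Conceptual sketch only: a configuration is first defined, and separately
# tracks the regions it has been deployed to. All names are illustrative.

class Configuration:
    def __init__(self, name):
        self.name = name
        self.deployed_regions = set()

    def deploy(self, region):
        """Applying the configuration is an explicit, per-region action."""
        self.deployed_regions.add(region)

    def is_active_in(self, region):
        return region in self.deployed_regions

cc = Configuration("cc-hub-and-spoke")
print(cc.is_active_in("eastus"))  # → False (defined, but not yet deployed)
cc.deploy("eastus")
print(cc.is_active_in("eastus"))  # → True
```

The operational takeaway is the same as in the text: creating a configuration changes nothing in your VNets until you deploy it to a region.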
How it fits into the Azure ecosystem
Azure Virtual Network Manager complements (not replaces) core Azure Networking services:
- Azure Virtual Network (VNets) and VNet peering
- Network Security Groups (NSGs) for workload-level filtering
- Azure Firewall / Azure Firewall Manager for centralized inspection and egress control (different layer; complementary)
- Azure Policy and management groups for governance and organization
- Azure Monitor and Activity Log for auditing changes and operations
3. Why use Azure Virtual Network Manager?
Business reasons
- Lower operational cost: Fewer manual networking changes, less repetitive ticket work, and fewer outages caused by inconsistent network configuration.
- Faster onboarding: New VNets can automatically become part of the intended topology and baseline security, reducing time-to-delivery for teams.
- Auditability: Central definitions help demonstrate that network connectivity and security intent is consistently applied.
Technical reasons
- Consistency at scale: Ensures VNets follow the same connectivity model (hub-and-spoke or mesh) without per-VNet manual peering management.
- Reduced configuration drift: Central definitions are easier to keep aligned than hundreds of individually maintained peerings/NSGs.
- Safer change management: A “define then deploy” model supports controlled rollouts.
Operational reasons
- Central management for platform teams: A platform/network team can manage connectivity and baseline security while application teams manage their VNets and subnets.
- Repeatable patterns: Apply proven network designs across many environments (dev/test/prod) with less custom work.
Security/compliance reasons
- Central security guardrails: Security admin rules can enforce global denies/allows that reduce the chance of accidental exposure.
- Separation of duties: Central network/security team can enforce org-wide rules while still allowing workload teams to manage local NSGs.
Scalability/performance reasons
- The primary scaling benefit is operational (configuration scaling), not higher bandwidth or throughput.
- Performance characteristics still depend on underlying Azure networking (VNet peering, gateways, firewalls). Azure Virtual Network Manager orchestrates those resources.
When teams should choose it
Choose Azure Virtual Network Manager when you have:
- Multiple VNets across subscriptions/regions
- Repeated peering work (new spokes, new environments)
- A desire for standardized connectivity patterns and centralized security intent
- A platform team responsible for connectivity and baseline network security
When they should not choose it
It may not be the best fit when:
- You have a very small footprint (one or two VNets) and simple needs
- You need data-plane features like packet inspection, NAT, TLS termination (use Azure Firewall / Application Gateway, etc.)
- You require complex routing behaviors that must be controlled via custom UDRs, NVAs, or advanced architectures; Azure Virtual Network Manager focuses on topology and security guardrails, not end-to-end traffic engineering
- You want cross-cloud orchestration (Azure Virtual Network Manager is Azure-only)
4. Where is Azure Virtual Network Manager used?
Industries
- Enterprises (finance, healthcare, manufacturing, retail) with multi-subscription Azure estates and strong governance needs
- Public sector with strict segmentation and audit requirements
- SaaS providers operating multiple environments and customer-facing deployments across regions
Team types
- Central Network/Platform teams managing shared connectivity
- Security teams defining baseline deny/allow rules
- DevOps/SRE teams integrating network intent with landing zones
- Application teams benefiting from self-service VNet provisioning that “just connects correctly”
Workloads
- Multi-tier applications spanning multiple VNets
- Shared services networks (identity, logging, patching, CI/CD runners)
- Microservices and internal APIs where east-west connectivity and segmentation must be consistent
- Regulated workloads requiring centralized guardrails
Architectures
- Hub-and-spoke (common in Azure landing zones)
- Regional hubs with multiple spoke VNets per region
- Mesh for tightly coupled environments (often smaller sets of VNets or special cases)
Real-world deployment contexts
- Azure landing zones using management groups and multiple subscriptions
- Large organizations with separate subscriptions for dev/test/prod and shared services
- M&A scenarios where newly acquired subscriptions need standardized connectivity and baseline security
Production vs dev/test usage
- Production: Strong value due to governance, drift reduction, and auditability. Use controlled deployments and change management.
- Dev/test: Useful for consistency and speed, but be mindful of creating too much peering sprawl (mesh can grow quickly).
5. Top Use Cases and Scenarios
Below are realistic scenarios where Azure Virtual Network Manager is commonly applied.
1) Auto-onboard new VNets into a hub-and-spoke topology
- Problem: Each new spoke VNet requires manual peering to the hub, plus updates to documentation and runbooks.
- Why this service fits: Connectivity configuration can define hub-and-spoke intent; deployment ensures spokes connect as they appear (depending on network group membership method).
- Example scenario: A platform team creates a standard “Prod-Spokes” network group. Any new production VNet is tagged and automatically joins the group and gets hub connectivity.
2) Enforce a global “deny inbound from Internet” baseline
- Problem: Teams accidentally add permissive NSG rules, creating compliance risk.
- Why this service fits: Security admin rules can enforce org-wide denies (and selective allows) that apply across many VNets.
- Example scenario: Security defines an admin rule to deny inbound from the Internet to all subnets except a designated DMZ.
3) Standardize connectivity across multiple subscriptions
- Problem: Different subscriptions evolve different connectivity patterns, making operations and troubleshooting inconsistent.
- Why this service fits: A single Azure Virtual Network Manager instance can be scoped to multiple subscriptions (verify scoping in your tenant) and deploy consistent configurations.
- Example scenario: Corporate IT manages connectivity for all subscriptions under a “Corp” management group.
4) Accelerate application environment replication (dev/test/prod)
- Problem: Environments drift—dev is connected differently than prod; security rules differ.
- Why this service fits: Use consistent network group definitions per environment and deploy the same connectivity/security intent.
- Example scenario: “AppA-Dev”, “AppA-Test”, “AppA-Prod” VNets all follow the same hub pattern, with environment-specific exceptions.
5) Reduce peering sprawl and misconfiguration in mesh networks
- Problem: In mesh, every VNet must peer with every other; manual maintenance becomes unmanageable.
- Why this service fits: Mesh connectivity config automates peering creation and updates.
- Example scenario: A data platform team needs full connectivity among a set of analytics VNets for a limited scope.
6) Implement segmentation between business units with centralized exceptions
- Problem: Business unit networks must be isolated, but some shared services must be reachable (DNS, identity, logging).
- Why this service fits: Combine hub-and-spoke connectivity with security admin rules that allow only specific ports/protocols to shared services.
- Example scenario: HR and Finance spokes connect to the hub; admin rules allow access only to centralized domain services.
7) Control blast radius during reorganizations or M&A
- Problem: Newly onboarded networks may be insecure or inconsistently connected.
- Why this service fits: Place acquired VNets into a quarantine network group with restrictive security admin rules and limited connectivity.
- Example scenario: Acquired subscription VNets are added to “Quarantine” and only allowed to reach update servers and a jump host network.
8) Centralize “break-glass” response rules
- Problem: During incident response, you may need to quickly block traffic patterns across many VNets.
- Why this service fits: Central admin rules can be updated and deployed consistently, with Activity Log traceability.
- Example scenario: Security adds a temporary deny rule for a known malicious IP range across all production VNets.
9) Create per-region hub connectivity with regional deployments
- Problem: Global organizations deploy VNets in many Azure regions; hubs are regional for latency/compliance.
- Why this service fits: Deploy connectivity configurations to each region where you operate.
- Example scenario: East US spokes connect to East US hub; West Europe spokes connect to West Europe hub, managed under one Azure Virtual Network Manager.
10) Improve auditing and change governance for networking intent
- Problem: Hard to prove who changed peering or baseline rules and when.
- Why this service fits: Configuration and deployment actions show up in Azure control-plane logs (Activity Log), improving audit trails.
- Example scenario: A change advisory board reviews Azure Virtual Network Manager deployments instead of ad-hoc peering changes.
6. Core Features
Feature availability can vary by Azure cloud, region, and service updates. Verify the latest feature set in the official docs: https://learn.microsoft.com/azure/virtual-network-manager/
1) Network groups
- What it does: Lets you group VNets so you can apply connectivity and/or security intent to the group.
- Why it matters: Groups are the unit of management at scale.
- Practical benefit: “All production VNets” or “All VNets for App X” can be treated consistently.
- Caveats: Membership methods (static vs dynamic) and group scale limits should be confirmed in official docs before you rely on them.
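As a conceptual illustration of grouping, the sketch below selects VNets by tag, which is the general idea behind dynamic membership. This is not the actual Azure membership engine, and the `environment` tag key and values are invented for the example:

```python
# Conceptual tag-based selection, illustrating how dynamic membership can
# pick VNets by tag. The tag key "environment" and the values are made up.

def select_members(vnets, tag_key, tag_value):
    """Return names of VNets whose tags contain tag_key == tag_value."""
    return [v["name"] for v in vnets
            if v.get("tags", {}).get(tag_key) == tag_value]

vnets = [
    {"name": "vnet-prod-east", "tags": {"environment": "prod"}},
    {"name": "vnet-dev-east",  "tags": {"environment": "dev"}},
    {"name": "vnet-prod-west", "tags": {"environment": "prod"}},
]

print(select_members(vnets, "environment", "prod"))
# → ['vnet-prod-east', 'vnet-prod-west']
```

This is also why consistent tagging (enforced via Azure Policy) matters: group membership is only as reliable as the metadata it keys on.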
2) Connectivity configuration (mesh)
- What it does: Establishes a mesh topology among VNets in a network group.
- Why it matters: A full mesh has N×(N−1)/2 VNet pairs, each of which must be kept connected—manual management doesn’t scale.
- Practical benefit: Automated peering creation and lifecycle management.
- Caveats: Mesh can create a large number of peerings and operational complexity; consider hub-and-spoke for most enterprise cases.
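The arithmetic behind that caveat: a full mesh of N VNets contains N×(N−1)/2 pairs, and if each pair were maintained as classic VNet peering, every pair needs a peering resource on both sides. A quick calculation:

```python
# Pair count, and the two-sided peering resource count, for a mesh of n VNets.

def mesh_pairs(n):
    return n * (n - 1) // 2

def peering_resources(n):
    # One peering object on each side of every pair (classic VNet peering).
    return n * (n - 1)

for n in (5, 10, 50):
    print(f"{n} VNets -> {mesh_pairs(n)} pairs, {peering_resources(n)} peerings")
# → 5 VNets -> 10 pairs, 20 peerings
# → 10 VNets -> 45 pairs, 90 peerings
# → 50 VNets -> 1225 pairs, 2450 peerings
```

The quadratic growth is why the section recommends hub-and-spoke for most enterprise cases.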
3) Connectivity configuration (hub-and-spoke)
- What it does: Connects spoke VNets to one (or more) hub VNets (depending on configuration design).
- Why it matters: Hub-and-spoke is a common enterprise landing zone pattern for centralized control and shared services.
- Practical benefit: Rapid onboarding of spoke VNets; centralized inspection/egress patterns become easier.
- Caveats: Hub capacity and design still matters (firewall throughput, gateway limits, routing). Azure Virtual Network Manager does not replace good hub architecture.
4) Deployments to Azure regions
- What it does: Applies configurations to selected Azure regions. VNets in those regions that match group membership receive the intended connectivity/security.
- Why it matters: Separates design-time from run-time; helps controlled rollouts.
- Practical benefit: You can stage changes and deploy region-by-region.
- Caveats: “Defined” does not mean “active”—you must deploy. Also, multi-region environments require consistent deployment practices.
5) Security admin configuration (central guardrails)
- What it does: Defines centralized security admin rules that are enforced across target VNets.
- Why it matters: Prevents local misconfigurations from violating baseline security requirements.
- Practical benefit: Enforce denies/critical allows consistently (for example, deny RDP/SSH from Internet everywhere).
- Caveats: Understand precedence relative to NSGs and application security groups; test thoroughly. Exact evaluation order and supported scenarios should be verified in official docs.
6) Rule collections and prioritization (security admin rules)
- What it does: Organizes admin rules into collections, usually with priorities and intent.
- Why it matters: Makes large rule sets manageable and auditable.
- Practical benefit: Separate “baseline” from “application-specific exceptions.”
- Caveats: Misordered priorities can cause outages; implement strict change control.
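One way to picture prioritized evaluation is a first-match model over ascending priority numbers. The sketch below is a simplified illustration only, not Azure's actual security admin rule engine; verify the real evaluation semantics in the official docs:

```python
# Simplified first-match model: rules are checked in ascending priority
# order and the first matching rule decides. Illustration only -- not
# Azure's actual evaluation engine.

def evaluate(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if port in rule["ports"]:
            return rule["action"]
    return "no-match"  # conceptually, fall through to local NSG evaluation

rules = [
    {"priority": 100, "ports": {22, 3389}, "action": "Deny"},   # baseline collection
    {"priority": 200, "ports": {443},      "action": "Allow"},  # application exception
]

print(evaluate(rules, 3389))  # → Deny
print(evaluate(rules, 443))   # → Allow
print(evaluate(rules, 8080))  # → no-match
```

The model also shows why misordered priorities are dangerous: swap the numbers and an intended exception can be shadowed by the baseline deny.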
7) Centralized lifecycle management of peerings
- What it does: Creates/updates/removes peerings when VNets enter/leave network groups or when configs are updated/deployed.
- Why it matters: Removes repetitive work and reduces drift.
- Practical benefit: Faster environment scaling and simpler ops.
- Caveats: If teams manually change peerings, you can get unexpected drift or conflicts—define operational ownership clearly.
8) Works with Azure governance constructs (scope, RBAC, policy)
- What it does: Aligns to management groups/subscriptions, and uses Azure RBAC to control who can define and deploy configurations.
- Why it matters: Enterprise governance needs separation of duties.
- Practical benefit: Platform team controls deployments; application teams can be restricted.
- Caveats: RBAC must be designed carefully to avoid privilege escalation or accidental broad changes.
7. Architecture and How It Works
High-level service architecture
Azure Virtual Network Manager is a control-plane orchestrator:
1. You define scope, network groups, and configurations.
2. You deploy configurations to selected regions.
3. Azure Virtual Network Manager creates/updates the underlying Azure networking constructs: most notably VNet peerings for connectivity, and centrally enforced security admin rules for security.
Request / control flow (conceptual)
- User/automation (Portal/ARM/Bicep/Terraform/Azure CLI where supported) updates Azure Virtual Network Manager resources.
- Azure Virtual Network Manager evaluates network group membership.
- On deployment, Azure Virtual Network Manager applies changes to target VNets in the selected region(s).
- Resulting artifacts appear in the managed VNets (for example, peering entries).
Integrations with related Azure services
- Azure Virtual Network / VNet peering: Primary connectivity mechanism Azure Virtual Network Manager orchestrates.
- Network Security Groups (NSGs): Still used at subnet/NIC level; security admin rules provide centrally governed rules across VNets.
- Azure Policy: Commonly used to enforce tagging and resource standards so network group membership is reliable.
- Azure Monitor / Activity Log: Track deployments and configuration changes at the Azure control plane.
- Azure Resource Graph: Helpful to query VNets and verify membership/metadata at scale.
Dependency services
- Microsoft.Network resource provider must be registered in the subscription(s).
- VNets must exist in the selected scope and have non-overlapping address spaces (for peering scenarios).
Security / authentication model
- Uses Microsoft Entra ID (formerly Azure AD) authentication and Azure RBAC authorization.
- Actions are standard Azure Resource Manager operations: creating/updating the network manager, groups, configs, deployments.
- For enterprise, use least privilege roles and consider separating “author” vs “deployer” responsibilities.
Networking model
- No data-plane traffic flows “through” Azure Virtual Network Manager.
- The actual data-plane connectivity is provided by VNet peering (or other network components in your architecture).
- For hub-and-spoke, traffic inspection and egress typically rely on Azure Firewall/NVAs plus UDRs—Azure Virtual Network Manager is not a firewall.
Monitoring, logging, governance considerations
- Activity Log: Your primary audit trail for configuration and deployment actions.
- Change management: Treat deployments like production changes; use pipelines and approvals where possible.
- Tagging and naming: Strongly recommended to make group membership predictable and auditable.
Simple architecture diagram (conceptual)
flowchart LR
A[Platform Team] -->|Define groups + configs| B[Azure Virtual Network Manager]
B -->|Deploy to Region X| C[Connectivity Config]
B -->|Deploy to Region X| D[Security Admin Config]
C --> E[VNet 1]
C --> F[VNet 2]
E <-->|VNet Peering| F
D --> E
D --> F
Production-style architecture diagram (landing-zone oriented)
flowchart TB
subgraph MG[Management Group Scope]
subgraph Sub1[Subscription: Shared Services]
HUB[VNet: Hub]
FW["Azure Firewall/NVA (optional)"]
DNS[DNS/Identity/Shared Services]
HUB --> FW
HUB --> DNS
end
subgraph Sub2[Subscription: Production Spokes]
S1[VNet: App-Spoke-1]
S2[VNet: App-Spoke-2]
S3[VNet: Data-Spoke]
end
subgraph Sub3[Subscription: Dev/Test]
D1[VNet: Dev-Spoke-1]
D2[VNet: Test-Spoke-1]
end
AVNM[Azure Virtual Network Manager]
NG1[Network Group: Prod-Spokes]
NG2[Network Group: DevTest-Spokes]
CC[Connectivity Config: Hub-and-Spoke]
SAC[Security Admin Config: Baseline Deny/Allow]
AVNM --> NG1
AVNM --> NG2
AVNM --> CC
AVNM --> SAC
NG1 --> S1
NG1 --> S2
NG1 --> S3
NG2 --> D1
NG2 --> D2
CC --> HUB
CC --> S1
CC --> S2
CC --> S3
CC --> D1
CC --> D2
SAC --> S1
SAC --> S2
SAC --> S3
SAC --> D1
SAC --> D2
HUB <-->|Peering| S1
HUB <-->|Peering| S2
HUB <-->|Peering| S3
HUB <-->|Peering| D1
HUB <-->|Peering| D2
end
8. Prerequisites
Account/subscription/tenant requirements
- An Azure subscription where you can create networking resources.
- If you intend to manage multiple subscriptions, ensure you have access to all target subscriptions and confirm supported scoping (subscription vs management group) in official docs.
Permissions (IAM / RBAC)
You typically need:
– Permission to create the Network Manager resource.
– Permission to read VNets and write connectivity/security configurations within the defined scope.
– Permission to create/modify VNet peerings in the target VNets.
Common built-in roles that may be involved (exact roles and least-privilege options can vary; verify in official docs):
– Owner or Contributor (broad; not least-privilege)
– Network Contributor on target VNets/subscriptions
– Potential service-specific roles (for example, “Network Manager Contributor”) if available in your tenant (verify)
Billing requirements
- A subscription with billing enabled. Even if Azure Virtual Network Manager charges are minimal, dependent resources (VMs, firewalls, gateways) can create costs.
Tools
For the lab in this tutorial:
– Azure Portal (recommended)
– Azure CLI (optional, for VNet creation/verification): https://learn.microsoft.com/cli/azure/install-azure-cli
Resource provider registration
Ensure the networking resource provider is registered:
– Microsoft.Network
You can register via Azure CLI:
az provider register --namespace Microsoft.Network
Region availability
- Azure Virtual Network Manager is not necessarily available in all Azure regions or sovereign clouds.
- Connectivity/security deployments are region-based. Confirm supported regions and clouds here: https://learn.microsoft.com/azure/virtual-network-manager/
Quotas/limits
- Limits exist for VNet peering, number of VNets, and potentially Azure Virtual Network Manager constructs. These limits can vary by region and subscription.
- Verify current limits and quotas in official docs before large-scale designs.
Prerequisite services/resources
- At least two VNets if you want to test connectivity automation.
- Non-overlapping IP address spaces across VNets you plan to peer.
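Overlap is easy to sanity-check before you build anything, for example with Python's standard `ipaddress` module:

```python
import ipaddress

# Report every pair of planned VNet address spaces that overlaps.
def overlapping_pairs(cidrs):
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b))
            for i, a in enumerate(nets)
            for b in nets[i + 1:]
            if a.overlaps(b)]

print(overlapping_pairs(["10.10.0.0/16", "10.20.0.0/16"]))
# → []  (no overlap; safe to peer)
print(overlapping_pairs(["10.10.0.0/16", "10.10.1.0/24"]))
# → [('10.10.0.0/16', '10.10.1.0/24')]  (overlap; peering would fail)
```

The two non-overlapping ranges above are the same ones used in the lab in section 10.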
9. Pricing / Cost
Pricing changes and is region-dependent. Do not rely on static blog numbers. Use the official pricing page and the Azure Pricing Calculator for current rates.
Official pricing references
- Azure Virtual Network Manager pricing page (verify URL and current pricing dimensions): https://azure.microsoft.com/pricing/details/virtual-network-manager/
- Azure Pricing Calculator: https://azure.microsoft.com/pricing/calculator/
Pricing dimensions (how cost is typically modeled)
Azure Virtual Network Manager costs (where applicable) are generally associated with:
– The number of managed VNets (VNets under management / targeted by configurations)
– The number of deployments and/or configurations (connectivity and security)
– The hours the service is active (depending on SKU/model)
The exact meters (and whether there is a free tier) must be confirmed on the official pricing page for your region.
What may be free vs billable
- The control-plane definitions (creating configs) may be low cost, but deployments and managed VNets can drive charges depending on the pricing model.
- Even if Azure Virtual Network Manager is low cost, the underlying networking resources may have their own costs.
Major cost drivers (direct and indirect)
Direct:
– Azure Virtual Network Manager service charges (per official meters)
Indirect (often larger):
– VNet peering: In Azure, VNet peering itself is typically not charged as a “resource,” but data transfer across peering can incur bandwidth charges depending on direction and region (intra-region vs inter-region). Verify current bandwidth charges: https://azure.microsoft.com/pricing/details/bandwidth/
– Azure Firewall / NVAs / gateways in hub networks (compute and processing charges)
– Azure Bastion (if used for validation/testing)
– Log ingestion (Azure Monitor/Log Analytics) if you send extensive diagnostics
Network/data transfer implications
- Mesh topologies can increase east-west traffic paths across peerings, potentially increasing data transfer charges.
- Inter-region connectivity and data transfer is usually more expensive than intra-region.
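A rough way to reason about this difference is a simple per-GB model. The rates below are hypothetical placeholders, not Azure prices; take current numbers from the bandwidth pricing page before using such an estimate:

```python
# Back-of-the-envelope transfer-cost model. The per-GB rates are
# HYPOTHETICAL placeholders, not real Azure prices.

INTRA_REGION_PER_GB = 0.01   # placeholder rate
INTER_REGION_PER_GB = 0.05   # placeholder rate

def monthly_transfer_cost(gb, inter_region):
    rate = INTER_REGION_PER_GB if inter_region else INTRA_REGION_PER_GB
    return gb * rate

print(monthly_transfer_cost(1000, inter_region=False))  # 10.0 (placeholder rates)
print(monthly_transfer_cost(1000, inter_region=True))   # 50.0 (placeholder rates)
```

Even with made-up rates, the shape of the result holds: keeping chatty traffic in-region (regional hubs) is usually the biggest lever on transfer cost.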
How to optimize cost
- Prefer hub-and-spoke for larger estates rather than full mesh, unless mesh is truly needed.
- Keep traffic in-region where possible; use regional hubs.
- Use tag-driven grouping (or well-defined static membership) to avoid accidentally managing VNets that shouldn’t be included.
- Avoid frequent large-scale redeployments during business hours; use change windows and staged rollouts to reduce risk (cost impact is usually indirect via operational disruption).
Example low-cost starter estimate (lab mindset)
A minimal lab can be kept low cost by:
– Creating only VNets and Azure Virtual Network Manager (no VMs).
– Validating via peering objects in the Portal/CLI.
Costs:
– Azure Virtual Network Manager charges (if any) depend on the current meter.
– VNet resources alone are generally not costly, but always verify.
Example production cost considerations
In production, your main costs often come from:
– Firewalls/NVAs (inspection)
– Gateways (VPN/ExpressRoute)
– Inter-region data transfer
– Logging/monitoring ingestion
Azure Virtual Network Manager can reduce operational cost, but it does not eliminate these network architecture costs.
10. Step-by-Step Hands-On Tutorial
This lab builds a small, real topology and uses Azure Virtual Network Manager to create connectivity automatically.
Objective
Create two VNets, group them using Azure Virtual Network Manager, deploy a mesh connectivity configuration, and verify that VNet peerings were created automatically.
Lab Overview
You will:
1. Create a resource group
2. Create two VNets with non-overlapping address spaces
3. Create an Azure Virtual Network Manager instance
4. Create a network group and add the VNets
5. Create a mesh connectivity configuration
6. Deploy the configuration to a region
7. Validate that peerings exist
8. Clean up
Estimated time: 45–75 minutes
Cost: Low (VNets are low cost; service charges depend on current pricing; optional VMs increase cost)
Step 1: Create a resource group and two VNets
Option A: Azure Portal (beginner-friendly)
- In the Azure Portal, go to Resource groups → Create.
- Set:
– Subscription: your lab subscription
– Resource group: rg-avnm-lab
– Region: choose a region where Azure Virtual Network Manager is supported (for example, East US; verify availability)
- Select Review + create → Create.
Now create two VNets:
1. Go to Virtual networks → Create.
2. Create VNet A:
– Name: vnet-avnm-a
– Region: same as RG (for simplicity)
– Address space: 10.10.0.0/16
– Subnet: subnet-1 = 10.10.1.0/24
– Tags: add avnm-lab = true (optional but useful)
3. Create VNet B similarly:
– Name: vnet-avnm-b
– Address space: 10.20.0.0/16
– Subnet: subnet-1 = 10.20.1.0/24
– Tags: add avnm-lab = true
Expected outcome: Two VNets exist in the same region with non-overlapping IP ranges.
Option B: Azure CLI (fast and repeatable)
az login
az account set --subscription "<YOUR_SUBSCRIPTION_ID>"
# Variables
RG="rg-avnm-lab"
LOC="eastus"
az group create -n "$RG" -l "$LOC"
# Create VNet A
az network vnet create \
-g "$RG" -n "vnet-avnm-a" -l "$LOC" \
--address-prefixes "10.10.0.0/16" \
--subnet-name "subnet-1" --subnet-prefixes "10.10.1.0/24"
# Create VNet B
az network vnet create \
-g "$RG" -n "vnet-avnm-b" -l "$LOC" \
--address-prefixes "10.20.0.0/16" \
--subnet-name "subnet-1" --subnet-prefixes "10.20.1.0/24"
# Optional tagging (helps with dynamic grouping in some designs)
az resource tag \
--ids "$(az network vnet show -g "$RG" -n vnet-avnm-a --query id -o tsv)" \
--tags avnm-lab=true
az resource tag \
--ids "$(az network vnet show -g "$RG" -n vnet-avnm-b --query id -o tsv)" \
--tags avnm-lab=true
Expected outcome: Same as the portal workflow.
Step 2: Create an Azure Virtual Network Manager instance
Because Azure Virtual Network Manager has multiple configuration objects and its UI flows evolve, the Portal is the safest way to keep this lab executable without depending on CLI syntax that may change.
- In the Azure Portal, search for Virtual Network Manager (Azure Virtual Network Manager).
- Select Create.
- Configure:
– Subscription: your subscription
– Resource group: rg-avnm-lab
– Name: avnm-lab-01
– Region: choose the same region (for simplicity)
– Scope: select the subscription containing your VNets
- If you see options for management group scope, use subscription scope for this lab.
- Select Review + create → Create.
Expected outcome: An Azure Virtual Network Manager resource exists and can “see” (scope) the subscription that contains your VNets.
Step 3: Create a network group and add VNets
- Open avnm-lab-01.
- Find Network groups → Create.
- Name the network group: ng-avnm-lab
- Add VNets to the group:
– Use Static membership and explicitly add: vnet-avnm-a and vnet-avnm-b
If your portal offers dynamic membership based on tags/conditions and you want to try it, ensure you understand the exact condition syntax and test membership. Dynamic membership capabilities can evolve—verify in official docs.
Expected outcome: The network group contains exactly two VNets.
Step 4: Create a mesh connectivity configuration
- In avnm-lab-01, go to Configurations → Connectivity (wording may vary slightly).
- Select Create.
- Set:
– Name: cc-mesh-avnm-lab
– Topology: Mesh
- Target the network group:
– Add ng-avnm-lab as the group to apply mesh connectivity to.
- Save/Create the connectivity configuration.
Expected outcome: A connectivity configuration exists, but it is not yet applied to VNets until you deploy it.
Step 5: Deploy the configuration to the region
- In avnm-lab-01, go to Deployments (or Deploy configuration).
- Select the configuration cc-mesh-avnm-lab.
- Select the target region (for example, the region where both VNets live).
- Start the deployment.
Wait for deployment status to show Succeeded.
Expected outcome: Azure Virtual Network Manager creates the required peerings between vnet-avnm-a and vnet-avnm-b.
Step 6: Verify that VNet peerings were created
Verify in Azure Portal
For each VNet:
1. Open vnet-avnm-a → Peerings
2. You should see a peering to vnet-avnm-b
3. Open vnet-avnm-b → Peerings
4. You should see a peering to vnet-avnm-a
Expected outcome: Two peering objects exist (one in each VNet), in “Connected” state (or “Initiated/Connected” depending on timing).
Verify with Azure CLI
List peerings on each VNet:
az network vnet peering list -g rg-avnm-lab --vnet-name vnet-avnm-a -o table
az network vnet peering list -g rg-avnm-lab --vnet-name vnet-avnm-b -o table
Expected outcome: Output shows one peering per VNet pointing to the other VNet.
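If you script this verification, request JSON (`-o json`) and check the fields programmatically. The sample record below is hand-made, but it mirrors real peering fields (`peeringState` and `remoteVirtualNetwork.id`):

```python
# Validate that a VNet has a Connected peering to the expected remote VNet.
# `sample` imitates one entry from `az network vnet peering list -o json`.

def peering_ok(peerings, expected_remote):
    return any(expected_remote in p["remoteVirtualNetwork"]["id"]
               and p["peeringState"] == "Connected"
               for p in peerings)

sample = [{
    # AVNM-generated peering names vary, so match on the remote VNet id.
    "peeringState": "Connected",
    "remoteVirtualNetwork": {
        "id": "/subscriptions/0000/resourceGroups/rg-avnm-lab/providers/"
              "Microsoft.Network/virtualNetworks/vnet-avnm-b"
    },
}]

print(peering_ok(sample, "vnet-avnm-b"))  # → True
```

Matching on the remote VNet id rather than the peering name keeps the check robust to whatever name Azure Virtual Network Manager generates.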
Validation
Use this checklist:
- [ ] Both VNets exist in the same region (for this lab)
- [ ] Network group ng-avnm-lab contains both VNets
- [ ] Connectivity config cc-mesh-avnm-lab exists
- [ ] Deployment succeeded to the correct region
- [ ] VNet peerings exist in both VNets and show Connected
Optional deeper validation (costs more):
– Create a small Linux VM in each VNet and test connectivity (ICMP may be blocked by default; using TCP tests like curl to a simple service is often more realistic). If you do this, ensure NSGs allow required traffic and remove VMs during cleanup to control cost.
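If you do add VMs, ICMP is often blocked by default, so a TCP-level probe is a more reliable check. A minimal sketch, where the target address and port are placeholders for a service listening in the peered VNet:

```python
import socket

def tcp_reachable(host, port, timeout=3):
    """Return True if a TCP handshake to host:port completes in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder target: a VM/service in the peered VNet's subnet.
print(tcp_reachable("10.20.1.4", 443, timeout=1))
```

Run it from a VM in one VNet against a listener in the other; remember that NSGs (and any security admin rules) must allow the port you test.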
Troubleshooting
Issue: Deployment succeeded but no peerings appear
Common causes:
- The deployment was targeted to the wrong region.
- The network group has no members (membership misconfiguration).
- Insufficient permissions to create peerings in one or both VNets.
Fix:
- Re-check the deployment region and group membership.
- Confirm RBAC: you need permissions to write peering resources on both VNets.
- Check the Activity Log on the Azure Virtual Network Manager resource for errors.
Issue: Peerings fail to connect
Common causes:
- Overlapping address spaces between VNets.
- A policy restriction blocking peering creation.
- VNets in unsupported configurations/regions (verify service limitations).
Fix:
– Ensure 10.10.0.0/16 and 10.20.0.0/16 do not overlap.
– Review Azure Policy assignments and deny assignments.
– Verify region support in official docs.
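The overlap check can be done before you ever deploy, using Python's standard ipaddress module with the lab's CIDRs (a quick sketch):

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """True if the two address spaces overlap (peering will be rejected)."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(overlaps("10.10.0.0/16", "10.20.0.0/16"))   # False: safe to peer
print(overlaps("10.10.0.0/16", "10.10.128.0/17")) # True: overlapping ranges
```

Running a check like this across all planned VNet ranges catches overlap problems early, before a deployment fails.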
Issue: You can’t find “Virtual Network Manager” in the portal
Common causes:
- The service is not available in your cloud/region.
- Lack of permissions.
Fix:
- Verify region and cloud support: https://learn.microsoft.com/azure/virtual-network-manager/
- Try creating in a different supported region.
- Confirm you have Contributor/Owner on the subscription.
Cleanup
To avoid ongoing charges and resource clutter:
Option A: Delete the resource group (recommended)
This removes the VNets and the Azure Virtual Network Manager instance created in this lab.
az group delete -n rg-avnm-lab --yes --no-wait
Option B: Delete resources individually (portal)
- Delete the Azure Virtual Network Manager instance avnm-lab-01
- Delete VNets vnet-avnm-a and vnet-avnm-b
- Delete the resource group rg-avnm-lab
11. Best Practices
Architecture best practices
- Prefer hub-and-spoke for enterprise-scale environments; use mesh sparingly.
- Design regional hubs when latency, sovereignty, or cost require it.
- Treat Azure Virtual Network Manager as intent management, not a substitute for traffic inspection, routing design, or firewall policy.
IAM/security best practices
- Separate responsibilities:
- “Network intent authors” can update configurations.
- “Deployers” can perform deployments to production regions.
- Use least privilege:
- Scope permissions only to required subscriptions/resource groups/VNets.
- Use managed identities and CI/CD pipelines for deployments when possible, with approvals.
Cost best practices
- Avoid large meshes that generate unnecessary east-west traffic and management overhead.
- Keep inter-region traffic minimal.
- Monitor bandwidth costs and firewall processing costs (often the true driver).
Performance best practices
- Keep high-volume east-west traffic within region.
- Ensure hub components (firewalls, gateways) are sized for throughput and connections.
- Understand that Azure Virtual Network Manager doesn’t increase bandwidth; it standardizes connectivity.
Reliability best practices
- Use staged rollouts: deploy to a non-prod region or subset first.
- Maintain a rollback plan (for example, redeploy a previous known-good configuration).
- Avoid “big bang” changes across every region at once.
Operations best practices
- Use consistent naming, for example:
  - avnm-<org>-<env>
  - ng-<env>-<purpose>
  - cc-<topology>-<env>
  - sac-<baseline>-<env>
- Use tags on VNets to support predictable grouping and reporting.
- Monitor Activity Logs and maintain change tickets for deployments.
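Naming conventions only help if they are enforced; teams sometimes lint names in CI before a deployment runs. A hedged sketch follows: the regex patterns encode the example convention above and are assumptions for illustration, not AVNM requirements.

```python
import re

# Hypothetical patterns matching the naming convention shown above.
PATTERNS = {
    "manager": re.compile(r"^avnm-[a-z0-9]+-(dev|test|prod)$"),
    "group": re.compile(r"^ng-(dev|test|prod)-[a-z0-9]+$"),
    "connectivity": re.compile(r"^cc-(mesh|hubspoke)-(dev|test|prod)$"),
    "security": re.compile(r"^sac-[a-z0-9]+-(dev|test|prod)$"),
}

def valid_name(kind: str, name: str) -> bool:
    """Return True if the name matches the convention for its resource kind."""
    return bool(PATTERNS[kind].fullmatch(name))

print(valid_name("group", "ng-prod-spokes"))       # True
print(valid_name("connectivity", "cc-mesh-prod"))  # True
print(valid_name("group", "spokes-prod"))          # False
```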
Governance/tagging/naming best practices
- Standardize tags like:
  - env=prod|test|dev
  - app=<name>
  - owner=<team>
  - connectivity=hub|mesh
- Use Azure Policy to enforce required tags on VNets so grouping remains accurate.
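Because group membership often keys off tags, a simple pre-flight check for required tags can catch VNets that would silently fall out of a network group. A sketch; the required tag set mirrors the examples above, and the VNet data is illustrative:

```python
REQUIRED_TAGS = {"env", "app", "owner", "connectivity"}

def missing_tags(vnet_tags: dict) -> set:
    """Return the required tag keys absent from a VNet's tags."""
    return REQUIRED_TAGS - set(vnet_tags or {})

# Illustrative tag data, e.g. collected from `az network vnet list`.
vnets = {
    "vnet-avnm-a": {"env": "prod", "app": "billing",
                    "owner": "net-team", "connectivity": "hub"},
    "vnet-avnm-b": {"env": "prod"},
}
for name, tags in vnets.items():
    gaps = missing_tags(tags)
    if gaps:
        print(f"{name}: missing {sorted(gaps)}")
# vnet-avnm-b: missing ['app', 'connectivity', 'owner']
```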
12. Security Considerations
Identity and access model
- Azure Virtual Network Manager uses Microsoft Entra ID (formerly Azure AD) and Azure RBAC.
- Key security goal: prevent unauthorized, broad connectivity/security changes.
Recommendations:
- Limit who can create/update deployments.
- Use management group scoping carefully to avoid overly broad blast radius.
- Prefer CI/CD with approvals for production deployments.
Encryption
- Control-plane operations use Azure’s standard management plane security.
- Data-plane encryption is outside Azure Virtual Network Manager scope:
- VNet peering traffic is on Microsoft backbone, but application-level encryption may still be required for compliance.
- Use TLS/mTLS where appropriate.
Network exposure
- Azure Virtual Network Manager can increase connectivity if misconfigured (for example, an accidental mesh among sensitive VNets).
Recommendations:
- Start with least connectivity: hub-and-spoke and explicit allows.
- Use security admin rules to enforce baseline denies.
- Document which VNets are allowed to communicate and why.
Secrets handling
- Azure Virtual Network Manager itself doesn’t store application secrets.
- If using automation, store credentials in Azure Key Vault and prefer managed identities.
Audit/logging
- Use Azure Activity Log for:
- Who created/updated configurations
- Who deployed changes
- Deployment success/failure
- Consider exporting Activity Logs to Log Analytics / SIEM for long-term retention.
Compliance considerations
- Central guardrails help with controls like “restricted inbound access” and “segmentation.”
- Always validate how admin security rules map to your compliance framework and whether they meet evidence requirements.
Common security mistakes
- Creating full mesh across prod and non-prod VNets.
- Allowing too many users to deploy configurations to production.
- Not testing rule precedence (admin rules vs NSGs) and causing outages.
- Using overlapping IP ranges, leading to connectivity and security ambiguity.
Secure deployment recommendations
- Define network groups by environment and sensitivity.
- Apply baseline security admin rules broadly; allow exceptions only with formal approval.
- Treat deployments as change-controlled events and log them.
13. Limitations and Gotchas
Always confirm current limitations in official docs: https://learn.microsoft.com/azure/virtual-network-manager/
Common limitations/gotchas to plan for
- Regional deployment model: Defining a configuration is not enough—you must deploy it to regions. Teams often forget this step.
- Address space overlap: VNets with overlapping CIDRs cannot be peered as expected. Standardize IP management early.
- Mesh scaling: Mesh creates many peerings as VNets increase; it becomes hard to reason about traffic flows and can increase costs indirectly (traffic).
- RBAC complexity: Managing peerings and security across many subscriptions requires careful role design.
- Policy conflicts: Azure Policy deny assignments can block peering creation or updates, causing deployments to fail.
- Change blast radius: A single deployment can affect many VNets; implement staging, approvals, and clear ownership.
- Mixed ownership models: If app teams manually change peerings or NSGs, it can conflict with centrally managed intent and complicate troubleshooting.
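The mesh-scaling point above is easy to quantify: a full mesh of n VNets requires n*(n-1) peering resources, because every pair creates one peering object in each of the two VNets. A quick sketch:

```python
def mesh_peering_count(n: int) -> int:
    """Peering objects in a full mesh of n VNets: one per VNet per pair."""
    return n * (n - 1)

for n in (5, 20, 50):
    print(n, mesh_peering_count(n))
# 5 → 20, 20 → 380, 50 → 2450
```

The quadratic growth is why mesh topologies become hard to reason about (and audit) well before you reach enterprise scale.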
Pricing surprises
- Even if Azure Virtual Network Manager charges are modest, bandwidth/data transfer and firewall processing can dominate costs.
- Inter-region and egress traffic costs can rise when connectivity is expanded unintentionally.
Compatibility issues
- Designs that rely on complex routing (UDRs, NVAs) still need careful validation. Azure Virtual Network Manager handles topology and central rules; it does not automatically validate your routing intent end-to-end.
14. Comparison with Alternatives
How to think about alternatives
Azure Virtual Network Manager is primarily for centralized intent (connectivity and security guardrails) across many VNets. Alternatives often fall into:
- Manual operations (Portal/CLI)
- Infrastructure-as-code orchestration (Terraform/Bicep)
- Other cloud-native connectivity hubs (Transit Gateway-like services)
- Network security products for inspection/policy (not topology management)
Comparison table
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Azure Virtual Network Manager | Centralized connectivity + admin security rules across many VNets in Azure | Intent-based, scalable, reduces drift, integrates with Azure governance | Requires good RBAC/change control; not a firewall; deployment model must be understood | You have many VNets/subscriptions and want standardized topologies and baseline rules |
| Manual VNet peering + NSGs | Small environments | Simple, no new service to learn | Doesn’t scale; high risk of drift and inconsistent security | 1–5 VNets with stable topology |
| Terraform/Bicep orchestration of peerings | Teams already standardized on IaC | Versioning, code review, automation | You still build/maintain logic; drift risk if out-of-band changes occur | You want everything-as-code and can invest in robust modules and guardrails |
| Azure Firewall Manager | Central firewall policy management | Strong for firewall policy at scale | Different problem: doesn’t create VNet peerings/topologies | You need centralized firewall policy; pair with AVNM for topology |
| Azure Virtual WAN | Large-scale branch connectivity, SD-WAN-like scenarios, centralized routing | Strong connectivity hub model, integrates VPN/ER | More complex; different architecture than pure VNet peering | You need branch/user connectivity patterns and centralized routing services |
| AWS Transit Gateway (other cloud) | AWS multi-VPC connectivity hub | Purpose-built hub routing | Not Azure; different primitives | When you’re designing in AWS |
| GCP Network Connectivity Center (other cloud) | GCP hub connectivity management | Central connectivity in GCP | Not Azure | When you’re designing in GCP |
| Self-managed scripts (CLI/PowerShell) | Ad-hoc automation | Quick to start | Hard to govern and test; fragile | Temporary automation in small estates (not ideal long-term) |
15. Real-World Example
Enterprise example (multi-subscription landing zone)
Problem:
A global enterprise has 200+ VNets across dozens of subscriptions and multiple regions. Spoke VNets are created weekly by different teams. Connectivity must follow hub-and-spoke per region, and security must enforce baseline denies (no inbound from Internet, controlled lateral movement).
Proposed architecture:
– Management groups for Platform, Prod, NonProd
– Regional hub VNets in shared services subscriptions
– Azure Virtual Network Manager scoped to the management group (or subscription set, depending on governance)
– Network groups:
– ng-prod-spokes
– ng-nonprod-spokes
– Connectivity configurations:
– cc-prod-hubspoke-eastus, cc-prod-hubspoke-weu, etc.
– Security admin configurations:
– Baseline deny inbound from Internet
– Allow only required ports to shared services (DNS, patching, logging)
Why this service was chosen:
- Centralizes and standardizes connectivity onboarding
- Reduces drift and manual peering work
- Provides centralized guardrails that support compliance
Expected outcomes:
- New VNets onboard into the correct topology faster
- Fewer misconfigurations and fewer "network tickets"
- Improved auditability via centralized deployments and Activity Log
Startup/small-team example (growing SaaS on Azure)
Problem:
A startup has 8 VNets across dev/test/prod and is adding regions. They’re starting to experience inconsistent peering and security rules between environments.
Proposed architecture:
– One Azure Virtual Network Manager scoped to the subscription (or two subscriptions if they separate prod)
– Network groups per environment: ng-dev, ng-prod
– Hub-and-spoke connectivity for prod; simple mesh for a small dev sandbox (only if needed)
– Baseline admin security rules to prevent accidental inbound exposure
Why this service was chosen:
- They want standardization now, before they reach 30–50 VNets
- They prefer platform-level governance without building complex IaC logic immediately
Expected outcomes:
- Faster environment creation with fewer mistakes
- Clearer separation between dev and prod connectivity
- A security baseline that reduces risk as the team scales
16. FAQ
1) Is Azure Virtual Network Manager a data-plane service that routes traffic?
No. It is a control-plane service that helps manage intent (connectivity and security guardrails). Traffic flows through VNets, peerings, gateways, firewalls, etc.
2) Does Azure Virtual Network Manager replace Azure Firewall?
No. Azure Firewall is for inspection and policy enforcement at the traffic level. Azure Virtual Network Manager manages topology and admin security rules.
3) Do I still need NSGs if I use security admin rules?
Yes in most cases. Admin security rules provide centralized guardrails; NSGs still handle workload/subnet/NIC-level rules. Understand the precedence model (verify current behavior in official docs).
4) What topologies does Azure Virtual Network Manager support?
Commonly mesh and hub-and-spoke via connectivity configurations. Verify the latest supported options in official docs.
5) Can it manage VNets across multiple subscriptions?
Yes if your scope includes those subscriptions and you have RBAC permissions. Exact scoping options (subscription vs management group) should be verified in your tenant/docs.
6) Do changes take effect immediately after I edit a configuration?
No. Typically you must deploy the configuration to regions for changes to apply.
7) How do I roll back a bad deployment?
Common approaches include redeploying a previous known-good configuration, or reversing the change and redeploying. Your rollback strategy should be part of change management.
8) Can Azure Virtual Network Manager automatically add new VNets to groups?
Some environments use dynamic membership (for example, based on tags/conditions). Availability and syntax can change—verify in official docs and test carefully.
9) Does it support inter-region connectivity?
It can manage VNets in multiple regions, but connectivity relies on underlying constructs (for example, global VNet peering capabilities and region deployment selections). Validate costs and architecture.
10) Will it create a lot of peerings in mesh?
Yes. Mesh peerings grow rapidly as you add VNets. Use mesh only when necessary and keep membership tightly controlled.
11) How do I audit who deployed network changes?
Use the Azure Activity Log on the Azure Virtual Network Manager resource and related resources to see deployment actions and callers.
12) What happens if someone manually edits a peering created by Azure Virtual Network Manager?
This can create drift and unexpected behavior during future deployments. Establish an operating model: either treat these as centrally managed and disallow manual changes, or have clear exceptions and documentation.
13) Can I use Azure Virtual Network Manager with Azure Policy?
Yes. Azure Policy is often used to enforce consistent tags/naming and prevent unauthorized network changes, supporting predictable group membership and governance.
14) Is Azure Virtual Network Manager suitable for small environments?
It can be, but the value increases with scale. For one or two VNets, manual configuration may be simpler.
15) Where should I start: connectivity or security admin rules?
Start with connectivity standardization (hub-and-spoke) and then add baseline security admin rules once you’ve tested precedence and impact in non-production.
16) Does it manage DNS, UDRs, or BGP routing automatically?
Its primary focus is connectivity topology and centralized admin security rules. For routing control and DNS, use dedicated Azure networking components and validate integration (verify exact capabilities in docs).
17. Top Online Resources to Learn Azure Virtual Network Manager
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official documentation | Azure Virtual Network Manager docs: https://learn.microsoft.com/azure/virtual-network-manager/ | Canonical source for concepts, supported features, and how-to guides |
| Official pricing | Pricing page: https://azure.microsoft.com/pricing/details/virtual-network-manager/ | Current pricing meters and region-dependent details |
| Pricing tool | Azure Pricing Calculator: https://azure.microsoft.com/pricing/calculator/ | Build estimates for your environment |
| Azure bandwidth pricing | Bandwidth pricing: https://azure.microsoft.com/pricing/details/bandwidth/ | Understand inter-region and egress costs that can dominate networking spend |
| Architecture guidance | Azure Architecture Center: https://learn.microsoft.com/azure/architecture/ | Reference architectures for hub-spoke, enterprise-scale landing zones, governance patterns |
| Landing zone guidance | Cloud Adoption Framework / Azure landing zones: https://learn.microsoft.com/azure/cloud-adoption-framework/ready/landing-zone/ | Best practices for management groups, subscriptions, network topology at scale |
| Monitoring/auditing | Azure Activity Log: https://learn.microsoft.com/azure/azure-monitor/essentials/activity-log | How to audit deployments and control-plane changes |
| Networking fundamentals | Virtual network documentation: https://learn.microsoft.com/azure/virtual-network/ | Foundational VNet concepts required for successful AVNM design |
| VNet peering | VNet peering docs: https://learn.microsoft.com/azure/virtual-network/virtual-network-peering-overview | Understand what AVNM orchestrates and peering limitations |
| Community learning (reputable) | Microsoft Tech Community (Networking blog hub): https://techcommunity.microsoft.com/category/azure-networking/blog/azurenetworkingblog | Practical announcements and deep dives; validate against docs |
18. Training and Certification Providers
| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
|---|---|---|---|---|
| DevOpsSchool.com | Beginners to working professionals | Azure, DevOps, cloud operations foundations and implementation | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Students and early-career professionals | DevOps, SCM, CI/CD, cloud fundamentals | Check website | https://www.scmgalaxy.com/ |
| CLoudOpsNow.in | Cloud engineers and operations teams | Cloud operations, automation, monitoring, reliability practices | Check website | https://www.cloudopsnow.in/ |
| SreSchool.com | SREs, platform engineers, operations | SRE principles, reliability engineering, observability | Check website | https://www.sreschool.com/ |
| AiOpsSchool.com | Ops teams exploring AIOps | AIOps concepts, automation, monitoring strategy | Check website | https://www.aiopsschool.com/ |
19. Top Trainers
| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
|---|---|---|---|
| RajeshKumar.xyz | Cloud/DevOps training content (verify offerings) | Individuals seeking guided training | https://rajeshkumar.xyz/ |
| devopstrainer.in | DevOps and cloud training (verify offerings) | Beginners to intermediate engineers | https://www.devopstrainer.in/ |
| devopsfreelancer.com | Freelance DevOps coaching/implementation (verify offerings) | Teams/individuals needing hands-on help | https://www.devopsfreelancer.com/ |
| devopssupport.in | DevOps support and training resources (verify offerings) | Ops/DevOps practitioners | https://www.devopssupport.in/ |
20. Top Consulting Companies
| Company | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
|---|---|---|---|---|
| cotocus.com | Cloud/DevOps consulting (verify exact services) | Cloud architecture, DevOps enablement, operations | Landing zone setup, network governance planning, CI/CD for infrastructure | https://cotocus.com/ |
| DevOpsSchool.com | Training + consulting (verify consulting scope) | Platform engineering, DevOps processes, cloud adoption | Designing operational model for AVNM, IaC pipelines, governance | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting (verify exact services) | DevOps automation, cloud migration support | Implementing standardized network provisioning workflows and guardrails | https://www.devopsconsulting.in/ |
21. Career and Learning Roadmap
What to learn before Azure Virtual Network Manager
- Azure fundamentals: subscriptions, resource groups, RBAC
- Azure Networking basics:
- VNets, subnets, CIDR planning
- NSGs and effective security rules
- VNet peering fundamentals and limitations
- Governance:
- Management groups
- Azure Policy basics (especially tag enforcement)
- Operational basics:
- Activity Log, Azure Monitor basics
What to learn after Azure Virtual Network Manager
- Hub network design patterns:
- Azure Firewall and Firewall Manager
- UDR design and routing intent
- Private Link and private DNS architectures
- Connectivity to on-prem:
- VPN Gateway, ExpressRoute
- Enterprise-scale operations:
- CI/CD for networking changes (Bicep/Terraform)
- Change management and release strategies for network deployments
Job roles that use it
- Cloud Network Engineer
- Platform Engineer
- SRE (in platform/network-heavy orgs)
- Cloud Solutions Architect
- Security Engineer (network security governance)
- DevOps Engineer (infrastructure automation and governance)
Certification path (Azure)
Azure certifications change over time. Common role-based paths that align well include:
- AZ-900 (fundamentals)
- AZ-104 (administrator)
- AZ-305 (solutions architect)
Networking-focused specialty certifications may exist or evolve; verify current certification offerings on Microsoft Learn: https://learn.microsoft.com/credentials/
Project ideas for practice
- Build a three-environment topology (dev/test/prod) with hub-and-spoke connectivity managed by Azure Virtual Network Manager.
- Implement baseline security admin rules to block inbound Internet and restrict east-west traffic; document precedence and test cases.
- Create an IaC pipeline (Bicep or Terraform) that updates Azure Virtual Network Manager configs and performs gated deployments.
- Create operational dashboards using Azure Resource Graph queries: list all VNets in each network group and peering status.
22. Glossary
- Azure Virtual Network (VNet): A private network in Azure where you deploy subnets, VMs, and private endpoints.
- Subnet: A range of IP addresses within a VNet used to segment resources.
- CIDR: Address notation like 10.10.0.0/16 defining an IP range.
- VNet peering: A connection between two VNets enabling private traffic over Microsoft's backbone (subject to limitations).
- Hub-and-spoke: Architecture where spokes connect to a central hub for shared services and centralized control.
- Mesh topology: Many-to-many topology where each VNet peers with others in the group.
- Azure Virtual Network Manager (AVNM): Azure service to centrally manage VNet connectivity and security admin rules across groups of VNets.
- Network Manager resource: The Azure resource representing an AVNM instance.
- Network group: A set of VNets targeted by connectivity/security configurations.
- Connectivity configuration: AVNM configuration defining how VNets should connect (for example, mesh or hub-and-spoke).
- Security admin configuration: AVNM configuration defining centrally governed allow/deny rules across VNets.
- NSG (Network Security Group): Stateful L3/L4 filtering rules applied to subnets/NICs.
- RBAC: Role-Based Access Control in Azure, used to authorize actions on resources.
- Management group: A container for subscriptions used for governance at scale.
- Activity Log: Azure control-plane audit log for operations performed on resources.
- UDR (User-defined route): Custom routes that control how traffic is forwarded in Azure VNets.
- NVA: Network Virtual Appliance (often a third-party firewall/router) deployed as a VM.
- Azure Firewall: Microsoft-managed firewall service used for traffic inspection and policy enforcement.
23. Summary
Azure Virtual Network Manager is an Azure Networking control-plane service that centralizes connectivity intent (mesh or hub-and-spoke) and baseline security admin rules across many Azure VNets. It matters most when your environment grows beyond a handful of networks and manual peering/security administration becomes inconsistent and risky.
Architecturally, it fits best alongside Azure landing zone patterns: management groups, multiple subscriptions, hub networks, and standardized governance. Cost-wise, don’t focus only on the service meter—watch the indirect drivers like bandwidth/data transfer, firewalls, and monitoring ingestion. Security-wise, treat deployments as high-impact changes: implement least privilege, approvals, staged rollouts, and audit via Activity Log.
Use Azure Virtual Network Manager when you need scalable, consistent network topology and centralized guardrails. Start next by reading the official documentation and then extending the lab into a hub-and-spoke design with security admin rules and a formal deployment process: https://learn.microsoft.com/azure/virtual-network-manager/