Oracle Cloud Globally Distributed Exadata Database on Exascale Infrastructure Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Data Management

Category

Data Management

1. Introduction

What this service is

Globally Distributed Exadata Database on Exascale Infrastructure (in Oracle Cloud) is best understood as an architecture pattern that combines:

  • Exadata Database Service on Exascale Infrastructure (OCI’s managed Exadata deployment model on shared “Exascale” infrastructure), and
  • Oracle Globally Distributed Database capabilities (Oracle Database sharding / global services concepts) to place data close to users and to scale out OLTP globally.

Naming note (important): In Oracle’s documentation and pricing pages, you will commonly see “Exadata Database Service on Exascale Infrastructure” and “Oracle Globally Distributed Database” discussed as separate product/service concepts. If Oracle has introduced (or renamed) a single, unified offering explicitly called “Globally Distributed Exadata Database on Exascale Infrastructure”, verify the current name and scope in official OCI documentation before implementing. This tutorial stays aligned to official, current building blocks and explains how teams typically implement the “globally distributed Exadata” outcome on OCI.

One-paragraph simple explanation

You use this approach when one Oracle database is not enough—because you need global scale, low latency for users in multiple geographies, and high availability—while still keeping Oracle Database and Exadata performance characteristics. Exascale makes Exadata consumption more elastic, and global distribution techniques place data and database services closer to users.

One-paragraph technical explanation

At a technical level, you deploy Oracle Database on Exadata Database Service on Exascale Infrastructure in one or more OCI regions, design data placement and routing using globally distributed database/sharding patterns (catalog, shard directors/global service managers, shards, and optionally replicated reference tables), connect applications using service-based routing, and secure the environment with OCI networking (VCNs/subnets/private endpoints), IAM policies, encryption (TDE, TLS, OCI Vault), and operational tooling (metrics, logging, audit, backups).

What problem it solves

This solves the classic tension between:

  • Scale (handling more users/transactions),
  • Latency (serving users worldwide with local response time),
  • Availability and fault isolation (regional failures, planned maintenance),
  • Operational control (patching, monitoring, governance),
  • Performance (Exadata offload, fast storage, Oracle RAC patterns where applicable),

…while keeping an Oracle Database platform rather than rewriting the application for a different distributed database.


2. What is Globally Distributed Exadata Database on Exascale Infrastructure?

Official purpose

The official purpose of the underlying components is:

  • Exadata Database Service on Exascale Infrastructure (OCI): Run Oracle Database on Exadata with OCI-managed infrastructure and lifecycle operations, using an Exascale consumption model designed for elasticity and efficient resource usage.
    Official docs entry point (verify current pages and naming):
    https://docs.oracle.com/en-us/iaas/exadata/

  • Oracle Globally Distributed Database (often associated with Oracle Sharding concepts): Distribute data horizontally across multiple databases (“shards”) to scale OLTP and place data geographically closer to users while maintaining Oracle Database semantics for many workloads.
    Oracle Database sharding / globally distributed concepts (verify version-specific docs):
    https://docs.oracle.com/en/database/

Core capabilities (combined outcome)

When teams say “globally distributed Exadata on Exascale,” they typically mean:

  • Elastic Exadata-backed Oracle Database capacity in OCI (Exascale model).
  • Global scale-out OLTP using sharding patterns (scale-out writes) or regionally distributed read/write patterns depending on design.
  • Geographic locality: keep user data in-region for latency and, in some cases, data residency.
  • Failure isolation: shard-level, region-level, and service-level isolation options.
  • Managed service operations for the Exadata layer (provisioning, patching workflows, monitoring integration, backups).

Major components

A practical decomposition looks like this:

  1. OCI tenancy + compartments (governance boundary).
  2. VCN + private subnets for database and management.
  3. Exadata Database Service on Exascale Infrastructure resources (exact resource names vary by OCI console evolution; verify in docs):
     – Exascale infrastructure allocation model
     – VM cluster(s)
     – Oracle Database homes
     – Container database (CDB) and pluggable databases (PDBs)
  4. Global distribution layer (Oracle Database globally distributed/sharding building blocks):
     – Shard catalog database
     – Shard databases (each shard is an Oracle database/PDB)
     – Global service management / shard directors (often installed on compute)
  5. Security services:
     – OCI IAM policies
     – OCI Vault (KMS keys) where applicable
     – Network Security Groups (NSGs), security lists, route tables
  6. Operations & governance:
     – OCI Audit
     – OCI Monitoring + Alarms
     – OCI Logging
     – OCI Database Management (if enabled/licensed; verify)
     – Backups to OCI Object Storage

Service type

  • Exadata Database Service on Exascale Infrastructure is a managed database platform service (PaaS-like) in OCI.
  • The “globally distributed” portion is best treated as a database architecture implemented using Oracle Database features/components (some features may be licensed or edition-dependent—verify licensing and availability for your subscription).

Scope: regional vs global

  • Exadata Database Service resources are regional (created in a specific OCI region and, where applicable, a specific availability domain).
  • A “globally distributed” architecture is multi-region by design, but you can validate concepts in a single region using multiple databases/PDBs (useful for labs).

How it fits into the Oracle Cloud ecosystem

This sits in the Oracle Cloud Data Management portfolio and connects tightly with:

  • OCI Networking (private IPs, DRGs, FastConnect, cross-region connectivity)
  • OCI IAM (least-privilege administration)
  • OCI Vault (key management, secrets patterns)
  • OCI Object Storage (backup destinations, data movement)
  • OCI Observability (Monitoring/Logging/Audit/Events)
  • Terraform/Resource Manager (repeatable provisioning)

3. Why use Globally Distributed Exadata Database on Exascale Infrastructure?

Business reasons

  • Global user experience: keep latency low for worldwide customers.
  • Business continuity: reduce blast radius of regional outages with a multi-region design.
  • Regulatory pressure: support data residency strategies (architecture-dependent; verify compliance requirements).
  • Modernization without full rewrite: keep Oracle Database while scaling beyond a single database footprint.

Technical reasons

  • Scale-out writes (when using sharding patterns) instead of only scaling up.
  • High throughput + predictable performance using Exadata characteristics (SQL offload, optimized storage, engineered system patterns).
  • Workload isolation: shards can isolate noisy tenants or high-traffic geographies.
  • Online expansion: add shards/capacity as the workload grows (procedures depend on design and tooling).
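Online expansion works because sharded data is managed in fixed chunks that relocate as whole units. The toy model below illustrates the idea only; the chunk and shard counts are made up, and it is not Oracle's actual rebalancing algorithm:

```python
# Toy model of chunk-based online expansion: rows map to a fixed set
# of chunks, and only whole chunks (never individual rows) move when
# a shard is added. Chunk and shard counts here are illustrative.
CHUNKS = 12

def rebalance(assignment: dict, new_shard: int) -> dict:
    """Move just enough chunks onto the new shard to even out load."""
    shard_count = len(set(assignment.values())) + 1
    target = len(assignment) // shard_count  # chunks the new shard should own
    out = dict(assignment)
    for chunk in list(assignment)[:target]:  # relocate the first `target` chunks
        out[chunk] = new_shard
    return out

before = {c: c % 2 for c in range(CHUNKS)}  # two shards, six chunks each
after = rebalance(before, new_shard=2)      # bring a third shard online
moved = sum(1 for c in before if before[c] != after[c])
print(moved, "of", CHUNKS, "chunks relocate")
```

The point is that capacity can grow while most chunks stay put, which is what makes expansion an online operation rather than a full reload.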

Operational reasons

  • Managed infrastructure lifecycle on OCI for Exadata components.
  • Standardized monitoring and governance through OCI services.
  • Repeatable builds via IaC (Terraform).

Security/compliance reasons

  • Private networking and strong segmentation (VCN, NSGs).
  • Encryption (Oracle TDE, TLS) and integration with OCI Vault for key management patterns.
  • Auditability through OCI Audit and database auditing (where configured).

Scalability/performance reasons

  • Exascale aligns to elastic consumption rather than fixed, large allocations (exact scaling behavior and limits vary—verify in official docs).
  • Exadata is typically chosen for performance consistency on Oracle workloads that need engineered storage features.

When teams should choose it

Choose this approach when you need one or more of:

  • Multi-region active architecture requirements for Oracle workloads
  • Extreme OLTP throughput that benefits from Exadata + horizontal distribution
  • Data locality by geography or tenant
  • Failure-domain isolation beyond a single database

When they should not choose it

Avoid or reconsider if:

  • You only need read scaling (a simpler read replica pattern may suffice).
  • Your application cannot tolerate shard-aware behavior (some designs require routing keys and changes).
  • You need a fully managed distributed SQL database with minimal database expertise—this approach is powerful but operationally complex.
  • Budget constraints: Exadata and multi-region connectivity can be expensive.

4. Where is Globally Distributed Exadata Database on Exascale Infrastructure used?

Industries

  • Financial services (payments, trading platforms, fraud systems)
  • Retail/e-commerce (global carts, orders, inventory)
  • Telecom (subscriber profiles, charging, real-time policy)
  • SaaS platforms with multi-tenant requirements
  • Gaming (player profiles, purchases, matchmaking metadata)
  • Logistics/travel (reservations, shipment tracking)
  • Healthcare (regional data separation—subject to compliance validation)

Team types

  • Platform engineering teams operating Oracle at scale
  • Database engineering and performance teams
  • SRE/operations teams with strict SLAs
  • Security and governance teams enforcing segmentation and auditability

Workloads

  • High-throughput OLTP with strict latency SLOs
  • Mixed OLTP + operational reporting (with careful design)
  • Multi-tenant data models
  • Geo-partitioned data (country/region-based ownership)

Architectures

  • Multi-region active/active by shard (common globally distributed pattern)
  • Active/active for multiple geographies, active/passive for disaster recovery (hybrid)
  • Regional service entry + shard-aware routing layer
  • Private connectivity with FastConnect and cross-region DRG routing

Production vs dev/test usage

  • Production: Multi-region deployments, strict networking, strong change control, automated failover testing.
  • Dev/test: Often single region, fewer shards, smaller allocations, and scripted provisioning.

5. Top Use Cases and Scenarios

Below are realistic scenarios that match Oracle Cloud Data Management needs.

1) Global customer profile store (geo-locality)

  • Problem: Users worldwide need low-latency access to profile data; a single region is too slow.
  • Why this fits: Sharding places user rows in the closest shard/region; Exadata provides consistent performance.
  • Example: APAC users route to a Tokyo shard; EU users route to Frankfurt shard.

2) Multi-region order processing with failure isolation

  • Problem: Order volume spikes and regional outages must not stop global ordering.
  • Why this fits: Orders can be routed to regional shards; outage impacts only that shard/region.
  • Example: North America order shard continues while EU shard is in incident mode.

3) SaaS multi-tenant isolation (tenant-per-shardgroup)

  • Problem: Noisy tenants degrade performance for others; compliance requires isolation.
  • Why this fits: Place large tenants on dedicated shard groups; small tenants share shard groups.
  • Example: Platinum tenants on dedicated shards with stricter SLOs.

4) Payments ledger with scale-out writes

  • Problem: Single database becomes a write bottleneck.
  • Why this fits: Partition/shard by account_id or region; scale writes horizontally.
  • Example: Add shards as transaction volume grows.

5) Country-based data residency

  • Problem: Certain customer data must remain in specific jurisdictions.
  • Why this fits: Shard placement can enforce geographic location (design must be validated).
  • Example: German customer records stored only in EU region shard.

6) High-throughput session store with hot key mitigation

  • Problem: Session updates are frequent; hotspots occur.
  • Why this fits: Choose shard keys that spread load; Exadata accelerates database operations.
  • Example: Hash-based shard key on session_id.
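Use case 6 hinges on a shard key whose hash spreads hot traffic evenly. A minimal sketch of stable hash placement follows; the shard count and application-side mapping are illustrative, since in a real deployment the shard director owns routing:

```python
import hashlib

SHARD_COUNT = 4  # illustrative shard count

def shard_for(session_id: str) -> int:
    # Python's built-in hash() is salted per process, so use a digest
    # that is stable across restarts and application instances.
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % SHARD_COUNT

# Sequential IDs (a classic hotspot source) still spread evenly:
placements = [shard_for(f"session-{i}") for i in range(10_000)]
counts = [placements.count(s) for s in range(SHARD_COUNT)]
print(counts)
```

Oracle's consistent-hash sharding applies the same idea at the chunk level; the takeaway is only that a hashed key avoids range hotspots such as monotonically increasing session IDs.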

7) Global inventory with regional ownership

  • Problem: Inventory is updated locally; global reads must be fast.
  • Why this fits: Regional shards handle local writes; replication/reference tables can support shared lookup data (design-dependent).
  • Example: Product catalog replicated; stock levels sharded by region.

8) M&A consolidation with gradual migration

  • Problem: Two companies have separate Oracle systems; need phased integration.
  • Why this fits: New global distribution layer can onboard subsets of data/users while legacy remains.
  • Example: Move one brand/region into new shard first.

9) Event-driven microservices with shard-aware routing

  • Problem: Microservices need consistent routing to the right database for each entity.
  • Why this fits: A routing tier can direct traffic based on shard key; services stay stateless.
  • Example: API gateway passes customer_id; routing maps to shard.

10) Latency-sensitive trading reference + transactional workloads

  • Problem: Traders require low latency; reference data needs global consistency.
  • Why this fits: Keep transactional shards local; replicate reference datasets appropriately.
  • Example: Local order book shard; replicated instruments table.

11) Online scaling for seasonal spikes

  • Problem: Traffic multiplies during holidays; capacity must expand without a redesign.
  • Why this fits: Add shards and/or increase Exascale resources (where supported).
  • Example: Add a shardgroup for holiday region traffic.

12) Blast-radius reduction for schema changes

  • Problem: Schema changes risk full-database outages.
  • Why this fits: Changes can roll shard-by-shard; failures isolated.
  • Example: Rolling index change across shard groups.

6. Core Features

This section lists key capabilities you will likely use in a “Globally Distributed Exadata Database on Exascale Infrastructure” design. Some are service features, others are Oracle Database features. Availability can vary by database version/edition and OCI region—verify in official docs.

1) Exadata Database Service on Exascale Infrastructure (managed Exadata consumption)

  • What it does: Provides Oracle Database running on Exadata with an Exascale model intended to make resource usage more elastic than fixed infrastructure.
  • Why it matters: Helps match capacity to demand and reduce stranded capacity.
  • Practical benefit: Faster right-sizing compared to long-lived fixed allocations.
  • Caveats: Scaling granularity, quotas, and minimums vary—verify current Exascale limits and shapes.

2) Oracle Database high availability patterns (RAC and service-based access where applicable)

  • What it does: Uses Oracle’s HA mechanisms (e.g., RAC patterns on Exadata) to reduce node-level downtime.
  • Why it matters: HA inside a region reduces disruption from host failures.
  • Practical benefit: Better uptime for shard databases and catalogs.
  • Caveats: HA does not replace multi-region DR; design both.

3) Globally Distributed Database / sharding concepts (catalog, shards, routing)

  • What it does: Horizontally partitions data across multiple databases and routes requests using a shard key.
  • Why it matters: Enables scale-out writes and geographic placement.
  • Practical benefit: You can add throughput by adding shards.
  • Caveats: Often requires shard key selection and sometimes application changes.
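As an illustration of what "application changes" can mean, Oracle's sharding DDL makes the shard key explicit in the schema. The sketch below uses hypothetical table and tablespace-set names; verify the exact syntax for your Oracle Database version:

```sql
-- Rows are distributed across shards by a consistent hash of cust_id.
CREATE SHARDED TABLE customers (
  cust_id  NUMBER        NOT NULL,
  region   VARCHAR2(10),
  profile  VARCHAR2(4000),
  CONSTRAINT pk_customers PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts_shard;

-- Small, read-mostly lookup data can instead be kept in full on
-- every shard as a duplicated table.
CREATE DUPLICATED TABLE products (
  product_id NUMBER PRIMARY KEY,
  name       VARCHAR2(200)
);
```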

4) Service management / routing tier (Global Service Manager / shard director)

  • What it does: Maintains topology and routes connections to the right shard based on services and shard keys.
  • Why it matters: Central point for routing and failover decisions.
  • Practical benefit: Apps connect to a global service instead of hardcoding shard endpoints.
  • Caveats: Must be highly available; treat as critical infrastructure.

5) Private networking and segmentation (VCN, subnets, NSGs)

  • What it does: Keeps database endpoints private and controls east/west and north/south traffic.
  • Why it matters: Database exposure is a top risk.
  • Practical benefit: Minimal attack surface; controlled admin access.
  • Caveats: Multi-region connectivity requires careful routing and security design.

6) Encryption at rest (Oracle TDE) and in transit (TLS)

  • What it does: Protects data files and network sessions.
  • Why it matters: Baseline requirement for most compliance regimes.
  • Practical benefit: Reduces impact of storage compromise and traffic interception.
  • Caveats: Key management and rotation procedures must be tested.

7) Key management with OCI Vault (KMS integration patterns)

  • What it does: Centralizes key management; supports separation of duties.
  • Why it matters: Helps meet compliance controls for encryption keys.
  • Practical benefit: Standard key lifecycle operations across OCI.
  • Caveats: Exact integration points depend on the database deployment model—verify for Exascale.

8) Backups to OCI Object Storage

  • What it does: Stores database backups outside the database environment.
  • Why it matters: Foundational for restore and ransomware recovery.
  • Practical benefit: Durable, lifecycle-managed storage tiering options.
  • Caveats: Cross-region restore planning needs explicit testing.

9) Observability: metrics, logs, alarms, and audit

  • What it does: Provides OCI-level and (optionally) database-level telemetry.
  • Why it matters: Global distributed systems fail in complex ways; observability is mandatory.
  • Practical benefit: Faster incident response and capacity planning.
  • Caveats: Decide what is monitored centrally vs per-region; avoid alert storms.

10) Automation with Terraform / Resource Manager

  • What it does: Enables repeatable builds for networks, policies, and database resources.
  • Why it matters: Global deployments are hard to do manually.
  • Practical benefit: Consistent environments, audit-friendly change history.
  • Caveats: Exadata resources can take significant time to provision; design pipelines accordingly.

7. Architecture and How It Works

High-level architecture

A common global pattern includes:

  • Regional application stacks (compute, Kubernetes, or app services) close to end users
  • A routing layer for database connections (global service management / shard director)
  • Shard catalog and shard databases (often one shard group per region)
  • Optional reference/replicated data patterns (depending on your model)
  • Secure private interconnect among regions (DRG-to-DRG, FastConnect, VPN; exact design varies)

Request/data/control flow (conceptual)

  1. Client request hits nearest regional app endpoint.
  2. App computes shard key (e.g., customer_id) and requests a connection to a global service.
  3. Routing layer determines which shard holds that key and returns/establishes a session to the correct shard.
  4. Transaction executes locally to the shard database.
  5. Operational control plane (admins, automation) manages provisioning, patching, backups, monitoring.
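The flow above can be sketched in application terms. All names below are hypothetical, and in practice steps 2–3 are performed by the shard director rather than application code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Shard:
    name: str
    region: str
    endpoint: str  # hypothetical private listener endpoint

# Hypothetical topology: in a real deployment the shard catalog owns
# this mapping and the routing tier (GSM/shard director) caches it.
TOPOLOGY = {
    "EU":   Shard("GDD01", "eu-frankfurt-1", "shard-eu.db.internal:1521/gdd01"),
    "APAC": Shard("GDD02", "ap-tokyo-1",     "shard-apac.db.internal:1521/gdd02"),
}

# List-based placement by country keeps rows in-region (residency).
COUNTRY_TO_SHARDSPACE = {"DE": "EU", "FR": "EU", "JP": "APAC", "SG": "APAC"}

def route(customer_country: str) -> Shard:
    """Return the shard that owns rows for this routing key."""
    return TOPOLOGY[COUNTRY_TO_SHARDSPACE[customer_country]]

print(route("DE").region)  # eu-frankfurt-1
print(route("JP").region)  # ap-tokyo-1
```

Keeping the mapping deterministic is what makes transactions region-local: an EU request never needs a cross-region database round trip.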

Integrations with related OCI services

  • Networking: VCN, subnets, NSGs, DRG, FastConnect
  • Security: IAM, Vault, Bastion, Cloud Guard (if enabled), Security Zones (where used)
  • Observability: Monitoring, Logging, Events, Notifications, Audit
  • Data movement: Object Storage, OCI Database tools (varies), replication patterns (architecture-specific)

Dependency services

  • OCI IAM and compartments
  • OCI Networking
  • Object Storage (for backups)
  • Compute instances (often for routing tier / management tooling)
  • DNS (private DNS or custom) for service discovery patterns

Security/authentication model

  • OCI IAM controls who can create/modify Exadata and network resources.
  • Database users/roles control SQL access (least privilege).
  • Network controls enforce that only app subnets (and admin jump hosts) can reach DB ports.
  • TLS and strong authentication (password, IAM integration patterns where supported, or enterprise directory integration—verify per deployment) protect in-flight traffic.

Networking model

  • Prefer private endpoints for databases.
  • Use hub-and-spoke VCN with DRG for multi-region connectivity, or per-region isolated VCNs with controlled peering.
  • Use NSGs for instance-level network policy.
  • Plan for latency and bandwidth between regions (this directly affects global designs).

Monitoring/logging/governance considerations

  • Define SLOs per region and globally.
  • Centralize logs (OCI Logging) with retention and access controls.
  • Enable OCI Audit for API-level actions.
  • Consider tagging strategy for chargeback and governance.

Simple architecture diagram (learning/lab scale)

flowchart LR
  U[Users] --> A[Regional App]
  A --> R[Shard Routing / Global Service Manager]
  R --> C[(Shard Catalog DB)]
  R --> S1[(Shard DB 1 on Exadata Exascale)]
  R --> S2[(Shard DB 2 on Exadata Exascale)]
  S1 --> B[Backups to Object Storage]
  S2 --> B

Production-style architecture diagram (multi-region)

flowchart TB
  subgraph RegionA[OCI Region A]
    A1[App Tier A] --> GSMa[GSM / Shard Director A]
    GSMa --> ShA1[("Shard Group A<br/>Exadata DB Service on Exascale")]
    GSMa --> CatA[("Shard Catalog<br/>Primary or Active")]
    ShA1 --> ObsA[Monitoring/Logging]
  end

  subgraph RegionB[OCI Region B]
    A2[App Tier B] --> GSMb[GSM / Shard Director B]
    GSMb --> ShB1[("Shard Group B<br/>Exadata DB Service on Exascale")]
    GSMb --> CatB[("Catalog Replica/Standby<br/>Design-dependent")]
    ShB1 --> ObsB[Monitoring/Logging]
  end

  subgraph Shared[Shared OCI Services]
    IAM[IAM & Compartments]
    Vault[OCI Vault / KMS]
    Obj[(Object Storage Backups)]
    Audit[OCI Audit]
  end

  CatA --- CatB
  RegionA --- RegionB

  ShA1 --> Obj
  ShB1 --> Obj
  IAM --- RegionA
  IAM --- RegionB
  Vault --- RegionA
  Vault --- RegionB
  Audit --- RegionA
  Audit --- RegionB

The exact catalog replication/HA model depends on your chosen globally distributed database design. Validate supported topologies in the official Oracle Globally Distributed Database / sharding documentation for your Oracle Database version.


8. Prerequisites

Account/tenancy requirements

  • An Oracle Cloud (OCI) tenancy with billing enabled.
  • A compartment strategy (e.g., prod, nonprod, network, security).

Permissions / IAM roles

At minimum, you need permissions to manage:

  • Exadata Database Service resources (Exascale/VM clusters/DB homes/databases)
  • Networking (VCNs, subnets, NSGs, DRGs)
  • Object Storage (backup buckets)
  • Vault (if using customer-managed keys)
  • Logging/Monitoring/Audit viewing

OCI IAM is policy-based. Because policy verbs and resource-types are precise and occasionally change, use the official OCI policy reference and generate least-privilege policies for:

  • DB administrators (Exadata + database resources)
  • Network admins (VCN/DRG)
  • Security admins (Vault, logging, audit)
  • App teams (read-only DB metadata, no destructive actions)

Policy reference (official): https://docs.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm
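As a concrete starting point, least-privilege statements for those four groups might look like the sketch below. Group and compartment names are hypothetical; confirm verbs and resource-type names (e.g., database-family, virtual-network-family) against the policy reference before use:

```text
Allow group DBAdmins       to manage database-family        in compartment dm-gdd-exascale-lab
Allow group NetworkAdmins  to manage virtual-network-family in compartment network
Allow group SecurityAdmins to manage vaults                 in compartment security
Allow group SecurityAdmins to manage keys                   in compartment security
Allow group AppTeams       to read   database-family        in compartment dm-gdd-exascale-lab
```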

Billing requirements

  • Exadata Database Service on Exascale Infrastructure is generally not free-tier. Budget accordingly.
  • Multi-region designs add network egress and operational overhead.

CLI/SDK/tools needed

  • OCI Console access
  • OCI CLI (optional but recommended): https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
  • Terraform (optional): https://developer.hashicorp.com/terraform
  • SQL client tools:
    – SQL*Plus / SQLcl (Oracle)
    – ORDS/other tools as applicable

Region availability

  • Exadata on Exascale availability is region-dependent and capacity-dependent.
    Verify region availability in official OCI documentation and your tenancy limits.

Quotas/limits

  • Service limits exist for Exadata resources, cores/ECPUs, storage, VCN components, etc.
    Check Service Limits in OCI Console and request increases early.

Prerequisite services

  • VCN with appropriately sized subnets (database subnets are often private)
  • Object Storage bucket for backups
  • Bastion or jump host strategy for admin access (recommended)

9. Pricing / Cost

Current pricing model (how to think about it)

Pricing for Exadata Database Service on Exascale Infrastructure is typically based on a combination of:

  • Database compute consumption (often measured in OCPUs/ECPUs per hour; naming depends on the service model)
  • Exadata storage consumption (GB/month) and potentially performance tiers
  • License model:
    – License Included (Oracle Database license cost included in the hourly rate)
    – Bring Your Own License (BYOL) (requires appropriate Oracle licenses; a discounted service rate typically applies)

For the globally distributed part:

  • Costs depend on how many databases/shards you run and their sizes.
  • Some Oracle Database features may be option-licensed. Verify licensing for Oracle Globally Distributed Database / sharding in your commercial agreement and OCI service terms.

Official OCI pricing entry point (verify specific Exascale SKU pages):

  • OCI Price List: https://www.oracle.com/cloud/price-list/
  • OCI Cost Estimator: https://www.oracle.com/cloud/costestimator.html

Pricing dimensions to track

  • DB compute (ECPU/OCPU): main hourly cost driver; scales with performance and concurrency needs.
  • Exadata storage (GB-month): persistent cost; includes backups/snapshots depending on model.
  • Backup storage (Object Storage): additional monthly cost; retention and cross-region copies increase it.
  • Data transfer (inter-region): can be significant; global architectures can create constant replication traffic.
  • Compute for routing tier: additional hourly cost; GSM/shard director VMs, bastions, automation.
  • Monitoring/logging retention: monthly cost; higher log volume and longer retention increase it.
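These dimensions compose into a simple monthly model. All rates below are placeholders, not Oracle prices; substitute current figures from the OCI price list and Cost Estimator:

```python
def monthly_estimate(
    ecpu_count: int,
    ecpu_rate_per_hour: float,           # placeholder, not an Oracle price
    storage_gb: float,
    storage_rate_per_gb_month: float,    # placeholder
    backup_gb: float,
    backup_rate_per_gb_month: float,     # placeholder
    cross_region_gb: float,
    egress_rate_per_gb: float,           # placeholder
    hours_per_month: float = 730.0,
) -> dict:
    """Break one database's monthly cost into the dimensions above."""
    parts = {
        "db_compute": ecpu_count * ecpu_rate_per_hour * hours_per_month,
        "exadata_storage": storage_gb * storage_rate_per_gb_month,
        "backup_storage": backup_gb * backup_rate_per_gb_month,
        "inter_region_transfer": cross_region_gb * egress_rate_per_gb,
    }
    parts["total"] = sum(parts.values())
    return parts

# Hypothetical fleet: one catalog plus two regional shards, sized alike.
per_db = monthly_estimate(8, 0.30, 2048, 0.10, 4096, 0.03, 500, 0.02)
fleet_total = 3 * per_db["total"]
print(round(per_db["total"], 2), round(fleet_total, 2))
```

Compute dominates in this shape of model, which matches the optimization guidance below: right-size ECPUs first, then storage and retention.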

Free tier

  • Exadata Database Service is typically not part of OCI Always Free.
  • You may be able to prototype some concepts using Always Free compute/Autonomous resources, but that would not be the same service. If you do, treat it as a conceptual lab only.

Cost drivers (direct + indirect)

Direct:

  • Number of regions
  • Number of shards/databases
  • Compute sizing (ECPU/OCPU)
  • Storage allocation and growth
  • Backup retention and replication

Indirect / hidden:

  • Inter-region networking (especially if applications constantly cross regions)
  • Operational tooling (extra instances, third-party monitoring)
  • Engineering time for shard key design, testing, and operations
  • Data migration and dual-running during cutover

Network/data transfer implications

  • Cross-region traffic has both latency and potential egress costs.
  • Design to keep most transactions region-local (app in region talks to shard in same region).
  • Only replicate what you must (e.g., reference tables), and choose consistency models carefully.

How to optimize cost

  • Start with the minimum shard count that meets throughput and locality needs.
  • Keep dev/test in single region where possible.
  • Right-size compute: scale up only when metrics show sustained need.
  • Use lifecycle policies in Object Storage for backup tiering (where policy allows).
  • Avoid chatty cross-region calls: make routing deterministic and local.

Example low-cost starter estimate (model, not numbers)

A “starter” non-production footprint might include:

  • 1 region
  • 1 Exadata on Exascale VM cluster with the smallest permitted compute allocation
  • 1 catalog database + 1–2 shard databases (can be PDBs depending on design; verify supported topology)
  • 1 small compute VM for the routing tier (GSM) and admin tooling
  • Backups to Object Storage with short retention (e.g., days, not months)

Use the OCI Cost Estimator to model this; exact numbers vary by region, license model, and quotas.

Example production cost considerations

A production multi-region footprint often includes:

  • 2–4 regions
  • Multiple shard groups (per region) sized for peak
  • An HA routing tier per region
  • A cross-region backup/restore strategy
  • Higher log retention and monitoring
  • DR testing environments

Expect costs to be driven primarily by database compute and storage, then by cross-region networking.


10. Step-by-Step Hands-On Tutorial

This lab focuses on a realistic, executable path that teaches the workflow without pretending Exadata is “free.” You will provision real OCI resources and prepare the building blocks for a globally distributed design. Because Exadata provisioning is expensive and capacity-limited, this lab is written to be minimal and emphasizes verification and cleanup.

If you cannot provision Exadata Database Service on Exascale Infrastructure in your tenancy/region, you can still complete the networking/IAM/observability steps and then map the same patterns to your eventual Exadata environment.

Objective

Provision the foundational OCI components and deploy a minimal Oracle Database environment on Exadata Database Service on Exascale Infrastructure, then prepare a basic globally distributed database topology skeleton (catalog + shard databases conceptually) that you can expand to multi-region.

Lab Overview

You will:

  1. Create a compartment and tagging baseline.
  2. Create a VCN with private subnets and NSGs for database and admin access.
  3. Create an Object Storage bucket for backups.
  4. Provision an Exadata Database Service on Exascale Infrastructure deployment (minimum viable).
  5. Create databases (catalog + shard candidates) and validate connectivity.
  6. (Optional) Provision a small compute VM to host shard routing tooling (GSM) and validate network reachability.
  7. Clean up everything.

Step 1: Create compartment and tags (governance baseline)

Console actions

  1. Open OCI Console → Identity & Security → Compartments → Create Compartment.
  2. Name: dm-gdd-exascale-lab (example).
  3. Create a tag namespace (optional but recommended):
     – Identity & Security → Tag Namespaces → create costcenter
     – Add tag key: env with values like lab, prod

Expected outcome

  • A compartment exists to isolate lab resources.
  • Tagging is ready for cost tracking.

Verification

  • Navigate to the compartment and confirm it appears in the compartment picker.


Step 2: Create VCN, subnets, and NSGs (private-by-default)

Goal: Create a secure, private network layout suitable for database deployments.

Recommended minimal layout

  • VCN CIDR: 10.10.0.0/16
  • Private subnet for databases: 10.10.10.0/24
  • Private subnet for admin/routing VM: 10.10.20.0/24
  • Optional public subnet only if you must (prefer Bastion)
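The layout above can be sanity-checked offline before any console work; this standard-library sketch verifies that the planned subnets sit inside the VCN CIDR and do not overlap:

```python
import ipaddress
from itertools import combinations

VCN = ipaddress.ip_network("10.10.0.0/16")
SUBNETS = {
    "db":    ipaddress.ip_network("10.10.10.0/24"),
    "admin": ipaddress.ip_network("10.10.20.0/24"),
}

# Every subnet must sit inside the VCN CIDR...
for name, net in SUBNETS.items():
    assert net.subnet_of(VCN), f"{name} is outside the VCN"

# ...and no two subnets may overlap.
for (name_a, a), (name_b, b) in combinations(SUBNETS.items(), 2):
    assert not a.overlaps(b), f"{name_a} overlaps {name_b}"

print("subnet plan is consistent")
```

The same check scales to multi-region plans, where each region gets its own non-overlapping VCN CIDR to keep DRG routing unambiguous.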

Console actions

  1. Networking → Virtual Cloud Networks → Start VCN Wizard.
  2. Choose “VCN with Internet Connectivity” only if required; otherwise build custom:
     – Create the VCN
     – Create route tables
     – Create gateways as needed (a NAT Gateway is common for private subnets needing outbound updates)
  3. Create Network Security Groups (NSGs):
     – nsg-db
     – nsg-admin

NSG rules (example; adjust to your ports and policy)

  • nsg-db inbound:
    – From nsg-admin to TCP 1521 (Oracle listener)
    – From nsg-admin to TCP 22 (only if SSH to DB hosts is required; many managed DB models restrict this—verify)
  • nsg-admin inbound:
    – From Bastion (or your corporate CIDR via VPN/FastConnect) to TCP 22

Expected outcome

  • VCN and subnets exist.
  • NSGs enforce least-privilege access.

Verification

  • Confirm subnets are private (no public IP assignment).
  • Confirm NSG rules are in place.


Step 3: Create Object Storage bucket for backups

Console actions

  1. Storage → Object Storage & Archive Storage → Buckets → Create Bucket.
  2. Name: exascale-gdd-lab-backups
  3. Storage tier: Standard (typical for backups; lifecycle policies optional)

Expected outcome
– A bucket exists for database backups (or for later integration).

Verification
– Upload a small test file to confirm access.


Step 4: Provision Exadata Database Service on Exascale Infrastructure (minimal)

This is the costliest step. Ensure you understand quotas and pricing, and choose the smallest viable configuration. Exact screens and resource names can change—follow the current OCI documentation for Exadata Database Service on Exascale Infrastructure.

Console actions (high-level)
1. Oracle Database → Exadata Database Service (or the current console entry).
2. Choose the Exascale Infrastructure deployment model (if prompted).
3. Select:
   – Compartment: dm-gdd-exascale-lab
   – VCN + private subnet: 10.10.10.0/24
   – NSG: nsg-db
4. Configure:
   – Database version (LTS recommended for production; for a lab, choose what is available)
   – License model: License Included or BYOL (match your entitlement)
   – Admin credentials (store securely)
5. Start provisioning and wait until the VM cluster and database resources show Available.

Expected outcome
– Exadata on Exascale resources are provisioned.
– You have at least one Oracle Database available (CDB/PDB depends on the chosen workflow).

Verification
– In the Exadata service page, the status shows Available.
– Note the private IPs, SCAN/listener endpoints, and database service name.

Common errors
– Quota exceeded: Request a service limit increase.
– No capacity in region: Try a different region or engage Oracle support/sales.
– Networking misconfiguration: Ensure the private subnet has the required routing (NAT for outbound if needed) and DNS settings.


Step 5: Create “catalog” and “shard candidate” databases (minimal topology)

To learn global distribution patterns, you typically need:
– A catalog database
– One or more shard databases

In OCI, how you create these depends on the Exadata deployment model and whether you’re using CDB/PDB creation workflows. Follow the OCI Exadata Database Service database creation flow.

Console actions
1. In your Exadata deployment, create:
   – Database GDDCAT (catalog candidate)
   – Database GDD01 (shard candidate)
   – Database GDD02 (shard candidate, optional)

Expected outcome
– You have databases that can serve as catalog and shards for later sharding configuration.

Verification
From an admin host (Step 6) or an approved client network, connect using SQL*Plus/SQLcl:

sqlplus admin_user@//<db_private_fqdn_or_ip>:1521/<service_name>

Run:

SELECT name, open_mode FROM v$database;
SELECT sys_context('USERENV','DB_NAME') AS db_name FROM dual;
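The connect string used above follows Oracle's EZConnect format (`//host:port/service`). A tiny helper can keep these descriptors consistent across the catalog and shard databases; the hostname below is hypothetical, made up for illustration:

```python
def ez_connect(host: str, service: str, port: int = 1521) -> str:
    """Build an EZConnect descriptor like the one used with SQL*Plus above."""
    if not host or not service:
        raise ValueError("host and service name are required")
    return f"//{host}:{port}/{service}"

# Hypothetical private endpoint for the catalog candidate
print(ez_connect("gddcat-db.priv.example.com", "GDDCAT"))
# -> //gddcat-db.priv.example.com:1521/GDDCAT
```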

Step 6 (Optional but recommended): Create an admin/routing compute VM and validate connectivity

Globally distributed database deployments typically require a routing/management component (often installed on compute). Even if you don’t install GSM in this lab, a private admin VM helps validate network access without exposing databases publicly.

Console actions
1. Compute → Instances → Create instance
2. Placement:
   – VCN: 10.10.0.0/16
   – Subnet: 10.10.20.0/24 (private)
   – NSG: nsg-admin
3. Access:
   – Prefer OCI Bastion to reach the private VM (recommended).
   – Or attach through VPN/FastConnect if you have corporate connectivity.

Expected outcome
– You can securely reach the admin VM.
– The admin VM can reach DB private endpoints on port 1521.

Verification
On the admin VM:

nc -zv <db_private_ip_or_fqdn> 1521

If SQLcl is installed:

sql /nolog
-- then connect as appropriate

Common errors
– No route to host: Check route tables and NSG rules.
– DNS resolution issues: Confirm VCN DNS settings; use a private DNS resolver if needed.
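If you prefer a scriptable version of the `nc -zv` check for use in automation, here is a minimal stdlib-only sketch:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Equivalent of `nc -zv host port`: True if a TCP connect succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical private DB endpoint; run this from the admin VM:
# print(tcp_check("10.10.10.5", 1521))
```

A loop over all catalog and shard endpoints makes this a quick pre-flight check before any sharding configuration work.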


Step 7 (Optional): Install and validate sharding tooling skeleton (conceptual)

Because sharding/GDD tooling is version-specific and has strict prerequisites, the safest approach is:

  • Install the tooling exactly as documented for your Oracle Database version.
  • Validate only connectivity and prerequisites in this lab.

What to verify
– The admin/routing VM can resolve and connect to:
  – the catalog DB endpoint
  – each shard DB endpoint
– You can create required users/roles in each DB (per Oracle sharding docs).
– Time sync (NTP/chrony) is correct (important for distributed systems).

For the exact step-by-step gsm / gdsctl workflow, follow the official Oracle Database documentation for your database version and confirm it is supported on OCI Exadata Database Service on Exascale Infrastructure.

Expected outcome – You have a working foundation to proceed with a full globally distributed database deployment.


Validation

Use a checklist to confirm the lab is correct:

  1. Networking
     – Databases have private endpoints
     – NSGs restrict access to only admin/app subnets
  2. Database
     – You can connect to each database from the admin VM
     – Databases are open and healthy
  3. Backups
     – The bucket exists and access is controlled
  4. Observability
     – You can see metrics for Exadata/database resources in OCI Monitoring (where available)
     – OCI Audit is recording API actions

Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Exadata provisioning fails | Quota/capacity/shape restrictions | Check Service Limits; try another region; open an Oracle SR |
| Cannot connect to DB | NSG rules missing | Add an inbound rule to the DB NSG from the admin NSG for 1521 |
| DNS name doesn’t resolve | VCN DNS misconfigured | Enable VCN DNS; use a private DNS resolver if needed |
| Admin VM can’t reach the internet | No NAT gateway/route | Add a NAT gateway and route for outbound updates (if required) |
| High unexpected cost | Resources left running | Proceed to cleanup immediately after validation |

Cleanup

To avoid ongoing charges, delete resources in the right order:

  1. Delete databases created in Exadata deployment (if required by the service workflow).
  2. Terminate the admin/routing compute instance.
  3. Delete Exadata Database Service on Exascale Infrastructure resources (VM clusters / infrastructure allocations per console workflow).
  4. Delete Object Storage bucket (empty it first).
  5. Delete NSGs, subnets, VCN (if not reused).
  6. Delete compartment only if it is dedicated to this lab and empty.

Exadata resources can take time to delete. Confirm deletion completes and check billing dashboards for lingering resources.


11. Best Practices

Architecture best practices

  • Design for region-local transactions: the app in a region should talk to the shard in the same region.
  • Choose shard keys that:
    – Distribute load evenly
    – Match access patterns
    – Avoid hotspots
  • Keep a clear separation between:
    – Catalog
    – Shard directors/routing tier
    – Shard databases
  • Plan for schema evolution with rolling, shard-by-shard changes.
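To build intuition for why an evenly distributing shard key matters, here is an illustrative hash-based placement sketch. This is not Oracle's routing implementation (the shard director resolves placement from the shard catalog); the shard names are hypothetical. It simply demonstrates the even-distribution property you want from a key:

```python
import hashlib
from collections import Counter

SHARDS = ["gdd01", "gdd02", "gdd03"]  # hypothetical shard names

def shard_for(customer_id: str, shards=SHARDS) -> str:
    """Hash-based placement: the same key always lands on the same shard."""
    digest = hashlib.sha256(customer_id.encode()).digest()
    return shards[int.from_bytes(digest[:8], "big") % len(shards)]

# A good shard key spreads keys roughly evenly across shards
counts = Counter(shard_for(f"cust-{i}") for i in range(9000))
print(counts)  # roughly 3000 keys per shard
```

A skewed key (e.g., country code, when 80% of customers are in one country) would concentrate load on one shard, which is the hotspot the bullet above warns against.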

IAM/security best practices

  • Use separate admin roles:
    – Network admin
    – DB platform admin
    – Security admin
  • Enforce least privilege with compartment boundaries.
  • Require MFA and strong auth for console access.
  • Rotate secrets and keys; use Vault for secrets patterns where appropriate.

Cost best practices

  • Start in one region for dev/test.
  • Keep shard count minimal until scale requires more.
  • Use tagging for cost allocation.
  • Right-size compute using real utilization metrics, not peak guesses.

Performance best practices

  • Validate SQL performance on Exadata with realistic workload tests.
  • Keep transactions local to avoid cross-region latency.
  • Use connection pooling and service-based routing.
  • Monitor top SQL and wait events (via Database Management or native tooling—verify availability).

Reliability best practices

  • Treat the routing tier as critical infrastructure: make it HA per region.
  • Regularly test:
    – Shard failover
    – Region failover
    – Restore from backups
  • Automate builds and rebuilds.

Operations best practices

  • Standardize runbooks:
    – Provisioning
    – Patching windows
    – Incident response
  • Use OCI Monitoring alarms for:
    – Storage thresholds
    – DB availability
    – Backup failures
  • Centralize logs and protect them from tampering.

Governance/tagging/naming best practices

  • Naming convention example:
    – gdd-<env>-<region>-catalog
    – gdd-<env>-<region>-shard01
  • Tag keys:
    – env, owner, costcenter, data_classification, service
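A naming convention is easiest to enforce when it is generated, not typed. This is a small illustrative helper for the convention above (the function name and validation rule are assumptions, not an OCI requirement):

```python
def resource_name(role, env, region, index=None):
    """Build names per the gdd-<env>-<region>-<role> convention above."""
    suffix = f"{index:02d}" if index is not None else ""
    name = "-".join(["gdd", env, region, role + suffix]).lower()
    # Keep names portable: letters, digits, and hyphens only (assumed rule)
    assert all(c.isalnum() or c == "-" for c in name), "unexpected character"
    return name

print(resource_name("catalog", "prod", "fra"))   # gdd-prod-fra-catalog
print(resource_name("shard", "prod", "fra", 1))  # gdd-prod-fra-shard01
```

Pairing this with required tag defaults in the compartment makes cost reports and audits far easier to read.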

12. Security Considerations

Identity and access model

  • OCI IAM controls API operations (create/delete/modify Exadata, network, vault, storage).
  • Oracle Database roles control data access:
    – Separate application schemas from admin accounts
    – Use least privilege; avoid shared admin accounts

Encryption

  • At rest: Use Oracle Transparent Data Encryption (TDE).
  • In transit: Enforce TLS for client connectivity.
  • Keys: Prefer centralized key management and documented rotation procedures (OCI Vault integration patterns vary—verify for your exact deployment).

Network exposure

  • Avoid public IPs on databases.
  • Use Bastion, VPN, or FastConnect for admin access.
  • Use NSGs to restrict:
    – App-to-DB ports
    – Admin-to-DB ports
    – East/west traffic between tiers

Secrets handling

  • Don’t store DB passwords in code or images.
  • Use OCI Vault secrets (or a trusted secrets manager) for:
    – DB credentials
    – Wallets and certificates (where applicable)
  • Rotate secrets regularly and after incidents.
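In application code, the "no passwords in code or images" rule usually translates into reading credentials from the runtime environment, where a secrets manager injects them at deploy time. A minimal sketch (the environment variable name is hypothetical):

```python
import os

def db_password() -> str:
    """Fetch the DB password from the environment, never from source code.

    The value is expected to be injected at deploy time from a secrets
    manager such as OCI Vault; GDD_DB_PASSWORD is an assumed variable name.
    """
    pw = os.environ.get("GDD_DB_PASSWORD")
    if not pw:
        raise RuntimeError("GDD_DB_PASSWORD not set; fetch it from your vault")
    return pw
```

Failing fast with a clear error when the secret is missing is deliberate: it surfaces misconfiguration at startup instead of at the first database call.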

Audit/logging

  • Enable OCI Audit (default for API calls).
  • Enable database auditing aligned to compliance needs (Oracle Unified Auditing where applicable).
  • Lock down log access and retention.

Compliance considerations

  • Data residency and cross-border replication must be validated legally and technically.
  • Maintain evidence:
    – IAM policies
    – Encryption configuration
    – Audit logs
    – Change records (IaC plans/applies)

Common security mistakes

  • Allowing wide CIDR access to DB port 1521
  • Using a single shared admin credential
  • Ignoring inter-region traffic security (no encryption, weak routing policies)
  • Not testing backup restore paths

Secure deployment recommendations

  • Private-only database endpoints
  • Separate compartments per environment
  • Dedicated key management and auditing
  • Regular penetration tests focused on network paths and IAM

13. Limitations and Gotchas

Some items are inherently deployment- and version-dependent. Confirm specifics in official docs for Exadata on Exascale and your Oracle Database version.

Known limitations / design constraints

  • Sharding/global distribution is not a drop-in replacement for a single database:
    – It requires shard key design
    – Changes to the data model and routing may be needed
  • Multi-region latency can break assumptions (synchronous cross-region calls will be slow).

Quotas and capacity

  • Exadata on Exascale is capacity constrained in some regions.
  • Service limits can block provisioning until increased.

Regional constraints

  • Not all OCI regions support all Exadata/Exascale configurations.
  • Cross-region architectures must consider connectivity options and compliance.

Pricing surprises

  • Inter-region data transfer costs can rise quickly.
  • Backup retention and cross-region copies add recurring storage charges.
  • Extra compute VMs for routing, admin, and monitoring add up.

Compatibility issues

  • Feature availability can differ by Oracle Database version and edition.
  • Some Oracle options may require separate licensing even in cloud—verify contract terms.

Operational gotchas

  • Patching windows must be coordinated across:
    – Shards
    – Catalog
    – Routing tier
  • Monitoring becomes more complex with shard count.
  • Troubleshooting requires correlation across regions and components.

Migration challenges

  • Moving from a monolithic Oracle DB to shards can require:
    – Data re-partitioning
    – Application routing changes
    – Dual-write or staged cutovers

Vendor-specific nuances

  • OCI Exadata service workflows and naming can change in the console.
  • Always rely on the official Exadata Database Service documentation for the current supported procedures.

14. Comparison with Alternatives

How to choose

If you need Oracle compatibility + engineered performance + global scale, this approach can fit. If you prefer a simpler managed distributed database with minimal operational overhead, consider alternatives.

| Option | Best For | Strengths | Weaknesses | When to Choose |
| --- | --- | --- | --- | --- |
| OCI Exadata Database Service on Exascale Infrastructure + Globally Distributed Database | Global OLTP at scale with Oracle | Exadata performance + Oracle feature set + geo distribution patterns | Complexity, cost, shard-aware design effort | You need Oracle + global scale-out writes and locality |
| OCI Exadata Database Service (non-Exascale models) | High-performance Oracle in one region | Strong performance, familiar operations | Less “elastic” consumption (model-dependent), still regional | You need Exadata but not global distribution |
| OCI Autonomous Database | Managed Oracle with less ops | Automation, fast provisioning | Not the same control model as Exadata DB Service; global distribution differs | You want managed Oracle with minimal admin |
| OCI MySQL HeatWave | MySQL + analytics acceleration | Simpler, cost-effective for many web apps | No Oracle DB compatibility | New workloads not requiring Oracle |
| Self-managed Oracle on OCI Compute | Full control | Flexibility | Highest ops burden, patching, HA complexity | You must control everything and accept the ops cost |
| AWS Aurora Global Database (other cloud) | Global relational apps | Managed global reads, fast failover | Not Oracle; write scaling differs | You’re cloud-agnostic and fit the Aurora model |
| Azure SQL / Hyperscale (other cloud) | Microsoft SQL workloads | Ecosystem fit | Not Oracle | You are SQL Server-first |
| Google Cloud Spanner (other cloud) | Globally consistent relational | Global consistency, managed | Different SQL/semantics, migration effort | You can redesign for Spanner |
| Open-source sharding (Postgres + Citus, etc.) | Cost-sensitive scale-out | Flexibility | More engineering effort, not Oracle | You can replatform away from Oracle |

15. Real-World Example

Enterprise example: Global retail payments and loyalty platform

  • Problem: A retailer operates in 20+ countries. Checkout and loyalty point accrual must be low-latency locally and resilient to regional outages.
  • Proposed architecture:
    – OCI regions per major geography
    – Exadata Database Service on Exascale Infrastructure in each region
    – Shard key = customer_id (or region+customer_id)
    – Catalog + routing tier designed for HA
    – Reference data replicated (e.g., product/offer catalog); transactional data sharded
    – Private connectivity via DRG/FastConnect; strict IAM separation
  • Why this service was chosen:
    – Oracle Database compatibility for existing apps
    – Exadata performance for peak transaction events
    – Sharding patterns for scale-out writes and locality
  • Expected outcomes:
    – Improved regional latency
    – Higher throughput by adding shards
    – Reduced blast radius during incidents
    – Clearer compliance mapping for data locality (subject to validation)

Startup/small-team example: Fast-growing multi-tenant SaaS (Oracle-based)

  • Problem: A SaaS starts on one Oracle database; growth creates contention between tenants, and global customers complain about latency.
  • Proposed architecture:
    – One OCI region initially, with Exadata on Exascale for performance
    – A tenant-based shard strategy as top tenants grow
    – Add a second region later for EU customers
    – IaC to standardize environments
  • Why this service was chosen:
    – Maintain the Oracle feature set and existing schema
    – A scale-out strategy is available when growth demands it
  • Expected outcomes:
    – Tenant isolation
    – Predictable performance for premium tiers
    – A roadmap to multi-region without abandoning Oracle

16. FAQ

1) Is “Globally Distributed Exadata Database on Exascale Infrastructure” a single OCI service?

It may be used as a solution phrase rather than a single SKU. In official OCI docs you will commonly see Exadata Database Service on Exascale Infrastructure and Oracle Globally Distributed Database/sharding as separate topics. Verify current OCI naming in the Exadata documentation.

2) What’s the difference between sharding and Data Guard?

Sharding is typically used for scale-out writes by distributing data across multiple databases. Data Guard is commonly used for replication/DR of the same database (not horizontal partitioning). They solve different problems.

3) Do I need to change my application to use a globally distributed database?

Often yes. Many designs require:
– Choosing a shard key
– Ensuring queries include the shard key for efficient routing
– Handling cross-shard queries carefully

4) Can I do this in one region only?

Yes for learning and some scale-out needs. But “globally distributed” implies multi-region; many benefits require multiple regions.

5) Is Exadata on Exascale available in every OCI region?

No. Availability is region- and capacity-dependent. Check OCI documentation and your tenancy’s service availability.

6) What are the main cost drivers?

Compute allocation (ECPU/OCPU), storage, backups, and inter-region data transfer for multi-region designs.

7) Is there an OCI free tier for Exadata on Exascale?

Typically no. Use the OCI cost estimator and run short-lived labs with strict cleanup.

8) How do I secure database access?

Use private subnets, NSGs, Bastion/VPN/FastConnect, least-privilege IAM, TDE, TLS, and strong database roles.

9) What’s the hardest part of a globally distributed Oracle design?

Usually:
– Shard key selection
– Operational complexity (patching, monitoring, failover testing)
– Data model and query pattern changes

10) Can I keep reporting/analytics on the same shards?

You can, but heavy analytics can hurt OLTP. Consider isolating analytics workloads (separate systems, replicas, or ETL patterns). Validate with performance testing.

11) How do I handle cross-shard queries?

Cross-shard queries exist but can be expensive and complex. Prefer app-level aggregation, denormalization, or replicated reference data patterns where appropriate.
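App-level aggregation typically means fanning out one query per shard and merging locally. A simplified sketch with simulated per-shard result sets (in a real system each entry would come from a separate shard database; the shard names and rows are made up):

```python
# Simulated per-shard query results (one entry per shard database)
shard_results = {
    "gdd01": [{"region": "eu", "orders": 120}],
    "gdd02": [{"region": "us", "orders": 340}],
    "gdd03": [{"region": "eu", "orders": 80}],
}

def fan_out_total(results_by_shard):
    """App-level aggregation: query each shard independently, merge locally."""
    totals = {}
    for rows in results_by_shard.values():
        for row in rows:
            totals[row["region"]] = totals.get(row["region"], 0) + row["orders"]
    return totals

print(fan_out_total(shard_results))  # {'eu': 200, 'us': 340}
```

The merge step is cheap here, but fan-out queries pay the latency of the slowest shard, which is why shard-key-targeted queries remain the preferred pattern.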

12) What happens if a region goes down?

A well-designed architecture routes traffic away from the impacted shard group/region. Exact behavior depends on your routing tier HA and your shard/cross-region strategy.

13) Do I need a routing tier in every region?

In production, typically yes (for locality and resilience). Treat it as critical infrastructure and make it HA.

14) How do I monitor a sharded system effectively?

Use a combination of:
– OCI Monitoring/Alarms for infrastructure/service metrics
– Database-level telemetry (AWR/ASH equivalents where available/licensed)
– Centralized logging and correlation IDs across app + routing + DB

15) What’s a sensible first production milestone?

A common approach:
1. Single region, 2–3 shards, stable shard key
2. Add an HA routing tier and operational runbooks
3. Add a second region and migrate a geography/tenant slice
4. Expand gradually with continuous failover drills


17. Top Online Resources to Learn Globally Distributed Exadata Database on Exascale Infrastructure

| Resource Type | Name | Why It Is Useful |
| --- | --- | --- |
| Official documentation | OCI Exadata Database Service docs | Primary reference for provisioning, operations, and supported configurations: https://docs.oracle.com/en-us/iaas/exadata/ |
| Official documentation | OCI IAM policy reference | Correct way to build least-privilege policies: https://docs.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm |
| Official documentation | OCI Networking docs | VCN/NSG/DRG design fundamentals: https://docs.oracle.com/en-us/iaas/Content/Network/Concepts/overview.htm |
| Official documentation | OCI Vault docs | Key management patterns and APIs: https://docs.oracle.com/en-us/iaas/Content/KeyManagement/home.htm |
| Official documentation | OCI Object Storage docs | Backup storage and lifecycle policies: https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm |
| Pricing | OCI price list | Authoritative pricing SKUs (region-specific): https://www.oracle.com/cloud/price-list/ |
| Pricing tool | OCI Cost Estimator | Build estimates without guessing: https://www.oracle.com/cloud/costestimator.html |
| Architecture center | OCI Architecture Center | Reference architectures and patterns: https://docs.oracle.com/en/solutions/ |
| Database docs | Oracle Database documentation library | Version-specific sharding/GDD docs (select your version): https://docs.oracle.com/en/database/ |
| Tutorials/labs | Oracle LiveLabs | Hands-on OCI labs (search for Exadata/Database): https://livelabs.oracle.com/ |
| Videos | Oracle Cloud Infrastructure YouTube | Service walkthroughs and best practices (verify playlists): https://www.youtube.com/@OracleCloudInfrastructure |
| SDK/CLI | OCI CLI install guide | Repeatable automation: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm |
| IaC | OCI Terraform provider docs | Automate infrastructure provisioning: https://registry.terraform.io/providers/oracle/oci/latest/docs |

18. Training and Certification Providers

| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
| --- | --- | --- | --- | --- |
| DevOpsSchool.com | Cloud/DevOps engineers, architects | OCI fundamentals, DevOps/IaC, operational practices | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Beginners to intermediate | SCM/DevOps foundations that support cloud delivery | Check website | https://www.scmgalaxy.com/ |
| CloudOpsNow.in | Cloud operations teams | Cloud ops/SRE-style operations, monitoring, automation | Check website | https://www.cloudopsnow.in/ |
| SreSchool.com | SREs, platform teams | Reliability engineering, SLIs/SLOs, incident response | Check website | https://www.sreschool.com/ |
| AiOpsSchool.com | Ops + automation teams | AIOps concepts, monitoring automation | Check website | https://www.aiopsschool.com/ |

19. Top Trainers

| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
| --- | --- | --- | --- |
| RajeshKumar.xyz | DevOps/cloud training content | Beginners to intermediate engineers | https://www.rajeshkumar.xyz/ |
| devopstrainer.in | DevOps tooling and practices | DevOps engineers and SREs | https://www.devopstrainer.in/ |
| devopsfreelancer.com | Freelance DevOps support/training | Teams needing short-term guidance | https://www.devopsfreelancer.com/ |
| devopssupport.in | DevOps support and enablement | Ops teams needing practical troubleshooting help | https://www.devopssupport.in/ |

20. Top Consulting Companies

| Company | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
| --- | --- | --- | --- | --- |
| cotocus.com | Cloud/DevOps consulting | Architecture, automation, platform ops | IaC pipelines, monitoring setup, operational readiness | https://www.cotocus.com/ |
| DevOpsSchool.com | Training + consulting | Enablement, DevOps transformation | CI/CD design, cloud migration planning, SRE practices | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting | Delivery acceleration, reliability practices | Kubernetes/CI/CD, observability, incident response processes | https://www.devopsconsulting.in/ |

21. Career and Learning Roadmap

What to learn before this service

  1. OCI fundamentals – Compartments, IAM policies, VCN/subnets/NSGs, Object Storage
  2. Oracle Database fundamentals – Backup/restore concepts, performance basics, security roles, TDE
  3. Networking for multi-region – DRGs, routing, private DNS, latency and bandwidth planning
  4. IaC and automation – Terraform basics, CI/CD pipelines, secrets management

What to learn after this service

  • Oracle globally distributed database/sharding deep dive:
    – Shard key design and data modeling
    – Routing tier HA
    – Operational playbooks for shard expansion and failover
  • Observability engineering for distributed systems
  • Compliance engineering (data residency, auditing, evidence collection)
  • Advanced performance engineering on Exadata

Job roles that use it

  • Cloud Solutions Architect (OCI + databases)
  • Database Platform Engineer
  • Senior DBA / Exadata Engineer
  • SRE / Reliability Engineer (database-focused)
  • Security Engineer (cloud data platforms)

Certification path (if available)

Oracle’s certification offerings change over time. A practical approach:
– Start with OCI foundations certifications
– Add OCI architect-level certification tracks
– Complement with Oracle Database administration certifications (where relevant)

Verify current OCI certification tracks on Oracle University:
https://education.oracle.com/

Project ideas for practice

  • Build a two-region VCN + DRG connectivity lab with private DNS and NSGs
  • Implement a shard key selection exercise with a sample schema and workload model
  • Create Terraform modules for:
    – VCN + NSGs
    – Bastion + admin host
    – Logging/Monitoring alarms
  • Write a runbook for a “region failover drill” and test it quarterly

22. Glossary

  • OCI (Oracle Cloud Infrastructure): Oracle Cloud’s IaaS/PaaS platform.
  • Data Management: The OCI category covering databases, storage, and data services.
  • Exadata: Oracle engineered system combining compute, networking, and smart storage optimized for Oracle Database workloads.
  • Exascale Infrastructure (OCI): An OCI consumption/deployment model for Exadata intended to provide more elastic/shared scaling characteristics (verify exact definition in current docs).
  • ECPU/OCPU: OCI compute capacity units used for pricing and sizing (exact usage depends on service).
  • VCN: Virtual Cloud Network—your isolated cloud network in OCI.
  • NSG: Network Security Group—virtual firewall rules applied to VNICs/resources.
  • DRG: Dynamic Routing Gateway—connects VCNs to on-prem or other VCNs/regions.
  • TDE: Transparent Data Encryption—Oracle encryption for data at rest.
  • TLS: Transport Layer Security—encryption for data in transit.
  • Shard: A database that holds a subset of the total dataset in a sharded design.
  • Shard key: Column(s) used to determine where data lives and how requests route.
  • Shard catalog: Metadata database that stores sharding topology/configuration.
  • Routing tier / GSM: Global service management component that routes connections to the correct shard (terminology varies by Oracle version/docs).
  • IaC: Infrastructure as Code (e.g., Terraform).
  • SLO/SLI: Service level objective/indicator—reliability engineering metrics.

23. Summary

Globally Distributed Exadata Database on Exascale Infrastructure on Oracle Cloud is best approached as a Data Management architecture that combines Exadata Database Service on Exascale Infrastructure with globally distributed database/sharding patterns to deliver global scale, locality, and resilience for Oracle workloads.

Key takeaways:
– It’s a strong fit when you need global OLTP scale-out, regional low latency, and failure isolation, while staying in the Oracle Database ecosystem.
– The main tradeoffs are complexity (routing, shard keys, operations) and cost (Exadata resources, multi-region networking, backups).
– Security and governance must be designed upfront: private networking, least-privilege IAM, encryption (TDE/TLS), auditing, and disciplined operations.
– Start small: validate networking, provisioning, observability, and a minimal topology; then expand to multi-region with repeatable automation.

Next step: read the OCI Exadata documentation and the Oracle Database sharding/globally distributed database documentation for your database version, then turn the lab foundation into a two-region proof-of-concept with a real shard key and failover drill.