Oracle Cloud Management Dashboard Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Observability and Management

Category

Observability and Management

1. Introduction

Oracle Cloud Management Dashboard is a console-based service in the Observability and Management suite that helps you build interactive dashboards to visualize operational data (such as metrics and logs) across Oracle Cloud resources.

In simple terms: Management Dashboard lets you create “single pane of glass” views—charts, tables, and tiles—so teams can quickly understand system health, performance trends, and operational status without jumping between many service pages.

Technically, Management Dashboard provides dashboard resources (and typically organizational constructs like dashboard groups/folders and reusable saved searches, depending on what your tenancy has enabled) that query and render data from supported observability backends (most commonly OCI Monitoring metrics and log/search sources available in your environment). Access to dashboards and underlying data is governed by OCI IAM policies; dashboard activity is traceable through Audit events.

The core problem it solves is operational visibility: turning raw telemetry into actionable, shared views for SREs, DevOps, platform teams, and application owners—while keeping access controlled and auditable.

Naming note: In the Oracle Cloud Console you may see the product labeled as Management Dashboards (plural) in navigation. This tutorial uses Management Dashboard as the primary service name. Verify exact UI labels in your region because Oracle updates console navigation periodically.

2. What is Management Dashboard?

Official purpose: Management Dashboard in Oracle Cloud provides a way to create, organize, and share dashboards that visualize operational data collected across Oracle Cloud Infrastructure (OCI) services.

Core capabilities

  • Create dashboards made of widgets (charts/tables/tiles/text) for operational views.
  • Visualize time-series data (commonly OCI Monitoring metrics).
  • Reuse queries/searches (often implemented as “saved searches” where supported).
  • Organize dashboards (for example, via dashboard groups/folders when available).
  • Control access using OCI IAM (who can view, create, update, or delete dashboards).
  • Share consistent views across teams and environments (dev/test/prod), typically using compartments and tags.

Major components (conceptual model)

While exact naming can vary by console release, the service typically includes:

  • Dashboard: A resource representing a layout with widgets and configuration.
  • Widget: A visualization element (chart, table, text, etc.) that renders data.
  • Dashboard group / folder (if enabled): A container that organizes dashboards.
  • Saved search / saved query (if enabled): Reusable query definitions used by widgets.
  • Compartments and tags: Governance primitives used to scope and organize access.
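As a mental model, a dashboard definition can be sketched as plain data. The field names below are illustrative only; they do not match the real OCI API schema:

```python
# Hypothetical, simplified model of the concepts above.
# Field names are illustrative -- NOT the real OCI API schema.
dashboard = {
    "displayName": "Prod Compute Health",
    "compartmentId": "ocid1.compartment.oc1..example",  # placeholder OCID
    "widgets": [
        {
            "type": "line_chart",
            "title": "CPU Utilization",
            "savedSearchRef": "cpu-by-instance",  # reusable query reference
        },
        {"type": "text", "title": "Runbook", "body": "See on-call wiki"},
    ],
    "freeformTags": {"env": "prod", "owner": "platform-team"},
}

def widget_titles(d):
    """Return the titles of all widgets on a dashboard."""
    return [w["title"] for w in d["widgets"]]

print(widget_titles(dashboard))  # -> ['CPU Utilization', 'Runbook']
```

The point of the sketch: a dashboard is configuration (layout, widget definitions, query references, tags), not data; the telemetry itself lives in the backend services.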

Service type

  • Control-plane service: You create and manage dashboard definitions; the service renders dashboards by querying supported telemetry backends.
  • Visualization layer: It is not a telemetry ingestion system by itself—cost and retention are driven by the underlying sources (metrics, logs, APM, etc.).

Scope: regional vs global

Oracle Cloud services are commonly region-scoped in the Console. Management Dashboard is generally used within the context of the currently selected region and the compartments you have access to.

  • Practical guidance: Assume dashboards are created and viewed in a region and access is governed by tenancy IAM + compartment scope.
  • Verify in official docs: Cross-region dashboards and cross-region querying behavior can change over time and by data source.

How it fits into the Oracle Cloud ecosystem

Management Dashboard sits in Observability and Management alongside services such as:

  • Monitoring (metrics, alarms)
  • Logging (logs, log groups)
  • Logging Analytics (if enabled/licensed in your tenancy)
  • Application Performance Monitoring (APM) (if enabled)
  • Events / Notifications (for alert routing)
  • Audit (for governance and security traceability)

Management Dashboard is typically used as the visualization and sharing layer across these sources.

3. Why use Management Dashboard?

Business reasons

  • Faster incident response: Shared operational views reduce time spent hunting for signals.
  • Standardized reporting: Executives and stakeholders get consistent dashboards for SLAs/SLOs, latency, error rate, and capacity.
  • Lower tool sprawl: For OCI-centric operations, you can reduce reliance on multiple external dashboard tools for basic visibility.

Technical reasons

  • Native OCI integration: Dashboards can use OCI-native telemetry sources without custom connectors (depending on your enabled services).
  • Compartment-aware views: Dashboards align naturally with OCI compartment design (app/env/team boundaries).
  • Reusable dashboard templates: Many teams standardize “golden dashboards” for each service tier.

Operational reasons

  • Single pane of glass: Reduce context switching between Monitoring, compute pages, load balancers, and logging views.
  • Runbook alignment: Dashboards can mirror runbooks (“check CPU, then error logs, then LB 5xx…”).
  • Shift-left observability: Developers can use the same dashboards in dev/test to catch regressions early.

Security/compliance reasons

  • IAM-controlled visibility: Separate who can view dashboards from who can modify them (and from who can view underlying logs).
  • Auditability: Dashboard create/update/delete activity is typically captured in OCI Audit.
  • Least privilege: Provide read-only access to dashboards for stakeholders without granting admin access.

Scalability/performance reasons

  • Scale dashboards with org structure: As environments grow, dashboards can be organized by compartment and tags.
  • Avoid per-user tool installs: A browser-based experience reduces client-side operational overhead.

When teams should choose it

Choose Management Dashboard when:

  • Your telemetry sources are primarily in Oracle Cloud (Monitoring, Logging, Logging Analytics, APM, etc.).
  • You want shared, controlled, browser-native dashboards for operations and reporting.
  • You want dashboards that align tightly with OCI IAM and compartment governance.

When teams should not choose it

Avoid it (or complement it with other tools) when:

  • You need deep cross-cloud correlation (AWS/Azure/GCP + OCI) in a single tool and Management Dashboard doesn’t meet your source requirements.
  • You rely on advanced visualization plugins, heavy transformations, or complex multi-source joins better handled by tools like Grafana, Kibana, or commercial observability platforms.
  • Your compliance requirements mandate on-prem-only tooling or specific third-party certifications not met by your OCI configuration.

4. Where is Management Dashboard used?

Industries

  • SaaS and internet platforms: dashboards for latency, error rates, saturation, capacity.
  • Finance and insurance: operational reporting and audit-friendly views (with strict IAM).
  • Retail/e-commerce: dashboards for peak traffic, order pipeline health, payment failures.
  • Healthcare: performance and availability tracking for patient-facing apps (with careful PHI handling).
  • Telecom and media: network throughput, service availability, and incident correlation.

Team types

  • SRE / DevOps / Platform Engineering
  • Cloud Operations / NOC
  • Application owners and on-call engineers
  • Security operations (when used for security signal dashboards—only if your log sources and access model support it)
  • FinOps (cost-adjacent dashboards, usually by visualizing utilization + budgets/alerts from other sources)

Workloads and architectures

  • Compute instances (VMs), autoscaling fleets
  • OKE (Kubernetes) clusters (often via metrics/log sources integrated into OCI)
  • Load balancers, API gateways, microservices
  • OCI databases (visibility often depends on Database Management/APM integrations you have enabled)
  • Multi-compartment enterprise landing zones

Real-world deployment contexts

  • Production: golden dashboards for availability, error rate, and resource saturation; executive summaries; on-call readiness.
  • Dev/Test: release validation dashboards; performance regression detection; environment health checks.

5. Top Use Cases and Scenarios

Below are realistic scenarios where Oracle Cloud Management Dashboard is a good fit.

1) Fleet CPU and memory overview (per environment)

  • Problem: Ops needs to see whether any compute instances are overloaded.
  • Why it fits: Dashboards consolidate key utilization metrics into one view.
  • Example: A “Prod Compute Health” dashboard shows CPU utilization trends and highlights top instances by CPU.

Note: Memory metrics availability depends on your instance monitoring/agent configuration and metric source. Verify what your tenancy publishes in OCI Monitoring.

2) Load balancer health and 5xx error tracking

  • Problem: Customer errors spike; you need a quick LB-level view.
  • Why it fits: Widgets can show request counts, backend errors, and latency.
  • Example: A dashboard shows LB HTTP 5xx, backend connection errors, and traffic.
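The 5xx tracking in this scenario boils down to a simple ratio per time bucket. A minimal sketch using synthetic per-minute counts (not real load balancer metrics):

```python
def error_rate_percent(total_requests, error_requests):
    """Percentage of requests that failed; 0.0 when there is no traffic."""
    if total_requests == 0:
        return 0.0
    return 100.0 * error_requests / total_requests

# Per-minute (request, 5xx) counts, as you might chart from LB metrics.
minutes = [(1200, 6), (1310, 5), (1280, 64), (900, 45)]
rates = [error_rate_percent(t, e) for t, e in minutes]
spiking = [i for i, r in enumerate(rates) if r > 1.0]  # minutes over a 1% budget
print(spiking)  # -> [2, 3]
```

A widget that charts this ratio (rather than raw 5xx counts) stays meaningful as traffic volume changes.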

3) Release-day dashboard for canary validation

  • Problem: During a deploy, teams need a single page to validate KPIs.
  • Why it fits: A shared dashboard focuses on latency/error/saturation signals.
  • Example: Canary dashboard shows p95 latency, error rate, and instance CPU for the canary pool.
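A canary check like this can be expressed as a p95 comparison against the baseline pool. A sketch with synthetic latencies and an assumed 20% regression budget:

```python
import statistics

def p95(samples):
    """95th percentile via the statistics module (Python 3.8+)."""
    return statistics.quantiles(samples, n=100)[94]

def canary_ok(canary_latencies, baseline_latencies, max_regression=1.2):
    """Pass the canary if its p95 latency is within 20% of baseline."""
    return p95(canary_latencies) <= max_regression * p95(baseline_latencies)

baseline = [100 + i % 20 for i in range(200)]     # synthetic ms latencies
good_canary = [102 + i % 20 for i in range(200)]  # slight, acceptable shift
bad_canary = [180 + i % 20 for i in range(200)]   # clear regression
print(canary_ok(good_canary, baseline), canary_ok(bad_canary, baseline))
```

The same comparison a human makes by eyeballing two dashboard widgets side by side can later be automated as a deployment gate.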

4) Multi-compartment operational reporting

  • Problem: Enterprises split workloads by compartment; leaders want consolidated visibility.
  • Why it fits: Management Dashboard can standardize dashboards per compartment and replicate patterns.
  • Example: Same “Service Health” dashboard cloned per BU compartment; executives switch dashboards rather than hunting metrics.

5) On-call triage dashboard (runbook-aligned)

  • Problem: On-call needs a checklist-driven view during incidents.
  • Why it fits: Dashboards can be structured in the same order as a runbook.
  • Example: Widgets show service availability, then recent errors (logs), then resource saturation.

6) Capacity planning trend views

  • Problem: Teams need to forecast scaling needs.
  • Why it fits: Time-series dashboards can show utilization trends and growth rates.
  • Example: 30/90-day CPU and network throughput trends for a service tier.
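Trend views like these support a rough forecast: fit a growth rate to the utilization series and project when it crosses a capacity threshold. A sketch with synthetic data:

```python
def slope_per_day(days, values):
    """Least-squares slope: average growth per day."""
    n = len(days)
    mx = sum(days) / n
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(days, values))
    den = sum((x - mx) ** 2 for x in days)
    return num / den

def days_until(current, threshold, growth_per_day):
    """Rough forecast of days until utilization crosses a threshold."""
    if growth_per_day <= 0:
        return None  # flat or shrinking: no crossing predicted
    return (threshold - current) / growth_per_day

days = list(range(30))
cpu = [40 + 0.5 * d for d in days]  # 0.5% CPU growth per day
g = slope_per_day(days, cpu)
print(round(g, 2), round(days_until(cpu[-1], 80, g), 1))  # -> 0.5 51.0
```

This is a linear extrapolation only; real capacity planning should also account for seasonality and step changes.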

7) Security operations overview (signal-only)

  • Problem: Security wants high-level patterns (e.g., auth failures).
  • Why it fits: If your log analytics/log sources are integrated, dashboards can visualize counts and trends.
  • Example: Dashboard shows failed login attempts and API calls by principal type (only if your log source supports this).

8) SLA/SLO tracking and executive scorecards

  • Problem: Leaders want uptime and key indicators without raw telemetry complexity.
  • Why it fits: Dashboards deliver simplified views for non-operators.
  • Example: Monthly availability and major incident count widgets (data sourced from monitoring/incident tooling integrations you maintain).

9) Platform team “landing zone” health dashboards

  • Problem: Shared infrastructure (DNS, networking, bastions) needs visibility.
  • Why it fits: Central dashboards provide governance-aligned visibility.
  • Example: Dashboard tracks VPN/IPSec tunnel status (where metrics exist), NAT usage, and key network telemetry.

10) Cost-adjacent utilization dashboards (FinOps support)

  • Problem: FinOps wants evidence of overprovisioning and idle resources.
  • Why it fits: Utilization dashboards are often the first step to right-sizing.
  • Example: A dashboard highlights instances with consistently low CPU utilization over 30 days.
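The right-sizing signal in this example is just “mean CPU below a threshold over the window.” A sketch with synthetic per-instance samples:

```python
def idle_instances(cpu_by_instance, threshold=5.0):
    """Instances whose mean CPU over the window is below the threshold (%)."""
    return sorted(
        name for name, samples in cpu_by_instance.items()
        if sum(samples) / len(samples) < threshold
    )

fleet = {
    "app-1": [42.0, 55.0, 61.0],
    "batch-7": [2.0, 1.5, 3.0],   # candidate for right-sizing
    "cache-2": [4.0, 4.5, 3.5],   # candidate for right-sizing
}
print(idle_instances(fleet))  # -> ['batch-7', 'cache-2']
```

A dashboard table widget sorted by mean CPU gives FinOps the same list visually; the threshold is a policy choice per workload type.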

11) Incident postmortem timeline dashboards

  • Problem: After an incident, you need a clear timeline view.
  • Why it fits: Dashboards can be used to replay a time window and capture screenshots/exports (where supported).
  • Example: A dashboard pinned to the incident time range shows CPU spike, traffic surge, and error increase.

12) Service owner self-service dashboards

  • Problem: Central ops can’t build dashboards for every team quickly.
  • Why it fits: With the right IAM, service owners build and maintain their dashboards.
  • Example: Each microservice team maintains a dashboard with their own KPIs and alerts.

6. Core Features

The exact widget types and data sources can evolve; validate the current feature list in Oracle Cloud documentation for your region/tenancy. The features below reflect common, practical capabilities of Management Dashboard in Oracle Cloud.

1) Dashboard resources (create/view/update/delete)

  • What it does: Lets you define dashboards as managed OCI resources.
  • Why it matters: Dashboards become shareable, governed artifacts (not local browser bookmarks).
  • Practical benefit: Teams can standardize “golden dashboards.”
  • Limitations/caveats: Dashboards are subject to IAM permissions and service limits.

2) Widget-based visualizations

  • What it does: Adds visual components such as charts, tables, and text blocks.
  • Why it matters: Different problems need different visual forms (trend vs ranking vs narrative).
  • Practical benefit: Combine KPIs, context, and links in one page.
  • Limitations/caveats: Widget types and capabilities vary by supported data sources.

3) Time range selection and time alignment

  • What it does: Lets viewers select a time window (e.g., last 15 minutes, last 24 hours).
  • Why it matters: Incident triage depends on narrowing to the right time window.
  • Practical benefit: Rapid zoom-in/zoom-out for troubleshooting and postmortems.
  • Limitations/caveats: Metric resolution depends on the underlying telemetry (e.g., 1m vs 5m granularity).
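The resolution caveat matters when comparing windows: a 5m chart is typically an aggregate of 1m points, so short spikes can vanish at coarser granularity. A sketch of mean downsampling:

```python
def downsample_mean(samples, factor):
    """Average consecutive groups of `factor` samples (e.g. 1m -> 5m)."""
    return [
        sum(samples[i:i + factor]) / len(samples[i:i + factor])
        for i in range(0, len(samples), factor)
    ]

one_minute = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
print(downsample_mean(one_minute, 5))  # -> [30.0, 80.0]
```

Note how a brief 100% sample would be smoothed into the group mean; use max-based statistics when hunting spikes.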

4) Integration with OCI Monitoring metrics (common)

  • What it does: Visualizes metrics from OCI Monitoring (service metrics and custom metrics).
  • Why it matters: Metrics are the backbone of operational observability.
  • Practical benefit: Turn metrics explorer queries into durable, shareable widgets.
  • Limitations/caveats: Viewing metrics requires IAM permissions to read metrics in relevant compartments.
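Metric widgets are ultimately driven by Monitoring Query Language (MQL) expressions such as CpuUtilization[1m].mean(). The helper below composes expressions of that general shape; verify the exact MQL grammar and metric names for your namespace in the official docs:

```python
def mql(metric, interval="1m", statistic="mean", dimensions=None):
    """Compose an MQL-style expression like the ones Metrics Explorer
    generates, e.g. CpuUtilization[1m].mean(). Grammar is assumed --
    verify against the official MQL reference."""
    dims = ""
    if dimensions:
        pairs = ", ".join(f'{k} = "{v}"' for k, v in sorted(dimensions.items()))
        dims = "{" + pairs + "}"
    return f"{metric}[{interval}]{dims}.{statistic}()"

print(mql("CpuUtilization"))  # -> CpuUtilization[1m].mean()
print(mql("CpuUtilization", "5m", "max",
          {"resourceId": "ocid1.instance.oc1..example"}))
```

Building queries in Metrics Explorer first, then copying the generated expression into a widget, avoids guessing at the syntax.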

5) Saved searches / reusable queries (where supported)

  • What it does: Lets you store a query once and reuse it across widgets/dashboards.
  • Why it matters: Avoid duplicated query definitions and inconsistent dashboards.
  • Practical benefit: Update one saved search; all dependent widgets reflect the change.
  • Limitations/caveats: Availability and supported query languages depend on enabled services (e.g., Logging Analytics). Verify in official docs.

6) Organizational constructs (dashboard groups/folders) (where supported)

  • What it does: Organizes dashboards for teams/environments/projects.
  • Why it matters: At scale, hundreds of dashboards need structure.
  • Practical benefit: “Prod / Dev / Shared Services” grouping.
  • Limitations/caveats: Some tenancies may rely primarily on compartments + tags rather than separate folder constructs.

7) IAM-based access control (view vs manage)

  • What it does: Uses OCI IAM policies to control dashboard administration and viewing.
  • Why it matters: Prevent unauthorized edits and protect sensitive operational views.
  • Practical benefit: Give stakeholders view access without granting admin rights.
  • Limitations/caveats: Users also need permissions for the underlying data sources (metrics/logs), not just dashboards.

8) Compartment scoping and governance alignment

  • What it does: Aligns dashboard access and organization with compartment boundaries.
  • Why it matters: Compartments are OCI’s primary isolation and delegation mechanism.
  • Practical benefit: Teams can self-manage dashboards within their compartment.
  • Limitations/caveats: Cross-compartment dashboards require deliberate IAM design.

9) Tagging support for dashboards (governance)

  • What it does: Applies defined/freeform tags to dashboards (if supported as standard OCI resource tagging).
  • Why it matters: Enables cost allocation, ownership, and lifecycle governance.
  • Practical benefit: Tag dashboards by owner/team/app/env.
  • Limitations/caveats: Tagging policies vary by tenancy governance.

10) Sharing operational context (links, notes, runbook references)

  • What it does: Adds textual widgets or descriptions (where supported) to embed runbook links and escalation info.
  • Why it matters: The best dashboards reduce “tribal knowledge.”
  • Practical benefit: On-call sees the exact runbook link next to the KPI chart.
  • Limitations/caveats: Avoid embedding secrets/tokens in text.

11) Dashboard lifecycle operations (clone/export/import where available)

  • What it does: Some environments support duplication or export/import of dashboard definitions.
  • Why it matters: Promotes reuse across compartments and environments.
  • Practical benefit: Copy a “Prod template” dashboard into each service compartment.
  • Limitations/caveats: Export/import capabilities differ by service version and tenancy features. Verify in official docs.

12) Auditability through OCI Audit

  • What it does: Records management operations (create/update/delete) in OCI Audit.
  • Why it matters: Compliance and change tracking require a reliable audit trail.
  • Practical benefit: Identify who changed a dashboard before an incident.
  • Limitations/caveats: Audit retention and access require IAM and governance planning.

7. Architecture and How It Works

High-level service architecture

Management Dashboard is primarily a visualization and configuration layer:

  • Users interact through the OCI Console (and in some cases APIs/SDKs, if dashboard resources are exposed that way in your environment—verify).
  • Dashboard definitions are stored as OCI resources.
  • When a dashboard is viewed, the service queries underlying telemetry sources (for example, OCI Monitoring metrics) based on widget configuration and user permissions.
  • Access is governed by OCI IAM; actions are logged in OCI Audit.

Data/control flow (typical)

  1. A user opens a dashboard in the OCI Console (in a specific region).
  2. The console requests the dashboard definition (layout, widgets, queries).
  3. For each widget, the backend performs read operations against supported telemetry services (metrics/logs/search).
  4. Results are returned and rendered in the browser.
  5. Any create/update/delete actions are recorded by Audit (subject to your tenancy configuration).
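The permission-gated flow above can be simulated in a few lines. This is a toy model, not the real service code; the metric store and permission names are invented for illustration:

```python
# Toy simulation of the render flow: check permissions, then fetch per widget.
METRIC_STORE = {"CpuUtilization": [41.0, 44.5, 43.2]}  # stand-in for Monitoring

def render_widget(user_perms, widget):
    """A widget renders only if the viewer can read BOTH the dashboard
    and the widget's underlying data source."""
    if "DASHBOARD_READ" not in user_perms:
        return "forbidden: dashboard"
    if "METRIC_READ" not in user_perms:
        return "no data (missing metric permission)"
    return METRIC_STORE.get(widget["metric"], [])

viewer = {"DASHBOARD_READ", "METRIC_READ"}
auditor = {"DASHBOARD_READ"}          # can open the page but not the data
widget = {"metric": "CpuUtilization"}
print(render_widget(viewer, widget))   # -> [41.0, 44.5, 43.2]
print(render_widget(auditor, widget))  # -> no data (missing metric permission)
```

The second case is the classic "empty widget" symptom: the dashboard loads, but the viewer lacks read access to the underlying metrics.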

Integrations with related services

Commonly integrated services in the Observability and Management category:

  • OCI Monitoring: metrics, metric namespaces, alarms.
  • OCI Logging / Logging Analytics: log exploration and analytics (capability depends on what’s enabled).
  • OCI Notifications (indirect): alarms route to Notifications; dashboards show the metric that triggers alarms.
  • OCI Events (indirect): event rules can drive remediation; dashboards help you observe outcomes.
  • OCI Identity and Access Management (IAM): policies controlling dashboard and data access.
  • OCI Audit: records who changed what.

Dependency services (conceptual)

  • IAM (authentication/authorization)
  • Telemetry backends (Monitoring, Logging, etc.)
  • Audit (governance)
  • Tagging (governance)

Security/authentication model

  • Users authenticate to OCI using IAM (user/password + MFA, identity federation, or API signing for programmatic access).
  • Authorization is policy-based (groups/dynamic groups, compartments, resource families).
  • To view a widget, the user generally must have:
    – Permission to view the dashboard resource, and
    – Permission to read the widget’s underlying data (metrics/logs) in the relevant compartment(s).

Networking model

  • Management Dashboard is accessed over HTTPS via the OCI Console.
  • It is a managed service (control plane). You typically do not place it in your VCN.
  • Network restrictions are handled via IAM, federation, and organizational controls (and potentially IP allowlisting solutions at the identity/provider layer). For private access patterns, verify OCI’s latest guidance for console access and service endpoints.

Monitoring/logging/governance considerations

  • Audit: Track dashboard CRUD operations.
  • Alarms: For critical widgets, pair charts with alarms so dashboards are complemented by proactive alerts.
  • Standard tags: Tag dashboards for owner/app/env.
  • Service limits: Plan for limits on number of dashboards, widgets, etc. (check “Service Limits” in Console for your region).

Simple architecture diagram

flowchart LR
  U[User / Browser] --> C[OCI Console]
  C --> MD[Management Dashboard]
  MD --> IAM[IAM Policy Check]
  MD --> MON["OCI Monitoring (Metrics)"]
  MD --> LOG["OCI Logging / Analytics (optional)"]
  MD --> AUD["OCI Audit (records changes)"]

Production-style architecture diagram

flowchart TB
  subgraph Org[Enterprise / Startup Org]
    IDP["Identity Provider (SSO/Federation)"]
    IAM["IAM: Users, Groups, Policies"]
    AUD[OCI Audit]
  end

  subgraph Obs[Observability and Management]
    MD[Management Dashboard]
    MON["OCI Monitoring: Metrics + Alarms"]
    LOG["OCI Logging / Logging Analytics (if enabled)"]
    NOTIF[OCI Notifications]
  end

  subgraph Workloads[Workloads]
    OKE[OKE / Microservices]
    VM[Compute Instances]
    DB[Databases]
    LB[Load Balancer]
  end

  IDP --> IAM
  IAM --> MD
  MD --> MON
  MD --> LOG
  MON --> NOTIF

  VM --> MON
  LB --> MON
  OKE --> MON
  DB --> MON

  VM --> LOG
  OKE --> LOG
  LB --> LOG

  MD --> AUD
  IAM --> AUD

8. Prerequisites

Before you start, ensure the following are in place.

Tenancy/account requirements

  • An active Oracle Cloud tenancy.
  • Access to the OCI Console.
  • Ability to select the region where you’ll build and view dashboards.

Permissions / IAM roles

You need IAM permissions for:

  1. Management Dashboard resources (to create/edit dashboards).
  2. The underlying telemetry sources (at least read permissions for the metrics/logs used in widgets).

Because OCI policy syntax and resource family names can change, use these examples as patterns and verify the exact policy verbs/resource families in official docs:

  • Example pattern (dashboard administration):
    Allow group <group-name> to manage <management-dashboard-resource-family> in compartment <compartment-name>
  • Example pattern (read-only viewers):
    Allow group <group-name> to read <management-dashboard-resource-family> in compartment <compartment-name>
  • Example pattern (metrics readers):
    Allow group <group-name> to read metrics in compartment <compartment-name>

Tip: Many “it shows no data” issues are IAM issues. Make sure viewers can read both the dashboard and the data source.
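The policy patterns above are repetitive enough to generate. A sketch; the resource-family placeholder must be replaced with the name your tenancy actually uses (verify in the policy reference):

```python
def policy(group, verb, resource, compartment):
    """Render an OCI policy statement from the patterns above.
    The resource-family name here is a placeholder -- verify the
    current name in the official policy reference before use."""
    return f"Allow group {group} to {verb} {resource} in compartment {compartment}"

statements = [
    policy("MD-Admins", "manage", "<management-dashboard-resource-family>", "lab-observability"),
    policy("MD-Viewers", "read", "<management-dashboard-resource-family>", "lab-observability"),
    policy("MD-Viewers", "read", "metrics", "lab-observability"),
]
print(statements[2])  # -> Allow group MD-Viewers to read metrics in compartment lab-observability
```

Generating statements this way keeps admin/viewer pairs consistent when you repeat the pattern across many compartments.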

Billing requirements

  • Management Dashboard often does not have a standalone line-item cost; your spend is typically driven by the telemetry sources (Monitoring custom metrics, log ingestion/analytics, APM, etc.).
  • You still need a valid billing setup for any paid underlying services you enable.

Tools (optional)

  • OCI Console (required for this tutorial)
  • OCI CLI (optional) for related tasks (metrics/logs management). This tutorial keeps steps console-first to stay executable even without CLI.

Region availability

  • Management Dashboard availability can vary by region. Verify in the OCI Console navigation for your region and in official docs.

Quotas/limits

  • Expect limits on the number of dashboards, widgets, etc. To check, open the OCI Console → Governance & Administration → Limits, Quotas and Usage (naming can vary) and search for Management Dashboard-related limits.

Prerequisite services

For the hands-on lab in this article you need:

  • OCI Monitoring metrics for at least one resource (a compute instance is the easiest demo).
  • (Optional) An Always Free eligible compute instance to generate a visible CPU change.

9. Pricing / Cost

Pricing model (what to expect)

As of typical OCI patterns, Management Dashboard itself is usually a configuration/visualization feature rather than a metered ingestion/compute service. Your cost is commonly driven by:

  • Metrics ingestion and storage (especially custom metrics, higher resolution, and longer retention—depending on OCI Monitoring pricing).
  • Logs ingestion, retention, and analytics (OCI Logging and/or Logging Analytics pricing, if used).
  • APM (if used).
  • Data egress (if dashboards are accessed in ways that trigger egress, or if telemetry is shipped out—less common for console-only use, but relevant for exports/integrations).

Because Oracle pricing can change and differs by region and service edition/SKU:

  • Do not assume a separate “Management Dashboard” charge exists without verifying.
  • Confirm on official sources:
    – Oracle Cloud price list: https://www.oracle.com/cloud/price-list/
    – Oracle Cloud cost estimator: https://www.oracle.com/cloud/costestimator/
    – Oracle Cloud Free Tier: https://www.oracle.com/cloud/free/

Pricing dimensions (indirect but real)

Even if dashboards are “free,” you should model cost for the data they rely on:

  • Custom metrics — cost rises with high-cardinality dimensions, high frequency, and many series. Dashboards encourage more metrics; keep series counts under control.
  • Logs — cost rises with high ingestion volume, long retention, and verbose debug logging. Dashboards often trigger “log everything”; set sane log levels.
  • Logging Analytics / APM — cost rises with analytics queries, ingestion, and retention. Dashboards can popularize expensive queries; optimize saved searches.
  • Storage — cost rises with longer retention for log/metric exports. Use tiering and retention policies.
  • Egress — cost rises when telemetry is shipped outside OCI. Dashboards may drive exports; keep analysis in-region where possible.
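The custom-metrics driver is worth quantifying: the worst-case time-series count is the number of metrics times the product of dimension cardinalities. A quick estimator:

```python
import math

def series_count(metrics, dimension_cardinalities):
    """Worst-case time-series count: metrics x product of dimension values.
    This is why high-cardinality dimensions (user IDs, request IDs)
    explode metric cost."""
    return metrics * math.prod(dimension_cardinalities)

# 10 metrics, dimensions: 50 instances x 3 environments -> manageable
print(series_count(10, [50, 3]))           # -> 1500
# The same 10 metrics with a per-user dimension of 100,000 values
print(series_count(10, [50, 3, 100_000]))  # -> 150000000
```

Running this estimate before adding a new dimension is a cheap way to catch cardinality explosions at design time.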

Free tier (if applicable)

  • OCI offers an Always Free tier and free allowances for some telemetry services.
  • Whether your dashboard use stays within free allowances depends on:
    – Which telemetry sources you enable
    – Ingestion volumes and retention
    – Region and tenancy entitlements
  • Verify current free-tier entitlements: https://www.oracle.com/cloud/free/

Hidden or indirect costs

  • Over-instrumentation: Adding too many metrics/logs “just for dashboards.”
  • Retention creep: Keeping logs longer than needed for compliance/ops.
  • High-cardinality metric dimensions: Explodes time-series count and can raise costs.
  • Operational time: Maintaining many dashboards without ownership/tagging.

Network/data transfer implications

  • Viewing dashboards in the console typically does not create notable data egress charges by itself.
  • If you export data or integrate external systems (Grafana, SIEM, etc.), model network charges.

Cost optimization tips

  • Prefer service metrics over custom metrics when they meet requirements.
  • Keep metric dimensions low-cardinality (avoid embedding user IDs, request IDs, etc.).
  • Use retention policies for logs.
  • Use saved searches and standard queries to reduce “query sprawl.”
  • Tag dashboards and telemetry resources to enforce ownership and cleanup.

Example low-cost starter estimate (no fabricated numbers)

A low-cost way to start:

  • 1 Always Free compute instance (if available in your region/tenancy)
  • Existing service metrics already emitted by OCI services
  • 1–3 dashboards with a handful of widgets

Primary costs: typically $0 for the dashboards themselves, but verify whether any underlying telemetry usage exceeds free allowances in your region.

Example production cost considerations

In production, costs are dominated by:

  • Log ingestion (especially app logs at high volume)
  • APM and distributed tracing (if used)
  • Custom metrics across many microservices and dimensions

Model costs by estimating:

  • The number of services emitting telemetry
  • Events per second and average log size
  • Retention period
  • Query frequency and user count (for analytics tools)
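A back-of-the-envelope ingestion estimate follows directly from events per second, average event size, and the billing window. The service counts and event sizes below are invented examples:

```python
def log_gb_per_month(events_per_second, avg_event_bytes, days=30):
    """Rough monthly log ingestion volume in GB."""
    total_bytes = events_per_second * avg_event_bytes * 86_400 * days
    return total_bytes / 1_000_000_000  # decimal GB, as pricing pages use

# Invented example: 20 services x 50 events/s each, 500-byte average log line
volume = log_gb_per_month(20 * 50, 500)
print(round(volume))  # -> 1296
```

Multiply the result by your per-GB ingestion price (from the official price list) to get a first-order monthly cost before retention and analytics charges.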

10. Step-by-Step Hands-On Tutorial

This lab builds a practical, low-risk dashboard that visualizes real OCI metrics from a compute instance, then shares it with a read-only group.

Objective

Create a Management Dashboard in Oracle Cloud that:

  • Shows a CPU utilization chart for a compute instance (or another resource metric available to you)
  • Includes basic dashboard context (owner, purpose)
  • Demonstrates IAM separation between dashboard editors and viewers

Lab Overview

You will:

  1. Confirm you have a metric source (a compute instance is easiest).
  2. Create IAM groups and policies for dashboard admins and dashboard viewers.
  3. Create a Management Dashboard (and optionally a dashboard group/folder if your console supports it).
  4. Add a metrics widget using a metric you confirm exists in your tenancy via Metrics Explorer.
  5. Validate that the dashboard shows data (and optionally create a brief CPU spike to see the chart react).
  6. Test viewer access.
  7. Clean up all created resources.

If your tenancy already has strict IAM guardrails, work in a dedicated compartment (recommended) and coordinate with your OCI admins.


Step 1: Prepare a compartment for the lab

Console steps

  1. Open the OCI Console.
  2. Go to Identity & Security → Compartments.
  3. Click Create Compartment.
  4. Name: lab-observability
  5. Description: Lab compartment for Management Dashboard tutorial
  6. Parent compartment: choose an appropriate parent (often your root compartment, if allowed).
  7. Click Create Compartment.

Expected outcome

  • You have a compartment lab-observability to isolate resources and policies for this tutorial.

Verification

  • Open the compartment details and confirm it appears in the compartment list.


Step 2: Ensure you have a metric source (compute instance)

You need some OCI Monitoring metrics to visualize. The most straightforward path is to use a compute instance.

Option A (recommended): Use an existing compute instance

If you already have a VM in a compartment you can access:

  • Note its compartment and region.
  • Ensure you have permission to view its metrics.

Option B: Create a small compute instance (low-cost / Always Free where possible)

Console steps

  1. Go to Compute → Instances.
  2. Select compartment: lab-observability.
  3. Click Create instance.
  4. Choose:
    – Name: lab-md-vm
    – Image: Oracle Linux (or Ubuntu)
    – Shape: choose Always Free eligible if available (verify in your tenancy/region)
    – Networking: the default VCN/subnet is fine for the lab
  5. Click Create.

Expected outcome

  • A running compute instance exists in lab-observability.

Verification

  • The instance state is RUNNING.

Metrics note: Some compute-level metrics (like CPU utilization) may require specific monitoring/agent configuration depending on your environment and metric namespace. If you don’t see CPU metrics later, don’t force it—use any metric you can confirm exists in your tenancy via Metrics Explorer.


Step 3: Confirm which metrics you can query (Metrics Explorer)

This step prevents the most common lab failure: adding a widget with a metric that doesn’t exist for your resource.

Console steps

  1. Go to Observability & Management → Monitoring.
  2. Open Metrics Explorer (the name may vary slightly).
  3. Select compartment: lab-observability.
  4. In the metric selector:
    – Browse the available namespaces.
    – Pick a namespace relevant to your resource (Compute, Load Balancer, etc.).
  5. Confirm you can chart any metric for your resource.

Expected outcome

  • You can see at least one metric chart with real data points.

Verification

  • The chart displays a time series for the last 5–60 minutes.

If you do not see any metrics

  • Confirm you are in the correct region.
  • Confirm you selected the correct compartment.
  • Confirm you have IAM permissions to read metrics.
  • If using compute CPU utilization, verify the instance monitoring/agent configuration in your environment. (Exact steps vary—verify in official docs for your OS/agent setup.)


Step 4: Create IAM groups for dashboard admins and viewers

You’ll create:
  • MD-Admins: can create and edit dashboards
  • MD-Viewers: can view dashboards (and read required metrics)

Console steps
  1. Go to Identity & Security → Groups.
  2. Click Create Group:
     – Name: MD-Admins
     – Description: Management Dashboard administrators
  3. Create Group again:
     – Name: MD-Viewers
     – Description: Management Dashboard viewers

Add at least one test user to each group (or use separate test users if you have them).

Expected outcome – Two groups exist.

Verification – Group details pages show expected members.


Step 5: Create IAM policies for dashboards and metrics access

Policies must grant access to:
  • Management Dashboard resources
  • Underlying metrics (Monitoring)

Console steps
  1. Go to Identity & Security → Policies.
  2. Select compartment: where you manage policies (often the root compartment; depends on tenancy design).
  3. Click Create Policy.
  4. Name: policy-md-lab
  5. Description: Policies for Management Dashboard tutorial
  6. Policy statements (use as a starting point; verify exact resource families in official docs):

Allow group MD-Admins to manage management-dashboard-family in compartment lab-observability
Allow group MD-Viewers to read management-dashboard-family in compartment lab-observability

Allow group MD-Admins to read metrics in compartment lab-observability
Allow group MD-Viewers to read metrics in compartment lab-observability

Expected outcome
  • MD-Admins can create/edit dashboards in the lab compartment.
  • MD-Viewers can open dashboards but not edit them.
  • Both can read metrics used by widgets.

Verification
  • If you can, sign in as an admin-group user and confirm “Create dashboard” is available.
  • Sign in as a viewer-group user later and confirm the dashboard opens read-only.

Common IAM caveat – If your dashboard pulls metrics from a different compartment than the dashboard resource, you must grant metrics read access in that compartment too.
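To keep the admin and viewer grants consistent (including re-running them for other compartments, per the caveat above), the statements can be generated from a small role matrix. This Python sketch emits the same statements as Step 5; the `ROLES` mapping and helper are illustrative, and exact resource-family names should still be verified in the official IAM documentation.

```python
# Sketch: generate the Step 5 statements from a role matrix so the admin and
# viewer grants stay in sync. Verbs and resource families mirror the statements
# above; verify exact family names in the official IAM documentation.

ROLES = {
    "MD-Admins":  [("manage", "management-dashboard-family"), ("read", "metrics")],
    "MD-Viewers": [("read",   "management-dashboard-family"), ("read", "metrics")],
}

def policy_statements(compartment, roles=ROLES):
    """Return one OCI policy statement string per (group, verb, resource)."""
    return [
        f"Allow group {group} to {verb} {resource} in compartment {compartment}"
        for group, grants in roles.items()
        for verb, resource in grants
    ]

for statement in policy_statements("lab-observability"):
    print(statement)
```

If a dashboard's widgets pull metrics from a different compartment, call the helper again with that compartment name and add those statements too.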


Step 6: Create a dashboard in Management Dashboard

Console steps
  1. Go to Observability & Management → Management Dashboard.
  2. Choose compartment: lab-observability.
  3. Click Create dashboard.
  4. Provide:
     – Name: Lab - Instance Health
     – Description: CPU utilization and basic health signals for lab VM
     – Tags (recommended): env=lab, owner=<your-team-or-name>
  5. Create.

Expected outcome – An empty dashboard is created and you are in the dashboard editor.

Verification – Dashboard appears in the list and opens successfully.


Step 7: Add a metrics widget (using a metric you confirmed exists)

Because metric names and namespaces can vary, base this step on what you saw in Metrics Explorer (Step 3).

Console steps (generic)
  1. In the dashboard editor, click Add widget (or Add).
  2. Choose Metric (or “Monitoring metric”) as the widget type.
  3. Configure:
     – Compartment: lab-observability
     – Namespace: pick the namespace you used in Metrics Explorer
     – Metric name: select a metric with visible data
     – Dimensions: filter to your resource (for example, instance OCID/resourceId, or a resourceName dimension if available)
     – Statistic: mean/avg (typical) or sum, depending on the metric
     – Interval: choose something like 1m or 5m, depending on available resolution
  4. Visualization:
     – Line chart is typical for utilization.
     – Set title: CPU Utilization (lab-md-vm) (or appropriate).

Expected outcome – The widget renders a chart with data points.

Verification
  • Change the time range (last 15 minutes → last 1 hour) and confirm the chart updates.
  • If the widget shows “No data,” revisit Troubleshooting.
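Before saving a widget, it helps to sanity-check that every required input from this step is filled in. The sketch below uses a hypothetical widget dict; these field names are illustrative, not the actual Management Dashboard schema.

```python
# Hypothetical widget definition plus a pre-flight check mirroring Step 7.
# The field names are illustrative, not the actual Management Dashboard
# schema; the point is to validate required inputs before saving.

REQUIRED = ("compartment", "namespace", "metric", "statistic", "interval")

def check_widget(widget):
    """Return a list of problems; an empty list means the widget looks complete."""
    problems = [f"missing: {key}" for key in REQUIRED if not widget.get(key)]
    if widget.get("interval") not in (None, "1m", "5m", "1h"):
        problems.append(f"unusual interval: {widget['interval']}")
    return problems

widget = {
    "title": "CPU Utilization (lab-md-vm)",
    "compartment": "lab-observability",
    "namespace": "oci_computeagent",  # use the namespace you confirmed in Step 3
    "metric": "CpuUtilization",
    "statistic": "mean",
    "interval": "1m",
    "dimensions": {"resourceDisplayName": "lab-md-vm"},
}
print(check_widget(widget))  # -> []
```

An empty result means the widget carries everything the console form asks for; a "missing" entry points at the field to fix before chasing "No data" elsewhere.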


Step 8 (optional): Generate a visible CPU spike to validate the chart reacts

If your widget is CPU utilization and your instance is mostly idle, generate a short CPU load. On Linux, a quick way is running yes loops.

SSH into the instance (use Cloud Shell or your local terminal; verify your network/SSH access):

ssh opc@<public-ip>

Run a short CPU load (Oracle Linux/Ubuntu):

# Start 2 CPU burners
yes > /dev/null &
yes > /dev/null &

# Let it run for ~2 minutes
sleep 120

# Stop the burners (killall comes from the psmisc package; kill %1 %2 also works)
killall yes

Expected outcome – CPU utilization should increase during the 2-minute window.

Verification – Refresh the dashboard time range and confirm the CPU chart shows a bump.

Safety note – Keep the load brief. This is only to validate the visualization.
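If you want a programmatic check instead of eyeballing the chart, a tiny heuristic over the exported datapoints works. The 10% floor and the 3x factor below are arbitrary lab choices, not OCI defaults.

```python
# Heuristic spike check: flag a bump when the window maximum clearly exceeds
# both an absolute floor and a multiple of the idle minimum. Thresholds are
# arbitrary lab choices, not OCI defaults.

def spike_visible(datapoints, baseline_pct=10.0, factor=3.0):
    """True if any mean-CPU datapoint (percent) stands out from the baseline."""
    if not datapoints:
        return False
    return max(datapoints) > max(baseline_pct, factor * min(datapoints))

idle = [2.1, 1.8, 2.4, 2.0]           # mostly idle VM
burst = [2.1, 1.9, 88.0, 91.5, 3.0]   # the two `yes` burners at work
print(spike_visible(idle), spike_visible(burst))  # -> False True
```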


Step 9: Add context widgets (text, links) for operational usability

Dashboards are most useful when they contain context.

Console steps
  1. Add a Text widget (or “Markdown/Text” widget if available).
  2. Add:
     – Purpose: what this dashboard is for
     – Owner/team
     – On-call rotation link
     – Runbook link
     – Escalation path (no secrets)

Example text:
  – Owner: Platform Team
  – Runbook: <internal URL>
  – Escalation: #oncall channel

Expected outcome – Dashboard is self-documenting.

Verification – Viewer can understand “what to do next” from the dashboard alone.


Step 10: Test viewer access (read-only)

Console steps
  1. Sign out, then sign in as a user in MD-Viewers (or temporarily add your user to that group and remove from admins for the test).
  2. Navigate to Management Dashboard → compartment lab-observability.
  3. Open Lab - Instance Health.

Expected outcome
  • Viewer can open the dashboard and see charts.
  • Viewer cannot edit or add widgets.

Verification
  • Edit controls are absent or disabled.
  • If the viewer sees “Not authorized” or blank widgets, it’s usually missing metrics read permissions.


Validation

Use this checklist:

  • [ ] Dashboard opens without errors.
  • [ ] At least one metrics widget shows real data points.
  • [ ] Time range changes update the chart.
  • [ ] Viewer user can view the dashboard but cannot edit it.
  • [ ] (Optional) CPU spike is visible in the time series.

Troubleshooting

Problem: Dashboard exists but widgets show “No data”

Likely causes
  • Wrong compartment selected in the widget
  • Wrong namespace/metric
  • Time range too narrow
  • Resource dimension filter doesn’t match (wrong OCID/dimension)
  • Metrics not emitted for that resource type in your environment

Fixes
  • Re-check the metric in Monitoring → Metrics Explorer first.
  • Copy the same namespace/metric/dimensions into the widget.
  • Expand the time range to the last 1 hour.
  • Remove dimension filters temporarily to confirm data exists, then re-add them.

Problem: Viewer can open dashboard but sees authorization errors in widgets

Cause – Viewer has permission to read dashboards but not to read metrics/logs used by widgets.

Fix
  • Add/adjust IAM policy statements for read metrics (and/or logs) in the relevant compartment(s).
  • Confirm the viewer is in the right group and policies are in the correct parent compartment.

Problem: Management Dashboard menu is missing

Cause – Service not available in that region, or your tenancy restricts it, or console navigation changed.

Fix
  • Switch regions and check again.
  • Use OCI Console search (top search bar) for “dashboard”.
  • Verify in official documentation and your tenancy service enablement.

Problem: CPU utilization metric not found

Cause – CPU metrics can depend on agent/plugin configuration or available namespaces.

Fix
  • Use any metric that exists (VCN, LB, storage, etc.) to complete the lab.
  • Verify compute monitoring metric requirements in official OCI Monitoring documentation.

Cleanup

To avoid ongoing cost (from compute/logging) and reduce clutter:

  1. Delete the dashboard – Management Dashboard → select Lab - Instance Health → Delete

  2. Delete dashboard groups/folders (if you created any) – Delete after dashboards are removed

  3. Terminate compute instance (if you created one) – Compute → Instances → lab-md-vm → Terminate

  4. Remove IAM policies – Identity & Security → Policies → delete policy-md-lab

  5. Remove IAM groups (optional) – Delete MD-Admins and MD-Viewers if they were created only for the lab

  6. Delete compartment (optional, only if empty) – Delete lab-observability after confirming it contains no resources

11. Best Practices

Architecture best practices

  • Design dashboards by audience:
    – Executive scorecard (few KPIs, stable)
    – On-call dashboard (golden signals + links)
    – Deep-dive dashboard (many widgets for diagnosis)
  • Use multiple dashboards rather than one giant board. Large dashboards are harder to maintain and can be slower to render.
  • Standardize layouts across services: top row = availability/errors, middle = saturation, bottom = dependencies.

IAM / security best practices

  • Separate creators from viewers:
    – MD-Admins can manage dashboards
    – MD-Viewers can read dashboards
  • Least privilege for data sources:
    – If a dashboard uses metrics only, don’t grant broad log access.
  • Use compartments intentionally:
    – Place dashboards in the same compartment as the team that owns them, or in a shared ops compartment with carefully scoped access.

Cost best practices

  • Don’t create telemetry “just for a dashboard” unless you’ve modeled cost.
  • Reduce metric cardinality and avoid high-frequency custom metrics unless necessary.
  • Right-size retention for logs used in dashboards.

Performance best practices

  • Limit the number of high-resolution widgets on a single dashboard.
  • Prefer aggregated views (mean/sum over 5m) for scorecards; use 1m only when needed.
  • Use consistent time ranges (like last 1h) for on-call views; allow zoom for deep dives.
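The "aggregate for scorecards" advice can be illustrated with a simple roll-up: collapse a 1-minute series into 5-minute means before charting it on a broad dashboard. A minimal sketch in pure Python, with no OCI dependency:

```python
# Roll a 1-minute series up to 5-minute means, as a scorecard widget would.

def downsample_mean(series, window=5):
    """Collapse consecutive `window`-sized chunks into their means."""
    return [
        sum(chunk) / len(chunk)
        for chunk in (series[i:i + window] for i in range(0, len(series), window))
    ]

one_minute = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
print(downsample_mean(one_minute))  # -> [30.0, 80.0]
```

Ten 1-minute points become two 5-minute points: fewer series to render, and trends stay readable on a crowded board.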

Reliability best practices

  • Pair dashboards with alarms. Dashboards don’t wake anyone up.
  • Ensure every critical widget has:
    – A linked alarm (where applicable)
    – A runbook reference (text widget)
    – An owner tag

Operations best practices

  • Tag dashboards: owner, env, app, costCenter.
  • Maintain an inventory: a “Dashboard Catalog” dashboard listing key dashboards and owners (simple text/table).
  • Review dashboards quarterly to remove stale ones.

Governance/tagging/naming best practices

  • Naming convention examples:
    – Prod - Payments - Golden Signals
    – Dev - API - Latency and Errors
  • Use defined tags for ownership and environment to enable lifecycle management.
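A naming and tagging convention is only useful if it is enforced. The sketch below checks a dashboard name against the convention examples above and the owner/env tags this tutorial recommends; the regex and tag keys are assumptions made for this tutorial, not an Oracle-defined standard.

```python
# Check a dashboard name against the "<Env> - <App> - <Purpose>" convention and
# the ownership tags recommended above. The regex and tag keys are tutorial
# assumptions, not an Oracle-defined standard.
import re

NAME_RE = re.compile(r"^(Prod|Dev|Test) - [\w /]+ - [\w /]+$")
REQUIRED_TAGS = {"owner", "env"}

def dashboard_compliant(name, tags):
    """True if the name matches the convention and the required tags exist."""
    return bool(NAME_RE.match(name)) and REQUIRED_TAGS <= set(tags)

print(dashboard_compliant("Prod - Payments - Golden Signals",
                          {"owner": "platform-team", "env": "prod"}))  # -> True
print(dashboard_compliant("my dashboard", {"owner": "x"}))             # -> False
```

A check like this fits a quarterly review script: flag non-compliant dashboards for rename or deletion instead of inspecting them by hand.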

12. Security Considerations

Identity and access model

  • Management Dashboard access is governed by OCI IAM.
  • Access to widget data depends on permissions to the underlying services:
    – Metrics: read metrics permissions in relevant compartments
    – Logs/analytics: permissions for logging/log groups/log analytics features you use
  • Use federation + MFA for human users where possible.

Encryption

  • Data at rest and in transit is typically handled by OCI’s platform controls for managed services.
  • Your focus should be:
    – Protecting sensitive logs and metrics
    – Avoiding exposure through overly broad IAM
    – Avoiding embedding sensitive values in dashboard text

Verify encryption specifics for each telemetry service (Monitoring, Logging, Logging Analytics) in official docs.

Network exposure

  • Dashboards are accessed through OCI Console over HTTPS.
  • Primary risk is not inbound network exposure, but:
    – Credential compromise
    – Over-permissioned users/groups
    – Sharing dashboards that reveal sensitive operational details

Secrets handling

  • Do not put secrets (API keys, tokens, passwords) in text widgets.
  • Do not paste sensitive payloads into dashboards for debugging; use secure secret stores and controlled access logs.

Audit/logging

  • Ensure you can access OCI Audit to track dashboard changes.
  • Consider alerting on:
    – Changes to key dashboards
    – Changes to IAM policies that affect dashboard or telemetry access

Compliance considerations

  • If dashboards display regulated data (even indirectly), ensure:
    – Proper compartment isolation
    – Strict IAM
    – Retention controls on logs
    – A review and approval process for saved searches and dashboards

Common security mistakes

  • Granting viewer groups “manage” instead of “read.”
  • Allowing dashboards to query logs that contain sensitive payloads without proper access controls.
  • Storing dashboards in a compartment that many users can manage.

Secure deployment recommendations

  • Create a dedicated compartment for shared dashboards.
  • Implement least privilege IAM and periodic access reviews.
  • Use defined tags to enforce ownership and accountability.
  • Monitor Audit for changes to dashboards and related IAM policies.

13. Limitations and Gotchas

Because Oracle Cloud features evolve, validate specifics in your region/tenancy. Common practical limitations and “gotchas” include:

  • Data access is separate from dashboard access: Users may open a dashboard but still see authorization errors in widgets.
  • Region context matters: Metrics/logs are usually region-scoped; ensure the console region matches the data location.
  • Metric availability varies by service and configuration: Some compute metrics may require agent/plugin enablement.
  • Service limits: Maximum dashboards, widgets, saved searches, and query sizes can be limited. Check Service Limits.
  • High-cardinality telemetry impacts cost and usability: Too many dimension combinations make charts noisy and expensive.
  • Cross-compartment dashboards require deliberate IAM: Otherwise, dashboards become fragmented.
  • Dashboards are not alerting: They are observational; pair with alarms/notifications.
  • Export/import (if available) may not be perfect: IDs/OCIDs differ between compartments and environments; you may need to remap resource identifiers.
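The export/import gotcha above usually comes down to remapping identifiers. A sketch of that remapping step, using a hypothetical exported-dashboard JSON shape:

```python
# Remap OCIDs in an exported dashboard definition before importing it into a
# different compartment/tenancy. The JSON shape is hypothetical, and the simple
# string replacement assumes no OCID in the map is a prefix of another.
import json

def remap_ocids(dashboard_json, ocid_map):
    """Return a copy of the definition with every known OCID replaced."""
    text = json.dumps(dashboard_json)
    for source, target in ocid_map.items():
        text = text.replace(source, target)
    return json.loads(text)

exported = {"widgets": [{"compartmentId": "ocid1.compartment.oc1..aaa",
                         "dimensions": {"resourceId": "ocid1.instance.oc1..bbb"}}]}
mapping = {"ocid1.compartment.oc1..aaa": "ocid1.compartment.oc1..ccc",
           "ocid1.instance.oc1..bbb": "ocid1.instance.oc1..ddd"}
print(remap_ocids(exported, mapping)["widgets"][0]["compartmentId"])
```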

14. Comparison with Alternatives

Nearest services in Oracle Cloud (same cloud)

  • OCI Monitoring (Metrics Explorer): Great for ad-hoc metric exploration, but not a shared, curated multi-widget board.
  • Service-specific dashboards (APM, Logging Analytics, Database Management): Deep service insights, but may not unify across multiple telemetry sources.
  • Third-party visualization (Grafana, etc.) on OCI: More flexible and multi-cloud, but you manage integration and access control.

Nearest services in other clouds

  • AWS: CloudWatch Dashboards (and Managed Grafana)
  • Azure: Azure Monitor Workbooks / Dashboards
  • Google Cloud: Cloud Monitoring Dashboards

Open-source / self-managed alternatives

  • Grafana (visualization; multi-source; strong ecosystem)
  • Kibana/OpenSearch Dashboards (log-centric)
  • Prometheus + Grafana (metrics pipeline; you manage it)

Comparison table

  • Oracle Cloud Management Dashboard
    – Best for: OCI-native, shared operational dashboards
    – Strengths: IAM/compartment alignment, native console experience, low operational overhead
    – Weaknesses: data-source scope depends on enabled OCI services; less extensible than open ecosystems
    – When to choose: you operate mostly in OCI and want governed dashboards quickly
  • OCI Monitoring (Metrics Explorer)
    – Best for: ad-hoc metric exploration
    – Strengths: fast, direct metric analysis
    – Weaknesses: not a curated shared dashboard system
    – When to choose: you need quick investigation, not a long-lived shared view
  • OCI APM dashboards (if enabled)
    – Best for: app performance deep dives
    – Strengths: traces/APM-native views
    – Weaknesses: APM-specific; not always unified across infra
    – When to choose: you need application tracing and service-level insights
  • Grafana (self-managed or managed where available)
    – Best for: multi-cloud, plugin-based dashboards
    – Strengths: rich visualization, many data sources
    – Weaknesses: setup/maintenance, IAM integration complexity
    – When to choose: you need advanced visuals and multi-source correlation
  • Splunk/Datadog/New Relic
    – Best for: an enterprise observability platform
    – Strengths: powerful analytics/correlation, mature features
    – Weaknesses: higher cost, vendor lock-in, integration work
    – When to choose: you need enterprise-wide observability across many platforms

15. Real-World Example

Enterprise example: regulated financial services platform

Problem A financial services company runs multiple OCI workloads split by compartments (networking, shared services, app teams). On-call teams struggle with fragmented visibility across compute, load balancers, and application telemetry. Compliance requires strict change auditing.

Proposed architecture
  • OCI compartments per environment and per business unit
  • Central “Shared Observability” compartment containing curated Management Dashboard resources
  • IAM:
    – Platform ops group: manage dashboards
    – BU on-call groups: read dashboards + read metrics/logs for their compartments
  • Dashboards:
    – Executive scorecard (availability, latency, error rate, saturation)
    – On-call triage boards per service tier
  • Audit monitoring:
    – Review dashboard changes in OCI Audit
    – Alert on policy changes affecting observability access

Why Management Dashboard was chosen
  • Native alignment with OCI IAM and compartments
  • Centralized, shareable operational views without rolling out a new toolchain
  • Audit-friendly operational governance

Expected outcomes
  • Faster mean time to detect (MTTD) and mean time to resolve (MTTR)
  • Fewer access-related incidents (clear viewer vs editor separation)
  • Consistent operational reporting across teams

Startup / small-team example: SaaS on OCI with lean ops

Problem A startup has a small engineering team and cannot afford significant ops overhead. They need quick dashboards for the VM tier and load balancer signals, plus a simple executive view.

Proposed architecture
  • One compartment per environment: dev, prod
  • A Management Dashboard per environment:
    – Prod - Golden Signals
    – Prod - Infrastructure Health
  • Basic IAM:
    – Engineers can manage dashboards in their compartment
    – Founders have read-only access
  • Alarms for critical thresholds (CPU, 5xx errors), routed via Notifications

Why Management Dashboard was chosen
  • Minimal setup: works directly in the Oracle Cloud console
  • Sufficient for early-stage visibility without running Grafana/Prometheus

Expected outcomes
  • Rapid troubleshooting using shared views
  • Reduced need for manual metric checks during incidents
  • Clear operational posture for stakeholders

16. FAQ

1) Is Management Dashboard the same as OCI Monitoring Metrics Explorer?
No. Metrics Explorer is primarily for ad-hoc exploration. Management Dashboard is for building curated, reusable dashboards with multiple widgets and shareable access controls.

2) Do dashboards cost extra?
Often the dashboard feature itself is not billed separately, but the telemetry sources (custom metrics, logs ingestion/analytics, APM) may be billed. Verify in Oracle’s official pricing pages.

3) Can I restrict who can edit a dashboard vs who can view it?
Yes—use OCI IAM policies to separate manage vs read permissions for Management Dashboard resources.

4) Why can a user open the dashboard but see “Not authorized” inside widgets?
They likely have permission to read the dashboard but not permission to read the underlying metrics/logs. Grant the required permissions for the widget data sources.

5) Are dashboards regional?
In practice you build and view dashboards within a selected OCI region. Cross-region behavior depends on service design and data source capabilities. Verify in official docs.

6) Can dashboards query multiple compartments?
They can, as long as the user has permissions and the widget configuration supports selecting those compartments. Cross-compartment visibility requires careful IAM design.

7) Can I use Management Dashboard for security monitoring?
You can visualize security-relevant metrics/log patterns if your telemetry sources contain them and IAM is properly restricted. For full SIEM/SOC requirements, consider dedicated security tooling.

8) How do I standardize dashboards across dev/test/prod?
Use naming conventions, tagging, and (if supported) export/import or cloning. Otherwise, replicate dashboards manually and keep a “golden template.”

9) What’s the best way to prevent dashboard sprawl?
Require owner tags, enforce naming conventions, and run quarterly reviews to delete unused dashboards.

10) Can I embed runbooks and links in dashboards?
Yes, typically via text widgets. Avoid secrets; use internal docs systems for runbooks.

11) How do I troubleshoot “No data” in a metrics widget?
First confirm the metric exists in Monitoring → Metrics Explorer, then replicate the same namespace/metric/dimensions/time range in the widget.

12) Do dashboards provide alerting?
No. Use OCI Monitoring Alarms and route notifications. Dashboards complement alerting; they don’t replace it.

13) Can I share dashboards with external users?
Dashboards are secured by OCI IAM. External sharing typically requires federation/guest identities consistent with your security posture. Verify with your IAM administrators.

14) Can I automate dashboard creation?
Possibly, if Management Dashboard resources are exposed via APIs/SDK/Terraform in your environment. Verify official docs and provider support (OCI Terraform provider) for current capabilities.

15) What’s the safest first dashboard to build?
Start with non-sensitive infrastructure metrics (CPU, load balancer request counts, error rates) and ensure IAM is least-privilege before adding log-based widgets.

17. Top Online Resources to Learn Management Dashboard

Because Oracle documentation URLs can be reorganized, the most reliable approach is to start at OCI documentation and search within it for “Management Dashboard” / “Management Dashboards”.

  • Official documentation (entry point) – OCI Documentation home: https://docs.oracle.com/en-us/iaas/Content/home.htm – Canonical starting point; search for “Management Dashboard”
  • Official Observability docs – OCI Monitoring docs (entry point): https://docs.oracle.com/en-us/iaas/Content/monitoring/home.htm – Metrics, namespaces, alarms; core for dashboard widgets
  • Official Logging docs – OCI Logging docs (entry point): https://docs.oracle.com/en-us/iaas/Content/Logging/home.htm – Logs and log groups; helps when dashboards include log-based widgets
  • Official IAM docs – OCI IAM docs (entry point): https://docs.oracle.com/en-us/iaas/Content/Identity/home.htm – Policies, groups, compartments; required for secure dashboard access
  • Official Audit docs – OCI Audit docs (entry point): https://docs.oracle.com/en-us/iaas/Content/Audit/home.htm – Governance and change tracking for dashboards and IAM
  • Pricing – Oracle Cloud price list: https://www.oracle.com/cloud/price-list/ – Verify whether dashboards have a SKU and understand telemetry pricing
  • Cost estimation – Oracle Cost Estimator: https://www.oracle.com/cloud/costestimator/ – Model costs for logs/metrics/APM that back dashboards
  • Free tier – Oracle Cloud Free Tier: https://www.oracle.com/cloud/free/ – Identify Always Free resources for low-cost labs
  • Architecture guidance – OCI Architecture Center: https://docs.oracle.com/en/solutions/ – Reference architectures for observability patterns (search observability/monitoring)
  • Videos – Oracle Cloud Infrastructure YouTube: https://www.youtube.com/@OracleCloudInfrastructure – Official walkthroughs and demos (search “dashboards”, “observability”)
  • Tutorials/Labs – Oracle Cloud tutorials (entry point): https://docs.oracle.com/en/learn/ – Hands-on labs; search for monitoring/logging/dashboard topics
  • Terraform provider – OCI Terraform Provider docs: https://registry.terraform.io/providers/oracle/oci/latest/docs – Check whether dashboard resources are supported for automation (verify current coverage)

18. Training and Certification Providers

The providers below are listed as requested. Evaluate course outlines directly on their websites to confirm current Oracle Cloud, Observability and Management, and Management Dashboard coverage.

1) DevOpsSchool.com
Suitable audience: DevOps engineers, SREs, cloud engineers, beginners to advanced
Likely learning focus: DevOps tooling, cloud fundamentals, automation, operations practices (verify OCI-specific coverage)
Mode: Check website (online/corporate/self-paced/live)
Website: https://www.devopsschool.com/

2) ScmGalaxy.com
Suitable audience: SCM/DevOps learners, engineers building CI/CD and ops skills
Likely learning focus: DevOps, SCM, automation, release engineering (verify OCI observability modules)
Mode: Check website
Website: https://www.scmgalaxy.com/

3) CloudOpsNow.in
Suitable audience: Cloud operations practitioners, platform teams
Likely learning focus: CloudOps practices, operations, monitoring/management topics (verify OCI coverage)
Mode: Check website
Website: https://www.cloudopsnow.in/

4) SreSchool.com
Suitable audience: SREs, on-call engineers, reliability-focused teams
Likely learning focus: SRE practices, monitoring/alerting, incident management (verify OCI tooling coverage)
Mode: Check website
Website: https://www.sreschool.com/

5) AiOpsSchool.com
Suitable audience: Ops teams exploring AIOps and automation
Likely learning focus: AIOps concepts, automation, monitoring analytics (verify OCI integrations)
Mode: Check website
Website: https://www.aiopsschool.com/

19. Top Trainers

The sites below are provided as training resources/platforms as requested. Review current course pages and credentials directly.

1) RajeshKumar.xyz
Likely specialization: DevOps/cloud training and guidance (verify current OCI content)
Suitable audience: Beginners to intermediate practitioners
Website: https://rajeshkumar.xyz/

2) devopstrainer.in
Likely specialization: DevOps training and mentoring (verify OCI observability coverage)
Suitable audience: DevOps engineers, SREs, students
Website: https://www.devopstrainer.in/

3) devopsfreelancer.com
Likely specialization: Freelance DevOps services/training resources (verify offerings)
Suitable audience: Teams needing short-term coaching or implementation guidance
Website: https://www.devopsfreelancer.com/

4) devopssupport.in
Likely specialization: DevOps support/training services (verify OCI focus)
Suitable audience: Operations teams and DevOps practitioners
Website: https://www.devopssupport.in/

20. Top Consulting Companies

Listed neutrally as requested. Confirm current Oracle Cloud and Observability/Management offerings directly.

1) cotocus.com
Likely service area: Cloud/DevOps consulting and implementation (verify current portfolio)
Where they may help: Observability rollout, dashboard standards, IAM design for ops tooling
Consulting use case examples:
– Define golden signals dashboards for production
– Implement compartment/IAM model for ops visibility
– Create operational runbooks aligned with dashboards
Website: https://cotocus.com/

2) DevOpsSchool.com
Likely service area: DevOps consulting, training, and enablement (verify OCI services)
Where they may help: Platform engineering practices, CI/CD, monitoring strategy, operational readiness
Consulting use case examples:
– Observability program kickstart
– Dashboard and alerting standardization
– SRE operating model and incident process
Website: https://www.devopsschool.com/

3) DEVOPSCONSULTING.IN
Likely service area: DevOps consulting and support services (verify OCI specialization)
Where they may help: Deployment pipelines, monitoring/alerting integration, operational playbooks
Consulting use case examples:
– Set up production readiness dashboards
– Tune alerting thresholds and escalation
– Implement tagging/governance for observability assets
Website: https://www.devopsconsulting.in/

21. Career and Learning Roadmap

What to learn before Management Dashboard

  • OCI basics: regions, compartments, VCN fundamentals
  • IAM fundamentals: users, groups, policies, dynamic groups
  • Observability basics: metrics vs logs vs traces, golden signals
  • OCI Monitoring basics: namespaces, metrics explorer, alarms
  • OCI Logging basics: log groups, service logs, retention (if used)

What to learn after Management Dashboard

  • Alerting and incident response:
    – OCI Monitoring alarms + Notifications
    – On-call processes and runbooks
  • Advanced observability (as applicable in your tenancy):
    – Logging Analytics
    – APM and tracing
  • Infrastructure as Code:
    – Terraform modules for observability patterns (verify dashboard resource support)
  • SRE/FinOps collaboration:
    – Right-sizing, capacity planning, and cost governance

Job roles that use it

  • Cloud Engineer
  • DevOps Engineer
  • Site Reliability Engineer (SRE)
  • Platform Engineer
  • Cloud Operations / NOC Engineer
  • Solutions Architect (operational readiness)
  • Security Engineer (for security signal dashboards, where applicable)

Certification path (Oracle Cloud)

Oracle certification offerings change. Start here and verify current tracks:
  • Oracle Cloud training and certification landing page: https://education.oracle.com/ (navigate to OCI certifications)

A practical progression:
  1. OCI foundations (IAM, networking, core services)
  2. OCI observability basics (Monitoring, Logging)
  3. Build operational dashboards + alarms + incident process

Project ideas for practice

  • Build a “Golden Signals” dashboard for a sample service (latency, traffic, errors, saturation).
  • Create per-environment dashboards and enforce tagging + ownership.
  • Implement an on-call runbook and embed links into dashboards.
  • Create alarms for each critical widget and test end-to-end notification routing.

22. Glossary

  • OCI (Oracle Cloud Infrastructure): Oracle Cloud’s IaaS/PaaS platform and services.
  • Observability: The practice of understanding system behavior using telemetry (metrics, logs, traces).
  • Management Dashboard: Oracle Cloud service for building and sharing dashboards over operational data sources.
  • Widget: A dashboard component that visualizes a specific query/result (chart, table, text).
  • Metric: Numeric time-series data (e.g., CPU utilization).
  • Namespace: A logical grouping for metrics in OCI Monitoring.
  • Dimension: A key/value attribute on a metric time series (e.g., resourceId).
  • Compartment: OCI’s logical isolation boundary for organizing and controlling access to resources.
  • IAM Policy: A statement defining who can do what on which resources in OCI.
  • Least privilege: Security practice of granting only the minimum permissions required.
  • Audit (OCI Audit): Service that records API calls and management actions for governance.
  • Alarm: A rule that evaluates a metric and triggers notifications/actions when conditions are met.
  • Free tier / Always Free: Oracle Cloud offerings that provide limited resources at no cost (subject to current terms).
  • High cardinality: Too many unique label/dimension combinations, causing a large number of time series (cost and usability risk).
  • Golden signals: Common SRE indicators—latency, traffic, errors, saturation.

23. Summary

Oracle Cloud Management Dashboard (in Observability and Management) is the OCI-native way to create curated, shareable operational dashboards that visualize telemetry—most commonly OCI Monitoring metrics, and potentially log/analytics sources depending on what your tenancy has enabled.

It matters because it improves day-to-day operations: faster incident triage, consistent reporting, and a governed “single pane of glass” aligned with compartments and IAM. From a cost perspective, dashboards themselves are usually not the primary cost driver; the real spend comes from the metrics/logs/APM data you ingest and retain—so control telemetry volume and cardinality. From a security perspective, success depends on least-privilege IAM and making sure dashboard viewers have only the data access they need, with changes traceable through Audit.

Use Management Dashboard when you want OCI-native visibility with low operational overhead and strong governance. Next learning step: pair your dashboards with OCI Monitoring alarms + Notifications, then build a repeatable on-call runbook workflow around your “golden signals” dashboards.