Oracle Cloud Console Dashboards Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Observability and Management

Category

Observability and Management

1. Introduction

Oracle Cloud Console Dashboards are a built-in way to create, organize, and share visual dashboards directly in the Oracle Cloud Infrastructure (OCI) Console. They are most commonly used to visualize OCI Monitoring metrics (and, depending on the surrounding OCI services you use, to provide a “single pane of glass” for operational status and troubleshooting workflows).

Simple explanation: Console Dashboards let you put the charts you care about—CPU, latency, error rates, throughput, and other key metrics—onto one page so you can quickly understand whether systems are healthy.

Technical explanation: Console Dashboards are a Console-based dashboarding capability integrated with OCI’s Observability and Management services—especially Monitoring. You typically build dashboards by adding widgets that query metric time series data in a selected region and compartment scope, and then you secure access using OCI IAM policies. Dashboards support operational patterns such as per-environment views (dev/test/prod), per-team views (network, platform, app), and per-service views (compute, load balancer, database).

What problem it solves: In production, teams waste time switching between resource pages, manually searching for metrics, and re-running the same investigations. Console Dashboards reduce that friction by standardizing operational views and making common checks repeatable, shareable, and permission-controlled.

Naming note (verify in official docs): In OCI documentation and the Console UI, this capability is often labeled simply “Dashboards” under Monitoring/Observability. This tutorial uses “Console Dashboards” as the primary name (as requested) to clearly distinguish it from third-party dashboards or dashboards embedded in other OCI services.


2. What are Console Dashboards?

Official purpose (in OCI terms): Console Dashboards are used to visualize and monitor OCI telemetry—most commonly Monitoring metrics—through customizable dashboard pages inside the OCI Console.

Core capabilities

Console Dashboards typically enable you to:

  • Create dashboards and arrange widgets to visualize metrics over time.
  • Organize dashboards into dashboard groups (to manage dashboards at scale).
  • Control access with OCI IAM so teams can view or manage dashboards.
  • Use dashboards for day-to-day operations, incident response, and reporting.

Major components

While exact naming can vary slightly in the Console UI, the typical building blocks are:

  • Dashboard: A container page that holds widgets (charts and other elements).
  • Dashboard group: A logical grouping for dashboards (often by team, environment, or application).
  • Widget: A dashboard element, commonly a metric chart, and often a text/markdown-style widget for context (verify widget types in the current Console UI).

Service type

  • Type: Console feature integrated with OCI Observability and Management (most closely tied to Monitoring).
  • APIs/Automation: OCI generally exposes APIs and CLI for many Monitoring resources. If you plan to automate dashboards, confirm the latest OCI Monitoring API/CLI support for dashboards in official docs before implementing.
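If you do explore automation, a dry-run sketch of the kind of CLI calls involved is below. The `oci dashboard-service` command group and its `dashboard-group list` / `dashboard list` subcommands are an assumption about current CLI versions; confirm with `oci dashboard-service --help` before relying on them. The script only prints the commands, so it is safe to run anywhere.

```shell
#!/bin/sh
# Dry-run sketch: print the CLI calls that would enumerate dashboard
# groups and dashboards. The `oci dashboard-service` command group is an
# assumption about current CLI versions -- confirm with
# `oci dashboard-service --help` before relying on it.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"    # placeholder OCID

LIST_GROUPS="oci dashboard-service dashboard-group list --compartment-id $COMPARTMENT_OCID"
LIST_DASHBOARDS="oci dashboard-service dashboard list --dashboard-group-id ocid1.dashboardgroup.oc1..example"

echo "$LIST_GROUPS"      # pipe to sh to execute with a configured CLI
echo "$LIST_DASHBOARDS"
```

Keeping the commands as printed strings first is a cheap way to review what automation would do before pointing it at a real tenancy.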

Scope and locality (how “big” the service is)

  • Compartment-scoped: Dashboards and groups are typically created within a compartment and governed by compartment IAM policies.
  • Region context: OCI Observability workflows are region-aware in the Console. In practice, you usually create/view dashboards in a selected region and query metrics available in that region. Cross-region operations are commonly handled by switching regions or designing a strategy (for example, per-region dashboards). Verify current cross-region behavior in the official documentation for your OCI setup.
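A per-region dashboard strategy can be sanity-checked from the CLI by issuing the same metric query against each subscribed region with `--region`. This is a sketch: `oci monitoring metric-data summarize-metrics-data` is the standard Monitoring query command, the region names are real OCI identifiers, and the compartment OCID is a placeholder; the script collects and prints the commands rather than executing them.

```shell
#!/bin/sh
# Per-region query sketch for a "per-region dashboards" strategy: the
# same MQL query targeted at each region via `--region`. Dry run -- the
# commands are collected into CMDS and printed, not executed.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"   # placeholder OCID

CMDS=""
for REGION in us-ashburn-1 us-phoenix-1 eu-frankfurt-1; do
  CMDS="$CMDS
oci monitoring metric-data summarize-metrics-data --region $REGION \
  --compartment-id $COMPARTMENT_OCID --namespace oci_computeagent \
  --query-text 'CpuUtilization[5m].mean()'"
done
printf '%s\n' "$CMDS"
```

Adjust the region list to the regions your tenancy is actually subscribed to.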

How it fits into the Oracle Cloud ecosystem

Console Dashboards commonly sit at the center of an OCI operations toolchain:

  • Monitoring provides metric collection, queries, alarms, and notifications.
  • Logging provides logs for diagnostics; dashboards often complement logs by revealing when/where to look.
  • Notifications delivers alarm messages to email, PagerDuty-like workflows, or chat systems.
  • Events and OS Management/Compute can trigger operational responses, while dashboards provide visibility into outcomes.

3. Why use Console Dashboards?

Business reasons

  • Faster incident resolution: Pre-built dashboards reduce mean time to detect (MTTD) and mean time to resolve (MTTR).
  • Shared operational language: Dashboards align teams on agreed SLIs/SLO indicators (latency, error rate, saturation).
  • Lower operational overhead: Less time spent building ad-hoc charts or navigating multiple screens.

Technical reasons

  • Metric-centric visibility: Metrics are ideal for trend detection, anomaly awareness, and capacity planning.
  • Standardized views: You can codify “what good looks like” and “what to check first.”
  • Better troubleshooting flow: Dashboards can be organized to mirror system architecture and dependencies.

Operational reasons

  • On-call readiness: Dashboards are useful during incidents: they can show blast radius, whether mitigation worked, and which dependency is failing.
  • Environment separation: Separate dashboards per environment (dev/test/prod) reduce confusion and mistakes.
  • Runbook integration: Text widgets (if available in your UI) can link to runbooks, tickets, and escalation procedures.

Security/compliance reasons

  • Least privilege: OCI IAM can restrict who can view vs. edit dashboards.
  • Auditability: OCI Audit can track changes to relevant resources (verify which dashboard actions are captured in Audit for your tenancy).
  • Reduced data sprawl: Teams can rely on OCI-native dashboards rather than exporting operational data broadly.

Scalability/performance reasons

  • Designed for OCI telemetry: Console Dashboards are optimized for OCI Monitoring’s metric model and dimensions.
  • Operational scale: Dashboard groups and compartment organization help maintain structure as the number of services and teams grows.

When teams should choose Console Dashboards

Choose Console Dashboards when:

  • Your workloads run primarily on Oracle Cloud and you want native dashboards tightly integrated with OCI Monitoring.
  • You want quick time-to-value without deploying third-party visualization stacks.
  • You need dashboards that respect OCI IAM and compartment boundaries.

When teams should not choose Console Dashboards

Consider alternatives when:

  • You require advanced visualization (complex transformations, custom panels, extensive plugin ecosystems).
  • You need single dashboards across multiple cloud providers with consistent UI and query language.
  • You require extensive dashboard-as-code pipelines and want mature open-source tooling (for example, Grafana with GitOps), though OCI can still integrate with those.


4. Where are Console Dashboards used?

Industries

Common across industries running production systems on OCI, such as:

  • Financial services (availability and latency monitoring)
  • SaaS and ISVs (multi-tenant service health)
  • Retail and e-commerce (traffic spikes, checkout performance)
  • Telecommunications (network and edge-related metrics)
  • Healthcare (system uptime and compliance-driven reporting)

Team types

  • SRE and platform engineering teams
  • Cloud infrastructure and operations teams
  • DevOps teams
  • Application engineering teams (especially with ownership of operational KPIs)
  • NOC/SOC teams (as part of broader monitoring and incident response)

Workloads

  • Web applications and APIs on Compute or OKE (Kubernetes)
  • Databases (Oracle Database services and self-managed DBs)
  • Networking-heavy architectures (Load Balancers, gateways)
  • Batch and integration workloads (functions, scheduled jobs)
  • Hybrid deployments where OCI is one part of the estate

Architectures

  • Single-region production
  • Active/standby multi-region
  • Multi-compartment landing zones (separated by environment or business unit)
  • Shared services models (central observability with delegated access)

Real-world deployment contexts

  • Production: Primary usage—operational dashboards for availability, latency, capacity.
  • Dev/test: Useful for validating new releases, load tests, cost/usage experiments, and comparing baselines.

5. Top Use Cases and Scenarios

Below are realistic scenarios where Console Dashboards add concrete value.

1) Production “Service Health” overview

  • Problem: On-call needs a fast health snapshot across critical components.
  • Why Console Dashboards fits: Consolidates key metrics into one view with consistent time ranges.
  • Example: A dashboard shows API latency, HTTP 5xx rate, load balancer backend health signals, and compute CPU.

2) Environment comparison (dev vs. prod)

  • Problem: Teams need to compare performance behavior across environments to spot regressions.
  • Why it fits: You can create separate dashboards per compartment/environment with similar widget layout.
  • Example: A “Prod API” and “Dev API” dashboard share the same charts but pull from different compartments.

3) Capacity planning and saturation tracking

  • Problem: Engineers need to anticipate scaling needs before customers are impacted.
  • Why it fits: Dashboards show trends and peaks over weeks/months (subject to retention/availability of metrics).
  • Example: Weekly CPU peak and network throughput charts guide right-sizing decisions.

4) Incident response “war room” dashboard

  • Problem: During an outage, too many people check too many different pages.
  • Why it fits: A prepared incident dashboard focuses attention on the most telling signals.
  • Example: Widgets track error rate, dependency latency, and a “known mitigations” text widget links to the runbook.

5) Change validation after deployment

  • Problem: After a release, teams must confirm stability quickly.
  • Why it fits: Dashboards show immediate before/after behavior around deploy time.
  • Example: Compare request latency and error rates 30 minutes before and after deployment.

6) Network troubleshooting and performance checks

  • Problem: Network issues present as intermittent timeouts and slow responses.
  • Why it fits: Dashboards can group network and application metrics to correlate symptoms.
  • Example: A dashboard shows NAT gateway connections, load balancer metrics, and API latency side by side (verify available metrics for your resources).

7) Compartment-level operational reporting

  • Problem: Platform teams need a view per team/compartment without giving broad tenancy access.
  • Why it fits: Dashboards are compartment-scoped and controlled by IAM.
  • Example: Each product team owns its compartment dashboards; central SRE has read-only access.

8) SLA/SLO indicator tracking

  • Problem: Management wants high-level service KPIs; engineers want drill-down.
  • Why it fits: Dashboards can combine SLI-like widgets (availability, latency percentiles—if supported by the metric) with detailed charts.
  • Example: A top row shows availability and error rate; lower rows show per-endpoint or per-AZ breakdown.

9) Post-incident review and learning

  • Problem: After incidents, teams need evidence and timelines.
  • Why it fits: Dashboards provide consistent historical views (within retention limits).
  • Example: Exported screenshots/data (manual) support the incident timeline discussion (verify export options in your console).

10) Executive or stakeholder visibility (controlled)

  • Problem: Stakeholders want visibility without operational privileges.
  • Why it fits: IAM can provide read-only dashboard access.
  • Example: A “Business Ops” dashboard shows high-level throughput and availability, without exposing sensitive resource management capabilities.

11) Migration tracking (on-prem to OCI)

  • Problem: During migration waves, teams must ensure OCI performance matches expectations.
  • Why it fits: Dashboards compare old vs new system behavior (if metrics are available).
  • Example: A migration dashboard shows OCI compute and database metrics during cutover windows.

12) Cost-risk early signals (indirect)

  • Problem: Sudden traffic spikes can create cost spikes.
  • Why it fits: While dashboards are not a billing tool, they can show usage signals that correlate with cost.
  • Example: A “Traffic & Egress Risk” dashboard tracks outbound bytes and request rates as an early warning.

6. Core Features

The exact feature set can evolve; always confirm in the latest OCI Console and docs. The following are the core, commonly available capabilities associated with Console Dashboards in Oracle Cloud Observability and Management.

6.1 Dashboard creation and editing in the Console

  • What it does: Lets you create a dashboard and configure layout and widgets.
  • Why it matters: Provides quick, no-code operational views for teams.
  • Practical benefit: Faster rollout of standardized dashboards without additional tooling.
  • Caveats: Editing rights must be carefully controlled; accidental changes can disrupt on-call workflows.

6.2 Dashboard groups for organization

  • What it does: Organizes dashboards into logical groups.
  • Why it matters: As dashboards scale, you need governance and discoverability.
  • Practical benefit: Clear separation by team, environment, and application.
  • Caveats: Governance is only as good as your naming/tagging and compartment strategy.

6.3 Metric visualization widgets (Monitoring integration)

  • What it does: Shows time-series charts based on OCI Monitoring metrics.
  • Why it matters: Metrics are the fastest signal for “something changed.”
  • Practical benefit: You can watch KPIs like latency, CPU, errors, and throughput in real time.
  • Caveats: You only see data that exists in Monitoring. For OS-level metrics, you may need the OCI agent/compute monitoring features enabled (verify per OS/image).

6.4 Compartment-aware scoping

  • What it does: Dashboards and metrics access are governed by compartments.
  • Why it matters: Compartments are OCI’s primary isolation and access boundary.
  • Practical benefit: Teams can own dashboards within their compartment without seeing others.
  • Caveats: Cross-compartment views require IAM policies that allow reading metrics across compartments—easy to over-permission if not designed carefully.

6.5 Time range and query controls

  • What it does: View charts over selected time windows and (in many cases) adjust aggregation.
  • Why it matters: Different tasks need different windows—incident response vs. trend analysis.
  • Practical benefit: One dashboard can be useful for real-time and retrospective analysis.
  • Caveats: Very large windows may be limited by retention or query constraints; verify Monitoring limits.

6.6 Sharing via IAM (no “public link” by default)

  • What it does: Access is granted to users/groups by OCI IAM policy.
  • Why it matters: Prevents sensitive operational data from being shared publicly.
  • Practical benefit: Enables least-privilege, audit-ready access patterns.
  • Caveats: This is not the same as publishing external dashboards to customers; use a dedicated portal/tool for that.

6.7 Tagging support (governance)

  • What it does: Many OCI resources support tags; dashboards may support defined/freeform tags (verify in your tenancy).
  • Why it matters: Tags help cost allocation, ownership, and lifecycle management.
  • Practical benefit: You can identify which dashboards belong to which application or cost center.
  • Caveats: Tag compliance requires governance; otherwise tags become inconsistent.

6.8 Integration into operational workflows

  • What it does: Dashboards complement alarms, notifications, and incident response.
  • Why it matters: Alerts tell you something is wrong; dashboards help you see what and where.
  • Practical benefit: Better operator experience and faster diagnosis.
  • Caveats: Dashboards do not replace alerting; they must be paired with alarms and runbooks.

7. Architecture and How It Works

High-level architecture

At a high level, Console Dashboards are a UI layer in the OCI Console that queries metrics from OCI Monitoring (and sometimes correlates with other services operationally, even if not directly embedded).

  • Control plane: You create/modify dashboards and groups in the Console (or via API/CLI if supported).
  • Data plane: Widgets request metric time series data from Monitoring for selected namespaces/dimensions.
  • Security: Requests are authorized by OCI IAM policies tied to the authenticated user (or federated identity).

Request/data/control flow (typical)

  1. User signs into OCI Console (native IAM user or federated SSO).
  2. User navigates to Observability & Management → Monitoring → Dashboards (label may vary).
  3. The Console requests dashboard definitions (metadata) from OCI services.
  4. For each widget, the Console queries Monitoring for metric data based on the widget’s configuration.
  5. Monitoring returns time series data; Console renders charts.
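Step 4 of this flow can be made concrete with the CLI: the query a widget issues corresponds to `oci monitoring metric-data summarize-metrics-data`, which is the standard Monitoring query command. The OCIDs and timestamps below are placeholders, and the script prints the command as a dry run rather than executing it.

```shell
#!/bin/sh
# Sketch of the metric query a widget issues, expressed with the CLI.
# Dry run: the command is built into CMD and printed, not executed.
QUERY='CpuUtilization[1m].mean()'   # MQL: mean CPU at 1-minute resolution

CMD="oci monitoring metric-data summarize-metrics-data \
  --compartment-id ocid1.compartment.oc1..example \
  --namespace oci_computeagent \
  --query-text \"$QUERY\" \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-01T01:00:00Z"
echo "$CMD"
```

The MQL form `metric[interval].statistic()` is the same query shape the Console builds for you when you configure a widget.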

Integrations with related services

Console Dashboards most commonly integrate with:

  • Monitoring: Metrics, alarms, metric queries.
  • Notifications: Alarm destinations and operational workflows.
  • Logging: Operators pivot from dashboard anomalies to logs (workflow-level integration; exact embedded links vary).
  • Compute / OKE / Networking / Database services: Provide the resource metrics you chart.
  • Audit: Tracks changes to relevant resources (verify coverage for dashboards).

Dependency services

  • OCI IAM for authentication and authorization
  • Monitoring service for metric data and alarm definitions
  • The OCI Console front-end

Security/authentication model

  • Authentication: OCI IAM user/password + MFA, API signing keys, or federated identity providers (IdP).
  • Authorization: IAM policies control actions such as:
    – viewing dashboards
    – managing dashboards
    – reading metrics
    – managing alarms (if you build alerting alongside dashboards)

Networking model

  • From a user perspective, Console Dashboards are accessed via the OCI Console over HTTPS.
  • From a service perspective, widget queries are internal service-to-service calls, authorized in the context of the user session.
  • There is no requirement to place dashboards inside a VCN; access control is IAM-based. If your organization restricts Console access, you may use corporate network controls, OCI IAM, and federation policies.

Monitoring/logging/governance considerations

  • Monitoring retention and limits affect how far back charts can go and how much granularity you can view.
  • Compartment strategy determines dashboard ownership and access patterns.
  • Tagging and naming conventions reduce sprawl.

Simple architecture diagram (Mermaid)

flowchart LR
  U[Operator / Engineer] -->|HTTPS| C[OCI Console]
  C -->|List dashboards| D[Console Dashboards<br/>Metadata]
  C -->|Query metric time series| M[OCI Monitoring<br/>Metrics]
  M --> C
  C --> U

Production-style architecture diagram (Mermaid)

flowchart TB
  subgraph Identity
    IDP[Enterprise IdP / Federation]
    IAM[OCI IAM]
    IDP --> IAM
  end

  subgraph Observability_Region_A[OCI Region A]
    MON[Monitoring<br/>Metrics & Alarms]
    DASH[Console Dashboards<br/>Dashboard Groups & Dashboards]
    NOTIF[Notifications]
    AUD[Audit]
  end

  subgraph Workloads[Workloads]
    APP[Apps on Compute / OKE]
    NET[Load Balancer / Network]
    DB[Database Services]
  end

  USER[On-call / SRE] -->|Sign-in| IDP
  USER -->|Console session| IAM
  USER -->|View dashboards| DASH
  DASH -->|Metric queries| MON
  APP -->|Emits metrics| MON
  NET -->|Emits metrics| MON
  DB -->|Emits metrics| MON
  MON -->|Alarm triggers| NOTIF
  DASH -->|Config changes| AUD
  MON -->|Alarm changes| AUD

8. Prerequisites

Tenancy/account requirements

  • An active Oracle Cloud (OCI) tenancy with access to Observability and Management services.
  • Access to at least one compartment where you can create dashboards (or a sandbox compartment).

Permissions / IAM policies

You typically need permissions for:

  • Managing dashboards (create/update/delete dashboard groups and dashboards)
  • Reading metrics (to populate widgets)
  • (Optional) Managing alarms and notifications if you build alerting in the same lab

OCI IAM policy syntax can vary by resource type. Common patterns include:

  • Read metrics:
    allow group <group-name> to read metrics in compartment <compartment-name>
  • Manage dashboards:
    allow group <group-name> to manage dashboards in compartment <compartment-name>

Verify exact policy verbs and resource-types in the official OCI IAM policy reference for Monitoring and Dashboards:
https://docs.oracle.com/en-us/iaas/Content/Identity/policyreference/policyreference.htm

Billing requirements

  • Console Dashboards themselves are typically not billed as a standalone line item; costs usually come from underlying services (Monitoring custom metrics, Logging ingestion/storage, notifications, etc.). Still, you need a tenancy with billing enabled to create many OCI resources.
  • If you create a Compute instance for the lab, ensure you understand Always Free eligibility (if applicable) and region constraints.

CLI/SDK/tools needed (optional)

  • OCI Console (required)
  • Optional: OCI CLI for verification commands
  • Install docs: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm

Region availability

  • Observability and Management services are broadly available across OCI commercial regions, but feature availability can vary.
  • Verify service availability in your region:
    OCI Services by Region: https://www.oracle.com/cloud/data-regions/

Quotas/limits

Expect limits around:

  • Number of dashboards/groups per compartment/tenancy
  • Number of widgets per dashboard
  • Metric query limits and resolution/retention constraints

Exact limits can change. Verify current limits in official docs for Monitoring and Dashboards.

Prerequisite services/resources

To make the tutorial meaningful, you need at least one resource producing metrics, such as:

  • A Compute instance
  • A Load Balancer
  • A database service
  • Any OCI service with Monitoring metrics


9. Pricing / Cost

Pricing model (accurate framing)

In OCI, Console Dashboards are primarily a visualization capability. In many cases, there is no direct additional charge specifically for creating dashboards in the Console. The costs you should evaluate are driven by the telemetry sources and underlying observability services:

  • Monitoring metrics
    – Metrics emitted by OCI services are generally included; you are not billed per-metric for them the way custom metric ingestion can be billed (details vary).
    – Custom metrics and higher-volume metric ingestion may incur charges depending on the OCI Monitoring pricing model.
  • Logging
    – Log ingestion, storage/retention, and analysis features can incur cost.
  • Notifications
    – Message delivery can incur cost depending on destinations and volumes.
  • Compute/OKE/Database
    – The resources you monitor generate the metrics; those services have their own charges.

Because OCI pricing is region-specific and can change, rely on official sources:

  • Official pricing pages (Observability & Management section):
    https://www.oracle.com/cloud/price-list/
  • OCI Cost Estimator:
    https://www.oracle.com/cloud/costestimator.html

If you need exact, up-to-date pricing for Monitoring custom metrics, Logging ingestion, or retention, verify in official docs and the price list for your region.

Pricing dimensions to watch

Even if dashboards are “free,” these dimensions can drive your bill:

  1. Custom metrics ingestion volume
  2. Metric retention / query usage constraints (not always priced directly, but can influence design)
  3. Logging ingestion and storage
  4. Notification deliveries
  5. Data transfer (network egress) – Viewing dashboards in the Console doesn’t typically incur egress in the same way as exporting large datasets, but exporting logs/metrics or integrating external tools might.
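Custom metric ingestion volume (dimension 1) is easy to size with back-of-envelope arithmetic. The sketch below multiplies instances, metrics per instance, and posting frequency into a monthly datapoint count; all the numbers are illustrative assumptions, not Oracle prices or quotas, so take real unit prices from the official price list for your region.

```shell
#!/bin/sh
# Back-of-envelope sizing for custom metric ingestion. Every value here
# is a made-up illustration, NOT an Oracle price or limit.
INSTANCES=20
METRICS_PER_INSTANCE=10      # custom metrics emitted per instance
POSTS_PER_HOUR=60            # one datapoint per metric per minute
HOURS_PER_MONTH=730

DATAPOINTS=$((INSTANCES * METRICS_PER_INSTANCE * POSTS_PER_HOUR * HOURS_PER_MONTH))
echo "Estimated custom-metric datapoints/month: $DATAPOINTS"
```

Running this kind of estimate before instrumenting an app is the cheapest way to notice that per-minute emission across a fleet adds up to millions of datapoints per month.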

Free tier considerations

OCI has an Always Free tier and free usage allotments for some services. Whether Monitoring/Logging has free tiers or included quotas depends on current OCI offers. Verify current free tier details here: https://www.oracle.com/cloud/free/

Hidden or indirect costs

  • Creating resources “just for dashboards” (extra load balancers, extra compute) costs money.
  • Overly verbose logging increases logging cost; dashboards might encourage collecting more data than necessary.
  • High-cardinality custom metrics can become expensive and harder to query.

Cost optimization tips

  • Prefer built-in OCI service metrics where they meet your needs.
  • Be deliberate about custom metrics: keep cardinality low (avoid dimensions like requestId/userId).
  • Use dashboards to focus on a few meaningful KPIs rather than charting everything.
  • Align log retention to compliance needs (not “keep forever” by default).
  • Standardize dashboards and reuse across environments rather than creating many redundant dashboards.

Example low-cost starter estimate (conceptual)

A low-cost start can be:

  • Use an existing Always Free eligible Compute instance (if available)
  • Create a few dashboards with a handful of widgets querying built-in metrics
  • Minimal/no custom metrics
  • Minimal logging ingestion

In this scenario, dashboard cost is usually negligible, but your compute instance (if not free) and any logging/custom metrics are the real cost drivers.

Example production cost considerations

For production, budget and review:

  • Custom metrics volume (if you instrument apps heavily)
  • Log ingestion and retention (often the largest observability cost)
  • Third-party integrations (data export, egress)
  • Alarm volume and notification delivery costs
  • Operational overhead: governance, access review, and compliance audits


10. Step-by-Step Hands-On Tutorial

Objective

Create a practical operations dashboard in Oracle Cloud Console Dashboards that visualizes key metrics for a real OCI resource (a Compute instance), and validate that you can view meaningful charts for troubleshooting.

Lab Overview

You will:

  1. Prepare a compartment and IAM access (or confirm you already have it).
  2. Create (or reuse) a Compute instance that emits Monitoring metrics.
  3. Explore metrics to identify the right signals.
  4. Create a dashboard group and dashboard in Console Dashboards.
  5. Add metric widgets to visualize CPU and network-related metrics.
  6. Validate the dashboard shows data and learn common fixes.
  7. Clean up resources to avoid cost.

This lab is designed to be beginner-friendly and low-cost. If you already have a Compute instance in a sandbox compartment, reuse it.


Step 1: Confirm region and compartment strategy

  1. Sign in to the Oracle Cloud Console.
  2. In the top region selector, choose the region where your resources exist (or where you will create them).
  3. Choose or create a compartment for the lab, for example sandbox-observability or dev-ops-lab.

Expected outcome: You know which region and compartment you will use consistently for the rest of the tutorial.

Verification: When you open the compartment selector in the Console, you can see your chosen compartment and select it.


Step 2: Ensure you have the right IAM permissions

If you are a tenancy admin, you can proceed. If not, ask an admin to grant permissions.

At a minimum, you want permissions to:

  • read metrics
  • manage dashboards (in the chosen compartment)
  • (optional) manage a compute instance (if you need to create one)

Example IAM policies (verify in the official IAM policy reference):

  • allow group <group-name> to read metrics in compartment <compartment-name>
  • allow group <group-name> to manage dashboards in compartment <compartment-name>
  • (optional) allow group <group-name> to manage instance-family in compartment <compartment-name>

Expected outcome: Your user can access Monitoring metrics and create dashboards.

Verification:

  • Navigate to Observability & Management → Monitoring.
  • If you can open Monitoring pages without authorization errors, you likely have at least read access.

Common issue: “Not authorized” errors usually mean you are in the wrong compartment/region or missing IAM policy permissions.


Step 3: Create or reuse a Compute instance (metric source)

If you already have an instance generating metrics, you can skip to Step 4.

  1. Go to Compute → Instances.
  2. Select your lab compartment.
  3. Click Create instance.
  4. Choose an image and shape appropriate for your account. For low cost, choose an Always Free eligible option if your tenancy supports it (availability varies by region and account limits; verify on OCI Free Tier pages).
  5. Create the instance and wait until it becomes Running.

Expected outcome: You have a running instance with an OCID and a resource page.

Verification:

  • The instance shows Running.
  • Note the instance name and OCID (helpful for later).

Tip (optional): Generate some CPU activity so charts are more obvious.

  • SSH to the instance and run a short CPU load test (only if you understand and accept the impact).
  • On many Linux distros you can install a stress tool, but package names and availability differ.
  • Keep any load test short and safe; avoid impacting other workloads.
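If you do generate load, a minimal bounded approach that needs no extra packages is below. It assumes coreutils `timeout` and `yes` are present on the instance (true on most Linux images); `yes > /dev/null` pegs one core and `timeout` bounds the duration, exiting with status 124 when it stops the command.

```shell
#!/bin/sh
# Bounded CPU burn so the CPU widget visibly moves. Run only on a lab
# instance you own; `timeout` stops the load after the given seconds.
cpu_burn() {
  # $1 = seconds of 100% load on one core
  timeout "$1" yes > /dev/null
}

# Example (uncomment on the lab instance):
# cpu_burn 60
```

Sixty seconds is usually enough to show up on a 1-hour chart once the normal metric collection delay passes.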


Step 4: Confirm metrics exist for your resource (Metrics Explorer)

Before building dashboards, confirm you can see relevant metrics.

  1. Go to Observability & Management → Monitoring.
  2. Open Metrics Explorer (label can vary slightly).
  3. Select:
     – Compartment: your lab compartment
     – Namespace: the namespace relevant to Compute/instances (the Console typically lists namespaces; choose one that clearly maps to Compute metrics)
  4. Browse available metrics and dimensions.
  5. Select a metric you expect to exist for instances (for example, a CPU-related metric) and set a time range (for example, “Last 1 hour”).

Expected outcome: You see a metric chart with data points.

Verification checklist:

  • The chart renders without errors.
  • You can adjust the time range and see the chart update.
  • If you have multiple instances, you can filter/group by dimensions (resource name/OCID), depending on the metric’s dimensions.

Common issues and fixes:

  • No data: switch to the correct region; increase the time range; confirm the instance exists in the selected compartment.
  • Wrong namespace: try another namespace in the list that maps to the service you’re monitoring.
  • Missing OS-level metrics: some metrics require an agent or plugin; verify your instance image includes the OCI agent and that monitoring is enabled for the relevant metric set.
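The same discovery can be done from the CLI. `oci monitoring metric list` is the standard metric-discovery command, and `oci_computeagent` is the namespace where Compute agent metrics such as CpuUtilization live; the compartment OCID below is a placeholder, and the script prints the command as a dry run.

```shell
#!/bin/sh
# CLI counterpart to Metrics Explorer: list metric definitions in a
# compartment for the Compute agent namespace. Dry run -- the command
# is built into CMD and printed, not executed.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"   # placeholder OCID

CMD="oci monitoring metric list --compartment-id $COMPARTMENT_OCID --namespace oci_computeagent"
echo "$CMD"
```

If this returns an empty list against a real compartment, the usual causes are the same as in the Console: wrong region, wrong compartment, or the agent not emitting yet.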


Step 5: Create a dashboard group in Console Dashboards

  1. Go to Observability & Management → Monitoring → Dashboards (or “Dashboards” under Observability).
  2. Ensure the compartment selector is set to your lab compartment.
  3. Create a Dashboard Group.
  4. Name it using a clear convention, for example:
     – ops-sandbox
     – team-platform-prod
     – app-payments-prod

Expected outcome: A dashboard group exists and is visible in the Dashboards list.

Verification: You can open the dashboard group and see an empty list ready for dashboards.


Step 6: Create a dashboard and add a CPU widget

  1. Inside your dashboard group, click Create dashboard.
  2. Name the dashboard: – Compute Instance - Basic Ops
  3. Add a widget (often “Add widget”).
  4. Choose a metric chart widget (exact wording may vary).
  5. Configure the widget:
     – Compartment: your lab compartment
     – Metric namespace: the one you validated in Metrics Explorer
     – Metric: choose a CPU-related metric available for your instance
     – Filter/dimensions: select the specific instance if possible
     – Time range: start with “Last 1 hour”
  6. Save the widget and dashboard.

Expected outcome: The dashboard displays a CPU chart for your selected instance.

Verification: – The widget shows a line chart with recent data. – If you generate CPU activity on the instance, you should see changes reflected in the chart (with normal metric collection delay).

Practical tip: Rename the widget title to something operator-friendly: – CPU Utilization (instance01) – CPU (mean, 1h)
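The widget configuration from this step can be captured as plain data, which is useful for documenting what each widget queries. A hedged sketch — the dictionary keys are illustrative, not the Console’s actual schema; only the namespace and metric names come from the lab:

```python
def widget_title(signal, aggregation, window):
    """Format an operator-friendly widget title like 'CPU Utilization (mean, 1h)'."""
    return f"{signal} ({aggregation}, {window})"

# Hypothetical widget spec mirroring the configuration in Step 6.
widget = {
    "title": widget_title("CPU Utilization", "mean", "1h"),
    "compartment": "lab-compartment",          # your lab compartment
    "namespace": "oci_computeagent",           # validated earlier in Metrics Explorer
    "metric": "CpuUtilization",
    "dimensions": {"resourceDisplayName": "instance01"},
    "time_range": "last_1h",
}
print(widget["title"])  # CPU Utilization (mean, 1h)
```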


Step 7: Add network or disk-related widgets (second signal)

Repeat the widget process for another metric category that helps operational troubleshooting, such as:

  • Network throughput (ingress/egress)
  • Disk I/O (if available)
  • Memory utilization (often requires agent support; verify availability)

Expected outcome: The dashboard now has at least two widgets representing different resource saturation signals.

Verification: – Both widgets show data for the same time range and same instance context (as applicable).

Operator tip: Layout matters. A common layout is: – Top row: “symptoms” (latency/errors if app metrics exist) – Second row: “saturation” (CPU/memory/disk/network) – Third row: “dependency” (load balancer, database metrics)
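The symptoms → saturation → dependency layout above can be expressed as data so every dashboard in a group follows the same ordering. A small sketch (row names come from the tip; the signal names are illustrative):

```python
# Row order mirrors the operator tip: symptoms first, then saturation, then dependencies.
LAYOUT = [
    ("symptoms",   ["latency", "error_rate"]),
    ("saturation", ["cpu", "memory", "disk", "network"]),
    ("dependency", ["load_balancer", "database"]),
]

def row_of(signal):
    """Return which row a signal belongs to, so new widgets land in a predictable place."""
    for row, signals in LAYOUT:
        if signal in signals:
            return row
    return "unplaced"

print(row_of("cpu"))  # saturation
```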


Step 8 (Optional): Create an alarm aligned to your dashboard

Dashboards are most effective when paired with alerts. If you have permissions:

  1. Go to Monitoring → Alarms.
  2. Create an alarm for your CPU metric (or another key metric): – Compartment: lab compartment – Metric and dimension filter: same instance and metric you used in the dashboard widget – Threshold: pick a sensible value for a short lab (for example, CPU > X for Y minutes) – Destination: configure Notifications (topic/subscription) if required
  3. Save the alarm.

Expected outcome: An alarm exists and will trigger if the condition is met.

Verification: – The alarm appears in the alarm list. – Its status is “OK” initially (assuming no threshold breach).

Note: Alarm configuration details (metric query syntax, aggregation, delay) can be subtle. Use the Console wizard and verify behavior in official docs: https://docs.oracle.com/en-us/iaas/Content/Monitoring/home.htm
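An alarm like “CPU > X for Y minutes” combines a threshold with a pending duration: the condition must hold for consecutive evaluation periods before the alarm fires. A simplified, self-contained model of that behavior (the real evaluation semantics, delays, and aggregation are defined by OCI Monitoring — verify in the docs linked above):

```python
def alarm_fires(samples, threshold, pending_minutes):
    """Return True if `samples` (one value per minute) exceed `threshold`
    for at least `pending_minutes` consecutive minutes — a simplified model
    of an alarm such as: CpuUtilization[1m].mean() > 80, pending 3m."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= pending_minutes:
            return True
    return False

print(alarm_fires([70, 85, 90, 88, 60], threshold=80, pending_minutes=3))  # True
print(alarm_fires([70, 85, 60, 88, 60], threshold=80, pending_minutes=3))  # False
```

The pending duration is what keeps a single noisy sample from paging your on-call team.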


Validation

Use this checklist to confirm the lab is successful:

  1. Dashboard group exists in the correct compartment.
  2. Dashboard exists and opens without authorization errors.
  3. At least two widgets show real metric data.
  4. Changing the time range updates charts.
  5. (Optional) Alarm exists and is tied to a metric shown on the dashboard.

Optional CLI validation (only if you have OCI CLI installed and configured): – You can validate access to Monitoring by listing metric namespaces or querying metric data. Exact commands depend on your setup; start from OCI CLI Monitoring docs: – https://docs.oracle.com/en-us/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/monitoring.html
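As a concrete starting point, `oci monitoring metric list` enumerates metrics in a compartment. A hedged sketch that composes the invocation (flag names should be verified against the CLI reference linked above; run the result with `subprocess.run()` only if your CLI is configured):

```python
def metric_list_cmd(compartment_id, namespace=None):
    """Compose an 'oci monitoring metric list' invocation for access validation."""
    cmd = ["oci", "monitoring", "metric", "list", "--compartment-id", compartment_id]
    if namespace:
        cmd += ["--namespace", namespace]  # optionally narrow to one namespace
    return cmd

print(" ".join(metric_list_cmd("ocid1.compartment.oc1..example", "oci_computeagent")))
```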


Troubleshooting

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Dashboard widgets show “No data” | Wrong region selected | Switch to the region where the resource exists |
| Dashboard widgets show “No data” | Wrong compartment or missing metric dimensions | Select the correct compartment; refine the filter to the specific resource |
| “Not authorized” when opening Dashboards | Missing IAM permissions | Add/adjust IAM policies for reading metrics and managing dashboards |
| Metrics Explorer works but the dashboard widget doesn’t | Widget configured with a different namespace/metric | Reconfigure the widget to match the Metrics Explorer selections |
| OS-level metrics missing | Agent/plugin not enabled or not supported by the image | Verify OCI agent installation and supported metrics in the docs |
| Charts are flat/boring | No workload activity | Generate safe test load briefly, or select a more active resource |

Cleanup

To avoid ongoing costs and clutter:

  1. Delete the dashboard widget(s) or delete the entire dashboard.
  2. Delete the dashboard group (if it’s only for the lab).
  3. (Optional) Delete alarms and notification topics you created for the lab.
  4. Terminate the Compute instance if it was created only for this lab: – Compute → Instances → select instance → Terminate
  5. Remove IAM policies if they were created only for lab access (coordinate with your admin).

Expected outcome: No unnecessary resources remain, minimizing cost and governance overhead.


11. Best Practices

Architecture best practices

  • Design dashboards around services and dependencies: Organize widgets in the same order your architecture fails (edge → app → database).
  • Standardize “golden dashboards”: Create a baseline set for common services (compute, LB, DB, OKE) and clone per environment.
  • Prefer symptoms-first layout: Put user-visible indicators near the top (errors/latency), then resource saturation signals below.

IAM/security best practices

  • Separate viewers from editors: Most users should have view-only access; restrict edit rights to platform/SRE leads.
  • Use compartments intentionally: Align dashboards with workload compartments so access is naturally scoped.
  • Review policies regularly: Dashboard access often implies visibility into operational metadata that can be sensitive.

Cost best practices

  • Avoid high-cardinality custom metrics: Keep dimensions stable and low in count.
  • Align telemetry collection with decisions: If nobody uses a metric, don’t collect it at high frequency.
  • Keep logging under control: Dashboards often lead to “collect everything.” Set retention and ingestion budgets.

Performance best practices

  • Avoid overly complex dashboards: Too many widgets can slow down loading and overwhelm operators.
  • Use consistent time ranges and aggregations: Makes comparisons easier during incidents.

Reliability best practices

  • Treat dashboards as production artifacts: Apply change control. Small layout changes can confuse on-call.
  • Document intent in the dashboard: If text widgets exist, add context: what the charts mean and what “normal” looks like.

Operations best practices

  • Tie dashboards to runbooks: Link alarms and charts to concrete operator actions.
  • Use naming conventions: Include environment, service, and ownership in dashboard names.
  • Tag dashboards (if supported): Owner, cost center, application, environment.

Governance/tagging/naming best practices

A simple naming convention: – Dashboard group: <team>-<env> (e.g., platform-prod) – Dashboard: <service>-<purpose> (e.g., payments-api-service-health) – Widgets: <signal> (<aggregation>, <window>) (e.g., CPU Utilization (mean, 1h))
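A naming convention only helps if it is enforced. A small validator for the dashboard-group pattern above — the environment list and regex are illustrative choices, not an OCI requirement:

```python
import re

# Follows the convention above: <team>-<env>, e.g. "platform-prod".
ENVS = ("dev", "test", "prod", "sandbox")
GROUP_RE = re.compile(rf"^[a-z0-9]+(-[a-z0-9]+)*-({'|'.join(ENVS)})$")

def valid_group_name(name):
    """Check a dashboard-group name against the <team>-<env> convention."""
    return bool(GROUP_RE.match(name))

print(valid_group_name("platform-prod"))  # True
print(valid_group_name("Platform_Prod"))  # False
```

Running such a check in code review (or a CI lint step) keeps group names consistent as teams add dashboards.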


12. Security Considerations

Identity and access model

  • Console Dashboards are secured via OCI IAM.
  • Use least privilege:
    – Many users only need to read metrics and view dashboards.
    – A smaller group should manage dashboards.

Key security design point: Access to dashboards often equals access to underlying metric data. If metric dimensions expose resource identifiers or topology, treat access as sensitive.

Encryption

  • OCI services encrypt data at rest by default for many services. For Monitoring and associated metadata, encryption is handled by OCI-managed keys in most default scenarios.
  • If you have strict requirements (customer-managed keys, specific compliance controls), verify which parts of Monitoring/Dashboards support those controls.

Network exposure

  • Dashboards are accessed via the OCI Console over the public internet (HTTPS), unless your corporate controls restrict access.
  • Use:
    – MFA
    – federation with strong IdP policies
    – conditional access (if supported by your IdP)
    – device posture controls (enterprise security)

Secrets handling

  • Do not embed secrets in dashboard text widgets or labels.
  • If dashboards link to runbooks, ensure runbooks do not expose credentials.

Audit/logging

  • OCI Audit can record API calls to many OCI resources.
  • Confirm whether dashboard create/update/delete actions are captured in Audit for your tenancy and region, and ensure audit logs are retained per policy.
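One way to confirm what Audit captured is to list audit events for a time window after making a dashboard change. A hedged sketch composing the `oci audit event list` call (verify flag names in the CLI reference; timestamps are RFC 3339):

```python
def audit_events_cmd(compartment_id, start_time, end_time):
    """Compose an 'oci audit event list' invocation for a time window,
    e.g. to check whether dashboard changes appear in the audit trail."""
    return [
        "oci", "audit", "event", "list",
        "--compartment-id", compartment_id,
        "--start-time", start_time,
        "--end-time", end_time,
    ]

print(" ".join(audit_events_cmd(
    "ocid1.compartment.oc1..example",
    "2024-01-01T00:00:00Z", "2024-01-01T01:00:00Z")))
```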

Compliance considerations

  • Dashboards can expose operational signals about regulated systems (availability, performance).
  • If you operate under SOC 2 / ISO 27001 / HIPAA-style controls, treat dashboard access as part of your access review scope.

Common security mistakes

  • Granting manage permissions broadly for convenience.
  • Building dashboards in a shared compartment with too-wide access.
  • Mixing dev/test/prod dashboards without clear separation, leading to accidental operational confusion.

Secure deployment recommendations

  • Keep production dashboards in production compartments with controlled access.
  • Use a dedicated group like dashboard-admins for edit rights.
  • Use read-only groups like dashboard-viewers for broad access.
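The viewer/admin split above can be sketched as IAM policy statements. These are illustrative only: the general `Allow group … to <verb> <resource-type> in compartment …` shape is standard OCI policy syntax, but the exact resource-type names for dashboards (shown here as `management-dashboard-family`) must be verified against the IAM policy reference in section 17:

```text
Allow group dashboard-viewers to read metrics in compartment ops-prod
Allow group dashboard-viewers to read management-dashboard-family in compartment ops-prod
Allow group dashboard-admins to manage management-dashboard-family in compartment ops-prod
```

Note that viewers need both statements: dashboard visibility without metric read permission produces dashboards with empty widgets.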

13. Limitations and Gotchas

Because OCI evolves, confirm current limits and behavior in official docs. Common limitations/gotchas include:

  • Region context is easy to overlook: Many “no data” issues are just the wrong region selected in the Console.
  • Compartment scoping matters: Dashboards created in one compartment may not automatically query metrics from another without IAM permission.
  • Metric availability varies by resource and configuration:
    – Some OS-level metrics require agent support.
    – Some services emit different metrics or dimensions than expected.
  • Widget and dashboard limits: There are often practical or enforced limits on the number of widgets per dashboard and dashboards per group/compartment.
  • Not a full BI tool: Console Dashboards are optimized for operational telemetry, not business analytics with complex joins.
  • Dashboard-as-code maturity: If you require GitOps-style dashboard pipelines, validate current API/CLI support and consider integrating with external dashboard platforms.
  • Retention constraints: Metric history and resolution may be constrained by Monitoring settings and service limits.
  • Permission troubleshooting can be non-obvious: A user might see dashboards but not see widget data if they lack metric read permissions.

14. Comparison with Alternatives

Console Dashboards sit in a broader observability landscape. Here’s how they compare to common alternatives.

Key alternatives to consider

  • OCI Monitoring Metrics Explorer: Great for ad-hoc exploration; less ideal for curated, shareable operational views.
  • Dashboards inside other OCI services: Some OCI services (for example, log analytics or APM features) may offer their own dashboards tailored to that data source (verify current OCI service capabilities).
  • Grafana (self-managed or managed): Strong visualization and ecosystem; requires setup and governance.
  • Prometheus + Grafana: Great for Kubernetes and cloud-native metrics, but requires operational investment.
  • Other cloud-native dashboards (AWS CloudWatch Dashboards, Azure Dashboards/Workbooks, Google Cloud Monitoring dashboards): Useful if you’re operating multi-cloud, but each is best within its own ecosystem.

Comparison table

| Option | Best For | Strengths | Weaknesses | When to Choose |
| --- | --- | --- | --- | --- |
| Oracle Cloud Console Dashboards | OCI-first operations teams | Native OCI IAM + compartment alignment; quick setup; tight Monitoring integration | Less extensible than dedicated visualization platforms; may be region/compartment-centric | You want OCI-native dashboards with minimal setup |
| OCI Metrics Explorer | Ad-hoc troubleshooting | Fast exploration; flexible slicing/dicing | Not a curated “single pane” for teams | You’re investigating one-off questions |
| OCI Logging / log analytics dashboards (service-specific) | Log-centric investigations | Tailored to logs/analytics features | Not always a unified view with metrics | Your primary signal is logs and analytics |
| Grafana (with OCI data sources) | Advanced dashboards and broad integrations | Rich visualization; plugins; dashboard-as-code patterns | Setup/ops overhead; access control integration work | You need advanced visualization and/or multi-cloud dashboards |
| Prometheus + Grafana | Kubernetes/SRE teams | Cloud-native patterns; strong ecosystem | Requires running and maintaining a telemetry pipeline | You want open-source control and K8s-native metrics |
| AWS/Azure/GCP native dashboards | Multi-cloud teams | Deep integration per cloud | Fragmentation across clouds | You operate significant workloads in each cloud and accept tool fragmentation |

15. Real-World Example

Enterprise example: Multi-compartment OCI landing zone operations

  • Problem: A large enterprise runs dozens of applications in OCI with a landing zone model: separate compartments per business unit and environment. On-call teams need consistent views without giving broad admin access.
  • Proposed architecture:
    – Each application team maintains a dashboard group in its compartment (dev/test/prod).
    – A central SRE group has read-only access across production compartments for incident coordination.
    – Alarms route to Notifications topics integrated with an on-call tool.
  • Why Console Dashboards was chosen:
    – Native IAM and compartment alignment simplifies access governance.
    – Faster rollout than deploying and standardizing third-party dashboards across teams.
  • Expected outcomes:
    – Standardized operational views across business units.
    – Reduced time to triage incidents.
    – Clearer ownership boundaries and fewer “who has access?” delays.

Startup/small-team example: Single-service SaaS on OCI

  • Problem: A startup runs one main API and database on OCI. They need quick operational visibility without spending engineering cycles on an observability platform.
  • Proposed architecture:
    – One dashboard group: startup-prod
    – One dashboard: api-service-health
    – Widgets: API gateway/load balancer metrics (if used), compute CPU, DB performance indicators
    – A small set of alarms for the top failure modes (availability, high error rate, saturation)
  • Why Console Dashboards was chosen:
    – Minimal setup and no extra infrastructure.
    – Easy to share read-only access with founders/ops without risking changes.
  • Expected outcomes:
    – Faster awareness of performance problems.
    – Less time spent navigating resource pages.
    – Clear operational habits without additional tooling cost.

16. FAQ

1) Is “Console Dashboards” a separate OCI service I need to enable?

Typically, Console Dashboards are a Console capability integrated with OCI Observability and Management (commonly Monitoring). You usually don’t “enable” them separately, but you must have IAM permissions. Verify in your tenancy’s Monitoring documentation.

2) Do Console Dashboards cost money?

In many cases, dashboards themselves aren’t billed separately; costs come from Monitoring (custom metrics), Logging, Notifications, and the underlying resources you monitor. Always confirm with the official OCI price list: https://www.oracle.com/cloud/price-list/

3) Why do my widgets show “No data”?

Common causes are: – wrong region – wrong compartment – wrong namespace/metric selection – missing agent/config for OS-level metrics
Start by reproducing the metric in Metrics Explorer, then mirror that selection in the widget.

4) Can I share a dashboard with someone outside my tenancy?

OCI Console Dashboards are intended for tenancy users governed by IAM. If you need external sharing, consider a separate portal/reporting approach. Verify any cross-tenancy sharing capabilities in official docs.

5) Can I restrict users to view-only access?

Yes—use IAM policies to separate “view/read” from “manage” permissions. Confirm the exact policy statements for dashboards and metrics in the IAM policy reference.

6) Are dashboards global or regional?

OCI Console workflows are region-aware. Dashboards often behave as regional resources in practice. Verify current behavior in the Dashboards/Monitoring documentation for your region.

7) Can I build one dashboard that shows metrics from multiple compartments?

Yes, if the viewer has IAM permissions to read metrics across those compartments and the widget configuration supports cross-compartment selection. Many teams prefer separate dashboards per compartment to preserve ownership clarity.

8) Can I use Console Dashboards for Kubernetes (OKE) monitoring?

You can visualize any metrics available in OCI Monitoring related to OKE or its nodes (depending on what metrics are emitted and configured). For deep Kubernetes metrics, many teams integrate Prometheus/Grafana—evaluate based on your needs.

9) How do Console Dashboards relate to alarms?

Dashboards provide visualization; alarms provide alerting. A best practice is to design alarms for key failure modes and dashboards for diagnosis and context.

10) Can I export dashboards?

Export capabilities vary by product/version. Some platforms support JSON export/import; others are Console-only. Verify current OCI dashboard export/import options in official docs.

11) What’s the best way to organize dashboards?

Use dashboard groups aligned to: – team ownership – environment (dev/test/prod) – service boundaries
Keep names and tags consistent.

12) How many widgets should I put on one dashboard?

Enough to answer a focused operational question without overwhelming the viewer. Many teams keep “service health” dashboards to one or two screens worth of charts and create separate drill-down dashboards.

13) Can I build dashboards with Terraform?

Terraform support depends on whether OCI exposes dashboards as Terraform-managed resources. Verify in the OCI Terraform provider documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs

14) Are there limits on dashboards and widgets?

Yes, there are typically service limits. Check current Monitoring and Dashboards limits in official docs.

15) What’s the fastest way to troubleshoot missing permissions?

Check: – the compartment in which the dashboard exists – whether your group can read metrics – whether your group can manage dashboards (or has view permission). Then review OCI Audit logs (if available) for denied actions.


17. Top Online Resources to Learn Console Dashboards

| Resource Type | Name | Why It Is Useful |
| --- | --- | --- |
| Official documentation | OCI Monitoring documentation | Primary reference for metrics, alarms, and dashboard workflows in Observability and Management: https://docs.oracle.com/en-us/iaas/Content/Monitoring/home.htm |
| Official documentation | OCI IAM policy reference | Required to correctly grant access for dashboards and metrics: https://docs.oracle.com/en-us/iaas/Content/Identity/policyreference/policyreference.htm |
| Official pricing | Oracle Cloud Price List | Authoritative pricing source for Monitoring/Logging-related costs: https://www.oracle.com/cloud/price-list/ |
| Official pricing tool | OCI Cost Estimator | Model overall solution cost drivers: https://www.oracle.com/cloud/costestimator.html |
| Official free tier | Oracle Cloud Free Tier | Understand Always Free and free usage offers: https://www.oracle.com/cloud/free/ |
| Official CLI docs | OCI CLI installation | Helpful for ops automation and verification: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm |
| Official CLI cmdref | OCI CLI Monitoring command reference | Explore metric/alarm commands (verify dashboard automation support): https://docs.oracle.com/en-us/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/monitoring.html |
| Architecture center | Oracle Architecture Center | Reference architectures to place dashboards into broader designs: https://docs.oracle.com/solutions/ |
| Region availability | OCI regions and availability | Confirm service availability by region: https://www.oracle.com/cloud/data-regions/ |
| Community learning | OCI blogs and tutorials (Oracle-owned channels) | Practical walkthroughs and updates; validate against current docs: https://blogs.oracle.com/cloud-infrastructure/ |

18. Training and Certification Providers

| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
| --- | --- | --- | --- | --- |
| DevOpsSchool.com | DevOps engineers, SREs, platform teams | OCI operations, DevOps practices, monitoring/observability fundamentals | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Beginners to intermediate engineers | DevOps/SCM foundations, CI/CD, cloud basics | Check website | https://www.scmgalaxy.com/ |
| CloudOpsNow.in | Cloud ops and operations teams | Cloud operations practices, monitoring, reliability | Check website | https://www.cloudopsnow.in/ |
| SreSchool.com | SREs and reliability-focused engineers | SRE principles, SLIs/SLOs, incident response, observability | Check website | https://www.sreschool.com/ |
| AiOpsSchool.com | Ops teams exploring AIOps | AIOps concepts, monitoring analytics, automation foundations | Check website | https://www.aiopsschool.com/ |

19. Top Trainers

| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
| --- | --- | --- | --- |
| RajeshKumar.xyz | DevOps/cloud training content (verify specific offerings) | Learners seeking practical DevOps guidance | https://rajeshkumar.xyz/ |
| devopstrainer.in | DevOps training and enablement (verify course catalog) | Beginners to intermediate DevOps engineers | https://www.devopstrainer.in/ |
| devopsfreelancer.com | Freelance DevOps consulting/training (verify services) | Teams needing short-term coaching or delivery help | https://www.devopsfreelancer.com/ |
| devopssupport.in | DevOps support and training resources (verify scope) | Operations teams needing hands-on support | https://www.devopssupport.in/ |

20. Top Consulting Companies

| Company Name | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
| --- | --- | --- | --- | --- |
| cotocus.com | Cloud/DevOps consulting (verify current portfolio) | Landing zones, observability rollouts, CI/CD, ops maturity | Designing compartment/IAM strategy for dashboards; setting up monitoring/alarm standards | https://cotocus.com/ |
| DevOpsSchool.com | DevOps consulting and training (verify services) | DevOps transformation, toolchain implementation, training | Standardizing OCI operations dashboards; building incident response runbooks aligned to metrics | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting (verify services) | Automation, monitoring strategy, operational best practices | Implementing alerting + dashboard practices; optimizing telemetry cost drivers | https://www.devopsconsulting.in/ |

21. Career and Learning Roadmap

What to learn before Console Dashboards

  • OCI fundamentals: tenancies, compartments, VCN basics
  • IAM: users, groups, policies, dynamic groups (where applicable)
  • Observability basics: metrics vs logs vs traces
  • Basic operations: incident response, change management, runbooks

What to learn after Console Dashboards

  • OCI Monitoring alarms and Notifications (alerting pipelines)
  • OCI Logging and (if applicable) logging analytics capabilities
  • Distributed tracing/APM concepts (if your OCI services support them)
  • Infrastructure as Code:
    – Terraform for OCI resource governance
  • SRE maturity:
    – SLIs/SLOs, error budgets, capacity planning

Job roles that use it

  • Site Reliability Engineer (SRE)
  • Cloud Operations Engineer
  • DevOps Engineer
  • Platform Engineer
  • Solutions Architect (operational design)
  • Security Engineer (visibility and audit-oriented dashboards)

Certification path (if available)

Oracle certification offerings change over time. For OCI certifications and learning paths, verify the official Oracle University and certification pages: – https://education.oracle.com/ – https://education.oracle.com/oracle-certification

Project ideas for practice

  1. Build a “golden dashboard” for a 3-tier app (LB → app → DB) with a consistent layout.
  2. Create separate dashboards per environment and prove you can’t accidentally query prod metrics from dev without IAM permission.
  3. Define an SLO dashboard: availability + latency + saturation.
  4. Run a game day: intentionally create load and watch dashboards + alarms react; document improvements.
  5. Cost exercise: estimate the impact of adding custom metrics vs. relying on service metrics.
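For project idea 5, the arithmetic is simple enough to sketch. The datapoint math below is plain counting; the rate is a deliberate placeholder — take real prices from the official price list linked in section 17:

```python
def monthly_datapoints(streams, interval_seconds=60, days=30):
    """Datapoints per month for `streams` time series posted every `interval_seconds`."""
    per_stream = (86400 // interval_seconds) * days  # samples per stream per month
    return streams * per_stream

# Example: 50 custom metric streams at 1-minute resolution.
points = monthly_datapoints(50)
print(points)  # 2160000

RATE_PER_MILLION = 0.0  # placeholder — look up the real rate on the OCI price list
print(points / 1_000_000 * RATE_PER_MILLION)
```

The same function makes it easy to compare scenarios, e.g. the cost impact of moving from 1-minute to 5-minute resolution.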

22. Glossary

  • OCI (Oracle Cloud Infrastructure): Oracle Cloud’s IaaS/PaaS platform.
  • Observability and Management: OCI category covering Monitoring, Logging, and related operations services.
  • Console Dashboards: Dashboarding capability in the OCI Console used to visualize operational telemetry (commonly Monitoring metrics).
  • Monitoring: OCI service for collecting/querying metrics and creating alarms.
  • Metric: A time-series measurement (for example, CPU utilization over time).
  • Namespace: A logical grouping of metrics (often per OCI service or metric source).
  • Dimension: Key/value attributes attached to metrics (for example, resourceId, instance name).
  • Widget: A dashboard element that displays a chart or contextual information.
  • Dashboard group: A container to organize dashboards.
  • Compartment: OCI’s logical isolation boundary for organizing resources and access control.
  • IAM policy: A statement defining who can do what in OCI (authorization).
  • OCID: Oracle Cloud Identifier, a unique identifier for OCI resources.
  • Alarm: A Monitoring rule that triggers notifications when a metric condition is met.
  • Notifications: OCI service that delivers messages to endpoints (email, integrations, etc.).
  • Audit: OCI service that records API calls for governance and security review.

23. Summary

Oracle Cloud Console Dashboards are an OCI-native way to build and share operational dashboards inside the Oracle Cloud Console, strongly aligned with the Observability and Management ecosystem—especially OCI Monitoring metrics. They matter because they turn raw telemetry into repeatable, team-friendly operational views that speed up troubleshooting and improve reliability.

From an architecture standpoint, Console Dashboards sit above Monitoring (metrics and alarms) and are governed by IAM and compartments. From a cost standpoint, dashboards usually don’t drive cost directly; the real cost drivers are custom metrics, logging ingestion/retention, notifications, and the monitored resources themselves—so design telemetry intentionally. From a security standpoint, apply least privilege: separate dashboard viewers from editors, and scope access via compartments.

Use Console Dashboards when you want quick, OCI-native operational visibility with strong IAM integration. If you need advanced, multi-cloud, or GitOps-heavy dashboarding, evaluate tools like Grafana alongside OCI’s native capabilities.

Next step: deepen your operational setup by pairing dashboards with well-designed alarms, notification routing, and runbooks—then validate your approach with a game day in a non-production environment.