Category
Observability and Management
1. Introduction
Oracle Cloud Logging is the native log collection, storage, search, and routing service in Oracle Cloud Infrastructure (OCI) under Observability and Management. It centralizes logs generated by OCI services (service logs) and can also ingest logs from your workloads (custom logs), so teams can troubleshoot incidents, investigate security events, and build operational visibility without standing up a separate logging stack on day one.
In simple terms: Logging lets you put logs in one place, search them quickly, and optionally move them to other destinations (for example, Object Storage for archiving or Streaming for downstream processing).
Technically, Logging provides:
- Log groups and logs (resources scoped to compartments and regions)
- Collection of service logs from supported OCI services
- Custom log ingestion (for example, from compute instances via an agent, depending on the source type)
- A Log Explorer UI and APIs for querying and viewing log records
- Integration patterns to route logs to other services (commonly via Service Connector Hub)
The problem it solves: production systems generate massive amounts of log data across networks, compute, identity, databases, and applications. Without centralization and governance, teams lose time correlating issues, proving compliance, and responding to incidents. Logging provides a governed, OCI-native foundation for those operational and security workflows.
2. What is Logging?
Official purpose (OCI scope): Oracle Cloud Logging is an OCI service that helps you collect, store, search, and manage logs from OCI resources and workloads. It is part of the broader Observability and Management portfolio.
Core capabilities
- Service log collection: capture logs emitted by supported OCI services (examples include networking, gateways, load balancers, functions, and other OCI resources—availability varies by service and region).
- Custom logs: ingest logs from applications/OS where supported (for example, via an agent or ingestion mechanisms—verify current supported sources in official docs for your region).
- Centralized search and exploration: query logs in a single UI (Log Explorer) or via API/CLI.
- Retention controls: define how long logs are kept in Logging before they are aged out (and optionally archive elsewhere).
- Routing/export patterns: use native OCI integrations (commonly Service Connector Hub) to send logs to targets such as Object Storage or Streaming for long-term retention, analytics, or SIEM pipelines.
Major components
- Log Group: a container resource used to organize logs (often by environment, application, or team).
- Log: the resource representing a stream of log entries. A log can be configured for a particular source (service log) or custom ingestion.
- Log Explorer / Search: the console experience to filter and query log entries.
- IAM policies: control who can manage logs, read logs, and ingest log content.
- Optional routing: Service Connector Hub can move log data to other OCI services.
Service type
- Managed OCI service (you do not manage servers, shards, or indexing clusters directly in the base Logging service).
Scope (regional/global/compartment)
- Region-scoped: Logs and log groups are created in a specific OCI region. If you operate multi-region, you typically configure Logging per region.
- Compartment-scoped resources: Log groups and logs live in compartments; IAM policies and compartment design strongly affect governance.
Fit in the Oracle Cloud ecosystem
Logging typically sits at the center of an OCI observability design:
- Monitoring and Alarms cover metrics-based alerting.
- Logging covers event records and text/structured log lines.
- Events and Notifications support event-driven automation.
- Service Connector Hub provides data movement between services (including from Logging).
- Logging Analytics (a separate OCI service) provides deeper parsing, enrichment, and analytics for logs—useful when you outgrow basic search (verify your requirements and licensing/pricing).
Official docs starting point:
https://docs.oracle.com/en-us/iaas/Content/Logging/Concepts/loggingoverview.htm
3. Why use Logging?
Business reasons
- Lower time-to-resolution (MTTR): central logs reduce the time spent gathering evidence during incidents.
- Reduce tool sprawl: use OCI-native Logging as a baseline before adopting additional platforms.
- Improve audit readiness: consistent retention and access control help when demonstrating operational controls.
Technical reasons
- Centralization across OCI services: collect and query logs from multiple OCI resources without building a pipeline first.
- Compartment-aware governance: align log access with your tenancy structure.
- API-driven automation: manage logs and integrate with CI/CD and infrastructure-as-code patterns (Terraform is common in OCI).
Operational reasons
- Standardized troubleshooting workflow: one place to search by time window, resource, and log source.
- Retention management: keep short-term logs in Logging and archive long-term logs cheaply (often to Object Storage).
- Routing for downstream processing: forward to Streaming or other services for real-time processing.
Security / compliance reasons
- Least-privilege access: enforce which teams can view or manage log content with OCI IAM policies.
- Evidence preservation: route logs to immutable/controlled storage destinations for investigations.
- Segregation of duties: keep production logs accessible to SRE/SecOps while limiting access for other roles.
Scalability / performance reasons
- Managed service scales ingestion and query for typical operational needs (exact scaling characteristics and service limits depend on region and tenancy; verify in official docs).
When teams should choose it
- You need native OCI log visibility quickly.
- You want an OCI-governed log store integrated with compartments and IAM.
- You plan to archive or stream logs into analytics/SIEM later.
When teams should not choose it (or should supplement it)
- You need advanced log analytics features such as deep parsing, correlation rules, enrichment, and long-term investigative workflows—consider OCI Logging Analytics or an external SIEM/log analytics platform.
- You have strict requirements for on-prem-only log storage (Logging is a cloud service).
- You must use a specific enterprise logging platform already standardized across your organization—Logging may still serve as an ingestion point, but your “system of record” might be elsewhere.
4. Where is Logging used?
Industries
- Financial services: audit trails, network logging, security investigations.
- Healthcare: operational monitoring with access governance.
- Retail/e-commerce: incident response during traffic spikes.
- SaaS/technology: multi-tenant operations and SRE workflows.
- Public sector: controlled access and retention policies.
Team types
- Platform engineering / cloud foundations teams
- DevOps and SRE teams
- Security engineering and SOC teams
- Application developers (for debugging and release validation)
- Compliance and governance teams (for evidence and oversight)
Workloads
- Web and API platforms (API gateways, load balancers)
- Microservices and Kubernetes-based workloads (often forwarding container logs via agents/pipelines—implementation varies)
- Batch jobs and integration pipelines
- Network-intensive applications (flow logs are often key)
- Identity-sensitive systems where auditability matters
Architectures
- Single-region production environments
- Multi-region active/active or active/passive
- Hub-and-spoke network architectures (flow logs per spoke)
- Shared services landing zones (centralized logging compartments/projects)
Real-world deployment contexts
- Production: controlled retention, restricted access, and routing to archive/SIEM are common.
- Dev/Test: shorter retention, broader access, and selective logging to reduce cost/noise.
5. Top Use Cases and Scenarios
Below are realistic scenarios where Oracle Cloud Logging is commonly used.
1) Network flow visibility (VCN Flow Logs)
- Problem: You need to confirm what traffic is allowed/denied and troubleshoot connectivity issues.
- Why Logging fits: Flow logs are delivered into Logging where you can query by time, subnet/VNIC, source/destination.
- Example: After a security list/NSG change, SREs search flow logs to confirm whether traffic to a database port is being dropped.
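Once flow logs are exported (for example, to Streaming), the same check can be applied programmatically. The Python sketch below filters a flow-log-like record for rejected traffic to a database port; the record shape and field names (`data.action`, `data.destinationPort`) are illustrative assumptions, so verify them against the actual entries you see in Log Explorer.

```python
import json

# Hypothetical record shaped like an OCI VCN flow log entry.
# Field names here are assumptions for illustration, not an official schema.
RECORD = json.dumps({
    "data": {
        "sourceAddress": "10.0.1.23",
        "destinationAddress": "10.0.2.5",
        "destinationPort": 1521,
        "protocol": 6,
        "action": "REJECT",
    }
})

def is_dropped_db_traffic(raw: str, db_port: int = 1521) -> bool:
    """Return True if this flow record shows rejected traffic to the DB port."""
    data = json.loads(raw).get("data", {})
    return data.get("action") == "REJECT" and data.get("destinationPort") == db_port

print(is_dropped_db_traffic(RECORD))  # True for the sample record above
```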
2) Load balancer access investigation
- Problem: You need request visibility (status codes, client IPs) for troubleshooting or security review.
- Why Logging fits: Load balancer logs (where supported) can be centralized and searched in Logging.
- Example: During a 5xx spike, engineers correlate backend errors with load balancer access patterns.
3) API gateway request debugging
- Problem: Clients report intermittent API failures; you need to trace gateway behavior.
- Why Logging fits: Gateway logs can be searched by time window and filtered by gateway resource.
- Example: A deployment introduced stricter authentication; gateway logs show which clients are failing auth.
4) Custom application logs from compute instances
- Problem: Your app writes to /var/log/myapp.log and you need centralized visibility without installing a full ELK stack.
- Why Logging fits: Custom logs can be ingested into Logging (agent/ingestion mechanism depends on OCI support).
- Example: A Java service emits structured JSON logs; you ingest them and filter by requestId or error codes.
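To show the kind of filtering this enables once log lines are available to a consumer, here is a minimal Python sketch that selects structured JSON entries by requestId. The field names (`requestId`, `level`, `msg`) come from the scenario above and are assumptions, not an OCI schema.

```python
import json

# Sample structured JSON log lines, as a Java service might emit them.
# Field names are illustrative assumptions.
LINES = [
    '{"level": "ERROR", "requestId": "req-42", "msg": "timeout calling payments"}',
    '{"level": "INFO",  "requestId": "req-43", "msg": "request completed"}',
    '{"level": "ERROR", "requestId": "req-42", "msg": "retry exhausted"}',
]

def filter_by_request(lines, request_id):
    """Return parsed log entries belonging to one request."""
    entries = (json.loads(line) for line in lines)
    return [e for e in entries if e.get("requestId") == request_id]

matches = filter_by_request(LINES, "req-42")
print(len(matches))  # 2 entries for req-42
```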
5) Security investigations and incident response
- Problem: A suspected compromise requires rapid log review across identity, network, and application layers.
- Why Logging fits: Centralized log search and strict IAM controls help SOC teams investigate.
- Example: Analysts review network flow logs and service logs during the incident window.
6) Long-term retention via Object Storage archive
- Problem: Compliance requires keeping logs longer than the default/typical retention.
- Why Logging fits: You can route logs to Object Storage for cost-effective retention (and lifecycle policies).
- Example: Store 13 months of logs in Object Storage with immutability controls (where configured) while keeping 14 days hot in Logging.
7) Real-time log streaming for analytics
- Problem: You want near-real-time processing (alerting, enrichment, correlation).
- Why Logging fits: Route logs to OCI Streaming via Service Connector Hub, then process with consumers.
- Example: A stream processor flags suspicious IPs from gateway logs and triggers a response workflow.
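The stream-processing step in this example reduces to simple aggregation over log records. The Python sketch below flags client IPs with repeated auth failures; the field names (`clientIp`, `status`) and the threshold are illustrative assumptions, not gateway log schema.

```python
from collections import Counter

# Simplified gateway log records; field names are illustrative assumptions.
EVENTS = [
    {"clientIp": "203.0.113.9", "status": 401},
    {"clientIp": "203.0.113.9", "status": 401},
    {"clientIp": "198.51.100.7", "status": 200},
    {"clientIp": "203.0.113.9", "status": 401},
]

def suspicious_ips(events, max_failures=2):
    """Flag client IPs with more auth failures (HTTP 401) than the threshold."""
    failures = Counter(e["clientIp"] for e in events if e["status"] == 401)
    return {ip for ip, n in failures.items() if n > max_failures}

print(suspicious_ips(EVENTS))  # {'203.0.113.9'}
```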
8) Environment-based separation (dev/test/prod)
- Problem: Teams need isolation so dev logs don’t pollute prod visibility and access.
- Why Logging fits: Use compartments and log groups per environment with IAM boundaries.
- Example: Developers can read dev logs but cannot read prod log content.
9) Release validation and canary analysis
- Problem: After deployment, you need evidence that error rates and specific errors did not increase.
- Why Logging fits: Compare log patterns pre/post deployment time window.
- Example: Query “ERROR” occurrences for a canary instance or subnet during rollout.
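A canary comparison like this boils down to computing an error rate over two time windows and applying a guardrail. The Python sketch below shows the idea; the 1.5x regression threshold and the sample log lines are arbitrary examples, not a recommended policy.

```python
def error_rate(lines):
    """Fraction of log lines containing the token 'ERROR'."""
    if not lines:
        return 0.0
    return sum("ERROR" in line for line in lines) / len(lines)

# Hypothetical samples from the same canary instance, before and after rollout.
before = ["INFO ok", "INFO ok", "ERROR db timeout", "INFO ok"]
after = ["ERROR db timeout", "ERROR auth failed", "INFO ok", "ERROR db timeout"]

# Simple guardrail: flag the rollout if the error rate grew by more than 50%.
regressed = error_rate(after) > error_rate(before) * 1.5
print(regressed)  # True: 0.75 vs 0.25
```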
10) Central operations dashboard inputs (metrics from logs)
- Problem: Some operational signals are only visible in logs (e.g., business events).
- Why Logging fits: Use downstream processing (Functions/Streaming/analytics) to transform logs into metrics/alerts.
- Example: Convert payment failure logs into metrics and trigger alarms.
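The log-to-metric conversion is essentially time bucketing. As a sketch of what a downstream Function or stream consumer might do, the Python below counts payment-failure events per minute; the entry shape and timestamp format are illustrative assumptions.

```python
from collections import Counter
from datetime import datetime

# Hypothetical payment-failure log entries with ISO-8601 UTC timestamps.
FAILURES = [
    {"time": "2024-05-01T10:00:12Z", "event": "payment_failed"},
    {"time": "2024-05-01T10:00:45Z", "event": "payment_failed"},
    {"time": "2024-05-01T10:01:03Z", "event": "payment_failed"},
]

def failures_per_minute(entries):
    """Bucket payment-failure events into per-minute counts (a metric series)."""
    buckets = Counter()
    for e in entries:
        ts = datetime.strptime(e["time"], "%Y-%m-%dT%H:%M:%SZ")
        buckets[ts.strftime("%Y-%m-%dT%H:%M")] += 1
    return dict(buckets)

print(failures_per_minute(FAILURES))
# {'2024-05-01T10:00': 2, '2024-05-01T10:01': 1}
```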
11) Multi-team shared services governance
- Problem: Multiple teams share VCNs and gateways; you need per-team log access without duplicating resources.
- Why Logging fits: Use log groups by team and IAM policies by group/compartment.
- Example: The network team can manage flow logs; application teams can read only their API logs.
12) Forensics-friendly evidence pipeline
- Problem: You need tamper-resistant retention and a clear chain of custody.
- Why Logging fits: Central collection + export to controlled storage + strict IAM can support investigations.
- Example: Export security-relevant logs to a dedicated security compartment with tight access and retention policies.
6. Core Features
Note: Feature availability can vary by region and source type. Always verify the latest capabilities in the official OCI Logging documentation for your tenancy and region.
6.1 Log Groups (organization and governance)
- What it does: Provides containers to organize logs by application, environment, team, or compliance boundary.
- Why it matters: Governance becomes manageable when log sources are grouped logically.
- Practical benefit: You can apply IAM policies and naming conventions cleanly.
- Caveats: Poor compartment/log-group design can create noisy, hard-to-administer logging environments.
6.2 Logs (service logs and custom logs)
- What it does: Represents a logical stream of log entries associated with a source.
- Why it matters: Logs are the unit you query, retain, and route.
- Practical benefit: Separate logs by resource type or criticality (e.g., prod-vcn-flow, prod-api-gw-access).
- Caveats: Retention and routing are typically configured per log/log group depending on OCI capabilities.
6.3 Log Explorer (search, filtering, time windows)
- What it does: Lets you search and filter logs by time range, log group/log, and content.
- Why it matters: Incident response relies on quickly narrowing down “what changed” during an incident.
- Practical benefit: Quick triage without exporting to external tools.
- Caveats: Advanced correlation across many sources may require additional analytics tooling (e.g., Logging Analytics or a SIEM).
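Searches built in Log Explorer can also be constructed programmatically, which is useful for saved queries or automation. The Python sketch below assembles a query string in the general shape of the Logging search language (`search "scope" | where ...`); the exact operators, field paths, and whether names versus OCIDs are accepted in the scope should be verified in the official query language reference.

```python
# Sketch of building a Logging-style search query string. The syntax shown
# is an approximation of the OCI Logging query language; verify exact
# operators and scope format in the official documentation.

def build_query(compartment, log_group, log, field, value):
    scope = f'"{compartment}/{log_group}/{log}"'
    return f"search {scope} | where {field} = '{value}'"

q = build_query("lab-logging", "net-lab-log-group", "vcn-flow-lab",
                "data.action", "REJECT")
print(q)
```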
6.4 Retention and lifecycle controls (hot retention vs archive)
- What it does: Keeps logs for a configurable time period, after which they are removed from the hot store.
- Why it matters: Retention drives both compliance and cost.
- Practical benefit: Keep short retention for high-volume logs (flow logs) and archive long-term to Object Storage.
- Caveats: Maximum/minimum retention and behavior are service-defined—verify current limits.
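The hot-retention behavior can be reasoned about with simple date arithmetic. The Python sketch below decides whether an entry is still within a retention window; the 30-day value is an example, not a statement of OCI's actual limits.

```python
from datetime import datetime, timedelta, timezone

def still_hot(entry_time, now, retention_days):
    """True if a log entry is still within the hot retention window."""
    return now - entry_time <= timedelta(days=retention_days)

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
recent = datetime(2024, 6, 20, tzinfo=timezone.utc)
old = datetime(2024, 5, 1, tzinfo=timezone.utc)

print(still_hot(recent, now, 30))  # True: 10 days old
print(still_hot(old, now, 30))     # False: aged out of the hot store
```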
6.5 IAM access control for manage/read/ingest
- What it does: Uses OCI IAM to control:
- who can create log groups/logs,
- who can read log content,
- who can ingest log entries (for custom ingestion).
- Why it matters: Logs often contain sensitive data.
- Practical benefit: Enforce least privilege and separation of duties.
- Caveats: Mis-scoped policies are a common reason logs don’t appear or teams can’t view them.
6.6 API and CLI support
- What it does: Manage log groups/logs and access logs programmatically (capabilities depend on API set).
- Why it matters: Automation and repeatable environments (IaC) reduce human error.
- Practical benefit: Provision standard log group layouts across compartments with Terraform/CLI.
- Caveats: Logging “search” behavior and ingestion APIs have specific payload and permission requirements—use official SDK/CLI references.
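Standardized layouts are easiest to enforce when the names are generated rather than typed. As a sketch, the Python below produces a log group naming matrix across environments and domains, the kind of list you might feed into Terraform or the OCI CLI; the convention itself is an example, not an OCI requirement.

```python
from itertools import product

# Example naming convention: <env>-<domain>-log-group.
# The environments and domains mirror the best-practice split discussed later.
ENVIRONMENTS = ["dev", "test", "prod"]
DOMAINS = ["net", "app", "security"]

def log_group_names(envs, domains):
    """Generate one log group name per (environment, domain) pair."""
    return [f"{env}-{domain}-log-group" for env, domain in product(envs, domains)]

names = log_group_names(ENVIRONMENTS, DOMAINS)
print(len(names))   # 9 combinations
print(names[0])     # dev-net-log-group
```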
6.7 Integration with Service Connector Hub (routing)
- What it does: Moves logs from Logging to other OCI services (common targets include Object Storage and Streaming).
- Why it matters: Logging is often the collection layer, while your analytics/retention layer may differ.
- Practical benefit: Build pipelines without self-managed agents for every integration.
- Caveats: Service Connector introduces its own IAM policies, monitoring, and potential costs in target services.
6.8 Compartment-based multi-tenancy design
- What it does: Aligns log resources with OCI compartments and organizational boundaries.
- Why it matters: Compartment structure is the backbone of governance in OCI.
- Practical benefit: A central security compartment can own export pipelines while app compartments own sources.
- Caveats: Cross-compartment routing requires careful IAM planning.
7. Architecture and How It Works
High-level architecture
At a high level, Oracle Cloud Logging has:
- A control plane (create/manage log groups, logs, retention settings)
- A data plane (ingest log entries, store them, support queries/search)
- Integration points to service log sources (OCI services) and custom sources (agents/ingestion APIs)
Data flow (typical)
- A source generates log events:
  - Service logs: generated by OCI services (e.g., networking flow logs).
  - Custom logs: generated by your instances/apps and shipped through a supported ingestion mechanism.
- The log entries are ingested into a Log within a Log Group in a specific region.
- Operators query logs in Log Explorer or via APIs/CLI.
- Optional: logs are routed to other destinations (Object Storage, Streaming, etc.) via Service Connector Hub.
Integrations with related services
Common OCI services used with Logging:
- Service Connector Hub: route logs to Object Storage/Streaming/Functions (targets depend on current connector options).
- Object Storage: low-cost retention, lifecycle policies, legal hold/immutability features (where configured).
- Streaming: near-real-time fan-out for analytics pipelines.
- Functions: custom processing/enrichment (e.g., redact sensitive values).
- Notifications: used by downstream automation to alert humans/systems.
- IAM: access control for log content and administration.
Dependency services
- IAM is mandatory for all access.
- VCN/Networking is not required for the Logging service itself, but is relevant when you enable network-related logs (like flow logs) and when you build pipelines.
Security/authentication model
- Uses OCI IAM policies and principals (users, groups, dynamic groups, service principals).
- Data access is controlled by permissions to read log content and manage logging resources.
- Export/routing operations use service-to-service permissions (service connectors require explicit policies).
Networking model
- Logging is an OCI regional managed service.
- Access is typically via the OCI Console and public OCI API endpoints. Private access patterns (if required) depend on OCI’s current private endpoint/service gateway capabilities for the involved services—verify in official docs for your target region and design.
Monitoring/governance considerations
- Monitor:
- ingestion health (are logs arriving?),
- connector success/failure (if exporting),
- retention compliance (archived vs hot retention).
- Govern with:
- compartment boundaries,
- standardized log group naming,
- tagging (cost and ownership),
- IAM least privilege.
Simple architecture diagram (Mermaid)
flowchart LR
  A[OCI Service or Workload] --> B[Logging: Log]
  B --> C["Log Explorer (Search)"]
  B --> D[Service Connector Hub]
  D --> E["Object Storage (Archive)"]
Production-style architecture diagram (Mermaid)
flowchart TB
subgraph Region["OCI Region"]
subgraph AppComp["App Compartment"]
S1[API Gateway / Load Balancer / Compute] --> LG1[Logging Log Group: app-prod]
LG1 --> L1[Log: access]
LG1 --> L2[Log: errors]
end
subgraph NetComp["Network Compartment"]
VCN[VCN/Subnets] --> LG2[Logging Log Group: net-prod]
LG2 --> FL[Log: vcn-flow]
end
subgraph SecComp["Security Compartment"]
SCH[Service Connector Hub] --> OS[Object Storage: security-archive]
SCH --> STR[Streaming: sec-events-stream]
end
L1 --> SCH
L2 --> SCH
FL --> SCH
end
STR --> SIEM[External SIEM / Analytics Consumers]
OS --> IR[Incident Response / Forensics]
8. Prerequisites
Before you start working with Oracle Cloud Logging, ensure you have the following.
Tenancy / account requirements
- An active Oracle Cloud (OCI) tenancy
- Permissions to create and manage resources in a compartment (or access to an existing compartment)
Permissions / IAM roles
You need IAM permissions for:
- Logging: create/manage log groups and logs; read log content.
- Source service permissions (e.g., networking permissions for enabling flow logs).
- Service Connector Hub permissions if exporting.
- Object Storage permissions if archiving.
If you’re a tenancy administrator, you typically already have these. If you’re not, request a least-privilege policy from your IAM admin.
Example IAM policy patterns (verify the latest resource-types/verbs in official docs):
- Allow a group to manage log groups and logs in a compartment.
- Allow a group to read log content in a compartment.
- Allow a service connector to read from logs and write to Object Storage.
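As an illustration, the first two patterns could look like the statements below. The group names are hypothetical, and the resource-type names and verbs (such as log-groups and log-content) should be verified against the current IAM reference for Logging before use.

```
Allow group LogAdmins to manage log-groups in compartment lab-logging
Allow group LogReaders to read log-content in compartment lab-logging
Allow group LogWriters to use log-content in compartment lab-logging
```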
Official IAM guidance for Logging (start here and follow linked pages):
https://docs.oracle.com/en-us/iaas/Content/Logging/Concepts/loggingoverview.htm (see IAM sections)
Billing requirements
- Logging usage may incur charges depending on ingestion/storage and the log source type.
- Other services used in labs (Compute, Object Storage, Streaming, Load Balancer) may incur charges.
- Use Always Free eligible resources where possible and keep retention short in labs.
Tools (optional but recommended)
- OCI Console access (web)
- OCI CLI (optional): https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
- SSH client (if you deploy a Compute instance in the lab)
- Terraform (optional for production IaC)
Region availability
- Logging is regional; confirm it’s available in your chosen region. Most OCI core services are broadly available, but specific log sources may vary.
Quotas / limits
- Service limits exist for log groups/logs and ingestion/search behaviors. Check “Service Limits” in official docs and in the OCI Console for your tenancy and region.
Prerequisite services (for the lab)
For the hands-on lab in this tutorial, you will use:
- Virtual Cloud Network (VCN)
- Compute instance (Always Free eligible if available)
- VCN Flow Logs (as the log source)
- Object Storage (optional archive destination via Service Connector Hub)
9. Pricing / Cost
Oracle Cloud Logging pricing is usage-based and can vary by:
- region,
- log source type,
- ingestion volume,
- retention/storage,
- and any downstream services you route logs to.
Because pricing changes and can be region/SKU dependent, do not rely on copied numbers from blogs. Use official sources:
- Official Oracle Cloud price list (navigate to Observability/Management and Logging): https://www.oracle.com/cloud/price-list/
- OCI Cost Estimator: https://www.oracle.com/cloud/costestimator.html
- Oracle Cloud Free Tier overview: https://www.oracle.com/cloud/free/
Pricing dimensions to expect (verify on the price list)
Common cost dimensions for managed logging services include:
- Log ingestion (GB ingested)
- Log storage/retention (GB-month for hot storage)
- Potentially different treatment for service logs vs custom logs (verify in the current OCI price list)
- Charges for downstream targets:
  - Object Storage storage and requests
  - Streaming partitions/throughput and storage
  - Functions invocations
  - Network egress where applicable (e.g., exporting outside OCI)
Free tier considerations
- OCI has an Always Free offering, but Logging free allocations and included volumes are subject to change.
- Some OCI services may generate logs as part of their own pricing model (for example, network flow logs may have their own cost considerations). Verify before enabling high-volume logs in production.
Primary cost drivers
- High-volume sources (network flow logs, verbose access logs)
- Long retention in the hot log store
- Duplicated routing (sending the same logs to multiple destinations)
- Over-collection (logging debug-level messages in production)
Hidden/indirect costs
- Object Storage retention: archives are cheap per GB, but long retention adds up.
- Search productivity: too many logs can reduce operational efficiency (human cost).
- Data transfer: exporting logs out of OCI can incur egress costs.
How to optimize cost (practical guidance)
- Keep hot retention short (e.g., 7–30 days) and archive older logs to Object Storage.
- Enable flow logs only where needed (specific subnets/VNICs) and review sampling/aggregation options if available.
- Use separate logs for high-volume vs high-value sources.
- Apply tags for cost allocation (team/app/environment).
- Build a log data classification policy: avoid logging secrets, credentials, or high-cardinality payloads.
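The two-tier retention pattern can be sanity-checked with a back-of-envelope model. In the Python sketch below, the per-GB prices are placeholders, NOT Oracle's prices; substitute figures from the official price list before using anything like this for planning.

```python
# Back-of-envelope two-tier retention cost model.
# Prices are placeholders for illustration only, not Oracle's prices.
HOT_PRICE_PER_GB_MONTH = 0.50      # placeholder
ARCHIVE_PRICE_PER_GB_MONTH = 0.01  # placeholder

def monthly_cost(gb_per_day, hot_days, archive_months):
    """Steady-state monthly storage cost: a rolling hot window plus an archive."""
    hot_gb = gb_per_day * hot_days
    archive_gb = gb_per_day * 30 * archive_months
    return (hot_gb * HOT_PRICE_PER_GB_MONTH
            + archive_gb * ARCHIVE_PRICE_PER_GB_MONTH)

# Example: 5 GB/day of flow logs, 14 days hot, 13 months archived.
print(round(monthly_cost(5, 14, 13), 2))  # 54.5 with the placeholder prices
```

Even with placeholder numbers, the model shows why short hot retention matters: the hot window dominates the cost despite holding far less data than the archive.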
Example low-cost starter estimate (qualitative)
A minimal lab setup can be kept low-cost by:
- using an Always Free compute instance,
- enabling flow logs briefly,
- setting short retention,
- and archiving only a small subset (or skipping archive in the lab).
Exact cost depends on your region and actual volume; use the OCI Cost Estimator for a numeric estimate.
Example production cost considerations
In production, expect cost planning around:
- peak traffic periods (flow/access logs spike),
- retention requirements (compliance),
- and any real-time analytics pipeline (Streaming + consumers).

A common pattern is hot Logging for fast search + Object Storage for long-term retention.
10. Step-by-Step Hands-On Tutorial
This lab uses VCN Flow Logs as a realistic, OCI-native source that delivers into Logging. You will:
- create a log group,
- enable a flow log to send records to Logging,
- generate traffic,
- search logs in Log Explorer,
- optionally archive logs to Object Storage using Service Connector Hub.
Objective
Enable Oracle Cloud Logging for VCN flow visibility, confirm logs arrive, and export them to Object Storage for archival.
Lab Overview
You will build this in a single region:
1. Create a compartment (optional) and a log group
2. Create a VCN and a Compute instance
3. Enable VCN Flow Logs to a Logging log
4. Generate network traffic and verify log entries in Log Explorer
5. (Optional) Export logs to Object Storage using Service Connector Hub
6. Clean up resources
Step 1: Prepare a compartment and naming plan
Goal: Keep lab resources isolated for cleanup and IAM clarity.
- In the OCI Console, open Identity & Security → Compartments.
- Create a compartment, for example:
  - Name: lab-logging
  - Description: Logging lab resources
- Note the compartment name for later steps.
Expected outcome: A dedicated compartment exists for the lab.
Verification: You can select lab-logging in the compartment selector.
Step 2: Create a Log Group in Logging
Goal: Create the container where your flow logs will be stored.
- In the OCI Console, go to Observability & Management → Logging → Log Groups.
- Ensure you are in the correct region and compartment (lab-logging).
- Click Create Log Group.
- Enter:
  - Name: net-lab-log-group
  - (Optional) Description: Flow logs lab
- Create the log group.
Expected outcome: A log group named net-lab-log-group exists.
Verification: The log group appears in the list.
Step 3: Create a VCN (network for traffic generation)
Goal: Create a small network so you can generate traffic that appears in flow logs.
- Go to Networking → Virtual Cloud Networks.
- Select compartment lab-logging.
- Click Create VCN.
- Use a wizard option such as VCN with Internet Connectivity (names may vary).
- Keep defaults unless you have a standard CIDR plan. Record:
  - VCN name
  - Subnet name(s)
  - Whether you created a public subnet
Expected outcome: A VCN and at least one subnet exist.
Verification: The VCN shows as “Available” and subnets are listed under VCN resources.
Step 4: Create an Always Free Compute instance (traffic source)
Goal: Create a host inside the VCN to generate traffic.
- Go to Compute → Instances → Create instance.
- Choose:
  - Compartment: lab-logging
  - Name: flowlog-client-1
  - Image: Oracle Linux (or another supported Linux)
  - Shape: choose an Always Free eligible shape if available in your region
- Networking:
  - Select the VCN/subnet created in Step 3 (public subnet is simplest for this lab)
- Add an SSH key (generate one if needed).
- Create the instance and wait until it is Running.
Expected outcome: One running compute instance in your subnet.
Verification: You can see its private IP, and (if public subnet) a public IP.
Optional SSH test
ssh -i /path/to/private_key opc@<public_ip>
Step 5: Enable VCN Flow Logs to send to Logging
Goal: Configure a flow log resource to publish records to your Logging log group.
The exact UI labels can vary (Flow Logs may be created at the subnet level or VNIC level depending on OCI features and your region). Follow the Console prompts and choose your log group.
- In the Console, navigate to your VCN.
- Find Flow Logs under VCN resources (or under the subnet/VNIC resources).
- Click Create Flow Log (or Enable Flow Logs).
- Select the target:
  - The subnet that contains flowlog-client-1 (recommended for this lab)
- For the log destination:
  - Log group: net-lab-log-group
  - Log name: vcn-flow-lab (the console may create a log resource for you)
- Keep default options for aggregation/interval unless you have a reason to change them.
- Create/enable.
Expected outcome: A flow log configuration exists and is active, writing into Logging.
Verification:
- In Observability & Management → Logging, confirm a log exists under your log group (name like vcn-flow-lab).
- In the VCN Flow Logs page, confirm status is enabled/active.
Step 6: Generate network traffic
Goal: Produce traffic so flow logs have something to record.
- SSH into the instance:
ssh -i /path/to/private_key opc@<public_ip>
- Generate outbound traffic (examples):
curl -I https://www.oracle.com
curl -I https://example.com
- Optional: generate DNS traffic and a few more connections:
for i in {1..10}; do
curl -s -o /dev/null -w "%{http_code}\n" https://example.com
done
Expected outcome: The instance makes outbound connections through the VCN, producing flow log records.
Verification: Your curl commands return HTTP status codes like 200 or 301.
Step 7: Search and view flow logs in Log Explorer
Goal: Confirm log ingestion and learn basic filtering workflows.
- Go to Observability & Management → Logging → Log Explorer.
- Set:
  - Compartment: lab-logging
  - Log group: net-lab-log-group
  - Log: vcn-flow-lab (or the flow log created)
  - Time window: “Last 15 minutes”
- Run a search (even an empty search) to display recent entries.
- Expand an entry and look for typical flow log fields (examples often include source/destination addresses, ports, protocol, bytes/packets, and an allow/deny action—field names vary by log format and OCI version).
Expected outcome: You can see recent flow log entries corresponding to your traffic.
Verification tips:
- Filter by the instance’s private IP (if the search UI supports field filtering).
- If logs are delayed, extend the time window to the last 60 minutes.
Step 8 (Optional): Archive logs to Object Storage using Service Connector Hub
Goal: Demonstrate a common production pattern: hot search in Logging + cheap archival in Object Storage.
8A) Create an Object Storage bucket
- Go to Storage → Object Storage & Archive Storage → Buckets.
- Compartment: lab-logging
- Create bucket:
  - Name: logging-archive-lab-<unique-suffix>
  - Default storage tier is fine for a lab
- Create.
Expected outcome: A bucket exists for archival.
8B) Create a Service Connector
- Go to Observability & Management → Service Connector Hub.
- Compartment: lab-logging
- Click Create Service Connector.
- Configure:
  - Source: Logging
  - Source log group/log: net-lab-log-group / vcn-flow-lab
  - Target: Object Storage
  - Target bucket: logging-archive-lab-...
- Create the connector.
Expected outcome: A connector exists that exports new log entries to Object Storage.
Verification:
- Wait a few minutes, then check the bucket for newly created objects/prefixes.
- Connector status should show as active/successful (exact UI varies).
If the connector requires IAM policies, the console often shows a policy suggestion. Apply it using your tenancy admin account or through your IAM admin. Always review suggested policies for least privilege.
Validation
Use this checklist to confirm the lab works end-to-end:
- [ ] Flow logs are enabled for your subnet/VNIC.
- [ ] Log group exists and contains a flow log.
- [ ] Log Explorer shows recent entries in the correct time range.
- [ ] (Optional) Object Storage bucket contains exported log objects.
- [ ] (Optional) Service Connector shows healthy status.
Troubleshooting
Common issues and realistic fixes:
No logs appear in Log Explorer
- Wrong region: Logging is regional. Confirm you are viewing the same region where the VCN/flow logs were created.
- Wrong time window: Expand to “Last 60 minutes”.
- Flow logs not active: Check flow log status in the VCN page.
- No traffic: Re-run the curl commands and ensure the instance has internet egress (route table + internet gateway + security rules).
- IAM issues (less common for viewing if you’re an admin): Ensure your user/group has permission to read log content in the compartment.
Flow logs enabled but still empty
- Flow logs record network flows; if your instance is idle or blocked, there may be nothing to record.
- Confirm the instance can resolve DNS and reach external sites.
- Verify security lists/NSGs and route rules.
Service Connector exports fail
- Missing IAM permissions for the connector to read from Logging and write to Object Storage.
- Bucket policy/compartment mismatch.
- Verify connector status details in Service Connector Hub.
Cleanup
To avoid ongoing charges and clutter, delete resources in reverse order:
- Service Connector Hub: delete the connector (if created).
- Object Storage bucket: delete objects in the bucket, then delete the bucket.
- Flow log configuration: disable/delete the flow log from the VCN/subnet.
- Compute instance: terminate flowlog-client-1.
- VCN: delete the VCN (the console can delete associated sub-resources).
- Logging resources: delete the log(s) (if required) and the log group net-lab-log-group.
- Compartment (optional): only after everything inside is deleted.
11. Best Practices
Architecture best practices
- Design compartments first: align log ownership with org structure (app compartments vs shared security compartment).
- Use separate log groups per environment (dev, test, prod) and per domain (net, app, security).
- For production, implement two-tier retention:
- short “hot” retention in Logging for fast search,
- long retention in Object Storage via Service Connector Hub.
IAM / security best practices
- Follow least privilege:
- Only security/SRE groups should have broad read log-content in production.
- Developers often need read access to dev logs, not prod.
- Use separation of duties:
- One group manages log configuration (log groups/logs/connectors),
- Another group reads logs (operations),
- Security may have separate access for investigations.
- Prefer dynamic groups and instance principals for agents/connectors where applicable (verify current best practice for your ingestion method).
Cost best practices
- Enable high-volume logs (like flow logs) only where necessary.
- Reduce retention for noisy logs; archive older logs.
- Tag logs/log groups and export destinations for cost allocation.
- Avoid duplicating exports to multiple expensive targets.
Performance best practices
- Keep log sources scoped and targeted (avoid “log everything everywhere”).
- Use consistent structured logging in apps (JSON) so searches are more precise (when your ingestion path preserves structure).
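As a sketch of what consistent structured logging looks like on the application side, the following Python uses the standard logging module with a JSON formatter. The field names are illustrative and nothing here is OCI-specific:

```python
import json
import logging

# Minimal structured-logging sketch: format each record as one JSON
# object so field-level filtering works downstream (when the ingestion
# path preserves structure). Field names are illustrative.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "requestId": getattr(record, "requestId", None),
        })

formatter = JsonFormatter()
record = logging.LogRecord("api", logging.INFO, __file__, 0,
                           "payment accepted", None, None)
record.requestId = "req-123"  # extra context attached by the app
line = formatter.format(record)
print(line)
```

Attach the formatter to your app’s handlers so every emitted line is one searchable JSON object rather than free text.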
Reliability best practices
- Treat log export pipelines as production services:
- monitor connector status,
- alert on failures,
- test recovery and replay patterns as needed.
Operations best practices
- Establish a standard incident workflow:
- confirm time sync and timestamps,
- define default queries/filters per service,
- document “what good looks like” for each environment.
- Regularly review:
- retention settings,
- access policies,
- archived bucket lifecycle policies.
Governance, tagging, naming
- Naming convention example:
- Log group: prod-net-logging, prod-app-logging
- Logs: vcn-flow-subnet-a, api-gw-access, lb-access
- Tag dimensions: environment, costCenter, application, ownerTeam, dataSensitivity
12. Security Considerations
Identity and access model
- Logging access is governed by OCI IAM.
- Key access patterns:
- Manage log groups/logs (administration)
- Read log content (sensitive access)
- Ingest log content (custom log ingestion sources)
- Export/routing permissions (Service Connector Hub)
Recommendation: Create dedicated IAM groups for:
– LoggingAdmins (manage configuration)
– LogReaders-Prod (read prod logs)
– LogReaders-Dev (read dev logs)
Encryption
- OCI services generally encrypt data at rest and in transit as part of the platform. For Logging-specific encryption controls (customer-managed keys, if supported), verify in official docs because capabilities can differ by service and region.
Network exposure
- Logging is accessed via OCI endpoints. If you have strict network controls, review OCI patterns for private access and service gateway usage for related services (Object Storage, Streaming) and confirm what applies to Logging in your region.
Secrets handling
- Do not log:
- passwords,
- API keys,
- auth tokens,
- private keys,
- sensitive personal data.
- Implement application-side redaction and log filtering before ingestion.
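A minimal example of application-side redaction in Python — the regex patterns are illustrative and must be tuned to your own credential formats; treat this as a starting point, not a complete filter:

```python
import re

# Illustrative redaction: mask obvious secret-like values before a log
# line ever leaves the process. Patterns are examples only.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|api[_-]?key|token)\s*[=:]\s*\S+"),
    re.compile(r"Bearer\s+[A-Za-z0-9._-]+"),
]

def redact(line: str) -> str:
    """Replace matches of each secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line

print(redact("login ok password=hunter2 user=alice"))
```

Run redaction as close to the log call as possible (for example, inside a logging filter or formatter) so unfiltered lines never reach disk or the ingestion agent.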
Audit and governance
- Ensure auditability of:
- who changed log retention,
- who created/modified routing connectors,
- who accessed log content (where OCI provides such visibility; audit trails may be available through OCI’s audit capabilities—verify your audit coverage).
Compliance considerations
- Map log sources and retention to controls (e.g., SOC 2, ISO 27001).
- Use separate compartments and buckets for regulated data.
- Apply Object Storage lifecycle, retention rules, and access controls for archived logs (where required).
Common security mistakes
- Granting broad log read access to all developers in production.
- Using long hot retention for sensitive logs without a need.
- Exporting logs to buckets with overly permissive policies.
- Logging secrets or PII in plaintext.
Secure deployment recommendations
- Keep production logs in dedicated compartments with strict policies.
- Export security-relevant logs to a security-owned archive bucket with restricted access.
- Use infrastructure-as-code and change control for logging configuration.
13. Limitations and Gotchas
Because OCI services evolve, confirm current limits in official docs. Common limitations/gotchas to plan for include:
- Regional boundaries: Logs do not automatically aggregate across regions; multi-region visibility requires configuring Logging in each region (and, if needed, exporting to a central destination).
- Service limits: Limits exist for number of log groups/logs and potentially ingestion/search behaviors. Check service limits in your tenancy.
- Ingestion latency: Logs may not appear instantly; design operational expectations accordingly.
- High-volume surprises: Flow logs and access logs can generate large volumes quickly.
- Retention mismatch: Keeping everything “hot” is expensive and often unnecessary; archive older logs.
- Schema variability: Different sources produce different fields; build queries per source type.
- Export pipeline complexity: Service connectors require IAM and monitoring; treat them as production components.
- Cross-compartment routing: Powerful, but easy to misconfigure IAM and ownership boundaries.
- Tooling expectations: Base Logging is not the same as a full SIEM or advanced log analytics platform; plan for escalation to Logging Analytics or external tools if needed.
14. Comparison with Alternatives
Oracle Cloud Logging is one piece of an observability stack. Here’s how it compares.
Key alternatives
- Within Oracle Cloud
- Logging Analytics: deeper parsing, enrichment, analytics (separate service).
- Monitoring + Alarms: metrics-based alerting rather than log search.
- Audit: records API calls and identity events (separate service, may integrate with logging workflows).
- Other clouds
- AWS CloudWatch Logs
- Azure Monitor Logs (Log Analytics)
- Google Cloud Logging
- Open-source / self-managed
- Elasticsearch/OpenSearch + Fluent Bit/Fluentd
- Grafana Loki
- Splunk (commercial, can be self-managed or SaaS)
Comparison table
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Oracle Cloud Logging | OCI-native centralized logs, basic search, routing | Tight OCI IAM/compartment integration; service logs; managed; easy start | Advanced analytics/correlation may require additional tooling | Default choice for OCI workloads needing centralized logs |
| OCI Logging Analytics | Deep log analytics in OCI | Parsing/enrichment, analytics workflows (verify features/pricing) | Additional cost/complexity | When you need more than basic search and retention |
| OCI Monitoring + Alarms | Numeric metrics and alerting | Strong for SLIs/SLOs; alarms; integrations | Not a replacement for log detail | Use alongside Logging for production ops |
| OCI Audit | Governance and API activity trails | Required for security governance | Different purpose than app/network logs | Use for compliance and security investigations |
| AWS CloudWatch Logs | AWS-native logging | Mature ecosystem | Not OCI-native; cross-cloud adds complexity | If your workloads are primarily in AWS |
| Azure Monitor Logs | Azure-native log analytics | Strong query language and analytics | Not OCI-native | If your workloads are primarily in Azure |
| Google Cloud Logging | GCP-native logging | Integrated with GCP tooling | Not OCI-native | If your workloads are primarily in GCP |
| Elastic/OpenSearch stack | Custom log analytics/search at scale | Full control; broad ecosystem | You manage scaling, upgrades, security | When you need custom pipelines and control |
| Grafana Loki | Cost-effective log aggregation (label-based) | Integrates with Grafana; good for cloud-native | Different query/search model; requires self-managed operations | For Kubernetes-heavy environments with Grafana |
| Splunk | Enterprise SIEM/log analytics | Powerful search/correlation; enterprise features | Can be expensive; vendor-specific | When SOC/enterprise standard requires Splunk |
15. Real-World Example
Enterprise example: regulated multi-compartment landing zone
- Problem: A financial services organization runs payment APIs and core databases on Oracle Cloud. They must meet compliance retention requirements and enable SOC investigations without granting broad access to developers.
- Proposed architecture:
- App compartments own their log groups for app/service logs.
- A security compartment contains:
- a centralized Object Storage archive bucket,
- Service Connector Hub connectors that export selected logs.
- IAM policies:
- SRE can read prod logs for operational troubleshooting.
- SOC can read security-relevant logs and access archives.
- Developers can read dev/test logs only.
- Why Logging was chosen:
- OCI-native governance (compartments/IAM),
- Fast deployment across many OCI resources,
- Easy export path to Object Storage for long retention.
- Expected outcomes:
- Lower MTTR during incidents,
- Clear audit posture with controlled access,
- Predictable log retention cost through hot+archive strategy.
Startup/small-team example: lean operations for a SaaS API
- Problem: A small team runs an API on OCI and needs visibility into traffic and errors without managing an ELK stack.
- Proposed architecture:
- One log group per environment.
- Enable key service logs (gateway/load balancer where used).
- Keep hot retention short (e.g., 7–14 days).
- Export only critical logs to Object Storage for longer retention.
- Why Logging was chosen:
- Minimal operational overhead,
- Works with OCI resources out of the box,
- Scales as the product grows (and can later feed a SIEM).
- Expected outcomes:
- Faster debugging during releases,
- Controlled costs by limiting retention and sources,
- A clean path to add analytics later.
16. FAQ
1) Is “Logging” the current official OCI service name?
Yes—Oracle Cloud Infrastructure refers to the service as Logging in Observability and Management. Always confirm naming in the official docs for your region: https://docs.oracle.com/en-us/iaas/Content/Logging/Concepts/loggingoverview.htm
2) Is Logging global or regional in Oracle Cloud?
Logging resources (log groups/logs) are regional. You must configure Logging separately in each region where you operate.
3) What is the difference between a log group and a log?
A log group is a container used to organize logs. A log is the actual resource that receives log entries from a specific source.
4) Can I ingest application logs into Logging?
Yes, through supported custom log ingestion methods (often via agents or ingestion mechanisms). The exact supported sources and steps can vary—verify current ingestion options in the official docs.
5) How do I control who can see log content?
Use OCI IAM policies to grant read log-content (or equivalent permissions) only to the right groups/roles in the correct compartments.
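For example, a statement of roughly this shape (the group and compartment names are illustrative) limits production log reads to a single group:

```text
allow group LogReaders-Prod to read log-content in compartment prod-logging
```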
6) Does Logging replace a SIEM?
No. Logging is primarily a centralized log store and search/routing layer. A SIEM adds correlation, detection rules, case management, and advanced security analytics.
7) How long can logs be retained in Logging?
Retention is configurable, but there are service-defined minimums/maximums depending on log type. For long retention, export to Object Storage. Verify retention limits in current docs.
8) What’s the recommended retention strategy?
Common pattern: short “hot” retention in Logging (days/weeks) + export to Object Storage for months/years.
9) How do I archive logs to Object Storage?
A common OCI-native approach is Service Connector Hub with Logging as the source and Object Storage as the target.
10) Can I stream logs to real-time consumers?
Yes, commonly by routing logs to OCI Streaming via Service Connector Hub, then consuming from stream clients.
11) Why don’t I see logs immediately after enabling a source?
There can be ingestion latency. Also confirm region, time window, and that traffic/events are actually occurring.
12) Do flow logs generate a lot of data?
They can. Flow logs are often high-volume in busy subnets. Enable selectively and keep retention short unless required.
13) Can I separate dev/test logs from prod logs?
Yes. Use compartments and/or distinct log groups and enforce IAM boundaries.
14) Can I use Terraform to manage Logging resources?
Often yes for infrastructure resources in OCI, including log groups/logs (provider support varies by resource). Verify current Terraform resource support in the OCI Terraform provider docs.
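As an illustrative sketch (verify resource names and arguments against the current OCI Terraform provider docs before use), a log group might be declared like this:

```hcl
# Illustrative only -- confirm against current OCI provider documentation.
resource "oci_logging_log_group" "net_lab" {
  compartment_id = var.compartment_ocid # assumed input variable
  display_name   = "net-lab-log-group"
  description    = "Lab log group managed as code"
}
```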
15) What should I avoid logging?
Avoid secrets and sensitive personal data. Implement redaction and secure logging practices in your applications.
16) How do I troubleshoot export failures to Object Storage?
Check Service Connector status details, confirm IAM policies (read from logs, write to bucket), and verify bucket existence/permissions.
17) Is Logging Analytics the same as Logging?
No. Logging is the base log collection/search/routing service. Logging Analytics is a separate service for deeper analytics (verify features and pricing).
17. Top Online Resources to Learn Logging
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official documentation | OCI Logging Overview | Canonical description of concepts, log groups/logs, and workflows: https://docs.oracle.com/en-us/iaas/Content/Logging/Concepts/loggingoverview.htm |
| Official documentation | OCI Logging tasks (search/view/manage) | Step-by-step tasks and references (navigate from overview): https://docs.oracle.com/en-us/iaas/Content/Logging/Concepts/loggingoverview.htm |
| Official documentation | VCN Flow Logs (Networking) | Practical source of logs for networking visibility (verify URL structure if it changes): https://docs.oracle.com/en-us/iaas/Content/Network/Concepts/vcnflowlogs.htm (verify in official docs) |
| Official documentation | Service Connector Hub Overview | How to route logs to Object Storage/Streaming and more: https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/overview.htm |
| Official pricing | Oracle Cloud Price List | Official, up-to-date pricing by service and region/SKU: https://www.oracle.com/cloud/price-list/ |
| Official tool | OCI Cost Estimator | Build estimates for ingestion/storage and downstream services: https://www.oracle.com/cloud/costestimator.html |
| Official free tier | Oracle Cloud Free Tier | Understand Always Free limits and trials: https://www.oracle.com/cloud/free/ |
| Official tooling | OCI CLI Installation | CLI for automation and scripting: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm |
| Official architecture | OCI Architecture Center | Reference architectures and best practices (search for logging/observability patterns): https://docs.oracle.com/en/solutions/ |
| Official videos | Oracle Cloud YouTube channel | Product overviews and demos (search “OCI Logging”): https://www.youtube.com/@OracleCloudInfrastructure |
18. Training and Certification Providers
| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
|---|---|---|---|---|
| DevOpsSchool.com | DevOps engineers, SREs, platform teams | DevOps practices, cloud operations, observability foundations | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Developers, DevOps beginners | SCM/DevOps lifecycle skills and toolchain learning | Check website | https://www.scmgalaxy.com/ |
| CLoudOpsNow.in | Cloud engineers, operations teams | Cloud operations and day-2 management topics | Check website | https://cloudopsnow.in/ |
| SreSchool.com | SREs, reliability engineers | SRE principles, incident response, observability | Check website | https://sreschool.com/ |
| AiOpsSchool.com | Ops teams, architects | AIOps concepts, automation, operational analytics | Check website | https://aiopsschool.com/ |
19. Top Trainers
| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
|---|---|---|---|
| RajeshKumar.xyz | DevOps/cloud training content (verify specific offerings) | Beginners to intermediate engineers | https://rajeshkumar.xyz/ |
| devopstrainer.in | DevOps training platform (verify portfolio) | DevOps engineers, students | https://devopstrainer.in/ |
| devopsfreelancer.com | DevOps freelancing/training resource (verify services) | Teams seeking short-term expertise | https://devopsfreelancer.com/ |
| devopssupport.in | DevOps support/training resource (verify services) | Ops teams needing implementation help | https://devopssupport.in/ |
20. Top Consulting Companies
| Company | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
|---|---|---|---|---|
| cotocus.com | Cloud/DevOps consulting (verify exact portfolio) | Architecture, implementation, operations | Designing log retention + archive pipeline; implementing Service Connector Hub routing | https://cotocus.com/ |
| DevOpsSchool.com | DevOps/cloud consulting and training | Platform engineering, DevOps process, tooling | Defining logging standards; building OCI landing zone observability patterns | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting (verify services) | CI/CD, operations, cloud migrations | Integrating OCI Logging with SIEM pipelines; operational runbooks and automation | https://devopsconsulting.in/ |
21. Career and Learning Roadmap
What to learn before Logging
- OCI fundamentals:
- compartments, regions, availability domains
- IAM users/groups/policies and dynamic groups
- Basic networking:
- VCN, subnets, routing, gateways, NSGs/security lists
- Linux fundamentals:
- reading logs, journald/syslog basics, time sync concepts
- Observability basics:
- logs vs metrics vs traces
- incident response lifecycle
What to learn after Logging
- Service Connector Hub advanced patterns (fan-out, filtering where supported)
- OCI Streaming for real-time pipelines
- Object Storage lifecycle, retention, and governance
- Logging Analytics (if you need deeper analytics)
- SIEM integration patterns (Splunk/Elastic) and security operations workflows
- Infrastructure as Code (Terraform) for repeatable observability setup
Job roles that use it
- Cloud Engineer / Cloud Operations
- DevOps Engineer
- Site Reliability Engineer (SRE)
- Platform Engineer
- Security Engineer / SOC Analyst (as a log consumer)
- Solutions Architect (designing governance and pipelines)
Certification path (OCI)
Oracle certifications change over time; verify the latest tracks on Oracle’s official certification site. Common starting points: – OCI Foundations (entry) – OCI Architect (associate/professional) – OCI DevOps-related certifications (for automation)
Project ideas for practice
- Build a multi-compartment logging layout (dev/test/prod + security).
- Implement flow logs for only critical subnets and define retention and archive.
- Route logs to Object Storage and apply lifecycle policies for 90/180/365-day tiers.
- Route selected logs to Streaming and implement a simple consumer that detects anomalies (e.g., repeated 401/403 patterns).
- Create a “golden runbook” for incident response using Log Explorer queries and time windows.
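The Streaming consumer idea above can be prototyped without any OCI dependency. The sketch below hard-codes sample (source IP, HTTP status) events and flags sources with repeated 401/403 responses; in a real pipeline the events would come from a stream client and the threshold/windowing would be more robust:

```python
from collections import Counter

# Toy anomaly check: flag a source IP that produces "threshold" or more
# 401/403 responses in a batch of events. Sample data is hard-coded
# here purely for illustration.
def find_suspicious_ips(events, threshold=3):
    """events: iterable of (source_ip, http_status). Returns flagged IPs."""
    failures = Counter()
    for ip, status in events:
        if status in (401, 403):
            failures[ip] += 1
    return {ip for ip, count in failures.items() if count >= threshold}

events = [
    ("10.0.0.5", 200), ("203.0.113.9", 401), ("203.0.113.9", 403),
    ("203.0.113.9", 401), ("10.0.0.5", 200), ("198.51.100.7", 401),
]
print(find_suspicious_ips(events))
```

A natural next step is to replace the batch Counter with a sliding time window and to emit findings to an alerting channel.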
22. Glossary
- OCI (Oracle Cloud Infrastructure): Oracle Cloud’s infrastructure platform (compute, networking, storage, identity, observability).
- Logging: OCI service for collecting, storing, searching, and routing logs.
- Log Group: A container resource in Logging used to organize logs.
- Log: A resource representing a stream of log entries from a source (service/custom).
- Log Explorer: Console interface for searching and filtering logs.
- Service Logs: Logs emitted by OCI services (e.g., networking, gateways) into Logging.
- Custom Logs: Logs ingested from customer workloads using supported ingestion paths.
- Compartment: OCI’s logical container for organizing resources and applying IAM policies.
- IAM Policy: A rule defining what a principal (user/group/service) can do in OCI.
- VCN (Virtual Cloud Network): OCI’s private network construct.
- VCN Flow Logs: Network flow records captured for VCN traffic and delivered to Logging.
- Service Connector Hub: OCI service for moving data (including logs) between OCI services.
- Object Storage: OCI service for storing objects, commonly used for log archives.
- Streaming: OCI service for real-time event/log streaming pipelines.
- Retention: How long logs are kept in the hot store before deletion/aging out.
- Least Privilege: Security principle of granting only the permissions required.
23. Summary
Oracle Cloud Logging is the OCI-native service in Observability and Management for centralizing logs from OCI services and supported custom sources. It matters because it reduces incident response time, improves governance through compartments and IAM, and enables practical retention strategies (hot search in Logging + archival to Object Storage).
Cost and security are the two biggest design themes: – Cost: manage ingestion volume and retention; archive long-term logs to Object Storage. – Security: restrict who can read log content; avoid logging secrets; use compartment boundaries and least privilege IAM.
Use Logging when you need fast, governed log visibility for OCI workloads and a clean path to export to analytics/SIEM systems. Next step: deepen your skills with Service Connector Hub routing patterns and (if needed) evaluate Logging Analytics for advanced log insights.