Category: Data Management
1. Introduction
Oracle Cloud Connector Hub is a managed integration service that moves data and events between Oracle Cloud Infrastructure (OCI) services and selected targets, with minimal operational overhead.
In simple terms: you pick a source (where data is produced), pick a target (where you want it to go), optionally apply tasks like filtering or transformation, and Connector Hub delivers the data continuously.
In technical terms, Connector Hub (often referenced in official OCI documentation as Service Connector Hub—verify the latest naming in your region’s OCI Console) provides service-managed connectors that read from supported OCI sources (for example, logs, metrics, streams—verify exact supported sources) and deliver to supported OCI targets (for example, Object Storage, Streaming, Functions, Logging Analytics—verify exact supported targets). The service is designed for near-real-time, operationally simple routing without building and operating custom ingestion pipelines.
What problem it solves: teams routinely need to export logs for retention, stream events to downstream processing, move operational telemetry into analytics tools, and connect multiple OCI services together. Building this yourself often means managing compute, scaling, retries, credentials, and failure handling. Connector Hub aims to standardize and simplify those patterns as a managed service.
Naming note (important): OCI has long used the name Service Connector Hub in documentation and APIs, while some consoles and discussions shorten it to Connector Hub. This tutorial uses Connector Hub as the primary name to match your requested service name, and calls out “Service Connector Hub” where it is relevant for finding official docs.
2. What is Connector Hub?
Official purpose
Connector Hub is an OCI-managed service for routing data from OCI services to other OCI services (and in some cases, to external endpoints through supported patterns—verify for your specific target). It is typically used to move observability and event data (logs, metrics, streaming messages) into storage, analytics, notifications, or processing services.
Core capabilities (conceptual)
- Create connectors (often called service connectors) that connect a source to a target
- Optionally apply tasks such as filtering, transformation, or invoking processing (for example, Functions—verify)
- Operate continuously with managed scaling and retry behavior (exact semantics and SLAs: verify in official docs)
- Use OCI IAM for access control and OCI audit/logging for governance
Major components
While naming can vary slightly across UI, API, and docs, the model generally includes:
- Connector (Service Connector): The main resource you create. It defines:
  - Source configuration (what to read)
  - Target configuration (where to write)
  - Optional tasks (how to process)
- Source: An OCI service producing data (for example, OCI Logging)
- Task(s): Optional processing steps (for example, filtering or invoking a function—verify available task types)
- Target: Destination service (for example, OCI Object Storage)
- Policies (IAM): Permissions so the connector can read from the source and write to the target
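As a concrete sketch, this component model maps onto an OCI CLI call. The `oci sch service-connector` command group exists in current CLI versions, but the exact flag names and JSON field shapes below are assumptions; verify them against the CLI reference before use:

```shell
# Hypothetical source/target definitions for a "logging -> Object Storage" connector.
# Field names ("kind", "logSources", etc.) are assumptions—verify in the CLI reference.
cat > source.json <<'EOF'
{ "kind": "logging",
  "logSources": [ { "compartmentId": "<COMPARTMENT_OCID>", "logGroupId": "_Audit" } ] }
EOF
cat > target.json <<'EOF'
{ "kind": "objectStorage", "namespace": "<NAMESPACE>", "bucketName": "<BUCKET_NAME>" }
EOF

# Create the connector resource (control plane); data starts flowing once active.
oci sch service-connector create \
  --compartment-id "<COMPARTMENT_OCID>" \
  --display-name "audit-to-objectstorage" \
  --source file://source.json \
  --target file://target.json
```

The same source/target/task triple is what the Console form collects; the CLI form is useful for repeatable deployments.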
Service type
- Managed cloud service (PaaS-style integration / routing)
- You do not manage servers, agents, or scaling infrastructure
Scope: regional, compartment, tenancy
- Connector Hub resources are typically regional and compartment-scoped in OCI (verify in your region’s docs/console).
- Policies are defined at tenancy level but can be scoped to compartments.
How it fits into the Oracle Cloud ecosystem
Connector Hub often sits in the middle of:
- Observability (Logging, Monitoring)
- Data Management (archival to Object Storage, feeding streaming pipelines, analytics ingestion)
- Event-driven compute (Functions)
- Security operations (log centralization, retention, SIEM-like patterns)
It acts as a “managed glue” so you can build data movement patterns without deploying custom integration runtimes.
3. Why use Connector Hub?
Business reasons
- Faster delivery of integrations: Common “export logs to storage” or “send stream to processing” patterns can be set up in minutes.
- Reduced operational burden: Less custom code, fewer pipelines to manage, fewer on-call issues.
- Cost control (often indirect): You avoid running always-on compute for routing tasks; costs are concentrated in the source/target services you already need.
Technical reasons
- Managed routing and delivery: Built-in retry/error handling (verify exact behavior).
- Standardized integration approach: Consistent patterns across teams and environments.
- IAM-based access: Central identity model instead of embedding long-lived credentials in scripts.
Operational reasons
- Simplified operations: Monitor connector status and failures from OCI tooling (verify exact metrics/logs available).
- Scales with demand: As source volume grows, Connector Hub is designed to scale (verify limits/quotas).
Security/compliance reasons
- Centralized policy control: Use OCI IAM policies to control who can create connectors and where data can flow.
- Auditability: Actions are auditable via OCI Audit (verify details).
- Data residency alignment: Because resources are regional, you can align with residency requirements by choosing the right region (verify cross-region patterns).
Scalability/performance reasons
- Near-real-time movement: Designed for continuous delivery (latency varies—verify).
- Backpressure/retries: Managed service handles transient failures more consistently than ad hoc scripts.
When teams should choose it
Choose Connector Hub when you need:
- Continuous, managed routing between supported OCI services
- Simple patterns like “logs → Object Storage” or “stream → function”
- Minimal operational overhead, strong IAM governance, and repeatable deployments
When they should not choose it
Avoid or reconsider Connector Hub when:
- You need a complex ETL/ELT workflow with many joins and transformations (consider OCI Data Integration, Data Flow, or external tools)
- You must deliver to an unsupported target (you may need custom compute, Kafka Connect, NiFi, etc.)
- You require very specific delivery guarantees/ordering semantics that Connector Hub does not provide (verify exact semantics)
- You need cross-cloud routing with advanced protocol support (you may need a message bus, API gateway, or integration platform)
4. Where is Connector Hub used?
Industries
- Financial services: audit log retention, security monitoring pipelines
- Healthcare: compliance-driven log archival
- Retail/e-commerce: operational telemetry aggregation
- SaaS/technology: multi-environment observability export
- Public sector: region-bound data routing and long-term retention
Team types
- Platform engineering teams building shared infrastructure
- SRE and operations teams centralizing logs/metrics
- Security teams exporting audit logs for retention and analysis
- Data engineering teams feeding streams into processing
- DevOps teams standardizing “day-2” operations patterns
Workloads
- Centralized logging and retention
- Streaming ingestion for near-real-time processing
- Event-driven automation
- Observability data export to analytics targets
Architectures
- Landing zone architectures with centralized logging compartments
- Multi-compartment application estates
- Event-driven microservices with streaming and functions
- “Data lake” patterns where Object Storage is the durable sink
Real-world deployment contexts
- Production: log export, audit retention, operational event processing
- Dev/test: validating integration flows, testing downstream processing with safe volumes
5. Top Use Cases and Scenarios
Below are realistic scenarios. Availability depends on supported sources/targets/tasks in your region—verify in official docs.
1) Archive OCI Audit logs to Object Storage (compliance retention)
- Problem: You need immutable-ish retention of audit activity for months/years.
- Why Connector Hub fits: Managed export from log sources to durable, low-cost storage.
- Example: Send audit logs from security compartments into a centralized Object Storage bucket with lifecycle policies.
2) Centralize application logs across compartments
- Problem: Logs are scattered across many app compartments and teams.
- Why it fits: Standard connectors per compartment to a central destination.
- Example: Each app exports its log group to a shared Object Storage bucket prefix for centralized search/ETL.
3) Stream logs into a real-time processing pipeline
- Problem: You need near-real-time detection/alerting beyond basic alarms.
- Why it fits: Route Logging to Streaming (if supported) to feed a processing application.
- Example: Logs → Streaming → consumer app → alerts and enriched events.
4) Trigger serverless workflows from streaming data
- Problem: You want to process messages without running servers.
- Why it fits: Connector can deliver to Functions (if supported) with managed invocation patterns.
- Example: Streaming topic → Function task → write enriched results to Object Storage.
5) Export Monitoring metrics to downstream analytics
- Problem: You need to store metrics history beyond default retention or analyze in external tools.
- Why it fits: Metrics source → target sink for retention/analysis (verify supported target).
- Example: Metrics → Object Storage for batch analytics in Data Flow.
6) Route telemetry into Logging Analytics (operational intelligence)
- Problem: You want centralized log analytics across services.
- Why it fits: Connector Hub can feed analytics targets (verify target support and licensing).
- Example: Application logs → Logging Analytics for dashboards and anomaly detection.
7) Near-real-time ingestion into a data lake
- Problem: You want a durable raw zone for later ETL.
- Why it fits: Connector Hub can continuously land data into Object Storage.
- Example: Events/logs → Object Storage “raw/” → nightly Data Flow jobs to curate Parquet.
8) Cross-team “self-service export” with guardrails
- Problem: Teams need exports, but security wants governance.
- Why it fits: IAM policies and compartment scoping provide guardrails.
- Example: Allow teams to create connectors only within their compartments and only to approved buckets.
9) Decouple producers from consumers
- Problem: Multiple consumers need the same data stream.
- Why it fits: Use Streaming as a fan-out mechanism (where supported).
- Example: Logging → Streaming; multiple consumers read and process independently.
10) Reduce custom glue code for integrations
- Problem: Many scripts and cron jobs move data around unreliably.
- Why it fits: Managed, consistent connectors with standardized monitoring.
- Example: Replace ad hoc Python scripts with connectors and IAM policies.
11) Security operations: suspicious activity detection pipeline
- Problem: You want to detect suspicious API calls or changes.
- Why it fits: Audit logs exported to a pipeline for detection logic.
- Example: Audit logs → Streaming → detection service → Notifications.
12) Multi-region strategy (careful)
- Problem: You need to move data between regions for DR or global analytics.
- Why it fits: Sometimes feasible using region endpoints/targets (verify supported cross-region patterns and costs).
- Example: Region A logs exported to Object Storage; replication or downstream copy handles region B availability.
6. Core Features
Because Connector Hub evolves, always confirm your region’s supported features in the official docs. The items below cover the core capabilities commonly associated with OCI Connector Hub / Service Connector Hub.
Feature 1: Managed connectors (source → target)
- What it does: Creates a continuously running managed route from a supported source to a supported target.
- Why it matters: Eliminates custom agents and “glue” compute.
- Practical benefit: Faster setup, fewer operational incidents, consistent deployment pattern.
- Limitations/caveats: Supported sources/targets vary by region and service updates—verify.
Feature 2: Compartment-based resource organization
- What it does: Connectors are created in compartments and can be governed by compartment policies.
- Why it matters: OCI compartments are the foundation for multi-team governance.
- Practical benefit: You can delegate management to app teams while protecting central sinks.
- Limitations/caveats: Cross-compartment writes require explicit IAM policy design.
Feature 3: IAM-controlled access to sources and targets
- What it does: Uses OCI IAM policies to authorize Connector Hub to read and write.
- Why it matters: No hard-coded credentials in scripts; consistent least-privilege.
- Practical benefit: Central security team can audit and control data flows.
- Limitations/caveats: Policy syntax must be exact; service principal naming must match OCI docs—verify.
Feature 4: Optional tasks (filtering / transformation / function invocation)
- What it does: Applies optional tasks to data in-flight (capabilities vary—verify).
- Why it matters: Reduces downstream processing load and noise.
- Practical benefit: Filter out debug logs, route only security events, or invoke serverless processing.
- Limitations/caveats: Not a full ETL engine; complex transforms may require downstream processing.
Feature 5: Error handling and retries (managed)
- What it does: Retries delivery on transient failures and surfaces failures via status/metrics/logs (verify).
- Why it matters: Reduces manual reprocessing compared to cron scripts.
- Practical benefit: More reliable operational pipelines.
- Limitations/caveats: Delivery guarantees (at-least-once vs exactly-once) and ordering should be verified.
Feature 6: Integration with OCI Observability
- What it does: Connector health and activity can be monitored (verify specifics: metrics, logs, alarms).
- Why it matters: Production routing needs operational visibility.
- Practical benefit: Build alarms on failed deliveries or throttling.
- Limitations/caveats: The specific metrics and dimensions can differ—verify in metric reference.
Feature 7: Native integration to Object Storage for durable landing
- What it does: Writes data to OCI Object Storage buckets (commonly used for archival and data lakes).
- Why it matters: Object Storage is durable, cost-effective, and integrates with analytics.
- Practical benefit: Long-term retention and downstream batch processing become straightforward.
- Limitations/caveats: Object naming/partitioning conventions and file formats should be verified for your connector type.
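For example, a date-partitioned prefix, which many data lakes adopt for downstream listing and ETL. The exact layout Connector Hub uses for object names is not guaranteed, so treat this as a convention sketch, not the service's actual naming:

```shell
# Build a hypothetical yyyy/mm/dd prefix for organizing exported logs.
# The actual object naming Connector Hub uses may differ; verify for your connector.
PREFIX="audit/$(date -u +%Y/%m/%d)/"
echo "Today's partition prefix: ${PREFIX}"
```

Downstream batch jobs can then list only one day's partition instead of scanning the whole bucket.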
Feature 8: Streaming integration (where supported)
- What it does: Reads from or writes to OCI Streaming as a decoupling layer.
- Why it matters: Supports near-real-time processing and fan-out.
- Practical benefit: Multiple consumers can process the same data independently.
- Limitations/caveats: Streaming retention, partitions, throughput units, and costs must be planned.
7. Architecture and How It Works
High-level architecture
Connector Hub runs as an OCI-managed service. You define a connector resource that references:
- A source (e.g., Logging)
- Optional task(s) (e.g., filter/function)
- A target (e.g., Object Storage)
Connector Hub then continuously reads from the source and delivers to the target using OCI internal service-to-service integration controlled by IAM policies.
Data flow vs control flow
- Control plane: Creating/updating connectors, IAM policies, configuration changes, and status checks.
- Data plane: The actual movement of data from source to target.
Integrations with related services (examples)
Common integrations include (verify availability):
- OCI Logging as a source for audit/service/custom logs
- OCI Object Storage as a durable sink
- OCI Streaming for event streaming pipelines
- OCI Functions for serverless processing
- OCI Monitoring for metrics-based pipelines (where supported)
- OCI Notifications for alerting (often downstream of processing)
Dependency services
- IAM for authorization
- Compartments for resource organization
- Source/target services (Logging, Object Storage, etc.)
- Optionally Vault (if your downstream processing needs secrets), Events, Functions, Streaming
Security/authentication model
- Human and automation users authenticate to OCI to create/manage connectors.
- The Connector Hub service itself operates with OCI service identity/service principal to access sources/targets based on policies you configure (exact service principal name and policy verbs: verify in official docs).
Networking model
- Typically service-to-service traffic stays on OCI’s backbone.
- If you integrate with external endpoints, confirm whether it requires NAT, public endpoints, private endpoints, or service gateways—verify supported patterns.
Monitoring/logging/governance considerations
- Use OCI Audit to track who created/changed connectors.
- Use OCI Monitoring to alarm on connector failures (verify metrics).
- Apply tags to connectors for cost allocation and governance.
- Use separate compartments for:
  - Source producers (application compartments)
  - Central sinks (security/log archive compartment)
Simple architecture diagram (Mermaid)
```mermaid
flowchart LR
    A["OCI Source Service<br/>(e.g., Logging)"] --> B["Connector Hub<br/>(Managed Connector)"]
    B --> C["OCI Target Service<br/>(e.g., Object Storage)"]
```
Production-style architecture diagram (Mermaid)
```mermaid
flowchart TB
    subgraph Tenancy["OCI Tenancy"]
        subgraph AppComp["App Compartments"]
            L1["OCI Logging<br/>App Logs"]:::svc
            S1["OCI Streaming<br/>Events Stream"]:::svc
        end
        subgraph IntegrationComp["Integration Compartment"]
            CH["Connector Hub<br/>Service Connectors"]:::svc
            FN["OCI Functions<br/>Optional Task"]:::svc
        end
        subgraph SecurityComp["Security / Shared Services Compartment"]
            OS[("Object Storage<br/>Log Archive Bucket")]:::svc
            LA["Logging Analytics<br/>(Optional)"]:::svc
            MON["Monitoring + Alarms"]:::svc
            AUD["Audit Logs"]:::svc
        end
    end
    L1 --> CH
    S1 --> CH
    CH --> FN
    FN --> OS
    CH --> OS
    CH --> LA
    MON -. monitors .-> CH
    AUD -. audits .-> CH
    classDef svc fill:#eef,stroke:#446,stroke-width:1px
```
8. Prerequisites
Tenancy/account requirements
- An active Oracle Cloud (OCI) tenancy
- Access to an OCI region where Connector Hub is available (verify regional availability)
Permissions / IAM roles
You generally need:
- Permission to manage connectors in the compartment where you create them
- Permission for Connector Hub (service principal) to read from the source and write to the target

Because policy statements must be exact for OCI service principals and resource families, verify the current policy examples in official docs for Connector Hub / Service Connector Hub:
- Official docs landing page: https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/home.htm (verify)
Billing requirements
- Connector Hub pricing may be $0 in some models, but you still pay for:
  - Object Storage capacity/requests
  - Streaming throughput/retention
  - Function invocations
  - Logging Analytics ingestion (if used)
  - Network egress (if data leaves region/OCI)
- Ensure your tenancy has billing enabled if you will generate usage.
Tools needed
- OCI Console access (browser)
- Optional but recommended:
  - OCI CLI: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
  - A terminal environment for validation and cleanup
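Once the CLI is installed, a quick sanity check confirms it is configured and can authenticate. Both commands below are read-only; they exist in current CLI versions, but verify output shapes for your tenancy:

```shell
# Verify the CLI can authenticate and reach your tenancy (read-only calls).
oci os ns get                     # prints your Object Storage namespace
oci iam region-subscription list  # lists the regions your tenancy subscribes to
```

If either call fails, fix your `~/.oci/config` profile before starting the lab.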
Region availability
- Connector Hub is not guaranteed in every region or may have feature differences.
- Verify in the OCI Console (region selector) and service documentation for supported regions.
Quotas/limits
- Connector count, throughput, and supported source/target combinations may be governed by service limits.
- Review OCI service limits in the Console: Governance & Administration → Limits, Quotas and Usage (names may vary).
Prerequisite services
For the hands-on lab in this tutorial, you need:
- OCI Logging (Audit logs are usually available by default)
- OCI Object Storage (to create a bucket)
9. Pricing / Cost
Current pricing model (how to think about it)
Connector Hub pricing can change and may be represented differently depending on OCI updates and contracts. In many OCI setups, the Connector Hub service itself is not the main cost driver; instead, costs come from the source/target services and data volume.
To be accurate, verify Connector Hub pricing on the official OCI pricing pages and cost estimator.
Official pricing references:
- OCI pricing landing: https://www.oracle.com/cloud/pricing/
- OCI price list: https://www.oracle.com/cloud/price-list/
- OCI cost estimator (if available for your tenancy): https://www.oracle.com/cloud/costestimator.html (verify current URL)
Pricing dimensions to consider
Even if Connector Hub is priced at $0 (verify), your solution cost depends on:
- Data volume
  - GB/day of logs exported
  - Messages/sec in Streaming
- Target service costs
  - Object Storage: capacity (GB-month), operations (PUT/GET), lifecycle transitions
  - Streaming: partitions, throughput units, retention
  - Functions: invocations, duration, memory allocation
  - Logging Analytics: ingestion and retention (often licensed/paid—verify)
- Network and data transfer
  - Intra-region service traffic is typically not billed as internet egress, but rules vary.
  - Cross-region replication or delivery may incur data transfer charges.
  - Outbound to internet endpoints (if used) can incur egress charges.
- Indirect operational costs
  - People-time saved vs building/operating custom pipelines
  - Reduced incidents compared to scripts/agents
Free tier
OCI has a Free Tier program, but eligibility varies by account and region. Verify:
- https://www.oracle.com/cloud/free/
Cost drivers (what will increase your bill)
- High log volume exported continuously
- Many small objects written to Object Storage (request costs)
- Large streaming throughput and high retention
- Frequent function invocations for in-flight processing
- Cross-region data movement
Hidden or indirect costs
- Object Storage request pattern: If Connector Hub writes many small objects frequently, request counts can rise.
- Downstream analytics ingestion: Logging Analytics or other analytics tools may charge per GB ingested.
- Retention growth: Long retention without lifecycle policies causes storage growth.
How to optimize cost
- Filter at the source/task level (if supported) to reduce volume.
- Use Object Storage lifecycle policies to move older data to Archive tier (where appropriate).
- Aggregate data into fewer objects (if configurable—verify).
- Avoid cross-region routing unless required.
- Use tags for cost reporting and showback.
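The lifecycle-policy point can be sketched with the CLI. The `oci os object-lifecycle-policy put` command exists, but the JSON rule field names and the 30/365-day windows below are assumptions; verify them against the Object Storage documentation:

```shell
# Hypothetical rules: archive exported audit logs after 30 days, delete after 365.
# Field names (name/action/timeAmount/...) are assumptions—verify in the docs.
cat > lifecycle-rules.json <<'EOF'
[
  { "name": "archive-old-audit", "action": "ARCHIVE",
    "timeAmount": 30, "timeUnit": "DAYS", "isEnabled": true,
    "objectNameFilter": { "inclusionPrefixes": ["audit/"] } },
  { "name": "delete-expired-audit", "action": "DELETE",
    "timeAmount": 365, "timeUnit": "DAYS", "isEnabled": true,
    "objectNameFilter": { "inclusionPrefixes": ["audit/"] } }
]
EOF

oci os object-lifecycle-policy put \
  --bucket-name "audit-log-archive-<unique-suffix>" \
  --items file://lifecycle-rules.json
```

Scoping the rules to the `audit/` prefix means other objects in the bucket are untouched.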
Example low-cost starter estimate (no fabricated numbers)
A low-cost starter lab typically includes:
- 1 connector exporting low-volume audit logs
- 1 Object Storage bucket in Standard tier
- Short retention / manual cleanup

Costs depend on:
- Object Storage GB stored and operations
- Any additional analytics services enabled

Use the OCI Cost Estimator with:
- Object Storage: expected GB stored + monthly operations
- (Optional) Streaming/Functions if you add tasks
Example production cost considerations
For production, cost planning should include:
- Expected daily log volume (GB/day) and growth rate
- Retention policy (days/months/years)
- Storage tiering strategy (Standard → Archive)
- Throughput requirements (if using Streaming)
- Monitoring and alerting overhead (usually minimal, but depends on metrics and alarms)
- Cross-compartment and cross-region governance requirements
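A back-of-the-envelope growth calculation helps size the retention decision. The volumes below are purely illustrative placeholders, not real rates or prices:

```shell
# Illustrative only: steady-state storage for a fixed retention window.
GB_PER_DAY=5          # assumed export volume (placeholder)
RETENTION_DAYS=90     # assumed retention policy (placeholder)
STEADY_STATE_GB=$((GB_PER_DAY * RETENTION_DAYS))
echo "Steady-state stored volume: ${STEADY_STATE_GB} GB"
# Multiply by your region's per-GB-month price (see the OCI price list)
# for a rough monthly storage cost; add request and tiering costs separately.
```

With these placeholder numbers the bucket plateaus at 450 GB once deletions keep pace with ingest.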
10. Step-by-Step Hands-On Tutorial
Objective
Create a Connector Hub connector that exports OCI Audit logs (via OCI Logging) to an OCI Object Storage bucket for retention, then validate delivery and clean up.
This is a common “first production use case” because audit logs are valuable and Object Storage is a low-cost durable sink.
Lab Overview
You will:
1. Create (or choose) a compartment for the lab
2. Create an Object Storage bucket
3. Create a Connector Hub connector:
   - Source: OCI Logging (Audit logs)
   - Target: Object Storage bucket
4. Generate a few audit events
5. Validate objects appear in the bucket
6. Troubleshoot if data doesn’t arrive
7. Clean up resources to stop ongoing costs
Note: UI labels vary by OCI Console version. If you see Service Connector Hub in the console, that is the same service family as Connector Hub used in this tutorial.
Step 1: Prepare a compartment and naming
- In OCI Console, open the navigation menu.
- Go to Identity & Security → Compartments.
- Create a compartment like:
  - Name: `lab-connector-hub`
  - Description: `Connector Hub lab compartment`
Expected outcome: You have a dedicated compartment to isolate policies and resources.
Tip: If you cannot create compartments, use an existing sandbox compartment where you have permissions.
Step 2: Create an Object Storage bucket (target)
- Go to Storage → Object Storage & Archive Storage → Buckets
- Select the compartment `lab-connector-hub`
- Click Create Bucket
- Use:
  - Bucket name: `audit-log-archive-<unique-suffix>`
  - Default storage tier: Standard (typical default)
  - Encryption: Oracle-managed keys (default) unless you require customer-managed keys
Expected outcome: Bucket is created and empty.
Optional CLI validation (list buckets):
```shell
oci os bucket list --compartment-id <COMPARTMENT_OCID>
```
Step 3: Ensure Audit logs are accessible in OCI Logging (source)
Audit logs are typically available by default in OCI, but access may be compartment-scoped.
- Go to Observability & Management → Logging → Logs
- Select your compartment (often the tenancy root or the relevant compartment)
- Look for Audit logs (naming and location can vary)
If you can view audit events, you’re ready.
Expected outcome: You can see Audit log entries in the Logging UI.
If you can’t see them:
- Confirm you’re in the correct compartment
- Confirm you have permission to read logs
- Verify audit log availability and scope in your tenancy (some orgs centralize access)
Step 4: Create required IAM policies (critical)
Connector Hub needs permission to:
- Read from the source (Logging/Audit)
- Write to the target (Object Storage bucket)

Because OCI policy statements must be exact—and service principal naming can change—use the official Connector Hub policy examples for your tenancy:
- https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/Tasks/service-connector-policies.htm (verify exact URL path)
Below is a typical pattern you should adapt and verify.
4.1 Allow admins to manage connectors in the lab compartment
Create a policy in your tenancy (or compartment) that allows your user group to manage connectors:
Example (verify group name and syntax):
- Identity & Security → Policies → Create Policy
- Compartment: tenancy root (common) or appropriate parent

```text
Allow group <YOUR_GROUP_NAME> to manage serviceconnectors in compartment lab-connector-hub
```
4.2 Allow the Connector Hub service to write objects to your bucket compartment
Example (verify service name and permissions):
```text
Allow service serviceconnectorhub to manage object-family in compartment lab-connector-hub
```
4.3 Allow the Connector Hub service to read logs (if required)
Depending on how Logging is scoped, you may need something like (verify):
```text
Allow service serviceconnectorhub to read log-content in compartment <COMPARTMENT_WITH_AUDIT_LOGS>
```
Expected outcome: Policies are created and in “ACTIVE” state.
Important: If you get permission errors later, they almost always come from:
- Incorrect compartment selection in the policy
- Wrong service principal name
- Missing permission for reading log content or writing objects
Step 5: Create a connector in Connector Hub
- Go to the OCI Console menu and locate Connector Hub or Service Connector Hub (exact location varies).
- Select compartment: `lab-connector-hub`
- Click Create Connector (or Create Service Connector)

Configure:
- Name: `audit-to-objectstorage`
- Source: Logging
  - Choose Audit logs (or the relevant Audit log group/log)
- Target: Object Storage
  - Choose the bucket: `audit-log-archive-<unique-suffix>`
  - Choose a prefix like `audit/` (if supported)
- Tasks: None (keep it simple for the first lab), or select only safe filtering if available and clearly documented.
Create/Save the connector.
Expected outcome: Connector is created and shows status such as Active or Running (exact wording varies).
Step 6: Generate audit events (so there is something to export)
Now perform a few actions that produce audit events. For example:
- List buckets
- Create and delete a test object
- View compartment details
A simple action:
1. Open your bucket
2. Upload a small text file test.txt
3. Delete it after a minute
Expected outcome: Audit log entries should be generated within OCI Audit/Logging.
Step 7: Validate delivery to Object Storage
There can be a delay between audit event generation and export delivery depending on batching and service behavior.
7.1 Check the bucket in Console
- Go to your bucket
- Browse objects under the prefix (e.g., `audit/`)
You should see one or more objects created by the connector.
Expected outcome: Objects appear in the bucket containing exported log data.
7.2 Optional: Validate with OCI CLI
List objects (object listing is bucket-scoped; the namespace is taken from your CLI configuration, or pass `--namespace-name` explicitly):

```shell
oci os object list \
  --bucket-name audit-log-archive-<unique-suffix>
```
Download one object to inspect:

```shell
oci os object get \
  --bucket-name audit-log-archive-<unique-suffix> \
  --name <OBJECT_NAME_FROM_LIST> \
  --file exported-audit-log.json
```
Inspect:

```shell
head -n 50 exported-audit-log.json
```
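To peek at what arrived, a small grep sketch works even before you know the exact schema. The `eventName` field and the one-record-per-line layout are guesses at the exported audit format; adjust once you see the real data:

```shell
# Create a tiny stand-in file so this sketch is self-contained;
# in the lab, use the real exported-audit-log.json you downloaded instead.
cat > exported-audit-log.json <<'EOF'
{"data": {"eventName": "GetBucket"}}
{"data": {"eventName": "PutObject"}}
{"data": {"eventName": "GetBucket"}}
EOF

# Record count (assumes one JSON record per line—verify your export's layout).
wc -l < exported-audit-log.json

# Tally event names (the "eventName" field is a hypothetical record shape).
grep -o '"eventName": "[^"]*"' exported-audit-log.json | sort | uniq -c
```

Even a rough tally like this quickly confirms the connector is exporting the log you intended.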
What you should see:
- JSON records or structured log entries (format depends on connector and source type—verify)
Validation
You have successfully completed the lab if:
- The connector is in an active/running state
- Audit events are visible in OCI Logging
- Objects containing exported audit logs appear in the Object Storage bucket
- You can download and inspect exported data
Troubleshooting
Issue 1: Connector stuck in “Failed” or not delivering
Likely causes:
- Missing IAM permissions for the Connector Hub service to write to Object Storage
- Incorrect bucket/compartment selection
- The connector cannot read log content due to policy scope

Fix:
- Re-check the policies against the official policy examples
- Verify the compartment names/OCIDs match the source/target locations
- Verify the service principal name is correct for your region/tenancy (verify in docs)
Issue 2: No objects show up in the bucket
Likely causes:
- Not enough events generated
- Delivery delay due to batching
- Wrong log selected as source

Fix:
- Generate more audit events
- Wait 10–20 minutes (timing varies—verify)
- Confirm the connector source points to the correct audit log group/log
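While waiting, a simple polling loop beats refreshing the console by hand. This sketch assumes a configured OCI CLI; `--query` is the CLI's standard JMESPath support, and the bucket name is a placeholder:

```shell
# Poll the target bucket every 60s for up to 20 minutes.
for i in $(seq 1 20); do
  COUNT=$(oci os object list \
    --bucket-name "audit-log-archive-<unique-suffix>" \
    --query 'length(data)' --raw-output 2>/dev/null)
  echo "attempt ${i}: ${COUNT:-0} objects"
  [ "${COUNT:-0}" -gt 0 ] && break
  sleep 60
done
```

If the count is still zero after the loop, revisit the IAM policies and the connector's source selection.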
Issue 3: Permission errors when creating the connector
Likely causes:
- Your user/group lacks permission to manage connectors
- You are in the wrong compartment

Fix:
- Add/adjust the policy:

```text
Allow group <YOUR_GROUP_NAME> to manage serviceconnectors in compartment lab-connector-hub
```

- Verify group membership and policy attachment location
Issue 4: Exported data format isn’t what you expected
Explanation:
- Export format can vary by source type and connector implementation.
- Some connectors write batches rather than per-event objects.

Fix:
- Confirm the export format in official docs for your source/target combination.
Cleanup
To stop ongoing storage growth and keep your tenancy tidy:
1. Delete the connector
   - Go to Connector Hub / Service Connector Hub
   - Select `audit-to-objectstorage`
   - Delete
2. Delete exported objects in the bucket
   - Bucket → Objects → Delete objects (or delete the bucket if empty)
3. Delete the bucket
   - Object Storage → Buckets → Delete bucket
   - If the bucket is not empty, remove all objects first
4. Remove the IAM policies you created for the lab if they are no longer needed
5. Delete the compartment `lab-connector-hub` (only if you created it solely for this lab and it’s empty)
Expected outcome: No connector remains running, and no bucket remains accumulating data.
11. Best Practices
Architecture best practices
- Use compartments to separate producers and sinks: Keep central log archive buckets in a security/shared compartment.
- Standardize naming: Include environment, source, and target in connector names, e.g., `prod-audit-to-archive-bucket`.
- Design for downstream processing: Use clear Object Storage prefixes (e.g., `audit/yyyy/mm/dd/` if supported—verify) to simplify analytics.
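As a sketch of the prefix idea, a naming convention can be centralized in a tiny helper that downstream jobs and validation scripts share. The `audit/yyyy/mm/dd/` layout here is an assumption for illustration; match it to whatever your connector actually writes (verify in the docs).

```python
from datetime import datetime, timezone

def object_prefix(source: str, ts: datetime) -> str:
    """Build a date-partitioned Object Storage prefix, e.g. audit/2024/05/17/."""
    return f"{source}/{ts.year:04d}/{ts.month:02d}/{ts.day:02d}/"

# Example: a UTC timestamp maps to one daily partition for analytics.
ts = datetime(2024, 5, 17, 9, 30, tzinfo=timezone.utc)
prefix = object_prefix("audit", ts)
print(prefix)  # audit/2024/05/17/
```

Keeping the prefix logic in one place makes it easy for Spark/Data Flow jobs to prune partitions by date instead of scanning the whole bucket.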
IAM/security best practices
- Least privilege: Grant Connector Hub only the minimum required:
- Read from specific log groups/logs
- Write to a specific bucket/prefix (if prefix scoping is supported—verify)
- Separate duties: Admins manage policies; app teams manage connectors within allowed boundaries.
- Avoid wildcard policies at tenancy root unless required.
Cost best practices
- Filter aggressively if supported (drop noisy debug logs).
- Lifecycle policies in Object Storage:
- Standard → Archive after N days
- Delete after compliance retention is met (if allowed)
- Control object sprawl: If configuration allows batching, prefer fewer larger objects over many tiny ones (verify for your connector type).
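To illustrate the object-sprawl point, here is a minimal, hypothetical batching sketch in plain Python (not an OCI API): group many small records into fewer, larger payloads before writing them as objects, which cuts per-request charges.

```python
from typing import Iterable, Iterator

def batch_records(records: Iterable[str], max_batch_bytes: int) -> Iterator[str]:
    """Yield newline-joined batches whose encoded size stays under max_batch_bytes."""
    batch, size = [], 0
    for rec in records:
        rec_size = len(rec.encode("utf-8")) + 1  # +1 for the newline separator
        if batch and size + rec_size > max_batch_bytes:
            yield "\n".join(batch)  # flush the current batch as one object payload
            batch, size = [], 0
        batch.append(rec)
        size += rec_size
    if batch:
        yield "\n".join(batch)

# 1,000 tiny events become a handful of larger payloads instead of 1,000 PUTs.
events = [f'{{"id": {i}}}' for i in range(1000)]
batches = list(batch_records(events, max_batch_bytes=4096))
print(len(batches))
```

Whether Connector Hub itself batches for you depends on the connector type (verify); this pattern applies when you control the writer, e.g., in a Functions task or a custom consumer.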
Performance best practices
- Know throughput limits: Check Connector Hub limits and Streaming/Object Storage limits.
- Avoid chatty downstream tasks: If using Functions, ensure it can handle burst traffic and has idempotent processing.
Reliability best practices
- Assume at-least-once delivery unless docs explicitly guarantee exactly-once (verify).
- Make downstream processing idempotent: Deduplicate by event ID/time where possible.
- Alarm on failures: Create Monitoring alarms on connector errors (verify metric names).
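A minimal idempotent-consumer sketch follows, assuming each event carries a unique ID field (the `eventId` key is an assumption; adapt it to your actual payload schema):

```python
import json

def process_once(events, seen_ids, handler):
    """Apply handler to each event at most once, keyed by its 'eventId'."""
    for raw in events:
        event = json.loads(raw)
        key = event["eventId"]   # assumed field name; use your payload's real ID
        if key in seen_ids:      # duplicate caused by at-least-once delivery
            continue
        handler(event)
        seen_ids.add(key)        # in production, persist this set durably

# At-least-once delivery: 'e1' arrives twice but is handled only once.
handled = []
seen = set()
batch = ['{"eventId": "e1"}', '{"eventId": "e2"}', '{"eventId": "e1"}']
process_once(batch, seen, handled.append)
print([e["eventId"] for e in handled])  # ['e1', 'e2']
```

The in-memory `set` is only for illustration; a real consumer would back the dedup state with a database, object markers, or stream offsets so restarts do not reprocess old events.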
Operations best practices
- Tag everything: cost-center, environment, application, owner.
- Document each connector: source, target, purpose, data classification.
- Runbook for failures: include policy checks, target service health, and backlog handling.
Governance/tagging/naming best practices
- Naming pattern examples:
  - Connector: `env-source-to-target-purpose`
  - Bucket: `env-log-archive-app`
- Tag keys: `Environment=prod|dev`, `DataClass=public|internal|confidential`, `Owner=team-name`, `CostCenter=...`
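A tagging convention like this is easy to enforce with a small check in CI or automation. The keys and allowed values below mirror the examples above; treat them as your own convention, not an OCI requirement.

```python
REQUIRED_TAGS = {"Environment", "DataClass", "Owner", "CostCenter"}
ALLOWED = {
    "Environment": {"prod", "dev"},
    "DataClass": {"public", "internal", "confidential"},
}

def missing_or_invalid(tags: dict) -> list:
    """Return a list of problems with a resource's tags, empty if compliant."""
    problems = [f"missing tag: {k}" for k in sorted(REQUIRED_TAGS - tags.keys())]
    for key, allowed in ALLOWED.items():
        if key in tags and tags[key] not in allowed:
            problems.append(f"invalid value for {key}: {tags[key]}")
    return problems

tags = {"Environment": "prod", "DataClass": "secret", "Owner": "team-logging"}
print(missing_or_invalid(tags))
# ['missing tag: CostCenter', 'invalid value for DataClass: secret']
```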
12. Security Considerations
Identity and access model
- User identities (human/automation) create and manage connectors via IAM.
- Connector Hub service identity accesses sources/targets according to policies.
- Use groups and dynamic groups where appropriate (dynamic groups typically for compute identities; Connector Hub is a service principal—verify exact model).
Encryption
- In transit: OCI service-to-service traffic is typically encrypted (verify specifics).
- At rest: Object Storage encrypts by default; choose:
- Oracle-managed keys (default)
- Customer-managed keys in OCI Vault (if required)
Network exposure
- Connector Hub is a managed service; you typically do not place it in a VCN.
- If your targets involve public endpoints or cross-region, review:
- Data egress
- Public exposure
- Allowed endpoints (verify supported configurations)
Secrets handling
- Prefer IAM-based access and resource principals/service principals.
- If using Functions as a task and it needs secrets (API keys, DB passwords):
- Store secrets in OCI Vault
- Grant the function least-privilege access to secrets
- Avoid embedding secrets in code or environment variables unless properly managed
Audit/logging
- Use OCI Audit to track:
- Connector creation/update/deletion
- Policy changes
- Export audit logs to Object Storage (as shown) for retention.
Compliance considerations
- Data classification: Logs can contain sensitive data (user IDs, IPs, request payload fragments).
- Residency: Choose region appropriately; document cross-region flows.
- Retention: Define retention and deletion schedules aligned with regulatory requirements.
Common security mistakes
- Overly broad policies like:
- “manage all-resources in tenancy”
- Writing sensitive logs to buckets without restricted access
- No lifecycle policy leading to uncontrolled retention
- No monitoring/alerting for connector failure (silent data loss risk)
Secure deployment recommendations
- Use a dedicated “log archive” compartment with tight access.
- Restrict bucket access to a limited group; deny public access.
- Enable object versioning or retention features if needed (Object Storage features—verify suitability).
- Build alarms and periodic validation checks (e.g., confirm objects arrive daily).
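The "confirm objects arrive daily" check can be as simple as comparing expected date prefixes against object names listed from the bucket. The listing call itself is omitted here, and the `audit/yyyy/mm/dd/` layout and object names are illustrative assumptions:

```python
from datetime import date, timedelta

def missing_days(object_names, source: str, start: date, end: date) -> list:
    """Return dates in [start, end] with no object under source/yyyy/mm/dd/."""
    gaps, day = [], start
    while day <= end:
        prefix = f"{source}/{day.year:04d}/{day.month:02d}/{day.day:02d}/"
        if not any(name.startswith(prefix) for name in object_names):
            gaps.append(day.isoformat())
        day += timedelta(days=1)
    return gaps

# Objects exist for May 1 and May 3 but not May 2, so the gap is flagged.
names = ["audit/2024/05/01/a.json", "audit/2024/05/03/b.json"]
print(missing_days(names, "audit", date(2024, 5, 1), date(2024, 5, 3)))
# ['2024-05-02']
```

Run a check like this on a schedule and raise an alert when it returns a non-empty list; a silent connector failure then surfaces within a day instead of at audit time.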
13. Limitations and Gotchas
Because limits and supported integrations can change, confirm all items in official docs for your region.
Known limitations (common patterns)
- Supported sources/targets are limited: You can only connect what OCI supports in Connector Hub.
- Not a full ETL tool: Tasks are not equivalent to a data integration platform.
- Delivery semantics: Often at-least-once; ordering not guaranteed (verify).
- Batching behavior: Exports may arrive in batches, not per event.
- Latency variability: Export can lag due to batching, throttling, or downstream limits.
Quotas
- Maximum connectors per compartment/region (verify service limits).
- Throughput limits depending on source/target type.
Regional constraints
- Some targets (like analytics integrations) may not be available in all regions.
- Cross-region flows can be complex and costly.
Pricing surprises
- Object Storage request charges if many small objects are written.
- Streaming throughput and retention charges.
- Analytics ingestion charges.
Compatibility issues
- Downstream systems may require specific formats/partitioning.
- If you plan to consume exported objects with Spark/Data Flow, verify format is friendly for your tooling.
Operational gotchas
- IAM policy mistakes are the #1 root cause of failed connectors.
- Moving connectors between compartments is not always supported; you may need to recreate them (verify).
- If you delete a bucket target, connectors will fail until reconfigured.
Migration challenges
- If you previously used scripts/agents, migrating requires:
- Mapping source/target permissions to IAM policies
- Updating operational monitoring
- Validating format and retention policies
Vendor-specific nuances
- OCI’s compartment and IAM model is powerful but strict—plan policy design early.
- Some connector capabilities may be tied to licensing of target services (e.g., Logging Analytics).
14. Comparison with Alternatives
Nearest services in Oracle Cloud (OCI)
- OCI Data Integration: Better for ETL/ELT, complex pipelines, scheduling, data quality (not just routing).
- OCI GoldenGate: Best for database replication and CDC (change data capture).
- OCI Streaming: Core event streaming; Connector Hub often complements it.
- OCI Functions + Events: For event-driven automation; may require more custom wiring than Connector Hub.
- OCI Logging sinks/exports (if available): Depending on OCI Logging features, some exports might be handled directly (verify).
Nearest services in other clouds
- AWS: EventBridge Pipes, Kinesis Data Firehose, CloudWatch Logs subscriptions/exports
- Azure: Event Grid, Azure Monitor diagnostic settings, Event Hubs capture
- Google Cloud: Logging sinks, Pub/Sub, Eventarc, Dataflow templates
Open-source / self-managed alternatives
- Kafka Connect (with S3 sink, etc.)
- Fluent Bit / Fluentd
- Logstash
- Apache NiFi
- Airbyte (more ELT-focused)
Comparison table
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Oracle Cloud Connector Hub | Managed routing between supported OCI services | Low ops, IAM-governed, fast setup | Limited sources/targets; not full ETL | You need simple, continuous OCI-to-OCI routing |
| OCI Data Integration | ETL/ELT pipelines | Rich transforms, schedules, data workflows | More setup; not “just routing” | You need complex transformation and orchestration |
| OCI GoldenGate | Database replication/CDC | Strong for DB change capture | Specialized; can be costly/complex | You need near-real-time DB replication/CDC |
| OCI Streaming (direct producers/consumers) | Event streaming backbone | Decoupling, fan-out, scalable | You manage producers/consumers logic | You need custom event processing and multiple consumers |
| OCI Functions + Events | Serverless automation | Flexible custom logic | You build and operate more glue | You need custom workflows triggered by events |
| Kafka Connect (self-managed) | Broad integration ecosystem | Many connectors, flexible | You manage infra, scaling, patching | You need unsupported endpoints or advanced connector ecosystem |
| Fluent Bit/Fluentd/Logstash | Log shipping and transformation | Powerful parsing/routing | Agents/infra to manage | You need deep log parsing or non-OCI sources/targets |
| AWS Firehose / Azure Monitor / GCP Logging sinks | Cloud-native routing in other clouds | Tight integration in their ecosystems | Not OCI-native | You’re in those clouds primarily |
15. Real-World Example
Enterprise example: Regulated audit retention and security analytics
- Problem: A financial organization must retain OCI audit logs for 1–7 years (policy-dependent) and make them accessible for security investigations.
- Proposed architecture:
- Source: OCI Audit logs (via OCI Logging)
- Connector Hub: audit export connectors per region/compartment
- Target: Central Object Storage bucket in a security compartment
- Lifecycle: Standard for 30–90 days → Archive for long-term retention
- Optional: Periodic Data Flow job to build searchable indexes or curated datasets
- Why Connector Hub was chosen:
- Managed export reduces custom pipeline risk
- Strong IAM governance and compartment boundaries
- Operational simplicity and standardization across regions
- Expected outcomes:
- Consistent retention and evidence chain
- Faster investigations with centralized access
- Lower ops overhead compared to agent-based log shippers
Startup/small-team example: Centralized operational logs for a SaaS
- Problem: A small SaaS team runs workloads in multiple compartments (dev/stage/prod) and needs centralized log retention for debugging and customer support.
- Proposed architecture:
- Source: Application/service logs in OCI Logging
- Connector Hub: one connector per environment
- Target: Object Storage buckets with prefixes per app/environment
- Optional: Stream high-severity events into a notifications pipeline (verify supported target)
- Why Connector Hub was chosen:
- Quick to deploy; no infra to run
- IAM policies allow safe self-service
- Easy to expand to more apps later
- Expected outcomes:
- Improved debugging with centralized archives
- Predictable storage-based retention costs
- Minimal maintenance work
16. FAQ
1) Is “Connector Hub” the same as “Service Connector Hub” in OCI?
In many OCI contexts, yes. OCI documentation commonly uses Service Connector Hub while some interfaces shorten it to Connector Hub. Verify the naming in your OCI Console and docs for your region.
2) Is Connector Hub part of Oracle Cloud Data Management?
It is frequently used for data movement and operational data routing, which strongly supports Data Management patterns (archival, ingestion, pipelines). OCI’s product-category placement may differ in the console—verify.
3) What are typical sources for Connector Hub?
Common sources include OCI services that produce telemetry such as Logging and Streaming (and sometimes Monitoring metrics)—verify the up-to-date supported list.
4) What are typical targets?
Common targets include Object Storage, Streaming, Functions, and analytics targets like Logging Analytics—verify availability and licensing.
5) Does Connector Hub guarantee exactly-once delivery?
Do not assume exactly-once. Many managed routing systems provide at-least-once delivery with possible duplicates. Verify delivery semantics in official docs for your connector type.
6) Can I filter data before it reaches the target?
Often yes, through tasks or filtering options (capability varies). Verify supported task types and filter expressions.
7) Can I transform data formats in Connector Hub?
Some task capabilities may exist, but Connector Hub is not a full ETL service. For complex transformation, use Data Integration, Data Flow, or downstream processing.
8) How do I secure exported logs in Object Storage?
Use IAM policies to restrict bucket access, enable encryption (default), and consider customer-managed keys via OCI Vault if required.
9) How do I know if a connector is failing?
Use connector status in the console and set up Monitoring alarms based on connector metrics (verify metric names and dimensions).
10) What’s the most common reason a connector doesn’t work?
IAM policy misconfiguration: missing permissions to read from the source or write to the target.
11) Can I export logs cross-compartment?
Yes, but you must design IAM policies carefully for cross-compartment access.
12) Can I export data cross-region?
Sometimes, but it depends on supported targets and design. Cross-region data movement can increase cost and complexity—verify supported patterns.
13) Do I need a VCN or subnets for Connector Hub?
Typically no—Connector Hub is service-managed. Networking requirements depend on your targets (especially if external endpoints are involved—verify).
14) How do I keep Object Storage costs under control?
Use lifecycle policies, reduce volume with filtering, and avoid excessive small object creation if possible.
15) Can I manage connectors with Terraform?
OCI supports many services via the Terraform provider. Verify whether Connector Hub resources are supported by the current OCI Terraform provider and which fields are available:
- OCI Terraform provider docs: https://registry.terraform.io/providers/oracle/oci/latest/docs (verify the specific resource name)
16) Can I automate connector creation with OCI CLI or API?
Yes, if the service exposes CLI/API operations (often under “serviceconnector”). Verify CLI commands and API endpoints in official SDK/CLI docs.
17) How should I handle duplicates downstream?
Design downstream processing to be idempotent—deduplicate by event ID, timestamp + hash, or object name/versioning strategies.
17. Top Online Resources to Learn Connector Hub
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official documentation | OCI Service Connector Hub docs: https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/home.htm | Primary source for features, setup, policies, and limits |
| Official IAM/policies guidance | OCI policies overview: https://docs.oracle.com/en-us/iaas/Content/Identity/Concepts/policies.htm | Essential for least-privilege and troubleshooting permissions |
| Official pricing | OCI pricing: https://www.oracle.com/cloud/pricing/ | Understand service and dependent service pricing |
| Official price list | OCI price list: https://www.oracle.com/cloud/price-list/ | SKU-level pricing reference (region/contract can vary) |
| Cost estimator | OCI cost estimator: https://www.oracle.com/cloud/costestimator.html (verify) | Build scenario estimates without guessing |
| Architecture center | OCI Architecture Center: https://docs.oracle.com/solutions/ | Reference architectures for logging, analytics, data lakes |
| Logging docs | OCI Logging docs: https://docs.oracle.com/en-us/iaas/Content/Logging/home.htm | Understand log sources, log groups, and audit/service logs |
| Object Storage docs | OCI Object Storage docs: https://docs.oracle.com/en-us/iaas/Content/Object/home.htm | Buckets, lifecycle policies, encryption, access control |
| OCI CLI install | OCI CLI installation: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm | Automate validation and cleanup |
| Terraform provider | OCI Terraform provider: https://registry.terraform.io/providers/oracle/oci/latest/docs | Infrastructure-as-code patterns (verify Connector Hub resource support) |
| Official videos | Oracle Cloud YouTube: https://www.youtube.com/@OracleCloudInfrastructure | Product walkthroughs and best practices (search “Service Connector Hub”) |
| API reference (if applicable) | OCI API docs: https://docs.oracle.com/en-us/iaas/api/ | Confirm REST operations for connector resources (search for service connector) |
18. Training and Certification Providers
- DevOpsSchool.com
  - Suitable audience: DevOps engineers, SREs, platform teams, cloud beginners
  - Likely learning focus: OCI fundamentals, DevOps practices, cloud operations, automation basics (verify current catalog)
  - Mode: Check website
  - Website: https://www.devopsschool.com/
- ScmGalaxy.com
  - Suitable audience: DevOps learners, engineers focused on tooling and process
  - Likely learning focus: SCM, CI/CD, DevOps practices, introductory cloud content (verify current catalog)
  - Mode: Check website
  - Website: https://www.scmgalaxy.com/
- CloudOpsNow.in
  - Suitable audience: Cloud operations teams, system administrators moving to cloud
  - Likely learning focus: Cloud operations, monitoring, incident response, operational readiness (verify current catalog)
  - Mode: Check website
  - Website: https://www.cloudopsnow.in/
- SreSchool.com
  - Suitable audience: SREs, production engineers, reliability-focused teams
  - Likely learning focus: SRE principles, observability, reliability patterns (verify current catalog)
  - Mode: Check website
  - Website: https://www.sreschool.com/
- AiOpsSchool.com
  - Suitable audience: Ops and platform teams exploring AIOps
  - Likely learning focus: AIOps concepts, monitoring/telemetry analytics, automation (verify current catalog)
  - Mode: Check website
  - Website: https://www.aiopsschool.com/
19. Top Trainers
- RajeshKumar.xyz
  - Likely specialization: DevOps/cloud training and guidance (verify offerings)
  - Suitable audience: Beginners to intermediate engineers
  - Website: https://www.rajeshkumar.xyz/
- devopstrainer.in
  - Likely specialization: DevOps tooling, CI/CD, cloud basics (verify offerings)
  - Suitable audience: DevOps engineers and students
  - Website: https://www.devopstrainer.in/
- devopsfreelancer.com
  - Likely specialization: Freelance DevOps consulting/training resources (verify offerings)
  - Suitable audience: Teams needing practical help and coaching
  - Website: https://www.devopsfreelancer.com/
- devopssupport.in
  - Likely specialization: DevOps support and operational troubleshooting (verify offerings)
  - Suitable audience: Ops teams and engineers needing hands-on support
  - Website: https://www.devopssupport.in/
20. Top Consulting Companies
- cotocus.com
  - Likely service area: Cloud/DevOps consulting (verify service lines)
  - Where they may help: Architecture reviews, implementation support, operational readiness
  - Consulting use case examples: Designing centralized log archival, setting up IAM guardrails, cost optimization reviews
  - Website: https://cotocus.com/
- DevOpsSchool.com
  - Likely service area: DevOps and cloud consulting/training services (verify current offerings)
  - Where they may help: Implementation workshops, CI/CD, infrastructure automation, SRE practices
  - Consulting use case examples: Terraform-based OCI onboarding, governance setup, observability pipelines with Connector Hub patterns
  - Website: https://www.devopsschool.com/
- DEVOPSCONSULTING.IN
  - Likely service area: DevOps consulting (verify service lines)
  - Where they may help: Deployment automation, cloud operations, reliability improvements
  - Consulting use case examples: Building operational telemetry pipelines, alerting strategy, incident response readiness
  - Website: https://www.devopsconsulting.in/
21. Career and Learning Roadmap
What to learn before Connector Hub
- OCI fundamentals:
- Compartments, VCN basics (even if not required here), regions/availability domains
- IAM fundamentals:
- Users, groups, policies, dynamic groups (conceptually)
- Core data/observability services:
- OCI Logging (log groups, audit logs)
- OCI Object Storage (buckets, access control, lifecycle)
What to learn after Connector Hub
- Streaming-based architectures:
- OCI Streaming partitions/retention/consumers
- Serverless patterns:
- OCI Functions, event-driven design, idempotency
- Analytics pipelines:
- OCI Data Flow (Spark), Data Integration, Lakehouse patterns on Object Storage
- Governance:
- Tagging strategy, quotas, landing zone design
- IaC and automation:
- Terraform for OCI, CI/CD pipelines for infrastructure
Job roles that use it
- Cloud Engineer (OCI)
- Platform Engineer
- DevOps Engineer
- Site Reliability Engineer (SRE)
- Security Engineer (log retention and audit pipelines)
- Data Engineer (ingestion into lakes/streams)
- Cloud Solutions Architect
Certification path (if available)
OCI certification offerings change frequently. A practical path:
- OCI Foundations (entry)
- OCI Architect (associate/professional depending on your track)
- Observability/Security specialty learning (if offered)

Verify current certification tracks:
- https://education.oracle.com/oracle-cloud-infrastructure-certification (verify current URL)
Project ideas for practice
- Export audit logs to Object Storage with lifecycle policies.
- Build a “log lake” layout in Object Storage with prefixes by compartment/app.
- Add a function task (if supported) that redacts sensitive fields before storage.
- Route Streaming messages to Object Storage for replayable ingestion.
- Build alarms for connector failure metrics and write a runbook.
22. Glossary
- OCI (Oracle Cloud Infrastructure): Oracle Cloud’s IaaS/PaaS platform.
- Connector Hub (Service Connector Hub): Managed service for routing data between OCI services.
- Connector / Service Connector: The configured routing resource (source → tasks → target).
- Source: The service producing data (e.g., Logging).
- Target: The destination service (e.g., Object Storage).
- Task: Optional processing step inside the connector (filtering, function invocation—verify).
- Compartment: OCI’s logical container for resources and access control.
- IAM Policy: Text rules that grant permissions to users/groups/services in OCI.
- Service principal (service identity): OCI identity used by a managed service to access other services per policy.
- Audit logs: Records of API calls and actions in OCI for governance and security.
- Object Storage bucket: Container for objects/files in OCI Object Storage.
- Lifecycle policy: Rules to transition or delete objects over time to control cost and retention.
- At-least-once delivery: A message can be delivered more than once; consumers must handle duplicates.
- Idempotent processing: Processing that can safely run multiple times without changing the result incorrectly.
23. Summary
Connector Hub in Oracle Cloud (OCI) is a managed way to route operational and event data—often logs, metrics, or stream messages—between supported OCI services. It matters because it replaces fragile scripts and always-on glue infrastructure with policy-governed, repeatable connectors that fit cleanly into OCI’s compartments and IAM model.
From a Data Management perspective, Connector Hub is especially useful for landing telemetry into durable storage, feeding streaming pipelines, and standardizing ingestion patterns across teams. Cost is typically driven less by the connector itself and more by Object Storage, Streaming, Functions, analytics ingestion, and data transfer—so lifecycle policies and volume reduction are key. Security hinges on correct IAM policies, least privilege, and careful handling of sensitive log data.
Use Connector Hub when you need straightforward, continuous routing with minimal ops. If you need complex transformations or advanced integration to unsupported endpoints, use dedicated data integration tools or self-managed connectors.
Next step: implement the lab pattern (Audit logs → Object Storage), then expand to production readiness by adding lifecycle policies, alarms on connector failures, and infrastructure-as-code once you confirm Connector Hub resource support in your OCI Terraform provider.