Google Cloud Eventarc Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Application Development

Category

Application development

1. Introduction

Eventarc is Google Cloud’s managed event routing service that connects event producers (Google Cloud services and some custom/partner sources) to event consumers (such as Cloud Run, Cloud Functions (2nd gen), Workflows, and—where available—GKE destinations). It lets you build event-driven applications without managing message brokers, webhooks, or custom polling loops.

In simple terms: Eventarc listens for events you care about and delivers them to your service. For example, “when a file is uploaded to Cloud Storage, call my Cloud Run service” or “when an IAM policy changes, trigger my workflow”.

Technically, Eventarc provides event triggers that filter events (using CloudEvents attributes and service-specific filters) and deliver matching events to a destination using Google-managed infrastructure and authentication. Depending on the event source, Eventarc may integrate with Cloud Audit Logs, Pub/Sub, and provider-specific pipelines to detect events and route them reliably.

Eventarc solves a common application development problem: decoupling producers and consumers. Instead of building brittle point-to-point integrations, you define event-driven contracts and let Google Cloud handle routing, filtering, and delivery. This improves agility, operational simplicity, and security posture when done with strong IAM and least-privilege design.

Service name status: Eventarc is the current official name in Google Cloud and is actively supported. Cloud Functions (2nd gen) uses Eventarc under the hood for many triggers; using Eventarc directly gives you explicit control over triggers and routing.

2. What is Eventarc?

Official purpose (what it’s for):
Eventarc is designed to help you build event-driven architectures on Google Cloud by routing events from sources (Google Cloud services, audit logs, custom events, and some partner events) to destinations (compute/services that react to events).

Core capabilities:

  • Create Eventarc triggers to match events with filters.
  • Route events to supported destinations (commonly Cloud Run; also Cloud Functions (2nd gen), Workflows, and other destinations depending on current GA/Preview status—verify in official docs for your region and use case).
  • Use the CloudEvents standard as the event envelope for consistent event metadata and payload handling.
  • Support multiple event source patterns, including:
    – Google Cloud service events (for example, Cloud Storage object events).
    – Audit log–based events (Cloud Audit Logs as a source for many administrative and data access activities).
    – Custom events via Pub/Sub (publish CloudEvents to Pub/Sub and route via Eventarc).
    – Partner events via channels/providers (availability varies—verify the current partner list and regions).

Major components (conceptual):

| Component | What it is | Why it matters |
| --- | --- | --- |
| Eventarc trigger | A resource that defines filters + destination | Central configuration for routing rules |
| Event provider/source | A Google Cloud service (or audit logs / Pub/Sub / partner provider) that emits events | Determines event types and filtering options |
| Destination | The service that receives the event (often Cloud Run) | Where your code runs |
| Service account (for invocation) | Identity Eventarc uses to call your destination | Enables least-privilege access control |
| CloudEvents envelope | Standard event metadata + data | Consistent handling and portability |

Service type:
Fully managed event routing service used for application development and integration patterns.

Scope and location model:
Eventarc is primarily regional: triggers are created in a specific location/region, and in many cases the trigger region must align with destination and/or event source location constraints. Triggers are project-scoped resources (within a Google Cloud project). Some event sources are global in nature but still require regional trigger configuration—always confirm regional support in the Eventarc documentation for your chosen event type.

How it fits into the Google Cloud ecosystem:

  • Works closely with Cloud Run (serverless containers) for event-driven services.
  • Integrates with Cloud Functions (2nd gen), which is built on Cloud Run and Eventarc.
  • Can trigger Workflows for orchestration and long-running steps.
  • Often relies on or interacts with Pub/Sub and Cloud Logging (especially for audit log sources).
  • Complements API-centric integration (Apigee/API Gateway) by enabling async, event-first design.

Official documentation entry point: https://cloud.google.com/eventarc/docs

3. Why use Eventarc?

Business reasons

  • Faster feature delivery: Teams add new consumers by defining triggers rather than rewriting producer integrations.
  • Reduced integration cost: Avoid maintaining webhook endpoints, polling jobs, and bespoke routing logic.
  • Better resilience: Asynchronous event patterns reduce tight coupling and can isolate failures.

Technical reasons

  • CloudEvents standardization: Uniform event metadata simplifies parsing, observability correlation, and portability.
  • Filtering at the routing layer: Only deliver relevant events to each service.
  • Serverless-friendly: Pairs naturally with Cloud Run and Cloud Functions (2nd gen) for scale-to-zero compute.

Operational reasons

  • Less infrastructure to manage: No self-managed brokers required for basic patterns.
  • Clear ownership boundaries: Triggers can be managed by platform teams while application teams own consumers.
  • Consistent tooling: Create, update, and audit triggers via Console, gcloud, and IAM.

Security/compliance reasons

  • IAM-based invocation: Eventarc calls destinations using a service account you specify, enabling least privilege.
  • Auditability: Trigger configuration changes are visible via audit logs; event sources can include audit events.
  • Controlled ingress: For Cloud Run, you can restrict who can invoke the service and require authentication.

Scalability/performance reasons

  • Designed for high event volumes with managed delivery plumbing (exact limits depend on quotas—verify in official docs).
  • Works well with horizontally scalable destinations (Cloud Run, GKE services).

When teams should choose Eventarc

  • You want event-driven application development on Google Cloud with minimal ops overhead.
  • You need Google Cloud service events delivered to Cloud Run/Workflows/Functions.
  • You want a unified routing layer rather than implementing a different mechanism per service.

When teams should not choose Eventarc

  • You need a general-purpose streaming platform with complex replay/retention semantics and consumer offset control (consider Pub/Sub, Kafka, or Dataflow depending on requirements).
  • You need advanced cross-cloud event routing and multi-tenant event bus governance not covered by your current Eventarc feature set (verify current capabilities).
  • Your primary integration is synchronous request/response with strict API governance—Eventarc complements but doesn’t replace API management.

4. Where is Eventarc used?

Industries

  • E-commerce: order lifecycle events, fulfillment automation, inventory updates.
  • Media & entertainment: asset ingestion pipelines (uploads, transcodes, notifications).
  • Fintech: audit-driven controls, security automation, transaction processing steps.
  • Healthcare/life sciences: data ingestion events, compliance monitoring (subject to regulatory constraints).
  • SaaS/platform companies: internal automation, user lifecycle events, integration events.

Team types

  • Application development teams building microservices on Cloud Run.
  • Platform engineering teams providing “golden path” eventing patterns.
  • DevOps/SRE teams automating responses to infrastructure changes.
  • Security engineering teams reacting to audit events and policy changes.

Workloads and architectures

  • Event-driven microservices, asynchronous pipelines, serverless backends.
  • Automation workflows triggered by changes (config, storage, identity, deployments).
  • Hybrid patterns: event triggers + Workflows orchestration + Cloud Run services.
  • “Glue” for loosely coupled systems, often alongside Pub/Sub for fan-out and buffering.

Real-world deployment contexts

  • Production: high-value operational automation, business process events, ingestion pipelines.
  • Dev/test: validating event schemas, testing new consumers, building sandbox automation.

5. Top Use Cases and Scenarios

Below are realistic Eventarc use cases. Exact event types and destination support can vary by region and by Google Cloud product—verify the event type list in official docs.

1) Cloud Storage upload triggers a processing service

  • Problem: You need to automatically process files as soon as they land in Cloud Storage.
  • Why Eventarc fits: It can route Cloud Storage object events to Cloud Run without polling.
  • Scenario: When a CSV is uploaded, a Cloud Run service validates it and loads rows into BigQuery.

2) Audit log event triggers security remediation

  • Problem: You must detect and remediate risky changes quickly (e.g., “public bucket”).
  • Why Eventarc fits: Audit log–based triggers can route administrative events to an automation service.
  • Scenario: If IAM policy changes on a sensitive project, trigger a Workflow to open a ticket and revert changes.

3) New Firestore/Database event triggers downstream actions

  • Problem: You need asynchronous reactions to data changes without tight coupling.
  • Why Eventarc fits: For supported databases/events, Eventarc can route changes to compute.
  • Scenario: A new user document triggers a Cloud Run service to provision resources and send email.

Note: Firestore/Database triggers may be routed via Cloud Functions (2nd gen) and/or Eventarc depending on feature support. Verify current Eventarc source coverage.

4) CI/CD event triggers environment validation

  • Problem: After deployment, you must run smoke tests and update status.
  • Why Eventarc fits: Route events to Workflows/Cloud Run to orchestrate validation steps.
  • Scenario: On a deployment event, run tests, update a dashboard, and notify Slack (via a webhook from your service).

5) Centralized “event gateway” for multiple microservices

  • Problem: Each microservice implementing its own webhook/event intake leads to drift and security gaps.
  • Why Eventarc fits: Triggers provide consistent routing rules; destinations authenticate via IAM.
  • Scenario: One team owns triggers and service accounts, app teams only implement Cloud Run receivers.

6) Partner/SaaS events routed into Google Cloud

  • Problem: You want third-party events (e.g., code repo events) to drive automation.
  • Why Eventarc fits: Eventarc supports partner events via channels/providers where available.
  • Scenario: A repo event triggers a Workflow that runs compliance checks and updates a ticket.

Partner event availability varies—verify the current provider list and regions.

7) Real-time thumbnail generation for uploaded images

  • Problem: Users upload images; you need derived thumbnails quickly.
  • Why Eventarc fits: Storage object finalize events can trigger a scalable Cloud Run image processor.
  • Scenario: Upload to raw-images/ triggers resizing and writes to thumbnails/.

8) Data ingestion pipeline kickoff (storage → compute → warehouse)

  • Problem: Batch pipelines start late or fail silently when scheduled rather than event-driven.
  • Why Eventarc fits: Storage events can start pipelines exactly when inputs arrive.
  • Scenario: An object upload triggers a Cloud Run service that starts a Dataflow job (via API).

9) Infrastructure governance automation from audit events

  • Problem: You need automated tagging, labeling, or policy application.
  • Why Eventarc fits: Audit log–based events can trigger policy enforcement.
  • Scenario: When a new project is created, trigger a Workflow to apply org policies and labels.

10) Workflow orchestration for multi-step business processes

  • Problem: A single event must trigger a multi-step process with retries and branching.
  • Why Eventarc fits: Route events directly to Workflows, which can call services and handle errors.
  • Scenario: Order-created event triggers a Workflow that reserves inventory, charges payment, and notifies shipping.

11) Event-driven cache invalidation

  • Problem: Caches become stale and users see outdated data.
  • Why Eventarc fits: Emit/publish custom events or use data-change events to invalidate caches.
  • Scenario: When product price changes, trigger a Cloud Run service to purge cache keys.

12) Fan-out pattern with Pub/Sub + Cloud Run consumers

  • Problem: One event must be consumed by multiple independent services.
  • Why Eventarc fits: Eventarc can be part of a pattern where custom events are published to Pub/Sub and routed (or consumed) by multiple services (architecture varies).
  • Scenario: Publish an “order-created” CloudEvent to Pub/Sub; multiple services react (email, analytics, fulfillment).

6. Core Features

Feature coverage evolves. Always cross-check the latest feature list and supported event sources/destinations in official documentation: https://cloud.google.com/eventarc/docs

6.1 Eventarc triggers (routing rules)

  • What it does: Defines filters + destination for events.
  • Why it matters: Triggers are the unit of routing configuration you can version, audit, and manage.
  • Practical benefit: Clean separation: producers emit events; Eventarc routes; consumers handle.
  • Caveats: Triggers are regional and subject to quotas and location constraints.

6.2 CloudEvents-based delivery

  • What it does: Delivers events using CloudEvents metadata (type, source, id, time, etc.).
  • Why it matters: Standard envelope simplifies parsing and improves portability across runtimes.
  • Practical benefit: Your Cloud Run service can use standard CloudEvents libraries.
  • Caveats: Payload schema depends on the source; your service must handle versioning.
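In binary content mode, CloudEvents attributes arrive as `ce-*` HTTP headers alongside the payload body. The sketch below shows one way to extract those attributes with plain Python; it assumes binary-mode delivery (structured mode instead places everything in a JSON body), so verify which mode applies to your source.

```python
# Minimal sketch: extract CloudEvents attributes from binary-mode HTTP
# delivery, where each attribute arrives as a "ce-<name>" header.
# Production code should also handle structured mode (all-JSON body).

def extract_attributes(headers: dict) -> dict:
    """Return CloudEvents attributes found in ce-* headers, lowercased."""
    return {
        key.lower()[3:]: value          # strip the "ce-" prefix
        for key, value in headers.items()
        if key.lower().startswith("ce-")
    }

sample = {
    "Ce-Id": "1234",
    "Ce-Type": "google.cloud.storage.object.v1.finalized",
    "Ce-Source": "//storage.googleapis.com/projects/_/buckets/my-bucket",
    "Ce-Specversion": "1.0",
    "Content-Type": "application/json",
}
print(extract_attributes(sample))
```

Because the payload schema still varies by source, treat the extracted `type` attribute as the dispatch key and validate the body against the schema for that event type.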

6.3 Filtering on event attributes

  • What it does: Filters events by type and other attributes (for example bucket name for storage events).
  • Why it matters: Reduces noise and cost by delivering only relevant events.
  • Practical benefit: Different microservices can subscribe to different subsets of events.
  • Caveats: Filter fields differ by provider/source; audit-log-derived events have their own filter patterns.
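Conceptually, a trigger's filters are exact-match key/value pairs evaluated against the event's attributes. The sketch below is purely illustrative (not Eventarc's implementation); the attribute names `type` and `bucket` mirror the storage example used throughout this guide, and supported filter fields must be verified per source.

```python
# Illustrative sketch of exact-match trigger filtering (not the real
# Eventarc implementation): every filter key/value must equal the
# corresponding event attribute for the event to be delivered.

def trigger_matches(filters: dict, event_attrs: dict) -> bool:
    """True if every filter key/value matches the event's attributes."""
    return all(event_attrs.get(k) == v for k, v in filters.items())

filters = {
    "type": "google.cloud.storage.object.v1.finalized",
    "bucket": "my-uploads",
}
event = {
    "type": "google.cloud.storage.object.v1.finalized",
    "bucket": "my-uploads",
    "id": "evt-1",
}
print(trigger_matches(filters, event))                     # matching event
print(trigger_matches(filters, {**event, "bucket": "other"}))  # filtered out
```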

6.4 Destinations: Cloud Run (common default)

  • What it does: Delivers events to a Cloud Run service via authenticated HTTP request.
  • Why it matters: Cloud Run scales automatically and is a natural fit for event handlers.
  • Practical benefit: You can implement handlers in any language supported by containers.
  • Caveats: Ensure service concurrency/timeouts and idempotency; consider ingress settings and authentication.

6.5 Destinations: Cloud Functions (2nd gen)

  • What it does: Routes events to Cloud Functions (2nd gen), which leverages Eventarc.
  • Why it matters: Many teams prefer function-style deployment for simple handlers.
  • Practical benefit: Minimal packaging; integrates well with Google Cloud developer workflows.
  • Caveats: Understand the Cloud Functions (2nd gen) runtime model and limits; some triggers are configured through Functions UI/CLI rather than standalone Eventarc triggers.

6.6 Destinations: Workflows

  • What it does: Triggers a Workflow execution when an event matches.
  • Why it matters: Workflows is ideal for multi-step orchestration, retries, branching, and calling APIs.
  • Practical benefit: Keep orchestration out of application code; improve reliability and traceability.
  • Caveats: Workflow execution costs and limits apply; design for idempotency and replay.

6.7 Custom events via Pub/Sub (integration pattern)

  • What it does: Allows routing of custom-defined events (published to Pub/Sub in CloudEvents format) to Eventarc destinations.
  • Why it matters: Enables domain events not tied to a single Google Cloud service.
  • Practical benefit: You can standardize on CloudEvents for internal event contracts.
  • Caveats: You must design event schemas, versioning, and publishing discipline; Pub/Sub costs and retention apply.
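A simple way to enforce publishing discipline is a small helper that shapes every domain event as a CloudEvents 1.0 envelope before it goes to Pub/Sub. The helper below is illustrative, not a Google API; the event type and source values are made up, and the exact envelope Eventarc expects for your channel/trigger type should be verified in the official docs.

```python
# Hedged sketch: shape a domain event as a CloudEvents 1.0 envelope
# before publishing the serialized bytes to Pub/Sub. Names and values
# here are illustrative assumptions, not a Google-defined schema.
import json
import uuid
from datetime import datetime, timezone

def make_cloudevent(event_type: str, source: str, data: dict) -> dict:
    """Return a CloudEvents 1.0 style envelope for a domain event."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),       # unique id enables consumer dedup
        "type": event_type,
        "source": source,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_cloudevent(
    event_type="com.example.order.created.v1",   # versioned domain type
    source="/services/order-service",
    data={"order_id": "o-42", "total_cents": 1999},
)
payload = json.dumps(event).encode("utf-8")      # bytes for Pub/Sub publish
print(event["type"], len(payload))
```

Versioning the `type` string (here `.v1`) lets you evolve schemas without breaking existing consumers.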

6.8 Audit log events as triggers (governance/ops)

  • What it does: Uses Cloud Audit Logs to detect events (admin activity and, where enabled, data access).
  • Why it matters: Powerful for security and compliance automation.
  • Practical benefit: Trigger automation without modifying the source system.
  • Caveats: Data Access audit logs can be high volume and may incur logging costs; ensure you understand which logs are enabled and retained.
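Audit-log-based triggers are commonly filtered on the audit log event type plus `serviceName` and `methodName`. The helper below sketches that filter set; the specific `serviceName`/`methodName` values are illustrative and should be checked against the Eventarc audit-log events reference for your service.

```python
# Sketch of the filter set commonly used for audit-log triggers:
# the audit log event type plus serviceName and methodName.
# Verify exact values in the Eventarc audit-log events reference.

def audit_log_filters(service_name: str, method_name: str) -> dict:
    """Return the filter attributes for an audit-log-based trigger."""
    return {
        "type": "google.cloud.audit.log.v1.written",
        "serviceName": service_name,
        "methodName": method_name,
    }

# Example: react to new Cloud Storage buckets (values are illustrative).
filters = audit_log_filters("storage.googleapis.com", "storage.buckets.create")
for key, value in filters.items():
    print(f'--event-filters="{key}={value}"')   # shape of the gcloud flags
```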

6.9 IAM-controlled event delivery identity

  • What it does: Eventarc uses a service account (specified on the trigger) to invoke the destination.
  • Why it matters: Strong, auditable authentication and least privilege.
  • Practical benefit: Rotate permissions by updating IAM, not code secrets.
  • Caveats: Misconfigured IAM is a top cause of “permission denied” deliveries.

6.10 Operational visibility (logs/metrics)

  • What it does: Integrates with Cloud Logging and Cloud Monitoring for operational insights.
  • Why it matters: You need to detect failures, retries, and misconfigurations.
  • Practical benefit: You can build dashboards and alerts around delivery failures (metric names and availability vary—verify in official docs).
  • Caveats: Logging volume can be a cost driver; tune log levels and retention.

7. Architecture and How It Works

7.1 High-level architecture

At a high level:

  1. A source emits an event (directly or via audit logs / Pub/Sub, depending on source type).
  2. Eventarc evaluates triggers in a region and matches based on filters.
  3. For a match, Eventarc delivers the event to the destination (for example, a Cloud Run HTTPS endpoint) using the trigger’s service account identity.
  4. The destination processes the event; logs and metrics are emitted to Cloud Logging/Monitoring.

7.2 Data flow vs control flow

  • Control plane: Trigger creation, IAM policy changes, enabling APIs, setting filters and destinations.
  • Data plane: Actual events moving from source to destination.

7.3 Integrations and dependency services

Common dependencies (varies by source/destination):

  • Cloud Run (destination)
  • IAM (service accounts, invoker permissions)
  • Cloud Logging / Cloud Audit Logs (audit-log-based sources and operational logs)
  • Pub/Sub (custom events; may also be used internally by some pipelines—implementation details can vary)
  • Artifact Registry / Cloud Build (if deploying Cloud Run from source in the tutorial)

7.4 Security/authentication model

  • Event delivery is authenticated using IAM:
    – A service account specified on the trigger is used to invoke the destination.
    – The destination (e.g., Cloud Run) must grant roles/run.invoker (or equivalent) to that service account.
  • Your application should also verify and safely parse the CloudEvent, and be resilient to duplicates.
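Because Eventarc may redeliver an event (retries on 5xx, at-least-once semantics), handlers should be idempotent. A common pattern is to key work off the CloudEvent `id` attribute; the in-memory set below is a minimal sketch, and a real service would use a shared store (e.g., Firestore or Redis) instead.

```python
# Sketch: make an event handler resilient to duplicate deliveries by
# keying work off the CloudEvent "id" attribute. An in-memory set is
# only for illustration; production needs a shared, durable store.

processed_ids: set[str] = set()

def handle_event(event_id: str, work) -> bool:
    """Run `work` once per event id; return True if work executed."""
    if event_id in processed_ids:
        return False                  # duplicate delivery: acknowledge, skip
    processed_ids.add(event_id)
    work()
    return True

calls = []
handle_event("evt-1", lambda: calls.append("done"))
handle_event("evt-1", lambda: calls.append("done"))  # simulated retry
print(calls)   # work ran exactly once
```

Returning success (2xx) even for duplicates is deliberate: it acknowledges delivery so Eventarc stops retrying.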

7.5 Networking model

  • For Cloud Run destinations, Eventarc delivers events via HTTPS requests to the Cloud Run service endpoint.
  • If you restrict Cloud Run ingress (internal-only), confirm compatibility with Eventarc delivery for your configuration in official docs.
  • Egress from your handler to other services may traverse VPC connectors or public internet depending on your design.

7.6 Monitoring/logging/governance considerations

  • Use Cloud Logging to inspect handler logs and, where available, Eventarc delivery logs.
  • Use Cloud Monitoring to alert on destination error rates, latency, and failed deliveries.
  • Use labels and naming conventions on triggers and Cloud Run services for cost allocation and ownership.
  • Manage triggers via IaC (Terraform) where possible; validate against drift.

7.7 Simple architecture diagram (Mermaid)

flowchart LR
  A[Cloud Storage Bucket] -->|Object finalized event| B["Eventarc Trigger (region)"]
  B -->|Authenticated CloudEvent| C["Cloud Run Service: event-receiver"]
  C --> D[Cloud Logging]

7.8 Production-style architecture diagram (Mermaid)

flowchart TB
  subgraph Sources
    S1[Cloud Storage]
    S2[Cloud Audit Logs]
    S3[Custom Events via Pub/Sub]
  end

  subgraph Eventing
    E["Eventarc Triggers (per region)\nFilters + Destinations"]
  end

  subgraph Destinations
    W["Workflows\nOrchestration"]
    R1["Cloud Run: processor"]
    R2["Cloud Run: notifier"]
    G["GKE Service\n(verify support)"]
  end

  subgraph Platform
    IAM[IAM & Service Accounts]
    LOG[Cloud Logging]
    MON[Cloud Monitoring]
    SEC["Security Command Center / Policy tooling\n(optional)"]
  end

  S1 --> E
  S2 --> E
  S3 --> E

  E -->|CloudEvents| W
  E -->|CloudEvents| R1
  E -->|CloudEvents| R2
  E -->|CloudEvents| G

  IAM -.invoker permissions.-> E
  R1 --> LOG
  R2 --> LOG
  W --> LOG
  E --> MON

8. Prerequisites

Accounts/projects/billing

  • A Google Cloud project with billing enabled.
  • Ability to enable APIs in the project.

Required APIs (commonly needed)

Enable at least:

  • Eventarc API
  • Cloud Run API
  • Cloud Logging API
  • (For this lab) Cloud Storage API
  • (For deploy-from-source) Cloud Build API, Artifact Registry API

Exact list varies; the tutorial includes commands.

IAM permissions and roles

To complete the lab, you typically need permissions to:

  • Deploy Cloud Run services
  • Create service accounts and grant IAM roles
  • Create Eventarc triggers
  • Create a Cloud Storage bucket

Common roles (principle of least privilege is recommended; these are simplified):

  • roles/run.admin (or equivalent)
  • roles/eventarc.admin
  • roles/iam.serviceAccountAdmin and roles/iam.serviceAccountUser (for creating/using service accounts)
  • roles/storage.admin (for bucket creation)
  • roles/logging.viewer (for viewing logs)

In production, split responsibilities:

  • Platform team owns Eventarc trigger creation and IAM bindings.
  • App team owns Cloud Run service deployment and code.

Tools

  • Google Cloud SDK (gcloud) installed and authenticated
    Install: https://cloud.google.com/sdk/docs/install
  • Optional: curl, jq for verification.

Region availability

  • Choose a region that supports Eventarc and Cloud Run.
  • Some event sources/destinations are region-limited. Verify supported locations: https://cloud.google.com/eventarc/docs/locations (or the current “Locations” page in docs).

Quotas/limits

  • Triggers per project/region, event delivery rates, and other quotas apply.
  • Always check the quota page for Eventarc in your project and request increases if needed (quota names can change—verify in official docs/console).

Prerequisite services

  • Cloud Run service as destination.
  • Cloud Storage bucket as event source (for this tutorial).

9. Pricing / Cost

Eventarc pricing is usage-based. The exact SKUs and unit prices can change and can be region-dependent. Do not rely on blog posts for numbers—use official pricing pages.

  • Official pricing page: https://cloud.google.com/eventarc/pricing
  • Pricing calculator: https://cloud.google.com/products/calculator

9.1 Pricing dimensions (what you pay for)

Common cost dimensions you should expect in an Eventarc solution:

  1. Eventarc event delivery charges (typically per number of events delivered/attempted; verify the exact billing metric on the pricing page).
  2. Destination compute costs:
     – Cloud Run: vCPU-seconds, memory-seconds, requests, and any configured minimum instances.
     – Workflows: per step/execution.
  3. Source and plumbing costs (indirect):
     – Cloud Logging / Audit Logs: if using audit log triggers, logging ingestion/retention can become a major cost driver.
     – Pub/Sub: if using custom events via Pub/Sub (topic, subscriptions, data volume).
  4. Networking costs: egress from your destination to external services or cross-region traffic (depending on architecture).

9.2 Free tier

Google Cloud often provides free tiers for Cloud Run and some other services. Eventarc may or may not include a free allowance depending on current pricing. Verify in official pricing docs for your billing account and region.

9.3 Primary cost drivers

  • Event volume (events per second, events per day).
  • Handler execution time (Cloud Run CPU/memory and request count).
  • Audit log volume (especially Data Access logs).
  • Retries due to handler errors (increases attempts and compute time).

9.4 Hidden/indirect costs to watch

  • Enabling Data Access audit logs can significantly increase Logging costs.
  • Verbose application logs from event handlers can exceed expectations at scale.
  • Cold starts can increase runtime duration (and cost) if your handler is heavy.
  • Cross-region architectures can increase latency and cost (and sometimes are not supported for certain event sources).

9.5 Cost optimization strategies

  • Filter narrowly: only route required events (bucket-specific, type-specific).
  • Use efficient handlers:
    – Keep container image small.
    – Avoid heavy initialization on each request.
    – Use concurrency where safe.
  • Control Cloud Run scaling:
    – Set appropriate concurrency and timeouts.
    – Avoid min instances unless needed for latency SLOs.
  • Manage logging:
    – Use structured logging with sampling for high-volume events.
    – Set retention policies appropriately.
  • For audit-based triggers: enable only required audit log types and scopes.

9.6 Example low-cost starter estimate (how to think about it)

A small dev/test setup might include:

  • A few thousand storage events per month.
  • A Cloud Run service that runs for a fraction of a second per event.
  • Minimal logs.

To estimate:

  1. Count events/month.
  2. Multiply by the Eventarc per-event charge (from the pricing page).
  3. Add Cloud Run request and compute time costs.
  4. Add any logging ingestion due to your handler logs.

Because exact per-event pricing and free tiers vary, use the pricing calculator with your region and expected volumes.
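The estimation steps above reduce to simple arithmetic once you have rates. The sketch below wires them together; every unit price in it is a placeholder, not a Google Cloud rate, so substitute real numbers from the Eventarc and Cloud Run pricing pages for your region.

```python
# Back-of-envelope estimator for the steps above. All unit prices are
# PLACEHOLDERS, not Google Cloud rates: pull real numbers from the
# Eventarc and Cloud Run pricing pages for your region.

def estimate_monthly_cost(
    events_per_month: int,
    eventarc_price_per_million: float,      # placeholder per-event rate
    run_price_per_million_requests: float,  # placeholder request rate
    run_compute_price_per_event: float,     # vCPU+memory cost per handled event
    logging_price_per_event: float,         # log ingestion per event
) -> float:
    millions = events_per_month / 1_000_000
    return (
        millions * eventarc_price_per_million
        + millions * run_price_per_million_requests
        + events_per_month * (run_compute_price_per_event + logging_price_per_event)
    )

# 5,000 storage events/month with illustrative rates:
cost = estimate_monthly_cost(5_000, 0.40, 0.40, 0.00001, 0.000005)
print(round(cost, 4))
```

Even with made-up rates, a model like this makes the dominant term obvious: at low volumes, per-event handler compute and logging usually dwarf the routing charge.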

9.7 Example production cost considerations

In production, model:

  • Peak event bursts (uploads, batch jobs, or incident storms).
  • Retry amplification if the destination returns 5xx.
  • Logging growth (both app logs and audit logs).
  • Downstream API calls (e.g., BigQuery, Dataflow, external APIs) triggered by each event.

A practical approach:

  • Build a load test that simulates expected event rates.
  • Capture average handler duration and log volume.
  • Use that to forecast monthly costs with the calculator.

10. Step-by-Step Hands-On Tutorial

This lab builds a real, low-cost event-driven integration:

  • Cloud Storage emits an object event.
  • Eventarc routes the event.
  • Cloud Run receives and logs the CloudEvent.

Objective

Create an Eventarc trigger that delivers Cloud Storage “object finalized” events to a Cloud Run service, then verify delivery by uploading a file.

Lab Overview

You will:

  1. Set up environment variables and enable APIs.
  2. Create a Cloud Run service that can receive CloudEvents and log them.
  3. Create a Cloud Storage bucket.
  4. Create an Eventarc trigger targeting the Cloud Run service.
  5. Upload a file to generate an event and validate logs.
  6. Clean up to avoid ongoing costs.

Notes before you start:

  • Event types and required permissions vary by source. This lab uses a common pattern. If any step fails due to regional/source constraints, check the “Troubleshooting” section and verify in the official Eventarc docs for Cloud Storage events.
  • Commands assume a bash-like shell.

Step 1: Set project, region, and enable APIs

1) Set environment variables:

export PROJECT_ID="YOUR_PROJECT_ID"
export REGION="us-central1"   # choose a supported region
export SERVICE_NAME="eventarc-receiver"
export TRIGGER_NAME="storage-finalized-to-run"
export BUCKET_NAME="${PROJECT_ID}-eventarc-lab-$(date +%s)"

2) Set the active project:

gcloud config set project "${PROJECT_ID}"
gcloud config set run/region "${REGION}"

3) Enable required APIs:

gcloud services enable \
  eventarc.googleapis.com \
  run.googleapis.com \
  storage.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com \
  logging.googleapis.com

Expected outcome: APIs enable successfully.

Verification:

gcloud services list --enabled --filter="name:(eventarc.googleapis.com run.googleapis.com storage.googleapis.com)" --format="value(name)"

Step 2: Create a service account for Eventarc to invoke Cloud Run

Create a dedicated service account that Eventarc will use to call your Cloud Run service:

export INVOKER_SA_NAME="eventarc-invoker"
gcloud iam service-accounts create "${INVOKER_SA_NAME}" \
  --display-name="Eventarc invoker for Cloud Run"
export INVOKER_SA_EMAIL="${INVOKER_SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

Expected outcome: Service account exists.

Verification:

gcloud iam service-accounts describe "${INVOKER_SA_EMAIL}" --format="value(email)"

Step 3: Build and deploy a Cloud Run CloudEvents receiver

You will deploy a minimal HTTP service that:

  • Accepts POST requests (CloudEvent delivery)
  • Logs key CloudEvent headers and body

Option A (recommended): Python + Functions Framework style receiver on Cloud Run

1) Create a local folder:

mkdir -p eventarc-lab && cd eventarc-lab

2) Create main.py:

import json
from flask import Flask, request

app = Flask(__name__)

def _get_ce_headers(headers):
    ce = {}
    for k, v in headers.items():
        lk = k.lower()
        if lk.startswith("ce-"):
            ce[lk] = v
    return ce

@app.post("/")
def receive_event():
    ce_headers = _get_ce_headers(request.headers)

    # Best-effort parse of the body; silent=True returns None instead of raising
    raw = request.get_data(as_text=True) or ""
    body = request.get_json(silent=True)

    print("=== Received Eventarc request ===")
    print("CloudEvent headers:", json.dumps(ce_headers, indent=2, sort_keys=True))
    if body is not None:
        print("JSON body:", json.dumps(body, indent=2, sort_keys=True))
    else:
        print("Raw body:", raw[:2000])

    # Acknowledge successful delivery with a 2xx status (204 No Content has no body)
    return ("", 204)

@app.get("/")
def health():
    return ("ok", 200)

3) Create requirements.txt:

flask==3.0.3
gunicorn==22.0.0

4) Create a Procfile (optional but helpful for buildpacks):

web: gunicorn -b :$PORT main:app

5) Deploy to Cloud Run from source:

gcloud run deploy "${SERVICE_NAME}" \
  --source . \
  --region "${REGION}" \
  --allow-unauthenticated

Why --allow-unauthenticated in a lab?

  • It reduces friction while you validate connectivity.
  • In production, you typically do not allow unauthenticated invocation; you grant roles/run.invoker only to the Eventarc trigger service account and other approved identities.

Expected outcome: Cloud Run service deploys and returns a URL.

Verification:

export SERVICE_URL="$(gcloud run services describe "${SERVICE_NAME}" --region "${REGION}" --format='value(status.url)')"
curl -sS "${SERVICE_URL}/" && echo

You should see ok.

Step 4: Restrict Cloud Run invocation to Eventarc (recommended) and grant invoker role

Now that the service works, lock it down so only authorized callers can invoke it.

1) Remove public (unauthenticated) access:

gcloud run services remove-iam-policy-binding "${SERVICE_NAME}" \
  --region "${REGION}" \
  --member="allUsers" \
  --role="roles/run.invoker"

If you never granted it or deploy behavior differs, you may see a warning; that’s fine.

2) Grant the Eventarc invoker service account permission to call the service:

gcloud run services add-iam-policy-binding "${SERVICE_NAME}" \
  --region "${REGION}" \
  --member="serviceAccount:${INVOKER_SA_EMAIL}" \
  --role="roles/run.invoker"

Expected outcome: Only the invoker service account can invoke the service (plus project owners/admins with broad roles).

Verification (optional):
Calling the service URL from your terminal may now return 403 unless you provide an identity token.

Step 5: Create a Cloud Storage bucket

Create a bucket in (or aligned with) your chosen region. For some event types, aligning locations matters.

gcloud storage buckets create "gs://${BUCKET_NAME}" \
  --location="${REGION}"

Expected outcome: Bucket is created.

Verification:

gcloud storage buckets describe "gs://${BUCKET_NAME}" --format="value(name,location)"

Step 6: Create an Eventarc trigger for Cloud Storage object finalized events

Create an Eventarc trigger that sends matching events to your Cloud Run service.

The exact event type string and available filters can vary. A common Cloud Storage event type is:

  • google.cloud.storage.object.v1.finalized

If trigger creation fails, verify the event type and required filters in the official Eventarc docs for Cloud Storage events.

Create the trigger:

gcloud eventarc triggers create "${TRIGGER_NAME}" \
  --location="${REGION}" \
  --destination-run-service="${SERVICE_NAME}" \
  --destination-run-region="${REGION}" \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=${BUCKET_NAME}" \
  --service-account="${INVOKER_SA_EMAIL}"

Expected outcome: Trigger is created successfully.

Verification:

gcloud eventarc triggers describe "${TRIGGER_NAME}" --location="${REGION}"

Look for:

  • destination service name
  • matching filters (type and bucket)
  • service account

Step 7: Generate an event by uploading a file

Upload a small file to the bucket:

echo "hello eventarc $(date)" > hello.txt
gcloud storage cp hello.txt "gs://${BUCKET_NAME}/hello.txt"

Expected outcome: Upload succeeds; shortly after, Eventarc delivers an event to Cloud Run.

Validation

1) View Cloud Run logs:

gcloud logging read \
  "resource.type=cloud_run_revision AND resource.labels.service_name=${SERVICE_NAME}" \
  --limit=50 \
  --format="value(textPayload)"

You should see log lines like:

  • “=== Received Eventarc request ===”
  • “CloudEvent headers: …”
  • Body content describing the storage object (fields vary)

2) Confirm trigger health (high-level) in the Google Cloud Console:

  • Go to Eventarc → Triggers
  • Select your trigger
  • Review recent deliveries/errors (UI fields vary by release)

Troubleshooting

Common issues and realistic fixes:

1) Permission denied invoking Cloud Run

Symptoms:

  • Cloud Run logs show 403
  • Eventarc delivery errors mention permission issues

Fix:

  • Ensure your trigger specifies the correct service account: --service-account="${INVOKER_SA_EMAIL}"
  • Ensure that service account has roles/run.invoker on the Cloud Run service

Re-check IAM binding:

gcloud run services get-iam-policy "${SERVICE_NAME}" --region "${REGION}"

2) Trigger creation fails due to unsupported location or event type

Symptoms: INVALID_ARGUMENT or a message about an unsupported event type/provider/location

Fix:

  • Confirm Eventarc supported locations and the event type list:
    • Locations: https://cloud.google.com/eventarc/docs/locations
    • Event types: https://cloud.google.com/eventarc/docs/reference/event-types (or the current “Event types” reference page)

Try a different region that supports both Cloud Run and your chosen event type.

3) No events arrive after upload

Possible causes and checks:

  • Wrong bucket filter value (typo).
  • Trigger created in a region incompatible with the bucket/event source.
  • Event type string mismatch for your environment.

Fix approach:

  • Re-describe the trigger and confirm filters.
  • Upload a second object to ensure a new “finalized” event occurs.
  • Check Cloud Logging for Eventarc-related logs (availability depends on current logging integration—verify in docs).

4) Cloud Run handler returns non-2xx

Symptoms:

  • Event delivery retries; logs show errors

Fix:

  • Ensure the handler returns 2xx (204 in this lab).
  • Avoid exceptions during JSON parsing (use silent=True as shown).

Cleanup

To avoid ongoing costs, delete lab resources.

1) Delete the Eventarc trigger:

gcloud eventarc triggers delete "${TRIGGER_NAME}" --location="${REGION}"

2) Delete the Cloud Run service:

gcloud run services delete "${SERVICE_NAME}" --region "${REGION}"

3) Delete the bucket and its contents:

gcloud storage rm -r "gs://${BUCKET_NAME}"

4) Delete the service account:

gcloud iam service-accounts delete "${INVOKER_SA_EMAIL}"

11. Best Practices

Architecture best practices

  • Design for idempotency: Event delivery can be retried; duplicates can occur. Use event IDs and safe upserts.
  • Keep handlers small and fast: Offload long-running work to Workflows, Cloud Tasks, Pub/Sub, or batch systems.
  • Use Workflows for orchestration: Keep complex branching/retry logic out of service code.
  • Separate domains: Use different triggers per domain/event type; avoid “catch-all” triggers.
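To make the idempotency point concrete, here is a minimal sketch of deduplicating by CloudEvent ID. The in-memory set and the function name are illustrative assumptions; a production handler would key a durable store (e.g., Firestore or Redis with a TTL) on the `ce-id` value and use safe upserts.

```python
# Hypothetical in-memory dedupe store; use a durable store with TTL in production.
# The CloudEvent ID typically arrives in the "ce-id" HTTP header.
_seen_event_ids = set()

def handle_event_idempotently(ce_id: str, payload: dict) -> str:
    """Process an event at most once per CloudEvent ID."""
    if ce_id in _seen_event_ids:
        # At-least-once delivery: a retry or duplicate arrived; skip side effects.
        return "duplicate-skipped"
    _seen_event_ids.add(ce_id)
    # ... real side effects (safe upserts keyed by ce_id) would go here ...
    return "processed"
```

Calling it twice with the same ID processes the event once and skips the duplicate, which keeps retried deliveries harmless.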

IAM/security best practices

  • Dedicated invoker service accounts per trigger (or per application) to reduce blast radius.
  • Grant only required roles:
    • For Cloud Run destinations, typically roles/run.invoker to the trigger SA.
  • Restrict who can create/modify triggers; grant roles/eventarc.admin sparingly.
  • Use separate projects or folders for dev/test/prod; avoid sharing triggers across environments unless you have strong governance.

Cost best practices

  • Filter aggressively to reduce event volume.
  • Avoid enabling broad audit logs unless necessary.
  • Reduce handler logging verbosity; use sampling for high-volume events.
  • Right-size Cloud Run resources and timeouts.

Performance best practices

  • Use Cloud Run concurrency for efficient scaling when handlers are thread-safe.
  • Avoid heavy cold-start initialization; load dependencies lazily if possible.
  • Consider regional placement near the source and downstream dependencies.

Reliability best practices

  • Implement retries in downstream calls with backoff; handle partial failures.
  • Use dead-letter/error handling patterns:
    • If your handler cannot process an event, log structured details and optionally publish to Pub/Sub for later inspection.
    • Eventarc’s built-in failure handling options depend on destination/source type—verify current capabilities in official docs.
  • Add SLO-based alerting on:
    • Handler error rate
    • Handler latency
    • Event delivery failures (where metrics are exposed)
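The retry-with-backoff advice above can be sketched as a small helper. This is an illustrative pattern, not a prescribed implementation; the function name and parameters are assumptions, and the `sleep` parameter exists only to make the helper testable.

```python
import random
import time

def call_with_backoff(fn, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a downstream zero-arg callable with exponential backoff and jitter.

    Transient failures are assumed to raise exceptions; the final failure is
    re-raised so the handler can return non-2xx and let delivery-level
    retry/DLQ patterns take over.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # 0.5s, 1s, 2s, ... plus up to 10% jitter to avoid thundering herds
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random() * 0.1)
            sleep(delay)
```

Wrap only the downstream call (database write, third-party API), not the whole handler, so unrelated failures surface immediately.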

Operations best practices

  • Use structured logs with correlation fields (event ID, trace ID if available).
  • Create runbooks for common failures (permissions, schema changes, throttling).
  • Adopt Infrastructure as Code for triggers and IAM.
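A structured log line with correlation fields can be as simple as one JSON object per line on stdout, which Cloud Logging parses into structured entries on Cloud Run. The helper below is a sketch; `event_id` is our own correlation field (assumed to come from the `ce-id` header), while `severity` and `logging.googleapis.com/trace` are keys Cloud Logging recognizes.

```python
import json
import sys

def log_structured(severity: str, message: str, event_id: str = None, trace: str = None) -> str:
    """Emit one JSON log line to stdout and return it.

    Cloud Logging maps the "severity" key to log level and the
    "logging.googleapis.com/trace" key to the request trace; any other
    keys (like event_id) become structured payload fields.
    """
    entry = {"severity": severity, "message": message}
    if event_id:
        entry["event_id"] = event_id
    if trace:
        entry["logging.googleapis.com/trace"] = trace
    line = json.dumps(entry)
    print(line, file=sys.stdout)
    return line
```

In the lab handler, you would call this with the `ce-id` header value so every log line for one event shares the same correlation field.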

Governance/tagging/naming best practices

  • Use consistent naming, e.g. trg-<source>-<event>-<dest>-<env>.
  • Add labels/annotations where supported for cost allocation and ownership.
  • Document event schemas and versioning rules for custom events.

12. Security Considerations

Identity and access model

  • Trigger service account: Identity used to invoke the destination.
  • Destination IAM policy: Must explicitly allow that service account to invoke (for Cloud Run, roles/run.invoker).
  • Admin controls: Limit who can create/modify triggers; trigger changes can alter routing and data exposure.

Encryption

  • Data in transit to Cloud Run is protected by Google’s transport security.
  • At rest encryption applies to logs and any stored payloads according to Google Cloud defaults and your CMEK policies (where supported by each service). Verify CMEK support for each component you use (Cloud Run, Logging, Pub/Sub, etc.).

Network exposure

  • If your Cloud Run service is public (allUsers invoker), it is reachable from the internet. Prefer authenticated invocation.
  • Consider Cloud Run ingress settings carefully:
    • “Internal” ingress can reduce exposure, but confirm Eventarc compatibility for your configuration (verify in docs).
  • If your handler calls internal resources (databases, private APIs), use Serverless VPC Access where appropriate.

Secrets handling

  • Do not embed secrets in code. Use Secret Manager and runtime identity.
  • For outbound calls to third parties, store API keys in Secret Manager and rotate regularly.

Audit/logging

  • Audit:
    • Trigger creation/modification is auditable via Cloud Audit Logs.
  • Logging:
    • Log minimal sensitive payload data; redact PII where possible.
    • Set retention policies aligned with compliance requirements.

Compliance considerations

  • Event payloads can include sensitive metadata. Treat event routing as data processing.
  • Ensure data residency and region constraints are met for regulated workloads.
  • For audit-log triggers, confirm that collecting data access events is permitted and properly controlled.

Common security mistakes

  • Leaving Cloud Run destination publicly invokable.
  • Using a broad service account (Editor/Owner) for trigger invocation.
  • Logging entire payloads containing sensitive data.
  • Creating triggers that route more events than intended due to broad filters.

Secure deployment recommendations

  • Use least-privilege service accounts per trigger.
  • Use separate environments/projects.
  • Use IaC and code reviews for trigger changes.
  • Implement schema validation and input sanitization in handlers.

13. Limitations and Gotchas

Because Eventarc evolves, treat this list as “common gotchas” and verify details against current docs for your chosen source/destination.

  • Regional triggers: Triggers are regional; mismatched regions are a common failure cause.
  • Event type availability varies: Not all event types are available in all regions or for all destinations.
  • Audit logs can be expensive: Data Access logs can generate large volumes and cost.
  • At-least-once delivery: Design for duplicates and retries (idempotency).
  • Schema/version changes: Event payloads can evolve; use robust parsing and version checks.
  • Ingress restrictions: Cloud Run ingress/auth settings can block Eventarc if misconfigured.
  • Quotas: Trigger counts and event delivery rates have quotas; monitor and request increases as needed.
  • Partner events and channels: Availability and configuration differ; verify provider onboarding steps.
  • Troubleshooting visibility: Depending on your setup, you may need to rely on destination logs and limited routing diagnostics; build observability into handlers.

14. Comparison with Alternatives

Eventarc is one tool in Google Cloud’s broader integration and Application development toolbox.

Comparison table

  • Eventarc: best for routing Google Cloud events to serverless/managed destinations. Strengths: managed triggers, CloudEvents, tight Cloud Run/Workflows integration. Weaknesses: regional constraints, varying event type coverage, not a general streaming platform. Choose when you want event-driven apps with minimal ops and native Google event sources.
  • Pub/Sub: best for high-throughput messaging, fan-out, buffering, and decoupling. Strengths: mature messaging, replay via retention (limited), many integrations. Weaknesses: consumers must manage subscriptions; not “source event” aware by default. Choose when you need durable messaging and multiple consumers, or custom domain events at scale.
  • Cloud Functions (2nd gen) triggers: best for simple event handlers. Strengths: easy deployment, integrates with Eventarc. Weaknesses: less explicit routing control than direct Eventarc in some workflows. Choose when you want function-style development and supported triggers.
  • Workflows (with HTTP triggers/connectors): best for orchestration and business process automation. Strengths: built-in retries, branching, connectors. Weaknesses: not intended for heavy compute; per-step costs. Choose when you need orchestration more than compute.
  • Cloud Scheduler + HTTP/Tasks: best for time-based automation. Strengths: simple scheduling. Weaknesses: not event-driven; polling patterns can be inefficient. Choose when you need cron-like behavior.
  • Apache Kafka (self-managed or managed elsewhere): best for streaming with offsets, replay, and event-log patterns. Strengths: strong streaming semantics. Weaknesses: higher ops complexity/cost. Choose when you need Kafka-specific semantics and ecosystem.
  • Knative Eventing (GKE): best for Kubernetes-native eventing. Strengths: portable and configurable. Weaknesses: you manage and operate it. Choose when you require Kubernetes-native, self-managed eventing.
  • AWS EventBridge: AWS-native event bus with strong AWS ecosystem integration; a different cloud, listed for multi-cloud comparison. Choose if you’re primarily on AWS.
  • Azure Event Grid: Azure-native event routing with strong Azure integration; a different cloud. Choose if you’re primarily on Azure.

15. Real-World Example

Enterprise example: compliance automation for IAM and storage exposure

  • Problem: A regulated enterprise must detect and respond to risky configuration changes (e.g., IAM policy updates, storage access changes) quickly and consistently.
  • Proposed architecture:
    • Cloud Audit Logs (admin activity) emits change events.
    • Eventarc triggers route selected audit events to Workflows.
    • Workflows calls:
      • Cloud Run remediation service (policy validation)
      • Ticketing integration (via a secured outbound call)
      • Optional notification service
  • Why Eventarc was chosen: central routing rules with filters, strong IAM-based invocation, and native integration with Workflows/Cloud Run.
  • Expected outcomes:
    • Reduced mean time to detect/remediate misconfigurations.
    • Auditable automation with consistent routing and role separation.

Startup/small-team example: event-driven media processing

  • Problem: A small team needs automatic processing of user-uploaded media without running servers.
  • Proposed architecture:
    • Cloud Storage bucket for uploads.
    • Eventarc trigger for object-finalized events.
    • Cloud Run service performs transcoding/thumbnail generation and stores outputs back to Cloud Storage.
  • Why Eventarc was chosen: minimal operations overhead; easy to connect storage events to compute.
  • Expected outcomes:
    • Faster iteration and lower cost through scale-to-zero.
    • Clean separation between upload system and processing service.

16. FAQ

1) Is Eventarc a message broker like Pub/Sub?
Not exactly. Eventarc is primarily an event routing service that connects supported sources to supported destinations using triggers and filters. Pub/Sub is a general-purpose messaging system. Eventarc can integrate with Pub/Sub for custom events, but they solve different layers of the problem.

2) Does Eventarc guarantee exactly-once delivery?
Typically, event delivery systems provide at-least-once delivery. Design handlers to be idempotent and tolerate duplicates. Verify current delivery guarantees in the official docs for your source/destination.

3) Can Eventarc trigger Cloud Run services securely?
Yes. Eventarc invokes Cloud Run using a service account identity. You grant that identity roles/run.invoker on the target service.

4) Do I need to expose my Cloud Run service publicly?
No. In production, you generally keep Cloud Run authenticated (no allUsers) and allow only the Eventarc trigger service account (and other approved identities).

5) What event format does Eventarc send to my service?
Eventarc uses CloudEvents. Your service receives CloudEvent metadata (often in ce- HTTP headers) and a payload whose schema depends on the source.
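As a sketch of what the answer above describes: in binary content mode, each CloudEvent context attribute arrives as a `ce-<name>` HTTP header (e.g., `ce-id`, `ce-type`, `ce-source`, `ce-subject`). A small helper can collect them; the function name is our own, and it works with any mapping of headers, including Flask's `request.headers`.

```python
def extract_cloudevent_attributes(headers) -> dict:
    """Collect CloudEvent context attributes from binary-mode HTTP headers.

    Header names are case-insensitive per HTTP, so they are lowercased
    before matching the "ce-" prefix. Returns e.g. {"id": ..., "type": ...}.
    """
    attrs = {}
    for name, value in headers.items():
        lowered = name.lower()
        if lowered.startswith("ce-"):
            attrs[lowered[3:]] = value  # strip the "ce-" prefix
    return attrs
```

In the lab handler you could call this with `request.headers` and log `attrs.get("id")` and `attrs.get("type")` instead of dumping every header.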

6) Can I route the same event to multiple services?
Yes. Create multiple triggers with matching filters that point to different destinations (subject to quotas and design constraints).

7) How do I handle schema changes in events?
Treat event payloads as versioned contracts:

  • Validate required fields.
  • Tolerate unknown fields.
  • Use event type/version attributes when available.
  • Write backward-compatible parsers.
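A tolerant parser following those rules might look like the sketch below. The field names (`bucket`, `name`, `contentType`, `generation`) mirror common Cloud Storage object metadata but are illustrative; check the event-types reference for the exact schema your source emits.

```python
def parse_storage_event(payload: dict) -> dict:
    """Backward-compatible parser for a storage-object payload (illustrative).

    - Fails fast on missing required fields.
    - Ignores unknown fields rather than rejecting them.
    - Applies defaults for optional fields older/newer versions may omit.
    """
    required = ("bucket", "name")
    missing = [f for f in required if f not in payload]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return {
        "bucket": payload["bucket"],
        "name": payload["name"],
        "content_type": payload.get("contentType", "application/octet-stream"),
        "generation": str(payload.get("generation", "")),
    }
```

Because unknown fields are simply ignored, a producer adding a new field does not break existing consumers.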

8) Is Eventarc global or regional?
Triggers are generally regional. Choose regions carefully and verify the required alignment for the event source and destination.

9) Can I use Eventarc for cross-project routing?
Some cross-project patterns are possible in Google Cloud, but exact support depends on source/destination and IAM configuration. Verify current cross-project guidance in official docs.

10) Does Eventarc support dead-letter queues (DLQ)?
DLQ behavior depends on the destination and the specific feature set available at the time. If you need DLQ semantics, a common pattern is to have your handler publish failures to Pub/Sub or persist to a database for replay.

11) How does Eventarc relate to Cloud Functions (2nd gen)?
Cloud Functions (2nd gen) uses Eventarc for eventing. You can use Cloud Functions triggers directly, or use Eventarc to route events to Cloud Run/Workflows (and other supported destinations).

12) What’s the biggest operational risk with Eventarc?
Misconfigured IAM (invoker permissions) and mismatched regions are frequent issues. Also, audit-log-based designs can create unexpected Logging volume and cost.

13) Can I test Eventarc locally?
You can test your handler by sending mock CloudEvents via HTTP to your local server. Eventarc itself is a managed service; end-to-end tests typically run in a dev project.
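A mock CloudEvent for local testing can be built with just the standard library. The attribute values below are illustrative assumptions (verify the exact attributes in the event-types reference); the helper returns headers and a JSON body shaped like a binary-mode delivery.

```python
import json

def build_mock_cloudevent(bucket: str, obj: str):
    """Build headers and body that mimic a binary-mode CloudEvent delivery.

    Attribute values are illustrative; real Eventarc deliveries include
    additional attributes and a fuller payload.
    """
    headers = {
        "Content-Type": "application/json",
        "ce-specversion": "1.0",
        "ce-id": "local-test-1",
        "ce-type": "google.cloud.storage.object.v1.finalized",
        "ce-source": f"//storage.googleapis.com/projects/_/buckets/{bucket}",
        "ce-subject": f"objects/{obj}",
    }
    body = json.dumps({"bucket": bucket, "name": obj})
    return headers, body

# To exercise a local server (e.g. the lab handler on :8080), you could POST
# with the standard library:
#   import urllib.request
#   headers, body = build_mock_cloudevent("my-bucket", "hello.txt")
#   req = urllib.request.Request("http://localhost:8080/", data=body.encode(),
#                                headers=headers, method="POST")
#   urllib.request.urlopen(req)
```

This lets you iterate on parsing and logging logic without deploying or generating real storage events.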

14) How do I monitor Eventarc?
Use:

  • Cloud Run logs/metrics for handler behavior.
  • Eventarc trigger status in the Console.
  • Cloud Monitoring metrics for delivery where available (verify metric names and availability).

15) When should I choose Workflows as a destination instead of Cloud Run?
Choose Workflows when you need orchestration: multi-step logic, branching, retries, waiting, calling multiple services. Choose Cloud Run when you need custom compute and full language/runtime control.

16) Does Eventarc support GKE destinations?
Eventarc has had integration options for GKE in some releases. Support status (GA/Preview), configuration, and regions can change—verify in official docs for “Eventarc destinations” and “Eventarc for GKE”.

17) How do I reduce Eventarc solution costs?
Filter narrowly, keep handlers efficient, control logging volume, avoid enabling broad Data Access logs unless required, and use the pricing calculator.

17. Top Online Resources to Learn Eventarc

  • Official documentation (https://cloud.google.com/eventarc/docs): canonical docs for concepts, configuration, destinations, and troubleshooting.
  • Official pricing (https://cloud.google.com/eventarc/pricing): current pricing model and billing dimensions.
  • Pricing calculator (https://cloud.google.com/products/calculator): model end-to-end cost including Cloud Run, Logging, Pub/Sub.
  • Event types reference (https://cloud.google.com/eventarc/docs/reference/event-types): up-to-date list of supported event types and attributes (verify current page path).
  • Locations (https://cloud.google.com/eventarc/docs/locations): region/location support and constraints.
  • Cloud Run documentation (https://cloud.google.com/run/docs): destination behavior, IAM, scaling, and logging.
  • Workflows documentation (https://cloud.google.com/workflows/docs): useful for Eventarc → Workflows patterns.
  • CloudEvents spec (https://cloudevents.io): the event envelope standard used by Eventarc.
  • Official samples on GitHub (https://github.com/GoogleCloudPlatform): search for “eventarc” repositories and samples maintained by Google Cloud (verify exact repo names).
  • Google Cloud Architecture Center (https://cloud.google.com/architecture): reference architectures for event-driven patterns (search within for Eventarc/eventing).
  • YouTube, official channel (https://www.youtube.com/googlecloudtech): talks and demos; search the channel for “Eventarc”.

18. Training and Certification Providers

  • DevOpsSchool.com (https://www.devopsschool.com/): for DevOps engineers, SREs, platform teams, and developers. Focus: Google Cloud operations, CI/CD, event-driven patterns, hands-on labs. Mode: check website.
  • ScmGalaxy.com (https://www.scmgalaxy.com/): for beginners to intermediate engineers. Focus: DevOps fundamentals, cloud tooling, practical exercises. Mode: check website.
  • CLoudOpsNow.in (https://www.cloudopsnow.in/): for cloud engineers and operations teams. Focus: cloud ops practices, automation, monitoring, cost awareness. Mode: check website.
  • SreSchool.com (https://www.sreschool.com/): for SREs, reliability engineers, and platform teams. Focus: reliability engineering, monitoring, incident response, production readiness. Mode: check website.
  • AiOpsSchool.com (https://www.aiopsschool.com/): for ops teams exploring AIOps. Focus: observability, automation, AIOps concepts and tooling. Mode: check website.

19. Top Trainers

  • RajeshKumar.xyz (https://rajeshkumar.xyz/): DevOps/cloud training and guidance (verify offerings); suits engineers looking for practical coaching.
  • devopstrainer.in (https://www.devopstrainer.in/): DevOps training resources (verify offerings); suits beginners to advanced DevOps learners.
  • devopsfreelancer.com (https://www.devopsfreelancer.com/): freelance DevOps help/training platform (verify offerings); suits teams needing short-term expertise.
  • devopssupport.in (https://www.devopssupport.in/): DevOps support and training resources (verify offerings); suits ops/DevOps teams needing implementation help.

20. Top Consulting Companies

  • cotocus.com (https://cotocus.com/): cloud/DevOps consulting (verify exact services). Helps with architecture, implementations, migrations, and automation. Example use cases: event-driven microservices setup, CI/CD integration, observability rollouts.
  • DevOpsSchool.com (https://www.devopsschool.com/): DevOps and cloud consulting/training (verify exact services). Helps with platform engineering, DevOps practices, and cloud adoption. Example use cases: Eventarc + Cloud Run reference patterns, IAM hardening, cost optimization.
  • DEVOPSCONSULTING.IN (https://www.devopsconsulting.in/): DevOps consulting (verify exact services). Helps with delivery pipelines, operations, and reliability. Example use cases: event-driven automation, monitoring/alerting design, incident readiness.

21. Career and Learning Roadmap

What to learn before Eventarc

  • Google Cloud basics: projects, IAM, service accounts, billing.
  • Cloud Run fundamentals: services, revisions, scaling, concurrency, IAM invocation.
  • Cloud Storage basics (for storage-based examples).
  • HTTP fundamentals and JSON parsing.
  • Intro to event-driven architecture: pub/sub, async design, idempotency.

What to learn after Eventarc

  • Pub/Sub advanced patterns: ordering keys, dead-letter topics, retries, subscriptions.
  • Workflows for orchestration and long-running processes.
  • Observability: Cloud Monitoring dashboards, SLOs, alerting strategies.
  • Security: least privilege IAM design, organization policies, audit log governance.
  • IaC: Terraform for Eventarc triggers, Cloud Run, and IAM policies.
  • Reliability engineering: backpressure patterns, circuit breakers, bulkheads.

Job roles that use Eventarc

  • Cloud engineer / platform engineer
  • DevOps engineer / SRE
  • Backend developer building event-driven services
  • Security engineer (audit-driven automation)
  • Solutions architect (integration and eventing design)

Certification path (Google Cloud)

Eventarc is typically covered as part of broader Google Cloud skill sets rather than a standalone certification topic. Relevant certifications to consider:

  • Associate Cloud Engineer
  • Professional Cloud Developer
  • Professional Cloud Architect
  • Professional DevOps Engineer

Always verify current exam guides on the official Google Cloud certification site.

Project ideas for practice

  1. Storage upload → Cloud Run → Virus scan simulation → quarantine bucket.
  2. Audit log change → Workflows → policy check → Slack notification (via Cloud Run).
  3. Custom domain events (CloudEvents) published via Pub/Sub → Eventarc → Cloud Run.
  4. Multi-trigger routing: different buckets and event types mapped to different microservices.
  5. Build an internal “event schema registry” document and versioning plan for custom events.

22. Glossary

  • Event-driven architecture (EDA): A design where services react to events rather than direct synchronous calls.
  • Eventarc trigger: Configuration that defines which events to match and where to deliver them.
  • CloudEvents: A standard specification for describing event metadata in a common way.
  • Destination: The service that receives events (e.g., Cloud Run).
  • Source/provider: The system that produces events (e.g., Cloud Storage, Audit Logs).
  • Idempotency: Ability to process the same event multiple times without incorrect side effects.
  • At-least-once delivery: Events may be delivered more than once; consumers must handle duplicates.
  • Cloud Run invoker role: IAM permission (roles/run.invoker) that allows invoking a Cloud Run service.
  • Audit logs: Logs that record administrative actions and (optionally) data access; can be used as an event source.
  • Workflows: Google Cloud service for orchestrating multi-step processes with retries and branching.
  • Pub/Sub: Google Cloud messaging service for asynchronous communication between services.

23. Summary

Eventarc is Google Cloud’s managed event routing service for Application development patterns where services react to events. It routes events from Google Cloud sources (and, in some cases, custom or partner sources) to destinations like Cloud Run, Cloud Functions (2nd gen), and Workflows, using CloudEvents and IAM-authenticated delivery.

It matters because it helps teams build decoupled, scalable systems without maintaining webhook plumbing or polling infrastructure. Architecturally, Eventarc fits best as a routing layer in event-driven designs—often alongside Pub/Sub (for buffering/fan-out) and Workflows (for orchestration).

Cost and security are primarily driven by:

  • event volume and delivery attempts,
  • destination compute usage (Cloud Run/Workflows),
  • logging/audit log volume (especially for audit-based triggers),
  • IAM correctness (invoker permissions, least privilege).

Use Eventarc when you want native Google Cloud event routing with minimal ops. Next step: review the official Eventarc event types and locations, then extend the lab by adding a Workflows destination or implementing idempotent processing with a datastore and structured logging.