{"id":781,"date":"2026-04-16T03:25:16","date_gmt":"2026-04-16T03:25:16","guid":{"rendered":"https:\/\/www.devopsschool.com\/tutorials\/google-cloud-logging-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-observability-and-monitoring\/"},"modified":"2026-04-16T03:25:16","modified_gmt":"2026-04-16T03:25:16","slug":"google-cloud-logging-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-observability-and-monitoring","status":"publish","type":"post","link":"https:\/\/www.devopsschool.com\/tutorials\/google-cloud-logging-tutorial-architecture-pricing-use-cases-and-hands-on-guide-for-observability-and-monitoring\/","title":{"rendered":"Google Cloud Logging Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Observability and monitoring"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Category<\/h2>\n\n\n\n<p>Observability and monitoring<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction<\/h2>\n\n\n\n<p>Cloud Logging is Google Cloud\u2019s managed service for collecting, storing, searching, analyzing, and routing logs from Google Cloud services and your applications.<\/p>\n\n\n\n<p>In simple terms: Cloud Logging is where your Google Cloud logs go. It captures logs from services like Cloud Run, Compute Engine, GKE, Cloud Functions, and Google Cloud APIs (audit logs), then lets you explore them, filter them, set retention, and export them to other destinations.<\/p>\n\n\n\n<p>Technically: Cloud Logging provides a centralized logging pipeline made of log ingestion endpoints, the Log Router (for filtering, routing, and exporting), log storage (log buckets with retention policies), and user access controls (views and IAM). 
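A single log entry moving through this pipeline is, schematically, a JSON record like the following (all values are illustrative, not taken from a real project):<\/p>\n\n\n\n<pre><code class=\"language-json\">{\n  \"timestamp\": \"2026-04-16T03:25:16Z\",\n  \"severity\": \"ERROR\",\n  \"logName\": \"projects\/my-project\/logs\/my-app\",\n  \"resource\": {\n    \"type\": \"cloud_run_revision\",\n    \"labels\": { \"service_name\": \"checkout\" }\n  },\n  \"jsonPayload\": { \"event\": \"PAYMENT_FAILED\", \"userId\": \"123\" }\n}\n<\/code><\/pre>\n\n\n\n<p>These fields (timestamp, severity, monitored resource, and payload) are what filters, sinks, and the Log Explorer operate on.<\/p>\n\n\n\n<p>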
It integrates with Cloud Monitoring for metrics\/alerting, with BigQuery\/Cloud Storage\/Pub\/Sub for exports, and with security services for auditing and forensics.<\/p>\n\n\n\n<p>The main problem it solves is operational visibility: without centralized logs you can\u2019t reliably debug incidents, prove compliance, investigate security events, or understand application behavior across distributed systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2. What is Cloud Logging?<\/h2>\n\n\n\n<p><strong>Official purpose (scope):<\/strong> Cloud Logging is Google Cloud\u2019s logging platform for collecting and managing logs from Google Cloud resources, applications, and on-prem\/hybrid environments. It supports log exploration, retention management, and exporting (routing) logs to other systems.<\/p>\n\n\n\n<p><strong>Core capabilities<\/strong>\n&#8211; Collect logs from Google Cloud services automatically (for example, Cloud Run request logs; Google Kubernetes Engine system\/workload logs; Cloud Audit Logs).\n&#8211; Ingest custom application logs (structured or unstructured).\n&#8211; Search and analyze logs with advanced filters and (where enabled) analytics-style querying.\n&#8211; Route logs to destinations using the <strong>Log Router<\/strong> (for example, BigQuery, Pub\/Sub, Cloud Storage, or another log bucket).\n&#8211; Control retention and access with <strong>log buckets<\/strong> and <strong>log views<\/strong>.\n&#8211; Reduce noise and cost using <strong>exclusions<\/strong>.\n&#8211; Create <strong>log-based metrics<\/strong> (counts or extracted numeric values) that can be used for dashboards and alerting (via Cloud Monitoring).<\/p>\n\n\n\n<p><strong>Major components (mental model)<\/strong>\n&#8211; <strong>Log entries<\/strong>: The individual records (timestamp, severity, textPayload\/jsonPayload\/protoPayload, labels, and monitored resource).\n&#8211; <strong>Monitored resource<\/strong>: The thing that produced the log (for example, 
<code>cloud_run_revision<\/code>, <code>gce_instance<\/code>, <code>k8s_container<\/code>).\n&#8211; <strong>Log buckets<\/strong>: Storage containers with retention settings and (optionally) specialized capabilities. Projects have default buckets; you can create additional buckets.\n&#8211; <strong>Log views<\/strong>: Filtered windows into a bucket that you can grant access to (least privilege for logs).\n&#8211; <strong>Log Router<\/strong>: The routing layer that evaluates log entries against sinks and exclusions.\n&#8211; <strong>Sinks<\/strong>: Routing rules that export logs to supported destinations (BigQuery, Pub\/Sub, Cloud Storage, or other log buckets).\n&#8211; <strong>Log Explorer \/ Logs Query UI<\/strong>: Primary console interface to find and analyze logs.\n&#8211; <strong>Logging API \/ gcloud logging<\/strong>: Programmatic interfaces to write, read, and manage logging configuration.<\/p>\n\n\n\n<p><strong>Service type<\/strong>\n&#8211; Fully managed Google Cloud service (part of the Google Cloud \u201cOperations\u201d\/observability family).\n&#8211; API-driven with Console and CLI tooling.<\/p>\n\n\n\n<p><strong>Scope (project\/folder\/org + location)<\/strong>\n&#8211; Log ingestion and storage are configured at the <strong>project<\/strong> level, but you can also create aggregated exports at the <strong>folder<\/strong> or <strong>organization<\/strong> level (via sinks).\n&#8211; Log buckets have a <strong>location<\/strong> (often <code>global<\/code> or multi-region\/region options depending on current product capabilities). Choose location based on compliance and latency needs. 
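As a concrete sketch, a dedicated long-retention bucket can be created with the gcloud CLI (the project ID, bucket name, location, and retention value are placeholders to adapt, and gcloud must be authenticated):<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Create a log bucket with 365-day retention\ngcloud logging buckets create security-logs --project=my-project --location=global --retention-days=365\n\n# List buckets and their retention settings\ngcloud logging buckets list --project=my-project\n<\/code><\/pre>\n\n\n\n<p>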
<strong>Verify current bucket location options in official docs<\/strong> because availability can evolve.<\/p>\n\n\n\n<p><strong>How it fits into the Google Cloud ecosystem<\/strong>\n&#8211; Cloud Logging is the foundational \u201cevent record\u201d layer in <strong>Observability and monitoring<\/strong> on Google Cloud.\n&#8211; It integrates tightly with:\n  &#8211; <strong>Cloud Monitoring<\/strong> (dashboards, alerting, SLOs; log-based metrics feed Monitoring).\n  &#8211; <strong>Cloud Trace \/ Profiler \/ Error Reporting<\/strong> (correlating application telemetry).\n  &#8211; <strong>Cloud IAM<\/strong> and <strong>Cloud Audit Logs<\/strong> (who did what, when).\n  &#8211; <strong>BigQuery<\/strong> (long-term analytics), <strong>Pub\/Sub<\/strong> (streaming), <strong>Cloud Storage<\/strong> (archival), and SIEM tooling (via exports).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">3. Why use Cloud Logging?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Business reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Faster incident resolution<\/strong> reduces downtime and support costs.<\/li>\n<li><strong>Auditability<\/strong> supports compliance programs (SOC 2, ISO 27001, PCI, HIPAA\u2014depending on your workload and controls).<\/li>\n<li><strong>Centralization<\/strong> reduces operational overhead versus managing log servers.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Technical reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Native integration with Google Cloud services<\/strong>: logs appear without installing agents for many services.<\/li>\n<li><strong>Powerful filtering<\/strong>: query by resource type, labels, severity, trace correlation fields, JSON payload keys, and more.<\/li>\n<li><strong>Programmable routing<\/strong>: export different log subsets to different destinations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operational reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Retention 
controls<\/strong>: keep logs for the time you need (and no longer).<\/li>\n<li><strong>Noise reduction<\/strong>: exclusions can drop low-value logs before storage\/export.<\/li>\n<li><strong>Least-privilege access<\/strong>: views allow narrowing what different teams can see.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/compliance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cloud Audit Logs<\/strong> provide immutable evidence of API activity (within platform constraints).<\/li>\n<li><strong>Central log exports<\/strong> support security operations (SIEM ingestion, threat hunting).<\/li>\n<li><strong>Access control<\/strong> via IAM, and advanced protections (for example, organization policies and VPC Service Controls where applicable\u2014verify applicability to your environment).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Scalability\/performance reasons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Designed for high-volume ingestion from managed services and distributed workloads.<\/li>\n<li>Avoids scaling and operating your own ingestion pipeline for baseline needs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should choose Cloud Logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You run workloads on Google Cloud and want a managed logging system.<\/li>\n<li>You need consistent access controls and retention policy management.<\/li>\n<li>You want flexible exports (archive, analytics, or streaming) with minimal plumbing.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">When teams should not choose it (or should augment it)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You require a single cross-cloud logging system with identical semantics across all clouds and you already standardized on a third-party SIEM\/observability platform (you may still export from Cloud Logging).<\/li>\n<li>You need full-text indexing and extremely customized query semantics at very large scale and already operate 
a mature Elastic\/OpenSearch\/Splunk platform (Cloud Logging can still be the ingestion source).<\/li>\n<li>You must keep logs entirely off-cloud for regulatory reasons (Cloud Logging may still be used transiently, but you\u2019d prioritize immediate export\u2014verify compliance constraints with your legal\/security team).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">4. Where is Cloud Logging used?<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Industries<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SaaS and technology companies (incident response, debugging, SRE)<\/li>\n<li>Financial services (audit trails, fraud investigation, compliance)<\/li>\n<li>Healthcare (auditability; protected data handling policies)<\/li>\n<li>Retail and e-commerce (transaction monitoring, performance troubleshooting)<\/li>\n<li>Media and gaming (high-scale workloads, real-time ops)<\/li>\n<li>Public sector (governance, compliance, retention requirements)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Team types<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SRE and platform engineering teams (reliability, runbooks, incident response)<\/li>\n<li>DevOps teams (CI\/CD observability, deployment debugging)<\/li>\n<li>Application developers (request tracing, error diagnostics)<\/li>\n<li>Security engineering \/ SOC (audit logs, threat detection via exports)<\/li>\n<li>Data engineering (log analytics via BigQuery exports)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Workloads<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Microservices (Cloud Run, GKE)<\/li>\n<li>VM-based legacy apps (Compute Engine)<\/li>\n<li>Event-driven systems (Pub\/Sub, Cloud Functions)<\/li>\n<li>Data platforms (Dataproc, Dataflow\u2014service logs)<\/li>\n<li>API-driven integrations (audit logs for governance)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Architectures<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Single-project startups (simple default logging)<\/li>\n<li>Multi-project 
enterprises (centralized logging with aggregated sinks)<\/li>\n<li>Hybrid (on-prem apps sending logs via agents to Cloud Logging)<\/li>\n<li>Regulated environments (separate buckets with controlled retention and views)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Production vs dev\/test usage<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Dev\/test<\/strong>: shorter retention, more verbose logs, aggressive exclusions for cost control.<\/li>\n<li><strong>Production<\/strong>: structured logging, strict IAM\/view separation, longer retention for audit\/security, dedicated export pipelines for SIEM and analytics.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">5. Top Use Cases and Scenarios<\/h2>\n\n\n\n<p>Below are realistic scenarios where Cloud Logging is commonly used in Google Cloud.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) Centralized troubleshooting for Cloud Run services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Errors occur intermittently across multiple services; developers can\u2019t reproduce locally.<\/li>\n<li><strong>Why Cloud Logging fits:<\/strong> Automatic capture of request and application logs; filter by service name\/revision\/trace.<\/li>\n<li><strong>Example:<\/strong> Filter <code>resource.type=\"cloud_run_revision\"<\/code> and <code>severity&gt;=ERROR<\/code> during an incident to find failing endpoints after a rollout.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) GKE workload and platform logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Pods restart and nodes show unusual behavior; you need container logs plus cluster system signals.<\/li>\n<li><strong>Why it fits:<\/strong> Integrates with GKE logging pipelines and monitored resources like <code>k8s_container<\/code>.<\/li>\n<li><strong>Example:<\/strong> Query logs for a namespace\/label to isolate one microservice causing elevated 5xx.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) 
Governance and audit trails with Cloud Audit Logs<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Need to know who changed IAM policies or deleted resources.<\/li>\n<li><strong>Why it fits:<\/strong> Cloud Audit Logs are delivered into Cloud Logging (Admin Activity, Data Access, System Event, Policy Denied\u2014availability depends on service and configuration).<\/li>\n<li><strong>Example:<\/strong> Investigate project IAM changes by filtering audit logs for <code>SetIamPolicy<\/code>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) Compliance-driven retention and access separation<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Compliance requires 1-year retention for security logs, but only 30 days for app debug logs.<\/li>\n<li><strong>Why it fits:<\/strong> Buckets can have different retention; views and IAM can restrict access.<\/li>\n<li><strong>Example:<\/strong> Store audit\/security logs in a long-retention bucket while dropping debug logs via exclusions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) Cost control by dropping noisy logs<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Health checks and debug logs generate high volume and cost.<\/li>\n<li><strong>Why it fits:<\/strong> Exclusions can drop logs before storage\/export.<\/li>\n<li><strong>Example:<\/strong> Exclude <code>severity=DEBUG<\/code> from high-traffic services in production while keeping INFO+.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Streaming logs to SIEM via Pub\/Sub<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Security team needs near-real-time log ingestion into a SIEM.<\/li>\n<li><strong>Why it fits:<\/strong> Log Router supports Pub\/Sub sinks; downstream consumers process\/forward.<\/li>\n<li><strong>Example:<\/strong> Export audit logs to Pub\/Sub, then a Dataflow pipeline normalizes and forwards to SIEM.<\/li>\n<\/ul>\n\n\n\n<h3 
class=\"wp-block-heading\">7) Long-term archival to Cloud Storage<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Need cheap, long-term archive for rare investigations.<\/li>\n<li><strong>Why it fits:<\/strong> Export to Cloud Storage; apply lifecycle rules (Nearline\/Coldline\/Archive).<\/li>\n<li><strong>Example:<\/strong> Route all logs to a Cloud Storage bucket via a sink, then let object lifecycle policies move them to colder storage classes once the operational window has passed.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) Analytics and reporting in BigQuery<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Need weekly reports on error rates per endpoint\/customer and ad-hoc forensics.<\/li>\n<li><strong>Why it fits:<\/strong> BigQuery sinks enable SQL analysis at scale; join logs to business data.<\/li>\n<li><strong>Example:<\/strong> Export structured logs to BigQuery, then build Looker dashboards for error trends.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Release validation and canary analysis<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> After deployment, need to compare error spikes between revisions.<\/li>\n<li><strong>Why it fits:<\/strong> Logs contain revision labels and can be filtered rapidly.<\/li>\n<li><strong>Example:<\/strong> Compare logs between <code>revision_name<\/code> values and correlate with trace IDs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) Creating alertable metrics from logs (log-based metrics)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> You need alerts on specific log patterns (e.g., \u201cpayment failed\u201d) without code changes.<\/li>\n<li><strong>Why it fits:<\/strong> Log-based metrics count matching logs; Cloud Monitoring alerts trigger on metric thresholds.<\/li>\n<li><strong>Example:<\/strong> Metric increments when <code>jsonPayload.event=\"PAYMENT_FAILED\"<\/code> appears; alert if &gt; X\/min.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">11) 
Cross-project centralization for platform teams<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Enterprise has dozens of projects; incident response needs a central place to search.<\/li>\n<li><strong>Why it fits:<\/strong> Aggregated sinks at folder\/org export logs into a central project.<\/li>\n<li><strong>Example:<\/strong> Export all Admin Activity audit logs from all projects into a single \u201csecurity-logging\u201d project.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">12) Forensics after a security incident<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem:<\/strong> Need timeline reconstruction: API calls, workload errors, suspicious access.<\/li>\n<li><strong>Why it fits:<\/strong> Cloud Logging + Audit Logs provide time-indexed record; exports preserve evidence.<\/li>\n<li><strong>Example:<\/strong> Use saved queries and exports to isolate activity for a compromised service account.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">6. Core Features<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1) Automatic ingestion for Google Cloud services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Many services write logs to Cloud Logging by default.<\/li>\n<li><strong>Why it matters:<\/strong> You get immediate observability without deploying log infrastructure.<\/li>\n<li><strong>Practical benefit:<\/strong> Faster time-to-debug and consistent format for platform logs.<\/li>\n<li><strong>Caveats:<\/strong> Some log types (notably certain Data Access audit logs) may require explicit enablement and can increase cost. 
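<\/li>\n<\/ul>\n\n\n\n<p>A quick way to confirm that platform logs are actually arriving is the gcloud CLI (assuming it is authenticated against your project; the filter below is an example):<\/p>\n\n\n\n<pre><code class=\"language-bash\"># List the log streams present in the current project\ngcloud logging logs list\n\n# Read recent Cloud Run logs as JSON\ngcloud logging read 'resource.type=\"cloud_run_revision\"' --limit=5 --format=json\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>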
Verify per-service audit logging behavior.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2) Structured logging support (JSON payloads)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Lets apps emit JSON fields (<code>jsonPayload<\/code>) rather than only plain text.<\/li>\n<li><strong>Why it matters:<\/strong> Structured fields are easier to filter, aggregate, and export reliably.<\/li>\n<li><strong>Practical benefit:<\/strong> Query <code>jsonPayload.userId=\"123\"<\/code> or <code>jsonPayload.latency_ms&gt;500<\/code>.<\/li>\n<li><strong>Caveats:<\/strong> Ensure your app logs valid JSON and avoids putting sensitive data in logs.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3) Log Explorer (search, filter, saved queries)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> UI for querying logs with advanced filters; supports saving queries.<\/li>\n<li><strong>Why it matters:<\/strong> Incident response depends on fast, accurate log searches.<\/li>\n<li><strong>Practical benefit:<\/strong> Quickly narrow down by <code>resource.type<\/code>, <code>severity<\/code>, labels, and text.<\/li>\n<li><strong>Caveats:<\/strong> Very broad queries over large time ranges can be slower; use precise filters.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4) Live Tail (near real-time log viewing)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Streams recent logs matching a filter for quick debugging.<\/li>\n<li><strong>Why it matters:<\/strong> Useful while reproducing issues or validating a fix.<\/li>\n<li><strong>Practical benefit:<\/strong> Watch errors appear immediately after a deploy.<\/li>\n<li><strong>Caveats:<\/strong> Not a substitute for durable retention\/exports; it\u2019s a troubleshooting tool.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5) Log buckets (storage + retention)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it 
does:<\/strong> Store logs in buckets with configurable retention.<\/li>\n<li><strong>Why it matters:<\/strong> Retention and separation are core governance controls.<\/li>\n<li><strong>Practical benefit:<\/strong> Short retention for dev logs, longer for security\/audit logs.<\/li>\n<li><strong>Caveats:<\/strong> Longer retention increases storage cost. Bucket location choices may affect compliance.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6) Log views (least-privilege access)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Creates filtered views into a bucket; grant access to the view.<\/li>\n<li><strong>Why it matters:<\/strong> Logs often contain sensitive operational details.<\/li>\n<li><strong>Practical benefit:<\/strong> Give app teams access only to their service logs, not security logs.<\/li>\n<li><strong>Caveats:<\/strong> Requires careful IAM design; test access boundaries before production rollout.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">7) Log Router (routing + processing point)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Evaluates every incoming log entry against sinks and exclusions.<\/li>\n<li><strong>Why it matters:<\/strong> Central place to implement \u201cwhat do we keep, and where do we send it?\u201d<\/li>\n<li><strong>Practical benefit:<\/strong> Export audit logs to SIEM, app logs to BigQuery, archive to GCS.<\/li>\n<li><strong>Caveats:<\/strong> Exclusions drop data\u2014treat them as data-loss controls; document decisions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">8) Sinks (exports to other services)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Exports matching logs to supported destinations:<\/li>\n<li><strong>BigQuery<\/strong> (analytics)<\/li>\n<li><strong>Pub\/Sub<\/strong> (streaming pipelines)<\/li>\n<li><strong>Cloud Storage<\/strong> (archive)<\/li>\n<li><strong>Log bucket<\/strong> 
(re-bucket\/reroute within Logging)<\/li>\n<li><strong>Why it matters:<\/strong> Enables downstream analytics and security workflows.<\/li>\n<li><strong>Practical benefit:<\/strong> Centralize logs across projects, build SIEM pipelines.<\/li>\n<li><strong>Caveats:<\/strong> Destination services have their own costs and permissions. Exports can create high downstream spend if filters are broad.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">9) Exclusions (drop noisy logs)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Prevents certain logs from being stored\/exported.<\/li>\n<li><strong>Why it matters:<\/strong> Cost control and noise reduction.<\/li>\n<li><strong>Practical benefit:<\/strong> Drop health check spam or debug logs in prod.<\/li>\n<li><strong>Caveats:<\/strong> Excluded logs cannot be recovered. Use carefully and version-control your filters.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">10) Log-based metrics<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Creates metrics from logs (counter or distribution, depending on metric type).<\/li>\n<li><strong>Why it matters:<\/strong> Turns log patterns into measurable signals for alerting and SLOs.<\/li>\n<li><strong>Practical benefit:<\/strong> Alert on \u201cauthentication failures\u201d derived from logs.<\/li>\n<li><strong>Caveats:<\/strong> Metric cardinality and volume can affect Monitoring costs\/limits. 
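<\/li>\n<\/ul>\n\n\n\n<p>A counter metric of this kind can be sketched with the gcloud CLI (the metric name and filter are illustrative and must match your own log structure):<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Create a counter metric that increments on matching log entries\ngcloud logging metrics create payment_failures --description='Count of failed payment events' --log-filter='jsonPayload.event=\"PAYMENT_FAILED\"'\n\n# Verify the metric definition\ngcloud logging metrics describe payment_failures\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>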
Prefer structured fields.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">11) Cloud Audit Logs integration<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Stores Admin Activity and other audit logs in Cloud Logging.<\/li>\n<li><strong>Why it matters:<\/strong> Security and governance foundation.<\/li>\n<li><strong>Practical benefit:<\/strong> Investigate changes to IAM, firewall rules, storage access.<\/li>\n<li><strong>Caveats:<\/strong> Data Access logs can be high volume; scope them deliberately.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">12) Programmatic access (Logging API, CLI, client libraries)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Read logs, write logs, manage sinks\/buckets\/views via APIs and <code>gcloud<\/code>.<\/li>\n<li><strong>Why it matters:<\/strong> Enables IaC and automated governance.<\/li>\n<li><strong>Practical benefit:<\/strong> CI pipeline validates logging sinks and retention policies.<\/li>\n<li><strong>Caveats:<\/strong> Protect admin capabilities; changes to sinks\/exclusions impact evidence collection.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">13) (Optional\/advanced) Log analytics-style querying<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>What it does:<\/strong> Some environments support enhanced analytics on logs (for example, SQL-like querying against logs stored in certain bucket configurations).<\/li>\n<li><strong>Why it matters:<\/strong> Enables deeper analysis beyond basic filtering.<\/li>\n<li><strong>Practical benefit:<\/strong> Aggregations, joins (often via exports to BigQuery).<\/li>\n<li><strong>Caveats:<\/strong> Capabilities and billing can vary. <strong>Verify in official docs<\/strong> for current behavior and cost.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">7. 
Architecture and How It Works<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">High-level architecture<\/h3>\n\n\n\n<p>Cloud Logging is best understood as a pipeline:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>Log producers<\/strong> emit logs:\n   &#8211; Google Cloud services automatically (Cloud Run, GKE, Load Balancers, etc.)\n   &#8211; Your applications (stdout\/stderr, client libraries, or agents)\n   &#8211; Audit logs from Google Cloud APIs<\/p>\n<\/li>\n<li>\n<p><strong>Ingestion<\/strong> accepts log entries and normalizes them into a consistent schema.<\/p>\n<\/li>\n<li>\n<p><strong>Log Router<\/strong> processes each log entry:\n   &#8211; Applies <strong>exclusions<\/strong> (drop)\n   &#8211; Evaluates <strong>sinks<\/strong> (export copies to destinations)\n   &#8211; Routes remaining logs to <strong>log buckets<\/strong> for storage<\/p>\n<\/li>\n<li>\n<p><strong>Storage<\/strong> stores logs in buckets with retention policies.<\/p>\n<\/li>\n<li>\n<p><strong>Access and analysis<\/strong>:\n   &#8211; Users query logs via Log Explorer or APIs (subject to IAM).\n   &#8211; <strong>Log-based metrics<\/strong> can be generated and sent to Cloud Monitoring.\n   &#8211; Exports deliver logs to BigQuery\/GCS\/Pub\/Sub for other use cases.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">Data flow vs control flow<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Data flow<\/strong>: log entries \u2192 ingestion \u2192 Log Router \u2192 bucket storage and\/or sink destinations.<\/li>\n<li><strong>Control flow<\/strong>: admins configure buckets, views, sinks, exclusions, and IAM policies.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Integrations with related services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cloud Monitoring<\/strong>: dashboards\/alerts; uses metrics derived from logs.<\/li>\n<li><strong>BigQuery<\/strong>: long-term analytics via sinks; often used for reporting and SIEM-like 
queries.<\/li>\n<li><strong>Pub\/Sub + Dataflow<\/strong>: real-time log processing pipelines.<\/li>\n<li><strong>Cloud Storage<\/strong>: archival with lifecycle management.<\/li>\n<li><strong>Cloud IAM<\/strong>: authorization for reading logs and administering routing.<\/li>\n<li><strong>Security Command Center \/ SIEM tooling<\/strong>: commonly consumes exports (architecture varies).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Dependency services (typical)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>IAM (for roles and permissions)<\/li>\n<li>Destination services for exports (BigQuery\/GCS\/Pub\/Sub)<\/li>\n<li>Workload runtime (Cloud Run\/GKE\/Compute Engine)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Security\/authentication model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>User access<\/strong> is controlled via IAM roles (viewer vs admin vs view access).<\/li>\n<li><strong>Sink exports<\/strong> use a <strong>writer identity<\/strong> (a service account identity managed for the sink). 
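<\/li>\n<\/ul>\n\n\n\n<p>For a Cloud Storage destination, looking up that identity and granting it access can be sketched as follows (the sink name, bucket name, and service account email are placeholders):<\/p>\n\n\n\n<pre><code class=\"language-bash\"># Look up the writer identity of the sink (a service account email)\ngcloud logging sinks describe archive-sink --format='value(writerIdentity)'\n\n# Grant that identity permission to write objects into the destination bucket\ngsutil iam ch 'serviceAccount:SERVICE_ACCOUNT_EMAIL:roles\/storage.objectCreator' gs:\/\/my-log-archive\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>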
You must grant that identity permissions on the destination (e.g., write to GCS or publish to Pub\/Sub).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Networking model<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Logging is a Google-managed service; you access it via Google APIs.<\/li>\n<li>Exports to BigQuery\/GCS\/Pub\/Sub are internal Google Cloud service-to-service operations, but you still need to consider:<\/li>\n<li>Organization policies \/ VPC Service Controls (if used)<\/li>\n<li>Cross-project permissions<\/li>\n<li>Any egress to external SIEM typically happens from your pipeline (e.g., Dataflow) rather than directly from Cloud Logging<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Governance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Establish a bucket strategy (by environment, by sensitivity, by compliance domain).<\/li>\n<li>Use views for least-privilege access.<\/li>\n<li>Centralize audit logs into a dedicated security project via aggregated sinks.<\/li>\n<li>Document and review exclusions and sink filters regularly.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Simple architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart LR\n  A[Google Cloud services&lt;br\/&gt;Cloud Run \/ GKE \/ GCE] --&gt; B[Cloud Logging Ingestion]\n  C[Cloud Audit Logs] --&gt; B\n  B --&gt; D[Log Router]\n  D --&gt;|Store| E[\"Log Bucket(s)&lt;br\/&gt;Retention policy\"]\n  D --&gt;|Export via Sink| F[BigQuery \/ Pub\/Sub \/ Cloud Storage]\n  E --&gt; G[Log Explorer \/ Logging API]\n  E --&gt; H[Log-based Metrics]\n  H --&gt; I[Cloud Monitoring&lt;br\/&gt;Dashboards &amp; Alerts]\n<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Production-style architecture diagram (Mermaid)<\/h3>\n\n\n\n<pre><code class=\"language-mermaid\">flowchart TB\n  subgraph Org[\"Google Cloud Organization\"]\n    subgraph FolderA[\"Folder: Prod\"]\n      P1[\"Project: app-prod-1\"] --&gt; LR1[\"Log Router\"]\n      
P2[\"Project: app-prod-2\"] --&gt; LR2[\"Log Router\"]\n    end\n\n    subgraph SecProj[\"Project: security-logging (central)\"]\n      CentralBucket[\"Central Log Bucket(s)&lt;br\/&gt;- Audit (long retention)&lt;br\/&gt;- Security (long retention)\"]\n      SIEMTopic[\"Pub\/Sub Topic: siem-stream\"]\n      ArchiveBucket[\"Cloud Storage Bucket: log-archive&lt;br\/&gt;Lifecycle rules\"]\n      BQ[\"BigQuery Dataset: log_analytics\"]\n    end\n\n    LR1 --&gt;|Aggregated sink filter: Audit logs| CentralBucket\n    LR2 --&gt;|Aggregated sink filter: Audit logs| CentralBucket\n\n    CentralBucket --&gt;|Sink: security stream| SIEMTopic\n    CentralBucket --&gt;|Sink: archive| ArchiveBucket\n    CentralBucket --&gt;|Sink: analytics| BQ\n  end\n\n  SIEMTopic --&gt; DF[\"Dataflow pipeline&lt;br\/&gt;normalize\/enrich\"] --&gt; ExtSIEM[\"External SIEM&lt;br\/&gt;(vendor managed)\"]\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8. Prerequisites<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Account\/project\/billing<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A Google Cloud account with an active <strong>billing account<\/strong> attached to your project.<\/li>\n<li>A Google Cloud project where you can create resources (Cloud Run, Cloud Storage, Logging configuration).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Permissions (IAM roles)<\/h3>\n\n\n\n<p>At minimum, for the hands-on lab you typically need:\n&#8211; Cloud Logging:\n  &#8211; <code>roles\/logging.viewer<\/code> (to view logs)\n  &#8211; <code>roles\/logging.configWriter<\/code> or <code>roles\/logging.admin<\/code> (to create sinks\/exclusions\/buckets\/metrics)\n&#8211; Cloud Run:\n  &#8211; <code>roles\/run.admin<\/code> (to deploy a service)\n  &#8211; <code>roles\/iam.serviceAccountUser<\/code> (to allow Cloud Run to use a service account, if you specify one)\n&#8211; Cloud Storage:\n  &#8211; <code>roles\/storage.admin<\/code> (or at least permission to create a bucket and set IAM on 
it)<\/p>\n\n\n\n<p>In production, avoid broad roles and prefer least privilege (covered later).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Tools<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Cloud Console access<\/li>\n<li><code>gcloud<\/code> CLI (Cloud SDK), authenticated to your account and project (Cloud Shell includes <code>gcloud<\/code> preinstalled and pre-authenticated).<\/li>\n<li><code>gsutil<\/code> (included with the Cloud SDK) for Cloud Storage operations.<\/li>\n<li>Optional: <code>curl<\/code> for sending test requests.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">APIs to enable<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Logging API: <code>logging.googleapis.com<\/code><\/li>\n<li>Cloud Run Admin API: <code>run.googleapis.com<\/code><\/li>\n<li>Cloud Build API (for source-based deploys): <code>cloudbuild.googleapis.com<\/code><\/li>\n<li>Cloud Storage API: <code>storage.googleapis.com<\/code><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Region availability<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Logging is a global Google Cloud service with location-based storage options (log bucket location).<\/li>\n<li>Cloud Run is regional; pick a supported region (e.g., <code>us-central1<\/code>). 
Verify in the Cloud Run locations doc if needed.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Quotas\/limits to be aware of (high level)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Logging ingestion, read API usage, export volume, and sink counts are quota-managed.<\/li>\n<li>Maximum log entry size and label limits exist.<\/li>\n<li><strong>Always check the official quotas\/limits docs<\/strong> before large production rollouts, especially for high-volume workloads or centralized exports.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Prerequisite services<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Run (to generate logs in the lab)<\/li>\n<li>Cloud Storage (as an export destination in the lab)<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">9. Pricing \/ Cost<\/h2>\n\n\n\n<p>Cloud Logging pricing is <strong>usage-based<\/strong>. The exact SKUs and free-tier amounts can change, so confirm using official sources:\n&#8211; Pricing page: https:\/\/cloud.google.com\/logging\/pricing\n&#8211; Cloud Pricing Calculator: https:\/\/cloud.google.com\/products\/calculator<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Pricing dimensions (what you pay for)<\/h3>\n\n\n\n<p>Common cost components include:\n1. <strong>Log ingestion volume<\/strong><br\/>\n   &#8211; Typically charged by the amount of log data ingested (bytes\/GiB).\n2. <strong>Log storage \/ retention<\/strong><br\/>\n   &#8211; Default retention is provided for a baseline period; keeping logs longer or in additional buckets can incur storage charges (GiB-month).\n3. 
<strong>Log routing \/ exports (indirect costs)<\/strong><br\/>\n   &#8211; Cloud Logging itself routes logs, but the destination services charge you:\n     &#8211; <strong>BigQuery<\/strong>: storage + query processing\n     &#8211; <strong>Cloud Storage<\/strong>: object storage + operations\n     &#8211; <strong>Pub\/Sub<\/strong>: message volume + delivery\/storage (depending on configuration)\n     &#8211; <strong>Dataflow<\/strong> (if you process exports): compute + streaming charges\n4. <strong>Log-based metrics (indirect costs)<\/strong><br\/>\n   &#8211; The metrics created from logs are used in Cloud Monitoring and may contribute to Monitoring metric volume and alerting costs depending on your usage and Google Cloud\u2019s current pricing model. Verify current Monitoring pricing.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Free tier (important)<\/h3>\n\n\n\n<p>Google Cloud typically provides some <strong>no-cost baseline<\/strong> for logging (often a monthly ingestion allowance and\/or default retention). The exact amounts and what\u2019s included can change. 
<strong>Verify the current free tier details on the official pricing page<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Key cost drivers<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>High-volume workloads (HTTP access logs, verbose debug logs, chatty services)<\/li>\n<li>Multi-tenant platforms producing large structured logs per request<\/li>\n<li>Enabling high-volume audit logs (especially Data Access logs) without scoping<\/li>\n<li>Broad export sinks sending \u201ceverything\u201d to BigQuery (query costs can become significant)<\/li>\n<li>Long retention on large buckets<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Hidden\/indirect costs to plan for<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>BigQuery query costs<\/strong> from analysts running wide scans over exported logs.<\/li>\n<li><strong>Storage growth<\/strong> when you extend retention for large log streams.<\/li>\n<li><strong>Egress<\/strong> if you forward logs to an external SIEM from a pipeline running in Google Cloud (network costs depend on routing and destination).<\/li>\n<li><strong>Operational overhead<\/strong>: managing sink filters, schema normalization, and access controls.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network\/data transfer implications<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Intra-Google service routing (Logging \u2192 BigQuery\/GCS\/Pub\/Sub) is typically not treated like public internet egress, but costs and policies can vary by service, region, and configuration. 
<strong>Verify in the pricing docs<\/strong> for the specific destination service and your network architecture.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">How to optimize cost (high impact)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use <strong>structured logging<\/strong> and log only what you will query.<\/li>\n<li>Implement <strong>exclusions<\/strong> for noisy low-value logs (health checks, debug).<\/li>\n<li>Separate buckets by <strong>retention needs<\/strong> (short for dev\/debug, longer for audit\/security).<\/li>\n<li>Export only what you need:<\/li>\n<li>Use sink filters to export subsets (e.g., <code>severity&gt;=ERROR<\/code> to BigQuery).<\/li>\n<li>For BigQuery exports:<\/li>\n<li>Partition\/cluster appropriately (if applicable)<\/li>\n<li>Restrict who can run expensive queries<\/li>\n<li>Use scheduled queries\/materialized summaries rather than repeated ad-hoc scans<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Example low-cost starter estimate (qualitative)<\/h3>\n\n\n\n<p>A small Cloud Run service with moderate traffic:\n&#8211; Keeps default retention\n&#8211; Excludes DEBUG logs\n&#8211; No BigQuery export (or exports only ERROR logs)\nOften stays near the low end of Logging costs; your spend is primarily driven by ingestion volume. 
<strong>Check the pricing calculator with your expected GiB\/month<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example production cost considerations (what to model)<\/h3>\n\n\n\n<p>For a platform with dozens of services and security requirements:\n&#8211; Centralized audit log exports across projects\n&#8211; Long retention for audit\/security buckets (months\/years)\n&#8211; Streaming export to SIEM via Pub\/Sub + Dataflow\nYou should model:\n&#8211; Total ingestion GiB\/month (by environment)\n&#8211; Retention duration by bucket and expected growth\n&#8211; BigQuery storage\/query costs (if used)\n&#8211; Dataflow and Pub\/Sub throughput\n&#8211; Access controls that prevent \u201crunaway queries\u201d and unnecessary exports<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">10. Step-by-Step Hands-On Tutorial<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Objective<\/h3>\n\n\n\n<p>Deploy a small Cloud Run service that emits structured logs, explore those logs in Cloud Logging, create a log-based metric, and export logs to Cloud Storage using a sink.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lab Overview<\/h3>\n\n\n\n<p>You will:\n1. Set up variables and enable required APIs.\n2. Deploy a Cloud Run service that generates INFO and ERROR logs.\n3. Query logs using Log Explorer and <code>gcloud logging read<\/code>.\n4. Create a log-based metric for ERROR logs.\n5. Create a Cloud Storage sink and verify exported log objects.\n6. 
Clean up all resources to avoid ongoing charges.<\/p>\n\n\n\n<p>This lab is designed to be low-cost, but you should still use a billing-enabled project and clean up afterward.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 1: Select a project, region, and enable APIs<\/h3>\n\n\n\n<p>1) Open <strong>Cloud Shell<\/strong> in the Google Cloud Console.<\/p>\n\n\n\n<p>2) Set environment variables (adjust as needed):<\/p>\n\n\n\n<pre><code class=\"language-bash\">export PROJECT_ID=\"YOUR_PROJECT_ID\"\nexport REGION=\"us-central1\"\ngcloud config set project \"${PROJECT_ID}\"\ngcloud config set run\/region \"${REGION}\"\n<\/code><\/pre>\n\n\n\n<p>3) Enable the required APIs:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud services enable \\\n  logging.googleapis.com \\\n  run.googleapis.com \\\n  cloudbuild.googleapis.com \\\n  storage.googleapis.com\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> APIs enable successfully (this may take a minute).<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 2: Create and deploy a Cloud Run service that writes structured logs<\/h3>\n\n\n\n<p>1) Create a working directory:<\/p>\n\n\n\n<pre><code class=\"language-bash\">mkdir -p ~\/cloud-logging-lab &amp;&amp; cd ~\/cloud-logging-lab\n<\/code><\/pre>\n\n\n\n<p>2) Create a simple Python service.<\/p>\n\n\n\n<p>Create <code>main.py<\/code>:<\/p>\n\n\n\n<pre><code class=\"language-python\">import json\nimport logging\nimport os\nfrom flask import Flask, request\n\napp = Flask(__name__)\n\n# Cloud Run captures stdout\/stderr into Cloud Logging automatically.\n# basicConfig attaches a handler to the root logger; without it, INFO\n# records are silently dropped when running under gunicorn.\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(\"log-demo\")\nlogger.setLevel(logging.INFO)\n\n@app.get(\"\/\")\ndef hello():\n    payload = {\n        \"event\": \"hello\",\n        \"path\": request.path,\n        \"user_agent\": request.headers.get(\"User-Agent\"),\n    }\n    logger.info(json.dumps(payload))\n    return (\"OK\\n\", 
200)\n\n@app.get(\"\/error\")\ndef error():\n    payload = {\n        \"event\": \"forced_error\",\n        \"path\": request.path,\n        \"reason\": \"demo\",\n    }\n    logger.error(json.dumps(payload))\n    return (\"ERROR\\n\", 500)\n\nif __name__ == \"__main__\":\n    port = int(os.environ.get(\"PORT\", \"8080\"))\n    app.run(host=\"0.0.0.0\", port=port)\n<\/code><\/pre>\n\n\n\n<p>Create <code>requirements.txt<\/code>:<\/p>\n\n\n\n<pre><code class=\"language-text\">Flask==3.0.3\ngunicorn==22.0.0\n<\/code><\/pre>\n\n\n\n<p>Create <code>Procfile<\/code> (for buildpacks):<\/p>\n\n\n\n<pre><code class=\"language-text\">web: gunicorn -b :$PORT main:app\n<\/code><\/pre>\n\n\n\n<p>3) Deploy to Cloud Run:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud run deploy log-demo \\\n  --source . \\\n  --allow-unauthenticated\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> Deployment completes and prints a service URL, such as:\n<code>https:\/\/log-demo-&lt;hash&gt;-uc.a.run.app<\/code><\/p>\n\n\n\n<p>Save it:<\/p>\n\n\n\n<pre><code class=\"language-bash\">export SERVICE_URL=\"$(gcloud run services describe log-demo --format='value(status.url)')\"\necho \"${SERVICE_URL}\"\n<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 3: Generate logs and confirm they appear in Cloud Logging<\/h3>\n\n\n\n<p>1) Send a few requests:<\/p>\n\n\n\n<pre><code class=\"language-bash\">curl -s \"${SERVICE_URL}\/\" &gt;\/dev\/null\ncurl -s \"${SERVICE_URL}\/\" &gt;\/dev\/null\ncurl -s -o \/dev\/null -w \"%{http_code}\\n\" \"${SERVICE_URL}\/error\"\n<\/code><\/pre>\n\n\n\n<p>You should see <code>500<\/code> for the last request.<\/p>\n\n\n\n<p>2) In the Console:\n&#8211; Go to <strong>Logging \u2192 Log Explorer<\/strong>\n&#8211; Use a query like:<\/p>\n\n\n\n<pre><code>resource.type=\"cloud_run_revision\"\nresource.labels.service_name=\"log-demo\"\n<\/code><\/pre>\n\n\n\n<p>Optionally narrow to 
errors:<\/p>\n\n\n\n<pre><code>resource.type=\"cloud_run_revision\"\nresource.labels.service_name=\"log-demo\"\nseverity&gt;=ERROR\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> You see recent log entries from Cloud Run, including your JSON message content.<\/p>\n\n\n\n<p>3) Verify via CLI (fast and scriptable):<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging read \\\n  'resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"log-demo\"' \\\n  --limit 10 \\\n  --format \"value(timestamp,severity,textPayload)\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> You see timestamps and log messages. Depending on how Cloud Run captured the output, your JSON may appear in <code>textPayload<\/code> (as a JSON string) or as structured fields. Either is fine for this lab.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 4: Create a log-based metric for ERROR logs<\/h3>\n\n\n\n<p>Create a counter metric that increments for each ERROR (or higher) log entry from this service:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging metrics create log_demo_error_count \\\n  --description=\"Count of ERROR logs from Cloud Run service log-demo\" \\\n  --log-filter='resource.type=\"cloud_run_revision\"\nAND resource.labels.service_name=\"log-demo\"\nAND severity&gt;=ERROR'\n<\/code><\/pre>\n\n\n\n<p>List and describe it:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging metrics list --format=\"table(name,description)\"\ngcloud logging metrics describe log_demo_error_count\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> The metric exists in Cloud Logging configuration.<\/p>\n\n\n\n<p>To generate more error logs:<\/p>\n\n\n\n<pre><code class=\"language-bash\">for i in {1..5}; do\n  curl -s -o \/dev\/null -w \"%{http_code}\\n\" \"${SERVICE_URL}\/error\"\ndone\n<\/code><\/pre>\n\n\n\n<p><strong>Verification in Console (optional):<\/strong>\n&#8211; Go to 
<strong>Monitoring \u2192 Metrics Explorer<\/strong>\n&#8211; Look for a user-defined metric related to Logging (often under a namespace like <code>logging.googleapis.com\/user\/...<\/code>).\n&#8211; It can take a few minutes for new time series to appear.<\/p>\n\n\n\n<p><strong>Note:<\/strong> Alerting is configured in Cloud Monitoring, not directly in Cloud Logging. If you build alerts, ensure you understand Monitoring pricing and metric cardinality.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 5: Export Cloud Run logs to Cloud Storage using a sink<\/h3>\n\n\n\n<p>This step demonstrates the Log Router + Sink workflow.<\/p>\n\n\n\n<p>1) Create a Cloud Storage bucket (choose a globally-unique name):<\/p>\n\n\n\n<pre><code class=\"language-bash\">export EXPORT_BUCKET=\"log-demo-export-${PROJECT_ID}-${RANDOM}\"\ngsutil mb -l \"${REGION}\" \"gs:\/\/${EXPORT_BUCKET}\"\n<\/code><\/pre>\n\n\n\n<p>2) Create a sink that exports only <code>log-demo<\/code> service logs:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging sinks create log-demo-to-gcs \\\n  \"storage.googleapis.com\/${EXPORT_BUCKET}\" \\\n  --log-filter='resource.type=\"cloud_run_revision\"\nAND resource.labels.service_name=\"log-demo\"'\n<\/code><\/pre>\n\n\n\n<p>3) Grant the sink\u2019s writer identity permission to write objects into the bucket:<\/p>\n\n\n\n<pre><code class=\"language-bash\">export WRITER_IDENTITY=\"$(gcloud logging sinks describe log-demo-to-gcs --format='value(writerIdentity)')\"\necho \"${WRITER_IDENTITY}\"\n\ngsutil iam ch \"${WRITER_IDENTITY}:objectCreator\" \"gs:\/\/${EXPORT_BUCKET}\"\n<\/code><\/pre>\n\n\n\n<p>4) Generate more logs:<\/p>\n\n\n\n<pre><code class=\"language-bash\">curl -s \"${SERVICE_URL}\/\" &gt;\/dev\/null\ncurl -s -o \/dev\/null -w \"%{http_code}\\n\" \"${SERVICE_URL}\/error\"\n<\/code><\/pre>\n\n\n\n<p>5) Wait a bit (exports can be delayed), then list exported objects:<\/p>\n\n\n\n<pre><code 
class=\"language-bash\">gsutil ls \"gs:\/\/${EXPORT_BUCKET}\/**\" || true\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> You should eventually see objects written by the export pipeline. The exact directory structure and file naming are managed by Cloud Logging.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Step 6: (Optional) Create an exclusion to drop DEBUG logs<\/h3>\n\n\n\n<p>If your app were emitting debug logs, you could exclude them to reduce noise\/cost.<\/p>\n\n\n\n<p>Create an exclusion (example filter; adjust as needed):<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging exclusions create drop-debug-cloudrun \\\n  --description=\"Drop DEBUG logs from Cloud Run to reduce noise\" \\\n  --filter='resource.type=\"cloud_run_revision\" AND severity=DEBUG'\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> The exclusion is created. Future matching logs will not be stored in the <code>_Default<\/code> log bucket; sinks you created separately (such as the Cloud Storage export sink above) still evaluate their own filters.<\/p>\n\n\n\n<p><strong>Important:<\/strong> Exclusions permanently drop matching logs. 
Use carefully.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Validation<\/h3>\n\n\n\n<p>Use this checklist:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Run service is reachable:<\/li>\n<li><code>curl ${SERVICE_URL}\/<\/code> returns <code>OK<\/code><\/li>\n<li><code>curl ${SERVICE_URL}\/error<\/code> returns HTTP 500<\/li>\n<li>Logs appear in Cloud Logging:<\/li>\n<li>Log Explorer shows entries for <code>service_name=\"log-demo\"<\/code><\/li>\n<li><code>gcloud logging read ...<\/code> returns recent entries<\/li>\n<li>Log-based metric exists:<\/li>\n<li><code>gcloud logging metrics describe log_demo_error_count<\/code> succeeds<\/li>\n<li>Sink exports are working:<\/li>\n<li><code>gsutil ls gs:\/\/${EXPORT_BUCKET}\/**<\/code> shows objects (after some delay)<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Troubleshooting<\/h3>\n\n\n\n<p><strong>Issue: No logs appear in Log Explorer<\/strong>\n&#8211; Wait 1\u20133 minutes; ingestion and UI indexing aren\u2019t always instant.\n&#8211; Confirm you used the correct resource filter:\n  &#8211; <code>resource.type=\"cloud_run_revision\"<\/code>\n  &#8211; <code>resource.labels.service_name=\"log-demo\"<\/code>\n&#8211; Confirm you\u2019re in the correct project in the Console and Cloud Shell:\n  &#8211; <code>gcloud config get-value project<\/code><\/p>\n\n\n\n<p><strong>Issue: Sink created but nothing shows up in Cloud Storage<\/strong>\n&#8211; Exports can be delayed; wait several minutes and try again.\n&#8211; Confirm bucket IAM was granted to the sink writer identity:\n  &#8211; Re-run <code>gcloud logging sinks describe ...<\/code> and <code>gsutil iam get ...<\/code>\n&#8211; Confirm the sink filter matches the log entries you are generating.\n&#8211; Ensure the destination bucket exists and you used the correct bucket name.<\/p>\n\n\n\n<p><strong>Issue: Permission denied when creating 
sinks\/exclusions\/metrics<\/strong>\n&#8211; You likely need <code>roles\/logging.configWriter<\/code> or <code>roles\/logging.admin<\/code>.\n&#8211; In locked-down orgs, organization policies might restrict sink destinations or storage creation.<\/p>\n\n\n\n<p><strong>Issue: Cloud Run deploy fails<\/strong>\n&#8211; Confirm Cloud Build API is enabled.\n&#8211; Confirm you have <code>roles\/run.admin<\/code> and permissions to build.\n&#8211; Try <code>gcloud run deploy ... --verbosity=debug<\/code> for details.<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Cleanup<\/h3>\n\n\n\n<p>To avoid ongoing charges and clutter, delete everything you created.<\/p>\n\n\n\n<p>1) Delete the Cloud Run service:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud run services delete log-demo --quiet\n<\/code><\/pre>\n\n\n\n<p>2) Delete the log sink:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging sinks delete log-demo-to-gcs --quiet\n<\/code><\/pre>\n\n\n\n<p>3) Delete the log-based metric:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging metrics delete log_demo_error_count --quiet\n<\/code><\/pre>\n\n\n\n<p>4) Delete the exclusion (if created):<\/p>\n\n\n\n<pre><code class=\"language-bash\">gcloud logging exclusions delete drop-debug-cloudrun --quiet\n<\/code><\/pre>\n\n\n\n<p>5) Delete exported objects and the Cloud Storage bucket:<\/p>\n\n\n\n<pre><code class=\"language-bash\">gsutil -m rm -r \"gs:\/\/${EXPORT_BUCKET}\/**\" || true\ngsutil rb \"gs:\/\/${EXPORT_BUCKET}\"\n<\/code><\/pre>\n\n\n\n<p><strong>Expected outcome:<\/strong> All lab resources are removed.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">11. 
Best Practices<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Architecture best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Design bucket strategy intentionally<\/strong><\/li>\n<li>Separate by environment (dev\/test\/prod) and by sensitivity (audit\/security\/app).<\/li>\n<li>Assign retention based on real requirements.<\/li>\n<li><strong>Centralize audit logs<\/strong><\/li>\n<li>Use aggregated sinks at folder\/org to route audit logs into a dedicated security project.<\/li>\n<li><strong>Use structured logging<\/strong><\/li>\n<li>Prefer JSON fields and consistent keys (<code>event<\/code>, <code>request_id<\/code>, <code>user_id<\/code>, <code>tenant_id<\/code>, <code>latency_ms<\/code>).<\/li>\n<li><strong>Correlate logs with traces<\/strong><\/li>\n<li>Where supported, propagate trace context and include trace identifiers to pivot between logs and traces.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">IAM\/security best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Least privilege<\/strong><\/li>\n<li>Use views to restrict access to subsets of logs.<\/li>\n<li>Separate \u201cread logs\u201d from \u201cadmin routing configuration\u201d roles.<\/li>\n<li><strong>Separate duties<\/strong><\/li>\n<li>Security team owns audit\/security buckets and export sinks.<\/li>\n<li>App teams get access to their views only.<\/li>\n<li><strong>Protect sink configuration<\/strong><\/li>\n<li>Changing sinks\/exclusions can break compliance evidence pipelines. Restrict who can edit them.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cost best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Drop what you don\u2019t need<\/strong><\/li>\n<li>Exclude overly verbose logs (health checks, debug in production).<\/li>\n<li><strong>Scope exports<\/strong><\/li>\n<li>Export only relevant logs to BigQuery\/SIEM. 
Broad exports increase downstream cost.<\/li>\n<li><strong>Short retention for noisy logs<\/strong><\/li>\n<li>Use shorter retention buckets for high-volume access logs unless required longer.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Performance best practices (operational efficiency)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use specific filters (resource type, labels, severity) rather than broad text searches.<\/li>\n<li>Save common queries and document them in runbooks.<\/li>\n<li>For analytics at scale, export to BigQuery and design a dataset strategy for query performance.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Reliability best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Treat exports as production pipelines:<\/li>\n<li>Validate sink filters after changes.<\/li>\n<li>Monitor destination write errors (where available).<\/li>\n<li>Ensure permissions remain correct (especially if buckets\/datasets are recreated).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operations best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Standardize log formats across teams.<\/li>\n<li>Use consistent severity levels.<\/li>\n<li>Include request IDs and tenant identifiers for multi-tenant systems (but avoid sensitive data).<\/li>\n<li>Create an \u201cincident mode\u201d checklist: what queries to run, what time windows, what buckets.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Governance\/tagging\/naming best practices<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Naming conventions for sinks and buckets:<\/li>\n<li><code>env-domain-destination<\/code> (e.g., <code>prod-audit-to-siem-pubsub<\/code>)<\/li>\n<li>Document ownership:<\/li>\n<li>Who owns each sink\/bucket, why it exists, and what filter it uses.<\/li>\n<li>Version-control filters:<\/li>\n<li>Store sink and exclusion filters in IaC (Terraform or similar) to reduce drift.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">12. 
Security Considerations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Identity and access model<\/h3>\n\n\n\n<p>Cloud Logging uses <strong>Cloud IAM<\/strong>:\n&#8211; Viewer roles can read logs (subject to restrictions).\n&#8211; Admin\/config roles can change sinks, exclusions, buckets, and views.\n&#8211; Sink exports use a <strong>writer identity<\/strong> that must be granted destination permissions.<\/p>\n\n\n\n<p>Key recommendations:\n&#8211; Use <strong>views<\/strong> to limit exposure instead of giving broad project-level log viewer access.\n&#8211; Separate roles for:\n  &#8211; Reading logs (operators, developers)\n  &#8211; Administering routing\/retention (platform\/security admins)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Encryption<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Logs are encrypted at rest by Google by default.<\/li>\n<li>For stricter requirements, Cloud Logging supports customer-managed encryption keys (CMEK) for some storage configurations (such as log buckets). 
<strong>Verify current CMEK support and limitations in official docs<\/strong> and validate with your compliance team.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Network exposure<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Logging is accessed via Google APIs.<\/li>\n<li>If your organization uses perimeter controls (e.g., VPC Service Controls), verify how Logging access and exports behave within your perimeter.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secrets handling<\/h3>\n\n\n\n<p>Common mistake: logging secrets accidentally.\n&#8211; Never log raw credentials, API keys, OAuth tokens, session cookies, or private keys.\n&#8211; Implement application-level log scrubbing\/redaction.\n&#8211; Restrict who can view sensitive logs; use separate buckets\/views.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Audit\/logging of logging changes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Changes to sinks, exclusions, buckets, and IAM should be monitored.<\/li>\n<li>Use Cloud Audit Logs to track admin actions on logging configuration.<\/li>\n<li>Consider exporting logging configuration audit events to a security-owned destination for independent oversight.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compliance considerations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Define retention that meets regulatory requirements and legal holds.<\/li>\n<li>Ensure log bucket location aligns with data residency needs.<\/li>\n<li>Ensure access controls align with privacy policies (logs can contain personal data).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Common security mistakes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Granting <code>roles\/logging.admin<\/code> to too many users.<\/li>\n<li>Exporting \u201call logs\u201d to external systems without data classification.<\/li>\n<li>Using exclusions without review, dropping critical audit\/security events.<\/li>\n<li>Storing logs long-term without access review (stale 
permissions).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Secure deployment recommendations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Create separate security log buckets with strict IAM.<\/li>\n<li>Use aggregated sinks for organization-level audit coverage.<\/li>\n<li>Limit who can disable\/alter sinks and exclusions.<\/li>\n<li>Implement periodic access reviews and automated policy checks (IaC + CI).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">13. Limitations and Gotchas<\/h2>\n\n\n\n<p>The following are common pitfalls. Exact limits evolve; <strong>verify quotas\/limits in official documentation<\/strong> for your environment.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Known limitations \/ quotas (examples to check)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Maximum log entry size and payload constraints.<\/li>\n<li>Rate limits for writing logs (API) and reading logs (queries).<\/li>\n<li>Limits on number of sinks, exclusions, and metrics per project.<\/li>\n<li>Export delivery is not necessarily instantaneous; sinks can have delays.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Regional constraints<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Log bucket location options may not match every region.<\/li>\n<li>If you need strict residency, validate bucket location support and export destinations.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Pricing surprises<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enabling high-volume audit logs (especially Data Access) can increase ingestion significantly.<\/li>\n<li>BigQuery exports can cause large query costs if analysts scan huge time ranges.<\/li>\n<li>Verbose application logs (INFO per request with large payloads) can dominate ingestion.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Compatibility issues<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Some workloads output logs in formats that are hard to query (giant unstructured text).<\/li>\n<li>If you mix 
languages\/frameworks, ensure a consistent structured schema.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Operational gotchas<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Exclusions permanently drop data\u2014treat them as high-risk changes.<\/li>\n<li>Sink filters that are too broad can overwhelm downstream systems.<\/li>\n<li>Cross-project exports require careful IAM and can fail silently if permissions drift.<\/li>\n<li>Developers may rely on logs that are excluded in prod\u2014ensure a policy and communication.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Migration challenges<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Moving from self-managed stacks (ELK\/Splunk) requires:<\/li>\n<li>Schema mapping decisions<\/li>\n<li>Query translation<\/li>\n<li>Retention and access model redesign<\/li>\n<li>Export pipeline design for existing SIEM<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Vendor-specific nuances<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Logging uses Google\u2019s monitored resource model; learning resource types\/labels is essential for reliable filters.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">14. 
Comparison with Alternatives<\/h2>\n\n\n\n<p>Cloud Logging is the native choice for Google Cloud logs, but it\u2019s not the only option.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Comparison table<\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Option<\/th>\n<th>Best For<\/th>\n<th>Strengths<\/th>\n<th>Weaknesses<\/th>\n<th>When to Choose<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Cloud Logging (Google Cloud)<\/strong><\/td>\n<td>Google Cloud-first workloads<\/td>\n<td>Native integration, Log Router exports, IAM\/views, audit logs<\/td>\n<td>Costs scale with volume; analytics may require exports for deep SQL<\/td>\n<td>You run on Google Cloud and want managed logging with strong routing\/governance<\/td>\n<\/tr>\n<tr>\n<td><strong>Cloud Monitoring (Google Cloud)<\/strong><\/td>\n<td>Metrics, dashboards, alerting<\/td>\n<td>Excellent for time-series; integrates with log-based metrics<\/td>\n<td>Not a log store; limited for raw event forensics<\/td>\n<td>Use alongside Cloud Logging; don\u2019t replace logs with metrics<\/td>\n<\/tr>\n<tr>\n<td><strong>BigQuery (via Logging sink)<\/strong><\/td>\n<td>Large-scale log analytics<\/td>\n<td>SQL, joins with business data, long-term analysis<\/td>\n<td>Separate cost model; requires schema\/partitioning strategy<\/td>\n<td>When you need analytics\/reporting beyond interactive log search<\/td>\n<\/tr>\n<tr>\n<td><strong>Pub\/Sub + Dataflow pipeline<\/strong><\/td>\n<td>Real-time processing\/forwarding<\/td>\n<td>Streaming enrichment, SIEM forwarders<\/td>\n<td>More moving parts; operational overhead<\/td>\n<td>When you need near-real-time detection or cross-tool distribution<\/td>\n<\/tr>\n<tr>\n<td><strong>Cloud Storage (via sink)<\/strong><\/td>\n<td>Low-cost archival<\/td>\n<td>Cheap long-term storage + lifecycle classes<\/td>\n<td>Not interactive for search; retrieval\/processing needed<\/td>\n<td>When logs are rarely accessed but must be retained<\/td>\n<\/tr>\n<tr>\n<td><strong>AWS 
CloudWatch Logs<\/strong><\/td>\n<td>AWS workloads<\/td>\n<td>Tight AWS integration<\/td>\n<td>Not native to Google Cloud; cross-cloud overhead<\/td>\n<td>Only if primary workloads are on AWS<\/td>\n<\/tr>\n<tr>\n<td><strong>Azure Monitor Logs (Log Analytics)<\/strong><\/td>\n<td>Azure workloads<\/td>\n<td>Azure-native querying and integrations<\/td>\n<td>Not native to Google Cloud<\/td>\n<td>Only if primary workloads are on Azure<\/td>\n<\/tr>\n<tr>\n<td><strong>Elastic Stack \/ OpenSearch (self-managed or managed)<\/strong><\/td>\n<td>Custom search + dashboards<\/td>\n<td>Powerful search\/indexing; flexible<\/td>\n<td>Requires ops or managed service costs; ingestion pipelines<\/td>\n<td>If you need custom full-text search and already standardize on Elastic\/OpenSearch<\/td>\n<\/tr>\n<tr>\n<td><strong>Splunk \/ Datadog \/ other SaaS observability<\/strong><\/td>\n<td>Multi-cloud standardization<\/td>\n<td>Unified UI across clouds; advanced analytics<\/td>\n<td>Vendor costs; still need exports and governance<\/td>\n<td>If org standardizes on a third-party platform; export from Cloud Logging<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">15. Real-World Example<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Enterprise example: Centralized audit logging for a regulated organization<\/h3>\n\n\n\n<p><strong>Problem<\/strong>\nA financial services company operates 100+ Google Cloud projects. 
Auditors require:\n&#8211; Centralized access to Admin Activity audit logs\n&#8211; Long retention for specific audit\/security logs\n&#8211; Strict separation: developers must not access security logs<\/p>\n\n\n\n<p><strong>Proposed architecture<\/strong>\n&#8211; Folder\/org-level aggregated sinks export:\n  &#8211; Admin Activity audit logs\n  &#8211; Policy Denied events\n  &#8211; Selected Data Access logs (scoped and approved)\n&#8211; Destination: a central \u201csecurity-logging\u201d project with:\n  &#8211; Dedicated long-retention log buckets\n  &#8211; Views for different security personas (SOC analysts vs compliance auditors)\n&#8211; Additional sinks export subsets to:\n  &#8211; Pub\/Sub for SIEM streaming\n  &#8211; Cloud Storage for long-term archive with lifecycle rules<\/p>\n\n\n\n<p><strong>Why Cloud Logging was chosen<\/strong>\n&#8211; Native audit log delivery and consistent schema\n&#8211; Built-in routing (Log Router) and permission model (views + IAM)\n&#8211; Works across projects with organization-level configuration patterns<\/p>\n\n\n\n<p><strong>Expected outcomes<\/strong>\n&#8211; Faster audit response and forensics\n&#8211; Reduced risk of missing evidence due to project-by-project variation\n&#8211; Controlled cost via scoped exports and retention policies<\/p>\n\n\n\n<hr class=\"wp-block-separator\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">Startup\/small-team example: Cloud Run microservices debugging and alerting<\/h3>\n\n\n\n<p><strong>Problem<\/strong>\nA startup runs microservices on Cloud Run. 
They need:\n&#8211; Quick debugging during feature releases\n&#8211; Alerts when error rate spikes\n&#8211; Low operational overhead<\/p>\n\n\n\n<p><strong>Proposed architecture<\/strong>\n&#8211; Use Cloud Logging as the primary log store (default buckets)\n&#8211; Standardize structured JSON logs (event, severity, request_id)\n&#8211; Create a few log-based metrics (e.g., <code>payment_failed<\/code>, <code>auth_error<\/code>)\n&#8211; Configure Cloud Monitoring alerts on those metrics\n&#8211; Export only ERROR logs to Cloud Storage for lightweight archiving<\/p>\n\n\n\n<p><strong>Why Cloud Logging was chosen<\/strong>\n&#8211; No agents required for Cloud Run\n&#8211; Fast search during incidents\n&#8211; Easy to add routing later as the company grows<\/p>\n\n\n\n<p><strong>Expected outcomes<\/strong>\n&#8211; Faster debugging and fewer \u201cblind\u201d incidents\n&#8211; Predictable baseline costs by excluding DEBUG logs\n&#8211; A path to mature exports (BigQuery\/SIEM) later without redesign<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">16. FAQ<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1) Is \u201cCloud Logging\u201d the same as Stackdriver Logging?<\/h3>\n\n\n\n<p>Cloud Logging is the current Google Cloud product name. \u201cStackdriver Logging\u201d is an older name you may still see in older posts and legacy references.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2) Do I need to install an agent to get logs from Google Cloud services?<\/h3>\n\n\n\n<p>Many Google Cloud services send logs automatically. For VM-based workloads or custom apps, you may use an agent (for example, Google Cloud Ops Agent) or write logs via stdout\/client libraries depending on the environment. Verify the recommended collection method for your compute platform.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3) What\u2019s the difference between a log bucket and a sink?<\/h3>\n\n\n\n<p>A <strong>log bucket<\/strong> stores logs in Cloud Logging with retention. 
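<\/p>\n\n\n\n<p>To make the store-versus-route distinction concrete, below is a toy Python model of how the Log Router treats exclusions, bucket storage, and sink export. This is an illustrative sketch only, not the Cloud Logging API; the entry shapes and filter lambdas are made up for the example.<\/p>

```python
# Toy model of Log Router semantics: exclusions drop entries before
# anything else, buckets store what remains, and sinks export the
# subset that matches a filter. Illustrative only -- not the real API.

def route(entries, exclusion, sink_filter):
    bucket, destination = [], []
    for entry in entries:
        if exclusion(entry):        # excluded entries are dropped entirely
            continue
        bucket.append(entry)        # stored in the log bucket (retention applies)
        if sink_filter(entry):      # the sink exports only matching entries
            destination.append(entry)
    return bucket, destination

entries = [
    {"severity": "DEBUG", "jsonPayload": {"event": "cache_hit"}},
    {"severity": "ERROR", "jsonPayload": {"event": "payment_failed"}},
    {"severity": "INFO", "jsonPayload": {"event": "request_done"}},
]

stored, exported = route(
    entries,
    exclusion=lambda e: e["severity"] == "DEBUG",    # drop noisy DEBUG logs
    sink_filter=lambda e: e["severity"] == "ERROR",  # export only errors
)
print(len(stored), len(exported))  # prints: 2 1
```

<p>In the real service the exclusion and sink filters are Logging query-language strings, but the precedence is the same: excluded entries never reach storage or any export.<\/p>\n\n\n\n<p>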
A <strong>sink<\/strong> exports (routes) matching logs to a destination (BigQuery, Pub\/Sub, Cloud Storage, or another bucket).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4) What are log views used for?<\/h3>\n\n\n\n<p>Views provide least-privilege access to subsets of logs (via a filter), so teams can see only what they need without broad access to all project logs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">5) How do exclusions affect my logs?<\/h3>\n\n\n\n<p>Exclusions drop matching logs before they are stored or exported. This can reduce cost and noise, but it is irreversible data loss for excluded entries.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">6) Can I export logs from multiple projects into one central project?<\/h3>\n\n\n\n<p>Yes. Common patterns use aggregated sinks at the folder or organization level to route logs into a central logging project.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">7) Are Cloud Audit Logs stored in Cloud Logging automatically?<\/h3>\n\n\n\n<p>Admin Activity logs are typically enabled by default for many services. Other audit log types (like Data Access) may require explicit configuration and can be high volume. Verify per-service audit logging behavior.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">8) Can I send logs to BigQuery for SQL analysis?<\/h3>\n\n\n\n<p>Yes, using a BigQuery sink. Be sure to model BigQuery storage and query costs and apply dataset governance.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">9) How do I alert on log messages?<\/h3>\n\n\n\n<p>Cloud Logging itself is for log storage\/search\/routing. 
Alerts are typically implemented in <strong>Cloud Monitoring<\/strong>, often using <strong>log-based metrics<\/strong> created from filters.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">10) What\u2019s the best way to reduce Logging costs?<\/h3>\n\n\n\n<p>Focus on ingestion volume:\n&#8211; Exclude low-value logs\n&#8211; Reduce verbosity in production\n&#8211; Export only the subset you need to expensive destinations\n&#8211; Use shorter retention where appropriate<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">11) Can logs contain sensitive data?<\/h3>\n\n\n\n<p>Yes, and that\u2019s a common risk. Apply strict logging hygiene (redaction), access controls (views\/IAM), and retention policies.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">12) How quickly do logs appear after an event?<\/h3>\n\n\n\n<p>Usually within seconds, but it can vary. Exports via sinks can also be delayed. Design operational processes with that in mind.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">13) What\u2019s the difference between textPayload and jsonPayload?<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>textPayload<\/code>: unstructured text logs<\/li>\n<li><code>jsonPayload<\/code>: structured key\/value data (preferred for consistent filtering and analytics)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">14) Can I use Terraform to manage Cloud Logging sinks and buckets?<\/h3>\n\n\n\n<p>Yes, Cloud Logging resources are commonly managed with IaC. Confirm the current Terraform Google provider supports the specific resources you need (buckets\/views\/exclusions) and test changes carefully.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">15) How do I share logs with a vendor or external auditor safely?<\/h3>\n\n\n\n<p>Prefer exporting the required subset to a controlled destination (BigQuery dataset or Cloud Storage bucket) and granting time-bound access, or create a log view with minimal scope. 
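<\/p>\n\n\n\n<p>For example, the narrowly scoped filter behind such a view or export might be assembled like this (the project name is hypothetical; <code>logName<\/code>, <code>protoPayload.serviceName<\/code>, and <code>timestamp<\/code> are standard Logging query-language fields for Cloud Audit Logs):<\/p>

```python
# Assemble a narrowly scoped Cloud Logging filter for a log view or
# sink. The project name is hypothetical; the Admin Activity log name
# follows the documented cloudaudit.googleapis.com%2Factivity pattern.
project = "example-audit-project"

clauses = [
    f'logName="projects/{project}/logs/cloudaudit.googleapis.com%2Factivity"',
    'protoPayload.serviceName="run.googleapis.com"',   # only Cloud Run audit events
    'timestamp>="2026-01-01T00:00:00Z"',               # only the audited period
]
scoped_filter = " AND ".join(clauses)
print(scoped_filter)
```

<p>The resulting filter string can back a log view or a sink so the external party sees only the approved slice.<\/p>\n\n\n\n<p>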
Avoid broad project-level log access.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">16) What are the most important fields for filtering logs?<\/h3>\n\n\n\n<p>Typically:\n&#8211; <code>resource.type<\/code>\n&#8211; <code>resource.labels.*<\/code>\n&#8211; <code>severity<\/code>\n&#8211; <code>logName<\/code>\n&#8211; <code>jsonPayload.*<\/code> (for structured logging)\n&#8211; time range<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">17) Should I export everything to my SIEM?<\/h3>\n\n\n\n<p>Usually no. Start with audit\/security-critical logs and high-value application signals (ERROR\/auth failures). Broad exports increase cost and noise and can overwhelm downstream systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">17. Top Online Resources to Learn Cloud Logging<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table>\n<thead>\n<tr>\n<th>Resource Type<\/th>\n<th>Name<\/th>\n<th>Why It Is Useful<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Official documentation<\/td>\n<td>Cloud Logging docs \u2014 https:\/\/cloud.google.com\/logging\/docs<\/td>\n<td>Canonical reference for concepts (buckets, sinks, views), API usage, and best practices<\/td>\n<\/tr>\n<tr>\n<td>Official pricing<\/td>\n<td>Cloud Logging pricing \u2014 https:\/\/cloud.google.com\/logging\/pricing<\/td>\n<td>Current SKUs, free tier, ingestion\/storage pricing model<\/td>\n<\/tr>\n<tr>\n<td>Pricing calculator<\/td>\n<td>Google Cloud Pricing Calculator \u2014 https:\/\/cloud.google.com\/products\/calculator<\/td>\n<td>Estimate costs for Logging and export destinations (BigQuery, Pub\/Sub, Storage)<\/td>\n<\/tr>\n<tr>\n<td>Getting started<\/td>\n<td>Cloud Logging quickstart \u2014 https:\/\/cloud.google.com\/logging\/docs\/quickstart<\/td>\n<td>Step-by-step introduction to viewing and using logs<\/td>\n<\/tr>\n<tr>\n<td>API reference<\/td>\n<td>Cloud Logging API \u2014 https:\/\/cloud.google.com\/logging\/docs\/reference\/v2\/rest<\/td>\n<td>Programmatic management and log entry 
operations<\/td>\n<\/tr>\n<tr>\n<td>CLI reference<\/td>\n<td>gcloud logging command group \u2014 https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/logging<\/td>\n<td>Practical CLI commands for reading logs and managing sinks\/metrics<\/td>\n<\/tr>\n<tr>\n<td>Architecture guidance<\/td>\n<td>Google Cloud Architecture Center \u2014 https:\/\/cloud.google.com\/architecture<\/td>\n<td>Reference architectures (use search for logging\/export\/SIEM patterns)<\/td>\n<\/tr>\n<tr>\n<td>Sinks\/export docs<\/td>\n<td>Routing and exporting logs overview \u2014 https:\/\/cloud.google.com\/logging\/docs\/routing\/overview<\/td>\n<td>Authoritative guide to Log Router, sinks, exclusions, and destinations<\/td>\n<\/tr>\n<tr>\n<td>Audit logs<\/td>\n<td>Cloud Audit Logs overview \u2014 https:\/\/cloud.google.com\/logging\/docs\/audit<\/td>\n<td>How audit logs work, types, and configuration considerations<\/td>\n<\/tr>\n<tr>\n<td>Video (official)<\/td>\n<td>Google Cloud Tech YouTube \u2014 https:\/\/www.youtube.com\/googlecloudtech<\/td>\n<td>Official videos often cover Logging\/Operations; search within the channel for \u201cCloud Logging\u201d<\/td>\n<\/tr>\n<tr>\n<td>Samples<\/td>\n<td>GoogleCloudPlatform GitHub \u2014 https:\/\/github.com\/GoogleCloudPlatform<\/td>\n<td>Official samples across products; search repos for logging client libraries and patterns<\/td>\n<\/tr>\n<tr>\n<td>Community (reputable)<\/td>\n<td>Google Cloud Skills Boost \u2014 https:\/\/www.cloudskillsboost.google<\/td>\n<td>Hands-on labs often include Logging\/Operations scenarios (availability varies by catalog)<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">18. 
Training and Certification Providers<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>DevOpsSchool.com<\/strong>\n   &#8211; Suitable audience: DevOps engineers, SREs, cloud engineers, platform teams\n   &#8211; Likely learning focus: Google Cloud operations, observability fundamentals, logging\/monitoring practices, DevOps toolchains\n   &#8211; Mode: check website\n   &#8211; Website: https:\/\/www.devopsschool.com\/<\/p>\n<\/li>\n<li>\n<p><strong>ScmGalaxy.com<\/strong>\n   &#8211; Suitable audience: Students, early-career engineers, DevOps practitioners\n   &#8211; Likely learning focus: DevOps fundamentals, CI\/CD, cloud basics, operations practices\n   &#8211; Mode: check website\n   &#8211; Website: https:\/\/www.scmgalaxy.com\/<\/p>\n<\/li>\n<li>\n<p><strong>CloudOpsNow.in<\/strong>\n   &#8211; Suitable audience: Cloud operations teams, administrators, DevOps engineers\n   &#8211; Likely learning focus: Cloud operations and runbooks, monitoring\/logging fundamentals, operational readiness\n   &#8211; Mode: check website\n   &#8211; Website: https:\/\/cloudopsnow.in\/<\/p>\n<\/li>\n<li>\n<p><strong>SreSchool.com<\/strong>\n   &#8211; Suitable audience: SREs, reliability engineers, platform engineering teams\n   &#8211; Likely learning focus: SRE practices, incident response, SLOs, observability (including logging)\n   &#8211; Mode: check website\n   &#8211; Website: https:\/\/sreschool.com\/<\/p>\n<\/li>\n<li>\n<p><strong>AiOpsSchool.com<\/strong>\n   &#8211; Suitable audience: Operations teams exploring AIOps, monitoring\/observability engineers\n   &#8211; Likely learning focus: AIOps concepts, event correlation, automation around observability signals (logs\/metrics\/traces)\n   &#8211; Mode: check website\n   &#8211; Website: https:\/\/aiopsschool.com\/<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">19. 
Top Trainers<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>RajeshKumar.xyz<\/strong>\n   &#8211; Likely specialization: DevOps and cloud training content (verify offerings on site)\n   &#8211; Suitable audience: Beginners to intermediate engineers seeking practical guidance\n   &#8211; Website URL: https:\/\/rajeshkumar.xyz\/<\/p>\n<\/li>\n<li>\n<p><strong>devopstrainer.in<\/strong>\n   &#8211; Likely specialization: DevOps tooling and practices (verify current course catalog)\n   &#8211; Suitable audience: DevOps engineers, SREs, CI\/CD practitioners\n   &#8211; Website URL: https:\/\/devopstrainer.in\/<\/p>\n<\/li>\n<li>\n<p><strong>devopsfreelancer.com<\/strong>\n   &#8211; Likely specialization: Freelance DevOps consulting\/training resources (verify services offered)\n   &#8211; Suitable audience: Teams seeking hands-on support or short-term enablement\n   &#8211; Website URL: https:\/\/devopsfreelancer.com\/<\/p>\n<\/li>\n<li>\n<p><strong>devopssupport.in<\/strong>\n   &#8211; Likely specialization: DevOps support and operational assistance (verify service scope)\n   &#8211; Suitable audience: Ops teams needing troubleshooting support and guidance\n   &#8211; Website URL: https:\/\/devopssupport.in\/<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">20. 
Top Consulting Companies<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>cotocus.com<\/strong>\n   &#8211; Likely service area: Cloud\/DevOps consulting (verify exact offerings)\n   &#8211; Where they may help: Cloud adoption, platform engineering, operational readiness, observability implementations\n   &#8211; Consulting use case examples:<\/p>\n<ul>\n<li>Designing centralized logging exports and retention strategy<\/li>\n<li>Implementing least-privilege access via log views and IAM<\/li>\n<li>Building Pub\/Sub + Dataflow pipelines for SIEM forwarding<\/li>\n<\/ul>\n<p>Website URL: https:\/\/cotocus.com\/<\/p>\n<\/li>\n<li>\n<p><strong>DevOpsSchool.com<\/strong>\n   &#8211; Likely service area: DevOps and cloud consulting\/training (verify exact offerings)\n   &#8211; Where they may help: DevOps transformations, CI\/CD, SRE practices, cloud operations enablement\n   &#8211; Consulting use case examples:<\/p>\n<ul>\n<li>Establishing logging standards (structured logging schema, severity conventions)<\/li>\n<li>Implementing IaC for sinks, buckets, and exclusions<\/li>\n<li>Cost optimization reviews for logging ingestion and exports<\/li>\n<\/ul>\n<p>Website URL: https:\/\/www.devopsschool.com\/<\/p>\n<\/li>\n<li>\n<p><strong>DevOpsConsulting.in<\/strong>\n   &#8211; Likely service area: DevOps consulting services (verify exact offerings)\n   &#8211; Where they may help: Cloud operations maturity, automation, monitoring\/logging practices\n   &#8211; Consulting use case examples:<\/p>\n<ul>\n<li>Logging governance model (ownership, access reviews, compliance retention)<\/li>\n<li>Export architecture to BigQuery and Storage with cost controls<\/li>\n<li>Incident response runbook development using Cloud Logging queries<\/li>\n<\/ul>\n<p>Website URL: https:\/\/devopsconsulting.in\/<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">21. 
Career and Learning Roadmap<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn before Cloud Logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Cloud fundamentals:\n<ul>\n<li>Projects, IAM, service accounts<\/li>\n<li>Regions vs global services<\/li>\n<\/ul>\n<\/li>\n<li>Basic Linux and application logging concepts:\n<ul>\n<li>stdout\/stderr, log levels, JSON logs<\/li>\n<\/ul>\n<\/li>\n<li>Core Google Cloud runtimes:\n<ul>\n<li>Cloud Run or GKE basics (so you understand resource labels and log sources)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What to learn after Cloud Logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cloud Monitoring (dashboards, alerting policies, SLOs)<\/li>\n<li>Export destinations and pipelines:\n<ul>\n<li>BigQuery (partitioning, cost controls)<\/li>\n<li>Pub\/Sub + Dataflow (streaming processing patterns)<\/li>\n<li>Cloud Storage lifecycle and archival strategy<\/li>\n<\/ul>\n<\/li>\n<li>Security operations on Google Cloud:\n<ul>\n<li>Cloud Audit Logs deep dive<\/li>\n<li>IAM hardening and org policies<\/li>\n<li>SIEM integration patterns<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Job roles that use Cloud Logging<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Site Reliability Engineer (SRE)<\/li>\n<li>Cloud\/Platform Engineer<\/li>\n<li>DevOps Engineer<\/li>\n<li>Security Engineer \/ SOC Analyst (via audit logs and exports)<\/li>\n<li>Cloud Architect<\/li>\n<li>Operations Engineer \/ NOC Engineer<\/li>\n<li>Application Developer (production debugging)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Certification path (Google Cloud)<\/h3>\n\n\n\n<p>Google Cloud certifications that commonly benefit from strong logging knowledge include:\n&#8211; Associate Cloud Engineer\n&#8211; Professional Cloud DevOps Engineer\n&#8211; Professional Cloud Security Engineer\n&#8211; Professional Cloud Architect<\/p>\n\n\n\n<p>(Exact exam objectives change; verify current guides on Google Cloud\u2019s certification site.)<\/p>\n\n\n\n<h3 
class=\"wp-block-heading\">Project ideas for practice<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Build a \u201clogging standard\u201d for a microservice system:<\/li>\n<li>JSON schema, correlation IDs, severity rules<\/li>\n<li>Create a centralized logging project:<\/li>\n<li>Folder-level sink \u2192 central bucket<\/li>\n<li>Views per team<\/li>\n<li>Implement a SIEM pipeline:<\/li>\n<li>Logging sink \u2192 Pub\/Sub \u2192 Dataflow \u2192 external endpoint<\/li>\n<li>Cost optimization exercise:<\/li>\n<li>Identify top log sources by volume<\/li>\n<li>Add exclusions and validate impact<\/li>\n<li>Compliance simulation:<\/li>\n<li>Long-retention bucket for audit logs<\/li>\n<li>Restricted access via views and IAM conditions<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">22. Glossary<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cloud Logging<\/strong>: Google Cloud service for log ingestion, storage, search, analysis, and routing.<\/li>\n<li><strong>Log entry<\/strong>: A single log record with timestamp, severity, payload, and resource metadata.<\/li>\n<li><strong>Severity<\/strong>: Log importance level (e.g., DEBUG, INFO, WARNING, ERROR).<\/li>\n<li><strong>Monitored resource<\/strong>: The resource model describing where the log came from (e.g., <code>gce_instance<\/code>, <code>cloud_run_revision<\/code>).<\/li>\n<li><strong>Log bucket<\/strong>: Storage container in Cloud Logging with retention and location settings.<\/li>\n<li><strong>Retention<\/strong>: How long logs are kept before automatic deletion.<\/li>\n<li><strong>Log view<\/strong>: A filtered view into a bucket used for least-privilege access control.<\/li>\n<li><strong>Log Router<\/strong>: The routing layer that applies exclusions and exports logs using sinks.<\/li>\n<li><strong>Sink<\/strong>: A routing\/export rule that sends matching logs to a destination.<\/li>\n<li><strong>Exclusion<\/strong>: A rule that drops matching logs (not stored\/exported).<\/li>\n<li><strong>Cloud 
Audit Logs<\/strong>: Logs produced by Google Cloud services for governance\/audit, including admin actions and data access (depending on configuration).<\/li>\n<li><strong>Log-based metric<\/strong>: A metric derived from log entries, used in Cloud Monitoring.<\/li>\n<li><strong>BigQuery sink<\/strong>: Export path from Cloud Logging into BigQuery for SQL analytics.<\/li>\n<li><strong>Pub\/Sub sink<\/strong>: Export path from Cloud Logging into Pub\/Sub for streaming pipelines.<\/li>\n<li><strong>Cloud Storage sink<\/strong>: Export path from Cloud Logging into Cloud Storage for archival.<\/li>\n<li><strong>Writer identity<\/strong>: The identity (service account) used by a sink to write to its destination.<\/li>\n<li><strong>Structured logging<\/strong>: Logging using a consistent JSON schema rather than unstructured text.<\/li>\n<li><strong>Cardinality<\/strong>: Number of unique label\/value combinations in metrics; high cardinality increases cost\/limits.<\/li>\n<li><strong>Observability<\/strong>: Ability to understand system behavior using logs, metrics, and traces.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">23. Summary<\/h2>\n\n\n\n<p>Cloud Logging is Google Cloud\u2019s managed logging service in the <strong>Observability and monitoring<\/strong> category. It collects logs from Google Cloud services and your applications, stores them in configurable buckets, and lets you search, analyze, and route logs using the Log Router.<\/p>\n\n\n\n<p>It matters because logs are the fastest path to incident resolution, auditability, and security investigations in modern distributed systems. 
Cloud Logging fits best when you want a Google Cloud-native logging foundation with strong routing (sinks), governance (buckets\/views), and integrations (Monitoring, BigQuery, Pub\/Sub, Cloud Storage).<\/p>\n\n\n\n<p>Cost is primarily driven by <strong>ingestion volume<\/strong>, <strong>retention<\/strong>, and <strong>downstream export costs<\/strong> (especially BigQuery queries and streaming pipelines). Security hinges on <strong>least-privilege IAM<\/strong>, careful use of <strong>views<\/strong>, and disciplined handling of sensitive data in logs.<\/p>\n\n\n\n<p>If you run workloads on Google Cloud and want a managed, integrated logging platform, Cloud Logging is the default choice. Next, deepen your skills by pairing it with <strong>Cloud Monitoring<\/strong> (alerting\/SLOs) and by practicing <strong>export architectures<\/strong> (BigQuery for analytics, Pub\/Sub for SIEM pipelines, Cloud Storage for archival).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Observability and 
monitoring<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[51,65],"tags":[],"class_list":["post-781","post","type-post","status-publish","format-standard","hentry","category-google-cloud","category-observability-and-monitoring"],"_links":{"self":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/781","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/comments?post=781"}],"version-history":[{"count":0,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/posts\/781\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/media?parent=781"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/categories?post=781"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.devopsschool.com\/tutorials\/wp-json\/wp\/v2\/tags?post=781"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}