Category
Access and resource management
1. Introduction
Config Connector is a Google Cloud service that lets you manage Google Cloud resources using Kubernetes-style, declarative configuration. You apply Kubernetes manifests, and Config Connector creates, updates, and deletes real Google Cloud resources by calling Google Cloud APIs on your behalf.
In simple terms: if your team already uses Kubernetes workflows (namespaces, RBAC, GitOps, kubectl apply), Config Connector lets you manage cloud infrastructure—like Cloud Storage buckets, Pub/Sub topics, IAM bindings, and more—without switching to a separate infrastructure tool. Your Kubernetes cluster becomes a control plane for Google Cloud resources.
Technically, Config Connector installs Kubernetes Custom Resource Definitions (CRDs) and controllers. Each CRD represents a Google Cloud resource type. When you apply a resource manifest, the controller reconciles the desired state into the actual state in Google Cloud using authenticated API calls. The controller continuously checks for drift and updates status fields and conditions back into the Kubernetes resource.
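As a concrete illustration of that flow, a minimal manifest for a Cloud Storage bucket might look like the sketch below. The names and values are placeholders; verify the exact kind, API version, and fields in the official Config Connector resource reference for your version.

```yaml
# Hypothetical example: declaring a Cloud Storage bucket as a Kubernetes object.
# Applying this manifest causes the Config Connector controller to create
# (and continuously reconcile) a real bucket in the mapped project.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: example-team-artifacts   # becomes the bucket name (placeholder)
  namespace: team-a              # namespace mapped to a Google Cloud project
spec:
  location: US-CENTRAL1
  uniformBucketLevelAccess: true
```

After `kubectl apply`, the controller reports progress back through the object's status conditions and events.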
The problem it solves: teams need consistent, auditable, repeatable infrastructure management. Config Connector helps standardize resource provisioning and governance by using Kubernetes-native patterns (declarative config, reconciliation loops, RBAC, namespaces) to manage Google Cloud resources—especially valuable for platform engineering and access/resource management programs.
Naming note: Config Connector is an active Google Cloud offering. It’s closely related to Config Controller (a managed Google Cloud service that provides a hosted Kubernetes control plane for infrastructure management). Do not confuse Config Connector with GitOps products like Config Sync (which syncs Kubernetes configs from Git repositories).
2. What is Config Connector?
Official purpose (what it’s for)
Config Connector enables you to manage Google Cloud resources through Kubernetes APIs. It exposes Google Cloud resources as Kubernetes objects so you can provision and operate infrastructure using Kubernetes tooling and workflows.
Core capabilities
- Kubernetes CRDs for Google Cloud resources (for example, Storage buckets, IAM policy bindings, Pub/Sub topics; resource coverage varies, so verify the supported resource list in the official docs).
- Continuous reconciliation: desired state in Kubernetes is continuously reconciled against actual state in Google Cloud.
- Status reporting: resource readiness, errors, and observed state are reported in Kubernetes via conditions and events.
- Access and resource management workflows: manage IAM-related resources (for example, service accounts and policy bindings) using Kubernetes RBAC and namespaces for separation of duties.
Major components
- CRDs (CustomResourceDefinitions): one CRD per supported Google Cloud resource kind.
- Controller(s): watch Kubernetes resources and call Google Cloud APIs to reconcile.
- Credentials/identity integration: typically Workload Identity on GKE (recommended), or other supported identity methods depending on environment (verify the current supported authentication modes).
- Kubernetes namespaces and RBAC: used to partition responsibility across teams and projects (especially in namespaced mode).
Service type
- A Kubernetes extension/controller that runs in a Kubernetes cluster (commonly GKE).
- Often used as part of broader platform engineering and infrastructure-as-code practices.
- Related managed option: Config Controller (hosted control plane), which uses Config Connector concepts.
Scope and locality
- Config Connector runs inside your Kubernetes cluster; its “scope” is effectively the cluster where it is installed.
- The Google Cloud resources it manages can span:
  - Project scope (most common)
  - Potentially folder/org scope for certain resource types (requires appropriate permissions; verify supported resources and constraints)
- It is not a “regional” Google Cloud API service in the way Cloud Storage or Pub/Sub are; it is a controller that calls Google Cloud APIs (which may be global or regional depending on the resource type).
How it fits into the Google Cloud ecosystem
- Works with:
  - Google Kubernetes Engine (GKE) as the most common runtime.
  - IAM (identity and access management) to authorize reconciliation actions.
  - Cloud Audit Logs to record all API actions performed.
  - Cloud Logging/Monitoring for cluster and controller visibility.
- Complements (not replaces) tools like:
  - Terraform for multi-cloud IaC
  - gcloud and the Console for ad-hoc operations
  - Policy Controller/Gatekeeper for policy enforcement (commonly paired for governance; verify product names and availability in your environment)
Official documentation entry point (start here):
https://cloud.google.com/config-connector/docs/overview
3. Why use Config Connector?
Business reasons
- Standardize provisioning across teams using a consistent API and workflow (Kubernetes).
- Speed up delivery by letting application/platform teams use the same pipeline patterns for both app deployments and infrastructure.
- Improve auditability: changes are captured as Git commits and Kubernetes events, and Google Cloud API calls are captured in Audit Logs.
Technical reasons
- Declarative desired state and controller-based reconciliation reduce configuration drift.
- Kubernetes-native primitives:
- namespaces for multi-tenancy patterns
- RBAC to limit who can create/change infrastructure objects
- admission policy for guardrails (for example, deny public buckets)
- Works well for teams already invested in Kubernetes.
Operational reasons
- GitOps-friendly: store infrastructure definitions in Git and sync to the cluster.
- Unified tooling: kubectl, CI pipelines, policy tooling, Kubernetes resource status/conditions.
- Supports repeatable environment creation (dev/test/prod) through manifests and overlays.
Security/compliance reasons (Access and resource management)
- Enables structured IAM administration using Kubernetes RBAC + scoped namespaces.
- Works with Workload Identity (recommended on GKE) to avoid long-lived service account keys.
- Google Cloud API calls are auditable with Cloud Audit Logs.
- Enables “policy as code” patterns when paired with Kubernetes admission controls.
Scalability/performance reasons
- Kubernetes controller model scales operationally: controllers can handle many resources and continuously reconcile.
- Supports multi-team scaling through namespaces and RBAC (a common pattern: one namespace per project/team).
When teams should choose Config Connector
Choose Config Connector when:
- You run GKE or Kubernetes as a platform and want one control plane for both workloads and Google Cloud resources.
- You want strong separation of duties using Kubernetes RBAC and namespaces.
- You want to implement resource governance with admission control and GitOps.
- You want to manage a significant portion of Google Cloud resources declaratively and continuously.
When teams should not choose Config Connector
Avoid or reconsider Config Connector when:
- You don’t operate Kubernetes reliably (controllers need a stable cluster).
- You need broad, complete coverage of every Google Cloud resource type immediately (support varies; verify the supported resources list).
- You require multi-cloud IaC from a single tool (Terraform/Crossplane might be a better fit).
- You need to provision foundational infrastructure before Kubernetes exists (you still need a “bootstrap” step).
4. Where is Config Connector used?
Industries
- SaaS and technology: platform teams managing standardized projects, IAM, and shared services.
- Financial services and healthcare: auditability and policy enforcement are valuable (subject to compliance requirements; verify with your compliance team).
- Retail and media: many teams/projects with standardized patterns (buckets, Pub/Sub, service accounts).
- Public sector: structured access/resource management with audit trails.
Team types
- Platform engineering teams building internal developer platforms (IDPs).
- DevOps/SRE teams standardizing operations.
- Security engineering teams implementing IAM and guardrails as code.
- Application teams that self-provision approved resources via namespaces.
Workloads
- Kubernetes-based workloads that need adjacent Google Cloud resources:
- GCS buckets for artifacts/data
- Pub/Sub topics and subscriptions
- Service accounts and IAM bindings
- Secret Manager secrets (where supported; verify)
- Cloud SQL instances (where supported; verify)
Architectures
- GitOps architectures: Git repo → CI → cluster sync → Config Connector → Google Cloud APIs.
- Multi-project landing zones: namespaces mapped to projects, with central policies and templates.
- Shared services: central clusters that manage shared IAM, logging sinks, or networking components (resource support and permissions permitting).
Real-world deployment contexts
- Production: typically used with:
- strict RBAC
- dedicated namespaces per team/project
- policy enforcement
- reliable cluster operations and backup/DR considerations
- Dev/test: fast environment creation, experimentation with standard patterns, training labs.
5. Top Use Cases and Scenarios
Below are practical scenarios where Config Connector is commonly a good fit. Resource availability varies—verify each resource kind in the official resource reference.
1) Self-service Cloud Storage buckets per team
- Problem: Teams need buckets with consistent naming, labels, and security settings.
- Why Config Connector fits: Buckets can be declared as Kubernetes objects; policy can enforce “no public access.”
- Example: Each team has a namespace; applying a StorageBucket manifest creates an encrypted bucket with uniform access.
2) IAM access provisioning via Kubernetes RBAC
- Problem: Central IAM team is a bottleneck; teams need controlled self-service access.
- Why it fits: Teams can manage approved IAM bindings via CRDs while cluster admins enforce RBAC/policy constraints.
- Example: Team namespace allows creating IAM bindings only for their own service accounts and resources.
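For illustration, a bucket-scoped IAM grant might be declared like the sketch below. All names are placeholders, and the kind and fields should be verified in the Config Connector resource reference.

```yaml
# Hypothetical example: grant one service account read access to one bucket.
# Kubernetes RBAC on the team-a namespace controls who may apply this.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: app-workers-bucket-read   # placeholder
  namespace: team-a
spec:
  member: serviceAccount:app-workers@PROJECT_ID.iam.gserviceaccount.com  # placeholder
  role: roles/storage.objectViewer
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    kind: StorageBucket
    name: example-team-artifacts   # placeholder bucket resource
```

Because the binding references a resource in the same namespace, admission policy can restrict teams to bindings on their own resources.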
3) GitOps-managed Pub/Sub topology
- Problem: Pub/Sub topics/subscriptions drift across environments.
- Why it fits: Declarative manifests ensure consistent topic settings, subscriptions, and IAM.
- Example: PubSubTopic and PubSubSubscription resources deployed per environment namespace.
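A hedged sketch of such manifests (placeholder names; verify kinds and fields in the resource reference):

```yaml
# Hypothetical example: a topic and its subscription declared per environment.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: orders          # placeholder
  namespace: env-dev    # one namespace per environment
---
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubSubscription
metadata:
  name: orders-worker   # placeholder
  namespace: env-dev
spec:
  topicRef:
    name: orders        # references the topic above
  ackDeadlineSeconds: 30
```

The same manifests, applied to env-test and env-prod namespaces with overlays, keep topic settings consistent across environments.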
4) Standardized service account creation and rotation workflows
- Problem: Service accounts are created inconsistently and over-privileged.
- Why it fits: Declare service accounts and IAM bindings in code; review via PRs.
- Example: Every app namespace declares a dedicated service account plus least-privilege roles.
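A minimal sketch of a declared service account (placeholder names; verify fields in the resource reference):

```yaml
# Hypothetical example: a per-app service account managed as code.
# Pair this with IAMPolicyMember resources granting only the roles the app needs.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: app-workers          # becomes the service account ID (placeholder)
  namespace: team-a
spec:
  displayName: "Workers for app team-a (managed by Config Connector)"
```

Because the account exists only as reviewed code, over-privileged or ad-hoc accounts become visible in pull requests.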
5) Environment replication (dev/test/prod)
- Problem: Dev/test environments differ from prod, causing failures at release time.
- Why it fits: Use the same manifests with overlays/parameters to replicate infra patterns.
- Example: Same bucket/topic resources, different names/labels and retention settings.
6) Central policy enforcement for access and resource management
- Problem: Teams accidentally create insecure resources (public buckets, broad IAM grants).
- Why it fits: Pair Config Connector with Kubernetes admission control policies to block noncompliant manifests.
- Example: Deny any IAM binding granting roles/owner except to break-glass groups.
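One way to implement such a guardrail is a Gatekeeper ConstraintTemplate. The sketch below uses hypothetical names and deliberately simplified Rego; if you use Policy Controller or another engine, the mechanics differ, so treat this as a pattern rather than a drop-in policy.

```yaml
# Hypothetical example: block roles/owner grants except for a break-glass group.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: denyowneriam            # must be the lowercase of the kind below
spec:
  crd:
    spec:
      names:
        kind: DenyOwnerIAM
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package denyowneriam

        violation[{"msg": msg}] {
          input.review.object.kind == "IAMPolicyMember"
          input.review.object.spec.role == "roles/owner"
          not exempt(input.review.object.spec.member)
          msg := "roles/owner may only be granted to break-glass groups"
        }

        # placeholder exemption; replace with your real break-glass group
        exempt(member) { member == "group:break-glass@example.com" }
```

A matching DenyOwnerIAM constraint then selects the namespaces where the rule applies.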
7) Multi-project resource management with namespace-per-project
- Problem: Managing many projects manually causes inconsistency and drift.
- Why it fits: Each namespace is mapped to a project; teams apply resources within their project boundary.
- Example: 50 namespaces correspond to 50 projects, each with standardized logging sinks and buckets (where supported).
8) Audit-ready infrastructure change management
- Problem: Hard to prove who changed IAM or resource policies and when.
- Why it fits: Git history + Kubernetes events + Cloud Audit Logs provide a strong audit chain.
- Example: PR approval + merge triggers sync; Audit Logs show controller’s API calls.
9) “Day 2” drift detection and remediation
- Problem: Someone changes resource settings in Console; configuration drifts from intended state.
- Why it fits: Controller reconciliation reverts drift (when allowed by APIs and resource semantics).
- Example: A bucket lifecycle policy is removed manually; controller restores it.
10) Platform team templates and golden paths
- Problem: Teams need “approved patterns” for resources, but each team implements differently.
- Why it fits: Provide reusable manifests, Kustomize bases, or Helm charts (if your org uses them) that create standardized resources.
- Example: Template creates a bucket + service account + IAM binding + labels.
11) Shared services cluster managing common resources
- Problem: Organization wants a single operational hub for shared resource provisioning.
- Why it fits: A dedicated cluster runs Config Connector; central team controls access.
- Example: Shared cluster manages organization-wide DNS, logging, and core service accounts (verify supported resources and org-level requirements).
12) Controlled onboarding of new teams/projects
- Problem: New teams need a consistent set of baseline resources and IAM.
- Why it fits: Namespace + baseline manifests can bootstrap approved resources.
- Example: Create a namespace, annotate it to a project, apply a baseline bundle.
6. Core Features
Feature availability and exact configuration steps can change; verify in official docs for your installed version and GKE release channel.
1) Kubernetes CRDs for Google Cloud resources
- What it does: Exposes Google Cloud resources as Kubernetes API objects (Kinds).
- Why it matters: Enables a single, consistent workflow (kubectl, GitOps) for cloud resources.
- Practical benefit: Infrastructure changes become code-reviewed PRs with consistent patterns.
- Caveats: Not all Google Cloud services/resources are supported; some fields may be immutable.
2) Reconciliation loop (continuous desired-state enforcement)
- What it does: Continuously reconciles Kubernetes desired state to Google Cloud actual state.
- Why it matters: Reduces drift and improves reliability.
- Practical benefit: “Accidental console changes” can be automatically corrected.
- Caveats: Some changes cannot be forced due to API limitations; expect eventual consistency.
3) Namespaced mode for multi-tenancy
- What it does: Enables separation of resources by Kubernetes namespace (often mapping one namespace to one Google Cloud project).
- Why it matters: Provides isolation and delegation without granting everyone broad project IAM.
- Practical benefit: Platform team can delegate resource management to app teams safely.
- Caveats: Cross-project references require careful design; verify supported patterns.
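The namespace-to-project mapping is commonly expressed with an annotation, as in the sketch below. Verify the annotation key and the recommended mapping mechanism for your Config Connector version.

```yaml
# Hypothetical example: map a team namespace to a Google Cloud project.
# Resources created in this namespace are reconciled into that project.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    cnrm.cloud.google.com/project-id: my-project-id   # placeholder project ID
```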
4) Cluster mode (centralized admin control)
- What it does: Manages resources cluster-wide (often used by platform admins).
- Why it matters: Useful for shared resources or org-wide provisioning.
- Practical benefit: Central team manages everything in one place.
- Caveats: Higher blast radius; requires stricter RBAC and governance.
5) Workload Identity integration (recommended on GKE)
- What it does: Allows Kubernetes service accounts to act as Google service accounts without keys.
- Why it matters: Avoids long-lived credentials and simplifies rotation.
- Practical benefit: Better security posture and reduced secret management overhead.
- Caveats: Requires correct IAM bindings and cluster configuration; misconfigurations are a common failure point.
6) Resource status, conditions, and events
- What it does: Writes status back into Kubernetes resources (ready/not ready, error messages, observed generation).
- Why it matters: Makes troubleshooting and automation easier.
- Practical benefit: You can kubectl describe a resource to find API permission errors, invalid fields, and so on.
- Caveats: Status messages may be terse; you may need Cloud Audit Logs for deeper detail.
7) Adoption/import of existing resources (where supported)
- What it does: Allows bringing existing Google Cloud resources under Config Connector management.
- Why it matters: Enables migration from manual or other IaC approaches.
- Practical benefit: Gradual adoption without re-creating everything.
- Caveats: Import/adoption semantics differ by resource; verify official import guidance.
8) Deletion policies / lifecycle behavior (where supported)
- What it does: Controls whether deleting the Kubernetes object deletes the Google Cloud resource or “abandons” it.
- Why it matters: Prevents accidental destructive operations.
- Practical benefit: Safer experimentation and migration.
- Caveats: Behavior varies by resource; always test in non-prod first.
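Abandoning (rather than deleting) the underlying resource is commonly configured with an annotation, as sketched below. Verify the annotation key and supported values for your version before relying on it.

```yaml
# Hypothetical example: deleting this Kubernetes object leaves the
# underlying bucket in place ("abandon") instead of deleting it.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: keep-this-bucket        # placeholder
  namespace: team-a
  annotations:
    cnrm.cloud.google.com/deletion-policy: abandon
spec:
  location: US-CENTRAL1
```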
9) Labeling/annotation patterns for governance
- What it does: Supports labels/annotations that map to Google Cloud labels and metadata.
- Why it matters: Enables cost allocation, ownership tracking, and compliance tagging.
- Practical benefit: Standardized labels across resources via templates.
- Caveats: Google Cloud label constraints apply (key/value format and limits).
10) Compatibility with GitOps and policy-as-code ecosystems
- What it does: Works with GitOps sync tools and Kubernetes policy engines.
- Why it matters: Enables large-scale governance with automation.
- Practical benefit: Prevent noncompliant resources before they are created.
- Caveats: Tooling choices vary (Config Sync, Argo CD, Flux, Gatekeeper); integration details are environment-specific.
7. Architecture and How It Works
High-level architecture
Config Connector runs controllers inside your Kubernetes cluster. When you apply a manifest for a supported CRD:
1. Kubernetes stores the object in etcd via the Kubernetes API server.
2. The Config Connector controller watches for new or changed objects.
3. The controller authenticates to Google Cloud (commonly via Workload Identity on GKE).
4. The controller calls the relevant Google Cloud API to create, update, or delete the resource.
5. The controller updates Kubernetes status/conditions and emits events.
Request/data/control flow
- Control plane flow: Kubernetes API server → controller → Google Cloud APIs
- Data plane: Config Connector itself does not typically handle your application data; it manages resource configuration.
Integrations with related services
- IAM: crucial for controller permissions (roles granted to the Google service account used by Config Connector).
- Cloud Resource Manager: project/folder/org permissions (depending on what you manage).
- Service Usage API: enabling/disabling Google Cloud services may be required for managed resources.
- Cloud Audit Logs: tracks controller API activity (high value for security and compliance).
- Cloud Logging/Monitoring: cluster and controller observability (plus GKE logs/metrics).
Dependency services
- A Kubernetes cluster, typically GKE.
- Google Cloud APIs for each resource type you manage (for example, storage.googleapis.com for buckets).
- Identity integration (Workload Identity recommended on GKE).
Security/authentication model
- The controller uses a Google service account (GSA) identity when calling Google Cloud APIs.
- In GKE, best practice is Workload Identity:
- Bind a Kubernetes service account (KSA) used by the controller to a GSA.
- Grant the GSA only the permissions required for the resources it manages.
Networking model
- The controller needs outbound access to Google Cloud APIs (public endpoints).
- In private cluster environments, ensure NAT or Private Google Access is configured as required (details depend on your network architecture; verify in GKE networking docs).
Monitoring/logging/governance considerations
- Monitor:
- controller pod health
- reconciliation errors and events
- API rate limits
- Log:
- controller logs in Kubernetes
- Cloud Audit Logs in Google Cloud for API activity
- Govern:
- use namespaces/RBAC
- enforce policy with admission controls
- standardize labels and naming
Simple architecture diagram (Mermaid)
flowchart LR
Dev[Engineer / CI Pipeline] -->|kubectl apply / GitOps sync| K8sAPI[Kubernetes API Server]
K8sAPI --> CRD[Config Connector CRDs]
K8sAPI --> Ctrl[Config Connector Controller]
Ctrl -->|Workload Identity / GSA| GCPAPI[Google Cloud APIs]
GCPAPI --> Res["Google Cloud Resources<br/>(buckets, IAM, Pub/Sub, ...)"]
Ctrl -->|status/conditions| K8sAPI
Production-style architecture diagram (Mermaid)
flowchart TB
Git[Git Repo: infra manifests] --> CI[CI: validate + policy checks]
CI --> Sync["GitOps Sync Tool<br/>(e.g., Config Sync / Argo CD)"]
Sync --> K8s
subgraph K8s["Kubernetes Cluster (GKE)"]
NS1[Namespace: team-a / project-a]
NS2[Namespace: team-b / project-b]
RBAC[Kubernetes RBAC]
Policy["Admission Policy<br/>(OPA Gatekeeper / Policy Controller)"]
CC["Config Connector Controllers<br/>(cnrm-system)"]
end
RBAC --> NS1
RBAC --> NS2
Policy --> K8s
NS1 --> CC
NS2 --> CC
CC -->|Workload Identity| GSA["Google Service Account(s)"]
GSA --> IAM["IAM Roles (least privilege)"]
CC --> GCP[Google Cloud APIs]
GCP --> Projects[Multiple Google Cloud Projects]
K8s --> Logs[Kubernetes Logs]
GCP --> Audit[Cloud Audit Logs]
K8s --> Metrics["Cloud Monitoring / Prometheus<br/>(depending on setup)"]
8. Prerequisites
Before you start, ensure you have the following.
Account/project requirements
- A Google Cloud project with billing enabled.
- Permission to create/manage a GKE cluster and IAM bindings in the project.
Permissions / IAM roles
Minimum IAM roles vary by what you create. Common needs:
- For cluster operations: roles such as Kubernetes Engine Admin (or a custom equivalent).
- For identity setup: the ability to create service accounts and IAM bindings.
- For resource provisioning: roles corresponding to the resource APIs (for example, Storage Admin for buckets).
Use least privilege and consider custom roles for production. Verify role requirements in official docs for each resource kind.
Billing requirements
- Config Connector itself does not typically have a separate line-item price, but you pay for:
- the Kubernetes cluster (GKE control plane/management fees depending on mode) and compute
- the Google Cloud resources created (buckets, Pub/Sub, etc.)
- logs/metrics/storage generated
Tools needed
- gcloud CLI: https://cloud.google.com/sdk/docs/install
- kubectl (often installed via gcloud components)
- Optional: kustomize, helm, or a GitOps tool (organization-specific)
- Access to the Google Cloud Console
Region availability
- Depends on GKE regions and any resource regions you choose.
- Config Connector runs in your cluster; choose a region appropriate for your workloads and governance.
Quotas/limits
- GKE cluster quotas (nodes, CPU, IPs)
- API quotas for the Google Cloud services you manage
- Kubernetes API object limits in etcd (large-scale setups require planning)
- Controller throughput: many resources can increase reconciliation load
Prerequisite services/APIs
Commonly required:
- Kubernetes Engine API
- IAM API
- Cloud Resource Manager API
- APIs for the resources you plan to manage (Storage API, Pub/Sub API, etc.)
Verify required APIs per resource type in official docs.
9. Pricing / Cost
Pricing model (accurate framing)
Config Connector is primarily a controller running in Kubernetes. The cost model is therefore mostly indirect:
- GKE cost (cluster management fees and compute, depending on Standard vs Autopilot)
- Cost of provisioned Google Cloud resources (Storage, Pub/Sub; IAM itself is generally free, but managing IAM can generate Audit Logs)
- Networking egress (if applicable)
- Operational telemetry (Logging, Monitoring metrics, log storage)
You should validate current billing details using:
– GKE pricing: https://cloud.google.com/kubernetes-engine/pricing
– Google Cloud Pricing Calculator: https://cloud.google.com/products/calculator
– Pricing for each managed service (for example, Cloud Storage pricing): https://cloud.google.com/storage/pricing
If you use Config Controller (managed control plane), it may have its own pricing model separate from Config Connector. Verify in official docs if you’re evaluating that managed option.
Pricing dimensions
Key dimensions that drive cost:
1. Cluster runtime
   - Standard GKE: management fee (where applicable) + node VM costs + disks
   - Autopilot: pay for requested pod resources + associated overhead (the model differs; verify current Autopilot pricing)
2. Resource usage
   - Storage buckets: storage used, operations, retrieval, network
   - Pub/Sub: message volume, retention, egress
   - Databases and compute: instance hours, storage, IOPS, backups
3. Logging and audit
   - Cloud Audit Logs are generated by Google Cloud API calls; retention/export can affect cost (verify what is chargeable in your logging plan)
4. Networking
   - Controller calls to Google APIs are typically internal/public API calls; costs are usually dominated by the resources you create rather than controller traffic, but egress and NAT can add cost in private networking setups.
Free tier
There is no universal “free tier” specifically for Config Connector, but you might use:
- Google Cloud free tier offerings for certain services (varies by product and region; verify)
- Small, short-lived GKE clusters to limit cost during labs
Hidden or indirect costs
- Leaving clusters running after tests (common cost leak).
- Log volume from frequent reconciliation errors or verbose debugging.
- Accidental resource creation (for example, provisioning a database instance) via a manifest merge.
- NAT gateway costs (if private cluster needs outbound internet to reach APIs).
How to optimize cost
- Use short-lived clusters for experiments; automate cleanup.
- Use least-privileged provisioning so “expensive” resources cannot be created by default.
- Apply quotas and organization policies (as appropriate) to block certain SKUs.
- Reduce log retention or route logs selectively (while preserving security requirements).
- Prefer namespaced design with templates that default to low-cost SKUs.
Example low-cost starter estimate (non-numeric guidance)
A low-cost lab typically includes:
- One small GKE cluster (or an existing dev cluster)
- A Cloud Storage bucket
- A service account and a simple IAM binding
Compute cost will dominate. Use the Pricing Calculator to estimate:
- cluster runtime for the expected hours
- minimal node pool sizing (Standard) or minimal pod requests (Autopilot)
- storage operations (usually small for a lab)
Example production cost considerations
For production, plan for:
- Highly available clusters (multi-zone if required)
- GitOps controller overhead, policy engines, and monitoring agents
- Larger numbers of managed resources (controller scale and API quota considerations)
- Logging and audit retention/export
- Separate clusters/environments (dev/stage/prod) or separate namespaces with strict controls
10. Step-by-Step Hands-On Tutorial
This lab focuses on a realistic, safe, low-cost workflow: use Config Connector in a GKE cluster to create a Cloud Storage bucket, create a Google service account, and grant it bucket-level access—a common “Access and resource management” task.
Objective
- Enable Config Connector on a GKE cluster.
- Configure identity so Config Connector can call Google Cloud APIs securely (Workload Identity recommended).
- Provision:
- a Cloud Storage bucket
- a Google service account
- an IAM binding granting that service account access to the bucket
- Validate results in both Kubernetes and Google Cloud.
- Clean up everything to avoid ongoing cost.
Lab Overview
You will:
1. Create or select a Google Cloud project and enable required APIs.
2. Create a GKE cluster with Workload Identity enabled.
3. Enable Config Connector on the cluster (GKE add-on) and verify controller pods.
4. Create a Google service account for Config Connector operations and bind it to the controller’s Kubernetes service account (Workload Identity binding).
5. Configure a namespace-to-project mapping (namespaced mode) and apply Config Connector custom resources.
6. Apply manifests to create a bucket, service account, and bucket IAM membership.
7. Validate and troubleshoot.
8. Clean up.
Important: Exact console labels and some resource kind names can change. Where this tutorial references a specific CRD kind or annotation, verify in the official Config Connector resource reference for your version: https://cloud.google.com/config-connector/docs/reference/overview
Step 1: Set environment variables and select your project
1) Authenticate and select a project:
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
gcloud config set compute/region us-central1
Expected outcome: gcloud config get-value project prints your project.
2) Enable required APIs (common set; add more as needed):
gcloud services enable \
container.googleapis.com \
iam.googleapis.com \
cloudresourcemanager.googleapis.com \
storage.googleapis.com
Expected outcome: APIs are enabled without errors.
Step 2: Create a GKE cluster with Workload Identity enabled
You can use an existing cluster, but for a clean lab, create a dedicated one. Cluster sizing and mode affect cost—delete it afterward.
Create a Standard cluster (example; adjust region/zone and sizing):
export PROJECT_ID="$(gcloud config get-value project)"
export REGION="us-central1"
export CLUSTER_NAME="cc-lab"
gcloud container clusters create "$CLUSTER_NAME" \
--region "$REGION" \
--workload-pool="${PROJECT_ID}.svc.id.goog" \
--num-nodes 1
Get credentials:
gcloud container clusters get-credentials "$CLUSTER_NAME" --region "$REGION"
kubectl cluster-info
Expected outcome: kubectl connects to the cluster.
If you prefer Autopilot or private clusters, follow official GKE guidance and ensure outbound access to Google APIs. Autopilot and add-on availability can vary—verify in GKE docs.
Step 3: Enable Config Connector and verify it is running
Preferred (GKE add-on): Enable Config Connector in the Google Cloud Console for the cluster.
General console path (may vary):
- Google Cloud Console → Kubernetes Engine → Clusters
- Select your cluster → Features / Add-ons
- Enable Config Connector
- Apply changes and wait for rollout
Verification (in Kubernetes):
Check for the Config Connector system namespace and pods. Common namespace is cnrm-system (verify in your cluster):
kubectl get namespaces
kubectl get pods -A | grep -E "cnrm|config|connector" || true
If you see cnrm-system, inspect pods:
kubectl get pods -n cnrm-system
kubectl logs -n cnrm-system deploy/cnrm-controller-manager --tail=100
Expected outcome: Controller pods are Running/Ready, logs show normal startup (no repeated auth failures).
If the add-on is not available in your cluster version/channel, follow the official installation guide: https://cloud.google.com/config-connector/docs/how-to/install-upgrade-uninstall
Step 4: Configure identity (Workload Identity binding)
Config Connector needs Google Cloud permissions. A common and recommended approach on GKE is:
- Create a Google service account (GSA) that will represent Config Connector’s Google Cloud identity.
- Bind the controller’s Kubernetes service account (KSA) to that GSA using Workload Identity.
- Grant the GSA only the roles required for this lab.
4.1 Identify the controller’s Kubernetes service account
List service accounts in cnrm-system:
kubectl get serviceaccounts -n cnrm-system
A commonly used KSA name is cnrm-controller-manager (verify in your cluster output). Export variables:
export KSA_NAMESPACE="cnrm-system"
export KSA_NAME="cnrm-controller-manager"
4.2 Create a Google service account for Config Connector
export CC_GSA_NAME="config-connector-lab"
export CC_GSA_EMAIL="${CC_GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud iam service-accounts create "$CC_GSA_NAME" \
--display-name "Config Connector Lab Service Account"
4.3 Grant least-privilege roles for this lab
For this lab we need to:
- Create/manage a Cloud Storage bucket and IAM on that bucket
- Create IAM service account resources
Typical roles (verify exact required roles in your environment):
– roles/storage.admin (buckets + IAM bindings)
– roles/iam.serviceAccountAdmin (create service accounts)
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member="serviceAccount:${CC_GSA_EMAIL}" \
--role="roles/storage.admin"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member="serviceAccount:${CC_GSA_EMAIL}" \
--role="roles/iam.serviceAccountAdmin"
4.4 Bind the Kubernetes service account to the Google service account
gcloud iam service-accounts add-iam-policy-binding "$CC_GSA_EMAIL" \
--member="serviceAccount:${PROJECT_ID}.svc.id.goog[${KSA_NAMESPACE}/${KSA_NAME}]" \
--role="roles/iam.workloadIdentityUser"
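To confirm the binding took effect, you can read back the GSA’s IAM policy. This is a hedged sketch, not an official verification step: it assumes gcloud is authenticated and that CC_GSA_EMAIL is exported as in step 4.2.

```shell
# Hedged check: read back the GSA's IAM policy and confirm the
# roles/iam.workloadIdentityUser binding is present.
if command -v gcloud >/dev/null 2>&1 && [ -n "${CC_GSA_EMAIL:-}" ]; then
  gcloud iam service-accounts get-iam-policy "$CC_GSA_EMAIL" \
    --format='yaml(bindings)' \
    || echo "policy read failed; check your own IAM permissions"
else
  echo "export CC_GSA_EMAIL and install gcloud before running this check"
fi
CHECK_RAN=1
```

Look for a binding whose role is roles/iam.workloadIdentityUser and whose member matches your workload pool and KSA.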
Expected outcome: IAM bindings succeed without permission errors.
Common error:
PERMISSION_DENIED when adding IAM bindings. Ensure your user has permission to set IAM policy (for example, Project IAM Admin) or use an authorized admin account.
Step 5: Configure namespaced management (project mapping)
In many setups, you map a Kubernetes namespace to a Google Cloud project for namespaced mode. The exact method can be:
– a namespace annotation (common), and/or
– a ConfigConnectorContext custom resource (depending on mode/version)
Because these details can vary by version, verify the current recommended approach:
– Namespaced mode overview: https://cloud.google.com/config-connector/docs/concepts/namespace-management
Create a namespace for this lab:
export NS="cc-demo"
kubectl create namespace "$NS"
Annotate the namespace with the project ID (commonly supported pattern):
kubectl annotate namespace "$NS" cnrm.cloud.google.com/project-id="$PROJECT_ID" --overwrite
If your version requires associating the GSA at the namespace level, consult the namespaced mode docs and apply the recommended annotation or ConfigConnectorContext.
Expected outcome: Namespace exists and has the correct annotation:
kubectl get namespace "$NS" -o jsonpath='{.metadata.annotations.cnrm\.cloud\.google\.com/project-id}{"\n"}'
Step 6: Apply Config Connector resources (bucket, service account, IAM)
Now you will apply three resources:
1. A Cloud Storage bucket
2. A Google service account
3. A bucket-level IAM binding granting the service account access
Resource kinds and fields must match the Config Connector resource reference for your installed version. Verify the exact Kind names and API groups here: https://cloud.google.com/config-connector/docs/reference/overview
Create a working directory:
mkdir -p cc-lab && cd cc-lab
6.1 Create a Storage bucket resource manifest
Create bucket.yaml:
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: cc-lab-bucket-unique-12345
  namespace: cc-demo
spec:
  location: US
  uniformBucketLevelAccess: true
Apply it:
kubectl apply -f bucket.yaml
Expected outcome: Kubernetes accepts the object. The bucket will become Ready after reconciliation.
Check status:
kubectl get storagebucket -n "$NS"
kubectl describe storagebucket -n "$NS" cc-lab-bucket-unique-12345
Bucket names must be globally unique. Change cc-lab-bucket-unique-12345 to something unique (for example, include your project ID or a random suffix).
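One way to sidestep the name clash, sketched below: derive the bucket name from your project ID plus a random suffix, then substitute it into the manifest. The helper logic and regex are illustrative, not from the official docs.

```shell
# Build a globally unique, GCS-legal bucket name from the project ID
# plus a short random suffix (PROJECT_ID assumed exported; demo fallback).
PROJECT_ID="${PROJECT_ID:-demo-project}"
SUFFIX="$(LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 6)"
BUCKET_NAME="cc-lab-${PROJECT_ID}-${SUFFIX}"

# GCS names: 3-63 chars, lowercase letters/digits/hyphens,
# no leading or trailing hyphen.
if echo "$BUCKET_NAME" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'; then
  echo "ok: $BUCKET_NAME"
else
  echo "name needs fixing: $BUCKET_NAME"
fi

# Substitute into the manifest before applying (if it exists here).
if [ -f bucket.yaml ]; then
  sed -i.bak "s/cc-lab-bucket-unique-12345/${BUCKET_NAME}/g" bucket.yaml
fi
```

After the substitution, remember to use the same name in the later IAM binding manifest.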
6.2 Create a Google service account manifest
Create app-sa.yaml:
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: cc-app-sa
  namespace: cc-demo
spec:
  displayName: "Config Connector App Service Account"
Apply:
kubectl apply -f app-sa.yaml
Expected outcome: A Google service account is created in the project.
Verify in Kubernetes:
kubectl get iamserviceaccount -n "$NS"
kubectl describe iamserviceaccount -n "$NS" cc-app-sa
6.3 Grant the service account access to the bucket (IAM binding)
Depending on the resource reference, the Kind may be StorageBucketIAMMember or similar. Verify the correct Kind and fields in the official resource reference for Cloud Storage IAM resources.
Example manifest bucket-iam.yaml (verify Kind/fields):
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucketIAMMember
metadata:
  name: cc-app-sa-objectadmin
  namespace: cc-demo
spec:
  bucketRef:
    name: cc-lab-bucket-unique-12345
  role: roles/storage.objectAdmin
  member: serviceAccount:cc-app-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com
Before applying, replace YOUR_PROJECT_ID with your project ID:
sed -i.bak "s/YOUR_PROJECT_ID/${PROJECT_ID}/g" bucket-iam.yaml
kubectl apply -f bucket-iam.yaml
Expected outcome: The IAM membership resource becomes Ready, and the service account gains the role on the bucket.
Validation
Validate in Kubernetes
Check readiness and conditions:
kubectl get storagebucket,iamserviceaccount -n "$NS"
kubectl get storagebucketiammember -n "$NS" || true
Describe resources to check conditions:
kubectl describe storagebucket -n "$NS" cc-lab-bucket-unique-12345
kubectl describe iamserviceaccount -n "$NS" cc-app-sa
kubectl describe storagebucketiammember -n "$NS" cc-app-sa-objectadmin || true
Look for:
– Ready=True
– no repeated reconciliation errors
– status fields populated (depends on resource kind)
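Instead of polling describe output by hand, kubectl wait can block until the Ready condition is set. A sketch, assuming the lab's resource and namespace names; the timeout value is arbitrary:

```shell
# Wait for Config Connector to report Ready=True on the bucket.
# kubectl wait works against any resource exposing a Ready condition.
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=Ready \
    "storagebucket/cc-lab-bucket-unique-12345" \
    -n "${NS:-cc-demo}" --timeout=120s \
    || echo "not Ready in time; check describe output and controller logs"
else
  echo "kubectl not installed; run from a workstation with cluster access"
fi
WAIT_RAN=1
```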
Validate in Google Cloud
Verify the bucket exists:
gcloud storage buckets list --filter="name:cc-lab-bucket"
Verify the service account exists:
gcloud iam service-accounts list --filter="email:cc-app-sa@${PROJECT_ID}.iam.gserviceaccount.com"
Check bucket IAM policy includes the service account (may require additional permissions to view IAM):
gcloud storage buckets get-iam-policy gs://cc-lab-bucket-unique-12345
Troubleshooting
Common issues and fixes:
1) CRD not found / “no matches for kind …”
– Cause: Config Connector not installed or not fully initialized.
– Fix:
– Confirm add-on is enabled.
– Check CRDs: kubectl get crds | grep cnrm
– Follow official install steps: https://cloud.google.com/config-connector/docs/how-to/install-upgrade-uninstall
2) Permission denied errors in resource status
– Cause: The GSA used by Config Connector lacks required IAM roles, or Workload Identity binding is wrong.
– Fix:
– Check controller logs:
kubectl logs -n cnrm-system deploy/cnrm-controller-manager --tail=200
– Confirm Workload Identity binding is correct:
– correct project workload pool
– correct KSA name/namespace
– Grant the minimal additional role required (based on error), then re-check.
3) Bucket name conflict
– Cause: Bucket names are globally unique.
– Fix: Change the manifest metadata.name to a unique value, delete/reapply if needed.
4) Resource stuck in NotReady
– Cause: API not enabled, invalid field, organization policy constraints, or quota issues.
– Fix:
– Check kubectl describe conditions/events.
– Check Cloud Audit Logs for the failed API call and reason (recommended for deep debugging).
– Verify required APIs are enabled.
5) IAM binding resource kind mismatch
– Cause: The exact IAM CRD kind for Cloud Storage may differ by version.
– Fix: Use the official reference and adjust the Kind/fields accordingly: https://cloud.google.com/config-connector/docs/reference/overview
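For the Cloud Audit Logs check in item 4, a hedged query sketch: it assumes gcloud is authenticated, the Cloud Logging API is enabled, and you hold a role that can read logs (for example, roles/logging.viewer). The filter fields are standard Audit Log schema.

```shell
# Hedged sketch: recent error-level API calls made by the Config
# Connector GSA, pulled from Cloud Audit Logs (CC_GSA_EMAIL from step 4.2).
if command -v gcloud >/dev/null 2>&1; then
  gcloud logging read \
    "protoPayload.authenticationInfo.principalEmail=\"${CC_GSA_EMAIL:-unset}\" AND severity>=ERROR" \
    --limit=10 --format='value(protoPayload.status.message)' \
    || echo "log read failed; check auth and the Logging API"
else
  echo "gcloud not installed; run from Cloud Shell"
fi
AUDIT_RAN=1
```

The status messages usually name the exact missing permission or violated policy, which tells you which role to grant.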
Cleanup
Delete Kubernetes resources (this removes or updates Google Cloud resources depending on deletion behavior and resource type):
kubectl delete -n "$NS" -f bucket-iam.yaml || true
kubectl delete -n "$NS" -f app-sa.yaml || true
kubectl delete -n "$NS" -f bucket.yaml || true
kubectl delete namespace "$NS"
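Deletion behavior can be controlled per resource: many kinds support a deletion-policy annotation, and setting it to abandon keeps the underlying Google Cloud resource when the Kubernetes object is removed (verify per-kind support in the resource reference). A sketch on the lab bucket:

```yaml
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: cc-lab-bucket-unique-12345
  namespace: cc-demo
  annotations:
    # "abandon" leaves the bucket in Google Cloud when this object is
    # deleted; the default behavior deletes the underlying resource
    cnrm.cloud.google.com/deletion-policy: abandon
spec:
  location: US
  uniformBucketLevelAccess: true
```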
Delete the cluster (major cost saver):
gcloud container clusters delete "$CLUSTER_NAME" --region "$REGION" --quiet
Delete the Config Connector Google service account used for reconciliation:
gcloud iam service-accounts delete "$CC_GSA_EMAIL" --quiet
Expected outcome: No cluster remains, and the lab service accounts/resources are removed.
11. Best Practices
Architecture best practices
- Namespace-per-project is a common, scalable pattern:
- one namespace maps to one Google Cloud project
- enforce boundaries with RBAC and policy
- Use a dedicated infrastructure management cluster or dedicated namespaces with strict controls for production.
- Design for bootstrap:
- you may still need Terraform/gcloud to create the first cluster and initial IAM bindings
- then hand off ongoing resource management to Config Connector
IAM/security best practices
- Use Workload Identity instead of service account keys.
- Grant the Config Connector GSA least privilege:
- avoid broad roles like Owner/Editor in production
- use custom roles where appropriate
- Use separate GSAs for different environments or namespaces if your design supports it (verify supported patterns).
Cost best practices
- Auto-delete dev clusters and use short-lived environments.
- Prevent expensive resources via:
- policy-as-code (deny certain Kinds)
- organization policies and quotas
- Manage log volume:
- investigate repeated reconciliation failures quickly
- right-size log retention and sinks based on compliance needs
Performance best practices
- Avoid “one cluster managing everything at org scale” unless you’ve planned:
- API quotas
- controller scaling
- etcd object growth and watch pressure
- Batch changes via GitOps PRs and avoid constant churn.
- Use stable naming conventions to reduce accidental deletes/recreates.
Reliability best practices
- Treat the cluster as critical infrastructure:
- monitor controller health and reconciliation error rates
- protect the cluster with strong access controls
- Use staging environments to test manifest changes before production.
- Understand immutable fields: some changes require replacement, which can be destructive.
Operations best practices
- Standardize:
- labels (env, team, cost-center, owner)
- naming conventions (resource and namespace)
- Create runbooks:
- reconciliation failures
- permission denied
- quota and org-policy violations
- Use Cloud Audit Logs to trace “who did what” for API calls (controller identity).
Governance/tagging/naming best practices
- Enforce labels via admission policies.
- Require a consistent naming scheme (for example, ${team}-${env}-${purpose}).
- Document which team “owns” each namespace/project mapping.
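The ${team}-${env}-${purpose} scheme can be enforced mechanically before manifests are committed. A small illustrative helper; the function name and regex are assumptions, not an official convention:

```shell
# Build and validate a resource name of the form <team>-<env>-<purpose>.
# Illustrative only: the charset mirrors common Google Cloud name rules.
make_name() {
  team="$1"; env="$2"; purpose="$3"
  name="${team}-${env}-${purpose}"
  if echo "$name" | grep -Eq '^[a-z][a-z0-9]*-(dev|stage|prod)-[a-z0-9]+$'; then
    echo "$name"
  else
    echo "invalid name: $name" >&2
    return 1
  fi
}

make_name payments prod uploads   # -> payments-prod-uploads
```

A check like this can run in CI or in an admission webhook so nonconforming names never reach the cluster.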
12. Security Considerations
Identity and access model
- Config Connector controllers act with the permissions of a Google service account (directly or via Workload Identity).
- Kubernetes users’ ability to create/update resources is controlled by Kubernetes RBAC.
- Secure design requires both layers:
- Kubernetes RBAC limits who can submit manifests
- Google Cloud IAM limits what the controller can do even if a manifest is applied
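A minimal sketch of the Kubernetes RBAC layer, with all names illustrative (a team-a-prod namespace and a team-a group): a Role that permits managing only Storage-related Config Connector kinds, bound to the team’s group.

```yaml
# Hypothetical namespace-scoped Role: team-a may manage only Storage
# bucket objects (and their IAM members) in its own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cc-storage-editor
  namespace: team-a-prod
rules:
  - apiGroups: ["storage.cnrm.cloud.google.com"]
    resources: ["storagebuckets", "storagebucketiammembers"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cc-storage-editor
  namespace: team-a-prod
subjects:
  - kind: Group
    name: team-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cc-storage-editor
  apiGroup: rbac.authorization.k8s.io
```

Even if a manifest slips past this layer, the controller’s GSA permissions still bound what can actually change in Google Cloud.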
Encryption
- Google Cloud resources are encrypted at rest by default for many services (service-specific; verify).
- Kubernetes secrets and etcd encryption depend on cluster configuration (GKE features vary; verify).
- Prefer not to store sensitive values in plain manifests. Use secret management patterns appropriate for your org.
Network exposure
- Controller needs access to Google APIs.
- In private clusters, ensure correct outbound routing/NAT/Private Google Access as needed.
- Restrict cluster API access (authorized networks, private endpoint) per your security baseline.
Secrets handling
- Prefer Workload Identity so you don’t store service account keys.
- If any workflow uses keys (legacy), treat them as high-risk:
- rotate frequently
- store in Secret Manager
- scope permissions narrowly
- migrate to Workload Identity where possible
Audit/logging
- Use Cloud Audit Logs to monitor:
- IAM policy changes
- service account creation
- storage IAM changes
- Correlate Kubernetes events (who applied which manifest) with Audit Logs (what API calls occurred).
- Consider exporting relevant logs to a SIEM (organization-specific).
Compliance considerations
- Maintain separation of duties:
- platform admins manage controller permissions and policies
- teams manage resources within approved boundaries
- Use policy-as-code to enforce compliance guardrails before resources are created.
- Review data residency requirements for resources (bucket location/region).
Common security mistakes
- Granting the controller’s GSA broad roles like roles/owner.
- Allowing many users cluster-admin rights, enabling them to create arbitrary infrastructure resources.
- Not restricting which namespaces can manage which projects/resources.
- Storing secrets in Git without encryption or access controls.
- Ignoring Audit Logs and controller reconciliation errors.
Secure deployment recommendations
- Use Workload Identity.
- Use namespaced mode with strict RBAC.
- Enforce admission policies (deny risky resource configs).
- Maintain separate environments and promotion pipelines.
- Monitor and alert on IAM-related changes initiated by the controller identity.
13. Limitations and Gotchas
Config Connector is powerful, but it is not “magic Terraform in Kubernetes.” Plan for these realities:
- Resource coverage is not universal: Not every Google Cloud resource has a CRD. Always verify in the official resource reference.
- Immutable fields: Many resources have fields that force replacement. Replacement can mean downtime or data loss (for example, deleting and recreating).
- Eventual consistency: Some APIs take time; reconciliation may show intermediate states.
- API quotas and rate limits: Large-scale reconciliations can hit API limits.
- Multi-project complexity: Mapping namespaces to projects is straightforward; cross-project dependencies and shared resources require careful design.
- Bootstrap problem: You often need to create the first cluster and IAM bindings using other tools before Config Connector can manage things.
- Operational dependency on the cluster: If the cluster is down, reconciliation stops (resources remain in Google Cloud, but drift won’t be corrected).
- Deletion behavior surprises: Deleting a Kubernetes object can delete the real Google Cloud resource unless configured otherwise (verify deletion policy support).
- UI/feature availability varies by GKE version/channel: Add-ons and supported versions change; verify with current GKE and Config Connector docs.
- Troubleshooting often requires two planes:
- Kubernetes status/events/logs
- Cloud Audit Logs for API call failures
14. Comparison with Alternatives
Config Connector is one option in the Google Cloud infrastructure management toolbox. Here’s how it compares.
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Config Connector (Google Cloud) | Kubernetes-centric orgs managing Google Cloud resources declaratively | Kubernetes-native workflow, reconciliation, GitOps-friendly, integrates with RBAC/namespaces | Requires Kubernetes operational maturity; not all resources supported | You run GKE/Kubernetes and want Kubernetes as the control plane for Google Cloud resources |
| Config Controller (Google Cloud) | Managed control plane approach (hosted) | Reduces need to operate your own cluster for infra management | Separate product with its own constraints/cost; verify resource coverage and pricing | You want managed operations while keeping Config Connector-style workflows |
| Terraform (HashiCorp) | Broad IaC across clouds and services | Mature ecosystem, wide provider coverage, strong plan/apply workflow | Not Kubernetes-native; state management overhead | You need broad coverage, multi-cloud, or provisioning before Kubernetes exists |
| Google Cloud CLI / Console | Ad-hoc operations, small setups | Direct, quick, no controller | Not declarative; drift and auditability issues; hard to scale | Small teams, emergency ops, or one-off changes |
| Pulumi | IaC with general-purpose languages | Strong developer experience, testing | Still needs state; not Kubernetes-native | Teams prefer code-based IaC and want strong software engineering patterns |
| Crossplane (open source) | Kubernetes-native IaC across clouds | Kubernetes API approach like Config Connector; multi-cloud | Additional control plane complexity; provider maturity varies | You want Kubernetes-native multi-cloud infrastructure patterns |
| AWS CloudFormation / Azure ARM/Bicep | Native IaC in other clouds | Deep integration in their ecosystems | Not applicable to Google Cloud directly | Choose when you are operating primarily in those clouds |
15. Real-World Example
Enterprise example (multi-team, governed access/resource management)
- Problem: A large enterprise has dozens of Google Cloud projects and teams. IAM requests and resource provisioning (buckets, Pub/Sub, service accounts) are bottlenecked. Auditors require traceability for IAM changes.
- Proposed architecture:
- A dedicated GKE “platform control” cluster runs Config Connector.
- Namespaces map to projects: team-a-prod, team-b-prod, etc.
- Kubernetes RBAC restricts teams to their namespaces.
- Admission policies enforce guardrails (no public buckets, restricted IAM roles).
- GitOps sync deploys approved manifests from a central repo.
- Cloud Audit Logs are monitored and exported to a SIEM.
- Why Config Connector was chosen:
- The org already standardizes on Kubernetes workflows and GitOps.
- Namespaced delegation aligns with team/project boundaries.
- Auditability is improved: PR history + Kubernetes events + Cloud Audit Logs.
- Expected outcomes:
- Faster, safer provisioning with self-service patterns.
- Fewer IAM misconfigurations due to policy enforcement.
- Stronger audit evidence for access changes.
Startup/small-team example (simple, pragmatic)
- Problem: A startup runs GKE and wants to manage a small set of cloud resources (buckets, Pub/Sub, service accounts) alongside app deployments, with minimal tool sprawl.
- Proposed architecture:
- A single GKE cluster (dev/prod split by namespaces or separate clusters depending on maturity).
- Config Connector enabled; one namespace per environment.
- Minimal policy: enforce naming and block obviously risky configs.
- CI pipeline applies manifests after code review.
- Why Config Connector was chosen:
- Team already uses Kubernetes daily.
- Reduces need to manage Terraform state and separate pipelines early on.
- Expected outcomes:
- Faster onboarding and consistent resource creation.
- Easy cleanup of dev resources by deleting namespace objects (with careful deletion policies).
16. FAQ
1) Is Config Connector a Google Cloud API service or a Kubernetes controller?
It’s primarily a Kubernetes controller + CRDs that manages Google Cloud resources by calling Google Cloud APIs.
2) Does Config Connector replace Terraform?
Not universally. It’s best when you want Kubernetes-native infrastructure management. Terraform still excels for broad coverage, multi-cloud, and provisioning before Kubernetes exists.
3) What’s the difference between Config Connector and Config Controller?
Config Connector is the controller/CRDs. Config Controller is a managed Google Cloud offering that provides a hosted control plane for similar workflows. Verify current product positioning in official docs.
4) Can I use Config Connector without GKE?
It runs on Kubernetes. GKE is the most common supported environment in Google Cloud docs. For non-GKE Kubernetes, supportability and setup may differ—verify official documentation for your environment.
5) Is Workload Identity required?
Not always, but it is commonly the recommended authentication method on GKE because it avoids long-lived keys. Verify supported auth methods for your installation mode.
6) How do I prevent teams from creating expensive resources?
Use a combination of:
– Kubernetes RBAC (who can apply which kinds)
– Admission policies (deny certain resources/fields)
– Google Cloud organization policies and quotas
7) What happens if someone changes a resource in the Console?
Config Connector reconciliation may revert the change if the field is managed and mutable. Some fields may not be enforced due to API constraints. Always test.
8) Can Config Connector manage IAM policies safely?
Yes, but IAM is high-risk. Use least privilege, strong review processes, and consider partial-policy patterns where supported. Verify IAM CRD semantics carefully.
9) How do I troubleshoot a NotReady resource?
Start with:
– kubectl describe (conditions/events)
– controller logs
Then check Cloud Audit Logs for API errors (permission denied, invalid argument, org policy).
10) Does deleting the Kubernetes object delete the Google Cloud resource?
Often yes, unless configured otherwise and supported by that resource. Verify deletion policy behavior per resource kind.
11) How do I manage multiple projects?
A common pattern is namespace-per-project, with each namespace mapped to a project via annotations or context resources (depending on version).
12) Can Config Connector create projects and folders?
Some resource hierarchy objects may be supported, but this is advanced and permission-heavy. Verify official resource coverage and constraints.
13) Is Config Connector suitable for production?
Yes, with proper operations: stable clusters, strong RBAC, policy enforcement, monitoring, and tested release processes.
14) What’s the biggest operational risk?
Treating the cluster as “just another cluster.” If it’s your infrastructure control plane, it needs production-grade operations and access controls.
15) How do I keep manifests compatible over time?
Pin versions where applicable, follow release notes, test changes in staging, and regularly verify CRD/apiVersion changes in official docs.
17. Top Online Resources to Learn Config Connector
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official documentation | Config Connector overview – https://cloud.google.com/config-connector/docs/overview | Core concepts, installation paths, and supported patterns |
| Official reference | Resource reference overview – https://cloud.google.com/config-connector/docs/reference/overview | Authoritative list of supported resource kinds, fields, and examples |
| Official guide | Install/upgrade/uninstall – https://cloud.google.com/config-connector/docs/how-to/install-upgrade-uninstall | Current installation workflow and operational tasks |
| Official concept docs | Namespace management – https://cloud.google.com/config-connector/docs/concepts/namespace-management | Namespaced vs cluster mode and multi-tenancy patterns |
| Pricing | GKE pricing – https://cloud.google.com/kubernetes-engine/pricing | Primary indirect cost driver when running Config Connector on GKE |
| Pricing tool | Google Cloud Pricing Calculator – https://cloud.google.com/products/calculator | Model cluster and resource costs without guessing numbers |
| Product docs | Workload Identity (GKE) – https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity | Secure authentication model used by Config Connector in many deployments |
| Observability | Cloud Audit Logs – https://cloud.google.com/logging/docs/audit | Essential for tracking API calls and troubleshooting permission issues |
| Architecture | Google Cloud Architecture Center – https://cloud.google.com/architecture | Patterns for landing zones, governance, and platform engineering (search within for relevant guides) |
| Source/examples | Config Connector samples (verify official GitHub links from docs) | Practical manifests and patterns; always prefer links from official docs to avoid outdated samples |
| Video | Google Cloud Tech / official YouTube channels (search “Config Connector”) | Visual walkthroughs; verify recency and version alignment |
18. Training and Certification Providers
| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
|---|---|---|---|---|
| DevOpsSchool.com | DevOps engineers, SREs, platform teams | Kubernetes + DevOps practices; may include IaC and cloud governance concepts | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Students and early-career engineers | DevOps foundations, SCM, CI/CD | Check website | https://www.scmgalaxy.com/ |
| CloudOpsNow.in | Cloud/DevOps practitioners | Cloud operations, automation, operational readiness | Check website | https://www.cloudopsnow.in/ |
| SreSchool.com | SREs and operations teams | Reliability engineering, observability, incident response | Check website | https://www.sreschool.com/ |
| AiOpsSchool.com | Ops teams exploring AIOps | Monitoring/automation/AIOps concepts | Check website | https://www.aiopsschool.com/ |
19. Top Trainers
| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
|---|---|---|---|
| RajeshKumar.xyz | DevOps/cloud coaching content (verify offerings) | Engineers looking for guided learning | https://rajeshkumar.xyz/ |
| devopstrainer.in | DevOps training programs (verify catalog) | Individuals and teams | https://www.devopstrainer.in/ |
| devopsfreelancer.com | Freelance DevOps services/training (verify offerings) | Startups and small teams | https://www.devopsfreelancer.com/ |
| devopssupport.in | DevOps support/training resources (verify offerings) | Operations teams needing practical support | https://www.devopssupport.in/ |
20. Top Consulting Companies
| Company | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
|---|---|---|---|---|
| cotocus.com | Cloud/DevOps consulting (verify service lines) | Platform engineering, Kubernetes operations, DevOps pipelines | GKE platform setup, GitOps rollout, governance guardrails | https://cotocus.com/ |
| DevOpsSchool.com | DevOps consulting and training (verify offerings) | DevOps transformation, Kubernetes enablement | Config Connector adoption plan, CI/CD + policy-as-code implementation | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting (verify offerings) | Automation, cloud operations, toolchain integration | Infrastructure automation, operational readiness reviews, cost optimization | https://www.devopsconsulting.in/ |
21. Career and Learning Roadmap
What to learn before Config Connector
- Google Cloud fundamentals:
- projects, IAM, service accounts
- resource hierarchy concepts
- billing and quotas
- Kubernetes fundamentals:
- namespaces, RBAC, service accounts
- controllers and reconciliation model
- kubectl, manifests, and basic troubleshooting
- GKE basics:
- cluster creation, node pools, networking basics
- Workload Identity concepts
What to learn after Config Connector
- GitOps operations:
- PR-based deployment workflows
- environment promotion strategies
- Policy as code:
- admission control patterns
- guardrails for IAM and resource configuration
- Advanced Google Cloud governance:
- organization policies
- audit log exports and SIEM integrations
- labeling and cost allocation
- Scaling platform engineering:
- multi-project landing zones
- standard templates and developer self-service portals
Job roles that use it
- Platform Engineer
- Cloud Engineer
- DevOps Engineer
- Site Reliability Engineer (SRE)
- Cloud Security Engineer (especially IAM governance)
- Solutions Architect (designing standard patterns and controls)
Certification path (if available)
There is no “Config Connector certification” as a standalone credential. Relevant Google Cloud certifications often include:
– Professional Cloud Architect
– Professional Cloud DevOps Engineer
– Professional Cloud Security Engineer
Verify current certification tracks on Google Cloud’s official certification site.
Project ideas for practice
- Build a namespace-per-project model and provision:
- buckets + IAM
- Pub/Sub topics/subscriptions + IAM
- service accounts per app
- Add policy enforcement:
- deny public access
- enforce required labels and naming
- Implement GitOps:
- PR validation + automatic sync to the cluster
- Create a “break-glass” pattern:
- separate admin namespace and restricted RBAC
22. Glossary
- Access and resource management: Practices and controls for who can do what (access) and how cloud resources are created/managed (resource management).
- CRD (CustomResourceDefinition): Extends Kubernetes API with new resource types.
- Controller / reconciliation: A loop that continuously drives actual state toward desired state.
- Desired state: The configuration you declare in Kubernetes objects.
- Drift: When the real resource differs from declared configuration.
- GKE (Google Kubernetes Engine): Managed Kubernetes service on Google Cloud.
- GSA (Google Service Account): Google Cloud identity used by services and automation.
- KSA (Kubernetes Service Account): Kubernetes identity used by pods.
- Workload Identity: GKE feature that maps KSA identities to GSA identities without keys.
- Namespaced mode: Config Connector management model that uses namespaces for isolation and delegation.
- Cluster mode: Config Connector management model where resources are managed cluster-wide.
- Cloud Audit Logs: Google Cloud logging for administrative and data access events.
- Least privilege: Grant only the minimum permissions required.
- GitOps: Operating model where Git is the source of truth and automation applies changes.
23. Summary
Config Connector (Google Cloud) is a Kubernetes-native way to manage Google Cloud resources declaratively, making it especially relevant for Access and resource management programs that need consistent IAM and resource provisioning with strong auditability.
It matters because it brings reconciliation, GitOps workflows, and Kubernetes RBAC to Google Cloud infrastructure management—helping teams reduce drift, standardize provisioning, and implement governance guardrails at scale.
Cost-wise, Config Connector’s biggest drivers are GKE cluster runtime and the resources you provision, plus observability and potential networking costs in private environments. Security-wise, the most important decisions are using Workload Identity, enforcing least privilege, and restricting who can apply infrastructure CRDs via Kubernetes RBAC and admission policy.
Use Config Connector when Kubernetes is already your operational backbone and you want to manage Google Cloud resources with the same operational model. Next step: work through the official overview and resource reference, then build a small GitOps pipeline to manage a controlled set of resource kinds: – https://cloud.google.com/config-connector/docs/overview – https://cloud.google.com/config-connector/docs/reference/overview