Category
Distributed, hybrid, and multicloud
1. Introduction
Config Controller is a Google Cloud managed service that provides a hosted Kubernetes control plane specifically designed to manage Google Cloud resources using Kubernetes-style APIs and workflows.
In simple terms: you create a Config Controller instance, connect to it with kubectl, and apply Kubernetes manifests that represent Google Cloud resources (such as Cloud Storage buckets or Pub/Sub topics). The service then continuously reconciles those manifests into real Google Cloud resources.
Technically, Config Controller runs Google-supported controllers (most importantly Config Connector) in a Google-managed Kubernetes environment. Config Connector exposes Google Cloud resources as Kubernetes Custom Resource Definitions (CRDs). When you apply a custom resource (CR) to the cluster, Config Connector calls Google Cloud APIs to create/update/delete the corresponding infrastructure and continuously keeps it aligned with the desired state you declared.
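As an illustration, a minimal Config Connector manifest for a Cloud Storage bucket might look like the sketch below. The bucket name and project ID are placeholders; verify exact field names against the Config Connector resource reference for your version.

```yaml
# Hypothetical sketch: bucket name and project ID are placeholders.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-team-artifacts            # becomes the bucket name (must be globally unique)
  annotations:
    cnrm.cloud.google.com/project-id: my-project-id   # project where the bucket is created
spec:
  location: US-CENTRAL1
  uniformBucketLevelAccess: true
  versioning:
    enabled: true
```

Applying this with `kubectl apply -f bucket.yaml` asks Config Connector to create the bucket and keep it reconciled to this spec.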
The main problem it solves is standardizing infrastructure management around Kubernetes and GitOps practices—especially for distributed, hybrid, and multicloud platform teams that already operate Kubernetes heavily and want consistent, policy-governed, auditable infrastructure-as-code (IaC) for Google Cloud.
Naming note: Some older materials may refer to “Anthos Config Controller”. The current product name in Google Cloud documentation is Config Controller. Verify current naming and packaging in the official docs if you’re reading older blog posts or labs.
2. What is Config Controller?
Official purpose
Config Controller’s purpose is to provide a managed control plane for declaratively managing Google Cloud resources using Kubernetes APIs (via Config Connector), enabling GitOps-style workflows, policy enforcement, and consistent operational patterns.
Core capabilities
- Declarative management of Google Cloud resources using Kubernetes CRDs (via Config Connector).
- Continuous reconciliation: desired state in Kubernetes manifests is kept in sync with actual Google Cloud state.
- Kubernetes-native tooling: use `kubectl`, Kubernetes RBAC, namespaces, and (optionally) GitOps tooling patterns.
- Separation of concerns: run a “management cluster” for infrastructure configuration, separate from application clusters.
Major components (conceptual)
- Config Controller instance: the managed Kubernetes control plane you create in a project/region.
- Config Connector: controllers + CRDs that model Google Cloud resources (e.g., `StorageBucket`, `PubSubTopic`, IAM policies).
- IAM identity used by controllers: a Google Cloud service account (or managed identity) that Config Connector uses to call Google Cloud APIs.
- Kubernetes API endpoint: where you apply manifests and read reconciliation status.
Depending on how Google packages the service at the time you deploy, Config Controller may also support integration with policy and GitOps components commonly used with Anthos Config Management. Only enable/assume components that are explicitly documented for your version—verify in official docs if you expect built-in Policy Controller or Config Sync functionality.
Service type
- Managed control plane / managed Kubernetes-based configuration service
- It is not a general-purpose application runtime like GKE; it is designed for infrastructure configuration and reconciliation.
Scope (how it’s scoped)
Config Controller is typically:
- Project-scoped (created inside a Google Cloud project)
- Region-scoped (you choose a region for the instance)
Exact region availability and feature set can vary—verify in official docs:
- Docs: https://cloud.google.com/config-controller/docs
Fit in the Google Cloud ecosystem
Config Controller sits at the intersection of:
- Google Cloud infrastructure APIs (what ultimately gets created/modified)
- Kubernetes APIs (how you declare desired state)
- Distributed, hybrid, and multicloud operating models (where central platform teams want consistent configuration workflows)
It complements (not replaces) other IaC tools like Terraform:
- Terraform is declarative but executes through an explicit plan/apply workflow backed by state files; drift is corrected only when you run it.
- Config Controller is Kubernetes-style desired state with ongoing, automatic reconciliation.
3. Why use Config Controller?
Business reasons
- Standardize provisioning: consistent workflows across teams, especially if teams already know Kubernetes.
- Improve auditability: changes are visible as Kubernetes object changes (and often through Git history if using GitOps).
- Reduce configuration drift: reconciliation continuously corrects drift from the declared state.
Technical reasons
- Kubernetes-native IaC: use `kubectl`, namespaces, RBAC, admission controls, and familiar patterns.
- Model cloud resources as APIs: Google Cloud resources become objects you can validate, template, and manage systematically.
- Separation of config and apps: run a dedicated config control plane instead of granting app clusters broad infra permissions.
Operational reasons
- Continuous reconciliation helps keep infra correct even when manual console changes happen (depending on resource and permissions).
- Kubernetes-style status and events: you can inspect conditions, last reconcile status, and errors using standard tooling.
- Easier multi-team delegation: use Kubernetes namespaces + RBAC to delegate resource management boundaries.
Security/compliance reasons
- Centralize IAM: restrict who can create what via Kubernetes RBAC and controlled service accounts used by controllers.
- Policy-based guardrails (where supported): enforce requirements like labels, encryption settings, allowed locations, or naming conventions.
- Change control: tie changes to Git PRs and approvals (GitOps), and track applied objects.
Scalability/performance reasons
- Works well for organizations that manage many resources across teams and want a consistent interface and drift control.
- Offloads control plane management to Google (you manage fewer moving parts).
When teams should choose it
Choose Config Controller if you:
- Operate Kubernetes broadly and want Kubernetes-native infrastructure management.
- Want a dedicated management control plane for Google Cloud resources.
- Need drift reconciliation and (optionally) policy enforcement for infra.
- Want to build a platform where teams self-serve infrastructure through Kubernetes APIs.
When teams should not choose it
Avoid or reconsider Config Controller if you:
- Need a simple one-time provisioning workflow and already have mature Terraform pipelines—Config Controller adds another control plane.
- Need to manage many non–Google Cloud resources (Config Controller is focused on Google Cloud resources via Config Connector).
- Don’t want to adopt Kubernetes operational patterns for infra management.
- Have strict network constraints that prevent required API connectivity (for example, restricted egress requirements without an approved design). Verify networking requirements in official docs.
4. Where is Config Controller used?
Industries
- Regulated industries (finance, healthcare, public sector) that need auditable, policy-controlled infrastructure provisioning.
- SaaS and tech companies with platform engineering teams and Kubernetes-heavy operations.
- Retail, media, and gaming with multi-team environments and frequent environment creation.
Team types
- Platform engineering teams building “internal developer platforms”.
- DevOps/SRE teams standardizing provisioning and reducing drift.
- Cloud Center of Excellence (CCoE) teams enforcing governance.
- Security engineering teams implementing guardrails for resource creation.
Workloads and architectures
- Kubernetes-centric organizations using GitOps.
- Organizations running many application clusters (GKE, on-prem Kubernetes, other clouds) that want consistent infra provisioning patterns.
- Hybrid environments where a centralized config plane manages cloud resources for multiple environments.
Real-world deployment contexts
- A central project hosting a Config Controller instance managing shared infrastructure (networking, IAM, logging sinks).
- One Config Controller per environment (dev/stage/prod) to isolate permissions and blast radius.
- One Config Controller per business unit, with strict namespace and RBAC boundaries.
Production vs dev/test usage
- Dev/test: prototyping IaC and CRD-based provisioning; learning Kubernetes-style infra.
- Production: central governance, drift control, consistent self-service provisioning with strong IAM boundaries and audit.
5. Top Use Cases and Scenarios
Below are realistic scenarios where Config Controller is a strong fit.
1) GitOps for Google Cloud infrastructure
- Problem: Teams want PR-based review and automated rollout of Google Cloud resources.
- Why Config Controller fits: Manifests can live in Git; reconciler applies desired state continuously.
- Scenario: A platform repo defines Storage buckets, Pub/Sub topics, and IAM bindings. A merge to `main` triggers sync to Config Controller.
2) Centralized “infra management cluster” pattern
- Problem: Application clusters should not have broad IAM to create cloud resources.
- Why it fits: Config Controller is a dedicated control plane for provisioning.
- Scenario: App clusters run workloads only; Config Controller provisions Cloud SQL, buckets, and Pub/Sub topics per namespace/team.
3) Self-service resource provisioning for product teams
- Problem: Platform team is overloaded with ticket-based provisioning requests.
- Why it fits: Kubernetes RBAC + namespaces can delegate resource creation safely.
- Scenario: Team A gets a namespace and permissions to create Pub/Sub topics and subscriptions within approved constraints.
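The delegation in this scenario can be sketched with standard Kubernetes RBAC. In the hedged example below, the namespace name, role name, and group identity are placeholders; the API group and resource names follow Config Connector's Pub/Sub CRDs:

```yaml
# Hypothetical sketch: namespace, role, and group names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pubsub-editor
  namespace: team-a                  # Team A's delegated namespace
rules:
- apiGroups: ["pubsub.cnrm.cloud.google.com"]
  resources: ["pubsubtopics", "pubsubsubscriptions"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pubsub-editors
  namespace: team-a
subjects:
- kind: Group
  name: team-a-devs@example.com      # mapped from the team's Google identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pubsub-editor
  apiGroup: rbac.authorization.k8s.io
```

Team A can now manage Pub/Sub custom resources in its namespace, but nothing else; the controller's own Google Cloud IAM still bounds what actually gets created.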
4) Drift control and reconciliation for critical infra
- Problem: Manual console edits cause drift and outages.
- Why it fits: Continuous reconciliation detects and attempts to correct drift.
- Scenario: A logging sink must always exist with specific filters. If someone deletes it, it is recreated (depending on the resource and permissions).
5) Policy guardrails for compliant resource creation (where supported)
- Problem: Teams create resources without required labels, encryption, or approved regions.
- Why it fits: Policy frameworks can prevent non-compliant manifests from being accepted.
- Scenario: Prevent creation of buckets without retention policy and CMEK (if required by policy), enforced at admission time.
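Where a Gatekeeper-based policy engine (such as Policy Controller) is available, an admission-time guardrail might look like this sketch. It assumes the widely used `K8sRequiredLabels` constraint template is installed—verify which policy components ship with your Config Controller version:

```yaml
# Hypothetical sketch: requires the Gatekeeper K8sRequiredLabels constraint
# template; constraint name and label key are placeholders.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: buckets-must-be-labeled
spec:
  match:
    kinds:
    - apiGroups: ["storage.cnrm.cloud.google.com"]
      kinds: ["StorageBucket"]
  parameters:
    labels:
    - key: "cost-center"             # every bucket manifest must carry this label
```

A `StorageBucket` manifest without the required label is rejected at admission, before any Google Cloud API call is made.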
6) Multi-environment standardization (dev/stage/prod)
- Problem: Environments drift because each is provisioned differently.
- Why it fits: Same manifest templates applied to each environment’s Config Controller.
- Scenario: A standardized set of resources (topics, service accounts, buckets) is applied with environment-specific parameters.
7) Namespace-based tenancy for shared platform services
- Problem: Multiple teams share a platform but need isolation and least privilege.
- Why it fits: Kubernetes namespaces + RBAC segment who can manage which resources.
- Scenario: Each team has a namespace with only the CRDs and permissions they need.
8) Infrastructure provisioning as part of CI pipelines (Kubernetes-native)
- Problem: CI needs to provision ephemeral environments quickly and consistently.
- Why it fits: CI can apply manifests to Config Controller using standard Kubernetes authentication.
- Scenario: PR validation creates a short-lived Pub/Sub topic and bucket; cleanup deletes Kubernetes resources which deletes cloud resources.
9) Standardizing IAM and service accounts declaratively
- Problem: IAM changes are hard to track and review.
- Why it fits: IAM bindings can be managed as CRs (verify which IAM resources are supported in Config Connector).
- Scenario: Platform team manages project-level IAM bindings and workload identities through manifests reviewed in Git.
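A hedged sketch of a declarative IAM binding using Config Connector's `IAMPolicyMember` resource follows; the member, role, and project ID are placeholders, and you should verify supported IAM kinds in the Config Connector reference:

```yaml
# Hypothetical sketch: member, role, and project ID are placeholders.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: app-sa-pubsub-publisher
spec:
  member: serviceAccount:app-sa@my-project-id.iam.gserviceaccount.com
  role: roles/pubsub.publisher
  resourceRef:                       # the resource the binding attaches to
    kind: Project
    external: projects/my-project-id
```

Because the binding is a Kubernetes object, it flows through the same Git review and reconciliation loop as any other resource.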
10) Managing shared networking and security resources
- Problem: Networking resources must follow strict patterns and be repeatable.
- Why it fits: Declarative definitions reduce variability.
- Scenario: Provision Cloud DNS zones, firewall rules, or load balancing components (only if supported by Config Connector resource coverage).
11) Controlled resource lifecycle tied to Kubernetes objects
- Problem: Orphaned resources accumulate and increase cost.
- Why it fits: Deleting the Kubernetes object can delete the cloud resource (subject to deletion policy and finalizers).
- Scenario: Delete a `StorageBucket` CR and the bucket is deleted, reducing leftovers.
12) Platform blueprint as reusable manifests
- Problem: New teams need a secure “starter kit” of cloud resources.
- Why it fits: Provide a repo with pre-approved manifests and policies.
- Scenario: A new microservice team applies a “blueprint” directory and gets standardized storage, topics, and IAM bindings.
6. Core Features
Feature availability can vary by release and configuration. Always cross-check with the official Config Controller and Config Connector docs.
6.1 Managed Config Controller instance
- What it does: Provides a Google-managed Kubernetes control plane dedicated to configuration management.
- Why it matters: You avoid managing your own “management cluster” lifecycle.
- Practical benefit: Faster onboarding and fewer operational tasks compared to self-managed Kubernetes.
- Limitations/caveats: It is not intended as a general workload cluster; treat it as a control plane for infrastructure.
6.2 Config Connector CRDs for Google Cloud resources
- What it does: Exposes Google Cloud resources as Kubernetes APIs (CRDs).
- Why it matters: Infrastructure can be managed with the same tooling and patterns as Kubernetes resources.
- Practical benefit: Declarative provisioning with `kubectl apply`, status conditions, and event-driven troubleshooting.
- Limitations/caveats: Not all Google Cloud services/resources may be supported. Coverage varies by Config Connector version—verify supported resources in docs:
- https://cloud.google.com/config-connector/docs/reference/overview
6.3 Continuous reconciliation
- What it does: Continuously compares desired state (Kubernetes objects) to actual state (Google Cloud) and reconciles differences.
- Why it matters: Reduces drift and helps enforce standards.
- Practical benefit: Infrastructure stays consistent even with manual changes (within what the controllers can correct).
- Limitations/caveats: Some changes may be rejected by APIs, blocked by IAM, or require replacement; reconciliation might loop on errors until fixed.
6.4 Kubernetes RBAC and namespace isolation
- What it does: Uses Kubernetes-native RBAC to control who can create/modify which objects.
- Why it matters: Enables platform teams to delegate safely.
- Practical benefit: Fine-grained access control for different teams/environments.
- Limitations/caveats: RBAC controls Kubernetes objects; the controller’s Google Cloud IAM still determines what can actually be created in Google Cloud.
6.5 Status reporting and observability via Kubernetes objects
- What it does: Writes reconcile status, conditions, and events into Kubernetes resources.
- Why it matters: Troubleshooting becomes `kubectl describe` instead of digging through ad-hoc scripts.
- Practical benefit: Faster diagnosis of IAM/API/quota errors.
- Limitations/caveats: For deeper root cause, you still need Google Cloud logs and audit logs.
6.6 Integration with Google Cloud IAM
- What it does: Controllers authenticate to Google Cloud APIs using a configured identity (service account).
- Why it matters: Least privilege can be implemented centrally.
- Practical benefit: One well-governed identity can manage approved resource types.
- Limitations/caveats: Mis-scoped permissions are a common failure point; start narrow and expand based on observed needs.
6.7 Deletion lifecycle control (finalizers, deletion policies)
- What it does: Ensures safe deletion of resources by coordinating Kubernetes deletion with cloud deletion.
- Why it matters: Prevents accidental orphaning or unintended deletion.
- Practical benefit: Predictable cleanup tied to Kubernetes workflows.
- Limitations/caveats: Deletion behavior can be controlled by annotations/policies; verify current behavior in Config Connector docs.
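For instance, Config Connector supports a deletion-policy annotation that detaches the cloud resource instead of deleting it. The sketch below uses the documented `abandon` value; the resource names are placeholders, and you should confirm current annotation behavior in the Config Connector docs:

```yaml
# Hypothetical sketch: "abandon" keeps the bucket if the CR is deleted.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: critical-data-bucket
  annotations:
    cnrm.cloud.google.com/deletion-policy: abandon   # delete the CR, keep the bucket
spec:
  location: US-CENTRAL1
```

This pattern is useful for stateful resources where accidental manifest deletion must not destroy data.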
6.8 Compatibility with GitOps workflows (pattern)
- What it does: Enables a GitOps model where Git is the source of truth and the cluster reconciles to it.
- Why it matters: Repeatable, auditable changes.
- Practical benefit: PR reviews, rollbacks, and environment promotion.
- Limitations/caveats: The GitOps engine (e.g., Config Sync or third-party controllers) is a separate component. If you rely on built-in integration, verify in official docs what is included with Config Controller.
7. Architecture and How It Works
High-level architecture
At a high level, Config Controller works like this:
- You create a Config Controller instance in a Google Cloud project and region.
- You connect to its Kubernetes API endpoint (typically with `kubectl`).
- You apply Kubernetes manifests that include Config Connector CRDs (for example, a `StorageBucket` object).
- Config Connector controllers watch these objects and call the relevant Google Cloud APIs.
- The controller updates Kubernetes object status fields (conditions, errors, reconcile state).
- If drift occurs, reconciliation attempts to bring the real resource back to the declared desired state.
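Status updates surface as standard Kubernetes conditions on each custom resource. A successfully reconciled resource might report something like the following sketch (the exact reason and message strings vary by Config Connector version):

```yaml
# Illustrative status block written back by the controller (shape may vary).
status:
  conditions:
  - type: Ready
    status: "True"
    reason: UpToDate
    message: The resource is up to date
```

Checking these conditions (for example with `kubectl describe` or `kubectl wait --for=condition=Ready`) is the primary way to confirm reconciliation succeeded.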
Request/data/control flow
- Control flow: `kubectl apply` → Kubernetes API server → etcd/state → controller watch loop → Google Cloud API calls → status updates.
- Data plane: The actual data (bucket contents, Pub/Sub messages) is not part of Config Controller. Config Controller manages control-plane resources.
Integrations with related services
Common integrations in Google Cloud environments include:
- IAM: service accounts and roles used for resource creation.
- Cloud Audit Logs: track API calls made by the Config Connector identity.
- Cloud Logging/Monitoring: observe controller logs and resource-level operations (exact integration depends on managed environment and configured sinks).
- VPC networking: endpoint access patterns (private/public) depend on instance configuration—verify networking options in docs.
Dependency services
Config Controller depends on:
- Google Cloud APIs for the resources you manage (e.g., storage.googleapis.com, pubsub.googleapis.com).
- IAM permissions for the controller identity.
- Underlying managed Kubernetes components (operated by Google).
Security/authentication model
- Admin access: typically via Google Cloud IAM to get cluster credentials and Kubernetes RBAC to perform actions in the cluster.
- Controller access to Google Cloud: a Google Cloud identity (service account) authorized with least privilege for the resources you want to manage.
- Audit: Google Cloud audit logs record API calls made by the controller identity; Kubernetes audit logs availability depends on the managed offering—verify.
Networking model
Config Controller is created in a region and typically attaches to a VPC/subnet you provide or select, depending on provisioning mode. Key networking considerations:
- The controller must reach Google Cloud APIs (either through public endpoints or Private Google Access patterns).
- Your operator machine/CI runner must reach the Kubernetes API endpoint (possibly via VPN, bastion, or private access).
Because networking options can evolve, verify current networking requirements:
- https://cloud.google.com/config-controller/docs
Monitoring/logging/governance considerations
- Use Kubernetes events and object status for first-level troubleshooting.
- Use Google Cloud Audit Logs to understand what API calls were made and denied.
- Consider organization policies, quota management, and standardized labels to keep governance consistent.
Simple architecture diagram (Mermaid)
flowchart LR
Dev[Engineer / CI] -->|kubectl apply| K8sAPI[Kubernetes API<br/>Config Controller]
K8sAPI --> CC[Config Connector<br/>controllers]
CC -->|Google Cloud APIs| GCP[(Google Cloud Resources)]
CC -->|status/conditions| K8sAPI
GCP --> Audit[Cloud Audit Logs]
Production-style architecture diagram (Mermaid)
flowchart TB
subgraph Org[Google Cloud Organization]
subgraph NetProject[Shared VPC / Network Project]
VPC[VPC + Subnets]
VPN[VPN/Interconnect]
end
subgraph PlatformProject[Platform Project]
CCInst[Config Controller Instance<br/>Regional]
RBAC[Kubernetes RBAC<br/>Namespaces]
SA["Controller Service Account<br/>(least privilege)"]
Logs[Cloud Logging / Monitoring]
end
subgraph AppProjects[Application Projects]
Resources[(Buckets, Topics,<br/>IAM, etc.)]
end
end
Dev2[Platform Engineers / CI] -->|Private access via VPN| CCInst
CCInst --> RBAC
CCInst -->|Uses SA| SA
SA -->|Calls APIs| Resources
Resources --> Logs
CCInst --> Logs
VPC --- CCInst
VPN --- VPC
8. Prerequisites
Account/project requirements
- A Google Cloud account with a billing-enabled project.
- Permissions to enable APIs, create a Config Controller instance, and manage IAM.
Permissions / IAM roles
The exact roles can vary based on organizational policies and how Config Controller is provisioned. Typically you need:
- Project-level permissions to create/configure Config Controller.
- Permissions to enable required APIs.
- Permissions to grant IAM roles to the controller’s service account.
Start by reviewing official prerequisites:
- https://cloud.google.com/config-controller/docs/quickstart (verify exact URL and steps)
Billing requirements
- Billing must be enabled for the project.
- You will pay for:
- Config Controller managed components (pricing model depends on current Anthos/Config Controller packaging).
- Any Google Cloud resources created (buckets, topics, etc.).
- Network egress, logging, and other indirect usage.
CLI/SDK/tools needed
- `gcloud` (Google Cloud CLI): https://cloud.google.com/sdk/docs/install
- `kubectl` (often installed via `gcloud components install kubectl` or via your package manager)
- Git (optional but recommended for GitOps workflows)
You may also need a specific gcloud component for Anthos/Config Controller commands depending on the current tooling. Install/update components as directed in the official docs.
Region availability
- Config Controller is regional; available regions may be limited.
- Verify current regions in official docs:
- https://cloud.google.com/config-controller/docs
Quotas/limits
Potential quotas/limits to check (varies by project and org policy):
- API rate limits for managed services (Storage, Pub/Sub, IAM).
- Project resource limits (number of buckets, topics, service accounts).
- Any Config Controller instance limits per project/region (verify in docs).
Prerequisite services/APIs
The required APIs depend on what you create, but typically include:
- Config Controller / Anthos-related APIs (verify exact API names in docs)
- Kubernetes Engine / GKE-related APIs (if required by provisioning flow)
- Resource Manager API
- IAM API
- Service Usage API
- APIs for the resources you will create (e.g., Cloud Storage API, Pub/Sub API)
9. Pricing / Cost
Config Controller pricing can change depending on Google Cloud packaging (for example, how it is bundled with Anthos offerings). Do not rely on third-party estimates.
Official pricing references
- Start with the Config Controller docs, which link to the current billing model:
- https://cloud.google.com/config-controller/docs
- Anthos pricing (often relevant for configuration management services):
- https://cloud.google.com/anthos/pricing
- Google Cloud Pricing Calculator:
- https://cloud.google.com/products/calculator
Pricing dimensions (what you’re typically billed for)
You should plan for these cost categories:
- Config Controller managed service charges
  - Could be billed per instance, per vCPU, per hour, or under an Anthos subscription model.
  - Verify current SKUs and billing dimensions in official pricing pages and your Cloud Billing SKU list.
- Underlying compute/networking
  - Even when Google manages the control plane, there may be billable infrastructure in your project (depending on how the service is implemented).
  - Networking (NAT, VPN, egress) can add cost.
- Resources you create
  - Buckets, Pub/Sub topics/subscriptions, IAM, logging sinks, etc. are billed by their own service pricing.
- Logging/monitoring
  - Cloud Logging ingestion/retention and Cloud Monitoring metrics can create ongoing costs, especially at scale.
- Network egress
  - Egress from the region to the internet or other regions can be a meaningful cost driver.
  - Calls to Google APIs are typically not billed as egress when staying within Google’s network, but configuration and network path matter—verify for your design.
Free tier
- Config Controller itself typically does not have a “free tier” in the way some managed services do.
- Some resources you create may have free tiers (e.g., limited Cloud Logging, Pub/Sub, or Storage usage), but these are service-specific and subject to change. Always verify current free-tier limits in official docs.
Key cost drivers
- Number of Config Controller instances (per environment, per region, per team).
- Controller activity volume (number of managed resources and change frequency).
- Logging verbosity and retention.
- Network design (private access, NAT gateways, cross-region traffic).
- The cost of actual managed resources (often dominates total spend).
Hidden or indirect costs to watch
- Cloud Logging ingestion from controllers and audit logs.
- NAT gateway costs if you route outbound traffic through Cloud NAT.
- Over-provisioning environments: multiple config planes per team can add overhead.
- Orphaned resources if deletion is blocked by policies/finalizers.
How to optimize cost
- Use one instance per environment (dev/stage/prod) instead of per team unless required.
- Reduce noisy logs and set retention appropriately (balance compliance needs).
- Use labels and budgets; alert on unexpected resource creation.
- Implement policy guardrails to prevent expensive resources from being provisioned unintentionally.
- Prefer regional designs that avoid cross-region egress where possible.
Example low-cost starter estimate (no fabricated prices)
A low-cost starter setup typically includes:
- 1 Config Controller instance in a single region
- Managing a small number of low-cost resources (e.g., a couple of buckets and topics)
- Minimal logging retention
To estimate accurately:
1. Identify whether Config Controller is billed per instance/hour or via Anthos subscription in your org.
2. Add costs for the resources you plan to create (Storage, Pub/Sub).
3. Include logging ingestion and any NAT/egress.
Use:
- Pricing calculator: https://cloud.google.com/products/calculator
- Billing export to BigQuery for ongoing cost breakdown (recommended in production).
Example production cost considerations
In production, the main drivers are usually:
- Multiple environments/regions (2–6+ instances)
- Hundreds to thousands of managed resources
- High change frequency (CI-driven)
- Governance overhead (audit logs, SIEM export)
- Private networking (VPN/Interconnect, NAT)
A practical approach:
- Start with one instance per environment.
- Track total managed resources count.
- Review logging costs monthly.
- Set budget alerts per project and per environment.
10. Step-by-Step Hands-On Tutorial
This lab provisions a Config Controller instance and uses it to create Google Cloud resources declaratively using Config Connector CRDs.
Important: Exact commands and API names can evolve. If a command differs in your environment, follow the current official quickstart: https://cloud.google.com/config-controller/docs
Objective
- Create a Config Controller instance in a chosen region.
- Connect to it using `kubectl`.
- Create a Cloud Storage bucket and a Pub/Sub topic using Config Connector custom resources.
- Validate that resources exist in Google Cloud.
- Clean up to avoid ongoing cost.
Lab Overview
You will:
1. Prepare a project and enable APIs.
2. Create networking prerequisites (if required).
3. Create a Config Controller instance.
4. Configure permissions for resource creation.
5. Apply Config Connector manifests to create resources.
6. Validate and troubleshoot.
7. Clean up resources and the instance.
Step 1: Set up your environment (project, auth, defaults)
1) Install and initialize the Google Cloud CLI:
- Install: https://cloud.google.com/sdk/docs/install
- Authenticate:
gcloud auth login
gcloud auth application-default login
2) Choose (or create) a project:
export PROJECT_ID="cc-lab-$(date +%Y%m%d-%H%M%S)"
gcloud projects create "$PROJECT_ID"
gcloud billing projects link "$PROJECT_ID" --billing-account="YOUR_BILLING_ACCOUNT_ID"
If you already have a billing-enabled project, set it:
export PROJECT_ID="YOUR_PROJECT_ID"
gcloud config set project "$PROJECT_ID"
3) Pick a region supported by Config Controller (verify supported regions in docs). Example:
export REGION="us-central1"
Expected outcome: gcloud config get-value project returns your project, and billing is enabled.
Step 2: Enable required APIs
Enable baseline APIs (some may already be enabled). API requirements vary; follow the official docs if additional services are required:
gcloud services enable \
serviceusage.googleapis.com \
cloudresourcemanager.googleapis.com \
iam.googleapis.com \
storage.googleapis.com \
pubsub.googleapis.com
Now enable the API(s) specifically required for Config Controller. The exact service name can change; consult the official docs for the current list: – https://cloud.google.com/config-controller/docs
If the docs specify an API like configcontroller.googleapis.com (example only), enable it:
# Example: verify the exact API name in official docs
gcloud services enable configcontroller.googleapis.com
Expected outcome: gcloud services list --enabled includes the required services; no errors like “SERVICE_DISABLED”.
Step 3: Create or choose networking (VPC/subnet) as required
Config Controller provisioning typically requires selecting or creating a VPC and subnet (details vary by current product behavior). If you need a dedicated VPC, create one:
export NETWORK="cc-network"
export SUBNET="cc-subnet"
export SUBNET_RANGE="10.10.0.0/24"
gcloud compute networks create "$NETWORK" --subnet-mode=custom
gcloud compute networks subnets create "$SUBNET" \
--network="$NETWORK" \
--region="$REGION" \
--range="$SUBNET_RANGE"
If your org requires Private Google Access for API connectivity, enable it on the subnet:
gcloud compute networks subnets update "$SUBNET" \
--region="$REGION" \
--enable-private-ip-google-access
Expected outcome: VPC and subnet exist and are visible in:
gcloud compute networks list
gcloud compute networks subnets list --regions="$REGION"
Step 4: Create the Config Controller instance
Follow the official creation method for your environment (Console or gcloud). Many environments use a gcloud command group for Config Controller under Anthos tooling.
1) Ensure your gcloud components are updated:
gcloud components update
2) Create the instance using the official command pattern from docs: – https://cloud.google.com/config-controller/docs
Example pattern (command and flags may differ; verify in official docs):
export CC_NAME="cc-lab"
# Example only: verify exact command group and flags
gcloud anthos config controller create "$CC_NAME" \
--location="$REGION" \
--network="$NETWORK" \
--subnet="$SUBNET"
3) Wait until the instance is ready. There is typically a “describe” or “list” command; verify the right one in docs:
# Example only
gcloud anthos config controller describe "$CC_NAME" --location="$REGION"
Expected outcome: The instance shows a READY/RUNNING state.
Step 5: Get cluster credentials and connect with kubectl
Fetch credentials (exact command depends on tooling; verify in docs):
# Example only
gcloud anthos config controller get-credentials "$CC_NAME" --location="$REGION"
Test connectivity:
kubectl version
kubectl get nodes
kubectl get namespaces
Expected outcome:
- `kubectl get namespaces` returns standard namespaces.
- You can list resources without authentication errors.
Common issue: If you see authorization errors, verify you have the correct Google Cloud IAM permissions and Kubernetes RBAC.
Step 6: Confirm Config Connector CRDs are available
Config Controller’s main value is Config Connector CRDs. Confirm CRDs exist:
kubectl get crds | grep -i cnrm | head
You can also check for specific CRDs (examples; actual names depend on installed version):
kubectl get crds | grep -E "storage|pubsub" | head
Expected outcome: You see CRDs that indicate Config Connector is installed (often CRDs containing cnrm.cloud.google.com).
If you do not see CRDs, check the Config Controller docs and the instance health status. Do not proceed until CRDs are present.
Step 7: Configure IAM for the controller identity (least privilege)
Config Connector needs permissions to create resources. The identity and recommended setup depend on your Config Controller provisioning model.
You must determine:
– Which Google Cloud service account the controller uses
– How to grant it permissions (project-level roles vs fine-grained roles)
Follow the official guidance:
– https://cloud.google.com/config-connector/docs/how-to/install-upgrade-uninstall
– https://cloud.google.com/config-controller/docs
A common pattern:
– Use a dedicated service account, then grant roles needed to manage target resources (Storage, Pub/Sub, etc.).
Example (illustrative; verify controller SA and roles):
export CC_SA="config-controller-sa@$PROJECT_ID.iam.gserviceaccount.com"
# Create a dedicated service account (if your model requires it)
gcloud iam service-accounts create config-controller-sa \
--display-name="Config Controller SA"
# Grant only needed roles for this lab (Storage + Pub/Sub)
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member="serviceAccount:$CC_SA" \
--role="roles/storage.admin"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member="serviceAccount:$CC_SA" \
--role="roles/pubsub.admin"
Expected outcome: IAM policy bindings are added. In production, prefer narrower roles (and sometimes resource-level IAM instead of project-wide roles).
Step 8: Create a namespace for the lab and apply resources
1) Create a namespace:
kubectl create namespace infra-demo
Expected outcome: Namespace infra-demo exists.
2) Create a Cloud Storage bucket using Config Connector.
Bucket names must be globally unique. Choose a unique name:
export BUCKET_NAME="cc-bucket-$PROJECT_ID"
Apply a Config Connector resource. The exact API version/kind can vary by Config Connector version; verify the correct manifest format in the reference:
– https://cloud.google.com/config-connector/docs/reference/overview
– Storage bucket resource reference: verify in docs.
Example manifest (verify apiVersion/kind fields in your environment before applying):
cat > bucket.yaml <<EOF
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
name: ${BUCKET_NAME}
namespace: infra-demo
spec:
location: ${REGION}
uniformBucketLevelAccess: true
EOF
kubectl apply -f bucket.yaml
3) Create a Pub/Sub topic:
export TOPIC_NAME="cc-topic-$PROJECT_ID"
cat > topic.yaml <<EOF
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
name: ${TOPIC_NAME}
namespace: infra-demo
spec: {}
EOF
kubectl apply -f topic.yaml
Expected outcome:
– Kubernetes objects are created immediately.
– Within a short time, their status should indicate reconciliation succeeded.
Step 9: Watch reconciliation status
Check the objects:
kubectl -n infra-demo get storagebucket
kubectl -n infra-demo get pubsubtopic
Describe them to see conditions/events:
kubectl -n infra-demo describe storagebucket "${BUCKET_NAME}"
kubectl -n infra-demo describe pubsubtopic "${TOPIC_NAME}"
Expected outcome: Look for conditions such as “Ready=True” (exact condition names vary). If not ready, you should see an error message indicating missing permissions, invalid fields, or quota issues.
Validation
Validate in Google Cloud that resources exist.
1) Cloud Storage bucket exists:
gcloud storage buckets describe "gs://${BUCKET_NAME}"
2) Pub/Sub topic exists:
gcloud pubsub topics describe "${TOPIC_NAME}"
Also validate that the resources are managed by Config Connector by checking annotations/labels in the Kubernetes objects:
kubectl -n infra-demo get storagebucket "${BUCKET_NAME}" -o yaml | sed -n '1,120p'
kubectl -n infra-demo get pubsubtopic "${TOPIC_NAME}" -o yaml | sed -n '1,120p'
Expected outcome: The resources exist in Google Cloud and show reconciliation metadata in Kubernetes.
Troubleshooting
Common issues and fixes:
1) CRDs not found / “no matches for kind …”
– Cause: Config Connector CRDs not installed/ready.
– Fix:
– Confirm Config Controller instance is healthy/ready.
– Re-check CRDs: kubectl get crds | grep cnrm
– Follow official docs to confirm components.
2) Permission denied errors in status/conditions
– Cause: Controller identity lacks required IAM roles.
– Fix:
– Identify the exact service account used by the controller (per docs).
– Grant least-privilege roles for the resource type.
– Re-check after reconciliation.
3) Bucket name already exists
– Cause: Bucket names must be globally unique, and the chosen name is already taken.
– Fix: Change metadata.name to a unique value.
4) Org Policy blocks resource creation
– Cause: Organization Policy Service constraints (allowed locations, restricted services, etc.).
– Fix:
– Review org policies in the project/folder/org.
– Adjust policy or choose compliant settings (e.g., allowed regions).
5) Quota exceeded
– Cause: Service quotas exceeded for Storage, Pub/Sub, etc.
– Fix:
– Check quotas in Cloud Console.
– Request increases or reduce resources.
6) Networking / API access issues
– Cause: The instance cannot reach Google APIs due to restricted egress.
– Fix:
– Verify VPC/subnet configuration.
– Ensure Private Google Access (if required).
– Follow documented networking prerequisites for Config Controller.
Cleanup
To avoid ongoing costs, delete the resources and the Config Controller instance.
1) Delete Kubernetes-managed resources:
kubectl delete -f topic.yaml
kubectl delete -f bucket.yaml
kubectl delete namespace infra-demo
2) Verify they are gone from Google Cloud:
gcloud pubsub topics list | grep "${TOPIC_NAME}" || true
gcloud storage buckets describe "gs://${BUCKET_NAME}" 2>/dev/null || echo "Bucket deleted"
3) Delete the Config Controller instance (command varies; verify in docs):
# Example only
gcloud anthos config controller delete "$CC_NAME" --location="$REGION"
4) Optional: delete the network resources if you created a dedicated VPC:
gcloud compute networks subnets delete "$SUBNET" --region="$REGION" --quiet
gcloud compute networks delete "$NETWORK" --quiet
5) Optional: delete the project (fastest way to ensure no lingering resources):
gcloud projects delete "$PROJECT_ID" --quiet
11. Best Practices
Architecture best practices
- Use a dedicated “config plane” (Config Controller) separate from application clusters to reduce blast radius.
- Design for environment isolation: at minimum, separate dev and prod control planes.
- Use a clear resource hierarchy (org → folders → projects) and decide where resources should live before modeling them as CRs.
IAM/security best practices
- Least privilege for controller identity:
- Prefer narrowly scoped roles over broad admin roles.
- Consider splitting identities by namespace or environment when feasible (verify supported patterns).
- Use Kubernetes RBAC to control who can apply/change manifests.
- Avoid giving developers direct project Owner access; use GitOps with controlled approvals.
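As one illustration of restrictive Kubernetes RBAC, a namespaced Role plus RoleBinding can limit a team to a small set of Config Connector resource types in its own namespace. This is a sketch only: the namespace, group name, and the exact API groups/resources available depend on your environment, so verify the installed CRDs before applying.

```yaml
# Hypothetical example: restrict a team to selected Config Connector
# resources in its own namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-payments-infra-editor
  namespace: team-payments
rules:
  # Allow managing only the resource types this team needs.
  - apiGroups: ["storage.cnrm.cloud.google.com", "pubsub.cnrm.cloud.google.com"]
    resources: ["storagebuckets", "pubsubtopics"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-payments-infra-editor
  namespace: team-payments
subjects:
  - kind: Group
    name: payments-devs@example.com   # hypothetical group identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-payments-infra-editor
  apiGroup: rbac.authorization.k8s.io
```

Binding to a Role (not a ClusterRole) keeps the grant namespace-scoped, which matches the namespace-per-team pattern described above.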
Cost best practices
- Start with one instance per environment, not per team.
- Control resource sprawl with:
- policy checks (where supported)
- budgets and alerts
- labels and inventory reporting
- Minimize excessive logging; export audit logs thoughtfully.
Performance best practices
- Keep manifests clean and consistent; avoid frequent no-op changes that trigger reconciliation churn.
- Batch related changes to reduce churn and API rate usage.
- Watch for API rate limits if managing very large fleets—design with quotas in mind.
Reliability best practices
- Treat Git (if used) as the source of truth; implement PR reviews and required approvals.
- Use progressive rollout patterns for sensitive infrastructure changes.
- Establish clear rollback procedures (revert commit / re-apply previous manifests).
Operations best practices
- Standardize:
- naming conventions (resource names, namespaces)
- labels (cost center, environment, owner)
- repository structure (per environment, per team)
- Monitor:
- reconcile failures
- repeated error events
- audit log anomalies (unexpected deletions, denied operations)
- Document break-glass procedures (who can bypass GitOps in emergencies).
Governance/tagging/naming best practices
- Use consistent labels/annotations in Kubernetes manifests to map to business ownership.
- If you rely on cloud labels, ensure the Config Connector resource supports them and that they’re enforced by policy.
- Align naming with DNS/project conventions and org policy constraints.
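A minimal sketch of such a labeling convention on a Config Connector manifest (all keys and values here are hypothetical; Config Connector propagates Kubernetes labels to cloud labels for many, but not all, resource types — verify per type):

```yaml
# Illustrative label convention; keys/values are examples only.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: orders-events
  namespace: team-payments
  labels:
    environment: prod
    cost-center: cc-1234
    owner: team-payments
spec: {}
```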
12. Security Considerations
Identity and access model
Config Controller introduces two layers of access control:
1) Kubernetes access – Who can connect and apply manifests is controlled by Kubernetes authentication + RBAC.
2) Google Cloud API access – What can actually be created/changed is controlled by the controller’s Google Cloud identity (service account) and its IAM roles.
Security design goal:
– Developers get Kubernetes RBAC permissions in limited namespaces.
– Controllers get Google Cloud IAM permissions that are least-privilege and environment-scoped.
Encryption
- Google Cloud encrypts data at rest by default for most managed services.
- For specific services (Storage, etc.), you may need CMEK or retention policies.
- Use policies (where supported) to require encryption settings and block non-compliant resources.
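For example, a Config Connector StorageBucket can reference a KMS key to require CMEK. This is a sketch under the assumption that your Config Connector version exposes an encryption block with a key reference; verify the exact field names in the resource reference, and note that the bucket/key names below are hypothetical:

```yaml
# Illustrative CMEK configuration; verify field names for your
# Config Connector version before applying.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: cc-bucket-cmek-example
  namespace: infra-demo
spec:
  location: us-central1
  uniformBucketLevelAccess: true
  encryption:
    kmsKeyRef:
      name: my-crypto-key   # hypothetical KMSCryptoKey resource
```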
Network exposure
- Prefer private connectivity to the Kubernetes API endpoint where possible.
- Restrict who can reach the API endpoint (VPN/bastion/allowlists depending on product options).
- Ensure the controller has secure access to required Google APIs.
Secrets handling
- Avoid storing secrets in plain manifests.
- Prefer Google Cloud secret services (e.g., Secret Manager) and reference patterns supported by your runtime/tooling.
- If you store Kubernetes Secrets in the config plane, treat that cluster as sensitive and lock down RBAC tightly.
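As a sketch of the Secret Manager preference above, the secret container itself can be declared via Config Connector while the sensitive payload is added out of band, so nothing secret ever appears in the manifest (resource name is hypothetical; verify the schema for your Config Connector version):

```yaml
# Illustrative: declares the secret container only; the payload
# (secret version) is created separately, keeping manifests clean.
apiVersion: secretmanager.cnrm.cloud.google.com/v1beta1
kind: SecretManagerSecret
metadata:
  name: app-db-password
  namespace: infra-demo
spec:
  replication:
    automatic: true
```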
Audit/logging
- Use Cloud Audit Logs to track the controller identity’s API calls.
- Track who applied Kubernetes manifests (Kubernetes audit logs availability depends on the managed environment—verify).
- Export audit logs to a central SIEM in regulated environments.
Compliance considerations
- Enforce consistent labels, locations, retention, and encryption to meet compliance.
- Validate resource creation against org policies (some will block creation regardless of Config Controller).
- Document change management workflows (PR approvals, ticket links, CAB processes) around infrastructure changes.
Common security mistakes
- Granting the controller service account broad roles like Editor or Owner.
- Allowing direct kubectl apply from developer laptops without approvals.
- Using one Config Controller instance to manage both dev and prod with the same identity.
- Not monitoring reconcile failures (which can hide permission or policy issues).
Secure deployment recommendations
- Separate instances per environment.
- Dedicated controller identities per environment with minimal roles.
- Restrictive Kubernetes RBAC: namespaces per team, minimal verbs/resources.
- Network controls around Kubernetes API endpoint.
- Centralized audit log export and alerting.
13. Limitations and Gotchas
Treat this section as a checklist to validate in your environment, because service behavior and supported resources evolve.
Known limitations (common)
- Not a general-purpose GKE cluster: don’t plan to run application workloads on Config Controller unless explicitly documented as supported.
- Supported resource coverage varies: Config Connector may not support every Google Cloud resource or field.
- Some updates may force replacement: certain resource changes require recreation rather than in-place updates.
- Eventual consistency: reconciliation is asynchronous; “apply” does not mean “resource is ready”.
Quotas
- Google Cloud service quotas for the resources you create apply (Storage, Pub/Sub, IAM, etc.).
- There may be limits on Config Controller instances per project/region—verify in official docs.
Regional constraints
- Config Controller is regional; not all regions may be supported.
- If you have data residency requirements, ensure the region is compliant.
Pricing surprises
- Multiple instances (per team/per env) can add up quickly.
- Logging and audit log export can become significant.
- NAT and private networking can add recurring costs.
Compatibility issues
- Your org policies may block the exact configuration you try to apply.
- Resource fields and API versions differ across Config Connector versions—always use the version documented for your installation.
Operational gotchas
- IAM is dual-layered (Kubernetes RBAC + Google Cloud IAM), and errors can be confusing until you know where to look.
- Some deletes may hang due to finalizers until underlying resources are deleted or the controller can complete cleanup.
- Drift correction can “undo” manual changes; establish a policy on whether manual changes are allowed.
Migration challenges
- Migrating from Terraform to Config Controller requires rethinking state ownership and drift behavior.
- You must plan how to import existing resources into Config Connector management (import support exists for some resources; verify in docs).
Vendor-specific nuances
- Config Controller is deeply integrated with Google Cloud IAM and APIs; it’s not a generic multi-cloud control plane by itself.
- For true multicloud resource management, you typically combine it with other tools or run separate patterns per cloud.
14. Comparison with Alternatives
Config Controller is one of several ways to manage infrastructure on Google Cloud. Here’s how it compares.
Key alternatives
- Config Connector on a self-managed Kubernetes cluster (e.g., GKE)
- Terraform (Google provider)
- Pulumi
- Crossplane
- Google Cloud Deployment Manager (legacy/less recommended in many modern setups; verify current status in official docs)
- Other clouds:
- AWS CloudFormation
- Azure ARM/Bicep
Comparison table
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Config Controller (Google Cloud) | Kubernetes-native Google Cloud infrastructure management with a managed config control plane | Managed control plane, Kubernetes RBAC/namespace patterns, continuous reconciliation | Resource coverage limits; adds a control plane; requires Kubernetes operational knowledge | You want Kubernetes-style infra + managed config plane on Google Cloud |
| Config Connector on GKE (self-managed) | Teams that want Config Connector but already run their own management cluster | Full control over cluster; integrate with existing tools | You manage the cluster lifecycle; more ops burden | You already operate GKE and want to host the controllers yourself |
| Terraform (Google provider) | Broad IaC adoption; strong ecosystem; multi-cloud | Plan/apply workflow, mature modules, wide service coverage | Drift requires refresh; state management complexity | You want broad coverage, established workflows, and strong multi-cloud story |
| Pulumi | IaC with general-purpose languages | Strong developer experience; code reuse | Still needs state; less “always reconciling” | You prefer programming-language IaC over YAML/CRDs |
| Crossplane | Kubernetes control plane for multi-cloud resources | Kubernetes-native + multi-cloud; composition patterns | Additional complexity; provider maturity varies | You want Kubernetes-style IaC across multiple clouds and services |
| AWS CloudFormation | AWS-native IaC | Deep AWS integration; change sets | AWS-only | You are primarily on AWS |
| Azure ARM/Bicep | Azure-native IaC | Deep Azure integration; native tooling | Azure-only | You are primarily on Azure |
15. Real-World Example
Enterprise example: regulated financial services platform team
Problem
A bank runs dozens of product teams on GKE. Each team needs Pub/Sub topics, buckets, service accounts, and IAM bindings. Manual provisioning through tickets is slow, and auditors require traceability and policy enforcement.
Proposed architecture
– One Config Controller instance per environment (dev/stage/prod) in dedicated platform projects.
– Namespace per product team (e.g., team-payments, team-risk).
– Kubernetes RBAC grants teams permissions only in their namespace.
– Controller service account has least-privilege IAM roles scoped per project/environment.
– Changes flow through Git PRs; approvals required for production.
– Cloud Audit Logs exported to a central logging project and SIEM.
Why Config Controller was chosen
– Platform already uses Kubernetes heavily; teams understand kubectl workflows.
– Continuous reconciliation reduces drift and “configuration entropy”.
– Strong separation between app clusters and infra provisioning plane improves security posture.
Expected outcomes
– Faster provisioning (minutes instead of days).
– Reduced audit findings due to standardized, reviewable changes.
– Fewer outages caused by inconsistent manual configuration.
Startup/small-team example: SaaS company standardizing environments
Problem
A startup deploys microservices on GKE and keeps creating inconsistent dev environments. Terraform exists but is managed by one person; drift and ad-hoc console edits cause recurring problems.
Proposed architecture
– One Config Controller instance for dev/stage, later another for prod.
– A simple Git repo containing:
  – bucket/topic manifests per service
  – common labels and naming conventions
– CI applies manifests to Config Controller.
– Minimal IAM roles for the controller identity.
Why Config Controller was chosen
– The team already uses Kubernetes daily; using the Kubernetes API for infra reduces context switching.
– Reconciliation reduces drift from manual changes.
Expected outcomes
– Consistent environments.
– Reduced dependency on one Terraform expert.
– Easier onboarding with “apply these manifests” workflows.
16. FAQ
1) Is Config Controller the same as GKE?
No. Config Controller provides a managed Kubernetes control plane focused on configuration management (primarily via Config Connector). It is not positioned as a general-purpose application cluster. Verify any workload-running support in official docs.
2) What is the relationship between Config Controller and Config Connector?
Config Controller hosts and operates Config Connector so you can manage Google Cloud resources as Kubernetes objects. Config Connector provides the CRDs and controllers.
3) Do I need to know Kubernetes to use Config Controller?
Practically, yes. You’ll use Kubernetes concepts like manifests, namespaces, RBAC, and kubectl.
4) Can Config Controller manage resources across multiple projects?
Often yes, depending on IAM setup and supported patterns, but it requires careful IAM and org policy design. Verify cross-project management support in official docs.
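One commonly documented pattern (verify support in your Config Connector version) is to point an individual resource, or a whole namespace, at a different target project via the cnrm.cloud.google.com/project-id annotation. The project ID below is hypothetical, and the controller identity still needs IAM permissions in that project:

```yaml
# Illustrative: create this topic in a project other than the
# namespace default (controller IAM must allow it there).
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: cross-project-topic
  namespace: infra-demo
  annotations:
    cnrm.cloud.google.com/project-id: other-team-project   # hypothetical project ID
spec: {}
```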
5) Does Config Controller support GitOps?
Config Controller supports GitOps as a pattern because desired state lives as Kubernetes resources. Whether a GitOps engine is bundled (e.g., Config Sync) depends on current packaging—verify in official docs.
6) What happens if someone changes a resource manually in the console?
If Config Connector can detect and reconcile the difference, it will attempt to restore the declared desired state. Some changes may not be reversible automatically.
7) How do I troubleshoot failed reconciliations?
Start with:
– kubectl describe <resource> to read conditions/events
– Cloud Audit Logs to see API errors/denials
– Verify IAM permissions, org policies, quotas, and manifest correctness
8) Can I import existing Google Cloud resources into Config Connector management?
Some resources can be imported/adopted, but coverage varies. Check Config Connector import/adoption docs for your resource type.
9) Is Config Controller appropriate for production?
Yes, it can be, especially for platform teams implementing controlled provisioning. Use environment isolation, least privilege, and strong audit controls.
10) How is access controlled?
Access is controlled by:
– Kubernetes RBAC (who can apply/modify manifests)
– Google Cloud IAM permissions of the controller identity (what can actually be created/modified)
11) What is the biggest operational risk?
Misconfigured IAM and unclear ownership boundaries (who is allowed to create what). Another risk is relying on unsupported resource types or fields.
12) Does Config Controller support policy enforcement?
Policy enforcement is typically done via Kubernetes admission control frameworks (often associated with Anthos Policy Controller). Whether it is built-in depends on your setup—verify in official docs.
13) How do I avoid accidental deletion of cloud resources?
Use:
– Strict RBAC (who can delete objects)
– GitOps approvals
– Deletion policies/finalizers where available
– Separate environments and strict permissions in production
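As a sketch of the deletion-policy point, Config Connector supports an "abandon" deletion policy annotation on many resource types: deleting the Kubernetes object then leaves the underlying cloud resource in place. Verify per-resource support in the Config Connector docs; the bucket name below is hypothetical.

```yaml
# With "abandon", deleting this Kubernetes object removes it from
# the cluster but leaves the bucket itself intact in Google Cloud.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: critical-data-bucket
  namespace: infra-demo
  annotations:
    cnrm.cloud.google.com/deletion-policy: abandon
spec:
  location: us-central1
```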
14) How do I estimate costs?
Use the official pricing page(s) and the Pricing Calculator:
– https://cloud.google.com/anthos/pricing
– https://cloud.google.com/products/calculator
Also include costs of resources you create and logging/egress.
15) Should I replace Terraform with Config Controller?
Not necessarily. Many organizations use both:
– Terraform for baseline foundation and shared modules
– Config Controller for Kubernetes-native workflows and ongoing reconciliation
Pick based on team skill sets, governance model, and resource coverage needs.
17. Top Online Resources to Learn Config Controller
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official documentation | Config Controller docs — https://cloud.google.com/config-controller/docs | Primary source for supported regions, setup, commands, and operational guidance |
| Official documentation | Config Connector docs — https://cloud.google.com/config-connector/docs | CRD reference, supported resources, IAM patterns, troubleshooting |
| Reference | Config Connector resource reference overview — https://cloud.google.com/config-connector/docs/reference/overview | Helps verify the correct apiVersion, kind, and fields for each resource |
| Pricing | Anthos pricing — https://cloud.google.com/anthos/pricing | Config Controller billing may be connected to Anthos packaging; verify current model here |
| Pricing tool | Google Cloud Pricing Calculator — https://cloud.google.com/products/calculator | Build estimates including resources created and indirect costs |
| Governance | Cloud Audit Logs overview — https://cloud.google.com/logging/docs/audit | Essential for tracking API calls made by controller identity |
| Tooling | Google Cloud SDK installation — https://cloud.google.com/sdk/docs/install | gcloud installation and auth setup |
| Community (trusted) | Kubernetes documentation — https://kubernetes.io/docs/home/ | Understanding manifests, RBAC, namespaces, and troubleshooting |
| Community (practical) | Gatekeeper/OPA docs — https://open-policy-agent.github.io/gatekeeper/ | Useful if you implement admission control policies (verify how it’s packaged with your environment) |
| Samples (verify source) | GoogleCloudPlatform GitHub — https://github.com/GoogleCloudPlatform | Search for Config Connector/Config Controller examples; validate they match your version |
18. Training and Certification Providers
| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
|---|---|---|---|---|
| DevOpsSchool.com | DevOps engineers, SREs, platform teams | DevOps, Kubernetes, cloud automation, GitOps foundations | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Beginners to intermediate engineers | DevOps tooling, CI/CD, SCM, automation | Check website | https://www.scmgalaxy.com/ |
| CLoudOpsNow.in | Cloud operations teams | CloudOps practices, operations, monitoring, cost awareness | Check website | https://www.cloudopsnow.in/ |
| SreSchool.com | SREs, reliability engineers | SRE practices, observability, incident management | Check website | https://www.sreschool.com/ |
| AiOpsSchool.com | Ops and platform teams | AIOps concepts, automation, monitoring analytics | Check website | https://www.aiopsschool.com/ |
19. Top Trainers
| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
|---|---|---|---|
| RajeshKumar.xyz | DevOps/Kubernetes/cloud training content (verify current offerings) | Beginners to advanced practitioners | https://rajeshkumar.xyz/ |
| devopstrainer.in | DevOps and CI/CD coaching (verify current offerings) | Engineers and teams | https://www.devopstrainer.in/ |
| devopsfreelancer.com | DevOps freelance services/training resources (verify current offerings) | Teams needing practical help | https://www.devopsfreelancer.com/ |
| devopssupport.in | DevOps support and training resources (verify current offerings) | Operations teams, DevOps engineers | https://www.devopssupport.in/ |
20. Top Consulting Companies
| Company Name | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
|---|---|---|---|---|
| cotocus.com | Cloud/DevOps consulting (verify service catalog) | Platform engineering, Kubernetes operations, automation | Designing a GitOps operating model; setting up governance and CI/CD around Config Controller | https://cotocus.com/ |
| DevOpsSchool.com | DevOps consulting and training (verify service catalog) | DevOps transformations, toolchain integration | Building an internal developer platform; implementing RBAC and IaC workflows | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting (verify service catalog) | CI/CD, automation, operations | CI integration for manifest deployment; cost and security reviews | https://www.devopsconsulting.in/ |
21. Career and Learning Roadmap
What to learn before Config Controller
- Google Cloud fundamentals:
- Projects, IAM, service accounts
- VPC basics and private access patterns
- Cloud Audit Logs and billing basics
- Kubernetes fundamentals:
- Manifests, resources, controllers
- Namespaces and RBAC
- kubectl troubleshooting (describe, events, conditions)
- Infrastructure-as-Code concepts:
- Desired state vs imperative changes
- Change management and approvals
- Git workflows (branches, PRs, code review)
What to learn after Config Controller
- Config Connector deep dive:
- supported resource coverage and limitations
- importing/adopting existing resources
- composition patterns (if available)
- Policy and governance:
- OPA/Gatekeeper concepts (if used)
- Organization Policy Service constraints
- GitOps operating model:
- repo structures, environment promotion
- CI pipelines and drift detection
- FinOps practices:
- budgets, alerts, labeling strategy
- cost attribution and chargeback/showback
Job roles that use it
- Platform Engineer / Platform Architect
- Cloud Engineer
- DevOps Engineer
- Site Reliability Engineer (SRE)
- Security Engineer (cloud governance)
- Infrastructure Engineer
Certification path (if available)
Config Controller itself is not typically a standalone certification topic, but it aligns strongly with:
– Google Cloud Professional Cloud Architect
– Google Cloud Professional Cloud DevOps Engineer
– Google Cloud Associate Cloud Engineer
Verify current Google Cloud certification paths:
– https://cloud.google.com/learn/certification
Project ideas for practice
- Build a “team namespace onboarding” workflow:
- Create namespace + RBAC + standard resources (bucket/topic/service accounts) as manifests.
- Implement a cost guardrail:
- Enforce required labels on all managed resources.
- Create a multi-environment setup:
- Separate Config Controller instances for dev and prod with different IAM scopes.
- Build a drift-response runbook:
- Detect reconcile failures and alert to Slack/email (via your monitoring stack).
22. Glossary
- Config Controller: A Google Cloud managed Kubernetes control plane for managing Google Cloud resources declaratively.
- Config Connector: A set of Kubernetes CRDs and controllers that represent Google Cloud resources and reconcile them via Google Cloud APIs.
- CRD (Custom Resource Definition): A Kubernetes extension mechanism that defines a new resource type.
- Custom Resource (CR): An instance of a CRD (e.g., StorageBucket).
- Reconciliation: The controller loop that continuously drives actual state toward desired state.
- Desired state: The configuration defined in manifests (what you want).
- Actual state: The real resources and settings currently in Google Cloud (what exists).
- Drift: A mismatch between desired state and actual state.
- Kubernetes RBAC: Role-Based Access Control in Kubernetes for controlling access to API resources.
- Namespace: A Kubernetes mechanism for grouping and isolating resources.
- Least privilege: Granting only the minimal permissions required to perform tasks.
- Cloud Audit Logs: Google Cloud logs that record administrative and data access operations on resources.
- Organization Policy: Governance constraints that can restrict resource creation or configurations across an org/folder/project.
- GitOps: Operational model where Git is the source of truth and automated reconciliation applies changes.
23. Summary
Config Controller is a Google Cloud service in the Distributed, hybrid, and multicloud category that provides a managed Kubernetes-based control plane for managing Google Cloud infrastructure declaratively—primarily through Config Connector CRDs and continuous reconciliation.
It matters because it brings Kubernetes-native workflows (manifests, RBAC, namespaces, reconciliation) to Google Cloud infrastructure management, helping platform teams improve standardization, reduce drift, and strengthen governance.
From a cost perspective, plan for the managed service cost model (often related to Anthos packaging), plus the cost of the resources you create, and indirect costs like logging and networking. From a security perspective, focus on least-privilege IAM for the controller identity and tight Kubernetes RBAC for users/CI.
Use Config Controller when you want Kubernetes-native IaC and a dedicated, managed config plane—especially in organizations standardizing operations across teams and environments. The best next learning step is to deepen your Config Connector resource coverage knowledge and adopt a GitOps workflow with strong policy and audit controls, starting from the official docs: https://cloud.google.com/config-controller/docs