1. What are finalizers, really?
When you delete a Kubernetes object (CRD, Namespace, Pod, etc.), Kubernetes does not delete it immediately.
Instead it:
- Sets `metadata.deletionTimestamp`
- Leaves the object in place
- Waits for all finalizers to be removed from `metadata.finalizers`
A finalizer is just a string tag like:
"kubernetes"(for namespaces)"customresourcecleanup.apiextensions.k8s.io"(for CRDs)"finalizer.keda.sh"(for KEDA)"foregroundDeletion"(for some resources)
It means:
“Before you remove this object from etcd, call the controller that owns this finalizer so it can clean stuff up (external state, dependent resources, DNS, volumes, etc). Once it’s done, it will remove its finalizer, and then K8s can truly delete the object.”
So if the responsible controller never does its job, or is gone, or is misconfigured → finalizer never gets removed → resource stays Terminating forever.
That’s exactly what you saw with:
- `customresourcecleanup.apiextensions.k8s.io` on your CRDs
- the `kubernetes` finalizer on the `keda` namespace
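For illustration, this is roughly what a stuck object looks like if you pull its YAML (a hypothetical KEDA ScaledObject named `my-app`; the field values are made up):

```bash
# Inspect an object that has been "deleted" but is still hanging around
kubectl get scaledobject my-app -n keda -o yaml
# The interesting part of the output looks roughly like this:
#   metadata:
#     deletionTimestamp: "2024-01-01T00:00:00Z"   # set the moment you ran `kubectl delete`
#     finalizers:
#     - finalizer.keda.sh                          # object stays until this list is empty
```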
2. Why does it sometimes take SO long (or never complete)?
Common reasons:
- Controller is gone or broken
  - You deleted the operator/Helm release before deleting the CRD or its instances
  - Now the CRD has finalizers, but the controller that should remove them no longer exists
  - Kubernetes waits… forever.
- Controller can’t reach its backend
  - For example, deletion wants to clean something in AWS, but AWS creds are broken
  - Cleanup fails, finalizer stays, resource never finishes deleting.
- Namespace-level finalizer (`kubernetes`)
  - When you delete a namespace, K8s tries to clean everything inside it
  - If any object is stuck (webhook, CRD instance, PVC, etc.), the namespace stays `Terminating` forever (see the check sketched at the end of this section).
- Buggy or over-eager operators
  - Some operators add finalizers everywhere but don’t handle edge cases well.
So the long waits / hangs are by design: Kubernetes is saying
“Before I forget this object, I must give controllers a chance to clean up external stuff.”
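For the namespace case specifically, the namespace’s own status usually names what it’s still waiting on. A minimal check, assuming the stuck `keda` namespace from this example (and `jq` installed):

```bash
# Why is this namespace stuck in Terminating?
# status.conditions typically name the resources or finalizers still being waited on
kubectl get namespace keda -o json | jq '.status.conditions'

# No jq? The same information is in the full object under status.conditions:
kubectl get namespace keda -o yaml
```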
3. Is there a “better” way? (In practice)
There’s no magic global flag like “ignore all finalizers”, but you can make this much less painful by following some practices:
a) Always uninstall the app/operator correctly
For things like KEDA, Prometheus, cert-manager, etc:
- Prefer Helm uninstall or vendor’s documented uninstall procedure.
- This gives the operator time to:
- Clean its CR instances
- Remove finalizers from them
- Let CRDs/namespace delete without hanging
Deleting CRDs or namespaces first and operators later is the most common way to get into trouble.
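As a hedged sketch of that ordering, using KEDA as the operator and a made-up `my-app` release that owns the ScaledObjects:

```bash
# 1. Remove the releases/workloads that created the CR instances while the
#    operator is still running, so it can do its cleanup and drop its finalizers
helm uninstall my-app -n my-app

# 2. Wait until the operator has finished cleaning up its custom resources
kubectl get scaledobjects -A   # should eventually come back empty

# 3. Only then uninstall the operator itself
helm uninstall keda -n keda
```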
b) Only patch finalizers as a last resort
What you did (patching `finalizers: []`) is the right last step, but it comes with trade-offs:
- You’re telling K8s: “Don’t wait for cleanup, just forget this resource.”
- If the controller was supposed to delete something external (buckets, DNS, etc.), that cleanup may never happen.
For dev/sandbox clusters → totally fine.
For prod → should be done carefully, knowing what might be left behind.
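For reference, the kind of patch being described looks like this (the CRD name is just an example; the namespace step is different because the `kubernetes` finalizer lives in `spec.finalizers` and goes through the finalize subresource):

```bash
# Last resort: tell Kubernetes to stop waiting for cleanup on this object.
# Whatever the owning controller would have cleaned up externally will NOT be cleaned up.
kubectl patch crd scaledobjects.keda.sh --type=merge \
  -p '{"metadata":{"finalizers":[]}}'

# A stuck namespace is a special case: its "kubernetes" finalizer sits in
# spec.finalizers and is cleared through the finalize subresource (jq assumed)
kubectl get namespace keda -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw /api/v1/namespaces/keda/finalize -f -
```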
c) How to quickly diagnose “why is this stuck?”
When something is Terminating forever, my standard steps are:
- Check finalizers:
  `kubectl get <kind> <name> -n <ns> -o jsonpath='{.metadata.finalizers}'`
  That tells you who is holding the deletion.
- Check events:
  `kubectl describe <kind> <name> -n <ns>`
  Sometimes you’ll see helpful errors like:
  - “cannot contact webhook …”
  - “failed to clean up custom resources …”
- Check controller logs for the finalizer owner
  - For the KEDA finalizer → `kubectl logs -n keda deploy/keda-operator`
  - For CRD cleanup → `kube-apiserver` / `apiextensions-apiserver` logs (harder on managed clusters)
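One extra trick that pairs well with these steps when the stuck thing is a namespace: list everything that still exists in it, since one leftover object holding a finalizer is usually the culprit. A rough sketch, again using the `keda` namespace:

```bash
# Enumerate every namespaced resource type, then list any objects of each type
# that are still present in the stuck namespace. Whatever shows up is the blocker.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n keda --ignore-not-found --show-kind
```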
d) How to avoid this pain in future?
For your use case (EKS + addons like KEDA, Datadog, etc.):
- Use Helm (or GitOps) as the source of truth
  - Install/upgrade/uninstall via Helm.
  - When decommissioning: `helm uninstall <release>` first, then delete CRDs if needed.
- Don’t nuke CRDs and namespaces first
  - If you need to remove KEDA: `helm uninstall keda -n keda`
  - Wait for CRs to disappear.
  - Then remove CRDs if you really want (see the sketch after this list).
- Keep operators running until cleanup is finished
  - Don’t delete operator deployments before their resources are gone.
- Accept that in dev, patching is normal
  - In dev/sandbox clusters, patching finalizers (`kubectl patch ... finalizers: []`) is a perfectly OK escape hatch.
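And a final sketch for the “then remove CRDs if you really want” step (KEDA’s CRD names used as examples):

```bash
# Confirm no custom resources are left anywhere before touching the CRDs
kubectl get scaledobjects.keda.sh,scaledjobs.keda.sh -A

# Only once that comes back empty, remove the CRDs themselves
kubectl delete crd scaledobjects.keda.sh scaledjobs.keda.sh
```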
4. TL;DR in human language
- Finalizers are “hooks” that block deletion until cleanup is done.
- They’re good for correctness, but awful for UX if the responsible controller is gone or broken.
- That’s why your CRDs and namespace took ages / got stuck.
- Best you can do:
- Uninstall apps the clean way (Helm uninstall, not CRD delete first).
- Only patch-out finalizers when you know what you’re skipping.
- In dev: patching is fine. In prod: be deliberate.