Let’s build a tiny but real Kubernetes Operator end-to-end for Kubernetes 1.29. We’ll do it the “standard” way (Go + controller-runtime) using Operator SDK (which wraps Kubebuilder) and pin versions that are known to work well with v1.29. I’ll show you how to develop, build, and deploy, with complete example code.
Heads-up: Kubernetes v1.29 is now in extended/legacy support across major managed K8s (upstream support window has passed). The guide below is tested/compatible with 1.29, but plan to upgrade soon. (AWS Documentation)
0) What we’re building
A CRD called Hello with fields:
spec.message
(string) – the text to printspec.replicas
(int) – how many Pods
The controller reconciles each Hello
into a Deployment named hello-<cr-name>
that runs a tiny container which prints the message forever. It also writes status.readyReplicas
.
1) Prerequisites (for K8s 1.29)
- Go 1.22+
- Docker (or another container builder)
- kubectl
- A Kubernetes 1.29 cluster (kind/minikube/EKS etc.)
- Operator SDK pinned to a release that targets K8s 1.29 (we’ll use
v1.36.0
) (sdk.operatorframework.io)
Install the SDK:
go install github.com/operator-framework/operator-sdk/v3/cmd/operator-sdk@v1.36.0
(Kubebuilder/Operator SDK projects use the same controller-runtime stack; commands like make docker-build
/make deploy
come from the standard scaffolding.) (book.kubebuilder.io)
2) Scaffold the project
mkdir hello-operator && cd hello-operator
# Initialize a Go/v4 project (the modern plugin line)
operator-sdk init \
--domain=example.com \
--owner "You" \
--plugins go/v4 \
--project-name=hello-operator
Code language: PHP (php)
Create the API & controller:
operator-sdk create api \
--group=demo \
--version=v1 \
--kind=Hello \
--resource --controller
This creates:
api/v1/hello_types.go
controllers/hello_controller.go
config/...
Makefile, go.mod, main.go, etc.
3) Define the CRD types (api/v1/hello_types.go)
Replace the generated file with:
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// HelloSpec defines the desired state of Hello
type HelloSpec struct {
// +kubebuilder:validation:MinLength=1
Message string `json:"message"`
// +kubebuilder:validation:Minimum=1
// +kubebuilder:default=1
Replicas *int32 `json:"replicas,omitempty"`
}
// HelloStatus defines the observed state of Hello
type HelloStatus struct {
// Ready replicas from the managed Deployment
ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
type Hello struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec HelloSpec `json:"spec,omitempty"`
Status HelloStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
type HelloList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Hello `json:"items"`
}
Code language: JavaScript (javascript)
Generate CRDs & manifests:
make generate
make manifests
4) Implement the controller (controllers/hello_controller.go)
Paste this full controller (it “create-or-updates” a Deployment and tracks status):
package controllers
import (
"context"
"fmt"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/log"
demov1 "github.com/your-repo/hello-operator/api/v1"
)
// +kubebuilder:rbac:groups=demo.example.com,resources=hellos,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=demo.example.com,resources=hellos/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch
type HelloReconciler struct {
client.Client
}
// Reconcile ensures a Deployment exists per Hello, then updates status.
func (r *HelloReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
// 1) Get the Hello resource
var hello demov1.Hello
if err := r.Get(ctx, req.NamespacedName, &hello); err != nil {
if errors.IsNotFound(err) {
// CR deleted
return ctrl.Result{}, nil
}
return ctrl.Result{}, err
}
// Defaults
replicas := int32(1)
if hello.Spec.Replicas != nil {
replicas = *hello.Spec.Replicas
}
msg := hello.Spec.Message
if msg == "" {
msg = "Hello from " + hello.Name
}
// 2) Desired Deployment
depName := fmt.Sprintf("hello-%s", hello.Name)
labels := map[string]string{
"app.kubernetes.io/name": "hello",
"app.kubernetes.io/instance": hello.Name,
"app.kubernetes.io/managed-by": "hello-operator",
}
var dep appsv1.Deployment
depKey := types.NamespacedName{Name: depName, Namespace: hello.Namespace}
if err := r.Get(ctx, depKey, &dep); err != nil && !errors.IsNotFound(err) {
return ctrl.Result{}, err
}
// create or update the Deployment
op, err := controllerutil.CreateOrUpdate(ctx, r.Client, &dep, func() error {
dep.ObjectMeta.Name = depName
dep.ObjectMeta.Namespace = hello.Namespace
if dep.ObjectMeta.Labels == nil {
dep.ObjectMeta.Labels = map[string]string{}
}
for k, v := range labels {
dep.ObjectMeta.Labels[k] = v
}
// OwnerRef so it gets garbage-collected with the CR
if err := controllerutil.SetControllerReference(&hello, &dep, r.Scheme()); err != nil {
return err
}
dep.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
dep.Spec.Replicas = ptr.To(replicas)
dep.Spec.Template.ObjectMeta.Labels = labels
dep.Spec.Template.Spec.Containers = []corev1.Container{
{
Name: "hello",
Image: "busybox:1.36", // tiny & works on 1.29
Command: []string{"/bin/sh", "-c"},
Args: []string{
`while true; do echo "$(date) ` + msg + `"; sleep 5; done`,
},
// Optional: expose the message as an env var instead
Env: []corev1.EnvVar{{Name: "MESSAGE", Value: msg}},
},
}
return nil
})
if err != nil {
return ctrl.Result{}, err
}
if op != controllerutil.OperationResultNone {
logger.Info("deployment reconciled", "op", op, "name", depName)
}
// 3) Update status
ready := int32(0)
if dep.Status.ReadyReplicas > 0 {
ready = dep.Status.ReadyReplicas
}
if hello.Status.ReadyReplicas != ready {
hello.Status.ReadyReplicas = ready
if err := r.Status().Update(ctx, &hello); err != nil {
return ctrl.Result{}, err
}
}
// Requeue on changes automatically via watch; no explicit requeue needed
return ctrl.Result{}, nil
}
func (r *HelloReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&demov1.Hello{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
Code language: JavaScript (javascript)
Note the
+kubebuilder:rbac
lines—these generate RBAC inconfig/rbac/role.yaml
duringmake manifests
.
5) Build & run locally against your cluster
Apply CRDs:
make install
Run the controller locally:
make run
Create a sample CR:
# config/samples/demo_v1_hello.yaml
apiVersion: demo.example.com/v1
kind: Hello
metadata:
name: hello-sample
namespace: default
spec:
message: "Hello from Operator on K8s 1.29 👋"
replicas: 2
kubectl apply -f config/samples/demo_v1_hello.yaml
kubectl get hello -A
kubectl get deploy -l app.kubernetes.io/name=hello -n default
kubectl logs -f deploy/hello-hello-sample -n default
Code language: JavaScript (javascript)
(Those standard make
targets and flow are from the Kubebuilder/Operator SDK quick start.) (book.kubebuilder.io)
6) Containerize & deploy the operator in-cluster
Build and push the manager image:
export IMG=<your-registry>/hello-operator:v0.1.0
make docker-build docker-push IMG=$IMG
Code language: JavaScript (javascript)
Deploy RBAC/manager/CRDs with that image:
make deploy IMG=$IMG
Code language: PHP (php)
Now the operator runs as a Deployment in hello-operator-system
. Create CRs and watch it manage Deployments.
Cleanup:
make undeploy
kubectl delete -f config/samples/demo_v1_hello.yaml
Code language: JavaScript (javascript)
(These targets and sequence are the canonical way to package and deploy operators.) (book.kubebuilder.io)
7) Example go.mod
(excerpt)
Let the scaffolding pin versions, but you’ll see something like:
module github.com/your-repo/hello-operator
go 1.22
require (
sigs.k8s.io/controller-runtime v0.x.y
k8s.io/api v0.29.x
k8s.io/apimachinery v0.29.x
k8s.io/client-go v0.29.x
)
Code language: JavaScript (javascript)
Controller-runtime & client-go versions are what tie to Kubernetes 1.29; Operator SDK v1.36 aligns those dependencies to K8s v1.29. (sdk.operatorframework.io)
8) Sample CRs to try
Scale & message change:
apiVersion: demo.example.com/v1
kind: Hello
metadata:
name: hello-scale
spec:
message: "Namaste from 1.29!"
replicas: 3
Code language: JavaScript (javascript)
Apply, then modify replicas
or message
to see the Deployment update.
9) Namespacing / watch scope (optional)
By default the operator is cluster-scoped. To scope to a namespace, set WATCH_NAMESPACE
in config/manager/manager.yaml
(env var) before make deploy
.
10) Testing notes for 1.29 (optional)
If you later add unit/envtest tests (make test
), envtest binary locations changed after K8s 1.29.3—Kubebuilder docs explain how to fetch the right envtest assets in ENVTEST_KUBERNETES_VERSION
. (book.kubebuilder.io)
11) Alternative: pure Kubebuilder
You can build the exact same operator using Kubebuilder directly:
go install sigs.k8s.io/kubebuilder/v4@latest
kubebuilder init --domain=example.com --plugins go/v4 --repo=github.com/your-repo/hello-operator
kubebuilder create api --group=demo --version=v1 --kind=Hello
# (same code as above, same make targets)
make docker-build docker-push IMG=$IMG
make deploy IMG=$IMG
Code language: PHP (php)
Kubebuilder’s book documents the same build/deploy flow and targets. (book.kubebuilder.io)
Troubleshooting (1.29-specific gotchas)
- Beta APIs removed: If any third-party manifests in your cluster use removed beta API versions, they can fail on 1.29; check release notes and update them. (Kubernetes)
- Managed K8s: EKS keeps 1.29 under extended support (extra cost/time-bound). Plan a minor upgrade path when possible. (AWS Documentation)
You now have:
- A CRD (
Hello
) - A controller that creates/updates a Deployment and updates status
- A container image and a fully deployed operator
If you want, I can tailor this for EKS/ECR pushes (login/tag commands), add a Helm chart for CRs, or wire in Prometheus metrics/health probes (controller-runtime exposes them out-of-the-box). (book.kubebuilder.io)
Happy operating! 🧑✈️
I’m a DevOps/SRE/DevSecOps/Cloud Expert passionate about sharing knowledge and experiences. I have worked at Cotocus. I share tech blog at DevOps School, travel stories at Holiday Landmark, stock market tips at Stocks Mantra, health and fitness guidance at My Medic Plus, product reviews at TrueReviewNow , and SEO strategies at Wizbrand.
Do you want to learn Quantum Computing?
Please find my social handles as below;
Rajesh Kumar Personal Website
Rajesh Kumar at YOUTUBE
Rajesh Kumar at INSTAGRAM
Rajesh Kumar at X
Rajesh Kumar at FACEBOOK
Rajesh Kumar at LINKEDIN
Rajesh Kumar at WIZBRAND