
Kubernetes: How to Develop Kubernetes Operators

Let's build a tiny but real Kubernetes Operator end-to-end for Kubernetes 1.29. We'll do it the "standard" way (Go + controller-runtime) using Operator SDK (which wraps Kubebuilder) and pin versions known to work well with v1.29. I'll show you how to develop, build, and deploy, with complete example code.

Heads-up: Kubernetes v1.29 has passed its upstream support window and is in extended/legacy support on the major managed Kubernetes services. The guide below is tested against and compatible with 1.29, but plan to upgrade soon. (AWS Documentation)


0) What we're building

A CRD called Hello with fields:

  • spec.message (string): the text to print
  • spec.replicas (int32): how many Pods to run

The controller reconciles each Hello into a Deployment named hello-<cr-name> that runs a tiny container which prints the message forever. It also writes status.readyReplicas.


1) Prerequisites (for K8s 1.29)

  • Go 1.22+
  • Docker (or another container builder)
  • kubectl
  • A Kubernetes 1.29 cluster (kind/minikube/EKS etc.)
  • Operator SDK pinned to a release that targets K8s 1.29 (we'll use v1.36.0) (sdk.operatorframework.io)

Install the SDK by downloading a released binary (`go install` is not supported for operator-sdk because its go.mod contains replace directives):

# Linux amd64 shown; pick the release asset matching your OS/arch
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.36.0/operator-sdk_linux_amd64
chmod +x operator-sdk_linux_amd64 && sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
operator-sdk version

(Kubebuilder/Operator SDK projects use the same controller-runtime stack; commands like make docker-build/make deploy come from the standard scaffolding.) (book.kubebuilder.io)


2) Scaffold the project

mkdir hello-operator && cd hello-operator

# Initialize a Go/v4 project (the modern plugin line)
operator-sdk init \
  --domain=example.com \
  --repo=github.com/your-repo/hello-operator \
  --owner "You" \
  --plugins go/v4 \
  --project-name=hello-operator

Create the API & controller:

operator-sdk create api \
  --group=demo \
  --version=v1 \
  --kind=Hello \
  --resource --controller

This creates:

api/v1/hello_types.go
internal/controller/hello_controller.go
config/...
Makefile, go.mod, cmd/main.go, etc.

3) Define the CRD types (api/v1/hello_types.go)

Replace the generated file with:

package v1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// HelloSpec defines the desired state of Hello
type HelloSpec struct {
    // +kubebuilder:validation:MinLength=1
    Message string `json:"message"`

    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:default=1
    Replicas *int32 `json:"replicas,omitempty"`
}

// HelloStatus defines the observed state of Hello
type HelloStatus struct {
    // Ready replicas from the managed Deployment
    ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
type Hello struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   HelloSpec   `json:"spec,omitempty"`
    Status HelloStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true
type HelloList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []Hello `json:"items"`
}

// Register the types with the scheme builder (defined in the generated
// groupversion_info.go); without this the manager cannot serve the API.
func init() {
    SchemeBuilder.Register(&Hello{}, &HelloList{})
}

Generate CRDs & manifests:

make generate
make manifests
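
For reference, after make manifests the generated CRD at config/crd/bases/demo.example.com_hellos.yaml should contain a schema roughly like the excerpt below (exact output depends on your controller-gen version; treat this as a sketch, not canonical output):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: hellos.demo.example.com
spec:
  group: demo.example.com
  names:
    kind: Hello
    listKind: HelloList
    plural: hellos
    singular: hello
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {}        # from +kubebuilder:subresource:status
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              required: ["message"]
              properties:
                message:
                  type: string
                  minLength: 1      # from +kubebuilder:validation:MinLength
                replicas:
                  type: integer
                  format: int32
                  minimum: 1        # from +kubebuilder:validation:Minimum
                  default: 1        # from +kubebuilder:default
```

The validation markers in hello_types.go map one-to-one onto these OpenAPI schema constraints, so invalid CRs are rejected at admission time.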

4) Implement the controller (internal/controller/hello_controller.go)

Paste this full controller (it "create-or-updates" a Deployment and tracks status):

package controller

import (
    "context"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/utils/ptr"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/log"

    demov1 "github.com/your-repo/hello-operator/api/v1"
)

// +kubebuilder:rbac:groups=demo.example.com,resources=hellos,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=demo.example.com,resources=hellos/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch

type HelloReconciler struct {
    client.Client
}

// Reconcile ensures a Deployment exists per Hello, then updates status.
func (r *HelloReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger := log.FromContext(ctx)

    // 1) Get the Hello resource
    var hello demov1.Hello
    if err := r.Get(ctx, req.NamespacedName, &hello); err != nil {
        if errors.IsNotFound(err) {
            // CR deleted
            return ctrl.Result{}, nil
        }
        return ctrl.Result{}, err
    }

    // Defaults
    replicas := int32(1)
    if hello.Spec.Replicas != nil {
        replicas = *hello.Spec.Replicas
    }
    msg := hello.Spec.Message
    if msg == "" {
        msg = "Hello from " + hello.Name
    }

    // 2) Desired Deployment
    depName := fmt.Sprintf("hello-%s", hello.Name)
    labels := map[string]string{
        "app.kubernetes.io/name":       "hello",
        "app.kubernetes.io/instance":   hello.Name,
        "app.kubernetes.io/managed-by": "hello-operator",
    }

    var dep appsv1.Deployment
    depKey := types.NamespacedName{Name: depName, Namespace: hello.Namespace}
    if err := r.Get(ctx, depKey, &dep); err != nil && !errors.IsNotFound(err) {
        return ctrl.Result{}, err
    }
    // CreateOrUpdate re-reads the object by its own key, so name/namespace
    // must be set even when the Deployment does not exist yet.
    dep.Name = depName
    dep.Namespace = hello.Namespace

    // create or update the Deployment
    op, err := controllerutil.CreateOrUpdate(ctx, r.Client, &dep, func() error {
        dep.ObjectMeta.Name = depName
        dep.ObjectMeta.Namespace = hello.Namespace
        if dep.ObjectMeta.Labels == nil {
            dep.ObjectMeta.Labels = map[string]string{}
        }
        for k, v := range labels {
            dep.ObjectMeta.Labels[k] = v
        }

        // OwnerRef so it gets garbage-collected with the CR
        if err := controllerutil.SetControllerReference(&hello, &dep, r.Scheme()); err != nil {
            return err
        }

        dep.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
        dep.Spec.Replicas = ptr.To(replicas)
        dep.Spec.Template.ObjectMeta.Labels = labels
        dep.Spec.Template.Spec.Containers = []corev1.Container{
            {
                Name:  "hello",
                Image: "busybox:1.36", // tiny & works on 1.29
                Command: []string{"/bin/sh", "-c"},
                Args: []string{
                    // reference $MESSAGE at runtime so arbitrary message
                    // strings cannot break the shell quoting
                    `while true; do echo "$(date) $MESSAGE"; sleep 5; done`,
                },
                Env: []corev1.EnvVar{{Name: "MESSAGE", Value: msg}},
            },
        }
        return nil
    })
    if err != nil {
        return ctrl.Result{}, err
    }
    if op != controllerutil.OperationResultNone {
        logger.Info("deployment reconciled", "op", op, "name", depName)
    }

    // 3) Update status
    ready := int32(0)
    if dep.Status.ReadyReplicas > 0 {
        ready = dep.Status.ReadyReplicas
    }
    if hello.Status.ReadyReplicas != ready {
        hello.Status.ReadyReplicas = ready
        if err := r.Status().Update(ctx, &hello); err != nil {
            return ctrl.Result{}, err
        }
    }

    // Requeue on changes automatically via watch; no explicit requeue needed
    return ctrl.Result{}, nil
}

func (r *HelloReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&demov1.Hello{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}

Note the +kubebuilder:rbac lines; they generate the RBAC rules in config/rbac/role.yaml during make manifests.
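
For example, those markers should produce rules in config/rbac/role.yaml roughly like this excerpt (the file is regenerated on every make manifests; exact formatting varies by controller-gen version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: ["demo.example.com"]
    resources: ["hellos"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: ["demo.example.com"]
    resources: ["hellos/status"]
    verbs: ["get", "patch", "update"]
```

Keeping RBAC in markers next to the code that needs it means the role can never drift from the controller's actual API usage, as long as you re-run make manifests.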


5) Build & run locally against your cluster

Apply CRDs:

make install

Run the controller locally:

make run
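
If the manager starts cleanly, you should see controller-runtime startup logs along these lines (messages and formatting vary by controller-runtime version; this is an approximate sketch):

```
INFO    setup   starting manager
INFO    Starting EventSource    {"controller": "hello", "source": "kind source: *v1.Hello"}
INFO    Starting EventSource    {"controller": "hello", "source": "kind source: *v1.Deployment"}
INFO    Starting Controller     {"controller": "hello"}
INFO    Starting workers        {"controller": "hello", "worker count": 1}
```

The two EventSource lines confirm the watches registered by For(...) and Owns(...) in SetupWithManager.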

Create a sample CR:

# config/samples/demo_v1_hello.yaml
apiVersion: demo.example.com/v1
kind: Hello
metadata:
  name: hello-sample
  namespace: default
spec:
  message: "Hello from Operator on K8s 1.29"
  replicas: 2
Apply it and inspect:

kubectl apply -f config/samples/demo_v1_hello.yaml
kubectl get hello -A
kubectl get deploy -l app.kubernetes.io/name=hello -n default
kubectl logs -f deploy/hello-hello-sample -n default
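
Once the Pods are running, the container logs should repeat a line every five seconds, roughly like this (timestamps will differ):

```
Tue Jan  2 10:15:04 UTC 2024 Hello from Operator on K8s 1.29
Tue Jan  2 10:15:09 UTC 2024 Hello from Operator on K8s 1.29
```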

(Those standard make targets and flow are from the Kubebuilder/Operator SDK quick start.) (book.kubebuilder.io)


6) Containerize & deploy the operator in-cluster

Build and push the manager image:

export IMG=<your-registry>/hello-operator:v0.1.0
make docker-build docker-push IMG=$IMG

Deploy RBAC/manager/CRDs with that image:

make deploy IMG=$IMG

Now the operator runs as a Deployment in hello-operator-system. Create CRs and watch it manage Deployments.

Cleanup:

make undeploy
kubectl delete -f config/samples/demo_v1_hello.yaml

(These targets and sequence are the canonical way to package and deploy operators.) (book.kubebuilder.io)


7) Example go.mod (excerpt)

Let the scaffolding pin versions, but youโ€™ll see something like:

module github.com/your-repo/hello-operator

go 1.22

require (
    sigs.k8s.io/controller-runtime v0.x.y
    k8s.io/api v0.29.x
    k8s.io/apimachinery v0.29.x
    k8s.io/client-go v0.29.x
)

The controller-runtime and client-go versions are what tie the project to Kubernetes 1.29; Operator SDK v1.36 pins those dependencies to the v1.29 line. (sdk.operatorframework.io)


8) Sample CRs to try

Scale & message change:

apiVersion: demo.example.com/v1
kind: Hello
metadata:
  name: hello-scale
spec:
  message: "Namaste from 1.29!"
  replicas: 3

Apply, then modify replicas or message to see the Deployment update.


9) Namespacing / watch scope (optional)

By default the operator watches all namespaces (cluster scope). To restrict it to a single namespace, set the WATCH_NAMESPACE environment variable in config/manager/manager.yaml before make deploy.
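
A hypothetical excerpt of that env wiring is below. Note this only works if your cmd/main.go actually reads WATCH_NAMESPACE and passes it into the manager's cache options; recent go/v4 scaffolds do not wire this by default, so treat it as a sketch:

```yaml
# config/manager/manager.yaml (excerpt): hypothetical WATCH_NAMESPACE wiring
spec:
  template:
    spec:
      containers:
        - name: manager
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace   # watch only the install namespace
```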


10) Testing notes for 1.29 (optional)

If you later add unit/envtest tests (make test), note that envtest binary distribution changed around the 1.29 line; the Kubebuilder docs explain how to fetch the right envtest assets, controlled by the ENVTEST_K8S_VERSION variable in the scaffolded Makefile. (book.kubebuilder.io)


11) Alternative: pure Kubebuilder

You can build the exact same operator using Kubebuilder directly:

go install sigs.k8s.io/kubebuilder/v4@latest
kubebuilder init --domain=example.com --plugins go/v4 --repo=github.com/your-repo/hello-operator
kubebuilder create api --group=demo --version=v1 --kind=Hello
# (same code as above, same make targets)
make docker-build docker-push IMG=$IMG
make deploy IMG=$IMG

Kubebuilderโ€™s book documents the same build/deploy flow and targets. (book.kubebuilder.io)


Troubleshooting (1.29-specific gotchas)

  • Beta APIs removed: If any third-party manifests in your cluster use removed beta API versions, they can fail on 1.29; check release notes and update them. (Kubernetes)
  • Managed K8s: EKS keeps 1.29 under extended support (extra cost/time-bound). Plan a minor upgrade path when possible. (AWS Documentation)

You now have:

  • A CRD (Hello)
  • A controller that creates/updates a Deployment and updates status
  • A container image and a fully deployed operator

From here you could tailor this for EKS/ECR pushes (login/tag commands), add a Helm chart for the CRs, or wire in Prometheus metrics and health probes (controller-runtime exposes them out of the box). (book.kubebuilder.io)

Happy operating!
