In our Kubernetes setup, we run Pods almost exclusively through Deployments for stateless services. Each Pod runs the main application container, with sidecars added for logging, monitoring, or service-to-service networking where a service needs them.

Every Pod spec defines resource requests and limits, liveness and readiness probes, and ConfigMap/Secret mounts, so Pods start predictably and recover automatically when they fail. For scaling, we adjust replica counts and rely on autoscaling to track load, and we release changes through rolling updates to avoid downtime.

The main challenges were tuning CPU and memory settings to avoid throttling and OOMKills, rolling out configuration changes safely across environments, and troubleshooting Pod-to-Pod networking and DNS issues. We also had to invest in observability so we could quickly spot crash loops, slow startups, and node-level constraints that affected Pod scheduling and stability.
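A Deployment along these lines might look like the following sketch. The service name (`web-api`), image names, port, probe paths, and ConfigMap/Secret names are illustrative assumptions, not taken from an actual manifest; the resource values and autoscaling thresholds are placeholders to show the shape, not recommendations.

```yaml
# Illustrative Deployment: main container plus a logging sidecar,
# with requests/limits, probes, config/secret mounts, and a rolling-update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api                # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep full capacity during a rollout
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: app
          image: registry.example.com/web-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 512Mi    # memory limit bounds OOM blast radius;
                               # CPU limit omitted here to avoid throttling
          readinessProbe:
            httpGet:
              path: /healthz   # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
          envFrom:
            - configMapRef:
                name: web-api-config    # hypothetical ConfigMap
            - secretRef:
                name: web-api-secrets   # hypothetical Secret
        - name: log-shipper             # example logging sidecar
          image: registry.example.com/log-shipper:2.3   # placeholder image
---
# Illustrative autoscaler targeting the Deployment above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # placeholder target
```

Setting `maxUnavailable: 0` with a small `maxSurge` trades a brief capacity overshoot for zero reduction in serving Pods during a rollout, which matches the no-downtime release goal described above.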