Argo CD: How to Do Capacity Planning for Argo CD in Production

It is not easy to come up with a one-size-fits-all recommendation (which is why resource limits are not shipped by default), because the requirements depend on a lot of factors.

For the application controller, you need to consider at least the following variables:

  • How many applications are being managed?
  • How many clusters are being managed?
  • How many APIs and resources are there in any of the managed clusters, and how big are those resources?
  • What is the tuning configuration for status and operation processors, kubectl parallelism limits, etc.? (A configuration sketch follows this list.)
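The controller tuning mentioned above lives in the argocd-cmd-params-cm ConfigMap. Here is a minimal sketch; the values are illustrative starting points for a mid-sized install, not recommendations (the defaults are 20 status processors and 10 operation processors):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cmd-params-cm
      namespace: argocd
    data:
      # Concurrent app status reconciliations (default 20); raise when managing many apps
      controller.status.processors: "50"
      # Concurrent sync operations (default 10)
      controller.operation.processors: "25"
      # Cap on concurrent kubectl fork/exec calls against managed clusters
      controller.kubectl.parallelism.limit: "20"

Changes take effect once the application controller pods are restarted.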

For the repository server, there are also quite a few variables to consider:

  • How big are your application manifests?
  • What tooling do you use (e.g. Helm, Kustomize, custom plugins, …)?
  • Did you scale out the repository server?
  • Do you allow concurrent processing of manifests? (A configuration sketch follows this list.)
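The last two questions map to concrete settings: the repo server is stateless, so it can be scaled out by raising its Deployment replicas, and concurrent manifest generation is capped per replica via argocd-cmd-params-cm. A minimal sketch with illustrative values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cmd-params-cm
      namespace: argocd
    data:
      # Max concurrent manifest generations per repo-server replica (0 = unlimited)
      reposerver.parallelism.limit: "10"
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: argocd-repo-server
      namespace: argocd
    spec:
      # Scale out the stateless repo server; three replicas is an illustrative choice
      replicas: 3

Helm- and Kustomize-heavy repositories are slower and more memory-hungry to render, which usually justifies more replicas or a lower parallelism cap per replica.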

Capacity planning for Argo CD in production involves estimating the resources required to ensure optimal performance, scalability, and availability of the system. Here are the key steps:

  1. Identify the workload: Determine the expected number of users, applications, and deployments that Argo CD will manage in production. This information helps you estimate the required resources.
  2. Estimate the resource requirements: Use the information from step one to estimate the CPU, memory, and storage Argo CD will need. You can use tools like Prometheus or Grafana to monitor actual usage and adjust your estimates accordingly (see the monitoring sketch after this list).
  3. Plan for redundancy: Deploy Argo CD in a high-availability configuration with multiple replicas spread across multiple nodes or availability zones (see the HA kustomization sketch after this list).
  4. Test performance: Test Argo CD in a production-like environment to validate your capacity planning assumptions. Use load-testing tools like Apache JMeter or Locust to simulate realistic loads on the system.
  5. Monitor and adjust: Monitor Argo CD in production and adjust resource allocation as necessary, using Prometheus, Grafana, or another monitoring solution to track resource usage and identify bottlenecks.
  6. Automate scaling: Consider automating the scaling of Argo CD components with the Kubernetes Horizontal Pod Autoscaler or the Cluster Autoscaler, so resources scale up or down with the workload (see the HPA sketch after this list).
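For steps 2 and 5, here is a minimal monitoring sketch, assuming the Prometheus Operator is installed and already scraping the Argo CD metrics endpoints. The rule name and the 60-second threshold are illustrative assumptions; argocd_app_reconcile is the controller's built-in reconciliation-latency histogram:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: argocd-capacity-alerts   # hypothetical rule name
      namespace: argocd
    spec:
      groups:
      - name: argocd-capacity
        rules:
        # Fires when the 95th-percentile app reconciliation time stays above 60s,
        # a hint that processors or controller CPU are undersized for the workload
        - alert: ArgoCDSlowReconciliation
          expr: |
            histogram_quantile(0.95,
              sum(rate(argocd_app_reconcile_bucket[5m])) by (le)) > 60
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Argo CD app reconciliation p95 latency is above 60s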
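For step 3, Argo CD ships dedicated high-availability manifests that run multiple replicas of the stateless components plus a Redis HA setup. A minimal kustomize sketch; the version tag is an example and should be pinned to whatever release you run:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: argocd
    resources:
    # Upstream HA manifests: replicated API server and repo server, plus Redis HA
    - https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.3/manifests/ha/install.yaml

Note that the HA Redis setup expects at least three worker nodes so its pods can spread across failure domains.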
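For step 6, a sketch of a Horizontal Pod Autoscaler on the stateless repo server (the application controller is a StatefulSet that scales by sharding, so it is not a good HPA target). The replica bounds and target utilization are illustrative:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: argocd-repo-server
      namespace: argocd
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: argocd-repo-server
      minReplicas: 2
      maxReplicas: 6
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 75

The HPA can only compute utilization if CPU requests are set on the repo-server container.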

Capacity Recommendations for Argo CD in Production

Here are some general guidelines that can help you determine the capacity requirements for your Argo CD deployment in production:

  1. Hardware Requirements: The hardware requirements for Argo CD depend on the size of the application manifests, the number of applications, and the number of users. As a baseline, plan for at least 2 CPU cores and 4GB of RAM for the Argo CD server (see the resource patch sketch after this list).
  2. Storage Requirements: Argo CD keeps most of its state in Kubernetes resources (Applications, AppProjects, ConfigMaps, Secrets) and a Redis cache, so its direct storage footprint is modest. What it needs grows with the number and size of the applications being managed, so size the Redis cache and any persistent volumes from observed usage rather than a fixed amount per application.
  3. Network Requirements: Argo CD requires network connectivity to the Kubernetes API servers of the managed clusters and to your Git repositories. Ensure that bandwidth and latency are adequate for the number of clusters and users.
  4. Scalability: The stateless components (API server and repo server) can be scaled horizontally by adding replicas behind a load balancer; the application controller scales by sharding clusters across its replicas rather than by simple load balancing.
  5. Backup and Disaster Recovery: It is essential to have a backup and disaster recovery plan in place so that your application definitions and configuration can be restored after a failure (see the export CronJob sketch after this list).
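One way to encode the baseline from guideline 1 is a kustomize strategic-merge patch on the API server Deployment. A sketch, with the numbers mirroring the guideline above; tune them from observed usage:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: argocd-server
      namespace: argocd
    spec:
      template:
        spec:
          containers:
          - name: argocd-server
            resources:
              # Requests reserve a floor; limits match the 2 CPU / 4GB guideline
              requests:
                cpu: "1"
                memory: 2Gi
              limits:
                cpu: "2"
                memory: 4Gi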
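For guideline 5, Argo CD's state can be dumped with the argocd admin export command. Below is a sketch of a nightly backup CronJob; the job name, PVC, schedule, and image tag are assumptions to adapt, and the service account must be able to read Argo CD's ConfigMaps, Secrets, and Applications:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: argocd-backup            # hypothetical name
      namespace: argocd
    spec:
      schedule: "0 2 * * *"          # nightly at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              serviceAccountName: argocd-server   # assumes this SA can read Argo CD state
              restartPolicy: OnFailure
              containers:
              - name: export
                image: quay.io/argoproj/argocd:v2.9.3   # pin to your Argo CD version
                command: ["/bin/sh", "-c"]
                # argocd admin export dumps Applications, AppProjects, ConfigMaps,
                # and Secrets as a single YAML stream
                args: ["argocd admin export -n argocd > /backup/argocd-$(date +%F).yaml"]
                volumeMounts:
                - name: backup
                  mountPath: /backup
              volumes:
              - name: backup
                persistentVolumeClaim:
                  claimName: argocd-backup-pvc    # hypothetical PVC

Restores use the matching argocd admin import command.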