Documentation Index
Fetch the complete documentation index at: https://astronomer-preview.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Use these Helm values to size scheduler, webserver, workers, triggerer, Dag processor, and API server resources for an Airflow Deployment.
Resource values are plain integers: CPU in millicpu and memory in MiB.
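To make the unit convention concrete, here is a small sketch (not part of the chart) that maps these plain integers onto the Kubernetes quantity strings they correspond to — 500 millicpu is half a CPU core, and memory values carry an implicit Mi suffix:

```python
# Illustrative helpers: convert the chart's plain-integer values
# into the Kubernetes quantity strings they represent.

def to_k8s_cpu(millicpu: int) -> str:
    """500 -> '500m', i.e. half a CPU core."""
    return f"{millicpu}m"

def to_k8s_memory(mib: int) -> str:
    """1920 -> '1920Mi' (mebibytes)."""
    return f"{mib}Mi"

print(to_k8s_cpu(500))      # 500m
print(to_k8s_memory(1920))  # 1920Mi
```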
Component resources
Scheduler
The scheduler orchestrates Dag runs and task scheduling.
```yaml
scheduler:
  replicas: 1
  resources:
    requests:
      cpu: 500
      memory: 1920
    limits:
      cpu: 1000
      memory: 3840
```
Scaling considerations
- Add replicas for high availability.
- Increase memory for complex Dag dependencies.
- Set safeToEvict: false to prevent eviction by the cluster autoscaler.
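For example, assuming the chart exposes safeToEvict directly under the component key (verify this against your chart's values reference), pinning the scheduler looks like:

```yaml
# Assumed values layout -- confirm the key path for your chart version.
scheduler:
  safeToEvict: false
```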
Webserver
```yaml
webserver:
  resources:
    requests:
      cpu: 500
      memory: 1920
    limits:
      cpu: 1000
      memory: 3840
```
API server (Airflow 3+)
```yaml
apiServer:
  replicas: 1
  resources:
    requests:
      cpu: 1000
      memory: 3840
    limits:
      cpu: 2000
      memory: 7680
```
Dag processor (Airflow 2.3+, required in Airflow 3)
```yaml
dagProcessor:
  enabled: true
  replicas: 1
  resources:
    requests:
      cpu: 1000
      memory: 3840
    limits:
      cpu: 2000
      memory: 7680
```
In Airflow 2, the Dag processor defaults to 0 replicas and must be explicitly enabled. In Airflow 3, Houston automatically sets dagProcessor.enabled: true and enforces a minimum of 1 replica regardless of configuration.
Triggerer
```yaml
triggerer:
  replicas: 1
  resources:
    requests:
      cpu: 500
      memory: 1920
    limits:
      cpu: 1000
      memory: 3840
```
Workers (Celery Executor)
```yaml
workers:
  replicas: 2
  resources:
    requests:
      cpu: 1000
      memory: 3840
    limits:
      cpu: 2000
      memory: 7680
  terminationGracePeriodSeconds: 600
```
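The grace period should be long enough for a worker's longest-running task to finish during a graceful shutdown. A simple heuristic (an illustration, not an official sizing rule) is longest expected task runtime plus a small buffer:

```python
# Heuristic sketch: choose terminationGracePeriodSeconds so an
# in-flight task can complete before the worker pod is killed.

def grace_period_seconds(longest_task_seconds: int, buffer_seconds: int = 60) -> int:
    """Longest expected task runtime plus a safety buffer."""
    return longest_task_seconds + buffer_seconds

# A ~9-minute task plus a 60 s buffer matches the 600 s example above.
print(grace_period_seconds(540))  # 600
```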
Sizing recommendations
Small workloads (fewer than 50 Dags)
```yaml
scheduler:
  resources:
    requests: { cpu: 500, memory: 1920 }
    limits: { cpu: 1000, memory: 3840 }
workers:
  replicas: 1
  resources:
    requests: { cpu: 1000, memory: 3840 }
```
Medium workloads (50–200 Dags)
```yaml
scheduler:
  resources:
    requests: { cpu: 500, memory: 1920 }
    limits: { cpu: 1000, memory: 3840 }
dagProcessor:
  enabled: true
  resources:
    requests: { cpu: 1000, memory: 3840 }
workers:
  replicas: 3
  resources:
    requests: { cpu: 1000, memory: 3840 }
```
Large workloads (more than 200 Dags)
```yaml
scheduler:
  replicas: 2
  resources:
    requests: { cpu: 1000, memory: 3840 }
    limits: { cpu: 2000, memory: 7680 }
dagProcessor:
  enabled: true
  replicas: 2
  resources:
    requests: { cpu: 1000, memory: 3840 }
workers:
  replicas: 10
```
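The three tiers above can be summarized as a lookup by Dag count. This helper is just a restatement of the table for quick reference, not a substitute for measuring your own workload:

```python
# Restates the sizing tiers above: Dag count -> suggested Celery worker replicas.

def suggested_worker_replicas(dag_count: int) -> int:
    if dag_count < 50:        # small workloads
        return 1
    if dag_count <= 200:      # medium workloads
        return 3
    return 10                 # large workloads

print(suggested_worker_replicas(30))   # 1
print(suggested_worker_replicas(120))  # 3
print(suggested_worker_replicas(500))  # 10
```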
Autoscale workers with KEDA
Kubernetes Event-driven Autoscaling (KEDA) scales Celery workers based on task queue depth. Enable KEDA for a Deployment using the updateDeploymentKedaConfig mutation:
```graphql
mutation {
  updateDeploymentKedaConfig(
    deploymentUuid: "<deployment-uuid>"
    state: true
  ) {
    id
    label
  }
}
```
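To run this mutation programmatically, you can wrap it in a standard GraphQL JSON request body. The sketch below only builds the payload; the Houston endpoint URL and authorization header are assumptions to verify against your installation:

```python
import json

# Assumed endpoint -- check your installation's Houston GraphQL URL.
HOUSTON_URL = "https://houston.<base-domain>/v1"

def keda_mutation_body(deployment_uuid: str, state: bool) -> str:
    """Build the JSON request body for the updateDeploymentKedaConfig mutation."""
    query = (
        "mutation {\n"
        f'  updateDeploymentKedaConfig(deploymentUuid: "{deployment_uuid}", '
        f"state: {str(state).lower()}) {{\n"
        "    id\n    label\n  }\n}"
    )
    return json.dumps({"query": query})

body = keda_mutation_body("<deployment-uuid>", True)
print(body)
# Then POST it, e.g.:
# requests.post(HOUSTON_URL, data=body,
#               headers={"Authorization": token,
#                        "Content-Type": "application/json"})
```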
Monitor resources
```bash
# View current resource usage
kubectl top pods -n <deployment-namespace>

# Check resource limits
kubectl describe pod <pod-name> -n <deployment-namespace>
```
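When comparing usage against the limits configured above, it can help to turn `kubectl top pods` output into numbers. A minimal parsing sketch, run here against a hard-coded sample rather than a live cluster:

```python
# Parse `kubectl top pods` tabular output into per-pod CPU/memory numbers.
# The pod names in this sample are made up for illustration.

sample = """\
NAME                 CPU(cores)   MEMORY(bytes)
scheduler-7d9f-abc   412m         1650Mi
worker-0             980m         3512Mi
"""

def parse_top(text: str) -> dict:
    usage = {}
    for line in text.splitlines()[1:]:  # skip the header row
        name, cpu, mem = line.split()
        usage[name] = {
            "cpu_m": int(cpu[:-1]),    # strip trailing 'm'
            "mem_mi": int(mem[:-2]),   # strip trailing 'Mi'
        }
    return usage

print(parse_top(sample)["worker-0"])  # {'cpu_m': 980, 'mem_mi': 3512}
```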