Use this document to configure resource usage for a Deployment's executor, webserver, scheduler, and triggerer components.
Select an executor
The Airflow executor works closely with the Airflow scheduler to determine which resources run tasks as they queue. The main difference between executors is the resources they have available and how they distribute work across those resources. Astronomer supports three executors: the Local executor, the Celery executor, and the Kubernetes executor.

Although the right choice largely depends on your use case, Astronomer recommends the Local executor for development environments and the Celery or Kubernetes executors for production environments operating at scale. For a detailed description of each executor, see Airflow executors explained.

Select a resource strategy
A Deployment's resource strategy defines how you allocate CPU and memory to the Deployment's Airflow components. Astronomer Software offers two resource strategies: Custom Resources and Astronomer Units (AUs).

An AU is equivalent to 0.1 CPU and 0.375 GiB of memory. If you set your resource strategy to Astronomer Units, you can only scale components according to this fixed ratio, and a component must use the same AU value for both CPU and memory. If you set your resource strategy to Custom Resources, you can freely set CPU and memory for each component without a predetermined ratio. See Customize resource usage.

If you still want a constant ratio of CPU to memory, but want to change the specific ratio, you can change the amount of resources that an AU represents. See Overprovision Deployments.
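The fixed AU ratio above can be sketched as a small calculation. This is an illustrative helper, not an Astronomer API; the `au_to_resources` name is hypothetical, while the 0.1 CPU and 0.375 GiB constants come from the definition of an AU above.

```python
# Illustrative sketch of the Astronomer Unit (AU) ratio.
# The constants come from the docs: 1 AU = 0.1 CPU and 0.375 GiB of memory.

AU_CPU = 0.1          # CPUs per AU
AU_MEMORY_GIB = 0.375 # GiB of memory per AU

def au_to_resources(au: int) -> tuple[float, float]:
    """Return (cpu, memory_gib) for a component sized at `au` Astronomer Units."""
    return au * AU_CPU, au * AU_MEMORY_GIB

# For example, a component sized at 10 AU always gets 1 CPU and 3.75 GiB:
print(au_to_resources(10))
```

Because both CPU and memory scale from the same AU value, every AU-based component keeps the same CPU-to-memory ratio; only the Custom Resources strategy lets you break it.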
Scale core resources
Apache Airflow requires four primary components:

- The webserver
- The scheduler
- The executor (and the workers it runs)
- The triggerer
Airflow webserver
The Airflow webserver is responsible for rendering the Airflow UI, where users can monitor DAGs, view task logs, and set various non-code configurations. If a function within the Airflow UI is slow or unavailable, Astronomer recommends increasing the resources allocated to the webserver.

Scheduler
The Airflow scheduler is responsible for monitoring task execution and triggering downstream tasks when their dependencies are met. If you experience delays in task execution, which you can track in the Gantt chart view of the Airflow UI, Astronomer recommends increasing the resources allocated to the scheduler.

Scheduler count
Airflow 2.0 introduced the ability to run multiple schedulers concurrently for high availability, zero recovery time, and faster performance. You can provision up to 4 schedulers on any Deployment. Each scheduler is provisioned with the resources specified in Scheduler Resources. For example, if you set the CPU in Scheduler Resources to 5 CPUs and set Scheduler Count to 2, your Airflow Deployment runs 2 schedulers using 5 CPUs each, for a total of 10 CPUs. To increase the speed at which tasks are scheduled and to ensure high availability, Astronomer recommends provisioning 2 or more schedulers for production environments.

DAG Processor
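The scheduler-count math above can be sketched as follows. The `total_scheduler_cpu` function is a hypothetical illustration of how the per-scheduler allocation multiplies, not an Astronomer API.

```python
# Illustrative sketch: each scheduler receives the full Scheduler Resources
# allocation, so total consumption is (per-scheduler CPU) x (scheduler count).

def total_scheduler_cpu(cpu_per_scheduler: float, scheduler_count: int) -> float:
    """Return the total CPU consumed by all schedulers in the Deployment."""
    return cpu_per_scheduler * scheduler_count

# The example from the docs: 2 schedulers at 5 CPUs each consume 10 CPUs total.
print(total_scheduler_cpu(5, 2))
```

The same multiplication applies to memory: scaling Scheduler Count scales the Deployment's total scheduler footprint, not the resources of any single scheduler.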
Complex, dynamically generated DAGs, sub-optimal DAG parsing practices, or a growing business that requires larger data pipelines can strain DAG processing and threaten your Airflow scheduler's availability. Deployments can support high-scale environments more reliably by separating the DAG processor from the scheduler. You can configure the number of DAG processors for a Deployment from the UI and the Houston API. To enable and provision resources for standalone DAG processors, set the `dagProcessorEnabled` feature flag to `true` in your Houston API configuration in the `config.yaml` file:
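The following is a sketch of what that `config.yaml` override might look like. The nesting under `astronomer.houston.config.deployments` is an assumption based on the usual layout of Astronomer Software Houston configuration; verify the exact key path against the configuration reference for your platform version before applying it.

```yaml
# Hypothetical sketch: enabling standalone DAG processors via the Houston API
# configuration. Confirm the exact key path for your Astronomer Software version.
astronomer:
  houston:
    config:
      deployments:
        dagProcessorEnabled: true
```

After updating `config.yaml`, apply the change to your platform release so the Houston API picks up the new feature flag.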