Use this guide to deploy an Astro Private Cloud (APC) unified cluster, where control plane and data plane components run together in a single Kubernetes cluster. Unified mode combines management services, such as Astro UI, Houston, and NATS, with runtime services like Commander, Config Syncer, and data plane ingress so that platform operators can evaluate APC without maintaining separate clusters.
If you prefer to keep control plane and data plane Helm releases separate but run them in the same Kubernetes cluster, follow the dedicated control plane and data plane install guides sequentially. That approach consumes slightly more resources than unified mode but keeps responsibilities isolated.
At some points in this installation procedure, your particular environment configuration might require you to take additional steps, or might enable you to skip certain steps. When you see a callout like this, read it carefully and follow the instructions if they apply to your installation.
Prerequisites
EKS on AWS
GKE on GCP
AKS on Azure
Other
The following prerequisites apply when running APC on Amazon EKS. See the Other tab if you run a different version of Kubernetes on AWS.
- An EKS Kubernetes cluster, running a version of Kubernetes certified as compatible on the Kubernetes Version Compatibility Reference.
- A PostgreSQL instance, accessible from your Kubernetes cluster, and running a version of Postgres certified as compatible on the Version Compatibility Reference.
- PostgreSQL superuser permissions.
- Permission to create and modify resources on AWS.
- Permission to generate a certificate that covers a defined set of subdomains.
- An SMTP service and credentials. For example, Mailgun or SendGrid.
- The AWS CLI.
- (Optional) eksctl for creating and managing your Astronomer cluster on EKS.
- A machine with access to the Kubernetes API Server that meets the following criteria:
- Network access to the Kubernetes API Server - either direct access or VPN.
- Network access to load-balancer resources that are created when APC is installed later in the procedure - either direct access or VPN.
- Configured to use the DNS servers where APC DNS records can be created.
- Helm (minimum v3.6).
- The Kubernetes CLI (kubectl).
- (Situational) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.
The following prerequisites apply when running APC on Google GKE. See the Other tab if you run a different version of Kubernetes on GCP.
- A GKE Kubernetes cluster, running a version of Kubernetes listed as compatible on the Kubernetes Version Compatibility Reference.
- A PostgreSQL instance, accessible from your Kubernetes cluster, and running a version of Postgres certified as compatible on the Version Compatibility Reference.
- PostgreSQL superuser permissions.
- Permission to create and modify resources on Google Cloud Platform.
- Permission to generate a certificate that covers a defined set of subdomains.
- An SMTP service and credentials. For example, Mailgun or SendGrid.
- Google Cloud SDK.
- A machine with access to the Kubernetes API Server that meets the following criteria:
- Network access to the Kubernetes API Server - either direct access or VPN.
- Network access to load-balancer resources that are created when APC is installed later in the procedure - either direct access or VPN.
- Configured to use the DNS servers where APC DNS records can be created.
- Helm (minimum v3.6).
- The Kubernetes CLI (kubectl).
- (Optional) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.
The following prerequisites apply when running APC on Azure AKS. See the Other tab if you run a different version of Kubernetes on Azure.
- A Kubernetes cluster, running a version of Kubernetes listed as compatible on the Kubernetes Version Compatibility Reference.
- A PostgreSQL instance, accessible from your Kubernetes cluster, and running a version of Postgres certified as compatible on the Version Compatibility Reference.
- If your organization uses Azure Database for PostgreSQL as the database backend, you need to enable the pg_trgm extension using the Azure portal or the Azure CLI before you install APC. If you don't enable the pg_trgm extension, the install fails. For more information about enabling the pg_trgm extension, see PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server.
- PostgreSQL superuser permissions.
- Permission to create and modify resources on Azure.
- Permission to generate a certificate that covers a defined set of subdomains.
- An SMTP service and credentials. For example, Mailgun or SendGrid.
- The Azure CLI.
- A machine with access to the Kubernetes API Server that meets the following criteria:
- Network access to the Kubernetes API Server - either direct access or VPN.
- Network access to load-balancer resources created when APC is installed later in the procedure - either direct access or VPN.
- Configured to use the DNS servers where APC DNS records will be created.
- Helm (minimum v3.6).
- The Kubernetes CLI (kubectl).
- (Optional) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.
The following prerequisites apply when running APC on Kubernetes.
- A Kubernetes cluster. For versioning considerations, see Kubernetes Version Compatibility Reference.
- A PostgreSQL instance accessible from your Kubernetes cluster. For versioning considerations, see Version Compatibility Reference.
- PostgreSQL superuser permissions.
- An SMTP service and credentials. For example, Mailgun or SendGrid.
- Permission to generate a certificate that covers a defined set of subdomains.
- The ability to create DNS records.
- A machine with access to the Kubernetes API Server meeting the following criteria:
- Network access to the Kubernetes API Server - either direct access or VPN.
- Network access to load-balancer resources created when APC is installed later in the procedure - either direct access or VPN.
- Configured to use the DNS servers where APC DNS records will be created.
- Helm (minimum v3.6).
- The Kubernetes CLI (kubectl).
- (Optional) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.
Ensure your cluster meets both the control plane and data plane prerequisites, because unified mode deploys services from each.
Ingress controller considerations [#decide-ingress-controller]
Astro Private Cloud requires a Kubernetes Ingress controller to function and provides an integrated Ingress controller by default. Before installing, decide whether to use Astronomer's integrated Ingress controller or a third-party ingress controller.
Astronomer generally recommends the integrated Ingress controller, but Astro Private Cloud also supports certain third-party ingress controllers.
Ingress controllers typically need elevated permissions, including a ClusterRole, to function. Specifically, the Astro Private Cloud Ingress controller requires the ability to:
- List all namespaces in the cluster.
- View ingresses in the namespaces.
- Retrieve secrets in the namespaces to locate and use private TLS certificates that service the ingresses.
If you have complex regulatory requirements, you might need to use an Ingress controller that’s approved by your organization and disable Astronomer’s integrated controller. You configure the Ingress controller during the installation.
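The three permissions listed above can be sketched as a ClusterRole. The manifest below is illustrative only, not the exact RBAC Astronomer ships, and the metadata name is hypothetical:

```shell
# Write an illustrative RBAC manifest for a third-party ingress controller.
# The metadata name is hypothetical; adapt it to your controller's install.
cat > ingress-controller-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apc-ingress-controller   # hypothetical name
rules:
  # List all namespaces in the cluster
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["list", "watch"]
  # View ingresses in the namespaces
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  # Retrieve secrets to locate private TLS certificates that service the ingresses
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
EOF
# A cluster admin would bind this role to the controller's service account
# with a ClusterRoleBinding and apply both with `kubectl apply -f`.
```

If your organization restricts cluster-wide secret access, this is the rule set to review with your security team before approving an ingress controller.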
Step 1: Plan your installation [#plan-your-installation]
Before installing APC, consider how many instances of the platform you want to host, because you install each instance on a separate Kubernetes cluster by following the instructions in this document.
Each instance of APC can host multiple Airflow environments, or Deployments. Some common types of APC instances you might consider hosting are:
- Sandbox: The lowest environment that contains no sensitive data, used only by system administrators to experiment, and not subject to change control.
- Development: User-accessible environment that is subject to most of the same restrictions of higher environments, with relaxed change control rules.
- Staging: All network, security, and patch versions are maintained at the same level as in the production environment. However, it provides no availability guarantees and includes relaxed change control rules.
- Production: The production instance hosts your production Airflow environments. You can choose to host development Airflow environments here or in environments with lower levels of support and restrictions.
Plan each environment as a pairing of one control plane with one or more data planes. Create a project folder for every environment you plan to host to contain its configuration files. For example, if you want to install a development environment, create a folder named ~/astronomer-dev/unified.
In addition to the default data plane included in unified mode, you can register additional data plane clusters by following the Install a Data Plane guide.
Certain files in the project directory might contain secrets when you set up your sandbox or development environments. For your first install, keep these secrets in a secure place on a suitable machine. As you progress to higher environments, such as staging or production, secure these files separately in a vault and use the remaining project files in your directory to serve as the basis for your CI/CD deployment.
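The project-folder layout described above can be created with a few commands. The astronomer-dev path is the development example from this guide; the staging and prod names are illustrative:

```shell
# One project folder per environment; the "unified" subfolder holds the
# configuration files for that environment's unified-mode install.
mkdir -p ~/astronomer-dev/unified      # development example used in this guide
mkdir -p ~/astronomer-staging/unified  # illustrative staging environment
mkdir -p ~/astronomer-prod/unified     # illustrative production environment
```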
Step 2: Create values.yaml from a template [#create-valuesyaml]
APC uses Helm to apply platform-level configurations. Choose your cloud provider tab below to copy a ready-to-use values.yaml. Then, in the following steps, update image tags, domains, and secrets before deploying.
As you work with the template configuration, use the following guidelines to avoid installation issues:
- Do not make any changes to the values.yaml file until instructed to do so.
- Do not run helm upgrade or upgrade.sh until instructed to do so.
- Ignore any instructions to run helm upgrade from other Astronomer documentation until after you complete this unified mode installation procedure.
EKS on AWS
GKE on GCP
AKS on Azure
Other
###########################################
### Astronomer global configuration for EKS
###########################################
global:
  # Installation mode for the control plane
  plane:
    mode: unified

  # Base domain for all control plane subdomains exposed through ingress
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of the secret containing the TLS certificate; change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certification authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  #   - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Use a sidecar for exporting task logs
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer

  # Enable dag-only deployments
  dagOnlyDeployment:
    enabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using the in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
# nginx:
#   # Static IP address the nginx ingress should bind to
#   loadBalancerIP: ~
#   # Set privateLoadBalancer to 'false' to make nginx request a LoadBalancer on a public vnet
#   privateLoadBalancer: true
#   # Dictionary of arbitrary annotations to add to the nginx ingress.
#   # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
#   # Change to 'elb' if your node group is private and doesn't use a NAT gateway
#   ingressAnnotations: {service.beta.kubernetes.io/aws-load-balancer-type: nlb}
#   # If all subnets are private, auto-discovery may fail.
#   # You must enter the subnet IDs manually in the annotation below.
#   # service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-id-1,subnet-id-2

################################
### Astronomer app configuration
################################
astronomer:
  houston:
    upgradeDeployments:
      enabled: false
    # Application configuration for Houston
    config:
      publicSignups: true ## Set to false immediately after the initial system admin user is created
      # Allowed user email domains for system-level roles
      # allowedSystemLevelDomains: []
      # Default configuration for deployments.
      # Can be overridden on a per-data-plane basis.
      deployments:
        # Enable Airflow 3 deployments for clusters
        airflowV3:
          enabled: true
        # Allow deletions to immediately remove the database and namespace
        # hardDeleteDeployment: true
        # Allows you to set your release names
        # manualReleaseNames: true
        # Flag to enable using IAM roles (don't enter a specific role)
        # serviceAccountAnnotationKey: eks.amazonaws.com/role-arn
        # Required if dagOnlyDeployment is enabled
        configureDagDeployment: true
        # Enables the API for updating deployments
        # enableUpdateDeploymentImageEndpoint: true
        # upsertDeploymentEnabled: true
      # email:
      #   enabled: false
      #   reply: noreply@your.domain
      # secret:
      #   - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
      #     secretName: "astronomer-smtp"
      #     secretKey: "connection"
      # User authentication mechanism. One of the following should be enabled.
      auth:
        github:
          # Allow users to authenticate with GitHub, enabled by default
          enabled: false
        # local:
        #   # Allow users and passwords in the Houston database, disabled by default
        #   enabled: false
        openidConnect:
          # okta:
          #   enabled: false
          # microsoft:
          #   enabled: false
          # adfs:
          #   enabled: false
          # custom:
          #   enabled: false
          google:
            # Allow users to authenticate with Google, enabled by default
            enabled: false

#################################
## Default tagged groups enabled
#################################
# tags:
#   # Enable platform components by default (nginx, astronomer)
#   platform: true
#   # Enable monitoring stack (prometheus, kube-state)
#   monitoring: true
#   # Enable logging stack (elasticsearch, vector)
#   logging: true
###########################################
### Astronomer global configuration for GKE
###########################################
global:
  # Installation mode for the control plane
  plane:
    mode: unified

  # Base domain for all control plane subdomains exposed through ingress
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of the secret containing the TLS certificate; change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certification authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  #   - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Use a sidecar for exporting task logs
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer

  # Enable dag-only deployments
  dagOnlyDeployment:
    enabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using the in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
# nginx:
#   # Static IP address the nginx ingress should bind to
#   loadBalancerIP: ~
#   # Set privateLoadBalancer to 'false' to make nginx request a LoadBalancer on a public vnet
#   privateLoadBalancer: true
#   # Dictionary of arbitrary annotations to add to the nginx ingress.
#   # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
#   ingressAnnotations: {}

################################
### Astronomer app configuration
################################
astronomer:
  houston:
    upgradeDeployments:
      enabled: false
    # Application configuration for Houston
    config:
      publicSignups: true ## Set to false immediately after the initial system admin user is created
      # Allowed user email domains for system level roles
      # allowedSystemLevelDomains: []
      # Default configuration for deployments.
      # Can be overridden on a per-data-plane basis.
      deployments:
        # Enable Airflow 3 deployments for clusters
        airflowV3:
          enabled: true
        # Allow deletions to immediately remove the database and namespace
        # hardDeleteDeployment: true
        # Allows you to set your release names
        # manualReleaseNames: true
        # Flag to enable using IAM roles (don't enter a specific role)
        # serviceAccountAnnotationKey: iam.gke.io/gcp-service-account
        # Required if dagOnlyDeployment is enabled
        configureDagDeployment: true
        # Enables the API for updating deployments
        # enableUpdateDeploymentImageEndpoint: true
        # upsertDeploymentEnabled: true
      # email:
      #   enabled: false
      #   reply: noreply@your.domain
      # secret:
      #   - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
      #     secretName: "astronomer-smtp"
      #     secretKey: "connection"
      # User authentication mechanism. One of the following should be enabled.
      auth:
        github:
          # Allow users to authenticate with GitHub, enabled by default
          enabled: false
        # local:
        #   # Allow users and passwords in the Houston database, disabled by default
        #   enabled: false
        openidConnect:
          # okta:
          #   enabled: false
          # microsoft:
          #   enabled: false
          # adfs:
          #   enabled: false
          # custom:
          #   enabled: false
          google:
            # Allow users to authenticate with Google, enabled by default
            enabled: false

#################################
## Default tagged groups enabled
#################################
# tags:
#   # Enable platform components by default (nginx, astronomer)
#   platform: true
#   # Enable monitoring stack (prometheus, kube-state)
#   monitoring: true
#   # Enable logging stack (elasticsearch, vector)
#   logging: true
###########################################
### Astronomer global configuration for AKS
###########################################
global:
  # Installation mode for the control plane
  plane:
    mode: unified

  # Base domain for all control plane subdomains exposed through ingress
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of the secret containing the TLS certificate; change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certification authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  #   - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Use a sidecar for exporting task logs
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer

  # Enable dag-only deployments
  dagOnlyDeployment:
    enabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using the in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
nginx:
  # Static IP address the nginx ingress should bind to
  # loadBalancerIP: ~
  # Set privateLoadBalancer to 'false' to make nginx request a LoadBalancer on a public vnet
  # privateLoadBalancer: true
  # Dictionary of arbitrary annotations to add to the nginx ingress.
  # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
  # Required for the Azure load balancer on Kubernetes 1.24 and later
  ingressAnnotations:
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"

################################
### Astronomer app configuration
################################
astronomer:
  houston:
    upgradeDeployments:
      enabled: false
    # Application configuration for Houston
    config:
      publicSignups: true ## Set to false immediately after the initial system admin user is created
      # Allowed user email domains for system level roles
      # allowedSystemLevelDomains: []
      # Default configuration for deployments.
      # Can be overridden on a per-data-plane basis.
      deployments:
        # Enable Airflow 3 deployments for clusters
        airflowV3:
          enabled: true
        # Allow deletions to immediately remove the database and namespace
        # hardDeleteDeployment: true
        # Allows you to set your release names
        # manualReleaseNames: true
        # Flag to enable using IAM roles (don't enter a specific role)
        # serviceAccountAnnotationKey:
        # Required if dagOnlyDeployment is enabled
        configureDagDeployment: true
        # Enables the API for updating deployments
        # enableUpdateDeploymentImageEndpoint: true
        # upsertDeploymentEnabled: true
      # email:
      #   enabled: false
      #   reply: noreply@your.domain
      # secret:
      #   - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
      #     secretName: "astronomer-smtp"
      #     secretKey: "connection"
      # User authentication mechanism. One of the following should be enabled.
      auth:
        github:
          # Allow users to authenticate with GitHub, enabled by default
          enabled: false
        # local:
        #   # Allow users and passwords in the Houston database, disabled by default
        #   enabled: false
        openidConnect:
          # okta:
          #   enabled: false
          # microsoft:
          #   enabled: false
          # adfs:
          #   enabled: false
          # custom:
          #   enabled: false
          google:
            # Allow users to authenticate with Google, enabled by default
            enabled: false

#################################
## Default tagged groups enabled
#################################
# tags:
#   # Enable platform components by default (nginx, astronomer)
#   platform: true
#   # Enable monitoring stack (prometheus, kube-state)
#   monitoring: true
#   # Enable logging stack (elasticsearch, vector)
#   logging: true
#######################################################
### Astronomer global configuration for other providers
#######################################################
global:
  # Installation mode for the control plane
  plane:
    mode: unified

  # Base domain for all control plane subdomains exposed through ingress
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of the secret containing the TLS certificate; change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certification authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  #   - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Use a sidecar for exporting task logs
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer

  # Enable dag-only deployments
  dagOnlyDeployment:
    enabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using the in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
# nginx:
#   # Static IP address the nginx ingress should bind to
#   loadBalancerIP: ~
#   # Set privateLoadBalancer to 'false' to make nginx request a LoadBalancer on a public vnet
#   privateLoadBalancer: true
#   # Dictionary of arbitrary annotations to add to the nginx ingress.
#   # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
#   ingressAnnotations: {}

################################
### Astronomer app configuration
################################
astronomer:
  houston:
    upgradeDeployments:
      enabled: false
    # Application configuration for Houston
    config:
      publicSignups: true ## Set to false immediately after the initial system admin user is created
      # Allowed user email domains for system level roles
      # allowedSystemLevelDomains: []
      # Default configuration for deployments.
      # Can be overridden on a per-data-plane basis.
      deployments:
        # Enable Airflow 3 deployments for clusters
        airflowV3:
          enabled: true
        # Allow deletions to immediately remove the database and namespace
        # hardDeleteDeployment: true
        # Allows you to set your release names
        # manualReleaseNames: true
        # Flag to enable using IAM roles (don't enter a specific role)
        # serviceAccountAnnotationKey:
        # Required if dagOnlyDeployment is enabled
        configureDagDeployment: true
        # Enables the API for updating deployments
        # enableUpdateDeploymentImageEndpoint: true
        # upsertDeploymentEnabled: true
      # email:
      #   enabled: false
      #   reply: noreply@your.domain
      # secret:
      #   - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
      #     secretName: "astronomer-smtp"
      #     secretKey: "connection"
      # User authentication mechanism. One of the following should be enabled.
      auth:
        github:
          # Allow users to authenticate with GitHub, enabled by default
          enabled: false
        # local:
        #   # Allow users and passwords in the Houston database, disabled by default
        #   enabled: false
        openidConnect:
          # okta:
          #   enabled: false
          # microsoft:
          #   enabled: false
          # adfs:
          #   enabled: false
          # custom:
          #   enabled: false
          google:
            # Allow users to authenticate with Google, enabled by default
            enabled: false

#################################
## Default tagged groups enabled
#################################
# tags:
#   # Enable platform components by default (nginx, astronomer)
#   platform: true
#   # Enable monitoring stack (prometheus, kube-state)
#   monitoring: true
#   # Enable logging stack (elasticsearch, vector)
#   logging: true
The values.yaml examples leave astronomer.houston.config.publicSignups set to true so that you can create the initial administrator account. For information about restricting account creation afterward, see Disable anonymous account creation.
Step 3: Choose and configure a base domain [#choose-base-domain]
When you install APC, it creates a variety of services that your users access to manage, monitor, and run Airflow.
Choose a base domain, such as astronomer.example.com, astro-sandbox.example.com, or astro-prod.example.internal, for which:
- You can create and edit DNS records.
- You can issue TLS certificates.
Astronomer components use the following addresses under the base domain:
app.<base-domain>
deployments.<base-domain>
houston.<base-domain>
alertmanager.<base-domain>
prometheus.<base-domain>
registry.<base-domain>
The base domain itself does not need to be available and can point to another service not associated with Astronomer or Airflow. If the base domain is available, you can choose to establish a vanity redirect from <base-domain> to app.<base-domain> later in the installation process.
When choosing a base domain, consider the following:
- The name you choose must be resolvable by both your users and Kubernetes itself.
- All hostnames must remain under the base domain (for example, app.<base-domain>), so ensure you can create DNS records and issue TLS certificates for those subdomains.
- You need to have or obtain a TLS certificate that is recognized as valid by your users. If you use the APC integrated container registry, the TLS certificate must also be recognized as valid by Kubernetes itself.
- Wildcard certificates are only valid one level deep. For example, an ingress controller that uses a certificate for *.example.com can provide service for app.example.com but not app.astronomer-dev.example.com.
- The bottom-level hostnames, such as app, registry, or prometheus, are fixed and cannot be changed.
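You can see the one-level-deep wildcard rule for yourself with openssl. This sketch self-signs a throwaway certificate whose only SAN is *.example.com (a placeholder domain) and checks two hostnames against it:

```shell
# Self-sign a throwaway certificate whose only SAN is *.example.com.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout wc.key -out wc.pem \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com"

# One label deep: matches the wildcard.
openssl verify -CAfile wc.pem -verify_hostname app.example.com wc.pem

# Two labels deep: fails hostname verification.
openssl verify -CAfile wc.pem -verify_hostname app.astronomer-dev.example.com wc.pem \
  || echo "hostname mismatch, as expected"
```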
The base domain is visible to end users. They can view the base domain in the following scenarios:
- When users access the APC UI. For example, https://app.sandbox-astro.example.com.
- When users access an Airflow Deployment. For example, https://deployments.sandbox-astro.example.com/deployment-release-name/airflow.
- When users authenticate to the Astro CLI. For example, astro login sandbox-astro.example.com.
If you install APC on OpenShift and also want to use OpenShift's integrated ingress controller, you can use the hostname of the default OpenShift ingress controller as your base domain, such as app.apps.<OpenShift-domain>. Doing this requires permission to reconfigure the route admission policy for the standard ingress controller to InterNamespaceAllowed. See Third Party Ingress Controller - Configuration notes for OpenShift for additional information and options.
Configure the base domain
Locate global.baseDomain in your values.yaml file and change it to your base domain, as shown in the following example:
global:
  # Base domain for all subdomains exposed through ingress
  baseDomain: sandbox-astro.example.com
Step 4: Create a platform namespace [#create-namespace]
In your Kubernetes cluster, create a Kubernetes namespace to contain the APC platform. The following example uses apc as the namespace.
kubectl create namespace apc
APC uses the contents of this namespace to provision and manage Airflow instances running in other namespaces. Each Airflow instance has its own isolated namespace.
Step 5: Request and validate an Astronomer TLS certificate [#astronomer-tls-certificate]
To install APC you need a TLS certificate that is valid for several domains. One of the domains is the primary name on the certificate, also known as the common name (CN). The additional domains are equally valid, supplementary domains known as Subject Alternative Names (SANs).
Astronomer requires a private certificate in the APC platform namespace, even if you use a third-party ingress controller that doesn’t otherwise require it.
Request an ingress controller TLS certificate [#request-a-certificate-bundle]
Request a TLS certificate from your security team for APC. In your request, include the following:
- Your chosen base domain as the Common Name (CN). If your certificate authority will not issue certificates for the bare base domain, use app.<base-domain> as the CN instead.
- Either a wildcard SAN of *.<base-domain> (plus an explicit SAN for <base-domain>) or each of the following hostnames listed individually:
app.<base-domain> (omit if already used as the Common Name)
deployments.<base-domain> (required for Airflow UIs and APIs)
houston.<base-domain>
prometheus.<base-domain>
registry.<base-domain> (required if you keep the integrated container registry enabled)
alertmanager.<base-domain> (required if you keep the integrated Alertmanager enabled)
- If you use the APC integrated container registry, specify that the encryption type of the certificate must be RSA.
- Request the following return format:
- A key.pem containing the private key in PEM format.
- Either a full-chain.pem (containing the public certificate and the additional certificates required to validate it, in PEM format) or a bare cert.pem and explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
- Either the private-root-ca.pem, in PEM format, of the private Certificate Authority used to create your certificate, or a statement that the certificate is signed by a public Certificate Authority.
If you use the APC integrated container registry, the encryption type used on your TLS certificate must be RSA. Certbot users must include --key-type rsa when requesting certificates. Most other solutions generate RSA keys by default.
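If your security team asks you to supply a CSR rather than generating the key themselves, the following sketch produces an RSA key and a CSR covering the bare domain plus a wildcard SAN. The astronomer.example.com names are placeholders for your base domain:

```shell
# Generate a 2048-bit RSA private key and a CSR with bare-domain and wildcard SANs.
openssl req -newkey rsa:2048 -nodes \
  -keyout key.pem -out astronomer.csr \
  -subj "/CN=astronomer.example.com" \
  -addext "subjectAltName=DNS:astronomer.example.com,DNS:*.astronomer.example.com"

# Confirm the key type is RSA before handing the CSR to your CA.
openssl req -in astronomer.csr -noout -text | grep "Public Key Algorithm"
```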
Validate the received certificate and associated items
Ensure that you received each of the following three items:
- A key.pem containing the private key in PEM format.
- Either a full-chain.pem, in PEM format, that contains the public certificate and the additional certificates required to validate it, or a bare cert.pem and explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
- Either the private-root-ca.pem, in PEM format, of the private Certificate Authority used to create your certificate, or a statement that the certificate is signed by a public Certificate Authority.
To validate that your security team generated the correct certificate, run the following command using the openssl CLI:

```sh
openssl x509 -in <your-certificate-filepath> -text -noout
```

This command generates a report. If the X509v3 Subject Alternative Name section of the report includes either a single `*.<base-domain>` wildcard domain or all of the required subdomains, then the certificate creation was successful.
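If you want to rehearse this check before the real certificate arrives, the following throwaway example (all hostnames hypothetical) creates a short-lived self-signed certificate with two SANs and inspects it the same way. It assumes OpenSSL 1.1.1 or later for the `-addext` flag.

```shell
# Hypothetical demo only: real certificates come from your security team.
workdir="$(mktemp -d)"

# Create a self-signed certificate carrying two SAN entries
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$workdir/demo.key" -out "$workdir/demo.pem" \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:app.example.com,DNS:houston.example.com"

# The same inspection you run against the real certificate
openssl x509 -in "$workdir/demo.pem" -text -noout | grep -A1 "Subject Alternative Name"
```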
Confirm that your full-chain certificate chain is ordered correctly. To determine your certificate chain order, run the following command using the openssl CLI:

```sh
openssl crl2pkcs7 -nocrl -certfile <your-full-chain-certificate-filepath> | openssl pkcs7 -print_certs -noout
```
The command generates a report of all certificates. Verify that the certificates are in the following order:
- Domain
- (Optional) Intermediate
- Root
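To see what a correctly ordered chain report looks like, you can build a throwaway two-certificate chain (a demo root CA plus a leaf it signs, all names hypothetical) and run the same command against it:

```shell
workdir="$(mktemp -d)"

# Throwaway root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" -subj "/CN=Demo Root CA"

# Leaf certificate signed by the throwaway CA
openssl req -newkey rsa:2048 -nodes \
  -keyout "$workdir/leaf.key" -out "$workdir/leaf.csr" -subj "/CN=app.example.com"
openssl x509 -req -days 1 -in "$workdir/leaf.csr" \
  -CA "$workdir/ca.pem" -CAkey "$workdir/ca.key" -CAcreateserial \
  -out "$workdir/leaf.pem"

# A correctly ordered bundle lists the domain certificate first, then the root
cat "$workdir/leaf.pem" "$workdir/ca.pem" > "$workdir/full-chain.pem"
openssl crl2pkcs7 -nocrl -certfile "$workdir/full-chain.pem" \
  | openssl pkcs7 -print_certs -noout
```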
(Optional) Additional validation for the Astronomer integrated container registry [#docker-registry-cert-encryption-restrictions]
If you don’t plan to store images in Astronomer’s integrated container registry and instead plan to store all container images using an external container registry, you can skip this step.
The APC integrated container registry requires an RSA private key to sign traffic originating from the APC platform. Confirm that the certificate uses RSA before proceeding.
If your certificate authority did not already provide a bare public certificate, run the following command to extract it from the full-chain certificate file:

```sh
openssl crl2pkcs7 -nocrl -certfile full-chain.pem | openssl pkcs7 -print_certs -out cert.pem
```
Examine the public certificate and ensure that the Public Key Algorithm is rsaEncryption and that the Signature Algorithms are RSA-based (for example, sha256WithRSAEncryption):

```sh
openssl x509 -in cert.pem -text -noout | grep Algorithm
```

```text
Signature Algorithm: sha256WithRSAEncryption
Public Key Algorithm: rsaEncryption
Signature Algorithm: sha256WithRSAEncryption
```
If your key is not compatible with the APC integrated container registry, ask your Certificate Authority to re-issue the credentials and emphasize the need for an RSA cert, or plan to use an external container registry instead.
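As an illustration of the difference, the following hypothetical demo generates one RSA and one EC self-signed certificate; only the RSA one passes the check above:

```shell
workdir="$(mktemp -d)"

# RSA certificate: compatible with the integrated registry
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$workdir/rsa.key" -out "$workdir/rsa.pem" -subj "/CN=rsa.example.com"

# EC certificate: NOT compatible with the integrated registry
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes -days 1 \
  -keyout "$workdir/ec.key" -out "$workdir/ec.pem" -subj "/CN=ec.example.com"

openssl x509 -in "$workdir/rsa.pem" -noout -text | grep "Public Key Algorithm"
openssl x509 -in "$workdir/ec.pem" -noout -text | grep "Public Key Algorithm"
```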
Determine whether or not your certificate was issued by an intermediate certificate-authority. If you do not know, assume you use an intermediate certificate and attempt to obtain a full-chain.pem bundle from your certificate authority.
Certificates issued by operators of root certificate authorities, including but not limited to LetsEncrypt, are frequently issued from intermediate certificate authorities associated with a trusted root CA.
APC backend services have stricter trust requirements than most web browsers. A browser might auto-complete the chain and consider your certificate valid even if you don't provide the intermediate certificate authority's public certificate. APC backend services can reject the same certificate, causing DAG and image deploys to fail.
If, and only if, your certificate was issued directly by the root Certificate Authority of a universally trusted certificate authority, and not from one of their intermediaries, then the server.crt is also the full-chain certificate bundle.
Identify your full-chain public certificate .pem file and use it while storing and configuring the ingress controller TLS certificate.
The `--cert` parameter must reference your `full-chain.pem`, which includes the server certificate and any intermediate certificates. Using the server certificate directly causes DAG and image deploys to fail.
Run the following command to store the public full-chain certificate in the APC platform namespace in a tls-type Kubernetes secret. You can create a custom name for this secret. The following example uses the name `astronomer-tls`.

```sh
kubectl -n <astronomer platform namespace> create secret tls astronomer-tls --cert <fullchain-pem-filepath> --key <your-private-key-filepath>
```
However, if your security team has confirmed that there are no intermediate certificates, run the following command.

```sh
kubectl -n astronomer create secret tls astronomer-tls --cert full-chain.pem --key server_private_key.pem
```
Naming the secret astronomer-tls with no substitutions is recommended when using a third-party ingress controller.
If you use APC’s integrated ingress controller, you can skip this step.
Complete the full setup as described in Third-party Ingress-Controllers, which includes steps to configure ingress controllers in specific environment types. When you’re done, return to this page and continue to the next step.
Skip this step if you don't use a private Certificate Authority (private CA) to sign the certificate used by your ingress controller or by any of the following services that the APC platform interacts with.
APC trusts public Certificate Authorities automatically.
APC must be configured to trust any private Certificate Authorities issuing certificates for systems APC interacts with, including, but not limited to the following:
- Ingress controller
- Email server, unless disabled
- Any container registries that Kubernetes pulls from
- If using OAUTH, the OAUTH provider
- If using external Elasticsearch, any external Elasticsearch instances
- If using external Prometheus, any external Prometheus instances
Perform the procedure described in Configuring private CAs for each certificate authority used to sign TLS certificates. After creating the trust secret (for example astronomer-ca), add it to global.privateCaCerts in values.yaml so platform components trust the issuer.
Astro CLI users must also configure both their operating system and container solution, Docker Desktop or Podman, to trust the private Certificate Authority that was used to create the certificate used by the APC ingress controller and any third-party container registries.
Step 9: Confirm your Kubernetes cluster trusts required CAs [#private-cas-for-kubernetes]
If at least one of the following circumstances apply to your installation, you must complete this step:
- You configured APC to pull platform container images from an external container registry that uses a certificate signed by a private CA.
- You plan for your users to deploy Airflow images to APC’s integrated container registry and Astronomer is using a TLS certificate issued by a private CA.
- Users will deploy images to an external container registry and that registry is using a TLS certificate issued by a private CA.
Kubernetes must be able to pull images from one or more container registries for APC to function. By default, Kubernetes only trusts publicly signed certificates. This means that by default, Kubernetes does not honor the list of certificates trusted by the APC platform.
Many enterprises configure Kubernetes to trust additional certificate authorities as part of their standard cluster creation procedure. Contact your Kubernetes Administrator to find out what, if any, private certificates are currently trusted by your Kubernetes Cluster. Then, consult your Kubernetes administrator and Kubernetes provider’s documentation for instructions on configuring Kubernetes to trust additional CAs.
Follow procedures for your Kubernetes provider to configure Kubernetes to trust each CA associated with your container registries, including the integrated container registry, if applicable.
Certain clusters do not provide a mechanism to configure the list of certificates trusted by Kubernetes.
While configuring the Kubernetes list of trusted certificates is a customer responsibility, APC includes an optional component that can, for certain Kubernetes cluster configurations, add certificates defined in `global.privateCaCerts` to the list of certificates trusted by Kubernetes. Enable it by setting `global.privateCaCertsAddToHost.enabled` and `global.privateCaCertsAddToHost.addToContainerd` to `true` in your values.yaml file and setting `global.privateCaCertsAddToHost.containerdConfigToml` to:

```toml
[host."https://registry.<base-domain>"]
  ca = "/etc/containerd/certs.d/<registry hostname>/<secret name>.pem"
```
For example, if your base domain is astro-sandbox.example.com and the CA public certificate is stored in the platform namespace in a secret named my-private-ca, the `global.privateCaCertsAddToHost` section would be:

```yaml
global:
  privateCaCertsAddToHost:
    enabled: true
    addToContainerd: true
    hostDirectory: /etc/containerd/certs.d
    containerdConfigToml: |-
      [host."https://registry.astro-sandbox.example.com"]
        ca = "/etc/containerd/certs.d/registry.astro-sandbox.example.com/my-private-ca.pem"
```
APC requires the ability to send email to:
- Notify users of errors with their Airflow Deployments.
- Send emails to invite new users to Astronomer.
- Send certain platform alerts, which are enabled by default and can be configured.
APC sends all outbound email using SMTP.
- Obtain a set of SMTP credentials from your email administrator to use to send email from APC. When you request an email address and display name, remember that these emails are not designed for users to reply to directly. Request all of the following information:
  - Email address.
  - Email display name requirements. Some email servers require a From line of: `Do Not Reply <donotreply@example.com>`.
  - SMTP username. This is usually the same as the email address.
  - SMTP password.
  - SMTP hostname.
  - SMTP port.
  - Whether or not the connection supports TLS.
If there is a `/` or any other escape character in your username or password, you might need to URL encode those characters.
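One quick way to URL encode credentials, assuming `python3` is available on your machine (the password below is a made-up example):

```shell
# '@' becomes %40 and '/' becomes %2F
encoded="$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'my@pa/ss')"
echo "$encoded"   # -> my%40pa%2Fss
```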
- Ensure that your Kubernetes cluster has permissions configured to send outbound email to the SMTP server.
- Change the configuration in values.yaml from noreply@my.email.internal to an email address that is valid to use with your SMTP credentials.
- Construct an email connection string and store it in a secret in the Astronomer platform namespace. The following example shows how to store the connection in a secret called `astronomer-smtp`. Make sure to URL encode the username and password if they contain special characters.

```sh
kubectl -n astronomer create secret generic astronomer-smtp --from-literal connection="smtp://my%40user:my%40pass@smtp.email.internal/?requireTLS=true"
```
In general, an SMTP URI is formatted as smtps://USERNAME:PASSWORD@HOST/?pool=true. The following table contains examples of the URI for some of the most popular SMTP services:
| Provider | Example SMTP URL |
|---|---|
| AWS SES | `smtp://AWS_SMTP_Username:AWS_SMTP_Password@email-smtp.us-east-1.amazonaws.com/?requireTLS=true` |
| SendGrid | `smtps://apikey:SG.sometoken@smtp.sendgrid.net:465/?pool=true` |
| Mailgun | `smtps://xyz%40example.com:password@smtp.mailgun.org/?pool=true` |
| Office365 | `smtp://xyz%40example.com:password@smtp.office365.com:587/?requireTLS=true` |
| Custom SMTP-relay | `smtp://smtp-relay.example.com:25/?ignoreTLS=true` |
If your SMTP provider is not listed, refer to the provider’s documentation for information on creating an SMTP URI.
Skip this step if your cluster defines a default volume storage class and you want to use it for all volumes associated with APC and its Airflow Deployments.
Astronomer strongly recommends that you do not back any volumes used for APC with mechanical hard drives.
Create storage-class-config.yaml in your project directory and update the configuration to match your environment:

```yaml
global:
  prometheus:
    persistence:
      storageClassName: "<desired-storage-class>"
elasticsearch:
  common:
    persistence:
      storageClassName: "<desired-storage-class>"
astronomer:
  registry:
    persistence:
      storageClassName: "<desired-storage-class>"
  houston:
    config:
      deployments:
        helm:
          dagDeploy:
            persistence:
              storageClass: "<desired-storage-class>"
          airflow:
            redis:
              persistence:
                storageClassName: "<desired-storage-class>"
nats:
  nats:
    jetStream:
      fileStorage:
        storageClassName: "<desired-storage-class>"
# this option does not apply when using an external postgres database
# bundled postgresql not a supported option, only for use in proof-of-concepts
postgresql:
  persistence:
    storageClass: "<desired-storage-class>"
```
Merge these values into values.yaml manually or by using a YAML merge tool of your choosing.
Astronomer requires a central Postgres database that acts as the backend for the APC Houston API and hosts individual metadata databases for all Deployments created on the platform.
If, while evaluating APC, you need to create a temporary environment where Postgres is not available, locate the `global.postgresqlEnabled` option already present in your values.yaml, set it to true, and skip the remainder of this step. Note that setting `global.postgresqlEnabled` to true is an unsupported configuration and should never be used in any development, staging, or production environment.
If you use Azure Database for PostgreSQL or another Postgres instance that does not enable the pg_trgm extension by default, you must enable the pg_trgm extension prior to installing APC. If pg_trgm is not enabled, the install fails. pg_trgm is enabled by default on Amazon RDS and Google Cloud SQL for PostgreSQL. For instructions on enabling the pg_trgm extension for Azure Flexible Server, see PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server.
Additional requirements apply to the following databases:
- AWS RDS:
  - t2.medium is the minimum RDS instance size you can use.
- Azure Flexible Server:
  - You must enable the pg_trgm extension as described in the advisory earlier in this section.
  - Set `global.ssl.mode` to `prefer` in your values.yaml file.
Create a Kubernetes Secret named astronomer-bootstrap that points to your database. You must URL encode any special characters in your Postgres password.
The in-cluster Postgres option (global.postgresqlEnabled: true) is deprecated and should only be used for short-lived testing. Always rely on an external Postgres instance for any persistent environment.
PostgreSQL usernames must be lowercase.
To create this secret, run the following command, replacing the Astronomer platform namespace, username, password, database hostname, and database port with their respective values. Remember that the username and password must be URL encoded if they contain special characters:

```sh
kubectl -n <astronomer platform namespace> create secret generic astronomer-bootstrap \
  --from-literal connection="postgres://<url-encoded username>:<url-encoded password>@<database hostname>:<database port>"
```
For example, for a username named bob with password abc@abc at hostname some.host.internal, you would run:

```sh
kubectl -n astronomer create secret generic astronomer-bootstrap \
  --from-literal connection="postgres://bob:abc%40abc@some.host.internal:5432"
```
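Before creating the secret, you can sanity-check that the encoded connection string decodes back to the intended credentials. This sketch assumes `python3` is available and reuses the example values above:

```shell
python3 - <<'EOF'
from urllib.parse import urlsplit, unquote

conn = "postgres://bob:abc%40abc@some.host.internal:5432"
parts = urlsplit(conn)

# The decoded values should match the credentials you were issued
print("username:", unquote(parts.username))
print("password:", unquote(parts.password))
print("host:", parts.hostname)
print("port:", parts.port)
EOF
```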
Skip this step if you are installing APC onto a Kubernetes cluster that can pull container images from public image repositories and you don’t want to mirror these images locally.
Anonymous
Amazon ECR
Other registries
If you can retrieve images from a registry that can be reached without credentials, ensure the endpoint hosting the registry is restricted to trusted networks, for example, private subnets or VPN access. Avoid exposing the platform image registry directly to the public internet. No additional Astronomer configuration is required beyond setting the repository locations later in this step.
- Grant your worker nodes or IRSA service accounts the IAM permissions required to pull images from the target ECR repository. At minimum, allow `ecr:GetAuthorizationToken`, `ecr:BatchCheckLayerAvailability`, `ecr:GetDownloadUrlForLayer`, and `ecr:BatchGetImage`.
- Ensure network access from the cluster to the appropriate ECR endpoints (for example, VPC endpoints or public ECR endpoints).
- Set the platform repository prefix in values.yaml. For example:

```yaml
global:
  privateRegistry:
    enabled: true
    repository: <account-id>.dkr.ecr.<region>.amazonaws.com/<platform-prefix>
astronomer:
  houston:
    config:
      deployments:
        helm:
          runtimeImages:
            airflow:
              repository: <account-id>.dkr.ecr.<region>.amazonaws.com/<platform-prefix>/astro-runtime
          runtimeImagesV3:
            airflow:
              repository: <account-id>.dkr.ecr.<region>.amazonaws.com/<platform-prefix>/runtime
          airflow:
            defaultAirflowRepository: <account-id>.dkr.ecr.<region>.amazonaws.com/<platform-prefix>/astro-runtime
            defaultRuntimeRepository: <account-id>.dkr.ecr.<region>.amazonaws.com/<platform-prefix>/astro-runtime
```
When you rely on IAM-based authentication, global.privateRegistry.secretName is not required. If you use static credentials, create the matching Docker registry secret following the AWS ECR documentation and set secretName accordingly.
- Create a Docker registry secret in the Astronomer namespace and annotate it so Commander propagates the credentials to Deployment namespaces:

```sh
kubectl -n <astronomer platform namespace> create secret docker-registry <secret-name> \
  --docker-server=<registry-host> \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>

kubectl -n <astronomer platform namespace> annotate secret <secret-name> \
  "astronomer.io/commander-sync"="platform=astronomer"
```
- Update values.yaml so the platform charts reference your registry and credentials:

```yaml
global:
  privateRegistry:
    enabled: true
    repository: <custom-platform-repo-prefix>
    secretName: <secret-name>
astronomer:
  houston:
    config:
      deployments:
        helm:
          runtimeImages:
            airflow:
              repository: <custom-platform-repo-prefix>/astro-runtime
          airflow:
            defaultAirflowRepository: <custom-platform-repo-prefix>/ap-airflow
            defaultRuntimeRepository: <custom-platform-repo-prefix>/astro-runtime
```
- After applying the configuration change, run the following command to push the updated credentials to existing Deployment namespaces:

```sh
kubectl create job -n <astronomer platform namespace> --from=cronjob/<platform-release-name>-config-syncer upgrade-config-synchronization
```
For additional examples (including per-Deployment registry settings and air-gapped workflows), see Configure a custom registry for Deployment images.
Step 14: Determine which version of APC to install [#determine-version-of-astronomer]
Astronomer recommends new APC installations use the most recent version available in either the Stable or Long Term Support (LTS) release-channel. Keep this version number available for the following steps.
See APC’s lifecycle policy and version compatibility reference for more information.
Step 15: Fetch Airflow Helm charts [#fetch-airflow-helm-charts]
If you have internet access to https://helm.astronomer.io, run the following commands on the machine where you want to install APC:

```sh
helm repo add astronomer https://helm.astronomer.io/
helm repo update
```
If you don’t have internet access to https://helm.astronomer.io, download the APC Platform Helm chart file corresponding to the version of APC you are installing or upgrading to from https://helm.astronomer.io/astronomer-<version number>.tgz. For example, for APC v1.0.0 you would download https://helm.astronomer.io/astronomer-1.0.0.tgz. This file does not need to be uploaded to an internal chart repository.
Step 16: Create and customize upgrade.sh [#create-and-customize-upgrades]
Create a file named upgrade.sh in your platform deployment project directory containing the following script. Specify the following values at the beginning of the script:

- `CHART_VERSION`: Your APC version, including the patch and a `v` prefix. For example, `v1.0.0`.
- `RELEASE_NAME`: Your Helm release name. `astronomer` is strongly recommended.
- `NAMESPACE`: The namespace to install platform components into. `astronomer` is strongly recommended.
- `CHART_NAME`: Set to `astronomer/astronomer` if fetching platform images from the internet. Otherwise, specify the filename if you're installing from a file (for example, `astronomer-1.0.0.tgz`).
```bash
#!/bin/bash
set -xe

# typically astronomer
RELEASE_NAME=<astronomer-platform-release-name>
# typically astronomer
NAMESPACE=<astronomer-platform-namespace>
# typically astronomer/astronomer
CHART_NAME=<chart name>
# format is v<major>.<minor>.<patch> e.g. v1.0.0
CHART_VERSION=<v-prefixed version of the APC platform chart>
# ensure all the above environment variables have been set

helm repo add --force-update astronomer https://helm.astronomer.io
helm repo update

# upgradeDeployments false ensures that Airflow charts are not upgraded when this script is run.
# If you deployed a config change that is intended to reconfigure something inside Airflow,
# then you may set this value to "true" instead. When it is "true", each Airflow chart will
# restart. Note that some stable version upgrades require setting this value to true regardless of your own configuration.
helm upgrade --install --namespace $NAMESPACE \
  -f ./values.yaml \
  --reset-values \
  --version $CHART_VERSION \
  --debug \
  --set astronomer.houston.upgradeDeployments.enabled=false \
  $RELEASE_NAME \
  $CHART_NAME $@
```
This step is optional but strongly recommended for production environments so your cluster can pull platform images from a registry you control.
- Gather the list of required platform images using one of the following methods:
Shell
Windows PowerShell
Other
Mac and Linux users with jq installed can set CHART_VERSION in the following snippet and run it to produce a list of images.

```sh
CHART_VERSION=<v-prefixed version of the APC platform chart>
UNPREFIXED_CHART_VERSION=${CHART_VERSION#v}
curl -s https://updates.astronomer.io/astronomer-software/releases/astronomer-${UNPREFIXED_CHART_VERSION}.json | jq -r '(.astronomer.images, .airflow.images) | to_entries[] | "\(.value.repository):\(.value.tag)"' | sort
```
Windows PowerShell users can set $CHART_VERSION in the following snippet and run it to produce a list of images.

```powershell
$CHART_VERSION = "<v-prefixed version>"
$UNPREFIXED_CHART_VERSION = $CHART_VERSION.TrimStart('v')
$jsonUrl = "https://updates.astronomer.io/astronomer-software/releases/astronomer-$UNPREFIXED_CHART_VERSION.json"
$jsonContent = Invoke-WebRequest $jsonUrl -UseBasicParsing
$json = $jsonContent.Content | ConvertFrom-Json
$astronomerImages = $json.astronomer.images.PSObject.Properties.Value
$airflowImages = $json.airflow.images.PSObject.Properties.Value
$images = $astronomerImages + $airflowImages
$images |
  ForEach-Object { "$($_.repository):$($_.tag)" } |
  Sort-Object
```
Visit the release metadata page, download the JSON-formatted release metadata corresponding to the version of APC you are installing, and use another method of your choice to extract the list of images from beneath the astronomer.images and airflow.images keys.
- Copy these images to the container registry using the naming scheme you configured when you set up a custom image registry.
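If you don't have jq or PowerShell available, a small Python script can extract the same list from a downloaded metadata file. The sample file below only mimics the documented structure (`astronomer.images` and `airflow.images` keys); point the script at your real download instead:

```shell
# Write a minimal stand-in for the real release metadata file
cat > sample-release.json <<'EOF'
{
  "astronomer": {"images": {"houston": {"repository": "quay.io/astronomer/ap-houston-api", "tag": "1.0.0"}}},
  "airflow": {"images": {"statsd": {"repository": "quay.io/astronomer/ap-statsd-exporter", "tag": "1.0.0"}}}
}
EOF

# List every repository:tag under astronomer.images and airflow.images
python3 - sample-release.json <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    meta = json.load(f)
for group in ("astronomer", "airflow"):
    for image in meta[group]["images"].values():
        print(f"{image['repository']}:{image['tag']}")
EOF
```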
Step 18: Fetch Airflow/Astro Runtime updates [#fetch-airflow-updates]
If you are installing APC into an egress-controlled or air-gapped environment, perform the following steps.
By default, APC checks for Airflow updates, which are included in the Astro Runtime, once per day at midnight, by querying https://updates.astronomer.io/astronomer-runtime. This returns a JSON file with details about the latest available Astro Runtime versions.
In an egress-controlled or air gapped environment, you need to store the JSON file in the cluster itself, avoiding the external check. To store the JSON file in the cluster, complete the following steps:
- Download the JSON file and store it in a Kubernetes configmap by running the following commands:

```sh
curl -XGET https://updates.astronomer.io/astronomer-runtime -o astro_runtime_releases.json
kubectl -n <astronomer platform namespace> create configmap astro-runtime-base-images --from-file=astro_runtime_releases.json
```
- Add your configmap name, astro-runtime-base-images, to your Houston configuration using the `runtimeReleasesConfigMapName` setting:

```yaml
astronomer:
  houston:
    runtimeReleasesConfigMapName: astro-runtime-base-images
    config:
      airgapped:
        enabled: true
```
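Before creating the configmap, it's worth confirming that the downloaded file is valid JSON, so a truncated download or an HTML error page doesn't end up baked into the cluster. The stand-in file below is for illustration only; run the check against your real astro_runtime_releases.json:

```shell
# Stand-in for the real downloaded file (the structure shown is illustrative only)
echo '{"runtimeVersions": {}}' > astro_runtime_releases.json

# json.tool exits nonzero with a parse error if the file is not valid JSON
python3 -m json.tool astro_runtime_releases.json > /dev/null && echo "valid JSON"
```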
Step 19: (OpenShift only) Apply OpenShift-specific configuration [#openshift-configuration]
If you’re not installing APC into an OpenShift Kubernetes cluster, skip this step.
Add the following values into values.yaml. You can do this manually or by using a YAML merge tool of your choosing.

```yaml
global:
  openshiftEnabled: true
  sccEnabled: false
  extraAnnotations:
    kubernetes.io/ingress.class: openshift-default
    route.openshift.io/termination: "edge"
  authSidecar:
    enabled: true
  dagOnlyDeployment:
    securityContext:
      fsGroup: ""
  nodeExporterEnabled: false
  vectorEnabled: false
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer
elasticsearch:
  sysctlInitContainer:
    enabled: false
# bundled postgresql not a supported option, only for use in proof-of-concepts
postgresql:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false
```
Only Ingress objects with the annotation route.openshift.io/termination: "edge" are supported for generating routes in OpenShift 4.11 and later. Other termination types are no longer supported for automatic route generation. If you're on an older version of OpenShift, create routes manually.
APC on OpenShift is only supported when using a third-party ingress controller and the logging sidecar feature of APC. The preceding configuration enables both.
By default, APC automatically creates namespaces for each new Airflow Deployment.
You can restrict the Airflow management components of APC to a list of predefined namespaces and configure it to operate without a ClusterRole by following the instructions in Configure a Kubernetes namespace pool for APC. If you want to disable creation of roles and rolebindings for Commander, Config Syncer, and kube-state metrics, set `global.features.namespacePools.createRbac` to false.
When `global.rbacEnabled` is set to false, the platform no longer creates any roles, rolebindings, or service accounts. You must grant the required roles to the default Kubernetes service accounts yourself to continue with the platform install. See Bring your own Kubernetes service accounts for setup steps.
Running a logging sidecar to export Airflow task logs is essential for running APC in a multi-tenant cluster.
By default, APC creates a privileged DaemonSet to aggregate logs from Airflow components for viewing from within Airflow and the APC UI.
You can replace this privileged DaemonSet with unprivileged logging sidecars by following the instructions in Export logs using container sidecars.
Step 22: (Optional) Integrate an external identity provider [#integrate-an-external-identity-provider]
APC includes integrations for several of the most popular OAuth2 identity providers (IdPs), such as Okta and Microsoft Entra ID. Configuring an external IdP allows you to automatically provision and manage users in accordance with your organization's security requirements. See Integrate an auth system to configure the identity provider of your choice in your values.yaml file.
Step 23: Install APC using Helm [#install-astronomer-using-helm]
Deploy the platform using the upgrade.sh script you created earlier. Confirm that RELEASE_NAME, NAMESPACE, CHART_NAME, and CHART_VERSION reflect your environment, then run `./upgrade.sh`.

To review manifests before applying them, run `./upgrade.sh --dry-run` or use `helm template` with the same flags defined in the script.
Whether you use Astronomer’s integrated ingress controller or a third-party controller, publish the same set of DNS records so users can reach control plane services.
- If you use the integrated controller, get the load balancer address directly:

```sh
kubectl -n <astronomer platform namespace> get svc astronomer-nginx
```

- If you use a third-party controller, ask your ingress administrator for the hostname or IP address that should front the Astronomer routes (refer back to Configure a third-party ingress controller).
Create either a wildcard record such as `*.sandbox-astro.example.com` or individual CNAME records for the following hostnames so that traffic routes through the chosen load balancer:

- `app.<base-domain>` (required)
- `deployments.<base-domain>` (required for Airflow UIs and APIs)
- `houston.<base-domain>` (required)
- `prometheus.<base-domain>` (required)
- `registry.<base-domain>` (required if you keep the integrated container registry enabled)
- `alertmanager.<base-domain>` (required if you keep the integrated Alertmanager enabled)
- `<base-domain>` (optional but recommended; provides a vanity redirect to `app.<base-domain>`)
Astronomer generally recommends pointing the zone apex (@) directly to the load balancer address and mapping the remaining hostnames as CNAMEs to that apex. In lower environments, you can safely use a low TTL (for example 60 seconds) to speed up troubleshooting during the initial rollout.
After your DNS provider propagates the records, verify them with tools like `dig <hostname>` or `getent hosts <hostname>`. You can complete this DNS work after verifying the platform pods; Astronomer services stay healthy without external DNS, but end users need these records to sign in.
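A small loop can enumerate every hostname to check, using a hypothetical base domain; swap the `echo` for `dig +short` or `getent hosts` once your records have propagated:

```shell
BASE_DOMAIN="astro-sandbox.example.com"   # hypothetical base domain

for sub in app deployments houston prometheus registry alertmanager; do
  # Replace echo with: dig +short "${sub}.${BASE_DOMAIN}"
  echo "check: ${sub}.${BASE_DOMAIN}"
done
```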
Step 25: Verify you can access the UI [#verify-ui]
Visit https://app.<base-domain> in your web browser to view APC's web interface. If any components are not ready, consult the debugging guide or contact Astronomer support with the relevant logs and events.
Congratulations, you have configured and installed an APC platform instance - your new Airflow control plane.
From the UI, you’ll be able to both invite and manage users as well as create and monitor Airflow Deployments on the platform.
Step 26: Disable anonymous account creation [#disable-anonymous-account-creation]
Leave astronomer.houston.config.publicSignups: true only until you create your first administrator. Afterwards, secure the platform using the following steps:

- If you keep public sign-ups enabled, turn on outbound email (astronomer.houston.config.email.enabled: true), specify a trusted domain list under astronomer.houston.config.allowedSystemLevelDomains, and verify that users can only join through an approved identity provider.
- Otherwise, set astronomer.houston.config.publicSignups: false so new accounts require an invitation.
- Apply the updated configuration with helm upgrade targeting the control plane release.
Additional customization
The following topics provide optional information for customizing steps in this installation guide:
Next steps
Register the data plane with the control plane [#register-data-plane]
Start adding users, Workspaces, and Deployments in your newly installed or upgraded APC environment at https://app.<base-domain>.