
This guide provides instructions to upgrade from Astro Private Cloud (APC) 0.37.6 to 1.0.0 and prepare your environment for Airflow 3 Deployments.
Upgrade requires platform downtime
This upgrade deletes and recreates several platform components (NATS, STAN, Houston). The platform will be unavailable for creating or updating Deployments during the upgrade. Plan a maintenance window and notify your users before starting.

Prerequisites

  • Upgrade to APC 0.37.6 if you’re on an earlier version.
  • Back up your platform database. At minimum, create a snapshot or backup of your PostgreSQL database (RDS snapshot, Azure backup, Cloud SQL backup, or pg_dump).
  • Verify that all platform and Airflow Deployment Pods are healthy:
    # Check platform pods
    kubectl get pods -n astronomer
    # Check Airflow deployment pods (replace with your deployment namespace)
    kubectl get pods -n astronomer-<deployment-release-name>
    
  • If you use Astronomer Units (AU) for resource configuration, convert to CPU/memory settings before upgrading. The AU-to-CPU/memory migration script was removed in 1.0, so you must complete this conversion while still on 0.37.x.
  • Check for duplicate workspace labels in your database. The upgrade includes a migration that adds a unique constraint to workspace labels, and it will fail if duplicates exist:
    SET search_path TO "houston$default";
    SELECT label, COUNT(*) FROM "Workspace" GROUP BY label HAVING COUNT(*) > 1;
    
    If this query returns any results, rename or remove the duplicate workspaces before proceeding. See Debug upgrade for details.
  • Remove any deprecated Helm values from your values.yaml that are no longer recognized in 1.0. See Breaking changes and removals for the full list.
  • (Optional) Capture logs from your NATS and STAN instances before the upgrade:
    kubectl logs -n astronomer -l component=nats --tail=1000 > nats-pre-upgrade.log
    kubectl logs -n astronomer -l component=stan --tail=1000 > stan-pre-upgrade.log
    
For help with upgrade issues, see Debug upgrade.
Manual DNS and load balancer update required
When upgrading from Astro Private Cloud (APC) 0.x.x to 1.0.0, APC creates a new control plane NGINX ingress Service (astronomer-cp-nginx) and a new LoadBalancer. The previous ingress and LoadBalancer are replaced. You must:
  • Update all DNS records to the new load balancer IP address.
  • Update firewall, allowlist, and security rules to point to the new public endpoint.
  • Re-issue or update any TLS/SSL certificates that reference the previous LoadBalancer hostname, if applicable.
This change occurs because the control plane ingress Service name changes from astronomer-nginx (0.x) to astronomer-cp-nginx (1.0), which causes Kubernetes to provision a new external LoadBalancer with a new public IP or hostname. Prepare these updates before performing the upgrade.
Example (your output will vary by cloud provider):
kubectl -n astronomer get svc astronomer-cp-nginx
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP                                          PORT(S)                      AGE
astronomer-cp-nginx   LoadBalancer   10.100.223.78   7666ac61ef6-1683718677.us-east-2.elb.amazonaws.com   80:32255/TCP,443:32647/TCP   25d
If you are on OpenShift and manage your own Routes or use a third-party ingress controller instead of the platform’s built-in NGINX ingress, this LoadBalancer change does not apply to you. You can skip the DNS update step later in this guide.
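Before starting the upgrade, it can help to record the existing (0.x) load balancer endpoint so you can compare it against the new one when updating DNS. A sketch, assuming the default 0.x Service name astronomer-nginx in the astronomer namespace:

```shell
# Record the current (0.x) ingress LoadBalancer endpoint before upgrading.
# Depending on your cloud, the endpoint is exposed as a hostname or an IP.
kubectl -n astronomer get svc astronomer-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}{"\n"}' \
  | tee old-loadbalancer.txt
```

After Step 5 completes, compare this value against the new astronomer-cp-nginx endpoint when updating your DNS records.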

Step 1: Validate the Helm upgrade (dry run)

Before making any changes to your cluster, run a Helm upgrade dry run to verify that the upgrade will succeed. This catches issues like missing RBAC permissions, invalid values, or chart conflicts before any components are deleted. First, ensure that your values.yaml includes the unified mode configuration required for APC 1.0 (the same configuration referenced later in this guide); you do not need to duplicate that YAML block here. Then run the dry run:
helm upgrade -f values.yaml -n astronomer astronomer astronomer/astronomer --version 1.0.x --dry-run
Replace 1.0.x with the specific patch version you are upgrading to (for example, 1.0.1).
Do not proceed with the remaining steps until the dry run completes successfully. If the dry run fails, resolve the reported errors first.

Step 2: Delete existing STAN and NATS StatefulSets

Before upgrading to version 1.0.0, you must migrate from STAN to JetStream. Run the following commands in your cluster to remove legacy STAN components before JetStream is initialized:
kubectl delete sts <release-name>-stan -n astronomer
kubectl delete sts <release-name>-nats -n astronomer --cascade=orphan
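Before moving on, you can confirm that the legacy StatefulSets are gone. A quick sketch — at this point, the grep should match nothing (the NATS StatefulSet is recreated later by the Helm upgrade):

```shell
# Verify the legacy STAN and NATS StatefulSets were removed.
# `|| true` keeps the command from failing when grep finds no matches.
kubectl get sts -n astronomer | grep -E '(stan|nats)' || true
```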

Step 3: Patch astronomer-bootstrap secret with database name

Before running the Helm upgrade, ensure the astronomer-bootstrap secret includes a database name suffix in its connection string (for example, /postgres). This prevents Postgres connection errors during or after the upgrade.

Check your current connection string

First, check if your connection string already includes a database name:
kubectl get secret -n astronomer astronomer-bootstrap \
  --template='{{.data.connection | base64decode }}'
Examine the output. A connection string with a database name looks like:
postgresql://user:pass@host:5432/postgres
                                 ^^^^^^^^
                                 database name present
A connection string without a database name looks like:
postgresql://user:pass@host:5432
                                ^
                                no database name
If your connection string already includes a database name (for example, /postgres or /astronomer), you can skip to Step 4.

Determine the correct database name

The default database name depends on your cloud provider:
  • AWS RDS: postgres (standard default; main may appear only in specific legacy or custom configurations)
  • Azure Database for PostgreSQL: postgres
  • GCP Cloud SQL: postgres
If you’re unsure which database to use, connect to your PostgreSQL instance using your preferred method (psql, a database client, or cloud console) and run \l to list available databases.

Patch the secret

After confirming the database name, patch the astronomer-bootstrap secret:
# Namespace where APC is installed
NAMESPACE=astronomer
# Database name - verify using the steps above
DB_NAME=postgres

# Read current connection string, append database name, and update the secret
CURRENT=$(kubectl -n "$NAMESPACE" get secret astronomer-bootstrap -o jsonpath='{.data.connection}' | base64 -d)
NEW="${CURRENT%/}/$DB_NAME"
kubectl -n "$NAMESPACE" patch secret astronomer-bootstrap --type=merge -p "{\"data\":{\"connection\":\"$(printf '%s' "$NEW" | base64 -w0)\"}}"
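After patching, re-read the secret to confirm the suffix was appended correctly:

```shell
# Confirm the patched connection string now ends with the database name.
kubectl get secret -n astronomer astronomer-bootstrap \
  --template='{{.data.connection | base64decode }}'
# Expected form: postgresql://user:pass@host:5432/postgres
```

Note that the `${CURRENT%/}` expansion in the patch script strips a single trailing slash, if present, before appending the database name, so the result never contains a double slash.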

Step 4: Delete the Houston Deployment

Before running the Helm upgrade, delete the Houston Deployment to avoid a known Helm patch conflict related to environment variable ordering changes between versions.
kubectl delete deployment/<release-name>-houston -n astronomer --cascade=orphan
The --cascade=orphan flag keeps the Houston Pods running during the delete operation. The Helm upgrade in the next step recreates the Deployment with the correct configuration.
If you skip this step and encounter a Helm patch error during the upgrade, see Debug upgrade for the workaround.
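You can verify the orphan delete behaved as expected: the Deployment object should be gone while its Pods remain Running. A sketch:

```shell
# The Deployment object should no longer exist...
kubectl get deployment -n astronomer | grep houston || echo "Houston Deployment removed"
# ...but the orphaned Houston Pods should still be Running.
kubectl get pods -n astronomer -l component=houston
```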

Step 5: Upgrade to APC 1.0

When upgrading from 0.37.x, you must configure your deployment to run in unified plane mode. Unified mode is equivalent to how 0.37.x operates and is required for the initial upgrade. If you haven’t already, add the following configuration to your values.yaml:
global:
  plane:
    mode: "unified"
After upgrading to 1.0 in unified mode, you can optionally add separate Data Planes to run Airflow Deployments in other clusters. See Install a Data Plane for instructions. If you later want to convert your unified installation to a dedicated Control Plane (after migrating all Deployments to Data Planes), see Install a Control Plane.
Complete all pre-upgrade steps (database backup, STAN/NATS deletion, bootstrap secret patch, Houston Deployment deletion) before running the upgrade command. Perform a standard Helm upgrade using your existing release name and namespace:
helm upgrade -f values.yaml -n astronomer astronomer astronomer/astronomer --version 1.0.x
Replace 1.0.x with the specific patch version you are upgrading to (for example, 1.0.1).
Airgapped environmentsFor airgapped environments that cannot access the internet, download the Helm chart .tgz file directly from the Astronomer Helm repository:
https://helm.astronomer.io/astronomer-<version>.tgz
Replace <version> with the specific version you are upgrading to (for example, 1.0.1). Upload this file to your internal artifact repository (such as Artifactory or Nexus), then reference it in your helm upgrade command.
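For example, the airgapped flow might look like the following sketch. The internal repository URL is a placeholder for your own Artifactory or Nexus endpoint:

```shell
# From a machine with internet access, download the chart (example version: 1.0.1).
curl -fLO https://helm.astronomer.io/astronomer-1.0.1.tgz

# After uploading the file to your internal artifact repository, reference it directly.
# The URL below is a placeholder; substitute your own internal endpoint.
helm upgrade -f values.yaml -n astronomer astronomer \
  https://artifacts.example.internal/helm/astronomer-1.0.1.tgz

# Alternatively, upgrade straight from the local .tgz file.
helm upgrade -f values.yaml -n astronomer astronomer ./astronomer-1.0.1.tgz
```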

Step 6: Restart NATS and Houston components

After the platform upgrade completes and all Pods are running, restart NATS and Houston components to ensure the new JetStream components and Houston services are connected and synchronized:
kubectl rollout restart sts/<release-name>-nats -n astronomer
kubectl rollout restart deploy/<release-name>-houston -n astronomer
kubectl rollout restart deploy/<release-name>-houston-worker -n astronomer
After the rollout completes, verify the pods have been recreated by checking their age:
kubectl get pods -n astronomer -l "component in (nats,houston,houston-worker)" -o wide
All pods should show a recent AGE (a few minutes). If any pods show an older age, delete them manually to force recreation:
kubectl delete pod -n astronomer -l component=nats
kubectl delete pod -n astronomer -l component=houston
kubectl delete pod -n astronomer -l component=houston-worker

Step 7: Update DNS records

APC 1.0 creates a new control plane NGINX ingress Service (astronomer-cp-nginx) with a new LoadBalancer. Get the new load balancer address:
kubectl -n astronomer get svc astronomer-cp-nginx
Update your DNS records to point to the new load balancer IP or hostname. This includes all subdomains for your base domain (for example, app.<baseDomain>, houston.<baseDomain>, registry.<baseDomain>).
If you are on OpenShift and manage your own Routes or use a third-party ingress controller, you can skip this step. Your existing ingress configuration continues to work.
You must complete this step before you can access the Astro UI or API.
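Once your DNS changes propagate, you can spot-check that each subdomain resolves to the new endpoint. A sketch using dig (use nslookup if dig is unavailable; replace <baseDomain> with your base domain):

```shell
# Compare DNS resolution for each platform subdomain against the new load balancer.
NEW_LB=$(kubectl -n astronomer get svc astronomer-cp-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "New load balancer: $NEW_LB"

for sub in app houston registry; do
  echo "== ${sub}.<baseDomain> =="
  dig +short "${sub}.<baseDomain>"
done
```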

Step 8: Upgrade all Airflow Deployments

Once you have validated that all platform Pods in APC 1.0 are healthy and running, upgrade your Airflow Deployments to ensure compatibility with 1.0.
Existing Airflow Deployments typically continue to function after the platform upgrade without immediate action. However, Astronomer recommends upgrading Deployments to ensure full compatibility with the new platform version.
To upgrade Deployments, use one of the following approaches:
  • Astro UI: Upgrade each Deployment manually from the Astro UI by navigating to the Deployment and triggering an upgrade.
  • Houston API: Use the Houston API upsertDeployment mutation for programmatic or bulk upgrades.
  • Astro CLI: Use the Astro CLI’s astro deploy command.

Step 9: Post-upgrade validation

After the upgrade completes, validate that the platform works as expected by completing the following steps:
1. Confirm that NATS pods are running with JetStream enabled.
   Check that the JetStream job was created:
   kubectl -n astronomer get jobs | grep jetstream
   Check that the NATS pods are running:
   kubectl -n astronomer get pods -l component=nats
2. Verify Houston Worker pods are healthy and processing events.
   Check that the Houston Worker pods are running:
   kubectl -n astronomer get pods -l component=houston-worker
3. Verify there are no remaining references to STAN. This command should return no results:
   kubectl -n astronomer get statefulsets | grep stan
4. Verify you can access the Astro UI. Navigate to app.<baseDomain> in your browser and confirm the UI loads.
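The command-line checks above can be combined into one validation sketch, assuming the default astronomer namespace and component labels:

```shell
#!/usr/bin/env bash
# Post-upgrade validation sketch for APC 1.0.
set -u
NAMESPACE=astronomer

echo "JetStream job:"
kubectl -n "$NAMESPACE" get jobs | grep jetstream || echo "  (no JetStream job found)"

echo "NATS pods:"
kubectl -n "$NAMESPACE" get pods -l component=nats

echo "Houston Worker pods:"
kubectl -n "$NAMESPACE" get pods -l component=houston-worker

# No STAN StatefulSets should remain after the migration to JetStream.
if kubectl -n "$NAMESPACE" get statefulsets | grep -q stan; then
  echo "WARNING: STAN StatefulSets still present" >&2
else
  echo "No STAN StatefulSets remain."
fi
```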

Next steps

  • Airflow 3: To create Airflow 3 Deployments, see Migrate to Airflow 3 for the required cluster configuration.
  • Data Planes: To add separate Data Planes for running Airflow Deployments in other clusters, see Install a Data Plane.