Deploy Airflow images and Dags to Astro Private Cloud using CI/CD pipelines. This guide covers deployment options via CLI, API, and common CI/CD platforms.

Benefits of CI/CD for Airflow deployments

Deploying Dags and other changes via CI/CD workflows provides:
  • Streamlined development: Deploy new and updated Dags efficiently across team members.
  • Faster error response: Decrease maintenance costs and respond quickly to failures.
  • Improved code quality: Enforce continuous automated testing to protect production Dags.

Deployment methods

Astro Private Cloud supports multiple deployment methods. The Astro CLI approach is recommended for most use cases due to its simplicity.

CLI deployment

The Astro CLI provides the simplest way to deploy to Astro Private Cloud from CI/CD pipelines. Build and deploy an image:
astro deploy <DEPLOYMENT-ID>
Deploy Dags only:
astro deploy <DEPLOYMENT-ID> --dags
Deploy a pre-built image:
astro deploy <DEPLOYMENT-ID> \
  --image-name quay.io/myorg/airflow:v1.2.3 \
  --remote \
  --runtime-version 12.1.0
The following optional flags are available for astro deploy:
  • --dags: Deploy only your dags folder. Works only if Dag-only deploys are enabled for the Deployment.
  • --image-name <custom-image>: The name of a pre-built custom Docker image to use with your project. The image must be available on your local machine. If specified, building the image is skipped.
  • --remote: Directly point the Deployment to the remote image and skip pushing the image. Use with --image-name.
  • --runtime-version <version>: Specify the Runtime version of your image. Use with --image-name.
  • --force: Force deploy even if your project contains errors or uncommitted changes. Use with caution in CI/CD pipelines, as it bypasses the safeguard that ensures only committed code is deployed.
  • --description "<text>": Attach a description to a code deploy for traceability. If not provided, the system automatically assigns a default description based on deploy type.
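For example, a Dag-only deploy that records the triggering commit in its description might look like the following. This is a sketch: `CI_COMMIT_SHA` is an assumed variable supplied by your CI/CD tool, and Dag-only deploys must be enabled for the Deployment.

```shell
# Deploy only the dags/ folder and attach the commit SHA for traceability.
astro deploy <DEPLOYMENT-ID> \
  --dags \
  --description "CI deploy of commit ${CI_COMMIT_SHA}"
```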

API deployment

For advanced automation scenarios, you can use the Houston API’s upsertDeployment mutation to deploy a pre-built image to a Deployment. This approach is useful when you need to integrate with systems that can’t use the Astro CLI directly.
mutation {
  upsertDeployment(
    releaseName: "my-deployment"
    image: "quay.io/myorg/airflow:v1.2.3"
    runtimeVersion: "12.1.0"
    deployRevisionDescription: "CI/CD Pipeline Deploy"
  ) {
    id
    status
  }
}
The mutation accepts the following fields:
  • releaseName: The release name of your Deployment, following the pattern <spacey-word>-<spacey-word>-<4-digits>. For example, infrared-photon-7780.
  • image: The full image path including registry, repository, and tag. The image must be accessible from your Astro Private Cloud data plane.
  • runtimeVersion: The Astro Runtime version that the image is based on. For example, 12.1.0.
  • deployRevisionDescription: An optional description for the deploy revision, useful for tracking deploys in the Astro Private Cloud UI.
To explore the full Houston API schema and test mutations interactively, use the GraphQL playground. For more information about deploying custom images with the Houston API, see Configure a custom image registry.
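As a sketch, the mutation above can also be sent directly over HTTP with curl. The endpoint path (https://houston.${BASE_DOMAIN}/v1) and the Authorization header format are assumptions; confirm both against your installation before relying on them.

```shell
# Send the upsertDeployment mutation to the Houston API.
# API_KEY_SECRET is a service account API key; BASE_DOMAIN is your
# Astro Private Cloud base domain.
curl -s "https://houston.${BASE_DOMAIN}/v1" \
  -H "Content-Type: application/json" \
  -H "Authorization: ${API_KEY_SECRET}" \
  -d '{"query":"mutation { upsertDeployment(releaseName: \"infrared-photon-7780\", image: \"quay.io/myorg/airflow:v1.2.3\", runtimeVersion: \"12.1.0\", deployRevisionDescription: \"CI/CD Pipeline Deploy\") { id status } }"}'
```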

CI/CD platform examples

The following examples show how to implement CI/CD pipelines using the Astro CLI with popular CI/CD platforms. For advanced Docker registry-based deployment examples, see Advanced: Docker registry deployment.

GitHub Actions

name: Deploy to Astro Private Cloud

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Astro CLI
        run: curl -sSL https://install.astronomer.io | sudo bash -s

      - name: Authenticate
        run: astro auth login <platform-domain> --token-login
        env:
          ASTRONOMER_KEY_ID: ${{ secrets.ASTRONOMER_KEY_ID }}
          ASTRONOMER_KEY_SECRET: ${{ secrets.ASTRONOMER_KEY_SECRET }}

      - name: Deploy
        run: astro deploy ${{ vars.DEPLOYMENT_ID }}

GitLab CI

deploy-airflow:
  stage: deploy
  image: ubuntu:latest
  script:
    - curl -sSL https://install.astronomer.io | bash -s
    - astro auth login ${PLATFORM_DOMAIN} --token-login
    - astro deploy ${DEPLOYMENT_ID}
  variables:
    ASTRONOMER_KEY_ID: ${ASTRONOMER_KEY_ID}
    ASTRONOMER_KEY_SECRET: ${ASTRONOMER_KEY_SECRET}
  only:
    - main

CircleCI

version: 2.1

jobs:
  deploy:
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      - run:
          name: Install and Deploy
          command: |
            curl -sSL https://install.astronomer.io | sudo bash -s
            astro auth login ${PLATFORM_DOMAIN} --token-login
            astro deploy ${DEPLOYMENT_ID}

workflows:
  deploy-workflow:
    jobs:
      - deploy:
          filters:
            branches:
              only: main

Example CI/CD workflow

Consider an Astro project hosted on GitHub and deployed to Astro Private Cloud. In this scenario, dev and main branches of an Astro project are hosted on a single GitHub repository, and dev and prod Airflow Deployments are hosted on an Astronomer Workspace. Using CI/CD, you can automatically deploy Dags to your Airflow Deployment by pushing or merging code to a corresponding branch in GitHub. The general setup:
  1. Create two Airflow Deployments within your Astronomer Workspace, one for dev and one for prod.
  2. Create a repository in GitHub that hosts project code for all Airflow Deployments within your Astronomer Workspace.
  3. In your GitHub code repository, create a dev branch off of your main branch.
  4. Configure your CI/CD tool to deploy to your dev Airflow Deployment whenever you push to your dev branch, and to deploy to your prod Airflow Deployment whenever you merge your dev branch into main.
This setup is shown in the CI/CD workflow diagram in the Astro Private Cloud documentation.
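A minimal GitHub Actions sketch of this branch-to-Deployment mapping follows. The variable names (PLATFORM_DOMAIN, DEV_DEPLOYMENT_ID, PROD_DEPLOYMENT_ID) are illustrative; set them as repository variables alongside the service account secrets.

```yaml
name: Branch-based deploy

on:
  push:
    branches: [dev, main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Astro CLI
        run: curl -sSL https://install.astronomer.io | sudo bash -s
      - name: Authenticate
        run: astro auth login ${{ vars.PLATFORM_DOMAIN }} --token-login
        env:
          ASTRONOMER_KEY_ID: ${{ secrets.ASTRONOMER_KEY_ID }}
          ASTRONOMER_KEY_SECRET: ${{ secrets.ASTRONOMER_KEY_SECRET }}
      - name: Deploy to the Deployment matching the branch
        run: |
          if [ "${GITHUB_REF_NAME}" = "main" ]; then
            astro deploy ${{ vars.PROD_DEPLOYMENT_ID }}
          else
            astro deploy ${{ vars.DEV_DEPLOYMENT_ID }}
          fi
```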

Service account authentication

Service accounts provide secure, non-interactive authentication for CI/CD pipelines without requiring user credentials.

Prerequisites

Before completing this setup, ensure you:
  • Have access to a running Astro Deployment.
  • Installed the Astro CLI.
  • Are familiar with your CI/CD tool of choice.

Create a service account

To authenticate your CI/CD pipeline to the Astronomer private Docker registry, create a service account and grant it an appropriate set of permissions. You can do so using the Astro Private Cloud UI or CLI. After creation, you can delete this service account at any time. In both cases, creating a service account generates an API key for the CI/CD process. You can create service accounts at the:
  • Workspace level: Allows you to deploy to multiple Airflow Deployments with one code push.
  • Deployment level: Ensures that your CI/CD pipeline only deploys to one particular Deployment.

Create a service account using the CLI

Deployment level service account: First, get your Deployment ID:
astro deployment list
This outputs the list of running Deployments you have access to and their corresponding UUIDs. With that UUID, run:
astro deployment service-account create -d <deployment-id> --label <service-account-label> --role <deployment-role>
Workspace level service account: First, get your Workspace ID:
astro workspace list
Then create the service account:
astro workspace service-account create -w <workspace-id> --label <service-account-label> --role <workspace-role>

Create a service account using the API

You can also create a service account using the GraphQL API. The deploymentUuid field is the same Deployment ID (UUID) returned by astro deployment list.
mutation {
  createDeploymentServiceAccount(
    deploymentUuid: "<deployment-id>"
    label: "CI/CD Pipeline"
    role: DEPLOYMENT_ADMIN
  ) {
    id
    apiKey
  }
}
Set in CI/CD environment:
export ASTRONOMER_KEY_ID=<service-account-id>
export ASTRONOMER_KEY_SECRET=<api-key>

Create a service account using the Astro Private Cloud UI

If you prefer to provision a service account through the Astro Private Cloud UI:
  1. Log into Astronomer and navigate to: Deployment > Service Accounts
  2. Configure your service account:
    • Give it a Name
    • Give it a Category (optional)
    • Grant it a User Role (must be “Editor” or “Admin” to deploy code)
  3. Copy the API key that is generated
The API key is only visible during the session. Store it securely in an environment variable or secret management tool.
For more information on Workspace roles, see “Roles and Permissions”.

Set credentials in CI/CD environment

After creating a service account, set the credentials in your CI/CD environment:
export ASTRONOMER_KEY_ID=<service-account-id>
export ASTRONOMER_KEY_SECRET=<api-key>
The Astro CLI automatically uses these environment variables for authentication.

Best practices

  • Use service accounts for CI/CD authentication instead of personal credentials.
  • Store credentials securely in CI/CD secrets or environment variables.
  • Deploy only committed code in CI/CD pipelines to ensure reproducibility. Avoid using --force unless you have a specific reason to bypass the git commit check.
  • Add deployment descriptions with --description for audit trail and version tracking.
  • Test in staging before production Deployment to catch issues early. For guidance on writing Dags that work across environments, see Manage Airflow code and Dag writing best practices.
  • Use Dag-only deploys when you only need to update Dag files without rebuilding images.

Advanced: Docker registry deployment

For advanced use cases, legacy systems, or when you need more control over the Docker build and push process, you can deploy directly to the Astronomer Docker registry. Most users should use the CLI deployment method instead. When to use Docker registry deployment:
  • You need custom Docker build processes or multi-stage builds.
  • You’re integrating with existing Docker-based CI/CD workflows.
  • You require fine-grained control over image tagging and versioning.
  • You’re working with legacy CI/CD systems that don’t support the Astro CLI.
If you’re using BuildKit with the Buildx plugin, you need to add the --provenance=false flag to your docker buildx build commands.
The Docker registry examples use RELEASE_NAME (for example, infrared-photon-7780) instead of DEPLOYMENT_ID. Both refer to your Astro Deployment, but the Astro CLI uses DEPLOYMENT_ID while the Docker registry approach uses the release name.
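For example, a BuildKit-based build of the image used in the later steps might look like this sketch. The --load flag is an assumption about your workflow: it keeps the image in the local Docker daemon so subsequent test and push steps can find it.

```shell
# Build with BuildKit/Buildx; --provenance=false prevents attaching a
# provenance manifest that the Astronomer registry may not accept.
docker buildx build --provenance=false --load \
  -t registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${BUILD_NUMBER} .
```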

Authenticate and push to Docker

The first step of this pipeline authenticates against the Docker registry that stores an individual Docker image for every code push or configuration change:
docker login registry.${BASE_DOMAIN} -u _ -p ${API_KEY_SECRET}
In this example:
  • BASE_DOMAIN = The domain at which your Astro Private Cloud instance is running
  • API_KEY_SECRET = The API key that you got from the CLI or the UI and stored in your secret manager

Build and push an image

After you are authenticated, you can build, tag, and push your Airflow image to the private registry, where a webhook triggers an update to your Astro Deployment.
To deploy successfully to Astro Private Cloud, the version in the FROM statement of your project’s Dockerfile must be the same as or newer than the Runtime version of your Astro Deployment. For more information on upgrading, see Upgrade Airflow.
Image naming components:
  • Registry Address: Tells Docker where to push images. On Astro Private Cloud, your private registry is located at registry.${BASE_DOMAIN}.
  • Release Name: The release name of your Astro Deployment, following the pattern <spacey-word>-<spacey-word>-<4-digits> (for example, infrared-photon-7780).
  • Tag Name: Each deploy generates a Docker image with a corresponding tag. If you deploy via the CLI, the tag defaults to deploy-n, with n representing the number of deploys. For CI/CD, customize this tag to include the source and build number.
Example with custom tag:
docker build -t registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${BUILD_NUMBER} .

Run unit tests

For CI/CD pipelines that push code to a production Deployment, Astronomer recommends adding a unit test after the image build step to ensure that you don’t push a Docker image with breaking changes. To run a basic unit test, add a step in your CI/CD pipeline that executes docker run and then runs pytest tests in a container based on your newly built image before it’s pushed to your registry. For guidance on writing pytest tests for Airflow, including Dag validation tests and unit tests for custom operators, see Test Airflow Dags. For example, you can add the following command as a step in your CI/CD pipeline:
BASE_DOMAIN, RELEASE_NAME, and BUILD_NUMBER should be set as environment variables in your CI/CD tool.
docker run --rm registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${BUILD_NUMBER} /bin/bash -c "pytest tests"

Configure your CI/CD pipeline

Depending on your CI/CD tool, configuration varies slightly. This section focuses on outlining what needs to be accomplished, not the specifics of how. At its core, your CI/CD pipeline first authenticates to the Astronomer private registry, then builds, tags, and pushes your Docker image to that registry.
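Stripped of tool-specific syntax, the core of every pipeline in the following examples is the same short shell sequence. This is a sketch; it assumes BASE_DOMAIN, RELEASE_NAME, BUILD_NUMBER, and SERVICE_ACCOUNT_KEY are set as environment variables by your CI/CD tool.

```shell
# 1. Authenticate to the private registry
docker login registry.${BASE_DOMAIN} -u _ -p "${SERVICE_ACCOUNT_KEY}"

# 2. Build and tag the image
docker build -t registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${BUILD_NUMBER} .

# 3. Run unit tests against the freshly built image
docker run --rm registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${BUILD_NUMBER} /bin/bash -c "pytest tests"

# 4. Push; the registry webhook triggers the Deployment update
docker push registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${BUILD_NUMBER}
```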

Docker registry example: GitHub Actions

This example shows how to implement CI/CD using GitHub Actions with Docker registry deployment for both development and production environments. Setup steps:
  1. Create a GitHub repository for your Astro project with dev and main branches.
  2. Create two Deployment-level service accounts: one for Dev and one for Production.
  3. Add service accounts as GitHub secrets named SERVICE_ACCOUNT_KEY and SERVICE_ACCOUNT_KEY_DEV.
  4. Create a GitHub Action with the following workflow:
    name: Astronomer CI - Deploy code
    on:
      push:
        branches: [dev]
      pull_request:
        types:
          - closed
        branches: [main]
    jobs:
      dev-push:
        if: github.ref == 'refs/heads/dev'
        runs-on: ubuntu-latest
        steps:
        - name: Check out the repo
          uses: actions/checkout@v3
        - name: Log in to registry
          uses: docker/login-action@v1
          with:
            registry: registry.${BASE_DOMAIN}
            username: _
            password: ${{ secrets.SERVICE_ACCOUNT_KEY_DEV }}
        - name: Build image
          run: docker build -t registry.${BASE_DOMAIN}/<dev-release-name>/airflow:ci-${{ github.sha }} .
        - name: Run tests
          run: docker run --rm registry.${BASE_DOMAIN}/<dev-release-name>/airflow:ci-${{ github.sha }} /bin/bash -c "pytest tests"
        - name: Push image
          run: docker push registry.${BASE_DOMAIN}/<dev-release-name>/airflow:ci-${{ github.sha }}
      prod-push:
        if: github.event.action == 'closed' && github.event.pull_request.merged == true
        runs-on: ubuntu-latest
        steps:
        - name: Check out the repo
          uses: actions/checkout@v3
        - name: Log in to registry
          uses: docker/login-action@v1
          with:
            registry: registry.${BASE_DOMAIN}
            username: _
            password: ${{ secrets.SERVICE_ACCOUNT_KEY }}
        - name: Build image
          run: docker build -t registry.${BASE_DOMAIN}/<prod-release-name>/airflow:ci-${{ github.sha }} .
        - name: Run tests
          run: docker run --rm registry.${BASE_DOMAIN}/<prod-release-name>/airflow:ci-${{ github.sha }} /bin/bash -c "pytest tests"
        - name: Push image
          run: docker push registry.${BASE_DOMAIN}/<prod-release-name>/airflow:ci-${{ github.sha }}
    
Replace <dev-release-name> and <prod-release-name> with your Deployment release names.
  5. Test the workflow by committing changes to dev to update your development Deployment, then merge dev into main via pull request to update production.
The prod-push action only runs after merging a pull request. To further restrict this pipeline, add branch protection settings in GitHub to prevent direct pushes to main.

Additional Docker registry examples

The following sections provide templates for configuring CI/CD pipelines using popular CI/CD tools with Docker registry deployment. Each template can be customized to manage multiple branches or Deployments based on your needs.

DroneCI

pipeline:
  build:
    image: quay.io/astronomer/ap-build:latest
    commands:
      - docker build -t registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${DRONE_BUILD_NUMBER} .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    when:
      event: push
      branch: [ master, release-* ]

  test:
    image: quay.io/astronomer/ap-build:latest
    commands:
      - docker run --rm registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${DRONE_BUILD_NUMBER} /bin/bash -c "pytest tests"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    when:
      event: push
      branch: [ master, release-* ]

  push:
    image: quay.io/astronomer/ap-build:latest
    commands:
      - echo $${SERVICE_ACCOUNT_KEY}
      - docker login registry.$BASE_DOMAIN -u _ -p $SERVICE_ACCOUNT_KEY
      - docker push registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${DRONE_BUILD_NUMBER}
    secrets: [ SERVICE_ACCOUNT_KEY ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    when:
      event: push
      branch: [ master, release-* ]

CircleCI

# Python CircleCI configuration file
#
# Check https://circleci.com/docs/language-python/ for more details
#
version: 2
jobs:
  build:
    machine: ubuntu-2204:202509-01
    steps:
      - checkout
      - restore_cache:
          keys:
          - v1-dependencies-{{ checksum "requirements.txt" }}
          # fallback to using the latest cache if no exact match is found
          - v1-dependencies-
      - run:
          name: Install test deps
          command: |
            # Use a virtual env to encapsulate everything in one folder for
            # caching. And make sure it lives outside the checkout, so that any
            # style checkers don't run on all the installed modules
            python -m venv ~/.venv
            . ~/.venv/bin/activate
            pip install -r requirements.txt
      - save_cache:
          paths:
            - ~/.venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: run linter
          command: |
            . ~/.venv/bin/activate
            pycodestyle .
  deploy:
    docker:
      - image: docker:latest
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Push to Docker Hub
          command: |
            TAG=0.1.$CIRCLE_BUILD_NUM
            docker build -t registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-$TAG .
            docker run --rm registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-$TAG /bin/bash -c "pytest tests"
            docker login registry.$BASE_DOMAIN -u _ -p $SERVICE_ACCOUNT_KEY
            docker push registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-$TAG

workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only:
                - master

Jenkins

pipeline {
    agent any
    stages {
        stage('Deploy to astronomer') {
            when { branch 'master' }
            steps {
                script {
                    sh 'docker build -t registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${BUILD_NUMBER} .'
                    sh 'docker run --rm registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${BUILD_NUMBER} /bin/bash -c "pytest tests"'
                    sh 'docker login registry.$BASE_DOMAIN -u _ -p $SERVICE_ACCOUNT_KEY'
                    sh 'docker push registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${BUILD_NUMBER}'
                }
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}

Bitbucket

If you are using Bitbucket, this script should work (courtesy of our friends at Das42):
image: quay.io/astronomer/ap-build:latest

pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - echo ${SERVICE_ACCOUNT_KEY}
            - docker build -t registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${BITBUCKET_BUILD_NUMBER} .
            - docker run --rm registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${BITBUCKET_BUILD_NUMBER} /bin/bash -c "pytest tests"
            - docker login registry.$BASE_DOMAIN -u _ -p $SERVICE_ACCOUNT_KEY
            - docker push registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-${BITBUCKET_BUILD_NUMBER}
          services:
            - docker
          caches:
            - docker

GitLab

astro_deploy:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "Building container.."
    - docker build -t registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:CI-$CI_PIPELINE_IID .
    - docker run --rm registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:CI-$CI_PIPELINE_IID /bin/bash -c "pytest tests"
    - docker login registry.$BASE_DOMAIN -u _ -p $SERVICE_ACCOUNT_KEY
    - docker push registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:CI-$CI_PIPELINE_IID
  only:
    - master

AWS CodeBuild

version: 0.2
phases:
  install:
    runtime-versions:
      python: latest

  pre_build:
    commands:
      - echo Logging in to dockerhub ...
      - docker login "registry.$BASE_DOMAIN" -u _ -p "$API_KEY_SECRET"
      - export GIT_VERSION="$(git rev-parse --short HEAD)"
      - echo "GIT_VERSION = $GIT_VERSION"
      - pip install -r requirements.txt

  build:
    commands:
      - docker build -t "registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-$GIT_VERSION" .
      - docker run --rm "registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-$GIT_VERSION" /bin/bash -c "pytest tests"
      - docker push "registry.$BASE_DOMAIN/$RELEASE_NAME/airflow:ci-$GIT_VERSION"

Azure DevOps

This example shows how to automatically deploy your Astro project from a GitHub repository using an Azure DevOps pipeline.
To see an example GitHub project that uses this configuration, see cs-tutorial-azuredevops on GitHub.
Prerequisites:
  • A GitHub repository hosting your Astro project.
  • An Azure DevOps account with permissions to create new pipelines.
Setup steps:
  1. Create a file called astro-devops-cicd.yaml in your Astro project repository:
    # Control which branches have CI triggers:
    trigger:
    - main
    
    # To trigger the build/deploy only after a PR has been merged:
    pr: none
    
    # Optionally use Variable Groups & Azure Key Vault:
    #variables:
    #- group: Variable-Group
    #- group: Key-Vault-Group
    
    stages:
    - stage: build
      jobs:
      - job: run_build
        pool:
          vmImage: 'Ubuntu-latest'
        steps:
        - script: |
            echo "Building container.."
            docker build -t registry.$(BASE-DOMAIN)/$(RELEASE-NAME)/airflow:$(Build.SourceVersion) .
            docker run --rm registry.$(BASE-DOMAIN)/$(RELEASE-NAME)/airflow:$(Build.SourceVersion) /bin/bash -c "pytest tests"
            docker login registry.$(BASE-DOMAIN) -u _ -p $(SVC-ACCT-KEY)
            docker push registry.$(BASE-DOMAIN)/$(RELEASE-NAME)/airflow:$(Build.SourceVersion)
    
  2. Follow the steps in Azure documentation to link your GitHub repository to an Azure pipeline. When prompted for the source code for your pipeline, specify that you have an existing Azure Pipelines YAML file and provide the file path: astro-devops-cicd.yaml.
  3. Finish and save your Azure pipeline setup.
  4. In Azure, add environment variables for the following values:
    • BASE-DOMAIN: Your base domain for Astro Private Cloud
    • RELEASE-NAME: The release name for your Deployment
    • SVC-ACCT-KEY: The service account key you created for CI/CD (mark as secret)
After completing this setup, any merges to the main branch of your GitHub repository trigger the pipeline and deploy your changes to Astro Private Cloud.