This topic provides steps for using Hashicorp Vault as a secrets backend for both local development and on Astro. To do this, you will:
- Create an AppRole in Vault which grants Astro minimal required permissions.
- Write a test Airflow variable or connection as a secret to your Vault server.
- Configure your Astro project to pull the secret from Vault.
- Test the backend in a local environment.
- Deploy your changes to Astro.
Prerequisites
- A Deployment on Astro.
- The Astro CLI.
- A local or hosted Vault server. See Starting the Server or Create a Vault Cluster on HCP.
- An Astro project.
- The Vault CLI.
- Your Vault server's URL. If you're using a local server, this should be `http://127.0.0.1:8200/`.
- (Remote Execution only) Helm installed.
- (Remote Execution only) The `values.yaml` file from the Register Agents modal in your Deployments > Agents page.
To set up a Vault server, you can either:
- Sign up for a Vault trial on Hashicorp Cloud Platform (HCP), or
- Deploy a local Vault server. See Starting the server in the Hashicorp documentation.
Step 1: Create a Policy and AppRole in Vault
To use Vault as a secrets backend, Astronomer recommends configuring a Vault AppRole with a policy that grants only the minimum permissions Astro needs. For Remote Execution Deployments, you can use any Vault authentication method you prefer, for example Kubernetes auth if your agents and Vault are running on Kubernetes. To do this:
1. Run the following command to create a Vault policy that Astro can use to access a Vault server:
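A minimal sketch of such a policy, assuming a KV version 2 secrets engine mounted at `airflow` and a policy named `astro-policy` (both names are illustrative; KV v2 reads go through the `data/` prefix):

```shell
# Grant read-only access to Airflow variables and connections stored
# under the "airflow" KV v2 mount.
vault policy write astro-policy - <<EOF
path "airflow/data/variables/*" {
  capabilities = ["read"]
}
path "airflow/data/connections/*" {
  capabilities = ["read"]
}
EOF
```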
2. Run the following command to create a Vault AppRole:
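For example, assuming the `astro-policy` policy and a role named `astro-role` (both illustrative), and that the AppRole auth method is not yet enabled:

```shell
# Enable the AppRole auth method, then create a role whose tokens
# carry the read-only policy created in the previous step.
vault auth enable approle
vault write auth/approle/role/astro-role \
  token_ttl=1h \
  token_max_ttl=4h \
  token_policies=astro-policy
```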
3. Run the following command to retrieve the `secret-id` for your AppRole:

Save this value. You'll use it later to complete the setup.
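Assuming the role is named `astro-role` as in the example above, you can read the `role-id` and generate a `secret-id` like this:

```shell
# Retrieve the role-id and generate a secret-id for the AppRole.
vault read auth/approle/role/astro-role/role-id
vault write -f auth/approle/role/astro-role/secret-id
```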
Step 2: Create an Airflow variable or connection in Vault
To start, create an Airflow variable or connection in Vault that you want to store as a secret. It can be either a real or test value. You will use this secret to test your backend's functionality. You can use an existing mount point or create a new one to store your Airflow connections and variables. For example, to create a new mount point called `airflow`, run the following Vault CLI command:
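A sketch of that command, assuming you want a KV version 2 secrets engine at the path `airflow`:

```shell
# Enable a KV v2 secrets engine at the "airflow" mount point.
vault secrets enable -path=airflow -version=2 kv
```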
To store Airflow variables, run the following Vault CLI command with your own values:
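For example, assuming the `airflow` mount point and a test variable named `my-test-variable` (both illustrative; the Airflow Vault backend reads the `value` key):

```shell
# Store a test Airflow variable under the variables path.
vault kv put airflow/variables/my-test-variable value="my-test-value"
```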
To store Airflow connections, first format the connection as a URI. Then, run the following Vault CLI command with your own values:
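For example, assuming a connection named `smtp_default` with illustrative credentials (the Airflow Vault backend reads the `conn_uri` key):

```shell
# Store a connection as a URI under the connections path.
vault kv put airflow/connections/smtp_default \
  conn_uri="smtps://user:MY_PASSWORD@relay.example.com:465"
```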
Step 3: Set up Vault locally
- Astro
- Remote Execution
In your Astro project, add the Hashicorp Airflow provider to your project by adding the following line to your `requirements.txt` file:

Then, add the following environment variables to your `.env` file:

If you run Vault on Hashicorp Cloud Platform (HCP):
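As a sketch, assuming the `airflow` mount point, the AppRole credentials from Step 1, and a local Vault server reachable from Docker, the two files might look like this (the role ID and secret ID are placeholders):

```text
# requirements.txt
apache-airflow-providers-hashicorp
```

```text
# .env
AIRFLOW__SECRETS__BACKEND=airflow.providers.hashicorp.secrets.vault.VaultBackend
AIRFLOW__SECRETS__BACKEND_KWARGS={"connections_path": "connections", "variables_path": "variables", "mount_point": "airflow", "url": "http://host.docker.internal:8200", "auth_type": "approle", "role_id": "<your-role-id>", "secret_id": "<your-secret-id>"}
```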
- Replace `http://host.docker.internal:8200` with `https://<your-cluster>.hashicorp.cloud:8200`.
- Add `"namespace": "admin"` as an argument after `url`.
Airflow now looks for variable and connection information at the `airflow/variables/*` and `airflow/connections/*` paths in your Vault server. You can now run a DAG locally to check that your variables are accessible using `Variable.get("<your-variable-key>")`.
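As a quick check, a minimal DAG sketch (the DAG name and variable key are illustrative and assume the test variable stored in Step 2):

```python
from pendulum import datetime

from airflow.decorators import dag, task
from airflow.models import Variable


@dag(start_date=datetime(2024, 1, 1), schedule=None, catchup=False)
def vault_check():
    @task
    def read_variable():
        # Airflow checks the configured Vault backend before its
        # metadata database when resolving this variable.
        print(Variable.get("my-test-variable"))

    read_variable()


vault_check()
```

Trigger the DAG from the Airflow UI and confirm the task logs print the value you stored in Vault.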
For more information on the Airflow provider for Hashicorp Vault and how to further customize your integration, see the Apache Airflow documentation.
Step 4: Deploy configuration
- Astro
- Remote Execution
1. Run the following commands to export your environment variables to Astro:
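For example, using the Astro CLI to load the variables from your `.env` file into a Deployment (the Deployment ID is a placeholder; `--secret` hides the values in the Astro UI):

```shell
# Load environment variables from .env into the Deployment as secrets.
astro deployment variable create \
  --deployment-id <your-deployment-id> \
  --load --env .env --secret
```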
2. Run the following command to push your updated `requirements.txt` file to Astro:
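For example, with the Deployment ID as a placeholder:

```shell
# Build the Astro project, including requirements.txt, and deploy it.
astro deploy <your-deployment-id>
```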
3. (Optional) Remove the environment variables from your `.env` file, or store your `.env` file in a safe location to protect the credentials in `AIRFLOW__SECRETS__BACKEND_KWARGS`.