Snowflake is where your business truth lives - often fed by SAP, your ERP, or other operational systems. Astran is what keeps your operations running when those systems become temporarily unreachable. Here's how to connect Snowflake and Astran in only two SQL commands.
Snowflake has become the backbone of many organizations' data strategy. Finance teams run their closing processes from it. Supply chain teams build their dashboards on it. HR and procurement consolidate everything there. It's fast, it scales, and it has become the de facto source of truth for critical business data - often aggregating feeds from upstream systems like SAP, ERPs, or operational databases.
This creates an interesting resilience gap: when one of those upstream systems goes down - a ransomware attack on your SAP environment, a cloud incident, a network outage - the data is still in Snowflake. But the processes that normally run inside those systems are suddenly frozen.
That's exactly where Astran comes in. Rather than waiting for SAP or your ERP to come back online, your teams can keep executing their critical operations directly from the data Astran has secured - independently, without depending on any primary system being available. Astran doesn't replace Snowflake or SAP: it makes sure your vital activities keep running while they recover.
Snowflake's external stage feature allows you to write data exports directly to any S3-compatible storage. Astran's API speaks S3 natively - so the integration requires nothing more than standard Snowflake SQL.
Before you start: you'll need to contact Snowflake support to authorize Astran's endpoint ({partition}.s3.astran.io) on your Snowflake account. This is a one-time allowlisting step - and in practice, Snowflake's team answers these requests quickly.
A Snowflake stage is a named pointer to an external storage location. Think of it as a reusable destination you can reference in any subsequent export command. You create it once per schema.
```sql
CREATE OR REPLACE STAGE MY_DB.MY_SCHEMA.MY_S3_STAGE
  URL = 's3compat://snowflake/exports/'
  ENDPOINT = '{partition}.s3.astran.io'
  CREDENTIALS = (
    AWS_KEY_ID = 'YOUR_KEY_ID'
    AWS_SECRET_KEY = 'YOUR_KEY_SECRET'
  )
  FILE_FORMAT = (
    TYPE = CSV
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  );
```

| Parameter | What to put here | Notes |
|---|---|---|
| MY_DB | Your Snowflake database name | e.g. FINANCE_DB, OPS_DB - the database where the stage object will live |
| MY_SCHEMA | Your Snowflake schema name | e.g. PUBLIC, EXPORTS, RESILIENCE - organizes the stage alongside related objects |
| MY_S3_STAGE | A name for this stage | Anything meaningful, e.g. ASTRAN_RESILIENCE_STAGE |
| YOUR_KEY_ID | Astran API key ID | Created by your Astran administrator |
| YOUR_KEY_SECRET | Astran API secret key | Treat it like a password - store it as a Snowflake secret in production |
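Before exporting anything, you can sanity-check the stage with Snowflake's standard LIST command. On a freshly created stage it simply returns an empty result; a credentials or allowlisting problem surfaces immediately as an error:

```sql
-- Verify the stage resolves and the Astran credentials are accepted
LIST @MY_DB.MY_SCHEMA.MY_S3_STAGE;
```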
Once the stage exists, a single COPY INTO command pushes any table, or any query result, into Astran's storage. The example below uses Snowflake Sample Data, which is available on every Snowflake account by default, so you can test the full flow immediately without touching your own data.
```sql
COPY INTO @MY_DB.MY_SCHEMA.MY_S3_STAGE/nation_export/
  FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.NATION
  FILE_FORMAT = (
    TYPE = CSV
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
    COMPRESSION = NONE
  )
  HEADER = TRUE
  OVERWRITE = TRUE;
```

| Parameter | What it does | Notes |
|---|---|---|
| @MY_DB.MY_SCHEMA.MY_S3_STAGE | Target stage (created in step 1) | Must match the stage name exactly, prefixed with @ |
| /nation_export/ | Subfolder path within the stage | Replace with your own path, e.g. /finance/closing/2024-Q4/ |
| SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.NATION | The source table (sample data here) | Replace with your actual table - or a full SELECT query |
| COMPRESSION = NONE | No compression applied | Astran's storage does not support GZIP compression |
| HEADER = TRUE | Writes column names in row 1 | Recommended - makes files self-describing |
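As noted above, the source doesn't have to be a whole table - COPY INTO also accepts a query, which is handy for scoping an export to exactly what a crisis runbook needs. A hypothetical example (the FINANCE_DB.CLOSE.JOURNAL_ENTRIES table and the /finance/ path are placeholders, not part of the setup above):

```sql
-- Export only the current fiscal year's journal entries
COPY INTO @MY_DB.MY_SCHEMA.MY_S3_STAGE/finance/journal_entries/
FROM (
  SELECT *
  FROM FINANCE_DB.CLOSE.JOURNAL_ENTRIES
  WHERE FISCAL_YEAR = 2024
)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
HEADER = TRUE OVERWRITE = TRUE;
```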
SNOWFLAKE_SAMPLE_DATA is a shared database available on all Snowflake accounts - no setup required. TPCH_SF1.NATION is a small, clean table (25 rows) that's perfect for validating the pipeline before you connect your own datasets.

For ongoing resilience, you'll want exports to run automatically on a schedule, so that Astran always has a fresh copy of your data rather than a one-time snapshot. Snowflake's native Tasks feature makes this straightforward:
```sql
-- Run every night at 2am UTC
CREATE OR REPLACE TASK MY_DB.MY_SCHEMA.NIGHTLY_EXPORT
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
  COPY INTO @MY_DB.MY_SCHEMA.MY_S3_STAGE/my_table/
  FROM MY_DB.MY_SCHEMA.MY_TABLE
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
  HEADER = TRUE OVERWRITE = TRUE;

-- Tasks are created suspended; RESUME activates the schedule
ALTER TASK MY_DB.MY_SCHEMA.NIGHTLY_EXPORT RESUME;
```

This means your Astran platform is fed fresh data daily - and if a crisis hits at any point, your teams are working from data that's at most 24 hours old.
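To confirm the nightly export is actually firing, Snowflake's TASK_HISTORY table function shows recent runs and any errors - a minimal monitoring check, assuming the task name used above:

```sql
-- Last 10 runs of the export task: state, timing, and any error
SELECT NAME, STATE, SCHEDULED_TIME, COMPLETED_TIME, ERROR_MESSAGE
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => 'NIGHTLY_EXPORT'))
ORDER BY SCHEDULED_TIME DESC
LIMIT 10;
```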
The real risk isn't losing Snowflake - it's losing the ability to act on the data it holds. When an upstream system like SAP or your ERP goes down, Snowflake still has the data. The question is whether your teams can do anything with it.
With Astran connected to Snowflake via a scheduled S3 export, the answer is yes. Your teams keep executing their critical processes from a secured, independently available copy - without waiting for any primary system to come back online. Two SQL commands to set up. Continuous resilience as the result.
Ready to set it up? The full documentation is available at docs.astran.ai, and our team can walk you through a pilot in as little as two weeks.