Payments category · Certified by Stitch

Learn more about syncing Stripe data

Documentation

Detailed documentation on how to start syncing Stripe data.

Stripe Documentation

Manual Instructions

How to extract data from Stripe and load it to Delta Lake on Databricks manually.



Jumpstart your Stripe analytics with reusable blocks

dbt packages can speed up your work

Once you replicate your Stripe data with Stitch, you can use it in many ways. For example, you can use the data modeling and transformation tool dbt to prepare data for reporting, analytics, or machine learning applications.

dbt offers prebuilt packages for many Stitch data sources, including Stripe. Here’s an example of dbt code that models monthly recurring revenue (MRR) from your Stripe data.

View the source on GitHub →
{{
    config(
        materialized = 'table',
        sort = 'date_month',
        dist = 'customer_id'
    )
}}

with mrr as (

    select * from {{ref('stripe_mrr_unioned')}}

),

mrr_with_changes as (

    select
        *,

        coalesce(
            lag(mrr) over (partition by customer_id order by date_month),
            0) as prior_mrr,

        mrr - coalesce(
            lag(mrr) over (partition by customer_id order by date_month),
            0
        ) as mrr_change

    from mrr

),

final as (

    select
        *,

        case
            when first_month = 1 and mrr > 0 then 'new'
            when active_customer = 0
                and lag(active_customer) over (partition by customer_id order by date_month) = 1
                then 'churn'
            when lag(active_customer) over (partition by customer_id order by date_month) = 0
                and active_customer = 1
                then 'reactivation'
            when mrr_change > 0 then 'upgrade'
            when mrr_change < 0 then 'downgrade'
        end as change_category,

        least(mrr, prior_mrr) as renewal_amount

    from mrr_with_changes

)

select * from final
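Once dbt builds this model, you can query it directly in your warehouse. The minimal sketch below assumes the model is materialized as a table named analytics.stripe_mrr (both the schema and table name are hypothetical; dbt derives the table name from the model's file name and your configured schema). It summarizes net MRR movement by month and change category.

-- Hypothetical follow-on query against the MRR model built above
select
    date_month,
    change_category,
    count(distinct customer_id) as customers,        -- customers in each movement bucket
    sum(mrr_change)             as net_mrr_change    -- net dollar movement for the bucket
from analytics.stripe_mrr                            -- assumed schema.table for the model
where change_category is not null
group by date_month, change_category
order by date_month, change_category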

Start replicating your Stripe data

Select your integrations, choose your warehouse, and enjoy Stitch free for 14 days.

Set up in minutes. Unlimited data volume during trial.

Simplify your Delta Lake on Databricks migration

When it comes to replicating your data to Delta Lake on Databricks, conventional ETL is no longer the only game in town.

Writing ETL code requires big investments of time, money, and expertise that might otherwise be used for innovation. Most importantly, newer approaches to data ingestion deliver faster implementation than traditional ETL, so you can produce data analytics and business intelligence more quickly.

This is where Stitch can help.


All your data, where you need it

Give your analysts, data scientists, and other team members the freedom to use the analytics tools of their choice.

See all analysis tools

Stitch's interface is sleek and efficient. We only know it's running when it sends us alerts; otherwise, it does its job without bother.

Amaury Dumoulin

Data Lead, Qonto


Why our customers choose Stitch

Stitch is a simple, powerful ETL service built for developers. Stitch connects to your first-party data sources – from databases like MongoDB and MySQL, to SaaS tools like Salesforce and Zendesk – and replicates that data to your warehouse. With Stitch, developers can provision data for their internal users in minutes, not weeks.

Simple setup
Start replicating data in minutes, and never worry about ETL maintenance.
Own your own data infrastructure
Stitch replicates to your warehouse, meaning you’re always in control.
Mature replication engine
Accurate data from any structure, all the time.
Explore all of Stitch's features

Connect to your ecosystem of data sources

Stitch integrates with leading databases and SaaS products. No API maintenance, ever, while you maintain full control over replication behavior.