When Stitch replicates your data, we’ll load it into the destination of your choosing.

We recommend asking yourself the questions below before making your selection. By fully assessing each choice first, you’ll decrease the likelihood of needing to switch destinations or re-replicate all of your data at a later date.

In this guide, we’ll cover:

  • Side-by-side comparison
  • Destination and data source compatibility
  • Destination and analysis tool compatibility
  • Replication, transformations, and data structure
  • Maintenance and support
  • Destination pricing models

Side-by-side comparison

The following tabs contain a high-level look at each of Stitch’s destinations and how they compare to each other.

The remaining sections of this guide expand on the information in these tabs.

Destination Version Release status Stitch availability Fully managed?
Amazon Redshift v2 Released All Stitch plans No
Amazon S3 v1 Released All Stitch plans No
data.world v1 Released All Stitch plans Yes
Databricks Delta Lake (AWS) v1 Released All Stitch plans Yes
Google BigQuery v1 Deprecated All Stitch plans Yes
Google BigQuery v2 Released All Stitch plans Yes
Microsoft Azure Synapse Analytics v1 Released All Stitch plans Yes
Microsoft SQL Server v1 Beta All Stitch plans No
MySQL v1 Released All Stitch plans No
Panoply v2 Released All Stitch plans Yes
PostgreSQL v1 Released All Stitch plans Depends
Snowflake v1 Released All Stitch plans Yes

For more details, see the Pricing section.

Destination Version Pricing model Free option?
Amazon Redshift v2 Hourly Yes (plan and trial)
Amazon S3 v1 Storage Yes (trial)
data.world v1 Monthly Yes (plan)
Databricks Delta Lake (AWS) v1 Usage Yes (trial)
Google BigQuery v1 Usage No
Google BigQuery v2 Usage No
Microsoft Azure Synapse Analytics v1 Usage Yes (plan and trial)
Microsoft SQL Server v1 Varies Yes (plan and trial)
MySQL v1 Annual Yes
Panoply v2 Monthly Yes (trial)
PostgreSQL v1 Varies Depends
Snowflake v1 Usage No
In this tab:
  • General Stitch details
  • Support for operating regions
  • Support for connection methods
GENERAL STITCH DETAILS
Destination Version API availability Incompatible sources Notes
Amazon Redshift v2 Available Yes
Amazon S3 v1 Available None
data.world v1 Unavailable None
Databricks Delta Lake (AWS) v1 Available None Stitch requires Databricks Runtime version 6.3 or later
Google BigQuery v1 Unavailable None
Google BigQuery v2 Available None
Microsoft Azure Synapse Analytics v1 Available None
Microsoft SQL Server v1 Available None Stitch requires one of the following versions: Azure SQL Server, SQL Server 2019, SQL Server 2017, SQL Server 2016, SQL Server 2014, or SQL Server 2012. Note: SSL can only be used with versions of Microsoft SQL Server that support TLS 1.2. Check which versions support it in Microsoft's documentation.
MySQL v1 Available None Stitch requires MySQL version 5.7.8 or later
Panoply v2 Available Yes
PostgreSQL v1 Available Yes Stitch requires PostgreSQL version 9.3 or later
Snowflake v1 Available None
OPERATING REGION SUPPORT
Destination Version North America Europe
Amazon Redshift v2
Amazon S3 v1
data.world v1
Databricks Delta Lake (AWS) v1
Google BigQuery v1
Google BigQuery v2
Microsoft Azure Synapse Analytics v1
Microsoft SQL Server v1
MySQL v1
Panoply v2
PostgreSQL v1
Snowflake v1
CONNECTION METHOD SUPPORT
Destination Version SSL support SSH support VPN support
Amazon Redshift v2
Amazon S3 v1 n/a n/a n/a
data.world v1 n/a
Databricks Delta Lake (AWS) v1
Google BigQuery v1
Google BigQuery v2
Microsoft Azure Synapse Analytics v1
Microsoft SQL Server v1
MySQL v1
Panoply v2
PostgreSQL v1
Snowflake v1
In this tab:
  • Object name limits
  • Other destination details
OBJECT NAME LIMITS
Destination Version Table name length Column name length
Amazon Redshift v2 127 characters 115 characters
Amazon S3 v1 1,024 bytes None
data.world v1 None None
Databricks Delta Lake (AWS) v1 78 characters 122 characters
Google BigQuery v1 1,024 characters 128 characters
Google BigQuery v2 1,024 characters 128 characters
Microsoft Azure Synapse Analytics v1 112 characters 128 characters
Microsoft SQL Server v1 113 characters 128 characters
MySQL v1 60 characters 64 characters
Panoply v2 127 characters 115 characters
PostgreSQL v1 63 characters 59 characters
Snowflake v1 255 characters 251 characters
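Because these limits differ per destination, a quick pre-flight check can catch table or column names that would be truncated or rejected. Below is a minimal, hypothetical sketch (the function and dictionary names are our own, and the limits are hard-coded from the table above; verify them against your destination's documentation):

```python
# (table name limit, column name limit) in characters, per the table above.
LIMITS = {
    "redshift": (127, 115),
    "postgres": (63, 59),
    "mysql": (60, 64),
    "snowflake": (255, 251),
}

def check_names(destination, table_name, column_names):
    """Return the names that exceed the destination's length limits."""
    max_table, max_column = LIMITS[destination]
    issues = []
    if len(table_name) > max_table:
        issues.append(table_name)
    issues.extend(c for c in column_names if len(c) > max_column)
    return issues

# A 70-character table name and a 60-character column name both exceed
# PostgreSQL's limits (63 and 59 characters respectively).
print(check_names("postgres", "x" * 70, ["ok", "y" * 60]))
```

Running a check like this before loading makes destination-side truncation errors visible while they are still easy to fix at the source.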
OTHER DETAILS
Destination Version Case sensitivity Max columns per table
Amazon Redshift v2 Insensitive 1,600
Amazon S3 v1 n/a None
data.world v1 n/a None
Databricks Delta Lake (AWS) v1 Insensitive None
Google BigQuery v1 Insensitive 10,000
Google BigQuery v2 Insensitive 10,000
Microsoft Azure Synapse Analytics v1 Insensitive 1,024
Microsoft SQL Server v1 Insensitive 1,024
MySQL v1 Insensitive 1,017
Panoply v2 Insensitive 1,600
PostgreSQL v1 Sensitive 250-1,600
Snowflake v1 Insensitive None

For more details, see the Replication and transformations section.

Destination Version Default loading behavior Nested structure support
Amazon Redshift v2 Upsert Unsupported
Amazon S3 v1 Append-Only Depends (on data storage format: CSV or JSON)
data.world v1 Upsert Supported
Databricks Delta Lake (AWS) v1 Upsert Supported
Google BigQuery v1 Append-Only Supported
Google BigQuery v2 Selected by you Supported
Microsoft Azure Synapse Analytics v1 Upsert Unsupported
Microsoft SQL Server v1 Upsert Supported
MySQL v1 Upsert Supported
Panoply v2 Upsert Unsupported
PostgreSQL v1 Upsert Unsupported
Snowflake v1 Upsert Supported

Destination and data source compatibility

Some integrations may be partially or fully incompatible with some of the destinations offered by Stitch. For example: Some destinations don’t support storing multiple data types in the same column. If a SaaS integration sends over a column with mixed data types, some destinations may “reject” the data.

For integrations that allow you to control how data is structured, you may be able to fix the problem at the source and successfully replicate the data. If this is not possible, however, Stitch may never be able to successfully replicate the incompatible data.
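To make the mixed-type problem concrete, here's a small hypothetical sketch (not Stitch's actual implementation) of scanning extracted records for columns whose values span more than one data type before loading:

```python
from collections import defaultdict

def mixed_type_columns(records):
    """Return the names of columns whose non-null values span more
    than one Python type. Nulls are ignored, since a null can sit
    alongside any type without causing a conflict."""
    seen = defaultdict(set)
    for record in records:
        for column, value in record.items():
            if value is not None:
                seen[column].add(type(value).__name__)
    return sorted(col for col, types in seen.items() if len(types) > 1)

records = [
    {"id": 1, "amount": 9.99},
    {"id": 2, "amount": "9.99"},  # same column, now a string
    {"id": 3, "amount": None},    # null doesn't count as a type
]
print(mixed_type_columns(records))  # ['amount']
```

A destination that requires a single type per column would have to reject (or coerce) the `amount` values above; catching the mix at the source avoids that.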

Refer to the Integration and destination compatibility reference for more info.


Destination and analysis tool compatibility

You may want to investigate whether your preferred analysis tool supports a native connection to your Stitch destination. We’ve investigated some popular options for you; see the Analysis tools reference for details on using them with Stitch.

Amazon Redshift Amazon S3 Google BigQuery Microsoft Azure PostgreSQL (self-hosted) Snowflake
Amazon Quicksight Supported Supported Supported
Google Data Studio Supported Supported
Grafana Via plugin Supported
Looker Supported Supported Supported Supported Supported
Metabase Supported Supported Supported Supported
PowerBI Supported Via REST API Supported Supported Supported Supported
Qlik Supported Supported Supported
Sisense Supported Via Amazon Athena Supported Supported Supported Supported
Tableau Supported Via Amazon Athena Supported Supported Supported Supported

Replication, transformations, and data structure

While the majority of your data will look the same across our destinations, there are some key differences you should be aware of:

Loading behavior and updates to existing records

Loading behavior determines how data is loaded into your destination, specifically how updates are made to existing rows in the destination.

Stitch supports two loading behavior types:

  • Upsert: When data is loaded using the Upsert behavior, existing rows are updated in tables with defined Primary Keys. Stitch will de-dupe records based on Primary Keys before loading them, meaning that only one version of a record will exist in the table at any given time.
  • Append-Only: When data is loaded using the Append-Only behavior, records are appended to the end of the table as new rows. Existing rows in the table aren’t updated even if the source has defined Primary Keys. Multiple versions of a row can exist in a table, creating a log of how a record has changed over time.
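The two behaviors above can be modeled with a simplified sketch (illustrative only, not Stitch's internal logic):

```python
def upsert(table, incoming, key="id"):
    """Upsert: de-dupe incoming rows on the Primary Key (last write
    wins), then update existing rows or insert new ones."""
    latest = {row[key]: row for row in incoming}
    merged = {row[key]: row for row in table}
    merged.update(latest)
    return list(merged.values())

def append_only(table, incoming):
    """Append-Only: every incoming row is appended as a new row, even
    when its Primary Key already exists in the table."""
    return table + list(incoming)

table = [{"id": 1, "status": "new"}]
changes = [{"id": 1, "status": "open"}, {"id": 1, "status": "closed"}]

print(upsert(table, changes))        # one row: id 1, status "closed"
print(len(append_only(table, changes)))  # 3 rows: a log of every change
```

With Upsert, only the latest version of record 1 survives; with Append-Only, all three versions remain, giving you a history at the cost of extra rows.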

The table below lists the default loading behavior for each destination and whether it can be configured.

Note: If a destination supports and is configured to use Upsert loading, Stitch will attempt to use Upsert loading before Append-Only. All other conditions for Upsert loading must also be met.

Refer to the Understanding loading behavior guide for more info and examples of each loading behavior type.

Destination Version Default loading behavior Loading behavior is configurable?
Amazon Redshift v2 Upsert
Amazon S3 v1 Append-Only
data.world v1 Upsert
Databricks Delta Lake (AWS) v1 Upsert
Google BigQuery v1 Append-Only
Google BigQuery v2 Selected by you
Microsoft Azure Synapse Analytics v1 Upsert
Microsoft SQL Server v1 Upsert
MySQL v1 Upsert
Panoply v2 Upsert
PostgreSQL v1 Upsert
Snowflake v1 Upsert

Nested data structures

Some destinations don’t natively support nested structures, meaning that before Stitch can load replicated data, these structures must be “de-nested”. During this process, Stitch will flatten nested structures into relational tables and subtables. Because creating subtables produces additional records, your destination row count will be higher.

If a destination does natively support nested structures, no de-nesting will occur and Stitch will store the records intact.

Check out the Handling of Nested Data & Row Count Impact guide for an in-depth look at what we mean by nested records, how Stitch handles nested data, and what those records will look like in your data warehouse.
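As a rough illustration of de-nesting, the sketch below flattens one record with a nested array into a parent table plus a subtable (the naming scheme here is hypothetical and only loosely modeled on Stitch's; see the guide above for the real behavior):

```python
def denest(record, table="orders", key="id"):
    """Flatten one record: each list of objects becomes a subtable
    whose rows carry the parent's Primary Key as a foreign key."""
    tables = {table: []}
    parent = {}
    for field, value in record.items():
        if isinstance(value, list):
            subtable = f"{table}__{field}"  # hypothetical naming scheme
            tables[subtable] = [
                {f"_sdc_source_key_{key}": record[key], **item} for item in value
            ]
        else:
            parent[field] = value
    tables[table].append(parent)
    return tables

order = {"id": 7, "total": 20, "items": [{"sku": "A"}, {"sku": "B"}]}
tables = denest(order)
# One source record becomes three relational rows:
# orders: 1 row, orders__items: 2 rows keyed back to order 7.
```

This is why de-nesting increases row counts: the single `order` record above becomes one parent row plus two subtable rows.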

Destination Version Support Notes
Amazon Redshift v2 Unsupported Nested data structures will be flattened into relational objects.
Amazon S3 v1 Depends Depending on the data storage format you select, nested data structures may be kept intact or flattened into relational objects. CSV: nested data structures will be flattened into relational objects. JSON: nested data will be kept intact.
data.world v1 Supported
Databricks Delta Lake (AWS) v1 Supported Nested data structures (JSON arrays and objects) will be loaded intact into a STRING column with a comment specifying that the column contains JSON.
Google BigQuery v1 Supported Nested data structures will be maintained.
Google BigQuery v2 Supported Nested data structures will be maintained.
Microsoft Azure Synapse Analytics v1 Unsupported Nested data structures will be flattened into relational objects.
Microsoft SQL Server v1 Supported Nested data structures (JSON arrays and objects) will be loaded intact into a NVARCHAR(MAX) column.
MySQL v1 Supported Nested data structures will be maintained.
Panoply v2 Unsupported Nested data structures will be flattened into relational objects.
PostgreSQL v1 Unsupported Nested data structures will be flattened into relational objects.
Snowflake v1 Supported Nested data structures will be loaded intact into VARIANT columns.

Maintenance and support

With the exception of a self-hosted PostgreSQL instance, all the destinations offered by Stitch are cloud-hosted databases, meaning you won’t have to factor in server maintenance when making a decision.

In the table below are:

  • Fully-managed destinations that include routine maintenance and upgrades in their plans
  • DIY destinations that will require you to perform and schedule maintenance tasks on your own
Fully-managed destinations:
  • data.world
  • Databricks Delta Lake (AWS)
  • Google BigQuery (v1 and v2)
  • Microsoft Azure Synapse Analytics
  • Panoply
  • Snowflake
  • PostgreSQL (Heroku, Amazon RDS, Amazon Aurora)

DIY destinations:
  • Amazon Redshift
  • Amazon S3
  • Microsoft SQL Server
  • MySQL
  • PostgreSQL (self-hosted)

Destination pricing models

Every destination offered by Stitch has its own pricing structure. Some providers charge by overall usage, others by an hourly rate or the amount of stored data. Depending on your needs, some pricing structures may fit better into your budget.

In the table below, you’ll find each destination’s pricing structure, including a link to detailed price info and whether a free option (trial or plan) is available.

Destination Version Pricing model Notes
Amazon Redshift v2 Hourly Amazon Redshift bases its pricing on an hourly rate that varies with the type and number of nodes in a cluster. The type and number of nodes you choose when creating a cluster depends on your needs and data set, but you can scale up or down over time should your requirements change.
Amazon S3 v1 Storage Amazon S3 pricing is based on two factors: the amount of data stored in your Amazon S3 bucket and its location (region).
data.world v1 Monthly data.world plans vary by the number of private projects/datasets, size limits per project/dataset, external integrations, and the total number of team members that can belong to an account. All plans, however, include unlimited public projects/datasets, API access, joins, queries, activity alerts, and other standard features.
Databricks Delta Lake (AWS) v1 Usage
Google BigQuery v1 Usage Google BigQuery’s pricing isn’t based on a fixed rate, meaning your bill can vary over time. To learn more about how Stitch may impact your BigQuery costs, click here.
Google BigQuery v2 Usage Google BigQuery’s pricing isn’t based on a fixed rate, meaning your bill can vary over time. To learn more about how Stitch may impact your BigQuery costs, click here.
Microsoft Azure Synapse Analytics v1 Usage Microsoft Azure Synapse Analytics bases its pricing on your compute and storage usage. Compute usage is charged at an hourly rate and billed in one-hour increments, meaning you’ll only be billed for the hours your data warehouse is active. Storage charges include the size of your primary database and seven days of incremental snapshots. Microsoft Azure rounds charges to the nearest terabyte (TB). For example: if the data warehouse is 1.5 TB and you have 100 GB of snapshots, you’ll be billed for 2 TB of data. Refer to Microsoft’s documentation for more info and examples.
Microsoft SQL Server v1 Varies Refer to Microsoft’s documentation for more info and examples.
MySQL v1 Annual Refer to MySQL’s documentation for more info and examples.
Panoply v2 Monthly Panoply charges based on the amount of data stored and offers several plan options to fit your needs. Refer to their pricing page for more information.
PostgreSQL v1 Varies Pricing depends on the type of PostgreSQL instance you’re using. Heroku and Amazon RDS, for example, offer a variety of plans to choose from.
Snowflake v1 Usage Snowflake pricing is based on two factors: the volume of data stored in your Snowflake destination and the amount of compute usage (the time the server runs) in seconds. Snowflake offers two types of plans, each with varying levels of access and features: commitment-free, usage-based On Demand plans, and Capacity plans that offer discounted pricing in exchange for an up-front commitment. Learn more about Snowflake plans and pricing here.
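The Microsoft Azure Synapse Analytics storage example above (a 1.5 TB warehouse plus 100 GB of snapshots billed as 2 TB) works out as follows. This is a rough sketch that assumes rounding to the nearest whole terabyte, as the example suggests; check Microsoft's documentation for the exact billing rules:

```python
def synapse_billed_storage_tb(warehouse_tb, snapshots_gb):
    """Approximate Azure Synapse storage billing: total storage
    (warehouse + snapshots) rounded to the nearest terabyte.
    Assumption based on the example in this guide, not Microsoft's
    published billing formula."""
    total_tb = warehouse_tb + snapshots_gb / 1024
    return round(total_tb)

# 1.5 TB warehouse + 100 GB of snapshots ~= 1.6 TB total -> billed as 2 TB
print(synapse_billed_storage_tb(1.5, 100))  # 2
```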


Getting started, now

If you simply want to try Stitch with Redshift, or if you don’t have the ability to spin up a Redshift cluster of your own in AWS, we recommend trying Panoply. With just a few clicks, you can create your own fully-managed Redshift data warehouse and start replicating data in minutes.

Note: If you decide to switch destinations later, you’ll need to queue a full re-replication of your data to ensure historical data is present in the new destination.


Additional resources and setup tutorials

Ready to pick a destination and get started? Use the links below to check out a more in-depth look at each destination or move on to the setup and connection process.


Questions? Feedback?

Did this article help? If you have questions or feedback, feel free to submit a pull request with your suggestions, open an issue on GitHub, or reach out to us.