Snowflake pricing is based on two factors: the volume of data stored in your Snowflake destination and the amount of compute usage (the time the warehouse runs), measured in seconds.
Snowflake offers two types of plans, each with varying levels of access and features. On Demand plans are commitment-free and usage-based; Capacity plans involve an up-front commitment in exchange for discounted pricing. Learn more about Snowflake plans and pricing here.
Snowflake warehouse sizes
Snowflake data warehouses come in a range of sizes - X-Small, Large, and 3X-Large, for example - and the size determines how many servers comprise each cluster in a warehouse.
While the size of a warehouse can impact the time required to execute queries, bigger doesn’t always mean better. Warehouse size is directly tied to the number of credits used, which will directly impact your Snowflake costs. Learn more about Snowflake warehouse sizes here.
To help you select the warehouse size that fits your needs and budget, check out Snowflake’s Warehouse Considerations guide before getting started.
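As a rough illustration of how size drives cost: per Snowflake's documentation, credits consumed per hour double with each size step, and compute is billed per second with a 60-second minimum each time a warehouse starts. The helper below is only a sketch for estimating usage, not an official calculator.

```python
# Sketch: estimating Snowflake credit usage by warehouse size.
# Credits per hour double with each size step (per Snowflake's docs);
# compute is billed per second, with a 60-second minimum per resumption.

CREDITS_PER_HOUR = {
    "X-Small": 1, "Small": 2, "Medium": 4, "Large": 8,
    "X-Large": 16, "2X-Large": 32, "3X-Large": 64, "4X-Large": 128,
}

def estimate_credits(size: str, runtime_seconds: int) -> float:
    """Estimate credits consumed by a single run of a warehouse."""
    billable = max(runtime_seconds, 60)  # 60-second minimum per start
    return CREDITS_PER_HOUR[size] * billable / 3600

# e.g. a Large warehouse running for 10 minutes: 8 * 600 / 3600 credits
```

Note how a short query on a large warehouse still incurs the 60-second minimum, which is one reason bigger isn't always cheaper.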
Automated warehouse management
To reduce usage, you can automate the management of your Snowflake warehouse: suspend it when there's been no activity for a specified period of time, and automatically resume it when activity returns. Note that these settings apply to the entire warehouse, not to individual clusters.
Enabling these settings depends on your workload and availability needs. Learn more about the Auto Suspend and Auto Resume features here.
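To see why auto-suspend reduces billed time, here's a small sketch (not Stitch or Snowflake code) that models the billed window: after each query the warehouse stays up for the auto-suspend interval, then suspends until the next query resumes it.

```python
# Sketch: total billed seconds under auto-suspend.
# Model: the warehouse runs during each query and for `auto_suspend`
# idle seconds afterward; overlapping windows merge into one run.

def billed_seconds(queries, auto_suspend):
    """queries: list of (start, end) times in seconds."""
    intervals = sorted((s, e + auto_suspend) for s, e in queries)
    total, cur_start, cur_end = 0, None, None
    for s, e in intervals:
        if cur_end is None or s > cur_end:   # gap: warehouse suspended
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = s, e
        else:                                # still running: extend the run
            cur_end = max(cur_end, e)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

# Two 1-minute queries 10 minutes apart, 60s auto-suspend:
# two 2-minute runs (240s billed) instead of 12+ minutes always-on.
```

With a longer auto-suspend interval, the idle gap between queries would be billed in full; the trade-off is that resuming a suspended warehouse adds a short delay to the first query.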
Note: Stitch will only ever impact your Snowflake usage when loading data.
High-level details about Stitch’s implementation of Snowflake, such as supported connection methods, availability on Stitch plans, etc.
Details about Stitch and replication for Snowflake destinations.
Incremental Replication - Upserts
Using a table’s Primary Keys, Stitch will perform an update operation on existing rows to overwrite the data.
Snowflake destinations natively support Primary Keys.
Stitch requires Primary Keys to de-dupe data during Incremental Replication. Primary Key columns are created using Primary Key constraints, which require that values be both unique and not null.
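Conceptually, the upsert step works like the sketch below: rows are keyed by their Primary Key, and an incoming row whose key already exists overwrites the stored row, while new keys are appended. (Stitch performs this in the warehouse with an update/insert-style load; the dictionaries here are just an illustration.)

```python
# Sketch: how upserts de-dupe rows on a Primary Key.
# An incoming row whose key already exists overwrites the stored row;
# rows with new keys are appended.

def upsert(existing_rows, incoming_rows, primary_key):
    table = {row[primary_key]: row for row in existing_rows}
    for row in incoming_rows:
        table[row[primary_key]] = row  # overwrite or insert
    return list(table.values())

old = [{"id": 1, "name": "Finn"}, {"id": 2, "name": "Jake"}]
new = [{"id": 2, "name": "Jake the Dog"}, {"id": 3, "name": "BMO"}]
result = upsert(old, new, "id")
# id 2 is overwritten, id 3 is appended: 3 rows total
```

This is also why null or non-unique Primary Key values are a problem: the key lookup that drives the overwrite would be ambiguous.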
Multiple data types
Snowflake destinations do not support multiple data types in a single column.
To accommodate this scenario, Stitch will create additional columns for each data type. Refer to the Table structural changes guide for examples and more info.
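The effect is roughly the sketch below: values of each type land in their own type-specific column. The `__str`/`__int` suffixes here are illustrative placeholders, not Stitch's actual naming scheme - refer to the Table structural changes guide for the real column names.

```python
# Sketch: splitting a mixed-type source column into per-type columns.
# The `__str`/`__int`/`__float` suffixes are illustrative placeholders,
# not Stitch's actual naming scheme.

def split_by_type(records, column):
    suffixes = {str: "__str", int: "__int", float: "__float"}
    out = []
    for rec in records:
        rec = dict(rec)
        value = rec.pop(column, None)
        if value is not None:
            rec[column + suffixes[type(value)]] = value
        out.append(rec)
    return out

rows = [{"id": 1, "zip": 94110}, {"id": 2, "zip": "94110-1234"}]
split = split_by_type(rows, "zip")
# the first record gets a zip__int column, the second a zip__str column
```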
|Nested data structure support||
Snowflake destinations natively support nested data structures.
When Stitch replicates source data containing objects or arrays, Stitch will load the data intact into a VARIANT column.
You can then use Snowflake’s functions for semi-structured data to parse the data. More info can be found in Snowflake’s documentation.
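The "load intact, parse at query time" idea looks roughly like this. The Python below only mimics what happens: the nested structure is stored as-is (as a VARIANT holds it) and traversed when queried - in Snowflake you'd use the colon path syntax (e.g. `data:shipping.city`) or functions like FLATTEN instead of `json.loads`.

```python
import json

# Sketch: nested source data is loaded intact (as Snowflake's VARIANT
# stores it) and parsed only at query time. In Snowflake you'd use the
# colon path syntax or FLATTEN rather than json.loads.

raw_record = '{"order_id": 7, "shipping": {"city": "Philadelphia", "items": [1, 2]}}'

loaded = json.loads(raw_record)      # the query-time parse
city = loaded["shipping"]["city"]    # analogous to data:shipping.city
```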
In this section:
Object name limits - Details about object naming rules imposed by Snowflake destinations.
Data limits - Details about data limitations imposed by Snowflake destinations.
Column naming - Details about transformations performed by Stitch to ensure source column names adhere to the object naming rules imposed by Snowflake destinations.
Object name limits
Details about object naming rules imposed by Snowflake destinations.
Snowflake destinations treat upper- and lowercase characters as different characters. Refer to the Column naming section of this guide for examples.
Table name length
Tables with names that exceed this limit will be rejected and logged.
Column name length
Columns with names that exceed this limit will be rejected and logged.
Data limits
Details about data limitations imposed by Snowflake destinations.
Max record size
Records that exceed this limit will be rejected and logged.
Snowflake will store the data using one of its timestamp data types.
More info about timestamp data types can be found in Snowflake’s documentation.
Up to 10,000 AD
Timestamp values outside of this range will be rejected and logged.
More info about Snowflake time data types can be found in Snowflake’s documentation.
Limited to 38 digits of precision
Snowflake will store integer data as NUMBER(38,0), its fixed-point numeric type.
Integer values that exceed the limit noted above will be rejected and logged.
Must be between 1 and 38 (38 is the default)
Snowflake will store decimal data as NUMBER with the specified precision and scale.
Decimal values outside of this range will be rejected and logged.
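A quick way to reason about the precision limit: a value fits within NUMBER only if its total count of significant digits is at most 38. The check below is a simplified sketch of that rule, not Stitch's validation code.

```python
from decimal import Decimal

# Sketch: checking that a value fits within 38 digits of precision
# (Snowflake's NUMBER limit). Simplified; not Stitch's actual
# validation logic.

def fits_number_38(value: str) -> bool:
    digits = Decimal(value).as_tuple().digits  # significant digits only
    return len(digits) <= 38

# fits_number_38("1234.5678") is True; a 39-digit integer is not.
```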
VARCHAR max width
Note: While Snowflake’s limit is 16 MB, Stitch imposes a record limit of 4 MB. The record as a whole must be less than 4 MB or it will be rejected and logged.
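Since the 4 MB limit applies to the serialized record as a whole, a size check looks roughly like the sketch below. The JSON serialization here is illustrative; it is not necessarily how Stitch measures record size.

```python
import json

# Sketch: rejecting records whose serialized size exceeds Stitch's
# 4 MB record limit. JSON is used here for illustration; Stitch's
# actual measurement may differ.

MAX_RECORD_BYTES = 4 * 1024 * 1024  # 4 MB

def within_record_limit(record: dict) -> bool:
    size = len(json.dumps(record).encode("utf-8"))
    return size <= MAX_RECORD_BYTES

small = {"id": 1, "notes": "ok"}
big = {"id": 2, "notes": "x" * (5 * 1024 * 1024)}  # ~5 MB of text
# within_record_limit(small) passes; within_record_limit(big) does not
```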
Column naming
Details about transformations performed by Stitch to ensure source column names adhere to the object naming rules imposed by Snowflake destinations.
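A normalization pass of this kind typically looks like the sketch below. The specific rules shown (uppercasing, replacing unsupported characters with underscores, prefixing a leading digit) are assumptions for illustration based on common Snowflake identifier requirements - they are not Stitch's exact transformations, which are documented in this guide.

```python
import re

# Sketch: normalizing a source column name to satisfy typical Snowflake
# identifier rules. These rules are illustrative assumptions, not
# Stitch's exact transformations.

def normalize_column_name(name: str) -> str:
    name = re.sub(r"[^A-Za-z0-9_]", "_", name)  # replace unsupported chars
    if name and name[0].isdigit():
        name = "_" + name  # identifiers can't begin with a digit
    return name.upper()

# normalize_column_name("order-total") yields "ORDER_TOTAL"
# normalize_column_name("2022sales") yields "_2022SALES"
```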
Not sure if Snowflake is the data warehouse for you? Check out the Choosing a Stitch Destination guide to compare each of Stitch’s destination offerings.