Learn how Stitch will load data from your integrations into Stitch’s Snowflake destination.

In this guide, we’ll cover data loading scenarios involving:

  • Primary Key scenarios
  • Replication Key scenarios
  • Object naming scenarios
  • Table scenarios
  • Data typing scenarios
  • Schema change scenarios
  • Destination changes

Primary Key scenarios

Scenarios involving Primary Key columns.

IF

A table without a Primary Key is replicated.

THEN
  • Initial job: Table is created without NOT NULL column constraints.
  • Subsequent jobs: If using Key-based Incremental or Log-based Incremental Replication, records will be added to the table in an Append-Only fashion.

    If using Full Table Replication, the table will be overwritten in its entirety during each job.

IF

A table with a single Primary Key is replicated.

THEN
  • Initial job: Table is created. The Primary Key column is created with NOT NULL column constraints.
  • Subsequent jobs: If using Key-based Incremental or Log-based Incremental Replication, data will be de-duped based on the Primary Key column and upserted into the table.

    If using Full Table Replication, the table will be overwritten in its entirety during each job.

IF

A table with multiple Primary Keys is replicated.

THEN
  • Initial job: Table is created. Primary Key columns are created with NOT NULL column constraints.
  • Subsequent jobs: If using Key-based Incremental or Log-based Incremental Replication, data will be de-duped based on the Primary Key columns and upserted into the table.

    If using Full Table Replication, the table will be overwritten in its entirety during each job.

IF

The table’s Primary Key(s) is/are changed.

THEN

Changing a table’s Primary Key(s) is not permitted in Snowflake.

If Primary Key columns are changed, Stitch will stop processing data for the table.

AND

The following error will display in the Notifications tab in Stitch:

Keys for table do not match Primary Keys of incoming data

FIX IT

Re-instate the table’s Primary Key(s) or drop the table to allow Stitch to continue loading data.
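
Both fixes can be applied with standard Snowflake DDL. The sketch below is illustrative only: the database, schema, table, and column names are hypothetical, and a re-created Primary Key should match what Stitch originally defined for the table.

  -- Option 1: re-instate the declared Primary Key (hypothetical names)
  ALTER TABLE stitch_db.ecommerce.orders
    ADD PRIMARY KEY (id);

  -- Option 2: drop the table so Stitch can re-create it during the next job
  DROP TABLE stitch_db.ecommerce.orders;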

IF

You remove the Primary Key column(s) for a table in Snowflake.

THEN

Changing a table’s Primary Key(s) is not permitted in Snowflake.

If Primary Key columns are changed, Stitch will stop processing data for the table.

AND

The following error will display in the Notifications tab in Stitch:

Keys for table do not match Primary Keys of incoming data

FIX IT

Re-instate the table’s Primary Key(s) to allow Stitch to continue processing data for the table.



Replication Key scenarios

Scenarios involving Replication Keys and how data is loaded as a result.

IF

A table using Key-based Incremental Replication is replicated, and the Replication Key column contains NULL values.

THEN
  • During the initial job, the table will be created and all rows will be replicated.
  • During subsequent jobs, only rows with populated Replication Keys will be replicated and persisted to Snowflake.
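
To gauge the impact ahead of time, you can check the source for NULL Replication Key values. A minimal sketch, assuming a SQL source with a hypothetical orders table and an updated_at Replication Key:

  -- Count rows that would be skipped after the initial job (hypothetical names)
  SELECT COUNT(*) AS null_replication_keys
  FROM orders
  WHERE updated_at IS NULL;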



Object naming scenarios

Scenarios involving object identifiers in the destination, including naming limitations and transformations.

IF

A table name contains more characters than allowed by Snowflake.

THEN

Snowflake will reject all data for the table.

AND

The following error will display in the Notifications tab in Stitch:

Table name [TABLE] is too long for Snowflake

Rejected records will be logged in the _sdc_rejected table of the integration's schema.

FIX IT

If possible, change the table name in the source to be less than Snowflake’s character limit of 255 characters.

Use the _sdc_rejected table to identify the root of the issue.
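
For example, the rejection details can be inspected directly in Snowflake. A minimal sketch, assuming an integration schema named ecommerce (the schema name will vary by integration):

  -- Inspect rejected records for the integration (hypothetical schema name)
  SELECT *
  FROM ecommerce._sdc_rejected;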

IF

A column name contains more characters than allowed by Snowflake.

THEN

Snowflake will reject columns with names that exceed the column character limit. Other columns in the table will persist to Snowflake.

AND

The following error will display in the Notifications tab in Stitch:

Column name [COLUMN] is too long for Snowflake

Rejected records will be logged in the _sdc_rejected table of the integration's schema.

FIX IT

If possible, change the column name in the source to be less than Snowflake’s character limit of 251 characters.

Use the _sdc_rejected table to identify the root of the issue.

IF

Two columns are replicated that canonicalize to the same name.

THEN

For example: A table containing both CustomerId and customerid columns.

Snowflake will reject the records and create a log for the rejected records in the _sdc_rejected table in that integration’s schema.

AND

The following error will display in the Notifications tab in Stitch:

Field collision on [COLUMN_NAME]

Rejected records will be logged in the _sdc_rejected table of the integration's schema.

FIX IT

If possible, rename one of the columns in the source so that the column names are unique when replicated to Snowflake.

Use the _sdc_rejected table to identify the root of the issue.

IF

A column is replicated that has a mixed-case name.

THEN

Snowflake will convert letters to uppercase. For example:

Columns in Source      Columns in Snowflake
CuStOmErId             CUSTOMERID
customerID             CUSTOMERID
customerid             CUSTOMERID
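
Because unquoted identifiers in Snowflake resolve to uppercase, queries can reference the loaded column in any casing. A small sketch with a hypothetical table name:

  -- All of these resolve to the same CUSTOMERID column (hypothetical table)
  SELECT customerid FROM ecommerce.customers;
  SELECT CustomerId FROM ecommerce.customers;
  SELECT "CUSTOMERID" FROM ecommerce.customers;
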
IF

A column is replicated that has a name with spaces.

THEN

Snowflake will replace spaces with underscores. For example:

Columns in Source      Columns in Snowflake
customer id            CUSTOMER_ID
CUSTOMER ID            CUSTOMER_ID

IF

A column is replicated with a name that contains unsupported special characters.

THEN

Snowflake will remove all non-word characters, including leading underscores. For example:

Columns in Source      Columns in Snowflake
_customer!id           CUSTOMERID
!CUSTOMERID            CUSTOMERID
CUSTOMER!ID            CUSTOMERID

IF

A column is replicated with a name that begins with a non-letter.

THEN

Snowflake will remove all leading non-letter characters. For example:

Columns in Source      Columns in Snowflake
123customerid          CUSTOMERID
_customerid            CUSTOMERID
_987CUSTOMERID         CUSTOMERID



Table scenarios

Scenarios involving table creation and modification in the destination.

IF

A table contains entirely NULL columns.

THEN

No table is created in Snowflake. At least one column must have a non-NULL value for Stitch to create a table in Snowflake.



Data typing scenarios

Scenarios involving various data types, including how data is typed and structured in the destination.

IF

Stitch detects multiple data types for a single column.

THEN

To accommodate data of varying types, Stitch will create multiple columns to ensure data is loaded with the correct type. In the destination, this will look like the column has been “split”.

For example: Stitch first detected that order_confirmed contained BOOLEAN data, but during a subsequent job, detected STRING values. To accommodate data of varying types, Stitch will:

  1. Store data for the original data type in the original column. In this example, only BOOLEAN values will be stored in order_confirmed. The name of the original column will not change.

  2. Create additional columns to store the other data types - one for each data type detected - and append the data type to the column name. In this example, an order_confirmed__st column will be created to store STRING values. A query sketch combining the split columns follows below.
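
The split columns can be combined again at query time. A minimal sketch, assuming the order_confirmed / order_confirmed__st example above and a hypothetical table name:

  -- Coalesce the original BOOLEAN column with the __st (STRING) column
  SELECT COALESCE(order_confirmed__st, TO_VARCHAR(order_confirmed)) AS order_confirmed
  FROM ecommerce.orders;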

IF

Nested data, containing many top-level properties and potentially nested sub-properties, is replicated to Snowflake.

THEN

Nested data structures will be loaded intact into VARIANT columns.
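
The nested properties can then be queried using Snowflake’s path notation and FLATTEN. A minimal sketch with hypothetical table and column names:

  -- shipping_address holds an object; line_items holds an array of objects
  SELECT o.id,
         o.shipping_address:city::string AS shipping_city,
         item.value:sku::string          AS sku
  FROM ecommerce.orders o,
       LATERAL FLATTEN(input => o.line_items) item;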

IF

A VARCHAR column is replicated to Snowflake.

THEN

Snowflake will default to using the maximum length: VARCHAR(16777216)

Note: VARCHAR columns only consume storage for the actual amount of stored data. This means that for a 1-character string in a VARCHAR(16777216) column, the storage used is equivalent to one character.

For more info on full-length VARCHAR declarations and performance in Snowflake, refer to Snowflake’s documentation.

IF

VARCHAR data is loaded that exceeds the current maximum size for the column.

THEN

No widening will occur, as Snowflake defaults to storing VARCHAR data using the maximum length of VARCHAR(16777216).

IF

A column containing timestamp data with timezone info is replicated to Snowflake.

THEN

Data will be stored in UTC as TIMESTAMP_TZ(9).
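
Because values are stored in UTC, they can be converted back to a local timezone at query time. A small sketch with hypothetical table and column names:

  -- created_at is a TIMESTAMP_TZ(9) column loaded by Stitch (hypothetical)
  SELECT created_at,
         CONVERT_TIMEZONE('America/Chicago', created_at) AS created_at_local
  FROM ecommerce.orders;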

IF

A column contains timestamp data that is outside Snowflake’s supported range.

THEN

Snowflake will reject the records that fall outside the supported range.

AND

The following error will display in the Notifications tab in Stitch:

timestamp out of range for Snowflake on [TIMESTAMP]

Rejected records will be logged in the _sdc_rejected table of the integration's schema.

FIX IT

To resolve the error, offending values in the source must be changed to be within Snowflake’s timestamp range.

Use the _sdc_rejected table to identify the root of the issue.

IF

A column contains integer data.

THEN

Snowflake will store integer data as DECIMAL(38,0). Snowflake considers integer data types to be synonymous with NUMBER, and as a result, Stitch will load them as such.

IF

A column contains integer data that is outside Snowflake’s supported range.

THEN

Snowflake will reject the records that fall outside the supported range.

AND

The following error will display in the Notifications tab in Stitch:

integer out of range for Snowflake on [INTEGER]

Rejected records will be logged in the _sdc_rejected table of the integration's schema.

FIX IT

To resolve the error, offending values in the source must be changed to be within Snowflake’s limit for integers.

Use the _sdc_rejected table to identify the root of the issue.

IF

A column contains decimal data.

THEN

Snowflake will store decimal data as DECIMAL(38,6).
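
Taken together, the integer and decimal mappings above roughly correspond to the following column definitions. This is illustrative only, with hypothetical names; DECIMAL and NUMBER are synonymous in Snowflake:

  CREATE TABLE ecommerce.example (
      order_count NUMBER(38,0),  -- integer source data
      order_total NUMBER(38,6)   -- decimal source data
  );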

IF

A column contains decimal data that is outside Snowflake’s supported range.

THEN

Snowflake will reject the records that fall outside the supported range.

AND

The following error will display in the Notifications tab in Stitch:

decimal out of range for Snowflake on [DECIMAL]

Rejected records will be logged in the _sdc_rejected table of the integration's schema.

FIX IT

To resolve the error, offending values in the source must be changed to be within Snowflake’s limit for decimals.

Use the _sdc_rejected table to identify the root of the issue.



Schema change scenarios

Scenarios involving schema changes in the source or structural changes in the destination.

IF

A new column is added to a table already set to replicate.

THEN

If the column has at least one non-NULL value in the source, the column will be created and appended to the end of the table in Snowflake.

Note: If the table uses either Key- or Log-based Incremental Replication, backfilled values for the column will only be replicated if:

  1. The records’ Replication Key values are greater than or equal to the last saved maximum Replication Key value for the table, or
  2. The table is reset and a historical re-replication is queued.

Refer to the Tracking new columns in an already replicating table guide for more info and examples.

IF

You add a new column to a Stitch-generated table in Snowflake.

THEN

Columns may be added to tables created by Stitch as long as they are nullable, meaning columns don’t have NOT NULL constraints.
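
For example, adding a nullable column with standard Snowflake DDL is fine. The table and column names below are hypothetical:

  -- Columns are nullable by default in Snowflake
  ALTER TABLE ecommerce.orders
    ADD COLUMN internal_notes VARCHAR;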

IF

A column is deleted at the source.

THEN

How a deleted column is reflected in Snowflake depends on the Replication Method used by the table:

  • Key-based Incremental: The column will remain in the destination, and default NULL values will be placed in it going forward.

  • Log-based Incremental: Changes to a source table - including adding or removing columns, changing data types, etc. - require manual intervention before replication can continue. Refer to the Log-based Incremental Replication documentation for more info.

  • Full Table: The column will remain in the destination, and default NULL values will be placed in it going forward.

IF

You remove a column from a Stitch-replicated table in your destination.

THEN

The result of deleting a column from a Stitch-generated table depends on the type of column being removed:

  • Primary Key columns: Changing a table’s Primary Key(s) is not permitted in Snowflake. If Primary Key columns are changed, Stitch will stop processing data for the table.

  • General columns: If new data is detected for the removed column, Stitch will re-create it in Snowflake. This applies to all columns that are not prefixed with _sdc or suffixed with a data type. For example: customer_zip, but not customer_zip__st. A minimal example follows this list.

    Note: An integration must support selecting columns AND you must deselect the column in Stitch for the column removal to be permanent.

  • _sdc columns: Removing a Stitch replication column will prevent Stitch from loading replicated data into Snowflake.

  • Columns with data type suffixes: Removing a column created as a result of accommodating multiple data types will prevent Stitch from loading replicated data into the table. This applies to columns with names such as: customer_zip__st, customer_zip__int, etc.
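
As a sketch of the general-column case above, dropping such a column uses standard Snowflake DDL; if new data arrives and the column is still selected in Stitch, Stitch will re-create it. The names below are hypothetical:

  -- Drop a general (non-_sdc, non-suffixed) column from a Stitch-generated table
  ALTER TABLE ecommerce.customers
    DROP COLUMN customer_zip;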



Destination changes

Scenarios involving modifications made to the destination, such as the application of workload/performance management features or user privilege changes.

IF

You switch to a different destination of the same type.

THEN

This means the destination type is still Snowflake; Stitch may simply be connected to a different database in Snowflake.

  • For tables using Key-based or Log-based Incremental Replication, replication will continue using the Replication Key’s last saved maximum value. To re-replicate historical data, resetting Replication Keys is required.
  • For tables using Full Table Replication, the table will be fully replicated into the new destination during the next successful job.
  • For webhook integrations, some data loss may be possible due to the continuous, real-time nature of webhooks. Historical data must either be backfilled or re-played.



