Common loading errors

The following errors are applicable to multiple or all destinations:

Column name exceeds destination limit

Column name [COLUMN] is too long for [DESTINATION]
Applicable to

All destinations

Level Warning
Category

Destination object naming

Potential cause(s)

A column name contains more characters than allowed by the destination.

Suggested action(s)
  1. If possible, shorten the column name in the source so it is within the destination’s limit for column names (see the example below the limits table), or
  2. De-select the column in Stitch

If neither solution is feasible, you may need to select a destination with a higher character limit for column names. Refer to the table below for each destination’s column name limits:

Destination Version Limit
Amazon Redshift v2 115 characters
Amazon S3 v1 None
data.world v1 None
Databricks Delta Lake (AWS) v1 122 characters
Google BigQuery v1 128 characters
Google BigQuery v2 128 characters
Microsoft Azure Synapse Analytics v1 128 characters
Microsoft SQL Server v1 128 characters
MySQL v1 64 characters
Panoply v2 115 characters
PostgreSQL v1 59 characters
Snowflake v1 251 characters
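
If the source is itself a relational database, the first suggestion can be applied with a simple rename. A minimal sketch, assuming a PostgreSQL or MySQL 8 source; the table and column names are placeholders:

  -- Rename a column whose name is too long for the destination.
  -- Note: after a rename, Stitch typically treats the column as a new column.
  ALTER TABLE orders
    RENAME COLUMN customer_shipping_address_first_line_including_apartment_and_building_details
    TO shipping_address_line_1;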

Back to Common error list
Back to top

Table or column contains reserved words

column names from client should not include reserved datatype suffix on [COLUMN_NAME]
Applicable to

All destinations

Level Warning
Category

Stitch object naming

Potential cause(s)

A table or column contains reserved words.

Suggested action(s)

Remove the reserved words by renaming the table or column in the source.

Back to Common error list
Back to top

Decimal out of range

decimal out of range for [DESTINATION] on [DECIMAL]
Applicable to

All destinations

Level Warning
Category

Destination data type limits

Potential cause(s)

Decimal data exceeds the destination’s allowed range.

Suggested action(s)

Remove offending values in the source, or change them to be within the allowed range for the destination type:

Destination Version Range
Amazon Redshift v2 Precision must be between 1 and 38; scale must be between 0 and the precision value
Databricks Delta Lake (AWS) v1
Google BigQuery v1 -99999999999999999999999999999.999999999 to 99999999999999999999999999999.999999999
Google BigQuery v2 -99999999999999999999999999999.999999999 to 99999999999999999999999999999.999999999
Microsoft Azure Synapse Analytics v1 Precision must be between 1 and 38; scale must be between 0 and the precision value
Microsoft SQL Server v1 -99999999999999999999999999999999.999999 to 99999999999999999999999999999999.999999
MySQL v1 -99999999999999999999999999999999.999999 to 99999999999999999999999999999999.999999
Panoply v2 Precision must be between 1 and 38; scale must be between 0 and the precision value
PostgreSQL v1 Up to 131,072 digits before the decimal; up to 16,383 digits after
Snowflake v1 Precision must be between 1 and 38; scale must be between 0 and the precision value
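
To find offending values before changing or removing them, you can query the source directly. A sketch assuming a SQL source and the Google BigQuery range above; the table and column names are placeholders:

  -- Rows whose values fall outside the destination's allowed decimal range.
  SELECT id, amount
  FROM payments
  WHERE amount NOT BETWEEN -99999999999999999999999999999.999999999
                       AND 99999999999999999999999999999.999999999;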

Back to Common error list
Back to top

Table contains too many columns

ERROR: too many columns
Applicable to

All destinations

Level Warning
Category

Destination object limits

Potential cause(s)

A table contains more columns than the destination allows.

Suggested action(s)

De-select columns in Stitch to allow data to continue loading into the destination table.

Refer to the table below for each destination’s columns per table limit:

Destination Version Limit
Amazon Redshift v2 1,600
Amazon S3 v1 None
data.world v1 None
Databricks Delta Lake (AWS) v1 None
Google BigQuery v1 10,000
Google BigQuery v2 10,000
Microsoft Azure Synapse Analytics v1 1,024
Microsoft SQL Server v1 1,024
MySQL v1 1,017
Panoply v2 1,600
PostgreSQL v1 250-1,600
Snowflake v1 None
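
To see which source tables are close to a destination's limit before de-selecting columns, you can count columns per table. A sketch using information_schema, which most relational sources expose; the 1,600 threshold matches Amazon Redshift and Panoply:

  -- Tables whose column count exceeds the destination's limit.
  SELECT table_schema, table_name, COUNT(*) AS column_count
  FROM information_schema.columns
  GROUP BY table_schema, table_name
  HAVING COUNT(*) > 1600
  ORDER BY column_count DESC;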

Back to Common error list
Back to top

Column name collision

Field collision on [COLUMN_NAME]
Applicable to

All destinations

Level Warning
Category

Destination object naming

Potential cause(s)

Two columns are replicated that canonicalize to the same name.

For example: in Amazon Redshift, Stitch lowercases column names, so cUsTomErId and customerid would both canonicalize to customerid and collide.

Suggested action(s)

If possible, rename one of the columns in the source so that both column names are unique when replicated to the destination.
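
Likely collisions can be spotted ahead of time by grouping source column names case-insensitively. A sketch against information_schema; the schema filter is a placeholder:

  -- Columns in the same table that collapse to the same lowercase name will collide.
  SELECT table_name, LOWER(column_name) AS canonical_name, COUNT(*) AS colliding_columns
  FROM information_schema.columns
  WHERE table_schema = 'public'
  GROUP BY table_name, LOWER(column_name)
  HAVING COUNT(*) > 1;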

Back to Common error list
Back to top

Integer out of range

integer out of range for [DESTINATION] on [INTEGER]
Applicable to

All destinations

Level Warning
Category

Destination data type limits

Potential cause(s)

Integer data exceeds the destination’s allowed range.

Suggested action(s)

Remove offending values in the source, or change them to be within the allowed range for the destination type:

Destination Version Range
Amazon Redshift v2 -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Databricks Delta Lake (AWS) v1
Google BigQuery v1 -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Google BigQuery v2 -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Microsoft Azure Synapse Analytics v1 -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807)
Microsoft SQL Server v1 -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647)
MySQL v1 -2,147,483,648 to 2,147,483,647
Panoply v2 -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
PostgreSQL v1 -2,147,483,648 to 2,147,483,647
Snowflake v1 Limited to 38 digits of precision
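
As with decimals, offending integers can be located in the source before being changed or removed. A sketch using the 32-bit range that applies to the Microsoft SQL Server, MySQL, and PostgreSQL destinations; the table and column names are placeholders:

  -- Rows whose values overflow a 32-bit integer column in the destination.
  SELECT id, event_count
  FROM daily_stats
  WHERE event_count NOT BETWEEN -2147483648 AND 2147483647;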

Back to Common error list
Back to top

Table name exceeds destination limit

Table name [TABLE] is too long for [DESTINATION]
Applicable to

All destinations

Level Warning
Category

Destination object naming

Potential cause(s)

A table name contains more characters than allowed by the destination.

Suggested action(s)
  1. If possible, shorten the table name in the source so it is within the destination’s limit for table names (see the example below the limits table), or
  2. De-select the table in Stitch

If neither solution is feasible, you may need to select a destination with a higher character limit for table names. Refer to the table below for each destination’s table name limits:

Destination Version Limit
Amazon Redshift v2 127 characters
Amazon S3 v1 1,024 bytes
data.world v1 None
Databricks Delta Lake (AWS) v1 78 characters
Google BigQuery v1 1,024 characters
Google BigQuery v2 1,024 characters
Microsoft Azure Synapse Analytics v1 112 characters
Microsoft SQL Server v1 113 characters
MySQL v1 60 characters
Panoply v2 127 characters
PostgreSQL v1 63 characters
Snowflake v1 255 characters
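
To find source tables whose names will exceed a destination's limit, information_schema can help again. A sketch using the MySQL v1 destination's 60-character limit as the example threshold; note that the length function varies by source (for example, LEN on SQL Server):

  -- Table names longer than the destination's limit.
  SELECT table_schema, table_name, CHAR_LENGTH(table_name) AS name_length
  FROM information_schema.tables
  WHERE CHAR_LENGTH(table_name) > 60;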

Back to Common error list
Back to top

Timestamp out of range

timestamp out of range for [DESTINATION] on [TIMESTAMP]
Applicable to

All destinations

Level Warning
Category

Destination data type limits

Potential cause(s)

Timestamp data exceeds the destination’s allowed range.

Suggested action(s)

Remove offending values in the source, or change them to be within the allowed range for the destination type:

Destination Version Range
Amazon Redshift v2 4713 BC to 294276 AD
Databricks Delta Lake (AWS) v1 Timestamps before 1900-01-01T00:00:00Z are not supported
Google BigQuery v1 0001-01-01 00:00:00 to 9999-12-31 23:59:59.999999 UTC
Google BigQuery v2 0001-01-01 00:00:00 to 9999-12-31 23:59:59.999999 UTC
Microsoft Azure Synapse Analytics v1 Dates: 0001-01-01 (January 1, 1 AD) through 9999-12-31 (December 31, 9999 AD); Time: 00:00:00 through 23:59:59.997
Microsoft SQL Server v1 Dates: 0001-01-01 through 9999-12-31; Time: 00:00:00 through 23:59:59.999
MySQL v1 1000-01-01 to 9999-12-31
Panoply v2 4713 BC to 294276 AD
PostgreSQL v1 4713 BC to 294276 AD
Snowflake v1
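
Out-of-range timestamps are often placeholder dates (for example, year 0001 defaults) in the source. A sketch that finds values outside the MySQL v1 destination's range; the table and column names are placeholders, and the bounds should be swapped for your destination:

  -- Timestamps that fall outside the destination's supported range.
  SELECT id, created_at
  FROM events
  WHERE created_at < TIMESTAMP '1000-01-01 00:00:00'
     OR created_at > TIMESTAMP '9999-12-31 23:59:59';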

Back to Common error list
Back to top


Amazon Redshift and Panoply loading errors

Cannot create new schema

Encountered error while attempting to create new schema.
Applicable to

Amazon Redshift destinations

Level Critical
Category

Database user privileges

Potential cause(s)

Stitch is unable to create or load data into a schema in your destination. This is usually due to insufficient database user privileges.

Suggested action(s)

Verify that the database user authorizing the connection has all the required privileges as outlined in the Amazon Redshift setup instructions.
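
The setup instructions are the authoritative list, but a minimal sketch of the kind of statements involved looks like the following; the user, password, and database names are placeholders:

  -- Allow the Stitch user to create schemas (and the tables inside them).
  CREATE USER stitch PASSWORD '<password>';
  GRANT CREATE ON DATABASE your_database TO stitch;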

Back to Amazon Redshift error list
Back to top

Dependent views

ERROR: cannot drop table [SCHEMA_NAME].[TABLE_NAME] column type because other objects depend on it
Hint: Use DROP ... CASCADE to drop the dependent objects too.
Applicable to

Amazon Redshift destinations

Level Critical
Category

Dependent views

Potential cause(s)

Stitch is attempting to widen a VARCHAR column and can’t because a view is built on top of the table.

Suggested action(s)

Temporarily drop the dependent views, which will allow Stitch to widen the columns. This process usually takes about an hour.
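
Save the view's definition first (for example, from your own DDL scripts) so it can be recreated once loading resumes. A sketch with a hypothetical view name:

  -- Drop the dependent view so Stitch can widen the underlying column.
  DROP VIEW reporting.orders_summary;
  -- After Stitch has widened the column, recreate the view:
  -- CREATE VIEW reporting.orders_summary AS <saved definition>;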

Back to Amazon Redshift error list
Back to top

System is in resize mode

ERROR: Cannot execute query because system is in resize mode
Detail: System is in resize mode, and ONLY read-only queries are allowed to execute.
Applicable to

Amazon Redshift destinations

Level Critical
Category

End-user destination change

Potential cause(s)

Someone adjusted (up or down) the number of nodes of your Amazon Redshift instance and Amazon is currently applying that change.

Suggested action(s)

This is a transient issue. Stitch should be able to resume loading data once the resize is completed.

Back to Amazon Redshift error list
Back to top

Disk is full

ERROR: Disk Full Detail:
-----------------------------
error:  Disk Full
code:      1016
context:   node: 3
query:     1005177
location:  fdisk_api.cpp:345
process:   query3_66 [pid=27629]
-----------------------------
Applicable to

Amazon Redshift destinations

Level Critical
Category

Destination disk space

Potential cause(s)

The Amazon Redshift instance is full.

Suggested action(s)
  • Add additional nodes to make your Amazon Redshift instance larger, or
  • Remove tables and/or data from the existing instance to free up disk space
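
Before deciding between the two, it can help to confirm how much disk is actually in use. A sketch against the STV_PARTITIONS system table, following the query pattern AWS documents for checking disk space (capacity and used are reported in 1 MB blocks):

  -- Total, used, and free disk space across the cluster, in gigabytes.
  SELECT SUM(capacity) / 1024 AS capacity_gb,
         SUM(used) / 1024     AS used_gb,
         (SUM(capacity) - SUM(used)) / 1024 AS free_gb
  FROM stv_partitions
  WHERE part_begin = 0;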

Back to Amazon Redshift error list
Back to top

Insufficient table privileges

ERROR: must be owner of relation [TABLE_NAME]
Applicable to

Amazon Redshift destinations

Level Critical
Category

Database user privileges

Potential cause(s)

Stitch is not the owner of a table in Amazon Redshift, which is required to perform functions necessary to load data.

Suggested action(s)

Verify that the database user authorizing the connection has all the required privileges as outlined in the Amazon Redshift setup instructions.
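
If the table was created by another user, ownership can be transferred to the Stitch database user. A sketch with placeholder schema, table, and user names:

  -- Make the Stitch user the owner of the table it loads into.
  ALTER TABLE your_schema.your_table OWNER TO stitch;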

Back to Amazon Redshift error list
Back to top

Insufficient schema privileges

ERROR: permission denied for schema [SCHEMA_NAME]
Applicable to

Amazon Redshift destinations

Level Critical
Category

Database user privileges

Potential cause(s)

Stitch does not have privileges to create tables within a schema in Amazon Redshift.

Suggested action(s)

Verify that the database user authorizing the connection has all the required privileges as outlined in the Amazon Redshift setup instructions.
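
If the schema already exists and is owned by another user, the Stitch user needs usage and create privileges on it. A sketch with placeholder names:

  -- Let the Stitch user see the schema and create tables inside it.
  GRANT USAGE ON SCHEMA your_schema TO stitch;
  GRANT CREATE ON SCHEMA your_schema TO stitch;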

Back to Amazon Redshift error list
Back to top

Database doesn't exist

FATAL: database "[DATABASE_NAME]" does not exist
Applicable to

Amazon Redshift destinations

Level Critical
Category

Connection settings

Potential cause(s)

Stitch can’t find the database in Amazon Redshift that has been entered into the Destination Settings page.

Suggested action(s)

Verify that the correct database is entered in the Destination Settings page of Stitch.
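
If you're unsure of the exact name, you can list the databases on the cluster and compare them against the value in Stitch:

  -- List the databases available on the Amazon Redshift cluster.
  SELECT datname FROM pg_database ORDER BY datname;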

Back to Amazon Redshift error list
Back to top


Databricks Delta Lake loading errors

Mismatched Amazon S3 and Stitch account regions

The authorization header is malformed; the region '[STITCH_ACCOUNT_REGION]' is wrong; expecting '[S3_BUCKET_REGION]'
Applicable to

Databricks Delta Lake destinations

Level Critical
Category

Amazon S3 bucket configuration

Potential cause(s)

The Amazon S3 bucket isn’t in the same AWS region as your Stitch account.

Suggested action(s)

Use an Amazon S3 bucket that’s in the same AWS region as your Stitch account. Currently, all Stitch accounts are in the us-east-1 region. Your S3 bucket must be created in this region to use Databricks Delta Lake as your destination.

Back to Databricks Delta Lake error list
Back to top


Google BigQuery loading errors

Numeric data out of range

Numeric out of range for BigQuery on [NUMERIC]
Applicable to

All Google BigQuery destination versions

Level Warning
Category

Destination data type limits

Potential cause(s)

Numeric data exceeds the destination’s allowed range.

Suggested action(s)

Remove offending values in the source, or change them to be within the allowed range for Google BigQuery:

Version Range
v1 -99999999999999999999999999999.999999999 to 99999999999999999999999999999.999999999
v2 -99999999999999999999999999999.999999999 to 99999999999999999999999999999.999999999

Back to Google BigQuery error list
Back to top

Primary Key change is not permitted

Primary key change is not permitted
Applicable to

Google BigQuery v2 destinations

Level Critical
Category

Primary Keys

Potential cause(s)

The Primary Keys of incoming data don’t match the Primary Keys of the table in the destination.

This can be caused by:

  1. The Primary Keys being changed in the source, or
  2. The _sdc_primary_keys table being altered or dropped

Suggested action(s)

Reset the table(s) mentioned in the error. This will queue a full re-replication of the table(s), which will ensure Primary Keys are correctly captured and used to de-dupe data when loading.
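
Before resetting, it can be useful to look at the Primary Key metadata Stitch recorded. A sketch that inspects the _sdc_primary_keys table mentioned above; the dataset name is a placeholder and the exact columns may vary:

  -- Inspect the Primary Keys Stitch has on record for each table.
  SELECT *
  FROM your_dataset._sdc_primary_keys;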

Back to Google BigQuery error list
Back to top


Microsoft SQL Server loading errors

Primary Key type change is not permitted

Incoming Primary Key type does not match Primary Key type in existing table
Applicable to

Microsoft SQL Server destinations

Level Critical
Category

Primary Keys

Potential cause(s)

The type of the Primary Key of incoming data doesn’t match the type of the Primary Key of the table in the destination.

This can be caused by:

  1. The Primary Key type being changed in the source, or
  2. The _sdc_primary_keys table being altered or dropped

Suggested action(s)

Reset the table(s) mentioned in the error. This will queue a full re-replication of the table(s), which will ensure Primary Keys are correctly captured and used to de-dupe data when loading.

Back to Microsoft SQL Server error list
Back to top

Primary Key change is not permitted

Incoming Primary Keys do not match Primary Keys in existing table
Applicable to

Microsoft SQL Server destinations

Level Critical
Category

Primary Keys

Potential cause(s)

The Primary Keys of incoming data don’t match the Primary Keys of the table in the destination.

This can be caused by:

  1. The Primary Keys being changed in the source, or
  2. The _sdc_primary_keys table being altered or dropped

Suggested action(s)

Reset the table(s) mentioned in the error. This will queue a full re-replication of the table(s), which will ensure Primary Keys are correctly captured and used to de-dupe data when loading.

Back to Microsoft SQL Server error list
Back to top


PostgreSQL loading errors

Insufficient schema privileges

Encountered error while attempting to create new schema.
Applicable to

All PostgreSQL-backed destinations

Level Critical
Category

Database user privileges

Potential cause(s)

Stitch is unable to create or load data into a schema in your destination. This is usually due to insufficient database user privileges.

Suggested action(s)

Verify that the Stitch database user has the required permissions, as outlined in the destination’s setup instructions.

Back to PostgreSQL error list
Back to top

Disk is full

ERROR: could not extend file "base/16389/t2_285302": No space left on device
Hint: Check free disk space.
Where: COPY staging_0_[FILE_NAME], line [LINE_NUMBER]
Applicable to

All PostgreSQL-backed destinations

Level Critical
Category

Destination disk space

Potential cause(s)

The PostgreSQL instance is full.

Suggested action(s)
  • Add additional space to make your PostgreSQL database larger, or
  • Remove tables and/or data from the existing instance to free up disk space
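
To see how much space the database and its largest tables are using before deciding what to remove or how much to add, PostgreSQL's size functions can help:

  -- Overall database size, then the ten largest tables within it.
  SELECT pg_size_pretty(pg_database_size(current_database()));

  SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_total_relation_size(oid) DESC
  LIMIT 10;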

Back to PostgreSQL error list
Back to top

Insufficient storage space

ERROR: could not write block XXXX of temporary file: No space left on device
Applicable to

All PostgreSQL-backed destinations

Level Critical
Category

Destination disk space

Potential cause(s)

The PostgreSQL instance is full.

Suggested action(s)
  • Add additional space to make your PostgreSQL database larger, or
  • Remove tables and/or data from the existing instance to free up disk space

Back to PostgreSQL error list
Back to top


Snowflake loading errors

Primary Key change is not permitted

Keys for table do not match Primary Keys of incoming data
Applicable to

Snowflake destinations

Level Critical
Category

Primary Keys

Potential cause(s)

The Primary Keys of incoming data don’t match the Primary Keys of the table in the destination.

This can be caused by:

  1. The Primary Keys being changed in the source, or
  2. The _sdc_primary_keys table being altered or dropped

Suggested action(s)

Reset the table(s) mentioned in the error. This will queue a full re-replication of the table(s), which will ensure Primary Keys are correctly captured and used to de-dupe data when loading.

Back to Snowflake error list
Back to top


Questions? Feedback?

Did this article help? If you have questions or feedback, feel free to submit a pull request with your suggestions, open an issue on GitHub, or reach out to us.