PostgreSQL v1.0 snapshot

  • Release status: Released
  • Supported by: Stitch
  • Availability: Free
  • Supported Versions: 9.3+
  • SSL connections: Supported
  • VPN Connections: Unsupported
  • Data selection: Tables and columns
  • View Replication: Supported
  • Destination incompatibilities: None

Connecting PostgreSQL

PostgreSQL Setup Requirements

To set up PostgreSQL in Stitch, you need:

  • The CREATEROLE or SUPERUSER privilege. Either permission is required to create a database user for Stitch.

  • If using Log-based Replication, you’ll need:

    • A database running PostgreSQL 9.4 or greater. Earlier versions of PostgreSQL do not include logical replication functionality, which is required for Log-based Replication.
    • The SUPERUSER privilege. If using logical replication, this is required to define the appropriate server settings.
    • To connect to the master instance. Log-based Replication will only work on master instances due to a feature gap in PostgreSQL 10. According to the PostgreSQL forums, support for using logical replication on a read replica is planned for a future version.
  • If you’re not using Log-based Replication, you’ll need:

    • A database running PostgreSQL 9.3.x or greater. PostgreSQL 9.3.x is the minimum version Stitch supports for PostgreSQL integrations.
    • To verify whether the database is a read replica, or follower. While we always recommend connecting a replica instead of a production database, doing so means you may need to verify some of its settings - specifically the max_standby_streaming_delay and max_standby_archive_delay settings - before connecting it to Stitch. We recommend setting these parameters to 8-12 hours for the initial replication job, and then decreasing them afterwards. (A quick way to check these settings is shown after this list.)
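
Before continuing, you can check these prerequisites from psql. This is only a minimal sketch - it assumes you’re logged in as the user you plan to use for setup and, for the standby delay checks, that you’re connected to the read replica:

-- Does the current user have CREATEROLE or SUPERUSER?
SELECT rolname, rolsuper, rolcreaterole
  FROM pg_roles
 WHERE rolname = current_user;

-- Only relevant if you're connecting a read replica:
SHOW max_standby_streaming_delay;
SHOW max_standby_archive_delay;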

Step 1: Configure database connection settings

Choose your connection type

If your database is publicly accessible, you can directly connect it to Stitch.

If your database is not publicly accessible, you'll need to connect Stitch via an SSH tunnel.

Click the tab with your connection type to view the configuration instructions.

Step 1.1: Whitelist Stitch's IP addresses

For the connection to be successful, you’ll need to configure your firewall to allow access from our IP addresses. Whitelist the following IPs before continuing onto the next step:

  • 52.23.137.21/32

  • 52.204.223.208/32

  • 52.204.228.32/32

  • 52.204.230.227/32

Step 1.2: Retrieve your Stitch public key

The Stitch Public Key is used to authorize the Stitch Linux user. If the key isn’t properly installed, Stitch will be unable to access your database.

To retrieve the key:

  1. Sign into your Stitch account.

  2. On the Stitch Dashboard page, click the Add Integration button.

  3. Click the PostgreSQL icon.

  4. Click the SSH Tunnel checkbox.

  5. The Public Key will display, along with the other SSH fields.

Leave this page open for now - you’ll need it to wrap things up at the end.

Step 1.3: Create a Stitch SSH user

  1. Run the following commands as root on your Linux server to create a user named stitch:

    adduser --disabled-password stitch
    mkdir /home/stitch/.ssh
    
  2. Next, import the Public Key into authorized_keys, replacing [PASTE KEY HERE] with the Stitch Public Key:

    "[PASTE KEY HERE]" >> /home/stitch/.ssh/authorized_keys
    
  3. Alter the permissions on the /home/stitch directory to allow access via SSH:

    chown -R stitch:stitch /home/stitch
    chmod -R 700 /home/stitch/.ssh
    

Step 2: Create a Stitch database user

Next, you’ll create a dedicated database user for Stitch. This will ensure Stitch is visible in any logs or audits, and allow you to maintain your privilege hierarchy.

Your organization may require a different process, but the simplest way to create this user is to execute the following statements while logged into the PostgreSQL database as a user with the right to grant privileges. This user should also own the schema that Stitch is being granted access to.

CREATE USER [stitch_username] WITH ENCRYPTED PASSWORD '[secure password]';
GRANT CONNECT ON DATABASE [database_name] TO [stitch_username];
GRANT USAGE ON SCHEMA [schema_name] TO [stitch_username];
GRANT SELECT ON ALL TABLES IN SCHEMA [schema_name] TO [stitch_username];
ALTER DEFAULT PRIVILEGES IN SCHEMA [schema_name] GRANT SELECT ON TABLES TO [stitch_username];

Replace [secure password] with a secure password, which can be different from the SSH password. Additionally, make sure you replace [stitch_username], [database_name], and [schema_name] with the appropriate names for your database. Repeat this process as necessary if you want to connect multiple databases or schemas.
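
If you’d like to confirm the grants before moving on, a quick spot check like the following can help. This is only a sketch - it assumes a Stitch user named stitch and a schema named public, so substitute your own names:

-- Can the Stitch user use the schema?
SELECT has_schema_privilege('stitch', 'public', 'USAGE');

-- Which table-level privileges does the Stitch user hold?
SELECT table_schema, table_name, privilege_type
  FROM information_schema.role_table_grants
 WHERE grantee = 'stitch';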

The table below lists the database user privileges Stitch requires to connect to and replicate data from a PostgreSQL database, and why each is required.

Privilege name Reason for requirement
CONNECT

Required to connect successfully to the specified database.

USAGE

Required to access the objects contained in the specified schema.

SELECT

Required to select rows from tables in the specified schema.

ALTER DEFAULT PRIVILEGES

Required to ensure that objects created in the schema after connecting to Stitch will be accessible by the Stitch database user.


Step 3: Configure database server settings

Next, you’ll configure your server to use Log-based Incremental Replication, which in PostgreSQL is driven by logical decoding of the write-ahead log (WAL).

Log-based Incremental Replication is a method of replication that reads the database’s write-ahead log (WAL). The WAL contains information about modifications made to data in a PostgreSQL instance. Log-based Incremental Replication captures all inserts, updates, and deletes made to records during each replication job, and is the most accurate and efficient method of replication.

While Stitch recommends using Log-based Replication to replicate data, it isn’t mandatory. Stitch offers additional Replication Methods for PostgreSQL databases that don’t require defining these settings.

Step 3.1: Install the wal2json plugin

To use Log-based Replication for your PostgreSQL integration, you must install the wal2json plugin. The wal2json plugin outputs JSON objects for logical decoding, which Stitch then uses to perform Log-based Replication.

Steps for installing the plugin vary depending on your operating system. Instructions for each operating system type are in the wal2json GitHub repository.

After you’ve installed the plugin, you can move onto the next step.

Step 3.2: Edit the client authentication file

Usually named pg_hba.conf, this file controls how clients authenticate to the PostgreSQL database. To ensure Stitch can read the output from the wal2json plugin, you’ll need to add replication connection rules to this file. These rules translate to “Allow the Stitch user from this IP address to perform replication on all the databases it has access to.”

  1. Log into your PostgreSQL server as a superuser.
  2. Locate the pg_hba.conf file, usually stored in the database cluster’s data directory. You can also locate this file by checking the value of the hba_file server parameter (a query for this is shown after these steps).
  3. Add the following lines to pg_hba.conf:

    host replication [stitch_username] 52.23.137.21/32 md5
    host replication [stitch_username] 52.204.223.208/32 md5
    host replication [stitch_username] 52.204.228.32/32 md5
    host replication [stitch_username] 52.204.230.227/32 md5
    

    A rule for each of Stitch’s IP addresses must be added to pg_hba.conf. Because Stitch can use any one of these IP addresses to connect during the extraction process, each must have its own replication connection rule.
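
If you’re not sure where pg_hba.conf lives, you can ask the server directly from psql - this simply prints the path referenced in step 2:

SHOW hba_file;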

Step 3.3: Edit the database configuration file

Locate the database configuration file (usually postgresql.conf) and define the parameters as follows:

wal_level=logical
max_replication_slots=5
max_wal_senders=5
max_standby_archive_delay=28800000-43200000
max_standby_streaming_delay=28800000-43200000

A few things to note:

  • max_standby_archive_delay and max_standby_streaming_delay are only applicable if you’re connecting a read replica to Stitch. If you aren’t connecting a read replica, you don’t have to define these parameters.
  • For max_replication_slots and max_wal_senders, we’re defaulting to a value of 5. This should be sufficient unless you have a large number of read replicas.

In the table below are the names, required values, and descriptions of the server settings you must define.

Setting Value Description
wal_level logical

Required to use Log-based Replication; available on PostgreSQL versions 9.4 and higher. Determines the level of information that is written to the write-ahead log (WAL).

A value of logical is required to use Log-based Replication. This level logs information needed to extract logical changesets - such as updates or deletes - and replicate data from the WAL.

max_replication_slots 5

Required to use Log-based Replication; available on PostgreSQL versions 9.4 and higher. Specifies the maximum number of replication slots that the server can support.

This must be greater than 1. If you have a large number of replica databases, you may want to increase the value of this parameter.

max_wal_senders 5

Available on PostgreSQL versions 9.4 and higher. Specifies the maximum number of concurrent connections from standby servers or streaming base backup clients (the maximum number of simultaneously running WAL sender processes).

If you have a large number of replica databases, you may want to define this setting during setup.

max_standby_archive_delay 28800000-43200000

Applicable only if you’re connecting a read replica. Defines the maximum delay, in milliseconds, before canceling queries when a hot standby server is processing archived WAL data.

If the setting for this parameter is too low, slow, intermittent replication may result when Stitch attempts to replicate large volumes of data - such as during a historical job or if a table is using Full Table Replication.

Proactively increasing this parameter’s limit can help avoid this issue. We recommend a delay of 8 hours (28800000) to 12 hours (43200000).

max_standby_streaming_delay 28800000-43200000

Applicable only if you’re connecting a read replica. Defines the maximum delay, in milliseconds, before canceling queries when a hot standby server is processing streamed WAL data.

If the setting for this parameter is too low, slow, intermittent replication may result when Stitch attempts to replicate large volumes of data - such as during a historical job or if a table is using Full Table Replication.

Proactively increasing this parameter’s limit can help avoid this issue. We recommend a delay of 8 hours (28800000) to 12 hours (43200000).
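
If you’d rather set these parameters from psql than edit postgresql.conf by hand, PostgreSQL 9.4 and later also support ALTER SYSTEM, which writes the values to postgresql.auto.conf. A minimal sketch, assuming you’re connected as a superuser:

ALTER SYSTEM SET wal_level = logical;
ALTER SYSTEM SET max_replication_slots = 5;
ALTER SYSTEM SET max_wal_senders = 5;

-- Only if you’re connecting a read replica (values are in milliseconds):
ALTER SYSTEM SET max_standby_archive_delay = 28800000;
ALTER SYSTEM SET max_standby_streaming_delay = 28800000;

Either way, the changes don’t take effect until the server is restarted, which is covered in the next step.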

Step 3.4: Restart the PostgreSQL server

After you’ve finished editing the pg_hba.conf file and configuring the database settings, restart your PostgreSQL server to ensure the changes take effect.
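
Once the server is back up, you can confirm the new values took effect with a quick query:

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('wal_level', 'max_replication_slots', 'max_wal_senders',
                'max_standby_archive_delay', 'max_standby_streaming_delay');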


Step 4: Create a replication slot

Next, you’ll create a dedicated logical replication slot for Stitch. In PostgreSQL, a logical replication slot represents a stream of database changes that can then be replayed to a client in the order they were made on the original server. Each slot streams a sequence of changes from a single database.

Note: Replication slots are specific to a given database in a cluster. If you want to connect multiple databases - whether in one integration or several - you’ll need to create a replication slot for each database.

  1. Log into your master database as a superuser.
  2. Using the wal2json plugin, create a logical replication slot:
    • If you’re connecting multiple databases, you’ll need to run this command for every database you want to connect, replacing <raw_database_name> with the name of the database:

        SELECT *
        FROM pg_create_logical_replication_slot('stitch_<raw_database_name>', 'wal2json');
      

      This will create a replication slot named stitch_<raw_database_name>.

    • If you’re connecting a single database, run the following command:

        SELECT *
        FROM pg_create_logical_replication_slot('stitch', 'wal2json');
      

      This will create a replication slot named stitch.

  3. Log in as the Stitch user and verify you can read from the replication slot, replacing <replication_slot_name> with the name of the replication slot:

    SELECT COUNT(*)
    FROM pg_logical_slot_peek_changes('<replication_slot_name>', null, null);
    

    If connecting multiple databases, you should verify that the Stitch user can read from each of the replication slots you created.

Note: wal2json is required to use Log-based Replication in Stitch for PostgreSQL-backed databases.
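
To double-check what was created, you can list the cluster’s replication slots:

SELECT slot_name, plugin, slot_type, database, active
  FROM pg_replication_slots;

Each slot you created for Stitch should appear here with the wal2json plugin and the database it belongs to.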


Step 5: Connect Stitch

  1. Sign into your Stitch account, if you haven’t already.
  2. On the Stitch Dashboard page, click the Add Integration button.
  3. Click the PostgreSQL icon.
  4. Fill in the fields as follows:

    • Integration Name: Enter a name for the integration. This is the name that will display on the Stitch Dashboard for the integration; it’ll also be used to create the schema in your data warehouse.

      For example, the name “Stitch PostgreSQL” would create a schema called stitch_postgresql in the data warehouse. Note: The schema name cannot be changed after the integration is saved.

    • Host (Endpoint): Enter the host address (endpoint) used by the PostgreSQL instance.

      In general, this will be 127.0.0.1 (localhost), but it could also be another network address (ex: 192.168.0.1) or your server’s public IP address. Note: This must be the actual address - entering the string localhost into this field will cause connection issues.

    • Port: Enter the port used by the PostgreSQL instance. The default is 5432.

    • Username: Enter the Stitch PostgreSQL database user’s username.

    • Password: Enter the password for the Stitch database user.

    • Database: Enter the name of the PostgreSQL database you want to connect to Stitch. Stitch will ‘find’ all databases you give the Stitch user access to - a default database is only used to complete the connection. This is required for PostgreSQL integrations.

    • Include PostgreSQL schema names in destination tables: Checking this setting will include schema names from the source database in the destination table name - for example: <source_schema_name>__<table_name>.

      Stitch loads all replicated tables into a single schema, preserving only the table name. If two tables canonicalize to the same name - even if they’re in different source databases or schemas - name collision errors can arise. Checking this setting can prevent these issues.

      Note: This setting cannot be changed after the integration is saved. Additionally, this setting may create table names that exceed your destination’s limits. For more info, refer to the Database Integration Table Name Collisions guide.

Enter SSH connection details

If you’re using an SSH tunnel to connect your PostgreSQL database to Stitch, you’ll also need to complete the following:

  1. Click the SSH Tunnel checkbox.

  2. Fill in the fields as follows:

    • SSH Host: Enter the IP address or hostname of the server Stitch will SSH into.

    • SSH Port: Enter the SSH port on your server. (22 by default)

    • SSH User: Enter the Stitch Linux (SSH) user’s username.

In addition, click the Connect using SSL checkbox if you’re using an SSL connection. Note: The database must support and allow SSL connections for this setting to work correctly.


Step 6: Create a replication schedule

In the Replication Frequency section, you’ll create the integration’s replication schedule. An integration’s replication schedule determines how often Stitch runs a replication job, and the time that job begins.

Stitch offers two methods of creating a replication schedule:

  • Replication Frequency: This method requires selecting the interval you want replication to run for the integration. Start times of replication jobs are based on the start time and duration of the previous job. Refer to the Replication Frequency documentation for more information and examples.
  • Anchor scheduling: Based on the Replication Frequency, or interval, you select, this method “anchors” the start times of this integration’s replication jobs to a time you select to create a predictable schedule. Anchor scheduling is a combination of the Anchor Time and Replication Frequency settings, which must both be defined to use this method. Additionally, note that:

    • A Replication Frequency of at least one hour is required to use anchor scheduling.
    • An initial replication job may not begin immediately after saving the integration, depending on the selected Replication Frequency and Anchor Time. Refer to the Anchor Scheduling documentation for more information.

    • You’ll need to contact support to request using an Anchor Time with this integration.

To help prevent overages, consider setting the integration to replicate less frequently. See the Understanding and Reducing Your Row Usage guide for tips on reducing your usage.


Step 7: Select data to replicate

The last step is to select the tables and columns you want to replicate. When you track a table, you’ll also need to define its Replication Method and, if using Key-based Incremental Replication, its Replication Key.

To track tables and columns:

  1. In the Integration Details page, click the Tables to Replicate tab.
  2. Locate a table you want to replicate.
  3. Click the checkbox next to the object’s name. A green checkmark means the object is set to replicate.
  4. If there are child objects, they’ll automatically display and you’ll be prompted to select some.
  5. After you set a table to replicate, the Table Settings page will display. Note: When you track a table, by default all columns will also be tracked.
  6. In the Table Settings page, define the table’s Replication Method and, if using Key-based Incremental Replication, its Replication Key.

  7. Repeat this process for every table you want to replicate.

  8. Click the Finalize Your Selections button to save your data selections.

Initial and historical replication jobs

After you finish setting up PostgreSQL, its Sync Status may show as Pending on either the Stitch Dashboard or in the Integration Details page.

For a new integration, a Pending status indicates that Stitch is in the process of scheduling the initial replication job for the integration. This may take some time to complete.

Free historical data loads

The first seven days of replication, beginning when data is first replicated, are free. Rows replicated from the new integration during this time won’t count towards your quota. Stitch offers this as a way of testing new integrations, measuring usage, and ensuring historical data volumes don’t quickly consume your quota.


Extracting data from PostgreSQL

When you connect a database as an input, Stitch only needs read-only access to the databases, tables, and columns you want to replicate. There are two processes Stitch runs during the Extraction phase of the replication process: a structure sync and a data sync.

Structure syncs

This is the first part of the Extraction process. During this phase, Stitch will detect any changes to the structure of your database. For example: A new column is added to one of the tables you set to replicate in Stitch. Structure syncs are how Stitch identifies the databases, tables, and columns to display in the Stitch app.

To perform a structure sync, Stitch runs queries on the following tables in the pg_catalog schema:

  • pg_class
  • pg_attribute
  • pg_index
  • pg_namespace
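
This isn’t the exact query Stitch runs, but a comparable query against these catalog tables illustrates the kind of information a structure sync gathers - the schemas, tables, and columns available for replication:

SELECT n.nspname           AS schema_name,
       c.relname           AS table_name,
       a.attname           AS column_name,
       a.atttypid::regtype AS data_type
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_attribute a ON a.attrelid = c.oid
 WHERE c.relkind IN ('r', 'v')   -- ordinary tables and views
   AND a.attnum > 0              -- skip system columns
   AND NOT a.attisdropped
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');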

Data syncs

This is the second part of the Extraction process. During this phase, Stitch extracts data from the source and replicates it. The method Stitch uses is the same for all databases, but differs depending on the Replication Method that each table uses.

The sections below contain info about the queries Stitch runs during the data syncs for each type of Replication Method supported for PostgreSQL integrations.

Data syncs for tables using Key-based Incremental

Initial (historical) replication jobs

During the initial replication job for a table using Key-based Incremental Replication, Stitch will replicate the table in full by running a SELECT query and reading out of the resulting cursor in batches:

  SELECT column_a, column_b <,...>
    FROM table_a
ORDER BY replication_key_column

Ongoing replication jobs

During subsequent jobs, Stitch will use the last saved maximum value of the Replication Key column to identify new and updated data.

Stitch will run the following query and read out of the associated cursor in batches:

  SELECT column_a, column_b <,...>
    FROM table_a
   WHERE replication_key_column >= 'last_maximum_replication_key_value'
ORDER BY replication_key_column

Data syncs for tables using Log-based Incremental

Initial (historical) replication jobs

During the initial replication job for a table using Log-based Incremental Replication, Stitch will use a SELECT query to retrieve all data for the table and then read out of the resulting cursor in batches:

SELECT column_a, column_b <,...>
  FROM table_a

Ongoing replication jobs

During subsequent jobs, Stitch will use the database's write-ahead log (WAL), read through the logical replication slot created in Step 4, to stream updates.

Stitch will 'bookmark' its position in the WAL at the end of each replication job, allowing it to resume at the correct position during the next extraction.
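
If you’re curious how far the stream has been read, the replication slot itself exposes its position. On PostgreSQL 9.6 and later, for example:

SELECT slot_name, restart_lsn, confirmed_flush_lsn
  FROM pg_replication_slots;

Note that this only shows the slot’s view of the stream; Stitch also tracks its own bookmark internally.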

Data syncs for tables using Full Table

For tables using Full Table Replication, Stitch runs a single query and reads out of the resulting cursor in batches:

SELECT column_a, column_b <,...>
  FROM table_a

This query will be run for each table using Full Table Replication during every replication job, whether it's the initial historical job or a subsequent job.

Recommendations

While we make every effort to ensure the queries that Stitch executes don’t place significant load on your databases, we still have some recommendations for maintaining database performance:

  • Use a replica database instead of connecting directly. We recommend using read replicas in lieu of directly connecting production databases with high availability and performance requirements.
  • Apply indexes to Replication Key columns. We restrict and order our replication queries by this column, so applying an index to the columns you’re using as Replication Keys can improve performance. (See the example below.)
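
As a sketch of the second recommendation, with a hypothetical table_a whose Replication Key is an updated_at column:

-- Index the Replication Key column so incremental queries can filter and sort efficiently
CREATE INDEX idx_table_a_updated_at ON table_a (updated_at);

Because Stitch’s incremental queries filter and order by the Replication Key column, a plain B-tree index on that column is usually sufficient.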

Questions? Feedback?

Did this article help? If you have questions or feedback, feel free to submit a pull request with your suggestions, open an issue on GitHub, or reach out to us.