Migrate from Snowflake to ClickHouse
This guide shows you how to migrate data from Snowflake to ClickHouse. Migrating data between Snowflake and ClickHouse requires an object store, such as S3, as intermediate storage for the transfer. The migration process relies on two commands: COPY INTO from Snowflake and INSERT INTO SELECT from ClickHouse.
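At a high level, the two commands fit together as in the following sketch. The stage name, bucket path, and table name are hypothetical placeholders, not names from this guide:

```sql
-- In Snowflake: unload a table to an S3-backed external stage (hypothetical names).
COPY INTO @my_s3_stage/mydataset
FROM mydataset
FILE_FORMAT = (TYPE = PARQUET);

-- In ClickHouse: read the staged files back and insert them into a target table.
INSERT INTO mydataset
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/mydataset/*.parquet', 'Parquet');
```

The sections below walk through each side of this flow in more detail.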
Export data from Snowflake
Exporting data from Snowflake requires the use of an external stage, as shown in the diagram above. Suppose we want to export a Snowflake table whose schema includes VARIANT and OBJECT columns. With the stage's S3 bucket in the us-east-1 region, copying data to the bucket will take around 30 minutes.
Import to ClickHouse
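Before inserting anything, it can be useful to inspect the staged files directly from ClickHouse. The bucket path and credentials below are hypothetical placeholders:

```sql
-- Peek at the staged Parquet files with the s3 table function
-- (hypothetical bucket path and credentials).
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/mydataset/*.parquet',
        '<aws_access_key_id>', '<aws_secret_access_key>', 'Parquet')
LIMIT 5;

-- Or check the schema ClickHouse infers from the files first.
DESCRIBE s3('https://my-bucket.s3.amazonaws.com/mydataset/*.parquet',
            '<aws_access_key_id>', '<aws_secret_access_key>', 'Parquet');
```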
Once the data is staged in intermediary object storage, ClickHouse functions such as the s3 table function can be used to insert the data into a table. The examples in this guide use the s3 table function for AWS S3, but the gcs table function can be used for Google Cloud Storage and the azureBlobStorage table function for Azure Blob Storage. Assuming a target table with a matching schema exists in ClickHouse, an INSERT INTO SELECT command inserts the data from S3 into it.
Note on nested column structures
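As an illustration of the cast this note discusses, here is a hedged sketch of an insert that parses a JSON-string column into a Tuple. The table name, column names, and Tuple layout are hypothetical, not the schema from this guide:

```sql
-- Hypothetical target table where the nested column is modeled as a named Tuple.
CREATE TABLE mydataset
(
    id UInt64,
    some_file Tuple(filename String, version String)
)
ENGINE = MergeTree
ORDER BY id;

-- Snowflake writes VARIANT/OBJECT columns out as JSON strings,
-- so parse them into the Tuple type at insert time.
INSERT INTO mydataset
SELECT
    id,
    JSONExtract(some_file, 'Tuple(filename String, version String)') AS some_file
FROM s3('https://my-bucket.s3.amazonaws.com/mydataset/*.parquet', 'Parquet');
```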
The VARIANT and OBJECT columns in the original Snowflake table schema will be output as JSON strings by default, forcing us to cast these when inserting them into ClickHouse. Nested structures such as some_file are converted to JSON strings on copy by Snowflake. Importing this data therefore requires transforming these structures into Tuples at insert time in ClickHouse, using the JSONExtract function.
Test successful data export
To test whether your data was properly inserted, run a SELECT query on your new table.
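For example, with a hypothetical table name:

```sql
-- Sanity-check the row count against the count reported by Snowflake,
-- and sample a few rows (hypothetical table name).
SELECT count() FROM mydataset;

SELECT * FROM mydataset LIMIT 10;
```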