Kafka ClickPipes FAQ

General

ClickPipes uses a dedicated architecture built on the Kafka Consumer API to read data from a specified topic and insert it into a ClickHouse table on a specific ClickHouse Cloud service.
The Kafka table engine is a ClickHouse core capability that implements a “pull model”: the ClickHouse server itself connects to Kafka, pulls events, and then writes them locally.

ClickPipes is a separate cloud service that runs independently of the ClickHouse service. It connects to Kafka (or other data sources) and pushes events to an associated ClickHouse Cloud service. This decoupled architecture allows for superior operational flexibility, clear separation of concerns, scalable ingestion, graceful failure management, extensibility, and more.
To use ClickPipes for Kafka, you need a running Kafka broker and a ClickHouse Cloud service with ClickPipes enabled. You also need to ensure that ClickHouse Cloud can reach your Kafka broker, either by allowing remote connections on the Kafka side and whitelisting the ClickHouse Cloud egress IP addresses in your Kafka setup, or by using AWS PrivateLink to connect ClickPipes for Kafka to your Kafka brokers.
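Before creating a ClickPipe, it can help to confirm that the broker is reachable at the network level from an allowed host. The sketch below is a generic TCP reachability check, not part of ClickPipes itself; the hostname and port are placeholders.

```python
import socket


def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example with a placeholder broker address:
# can_reach("broker.example.com", 9092)
```

Note that a successful TCP connection only proves network reachability; authentication and TLS settings are validated separately when the ClickPipe is created.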
No. ClickPipes for Kafka is designed for reading data from Kafka topics, not writing data to them. To write data to a Kafka topic, you need to use a dedicated Kafka producer.
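As a sketch of what that looks like, the snippet below serializes an event with the standard library and then sends it with the third-party `kafka-python` package. The broker address and topic name are placeholders, and the producer part assumes a reachable broker.

```python
import json


def serialize_event(event: dict) -> bytes:
    """Encode an event as compact UTF-8 JSON bytes, the form a producer sends."""
    return json.dumps(event, separators=(",", ":")).encode("utf-8")


if __name__ == "__main__":
    # Assumes the third-party kafka-python package (pip install kafka-python)
    # and a reachable broker; hostnames below are placeholders.
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="broker.example.com:9092")
    producer.send("my_topic", value=serialize_event({"id": 1, "msg": "hello"}))
    producer.flush()
```

Once events are in the topic, ClickPipes can consume them into ClickHouse as usual.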
Yes. If the brokers are part of the same quorum, they can be configured together as a single comma-delimited list.
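For illustration, the expected form is just `host:port` pairs joined by commas; the hostnames below are placeholders.

```python
def broker_list(brokers: list[str]) -> str:
    """Join host:port pairs into the comma-delimited form a broker list uses."""
    return ",".join(brokers)


# Placeholder hostnames:
# broker_list(["kafka-1.example.com:9092", "kafka-2.example.com:9092"])
# -> "kafka-1.example.com:9092,kafka-2.example.com:9092"
```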
Yes, ClickPipes for streaming can be scaled both horizontally and vertically. Horizontal scaling adds more replicas to increase throughput, while vertical scaling increases the resources (CPU and RAM) allocated to each replica to handle more intensive workloads. This can be configured during ClickPipe creation, or at any other point under Settings -> Advanced Settings -> Scaling.

Azure Event Hubs

No. ClickPipes requires the Event Hubs namespace to have the Kafka surface enabled, which is only available in tiers above the Basic tier. See the Azure Event Hubs documentation for more information.
No. ClickPipes only supports schema registries that are API-compatible with the Confluent Schema Registry, which isn’t the case for Azure Schema Registry. If you require support for this schema registry, reach out to our team.
To list topics and consume events, the shared access policy that is given to ClickPipes requires, at minimum, a ‘Listen’ claim.
If your ClickHouse instance is in a different region or continent from your Event Hubs deployment, you may experience timeouts when onboarding your ClickPipes and higher latency when consuming data from the Event Hub. We recommend deploying ClickHouse Cloud and Azure Event Hubs in the same cloud region, or in regions located close to each other, to avoid this performance overhead.
Yes. ClickPipes expects you to include the port number for the Kafka surface, which should be :9093.
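The Kafka-compatible endpoint of an Event Hubs namespace follows a fixed pattern, `<namespace>.servicebus.windows.net` on TLS port 9093, so the bootstrap address can be derived from the namespace name alone. A small illustrative helper (the namespace name is a placeholder):

```python
def event_hubs_bootstrap(namespace: str) -> str:
    """Build the Kafka-surface bootstrap address for an Event Hubs namespace.

    Event Hubs exposes its Kafka-compatible endpoint at
    <namespace>.servicebus.windows.net on port 9093 (TLS).
    """
    return f"{namespace}.servicebus.windows.net:9093"


# Placeholder namespace:
# event_hubs_bootstrap("my-namespace")
# -> "my-namespace.servicebus.windows.net:9093"
```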
Yes. To restrict traffic to your Event Hubs instance, please allowlist the documented static NAT IP addresses.
Both work. We strongly recommend using a shared access policy at the namespace level, as it allows ClickPipes to retrieve samples from multiple Event Hubs.