
Streaming files (Node.js only)

There are several file streaming examples with popular data formats (NDJSON, CSV, Parquet) in the client repository. Streaming other formats into a file works the same way as Parquet; the only differences are the format used for the query call (JSONEachRow, CSV, etc.) and the output file name.

Supported data formats

The client handles data formats as either JSON or text. If you specify a format from the JSON family (JSONEachRow, JSONCompactEachRow, etc.), the client will serialize and deserialize the data during communication over the wire. Data provided in the "raw" text formats (the CSV, TabSeparated, and CustomSeparated families) is sent over the wire without additional transformations.
Note that there may be confusion between JSON as a general format and the ClickHouse JSON format. The client supports streaming JSON objects with formats such as JSONEachRow (see the table overview for other streaming-friendly formats; see also the select_streaming_ examples in the client repository). It is only formats like ClickHouse JSON (and a few others) that are represented as a single object in the response and cannot be streamed by the client.
| Format                                     | Input (array) | Input (object) | Input/Output (Stream) | Output (JSON) | Output (text)     |
|--------------------------------------------|---------------|----------------|-----------------------|---------------|-------------------|
| JSON                                       |               | ✔️             |                       | ✔️            | ✔️                |
| JSONCompact                                |               | ✔️             |                       | ✔️            | ✔️                |
| JSONObjectEachRow                          |               | ✔️             |                       | ✔️            | ✔️                |
| JSONColumnsWithMetadata                    |               | ✔️             |                       | ✔️            | ✔️                |
| JSONStrings                                | ❌️            |                |                       | ✔️            | ✔️                |
| JSONCompactStrings                         |               |                |                       | ✔️            | ✔️                |
| JSONEachRow                                | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONEachRowWithProgress                    | ❌️            |                | ✔️ ❗- see below      | ✔️            | ✔️                |
| JSONStringsEachRow                         | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONCompactEachRow                         | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONCompactStringsEachRow                  | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONCompactEachRowWithNames                | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONCompactEachRowWithNamesAndTypes        | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONCompactStringsEachRowWithNames         | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| JSONCompactStringsEachRowWithNamesAndTypes | ✔️            |                | ✔️                    | ✔️            | ✔️                |
| CSV                                        |               |                | ✔️                    |               | ✔️                |
| CSVWithNames                               |               |                | ✔️                    |               | ✔️                |
| CSVWithNamesAndTypes                       |               |                | ✔️                    |               | ✔️                |
| TabSeparated                               |               |                | ✔️                    |               | ✔️                |
| TabSeparatedRaw                            |               |                | ✔️                    |               | ✔️                |
| TabSeparatedWithNames                      |               |                | ✔️                    |               | ✔️                |
| TabSeparatedWithNamesAndTypes              |               |                | ✔️                    |               | ✔️                |
| CustomSeparated                            |               |                | ✔️                    |               | ✔️                |
| CustomSeparatedWithNames                   |               |                | ✔️                    |               | ✔️                |
| CustomSeparatedWithNamesAndTypes           |               |                | ✔️                    |               | ✔️                |
| Parquet                                    |               |                | ✔️                    |               | ✔️ ❗- see below  |
For Parquet, the main use case for selects will likely be writing the resulting stream into a file; see the example in the client repository. JSONEachRowWithProgress is an output-only format that supports progress reporting in the stream; see this example for more details. The entire list of ClickHouse input and output formats is available here.
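As an illustration of what the streaming-friendly JSON* formats look like on the wire, JSONEachRow is newline-delimited JSON: every line of the response body is one standalone JSON object. A minimal, client-free sketch of decoding one buffered chunk (the data is made up):

```typescript
// JSONEachRow: one JSON object per line ("NDJSON")
const chunk = '{"id":1,"name":"a"}\n{"id":2,"name":"b"}\n'

// split on newlines, drop the trailing empty string, parse each row
const rows = chunk
  .split('\n')
  .filter((line) => line.length > 0)
  .map((line) => JSON.parse(line))

console.log(rows.length) // 2
```

This line-oriented framing is what allows the client to emit complete rows as they arrive, instead of buffering the whole response as formats like ClickHouse JSON require.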

Supported ClickHouse data types

The related JS type is relevant for any JSON* format except the ones that represent everything as a string (e.g. JSONStringsEachRow).
| Type                  | Status           | JS type                    |
|-----------------------|------------------|----------------------------|
| UInt8/16/32           | ✔️               | number                     |
| UInt64/128/256        | ✔️ ❗- see below | string                     |
| Int8/16/32            | ✔️               | number                     |
| Int64/128/256         | ✔️ ❗- see below | string                     |
| Float32/64            | ✔️               | number                     |
| Decimal               | ✔️ ❗- see below | number                     |
| Boolean               | ✔️               | boolean                    |
| String                | ✔️               | string                     |
| FixedString           | ✔️               | string                     |
| UUID                  | ✔️               | string                     |
| Date/Date32           | ✔️ ❗- see below | string                     |
| DateTime/DateTime64   | ✔️ ❗- see below | string                     |
| Enum                  | ✔️               | string                     |
| LowCardinality        | ✔️               | string                     |
| Array(T)              | ✔️               | T[]                        |
| (new) JSON            | ✔️               | object                     |
| Variant(T1, T2…)      | ✔️               | T (depends on the variant) |
| Dynamic               | ✔️               | T (depends on the variant) |
| Nested                | ✔️               | T[]                        |
| Tuple(T1, T2, …)      | ✔️               | [T1, T2, …]                |
| Tuple(n1 T1, n2 T2…)  | ✔️               | { n1: T1; n2: T2; …}       |
| Nullable(T)           | ✔️               | JS type for T or null      |
| IPv4                  | ✔️               | string                     |
| IPv6                  | ✔️               | string                     |
| Point                 | ✔️               | [ number, number ]         |
| Ring                  | ✔️               | Array<Point>               |
| Polygon               | ✔️               | Array<Ring>                |
| MultiPolygon          | ✔️               | Array<Polygon>             |
| Map(K, V)             | ✔️               | Record<K, V>               |
| Time/Time64           | ✔️               | string                     |
The entire list of supported ClickHouse data types is available here. See also the type caveat sections below.
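As a quick illustration of the mappings above, here is what a decoded JSONEachRow row could look like for a table with UInt64, Map, Point, and Nullable columns (a sketch; the column names are made up):

```typescript
// Hypothetical row decoded from a JSONEachRow response:
const row = {
  id: '42',                  // UInt64 arrives as a string (see the caveats below)
  labels: { env: 'prod' },   // Map(String, String) → Record<string, string>
  location: [55.75, 37.62],  // Point → [number, number]
  parent_id: null,           // Nullable(UInt32) → number | null
}

console.log(typeof row.id) // 'string'
```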

Date/Date32 types caveats

Since the client inserts values without additional type conversion, Date/Date32 columns can only be inserted as strings. Example: inserting a Date value. Source code
await client.insert({
  table: 'my_table',
  values: [ { date: '2022-09-05' } ],
  format: 'JSONEachRow',
})
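Since Date columns accept only strings here, a small helper can produce the YYYY-MM-DD shape from a JS Date (toClickHouseDate is an illustrative name, not part of the client API):

```typescript
// Illustrative helper (not part of @clickhouse/client):
// format a JS Date as a 'YYYY-MM-DD' string for a Date column insert
const toClickHouseDate = (d: Date): string => d.toISOString().slice(0, 10)

console.log(toClickHouseDate(new Date(Date.UTC(2022, 8, 5)))) // '2022-09-05'
```

Note that toISOString works in UTC; if your dates are in a local time zone, adjust accordingly before formatting.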
However, if you are using DateTime or DateTime64 columns, you can use both strings and JS Date objects. JS Date objects can be passed to insert as-is with date_time_input_format set to best_effort. See this example for more details.

Decimal* types caveats

It is possible to insert Decimals using JSON* family formats. Assuming we have a table defined as:
CREATE TABLE my_table
(
  id     UInt32,
  dec32  Decimal(9, 2),
  dec64  Decimal(18, 3),
  dec128 Decimal(38, 10),
  dec256 Decimal(76, 20)
)
ENGINE MergeTree()
ORDER BY (id)
We can insert values without precision loss using the string representation:
await client.insert({
  table: 'my_table',
  values: [{
    id: 1,
    dec32:  '1234567.89',
    dec64:  '123456789123456.789',
    dec128: '1234567891234567891234567891.1234567891',
    dec256: '12345678912345678912345678911234567891234567891234567891.12345678911234567891',
  }],
  format: 'JSONEachRow',
})
However, when querying the data in JSON* formats, ClickHouse will return Decimals as numbers by default, which could lead to precision loss. To avoid this, you could cast Decimals to string in the query:
await client.query({
  query: `
    SELECT toString(dec32)  AS decimal32,
           toString(dec64)  AS decimal64,
           toString(dec128) AS decimal128,
           toString(dec256) AS decimal256
    FROM my_table
  `,
  format: 'JSONEachRow',
})
See this example for more details.
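To see why the string cast matters, here is a self-contained sketch of the precision loss that occurs when a high-precision decimal is transferred as a JSON number (values correspond to the Decimal(18, 3) column above):

```typescript
// A Decimal(18, 3) value serialized as a JSON number by the server:
const raw = '{"dec64":123456789123456.789}'

// JSON.parse maps it to a double, which cannot hold all 18 significant digits
const parsed = JSON.parse(raw).dec64
console.log(String(parsed) === '123456789123456.789') // false: precision was lost

// The same value transferred as a string survives intact
const safe = JSON.parse('{"dec64":"123456789123456.789"}').dec64
console.log(safe === '123456789123456.789') // true
```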

Integral types: Int64, Int128, Int256, UInt64, UInt128, UInt256

Though the server can accept these types as numbers, they are returned as strings in the JSON* family of output formats to avoid integer overflow, as the maximum values of these types are larger than Number.MAX_SAFE_INTEGER. This behavior, however, can be modified with the output_format_json_quote_64bit_integers setting. Example: adjusting the JSON output format for 64-bit numbers.
// By default, 64-bit integers are returned as strings
const resultSet1 = await client.query({
  query: 'SELECT * FROM system.numbers LIMIT 1',
  format: 'JSONEachRow',
})

expect(await resultSet1.json()).toEqual([{ number: '0' }])

// With output_format_json_quote_64bit_integers disabled, they are returned as numbers
const resultSet2 = await client.query({
  query: 'SELECT * FROM system.numbers LIMIT 1',
  format: 'JSONEachRow',
  clickhouse_settings: { output_format_json_quote_64bit_integers: 0 },
})

expect(await resultSet2.json()).toEqual([{ number: 0 }])
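The overflow risk is easy to demonstrate without a server: the UInt64 maximum (2^64 - 1) far exceeds Number.MAX_SAFE_INTEGER, so adjacent values collapse when coerced to a JS number:

```typescript
// Number.MAX_SAFE_INTEGER is 2^53 - 1
console.log(Number.MAX_SAFE_INTEGER) // 9007199254740991

// Two distinct UInt64 values near the type's maximum...
const a = '18446744073709551615' // 2^64 - 1
const b = '18446744073709551614' // 2^64 - 2

// ...become indistinguishable as JS numbers, but not as strings or BigInt
console.log(Number(a) === Number(b)) // true: unsafe
console.log(BigInt(a) === BigInt(b)) // false: safe
```

This is why keeping the default (quoted) behavior, or converting the returned strings with BigInt, is the safe choice for 64-bit and wider integer columns.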

ClickHouse settings

The client can adjust ClickHouse behavior via the settings mechanism. Settings can be set at the client instance level, so that they are applied to every request sent to ClickHouse:
const client = createClient({
  clickhouse_settings: {}
})
Or a setting can be configured at the request level:
client.query({
  clickhouse_settings: {}
})
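As a concrete sketch (assuming a my_table table exists; date_time_input_format, async_insert, and wait_for_async_insert are standard ClickHouse settings), instance-level and request-level settings can be combined like this:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({
  clickhouse_settings: {
    // applied to every request from this client instance
    date_time_input_format: 'best_effort',
  },
})

await client.insert({
  table: 'my_table',
  values: [{ date: new Date() }],
  format: 'JSONEachRow',
  clickhouse_settings: {
    // request-level settings extend/override the instance-level ones
    async_insert: 1,
    wait_for_async_insert: 1,
  },
})
```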
A type declaration file with all the supported ClickHouse settings can be found here.
Make sure that the user on whose behalf the queries are made has sufficient rights to change the settings.

Advanced topics

Queries with parameters

You can create a query with parameters and pass values to them from the client application. This avoids having to format the query with specific dynamic values on the client side. Format a query as usual, then place the values that you want to pass from the app to the query in braces in the following format:
{<name>: <data_type>}
where:
  • name — placeholder identifier.
  • data_type — data type of the app parameter value.
Example: a query with parameters. Source code.
await client.query({
  query: 'SELECT plus({val1: Int32}, {val2: Int32})',
  format: 'CSV',
  query_params: {
    val1: 10,
    val2: 20,
  },
})
Check https://clickhouse.com/docs/interfaces/cli#cli-queries-with-parameters-syntax for additional details.

Compression

NB: request compression is currently not available in the Web version; response compression works as normal. The Node.js version supports both. Applications operating with large datasets over the wire can benefit from enabling compression. Currently, only GZIP is supported, using zlib.
createClient({
  compression: {
    response: true,
    request: true
  }
})
The configuration parameters are:
  • response: true instructs ClickHouse server to respond with compressed response body. Default value: response: false
  • request: true enables compression on the client request body. Default value: request: false

Logging (Node.js only)

Logging is an experimental feature and is subject to change in the future.
The default logger implementation emits log records to stdout via the console.debug/info/warn/error methods. You can customize the logging logic by providing a LoggerClass, and choose the desired log level via the level parameter (the default is OFF):
import type { Logger } from '@clickhouse/client'

// All three LogParams types are exported by the client
interface LogParams {
  module: string
  message: string
  args?: Record<string, unknown>
}
type ErrorLogParams = LogParams & { err: Error }
type WarnLogParams = LogParams & { err?: Error }

class MyLogger implements Logger {
  trace({ module, message, args }: LogParams) {
    // ...
  }
  debug({ module, message, args }: LogParams) {
    // ...
  }
  info({ module, message, args }: LogParams) {
    // ...
  }
  warn({ module, message, args }: WarnLogParams) {
    // ...
  }
  error({ module, message, args, err }: ErrorLogParams) {
    // ...
  }
}

const client = createClient({
  log: {
    LoggerClass: MyLogger,
    level: ClickHouseLogLevel.DEBUG, // or any other desired level; the default is OFF
  },
})
Currently, the client will log the following events:
  • TRACE - low-level information about the Keep-Alive sockets life cycle
  • DEBUG - response information (without authorization headers and host info)
  • INFO - mostly unused, will print the current log level when the client is initialized
  • WARN - non-fatal errors; failed ping request is logged as a warning, as the underlying error is included in the returned result
  • ERROR - fatal errors from query/insert/exec/command methods, such as a failed request
You can find the default Logger implementation here.

TLS certificates (Node.js only)

The Node.js client optionally supports both basic (Certificate Authority only) and mutual (Certificate Authority plus client certificates) TLS. A basic TLS configuration example, assuming that your certificates are in a certs folder and the CA file is named CA.pem:
const client = createClient({
  url: 'https://<hostname>:<port>',
  username: '<username>',
  password: '<password>', // if required
  tls: {
    ca_cert: fs.readFileSync('certs/CA.pem'),
  },
})
Mutual TLS configuration example using client certificates:
const client = createClient({
  url: 'https://<hostname>:<port>',
  username: '<username>',
  tls: {
    ca_cert: fs.readFileSync('certs/CA.pem'),
    cert: fs.readFileSync('certs/client.crt'),
    key: fs.readFileSync('certs/client.key'),
  },
})
See full examples for basic and mutual TLS in the repository.

Keep-alive configuration (Node.js only)

The client enables Keep-Alive in the underlying HTTP agent by default: connected sockets are reused for subsequent requests, and the Connection: keep-alive header is sent. Idling sockets remain in the connection pool for 2500 milliseconds by default (see the notes about adjusting this option). keep_alive.idle_socket_ttl should be set a fair bit lower than the server/LB Keep-Alive timeout. The main reason is that HTTP/1.1 allows the server to close sockets without notifying the client; if the server or the load balancer closes a connection before the client does, the client could try to reuse the closed socket, resulting in a socket hang up error. If you modify keep_alive.idle_socket_ttl, keep it in sync with your server/LB Keep-Alive configuration, and always lower than that, so that the server never closes an open connection first.

Adjusting idle_socket_ttl

The client sets keep_alive.idle_socket_ttl to 2500 milliseconds, which can be considered the safest default: on the server side, keep_alive_timeout could be as low as 3 seconds in ClickHouse versions prior to 23.11 without config.xml modifications.
If you are happy with the performance and do not experience any issues, it is recommended not to increase keep_alive.idle_socket_ttl, as that might lead to "socket hang-up" errors. Additionally, if your application sends many queries without much downtime between them, the default should be sufficient: the sockets will not idle long enough to be closed, and the client will keep them in the pool.
You can find the correct Keep-Alive timeout value in the server response headers by running the following command:
curl -v --data-binary "SELECT 1" <clickhouse_url>
Check the values of Connection and Keep-Alive headers in the response. For example:
< Connection: Keep-Alive
< Keep-Alive: timeout=10
In this case, keep_alive_timeout is 10 seconds, and you could try increasing keep_alive.idle_socket_ttl to 9000 or even 9500 milliseconds to keep the idling sockets open for a bit longer than by default. Keep an eye on potential “Socket hang-up” errors, which will indicate that the server closes the connections before the client does so, and lower the value until the errors disappear.
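Following that calculation, a sketch of the corresponding client configuration (assuming the 10-second server keep_alive_timeout from the example above):

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({
  keep_alive: {
    // a bit lower than the server's keep_alive_timeout of 10 seconds
    idle_socket_ttl: 9000,
  },
})
```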

Troubleshooting

If you are experiencing socket hang up errors even with the latest version of the client, there are several ways to resolve the issue:
  • Enable logs with at least the WARN log level. This allows you to check for an unconsumed or dangling stream in the application code: the transport layer logs these at the WARN level, as they could lead to the socket being closed by the server. You can enable logging in the client configuration as follows:
    const client = createClient({
      log: { level: ClickHouseLogLevel.WARN },
    })
    
  • Check your application code with the no-floating-promises ESLint rule enabled, which helps identify unhandled promises that could lead to dangling streams and sockets.
  • Slightly reduce the keep_alive.idle_socket_ttl setting in the client configuration. In certain situations, for example, high network latency between the client and the server, it could be beneficial to reduce keep_alive.idle_socket_ttl by another 200–500 milliseconds, ruling out the situation where an outgoing request obtains a socket that the server is about to close.
  • If this error is happening during long-running queries with no data coming in or out (for example, a long-running INSERT FROM SELECT), this might be due to the load balancer closing idling connections. You could try forcing some data coming in during long-running queries by using a combination of these ClickHouse settings:
    const client = createClient({
      // Here we assume that we will have some queries with more than 5 minutes of execution time
      request_timeout: 400_000,
      /** These settings in combination allow to avoid LB timeout issues in case of long-running queries without data coming in or out,
       *  such as `INSERT FROM SELECT` and similar ones, as the connection could be marked as idle by the LB and closed abruptly.
       *  In this case, we assume that the LB has idle connection timeout of 120s, so we set 110s as a "safe" value. */
      clickhouse_settings: {
        send_progress_in_http_headers: 1,
        http_headers_progress_interval_ms: '110000', // UInt64, should be passed as a string
      },
    })
    
    Keep in mind, however, that the total size of the received headers has a 16 KB limit in recent Node.js versions; after a certain number of progress headers (around 70–80 in our tests), an exception will be thrown. It is also possible to use an entirely different approach that avoids the wait time on the wire completely, by leveraging an HTTP interface "feature": mutations are not cancelled when the connection is lost. See this example (part 2) for more details.
  • The Keep-Alive feature can be disabled entirely. In this case, the client will add a Connection: close header to every request, and the underlying HTTP agent will not reuse connections. The keep_alive.idle_socket_ttl setting will be ignored, as there will be no idling sockets. This will result in additional overhead, as a new connection will be established for every request.
    const client = createClient({
      keep_alive: {
        enabled: false,
      },
    })
    

Read-only users

When using the client with a readonly=1 user, response compression cannot be enabled, as it requires changing the enable_http_compression setting. The following configuration will result in an error:
const client = createClient({
  compression: {
    response: true, // won't work with a readonly=1 user
  },
})
See the example that has more highlights of readonly=1 user limitations.

Proxy with a pathname

If your ClickHouse instance is behind a proxy and its URL contains a pathname, as in, for example, http://proxy:8123/clickhouse_server, specify clickhouse_server as the pathname configuration option (with or without a leading slash). If it is provided directly in the url instead, it will be treated as the database option. Multiple segments are supported, e.g. /my_proxy/db.
const client = createClient({
  url: 'http://proxy:8123',
  pathname: '/clickhouse_server',
})

Reverse proxy with authentication

If you have a reverse proxy with authentication in front of your ClickHouse deployment, you could use the http_headers setting to provide the necessary headers there:
const client = createClient({
  http_headers: {
    'My-Auth-Header': '...',
  },
})

Custom HTTP/HTTPS agent (experimental, Node.js only)

This is an experimental feature that may change in backwards-incompatible ways in future releases. The default implementation and settings the client provides should be sufficient for most use cases; use this feature only if you are sure that you need it.
By default, the client configures the underlying HTTP(S) agent using the settings provided in the client configuration (such as max_open_connections, keep_alive.enabled, tls), which handles the connections to the ClickHouse server. Additionally, if TLS certificates are used, the underlying agent is configured with the necessary certificates, and the correct TLS auth headers are enforced. Since 1.2.0, it is possible to provide a custom HTTP(S) agent to the client, replacing the default one; this could be useful in the case of tricky network configurations. The following conditions apply if a custom agent is provided:
  • The max_open_connections and tls options will have no effect and will be ignored by the client, as they are part of the underlying agent configuration.
  • keep_alive.enabled will only regulate the default value of the Connection header (true -> Connection: keep-alive, false -> Connection: close).
  • While the idle keep-alive socket management will still work (as it is not tied to the agent but to a particular socket itself), it is now possible to disable it entirely by setting the keep_alive.idle_socket_ttl value to 0.

Custom agent usage examples

Using a custom HTTP(s) Agent without certificates:
const agent = new http.Agent({ // or https.Agent
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
})
const client = createClient({
  http_agent: agent,
})
Using a custom HTTPS Agent with basic TLS and a CA certificate:
const agent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
  ca: fs.readFileSync('./ca.crt'),
})
const client = createClient({
  url: 'https://myserver:8443',
  http_agent: agent,
  // With a custom HTTPS agent, the client won't use the default HTTPS connection implementation; the headers should be provided manually
  http_headers: {
    'X-ClickHouse-User': 'username',
    'X-ClickHouse-Key': 'password',
  },
  // Important: authorization header conflicts with the TLS headers; disable it.
  set_basic_auth_header: false,
})
Using a custom HTTPS Agent with mutual TLS:
const agent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
  ca: fs.readFileSync('./ca.crt'),
  cert: fs.readFileSync('./client.crt'),
  key: fs.readFileSync('./client.key'),
})
const client = createClient({
  url: 'https://myserver:8443',
  http_agent: agent,
  // With a custom HTTPS agent, the client won't use the default HTTPS connection implementation; the headers should be provided manually
  http_headers: {
    'X-ClickHouse-User': 'username',
    'X-ClickHouse-Key': 'password',
    'X-ClickHouse-SSL-Certificate-Auth': 'on',
  },
  // Important: authorization header conflicts with the TLS headers; disable it.
  set_basic_auth_header: false,
})
With certificates and a custom HTTPS Agent, it is likely necessary to disable the default authorization header via the set_basic_auth_header setting (introduced in 1.2.0), as it conflicts with the TLS headers. All the TLS headers should be provided manually.

Known limitations (Node.js/web)

Known limitations (web)

  • Streaming for select queries works, but it is disabled for inserts (on the type level as well).
  • Request compression is disabled and configuration is ignored. Response compression works.
  • No logging support yet.

Tips for performance optimizations

  • To reduce application memory consumption, consider using streams for large inserts (e.g. from files) and selects when applicable. For event listeners and similar use cases, async inserts could be another good option, allowing you to minimize, or even completely avoid, batching on the client side. Async insert examples are available in the client repository, with async_insert_ as the file name prefix.
  • The client does not enable request or response compression by default. However, when selecting or inserting large datasets, you could consider enabling it via ClickHouseClientConfigOptions.compression (either for just request or response, or both).
  • Compression has a significant performance penalty. Enabling it for requests or responses will negatively impact the speed of inserts or selects, respectively, but will reduce the amount of network traffic transferred by the application.

Contact us

If you have any questions or need help, feel free to reach out to us in the Community Slack (#clickhouse-js channel) or via GitHub issues.