The Hive engine allows you to perform SELECT queries on an HDFS Hive table. Currently, it supports the following input formats:
- Text: supports only simple scalar column types, except binary.
- ORC: supports simple scalar column types, except char; the only supported complex type is array.
- Parquet: supports all simple scalar column types; the only supported complex type is array.
Creating a table
- Column names should be the same as in the original Hive table, but you may use only some of these columns, in any order; you may also use alias columns computed from other columns.
- Column types should be the same as those in the original Hive table.
- The partition-by expression should be consistent with the original Hive table, and the columns in the partition-by expression should be part of the table structure.
Engine parameters:
- thrift://host:port — Hive Metastore address.
- database — Remote database name.
- table — Remote table name.
Usage example
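A minimal sketch of creating and querying a Hive-backed table, using the engine parameters above. The Metastore address, database, table, column names, and partition values here are assumptions for illustration, not taken from a real deployment:

```sql
-- Assumes a Hive Metastore at localhost:9083 exposing an existing ORC table
-- test.test_orc partitioned by day; all names and values are illustrative.
CREATE TABLE default.hive_orc_table
(
    id Int32,
    score Float64,
    day String
)
ENGINE = Hive('thrift://localhost:9083', 'test', 'test_orc')
PARTITION BY day;

-- Query it like any other table:
SELECT id, score
FROM default.hive_orc_table
WHERE day = '2021-09-18';
```

Note that the column list and the PARTITION BY clause must stay consistent with the original Hive table, as described above.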
How to use local cache for HDFS filesystem
We strongly advise you to enable the local cache for remote filesystems. Benchmarks show that queries run almost 2x faster with the cache. Before using the cache, add it to config.xml.
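For example, a cache section in config.xml could look like the sketch below; the directory path and size values are illustrative assumptions, and the element names mirror the settings described next:

```xml
<!-- Illustrative local-cache configuration for remote filesystems (HDFS). -->
<local_cache_for_remote_fs>
    <enable>true</enable>
    <root_dir>local_cache</root_dir>                      <!-- cache directory, example value -->
    <limit_size>559096952</limit_size>                    <!-- max cache size in bytes, example value -->
    <bytes_read_before_flush>1048576</bytes_read_before_flush> <!-- 1 MB, the default -->
</local_cache_for_remote_fs>
```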
- enable: If true, ClickHouse maintains a local cache for the remote filesystem (HDFS) after startup.
- root_dir: Required. The root directory for storing local cache files for the remote filesystem.
- limit_size: Required. The maximum size (in bytes) of the local cache files.
- bytes_read_before_flush: The number of bytes to read before flushing to the local filesystem when downloading a file from the remote filesystem. The default value is 1 MB.