This engine is designed for thinning and aggregating/averaging (rollup) Graphite data. It may be helpful to developers who want to use ClickHouse as a data store for Graphite. You can use any ClickHouse table engine to store the Graphite data if you do not need rollup, but if you need a rollup, use
GraphiteMergeTree. The engine reduces the volume of storage and increases the efficiency of queries from Graphite.
The engine inherits properties from MergeTree.
Creating a table
A table for Graphite data should have the following columns:

- Metric name (Graphite sensor). Data type: String.
- Time of measuring the metric. Data type: DateTime.
- Value of the metric. Data type: Float64.
- Version of the metric. Data type: any numeric. ClickHouse saves the rows with the highest version, or the last written if versions are the same; other rows are deleted during the merge of data parts.
config_section — Name of the section in the configuration file where the rollup rules are set.
When creating a GraphiteMergeTree table, the same clauses are required as when creating a MergeTree table.
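As a minimal sketch, a table matching the default column names might be created like this (the table name, partitioning, and sorting key are illustrative, and `'graphite_rollup'` assumes a section with that name exists in the server configuration):

```sql
CREATE TABLE graphite_data
(
    Path String,        -- metric name (Graphite sensor)
    Time DateTime,      -- time of measuring the metric
    Value Float64,      -- value of the metric
    Timestamp UInt32    -- version of the metric
)
ENGINE = GraphiteMergeTree('graphite_rollup')
PARTITION BY toYYYYMM(Time)
ORDER BY (Path, Time)
```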
Rollup configuration
The settings for rollup are defined by the graphite_rollup parameter in the server configuration. The name of the parameter can be anything. You can create several configurations and use them for different tables.

Rollup configuration structure:

- required-columns
- patterns

Required columns
- path_column_name — The name of the column storing the metric name (Graphite sensor). Default value: Path.
- time_column_name — The name of the column storing the time of measuring the metric. Default value: Time.
- value_column_name — The name of the column storing the value of the metric at the time set in time_column_name. Default value: Value.
- version_column_name — The name of the column storing the version of the metric. Default value: Timestamp.
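In the server configuration these appear as child elements of the rollup section; a sketch with the default values spelled out (the section name `graphite_rollup` is an assumption, since the name can be anything):

```xml
<graphite_rollup>
    <path_column_name>Path</path_column_name>
    <time_column_name>Time</time_column_name>
    <value_column_name>Value</value_column_name>
    <version_column_name>Timestamp</version_column_name>
    <!-- pattern and default sections follow here -->
</graphite_rollup>
```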
Patterns
Structure of the patterns section:
Patterns must be strictly ordered:

1. Patterns without function or retention.
2. Patterns with both function and retention.
3. Pattern default.
When processing a row, ClickHouse checks the rules in the pattern sections. Each pattern (including default) section can contain the function parameter for aggregation, retention parameters, or both. If the metric name matches the regexp, the rules from the pattern section (or sections) are applied; otherwise, the rules from the default section are used.
Fields for pattern and default sections:
rule_type — A rule's type. It is applied only to particular metrics; the engine uses it to separate plain and tagged metrics. Optional parameter. Default value: all. It is unnecessary when performance is not critical, or when only one metric type is used, e.g. plain metrics. By default, only one set of rules is created. Otherwise, if any of the special types is defined, two different sets are created: one for plain metrics (root.branch.leaf) and one for tagged metrics (root.branch.leaf;tag1=value1). The default rules end up in both sets. Valid values:
- all (default) — a universal rule, used when rule_type is omitted.
- plain — a rule for plain metrics. The field regexp is processed as a regular expression.
- tagged — a rule for tagged metrics (metrics are stored in the DB in the format someName?tag1=value1&tag2=value2&tag3=value3). The regular expression must be sorted by tag names; the first tag must be __name__, if it exists. The field regexp is processed as a regular expression.
- tag_list — a rule for tagged metrics, a simple DSL for easier metric description in Graphite format: someName;tag1=value1;tag2=value2, someName, or tag1=value1;tag2=value2. The field regexp is translated into a tagged rule. Sorting by tag names is unnecessary; it is done automatically. A tag's value (but not a name) can be set as a regular expression, e.g. env=(dev|staging).
- regexp — A pattern for the metric name (a regular expression or DSL).
- age — The minimum age of the data in seconds.
- precision — How precisely to define the age of the data in seconds. Should be a divisor of 86400 (seconds in a day).
- function — The name of the aggregating function to apply to data whose age falls within the range [age, age + precision]. Accepted functions: min / max / any / avg. The average is calculated imprecisely, as the average of averages.
Configuration example without rule types
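A sketch of such a configuration (the metric name `click_cost` and the retention values are illustrative). A pattern with both function and retention comes first, then the default section, per the ordering rule above:

```xml
<graphite_rollup>
    <pattern>
        <regexp>click_cost</regexp>
        <function>any</function>
        <retention>
            <age>0</age>
            <precision>5</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>60</precision>
        </retention>
    </pattern>
    <default>
        <function>max</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
        <retention>
            <age>3600</age>
            <precision>300</precision>
        </retention>
        <retention>
            <age>86400</age>
            <precision>3600</precision>
        </retention>
    </default>
</graphite_rollup>
```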
Configuration example with rule types
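A sketch showing one rule of each special type (the regexps, DSL, and functions are illustrative, not recommendations):

```xml
<graphite_rollup>
    <pattern>
        <!-- applies only to plain metrics such as root.branch.leaf -->
        <rule_type>plain</rule_type>
        <regexp>\.count$</regexp>
        <function>sum</function>
    </pattern>
    <pattern>
        <!-- applies only to tagged metrics; regexp is matched against
             the stored form someName?tag1=value1&amp;tag2=value2 -->
        <rule_type>tagged</rule_type>
        <regexp>^some_metric\?env=prod</regexp>
        <function>min</function>
    </pattern>
    <pattern>
        <!-- tag_list DSL in the regexp field; translated into a tagged rule -->
        <rule_type>tag_list</rule_type>
        <regexp>someName;env=(dev|staging)</regexp>
        <function>avg</function>
    </pattern>
    <default>
        <function>max</function>
        <retention>
            <age>0</age>
            <precision>60</precision>
        </retention>
    </default>
</graphite_rollup>
```

Because special rule types are present, two rule sets are built internally: the plain rule goes into the plain set, the tagged and tag_list rules into the tagged set, and the default section into both.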
Data rollup is performed during merges. Usually, merges are not started for old partitions, so to roll them up it is necessary to trigger an unscheduled merge using the OPTIMIZE query, or to use additional tools, for example graphite-ch-optimizer.
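For example, a merge of an old partition can be forced like this (the table name `graphite_data` and the partition ID `202401` are placeholders; the partition expression depends on your PARTITION BY clause):

```sql
-- Force a merge, and therefore rollup, of one old partition
OPTIMIZE TABLE graphite_data PARTITION 202401 FINAL;
```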