This guide shows you how to collect logs and metrics from your Kubernetes system and send them to ClickStack for visualization and analysis. For demo data, we optionally use the ClickStack fork of the official OpenTelemetry demo.
You can follow this guide using either of the following deployment options:
Open Source ClickStack: Deploy ClickStack entirely within your Kubernetes cluster, including:
ClickHouse
HyperDX
MongoDB (used for dashboard state and configuration)
Managed ClickStack, with ClickHouse and the ClickStack UI (HyperDX) managed in ClickHouse Cloud. This eliminates the need to run ClickHouse or HyperDX inside your cluster.
To simulate application traffic, you can optionally deploy the ClickStack fork of the OpenTelemetry Demo Application. This generates telemetry data including logs, metrics, and traces. If you already have workloads running in your cluster, you can skip this step and monitor existing pods, nodes, and containers.
This step is optional and intended for users who have no existing pods to monitor. Users with existing services deployed in their Kubernetes environment can skip it; however, the demo includes instrumented microservices which generate trace and session replay data, allowing users to explore all features of ClickStack.

The following deploys the ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g. Kafka, Redis), and SDK integrations with ClickStack.

All services are deployed to the otel-demo namespace. Each deployment includes:
Automatic instrumentation with OTel and ClickStack SDKs for traces, metrics, and logs.
All services send their instrumentation to a my-hyperdx-hdx-oss-v2-otel-collector OpenTelemetry collector (not deployed by the demo itself - it is installed with the ClickStack Helm chart in the next step).
Forwarding of resource tags to correlate logs, metrics and traces via the environment variable OTEL_RESOURCE_ATTRIBUTES.
The demo is composed of microservices written in different programming languages that talk to each other over gRPC and HTTP, and a load generator that uses Locust to fake user traffic. The original source code for this demo has been modified to use ClickStack instrumentation.

Credit: https://opentelemetry.io/docs/demo/architecture/

Further details on the demo can be found in the official OpenTelemetry demo documentation.
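As a rough sketch, deploying the demo involves creating the namespace and applying the fork's manifests. The manifest filename below is hypothetical - consult the ClickStack demo fork's repository for the exact install instructions:

```shell
# create the namespace used throughout this guide
kubectl create namespace otel-demo

# apply the demo manifests from the ClickStack fork of the OpenTelemetry demo
# (hypothetical path - check the fork's README for the actual file or chart)
kubectl apply -n otel-demo -f otel-demo-manifest.yaml
```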
With the Helm chart installed, you can deploy ClickStack to your cluster. You can either run all components, including ClickHouse and HyperDX, within your Kubernetes environment, or deploy just the collector and rely on Managed ClickStack for ClickHouse and the HyperDX UI.
ClickStack Open Source (self-managed)
The following command installs ClickStack to the otel-demo namespace. The helm chart deploys:
A ClickHouse instance
HyperDX
The ClickStack distribution of the OTel collector
MongoDB for storage of HyperDX application state
You might need to adjust the storageClassName according to your Kubernetes cluster configuration.
Users not deploying the OTel demo can change this to an appropriate namespace of their own.
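A minimal install command might look like the following. The repository alias and chart name are assumptions to be verified against the ClickStack documentation; the release name my-hyperdx matches the collector service name (my-hyperdx-hdx-oss-v2-otel-collector) referenced elsewhere in this guide:

```shell
# install the ClickStack chart into the otel-demo namespace
# (repository alias "hyperdx" and chart "hdx-oss-v2" are assumptions)
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo \
  --create-namespace
```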
ClickStack in production

This chart also installs ClickHouse and the OTel collector. For production, it is recommended that you use the ClickHouse and OTel collector operators and/or use Managed ClickStack. To disable ClickHouse and the OTel collector, set the following values:
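A values fragment along these lines disables the bundled components; the exact value keys may differ between chart versions, so check the chart's values.yaml before applying:

```yaml
# values.yaml - disable the bundled ClickHouse and OTel collector
# when relying on Managed ClickStack (key names are assumptions)
clickhouse:
  enabled: false
otel:
  enabled: false
```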
The chart currently always deploys both HyperDX and MongoDB. While these components offer an alternative access path, they’re not integrated with ClickHouse Cloud authentication. These components are intended for administrators in this deployment model, providing access to the secure ingestion key needed to ingest through the deployed OTel collector, but shouldn’t be exposed to end users.
To verify the deployment status, run the following command and confirm all components are in the Running state. Note that ClickHouse will be absent if you’re using Managed ClickStack:
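For example, a sketch of the verification step:

```shell
# list all pods in the otel-demo namespace and check the STATUS column
kubectl get pods -n otel-demo
```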
Even when using Managed ClickStack, the local HyperDX instance deployed in the Kubernetes cluster is still required. It provides an ingestion key, managed by the OpAMP server bundled with HyperDX, which secures ingestion through the deployed OTel collector - a capability not currently available in Managed ClickStack.
For security, the service uses ClusterIP and isn't exposed externally by default.

To access the HyperDX UI, port forward from port 3000 on the pod to local port 8080.
```shell
kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
  8080:3000 \
  -n otel-demo
```
Navigate to http://localhost:8080 to access the HyperDX UI.

Create a user, providing a username and password that meet the complexity requirements.
Ingestion to the OTel collector deployed by the ClickStack Helm chart is secured with an ingestion key.

Navigate to Team Settings and copy the Ingestion API Key from the API Keys section. This API key ensures data ingestion through the OpenTelemetry collector is secure.
Create a new Kubernetes secret with the Ingestion API Key, and a ConfigMap containing the location of the OTel collector deployed with the ClickStack Helm chart. Later components will use these to ingest into that collector:
```shell
# create secret with the ingestion API key
kubectl create secret generic hyperdx-secret \
  --from-literal=HYPERDX_API_KEY=<ingestion_api_key> \
  -n otel-demo

# create a ConfigMap pointing to the ClickStack OTel collector deployed above
kubectl create configmap otel-config-vars \
  --from-literal=YOUR_OTEL_COLLECTOR_ENDPOINT=http://my-hyperdx-hdx-oss-v2-otel-collector:4318 \
  -n otel-demo
```
Restart the OpenTelemetry demo application pods so they pick up the Ingestion API Key.
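A sketch of the restart, assuming the demo services run as Deployments in the otel-demo namespace:

```shell
# restart all demo deployments so new pods pick up the secret and ConfigMap
kubectl rollout restart deployment -n otel-demo
```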
To collect Kubernetes metrics, we will deploy a standard OTel collector, configuring it to send data securely to our ClickStack collector using the ingestion API key above.

This requires us to install the OpenTelemetry Helm repo:
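The official OpenTelemetry Helm repository can be added as follows:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```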
To collect logs and metrics from both the cluster itself and each node, we’ll need to deploy two separate OpenTelemetry collectors, each with its own manifest. The two manifests provided - k8s_deployment.yaml and k8s_daemonset.yaml - work together to collect comprehensive telemetry data from your Kubernetes cluster.
k8s_deployment.yaml deploys a single OpenTelemetry Collector instance responsible for collecting cluster-wide events and metadata. It gathers Kubernetes events, cluster metrics, and enriches telemetry data with pod labels and annotations. This collector runs as a standalone deployment with a single replica to avoid duplicate data.
k8s_daemonset.yaml deploys a DaemonSet-based collector that runs on every node in your cluster. It collects node-level and pod-level metrics, as well as container logs, using components like kubeletstats, hostmetrics, and Kubernetes attribute processors. These collectors enrich logs with metadata and send them to HyperDX using the OTLP exporter.
Together, these manifests enable full-stack observability across the cluster, from infrastructure to application-level telemetry, and send the enriched data to ClickStack for centralized analysis.

First, install the collector as a deployment:
```yaml
# k8s_deployment.yaml
mode: deployment
image:
  repository: otel/opentelemetry-collector-contrib
  tag: 0.123.0
# We only want one of these collectors - any more and we'd produce duplicate data
replicaCount: 1
presets:
  kubernetesAttributes:
    enabled: true
    # When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
    # The label's exact name will be the key.
    extractAllPodLabels: true
    # When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
    # The annotation's exact name will be the key.
    extractAllPodAnnotations: true
  # Configures the collector to collect Kubernetes events.
  # Adds the k8sobject receiver to the logs pipeline and collects Kubernetes events by default.
  # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-objects-receiver
  kubernetesEvents:
    enabled: true
  # Configures the Kubernetes Cluster Receiver to collect cluster-level metrics.
  # Adds the k8s_cluster receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
  # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-cluster-receiver
  clusterMetrics:
    enabled: true
extraEnvs:
  - name: HYPERDX_API_KEY
    valueFrom:
      secretKeyRef:
        name: hyperdx-secret
        key: HYPERDX_API_KEY
        optional: true
  - name: YOUR_OTEL_COLLECTOR_ENDPOINT
    valueFrom:
      configMapKeyRef:
        name: otel-config-vars
        key: YOUR_OTEL_COLLECTOR_ENDPOINT
config:
  exporters:
    otlphttp:
      endpoint: "${env:YOUR_OTEL_COLLECTOR_ENDPOINT}"
      compression: gzip
      headers:
        authorization: "${env:HYPERDX_API_KEY}"
  service:
    pipelines:
      logs:
        exporters:
          - otlphttp
      metrics:
        exporters:
          - otlphttp
```
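For example, the deployment manifest above can be applied with the official opentelemetry-collector chart (the release name k8s-deployment is an assumption):

```shell
helm install k8s-deployment open-telemetry/opentelemetry-collector \
  -f k8s_deployment.yaml \
  -n otel-demo
```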
Next, deploy the collector as a DaemonSet for node and pod-level metrics and logs:
```yaml
# k8s_daemonset.yaml
mode: daemonset
image:
  repository: otel/opentelemetry-collector-contrib
  tag: 0.123.0
# Required to use the kubeletstats cpu/memory utilization metrics
clusterRole:
  create: true
  rules:
    - apiGroups:
        - ''
      resources:
        - nodes/proxy
      verbs:
        - get
presets:
  logsCollection:
    enabled: true
  hostMetrics:
    enabled: true
  # Configures the Kubernetes Processor to add Kubernetes metadata.
  # Adds the k8sattributes processor to all the pipelines and adds the necessary rules to ClusterRole.
  # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubernetes-attributes-processor
  kubernetesAttributes:
    enabled: true
    # When enabled, the processor will extract all labels for an associated pod and add them as resource attributes.
    # The label's exact name will be the key.
    extractAllPodLabels: true
    # When enabled, the processor will extract all annotations for an associated pod and add them as resource attributes.
    # The annotation's exact name will be the key.
    extractAllPodAnnotations: true
  # Configures the collector to collect node, pod, and container metrics from the API server on a kubelet.
  # Adds the kubeletstats receiver to the metrics pipeline and adds the necessary rules to ClusterRole.
  # More Info: https://opentelemetry.io/docs/kubernetes/collector/components/#kubeletstats-receiver
  kubeletMetrics:
    enabled: true
extraEnvs:
  - name: HYPERDX_API_KEY
    valueFrom:
      secretKeyRef:
        name: hyperdx-secret
        key: HYPERDX_API_KEY
        optional: true
  - name: YOUR_OTEL_COLLECTOR_ENDPOINT
    valueFrom:
      configMapKeyRef:
        name: otel-config-vars
        key: YOUR_OTEL_COLLECTOR_ENDPOINT
config:
  receivers:
    # Configures additional kubelet metrics
    kubeletstats:
      collection_interval: 20s
      auth_type: 'serviceAccount'
      endpoint: '${env:K8S_NODE_NAME}:10250'
      insecure_skip_verify: true
      metrics:
        k8s.pod.cpu_limit_utilization:
          enabled: true
        k8s.pod.cpu_request_utilization:
          enabled: true
        k8s.pod.memory_limit_utilization:
          enabled: true
        k8s.pod.memory_request_utilization:
          enabled: true
        k8s.pod.uptime:
          enabled: true
        k8s.node.uptime:
          enabled: true
        k8s.container.cpu_limit_utilization:
          enabled: true
        k8s.container.cpu_request_utilization:
          enabled: true
        k8s.container.memory_limit_utilization:
          enabled: true
        k8s.container.memory_request_utilization:
          enabled: true
        container.uptime:
          enabled: true
  exporters:
    otlphttp:
      endpoint: "${env:YOUR_OTEL_COLLECTOR_ENDPOINT}"
      compression: gzip
      headers:
        authorization: "${env:HYPERDX_API_KEY}"
  service:
    pipelines:
      logs:
        exporters:
          - otlphttp
      metrics:
        exporters:
          - otlphttp
```
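The DaemonSet collector can then be installed the same way (the release name k8s-daemonset is an assumption):

```shell
helm install k8s-daemonset open-telemetry/opentelemetry-collector \
  -f k8s_daemonset.yaml \
  -n otel-demo
```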
Navigate to your HyperDX UI - either using your Kubernetes-deployed instance or via Managed ClickStack.
Managed ClickStack
If using Managed ClickStack, simply log in to your ClickHouse Cloud service and select "ClickStack" from the left menu. You will be automatically authenticated and won't need to create a user.

Data sources for logs, metrics, and traces will be pre-created for you.
ClickStack Open Source
To access the locally deployed HyperDX, port forward using the following command and access HyperDX at http://localhost:8080.
```shell
kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=hdx-oss-v2 -o jsonpath='{.items[0].metadata.name}' -n otel-demo) \
  8080:3000 \
  -n otel-demo
```
ClickStack in production

In production, we recommend using an ingress with TLS if you're not using Managed ClickStack. For example:
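A hypothetical values fragment for a TLS-enabled ingress is sketched below; the key names and host are illustrative, so adapt them to the chart's actual values schema and your ingress controller:

```yaml
# values.yaml - expose HyperDX behind a TLS ingress (illustrative keys)
hyperdx:
  ingress:
    enabled: true
    host: hyperdx.example.com
    tls:
      enabled: true
      secretName: hyperdx-tls
```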
To explore the Kubernetes data, navigate to the dedicated preset dashboard at /kubernetes, e.g. http://localhost:8080/kubernetes.

Each of the tabs - Pods, Nodes, and Namespaces - should be populated with data.