Monitoring MongoDB Logs with ClickStack
TL;DR: Collect and visualize MongoDB server logs (4.4+ JSON format) in ClickStack using the OTel filelog receiver. Includes a demo dataset and a pre-built dashboard.

Integration with existing MongoDB

This section covers configuring your existing MongoDB installation to send logs to ClickStack by modifying the ClickStack OTel collector configuration. If you would like to test the MongoDB integration before configuring your own existing setup, you can test with our preconfigured setup and sample data in the "Demo dataset" section.

Prerequisites
- ClickStack instance running
- Existing self-managed MongoDB installation (version 4.4 or newer)
- Access to MongoDB log files
Verify MongoDB logging configuration
MongoDB 4.4+ outputs structured JSON logs by default. Check your log file location:

- Linux (apt/yum): /var/log/mongodb/mongod.log
- macOS (Homebrew): /usr/local/var/log/mongodb/mongo.log
- Docker: often logs to stdout, but can be configured to write to /var/log/mongodb/mongod.log
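If your deployment logs to stdout (common with Docker), you can redirect logs to a file the collector can tail. As a sketch, a minimal systemLog section for mongod.conf (the path matches the Linux default above; adjust it to your environment):

```yaml
# mongod.conf -- systemLog sketch; adjust the path to your environment
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
```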
Create a custom OTel collector configuration for MongoDB
ClickStack allows you to extend the base OpenTelemetry Collector configuration by mounting a custom configuration file and setting an environment variable. The custom configuration is merged with the base configuration managed by HyperDX via OpAMP.

Create a file named mongodb-monitoring.yaml. Two points about this configuration:

- You only define new receivers and pipelines in the custom config. The processors (memory_limiter, transform, batch) and exporters (clickhouse) are already defined in the base ClickStack configuration; you just reference them by name.
- The configuration uses start_at: beginning to read all existing logs when the collector starts. For production deployments, change this to start_at: end to avoid re-ingesting logs on collector restarts.
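A sketch of what mongodb-monitoring.yaml could contain, under the points above. The receiver and pipeline names (filelog/mongodb, logs/mongodb) are illustrative choices, not fixed names:

```yaml
# mongodb-monitoring.yaml -- sketch; receiver/pipeline names are illustrative
receivers:
  filelog/mongodb:
    include:
      - /var/log/mongodb/mongod.log
    start_at: beginning  # change to "end" for production to avoid re-ingestion
    operators:
      # MongoDB 4.4+ writes one JSON document per line
      - type: json_parser
        parse_from: body

service:
  pipelines:
    logs/mongodb:
      receivers: [filelog/mongodb]
      # processors and exporters are defined in the base ClickStack config;
      # they are only referenced here by name
      processors: [memory_limiter, transform, batch]
      exporters: [clickhouse]
```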
Configure ClickStack to load custom configuration
To enable custom collector configuration in your existing ClickStack deployment, you must:

- Mount the custom config file at /etc/otelcol-contrib/custom.config.yaml
- Set the environment variable CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
- Mount your MongoDB log directory so the collector can read the log files
- Docker Compose
- Docker Run (All-in-One Image)
Update your ClickStack deployment configuration:
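For the Docker Compose path, here is a sketch of the relevant service entry; the image name and service name are assumptions, so check them against your existing deployment:

```yaml
# docker-compose.yaml fragment -- image and service names are assumptions
services:
  clickstack:
    image: docker.hyperdx.io/hyperdx/hyperdx-all-in-one
    environment:
      - CUSTOM_OTELCOL_CONFIG_FILE=/etc/otelcol-contrib/custom.config.yaml
    volumes:
      # custom collector config, mounted read-only
      - ./mongodb-monitoring.yaml:/etc/otelcol-contrib/custom.config.yaml:ro
      # MongoDB log directory, read-only
      - /var/log/mongodb:/var/log/mongodb:ro
```

For docker run, the equivalents are an -e flag for CUSTOM_OTELCOL_CONFIG_FILE plus -v flags for the same two read-only mounts.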
Ensure the ClickStack collector has appropriate permissions to read the MongoDB log files. In production, use read-only mounts (:ro) and follow the principle of least privilege.

Verify logs in HyperDX
Once configured, log into HyperDX and verify that logs are flowing: open the Search view, set the source to Logs, and confirm that MongoDB log entries appear.

Demo dataset
Test the MongoDB integration with a pre-generated sample dataset before configuring your production systems.

Create test collector configuration
Create a file named mongodb-demo.yaml with the same structure as mongodb-monitoring.yaml, pointing the filelog receiver at the demo log file.

Run ClickStack with demo configuration
Run ClickStack with the demo log file and mongodb-demo.yaml mounted, following the same mounting pattern as the deployment section above.

Verify logs in HyperDX
Once ClickStack is running:

- Open HyperDX and log in to your account (you may need to create an account first)
- Navigate to the Search view and set the source to Logs
- Set the time range to include 2026-03-09 00:00:00 - 2026-03-10 00:00:00 (UTC)
Dashboards and visualization
Download the dashboard configuration (mongodb-logs-dashboard.json).
Import pre-built dashboard
- Open HyperDX and navigate to the Dashboards section.
- Click "Import Dashboard" in the upper right corner under the ellipsis menu.
- Upload the mongodb-logs-dashboard.json file and click "Finish Import".
The dashboard will be created with all visualizations pre-configured. For the demo dataset, set the time range to include 2026-03-09 00:00:00 - 2026-03-10 00:00:00 (UTC).

Troubleshooting
No logs appearing in HyperDX
Verify that the effective collector configuration includes your filelog receiver; for example, inspect the mounted custom config file inside the ClickStack container.

Logs not parsing correctly
Verify MongoDB is outputting JSON logs (4.4+). If your logs are in the older plain-text format, replace the json_parser operator with a regex_parser, or upgrade to MongoDB 4.4+.
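As a quick sanity check, every line of a 4.4+ log file should be a standalone JSON document. The snippet below validates a representative structured log line (the fields shown are typical of the 4.4+ format); to check your own file, pipe its first line through python3 -m json.tool instead:

```shell
# A representative MongoDB 4.4+ structured log line (abbreviated);
# real lines come from /var/log/mongodb/mongod.log.
line='{"t":{"$date":"2026-03-09T12:00:00.000+00:00"},"s":"I","c":"NETWORK","id":22943,"ctx":"listener","msg":"Connection accepted"}'

# json.tool exits non-zero and prints an error for invalid JSON,
# which is what you would see with the pre-4.4 plain-text format.
echo "$line" | python3 -m json.tool > /dev/null && echo "valid JSON"
```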
Next steps
- Set up alerts for critical events (error spikes, slow query thresholds)
- Create additional dashboards for specific use cases (replica set monitoring, connection tracking)