Enhanced System Monitoring with Grafana, Prometheus, and Docker on openmediavault

After upgrading my openmediavault instance to version 6, I decided to take system monitoring to the next level. While openmediavault comes with some basic monitoring features out of the box, I wanted more detailed insights and real-time visualizations. This led me to integrate Grafana with a powerful backend—Prometheus.

Prometheus is a robust open-source system for collecting time-series data, and Grafana serves as an intuitive frontend to visualize this data. The best part? This integration can be done seamlessly using Docker, and in this post, I’ll walk you through how I set it up on my openmediavault instance.


🐳 Installing Docker and Portainer on openmediavault

Before diving into the setup of Prometheus and Grafana, it’s essential to install Docker on openmediavault. Luckily, it’s straightforward, and you can follow the installation tutorial here.

Once Docker is installed, I also recommend adding Portainer, which provides a web interface to manage Docker containers easily. With Docker and Portainer up and running, I was ready to deploy Grafana and Prometheus.
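
For reference, Portainer itself can also be deployed as a small compose stack. Here's a minimal sketch of the single-node Community Edition setup (image tag and ports as documented by Portainer):

version: '3'

volumes:
  portainer-data:
    driver: local

services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    ports:
      - "9443:9443"  # HTTPS web interface
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # let Portainer manage the local Docker engine
      - portainer-data:/data
    restart: unless-stopped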


⚙️ Setting Up Grafana and Prometheus Using Docker

For the enhanced monitoring setup, I used a Docker Compose configuration, deployed as a stack directly in Portainer, to define the services. The stack I created includes Prometheus for data collection, Grafana for data visualization, and several exporters to collect metrics from my system and Docker containers.

Here’s the configuration I used to deploy Prometheus and Grafana:

version: '3'

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local  

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - /srv/raid_md0/Config/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
  
  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped      
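
One optional tweak worth knowing about: Prometheus keeps 15 days of local data by default. If you want a longer history, the retention window can be extended with an additional command flag (the 90d below is just an example value):

    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=90d"  # keep 90 days instead of the default 15d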

⚡ Prometheus Configuration

Prometheus requires a configuration file that defines how often it scrapes data and from which sources. For my setup, I created a basic Prometheus configuration to get started:

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

This setup enables Prometheus to scrape data from itself every 5 seconds. However, Prometheus is most powerful when it collects data from other systems or exporters, which is where the fun begins.


🛠️ Adding Exporters for System Metrics

To collect data from openmediavault and Docker, I added the following exporters:

🖥️ Node Exporter for openmediavault

The Node exporter collects system-level metrics like CPU, memory, disk, and network usage. Since it’s a Prometheus-native exporter, integrating it was a breeze. Here’s the service configuration for the Node exporter:

1
2
3
4
5
6
7
8
9
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'

This exporter provides key performance metrics for your server, such as memory usage, CPU load, disk I/O, and network throughput.
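
As configured, the exporter is only reachable from other containers on the stack's network, where it listens on port 9100. For a quick sanity check from the host, the port can optionally be published as well; this mapping is my addition, not part of the original service:

  node_exporter:
    # ...configuration as above...
    ports:
      - "9100:9100"  # optional: expose the metrics endpoint to the host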

🐳 cAdvisor for Docker Metrics

To monitor Docker containers, I used cAdvisor, which exposes container metrics like CPU, memory, and network stats. Here’s the service configuration for cAdvisor:

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    restart: unless-stopped

cAdvisor is a fantastic tool for visualizing Docker container performance metrics, providing insights into resource usage across your containerized environment.
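
A small bonus: cAdvisor ships its own lightweight web UI on container port 8080. The stack above doesn't expose it; if you'd like to browse it directly, a port mapping can be added (the 8081 host port is an arbitrary choice to avoid clashes with other services):

  cadvisor:
    # ...configuration as above...
    ports:
      - "8081:8080"  # optional: cAdvisor's built-in web UI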


🔧 Full Stack Configuration

Bringing everything together, here's the final stack configuration for my monitoring setup, which includes Prometheus, Grafana, Node exporter, and cAdvisor (adjust the host path for prometheus.yml to wherever the file lives on your system):

version: '3'

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local  

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - /srv/dev-disk-by-label-data/Config/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
  
  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped      
  
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'  
  
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    restart: unless-stopped
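
For Prometheus to actually collect metrics from the two exporters, the prometheus.yml from earlier needs matching scrape jobs. Here's a minimal sketch, assuming all services share the stack's default Docker network and are therefore reachable by service name:

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  # Node exporter serves host metrics on port 9100.
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']

  # cAdvisor serves container metrics on port 8080.
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']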

Once this stack is up and running, don't forget to configure Prometheus as the data source in Grafana. Simply navigate to Grafana's data source page and add the Prometheus instance; from inside the stack it is reachable at http://prometheus:9090.
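
If you prefer configuration as code, Grafana can also pick up the data source automatically from a provisioning file. A minimal sketch, assuming the file is mounted into the container under /etc/grafana/provisioning/datasources/:

# datasource.yml -- mount into /etc/grafana/provisioning/datasources/
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true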


📊 Creating a Grafana Dashboard

While setting up the backend is relatively easy, creating a meaningful dashboard in Grafana can be a bit more challenging. Fortunately, Grafana provides a large collection of predefined dashboards that you can import.

Two great dashboards to start with are the popular Node Exporter Full dashboard for host metrics and one of the cAdvisor/Docker dashboards for container metrics; both can be found in Grafana's dashboard library.

To import a dashboard, go to Grafana's dashboard page, select Import, and paste the dashboard URL or ID.


🧠 Final Thoughts

By integrating Grafana and Prometheus into my openmediavault setup, I’ve been able to gain deep insights into the performance of my server and Docker containers. While the setup process requires a bit of configuration, the result is well worth it, providing a powerful and flexible monitoring solution.

If you’re looking to enhance your system monitoring, consider using Grafana and Prometheus with Docker. With the added benefit of cAdvisor and Node exporter, you’ll have real-time statistics at your fingertips, ensuring you can quickly respond to any issues in your environment.

Happy monitoring!
