I built a simple observability playground using Docker Compose so people can run it locally and understand how the pieces may fit together in a real-world environment.
Prometheus captures metrics such as CPU, memory, file system, and network usage from single machines, entire clusters, or individual processes like a container running a Node.js app.
For that, you will need to install an exporter that exposes those metrics, usually on a /metrics route on some port. For example, the node-exporter for Unix-based machines, like Linux, or the windows-exporter for Windows machines. The process of collecting those metrics is called scraping, and each resource that exposes this metrics route is configured as a job.
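Below is a minimal sketch of what such a Compose file could look like; the image tags, service names, and mounted paths are assumptions for illustration, not the exact contents of the playground repository.

```yaml
# docker-compose.yml (illustrative sketch, adjust paths and tags to your setup)
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      # assumed location of the Prometheus config in the repository
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    volumes:
      # assumed location of the Grafana provisioning files in the repository
      - ./grafana/provisioning:/etc/grafana/provisioning
    ports:
      - "3000:3000"

  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"
```

Because Compose puts these services on a shared default network, Prometheus can reach the exporter at node-exporter:9100, using the service name as a hostname.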
Grafana is a service that lets you create dashboards to visualize all kinds of metrics. It is usually used alongside Prometheus, but it can read from many different data sources.
You can create a prometheus.yml inside /etc/prometheus/ and have Prometheus capture its own metrics with the following configuration.
```yaml
scrape_configs:
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9090"]
```
You can also create a prometheus.yml file for Grafana inside /etc/grafana/provisioning/datasources/ to register Prometheus as a data source.
```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    jsonData:
      httpMethod: POST
      manageAlerts: true
      prometheusType: Prometheus
      prometheusVersion: 2.44.0
      cacheLevel: "High"
      disableRecordingRules: false
      incrementalQueryOverlapWindow: 10m
```
This file will be automatically loaded by Grafana's provisioning mechanism. You can also configure Prometheus as a data source directly in the Grafana UI.
After you have configured a data source, you can start creating dashboards with that data, but you don't need to start from scratch.
A simple "grafana prometheus dashboard" Google search will lead you to the Prometheus 2.0 Overview dashboard by the Grafana Labs.
You can simply click "Copy ID to clipboard" and paste the dashboard ID in your Grafana.
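If you prefer to keep dashboards in the repository instead of importing them by hand, Grafana can also provision them from disk. This is a minimal sketch, assuming the dashboard JSON files live in a folder mounted at /var/lib/grafana/dashboards; the provider name and paths are assumptions, not the playground's exact setup.

```yaml
# e.g. /etc/grafana/provisioning/dashboards/dashboards.yml (assumed file name)
apiVersion: 1

providers:
  - name: "default"        # assumed provider name
    orgId: 1
    type: file
    disableDeletion: false
    options:
      # assumed mount point for the dashboard JSON files
      path: /var/lib/grafana/dashboards
```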