- 1. General Setup and Configuration
- 2. Import Dashboards
- 3. Enable Solr Metrics
- Next steps
1. General Setup and Configuration
General setup and configuration for Prometheus and Grafana occurs in five major steps, described in the subsections that follow.
Important: Prometheus and Grafana support is available in Fusion 5.0.2+.
1.1. Configure the microservice values.yaml files
Prometheus is enabled for individual Fusion 5 microservices by configuring the service’s values.yaml file. Each microservice that supports exposing Prometheus metrics will have a pod.annotations field in its Helm template.
At a minimum, the values.yaml file must contain the following values to enable the use of Prometheus with the microservice:
pod:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "<port>"
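After a service has been upgraded with these values, one quick way to confirm that the annotations actually landed on its pods is to print them with kubectl. This is a minimal sketch; replace <namespace> with your namespace:
kubectl get pods --namespace <namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.prometheus\.io/scrape}{"\t"}{.metadata.annotations.prometheus\.io/port}{"\n"}{end}'
Each annotated pod should show "true" and the port you configured; pods that print empty columns will not be scraped.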
1.2. Install Prometheus into your namespace
1.2.1. Create a prometheus-values.yaml file
At this time, there are no custom Helm charts for either Prometheus or Grafana. Their respective stable charts can be used.
Installation of Prometheus into your namespace is as simple as declaring the name, namespace, and values.yaml file you wish to use for each installation. Grafana data sources are configured within the Grafana application, so you do not need to provide data source configuration to the Helm chart.
For spring-boot services, use the following prometheus-values.yaml file, replacing <size> with the appropriate disk size:
alertmanager:
  enabled: false
alertmanagerFiles:
  alertmanager.yml: ""
kubeStateMetrics:
  enabled: true
nodeExporter:
  enabled: false
pushgateway:
  enabled: true
server:
  replicaCount: 3
  statefulSet:
    enabled: true
  retention: 120h
  persistentVolume:
    size: <size>Gi
  global:
    scrape_interval: 5s
    scrape_timeout: 3s
serverFiles:
  prometheus.yml:
    scrape_configs:
      - job_name: 'spring-services'
        metrics_path: '/actuator/prometheus'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance]
            action: replace
            target_label: instance
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name
          - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_component]
            action: replace
            target_label: service
The prometheus-values.yaml above declares that the /actuator/prometheus endpoint will be scraped for all pods that carry the prometheus.io/scrape: "true" annotation. The endpoint will be accessed via the port defined by the prometheus.io/port annotation, as established in the Fusion 5 microservices values.yaml file.
Other declared meta attributes for metrics are included as labels. This includes the namespace, pod/instance name, service name, and component name.
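For example, once these relabel rules are applied, a series scraped from a Spring Boot service might look like the following. http_server_requests_seconds_count is a standard Micrometer metric name; the label values shown here are purely illustrative:
http_server_requests_seconds_count{service="query-pipeline", kubernetes_namespace="<namespace>", kubernetes_pod="query-pipeline-5d8f7c9b4d-x2x9z", kubernetes_name="query-pipeline"} 42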
Additional configuration options available to the stable/prometheus Helm chart can be found on the Prometheus Helm chart GitHub repo.
1.2.2. Install Prometheus
Install Prometheus into your namespace using the following Helm command, replacing <namespace> with your namespace:
helm install <namespace>-prometheus --namespace <namespace> -f prometheus-values.yaml stable/prometheus
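Once the release is up, you can confirm that Prometheus is running and discovering the annotated pods. The sketch below assumes the chart's default labels and server port, and the server service name that results from the release name used above; it port-forwards the Prometheus server and summarizes target health via the standard /api/v1/targets endpoint:
kubectl get pods --namespace <namespace> -l app=prometheus
kubectl port-forward --namespace <namespace> svc/<namespace>-prometheus-server 9090:80 &
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"' | sort | uniq -c
Targets reported as "up" are being scraped successfully; any "down" targets usually indicate a missing annotation or an incorrect port.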
1.3. Install Grafana into your namespace
After Prometheus has been installed into your namespace, install Grafana into your namespace using a similar process.
1.3.1. Create a grafana-values.yaml file
To begin, create a grafana-values.yaml file:
server:
  ingress:
    enabled: true
  service:
    type: ClusterIP
  persistence:
    enabled: true
    type: pvc
1.3.2. Install Grafana
After creating this file, run the following helm command to install it:
helm install --name <namespace>-grafana --namespace <namespace> -f grafana-values.yaml stable/grafana
Verify the helm command has finished rolling out Grafana by checking that the pod has a "ready" state: kubectl get pods --namespace <namespace>
Retrieve the default admin password:
kubectl get secret --namespace <namespace> <namespace>-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
This command decodes the secret and prints the admin password in plain text.
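If you prefer to keep the password in a shell variable for later use (for example, with the Grafana HTTP API calls shown later), a small sketch; the variable name is just a local convention:
GRAFANA_PASSWORD=$(kubectl get secret --namespace <namespace> <namespace>-grafana -o jsonpath="{.data.admin-password}" | base64 --decode)
echo "$GRAFANA_PASSWORD"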
1.4. Expose Grafana as a load balancer
Expose Grafana as a load balancer using the following command:
kubectl expose deployment <namespace>-grafana --type=LoadBalancer --name=grafana
You can use kubectl get services --namespace <namespace> to determine when the load balancer setup is complete and what its IP address is. Once complete, you are ready to begin using Grafana.
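To grab just the external address once it has been assigned, a minimal sketch, assuming the service name used in the expose command above (on some cloud providers the value appears under .hostname rather than .ip):
kubectl get service grafana --namespace <namespace> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'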
Go to your browser and enter http://<loadbalancerPublicIP>:3000. Enter the username admin@localhost and the password that was returned in the previous step.
Tip: After successfully logging in, create a new administrative user with a more intuitive password.
1.5. Configure the Prometheus datasource
To begin using the Prometheus datasource with Grafana, navigate to the gear icon and select Data Sources.
Click Add Data Source and select Prometheus as the datasource type. Ensure the Default switch is toggled on. You will be directed to enter the URL for the Prometheus server. Enter http://<namespace>-prometheus-server.
Configure any additional fields as desired, then click Save and Test.
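As an optional command-line check, Grafana's standard HTTP API can list the configured data sources. This sketch substitutes the admin credentials and load balancer address from the earlier steps:
curl -s -u "<adminUser>:<adminPassword>" "http://<loadbalancerPublicIP>:3000/api/datasources"
The response should include an entry of type "prometheus" pointing at the URL you just saved.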
2. Import Dashboards
There are several JSON files that represent out-of-the-box dashboards that can be used with Grafana. To import them, click on the plus icon on the left side of the screen and click Import.
After importing, the dashboards are located on the home page under Dashboards.
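If you would rather script the import than use the UI, Grafana also accepts dashboards through its HTTP API. The sketch below posts one dashboard file to the /api/dashboards/db endpoint; fusion-dashboard.json is a placeholder name for whichever of the provided JSON files you are importing, and the credentials and address follow the earlier steps. Dashboards exported with template inputs may still need the Import UI, which resolves those inputs:
curl -s -X POST "http://<loadbalancerPublicIP>:3000/api/dashboards/db" \
  -u "<adminUser>:<adminPassword>" \
  -H "Content-Type: application/json" \
  -d "{\"overwrite\": true, \"dashboard\": $(cat fusion-dashboard.json)}"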
3. Enable Solr Metrics
Create a file named solr_exporter.yaml with the following contents:
exporter:
  enabled: true
  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9983"
      prometheus.io/path: "/metrics"
Regenerate the templated Kubernetes file using the following command from the Solr Helm chart folder:
helm template --name "<namespace>-solr" . --values solr_exporter.yaml > solr.yaml
When the process is complete, run the following command:
kubectl apply -f solr.yaml
Kubernetes will start a new service for the solr-exporter:
service "solr-exporter" created
deployment.apps "solr-exporter" created
You can now query Solr metrics in Prometheus.
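As a quick end-to-end check, you can query one of the exporter's metrics through the Prometheus HTTP API. The sketch below assumes the Prometheus server is still port-forwarded to localhost:9090 as in the earlier verification step, and uses solr_ping as a representative metric name exposed by the Solr exporter's default configuration:
curl -s "http://localhost:9090/api/v1/query?query=solr_ping"
A non-empty result set indicates that Prometheus is scraping the solr-exporter service.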
Next steps
At this point you can begin building dashboards that query the Prometheus data scraped from Fusion 5 services configured with the appropriate pod annotations, as described earlier in this document.