gatus/watchdog/watchdog.go
Commit 6a9cbb1728 by Bo-Yi Wu: feat(metrics): add support for custom labels in Prometheus metrics (#979)
* feat: add dynamic labels support for Prometheus metrics

- Add `toBoolPtr` function to convert a bool to a bool pointer
- Add `contains` function to check if a key exists in a slice
- Add `GetMetricLabels` method to `Config` struct to return unique metric labels from enabled endpoints
- Change file permission notation from `0644` to `0o644` in `config_test.go`
- Add `Labels` field to `Endpoint` struct for key-value pairs
- Initialize Prometheus metrics with dynamic labels from configuration
- Modify `PublishMetricsForEndpoint` to include dynamic labels
- Add test for `GetMetricLabels` method in `config_test.go`
- Update `watchdog` to pass labels to monitoring and execution functions

Signed-off-by: appleboy <appleboy.tw@gmail.com>

* refactor: refactor pointer conversion utility and update related tests

- Rename the `toBoolPtr` function to a generic `toPtr` function (see the sketch below)
- Update tests to use the new `toPtr` function instead of `toBoolPtr`

Signed-off-by: appleboy <appleboy.tw@gmail.com>
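
A generic pointer helper of this kind is usually just a few lines. As a minimal sketch (only the name `toPtr` comes from the commit; the body is assumed):

```go
// toPtr returns a pointer to the given value, generalizing the old
// bool-only toBoolPtr helper. (Assumed implementation; the actual body may differ.)
func toPtr[T any](value T) *T {
    return &value
}
```

This lets call sites write `toPtr(true)` or `toPtr(42)` instead of maintaining one conversion helper per type.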

* refactor: refactor utility functions and improve test coverage

- Move `toPtr` and `contains` utility functions to a new file `util.go`

Signed-off-by: appleboy <appleboy.tw@gmail.com>

* missing labels parameter

* refactor: reorder parameters in metrics-related functions and tests

- Reorder parameters in `PublishMetricsForEndpoint` function
- Update test cases to match the new parameter order in `PublishMetricsForEndpoint`
- Reorder parameters in `monitor` function
- Adjust `monitor` function calls to match the new parameter order
- Reorder parameters in `execute` function call to `PublishMetricsForEndpoint`

Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>

* Update main.go

* Update config/config.go

* docs: improve documentation formatting, examples, and readability

- Add multiple blank lines for spacing in the README file
- Fix formatting issues in markdown tables
- Correct deprecated formatting for Teams alerts
- Replace single quotes with double quotes in JSON examples
- Add new sections and examples for various configurations and endpoints
- Improve readability and consistency in the documentation
- Update links and references to examples and configurations

Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>

* docs: enhance custom labels support in Prometheus metrics

- Add a section for custom labels in the README
- Include an example configuration for custom labels in Prometheus metrics initialization

Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>

* refactor: rename and refactor metric labels to use ExtraLabels

- Rename the endpoint metric labels field from `Labels` to `ExtraLabels` and update its YAML tag accordingly
- Update code and tests to use `ExtraLabels` instead of `Labels` for metrics
- Replace `GetMetricLabels` with `GetUniqueExtraMetricLabels` and adjust usages throughout the codebase (see the sketch below)
- Ensure all metric publishing and monitoring functions accept and use the new `extraLabels` naming and semantics
- Update tests to verify correct extraction and handling of `ExtraLabels` for enabled endpoints

Signed-off-by: appleboy <appleboy.tw@gmail.com>
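
A method like the one described above might look roughly as follows. This is a hypothetical sketch: it assumes `ExtraLabels` is a `map[string]string`, reuses the `contains` helper from `util.go` mentioned earlier (assumed signature), and adds sorting (also an assumption) to keep the label order deterministic:

```go
// GetUniqueExtraMetricLabels returns the unique extra metric label keys across all
// enabled endpoints. (Hypothetical sketch; assumes a map-typed ExtraLabels field
// and a sort import.)
func (config *Config) GetUniqueExtraMetricLabels() []string {
    var labels []string
    for _, ep := range config.Endpoints {
        if !ep.IsEnabled() {
            continue // disabled endpoints do not contribute labels
        }
        for key := range ep.ExtraLabels {
            if !contains(labels, key) {
                labels = append(labels, key)
            }
        }
    }
    // Sorting is assumed here: Prometheus metric vectors need a stable label set,
    // and map iteration order in Go is randomized.
    sort.Strings(labels)
    return labels
}
```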

* refactor: refactor parameter order for monitor and execute for consistency

- Change the order of parameters for the `monitor` and `execute` functions to group `extraLabels` consistently as the last argument before the context.
- Update all relevant function calls and signatures to reflect the new parameter order.
- Replace usage of `labels` with `extraLabels` for clarity and consistency.

Signed-off-by: appleboy <appleboy.tw@gmail.com>

* test: improve initialization and labeling of Prometheus metrics

- Add a test to verify that Prometheus metrics initialize correctly with extra labels.
- Ensure metrics variables are properly initialized and not nil.
- Check that `WithLabelValues` accepts both default and extra labels without causing a panic.

Signed-off-by: appleboy <appleboy.tw@gmail.com>

* test: improve Prometheus metrics testing for extra label handling

- Remove a redundant test for `WithLabelValues` label length.
- Add a new test to verify that `extraLabels` are correctly included in exported Prometheus metrics (see the test sketch below).

Signed-off-by: appleboy <appleboy.tw@gmail.com>
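
A test along those lines typically gathers from an isolated registry and scans the exported label pairs. A rough, self-contained sketch (the metric and label names are illustrative, not necessarily Gatus's actual ones):

```go
package metrics

import (
    "testing"

    "github.com/prometheus/client_golang/prometheus"
)

func TestExtraLabelsAreExported(t *testing.T) {
    registry := prometheus.NewRegistry()
    counter := prometheus.NewCounterVec(
        prometheus.CounterOpts{Name: "gatus_results_total", Help: "Number of results per endpoint"},
        []string{"key", "environment"}, // "environment" stands in for an extra label
    )
    registry.MustRegister(counter)
    counter.WithLabelValues("core_website", "production").Inc()
    families, err := registry.Gather()
    if err != nil {
        t.Fatal(err)
    }
    // Walk the gathered metric families and confirm the extra label pair is present
    for _, family := range families {
        for _, metric := range family.GetMetric() {
            for _, labelPair := range metric.GetLabel() {
                if labelPair.GetName() == "environment" && labelPair.GetValue() == "production" {
                    return // extra label was exported as expected
                }
            }
        }
    }
    t.Error("expected extra label to be present in exported metrics")
}
```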

* refactor: refactor metrics to support custom Prometheus registries

- Refactor metrics initialization to accept a custom Prometheus registry, defaulting to the global registry when nil (see the sketch below)
- Replace `promauto` with direct metric construction and explicit registration
- Update tests to use dedicated, isolated registries instead of the default global registry

Signed-off-by: appleboy <appleboy.tw@gmail.com>
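
The shape of such an initializer is roughly the following. This is a sketch under assumed names: `InitializePrometheusMetrics` and the metric definition are illustrative, and only the nil-defaulting behavior comes from the commit:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// InitializePrometheusMetrics builds a metric vector with the extra label keys
// appended to the default labels and registers it on the provided registry.
// A nil registry falls back to the global default registerer.
// (Sketch; the actual function in the metrics package may differ.)
func InitializePrometheusMetrics(registry prometheus.Registerer, extraLabels []string) *prometheus.CounterVec {
    if registry == nil {
        registry = prometheus.DefaultRegisterer
    }
    resultTotal := prometheus.NewCounterVec(
        prometheus.CounterOpts{Name: "gatus_results_total", Help: "Number of results per endpoint"},
        append([]string{"key", "group", "name", "type", "success"}, extraLabels...),
    )
    registry.MustRegister(resultTotal)
    return resultTotal
}
```

Constructing metrics with `prometheus.NewCounterVec` and registering them explicitly, rather than through `promauto`, is what allows tests to pass their own `prometheus.NewRegistry()` and stay isolated from global state.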

* Revert README.md to a previous version

* docs: document support for custom metric labels in endpoints

- Add documentation section explaining support for custom labels on metrics
- Provide a YAML configuration example illustrating the new labels field for endpoints (see the example sketched below)
- Update table of contents to include the custom labels section

Signed-off-by: appleboy <appleboy.tw@gmail.com>
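
The documented example presumably resembles the following. The `extra-labels` key name is an assumption based on the renamed `ExtraLabels` field and its updated YAML tag, and the values are illustrative:

```yaml
endpoints:
  - name: website
    url: "https://example.org"
    interval: 5m
    conditions:
      - "[STATUS] == 200"
    # Assumed key name for the ExtraLabels field; each pair would be exported
    # as an additional label on this endpoint's Prometheus metrics
    extra-labels:
      environment: production
      team: platform
```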

---------

Signed-off-by: appleboy <appleboy.tw@gmail.com>
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-authored-by: TwiN <twin@linux.com>
Committed: 2025-08-05 12:26:50 -04:00


package watchdog

import (
    "context"
    "sync"
    "time"

    "github.com/TwiN/gatus/v5/alerting"
    "github.com/TwiN/gatus/v5/config"
    "github.com/TwiN/gatus/v5/config/connectivity"
    "github.com/TwiN/gatus/v5/config/endpoint"
    "github.com/TwiN/gatus/v5/config/maintenance"
    "github.com/TwiN/gatus/v5/metrics"
    "github.com/TwiN/gatus/v5/storage/store"
    "github.com/TwiN/logr"
)

var (
    // monitoringMutex is used to prevent multiple endpoints from being evaluated at the same time.
    // Without this, conditions using response time may become inaccurate.
    monitoringMutex sync.Mutex

    ctx        context.Context
    cancelFunc context.CancelFunc
)

// Monitor loops over each endpoint and starts a goroutine to monitor each endpoint separately
func Monitor(cfg *config.Config) {
    ctx, cancelFunc = context.WithCancel(context.Background())
    extraLabels := cfg.GetUniqueExtraMetricLabels()
    for _, endpoint := range cfg.Endpoints {
        if endpoint.IsEnabled() {
            // To prevent multiple requests from running at the same time, we'll wait a little before each iteration
            time.Sleep(777 * time.Millisecond)
            go monitor(endpoint, cfg.Alerting, cfg.Maintenance, cfg.Connectivity, cfg.DisableMonitoringLock, cfg.Metrics, extraLabels, ctx)
        }
    }
    for _, externalEndpoint := range cfg.ExternalEndpoints {
        // Check if the external endpoint is enabled and is using heartbeat.
        // If the external endpoint does not use heartbeat, then it does not need to be monitored periodically, because
        // alerting is checked every time an external endpoint is pushed to Gatus, unlike normal endpoints.
        if externalEndpoint.IsEnabled() && externalEndpoint.Heartbeat.Interval > 0 {
            go monitorExternalEndpointHeartbeat(externalEndpoint, cfg.Alerting, cfg.Maintenance, cfg.Connectivity, cfg.DisableMonitoringLock, cfg.Metrics, ctx, extraLabels)
        }
    }
}

// monitor a single endpoint in a loop
func monitor(ep *endpoint.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, extraLabels []string, ctx context.Context) {
    // Run it immediately on start
    execute(ep, alertingConfig, maintenanceConfig, connectivityConfig, disableMonitoringLock, enabledMetrics, extraLabels)
    // Loop for the next executions
    ticker := time.NewTicker(ep.Interval)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            logr.Warnf("[watchdog.monitor] Canceling current execution of group=%s; endpoint=%s; key=%s", ep.Group, ep.Name, ep.Key())
            return
        case <-ticker.C:
            execute(ep, alertingConfig, maintenanceConfig, connectivityConfig, disableMonitoringLock, enabledMetrics, extraLabels)
        }
    }
    // Just in case somebody wandered all the way to here and wonders, "what about ExternalEndpoints?"
    // Alerting is checked every time an external endpoint is pushed to Gatus, so they're not monitored
    // periodically like they are for normal endpoints.
}
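
// execute evaluates the health of a single endpoint, publishes metrics when enabled,
// persists the result, and handles alerting unless a maintenance window is active.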
func execute(ep *endpoint.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, extraLabels []string) {
    if !disableMonitoringLock {
        // By placing the lock here, we prevent multiple endpoints from being monitored at the exact same time, which
        // could cause performance issues and return inaccurate results
        monitoringMutex.Lock()
        defer monitoringMutex.Unlock()
    }
    // If there's a connectivity checker configured, check if Gatus has internet connectivity
    if connectivityConfig != nil && connectivityConfig.Checker != nil && !connectivityConfig.Checker.IsConnected() {
        logr.Infof("[watchdog.execute] No connectivity; skipping execution")
        return
    }
    logr.Debugf("[watchdog.execute] Monitoring group=%s; endpoint=%s; key=%s", ep.Group, ep.Name, ep.Key())
    result := ep.EvaluateHealth()
    if enabledMetrics {
        metrics.PublishMetricsForEndpoint(ep, result, extraLabels)
    }
    UpdateEndpointStatuses(ep, result)
    if logr.GetThreshold() == logr.LevelDebug && !result.Success {
        logr.Debugf("[watchdog.execute] Monitored group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s; body=%s", ep.Group, ep.Name, ep.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond), result.Body)
    } else {
        logr.Infof("[watchdog.execute] Monitored group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s", ep.Group, ep.Name, ep.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond))
    }
    inEndpointMaintenanceWindow := false
    for _, maintenanceWindow := range ep.MaintenanceWindows {
        if maintenanceWindow.IsUnderMaintenance() {
            logr.Debug("[watchdog.execute] Under endpoint maintenance window")
            inEndpointMaintenanceWindow = true
        }
    }
    if !maintenanceConfig.IsUnderMaintenance() && !inEndpointMaintenanceWindow {
        // TODO: Consider moving this after the monitoring lock is unlocked? I mean, how much noise can a single alerting provider cause...
        HandleAlerting(ep, result, alertingConfig)
    } else {
        logr.Debug("[watchdog.execute] Not handling alerting because we are currently in a maintenance window")
    }
    logr.Debugf("[watchdog.execute] Waiting for interval=%s before monitoring group=%s endpoint=%s (key=%s) again", ep.Interval, ep.Group, ep.Name, ep.Key())
}
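
// monitorExternalEndpointHeartbeat periodically verifies that an external endpoint
// configured with a heartbeat has pushed a result within its heartbeat interval.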
func monitorExternalEndpointHeartbeat(ee *endpoint.ExternalEndpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, ctx context.Context, extraLabels []string) {
    ticker := time.NewTicker(ee.Heartbeat.Interval)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            logr.Warnf("[watchdog.monitorExternalEndpointHeartbeat] Canceling current execution of group=%s; endpoint=%s; key=%s", ee.Group, ee.Name, ee.Key())
            return
        case <-ticker.C:
            executeExternalEndpointHeartbeat(ee, alertingConfig, maintenanceConfig, connectivityConfig, disableMonitoringLock, enabledMetrics, extraLabels)
        }
    }
}
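
// executeExternalEndpointHeartbeat checks the store for a result newer than the heartbeat
// interval and records a failed result for the external endpoint if none was received.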
func executeExternalEndpointHeartbeat(ee *endpoint.ExternalEndpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, connectivityConfig *connectivity.Config, disableMonitoringLock bool, enabledMetrics bool, extraLabels []string) {
    if !disableMonitoringLock {
        // By placing the lock here, we prevent multiple endpoints from being monitored at the exact same time, which
        // could cause performance issues and return inaccurate results
        monitoringMutex.Lock()
        defer monitoringMutex.Unlock()
    }
    // If there's a connectivity checker configured, check if Gatus has internet connectivity
    if connectivityConfig != nil && connectivityConfig.Checker != nil && !connectivityConfig.Checker.IsConnected() {
        logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] No connectivity; skipping execution")
        return
    }
    logr.Debugf("[watchdog.monitorExternalEndpointHeartbeat] Checking heartbeat for group=%s; endpoint=%s; key=%s", ee.Group, ee.Name, ee.Key())
    convertedEndpoint := ee.ToEndpoint()
    hasReceivedResultWithinHeartbeatInterval, err := store.Get().HasEndpointStatusNewerThan(ee.Key(), time.Now().Add(-ee.Heartbeat.Interval))
    if err != nil {
        logr.Errorf("[watchdog.monitorExternalEndpointHeartbeat] Failed to check if endpoint has received a result within the heartbeat interval: %s", err.Error())
        return
    }
    if hasReceivedResultWithinHeartbeatInterval {
        // If we received a result within the heartbeat interval, we don't want to create a successful result, so we
        // skip the rest. We don't have to worry about alerting or metrics, because if the previous heartbeat failed
        // while this one succeeds, it implies that there was a new result pushed, and that result being pushed
        // should've resolved the alert.
        logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] Checked heartbeat for group=%s; endpoint=%s; key=%s; success=%v; errors=%d", ee.Group, ee.Name, ee.Key(), hasReceivedResultWithinHeartbeatInterval, 0)
        return
    }
    // All code after this point assumes the heartbeat failed
    result := &endpoint.Result{
        Timestamp: time.Now(),
        Success:   false,
        Errors:    []string{"heartbeat: no update received within " + ee.Heartbeat.Interval.String()},
    }
    if enabledMetrics {
        metrics.PublishMetricsForEndpoint(convertedEndpoint, result, extraLabels)
    }
    UpdateEndpointStatuses(convertedEndpoint, result)
    logr.Infof("[watchdog.monitorExternalEndpointHeartbeat] Checked heartbeat for group=%s; endpoint=%s; key=%s; success=%v; errors=%d; duration=%s", ee.Group, ee.Name, ee.Key(), result.Success, len(result.Errors), result.Duration.Round(time.Millisecond))
    inEndpointMaintenanceWindow := false
    for _, maintenanceWindow := range ee.MaintenanceWindows {
        if maintenanceWindow.IsUnderMaintenance() {
            logr.Debug("[watchdog.monitorExternalEndpointHeartbeat] Under endpoint maintenance window")
            inEndpointMaintenanceWindow = true
        }
    }
    if !maintenanceConfig.IsUnderMaintenance() && !inEndpointMaintenanceWindow {
        HandleAlerting(convertedEndpoint, result, alertingConfig)
        // Sync the failure/success counters back to the external endpoint
        ee.NumberOfSuccessesInARow = convertedEndpoint.NumberOfSuccessesInARow
        ee.NumberOfFailuresInARow = convertedEndpoint.NumberOfFailuresInARow
    } else {
        logr.Debug("[watchdog.monitorExternalEndpointHeartbeat] Not handling alerting because we are currently in a maintenance window")
    }
    logr.Debugf("[watchdog.monitorExternalEndpointHeartbeat] Waiting for interval=%s before checking heartbeat for group=%s endpoint=%s (key=%s) again", ee.Heartbeat.Interval, ee.Group, ee.Name, ee.Key())
}

// UpdateEndpointStatuses updates the slice of endpoint statuses
func UpdateEndpointStatuses(ep *endpoint.Endpoint, result *endpoint.Result) {
    if err := store.Get().Insert(ep, result); err != nil {
        logr.Errorf("[watchdog.UpdateEndpointStatuses] Failed to insert result in storage: %s", err.Error())
    }
}

// Shutdown stops monitoring all endpoints
func Shutdown(cfg *config.Config) {
    // Disable all the old HTTP connections
    for _, ep := range cfg.Endpoints {
        ep.Close()
    }
    cancelFunc()
}