Compare commits

...

19 Commits

| Author | SHA1 | Message | Date |
|:------ |:---- |:------- |:---- |
| TwiN | 31bf2aeb80 | Update TwiN/health to v1.1.0 | 2021-11-15 20:11:13 -05:00 |
| TwiN | 787f6f0d74 | Add feedback email address | 2021-11-12 00:32:11 -05:00 |
| TwiN | 17a431321c | Pass http.NoBody instead of nil as body | 2021-11-11 00:14:00 -05:00 |
| TwiN | 05e9add16d | Regenerate static assets | 2021-11-09 00:16:48 -05:00 |
| TwiN | c4ef56511d | Update dependencies | 2021-11-09 00:07:44 -05:00 |
| TwiN | cfa2c8ef6f | Minor updates | 2021-11-09 00:06:41 -05:00 |
| TwiN | f36b6863ce | Minor update | 2021-11-08 23:54:06 -05:00 |
| TwiN | 24482cf7a0 | Fix icon_url for Mattermost | 2021-11-08 21:07:16 -05:00 |
| TwiN | d661a0ea6d | Add logo.png in .github/assets | 2021-11-08 21:05:16 -05:00 |
| TwiN | a0ec6941ab | Display number of days rather than hours if >72h | 2021-11-08 20:57:58 -05:00 |
| TwiN | 5e711fb3b9 | Use http.Error instead of writer.Write | 2021-11-08 20:56:35 -05:00 |
| TwiN | ab66e7ec8a | Fix badge examples | 2021-11-08 02:22:43 -05:00 |
| TwiN | 08aba6cd51 | Minor updates | 2021-11-04 21:40:05 -04:00 |
| TwiN | d3805cd77a | Fix #197; Fix #198: Deprecate storage.file in favor of storage.path and deprecate persistence with memory storage type | 2021-11-04 21:33:13 -04:00 |
| TwiN | dd70136e6c | Omit empty hostname and errors field | 2021-11-03 22:18:23 -04:00 |
| TwiN | a94c480c22 | Fix typo in comment | 2021-11-03 22:17:58 -04:00 |
| TwiN | 10fd4ecd6b | Minor fixes | 2021-11-03 19:48:58 -04:00 |
| TwiN | 9287e2f9e2 | Move store initialization to store package (This will allow importing storage.Config without importing every SQL drivers in the known universe) | 2021-10-28 19:35:46 -04:00 |
| TwiN | 257f859825 | Rename getPagerDutyIntegrationKeyForGroup to getIntegrationKeyForGroup | 2021-10-27 23:16:05 -04:00 |
50 changed files with 2470 additions and 3674 deletions

View File

@@ -1,6 +1,6 @@
storage:
type: postgres
file: "postgres://username:password@postgres:5432/gatus?sslmode=disable"
path: "postgres://username:password@postgres:5432/gatus?sslmode=disable"
endpoints:
- name: back-end

View File

@@ -1,6 +1,6 @@
storage:
type: sqlite
file: /data/data.db
path: /data/data.db
endpoints:
- name: back-end

BIN  .github/assets/logo.png vendored Normal file

Binary file not shown. (Size: 51 KiB)

View File

@@ -17,7 +17,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.16
go-version: 1.17
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Build binary to make sure it works
@@ -25,9 +25,9 @@ jobs:
- name: Test
# We're using "sudo" because one of the tests leverages ping, which requires super-user privileges.
# As for the 'env "PATH=$PATH" "GOROOT=$GOROOT"', we need it to use the same "go" executable that
# was configured by the "Set up Go 1.15" step (otherwise, it'd use sudo's "go" executable)
# was configured by the "Set up Go" step (otherwise, it'd use sudo's "go" executable)
run: sudo env "PATH=$PATH" "GOROOT=$GOROOT" go test -mod vendor ./... -race -coverprofile=coverage.txt -covermode=atomic
- name: Codecov
uses: codecov/codecov-action@v1.5.2
uses: codecov/codecov-action@v2.1.0
with:
file: ./coverage.txt
files: ./coverage.txt

View File

@@ -25,6 +25,7 @@ docker run -p 8080:8080 --name gatus twinproduction/gatus
For more details, see [Usage](#usage)
</details>
Have any feedback or want to share your good/bad experience with Gatus? Feel free to email me at [feedback@gatus.io](mailto:feedback@gatus.io)
## Table of Contents
- [Why Gatus?](#why-gatus)
@@ -103,7 +104,7 @@ The main features of Gatus are:
- **Alerting**: While having a pretty visual dashboard is useful to keep track of the state of your application(s), you probably don't want to stare at it all day. Thus, notifications via Slack, Mattermost, Messagebird, PagerDuty, Twilio and Teams are supported out of the box with the ability to configure a custom alerting provider for any needs you might have, whether it be a different provider or a custom application that manages automated rollbacks.
- **Metrics**
- **Low resource consumption**: As with most Go applications, the resource footprint that this application requires is negligibly small.
- **[Badges](#badges)**: ![Uptime 7d](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/7d/badge.svg) ![Response time 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/24h/badge.svg)
- **[Badges](#badges)**: ![Uptime 7d](https://status.twin.sh/api/v1/endpoints/core_blog-external/uptimes/7d/badge.svg) ![Response time 24h](https://status.twin.sh/api/v1/endpoints/core_blog-external/response-times/24h/badge.svg)
## Usage
@@ -235,24 +236,29 @@ Here are some examples of conditions you can use:
| Parameter | Description | Default |
|:------------------ |:-------------------------------------------------------------------------------------- |:-------------- |
| `storage` | Storage configuration | `{}` |
| `storage.file` | Path to persist the data in. If the type is `memory`, data is persisted on interval. | `""` |
| `storage.type` | Type of storage. Valid types: `memory`, `sqlite`, `postgres` (ALPHA). | `"memory"` |
| `storage.path` | Path to persist the data in. Only supported for types `sqlite` and `postgres`. | `""` |
| `storage.type` | Type of storage. Valid types: `memory`, `sqlite`, `postgres`. | `"memory"` |
- If `storage.type` is `memory` (default) and `storage.file` is set to a non-blank value.
Furthermore, the data is periodically persisted, but everything remains in memory.
- If `storage.type` is `sqlite`, `storage.file` must not be blank:
- If `storage.type` is `memory` (default):
```yaml
# Note that this is the default value, and you can omit the storage configuration altogether to achieve the same result.
# Because the data is stored in memory, the data will not survive a restart.
storage:
type: memory
```
- If `storage.type` is `sqlite`, `storage.path` must not be blank:
```yaml
storage:
type: sqlite
file: data.db
path: data.db
```
See [examples/docker-compose-sqlite-storage](.examples/docker-compose-sqlite-storage) for an example.
- If `storage.type` is `postgres`, `storage.file` must be the connection URL:
- If `storage.type` is `postgres`, `storage.path` must be the connection URL:
```yaml
storage:
type: postgres
file: "postgres://user:password@127.0.0.1:5432/gatus?sslmode=disable"
path: "postgres://user:password@127.0.0.1:5432/gatus?sslmode=disable"
```
See [examples/docker-compose-postgres-storage](.examples/docker-compose-postgres-storage) for an example.
@@ -937,8 +943,8 @@ endpoints:
- "[CONNECTED] == true"
```
Placeholders `[STATUS]` and `[BODY]` as well as the fields `endpoints[].body`, `endpoints[].insecure`,
`endpoints[].headers`, `endpoints[].method` and `endpoints[].graphql` are not supported for TCP endpoints.
Placeholders `[STATUS]` and `[BODY]` as well as the fields `endpoints[].body`, `endpoints[].headers`,
`endpoints[].method` and `endpoints[].graphql` are not supported for TCP endpoints.
**NOTE**: `[CONNECTED] == true` does not guarantee that the endpoint itself is healthy - it only guarantees that there's
something at the given address listening to the given port, and that a connection to that address was successfully
@@ -991,7 +997,7 @@ endpoints:
url: "starttls://smtp.gmail.com:587"
interval: 30m
client:
timeout: 5s
timeout: 5s
conditions:
- "[CONNECTED] == true"
- "[CERTIFICATE_EXPIRATION] > 48h"
@@ -1006,7 +1012,7 @@ endpoints:
url: "tls://ldap.example.com:636"
interval: 30m
client:
timeout: 5s
timeout: 5s
conditions:
- "[CONNECTED] == true"
- "[CERTIFICATE_EXPIRATION] > 48h"
@@ -1124,9 +1130,9 @@ web:
### Badges
### Uptime
![Uptime 1h](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/1h/badge.svg)
![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/24h/badge.svg)
![Uptime 7d](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/7d/badge.svg)
![Uptime 1h](https://status.twin.sh/api/v1/endpoints/core_blog-external/uptimes/1h/badge.svg)
![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_blog-external/uptimes/24h/badge.svg)
![Uptime 7d](https://status.twin.sh/api/v1/endpoints/core_blog-external/uptimes/7d/badge.svg)
Gatus can automatically generate a SVG badge for one of your monitored endpoints.
This allows you to put badges in your individual applications' README or even create your own status page, if you
@@ -1151,15 +1157,15 @@ https://example.com/api/v1/endpoints/_frontend/uptimes/7d/badge.svg
```
Example:
```
![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/uptimes/24h/badge.svg)
![Uptime 24h](https://status.twin.sh/api/v1/endpoints/core_blog-external/uptimes/24h/badge.svg)
```
If you'd like to see a visual example of each badge available, you can simply navigate to the endpoint's detail page.
### Response time
![Response time 1h](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/1h/badge.svg)
![Response time 24h](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/24h/badge.svg)
![Response time 7d](https://status.twin.sh/api/v1/endpoints/core_website-external/response-times/7d/badge.svg)
![Response time 1h](https://status.twin.sh/api/v1/endpoints/core_blog-external/response-times/1h/badge.svg)
![Response time 24h](https://status.twin.sh/api/v1/endpoints/core_blog-external/response-times/24h/badge.svg)
![Response time 7d](https://status.twin.sh/api/v1/endpoints/core_blog-external/response-times/7d/badge.svg)
The endpoint to generate a badge is the following:
```
@@ -1183,7 +1189,7 @@ Specific endpoints can also be queried by using the following pattern:
```
/api/v1/endpoints/{group}_{endpoint}/statuses
```
Example: https://status.twin.sh/api/v1/endpoints/core_website-home/statuses
Example: https://status.twin.sh/api/v1/endpoints/core_blog-home/statuses
Gzip compression will be used if the `Accept-Encoding` HTTP header contains `gzip`.
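As the README excerpt above notes, responses are gzip-compressed only when the client advertises support via `Accept-Encoding`. A small client-side sketch in Go (the URL is the example from the excerpt; setting the header manually means the transport will not decompress automatically, so the sketch does it explicitly):

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://status.twin.sh/api/v1/endpoints/core_blog-home/statuses", http.NoBody)
	// Advertise gzip support explicitly, as described in the README excerpt.
	req.Header.Set("Accept-Encoding", "gzip")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var reader io.Reader = resp.Body
	if resp.Header.Get("Content-Encoding") == "gzip" {
		gz, err := gzip.NewReader(resp.Body)
		if err != nil {
			panic(err)
		}
		defer gz.Close()
		reader = gz
	}
	body, _ := io.ReadAll(reader)
	fmt.Println(string(body))
}
```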

View File

@@ -61,7 +61,7 @@ func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, al
Body: fmt.Sprintf(`{
"text": "",
"username": "gatus",
"icon_url": "https://raw.githubusercontent.com/TwiN/gatus/master/static/logo.png",
"icon_url": "https://raw.githubusercontent.com/TwiN/gatus/master/.github/assets/logo.png",
"attachments": [
{
"title": ":rescue_worker_helmet: Gatus",

View File

@@ -71,15 +71,15 @@ func (provider *AlertProvider) ToCustomAlertProvider(endpoint *core.Endpoint, al
"source": "%s",
"severity": "critical"
}
}`, provider.getPagerDutyIntegrationKeyForGroup(endpoint.Group), resolveKey, eventAction, message, endpoint.Name),
}`, provider.getIntegrationKeyForGroup(endpoint.Group), resolveKey, eventAction, message, endpoint.Name),
Headers: map[string]string{
"Content-Type": "application/json",
},
}
}
// getPagerDutyIntegrationKeyForGroup returns the appropriate pagerduty integration key for a given group
func (provider *AlertProvider) getPagerDutyIntegrationKeyForGroup(group string) string {
// getIntegrationKeyForGroup returns the appropriate pagerduty integration key for a given group
func (provider *AlertProvider) getIntegrationKeyForGroup(group string) string {
if provider.Overrides != nil {
for _, override := range provider.Overrides {
if group == override.Group {
@@ -87,10 +87,7 @@ func (provider *AlertProvider) getPagerDutyIntegrationKeyForGroup(group string)
}
}
}
if provider.IntegrationKey != "" {
return provider.IntegrationKey
}
return ""
return provider.IntegrationKey
}
// GetDefaultAlert returns the provider's default alert configuration

View File

@@ -161,7 +161,7 @@ func TestAlertProvider_ToCustomAlertProviderWithTriggeredAlertAndOverride(t *tes
}
}
func TestAlertProvider_getPagerDutyIntegrationKey(t *testing.T) {
func TestAlertProvider_getIntegrationKeyForGroup(t *testing.T) {
scenarios := []struct {
Name string
Provider AlertProvider
@@ -217,7 +217,7 @@ func TestAlertProvider_getPagerDutyIntegrationKey(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
if output := scenario.Provider.getPagerDutyIntegrationKeyForGroup(scenario.InputGroup); output != scenario.ExpectedOutput {
if output := scenario.Provider.getIntegrationKeyForGroup(scenario.InputGroup); output != scenario.ExpectedOutput {
t.Errorf("expected %s, got %s", scenario.ExpectedOutput, output)
}
})

View File

@@ -195,19 +195,10 @@ func validateStorageConfig(config *Config) error {
config.Storage = &storage.Config{
Type: storage.TypeMemory,
}
}
err := storage.Initialize(config.Storage)
if err != nil {
return err
}
// Remove all EndpointStatus that represent endpoints which no longer exist in the configuration
var keys []string
for _, endpoint := range config.Endpoints {
keys = append(keys, endpoint.Key())
}
numberOfEndpointStatusesDeleted := storage.Get().DeleteAllEndpointStatusesNotInKeys(keys)
if numberOfEndpointStatusesDeleted > 0 {
log.Printf("[config][validateStorageConfig] Deleted %d endpoint statuses because their matching endpoints no longer existed", numberOfEndpointStatusesDeleted)
} else {
if err := config.Storage.ValidateAndSetDefaults(); err != nil {
return err
}
}
return nil
}

View File

@@ -20,6 +20,7 @@ import (
"github.com/TwiN/gatus/v3/config/ui"
"github.com/TwiN/gatus/v3/config/web"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
)
func TestLoadFileThatDoesNotExist(t *testing.T) {
@@ -44,7 +45,8 @@ func TestParseAndValidateConfigBytes(t *testing.T) {
}()
config, err := parseAndValidateConfigBytes([]byte(fmt.Sprintf(`
storage:
file: %s
type: sqlite
path: %s
maintenance:
enabled: true
start: 00:00
@@ -83,6 +85,9 @@ endpoints:
if config == nil {
t.Fatal("Config shouldn't have been nil")
}
if config.Storage == nil || config.Storage.Path != file || config.Storage.Type != storage.TypeSQLite {
t.Error("expected storage to be set to sqlite, got", config.Storage)
}
if config.UI == nil || config.UI.Title != "Test" {
t.Error("Expected Config.UI.Title to be Test")
}
@@ -1297,3 +1302,53 @@ endpoints:
t.Error("services should've been merged in endpoints")
}
}
// XXX: Remove this in v4.0.0
func TestParseAndValidateConfigBytes_backwardCompatibleWithStorageFile(t *testing.T) {
file := t.TempDir() + "/test.db"
config, err := parseAndValidateConfigBytes([]byte(fmt.Sprintf(`
storage:
type: sqlite
file: %s
endpoints:
- name: website
url: https://twin.sh/actuator/health
conditions:
- "[STATUS] == 200"
`, file)))
if err != nil {
t.Error("expected no error, got", err.Error())
}
if config == nil {
t.Fatal("Config shouldn't have been nil")
}
if config.Storage == nil || config.Storage.Path != file || config.Storage.Type != storage.TypeSQLite {
t.Error("expected storage to be set to sqlite, got", config.Storage)
}
}
// XXX: Remove this in v4.0.0
func TestParseAndValidateConfigBytes_backwardCompatibleWithStorageTypeMemoryAndFile(t *testing.T) {
file := t.TempDir() + "/test.db"
config, err := parseAndValidateConfigBytes([]byte(fmt.Sprintf(`
storage:
type: memory
file: %s
endpoints:
- name: website
url: https://twin.sh/actuator/health
conditions:
- "[STATUS] == 200"
`, file)))
if err != nil {
t.Error("expected no error, got", err.Error())
}
if config == nil {
t.Fatal("Config shouldn't have been nil")
}
if config.Storage == nil || config.Storage.Path != file || config.Storage.Type != storage.TypeMemory {
t.Error("expected storage to be set to memory, got", config.Storage)
}
}

View File

@@ -34,7 +34,7 @@ func TestHandle(t *testing.T) {
defer os.Clearenv()
Handle(cfg.Security, cfg.Web, cfg.UI, cfg.Metrics)
defer Shutdown()
request, _ := http.NewRequest("GET", "/health", nil)
request, _ := http.NewRequest("GET", "/health", http.NoBody)
responseRecorder := httptest.NewRecorder()
server.Handler.ServeHTTP(responseRecorder, request)
if responseRecorder.Code != http.StatusOK {
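Commit 17a431321c swaps `nil` for `http.NoBody` in test requests. A minimal sketch of the idiom, outside the Gatus codebase (the helper name below is illustrative):

```go
package example

import "net/http"

// newHealthRequest illustrates the idiom from the hunk above: pass http.NoBody,
// a non-nil io.ReadCloser that is always empty, instead of nil when a request
// has no body. The path matches the test in the diff.
func newHealthRequest() (*http.Request, error) {
	return http.NewRequest(http.MethodGet, "/health", http.NoBody)
}
```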

View File

@@ -7,7 +7,7 @@ import (
"strings"
"time"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/storage/store/common"
"github.com/gorilla/mux"
)
@@ -40,16 +40,15 @@ func UptimeBadge(writer http.ResponseWriter, request *http.Request) {
return
}
key := variables["key"]
uptime, err := storage.Get().GetUptimeByKey(key, from, time.Now())
uptime, err := store.Get().GetUptimeByKey(key, from, time.Now())
if err != nil {
if err == common.ErrEndpointNotFound {
writer.WriteHeader(http.StatusNotFound)
http.Error(writer, err.Error(), http.StatusNotFound)
} else if err == common.ErrInvalidTimeRange {
writer.WriteHeader(http.StatusBadRequest)
http.Error(writer, err.Error(), http.StatusBadRequest)
} else {
writer.WriteHeader(http.StatusInternalServerError)
http.Error(writer, err.Error(), http.StatusInternalServerError)
}
_, _ = writer.Write([]byte(err.Error()))
return
}
formattedDate := time.Now().Format(http.TimeFormat)
@@ -79,16 +78,15 @@ func ResponseTimeBadge(writer http.ResponseWriter, request *http.Request) {
return
}
key := variables["key"]
averageResponseTime, err := storage.Get().GetAverageResponseTimeByKey(key, from, time.Now())
averageResponseTime, err := store.Get().GetAverageResponseTimeByKey(key, from, time.Now())
if err != nil {
if err == common.ErrEndpointNotFound {
writer.WriteHeader(http.StatusNotFound)
http.Error(writer, err.Error(), http.StatusNotFound)
} else if err == common.ErrInvalidTimeRange {
writer.WriteHeader(http.StatusBadRequest)
http.Error(writer, err.Error(), http.StatusBadRequest)
} else {
writer.WriteHeader(http.StatusInternalServerError)
http.Error(writer, err.Error(), http.StatusInternalServerError)
}
_, _ = writer.Write([]byte(err.Error()))
return
}
formattedDate := time.Now().Format(http.TimeFormat)
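The hunks above replace the `WriteHeader` + `Write` pairs with `http.Error`, which sets a plain-text `Content-Type`, writes the status code, and terminates the message with a newline in a single call. A small before/after sketch, not taken from the repository:

```go
package example

import "net/http"

// Before: two calls, with the body written manually and no Content-Type set.
func writeNotFoundOld(w http.ResponseWriter, err error) {
	w.WriteHeader(http.StatusNotFound)
	_, _ = w.Write([]byte(err.Error()))
}

// After: http.Error handles the header, status code, and trailing newline.
func writeNotFoundNew(w http.ResponseWriter, err error) {
	http.Error(w, err.Error(), http.StatusNotFound)
}
```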

View File

@@ -9,12 +9,12 @@ import (
"github.com/TwiN/gatus/v3/config"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/watchdog"
)
func TestUptimeBadge(t *testing.T) {
defer storage.Get().Clear()
defer store.Get().Clear()
defer cache.Clear()
cfg := &config.Config{
Metrics: true,
@@ -107,7 +107,7 @@ func TestUptimeBadge(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
if scenario.Gzip {
request.Header.Set("Accept-Encoding", "gzip")
}

View File

@@ -7,7 +7,7 @@ import (
"sort"
"time"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/storage/store/common"
"github.com/gorilla/mux"
"github.com/wcharczuk/go-chart/v2"
@@ -42,21 +42,19 @@ func ResponseTimeChart(writer http.ResponseWriter, r *http.Request) {
http.Error(writer, "Durations supported: 7d, 24h", http.StatusBadRequest)
return
}
hourlyAverageResponseTime, err := storage.Get().GetHourlyAverageResponseTimeByKey(vars["key"], from, time.Now())
hourlyAverageResponseTime, err := store.Get().GetHourlyAverageResponseTimeByKey(vars["key"], from, time.Now())
if err != nil {
if err == common.ErrEndpointNotFound {
writer.WriteHeader(http.StatusNotFound)
http.Error(writer, err.Error(), http.StatusNotFound)
} else if err == common.ErrInvalidTimeRange {
writer.WriteHeader(http.StatusBadRequest)
http.Error(writer, err.Error(), http.StatusBadRequest)
} else {
writer.WriteHeader(http.StatusInternalServerError)
http.Error(writer, err.Error(), http.StatusInternalServerError)
}
_, _ = writer.Write([]byte(err.Error()))
return
}
if len(hourlyAverageResponseTime) == 0 {
writer.WriteHeader(http.StatusNoContent)
_, _ = writer.Write(nil)
http.Error(writer, "", http.StatusNoContent)
return
}
series := chart.TimeSeries{

View File

@@ -8,12 +8,12 @@ import (
"github.com/TwiN/gatus/v3/config"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/watchdog"
)
func TestResponseTimeChart(t *testing.T) {
defer storage.Get().Clear()
defer store.Get().Clear()
defer cache.Clear()
cfg := &config.Config{
Metrics: true,
@@ -66,7 +66,7 @@ func TestResponseTimeChart(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
if scenario.Gzip {
request.Header.Set("Accept-Encoding", "gzip")
}

View File

@@ -10,7 +10,7 @@ import (
"strings"
"time"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/storage/store/common"
"github.com/TwiN/gatus/v3/storage/store/common/paging"
"github.com/TwiN/gocache"
@@ -44,7 +44,7 @@ func EndpointStatuses(writer http.ResponseWriter, r *http.Request) {
var err error
buffer := &bytes.Buffer{}
gzipWriter := gzip.NewWriter(buffer)
endpointStatuses, err := storage.Get().GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(page, pageSize))
endpointStatuses, err := store.Get().GetAllEndpointStatuses(paging.NewEndpointStatusParams().WithResults(page, pageSize))
if err != nil {
log.Printf("[handler][EndpointStatuses] Failed to retrieve endpoint statuses: %s", err.Error())
http.Error(writer, err.Error(), http.StatusInternalServerError)
@@ -76,7 +76,7 @@ func EndpointStatuses(writer http.ResponseWriter, r *http.Request) {
func EndpointStatus(writer http.ResponseWriter, r *http.Request) {
page, pageSize := extractPageAndPageSizeFromRequest(r)
vars := mux.Vars(r)
endpointStatus, err := storage.Get().GetEndpointStatusByKey(vars["key"], paging.NewEndpointStatusParams().WithResults(page, pageSize).WithEvents(1, common.MaximumNumberOfEvents))
endpointStatus, err := store.Get().GetEndpointStatusByKey(vars["key"], paging.NewEndpointStatusParams().WithResults(page, pageSize).WithEvents(1, common.MaximumNumberOfEvents))
if err != nil {
if err == common.ErrEndpointNotFound {
http.Error(writer, err.Error(), http.StatusNotFound)

View File

@@ -8,7 +8,7 @@ import (
"github.com/TwiN/gatus/v3/config"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/watchdog"
)
@@ -84,7 +84,7 @@ var (
)
func TestEndpointStatus(t *testing.T) {
defer storage.Get().Clear()
defer store.Get().Clear()
defer cache.Clear()
cfg := &config.Config{
Metrics: true,
@@ -139,7 +139,7 @@ func TestEndpointStatus(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
if scenario.Gzip {
request.Header.Set("Accept-Encoding", "gzip")
}
@@ -153,12 +153,12 @@ func TestEndpointStatus(t *testing.T) {
}
func TestEndpointStatuses(t *testing.T) {
defer storage.Get().Clear()
defer store.Get().Clear()
defer cache.Clear()
firstResult := &testSuccessfulResult
secondResult := &testUnsuccessfulResult
storage.Get().Insert(&testEndpoint, firstResult)
storage.Get().Insert(&testEndpoint, secondResult)
store.Get().Insert(&testEndpoint, firstResult)
store.Get().Insert(&testEndpoint, secondResult)
// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
firstResult.Timestamp = time.Time{}
secondResult.Timestamp = time.Time{}
@@ -175,43 +175,43 @@ func TestEndpointStatuses(t *testing.T) {
Name: "no-pagination",
Path: "/api/v1/endpoints/statuses",
ExpectedCode: http.StatusOK,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}]}]`,
},
{
Name: "pagination-first-result",
Path: "/api/v1/endpoints/statuses?page=1&pageSize=1",
ExpectedCode: http.StatusOK,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}]}]`,
},
{
Name: "pagination-second-result",
Path: "/api/v1/endpoints/statuses?page=2&pageSize=1",
ExpectedCode: http.StatusOK,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"}]}]`,
},
{
Name: "pagination-no-results",
Path: "/api/v1/endpoints/statuses?page=5&pageSize=20",
ExpectedCode: http.StatusOK,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[],"events":[]}]`,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[]}]`,
},
{
Name: "invalid-pagination-should-fall-back-to-default",
Path: "/api/v1/endpoints/statuses?page=INVALID&pageSize=INVALID",
ExpectedCode: http.StatusOK,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}]}]`,
},
{ // XXX: Remove this in v4.0.0
Name: "backward-compatible-service-status",
Path: "/api/v1/services/statuses",
ExpectedCode: http.StatusOK,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"errors":null,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}],"events":[]}]`,
ExpectedBody: `[{"name":"name","group":"group","key":"group_name","results":[{"status":200,"hostname":"example.org","duration":150000000,"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":true},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":true}],"success":true,"timestamp":"0001-01-01T00:00:00Z"},{"status":200,"hostname":"example.org","duration":750000000,"errors":["error-1","error-2"],"conditionResults":[{"condition":"[STATUS] == 200","success":true},{"condition":"[RESPONSE_TIME] \u003c 500","success":false},{"condition":"[CERTIFICATE_EXPIRATION] \u003c 72h","success":false}],"success":false,"timestamp":"0001-01-01T00:00:00Z"}]}]`,
},
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
responseRecorder := httptest.NewRecorder()
router.ServeHTTP(responseRecorder, request)
if responseRecorder.Code != scenario.ExpectedCode {

View File

@@ -22,7 +22,7 @@ func TestFavIcon(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
responseRecorder := httptest.NewRecorder()
router.ServeHTTP(responseRecorder, request)
if responseRecorder.Code != scenario.ExpectedCode {

View File

@@ -44,7 +44,7 @@ func TestCreateRouter(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
if scenario.Gzip {
request.Header.Set("Accept-Encoding", "gzip")
}

View File

@@ -8,12 +8,12 @@ import (
"github.com/TwiN/gatus/v3/config"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/watchdog"
)
func TestSinglePageApplication(t *testing.T) {
defer storage.Get().Clear()
defer store.Get().Clear()
defer cache.Clear()
cfg := &config.Config{
Metrics: true,
@@ -56,7 +56,7 @@ func TestSinglePageApplication(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
request, _ := http.NewRequest("GET", scenario.Path, nil)
request, _ := http.NewRequest("GET", scenario.Path, http.NoBody)
if scenario.Gzip {
request.Header.Set("Accept-Encoding", "gzip")
}

View File

@@ -54,7 +54,7 @@ func TestExtractPageAndPageSizeFromRequest(t *testing.T) {
}
for _, scenario := range scenarios {
t.Run("page-"+scenario.Page+"-pageSize-"+scenario.PageSize, func(t *testing.T) {
request, _ := http.NewRequest("GET", fmt.Sprintf("/api/v1/statuses?page=%s&pageSize=%s", scenario.Page, scenario.PageSize), nil)
request, _ := http.NewRequest("GET", fmt.Sprintf("/api/v1/statuses?page=%s&pageSize=%s", scenario.Page, scenario.PageSize), http.NoBody)
actualPage, actualPageSize := extractPageAndPageSizeFromRequest(request)
if actualPage != scenario.ExpectedPage {
t.Errorf("expected %d, got %d", scenario.ExpectedPage, actualPage)

View File

@@ -80,7 +80,7 @@ const (
maximumLengthBeforeTruncatingWhenComparedWithPattern = 25
)
// Condition is a condition that needs to be met in order for a Endpoint to be considered healthy.
// Condition is a condition that needs to be met in order for an Endpoint to be considered healthy.
type Condition string
// evaluate the Condition with the Result of the health check
@@ -283,7 +283,7 @@ func prettifyNumericalParameters(parameters []string, resolvedParameters []int64
return prettify(parameters, []string{strconv.Itoa(int(resolvedParameters[0])), strconv.Itoa(int(resolvedParameters[1]))}, operator)
}
// XXX: make this configurable? i.e. show-resolved-conditions-on-failure
// prettify returns a string representation of a condition with its parameters resolved between parentheses
func prettify(parameters []string, resolvedParameters []string, operator string) string {
// Since, in the event of an invalid path, the resolvedParameters also contain the condition itself,
// we'll return the resolvedParameters as-is.

View File

@@ -17,7 +17,7 @@ type EndpointStatus struct {
Results []*Result `json:"results"`
// Events is a list of events
Events []*Event `json:"events"`
Events []*Event `json:"events,omitempty"`
// Uptime information on the endpoint's uptime
//

View File

@@ -13,7 +13,7 @@ type Result struct {
DNSRCode string `json:"-"`
// Hostname extracted from Endpoint.URL
Hostname string `json:"hostname"`
Hostname string `json:"hostname,omitempty"`
// IP resolved from the Endpoint URL
IP string `json:"-"`
@@ -25,7 +25,7 @@ type Result struct {
Duration time.Duration `json:"duration"`
// Errors encountered during the evaluation of the Endpoint's health
Errors []string `json:"errors"`
Errors []string `json:"errors,omitempty"`
// ConditionResults results of the Endpoint's conditions
ConditionResults []*ConditionResult `json:"conditionResults"`
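These `omitempty` tags are what produce the trimmed JSON payloads asserted in the handler tests earlier in this diff: empty hostnames, error slices, and event lists are simply dropped during marshaling. A standalone illustration with a hypothetical struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type result struct {
	Hostname string   `json:"hostname,omitempty"`
	Errors   []string `json:"errors,omitempty"`
	Success  bool     `json:"success"`
}

func main() {
	// Zero-value fields tagged with omitempty disappear from the output
	// instead of being serialized as "" or null.
	b, _ := json.Marshal(result{Success: true})
	fmt.Println(string(b)) // {"success":true}
}
```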

go.mod (2 changed lines)
View File

@@ -4,7 +4,7 @@ go 1.17
require (
github.com/TwiN/gocache v1.2.4
github.com/TwiN/health v1.0.1
github.com/TwiN/health v1.1.0
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/go-ping/ping v0.0.0-20210911151512-381826476871

go.sum (4 changed lines)
View File

@@ -35,8 +35,8 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/TwiN/gocache v1.2.4 h1:AfJ1YRcxtQ/zZEN61URDwk/dwFG7LSRenU5qIm9dQzo=
github.com/TwiN/gocache v1.2.4/go.mod h1:BjabsQQy6z5uHDorHa4LJVPEzFeitLIDbCtdv3gc1gA=
github.com/TwiN/health v1.0.1 h1:Q8lE6mTMPG4A5nHXq5Xa+NY4Y8LkQdRBWh1ReUkuc6Y=
github.com/TwiN/health v1.0.1/go.mod h1:Bt+lEvSi6C/9NWb7OoGmUmgtS4dfPeMM9EINnURv5dE=
github.com/TwiN/health v1.1.0 h1:IbXV4b5VPxzfIqOPiP/19JdBNFYM0oEDReLbUazhb2k=
github.com/TwiN/health v1.1.0/go.mod h1:Bt+lEvSi6C/9NWb7OoGmUmgtS4dfPeMM9EINnURv5dE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=

main.go (27 changed lines)
View File

@@ -9,7 +9,7 @@ import (
"github.com/TwiN/gatus/v3/config"
"github.com/TwiN/gatus/v3/controller"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/watchdog"
)
@@ -18,6 +18,7 @@ func main() {
if err != nil {
panic(err)
}
initializeStorage(cfg)
start(cfg)
// Wait for termination signal
signalChannel := make(chan os.Signal, 1)
@@ -46,8 +47,7 @@ func stop() {
}
func save() {
err := storage.Get().Save()
if err != nil {
if err := store.Get().Save(); err != nil {
log.Println("Failed to save storage provider:", err.Error())
}
}
@@ -62,6 +62,27 @@ func loadConfiguration() (cfg *config.Config, err error) {
return
}
// initializeStorage initializes the storage provider
//
// Q: "TwiN, why are you putting this here? Wouldn't it make more sense to have this in the config?!"
// A: Yes. Yes it would make more sense to have it in the config package. But I don't want to import
// the massive SQL dependencies just because I want to import the config, so here we are.
func initializeStorage(cfg *config.Config) {
err := store.Initialize(cfg.Storage)
if err != nil {
panic(err)
}
// Remove all EndpointStatus that represent endpoints which no longer exist in the configuration
var keys []string
for _, endpoint := range cfg.Endpoints {
keys = append(keys, endpoint.Key())
}
numberOfEndpointStatusesDeleted := store.Get().DeleteAllEndpointStatusesNotInKeys(keys)
if numberOfEndpointStatusesDeleted > 0 {
log.Printf("[config][validateStorageConfig] Deleted %d endpoint statuses because their matching endpoints no longer existed", numberOfEndpointStatusesDeleted)
}
}
func listenToConfigurationFileChanges(cfg *config.Config) {
for {
time.Sleep(30 * time.Second)

View File

@@ -1,14 +1,59 @@
package storage
import (
"errors"
"log"
)
var (
ErrSQLStorageRequiresPath = errors.New("sql storage requires a non-empty path to be defined")
ErrMemoryStorageDoesNotSupportPath = errors.New("memory storage does not support persistence, use sqlite if you want persistence on file")
ErrCannotSetBothFileAndPath = errors.New("file has been deprecated in favor of path: you cannot set both of them")
)
// Config is the configuration for storage
type Config struct {
// Path is the path used by the store to achieve persistence
// If blank, persistence is disabled.
// Note that not all Type support persistence
//
// XXX: Rename to path for v4.0.0
Path string `yaml:"path"`
// File is the path of the file to use for persistence
// If blank, persistence is disabled
//
// XXX: Rename to path for v4.0.0
// Deprecated
File string `yaml:"file"`
// Type of store
// If blank, uses the default in-memory store
Type Type `yaml:"type"`
}
// ValidateAndSetDefaults validates the configuration and sets the default values (if applicable)
func (c *Config) ValidateAndSetDefaults() error {
if len(c.File) > 0 && len(c.Path) > 0 { // XXX: Remove for v4.0.0
return ErrCannotSetBothFileAndPath
} else if len(c.File) > 0 { // XXX: Remove for v4.0.0
log.Println("WARNING: Your configuration is using 'storage.file', which is deprecated in favor of 'storage.path'")
log.Println("WARNING: storage.file will be completely removed in v4.0.0, so please update your configuration")
log.Println("WARNING: See https://github.com/TwiN/gatus/issues/197")
c.Path = c.File
}
if c.Type == "" {
c.Type = TypeMemory
}
if (c.Type == TypePostgres || c.Type == TypeSQLite) && len(c.Path) == 0 {
return ErrSQLStorageRequiresPath
}
if c.Type == TypeMemory && len(c.Path) > 0 {
log.Println("WARNING: Your configuration is using a storage of type memory with persistence, which has been deprecated")
log.Println("WARNING: As of v4.0.0, the default storage type (memory) will not support persistence.")
log.Println("WARNING: If you want persistence, use 'storage.type: sqlite' instead of 'storage.type: memory'")
log.Println("WARNING: See https://github.com/TwiN/gatus/issues/198")
// XXX: Uncomment the following line for v4.0.0
//return ErrMemoryStorageDoesNotSupportPath
}
return nil
}
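Based on the validation logic above, a deprecated `file` value is copied into `path` with a warning, while setting both keys is rejected. A usage sketch (the import path and identifiers come from this diff; the surrounding program is illustrative):

```go
package main

import (
	"log"

	"github.com/TwiN/gatus/v3/storage"
)

func main() {
	// Deprecated configuration: only File is set, so after validation
	// Path carries the same value and a deprecation warning is logged.
	cfg := &storage.Config{Type: storage.TypeSQLite, File: "/data/data.db"}
	if err := cfg.ValidateAndSetDefaults(); err != nil {
		log.Fatal(err)
	}
	log.Println(cfg.Path) // "/data/data.db"

	// Setting both file and path is rejected outright.
	bad := &storage.Config{Type: storage.TypeSQLite, File: "/a.db", Path: "/b.db"}
	if err := bad.ValidateAndSetDefaults(); err != nil {
		log.Println(err) // ErrCannotSetBothFileAndPath
	}
}
```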

View File

@@ -1,91 +0,0 @@
package storage
import (
"context"
"log"
"time"
"github.com/TwiN/gatus/v3/storage/store"
"github.com/TwiN/gatus/v3/storage/store/memory"
"github.com/TwiN/gatus/v3/storage/store/sql"
)
var (
provider store.Store
// initialized keeps track of whether the storage provider was initialized
// Because store.Store is an interface, a nil check wouldn't be sufficient, so instead of doing reflection
// every single time Get is called, we'll just lazily keep track of its existence through this variable
initialized bool
ctx context.Context
cancelFunc context.CancelFunc
)
// Get retrieves the storage provider
func Get() store.Store {
if !initialized {
log.Println("[storage][Get] Provider requested before it was initialized, automatically initializing")
err := Initialize(nil)
if err != nil {
panic("failed to automatically initialize store: " + err.Error())
}
}
return provider
}
// Initialize instantiates the storage provider based on the Config provider
func Initialize(cfg *Config) error {
initialized = true
var err error
if cancelFunc != nil {
// Stop the active autoSaveStore task, if there's already one
cancelFunc()
}
if cfg == nil {
cfg = &Config{}
}
if len(cfg.File) == 0 && cfg.Type != TypePostgres {
log.Printf("[storage][Initialize] Creating storage provider with type=%s and file=%s", cfg.Type, cfg.File)
} else {
log.Printf("[storage][Initialize] Creating storage provider with type=%s", cfg.Type)
}
ctx, cancelFunc = context.WithCancel(context.Background())
switch cfg.Type {
case TypeSQLite, TypePostgres:
provider, err = sql.NewStore(string(cfg.Type), cfg.File)
if err != nil {
return err
}
case TypeMemory:
fallthrough
default:
if len(cfg.File) > 0 {
provider, err = memory.NewStore(cfg.File)
if err != nil {
return err
}
go autoSaveStore(ctx, provider, 7*time.Minute)
} else {
provider, _ = memory.NewStore("")
}
}
return nil
}
// autoSaveStore automatically calls the Save function of the provider at every interval
func autoSaveStore(ctx context.Context, provider store.Store, interval time.Duration) {
for {
select {
case <-ctx.Done():
log.Printf("[storage][autoSaveStore] Stopping active job")
return
case <-time.After(interval):
log.Printf("[storage][autoSaveStore] Saving")
err := provider.Save()
if err != nil {
log.Println("[storage][autoSaveStore] Save failed:", err.Error())
}
}
}
}

View File

@@ -1,94 +0,0 @@
package storage
import (
"testing"
"time"
"github.com/TwiN/gatus/v3/storage/store/sql"
)
func TestGet(t *testing.T) {
store := Get()
if store == nil {
t.Error("store should've been automatically initialized")
}
}
func TestInitialize(t *testing.T) {
type Scenario struct {
Name string
Cfg *Config
ExpectedErr error
}
scenarios := []Scenario{
{
Name: "nil",
Cfg: nil,
ExpectedErr: nil,
},
{
Name: "blank",
Cfg: &Config{},
ExpectedErr: nil,
},
{
Name: "memory-no-file",
Cfg: &Config{Type: TypeMemory},
ExpectedErr: nil,
},
{
Name: "memory-with-file",
Cfg: &Config{Type: TypeMemory, File: t.TempDir() + "/TestInitialize_memory-with-file.db"},
ExpectedErr: nil,
},
{
Name: "sqlite-no-file",
Cfg: &Config{Type: TypeSQLite},
ExpectedErr: sql.ErrFilePathNotSpecified,
},
{
Name: "sqlite-with-file",
Cfg: &Config{Type: TypeSQLite, File: t.TempDir() + "/TestInitialize_sqlite-with-file.db"},
ExpectedErr: nil,
},
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
err := Initialize(scenario.Cfg)
if err != scenario.ExpectedErr {
t.Errorf("expected %v, got %v", scenario.ExpectedErr, err)
}
if err != nil {
return
}
if cancelFunc == nil {
t.Error("cancelFunc shouldn't have been nil")
}
if ctx == nil {
t.Error("ctx shouldn't have been nil")
}
if provider == nil {
t.Fatal("provider shouldn't have been nit")
}
provider.Close()
// Try to initialize it again
err = Initialize(scenario.Cfg)
if err != scenario.ExpectedErr {
t.Errorf("expected %v, got %v", scenario.ExpectedErr, err)
return
}
provider.Close()
})
}
}
func TestAutoSave(t *testing.T) {
file := t.TempDir() + "/TestAutoSave.db"
if err := Initialize(&Config{File: file}); err != nil {
t.Fatal("shouldn't have returned an error")
}
go autoSaveStore(ctx, provider, 3*time.Millisecond)
time.Sleep(15 * time.Millisecond)
cancelFunc()
time.Sleep(50 * time.Millisecond)
}

View File

@@ -28,6 +28,10 @@ func init() {
// Store that leverages gocache
type Store struct {
sync.RWMutex
// Deprecated
//
// File persistence will no longer be supported as of v4.0.0
// XXX: Remove me in v4.0.0
file string
cache *gocache.Cache
}
@@ -41,6 +45,8 @@ func NewStore(file string) (*Store, error) {
file: file,
cache: gocache.NewCache().WithMaxSize(gocache.NoMaxSize),
}
// XXX: Remove the block below in v4.0.0 because persistence with the memory store will no longer be supported
// XXX: Make sure to also update gocache to v2.0.0
if len(file) > 0 {
_, err := store.cache.ReadFromFile(file)
if err != nil {
@@ -57,7 +63,6 @@ func NewStore(file string) (*Store, error) {
return store, nil
}
}
// XXX: Remove the block above in v4.0.0
return nil, err
}
}

View File

@@ -34,8 +34,8 @@ const (
)
var (
// ErrFilePathNotSpecified is the error returned when path parameter passed in NewStore is blank
ErrFilePathNotSpecified = errors.New("file path cannot be empty")
// ErrPathNotSpecified is the error returned when the path parameter passed in NewStore is blank
ErrPathNotSpecified = errors.New("path cannot be empty")
// ErrDatabaseDriverNotSpecified is the error returned when the driver parameter passed in NewStore is blank
ErrDatabaseDriverNotSpecified = errors.New("database driver cannot be empty")
@@ -45,20 +45,20 @@ var (
// Store that leverages a database
type Store struct {
driver, file string
driver, path string
db *sql.DB
}
// NewStore initializes the database and creates the schema if it doesn't already exist in the file specified
// NewStore initializes the database and creates the schema if it doesn't already exist in the path specified
func NewStore(driver, path string) (*Store, error) {
if len(driver) == 0 {
return nil, ErrDatabaseDriverNotSpecified
}
if len(path) == 0 {
return nil, ErrFilePathNotSpecified
return nil, ErrPathNotSpecified
}
store := &Store{driver: driver, file: path}
store := &Store{driver: driver, path: path}
var err error
if store.db, err = sql.Open(driver, path); err != nil {
return nil, err
@@ -342,7 +342,7 @@ func (s *Store) DeleteAllEndpointStatusesNotInKeys(keys []string) int {
query += fmt.Sprintf("$%d,", i+1)
args = append(args, keys[i])
}
query = query[:len(query)-1] + ")" // Remove the last comma and close the parenthesis
query = query[:len(query)-1] + ")" // Remove the last comma and add the closing parenthesis
result, err = s.db.Exec(query, args...)
}
if err != nil {
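The comment fix above documents the placeholder-building loop: each key becomes a positional parameter ($1, $2, ...) and the trailing comma is then swapped for the closing parenthesis. A standalone sketch of the same pattern (table and column names are made up; the keys slice is assumed non-empty):

```go
package example

import "fmt"

// buildNotInClause mirrors the loop from the hunk above: one positional
// placeholder per key, then the trailing comma is replaced with ")".
func buildNotInClause(keys []string) (string, []interface{}) {
	query := "DELETE FROM endpoints WHERE endpoint_key NOT IN (" // illustrative table/column names
	args := make([]interface{}, 0, len(keys))
	for i, key := range keys {
		query += fmt.Sprintf("$%d,", i+1)
		args = append(args, key)
	}
	query = query[:len(query)-1] + ")" // Remove the last comma and add the closing parenthesis
	return query, args
}
```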
@@ -493,12 +493,6 @@ func (s *Store) getEndpointStatusByKey(tx *sql.Tx, key string, parameters *pagin
log.Printf("[sql][getEndpointStatusByKey] Failed to retrieve results for key=%s: %s", key, err.Error())
}
}
//if parameters.IncludeUptime {
// now := time.Now()
// endpointStatus.Uptime.LastHour, _, err = s.getEndpointUptime(tx, endpointID, now.Add(-time.Hour), now)
// endpointStatus.Uptime.LastTwentyFourHours, _, err = s.getEndpointUptime(tx, endpointID, now.Add(-24*time.Hour), now)
// endpointStatus.Uptime.LastSevenDays, _, err = s.getEndpointUptime(tx, endpointID, now.Add(-7*24*time.Hour), now)
//}
return endpointStatus, nil
}

View File

@@ -84,7 +84,7 @@ func TestNewStore(t *testing.T) {
if _, err := NewStore("", "TestNewStore.db"); err != ErrDatabaseDriverNotSpecified {
t.Error("expected error due to blank driver parameter")
}
if _, err := NewStore("sqlite", ""); err != ErrFilePathNotSpecified {
if _, err := NewStore("sqlite", ""); err != ErrPathNotSpecified {
t.Error("expected error due to blank path parameter")
}
if store, err := NewStore("sqlite", t.TempDir()+"/TestNewStore.db"); err != nil {
@@ -169,8 +169,8 @@ func TestStore_InsertCleansUpEventsAndResultsProperly(t *testing.T) {
}
func TestStore_Persistence(t *testing.T) {
file := t.TempDir() + "/TestStore_Persistence.db"
store, _ := NewStore("sqlite", file)
path := t.TempDir() + "/TestStore_Persistence.db"
store, _ := NewStore("sqlite", path)
store.Insert(&testEndpoint, &testSuccessfulResult)
store.Insert(&testEndpoint, &testUnsuccessfulResult)
if uptime, _ := store.GetUptimeByKey(testEndpoint.Key(), time.Now().Add(-time.Hour), time.Now()); uptime != 0.5 {
@@ -188,7 +188,7 @@ func TestStore_Persistence(t *testing.T) {
t.Fatal("sanity check failed")
}
store.Close()
store, _ = NewStore("sqlite", file)
store, _ = NewStore("sqlite", path)
defer store.Close()
ssFromNewStore, _ := store.GetEndpointStatus(testEndpoint.Group, testEndpoint.Name, paging.NewEndpointStatusParams().WithResults(1, common.MaximumNumberOfResults).WithEvents(1, common.MaximumNumberOfEvents))
if ssFromNewStore == nil || ssFromNewStore.Group != "group" || ssFromNewStore.Name != "name" || len(ssFromNewStore.Events) != 3 || len(ssFromNewStore.Results) != 2 {

View File

@@ -1,9 +1,12 @@
package store
import (
"context"
"log"
"time"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store/common/paging"
"github.com/TwiN/gatus/v3/storage/store/memory"
"github.com/TwiN/gatus/v3/storage/store/sql"
@@ -56,3 +59,83 @@ var (
_ Store = (*memory.Store)(nil)
_ Store = (*sql.Store)(nil)
)
var (
store Store
// initialized keeps track of whether the storage provider was initialized
// Because store.Store is an interface, a nil check wouldn't be sufficient, so instead of doing reflection
// every single time Get is called, we'll just lazily keep track of its existence through this variable
initialized bool
ctx context.Context
cancelFunc context.CancelFunc
)
func Get() Store {
if !initialized {
// This only happens in tests
log.Println("[store][Get] Provider requested before it was initialized, automatically initializing")
err := Initialize(nil)
if err != nil {
panic("failed to automatically initialize store: " + err.Error())
}
}
return store
}
// Initialize instantiates the storage provider based on the Config provider
func Initialize(cfg *storage.Config) error {
initialized = true
var err error
if cancelFunc != nil {
// Stop the active autoSave task, if there's already one
cancelFunc()
}
if cfg == nil {
// This only happens in tests
log.Println("[store][Initialize] nil storage config passed as parameter. This should only happen in tests. Defaulting to an empty config.")
cfg = &storage.Config{}
}
if len(cfg.Path) == 0 && cfg.Type != storage.TypePostgres {
log.Printf("[store][Initialize] Creating storage provider of type=%s", cfg.Type)
}
ctx, cancelFunc = context.WithCancel(context.Background())
switch cfg.Type {
case storage.TypeSQLite, storage.TypePostgres:
store, err = sql.NewStore(string(cfg.Type), cfg.Path)
if err != nil {
return err
}
case storage.TypeMemory:
fallthrough
default:
if len(cfg.Path) > 0 {
store, err = memory.NewStore(cfg.Path)
if err != nil {
return err
}
go autoSave(ctx, store, 7*time.Minute)
} else {
store, _ = memory.NewStore("")
}
}
return nil
}
// autoSave automatically calls the Save function of the provider at every interval
func autoSave(ctx context.Context, store Store, interval time.Duration) {
for {
select {
case <-ctx.Done():
log.Printf("[store][autoSave] Stopping active job")
return
case <-time.After(interval):
log.Printf("[store][autoSave] Saving")
err := store.Save()
if err != nil {
log.Println("[store][autoSave] Save failed:", err.Error())
}
}
}
}

View File

@@ -5,6 +5,7 @@ import (
"time"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store/common"
"github.com/TwiN/gatus/v3/storage/store/common/paging"
"github.com/TwiN/gatus/v3/storage/store/memory"
@@ -520,3 +521,89 @@ func TestStore_DeleteAllEndpointStatusesNotInKeys(t *testing.T) {
})
}
}
func TestGet(t *testing.T) {
store := Get()
if store == nil {
t.Error("store should've been automatically initialized")
}
}
func TestInitialize(t *testing.T) {
type Scenario struct {
Name string
Cfg *storage.Config
ExpectedErr error
}
scenarios := []Scenario{
{
Name: "nil",
Cfg: nil,
ExpectedErr: nil,
},
{
Name: "blank",
Cfg: &storage.Config{},
ExpectedErr: nil,
},
{
Name: "memory-no-path",
Cfg: &storage.Config{Type: storage.TypeMemory},
ExpectedErr: nil,
},
{ // XXX: Remove for v4.0.0. See https://github.com/TwiN/gatus/issues/198
Name: "memory-with-path",
Cfg: &storage.Config{Type: storage.TypeMemory, Path: t.TempDir() + "/TestInitialize_memory-with-path.db"},
ExpectedErr: nil,
},
{
Name: "sqlite-no-path",
Cfg: &storage.Config{Type: storage.TypeSQLite},
ExpectedErr: sql.ErrPathNotSpecified,
},
{
Name: "sqlite-with-path",
Cfg: &storage.Config{Type: storage.TypeSQLite, Path: t.TempDir() + "/TestInitialize_sqlite-with-path.db"},
ExpectedErr: nil,
},
}
for _, scenario := range scenarios {
t.Run(scenario.Name, func(t *testing.T) {
err := Initialize(scenario.Cfg)
if err != scenario.ExpectedErr {
t.Errorf("expected %v, got %v", scenario.ExpectedErr, err)
}
if err != nil {
return
}
if cancelFunc == nil {
t.Error("cancelFunc shouldn't have been nil")
}
if ctx == nil {
t.Error("ctx shouldn't have been nil")
}
if store == nil {
t.Fatal("provider shouldn't have been nit")
}
store.Close()
// Try to initialize it again
err = Initialize(scenario.Cfg)
if err != scenario.ExpectedErr {
t.Errorf("expected %v, got %v", scenario.ExpectedErr, err)
return
}
store.Close()
})
}
}
func TestAutoSave(t *testing.T) {
file := t.TempDir() + "/TestAutoSave.db"
if err := Initialize(&storage.Config{Path: file}); err != nil {
t.Fatal("shouldn't have returned an error")
}
go autoSave(ctx, store, 3*time.Millisecond)
time.Sleep(15 * time.Millisecond)
cancelFunc()
time.Sleep(50 * time.Millisecond)
}

vendor/github.com/TwiN/health/Makefile generated vendored Normal file (2 changed lines)
View File

@@ -0,0 +1,2 @@
bench:
go test -bench . -race

View File

@@ -1,6 +1,9 @@
package health
import "net/http"
import (
"net/http"
"sync"
)
var (
handler = &healthHandler{
@@ -13,6 +16,8 @@ var (
type healthHandler struct {
useJSON bool
status Status
sync.RWMutex
}
// WithJSON configures whether the handler should output a response in JSON or in raw text
@@ -24,30 +29,48 @@ func (h *healthHandler) WithJSON(v bool) *healthHandler {
}
// ServeHTTP serves the HTTP request for the health handler
func (h healthHandler) ServeHTTP(writer http.ResponseWriter, _ *http.Request) {
var status int
func (h *healthHandler) ServeHTTP(writer http.ResponseWriter, _ *http.Request) {
var statusCode int
var body []byte
if h.status == Up {
status = http.StatusOK
handlerStatus := h.getStatus()
if handlerStatus == Up {
statusCode = http.StatusOK
} else {
status = http.StatusInternalServerError
statusCode = http.StatusInternalServerError
}
if h.useJSON {
writer.Header().Set("Content-Type", "application/json")
body = []byte(`{"status":"` + h.status + `"}`)
body = []byte(`{"status":"` + handlerStatus + `"}`)
} else {
body = []byte(h.status)
body = []byte(handlerStatus)
}
writer.WriteHeader(status)
writer.WriteHeader(statusCode)
_, _ = writer.Write(body)
}
func (h *healthHandler) getStatus() Status {
h.Lock()
defer h.Unlock()
return h.status
}
func (h *healthHandler) setStatus(status Status) {
h.Lock()
h.status = status
h.Unlock()
}
// Handler retrieves the health handler
func Handler() *healthHandler {
return handler
}
// SetStatus sets the status to be reflected by the health handler
func SetStatus(status Status) {
handler.status = status
// GetStatus retrieves the current status returned by the health handler
func GetStatus() Status {
return handler.getStatus()
}
// SetStatus sets the status to be returned by the health handler
func SetStatus(status Status) {
handler.setStatus(status)
}
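With v1.1.0, the handler above guards its status with a mutex and exposes it through `GetStatus`/`SetStatus`, so the status can safely be flipped from one goroutine while requests are served from others. A short usage sketch, assuming `github.com/TwiN/health` as the import path (as listed in go.mod):

```go
package main

import (
	"net/http"

	"github.com/TwiN/health"
)

func main() {
	// health.Handler() returns the shared handler; WithJSON(true) makes it
	// respond with a JSON body instead of raw text.
	http.Handle("/health", health.Handler().WithJSON(true))
	// Because the status is now mutex-protected, it can be updated from a
	// different goroutine than the one serving requests.
	go health.SetStatus(health.Up)
	_ = http.ListenAndServe(":8080", nil)
}
```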

vendor/modules.txt vendored (2 changed lines)
View File

@@ -1,7 +1,7 @@
# github.com/TwiN/gocache v1.2.4
## explicit; go 1.16
github.com/TwiN/gocache
# github.com/TwiN/health v1.0.1
# github.com/TwiN/health v1.1.0
## explicit; go 1.17
github.com/TwiN/health
# github.com/beorn7/perks v1.0.1

View File

@@ -11,7 +11,7 @@ import (
"github.com/TwiN/gatus/v3/config/maintenance"
"github.com/TwiN/gatus/v3/core"
"github.com/TwiN/gatus/v3/metric"
"github.com/TwiN/gatus/v3/storage"
"github.com/TwiN/gatus/v3/storage/store"
)
var (
@@ -29,13 +29,13 @@ func Monitor(cfg *config.Config) {
for _, endpoint := range cfg.Endpoints {
if endpoint.IsEnabled() {
// To prevent multiple requests from running at the same time, we'll wait for a little before each iteration
time.Sleep(1111 * time.Millisecond)
time.Sleep(777 * time.Millisecond)
go monitor(endpoint, cfg.Alerting, cfg.Maintenance, cfg.DisableMonitoringLock, cfg.Metrics, cfg.Debug, ctx)
}
}
}
// monitor monitors a single endpoint in a loop
// monitor a single endpoint in a loop
func monitor(endpoint *core.Endpoint, alertingConfig *alerting.Config, maintenanceConfig *maintenance.Config, disableMonitoringLock, enabledMetrics, debug bool, ctx context.Context) {
// Run it immediately on start
execute(endpoint, alertingConfig, maintenanceConfig, disableMonitoringLock, enabledMetrics, debug)
@@ -88,7 +88,7 @@ func execute(endpoint *core.Endpoint, alertingConfig *alerting.Config, maintenan
// UpdateEndpointStatuses updates the slice of endpoint statuses
func UpdateEndpointStatuses(endpoint *core.Endpoint, result *core.Result) {
if err := storage.Get().Insert(endpoint, result); err != nil {
if err := store.Get().Insert(endpoint, result); err != nil {
log.Println("[watchdog][UpdateEndpointStatuses] Failed to insert data in storage:", err.Error())
}
}

web/app/package-lock.json generated (5306 changed lines)

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "gatus",
"version": "3.2.2",
"version": "3.3.3",
"private": true,
"scripts": {
"serve": "vue-cli-service serve --mode development",
@@ -8,22 +8,22 @@
"lint": "vue-cli-service lint"
},
"dependencies": {
"core-js": "^3.17.3",
"vue": "^3.2.11",
"core-js": "^3.19.1",
"vue": "3.2.21",
"vue-router": "^4.0.11"
},
"devDependencies": {
"@vue/cli-plugin-babel": "^5.0.0-beta.3",
"@vue/cli-plugin-eslint": "^5.0.0-beta.3",
"@vue/cli-plugin-router": "^5.0.0-beta.3",
"@vue/cli-service": "^5.0.0-beta.3",
"@vue/compiler-sfc": "^3.2.11",
"autoprefixer": "^10.3.4",
"@vue/cli-plugin-babel": "5.0.0-beta.6",
"@vue/cli-plugin-eslint": "5.0.0-beta.6",
"@vue/cli-plugin-router": "5.0.0-beta.6",
"@vue/cli-service": "5.0.0-beta.6",
"@vue/compiler-sfc": "3.2.21",
"autoprefixer": "10.4.0",
"babel-eslint": "^10.1.0",
"eslint": "^7.32.0",
"eslint-plugin-vue": "^7.17.0",
"postcss": "^8.3.6",
"tailwindcss": "^2.2.15"
"tailwindcss": "^2.2.19"
},
"eslintConfig": {
"root": true,

View File

@@ -3,9 +3,7 @@
<head>
<meta charset="utf-8" />
<script type="text/javascript">
window.config = {
logo: "{{ .Logo }}"
};
window.config = {logo: "{{ .Logo }}"};
</script>
<title>{{ .Title }}</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge">

View File

@@ -7,7 +7,7 @@
</div>
<div class="w-1/4 flex justify-end">
<img v-if="getLogo" :src="getLogo" alt="Gatus" class="object-scale-down" style="max-width: 100px; min-width: 50px; min-height:50px;"/>
<img v-if="!getLogo" src="./assets/logo.png" alt="Gatus" class="object-scale-down" style="max-width: 100px; min-width: 50px; min-height:50px;"/>
<img v-else src="./assets/logo.png" alt="Gatus" class="object-scale-down" style="max-width: 100px; min-width: 50px; min-height:50px;"/>
</div>
</div>
</div>

View File

@@ -2,7 +2,11 @@ export const helper = {
methods: {
generatePrettyTimeAgo(t) {
let differenceInMs = new Date().getTime() - new Date(t).getTime();
if (differenceInMs > 3600000) {
if (differenceInMs > 3*86400000) { // If it was more than 3 days ago, we'll display the number of days ago
let days = (differenceInMs / 86400000).toFixed(0);
return days + " day" + (days !== "1" ? "s" : "") + " ago";
}
if (differenceInMs > 3600000) { // If it was more than 1h ago, display the number of hours ago
let hours = (differenceInMs / 3600000).toFixed(0);
return hours + " hour" + (hours !== "1" ? "s" : "") + " ago";
}

File diff suppressed because one or more lines are too long

View File

@@ -1,3 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><script>window.config = {
logo: "{{ .Logo }}"
};</script><title>{{ .Title }}</title><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" href="/favicon.ico"><script defer="defer" src="/js/chunk-vendors.js" type="module"></script><script defer="defer" src="/js/app.js" type="module"></script><link href="/css/app.css" rel="stylesheet"><script defer="defer" src="/js/chunk-vendors-legacy.js" nomodule></script><script defer="defer" src="/js/app-legacy.js" nomodule></script></head><body class="dark:bg-gray-900"><noscript><strong>Enable JavaScript to view this page.</strong></noscript><div id="app"></div></body></html>
<!doctype html><html lang="en"><head><meta charset="utf-8"/><script>window.config = {logo: "{{ .Logo }}"};</script><title>{{ .Title }}</title><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" href="/favicon.ico"><script defer="defer" type="module" src="/js/chunk-vendors.js"></script><script defer="defer" type="module" src="/js/app.js"></script><link href="/css/app.css" rel="stylesheet"><script defer="defer" src="/js/chunk-vendors-legacy.js" nomodule></script><script defer="defer" src="/js/app-legacy.js" nomodule></script></head><body class="dark:bg-gray-900"><noscript><strong>Enable JavaScript to view this page.</strong></noscript><div id="app"></div></body></html>

Four more file diffs suppressed because one or more lines are too long.