Compare commits

77 Commits (abbreviated SHA1s; the author and date cells of the original table were empty in this capture):

be4e9aba1e, ac0d00fdb5, 3293222cd6, 892f3ada6f, f22a79eb7d, 911deb91d1, bcd4105af3,
423ada68b3, 70fa17349f, e640ede709, fb3447eaf3, 46cf616a57, cf48072167, 97dd868ae8,
c18b2728c9, b3fd290e4d, 89e23a986c, c454c868f6, 6d82a54518, bd3c01a4f4, 43150ae484,
acb6757dc8, 2037d9aca6, c700154f5e, aac72e3741, 1a597f92ba, 56fedcedd1, 6bdce4fe29,
381488a1b2, 42a909c1ad, 5a4fa6f2b0, bbbfe7f466, 7cf1750f86, b88ae5fcf6, 8516c41b43,
b90a64e2a6, 627173e64f, 8b5e5f54cc, 2c95cce7b3, 2ef9329fa6, 9384373f43, d3a81a2d57,
fed32d3909, c1d9006aaf, 7126d36d85, 677c7faffe, 8dedcf7c74, a4c69d6fc3, 943d0a19d1,
fd08c8b1e5, 393147c300, f73e8a56ef, 4203355edc, 5cc1c11b1a, 796228466d, 23ba9795a6,
1291e86a6f, 14316cfd31, 670272f411, ffc3e644c5, bc42d15625, 20594b902c, 0a3267e499,
9c8bf2b69e, bd1eb7c61b, e6335da94f, 1498b6d8a2, 7aed826d65, 9b68582622, a1afeea56b,
38de0ec9cd, 9d8a3f1574, b904afb8b5, 5bf560221f, 574dd50b98, 35c33620a5, fc0c3499f4
.github/FUNDING.yml (vendored, new file)
@@ -0,0 +1 @@
+github: [TwinProduction]
.github/workflows/build.yml (vendored)
@@ -28,6 +28,6 @@ jobs:
         # was configured by the "Set up Go 1.15" step (otherwise, it'd use sudo's "go" executable)
         run: sudo env "PATH=$PATH" "GOROOT=$GOROOT" go test -mod vendor ./... -race -coverprofile=coverage.txt -covermode=atomic
       - name: Codecov
-        uses: codecov/codecov-action@v1.0.14
+        uses: codecov/codecov-action@v1.5.2
        with:
          file: ./coverage.txt
.gitignore (vendored)
@@ -2,4 +2,6 @@
 .vscode
 gatus
 db.db
 config/config.yml
+db.db-shm
+db.db-wal
README.md
@@ -26,7 +26,6 @@ For more details, see [Usage](#usage)
-

 ## Table of Contents

 - [Why Gatus?](#why-gatus)
 - [Features](#features)
 - [Usage](#usage)
@@ -45,8 +44,10 @@ For more details, see [Usage](#usage)
   - [Configuring custom alerts](#configuring-custom-alerts)
 - [Kubernetes (ALPHA)](#kubernetes-alpha)
   - [Auto Discovery](#auto-discovery)
-  - [Deploying](#deploying)
-- [Docker](#docker)
+- [Deployment](#deployment)
+  - [Docker](#docker)
+  - [Helm Chart](#helm-chart)
+  - [Terraform](#terraform)
 - [Running the tests](#running-the-tests)
 - [Using in Production](#using-in-production)
 - [FAQ](#faq)
@@ -64,10 +65,9 @@ For more details, see [Usage](#usage)
 - [Exposing Gatus on a custom port](#exposing-gatus-on-a-custom-port)
 - [Uptime Badges (ALPHA)](#uptime-badges)
 - [API](#API)
-
 - [Sponsors](#sponsors)

 ## Why Gatus?

 Before getting into the specifics, I want to address the most common question:
 > Why would I use Gatus when I can just use Prometheus’ Alertmanager, Cloudwatch or even Splunk?
@@ -86,7 +86,6 @@ fixing the issue before they even know about it.
-

 ## Features

 

 The main features of Gatus are:
@@ -100,7 +99,6 @@ The main features of Gatus are:
-

 ## Usage

 By default, the configuration file is expected to be at `config/config.yaml`.

 You can specify a custom path by setting the `GATUS_CONFIG_FILE` environment variable.
@@ -134,50 +132,46 @@ If you want to test it locally, see [Docker](#docker).

 ## Configuration

 | Parameter | Description | Default |
 |:---- |:---- |:---- |
-| `debug` | Whether to enable debug logs | `false` |
-| `metrics` | Whether to expose metrics at /metrics | `false` |
-| `storage` | Storage configuration | `{}` |
-| `storage.file` | File to persist the data in. If not set, storage is in-memory only. | `""` |
-| `services` | List of services to monitor | Required `[]` |
+| `debug` | Whether to enable debug logs. | `false` |
+| `metrics` | Whether to expose metrics at /metrics. | `false` |
+| `storage` | Storage configuration. See [Storage](#storage). | `{}` |
+| `services` | List of services to monitor. | Required `[]` |
 | `services[].name` | Name of the service. Can be anything. | Required `""` |
 | `services[].group` | Group name. Used to group multiple services together on the dashboard. See [Service groups](#service-groups). | `""` |
-| `services[].url` | URL to send the request to | Required `""` |
-| `services[].method` | Request method | `GET` |
-| `services[].insecure` | Whether to skip verifying the server's certificate chain and host name | `false` |
+| `services[].url` | URL to send the request to. | Required `""` |
+| `services[].method` | Request method. | `GET` |
+| `services[].insecure` | Whether to skip verifying the server's certificate chain and host name. | `false` |
 | `services[].conditions` | Conditions used to determine the health of the service. See [Conditions](#conditions). | `[]` |
-| `services[].interval` | Duration to wait between every status check | `60s` |
-| `services[].graphql` | Whether to wrap the body in a query param (`{"query":"$body"}`) | `false` |
-| `services[].body` | Request body | `""` |
-| `services[].headers` | Request headers | `{}` |
+| `services[].interval` | Duration to wait between every status check. | `60s` |
+| `services[].graphql` | Whether to wrap the body in a query param (`{"query":"$body"}`). | `false` |
+| `services[].body` | Request body. | `""` |
+| `services[].headers` | Request headers. | `{}` |
 | `services[].dns` | Configuration for a service of type DNS. See [Monitoring a service using DNS queries](#monitoring-a-service-using-dns-queries). | `""` |
-| `services[].dns.query-type` | Query type for DNS service | `""` |
-| `services[].dns.query-name` | Query name for DNS service | `""` |
-| `services[].alerts[].type` | Type of alert. Valid types: `slack`, `discord`, `pagerduty`, `twilio`, `mattermost`, `messagebird`, `custom` | Required `""` |
-| `services[].alerts[].enabled` | Whether to enable the alert | `false` |
-| `services[].alerts[].failure-threshold` | Number of failures in a row needed before triggering the alert | `3` |
-| `services[].alerts[].success-threshold` | Number of successes in a row before an ongoing incident is marked as resolved | `2` |
-| `services[].alerts[].send-on-resolved` | Whether to send a notification once a triggered alert is marked as resolved | `false` |
-| `services[].alerts[].description` | Description of the alert. Will be included in the alert sent | `""` |
+| `services[].dns.query-type` | Query type for DNS service. | `""` |
+| `services[].dns.query-name` | Query name for DNS service. | `""` |
+| `services[].alerts[].type` | Type of alert. Valid types: `slack`, `discord`, `pagerduty`, `twilio`, `mattermost`, `messagebird`, `custom`. | Required `""` |
+| `services[].alerts[].enabled` | Whether to enable the alert. | `false` |
+| `services[].alerts[].failure-threshold` | Number of failures in a row needed before triggering the alert. | `3` |
+| `services[].alerts[].success-threshold` | Number of successes in a row before an ongoing incident is marked as resolved. | `2` |
+| `services[].alerts[].send-on-resolved` | Whether to send a notification once a triggered alert is marked as resolved. | `false` |
+| `services[].alerts[].description` | Description of the alert. Will be included in the alert sent. | `""` |
 | `alerting` | Configuration for alerting. See [Alerting](#alerting). | `{}` |
-| `security` | Security configuration | `{}` |
-| `security.basic` | Basic authentication security configuration | `{}` |
-| `security.basic.username` | Username for Basic authentication | Required `""` |
-| `security.basic.password-sha512` | Password's SHA512 hash for Basic authentication | Required `""` |
-| `disable-monitoring-lock` | Whether to [disable the monitoring lock](#disable-monitoring-lock) | `false` |
-| `skip-invalid-config-update` | Whether to ignore invalid configuration update. See [Reloading configuration on the fly](#reloading-configuration-on-the-fly). |
-| `web` | Web configuration | `{}` |
-| `web.address` | Address to listen on | `0.0.0.0` |
-| `web.port` | Port to listen on | `8080` |
+| `security` | Security configuration. | `{}` |
+| `security.basic` | Basic authentication security configuration. | `{}` |
+| `security.basic.username` | Username for Basic authentication. | Required `""` |
+| `security.basic.password-sha512` | Password's SHA512 hash for Basic authentication. | Required `""` |
+| `disable-monitoring-lock` | Whether to [disable the monitoring lock](#disable-monitoring-lock). | `false` |
+| `skip-invalid-config-update` | Whether to ignore invalid configuration update. See [Reloading configuration on the fly](#reloading-configuration-on-the-fly). | `false` |
+| `web` | Web configuration. | `{}` |
+| `web.address` | Address to listen on. | `0.0.0.0` |
+| `web.port` | Port to listen on. | `8080` |

-- For Kubernetes configuration, see [Kubernetes](#kubernetes-alpha).
-- For alerting configuration, see [Alerting](#alerting).
+For Kubernetes configuration, see [Kubernetes](#kubernetes-alpha).


 ### Conditions

 Here are some examples of conditions you can use:

 | Condition | Description | Passing values | Failing values |
@@ -204,7 +198,6 @@ Here are some examples of conditions you can use:
-

 #### Placeholders

 | Placeholder | Description | Example of resolved value |
 |:---- |:---- |:---- |
 | `[STATUS]` | Resolves into the HTTP status of the request | 404 |
@@ -217,7 +210,6 @@ Here are some examples of conditions you can use:
-

 #### Functions

 | Function | Description | Example |
 |:---- |:---- |:---- |
 | `len` | Returns the length of the object/slice. Works only with the `[BODY]` placeholder. | `len([BODY].username) > 8` |
@@ -228,12 +220,29 @@ Here are some examples of conditions you can use:
 **NOTE**: Use `pat` only when you need to. `[STATUS] == pat(2*)` is a lot more expensive than `[STATUS] < 300`.


+### Storage
+
+| Parameter | Description | Default |
+|:---- |:---- |:---- |
+| `storage` | Storage configuration | `{}` |
+| `storage.file` | File to persist the data in. If the type is `inmemory`, data is persisted on interval. | `""` |
+| `storage.type` | Type of storage. Valid types: `inmemory`, `sqlite`. | `"inmemory"` |
+
+- If `storage.type` is `inmemory` (default) and `storage.file` is set to a non-blank value, the data is periodically persisted, but everything remains in memory.
+- If `storage.type` is `sqlite`, `storage.file` must not be blank:
+```yaml
+storage:
+  type: sqlite
+  file: data.db
+```
+See [examples/docker-compose-sqlite-storage](examples/docker-compose-sqlite-storage) for an example.
+
+
 ### Alerting
 Gatus supports multiple alerting providers, such as Slack and PagerDuty, and supports different alerts for each
 individual service with configurable descriptions and thresholds.

-Note that if an alerting provider is not configured properly, all alerts configured with the provider's type will be
+Note that if an alerting provider is not properly configured, all alerts configured with the provider's type will be
 ignored.

 | Parameter | Description | Default |
@@ -273,7 +282,6 @@ ignored.
-

 #### Configuring Slack alerts

 ```yaml
 alerting:
   slack:
@@ -305,7 +313,6 @@ Here's an example of what the notifications look like:
-

 #### Configuring Discord alerts

 ```yaml
 alerting:
   discord:
@@ -328,7 +335,6 @@ services:
-

 #### Configuring PagerDuty alerts

 It is highly recommended to set `services[].alerts[].send-on-resolved` to `true` for alerts
 of type `pagerduty`, because unlike other alerts, the operation resulting from setting said
 parameter to `true` will not create another incident, but mark the incident as resolved on
@@ -358,7 +364,6 @@ services:
-

 #### Configuring Twilio alerts

 ```yaml
 alerting:
   twilio:
@@ -385,7 +390,6 @@ services:
-

 #### Configuring Mattermost alerts

 ```yaml
 alerting:
   mattermost:
@@ -440,7 +444,6 @@ services:
-

 #### Configuring Telegram alerts

 ```yaml
 alerting:
   telegram:
@@ -466,7 +469,6 @@ Here's an example of what the notifications look like:
-

 #### Configuring custom alerts

 While they're called alerts, you can use this feature to call anything.

 For instance, you could automate rollbacks by having an application that keeps track of new deployments, and by
@@ -524,7 +526,6 @@ As a result, the `[ALERT_TRIGGERED_OR_RESOLVED]` in the body of first example of
-

 #### Setting a default provider alert

 While you can specify the alert configuration directly in the service definition, it's tedious and may lead to a very
 long configuration file.
@@ -577,9 +578,10 @@ services:


 ### Kubernetes (ALPHA)

 > **WARNING**: This feature is in ALPHA. This means that it is very likely to change in the near future, which means that
 > while you can use this feature as you see fit, there may be breaking changes in future releases.
+>
+> **NOTICE**: This feature may be removed. To give your opinion on the subject, see https://github.com/TwinProduction/gatus/discussions/135.

 | Parameter | Description | Default |
 |:---- |:---- |:---- |
@@ -596,7 +598,6 @@ services:
-

 #### Auto Discovery

 Auto discovery works by reading all `Service` resources from the configured `namespaces` and appending the `hostname-suffix` as
 well as the configured `target-path` to the service name and making an HTTP call.
@@ -635,13 +636,14 @@ Note that `hostname-suffix` could also be something like `.yourdomain.com`, in w
 monitored would be `potato.example.com/health`, assuming you have a service named `potato` and a matching ingress
 to map `potato.example.com` to the `potato` service.

 #### Deploying

-See [example/kubernetes-with-auto-discovery](example/kubernetes-with-auto-discovery)
+For a full example, see [examples/kubernetes-with-auto-discovery](examples/kubernetes-with-auto-discovery)


-## Docker
+## Deployment
+Many examples can be found in the [examples](examples) folder, but this section will focus on the most popular ways of deploying Gatus.
+
+
+### Docker
 To run Gatus locally with Docker:
 ```
 docker run -p 8080:8080 --name gatus twinproduction/gatus
@@ -665,22 +667,37 @@ docker build . -t twinproduction/gatus
 ```


+### Helm Chart
+[Helm](https://helm.sh) must be installed to use the chart.
+Please refer to Helm's [documentation](https://helm.sh/docs/) to get started.
+
+Once Helm is set up properly, add the repository as follows:
+
+```console
+helm repo add gatus https://avakarev.github.io/gatus-chart
+```
+
+To get more details, please check the chart's [configuration](https://github.com/avakarev/gatus-chart#configuration)
+and [helmfile example](https://github.com/avakarev/gatus-chart#helmfileyaml-example)
+
+
+### Terraform
+Gatus can be deployed on Terraform by using the following module: [terraform-kubernetes-gatus](https://github.com/TwinProduction/terraform-kubernetes-gatus).
+
+
 ## Running the tests
 ```
 go test ./... -mod vendor
 ```


 ## Using in Production

-See the [example](example) folder.
+See the [Deployment](#deployment) section.


 ## FAQ

 ### Sending a GraphQL request

 By setting `services[].graphql` to true, the body will automatically be wrapped by the standard GraphQL `query` parameter.

 For instance, the following configuration:
@@ -711,9 +728,8 @@ will send a `POST` request to `http://localhost:8080/playground` with the follow


 ### Recommended interval

-**NOTE**: This does not _really_ apply if `disable-monitoring-lock` is set to `true`, as the monitoring lock is what
-tells Gatus to only evaluate one service at a time.
+> **NOTE**: This does not _really_ apply if `disable-monitoring-lock` is set to `true`, as the monitoring lock is what
+> tells Gatus to only evaluate one service at a time.

 To ensure that Gatus provides reliable and accurate results (i.e. response time), Gatus only evaluates one service at a time.
 In other words, even if you have multiple services with the exact same interval, they will not execute at the same time.
@@ -743,7 +759,6 @@ simple health checks used for alerting (PagerDuty/Twilio) to `30s`.
-

 ### Default timeouts

 | Protocol | Timeout |
 |:-------- |:------- |
 | HTTP     | 10s     |
@@ -772,7 +787,6 @@ established.
-

 ### Monitoring a service using ICMP

 By prefixing `services[].url` with `icmp://`, you can monitor services at a very basic level using ICMP, more
 commonly known as "ping" or "echo":

@@ -789,7 +803,6 @@ You can specify a domain prefixed by `icmp://`, or an IP address prefixed by `ic
-

 ### Monitoring a service using DNS queries

 Defining a `dns` configuration in a service will automatically mark that service as a service of type DNS:
 ```yaml
 services:
@@ -811,7 +824,6 @@ There are two placeholders that can be used in the conditions for services of ty
-

 ### Monitoring a service using STARTTLS

 If you have an email server that you want to ensure there are no problems with, monitoring it through STARTTLS
 will serve as a good initial indicator:
 ```yaml
@@ -826,7 +838,6 @@ services:
-

 ### Basic authentication

 You can require Basic authentication by leveraging the `security.basic` configuration:
 ```yaml
 security:
@@ -839,7 +850,6 @@ The example above will require that you authenticate with the username `john.doe
-

 ### disable-monitoring-lock

 Setting `disable-monitoring-lock` to `true` means that multiple services could be monitored at the same time.

 While this behavior wouldn't generally be harmful, conditions using the `[RESPONSE_TIME]` placeholder could be impacted
@@ -853,7 +863,6 @@ technically, if you create 100 services with a 1 second interval, Gatus will sen
-

 ### Reloading configuration on the fly

 For the sake of convenience, Gatus automatically reloads the configuration on the fly if the loaded configuration file
 is updated while Gatus is running.

@@ -877,7 +886,6 @@ the same as restarting the application.
-

 ### Service groups

 Service groups are used for grouping multiple services together on the dashboard.

 ```yaml
@@ -923,7 +931,6 @@ The configuration above will result in a dashboard that looks like this:
-

 ### Exposing Gatus on a custom port

 By default, Gatus is exposed on port `8080`, but you may specify a different port by setting the `web.port` parameter:
 ```yaml
 web:
@@ -993,3 +1000,10 @@ Gzip compression will be used if the `Accept-Encoding` HTTP header contains `gzi

 The API will return a JSON payload with the `Content-Type` response header set to `application/json`.
 No such header is required to query the API.
+
+
+## Sponsors
+You can find the full list of sponsors [here](https://github.com/sponsors/TwinProduction).
+
+[<img src="https://github.com/math280h.png" width="35" />](https://github.com/math280h)
+[<img src="https://github.com/mateothegreat.png" width="35" />](https://github.com/mateothegreat)
@@ -7,8 +7,7 @@ import (
     "net"
     "net/http"
     "net/smtp"
-    "os"
-    "strconv"
+    "runtime"
     "strings"
     "time"

@@ -27,16 +26,6 @@ var (
     httpTimeout = 10 * time.Second
 )

-func init() {
-    // XXX: This is an undocumented feature. See https://github.com/TwinProduction/gatus/issues/104.
-    httpTimeoutInSecondsFromEnvironmentVariable := os.Getenv("HTTP_CLIENT_TIMEOUT_IN_SECONDS")
-    if len(httpTimeoutInSecondsFromEnvironmentVariable) > 0 {
-        if httpTimeoutInSeconds, err := strconv.Atoi(httpTimeoutInSecondsFromEnvironmentVariable); err == nil {
-            httpTimeout = time.Duration(httpTimeoutInSeconds) * time.Second
-        }
-    }
-}
-
 // GetHTTPClient returns the shared HTTP client
 func GetHTTPClient(insecure bool) *http.Client {
     if insecure {
@@ -51,6 +40,9 @@ func GetHTTPClient(insecure bool) *http.Client {
                     InsecureSkipVerify: true,
                 },
             },
+            CheckRedirect: func(req *http.Request, via []*http.Request) error {
+                return http.ErrUseLastResponse // Don't follow redirects
+            },
         }
     }
     return insecureHTTPClient
@@ -63,6 +55,9 @@ func GetHTTPClient(insecure bool) *http.Client {
             MaxIdleConnsPerHost: 20,
             Proxy:               http.ProxyFromEnvironment,
         },
+        CheckRedirect: func(req *http.Request, via []*http.Request) error {
+            return http.ErrUseLastResponse // Don't follow redirects
+        },
     }
     return secureHTTPClient
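A note on the two hunks above: returning `http.ErrUseLastResponse` from `CheckRedirect` makes the client hand back the redirect response itself rather than following it, so a condition such as `[STATUS] == 301` is evaluated against the first hop. A minimal, self-contained sketch of the behaviour (the target URL is only an illustration):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Returning http.ErrUseLastResponse from CheckRedirect tells the client
	// to stop and return the most recent response instead of following the
	// redirect, so a 3xx status stays observable.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	resp, err := client.Get("http://example.org/") // illustrative target
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}
```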
@@ -113,7 +108,9 @@ func Ping(address string) (bool, time.Duration) {
     }
     pinger.Count = 1
     pinger.Timeout = pingTimeout
-    pinger.SetPrivileged(true)
+    // Set the pinger's privileged mode to true for every operating system except darwin
+    // https://github.com/TwinProduction/gatus/issues/132
+    pinger.SetPrivileged(runtime.GOOS != "darwin")
     err = pinger.Run()
     if err != nil {
         return false, 0
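Context for the darwin special-case: with privileged mode disabled, the ping library can fall back to unprivileged UDP datagram sockets, which macOS permits without root, whereas raw ICMP sockets require elevated privileges — that reading is inferred from the linked issue, not stated in the diff. A trivial sketch of the new switch in isolation:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Mirrors the new logic: privileged (raw-socket) ICMP everywhere except
	// macOS, where unprivileged pings are expected to work without root.
	privileged := runtime.GOOS != "darwin"
	fmt.Println("privileged ICMP:", privileged)
}
```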
@@ -99,3 +99,9 @@ func TestCanPerformStartTLS(t *testing.T) {
         })
     }
 }
+
+func TestCanCreateTCPConnection(t *testing.T) {
+    if CanCreateTCPConnection("127.0.0.1") {
+        t.Error("should've failed, because there's no port in the address")
+    }
+}
config.yaml
@@ -10,14 +10,15 @@ services:

   - name: back-end
     group: core
-    url: "http://example.org/"
+    url: "https://example.org/"
     interval: 5m
     conditions:
       - "[STATUS] == 200"
       - "[CERTIFICATE_EXPIRATION] > 48h"

   - name: monitoring
     group: internal
-    url: "http://example.com/"
+    url: "https://example.org/"
     interval: 5m
     conditions:
       - "[STATUS] == 200"
@@ -29,14 +30,6 @@ services:
     conditions:
       - "[STATUS] == 200"

-  - name: cat-fact
-    url: "https://cat-fact.herokuapp.com/facts/random"
-    interval: 5m
-    conditions:
-      - "[STATUS] == 200"
-      - "[BODY].deleted == false"
-      - "len([BODY].text) > 0"
-
   - name: example-dns-query
     url: "8.8.8.8" # Address of the DNS server to use
     interval: 5m
@@ -14,7 +14,6 @@ import (
     "github.com/TwinProduction/gatus/k8s"
     "github.com/TwinProduction/gatus/security"
     "github.com/TwinProduction/gatus/storage"
-    "github.com/TwinProduction/gatus/util"
     "gopkg.in/yaml.v2"
 )

@@ -177,7 +176,9 @@ func parseAndValidateConfigBytes(yamlBytes []byte) (config *Config, err error) {

 func validateStorageConfig(config *Config) error {
     if config.Storage == nil {
-        config.Storage = &storage.Config{}
+        config.Storage = &storage.Config{
+            Type: storage.TypeInMemory,
+        }
     }
     err := storage.Initialize(config.Storage)
     if err != nil {
@@ -186,7 +187,7 @@ func validateStorageConfig(config *Config) error {
     // Remove all ServiceStatus that represent services which no longer exist in the configuration
     var keys []string
     for _, service := range config.Services {
-        keys = append(keys, util.ConvertGroupAndServiceToKey(service.Group, service.Name))
+        keys = append(keys, service.Key())
     }
     numberOfServiceStatusesDeleted := storage.Get().DeleteAllServiceStatusesNotInKeys(keys)
     if numberOfServiceStatusesDeleted > 0 {
@@ -208,6 +209,7 @@ func validateWebConfig(config *Config) error {
 // I don't like the current implementation.
 func validateKubernetesConfig(config *Config) error {
     if config.Kubernetes != nil && config.Kubernetes.AutoDiscover {
+        log.Println("WARNING - The Kubernetes integration is planned to be removed in v3.0.0. If you're seeing this message, it's because you're currently using it, and you may want to give your opinion at https://github.com/TwinProduction/gatus/discussions/135")
         if config.Kubernetes.ServiceTemplate == nil {
             return errors.New("kubernetes.service-template cannot be nil")
         }
@@ -1028,6 +1028,49 @@ services:
     }
 }

+func TestParseAndValidateConfigBytesWithInvalidServiceName(t *testing.T) {
+    _, err := parseAndValidateConfigBytes([]byte(`
+services:
+  - name: ""
+    url: https://twinnation.org/health
+    conditions:
+      - "[STATUS] == 200"
+`))
+    if err != core.ErrServiceWithNoName {
+        t.Error("should've returned an error")
+    }
+}
+
+func TestParseAndValidateConfigBytesWithInvalidStorageConfig(t *testing.T) {
+    _, err := parseAndValidateConfigBytes([]byte(`
+storage:
+  type: sqlite
+services:
+  - name: example
+    url: https://example.org
+    conditions:
+      - "[STATUS] == 200"
+`))
+    if err == nil {
+        t.Error("should've returned an error, because a file must be specified for a storage of type sqlite")
+    }
+}
+
+func TestParseAndValidateConfigBytesWithInvalidYAML(t *testing.T) {
+    _, err := parseAndValidateConfigBytes([]byte(`
+storage:
+  invalid yaml
+services:
+  - name: example
+    url: https://example.org
+    conditions:
+      - "[STATUS] == 200"
+`))
+    if err == nil {
+        t.Error("should've returned an error")
+    }
+}
+
 func TestParseAndValidateConfigBytesWithInvalidSecurityConfig(t *testing.T) {
     _, err := parseAndValidateConfigBytes([]byte(`
 security:
@@ -1041,7 +1084,7 @@ services:
 `))
     if err == nil {
-        t.Error("Function should've returned an error")
+        t.Error("should've returned an error")
     }
 }

@@ -1173,7 +1216,7 @@ kubernetes:
   target-path: "/health"
 `))
     if err == nil {
-        t.Error("Function should've returned an error because providing a service-template is mandatory")
+        t.Error("should've returned an error because providing a service-template is mandatory")
     }
 }

@@ -1192,7 +1235,7 @@ kubernetes:
   target-path: "/health"
 `))
     if err == nil {
-        t.Error("Function should've returned an error because testing with ClusterModeIn isn't supported")
+        t.Error("should've returned an error because testing with ClusterModeIn isn't supported")
     }
 }
@@ -8,6 +8,7 @@ import (

     "github.com/TwinProduction/gatus/core"
     "github.com/TwinProduction/gatus/storage"
+    "github.com/TwinProduction/gatus/storage/store/paging"
     "github.com/gorilla/mux"
 )

@@ -25,7 +26,7 @@ func badgeHandler(writer http.ResponseWriter, request *http.Request) {
     }
     identifier := variables["identifier"]
     key := strings.TrimSuffix(identifier, ".svg")
-    serviceStatus := storage.Get().GetServiceStatusByKey(key)
+    serviceStatus := storage.Get().GetServiceStatusByKey(key, paging.NewServiceStatusParams().WithUptime())
     if serviceStatus == nil {
         writer.WriteHeader(http.StatusNotFound)
         _, _ = writer.Write([]byte("Requested service not found"))
@@ -13,8 +13,10 @@ import (
     "time"

     "github.com/TwinProduction/gatus/config"
+    "github.com/TwinProduction/gatus/core"
     "github.com/TwinProduction/gatus/security"
     "github.com/TwinProduction/gatus/storage"
+    "github.com/TwinProduction/gatus/storage/store/paging"
     "github.com/TwinProduction/gocache"
     "github.com/TwinProduction/health"
     "github.com/gorilla/mux"
@@ -37,12 +39,6 @@
     server *http.Server
 )

-func init() {
-    if err := cache.StartJanitor(); err != nil {
-        log.Fatal("[controller][init] Failed to start cache janitor:", err.Error())
-    }
-}
-
 // Handle creates the router and starts the server
 func Handle(securityConfig *security.Config, webConfig *config.WebConfig, enableMetrics bool) {
     var router http.Handler = CreateRouter(securityConfig, enableMetrics)
@@ -115,7 +111,7 @@ func serviceStatusesHandler(writer http.ResponseWriter, r *http.Request) {
     var err error
     buffer := &bytes.Buffer{}
     gzipWriter := gzip.NewWriter(buffer)
-    data, err = json.Marshal(storage.Get().GetAllServiceStatusesWithResultPagination(page, pageSize))
+    data, err = json.Marshal(storage.Get().GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(page, pageSize)))
     if err != nil {
         log.Printf("[controller][serviceStatusesHandler] Unable to marshal object to JSON: %s", err.Error())
         writer.WriteHeader(http.StatusInternalServerError)
@@ -142,7 +138,7 @@ func serviceStatusesHandler(writer http.ResponseWriter, r *http.Request) {
 func serviceStatusHandler(writer http.ResponseWriter, r *http.Request) {
     page, pageSize := extractPageAndPageSizeFromRequest(r)
     vars := mux.Vars(r)
-    serviceStatus := storage.Get().GetServiceStatusByKey(vars["key"])
+    serviceStatus := storage.Get().GetServiceStatusByKey(vars["key"], paging.NewServiceStatusParams().WithResults(page, pageSize).WithEvents(1, core.MaximumNumberOfEvents).WithUptime())
     if serviceStatus == nil {
         log.Printf("[controller][serviceStatusHandler] Service with key=%s not found", vars["key"])
         writer.WriteHeader(http.StatusNotFound)
@@ -150,7 +146,7 @@ func serviceStatusHandler(writer http.ResponseWriter, r *http.Request) {
         return
     }
     data := map[string]interface{}{
-        "serviceStatus": serviceStatus.WithResultPagination(page, pageSize),
+        "serviceStatus": serviceStatus,
         // The following fields, while present on core.ServiceStatus, are annotated to remain hidden so that we can
         // expose only the necessary data on /api/v1/statuses.
         // Since the /api/v1/statuses/{key} endpoint does need this data, however, we explicitly expose it here
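The `paging.NewServiceStatusParams()` calls introduced above follow a builder pattern: each `With*` method opts into one slice of data (results, events, uptime). A hedged sketch of how a caller composes them — the helper name is hypothetical and the return type is inferred from the call sites in this diff:

```go
package controller

import (
	"github.com/TwinProduction/gatus/core"
	"github.com/TwinProduction/gatus/storage"
	"github.com/TwinProduction/gatus/storage/store/paging"
)

// fetchDetailedStatus is a hypothetical helper mirroring serviceStatusHandler;
// only the builder methods visible in this diff are used.
func fetchDetailedStatus(key string, page, pageSize int) *core.ServiceStatus {
	params := paging.NewServiceStatusParams().
		WithResults(page, pageSize).               // paginated evaluation results
		WithEvents(1, core.MaximumNumberOfEvents). // most recent events
		WithUptime()                               // 1h/24h/7d uptime figures
	return storage.Get().GetServiceStatusByKey(key, params)
}
```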
@@ -3,6 +3,8 @@ package controller
 import (
     "net/http"
     "strconv"
+
+    "github.com/TwinProduction/gatus/core"
 )

 const (
@@ -13,7 +15,7 @@ const (
     DefaultPageSize = 20

     // MaximumPageSize is the maximum page size allowed
-    MaximumPageSize = 100
+    MaximumPageSize = core.MaximumNumberOfResults
 )

 func extractPageAndPageSizeFromRequest(r *http.Request) (page int, pageSize int) {
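The body of `extractPageAndPageSizeFromRequest` is cut off by the hunk boundary; for orientation, here is a hedged sketch of what such a helper typically does with the constants above. The query-parameter names and the fallback to page 1 are assumptions, not taken from this diff:

```go
package controller

import (
	"net/http"
	"strconv"
)

// extractPageAndPageSizeFromRequestSketch is a hypothetical reconstruction;
// the real function's body is not visible in this diff.
func extractPageAndPageSizeFromRequestSketch(r *http.Request) (page, pageSize int) {
	page, err := strconv.Atoi(r.URL.Query().Get("page")) // parameter name assumed
	if err != nil || page < 1 {
		page = 1 // assumed default page
	}
	pageSize, err = strconv.Atoi(r.URL.Query().Get("pageSize")) // parameter name assumed
	if err != nil || pageSize < 1 {
		pageSize = DefaultPageSize
	}
	if pageSize > MaximumPageSize {
		pageSize = MaximumPageSize // clamp to core.MaximumNumberOfResults
	}
	return page, pageSize
}
```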
@@ -24,3 +24,14 @@ var (
     // EventUnhealthy is a type of event that represents a service failing one or more of its conditions
     EventUnhealthy EventType = "UNHEALTHY"
 )
+
+// NewEventFromResult creates an Event from a Result
+func NewEventFromResult(result *Result) *Event {
+    event := &Event{Timestamp: result.Timestamp}
+    if result.Success {
+        event.Type = EventHealthy
+    } else {
+        event.Type = EventUnhealthy
+    }
+    return event
+}
@@ -25,7 +25,7 @@ type Result struct {
     Duration time.Duration `json:"duration"`

     // Errors encountered during the evaluation of the service's health
-    Errors []string `json:"errors"` // XXX: find a way to filter out duplicate errors
+    Errors []string `json:"errors"`

     // ConditionResults results of the service's conditions
     ConditionResults []*ConditionResult `json:"conditionResults"`
@@ -1,114 +0,0 @@ (file deleted)
-package core
-
-import (
-    "time"
-
-    "github.com/TwinProduction/gatus/util"
-)
-
-const (
-    // MaximumNumberOfResults is the maximum number of results that ServiceStatus.Results can have
-    MaximumNumberOfResults = 100
-
-    // MaximumNumberOfEvents is the maximum number of events that ServiceStatus.Events can have
-    MaximumNumberOfEvents = 50
-)
-
-// ServiceStatus contains the evaluation Results of a Service
-type ServiceStatus struct {
-    // Name of the service
-    Name string `json:"name,omitempty"`
-
-    // Group the service is a part of. Used for grouping multiple services together on the front end.
-    Group string `json:"group,omitempty"`
-
-    // Key is the key representing the ServiceStatus
-    Key string `json:"key"`
-
-    // Results is the list of service evaluation results
-    Results []*Result `json:"results"`
-
-    // Events is a list of events
-    //
-    // We don't expose this through JSON, because the main dashboard doesn't need to have this data.
-    // However, the detailed service page does leverage this by including it to a map that will be
-    // marshalled alongside the ServiceStatus.
-    Events []*Event `json:"-"`
-
-    // Uptime information on the service's uptime
-    //
-    // We don't expose this through JSON, because the main dashboard doesn't need to have this data.
-    // However, the detailed service page does leverage this by including it to a map that will be
-    // marshalled alongside the ServiceStatus.
-    Uptime *Uptime `json:"-"`
-}
-
-// NewServiceStatus creates a new ServiceStatus
-func NewServiceStatus(service *Service) *ServiceStatus {
-    return &ServiceStatus{
-        Name:    service.Name,
-        Group:   service.Group,
-        Key:     util.ConvertGroupAndServiceToKey(service.Group, service.Name),
-        Results: make([]*Result, 0),
-        Events: []*Event{{
-            Type:      EventStart,
-            Timestamp: time.Now(),
-        }},
-        Uptime: NewUptime(),
-    }
-}
-
-// WithResultPagination returns a shallow copy of the ServiceStatus with only the results
-// within the range defined by the page and pageSize parameters
-func (ss ServiceStatus) WithResultPagination(page, pageSize int) *ServiceStatus {
-    shallowCopy := ss
-    numberOfResults := len(shallowCopy.Results)
-    start := numberOfResults - (page * pageSize)
-    end := numberOfResults - ((page - 1) * pageSize)
-    if start > numberOfResults {
-        start = -1
-    } else if start < 0 {
-        start = 0
-    }
-    if end > numberOfResults {
-        end = numberOfResults
-    }
-    if start < 0 || end < 0 {
-        shallowCopy.Results = []*Result{}
-    } else {
-        shallowCopy.Results = shallowCopy.Results[start:end]
-    }
-    return &shallowCopy
-}
-
-// AddResult adds a Result to ServiceStatus.Results and makes sure that there are
-// no more than 20 results in the Results slice
-func (ss *ServiceStatus) AddResult(result *Result) {
-    if len(ss.Results) > 0 {
-        // Check if there's any change since the last result
-        // OR there's only 1 event, which only happens when there's a start event
-        if ss.Results[len(ss.Results)-1].Success != result.Success || len(ss.Events) == 1 {
-            event := &Event{Timestamp: result.Timestamp}
-            if result.Success {
-                event.Type = EventHealthy
-            } else {
-                event.Type = EventUnhealthy
-            }
-            ss.Events = append(ss.Events, event)
-            if len(ss.Events) > MaximumNumberOfEvents {
-                // Doing ss.Events[1:] would usually be sufficient, but in the case where for some reason, the slice has
-                // more than one extra element, we can get rid of all of them at once and thus returning the slice to a
-                // length of MaximumNumberOfEvents by using ss.Events[len(ss.Events)-MaximumNumberOfEvents:] instead
-                ss.Events = ss.Events[len(ss.Events)-MaximumNumberOfEvents:]
-            }
-        }
-    }
-    ss.Results = append(ss.Results, result)
-    if len(ss.Results) > MaximumNumberOfResults {
-        // Doing ss.Results[1:] would usually be sufficient, but in the case where for some reason, the slice has more
-        // than one extra element, we can get rid of all of them at once and thus returning the slice to a length of
-        // MaximumNumberOfResults by using ss.Results[len(ss.Results)-MaximumNumberOfResults:] instead
-        ss.Results = ss.Results[len(ss.Results)-MaximumNumberOfResults:]
-    }
-    ss.Uptime.ProcessResult(result)
-}
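When reviewing the deleted `WithResultPagination`, a worked example helps with the off-by-one handling: the bounds count backward from the newest result, so with 25 results, page 2 and page size 10 it selects `Results[5:15]` — which matches the expectations in the deleted tests below.

```go
package main

import "fmt"

func main() {
	// Bounds exactly as computed by the deleted WithResultPagination,
	// for 25 stored results, page 2, page size 10:
	numberOfResults := 25
	page, pageSize := 2, 10
	start := numberOfResults - (page * pageSize)     // 25 - 20 = 5
	end := numberOfResults - ((page - 1) * pageSize) // 25 - 10 = 15
	fmt.Printf("Results[%d:%d]\n", start, end)       // Results[5:15] -> 10 elements
}
```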
@@ -1,92 +0,0 @@ (file deleted)
-package core
-
-import (
-    "testing"
-    "time"
-)
-
-var (
-    firstCondition  = Condition("[STATUS] == 200")
-    secondCondition = Condition("[RESPONSE_TIME] < 500")
-    thirdCondition  = Condition("[CERTIFICATE_EXPIRATION] < 72h")
-
-    timestamp = time.Now()
-
-    testService = Service{
-        Name:                    "name",
-        Group:                   "group",
-        URL:                     "https://example.org/what/ever",
-        Method:                  "GET",
-        Body:                    "body",
-        Interval:                30 * time.Second,
-        Conditions:              []*Condition{&firstCondition, &secondCondition, &thirdCondition},
-        Alerts:                  nil,
-        Insecure:                false,
-        NumberOfFailuresInARow:  0,
-        NumberOfSuccessesInARow: 0,
-    }
-    testSuccessfulResult = Result{
-        Hostname:              "example.org",
-        IP:                    "127.0.0.1",
-        HTTPStatus:            200,
-        body:                  []byte("body"),
-        Errors:                nil,
-        Connected:             true,
-        Success:               true,
-        Timestamp:             timestamp,
-        Duration:              150 * time.Millisecond,
-        CertificateExpiration: 10 * time.Hour,
-        ConditionResults: []*ConditionResult{
-            {
-                Condition: "[STATUS] == 200",
-                Success:   true,
-            },
-            {
-                Condition: "[RESPONSE_TIME] < 500",
-                Success:   true,
-            },
-            {
-                Condition: "[CERTIFICATE_EXPIRATION] < 72h",
-                Success:   true,
-            },
-        },
-    }
-    testUnsuccessfulResult = Result{
-        Hostname:              "example.org",
-        IP:                    "127.0.0.1",
-        HTTPStatus:            200,
-        body:                  []byte("body"),
-        Errors:                []string{"error-1", "error-2"},
-        Connected:             true,
-        Success:               false,
-        Timestamp:             timestamp,
-        Duration:              750 * time.Millisecond,
-        CertificateExpiration: 10 * time.Hour,
-        ConditionResults: []*ConditionResult{
-            {
-                Condition: "[STATUS] == 200",
-                Success:   true,
-            },
-            {
-                Condition: "[RESPONSE_TIME] < 500",
-                Success:   false,
-            },
-            {
-                Condition: "[CERTIFICATE_EXPIRATION] < 72h",
-                Success:   false,
-            },
-        },
-    }
-)
-
-func BenchmarkServiceStatus_WithResultPagination(b *testing.B) {
-    service := &testService
-    serviceStatus := NewServiceStatus(service)
-    for i := 0; i < MaximumNumberOfResults; i++ {
-        serviceStatus.AddResult(&testSuccessfulResult)
-    }
-    for n := 0; n < b.N; n++ {
-        serviceStatus.WithResultPagination(1, 20)
-    }
-    b.ReportAllocs()
-}
@@ -1,66 +0,0 @@ (file deleted)
-package core
-
-import (
-    "testing"
-    "time"
-)
-
-func TestNewServiceStatus(t *testing.T) {
-    service := &Service{Name: "name", Group: "group"}
-    serviceStatus := NewServiceStatus(service)
-    if serviceStatus.Name != service.Name {
-        t.Errorf("expected %s, got %s", service.Name, serviceStatus.Name)
-    }
-    if serviceStatus.Group != service.Group {
-        t.Errorf("expected %s, got %s", service.Group, serviceStatus.Group)
-    }
-    if serviceStatus.Key != "group_name" {
-        t.Errorf("expected %s, got %s", "group_name", serviceStatus.Key)
-    }
-}
-
-func TestServiceStatus_AddResult(t *testing.T) {
-    service := &Service{Name: "name", Group: "group"}
-    serviceStatus := NewServiceStatus(service)
-    for i := 0; i < MaximumNumberOfResults+10; i++ {
-        serviceStatus.AddResult(&Result{Timestamp: time.Now()})
-    }
-    if len(serviceStatus.Results) != MaximumNumberOfResults {
-        t.Errorf("expected serviceStatus.Results to not exceed a length of %d", MaximumNumberOfResults)
-    }
-}
-
-func TestServiceStatus_WithResultPagination(t *testing.T) {
-    service := &Service{Name: "name", Group: "group"}
-    serviceStatus := NewServiceStatus(service)
-    for i := 0; i < 25; i++ {
-        serviceStatus.AddResult(&Result{Timestamp: time.Now()})
-    }
-    if len(serviceStatus.WithResultPagination(1, 1).Results) != 1 {
-        t.Errorf("expected to have 1 result")
-    }
-    if len(serviceStatus.WithResultPagination(5, 0).Results) != 0 {
-        t.Errorf("expected to have 0 results")
-    }
-    if len(serviceStatus.WithResultPagination(-1, 20).Results) != 0 {
-        t.Errorf("expected to have 0 result, because the page was invalid")
-    }
-    if len(serviceStatus.WithResultPagination(1, -1).Results) != 0 {
-        t.Errorf("expected to have 0 result, because the page size was invalid")
-    }
-    if len(serviceStatus.WithResultPagination(1, 10).Results) != 10 {
-        t.Errorf("expected to have 10 results, because given a page size of 10, page 1 should have 10 elements")
-    }
-    if len(serviceStatus.WithResultPagination(2, 10).Results) != 10 {
-        t.Errorf("expected to have 10 results, because given a page size of 10, page 2 should have 10 elements")
-    }
-    if len(serviceStatus.WithResultPagination(3, 10).Results) != 5 {
-        t.Errorf("expected to have 5 results, because given a page size of 10, page 3 should have 5 elements")
-    }
-    if len(serviceStatus.WithResultPagination(4, 10).Results) != 0 {
-        t.Errorf("expected to have 0 results, because given a page size of 10, page 4 should have 0 elements")
-    }
-    if len(serviceStatus.WithResultPagination(1, 50).Results) != 25 {
-        t.Errorf("expected to have 25 results, because there's only 25 results")
-    }
-}
@@ -14,6 +14,7 @@ import (

     "github.com/TwinProduction/gatus/alerting/alert"
     "github.com/TwinProduction/gatus/client"
+    "github.com/TwinProduction/gatus/util"
 )

 const (
@@ -135,6 +136,11 @@ func (service *Service) ValidateAndSetDefaults() error {
     return nil
 }

+// Key returns the unique key for the Service
+func (service Service) Key() string {
+    return util.ConvertGroupAndServiceToKey(service.Group, service.Name)
+}
+
 // EvaluateHealth sends a request to the service's URL and evaluates the conditions of the service.
 func (service *Service) EvaluateHealth() *Result {
     result := &Result{Success: true, Errors: []string{}}
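The new `Key()` method delegates to `util.ConvertGroupAndServiceToKey`, whose implementation is not part of this diff. Judging from the tests (group `group` and name `name` yield `group_name`), it joins the group and service name with an underscore; the sketch below re-implements only that visible behaviour and ignores any sanitization the real helper may perform:

```go
package main

import "fmt"

// convertGroupAndServiceToKey is an illustrative re-implementation based on
// the test expectation "group" + "name" => "group_name". The real
// util.ConvertGroupAndServiceToKey may additionally sanitize characters;
// that detail is not visible in this diff.
func convertGroupAndServiceToKey(group, service string) string {
	return fmt.Sprintf("%s_%s", group, service)
}

func main() {
	fmt.Println(convertGroupAndServiceToKey("group", "name")) // group_name
}
```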
core/service_status.go (new file)
@@ -0,0 +1,50 @@
+package core
+
+const (
+    // MaximumNumberOfResults is the maximum number of results that ServiceStatus.Results can have
+    MaximumNumberOfResults = 100
+
+    // MaximumNumberOfEvents is the maximum number of events that ServiceStatus.Events can have
+    MaximumNumberOfEvents = 50
+)
+
+// ServiceStatus contains the evaluation Results of a Service
+type ServiceStatus struct {
+    // Name of the service
+    Name string `json:"name,omitempty"`
+
+    // Group the service is a part of. Used for grouping multiple services together on the front end.
+    Group string `json:"group,omitempty"`
+
+    // Key is the key representing the ServiceStatus
+    Key string `json:"key"`
+
+    // Results is the list of service evaluation results
+    Results []*Result `json:"results"`
+
+    // Events is a list of events
+    //
+    // We don't expose this through JSON, because the main dashboard doesn't need to have this data.
+    // However, the detailed service page does leverage this by including it to a map that will be
+    // marshalled alongside the ServiceStatus.
+    Events []*Event `json:"-"`
+
+    // Uptime information on the service's uptime
+    //
+    // We don't expose this through JSON, because the main dashboard doesn't need to have this data.
+    // However, the detailed service page does leverage this by including it to a map that will be
+    // marshalled alongside the ServiceStatus.
+    Uptime *Uptime `json:"-"`
+}
+
+// NewServiceStatus creates a new ServiceStatus
+func NewServiceStatus(serviceKey, serviceGroup, serviceName string) *ServiceStatus {
+    return &ServiceStatus{
+        Name:    serviceName,
+        Group:   serviceGroup,
+        Key:     serviceKey,
+        Results: make([]*Result, 0),
+        Events:  make([]*Event, 0),
+        Uptime:  NewUptime(),
+    }
+}
core/service_status_test.go (new file)
@@ -0,0 +1,19 @@
+package core
+
+import (
+    "testing"
+)
+
+func TestNewServiceStatus(t *testing.T) {
+    service := &Service{Name: "name", Group: "group"}
+    serviceStatus := NewServiceStatus(service.Key(), service.Group, service.Name)
+    if serviceStatus.Name != service.Name {
+        t.Errorf("expected %s, got %s", service.Name, serviceStatus.Name)
+    }
+    if serviceStatus.Group != service.Group {
+        t.Errorf("expected %s, got %s", service.Group, serviceStatus.Group)
+    }
+    if serviceStatus.Key != "group_name" {
+        t.Errorf("expected %s, got %s", "group_name", serviceStatus.Key)
+    }
+}
core/uptime.go
@@ -1,7 +1,6 @@
 package core

 import (
-    "log"
     "time"
 )

@@ -44,109 +43,3 @@ func NewUptime() *Uptime {
         HourlyStatistics: make(map[int64]*HourlyUptimeStatistics),
     }
 }
-
-// ProcessResult processes the result by extracting the relevant data from the result and recalculating the uptime
-// if necessary
-func (uptime *Uptime) ProcessResult(result *Result) {
-    // XXX: Remove this on v3.0.0
-    if len(uptime.SuccessfulExecutionsPerHour) != 0 || len(uptime.TotalExecutionsPerHour) != 0 {
-        uptime.migrateToHourlyStatistics()
-    }
-    if uptime.HourlyStatistics == nil {
-        uptime.HourlyStatistics = make(map[int64]*HourlyUptimeStatistics)
-    }
-    unixTimestampFlooredAtHour := result.Timestamp.Unix() - (result.Timestamp.Unix() % 3600)
-    hourlyStats, _ := uptime.HourlyStatistics[unixTimestampFlooredAtHour]
-    if hourlyStats == nil {
-        hourlyStats = &HourlyUptimeStatistics{}
-        uptime.HourlyStatistics[unixTimestampFlooredAtHour] = hourlyStats
-    }
-    if result.Success {
-        hourlyStats.SuccessfulExecutions++
-    }
-    hourlyStats.TotalExecutions++
-    hourlyStats.TotalExecutionsResponseTime += uint64(result.Duration.Milliseconds())
-    // Clean up only when we're starting to have too many useless keys
-    // Note that this is only triggered when there are more entries than there should be after
-    // 10 days, despite the fact that we are deleting everything that's older than 7 days.
-    // This is to prevent re-iterating on every `ProcessResult` as soon as the uptime has been logged for 7 days.
-    if len(uptime.HourlyStatistics) > numberOfHoursInTenDays {
-        sevenDaysAgo := time.Now().Add(-(sevenDays + time.Hour)).Unix()
-        for hourlyUnixTimestamp := range uptime.HourlyStatistics {
-            if sevenDaysAgo > hourlyUnixTimestamp {
-                delete(uptime.HourlyStatistics, hourlyUnixTimestamp)
-            }
-        }
-    }
-    if result.Success {
-        // Recalculate uptime if at least one of the 1h, 24h or 7d uptimes is not 100%
-        // If they're all 100%, then recalculating the uptime would be useless unless
-        // the result added was a failure (!result.Success)
-        if uptime.LastSevenDays != 1 || uptime.LastTwentyFourHours != 1 || uptime.LastHour != 1 {
-            uptime.recalculate()
-        }
-    } else {
-        // Recalculate uptime if at least one of the 1h, 24h or 7d uptimes is not 0%
-        // If they're all 0%, then recalculating the uptime would be useless unless
-        // the result added was a success (result.Success)
-        if uptime.LastSevenDays != 0 || uptime.LastTwentyFourHours != 0 || uptime.LastHour != 0 {
-            uptime.recalculate()
-        }
-    }
-}
-
-func (uptime *Uptime) recalculate() {
-    uptimeBrackets := make(map[string]uint64)
-    now := time.Now()
-    // The oldest uptime bracket starts 7 days ago, so we'll start from there
-    timestamp := now.Add(-sevenDays)
-    for now.Sub(timestamp) >= 0 {
-        hourlyUnixTimestamp := timestamp.Unix() - (timestamp.Unix() % 3600)
-        hourlyStats := uptime.HourlyStatistics[hourlyUnixTimestamp]
-        if hourlyStats == nil || hourlyStats.TotalExecutions == 0 {
-            timestamp = timestamp.Add(time.Hour)
-            continue
-        }
-        uptimeBrackets["7d_success"] += hourlyStats.SuccessfulExecutions
-        uptimeBrackets["7d_total"] += hourlyStats.TotalExecutions
-        if now.Sub(timestamp) <= 24*time.Hour {
-            uptimeBrackets["24h_success"] += hourlyStats.SuccessfulExecutions
-            uptimeBrackets["24h_total"] += hourlyStats.TotalExecutions
-        }
-        if now.Sub(timestamp) <= time.Hour {
-            uptimeBrackets["1h_success"] += hourlyStats.SuccessfulExecutions
-            uptimeBrackets["1h_total"] += hourlyStats.TotalExecutions
-        }
-        timestamp = timestamp.Add(time.Hour)
-    }
-    if uptimeBrackets["7d_total"] > 0 {
-        uptime.LastSevenDays = float64(uptimeBrackets["7d_success"]) / float64(uptimeBrackets["7d_total"])
-    }
-    if uptimeBrackets["24h_total"] > 0 {
-        uptime.LastTwentyFourHours = float64(uptimeBrackets["24h_success"]) / float64(uptimeBrackets["24h_total"])
-    }
-    if uptimeBrackets["1h_total"] > 0 {
-        uptime.LastHour = float64(uptimeBrackets["1h_success"]) / float64(uptimeBrackets["1h_total"])
-    }
-}
-
-// XXX: Remove this on v3.0.0
-// Deprecated
-func (uptime *Uptime) migrateToHourlyStatistics() {
-    log.Println("[migrateToHourlyStatistics] Got", len(uptime.SuccessfulExecutionsPerHour), "entries for successful executions and", len(uptime.TotalExecutionsPerHour), "entries for total executions")
-    uptime.HourlyStatistics = make(map[int64]*HourlyUptimeStatistics)
-    for hourlyUnixTimestamp, totalExecutions := range uptime.TotalExecutionsPerHour {
-        if totalExecutions == 0 {
-            log.Println("[migrateToHourlyStatistics] Skipping entry at", hourlyUnixTimestamp, "because total number of executions is 0")
-            continue
-        }
-        uptime.HourlyStatistics[hourlyUnixTimestamp] = &HourlyUptimeStatistics{
-            TotalExecutions:             totalExecutions,
-            SuccessfulExecutions:        uptime.SuccessfulExecutionsPerHour[hourlyUnixTimestamp],
-            TotalExecutionsResponseTime: 0,
-        }
-    }
-    log.Println("[migrateToHourlyStatistics] Migrated", len(uptime.HourlyStatistics), "entries")
-    uptime.SuccessfulExecutionsPerHour = nil
-    uptime.TotalExecutionsPerHour = nil
-}
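For anyone tracing where the deleted uptime logic went: the bucketing itself is easy to restate. A small sketch of the hourly flooring and the success-ratio computation used by the removed `recalculate`:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The removed implementation floored each result's timestamp to the hour
	// to pick its statistics bucket:
	ts := time.Now().Unix()
	bucket := ts - (ts % 3600)
	fmt.Println("hourly bucket key:", bucket)

	// Uptime over a window is then successes/total across the buckets in
	// range, e.g. 59 successful executions out of 60:
	var success, total uint64 = 59, 60
	fmt.Printf("uptime: %.2f%%\n", float64(success)/float64(total)*100) // 98.33%
}
```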
examples/docker-compose-sqlite-storage/config.yaml (new file)
@@ -0,0 +1,42 @@
+storage:
+  type: sqlite
+  file: /data/data.db
+
+services:
+  - name: back-end
+    group: core
+    url: "https://example.org/"
+    interval: 5m
+    conditions:
+      - "[STATUS] == 200"
+      - "[CERTIFICATE_EXPIRATION] > 48h"
+
+  - name: monitoring
+    group: internal
+    url: "https://example.org/"
+    interval: 5m
+    conditions:
+      - "[STATUS] == 200"
+
+  - name: nas
+    group: internal
+    url: "https://example.org/"
+    interval: 5m
+    conditions:
+      - "[STATUS] == 200"
+
+  - name: example-dns-query
+    url: "8.8.8.8" # Address of the DNS server to use
+    interval: 5m
+    dns:
+      query-name: "example.com"
+      query-type: "A"
+    conditions:
+      - "[BODY] == 93.184.216.34"
+      - "[DNS_RCODE] == NOERROR"
+
+  - name: icmp-ping
+    url: "icmp://example.org"
+    interval: 1m
+    conditions:
+      - "[CONNECTED] == true"
@@ -0,0 +1,9 @@
+version: "3.8"
+services:
+  gatus:
+    image: twinproduction/gatus:latest
+    ports:
+      - 8080:8080
+    volumes:
+      - ./config.yaml:/config/config.yaml
+      - ./data:/data/
go.mod
@@ -14,13 +14,13 @@ require (
     github.com/prometheus/client_golang v1.9.0
     golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad // indirect
     golang.org/x/net v0.0.0-20201224014010-6772e930b67b // indirect
     golang.org/x/sys v0.0.0-20201223074533-0d417f636930 // indirect
     golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf // indirect
     golang.org/x/time v0.0.0-20201208040808-7e3f01d25324 // indirect
     gopkg.in/yaml.v2 v2.4.0
     k8s.io/api v0.18.14
     k8s.io/apimachinery v0.18.14
     k8s.io/client-go v0.18.14
+    modernc.org/sqlite v1.11.2
 )

 replace k8s.io/client-go => k8s.io/client-go v0.18.14
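The new `modernc.org/sqlite` dependency is a CGo-free SQLite driver that registers itself under the `database/sql` driver name `sqlite`. A hedged sketch of opening a database file with it — whether the storage package does exactly this is not shown in the diff:

```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // registers database/sql driver name "sqlite"
)

func main() {
	db, err := sql.Open("sqlite", "data.db") // same kind of file as storage.file
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil { // actually opens the file
		log.Fatal(err)
	}
	log.Println("sqlite storage file opened")
}
```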
50
go.sum
50
go.sum
@@ -99,6 +99,8 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
@@ -183,6 +185,7 @@ github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4 h1:L8R9j+yAqZuZjsqh/z+F1NCffTKKLShY6zXTItVIZ8M=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -264,6 +267,8 @@ github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/X
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
@@ -282,7 +287,11 @@ github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-sqlite3 v1.14.6 h1:dNPt6NO46WmLVt2DLNpwczCmdV5boIZ6g/tlDrlRUbg=
github.com/mattn/go-sqlite3 v1.14.6/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
@@ -379,6 +388,8 @@ github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4O
github.com/prometheus/procfs v0.2.0 h1:wH4vA7pcjKuZzjF7lM8awk4fnuJO6idemZXoKnULUx4=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0 h1:OdAsTTz6OkFY5QxjkYwrChwuRruF69c169dPK26NUlk=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@@ -480,6 +491,7 @@ golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0 h1:8pl+sMODzuvGJkmj2W4kZihvVb5mKm8pB/X44PIQHv8=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -576,6 +588,7 @@ golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -594,10 +607,11 @@ golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201126233918-771906719818/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201214210602-f9fddec55a1e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201223074533-0d417f636930 h1:vRgIt+nup/B/BwIS0g2oC0haq0iqbV3ZA+u6+0TlNCo=
golang.org/x/sys v0.0.0-20201223074533-0d417f636930/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c h1:VwygUrnw9jn88c4u8GD3rZQbqrP/tgas88tPUbBxQrk=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf h1:MZ2shdL+ZM/XzY3ZGOnh4Nlpnxz5GSOhOmtHo3iPU6M=
@@ -667,7 +681,9 @@ golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201124115921-2c860bdd6e78/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2 h1:vEtypaVub6UvKkiXZ2xx9QIvp9TL7sI7xp7vdi2kezA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -812,6 +828,36 @@ k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89 h1:d4vVOjXm687F1iLSP2q3lyPPuyvTUt3aVoBpi2DqRsU=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
lukechampine.com/uint128 v1.1.1 h1:pnxCASz787iMf+02ssImqk6OLt+Z5QHMoZyUXR4z6JU=
lukechampine.com/uint128 v1.1.1/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk=
modernc.org/cc/v3 v3.33.6 h1:r63dgSzVzRxUpAJFPQWHy1QeZeY1ydNENUDaBx1GqYc=
modernc.org/cc/v3 v3.33.6/go.mod h1:iPJg1pkwXqAV16SNgFBVYmggfMg6xhs+2oiO0vclK3g=
modernc.org/ccgo/v3 v3.9.5 h1:dEuUSf8WN51rDkprFuAqjfchKEzN0WttP/Py3enBwjk=
modernc.org/ccgo/v3 v3.9.5/go.mod h1:umuo2EP2oDSBnD3ckjaVUXMrmeAw8C8OSICVa0iFf60=
modernc.org/httpfs v1.0.6 h1:AAgIpFZRXuYnkjftxTAZwMIiwEqAfk8aVB2/oA6nAeM=
modernc.org/httpfs v1.0.6/go.mod h1:7dosgurJGp0sPaRanU53W4xZYKh14wfzX420oZADeHM=
modernc.org/libc v1.7.13-0.20210308123627-12f642a52bb8/go.mod h1:U1eq8YWr/Kc1RWCMFUWEdkTg8OTcfLw2kY8EDwl039w=
modernc.org/libc v1.9.8/go.mod h1:U1eq8YWr/Kc1RWCMFUWEdkTg8OTcfLw2kY8EDwl039w=
modernc.org/libc v1.9.11 h1:QUxZMs48Ahg2F7SN41aERvMfGLY2HU/ADnB9DC4Yts8=
modernc.org/libc v1.9.11/go.mod h1:NyF3tsA5ArIjJ83XB0JlqhjTabTCHm9aX4XMPHyQn0Q=
modernc.org/mathutil v1.1.1/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/mathutil v1.2.2/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/mathutil v1.4.0 h1:GCjoRaBew8ECCKINQA2nYjzvufFW9YiEuuB+rQ9bn2E=
modernc.org/mathutil v1.4.0/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/memory v1.0.4 h1:utMBrFcpnQDdNsmM6asmyH/FM9TqLPS7XF7otpJmrwM=
modernc.org/memory v1.0.4/go.mod h1:nV2OApxradM3/OVbs2/0OsP6nPfakXpi50C7dcoHXlc=
modernc.org/opt v0.1.1 h1:/0RX92k9vwVeDXj+Xn23DKp2VJubL7k8qNffND6qn3A=
modernc.org/opt v0.1.1/go.mod h1:WdSiB5evDcignE70guQKxYUl14mgWtbClRi5wmkkTX0=
modernc.org/sqlite v1.11.2 h1:ShWQpeD3ag/bmx6TqidBlIWonWmQaSQKls3aenCbt+w=
modernc.org/sqlite v1.11.2/go.mod h1:+mhs/P1ONd+6G7hcAs6irwDi/bjTQ7nLW6LHRBsEa3A=
modernc.org/strutil v1.1.1 h1:xv+J1BXY3Opl2ALrBwyfEikFAj8pmqcpnfmuwUwcozs=
modernc.org/strutil v1.1.1/go.mod h1:DE+MQQ/hjKBZS2zNInV5hhcipt5rLPWkmpbGeW5mmdw=
modernc.org/tcl v1.5.5 h1:N03RwthgTR/l/eQvz3UjfYnvVVj1G2sZqzFGfoD4HE4=
modernc.org/tcl v1.5.5/go.mod h1:ADkaTUuwukkrlhqwERyq0SM8OvyXo7+TjFz7yAF56EI=
modernc.org/token v1.0.0 h1:a0jaWiNMDhDUtqOj09wvjWWAqd3q7WpBulmL9H2egsk=
modernc.org/token v1.0.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
modernc.org/z v1.0.1 h1:WyIDpEpAIx4Hel6q/Pcgj/VhaQV5XPJ2I6ryIYbjnpc=
modernc.org/z v1.0.1/go.mod h1:8/SRk5C/HgiQWCgXdfpb+1RvhORdkz5sw72d3jjtyqA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
main.go (12 changes)
@@ -34,6 +34,12 @@ func main() {
    log.Println("Shutting down")
}

func start(cfg *config.Config) {
    go controller.Handle(cfg.Security, cfg.Web, cfg.Metrics)
    watchdog.Monitor(cfg)
    go listenToConfigurationFileChanges(cfg)
}

func stop() {
    watchdog.Shutdown()
    controller.Shutdown()
@@ -46,12 +52,6 @@ func save() {
    }
}

func start(cfg *config.Config) {
    go controller.Handle(cfg.Security, cfg.Web, cfg.Metrics)
    watchdog.Monitor(cfg)
    go listenToConfigurationFileChanges(cfg)
}

func loadConfiguration() (cfg *config.Config, err error) {
    customConfigFile := os.Getenv("GATUS_CONFIG_FILE")
    if len(customConfigFile) > 0 {
storage/config.go
@@ -1,8 +1,12 @@
package storage

// Config is the configuration for alerting providers
// Config is the configuration for storage
type Config struct {
    // File is the path of the file to use for persistence
    // If blank, persistence is disabled.
    // If blank, persistence is disabled
    File string `yaml:"file"`

    // Type of store
    // If blank, uses the default in-memory store
    Type Type `yaml:"type"`
}
storage/storage.go
@@ -7,6 +7,7 @@ import (

    "github.com/TwinProduction/gatus/storage/store"
    "github.com/TwinProduction/gatus/storage/store/memory"
    "github.com/TwinProduction/gatus/storage/store/sqlite"
)

var (
@@ -38,36 +39,52 @@ func Initialize(cfg *Config) error {
    initialized = true
    var err error
    if cancelFunc != nil {
        // Stop the active autoSave task
        // Stop the active autoSaveStore task, if there's already one
        cancelFunc()
    }
    if cfg == nil || len(cfg.File) == 0 {
        log.Println("[storage][Initialize] Creating storage provider")
        provider, _ = memory.NewStore("")
    if cfg == nil {
        cfg = &Config{}
    }
    if len(cfg.File) == 0 {
        log.Printf("[storage][Initialize] Creating storage provider with type=%s", cfg.Type)
    } else {
        ctx, cancelFunc = context.WithCancel(context.Background())
        log.Printf("[storage][Initialize] Creating storage provider with file=%s", cfg.File)
        provider, err = memory.NewStore(cfg.File)
        log.Printf("[storage][Initialize] Creating storage provider with type=%s and file=%s", cfg.Type, cfg.File)
    }
    ctx, cancelFunc = context.WithCancel(context.Background())
    switch cfg.Type {
    case TypeSQLite:
        provider, err = sqlite.NewStore(string(cfg.Type), cfg.File)
        if err != nil {
            return err
        }
        go autoSave(7*time.Minute, ctx)
    case TypeInMemory:
        fallthrough
    default:
        if len(cfg.File) > 0 {
            provider, err = memory.NewStore(cfg.File)
            if err != nil {
                return err
            }
            go autoSaveStore(ctx, provider, 7*time.Minute)
        } else {
            provider, _ = memory.NewStore("")
        }
    }
    return nil
}

// autoSave automatically calls the SaveFunc function of the provider at every interval
func autoSave(interval time.Duration, ctx context.Context) {
// autoSaveStore automatically calls the Save function of the provider at every interval
func autoSaveStore(ctx context.Context, provider store.Store, interval time.Duration) {
    for {
        select {
        case <-ctx.Done():
            log.Printf("[storage][autoSave] Stopping active job")
            log.Printf("[storage][autoSaveStore] Stopping active job")
            return
        case <-time.After(interval):
            log.Printf("[storage][autoSave] Saving")
            log.Printf("[storage][autoSaveStore] Saving")
            err := provider.Save()
            if err != nil {
                log.Println("[storage][autoSave] Save failed:", err.Error())
                log.Println("[storage][autoSaveStore] Save failed:", err.Error())
            }
        }
    }
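With this change, Initialize dispatches on Config.Type instead of only on Config.File. A minimal sketch of how a caller might select the new sqlite store, using only the constants and functions introduced above (error handling abbreviated):

```go
package main

import (
	"log"

	"github.com/TwinProduction/gatus/storage"
)

func main() {
	// TypeSQLite requires File to be set; TypeInMemory (or leaving Type
	// blank) falls through to the gocache-backed store, with optional file
	// persistence handled by the autoSaveStore goroutine.
	err := storage.Initialize(&storage.Config{
		Type: storage.TypeSQLite,
		File: "/data/data.db",
	})
	if err != nil {
		log.Fatal(err)
	}
	provider := storage.Get() // returns the provider selected above
	defer provider.Close()
}
```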
storage/storage_test.go
@@ -3,35 +3,92 @@ package storage

import (
    "testing"
    "time"

    "github.com/TwinProduction/gatus/storage/store/sqlite"
)

func TestGet(t *testing.T) {
    store := Get()
    if store == nil {
        t.Error("store should've been automatically initialized")
    }
}

func TestInitialize(t *testing.T) {
    file := t.TempDir() + "/test.db"
    err := Initialize(&Config{File: file})
    if err != nil {
        t.Fatal("shouldn't have returned an error")
    type Scenario struct {
        Name        string
        Cfg         *Config
        ExpectedErr error
    }
    if cancelFunc == nil {
        t.Error("cancelFunc shouldn't have been nil")
    scenarios := []Scenario{
        {
            Name:        "nil",
            Cfg:         nil,
            ExpectedErr: nil,
        },
        {
            Name:        "blank",
            Cfg:         &Config{},
            ExpectedErr: nil,
        },
        {
            Name:        "inmemory-no-file",
            Cfg:         &Config{Type: TypeInMemory},
            ExpectedErr: nil,
        },
        {
            Name:        "inmemory-with-file",
            Cfg:         &Config{Type: TypeInMemory, File: t.TempDir() + "/TestInitialize_inmemory-with-file.db"},
            ExpectedErr: nil,
        },
        {
            Name:        "sqlite-no-file",
            Cfg:         &Config{Type: TypeSQLite},
            ExpectedErr: sqlite.ErrFilePathNotSpecified,
        },
        {
            Name:        "sqlite-with-file",
            Cfg:         &Config{Type: TypeSQLite, File: t.TempDir() + "/TestInitialize_sqlite-with-file.db"},
            ExpectedErr: nil,
        },
    }
    if ctx == nil {
        t.Error("ctx shouldn't have been nil")
    for _, scenario := range scenarios {
        t.Run(scenario.Name, func(t *testing.T) {
            err := Initialize(scenario.Cfg)
            if err != scenario.ExpectedErr {
                t.Errorf("expected %v, got %v", scenario.ExpectedErr, err)
            }
            if err != nil {
                return
            }
            if cancelFunc == nil {
                t.Error("cancelFunc shouldn't have been nil")
            }
            if ctx == nil {
                t.Error("ctx shouldn't have been nil")
            }
            if provider == nil {
                t.Fatal("provider shouldn't have been nil")
            }
            provider.Close()
            // Try to initialize it again
            err = Initialize(scenario.Cfg)
            if err != scenario.ExpectedErr {
                t.Errorf("expected %v, got %v", scenario.ExpectedErr, err)
                return
            }
            provider.Close()
        })
    }
    // Try to initialize it again
    err = Initialize(&Config{File: file})
    if err != nil {
        t.Fatal("shouldn't have returned an error")
    }
    cancelFunc()
}

func TestAutoSave(t *testing.T) {
    file := t.TempDir() + "/test.db"
    file := t.TempDir() + "/TestAutoSave.db"
    if err := Initialize(&Config{File: file}); err != nil {
        t.Fatal("shouldn't have returned an error")
    }
    go autoSave(3*time.Millisecond, ctx)
    go autoSaveStore(ctx, provider, 3*time.Millisecond)
    time.Sleep(15 * time.Millisecond)
    cancelFunc()
    time.Sleep(5 * time.Millisecond)
    time.Sleep(50 * time.Millisecond)
}
storage/store/memory/memory.go
@@ -2,8 +2,11 @@ package memory

import (
    "encoding/gob"
    "sync"
    "time"

    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/storage/store/paging"
    "github.com/TwinProduction/gatus/util"
    "github.com/TwinProduction/gocache"
)
@@ -17,11 +20,15 @@ func init() {

// Store that leverages gocache
type Store struct {
    sync.RWMutex
    file  string
    cache *gocache.Cache
}

// NewStore creates a new store
// NewStore creates a new store using gocache.Cache
//
// This store holds everything in memory, and if the file parameter is not blank,
// supports eventual persistence.
func NewStore(file string) (*Store, error) {
    store := &Store{
        file: file,
@@ -36,40 +43,46 @@ func NewStore(file string) (*Store, error) {
    return store, nil
}

// GetAllServiceStatusesWithResultPagination returns all monitored core.ServiceStatus
// GetAllServiceStatuses returns all monitored core.ServiceStatus
// with a subset of core.Result defined by the page and pageSize parameters
func (s *Store) GetAllServiceStatusesWithResultPagination(page, pageSize int) map[string]*core.ServiceStatus {
func (s *Store) GetAllServiceStatuses(params *paging.ServiceStatusParams) map[string]*core.ServiceStatus {
    serviceStatuses := s.cache.GetAll()
    pagedServiceStatuses := make(map[string]*core.ServiceStatus, len(serviceStatuses))
    for k, v := range serviceStatuses {
        pagedServiceStatuses[k] = v.(*core.ServiceStatus).WithResultPagination(page, pageSize)
        pagedServiceStatuses[k] = ShallowCopyServiceStatus(v.(*core.ServiceStatus), params)
    }
    return pagedServiceStatuses
}

// GetServiceStatus returns the service status for a given service name in the given group
func (s *Store) GetServiceStatus(groupName, serviceName string) *core.ServiceStatus {
    return s.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(groupName, serviceName))
func (s *Store) GetServiceStatus(groupName, serviceName string, params *paging.ServiceStatusParams) *core.ServiceStatus {
    return s.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(groupName, serviceName), params)
}

// GetServiceStatusByKey returns the service status for a given key
func (s *Store) GetServiceStatusByKey(key string) *core.ServiceStatus {
func (s *Store) GetServiceStatusByKey(key string, params *paging.ServiceStatusParams) *core.ServiceStatus {
    serviceStatus := s.cache.GetValue(key)
    if serviceStatus == nil {
        return nil
    }
    return serviceStatus.(*core.ServiceStatus)
    return ShallowCopyServiceStatus(serviceStatus.(*core.ServiceStatus), params)
}

// Insert adds the observed result for the specified service into the store
func (s *Store) Insert(service *core.Service, result *core.Result) {
    key := util.ConvertGroupAndServiceToKey(service.Group, service.Name)
    key := service.Key()
    s.Lock()
    serviceStatus, exists := s.cache.Get(key)
    if !exists {
        serviceStatus = core.NewServiceStatus(service)
        serviceStatus = core.NewServiceStatus(key, service.Group, service.Name)
        serviceStatus.(*core.ServiceStatus).Events = append(serviceStatus.(*core.ServiceStatus).Events, &core.Event{
            Type:      core.EventStart,
            Timestamp: time.Now(),
        })
    }
    serviceStatus.(*core.ServiceStatus).AddResult(result)
    AddResult(serviceStatus.(*core.ServiceStatus), result)
    s.cache.Set(key, serviceStatus)
    s.Unlock()
}

// DeleteAllServiceStatusesNotInKeys removes all ServiceStatus that are not within the keys provided
@@ -102,3 +115,8 @@ func (s *Store) Save() error {
    }
    return nil
}

// Close does nothing, because there's nothing to close
func (s *Store) Close() {
    return
}
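Every read path now goes through ShallowCopyServiceStatus, so callers have to say which slice of history they want. A hypothetical call site using only the functions shown in this diff:

```go
package main

import (
	"fmt"

	"github.com/TwinProduction/gatus/storage/store/memory"
	"github.com/TwinProduction/gatus/storage/store/paging"
)

func main() {
	store, _ := memory.NewStore("") // in-memory, no file persistence
	// Ask for the 20 most recent results and 50 most recent events per service.
	params := paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 50)
	statuses := store.GetAllServiceStatuses(params)
	fmt.Println(len(statuses)) // 0 until results are added via store.Insert
}
```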
storage/store/memory/memory_test.go
@@ -1,12 +1,11 @@
package memory

import (
    "fmt"
    "testing"
    "time"

    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/util"
    "github.com/TwinProduction/gatus/storage/store/paging"
)

var (
@@ -81,174 +80,31 @@
    }
)

func TestStore_Insert(t *testing.T) {
// Note that there are much more extensive tests in /storage/store/store_test.go.
// This test is simply an extra sanity check
func TestStore_SanityCheck(t *testing.T) {
    store, _ := NewStore("")
    store.Insert(&testService, &testSuccessfulResult)
    if numberOfServiceStatuses := len(store.GetAllServiceStatuses(paging.NewServiceStatusParams())); numberOfServiceStatuses != 1 {
        t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
    }
    store.Insert(&testService, &testUnsuccessfulResult)

    if store.cache.Count() != 1 {
        t.Fatalf("expected 1 ServiceStatus, got %d", store.cache.Count())
    // Both results inserted are for the same service, therefore, the count shouldn't have increased
    if numberOfServiceStatuses := len(store.GetAllServiceStatuses(paging.NewServiceStatusParams())); numberOfServiceStatuses != 1 {
        t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
    }
    key := fmt.Sprintf("%s_%s", testService.Group, testService.Name)
    serviceStatus := store.GetServiceStatusByKey(key)
    if serviceStatus == nil {
        t.Fatalf("Store should've had key '%s', but didn't", key)
    ss := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 20))
    if ss == nil {
        t.Fatalf("Store should've had key '%s', but didn't", testService.Key())
    }
    if len(serviceStatus.Results) != 2 {
        t.Fatalf("Service '%s' should've had 2 results, but actually returned %d", serviceStatus.Name, len(serviceStatus.Results))
    if len(ss.Events) != 3 {
        t.Errorf("Service '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
    }
    for i, r := range serviceStatus.Results {
        expectedResult := store.GetServiceStatus(testService.Group, testService.Name).Results[i]
        if r.HTTPStatus != expectedResult.HTTPStatus {
            t.Errorf("Result at index %d should've had a HTTPStatus of %d, but was actually %d", i, expectedResult.HTTPStatus, r.HTTPStatus)
        }
        if r.DNSRCode != expectedResult.DNSRCode {
            t.Errorf("Result at index %d should've had a DNSRCode of %s, but was actually %s", i, expectedResult.DNSRCode, r.DNSRCode)
        }
        if r.Hostname != expectedResult.Hostname {
            t.Errorf("Result at index %d should've had a Hostname of %s, but was actually %s", i, expectedResult.Hostname, r.Hostname)
        }
        if r.IP != expectedResult.IP {
            t.Errorf("Result at index %d should've had a IP of %s, but was actually %s", i, expectedResult.IP, r.IP)
        }
        if r.Connected != expectedResult.Connected {
            t.Errorf("Result at index %d should've had a Connected value of %t, but was actually %t", i, expectedResult.Connected, r.Connected)
        }
        if r.Duration != expectedResult.Duration {
            t.Errorf("Result at index %d should've had a Duration of %s, but was actually %s", i, expectedResult.Duration.String(), r.Duration.String())
        }
        if len(r.Errors) != len(expectedResult.Errors) {
            t.Errorf("Result at index %d should've had %d errors, but actually had %d errors", i, len(expectedResult.Errors), len(r.Errors))
        }
        if len(r.ConditionResults) != len(expectedResult.ConditionResults) {
            t.Errorf("Result at index %d should've had %d ConditionResults, but actually had %d ConditionResults", i, len(expectedResult.ConditionResults), len(r.ConditionResults))
        }
        if r.Success != expectedResult.Success {
            t.Errorf("Result at index %d should've had a Success of %t, but was actually %t", i, expectedResult.Success, r.Success)
        }
        if r.Timestamp != expectedResult.Timestamp {
            t.Errorf("Result at index %d should've had a Timestamp of %s, but was actually %s", i, expectedResult.Timestamp.String(), r.Timestamp.String())
        }
        if r.CertificateExpiration != expectedResult.CertificateExpiration {
            t.Errorf("Result at index %d should've had a CertificateExpiration of %s, but was actually %s", i, expectedResult.CertificateExpiration.String(), r.CertificateExpiration.String())
        }
    if len(ss.Results) != 2 {
        t.Errorf("Service '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
    }
}

func TestStore_GetServiceStatus(t *testing.T) {
    store, _ := NewStore("")
    store.Insert(&testService, &testSuccessfulResult)
    store.Insert(&testService, &testUnsuccessfulResult)

    serviceStatus := store.GetServiceStatus(testService.Group, testService.Name)
    if serviceStatus == nil {
        t.Fatalf("serviceStatus shouldn't have been nil")
    }
    if serviceStatus.Uptime == nil {
        t.Fatalf("serviceStatus.Uptime shouldn't have been nil")
    }
    if serviceStatus.Uptime.LastHour != 0.5 {
        t.Errorf("serviceStatus.Uptime.LastHour should've been 0.5")
    }
    if serviceStatus.Uptime.LastTwentyFourHours != 0.5 {
        t.Errorf("serviceStatus.Uptime.LastTwentyFourHours should've been 0.5")
    }
    if serviceStatus.Uptime.LastSevenDays != 0.5 {
        t.Errorf("serviceStatus.Uptime.LastSevenDays should've been 0.5")
    }
}

func TestStore_GetServiceStatusForMissingStatusReturnsNil(t *testing.T) {
    store, _ := NewStore("")
    store.Insert(&testService, &testSuccessfulResult)

    serviceStatus := store.GetServiceStatus("nonexistantgroup", "nonexistantname")
    if serviceStatus != nil {
        t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", testService.Group, testService.Name)
    }
    serviceStatus = store.GetServiceStatus(testService.Group, "nonexistantname")
    if serviceStatus != nil {
        t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", testService.Group, "nonexistantname")
    }
    serviceStatus = store.GetServiceStatus("nonexistantgroup", testService.Name)
    if serviceStatus != nil {
        t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", "nonexistantgroup", testService.Name)
    }
}

func TestStore_GetServiceStatusByKey(t *testing.T) {
    store, _ := NewStore("")
    store.Insert(&testService, &testSuccessfulResult)
    store.Insert(&testService, &testUnsuccessfulResult)

    serviceStatus := store.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(testService.Group, testService.Name))
    if serviceStatus == nil {
        t.Fatalf("serviceStatus shouldn't have been nil")
    }
    if serviceStatus.Uptime == nil {
        t.Fatalf("serviceStatus.Uptime shouldn't have been nil")
    }
    if serviceStatus.Uptime.LastHour != 0.5 {
        t.Errorf("serviceStatus.Uptime.LastHour should've been 0.5")
    }
    if serviceStatus.Uptime.LastTwentyFourHours != 0.5 {
        t.Errorf("serviceStatus.Uptime.LastTwentyFourHours should've been 0.5")
    }
    if serviceStatus.Uptime.LastSevenDays != 0.5 {
        t.Errorf("serviceStatus.Uptime.LastSevenDays should've been 0.5")
    }
}

func TestStore_GetAllServiceStatusesWithResultPagination(t *testing.T) {
    store, _ := NewStore("")
    firstResult := &testSuccessfulResult
    secondResult := &testUnsuccessfulResult
    store.Insert(&testService, firstResult)
    store.Insert(&testService, secondResult)
    // Can't be bothered dealing with timezone issues on the worker that runs the automated tests
    firstResult.Timestamp = time.Time{}
    secondResult.Timestamp = time.Time{}
    serviceStatuses := store.GetAllServiceStatusesWithResultPagination(1, 20)
    if len(serviceStatuses) != 1 {
        t.Fatal("expected 1 service status")
    }
    actual, exists := serviceStatuses[util.ConvertGroupAndServiceToKey(testService.Group, testService.Name)]
    if !exists {
        t.Fatal("expected service status to exist")
    }
    if len(actual.Results) != 2 {
        t.Error("expected 2 results, got", len(actual.Results))
    }
    if len(actual.Events) != 2 {
        t.Error("expected 2 events, got", len(actual.Events))
    }
}

func TestStore_DeleteAllServiceStatusesNotInKeys(t *testing.T) {
    store, _ := NewStore("")
    firstService := core.Service{Name: "service-1", Group: "group"}
    secondService := core.Service{Name: "service-2", Group: "group"}
    result := &testSuccessfulResult
    store.Insert(&firstService, result)
    store.Insert(&secondService, result)
    if store.cache.Count() != 2 {
        t.Errorf("expected cache to have 2 keys, got %d", store.cache.Count())
    }
    if store.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(firstService.Group, firstService.Name)) == nil {
        t.Fatal("firstService should exist")
    }
    if store.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(secondService.Group, secondService.Name)) == nil {
        t.Fatal("secondService should exist")
    }
    store.DeleteAllServiceStatusesNotInKeys([]string{util.ConvertGroupAndServiceToKey(firstService.Group, firstService.Name)})
    if store.cache.Count() != 1 {
        t.Fatalf("expected cache to have 1 keys, got %d", store.cache.Count())
    }
    if store.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(firstService.Group, firstService.Name)) == nil {
        t.Error("secondService should've been deleted")
    }
    if store.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(secondService.Group, secondService.Name)) != nil {
        t.Error("firstService should still exist")
    if deleted := store.DeleteAllServiceStatusesNotInKeys([]string{}); deleted != 1 {
        t.Errorf("%d entries should've been deleted, got %d", 1, deleted)
    }
}
storage/store/memory/uptime.go (new file, 119 lines)
@@ -0,0 +1,119 @@
package memory

import (
    "log"
    "time"

    "github.com/TwinProduction/gatus/core"
)

const (
    numberOfHoursInTenDays = 10 * 24
    sevenDays              = 7 * 24 * time.Hour
)

// processUptimeAfterResult processes the result by extracting the relevant data from the result
// and recalculating the uptime if necessary
func processUptimeAfterResult(uptime *core.Uptime, result *core.Result) {
    // XXX: Remove this on v3.0.0
    if len(uptime.SuccessfulExecutionsPerHour) != 0 || len(uptime.TotalExecutionsPerHour) != 0 {
        migrateUptimeToHourlyStatistics(uptime)
    }
    if uptime.HourlyStatistics == nil {
        uptime.HourlyStatistics = make(map[int64]*core.HourlyUptimeStatistics)
    }
    unixTimestampFlooredAtHour := result.Timestamp.Truncate(time.Hour).Unix()
    hourlyStats, _ := uptime.HourlyStatistics[unixTimestampFlooredAtHour]
    if hourlyStats == nil {
        hourlyStats = &core.HourlyUptimeStatistics{}
        uptime.HourlyStatistics[unixTimestampFlooredAtHour] = hourlyStats
    }
    if result.Success {
        hourlyStats.SuccessfulExecutions++
    }
    hourlyStats.TotalExecutions++
    hourlyStats.TotalExecutionsResponseTime += uint64(result.Duration.Milliseconds())
    // Clean up only when we're starting to have too many useless keys
    // Note that this is only triggered when there are more entries than there should be after
    // 10 days, despite the fact that we are deleting everything that's older than 7 days.
    // This is to prevent re-iterating on every `processUptimeAfterResult` as soon as the uptime has been logged for 7 days.
    if len(uptime.HourlyStatistics) > numberOfHoursInTenDays {
        sevenDaysAgo := time.Now().Add(-(sevenDays + time.Hour)).Unix()
        for hourlyUnixTimestamp := range uptime.HourlyStatistics {
            if sevenDaysAgo > hourlyUnixTimestamp {
                delete(uptime.HourlyStatistics, hourlyUnixTimestamp)
            }
        }
    }
    if result.Success {
        // Recalculate uptime if at least one of the 1h, 24h or 7d uptime are not 100%
        // If they're all 100%, then recalculating the uptime would be useless unless
        // the result added was a failure (!result.Success)
        if uptime.LastSevenDays != 1 || uptime.LastTwentyFourHours != 1 || uptime.LastHour != 1 {
            recalculateUptime(uptime)
        }
    } else {
        // Recalculate uptime if at least one of the 1h, 24h or 7d uptime are not 0%
        // If they're all 0%, then recalculating the uptime would be useless unless
        // the result added was a success (result.Success)
        if uptime.LastSevenDays != 0 || uptime.LastTwentyFourHours != 0 || uptime.LastHour != 0 {
            recalculateUptime(uptime)
        }
    }
}

func recalculateUptime(uptime *core.Uptime) {
    uptimeBrackets := make(map[string]uint64)
    now := time.Now()
    // The oldest uptime bracket starts 7 days ago, so we'll start from there
    timestamp := now.Add(-sevenDays)
    for now.Sub(timestamp) >= 0 {
        hourlyUnixTimestamp := timestamp.Truncate(time.Hour).Unix()
        hourlyStats := uptime.HourlyStatistics[hourlyUnixTimestamp]
        if hourlyStats == nil || hourlyStats.TotalExecutions == 0 {
            timestamp = timestamp.Add(time.Hour)
            continue
        }
        uptimeBrackets["7d_success"] += hourlyStats.SuccessfulExecutions
        uptimeBrackets["7d_total"] += hourlyStats.TotalExecutions
        if now.Sub(timestamp) <= 24*time.Hour {
            uptimeBrackets["24h_success"] += hourlyStats.SuccessfulExecutions
            uptimeBrackets["24h_total"] += hourlyStats.TotalExecutions
        }
        if now.Sub(timestamp) <= time.Hour {
            uptimeBrackets["1h_success"] += hourlyStats.SuccessfulExecutions
            uptimeBrackets["1h_total"] += hourlyStats.TotalExecutions
        }
        timestamp = timestamp.Add(time.Hour)
    }
    if uptimeBrackets["7d_total"] > 0 {
        uptime.LastSevenDays = float64(uptimeBrackets["7d_success"]) / float64(uptimeBrackets["7d_total"])
    }
    if uptimeBrackets["24h_total"] > 0 {
        uptime.LastTwentyFourHours = float64(uptimeBrackets["24h_success"]) / float64(uptimeBrackets["24h_total"])
    }
    if uptimeBrackets["1h_total"] > 0 {
        uptime.LastHour = float64(uptimeBrackets["1h_success"]) / float64(uptimeBrackets["1h_total"])
    }
}

// XXX: Remove this on v3.0.0
// Deprecated
func migrateUptimeToHourlyStatistics(uptime *core.Uptime) {
    log.Println("[migrateUptimeToHourlyStatistics] Got", len(uptime.SuccessfulExecutionsPerHour), "entries for successful executions and", len(uptime.TotalExecutionsPerHour), "entries for total executions")
    uptime.HourlyStatistics = make(map[int64]*core.HourlyUptimeStatistics)
    for hourlyUnixTimestamp, totalExecutions := range uptime.TotalExecutionsPerHour {
        if totalExecutions == 0 {
            log.Println("[migrateUptimeToHourlyStatistics] Skipping entry at", hourlyUnixTimestamp, "because total number of executions is 0")
            continue
        }
        uptime.HourlyStatistics[hourlyUnixTimestamp] = &core.HourlyUptimeStatistics{
            TotalExecutions:             totalExecutions,
            SuccessfulExecutions:        uptime.SuccessfulExecutionsPerHour[hourlyUnixTimestamp],
            TotalExecutionsResponseTime: 0,
        }
    }
    log.Println("[migrateUptimeToHourlyStatistics] Migrated", len(uptime.HourlyStatistics), "entries")
    uptime.SuccessfulExecutionsPerHour = nil
    uptime.TotalExecutionsPerHour = nil
}
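To make the bracket arithmetic concrete, here is the same ratio computed standalone (the counts are invented for illustration):

```go
package main

import "fmt"

func main() {
	// Suppose the hourly buckets falling in the last 24h sum to 20
	// successful executions out of 25 total; recalculateUptime would
	// then store 20/25 in LastTwentyFourHours.
	success, total := uint64(20), uint64(25)
	fmt.Println(float64(success) / float64(total)) // 0.8
}
```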
storage/store/memory/uptime_bench_test.go
@@ -1,18 +1,20 @@
package core
package memory

import (
    "testing"
    "time"

    "github.com/TwinProduction/gatus/core"
)

func BenchmarkUptime_ProcessResult(b *testing.B) {
    uptime := NewUptime()
func BenchmarkProcessUptimeAfterResult(b *testing.B) {
    uptime := core.NewUptime()
    now := time.Now()
    now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())
    // Start 12000 days ago
    timestamp := now.Add(-12000 * 24 * time.Hour)
    for n := 0; n < b.N; n++ {
        uptime.ProcessResult(&Result{
        processUptimeAfterResult(uptime, &core.Result{
            Duration:  18 * time.Millisecond,
            Success:   n%15 == 0,
            Timestamp: timestamp,
storage/store/memory/uptime_test.go
@@ -1,62 +1,64 @@
package core
package memory

import (
    "testing"
    "time"

    "github.com/TwinProduction/gatus/core"
)

func TestUptime_ProcessResult(t *testing.T) {
    service := &Service{Name: "name", Group: "group"}
    serviceStatus := NewServiceStatus(service)
func TestProcessUptimeAfterResult(t *testing.T) {
    service := &core.Service{Name: "name", Group: "group"}
    serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
    uptime := serviceStatus.Uptime

    checkUptimes(t, serviceStatus, 0.00, 0.00, 0.00)

    now := time.Now()
    now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())
    uptime.ProcessResult(&Result{Timestamp: now.Add(-7 * 24 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-7 * 24 * time.Hour), Success: true})
    checkUptimes(t, serviceStatus, 1.00, 0.00, 0.00)

    uptime.ProcessResult(&Result{Timestamp: now.Add(-6 * 24 * time.Hour), Success: false})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-6 * 24 * time.Hour), Success: false})
    checkUptimes(t, serviceStatus, 0.50, 0.00, 0.00)

    uptime.ProcessResult(&Result{Timestamp: now.Add(-8 * 24 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-8 * 24 * time.Hour), Success: true})
    checkUptimes(t, serviceStatus, 0.50, 0.00, 0.00)

    uptime.ProcessResult(&Result{Timestamp: now.Add(-24 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-12 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-24 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-12 * time.Hour), Success: true})
    checkUptimes(t, serviceStatus, 0.75, 1.00, 0.00)

    uptime.ProcessResult(&Result{Timestamp: now.Add(-1 * time.Hour), Success: true, Duration: 10 * time.Millisecond})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-1 * time.Hour), Success: true, Duration: 10 * time.Millisecond})
    checkHourlyStatistics(t, uptime.HourlyStatistics[now.Unix()-now.Unix()%3600-3600], 10, 1, 1)
    uptime.ProcessResult(&Result{Timestamp: now.Add(-30 * time.Minute), Success: false, Duration: 500 * time.Millisecond})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-30 * time.Minute), Success: false, Duration: 500 * time.Millisecond})
    checkHourlyStatistics(t, uptime.HourlyStatistics[now.Unix()-now.Unix()%3600-3600], 510, 2, 1)
    uptime.ProcessResult(&Result{Timestamp: now.Add(-15 * time.Minute), Success: false, Duration: 25 * time.Millisecond})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-15 * time.Minute), Success: false, Duration: 25 * time.Millisecond})
    checkHourlyStatistics(t, uptime.HourlyStatistics[now.Unix()-now.Unix()%3600-3600], 535, 3, 1)

    uptime.ProcessResult(&Result{Timestamp: now.Add(-10 * time.Minute), Success: false})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-10 * time.Minute), Success: false})
    checkUptimes(t, serviceStatus, 0.50, 0.50, 0.25)

    uptime.ProcessResult(&Result{Timestamp: now.Add(-120 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-119 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-118 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-117 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-10 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-8 * time.Hour), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-30 * time.Minute), Success: true})
    uptime.ProcessResult(&Result{Timestamp: now.Add(-25 * time.Minute), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-120 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-119 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-118 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-117 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-10 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-8 * time.Hour), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-30 * time.Minute), Success: true})
    processUptimeAfterResult(uptime, &core.Result{Timestamp: now.Add(-25 * time.Minute), Success: true})
    checkUptimes(t, serviceStatus, 0.75, 0.70, 0.50)
}

func TestServiceStatus_AddResultUptimeIsCleaningUpAfterItself(t *testing.T) {
    service := &Service{Name: "name", Group: "group"}
    serviceStatus := NewServiceStatus(service)
func TestAddResultUptimeIsCleaningUpAfterItself(t *testing.T) {
    service := &core.Service{Name: "name", Group: "group"}
    serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
    now := time.Now()
    now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())
    // Start 12 days ago
    timestamp := now.Add(-12 * 24 * time.Hour)
    for timestamp.Unix() <= now.Unix() {
        serviceStatus.AddResult(&Result{Timestamp: timestamp, Success: true})
        AddResult(serviceStatus, &core.Result{Timestamp: timestamp, Success: true})
        if len(serviceStatus.Uptime.HourlyStatistics) > numberOfHoursInTenDays {
            t.Errorf("At no point in time should there be more than %d entries in serviceStatus.SuccessfulExecutionsPerHour, but there are %d", numberOfHoursInTenDays, len(serviceStatus.Uptime.HourlyStatistics))
        }
@@ -71,7 +73,7 @@ func TestServiceStatus_AddResultUptimeIsCleaningUpAfterItself(t *testing.T) {
    }
}

func checkUptimes(t *testing.T, status *ServiceStatus, expectedUptimeDuringLastSevenDays, expectedUptimeDuringLastTwentyFourHours, expectedUptimeDuringLastHour float64) {
func checkUptimes(t *testing.T, status *core.ServiceStatus, expectedUptimeDuringLastSevenDays, expectedUptimeDuringLastTwentyFourHours, expectedUptimeDuringLastHour float64) {
    if status.Uptime.LastSevenDays != expectedUptimeDuringLastSevenDays {
        t.Errorf("expected status.Uptime.LastSevenDays to be %f, got %f", expectedUptimeDuringLastSevenDays, status.Uptime.LastSevenDays)
    }
@@ -83,7 +85,7 @@ func checkUptimes(t *testing.T, status *ServiceStatus, expectedUptimeDuringLastS
    }
}

func checkHourlyStatistics(t *testing.T, hourlyUptimeStatistics *HourlyUptimeStatistics, expectedTotalExecutionsResponseTime uint64, expectedTotalExecutions uint64, expectedSuccessfulExecutions uint64) {
func checkHourlyStatistics(t *testing.T, hourlyUptimeStatistics *core.HourlyUptimeStatistics, expectedTotalExecutionsResponseTime uint64, expectedTotalExecutions uint64, expectedSuccessfulExecutions uint64) {
    if hourlyUptimeStatistics.TotalExecutionsResponseTime != expectedTotalExecutionsResponseTime {
        t.Error("TotalExecutionsResponseTime should've been", expectedTotalExecutionsResponseTime, "got", hourlyUptimeStatistics.TotalExecutionsResponseTime)
    }
storage/store/memory/util.go (new file, 85 lines)
@@ -0,0 +1,85 @@
package memory

import (
    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/storage/store/paging"
)

// ShallowCopyServiceStatus returns a shallow copy of a ServiceStatus with only the results
// within the range defined by the page and pageSize parameters
func ShallowCopyServiceStatus(ss *core.ServiceStatus, params *paging.ServiceStatusParams) *core.ServiceStatus {
    shallowCopy := &core.ServiceStatus{
        Name:   ss.Name,
        Group:  ss.Group,
        Key:    ss.Key,
        Uptime: core.NewUptime(),
    }
    numberOfResults := len(ss.Results)
    resultsStart, resultsEnd := getStartAndEndIndex(numberOfResults, params.ResultsPage, params.ResultsPageSize)
    if resultsStart < 0 || resultsEnd < 0 {
        shallowCopy.Results = []*core.Result{}
    } else {
        shallowCopy.Results = ss.Results[resultsStart:resultsEnd]
    }
    numberOfEvents := len(ss.Events)
    eventsStart, eventsEnd := getStartAndEndIndex(numberOfEvents, params.EventsPage, params.EventsPageSize)
    if eventsStart < 0 || eventsEnd < 0 {
        shallowCopy.Events = []*core.Event{}
    } else {
        shallowCopy.Events = ss.Events[eventsStart:eventsEnd]
    }
    if params.IncludeUptime {
        shallowCopy.Uptime.LastHour = ss.Uptime.LastHour
        shallowCopy.Uptime.LastTwentyFourHours = ss.Uptime.LastTwentyFourHours
        shallowCopy.Uptime.LastSevenDays = ss.Uptime.LastSevenDays
    }
    return shallowCopy
}

func getStartAndEndIndex(numberOfResults int, page, pageSize int) (int, int) {
    if page < 1 || pageSize < 0 {
        return -1, -1
    }
    start := numberOfResults - (page * pageSize)
    end := numberOfResults - ((page - 1) * pageSize)
    if start > numberOfResults {
        start = -1
    } else if start < 0 {
        start = 0
    }
    if end > numberOfResults {
        end = numberOfResults
    }
    return start, end
}
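The index math counts back from the end of the slice, so page 1 is always the newest pageSize entries. A standalone illustration of the same arithmetic (the guard clauses for invalid pages are omitted here):

```go
package main

import "fmt"

// Same arithmetic as getStartAndEndIndex above, inlined for illustration.
func startAndEnd(n, page, pageSize int) (start, end int) {
	start = n - (page * pageSize)
	end = n - ((page - 1) * pageSize)
	if start < 0 {
		start = 0
	}
	if end > n {
		end = n
	}
	return start, end
}

func main() {
	fmt.Println(startAndEnd(25, 1, 10)) // 15 25 -> the 10 newest results
	fmt.Println(startAndEnd(25, 2, 10)) // 5 15
	fmt.Println(startAndEnd(25, 3, 10)) // 0 5  -> only 5 left on the last page
}
```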
// AddResult adds a Result to ServiceStatus.Results and makes sure that there are
// no more than MaximumNumberOfResults results in the Results slice
func AddResult(ss *core.ServiceStatus, result *core.Result) {
    if ss == nil {
        return
    }
    if len(ss.Results) > 0 {
        // Check if there's any change since the last result
        if ss.Results[len(ss.Results)-1].Success != result.Success {
            ss.Events = append(ss.Events, core.NewEventFromResult(result))
            if len(ss.Events) > core.MaximumNumberOfEvents {
                // Doing ss.Events[1:] would usually be sufficient, but in the case where for some reason, the slice has
                // more than one extra element, we can get rid of all of them at once and thus returning the slice to a
                // length of MaximumNumberOfEvents by using ss.Events[len(ss.Events)-MaximumNumberOfEvents:] instead
                ss.Events = ss.Events[len(ss.Events)-core.MaximumNumberOfEvents:]
            }
        }
    } else {
        // This is the first result, so we need to add the first healthy/unhealthy event
        ss.Events = append(ss.Events, core.NewEventFromResult(result))
    }
    ss.Results = append(ss.Results, result)
    if len(ss.Results) > core.MaximumNumberOfResults {
        // Doing ss.Results[1:] would usually be sufficient, but in the case where for some reason, the slice has more
        // than one extra element, we can get rid of all of them at once and thus returning the slice to a length of
        // MaximumNumberOfResults by using ss.Results[len(ss.Results)-MaximumNumberOfResults:] instead
        ss.Results = ss.Results[len(ss.Results)-core.MaximumNumberOfResults:]
    }
    processUptimeAfterResult(ss.Uptime, result)
}
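The re-slicing trick above bounds both slices in one step even if they somehow grew more than one element past the cap. The same pattern in isolation (the cap value is invented for the example):

```go
package main

import "fmt"

func main() {
	const max = 3 // stand-in for core.MaximumNumberOfResults
	results := []int{1, 2, 3, 4, 5}
	if len(results) > max {
		results = results[len(results)-max:] // keep only the newest max entries
	}
	fmt.Println(results) // [3 4 5]
}
```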
storage/store/memory/util_bench_test.go (new file, 20 lines)
@@ -0,0 +1,20 @@
package memory

import (
    "testing"

    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/storage/store/paging"
)

func BenchmarkShallowCopyServiceStatus(b *testing.B) {
    service := &testService
    serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
    for i := 0; i < core.MaximumNumberOfResults; i++ {
        AddResult(serviceStatus, &testSuccessfulResult)
    }
    for n := 0; n < b.N; n++ {
        ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 20))
    }
    b.ReportAllocs()
}
storage/store/memory/util_test.go (new file, 79 lines)
@@ -0,0 +1,79 @@
package memory

import (
    "testing"
    "time"

    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/storage/store/paging"
)

func TestAddResult(t *testing.T) {
    service := &core.Service{Name: "name", Group: "group"}
    serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
    for i := 0; i < (core.MaximumNumberOfResults+core.MaximumNumberOfEvents)*2; i++ {
        AddResult(serviceStatus, &core.Result{Success: i%2 == 0, Timestamp: time.Now()})
    }
    if len(serviceStatus.Results) != core.MaximumNumberOfResults {
        t.Errorf("expected serviceStatus.Results to not exceed a length of %d", core.MaximumNumberOfResults)
    }
    if len(serviceStatus.Events) != core.MaximumNumberOfEvents {
        t.Errorf("expected serviceStatus.Events to not exceed a length of %d", core.MaximumNumberOfEvents)
    }
    // Try to add nil serviceStatus
    AddResult(nil, &core.Result{Timestamp: time.Now()})
}

func TestShallowCopyServiceStatus(t *testing.T) {
    service := &core.Service{Name: "name", Group: "group"}
    serviceStatus := core.NewServiceStatus(service.Key(), service.Group, service.Name)
    ts := time.Now().Add(-25 * time.Hour)
    for i := 0; i < 25; i++ {
        AddResult(serviceStatus, &core.Result{Success: i%2 == 0, Timestamp: ts})
        ts = ts.Add(time.Hour)
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(-1, -1)).Results) != 0 {
        t.Error("expected to have 0 result")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 1)).Results) != 1 {
        t.Error("expected to have 1 result")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(5, 0)).Results) != 0 {
        t.Error("expected to have 0 results")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(-1, 20)).Results) != 0 {
        t.Error("expected to have 0 result, because the page was invalid")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, -1)).Results) != 0 {
        t.Error("expected to have 0 result, because the page size was invalid")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 10)).Results) != 10 {
        t.Error("expected to have 10 results, because given a page size of 10, page 1 should have 10 elements")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(2, 10)).Results) != 10 {
        t.Error("expected to have 10 results, because given a page size of 10, page 2 should have 10 elements")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(3, 10)).Results) != 5 {
        t.Error("expected to have 5 results, because given a page size of 10, page 3 should have 5 elements")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(4, 10)).Results) != 0 {
        t.Error("expected to have 0 results, because given a page size of 10, page 4 should have 0 elements")
    }
    if len(ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithResults(1, 50)).Results) != 25 {
        t.Error("expected to have 25 results, because there's only 25 results")
    }
    uptime := ShallowCopyServiceStatus(serviceStatus, paging.NewServiceStatusParams().WithUptime()).Uptime
    if uptime == nil {
        t.Error("expected uptime to not be nil")
    } else {
        if uptime.LastHour != 1 {
            t.Error("expected uptime.LastHour to be 1, got", uptime.LastHour)
        }
        if uptime.LastTwentyFourHours != 0.5 {
            t.Error("expected uptime.LastTwentyFourHours to be 0.5, got", uptime.LastTwentyFourHours)
        }
        if uptime.LastSevenDays != 0.52 {
            t.Error("expected uptime.LastSevenDays to be 0.52, got", uptime.LastSevenDays)
        }
    }
}
35
storage/store/paging/paging.go
Normal file
@@ -0,0 +1,35 @@
package paging

// ServiceStatusParams represents all parameters that can be used for paging purposes
type ServiceStatusParams struct {
    EventsPage      int  // Number of the event page
    EventsPageSize  int  // Size of the event page
    ResultsPage     int  // Number of the result page
    ResultsPageSize int  // Size of the result page
    IncludeUptime   bool // Whether to include uptime data
}

// NewServiceStatusParams creates a new ServiceStatusParams
func NewServiceStatusParams() *ServiceStatusParams {
    return &ServiceStatusParams{}
}

// WithEvents sets the values for EventsPage and EventsPageSize
func (params *ServiceStatusParams) WithEvents(page, pageSize int) *ServiceStatusParams {
    params.EventsPage = page
    params.EventsPageSize = pageSize
    return params
}

// WithResults sets the values for ResultsPage and ResultsPageSize
func (params *ServiceStatusParams) WithResults(page, pageSize int) *ServiceStatusParams {
    params.ResultsPage = page
    params.ResultsPageSize = pageSize
    return params
}

// WithUptime sets the value IncludeUptime to true
func (params *ServiceStatusParams) WithUptime() *ServiceStatusParams {
    params.IncludeUptime = true
    return params
}

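As the tests below illustrate, these setters are designed to be chained. A minimal usage sketch, based on the call sites elsewhere in this diff (the key is hypothetical, and `store` stands for any store accepting these params):

    params := paging.NewServiceStatusParams().WithResults(1, 20).WithUptime()
    serviceStatus := store.GetServiceStatusByKey("group_name", params)
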
81
storage/store/paging/paging_test.go
Normal file
@@ -0,0 +1,81 @@
package paging

import "testing"

func TestNewServiceStatusParams(t *testing.T) {
    type Scenario struct {
        Name                    string
        Params                  *ServiceStatusParams
        ExpectedEventsPage      int
        ExpectedEventsPageSize  int
        ExpectedResultsPage     int
        ExpectedResultsPageSize int
        ExpectedIncludeUptime   bool
    }
    scenarios := []Scenario{
        {
            Name:                    "empty-params",
            Params:                  NewServiceStatusParams(),
            ExpectedEventsPage:      0,
            ExpectedEventsPageSize:  0,
            ExpectedResultsPage:     0,
            ExpectedResultsPageSize: 0,
            ExpectedIncludeUptime:   false,
        },
        {
            Name:                    "with-events-page-2-size-7",
            Params:                  NewServiceStatusParams().WithEvents(2, 7),
            ExpectedEventsPage:      2,
            ExpectedEventsPageSize:  7,
            ExpectedResultsPage:     0,
            ExpectedResultsPageSize: 0,
            ExpectedIncludeUptime:   false,
        },
        {
            Name:                    "with-events-page-4-size-3-uptime",
            Params:                  NewServiceStatusParams().WithEvents(4, 3).WithUptime(),
            ExpectedEventsPage:      4,
            ExpectedEventsPageSize:  3,
            ExpectedResultsPage:     0,
            ExpectedResultsPageSize: 0,
            ExpectedIncludeUptime:   true,
        },
        {
            Name:                    "with-results-page-1-size-20-uptime",
            Params:                  NewServiceStatusParams().WithResults(1, 20).WithUptime(),
            ExpectedEventsPage:      0,
            ExpectedEventsPageSize:  0,
            ExpectedResultsPage:     1,
            ExpectedResultsPageSize: 20,
            ExpectedIncludeUptime:   true,
        },
        {
            Name:                    "with-results-page-2-size-10-events-page-3-size-50",
            Params:                  NewServiceStatusParams().WithResults(2, 10).WithEvents(3, 50),
            ExpectedEventsPage:      3,
            ExpectedEventsPageSize:  50,
            ExpectedResultsPage:     2,
            ExpectedResultsPageSize: 10,
            ExpectedIncludeUptime:   false,
        },
    }
    for _, scenario := range scenarios {
        t.Run(scenario.Name, func(t *testing.T) {
            if scenario.Params.EventsPage != scenario.ExpectedEventsPage {
                t.Errorf("expected EventsPage to be %d, was %d", scenario.ExpectedEventsPage, scenario.Params.EventsPage)
            }
            if scenario.Params.EventsPageSize != scenario.ExpectedEventsPageSize {
                t.Errorf("expected EventsPageSize to be %d, was %d", scenario.ExpectedEventsPageSize, scenario.Params.EventsPageSize)
            }
            if scenario.Params.ResultsPage != scenario.ExpectedResultsPage {
                t.Errorf("expected ResultsPage to be %d, was %d", scenario.ExpectedResultsPage, scenario.Params.ResultsPage)
            }
            if scenario.Params.ResultsPageSize != scenario.ExpectedResultsPageSize {
                t.Errorf("expected ResultsPageSize to be %d, was %d", scenario.ExpectedResultsPageSize, scenario.Params.ResultsPageSize)
            }
            if scenario.Params.IncludeUptime != scenario.ExpectedIncludeUptime {
                t.Errorf("expected IncludeUptime to be %v, was %v", scenario.ExpectedIncludeUptime, scenario.Params.IncludeUptime)
            }
        })
    }
}

778
storage/store/sqlite/sqlite.go
Normal file
@@ -0,0 +1,778 @@
package sqlite

import (
    "database/sql"
    "errors"
    "fmt"
    "log"
    "strings"
    "time"

    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/storage/store/paging"
    "github.com/TwinProduction/gatus/util"

    _ "modernc.org/sqlite"
)

//////////////////////////////////////////////////////////////////////////////////////////////////
// Note that only exported functions in this file may create, commit, or rollback a transaction //
//////////////////////////////////////////////////////////////////////////////////////////////////

const (
    arraySeparator = "|~|"

    uptimeCleanUpThreshold  = 10 * 24 * time.Hour              // Maximum uptime age before triggering a clean up
    eventsCleanUpThreshold  = core.MaximumNumberOfEvents + 10  // Maximum number of events before triggering a clean up
    resultsCleanUpThreshold = core.MaximumNumberOfResults + 10 // Maximum number of results before triggering a clean up

    uptimeRetention = 7 * 24 * time.Hour
)

var (
    // ErrFilePathNotSpecified is the error returned when the path parameter passed in NewStore is blank
    ErrFilePathNotSpecified = errors.New("file path cannot be empty")

    // ErrDatabaseDriverNotSpecified is the error returned when the driver parameter passed in NewStore is blank
    ErrDatabaseDriverNotSpecified = errors.New("database driver cannot be empty")

    errServiceNotFoundInDatabase = errors.New("service does not exist in database")
    errNoRowsReturned            = errors.New("expected a row to be returned, but none was")
)

// Store that leverages a database
type Store struct {
    driver, file string

    db *sql.DB
}

// NewStore initializes the database and creates the schema if it doesn't already exist in the file specified
func NewStore(driver, path string) (*Store, error) {
    if len(driver) == 0 {
        return nil, ErrDatabaseDriverNotSpecified
    }
    if len(path) == 0 {
        return nil, ErrFilePathNotSpecified
    }
    store := &Store{driver: driver, file: path}
    var err error
    if store.db, err = sql.Open(driver, path); err != nil {
        return nil, err
    }
    if driver == "sqlite" {
        _, _ = store.db.Exec("PRAGMA foreign_keys=ON")
        _, _ = store.db.Exec("PRAGMA journal_mode=WAL")
        _, _ = store.db.Exec("PRAGMA synchronous=NORMAL")
        // Prevents the driver from running into "database is locked" errors,
        // which can happen because we're using WAL to improve performance
        store.db.SetMaxOpenConns(1)
    }
    if err = store.createSchema(); err != nil {
        _ = store.db.Close()
        return nil, err
    }
    return store, nil
}

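// A minimal usage sketch for the function above (illustrative only; the file path is
// hypothetical): a caller would typically open the store once at startup and close it
// on shutdown.
//
//    store, err := NewStore("sqlite", "/data/db.db")
//    if err != nil {
//        // handle the error
//    }
//    defer store.Close()
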
// createSchema creates the schema required to perform all database operations.
func (s *Store) createSchema() error {
    _, err := s.db.Exec(`
        CREATE TABLE IF NOT EXISTS service (
            service_id INTEGER PRIMARY KEY,
            service_key TEXT UNIQUE,
            service_name TEXT,
            service_group TEXT,
            UNIQUE(service_name, service_group)
        )
    `)
    if err != nil {
        return err
    }
    _, err = s.db.Exec(`
        CREATE TABLE IF NOT EXISTS service_event (
            service_event_id INTEGER PRIMARY KEY,
            service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
            event_type TEXT,
            event_timestamp TIMESTAMP
        )
    `)
    if err != nil {
        return err
    }
    _, err = s.db.Exec(`
        CREATE TABLE IF NOT EXISTS service_result (
            service_result_id INTEGER PRIMARY KEY,
            service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
            success INTEGER,
            errors TEXT,
            connected INTEGER,
            status INTEGER,
            dns_rcode TEXT,
            certificate_expiration INTEGER,
            hostname TEXT,
            ip TEXT,
            duration INTEGER,
            timestamp TIMESTAMP
        )
    `)
    if err != nil {
        return err
    }
    _, err = s.db.Exec(`
        CREATE TABLE IF NOT EXISTS service_result_condition (
            service_result_condition_id INTEGER PRIMARY KEY,
            service_result_id INTEGER REFERENCES service_result(service_result_id) ON DELETE CASCADE,
            condition TEXT,
            success INTEGER
        )
    `)
    if err != nil {
        return err
    }
    _, err = s.db.Exec(`
        CREATE TABLE IF NOT EXISTS service_uptime (
            service_uptime_id INTEGER PRIMARY KEY,
            service_id INTEGER REFERENCES service(service_id) ON DELETE CASCADE,
            hour_unix_timestamp INTEGER,
            total_executions INTEGER,
            successful_executions INTEGER,
            total_response_time INTEGER,
            UNIQUE(service_id, hour_unix_timestamp)
        )
    `)
    return err
}

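// Note that because every child table above declares ON DELETE CASCADE on its foreign key,
// and because NewStore enables PRAGMA foreign_keys=ON, deleting a row from the service
// table automatically removes that service's events, results, condition results (via
// service_result) and uptime entries. Clear and DeleteAllServiceStatusesNotInKeys below
// rely on this by deleting from the service table only.
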
// GetAllServiceStatuses returns all monitored core.ServiceStatus
// with a subset of core.Result defined by the page and pageSize parameters
func (s *Store) GetAllServiceStatuses(params *paging.ServiceStatusParams) map[string]*core.ServiceStatus {
    tx, err := s.db.Begin()
    if err != nil {
        return nil
    }
    keys, err := s.getAllServiceKeys(tx)
    if err != nil {
        _ = tx.Rollback()
        return nil
    }
    serviceStatuses := make(map[string]*core.ServiceStatus, len(keys))
    for _, key := range keys {
        serviceStatus, err := s.getServiceStatusByKey(tx, key, params)
        if err != nil {
            continue
        }
        serviceStatuses[key] = serviceStatus
    }
    if err = tx.Commit(); err != nil {
        _ = tx.Rollback()
    }
    return serviceStatuses
}

// GetServiceStatus returns the service status for a given service name in the given group
func (s *Store) GetServiceStatus(groupName, serviceName string, params *paging.ServiceStatusParams) *core.ServiceStatus {
    return s.GetServiceStatusByKey(util.ConvertGroupAndServiceToKey(groupName, serviceName), params)
}

// GetServiceStatusByKey returns the service status for a given key
func (s *Store) GetServiceStatusByKey(key string, params *paging.ServiceStatusParams) *core.ServiceStatus {
    tx, err := s.db.Begin()
    if err != nil {
        return nil
    }
    serviceStatus, err := s.getServiceStatusByKey(tx, key, params)
    if err != nil {
        _ = tx.Rollback()
        return nil
    }
    if err = tx.Commit(); err != nil {
        _ = tx.Rollback()
    }
    return serviceStatus
}

// Insert adds the observed result for the specified service into the store
func (s *Store) Insert(service *core.Service, result *core.Result) {
    tx, err := s.db.Begin()
    if err != nil {
        return
    }
    //start := time.Now()
    serviceID, err := s.getServiceID(tx, service)
    if err != nil {
        if err == errServiceNotFoundInDatabase {
            // Service doesn't exist in the database, insert it
            if serviceID, err = s.insertService(tx, service); err != nil {
                _ = tx.Rollback()
                return // failed to insert service
            }
        } else {
            _ = tx.Rollback()
            return
        }
    }
    // First, we need to check if we need to insert a new event.
    //
    // A new event must be added if either of the following cases happen:
    // 1. There is only 1 event. The total number of events for a service can only be 1 if the only existing event is
    //    of type EventStart, in which case we will have to create a new event of type EventHealthy or EventUnhealthy
    //    based on result.Success.
    // 2. The lastResult.Success != result.Success. This implies that the service went from healthy to unhealthy or
    //    vice versa, in which case we will have to create a new event of type EventHealthy or EventUnhealthy
    //    based on result.Success.
    numberOfEvents, err := s.getNumberOfEventsByServiceID(tx, serviceID)
    if err != nil {
        log.Printf("[sqlite][Insert] Failed to retrieve total number of events for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
    }
    if numberOfEvents == 0 {
        // There are no events yet, which means we need to add the EventStart and the first healthy/unhealthy event
        err = s.insertEvent(tx, serviceID, &core.Event{
            Type:      core.EventStart,
            Timestamp: result.Timestamp.Add(-50 * time.Millisecond),
        })
        if err != nil {
            // Silently fail
            log.Printf("[sqlite][Insert] Failed to insert event=%s for group=%s; service=%s: %s", core.EventStart, service.Group, service.Name, err.Error())
        }
        event := core.NewEventFromResult(result)
        if err = s.insertEvent(tx, serviceID, event); err != nil {
            // Silently fail
            log.Printf("[sqlite][Insert] Failed to insert event=%s for group=%s; service=%s: %s", event.Type, service.Group, service.Name, err.Error())
        }
    } else {
        // Get the success value of the previous result
        var lastResultSuccess bool
        if lastResultSuccess, err = s.getLastServiceResultSuccessValue(tx, serviceID); err != nil {
            log.Printf("[sqlite][Insert] Failed to retrieve outcome of previous result for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
        } else {
            // If we managed to retrieve the outcome of the previous result, we'll compare it with the new result.
            // If the final outcome (success or failure) of the previous and the new result aren't the same, it means
            // that the service either went from healthy to unhealthy or vice versa; therefore, we'll add
            // an event to mark the change in state
            if lastResultSuccess != result.Success {
                event := core.NewEventFromResult(result)
                if err = s.insertEvent(tx, serviceID, event); err != nil {
                    // Silently fail
                    log.Printf("[sqlite][Insert] Failed to insert event=%s for group=%s; service=%s: %s", event.Type, service.Group, service.Name, err.Error())
                }
            }
        }
        // Clean up old events if there's more than twice the maximum number of events
        // This lets us keep the table clean without impacting performance too much
        // (since we're only deleting MaximumNumberOfEvents at a time instead of 1)
        if numberOfEvents > eventsCleanUpThreshold {
            if err = s.deleteOldServiceEvents(tx, serviceID); err != nil {
                log.Printf("[sqlite][Insert] Failed to delete old events for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
            }
        }
    }
    // Second, we need to insert the result.
    if err = s.insertResult(tx, serviceID, result); err != nil {
        log.Printf("[sqlite][Insert] Failed to insert result for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
        _ = tx.Rollback() // If we can't insert the result, we'll rollback now since there's no point continuing
        return
    }
    // Clean up old results
    numberOfResults, err := s.getNumberOfResultsByServiceID(tx, serviceID)
    if err != nil {
        log.Printf("[sqlite][Insert] Failed to retrieve total number of results for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
    } else {
        if numberOfResults > resultsCleanUpThreshold {
            if err = s.deleteOldServiceResults(tx, serviceID); err != nil {
                log.Printf("[sqlite][Insert] Failed to delete old results for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
            }
        }
    }
    // Finally, we need to insert the uptime data.
    // Because the uptime data significantly outlives the results, we can't rely on the results for determining the uptime
    if err = s.updateServiceUptime(tx, serviceID, result); err != nil {
        log.Printf("[sqlite][Insert] Failed to update uptime for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
    }
    // Clean up old uptime entries
    ageOfOldestUptimeEntry, err := s.getAgeOfOldestServiceUptimeEntry(tx, serviceID)
    if err != nil {
        log.Printf("[sqlite][Insert] Failed to retrieve oldest service uptime entry for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
    } else {
        if ageOfOldestUptimeEntry > uptimeCleanUpThreshold {
            if err = s.deleteOldUptimeEntries(tx, serviceID, time.Now().Add(-(uptimeRetention + time.Hour))); err != nil {
                log.Printf("[sqlite][Insert] Failed to delete old uptime entries for group=%s; service=%s: %s", service.Group, service.Name, err.Error())
            }
        }
    }
    //log.Printf("[sqlite][Insert] Successfully inserted result in duration=%dms", time.Since(start).Milliseconds())
    if err = tx.Commit(); err != nil {
        _ = tx.Rollback()
    }
}

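// To illustrate the event logic in Insert above, here is the sequence of events a brand
// new service would accumulate (a sketch; the outcomes and timestamps are hypothetical):
//
//    store.Insert(service, &core.Result{Success: true, Timestamp: time.Now()})  // events: EventStart + EventHealthy
//    store.Insert(service, &core.Result{Success: true, Timestamp: time.Now()})  // no new event, outcome unchanged
//    store.Insert(service, &core.Result{Success: false, Timestamp: time.Now()}) // new event: EventUnhealthy
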
// DeleteAllServiceStatusesNotInKeys removes all rows owned by a service whose key is not within the keys provided
func (s *Store) DeleteAllServiceStatusesNotInKeys(keys []string) int {
    var err error
    var result sql.Result
    if len(keys) == 0 {
        // Delete everything
        result, err = s.db.Exec("DELETE FROM service")
    } else {
        args := make([]interface{}, 0, len(keys))
        for i := range keys {
            args = append(args, keys[i])
        }
        result, err = s.db.Exec(fmt.Sprintf("DELETE FROM service WHERE service_key NOT IN (%s)", strings.Trim(strings.Repeat("?,", len(keys)), ",")), args...)
    }
    if err != nil {
        log.Printf("[sqlite][DeleteAllServiceStatusesNotInKeys] Failed to delete rows that do not belong to any of keys=%v: %s", keys, err.Error())
        return 0
    }
    rowsAffected, _ := result.RowsAffected()
    return int(rowsAffected)
}

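// For illustration, with three hypothetical keys, strings.Repeat("?,", 3) produces "?,?,?,"
// and strings.Trim removes the trailing comma, so the statement built above expands to:
//
//    DELETE FROM service WHERE service_key NOT IN (?,?,?)
//
// with the keys bound as arguments, which avoids interpolating the values into the SQL itself.
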
// Clear deletes everything from the store
func (s *Store) Clear() {
    _, _ = s.db.Exec("DELETE FROM service")
}

// Save does nothing, because this store is immediately persistent.
func (s *Store) Save() error {
    return nil
}

// Close the database handle
func (s *Store) Close() {
    _ = s.db.Close()
}

// insertService inserts a service in the store and returns the generated id of said service
func (s *Store) insertService(tx *sql.Tx, service *core.Service) (int64, error) {
    //log.Printf("[sqlite][insertService] Inserting service with group=%s and name=%s", service.Group, service.Name)
    result, err := tx.Exec(
        "INSERT INTO service (service_key, service_name, service_group) VALUES ($1, $2, $3)",
        service.Key(),
        service.Name,
        service.Group,
    )
    if err != nil {
        return 0, err
    }
    return result.LastInsertId()
}

// insertEvent inserts a service event in the store
func (s *Store) insertEvent(tx *sql.Tx, serviceID int64, event *core.Event) error {
    _, err := tx.Exec(
        "INSERT INTO service_event (service_id, event_type, event_timestamp) VALUES ($1, $2, $3)",
        serviceID,
        event.Type,
        event.Timestamp,
    )
    if err != nil {
        return err
    }
    return nil
}

// insertResult inserts a result in the store
func (s *Store) insertResult(tx *sql.Tx, serviceID int64, result *core.Result) error {
    res, err := tx.Exec(
        `
            INSERT INTO service_result (service_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
        `,
        serviceID,
        result.Success,
        strings.Join(result.Errors, arraySeparator),
        result.Connected,
        result.HTTPStatus,
        result.DNSRCode,
        result.CertificateExpiration,
        result.Hostname,
        result.IP,
        result.Duration,
        result.Timestamp,
    )
    if err != nil {
        return err
    }
    serviceResultID, err := res.LastInsertId()
    if err != nil {
        return err
    }
    return s.insertConditionResults(tx, serviceResultID, result.ConditionResults)
}

func (s *Store) insertConditionResults(tx *sql.Tx, serviceResultID int64, conditionResults []*core.ConditionResult) error {
    var err error
    for _, cr := range conditionResults {
        _, err = tx.Exec("INSERT INTO service_result_condition (service_result_id, condition, success) VALUES ($1, $2, $3)",
            serviceResultID,
            cr.Condition,
            cr.Success,
        )
        if err != nil {
            return err
        }
    }
    return nil
}

func (s *Store) updateServiceUptime(tx *sql.Tx, serviceID int64, result *core.Result) error {
    unixTimestampFlooredAtHour := result.Timestamp.Truncate(time.Hour).Unix()
    var successfulExecutions int
    if result.Success {
        successfulExecutions = 1
    }
    _, err := tx.Exec(
        `
            INSERT INTO service_uptime (service_id, hour_unix_timestamp, total_executions, successful_executions, total_response_time)
            VALUES ($1, $2, $3, $4, $5)
            ON CONFLICT(service_id, hour_unix_timestamp) DO UPDATE SET
                total_executions = excluded.total_executions + total_executions,
                successful_executions = excluded.successful_executions + successful_executions,
                total_response_time = excluded.total_response_time + total_response_time
        `,
        serviceID,
        unixTimestampFlooredAtHour,
        1,
        successfulExecutions,
        result.Duration.Milliseconds(),
    )
    if err != nil {
        return err
    }
    return nil
}

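// Worked example for the upsert above (illustrative numbers): two results recorded within
// the same hour, one successful taking 150ms and one unsuccessful taking 750ms, leave that
// hour's row at total_executions=2, successful_executions=1, total_response_time=900,
// because the ON CONFLICT clause adds the excluded (incoming) values to the stored ones
// rather than overwriting them.
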
func (s *Store) getAllServiceKeys(tx *sql.Tx) (keys []string, err error) {
    rows, err := tx.Query("SELECT service_key FROM service")
    if err != nil {
        return nil, err
    }
    for rows.Next() {
        var key string
        _ = rows.Scan(&key)
        keys = append(keys, key)
    }
    _ = rows.Close()
    return
}

func (s *Store) getServiceStatusByKey(tx *sql.Tx, key string, parameters *paging.ServiceStatusParams) (*core.ServiceStatus, error) {
    serviceID, serviceGroup, serviceName, err := s.getServiceIDGroupAndNameByKey(tx, key)
    if err != nil {
        return nil, err
    }
    serviceStatus := core.NewServiceStatus(key, serviceGroup, serviceName)
    if parameters.EventsPageSize > 0 {
        if serviceStatus.Events, err = s.getEventsByServiceID(tx, serviceID, parameters.EventsPage, parameters.EventsPageSize); err != nil {
            log.Printf("[sqlite][getServiceStatusByKey] Failed to retrieve events for key=%s: %s", key, err.Error())
        }
    }
    if parameters.ResultsPageSize > 0 {
        if serviceStatus.Results, err = s.getResultsByServiceID(tx, serviceID, parameters.ResultsPage, parameters.ResultsPageSize); err != nil {
            log.Printf("[sqlite][getServiceStatusByKey] Failed to retrieve results for key=%s: %s", key, err.Error())
        }
    }
    if parameters.IncludeUptime {
        now := time.Now()
        serviceStatus.Uptime.LastHour, _, err = s.getServiceUptime(tx, serviceID, now.Add(-time.Hour), now)
        serviceStatus.Uptime.LastTwentyFourHours, _, err = s.getServiceUptime(tx, serviceID, now.Add(-24*time.Hour), now)
        serviceStatus.Uptime.LastSevenDays, _, err = s.getServiceUptime(tx, serviceID, now.Add(-7*24*time.Hour), now)
    }
    return serviceStatus, nil
}

func (s *Store) getServiceIDGroupAndNameByKey(tx *sql.Tx, key string) (id int64, group, name string, err error) {
    rows, err := tx.Query(
        `
            SELECT service_id, service_group, service_name
            FROM service
            WHERE service_key = $1
            LIMIT 1
        `,
        key,
    )
    if err != nil {
        return 0, "", "", err
    }
    for rows.Next() {
        _ = rows.Scan(&id, &group, &name)
    }
    _ = rows.Close()
    if id == 0 {
        return 0, "", "", errServiceNotFoundInDatabase
    }
    return
}

func (s *Store) getEventsByServiceID(tx *sql.Tx, serviceID int64, page, pageSize int) (events []*core.Event, err error) {
    rows, err := tx.Query(
        `
            SELECT event_type, event_timestamp
            FROM service_event
            WHERE service_id = $1
            ORDER BY service_event_id ASC
            LIMIT $2 OFFSET $3
        `,
        serviceID,
        pageSize,
        (page-1)*pageSize,
    )
    if err != nil {
        return nil, err
    }
    for rows.Next() {
        event := &core.Event{}
        _ = rows.Scan(&event.Type, &event.Timestamp)
        events = append(events, event)
    }
    _ = rows.Close()
    return
}

func (s *Store) getResultsByServiceID(tx *sql.Tx, serviceID int64, page, pageSize int) (results []*core.Result, err error) {
    rows, err := tx.Query(
        `
            SELECT service_result_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp
            FROM service_result
            WHERE service_id = $1
            ORDER BY service_result_id DESC -- Normally, we'd sort by timestamp, but sorting by service_result_id is faster
            LIMIT $2 OFFSET $3
        `,
        //`
        //    SELECT * FROM (
        //        SELECT service_result_id, success, errors, connected, status, dns_rcode, certificate_expiration, hostname, ip, duration, timestamp
        //        FROM service_result
        //        WHERE service_id = $1
        //        ORDER BY service_result_id DESC -- Normally, we'd sort by timestamp, but sorting by service_result_id is faster
        //        LIMIT $2 OFFSET $3
        //    )
        //    ORDER BY service_result_id ASC -- Normally, we'd sort by timestamp, but sorting by service_result_id is faster
        //`,
        serviceID,
        pageSize,
        (page-1)*pageSize,
    )
    if err != nil {
        return nil, err
    }
    idResultMap := make(map[int64]*core.Result)
    for rows.Next() {
        result := &core.Result{}
        var id int64
        var joinedErrors string
        _ = rows.Scan(&id, &result.Success, &joinedErrors, &result.Connected, &result.HTTPStatus, &result.DNSRCode, &result.CertificateExpiration, &result.Hostname, &result.IP, &result.Duration, &result.Timestamp)
        if len(joinedErrors) != 0 {
            result.Errors = strings.Split(joinedErrors, arraySeparator)
        }
        //results = append(results, result)
        // This is faster than using a subselect
        results = append([]*core.Result{result}, results...)
        idResultMap[id] = result
    }
    _ = rows.Close()
    // Get the conditionResults
    for serviceResultID, result := range idResultMap {
        rows, err = tx.Query(
            `
                SELECT condition, success
                FROM service_result_condition
                WHERE service_result_id = $1
            `,
            serviceResultID,
        )
        if err != nil {
            return
        }
        for rows.Next() {
            conditionResult := &core.ConditionResult{}
            if err = rows.Scan(&conditionResult.Condition, &conditionResult.Success); err != nil {
                return
            }
            result.ConditionResults = append(result.ConditionResults, conditionResult)
        }
        _ = rows.Close()
    }
    return
}

func (s *Store) getServiceUptime(tx *sql.Tx, serviceID int64, from, to time.Time) (uptime float64, avgResponseTime time.Duration, err error) {
    rows, err := tx.Query(
        `
            SELECT SUM(total_executions), SUM(successful_executions), SUM(total_response_time)
            FROM service_uptime
            WHERE service_id = $1
                AND hour_unix_timestamp >= $2
                AND hour_unix_timestamp <= $3
        `,
        serviceID,
        from.Unix(),
        to.Unix(),
    )
    if err != nil {
        return 0, 0, err
    }
    var totalExecutions, totalSuccessfulExecutions, totalResponseTime int
    for rows.Next() {
        _ = rows.Scan(&totalExecutions, &totalSuccessfulExecutions, &totalResponseTime)
        break
    }
    _ = rows.Close()
    if totalExecutions > 0 {
        uptime = float64(totalSuccessfulExecutions) / float64(totalExecutions)
        avgResponseTime = time.Duration(float64(totalResponseTime)/float64(totalExecutions)) * time.Millisecond
    }
    return
}

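// For example (illustrative numbers): if the summed rows for the requested window come out
// to total_executions=4, successful_executions=3 and total_response_time=600, the function
// above returns uptime=0.75 and avgResponseTime=150ms (600/4, converted to milliseconds).
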
func (s *Store) getServiceID(tx *sql.Tx, service *core.Service) (int64, error) {
    rows, err := tx.Query("SELECT service_id FROM service WHERE service_key = $1", service.Key())
    if err != nil {
        return 0, err
    }
    var id int64
    var found bool
    for rows.Next() {
        _ = rows.Scan(&id)
        found = true
        break
    }
    _ = rows.Close()
    if !found {
        return 0, errServiceNotFoundInDatabase
    }
    return id, nil
}

func (s *Store) getNumberOfEventsByServiceID(tx *sql.Tx, serviceID int64) (int64, error) {
    rows, err := tx.Query("SELECT COUNT(1) FROM service_event WHERE service_id = $1", serviceID)
    if err != nil {
        return 0, err
    }
    var numberOfEvents int64
    for rows.Next() {
        _ = rows.Scan(&numberOfEvents)
    }
    _ = rows.Close()
    return numberOfEvents, nil
}

func (s *Store) getNumberOfResultsByServiceID(tx *sql.Tx, serviceID int64) (int64, error) {
    rows, err := tx.Query("SELECT COUNT(1) FROM service_result WHERE service_id = $1", serviceID)
    if err != nil {
        return 0, err
    }
    var numberOfResults int64
    for rows.Next() {
        _ = rows.Scan(&numberOfResults)
    }
    _ = rows.Close()
    return numberOfResults, nil
}

func (s *Store) getAgeOfOldestServiceUptimeEntry(tx *sql.Tx, serviceID int64) (time.Duration, error) {
    rows, err := tx.Query(
        `
            SELECT hour_unix_timestamp
            FROM service_uptime
            WHERE service_id = $1
            ORDER BY hour_unix_timestamp
            LIMIT 1
        `,
        serviceID,
    )
    if err != nil {
        return 0, err
    }
    var oldestServiceUptimeUnixTimestamp int64
    var found bool
    for rows.Next() {
        _ = rows.Scan(&oldestServiceUptimeUnixTimestamp)
        found = true
        break
    }
    _ = rows.Close()
    if !found {
        return 0, errNoRowsReturned
    }
    return time.Since(time.Unix(oldestServiceUptimeUnixTimestamp, 0)), nil
}

func (s *Store) getLastServiceResultSuccessValue(tx *sql.Tx, serviceID int64) (bool, error) {
    rows, err := tx.Query("SELECT success FROM service_result WHERE service_id = $1 ORDER BY service_result_id DESC LIMIT 1", serviceID)
    if err != nil {
        return false, err
    }
    var success bool
    var found bool
    for rows.Next() {
        _ = rows.Scan(&success)
        found = true
        break
    }
    _ = rows.Close()
    if !found {
        return false, errNoRowsReturned
    }
    return success, nil
}

// deleteOldServiceEvents deletes old service events that are no longer needed
func (s *Store) deleteOldServiceEvents(tx *sql.Tx, serviceID int64) error {
    _, err := tx.Exec(
        `
            DELETE FROM service_event
            WHERE service_id = $1
                AND service_event_id NOT IN (
                    SELECT service_event_id
                    FROM service_event
                    WHERE service_id = $1
                    ORDER BY service_event_id DESC
                    LIMIT $2
                )
        `,
        serviceID,
        core.MaximumNumberOfEvents,
    )
    if err != nil {
        return err
    }
    //rowsAffected, _ := result.RowsAffected()
    //log.Printf("deleted %d rows from service_event", rowsAffected)
    return nil
}

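// Both deleteOldServiceEvents above and deleteOldServiceResults below use the same
// keep-newest-N pattern: the inner SELECT picks the identifiers of the N most recent rows
// for the service (ORDER BY ... DESC LIMIT $2), and the outer DELETE removes every row
// that is not among them, so each clean up trims the table back down to exactly
// core.MaximumNumberOfEvents or core.MaximumNumberOfResults rows.
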
// deleteOldServiceResults deletes old service results that are no longer needed
func (s *Store) deleteOldServiceResults(tx *sql.Tx, serviceID int64) error {
    _, err := tx.Exec(
        `
            DELETE FROM service_result
            WHERE service_id = $1
                AND service_result_id NOT IN (
                    SELECT service_result_id
                    FROM service_result
                    WHERE service_id = $1
                    ORDER BY service_result_id DESC
                    LIMIT $2
                )
        `,
        serviceID,
        core.MaximumNumberOfResults,
    )
    if err != nil {
        return err
    }
    //rowsAffected, _ := result.RowsAffected()
    //log.Printf("deleted %d rows from service_result", rowsAffected)
    return nil
}

func (s *Store) deleteOldUptimeEntries(tx *sql.Tx, serviceID int64, maxAge time.Time) error {
    _, err := tx.Exec("DELETE FROM service_uptime WHERE service_id = $1 AND hour_unix_timestamp < $2", serviceID, maxAge.Unix())
    //if err != nil {
    //    return err
    //}
    //rowsAffected, _ := result.RowsAffected()
    //log.Printf("deleted %d rows from service_uptime", rowsAffected)
    return err
}

352
storage/store/sqlite/sqlite_test.go
Normal file
@@ -0,0 +1,352 @@
package sqlite

import (
    "testing"
    "time"

    "github.com/TwinProduction/gatus/core"
    "github.com/TwinProduction/gatus/storage/store/paging"
)

var (
    firstCondition  = core.Condition("[STATUS] == 200")
    secondCondition = core.Condition("[RESPONSE_TIME] < 500")
    thirdCondition  = core.Condition("[CERTIFICATE_EXPIRATION] < 72h")

    now = time.Now()

    testService = core.Service{
        Name:                    "name",
        Group:                   "group",
        URL:                     "https://example.org/what/ever",
        Method:                  "GET",
        Body:                    "body",
        Interval:                30 * time.Second,
        Conditions:              []*core.Condition{&firstCondition, &secondCondition, &thirdCondition},
        Alerts:                  nil,
        Insecure:                false,
        NumberOfFailuresInARow:  0,
        NumberOfSuccessesInARow: 0,
    }
    testSuccessfulResult = core.Result{
        Hostname:              "example.org",
        IP:                    "127.0.0.1",
        HTTPStatus:            200,
        Errors:                nil,
        Connected:             true,
        Success:               true,
        Timestamp:             now,
        Duration:              150 * time.Millisecond,
        CertificateExpiration: 10 * time.Hour,
        ConditionResults: []*core.ConditionResult{
            {
                Condition: "[STATUS] == 200",
                Success:   true,
            },
            {
                Condition: "[RESPONSE_TIME] < 500",
                Success:   true,
            },
            {
                Condition: "[CERTIFICATE_EXPIRATION] < 72h",
                Success:   true,
            },
        },
    }
    testUnsuccessfulResult = core.Result{
        Hostname:              "example.org",
        IP:                    "127.0.0.1",
        HTTPStatus:            200,
        Errors:                []string{"error-1", "error-2"},
        Connected:             true,
        Success:               false,
        Timestamp:             now,
        Duration:              750 * time.Millisecond,
        CertificateExpiration: 10 * time.Hour,
        ConditionResults: []*core.ConditionResult{
            {
                Condition: "[STATUS] == 200",
                Success:   true,
            },
            {
                Condition: "[RESPONSE_TIME] < 500",
                Success:   false,
            },
            {
                Condition: "[CERTIFICATE_EXPIRATION] < 72h",
                Success:   false,
            },
        },
    }
)

func TestNewStore(t *testing.T) {
    if _, err := NewStore("", "TestNewStore.db"); err != ErrDatabaseDriverNotSpecified {
        t.Error("expected error due to blank driver parameter")
    }
    if _, err := NewStore("sqlite", ""); err != ErrFilePathNotSpecified {
        t.Error("expected error due to blank path parameter")
    }
    if store, err := NewStore("sqlite", t.TempDir()+"/TestNewStore.db"); err != nil {
        t.Error("shouldn't have returned any error, got", err.Error())
    } else {
        _ = store.db.Close()
    }
}

func TestStore_InsertCleansUpOldUptimeEntriesProperly(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_InsertCleansUpOldUptimeEntriesProperly.db")
    defer store.Close()
    now := time.Now().Round(time.Minute)
    now = time.Date(now.Year(), now.Month(), now.Day(), now.Hour(), 0, 0, 0, now.Location())

    store.Insert(&testService, &core.Result{Timestamp: now.Add(-5 * time.Hour), Success: true})

    tx, _ := store.db.Begin()
    oldest, _ := store.getAgeOfOldestServiceUptimeEntry(tx, 1)
    _ = tx.Commit()
    if oldest.Truncate(time.Hour) != 5*time.Hour {
        t.Errorf("oldest service uptime entry should've been ~5 hours old, was %s", oldest)
    }

    // The oldest cache entry should remain at ~5 hours old, because this entry is more recent
    store.Insert(&testService, &core.Result{Timestamp: now.Add(-3 * time.Hour), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
    _ = tx.Commit()
    if oldest.Truncate(time.Hour) != 5*time.Hour {
        t.Errorf("oldest service uptime entry should've been ~5 hours old, was %s", oldest)
    }

    // The oldest cache entry should now be ~8 hours old, because this entry is older
    store.Insert(&testService, &core.Result{Timestamp: now.Add(-8 * time.Hour), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
    _ = tx.Commit()
    if oldest.Truncate(time.Hour) != 8*time.Hour {
        t.Errorf("oldest service uptime entry should've been ~8 hours old, was %s", oldest)
    }

    // Since this is one hour before reaching the clean up threshold, the oldest entry should now be this one
    store.Insert(&testService, &core.Result{Timestamp: now.Add(-(uptimeCleanUpThreshold - time.Hour)), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
    _ = tx.Commit()
    if oldest.Truncate(time.Hour) != uptimeCleanUpThreshold-time.Hour {
        t.Errorf("oldest service uptime entry should've been ~%s old, was %s", uptimeCleanUpThreshold-time.Hour, oldest)
    }

    // Since this entry is older than the uptimeCleanUpThreshold, both this entry and the previous
    // one should be deleted, as they both exceed the uptimeRetention
    store.Insert(&testService, &core.Result{Timestamp: now.Add(-(uptimeCleanUpThreshold + time.Hour)), Success: true})

    tx, _ = store.db.Begin()
    oldest, _ = store.getAgeOfOldestServiceUptimeEntry(tx, 1)
    _ = tx.Commit()
    if oldest.Truncate(time.Hour) != 8*time.Hour {
        t.Errorf("oldest service uptime entry should've been ~8 hours old, was %s", oldest)
    }
}

func TestStore_InsertCleansUpEventsAndResultsProperly(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_InsertCleansUpEventsAndResultsProperly.db")
    defer store.Close()
    for i := 0; i < resultsCleanUpThreshold+eventsCleanUpThreshold; i++ {
        store.Insert(&testService, &testSuccessfulResult)
        store.Insert(&testService, &testUnsuccessfulResult)
        ss := store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, core.MaximumNumberOfResults*5).WithEvents(1, core.MaximumNumberOfEvents*5))
        if len(ss.Results) > resultsCleanUpThreshold+1 {
            t.Errorf("number of results shouldn't have exceeded %d, reached %d", resultsCleanUpThreshold, len(ss.Results))
        }
        if len(ss.Events) > eventsCleanUpThreshold+1 {
            t.Errorf("number of events shouldn't have exceeded %d, reached %d", eventsCleanUpThreshold, len(ss.Events))
        }
    }
    store.Clear()
}

func TestStore_Persistence(t *testing.T) {
    file := t.TempDir() + "/TestStore_Persistence.db"
    store, _ := NewStore("sqlite", file)
    store.Insert(&testService, &testSuccessfulResult)
    store.Insert(&testService, &testUnsuccessfulResult)
    ssFromOldStore := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, core.MaximumNumberOfResults).WithEvents(1, core.MaximumNumberOfEvents).WithUptime())
    if ssFromOldStore == nil || ssFromOldStore.Group != "group" || ssFromOldStore.Name != "name" || len(ssFromOldStore.Events) != 3 || len(ssFromOldStore.Results) != 2 || ssFromOldStore.Uptime.LastHour != 0.5 || ssFromOldStore.Uptime.LastTwentyFourHours != 0.5 || ssFromOldStore.Uptime.LastSevenDays != 0.5 {
        store.Close()
        t.Fatal("sanity check failed")
    }
    store.Close()
    store, _ = NewStore("sqlite", file)
    defer store.Close()
    ssFromNewStore := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, core.MaximumNumberOfResults).WithEvents(1, core.MaximumNumberOfEvents).WithUptime())
    if ssFromNewStore == nil || ssFromNewStore.Group != "group" || ssFromNewStore.Name != "name" || len(ssFromNewStore.Events) != 3 || len(ssFromNewStore.Results) != 2 || ssFromNewStore.Uptime.LastHour != 0.5 || ssFromNewStore.Uptime.LastTwentyFourHours != 0.5 || ssFromNewStore.Uptime.LastSevenDays != 0.5 {
        t.Fatal("sanity check failed")
    }
    if ssFromNewStore == ssFromOldStore {
        t.Fatal("ss from the old and new store should have a different memory address")
    }
    for i := range ssFromNewStore.Events {
        if ssFromNewStore.Events[i].Timestamp != ssFromOldStore.Events[i].Timestamp {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Events[i].Type != ssFromOldStore.Events[i].Type {
            t.Error("new and old should've been the same")
        }
    }
    for i := range ssFromOldStore.Results {
        if ssFromNewStore.Results[i].Timestamp != ssFromOldStore.Results[i].Timestamp {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Results[i].Success != ssFromOldStore.Results[i].Success {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Results[i].Connected != ssFromOldStore.Results[i].Connected {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Results[i].IP != ssFromOldStore.Results[i].IP {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Results[i].Hostname != ssFromOldStore.Results[i].Hostname {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Results[i].HTTPStatus != ssFromOldStore.Results[i].HTTPStatus {
            t.Error("new and old should've been the same")
        }
        if ssFromNewStore.Results[i].DNSRCode != ssFromOldStore.Results[i].DNSRCode {
            t.Error("new and old should've been the same")
        }
        if len(ssFromNewStore.Results[i].Errors) != len(ssFromOldStore.Results[i].Errors) {
            t.Error("new and old should've been the same")
        } else {
            for j := range ssFromOldStore.Results[i].Errors {
                if ssFromNewStore.Results[i].Errors[j] != ssFromOldStore.Results[i].Errors[j] {
                    t.Error("new and old should've been the same")
                }
            }
        }
        if len(ssFromNewStore.Results[i].ConditionResults) != len(ssFromOldStore.Results[i].ConditionResults) {
            t.Error("new and old should've been the same")
        } else {
            for j := range ssFromOldStore.Results[i].ConditionResults {
                if ssFromNewStore.Results[i].ConditionResults[j].Condition != ssFromOldStore.Results[i].ConditionResults[j].Condition {
                    t.Error("new and old should've been the same")
                }
                if ssFromNewStore.Results[i].ConditionResults[j].Success != ssFromOldStore.Results[i].ConditionResults[j].Success {
                    t.Error("new and old should've been the same")
                }
            }
        }
    }
}

func TestStore_Save(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_Save.db")
    defer store.Close()
    if store.Save() != nil {
        t.Error("Save shouldn't do anything for this store")
    }
}

// Note that there are much more extensive tests in /storage/store/store_test.go.
// This test is simply an extra sanity check
func TestStore_SanityCheck(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_SanityCheck.db")
    defer store.Close()
    store.Insert(&testService, &testSuccessfulResult)
    if numberOfServiceStatuses := len(store.GetAllServiceStatuses(paging.NewServiceStatusParams())); numberOfServiceStatuses != 1 {
        t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
    }
    store.Insert(&testService, &testUnsuccessfulResult)
    // Both results inserted are for the same service, therefore, the count shouldn't have increased
    if numberOfServiceStatuses := len(store.GetAllServiceStatuses(paging.NewServiceStatusParams())); numberOfServiceStatuses != 1 {
        t.Fatalf("expected 1 ServiceStatus, got %d", numberOfServiceStatuses)
    }
    ss := store.GetServiceStatus(testService.Group, testService.Name, paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 20))
    if ss == nil {
        t.Fatalf("Store should've had key '%s', but didn't", testService.Key())
    }
    if len(ss.Events) != 3 {
        t.Errorf("Service '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
    }
    if len(ss.Results) != 2 {
        t.Errorf("Service '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
    }
    if deleted := store.DeleteAllServiceStatusesNotInKeys([]string{}); deleted != 1 {
        t.Errorf("%d entries should've been deleted, got %d", 1, deleted)
    }
}

// TestStore_InvalidTransaction tests what happens if an invalid transaction is passed as parameter
func TestStore_InvalidTransaction(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_InvalidTransaction.db")
    defer store.Close()
    tx, _ := store.db.Begin()
    _ = tx.Commit()
    if _, err := store.insertService(tx, &testService); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if err := store.insertEvent(tx, 1, core.NewEventFromResult(&testSuccessfulResult)); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if err := store.insertResult(tx, 1, &testSuccessfulResult); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if err := store.insertConditionResults(tx, 1, testSuccessfulResult.ConditionResults); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if err := store.updateServiceUptime(tx, 1, &testSuccessfulResult); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getAllServiceKeys(tx); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getServiceStatusByKey(tx, testService.Key(), paging.NewServiceStatusParams().WithResults(1, 20)); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getEventsByServiceID(tx, 1, 1, 50); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getResultsByServiceID(tx, 1, 1, 50); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if err := store.deleteOldServiceEvents(tx, 1); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if err := store.deleteOldServiceResults(tx, 1); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, _, err := store.getServiceUptime(tx, 1, time.Now(), time.Now()); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getServiceID(tx, &testService); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getNumberOfEventsByServiceID(tx, 1); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getNumberOfResultsByServiceID(tx, 1); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getAgeOfOldestServiceUptimeEntry(tx, 1); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
    if _, err := store.getLastServiceResultSuccessValue(tx, 1); err == nil {
        t.Error("should've returned an error, because the transaction was already committed")
    }
}

func TestStore_NoRows(t *testing.T) {
    store, _ := NewStore("sqlite", t.TempDir()+"/TestStore_NoRows.db")
    defer store.Close()
    tx, _ := store.db.Begin()
    defer tx.Rollback()
    if _, err := store.getLastServiceResultSuccessValue(tx, 1); err != errNoRowsReturned {
        t.Errorf("should've returned %v, got %v", errNoRowsReturned, err)
    }
    if _, err := store.getAgeOfOldestServiceUptimeEntry(tx, 1); err != errNoRowsReturned {
        t.Errorf("should've returned %v, got %v", errNoRowsReturned, err)
    }
}

storage/store/store.go
@@ -3,19 +3,21 @@ package store
 import (
 	"github.com/TwinProduction/gatus/core"
 	"github.com/TwinProduction/gatus/storage/store/memory"
+	"github.com/TwinProduction/gatus/storage/store/paging"
+	"github.com/TwinProduction/gatus/storage/store/sqlite"
 )

 // Store is the interface that each store should implement
 type Store interface {
-	// GetAllServiceStatusesWithResultPagination returns the JSON encoding of all monitored core.ServiceStatus
+	// GetAllServiceStatuses returns the JSON encoding of all monitored core.ServiceStatus
 	// with a subset of core.Result defined by the page and pageSize parameters
-	GetAllServiceStatusesWithResultPagination(page, pageSize int) map[string]*core.ServiceStatus
+	GetAllServiceStatuses(params *paging.ServiceStatusParams) map[string]*core.ServiceStatus

 	// GetServiceStatus returns the service status for a given service name in the given group
-	GetServiceStatus(groupName, serviceName string) *core.ServiceStatus
+	GetServiceStatus(groupName, serviceName string, params *paging.ServiceStatusParams) *core.ServiceStatus

 	// GetServiceStatusByKey returns the service status for a given key
-	GetServiceStatusByKey(key string) *core.ServiceStatus
+	GetServiceStatusByKey(key string, params *paging.ServiceStatusParams) *core.ServiceStatus

 	// Insert adds the observed result for the specified service into the store
 	Insert(service *core.Service, result *core.Result)

@@ -30,9 +32,16 @@ type Store interface {

 	// Save persists the data if and where it needs to be persisted
 	Save() error
+
+	// Close terminates every connection and closes the store, if applicable.
+	// Should only be used before stopping the application.
+	Close()
 }

 // TODO: add method to check state of store (by keeping track of silent errors)

 var (
 	// Validate interface implementation on compile
 	_ Store = (*memory.Store)(nil)
+	_ Store = (*sqlite.Store)(nil)
 )

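The two blank-identifier assignments in the var block above are the standard Go idiom for a compile-time interface check: assigning a typed nil pointer to a variable of the interface type makes the build fail as soon as a concrete store stops satisfying the interface, at no runtime cost. A self-contained sketch of the idiom (hypothetical types, not from this repository):

    package main

    type Greeter interface{ Greet() string }

    type English struct{}

    func (e *English) Greet() string { return "hello" }

    // Compilation fails here if *English ever stops implementing Greeter.
    var _ Greeter = (*English)(nil)

    func main() {}
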
@@ -6,104 +6,65 @@ import (

	"github.com/TwinProduction/gatus/core"
	"github.com/TwinProduction/gatus/storage/store/memory"
	"github.com/TwinProduction/gatus/storage/store/paging"
	"github.com/TwinProduction/gatus/storage/store/sqlite"
)

var (
	firstCondition  = core.Condition("[STATUS] == 200")
	secondCondition = core.Condition("[RESPONSE_TIME] < 500")
	thirdCondition  = core.Condition("[CERTIFICATE_EXPIRATION] < 72h")

	timestamp = time.Now()

	testService = core.Service{
		Name:                    "name",
		Group:                   "group",
		URL:                     "https://example.org/what/ever",
		Method:                  "GET",
		Body:                    "body",
		Interval:                30 * time.Second,
		Conditions:              []*core.Condition{&firstCondition, &secondCondition, &thirdCondition},
		Alerts:                  nil,
		Insecure:                false,
		NumberOfFailuresInARow:  0,
		NumberOfSuccessesInARow: 0,
	}
	testSuccessfulResult = core.Result{
		Hostname:              "example.org",
		IP:                    "127.0.0.1",
		HTTPStatus:            200,
		Errors:                nil,
		Connected:             true,
		Success:               true,
		Timestamp:             timestamp,
		Duration:              150 * time.Millisecond,
		CertificateExpiration: 10 * time.Hour,
		ConditionResults: []*core.ConditionResult{
			{
				Condition: "[STATUS] == 200",
				Success:   true,
			},
			{
				Condition: "[RESPONSE_TIME] < 500",
				Success:   true,
			},
			{
				Condition: "[CERTIFICATE_EXPIRATION] < 72h",
				Success:   true,
			},
		},
	}
	testUnsuccessfulResult = core.Result{
		Hostname:              "example.org",
		IP:                    "127.0.0.1",
		HTTPStatus:            200,
		Errors:                []string{"error-1", "error-2"},
		Connected:             true,
		Success:               false,
		Timestamp:             timestamp,
		Duration:              750 * time.Millisecond,
		CertificateExpiration: 10 * time.Hour,
		ConditionResults: []*core.ConditionResult{
			{
				Condition: "[STATUS] == 200",
				Success:   true,
			},
			{
				Condition: "[RESPONSE_TIME] < 500",
				Success:   false,
			},
			{
				Condition: "[CERTIFICATE_EXPIRATION] < 72h",
				Success:   false,
			},
		},
	}
)

func BenchmarkStore_GetAllServiceStatuses(b *testing.B) {
	memoryStore, err := memory.NewStore("")
	if err != nil {
		b.Fatal("failed to create store:", err.Error())
	}
	sqliteStore, err := sqlite.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_GetAllServiceStatuses.db")
	if err != nil {
		b.Fatal("failed to create store:", err.Error())
	}
	defer sqliteStore.Close()
	type Scenario struct {
		Name     string
		Store    Store
		Parallel bool
	}
	scenarios := []Scenario{
		{
			Name:     "memory",
			Store:    memoryStore,
			Parallel: false,
		},
		{
			Name:     "memory-parallel",
			Store:    memoryStore,
			Parallel: true,
		},
		{
			Name:     "sqlite",
			Store:    sqliteStore,
			Parallel: false,
		},
		{
			Name:     "sqlite-parallel",
			Store:    sqliteStore,
			Parallel: true,
		},
	}
	for _, scenario := range scenarios {
		scenario.Store.Insert(&testService, &testSuccessfulResult)
		scenario.Store.Insert(&testService, &testUnsuccessfulResult)
		b.Run(scenario.Name, func(b *testing.B) {
			if scenario.Parallel {
				b.RunParallel(func(pb *testing.PB) {
					for pb.Next() {
						scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20))
					}
				})
			} else {
				for n := 0; n < b.N; n++ {
					scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20))
				}
			}
			b.ReportAllocs()
		})
		scenario.Store.Clear()
	}
}

@@ -112,26 +73,129 @@ func BenchmarkStore_Insert(b *testing.B) {
	if err != nil {
		b.Fatal("failed to create store:", err.Error())
	}
	sqliteStore, err := sqlite.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_Insert.db")
	if err != nil {
		b.Fatal("failed to create store:", err.Error())
	}
	defer sqliteStore.Close()
	type Scenario struct {
		Name     string
		Store    Store
		Parallel bool
	}
	scenarios := []Scenario{
		{
			Name:     "memory",
			Store:    memoryStore,
			Parallel: false,
		},
		{
			Name:     "memory-parallel",
			Store:    memoryStore,
			Parallel: true,
		},
		{
			Name:     "sqlite",
			Store:    sqliteStore,
			Parallel: false,
		},
		{
			Name:     "sqlite-parallel",
			Store:    sqliteStore,
			Parallel: false,
		},
	}
	for _, scenario := range scenarios {
		b.Run(scenario.Name, func(b *testing.B) {
			if scenario.Parallel {
				b.RunParallel(func(pb *testing.PB) {
					n := 0
					for pb.Next() {
						var result core.Result
						if n%10 == 0 {
							result = testUnsuccessfulResult
						} else {
							result = testSuccessfulResult
						}
						result.Timestamp = time.Now()
						scenario.Store.Insert(&testService, &result)
						n++
					}
				})
			} else {
				for n := 0; n < b.N; n++ {
					var result core.Result
					if n%10 == 0 {
						result = testUnsuccessfulResult
					} else {
						result = testSuccessfulResult
					}
					result.Timestamp = time.Now()
					scenario.Store.Insert(&testService, &result)
				}
			}
			b.ReportAllocs()
			scenario.Store.Clear()
		})
	}
}

func BenchmarkStore_GetServiceStatusByKey(b *testing.B) {
	memoryStore, err := memory.NewStore("")
	if err != nil {
		b.Fatal("failed to create store:", err.Error())
	}
	sqliteStore, err := sqlite.NewStore("sqlite", b.TempDir()+"/BenchmarkStore_GetServiceStatusByKey.db")
	if err != nil {
		b.Fatal("failed to create store:", err.Error())
	}
	defer sqliteStore.Close()
	type Scenario struct {
		Name     string
		Store    Store
		Parallel bool
	}
	scenarios := []Scenario{
		{
			Name:     "memory",
			Store:    memoryStore,
			Parallel: false,
		},
		{
			Name:     "memory-parallel",
			Store:    memoryStore,
			Parallel: true,
		},
		{
			Name:     "sqlite",
			Store:    sqliteStore,
			Parallel: false,
		},
		{
			Name:     "sqlite-parallel",
			Store:    sqliteStore,
			Parallel: true,
		},
	}
	for _, scenario := range scenarios {
		for i := 0; i < 50; i++ {
			scenario.Store.Insert(&testService, &testSuccessfulResult)
			scenario.Store.Insert(&testService, &testUnsuccessfulResult)
		}
		b.Run(scenario.Name, func(b *testing.B) {
			if scenario.Parallel {
				b.RunParallel(func(pb *testing.PB) {
					for pb.Next() {
						scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, 20))
					}
				})
			} else {
				for n := 0; n < b.N; n++ {
					scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, 20))
				}
			}
			b.ReportAllocs()
		})
		scenario.Store.Clear()
	}
}

382 storage/store/store_test.go Normal file
@@ -0,0 +1,382 @@
package store

import (
	"testing"
	"time"

	"github.com/TwinProduction/gatus/core"
	"github.com/TwinProduction/gatus/storage/store/memory"
	"github.com/TwinProduction/gatus/storage/store/paging"
	"github.com/TwinProduction/gatus/storage/store/sqlite"
)

var (
	firstCondition  = core.Condition("[STATUS] == 200")
	secondCondition = core.Condition("[RESPONSE_TIME] < 500")
	thirdCondition  = core.Condition("[CERTIFICATE_EXPIRATION] < 72h")

	now = time.Now().Truncate(time.Minute)

	testService = core.Service{
		Name:                    "name",
		Group:                   "group",
		URL:                     "https://example.org/what/ever",
		Method:                  "GET",
		Body:                    "body",
		Interval:                30 * time.Second,
		Conditions:              []*core.Condition{&firstCondition, &secondCondition, &thirdCondition},
		Alerts:                  nil,
		Insecure:                false,
		NumberOfFailuresInARow:  0,
		NumberOfSuccessesInARow: 0,
	}
	testSuccessfulResult = core.Result{
		Timestamp:             now,
		Success:               true,
		Hostname:              "example.org",
		IP:                    "127.0.0.1",
		HTTPStatus:            200,
		Errors:                nil,
		Connected:             true,
		Duration:              150 * time.Millisecond,
		CertificateExpiration: 10 * time.Hour,
		ConditionResults: []*core.ConditionResult{
			{
				Condition: "[STATUS] == 200",
				Success:   true,
			},
			{
				Condition: "[RESPONSE_TIME] < 500",
				Success:   true,
			},
			{
				Condition: "[CERTIFICATE_EXPIRATION] < 72h",
				Success:   true,
			},
		},
	}
	testUnsuccessfulResult = core.Result{
		Timestamp:             now,
		Success:               false,
		Hostname:              "example.org",
		IP:                    "127.0.0.1",
		HTTPStatus:            200,
		Errors:                []string{"error-1", "error-2"},
		Connected:             true,
		Duration:              750 * time.Millisecond,
		CertificateExpiration: 10 * time.Hour,
		ConditionResults: []*core.ConditionResult{
			{
				Condition: "[STATUS] == 200",
				Success:   true,
			},
			{
				Condition: "[RESPONSE_TIME] < 500",
				Success:   false,
			},
			{
				Condition: "[CERTIFICATE_EXPIRATION] < 72h",
				Success:   false,
			},
		},
	}
)

type Scenario struct {
	Name  string
	Store Store
}

func initStoresAndBaseScenarios(t *testing.T, testName string) []*Scenario {
	memoryStore, err := memory.NewStore("")
	if err != nil {
		t.Fatal("failed to create store:", err.Error())
	}
	sqliteStore, err := sqlite.NewStore("sqlite", t.TempDir()+"/"+testName+".db")
	if err != nil {
		t.Fatal("failed to create store:", err.Error())
	}
	return []*Scenario{
		{
			Name:  "memory",
			Store: memoryStore,
		},
		{
			Name:  "sqlite",
			Store: sqliteStore,
		},
	}
}

func cleanUp(scenarios []*Scenario) {
	for _, scenario := range scenarios {
		scenario.Store.Close()
	}
}

func TestStore_GetServiceStatusByKey(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_GetServiceStatusByKey")
	defer cleanUp(scenarios)
	firstResult := testSuccessfulResult
	firstResult.Timestamp = now.Add(-time.Minute)
	secondResult := testUnsuccessfulResult
	secondResult.Timestamp = now
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&testService, &firstResult)
			scenario.Store.Insert(&testService, &secondResult)

			serviceStatus := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithEvents(1, core.MaximumNumberOfEvents).WithResults(1, core.MaximumNumberOfResults).WithUptime())
			if serviceStatus == nil {
				t.Fatalf("serviceStatus shouldn't have been nil")
			}
			if serviceStatus.Name != testService.Name {
				t.Fatalf("serviceStatus.Name should've been %s, got %s", testService.Name, serviceStatus.Name)
			}
			if serviceStatus.Group != testService.Group {
				t.Fatalf("serviceStatus.Group should've been %s, got %s", testService.Group, serviceStatus.Group)
			}
			if len(serviceStatus.Results) != 2 {
				t.Fatalf("serviceStatus.Results should've had 2 entries")
			}
			if serviceStatus.Results[0].Timestamp.After(serviceStatus.Results[1].Timestamp) {
				t.Error("The result at index 0 should've been older than the result at index 1")
			}
			if serviceStatus.Uptime == nil {
				t.Fatalf("serviceStatus.Uptime shouldn't have been nil")
			}
			if serviceStatus.Uptime.LastHour != 0.5 {
				t.Errorf("serviceStatus.Uptime.LastHour should've been 0.5, got %f", serviceStatus.Uptime.LastHour)
			}
			if serviceStatus.Uptime.LastTwentyFourHours != 0.5 {
				t.Errorf("serviceStatus.Uptime.LastTwentyFourHours should've been 0.5, got %f", serviceStatus.Uptime.LastTwentyFourHours)
			}
			if serviceStatus.Uptime.LastSevenDays != 0.5 {
				t.Errorf("serviceStatus.Uptime.LastSevenDays should've been 0.5, got %f", serviceStatus.Uptime.LastSevenDays)
			}
			scenario.Store.Clear()
		})
	}
}

func TestStore_GetServiceStatusForMissingStatusReturnsNil(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_GetServiceStatusForMissingStatusReturnsNil")
	defer cleanUp(scenarios)
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&testService, &testSuccessfulResult)
			serviceStatus := scenario.Store.GetServiceStatus("nonexistantgroup", "nonexistantname", paging.NewServiceStatusParams().WithEvents(1, core.MaximumNumberOfEvents).WithResults(1, core.MaximumNumberOfResults).WithUptime())
			if serviceStatus != nil {
				t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", "nonexistantgroup", "nonexistantname")
			}
			serviceStatus = scenario.Store.GetServiceStatus(testService.Group, "nonexistantname", paging.NewServiceStatusParams().WithEvents(1, core.MaximumNumberOfEvents).WithResults(1, core.MaximumNumberOfResults).WithUptime())
			if serviceStatus != nil {
				t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", testService.Group, "nonexistantname")
			}
			serviceStatus = scenario.Store.GetServiceStatus("nonexistantgroup", testService.Name, paging.NewServiceStatusParams().WithEvents(1, core.MaximumNumberOfEvents).WithResults(1, core.MaximumNumberOfResults).WithUptime())
			if serviceStatus != nil {
				t.Errorf("Returned service status for group '%s' and name '%s' not nil after inserting the service into the store", "nonexistantgroup", testService.Name)
			}
		})
	}
}

func TestStore_GetAllServiceStatuses(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_GetAllServiceStatuses")
	defer cleanUp(scenarios)
	firstResult := testSuccessfulResult
	secondResult := testUnsuccessfulResult
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&testService, &firstResult)
			scenario.Store.Insert(&testService, &secondResult)
			// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
			serviceStatuses := scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20))
			if len(serviceStatuses) != 1 {
				t.Fatal("expected 1 service status")
			}
			actual, exists := serviceStatuses[testService.Key()]
			if !exists {
				t.Fatal("expected service status to exist")
			}
			if len(actual.Results) != 2 {
				t.Error("expected 2 results, got", len(actual.Results))
			}
			if len(actual.Events) != 0 {
				t.Error("expected 0 events, got", len(actual.Events))
			}
			scenario.Store.Clear()
		})
	}
}

func TestStore_GetAllServiceStatusesWithResultsAndEvents(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_GetAllServiceStatusesWithResultsAndEvents")
	defer cleanUp(scenarios)
	firstResult := testSuccessfulResult
	secondResult := testUnsuccessfulResult
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&testService, &firstResult)
			scenario.Store.Insert(&testService, &secondResult)
			// Can't be bothered dealing with timezone issues on the worker that runs the automated tests
			serviceStatuses := scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams().WithResults(1, 20).WithEvents(1, 50))
			if len(serviceStatuses) != 1 {
				t.Fatal("expected 1 service status")
			}
			actual, exists := serviceStatuses[testService.Key()]
			if !exists {
				t.Fatal("expected service status to exist")
			}
			if len(actual.Results) != 2 {
				t.Error("expected 2 results, got", len(actual.Results))
			}
			if len(actual.Events) != 3 {
				t.Error("expected 3 events, got", len(actual.Events))
			}
			scenario.Store.Clear()
		})
	}
}

func TestStore_GetServiceStatusPage1IsHasMoreRecentResultsThanPage2(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_GetServiceStatusPage1IsHasMoreRecentResultsThanPage2")
	defer cleanUp(scenarios)
	firstResult := testSuccessfulResult
	firstResult.Timestamp = now.Add(-time.Minute)
	secondResult := testUnsuccessfulResult
	secondResult.Timestamp = now
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&testService, &firstResult)
			scenario.Store.Insert(&testService, &secondResult)
			serviceStatusPage1 := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(1, 1))
			if serviceStatusPage1 == nil {
				t.Fatalf("serviceStatusPage1 shouldn't have been nil")
			}
			if len(serviceStatusPage1.Results) != 1 {
				t.Fatalf("serviceStatusPage1 should've had 1 result")
			}
			serviceStatusPage2 := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithResults(2, 1))
			if serviceStatusPage2 == nil {
				t.Fatalf("serviceStatusPage2 shouldn't have been nil")
			}
			if len(serviceStatusPage2.Results) != 1 {
				t.Fatalf("serviceStatusPage2 should've had 1 result")
			}
			// Compare the timestamp of both pages
			if !serviceStatusPage1.Results[0].Timestamp.After(serviceStatusPage2.Results[0].Timestamp) {
				t.Errorf("The result from the first page should've been more recent than the results from the second page")
			}
			scenario.Store.Clear()
		})
	}
}

func TestStore_Insert(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_Insert")
	defer cleanUp(scenarios)
	firstResult := testSuccessfulResult
	firstResult.Timestamp = now.Add(-time.Minute)
	secondResult := testUnsuccessfulResult
	secondResult.Timestamp = now
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&testService, &testSuccessfulResult)
			scenario.Store.Insert(&testService, &testUnsuccessfulResult)

			ss := scenario.Store.GetServiceStatusByKey(testService.Key(), paging.NewServiceStatusParams().WithEvents(1, core.MaximumNumberOfEvents).WithResults(1, core.MaximumNumberOfResults).WithUptime())
			if ss == nil {
				t.Fatalf("Store should've had key '%s', but didn't", testService.Key())
			}
			if len(ss.Events) != 3 {
				t.Fatalf("Service '%s' should've had 3 events, got %d", ss.Name, len(ss.Events))
			}
			if len(ss.Results) != 2 {
				t.Fatalf("Service '%s' should've had 2 results, got %d", ss.Name, len(ss.Results))
			}
			for i, expectedResult := range []core.Result{testSuccessfulResult, testUnsuccessfulResult} {
				if expectedResult.HTTPStatus != ss.Results[i].HTTPStatus {
					t.Errorf("Result at index %d should've had a HTTPStatus of %d, got %d", i, ss.Results[i].HTTPStatus, expectedResult.HTTPStatus)
				}
				if expectedResult.DNSRCode != ss.Results[i].DNSRCode {
					t.Errorf("Result at index %d should've had a DNSRCode of %s, got %s", i, ss.Results[i].DNSRCode, expectedResult.DNSRCode)
				}
				if expectedResult.Hostname != ss.Results[i].Hostname {
					t.Errorf("Result at index %d should've had a Hostname of %s, got %s", i, ss.Results[i].Hostname, expectedResult.Hostname)
				}
				if expectedResult.IP != ss.Results[i].IP {
					t.Errorf("Result at index %d should've had a IP of %s, got %s", i, ss.Results[i].IP, expectedResult.IP)
				}
				if expectedResult.Connected != ss.Results[i].Connected {
					t.Errorf("Result at index %d should've had a Connected value of %t, got %t", i, ss.Results[i].Connected, expectedResult.Connected)
				}
				if expectedResult.Duration != ss.Results[i].Duration {
					t.Errorf("Result at index %d should've had a Duration of %s, got %s", i, ss.Results[i].Duration.String(), expectedResult.Duration.String())
				}
				if len(expectedResult.Errors) != len(ss.Results[i].Errors) {
					t.Errorf("Result at index %d should've had %d errors, but actually had %d errors", i, len(ss.Results[i].Errors), len(expectedResult.Errors))
				} else {
					for j := range expectedResult.Errors {
						if ss.Results[i].Errors[j] != expectedResult.Errors[j] {
							t.Error("should've been the same")
						}
					}
				}
				if len(expectedResult.ConditionResults) != len(ss.Results[i].ConditionResults) {
					t.Errorf("Result at index %d should've had %d ConditionResults, but actually had %d ConditionResults", i, len(ss.Results[i].ConditionResults), len(expectedResult.ConditionResults))
				} else {
					for j := range expectedResult.ConditionResults {
						if ss.Results[i].ConditionResults[j].Condition != expectedResult.ConditionResults[j].Condition {
							t.Error("should've been the same")
						}
						if ss.Results[i].ConditionResults[j].Success != expectedResult.ConditionResults[j].Success {
							t.Error("should've been the same")
						}
					}
				}
				if expectedResult.Success != ss.Results[i].Success {
					t.Errorf("Result at index %d should've had a Success of %t, got %t", i, ss.Results[i].Success, expectedResult.Success)
				}
				if expectedResult.Timestamp.Unix() != ss.Results[i].Timestamp.Unix() {
					t.Errorf("Result at index %d should've had a Timestamp of %d, got %d", i, ss.Results[i].Timestamp.Unix(), expectedResult.Timestamp.Unix())
				}
				if expectedResult.CertificateExpiration != ss.Results[i].CertificateExpiration {
					t.Errorf("Result at index %d should've had a CertificateExpiration of %s, got %s", i, ss.Results[i].CertificateExpiration.String(), expectedResult.CertificateExpiration.String())
				}
			}
		})
	}
}

func TestStore_DeleteAllServiceStatusesNotInKeys(t *testing.T) {
	scenarios := initStoresAndBaseScenarios(t, "TestStore_DeleteAllServiceStatusesNotInKeys")
	defer cleanUp(scenarios)
	firstService := core.Service{Name: "service-1", Group: "group"}
	secondService := core.Service{Name: "service-2", Group: "group"}
	result := &testSuccessfulResult
	for _, scenario := range scenarios {
		t.Run(scenario.Name, func(t *testing.T) {
			scenario.Store.Insert(&firstService, result)
			scenario.Store.Insert(&secondService, result)
			if scenario.Store.GetServiceStatusByKey(firstService.Key(), paging.NewServiceStatusParams()) == nil {
				t.Fatal("firstService should exist")
			}
			if scenario.Store.GetServiceStatusByKey(secondService.Key(), paging.NewServiceStatusParams()) == nil {
				t.Fatal("secondService should exist")
			}
			scenario.Store.DeleteAllServiceStatusesNotInKeys([]string{firstService.Key()})
			if scenario.Store.GetServiceStatusByKey(firstService.Key(), paging.NewServiceStatusParams()) == nil {
				t.Error("firstService should still exist")
			}
			if scenario.Store.GetServiceStatusByKey(secondService.Key(), paging.NewServiceStatusParams()) != nil {
				t.Error("secondService should've been deleted")
			}
			// Delete everything
			scenario.Store.DeleteAllServiceStatusesNotInKeys([]string{})
			if len(scenario.Store.GetAllServiceStatuses(paging.NewServiceStatusParams())) != 0 {
				t.Errorf("everything should've been deleted")
			}
		})
	}
}

9 storage/type.go Normal file
@@ -0,0 +1,9 @@
package storage

// Type of the store.
type Type string

const (
	TypeInMemory Type = "inmemory" // In-memory store
	TypeSQLite   Type = "sqlite"   // SQLite store
)

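Reviewer note (not part of this diff): a hypothetical sketch of how these two constants could select a backing store. The memory.NewStore/sqlite.NewStore signatures and the Close method are taken from the test files in this comparison; the wiring itself (the switch and the /tmp path) is assumed for illustration.

```go
package main

import (
	"log"

	"github.com/TwinProduction/gatus/storage"
	"github.com/TwinProduction/gatus/storage/store/memory"
	"github.com/TwinProduction/gatus/storage/store/sqlite"
)

func main() {
	// Hypothetical wiring: pick a backing store from a storage.Type value.
	storageType := storage.TypeSQLite
	switch storageType {
	case storage.TypeSQLite:
		// Persisted to a database file; the path is illustrative.
		store, err := sqlite.NewStore("sqlite", "/tmp/gatus.db")
		if err != nil {
			log.Fatal("failed to create store:", err)
		}
		defer store.Close()
	case storage.TypeInMemory:
		// An empty file path means no persistence, as in the tests above.
		store, err := memory.NewStore("")
		if err != nil {
			log.Fatal("failed to create store:", err)
		}
		defer store.Close()
	}
}
```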
11 util/key_bench_test.go Normal file
@@ -0,0 +1,11 @@
package util

import (
	"testing"
)

func BenchmarkConvertGroupAndServiceToKey(b *testing.B) {
	for n := 0; n < b.N; n++ {
		ConvertGroupAndServiceToKey("group", "service")
	}
}

19 vendor/github.com/kballard/go-shellquote/LICENSE generated vendored Normal file
@@ -0,0 +1,19 @@
Copyright (C) 2014 Kevin Ballard

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

36 vendor/github.com/kballard/go-shellquote/README generated vendored Normal file
@@ -0,0 +1,36 @@
PACKAGE

package shellquote
    import "github.com/kballard/go-shellquote"

    Shellquote provides utilities for joining/splitting strings using sh's
    word-splitting rules.

VARIABLES

var (
    UnterminatedSingleQuoteError = errors.New("Unterminated single-quoted string")
    UnterminatedDoubleQuoteError = errors.New("Unterminated double-quoted string")
    UnterminatedEscapeError      = errors.New("Unterminated backslash-escape")
)


FUNCTIONS

func Join(args ...string) string
    Join quotes each argument and joins them with a space. If passed to
    /bin/sh, the resulting string will be split back into the original
    arguments.

func Split(input string) (words []string, err error)
    Split splits a string according to /bin/sh's word-splitting rules. It
    supports backslash-escapes, single-quotes, and double-quotes. Notably it
    does not support the $'' style of quoting. It also doesn't attempt to
    perform any other sort of expansion, including brace expansion, shell
    expansion, or pathname expansion.

    If the given input has an unterminated quoted string or ends in a
    backslash-escape, one of UnterminatedSingleQuoteError,
    UnterminatedDoubleQuoteError, or UnterminatedEscapeError is returned.

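Reviewer note (not part of the vendored files): a minimal sketch of how the two functions documented above round-trip; the example program and its commented output are illustrative only.

```go
package main

import (
	"fmt"

	shellquote "github.com/kballard/go-shellquote"
)

func main() {
	// Split applies sh's word-splitting rules: quotes group words,
	// backslashes escape.
	words, err := shellquote.Split(`echo 'hello world' --flag="a b"`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", words) // ["echo" "hello world" "--flag=a b"]

	// Join quotes each argument so that /bin/sh would split the string
	// back into the original arguments.
	fmt.Println(shellquote.Join(words...)) // echo 'hello world' '--flag=a b'
}
```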
3 vendor/github.com/kballard/go-shellquote/doc.go generated vendored Normal file
@@ -0,0 +1,3 @@
// Shellquote provides utilities for joining/splitting strings using sh's
// word-splitting rules.
package shellquote

102 vendor/github.com/kballard/go-shellquote/quote.go generated vendored Normal file
@@ -0,0 +1,102 @@
package shellquote

import (
	"bytes"
	"strings"
	"unicode/utf8"
)

// Join quotes each argument and joins them with a space.
// If passed to /bin/sh, the resulting string will be split back into the
// original arguments.
func Join(args ...string) string {
	var buf bytes.Buffer
	for i, arg := range args {
		if i != 0 {
			buf.WriteByte(' ')
		}
		quote(arg, &buf)
	}
	return buf.String()
}

const (
	specialChars      = "\\'\"`${[|&;<>()*?!"
	extraSpecialChars = " \t\n"
	prefixChars       = "~"
)

func quote(word string, buf *bytes.Buffer) {
	// We want to try to produce a "nice" output. As such, we will
	// backslash-escape most characters, but if we encounter a space, or if we
	// encounter an extra-special char (which doesn't work with
	// backslash-escaping) we switch over to quoting the whole word. We do this
	// with a space because it's typically easier for people to read multi-word
	// arguments when quoted with a space rather than with ugly backslashes
	// everywhere.
	origLen := buf.Len()

	if len(word) == 0 {
		// oops, no content
		buf.WriteString("''")
		return
	}

	cur, prev := word, word
	atStart := true
	for len(cur) > 0 {
		c, l := utf8.DecodeRuneInString(cur)
		cur = cur[l:]
		if strings.ContainsRune(specialChars, c) || (atStart && strings.ContainsRune(prefixChars, c)) {
			// copy the non-special chars up to this point
			if len(cur) < len(prev) {
				buf.WriteString(prev[0 : len(prev)-len(cur)-l])
			}
			buf.WriteByte('\\')
			buf.WriteRune(c)
			prev = cur
		} else if strings.ContainsRune(extraSpecialChars, c) {
			// start over in quote mode
			buf.Truncate(origLen)
			goto quote
		}
		atStart = false
	}
	if len(prev) > 0 {
		buf.WriteString(prev)
	}
	return

quote:
	// quote mode
	// Use single-quotes, but if we find a single-quote in the word, we need
	// to terminate the string, emit an escaped quote, and start the string up
	// again
	inQuote := false
	for len(word) > 0 {
		i := strings.IndexRune(word, '\'')
		if i == -1 {
			break
		}
		if i > 0 {
			if !inQuote {
				buf.WriteByte('\'')
				inQuote = true
			}
			buf.WriteString(word[0:i])
		}
		word = word[i+1:]
		if inQuote {
			buf.WriteByte('\'')
			inQuote = false
		}
		buf.WriteString("\\'")
	}
	if len(word) > 0 {
		if !inQuote {
			buf.WriteByte('\'')
		}
		buf.WriteString(word)
		buf.WriteByte('\'')
	}
}

156 vendor/github.com/kballard/go-shellquote/unquote.go generated vendored Normal file
@@ -0,0 +1,156 @@
package shellquote

import (
	"bytes"
	"errors"
	"strings"
	"unicode/utf8"
)

var (
	UnterminatedSingleQuoteError = errors.New("Unterminated single-quoted string")
	UnterminatedDoubleQuoteError = errors.New("Unterminated double-quoted string")
	UnterminatedEscapeError      = errors.New("Unterminated backslash-escape")
)

var (
	splitChars        = " \n\t"
	singleChar        = '\''
	doubleChar        = '"'
	escapeChar        = '\\'
	doubleEscapeChars = "$`\"\n\\"
)

// Split splits a string according to /bin/sh's word-splitting rules. It
// supports backslash-escapes, single-quotes, and double-quotes. Notably it does
// not support the $'' style of quoting. It also doesn't attempt to perform any
// other sort of expansion, including brace expansion, shell expansion, or
// pathname expansion.
//
// If the given input has an unterminated quoted string or ends in a
// backslash-escape, one of UnterminatedSingleQuoteError,
// UnterminatedDoubleQuoteError, or UnterminatedEscapeError is returned.
func Split(input string) (words []string, err error) {
	var buf bytes.Buffer
	words = make([]string, 0)

	for len(input) > 0 {
		// skip any splitChars at the start
		c, l := utf8.DecodeRuneInString(input)
		if strings.ContainsRune(splitChars, c) {
			input = input[l:]
			continue
		} else if c == escapeChar {
			// Look ahead for escaped newline so we can skip over it
			next := input[l:]
			if len(next) == 0 {
				err = UnterminatedEscapeError
				return
			}
			c2, l2 := utf8.DecodeRuneInString(next)
			if c2 == '\n' {
				input = next[l2:]
				continue
			}
		}

		var word string
		word, input, err = splitWord(input, &buf)
		if err != nil {
			return
		}
		words = append(words, word)
	}
	return
}

func splitWord(input string, buf *bytes.Buffer) (word string, remainder string, err error) {
	buf.Reset()

raw:
	{
		cur := input
		for len(cur) > 0 {
			c, l := utf8.DecodeRuneInString(cur)
			cur = cur[l:]
			if c == singleChar {
				buf.WriteString(input[0 : len(input)-len(cur)-l])
				input = cur
				goto single
			} else if c == doubleChar {
				buf.WriteString(input[0 : len(input)-len(cur)-l])
				input = cur
				goto double
			} else if c == escapeChar {
				buf.WriteString(input[0 : len(input)-len(cur)-l])
				input = cur
				goto escape
			} else if strings.ContainsRune(splitChars, c) {
				buf.WriteString(input[0 : len(input)-len(cur)-l])
				return buf.String(), cur, nil
			}
		}
		if len(input) > 0 {
			buf.WriteString(input)
			input = ""
		}
		goto done
	}

escape:
	{
		if len(input) == 0 {
			return "", "", UnterminatedEscapeError
		}
		c, l := utf8.DecodeRuneInString(input)
		if c == '\n' {
			// a backslash-escaped newline is elided from the output entirely
		} else {
			buf.WriteString(input[:l])
		}
		input = input[l:]
	}
	goto raw

single:
	{
		i := strings.IndexRune(input, singleChar)
		if i == -1 {
			return "", "", UnterminatedSingleQuoteError
		}
		buf.WriteString(input[0:i])
		input = input[i+1:]
		goto raw
	}

double:
	{
		cur := input
		for len(cur) > 0 {
			c, l := utf8.DecodeRuneInString(cur)
			cur = cur[l:]
			if c == doubleChar {
				buf.WriteString(input[0 : len(input)-len(cur)-l])
				input = cur
				goto raw
			} else if c == escapeChar {
				// bash only supports certain escapes in double-quoted strings
				c2, l2 := utf8.DecodeRuneInString(cur)
				cur = cur[l2:]
				if strings.ContainsRune(doubleEscapeChars, c2) {
					buf.WriteString(input[0 : len(input)-len(cur)-l-l2])
					if c2 == '\n' {
						// newline is special, skip the backslash entirely
					} else {
						buf.WriteRune(c2)
					}
					input = cur
				}
			}
		}
		return "", "", UnterminatedDoubleQuoteError
	}

done:
	return buf.String(), input, nil
}

14 vendor/github.com/mattn/go-isatty/.travis.yml generated vendored Normal file
@@ -0,0 +1,14 @@
language: go
sudo: false
go:
  - 1.13.x
  - tip

before_install:
  - go get -t -v ./...

script:
  - ./go.test.sh

after_success:
  - bash <(curl -s https://codecov.io/bash)

9 vendor/github.com/mattn/go-isatty/LICENSE generated vendored Normal file
@@ -0,0 +1,9 @@
Copyright (c) Yasuhiro MATSUMOTO <mattn.jp@gmail.com>

MIT License (Expat)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

50 vendor/github.com/mattn/go-isatty/README.md generated vendored Normal file
@@ -0,0 +1,50 @@
# go-isatty

[GoDoc](http://godoc.org/github.com/mattn/go-isatty)
[Codecov](https://codecov.io/gh/mattn/go-isatty)
[Coverage Status](https://coveralls.io/github/mattn/go-isatty?branch=master)
[Go Report Card](https://goreportcard.com/report/mattn/go-isatty)

isatty for golang

## Usage

```go
package main

import (
	"fmt"
	"github.com/mattn/go-isatty"
	"os"
)

func main() {
	if isatty.IsTerminal(os.Stdout.Fd()) {
		fmt.Println("Is Terminal")
	} else if isatty.IsCygwinTerminal(os.Stdout.Fd()) {
		fmt.Println("Is Cygwin/MSYS2 Terminal")
	} else {
		fmt.Println("Is Not Terminal")
	}
}
```

## Installation

```
$ go get github.com/mattn/go-isatty
```

## License

MIT

## Author

Yasuhiro Matsumoto (a.k.a mattn)

## Thanks

* k-takata: base idea for IsCygwinTerminal
  https://github.com/k-takata/go-iscygpty

2 vendor/github.com/mattn/go-isatty/doc.go generated vendored Normal file
@@ -0,0 +1,2 @@
// Package isatty implements interface to isatty
package isatty

5 vendor/github.com/mattn/go-isatty/go.mod generated vendored Normal file
@@ -0,0 +1,5 @@
module github.com/mattn/go-isatty

go 1.12

require golang.org/x/sys v0.0.0-20200116001909-b77594299b42

2 vendor/github.com/mattn/go-isatty/go.sum generated vendored Normal file
@@ -0,0 +1,2 @@
golang.org/x/sys v0.0.0-20200116001909-b77594299b42 h1:vEOn+mP2zCOVzKckCZy6YsCtDblrpj/w7B9nxGNELpg=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=

12 vendor/github.com/mattn/go-isatty/go.test.sh generated vendored Normal file
@@ -0,0 +1,12 @@
#!/usr/bin/env bash

set -e
echo "" > coverage.txt

for d in $(go list ./... | grep -v vendor); do
    go test -race -coverprofile=profile.out -covermode=atomic "$d"
    if [ -f profile.out ]; then
        cat profile.out >> coverage.txt
        rm profile.out
    fi
done

18 vendor/github.com/mattn/go-isatty/isatty_bsd.go generated vendored Normal file
@@ -0,0 +1,18 @@
// +build darwin freebsd openbsd netbsd dragonfly
// +build !appengine

package isatty

import "golang.org/x/sys/unix"

// IsTerminal return true if the file descriptor is terminal.
func IsTerminal(fd uintptr) bool {
	_, err := unix.IoctlGetTermios(int(fd), unix.TIOCGETA)
	return err == nil
}

// IsCygwinTerminal return true if the file descriptor is a cygwin or msys2
// terminal. This is also always false on this environment.
func IsCygwinTerminal(fd uintptr) bool {
	return false
}

15 vendor/github.com/mattn/go-isatty/isatty_others.go generated vendored Normal file
@@ -0,0 +1,15 @@
// +build appengine js nacl

package isatty

// IsTerminal returns true if the file descriptor is terminal which
// is always false on js and appengine classic which is a sandboxed PaaS.
func IsTerminal(fd uintptr) bool {
	return false
}

// IsCygwinTerminal() return true if the file descriptor is a cygwin or msys2
// terminal. This is also always false on this environment.
func IsCygwinTerminal(fd uintptr) bool {
	return false
}

22 vendor/github.com/mattn/go-isatty/isatty_plan9.go generated vendored Normal file
@@ -0,0 +1,22 @@
// +build plan9

package isatty

import (
	"syscall"
)

// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
	path, err := syscall.Fd2path(int(fd))
	if err != nil {
		return false
	}
	return path == "/dev/cons" || path == "/mnt/term/dev/cons"
}

// IsCygwinTerminal return true if the file descriptor is a cygwin or msys2
// terminal. This is also always false on this environment.
func IsCygwinTerminal(fd uintptr) bool {
	return false
}

22 vendor/github.com/mattn/go-isatty/isatty_solaris.go generated vendored Normal file
@@ -0,0 +1,22 @@
// +build solaris
// +build !appengine

package isatty

import (
	"golang.org/x/sys/unix"
)

// IsTerminal returns true if the given file descriptor is a terminal.
// see: http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libbc/libc/gen/common/isatty.c
func IsTerminal(fd uintptr) bool {
	var termio unix.Termio
	err := unix.IoctlSetTermio(int(fd), unix.TCGETA, &termio)
	return err == nil
}

// IsCygwinTerminal return true if the file descriptor is a cygwin or msys2
// terminal. This is also always false on this environment.
func IsCygwinTerminal(fd uintptr) bool {
	return false
}

18 vendor/github.com/mattn/go-isatty/isatty_tcgets.go generated vendored Normal file
@@ -0,0 +1,18 @@
// +build linux aix
// +build !appengine

package isatty

import "golang.org/x/sys/unix"

// IsTerminal return true if the file descriptor is terminal.
func IsTerminal(fd uintptr) bool {
	_, err := unix.IoctlGetTermios(int(fd), unix.TCGETS)
	return err == nil
}

// IsCygwinTerminal return true if the file descriptor is a cygwin or msys2
// terminal. This is also always false on this environment.
func IsCygwinTerminal(fd uintptr) bool {
	return false
}

125 vendor/github.com/mattn/go-isatty/isatty_windows.go generated vendored Normal file
@@ -0,0 +1,125 @@
// +build windows
// +build !appengine

package isatty

import (
	"errors"
	"strings"
	"syscall"
	"unicode/utf16"
	"unsafe"
)

const (
	objectNameInfo uintptr = 1
	fileNameInfo           = 2
	fileTypePipe           = 3
)

var (
	kernel32                         = syscall.NewLazyDLL("kernel32.dll")
	ntdll                            = syscall.NewLazyDLL("ntdll.dll")
	procGetConsoleMode               = kernel32.NewProc("GetConsoleMode")
	procGetFileInformationByHandleEx = kernel32.NewProc("GetFileInformationByHandleEx")
	procGetFileType                  = kernel32.NewProc("GetFileType")
	procNtQueryObject                = ntdll.NewProc("NtQueryObject")
)

func init() {
	// Check if GetFileInformationByHandleEx is available.
	if procGetFileInformationByHandleEx.Find() != nil {
		procGetFileInformationByHandleEx = nil
	}
}

// IsTerminal return true if the file descriptor is terminal.
func IsTerminal(fd uintptr) bool {
	var st uint32
	r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, fd, uintptr(unsafe.Pointer(&st)), 0)
	return r != 0 && e == 0
}

// isCygwinPipeName checks whether the pipe name is used for a cygwin/msys2 pty.
// A Cygwin/MSYS2 PTY has a name like:
//   \{cygwin,msys}-XXXXXXXXXXXXXXXX-ptyN-{from,to}-master
func isCygwinPipeName(name string) bool {
	token := strings.Split(name, "-")
	if len(token) < 5 {
		return false
	}

	if token[0] != `\msys` &&
		token[0] != `\cygwin` &&
		token[0] != `\Device\NamedPipe\msys` &&
		token[0] != `\Device\NamedPipe\cygwin` {
		return false
	}

	if token[1] == "" {
		return false
	}

	if !strings.HasPrefix(token[2], "pty") {
		return false
	}

	if token[3] != `from` && token[3] != `to` {
		return false
	}

	if token[4] != "master" {
		return false
	}

	return true
}

// getFileNameByHandle uses the undocumented ntdll NtQueryObject call to get the full file name
// from a file handle. Since GetFileInformationByHandleEx is not available before Windows Vista
// and some users are still on Windows XP, this is a workaround for them; it also works on
// systems from Windows Vista to 10.
// See https://stackoverflow.com/a/18792477 for details.
func getFileNameByHandle(fd uintptr) (string, error) {
	if procNtQueryObject == nil {
		return "", errors.New("ntdll.dll: NtQueryObject not supported")
	}

	var buf [4 + syscall.MAX_PATH]uint16
	var result int
	r, _, e := syscall.Syscall6(procNtQueryObject.Addr(), 5,
		fd, objectNameInfo, uintptr(unsafe.Pointer(&buf)), uintptr(2*len(buf)), uintptr(unsafe.Pointer(&result)), 0)
	if r != 0 {
		return "", e
	}
	return string(utf16.Decode(buf[4 : 4+buf[0]/2])), nil
}

// IsCygwinTerminal() return true if the file descriptor is a cygwin or msys2
// terminal.
func IsCygwinTerminal(fd uintptr) bool {
	if procGetFileInformationByHandleEx == nil {
		name, err := getFileNameByHandle(fd)
		if err != nil {
			return false
		}
		return isCygwinPipeName(name)
	}

	// Cygwin/msys's pty is a pipe.
	ft, _, e := syscall.Syscall(procGetFileType.Addr(), 1, fd, 0, 0)
	if ft != fileTypePipe || e != 0 {
		return false
	}

	var buf [2 + syscall.MAX_PATH]uint16
	r, _, e := syscall.Syscall6(procGetFileInformationByHandleEx.Addr(),
		4, fd, fileNameInfo, uintptr(unsafe.Pointer(&buf)),
		uintptr(len(buf)*2), 0, 0)
	if r == 0 || e != 0 {
		return false
	}

	l := *(*uint32)(unsafe.Pointer(&buf))
	return isCygwinPipeName(string(utf16.Decode(buf[2 : 2+l/2])))
}

8 vendor/github.com/mattn/go-isatty/renovate.json generated vendored Normal file
@@ -0,0 +1,8 @@
{
  "extends": [
    "config:base"
  ],
  "postUpdateOptions": [
    "gomodTidy"
  ]
}

27 vendor/github.com/remyoudompheng/bigfft/LICENSE generated vendored Normal file
@@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

43 vendor/github.com/remyoudompheng/bigfft/README generated vendored Normal file
@@ -0,0 +1,43 @@
Benchmarking math/big vs. bigfft

Number size   old ns/op      new ns/op     delta
1kb           1599           1640          +2.56%
10kb          61533          62170         +1.04%
50kb          833693         831051        -0.32%
100kb         2567995        2693864       +4.90%
1Mb           105237800      28446400      -72.97%
5Mb           1272947000     168554600     -86.76%
10Mb          3834354000     405120200     -89.43%
20Mb          11514488000    845081600     -92.66%
50Mb          49199945000    2893950000    -94.12%
100Mb         147599836000   5921594000    -95.99%

Benchmarking GMP vs bigfft

Number size   GMP ns/op      Go ns/op      delta
1kb           536            1500          +179.85%
10kb          26669          50777         +90.40%
50kb          252270         658534       +161.04%
100kb         686813         2127534       +209.77%
1Mb           12100000       22391830      +85.06%
5Mb           111731843      133550600     +19.53%
10Mb          212314000      318595800     +50.06%
20Mb          490196000      671512800     +36.99%
50Mb          1280000000     2451476000    +91.52%
100Mb         2673000000     5228991000    +95.62%

Benchmarks were run on a Core 2 Quad Q8200 (2.33GHz).
FFT is enabled when input numbers are over 200kbits.

Scanning large decimal numbers from strings.
(math/big [n^2 complexity] vs bigfft [n^1.6 complexity], Core i5-4590)

Digits   old ns/op      new ns/op     delta
1e3      9995           10876         +8.81%
1e4      175356         243806        +39.03%
1e5      9427422        6780545       -28.08%
1e6      1776707489     144867502     -91.85%
2e6      6865499995     346540778     -94.95%
5e6      42641034189    1069878799    -97.49%
10e6     151975273589   2693328580    -98.23%

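Reviewer note (not part of the vendored files): a minimal sketch of the package in use, assuming its exported Mul(x, y *big.Int) *big.Int entry point; operand sizes are chosen to clear the ~200kbit FFT threshold mentioned above.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/remyoudompheng/bigfft"
)

func main() {
	// Two 1Mbit operands, well past the point where the README's
	// benchmarks show FFT multiplication beating math/big.
	x := new(big.Int).Lsh(big.NewInt(1), 1000000) // 2^1000000
	y := new(big.Int).Lsh(big.NewInt(1), 1000000)
	product := bigfft.Mul(x, y)   // assumed API: Mul(x, y *big.Int) *big.Int
	fmt.Println(product.BitLen()) // 2000001, i.e. 2^2000000
}
```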
36 vendor/github.com/remyoudompheng/bigfft/arith_386.s generated vendored Normal file
@@ -0,0 +1,36 @@
// Trampolines to math/big assembly implementations.

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	JMP math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
TEXT ·subVV(SB),NOSPLIT,$0
	JMP math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	JMP math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
TEXT ·subVW(SB),NOSPLIT,$0
	JMP math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	JMP math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	JMP math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	JMP math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	JMP math∕big·addMulVVW(SB)

38 vendor/github.com/remyoudompheng/bigfft/arith_amd64.s generated vendored Normal file
@@ -0,0 +1,38 @@
// Trampolines to math/big assembly implementations.

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	JMP math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
// (same as addVV except for SBBQ instead of ADCQ and label names)
TEXT ·subVV(SB),NOSPLIT,$0
	JMP math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	JMP math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
// (same as addVW except for SUBQ/SBBQ instead of ADDQ/ADCQ and label names)
TEXT ·subVW(SB),NOSPLIT,$0
	JMP math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	JMP math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	JMP math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	JMP math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	JMP math∕big·addMulVVW(SB)

36 vendor/github.com/remyoudompheng/bigfft/arith_arm.s generated vendored Normal file
@@ -0,0 +1,36 @@
// Trampolines to math/big assembly implementations.

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	B math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
TEXT ·subVV(SB),NOSPLIT,$0
	B math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	B math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
TEXT ·subVW(SB),NOSPLIT,$0
	B math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	B math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	B math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	B math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	B math∕big·addMulVVW(SB)

36 vendor/github.com/remyoudompheng/bigfft/arith_arm64.s generated vendored Normal file
@@ -0,0 +1,36 @@
// Trampolines to math/big assembly implementations.

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	B math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
TEXT ·subVV(SB),NOSPLIT,$0
	B math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	B math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
TEXT ·subVW(SB),NOSPLIT,$0
	B math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	B math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	B math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	B math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	B math∕big·addMulVVW(SB)

16 vendor/github.com/remyoudompheng/bigfft/arith_decl.go generated vendored Normal file
@@ -0,0 +1,16 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package bigfft

import . "math/big"

// implemented in arith_$GOARCH.s
func addVV(z, x, y []Word) (c Word)
func subVV(z, x, y []Word) (c Word)
func addVW(z, x []Word, y Word) (c Word)
func subVW(z, x []Word, y Word) (c Word)
func shlVU(z, x []Word, s uint) (c Word)
func mulAddVWW(z, x []Word, y, r Word) (c Word)
func addMulVVW(z, x []Word, y Word) (c Word)

40 vendor/github.com/remyoudompheng/bigfft/arith_mips64x.s generated vendored Normal file
@@ -0,0 +1,40 @@
// Trampolines to math/big assembly implementations.

// +build mips64 mips64le

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	JMP math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
// (same as addVV except for SBBQ instead of ADCQ and label names)
TEXT ·subVV(SB),NOSPLIT,$0
	JMP math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	JMP math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
// (same as addVW except for SUBQ/SBBQ instead of ADDQ/ADCQ and label names)
TEXT ·subVW(SB),NOSPLIT,$0
	JMP math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	JMP math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	JMP math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	JMP math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	JMP math∕big·addMulVVW(SB)

40 vendor/github.com/remyoudompheng/bigfft/arith_mipsx.s generated vendored Normal file
@@ -0,0 +1,40 @@
// Trampolines to math/big assembly implementations.

// +build mips mipsle

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	JMP math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
// (same as addVV except for SBBQ instead of ADCQ and label names)
TEXT ·subVV(SB),NOSPLIT,$0
	JMP math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	JMP math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
// (same as addVW except for SUBQ/SBBQ instead of ADDQ/ADCQ and label names)
TEXT ·subVW(SB),NOSPLIT,$0
	JMP math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	JMP math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	JMP math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	JMP math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	JMP math∕big·addMulVVW(SB)
38 vendor/github.com/remyoudompheng/bigfft/arith_ppc64x.s generated vendored Normal file
@@ -0,0 +1,38 @@
// Trampolines to math/big assembly implementations.

// +build ppc64 ppc64le

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	BR math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
TEXT ·subVV(SB),NOSPLIT,$0
	BR math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	BR math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
TEXT ·subVW(SB),NOSPLIT,$0
	BR math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	BR math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	BR math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	BR math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	BR math∕big·addMulVVW(SB)
37 vendor/github.com/remyoudompheng/bigfft/arith_s390x.s generated vendored Normal file
@@ -0,0 +1,37 @@
// Trampolines to math/big assembly implementations.

#include "textflag.h"

// func addVV(z, x, y []Word) (c Word)
TEXT ·addVV(SB),NOSPLIT,$0
	BR math∕big·addVV(SB)

// func subVV(z, x, y []Word) (c Word)
TEXT ·subVV(SB),NOSPLIT,$0
	BR math∕big·subVV(SB)

// func addVW(z, x []Word, y Word) (c Word)
TEXT ·addVW(SB),NOSPLIT,$0
	BR math∕big·addVW(SB)

// func subVW(z, x []Word, y Word) (c Word)
TEXT ·subVW(SB),NOSPLIT,$0
	BR math∕big·subVW(SB)

// func shlVU(z, x []Word, s uint) (c Word)
TEXT ·shlVU(SB),NOSPLIT,$0
	BR math∕big·shlVU(SB)

// func shrVU(z, x []Word, s uint) (c Word)
TEXT ·shrVU(SB),NOSPLIT,$0
	BR math∕big·shrVU(SB)

// func mulAddVWW(z, x []Word, y, r Word) (c Word)
TEXT ·mulAddVWW(SB),NOSPLIT,$0
	BR math∕big·mulAddVWW(SB)

// func addMulVVW(z, x []Word, y Word) (c Word)
TEXT ·addMulVVW(SB),NOSPLIT,$0
	BR math∕big·addMulVVW(SB)
216 vendor/github.com/remyoudompheng/bigfft/fermat.go generated vendored Normal file
@@ -0,0 +1,216 @@
package bigfft

import (
	"math/big"
)

// Arithmetic modulo 2^n+1.

// A fermat of length w+1 represents a number modulo 2^(w*_W) + 1. The last
// word is zero or one. A number has at most two representatives satisfying the
// 0-1 last word constraint.
type fermat nat

func (n fermat) String() string { return nat(n).String() }

func (z fermat) norm() {
	n := len(z) - 1
	c := z[n]
	if c == 0 {
		return
	}
	if z[0] >= c {
		z[n] = 0
		z[0] -= c
		return
	}
	// z[0] < z[n].
	subVW(z, z, c) // Subtract c
	if c > 1 {
		z[n] -= c - 1
		c = 1
	}
	// Add back c.
	if z[n] == 1 {
		z[n] = 0
		return
	} else {
		addVW(z, z, 1)
	}
}

// Shift computes (x << k) mod (2^n+1).
func (z fermat) Shift(x fermat, k int) {
	if len(z) != len(x) {
		panic("len(z) != len(x) in Shift")
	}
	n := len(x) - 1
	// Shift by n*_W is taking the opposite.
	k %= 2 * n * _W
	if k < 0 {
		k += 2 * n * _W
	}
	neg := false
	if k >= n*_W {
		k -= n * _W
		neg = true
	}

	kw, kb := k/_W, k%_W

	z[n] = 1 // Add (-1)
	if !neg {
		for i := 0; i < kw; i++ {
			z[i] = 0
		}
		// Shift left by kw words.
		// x = a·2^(n-k) + b
		// x<<k = (b<<k) - a
		copy(z[kw:], x[:n-kw])
		b := subVV(z[:kw+1], z[:kw+1], x[n-kw:])
		if z[kw+1] > 0 {
			z[kw+1] -= b
		} else {
			subVW(z[kw+1:], z[kw+1:], b)
		}
	} else {
		for i := kw + 1; i < n; i++ {
			z[i] = 0
		}
		// Shift left and negate, by kw words.
		copy(z[:kw+1], x[n-kw:n+1])            // z_low = x_high
		b := subVV(z[kw:n], z[kw:n], x[:n-kw]) // z_high -= x_low
		z[n] -= b
	}
	// Add back 1.
	if z[n] > 0 {
		z[n]--
	} else if z[0] < ^big.Word(0) {
		z[0]++
	} else {
		addVW(z, z, 1)
	}
	// Shift left by kb bits
	shlVU(z, z, uint(kb))
	z.norm()
}

// ShiftHalf shifts x by k/2 bits to the left. Shifting by 1/2 bit
// is multiplication by sqrt(2) mod 2^n+1 which is 2^(3n/4) - 2^(n/4).
// A temporary buffer must be provided in tmp.
func (z fermat) ShiftHalf(x fermat, k int, tmp fermat) {
	n := len(z) - 1
	if k%2 == 0 {
		z.Shift(x, k/2)
		return
	}
	u := (k - 1) / 2
	a := u + (3*_W/4)*n
	b := u + (_W/4)*n
	z.Shift(x, a)
	tmp.Shift(x, b)
	z.Sub(z, tmp)
}

// Add computes addition mod 2^n+1.
func (z fermat) Add(x, y fermat) fermat {
	if len(z) != len(x) {
		panic("Add: len(z) != len(x)")
	}
	addVV(z, x, y) // there cannot be a carry here.
	z.norm()
	return z
}

// Sub computes subtraction mod 2^n+1.
func (z fermat) Sub(x, y fermat) fermat {
	if len(z) != len(x) {
		panic("Sub: len(z) != len(x)")
	}
	n := len(y) - 1
	b := subVV(z[:n], x[:n], y[:n])
	b += y[n]
	// If b > 0, we need to subtract b<<n, which is the same as adding b.
	z[n] = x[n]
	if z[0] <= ^big.Word(0)-b {
		z[0] += b
	} else {
		addVW(z, z, b)
	}
	z.norm()
	return z
}

func (z fermat) Mul(x, y fermat) fermat {
	if len(x) != len(y) {
		panic("Mul: len(x) != len(y)")
	}
	n := len(x) - 1
	if n < 30 {
		z = z[:2*n+2]
		basicMul(z, x, y)
		z = z[:2*n+1]
	} else {
		var xi, yi, zi big.Int
		xi.SetBits(x)
		yi.SetBits(y)
		zi.SetBits(z)
		zb := zi.Mul(&xi, &yi).Bits()
		if len(zb) <= n {
			// Short product.
			copy(z, zb)
			for i := len(zb); i < len(z); i++ {
				z[i] = 0
			}
			return z
		}
		z = zb
	}
	// len(z) is at most 2n+1.
	if len(z) > 2*n+1 {
		panic("len(z) > 2n+1")
	}
	// We now have
	// z = z[:n] + 1<<(n*W) * z[n:2n+1]
	// which normalizes to:
	// z = z[:n] - z[n:2n] + z[2n]
	c1 := big.Word(0)
	if len(z) > 2*n {
		c1 = addVW(z[:n], z[:n], z[2*n])
	}
	c2 := big.Word(0)
	if len(z) >= 2*n {
		c2 = subVV(z[:n], z[:n], z[n:2*n])
	} else {
		m := len(z) - n
		c2 = subVV(z[:m], z[:m], z[n:])
		c2 = subVW(z[m:n], z[m:n], c2)
	}
	// Restore carries.
	// Subtracting z[n] -= c2 is the same
	// as z[0] += c2
	z = z[:n+1]
	z[n] = c1
	c := addVW(z, z, c2)
	if c != 0 {
		panic("impossible")
	}
	z.norm()
	return z
}

// copied from math/big
//
// basicMul multiplies x and y and leaves the result in z.
// The (non-normalized) result is placed in z[0 : len(x) + len(y)].
func basicMul(z, x, y fermat) {
	// initialize z
	for i := 0; i < len(z); i++ {
		z[i] = 0
	}
	for i, d := range y {
		if d != 0 {
			z[len(x)+i] = addMulVVW(z[i:i+len(x)], x, d)
		}
	}
}
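The fermat type above implements this arithmetic in place on limb vectors. As a cross-check of what "modulo 2^n+1" means, here is a slow, big.Int-based reduction; reduceFermat is a hypothetical helper written for this example and is not part of the package:

package main

import (
	"fmt"
	"math/big"
)

// reduceFermat reduces x modulo 2^n + 1 the slow, obvious way.
// Illustration only; the fermat type does this with word-level shifts.
func reduceFermat(x *big.Int, n uint) *big.Int {
	m := new(big.Int).Lsh(big.NewInt(1), n) // 2^n
	m.Add(m, big.NewInt(1))                 // 2^n + 1
	return new(big.Int).Mod(x, m)
}

func main() {
	// 2^64 ≡ -1 (mod 2^64+1), so 2^65 ≡ -2 ≡ 2^64 - 1.
	x := new(big.Int).Lsh(big.NewInt(1), 65)
	fmt.Println(reduceFermat(x, 64)) // 18446744073709551615
}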
370 vendor/github.com/remyoudompheng/bigfft/fft.go generated vendored Normal file
@@ -0,0 +1,370 @@
// Package bigfft implements multiplication of big.Int using FFT.
//
// The implementation is based on the Schönhage-Strassen method
// using integer FFT modulo 2^n+1.
package bigfft

import (
	"math/big"
	"unsafe"
)

const _W = int(unsafe.Sizeof(big.Word(0)) * 8)

type nat []big.Word

func (n nat) String() string {
	v := new(big.Int)
	v.SetBits(n)
	return v.String()
}

// fftThreshold is the size (in words) above which FFT is used over
// Karatsuba from math/big.
//
// TestCalibrate seems to indicate a threshold of 60kbits on 32-bit
// arches and 110kbits on 64-bit arches.
var fftThreshold = 1800

// Mul computes the product x*y and returns z.
// It can be used instead of the Mul method of
// *big.Int from the math/big package.
func Mul(x, y *big.Int) *big.Int {
	xwords := len(x.Bits())
	ywords := len(y.Bits())
	if xwords > fftThreshold && ywords > fftThreshold {
		return mulFFT(x, y)
	}
	return new(big.Int).Mul(x, y)
}

func mulFFT(x, y *big.Int) *big.Int {
	var xb, yb nat = x.Bits(), y.Bits()
	zb := fftmul(xb, yb)
	z := new(big.Int)
	z.SetBits(zb)
	if x.Sign()*y.Sign() < 0 {
		z.Neg(z)
	}
	return z
}

// An FFT size of K=1<<k is adequate when K is about 2*sqrt(N) where
// N = x.BitLen() + y.BitLen().

func fftmul(x, y nat) nat {
	k, m := fftSize(x, y)
	xp := polyFromNat(x, k, m)
	yp := polyFromNat(y, k, m)
	rp := xp.Mul(&yp)
	return rp.Int()
}

// fftSizeThreshold[i] is the maximal size (in bits) where we should use
// fft size i.
var fftSizeThreshold = [...]int64{0, 0, 0,
	4 << 10, 8 << 10, 16 << 10, // 5
	32 << 10, 64 << 10, 1 << 18, 1 << 20, 3 << 20, // 10
	8 << 20, 30 << 20, 100 << 20, 300 << 20, 600 << 20,
}

// fftSize returns the FFT length k and m, the number of words per
// chunk, such that m << k is larger than the number of words
// in x*y.
func fftSize(x, y nat) (k uint, m int) {
	words := len(x) + len(y)
	bits := int64(words) * int64(_W)
	k = uint(len(fftSizeThreshold))
	for i := range fftSizeThreshold {
		if fftSizeThreshold[i] > bits {
			k = uint(i)
			break
		}
	}
	// The 1<<k chunks of m words must have N bits so that
	// 2^N-1 is larger than x*y. That is, m<<k > words
	m = words>>k + 1
	return
}

// valueSize returns the length (in words) to use for polynomial
// coefficients, to compute a correct product of polynomials P*Q
// where deg(P*Q) < K (== 1<<k) and where coefficients of P and Q are
// less than b^m (== 1 << (m*_W)).
// The chosen length (in bits) must be a multiple of 1 << (k-extra).
func valueSize(k uint, m int, extra uint) int {
	// The coefficients of P*Q are less than b^(2m)*K
	// so we need W * valueSize >= 2*m*W+K
	n := 2*m*_W + int(k) // necessary bits
	K := 1 << (k - extra)
	if K < _W {
		K = _W
	}
	n = ((n / K) + 1) * K // round to a multiple of K
	return n / _W
}

// poly represents an integer via a polynomial in Z[x]/(x^K+1)
// where K is the FFT length and b^m is the computation basis 1<<(m*_W).
// If P = a[0] + a[1] x + ... + a[K-1] x^(K-1), the associated natural number
// is P(b^m).
type poly struct {
	k uint  // k is such that K = 1<<k.
	m int   // the m such that P(b^m) is the original number.
	a []nat // a slice of at most K m-word coefficients.
}

// polyFromNat slices the number x into a polynomial
// with 1<<k coefficients made of m words.
func polyFromNat(x nat, k uint, m int) poly {
	p := poly{k: k, m: m}
	length := len(x)/m + 1
	p.a = make([]nat, length)
	for i := range p.a {
		if len(x) < m {
			p.a[i] = make(nat, m)
			copy(p.a[i], x)
			break
		}
		p.a[i] = x[:m]
		x = x[m:]
	}
	return p
}

// Int evaluates back a poly to its integer value.
func (p *poly) Int() nat {
	length := len(p.a)*p.m + 1
	if na := len(p.a); na > 0 {
		length += len(p.a[na-1])
	}
	n := make(nat, length)
	m := p.m
	np := n
	for i := range p.a {
		l := len(p.a[i])
		c := addVV(np[:l], np[:l], p.a[i])
		if np[l] < ^big.Word(0) {
			np[l] += c
		} else {
			addVW(np[l:], np[l:], c)
		}
		np = np[m:]
	}
	n = trim(n)
	return n
}

func trim(n nat) nat {
	for i := range n {
		if n[len(n)-1-i] != 0 {
			return n[:len(n)-i]
		}
	}
	return nil
}

// Mul multiplies p and q modulo X^K-1, where K = 1<<p.k.
// The product is done via a Fourier transform.
func (p *poly) Mul(q *poly) poly {
	// extra=2 because:
	// * some power of 2 is a K-th root of unity when n is a multiple of K/2.
	// * 2 itself is a square (see fermat.ShiftHalf)
	n := valueSize(p.k, p.m, 2)

	pv, qv := p.Transform(n), q.Transform(n)
	rv := pv.Mul(&qv)
	r := rv.InvTransform()
	r.m = p.m
	return r
}

// A polValues represents the value of a poly at the powers of a
// K-th root of unity θ=2^(l/2) in Z/(b^n+1)Z, where b^n = 2^(K/4*l).
type polValues struct {
	k      uint     // k is such that K = 1<<k.
	n      int      // the length of coefficients, n*_W a multiple of K/4.
	values []fermat // a slice of K (n+1)-word values
}

// Transform evaluates p at θ^i for i = 0...K-1, where
// θ is a K-th primitive root of unity in Z/(b^n+1)Z.
func (p *poly) Transform(n int) polValues {
	k := p.k
	inputbits := make([]big.Word, (n+1)<<k)
	input := make([]fermat, 1<<k)
	// Now compute p(θ^i) for i = 0 ... K-1.
	valbits := make([]big.Word, (n+1)<<k)
	values := make([]fermat, 1<<k)
	for i := range values {
		input[i] = inputbits[i*(n+1) : (i+1)*(n+1)]
		if i < len(p.a) {
			copy(input[i], p.a[i])
		}
		values[i] = fermat(valbits[i*(n+1) : (i+1)*(n+1)])
	}
	fourier(values, input, false, n, k)
	return polValues{k, n, values}
}

// InvTransform reconstructs p (modulo X^K - 1) from its
// values at θ^i for i = 0..K-1.
func (v *polValues) InvTransform() poly {
	k, n := v.k, v.n

	// Perform an inverse Fourier transform to recover p.
	pbits := make([]big.Word, (n+1)<<k)
	p := make([]fermat, 1<<k)
	for i := range p {
		p[i] = fermat(pbits[i*(n+1) : (i+1)*(n+1)])
	}
	fourier(p, v.values, true, n, k)
	// Divide by K to recover p.
	u := make(fermat, n+1)
	a := make([]nat, 1<<k)
	for i := range p {
		u.Shift(p[i], -int(k))
		copy(p[i], u)
		a[i] = nat(p[i])
	}
	return poly{k: k, m: 0, a: a}
}

// NTransform evaluates p at θω^i for i = 0...K-1, where
// θ is a (2K)-th primitive root of unity in Z/(b^n+1)Z
// and ω = θ².
func (p *poly) NTransform(n int) polValues {
	k := p.k
	if len(p.a) >= 1<<k {
		panic("NTransform: len(p.a) >= 1<<k")
	}
	// θ is represented as a shift.
	θshift := (n * _W) >> k
	// p(x) = a_0 + a_1 x + ... + a_{K-1} x^(K-1)
	// p(θx) = q(x) where
	// q(x) = a_0 + θa_1 x + ... + θ^(K-1) a_{K-1} x^(K-1)
	//
	// Twist p by θ to obtain q.
	tbits := make([]big.Word, (n+1)<<k)
	twisted := make([]fermat, 1<<k)
	src := make(fermat, n+1)
	for i := range twisted {
		twisted[i] = fermat(tbits[i*(n+1) : (i+1)*(n+1)])
		if i < len(p.a) {
			for j := range src {
				src[j] = 0
			}
			copy(src, p.a[i])
			twisted[i].Shift(src, θshift*i)
		}
	}

	// Now compute q(ω^i) for i = 0 ... K-1.
	valbits := make([]big.Word, (n+1)<<k)
	values := make([]fermat, 1<<k)
	for i := range values {
		values[i] = fermat(valbits[i*(n+1) : (i+1)*(n+1)])
	}
	fourier(values, twisted, false, n, k)
	return polValues{k, n, values}
}

// InvNTransform reconstructs a polynomial from its values at
// roots of x^K+1. The m field of the returned polynomial
// is unspecified.
func (v *polValues) InvNTransform() poly {
	k := v.k
	n := v.n
	θshift := (n * _W) >> k

	// Perform an inverse Fourier transform to recover q.
	qbits := make([]big.Word, (n+1)<<k)
	q := make([]fermat, 1<<k)
	for i := range q {
		q[i] = fermat(qbits[i*(n+1) : (i+1)*(n+1)])
	}
	fourier(q, v.values, true, n, k)

	// Divide by K, and untwist q to recover p.
	u := make(fermat, n+1)
	a := make([]nat, 1<<k)
	for i := range q {
		u.Shift(q[i], -int(k)-i*θshift)
		copy(q[i], u)
		a[i] = nat(q[i])
	}
	return poly{k: k, m: 0, a: a}
}

// fourier performs an unnormalized Fourier transform
// of src, a length 1<<k vector of numbers modulo b^n+1
// where b = 1<<_W.
func fourier(dst []fermat, src []fermat, backward bool, n int, k uint) {
	var rec func(dst, src []fermat, size uint)
	tmp := make(fermat, n+1)  // pre-allocate temporary variables.
	tmp2 := make(fermat, n+1) // pre-allocate temporary variables.

	// The recursion function of the FFT.
	// The root of unity used in the transform is ω=1<<(ω2shift/2).
	// The source array may use shifted indices (i.e. the i-th
	// element is src[i << idxShift]).
	rec = func(dst, src []fermat, size uint) {
		idxShift := k - size
		ω2shift := (4 * n * _W) >> size
		if backward {
			ω2shift = -ω2shift
		}

		// Easy cases.
		if len(src[0]) != n+1 || len(dst[0]) != n+1 {
			panic("len(src[0]) != n+1 || len(dst[0]) != n+1")
		}
		switch size {
		case 0:
			copy(dst[0], src[0])
			return
		case 1:
			dst[0].Add(src[0], src[1<<idxShift]) // dst[0] = src[0] + src[1]
			dst[1].Sub(src[0], src[1<<idxShift]) // dst[1] = src[0] - src[1]
			return
		}

		// Let P(x) = src[0] + src[1<<idxShift] * x + ... + src[K-1 << idxShift] * x^(K-1).
		// Then P(x) = Q1(x²) + x*Q2(x²)
		// where Q1's coefficients are src with indices shifted by 1
		// and Q2's coefficients are src[1<<idxShift:] with indices shifted by 1.

		// Split destination vectors in halves.
		dst1 := dst[:1<<(size-1)]
		dst2 := dst[1<<(size-1):]
		// Transform Q1 and Q2 in the halves.
		rec(dst1, src, size-1)
		rec(dst2, src[1<<idxShift:], size-1)

		// Reconstruct P's transform from transforms of Q1 and Q2.
		// dst[i] is dst1[i] + ω^i * dst2[i]
		// dst[i + 1<<(k-1)] is dst1[i] + ω^(i+K/2) * dst2[i]
		//
		for i := range dst1 {
			tmp.ShiftHalf(dst2[i], i*ω2shift, tmp2) // ω^i * dst2[i]
			dst2[i].Sub(dst1[i], tmp)
			dst1[i].Add(dst1[i], tmp)
		}
	}
	rec(dst, src, k)
}

// Mul returns the pointwise product of p and q.
func (p *polValues) Mul(q *polValues) (r polValues) {
	n := p.n
	r.k, r.n = p.k, p.n
	r.values = make([]fermat, len(p.values))
	bits := make([]big.Word, len(p.values)*(n+1))
	buf := make(fermat, 8*n)
	for i := range r.values {
		r.values[i] = bits[i*(n+1) : (i+1)*(n+1)]
		z := buf.Mul(p.values[i], q.values[i])
		copy(r.values[i], z)
	}
	return
}
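Mul above is the package's public entry point and a drop-in replacement for big.Int.Mul; it only takes the FFT path once both operands exceed fftThreshold (1800) words. A usage sketch, where the operand sizes are arbitrary choices for this example:

package main

import (
	"fmt"
	"math/big"

	"github.com/remyoudompheng/bigfft"
)

func main() {
	// Two ~200000-bit operands, comfortably above fftThreshold words.
	x := new(big.Int).Lsh(big.NewInt(3), 200000)
	y := new(big.Int).Lsh(big.NewInt(5), 200000)

	z := bigfft.Mul(x, y) // FFT path; small inputs fall back to big.Int.Mul
	fmt.Println(z.BitLen())                         // 400004: 15 * 2^400000
	fmt.Println(z.Cmp(new(big.Int).Mul(x, y)) == 0) // true: same product
}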
3 vendor/github.com/remyoudompheng/bigfft/go.mod generated vendored Normal file
@@ -0,0 +1,3 @@
module github.com/remyoudompheng/bigfft

go 1.12
70 vendor/github.com/remyoudompheng/bigfft/scan.go generated vendored Normal file
@@ -0,0 +1,70 @@
package bigfft

import (
	"math/big"
)

// FromDecimalString converts the base 10 string
// representation of a natural (non-negative) number
// into a *big.Int.
// Its asymptotic complexity is less than quadratic.
func FromDecimalString(s string) *big.Int {
	var sc scanner
	z := new(big.Int)
	sc.scan(z, s)
	return z
}

type scanner struct {
	// powers[i] is 10^(2^i * quadraticScanThreshold).
	powers []*big.Int
}

func (s *scanner) chunkSize(size int) (int, *big.Int) {
	if size <= quadraticScanThreshold {
		panic("size <= quadraticScanThreshold")
	}
	pow := uint(0)
	for n := size; n > quadraticScanThreshold; n /= 2 {
		pow++
	}
	// threshold * 2^(pow-1) <= size < threshold * 2^pow
	return quadraticScanThreshold << (pow - 1), s.power(pow - 1)
}

func (s *scanner) power(k uint) *big.Int {
	for i := len(s.powers); i <= int(k); i++ {
		z := new(big.Int)
		if i == 0 {
			if quadraticScanThreshold%14 != 0 {
				panic("quadraticScanThreshold % 14 != 0")
			}
			z.Exp(big.NewInt(1e14), big.NewInt(quadraticScanThreshold/14), nil)
		} else {
			z.Mul(s.powers[i-1], s.powers[i-1])
		}
		s.powers = append(s.powers, z)
	}
	return s.powers[k]
}

func (s *scanner) scan(z *big.Int, str string) {
	if len(str) <= quadraticScanThreshold {
		z.SetString(str, 10)
		return
	}
	sz, pow := s.chunkSize(len(str))
	// Scan the left half.
	s.scan(z, str[:len(str)-sz])
	// FIXME: reuse temporaries.
	left := Mul(z, pow)
	// Scan the right half.
	s.scan(z, str[len(str)-sz:])
	z.Add(z, left)
}

// quadraticScanThreshold is the number of digits
// below which big.Int.SetString is more efficient
// than subquadratic algorithms.
// 1232 digits fit in 4096 bits.
const quadraticScanThreshold = 1232
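FromDecimalString is the other public entry point: inputs longer than quadraticScanThreshold digits are split, scanned recursively, and recombined with the FFT-backed Mul, while shorter inputs go straight to big.Int.SetString. A usage sketch, where the 2000-digit input is an arbitrary choice for this example:

package main

import (
	"fmt"
	"strings"

	"github.com/remyoudompheng/bigfft"
)

func main() {
	// 2000 digits > quadraticScanThreshold (1232), so the subquadratic
	// split-and-recombine path is exercised.
	s := "1" + strings.Repeat("0", 1999) // 10^1999
	n := bigfft.FromDecimalString(s)
	fmt.Println(n.BitLen()) // 6641: the bit length of 10^1999
}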
27 vendor/golang.org/x/mod/LICENSE generated vendored Normal file
@@ -0,0 +1,27 @@
Copyright (c) 2009 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
22 vendor/golang.org/x/mod/PATENTS generated vendored Normal file
@@ -0,0 +1,22 @@
Additional IP Rights Grant (Patents)

"This implementation" means the copyrightable works distributed by
Google as part of the Go project.

Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.
Some files were not shown because too many files have changed in this diff.