mirror of
https://framagit.org/ppom/reaction
synced 2026-03-15 13:15:47 +01:00
Compare commits
No commits in common. "main" and "v2.0.0-rc2" have entirely different histories.
main
v2.0.0-rc2
142 changed files with 5330 additions and 20566 deletions
.envrc (1 change)
@@ -1 +0,0 @@
use_nix
.gitignore (vendored, 12 changes)
@@ -1,15 +1,21 @@
/reaction
reaction.db
reaction.db.old
/ip46tables
/nft46
reaction*.db
reaction*.db.old
/data
reaction*.export.json
/reaction*.sock
/result
/wiki
/deb
*.deb
*.minisig
*.qcow2
debian-packaging/*
*.swp
export-go-db/export-go-db
import-rust-db/target
/target
/local
.ccls-cache
.direnv
.gitlab-ci.yml (new file, 15 changes)
@@ -0,0 +1,15 @@
---
image: golang:1.20-bookworm
stages:
  - build

variables:
  DEBIAN_FRONTEND: noninteractive

test_building:
  stage: build
  before_script:
    - apt-get -qq -y update
    - apt-get -qq -y install build-essential devscripts debhelper quilt wget
  script:
    - make reaction ip46tables nft46
ARCHITECTURE.md
@@ -6,18 +6,18 @@ Here is a high-level overview of the codebase.

## Build

- `bench/`: Configuration that spawns a very high load on reaction. Useful to test performance improvements and regressions.
- `build.rs`: generates shell completions and man pages at build time.
- `Cargo.toml`, `Cargo.lock`: manifest and dependencies.
- `config/`: example / test configuration files. Look at its git history to discover more.
- `Makefile`: Makefile. Summarizes useful commands.
- `packaging/`: Files useful for .deb and .tar generation.
- `release.py`: Build process for a release. Handles cross-compilation, .tar and .deb generation.
- `config`: example / test configuration files. Look at the git history to discover more!
- `debian`: reaction.deb generation.
- `Makefile`: Makefile. I plan to remove this at some point.
- `release.py`: Build process for a release. I'd like to make it more modular, to allow building only specific parts, for example.

## Main source code

- `tests/`: Integration tests. They test reaction runtime behavior, persistence, client-daemon communication, and plugin integrations.
- `src/`: The source code, here we go!
- `helpers_c`: C helpers. I wish to have special IP support in reaction and get rid of them.
- `tests`: Integration tests. For now they test basic reaction runtime behavior, persistence, and client-daemon communication.
- `src`: The source code, here we go!

### Top-level files

@@ -25,57 +25,47 @@ Here is a high-level overview of the codebase.

- `src/lib.rs`: Second main entrypoint
- `src/cli.rs`: Command-line arguments
- `src/tests.rs`: Test utilities
- `src/protocol.rs`: de/serialization and client/daemon protocol messages.

### `src/concepts/`

### `src/concepts`

reaction really is about its configuration, which is at the center of the code.

There is one file for each of its concepts: configuration, streams, filters, actions, patterns, plugins.
There is one file for each of its concepts: configuration, streams, filters, actions, patterns.

### `src/client/`

### `src/protocol`

Client code: `reaction show`, `reaction flush`, `reaction trigger`, `reaction test-regex`.
Low-level serialization/deserialization and client-daemon protocol messages.

- `request.rs`: commands requiring client/server communication: `show`, `flush` & `trigger`.
- `test_config.rs`: `test-config` command.
Shared by the client and the daemon's socket. Also used by the daemon's database.
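To make the protocol layer described above concrete, here is a hedged sketch of a length-prefixed framing layer of the kind such a client-daemon socket needs. The real code serializes messages with serde (the manifest lists `bincode`); the u32 little-endian length prefix and the function names here are assumptions for illustration only.

```rust
use std::io::{Read, Write};

// Write one message as a 4-byte little-endian length followed by the payload.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<()> {
    w.write_all(&(payload.len() as u32).to_le_bytes())?;
    w.write_all(payload)
}

// Read back exactly one framed message.
fn read_frame<R: Read>(r: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len = [0u8; 4];
    r.read_exact(&mut len)?;
    let mut buf = vec![0u8; u32::from_le_bytes(len) as usize];
    r.read_exact(&mut buf)?;
    Ok(buf)
}
```

Framing like this lets the daemon read whole messages off the stream before handing them to the deserializer, so partial reads on the socket never split a message.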
### `src/client`

Client code: `reaction show`, `reaction flush`, `reaction test-regex`.

- `show_flush.rs`: `show` & `flush` commands.
- `test_regex.rs`: `test-regex` command.

### `src/daemon`

Daemon runtime structures and logic.

This code contains async code, to handle input streams and communication with clients, using the tokio runtime.
This code is mainly async, with the tokio runtime.

- `mod.rs`: daemon main function. Initializes all tasks, handles synchronization and quitting, etc.
- `stream.rs`: Stream managers: start the stream `cmd` and dispatch its stdout lines to its Filter managers.
- `filter/`: Filter managers: handle lines, persistence, store matches and trigger actions. This is the main piece of runtime logic.
  - `mod.rs`: High-level logic
  - `state.rs`: Inner state operations
- `filter.rs`: Filter managers: handle lines, store matches, send logs to the database and decide when to trigger actions.
- `action.rs`: Action managers: handle action triggers (*execs*), store & manage pending actions.
- `socket.rs`: The socket task, responsible for communication with clients.
- `plugin.rs`: Plugin startup, configuration loading and cleanup.
- `database`: The database thread. This is a sync thread, because it's somehow much faster. At startup it sends persisted matches to the Filter managers. Then it receives match/exec logs from the filters and persists them.
  - `database/mod.rs`: Main logic.
  - `database/lowlevel.rs`: Low-level implementation details (serialization / deserialization and size optimizations).
  - `database/tests.rs`: Unit tests.
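The database-thread pattern described above (async filter tasks feeding one synchronous persistence thread over a channel) can be sketched with std threads. The message variants and names here are assumptions; the real daemon uses tokio tasks and treedb, but the channel shape is the same idea.

```rust
use std::sync::mpsc;
use std::thread;

// Messages the filter managers send to the database thread.
enum DbMsg {
    Match(String),
    Exec(String),
    Shutdown,
}

// One synchronous thread drains the channel and persists entries in
// arrival order, so log ordering is preserved without any locking.
fn spawn_db_thread(rx: mpsc::Receiver<DbMsg>) -> thread::JoinHandle<Vec<String>> {
    thread::spawn(move || {
        let mut persisted = Vec::new();
        for msg in rx {
            match msg {
                DbMsg::Match(m) => persisted.push(format!("match:{m}")),
                DbMsg::Exec(e) => persisted.push(format!("exec:{e}")),
                DbMsg::Shutdown => break,
            }
        }
        persisted
    })
}
```

Because `mpsc` channels are FIFO per sender, the single consumer sees a total order of writes, which matches the "lines should always be in order" property the TODO file asks to stress-test.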
### `crates/treedb`

## Migration from Go to Rust

Persistence layer.

- `go.old/`: Go / v1 codebase.

This is a database highly adapted to reaction's workload, making reaction faster than when used with general-purpose key-value databases
(the heed, sled and fjall crates have been tested).
Its design is explained in the comments of its files:
Those scripts are merged into a single-file executable by `release.py`:

- `lib.rs`: main database code, with its two API structs: Tree and Database.
- `raw.rs`: low-level part, directly interacting with de/serialization and files.
- `time.rs`: time definitions shared with reaction.
- `helpers.rs`: utilities to ease db deserialization from disk.

### `plugins/reaction-plugin`

Shared plugin interface between the reaction daemon and its plugins.

Also defines some shared logic between them:
- `shutdown.rs`: Logic for passing the shutdown signal across all tasks
- `parse_duration.rs`: Duration parsing
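The duration strings used throughout the configuration (`'6h'`, `'48h'`, `'1m'`) suggest a parser along these lines. This is a hedged minimal sketch; the real `parse_duration.rs` may accept a richer grammar.

```rust
// Parse "6h" / "48h" / "1m" style durations into seconds.
// Assumed minimal grammar: decimal integer + one-letter unit.
fn parse_duration(s: &str) -> Option<u64> {
    if s.len() < 2 || !s.is_ascii() {
        return None;
    }
    let (num, unit) = s.split_at(s.len() - 1);
    let n: u64 = num.parse().ok()?;
    let secs_per_unit = match unit {
        "s" => 1,
        "m" => 60,
        "h" => 3600,
        "d" => 86400,
        _ => return None,
    };
    Some(n * secs_per_unit)
}
```

Returning `Option` (or an error type) matters here: a typo like `retryperiod: '6x'` should fail configuration loading rather than silently parse to zero.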
### `plugins/reaction-plugin-*`

All core plugins.
- `export-go-db/`: Go script to export the reaction-v1 database as JSON.
- `import-rust-db/`: Rust script to import the JSON export as a reaction-v2 database.
Cargo.lock (generated, 4137 changes)
File diff suppressed because it is too large.
Cargo.toml (78 changes)
@@ -1,7 +1,7 @@
[package]
name = "reaction"
version = "2.3.0"
edition = "2024"
version = "2.0.0-rc2"
edition = "2021"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
description = "Scan logs and take action"

@@ -10,58 +10,40 @@ homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "fail2ban", "logs", "monitoring"]
build = "build.rs"
default-run = "reaction"

[package.metadata.deb]
section = "net"
extended-description = """A daemon that scans program outputs for repeated patterns, and takes action.
A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.
reaction aims at being a successor to fail2ban."""
maintainer-scripts = "packaging/"
systemd-units = { enable = false }
assets = [
  # Executables
  # Executable
  [ "target/release/reaction", "/usr/bin/reaction", "755" ],
  [ "target/release/reaction-plugin-virtual", "/usr/bin/reaction-plugin-virtual", "755" ],
  # Man pages
  [ "target/release/reaction*.1", "/usr/share/man/man1/", "644" ],
  # Shell completions
  [ "target/release/reaction.bash", "/usr/share/bash-completion/completions/reaction", "644" ],
  [ "target/release/reaction.fish", "/usr/share/fish/completions/", "644" ],
  [ "target/release/_reaction", "/usr/share/zsh/vendor-completions/", "644" ],
  # Slice
  [ "packaging/system-reaction.slice", "/usr/lib/systemd/system/", "644" ],
]

[dependencies]
# Time types
chrono.workspace = true
# CLI parsing
bincode = "1.3.3"
chrono = { version = "0.4.38", features = ["std", "clock", "serde"] }
clap = { version = "4.5.4", features = ["derive"] }
# Unix interfaces
jrsonnet-evaluator = "0.4.2"
nix = { version = "0.29.0", features = ["signal"] }
num_cpus = "1.16.0"
# Regex matching
regex = "1.10.4"
# Configuration languages, ser/deserialisation
serde.workspace = true
serde_json.workspace = true
serde = { version = "1.0.203", features = ["derive"] }
serde_json = "1.0.117"
serde_yaml = "0.9.34"
jrsonnet-evaluator = "0.4.2"
# Error macro
thiserror.workspace = true
# Async runtime & helpers
futures = { workspace = true }
tokio = { workspace = true, features = ["full", "tracing"] }
tokio-util = { workspace = true, features = ["codec"] }
# Async logging
tracing.workspace = true
thiserror = "1.0.63"
timer = "0.2.0"
futures = "0.3.30"
tokio = { version = "1.40.0", features = ["full", "tracing"] }
tokio-util = { version = "0.7.12", features = ["codec"] }
tracing = "0.1.40"
tracing-subscriber = "0.3.18"
# Database
treedb.workspace = true
# Reaction plugin system
remoc.workspace = true
reaction-plugin.workspace = true
sled = "0.34.7"

[build-dependencies]
clap = { version = "4.5.4", features = ["derive"] }
@@ -72,34 +54,4 @@ tracing = "0.1.40"

[dev-dependencies]
rand = "0.8.5"
treedb.workspace = true
treedb.features = ["test"]
tempfile.workspace = true
assert_fs.workspace = true
assert_cmd = "2.0.17"
predicates = "3.1.3"

[workspace]
members = [
  "crates/treedb",
  "plugins/reaction-plugin",
  "plugins/reaction-plugin-cluster",
  "plugins/reaction-plugin-ipset",
  "plugins/reaction-plugin-nftables",
  "plugins/reaction-plugin-virtual"
]

[workspace.dependencies]
assert_fs = "1.1.3"
chrono = { version = "0.4.38", features = ["std", "clock", "serde"] }
futures = "0.3.30"
remoc = { version = "0.18.3" }
serde = { version = "1.0.203", features = ["derive"] }
serde_json = { version = "1.0.117", features = ["arbitrary_precision"] }
tempfile = "3.12.0"
thiserror = "1.0.63"
tokio = { version = "1.40.0" }
tokio-util = { version = "0.7.12" }
tracing = "0.1.40"
reaction-plugin = { path = "plugins/reaction-plugin" }
treedb = { path = "crates/treedb" }
Dockerfile (11 changes)
@@ -1,11 +0,0 @@
# This Dockerfile allows building reaction and its plugins

# Use Debian old-stable, so that it runs on both old-stable and stable
FROM rust:bookworm

RUN apt update && apt install -y \
    clang \
    libipset-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /reaction
Makefile (5 changes)
@@ -14,10 +14,9 @@ reaction:

install: reaction
	install -m755 target/release/reaction $(DESTDIR)$(BINDIR)
	install -m755 target/release/ip46tables $(DESTDIR)$(BINDIR)
	install -m755 target/release/nft46 $(DESTDIR)$(BINDIR)

install_systemd: install
	install -m644 packaging/reaction.service $(SYSTEMDDIR)/system/reaction.service
	sed -i 's#/usr/local/bin#$(DESTDIR)$(BINDIR)#' $(SYSTEMDDIR)/system/reaction.service

release:
	nix-shell release.py
README.md (145 changes)
@@ -4,40 +4,39 @@ A daemon that scans program outputs for repeated patterns, and takes action.

A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.

🚧 This program hasn't received an external security audit yet. However, it already works well on many servers 🚧
🚧 This program hasn't received an external audit. However, it already works well on my servers 🚧

## Current project status

reaction just reached v2.0.0-rc2, which is a complete Rust rewrite of reaction.
It's at feature parity with the Go version, and breaking changes should be small.

See https://reaction.ppom.me/migrate-to-v2.html

## Rationale

I had been using the honorable fail2ban for quite a long time, but I was a bit frustrated by its CPU consumption
I had been using the honorable fail2ban for quite a long time, but I was a bit frustrated by its CPU consumption
and all its heavy default configuration.

In my view, a security-oriented program should be simple to configure
and an always-running daemon should be implemented in a fast*er* language.

reaction does not have all the features of the honorable fail2ban, but it's more than 10x faster and has a more manageable configuration.
reaction does not have all the features of the honorable fail2ban, but it's ~10x faster and has a more manageable configuration.

[📽️ quick French name explanation 😉](https://u.ppom.me/reaction.webm)

[🇬🇧 in-depth blog article](https://blog.ppom.me/en-reaction)
/ [🇫🇷 French version](https://blog.ppom.me/fr-reaction)

## Rust rewrite

reaction v2.x is a complete Rust rewrite of reaction.
It's at feature parity with the Go version, v1.x, which is now deprecated.

See https://blog.ppom.me/en-reaction-v2.

## Configuration

YAML and [JSONnet](https://jsonnet.org/) (more powerful) are supported.
Both are extensions of JSON, so JSON is transitively supported.

- See [reaction.yml](./config/example.yml) or [reaction.jsonnet](./config/example.jsonnet) for a fully explained reference (IPv4 + IPv6)
- See the [wiki](https://reaction.ppom.me) for multiple examples, security recommendations and an FAQ.
- See [server.jsonnet](https://reaction.ppom.me/configurations/ppom/server.jsonnet.html) for a real-world configuration
- See [reaction.service](./config/reaction.service) for a systemd service file
- This minimal example (IPv4 only) shows what's needed to prevent brute-force attacks on an ssh server (please read at least the [Security](https://reaction.ppom.me/security.html) part of the wiki before starting 🆙):
- See [reaction.yml](./app/example.yml) or [reaction.jsonnet](./config/example.jsonnet) for a fully explained reference
- See [server.jsonnet](./config/server.jsonnet) for a real-world configuration
- See [reaction.example.service](./config/reaction.example.service) for a systemd service file
- This minimal example shows what's needed to prevent brute-force attacks on an ssh server (please take a look at more complete references before starting 🆙):

<details open>
@@ -46,18 +45,18 @@ Both are extensions of JSON, so JSON is transitively supported.
```yaml
patterns:
  ip:
    type: ipv4
    regex: '(([0-9]{1,3}\.){3}[0-9]{1,3})|([0-9a-fA-F:]{2,90})'

start:
  - [ 'iptables', '-w', '-N', 'reaction' ]
  - [ 'iptables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'iptables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-N', 'reaction' ]
  - [ 'ip46tables', '-w', '-A', 'reaction', '-j', 'ACCEPT' ]
  - [ 'ip46tables', '-w', '-I', 'reaction', '1', '-s', '127.0.0.1', '-j', 'ACCEPT' ]
  - [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]

stop:
  - [ 'iptables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'iptables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'iptables', '-w', '-F', 'reaction' ]
  - [ 'iptables', '-w', '-X', 'reaction' ]
  - [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-F', 'reaction' ]
  - [ 'ip46tables', '-w', '-X', 'reaction' ]

streams:
  ssh:
```

@@ -66,16 +65,13 @@ streams:
```yaml
    failedlogin:
      regex:
        - 'authentication failure;.*rhost=<ip>'
        - 'Failed password for .* from <ip>'
        - 'Invalid user .* from <ip>'
        - 'banner exchange: Connection from <ip> port [0-9]*: invalid format'
      retry: 3
      retryperiod: '6h'
      actions:
        ban:
          cmd: [ 'iptables', '-w', '-I', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
          cmd: [ 'ip46tables', '-w', '-I', 'reaction', '1', '-s', '<ip>', '-j', 'block' ]
        unban:
          cmd: [ 'iptables', '-w', '-D', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
          cmd: [ 'ip46tables', '-w', '-D', 'reaction', '1', '-s', '<ip>', '-j', 'block' ]
          after: '48h'
```
@@ -86,44 +82,39 @@ streams:
<summary><code>/etc/reaction.jsonnet</code></summary>

```jsonnet
local iptables(args) = [ 'ip46tables', '-w' ] + args;
local banFor(time) = {
  ban: {
    cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
    cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'reaction-log-refuse']),
  },
  unban: {
    cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
    after: time,
    cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'reaction-log-refuse']),
  },
};

{
  patterns: {
    ip: {
      type: 'ipv4',
      regex: @'(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
    },
  },
  start: [
    ['iptables', '-N', 'reaction'],
    ['iptables', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
    ['iptables', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction'],
    iptables([ '-N', 'reaction' ]),
    iptables([ '-A', 'reaction', '-j', 'ACCEPT' ]),
    iptables([ '-I', 'reaction', '1', '-s', '127.0.0.1', '-j', 'ACCEPT' ]),
    iptables([ '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]),
  ],
  stop: [
    ['iptables', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
    ['iptables', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction'],
    ['iptables', '-F', 'reaction'],
    ['iptables', '-X', 'reaction'],
    iptables([ '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]),
    iptables([ '-F', 'reaction' ]),
    iptables([ '-X', 'reaction' ]),
  ],
  streams: {
    ssh: {
      cmd: ['journalctl', '-fu', 'sshd.service'],
      cmd: [ 'journalctl', '-fu', 'sshd.service' ],
      filters: {
        failedlogin: {
          regex: [
            @'authentication failure;.*rhost=<ip>',
            @'Failed password for .* from <ip>',
            @'banner exchange: Connection from <ip> port [0-9]*: invalid format',
            @'Invalid user .* from <ip>',
          ],
          regex: [ @'authentication failure;.*rhost=<ip>' ],
          retry: 3,
          retryperiod: '6h',
          actions: banFor('48h'),
```
@@ -136,29 +127,26 @@ local banFor(time) = {

</details>

> It is recommended to set up reaction with [`nftables`](https://reaction.ppom.me/actions/nftables.html)
> or [`ipset` + `iptables`](https://reaction.ppom.me/actions/ipset.html), which are much more performant
> solutions than `iptables` alone.

### Database

The embedded database is stored in the working directory (but this can be overridden by the `state_directory` config option).
The embedded database is stored in the working directory.
If you don't know where to start reaction, `/var/lib/reaction` should be a sane choice.

### CLI

- `reaction start` runs the server
- `reaction show` shows pending actions (i.e. current bans)
- `reaction show` shows pending actions (i.e. bans)
- `reaction flush` runs pending actions early (i.e. clears bans)
- `reaction trigger` manually triggers a filter (i.e. runs a custom ban)
- `reaction test-regex` tests regexes
- `reaction test-config` shows the loaded configuration
- `reaction help` for full usage.

### Old binaries

### `ip46tables`

The `ip46tables` and `nft46` binaries are no longer part of reaction. If you really need them, see
[the last commit that included them](https://framagit.org/ppom/reaction/-/tree/b7d997ca5e9a69c8572bb2ec9d27d0eb03b3cb9f/helpers_c).
`ip46tables` is a minimal C program present in its own subdirectory, with only standard POSIX dependencies.

It allows configuring `iptables` and `ip6tables` at the same time.
It executes `iptables` when it detects an IPv4 address, `ip6tables` when it detects an IPv6 address, and both if no IP address is present on the command line.
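The dispatch rule just described can be sketched like this. This is an illustrative Rust model, not the actual C source of `ip46tables`; the function name is an assumption.

```rust
use std::net::{Ipv4Addr, Ipv6Addr};

// Pick which firewall binaries to invoke based on the addresses found
// among the command-line arguments: IPv4 -> iptables, IPv6 -> ip6tables,
// no address at all -> run both.
fn binaries_for(args: &[&str]) -> Vec<&'static str> {
    let has_v4 = args.iter().any(|a| a.parse::<Ipv4Addr>().is_ok());
    let has_v6 = args.iter().any(|a| a.parse::<Ipv6Addr>().is_ok());
    match (has_v4, has_v6) {
        (true, false) => vec!["iptables"],
        (false, true) => vec!["ip6tables"],
        _ => vec!["iptables", "ip6tables"],
    }
}
```

This is why chain setup commands like `ip46tables -N reaction` (no address argument) apply to both IPv4 and IPv6 tables at once, while a ban for a concrete address only touches the matching one.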
## Wiki

@@ -184,7 +172,7 @@ reaction is packaged, but the [**module**](https://framagit.org/ppom/nixos/-/blo

#### OpenBSD

See the [wiki](https://reaction.ppom.me/configurations/OpenBSD.html).
[wiki](https://reaction.ppom.me/configs/openbsd.html)

### Compilation
@@ -206,46 +194,13 @@ To install the systemd file as well
```
make install_systemd
```

## Contributing
## Development

> We, as participants in the open source ecosystem, are ethically responsible for the software
> and hardware we help create - as it can be used to perpetuate inequalities or help empower
> marginalized communities, and fight against patriarchy, capitalism, sexism, gender violence,
> racism, ableism, homophobia, colonialism, fascism, surveillance, and oppressive control.
Contributions are welcome. For any substantial feature, please file an issue first, to make sure we agree on the feature, and to avoid unnecessary work.

- [NGI's Diversity and Inclusion Guide](https://nlnet.nl/NGI0/bestpractices/DiversityAndInclusionGuide-v4.pdf)

I'll do my best to maintain a safe place for contributions, as free as possible from discrimination and elitism.

### Ideas

Please take a look at issues which have the "Opinion Welcome 👀" label!
*Your opinion is welcome.*

Your ideas are welcome in the issues.

### Code

Contributions are welcome.

For any substantial feature, please file an issue first, to make sure we agree on the feature, and to avoid unnecessary work.

I recommend reading [`ARCHITECTURE.md`](ARCHITECTURE.md) first. This is a quick tour of the codebase, which should save time for new contributors.

You can also join this Matrix development room: [#reaction-dev-en:club1.fr](https://matrix.to/#/#reaction-dev-en:club1.fr).
French version: [#reaction-dev-fr:club1.fr](https://matrix.to/#/#reaction-dev-fr:club1.fr).

## Help

You can ask for help in the issues or in this Matrix room: [#reaction-users-en:club1.fr](https://matrix.to/#/#reaction-users-en:club1.fr).
French version: [#reaction-users-fr:club1.fr](https://matrix.to/#/#reaction-users-fr:club1.fr).
You can alternatively send a mail: `reaction` on domain `ppom.me`.
I recommend reading [`ARCHITECTURE.md`](ARCHITECTURE.md) first. This is a tour of the codebase, which should save time for potential contributors.

## Funding

<!-- This is a free-time project, so I'm not working on a schedule.
However, if you're willing to fund the project, I can prioritize and plan paid work. This includes features, documentation and specific JSONnet configurations. -->

This project is currently funded through the [NGI0 Core](https://nlnet.nl/core) Fund, a fund established by [NLnet](https://nlnet.nl) with financial support from the European Commission's [Next Generation Internet](https://ngi.eu) programme.

![NLnet foundation logo NGI0 logo](https://nlnet.nl/logo/banner.png)

This is a free-time project, so I'm not working on a schedule.
However, if you're willing to fund the project, I can prioritize and plan paid work. This includes features, documentation and specific JSONnet configurations.
TODO (3 changes)
@@ -1,3 +0,0 @@
Test what happens when a Filter's pattern Set changes (I think it's shitty)
DB: add tests on stress testing (lines should always be in order)
conf: merge filters
(deleted bench script, 24 lines)
@@ -1,24 +0,0 @@
set -e

if test "$(realpath "$PWD")" != "$(realpath "$(dirname "$0")/..")"
then
  echo "You must be in reaction root directory"
  exit 1
fi

if test ! -f "$1"
then
  # shellcheck disable=SC2016
  echo '$1 must be a configuration file (most probably in ./bench)'
  exit 1
fi

rm -f reaction.db
cargo build --release --bins
sudo systemd-run --wait \
  -p User="$(id -nu)" \
  -p MemoryAccounting=yes \
  -p IOAccounting=yes \
  -p WorkingDirectory="$(pwd)" \
  -p Environment=PATH=/run/current-system/sw/bin/ \
  sh -c "for i in 1 2; do ./target/release/reaction start -c '$1' -l ERROR -s ./reaction.sock; done"
bench/nginx.yml (130 changes)
@@ -1,130 +0,0 @@
# This is an extract of a real-life configuration
#
# It reads an nginx access.log in the following format:
# log_format '$remote_addr - $remote_user [$time_local] '
#            '$host '
#            '"$request" $status $bytes_sent '
#            '"$http_referer" "$http_user_agent"';
#
# I can't make my access.log public for obvious privacy reasons.
#
# Unlike heavy-load.yml, this test is closer to real-life regex complexity.
#
# It has been created to test the performance improvements of
# the previous commit: ad6b0faa30c1af84360f66074a917b4bf6cda10a
#
# In this test, most lines don't match anything, so most time is spent matching regexes.

concurrency: 0
patterns:
  ip:
    ignore:
      - 192.168.1.253
      - 10.1.1.1
      - 10.1.1.5
      - 10.1.1.4
      - 127.0.0.1
      - ::1
    regex: (?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))
  untilEOL:
    regex: .*$
streams:
  nginx:
    cmd:
      - cat
      - /tmp/access.log
    filters:
      directusFailedLogin:
        actions:
          ban:
            cmd:
              - sleep
              - 0.01
          unban:
            after: 4h
            cmd:
              - sleep
              - 0.01
        regex:
          - ^<ip> .* "POST /repertoire/auth/login HTTP/..." 401 [0-9]+ .https://babos.land
          - ^<ip> .* "POST /pompeani.art/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
          - ^<ip> .* "POST /leborddeleau/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
          - ^<ip> .* "POST /5eroue/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
          - ^<ip> .* "POST /edit/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
          - ^<ip> .* "POST /auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.fr
        retry: 6
        retryperiod: 4h
      gptbot:
        actions:
          ban:
            cmd:
              - sleep
              - 0.01
          unban:
            after: 4h
            cmd:
              - sleep
              - 0.01
        regex:
          - ^<ip>.*"[^"]*AI2Bot[^"]*"$
          - ^<ip>.*"[^"]*Amazonbot[^"]*"$
          - ^<ip>.*"[^"]*Applebot[^"]*"$
          - ^<ip>.*"[^"]*Applebot-Extended[^"]*"$
          - ^<ip>.*"[^"]*Bytespider[^"]*"$
          - ^<ip>.*"[^"]*CCBot[^"]*"$
          - ^<ip>.*"[^"]*ChatGPT-User[^"]*"$
          - ^<ip>.*"[^"]*ClaudeBot[^"]*"$
          - ^<ip>.*"[^"]*Diffbot[^"]*"$
          - ^<ip>.*"[^"]*DuckAssistBot[^"]*"$
          - ^<ip>.*"[^"]*FacebookBot[^"]*"$
          - ^<ip>.*"[^"]*GPTBot[^"]*"$
          - ^<ip>.*"[^"]*Google-Extended[^"]*"$
          - ^<ip>.*"[^"]*Kangaroo Bot[^"]*"$
          - ^<ip>.*"[^"]*Meta-ExternalAgent[^"]*"$
          - ^<ip>.*"[^"]*Meta-ExternalFetcher[^"]*"$
          - ^<ip>.*"[^"]*OAI-SearchBot[^"]*"$
          - ^<ip>.*"[^"]*PerplexityBot[^"]*"$
          - ^<ip>.*"[^"]*Timpibot[^"]*"$
          - ^<ip>.*"[^"]*Webzio-Extended[^"]*"$
          - ^<ip>.*"[^"]*YouBot[^"]*"$
          - ^<ip>.*"[^"]*omgili[^"]*"$
      slskd-failedLogin:
        actions:
          ban:
            cmd:
              - sleep
              - 0.01
          unban:
            after: 4h
            cmd:
              - sleep
              - 0.01
        regex:
          - ^<ip> .* "POST /slskd/api/v0/session HTTP/..." 401 [0-9]+ .https://ppom.me
          - ^<ip> .* "POST /kiosque/api/v0/session HTTP/..." 401 [0-9]+ .https://babos.land
        retry: 3
        retryperiod: 1h
      suspectRequests:
        actions:
          ban:
            cmd:
              - sleep
              - 0.01
          unban:
            after: 4h
            cmd:
              - sleep
              - 0.01
        regex:
          - ^<ip> .*"GET /(?:[^/" ]*/)*wp-login\.php
          - ^<ip> .*"GET /(?:[^/" ]*/)*wp-includes
          - '^<ip> .*"GET /(?:[^/" ]*/)*\.env '
          - '^<ip> .*"GET /(?:[^/" ]*/)*config\.json '
          - '^<ip> .*"GET /(?:[^/" ]*/)*info\.php '
          - '^<ip> .*"GET /(?:[^/" ]*/)*owa/auth/logon.aspx '
          - '^<ip> .*"GET /(?:[^/" ]*/)*auth.html '
          - '^<ip> .*"GET /(?:[^/" ]*/)*auth1.html '
          - '^<ip> .*"GET /(?:[^/" ]*/)*password.txt '
          - '^<ip> .*"GET /(?:[^/" ]*/)*passwords.txt '
          - '^<ip> .*"GET /(?:[^/" ]*/)*dns-query '
          - '^<ip> .*"GET /(?:[^/" ]*/)*\.git/ '
(deleted bench configuration, 86 lines)
@@ -1,86 +0,0 @@
---
# This configuration permits testing reaction's performance
# under a very high load
#
# It keeps regexes super simple, to avoid benchmarking the `regex` crate,
# and benchmark reaction's internals instead.
concurrency: 32

plugins:
  - path: "/home/ppom/prg/reaction/target/release/reaction-plugin-virtual"

patterns:
  num:
    regex: '[0-9]{3}'
  ip:
    regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
    ignore:
      - 1.0.0.1

streams:
  virtual:
    type: virtual
    filters:
      find0:
        regex:
          - '^<num>$'
        actions:
          damn:
            cmd: [ 'sleep', '0.0<num>' ]
          undamn:
            cmd: [ 'sleep', '0.0<num>' ]
            after: 1m
            onexit: false
  tailDown1:
    cmd: [ 'sh', '-c', 'sleep 2; seq 1001 | while read i; do echo found $i; done' ]
    filters:
      find1:
        regex:
          - '^found <num>'
        retry: 9
        retryperiod: 6m
        actions:
          virtual:
            type: virtual
            options:
              send: '<num>'
              to: virtual
  tailDown2:
    cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
    filters:
      find2:
        regex:
          - '^found <num>'
        retry: 480
        retryperiod: 6m
        actions:
          virtual:
            type: virtual
            options:
              send: '<num>'
              to: virtual
  tailDown3:
    cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
    filters:
      find3:
        regex:
          - '^found <num>'
        retry: 480
        retryperiod: 6m
        actions:
          virtual:
            type: virtual
            options:
              send: '<num>'
              to: virtual
      find4:
        regex:
          - '^trouvé <num>'
        retry: 480
        retryperiod: 6m
        actions:
          virtual:
            type: virtual
            options:
              send: '<num>'
              to: virtual
|
|
@@ -1,74 +0,0 @@
---
# This configuration tests reaction's performance
# under a very high load
#
# It keeps regexes super simple, to avoid benchmarking the `regex` crate,
# and benchmark reaction's internals instead.
concurrency: 32

patterns:
  num:
    regex: '[0-9]{3}'
  ip:
    regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
    ignore:
      - 1.0.0.1

streams:
  tailDown1:
    cmd: [ 'sh', '-c', 'sleep 2; seq 1001 | while read i; do echo found $i; done' ]
    filters:
      find1:
        regex:
          - '^found <num>'
        retry: 9
        retryperiod: 6m
        actions:
          damn:
            cmd: [ 'sleep', '0.0<num>' ]
          undamn:
            cmd: [ 'sleep', '0.0<num>' ]
            after: 1m
            onexit: false
  tailDown2:
    cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
    filters:
      find2:
        regex:
          - '^found <num>'
        retry: 480
        retryperiod: 6m
        actions:
          damn:
            cmd: [ 'sleep', '0.0<num>' ]
          undamn:
            cmd: [ 'sleep', '0.0<num>' ]
            after: 1m
            onexit: false
  tailDown3:
    cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
    filters:
      find3:
        regex:
          - '^found <num>'
        retry: 480
        retryperiod: 6m
        actions:
          damn:
            cmd: [ 'sleep', '0.0<num>' ]
          undamn:
            cmd: [ 'sleep', '0.0<num>' ]
            after: 1m
            onexit: false
      find4:
        regex:
          - '^trouvé <num>'
        retry: 480
        retryperiod: 6m
        actions:
          damn:
            cmd: [ 'sleep', '0.0<num>' ]
          undamn:
            cmd: [ 'sleep', '0.0<num>' ]
            after: 1m
            onexit: false
19 build.rs
@@ -1,6 +1,8 @@
use std::{
    env::var_os,
    io::{self, ErrorKind},
    path::Path,
    process,
};

use clap_complete::shells;

@@ -8,10 +10,25 @@ use clap_complete::shells;
// SubCommand defined here
include!("src/cli.rs");

fn compile_helper(name: &str, out_dir: &Path) -> io::Result<()> {
    process::Command::new("gcc")
        .args([
            &format!("helpers_c/{name}.c"),
            "-o",
            out_dir.join(name).to_str().expect("could not join path"),
        ])
        .spawn()?;
    Ok(())
}

fn main() -> io::Result<()> {
    if var_os("PROFILE").ok_or(ErrorKind::NotFound)? == "release" {
        let out_dir = PathBuf::from(var_os("OUT_DIR").ok_or(ErrorKind::NotFound)?).join("../../..");

        // Compile C helpers
        compile_helper("ip46tables", &out_dir)?;
        compile_helper("nft46", &out_dir)?;

        // Build CLI
        let cli = clap::Command::new("reaction");
        let cli = SubCommand::augment_subcommands(cli);

@@ -34,6 +51,8 @@ See usage examples, service configurations and good practices on the wiki: https

    println!("cargo::rerun-if-changed=build.rs");
    println!("cargo::rerun-if-changed=src/cli.rs");
    println!("cargo::rerun-if-changed=helpers_c/ip46tables.c");
    println!("cargo::rerun-if-changed=helpers_c/nft46.c");

    Ok(())
}
@@ -1,8 +0,0 @@
# Configuration

Here reside two equivalent configurations, one in YAML and one in JSONnet.

Those serve as a configuration reference for now, pending a more complete reference in the wiki.

Please take a look at the [wiki](https://reaction.ppom.me) for security implications of using reaction,
FAQ, JSONnet tips, and multiple examples of filters and actions.
101 config/activitywatch.jsonnet Normal file
@@ -0,0 +1,101 @@
// Those strings will be substituted in each shell() call
local substitutions = [
  ['OUTFILE', '"$HOME/.local/share/watch/logs-$(date +%F)"'],
  ['DATE', '"$(date "+%F %T")"'],
];

// Substitute each of substitutions' items in string
local sub(str) = std.foldl(
  (function(changedstr, kv) std.strReplace(changedstr, kv[0], kv[1])),
  substitutions,
  str
);
local shell(prg) = [
  'sh',
  '-c',
  sub(prg),
];

local log(line) = shell('echo DATE ' + std.strReplace(line, '\n', ' ') + '>> OUTFILE');

{
  start: [
    shell('mkdir -p "$(dirname OUTFILE)"'),
    log('start'),
  ],

  stop: [
    log('stop'),
  ],

  patterns: {
    all: { regex: '.*' },
  },

  streams: {
    // Be notified about each window focus change
    // FIXME DOESN'T WORK
    sway: {
      cmd: shell(|||
        swaymsg -rm -t subscribe "['window']" | jq -r 'select(.change == "focus") | .container | if has("app_id") and .app_id != null then .app_id else .window_properties.class end'
      |||),
      filters: {
        send: {
          regex: ['^<all>$'],
          actions: {
            send: { cmd: log('focus <all>') },
          },
        },
      },
    },

    // Be notified when user is away
    swayidle: {
      // FIXME echo stop and start instead?
      cmd: ['swayidle', 'timeout', '30', 'echo sleep', 'resume', 'echo resume'],
      filters: {
        send: {
          regex: ['^<all>$'],
          actions: {
            send: { cmd: log('<all>') },
          },
        },
      },
    },

    // Be notified about tmux activity
    // Limitation: can't handle multiple concurrently attached sessions
    // tmux: {
    //   cmd: shell(|||
    //     LAST_TIME="0"
    //     LAST_ACTIVITY=""
    //     while true;
    //     do
    //       NEW_TIME=$(tmux display -p '#{session_activity}')
    //       if [ -n "$NEW_TIME" ] && [ "$NEW_TIME" -gt "$LAST_TIME" ]
    //       then
    //         LAST_TIME="$NEW_TIME"
    //         NEW_ACTIVITY="$(tmux display -p '#{pane_current_command} #{pane_current_path}')"
    //         if [ -n "$NEW_ACTIVITY" ] && [ "$NEW_ACTIVITY" != "$LAST_ACTIVITY" ]
    //         then
    //           LAST_ACTIVITY="$NEW_ACTIVITY"
    //           echo "tmux $NEW_ACTIVITY"
    //         fi
    //       fi
    //       sleep 10
    //     done
    //   |||),
    //   filters: {
    //     send: {
    //       regex: ['^tmux <all>$'],
    //       actions: {
    //         send: { cmd: log('tmux <all>') },
    //       },
    //     },
    //   },
    // },

    // Be notified about firefox activity
    // TODO
  },
}
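The `sub` helper in this config folds a list of `[from, to]` pairs over a string with `std.foldl` and `std.strReplace`. The same idea in Python, for readers unfamiliar with JSONnet's fold (a sketch mirroring the config, not part of it):

```python
from functools import reduce

# Same pairs as the JSONnet `substitutions` local
substitutions = [
    ("OUTFILE", '"$HOME/.local/share/watch/logs-$(date +%F)"'),
    ("DATE", '"$(date "+%F %T")"'),
]

def sub(s):
    # Apply each (old, new) pair in order, like std.foldl over std.strReplace
    return reduce(lambda acc, kv: acc.replace(kv[0], kv[1]), substitutions, s)
```

Note the fold applies the pairs strictly in list order, so a later pair could rewrite text produced by an earlier one; here the two placeholders do not overlap.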
@@ -7,106 +7,61 @@
// strongly encouraged to take a look at the full documentation: https://reaction.ppom.me

// JSONnet functions
local ipBan(cmd) = [cmd, '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'];
local ipUnban(cmd) = [cmd, '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'];
local iptables(args) = ['ip46tables', '-w'] + args;
// ip46tables is a minimal C program (only POSIX dependencies) present in a
// subdirectory of this repo.
// it handles both ipv4/iptables and ipv6/ip6tables commands

// See meaning and usage of this function around L180
// See meaning and usage of this function around L106
local banFor(time) = {
  ban4: {
    cmd: ipBan('iptables'),
    ipv4only: true,
  ban: {
    cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
  },
  ban6: {
    cmd: ipBan('ip6tables'),
    ipv6only: true,
  },
  unban4: {
    cmd: ipUnban('iptables'),
  unban: {
    after: time,
    ipv4only: true,
  },
  unban6: {
    cmd: ipUnban('ip6tables'),
    after: time,
    ipv6only: true,
    cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
  },
};

// See usage of this function around L90
// Generates a command for iptables and ip46tables
local ip46tables(arguments) = [
  ['iptables', '-w'] + arguments,
  ['ip6tables', '-w'] + arguments,
];

{
  // patterns are substituted in regexes.
  // when a filter performs an action, it replaces the found pattern
  patterns: {

    name: {
      // reaction regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
      // common patterns have a 'regex' field
      regex: '[a-z]+',
      // patterns can ignore specific strings
      ignore: ['cecilia'],
      // patterns can also be ignored based on regexes, it will try to match the whole string detected by the pattern
      ignoreregex: [
        // ignore names starting with 'jo'
        'jo.*',
      ],
    },

    ip: {
      // patterns can have a special 'ip' type that matches both ipv4 and ipv6
      // or 'ipv4' or 'ipv6' to match only that ip version
      type: 'ip',
      // reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
      // jsonnet's @'string' is for verbatim strings
      // simple version: regex: @'(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
      regex: @'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))',
      ignore: ['127.0.0.1', '::1'],
      // they can also ignore whole CIDR ranges of ip
      ignorecidr: ['10.0.0.0/8'],
      // last but not least, patterns of type ip, ipv4, ipv6 can also group their matched ips by mask
      // ipv4mask: 30
      // this means that ipv6 matches will be converted to their network part.
      ipv6mask: 64,
      // for example, "2001:db8:85a3:9de5::8a2e:370:7334" will be converted to "2001:db8:85a3:9de5::/64".
      // Patterns can be ignored based on regexes, it will try to match the whole string detected by the pattern
      // ignoreregex: [@'10\.0\.[0-9]{1,3}\.[0-9]{1,3}'],
    },

    // ipv4: {
    //   type: 'ipv4',
    //   ignore: ...
    //   ipv4mask: ...
    // },

  },

  // where the state (database) must be read
  // defaults to . which means reaction's working directory.
  // The systemd service starts reaction in /var/lib/reaction.
  state_directory: '.',

  // if set to a positive number → max number of concurrent actions
  // if set to a negative number → no limit
  // if not specified or set to 0 → defaults to the number of CPUs on the system
  concurrency: 0,

  // Those commands will be executed in order at start, before everything else
  start:
  start: [
    // Create an iptables chain for reaction
    ip46tables(['-N', 'reaction']) +
    iptables(['-N', 'reaction']),
    // Insert this chain as the first item of the INPUT & FORWARD chains (for incoming connections)
    ip46tables(['-I', 'INPUT', '-p', 'all', '-j', 'reaction']) +
    ip46tables(['-I', 'FORWARD', '-p', 'all', '-j', 'reaction']),
    iptables(['-I', 'INPUT', '-p', 'all', '-j', 'reaction']),
    iptables(['-I', 'FORWARD', '-p', 'all', '-j', 'reaction']),
  ],

  // Those commands will be executed in order at stop, after everything else
  stop:
  stop: [
    // Remove the chain from the INPUT & FORWARD chains
    ip46tables(['-D', 'INPUT', '-p', 'all', '-j', 'reaction']) +
    ip46tables(['-D', 'FORWARD', '-p', 'all', '-j', 'reaction']) +
    iptables(['-D', 'INPUT', '-p', 'all', '-j', 'reaction']),
    iptables(['-D', 'FORWARD', '-p', 'all', '-j', 'reaction']),
    // Empty the chain
    ip46tables(['-F', 'reaction']) +
    iptables(['-F', 'reaction']),
    // Delete the chain
    ip46tables(['-X', 'reaction']),

    iptables(['-X', 'reaction']),
  ],

  // streams are commands
  // they are run and their output is captured

@@ -118,85 +73,41 @@ local ip46tables(arguments) = [
      // note that if the command is not in environment's `PATH`
      // its full path must be given.
      cmd: ['journalctl', '-n0', '-fu', 'sshd.service'],

      // filters run actions when they match regexes on a stream
      filters: {
        // filters have a user-defined name
        failedlogin: {
          // reaction's regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
          // reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
          regex: [
            // <ip> is predefined in the patterns section
            // ip's regex is inserted in the following regex
            @'authentication failure;.*rhost=<ip>',
            @'Failed password for .* from <ip>',
            @'Invalid user .* from <ip>',
            @'Connection (reset|closed) by (authenticating|invalid) user .* <ip>',
            @'banner exchange: Connection from <ip> port [0-9]*: invalid format',
          ],

          // if retry and retryperiod are defined,
          // the actions will only take place if a same pattern is
          // found `retry` times in a `retryperiod` interval
          retry: 3,
          // format is defined as follows: <integer> <unit>
          // - whitespace between the integer and unit is optional
          // - integer must be positive (>= 0)
          // - unit can be one of:
          //   - ms / millis / millisecond / milliseconds
          //   - s / sec / secs / second / seconds
          //   - m / min / mins / minute / minutes
          //   - h / hour / hours
          //   - d / day / days
          // format is defined here: https://pkg.go.dev/time#ParseDuration
          retryperiod: '6h',

          // duplicate specifies how to handle matches after an action has already been taken.
          // 3 options are possible:
          // - extend (default): update the pending actions' time, so they run later
          // - ignore: don't do anything, ignore the match
          // - rerun: run the actions again. so we may have the same pending actions multiple times.
          //   (this was the default before 2.2.0)
          // duplicate: extend

          // actions are run by the filter when regexes are matched
          actions: {
            // actions have a user-defined name
            ban4: {
              cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
              // this optional field runs the action only when a pattern of type ip contains an ipv4
              ipv4only: true,
            ban: {
              cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
            },

            ban6: {
              cmd: ['ip6tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
              // this optional field runs the action only when a pattern of type ip contains an ipv6
              ipv6only: true,
            },

            unban4: {
              cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
            unban: {
              cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
              // if after is defined, the action will not take place immediately, but after a specified duration
              // same format as retryperiod
              after: '2 days',
              after: '48h',
              // let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
              // if you want reaction to run those pending commands before exiting, you can set this:
              // onexit: true,
              // (defaults to false)
              // here it is not useful because we will flush and delete the chain containing the bans anyway
              // (with the stop commands)
              ipv4only: true,
            },

            unban6: {
              cmd: ['ip6tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
              after: '2 days',
              ipv6only: true,
            },

            mail: {
              cmd: ['sendmail', '...', '<ip>'],
              // some commands, such as alerting commands, are "oneshot".
              // this means they'll be run only once, and won't be executed again when reaction is restarted
              oneshot: true,
            },
          },
          // or use the banFor function defined at the beginning!
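The comments above describe ip46tables as a minimal C helper that forwards one rule to iptables and/or ip6tables according to the address family of the `-s` argument. A rough Python sketch of that dispatch; the real helper is a small C program, so the details here are simplifying assumptions:

```python
def ip46_dispatch(args):
    """Pick iptables/ip6tables command lines for a rule, based on the -s address family."""
    source = None
    for i, arg in enumerate(args):
        if arg == "-s" and i + 1 < len(args):
            source = args[i + 1]
    if source is None:
        # No source address involved (e.g. creating a chain): apply to both stacks
        return [["iptables"] + args, ["ip6tables"] + args]
    # IPv6 addresses contain ':', IPv4 addresses do not
    binary = "ip6tables" if ":" in source else "iptables"
    return [[binary] + args]
```

This is why a single `banFor` action can drop both `1.2.3.4` and `2001:db8::1` without the config distinguishing the two families.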
@@ -10,18 +10,11 @@
# using YAML anchors `&name` and pointers `*name`
# definitions are not read by reaction
definitions:
  - &ip4tablesban [ 'iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  - &ip6tablesban [ 'ip6tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  - &ip4tablesunban [ 'iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  - &ip6tablesunban [ 'ip6tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  - &iptablesban [ 'ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  - &iptablesunban [ 'ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
# ip46tables is a minimal C program (only POSIX dependencies) present as a subdirectory.
# it handles both ipv4/iptables and ipv6/ip6tables commands

# where the state (database) must be read
# defaults to . which means reaction's working directory.
# The systemd service starts reaction in /var/lib/reaction.
state_directory: .

# if set to a positive number → max number of concurrent actions
# if set to a negative number → no limit
# if not specified or set to 0 → defaults to the number of CPUs on the system

@@ -30,57 +23,29 @@ concurrency: 0
# patterns are substituted in regexes.
# when a filter performs an action, it replaces the found pattern
patterns:
  name:
    # reaction regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
    # common patterns have a 'regex' field
    regex: '[a-z]+'
    # patterns can ignore specific strings
    ignore:
      - 'cecilia'
    # patterns can also be ignored based on regexes, it will try to match the whole string detected by the pattern
    ignoreregex:
      # ignore names starting with 'jo'
      - 'jo.*'

  ip:
    # patterns can have a special 'ip' type that matches both ipv4 and ipv6
    # or 'ipv4' or 'ipv6' to match only that ip version
    type: ip
    # reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
    # simple version: regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
    regex: '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))'
    ignore:
      - 127.0.0.1
      - ::1
    # they can also ignore whole CIDR ranges of ip
    ignorecidr:
      - 10.0.0.0/8
    # last but not least, patterns of type ip, ipv4, ipv6 can also group their matched ips by mask
    # ipv4mask: 30
    # this means that ipv6 matches will be converted to their network part.
    ipv6mask: 64
    # for example, "2001:db8:85a3:9de5::8a2e:370:7334" will be converted to "2001:db8:85a3:9de5::/64".

  # ipv4:
  #   type: ipv4
  #   ignore: ...
    # Patterns can be ignored based on regexes, it will try to match the whole string detected by the pattern
    # ignoreregex:
    #   - '10\.0\.[0-9]{1,3}\.[0-9]{1,3}'

# Those commands will be executed in order at start, before everything else
start:
  - [ 'iptables', '-w', '-N', 'reaction' ]
  - [ 'ip6tables', '-w', '-N', 'reaction' ]
  - [ 'iptables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip6tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'iptables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip6tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-N', 'reaction' ]
  - [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]

# Those commands will be executed in order at stop, after everything else
stop:
  - [ 'iptables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip6tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'iptables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip6tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'iptables', '-w', '-F', 'reaction' ]
  - [ 'ip6tables', '-w', '-F', 'reaction' ]
  - [ 'iptables', '-w', '-X', 'reaction' ]
  - [ 'ip6tables', '-w', '-X', 'reaction' ]
  - [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-F', 'reaction' ]
  - [ 'ip46tables', '-w', '-X', 'reaction' ]

# streams are commands
# they are run and their output is captured

@@ -92,81 +57,40 @@ streams:
    # note that if the command is not in environment's `PATH`
    # its full path must be given.
    cmd: [ 'journalctl', '-n0', '-fu', 'sshd.service' ]

    # filters run actions when they match regexes on a stream
    filters:
      # filters have a user-defined name
      failedlogin:
        # reaction's regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
        # reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
        regex:
          # <ip> is predefined in the patterns section
          # ip's regex is inserted in the following regex
          - 'authentication failure;.*rhost=<ip>'
          - 'Failed password for .* from <ip>'
          - 'Invalid user .* from <ip>'
          - 'Connection (reset|closed) by (authenticating|invalid) user .* <ip>'
          - 'banner exchange: Connection from <ip> port [0-9]*: invalid format'

        # if retry and retryperiod are defined,
        # the actions will only take place if a same pattern is
        # found `retry` times in a `retryperiod` interval
        retry: 3
        # format is defined as follows: <integer> <unit>
        # - whitespace between the integer and unit is optional
        # - integer must be positive (>= 0)
        # - unit can be one of:
        #   - ms / millis / millisecond / milliseconds
        #   - s / sec / secs / second / seconds
        #   - m / min / mins / minute / minutes
        #   - h / hour / hours
        #   - d / day / days
        # format is defined here: https://pkg.go.dev/time#ParseDuration
        retryperiod: 6h

        # duplicate specifies how to handle matches after an action has already been taken.
        # 3 options are possible:
        # - extend (default): update the pending actions' time, so they run later
        # - ignore: don't do anything, ignore the match
        # - rerun: run the actions again. so we may have the same pending actions multiple times.
        #   (this was the default before 2.2.0)
        # duplicate: extend

        # actions are run by the filter when regexes are matched
        actions:
          # actions have a user-defined name
          ban4:
          ban:
            # YAML substitutes *reference by the value anchored at &reference
            cmd: *ip4tablesban
            # this optional field runs the action only when a pattern of type ip contains an ipv4
            ipv4only: true

          ban6:
            cmd: *ip6tablesban
            # this optional field runs the action only when a pattern of type ip contains an ipv6
            ipv6only: true

          unban4:
            cmd: *ip4tablesunban
            cmd: *iptablesban
          unban:
            cmd: *iptablesunban
            # if after is defined, the action will not take place immediately, but after a specified duration
            # same format as retryperiod
            after: '2 days'
            after: 48h
            # let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
            # if you want reaction to run those pending commands before exiting, you can set this:
            # onexit: true
            # (defaults to false)
            # here it is not useful because we will flush and delete the chain containing the bans anyway
            # (with the stop commands)
            ipv4only: true

          unban6:
            cmd: *ip6tablesunban
            after: '2 days'
            ipv6only: true

          mail:
            cmd: ['sendmail', '...', '<ip>']
            # some commands, such as alerting commands, are "oneshot".
            # this means they'll be run only once, and won't be executed again when reaction is restarted
            oneshot: true

# persistence
# tldr; when an `after` action is set in a filter, such filter acts as a 'jail',
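The duration format documented in the comments above (`<integer> <unit>`, optional whitespace, units from `ms` to `days`) can be parsed in a few lines. This is an illustrative Python sketch of that grammar, not reaction's own parser:

```python
import re

# unit -> seconds, with the aliases from the documented list
UNITS = {}
for names, secs in [
    (("ms", "millis", "millisecond", "milliseconds"), 0.001),
    (("s", "sec", "secs", "second", "seconds"), 1),
    (("m", "min", "mins", "minute", "minutes"), 60),
    (("h", "hour", "hours"), 3600),
    (("d", "day", "days"), 86400),
]:
    for n in names:
        UNITS[n] = secs

def parse_duration(text):
    """Parse e.g. '6h', '2 days', '500 ms' into seconds."""
    m = re.fullmatch(r"([0-9]+)\s*([a-z]+)", text.strip())
    if not m or m.group(2) not in UNITS:
        raise ValueError(f"bad duration: {text!r}")
    return int(m.group(1)) * UNITS[m.group(2)]
```

So `retryperiod: 6h` and `after: '2 days'` both reduce to a plain number of seconds internally; `48h` and `2 days` are the same duration written two ways.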
@@ -1,9 +1,4 @@
---
# This configuration tests reaction's performance
# under a very high load
#
# It keeps regexes super simple, to avoid benchmarking the `regex` crate,
# and benchmark reaction's internals instead.
concurrency: 32

patterns:

@@ -18,7 +13,7 @@ streams:
  tailDown1:
    cmd: [ 'sh', '-c', 'sleep 2; seq 10001 | while read i; do echo found $i; done' ]
    filters:
      find1:
      find:
        regex:
          - '^found <num>'
        retry: 9

@@ -31,9 +26,9 @@ streams:
        after: 1m
        onexit: false
  tailDown2:
    cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; echo trouvé $i; done' ]
    cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; done' ]
    filters:
      find2:
      find:
        regex:
          - '^found <num>'
        retry: 480

@@ -46,9 +41,9 @@ streams:
        after: 1m
        onexit: false
  tailDown3:
    cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; echo trouvé $i; done' ]
    cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; done' ]
    filters:
      find3:
      find:
        regex:
          - '^found <num>'
        retry: 480

@@ -60,9 +55,12 @@ streams:
        cmd: [ 'sleep', '0.0<num>' ]
        after: 1m
        onexit: false
      find4:
  tailDown4:
    cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; done' ]
    filters:
      find:
        regex:
          - '^trouvé <num>'
          - '^found <num>'
        retry: 480
        retryperiod: 6m
        actions:
50 config/persistence.jsonnet Normal file
@@ -0,0 +1,50 @@
{
  patterns: {
    num: {
      regex: '[0-9]+',
    },
  },

  streams: {
    tailDown1: {
      cmd: ['sh', '-c', "echo 01 02 03 04 05 | tr ' ' '\n' | while read i; do sleep 0.5; echo found $i; done"],
      filters: {
        findIP1: {
          regex: ['^found <num>$'],
          retry: 1,
          retryperiod: '2m',
          actions: {
            damn: {
              cmd: ['echo', '<num>'],
            },
            undamn: {
              cmd: ['echo', 'undamn', '<num>'],
              after: '1m',
              onexit: true,
            },
          },
        },
      },
    },
    tailDown2: {
      cmd: ['sh', '-c', "echo 11 12 13 14 15 11 13 15 | tr ' ' '\n' | while read i; do sleep 0.3; echo found $i; done"],
      filters: {
        findIP2: {
          regex: ['^found <num>$'],
          retry: 2,
          retryperiod: '2m',
          actions: {
            damn: {
              cmd: ['echo', '<num>'],
            },
            undamn: {
              cmd: ['echo', 'undamn', '<num>'],
              after: '1m',
              onexit: true,
            },
          },
        },
      },
    },
  },
}
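The `after`/`onexit` pair in this test config describes pending actions: `undamn` is scheduled one minute after `damn`, and `onexit: true` asks that it still run if reaction exits first. A minimal Python sketch of that bookkeeping, with illustrative names rather than reaction's internals:

```python
class PendingActions:
    """Track delayed actions; on shutdown, flush only those marked onexit."""

    def __init__(self):
        self.pending = []  # (due_time, cmd, onexit)

    def schedule(self, now, after, cmd, onexit=False):
        self.pending.append((now + after, cmd, onexit))

    def run_due(self, now):
        """Return every action whose time has come, removing it from the queue."""
        due = [p for p in self.pending if p[0] <= now]
        self.pending = [p for p in self.pending if p[0] > now]
        return [cmd for _, cmd, _ in due]

    def shutdown(self):
        """Return only the onexit actions; the rest are simply dropped."""
        flushed = [cmd for _, cmd, onexit in self.pending if onexit]
        self.pending = []
        return flushed
```

With `onexit: false` (as in the benchmark configs), pending unbans are dropped on exit, which is fine there because the stop commands delete the whole chain anyway.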
@@ -7,7 +7,7 @@ Documentation=https://reaction.ppom.me

# See `man systemd.exec` and `man systemd.service` for most options below
[Service]
ExecStart=/usr/local/bin/reaction start -c /etc/reaction/
ExecStart=/usr/local/bin/reaction start -c /etc/reaction.jsonnet

# Ask systemd to create /var/lib/reaction (/var/lib/ is implicit)
StateDirectory=reaction

@@ -15,8 +15,6 @@ StateDirectory=reaction
RuntimeDirectory=reaction
# Start reaction in its state directory
WorkingDirectory=/var/lib/reaction
# Let reaction kill its child processes first
KillMode=mixed

[Install]
WantedBy=multi-user.target
160 config/server.jsonnet Normal file
@@ -0,0 +1,160 @@
// This is the extensive configuration used on a **real** server!

local banFor(time) = {
  ban: {
    cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
  },
  unban: {
    after: time,
    cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
  },
};

{
  patterns: {
    // IPs can be IPv4 or IPv6
    // ip46tables (C program also in this repo) handles running the right commands
    ip: {
      regex: @'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))',
      // Ignore all from 192.168.1.1 to 192.168.1.255
      ignore: std.makeArray(255, function(i) '192.168.1.' + (i + 1)),
    },
  },

  start: [
    ['ip46tables', '-w', '-N', 'reaction'],
    ['ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
  ],
  stop: [
    ['ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
    ['ip46tables', '-w', '-F', 'reaction'],
    ['ip46tables', '-w', '-X', 'reaction'],
  ],

  streams: {
    // Ban hosts failing to connect via ssh
    ssh: {
      cmd: ['journalctl', '-fn0', '-u', 'sshd.service'],
      filters: {
        failedlogin: {
          regex: [
            @'authentication failure;.*rhost=<ip>',
            @'Connection (reset|closed) by (authenticating|invalid) user .* <ip>',
            @'Failed password for .* from <ip>',
          ],
          retry: 3,
          retryperiod: '6h',
          actions: banFor('48h'),
        },
      },
    },

    // Ban hosts which knock on closed ports.
    // It needs this iptables chain to be used to drop packets:
    // ip46tables -N log-refuse
    // ip46tables -A log-refuse -p tcp --syn -j LOG --log-level info --log-prefix 'refused connection: '
    // ip46tables -A log-refuse -m pkttype ! --pkt-type unicast -j nixos-fw-refuse
    // ip46tables -A log-refuse -j DROP
    kernel: {
      cmd: ['journalctl', '-fn0', '-k'],
      filters: {
        portscan: {
          regex: ['refused connection: .*SRC=<ip>'],
          retry: 4,
          retryperiod: '1h',
          actions: banFor('720h'),
        },
      },
    },
    // Note: nextcloud and vaultwarden could also be filters on the nginx stream.
    // I used their own logs instead because there are fewer logs to parse than on the front webserver.

    // Ban hosts failing to connect to Nextcloud
    nextcloud: {
      cmd: ['journalctl', '-fn0', '-u', 'phpfpm-nextcloud.service'],
      filters: {
        failedLogin: {
          regex: [
            @'"remoteAddr":"<ip>".*"message":"Login failed:',
            @'"remoteAddr":"<ip>".*"message":"Trusted domain error.',
          ],
          retry: 3,
          retryperiod: '1h',
          actions: banFor('1h'),
        },
      },
    },

    // Ban hosts failing to connect to vaultwarden
    vaultwarden: {
      cmd: ['journalctl', '-fn0', '-u', 'vaultwarden.service'],
      filters: {
        failedlogin: {
          actions: banFor('2h'),
          regex: [@'Username or password is incorrect\. Try again\. IP: <ip>\. Username:'],
          retry: 3,
          retryperiod: '1h',
        },
      },
    },

    // Used with this nginx log configuration:
    // log_format withhost '$remote_addr - $remote_user [$time_local] $host "$request" $status $bytes_sent "$http_referer" "$http_user_agent"';
    // access_log /var/log/nginx/access.log withhost;
    nginx: {
      cmd: ['tail', '-n0', '-f', '/var/log/nginx/access.log'],
      filters: {
        // Ban hosts failing to connect to Directus
        directus: {
          regex: [
            @'^<ip> .* "POST /auth/login HTTP/..." 401 [0-9]+ .https://directusdomain',
          ],
          retry: 6,
          retryperiod: '4h',
          actions: banFor('4h'),
        },

        // Ban hosts presenting themselves as bots of ChatGPT
        gptbot: {
          regex: [@'^<ip>.*GPTBot/1.0'],
          actions: banFor('720h'),
        },

        // Ban hosts failing to connect to slskd
        slskd: {
          regex: [
            @'^<ip> .* "POST /api/v0/session HTTP/..." 401 [0-9]+ .https://slskddomain',
          ],
          retry: 3,
          retryperiod: '1h',
          actions: banFor('6h'),
        },

        // Ban suspect HTTP requests
        // Those are frequent malicious requests I got from bots.
        // Make sure you don't have honest use cases for those requests, or your clients may be banned for a month!
        suspectRequests: {
          regex: [
            // (?:[^/" ]*/)* is a "non-capturing group" regex that allows for subpath(s)
            // example: /code/.env should be matched as well as /.env
            //          ^^^^^
            @'^<ip>.*"GET /(?:[^/" ]*/)*\.env ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*info\.php ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*owa/auth/logon.aspx ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*auth.html ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*auth1.html ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*password.txt ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*passwords.txt ',
            @'^<ip>.*"GET /(?:[^/" ]*/)*dns-query ',
            // Do not include this if you have a Wordpress website ;)
            @'^<ip>.*"GET /(?:[^/" ]*/)*wp-login\.php',
            @'^<ip>.*"GET /(?:[^/" ]*/)*wp-includes',
            // Do not include this if a client must retrieve a config.json file ;)
            @'^<ip>.*"GET /(?:[^/" ]*/)*config\.json ',
          ],
          actions: banFor('720h'),
        },
      },
    },
  },
}
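In the config above, reaction substitutes the `ip` pattern for each `<ip>` occurrence in a filter regex and captures the match. A rough Python illustration of that substitution (the full IPv4+IPv6 pattern is abbreviated here to its IPv4 half purely for brevity; reaction performs this internally):

```python
import re

# Assumption: simplified IPv4-only stand-in for the full <ip> pattern above.
IP = r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}"

# One filter regex from the ssh stream, with <ip> replaced by the pattern.
filter_regex = re.compile("Failed password for .* from (" + IP + ")")

line = "Oct 10 12:00:00 host sshd[1]: Failed password for root from 203.0.113.7 port 22 ssh2"
match = filter_regex.search(line)
print(match.group(1))  # -> 203.0.113.7
```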
@@ -22,7 +22,7 @@

  streams: {
    s1: {
      cmd: ['sh', '-c', 'seq 20 | while read i; do echo found $((i % 5)); sleep 1; done'],
      cmd: ['sh', '-c', "seq 20 | tr ' ' '\n' | while read i; do echo found $((i % 5)); sleep 1; done"],
      filters: {
        f1: {
          regex: [

@@ -41,23 +41,6 @@
            },
          },
        },
        f2: {
          regex: [
            "^can't found <num>$",
          ],
          retry: 2,
          retryperiod: '60s',
          actions: {
            damn: {
              cmd: ['notify-send', 'you should not see that', 'ban <num>'],
            },
            undamn: {
              cmd: ['notify-send', 'you should not see that', 'unban <num>'],
              after: '3s',
              onexit: true,
            },
          },
        },
      },
    },
  },
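The `s1` test stream above emits `found <i mod 5>` once per second, so over its 20 iterations each remainder value appears four times — more than enough to trip a small retry threshold. A Python sketch of the emitted lines (illustration only, not part of the test suite):

```python
from collections import Counter

# What the s1 test stream prints over its 20 iterations: "found <i mod 5>".
lines = [f"found {i % 5}" for i in range(1, 21)]

counts = Counter(lines)
print(counts["found 0"])  # -> 4: every remainder 0..4 appears four times
```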
@@ -1,23 +0,0 @@
[package]
name = "treedb"
version = "1.0.0"
edition = "2024"

[features]
test = []

[dependencies]
chrono.workspace = true
futures.workspace = true
serde.workspace = true
serde_json.workspace = true
thiserror.workspace = true
tokio.workspace = true
tokio.features = ["rt-multi-thread", "macros", "io-util", "time", "fs", "tracing"]
tokio-util.workspace = true
tokio-util.features = ["rt"]
tracing.workspace = true

[dev-dependencies]
tempfile.workspace = true
@@ -1,188 +0,0 @@
use std::{
    collections::{BTreeMap, BTreeSet},
    time::Duration,
};

use chrono::DateTime;
use serde_json::Value;

use crate::time::Time;

/// Tries to convert a [`Value`] into a [`String`]
pub fn to_string(val: &Value) -> Result<String, String> {
    Ok(val.as_str().ok_or("not a string")?.to_owned())
}

/// Tries to convert a [`Value`] into a [`u64`]
pub fn to_u64(val: &Value) -> Result<u64, String> {
    val.as_u64().ok_or("not a u64".into())
}

/// Old way of converting time: with chrono's serialization
fn old_string_to_time(val: &str) -> Result<Time, String> {
    let time = DateTime::parse_from_rfc3339(val).map_err(|err| err.to_string())?;
    Ok(Duration::new(time.timestamp() as u64, time.timestamp_subsec_nanos()).into())
}

/// New way of converting time: with our own implem
fn new_string_to_time(val: &str) -> Result<Time, String> {
    let nanos: u128 = val.parse().map_err(|_| "not a number")?;
    Ok(Duration::new(
        (nanos / 1_000_000_000) as u64,
        (nanos % 1_000_000_000) as u32,
    )
    .into())
}

/// Tries to convert a [`&str`] into a [`Time`]
fn string_to_time(val: &str) -> Result<Time, String> {
    match new_string_to_time(val) {
        Err(err) => match old_string_to_time(val) {
            Err(_) => Err(err),
            ok => ok,
        },
        ok => ok,
    }
}

/// Tries to convert a [`Value`] into a [`Time`]
pub fn to_time(val: &Value) -> Result<Time, String> {
    string_to_time(val.as_str().ok_or("not a string number")?)
}

/// Tries to convert a [`Value`] into a [`Vec<String>`]
pub fn to_match(val: &Value) -> Result<Vec<String>, String> {
    val.as_array()
        .ok_or("not an array")?
        .iter()
        .map(to_string)
        .collect()
}

/// Tries to convert a [`Value`] into a [`BTreeSet<Time>`]
pub fn to_timeset(val: &Value) -> Result<BTreeSet<Time>, String> {
    val.as_array()
        .ok_or("not an array")?
        .iter()
        .map(to_time)
        .collect()
}

/// Tries to convert a [`Value`] into a [`BTreeMap<Time, u64>`]
pub fn to_timemap(val: &Value) -> Result<BTreeMap<Time, u64>, String> {
    val.as_object()
        .ok_or("not a map")?
        .iter()
        .map(|(key, value)| Ok((string_to_time(key)?, to_u64(value)?)))
        .collect()
}

#[cfg(test)]
mod tests {
    use std::collections::BTreeMap;

    use super::*;

    #[test]
    fn test_to_string() {
        assert_eq!(to_string(&("".into())), Ok("".into()));
        assert_eq!(to_string(&("ploup".into())), Ok("ploup".into()));

        assert!(to_string(&(["ploup"].into())).is_err());
        assert!(to_string(&(true.into())).is_err());
        assert!(to_string(&(8.into())).is_err());
        assert!(to_string(&(None::<String>.into())).is_err());
    }

    #[test]
    fn test_to_u64() {
        assert_eq!(to_u64(&(0.into())), Ok(0));
        assert_eq!(to_u64(&(8.into())), Ok(8));
        assert_eq!(to_u64(&(u64::MAX.into())), Ok(u64::MAX));

        assert!(to_u64(&("ploup".into())).is_err());
        assert!(to_u64(&(["ploup"].into())).is_err());
        assert!(to_u64(&(true.into())).is_err());
        assert!(to_u64(&(None::<String>.into())).is_err());
    }

    #[test]
    fn test_to_time() {
        assert_eq!(to_time(&"123456".into()).unwrap(), Time::from_nanos(123456),);
        assert!(to_time(&(u64::MAX.into())).is_err());

        assert!(to_time(&(["ploup"].into())).is_err());
        assert!(to_time(&(true.into())).is_err());
        // assert!(to_time(&(12345.into())).is_err());
        assert!(to_time(&(None::<String>.into())).is_err());
    }

    #[test]
    fn test_to_match() {
        assert_eq!(to_match(&([""].into())), Ok(vec!["".into()]));
        assert_eq!(
            to_match(&(["plip", "ploup"].into())),
            Ok(vec!["plip".into(), "ploup".into()])
        );
        assert!(to_match(&[Value::from("plip"), Value::from(10)].into()).is_err());

        assert!(to_match(&("ploup".into())).is_err());
        assert!(to_match(&(true.into())).is_err());
        assert!(to_match(&(8.into())).is_err());
        assert!(to_match(&(None::<String>.into())).is_err());
    }

    #[test]
    fn test_to_timeset() {
        assert_eq!(
            to_timeset(&Value::from([Value::from("123456789")])),
            Ok(BTreeSet::from([Time::from_nanos(123456789)]))
        );
        assert_eq!(
            to_timeset(&Value::from([Value::from("8"), Value::from("123456")])),
            Ok(BTreeSet::from([
                Time::from_nanos(8),
                Time::from_nanos(123456),
            ]))
        );
        assert!(to_timeset(&[Value::from("plip"), Value::from(10)].into()).is_err());

        assert!(to_timeset(&([""].into())).is_err());
        assert!(to_timeset(&(["ploup"].into())).is_err());
        assert!(to_timeset(&(true.into())).is_err());
        assert!(to_timeset(&(8.into())).is_err());
        assert!(to_timeset(&(None::<String>.into())).is_err());
    }

    #[test]
    fn test_to_timemap() {
        let time1 = 1234567;
        let time1_t = Time::from_nanos(time1);
        let time2 = 123456789;
        let time2_t = Time::from_nanos(time2);

        assert_eq!(
            to_timemap(&Value::from_iter([(time2.to_string(), 1)])),
            Ok(BTreeMap::from([(time2_t, 1)]))
        );
        assert_eq!(
            to_timemap(&Value::from_iter([
                (time1.to_string(), 4),
                (time2.to_string(), 0)
            ])),
            Ok(BTreeMap::from([(time1_t.into(), 4), (time2_t.into(), 0)]))
        );

        assert!(to_timemap(&Value::from_iter([("1-1", time2)])).is_err());
        // assert!(to_timemap(&Value::from_iter([(time2.to_string(), time2)])).is_err());
        assert!(to_timemap(&Value::from_iter([(time2)])).is_err());
        assert!(to_timemap(&Value::from_iter([(1)])).is_err());

        assert!(to_timemap(&(["1970-01-01T01:20:34.567+01:00"].into())).is_err());
        assert!(to_timemap(&([""].into())).is_err());
        assert!(to_timemap(&(["ploup"].into())).is_err());
        assert!(to_timemap(&(true.into())).is_err());
        assert!(to_timemap(&(8.into())).is_err());
        assert!(to_timemap(&(None::<String>.into())).is_err());
    }
}
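The two-format fallback in `string_to_time` above (try the new integer-nanoseconds encoding first, then fall back to chrono's old RFC 3339 serialization) can be mirrored in Python; this is a hypothetical sketch for illustration, not part of the crate:

```python
from datetime import datetime

def string_to_time(val: str) -> float:
    """Return seconds since the epoch, accepting either encoding."""
    try:
        # New format: nanoseconds since the epoch, as a decimal string.
        return int(val) / 1_000_000_000
    except ValueError:
        # Old format: chrono's RFC 3339 serialization.
        return datetime.fromisoformat(val).timestamp()

print(string_to_time("2000000000"))                 # -> 2.0
print(string_to_time("1970-01-01T00:00:02+00:00"))  # -> 2.0
```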
@@ -1,746 +0,0 @@
/// This module implements an asynchronously persisted BTreeMap named [`Tree`],
/// via a unique "Write Behind Log" (in opposition to WAL, "Write Ahead Log").
///
/// This permits RAM-speed read & write operations, while eventually
/// persisting operations. The log is flushed to kernel every 2 seconds.
///
/// Operations stored in the log have a timeout configured at the Tree level.
/// All operations are then stored for this lifetime.
///
/// Data is stored as JSONL. Each line stores a (key, value), plus the tree id,
/// and an expiry timestamp in milliseconds.
use std::{
    collections::{BTreeMap, HashMap},
    io::{Error as IoError, ErrorKind},
    ops::Deref,
    path::{Path, PathBuf},
    time::Duration,
};

use serde::{Deserialize, Serialize, de::DeserializeOwned};
use serde_json::Value;
use tokio::{
    fs::{File, rename},
    sync::{mpsc, oneshot},
    time::{MissedTickBehavior, interval},
};
use tokio_util::{sync::CancellationToken, task::task_tracker::TaskTrackerToken};

// Database

use raw::{ReadDB, WriteDB};
use time::{Time, now};

pub mod helpers;
mod raw;
pub mod time;

/// Any order the Database can receive
enum Order {
    Log(Entry),
    OpenTree(OpenTree),
}

/// Entry sent from [`Tree`] to [`Database`]
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
struct Entry {
    pub tree: String,
    pub key: Value,
    pub value: Option<Value>,
    pub expiry: Time,
}

/// Order to receive a tree from previous Database
pub struct OpenTree {
    name: String,
    resp: oneshot::Sender<Option<LoadedTree>>,
}

type LoadedTree = HashMap<Value, Value>;
pub type LoadedDB = HashMap<String, LoadedTree>;

const DB_NAME: &str = "reaction.db";
const DB_NEW_NAME: &str = "reaction.new.db";

fn path_of(state_directory: &Path, name: &str) -> PathBuf {
    if state_directory.as_os_str().is_empty() {
        name.into()
    } else {
        PathBuf::from(state_directory).join(name)
    }
}

pub type DatabaseErrorReceiver = oneshot::Receiver<Result<(), String>>;

/// Public-facing API for a treedb Database
pub struct Database {
    entry_tx: Option<mpsc::Sender<Order>>,
    error_rx: DatabaseErrorReceiver,
}

impl Database {
    /// Open a new Database, whose task will start in the background.
    /// You'll have to:
    /// - drop all [`Tree`]s,
    /// - call [`Self::quit`],
    ///
    /// to have the Database properly quit.
    ///
    /// You can wait for [`Self::quit`]'s returned channel to know how it went.
    pub async fn open(
        path_directory: &Path,
        cancellation_token: CancellationToken,
        task_tracker_token: TaskTrackerToken,
    ) -> Result<Database, IoError> {
        let (manager, entry_tx) = DatabaseManager::open(path_directory).await?;
        let error_rx = manager.manager(cancellation_token, task_tracker_token);
        Ok(Self {
            entry_tx: Some(entry_tx),
            error_rx,
        })
    }

    /// Permits to close the DB's channel.
    /// Without this function manually called, the DB can't close.
    pub fn quit(self) -> DatabaseErrorReceiver {
        self.error_rx
    }
}

// TODO rotate_db at a regular interval instead of every N bytes?
// This would make more sense, as actual garbage collection is time-based

/// A [`Database`] logs all write operations on [`Tree`]s in a single file.
/// Logs are written asynchronously, so the write operations in RAM will block only when the
/// underlying channel is full.
struct DatabaseManager {
    /// Inner database
    write_db: WriteDB,
    /// [`Tree`]s loaded from disk
    loaded_db: LoadedDB,
    /// Path for the "normal" database
    path: PathBuf,
    /// Path for the "new" database, when rotating database.
    /// New database atomically replaces the old one when its writing is done.
    new_path: PathBuf,
    /// The receiver on [`Tree`] write operations
    entry_rx: mpsc::Receiver<Order>,
    /// The interval at which the database must be flushed to kernel
    flush_every: Duration,
    /// The maximum bytes that must be written until the database is rotated
    max_bytes: usize,
    /// Counter to account for the current number of bytes written
    bytes_written: usize,
}

impl DatabaseManager {
    pub async fn open(
        path_directory: &Path,
    ) -> Result<(DatabaseManager, mpsc::Sender<Order>), IoError> {
        let path = path_of(path_directory, DB_NAME);
        let new_path = path_of(path_directory, DB_NEW_NAME);

        let (write_db, loaded_db) = rotate_db(&path, &new_path, true).await?;

        let (entry_tx, entry_rx) = mpsc::channel(1000);

        Ok((
            DatabaseManager {
                write_db,
                loaded_db,
                path,
                new_path,
                entry_rx,
                flush_every: Duration::from_secs(2),
                max_bytes: 20 * 1024 * 1024, // 20 MiB
                bytes_written: 0,
            },
            entry_tx,
        ))
    }

    pub fn manager(
        mut self,
        cancellation_token: CancellationToken,
        _task_tracker_token: TaskTrackerToken,
    ) -> oneshot::Receiver<Result<(), String>> {
        let (error_tx, error_rx) = oneshot::channel();
        tokio::spawn(async move {
            let mut interval = interval(self.flush_every);
            // If we missed a tick, it will tick immediately, then wait
            // flush_every for the next tick, resulting in a relaxed interval.
            // Hoping this will smooth IO pressure when under heavy load.
            interval.set_missed_tick_behavior(MissedTickBehavior::Delay);
            let mut status = loop {
                tokio::select! {
                    order = self.entry_rx.recv() => {
                        if let Err(err) = self.handle_order(order).await {
                            cancellation_token.cancel();
                            break err;
                        }
                    }
                    _ = interval.tick() => {
                        if let Err(err) = self.flush().await {
                            cancellation_token.cancel();
                            break Some(err);
                        }
                    }
                    _ = cancellation_token.cancelled() => break None
                };
            };

            // Finish consuming received entries when shutdown asked
            if status.is_none() {
                loop {
                    let order = self.entry_rx.recv().await;
                    if let Err(err) = self.handle_order(order).await {
                        status = err;
                        break;
                    }
                }
            }

            // Shutdown
            let close_status = self
                .close()
                .await
                .map_err(|err| format!("while closing database: {err}"));

            let _ = error_tx.send(if let Some(err) = status {
                Err(err)
            } else if close_status.is_err() {
                close_status
            } else {
                Ok(())
            });
        });
        error_rx
    }

    /// Executes an order. Returns:
    /// - Err(Some) if there was an error,
    /// - Err(None) if the channel is closed,
    /// - Ok(()) in the general case.
    async fn handle_order(&mut self, order: Option<Order>) -> Result<(), Option<String>> {
        match order {
            Some(Order::Log(entry)) => self.handle_entry(entry).await.map_err(Option::Some),
            Some(Order::OpenTree(open_tree)) => {
                self.handle_open_tree(open_tree);
                Ok(())
            }
            None => Err(None),
        }
    }

    /// Write a received entry.
    async fn handle_entry(&mut self, entry: Entry) -> Result<(), String> {
        match self.write_db.write_entry(&entry).await {
            Ok(bytes_written) => {
                self.bytes_written += bytes_written;
                if self.bytes_written > self.max_bytes {
                    match self.rotate_db().await {
                        Ok(_) => {
                            self.bytes_written = 0;
                            Ok(())
                        }
                        Err(err) => Err(format!("while rotating database: {err}")),
                    }
                } else {
                    Ok(())
                }
            }
            Err(err) => Err(format!("while writing entry to database: {err}")),
        }
    }

    /// Flush inner database.
    async fn flush(&mut self) -> Result<(), String> {
        self.write_db
            .flush()
            .await
            .map_err(|err| format!("while flushing database: {err}"))
    }

    /// Close inner database.
    async fn close(mut self) -> Result<(), IoError> {
        self.write_db.close().await
    }

    /// Rotate inner database.
    async fn rotate_db(&mut self) -> Result<(), IoError> {
        self.write_db.close().await?;
        let (write_db, _) = rotate_db(&self.path, &self.new_path, false).await?;
        self.write_db = write_db;
        Ok(())
    }
}

async fn rotate_db(
    path: &Path,
    new_path: &Path,
    startup: bool,
) -> Result<(WriteDB, LoadedDB), IoError> {
    let file = match File::open(&path).await {
        Ok(file) => file,
        Err(err) => match (startup, err.kind()) {
            // No need to rotate the database when it is new,
            // we return here
            (true, ErrorKind::NotFound) => {
                return Ok((WriteDB::new(File::create(path).await?), HashMap::default()));
            }
            (_, _) => return Err(err),
        },
    };

    let mut read_db = ReadDB::new(file);
    let mut write_db = WriteDB::new(File::create(new_path).await?);

    let loaded_db = read_db.read(&mut write_db, startup).await?;

    rename(new_path, path).await?;

    Ok((write_db, loaded_db))
}

// Tree

pub trait KeyType: Ord + Serialize + DeserializeOwned + Clone {}
pub trait ValueType: Ord + Serialize + DeserializeOwned + Clone {}

impl<T> KeyType for T where T: Ord + Serialize + DeserializeOwned + Clone {}
impl<T> ValueType for T where T: Ord + Serialize + DeserializeOwned + Clone {}

/// Main API of this crate.
/// [`Tree`] wraps and is meant to be used exactly like a standard [`std::collections::BTreeMap`].
/// Read operations are RAM only.
/// Write operations are asynchronously persisted on disk by its parent [`Database`].
/// They will never block.
pub struct Tree<K: KeyType, V: ValueType> {
    // FIXME implement id as a u64 instead?
    // Could permit to send more direct database entries.
    // Database should write special entries as soon as a tree is opened.
    /// The name of the tree
    id: String,
    /// The duration for which the data in the tree must be persisted to disk.
    /// All write operations will stay logged on disk for this duration.
    /// This property permits the database rotation to be `O(n)` in time and `O(1)` in RAM space,
    /// `n` being the number of write operations from the last rotation plus the number of new
    /// operations.
    entry_timeout: Duration,
    /// The inner BTreeMap
    tree: BTreeMap<K, V>,
    /// The sender that permits to asynchronously send write operations to database
    tx: mpsc::Sender<Order>,
}

impl Database {
    /// Creates a new Tree with the given name and entry timeout.
    /// Takes a closure (or regular function) that converts (Value, Value) JSON entries
    /// into (K, V) typed entries.
    /// Helpers for this closure can be found in the [`helpers`] module.
    pub async fn open_tree<K: KeyType, V: ValueType, F>(
        &mut self,
        name: String,
        entry_timeout: Duration,
        map_f: F,
    ) -> Result<Tree<K, V>, String>
    where
        F: Fn((Value, Value)) -> Result<(K, V), String>,
    {
        // Request the tree
        let (tx, rx) = oneshot::channel();
        let entry_tx = match self.entry_tx.clone() {
            None => return Err("Database is closing".to_string()),
            Some(entry_tx) => {
                entry_tx
                    .send(Order::OpenTree(OpenTree {
                        name: name.clone(),
                        resp: tx,
                    }))
                    .await
                    .map_err(|_| "Database did not answer")?;
                // Get a clone of the channel sender
                entry_tx.clone()
            }
        };
        // Load the tree from its JSON
        let tree = if let Some(json_tree) = rx.await.map_err(|_| "Database did not respond")? {
            json_tree
                .into_iter()
                .map(map_f)
                .collect::<Result<BTreeMap<K, V>, String>>()?
        } else {
            BTreeMap::default()
        };
        Ok(Tree {
            id: name,
            entry_timeout,
            tree,
            tx: entry_tx,
        })
    }
}

impl DatabaseManager {
    /// Answers an [`OpenTree`] order by handing over the requested loaded tree,
    /// removing it from the loaded database.
    pub fn handle_open_tree(&mut self, open_tree: OpenTree) {
        let _ = open_tree.resp.send(self.loaded_db.remove(&open_tree.name));
    }

    // TODO keep only tree names, and use it for next db rotation to remove associated entries
    // Drops Trees that have not been loaded already
    // pub fn drop_trees(&mut self) {
    //     self.loaded_db = HashMap::default();
    // }
}

// Gives access to all read-only functions
impl<K: KeyType, V: ValueType> Deref for Tree<K, V> {
    type Target = BTreeMap<K, V>;

    fn deref(&self) -> &Self::Target {
        &self.tree
    }
}

// Reimplement write functions
impl<K: KeyType, V: ValueType> Tree<K, V> {
    /// Log an [`Entry`] to the [`Database`]
    async fn log(&mut self, k: &K, v: Option<&V>) {
        let now = now();
        let e = Entry {
            tree: self.id.clone(),
            key: serde_json::to_value(k).expect("could not serialize key"),
            value: v.map(|v| serde_json::to_value(v).expect("could not serialize value")),
            expiry: now + self.entry_timeout,
        };
        let tx = self.tx.clone();
        // FIXME what if send fails?
        let _ = tx.send(Order::Log(e)).await;
    }

    /// Asynchronously persisted version of [`BTreeMap::insert`]
    pub async fn insert(&mut self, key: K, value: V) -> Option<V> {
        self.log(&key, Some(&value)).await;
        self.tree.insert(key, value)
    }

    /// Asynchronously persisted version of [`BTreeMap::pop_first`]
    pub async fn pop_first(&mut self) -> Option<(K, V)> {
        match self.tree.pop_first() {
            Some((key, value)) => {
                self.log(&key, None).await;
                Some((key, value))
            }
            None => None,
        }
    }

    /// Asynchronously persisted version of [`BTreeMap::pop_last`]
    pub async fn pop_last(&mut self) -> Option<(K, V)> {
        match self.tree.pop_last() {
            Some((key, value)) => {
                self.log(&key, None).await;
                Some((key, value))
            }
            None => None,
        }
    }

    /// Asynchronously persisted version of [`BTreeMap::remove`]
    pub async fn remove(&mut self, key: &K) -> Option<V> {
        self.log(key, None).await;
        self.tree.remove(key)
    }

    /// Updates an item and returns the previous value.
    /// Returning None removes the item if it existed before.
    /// Asynchronously persisted.
    /// *API design borrowed from [`fjall::WriteTransaction::fetch_update`].*
    pub async fn fetch_update<F: FnMut(Option<V>) -> Option<V>>(
        &mut self,
        key: K,
        mut f: F,
    ) -> Option<V> {
        let old_value = self.get(&key).map(|v| v.to_owned());
        let new_value = f(old_value);
        self.log(&key, new_value.as_ref()).await;
        if let Some(new_value) = new_value {
            self.tree.insert(key, new_value)
        } else {
            self.tree.remove(&key)
        }
    }

    #[cfg(any(test, feature = "test"))]
    pub fn tree(&self) -> &BTreeMap<K, V> {
        &self.tree
    }
}

#[cfg(any(test, feature = "test"))]
impl DatabaseManager {
    pub fn set_loaded_db(&mut self, loaded_db: LoadedDB) {
        self.loaded_db = loaded_db;
    }
}

#[cfg(any(test, feature = "test"))]
impl Database {
    pub async fn from_dir(dir_path: &Path, loaded_db: Option<LoadedDB>) -> Result<Self, IoError> {
        use tokio_util::task::TaskTracker;

        let (mut manager, entry_tx) = DatabaseManager::open(dir_path).await?;
        if let Some(loaded_db) = loaded_db {
            manager.set_loaded_db(loaded_db)
        }
        let error_rx = manager.manager(CancellationToken::new(), TaskTracker::new().token());
        Ok(Self {
            entry_tx: Some(entry_tx),
            error_rx,
        })
    }
}

#[cfg(test)]
mod tests {

    use std::{
        collections::{BTreeMap, BTreeSet, HashMap},
        time::Duration,
    };

    use serde_json::Value;
    use tempfile::{NamedTempFile, TempDir};
    use tokio::fs::File;

    use super::{DB_NAME, Database, Entry, Time, helpers::*, now, raw::WriteDB, rotate_db};

    #[tokio::test]
    async fn test_rotate_db() {
        let now = now();

        let expired = now - Time::from_secs(2);
        let valid = now + Time::from_secs(2);

        let entries = [
            Entry {
                tree: "tree1".into(),
                key: "key1".into(),
                value: Some("value1".into()),
                expiry: valid,
            },
            Entry {
                tree: "tree2".into(),
                key: "key2".into(),
                value: Some("value2".into()),
                expiry: valid,
            },
            Entry {
                tree: "tree1".into(),
                key: "key2".into(),
                value: Some("value2".into()),
                expiry: valid,
            },
            Entry {
                tree: "tree1".into(),
                key: "key1".into(),
                value: None,
                expiry: valid,
            },
            Entry {
                tree: "tree3".into(),
                key: "key1".into(),
                value: Some("value1".into()),
                expiry: expired,
            },
        ];

        let path = NamedTempFile::new().unwrap().into_temp_path();
        let new_path = NamedTempFile::new().unwrap().into_temp_path();

        let mut write_db = WriteDB::new(File::create(&path).await.unwrap());
        for entry in entries {
            write_db.write_entry(&entry).await.unwrap();
        }
        write_db.close().await.unwrap();

        let (mut write_db, loaded_db) = rotate_db(&path, &new_path, true).await.unwrap();
        assert_eq!(
            loaded_db,
            HashMap::from([
                (
                    "tree1".into(),
                    HashMap::from([("key2".into(), "value2".into())])
                ),
                (
                    "tree2".into(),
                    HashMap::from([("key2".into(), "value2".into())])
                )
            ])
        );

        // Test that we can write in new db
        write_db
            .write_entry(&Entry {
                tree: "tree3".into(),
                key: "key3".into(),
                value: Some("value3".into()),
                expiry: valid,
            })
            .await
            .unwrap();
        write_db.close().await.unwrap();

        // And that we get the correct result
        let (_, loaded_db) = rotate_db(&path, &new_path, true).await.unwrap();
        assert_eq!(
            loaded_db,
            HashMap::from([
                (
                    "tree1".into(),
                    HashMap::from([("key2".into(), "value2".into())])
                ),
                (
                    "tree2".into(),
                    HashMap::from([("key2".into(), "value2".into())])
                ),
                (
                    "tree3".into(),
                    HashMap::from([("key3".into(), "value3".into())])
                ),
            ])
        );

        // Test that asking not to load results in no load
        let (_, loaded_db) = rotate_db(&path, &new_path, false).await.unwrap();
        assert_eq!(loaded_db, HashMap::default());
    }

    #[tokio::test]
    async fn test_open_tree() {
        let now = now();

        let now2 = now + Time::from_millis(2);
        let now3 = now + Time::from_millis(3);

        // let now_ms = now.as_nanos().to_string();
        // let now2_ms = now2.as_nanos().to_string();
        // let now3_ms = now3.as_nanos().to_string();

        let valid = now + Time::from_secs(2);

        let ip127 = vec!["127.0.0.1".to_string()];
        let ip1 = vec!["1.1.1.1".to_string()];

        let entries = [
            Entry {
                tree: "time-match".into(),
                key: now.as_nanos().to_string().into(),
                value: Some(ip127.clone().into()),
                expiry: valid,
            },
            Entry {
                tree: "time-match".into(),
                key: now2.as_nanos().to_string().into(),
                value: Some(ip127.clone().into()),
                expiry: valid,
            },
            Entry {
                tree: "time-match".into(),
                key: now3.as_nanos().to_string().into(),
                value: Some(ip127.clone().into()),
                expiry: valid,
            },
|
||||
Entry {
|
||||
tree: "time-match".into(),
|
||||
key: now2.as_nanos().to_string().into(),
|
||||
value: Some(ip127.clone().into()),
|
||||
expiry: valid,
|
||||
},
|
||||
Entry {
|
||||
tree: "match-timeset".into(),
|
||||
key: ip127.clone().into(),
|
||||
value: Some([Value::String(now.as_nanos().to_string())].into()),
|
||||
expiry: valid,
|
||||
},
|
||||
Entry {
|
||||
tree: "match-timeset".into(),
|
||||
key: ip1.clone().into(),
|
||||
value: Some([Value::String(now2.as_nanos().to_string())].into()),
|
||||
expiry: valid,
|
||||
},
|
||||
Entry {
|
||||
tree: "match-timeset".into(),
|
||||
key: ip1.clone().into(),
|
||||
value: Some(
|
||||
[
|
||||
Value::String(now2.as_nanos().to_string()),
|
||||
Value::String(now3.as_nanos().to_string()),
|
||||
]
|
||||
.into(),
|
||||
),
|
||||
expiry: valid,
|
||||
},
|
||||
];
|
||||
|
||||
let dir = TempDir::new().unwrap();
|
||||
let dir_path = dir.path();
|
||||
let db_path = dir_path.join(DB_NAME);
|
||||
|
||||
let mut write_db = WriteDB::new(File::create(&db_path).await.unwrap());
|
||||
for entry in entries {
|
||||
write_db.write_entry(&entry).await.unwrap();
|
||||
}
|
||||
write_db.close().await.unwrap();
|
||||
drop(write_db);
|
||||
|
||||
let mut database = Database::from_dir(dir_path, None).await.unwrap();
|
||||
|
||||
let time_match = database
|
||||
.open_tree(
|
||||
"time-match".into(),
|
||||
Duration::from_secs(2),
|
||||
|(key, value)| Ok((to_time(&key)?, to_match(&value)?)),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(
|
||||
time_match.tree,
|
||||
BTreeMap::from([
|
||||
(now, ip127.clone()),
|
||||
(now2, ip127.clone()),
|
||||
(now3, ip127.clone())
|
||||
])
|
||||
);
|
||||
|
||||
let match_timeset = database
|
||||
.open_tree(
|
||||
"match-timeset".into(),
|
||||
Duration::from_hours(2),
|
||||
|(key, value)| Ok((to_match(&key)?, to_timeset(&value)?)),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(
|
||||
match_timeset.tree,
|
||||
BTreeMap::from([
|
||||
(ip127.clone(), BTreeSet::from([now])),
|
||||
(ip1.clone(), BTreeSet::from([now2, now3])),
|
||||
])
|
||||
);
|
||||
|
||||
let unknown_tree = database
|
||||
.open_tree(
|
||||
"unknown_tree".into(),
|
||||
Duration::from_hours(2),
|
||||
|(key, value)| Ok((to_match(&key)?, to_timeset(&value)?)),
|
||||
)
|
||||
.await
|
||||
.unwrap();
|
||||
assert_eq!(unknown_tree.tree, BTreeMap::default());
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,497 +0,0 @@

use std::{
    collections::HashMap,
    io::Error as IoError,
    time::{SystemTime, UNIX_EPOCH},
};

use serde::{Deserialize, Serialize};
use serde_json::Value;
use thiserror::Error;
use tokio::{
    fs::File,
    io::{AsyncBufReadExt, AsyncWriteExt, BufReader, BufWriter},
};
use tracing::error;

use crate::time::Time;

use super::{Entry, LoadedDB};

const DB_TREE_ID: u64 = 0;
// const DB_TREE_NAME: &str = "tree_names";

#[derive(Debug, Error)]
pub enum SerdeOrIoError {
    #[error("{0}")]
    IO(#[from] IoError),
    #[error("{0}")]
    Serde(#[from] serde_json::Error),
}

#[derive(Debug, Error)]
pub enum DatabaseError {
    #[error("{0}")]
    IO(#[from] IoError),
    #[error("{0}")]
    Serde(#[from] serde_json::Error),
    #[error("missing key id: {0}")]
    MissingKeyId(u64),
}

/// An entry in the custom database format, ready to be written to the database
#[derive(Serialize)]
struct WriteEntry<'a> {
    #[serde(rename = "t")]
    pub tree: u64,
    #[serde(rename = "k")]
    pub key: &'a Value,
    #[serde(rename = "v")]
    pub value: &'a Option<Value>,
    #[serde(rename = "e")]
    pub expiry: u64,
}

/// An entry in the custom database format, as just read from the database
#[derive(Deserialize)]
struct ReadEntry {
    #[serde(rename = "t")]
    pub tree: u64,
    #[serde(rename = "k")]
    pub key: Value,
    #[serde(rename = "v")]
    pub value: Option<Value>,
    #[serde(rename = "e")]
    pub expiry: u64,
}

/// Writes entries to a database.
/// Entries are written as plain JSONL.
pub struct WriteDB {
    file: BufWriter<File>,
    names: HashMap<String, u64>,
    next_id: u64,
    buffer: Vec<u8>,
}

impl WriteDB {
    /// Creates a wrapper around a file opened in write mode,
    /// to store entries in it.
    pub fn new(file: File) -> Self {
        Self {
            file: BufWriter::new(file),
            // names: HashMap::from([(DB_TREE_NAME.into(), DB_TREE_ID)]),
            names: HashMap::default(),
            next_id: 1,
            buffer: Vec::default(),
        }
    }

    /// Writes an entry to the database
    pub async fn write_entry(&mut self, entry: &Entry) -> Result<usize, SerdeOrIoError> {
        let mut written = 0;
        let tree_id = match self.names.get(&entry.tree) {
            Some(id) => *id,
            // Insert a special database entry when the tree is not recorded yet
            None => {
                let id = self.next_id;
                self.next_id += 1;

                self.names.insert(entry.tree.clone(), id);
                written = self
                    ._write_entry(&WriteEntry {
                        tree: DB_TREE_ID,
                        key: &id.into(),
                        value: &Some(entry.tree.clone().into()),
                        // Expiry is not used for special entries
                        expiry: 0,
                    })
                    .await?;

                id
            }
        };
        self._write_entry(&WriteEntry {
            tree: tree_id,
            key: &entry.key,
            value: &entry.value,
            expiry: entry.expiry.as_millis() as u64,
        })
        .await
        .map(|bytes_written| bytes_written + written)
    }

    async fn _write_entry(&mut self, raw_entry: &WriteEntry<'_>) -> Result<usize, SerdeOrIoError> {
        self.buffer.clear();
        serde_json::to_writer(&mut self.buffer, &raw_entry)?;
        self.buffer.push(b'\n');
        Ok(self.file.write(self.buffer.as_ref()).await?)
    }

    /// Flushes the inner [`tokio::io::BufWriter`]
    pub async fn flush(&mut self) -> Result<(), IoError> {
        self.file.flush().await
    }

    /// Closes the inner [`tokio::io::BufWriter`].
    /// WriteDB should not be used after this point.
    pub async fn close(&mut self) -> Result<(), IoError> {
        self.file.shutdown().await
    }
}

/// Reads entries from a database.
/// The database is plain JSONL.
pub struct ReadDB {
    file: BufReader<File>,
    names: HashMap<u64, String>,
    buffer: String,
}

impl ReadDB {
    pub fn new(file: File) -> Self {
        ReadDB {
            file: BufReader::new(file),
            // names: HashMap::from([(DB_TREE_ID, DB_TREE_NAME.into())]),
            names: HashMap::default(),
            buffer: String::default(),
        }
    }

    pub async fn read(
        &mut self,
        write_db: &mut WriteDB,
        load_db: bool,
    ) -> tokio::io::Result<LoadedDB> {
        let mut data_maps = HashMap::new();

        loop {
            match self.next().await {
                // EOF
                Ok(None) => break,
                // Can't recover from an io::Error here
                Err(DatabaseError::IO(err)) => return Err(err),
                // Just skip malformed entries
                Err(DatabaseError::Serde(err)) => {
                    error!("malformed entry read from database: {err}")
                }
                Err(DatabaseError::MissingKeyId(id)) => {
                    error!("invalid entry read from database: {id}")
                }
                // Ok, we got an entry
                Ok(Some(entry)) => {
                    // Add it back in the new DB
                    match write_db.write_entry(&entry).await {
                        Ok(_) => (),
                        Err(err) => match err {
                            SerdeOrIoError::IO(err) => return Err(err),
                            SerdeOrIoError::Serde(err) => error!(
                                "serde should be able to serialize an entry just deserialized: {err}"
                            ),
                        },
                    }
                    // Insert data in RAM
                    if load_db {
                        let map: &mut HashMap<Value, Value> =
                            data_maps.entry(entry.tree).or_default();
                        match entry.value {
                            Some(value) => map.insert(entry.key, value),
                            None => map.remove(&entry.key),
                        };
                    }
                }
            }
        }

        Ok(data_maps)
    }

    async fn next(&mut self) -> Result<Option<Entry>, DatabaseError> {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64;
        // Loop until we get a non-special value
        let raw_entry = loop {
            self.buffer.clear();
            let raw_entry = match self.file.read_line(&mut self.buffer).await {
                // EOF
                Ok(0) => return Ok(None),
                Ok(_) => match serde_json::from_str::<ReadEntry>(&self.buffer) {
                    Ok(raw_entry) => Ok(raw_entry),
                    Err(err) => Err(DatabaseError::Serde(err)),
                },
                Err(err) => Err(DatabaseError::IO(err)),
            }?;

            if raw_entry.tree == DB_TREE_ID {
                // Insert a new tree
                self.names.insert(
                    raw_entry
                        .key
                        .as_u64()
                        .expect("database reserved entry doesn't have an uint as key")
                        .to_owned(),
                    raw_entry
                        .value
                        .expect("database reserved entry doesn't have a value")
                        .as_str()
                        .expect("database reserved entry doesn't have a string as value")
                        .to_owned(),
                );
            // Skip expired entries
            } else if raw_entry.expiry >= now {
                break raw_entry;
            }
        };

        match self.names.get(&raw_entry.tree) {
            Some(tree) => Ok(Some(Entry {
                tree: tree.to_owned(),
                key: raw_entry.key,
                value: raw_entry.value,
                expiry: Time::from_millis(raw_entry.expiry),
            })),
            None => Err(DatabaseError::MissingKeyId(raw_entry.tree)),
        }
    }
}

#[cfg(test)]
mod tests {
    use std::collections::HashMap;

    use serde_json::Value;
    use tempfile::NamedTempFile;
    use tokio::fs::{File, read, write};

    use crate::{
        Entry,
        raw::{DB_TREE_ID, DatabaseError, ReadDB, WriteDB},
        time::{Time, now},
    };

    #[tokio::test]
    async fn write_db_write_entry() {
        let now = now();
        let expired = now - Time::from_secs(2);
        let expired_ts = expired.as_millis();
        // let valid = now + Time::from_secs(2);
        // let valid_ts = valid.timestamp_millis();

        let path = NamedTempFile::new().unwrap().into_temp_path();

        let mut write_db = WriteDB::new(File::create(&path).await.unwrap());

        write_db
            .write_entry(&Entry {
                tree: "yooo".into(),
                key: "key1".into(),
                value: Some("value1".into()),
                expiry: expired,
            })
            .await
            .unwrap();
        write_db.flush().await.unwrap();

        let contents = String::from_utf8(read(path).await.unwrap()).unwrap();

        println!("{}", &contents);

        assert_eq!(
            contents,
            format!(
                "{{\"t\":0,\"k\":1,\"v\":\"yooo\",\"e\":0}}\n{{\"t\":1,\"k\":\"key1\",\"v\":\"value1\",\"e\":{expired_ts}}}\n"
            )
        );
    }

    #[tokio::test]
    async fn read_db_next() {
        let now = now();

        let expired = now - Time::from_secs(2);
        let expired_ts = expired.as_millis();

        let valid = now + Time::from_secs(2);
        let valid_ts = valid.as_millis();
        // Truncate to millisecond precision
        let valid = Time::new(valid.as_secs(), valid.subsec_millis() * 1_000_000);

        let path = NamedTempFile::new().unwrap().into_temp_path();

        write(
            &path,
            format!(
                "{{\"t\": {DB_TREE_ID}, \"k\": 1, \"v\": \"test_tree\", \"e\": 0}}
{{\"t\": 1, \"k\": \"key1\", \"v\": 1, \"e\": {expired_ts}}}
{{\"t\": 1, \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
malformed entry: not json
{{\"t\": \"tree cant be string\", \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 1, \"k\": \"missing quote, \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 3, \"k\": \"missing key id\", \"v\": 2, \"e\": {valid_ts}}}"
            ),
        )
        .await
        .unwrap();

        let mut read_db = ReadDB::new(File::open(path).await.unwrap());

        assert_eq!(
            read_db.next().await.unwrap(),
            Some(Entry {
                tree: "test_tree".into(),
                key: "key2".into(),
                value: Some(2.into()),
                expiry: valid,
            })
        );

        assert!(matches!(read_db.next().await, Err(DatabaseError::Serde(_))));
        assert!(matches!(read_db.next().await, Err(DatabaseError::Serde(_))));
        assert!(matches!(read_db.next().await, Err(DatabaseError::Serde(_))));
        assert!(matches!(
            read_db.next().await,
            Err(DatabaseError::MissingKeyId(3))
        ));
        assert!(read_db.next().await.unwrap().is_none());
    }

    #[tokio::test]
    async fn read_db_read() {
        let now = now();

        let expired = now - Time::from_secs(2);
        let expired_ts = expired.as_millis();

        let valid = now + Time::from_secs(2);
        let valid_ts = valid.as_millis();

        let read_path = NamedTempFile::new().unwrap().into_temp_path();
        let write_path = NamedTempFile::new().unwrap().into_temp_path();

        write(
            &read_path,
            format!(
                "{{\"t\": {DB_TREE_ID}, \"k\": 1, \"v\": \"test_tree\", \"e\": 0}}
{{\"t\": 1, \"k\": \"key1\", \"v\": 1, \"e\": {expired_ts}}}
{{\"t\": 1, \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
malformed entry: not json
{{\"t\": \"tree cant be string\", \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 1, \"k\": \"missing quote, \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 3, \"k\": \"missing key id\", \"v\": 2, \"e\": {valid_ts}}}"
            ),
        )
        .await
        .unwrap();

        // First test loading in RAM

        let mut read_db = ReadDB::new(File::open(&read_path).await.unwrap());
        let mut write_db = WriteDB::new(File::create(&write_path).await.unwrap());
        let maps = read_db.read(&mut write_db, true).await.unwrap();

        // Test RAM
        assert_eq!(
            maps,
            HashMap::from([(
                "test_tree".into(),
                HashMap::from([("key2".into(), 2.into())])
            )])
        );

        // Test disk
        write_db.close().await.unwrap();
        let contents = String::from_utf8(read(&write_path).await.unwrap()).unwrap();

        assert_eq!(
            contents,
            format!(
                "{{\"t\":{DB_TREE_ID},\"k\":1,\"v\":\"test_tree\",\"e\":0}}\n\
                {{\"t\":1,\"k\":\"key2\",\"v\":2,\"e\":{valid_ts}}}\n"
            )
        );

        // Second test only rotating on disk

        let mut read_db = ReadDB::new(File::open(&read_path).await.unwrap());
        let mut write_db = WriteDB::new(File::create(&write_path).await.unwrap());
        let maps = read_db.read(&mut write_db, false).await.unwrap();

        // Test RAM
        assert_eq!(maps, HashMap::default());

        // Test disk
        write_db.close().await.unwrap();
        let contents = String::from_utf8(read(write_path).await.unwrap()).unwrap();

        assert_eq!(
            contents,
            format!(
                "{{\"t\":{DB_TREE_ID},\"k\":1,\"v\":\"test_tree\",\"e\":0}}\n\
                {{\"t\":1,\"k\":\"key2\",\"v\":2,\"e\":{valid_ts}}}\n"
            )
        );
    }

    // write then read 1000 random entries
    #[tokio::test]
    async fn write_then_read_1000() {
        // Generate entries
        let now = now();
        let entries: Vec<_> = (0..1000)
            .map(|i| Entry {
                tree: format!("tree{}", i % 4),
                key: format!("key{}", i % 10).into(),
                value: Some(format!("value{}", i % 10).into()),
                expiry: now + Time::from_secs(i % 4) - Time::from_secs(1),
            })
            .collect();

        let remove_entries: Vec<_> = (0..1000)
            .filter(|i| i % 5 == 1)
            .map(|i| Entry {
                tree: format!("tree{}", i % 4),
                key: format!("key{}", i % 10).into(),
                value: None,
                expiry: now + Time::from_secs(i % 4),
            })
            .collect();

        let all_entries: Vec<_> = entries.iter().chain(remove_entries.iter()).collect();

        let kept_entries: HashMap<String, HashMap<Value, Value>> = entries
            .iter()
            .filter(|entry| entry.expiry > now)
            .filter(|entry| {
                remove_entries
                    .iter()
                    .all(|rm_entry| rm_entry.tree != entry.tree || rm_entry.key != entry.key)
            })
            .fold(HashMap::default(), |mut acc, entry| {
                acc.entry(entry.tree.clone())
                    .or_default()
                    .insert(entry.key.clone(), entry.value.clone().unwrap());
                acc
            });

        // Write entries
        let read_path = NamedTempFile::new().unwrap().into_temp_path();
        let write_path = NamedTempFile::new().unwrap().into_temp_path();

        let mut write_db = WriteDB::new(File::create(&read_path).await.unwrap());
        for entry in all_entries {
            write_db.write_entry(entry).await.unwrap();
        }
        write_db.close().await.unwrap();

        // Read entries
        let mut read_db = ReadDB::new(File::open(&read_path).await.unwrap());
        let mut write_db = WriteDB::new(File::create(&write_path).await.unwrap());
        let maps = read_db
            .read(&mut write_db, true)
            .await
            .unwrap()
            .into_iter()
            .filter(|(_, map)| !map.is_empty())
            .collect::<HashMap<_, _>>();

        assert_eq!(maps, kept_entries);
    }
}
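The `WriteDB` above interns tree names: tree id 0 is reserved for the name table, and the first time a tree appears, a `{"t":0,"k":<id>,"v":"<name>","e":0}` record is emitted before the data record. A minimal standalone sketch of that convention, using only `std` string formatting (the `encode` helper is hypothetical, not part of reaction):

```rust
use std::collections::HashMap;

// Sketch of WriteDB's tree-name interning: each entry is (tree, key, value, expiry).
// Tree id 0 is reserved; a name-table record is written once per new tree name.
fn encode(entries: &[(&str, &str, &str, u64)]) -> String {
    let mut names: HashMap<&str, u64> = HashMap::new();
    let mut next_id: u64 = 1;
    let mut out = String::new();
    for &(tree, key, value, expiry) in entries {
        let id = match names.get(tree) {
            Some(id) => *id,
            None => {
                let id = next_id;
                next_id += 1;
                names.insert(tree, id);
                // Name-table record: maps the numeric id to the tree name
                out.push_str(&format!("{{\"t\":0,\"k\":{id},\"v\":\"{tree}\",\"e\":0}}\n"));
                id
            }
        };
        // Data record, referencing the tree by its interned id
        out.push_str(&format!(
            "{{\"t\":{id},\"k\":\"{key}\",\"v\":\"{value}\",\"e\":{expiry}}}\n"
        ));
    }
    out
}

fn main() {
    let db = encode(&[("bans", "1.2.3.4", "ssh", 42), ("bans", "5.6.7.8", "http", 43)]);
    print!("{db}");
}
```

Interning keeps the per-record overhead small: a long tree name is written once, and every subsequent record for that tree only carries a small integer.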
@@ -1,117 +0,0 @@

use std::{
    fmt,
    ops::{Add, Deref, Sub},
    time::{Duration, SystemTime, UNIX_EPOCH},
};

use serde::{Deserialize, Serialize};

/// [`std::time::Duration`] since [`std::time::UNIX_EPOCH`]
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Time(Duration);
impl Deref for Time {
    type Target = Duration;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}
impl From<Duration> for Time {
    fn from(value: Duration) -> Self {
        Time(value)
    }
}
impl Into<Duration> for Time {
    fn into(self) -> Duration {
        self.0
    }
}
impl Add<Duration> for Time {
    type Output = Time;
    fn add(self, rhs: Duration) -> Self::Output {
        Time(self.0 + rhs)
    }
}
impl Add<Time> for Time {
    type Output = Time;
    fn add(self, rhs: Time) -> Self::Output {
        Time(self.0 + rhs.0)
    }
}
impl Sub<Duration> for Time {
    type Output = Time;
    fn sub(self, rhs: Duration) -> Self::Output {
        Time(self.0 - rhs)
    }
}
impl Sub<Time> for Time {
    type Output = Time;
    fn sub(self, rhs: Time) -> Self::Output {
        Time(self.0 - rhs.0)
    }
}

impl Serialize for Time {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        self.as_nanos().to_string().serialize(serializer)
    }
}
struct TimeVisitor;
impl<'de> serde::de::Visitor<'de> for TimeVisitor {
    type Value = Time;

    fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        write!(formatter, "a string representing nanoseconds")
    }

    fn visit_str<E>(self, s: &str) -> Result<Self::Value, E>
    where
        E: serde::de::Error,
    {
        match s.parse::<u128>() {
            Ok(nanos) => Ok(Time(Duration::new(
                (nanos / 1_000_000_000) as u64,
                (nanos % 1_000_000_000) as u32,
            ))),
            Err(_) => Err(serde::de::Error::invalid_value(
                serde::de::Unexpected::Str(s),
                &self,
            )),
        }
    }
}
impl<'de> Deserialize<'de> for Time {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: serde::Deserializer<'de>,
    {
        deserializer.deserialize_str(TimeVisitor)
    }
}

impl Time {
    pub fn new(secs: u64, nanos: u32) -> Time {
        Time(Duration::new(secs, nanos))
    }
    pub fn from_hours(hours: u64) -> Time {
        Time(Duration::from_hours(hours))
    }
    pub fn from_mins(mins: u64) -> Time {
        Time(Duration::from_mins(mins))
    }
    pub fn from_secs(secs: u64) -> Time {
        Time(Duration::from_secs(secs))
    }
    pub fn from_millis(millis: u64) -> Time {
        Time(Duration::from_millis(millis))
    }
    pub fn from_nanos(nanos: u64) -> Time {
        Time(Duration::from_nanos(nanos))
    }
}

pub fn now() -> Time {
    Time(SystemTime::now().duration_since(UNIX_EPOCH).unwrap())
}
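The `Time` wrapper above (de)serializes as the stringified number of nanoseconds since the Unix epoch. A small self-contained sketch of that round-trip using only `std` (the function names here are illustrative, not reaction's API):

```rust
use std::time::Duration;

// Serialize: the total nanosecond count, as a decimal string
// (strings avoid precision loss for values above 2^53 in JSON consumers).
fn to_nanos_string(t: Duration) -> String {
    t.as_nanos().to_string()
}

// Deserialize: parse the string back and split into (secs, subsec_nanos),
// mirroring TimeVisitor::visit_str above. Returns None on a non-numeric string.
fn from_nanos_string(s: &str) -> Option<Duration> {
    let nanos: u128 = s.parse().ok()?;
    Some(Duration::new(
        (nanos / 1_000_000_000) as u64,
        (nanos % 1_000_000_000) as u32,
    ))
}

fn main() {
    let t = Duration::new(1_700_000_000, 123_456_789);
    let s = to_nanos_string(t);
    assert_eq!(from_nanos_string(&s), Some(t));
    println!("{s}");
}
```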
go.old/README.md (new file, 4 lines)

@@ -0,0 +1,4 @@
This is the old Go codebase of reaction, i.e. all 0.x and 1.x versions.
This codebase most probably won't be updated.

Development now continues in Rust for reaction 2.x.
go.old/app/client.go (new file, 393 lines)

@@ -0,0 +1,393 @@
package app

import (
	"bufio"
	"encoding/gob"
	"encoding/json"
	"fmt"
	"net"
	"os"
	"regexp"
	"slices"
	"strings"
	"time"

	"framagit.org/ppom/reaction/logger"
	"sigs.k8s.io/yaml"
)

const (
	Info  = 0
	Flush = 1
)

type Request struct {
	Request int
	Flush   PSF
}

type Response struct {
	Err error
	// Config Conf
	Matches MatchesMap
	Actions ActionsMap
}

func SendAndRetrieve(data Request) Response {
	conn, err := net.Dial("unix", *SocketPath)
	if err != nil {
		logger.Fatalln("Error opening connection to daemon:", err)
	}
	defer conn.Close()

	err = gob.NewEncoder(conn).Encode(data)
	if err != nil {
		logger.Fatalln("Can't send message:", err)
	}

	var response Response
	err = gob.NewDecoder(conn).Decode(&response)
	if err != nil {
		logger.Fatalln("Invalid answer from daemon:", err)
	}
	return response
}

type PatternStatus struct {
	Matches int                 `json:"matches,omitempty"`
	Actions map[string][]string `json:"actions,omitempty"`
}
type MapPatternStatus map[Match]*PatternStatus
type MapPatternStatusFlush MapPatternStatus

type ClientStatus map[string]map[string]MapPatternStatus
type ClientStatusFlush ClientStatus

func (mps MapPatternStatusFlush) MarshalJSON() ([]byte, error) {
	for _, v := range mps {
		return json.Marshal(v)
	}
	return []byte(""), nil
}

func (csf ClientStatusFlush) MarshalJSON() ([]byte, error) {
	ret := make(map[string]map[string]MapPatternStatusFlush)
	for k, v := range csf {
		ret[k] = make(map[string]MapPatternStatusFlush)
		for kk, vv := range v {
			ret[k][kk] = MapPatternStatusFlush(vv)
		}
	}
	return json.Marshal(ret)
}

func pfMatches(streamName string, filterName string, regexes map[string]*regexp.Regexp, match Match, filter *Filter) bool {
	// Check that the stream and filter match
	if streamName != "" && streamName != filter.Stream.Name {
		return false
	}
	if filterName != "" && filterName != filter.Name {
		return false
	}
	// Check that all user-requested patterns are in this filter
	var nbMatched int
	var localMatches = match.Split()
	// For each pattern of this filter
	for i, pattern := range filter.Pattern {
		// Check that this pattern has a user-requested name
		if reg, ok := regexes[pattern.Name]; ok {
			// Check that PF.p[i] matches the user-requested pattern
			if reg.MatchString(localMatches[i]) {
				nbMatched++
			}
		}
	}
	if len(regexes) != nbMatched {
		return false
	}
	// All checks passed
	return true
}

func addMatchToCS(cs ClientStatus, pf PF, times map[time.Time]struct{}) {
	patterns, streamName, filterName := pf.P, pf.F.Stream.Name, pf.F.Name
	if cs[streamName] == nil {
		cs[streamName] = make(map[string]MapPatternStatus)
	}
	if cs[streamName][filterName] == nil {
		cs[streamName][filterName] = make(MapPatternStatus)
	}
	cs[streamName][filterName][patterns] = &PatternStatus{len(times), nil}
}

func addActionToCS(cs ClientStatus, pa PA, times map[time.Time]struct{}) {
	patterns, streamName, filterName, actionName := pa.P, pa.A.Filter.Stream.Name, pa.A.Filter.Name, pa.A.Name
	if cs[streamName] == nil {
		cs[streamName] = make(map[string]MapPatternStatus)
	}
	if cs[streamName][filterName] == nil {
		cs[streamName][filterName] = make(MapPatternStatus)
	}
	if cs[streamName][filterName][patterns] == nil {
		cs[streamName][filterName][patterns] = new(PatternStatus)
	}
	ps := cs[streamName][filterName][patterns]
	if ps.Actions == nil {
		ps.Actions = make(map[string][]string)
	}
	for then := range times {
		ps.Actions[actionName] = append(ps.Actions[actionName], then.Format(time.DateTime))
	}
}

func printClientStatus(cs ClientStatus, format string) {
	var text []byte
	var err error
	if format == "json" {
		text, err = json.MarshalIndent(cs, "", " ")
	} else {
		text, err = yaml.Marshal(cs)
	}
	if err != nil {
		logger.Fatalln("Failed to convert daemon binary response to text format:", err)
	}

	fmt.Println(strings.ReplaceAll(string(text), "\\0", " "))
}

func compileKVPatterns(kvpatterns []string) map[string]*regexp.Regexp {
	regexes := make(map[string]*regexp.Regexp)
	for _, p := range kvpatterns {
		// p syntax already checked in Main
		key, value, found := strings.Cut(p, "=")
		if !found {
			logger.Printf(logger.ERROR, "Bad argument: no `=` in %v", p)
			logger.Fatalln("Patterns must be prefixed by their name (e.g. ip=1.1.1.1)")
		}
		if regexes[key] != nil {
			logger.Fatalf("Bad argument: same pattern name provided multiple times: %v", key)
		}
		compiled, err := regexp.Compile(fmt.Sprintf("^%v$", value))
		if err != nil {
			logger.Fatalf("Bad argument: Could not compile: `%v`: %v", value, err)
		}
		regexes[key] = compiled
	}
	return regexes
}

func ClientShow(format, stream, filter string, kvpatterns []string) {
	response := SendAndRetrieve(Request{Info, PSF{}})
	if response.Err != nil {
		logger.Fatalln("Received error from daemon:", response.Err)
	}

	cs := make(ClientStatus)

	var regexes map[string]*regexp.Regexp

	if len(kvpatterns) != 0 {
		regexes = compileKVPatterns(kvpatterns)
	}

	var found bool

	// Painful data manipulation
	for pf, times := range response.Matches {
		// Check that this PF is not empty
		if len(times) == 0 {
			continue
		}
		if !pfMatches(stream, filter, regexes, pf.P, pf.F) {
			continue
		}
		addMatchToCS(cs, pf, times)
		found = true
	}

	// Painful data manipulation
	for pa, times := range response.Actions {
		// Check that this PA is not empty
		if len(times) == 0 {
			continue
		}
		if !pfMatches(stream, filter, regexes, pa.P, pa.A.Filter) {
			continue
		}
		addActionToCS(cs, pa, times)
		found = true
	}

	if !found {
		logger.Println(logger.WARN, "No matching stream.filter items found. This does not mean it doesn't exist, maybe it just didn't receive any match.")
		os.Exit(1)
	}

	printClientStatus(cs, format)

	os.Exit(0)
}

// TODO: Show the values we just flushed; for now we have no details:
/*
 * % ./reaction flush -l ssh.failedlogin login=".*t"
 * ssh:
 *   failedlogin:
 *     actions:
 *       unban:
 *         - "2024-04-30 15:27:28"
 *         - "2024-04-30 15:27:28"
 *         - "2024-04-30 15:27:28"
 *         - "2024-04-30 15:27:28"
 *
 */
func ClientFlush(format, streamName, filterName string, patterns []string) {
	requestedPatterns := compileKVPatterns(patterns)

	// Remember which Filters are compatible with the query
	filterCompatibility := make(map[SF]bool)
	isCompatible := func(filter *Filter) bool {
		sf := SF{filter.Stream.Name, filter.Name}
		compatible, ok := filterCompatibility[sf]

		// already tested
		if ok {
			return compatible
		}

		for k := range requestedPatterns {
			if -1 == slices.IndexFunc(filter.Pattern, func(pattern *Pattern) bool {
				return pattern.Name == k
			}) {
				filterCompatibility[sf] = false
				return false
			}
		}
		filterCompatibility[sf] = true
		return true
	}

	// match functions
	kvMatch := func(filter *Filter, filterPatterns []string) bool {
		// For each user-requested pattern
		for k, v := range requestedPatterns {
			// Find its index in Filter.Pattern
			for i, pattern := range filter.Pattern {
				if k == pattern.Name {
					// Test the match
					if !v.MatchString(filterPatterns[i]) {
						return false
					}
				}
			}
		}
		return true
	}

	var found bool
	fullMatch := func(filter *Filter, match Match) bool {
		// Test if we limit by stream
		if streamName == "" || filter.Stream.Name == streamName {
			// Test if we limit by filter
			if filterName == "" || filter.Name == filterName {
				found = true
				filterPatterns := match.Split()
				return isCompatible(filter) && kvMatch(filter, filterPatterns)
			}
		}
		return false
	}

	response := SendAndRetrieve(Request{Info, PSF{}})
	if response.Err != nil {
		logger.Fatalln("Received error from daemon:", response.Err)
	}

	commands := make([]PSF, 0)

	cs := make(ClientStatus)

	for pf, times := range response.Matches {
		if fullMatch(pf.F, pf.P) {
			commands = append(commands, PSF{pf.P, pf.F.Stream.Name, pf.F.Name})
			addMatchToCS(cs, pf, times)
		}
	}

	for pa, times := range response.Actions {
		if fullMatch(pa.A.Filter, pa.P) {
			commands = append(commands, PSF{pa.P, pa.A.Filter.Stream.Name, pa.A.Filter.Name})
			addActionToCS(cs, pa, times)
		}
	}

	if !found {
		logger.Println(logger.WARN, "No matching stream.filter items found. This does not mean it doesn't exist, maybe it just didn't receive any match.")
		os.Exit(1)
	}

	for _, psf := range commands {
		response := SendAndRetrieve(Request{Flush, psf})
		if response.Err != nil {
			logger.Fatalln("Received error from daemon:", response.Err)
		}
	}

	printClientStatus(cs, format)
	os.Exit(0)
}

func TestRegex(confFilename, regex, line string) {
	conf := parseConf(confFilename)

	// Code close to app/startup.go
	var usedPatterns []*Pattern
	for _, pattern := range conf.Patterns {
		if strings.Contains(regex, pattern.nameWithBraces) {
|
||||
usedPatterns = append(usedPatterns, pattern)
|
||||
regex = strings.Replace(regex, pattern.nameWithBraces, pattern.Regex, 1)
|
||||
}
|
||||
}
|
||||
reg, err := regexp.Compile(regex)
|
||||
if err != nil {
|
||||
logger.Fatalln("ERROR the specified regex is invalid: %v", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
// Code close to app/daemon.go
|
||||
match := func(line string) {
|
||||
var ignored bool
|
||||
if matches := reg.FindStringSubmatch(line); matches != nil {
|
||||
if usedPatterns != nil {
|
||||
var result []string
|
||||
for _, p := range usedPatterns {
|
||||
match := matches[reg.SubexpIndex(p.Name)]
|
||||
result = append(result, match)
|
||||
if !p.notAnIgnore(&match) {
|
||||
ignored = true
|
||||
}
|
||||
}
|
||||
if !ignored {
|
||||
fmt.Printf("\033[32mmatching\033[0m %v: %v\n", WithBrackets(result), line)
|
||||
} else {
|
||||
fmt.Printf("\033[33mignore matching\033[0m %v: %v\n", WithBrackets(result), line)
|
||||
}
|
||||
} else {
|
||||
fmt.Printf("\033[32mmatching\033[0m [%v]:\n", line)
|
||||
}
|
||||
} else {
|
||||
fmt.Printf("\033[31mno match\033[0m: %v\n", line)
|
||||
}
|
||||
}
|
||||
|
||||
if line != "" {
|
||||
match(line)
|
||||
} else {
|
||||
logger.Println(logger.INFO, "no second argument: reading from stdin")
|
||||
scanner := bufio.NewScanner(os.Stdin)
|
||||
for scanner.Scan() {
|
||||
match(scanner.Text())
|
||||
}
|
||||
}
|
||||
}
|
||||
go.old/app/daemon.go (new file)
@@ -0,0 +1,454 @@
package app

import (
	"bufio"
	"os"
	"os/exec"
	"os/signal"
	"strings"
	"sync"
	"syscall"
	"time"

	"framagit.org/ppom/reaction/logger"
)

// Executes a command and channel-send its stdout
func cmdStdout(commandline []string) chan *string {
	lines := make(chan *string)

	go func() {
		cmd := exec.Command(commandline[0], commandline[1:]...)
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			logger.Fatalln("couldn't open stdout on command:", err)
		}
		if err := cmd.Start(); err != nil {
			logger.Fatalln("couldn't start command:", err)
		}
		defer stdout.Close()
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			line := scanner.Text()
			lines <- &line
			logger.Println(logger.DEBUG, "stdout:", line)
		}
		close(lines)
	}()

	return lines
}

func runCommands(commands [][]string, moment string) bool {
	ok := true
	for _, command := range commands {
		cmd := exec.Command(command[0], command[1:]...)
		cmd.WaitDelay = time.Minute

		logger.Printf(logger.INFO, "%v command: run %v\n", moment, command)

		if err := cmd.Start(); err != nil {
			logger.Printf(logger.ERROR, "%v command: run %v: %v", moment, command, err)
			ok = false
		} else {
			err := cmd.Wait()
			if err != nil {
				logger.Printf(logger.ERROR, "%v command: run %v: %v", moment, command, err)
				ok = false
			}
		}
	}
	return ok
}

func (p *Pattern) notAnIgnore(match *string) bool {
	for _, regex := range p.compiledIgnoreRegex {
		if regex.MatchString(*match) {
			return false
		}
	}

	for _, ignore := range p.Ignore {
		if ignore == *match {
			return false
		}
	}
	return true
}

// Whether one of the filter's regexes is matched on a line
func (f *Filter) match(line *string) Match {
	for _, regex := range f.compiledRegex {

		if matches := regex.FindStringSubmatch(*line); matches != nil {
			if f.Pattern != nil {
				var result []string
				for _, p := range f.Pattern {
					match := matches[regex.SubexpIndex(p.Name)]
					if p.notAnIgnore(&match) {
						result = append(result, match)
					}
				}
				if len(result) == len(f.Pattern) {
					logger.Printf(logger.INFO, "%s.%s: match %s", f.Stream.Name, f.Name, WithBrackets(result))
					return JoinMatch(result)
				}
			} else {
				logger.Printf(logger.INFO, "%s.%s: match [.]\n", f.Stream.Name, f.Name)
				// No pattern, so this match will never actually be used
				return "."
			}
		}
	}
	return ""
}

func (f *Filter) sendActions(match Match, at time.Time) {
	for _, a := range f.Actions {
		actionsC <- PAT{match, a, at.Add(a.afterDuration)}
	}
}

func (a *Action) exec(match Match) {
	defer wgActions.Done()

	var computedCommand []string

	if a.Filter.Pattern != nil {
		computedCommand = make([]string, 0, len(a.Cmd))
		matches := match.Split()

		for _, item := range a.Cmd {
			for i, p := range a.Filter.Pattern {
				item = strings.ReplaceAll(item, p.nameWithBraces, matches[i])
			}
			computedCommand = append(computedCommand, item)
		}
	} else {
		computedCommand = a.Cmd
	}

	logger.Printf(logger.INFO, "%s.%s.%s: run %s\n", a.Filter.Stream.Name, a.Filter.Name, a.Name, computedCommand)

	cmd := exec.Command(computedCommand[0], computedCommand[1:]...)

	if ret := cmd.Run(); ret != nil {
		logger.Printf(logger.ERROR, "%s.%s.%s: run %s, code %s\n", a.Filter.Stream.Name, a.Filter.Name, a.Name, computedCommand, ret)
	}
}

func ActionsManager(concurrency int) {
	// concurrency init
	execActionsC := make(chan PA)
	if concurrency > 0 {
		for i := 0; i < concurrency; i++ {
			go func() {
				var pa PA
				for {
					pa = <-execActionsC
					pa.A.exec(pa.P)
				}
			}()
		}
	} else {
		go func() {
			var pa PA
			for {
				pa = <-execActionsC
				go func(pa PA) {
					pa.A.exec(pa.P)
				}(pa)
			}
		}()
	}
	execAction := func(a *Action, p Match) {
		wgActions.Add(1)
		execActionsC <- PA{p, a}
	}

	// main
	pendingActionsC := make(chan PAT)
	for {
		select {
		case pat := <-actionsC:
			pa := PA{pat.P, pat.A}
			pattern, action, then := pat.P, pat.A, pat.T
			now := time.Now()
			// check if must be executed now
			if then.Compare(now) <= 0 {
				execAction(action, pattern)
			} else {
				if actions[pa] == nil {
					actions[pa] = make(map[time.Time]struct{})
				}
				actions[pa][then] = struct{}{}
				go func(insidePat PAT, insideNow time.Time) {
					time.Sleep(insidePat.T.Sub(insideNow))
					pendingActionsC <- insidePat
				}(pat, now)
			}
		case pat := <-pendingActionsC:
			pa := PA{pat.P, pat.A}
			pattern, action, then := pat.P, pat.A, pat.T
			if actions[pa] != nil {
				delete(actions[pa], then)
				execAction(action, pattern)
			}
		case fo := <-flushToActionsC:
			for pa := range actions {
				if fo.S == pa.A.Filter.Stream.Name &&
					fo.F == pa.A.Filter.Name &&
					fo.P == pa.P {
					for range actions[pa] {
						execAction(pa.A, pa.P)
					}
					delete(actions, pa)
					break
				}
			}
		case _, _ = <-stopActions:
			for pa := range actions {
				if pa.A.OnExit {
					for range actions[pa] {
						execAction(pa.A, pa.P)
					}
				}
			}
			wgActions.Done()
			return
		}
	}
}

func MatchesManager() {
	var fo PSF
	var pft PFT
	end := false

	for !end {
		select {
		case fo = <-flushToMatchesC:
			matchesManagerHandleFlush(fo)
		case fo, ok := <-startupMatchesC:
			if !ok {
				end = true
			} else {
				_ = matchesManagerHandleMatch(fo)
			}
		}
	}

	for {
		select {
		case fo = <-flushToMatchesC:
			matchesManagerHandleFlush(fo)
		case pft = <-matchesC:

			entry := LogEntry{pft.T, 0, pft.P, pft.F.Stream.Name, pft.F.Name, 0, false}

			entry.Exec = matchesManagerHandleMatch(pft)

			logsC <- entry
		}
	}
}

func matchesManagerHandleFlush(fo PSF) {
	matchesLock.Lock()
	for pf := range matches {
		if fo.S == pf.F.Stream.Name &&
			fo.F == pf.F.Name &&
			fo.P == pf.P {
			delete(matches, pf)
			break
		}
	}
	matchesLock.Unlock()
}

func matchesManagerHandleMatch(pft PFT) bool {
	matchesLock.Lock()
	defer matchesLock.Unlock()

	filter, patterns, then := pft.F, pft.P, pft.T
	pf := PF{pft.P, pft.F}

	if filter.Retry > 1 {
		// make sure map exists
		if matches[pf] == nil {
			matches[pf] = make(map[time.Time]struct{})
		}
		// add new match
		matches[pf][then] = struct{}{}
		// remove match when expired
		go func(pf PF, then time.Time) {
			time.Sleep(then.Sub(time.Now()) + filter.retryDuration)
			matchesLock.Lock()
			if matches[pf] != nil {
				// FIXME replace this and all similar occurrences
				// by clear() when switching to go 1.21
				delete(matches[pf], then)
			}
			matchesLock.Unlock()
		}(pf, then)
	}

	if filter.Retry <= 1 || len(matches[pf]) >= filter.Retry {
		delete(matches, pf)
		filter.sendActions(patterns, then)
		return true
	}
	return false
}

func StreamManager(s *Stream, endedSignal chan *Stream) {
	defer wgStreams.Done()
	logger.Printf(logger.INFO, "%s: start %s\n", s.Name, s.Cmd)

	lines := cmdStdout(s.Cmd)
	for {
		select {
		case line, ok := <-lines:
			if !ok {
				endedSignal <- s
				return
			}
			for _, filter := range s.Filters {
				if match := filter.match(line); match != "" {
					matchesC <- PFT{match, filter, time.Now()}
				}
			}
		case _, _ = <-stopStreams:
			return
		}
	}

}

var actions ActionsMap
var matches MatchesMap
var matchesLock sync.Mutex

var stopStreams chan bool
var stopActions chan bool
var wgActions sync.WaitGroup
var wgStreams sync.WaitGroup

/*
   <StreamCmds>
        ↓
   StreamManager          onstartup:matches
      ↓    ↓                      ↑
   matches→ MatchesManager →logs→ DatabaseManager ←·
      ↑         ↓                                  ↑
      ↑    actions→ ActionsManager                 ↑
      ↑                  ↑                         ↑
   SocketManager →flushes→→→→→→→→→→·→→→→→→→→→→→→→→→·
      ↑
   <Clients>
*/

// DatabaseManager → MatchesManager
var startupMatchesC chan PFT

// StreamManager → MatchesManager
var matchesC chan PFT

// MatchesManager → DatabaseManager
var logsC chan LogEntry

// MatchesManager → ActionsManager
var actionsC chan PAT

// SocketManager, DatabaseManager → MatchesManager
var flushToMatchesC chan PSF

// SocketManager → ActionsManager
var flushToActionsC chan PSF

// SocketManager → DatabaseManager
var flushToDatabaseC chan LogEntry

func Daemon(confFilename string) {
	conf := parseConf(confFilename)

	startupMatchesC = make(chan PFT)
	matchesC = make(chan PFT)
	logsC = make(chan LogEntry)
	actionsC = make(chan PAT)
	flushToMatchesC = make(chan PSF)
	flushToActionsC = make(chan PSF)
	flushToDatabaseC = make(chan LogEntry)
	stopActions = make(chan bool)
	stopStreams = make(chan bool)
	actions = make(ActionsMap)
	matches = make(MatchesMap)

	_ = runCommands(conf.Start, "start")

	go DatabaseManager(conf)
	go MatchesManager()
	go ActionsManager(conf.Concurrency)

	// Ready to start

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	endSignals := make(chan *Stream)
	nbStreamsInExecution := len(conf.Streams)

	for _, stream := range conf.Streams {
		wgStreams.Add(1)
		go StreamManager(stream, endSignals)
	}

	go SocketManager(conf)

	for {
		select {
		case finishedStream := <-endSignals:
			logger.Printf(logger.ERROR, "%s stream finished", finishedStream.Name)
			nbStreamsInExecution--
			if nbStreamsInExecution == 0 {
				quit(conf, false)
			}
		case <-sigs:
			// Trap endSignals, which may cause a deadlock otherwise
			go func() {
				ok := true
				for ok {
					_, ok = <-endSignals
				}
			}()
			logger.Printf(logger.INFO, "Received SIGINT/SIGTERM, exiting")
			quit(conf, true)
		}
	}
}

func quit(conf *Conf, graceful bool) {
	// send stop to StreamManager·s
	close(stopStreams)
	logger.Println(logger.INFO, "Waiting for Streams to finish...")
	wgStreams.Wait()
	// ActionsManager calls wgActions.Done() when it has launched all pending actions
	wgActions.Add(1)
	// send stop to ActionsManager
	close(stopActions)
	// stop all actions
	logger.Println(logger.INFO, "Waiting for Actions to finish...")
	wgActions.Wait()
	// run stop commands
	stopOk := runCommands(conf.Stop, "stop")
	// delete pipe
	err := os.Remove(*SocketPath)
	if err != nil {
		logger.Println(logger.ERROR, "Failed to remove socket:", err)
	}

	if !stopOk || !graceful {
		os.Exit(1)
	}
	os.Exit(0)
}
go.old/app/example.yml (new file)
@@ -0,0 +1,108 @@
---
# This example configuration file is a good starting point, but you're
# strongly encouraged to take a look at the full documentation: https://reaction.ppom.me
#
# This file is using the well-established YAML configuration language.
# Note that the more powerful JSONnet configuration language is also supported
# and that the documentation uses JSONnet

# definitions are just a place to put chunks of conf you want to reuse in another place
# using YAML anchors `&name` and pointers `*name`
# definitions are not read by reaction
definitions:
  - &iptablesban [ 'ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  - &iptablesunban [ 'ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
  # ip46tables is a minimal C program (only POSIX dependencies) present as a subdirectory.
  # it permits to handle both ipv4/iptables and ipv6/ip6tables commands

# if set to a positive number → max number of concurrent actions
# if set to a negative number → no limit
# if not specified or set to 0 → defaults to the number of CPUs on the system
concurrency: 0

# patterns are substituted in regexes.
# when a filter performs an action, it replaces the found pattern
patterns:
  ip:
    # reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
    # simple version: regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
    regex: '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))'
    ignore:
      - 127.0.0.1
      - ::1
    # Patterns can be ignored based on regexes, it will try to match the whole string detected by the pattern
    # ignoreregex:
    #   - '10\.0\.[0-9]{1,3}\.[0-9]{1,3}'

# Those commands will be executed in order at start, before everything else
start:
  - [ 'ip46tables', '-w', '-N', 'reaction' ]
  - [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]

# Those commands will be executed in order at stop, after everything else
stop:
  - [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
  - [ 'ip46tables', '-w', '-F', 'reaction' ]
  - [ 'ip46tables', '-w', '-X', 'reaction' ]

# streams are commands
# they are run and their output is captured
# *example:* `tail -f /var/log/nginx/access.log`
# their output will be used by one or more filters
streams:
  # streams have a user-defined name
  ssh:
    # note that if the command is not in environment's `PATH`
    # its full path must be given.
    cmd: [ 'journalctl', '-n0', '-fu', 'sshd.service' ]
    # filters run actions when they match regexes on a stream
    filters:
      # filters have a user-defined name
      failedlogin:
        # reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
        regex:
          # <ip> is predefined in the patterns section
          # ip's regex is inserted in the following regex
          - 'authentication failure;.*rhost=<ip>'
          - 'Failed password for .* from <ip>'
          - 'Connection (reset|closed) by (authenticating|invalid) user .* <ip>'
        # if retry and retryperiod are defined,
        # the actions will only take place if a same pattern is
        # found `retry` times in a `retryperiod` interval
        retry: 3
        # format is defined here: https://pkg.go.dev/time#ParseDuration
        retryperiod: 6h
        # actions are run by the filter when regexes are matched
        actions:
          # actions have a user-defined name
          ban:
            # YAML substitutes *reference by the value anchored at &reference
            cmd: *iptablesban
          unban:
            cmd: *iptablesunban
            # if after is defined, the action will not take place immediately, but after a specified duration
            # same format as retryperiod
            after: 48h
            # let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
            # if you want reaction to run those pending commands before exiting, you can set this:
            # onexit: true
            # (defaults to false)
            # here it is not useful because we will flush and delete the chain containing the bans anyway
            # (with the stop commands)

# persistence
# tldr; when an `after` action is set in a filter, such filter acts as a 'jail',
# which is persisted after reboots.
#
# when a filter is triggered, there are 2 flows:
#
# if none of its actions have an `after` directive set:
#   no action will be replayed.
#
# else (if at least one action has an `after` directive set):
#   if reaction stops while `after` actions are pending:
#     and reaction starts again while those actions would still be pending:
#       reaction executes the past actions (actions without after or with then+after < now)
#       and plans the execution of future actions (actions with then+after > now)
go.old/app/main.go (new file)
@@ -0,0 +1,230 @@
package app

import (
	_ "embed"
	"flag"
	"fmt"
	"os"
	"strings"

	"framagit.org/ppom/reaction/logger"
)

func addStringFlag(names []string, defvalue string, f *flag.FlagSet) *string {
	var value string
	for _, name := range names {
		f.StringVar(&value, name, defvalue, "")
	}
	return &value
}

func addBoolFlag(names []string, f *flag.FlagSet) *bool {
	var value bool
	for _, name := range names {
		f.BoolVar(&value, name, false, "")
	}
	return &value
}

var SocketPath *string

func addSocketFlag(f *flag.FlagSet) *string {
	return addStringFlag([]string{"s", "socket"}, "/run/reaction/reaction.sock", f)
}

func addConfFlag(f *flag.FlagSet) *string {
	return addStringFlag([]string{"c", "config"}, "", f)
}

func addFormatFlag(f *flag.FlagSet) *string {
	return addStringFlag([]string{"f", "format"}, "yaml", f)
}

func addLimitFlag(f *flag.FlagSet) *string {
	return addStringFlag([]string{"l", "limit"}, "", f)
}

func addLevelFlag(f *flag.FlagSet) *string {
	return addStringFlag([]string{"l", "loglevel"}, "INFO", f)
}

func subCommandParse(f *flag.FlagSet, maxRemainingArgs int) {
	help := addBoolFlag([]string{"h", "help"}, f)
	f.Parse(os.Args[2:])
	if *help {
		basicUsage()
		os.Exit(0)
	}
	// -1 = no limit to remaining args
	if maxRemainingArgs > -1 && len(f.Args()) > maxRemainingArgs {
		fmt.Printf("ERROR unrecognized argument(s): %v\n", f.Args()[maxRemainingArgs:])
		basicUsage()
		os.Exit(1)
	}
}

func basicUsage() {
	const (
		bold  = "\033[1m"
		reset = "\033[0m"
	)
	fmt.Print(
		bold + `reaction help` + reset + `
  # print this help message

` + bold + `reaction start` + reset + `
  # start the daemon

  # options:
  -c/--config CONFIG_FILE # configuration file in json, jsonnet or yaml format (required)
  -l/--loglevel LEVEL     # minimum log level to show, in DEBUG, INFO, WARN, ERROR, FATAL
                          # (default: INFO)
  -s/--socket SOCKET      # path to the client-daemon communication socket
                          # (default: /run/reaction/reaction.sock)

` + bold + `reaction example-conf` + reset + `
  # print a configuration file example

` + bold + `reaction show` + reset + ` [NAME=PATTERN...]
  # show current matches and which actions are still to be run for the specified PATTERN regex(es)
  # (e.g. know what is currently banned)

  reaction show
  reaction show "ip=192.168.1.1"
  reaction show "ip=192\.168\..*" login=root

  # options:
  -s/--socket SOCKET         # path to the client-daemon communication socket
  -f/--format yaml|json      # (default: yaml)
  -l/--limit STREAM[.FILTER] # only show items related to this STREAM (or STREAM.FILTER)

` + bold + `reaction flush` + reset + ` NAME=PATTERN [NAME=PATTERN...]
  # remove currently active matches and run currently pending actions for the specified PATTERN regex(es)
  # (then show flushed matches and actions)

  reaction flush "ip=192.168.1.1"
  reaction flush "ip=192\.168\..*" login=root

  # options:
  -s/--socket SOCKET       # path to the client-daemon communication socket
  -f/--format yaml|json    # (default: yaml)
  -l/--limit STREAM.FILTER # flush only items related to this STREAM.FILTER

` + bold + `reaction test-regex` + reset + ` REGEX LINE        # test REGEX against LINE
cat FILE | ` + bold + `reaction test-regex` + reset + ` REGEX # test REGEX against each line of FILE

  # options:
  -c/--config CONFIG_FILE # configuration file in json, jsonnet or yaml format
                          # optional: permits to use configured patterns like <ip> in regex

` + bold + `reaction version` + reset + `
  # print version information

see usage examples, service configurations and good practices
on the ` + bold + `wiki` + reset + `: https://reaction.ppom.me
`)
}

//go:embed example.yml
var exampleConf string

func Main(version string) {
	if len(os.Args) <= 1 {
		logger.Fatalln("No argument provided. Try `reaction help`")
		basicUsage()
		os.Exit(1)
	}
	f := flag.NewFlagSet(os.Args[1], flag.ExitOnError)
	switch os.Args[1] {
	case "help", "-h", "-help", "--help":
		basicUsage()

	case "version", "-v", "--version":
		fmt.Printf("reaction version %v\n", version)

	case "example-conf":
		subCommandParse(f, 0)
		fmt.Print(exampleConf)

	case "start":
		SocketPath = addSocketFlag(f)
		confFilename := addConfFlag(f)
		logLevel := addLevelFlag(f)
		subCommandParse(f, 0)
		if *confFilename == "" {
			logger.Fatalln("no configuration file provided")
			basicUsage()
			os.Exit(1)
		}
		logLevelType := logger.FromString(*logLevel)
		if logLevelType == logger.UNKNOWN {
			logger.Fatalf("Log Level %v not recognized", *logLevel)
			basicUsage()
			os.Exit(1)
		}
		logger.SetLogLevel(logLevelType)
		Daemon(*confFilename)

	case "show":
		SocketPath = addSocketFlag(f)
		queryFormat := addFormatFlag(f)
		limit := addLimitFlag(f)
		subCommandParse(f, -1)
		if *queryFormat != "yaml" && *queryFormat != "json" {
			logger.Fatalln("only yaml and json formats are supported")
		}
		stream, filter := "", ""
		if *limit != "" {
			splitSF := strings.Split(*limit, ".")
			stream = splitSF[0]
			if len(splitSF) == 2 {
				filter = splitSF[1]
			} else if len(splitSF) > 2 {
				logger.Fatalln("-l/--limit: only one . separator is supported")
			}
		}
		ClientShow(*queryFormat, stream, filter, f.Args())

	case "flush":
		SocketPath = addSocketFlag(f)
		queryFormat := addFormatFlag(f)
		limit := addLimitFlag(f)
		subCommandParse(f, -1)
		if *queryFormat != "yaml" && *queryFormat != "json" {
			logger.Fatalln("only yaml and json formats are supported")
		}
		if len(f.Args()) == 0 {
			logger.Fatalln("subcommand flush takes at least one TARGET argument")
		}
		stream, filter := "", ""
		if *limit != "" {
			splitSF := strings.Split(*limit, ".")
			stream = splitSF[0]
			if len(splitSF) == 2 {
				filter = splitSF[1]
			} else if len(splitSF) > 2 {
				logger.Fatalln("-l/--limit: only one . separator is supported")
			}
		}
		ClientFlush(*queryFormat, stream, filter, f.Args())

	case "test-regex":
		// socket not needed, no interaction with the daemon
		confFilename := addConfFlag(f)
		subCommandParse(f, 2)
		if *confFilename == "" {
			logger.Println(logger.WARN, "no configuration file provided. Can't make use of registered patterns.")
		}
		if f.Arg(0) == "" {
			logger.Fatalln("subcommand test-regex takes at least one REGEX argument")
			basicUsage()
			os.Exit(1)
		}
		TestRegex(*confFilename, f.Arg(0), f.Arg(1))

	default:
		logger.Fatalf("subcommand %v not recognized. Try `reaction help`", os.Args[1])
		basicUsage()
		os.Exit(1)
	}
}
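The `show` and `flush` subcommands above parse `-l/--limit` as either `STREAM` or `STREAM.FILTER`, rejecting more than one `.` separator. A standalone sketch of that parsing (the function name is illustrative; in `main.go` the logic is inlined):

```go
package main

import (
	"fmt"
	"strings"
)

// parseLimit splits a -l/--limit value into stream and optional filter,
// mirroring the inline logic of the show/flush subcommands.
func parseLimit(limit string) (stream, filter string, err error) {
	if limit == "" {
		return "", "", nil
	}
	parts := strings.Split(limit, ".")
	switch len(parts) {
	case 1:
		return parts[0], "", nil
	case 2:
		return parts[0], parts[1], nil
	default:
		return "", "", fmt.Errorf("-l/--limit: only one . separator is supported")
	}
}

func main() {
	s, f, _ := parseLimit("ssh.failedlogin")
	fmt.Println(s, f) // ssh failedlogin
	_, _, err := parseLimit("a.b.c")
	fmt.Println(err != nil) // true
}
```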
go.old/app/persist.go (new file)
@@ -0,0 +1,264 @@
package app

import (
	"encoding/gob"
	"errors"
	"io"
	"os"
	"time"

	"framagit.org/ppom/reaction/logger"
)

const (
	logDBName    = "./reaction-matches.db"
	logDBNewName = "./reaction-matches.new.db"
	flushDBName  = "./reaction-flushes.db"
)

func openDB(path string) (bool, *ReadDB) {
	file, err := os.Open(path)
	if err != nil {
		if errors.Is(err, os.ErrNotExist) {
			logger.Printf(logger.WARN, "No DB found at %s. It's ok if this is the first time reaction is running.\n", path)
			return true, nil
		}
		logger.Fatalln("Failed to open DB:", err)
	}
	return false, &ReadDB{file, gob.NewDecoder(file)}
}

func createDB(path string) *WriteDB {
	file, err := os.Create(path)
	if err != nil {
		logger.Fatalln("Failed to create DB:", err)
	}
	return &WriteDB{file, gob.NewEncoder(file)}
}

func DatabaseManager(c *Conf) {
	logDB, flushDB := c.RotateDB(true)
	close(startupMatchesC)
	c.manageLogs(logDB, flushDB)
}

func (c *Conf) manageLogs(logDB *WriteDB, flushDB *WriteDB) {
	cpt := 0
	writeSF2int := make(map[SF]int)
	writeCpt := 1
	for {
		select {
		case entry := <-flushToDatabaseC:
			flushDB.enc.Encode(entry)
		case entry := <-logsC:
			encodeOrFatal(logDB.enc, entry, writeSF2int, &writeCpt)
			cpt++
			// let's say 100 000 entries ~ 10 MB
			if cpt == 500_000 {
				cpt = 0
				logger.Printf(logger.INFO, "Rotating database...")
				logDB.file.Close()
				flushDB.file.Close()
				logDB, flushDB = c.RotateDB(false)
				logger.Printf(logger.INFO, "Rotated database")
			}
		}
	}
}

func (c *Conf) RotateDB(startup bool) (*WriteDB, *WriteDB) {
	var (
		doesntExist  bool
		err          error
		logReadDB    *ReadDB
		flushReadDB  *ReadDB
		logWriteDB   *WriteDB
		flushWriteDB *WriteDB
	)
	doesntExist, logReadDB = openDB(logDBName)
	if doesntExist {
		return createDB(logDBName), createDB(flushDBName)
	}
	doesntExist, flushReadDB = openDB(flushDBName)
	if doesntExist {
		logger.Println(logger.WARN, "Strange! No flushes db, opening /dev/null instead")
		doesntExist, flushReadDB = openDB("/dev/null")
		if doesntExist {
			logger.Fatalln("Opening dummy /dev/null failed")
		}
	}

	logWriteDB = createDB(logDBNewName)

	rotateDB(c, logReadDB.dec, flushReadDB.dec, logWriteDB.enc, startup)

	err = logReadDB.file.Close()
	if err != nil {
		logger.Fatalln("Failed to close old DB:", err)
	}

	// It should be ok to rename an open file
	err = os.Rename(logDBNewName, logDBName)
	if err != nil {
		logger.Fatalln("Failed to replace old DB with new one:", err)
	}

	err = os.Remove(flushDBName)
	if err != nil && !errors.Is(err, os.ErrNotExist) {
		logger.Fatalln("Failed to delete old DB:", err)
	}

	flushWriteDB = createDB(flushDBName)
	return logWriteDB, flushWriteDB
}

func rotateDB(c *Conf, logDec *gob.Decoder, flushDec *gob.Decoder, logEnc *gob.Encoder, startup bool) {
	// This mapping is a space optimization feature
	// It permits to compress stream+filter to a small number (which is a byte in gob)
	// We do this only for matches, not for flushes
	readSF2int := make(map[int]SF)
	writeSF2int := make(map[SF]int)
	writeCounter := 1
	// This extra code is made to warn only one time for each non-existent filter
	discardedEntries := make(map[SF]int)
	malformedEntries := 0
	defer func() {
		for sf, t := range discardedEntries {
			if t > 0 {
				logger.Printf(logger.WARN, "info discarded %v times from the DBs: stream/filter not found: %s.%s\n", t, sf.S, sf.F)
			}
		}
if malformedEntries > 0 {
|
||||
logger.Printf(logger.WARN, "%v malformed entries discarded from the DBs\n", malformedEntries)
|
||||
}
|
||||
}()
|
||||
|
||||
// pattern, stream, fitler → last flush
|
||||
flushes := make(map[*PSF]time.Time)
|
||||
for {
|
||||
var entry LogEntry
|
||||
var filter *Filter
|
||||
// decode entry
|
||||
err := flushDec.Decode(&entry)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
malformedEntries++
|
||||
continue
|
||||
}
|
||||
|
||||
// retrieve related filter
|
||||
if entry.Stream != "" || entry.Filter != "" {
|
||||
if stream := c.Streams[entry.Stream]; stream != nil {
|
||||
filter = stream.Filters[entry.Filter]
|
||||
}
|
||||
if filter == nil {
|
||||
discardedEntries[SF{entry.Stream, entry.Filter}]++
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
// store
|
||||
flushes[&PSF{entry.Pattern, entry.Stream, entry.Filter}] = entry.T
|
||||
}
|
||||
|
||||
lastTimeCpt := int64(0)
|
||||
now := time.Now()
|
||||
for {
|
||||
var entry LogEntry
|
||||
var filter *Filter
|
||||
|
||||
// decode entry
|
||||
err := logDec.Decode(&entry)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
malformedEntries++
|
||||
continue
|
||||
}
|
||||
|
||||
// retrieve related stream & filter
|
||||
if entry.Stream == "" && entry.Filter == "" {
|
||||
sf, ok := readSF2int[entry.SF]
|
||||
if !ok {
|
||||
discardedEntries[SF{"", ""}]++
|
||||
continue
|
||||
}
|
||||
entry.Stream = sf.S
|
||||
entry.Filter = sf.F
|
||||
}
|
||||
if stream := c.Streams[entry.Stream]; stream != nil {
|
||||
filter = stream.Filters[entry.Filter]
|
||||
}
|
||||
if filter == nil {
|
||||
discardedEntries[SF{entry.Stream, entry.Filter}]++
|
||||
continue
|
||||
}
|
||||
if entry.SF != 0 {
|
||||
readSF2int[entry.SF] = SF{entry.Stream, entry.Filter}
|
||||
}
|
||||
|
||||
// check if number of patterns is in sync
|
||||
if len(entry.Pattern.Split()) != len(filter.Pattern) {
|
||||
continue
|
||||
}
|
||||
|
||||
// check if it hasn't been flushed
|
||||
lastGlobalFlush := flushes[&PSF{entry.Pattern, "", ""}].Unix()
|
||||
lastLocalFlush := flushes[&PSF{entry.Pattern, entry.Stream, entry.Filter}].Unix()
|
||||
entryTime := entry.T.Unix()
|
||||
if lastLocalFlush > entryTime || lastGlobalFlush > entryTime {
|
||||
continue
|
||||
}
|
||||
|
||||
// restore time
|
||||
if entry.T.IsZero() {
|
||||
entry.T = time.Unix(entry.S, lastTimeCpt)
|
||||
}
|
||||
lastTimeCpt++
|
||||
|
||||
// store matches
|
||||
if !entry.Exec && entry.T.Add(filter.retryDuration).Unix() > now.Unix() {
|
||||
if startup {
|
||||
startupMatchesC <- PFT{entry.Pattern, filter, entry.T}
|
||||
}
|
||||
|
||||
encodeOrFatal(logEnc, entry, writeSF2int, &writeCounter)
|
||||
}
|
||||
|
||||
// replay executions
|
||||
if entry.Exec && entry.T.Add(*filter.longuestActionDuration).Unix() > now.Unix() {
|
||||
if startup {
|
||||
flushToMatchesC <- PSF{entry.Pattern, entry.Stream, entry.Filter}
|
||||
filter.sendActions(entry.Pattern, entry.T)
|
||||
}
|
||||
|
||||
encodeOrFatal(logEnc, entry, writeSF2int, &writeCounter)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func encodeOrFatal(enc *gob.Encoder, entry LogEntry, writeSF2int map[SF]int, writeCounter *int) {
|
||||
// Stream/Filter reduction
|
||||
sf, ok := writeSF2int[SF{entry.Stream, entry.Filter}]
|
||||
if ok {
|
||||
entry.SF = sf
|
||||
entry.Stream = ""
|
||||
entry.Filter = ""
|
||||
} else {
|
||||
entry.SF = *writeCounter
|
||||
writeSF2int[SF{entry.Stream, entry.Filter}] = *writeCounter
|
||||
*writeCounter++
|
||||
}
|
||||
// Time reduction
|
||||
if !entry.T.IsZero() {
|
||||
entry.S = entry.T.Unix()
|
||||
entry.T = time.Time{}
|
||||
}
|
||||
err := enc.Encode(entry)
|
||||
if err != nil {
|
||||
logger.Fatalln("Failed to write to new DB:", err)
|
||||
}
|
||||
}
|
||||
go.old/app/pipe.go (new file, 81 lines)
@@ -0,0 +1,81 @@
package app

import (
	"encoding/gob"
	"errors"
	"net"
	"os"
	"path"
	"time"

	"framagit.org/ppom/reaction/logger"
)

func createOpenSocket() net.Listener {
	err := os.MkdirAll(path.Dir(*SocketPath), 0755)
	if err != nil {
		logger.Fatalln("Failed to create socket directory")
	}
	_, err = os.Stat(*SocketPath)
	if err == nil {
		logger.Println(logger.WARN, "socket", *SocketPath, "already exists: Is the daemon already running? Deleting.")
		err = os.Remove(*SocketPath)
		if err != nil {
			logger.Fatalln("Failed to remove socket:", err)
		}
	}
	ln, err := net.Listen("unix", *SocketPath)
	if err != nil {
		logger.Fatalln("Failed to create socket:", err)
	}
	return ln
}

// Handle connections
//func SocketManager(streams map[string]*Stream) {
func SocketManager(conf *Conf) {
	ln := createOpenSocket()
	defer ln.Close()
	for {
		conn, err := ln.Accept()
		if err != nil {
			logger.Println(logger.ERROR, "Failed to open connection from cli:", err)
			continue
		}
		go func(conn net.Conn) {
			defer conn.Close()
			var request Request
			var response Response

			err := gob.NewDecoder(conn).Decode(&request)
			if err != nil {
				logger.Println(logger.ERROR, "Invalid Message from cli:", err)
				return
			}

			switch request.Request {
			case Info:
				// response.Config = *conf
				response.Matches = matches
				response.Actions = actions
			case Flush:
				le := LogEntry{time.Now(), 0, request.Flush.P, request.Flush.S, request.Flush.F, 0, false}

				flushToMatchesC <- request.Flush
				flushToActionsC <- request.Flush
				flushToDatabaseC <- le

			default:
				logger.Println(logger.ERROR, "Invalid Message from cli: unrecognised command type")
				// fall through to the encode below so the cli receives the error
				response.Err = errors.New("unrecognised command type")
			}

			err = gob.NewEncoder(conn).Encode(response)
			if err != nil {
				logger.Println(logger.ERROR, "Can't respond to cli:", err)
				return
			}
		}(conn)
	}
}
go.old/app/startup.go (new file, 178 lines)
@@ -0,0 +1,178 @@
package app

import (
	"encoding/json"
	"fmt"
	"os"
	"regexp"
	"runtime"
	"slices"
	"strings"
	"time"

	"framagit.org/ppom/reaction/logger"

	"github.com/google/go-jsonnet"
)

func (c *Conf) setup() {
	if c.Concurrency == 0 {
		c.Concurrency = runtime.NumCPU()
	}

	// Ensure we iterate through the c.Patterns map in reproducible order
	sortedPatternNames := make([]string, 0, len(c.Patterns))
	for k := range c.Patterns {
		sortedPatternNames = append(sortedPatternNames, k)
	}
	slices.Sort(sortedPatternNames)

	for _, patternName := range sortedPatternNames {
		pattern := c.Patterns[patternName]
		pattern.Name = patternName
		pattern.nameWithBraces = fmt.Sprintf("<%s>", pattern.Name)

		if pattern.Regex == "" {
			logger.Fatalf("Bad configuration: pattern's regex %v is empty!", patternName)
		}

		compiled, err := regexp.Compile(fmt.Sprintf("^%v$", pattern.Regex))
		if err != nil {
			logger.Fatalf("Bad configuration: pattern %v: %v", patternName, err)
		}
		pattern.Regex = fmt.Sprintf("(?P<%s>%s)", patternName, pattern.Regex)
		for _, ignore := range pattern.Ignore {
			if !compiled.MatchString(ignore) {
				logger.Fatalf("Bad configuration: pattern ignore '%v' doesn't match pattern %v! It should be fixed or removed.", ignore, pattern.nameWithBraces)
			}
		}

		// Compile ignore regexes
		for _, regex := range pattern.IgnoreRegex {
			// Enclose the regex to make sure that it matches the whole detected string
			compiledRegex, err := regexp.Compile("^" + regex + "$")
			if err != nil {
				logger.Fatalf("Bad configuration: in ignoreregex of pattern %s: %v", pattern.Name, err)
			}

			pattern.compiledIgnoreRegex = append(pattern.compiledIgnoreRegex, *compiledRegex)
		}
	}

	if len(c.Streams) == 0 {
		logger.Fatalln("Bad configuration: no streams configured!")
	}
	for streamName := range c.Streams {

		stream := c.Streams[streamName]
		stream.Name = streamName

		if strings.Contains(stream.Name, ".") {
			logger.Fatalf("Bad configuration: character '.' is not allowed in stream names: '%v'", stream.Name)
		}

		if len(stream.Filters) == 0 {
			logger.Fatalf("Bad configuration: no filters configured in %v", stream.Name)
		}
		for filterName := range stream.Filters {

			filter := stream.Filters[filterName]
			filter.Stream = stream
			filter.Name = filterName

			if strings.Contains(filter.Name, ".") {
				logger.Fatalf("Bad configuration: character '.' is not allowed in filter names: '%v'", filter.Name)
			}
			// Parse Duration
			if filter.RetryPeriod == "" {
				if filter.Retry > 1 {
					logger.Fatalf("Bad configuration: retry but no retryperiod in %v.%v", stream.Name, filter.Name)
				}
			} else {
				retryDuration, err := time.ParseDuration(filter.RetryPeriod)
				if err != nil {
					logger.Fatalf("Bad configuration: Failed to parse retry time in %v.%v: %v", stream.Name, filter.Name, err)
				}
				filter.retryDuration = retryDuration
			}

			if len(filter.Regex) == 0 {
				logger.Fatalf("Bad configuration: no regexes configured in %v.%v", stream.Name, filter.Name)
			}
			// Compute Regexes
			// Look for Patterns inside Regexes
			for _, regex := range filter.Regex {
				// iterate through patterns in reproducible order
				for _, patternName := range sortedPatternNames {
					pattern := c.Patterns[patternName]
					if strings.Contains(regex, pattern.nameWithBraces) {
						if !slices.Contains(filter.Pattern, pattern) {
							filter.Pattern = append(filter.Pattern, pattern)
						}
						regex = strings.Replace(regex, pattern.nameWithBraces, pattern.Regex, 1)
					}
				}
				compiledRegex, err := regexp.Compile(regex)
				if err != nil {
					logger.Fatalf("Bad configuration: regex of filter %s.%s: %v", stream.Name, filter.Name, err)
				}
				filter.compiledRegex = append(filter.compiledRegex, *compiledRegex)
			}

			if len(filter.Actions) == 0 {
				logger.Fatalln("Bad configuration: no actions configured in", stream.Name, ".", filter.Name)
			}
			for actionName := range filter.Actions {

				action := filter.Actions[actionName]
				action.Filter = filter
				action.Name = actionName

				if strings.Contains(action.Name, ".") {
					logger.Fatalln("Bad configuration: character '.' is not allowed in action names", action.Name)
				}
				// Parse Duration
				if action.After != "" {
					afterDuration, err := time.ParseDuration(action.After)
					if err != nil {
						logger.Fatalln("Bad configuration: Failed to parse after time in ", stream.Name, ".", filter.Name, ".", action.Name, ":", err)
					}
					action.afterDuration = afterDuration
				} else if action.OnExit {
					logger.Fatalln("Bad configuration: Cannot have `onexit: true` without an `after` directive in", stream.Name, ".", filter.Name, ".", action.Name)
				}
				if filter.longuestActionDuration == nil || filter.longuestActionDuration.Milliseconds() < action.afterDuration.Milliseconds() {
					filter.longuestActionDuration = &action.afterDuration
				}
			}
		}
	}
}

func parseConf(filename string) *Conf {

	data, err := os.Open(filename)
	if err != nil {
		logger.Fatalln("Failed to read configuration file:", err)
	}

	var conf Conf
	if strings.HasSuffix(filename, ".yml") || strings.HasSuffix(filename, ".yaml") {
		err = jsonnet.NewYAMLToJSONDecoder(data).Decode(&conf)
		if err != nil {
			logger.Fatalln("Failed to parse yaml configuration file:", err)
		}
	} else {
		var jsondata string
		jsondata, err = jsonnet.MakeVM().EvaluateFile(filename)
		if err == nil {
			err = json.Unmarshal([]byte(jsondata), &conf)
		}
		if err != nil {
			logger.Fatalln("Failed to parse json configuration file:", err)
		}
	}

	conf.setup()
	return &conf
}
go.old/app/types.go (new file, 200 lines)
@@ -0,0 +1,200 @@
package app

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

type Conf struct {
	Concurrency int                 `json:"concurrency"`
	Patterns    map[string]*Pattern `json:"patterns"`
	Streams     map[string]*Stream  `json:"streams"`
	Start       [][]string          `json:"start"`
	Stop        [][]string          `json:"stop"`
}

type Pattern struct {
	Regex  string   `json:"regex"`
	Ignore []string `json:"ignore"`

	IgnoreRegex         []string        `json:"ignoreregex"`
	compiledIgnoreRegex []regexp.Regexp `json:"-"`

	Name           string `json:"-"`
	nameWithBraces string `json:"-"`
}

// Stream, Filter & Action structures must never be copied.
// They're always referenced through pointers

type Stream struct {
	Name string `json:"-"`

	Cmd     []string           `json:"cmd"`
	Filters map[string]*Filter `json:"filters"`
}

type LilStream struct {
	Name string
}

func (s *Stream) GobEncode() ([]byte, error) {
	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)
	err := enc.Encode(LilStream{s.Name})
	return buf.Bytes(), err
}

func (s *Stream) GobDecode(b []byte) error {
	var ls LilStream
	dec := gob.NewDecoder(bytes.NewReader(b))
	err := dec.Decode(&ls)
	s.Name = ls.Name
	return err
}

type Filter struct {
	Stream *Stream `json:"-"`
	Name   string  `json:"-"`

	Regex         []string        `json:"regex"`
	compiledRegex []regexp.Regexp `json:"-"`
	Pattern       []*Pattern      `json:"-"`

	Retry         int           `json:"retry"`
	RetryPeriod   string        `json:"retryperiod"`
	retryDuration time.Duration `json:"-"`

	Actions                map[string]*Action `json:"actions"`
	longuestActionDuration *time.Duration
}

// those small versions are needed to prevent infinite recursion in gob because of
// data cycles: Stream <-> Filter, Filter <-> Action
type LilFilter struct {
	Stream  *Stream
	Name    string
	Pattern []*Pattern
}

func (f *Filter) GobDecode(b []byte) error {
	var lf LilFilter
	dec := gob.NewDecoder(bytes.NewReader(b))
	err := dec.Decode(&lf)
	f.Stream = lf.Stream
	f.Name = lf.Name
	f.Pattern = lf.Pattern
	return err
}

func (f *Filter) GobEncode() ([]byte, error) {
	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)
	err := enc.Encode(LilFilter{f.Stream, f.Name, f.Pattern})
	return buf.Bytes(), err
}

type Action struct {
	Filter *Filter `json:"-"`
	Name   string  `json:"-"`

	Cmd []string `json:"cmd"`

	After         string        `json:"after"`
	afterDuration time.Duration `json:"-"`

	OnExit bool `json:"onexit"`
}

type LilAction struct {
	Filter *Filter
	Name   string
}

func (a *Action) GobEncode() ([]byte, error) {
	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)
	err := enc.Encode(LilAction{a.Filter, a.Name})
	return buf.Bytes(), err
}

func (a *Action) GobDecode(b []byte) error {
	var la LilAction
	dec := gob.NewDecoder(bytes.NewReader(b))
	err := dec.Decode(&la)
	a.Filter = la.Filter
	a.Name = la.Name
	return err
}

type LogEntry struct {
	T              time.Time
	S              int64
	Pattern        Match
	Stream, Filter string
	SF             int
	Exec           bool
}

type ReadDB struct {
	file *os.File
	dec  *gob.Decoder
}

type WriteDB struct {
	file *os.File
	enc  *gob.Encoder
}

type MatchesMap map[PF]map[time.Time]struct{}
type ActionsMap map[PA]map[time.Time]struct{}

// This is a "\x00"-joined string
// which contains all matches on a line.
type Match string

func (m *Match) Split() []string {
	return strings.Split(string(*m), "\x00")
}

func JoinMatch(mm []string) Match {
	return Match(strings.Join(mm, "\x00"))
}

func WithBrackets(mm []string) string {
	var b strings.Builder
	for _, match := range mm {
		fmt.Fprintf(&b, "[%s]", match)
	}
	return b.String()
}

// Helper structs made to carry information
// Stream, Filter
type SF struct{ S, F string }

// Pattern, Stream, Filter
type PSF struct {
	P    Match
	S, F string
}

type PF struct {
	P Match
	F *Filter
}

type PFT struct {
	P Match
	F *Filter
	T time.Time
}

type PA struct {
	P Match
	A *Action
}

type PAT struct {
	P Match
	A *Action
	T time.Time
}
go.old/go.mod (new file, 10 lines)
@@ -0,0 +1,10 @@
module framagit.org/ppom/reaction

go 1.21

require (
	github.com/google/go-jsonnet v0.20.0
	sigs.k8s.io/yaml v1.1.0
)

require gopkg.in/yaml.v2 v2.4.0 // indirect
go.old/go.sum (new file, 9 lines)
@@ -0,0 +1,9 @@
github.com/google/go-jsonnet v0.20.0 h1:WG4TTSARuV7bSm4PMB4ohjxe33IHT5WVTrJSU33uT4g=
github.com/google/go-jsonnet v0.20.0/go.mod h1:VbgWF9JX7ztlv770x/TolZNGGFfiHEVx9G6ca2eUmeA=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
go.old/logger/log.go (new file, 80 lines)
@@ -0,0 +1,80 @@
package logger

import "log"

type Level int

const (
	UNKNOWN = Level(-1)
	DEBUG   = Level(1)
	INFO    = Level(2)
	WARN    = Level(3)
	ERROR   = Level(4)
	FATAL   = Level(5)
)

func (l Level) String() string {
	switch l {
	case DEBUG:
		return "DEBUG "
	case INFO:
		return "INFO  "
	case WARN:
		return "WARN  "
	case ERROR:
		return "ERROR "
	case FATAL:
		return "FATAL "
	default:
		return "????? "
	}
}

func FromString(s string) Level {
	switch s {
	case "DEBUG":
		return DEBUG
	case "INFO":
		return INFO
	case "WARN":
		return WARN
	case "ERROR":
		return ERROR
	case "FATAL":
		return FATAL
	default:
		return UNKNOWN
	}
}

var LogLevel Level = 2

func SetLogLevel(level Level) {
	LogLevel = level
}

func Println(level Level, args ...any) {
	if level >= LogLevel {
		newargs := make([]any, 0)
		newargs = append(newargs, level)
		newargs = append(newargs, args...)
		log.Println(newargs...)
	}
}

func Printf(level Level, format string, args ...any) {
	if level >= LogLevel {
		log.Printf(level.String()+format, args...)
	}
}

func Fatalln(args ...any) {
	newargs := make([]any, 0)
	newargs = append(newargs, FATAL)
	newargs = append(newargs, args...)
	log.Fatalln(newargs...)
}

func Fatalf(format string, args ...any) {
	log.Fatalf(FATAL.String()+format, args...)
}
go.old/reaction.go (new file, 13 lines)
@@ -0,0 +1,13 @@
package main

import (
	"framagit.org/ppom/reaction/app"
)

func main() {
	app.Main(version)
}

var (
	version = "v1.4.2"
)
helpers_c/ip46tables.c (new file, 91 lines)
@@ -0,0 +1,91 @@
#include <ctype.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// If this program
// - receives an IPv4 address in its arguments:
//   → it executes iptables with the same arguments in place.
//
// - receives an IPv6 address in its arguments:
//   → it executes ip6tables with the same arguments in place.
//
// - doesn't receive an IPv4 or IPv6 address in its arguments:
//   → it executes both, with the same arguments in place.

int isIPv4(char *tab) {
	int i, len;
	// IPv4 addresses are at least 7 chars long
	len = strlen(tab);
	if (len < 7 || !isdigit(tab[0]) || !isdigit(tab[len-1])) {
		return 0;
	}
	// Each char must be a digit or a dot between 2 digits
	for (i = 1; i < len-1; i++) {
		if (!isdigit(tab[i]) && !(tab[i] == '.' && isdigit(tab[i-1]) && isdigit(tab[i+1]))) {
			return 0;
		}
	}
	return 1;
}

int isIPv6(char *tab) {
	int i, len;
	// IPv6 addresses are at least 3 chars long
	len = strlen(tab);
	if (len < 3) {
		return 0;
	}
	// Each char must be a digit, ':', a-f, or A-F
	for (i = 0; i < len; i++) {
		if (!isdigit(tab[i]) && tab[i] != ':' && !(tab[i] >= 'a' && tab[i] <= 'f') && !(tab[i] >= 'A' && tab[i] <= 'F')) {
			return 0;
		}
	}
	return 1;
}

int guess_type(int len, char *tab[]) {
	int i;
	for (i = 0; i < len; i++) {
		if (isIPv4(tab[i])) {
			return 4;
		} else if (isIPv6(tab[i])) {
			return 6;
		}
	}
	return 0;
}

void exec(char *str, char **argv) {
	argv[0] = str;
	execvp(str, argv);
	// reached only if execvp fails
	printf("ip46tables: exec failed %d\n", errno);
	exit(1);
}

int main(int argc, char **argv) {
	if (argc < 2) {
		printf("ip46tables: At least one argument has to be given\n");
		exit(1);
	}
	int type;
	type = guess_type(argc, argv);
	if (type == 4) {
		exec("iptables", argv);
	} else if (type == 6) {
		exec("ip6tables", argv);
	} else {
		pid_t pid = fork();
		if (pid == -1) {
			printf("ip46tables: fork failed\n");
			exit(1);
		} else if (pid) {
			exec("iptables", argv);
		} else {
			exec("ip6tables", argv);
		}
	}
}
helpers_c/nft46.c (new file, 97 lines)
@@ -0,0 +1,97 @@
#include <ctype.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// nft46 'add element inet reaction ipvXbans { 1.2.3.4 }'  → nft 'add element inet reaction ipv4bans { 1.2.3.4 }'
// nft46 'add element inet reaction ipvXbans { a:b::c:d }' → nft 'add element inet reaction ipv6bans { a:b::c:d }'
//
// the character X is replaced by 4 or 6 depending on the address family of the specified IP
//
// Limitations:
// - nft46 must receive exactly one argument
// - only one IP must be given per command
// - the IP must be between { braces }

int isIPv4(char *tab, int len) {
	int i;
	// IPv4 addresses are at least 7 chars long
	if (len < 7 || !isdigit(tab[0]) || !isdigit(tab[len-1])) {
		return 0;
	}
	// Each char must be a digit or a dot between 2 digits
	for (i = 1; i < len-1; i++) {
		if (!isdigit(tab[i]) && !(tab[i] == '.' && isdigit(tab[i-1]) && isdigit(tab[i+1]))) {
			return 0;
		}
	}
	return 1;
}

int isIPv6(char *tab, int len) {
	int i;
	// IPv6 addresses are at least 3 chars long
	if (len < 3) {
		return 0;
	}
	// Each char must be a digit, ':', '.', a-f, or A-F
	for (i = 0; i < len; i++) {
		if (!isdigit(tab[i]) && tab[i] != ':' && tab[i] != '.' && !(tab[i] >= 'a' && tab[i] <= 'f') && !(tab[i] >= 'A' && tab[i] <= 'F')) {
			return 0;
		}
	}
	return 1;
}

int findchar(char *tab, char c, int i, int len) {
	while (i < len && tab[i] != c) i++;
	if (i == len) {
		printf("nft46: one %c must be present\n", c);
		exit(1);
	}
	return i;
}

void adapt_args(char *tab) {
	int i, len, X, startIP, endIP;
	X = startIP = endIP = -1;
	len = strlen(tab);
	i = 0;
	X = i = findchar(tab, 'X', i, len);
	startIP = i = findchar(tab, '{', i, len);
	while (startIP + 1 <= (i = findchar(tab, ' ', i, len))) startIP = i + 1;
	i = startIP;
	endIP = i = findchar(tab, ' ', i, len) - 1;

	if (isIPv4(tab + startIP, endIP - startIP + 1)) {
		tab[X] = '4';
		return;
	}

	if (isIPv6(tab + startIP, endIP - startIP + 1)) {
		tab[X] = '6';
		return;
	}

	printf("nft46: no IP address found\n");
	exit(1);
}

void exec(char *str, char **argv) {
	argv[0] = str;
	execvp(str, argv);
	// reached only if execvp fails
	printf("nft46: exec failed %d\n", errno);
	exit(1);
}

int main(int argc, char **argv) {
	if (argc != 2) {
		printf("nft46: Exactly one argument must be given\n");
		exit(1);
	}
	adapt_args(argv[1]);
	exec("nft", argv);
}
|
|
@ -1,34 +0,0 @@
|
|||
<?xml version="1.0" standalone="no"?>
|
||||
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN" "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
|
||||
<!-- Created using Karbon14, part of koffice: http://www.koffice.org/karbon -->
|
||||
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="449px" height="168px">
|
||||
<defs>
|
||||
</defs>
|
||||
<g id="Layer">
|
||||
</g>
|
||||
<g id="Layer">
|
||||
<path fill="#98bf00" d="M446.602 73.8789L449.102 60.234L436.207 60.234L439.957 40.145L424.512 46.191L422.012 60.234L412.617 60.234L410.117 73.8789L419.363 73.8789L416.215 91.1719C416.066 92.125 415.816 93.5234 415.566 95.3203C415.316 97.1211 415.164 98.7188 415.164 100.07C415.215 106.316 416.715 111.465 419.664 115.516C422.613 119.66 427.41 122.109 434.109 122.859L440.555 109.566C437.105 109.117 434.508 107.766 432.66 105.469C430.809 103.117 429.91 100.168 429.91 96.5703C429.91 95.8711 430.012 94.8711 430.16 93.5234C430.309 92.1719 430.461 91.0742 430.609 90.2227L433.609 73.8789L446.602 73.8789L446.602 73.8789Z" />
|
||||
<path fill="#98bf00" d="M310.707 72.332C313.105 71.4805 315.207 71.0312 316.957 71.0312C318.855 71.0312 320.453 71.582 321.754 72.6797C323.004 73.7305 323.602 75.2812 323.602 77.4297C323.602 78.0273 323.504 78.9297 323.301 80.1797C323.102 81.3281 322.953 82.3789 322.805 83.2773L319.203 100.168C318.953 101.469 318.703 102.82 318.453 104.219C318.203 105.668 318.105 106.918 318.105 107.965C318.105 112.016 319.203 115.414 321.453 118.113C323.602 120.812 327.449 122.41 333 122.859L339.348 110.016C337.195 109.668 335.648 108.867 334.699 107.617C333.699 106.418 333.199 104.719 333.199 102.57C333.199 102.07 333.25 101.469 333.348 100.82C333.398 100.168 333.5 99.6211 333.547 99.2188L337.195 82.0273C337.496 80.5781 337.746 79.1289 337.945 77.6797C338.148 76.2812 338.246 74.8789 338.246 73.5312C338.246 68.582 336.797 64.586 333.898 61.637C330.949 58.688 326.852 57.188 321.602 57.188C318.555 57.188 315.656 57.688 312.809 58.688C310.008 59.637 306.609 61.234 302.66 63.586C302.512 62.637 302.16 61.484 301.66 60.188C301.113 58.938 300.512 57.836 299.863 56.836L286.469 62.586C287.617 64.336 288.516 66.184 289.066 68.082C289.566 69.9805 289.816 71.7812 289.816 73.4297C289.816 74.2812 289.766 75.3281 289.617 76.4805C289.516 77.6289 289.367 78.5273 289.215 79.1797L281.27 121.512L295.664 121.512L304.109 75.8281C306.16 74.2812 308.359 73.1289 310.707 72.332L310.707 72.332Z" />
|
||||
<path fill="#98bf00" d="M350.742 80.0781C349.191 84.6758 348.441 89.5742 348.441 94.7227C348.441 99.2188 349.043 103.219 350.191 106.719C351.34 110.215 352.992 113.164 355.09 115.516C357.141 117.914 359.688 119.711 362.637 120.961C365.586 122.211 368.883 122.859 372.484 122.859C376.832 122.859 381.129 122.062 385.43 120.461C389.777 118.863 393.574 116.363 396.824 113.016L391.426 100.519C388.926 103.32 386.176 105.418 383.129 106.867C380.078 108.316 377.031 109.016 374.031 109.016C370.535 109.016 367.785 107.918 365.785 105.719C363.836 103.469 362.836 100.668 362.836 97.3711L362.836 96.4219C362.836 96.0234 362.887 95.6211 362.988 95.2227C365.637 94.8711 368.633 94.4219 371.984 93.8242C375.332 93.2227 378.73 92.5234 382.18 91.7227C385.629 90.875 388.977 89.9258 392.273 88.9258C395.523 87.9258 398.422 86.875 400.871 85.8242L400.871 80.0781C400.871 76.5312 400.32 73.332 399.223 70.4805C398.074 67.734 396.574 65.332 394.625 63.285C392.676 61.285 390.324 59.785 387.676 58.785C385.078 57.738 382.23 57.188 379.18 57.188C374.73 57.188 370.582 58.188 366.836 60.137C363.035 62.086 359.789 64.785 357.141 68.2344C354.391 71.6328 352.293 75.5781 350.742 80.0781L350.742 80.0781ZM372.383 69.9805C373.934 69.1328 375.684 68.7344 377.633 68.7344C380.281 68.7344 382.48 69.582 384.227 71.332C385.977 73.0312 386.879 75.5781 386.879 79.0273C385.43 79.4766 383.727 80.0273 381.73 80.5781C379.68 81.0781 377.633 81.5781 375.531 82.0273C373.383 82.4766 371.332 82.9258 369.285 83.3281C367.234 83.6758 365.484 83.9766 363.984 84.2266C364.234 82.1289 364.688 80.1289 365.387 78.2773C366.137 76.4297 367.086 74.7812 368.234 73.3789C369.484 71.9805 370.832 70.832 372.383 69.9805L372.383 69.9805Z" fill-rule="evenodd" />
<path fill="#000000" d="M404.172 140.453C404.172 139.203 403.969 138.055 403.57 137.055C403.172 136.055 402.621 135.207 401.973 134.457C401.27 133.758 400.473 133.207 399.523 132.856C398.574 132.508 397.523 132.309 396.422 132.309C394.973 132.309 393.625 132.606 392.375 133.156C391.125 133.707 390.027 134.508 389.078 135.504C388.125 136.504 387.379 137.656 386.828 139.004C386.277 140.356 385.977 141.805 385.977 143.402C385.977 144.652 386.176 145.75 386.578 146.801C386.926 147.801 387.477 148.652 388.176 149.352C388.828 150.101 389.676 150.648 390.625 151.051C391.574 151.399 392.625 151.598 393.773 151.598C395.176 151.598 396.523 151.301 397.773 150.75C399.023 150.199 400.121 149.398 401.07 148.402C402.02 147.449 402.77 146.25 403.32 144.902C403.871 143.551 404.172 142.055 404.172 140.453L404.172 140.453ZM390.277 140.402C390.574 139.504 390.977 138.703 391.477 138.004C392.023 137.305 392.676 136.754 393.426 136.305C394.176 135.856 394.973 135.656 395.922 135.656C397.371 135.656 398.422 136.106 399.172 137.004C399.922 137.856 400.32 139.106 400.32 140.652C400.32 141.602 400.172 142.555 399.871 143.504C399.621 144.402 399.223 145.203 398.672 145.902C398.121 146.602 397.473 147.152 396.723 147.601C395.973 148 395.125 148.199 394.223 148.199C392.773 148.199 391.727 147.75 390.977 146.902C390.227 146 389.824 144.801 389.824 143.254C389.824 142.305 389.977 141.352 390.277 140.402L390.277 140.402Z" fill-rule="evenodd" />
<path fill="#000000" d="M434.559 132.559L431.008 132.559L429.109 143.602C429.059 143.754 429.012 144.004 429.012 144.352C429.012 144.703 429.012 144.953 429.012 145.203L428.859 145.203L422.465 132.559L419.113 132.559L415.766 151.301L419.363 151.301L421.363 140.004C421.414 139.856 421.414 139.606 421.414 139.356C421.414 139.106 421.414 138.805 421.414 138.504L421.563 138.504L428.109 151.449L431.309 151.149L434.559 132.559L434.559 132.559Z" />
<path fill="#000000" d="M374.383 132.559L370.734 132.559L367.387 151.301L371.082 151.301L374.383 132.559L374.383 132.559Z" />
<path fill="#000000" d="M328.949 132.559L324.703 132.559C323.902 133.906 323.051 135.457 322.102 137.106C321.152 138.754 320.254 140.453 319.355 142.152C318.453 143.852 317.656 145.5 316.906 147.102C316.156 148.699 315.555 150.101 315.105 151.301L318.953 151.301C319.105 150.949 319.254 150.5 319.453 150.051C319.652 149.602 319.855 149.102 320.105 148.652C320.305 148.199 320.504 147.75 320.703 147.301C320.902 146.852 321.102 146.453 321.254 146.102L327.75 146.102C327.801 146.551 327.801 147 327.852 147.5L328 148.949C328.051 149.398 328.102 149.852 328.152 150.301C328.199 150.75 328.199 151.098 328.199 151.449L331.898 151.149C331.898 150.449 331.848 149.648 331.75 148.699C331.699 147.75 331.551 146.75 331.398 145.703C331.25 144.652 331.098 143.504 330.898 142.351C330.75 141.203 330.551 140.055 330.301 138.906C330.102 137.754 329.898 136.656 329.648 135.555C329.398 134.508 329.199 133.508 328.949 132.559L328.949 132.559ZM326.602 138.106C326.703 138.656 326.801 139.254 326.902 139.902C327 140.504 327.102 141.106 327.152 141.652C327.25 142.203 327.301 142.601 327.352 142.953L322.703 142.953C322.953 142.504 323.203 142.004 323.453 141.453C323.754 140.902 324.051 140.305 324.352 139.703C324.703 139.106 325 138.555 325.301 138.004C325.602 137.453 325.852 136.957 326.102 136.606L326.301 136.606C326.402 137.004 326.5 137.504 326.602 138.106L326.602 138.106Z" fill-rule="evenodd" />
<path fill="#000000" d="M357.641 135.957L358.188 132.559L345.395 132.559L344.844 135.957L349.391 135.957L346.742 151.301L350.391 151.301L353.09 135.957L357.641 135.957L357.641 135.957Z" />
<path fill="#000000" d="M297.465 132.309C296.414 132.309 295.363 132.356 294.312 132.457C293.266 132.606 292.266 132.758 291.316 133.008L288.168 150.852C289.117 151.098 290.215 151.25 291.414 151.399C292.566 151.551 293.664 151.598 294.715 151.598C296.262 151.598 297.664 151.348 299.012 150.852C300.363 150.301 301.562 149.602 302.562 148.652C303.559 147.699 304.359 146.551 304.961 145.203C305.508 143.852 305.809 142.305 305.809 140.606C305.809 139.254 305.609 138.106 305.211 137.055C304.762 136.004 304.211 135.156 303.461 134.457C302.711 133.758 301.812 133.207 300.812 132.856C299.762 132.508 298.664 132.309 297.465 132.309L297.465 132.309ZM296.664 135.707C297.414 135.707 298.113 135.805 298.762 135.957C299.414 136.106 299.961 136.406 300.41 136.805C300.91 137.203 301.312 137.703 301.562 138.356C301.812 138.953 301.961 139.703 301.961 140.652C301.961 141.852 301.812 142.902 301.461 143.852C301.16 144.801 300.711 145.602 300.113 146.25C299.512 146.902 298.812 147.352 297.961 147.699C297.113 148.051 296.215 148.199 295.164 148.199C294.715 148.199 294.266 148.199 293.715 148.152C293.164 148.102 292.664 148.051 292.316 148L294.465 135.906C294.766 135.856 295.164 135.805 295.613 135.754C296.062 135.707 296.414 135.707 296.664 135.707L296.664 135.707Z" fill-rule="evenodd" />
<path fill="#000000" d="M185.809 62.586C186.957 64.336 187.855 66.184 188.406 68.082C188.906 69.9805 189.156 71.7812 189.156 73.4297C189.156 74.2812 189.105 75.3281 188.957 76.4805C188.855 77.6289 188.707 78.5273 188.555 79.1797L180.609 121.512L195.004 121.512L203.449 75.8281C205.5 74.2812 207.699 73.1289 210.047 72.332C212.445 71.4805 214.547 71.0312 216.297 71.0312C218.195 71.0312 219.793 71.582 221.094 72.6797C222.344 73.7305 222.941 75.2812 222.941 77.4297C222.941 78.0273 222.844 78.9297 222.645 80.1797C222.441 81.3281 222.293 82.3789 222.145 83.2773L218.543 100.168C218.293 101.469 218.043 102.82 217.793 104.219C217.547 105.668 217.445 106.918 217.445 107.965C217.445 112.016 218.543 115.414 220.793 118.113C222.941 120.812 226.793 122.41 232.34 122.859L238.688 110.016C236.539 109.668 234.988 108.867 234.039 107.617C233.039 106.418 232.539 104.719 232.539 102.57C232.539 102.07 232.59 101.469 232.688 100.82C232.738 100.168 232.84 99.6211 232.891 99.2188L236.539 82.0273C236.836 80.5781 237.086 79.1289 237.285 77.6797C237.488 76.2812 237.586 74.8789 237.586 73.5312C237.586 68.582 236.137 64.586 233.238 61.637C230.289 58.688 226.191 57.188 220.945 57.188C217.895 57.188 214.996 57.688 212.148 58.688C209.348 59.637 205.949 61.234 202 63.586C201.852 62.637 201.5 61.484 201 60.188C200.453 58.938 199.852 57.836 199.203 56.836L185.809 62.586L185.809 62.586Z" />
<path fill="#000000" d="M276.82 31.547L262.676 31.547L251.883 90.0234C251.43 91.9727 251.082 94.0234 250.832 96.1719C250.582 98.2695 250.434 100.219 250.434 102.019C250.434 107.816 251.531 112.566 253.781 116.262C256.031 119.961 259.828 122.16 265.176 122.859L271.672 109.566C270.625 109.066 269.723 108.516 268.875 107.918C268.023 107.367 267.324 106.617 266.773 105.769C266.176 104.918 265.727 103.918 265.477 102.719C265.227 101.519 265.074 100.019 265.074 98.2695C265.074 97.4219 265.125 96.4727 265.227 95.4727C265.375 94.4219 265.527 93.3711 265.676 92.2734L276.82 31.547L276.82 31.547Z" />
<path fill="#000000" d="M246.434 132.559L242.785 132.559L240.387 146.25C239.887 146.801 239.285 147.25 238.535 147.652C237.785 148 236.988 148.199 236.086 148.199C235.188 148.199 234.488 148 233.988 147.601C233.438 147.152 233.188 146.453 233.188 145.402C233.188 145.203 233.238 144.902 233.289 144.504C233.34 144.152 233.34 143.801 233.387 143.504L235.387 132.559L231.688 132.559L229.738 143.453C229.691 143.902 229.641 144.352 229.59 144.801C229.539 145.25 229.539 145.602 229.539 145.953C229.539 146.953 229.691 147.801 229.988 148.551C230.289 149.301 230.691 149.852 231.191 150.301C231.738 150.75 232.34 151.098 232.988 151.301C233.688 151.5 234.387 151.598 235.137 151.598C236.988 151.598 238.637 151.051 240.137 149.898C240.137 150.148 240.137 150.449 240.188 150.75C240.188 151 240.188 151.25 240.234 151.5L243.883 151.25C243.836 151 243.836 150.75 243.836 150.449C243.785 150.199 243.785 149.898 243.785 149.551C243.785 148.949 243.836 148.301 243.883 147.652C243.934 146.953 243.984 146.301 244.133 145.703L246.434 132.559L246.434 132.559Z" />
<path fill="#000000" d="M276.621 132.559L273.074 132.559L271.172 143.602C271.125 143.754 271.074 144.004 271.074 144.352C271.074 144.703 271.074 144.953 271.074 145.203L270.922 145.203L264.527 132.559L261.176 132.559L257.828 151.301L261.426 151.301L263.426 140.004C263.477 139.856 263.477 139.606 263.477 139.356C263.477 139.106 263.477 138.805 263.477 138.504L263.625 138.504L270.176 151.449L273.371 151.149L276.621 132.559L276.621 132.559Z" />
<path fill="#000000" d="M214.797 134.457C214.098 133.758 213.297 133.207 212.348 132.856C211.398 132.508 210.348 132.309 209.25 132.309C207.801 132.309 206.449 132.606 205.199 133.156C203.949 133.707 202.852 134.508 201.902 135.504C200.953 136.504 200.203 137.656 199.652 139.004C199.102 140.356 198.801 141.805 198.801 143.402C198.801 144.652 199.004 145.75 199.402 146.801C199.754 147.801 200.301 148.652 201 149.352C201.652 150.101 202.5 150.648 203.449 151.051C204.398 151.399 205.449 151.598 206.598 151.598C208 151.598 209.348 151.301 210.598 150.75C211.848 150.199 212.945 149.398 213.895 148.402C214.848 147.449 215.598 146.25 216.145 144.902C216.695 143.551 216.996 142.055 216.996 140.453C216.996 139.203 216.797 138.055 216.395 137.055C215.996 136.055 215.445 135.207 214.797 134.457L214.797 134.457ZM204.301 138.004C204.852 137.305 205.5 136.754 206.25 136.305C207 135.856 207.801 135.656 208.75 135.656C210.199 135.656 211.246 136.106 211.996 137.004C212.746 137.856 213.148 139.106 213.148 140.652C213.148 141.602 212.996 142.555 212.695 143.504C212.445 144.402 212.047 145.203 211.496 145.902C210.949 146.602 210.297 147.152 209.547 147.601C208.797 148 207.949 148.199 207.051 148.199C205.602 148.199 204.551 147.75 203.801 146.902C203.051 146 202.652 144.801 202.652 143.254C202.652 142.305 202.801 141.352 203.102 140.402C203.402 139.504 203.801 138.703 204.301 138.004L204.301 138.004Z" fill-rule="evenodd" />
<path fill="#000000" d="M188.258 132.559L177.961 132.559L174.613 151.301L178.312 151.301L179.559 144.152L186.309 144.152L186.906 140.754L180.16 140.754L181.008 135.957L187.656 135.957L188.258 132.559L188.258 132.559Z" />
<path fill="#98bf00" d="M127.082 44.891C128.43 33.945 125.684 24.102 118.883 15.402C112.086 6.707 103.191 1.66 92.2461 0.309C81.3008 -1.039 71.4531 1.711 62.7578 8.508C54.7109 14.754 49.8125 22.801 48.0625 32.648C47.9141 33.496 47.7617 34.297 47.6641 35.145C47.5625 35.996 47.5117 36.797 47.4648 37.594C47.1133 42.191 47.5625 46.59 48.7617 50.789C50.1133 55.688 52.4609 60.285 55.8594 64.633C59.2578 68.9805 63.1563 72.3828 67.6055 74.9297C71.3516 77.0781 75.5 78.5273 80.0508 79.3281C80.8516 79.4766 81.6484 79.5781 82.5 79.7266C82.9492 79.7773 83.3984 79.8281 83.8477 79.8789C84.9492 75.4297 86.6484 71.2812 88.9961 67.531C87.4453 67.582 85.8477 67.531 84.25 67.383C84.1484 67.332 84.0977 67.332 84.0469 67.332C82.1992 67.082 80.3984 66.734 78.75 66.184C73.6016 64.535 69.2539 61.484 65.707 56.938C62.1562 52.391 60.2578 47.441 59.9062 42.043C59.8086 40.293 59.8594 38.543 60.1094 36.695C60.1094 36.645 60.1094 36.547 60.1094 36.496C61.0586 29.047 64.5078 23 70.4531 18.352C76.4531 13.703 83.1992 11.805 90.7461 12.754C98.293 13.656 104.391 17.102 109.039 23.102C113.688 29.098 115.586 35.844 114.688 43.395C114.438 45.094 114.137 46.691 113.688 48.242C117.887 46.891 122.281 46.191 126.883 46.242C126.93 45.793 127.031 45.344 127.082 44.891L127.082 44.891Z" />
<path fill="#98bf00" d="M132.328 51.488C131.48 51.391 130.68 51.289 129.828 51.238C125.23 50.941 120.832 51.391 116.637 52.539C111.738 53.887 107.141 56.289 102.789 59.688C98.4414 63.035 95.043 66.934 92.5469 71.3828C90.3945 75.1289 88.9453 79.2773 88.0977 83.8281C92.4453 84.5742 96.4453 85.8242 100.141 87.6758C100.391 85.875 100.742 84.1758 101.242 82.5781C102.891 77.4297 105.941 73.082 110.488 69.5312C115.035 65.984 119.984 64.035 125.434 63.684C127.18 63.586 128.93 63.633 130.781 63.883C130.828 63.883 130.879 63.883 130.93 63.883C138.375 64.836 144.426 68.332 149.074 74.2812C153.77 80.2266 155.668 86.9766 154.719 94.5234C153.77 102.07 150.32 108.168 144.375 112.863C138.426 117.512 131.68 119.363 124.23 118.461C125.082 122.512 125.332 126.758 125.031 131.156C134.977 131.809 143.973 128.957 152.02 122.711C160.719 115.914 165.766 107.016 167.113 96.0703C168.465 85.125 165.715 75.2812 158.918 66.582C152.621 58.535 144.574 53.637 134.777 51.891C133.93 51.738 133.129 51.59 132.328 51.488L132.328 51.488Z" />
<path fill="#000000" d="M128.93 78.7266C125.48 78.3281 122.434 79.1797 119.684 81.3281C116.934 83.4766 115.387 86.2266 114.984 89.625C114.535 93.0742 115.387 96.1211 117.535 98.8711C119.684 101.621 122.434 103.168 125.883 103.57C129.281 104.019 132.328 103.168 135.078 101.019C137.828 98.8711 139.375 96.1211 139.824 92.6719C140.227 89.2734 139.375 86.2266 137.227 83.4766C135.078 80.7266 132.328 79.1797 128.93 78.7266L128.93 78.7266Z" />
<path fill="#98bf00" d="M12.8281 73.6289C13.7773 66.082 17.2266 59.938 23.2227 55.289C29.1719 50.641 35.8672 48.742 43.3164 49.691C42.4648 45.641 42.1641 41.395 42.5156 36.996C32.5703 36.344 23.5742 39.145 15.5273 45.441C6.77734 52.238 1.78125 61.137 0.433594 72.082C-0.917969 83.0273 1.78125 92.8242 8.62891 101.57C14.875 109.617 22.9219 114.516 32.7695 116.262C33.5703 116.414 34.3672 116.512 35.2188 116.664C36.0664 116.762 36.8672 116.863 37.7188 116.914C42.3164 117.215 46.7148 116.762 50.9102 115.613C55.7578 114.215 60.4062 111.816 64.7578 108.465C69.0547 105.066 72.4531 101.168 75.0039 96.7695C77.1523 93.0234 78.6016 88.875 79.4492 84.3281C75.1016 83.5781 71.1055 82.2773 67.4062 80.4766C67.1563 82.2266 66.8047 83.9258 66.3047 85.5742C64.6562 90.7227 61.6055 95.0703 57.0586 98.6211C52.5117 102.168 47.5625 104.117 42.1641 104.469C40.4141 104.566 38.6172 104.519 36.7656 104.269C36.7188 104.269 36.668 104.269 36.6172 104.219C29.1719 103.269 23.1211 99.8203 18.4727 93.8711C13.7773 87.875 11.8789 81.1289 12.8281 73.6289L12.8281 73.6289Z" />
<path fill="#000000" d="M32.4688 67.133C29.7188 69.2305 28.1719 72.0312 27.7227 75.4805C27.3203 78.8281 28.1719 81.8789 30.3203 84.625C32.418 87.375 35.168 88.9727 38.6172 89.4258C42.0664 89.7734 45.1133 88.9258 47.8633 86.8242C50.5625 84.6758 52.1094 81.8789 52.5625 78.5273C53.0117 75.0781 52.1602 71.9805 50.0117 69.2812C47.8633 66.535 45.1133 64.984 41.6641 64.586C38.2148 64.133 35.168 64.984 32.4688 67.133L32.4688 67.133Z" />
<path fill="#000000" d="M97.293 32.348C95.1445 29.598 92.3438 28.047 88.9453 27.648C85.4961 27.199 82.4492 28.047 79.75 30.199C77 32.297 75.4023 35.098 75.0039 38.543C74.5508 41.941 75.4531 44.992 77.6016 47.742C79.6992 50.441 82.4492 52.039 85.8984 52.488C89.2969 52.84 92.3438 51.988 95.0938 49.891C97.8438 47.742 99.3906 44.941 99.8438 41.594C100.242 38.145 99.3906 35.047 97.293 32.348L97.293 32.348Z" />
<path fill="#98bf00" d="M85.0469 88.4258C84.5977 88.375 84.1484 88.3242 83.6992 88.2734C82.5977 92.7227 80.8984 96.8711 78.5508 100.621C80.1016 100.519 81.6992 100.57 83.3477 100.769C83.3984 100.769 83.4492 100.769 83.5 100.82C85.3477 101.019 87.0977 101.371 88.7969 101.918C93.9453 103.57 98.293 106.668 101.84 111.215C105.391 115.715 107.289 120.66 107.641 126.109C107.738 127.859 107.688 129.609 107.438 131.457C107.438 131.508 107.438 131.559 107.438 131.656C106.488 139.106 103.039 145.152 97.0938 149.801C91.0938 154.449 84.3477 156.348 76.8008 155.398C69.2539 154.449 63.1563 151 58.5078 145.051C53.8086 139.055 51.9102 132.309 52.8594 124.762C53.0625 123.062 53.4102 121.461 53.9102 119.91C49.6641 121.262 45.2656 121.91 40.6641 121.91C40.6172 122.359 40.5156 122.812 40.4648 123.262C39.1172 134.207 41.8164 144.004 48.6641 152.75C55.4609 161.445 64.3555 166.492 75.3008 167.844C86.2461 169.191 96.043 166.445 104.789 159.645C112.836 153.348 117.734 145.301 119.484 135.457C119.633 134.656 119.734 133.856 119.883 133.008C119.934 132.156 120.035 131.359 120.082 130.559C120.383 125.91 119.934 121.512 118.785 117.363C117.434 112.465 115.035 107.867 111.688 103.519C108.289 99.1719 104.391 95.7227 99.9922 93.2227C96.1953 91.0742 92.0469 89.625 87.4961 88.8242C86.6992 88.6758 85.8984 88.5234 85.0469 88.4258L85.0469 88.4258Z" />
<path fill="#000000" d="M89.9961 120.41C87.8477 117.664 85.0977 116.113 81.6484 115.664C78.1992 115.266 75.1523 116.113 72.4531 118.262C69.7031 120.41 68.1562 123.16 67.7031 126.559C67.2539 130.008 68.1562 133.059 70.3047 135.805C72.4024 138.555 75.1523 140.106 78.6016 140.504C82.0508 140.953 85.0977 140.106 87.8477 137.953C90.5469 135.805 92.0938 133.059 92.5469 129.609C92.9453 126.211 92.0938 123.16 89.9961 120.41L89.9961 120.41Z" />
</g>
</svg>
Before Width: | Height: | Size: 20 KiB |
@@ -4,23 +4,19 @@ MANDIR = $(PREFIX)/share/man/man1
SYSTEMDDIR ?= /etc/systemd

install:
	install -Dm755 reaction $(DESTDIR)$(BINDIR)
	install -Dm755 reaction-plugin-virtual $(DESTDIR)$(BINDIR)
	install -Dm644 reaction*.1 -t $(DESTDIR)$(MANDIR)/
	install -Dm644 reaction.bash $(DESTDIR)$(PREFIX)/share/bash-completion/completions/reaction
	install -Dm644 reaction.fish $(DESTDIR)$(PREFIX)/share/fish/vendor_completions.d/reaction.fish
	install -Dm644 _reaction $(DESTDIR)$(PREFIX)/share/zsh/vendor-completions/_reaction
	install -Dm644 reaction.service $(SYSTEMDDIR)/system/reaction.service

install-ipset:
	install -Dm755 reaction-plugin-ipset $(DESTDIR)$(BINDIR)
	install -m755 reaction nft46 ip46tables $(DESTDIR)$(BINDIR)
	install -m644 reaction*.1 $(DESTDIR)$(MANDIR)/man/man1/
	install -m644 reaction.bash $(DESTDIR)/share/bash-completion/completions/reaction
	install -m644 reaction.fish $(DESTDIR)/share/fish/completions/
	install -m644 _reaction $(DESTDIR)/share/zsh/vendor-completions/
	install -m644 reaction.service $(SYSTEMDDIR)/system/reaction.service

remove:
	rm -f $(DESTDIR)$(BINDIR)/bin/reaction
	rm -f $(DESTDIR)$(BINDIR)/bin/reaction-plugin-virtual
	rm -f $(DESTDIR)$(BINDIR)/bin/reaction-plugin-ipset
	rm -f $(DESTDIR)$(MANDIR)/reaction*.1
	rm -f $(DESTDIR)$(PREFIX)/share/bash-completion/completions/reaction
	rm -f $(DESTDIR)$(PREFIX)/share/fish/vendor_completions.d/reaction.fish
	rm -f $(DESTDIR)$(PREFIX)/share/zsh/vendor-completions/_reaction
	rm -f $(DESTDIR)$(BINDIR)/bin/nft46
	rm -f $(DESTDIR)$(BINDIR)/bin/ip46tables
	rm -f $(DESTDIR)$(MANDIR)/man/man1/reaction*.1
	rm -f $(DESTDIR)/share/bash-completion/completions/reaction
	rm -f $(DESTDIR)/share/fish/completions/
	rm -f $(DESTDIR)/share/zsh/vendor-completions/
	rm -f $(SYSTEMDDIR)/system/reaction.service

@@ -1,13 +1,13 @@
# vim: ft=systemd
[Unit]
Description=reaction daemon
Description=A daemon that scans program outputs for repeated patterns, and takes action.
Documentation=https://reaction.ppom.me
# Ensure reaction will insert its chain after docker has inserted theirs. Only useful when iptables & docker are used
# After=docker.service

# See `man systemd.exec` and `man systemd.service` for most options below
[Service]
ExecStart=/usr/bin/reaction start -c /etc/reaction/
ExecStart=/usr/bin/reaction start -c /etc/%i

# Ask systemd to create /var/lib/reaction (/var/lib/ is implicit)
StateDirectory=reaction

@@ -15,10 +15,6 @@ StateDirectory=reaction
RuntimeDirectory=reaction
# Start reaction in its state directory
WorkingDirectory=/var/lib/reaction
# Let reaction kill its child processes first
KillMode=mixed
# Put reaction in its own slice so that plugins can be grouped within.
Slice=system-reaction.slice

[Install]
WantedBy=multi-user.target

@@ -1 +0,0 @@
[Slice]

@@ -1,23 +0,0 @@
[package]
name = "reaction-plugin-cluster"
version = "0.1.0"
edition = "2024"

[dependencies]
reaction-plugin.workspace = true

chrono.workspace = true
futures.workspace = true
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tokio.features = ["rt-multi-thread"]
treedb.workspace = true

data-encoding = "2.9.0"
iroh = { version = "0.95.1", default-features = false }
rand = "0.9.2"

[dev-dependencies]
assert_fs.workspace = true

@@ -1,165 +0,0 @@
use std::{
    collections::BTreeMap,
    net::{SocketAddrV4, SocketAddrV6},
    sync::Arc,
    time::Duration,
};

use futures::future::join_all;
use iroh::{
    Endpoint,
    endpoint::{ConnectOptions, TransportConfig},
};
use reaction_plugin::{Line, shutdown::ShutdownController};
use remoc::rch::mpsc as remocMpsc;
use tokio::sync::mpsc as tokioMpsc;
use treedb::{Database, time::Time};

use crate::{ActionInit, StreamInit, connection::ConnectionManager, endpoint::EndpointManager};

pub const ALPN: [&[u8]; 1] = ["reaction_cluster_1".as_bytes()];

pub type UtcLine = (Arc<String>, Time);

pub fn transport_config() -> TransportConfig {
    // FIXME higher timeouts and keep alive
    let mut transport = TransportConfig::default();
    transport
        .max_idle_timeout(Some(Duration::from_millis(2000).try_into().unwrap()))
        .keep_alive_interval(Some(Duration::from_millis(200)));
    transport
}

pub fn connect_config() -> ConnectOptions {
    ConnectOptions::new().with_transport_config(transport_config().into())
}

pub async fn bind(stream: &StreamInit) -> Result<Endpoint, String> {
    let mut builder = Endpoint::builder()
        .secret_key(stream.secret_key.clone())
        .alpns(ALPN.iter().map(|slice| slice.to_vec()).collect())
        .relay_mode(iroh::RelayMode::Disabled)
        .clear_discovery()
        .transport_config(transport_config());

    if let Some(ip) = stream.bind_ipv4 {
        builder = builder.bind_addr_v4(SocketAddrV4::new(ip, stream.listen_port));
    }
    if let Some(ip) = stream.bind_ipv6 {
        builder = builder.bind_addr_v6(SocketAddrV6::new(ip, stream.listen_port, 0, 0));
    }

    builder.bind().await.map_err(|err| {
        format!(
            "Could not create socket address for cluster {}: {err}",
            stream.name
        )
    })
}

pub async fn cluster_tasks(
    endpoint: Endpoint,
    mut stream: StreamInit,
    mut actions: Vec<ActionInit>,
    db: &mut Database,
    shutdown: ShutdownController,
) -> Result<(), String> {
    eprintln!("DEBUG cluster tasks starts running");

    let (message_action2connection_txs, mut message_action2connection_rxs): (
        Vec<tokioMpsc::Sender<UtcLine>>,
        Vec<tokioMpsc::Receiver<UtcLine>>,
    ) = (0..stream.nodes.len())
        .map(|_| tokioMpsc::channel(1))
        .unzip();

    // Spawn action tasks
    while let Some(mut action) = actions.pop() {
        let message_action2connection_txs = message_action2connection_txs.clone();
        let own_cluster_tx = stream.tx.clone();
        tokio::spawn(async move {
            action
                .serve(message_action2connection_txs, own_cluster_tx)
                .await
        });
    }

    let endpoint = Arc::new(endpoint);

    let mut connection_endpoint2connection_txs = BTreeMap::new();

    // Spawn connection managers
    while let Some((pk, endpoint_addr)) = stream.nodes.pop_first() {
        let cluster_name = stream.name.clone();
        let endpoint = endpoint.clone();
        let message_action2connection_rx = message_action2connection_rxs.pop().unwrap();
        let stream_tx = stream.tx.clone();
        let shutdown = shutdown.clone();
        let (connection_manager, connection_endpoint2connection_tx) = ConnectionManager::new(
            cluster_name,
            endpoint_addr,
            endpoint,
            stream.message_timeout,
            message_action2connection_rx,
            stream_tx,
            db,
            shutdown,
        )
        .await?;
        tokio::spawn(async move { connection_manager.task().await });
        connection_endpoint2connection_txs.insert(pk, connection_endpoint2connection_tx);
    }

    // Spawn connection accepter
    EndpointManager::new(
        endpoint.clone(),
        stream.name.clone(),
        connection_endpoint2connection_txs,
        shutdown.clone(),
    );

    eprintln!("DEBUG cluster tasks finished running");
    Ok(())
}

impl ActionInit {
    // Receive messages from its reaction action and dispatch them to all connections and to the reaction stream
    async fn serve(
        &mut self,
        nodes_tx: Vec<tokioMpsc::Sender<UtcLine>>,
        own_stream_tx: remocMpsc::Sender<Line>,
    ) {
        while let Ok(Some(m)) = self.rx.recv().await {
            eprintln!("DEBUG action: received a message to send to connections");
            let line = self.send.line(m.match_);
            if self.self_
                && let Err(err) = own_stream_tx.send((line.clone(), m.time)).await
            {
                eprintln!("ERROR while queueing message to be sent to own cluster stream: {err}");
            }

            let line = (Arc::new(line), m.time.into());
            for result in join_all(nodes_tx.iter().map(|tx| tx.send(line.clone()))).await {
                if let Err(err) = result {
                    eprintln!("ERROR while queueing message to be sent to cluster nodes: {err}");
                };
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use chrono::{DateTime, Local};

    // As long as nodes communicate with UTC datetimes, them having different local timezones is not an issue!
    #[test]
    fn different_local_tz_is_ok() {
        let dates: Vec<DateTime<Local>> = serde_json::from_str(
            "[\"2025-11-02T17:47:21.716229569+01:00\",\"2025-11-02T18:47:21.716229569+02:00\"]",
        )
        .unwrap();

        assert_eq!(dates[0].to_utc(), dates[1].to_utc());
    }
}

@@ -1,668 +0,0 @@
use std::{cmp::max, io::Error as IoError, sync::Arc, time::Duration};

use futures::FutureExt;
use iroh::{
    Endpoint, EndpointAddr,
    endpoint::{Connection, RecvStream, SendStream, VarInt},
};
use rand::random_range;
use reaction_plugin::{Line, shutdown::ShutdownController};
use tokio::{
    io::{AsyncReadExt, AsyncWriteExt, BufReader, BufWriter},
    sync::mpsc,
    time::sleep,
};
use treedb::{
    Database, Tree,
    helpers::{to_string, to_time},
    time::{Time, now},
};

use crate::{
    cluster::{ALPN, UtcLine, connect_config},
    key::Show,
};

const PROTOCOL_VERSION: u32 = 1;

const CLOSE_RECV: (u32, &[u8]) = (1, b"error receiving from your stream");
const CLOSE_CLOSED: (u32, &[u8]) = (2, b"you closed your stream");
const CLOSE_SEND: (u32, &[u8]) = (3, b"could not send a message to your channel so I quit");

type MaybeRemoteLine = Result<Option<(String, Time)>, IoError>;

enum Event {
    LocalMessageReceived(Option<UtcLine>),
    RemoteMessageReceived(MaybeRemoteLine),
    ConnectionReceived(Option<ConnOrConn>),
}

pub struct OwnConnection {
    connection: Connection,
    id: u64,

    line_tx: BufWriter<SendStream>,
    line_rx: BufReader<RecvStream>,

    next_time_secs: Option<u64>,
    next_time_nanos: Option<u32>,
    next_len: Option<usize>,
    next_line: Option<Vec<u8>>,
}

impl OwnConnection {
    fn new(
        connection: Connection,
        id: u64,
        line_tx: BufWriter<SendStream>,
        line_rx: BufReader<RecvStream>,
    ) -> Self {
        Self {
            connection,
            id,
            line_tx,
            line_rx,
            next_time_secs: None,
            next_time_nanos: None,
            next_len: None,
            next_line: None,
        }
    }

    /// Send a line to peer.
    ///
    /// Time is a std::time::Duration since UNIX_EPOCH, which is defined as UTC
    /// So it's safe to use between nodes using different timezones
    async fn send_line(&mut self, line: &String, time: &Time) -> Result<(), std::io::Error> {
        self.line_tx.write_u64(time.as_secs()).await?;
        self.line_tx.write_u32(time.subsec_nanos()).await?;
        self.line_tx.write_u32(line.len() as u32).await?;
        self.line_tx.write_all(line.as_bytes()).await?;
        self.line_tx.flush().await?;
        Ok(())
    }

    /// Cancel-safe function that returns next line from peer
    /// Returns None if we don't have all data yet.
    async fn recv_line(&mut self) -> MaybeRemoteLine {
        if self.next_time_secs.is_none() {
            self.next_time_secs = Some(self.line_rx.read_u64().await?);
        }
        if self.next_time_nanos.is_none() {
            self.next_time_nanos = Some(self.line_rx.read_u32().await?);
        }
        if self.next_len.is_none() {
            self.next_len = Some(self.line_rx.read_u32().await? as usize);
        }
        // Ok we have next_len.is_some()
        let next_len = self.next_len.clone().unwrap();

        if self.next_line.is_none() {
            self.next_line = Some(Vec::with_capacity(next_len));
        }
        // Ok we have next_line.is_some()
        let next_line = self.next_line.as_mut().unwrap();

        let actual_len = next_line.len();
        // Resize to wanted length
        next_line.resize(next_len, 0);

        // Read bytes
        let bytes_read = self
            .line_rx
            .read(&mut next_line[actual_len..next_len])
            .await?;
        // Truncate possibly unread bytes
        next_line.truncate(actual_len + bytes_read);

        // Let's test if we read all bytes
        if next_line.len() == next_len {
            // Ok we have a full line
            self.next_len.take();
            let line = String::try_from(self.next_line.take().unwrap()).map_err(|err| {
                std::io::Error::new(std::io::ErrorKind::InvalidData, err.to_string())
            })?;
            let time = Time::new(
                self.next_time_secs.take().unwrap(),
                self.next_time_nanos.take().unwrap(),
            );
            Ok(Some((line, time)))
        } else {
            // Ok we don't have a full line, will be next time!
            Ok(None)
        }
    }
}

pub enum ConnOrConn {
    Connection(Connection),
    OwnConnection(OwnConnection),
}

/// Handle a remote node.
|
||||
/// Manage reception and sending of messages to this node.
|
||||
/// Retry failed connections.
|
||||
pub struct ConnectionManager {
|
||||
/// Cluster's name (for logging)
|
||||
cluster_name: String,
|
||||
/// The remote node we're communicating with (for logging)
|
||||
node_id: String,
|
||||
/// Remote
|
||||
remote: EndpointAddr,
|
||||
/// Endpoint
|
||||
endpoint: Arc<Endpoint>,
|
||||
|
||||
/// Cancel asking for a connection
|
||||
cancel_ask_connection: Option<mpsc::Sender<()>>,
|
||||
/// Create a delegated task to send ourselves a connection
|
||||
connection_tx: mpsc::Sender<ConnOrConn>,
|
||||
/// The EndpointManager or our delegated task sending us a connection (whether we asked for it or not)
|
||||
connection_rx: mpsc::Receiver<ConnOrConn>,
|
||||
/// Our own connection (when we have one)
|
||||
connection: Option<OwnConnection>,
|
||||
/// Last connexion ID, used to have a determinist way to choose between conflicting connections
|
||||
last_connection_id: u64,
|
||||
|
||||
/// Max duration before we drop pending messages to a node we can't connect to.
|
||||
message_timeout: Duration,
|
||||
/// Message we receive from actions
|
||||
message_rx: mpsc::Receiver<UtcLine>,
|
||||
/// Our queue of messages to send
|
||||
message_queue: Tree<Time, Arc<String>>,
|
||||
|
||||
/// Messages we send from remote nodes to our own stream
|
||||
own_cluster_tx: remoc::rch::mpsc::Sender<Line>,
|
||||
|
||||
/// shutdown
|
||||
shutdown: ShutdownController,
|
||||
}
|
||||
|
impl ConnectionManager {
    pub async fn new(
        cluster_name: String,
        remote: EndpointAddr,
        endpoint: Arc<Endpoint>,
        message_timeout: Duration,
        message_rx: mpsc::Receiver<UtcLine>,
        own_cluster_tx: remoc::rch::mpsc::Sender<Line>,
        db: &mut Database,
        shutdown: ShutdownController,
    ) -> Result<(Self, mpsc::Sender<ConnOrConn>), String> {
        let node_id = remote.id.show();

        let message_queue = db
            .open_tree(
                format!("message_queue_{}_{}", endpoint.id().show(), node_id),
                message_timeout,
                |(key, value)| Ok((to_time(&key)?, Arc::new(to_string(&value)?))),
            )
            .await?;

        let (connection_tx, connection_rx) = mpsc::channel(1);
        Ok((
            Self {
                cluster_name,
                node_id,
                remote,
                endpoint,
                connection: None,
                cancel_ask_connection: None,
                connection_tx: connection_tx.clone(),
                connection_rx,
                last_connection_id: 0,
                message_timeout,
                message_rx,
                message_queue,
                own_cluster_tx,
                shutdown,
            },
            connection_tx,
        ))
    }

    /// Main loop
    pub async fn task(mut self) {
        self.ask_connection();
        loop {
            let have_connection = self.connection.is_some();
            let maybe_conn_rx = self
                .connection
                .as_mut()
                .map(|conn| conn.recv_line().boxed())
                // This Future will never be polled because of the `if` in select!
                // It still needs to be present because the branch will be evaluated,
                // so we can't unwrap.
                .unwrap_or(false_recv().boxed());

            let event = tokio::select! {
                biased;
                // Quitting
                _ = self.shutdown.wait() => None,
                // Receive a connection from EndpointManager
                conn = self.connection_rx.recv() => Some(Event::ConnectionReceived(conn)),
                // Receive a remote message when we have a connection
                msg = maybe_conn_rx, if have_connection => Some(Event::RemoteMessageReceived(msg)),
                // Receive a message from local Actions
                msg = self.message_rx.recv() => Some(Event::LocalMessageReceived(msg)),
            };

            match event {
                Some(event) => {
                    self.handle_event(event).await;
                    self.send_queue_messages().await;
                    self.drop_timeout_messages().await;
                }
                None => break,
            }
        }
    }

    async fn handle_event(&mut self, event: Event) {
        match event {
            Event::ConnectionReceived(connection) => {
                self.handle_connection(connection).await;
            }
            Event::LocalMessageReceived(utc_line) => {
                self.handle_local_message(utc_line).await;
            }
            Event::RemoteMessageReceived(message) => {
                self.handle_remote_message(message).await;
            }
        }
    }

    async fn send_queue_messages(&mut self) {
        while let Some(connection) = &mut self.connection
            && let Some((time, line)) = self
                .message_queue
                .first_key_value()
                .map(|(k, v)| (k.clone(), v.clone()))
        {
            if let Err(err) = connection.send_line(&line, &time).await {
                eprintln!(
                    "INFO cluster {}: connection with node {} failed: {err}",
                    self.cluster_name, self.node_id,
                );
                self.close_connection(CLOSE_SEND).await;
            } else {
                self.message_queue.remove(&time).await;
                eprintln!(
                    "DEBUG cluster {}: node {}: sent a local message to remote: {}",
                    self.cluster_name, self.node_id, line
                );
            }
        }
    }

    async fn drop_timeout_messages(&mut self) {
        let now = now();
        let mut count = 0;
        loop {
            // We have a next key and it reached its timeout
            if let Some(next_key) = self.message_queue.first_key_value().map(|kv| kv.0.clone())
                && next_key + self.message_timeout < now
            {
                self.message_queue.remove(&next_key).await;
                count += 1;
            } else {
                break;
            }
        }
        if count > 0 {
            eprintln!(
                "DEBUG cluster {}: node {}: dropping {count} messages that reached timeout",
                self.cluster_name, self.node_id,
            )
        }
    }

    /// Bootstrap a new Connection received from the EndpointManager or from our delegated task.
    async fn handle_connection(&mut self, connection: Option<ConnOrConn>) {
        match connection {
            None => {
                eprintln!(
                    "DEBUG cluster {}: ConnectionManager {}: quitting because EndpointManager has quit",
                    self.cluster_name, self.node_id,
                );
                self.quit();
            }
            Some(connection) => {
                if let Some(cancel) = self.cancel_ask_connection.take() {
                    let _ = cancel.send(()).await;
                }

                let last_connection_id = self.last_connection_id;
                let mut insert_connection = |own_connection: OwnConnection| {
                    if self
                        .connection
                        .as_ref()
                        .is_none_or(|old_own| old_own.id < own_connection.id)
                    {
                        self.last_connection_id = own_connection.id;
                        self.connection = Some(own_connection);
                    } else {
                        eprintln!(
                            "WARN cluster {}: node {}: ignoring incoming connection, as we already have a valid connection with it and our connection id is greater",
                            self.cluster_name, self.node_id,
                        );
                    }
                };

                match connection {
                    ConnOrConn::Connection(connection) => {
                        match open_channels(
                            connection,
                            last_connection_id,
                            &self.cluster_name,
                            &self.node_id,
                        )
                        .await
                        {
                            Ok(own_connection) => insert_connection(own_connection),
                            Err(err) => {
                                eprintln!(
                                    "ERROR cluster {}: trying to initialize connection to node {}: {err}",
                                    self.cluster_name, self.node_id,
                                );
                                if self.connection.is_none() {
                                    self.ask_connection();
                                }
                            }
                        }
                    }
                    ConnOrConn::OwnConnection(own_connection) => insert_connection(own_connection),
                }
            }
        }
    }

    async fn handle_remote_message(&mut self, message: MaybeRemoteLine) {
        match message {
            Err(err) => {
                eprintln!(
                    "WARN cluster {}: node {}: connection {}: error receiving remote message: {err}",
                    self.cluster_name, self.node_id, self.last_connection_id
                );
                self.close_connection(CLOSE_RECV).await;
            }
            Ok(None) => {
                eprintln!(
                    "WARN cluster {}: node {} closed its stream",
                    self.cluster_name, self.node_id,
                );
                self.close_connection(CLOSE_CLOSED).await;
            }
            Ok(Some(line)) => {
                if let Err(err) = self
                    .own_cluster_tx
                    .send((line.0.clone(), line.1.into()))
                    .await
                {
                    eprintln!(
                        "ERROR cluster {}: could not send message to reaction stream: {err}",
                        self.cluster_name
                    );
                    eprintln!(
                        "INFO cluster {}: line that can't be sent: {}",
                        self.cluster_name, line.0
                    );
                    self.quit();
                } else {
                    eprintln!(
                        "DEBUG cluster {}: node {}: sent a remote message to local stream: {}",
                        self.cluster_name, self.node_id, line.0
                    );
                }
            }
        }
    }

    async fn handle_local_message(&mut self, message: Option<UtcLine>) {
        eprintln!(
            "DEBUG cluster {}: node {}: received a local message",
            self.cluster_name, self.node_id,
        );
        match message {
            None => {
                eprintln!(
                    "INFO cluster {}: no action remaining, quitting",
                    self.cluster_name
                );
                self.quit();
            }
            Some(message) => match &mut self.connection {
                Some(connection) => {
                    if let Err(err) = connection.send_line(&message.0, &message.1).await {
                        eprintln!(
                            "INFO cluster {}: connection with node {} failed: {err}",
                            self.cluster_name, self.node_id,
                        );
                        self.message_queue.insert(message.1, message.0).await;
                        self.close_connection(CLOSE_SEND).await;
                    } else {
                        eprintln!(
                            "DEBUG cluster {}: node {}: sent a local message to remote: {}",
                            self.cluster_name, self.node_id, message.0
                        );
                    }
                }
                None => {
                    eprintln!(
                        "DEBUG cluster {}: node {}: no connection, saving local message to send later: {}",
                        self.cluster_name, self.node_id, message.0
                    );
                    self.message_queue.insert(message.1, message.0).await;
                }
            },
        }
    }

    async fn close_connection(&mut self, code: (u32, &[u8])) {
        if let Some(connection) = self.connection.take() {
            connection
                .connection
                .close(VarInt::from_u32(code.0), code.1);
        }
        self.ask_connection();
    }

    fn ask_connection(&mut self) {
        let (tx, rx) = mpsc::channel(1);
        self.cancel_ask_connection = Some(tx);
        try_connect(
            self.cluster_name.clone(),
            self.remote.clone(),
            self.endpoint.clone(),
            self.last_connection_id,
            self.connection_tx.clone(),
            rx,
        );
    }

    fn quit(&mut self) {
        self.shutdown.ask_shutdown();
    }
}

/// Accept one uni stream and open one uni stream.
/// This way, there is no need to know whether we created or accepted the connection.
async fn open_channels(
    connection: Connection,
    last_connection_id: u64,
    cluster_name: &str,
    node_id: &str,
) -> Result<OwnConnection, IoError> {
    eprintln!(
        "DEBUG cluster {}: node {}: opening uni channel",
        cluster_name, node_id
    );
    let mut output = BufWriter::new(connection.open_uni().await?);

    let our_id = random_range(last_connection_id + 1..last_connection_id + 1_000_000);

    eprintln!(
        "DEBUG cluster {}: node {}: sending handshake in uni channel",
        cluster_name, node_id
    );
    output.write_u32(PROTOCOL_VERSION).await?;
    output.write_u64(our_id).await?;
    output.flush().await?;

    eprintln!(
        "DEBUG cluster {}: node {}: accepting uni channel",
        cluster_name, node_id
    );
    let mut input = BufReader::new(connection.accept_uni().await?);

    eprintln!(
        "DEBUG cluster {}: node {}: reading handshake from uni channel",
        cluster_name, node_id
    );
    let their_version = input.read_u32().await?;

    if their_version != PROTOCOL_VERSION {
        return Err(IoError::new(
            std::io::ErrorKind::InvalidData,
            format!(
                "incompatible version: {their_version}. We use {PROTOCOL_VERSION}. Consider upgrading the node with the older version."
            ),
        ));
    }

    let their_id = input.read_u64().await?;
    // FIXME Do we need to test this? If so, this function should return their_id even on error,
    // in order to retry better next time:
    // if their_id < last_connection_id { ERROR } else { ... }
    let chosen_id = max(our_id, their_id);
    eprintln!(
        "DEBUG cluster {}: node {}: version handshake complete: last id: {last_connection_id}, our id: {our_id}, their id: {their_id}: chosen id: {chosen_id}",
        cluster_name, node_id
    );
    Ok(OwnConnection::new(connection, chosen_id, output, input))
}
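The handshake exchanged in `open_channels` is a fixed 12-byte frame: a `u32` protocol version followed by a `u64` connection id, both big-endian (tokio's `write_u32`/`write_u64` write big-endian). A standalone sketch of that wire layout; the helper names here are hypothetical and not part of the plugin:

```rust
// Hypothetical sketch of the 12-byte handshake frame:
// big-endian u32 protocol version, then big-endian u64 connection id.

fn encode_handshake(version: u32, id: u64) -> [u8; 12] {
    let mut buf = [0u8; 12];
    buf[..4].copy_from_slice(&version.to_be_bytes());
    buf[4..].copy_from_slice(&id.to_be_bytes());
    buf
}

fn decode_handshake(buf: &[u8; 12]) -> (u32, u64) {
    let version = u32::from_be_bytes(buf[..4].try_into().unwrap());
    let id = u64::from_be_bytes(buf[4..].try_into().unwrap());
    (version, id)
}

fn main() {
    let frame = encode_handshake(1, 42);
    // Round-trip: decoding what we encoded yields the original values.
    assert_eq!(decode_handshake(&frame), (1, 42));
    // Version lives in the first 4 bytes, big-endian.
    assert_eq!(&frame[..4], &[0, 0, 0, 1]);
}
```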

async fn false_recv() -> MaybeRemoteLine {
    Ok(None)
}

const START_TIMEOUT: Duration = Duration::from_millis(500);
const MAX_TIMEOUT: Duration = Duration::from_secs(60 * 60); // 1 hour
const TIMEOUT_FACTOR: f64 = 1.5;

fn with_random(d: Duration) -> Duration {
    let max_delta = d.as_micros() as f32 * 0.2;
    d + Duration::from_micros(rand::random_range(0.0..max_delta) as u64)
}

/// Compute the next wait Duration.
/// We multiply the Duration by [`TIMEOUT_FACTOR`], capping it to [`MAX_TIMEOUT`].
fn next_delta(delta: Option<Duration>) -> Duration {
    with_random(match delta {
        None => START_TIMEOUT,
        Some(delta) => {
            // Multiply timeout by TIMEOUT_FACTOR
            let delta = Duration::from_millis(((delta.as_millis() as f64) * TIMEOUT_FACTOR) as u64);
            // Cap to MAX_TIMEOUT
            if delta > MAX_TIMEOUT {
                MAX_TIMEOUT
            } else {
                delta
            }
        }
    })
}

#[cfg(test)]
#[test]
fn test_with_random() {
    for d in [
        123, 1234, 12345, 123456, 1234567, 12345678, 123456789, 1234567890,
    ] {
        let rd = with_random(Duration::from_micros(d)).as_micros();
        assert!(rd as f32 >= d as f32, "{rd} < {d}");
        assert!(rd as f32 <= (d + 1) as f32 * 1.2, "{rd} > {d} * 1.2");
    }
}
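Without the jitter added by `with_random`, the retry schedule produced by `next_delta` grows geometrically from 500 ms by a factor of 1.5 per attempt until it hits the one-hour cap. A self-contained sketch re-implementing only that deterministic part (the function name is hypothetical; constants copied from above):

```rust
// Standalone sketch of the deterministic part of `next_delta`
// (jitter omitted): each retry waits 1.5x longer, capped at one hour.
use std::time::Duration;

const START_TIMEOUT: Duration = Duration::from_millis(500);
const MAX_TIMEOUT: Duration = Duration::from_secs(60 * 60);
const TIMEOUT_FACTOR: f64 = 1.5;

fn next_delta_deterministic(delta: Option<Duration>) -> Duration {
    match delta {
        None => START_TIMEOUT,
        Some(delta) => {
            let grown =
                Duration::from_millis((delta.as_millis() as f64 * TIMEOUT_FACTOR) as u64);
            grown.min(MAX_TIMEOUT)
        }
    }
}

fn main() {
    // First waits: 500ms, 750ms, 1125ms, ...
    let mut delta = None;
    let mut schedule = Vec::new();
    for _ in 0..40 {
        let d = next_delta_deterministic(delta);
        schedule.push(d);
        delta = Some(d);
    }
    assert_eq!(schedule[0], Duration::from_millis(500));
    assert_eq!(schedule[1], Duration::from_millis(750));
    // The schedule is non-decreasing and eventually reaches the cap.
    assert!(schedule.windows(2).all(|w| w[0] <= w[1]));
    assert_eq!(*schedule.last().unwrap(), MAX_TIMEOUT);
}
```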

fn try_connect(
    cluster_name: String,
    remote: EndpointAddr,
    endpoint: Arc<Endpoint>,
    last_connection_id: u64,
    connection_tx: mpsc::Sender<ConnOrConn>,
    mut order_stop: mpsc::Receiver<()>,
) {
    tokio::spawn(async move {
        let node_id = remote.id.show();
        // Until we have a connection or we're requested to stop
        let mut keep_trying = true;
        let mut delta = None;

        while keep_trying {
            delta = Some(next_delta(delta));

            keep_trying = tokio::select! {
                _ = sleep(delta.unwrap_or_default()) => true,
                _ = order_stop.recv() => false,
            };
            if keep_trying {
                eprintln!("DEBUG cluster {cluster_name}: node {node_id}: trying to connect...");
                let connect = tokio::select! {
                    conn = endpoint.connect_with_opts(remote.clone(), ALPN[0], connect_config()) => Some(conn),
                    _ = order_stop.recv() => None,
                };
                if let Some(connect) = connect {
                    let res = match connect {
                        Ok(connecting) => match connecting.await {
                            Ok(connection) => {
                                eprintln!(
                                    "DEBUG cluster {cluster_name}: node {node_id}: created connection"
                                );
                                match open_channels(
                                    connection,
                                    last_connection_id,
                                    &cluster_name,
                                    &node_id,
                                )
                                .await
                                {
                                    Ok(own_connection) => {
                                        if let Err(err) = connection_tx
                                            .send(ConnOrConn::OwnConnection(own_connection))
                                            .await
                                        {
                                            eprintln!(
                                                "DEBUG cluster {cluster_name}: node {node_id}: quitting because ConnectionManager has quit: {err}"
                                            );
                                        }
                                        // Successfully opened connection
                                        keep_trying = false;
                                        Ok(())
                                    }
                                    Err(err) => Err(err.to_string()),
                                }
                            }
                            Err(err) => Err(err.to_string()),
                        },
                        Err(err) => Err(err.to_string()),
                    };
                    if let Err(err) = res {
                        eprintln!(
                            "WARN cluster {cluster_name}: node {node_id}: while trying to connect: {err}"
                        );
                    }
                } else {
                    // Received stop order
                    eprintln!(
                        "DEBUG cluster {cluster_name}: node {node_id}: stopped trying to connect to node because we received a connection from it"
                    );
                    keep_trying = false;
                }
            }
        }
    });
}

@@ -1,128 +0,0 @@
use std::collections::BTreeMap;
use std::sync::Arc;

use iroh::{Endpoint, PublicKey, endpoint::Incoming};
use reaction_plugin::shutdown::ShutdownController;
use tokio::sync::mpsc;

use crate::{connection::ConnOrConn, key::Show};

enum Break {
    Yes,
    No,
}

pub struct EndpointManager {
    /// The [`iroh::Endpoint`] to manage
    endpoint: Arc<Endpoint>,
    /// Cluster's name (for logging)
    cluster_name: String,
    /// Connection senders to the ConnectionManagers
    connections_tx: BTreeMap<PublicKey, mpsc::Sender<ConnOrConn>>,
    /// Shutdown controller
    shutdown: ShutdownController,
}

impl EndpointManager {
    pub fn new(
        endpoint: Arc<Endpoint>,
        cluster_name: String,
        connections_tx: BTreeMap<PublicKey, mpsc::Sender<ConnOrConn>>,
        shutdown: ShutdownController,
    ) {
        tokio::spawn(async move {
            Self {
                endpoint,
                cluster_name,
                connections_tx,
                shutdown,
            }
            .task()
            .await
        });
    }

    async fn task(&mut self) {
        loop {
            let incoming = tokio::select! {
                incoming = self.endpoint.accept() => incoming,
                _ = self.shutdown.wait() => None,
            };

            match incoming {
                Some(incoming) => {
                    if let Break::Yes = self.handle_incoming(incoming).await {
                        break;
                    }
                }
                None => break,
            }
        }

        self.endpoint.close().await
    }

    async fn handle_incoming(&mut self, incoming: Incoming) -> Break {
        eprintln!(
            "DEBUG cluster {}: EndpointManager: receiving connection",
            self.cluster_name,
        );
        // FIXME a malicious actor could maybe prevent a node from connecting to
        // its cluster by sending lots of invalid slow connection requests?
        // This function could be moved to a new 'oneshot' task instead
        let remote_address = incoming.remote_address();
        let remote_address_validated = incoming.remote_address_validated();
        let connection = match incoming.await {
            Ok(connection) => connection,
            Err(err) => {
                if remote_address_validated {
                    eprintln!("INFO refused connection from {}: {err}", remote_address)
                } else {
                    eprintln!("INFO refused connection: {err}")
                }
                return Break::No;
            }
        };

        let remote_id = connection.remote_id();

        match self.connections_tx.get(&remote_id) {
            None => {
                eprintln!(
                    "WARN cluster {}: incoming connection from node '{}', ip: {} is not in our list, refusing incoming connection.",
                    self.cluster_name,
                    remote_id.show(),
                    remote_address
                );
                eprintln!(
                    "INFO cluster {}: {} {}",
                    self.cluster_name,
                    "maybe it's not from our cluster,",
                    "maybe this node's configuration has not yet been updated to add this new node."
                );
                return Break::No;
            }
            Some(tx) => {
                if tx.send(ConnOrConn::Connection(connection)).await.is_err() {
                    eprintln!(
                        "DEBUG cluster {}: EndpointManager: quitting because ConnectionManager has quit",
                        self.cluster_name,
                    );
                    self.shutdown.ask_shutdown();
                    return Break::Yes;
                }
                eprintln!(
                    "DEBUG cluster {}: EndpointManager: receiving connection from {}",
                    self.cluster_name,
                    remote_id.show(),
                );
            }
        }

        // TODO persist the incoming address, so that we don't forget this address

        Break::No
    }
}

@@ -1,188 +0,0 @@
use std::io;

use data_encoding::DecodeError;
use iroh::{PublicKey, SecretKey};
use tokio::{
    fs::{self, File},
    io::AsyncWriteExt,
};

pub fn secret_key_path(dir: &str, cluster_name: &str) -> String {
    format!("{dir}/secret_key_{cluster_name}.txt")
}

pub async fn secret_key(dir: &str, cluster_name: &str) -> Result<SecretKey, String> {
    let path = secret_key_path(dir, cluster_name);
    if let Some(key) = get_secret_key(&path).await? {
        Ok(key)
    } else {
        let key = SecretKey::generate(&mut rand::rng());
        set_secret_key(&path, &key).await?;
        Ok(key)
    }
}

async fn get_secret_key(path: &str) -> Result<Option<SecretKey>, String> {
    let key = match fs::read_to_string(path).await {
        Ok(key) => Ok(key),
        Err(err) => match err.kind() {
            io::ErrorKind::NotFound => return Ok(None),
            _ => Err(format!("can't read secret key file: {err}")),
        },
    }?;
    let bytes = match key_b64_to_bytes(&key) {
        Ok(key) => Ok(key),
        Err(err) => Err(format!(
            "invalid secret key read from file: {err}. Please remove the `{path}` file from the plugin directory.",
        )),
    }?;
    Ok(Some(SecretKey::from_bytes(&bytes)))
}

async fn set_secret_key(path: &str, key: &SecretKey) -> Result<(), String> {
    let secret_key = key.show();
    File::options()
        .mode(0o600)
        .write(true)
        .create(true)
        .open(path)
        .await
        .map_err(|err| format!("can't open `{path}` in plugin directory: {err}"))?
        .write_all(secret_key.as_bytes())
        .await
        .map_err(|err| format!("can't write to `{path}` in plugin directory: {err}"))
}

pub fn key_b64_to_bytes(key: &str) -> Result<[u8; 32], DecodeError> {
    let vec = data_encoding::BASE64URL.decode(key.as_bytes())?;
    if vec.len() != 32 {
        return Err(DecodeError {
            position: vec.len(),
            kind: data_encoding::DecodeKind::Length,
        });
    }
    let mut bytes = [0u8; 32];
    bytes.copy_from_slice(&vec);
    Ok(bytes)
}

pub fn key_bytes_to_b64(key: &[u8; 32]) -> String {
    data_encoding::BASE64URL.encode(key)
}

/// Implemented by PublicKey & SecretKey to display keys as base64 instead of hexadecimal.
/// Similar to Display/ToString.
pub trait Show {
    fn show(&self) -> String;
}

impl Show for PublicKey {
    fn show(&self) -> String {
        key_bytes_to_b64(self.as_bytes())
    }
}

impl Show for SecretKey {
    fn show(&self) -> String {
        key_bytes_to_b64(&self.to_bytes())
    }
}

#[cfg(test)]
mod tests {
    use assert_fs::{
        TempDir,
        prelude::{FileWriteStr, PathChild},
    };
    use iroh::{PublicKey, SecretKey};
    use tokio::fs::read_to_string;

    use crate::key::{
        get_secret_key, key_b64_to_bytes, key_bytes_to_b64, secret_key_path, set_secret_key,
    };

    #[test]
    fn secret_key_encode_decode() {
        for (secret_key, public_key) in [
            (
                "g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=",
                "HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=",
            ),
            (
                "5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=",
                "LPSQ9pS7m_5vvNC-fhoBNeL2-eS2Fd6aO4ImSnXp3lc=",
            ),
        ] {
            assert_eq!(
                secret_key,
                &key_bytes_to_b64(&key_b64_to_bytes(secret_key).unwrap())
            );
            assert_eq!(
                public_key,
                &key_bytes_to_b64(&key_b64_to_bytes(public_key).unwrap())
            );

            let secret_key_parsed = SecretKey::from_bytes(&key_b64_to_bytes(secret_key).unwrap());
            let public_key_parsed =
                PublicKey::from_bytes(&key_b64_to_bytes(public_key).unwrap()).unwrap();

            assert_eq!(secret_key_parsed.public(), public_key_parsed);
        }
    }

    #[tokio::test]
    async fn secret_key_get() {
        let tmp_dir = TempDir::new().unwrap();
        let tmp_dir_str = tmp_dir.to_str().unwrap();
        for (secret_key, cluster_name) in [
            ("g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=", "my_cluster"),
            ("5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=", "name"),
        ] {
            tmp_dir
                .child(&format!("secret_key_{cluster_name}.txt"))
                .write_str(secret_key)
                .unwrap();

            let secret_key_parsed = SecretKey::from_bytes(&key_b64_to_bytes(secret_key).unwrap());

            let path = secret_key_path(tmp_dir_str, cluster_name);
            let secret_key_from_file = get_secret_key(&path).await.unwrap();

            assert_eq!(
                secret_key_parsed.to_bytes(),
                secret_key_from_file.unwrap().to_bytes()
            )
        }

        assert_eq!(
            Ok(None),
            get_secret_key(&format!("{tmp_dir_str}/non_existent"))
                .await
                // Can't compare secret keys, so we map to bytes
                // even if we don't want one
                .map(|opt| opt.map(|pk| pk.to_bytes()))
        );
        // Will fail if we're root, but who runs tests as root??
        assert!(get_secret_key("/root/non_existent").await.is_err());
    }

    #[tokio::test]
    async fn secret_key_set() {
        let tmp_dir = TempDir::new().unwrap();
        let tmp_dir_str = tmp_dir.to_str().unwrap();

        let path = format!("{tmp_dir_str}/secret");
        let key = SecretKey::generate(&mut rand::rng());

        assert!(set_secret_key(&path, &key).await.is_ok());
        let read_file = read_to_string(&path).await;
        assert!(read_file.is_ok());
        assert_eq!(read_file.unwrap(), key_bytes_to_b64(&key.to_bytes()));
    }
}

@@ -1,273 +0,0 @@
use std::{
    collections::{BTreeMap, BTreeSet},
    net::{Ipv4Addr, Ipv6Addr, SocketAddr},
    path::PathBuf,
    time::Duration,
};

use iroh::{EndpointAddr, PublicKey, SecretKey, TransportAddr};
use reaction_plugin::{
    ActionConfig, ActionImpl, Exec, Hello, Line, Manifest, PluginInfo, RemoteResult, StreamConfig,
    StreamImpl, line::PatternLine, main_loop, shutdown::ShutdownController, time::parse_duration,
};
use remoc::{rch::mpsc, rtc};
use serde::{Deserialize, Serialize};
use treedb::Database;

use crate::key::Show;

mod cluster;
mod connection;
mod endpoint;
mod key;

#[cfg(test)]
mod tests;

#[tokio::main]
async fn main() {
    let plugin = Plugin::default();
    main_loop(plugin).await;
}

#[derive(Default)]
struct Plugin {
    init: BTreeMap<String, (StreamInit, Vec<ActionInit>)>,
    cluster_shutdown: ShutdownController,
}

/// Stream options as defined by the user
#[derive(Serialize, Deserialize)]
struct StreamOptions {
    /// The UDP port to open
    listen_port: u16,
    /// The IPv4 address to bind to. Defaults to 0.0.0.0.
    /// Set to `null` to use IPv6 only.
    #[serde(default = "ipv4_unspecified")]
    bind_ipv4: Option<Ipv4Addr>,
    /// The IPv6 address to bind to. Defaults to ::.
    /// Set to `null` to use IPv4 only.
    #[serde(default = "ipv6_unspecified")]
    bind_ipv6: Option<Ipv6Addr>,
    /// Other nodes which are part of the cluster.
    nodes: Vec<NodeOption>,
    /// Max duration before we drop pending messages to a node we can't connect to.
    message_timeout: String,
}
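Deserialized from the stream's config, the options above might look like the following. This is a hypothetical sketch: field names come from `StreamOptions` and `NodeOption`, the public key is taken from the test fixtures, and the port and address are made up; `message_timeout` is parsed by `parse_duration`. Omitting `bind_ipv4`/`bind_ipv6` keeps the unspecified-address defaults, while an explicit `null` disables that address family:

```json
{
  "listen_port": 7878,
  "bind_ipv6": null,
  "nodes": [
    {
      "public_key": "HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=",
      "addresses": ["203.0.113.10:7878"]
    }
  ],
  "message_timeout": "1h"
}
```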

fn ipv4_unspecified() -> Option<Ipv4Addr> {
    Some(Ipv4Addr::UNSPECIFIED)
}
fn ipv6_unspecified() -> Option<Ipv6Addr> {
    Some(Ipv6Addr::UNSPECIFIED)
}

#[derive(Serialize, Deserialize)]
struct NodeOption {
    public_key: String,
    #[serde(default)]
    addresses: Vec<SocketAddr>,
}

/// Stream information before start
struct StreamInit {
    name: String,
    listen_port: u16,
    bind_ipv4: Option<Ipv4Addr>,
    bind_ipv6: Option<Ipv6Addr>,
    secret_key: SecretKey,
    message_timeout: Duration,
    nodes: BTreeMap<PublicKey, EndpointAddr>,
    tx: mpsc::Sender<Line>,
}

#[derive(Serialize, Deserialize)]
struct ActionOptions {
    /// The line to send to the corresponding cluster, example: "ban \<ip\>"
    send: String,
    /// The name of the corresponding cluster, example: "my_cluster_stream"
    to: String,
    /// Whether the stream of this node also receives the line
    #[serde(default, rename = "self")]
    self_: bool,
}
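An action's options would then look like this, using the examples from the doc comments above (note the `rename = "self"` attribute: the config key is `self`, not `self_`; `self` defaults to false when omitted):

```json
{
  "send": "ban <ip>",
  "to": "my_cluster_stream",
  "self": true
}
```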

struct ActionInit {
    name: String,
    send: PatternLine,
    self_: bool,
    rx: mpsc::Receiver<Exec>,
}

impl PluginInfo for Plugin {
    async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
        Ok(Manifest {
            hello: Hello::new(),
            streams: BTreeSet::from(["cluster".into()]),
            actions: BTreeSet::from(["cluster_send".into()]),
        })
    }

    async fn load_config(
        &mut self,
        streams: Vec<StreamConfig>,
        actions: Vec<ActionConfig>,
    ) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
        let mut ret_streams = Vec::with_capacity(streams.len());
        let mut ret_actions = Vec::with_capacity(actions.len());

        for StreamConfig {
            stream_name,
            stream_type,
            config,
        } in streams
        {
            if &stream_type != "cluster" {
                return Err("This plugin can't handle stream types other than 'cluster'".into());
            }

            let options: StreamOptions = serde_json::from_value(config.into())
                .map_err(|err| format!("invalid options: {err}"))?;

            let mut nodes = BTreeMap::default();

            let message_timeout = parse_duration(&options.message_timeout)
                .map_err(|err| format!("invalid message_timeout: {err}"))?;

            if options.bind_ipv4.is_none() && options.bind_ipv6.is_none() {
                Err(
                    "At least one of bind_ipv4 and bind_ipv6 must be enabled. Leave at least one of them unset, or set at least one of them to an IP.",
                )?;
            }

            if options.nodes.is_empty() {
                Err("At least one remote node has to be configured for a cluster")?;
            }

            for node in options.nodes.into_iter() {
                let bytes = key::key_b64_to_bytes(&node.public_key)
                    .map_err(|err| format!("invalid public key {}: {err}", node.public_key))?;

                let public_key = PublicKey::from_bytes(&bytes)
                    .map_err(|err| format!("invalid public key {}: {err}", node.public_key))?;

                nodes.insert(
                    public_key,
                    EndpointAddr {
                        id: public_key,
                        addrs: node
                            .addresses
                            .into_iter()
                            .map(TransportAddr::Ip)
                            .collect(),
                    },
                );
            }

            let secret_key = key::secret_key(".", &stream_name).await?;
            eprintln!(
                "INFO public key of this node for cluster {stream_name}: {}",
                secret_key.public().show()
            );

            let (tx, rx) = mpsc::channel(1);

            let stream = StreamInit {
                name: stream_name.clone(),
                listen_port: options.listen_port,
                bind_ipv4: options.bind_ipv4,
                bind_ipv6: options.bind_ipv6,
                secret_key,
                message_timeout,
                nodes,
                tx,
            };

            if self.init.insert(stream_name, (stream, vec![])).is_some() {
                return Err("this virtual stream has already been initialized".into());
            }

            ret_streams.push(StreamImpl {
                stream: rx,
                standalone: true,
            })
        }

        for ActionConfig {
            stream_name,
            filter_name,
            action_name,
            action_type,
            config,
            patterns,
        } in actions
        {
            if &action_type != "cluster_send" {
                return Err(
                    "This plugin can't handle action types other than 'cluster_send'".into(),
                );
            }

            let options: ActionOptions = serde_json::from_value(config.into())
                .map_err(|err| format!("invalid options: {err}"))?;

            let (tx, rx) = mpsc::channel(1);

            let init_action = ActionInit {
                name: format!("{}.{}.{}", stream_name, filter_name, action_name),
                send: PatternLine::new(options.send, patterns),
                self_: options.self_,
                rx,
            };

            match self.init.get_mut(&options.to) {
                Some((_, actions)) => actions.push(init_action),
                None => {
                    return Err(format!(
                        "ERROR action '{}' sends 'to' unknown stream '{}'",
                        init_action.name, options.to
                    )
                    .into());
                }
            }

            ret_actions.push(ActionImpl { tx })
        }

        Ok((ret_streams, ret_actions))
    }

    async fn start(&mut self) -> RemoteResult<()> {
        self.cluster_shutdown.delegate().handle_quit_signals()?;

        let mut db = {
            let path = PathBuf::from(".");
            let (cancellation_token, task_tracker_token) = self.cluster_shutdown.token().split();
            Database::open(&path, cancellation_token, task_tracker_token)
                .await
                .map_err(|err| format!("Can't open database: {err}"))?
        };

        while let Some((_, (stream, actions))) = self.init.pop_first() {
            let endpoint = cluster::bind(&stream).await?;
            cluster::cluster_tasks(
                endpoint,
                stream,
                actions,
                &mut db,
                self.cluster_shutdown.clone(),
            )
            .await?;
        }
        // Free containers
        self.init = Default::default();
        eprintln!("DEBUG started");

        Ok(())
    }

    async fn close(self) -> RemoteResult<()> {
        self.cluster_shutdown.ask_shutdown();
        self.cluster_shutdown.wait_all_task_shutdown().await;
        Ok(())
    }
}
|
||||
|
|
@ -1,293 +0,0 @@
use std::env::set_current_dir;

use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig};
use serde_json::json;

use crate::{Plugin, tests::insert_secret_key};

use super::{PUBLIC_KEY_A, TEST_MUTEX, stream_ok};

#[tokio::test]
async fn conf_stream() {
    // Minimal node configuration
    let nodes = json!([{
        "public_key": PUBLIC_KEY_A,
    }]);

    // Invalid type
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "clust".into(),
                    config: stream_ok().into(),
                }],
                vec![]
            )
            .await
            .is_err()
    );

    for (json, is_ok) in [
        (
            json!({
                "listen_port": 2048,
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_ok as fn(&_) -> bool,
        ),
        (
            // invalid time
            json!({
                "listen_port": 2048,
                "nodes": nodes,
                "message_timeout": "30pv",
            }),
            Result::is_err,
        ),
        (
            json!({
                "listen_port": 2048,
                "bind_ipv4": "0.0.0.0",
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_ok,
        ),
        (
            json!({
                "listen_port": 2048,
                "bind_ipv6": "::",
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_ok,
        ),
        (
            json!({
                "listen_port": 2048,
                "bind_ipv4": "0.0.0.0",
                "bind_ipv6": "::",
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_ok,
        ),
        (
            json!({
                "listen_port": 2048,
                "bind_ipv4": null,
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_ok,
        ),
        (
            json!({
                "listen_port": 2048,
                "bind_ipv6": null,
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_ok,
        ),
        (
            // No bind
            json!({
                "listen_port": 2048,
                "bind_ipv4": null,
                "bind_ipv6": null,
                "nodes": nodes,
                "message_timeout": "30m",
            }),
            Result::is_err,
        ),
        (json!({}), Result::is_err),
    ] {
        assert!(is_ok(
            &Plugin::default()
                .load_config(
                    vec![StreamConfig {
                        stream_name: "stream".into(),
                        stream_type: "cluster".into(),
                        config: json.into(),
                    }],
                    vec![]
                )
                .await
        ));
    }
}

#[tokio::test]
async fn conf_action() {
    let patterns = vec!["p1".into(), "p2".into()];

    // Invalid type
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "cluster".into(),
                    config: stream_ok().into(),
                }],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "cluster_sen".into(),
                    config: json!({
                        "send": "<p1>",
                        "to": "stream",
                    })
                    .into(),
                    patterns: patterns.clone(),
                }]
            )
            .await
            .is_err()
    );

    for (json, is_ok) in [
        (
            json!({
                "send": "<p1>",
                "to": "stream",
            }),
            true,
        ),
        (
            json!({
                "send": "<p1>",
                "to": "stream",
                "self": true,
            }),
            true,
        ),
        (
            json!({
                "send": "<p1>",
                "to": "stream",
                "self": false,
            }),
            true,
        ),
        (
            // missing to
            json!({
                "send": "<p1>",
            }),
            false,
        ),
        (
            // missing send
            json!({
                "to": "stream",
            }),
            false,
        ),
        (
            // invalid self
            json!({
                "send": "<p1>",
                "to": "stream",
                "self": "true",
            }),
            false,
        ),
        (
            // missing conf
            json!({}),
            false,
        ),
    ] {
        let ret = Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "cluster".into(),
                    config: stream_ok().into(),
                }],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "cluster_send".into(),
                    config: json.clone().into(),
                    patterns: patterns.clone(),
                }],
            )
            .await;

        assert!(
            ret.is_ok() == is_ok,
            "is_ok: {is_ok}, ret: {:?}, action conf: {json:?}",
            ret.map(|_| ())
        );
    }
}

#[tokio::test]
async fn conf_send() {
    let _lock = TEST_MUTEX.lock();
    let dir = TempDir::new().unwrap();
    set_current_dir(&dir).unwrap();
    insert_secret_key().await;

    // No action is ok
    let res = Plugin::default()
        .load_config(
            vec![StreamConfig {
                stream_name: "stream".into(),
                stream_type: "cluster".into(),
                config: stream_ok().into(),
            }],
            vec![],
        )
        .await;
    assert!(res.is_ok(), "{:?}", res.map(|_| ()));

    // An action is ok
    let res = Plugin::default()
        .load_config(
            vec![StreamConfig {
                stream_name: "stream".into(),
                stream_type: "cluster".into(),
                config: stream_ok().into(),
            }],
            vec![ActionConfig {
                stream_name: "stream".into(),
                filter_name: "filter".into(),
                action_name: "action".into(),
                action_type: "cluster_send".into(),
                config: json!({ "send": "message", "to": "stream" }).into(),
                patterns: vec![],
            }],
        )
        .await;
    assert!(res.is_ok(), "{:?}", res.map(|_| ()));

    // Invalid to: option
    let res = Plugin::default()
        .load_config(
            vec![StreamConfig {
                stream_name: "stream".into(),
                stream_type: "cluster".into(),
                config: stream_ok().into(),
            }],
            vec![ActionConfig {
                stream_name: "stream".into(),
                filter_name: "filter".into(),
                action_name: "action".into(),
                action_type: "cluster_send".into(),
                config: json!({ "send": "message", "to": "stream1" }).into(),
                patterns: vec![],
            }],
        )
        .await;
    assert!(res.is_err(), "{:?}", res.map(|_| ()));
}
@ -1,319 +0,0 @@
use std::{env::set_current_dir, time::Duration};

use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig};
use serde_json::json;
use tokio::{fs, time::timeout};
use treedb::time::now;

use crate::{
    Plugin,
    key::secret_key_path,
    tests::{PUBLIC_KEY_A, PUBLIC_KEY_B, SECRET_KEY_A, SECRET_KEY_B, TEST_MUTEX},
};

#[derive(Clone)]
struct TestNode {
    public_key: &'static str,
    private_key: &'static str,
    port: u16,
}

const POOL: [TestNode; 15] = [
    TestNode {
        public_key: PUBLIC_KEY_A,
        private_key: SECRET_KEY_A,
        port: 2055,
    },
    TestNode {
        public_key: PUBLIC_KEY_B,
        private_key: SECRET_KEY_B,
        port: 2056,
    },
    TestNode {
        public_key: "ZjEPlIdGikV_sPIAUzO3RFUidlERJUhJ9XwNAlieuvU=",
        private_key: "SCbd8Ids3Dg9MwzyMNV1KFcUtsyRbeCp7GDmu-xXBSs=",
        port: 2057,
    },
    TestNode {
        public_key: "2FUpABLl9I6bU9a2XtWKMLDzwHfrVcNEG6K8Ix6sxWQ=",
        private_key: "F0W8nIlVmuFVpelwYH4PDaBDM0COYOyXDmBEmnHyo5s=",
        port: 2058,
    },
    TestNode {
        public_key: "qR4JDI_yyPWUBrmBbQjqfFbGP14v9dEaQVPHPOjId1o=",
        private_key: "S5pxTafNXPd_9TMT4_ERuPXlZ882UmggAHrf8Yntfqg=",
        port: 2059,
    },
    TestNode {
        public_key: "NjkPBwDO4IEOBjkcxufYtVXspJNQZ0qF6GamRq2TOB4=",
        private_key: "zM_lXiFuwTkmPuuXqIghW_J0uwq0a53L_yhM57uy_R8=",
        port: 2060,
    },
    TestNode {
        public_key: "_mgTzrlE8b_zvka3LgfD5qH2h_d3S0hcDU1WzIL6C74=",
        private_key: "6Obq7fxOXK-u-P3QB5FJvNnwXdKwP1FsVJ0555o7DXs=",
        port: 2061,
    },
    TestNode {
        public_key: "FLKxCSSjjzxH0ZWTpQ8xXcSIRutXUhIDhZimjamxO2s=",
        private_key: "pBPcJ32bt4xGZIGZDLDtj0eedg7p5DENjAwA-wM-1vk=",
        port: 2062,
    },
    TestNode {
        public_key: "yYBWzhzXO4isdPW2SzI-Sv3mcy3dUl6Kl0oFN6YpuzE=",
        private_key: "nC8F6prLAY9-86EZlfXwpOjQeghlPKf3PtT-zXsJZsA=",
        port: 2063,
    },
    TestNode {
        public_key: "QLbNxlLEUt0tieD9BX9of663gCm9WjKeqch0BIFJ3CE=",
        private_key: "JL4bKNHJMaMX_ElnaDHc6Ql74HZbovcswNOrY6fN1sU=",
        port: 2064,
    },
    TestNode {
        public_key: "2cmAmcaEFW-9val6WMoHSfTW25IxiQHes7Jwy6NqLLc=",
        private_key: "TCvfDLHLQ5RxfAs7_2Th2u1XF48ygxTLAAsUzVPBn_o=",
        port: 2065,
    },
    TestNode {
        public_key: "PfKYILyGmu0C6GFUOLw4MSLxN6gtkj0XUdvQW50A2xA=",
        private_key: "LaQgDWsXpwSQlZZXd8UEllrgpeXw9biSye4zcjLclU0=",
        port: 2066,
    },
    TestNode {
        public_key: "OQMXwPl90gr-2y-f5qZIZuVG4WEae5cc8JOB39LTNYE=",
        private_key: "blcigXzk0oeQ8J1jwYFiYHJ-pMiUqbUM4SJBlxA0MiI=",
        port: 2067,
    },
    TestNode {
        public_key: "DHpkBgnQUfpC7s4-mTfpn1_PN4dzj7hCCMF6GwO3Bus=",
        private_key: "sw7-2gPOswznF2OJHJdbfyJxdjS-P5O0lie6SdOL_08=",
        port: 2068,
    },
    TestNode {
        public_key: "odjjaYd6lL1DG8N9AXHW9LGsrKIb5IlW0KZz-rgxfXA=",
        private_key: "6JU6YHRBM_rJkuQmMaGaio_PZiyzZlTIU0qE8AHPGSE=",
        port: 2069,
    },
];

async fn stream_action(
    name: &str,
    index: usize,
    nodes: &[TestNode],
) -> (StreamConfig, ActionConfig) {
    let stream_name = format!("stream_{name}");
    let this_node = &nodes[index];
    let other_nodes: Vec<_> = nodes
        .iter()
        .filter(|node| node.public_key != this_node.public_key)
        .map(|node| {
            json!({
                "public_key": node.public_key,
                "addresses": [format!("[::1]:{}", node.port)]
            })
        })
        .collect();

    fs::write(secret_key_path(".", &stream_name), this_node.private_key)
        .await
        .unwrap();

    (
        StreamConfig {
            stream_name: stream_name.clone(),
            stream_type: "cluster".into(),
            config: json!({
                "message_timeout": "30s",
                "listen_port": this_node.port,
                "nodes": other_nodes,
            })
            .into(),
        },
        ActionConfig {
            stream_name: "stream".into(),
            filter_name: "filter".into(),
            action_name: "action".into(),
            action_type: "cluster_send".into(),
            config: json!({
                "send": format!("from {name}: <test>"),
                "to": stream_name,
            })
            .into(),
            patterns: vec!["test".into()],
        },
    )
}

#[tokio::test]
async fn two_nodes_simultaneous_startup() {
    for separate_plugin in [true /*, false */] {
        let _lock = TEST_MUTEX.lock();
        let dir = TempDir::new().unwrap();
        set_current_dir(&dir).unwrap();

        let ((mut stream_a, action_a), (mut stream_b, action_b)) = if separate_plugin {
            let mut plugin_a = Plugin::default();
            let (sa, aa) = stream_action("a", 0, &POOL[0..2]).await;
            let (mut streams_a, mut actions_a) =
                plugin_a.load_config(vec![sa], vec![aa]).await.unwrap();
            plugin_a.start().await.unwrap();

            let mut plugin_b = Plugin::default();
            let (sb, ab) = stream_action("b", 1, &POOL[0..2]).await;
            let (mut streams_b, mut actions_b) =
                plugin_b.load_config(vec![sb], vec![ab]).await.unwrap();
            plugin_b.start().await.unwrap();
            (
                (streams_a.remove(0), actions_a.remove(0)),
                (streams_b.remove(0), actions_b.remove(0)),
            )
        } else {
            let mut plugin = Plugin::default();
            let a = stream_action("a", 0, &POOL[0..2]).await;
            let b = stream_action("b", 1, &POOL[0..2]).await;
            let (mut streams, mut actions) = plugin
                .load_config(vec![a.0, b.0], vec![a.1, b.1])
                .await
                .unwrap();
            plugin.start().await.unwrap();
            (
                (streams.remove(0), actions.remove(0)),
                (streams.remove(1), actions.remove(1)),
            )
        };

        for m in ["test1", "test2", "test3"] {
            let time = now().into();
            for (stream, action, from) in [
                (&mut stream_b, &action_a, "a"),
                (&mut stream_a, &action_b, "b"),
            ] {
                assert!(
                    action
                        .tx
                        .send(Exec {
                            match_: vec![m.into()],
                            time,
                        })
                        .await
                        .is_ok(),
                    "separate_plugin: {separate_plugin}, message: {m}, from: {from}"
                );

                let received = timeout(Duration::from_millis(5000), stream.stream.recv()).await;

                assert!(
                    received.is_ok(),
                    "separate_plugin: {separate_plugin}, message: {m}, from: {from}, did timeout"
                );
                let received = received.unwrap();
                assert!(
                    received.is_ok(),
                    "separate_plugin: {separate_plugin}, message: {m}, from: {from}, remoc receive error"
                );
                let received = received.unwrap();
                assert_eq!(
                    received,
                    Some((format!("from {from}: {m}"), time)),
                    "separate_plugin: {separate_plugin}, message: {m}, from: {from}"
                );
            }
        }
    }
}

#[tokio::test]
async fn n_nodes_simultaneous_startup() {
    let _lock = TEST_MUTEX.lock();

    // Ports can take some time to be really closed
    let mut port_delta = 0;

    for n in 3..=POOL.len() {
        println!("\nNODES: {n}\n");
        port_delta += n;
        // for n in 3..=3 {
        let dir = TempDir::new().unwrap();
        set_current_dir(&dir).unwrap();

        let mut plugins = Vec::with_capacity(n);
        let mut streams = Vec::with_capacity(n);
        let mut actions = Vec::with_capacity(n);
        for i in 0..n {
            let mut plugin = Plugin::default();
            let name = format!("n{i}");
            let (stream, action) = stream_action(
                &name,
                i,
                &POOL[0..n]
                    .iter()
                    .map(|node| node.clone())
                    .map(|node| TestNode {
                        port: node.port + port_delta as u16,
                        ..node
                    })
                    .collect::<Vec<_>>()
                    .as_slice(),
            )
            .await;
            let (mut stream, mut action) = plugin
                .load_config(vec![stream], vec![action])
                .await
                .unwrap();
            plugin.start().await.unwrap();
            plugins.push(plugin);
            streams.push(stream.pop().unwrap());
            actions.push((action.pop().unwrap(), name));
        }

        for m in ["test1", "test2", "test3", "test4", "test5"] {
            let time = now().into();
            for (i, (action, from)) in actions.iter().enumerate() {
                assert!(
                    action
                        .tx
                        .send(Exec {
                            match_: vec![m.into()],
                            time,
                        })
                        .await
                        .is_ok(),
                    "n nodes: {n}, n°action{i}, message: {m}, from: {from}"
                );

                for (j, stream) in streams.iter_mut().enumerate().filter(|(j, _)| *j != i) {
                    let received = timeout(Duration::from_millis(5000), stream.stream.recv()).await;

                    assert!(
                        received.is_ok(),
                        "n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}, did timeout"
                    );
                    let received = received.unwrap();
                    assert!(
                        received.is_ok(),
                        "n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}, remoc receive error"
                    );
                    let received = received.unwrap();
                    assert_eq!(
                        received,
                        Some((format!("from {from}: {m}"), time)),
                        "n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}"
                    );
                    println!(
                        "n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}"
                    );
                }
            }
        }

        for plugin in plugins {
            plugin.close().await.unwrap();
        }
    }
}

// TODO test:
// with inexisting nodes
// different startup times
// stopping & restarting a node mid exchange
@ -1,40 +0,0 @@
use std::sync::{LazyLock, Mutex};

use serde_json::json;
use tokio::fs::write;

mod conf;
mod e2e;
mod self_;

const SECRET_KEY_A: &str = "g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=";
const PUBLIC_KEY_A: &str = "HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=";

const SECRET_KEY_B: &str = "5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=";
const PUBLIC_KEY_B: &str = "LPSQ9pS7m_5vvNC-fhoBNeL2-eS2Fd6aO4ImSnXp3lc=";

// Tests that spawn a database in current directory must be run one at a time
static TEST_MUTEX: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(()));

fn stream_ok_port(port: u16) -> serde_json::Value {
    json!({
        "listen_port": port,
        "nodes": [{
            "public_key": PUBLIC_KEY_A,
        }],
        "message_timeout": "30m",
    })
}

fn stream_ok() -> serde_json::Value {
    stream_ok_port(2048)
}

async fn insert_secret_key() {
    write(
        "./secret_key_stream.txt",
        b"pBPcJ32bt4xGZIGZDLDtj0eedg7p5DENjAwA-wM-1vk=",
    )
    .await
    .unwrap();
}
@ -1,78 +0,0 @@
use std::{env::set_current_dir, time::Duration};

use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig};
use serde_json::json;
use tokio::time::timeout;
use treedb::time::now;

use crate::{Plugin, tests::insert_secret_key};

use super::{TEST_MUTEX, stream_ok_port};

#[tokio::test]
async fn run_with_self() {
    let _lock = TEST_MUTEX.lock();
    let dir = TempDir::new().unwrap();
    set_current_dir(&dir).unwrap();
    insert_secret_key().await;

    for self_ in [true, false] {
        let mut plugin = Plugin::default();
        let (mut streams, mut actions) = plugin
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "cluster".into(),
                    config: stream_ok_port(2052).into(),
                }],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "cluster_send".into(),
                    config: json!({
                        "send": "message <test>",
                        "to": "stream",
                        "self": self_,
                    })
                    .into(),
                    patterns: vec!["test".into()],
                }],
            )
            .await
            .unwrap();

        let mut stream = streams.pop().unwrap();
        let action = actions.pop().unwrap();
        assert!(stream.standalone);
        assert!(plugin.start().await.is_ok());

        for m in ["test1", "test2", "test3", " a a a aa a a"] {
            let time = now().into();
            assert!(
                action
                    .tx
                    .send(Exec {
                        match_: vec![m.into()],
                        time,
                    })
                    .await
                    .is_ok()
            );
            if self_ {
                assert_eq!(
                    stream.stream.recv().await.unwrap().unwrap(),
                    (format!("message {m}"), time),
                );
            } else {
                // Don't receive anything
                assert!(
                    timeout(Duration::from_millis(100), stream.stream.recv())
                        .await
                        .is_err()
                );
            }
        }
    }
}
@ -1,26 +0,0 @@
[package]
name = "reaction-plugin-ipset"
description = "ipset plugin for reaction"
version = "1.0.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "fail2ban", "logs", "monitoring"]
default-run = "reaction-plugin-ipset"

[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
ipset = "0.9.0"

[package.metadata.deb]
section = "net"
assets = [
    [ "target/release/reaction-plugin-ipset", "/usr/bin/reaction-plugin-ipset", "755" ],
]
depends = ["libipset-dev", "reaction"]
@ -1,419 +0,0 @@
use std::{fmt::Debug, u32, usize};

use reaction_plugin::{Exec, shutdown::ShutdownToken, time::parse_duration};
use remoc::rch::mpsc as remocMpsc;
use serde::{Deserialize, Serialize};

use crate::ipset::{CreateSet, IpSet, Order, SetChain, Version};

#[derive(Default, Serialize, Deserialize, PartialEq, Eq, Clone, Copy)]
pub enum IpVersion {
    #[default]
    #[serde(rename = "ip")]
    Ip,
    #[serde(rename = "ipv4")]
    Ipv4,
    #[serde(rename = "ipv6")]
    Ipv6,
}
impl Debug for IpVersion {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "{}",
            match self {
                IpVersion::Ipv4 => "ipv4",
                IpVersion::Ipv6 => "ipv6",
                IpVersion::Ip => "ip",
            }
        )
    }
}

#[derive(Default, Serialize, Deserialize)]
pub enum AddDel {
    #[default]
    #[serde(alias = "add")]
    Add,
    #[serde(alias = "del")]
    Del,
}

/// User-facing action options
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ActionOptions {
    /// The set that should be used by this action
    pub set: String,
    /// The pattern name of the IP.
    /// Defaults to "ip"
    #[serde(default = "serde_ip")]
    pub pattern: String,
    #[serde(skip)]
    ip_index: usize,
    // Whether the action is to "add" or "del" the ip from the set
    #[serde(default)]
    action: AddDel,

    #[serde(flatten)]
    pub set_options: SetOptions,
}

fn serde_ip() -> String {
    "ip".into()
}

impl ActionOptions {
    pub fn set_ip_index(&mut self, patterns: Vec<String>) -> Result<(), ()> {
        self.ip_index = patterns
            .into_iter()
            .enumerate()
            .filter(|(_, name)| name == &self.pattern)
            .next()
            .ok_or(())?
            .0;
        Ok(())
    }
}

/// Merged set options
#[derive(Default, Clone, Deserialize, Serialize, Debug, PartialEq, Eq)]
pub struct SetOptions {
    /// The IP type.
    /// Defaults to `46`.
    /// If `ipv4`: creates an IPv4 set with this name
    /// If `ipv6`: creates an IPv6 set with this name
    /// If `ip`: creates an IPv4 set with its name suffixed by 'v4' AND an IPv6 set with its name suffixed by 'v6'
    /// *Merged set-wise*.
    #[serde(default)]
    version: Option<IpVersion>,
    /// Chains where the IP set should be inserted.
    /// Defaults to `["INPUT", "FORWARD"]`
    /// *Merged set-wise*.
    #[serde(default)]
    chains: Option<Vec<String>>,
    // Optional timeout, letting linux/netfilter handle set removal instead of reaction
    // Note that `reaction show` and `reaction flush` won't work if set instead of an `after` action
    // Same syntax as after and retryperiod in reaction.
    /// *Merged set-wise*.
    #[serde(skip_serializing_if = "Option::is_none")]
    timeout: Option<String>,
    #[serde(skip)]
    timeout_u32: Option<u32>,
    // Target that iptables should use when the IP is encountered.
    // Defaults to DROP, but can also be ACCEPT, RETURN or any user-defined chain
    /// *Merged set-wise*.
    #[serde(default)]
    target: Option<String>,
}

impl SetOptions {
    pub fn merge(&mut self, options: &SetOptions) -> Result<(), String> {
        // merge two Option<T> and fail if there is conflict
        fn inner_merge<T: Eq + Clone + std::fmt::Debug>(
            a: &mut Option<T>,
            b: &Option<T>,
            name: &str,
        ) -> Result<(), String> {
            match (&a, &b) {
                (Some(aa), Some(bb)) => {
                    if aa != bb {
                        return Err(format!(
                            "Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
                        ));
                    }
                }
                (None, Some(_)) => {
                    *a = b.clone();
                }
                _ => (),
            };
            Ok(())
        }

        inner_merge(&mut self.version, &options.version, "version")?;
        inner_merge(&mut self.timeout, &options.timeout, "timeout")?;
        inner_merge(&mut self.chains, &options.chains, "chains")?;
        inner_merge(&mut self.target, &options.target, "target")?;

        if let Some(timeout) = &self.timeout {
            let duration = parse_duration(timeout)
                .map_err(|err| format!("failed to parse timeout: {}", err))?
                .as_secs();
            if duration > u32::MAX as u64 {
                return Err(format!(
                    "timeout is limited to {} seconds (approx {} days)",
                    u32::MAX,
                    49_000
                ));
            }
            self.timeout_u32 = Some(duration as u32);
        }

        Ok(())
    }
}

pub struct Set {
    sets: SetNames,
    chains: Vec<String>,
    timeout: Option<u32>,
    target: String,
}

impl Set {
    pub fn from(name: String, options: SetOptions) -> Self {
        Self {
            sets: SetNames::new(name, options.version),
            timeout: options.timeout_u32,
            target: options.target.unwrap_or("DROP".into()),
            chains: options
                .chains
                .unwrap_or(vec!["INPUT".into(), "FORWARD".into()]),
        }
    }

    pub async fn init(&self, ipset: &mut IpSet) -> Result<(), (usize, String)> {
        for (set, version) in [
            (&self.sets.ipv4, Version::IPv4),
            (&self.sets.ipv6, Version::IPv6),
        ] {
            if let Some(set) = set {
                // create set
                ipset
                    .order(Order::CreateSet(CreateSet {
                        name: set.clone(),
                        version,
                        timeout: self.timeout,
                    }))
                    .await
                    .map_err(|err| (0, err.to_string()))?;
                // insert set in chains
                for (i, chain) in self.chains.iter().enumerate() {
                    ipset
                        .order(Order::InsertSet(SetChain {
                            set: set.clone(),
                            chain: chain.clone(),
                            target: self.target.clone(),
                        }))
                        .await
                        .map_err(|err| (i + 1, err.to_string()))?;
                }
            }
        }
        Ok(())
    }

    pub async fn destroy(&self, ipset: &mut IpSet, until: Option<usize>) {
        for set in [&self.sets.ipv4, &self.sets.ipv6] {
            if let Some(set) = set {
                for chain in self
                    .chains
                    .iter()
                    .take(until.map(|until| until - 1).unwrap_or(usize::MAX))
                {
                    let _ = ipset
                        .order(Order::RemoveSet(SetChain {
                            set: set.clone(),
                            chain: chain.clone(),
                            target: self.target.clone(),
                        }))
                        .await;
                }
                if until.is_none_or(|until| until != 0) {
                    let _ = ipset.order(Order::DestroySet(set.clone())).await;
                }
            }
        }
    }
}

pub struct SetNames {
    pub ipv4: Option<String>,
    pub ipv6: Option<String>,
}

impl SetNames {
    pub fn new(name: String, version: Option<IpVersion>) -> Self {
        Self {
            ipv4: match version {
                Some(IpVersion::Ipv4) => Some(name.clone()),
                Some(IpVersion::Ipv6) => None,
                None | Some(IpVersion::Ip) => Some(format!("{}v4", name)),
            },
            ipv6: match version {
                Some(IpVersion::Ipv4) => None,
                Some(IpVersion::Ipv6) => Some(name),
                None | Some(IpVersion::Ip) => Some(format!("{}v6", name)),
            },
        }
    }
}

pub struct Action {
    ipset: IpSet,
    rx: remocMpsc::Receiver<Exec>,
    shutdown: ShutdownToken,
    sets: SetNames,
    // index of pattern ip in match vec
    ip_index: usize,
    action: AddDel,
}

impl Action {
    pub fn new(
        ipset: IpSet,
        shutdown: ShutdownToken,
        rx: remocMpsc::Receiver<Exec>,
        options: ActionOptions,
    ) -> Result<Self, String> {
        Ok(Action {
            ipset,
            rx,
            shutdown,
            sets: SetNames::new(options.set, options.set_options.version),
            ip_index: options.ip_index,
            action: options.action,
        })
    }

    pub async fn serve(mut self) {
        loop {
            let event = tokio::select! {
                exec = self.rx.recv() => Some(exec),
                _ = self.shutdown.wait() => None,
            };
            match event {
                // shutdown asked
                None => break,
                // channel closed
                Some(Ok(None)) => break,
                // error from channel
                Some(Err(err)) => {
                    eprintln!("ERROR {err}");
                    break;
                }
                // ok
                Some(Ok(Some(exec))) => {
                    if let Err(err) = self.handle_exec(exec).await {
                        eprintln!("ERROR {err}");
                        break;
                    }
                }
            }
        }
        // eprintln!("DEBUG Asking for shutdown");
        // self.shutdown.ask_shutdown();
    }

    async fn handle_exec(&mut self, mut exec: Exec) -> Result<(), String> {
        // safeguard against Vec::remove's panic
        if exec.match_.len() <= self.ip_index {
            return Err(format!(
                "match received from reaction is smaller than expected. looking for index {} but size is {}. this is a bug!",
                self.ip_index,
                exec.match_.len()
            ));
        }
        let ip = exec.match_.remove(self.ip_index);
        // select set
        let set = match (&self.sets.ipv4, &self.sets.ipv6) {
            (None, None) => return Err(format!("action is neither IPv4 nor IPv6, this is a bug!")),
            (None, Some(set)) => set,
            (Some(set), None) => set,
            (Some(set4), Some(set6)) => {
                if ip.contains(':') {
                    set6
                } else {
                    set4
                }
            }
        };
        // add/remove ip to set
        self.ipset
            .order(match self.action {
                AddDel::Add => Order::Add(set.clone(), ip),
                AddDel::Del => Order::Del(set.clone(), ip),
            })
            .await?;
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use crate::action::{IpVersion, SetOptions};

    #[tokio::test]
    async fn set_options_merge() {
        let s1 = SetOptions {
            version: None,
            chains: None,
            timeout: None,
            timeout_u32: None,
            target: None,
        };
        let s2 = SetOptions {
            version: Some(IpVersion::Ipv4),
            chains: Some(vec!["INPUT".into()]),
            timeout: Some("3h".into()),
            timeout_u32: Some(3 * 3600),
            target: Some("DROP".into()),
        };
        assert_ne!(s1, s2);
        assert_eq!(s1, SetOptions::default());

        {
            // s2 can be merged in s1
            let mut s1 = s1.clone();
            assert!(s1.merge(&s2).is_ok());
            assert_eq!(s1, s2);
        }

        {
            // s1 can be merged in s2
            let mut s2 = s2.clone();
            assert!(s2.merge(&s1).is_ok());
        }

        {
            // s1 can be merged in itself
            let mut s3 = s1.clone();
            assert!(s3.merge(&s1).is_ok());
            assert_eq!(s1, s3);
        }

        {
            // s2 can be merged in itself
            let mut s3 = s2.clone();
            assert!(s3.merge(&s2).is_ok());
            assert_eq!(s2, s3);
        }

        for s3 in [
            SetOptions {
                version: Some(IpVersion::Ipv6),
                ..Default::default()
            },
            SetOptions {
                chains: Some(vec!["damn".into()]),
                ..Default::default()
            },
            SetOptions {
                timeout: Some("30min".into()),
                ..Default::default()
            },
            SetOptions {
                target: Some("log-refuse".into()),
                ..Default::default()
            },
        ] {
            // none with some is ok
            assert!(s3.clone().merge(&s1).is_ok(), "s3: {s3:?}");
|
||||
assert!(s1.clone().merge(&s3).is_ok(), "s3: {s3:?}");
|
||||
// different some is ko
|
||||
assert!(s3.clone().merge(&s2).is_err(), "s3: {s3:?}");
|
||||
assert!(s2.clone().merge(&s3).is_err(), "s3: {s3:?}");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,248 +0,0 @@
use std::{collections::BTreeMap, fmt::Display, net::Ipv4Addr, process::Command, thread};

use ipset::{
    Session,
    types::{HashNet, NetDataType, Parse},
};
use tokio::sync::{mpsc, oneshot};

#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub enum Version {
    IPv4,
    IPv6,
}
impl Display for Version {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(match self {
            Version::IPv4 => "IPv4",
            Version::IPv6 => "IPv6",
        })
    }
}

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct CreateSet {
    pub name: String,
    pub version: Version,
    pub timeout: Option<u32>,
}

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct SetChain {
    pub set: String,
    pub chain: String,
    pub target: String,
}

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub enum Order {
    CreateSet(CreateSet),
    DestroySet(String),
    InsertSet(SetChain),
    RemoveSet(SetChain),
    Add(String, String),
    Del(String, String),
}

#[derive(Clone)]
pub struct IpSet {
    tx: mpsc::Sender<OrderType>,
}

impl Default for IpSet {
    fn default() -> Self {
        let (tx, rx) = mpsc::channel(1);
        thread::spawn(move || IPsetManager::default().serve(rx));
        Self { tx }
    }
}

impl IpSet {
    pub async fn order(&mut self, order: Order) -> Result<(), IpSetError> {
        let (tx, rx) = oneshot::channel();
        self.tx
            .send((order, tx))
            .await
            .map_err(|err| IpSetError::Thread(format!("ipset thread has quit: {err}")))?;
        rx.await
            .map_err(|err| IpSetError::Thread(format!("ipset thread didn't respond: {err}")))?
            .map_err(IpSetError::IpSet)
    }
}

pub enum IpSetError {
    Thread(String),
    IpSet(()),
}
impl Display for IpSetError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "{}",
            match self {
                IpSetError::Thread(err) => err,
                IpSetError::IpSet(()) => "ipset error",
            }
        )
    }
}
impl From<IpSetError> for String {
    fn from(value: IpSetError) -> Self {
        match value {
            IpSetError::Thread(err) => err,
            IpSetError::IpSet(()) => "ipset error".to_string(),
        }
    }
}

pub type OrderType = (Order, oneshot::Sender<Result<(), ()>>);

struct Set {
    session: Session<HashNet>,
    version: Version,
}

#[derive(Default)]
struct IPsetManager {
    // IPset sessions
    sessions: BTreeMap<String, Set>,
}

impl IPsetManager {
    fn serve(mut self, mut rx: mpsc::Receiver<OrderType>) {
        loop {
            match rx.blocking_recv() {
                None => break,
                Some((order, response)) => {
                    let result = self.handle_order(order);
                    let _ = response.send(result);
                }
            }
        }
    }

    fn handle_order(&mut self, order: Order) -> Result<(), ()> {
        match order {
            Order::CreateSet(CreateSet {
                name,
                version,
                timeout,
            }) => {
                eprintln!("INFO creating {version} set {name}");
                let mut session: Session<HashNet> = Session::new(name.clone());
                session
                    .create(|builder| {
                        let builder = if let Some(timeout) = timeout {
                            builder.with_timeout(timeout)?
                        } else {
                            builder
                        };
                        builder.with_ipv6(version == Version::IPv6)?.build()
                    })
                    .map_err(|err| eprintln!("ERROR Could not create set {name}: {err}"))?;

                self.sessions.insert(name, Set { session, version });
            }
            Order::DestroySet(set) => {
                if let Some(mut session) = self.sessions.remove(&set) {
                    eprintln!("INFO destroying {} set {set}", session.version);
                    session
                        .session
                        .destroy()
                        .map_err(|err| eprintln!("ERROR Could not destroy set {set}: {err}"))?;
                }
            }

            Order::InsertSet(options) => self.insert_remove_set(options, true)?,
            Order::RemoveSet(options) => self.insert_remove_set(options, false)?,

            Order::Add(set, ip) => self.insert_remove_ip(set, ip, true)?,
            Order::Del(set, ip) => self.insert_remove_ip(set, ip, false)?,
        };
        Ok(())
    }

    fn insert_remove_ip(&mut self, set: String, ip: String, insert: bool) -> Result<(), ()> {
        self._insert_remove_ip(set, ip, insert)
            .map_err(|err| eprintln!("ERROR {err}"))
    }
    fn _insert_remove_ip(&mut self, set: String, ip: String, insert: bool) -> Result<(), String> {
        let session = self.sessions.get_mut(&set).ok_or(format!(
            "No set handled by this plugin with this name: {set}. This likely is a bug."
        ))?;

        let mut net_data = NetDataType::new(Ipv4Addr::LOCALHOST, 0);
        net_data
            .parse(&ip)
            .map_err(|err| format!("`{ip}` is not recognized as an IP: {err}"))?;

        if insert {
            session.session.add(net_data, &[])
        } else {
            session.session.del(net_data)
        }
        .map_err(|err| format!("Could not add `{ip}` to set {set}: {err}"))?;

        Ok(())
    }

    fn insert_remove_set(&self, options: SetChain, insert: bool) -> Result<(), ()> {
        self._insert_remove_set(options, insert)
            .map_err(|err| eprintln!("ERROR {err}"))
    }
    fn _insert_remove_set(&self, options: SetChain, insert: bool) -> Result<(), String> {
        let SetChain { set, chain, target } = options;

        let version = self
            .sessions
            .get(&set)
            .ok_or(format!(
                "No set managed by this plugin with this name: {set}"
            ))?
            .version;

        let (verb, verbing, from) = if insert {
            ("insert", "inserting", "in")
        } else {
            ("remove", "removing", "from")
        };

        eprintln!("INFO {verbing} {version} set {set} {from} chain {chain}");

        let command = match version {
            Version::IPv4 => "iptables",
            Version::IPv6 => "ip6tables",
        };

        let mut child = Command::new(command)
            .args([
                "-w",
                if insert { "-I" } else { "-D" },
                &chain,
                "-m",
                "set",
                "--match-set",
                &set,
                "src",
                "-j",
                &target,
            ])
            .spawn()
            .map_err(|err| format!("Could not {verb} ipset {set} {from} chain {chain}: Could not execute {command}: {err}"))?;

        let exit = child
            .wait()
            .map_err(|err| format!("Could not {verb} ipset {set} {from} chain {chain}: {err}"))?;

        if exit.success() {
            Ok(())
        } else {
            Err(format!(
                "Could not {verb} ipset: exit code {}",
                exit.code()
                    .map(|c| c.to_string())
                    .unwrap_or_else(|| "<unknown>".to_string())
            ))
        }
    }
}
@@ -1,159 +0,0 @@
use std::collections::{BTreeMap, BTreeSet};

use reaction_plugin::{
    ActionConfig, ActionImpl, Hello, Manifest, PluginInfo, RemoteError, RemoteResult, StreamConfig,
    StreamImpl,
    shutdown::{ShutdownController, ShutdownToken},
};
use remoc::rtc;

use crate::{
    action::{Action, ActionOptions, Set, SetOptions},
    ipset::IpSet,
};

#[cfg(test)]
mod tests;

mod action;
mod ipset;

#[tokio::main]
async fn main() {
    let plugin = Plugin::default();
    reaction_plugin::main_loop(plugin).await;
}

#[derive(Default)]
struct Plugin {
    ipset: IpSet,
    sets: Vec<Set>,
    actions: Vec<Action>,
    shutdown: ShutdownController,
}

impl PluginInfo for Plugin {
    async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
        Ok(Manifest {
            hello: Hello::new(),
            streams: BTreeSet::default(),
            actions: BTreeSet::from(["ipset".into()]),
        })
    }

    async fn load_config(
        &mut self,
        streams: Vec<StreamConfig>,
        actions: Vec<ActionConfig>,
    ) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
        if !streams.is_empty() {
            return Err("This plugin can't handle any stream type".into());
        }

        let mut ret_actions = Vec::with_capacity(actions.len());
        let mut set_options: BTreeMap<String, SetOptions> = BTreeMap::new();

        for ActionConfig {
            stream_name,
            filter_name,
            action_name,
            action_type,
            config,
            patterns,
        } in actions
        {
            if &action_type != "ipset" {
                return Err("This plugin can't handle other action types than ipset".into());
            }

            let mut options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
                format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
            })?;

            options.set_ip_index(patterns).map_err(|_|
                format!(
                    "No pattern with name {} in filter {stream_name}.{filter_name}. Try setting the option `pattern` to your pattern name of type 'ip'",
                    &options.pattern
                )
            )?;

            // Merge option
            set_options
                .entry(options.set.clone())
                .or_default()
                .merge(&options.set_options)
                .map_err(|err| format!("ipset {}: {err}", options.set))?;

            let (tx, rx) = remoc::rch::mpsc::channel(1);
            self.actions.push(Action::new(
                self.ipset.clone(),
                self.shutdown.token(),
                rx,
                options,
            )?);

            ret_actions.push(ActionImpl { tx });
        }

        // Init all sets
        while let Some((name, options)) = set_options.pop_first() {
            self.sets.push(Set::from(name, options));
        }

        Ok((vec![], ret_actions))
    }

    async fn start(&mut self) -> RemoteResult<()> {
        self.shutdown.delegate().handle_quit_signals()?;

        let mut first_error = None;
        for (i, set) in self.sets.iter().enumerate() {
            // Retain if error
            if let Err((failed_step, err)) = set.init(&mut self.ipset).await {
                first_error = Some((i, failed_step, RemoteError::Plugin(err)));
                break;
            }
        }
        // Destroy initialized sets if error
        if let Some((last_set, failed_step, err)) = first_error {
            eprintln!("DEBUG last_set: {last_set} failed_step: {failed_step} err: {err}");
            for (curr_set, set) in self.sets.iter().enumerate().take(last_set + 1) {
                let until = if last_set == curr_set {
                    Some(failed_step)
                } else {
                    None
                };
                let _ = set.destroy(&mut self.ipset, until).await;
            }
            return Err(err);
        }

        // Launch a task that will destroy the sets on shutdown
        tokio::spawn(destroy_sets_at_shutdown(
            self.ipset.clone(),
            std::mem::take(&mut self.sets),
            self.shutdown.token(),
        ));

        // Launch all actions
        while let Some(action) = self.actions.pop() {
            tokio::spawn(async move { action.serve().await });
        }
        self.actions = Default::default();

        Ok(())
    }

    async fn close(self) -> RemoteResult<()> {
        self.shutdown.ask_shutdown();
        self.shutdown.wait_all_task_shutdown().await;
        Ok(())
    }
}

async fn destroy_sets_at_shutdown(mut ipset: IpSet, sets: Vec<Set>, shutdown: ShutdownToken) {
    shutdown.wait().await;
    for set in sets {
        set.destroy(&mut ipset, None).await;
    }
}
@@ -1,253 +0,0 @@
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig, Value};
use serde_json::json;

use crate::Plugin;

#[tokio::test]
async fn conf_stream() {
    // No stream is supported by ipset
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "ipset".into(),
                    config: Value::Null
                }],
                vec![]
            )
            .await
            .is_err()
    );

    // Nothing is ok
    assert!(Plugin::default().load_config(vec![], vec![]).await.is_ok());
}

#[tokio::test]
async fn conf_action_standalone() {
    let p = vec!["name".into(), "ip".into(), "ip2".into()];
    let p_noip = vec!["name".into(), "ip2".into()];

    for (is_ok, conf, patterns) in [
        // minimal set
        (true, json!({ "set": "test" }), &p),
        // missing set key
        (false, json!({}), &p),
        (false, json!({ "version": "ipv4" }), &p),
        // unknown key
        (false, json!({ "set": "test", "unknown": "yes" }), &p),
        (false, json!({ "set": "test", "ip_index": 1 }), &p),
        (false, json!({ "set": "test", "timeout_u32": 1 }), &p),
        // pattern //
        (true, json!({ "set": "test" }), &p),
        (true, json!({ "set": "test", "pattern": "ip" }), &p),
        (true, json!({ "set": "test", "pattern": "ip2" }), &p),
        (true, json!({ "set": "test", "pattern": "ip2" }), &p_noip),
        // unknown pattern "ip"
        (false, json!({ "set": "test" }), &p_noip),
        (false, json!({ "set": "test", "pattern": "ip" }), &p_noip),
        // unknown pattern
        (false, json!({ "set": "test", "pattern": "unknown" }), &p),
        (false, json!({ "set": "test", "pattern": "uwu" }), &p_noip),
        // bad type
        (false, json!({ "set": "test", "pattern": 0 }), &p_noip),
        (false, json!({ "set": "test", "pattern": true }), &p_noip),
        // action //
        (true, json!({ "set": "test", "action": "add" }), &p),
        (true, json!({ "set": "test", "action": "del" }), &p),
        // unknown action
        (false, json!({ "set": "test", "action": "create" }), &p),
        (false, json!({ "set": "test", "action": "insert" }), &p),
        (false, json!({ "set": "test", "action": "delete" }), &p),
        (false, json!({ "set": "test", "action": "destroy" }), &p),
        // bad type
        (false, json!({ "set": "test", "action": true }), &p),
        (false, json!({ "set": "test", "action": 1 }), &p),
        // ip version //
        // ok
        (true, json!({ "set": "test", "version": "ipv4" }), &p),
        (true, json!({ "set": "test", "version": "ipv6" }), &p),
        (true, json!({ "set": "test", "version": "ip" }), &p),
        // unknown version
        (false, json!({ "set": "test", "version": 4 }), &p),
        (false, json!({ "set": "test", "version": 6 }), &p),
        (false, json!({ "set": "test", "version": 46 }), &p),
        (false, json!({ "set": "test", "version": "5" }), &p),
        (false, json!({ "set": "test", "version": "ipv5" }), &p),
        (false, json!({ "set": "test", "version": "4" }), &p),
        (false, json!({ "set": "test", "version": "6" }), &p),
        (false, json!({ "set": "test", "version": "46" }), &p),
        // bad type
        (false, json!({ "set": "test", "version": true }), &p),
        // chains //
        // everything is fine really
        (true, json!({ "set": "test", "chains": [] }), &p),
        (true, json!({ "set": "test", "chains": ["INPUT"] }), &p),
        (true, json!({ "set": "test", "chains": ["FORWARD"] }), &p),
        (
            true,
            json!({ "set": "test", "chains": ["custom_chain"] }),
            &p,
        ),
        (
            true,
            json!({ "set": "test", "chains": ["INPUT", "FORWARD"] }),
            &p,
        ),
        (
            true,
            json!({
                "set": "test",
                "chains": ["INPUT", "FORWARD", "my_iptables_chain"]
            }),
            &p,
        ),
        // timeout //
        (true, json!({ "set": "test", "timeout": "1m" }), &p),
        (true, json!({ "set": "test", "timeout": "3 days" }), &p),
        // bad
        (false, json!({ "set": "test", "timeout": "3 dayz"}), &p),
        (false, json!({ "set": "test", "timeout": 12 }), &p),
        // target //
        // anything is fine too
        (true, json!({ "set": "test", "target": "DROP" }), &p),
        (true, json!({ "set": "test", "target": "ACCEPT" }), &p),
        (true, json!({ "set": "test", "target": "RETURN" }), &p),
        (true, json!({ "set": "test", "target": "custom_chain" }), &p),
        // bad
        (false, json!({ "set": "test", "target": 11 }), &p),
        (false, json!({ "set": "test", "target": ["DROP"] }), &p),
    ] {
        let res = Plugin::default()
            .load_config(
                vec![],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "ipset".into(),
                    config: conf.clone().into(),
                    patterns: patterns.clone(),
                }],
            )
            .await;

        assert!(
            res.is_ok() == is_ok,
            "conf: {:?}, must be ok: {is_ok}, result: {:?}",
            conf,
            // empty Result::Ok because ActionImpl is not Debug
            res.map(|_| ())
        );
    }
}

// TODO
#[tokio::test]
async fn conf_action_merge() {
    let mut plugin = Plugin::default();

    let set1 = ActionConfig {
        stream_name: "stream".into(),
        filter_name: "filter".into(),
        action_name: "action1".into(),
        action_type: "ipset".into(),
        config: json!({
            "set": "test",
            "target": "DROP",
            "chains": ["INPUT"],
            "action": "add",
        })
        .into(),
        patterns: vec!["ip".into()],
    };

    let set2 = ActionConfig {
        stream_name: "stream".into(),
        filter_name: "filter".into(),
        action_name: "action2".into(),
        action_type: "ipset".into(),
        config: json!({
            "set": "test",
            "target": "DROP",
            "version": "ip",
            "action": "add",
        })
        .into(),
        patterns: vec!["ip".into()],
    };

    let set3 = ActionConfig {
        stream_name: "stream".into(),
        filter_name: "filter".into(),
        action_name: "action2".into(),
        action_type: "ipset".into(),
        config: json!({
            "set": "test",
            "action": "del",
        })
        .into(),
        patterns: vec!["ip".into()],
    };

    let res = plugin
        .load_config(
            vec![],
            vec![
                // First set
                set1.clone(),
                // Same set, adding options, no conflict
                set2.clone(),
                // Same set, no new options, no conflict
                set3.clone(),
                // Unrelated set, so no conflict
                ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action3".into(),
                    action_type: "ipset".into(),
                    config: json!({
                        "set": "test2",
                        "target": "target1",
                        "version": "ipv6",
                    })
                    .into(),
                    patterns: vec!["ip".into()],
                },
            ],
        )
        .await;

    assert!(res.is_ok(), "res: {:?}", res.map(|_| ()));

    // Another set with conflict is not ok
    let res = plugin
        .load_config(
            vec![],
            vec![
                // First set
                set1,
                // Same set, adding options, no conflict
                set2,
                // Same set, no new options, no conflict
                set3,
                // Another set with conflict
                ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action3".into(),
                    action_type: "ipset".into(),
                    config: json!({
                        "set": "test",
                        "target": "target2",
                        "action": "del",
                    })
                    .into(),
                    patterns: vec!["ip".into()],
                },
            ],
        )
        .await;
    assert!(res.is_err(), "res: {:?}", res.map(|_| ()));
}
@@ -1,13 +0,0 @@
[package]
name = "reaction-plugin-nftables"
version = "0.1.0"
edition = "2024"

[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
nftables = { version = "0.6.3", features = ["tokio"] }
libnftables1-sys = { version = "0.1.1" }
@@ -1,493 +0,0 @@
use std::{
    borrow::Cow,
    collections::HashSet,
    fmt::{Debug, Display},
    u32,
};

use nftables::{
    batch::Batch,
    expr::Expression,
    schema::{Element, NfListObject, Rule, SetFlag, SetType, SetTypeValue},
    stmt::Statement,
    types::{NfFamily, NfHook},
};
use reaction_plugin::{Exec, shutdown::ShutdownToken, time::parse_duration};
use remoc::rch::mpsc as remocMpsc;
use serde::{Deserialize, Serialize};

use crate::{helpers::Version, nft::NftClient};

#[derive(Default, Serialize, Deserialize, PartialEq, Eq, Clone, Copy)]
pub enum IpVersion {
    #[default]
    #[serde(rename = "ip")]
    Ip,
    #[serde(rename = "ipv4")]
    Ipv4,
    #[serde(rename = "ipv6")]
    Ipv6,
}
impl Debug for IpVersion {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "{}",
            match self {
                IpVersion::Ipv4 => "ipv4",
                IpVersion::Ipv6 => "ipv6",
                IpVersion::Ip => "ip",
            }
        )
    }
}

#[derive(Default, Debug, Serialize, Deserialize)]
pub enum AddDel {
    #[default]
    #[serde(alias = "add")]
    Add,
    #[serde(alias = "delete")]
    Delete,
}

/// User-facing action options
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ActionOptions {
    /// The set that should be used by this action
    pub set: String,
    /// The pattern name of the IP.
    /// Defaults to "ip"
    #[serde(default = "serde_ip")]
    pub pattern: String,
    #[serde(skip)]
    ip_index: usize,
    // Whether the action is to "add" or "del" the ip from the set
    #[serde(default)]
    action: AddDel,

    #[serde(flatten)]
    pub set_options: SetOptions,
}

fn serde_ip() -> String {
    "ip".into()
}

impl ActionOptions {
    pub fn set_ip_index(&mut self, patterns: Vec<String>) -> Result<(), ()> {
        self.ip_index = patterns
            .into_iter()
            .enumerate()
            .filter(|(_, name)| name == &self.pattern)
            .next()
            .ok_or(())?
            .0;
        Ok(())
    }
}

/// Merged set options
#[derive(Default, Clone, Deserialize, Serialize, Debug, PartialEq, Eq)]
pub struct SetOptions {
    /// The IP type.
    /// Defaults to `46`.
    /// If `ipv4`: creates an IPv4 set with this name
    /// If `ipv6`: creates an IPv6 set with this name
    /// If `ip`: creates an IPv4 set with its name suffixed by 'v4' AND an IPv6 set with its name suffixed by 'v6'
    /// *Merged set-wise*.
    #[serde(default)]
    version: Option<IpVersion>,
    /// Chains where the IP set should be inserted.
    /// Defaults to `["input", "forward"]`
    /// *Merged set-wise*.
    #[serde(default)]
    hooks: Option<Vec<RHook>>,
    // Optional timeout, letting linux/netfilter handle set removal instead of reaction
    // Note that `reaction show` and `reaction flush` won't work if set instead of an `after` action
    // Same syntax as after and retryperiod in reaction.
    /// *Merged set-wise*.
    #[serde(skip_serializing_if = "Option::is_none")]
    timeout: Option<String>,
    #[serde(skip)]
    timeout_u32: Option<u32>,
    // Target that iptables should use when the IP is encountered.
    // Defaults to DROP, but can also be ACCEPT, RETURN or any user-defined chain
    /// *Merged set-wise*.
    #[serde(default)]
    target: Option<RStatement>,
}

impl SetOptions {
    pub fn merge(&mut self, options: &SetOptions) -> Result<(), String> {
        // merge two Option<T> and fail if there is conflict
        fn inner_merge<T: Eq + Clone + std::fmt::Debug>(
            a: &mut Option<T>,
            b: &Option<T>,
            name: &str,
        ) -> Result<(), String> {
            match (&a, &b) {
                (Some(aa), Some(bb)) => {
                    if aa != bb {
                        return Err(format!(
                            "Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
                        ));
                    }
                }
                (None, Some(_)) => {
                    *a = b.clone();
                }
                _ => (),
            };
            Ok(())
        }

        inner_merge(&mut self.version, &options.version, "version")?;
        inner_merge(&mut self.timeout, &options.timeout, "timeout")?;
        inner_merge(&mut self.hooks, &options.hooks, "chains")?;
        inner_merge(&mut self.target, &options.target, "target")?;

        if let Some(timeout) = &self.timeout {
            let duration = parse_duration(timeout)
                .map_err(|err| format!("failed to parse timeout: {}", err))?
                .as_secs();
            if duration > u32::MAX as u64 {
                return Err(format!(
                    "timeout is limited to {} seconds (approx {} days)",
                    u32::MAX,
                    49_000
                ));
            }
            self.timeout_u32 = Some(duration as u32);
        }

        Ok(())
    }
}

#[derive(Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RHook {
    Ingress,
    Prerouting,
    Forward,
    Input,
    Output,
    Postrouting,
    Egress,
}

impl RHook {
    pub fn as_str(&self) -> &'static str {
        match self {
            RHook::Ingress => "ingress",
            RHook::Prerouting => "prerouting",
            RHook::Forward => "forward",
            RHook::Input => "input",
            RHook::Output => "output",
            RHook::Postrouting => "postrouting",
            RHook::Egress => "egress",
        }
    }
}

impl Display for RHook {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.as_str())
    }
}

impl From<&RHook> for NfHook {
    fn from(value: &RHook) -> Self {
        match value {
            RHook::Ingress => Self::Ingress,
            RHook::Prerouting => Self::Prerouting,
            RHook::Forward => Self::Forward,
            RHook::Input => Self::Input,
            RHook::Output => Self::Output,
            RHook::Postrouting => Self::Postrouting,
            RHook::Egress => Self::Egress,
        }
    }
}

#[derive(Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RStatement {
    Accept,
    Drop,
    Continue,
    Return,
}

pub struct Set {
    pub sets: SetNames,
    pub hooks: Vec<RHook>,
    pub timeout: Option<u32>,
    pub target: RStatement,
}

impl Set {
    pub fn from(name: String, options: SetOptions) -> Self {
        Self {
            sets: SetNames::new(name, options.version),
            timeout: options.timeout_u32,
            target: options.target.unwrap_or(RStatement::Drop),
            hooks: options.hooks.unwrap_or(vec![RHook::Input, RHook::Forward]),
        }
    }

    pub fn init<'a>(&self, batch: &mut Batch<'a>) -> Result<(), String> {
        for (set, version) in [
            (&self.sets.ipv4, Version::IPv4),
            (&self.sets.ipv6, Version::IPv6),
        ] {
            if let Some(set) = set {
                let family = NfFamily::INet;
                let table = Cow::from("reaction");

                // create set
                batch.add(NfListObject::<'a>::Set(Box::new(nftables::schema::Set::<
                    'a,
                > {
                    family,
                    table: table.to_owned(),
                    name: Cow::Owned(set.to_owned()),
                    // TODO Try a set which is both ipv4 and ipv6?
                    set_type: SetTypeValue::Single(match version {
                        Version::IPv4 => SetType::Ipv4Addr,
                        Version::IPv6 => SetType::Ipv6Addr,
                    }),
                    flags: Some({
                        let mut flags = HashSet::from([SetFlag::Interval]);
                        if self.timeout.is_some() {
                            flags.insert(SetFlag::Timeout);
                        }
                        flags
                    }),
                    timeout: self.timeout.clone(),
                    ..Default::default()
                })));
                // insert set in chains
                let expr = vec![match self.target {
                    RStatement::Accept => Statement::Accept(None),
                    RStatement::Drop => Statement::Drop(None),
                    RStatement::Continue => Statement::Continue(None),
                    RStatement::Return => Statement::Return(None),
                }];
                for hook in &self.hooks {
                    batch.add(NfListObject::Rule(Rule {
                        family,
                        table: table.to_owned(),
                        chain: Cow::from(hook.to_string()),
                        expr: Cow::Owned(expr.clone()),
                        ..Default::default()
                    }));
                }
            }
        }
        Ok(())
    }
}

pub struct SetNames {
    pub ipv4: Option<String>,
    pub ipv6: Option<String>,
}

impl SetNames {
    pub fn new(name: String, version: Option<IpVersion>) -> Self {
        Self {
            ipv4: match version {
                Some(IpVersion::Ipv4) => Some(name.clone()),
                Some(IpVersion::Ipv6) => None,
                None | Some(IpVersion::Ip) => Some(format!("{}v4", name)),
            },
            ipv6: match version {
                Some(IpVersion::Ipv4) => None,
                Some(IpVersion::Ipv6) => Some(name),
                None | Some(IpVersion::Ip) => Some(format!("{}v6", name)),
            },
        }
    }
}

pub struct Action {
    nft: NftClient,
    rx: remocMpsc::Receiver<Exec>,
    shutdown: ShutdownToken,
    sets: SetNames,
    // index of pattern ip in match vec
    ip_index: usize,
    action: AddDel,
}

impl Action {
    pub fn new(
        nft: NftClient,
        shutdown: ShutdownToken,
        rx: remocMpsc::Receiver<Exec>,
        options: ActionOptions,
    ) -> Result<Self, String> {
        Ok(Action {
            nft,
            rx,
            shutdown,
            sets: SetNames::new(options.set, options.set_options.version),
            ip_index: options.ip_index,
            action: options.action,
        })
    }

    pub async fn serve(mut self) {
        loop {
            let event = tokio::select! {
                exec = self.rx.recv() => Some(exec),
                _ = self.shutdown.wait() => None,
            };
            match event {
                // shutdown asked
                None => break,
                // channel closed
                Some(Ok(None)) => break,
                // error from channel
                Some(Err(err)) => {
                    eprintln!("ERROR {err}");
                    break;
                }
                // ok
                Some(Ok(Some(exec))) => {
                    if let Err(err) = self.handle_exec(exec).await {
                        eprintln!("ERROR {err}");
                        break;
                    }
                }
            }
        }
        // eprintln!("DEBUG Asking for shutdown");
        // self.shutdown.ask_shutdown();
    }

    async fn handle_exec(&mut self, mut exec: Exec) -> Result<(), String> {
        // safeguard against Vec::remove's panic
        if exec.match_.len() <= self.ip_index {
            return Err(format!(
                "match received from reaction is smaller than expected. looking for index {} but size is {}. this is a bug!",
                self.ip_index,
                exec.match_.len()
            ));
        }
        let ip = exec.match_.remove(self.ip_index);
        // select set
        let set = match (&self.sets.ipv4, &self.sets.ipv6) {
|
||||
(None, None) => return Err(format!("action is neither IPv4 nor IPv6, this is a bug!")),
|
||||
(None, Some(set)) => set,
|
||||
(Some(set), None) => set,
|
||||
(Some(set4), Some(set6)) => {
|
||||
if ip.contains(':') {
|
||||
set6
|
||||
} else {
|
||||
set4
|
||||
}
|
||||
}
|
||||
};
|
||||
// add/remove ip to set
|
||||
let element = NfListObject::Element(Element {
|
||||
family: NfFamily::INet,
|
||||
table: Cow::from("reaction"),
|
||||
name: Cow::from(set),
|
||||
elem: Cow::from(vec![Expression::String(Cow::from(ip.clone()))]),
|
||||
});
|
||||
let mut batch = Batch::new();
|
||||
match self.action {
|
||||
AddDel::Add => batch.add(element),
|
||||
AddDel::Delete => batch.delete(element),
|
||||
};
|
||||
match self.nft.send(batch).await {
|
||||
Ok(ok) => {
|
||||
eprintln!("DEBUG action ok {:?} {ip}: {ok}", self.action);
|
||||
Ok(())
|
||||
}
|
||||
Err(err) => Err(format!("action ko {:?} {ip}: {err}", self.action)),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use crate::action::{IpVersion, RHook, RStatement, SetOptions};
|
||||
|
||||
#[tokio::test]
|
||||
async fn set_options_merge() {
|
||||
let s1 = SetOptions {
|
||||
version: None,
|
||||
hooks: None,
|
||||
timeout: None,
|
||||
timeout_u32: None,
|
||||
target: None,
|
||||
};
|
||||
let s2 = SetOptions {
|
||||
version: Some(IpVersion::Ipv4),
|
||||
hooks: Some(vec![RHook::Input]),
|
||||
timeout: Some("3h".into()),
|
||||
timeout_u32: Some(3 * 3600),
|
||||
target: Some(RStatement::Drop),
|
||||
};
|
||||
assert_ne!(s1, s2);
|
||||
assert_eq!(s1, SetOptions::default());
|
||||
|
||||
{
|
||||
// s2 can be merged in s1
|
||||
let mut s1 = s1.clone();
|
||||
assert!(s1.merge(&s2).is_ok());
|
||||
assert_eq!(s1, s2);
|
||||
}
|
||||
|
||||
{
|
||||
// s1 can be merged in s2
|
||||
let mut s2 = s2.clone();
|
||||
assert!(s2.merge(&s1).is_ok());
|
||||
}
|
||||
|
||||
{
|
||||
// s1 can be merged in itself
|
||||
let mut s3 = s1.clone();
|
||||
assert!(s3.merge(&s1).is_ok());
|
||||
assert_eq!(s1, s3);
|
||||
}
|
||||
|
||||
{
|
||||
// s2 can be merged in itself
|
||||
let mut s3 = s2.clone();
|
||||
assert!(s3.merge(&s2).is_ok());
|
||||
assert_eq!(s2, s3);
|
||||
}
|
||||
|
||||
for s3 in [
|
||||
SetOptions {
|
||||
version: Some(IpVersion::Ipv6),
|
||||
..Default::default()
|
||||
},
|
||||
SetOptions {
|
||||
hooks: Some(vec![RHook::Output]),
|
||||
..Default::default()
|
||||
},
|
||||
SetOptions {
|
||||
timeout: Some("30min".into()),
|
||||
..Default::default()
|
||||
},
|
||||
SetOptions {
|
||||
target: Some(RStatement::Continue),
|
||||
..Default::default()
|
||||
},
|
||||
] {
|
||||
// none with some is ok
|
||||
assert!(s3.clone().merge(&s1).is_ok(), "s3: {s3:?}");
|
||||
assert!(s1.clone().merge(&s3).is_ok(), "s3: {s3:?}");
|
||||
// different some is ko
|
||||
assert!(s3.clone().merge(&s2).is_err(), "s3: {s3:?}");
|
||||
assert!(s2.clone().merge(&s3).is_err(), "s3: {s3:?}");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,15 +0,0 @@

use std::fmt::Display;

#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub enum Version {
    IPv4,
    IPv6,
}
impl Display for Version {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(match self {
            Version::IPv4 => "IPv4",
            Version::IPv6 => "IPv6",
        })
    }
}
@@ -1,176 +0,0 @@

use std::{
    borrow::Cow,
    collections::{BTreeMap, BTreeSet},
};

use nftables::{
    batch::Batch,
    schema::{Chain, NfListObject, Table},
    types::{NfChainType, NfFamily},
};
use reaction_plugin::{
    ActionConfig, ActionImpl, Hello, Manifest, PluginInfo, RemoteResult, StreamConfig, StreamImpl,
    shutdown::ShutdownController,
};
use remoc::rtc;

use crate::{
    action::{Action, ActionOptions, Set, SetOptions},
    nft::NftClient,
};

#[cfg(test)]
mod tests;

mod action;
pub mod helpers;
mod nft;

#[tokio::main]
async fn main() {
    let plugin = Plugin::default();
    reaction_plugin::main_loop(plugin).await;
}

#[derive(Default)]
struct Plugin {
    nft: NftClient,
    sets: Vec<Set>,
    actions: Vec<Action>,
    shutdown: ShutdownController,
}

impl PluginInfo for Plugin {
    async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
        Ok(Manifest {
            hello: Hello::new(),
            streams: BTreeSet::default(),
            actions: BTreeSet::from(["nftables".into()]),
        })
    }

    async fn load_config(
        &mut self,
        streams: Vec<StreamConfig>,
        actions: Vec<ActionConfig>,
    ) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
        if !streams.is_empty() {
            return Err("This plugin can't handle any stream type".into());
        }

        let mut ret_actions = Vec::with_capacity(actions.len());
        let mut set_options: BTreeMap<String, SetOptions> = BTreeMap::new();

        for ActionConfig {
            stream_name,
            filter_name,
            action_name,
            action_type,
            config,
            patterns,
        } in actions
        {
            if &action_type != "nftables" {
                return Err("This plugin can't handle other action types than nftables".into());
            }

            let mut options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
                format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
            })?;

            options.set_ip_index(patterns).map_err(|_|
                format!(
                    "No pattern with name {} in filter {stream_name}.{filter_name}. Try setting the option `pattern` to your pattern name of type 'ip'",
                    &options.pattern
                )
            )?;

            // Merge option
            set_options
                .entry(options.set.clone())
                .or_default()
                .merge(&options.set_options)
                .map_err(|err| format!("set {}: {err}", options.set))?;

            let (tx, rx) = remoc::rch::mpsc::channel(1);
            self.actions.push(Action::new(
                self.nft.clone(),
                self.shutdown.token(),
                rx,
                options,
            )?);

            ret_actions.push(ActionImpl { tx });
        }

        // Init all sets
        while let Some((name, options)) = set_options.pop_first() {
            self.sets.push(Set::from(name, options));
        }

        Ok((vec![], ret_actions))
    }

    async fn start(&mut self) -> RemoteResult<()> {
        self.shutdown.delegate().handle_quit_signals()?;

        let mut batch = Batch::new();
        batch.add(reaction_table());

        // Create a chain for each registered netfilter hook
        for hook in self
            .sets
            .iter()
            .flat_map(|set| &set.hooks)
            .collect::<BTreeSet<_>>()
        {
            batch.add(NfListObject::Chain(Chain {
                family: NfFamily::INet,
                table: Cow::Borrowed("reaction"),
                name: Cow::from(hook.as_str()),
                _type: Some(NfChainType::Filter),
                hook: Some(hook.into()),
                prio: Some(0),
                ..Default::default()
            }));
        }

        for set in &self.sets {
            set.init(&mut batch)?;
        }

        // TODO apply batch
        self.nft.send(batch).await?;

        // Launch a task that will destroy the table on shutdown
        {
            let token = self.shutdown.token();
            tokio::spawn(async move {
                token.wait().await;
                Batch::new().delete(reaction_table());
            });
        }

        // Launch all actions
        while let Some(action) = self.actions.pop() {
            tokio::spawn(async move { action.serve().await });
        }
        self.actions = Default::default();

        Ok(())
    }

    async fn close(self) -> RemoteResult<()> {
        self.shutdown.ask_shutdown();
        self.shutdown.wait_all_task_shutdown().await;
        Ok(())
    }
}

fn reaction_table() -> NfListObject<'static> {
    NfListObject::Table(Table {
        family: NfFamily::INet,
        name: Cow::Borrowed("reaction"),
        handle: None,
    })
}
@@ -1,81 +0,0 @@

use std::{
    ffi::{CStr, CString},
    thread,
};

use libnftables1_sys::Nftables;
use nftables::batch::Batch;
use tokio::sync::{mpsc, oneshot};

/// A client with a dedicated server thread to libnftables.
/// Calling [`Default::default()`] spawns a new server thread.
/// Cloning just creates a new client to the same server thread.
#[derive(Clone)]
pub struct NftClient {
    tx: mpsc::Sender<NftCommand>,
}

impl Default for NftClient {
    fn default() -> Self {
        let (tx, mut rx) = mpsc::channel(10);

        thread::spawn(move || {
            let mut conn = Nftables::new();

            while let Some(NftCommand { json, ret }) = rx.blocking_recv() {
                let (rc, output, error) = conn.run_cmd(json.as_ptr());
                let res = match rc {
                    0 => to_rust_string(output)
                        .ok_or_else(|| "unknown ok (rc = 0 but no output buffer)".into()),
                    // a non-zero rc is a failure: wrap the error buffer (or a
                    // fallback message) in Err instead of Ok
                    _ => Err(to_rust_string(error)
                        .map(|err| format!("error (rc = {rc}: {err})"))
                        .unwrap_or_else(|| format!("unknown error (rc = {rc} but no error buffer)"))),
                };
                let _ = ret.send(res);
            }
        });

        NftClient { tx }
    }
}

impl NftClient {
    /// Send a batch to nftables.
    pub async fn send(&self, batch: Batch<'_>) -> Result<String, String> {
        // convert JSON to CString
        let mut json = serde_json::to_vec(&batch.to_nftables())
            .map_err(|err| format!("couldn't build json to send to nftables: {err}"))?;
        json.push(b'\0');
        let json = CString::from_vec_with_nul(json)
            .map_err(|err| format!("invalid json with null char: {err}"))?;

        // Send command
        let (tx, rx) = oneshot::channel();
        let command = NftCommand { json, ret: tx };
        self.tx
            .send(command)
            .await
            .map_err(|err| format!("nftables thread has quit, can't send command: {err}"))?;

        // Wait for result
        rx.await
            .map_err(|_| "nftables thread has quit, no response for command".to_string())?
    }
}

struct NftCommand {
    json: CString,
    ret: oneshot::Sender<Result<String, String>>,
}

fn to_rust_string(c_ptr: *const i8) -> Option<String> {
    if c_ptr.is_null() {
        None
    } else {
        Some(
            unsafe { CStr::from_ptr(c_ptr) }
                .to_string_lossy()
                .into_owned(),
        )
    }
}
@@ -1,247 +0,0 @@

use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig, Value};
use serde_json::json;

use crate::Plugin;

#[tokio::test]
async fn conf_stream() {
    // No stream is supported by nftables
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "nftables".into(),
                    config: Value::Null
                }],
                vec![]
            )
            .await
            .is_err()
    );

    // Empty config is ok
    assert!(Plugin::default().load_config(vec![], vec![]).await.is_ok());
}

#[tokio::test]
async fn conf_action_standalone() {
    let p = vec!["name".into(), "ip".into(), "ip2".into()];
    let p_noip = vec!["name".into(), "ip2".into()];

    for (is_ok, conf, patterns) in [
        // minimal set
        (true, json!({ "set": "test" }), &p),
        // missing set key
        (false, json!({}), &p),
        (false, json!({ "version": "ipv4" }), &p),
        // unknown key
        (false, json!({ "set": "test", "unknown": "yes" }), &p),
        (false, json!({ "set": "test", "ip_index": 1 }), &p),
        (false, json!({ "set": "test", "timeout_u32": 1 }), &p),
        // pattern //
        (true, json!({ "set": "test" }), &p),
        (true, json!({ "set": "test", "pattern": "ip" }), &p),
        (true, json!({ "set": "test", "pattern": "ip2" }), &p),
        (true, json!({ "set": "test", "pattern": "ip2" }), &p_noip),
        // unknown pattern "ip"
        (false, json!({ "set": "test" }), &p_noip),
        (false, json!({ "set": "test", "pattern": "ip" }), &p_noip),
        // unknown pattern
        (false, json!({ "set": "test", "pattern": "unknown" }), &p),
        (false, json!({ "set": "test", "pattern": "uwu" }), &p_noip),
        // bad type
        (false, json!({ "set": "test", "pattern": 0 }), &p_noip),
        (false, json!({ "set": "test", "pattern": true }), &p_noip),
        // action //
        (true, json!({ "set": "test", "action": "add" }), &p),
        (true, json!({ "set": "test", "action": "delete" }), &p),
        // unknown action
        (false, json!({ "set": "test", "action": "create" }), &p),
        (false, json!({ "set": "test", "action": "insert" }), &p),
        (false, json!({ "set": "test", "action": "del" }), &p),
        (false, json!({ "set": "test", "action": "destroy" }), &p),
        // bad type
        (false, json!({ "set": "test", "action": true }), &p),
        (false, json!({ "set": "test", "action": 1 }), &p),
        // ip version //
        // ok
        (true, json!({ "set": "test", "version": "ipv4" }), &p),
        (true, json!({ "set": "test", "version": "ipv6" }), &p),
        (true, json!({ "set": "test", "version": "ip" }), &p),
        // unknown version
        (false, json!({ "set": "test", "version": 4 }), &p),
        (false, json!({ "set": "test", "version": 6 }), &p),
        (false, json!({ "set": "test", "version": 46 }), &p),
        (false, json!({ "set": "test", "version": "5" }), &p),
        (false, json!({ "set": "test", "version": "ipv5" }), &p),
        (false, json!({ "set": "test", "version": "4" }), &p),
        (false, json!({ "set": "test", "version": "6" }), &p),
        (false, json!({ "set": "test", "version": "46" }), &p),
        // bad type
        (false, json!({ "set": "test", "version": true }), &p),
        // hooks //
        // everything is fine really
        (true, json!({ "set": "test", "hooks": [] }), &p),
        (
            true,
            json!({ "set": "test", "hooks": ["input", "forward", "ingress", "prerouting", "output", "postrouting", "egress"] }),
            &p,
        ),
        (false, json!({ "set": "test", "hooks": ["INPUT"] }), &p),
        (false, json!({ "set": "test", "hooks": ["FORWARD"] }), &p),
        (
            false,
            json!({ "set": "test", "hooks": ["unknown_hook"] }),
            &p,
        ),
        // timeout //
        (true, json!({ "set": "test", "timeout": "1m" }), &p),
        (true, json!({ "set": "test", "timeout": "3 days" }), &p),
        // bad
        (false, json!({ "set": "test", "timeout": "3 dayz" }), &p),
        (false, json!({ "set": "test", "timeout": 12 }), &p),
        // target //
        // anything is fine too
        (true, json!({ "set": "test", "target": "drop" }), &p),
        (true, json!({ "set": "test", "target": "accept" }), &p),
        (true, json!({ "set": "test", "target": "return" }), &p),
        (true, json!({ "set": "test", "target": "continue" }), &p),
        // bad
        (false, json!({ "set": "test", "target": "custom" }), &p),
        (false, json!({ "set": "test", "target": "DROP" }), &p),
        (false, json!({ "set": "test", "target": 11 }), &p),
        (false, json!({ "set": "test", "target": ["DROP"] }), &p),
    ] {
        let res = Plugin::default()
            .load_config(
                vec![],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "nftables".into(),
                    config: conf.clone().into(),
                    patterns: patterns.clone(),
                }],
            )
            .await;

        assert!(
            res.is_ok() == is_ok,
            "conf: {:?}, must be ok: {is_ok}, result: {:?}",
            conf,
            // empty Result::Ok because ActionImpl is not Debug
            res.map(|_| ())
        );
    }
}

// TODO
#[tokio::test]
async fn conf_action_merge() {
    let mut plugin = Plugin::default();

    let set1 = ActionConfig {
        stream_name: "stream".into(),
        filter_name: "filter".into(),
        action_name: "action1".into(),
        action_type: "nftables".into(),
        config: json!({
            "set": "test",
            "target": "drop",
            "hooks": ["input"],
            "action": "add",
        })
        .into(),
        patterns: vec!["ip".into()],
    };

    let set2 = ActionConfig {
        stream_name: "stream".into(),
        filter_name: "filter".into(),
        action_name: "action2".into(),
        action_type: "nftables".into(),
        config: json!({
            "set": "test",
            "target": "drop",
            "version": "ip",
            "action": "add",
        })
        .into(),
        patterns: vec!["ip".into()],
    };

    let set3 = ActionConfig {
        stream_name: "stream".into(),
        filter_name: "filter".into(),
        action_name: "action2".into(),
        action_type: "nftables".into(),
        config: json!({
            "set": "test",
            "action": "delete",
        })
        .into(),
        patterns: vec!["ip".into()],
    };

    let res = plugin
        .load_config(
            vec![],
            vec![
                // First set
                set1.clone(),
                // Same set, adding options, no conflict
                set2.clone(),
                // Same set, no new options, no conflict
                set3.clone(),
                // Unrelated set, so no conflict
                ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action3".into(),
                    action_type: "nftables".into(),
                    config: json!({
                        "set": "test2",
                        "target": "return",
                        "version": "ipv6",
                    })
                    .into(),
                    patterns: vec!["ip".into()],
                },
            ],
        )
        .await;

    assert!(res.is_ok(), "res: {:?}", res.map(|_| ()));

    // Another set with conflict is not ok
    let res = plugin
        .load_config(
            vec![],
            vec![
                // First set
                set1,
                // Same set, adding options, no conflict
                set2,
                // Same set, no new options, no conflict
                set3,
                // Another set with conflict
                ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action3".into(),
                    action_type: "nftables".into(),
                    config: json!({
                        "set": "test",
                        "target": "target2",
                        "action": "del",
                    })
                    .into(),
                    patterns: vec!["ip".into()],
                },
            ],
        )
        .await;
    assert!(res.is_err(), "res: {:?}", res.map(|_| ()));
}
@@ -1,11 +0,0 @@

[package]
name = "reaction-plugin-virtual"
version = "1.0.0"
edition = "2024"

[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
@@ -1,179 +0,0 @@

use std::collections::{BTreeMap, BTreeSet};

use reaction_plugin::{
    ActionConfig, ActionImpl, Exec, Hello, Line, Manifest, PluginInfo, RemoteResult, StreamConfig,
    StreamImpl, Value, line::PatternLine,
};
use remoc::{rch::mpsc, rtc};
use serde::{Deserialize, Serialize};

#[cfg(test)]
mod tests;

#[tokio::main]
async fn main() {
    let plugin = Plugin::default();
    reaction_plugin::main_loop(plugin).await;
}

#[derive(Default)]
struct Plugin {}

impl PluginInfo for Plugin {
    async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
        Ok(Manifest {
            hello: Hello::new(),
            streams: BTreeSet::from(["virtual".into()]),
            actions: BTreeSet::from(["virtual".into()]),
        })
    }

    async fn load_config(
        &mut self,
        streams: Vec<StreamConfig>,
        actions: Vec<ActionConfig>,
    ) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
        let mut ret_streams = Vec::with_capacity(streams.len());
        let mut ret_actions = Vec::with_capacity(actions.len());

        let mut local_streams = BTreeMap::new();

        for StreamConfig {
            stream_name,
            stream_type,
            config,
        } in streams
        {
            if stream_type != "virtual" {
                return Err("This plugin can't handle other stream types than virtual".into());
            }

            let (virtual_stream, receiver) = VirtualStream::new(config)?;

            if local_streams.insert(stream_name, virtual_stream).is_some() {
                return Err("this virtual stream has already been initialized".into());
            }

            ret_streams.push(StreamImpl {
                stream: receiver,
                standalone: false,
            });
        }

        for ActionConfig {
            stream_name,
            filter_name,
            action_name,
            action_type,
            config,
            patterns,
        } in actions
        {
            if &action_type != "virtual" {
                return Err("This plugin can't handle other action types than virtual".into());
            }

            let (mut virtual_action, tx) = VirtualAction::new(
                stream_name,
                filter_name,
                action_name,
                config,
                patterns,
                &local_streams,
            )?;

            tokio::spawn(async move { virtual_action.serve().await });

            ret_actions.push(ActionImpl { tx });
        }

        Ok((ret_streams, ret_actions))
    }

    async fn start(&mut self) -> RemoteResult<()> {
        Ok(())
    }

    async fn close(self) -> RemoteResult<()> {
        Ok(())
    }
}

#[derive(Clone)]
struct VirtualStream {
    tx: mpsc::Sender<Line>,
}

impl VirtualStream {
    fn new(config: Value) -> Result<(Self, mpsc::Receiver<Line>), String> {
        const CONFIG_ERROR: &str = "streams of type virtual take no options";
        match config {
            Value::Null => (),
            Value::Object(map) => {
                if !map.is_empty() {
                    return Err(CONFIG_ERROR.into());
                }
            }
            _ => return Err(CONFIG_ERROR.into()),
        }

        let (tx, rx) = mpsc::channel(1);
        Ok((Self { tx }, rx))
    }
}

#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
struct ActionOptions {
    /// The line to send to the corresponding virtual stream, example: "ban \<ip\>"
    send: String,
    /// The name of the corresponding virtual stream, example: "my_stream"
    to: String,
}

struct VirtualAction {
    rx: mpsc::Receiver<Exec>,
    send: PatternLine,
    to: VirtualStream,
}

impl VirtualAction {
    fn new(
        stream_name: String,
        filter_name: String,
        action_name: String,
        config: Value,
        patterns: Vec<String>,
        streams: &BTreeMap<String, VirtualStream>,
    ) -> Result<(Self, mpsc::Sender<Exec>), String> {
        let options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
            format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
        })?;

        let send = PatternLine::new(options.send, patterns);

        let stream = streams.get(&options.to).ok_or_else(|| {
            format!(
                "action {}.{}.{}: to \"{}\" matches no stream name",
                stream_name, filter_name, action_name, options.to
            )
        })?;

        let (tx, rx) = mpsc::channel(1);
        Ok((
            Self {
                rx,
                send,
                to: stream.clone(),
            },
            tx,
        ))
    }

    async fn serve(&mut self) {
        while let Ok(Some(exec)) = self.rx.recv().await {
            let line = self.send.line(exec.match_);
            self.to.tx.send((line, exec.time)).await.unwrap();
        }
    }
}
@@ -1,322 +0,0 @@

use std::time::{SystemTime, UNIX_EPOCH};

use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig, Value};
use serde_json::json;

use crate::Plugin;

#[tokio::test]
async fn conf_stream() {
    // Invalid type
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "virtu".into(),
                    config: Value::Null
                }],
                vec![]
            )
            .await
            .is_err()
    );

    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "virtual".into(),
                    config: Value::Null
                }],
                vec![]
            )
            .await
            .is_ok()
    );

    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "virtual".into(),
                    config: json!({}).into(),
                }],
                vec![]
            )
            .await
            .is_ok()
    );

    // Invalid conf: must be empty
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "virtual".into(),
                    config: json!({ "key": "value" }).into(),
                }],
                vec![]
            )
            .await
            .is_err()
    );
}

#[tokio::test]
async fn conf_action() {
    let streams = vec![StreamConfig {
        stream_name: "stream".into(),
        stream_type: "virtual".into(),
        config: Value::Null,
    }];

    let valid_conf = json!({ "send": "message", "to": "stream" });

    let missing_send_conf = json!({ "to": "stream" });
    let missing_to_conf = json!({ "send": "stream" });
    let extra_attr_conf = json!({ "send": "message", "send2": "message", "to": "stream" });

    let patterns = Vec::default();

    // Invalid type
    assert!(
        Plugin::default()
            .load_config(
                streams.clone(),
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "virtu".into(),
                    config: Value::Null,
                    patterns: patterns.clone(),
                }]
            )
            .await
            .is_err()
    );
    assert!(
        Plugin::default()
            .load_config(
                streams.clone(),
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "virtual".into(),
                    config: valid_conf.into(),
                    patterns: patterns.clone()
                }]
            )
            .await
            .is_ok()
    );

    for conf in [missing_send_conf, missing_to_conf, extra_attr_conf] {
        assert!(
            Plugin::default()
                .load_config(
                    streams.clone(),
                    vec![ActionConfig {
                        stream_name: "stream".into(),
                        filter_name: "filter".into(),
                        action_name: "action".into(),
                        action_type: "virtual".into(),
                        config: conf.clone().into(),
                        patterns: patterns.clone()
                    }]
                )
                .await
                .is_err(),
            "conf: {:?}",
            conf
        );
    }
}

#[tokio::test]
async fn conf_send() {
    // Valid to: option
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "virtual".into(),
                    config: Value::Null,
                }],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "virtual".into(),
                    config: json!({ "send": "message", "to": "stream" }).into(),
                    patterns: vec![],
                }]
            )
            .await
            .is_ok(),
    );

    // Invalid to: option
    assert!(
        Plugin::default()
            .load_config(
                vec![StreamConfig {
                    stream_name: "stream".into(),
                    stream_type: "virtual".into(),
                    config: Value::Null,
                }],
                vec![ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "virtual".into(),
                    config: json!({ "send": "message", "to": "stream1" }).into(),
                    patterns: vec![],
                }]
            )
            .await
            .is_err(),
    );
}

// Let's allow empty streams for now.
// I guess it can be useful to have manual only actions.
//
// #[tokio::test]
// async fn conf_empty_stream() {
//     assert!(
//         Plugin::default()
//             .load_config(
//                 vec![StreamConfig {
//                     stream_name: "stream".into(),
//                     stream_type: "virtual".into(),
//                     config: Value::Null,
//                 }],
//                 vec![],
//             )
//             .await
//             .is_err(),
//     );
// }

#[tokio::test]
async fn run_simple() {
    let mut plugin = Plugin::default();
    let (mut streams, mut actions) = plugin
        .load_config(
            vec![StreamConfig {
                stream_name: "stream".into(),
                stream_type: "virtual".into(),
                config: Value::Null,
            }],
            vec![ActionConfig {
                stream_name: "stream".into(),
                filter_name: "filter".into(),
                action_name: "action".into(),
                action_type: "virtual".into(),
                config: json!({ "send": "message <test>", "to": "stream" }).into(),
                patterns: vec!["test".into()],
            }],
        )
        .await
        .unwrap();

    let mut stream = streams.pop().unwrap();
    let action = actions.pop().unwrap();
    assert!(!stream.standalone);

    for m in ["test1", "test2", "test3", " a a a aa a a"] {
        let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        assert!(
            action
                .tx
                .send(Exec {
                    match_: vec![m.into()],
                    time,
                })
                .await
                .is_ok()
        );
        assert_eq!(
            stream.stream.recv().await.unwrap().unwrap(),
            (format!("message {m}"), time),
        );
    }
}

#[tokio::test]
async fn run_two_actions() {
    let mut plugin = Plugin::default();
    let (mut streams, mut actions) = plugin
        .load_config(
            vec![StreamConfig {
                stream_name: "stream".into(),
                stream_type: "virtual".into(),
                config: Value::Null,
            }],
            vec![
                ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "virtual".into(),
                    config: json!({ "send": "send <a>", "to": "stream" }).into(),
                    patterns: vec!["a".into(), "b".into()],
                },
                ActionConfig {
                    stream_name: "stream".into(),
                    filter_name: "filter".into(),
                    action_name: "action".into(),
                    action_type: "virtual".into(),
                    config: json!({ "send": "<b> send", "to": "stream" }).into(),
                    patterns: vec!["a".into(), "b".into()],
                },
            ],
        )
        .await
        .unwrap();

    let mut stream = streams.pop().unwrap();
    assert!(!stream.standalone);

    let action2 = actions.pop().unwrap();
    let action1 = actions.pop().unwrap();

    let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();

    assert!(
        action1
            .tx
            .send(Exec {
                match_: vec!["aa".into(), "bb".into()],
                time,
})
|
||||
.await
|
||||
.is_ok(),
|
||||
);
|
||||
assert_eq!(
|
||||
stream.stream.recv().await.unwrap().unwrap(),
|
||||
("send aa".into(), time),
|
||||
);
|
||||
|
||||
assert!(
|
||||
action2
|
||||
.tx
|
||||
.send(Exec {
|
||||
match_: vec!["aa".into(), "bb".into()],
|
||||
time,
|
||||
})
|
||||
.await
|
||||
.is_ok(),
|
||||
);
|
||||
assert_eq!(
|
||||
stream.stream.recv().await.unwrap().unwrap(),
|
||||
("bb send".into(), time),
|
||||
);
|
||||
}
|
||||
|
|
@@ -1,20 +0,0 @@
[package]
name = "reaction-plugin"
version = "1.0.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "logs", "monitoring", "plugin"]
categories = ["security"]
description = "Plugin interface for reaction, a daemon that scans logs and takes action (alternative to fail2ban)"

[dependencies]
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tokio.features = ["io-std", "signal"]
tokio-util.workspace = true
tokio-util.features = ["rt"]

@@ -1,599 +0,0 @@
//! This crate defines the API between reaction's core and plugins.
//!
//! Plugins must be written in Rust, for now.
//!
//! This documentation assumes the reader has some knowledge of Rust.
//! However, if you find that something is unclear, don't hesitate to
//! [ask for help](https://framagit.org/ppom/reaction/#help), even if you're new to Rust.
//!
//! To implement a plugin, provide an implementation of [`PluginInfo`], the
//! entrypoint of a plugin.
//! It allows defining `0` to `n` custom stream and action types.
//!
//! ## Note on reaction-plugin API stability
//!
//! This is the v1 of reaction's plugin interface.
//! It's quite efficient and complete, but it has the big drawback of being Rust-only and [`tokio`]-only.
//!
//! In the future, I'd like to define a language-agnostic interface, which will be a major breaking change in the API.
//! However, I'll try my best to reduce the necessary code changes for plugins that use this v1.
//!
//! ## Naming & calling conventions
//!
//! Your plugin should be named `reaction-plugin-$NAME`, eg. `reaction-plugin-postgresql`.
//! It will be invoked with one positional argument "serve".
//! ```bash
//! reaction-plugin-$NAME serve
//! ```
//! This can be useful if you want to provide CLI functionality to your users,
//! so you can distinguish between a human user and reaction.
//!
//! ### State directory
//!
//! It will be executed in its own directory, in which it should have write access.
//! The directory is `$reaction_state_directory/plugin_data/$NAME`.
//! reaction's [state_directory](https://reaction.ppom.me/reference.html#state_directory)
//! defaults to its working directory, which is `/var/lib/reaction` in most setups.
//!
//! So your plugin directory should most often be `/var/lib/reaction/plugin_data/$NAME`,
//! but the plugin shouldn't expect that and should use the current working directory instead.
//!
//! ## Communication
//!
//! Communication between the plugin and reaction is based on [`remoc`], which permits multiplexing channels and remote object/function/trait
//! calls over a single transport channel.
//! The read and write channels are stdin and stdout, so you shouldn't use them for anything else.
//!
//! [`remoc`] builds upon [`tokio`], so you'll need to use tokio too.
//!
//! ### Errors
//!
//! Errors during:
//! - config loading in [`PluginInfo::load_config`]
//! - startup in [`PluginInfo::start`]
//!
//! should be returned to reaction by the function's return value, permitting reaction to abort startup.
//!
//! During normal runtime, after the plugin has loaded its config and started, and before reaction is quitting, there is no *rusty* way to send errors to reaction.
//! Errors can then be printed to stderr.
//! They'll be captured line by line and re-printed by reaction, with the plugin name prepended.
//!
//! A line can start with `DEBUG `, `INFO `, `WARN `, `ERROR `.
//! If it starts with none of the above, the line is assumed to be an error.
//!
//! Example:
//! Those lines:
//! ```log
//! WARN This is an official warning from the plugin
//! Freeeee errrooooorrr
//! ```
//! Will become:
//! ```log
//! WARN plugin test: This is an official warning from the plugin
//! ERROR plugin test: Freeeee errrooooorrr
//! ```
//!
//! Plugins should not exit when there is an error: reaction quits only when told to do so,
//! or if all its streams exit, and won't retry starting a failing plugin or stream.
//! Please only exit if you're in a 100% failing state.
//! It's considered better to continue operating in a degraded state than exiting.
//!
//! ## Getting started
//!
//! If you don't have Rust already installed, follow their [*Getting Started* documentation](https://rust-lang.org/learn/get-started/)
//! to get the Rust build tools and learn about editor support.
//!
//! Then create a new repository with cargo:
//!
//! ```bash
//! cargo new reaction-plugin-$NAME
//! cd reaction-plugin-$NAME
//! ```
//!
//! Add required dependencies:
//!
//! ```bash
//! cargo add reaction-plugin tokio
//! ```
//!
//! Replace `src/main.rs` with those contents:
//!
//! ```ignore
//! use reaction_plugin::PluginInfo;
//!
//! #[tokio::main]
//! async fn main() {
//!     let plugin = MyPlugin::default();
//!     reaction_plugin::main_loop(plugin).await;
//! }
//!
//! #[derive(Default)]
//! struct MyPlugin {}
//!
//! impl PluginInfo for MyPlugin {
//!     // ...
//! }
//! ```
//!
//! Your IDE should now propose to implement missing members of the [`PluginInfo`] trait.
//! Your journey starts!
//!
//! ## Examples
//!
//! Core plugins can be found here: <https://framagit.org/ppom/reaction/-/tree/main/plugins>.
//!
//! - The "virtual" plugin is the simplest and can serve as a good complete example that links custom stream types and custom action types.
//! - The "ipset" plugin is a good example of an action-only plugin.

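The stderr convention documented above (lines prefixed with `DEBUG `, `INFO `, `WARN `, `ERROR `; anything else treated as an error) can be sketched on the core side. This is a hypothetical, standalone illustration; `classify` is not part of the crate:

```rust
// Hypothetical sketch of how reaction core could classify a plugin's stderr
// lines, following the prefix convention documented above.
fn classify(line: &str) -> (&'static str, &str) {
    // A line may announce its own level with one of these prefixes.
    for level in ["DEBUG ", "INFO ", "WARN ", "ERROR "] {
        if let Some(rest) = line.strip_prefix(level) {
            return (level.trim_end(), rest);
        }
    }
    // Lines without a known prefix are assumed to be errors.
    ("ERROR", line)
}

fn main() {
    assert_eq!(
        classify("WARN This is an official warning from the plugin"),
        ("WARN", "This is an official warning from the plugin"),
    );
    assert_eq!(classify("Freeeee errrooooorrr"), ("ERROR", "Freeeee errrooooorrr"));
}
```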
use std::{
    collections::{BTreeMap, BTreeSet},
    env::args,
    error::Error,
    fmt::Display,
    process::exit,
    time::Duration,
};

use remoc::{
    Connect, rch,
    rtc::{self, Server},
};
use serde::{Deserialize, Serialize};
use serde_json::{Number, Value as JValue};
use tokio::io::{stdin, stdout};

pub mod line;
pub mod shutdown;
pub mod time;

/// The only trait that **must** be implemented by a plugin.
/// It provides lists of stream, filter and action types implemented by a dynamic plugin.
#[rtc::remote]
pub trait PluginInfo {
    /// Return the manifest of the plugin.
    /// This should not be dynamic, and should always return the same manifest.
    ///
    /// Example implementation:
    /// ```
    /// Ok(Manifest {
    ///     hello: Hello::new(),
    ///     streams: BTreeSet::from(["mystreamtype".into()]),
    ///     actions: BTreeSet::from(["myactiontype".into()]),
    /// })
    /// ```
    ///
    /// First function called.
    async fn manifest(&mut self) -> Result<Manifest, rtc::CallError>;

    /// Load all plugin stream and action configurations.
    /// Must error if config is invalid.
    ///
    /// The plugin should not start running mutable commands here:
    /// it should be ok to quit without cleanup for now.
    ///
    /// Each [`StreamConfig`] from the `streams` arg should result in a corresponding [`StreamImpl`] returned, in the same order.
    /// Each [`ActionConfig`] from the `actions` arg should result in a corresponding [`ActionImpl`] returned, in the same order.
    ///
    /// Function called after [`PluginInfo::manifest`].
    async fn load_config(
        &mut self,
        streams: Vec<StreamConfig>,
        actions: Vec<ActionConfig>,
    ) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)>;

    /// Notify the plugin that setup is finished, giving it a last occasion to report an error that'll make reaction exit.
    /// All initialization (opening remote connections, starting streams, etc.) should happen here.
    ///
    /// Function called after [`PluginInfo::load_config`].
    async fn start(&mut self) -> RemoteResult<()>;

    /// Notify the plugin that reaction is quitting and that the plugin should quit too.
    /// A few seconds later, the plugin will receive SIGTERM.
    /// A few seconds later, the plugin will receive SIGKILL.
    ///
    /// Function called after [`PluginInfo::start`], when reaction is quitting.
    async fn close(mut self) -> RemoteResult<()>;
}

/// The config for one Stream of a type advertised by this plugin.
///
/// For example this user config:
/// ```jsonnet
/// {
///   streams: {
///     mystream: {
///       type: "mystreamtype",
///       options: {
///         key: "value",
///         num: 3,
///       },
///       // filters: ...
///     },
///   },
/// }
/// ```
///
/// would result in the following `StreamConfig`:
///
/// ```
/// StreamConfig {
///     stream_name: "mystream",
///     stream_type: "mystreamtype",
///     config: Value::Object(BTreeMap::from([
///         ("key", Value::String("value")),
///         ("num", Value::Integer(3)),
///     ])),
/// }
/// ```
///
/// Don't hesitate to take advantage of [`serde_json::from_value`] to deserialize the [`Value`] into a Rust struct:
///
/// ```
/// #[derive(Deserialize)]
/// struct MyStreamOptions {
///     key: String,
///     num: i64,
/// }
///
/// fn validate_config(stream_config: Value) -> Result<MyStreamOptions, serde_json::Error> {
///     serde_json::from_value(stream_config.into())
/// }
/// ```
#[derive(Serialize, Deserialize, Clone)]
pub struct StreamConfig {
    pub stream_name: String,
    pub stream_type: String,
    pub config: Value,
}

/// The config for one Action of a type advertised by this plugin.
///
/// For example this user config:
/// ```jsonnet
/// {
///   streams: {
///     mystream: {
///       // ...
///       filters: {
///         myfilter: {
///           // ...
///           actions: {
///             myaction: {
///               type: "myactiontype",
///               options: {
///                 boolean: true,
///                 array: ["item"],
///               },
///             },
///           },
///         },
///       },
///     },
///   },
/// }
/// ```
///
/// would result in the following `ActionConfig`:
///
/// ```rust
/// ActionConfig {
///     action_name: "myaction",
///     action_type: "myactiontype",
///     config: Value::Object(BTreeMap::from([
///         ("boolean", Value::Boolean(true)),
///         ("array", Value::Array([Value::String("item")])),
///     ])),
/// }
/// ```
///
/// Don't hesitate to take advantage of [`serde_json::from_value`] to deserialize the [`Value`] into a Rust struct:
///
/// ```rust
/// #[derive(Deserialize)]
/// struct MyActionOptions {
///     boolean: bool,
///     array: Vec<String>,
/// }
///
/// fn validate_config(action_config: Value) -> Result<MyActionOptions, serde_json::Error> {
///     serde_json::from_value(action_config.into())
/// }
/// ```
#[derive(Serialize, Deserialize, Clone)]
pub struct ActionConfig {
    pub stream_name: String,
    pub filter_name: String,
    pub action_name: String,
    pub action_type: String,
    pub config: Value,
    pub patterns: Vec<String>,
}

/// Mandatory announcement of a plugin's protocol version, stream and action types.
#[derive(Serialize, Deserialize)]
pub struct Manifest {
    /// Protocol version.
    /// Just use the [`Hello::new`] constructor that uses this crate's current version.
    pub hello: Hello,
    /// Stream types that should be made available to reaction users
    ///
    /// ```jsonnet
    /// {
    ///   streams: {
    ///     my_stream: {
    ///       type: "..."
    ///       # ↑ all those exposed types
    ///     }
    ///   }
    /// }
    /// ```
    pub streams: BTreeSet<String>,
    /// Action types that should be made available to reaction users
    ///
    /// ```jsonnet
    /// {
    ///   streams: {
    ///     mystream: {
    ///       filters: {
    ///         myfilter: {
    ///           actions: {
    ///             myaction: {
    ///               type: "myactiontype",
    ///               # ↑ all those exposed types
    ///             },
    ///           },
    ///         },
    ///       },
    ///     },
    ///   },
    /// }
    /// ```
    pub actions: BTreeSet<String>,
}

#[derive(Default, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct Hello {
    /// Major version of the protocol.
    /// An increment means a breaking change.
    pub version_major: u32,
    /// Minor version of the protocol.
    /// An increment means reaction core can handle older version plugins.
    pub version_minor: u32,
}

impl Hello {
    /// Constructor that fills a [`Hello`] struct with [`crate`]'s version.
    /// You should use this in your plugin [`Manifest`].
    pub fn new() -> Hello {
        Hello {
            version_major: env!("CARGO_PKG_VERSION_MAJOR").parse().unwrap(),
            version_minor: env!("CARGO_PKG_VERSION_MINOR").parse().unwrap(),
        }
    }

    /// Used by the reaction daemon. Permits checking compatibility between two versions.
    /// Major versions must be the same between the daemon and plugin.
    /// The minor version of the daemon must be greater than or equal to the minor version of the plugin.
    pub fn is_compatible(server: &Hello, plugin: &Hello) -> std::result::Result<(), String> {
        if server.version_major == plugin.version_major
            && server.version_minor >= plugin.version_minor
        {
            Ok(())
        } else if plugin.version_major > server.version_major
            || (plugin.version_major == server.version_major
                && plugin.version_minor > server.version_minor)
        {
            Err("consider upgrading reaction".into())
        } else {
            Err("consider upgrading the plugin".into())
        }
    }
}

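The compatibility rule of `is_compatible` can be exercised standalone. This sketch re-states the comparison outside the crate; the struct `V` is a local stand-in for `Hello`:

```rust
// Local stand-in for the Hello struct: the rule is "same major, and the
// daemon's minor must be greater than or equal to the plugin's minor".
struct V {
    major: u32,
    minor: u32,
}

fn compatible(server: &V, plugin: &V) -> bool {
    server.major == plugin.major && server.minor >= plugin.minor
}

fn main() {
    // A 1.3 daemon can drive plugins built against protocol 1.0 through 1.3...
    assert!(compatible(&V { major: 1, minor: 3 }, &V { major: 1, minor: 0 }));
    // ...but not a plugin that needs a newer minor ("consider upgrading reaction")...
    assert!(!compatible(&V { major: 1, minor: 3 }, &V { major: 1, minor: 4 }));
    // ...nor one from another major version (breaking change).
    assert!(!compatible(&V { major: 2, minor: 0 }, &V { major: 1, minor: 3 }));
}
```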
/// A clone of [`serde_json::Value`].
/// Implements From & Into [`serde_json::Value`].
#[derive(Serialize, Deserialize, Clone)]
pub enum Value {
    Null,
    Bool(bool),
    Integer(i64),
    Float(f64),
    String(String),
    Array(Vec<Value>),
    Object(BTreeMap<String, Value>),
}

impl From<JValue> for Value {
    fn from(value: serde_json::Value) -> Self {
        match value {
            JValue::Null => Value::Null,
            JValue::Bool(b) => Value::Bool(b),
            JValue::Number(number) => {
                if let Some(number) = number.as_i64() {
                    Value::Integer(number)
                } else if let Some(number) = number.as_f64() {
                    Value::Float(number)
                } else {
                    Value::Null
                }
            }
            JValue::String(s) => Value::String(s.into()),
            JValue::Array(v) => Value::Array(v.into_iter().map(|e| e.into()).collect()),
            JValue::Object(m) => Value::Object(m.into_iter().map(|(k, v)| (k, v.into())).collect()),
        }
    }
}

impl Into<JValue> for Value {
    fn into(self) -> JValue {
        match self {
            Value::Null => JValue::Null,
            Value::Bool(v) => JValue::Bool(v),
            Value::Integer(v) => JValue::Number(v.into()),
            Value::Float(v) => JValue::Number(Number::from_f64(v).unwrap()),
            Value::String(v) => JValue::String(v),
            Value::Array(v) => JValue::Array(v.into_iter().map(|e| e.into()).collect()),
            Value::Object(m) => JValue::Object(m.into_iter().map(|(k, v)| (k, v.into())).collect()),
        }
    }
}

/// Represents a Stream handled by a plugin on reaction core's side.
///
/// During [`PluginInfo::load_config`], the plugin should create a [`remoc::rch::mpsc::channel`] of [`Line`].
/// It will keep the sending side for itself and put the receiving side in a [`StreamImpl`].
///
/// The plugin should start sending [`Line`]s in the channel only after [`PluginInfo::start`] has been called by reaction core.
#[derive(Debug, Serialize, Deserialize)]
pub struct StreamImpl {
    pub stream: rch::mpsc::Receiver<Line>,
    /// Whether this stream works standalone, or if it needs other streams or actions to be fed.
    /// Defaults to true.
    /// When `false`, reaction will exit if it's the last one standing.
    #[serde(default = "_true")]
    pub standalone: bool,
}

fn _true() -> bool {
    true
}

/// Messages passed from the [`StreamImpl`] of a plugin to reaction core
pub type Line = (String, Duration);

// // Filters
// // For now, plugins can't handle custom filter implementations.
// #[derive(Serialize, Deserialize)]
// pub struct FilterImpl {
//     pub stream: rch::lr::Sender<Exec>,
// }
// #[derive(Serialize, Deserialize)]
// pub struct Match {
//     pub match_: String,
//     pub result: rch::oneshot::Sender<bool>,
// }

/// Represents an Action handled by a plugin on reaction core's side.
///
/// During [`PluginInfo::load_config`], the plugin should create a [`remoc::rch::mpsc::channel`] of [`Exec`].
/// It will keep the receiving side for itself and put the sending side in an [`ActionImpl`].
///
/// The plugin will start receiving [`Exec`]s in the channel from reaction only after [`PluginInfo::start`] has been called by reaction core.
#[derive(Clone, Serialize, Deserialize)]
pub struct ActionImpl {
    pub tx: rch::mpsc::Sender<Exec>,
}

/// A [trigger](https://reaction.ppom.me/reference.html#trigger) of the Action, sent by reaction core to the plugin.
///
/// The plugin should perform the configured action for each received [`Exec`].
///
/// Any error during its execution should be logged to stderr, see [`crate#Errors`] for error handling recommendations.
#[derive(Serialize, Deserialize)]
pub struct Exec {
    pub match_: Vec<String>,
    pub time: Duration,
}

/// The main loop for a plugin.
///
/// Bootstraps the communication with reaction core on the process' stdin and stdout,
/// then holds the connection and maintains the plugin in a server state.
///
/// Your main function should only create a struct that implements [`PluginInfo`]
/// and then call [`main_loop`]:
/// ```ignore
/// #[tokio::main]
/// async fn main() {
///     let plugin = MyPlugin::default();
///     reaction_plugin::main_loop(plugin).await;
/// }
/// ```
pub async fn main_loop<T: PluginInfo + Send + Sync + 'static>(plugin_info: T) {
    // First check that we're called by reaction
    let mut args = args();
    // skip 0th argument
    let _skip = args.next();
    if args.next().is_none_or(|arg| arg != "serve") {
        eprintln!("This plugin is not meant to be called as-is.");
        eprintln!(
            "reaction daemon starts plugins itself and communicates with them on stdin, stdout and stderr."
        );
        eprintln!("See the doc on plugin configuration: https://reaction.ppom.me/plugins/");
        exit(1);
    } else {
        let (conn, mut tx, _rx): (
            _,
            remoc::rch::base::Sender<PluginInfoClient>,
            remoc::rch::base::Receiver<()>,
        ) = Connect::io(remoc::Cfg::default(), stdin(), stdout())
            .await
            .unwrap();

        let (server, client) = PluginInfoServer::new(plugin_info, 1);

        let (res1, (_, res2), res3) = tokio::join!(tx.send(client), server.serve(), conn);
        let mut exit_code = 0;
        if let Err(err) = res1 {
            eprintln!("ERROR could not send plugin info to reaction: {err}");
            exit_code = 1;
        }
        if let Err(err) = res2 {
            eprintln!("ERROR could not launch plugin service for reaction: {err}");
            exit_code = 2;
        }
        if let Err(err) = res3 {
            eprintln!("ERROR connection error with reaction: {err}");
            exit_code = 3;
        }
        exit(exit_code);
    }
}

// Errors

pub type RemoteResult<T> = Result<T, RemoteError>;

/// reaction-plugin's Error type.
#[derive(Debug, Serialize, Deserialize)]
pub enum RemoteError {
    /// A connection error that originates from [`remoc`], the crate used for communication on the plugin's `stdin`/`stdout`.
    ///
    /// You should not instantiate this type of error yourself.
    Remoc(rtc::CallError),
    /// A free String for application-specific errors.
    ///
    /// You should only instantiate this type of error yourself for errors that you encounter at startup and shutdown.
    ///
    /// Otherwise, any error during the plugin's runtime should be logged to stderr, see [`crate#Errors`] for error handling recommendations.
    Plugin(String),
}

impl Display for RemoteError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            RemoteError::Remoc(call_error) => write!(f, "communication error: {call_error}"),
            RemoteError::Plugin(err) => write!(f, "{err}"),
        }
    }
}

impl Error for RemoteError {}

impl From<String> for RemoteError {
    fn from(value: String) -> Self {
        Self::Plugin(value)
    }
}

impl From<&str> for RemoteError {
    fn from(value: &str) -> Self {
        Self::Plugin(value.into())
    }
}

impl From<rtc::CallError> for RemoteError {
    fn from(value: rtc::CallError) -> Self {
        Self::Remoc(value)
    }
}

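Both `Line` and `Exec` carry their timestamp as a `Duration`; as the virtual plugin's tests suggest, this is the elapsed time since `UNIX_EPOCH`. A minimal, std-only sketch of producing and reading back such a timestamp (an illustration, not part of the crate):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Timestamp as carried by Line and Exec: duration since the Unix epoch.
fn now_since_epoch() -> Duration {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap()
}

fn main() {
    let time = now_since_epoch();

    // A Line pairs the stream's text with that timestamp.
    let line: (String, Duration) = ("bad password for 1.2.3.4".into(), time);

    // The timestamp can be turned back into a SystemTime when needed.
    let restored = UNIX_EPOCH + line.1;
    assert!(restored <= SystemTime::now());
}
```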
@@ -1,237 +0,0 @@
//! Helper module that permits using templated lines (e.g. `bad password for <ip>`), like in Stream's and Action's `cmd`.
//!
//! Corresponding reaction core settings:
//! - [Stream's `cmd`](https://reaction.ppom.me/reference.html#cmd)
//! - [Action's `cmd`](https://reaction.ppom.me/reference.html#cmd-1)

#[derive(Debug, PartialEq, Eq)]
enum SendItem {
    Index(usize),
    Str(String),
}

impl SendItem {
    fn min_size(&self) -> usize {
        match self {
            Self::Index(_) => 0,
            Self::Str(s) => s.len(),
        }
    }
}

/// Helper struct that transforms a template line with patterns into an instantiated line from a match.
///
/// Useful when you permit the user to reconstruct lines from an action, like in reaction's native actions and in the virtual plugin:
/// ```yaml
/// actions:
///   native:
///     cmd: ["iptables", "...", "<ip>"]
///
///   virtual:
///     type: virtual
///     options:
///       send: "<ip>: bad password on user <user>"
///       to: "my_virtual_stream"
/// ```
///
/// Usage example:
/// ```
/// # use reaction_plugin::line::PatternLine;
/// #
/// let template = "<ip>: bad password on user <user>".to_string();
/// let patterns = vec!["ip".to_string(), "user".to_string()];
/// let pattern_line = PatternLine::new(template, patterns);
///
/// assert_eq!(
///     pattern_line.line(vec!["1.2.3.4".to_string(), "root".to_string()]),
///     "1.2.3.4: bad password on user root".to_string(),
/// );
/// ```
///
/// You can find full examples in those plugins:
/// `reaction-plugin-virtual`,
/// `reaction-plugin-cluster`.
#[derive(Debug)]
pub struct PatternLine {
    line: Vec<SendItem>,
    min_size: usize,
}

impl PatternLine {
    /// Construct [`PatternLine`] from a template line and the list of patterns of the underlying [Filter](https://reaction.ppom.me/reference.html#filter).
    ///
    /// This list of patterns comes from [`super::ActionConfig`].
    pub fn new(template: String, patterns: Vec<String>) -> Self {
        let line = Self::_from(patterns, Vec::from([SendItem::Str(template)]));
        Self {
            min_size: line.iter().map(SendItem::min_size).sum(),
            line,
        }
    }

    fn _from(mut patterns: Vec<String>, acc: Vec<SendItem>) -> Vec<SendItem> {
        match patterns.pop() {
            None => acc,
            Some(pattern) => {
                let enclosed_pattern = format!("<{pattern}>");
                let acc = acc
                    .into_iter()
                    .flat_map(|item| match &item {
                        SendItem::Index(_) => vec![item],
                        SendItem::Str(str) => match str.find(&enclosed_pattern) {
                            Some(i) => {
                                let pattern_index = patterns.len();
                                let mut ret = vec![];

                                let (left, mid) = str.split_at(i);
                                if !left.is_empty() {
                                    ret.push(SendItem::Str(left.into()))
                                }

                                ret.push(SendItem::Index(pattern_index));

                                if mid.len() > enclosed_pattern.len() {
                                    let (_, right) = mid.split_at(enclosed_pattern.len());
                                    ret.push(SendItem::Str(right.into()))
                                }

                                ret
                            }
                            None => vec![item],
                        },
                    })
                    .collect();
                Self::_from(patterns, acc)
            }
        }
    }

    pub fn line(&self, match_: Vec<String>) -> String {
        let mut res = String::with_capacity(self.min_size);
        for item in &self.line {
            match item {
                SendItem::Index(i) => {
                    if let Some(element) = match_.get(*i) {
                        res.push_str(element);
                    }
                }
                SendItem::Str(str) => res.push_str(str),
            }
        }
        res
    }
}

#[cfg(test)]
mod tests {
    use crate::line::{PatternLine, SendItem};

    #[test]
    fn line_0_pattern() {
        let msg = "my message".to_string();
        let line = PatternLine::new(msg.clone(), vec![]);
        assert_eq!(line.line, vec![SendItem::Str(msg.clone())]);
        assert_eq!(line.min_size, msg.len());
        assert_eq!(line.line(vec![]), msg.clone());
    }

    #[test]
    fn line_1_pattern() {
        let patterns = vec![
            "ignored".into(),
            "oh".into(),
            "ignored".into(),
            "my".into(),
            "test".into(),
        ];

        let matches = vec!["yay", "oh", "my", "test", "<oh>", "<my>", "<test>"];

        let tests = [
            (
                "<oh> my test",
                1,
                vec![SendItem::Index(1), SendItem::Str(" my test".into())],
                vec![
                    ("yay", "yay my test"),
                    ("oh", "oh my test"),
                    ("my", "my my test"),
                    ("test", "test my test"),
                    ("<oh>", "<oh> my test"),
                    ("<my>", "<my> my test"),
                    ("<test>", "<test> my test"),
                ],
            ),
            (
                "oh <my> test",
                3,
                vec![
                    SendItem::Str("oh ".into()),
                    SendItem::Index(3),
                    SendItem::Str(" test".into()),
                ],
                vec![
                    ("yay", "oh yay test"),
                    ("oh", "oh oh test"),
                    ("my", "oh my test"),
                    ("test", "oh test test"),
                    ("<oh>", "oh <oh> test"),
                    ("<my>", "oh <my> test"),
                    ("<test>", "oh <test> test"),
                ],
            ),
            (
                "oh my <test>",
                4,
                vec![SendItem::Str("oh my ".into()), SendItem::Index(4)],
                vec![
                    ("yay", "oh my yay"),
                    ("oh", "oh my oh"),
                    ("my", "oh my my"),
                    ("test", "oh my test"),
                    ("<oh>", "oh my <oh>"),
                    ("<my>", "oh my <my>"),
                    ("<test>", "oh my <test>"),
                ],
            ),
        ];

        for (msg, index, expected_pl, lines) in tests {
            let pattern_line = PatternLine::new(msg.to_string(), patterns.clone());
            assert_eq!(pattern_line.line, expected_pl);

            for (match_element, line) in lines {
                for match_default in &matches {
                    let mut match_ = vec![
                        match_default.to_string(),
                        match_default.to_string(),
                        match_default.to_string(),
                        match_default.to_string(),
                        match_default.to_string(),
                    ];
                    match_[index] = match_element.to_string();
                    assert_eq!(
                        pattern_line.line(match_.clone()),
                        line,
                        "match: {match_:?}, pattern_line: {pattern_line:?}"
                    );
                }
            }
        }
    }

    #[test]
    fn line_2_pattern() {
        let pattern_line = PatternLine::new("<a> ; <b>".into(), vec!["a".into(), "b".into()]);

        let matches = ["a", "b", "ab", "<a>", "<b>"];
        for a in &matches {
            for b in &matches {
                assert_eq!(
                    pattern_line.line(vec![a.to_string(), b.to_string()]),
                    format!("{a} ; {b}"),
                );
            }
        }
    }
}

@ -1,162 +0,0 @@
//! Helper module that provides structures to ease the quitting process when having multiple tokio tasks.
//!
//! It defines a [`ShutdownController`], which keeps track of ongoing tasks, asks them to shut down and waits for all of them to quit.
//!
//! You can have it as an attribute of your plugin struct.
//! ```
//! struct MyPlugin {
//!     shutdown: ShutdownController
//! }
//! ```
//!
//! You can then give a [`ShutdownToken`] to other tasks when creating them:
//!
//! ```
//! impl PluginInfo for MyPlugin {
//!     async fn start(&mut self) -> RemoteResult<()> {
//!         let token = self.shutdown.token();
//!
//!         tokio::spawn(async move {
//!             token.wait().await;
//!             eprintln!("DEBUG shutdown asked to quit, now quitting")
//!         })
//!     }
//! }
//! ```
//!
//! On closing, calling [`ShutdownController::ask_shutdown`] informs all tasks waiting on [`ShutdownToken::wait`] that it's time to leave.
//! Then we can wait for [`ShutdownController::wait_all_task_shutdown`] to complete.
//!
//! ```
//! impl PluginInfo for MyPlugin {
//!     async fn close(self) -> RemoteResult<()> {
//!         self.shutdown.ask_shutdown();
//!         self.shutdown.wait_all_task_shutdown().await;
//!         Ok(())
//!     }
//! }
//! ```
//!
//! [`ShutdownDelegate::handle_quit_signals`] handles SIGHUP, SIGINT and SIGTERM by gracefully shutting down tasks.

use tokio::signal::unix::{SignalKind, signal};
use tokio_util::{
    sync::{CancellationToken, WaitForCancellationFuture},
    task::task_tracker::{TaskTracker, TaskTrackerToken},
};

/// Keeps track of ongoing tasks, asks them to shut down and waits for all of them to quit.
/// Thin wrapper around [`tokio_util::sync::CancellationToken`] and [`tokio_util::task::task_tracker::TaskTracker`].
#[derive(Default, Clone)]
pub struct ShutdownController {
    shutdown_notifyer: CancellationToken,
    task_tracker: TaskTracker,
}

impl ShutdownController {
    pub fn new() -> Self {
        Self::default()
    }

    /// Ask all tasks to quit
    pub fn ask_shutdown(&self) {
        self.shutdown_notifyer.cancel();
        self.task_tracker.close();
    }

    /// Wait for all tasks to quit.
    /// This may return even without [`ShutdownController::ask_shutdown`] having been called
    /// first, if all tasks quit by themselves.
    pub async fn wait_all_task_shutdown(self) {
        self.task_tracker.close();
        self.task_tracker.wait().await;
    }

    /// Returns a new shutdown token, to be held by a task.
    pub fn token(&self) -> ShutdownToken {
        ShutdownToken::new(self.shutdown_notifyer.clone(), self.task_tracker.token())
    }

    /// Returns a [`ShutdownDelegate`], which is able to ask for shutdown
    /// without counting as a task that needs to be awaited.
    pub fn delegate(&self) -> ShutdownDelegate {
        ShutdownDelegate(self.shutdown_notifyer.clone())
    }

    /// Returns a future that resolves only once a shutdown has been requested.
    pub fn wait(&self) -> WaitForCancellationFuture<'_> {
        self.shutdown_notifyer.cancelled()
    }
}

/// Can ask for shutdown without counting as a task that needs to be awaited.
pub struct ShutdownDelegate(CancellationToken);

impl ShutdownDelegate {
    /// Ask all tasks to quit
    pub fn ask_shutdown(&self) {
        self.0.cancel();
    }

    /// Ensure [`Self::ask_shutdown`] is called whenever we receive SIGHUP,
    /// SIGTERM or SIGINT. Spawns a task that consumes self.
    pub fn handle_quit_signals(self) -> Result<(), String> {
        let err_str = |err| format!("could not register signal: {err}");

        let mut sighup = signal(SignalKind::hangup()).map_err(err_str)?;
        let mut sigint = signal(SignalKind::interrupt()).map_err(err_str)?;
        let mut sigterm = signal(SignalKind::terminate()).map_err(err_str)?;

        tokio::spawn(async move {
            let signal = tokio::select! {
                _ = sighup.recv() => "SIGHUP",
                _ = sigint.recv() => "SIGINT",
                _ = sigterm.recv() => "SIGTERM",
            };
            eprintln!("received {signal}, closing...");
            self.ask_shutdown();
        });
        Ok(())
    }
}

/// Created by a [`ShutdownController`].
/// Serves two purposes:
///
/// - Wait for a shutdown request to happen with [`Self::wait`]
/// - Keep track of the current task. While this token is held,
///   [`ShutdownController::wait_all_task_shutdown`] will block.
#[derive(Clone)]
pub struct ShutdownToken {
    shutdown_notifyer: CancellationToken,
    _task_tracker_token: TaskTrackerToken,
}

impl ShutdownToken {
    fn new(shutdown_notifyer: CancellationToken, _task_tracker_token: TaskTrackerToken) -> Self {
        Self {
            shutdown_notifyer,
            _task_tracker_token,
        }
    }

    /// Returns the underlying [`CancellationToken`] and [`TaskTrackerToken`], consuming self.
    pub fn split(self) -> (CancellationToken, TaskTrackerToken) {
        (self.shutdown_notifyer, self._task_tracker_token)
    }

    /// Returns a future that resolves only once a shutdown has been requested.
    pub fn wait(&self) -> WaitForCancellationFuture<'_> {
        self.shutdown_notifyer.cancelled()
    }

    /// Returns true if a shutdown has been requested
    pub fn is_shutdown(&self) -> bool {
        self.shutdown_notifyer.is_cancelled()
    }

    /// Ask all tasks to quit
    pub fn ask_shutdown(&self) {
        self.shutdown_notifyer.cancel();
    }
}
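The controller/token pattern above can be sketched with plain std threads instead of tokio: a shared `AtomicBool` plays the role of the `CancellationToken`, and joining the spawned handles plays the role of the `TaskTracker`. This is only an analogy to illustrate the lifecycle, not the module's actual implementation; `run_until_shutdown` is a made-up name for illustration.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

/// Spawns `workers` threads that each wait on a shared cancellation flag,
/// then requests shutdown and joins them all. Returns how many quit cleanly.
pub fn run_until_shutdown(workers: usize) -> usize {
    // Plays the CancellationToken role held by the ShutdownController.
    let shutdown = Arc::new(AtomicBool::new(false));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            // Each clone plays the ShutdownToken role handed to a task.
            let token = Arc::clone(&shutdown);
            thread::spawn(move || {
                // Equivalent of `token.wait().await`: block until cancellation.
                while !token.load(Ordering::Acquire) {
                    thread::sleep(Duration::from_millis(5));
                }
            })
        })
        .collect();
    shutdown.store(true, Ordering::Release); // ask_shutdown()
    // wait_all_task_shutdown(): join every tracked task.
    handles.into_iter().filter_map(|h| h.join().ok()).count()
}

fn main() {
    assert_eq!(run_until_shutdown(3), 3);
    println!("all tasks shut down");
}
```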
@ -1,76 +0,0 @@
//! This module provides [`parse_duration`], which parses durations in reaction's format (e.g. `6h`, `3 days`),
//!
//! as used in these reaction core settings:
//! - [Filters' `retryperiod`](https://reaction.ppom.me/reference.html#retryperiod)
//! - [Actions' `after`](https://reaction.ppom.me/reference.html#after).

use std::time::Duration;

/// Parses the &str argument as a Duration.
/// Returns Ok(Duration) if successful, or Err(String).
///
/// The format is defined as follows: `<integer> <unit>`
/// - whitespace between the integer and unit is optional
/// - integer must be positive (>= 0)
/// - unit can be one of:
///   - `ms` / `millis` / `millisecond` / `milliseconds`
///   - `s` / `sec` / `secs` / `second` / `seconds`
///   - `m` / `min` / `mins` / `minute` / `minutes`
///   - `h` / `hour` / `hours`
///   - `d` / `day` / `days`
pub fn parse_duration(d: &str) -> Result<Duration, String> {
    let d_trimmed = d.trim();
    let chars = d_trimmed.as_bytes();
    let mut value = 0;
    let mut i = 0;
    while i < chars.len() && chars[i].is_ascii_digit() {
        value = value * 10 + (chars[i] - b'0') as u32;
        i += 1;
    }
    if i == 0 {
        return Err(format!("duration '{}' doesn't start with digits", d));
    }
    let ok_as = |func: fn(u64) -> Duration| -> Result<_, String> { Ok(func(value as u64)) };

    match d_trimmed[i..].trim() {
        "ms" | "millis" | "millisecond" | "milliseconds" => ok_as(Duration::from_millis),
        "s" | "sec" | "secs" | "second" | "seconds" => ok_as(Duration::from_secs),
        "m" | "min" | "mins" | "minute" | "minutes" => ok_as(Duration::from_mins),
        "h" | "hour" | "hours" => ok_as(Duration::from_hours),
        "d" | "day" | "days" => ok_as(|d: u64| Duration::from_hours(d * 24)),
        unit => Err(format!(
            "unit {} not recognised. must be one of s/sec/seconds, m/min/minutes, h/hours, d/days",
            unit
        )),
    }
}

#[cfg(test)]
mod tests {

    use super::*;

    #[test]
    fn char_conversion() {
        assert_eq!(b'9' - b'0', 9);
    }

    #[test]
    fn parse_duration_test() {
        assert_eq!(parse_duration("1s"), Ok(Duration::from_secs(1)));
        assert_eq!(parse_duration("12s"), Ok(Duration::from_secs(12)));
        assert_eq!(parse_duration("  12 secs "), Ok(Duration::from_secs(12)));
        assert_eq!(parse_duration("2m"), Ok(Duration::from_mins(2)));
        assert_eq!(parse_duration("6 hours"), Ok(Duration::from_hours(6)));
        assert_eq!(parse_duration("1d"), Ok(Duration::from_hours(1 * 24)));
        assert_eq!(parse_duration("365d"), Ok(Duration::from_hours(365 * 24)));

        assert!(parse_duration("d 3").is_err());
        assert!(parse_duration("d3").is_err());
        assert!(parse_duration("3da").is_err());
        assert!(parse_duration("3_days").is_err());
        assert!(parse_duration("_3d").is_err());
        assert!(parse_duration("3 3d").is_err());
        assert!(parse_duration("3.3d").is_err());
    }
}
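The digit-accumulation loop above can be exercised standalone. The sketch below mirrors it but sticks to the stable `Duration::from_secs` constructor (the `from_mins`/`from_hours` constructors used by the module may need a recent toolchain); `parse_duration_simple` is an illustrative name, not the crate's API.

```rust
use std::time::Duration;

/// Minimal re-implementation of the parsing loop above: accumulate leading
/// ASCII digits, then map the remaining unit word to seconds-per-unit.
fn parse_duration_simple(d: &str) -> Result<Duration, String> {
    let d = d.trim();
    let bytes = d.as_bytes();
    let mut value: u64 = 0;
    let mut i = 0;
    // '7' - '0' == 7, so each digit shifts the current value one decimal
    // place and adds itself.
    while i < bytes.len() && bytes[i].is_ascii_digit() {
        value = value * 10 + (bytes[i] - b'0') as u64;
        i += 1;
    }
    if i == 0 {
        return Err(format!("duration '{d}' doesn't start with digits"));
    }
    let secs_per_unit = match d[i..].trim() {
        "s" | "sec" | "secs" | "second" | "seconds" => 1,
        "m" | "min" | "mins" | "minute" | "minutes" => 60,
        "h" | "hour" | "hours" => 3600,
        "d" | "day" | "days" => 86400,
        unit => return Err(format!("unit '{unit}' not recognised")),
    };
    Ok(Duration::from_secs(value * secs_per_unit))
}

fn main() {
    assert_eq!(parse_duration_simple("6h"), Ok(Duration::from_secs(21600)));
    assert_eq!(parse_duration_simple(" 12 secs "), Ok(Duration::from_secs(12)));
    assert!(parse_duration_simple("3.3d").is_err());
    println!("ok");
}
```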
247 release.py
@ -1,11 +1,10 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: with ps; [ requests ])" -p debian-devscripts git minisign docker cargo-deb
import argparse
#!nix-shell -i python3 -p "python3.withPackages (ps: with ps; [ requests ])" -p debian-devscripts git minisign cargo-cross rustup
import http.client
import json
import os
import shutil
import subprocess
import shutil
import sys
import tempfile


@ -20,25 +19,13 @@ def run_command(args, **kwargs):


def main():
    # CLI arguments
    parser = argparse.ArgumentParser(description="create a reaction release")
    parser.add_argument(
        "-p",
        "--publish",
        action="store_true",
        help="publish a release. else build only",
    )
    args = parser.parse_args()

    root_dir = os.getcwd()

    # Git tag
    cmd = run_command(
        ["git", "tag", "--sort=-creatordate"], capture_output=True, text=True
        ["git", "tag", "--sort=v:refname"], capture_output=True, text=True
    )
    tag = ""
    try:
        tag = cmd.stdout.strip().split("\n")[0]
        tag = cmd.stdout.strip().split("\n")[-1]
    except Exception:
        pass
    if tag == "":

@ -46,53 +33,36 @@ def main():
        sys.exit(1)

    # Ask user
    if (
        args.publish
        and input(
            f"We will create a release for tag {tag}. Do you want to continue? (y/n) "
        )
        != "y"
    ):
        print("exiting.")
        sys.exit(1)
    # if input(f"We will create a release for tag {tag}. Do you want to continue? (y/n) ") != "y":
    #     print("exiting.")
    #     sys.exit(1)

    # Git push
    # run_command(["git", "push", "--tags"])

    # Minisign password
    cmd = subprocess.run(["rbw", "get", "minisign"], capture_output=True, text=True)
    minisign_password = cmd.stdout

    if args.publish:
        # Git push
        run_command(["git", "push", "--tags"])

        # Create directory
        run_command(
            [
                "ssh",
                "akesi",
                # "-J", "pica01",
                "mkdir",
                "-p",
                f"/var/www/static/reaction/releases/{tag}/",
            ]
        )
    else:
        # Prepare directory for tarball and deb file.
        # We must do a `cargo clean` before each build,
        # So we have to move them out of `target/`
        local_dir = os.path.join(root_dir, "local")
        try:
            os.mkdir(local_dir)
        except FileExistsError:
            pass
    # Create directory
    run_command(
        [
            "ssh",
            "akesi",
            # "-J", "pica01",
            "mkdir",
            "-p",
            f"/var/www/static/reaction/releases/{tag}/",
        ]
    )

    architectures = {
        "x86_64-unknown-linux-gnu": "amd64",
        # I would like to build for those targets instead:
        # "x86_64-unknown-linux-musl": "amd64",
        # "aarch64-unknown-linux-musl": "arm64",
        # "arm-unknown-linux-gnueabihf": "armhf",
        "x86_64-unknown-linux-musl": "amd64",
        "aarch64-unknown-linux-musl": "arm64",
    }

    root_dir = os.getcwd()

    all_files = []

    instructions = [

@ -102,8 +72,8 @@ def main():

You'll need to install minisign to check the authenticity of the package.

After installing reaction, create your configuration file(s) in JSON, YAML or JSONnet in the
`/etc/reaction/` directory.
After installing reaction, create your configuration file at
`/etc/reaction.json`, `/etc/reaction.jsonnet` or `/etc/reaction.yml`.
See <https://reaction.ppom.me> for documentation.

Reload systemd:

@ -113,67 +83,44 @@ $ sudo systemctl daemon-reload

Then enable and start reaction with this command
```bash
# write first your configuration file(s) in /etc/reaction/
$ sudo systemctl enable --now reaction.service
# replace `reaction.jsonnet` with the name of your configuration file in /etc/
$ sudo systemctl enable --now reaction@reaction.jsonnet.service
```
""".strip(),
    ]

    for architecture_rs, architecture_pretty in architectures.items():
    for architecture in architectures.keys():
        # Cargo clean
        # run_command(["cargo", "clean"])
        run_command(["cargo", "clean"])

        # Build docker image
        run_command(["docker", "pull", "rust:bookworm"])
        run_command(["docker", "build", "-t", "rust:reaction", "."])

        binaries = [
            # Binaries
            "reaction",
            "reaction-plugin-virtual",
            "reaction-plugin-ipset",
        ]

        # Build
        # Install toolchain
        run_command(
            [
                "docker",
                "run",
                "--rm",
                "-u", str(os.getuid()),
                "-v", ".:/reaction",
                "rust:reaction",
                "sh", "-c",
                " && ".join([
                    f"cargo build --release --target {architecture_rs} --package {binary}"
                    for binary in binaries
                ])
                "rustup",
                "toolchain",
                "install",
                f"stable-{architecture}",
                "--force-non-host",  # I know, I know!
                "--profile",
                "minimal",
            ]
        )

        # Build .deb
        debs = [
            "reaction",
            "reaction-plugin-ipset",
        ]
        for deb in debs:
            cmd = run_command(
                [
                    "cargo-deb",
                    "--target", architecture_rs,
                    "--package", deb,
                    "--no-build",
                    "--no-strip"
                ]
            )
        # Build
        run_command(["cross", "build", "--release", "--target", architecture])

        deb_dir = os.path.join("./target", architecture_rs, "debian")
        deb_names = [f for f in os.listdir(deb_dir) if f.endswith(".deb")]
        deb_paths = [os.path.join(deb_dir, deb_name) for deb_name in deb_names]
        # Build .deb
        cmd = run_command(
            ["cargo-deb", f"--target={architecture}", "--no-build", "--no-strip"]
        )

        deb_dir = os.path.join("./target", architecture, "debian")
        deb_name = [f for f in os.listdir(deb_dir) if f.endswith(".deb")][0]
        deb_path = os.path.join(deb_dir, deb_name)

        # Archive
        files_path = os.path.join("./target", architecture_rs, "release")
        pkg_name = f"reaction-{tag}-{architecture_pretty}"
        files_path = os.path.join("./target", architecture, "release")
        pkg_name = f"reaction-{tag}-{architectures[architecture]}"
        tar_name = f"{pkg_name}.tar.gz"
        tar_path = os.path.join(files_path, tar_name)


@ -183,7 +130,11 @@ $ sudo systemctl enable --now reaction.service
        except FileExistsError:
            pass

        files = binaries + [
        files = [
            # Binaries
            "reaction",
            "nft46",
            "ip46tables",
            # Shell completion
            "reaction.bash",
            "reaction.fish",

@ -194,7 +145,6 @@ $ sudo systemctl enable --now reaction.service
            "reaction-show.1",
            "reaction-start.1",
            "reaction-test-regex.1",
            "reaction-test-config.1",
        ]
        for file in files:
            shutil.copy(file, pkg_name)

@ -211,86 +161,56 @@ $ sudo systemctl enable --now reaction.service

        # Sign
        run_command(
            ["minisign", "-Sm", tar_path] + deb_paths,
            text=True,
            input=minisign_password,
            ["minisign", "-Sm", deb_path, tar_path], text=True, input=minisign_password
        )
        deb_sig_paths = [f"{deb_path}.minisig" for deb_path in deb_paths]
        deb_sig_names = [f"{deb_name}.minisig" for deb_name in deb_names]
        deb_sig = f"{deb_path}.minisig"
        tar_sig = f"{tar_path}.minisig"

        if args.publish:
            # Push
            run_command(
                [
                    "rsync",
                    "-az",  # "-e", "ssh -J pica01",
                    tar_path,
                    tar_sig,
                ]
                + deb_paths
                + deb_sig_paths
                + [
                    f"akesi:/var/www/static/reaction/releases/{tag}/",
                ]
            )
        else:
            # Copy
            run_command(["cp", tar_path, tar_sig] + deb_paths + deb_sig_paths + [local_dir])

        all_files.extend([tar_path, tar_sig])
        all_files.extend(deb_paths)
        all_files.extend(deb_sig_paths)
        # Push
        run_command(
            [
                "rsync",
                "-az",  # "-e", "ssh -J pica01",
                tar_path,
                tar_sig,
                deb_path,
                deb_sig,
                f"akesi:/var/www/static/reaction/releases/{tag}/",
            ]
        )
        all_files.extend([tar_path, tar_sig, deb_path, deb_sig])

        # Instructions

        instructions.append(
            f"""
## Tar installation ({architecture_pretty} linux)
## Tar installation ({architectures[architecture]} linux)

```bash
curl -O https://static.ppom.me/reaction/releases/{tag}/{tar_name} \\
  -O https://static.ppom.me/reaction/releases/{tag}/{tar_name}.minisig \\
  && minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {tar_name} \\
  && rm {tar_name}.minisig \\
  && tar xvf {tar_name} \\
  && cd {pkg_name} \\
  && cd {tar_name} \\
  && sudo make install
```

If you want to install the ipset plugin as well:
```bash
sudo apt install -y libipset-dev && sudo make install-ipset
```
""".strip()
""".strip()
        )

        instructions.append(
            f"""
## Debian installation ({architecture_pretty} linux)
## Debian installation ({architectures[architecture]} linux)

```bash
curl \\
{"\n".join([
    f"  -O https://static.ppom.me/reaction/releases/{tag}/{deb_name} \\"
    for deb_name in deb_names + deb_sig_names
])}
{"\n".join([
    f"  && minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {deb_name} \\"
    for deb_name in deb_names
])}
  && rm {" ".join(deb_sig_names)} \\
  && sudo apt install {" ".join([f"./{deb_name}" for deb_name in deb_names])}
curl -O https://static.ppom.me/reaction/releases/{tag}/{deb_name} \\
  -O https://static.ppom.me/reaction/releases/{tag}/{deb_name}.minisig \\
  && minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {deb_name} \\
  && rm {deb_name}.minisig \\
  && sudo apt install ./{deb_name}
```

*You can also use [this third-party package repository](https://packages.azlux.fr).*
""".strip()
""".strip()
        )

    if not args.publish:
        print("\n\n".join(instructions))
        return

    # Release
    cmd = run_command(
        ["rbw", "get", "framagit.org", "token"], capture_output=True, text=True

@ -348,11 +268,10 @@ curl \\
    conn = http.client.HTTPSConnection("framagit.org")
    conn.request("POST", "/api/v4/projects/90566/releases", body=body, headers=headers)
    response = conn.getresponse()
    body = json.loads(response.read())

    if response.status != 201:
    if response.status != 200:
        print(
            f"sending message failed: status: {response.status}, reason: {response.reason}, message: {body.message}"
            f"sending message failed: status: {response.status}, reason: {response.reason}"
        )
        sys.exit(1)
14 shell.nix
@ -1,14 +0,0 @@
# This shell.nix for NixOS users is only needed when building reaction-plugin-ipset
with import <nixpkgs> {};
pkgs.mkShell {
  name = "libipset";
  buildInputs = [
    ipset
    nftables
    clang
  ];
  src = null;
  shellHook = ''
    export LIBCLANG_PATH="$(clang -print-file-name=libclang.so)"
  '';
}
40 src/cli.rs
@ -25,7 +25,7 @@ pub struct Cli {
pub enum SubCommand {
    /// Start reaction daemon
    Start {
        /// configuration file in json, jsonnet or yaml format, or directory containing those files. required.
        /// configuration file in json, jsonnet or yaml format. required.
        #[clap(short = 'c', long)]
        config: PathBuf,

@ -47,7 +47,7 @@ pub enum SubCommand {
        #[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
        socket: PathBuf,

        /// how to format output
        /// how to format output: json or yaml.
        #[clap(short = 'f', long, default_value_t = Format::YAML)]
        format: Format,

@ -70,7 +70,7 @@ Then prints the flushed matches and actions."
        #[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
        socket: PathBuf,

        /// how to format output
        /// how to format output: json or yaml.
        #[clap(short = 'f', long, default_value_t = Format::YAML)]
        format: Format,

@ -83,24 +83,6 @@ Then prints the flushed matches and actions."
        patterns: Vec<(String, String)>,
    },

    /// Trigger a target in reaction (e.g. ban)
    #[command(
        long_about = "Trigger actions and remove currently active matches for the specified PATTERNS in the specified STREAM.FILTER. (e.g. ban)"
    )]
    Trigger {
        /// path to the client-daemon communication socket
        #[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
        socket: PathBuf,

        /// STREAM.FILTER to trigger
        #[clap(value_name = "STREAM.FILTER")]
        limit: String,

        /// PATTERNs to trigger on (e.g. ip=1.2.3.4)
        #[clap(value_parser = parse_named_regex, value_name = "NAME=PATTERN")]
        patterns: Vec<(String, String)>,
    },

    /// Test a regex
    #[command(
        name = "test-regex",

@ -108,7 +90,7 @@ Then prints the flushed matches and actions."
Giving a configuration file permits to use its patterns in REGEX."
    )]
    TestRegex {
        /// configuration file in json, jsonnet or yaml format, or directory containing those files. required.
        /// configuration file in json, jsonnet or yaml format. required.
        #[clap(short = 'c', long)]
        config: PathBuf,

@ -120,20 +102,6 @@ Giving a configuration file permits to use its patterns in REGEX."
        #[clap(value_name = "LINE")]
        line: Option<String>,
    },
    /// Test your configuration
    TestConfig {
        /// either a configuration file in json, jsonnet or yaml format, or a directory containing those files. required.
        #[clap(short = 'c', long)]
        config: PathBuf,

        /// how to format output
        #[clap(short = 'f', long, default_value_t = Format::YAML)]
        format: Format,

        /// whether to output additional information
        #[clap(short = 'v', long, default_value_t = false)]
        verbose: bool,
    },
}

// Enums
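The `Trigger` subcommand's `NAME=PATTERN` values go through `parse_named_regex`, whose body is not shown in this diff. Below is a hedged sketch of what such a value parser could look like: the split-on-first-`=` rule is an assumption, and the real parser presumably also validates the regex; `parse_named_pattern` is a made-up name.

```rust
/// Hypothetical NAME=PATTERN parser: split on the first '=' only,
/// so the pattern itself may contain '=' characters.
fn parse_named_pattern(s: &str) -> Result<(String, String), String> {
    match s.split_once('=') {
        Some((name, pattern)) if !name.is_empty() => {
            Ok((name.to_string(), pattern.to_string()))
        }
        _ => Err(format!("expected NAME=PATTERN, got '{s}'")),
    }
}

fn main() {
    assert_eq!(
        parse_named_pattern("ip=1.2.3.4"),
        Ok(("ip".to_string(), "1.2.3.4".to_string()))
    );
    assert!(parse_named_pattern("noequals").is_err());
    println!("ok");
}
```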
@ -1,7 +1,5 @@
mod request;
mod test_config;
mod show_flush;
mod test_regex;

pub use request::request;
pub use test_config::test_config;
pub use show_flush::request;
pub use test_regex::test_regex;
@ -78,7 +78,6 @@ pub async fn request(
        DaemonResponse::Err(err) => Err(format!(
            "failed to communicate to daemon: error response: {err}"
        )),
        DaemonResponse::Ok(_) => Ok(()),
    }?;
    Ok(())
}
@ -1,42 +0,0 @@
use std::{error::Error, path::PathBuf};

use crate::{cli::Format, concepts::Config};

pub fn test_config(
    config_path: PathBuf,
    format: Format,
    verbose: bool,
) -> Result<(), Box<dyn Error + Send + Sync>> {
    let (mut cfg, cfg_files) = Config::from_path_raw(&config_path)?;
    if verbose {
        if config_path.is_dir() {
            println!(
                "Loaded the configuration from the following files in the directory {} in this order:",
                config_path.display()
            );
            println!("{}\n", cfg_files.join("\n"));
        } else {
            println!(
                "Loaded the configuration from the file {}",
                config_path.display()
            );
        }
    }

    // first serialize the raw config (before regexes are transformed and their original version
    // discarded)
    let cfg_str = match format {
        Format::JSON => serde_json::to_string_pretty(&cfg).map_err(|e| e.to_string()),
        Format::YAML => serde_yaml::to_string(&cfg).map_err(|e| e.to_string()),
    }
    .map_err(|e| format!("Error serializing back the configuration: {e}"))?;

    // then try to finalize the configuration: will raise an error on an invalid config
    cfg.setup()
        .map_err(|e| format!("Configuration file {}: {}", config_path.display(), e))?;

    // only print the serialized config if everything went well
    println!("{cfg_str}");

    Ok(())
}
@ -15,11 +15,11 @@ pub fn test_regex(
    mut regex: String,
    line: Option<String>,
) -> Result<(), Box<dyn Error + Send + Sync>> {
    let config = Config::from_path(&config_path)?;
    let config: Config = Config::from_file(&config_path)?;

    // Code close to Filter::setup()
    let mut used_patterns: BTreeSet<Arc<Pattern>> = BTreeSet::new();
    for pattern in config.patterns.values() {
    for pattern in config.patterns().values() {
        if let Some(index) = regex.find(pattern.name_with_braces()) {
            // we already `find` it, so we must be able to `rfind` it
            #[allow(clippy::unwrap_used)]

@ -43,9 +43,9 @@ pub fn test_regex(
    let mut result = Vec::new();
    if !used_patterns.is_empty() {
        for pattern in used_patterns.iter() {
            if let Some(match_) = matches.name(&pattern.name) {
            if let Some(match_) = matches.name(pattern.name()) {
                result.push(match_.as_str().to_string());
                if pattern.is_ignore(match_.as_str()) {
                if !pattern.not_an_ignore(match_.as_str()) {
                    ignored = true;
                }
            }
@ -1,67 +1,59 @@
use std::{cmp::Ordering, collections::BTreeSet, fmt::Display, sync::Arc, time::Duration};
use std::{cmp::Ordering, collections::BTreeSet, fmt::Display, sync::Arc};

use reaction_plugin::{ActionConfig, time::parse_duration};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use chrono::TimeDelta;

use serde::Deserialize;
use tokio::process::Command;

use super::{Match, Pattern, PatternType};
use super::parse_duration;
use super::{Match, Pattern};

#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[derive(Clone, Debug, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct Action {
    #[serde(default)]
    pub cmd: Vec<String>,
    cmd: Vec<String>,

    // TODO one shot time deserialization
    #[serde(skip_serializing_if = "Option::is_none")]
    pub after: Option<String>,
    after: Option<String>,
    #[serde(skip)]
    pub after_duration: Option<Duration>,
    after_duration: Option<TimeDelta>,

    #[serde(
        rename = "onexit",
        default = "set_false",
        skip_serializing_if = "is_false"
    )]
    pub on_exit: bool,
    #[serde(default = "set_false", skip_serializing_if = "is_false")]
    pub oneshot: bool,

    #[serde(default = "set_false", skip_serializing_if = "is_false")]
    pub ipv4only: bool,
    #[serde(default = "set_false", skip_serializing_if = "is_false")]
    pub ipv6only: bool,
    #[serde(rename = "onexit", default = "set_false")]
    on_exit: bool,

    #[serde(skip)]
    pub patterns: Arc<BTreeSet<Arc<Pattern>>>,
    patterns: Arc<BTreeSet<Arc<Pattern>>>,
    #[serde(skip)]
    pub name: String,
    name: String,
    #[serde(skip)]
    pub filter_name: String,
    filter_name: String,
    #[serde(skip)]
    pub stream_name: String,

    // Plugin-specific
    #[serde(default, rename = "type", skip_serializing_if = "Option::is_none")]
    pub action_type: Option<String>,
    #[serde(default, skip_serializing_if = "Value::is_null")]
    pub options: Value,
    stream_name: String,
}

fn set_false() -> bool {
    false
}

fn is_false(b: &bool) -> bool {
    !*b
}

impl Action {
    pub fn is_plugin(&self) -> bool {
        self.action_type
            .as_ref()
            .is_some_and(|action_type| action_type != "cmd")
    pub fn name(&self) -> &str {
        &self.name
    }

    pub fn stream_name(&self) -> &str {
        &self.stream_name
    }

    pub fn filter_name(&self) -> &str {
        &self.filter_name
    }

    pub fn after_duration(&self) -> Option<TimeDelta> {
        self.after_duration
    }

    pub fn on_exit(&self) -> bool {
        self.on_exit
    }

    pub fn setup(

@ -94,18 +86,11 @@ impl Action {
            return Err("character '.' is not allowed in filter name".into());
        }

        if !self.is_plugin() {
            if self.cmd.is_empty() {
                return Err("cmd is empty".into());
            }
            if self.cmd[0].is_empty() {
                return Err("cmd's first item is empty".into());
            }
            if !self.options.is_null() {
                return Err("can't define options without a plugin type".into());
            }
        } else if !self.cmd.is_empty() {
            return Err("can't define a cmd and a plugin type".into());
        if self.cmd.is_empty() {
            return Err("cmd is empty".into());
        }
        if self.cmd[0].is_empty() {
            return Err("cmd's first item is empty".into());
        }

        if let Some(after) = &self.after {

@ -118,22 +103,6 @@ impl Action {
            return Err("cannot have `onexit: true`, without an `after` directive".into());
        }

        if self.ipv4only && self.ipv6only {
            return Err("cannot have `ipv4only: true` and `ipv6only: true` in one action".into());
        }
        if self
            .patterns
            .iter()
            .all(|pattern| pattern.pattern_type() != PatternType::Ip)
        {
            if self.ipv4only {
                return Err("it makes no sense to have an action with `ipv4only: true` when no pattern of type ip is defined on the filter".into());
            }
            if self.ipv6only {
                return Err("it makes no sense to have an action with `ipv6only: true` when no pattern of type ip is defined on the filter".into());
            }
        }

        Ok(())
    }

@ -157,24 +126,6 @@ impl Action {
        cmd.args(&computed_command[1..]);
        cmd
    }

    pub fn to_action_config(&self) -> Result<ActionConfig, String> {
        Ok(ActionConfig {
            stream_name: self.stream_name.clone(),
            filter_name: self.filter_name.clone(),
            action_name: self.name.clone(),
            action_type: self
                .action_type
                .clone()
                .ok_or_else(|| format!("action {} doesn't load a plugin. this is a bug!", self))?,
            config: self.options.clone().into(),
            patterns: self
                .patterns
                .iter()
                .map(|pattern| pattern.name.clone())
                .collect(),
        })
    }
}

impl PartialEq for Action {

@ -206,62 +157,35 @@ impl Display for Action {
    }
}

#[cfg(test)]
impl Action {
    /// Test-only constructor designed to be easy to call
    #[allow(clippy::too_many_arguments)]
    pub fn new(
        cmd: Vec<&str>,
        after: Option<&str>,
        on_exit: bool,
        stream_name: &str,
        filter_name: &str,
        name: &str,
        config_patterns: &super::Patterns,
        ip_only: u8,
    ) -> Self {
        let mut action = Self {
            cmd: cmd.into_iter().map(|s| s.into()).collect(),
            after: after.map(|s| s.into()),
            on_exit,
            ipv4only: ip_only == 4,
            ipv6only: ip_only == 6,
|
||||
..Default::default()
|
||||
};
|
||||
action
|
||||
.setup(
|
||||
stream_name,
|
||||
filter_name,
|
||||
name,
|
||||
config_patterns
|
||||
.clone()
|
||||
.into_values()
|
||||
.collect::<BTreeSet<_>>()
|
||||
.into(),
|
||||
)
|
||||
.unwrap();
|
||||
action
|
||||
}
|
||||
}
|
||||
|
||||
#[allow(clippy::unwrap_used)]
|
||||
#[cfg(test)]
|
||||
pub mod tests {
|
||||
|
||||
use super::*;
|
||||
|
||||
pub fn ok_action() -> Action {
|
||||
fn default_action() -> Action {
|
||||
Action {
|
||||
cmd: vec!["command".into()],
|
||||
..Default::default()
|
||||
cmd: Vec::new(),
|
||||
name: "".into(),
|
||||
filter_name: "".into(),
|
||||
stream_name: "".into(),
|
||||
after: None,
|
||||
after_duration: None,
|
||||
on_exit: false,
|
||||
patterns: Arc::new(BTreeSet::default()),
|
||||
}
|
||||
}
|
||||
|
||||
pub fn ok_action() -> Action {
|
||||
let mut action = default_action();
|
||||
action.cmd = vec!["command".into()];
|
||||
action
|
||||
}
|
||||
|
||||
pub fn ok_action_with_after(d: String, name: &str) -> Action {
|
||||
let mut action = Action {
|
||||
cmd: vec!["command".into()],
|
||||
after: Some(d),
|
||||
..Default::default()
|
||||
};
|
||||
let mut action = default_action();
|
||||
action.cmd = vec!["command".into()];
|
||||
action.after = Some(d);
|
||||
action
|
||||
.setup("", "", name, Arc::new(BTreeSet::default()))
|
||||
.unwrap();
|
||||
|
|
@ -275,16 +199,16 @@ pub mod tests {
|
|||
let patterns = Arc::new(BTreeSet::default());
|
||||
|
||||
// No command
|
||||
action = Action::default();
|
||||
action = default_action();
|
||||
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
|
||||
|
||||
// No command
|
||||
action = Action::default();
|
||||
action = default_action();
|
||||
action.cmd = vec!["".into()];
|
||||
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
|
||||
|
||||
// No command
|
||||
action = Action::default();
|
||||
action = default_action();
|
||||
action.cmd = vec!["".into(), "arg1".into()];
|
||||
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
|
||||
|
||||
|
|
|
|||
|
|
@@ -1,5 +1,5 @@
use std::{
collections::{BTreeMap, btree_map::Entry},
collections::BTreeMap,
fs::File,
io,
path::Path,

@@ -7,36 +7,32 @@ use std::{
sync::Arc,
};

use serde::{Deserialize, Serialize};
use serde::Deserialize;
use thiserror::Error;
use tracing::{debug, error, info, warn};
use tracing::{error, info};

use super::{Pattern, Plugin, Stream, merge_attrs};
use super::{Filter, Pattern, Stream};

pub type Patterns = BTreeMap<String, Arc<Pattern>>;

#[derive(Clone, Debug, Deserialize, Serialize)]
#[derive(Clone, Debug, Deserialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Config {
patterns: Patterns,

streams: BTreeMap<String, Stream>,

#[serde(default = "num_cpus::get")]
pub concurrency: usize,
#[serde(default = "dot", skip_serializing_if = "String::is_empty")]
pub state_directory: String,

#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub plugins: BTreeMap<String, Plugin>,
concurrency: usize,

#[serde(default)]
pub patterns: Patterns,

#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub start: Vec<Vec<String>>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub stop: Vec<Vec<String>>,

start: Vec<Vec<String>>,
#[serde(default)]
pub streams: BTreeMap<String, Stream>,
stop: Vec<Vec<String>>,

#[serde(default = "dot")]
state_directory: String,

// This field only serve the purpose of having a top-level place for saving YAML variables
#[serde(default, skip_serializing, rename = "definitions")]

@@ -48,66 +44,33 @@ fn dot() -> String {
}

impl Config {
fn merge(&mut self, mut other: Config) -> Result<(), String> {
for (key, plugin) in other.plugins.into_iter() {
match self.plugins.entry(key) {
Entry::Vacant(e) => {
e.insert(plugin);
}
Entry::Occupied(e) => {
return Err(format!(
"plugin {} is already defined. plugin definitions can't be spread accross multiple files.",
e.key()
));
}
}
}
pub fn streams(&self) -> &BTreeMap<String, Stream> {
&self.streams
}

for (key, pattern) in other.patterns.into_iter() {
match self.patterns.entry(key) {
Entry::Vacant(e) => {
e.insert(pattern);
}
Entry::Occupied(e) => {
return Err(format!(
"pattern {} is already defined. pattern definitions can't be spread accross multiple files.",
e.key()
));
}
}
}
pub fn patterns(&self) -> &Patterns {
&self.patterns
}

for (key, stream) in other.streams.into_iter() {
match self.streams.entry(key) {
Entry::Vacant(e) => {
e.insert(stream);
}
Entry::Occupied(mut e) => {
e.get_mut()
.merge(stream)
.map_err(|err| format!("Stream {}: {}", e.key(), err))?;
}
}
}
pub fn concurrency(&self) -> usize {
self.concurrency
}

self.start.append(&mut other.start);
self.stop.append(&mut other.stop);
pub fn state_directory(&self) -> &str {
&self.state_directory
}

self.state_directory = merge_attrs(
self.state_directory.clone(),
other.state_directory,
".".into(),
"state_directory",
)?;
pub fn filters(&self) -> Vec<&Filter> {
self.streams
.values()
.flat_map(|stream| stream.filters().values())
.collect()
}

self.concurrency = merge_attrs(
self.concurrency,
other.concurrency,
num_cpus::get(),
"concurrency",
)?;

Ok(())
pub fn get_filter(&self, name: &(String, String)) -> Option<&Filter> {
self.streams
.get(&name.0)
.and_then(|stream| stream.get_filter(&name.1))
}

pub fn setup(&mut self) -> Result<(), String> {

@@ -118,14 +81,6 @@ impl Config {
// Nullify this useless field
self._definitions = serde_json::Value::Null;

for (key, value) in &mut self.plugins {
value.setup(key)?;
}

if self.patterns.is_empty() {
return Err("no patterns configured".into());
}

let mut new_patterns = BTreeMap::new();
for (key, value) in &self.patterns {
let mut value = value.as_ref().clone();

@@ -152,142 +107,18 @@ impl Config {
run_commands(&self.stop, "stop")
}

pub fn from_path(path: &Path) -> Result<Self, String> {
match Self::from_path_raw(path) {
Ok((mut cfg, files)) => {
cfg.setup().map_err(ConfigError::BadConfig).map_err(|e| {
if path.is_dir() {
format!(
"{e}\nWhile reading config from {}. List of files read, in that order:\n{}",
path.display(),
files.join("\n"),
)
} else {
format!("{e}\nWhile reading config from {}.", path.display())
}
})?;
Ok(cfg)
}
Err(e) => Err(e),
}
pub fn from_file(path: &Path) -> Result<Self, String> {
Config::_from_file(path)
.map_err(|err| format!("Configuration file {}: {}", path.display(), err))
}

pub fn from_path_raw(path: &Path) -> Result<(Self, Vec<String>), String> {
match std::fs::metadata(path) {
Err(e) => Err(format!("Error accessing {}: {e}", path.to_string_lossy())),
Ok(m) => {
if m.is_file() {
Self::_from_file_raw(path)
.map(|cfg| {
let fname = path
.file_name()
.map(|s| s.to_string_lossy().to_string())
.unwrap_or("".to_string());
(cfg, vec![fname])
})
.map_err(|e| format!("Configuration file {}: {}", path.display(), e))
} else if m.is_dir() {
Self::_from_dir_raw(path)
} else {
Err(format!(
"Invalid file type for {}: not a file nor a directory",
path.to_string_lossy()
))
}
}
}
}
fn _from_file(path: &Path) -> Result<Self, ConfigError> {
let extension = path
.extension()
.and_then(|ex| ex.to_str())
.ok_or(ConfigError::Extension("no file extension".into()))?;

fn _from_dir_raw(path: &Path) -> Result<(Self, Vec<String>), String> {
let dir = std::fs::read_dir(path)
.map_err(|e| format!("Error accessing directory {}: {e}", path.display()))?;
// sorts files by name
let mut cfg_files = BTreeMap::new();
for f in dir {
let f =
f.map_err(|e| format!("Error while reading directory {}: {e}", path.display()))?;

let fname = f.file_name();
let fname = match fname.to_str() {
Some(fname) => fname,
None => {
warn!(
"Ignoring file {} in {}",
f.file_name().to_string_lossy(),
path.display()
);
continue;
}
};

if fname.starts_with(".") || fname.starts_with("_") {
// silently ignore hidden file
debug!("Ignoring hidden file {fname} in {}", path.display());
continue;
}

let fpath = f.path();
let ext = match fpath.extension() {
None => {
// silently ignore files without extensions (may be directory)
debug!(
"Ignoring file without extension {fname} in {}",
path.display()
);
continue;
}
Some(ext) => {
if let Some(ext) = ext.to_str() {
ext
} else {
warn!(
"Ignoring file {} in {} with unexpected extension",
fname,
path.display()
);
continue;
}
}
};

match Self::_extension_to_format(ext) {
Ok(fmt) => cfg_files.insert(fname.to_string(), (fpath, fmt)),
Err(_) => {
// silently ignore files without an expected extension
debug!(
"Ignoring file with non recognized extension {fname} in {}",
path.display()
);
continue;
}
};
}

let mut cfg: Option<Self> = None;
for (fname, (fpath, fmt)) in &cfg_files {
let cfg_part = Self::_load_file(fpath, *fmt)
.map_err(|e| format!("While reading {fname} in {}: {e}", path.display()))?;

if let Some(mut cfg_agg) = cfg.take() {
cfg_agg.merge(cfg_part)?;
cfg = Some(cfg_agg);
} else {
cfg = Some(cfg_part)
}
}

if let Some(cfg) = cfg {
Ok((cfg, cfg_files.into_keys().collect()))
} else {
Err(format!(
"No valid configuration files found in {}",
path.display()
))
}
}

fn _extension_to_format(extension: &str) -> Result<Format, ConfigError> {
match extension {
let format = match extension {
"yaml" | "yml" => Ok(Format::Yaml),
"json" => Ok(Format::Json),
"jsonnet" => Ok(Format::Jsonnet),

@@ -295,31 +126,29 @@ impl Config {
"extension {} is not recognized",
extension
))),
}
}
}?;

fn _load_file(path: &Path, format: Format) -> Result<Self, ConfigError> {
let cfg: Self = match format {
let mut config: Config = match format {
Format::Json => serde_json::from_reader(File::open(path)?)?,
Format::Yaml => serde_yaml::from_reader(File::open(path)?)?,
Format::Jsonnet => serde_json::from_str(&jsonnet::from_path(path)?)?,
};
Ok(cfg)

config.setup().map_err(ConfigError::BadConfig)?;

Ok(config)
}

fn _from_file_raw(path: &Path) -> Result<Self, ConfigError> {
let extension = path
.extension()
.and_then(|ex| ex.to_str())
.ok_or(ConfigError::Extension("no file extension".into()))?;

let format = Self::_extension_to_format(extension)?;

Self::_load_file(path, format)
#[cfg(test)]
pub fn from_streams(streams: BTreeMap<String, Stream>, dir: &str) -> Self {
Self {
streams,
state_directory: dir.to_string(),
..Default::default()
}
}
}

#[derive(Clone, Copy)]
enum Format {
Yaml,
Json,

@@ -345,7 +174,7 @@ enum ConfigError {
mod jsonnet {
use std::path::Path;

use jrsonnet_evaluator::{EvaluationState, FileImportResolver, error::LocError};
use jrsonnet_evaluator::{error::LocError, EvaluationState, FileImportResolver};

use super::ConfigError;

@@ -370,7 +199,6 @@ mod jsonnet {
}

fn run_commands(commands: &Vec<Vec<String>>, moment: &str) -> bool {
debug!("Running {moment} commands...");
let mut ok = true;
for command in commands {
info!("{} command: run {:?}\n", moment, command);

@@ -407,7 +235,6 @@ fn run_commands(commands: &Vec<Vec<String>>, moment: &str) -> bool {
}

#[cfg(test)]
#[allow(clippy::unwrap_used)]
mod tests {

use super::*;

@@ -417,396 +244,4 @@ mod tests {
let mut config = Config::default();
assert!(config.setup().is_err());
}

fn parse_config_json(cfg: &str) -> Config {
const DUMMY_PATTERNS: &str = r#"
"patterns": {
"zero": {
"regex": "0"
}
}"#;
const DUMMY_FILTERS: &str = r#"
"filters": {
"dummy": {
"regex": ["abc"],
"actions": {
"act": {
"cmd": ["echo", "1"]
}
}
}
}"#;
const DUMMY_STREAMS: &str = r#"
"streams": {
"dummy": {
"cmd": ["true"],
{{FILTERS}}
}
}
"#;
let cfg = cfg
.to_string()
.replace("{{STREAMS}}", DUMMY_STREAMS)
.replace("{{PATTERNS}}", DUMMY_PATTERNS)
.replace("{{FILTERS}}", DUMMY_FILTERS);

serde_json::from_str(&cfg)
.inspect_err(|_| {
eprintln!("{cfg}");
})
.unwrap()
}

#[test]
fn config_without_stream() {
let mut cfg1 = parse_config_json(
r#"{
{{PATTERNS}}
}"#,
);
let mut cfg2 = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {}
}"#,
);

let res = cfg1.setup();
assert!(res.is_err());
let res = cfg2.setup();
assert!(res.is_err());
}

#[test]
fn config_without_pattern() {
let mut cfg1 = parse_config_json(
r#"{
{{STREAMS}}
}"#,
);
let mut cfg2 = parse_config_json(
r#"{
"patterns": {},
{{STREAMS}}
}"#,
);

let res = cfg1.setup();
assert!(res.is_err());
let res = cfg2.setup();
assert!(res.is_err());
}

#[test]
fn merge_config_distinct_patterns() {
let mut cfg_org = parse_config_json(
r#"{
"patterns": {
"ip4": {
"regex": "ip4"
}
},
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"patterns": {
"ip6": {
"regex": "ip6"
}
}
}"#,
);

cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.patterns.contains_key("ip4"));
assert!(cfg_org.patterns.contains_key("ip6"));
assert_eq!(cfg_org.patterns.len(), 2);
assert!(cfg_org.streams.contains_key("dummy"));
assert_eq!(cfg_org.streams.len(), 1);
}

#[test]
fn merge_config_same_patterns() {
let mut cfg_org = parse_config_json(
r#"{
"patterns": {
"zero": {
"regex": "0"
}
},
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"patterns": {
"zero": {
"regex": "00"
}
}
}"#,
);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}

#[test]
fn merge_config_distinct_streams() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo1": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo2": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);

cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.patterns.contains_key("zero"));
assert_eq!(cfg_org.patterns.len(), 1);
assert!(cfg_org.streams.contains_key("echo1"));
assert!(cfg_org.streams.contains_key("echo2"));
assert_eq!(cfg_org.streams.len(), 2);
}

#[test]
fn merge_config_same_streams_distinct_filters() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
"cmd": ["echo"],
"filters": {
"f1": {
"regex": ["abc"],
"actions": {
"act": {
"cmd": ["echo", "1"]
}
}
}
}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["echo"],
"filters": {
"f2": {
"regex": ["abc"],
"actions": {
"act": {
"cmd": ["echo", "1"]
}
}
}
}
}
}
}"#,
);

cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.streams.contains_key("echo"));
assert_eq!(cfg_org.streams.len(), 1);

let filters = &cfg_org.streams.get("echo").unwrap().filters;
assert!(filters.contains_key("f1"));
assert!(filters.contains_key("f2"));
assert_eq!(filters.len(), 2);
}

#[test]
fn merge_config_same_streams_distinct_command() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["true"]
}
}
}"#,
);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}

#[test]
fn merge_config_same_streams_command_in_one() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["echo"]
}
}
}"#,
);

cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.streams.contains_key("echo"));
assert_eq!(cfg_org.streams.len(), 1);
let stream = cfg_org.streams.get("echo").unwrap();
assert_eq!(stream.cmd.len(), 1);
assert_eq!(stream.filters.len(), 1);
}

#[test]
fn merge_config_same_streams_same_filters() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}

#[test]
fn merge_config_same_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
"concurrency": 97,
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"concurrency": 97
}"#,
);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, 97);
}

#[test]
fn merge_config_distinct_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
"concurrency": 97,
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"concurrency": 96
}"#,
);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}

#[test]
fn merge_config_one_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
"concurrency": 97,
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(r#"{}"#);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, 97);
}

#[test]
fn merge_config_right_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"concurrency": 97
}"#,
);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, 97);
}

#[test]
fn merge_config_no_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(r#"{ }"#);

let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, num_cpus::get());
}
}
@ -4,65 +4,43 @@ use std::{
|
|||
fmt::Display,
|
||||
hash::Hash,
|
||||
sync::Arc,
|
||||
time::Duration,
|
||||
};
|
||||
|
||||
use reaction_plugin::time::parse_duration;
|
||||
use chrono::TimeDelta;
|
||||
use regex::Regex;
|
||||
use serde::{Deserialize, Serialize};
|
||||
use serde::Deserialize;
|
||||
use tracing::info;
|
||||
|
||||
use super::{Action, Match, Pattern, PatternType, Patterns};
|
||||
|
||||
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
|
||||
pub enum Duplicate {
|
||||
#[default]
|
||||
#[serde(rename = "extend")]
|
||||
Extend,
|
||||
#[serde(rename = "ignore")]
|
||||
Ignore,
|
||||
#[serde(rename = "rerun")]
|
||||
Rerun,
|
||||
}
|
||||
use super::parse_duration;
|
||||
use super::{Action, Match, Pattern, Patterns};
|
||||
|
||||
// Only names are serialized
|
||||
// Only computed fields are not deserialized
|
||||
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
|
||||
#[derive(Clone, Debug, Default, Deserialize)]
|
||||
#[serde(deny_unknown_fields)]
|
||||
pub struct Filter {
|
||||
actions: BTreeMap<String, Action>,
|
||||
#[serde(skip)]
|
||||
pub longuest_action_duration: Duration,
|
||||
#[serde(skip)]
|
||||
pub has_ip: bool,
|
||||
longuest_action_duration: TimeDelta,
|
||||
|
||||
pub regex: Vec<String>,
|
||||
regex: Vec<String>,
|
||||
#[serde(skip)]
|
||||
pub compiled_regex: Vec<Regex>,
|
||||
compiled_regex: Vec<Regex>,
|
||||
// We want patterns to be ordered
|
||||
// This is necessary when using matches which contain multiple patterns
|
||||
#[serde(skip)]
|
||||
pub patterns: Arc<BTreeSet<Arc<Pattern>>>,
|
||||
patterns: Arc<BTreeSet<Arc<Pattern>>>,
|
||||
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
pub retry: Option<u32>,
|
||||
#[serde(rename = "retryperiod", skip_serializing_if = "Option::is_none")]
|
||||
pub retry_period: Option<String>,
|
||||
retry: Option<u32>,
|
||||
#[serde(rename = "retryperiod")]
|
||||
retry_period: Option<String>,
|
||||
#[serde(skip)]
|
||||
pub retry_duration: Option<Duration>,
|
||||
|
||||
#[serde(default)]
|
||||
pub duplicate: Duplicate,
|
||||
|
||||
pub actions: BTreeMap<String, Action>,
|
||||
retry_duration: Option<TimeDelta>,
|
||||
|
||||
#[serde(skip)]
|
||||
pub name: String,
|
||||
name: String,
|
||||
#[serde(skip)]
|
||||
pub stream_name: String,
|
||||
// // Plugin-specific
|
||||
// #[serde(default, rename = "type")]
|
||||
// pub filter_type: Option<String>,
|
||||
// #[serde(default = "null_value")]
|
||||
// pub options: Value,
|
||||
stream_name: String,
|
||||
}
|
||||
|
||||
impl Filter {
|
||||
|
|
@ -84,11 +62,47 @@ impl Filter {
|
|||
Filter {
|
||||
stream_name: stream_name.into(),
|
||||
name: filter_name.into(),
|
||||
patterns: Arc::new(patterns.into_iter().map(Arc::new).collect()),
|
||||
patterns: Arc::new(patterns.into_iter().map(|p| Arc::new(p)).collect()),
|
||||
..Filter::default()
|
||||
}
|
||||
}
|
||||
|
||||
pub fn name(&self) -> &str {
|
||||
&self.name
|
||||
}
|
||||
|
||||
pub fn stream_name(&self) -> &str {
|
||||
&self.stream_name
|
||||
}
|
||||
|
||||
pub fn retry(&self) -> Option<u32> {
|
||||
self.retry
|
||||
}
|
||||
|
||||
pub fn retry_duration(&self) -> Option<TimeDelta> {
|
||||
self.retry_duration
|
||||
}
|
||||
|
||||
pub fn longuest_action_duration(&self) -> TimeDelta {
|
||||
self.longuest_action_duration
|
||||
}
|
||||
|
||||
pub fn max_time_before_outdated(&self) -> TimeDelta {
|
||||
if let Some(retry_duration) = self.retry_duration {
|
||||
self.longuest_action_duration + retry_duration
|
||||
} else {
|
||||
self.longuest_action_duration
|
||||
}
|
||||
}
|
||||
|
||||
pub fn actions(&self) -> &BTreeMap<String, Action> {
|
||||
&self.actions
|
||||
}
|
||||
|
||||
pub fn patterns(&self) -> &BTreeSet<Arc<Pattern>> {
|
||||
&self.patterns
|
||||
}
|
||||
|
||||
pub fn setup(
|
||||
&mut self,
|
||||
stream_name: &str,
|
||||
|
|
@ -120,13 +134,13 @@ impl Filter {
|
|||
}
|
||||
|
||||
if self.retry.is_some_and(|r| r < 2) {
|
||||
return Err("retry must be >= 2. Remove 'retry' and 'retryperiod' to trigger at the first occurence.".into());
|
||||
return Err("retry has been specified but is < 2".into());
|
||||
}
|
||||
|
||||
if let Some(retry_period) = &self.retry_period {
|
||||
self.retry_duration = Some(
|
||||
parse_duration(retry_period)
|
||||
.map_err(|err| format!("failed to parse retry period: {}", err))?,
|
||||
.map_err(|err| format!("failed to parse retry time: {}", err))?,
|
||||
);
|
||||
self.retry_period = None;
|
||||
}
|
||||
|
|
@ -136,7 +150,6 @@ impl Filter {
|
|||
}
|
||||
|
||||
let mut new_patterns = BTreeSet::new();
|
||||
let mut new_regex = Vec::new();
|
||||
let mut first = true;
|
||||
for regex in &self.regex {
|
||||
let mut regex_buf = regex.clone();
|
||||
|
|
@ -160,18 +173,17 @@ impl Filter {
|
|||
}
|
||||
} else if !first && new_patterns.contains(pattern) {
|
||||
return Err(format!(
|
||||
"pattern {} is present in the first regex but is not present in a following regex. all regexes should contain the same set of regexes",
|
||||
&pattern.name_with_braces()
|
||||
));
|
||||
"pattern {} is present in the first regex but is not present in a following regex. all regexes should contain the same set of regexes",
|
||||
&pattern.name_with_braces()
|
||||
));
|
||||
}
|
||||
regex_buf = regex_buf.replacen(pattern.name_with_braces(), &pattern.regex, 1);
|
||||
}
|
||||
new_regex.push(regex_buf.clone());
|
||||
let compiled = Regex::new(®ex_buf).map_err(|err| err.to_string())?;
|
||||
self.compiled_regex.push(compiled);
|
||||
first = false;
|
||||
}
|
||||
self.regex = new_regex;
|
||||
self.regex.clear();
|
||||
self.patterns = Arc::new(new_patterns);
|
||||
|
||||
if self.actions.is_empty() {
|
||||
|
|
@ -181,18 +193,12 @@ impl Filter {
|
|||
for (key, action) in &mut self.actions {
|
||||
action.setup(stream_name, name, key, self.patterns.clone())?;
|
||||
}
|
||||
self.has_ip = self
|
||||
.actions
|
||||
.values()
|
||||
.any(|action| action.ipv4only || action.ipv6only);
|
||||
|
||||
self.longuest_action_duration =
|
||||
self.actions
|
||||
.values()
|
||||
.fold(Duration::from_secs(0), |acc, v| {
|
||||
v.after_duration
|
||||
.map_or(acc, |v| if v > acc { v } else { acc })
|
||||
});
|
||||
self.actions.values().fold(TimeDelta::seconds(0), |acc, v| {
|
||||
v.after_duration()
|
||||
.map_or(acc, |v| if v > acc { v } else { acc })
|
||||
});
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
|
@ -203,128 +209,26 @@ impl Filter {
|
|||
            if !self.patterns.is_empty() {
                let mut result = Match::new();
                for pattern in self.patterns.as_ref() {
                    // if the pattern is in an optional part of the regex,
                    // there may be no captured group for it.
                    if let Some(match_) = matches.name(&pattern.name)
                        && !pattern.is_ignore(match_.as_str())
                    {
                        let mut match_ = match_.as_str().to_string();
                        pattern.normalize(&mut match_);
                        result.push(match_);
                    // if the pattern is in an optional part of the regex, there may be no
                    // captured group for it.
                    if let Some(match_) = matches.name(pattern.name()) {
                        if pattern.not_an_ignore(match_.as_str()) {
                            result.push(match_.as_str().to_string());
                        }
                    }
                }
                if result.len() == self.patterns.len() {
                    info!("{}: match {:?}", self, result);
                    return Some(result);
                }
            } else {
                return Some(vec![]);
                info!("{}: match []", self);
                return Some(vec![".".to_string()]);
            }
        }
    }
    None
}

    /// Test that the patterns map conforms to the filter's patterns.
    /// Then returns a corresponding [`Match`].
    pub fn get_match_from_patterns(
        &self,
        mut patterns: BTreeMap<Arc<Pattern>, String>,
    ) -> Result<Match, String> {
        // Check pattern length
        if patterns.len() != self.patterns.len() {
            return Err(format!(
                "{} patterns specified, while the {}.{} filter has {} pattern: ({})",
                patterns.len(),
                self.stream_name,
                self.name,
                self.patterns.len(),
                self.patterns
                    .iter()
                    .map(|pattern| pattern.name.clone())
                    .reduce(|acc, pattern| acc + ", " + &pattern)
                    .unwrap_or("".into()),
            ));
        }

        for (pattern, match_) in &mut patterns {
            if self.patterns.get(pattern).is_none() {
                return Err(format!(
                    "pattern {} is not present in the filter {}.{}",
                    pattern.name, self.stream_name, self.name
                ));
            }

            if !pattern.is_match(match_) {
                return Err(format!(
                    "'{}' doesn't match pattern {}",
                    match_, pattern.name,
                ));
            }

            if pattern.is_ignore(match_) {
                return Err(format!(
                    "'{}' is explicitly ignored by pattern {}",
                    match_, pattern.name,
                ));
            }

            pattern.normalize(match_);
        }

        for pattern in self.patterns.iter() {
            if !patterns.contains_key(pattern) {
                return Err(format!(
                    "pattern {} is missing, because it's in the filter {}.{}",
                    pattern.name, self.stream_name, self.name
                ));
            }
        }

        Ok(patterns.into_values().collect())
    }

    /// Filters [`Filter`]'s [`Action`]s according to its [`Pattern`]s [`PatternType`]
    /// and those of the given [`Match`]
    pub fn filtered_actions_from_match(&self, m: &Match) -> Vec<&Action> {
        let ip_type = if self.has_ip {
            self.patterns
                .iter()
                .zip(m)
                .find(|(p, _)| p.pattern_type() == PatternType::Ip)
                .map(|(_, m)| -> _ {
                    // Using this dumb heuristic is ok,
                    // because we know we have a valid IP address.
                    if m.contains(':') {
                        PatternType::Ipv6
                    } else if m.contains('.') {
                        PatternType::Ipv4
                    } else {
                        // This else should not happen, but better falling back on something than
                        // panicking, right? Maybe we should add a warning there?
                        PatternType::Regex
                    }
                })
                .unwrap_or(PatternType::Regex)
        } else {
            PatternType::Regex
        };

        let mut actions: Vec<_> = self
            .actions
            .values()
            // If specific ip version, check it
            .filter(move |action| !action.ipv4only || ip_type == PatternType::Ipv4)
            .filter(move |action| !action.ipv6only || ip_type == PatternType::Ipv6)
            .collect();

        // Sort by after
        actions.sort_by(|a, b| {
            a.after_duration
                .unwrap_or_default()
                .cmp(&b.after_duration.unwrap_or_default())
        });
        actions
    }
}

impl Display for Filter {
@@ -359,61 +263,12 @@ impl Hash for Filter {
    }
}

#[cfg(test)]
#[allow(clippy::too_many_arguments)]
impl Filter {
    /// Test-only constructor designed to be easy to call
    pub fn new(
        actions: Vec<Action>,
        regex: Vec<&str>,
        retry: Option<u32>,
        retry_period: Option<&str>,
        stream_name: &str,
        name: &str,
        duplicate: Duplicate,
        config_patterns: &Patterns,
    ) -> Self {
        let mut filter = Self {
            actions: actions.into_iter().map(|a| (a.name.clone(), a)).collect(),
            regex: regex.into_iter().map(|s| s.into()).collect(),
            retry,
            retry_period: retry_period.map(|s| s.into()),
            duplicate,
            ..Default::default()
        };
        filter.setup(stream_name, name, config_patterns).unwrap();
        filter
    }

    pub fn new_static(
        actions: Vec<Action>,
        regex: Vec<&str>,
        retry: Option<u32>,
        retry_period: Option<&str>,
        stream_name: &str,
        name: &str,
        duplicate: Duplicate,
        config_patterns: &Patterns,
    ) -> &'static Self {
        Box::leak(Box::new(Self::new(
            actions,
            regex,
            retry,
            retry_period,
            stream_name,
            name,
            duplicate,
            config_patterns,
        )))
    }
}

#[allow(clippy::unwrap_used)]
#[cfg(test)]
pub mod tests {
    use crate::concepts::action::tests::{ok_action, ok_action_with_after};
    use crate::concepts::pattern::PatternIp;
    use crate::concepts::pattern::tests::{
        boubou_pattern_with_ignore, default_pattern, number_pattern, ok_pattern_with_ignore,
        boubou_pattern_with_ignore, default_pattern, ok_pattern_with_ignore,
    };

    use super::*;
@@ -482,14 +337,14 @@ pub mod tests {
        let name = "name".to_string();
        let empty_patterns = Patterns::new();
        let minute_str = "1m".to_string();
        let minute = Duration::from_secs(60);
        let two_minutes = Duration::from_secs(60 * 2);
        let minute = TimeDelta::seconds(60);
        let two_minutes = TimeDelta::seconds(60 * 2);
        let two_minutes_str = "2m".to_string();

        // duration 0
        filter = ok_filter();
        filter.setup(&name, &name, &empty_patterns).unwrap();
        assert_eq!(filter.longuest_action_duration, Duration::default());
        assert_eq!(filter.longuest_action_duration, TimeDelta::default());

        let minute_action = ok_action_with_after(minute_str.clone(), &minute_str);
@@ -546,7 +401,6 @@ pub mod tests {
                .unwrap()
                .to_string()
        );
        assert_eq!(&filter.regex[0].to_string(), "insert (?P<name>[abc]) here$");
        assert_eq!(filter.patterns.len(), 1);
        let stored_pattern = filter.patterns.first().unwrap();
        assert_eq!(stored_pattern.regex, pattern.regex);
@@ -572,10 +426,6 @@ pub mod tests {
                .unwrap()
                .to_string()
        );
        assert_eq!(
            &filter.compiled_regex[0].to_string(),
            "insert (?P<name>[abc]) here and (?P<boubou>(?:bou){1,3}) there"
        );
        assert_eq!(filter.patterns.len(), 2);
        let stored_pattern = filter.patterns.first().unwrap();
        assert_eq!(stored_pattern.regex, boubou.regex);
@@ -594,20 +444,12 @@ pub mod tests {
                .unwrap()
                .to_string()
        );
        assert_eq!(
            &filter.compiled_regex[0].to_string(),
            "insert (?P<name>[abc]) here"
        );
        assert_eq!(
            filter.compiled_regex[1].to_string(),
            Regex::new("also add (?P<name>[abc]) there")
                .unwrap()
                .to_string()
        );
        assert_eq!(
            &filter.compiled_regex[1].to_string(),
            "also add (?P<name>[abc]) there"
        );
        assert_eq!(filter.patterns.len(), 1);
        let stored_pattern = filter.patterns.first().unwrap();
        assert_eq!(stored_pattern.regex, pattern.regex);
@@ -634,10 +476,6 @@ pub mod tests {
                .unwrap()
                .to_string()
        );
        assert_eq!(
            &filter.compiled_regex[1].to_string(),
            "also add (?P<boubou>(?:bou){1,3}) here and (?P<name>[abc]) there"
        );
        assert_eq!(filter.patterns.len(), 2);
        let stored_pattern = filter.patterns.first().unwrap();
        assert_eq!(stored_pattern.regex, boubou.regex);
@@ -675,24 +513,17 @@ pub mod tests {
        let name = "name".to_string();
        let mut filter;

        // make a Patterns
        let mut patterns = Patterns::new();

        let mut pattern = ok_pattern_with_ignore();
        pattern.setup(&name).unwrap();
        let pattern = Arc::new(pattern);
        patterns.insert(name.clone(), pattern.clone().into());

        let boubou_name = "boubou".to_string();
        let mut boubou = boubou_pattern_with_ignore();
        boubou.setup(&boubou_name).unwrap();
        let boubou = Arc::new(boubou);

        let patterns = Patterns::from([
            (name.clone(), pattern.clone()),
            (boubou_name.clone(), boubou.clone()),
        ]);

        let number_name = "number".to_string();
        let mut number_pattern = number_pattern();
        number_pattern.setup(&number_name).unwrap();
        let number_pattern = Arc::new(number_pattern);
        patterns.insert(boubou_name.clone(), boubou.clone().into());

        // one simple regex
        filter = Filter::default();
@@ -704,41 +535,6 @@ pub mod tests {
        assert_eq!(filter.get_match("youpi b youpi"), None);
        assert_eq!(filter.get_match("insert here"), None);

        // Ok
        assert_eq!(
            filter.get_match_from_patterns(BTreeMap::from([(pattern.clone(), "b".into())])),
            Ok(vec!("b".into()))
        );
        // Doesn't match
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([(pattern.clone(), "abc".into())]))
                .is_err()
        );
        // Ignored match
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([(pattern.clone(), "a".into())]))
                .is_err()
        );
        // Bad pattern
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([(boubou.clone(), "bou".into())]))
                .is_err()
        );
        // Bad number of patterns
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (boubou.clone(), "bou".into()),
                ]))
                .is_err()
        );
        // Bad number of patterns
        assert!(filter.get_match_from_patterns(BTreeMap::from([])).is_err());

        // two patterns in one regex
        filter = Filter::default();
        filter.actions.insert(name.clone(), ok_action());
@@ -753,55 +549,6 @@ pub mod tests {
        assert_eq!(filter.get_match("insert a here and bouboubou there"), None);
        assert_eq!(filter.get_match("insert b here and boubou there"), None);

        // Ok
        assert_eq!(
            filter.get_match_from_patterns(BTreeMap::from([
                (pattern.clone(), "b".into()),
                (boubou.clone(), "bou".into()),
            ])),
            // Reordered by pattern name
            Ok(vec!("bou".into(), "b".into()))
        );
        // Doesn't match
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "abc".into()),
                    (boubou.clone(), "bou".into()),
                ]))
                .is_err()
        );
        // Ignored match
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (boubou.clone(), "boubou".into()),
                ]))
                .is_err()
        );
        // Bad pattern
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (number_pattern.clone(), "1".into()),
                ]))
                .is_err()
        );
        // Bad number of patterns
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (boubou.clone(), "bou".into()),
                    (number_pattern.clone(), "1".into()),
                ]))
                .is_err()
        );
        // Bad number of patterns
        assert!(filter.get_match_from_patterns(BTreeMap::from([])).is_err());

        // multiple regexes with same pattern
        filter = Filter::default();
        filter.actions.insert(name.clone(), ok_action());
@@ -839,155 +586,4 @@ pub mod tests {
        assert_eq!(filter.get_match("insert b here and boubou there"), None);
        assert_eq!(filter.get_match("also add boubou here and b there"), None);
    }

    #[test]
    fn get_match_from_patterns() {
        // TODO
    }

    #[test]
    fn filtered_actions_from_match_one_regex_pattern() {
        let az_patterns = Pattern::new_map("az", "[a-z]+").unwrap();
        let action = Action::new(
            vec!["zblorg <az>"],
            None,
            false,
            "test",
            "test",
            "a1",
            &az_patterns,
            0,
        );
        let filter = Filter::new(
            vec![action.clone()],
            vec![""],
            None,
            None,
            "test",
            "test",
            Duplicate::default(),
            &az_patterns,
        );
        assert_eq!(
            vec![&action],
            filter.filtered_actions_from_match(&vec!["zboum".into()])
        );
    }

    #[test]
    fn filtered_actions_from_match_two_regex_patterns() {
        let patterns = BTreeMap::from([
            (
                "az".to_string(),
                Arc::new(Pattern::new("az", "[a-z]+").unwrap()),
            ),
            (
                "num".to_string(),
                Arc::new(Pattern::new("num", "[0-9]{1,3}").unwrap()),
            ),
        ]);
        let action1 = Action::new(
            vec!["zblorg <az> <num>"],
            None,
            false,
            "test",
            "test",
            "a1",
            &patterns,
            0,
        );
        let action2 = Action::new(
            vec!["zbleurg <num> <az>"],
            None,
            false,
            "test",
            "test",
            "a2",
            &patterns,
            0,
        );
        let filter = Filter::new(
            vec![action1.clone(), action2.clone()],
            vec![""],
            None,
            None,
            "test",
            "test",
            Duplicate::default(),
            &patterns,
        );
        assert_eq!(
            vec![&action1, &action2],
            filter.filtered_actions_from_match(&vec!["zboum".into()])
        );
    }

    #[test]
    fn filtered_actions_from_match_one_regex_one_ip() {
        let patterns = BTreeMap::from([
            (
                "az".to_string(),
                Arc::new(Pattern::new("az", "[a-z]+").unwrap()),
            ),
            ("ip".to_string(), {
                let mut pattern = Pattern {
                    ip: PatternIp {
                        pattern_type: PatternType::Ip,
                        ..Default::default()
                    },
                    ..Default::default()
                };
                pattern.setup("ip").unwrap();
                Arc::new(pattern)
            }),
        ]);
        let action4 = Action::new(
            vec!["zblorg4 <az> <ip>"],
            None,
            false,
            "test",
            "test",
            "action4",
            &patterns,
            4,
        );
        let action6 = Action::new(
            vec!["zblorg6 <az> <ip>"],
            None,
            false,
            "test",
            "test",
            "action6",
            &patterns,
            6,
        );
        let action = Action::new(
            vec!["zblorg <az> <ip>"],
            None,
            false,
            "test",
            "test",
            "action",
            &patterns,
            0,
        );
        let filter = Filter::new(
            vec![action4.clone(), action6.clone(), action.clone()],
            vec!["<az>: <ip>"],
            None,
            None,
            "test",
            "test",
            Duplicate::default(),
            &patterns,
        );
        assert_eq!(
            filter.filtered_actions_from_match(&vec!["zboum".into(), "1.2.3.4".into()]),
            vec![&action, &action4],
        );
        assert_eq!(
            filter.filtered_actions_from_match(&vec!["zboum".into(), "ab4:35f::1".into()]),
            vec![&action, &action6],
        );
    }
}
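The IPv4/IPv6 tests above exercise the discrimination done in `filtered_actions_from_match`, which simply looks for `':'` vs `'.'` in an already-validated IP match. A minimal standalone sketch of that heuristic (function name and return type are my illustration, not part of the diff):

```rust
// Hypothetical standalone version of the heuristic used by
// filtered_actions_from_match: the match is known to be a valid IP,
// so the presence of ':' or '.' is enough to pick the version.
fn ip_version(m: &str) -> Option<u8> {
    if m.contains(':') {
        Some(6) // IPv6 addresses always contain ':'
    } else if m.contains('.') {
        Some(4) // dotted quad => IPv4
    } else {
        None // shouldn't happen for a validated IP; caller falls back
    }
}

fn main() {
    assert_eq!(ip_version("1.2.3.4"), Some(4));
    assert_eq!(ip_version("ab4:35f::1"), Some(6));
    assert_eq!(ip_version("zboum"), None);
}
```

This mirrors why `action4` (IPv4-only) fires for `"1.2.3.4"` and `action6` (IPv6-only) fires for `"ab4:35f::1"` in the tests.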
@@ -1,90 +1,18 @@
mod action;
mod config;
mod filter;
mod parse_duration;
mod pattern;
mod plugin;
mod stream;

use std::fmt::Debug;

use serde::{Deserialize, Serialize};

pub use action::Action;
pub use config::{Config, Patterns};
pub use filter::{Duplicate, Filter};
pub use pattern::{Pattern, PatternType};
pub use plugin::Plugin;
pub use filter::Filter;
pub use parse_duration::parse_duration;
pub use pattern::Pattern;
pub use stream::Stream;
pub use treedb::time::{Time, now};

use chrono::{DateTime, Local};

pub type Time = DateTime<Local>;
pub type Match = Vec<String>;

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
pub struct MatchTime {
    pub m: Match,
    pub t: Time,
}

fn merge_attrs<A: Default + Debug + PartialEq + Eq + Clone>(
    this: A,
    other: A,
    default: A,
    name: &str,
) -> Result<A, String> {
    if !(this == default || other == default || this == other) {
        return Err(format!(
            "'{name}' has conflicting definitions: '{this:?}', '{other:?}'"
        ));
    }
    if this == default {
        return Ok(other);
    }
    Ok(this)
}

#[cfg(test)]
pub use filter::tests as filter_tests;

#[cfg(test)]
mod tests {
    use crate::concepts::merge_attrs;

    #[test]
    fn test_merge_attrs() {
        assert_eq!(merge_attrs(None::<String>, None, None, "t"), Ok(None));
        assert_eq!(
            merge_attrs(Some("coucou"), None, None, "t"),
            Ok(Some("coucou"))
        );
        assert_eq!(
            merge_attrs(None, Some("coucou"), None, "t"),
            Ok(Some("coucou"))
        );
        assert_eq!(
            merge_attrs(Some("coucou"), Some("coucou"), None, "t"),
            Ok(Some("coucou"))
        );
        assert_eq!(
            merge_attrs(Some("coucou"), Some("hello"), None, "t"),
            Err("'t' has conflicting definitions: 'Some(\"coucou\")', 'Some(\"hello\")'".into())
        );

        assert_eq!(merge_attrs("", "", "", "t"), Ok(""));
        assert_eq!(merge_attrs("coucou", "", "", "t"), Ok("coucou"));
        assert_eq!(merge_attrs("", "coucou", "", "t"), Ok("coucou"));
        assert_eq!(merge_attrs("coucou", "coucou", "", "t"), Ok("coucou"));
        assert_eq!(
            merge_attrs("coucou", "hello", "", "t"),
            Err("'t' has conflicting definitions: '\"coucou\"', '\"hello\"'".into())
        );

        assert_eq!(merge_attrs(0, 0, 0, "t"), Ok(0));
        assert_eq!(merge_attrs(5, 0, 0, "t"), Ok(5));
        assert_eq!(merge_attrs(0, 5, 0, "t"), Ok(5));
        assert_eq!(merge_attrs(5, 5, 0, "t"), Ok(5));
        assert_eq!(
            merge_attrs(5, 6, 0, "t"),
            Err("'t' has conflicting definitions: '5', '6'".into())
        );
    }
}
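The `merge_attrs` rule above is: a side equal to the default yields to the other side, and two differing non-default values are a conflict. A minimal sketch of that rule, specialized to `Option<u32>` for illustration (the `merge` name is mine, not from the diff):

```rust
// Sketch of the merge_attrs rule for Option<u32>, where None plays the
// role of the default: the non-default side wins; two differing
// non-default values are an error.
fn merge(this: Option<u32>, other: Option<u32>) -> Result<Option<u32>, String> {
    match (this, other) {
        (Some(a), Some(b)) if a != b => Err(format!("conflicting definitions: {a}, {b}")),
        (Some(a), _) => Ok(Some(a)),
        (_, b) => Ok(b),
    }
}

fn main() {
    assert_eq!(merge(None, Some(5)), Ok(Some(5)));
    assert_eq!(merge(Some(5), None), Ok(Some(5)));
    assert_eq!(merge(Some(5), Some(5)), Ok(Some(5)));
    assert!(merge(Some(5), Some(6)).is_err());
}
```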
67 src/concepts/parse_duration.rs Normal file
@@ -0,0 +1,67 @@
use chrono::TimeDelta;

pub fn parse_duration(d: &str) -> Result<TimeDelta, String> {
    let d_trimmed = d.trim();
    let chars = d_trimmed.as_bytes();
    let mut value = 0;
    let mut i = 0;
    while i < chars.len() && chars[i].is_ascii_digit() {
        value = value * 10 + (chars[i] - b'0') as u32;
        i += 1;
    }
    if i == 0 {
        return Err(format!("duration '{}' doesn't start with digits", d));
    }
    let ok_secs = |mul: u32| -> Result<TimeDelta, String> {
        Ok(TimeDelta::seconds(mul as i64 * value as i64))
    };

    match d_trimmed[i..].trim() {
        "s" | "sec" | "secs" | "second" | "seconds" => ok_secs(1),
        "m" | "min" | "mins" | "minute" | "minutes" => ok_secs(60),
        "h" | "hour" | "hours" => ok_secs(60 * 60),
        "d" | "day" | "days" => ok_secs(24 * 60 * 60),
        unit => Err(format!(
            "unit {} not recognised. must be one of s/sec/seconds, m/min/minutes, h/hours, d/days",
            unit
        )),
    }
}

#[cfg(test)]
mod tests {

    use chrono::TimeDelta;

    use super::*;

    #[test]
    fn char_conversion() {
        assert_eq!(b'9' - b'0', 9);
    }

    #[test]
    fn parse_duration_test() {
        assert_eq!(parse_duration("1s"), Ok(TimeDelta::seconds(1)));
        assert_eq!(parse_duration("12s"), Ok(TimeDelta::seconds(12)));
        assert_eq!(parse_duration(" 12 secs "), Ok(TimeDelta::seconds(12)));
        assert_eq!(parse_duration("2m"), Ok(TimeDelta::seconds(2 * 60)));
        assert_eq!(
            parse_duration("6 hours"),
            Ok(TimeDelta::seconds(6 * 60 * 60))
        );
        assert_eq!(parse_duration("1d"), Ok(TimeDelta::seconds(24 * 60 * 60)));
        assert_eq!(
            parse_duration("365d"),
            Ok(TimeDelta::seconds(365 * 24 * 60 * 60))
        );

        assert!(parse_duration("d 3").is_err());
        assert!(parse_duration("d3").is_err());
        assert!(parse_duration("3da").is_err());
        assert!(parse_duration("3_days").is_err());
        assert!(parse_duration("_3d").is_err());
        assert!(parse_duration("3 3d").is_err());
        assert!(parse_duration("3.3d").is_err());
    }
}
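The digit-scanning step of `parse_duration` (accumulate leading ASCII digits, then treat the trimmed remainder as the unit) can be sketched in isolation like this; the `split_value` helper name and `Option` return are my illustration, not part of the file:

```rust
// Hypothetical re-statement of parse_duration's first phase: split a
// trimmed input into (numeric value, unit suffix), rejecting inputs
// that don't start with digits.
fn split_value(d: &str) -> Option<(u32, &str)> {
    let d = d.trim();
    // Index of the first non-digit byte (or the whole string if all digits).
    let end = d.find(|c: char| !c.is_ascii_digit()).unwrap_or(d.len());
    if end == 0 {
        return None; // must start with digits, like the real parser
    }
    Some((d[..end].parse().ok()?, d[end..].trim()))
}

fn main() {
    assert_eq!(split_value(" 12 secs "), Some((12, "secs")));
    assert_eq!(split_value("2m"), Some((2, "m")));
    assert_eq!(split_value("d3"), None);
}
```

This also explains why `" 12 secs "` parses but `"3 3d"` does not: after the digits, the remainder must be exactly one recognised unit token.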
251 src/concepts/pattern.rs Normal file
@@ -0,0 +1,251 @@
use std::cmp::Ordering;

use regex::Regex;
use serde::Deserialize;

#[derive(Clone, Debug, Deserialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Pattern {
    pub regex: String,

    #[serde(default)]
    ignore: Vec<String>,

    #[serde(default, rename = "ignoreregex")]
    ignore_regex: Vec<String>,
    #[serde(skip)]
    compiled_ignore_regex: Vec<Regex>,

    #[serde(skip)]
    name: String,
    #[serde(skip)]
    name_with_braces: String,
}

impl Pattern {
    #[cfg(test)]
    pub fn from_name(name: &str) -> Pattern {
        Pattern {
            name: name.into(),
            ..Pattern::default()
        }
    }
    pub fn setup(&mut self, name: &str) -> Result<(), String> {
        self._setup(name)
            .map_err(|msg| format!("pattern {}: {}", name, msg))
    }

    pub fn name(&self) -> &String {
        &self.name
    }
    pub fn name_with_braces(&self) -> &String {
        &self.name_with_braces
    }

    pub fn _setup(&mut self, name: &str) -> Result<(), String> {
        self.name = name.to_string();
        self.name_with_braces = format!("<{}>", name);

        if self.name.is_empty() {
            return Err("pattern name is empty".into());
        }
        if self.name.contains('.') {
            return Err("character '.' is not allowed in pattern name".into());
        }

        if self.regex.is_empty() {
            return Err("regex is empty".into());
        }
        let compiled = Regex::new(&format!("^{}$", self.regex)).map_err(|err| err.to_string())?;

        self.regex = format!("(?P<{}>{})", self.name, self.regex);

        for ignore in &self.ignore {
            if !compiled.is_match(ignore) {
                return Err(format!(
                    "ignore '{}' doesn't match pattern. It should be fixed or removed.",
                    ignore,
                ));
            }
        }

        for ignore_regex in &self.ignore_regex {
            let compiled_ignore = Regex::new(&format!("^{}$", ignore_regex))
                .map_err(|err| format!("ignoreregex '{}': {}", ignore_regex, err))?;

            self.compiled_ignore_regex.push(compiled_ignore);
        }
        self.ignore_regex.clear();

        Ok(())
    }

    pub fn not_an_ignore(&self, match_: &str) -> bool {
        for regex in &self.compiled_ignore_regex {
            if regex.is_match(match_) {
                return false;
            }
        }
        for ignore in &self.ignore {
            if ignore == match_ {
                return false;
            }
        }
        true
    }
}

// This is required to be added to a BTreeSet
// We compare Patterns by their names, which are unique.
// This is enforced by Patterns' names coming from their keys in a BTreeMap in Config
impl Ord for Pattern {
    fn cmp(&self, other: &Self) -> Ordering {
        self.name.cmp(&other.name)
    }
}
impl PartialOrd for Pattern {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Eq for Pattern {}
impl PartialEq for Pattern {
    fn eq(&self, other: &Self) -> bool {
        self.name == other.name
    }
}

#[allow(clippy::unwrap_used)]
#[cfg(test)]
pub mod tests {

    use super::*;

    pub fn default_pattern() -> Pattern {
        Pattern {
            regex: "".into(),
            ignore: Vec::new(),
            ignore_regex: Vec::new(),
            compiled_ignore_regex: Vec::new(),
            name: "".into(),
            name_with_braces: "".into(),
        }
    }

    pub fn ok_pattern() -> Pattern {
        let mut pattern = default_pattern();
        pattern.regex = "[abc]".into();
        pattern
    }

    pub fn ok_pattern_with_ignore() -> Pattern {
        let mut pattern = ok_pattern();
        pattern.ignore.push("a".into());
        pattern
    }

    pub fn boubou_pattern_with_ignore() -> Pattern {
        let mut pattern = ok_pattern();
        pattern.regex = "(?:bou){1,3}".to_string();
        pattern.ignore.push("boubou".into());
        pattern
    }

    #[test]
    fn setup_missing_information() {
        let mut pattern;

        // Empty name
        pattern = default_pattern();
        pattern.regex = "abc".into();
        assert!(pattern.setup("").is_err());

        // '.' in name
        pattern = default_pattern();
        pattern.regex = "abc".into();
        assert!(pattern.setup("na.me").is_err());

        // Empty regex
        pattern = default_pattern();
        assert!(pattern.setup("name").is_err());
    }

    #[test]
    fn setup_regex() {
        let mut pattern;

        // regex ok
        pattern = ok_pattern();
        assert!(pattern.setup("name").is_ok());

        // regex ok
        pattern = default_pattern();
        pattern.regex = "abc".into();
        assert!(pattern.setup("name").is_ok());

        // regex ko
        pattern = default_pattern();
        pattern.regex = "[abc".into();
        assert!(pattern.setup("name").is_err());
    }

    #[test]
    fn setup_ignore() {
        let mut pattern;

        // ignore ok
        pattern = default_pattern();
        pattern.regex = "[abc]".into();
        pattern.ignore.push("a".into());
        pattern.ignore.push("b".into());
        assert!(pattern.setup("name").is_ok());

        // ignore ko
        pattern = default_pattern();
        pattern.regex = "[abc]".into();
        pattern.ignore.push("d".into());
        assert!(pattern.setup("name").is_err());
    }

    #[test]
    fn setup_ignore_regex() {
        let mut pattern;

        // ignore_regex ok
        pattern = default_pattern();
        pattern.regex = "[abc]".into();
        pattern.ignore_regex.push("[a]".into());
        pattern.ignore_regex.push("a".into());
        assert!(pattern.setup("name").is_ok());

        // ignore_regex ko
        pattern = default_pattern();
        pattern.regex = "[abc]".into();
        pattern.ignore.push("[a".into());
        assert!(pattern.setup("name").is_err());
    }

    #[test]
    fn not_an_ignore() {
        let mut pattern;

        // ignore ok
        pattern = default_pattern();
        pattern.regex = "[abcdefg]".into();
        pattern.ignore.push("a".into());
        pattern.ignore.push("b".into());
        pattern.ignore_regex.push("c".into());
        pattern.ignore_regex.push("[de]".into());

        pattern.setup("name").unwrap();
        assert!(!pattern.not_an_ignore("a"));
        assert!(!pattern.not_an_ignore("b"));
        assert!(!pattern.not_an_ignore("c"));
        assert!(!pattern.not_an_ignore("d"));
        assert!(!pattern.not_an_ignore("e"));
        assert!(pattern.not_an_ignore("f"));
        assert!(pattern.not_an_ignore("g"));
        assert!(pattern.not_an_ignore("h"));
    }
}
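The `Ord`/`PartialEq` impls above deliberately compare `Pattern`s by name only, so that two patterns with the same name count as the same element in a `BTreeSet`/`BTreeMap`. A small demonstration with a stand-in type (`P` is my illustration, not the real `Pattern`):

```rust
use std::cmp::Ordering;
use std::collections::BTreeSet;

// Stand-in type mirroring Pattern's name-only ordering and equality.
struct P {
    name: String,
    regex: String,
}

impl Ord for P {
    fn cmp(&self, other: &Self) -> Ordering {
        self.name.cmp(&other.name) // compare by name only
    }
}
impl PartialOrd for P {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Eq for P {}
impl PartialEq for P {
    fn eq(&self, other: &Self) -> bool {
        self.name == other.name // equality by name only
    }
}

fn main() {
    let mut set = BTreeSet::new();
    set.insert(P { name: "a".into(), regex: "x".into() });
    // Same name => same element for the set, whatever the regex says.
    let inserted = set.insert(P { name: "a".into(), regex: "y".into() });
    assert!(!inserted);
    assert_eq!(set.len(), 1);
}
```

This only stays consistent because pattern names are unique, which the config enforces by taking names from `BTreeMap` keys, as the comment in the file notes.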
@@ -1,239 +0,0 @@
use std::{
    fmt::Display,
    net::{IpAddr, Ipv4Addr, Ipv6Addr},
    str::FromStr,
};

use super::*;

/// Stores an IP and an associated mask.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum Cidr {
    IPv4((Ipv4Addr, Ipv4Addr)),
    IPv6((Ipv6Addr, Ipv6Addr)),
}

impl FromStr for Cidr {
    type Err = String;

    fn from_str(cidr: &str) -> Result<Self, Self::Err> {
        let (ip, mask) = cidr.split_once('/').ok_or(format!(
            "malformed IP/MASK. '{cidr}' doesn't contain any '/'"
        ))?;
        let ip = normalize(ip).map_err(|err| format!("malformed IP '{ip}' in '{cidr}': {err}"))?;
        let mask_count = u8::from_str(mask)
            .map_err(|err| format!("malformed mask '{mask}' in '{cidr}': {err}"))?;

        // Let's accept any mask size for now, as useless as it may seem
        // if mask_count < 2 {
        //     return Err("Can't have a network mask of 0 or 1. You're either ignoring all Internet or half of it.".into());
        // } else if mask_count
        //     < (match ip {
        //         IpAddr::V4(_) => 8,
        //         IpAddr::V6(_) => 16,
        //     })
        // {
        //     warn!("With a mask of {mask_count}, you're ignoring a big part of Internet. Are you sure you want to do this?");
        // }

        Self::from_ip_and_mask(ip, mask_count)
    }
}

impl Display for Cidr {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}/{}", self.network(), self.mask())
    }
}

impl Cidr {
    fn from_ip_and_mask(ip: IpAddr, mask_count: u8) -> Result<Self, String> {
        match ip {
            IpAddr::V4(mut ipv4_addr) => {
                // Create bitmask
                let mask = mask_to_ipv4(mask_count)?;
                // Normalize IP from mask
                ipv4_addr &= mask;

                Ok(Cidr::IPv4((ipv4_addr, mask)))
            }
            IpAddr::V6(mut ipv6_addr) => {
                let mask = mask_to_ipv6(mask_count)?;
                // Normalize IP from mask
                ipv6_addr &= mask;

                Ok(Cidr::IPv6((ipv6_addr, mask)))
            }
        }
    }

    /// Whether an IP is included in this IP CIDR.
    /// If IP is not the same version as CIDR, returns always false.
    pub fn includes(&self, ip: &IpAddr) -> bool {
        let ip = normalize_ip(*ip);
        match self {
            Cidr::IPv4((network_ipv4, mask)) => match ip {
                IpAddr::V6(_) => false,
                IpAddr::V4(ipv4_addr) => *network_ipv4 == ipv4_addr & mask,
            },
            Cidr::IPv6((network_ipv6, mask)) => match ip {
                IpAddr::V4(_) => false,
                IpAddr::V6(ipv6_addr) => *network_ipv6 == ipv6_addr & mask,
            },
        }
    }

    fn network(&self) -> IpAddr {
        match self {
            Cidr::IPv4((network, _)) => IpAddr::from(*network),
            Cidr::IPv6((network, _)) => IpAddr::from(*network),
        }
    }

    fn mask(&self) -> u8 {
        let mut raw_mask = match self {
            Cidr::IPv4((_, mask)) => mask.to_bits() as u128,
            Cidr::IPv6((_, mask)) => mask.to_bits(),
        };
        let mut ret = 0;
        for _ in 0..128 {
            if raw_mask % 2 == 1 {
                ret += 1;
            }
            raw_mask >>= 1;
        }
        ret
    }
}

#[cfg(test)]
mod cidr_tests {
    use std::{
        net::{IpAddr, Ipv4Addr, Ipv6Addr},
        str::FromStr,
    };

    use super::Cidr;

    #[test]
    fn cidrv4_from_str() {
        assert_eq!(
            Ok(Cidr::IPv4((Ipv4Addr::new(192, 168, 1, 4), u32::MAX.into()))),
            Cidr::from_str("192.168.1.4/32")
        );
        // Test IP normalization from mask
        assert_eq!(
            Ok(Cidr::IPv4((
                Ipv4Addr::new(192, 168, 1, 0),
                Ipv4Addr::new(255, 255, 255, 0),
            ))),
            Cidr::from_str("192.168.1.4/24")
        );
        // Another ok-test "pour la route"
        assert_eq!(
            Ok(Cidr::IPv4((
                Ipv4Addr::new(1, 1, 0, 0),
                Ipv4Addr::new(255, 255, 0, 0),
            ))),
            Cidr::from_str("1.1.248.25/16")
        );
        // Errors
        assert!(Cidr::from_str("256.1.1.1/8").is_err());
        // assert!(Cidr::from_str("1.1.1.1/0").is_err());
        // assert!(Cidr::from_str("1.1.1.1/1").is_err());
        // assert!(Cidr::from_str("1.1.1.1.1").is_err());
        assert!(Cidr::from_str("1.1.1.1/16/16").is_err());
    }

    #[test]
    fn cidrv6_from_str() {
        assert_eq!(
            Ok(Cidr::IPv6((
                Ipv6Addr::new(0xfe80, 0, 0, 0, 0xdf68, 0x2ee, 0xe4f9, 0xe68),
                u128::MAX.into()
            ))),
            Cidr::from_str("fe80::df68:2ee:e4f9:e68/128")
        );
        // Test IP normalization from mask
        assert_eq!(
            Ok(Cidr::IPv6((
                Ipv6Addr::new(0x2001, 0xdb8, 0x85a3, 0x9de5, 0, 0, 0, 0),
                Ipv6Addr::new(u16::MAX, u16::MAX, u16::MAX, u16::MAX, 0, 0, 0, 0),
            ))),
            Cidr::from_str("2001:db8:85a3:9de5::8a2e:370:7334/64")
        );
        // Another ok-test "pour la route"
        assert_eq!(
            Ok(Cidr::IPv6((
                Ipv6Addr::new(0x2001, 0xdb8, 0x85a3, 0x9d00, 0, 0, 0, 0),
                Ipv6Addr::new(
                    u16::MAX,
                    u16::MAX,
                    u16::MAX,
                    u16::MAX - u8::MAX as u16,
                    0,
                    0,
                    0,
                    0
                ),
            ))),
            Cidr::from_str("2001:db8:85a3:9d00::8a2e:370:7334/56")
        );
        assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/56").is_ok());
        assert!(Cidr::from_str("2001:DB8:85A3:0:0:8A2E:370:7334/56").is_ok());
        // Errors
        assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:g334/56").is_err());
        // assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/0").is_err());
        // assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/1").is_err());
        assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334:11/56").is_err());
        assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/11/56").is_err());
    }

    #[test]
    fn cidrv4_includes() {
        let cidr = Cidr::from_str("192.168.1.0/24").unwrap();
        assert!(cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 0))));
        assert!(cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 1))));
        assert!(cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 234))));
        assert!(!cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 0, 1))));
        assert!(!cidr.includes(&IpAddr::V6(Ipv6Addr::new(
            0xfe80, 0, 0, 0, 0xdf68, 0x2ee, 0xe4f9, 0xe68
        ),)));
    }

    #[test]
    fn cidrv6_includes() {
        let cidr = Cidr::from_str("2001:db8:85a3:9d00:0:8a2e:370:7334/56").unwrap();
|
||||
assert!(cidr.includes(&IpAddr::V6(Ipv6Addr::new(
|
||||
0x2001, 0x0db8, 0x85a3, 0x9d00, 0, 0, 0, 0
|
||||
))));
|
||||
assert!(cidr.includes(&IpAddr::V6(Ipv6Addr::new(
|
||||
0x2001, 0x0db8, 0x85a3, 0x9da4, 0x34fc, 0x0d8b, 0xffff, 0x1111
|
||||
))));
|
||||
assert!(!cidr.includes(&IpAddr::V6(Ipv6Addr::new(
|
||||
0x2001, 0x0db8, 0x85a3, 0xad00, 0, 0, 0, 1
|
||||
))));
|
||||
assert!(!cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 0))));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn cidr_display() {
|
||||
let cidrs = [
|
||||
("192.168.1.4/32", "192.168.1.4/32"),
|
||||
("192.168.1.4/24", "192.168.1.0/24"),
|
||||
("1.1.248.25/16", "1.1.0.0/16"),
|
||||
("fe80::df68:2ee:e4f9:e68/128", "fe80::df68:2ee:e4f9:e68/128"),
|
||||
(
|
||||
"2001:db8:85a3:9de5::8a2e:370:7334/64",
|
||||
"2001:db8:85a3:9de5::/64",
|
||||
),
|
||||
(
|
||||
"2001:db8:85a3:9d00::8a2e:370:7334/56",
|
||||
"2001:db8:85a3:9d00::/56",
|
||||
),
|
||||
];
|
||||
for (from, to) in cidrs {
|
||||
assert_eq!(Cidr::from_str(from).unwrap().to_string(), to);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -1,730 +0,0 @@
use std::{
    net::{IpAddr, Ipv4Addr, Ipv6Addr},
    str::FromStr,
};

use serde::{Deserialize, Serialize};
use tracing::warn;

use cidr::Cidr;
use utils::*;

mod cidr;
mod utils;

#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
pub enum PatternType {
    #[default]
    #[serde(rename = "regex")]
    Regex,
    #[serde(rename = "ip")]
    Ip,
    #[serde(rename = "ipv4")]
    Ipv4,
    #[serde(rename = "ipv6")]
    Ipv6,
}

impl PatternType {
    pub fn is_default(&self) -> bool {
        *self == PatternType::default()
    }

    pub fn regex(&self) -> Option<String> {
        // These orders of preference are very important for <ip>
        // patterns that have greedy catch-all regexes before or after them,
        // for example: "Failed password .*<ip>.*"

        let num4 = [
            // Order is important, first is preferred.

            // first 25x
            "(?:25[0-5]",
            // then 2xx
            "2[0-4][0-9]",
            // then 1xx
            "1[0-9][0-9]",
            // then two digits
            "[1-9][0-9]",
            // then one digit
            "[0-9])",
        ]
        .join("|");

        let numsix = "[0-9a-fA-F]{1,4}";

        let ipv4 = format!(r#"{num4}(?:\.{num4}){{3}}"#);

        #[allow(clippy::useless_format)]
        let ipv6 = [
            // We're unrolling all possibilities, longer IPv6 first,
            // to make it super-greedy:
            // greedier than an eventual .* before or after <ip>,
            // which would "eat" its first or last blocks.

            // Order is important, first is preferred.

            // We put IPv4-suffixed regexes first
            format!(r#"::(?:ffff(?::0{{1,4}})?:)?{ipv4}"#),
            format!(r#"(?:{numsix}:){{1,4}}:{ipv4}"#),
            // Then link-local addresses with interface name
            format!(r#"fe80:(?::[0-9a-fA-F]{{0,4}}){{0,4}}%[0-9a-zA-Z]+"#),
            // Full IPv6
            format!("(?:{numsix}:){{7}}{numsix}"),
            // 1 block cut
            format!("(?:{numsix}:){{7}}:"),
            format!("(?:{numsix}:){{6}}:{numsix}"),
            format!("(?:{numsix}:){{5}}(?::{numsix}){{2}}"),
            format!("(?:{numsix}:){{4}}(?::{numsix}){{3}}"),
            format!("(?:{numsix}:){{3}}(?::{numsix}){{4}}"),
            format!("(?:{numsix}:){{2}}(?::{numsix}){{5}}"),
            format!("{numsix}:(?:(?::{numsix}){{6}})"),
            format!(":(?:(?::{numsix}){{7}})"),
            // 2 blocks cut
            format!("(?:{numsix}:){{6}}:"),
            format!("(?:{numsix}:){{5}}:{numsix}"),
            format!("(?:{numsix}:){{4}}(?::{numsix}){{2}}"),
            format!("(?:{numsix}:){{3}}(?::{numsix}){{3}}"),
            format!("(?:{numsix}:){{2}}(?::{numsix}){{4}}"),
            format!("{numsix}:(?:(?::{numsix}){{5}})"),
            format!(":(?:(?::{numsix}){{6}})"),
            // 3 blocks cut
            format!("(?:{numsix}:){{5}}:"),
            format!("(?:{numsix}:){{4}}:{numsix}"),
            format!("(?:{numsix}:){{3}}(?::{numsix}){{2}}"),
            format!("(?:{numsix}:){{2}}(?::{numsix}){{3}}"),
            format!("{numsix}:(?:(?::{numsix}){{4}})"),
            format!(":(?:(?::{numsix}){{5}})"),
            // 4 blocks cut
            format!("(?:{numsix}:){{4}}:"),
            format!("(?:{numsix}:){{3}}:{numsix}"),
            format!("(?:{numsix}:){{2}}(?::{numsix}){{2}}"),
            format!("{numsix}:(?:(?::{numsix}){{3}})"),
            format!(":(?:(?::{numsix}){{4}})"),
            // 5 blocks cut
            format!("(?:{numsix}:){{3}}:"),
            format!("(?:{numsix}:){{2}}:{numsix}"),
            format!("{numsix}:(?:(?::{numsix}){{2}})"),
            format!(":(?:(?::{numsix}){{3}})"),
            // 6 blocks cut
            format!("(?:{numsix}:){{2}}:"),
            format!("{numsix}::{numsix}"),
            format!(":(?:(?::{numsix}){{2}})"),
            // 7 blocks cut
            format!("{numsix}::"),
            format!("::{numsix}"),
            // special cuts
            // 8 blocks cut
            format!("::"),
        ]
        .join("|");
        match self {
            PatternType::Ipv4 => Some(ipv4),
            PatternType::Ipv6 => Some(ipv6),
            PatternType::Ip => Some(format!("{ipv4}|{ipv6}")),
            PatternType::Regex => None,
        }
    }
}

#[derive(Clone, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
pub struct PatternIp {
    #[serde(
        default,
        rename = "type",
        skip_serializing_if = "PatternType::is_default"
    )]
    pub pattern_type: PatternType,

    #[serde(default, rename = "ipv4mask")]
    pub ipv4_mask: Option<u8>,
    #[serde(default, rename = "ipv6mask")]
    pub ipv6_mask: Option<u8>,
    #[serde(skip)]
    pub ipv4_bitmask: Option<Ipv4Addr>,
    #[serde(skip)]
    pub ipv6_bitmask: Option<Ipv6Addr>,

    #[serde(default, rename = "ignorecidr", skip_serializing_if = "Vec::is_empty")]
    pub ignore_cidr: Vec<String>,
    #[serde(skip)]
    pub ignore_cidr_normalized: Vec<Cidr>,
}

impl PatternIp {
    pub fn pattern_type(&self) -> PatternType {
        self.pattern_type
    }

    /// Setup the IP-specific part of a Pattern.
    /// Returns an optional regex string if of type IP, else None.
    /// Returns an error if one of:
    /// - the type is not IP but there is IP-specific config
    /// - the type is IP/IPv4/IPv6 and there is invalid IP-specific config
    /// - the type is IPv4 and there is IPv6-specific config
    /// - the type is IPv6 and there is IPv4-specific config
    pub fn setup(&mut self) -> Result<Option<String>, String> {
        match self.pattern_type {
            PatternType::Regex => {
                if self.ipv4_mask.is_some() {
                    return Err("ipv4mask is only allowed for patterns of `type: 'ip'`".into());
                }
                if self.ipv6_mask.is_some() {
                    return Err("ipv6mask is only allowed for patterns of `type: 'ip'`".into());
                }
                if !self.ignore_cidr.is_empty() {
                    return Err("ignorecidr is only allowed for patterns of `type: 'ip'`".into());
                }
            }

            PatternType::Ip | PatternType::Ipv4 | PatternType::Ipv6 => {
                if let Some(mask) = self.ipv4_mask {
                    self.ipv4_bitmask = Some(mask_to_ipv4(mask)?);
                }
                if let Some(mask) = self.ipv6_mask {
                    self.ipv6_bitmask = Some(mask_to_ipv6(mask)?);
                }

                for cidr in &self.ignore_cidr {
                    let cidr_normalized = Cidr::from_str(cidr)?;
                    let cidr_normalized_string = cidr_normalized.to_string();
                    if &cidr_normalized_string != cidr {
                        warn!(
                            "CIDR {cidr} should be rewritten in its normalized form: {cidr_normalized_string}"
                        );
                    }
                    self.ignore_cidr_normalized.push(cidr_normalized);
                }
                self.ignore_cidr = Vec::default();

                match self.pattern_type {
                    PatternType::Regex => (),
                    PatternType::Ip => (),
                    PatternType::Ipv4 => {
                        if self.ipv6_mask.is_some() {
                            return Err("An IPv4-only pattern can't have an ipv6mask".into());
                        }
                        for cidr in &self.ignore_cidr_normalized {
                            if let Cidr::IPv6(_) = cidr {
                                return Err(format!(
                                    "An IPv4-only pattern can't have an IPv6 ({}) as an ignore",
                                    cidr
                                ));
                            }
                        }
                    }

                    PatternType::Ipv6 => {
                        if self.ipv4_mask.is_some() {
                            return Err("An IPv6-only pattern can't have an ipv4mask".into());
                        }
                        for cidr in &self.ignore_cidr_normalized {
                            if let Cidr::IPv4(_) = cidr {
                                return Err(format!(
                                    "An IPv6-only pattern can't have an IPv4 ({}) as an ignore",
                                    cidr
                                ));
                            }
                        }
                    }
                }
            }
        }
        Ok(self.pattern_type.regex())
    }

    /// Whether the IP match is included in one of [`Self::ignore_cidr`]
    pub fn is_ignore(&self, match_: &str) -> bool {
        let match_ip = match IpAddr::from_str(match_) {
            Ok(ip) => ip,
            Err(_) => return false,
        };
        self.ignore_cidr_normalized
            .iter()
            .any(|cidr| cidr.includes(&match_ip))
    }

    /// Normalize the pattern.
    /// This should happen after checking on ignores.
    /// No-op when the pattern is not an IP.
    /// Otherwise BitAnd the IP with its configured mask,
    /// and add the /<mask>
    pub fn normalize(&self, match_: &mut String) {
        let ip = match self.pattern_type {
            PatternType::Regex => None,
            // Attempt to normalize only if type is IP*
            _ => normalize(match_)
                .ok()
                .and_then(|ip| match self.pattern_type {
                    PatternType::Ip => Some(ip),
                    PatternType::Ipv4 => match ip {
                        IpAddr::V4(_) => Some(ip),
                        _ => None,
                    },
                    PatternType::Ipv6 => match ip {
                        IpAddr::V6(_) => Some(ip),
                        _ => None,
                    },
                    _ => None,
                }),
        };
        if let Some(ip) = ip {
            *match_ = match ip {
                IpAddr::V4(addr) => match self.ipv4_bitmask {
                    Some(bitmask) => {
                        format!("{}/{}", addr & bitmask, self.ipv4_mask.unwrap_or(32))
                    }
                    None => addr.to_string(),
                },
                IpAddr::V6(addr) => match self.ipv6_bitmask {
                    Some(bitmask) => {
                        format!("{}/{}", addr & bitmask, self.ipv6_mask.unwrap_or(128))
                    }
                    None => addr.to_string(),
                },
            };
        }
    }
}

#[cfg(test)]
mod patternip_tests {
    use std::net::{Ipv4Addr, Ipv6Addr};

    use tokio::{fs::read_to_string, task::JoinSet};

    use crate::{
        concepts::{Action, Duplicate, Filter, Pattern, now},
        daemon::{React, tests::TestBed},
    };

    use super::{Cidr, PatternIp, PatternType};

    #[test]
    fn test_setup_type_regex() {
        let mut regex_struct = PatternIp {
            pattern_type: PatternType::Regex,
            ..Default::default()
        };
        let copy = regex_struct.clone();
        // An all-default pattern is ok for the regex type
        assert!(regex_struct.setup().is_ok());
        // Setup changes nothing
        assert_eq!(regex_struct, copy);

        // Any non-default field is an error

        let mut regex_struct = PatternIp {
            pattern_type: PatternType::Regex,
            ipv4_mask: Some(24),
            ..Default::default()
        };
        assert!(regex_struct.setup().is_err());

        let mut regex_struct = PatternIp {
            pattern_type: PatternType::Regex,
            ipv6_mask: Some(64),
            ..Default::default()
        };
        assert!(regex_struct.setup().is_err());

        let mut regex_struct = PatternIp {
            pattern_type: PatternType::Regex,
            ignore_cidr: vec!["192.168.1/24".into()],
            ..Default::default()
        };
        assert!(regex_struct.setup().is_err());
    }

    #[test]
    fn test_setup_type_ip() {
        for pattern_type in [PatternType::Ip, PatternType::Ipv4, PatternType::Ipv6] {
            let mut ip_struct = PatternIp {
                pattern_type,
                ..Default::default()
            };
            assert!(ip_struct.setup().is_ok());

            let mut ip_struct = PatternIp {
                pattern_type,
                ipv4_mask: Some(24),
                ..Default::default()
            };
            match pattern_type {
                PatternType::Ipv6 => assert!(ip_struct.setup().is_err()),
                _ => {
                    assert!(ip_struct.setup().is_ok());
                    assert_eq!(
                        ip_struct.ipv4_bitmask,
                        Some(Ipv4Addr::new(255, 255, 255, 0))
                    );
                }
            }

            let mut ip_struct = PatternIp {
                pattern_type,
                ipv6_mask: Some(64),
                ..Default::default()
            };
            match pattern_type {
                PatternType::Ipv4 => assert!(ip_struct.setup().is_err()),
                _ => {
                    assert!(ip_struct.setup().is_ok());
                    assert_eq!(
                        ip_struct.ipv6_bitmask,
                        Some(Ipv6Addr::new(0xffff, 0xffff, 0xffff, 0xffff, 0, 0, 0, 0))
                    );
                }
            }

            let mut ip_struct = PatternIp {
                pattern_type,
                ignore_cidr: vec!["192.168.1.0/24".into()],
                ..Default::default()
            };
            match pattern_type {
                PatternType::Ipv6 => assert!(ip_struct.setup().is_err()),
                _ => {
                    assert!(ip_struct.setup().is_ok());
                    assert_eq!(
                        ip_struct.ignore_cidr_normalized,
                        vec![Cidr::IPv4((
                            Ipv4Addr::new(192, 168, 1, 0),
                            Ipv4Addr::new(255, 255, 255, 0)
                        ))]
                    );
                }
            }

            let mut ip_struct = PatternIp {
                pattern_type,
                ignore_cidr: vec!["::ffff:192.168.1.0/24".into()],
                ..Default::default()
            };
            match pattern_type {
                PatternType::Ipv6 => assert!(ip_struct.setup().is_err()),
                _ => {
                    assert!(ip_struct.setup().is_ok());
                    assert_eq!(
                        ip_struct.ignore_cidr_normalized,
                        vec![Cidr::IPv4((
                            Ipv4Addr::new(192, 168, 1, 0),
                            Ipv4Addr::new(255, 255, 255, 0)
                        ))]
                    );
                }
            }

            let mut ip_struct = PatternIp {
                pattern_type,
                ignore_cidr: vec!["2001:db8:85a3:9de5::8a2e:370:7334/64".into()],
                ..Default::default()
            };
            match pattern_type {
                PatternType::Ipv4 => assert!(ip_struct.setup().is_err()),
                _ => {
                    assert!(ip_struct.setup().is_ok());
                    assert_eq!(
                        ip_struct.ignore_cidr_normalized,
                        vec![Cidr::IPv6((
                            Ipv6Addr::new(0x2001, 0xdb8, 0x85a3, 0x9de5, 0, 0, 0, 0),
                            Ipv6Addr::new(u16::MAX, u16::MAX, u16::MAX, u16::MAX, 0, 0, 0, 0),
                        ))]
                    );
                }
            }
        }
    }

    #[test]
    fn test_is_ignore() {
        let mut ip_struct = PatternIp {
            pattern_type: PatternType::Ip,
            ignore_cidr: vec!["10.0.0.0/8".into(), "2001:db8:85a3:9de5::/64".into()],
            ..Default::default()
        };
        ip_struct.setup().unwrap();
        assert!(!ip_struct.is_ignore("prout"));
        assert!(!ip_struct.is_ignore("1.1.1.1"));
        assert!(!ip_struct.is_ignore("11.1.1.1"));
        assert!(!ip_struct.is_ignore("2001:db8:85a3:9de6::1"));
        assert!(ip_struct.is_ignore("10.1.1.1"));
        assert!(ip_struct.is_ignore("2001:db8:85a3:9de5::1"));
    }

    #[test]
    fn test_normalize() {
        let ipv4_32 = "1.1.1.1";
        let ipv4_32_norm = "1.1.1.1";
        let ipv4_24 = "1.1.1.0";
        let ipv4_24_norm = "1.1.1.0";
        let ipv4_24_mask = "1.1.1.0/24";
        let ipv6_128 = "2001:db8:85a3:9de5:0:0:01:02";
        let ipv6_128_norm = "2001:db8:85a3:9de5::1:2";
        let ipv6_64 = "2001:db8:85a3:9de5:0:0:0:0";
        let ipv6_64_norm = "2001:db8:85a3:9de5::";
        let ipv6_64_mask = "2001:db8:85a3:9de5::/64";

        for (ipv4_mask, ipv6_mask) in [(Some(24), None), (None, Some(64)), (Some(24), Some(64))] {
            let mut ip_struct = PatternIp {
                pattern_type: PatternType::Ip,
                ipv4_mask,
                ipv6_mask,
                ..Default::default()
            };
            ip_struct.setup().unwrap();

            let mut ipv4_32_modified = ipv4_32.to_string();
            let mut ipv4_24_modified = ipv4_24.to_string();
            let mut ipv6_128_modified = ipv6_128.to_string();
            let mut ipv6_64_modified = ipv6_64.to_string();

            ip_struct.normalize(&mut ipv4_32_modified);
            ip_struct.normalize(&mut ipv4_24_modified);
            ip_struct.normalize(&mut ipv6_128_modified);
            ip_struct.normalize(&mut ipv6_64_modified);

            match ipv4_mask {
                Some(_) => {
                    // modified with mask
                    assert_eq!(
                        ipv4_32_modified, ipv4_24_mask,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                    assert_eq!(
                        ipv4_24_modified, ipv4_24_mask,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                }
                None => {
                    // only normalized
                    assert_eq!(
                        ipv4_32_modified, ipv4_32_norm,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                    assert_eq!(
                        ipv4_24_modified, ipv4_24_norm,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                }
            }

            match ipv6_mask {
                Some(_) => {
                    // modified with mask
                    assert_eq!(
                        ipv6_128_modified, ipv6_64_mask,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                    assert_eq!(
                        ipv6_64_modified, ipv6_64_mask,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                }
                None => {
                    // only normalized
                    assert_eq!(
                        ipv6_128_modified, ipv6_128_norm,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                    assert_eq!(
                        ipv6_64_modified, ipv6_64_norm,
                        "ipv4mask: {:?}, ipv6mask: {:?}",
                        ipv4_mask, ipv6_mask
                    );
                }
            }
        }
    }

    pub const VALID_IPV4: [&str; 8] = [
        "252.4.92.250",
        "212.4.92.210",
        "112.4.92.110",
        "83.4.92.35",
        "83.4.92.0",
        "3.254.92.4",
        "1.2.3.4",
        "255.255.255.255",
    ];

    pub const VALID_IPV6: [&str; 42] = [
        // all accepted characters
        "0123:4567:89:ab:cdef:AB:CD:EF",
        // ipv6-mapped ipv4
        "::ffff:1.2.3.4",
        "ffff::1.2.3.4",
        // 8 blocks
        "1111:2:3:4:5:6:7:8888",
        // 7 blocks
        "::2:3:4:5:6:7:8888",
        "1111::3:4:5:6:7:8888",
        "1111:2::4:5:6:7:8888",
        "1111:2:3::5:6:7:8888",
        "1111:2:3:4::6:7:8888",
        "1111:2:3:4:5::7:8888",
        "1111:2:3:4:5:6::8888",
        "1111:2:3:4:5:6:7::",
        // 6 blocks
        "::3:4:5:6:7:8888",
        "1111::4:5:6:7:8888",
        "1111:2::5:6:7:8888",
        "1111:2:3::6:7:8888",
        "1111:2:3:4::7:8888",
        "1111:2:3:4:5::8888",
        "1111:2:3:4:5:6::",
        // 5 blocks
        "::4:5:6:7:8888",
        "1111::5:6:7:8888",
        "1111:2::6:7:8888",
        "1111:2:3::7:8888",
        "1111:2:3:4::8888",
        "1111:2:3:4:5::",
        // 4 blocks
        "::5:6:7:8888",
        "1111::6:7:8888",
        "1111:2::7:8888",
        "1111:2:3::8888",
        "1111:2:3:4::",
        // 3 blocks
        "::6:7:8888",
        "1111::7:8888",
        "1111:2::8888",
        "1111:2:3::",
        // 2 blocks
        "::7:8888",
        "1111::8888",
        "1111:2::",
        // 1 block
        "::8",
        "::8888",
        "1::",
        "1111::",
        // 0 block
        "::",
    ];

    #[test]
    fn test_ip_regexes() {
        for pattern_type in [PatternType::Ip, PatternType::Ipv4, PatternType::Ipv6] {
            let mut pattern = Pattern {
                ip: PatternIp {
                    pattern_type,
                    ..Default::default()
                },
                ..Default::default()
            };
            assert!(pattern.setup("zblorg").is_ok());
            let regex = pattern.compiled().unwrap();

            let accepts_ipv4 = pattern_type == PatternType::Ip || pattern_type == PatternType::Ipv4;
            let accepts_ipv6 = pattern_type == PatternType::Ip || pattern_type == PatternType::Ipv6;

            macro_rules! assert2 {
                ($a:expr) => {
                    assert!($a, "PatternType: {pattern_type:?}");
                };
            }

            for ip in VALID_IPV4 {
                assert2!(accepts_ipv4 == regex.is_match(ip));
            }

            assert2!(!regex.is_match(".1.2.3.4"));
            assert2!(!regex.is_match(" 1.2.3.4"));
            assert2!(!regex.is_match("1.2.3.4 "));
            assert2!(!regex.is_match("1.2. 3.4"));
            assert2!(!regex.is_match("257.2.3.4"));
            assert2!(!regex.is_match("074.2.3.4"));
            assert2!(!regex.is_match("1.2.3.4.5"));
            assert2!(!regex.is_match("1.2..4"));
            assert2!(!regex.is_match("1.2..3.4"));

            for ip in VALID_IPV6 {
                assert2!(accepts_ipv6 == regex.is_match(ip));
            }

            assert2!(!regex.is_match("1:"));
            assert2!(!regex.is_match("1:::"));
            assert2!(!regex.is_match("1:::2"));
            assert2!(!regex.is_match("1:2:3:4:5:6:7:8:9"));
            assert2!(!regex.is_match("1:23456:3:4:5:6:7:8"));
            assert2!(!regex.is_match("1:2:3:4:5:6:7:8:"));
        }
    }

    #[tokio::test(flavor = "multi_thread")]
    async fn ip_pattern_matches() {
        let mut join_set = JoinSet::new();

        for ip in VALID_IPV4.iter().chain(&VALID_IPV6) {
            for line in [
                format!("borned {ip} test"),
                //
                format!("right-unborned {ip} text"),
                format!("right-unborned {ip}text"),
                format!("right-unborned {ip}:"),
                //
                format!("left-unborned text {ip}"),
                format!("left-unborned text{ip}"),
                format!("left-unborned :{ip}"),
                //
                format!("full-unborned text {ip} text"),
                format!("full-unborned text{ip} text"),
                format!("full-unborned text {ip}text"),
                format!("full-unborned text{ip}text"),
                format!("full-unborned :{ip}:"),
                format!("full-unborned : {ip}:"),
            ] {
                join_set.spawn(tokio::spawn(async move {
                    let bed = TestBed::default();
                    let filter = Filter::new_static(
                        vec![Action::new(
                            vec!["sh", "-c", &format!("echo <ip> >> {}", &bed.out_file)],
                            None,
                            false,
                            "test",
                            "test",
                            "a1",
                            &bed.ip_patterns,
                            0,
                        )],
                        vec![
                            "^borned <ip> test",
                            "^right-unborned <ip>.*",
                            "^left-unborned .*<ip>",
                            "^full-unborned .*<ip>.*",
                        ],
                        None,
                        None,
                        "test",
                        "test",
                        Duplicate::Ignore,
                        &bed.ip_patterns,
                    );
                    let bed = bed.part2(filter, now(), None).await;
                    assert_eq!(
                        bed.manager.handle_line(&line, now()).await,
                        React::Trigger,
                        "line: {line}"
                    );
                    tokio::time::sleep(std::time::Duration::from_millis(50)).await;
                    assert_eq!(
                        &read_to_string(&bed.out_file).await.unwrap().trim_end(),
                        ip,
                        "line: {line}"
                    );
                }));
            }
        }

        join_set.join_all().await;
    }
}
@@ -1,122 +0,0 @@
use std::{
    net::{AddrParseError, IpAddr, Ipv4Addr, Ipv6Addr},
    str::FromStr,
};

/// Normalize a string as an IP address.
/// IPv6-mapped IPv4 addresses are cast to IPv4.
pub fn normalize(ip: &str) -> Result<IpAddr, AddrParseError> {
    IpAddr::from_str(ip).map(normalize_ip)
}

/// Normalize an IP address.
/// IPv6-mapped IPv4 addresses are cast to IPv4.
pub fn normalize_ip(ip: IpAddr) -> IpAddr {
    match ip {
        IpAddr::V4(_) => ip,
        IpAddr::V6(ipv6) => match ipv6.to_ipv4_mapped() {
            Some(ipv4) => IpAddr::V4(ipv4),
            None => ip,
        },
    }
}

/// Creates an [`Ipv4Addr`] bitmask from a prefix length
pub fn mask_to_ipv4(mask_count: u8) -> Result<Ipv4Addr, String> {
    if mask_count > 32 {
        Err(format!(
            "an IPv4 mask must be 32 max. {mask_count} is too big."
        ))
    } else {
        let mask = match mask_count {
            0 => 0u32,
            n => u32::MAX << (32 - n),
        };
        let mask = Ipv4Addr::from_bits(mask);
        Ok(mask)
    }
}

/// Creates an [`Ipv6Addr`] bitmask from a prefix length
pub fn mask_to_ipv6(mask_count: u8) -> Result<Ipv6Addr, String> {
    if mask_count > 128 {
        Err(format!(
            "an IPv6 mask must be 128 max. {mask_count} is too big."
        ))
    } else {
        let mask = match mask_count {
            0 => 0u128,
            n => u128::MAX << (128 - n),
        };
        let mask = Ipv6Addr::from_bits(mask);
        Ok(mask)
    }
}

#[cfg(test)]
mod utils_tests {
    use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

    use super::{mask_to_ipv4, mask_to_ipv6, normalize};

    #[test]
    fn test_normalize_ip() {
        assert_eq!(
            normalize("83.44.23.14"),
            Ok(IpAddr::V4(Ipv4Addr::new(83, 44, 23, 14)))
        );
        assert_eq!(
            normalize("2001:db8:85a3::8a2e:370:7334"),
            Ok(IpAddr::V6(Ipv6Addr::new(
                0x2001, 0xdb8, 0x85a3, 0x0, 0x0, 0x8a2e, 0x370, 0x7334
            )))
        );
        assert_eq!(
            normalize("::ffff:192.168.1.34"),
            Ok(IpAddr::V4(Ipv4Addr::new(192, 168, 1, 34)))
        );
        assert_eq!(
            normalize("::ffff:1.2.3.4"),
            Ok(IpAddr::V4(Ipv4Addr::new(1, 2, 3, 4)))
        );
        // octal numbers are forbidden
        assert!(normalize("083.44.23.14").is_err());
    }

    #[test]
    fn test_mask_to_ipv4() {
        assert!(mask_to_ipv4(33).is_err());
        assert!(mask_to_ipv4(100).is_err());
        assert_eq!(mask_to_ipv4(16), Ok(Ipv4Addr::new(255, 255, 0, 0)));
        assert_eq!(mask_to_ipv4(24), Ok(Ipv4Addr::new(255, 255, 255, 0)));
        assert_eq!(mask_to_ipv4(25), Ok(Ipv4Addr::new(255, 255, 255, 128)));
        assert_eq!(mask_to_ipv4(26), Ok(Ipv4Addr::new(255, 255, 255, 192)));
        assert_eq!(mask_to_ipv4(32), Ok(Ipv4Addr::new(255, 255, 255, 255)));
    }

    #[test]
    fn test_mask_to_ipv6() {
        assert!(mask_to_ipv6(129).is_err());
        assert!(mask_to_ipv6(254).is_err());
        assert_eq!(
            mask_to_ipv6(56),
            Ok(Ipv6Addr::new(0xffff, 0xffff, 0xffff, 0xff00, 0, 0, 0, 0))
        );
        assert_eq!(
            mask_to_ipv6(64),
            Ok(Ipv6Addr::new(0xffff, 0xffff, 0xffff, 0xffff, 0, 0, 0, 0))
        );
        assert_eq!(
            mask_to_ipv6(112),
            Ok(Ipv6Addr::new(
                0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0
            ))
        );
        assert_eq!(
            mask_to_ipv6(128),
            Ok(Ipv6Addr::new(
                0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff
            ))
        );
    }
}
@@ -1,348 +0,0 @@
use std::cmp::Ordering;

use regex::{Regex, RegexSet};
use serde::{Deserialize, Serialize};

mod ip;

pub use ip::{PatternIp, PatternType};

#[derive(Clone, Debug, Deserialize, Serialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Pattern {
    #[serde(default)]
    pub regex: String,

    #[serde(default, skip_serializing_if = "Vec::is_empty")]
    pub ignore: Vec<String>,

    #[serde(default, rename = "ignoreregex", skip_serializing_if = "Vec::is_empty")]
    pub ignore_regex: Vec<String>,
    #[serde(skip)]
    pub compiled_ignore_regex: RegexSet,

    #[serde(flatten)]
    pub ip: PatternIp,

    #[serde(skip)]
    pub name: String,
    #[serde(skip)]
    pub name_with_braces: String,
}

impl Pattern {
    #[cfg(test)]
    pub fn from_name(name: &str) -> Pattern {
        Pattern {
            name: name.into(),
            ..Pattern::default()
        }
    }

    pub fn name_with_braces(&self) -> &String {
        &self.name_with_braces
    }

    pub fn pattern_type(&self) -> PatternType {
        self.ip.pattern_type
    }

    pub fn setup(&mut self, name: &str) -> Result<(), String> {
        self._setup(name)
            .map_err(|msg| format!("pattern {}: {}", name, msg))
    }
    fn _setup(&mut self, name: &str) -> Result<(), String> {
        self.name = name.to_string();
        self.name_with_braces = format!("<{}>", name);

        if self.name.is_empty() {
            return Err("pattern name is empty".into());
        }
        if self.name.contains('.') {
            return Err("character '.' is not allowed in pattern name".into());
        }

        if let Some(regex) = self.ip.setup()? {
            if !self.regex.is_empty() {
                return Err("patterns of type ip, ipv4, ipv6 have a built-in regex defined. you should not define it yourself".into());
            }
            self.regex = regex;
        }

        if self.regex.is_empty() {
            return Err("regex is empty".into());
        }
        let compiled = self.compiled()?;

        self.regex = format!("(?P<{}>{})", self.name, self.regex);

        for ignore in &self.ignore {
            if !compiled.is_match(ignore) {
                return Err(format!(
                    "ignore '{}' doesn't match pattern. It should be fixed or removed.",
                    ignore,
                ));
            }
        }

        self.compiled_ignore_regex =
            match RegexSet::new(self.ignore_regex.iter().map(|regex| format!("^{}$", regex))) {
                Ok(set) => set,
                Err(err) => {
                    // Recompile regexes one by one to display a more specific error
                    for ignore_regex in &self.ignore_regex {
                        Regex::new(&format!("^{}$", ignore_regex))
                            .map_err(|err| format!("ignoreregex '{}': {}", ignore_regex, err))?;
                    }
                    // Here we should have returned an error already.
                    // Returning a more generic error if not (which shouldn't happen).
                    return Err(format!("ignoreregex: {}", err));
                }
            };
        self.ignore_regex = Vec::default();

        Ok(())
    }

    /// Returns the pattern's regex compiled standalone, enclosed in ^ and $.
    /// It's not kept as a field of the [`Pattern`] struct
    /// because it's only used during setup and for the `trigger` manual command.
    ///
    /// *Yes, I know, avoiding a few bytes of memory is certainly a bad idea.*
    /// *I'm open to discussion.*
    fn compiled(&self) -> Result<Regex, String> {
        Regex::new(&format!("^{}$", self.regex)).map_err(|err| err.to_string())
    }

    /// Normalize the pattern.
    /// This should happen after checking on ignores.
    /// No-op when the pattern is not an IP.
    /// Otherwise BitAnd the IP with its configured mask,
    /// and add the /<mask>
    pub fn normalize(&self, match_: &mut String) {
        self.ip.normalize(match_)
    }

    /// Whether the provided string is a match for this pattern or not.
    ///
    /// Doesn't take into account ignore and ignore_regex:
    /// use [`Self::is_ignore`] to access this information.
    pub fn is_match(&self, match_: &str) -> bool {
        match self.compiled() {
            Ok(regex) => regex.is_match(match_),
            // Should not happen, this function should be called only after
            // [`Pattern::setup`]
            Err(_) => false,
        }
    }

    /// Whether the provided string is ignored by the ignore or ignoreregex
    /// fields of this pattern.
    ///
    /// Can be used in combination with [`Self::is_match`].
    pub fn is_ignore(&self, match_: &str) -> bool {
        self.ignore.iter().any(|ignore| ignore == match_)
            || self.compiled_ignore_regex.is_match(match_)
            || self.ip.is_ignore(match_)
    }
}

// This is required to be added to a BTreeSet.
// We compare Patterns by their names, which are unique.
// This is enforced by Patterns' names coming from their keys in a BTreeMap in Config
impl Ord for Pattern {
|
||||
fn cmp(&self, other: &Self) -> Ordering {
|
||||
self.name.cmp(&other.name)
|
||||
}
|
||||
}
|
||||
impl PartialOrd for Pattern {
|
||||
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
|
||||
Some(self.cmp(other))
|
||||
}
|
||||
}
|
||||
impl Eq for Pattern {}
|
||||
impl PartialEq for Pattern {
|
||||
fn eq(&self, other: &Self) -> bool {
|
||||
self.name == other.name
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
impl Pattern {
|
||||
/// Test-only constructor designed to be easy to call
|
||||
pub fn new(name: &str, regex: &str) -> Result<Self, String> {
|
||||
let mut pattern = Self {
|
||||
regex: regex.into(),
|
||||
..Default::default()
|
||||
};
|
||||
pattern.setup(name)?;
|
||||
Ok(pattern)
|
||||
}
|
||||
|
||||
/// Test-only constructor designed to be easy to call.
|
||||
/// Constructs a full [`super::Patterns`] collection with one given pattern
|
||||
pub fn new_map(name: &str, regex: &str) -> Result<super::Patterns, String> {
|
||||
Ok(std::iter::once((name.into(), Self::new(name, regex)?.into())).collect())
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
pub mod tests {
|
||||
|
||||
use super::*;
|
||||
|
||||
pub fn default_pattern() -> Pattern {
|
||||
Pattern {
|
||||
regex: "".into(),
|
||||
ignore: Vec::new(),
|
||||
ignore_regex: Vec::new(),
|
||||
compiled_ignore_regex: RegexSet::default(),
|
||||
ip: PatternIp::default(),
|
||||
name: "".into(),
|
||||
name_with_braces: "".into(),
|
||||
}
|
||||
}
|
||||
|
||||
pub fn ok_pattern() -> Pattern {
|
||||
let mut pattern = default_pattern();
|
||||
pattern.regex = "[abc]".into();
|
||||
pattern
|
||||
}
|
||||
|
||||
pub fn ok_pattern_with_ignore() -> Pattern {
|
||||
let mut pattern = ok_pattern();
|
||||
pattern.ignore.push("a".into());
|
||||
pattern
|
||||
}
|
||||
|
||||
pub fn boubou_pattern_with_ignore() -> Pattern {
|
||||
let mut pattern = ok_pattern();
|
||||
pattern.regex = "(?:bou){1,3}".to_string();
|
||||
pattern.ignore.push("boubou".into());
|
||||
pattern
|
||||
}
|
||||
|
||||
pub fn number_pattern() -> Pattern {
|
||||
let mut pattern = ok_pattern();
|
||||
pattern.regex = "[0-1]+".to_string();
|
||||
pattern
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn setup_missing_information() {
|
||||
let mut pattern;
|
||||
|
||||
// Empty name
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "abc".into();
|
||||
assert!(pattern.setup("").is_err());
|
||||
|
||||
// '.' in name
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "abc".into();
|
||||
assert!(pattern.setup("na.me").is_err());
|
||||
|
||||
// Empty regex
|
||||
pattern = default_pattern();
|
||||
assert!(pattern.setup("name").is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn setup_regex() {
|
||||
let mut pattern;
|
||||
|
||||
// regex ok
|
||||
pattern = ok_pattern();
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
// regex ok
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "abc".into();
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
// regex ko
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "[abc".into();
|
||||
assert!(pattern.setup("name").is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn setup_ignore() {
|
||||
let mut pattern;
|
||||
|
||||
// ignore ok
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "[abc]".into();
|
||||
pattern.ignore.push("a".into());
|
||||
pattern.ignore.push("b".into());
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
// ignore ko
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "[abc]".into();
|
||||
pattern.ignore.push("d".into());
|
||||
assert!(pattern.setup("name").is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn setup_ignore_regex() {
|
||||
let mut pattern;
|
||||
|
||||
// ignore_regex ok
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "[abc]".into();
|
||||
pattern.ignore_regex.push("[a]".into());
|
||||
pattern.ignore_regex.push("a".into());
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
// ignore_regex ko
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "[abc]".into();
|
||||
pattern.ignore.push("[a".into());
|
||||
assert!(pattern.setup("name").is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn setup_yml() {
|
||||
let mut pattern: Pattern = serde_yaml::from_str("{}").unwrap();
|
||||
assert!(pattern.setup("name").is_err());
|
||||
|
||||
let mut pattern: Pattern = serde_yaml::from_str(r#"regex: "[abc]""#).unwrap();
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
let mut pattern: Pattern = serde_yaml::from_str(r#"type: ip"#).unwrap();
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
let mut pattern: Pattern = serde_yaml::from_str(r#"type: ipv4"#).unwrap();
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
let mut pattern: Pattern = serde_yaml::from_str(r#"type: ipv6"#).unwrap();
|
||||
assert!(pattern.setup("name").is_ok());
|
||||
|
||||
assert!(serde_yaml::from_str::<Pattern>(r#"type: zblorg"#).is_err());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn is_ignore() {
|
||||
let mut pattern;
|
||||
|
||||
// ignore ok
|
||||
pattern = default_pattern();
|
||||
pattern.regex = "[abcdefg]".into();
|
||||
pattern.ignore.push("a".into());
|
||||
pattern.ignore.push("b".into());
|
||||
pattern.ignore_regex.push("c".into());
|
||||
pattern.ignore_regex.push("[de]".into());
|
||||
|
||||
pattern.setup("name").unwrap();
|
||||
assert!(pattern.is_ignore("a"));
|
||||
assert!(pattern.is_ignore("b"));
|
||||
assert!(pattern.is_ignore("c"));
|
||||
assert!(pattern.is_ignore("d"));
|
||||
assert!(pattern.is_ignore("e"));
|
||||
assert!(!pattern.is_ignore("f"));
|
||||
assert!(!pattern.is_ignore("g"));
|
||||
assert!(!pattern.is_ignore("h"));
|
||||
}
|
||||
}
|
||||
|
|
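The `_setup` method above wraps the user-supplied regex in a named capture group and rejects pattern names containing `.` (the name later doubles as a regex group identifier). A minimal std-only sketch of that wrapping step; the helper name `wrap_in_named_group` is hypothetical, and the real code additionally compiles the result with the `regex` crate:

```rust
// Hypothetical sketch of the name-validation and regex-wrapping step
// performed by Pattern::_setup above.
fn wrap_in_named_group(name: &str, regex: &str) -> Result<String, String> {
    if name.is_empty() {
        return Err("pattern name is empty".into());
    }
    if name.contains('.') {
        // '.' is rejected because the name becomes a regex group identifier
        return Err("character '.' is not allowed in pattern name".into());
    }
    Ok(format!("(?P<{}>{})", name, regex))
}

fn main() {
    assert_eq!(
        wrap_in_named_group("ip", "[0-9.]+").unwrap(),
        "(?P<ip>[0-9.]+)"
    );
    assert!(wrap_in_named_group("na.me", "x").is_err());
    assert!(wrap_in_named_group("", "x").is_err());
    println!("ok");
}
```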
@@ -1,218 +0,0 @@
use std::{collections::BTreeMap, io::Error, path, process::Stdio};

#[cfg(target_os = "macos")]
use std::os::darwin::fs::MetadataExt;
#[cfg(target_os = "freebsd")]
use std::os::freebsd::fs::MetadataExt;
#[cfg(target_os = "illumos")]
use std::os::illumos::fs::MetadataExt;
#[cfg(target_os = "linux")]
use std::os::linux::fs::MetadataExt;
#[cfg(target_os = "netbsd")]
use std::os::netbsd::fs::MetadataExt;
#[cfg(target_os = "openbsd")]
use std::os::openbsd::fs::MetadataExt;
#[cfg(target_os = "solaris")]
use std::os::solaris::fs::MetadataExt;

use serde::{Deserialize, Serialize};
use tokio::{
    fs,
    process::{Child, Command},
};
use tracing::{debug, warn};

// TODO the commented options block execution of the program
// while developing in my home directory.
// Some options may still be useful in production environments.
fn systemd_default_options(working_directory: &str) -> BTreeMap<String, Vec<String>> {
    BTreeMap::from(
        [
            // reaction slice (does nothing if inexistent)
            ("Slice", vec!["reaction.slice"]),
            // Started in its own directory
            ("WorkingDirectory", vec![working_directory]),
            // No file access except own directory
            ("ReadWritePaths", vec![working_directory]),
            ("ReadOnlyPaths", vec!["/"]),
            ("InaccessiblePaths", vec!["/boot", "/etc"]),
            // Protect special filesystems
            ("PrivateDevices", vec!["true"]),
            ("PrivateMounts", vec!["true"]),
            ("PrivateTmp", vec!["true"]),
            // ("PrivateUsers", vec!["true"]),
            ("ProcSubset", vec!["pid"]),
            ("ProtectClock", vec!["true"]),
            ("ProtectControlGroups", vec!["true"]),
            #[cfg(not(debug_assertions))]
            ("ProtectHome", vec!["true"]),
            ("ProtectHostname", vec!["true"]),
            ("ProtectKernelLogs", vec!["true"]),
            ("ProtectKernelModules", vec!["true"]),
            ("ProtectKernelTunables", vec!["true"]),
            ("ProtectProc", vec!["invisible"]),
            ("ProtectSystem", vec!["strict"]),
            // Various Protections
            ("LockPersonality", vec!["true"]),
            ("NoNewPrivileges", vec!["true"]),
            ("AmbientCapabilities", vec![""]),
            ("CapabilityBoundingSet", vec![""]),
            // Isolate File
            ("RemoveIPC", vec!["true"]),
            ("RestrictNamespaces", vec!["true"]),
            ("RestrictSUIDSGID", vec!["true"]),
            ("SystemCallArchitectures", vec!["native"]),
            (
                "SystemCallFilter",
                vec!["@system-service", "~@privileged", "~@resources", "~@setuid"],
            ),
            // User
            // FIXME Setting another user doesn't work, because of stdio pipe permission errors
            // ("DynamicUser", vec!["true"]),
            // ("User", vec!["reaction-plugin-test"]),
            // Too restrictive
            // ("NoExecPaths", vec!["/"]),
            // ("RestrictAddressFamilies", vec![""]),
        ]
        .map(|(k, v)| (k.into(), v.into_iter().map(|v| v.into()).collect())),
    )
}

#[derive(Clone, Debug, Deserialize, Serialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Plugin {
    #[serde(skip)]
    pub name: String,

    pub path: String,
    /// Check that the plugin file's owner is root
    #[serde(default = "_true")]
    pub check_root: bool,
    /// Enable systemd containerization
    #[serde(default = "_true")]
    pub systemd: bool,
    /// Options for `run0`
    #[serde(default)]
    pub systemd_options: BTreeMap<String, Vec<String>>,
}

fn _true() -> bool {
    true
}

// NOTE
// `run0` can be used for security customisation.
// With the --pipe option, raw stdio fds are transmitted to the underlying command, so there is no overhead.

impl Plugin {
    pub fn setup(&mut self, name: &str) -> Result<(), String> {
        self.name = name.to_string();

        if self.path.is_empty() {
            return Err("can't specify empty plugin path".into());
        }

        // Only when testing, make relative paths absolute
        #[cfg(debug_assertions)]
        if !self.path.starts_with("/") {
            self.path = format!(
                "{}/{}",
                std::env::current_dir()
                    .map_err(|err| format!("error on working directory: {err}"))?
                    .to_string_lossy(),
                self.path
            );
        }

        // Disallow relative paths
        if !self.path.starts_with("/") {
            return Err(format!("plugin paths must be absolute: {}", self.path));
        }
        Ok(())
    }

    /// Override default options with user-defined options, when defined.
    pub fn systemd_setup(&self, working_directory: &str) -> BTreeMap<String, Vec<String>> {
        let mut new_options = systemd_default_options(working_directory);
        for (option, value) in self.systemd_options.iter() {
            new_options.insert(option.clone(), value.clone());
        }
        new_options
    }

    pub async fn launch(&self, state_directory: &str) -> Result<Child, std::io::Error> {
        // owner check
        if self.check_root {
            let path = self.path.clone();
            let stat = fs::metadata(path).await?;

            if stat.st_uid() != 0 {
                return Err(Error::other("plugin file is not owned by root"));
            }
        }

        let self_uid = if self.systemd {
            Some(
                // Well well we want to check if we're root
                #[allow(unsafe_code)]
                unsafe {
                    nix::libc::geteuid()
                },
            )
        } else {
            None
        };

        // Create plugin working directory (also state directory)
        let plugin_working_directory = format!("{state_directory}/plugin_data/{}", self.name);
        fs::create_dir_all(&plugin_working_directory).await?;

        let mut command = if self_uid.is_some_and(|self_uid| self_uid == 0) {
            let mut command = Command::new("run0");
            // --pipe gives direct, non-emulated stdio access, for better performance.
            command.arg("--pipe");
            // run the command inside the same slice as reaction
            command.arg("--slice-inherit");

            // Make path absolute for systemd
            let full_workdir = path::absolute(&plugin_working_directory)?;
            let full_workdir = full_workdir.to_str().ok_or_else(|| {
                std::io::Error::new(
                    std::io::ErrorKind::InvalidFilename,
                    format!(
                        "Could not absolutize plugin working directory {plugin_working_directory}"
                    ),
                )
            })?;
            let merged_systemd_options = self.systemd_setup(full_workdir);
            // run0 options
            for (option, values) in merged_systemd_options.iter() {
                for value in values.iter() {
                    command.arg("--property").arg(format!("{option}={value}"));
                }
            }
            command.arg(&self.path);
            command
        } else {
            if self.systemd {
                warn!("Disabling systemd because reaction does not run as root");
            }
            let mut command = Command::new(&self.path);
            command.current_dir(plugin_working_directory);
            command
        };
        command.arg("serve");
        debug!(
            "plugin {}: running command: {:?}",
            self.name,
            command.as_std()
        );
        command
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .env("RUST_BACKTRACE", "1")
            .spawn()
    }
}
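The `systemd_setup` method above merges user-supplied `run0` properties over the hardened defaults, key by key, with user values simply replacing defaults. A std-only sketch of that merge semantics; the function name `merge_options` is hypothetical:

```rust
use std::collections::BTreeMap;

// Sketch of the option-merging behaviour of Plugin::systemd_setup above:
// a user-supplied option replaces the default for the same key,
// while untouched defaults are kept.
fn merge_options(
    mut defaults: BTreeMap<String, Vec<String>>,
    user: &BTreeMap<String, Vec<String>>,
) -> BTreeMap<String, Vec<String>> {
    for (k, v) in user {
        defaults.insert(k.clone(), v.clone());
    }
    defaults
}

fn main() {
    let defaults = BTreeMap::from([
        ("ProtectHome".to_string(), vec!["true".to_string()]),
        ("PrivateTmp".to_string(), vec!["true".to_string()]),
    ]);
    let user = BTreeMap::from([("ProtectHome".to_string(), vec!["false".to_string()])]);
    let merged = merge_options(defaults, &user);
    assert_eq!(merged["ProtectHome"], vec!["false".to_string()]);
    assert_eq!(merged["PrivateTmp"], vec!["true".to_string()]);
    println!("ok");
}
```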
@@ -1,62 +1,35 @@
 use std::{cmp::Ordering, collections::BTreeMap, hash::Hash};
 
-use reaction_plugin::StreamConfig;
-use regex::RegexSet;
-use serde::{Deserialize, Serialize};
-use serde_json::Value;
+use serde::Deserialize;
 
-use super::{Filter, Patterns, merge_attrs};
+use super::{Filter, Patterns};
 
-#[derive(Clone, Debug, Deserialize, Serialize)]
+#[derive(Clone, Debug, Deserialize)]
 #[cfg_attr(test, derive(Default))]
 #[serde(deny_unknown_fields)]
 pub struct Stream {
-    #[serde(default)]
-    pub cmd: Vec<String>,
-
-    #[serde(default)]
-    pub filters: BTreeMap<String, Filter>,
+    cmd: Vec<String>,
+    filters: BTreeMap<String, Filter>,
 
     #[serde(skip)]
-    pub name: String,
-
-    #[serde(skip)]
-    pub compiled_regex_set: RegexSet,
-    #[serde(skip)]
-    pub regex_index_to_filter_name: Vec<String>,
-
-    // Plugin-specific
-    #[serde(default, rename = "type", skip_serializing_if = "Option::is_none")]
-    pub stream_type: Option<String>,
-    #[serde(default, skip_serializing_if = "Value::is_null")]
-    pub options: Value,
+    name: String,
 }
 
 impl Stream {
     pub fn filters(&self) -> &BTreeMap<String, Filter> {
         &self.filters
     }
 
     pub fn get_filter(&self, filter_name: &str) -> Option<&Filter> {
         self.filters.get(filter_name)
     }
 
-    pub fn merge(&mut self, other: Stream) -> Result<(), String> {
-        self.cmd = merge_attrs(self.cmd.clone(), other.cmd, Vec::default(), "cmd")?;
-        self.stream_type = merge_attrs(self.stream_type.clone(), other.stream_type, None, "type")?;
-
-        for (key, filter) in other.filters.into_iter() {
-            if self.filters.insert(key.clone(), filter).is_some() {
-                return Err(format!(
-                    "filter {} is already defined. filter definitions can't be spread across multiple files.",
-                    key
-                ));
-            }
-        }
-
-        Ok(())
+    pub fn name(&self) -> &str {
+        &self.name
     }
 
-    pub fn is_plugin(&self) -> bool {
-        self.stream_type
-            .as_ref()
-            .is_some_and(|stream_type| stream_type != "cmd")
+    pub fn cmd(&self) -> &Vec<String> {
+        &self.cmd
     }
 
     pub fn setup(&mut self, name: &str, patterns: &Patterns) -> Result<(), String> {
@@ -74,18 +47,11 @@ impl Stream {
             return Err("character '.' is not allowed in stream name".into());
         }
 
-        if !self.is_plugin() {
-            if self.cmd.is_empty() {
-                return Err("cmd is empty".into());
-            }
-            if self.cmd[0].is_empty() {
-                return Err("cmd's first item is empty".into());
-            }
-            if !self.options.is_null() {
-                return Err("can't define options without a plugin type".into());
-            }
-        } else if !self.cmd.is_empty() {
-            return Err("can't define cmd and a plugin type".into());
+        if self.cmd.is_empty() {
+            return Err("cmd is empty".into());
         }
+        if self.cmd[0].is_empty() {
+            return Err("cmd's first item is empty".into());
+        }
 
         if self.filters.is_empty() {
@@ -96,32 +62,16 @@ impl Stream {
             filter.setup(name, key, patterns)?;
         }
 
-        let all_regexes: BTreeMap<_, _> = self
-            .filters
-            .values()
-            .flat_map(|filter| {
-                filter
-                    .regex
-                    .iter()
-                    .map(|regex| (regex, filter.name.clone()))
-            })
-            .collect();
-
-        self.compiled_regex_set = RegexSet::new(all_regexes.keys())
-            .map_err(|err| format!("too many regexes on the filters of this stream: {err}"))?;
-        self.regex_index_to_filter_name = all_regexes.into_values().collect();
-
         Ok(())
     }
 
-    pub fn to_stream_config(&self) -> Result<StreamConfig, String> {
-        Ok(StreamConfig {
-            stream_name: self.name.clone(),
-            stream_type: self.stream_type.clone().ok_or_else(|| {
-                format!("stream {} doesn't load a plugin. this is a bug!", self.name)
-            })?,
-            config: self.options.clone().into(),
-        })
+    #[cfg(test)]
+    pub fn from_filters(filters: BTreeMap<String, Filter>, name: &str) -> Self {
+        Self {
+            filters,
+            name: name.to_string(),
+            ..Default::default()
+        }
     }
 }
 
@@ -148,19 +98,26 @@ impl Hash for Stream {
 }
 
 #[cfg(test)]
-mod tests {
+pub mod tests {
 
     use super::*;
     use crate::concepts::filter::tests::ok_filter;
 
-    fn ok_stream() -> Stream {
+    fn default_stream() -> Stream {
         Stream {
-            cmd: vec!["command".into()],
-            filters: BTreeMap::from([("name".into(), ok_filter())]),
-            ..Default::default()
+            cmd: Vec::new(),
+            name: "".into(),
+            filters: BTreeMap::new(),
         }
     }
 
+    pub fn ok_stream() -> Stream {
+        let mut stream = default_stream();
+        stream.cmd = vec!["command".into()];
+        stream.filters.insert("name".into(), ok_filter());
+        stream
+    }
+
     #[test]
     fn test() {
         let mut stream;
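The `Stream::merge` method removed in this diff enforces that a filter name may only be defined once per stream, even when the configuration is split over several files: `BTreeMap::insert` returning `Some` reveals the duplicate. A std-only sketch of that uniqueness check; the helper name `merge_filters` and the `String` filter stand-in are hypothetical:

```rust
use std::collections::BTreeMap;

// Sketch of the duplicate-filter check in Stream::merge above:
// inserting a key that already exists returns the previous value,
// which is turned into a configuration error.
fn merge_filters(
    into: &mut BTreeMap<String, String>,
    other: BTreeMap<String, String>,
) -> Result<(), String> {
    for (key, filter) in other {
        if into.insert(key.clone(), filter).is_some() {
            return Err(format!(
                "filter {} is already defined. filter definitions can't be spread across multiple files.",
                key
            ));
        }
    }
    Ok(())
}

fn main() {
    let mut filters = BTreeMap::from([("ssh".to_string(), "filter A".to_string())]);
    // A new filter name merges fine
    let other = BTreeMap::from([("http".to_string(), "filter B".to_string())]);
    assert!(merge_filters(&mut filters, other).is_ok());
    // A duplicate filter name is rejected
    let dup = BTreeMap::from([("ssh".to_string(), "filter C".to_string())]);
    assert!(merge_filters(&mut filters, dup).is_err());
    println!("ok");
}
```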
354 src/daemon/filter.rs Normal file
@@ -0,0 +1,354 @@
use std::{
    collections::{BTreeMap, BTreeSet},
    process::Stdio,
    sync::Arc,
};

use chrono::Local;
use regex::Regex;
use tokio::sync::Semaphore;
use tracing::{error, info};

use crate::{
    concepts::{Action, Filter, Match, Pattern, Time},
    protocol::{Order, PatternStatus},
};

use super::{shutdown::ShutdownToken, SledDbExt, Tree};

#[derive(Clone)]
pub struct FilterManager {
    /// the Filter managed
    filter: &'static Filter,
    /// Does the filter have at least one action with an `after` directive?
    has_after: bool,
    /// Permits limiting the concurrency of action execution
    exec_limit: Option<Arc<Semaphore>>,
    /// Permits running pending actions on shutdown
    shutdown: ShutdownToken,
    /// Saves all the current Matches for this Filter
    matches: Tree<Match, BTreeSet<Time>>,
    /// Alternative view of the current Matches for O(1) cleaning of old Matches
    /// without added async Tasks to remove them
    ordered_times: Tree<Time, Match>,
    /// Saves all the current Triggers for this Filter
    triggers: Tree<Match, BTreeMap<Time, usize>>,
}

#[allow(clippy::unwrap_used)]
impl FilterManager {
    pub fn new(
        filter: &'static Filter,
        exec_limit: Option<Arc<Semaphore>>,
        shutdown: ShutdownToken,
        db: &sled::Db,
    ) -> Result<Self, sled::Error> {
        let manager = Self {
            filter,
            has_after: !filter.longuest_action_duration().is_zero(),
            exec_limit,
            shutdown,
            matches: db.open_filter_matches_tree(filter)?,
            ordered_times: db.open_filter_ordered_times_tree(filter)?,
            triggers: db.open_filter_triggers_tree(filter)?,
        };
        let now = Local::now();
        manager.clear_past_matches(now);
        manager.clear_past_triggers_and_schedule_future_actions(now);
        Ok(manager)
    }

    pub fn handle_line(&self, line: &str) {
        if let Some(match_) = self.filter.get_match(line) {
            self.handle_match(match_);
        }
    }

    fn handle_match(&self, m: Match) {
        let now = Local::now();
        self.clear_past_matches(now);

        let exec = match self.filter.retry() {
            None => true,
            Some(retry) => {
                self.add_match(&m, now);
                // Number of stored times for this match >= configured retry for this filter
                self.get_times(&m) >= retry as usize
            }
        };

        if exec {
            self.remove_match(&m);
            self.add_trigger(&m, now);
            self.schedule_exec(m.clone(), now);
        }
    }

    pub fn handle_order(
        &self,
        patterns: &BTreeMap<Arc<Pattern>, Regex>,
        order: Order,
    ) -> BTreeMap<String, PatternStatus> {
        let is_match = |match_: &Match| {
            match_
                .iter()
                .zip(self.filter.patterns())
                .filter_map(|(a_match, pattern)| {
                    patterns.get(pattern.as_ref()).map(|regex| (a_match, regex))
                })
                .all(|(a_match, regex)| regex.is_match(a_match))
        };

        let mut cs: BTreeMap<_, _> = self
            .matches
            .iter()
            // match filtering
            .filter(|(match_, _)| is_match(match_))
            .map(|(match_, times)| {
                if let Order::Flush = order {
                    self.remove_match(&match_);
                }
                (
                    match_,
                    PatternStatus {
                        matches: times.len(),
                        ..Default::default()
                    },
                )
            })
            .collect();

        let now = Local::now();
        for (match_, times) in self
            .triggers
            .iter()
            // match filtering
            .filter(|(match_, _)| is_match(match_))
        {
            // Remove the match from the triggers
            if let Order::Flush = order {
                self.remove_trigger(&match_);
            }

            let pattern_status = cs.entry(match_.clone()).or_default();

            for action in self.filter.actions().values() {
                let mut action_times = Vec::new();
                for time in times.keys() {
                    let action_time = *time + action.after_duration().unwrap_or_default();
                    if action_time > now {
                        action_times.push(action_time.to_rfc3339().chars().take(19).collect());
                        // Execute the action early
                        if let Order::Flush = order {
                            self.exec_now(action, match_.clone());
                        }
                    }
                }
                if !action_times.is_empty() {
                    pattern_status
                        .actions
                        .insert(action.name().into(), action_times);
                }
            }
        }

        cs.into_iter().map(|(k, v)| (k.join(" "), v)).collect()
    }

    /// Schedule execution for a given Action and Match.
    /// We check first if the trigger is still here
    /// because pending actions can be flushed.
    fn schedule_exec(&self, m: Match, t: Time) {
        let now = Local::now();
        for action in self.filter.actions().values() {
            let exec_time = t + action.after_duration().unwrap_or_default();
            let m = m.clone();

            if exec_time < now {
                if self.decrement_trigger(&m, t) {
                    self.exec_now(action, m);
                }
            } else {
                let this = self.clone();
                tokio::spawn(async move {
                    let dur = (exec_time - now)
                        .to_std()
                        // Could cause an error if t + after < now
                        // In this case, 0 is fine
                        .unwrap_or_default();
                    // Wait either for the end of the sleep
                    // or for reaction to exit
                    let exiting = tokio::select! {
                        _ = tokio::time::sleep(dur) => false,
                        _ = this.shutdown.wait() => true,
                    };
                    // Exec the action if the trigger hasn't already been flushed
                    if (!exiting || action.on_exit()) && this.decrement_trigger(&m, t) {
                        this.exec_now(action, m);
                    }
                });
            }
        }
    }

    fn add_match(&self, m: &Match, t: Time) {
        // FIXME do this in a transaction
        self.matches
            .fetch_and_update(m, |set| {
                let mut set = set.unwrap_or_default();
                set.insert(t);
                Some(set)
            })
            .unwrap();
        self.ordered_times.insert(&t, m).unwrap();
    }

    fn add_trigger(&self, m: &Match, t: Time) {
        // We record triggered filters only when there is an action with an `after` directive
        if self.has_after {
            // Add the (Match, Time) to the triggers map
            self.triggers
                .fetch_and_update(m, |map| {
                    let mut map = map.unwrap_or_default();
                    map.insert(t, self.filter.actions().len());
                    Some(map)
                })
                .unwrap();
        }
    }

    /// Completely remove a Match from the matches
    fn remove_match(&self, m: &Match) {
        // FIXME do this in a transaction
        if let Some(times) = self.matches.remove(m) {
            for t in times {
                self.ordered_times.remove(&t);
            }
        }
    }

    /// Completely remove a Match from the triggers
    fn remove_trigger(&self, m: &Match) {
        // FIXME do this in a transaction
        self.triggers.remove(m);
    }

    /// Returns whether we should still execute an action for this (Match, Time) trigger
    fn decrement_trigger(&self, m: &Match, t: Time) -> bool {
        // We record triggered filters only when there is an action with an `after` directive
        if self.has_after {
            let mut exec_needed = false;
            self.triggers
                .fetch_and_update(&m, |map| {
                    map.map(|mut map| {
                        if let Some(counter) = map.get(&t) {
                            exec_needed = true;
                            // We're the last action to run
                            if *counter <= 1 {
                                map.remove(&t);
                            } else {
                                map.insert(t, counter - 1);
                            }
                        }
                        map
                    })
                    // Remove empty maps
                    .filter(|map| !map.is_empty())
                })
                .unwrap();
            exec_needed
        } else {
            true
        }
    }

    fn clear_past_matches(&self, now: Time) {
        let retry_duration = self.filter.retry_duration().unwrap_or_default();
        while self
            .ordered_times
            .first()
            .unwrap()
            .is_some_and(|(t, _)| t + retry_duration < now)
        {
            // FIXME do this in a transaction
            #[allow(clippy::unwrap_used)]
            // second unwrap: we just checked in the condition that first is_some
            let (t, m) = self.ordered_times.pop_min().unwrap().unwrap();
            self.matches
                .fetch_and_update(&m, |set| {
                    let mut set = set.unwrap();
                    set.remove(&t);
                    Some(set)
                })
                .unwrap();
        }
    }

    fn get_times(&self, m: &Match) -> usize {
        self.matches.get(m).unwrap().map(|v| v.len()).unwrap_or(0)
    }

    fn clear_past_triggers_and_schedule_future_actions(&self, now: Time) {
        let longuest_action_duration = self.filter.longuest_action_duration();
        let number_of_actions = self.filter.actions().len();

        for (m, map) in self.triggers.iter() {
            let new_map: BTreeMap<_, _> = map
                .into_iter()
                // Keep only times that are still relevant
                .filter(|(t, _)| *t + longuest_action_duration > now)
                // Reset the action counter
                .map(|(t, _)| (t, number_of_actions))
                .collect();

            if new_map.is_empty() {
                // No upcoming time, delete the entry from the Tree
                self.triggers.remove(&m);
            } else {
                // Insert back the upcoming times
                let _ = self.triggers.insert(&m, &new_map);

                // Schedule the upcoming times
                for t in new_map.into_keys() {
                    self.schedule_exec(m.clone(), t);
                }
            }
        }
    }

    fn exec_now(&self, action: &'static Action, m: Match) {
        let exec_limit = self.exec_limit.clone();
        tokio::spawn(async move {
            // Wait for the semaphore's permission, if it is Some
            let _permit = match exec_limit {
                #[allow(clippy::unwrap_used)] // We know the semaphore is not closed
                Some(semaphore) => Some(semaphore.acquire_owned().await.unwrap()),
                None => None,
            };

            // Construct the command
            let mut command = action.exec(&m);

            info!("{}: run [{:?}]", &action, command.as_std());
            if let Err(err) = command
                .stdin(Stdio::null())
                .stderr(Stdio::null())
                .stdout(Stdio::piped())
                .status()
                .await
            {
                error!("{}: run [{:?}], code {}", &action, command.as_std(), err);
            }
        });
    }
}
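The `decrement_trigger` method above keeps, per trigger time, a counter of how many actions still have to run; each firing action decrements it, the last one removes the entry, and a missing entry means the trigger was flushed. A std-only sketch of that bookkeeping, with simplified types (`u64` stands in for `Time`, and the function name `decrement` is hypothetical):

```rust
use std::collections::BTreeMap;

// Sketch of the per-trigger action counter in FilterManager::decrement_trigger
// above. Returns whether the caller should still execute its action.
fn decrement(map: &mut BTreeMap<u64, usize>, t: u64) -> bool {
    match map.get(&t).copied() {
        // The trigger was flushed in the meantime: don't execute
        None => false,
        // We're the last action to run for this time: drop the entry
        Some(counter) if counter <= 1 => {
            map.remove(&t);
            true
        }
        // Other actions are still pending: just decrement
        Some(counter) => {
            map.insert(t, counter - 1);
            true
        }
    }
}

fn main() {
    // One trigger at time 10, with two configured actions
    let mut map = BTreeMap::from([(10, 2)]);
    assert!(decrement(&mut map, 10)); // first action runs
    assert!(decrement(&mut map, 10)); // second (last) action runs
    assert!(map.is_empty());
    assert!(!decrement(&mut map, 10)); // flushed / already consumed
    println!("ok");
}
```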
|
@ -1,458 +0,0 @@
|
|||
#[cfg(test)]
pub mod tests;

mod state;

use std::{collections::BTreeMap, process::Stdio, sync::Arc};

use chrono::TimeZone;
use reaction_plugin::{ActionImpl, shutdown::ShutdownToken};
use regex::Regex;
use tokio::sync::{Mutex, MutexGuard, Semaphore};
use tracing::{error, info};

use crate::{
    concepts::{Action, Duplicate, Filter, Match, Pattern, Time},
    daemon::plugin::Plugins,
    protocol::{Order, PatternStatus},
};
use treedb::Database;

use state::State;

/// Responsible for handling all runtime logic dedicated to a [`Filter`].
/// Notably handles incoming lines from [`super::stream::stream_manager`]
/// and orders from the [`super::socket::socket_manager`]
#[derive(Clone)]
pub struct FilterManager {
    /// the Filter managed
    filter: &'static Filter,
    /// Permits to limit concurrency of actions execution
    exec_limit: Option<Arc<Semaphore>>,
    /// Permits to run pending actions on shutdown
    shutdown: ShutdownToken,
    /// Action Plugins
    action_plugins: BTreeMap<&'static String, ActionImpl>,
    /// Inner state.
    /// Protected by a [`Mutex`], permitting FilterManager to be cloned
    /// and concurrently owned by its stream manager, the socket manager,
    /// and actions' delayed tasks.
    state: Arc<Mutex<State>>,
}

/// The outcome of handling a line.
#[derive(Debug, PartialEq, Eq)]
pub enum React {
    /// This line doesn't match
    NoMatch,
    /// This line matches, but no execution is triggered
    Match,
    /// This line matches, and an execution is triggered
    Trigger,
}

#[allow(clippy::unwrap_used)]
impl FilterManager {
    pub async fn new(
        filter: &'static Filter,
        exec_limit: Option<Arc<Semaphore>>,
        shutdown: ShutdownToken,
        db: &mut Database,
        plugins: &mut Plugins,
        now: Time,
    ) -> Result<Self, String> {
        let mut action_plugins = BTreeMap::default();
        for (action_name, action) in filter.actions.iter().filter(|action| action.1.is_plugin()) {
            action_plugins.insert(
                action_name,
                plugins.get_action_impl(action.to_string()).ok_or_else(|| {
                    format!("action {action} doesn't load a plugin. this is a bug!")
                })?,
            );
        }
        let this = Self {
            filter,
            exec_limit,
            shutdown,
            action_plugins,
            state: Arc::new(Mutex::new(State::new(filter, db, now).await?)),
        };
        Ok(this)
    }

    pub async fn handle_line(&self, line: &str, now: Time) -> React {
        if let Some(match_) = self.filter.get_match(line) {
            if self.handle_match(match_, now).await {
                React::Trigger
            } else {
                React::Match
            }
        } else {
            React::NoMatch
        }
    }

    async fn handle_match(&self, m: Match, now: Time) -> bool {
        #[allow(clippy::unwrap_used)] // propagating panics is ok
        let mut state = self.state.lock().await;
        state.clear_past_matches(now).await;

        // if Duplicate::Ignore and already triggered, skip
        if state.triggers.contains_key(&m) && Duplicate::Ignore == self.filter.duplicate {
            return false;
        }

        info!("{}: match {:?}", self.filter, &m);

        let trigger = match self.filter.retry {
            None => true,
            Some(retry) => {
                state.add_match(m.clone(), now).await;
                // Number of stored times for this match >= configured retry for this filter
                state.get_times(&m).await >= retry as usize
            }
        };

        if trigger {
            state.remove_match(&m).await;
            let actions_left = if Duplicate::Extend == self.filter.duplicate {
                // Get number of actions left from last trigger
                state
                    .remove_trigger(&m)
                    .await
                    // Only one entry in the map because Duplicate::Extend
                    .and_then(|map| map.first_key_value().map(|(_, n)| *n))
            } else {
                None
            };
            state.add_trigger(m.clone(), now, actions_left).await;
            self.schedule_exec(m, now, now, &mut state, false, actions_left)
                .await;
        }

        trigger
    }

    pub async fn handle_trigger(
        &self,
        patterns: BTreeMap<Arc<Pattern>, String>,
        now: Time,
    ) -> Result<(), String> {
        let match_ = self.filter.get_match_from_patterns(patterns)?;

        #[allow(clippy::unwrap_used)] // propagating panics is ok
        let mut state = self.state.lock().await;
        state.remove_match(&match_).await;
        state.add_trigger(match_.clone(), now, None).await;
        self.schedule_exec(match_, now, now, &mut state, false, None)
            .await;

        Ok(())
    }

    pub async fn handle_order(
        &self,
        patterns: &BTreeMap<Arc<Pattern>, Regex>,
        order: Order,
        now: Time,
    ) -> BTreeMap<String, PatternStatus> {
        let is_match = |match_: &Match| {
            match_
                .iter()
                .zip(self.filter.patterns.as_ref())
                .filter_map(|(a_match, pattern)| {
                    patterns.get(pattern.as_ref()).map(|regex| (a_match, regex))
                })
                .all(|(a_match, regex)| regex.is_match(a_match))
        };

        #[allow(clippy::unwrap_used)] // propagating panics is ok
        let mut state = self.state.lock().await;

        let mut cs: BTreeMap<_, _> = {
            let cloned_matches = state
                .matches
                .keys()
                // match filtering
                .filter(|match_| is_match(match_))
                // clone necessary to drop all references to State
                .cloned()
                .collect::<Vec<_>>();

            let mut cs = BTreeMap::new();
            for match_ in cloned_matches {
                // mutable State required here
                if let Order::Flush = order {
                    state.remove_match(&match_).await;
                }
                let matches = state
                    .matches
                    .get(&match_)
                    .map(|times| times.len())
                    .unwrap_or(0);
                cs.insert(
                    match_,
                    PatternStatus {
                        matches,
                        ..Default::default()
                    },
                );
            }
            cs
        };

        let cloned_triggers = state
            .triggers
            .keys()
            // match filtering
            .filter(|match_| is_match(match_))
            // clone necessary to drop all references to State
            .cloned()
            .collect::<Vec<_>>();

        for m in cloned_triggers.into_iter() {
            let map = state.triggers.get(&m).unwrap().clone();

            if let Order::Flush = order {
                state.remove_trigger(&m).await;
            }

            for (t, remaining) in map {
                if remaining > 0 {
                    let pattern_status = cs.entry(m.clone()).or_default();

                    for action in self.filter.filtered_actions_from_match(&m) {
                        let action_time = t + action.after_duration.unwrap_or_default();
                        if action_time > now {
                            // Pretty print time
                            let time = chrono::Local
                                .timestamp_opt(
                                    action_time.as_secs() as i64,
                                    action_time.subsec_nanos(),
                                )
                                .unwrap()
                                .to_rfc3339()
                                .chars()
                                .take(19)
                                .collect();
                            // Insert action
                            pattern_status
                                .actions
                                .entry(action.name.clone())
                                .or_default()
                                .push(time);

                            // Execute the action early
                            if let Order::Flush = order {
                                self.exec_now(action, m.clone(), t);
                            }
                        }
                    }
                }
            }
        }

        cs.into_iter().map(|(k, v)| (k.join(" "), v)).collect()
    }

    /// Schedule execution for a given Match.
    /// We check first if the trigger is still here
    /// because pending actions can be flushed.
    async fn schedule_exec(
        &self,
        m: Match,
        t: Time,
        now: Time,
        state: &mut MutexGuard<'_, State>,
        startup: bool,
        actions_left: Option<u64>,
    ) {
        let actions = self
            .filter
            .filtered_actions_from_match(&m)
            .into_iter()
            // On startup, skip oneshot actions
            .filter(|action| !startup || !action.oneshot)
            // Skip already-executed actions, if actions_left is set
            .skip(match actions_left {
                Some(actions_left) => {
                    self.filter.filtered_actions_from_match(&m).len() - actions_left as usize
                }
                None => 0,
            });

        // Scheduling each action
        for action in actions {
            let exec_time = t + action.after_duration.unwrap_or_default();
            let m = m.clone();

            if exec_time <= now {
                if state.decrement_trigger(&m, t, false).await {
                    self.exec_now(action, m, t);
                }
            } else {
                let this = self.clone();
                let action_impl = self.action_plugins.get(&action.name).cloned();
                tokio::spawn(async move {
                    let dur = exec_time - now;
                    // Wait either for end of sleep
                    // or reaction exiting
                    let exiting = tokio::select! {
                        _ = tokio::time::sleep(dur.into()) => false,
                        _ = this.shutdown.wait() => true,
                    };
                    // Exec action if the trigger hasn't already been flushed
                    if !exiting || action.on_exit {
                        #[allow(clippy::unwrap_used)] // propagating panics is ok
                        let mut state = this.state.lock().await;
                        if state.decrement_trigger(&m, t, exiting).await {
                            exec_now(&this.exec_limit, this.shutdown, action, action_impl, m, t);
                        }
                    }
                });
            }
        }
    }

    /// Clear past triggers and schedule future actions
    pub async fn start(&self, now: Time) {
        let longuest_action_duration = self.filter.longuest_action_duration;
        let number_of_actions = self
            .filter
            .actions
            .values()
            // On startup, skip oneshot actions
            .filter(|action| !action.oneshot)
            .count() as u64;

        #[allow(clippy::unwrap_used)] // propagating panics is ok
        let mut state = self.state.lock().await;

        let cloned_triggers = state
            .triggers
            .iter()
            .map(|(k, v)| (k.clone(), v.clone()))
            .collect::<Vec<_>>();

        for (m, map) in cloned_triggers.into_iter() {
            let map: BTreeMap<_, _> = map
                .into_iter()
                // Keep only up-to-date triggers
                .filter(|(t, remaining)| *remaining > 0 && *t + longuest_action_duration > now)
                // Reset action count
                .map(|(t, _)| (t, number_of_actions))
                .collect();

            if map.is_empty() {
                state.triggers.remove(&m).await;
            } else {
                // Filter duplicates
                // unwrap is fine because map is not empty (see if)
                let map = match self.filter.duplicate {
                    // Keep only last item
                    Duplicate::Extend => BTreeMap::from([map.into_iter().next_back().unwrap()]),
                    // Keep only first item
                    Duplicate::Ignore => BTreeMap::from([map.into_iter().next().unwrap()]),
                    // No filtering
                    Duplicate::Rerun => map,
                };
                state.triggers.insert(m.clone(), map.clone()).await;
                for (t, _) in map {
                    // Schedule the upcoming times
                    self.schedule_exec(m.clone(), t, now, &mut state, true, None)
                        .await;
                }
            }
        }
    }

    fn exec_now(&self, action: &'static Action, m: Match, t: Time) {
        let action_impl = self.action_plugins.get(&action.name).cloned();
        exec_now(
            &self.exec_limit,
            self.shutdown.clone(),
            action,
            action_impl,
            m,
            t,
        )
    }
}

fn exec_now(
    exec_limit: &Option<Arc<Semaphore>>,
    shutdown: ShutdownToken,
    action: &'static Action,
    action_impl: Option<ActionImpl>,
    m: Match,
    t: Time,
) {
    let exec_limit = exec_limit.clone();
    tokio::spawn(async move {
        // Move ShutdownToken in task
        let _shutdown = shutdown;

        match action_impl {
            Some(action_impl) => {
                info!(
                    "{action}: run {} {:?}",
                    action.action_type.clone().unwrap_or_default(),
                    &m,
                );

                // Sending action
                if let Err(err) = action_impl
                    .tx
                    .send(reaction_plugin::Exec {
                        match_: m,
                        time: t.into(),
                    })
                    .await
                {
                    error!("{action}: communication with plugin failed: {err}");
                    return;
                }
            }
            None => {
                // Wait for semaphore's permission, if it is Some
                let _permit = match exec_limit {
                    #[allow(clippy::unwrap_used)] // We know the semaphore is not closed
                    Some(semaphore) => Some(semaphore.acquire_owned().await.unwrap()),
                    None => None,
                };

                // Construct command
                let mut command = action.exec(&m);

                info!("{action}: run [{:?}]", command.as_std());
                if let Err(err) = command
                    .stdin(Stdio::null())
                    .stderr(Stdio::null())
                    .stdout(Stdio::piped())
                    .status()
                    .await
                {
                    error!("{action}: run [{:?}], code {err}", command.as_std());
                }
            }
        }
    });
}

impl PartialEq for FilterManager {
    fn eq(&self, other: &Self) -> bool {
        self.filter == other.filter
    }
}
impl Eq for FilterManager {}

impl Ord for FilterManager {
    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
        self.filter.cmp(other.filter)
    }
}
impl PartialOrd for FilterManager {
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        Some(self.cmp(other))
    }
}
@@ -1,655 +0,0 @@
use std::collections::{BTreeMap, BTreeSet};

use serde_json::Value;
use treedb::{Database, Tree, helpers::*};

use crate::concepts::{Filter, Match, MatchTime, Time};

pub fn filter_ordered_times_db_name(filter: &Filter) -> String {
    format!(
        "filter_ordered_times_{}.{}",
        filter.stream_name, filter.name
    )
}

pub fn filter_triggers_old_db_name(filter: &Filter) -> String {
    format!("filter_triggers_{}.{}", filter.stream_name, filter.name)
}

pub fn filter_triggers_db_name(filter: &Filter) -> String {
    format!("filter_triggers2_{}.{}", filter.stream_name, filter.name)
}

/// Internal state of a [`FilterManager`].
/// Holds all data on current matches and triggers.
pub struct State {
    /// the Filter managed
    filter: &'static Filter,
    /// Has the filter at least an action with an after directive?
    has_after: bool,
    /// Saves all the current Matches for this Filter
    /// Has duplicate values for a key
    /// Not persisted
    pub matches: BTreeMap<Match, BTreeSet<Time>>,
    /// Alternative view of the current Matches for O(1) cleaning of old Matches
    /// without added async Tasks to remove them
    /// Persisted
    ///
    /// I'm pretty confident that Time will always be unique, because it has enough precision.
    /// See this code that gives different times, even in a minimal loop:
    /// ```rust
    /// use reaction::concepts::now;
    ///
    /// let mut res = vec![];
    /// for _ in 0..10 {
    ///     let now = now();
    ///     res.push(format!("Now: {}", now.as_nanos()));
    /// }
    /// for s in res {
    ///     println!("{s}");
    /// }
    /// ```
    pub ordered_times: Tree<Time, Match>,
    /// Saves all the current Triggers for this Filter
    /// Persisted
    pub triggers: Tree<Match, BTreeMap<Time, u64>>,
}

impl State {
    pub async fn new(
        filter: &'static Filter,
        db: &mut Database,
        now: Time,
    ) -> Result<Self, String> {
        let ordered_times = db
            .open_tree(
                filter_ordered_times_db_name(filter),
                filter.retry_duration.unwrap_or_default(),
                |(key, value)| Ok((to_time(&key)?, to_match(&value)?)),
            )
            .await?;
        let mut triggers = db
            .open_tree(
                filter_triggers_db_name(filter),
                filter.longuest_action_duration,
                |(key, value)| Ok((to_match(&key)?, to_timemap(&value)?)),
            )
            .await?;
        if triggers.is_empty() {
            let old_triggers = db
                .open_tree(
                    filter_triggers_old_db_name(filter),
                    filter.longuest_action_duration,
                    |(key, value)| Ok((to_matchtime(&key)?, to_u64(&value)?)),
                )
                .await?;
            for (mt, n) in old_triggers.iter() {
                triggers
                    .fetch_update(mt.m.clone(), |map| {
                        Some(match map {
                            None => [(mt.t, *n)].into(),
                            Some(mut map) => {
                                map.insert(mt.t, *n);
                                map
                            }
                        })
                    })
                    .await;
            }
        }
        let mut this = Self {
            filter,
            has_after: !filter.longuest_action_duration.is_zero(),
            matches: BTreeMap::new(),
            ordered_times,
            triggers,
        };
        this.clear_past_matches(now).await;
        this.load_matches_from_ordered_times().await;
        Ok(this)
    }

    pub async fn add_match(&mut self, m: Match, t: Time) {
        let set = self.matches.entry(m.clone()).or_default();
        set.insert(t);
        self.ordered_times.insert(t, m).await;
    }

    pub async fn add_trigger(&mut self, m: Match, t: Time, action_count: Option<u64>) {
        // We record triggered filters only when there is an action with an `after` directive
        if self.has_after {
            // Add the (Match, Time) to the triggers map
            let n = action_count
                .unwrap_or_else(|| self.filter.filtered_actions_from_match(&m).len() as u64);
            self.triggers
                .fetch_update(m, |map| {
                    Some(match map {
                        None => [(t, n)].into(),
                        Some(mut value) => {
                            value.insert(t, n);
                            value
                        }
                    })
                })
                .await;
        }
    }

    /// Completely remove a Match from the matches
    pub async fn remove_match(&mut self, m: &Match) {
        if let Some(set) = self.matches.get(m) {
            for t in set {
                self.ordered_times.remove(t).await;
            }
            self.matches.remove(m);
        }
    }

    /// Completely remove a Match from the triggers
    pub async fn remove_trigger(&mut self, m: &Match) -> Option<BTreeMap<Time, u64>> {
        self.triggers.remove(m).await
    }

    /// Returns whether we should still execute an action for this (Match, Time) trigger
    pub async fn decrement_trigger(&mut self, m: &Match, t: Time, exiting: bool) -> bool {
        // We record triggered filters only when there is an action with an `after` directive
        if self.has_after {
            let mut exec_needed = false;
            let mt = MatchTime { m: m.clone(), t };
            let count = self
                .triggers
                .get(&mt.m)
                .and_then(|map| map.get(&mt.t))
                .cloned();
            if let Some(count) = count {
                exec_needed = true;
                if count <= 1 {
                    if !exiting {
                        self.triggers
                            .fetch_update(mt.m, |map| {
                                map.and_then(|mut map| {
                                    map.remove(&mt.t);
                                    if map.is_empty() { None } else { Some(map) }
                                })
                            })
                            .await;
                    }
                    // else don't do anything
                    // Because that will remove the entry in the DB, and make
                    // it forget this trigger.
                    // Maybe we should have 2 maps for triggers:
                    // - The current for action counting, not persisted
                    // - Another like ordered_times, Tree<Time, Match>, persisted
                } else {
                    self.triggers
                        .fetch_update(mt.m, |map| {
                            map.map(|mut map| {
                                map.insert(mt.t, count - 1);
                                map
                            })
                        })
                        .await;
                }
            }
            exec_needed
        } else {
            true
        }
    }

    pub async fn clear_past_matches(&mut self, now: Time) {
        let retry_duration = self.filter.retry_duration.unwrap_or_default();
        while self
            .ordered_times
            .first_key_value()
            .is_some_and(|(t, _)| *t + retry_duration < now)
        {
            #[allow(clippy::unwrap_used)]
            // unwrap: we just checked in the condition that first is_some
            let (t, m) = {
                let (t, m) = self.ordered_times.first_key_value().unwrap();
                (*t, m.clone())
            };
            self.ordered_times.remove(&t).await;
            if let Some(set) = self.matches.get(&m) {
                let mut set = set.clone();
                set.remove(&t);
                if set.is_empty() {
                    self.matches.remove(&m);
                } else {
                    self.matches.insert(m, set);
                }
            }
        }
    }

    pub async fn get_times(&self, m: &Match) -> usize {
        match self.matches.get(m) {
            Some(vec) => vec.len(),
            None => 0,
        }
    }

    async fn load_matches_from_ordered_times(&mut self) {
        for (t, m) in self.ordered_times.iter() {
            let set = self.matches.entry(m.clone()).or_default();
            set.insert(*t);
        }
    }
}

/// Tries to convert a [`Value`] into a [`MatchTime`]
pub fn to_matchtime(val: &Value) -> Result<MatchTime, String> {
    let map = val.as_object().ok_or("not an object")?;
    Ok(MatchTime {
        m: to_match(map.get("m").ok_or("no m in object")?)?,
        t: to_time(map.get("t").ok_or("no t in object")?)?,
    })
}

#[cfg(test)]
mod tests {
    use std::collections::{BTreeMap, HashMap};

    use serde_json::{Map, Value, json};

    use crate::{
        concepts::{
            Action, Duplicate, Filter, MatchTime, Pattern, Time, filter_tests::ok_filter, now,
        },
        tests::TempDatabase,
    };

    use super::{State, to_matchtime};

    // Tests `new`, `clear_past_matches` and `load_matches_from_ordered_times`
    #[tokio::test]
    async fn state_new() {
        let patterns = Pattern::new_map("az", "[a-z]+").unwrap();
        let filter = Filter::new_static(
            vec![
                Action::new(vec!["true"], None, false, "s1", "f1", "a1", &patterns, 0),
                Action::new(
                    vec!["true"],
                    Some("3s"),
                    false,
                    "s1",
                    "f1",
                    "a2",
                    &patterns,
                    0,
                ),
            ],
            vec!["test <az>"],
            Some(3),
            Some("2s"),
            "s1",
            "f1",
            Duplicate::default(),
            &patterns,
        );

        let now = Time::from_secs(1234567);
        // DateTime::parse_from_rfc3339("2025-07-10T12:35:00.000+00:00")
        //     .unwrap()
        //     .with_timezone(&Local);
        let now_plus_1m = now + Time::from_mins(1);
        let now_plus_1m01 = now_plus_1m + Time::from_secs(1);
        let now_less_1m = now - Time::from_mins(1);
        let now_less_1s = now - Time::from_secs(1);
        let now_less_4s = now - Time::from_secs(4);
        let now_less_5s = now - Time::from_secs(5);

        let triggers = [
            // format v1
            (
                "filter_triggers_s1.f1".into(),
                HashMap::from([
                    // Will stay
                    (
                        json!({
                            "t": now_plus_1m,
                            "m": ["one"],
                        }),
                        json!(1),
                    ),
                    (
                        json!({
                            "t": now_less_1s,
                            "m": ["one"],
                        }),
                        json!(1),
                    ),
                    // Will not get cleaned because it's FilterManager's task
                    (
                        json!({
                            "t": now_less_5s,
                            "m": ["one"],
                        }),
                        json!(1),
                    ),
                ]),
            ),
            // format v2 (since v2.2.0)
            (
                "filter_triggers2_s1.f1".into(),
                HashMap::from([(
                    json!(["one"]),
                    json!({
                        // Will stay
                        now_plus_1m.as_nanos().to_string(): 1,
                        now_less_1s.as_nanos().to_string(): 1,
                        // Will not get cleaned because it's FilterManager's task
                        now_less_5s.as_nanos().to_string(): 1,
                    }),
                )]),
            ),
        ];

        for trigger_db in triggers {
            let mut db = TempDatabase::from_loaded_db(HashMap::from([
                (
                    "filter_ordered_times_s1.f1".into(),
                    HashMap::from([
                        // Will stay
                        (now_plus_1m.as_nanos().to_string().into(), ["one"].into()),
                        (now_plus_1m01.as_nanos().to_string().into(), ["one"].into()),
                        (now_less_1s.as_nanos().to_string().into(), ["two"].into()), // stays because retry: 2s
                        // Will get cleaned
                        (now_less_4s.as_nanos().to_string().into(), ["two"].into()),
                        (now_less_5s.as_nanos().to_string().into(), ["three"].into()),
                        (now_less_1m.as_nanos().to_string().into(), ["two"].into()),
                    ]),
                ),
                trigger_db,
            ]))
            .await;

            let state = State::new(filter, &mut db, now).await.unwrap();

            assert_eq!(
                state.ordered_times.tree(),
                &BTreeMap::from([
                    (now_less_1s, vec!["two".into()]),
                    (now_plus_1m, vec!["one".into()]),
                    (now_plus_1m01, vec!["one".into()]),
                ])
            );
            assert_eq!(
                state.matches,
                BTreeMap::from([
                    (vec!["one".into()], [now_plus_1m, now_plus_1m01].into()),
                    (vec!["two".into()], [now_less_1s].into()),
                ])
            );
            assert_eq!(
                state.triggers.tree(),
                &BTreeMap::from([(
                    vec!["one".into()],
                    BTreeMap::from([
                        (now_less_5s, 1u64),
                        (now_less_1s, 1u64),
                        (now_plus_1m, 1u64),
                    ]),
                )])
            );
        }
    }

    #[tokio::test]
    async fn state_match_add_remove() {
        let filter = Box::leak(Box::new(ok_filter()));

        let one = vec!["one".into()];

        let now = Time::from_secs(1234567);
        let now_less_1s = now - Time::from_secs(1);
        let now_less_4s = now - Time::from_secs(4);

        let mut db = TempDatabase::default().await;
        let mut state = State::new(filter, &mut db, now).await.unwrap();

        assert!(state.ordered_times.tree().is_empty());
        assert!(state.matches.is_empty());

        // Add non-previously added match
        state.add_match(one.clone(), now_less_1s).await;
        assert_eq!(
            state.ordered_times.tree(),
            &BTreeMap::from([(now_less_1s, one.clone())])
        );
        assert_eq!(
            state.matches,
            BTreeMap::from([(one.clone(), [now_less_1s].into())])
        );

        // Add previously added match
        state.add_match(one.clone(), now_less_4s).await;
        assert_eq!(
            state.ordered_times.tree(),
            &BTreeMap::from([(now_less_1s, one.clone()), (now_less_4s, one.clone())])
        );
        assert_eq!(
            state.matches,
            BTreeMap::from([(one.clone(), [now_less_1s, now_less_4s].into())])
        );

        // Remove added match
        state.remove_match(&one).await;
        assert!(state.ordered_times.tree().is_empty());
        assert!(state.matches.is_empty());
    }

    #[tokio::test]
    async fn state_trigger_no_after_add_remove_decrement() {
        let filter = Box::leak(Box::new(ok_filter()));

        let one = vec!["one".into()];
        let now = now();

        let mut db = TempDatabase::default().await;
        let mut state = State::new(filter, &mut db, now).await.unwrap();

        assert!(state.triggers.tree().is_empty());

        // Add unique trigger
        state.add_trigger(one.clone(), now, None).await;
        // Nothing is really added
        assert!(state.triggers.tree().is_empty());

        // Will be called immediately after, it returns true
        assert!(state.decrement_trigger(&one, now, false).await);
    }

    #[tokio::test]
    async fn state_trigger_has_after_add_remove_decrement() {
        let patterns = Pattern::new_map("az", "[a-z]+").unwrap();
        let filter = Filter::new_static(
            vec![
                Action::new(vec!["true"], None, false, "s1", "f1", "a1", &patterns, 0),
                Action::new(
                    vec!["true"],
                    Some("1s"),
                    false,
                    "s1",
                    "f1",
                    "a2",
                    &patterns,
                    0,
                ),
                Action::new(
                    vec!["true"],
                    Some("3s"),
                    false,
                    "s1",
                    "f1",
                    "a3",
                    &patterns,
                    0,
                ),
            ],
            vec!["test <az>"],
            Some(3),
            Some("2s"),
            "s1",
            "f1",
            Duplicate::default(),
            &patterns,
        );

        let one = vec!["one".into()];
        let now = now();
        let now_plus_1s = now + Time::from_secs(1);

        let mut db = TempDatabase::default().await;
        let mut state = State::new(filter, &mut db, now).await.unwrap();

        assert!(state.triggers.tree().is_empty());

        // Add unique trigger
        state.add_trigger(one.clone(), now, None).await;
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 3)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 2)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 1)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert!(state.triggers.tree().is_empty());
        // Decrement → false
        assert!(!state.decrement_trigger(&one, now, false).await);

        // Add unique trigger (but decrement exiting-like)
        state.add_trigger(one.clone(), now, None).await;
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 3)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, true).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 2)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, true).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 1)].into())])
        );
        // Decrement but exiting → true, does nothing
        assert!(state.decrement_trigger(&one, now, true).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now, 1)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert!(state.triggers.tree().is_empty());
        // Decrement → false
        assert!(!state.decrement_trigger(&one, now, false).await);

        // Add trigger with neighbour
        state.add_trigger(one.clone(), now, None).await;
        state.add_trigger(one.clone(), now_plus_1s, None).await;
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 3)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 2)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 1)].into())])
        );
        // Decrement → true
        assert!(state.decrement_trigger(&one, now, false).await);
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now_plus_1s, 3)].into())])
        );
        // Decrement → false
        assert!(!state.decrement_trigger(&one, now, false).await);
        // Remove neighbour
        state.remove_trigger(&one).await;
        assert!(state.triggers.tree().is_empty());

        // Add two neighbour triggers
        state.add_trigger(one.clone(), now, None).await;
        state.add_trigger(one.clone(), now_plus_1s, None).await;
        assert_eq!(
            state.triggers.tree(),
            &BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 3)].into())])
        );
        // Remove them
        state.remove_trigger(&one).await;
        assert!(state.triggers.tree().is_empty());
    }

    #[test]
    fn test_to_matchtime() {
        assert_eq!(
            to_matchtime(&Value::Object(Map::from_iter(
                BTreeMap::from([
                    ("m".into(), ["plip", "ploup"].into()),
                    ("t".into(), "12345678".into()),
                ])
                .into_iter()
            ))),
            Ok(MatchTime {
                m: vec!["plip".into(), "ploup".into()],
                t: Time::from_nanos(12345678),
            })
        );

        assert!(
            to_matchtime(&Value::Object(Map::from_iter(
                BTreeMap::from([("m".into(), ["plip", "ploup"].into())]).into_iter()
            )))
            .is_err()
        );

        assert!(
            to_matchtime(&Value::Object(Map::from_iter(
                BTreeMap::from([("t".into(), 12345678.into())]).into_iter()
            )))
            .is_err()
        );

        assert!(
            to_matchtime(&Value::Object(Map::from_iter(
                BTreeMap::from([("m".into(), "ploup".into()), ("t".into(), 12345678.into())])
                    .into_iter()
            )))
            .is_err()
        );

        assert!(
            to_matchtime(&Value::Object(Map::from_iter(
                BTreeMap::from([
                    ("m".into(), ["plip", "ploup"].into()),
                    ("t".into(), [1234567].into()),
                ])
                .into_iter()
            )))
            .is_err()
        );
    }
}