Compare commits


440 commits

Author SHA1 Message Date
ppom
8a34a1fa11
Remove useless gitlab ci file 2026-03-13 12:00:00 +01:00
ppom
3ca54c6c43
ipset: Better error handling and messages
- Clearer messages.
- Make sure logs are shown in order.
- When cleaning up after an error on startup,
  do not try to undo an action that failed.
2026-03-02 12:00:00 +01:00
ppom
16692731f0
Remove useless chrono dependency from reaction-plugin 2026-03-02 12:00:00 +01:00
ppom
938a366576
More useful error message when plugin can't launch and systemd=true 2026-03-01 12:00:00 +01:00
ppom
5a6c203c01
Add system-reaction.slice 2026-02-27 12:00:00 +01:00
ppom
f2b1accec0
Fix slice-inherit option 2026-02-26 12:00:00 +01:00
ppom
00725ed9e2
notif test: add a filter that shouldn't match 2026-02-26 12:00:00 +01:00
ppom
ea0e7177d9
nftables: Fix bad action advertised 2026-02-26 12:00:00 +01:00
ppom
c41c89101d
Fix #151: Move RegexSet creation from StreamManager to config Stream
This moves the potential "regex set too big" error into config setup,
where it can be handled gracefully, instead of leaving it where it
would interfere with reaction's start/stop handling.
2026-02-26 12:00:00 +01:00
ppom
3d7e647ef7
Adapt tests to nftables configuration 2026-02-25 12:00:00 +01:00
ppom
5b6cc35deb
nftables: Fix compilation errors and actually use libnftables 2026-02-25 12:00:00 +01:00
ppom
0cd765251a
run plugins in the same slice as reaction
And reaction should be started in system-reaction.slice.
The plugins could then be grouped together with the daemon
2026-02-20 12:00:00 +01:00
ppom
26cf3a96e7
First draft of an nftables plugin
Not compiling yet but I'm getting there.
Must be careful with the unsafe, C-wrapping code!
2026-02-20 12:00:00 +01:00
ppom
285954f7cd
Remove outdated FIXME 2026-02-18 12:00:00 +01:00
ppom
dc51d7d432
Add support for macOS 2026-02-17 12:00:00 +01:00
ppom
488dc6c66f
Update release instructions 2026-02-15 12:00:00 +01:00
ppom
88c99fff0f
Fix install instructions 2026-02-12 12:00:00 +01:00
ppom
645d72ac1e
.gitignore cleanup 2026-02-12 12:00:00 +01:00
ppom
a7e958f248
Update ARCHITECTURE.md 2026-02-12 12:00:00 +01:00
ppom
5577d4f46f
reaction-plugin: Add metadata 2026-02-12 12:00:00 +01:00
ppom
a8cd1af78d
Set CapabilityBoundingSet again 2026-02-12 12:00:00 +01:00
ppom
2f57f73ac9
Fix systemd functionality
- Non-absolute WorkingDirectory was refused by systemd
- Plugin specific-conf updated

Improvements:
- ReadOnlyPaths=/
- ProtectHome=true in release builds
- SystemCallFilter further restricted

Disabled:
- DynamicUser: breaks stdio communication, FIXME!
- RestrictAddressFamilies: seems impossible to override to default.
- CapabilityBoundingSet: too restrictive
2026-02-12 12:00:00 +01:00
ppom
d629d57a7e
Change ipset version option from 4/6/46 to ipv4/ipv6/ip 2026-02-12 12:00:00 +01:00
ppom
3c20d8f008
Fix merging of systemd options 2026-02-12 12:00:00 +01:00
ppom
5a030ffb7e
Make systemd default options more accessible for users by moving them up 2026-02-12 12:00:00 +01:00
ppom
a4ea173c13
Do not permit options key when stream/action is not a plugin 2026-02-12 12:00:00 +01:00
ppom
3a61db9e6f
plugin: shutdown: add function that permits graceful shutdown by signal
Handling signals like SIGTERM permits graceful shutdown, cleanup of resources, etc.

Added in ipset and cluster.
2026-02-12 12:00:00 +01:00
ppom
b4313699df
systemd: Let reaction stop its subprocesses before killing them
systemd by default sends SIGTERM to all processes in the cgroup, which
doesn't let reaction handle the shutdown of its plugins.
This is fixed by adding KillMode=mixed.
2026-02-12 12:00:00 +01:00
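The fix above boils down to a single directive in the service unit. A minimal sketch of the relevant fragment (the binary path is assumed; only `KillMode=mixed` is confirmed by the commit):

```ini
[Service]
# Assumed path, for illustration only.
ExecStart=/usr/bin/reaction start
# mixed: systemd sends SIGTERM to the main process only, so reaction
# can shut down its plugin subprocesses itself; the whole cgroup still
# gets SIGKILL after the stop timeout if anything lingers.
KillMode=mixed
```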
ppom
270c6cb969
systemd service: config file must live in /etc/reaction/
This is a breaking change, but it unifies config
for yaml, json, jsonnet and directory users.
2026-02-12 12:00:00 +01:00
ppom
15f923ef64
Safeguard against users executing plugins themselves
main_loop now first checks that it has been started with the `serve` argument.
If not, it prints an info message and quits.
2026-02-11 12:00:00 +01:00
ppom
a37a5e5752
release v2.3.0
- cross-rs project doesn't compile anymore: switching to debian12-amd64 only binary release
- package virtual plugin in reaction .deb
- package ipset plugin in separate .deb with its required libipset-dev dependency
2026-02-11 12:00:00 +01:00
ppom
a8651bf2e0
Removal of nft46 and ip46tables 2026-02-11 12:00:00 +01:00
ppom
b07b5064e9
Improve reaction-plugin developer documentation 2026-02-11 12:00:00 +01:00
ppom
b7d997ca5e
Slight change on the "no audit" sentence 2026-02-09 12:00:00 +01:00
ppom
cce850fc71
Add recommendation to use ipset or nftables rather than plain iptables 2026-02-09 12:00:00 +01:00
ppom
109fb6d869
Adapt reaction core to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
ae28cfbb31
cluster: adapt to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
b0dc3c56ad
ipset: adapt to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
57d6da5377
virtual: adapt to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
12fc90535a
Change plugin interface: oneshot load_config and start
Instead of multiple stream_impl / action_impl and one finish_setup.
This made plugin implementations awkward: they often got some conf and
couldn't determine if it was valid or not.
Now they get all the conf in one function and don't have to keep partial
state from one call to another.

This has the other important benefit that configuration loading is
separated from startup. This will make plugin lifecycle management less
clunky.
2026-02-09 12:00:00 +01:00
ppom
62933b55e4
Start plugins after start commands
Stop commands run after plugins' shutdown, so it seems better
that the commands wrap the plugins ({ plugins }).

Fix outdated comment about aborting on startup.
2026-02-09 12:00:00 +01:00
ppom
34e2a8f294
plugin: simpler crate version retrieval 2026-02-09 12:00:00 +01:00
ppom
41bc3525f8
Fix time-based test sometimes failing by increasing sleep 2026-02-09 12:00:00 +01:00
ppom
5ce773c8e5
cluster: ignore integration tests for now 2026-02-09 12:00:00 +01:00
ppom
6914f19fb8
fix assert_cmd::cargo_bin deprecation warning 2026-02-09 12:00:00 +01:00
ppom
7cd4a4305d
fix: merge plugins in configuration 2026-02-09 12:00:00 +01:00
ppom
c39fdecef3
ipset: add tests for configuration 2026-02-09 12:00:00 +01:00
ppom
885e6b7ef7
ipset: re-arrange spacing in logs 2026-02-09 12:00:00 +01:00
ppom
516e6956ab
fix double-printing of square brackets in plugin logs 2026-02-09 12:00:00 +01:00
ppom
79ec6d279f
ipset: Manual e2e test does pass 2026-02-09 12:00:00 +01:00
ppom
a83c93ac9d
ipset: do not shut down plugin when one action errors 2026-02-09 12:00:00 +01:00
ppom
47947d18db
ipset: Fix dumb bug due to future not awaited
The edge case is so dumb, cargo is supposed to tell me about this ><

Just learnt that Python never warns about this btw:
https://trio.readthedocs.io/en/v0.9.0/tutorial.html#warning-don-t-forget-that-await
2026-02-09 12:00:00 +01:00
ppom
915e308015
Better plugin process management
The stderr-following task doesn't use shutdown anymore. It simply follows
stderr until the end of reaction, which at worst is a negligible
memory leak if reaction keeps running.
I tried closing stderr on the plugin side with a raw syscall on the file
descriptor, but the reaction side doesn't see that stderr is closed,
so I can't rely on that.
Quitting when shutdown.wait() returns is too early, because that's also
what makes reaction ask the plugin to close(), and the plugin can print
important logs during its shutdown.
Having the task ignore the whole shutdown part is dead simple and is most
likely correct every time.

updated the wording of plugin-related errors.

also replaced futures::select! { future, sleep() } with the more concise,
macro-less tokio::timeout.
2026-02-09 12:00:00 +01:00
ppom
41b8a661d2
Print on stderr instead of stdout
...stdout is already taken by remoc ;)
2026-02-09 12:00:00 +01:00
ppom
87a25cf04c
Extract ipset options from action options so they're globally merged
Actions don't manage sets anymore.
Set options are merged at each new action,
and the sets are then managed on their own.
2026-02-09 12:00:00 +01:00
ppom
d6b6e9096b
ipset: Add the add/del option, journal orders & deduplicate them 2026-02-09 12:00:00 +01:00
ppom
3ccd471b45
ipset: so much ~~waow~~ code 2026-02-09 12:00:00 +01:00
ppom
3a6260fa26
reaction-plugin-ipset: first work session 2026-02-09 12:00:00 +01:00
kol3rby
959c32c01e Fix project not compiling on BSD & Solaris systems 2026-02-09 11:03:00 +01:00
ppom
05c6c1fbce
Fix tests
I initially wrote those tests with a test secret key file in the same directory.
It's better to have them write their own secret key file in their own dir
than to keep a dangling test file in the source tree and be sensitive to the directory tests are run in.
2026-01-19 12:00:00 +01:00
ppom
615d721c9a
cluster: Upgrade iroh to 0.95.1 2026-01-19 12:00:00 +01:00
ppom
19ee5688a7
Testing with clusters of up to 15 nodes. Fails at ~6 to 9 nodes.
Still a "connection lost" issue.
Happens irregularly.
Nodes tend to ignore incoming connections because their id is too small.
I should debug why that is the case.
Nodes may succeed in recreating connections,
but they should not lose connections on localhost like that...
2026-01-19 12:00:00 +01:00
ppom
fb6f54d84f
Disable test where one plugin is in multiple nodes of one cluster. Tests pass! 2026-01-19 12:00:00 +01:00
ppom
4fce6ecaf5
No long-lived task to try connecting to a node; one-shot task instead. Add interval randomness. 2026-01-19 12:00:00 +01:00
ppom
5bfcf318c7
Tests on a cluster of 2 nodes 2026-01-19 12:00:00 +01:00
ppom
7ede2fa79c
cluster: Fix use of stream timestamp in action 2026-01-19 12:00:00 +01:00
ppom
1e082086e5
cluster: add tests
- on configuration
- on sending messages to its own cluster
2026-01-19 12:00:00 +01:00
ppom
5a44ae89e9
sleep a bit more to fix time-sensitive test 2025-12-16 12:00:00 +01:00
ppom
8b3bde456e
cluster: UTC: no need for conversions, as Time already is UTC-aware 2025-12-14 12:00:00 +01:00
ppom
2095009fa9
cluster: use treedb for message queue persistence 2025-12-15 12:00:00 +01:00
ppom
f414245168
Separate treedb into its own crate 2025-12-14 12:00:00 +01:00
ppom
c595552504
plugin: Remove action oneshot response 2025-12-07 12:00:00 +01:00
ppom
96a551f7b9
Remove debug 2025-12-07 12:00:00 +01:00
ppom
f6e03496e1
Fix plugin_cluster test, now passing 🎉 2025-12-07 12:00:00 +01:00
ppom
fbf8c24e31
cluster: try_connect opens the channels and handshakes itself
This fixes a deadlock where each node is initiating a connection
and therefore unable to accept an incoming connection.

connection_rx can now be either a raw connection or an initialized connection.
cluster startup has been refactored to take this into account and make
ConnectionManager create this channel itself.
2025-12-08 12:00:00 +01:00
ppom
114dcd9945
Remove extra space in plugin relogging 2025-12-07 12:00:00 +01:00
ppom
da257966d9
Fix connection time out 🎉
I misinterpreted a millisecond arg as seconds, so the timeout was at 2ms
and the keep alive at 200ms, what could go wrong?

Also I gave this TransportConfig option to connect too. If not, the
default is used, not the Endpoint's own config.
https://github.com/n0-computer/iroh/issues/2872
2025-12-07 12:00:00 +01:00
ppom
b14f781528
cluster: use reaction_plugin's PatternLine 2025-12-08 12:00:00 +01:00
ppom
c9e3a07fde
ignore cluster test for now 2025-12-07 12:00:00 +01:00
ppom
aac9a71d4e
DB migration for previous commit change 2025-12-07 12:00:00 +01:00
ppom
79d85c1df1
Reduce usage of chrono
TODO: handle migrations
2025-12-07 12:00:00 +01:00
ppom
1c423c5258
Fix panic caused by previous commit
Connections still close as soon as they idle :/
2025-12-07 12:00:00 +01:00
ppom
b667b1a373
Get rid of remoc for peer communications
I couldn't understand why all communications timed out as soon as all
messages were sent, with a remoc RecvError::ChMux "multiplexer terminated".

So I'm getting rid of remoc (for now at least) and sending/receiving
raw data over the stream.

For now it panics after the handshake completes, which is already good
after only one test O:D
2025-12-07 12:00:00 +01:00
ppom
83ac520d27
Connections have ids, to fix simultaneous connections races 2025-12-07 12:00:00 +01:00
ppom
81fa49aa5c
Add tests to virtual and use reaction-plugin's PatternLine
Those tests made it possible to find the bug that led me to create PatternLine
Also add a serde option to deny extra keys in virtual action's config
2025-12-07 12:00:00 +01:00
ppom
e22429f92e
Add time to Exec messages, so that plugin actions don't have to calc this 2025-12-07 12:00:00 +01:00
ppom
da5c3afefb
Provide a correct implementation of user-configured match line parsing 2025-12-07 12:00:00 +01:00
ppom
3ed2ebd488
Two nodes succeeded to exchange messages 🎉
Separated try_connect into another task, to prevent deadlock

Send a byte to the new stream so that the other can see the stream
and accept it.
2025-12-07 12:00:00 +01:00
ppom
ff5200b0a0
cluster: add a lot of DEBUG msgs, Show trait to ease logging 2025-12-07 12:00:00 +01:00
ppom
a5d31f6c1a
cluster: First round of fixes and tests after first run
Still not working!
2025-12-07 12:00:00 +01:00
ppom
43fdd3a877
cluster: finish first draft
finish ConnectionManager main loop
handle local & remote messages, maintain local queue
2025-12-07 12:00:00 +01:00
ppom
2216edfba0
shutdown: permit ShutdownController to be cloned
When multiple tasks can ask to quit
2025-12-07 12:00:00 +01:00
ppom
0635bae544
cluster: created ConnectionManager
Reorganized code.
Moved some functionality from EndpointManager to ConnectionManager.
Still a lot to do there, but little left in the rest of the code.
2025-12-07 12:00:00 +01:00
ppom
552b311ac4
Move shutdown module to reaction-plugin and use in cluster 2025-12-07 12:00:00 +01:00
ppom
71d26766f8
plugin: Stream plugins now pass time information along their lines
This will permit the cluster to accurately receive older-than-immediate
information, and it will permit potential log plugins (journald?) to go
back in time at startup.
2025-12-07 12:00:00 +01:00
ppom
bc0271b209
Fix test that did not pass when virtual was not previously built
This seems a bit hacky though because
the test needs to have `cargo` in `$PATH`
2025-12-07 12:00:00 +01:00
ppom
5782e3eb29
Fix reaction-plugin doctests 2025-12-07 12:00:00 +01:00
ppom
a70b45ba2d
Move parse_duration to reaction-plugin and fix dependency tree 2025-12-07 12:00:00 +01:00
ppom
40c6202cd4
WIP switch to one task per connection 2025-12-07 12:00:00 +01:00
ppom
7e680a3a66
Remove shared_secret option 2025-12-07 12:00:00 +01:00
ppom
9235873084
Expose parse_duration to the plugin
It may be better to put it in the reaction-plugin module instead
2025-12-07 12:00:00 +01:00
ppom
ba9ab4c319
Remove insecure handshake and just check if we know this public key 2025-12-07 12:00:00 +01:00
ppom
2e7fa016c6
Insecure hash-based handshake. I must find something else. 2025-12-07 12:00:00 +01:00
ppom
3f6e74d096
Accept remote connections. Prepare work for shared_secret handshake
Renamed ConnectionInitializer to EndpointManager.
Endpoint isn't shared with Cluster anymore.

Moved big `match` in `loop` to own function, mainly to separate it from
the select macro and reduce LSP latency. But that's cleaner too.
2025-12-07 12:00:00 +01:00
ppom
983eff13eb
cluster initialization
- Actions are connected to Cluster,
- Separate task to (re)initialize connections
2025-12-07 12:00:00 +01:00
ppom
db622eec53
show plugin stream exit error only when not quitting 2025-12-07 12:00:00 +01:00
ppom
cd2d337850
Fixed communication error: do not use serde_json::Value
So maybe serde_json's Value can't be serialized with postbag.
Recreated my own Value that can be converted from and to serde_json's.

removed one useless tokio::spawn.
2025-12-07 12:00:00 +01:00
ppom
ebf906ea51
Better doc and errors 2025-12-07 12:00:00 +01:00
ppom
310d3dbe99
Fix plugin build, one secret key per cluster, more work on cluster init 2025-12-07 12:00:00 +01:00
ppom
58180fe609
fmt, clippy, tests; fix some tests after the startup refactor 2025-12-07 12:00:00 +01:00
ppom
20921be07d
Fix daemon startup: all subsystems will cleanly exit
Regardless of which startup error makes reaction exit.

Also made plugin stderr task exit when the ShutdownToken asks for it.
Also updated Rust edition to 2024.
2025-12-07 12:00:00 +01:00
ppom
a7604ca8d5
WIP allow plugin to print error to stderr and capture them
I have a race condition where reaction quits before printing process' stderr.
This will be the occasion to rework (again) reaction's daemon startup
2025-12-07 12:00:00 +01:00
ppom
124a2827d9
Cluster plugin init
- Remove PersistData utility
- Provide plugins a state directory instead, by starting them inside.
- Store the secret key as a file inside this directory.
- Use iroh's crate for base64 encoding, thus removing one dependency.
- Implement plugin's stream_impl and action_impl functions,
  creating all necessary data structures.
2025-12-07 12:00:00 +01:00
ppom
e3060d0404
cluster: retrieve, generate and store iroh SecretKey 2025-12-07 12:00:00 +01:00
ppom
c918910453
plugin: add simple way to store small data for plugins 2025-12-07 12:00:00 +01:00
ppom
61fe405b85
Add cluster plugin skeleton 2025-12-07 12:00:00 +01:00
ppom
8d864b1fb9
Add PersistData to trait 2025-12-07 12:00:00 +01:00
ppom
fa350310fd
plugin protocol: add manifest with version 2025-12-07 12:00:00 +01:00
ppom
0c4d19a4d7
plugins are now named
and fixed the virtual test
2025-12-07 12:00:00 +01:00
ppom
9f56e5d8d2
fmt 2025-12-07 12:00:00 +01:00
ppom
a5c563d55f
WIP systemd support
The logic seems to be fine.
Still need to think about which security defaults are pertinent.
2025-12-07 12:00:00 +01:00
ppom
76bc551043
Specify reaction as default bin 2025-12-07 12:00:00 +01:00
ppom
b44800ed30
cargo build builds plugin
And benchmark for virtual plugin
2025-12-07 12:00:00 +01:00
ppom
7cbf482e4d
plugin improvements
- fix panic of channel(0)
- cleaner plugin interface with one level of Result
- standalone metadata for stream plugins
- new test for plugin virtual
2025-12-07 12:00:00 +01:00
ppom
f08762c3f3
First shot of "virtual stream" plugin 2025-12-07 12:00:00 +01:00
ppom
160d27f13a
Fix tests 2025-12-07 12:00:00 +01:00
ppom
147a4623b2
First building version of reaction with plugins 2025-12-07 12:00:00 +01:00
ppom
d887acf27e
Adapt Config and plugin loading
daemon::Stream integration TBD
2025-12-07 12:00:00 +01:00
ppom
a99dea4421
Adapt reaction-plugin to remoc 2025-12-07 12:00:00 +01:00
ppom
fc11234f12
Loading plugin not on config side, but stream/action manager side
Trying to implement this on the StreamManager first.
I get lifetime errors that make no sense to me, like futures should
hold any argument with 'static.

I wonder if I should try to convert everything stabby to abi_stable &
async_ffi. I'll try this and see if it solves anything.
2025-12-07 12:00:00 +01:00
ppom
05f30c3c57
First WIP iteration on the plugin system, reaction side.
Delaying the implementation of plugin Filters. I'm not sure it's useful
(apart from JSON, what can be done?) and it's likely to be more painful
than the rest.
I'll probably just implement one custom JSON Filter like I did with
Pattern's IP support.
2025-12-07 12:00:00 +01:00
ppom
338aa8a8a2
Fix compilation error
The lifetime compilation error disappears when the filter methods are
async, so let's just do that for now
2025-12-07 12:00:00 +01:00
ppom
ae46932219
Fix workspace dependency 2025-12-07 12:00:00 +01:00
ppom
8229f01182
Dependency cleanup 2025-12-07 12:00:00 +01:00
ppom
22125dfd53
First plugin shot 2025-12-07 12:00:00 +01:00
ppom
d36e54c55b
README: add mail 2025-10-31 12:00:00 +01:00
ppom
fa63a9feb8
README: fix example YAML 2025-10-07 12:00:00 +02:00
ppom
7f0cf32666
v2.2.1 2025-09-20 12:00:00 +02:00
Baptiste Careil
c6e4af96cd
Fix some triggers no longer triggering after being loaded from db 2025-09-19 12:00:00 +02:00
ppom
278baaa3e6
Shorter variant of the heavy load benchmark for quicker results 2025-09-19 12:00:00 +02:00
ppom
974139610f
async db
Fixing deadlock on start.
FilterManager sends a lot of write operations on start.
Each of them spawned a new task to send the log into a channel.
All those writes were unlocked when the Database started, shortly after.
Now that the channel sending is awaited, this created a deadlock.

Database's API and startup have been rewritten, so that open_tree goes
across the same channel used to log write operations.

Database is started as soon as it is opened.
The Database struct is now just a Sender to the real Database, now
DatabaseManager.

This removes the constraint for Tree opening happening before any write
operation!
2025-09-19 12:00:00 +02:00
ppom
aec3bb54ed
async db 2025-09-07 12:00:00 +02:00
ppom
582889f71e
WIP async db
Fixes an inherent problem of the sync db, which spawns a new task for
persistence. This makes the log unordered, which can cause consistency
issues.
2025-09-07 12:00:00 +02:00
ppom
e37bd6ebbe
Add a mention on Azlux's third-party repository
Related to #134
2025-09-06 12:00:00 +02:00
Baptiste Careil
1f734a516d Fix test load_conf_directory
- Fixed concurrency to 1 so it's not platform-dependent
- Added fields introduced by recent changes
- Used builtin str comparator that produces a diff instead of the eq
  predicate
2025-08-17 18:33:09 +02:00
ppom
e45963dd4c
Debian: Add extended-description
Fix #134
2025-08-11 12:00:00 +02:00
ppom
0e75514db3
Debian: Add section information
Fix #134
2025-08-11 12:00:00 +02:00
ppom
fc6a385574
Add armhf-gnu build for Raspberry Pis 2025-08-11 12:00:00 +02:00
ppom
dcc2e1ec4c
v2.2.0 2025-08-08 12:00:00 +02:00
ppom
ca89c7f72a
Fix filter commands executing before start commands
Now creating the socket file before starting its manager.
So I can launch start commands after its creation, and before creating
the filter managers.
2025-08-08 12:00:00 +02:00
ppom
e8f13dc9ff
cargo fmt 2025-08-08 12:00:00 +02:00
ppom
a7b63b69a8
Database: finish writing entries when quitting 2025-08-08 12:00:00 +02:00
ppom
10bd0a1859
Tree::fetch_update: Do not remove and re-add entries.
Better to clone the value than to write another entry!
2025-08-08 12:00:00 +02:00
ppom
f4b5ed20ab
Add debug on start/stop commands 2025-08-08 12:00:00 +02:00
ppom
58f4793308
Fix triggers being forgotten on after actions with on_exit: true
decrement_trigger no longer deletes triggers when exiting

test still failing because filters start before start commands
2025-08-08 12:00:00 +02:00
ppom
607141f22f
Fix after action commands not being correctly awaited
We were scheduling the action with exec_now, but it spawns a new task
itself, which did not have the ShutdownToken.

The persistence part of the start_stop test doesn't work
because when the after actions are executed, they decrement the trigger,
which is then removed from DB.
So they should not decrement it anymore, just check that it's still
there. Next commit!
2025-08-06 12:00:00 +02:00
ppom
c824583613
Add new failing tests on start / stop sequences.
They fail because reaction doesn't correctly order stop commands after
2025-08-06 12:00:00 +02:00
ppom
91885e49bd
Ignore new tests that fail for now
FIXME check this later
2025-08-06 12:00:00 +02:00
ppom
eea708883b
Add example config equality test 2025-08-06 12:00:00 +02:00
Baptiste Careil
0337fcab1f
Automate some tests 2025-08-06 12:00:00 +02:00
ppom
90ec56902a
Add tests for triggers tree migration 2025-08-06 12:00:00 +02:00
ppom
eaf40cb579
test Filter::regex conformity after setup 2025-08-06 12:00:00 +02:00
ppom
441d981a20
Duplicate::Ignore: do not show ignored matches
move match logging from concepts/filter to daemon/filter
2025-08-06 12:00:00 +02:00
ppom
f36464299a
Duplicate::Extend: correctly reschedule actions not already triggered
Before, it rescheduled all actions with an `after` directive,
which is wrong when some after actions have already been executed
(in case of different actions with different after durations)
2025-08-06 12:00:00 +02:00
ppom
a1df62077c
cargo clippy 2025-08-05 12:00:00 +02:00
ppom
56e4d77854
Deduplication of triggers on start 2025-08-05 12:00:00 +02:00
ppom
f4d002c615
Fix trigger count on start
schedule_exec was called before inserting the data in triggers,
resulting in the action count being set again after the decrement in
schedule_exec.
This could lead to:
- trigger not disappearing after done
- second action with no "after" not being run
- ...
2025-08-05 12:00:00 +02:00
ppom
f477310a29
duplicates: Add failing tests for Deduplication on start 2025-08-05 12:00:00 +02:00
ppom
773eb76f92
Update README to advertise ip-specific features 2025-08-04 12:00:00 +02:00
ppom
59c7bfdd1d
Move action filtering logic from daemon to concepts and use at 3 places
Used in Filter::schedule_exec, Filter::handle_order, State::add_trigger
Add proper testing.
This also fix previously failing test.
2025-08-04 12:00:00 +02:00
ppom
cebdbc7ad0
ipv4 regex: do not accept numbers 0[0-9]
The Rust std won't accept them anyway: to avoid ambiguity with octal
notation, it forbids octets with leading zeros.
2025-08-04 12:00:00 +02:00
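The std behaviour referenced here is easy to check: since Rust 1.53, `Ipv4Addr` parsing rejects octets with leading zeros outright rather than guessing between octal and decimal. A minimal illustration:

```rust
use std::net::Ipv4Addr;

fn main() {
    // A plain dotted-quad parses fine.
    assert!("192.168.0.1".parse::<Ipv4Addr>().is_ok());
    // A leading zero is rejected: std refuses octets like "010"
    // instead of interpreting them as octal.
    assert!("010.2.3.4".parse::<Ipv4Addr>().is_err());
    println!("leading-zero octets rejected");
}
```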
ppom
0b2bfe533b
Update example configs to get rid of ip46tables 2025-08-04 12:00:00 +02:00
ppom
a0b804811b
Refacto: make all Config structures' fields public
Config is 'static after setup anyways.
I don't need to hide all this, it's just cumbersome for tests.
2025-08-04 12:00:00 +02:00
ppom
6f63f49acd
Add failing test for flushing ipvXonly actions 2025-08-04 12:00:00 +02:00
ppom
b927ba4fdf
Add ipv4only/ipv6only logic to actions 2025-08-04 12:00:00 +02:00
ppom
e4e50dd03b
cargo clippy 2025-08-04 12:00:00 +02:00
ppom
0a9c7f97df
Split IP pattern code in 3 files 2025-08-04 12:00:00 +02:00
ppom
130607d28f
Add test for pattern deserialization 2025-08-04 12:00:00 +02:00
ppom
19e3b2bf98
Make IP regex much more robust and add tests
IP will be correctly extracted in any regex line, even if it is
surrounded by greedy catch-all: .*<ip>.*

This was actually hard to do!
2025-08-04 12:00:00 +02:00
ppom
421002442e
Add ip tests on daemon::filter
Fix PatternType deserialization
Fix regex deserialization (now optional)
Tests currently failing
2025-08-04 12:00:00 +02:00
ppom
4f79b476aa
Cut ip regexes in smaller blocks and add tests 2025-08-04 12:00:00 +02:00
ppom
6cde89cc4b
rename file 2025-08-04 12:00:00 +02:00
ppom
43f8b66870
Update config documentation 2025-08-04 12:00:00 +02:00
ppom
94b40c4a0b
Add more tests
Done: Tests on PatternIp.
Todo: Tests on Pattern.

Fixed a bug in is_ignore.
Checked a new possible misconfiguration.
2025-08-04 12:00:00 +02:00
ppom
a5f616e295
WIP pattern ip
add ipv{4,6}mask
factorize redundant code in util functions
normalize match
most tests done
2025-08-04 12:00:00 +02:00
ppom
04b5dfd95b
ip: Add includes, tests, more setup constraints 2025-08-04 12:00:00 +02:00
ppom
44e5757ae3
WIP pattern ip 2025-08-04 12:00:00 +02:00
ppom
ea0452f62c
Fix components starting order
Now Database and Socket components are created before start commands are
executed. So in case of error, start commands are not executed.

Also socket syscalls are now async instead of blocking, for better
integration with the async runtime.

New start order:
- DB
- Socket
- Start commands
- Streams
2025-08-04 12:00:00 +02:00
ppom
6b970e74c5
Update configuration reference 2025-07-14 12:00:00 +02:00
ppom
d8db2a1745
Add extensive test on Duplicate and fix related bug 2025-07-14 12:00:00 +02:00
ppom
6f346ff371
Run the existing FilterManager tests for each Duplicate enum 2025-07-14 12:00:00 +02:00
ppom
81e5fb4c42
add State tests and fix trigger persistance
Triggers were only persisted for the retry duration, instead of the
longest action duration.
As retry is often shorter than after, this would make reaction forget
most triggers on restart.
entry_timeout is now set to longuest_action_duration.
2025-07-14 12:00:00 +02:00
ppom
270a1a9bdf
Duplicate: Fix tests, more tests 2025-07-14 12:00:00 +02:00
ppom
d9842c2340
Duplicate::Extend: Re-Trigger only after actions
- implement schedule_exec's only_after
2025-07-14 12:00:00 +02:00
ppom
22384a2cb4
rename React::Exec to React::Trigger 2025-07-14 12:00:00 +02:00
ppom
2cebb733b5
WIP duplicates
- change remove_trigger to remove all triggers for a Match
- schedule_exec will take only_after boolean
2025-07-14 12:00:00 +02:00
ppom
881fc76bf9
WIP duplicates
- new duplicate option
- change triggers Tree structure to keep O(log(n)) querying:
  now we need to know if a match already has a trigger.
- triggers migration
- triggers adaptations in State & FilterManager
2025-07-14 12:00:00 +02:00
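The O(log(n)) point above can be sketched with a plain ordered map: if the key starts with the match value, "does this match already have a trigger?" becomes a range lookup instead of a full scan. This is an illustrative sketch only — the key shape and names below are assumptions, not reaction's actual tree schema:

```rust
use std::collections::BTreeMap;

fn main() {
    // Hypothetical triggers tree keyed by (match value, timestamp).
    let mut triggers: BTreeMap<(String, u64), &str> = BTreeMap::new();
    triggers.insert(("10.0.0.1".into(), 1_700_000_000), "ban");
    triggers.insert(("10.0.0.2".into(), 1_700_000_005), "ban");

    // Existence check: scan the key range sharing the match prefix.
    // BTreeMap::range is O(log n) to locate the start of the range.
    let ip = "10.0.0.1".to_string();
    let has_trigger = triggers
        .range((ip.clone(), u64::MIN)..=(ip, u64::MAX))
        .next()
        .is_some();
    assert!(has_trigger);
}
```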
ppom
4ddaf6c195
v2.1.2 2025-07-14 12:00:00 +02:00
ppom
b62f085e51
Fix trigger persistance
Triggers were only persisted for the retry duration, instead of the
longest action duration.
As retry is often shorter than after, this would make reaction forget
most triggers on restart.
entry_timeout is now set to longuest_action_duration.

Cherry picked from the duplicate branch.
2025-07-14 12:00:00 +02:00
ppom
fd0dc91824
Get rid of useless Buffer wrapper for Vec<u8>
Write is already implemented on Vec<u8>
2025-07-11 12:00:00 +02:00
ppom
d880f7338b
Get rid of low-level async with Poll
use futures::stream::try_unfold to create a Stream from an async closure
2025-07-11 12:00:00 +02:00
ppom
e0609e3c3e
Move rewrite section 2025-07-08 12:00:00 +02:00
ppom
28f136f491
README update
project status update (rust rewrite ok)

contributing: separate ideas & code
2025-07-08 12:00:00 +02:00
ppom
5d9f2ceb6a
v2.1.1 2025-07-07 12:00:00 +02:00
ppom
bba113b6ab
Remove newline at the end of stream lines
Bug introduced by !24 which kept trailing `\n` and fed it to filters.
Thus regexes ending with `$` couldn't match anymore.

Fixes #128
2025-07-07 12:00:00 +02:00
ppom
5bf67860f4
Fix Filter::regex for StreamManager::compiled_regex_set
regexes were pushed multiple times, with pattern names not completely
replaced by their corresponding regexes.
They are now pushed only once pattern replacement is finished.
2025-07-07 12:00:00 +02:00
ppom
39bf662296
Fix example configs
- Fix comma issues
- Fix regex syntax doc
- Add ssh regexes
2025-06-28 12:00:00 +02:00
ppom
359957c58c
README: Add trigger command 2025-06-24 12:00:00 +02:00
ppom
3f3236cafb
v2.1.0 2025-06-24 12:00:00 +02:00
ppom
78056b6fc5
src/client/request.rs rename and ARCHITECTURE.md update 2025-06-24 12:00:00 +02:00
ppom
6a778f3d01
cargo fmt, cargo clippy --all-targets 2025-06-24 12:00:00 +02:00
ppom
35862d32fa
Fix trigger command
- Force STREAM.FILTER on the command line
- Fix typo
2025-06-24 12:00:00 +02:00
ppom
283d1867b8
Benchmark: Add real-life configuration file and benchmark wrapper
Performance on this real-life configuration:

Before last commit:
Service runtime: 2min 22.669s
CPU time consumed: 3min 44.299s
Memory peak: 50.7M (swap: 0B)

With last commit:
Service runtime: 7.569s
CPU time consumed: 21.998s
Memory peak: 105.6M (swap: 0B)
2025-06-23 12:00:00 +02:00
ppom
ad6b0faa30
Performance: Use a RegexSet for all regexes of a Stream
StreamManager is now a struct that has its own RegexSet created from all
the regexes inside its Filters. Instead of calling
FilterManager::handle_line on all its FilterManagers, resulting in m*n
regex passes, it matches on all the regexes with its RegexSet.
It then only calls FilterManager::handle_line on matching Filters.

This should increase performance in those cases:
- Streams with a lot of filters or a lot of regexes
- Filters that match a small proportion of their Stream lines

This may decrease performance when most lines are matched by all
Filters of a Stream.
2025-06-23 12:00:00 +02:00
ppom
55ed7b9c5f
Amend heavy-load test
- Add wrapper script
- Add non-matching lines
- Put two filters on the same stream, where either one of them matches
2025-06-23 12:00:00 +02:00
Baptiste Careil
d12a61c14a Fix #124: discard invalid utf8 sequences from input streams 2025-06-23 19:13:49 +00:00
Baptiste Careil
d4ffae8489 Fix #126: make config evaluation order predictable 2025-06-23 20:07:17 +02:00
ppom
529e40acd4
move State into its own file
This permits reducing the size of the filter/mod.rs file
2025-06-23 12:00:00 +02:00
ppom
39ae570ae5
rename file 2025-06-23 12:00:00 +02:00
ppom
4cb69fb0d4
Add test for trigger command 2025-06-23 12:00:00 +02:00
ppom
fad9ce1166
Add unit tests to FilterManager::handle_trigger 2025-06-23 12:00:00 +02:00
ppom
ff8ea60ce6
WIP trigger command and ignoreregex performance improvement
- ignoreregex is now a RegexSet for improved performance
- Vec::clear() replaced by new Vec to really free RAM
2025-06-23 12:00:00 +02:00
ppom
b0c307a9d2
WIP trigger command 2025-06-23 12:00:00 +02:00
ppom
731ad6ddfd
Simplify parse_duration tests by using appropriate units 2025-06-23 12:00:00 +02:00
ppom
0ff8fda607
cargo fmt, cargo clippy --all-targets 2025-06-17 12:00:00 +02:00
ppom
9963ef4192
Improve error message for retry < 2.
Fixes #125
2025-06-17 12:00:00 +02:00
ppom
ff84a31a7d
build: use CC env var if available. defaults to cc instead of gcc 2025-06-15 12:00:00 +02:00
ppom
0d9fc47016
Update duration format documentation
As it's no longer Go's format
2025-06-10 12:00:00 +02:00
ppom
5bccdb5ba7
Add oneshot option for actions
Fixes #92
2025-06-10 12:00:00 +02:00
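The oneshot option added here can be pictured as a config fragment. This is a hypothetical sketch only: the changelog does not spell out the schema, so the placement of `oneshot` on an action and its boolean type are assumptions; the surrounding shape follows the jsonnet example further down in this log.

```yaml
streams:
  ssh:
    cmd: ["journalctl", "-fu", "sshd.service"]
    filters:
      failed:
        regex: ["authentication failure"]
        actions:
          notify:
            cmd: ["echo", "matched"]
            oneshot: true  # assumed placement/type, named after the option this commit adds
```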
ppom
cc38c55fdb
Add test-config subcommand to README 2025-06-06 12:00:00 +02:00
ppom
c04168d4dc
Fix outdated links in README 2025-06-06 12:00:00 +02:00
ppom
2e9e7a2a7b
Remove old go codebase 2025-06-06 12:00:00 +02:00
ppom
e642620ae3
Cross-compile C binaries too 2025-06-06 12:00:00 +02:00
ppom
8f5511b415
v2.0.1 2025-06-05 12:00:00 +02:00
ppom
388d4dac90
Fix tarball Makefile, release.py
- Makefile creates missing directories
- release.py puts tarballs & debs in local/ directory when not
  publishing
2025-06-05 12:00:00 +02:00
ppom
f63502759f
make official release only with --publish flag 2025-06-05 12:00:00 +02:00
ppom
74280d0f45
Fix completions filenames and their removal 2025-06-05 12:00:00 +02:00
Martin
8543fead54 Fix makefile install
remove duplicated /man/man1
otherwise, we get an error during installation:
install: target '/usr/local/share/man/man1/man/man1/': No such file or directory

Use -D to create missing directory
2025-06-05 16:52:09 +02:00
ppom
3beca6d7a5
Document state_directory
Fixes #71
2025-06-05 12:00:00 +02:00
ppom
b53044323c
Add small doc for C helpers 2025-06-05 12:00:00 +02:00
ppom
02f13a263e
fix release 2025-06-05 12:00:00 +02:00
ppom
d4fb820cb7
version 2.0.0 2025-06-05 12:00:00 +02:00
ppom
da884029e6
Fix git not sorting tags correctly 2025-06-05 12:00:00 +02:00
ppom
bc078471a9
NLnet statement 2025-06-01 12:00:00 +02:00
ppom
daf1bf3818
Update configuration
- README.md: update minimal example (had been outdated for a long time 🤐)
- README.md: link to Security part of the wiki
- README.md: fix OpenBSD broken link
- README.md: add `state_directory` option
- activitywatch & server configs: remove from config directory. only
  available in the wiki.
- heavy-load: move to new bench directory
- test: move to tests directory

Fix #121
2025-05-31 12:00:00 +02:00
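The `state_directory` option mentioned here is described further down this log (commit 8579e30890) as an optional configuration member that sets where reaction saves its databases, defaulting to the current working directory. A minimal sketch, transposing the jsonnet example later in this log to YAML; treat everything except `state_directory` as illustrative:

```yaml
# Optional: where reaction persists its databases
# (defaults to the current working directory).
state_directory: /var/lib/reaction
streams:
  s1:
    cmd: ["seq", "-w", "499999"]
    filters:
      f1:
        regex: ["^[0-9]+$"]
        actions:
          a:
            cmd: ["true"]
```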
ppom
ebad317a97
Config: fix state_directory and concurrency merging 2025-05-28 12:00:00 +02:00
ppom
9152c95b03
Change error wording again 2025-05-28 12:00:00 +02:00
ppom
8ffbcad1b9
Add test-config man page 2025-05-28 12:00:00 +02:00
ppom
f7bd28e46e
Config: cleanup code 2025-05-28 12:00:00 +02:00
ppom
73ffcb97ab
Update CLI help messages 2025-05-28 12:00:00 +02:00
ppom
283d5c0f13
delete note files 2025-05-28 12:00:00 +02:00
ppom
1a5548c871
Reorder Config fields for test-config command
This order looks more readable to me, with:
- one line options at the beginning of config
- streams at the end of config
- filters at the end of streams
- actions at the end of filters
2025-05-28 12:00:00 +02:00
ppom
4f2eac2788
Change error wording 2025-05-28 12:00:00 +02:00
Baptiste Careil
231c9f8a99
Add test-config sub-command 2025-05-28 12:00:00 +02:00
Baptiste Careil
cf96fa30f1
Add test directory for reading configuration files from a directory 2025-05-28 12:00:00 +02:00
Baptiste Careil
28b3a173bb
Add ability to read config from multiple files in a same directory 2025-05-28 12:00:00 +02:00
ppom
5260b6b9b1
Clean handle_order
Had bits that were relevant in LMDB but are not anymore.
Use of keys().cloned() instead of custom map()
Clone after filtering.
2025-05-28 12:00:00 +02:00
ppom
a5bbd7641a
reaction {show,flush}: do not display empty pending actions
We were creating an action array even when it was already past.
2025-05-28 12:00:00 +02:00
ppom
14aa859e2d
cargo update 2025-05-28 12:00:00 +02:00
ppom
c5dbb4e29c
Remove bincode & fjall crates 2025-05-28 12:00:00 +02:00
ppom
3ffb2b5bed
Adapt to Time being serialized as rfc3339
And all tests pass! 🎉
2025-05-28 12:00:00 +02:00
ppom
3d05815263
Daemon runs the database task
Completely forgot to run the database task, no wonder the test didn't pass 😅

Now the database asks for the daemon to stop if it errors.
A lot better than panicking, as it was until now.

It also quits with all the other tasks, thanks to its ShutdownToken.

It uses a oneshot channel to tell the daemon about a potential error.
2025-05-28 12:00:00 +02:00
ppom
78ca0025df
Document daemon::shutdown. ShutdownToken is now able to ask for shutdown. 2025-05-28 12:00:00 +02:00
ppom
0c56acfd81
Fix FilterManager unit tests
e2e test failing for now
2025-05-28 12:00:00 +02:00
ppom
1783bf8062
Use treedb in FilterManager
Not tested yet, tests must be adapted
2025-05-28 12:00:00 +02:00
ppom
037b3498bc
rename waltree into treedb
WAL was the wrong name. It's not a Write-Ahead Log, but a "Write-Behind
Log" (new concept haha), so it made no sense to keep wal.
And wbl is unpronounceable.
2025-05-28 12:00:00 +02:00
ppom
5f21db5279
WriteDB: reuse write buffer 2025-05-28 12:00:00 +02:00
ppom
fe5cd70f7a
use tokio::time::interval instead of sleep, for guaranteed flushes 2025-05-28 12:00:00 +02:00
ppom
c93f4e34c1
LoadedDB internal to Database, open_tree uses LoadedDB. More doc! 2025-05-28 12:00:00 +02:00
ppom
482df254ba
implement Tree::fetch_update 2025-05-28 12:00:00 +02:00
ppom
ec0d8c72e9
Reorganise waltree. Test log rotation. Add tokio channel. Implement Tree operations. 2025-05-28 12:00:00 +02:00
ppom
6cdad37588
Finish tests on lowlevel db (and fix bug) 2025-05-28 12:00:00 +02:00
ppom
924c8e8635
test ReadDB::read 2025-05-28 12:00:00 +02:00
ppom
a029e28812
Tested ReadDB and WriteDB 2025-05-28 12:00:00 +02:00
ppom
779e5e5d86
more WIP, split into low level / high level 2025-05-28 12:00:00 +02:00
ppom
bb9b17761a
WIP custom db 2025-05-28 12:00:00 +02:00
ppom
9c8be2f2de
Working db rewrite using fjall 2025-05-28 12:00:00 +02:00
ppom
2facac9fbd
WIP fjall 2025-05-28 12:00:00 +02:00
ppom
11459f7ee4
lmdb NO_SYNC 2025-05-28 12:00:00 +02:00
ppom
68e35ed021
Benchmark after heed/lmdb rewrite 2025-05-28 12:00:00 +02:00
ppom
db0e480daa
arbitrary max size
To avoid a runtime Error(MapSize) by heed

FIXME How to know which value is high enough?
2025-05-28 12:00:00 +02:00
ppom
a056efe770
Fix warning until next heed release 2025-05-28 12:00:00 +02:00
ppom
d952a61f83
Fix last lmdb bug
Hopefully the last. Tests pass!
I was using a read transaction outdated by recent changes in an
uncommitted write transaction. So I use the same write transaction to
read things now.

I also checked that other uses of a read transaction are fine.
They're fine because they're only used to iterate on a whole db, while
the write transaction drops some keys/values.
2025-05-28 12:00:00 +02:00
ppom
64fdc52e9a
Better use of transactions, React enum for more precise tests, test macros
Transactions are now opened at a higher level, to
1. Ensure a logical transaction is a complete action.
   (ex: when exec, remove match & add corresponding trigger)
2. Fix issue where multiple Rw / Ro transactions are open at the same
   time.

So now they're opened only by:
- FilterManager's task in handle_*,
- FilterManager's task in cleanup_*,
- the scheduled task for after actions.

handle_line returns an enum (no match, match, exec) to have more precise
tests. So maybe the tests can care less about database state and focus
only on FilterManager's behavior.

Use of macros to have more readable tests, reduce duplication, and
remove complex generic functions.

Tests still failing, behavior seems to be altered.
2025-05-28 12:00:00 +02:00
ppom
ea1bc033c5
WIP 2025-05-28 12:00:00 +02:00
ppom
8393ebf20e
Temporary fix while we can't remove heed databases
Also remove sled related code
2025-05-28 12:00:00 +02:00
ppom
77f001f860
Add debug log before closing DB 2025-05-28 12:00:00 +02:00
ppom
e321e3ea0b
collecting maps before any remove_match to avoid iterators holding a read
But now the tasks don't finish so I introduced another bug?
2025-05-28 12:00:00 +02:00
ppom
04e3fb3e28
unit tests passing. handle_order in e2e tests panicking
The panic is caused by two read transactions open in the same thread.
Don't know if I will fix this with txns without tls
or if I'll find another way
2025-05-28 12:00:00 +02:00
ppom
3b6c352204
WIP 2025-05-28 12:00:00 +02:00
ppom
0abb55b69e
WIP 2025-05-28 12:00:00 +02:00
ppom
9e83ceed46
WIP 2025-05-28 12:00:00 +02:00
ppom
660a7d5a58
WIP 2025-05-28 12:00:00 +02:00
ppom
2ab6eeceaa
Matrix rooms 2025-05-16 12:00:00 +02:00
ppom
a297f26f3d Prettier exec_limit handler 2025-05-03 12:00:00 +02:00
ppom
79b132d775 README: add a word on oppression 2025-05-03 12:00:00 +02:00
ppom
94502443f7 Prettier pattern-matching 💅 2025-04-29 12:00:00 +02:00
ppom
da9287c16c ip46tables: fix return type of exec func 2025-04-22 12:00:00 +02:00
ppom
170c1fd01e release.py: add cargo-deb to the dependencies list 2025-03-26 12:00:00 +01:00
ppom
deaf418afd Add ip46tables and nft46 executables to the .deb 2025-03-04 12:00:00 +01:00
ppom
f4b8572e94 run clippy on test target 2025-02-26 12:00:00 +01:00
ppom
b655ef1008 Move tests, add tests, fix memory leak
- Move FilterManager tests to own file
- Add some persistence tests
- Fix memory leak where an entry like (Match, EmptySet) would be kept in
  the DB (instead of being discarded)
2025-02-26 12:00:00 +01:00
ppom
859e35e5c3 Speed up tests now that we handle millisecond precision
Previously the database cropped times to second precision.
Now it keeps millisecond precision and handles millisecond units.
2025-02-26 12:00:00 +01:00
ppom
b448089f58 Added daemon tests; parse_duration now supports milliseconds
TODO test persistence handling on FilterManager
2025-02-25 12:00:00 +01:00
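The millisecond support that parse_duration gains here can be sketched in plain std Rust. This is a minimal illustration of the idea, not reaction's actual parser (which presumably handles more units and formats):

```rust
use std::time::Duration;

/// Minimal duration parser for strings like "250ms", "30s", "2m", "1h".
/// (Illustrative sketch only; reaction's real parser is richer.)
fn parse_duration(input: &str) -> Result<Duration, String> {
    // Split at the first non-digit character: "250ms" -> ("250", "ms").
    let (num, unit) = match input.find(|c: char| !c.is_ascii_digit()) {
        Some(i) => input.split_at(i),
        None => return Err(format!("missing unit in {input:?}")),
    };
    let n: u64 = num.parse().map_err(|e| format!("{e}"))?;
    match unit {
        "ms" => Ok(Duration::from_millis(n)), // the newly supported unit
        "s" => Ok(Duration::from_secs(n)),
        "m" => Ok(Duration::from_secs(n * 60)),
        "h" => Ok(Duration::from_secs(n * 3600)),
        _ => Err(format!("unknown unit {unit:?}")),
    }
}

fn main() {
    assert_eq!(parse_duration("250ms").unwrap(), Duration::from_millis(250));
    assert_eq!(parse_duration("2m").unwrap(), Duration::from_secs(120));
    assert!(parse_duration("10").is_err()); // unit is mandatory
    println!("ok");
}
```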
ppom
fe1a93b8a2 Remove more dead code
Client/Daemon communication is now JSON, so we don't need
these quirks anymore
2025-02-23 12:00:00 +01:00
ppom
fb9c0f8699 Remove dead code
Process is a bit tedious, as pub objects are not treated as unused:

1. Remove all `pub`s:
	sed -i 's/\bpub //' (rg -le rs src/)

2. Fix the errors in your IDE by adding the necessary `pub`s

3. Remove what's marked as unused, when the project compiles again.
2025-02-23 12:00:00 +01:00
ppom
f641f45211 Update ARCHITECTURE.md 2025-02-22 12:00:00 +01:00
ppom
f7184ff42b release: Fix expected return code from Gitlab 2025-02-21 12:00:00 +01:00
ppom
1ec558e559 v2.0.0-rc2 release
- Cross compilation to amd64 and arm64
- Fully static binaries
- Debian packages
- Man pages
- Shell completions
- Systemd service
2025-02-21 12:00:00 +01:00
ppom
ca8656148a Remove import/export scripts 2025-02-20 12:00:00 +01:00
ppom
1cc537e258 Remove outdated TODOs 2025-02-17 12:00:00 +01:00
ppom
1a57481110 Client-Daemon protocol: use json instead of bincode
This makes it easy to have a pretty PatternStatus
Also having a human-readable protocol format is good!

Closes #109
2025-02-17 12:00:00 +01:00
ppom
9642f47512 Config: permit top-level definitions key
closes #118
2025-02-17 12:00:00 +01:00
ppom
a238c7411f Transform SledTreeExt into a struct that wraps sled::Tree
This gives a stronger type system, as the type of a Tree is
then decided at construction, not for every function call. This also
makes the code clearer, with far fewer type annotations.
2025-02-16 12:00:00 +01:00
ppom
d7824faf3d Correct my wandering in the two previous commits
End of the TODO task of 9569421336

So I was wandering around, with only poor time-based solutions for this
trigger cleanup. I couldn't know for sure when the last time I needed
this trigger entry was, because two actions can have the same after
duration. So now I just store the count of actions still needed in the
sled tree, and there are no concurrency concerns anymore.

Sky is blue and grass is green 🌈
2025-02-17 12:00:00 +01:00
ppom
82ab861132 Fix and enhance tests
The tokio task that removes the trigger after the last after-action was
not woken up & cancelled by the cancellation token, but left to die when
tokio quits.

Unfortunately this broke the tests, where tokio is kept from one
reaction launch to another. Those tasks kept a reference to the DB,
preventing it from closing, and thus preventing a new DB instance from
being opened.

This was difficult to debug due to the tests only checking that reaction
quits with an error, but not testing what kind of error it is.
So they now check that reaction quits with the "normal" error, which is
that all streams finished.

The tokio task is now woken up, like the other sleeping tasks, at
shutdown, and quits doing nothing.

It's not very satisfying because I assume it's a bit costly to wake up a
task, so I'm gonna see how I can improve this.

I may start a new tokio runtime for each subtest.
I may delete this task and move its instructions to the last
after-action.
I may do both.
2025-02-16 12:00:00 +01:00
ppom
a7b1d33492 WIP 2025-02-16 12:00:00 +01:00
ppom
7a3b8bd0e2 remove useless async 2025-02-16 12:00:00 +01:00
ppom
9569421336 Move ActionManager's logic to FilterManager
I realized that already-ran actions were not run on startup.
This is a problem because reaction is expected to ban, then unban.
On startup it should run already-ran actions which have a still-to-be-run
action with an after directive.

So the solution to this problem has been to move the action execution
logic to the filter manager, which has a coherent view of the whole
action set of a filter.

I also changed how pending actions are run on exit:
- Before, pending tasks would be left out by the runtime quitting, and
  no tasks would run to exec them
- Now, pending tasks are woken up by a global CancellationToken.
  This token also replaces the shutdown channel.
  The daemon/mod.rs then waits for all the tasks to complete via an
  empty mpsc channel.

TODO
Add the (Match, Time) to triggers only if longuest_action_duration is
non-zero.
Then spawn a task that will remove past self.triggers (Match, Time) after
longuest_action_duration time.
2025-02-13 12:00:00 +01:00
ppom
2d86bfb470 Fix: do not try to remove sled's default map
Because it will Err ^^
2025-02-12 12:00:00 +01:00
ppom
0d0a4708ec Improve error message when opening database 2025-02-12 12:00:00 +01:00
ppom
d06d386f77 Fix default state_directory 2025-02-12 12:00:00 +01:00
ppom
d748924ba8 sled: remove unused trees on start 2025-02-12 12:00:00 +01:00
ppom
7df357c4e2 add db flush on quit 2025-02-10 12:00:00 +01:00
ppom
d36fe2871f simplify complex state sharing
thanks sled
2025-02-10 12:00:00 +01:00
ppom
639be7eebf action & filter: not &mut self needed anymore
Thanks 🙏 sled for those wonderful types
Shared state is so much easier now
2025-02-10 12:00:00 +01:00
ppom
38d433fa6c actions startup logic
realized ordered_times were not needed in ActionManager so I removed
them. We clean up old actions only on startup, and we run the up-to-date
ones, so it's ok to do this in O(n). Thus ordered_times is not needed.
Whenever actions expire, they're run by a scheduled tokio task, so no
old state is kept.

Now on startup:
- cleanup old actions
- run still relevant ones
2025-02-10 12:00:00 +01:00
ppom
2782edf27a smarty brain lifetime annotations to avoid cloning 2025-02-07 12:00:00 +01:00
ppom
f5dd36eec1 use sled in action.rs
- Fix remove logic in clear_past_times
- add iter_ext() in SledTreeExt
- add SledDbExt for opening correct trees
2025-02-07 12:00:00 +01:00
ppom
8cc32d122e WIP: use sled instead of custom db implementation
Goal is to directly use sled as a drop-in replacement for BTreeMaps,
which already maintain the state of reaction.
Those maps are now persisted by sled.

A lot of code is no longer needed, and is deleted in this commit.

This commit focuses on adapting FilterManager to sled.

TODO
- adapt ActionManager as well
- at startup, clean old matches
- at startup, clean old actions
- at startup, run still relevant actions
- at startup, remove sled trees that no longer correspond to something
  in the configuration
- refactor socket.rs to remove complex state sharing, as sled can now be
  directly used.
2025-02-05 12:00:00 +01:00
ppom
1587e38c68 Move example.yml back to config/ directory 2025-02-05 12:00:00 +01:00
ppom
db0391bdb7 Add SVG logo 2025-01-06 12:00:00 +01:00
Baptiste Careil
672f0e9599 Add test config to test delayed actions 2024-12-29 10:20:20 +01:00
Baptiste Careil
0b59befc42 Fix delayed action not being executed 2024-12-29 10:19:51 +01:00
ppom
2f2ffdc871 better db logs 2024-12-09 12:00:00 +01:00
ppom
3792660295 restore (relaxed) old entries deletion
Removing the (surely incorrect) old entry deletion prevented any
deletion of entries in the database.

This is now a relaxed version, which removes entries only when it's sure
that they're outdated.

The daemon/filter part is still responsible for more precise
bookkeeping of the in-memory state cleanup.
2024-12-09 12:00:00 +01:00
ppom
777a8ca6fa Better handling of empty databases 2024-12-09 12:00:00 +01:00
Baptiste Careil
a379df8998 simplify handle_child 2024-12-06 11:16:03 +01:00
Baptiste Careil
b143a49942 Ask nicely the stream process to exit on shutdown
Introduces the nix dependency for the signal constants and the kill(1)
function.

The child process is now delegated to a dedicated function handle_child
that ensures it is terminated and reclaimed. On shutdown, this function
first asks the stream process nicely, using SIGTERM, to exit (maybe we'll
want to make that signal configurable, as SIGINT is a good candidate as
well).

After 15s, if the child process still did not exit, it is killed with
SIGKILL. Which is usually enough. But to make sure not to block
reaction's shutdown (which could interfere unintentionally with, for
example, the management of the database), there is another 5s timeout
after which we give up on waiting for the child process since at this
point it's most likely deadlocked in some way at the kernel level.

handle_child now handles the error message about the stream command
early exit. So there is no need for a communication channel between this
function and handle_io which just processes the process' output.
2024-11-22 16:52:25 +01:00
Baptiste Careil
78f03eb643 Restore default features of futures for the select macro 2024-11-19 14:54:53 +00:00
Baptiste Careil
68637e35a7 Fix #110: don't show error message on shutdown
The stream_manager is now the sole owner of the child process handle and
is responsible for cleaning it.

The stream_manager expects a shutdown command from the broadcast channel
it receives as parameter and watches it concurrently with the child process
I/O.

When the shutdown command is received, the child process is killed and the
remaining I/O is processed. Then the child process is reaped. Or it is
killed and reaped after EOF is encountered, should the child process exit
before (or at the same time as) the shutdown command is issued.
2024-11-19 14:54:53 +00:00
ppom
7c3116b7c9 Tests passing but no error has been found
Tests are finally passing, but nothing has been fixed in the real code, so the bug is not found
2024-11-18 12:00:00 +01:00
Baptiste Careil
79302efb27 Fix spurious rebuild of reaction due to invalid file path 2024-11-14 19:22:32 +01:00
ppom
8579e30890 test infrastructure, new conf's state_directory, less deps
reaction's configuration now has a state_directory optional member,
which is where it will save its databases. defaults to cwd.

added a lot of code necessary to properly test databases.
The new tests are currently failing, which is good: they'll make it
possible to hunt down this database consistency bug.

also removed some indirect dependencies via features removal,
and moved test dependencies to dedicated [dev-dependencies]

also small fix on an nft46.c function type and empty conf file
for ccls LSP server.
2024-11-13 12:00:00 +01:00
Baptiste Careil
a3081b0486 Fix #111: Streams now read stderr from started processes 2024-11-11 16:54:35 +01:00
ppom
b747e52e94 Remove unused code and structs 2024-10-31 12:00:00 +01:00
ppom
c66a5aad67 DB stop filtering and sends all matches to filter managers 2024-10-31 12:00:00 +01:00
ppom
db22bc087d fix previous-previous commit for import-rust-db 2024-10-31 12:00:00 +01:00
ppom
79677cf327 Restructure code and document it in ARCHITECTURE.md 2024-10-26 12:00:00 +02:00
ppom
838ad1b18a less owned data, more borrowed data 2024-10-26 12:00:00 +02:00
ppom
d776667c80 release script update 2024-10-24 12:00:00 +02:00
ppom
e39ba05da4 Update project status 2024-10-24 12:00:00 +02:00
ppom
8080bae293 deactivate PatternStatus special formatting 2024-10-24 12:00:00 +02:00
ppom
40fc6e3380 Fix client-daemon protocol & client output 2024-10-24 12:00:00 +02:00
ppom
4b8d6e8168 Fix db migration glue script 2024-10-24 12:00:00 +02:00
ppom
a05e05750c Packaging for Rust 2024-10-24 12:00:00 +02:00
ppom
a80e3764f1 version 2.0.0-rc1 2024-10-24 12:00:00 +02:00
ppom
7deb2b4625 Remove tokio-console 2024-10-24 12:00:00 +02:00
ppom
3dd97523fd Move new rust codebase to root dir 2024-10-24 12:00:00 +02:00
ppom
fea9035f32 Move old go codebase to go.old
And add lil Readme
2024-10-24 12:00:00 +02:00
ppom
b8f037352c Add import & export scripts 2024-10-24 12:00:00 +02:00
ppom
58cf68ba58 Change database signature
- raw signature, not prefixed by length (seems more human-friendly to
  me)
- better error message
2024-10-24 12:00:00 +02:00
ppom
c42487db5c Update dependencies 2024-10-24 12:00:00 +02:00
ppom
aca19fea8f fix all async & clippy issues
- use hashmap instead of btreemap https://github.com/rust-lang/rust/issues/64552
- use async iterators as little as possible
- move first rotate_db into its own thread to be out of the runtime, so
  that we can use blocking_send
- send flush to database
2024-10-24 12:00:00 +02:00
ppom
d7203c792a remove useless unsafe code, format, simplify leak() line 2024-10-24 12:00:00 +02:00
ppom
cf74ebcda6 de-async database tests 2024-10-24 12:00:00 +02:00
ppom
51175f010d WIP Reimplement flush
Still same compiler issue
2024-10-24 12:00:00 +02:00
ppom
607775b8e3 Remove ActionFilter trait
Was useful for the removed trait StateMap.
Not useful anymore.
2024-10-24 12:00:00 +02:00
ppom
2e00092c18 WIP reimplement socket part
I think I'm stumbling on this rust compiler bug: https://github.com/rust-lang/rust/issues/110338
2024-10-24 12:00:00 +02:00
ppom
d2345d6047 Switch back database code to sync
Enormous performance gain!

 INFO tailDown3.find: match ["100"]
ERROR stream tailDown3 exited: its command returned.
 INFO Rotated database

________________________________________________________
Executed in   31.27 secs    fish           external
   usr time   35.24 secs    0.00 micros   35.24 secs
   sys time   27.09 secs  343.00 micros   27.09 secs

TODO reimpl the socket part.
2024-10-24 12:00:00 +02:00
ppom
72887f3af0 big refacto
avoid channels
remove matches_manager and execs_manager
one stream_manager task handles filter_managers and action_managers
2024-10-24 12:00:00 +02:00
ppom
8dbb20efce WIP profiling
performance is very bad
2024-10-24 12:00:00 +02:00
ppom
df7b4291f7 cleaner bool testing 2024-10-24 12:00:00 +02:00
ppom
f7e42ceab5 make sleep tasks listen to the shutdown signal 2024-10-24 12:00:00 +02:00
ppom
0d783218d8 fix execs bug, fix tests with tracing 2024-10-24 12:00:00 +02:00
ppom
18ca9600e9 fix matches and execs not finishing
They can't "naturally" end because they have their own channel whose
Sender lives inside their task and is never dropped
2024-10-24 12:00:00 +02:00
ppom
9549a7b3ec use tracing instead of log 2024-10-24 12:00:00 +02:00
ppom
1e6e67a4b3 split channels, asyncify 2024-10-24 12:00:00 +02:00
ppom
2c5a781036 Make more things async 2024-10-24 12:00:00 +02:00
ppom
088354e955 untested async version 2024-10-24 12:00:00 +02:00
ppom
116e75f81f Move filter logic from database/mod.rs to filter.rs 2024-10-24 12:00:00 +02:00
ppom
5e6331f820 Adapt code for integration tests; add one integration test
The integration test checks:
- general logic
- match/exec persistence
- flush
- flush persistence
2024-10-24 12:00:00 +02:00
ppom
16a083095c Fix bug: forgot to compare match's time to now 2024-10-24 12:00:00 +02:00
ppom
70a367c189 create lib.rs, fmt, clippy 2024-10-24 12:00:00 +02:00
ppom
b2c85a5d39 improve error message 2024-10-24 12:00:00 +02:00
ppom
85ba7c8152 implement test-regex 2024-10-24 12:00:00 +02:00
ppom
ce0c34de8e Socket communication! 2024-10-24 12:00:00 +02:00
ppom
6170f2dcd2 WIP impl of socket, daemon-side 2024-10-24 12:00:00 +02:00
ppom
9cc702e9c7 Replace postcard by bincode
- The interface is easier
- This permits me to have unbounded buffers.

I did some benchmarking on both options.
I can't see any difference in terms of
- CPU performance,
- memory usage, and
- storage size.

I used this file: `datasize.jsonnet`
```jsonnet
{
  patterns: {
    num: {
      regex: @'([0-9]+)',
    },
  },
  streams: {
    s1: {
      cmd: ['seq', '-w', '499999'],
      filters: {
        f1: {
          regex: [
            '^<num>$',
          ],
          retry: 10,
          retryperiod: '1m',
          actions: {
            a: {
              cmd: ['true'],
            },
            b: {
              cmd: ['true'],
              after: '1m',
            },
          },
        },
      },
    },
  },
}
```
And these commands:

```
rm reaction-*
sudo systemd-run --wait -p User=ao -p MemoryAccounting=yes -p WorkingDirectory=(pwd) -p Environment=PATH=/run/current-system/sw/bin/ time ./target/release/reaction start -c datasize.jsonnet && ls -l reaction-matches.db
sudo systemd-run --wait -p User=ao -p MemoryAccounting=yes -p WorkingDirectory=(pwd) -p Environment=PATH=/run/current-system/sw/bin/ time ./target/release/reaction start -c datasize.jsonnet && ls -l reaction-matches.db
sudo systemd-run --wait -p User=ao -p MemoryAccounting=yes -p WorkingDirectory=(pwd) -p Environment=PATH=/run/current-system/sw/bin/ time ./target/release/reaction start -c datasize.jsonnet && ls -l reaction-matches.db
```
At the first invocation, reaction reads no DB.
At the second invocation, reaction reads a DB.
At the third invocation, reaction reads a double-sized DB.
2024-10-24 12:00:00 +02:00
ppom
807b3c7440 source tree organization + proper unwrap warns and explanations 2024-10-24 12:00:00 +02:00
ppom
d30d03bae8 Box::leak global Config into a &'static ref instead of Arcs everywhere 2024-10-24 12:00:00 +02:00
ppom
feb863670e fix short option name clash 2024-10-24 12:00:00 +02:00
ppom
6b52b03025 fix database deserializing and use BufWriter
Deserializing was failing because Filter::Deserialize and
Filter::Serialize had different attributes.
Now directly de·serializing (String, String) from/to database

Using BufWriter greatly increases performance.
Adding flushes where necessary.
2024-10-24 12:00:00 +02:00
ppom
0bac011ab1 Split database lowlevel code; sync_channel(1) for better perf; flush db 2024-10-24 12:00:00 +02:00
ppom
7adf8d908d Fix DB implementation 2024-10-24 12:00:00 +02:00
ppom
a772b8347d Optimization by changing ownership model
Avoid useless indirections with FilterName and ActionName
Use Arc<Filter> and Arc<Action> instead
2024-10-24 12:00:00 +02:00
ppom
81616fb1d9 rename config_from_file → Config::from_file 2024-10-24 12:00:00 +02:00
ppom
cd206d77a7 Use thiserror macros 2024-10-24 12:00:00 +02:00
ppom
0fb870f5be Fix time ambiguity bug 2024-10-24 12:00:00 +02:00
ppom
ed77120aa0 database integrated to daemon code 2024-10-24 12:00:00 +02:00
ppom
544af2283d Remove anyhow crate 2024-10-24 12:00:00 +02:00
ppom
69d2436847 database tests; better error handling (avoid unwraping) 2024-10-24 12:00:00 +02:00
ppom
4b2f760e12 first draft of database
untested for now
2024-10-24 12:00:00 +02:00
ppom
a55ab5d8d8 fix: run on_exit actions on exit 2024-10-24 12:00:00 +02:00
ppom
ca89d5e61c bug fixing
- correctly quitting all threads
  I had to add a Stop variant of manager's enums because they hold a
  Sender of their own Receiver. So streams dropping their Sender don't
  close the channel. The Stop is sent to matches_manager when streams
  have quit, and sent to execs_manager when matches_manager has quit.
- fix ThreadPool panic when channel is closed
- fix filter not adding after_duration to MAT messages
- better logging of execs_manager
2024-10-24 12:00:00 +02:00
ppom
bc755efc0c implement exec logic
- switch from std::time to chrono (mainly because of timer crate)
- move cli, logger and parse_duration to utils/ folder
- implement threadpool util (from the rust book)
- MatchesMap: move order: first FilterName, then Match
- implement execs_manager thread responsible for executing actions
2024-10-24 12:00:00 +02:00
ppom
cfadfe9ec5 action, pattern: remove pub fields 2024-10-24 12:00:00 +02:00
ppom
77be68c3a4 matches_manager unmatch logic
timer implementation
2024-10-24 12:00:00 +02:00
ppom
b7c56881a9 use directives normalization 2024-10-24 12:00:00 +02:00
ppom
bf3bd6e42f matches_manager logic
still has to implement timers
2024-10-24 12:00:00 +02:00
ppom
c077767f9a more clippy 2024-10-24 12:00:00 +02:00
ppom
25cce7522b fix newline kept in stream read_line() 2024-10-24 12:00:00 +02:00
ppom
7880e1d87e cargo clippy, panic custom message 2024-10-24 12:00:00 +02:00
ppom
eaac4fc341 match & ignore logic (stream/filter/pattern) + tests 2024-10-24 12:00:00 +02:00
ppom
8abdb7b74b streams, signals, quit
- launching streams
- handling termination signals by killing stream cmds
- waiting for stream threads to quit
- executing stop commands
2024-10-24 12:00:00 +02:00
ppom
f8b09faa9a iteration on daemon
- handle interruption signals
- config in Arc
2024-10-24 12:00:00 +02:00
ppom
efb543355a implement start/stop commands 2024-10-24 12:00:00 +02:00
ppom
9d6f367a13 optimise Config size 2024-10-24 12:00:00 +02:00
ppom
4aece412e8 cargo fmt 2024-10-24 12:00:00 +02:00
ppom
2b1ee51f97 use log package 2024-10-24 12:00:00 +02:00
ppom
3d491feec3 more tests on filter.setup() 2024-10-24 12:00:00 +02:00
ppom
d583df21a2 add configuration setup tests 2024-10-24 12:00:00 +02:00
ppom
3ce3b5ea7a refacto: move struct to their own module files 2024-10-24 12:00:00 +02:00
ppom
f298f285a3 optional concurrency falling back to default 2024-10-24 12:00:00 +02:00
ppom
217c75abd5 jsonnet support 2024-10-24 12:00:00 +02:00
ppom
8f132df4ac yaml support 2024-10-24 12:00:00 +02:00
ppom
9ed7602952 Complete config setup. Tests for duration parsing.
TODO: Tests for config setup.
2024-10-24 12:00:00 +02:00
ppom
582ba571dc WIP configuration setup 2024-10-24 12:00:00 +02:00
ppom
b39868b228 initial rust commit 2024-10-24 12:00:00 +02:00
ppom
fba60f36f0 Update on project rewrite status 2024-10-12 12:00:00 +02:00
ppom
51669a87e3 Emphasis on the wiki. 2024-06-26 12:00:00 +02:00
ppom
6d1aaabbb7 Recommend reading more before starting 2024-06-24 12:00:00 +02:00
ppom
1c45104676 typo fix in YAML example 2024-06-24 12:00:00 +02:00
ppom
77974dd938 Switch to hardcoded version value
fix #77 #95
2024-06-24 12:00:00 +02:00
ppom
ce2de3cf93 new release 2024-06-08 12:00:00 +02:00
ppom
92982ee511 Fix #70 2024-06-08 12:00:00 +02:00
ppom
0fb3fb6267 typo fix 2024-06-08 12:00:00 +02:00
ppom
f26c5f35d4 readme nixos 2024-06-08 12:00:00 +02:00
ppom
520272637f readme update: reaction*.deb / nixos 2024-06-08 12:00:00 +02:00
ppom
014a53fe94 debian changelog update 2024-05-29 12:00:00 +02:00
147 changed files with 23691 additions and 3616 deletions

0
.ccls Normal file

1
.envrc Normal file

@@ -0,0 +1 @@
use_nix

12
.gitignore vendored

@@ -1,13 +1,15 @@
/reaction
/ip46tables
/nft46
/reaction*.db
reaction.db
reaction.db.old
/data
/reaction*.sock
/result
/wiki
/deb
*.deb
*.minisig
*.qcow2
debian-packaging/*
*.swp
/target
/local
.ccls-cache
.direnv

.gitlab-ci.yml

@@ -1,15 +0,0 @@
---
image: golang:1.20-bookworm
stages:
- build
variables:
DEBIAN_FRONTEND: noninteractive
test_building:
stage: build
before_script:
- apt-get -qq -y update
- apt-get -qq -y install build-essential devscripts debhelper quilt wget
script:
- make reaction ip46tables nft46

81
ARCHITECTURE.md Normal file

@ -0,0 +1,81 @@
# Architecture
Here is a high-level overview of the codebase.
*Don't hesitate to create an issue or a merge request if something is unclear, missing or outdated.*
## Build
- `bench/`: Configuration that spawns a very high load on reaction. Useful to test performance improvements and regressions.
- `build.rs`: generates shell completions and man pages at build time.
- `Cargo.toml`, `Cargo.lock`: manifest and dependencies.
- `config/`: example / test configuration files. Look at its git history to discover more.
- `Makefile`: collects the useful commands in one place.
- `packaging/`: Files useful for .deb and .tar generation.
- `release.py`: Build process for a release. Handles cross-compilation, .tar and .deb generation.
## Main source code
- `tests/`: Integration tests. They test reaction's runtime behavior, persistence, client-daemon communication and plugin integrations.
- `src/`: The source code, here we go!
### Top-level files
- `src/main.rs`: Main entrypoint
- `src/lib.rs`: Second main entrypoint
- `src/cli.rs`: Command-line arguments
- `src/tests.rs`: Test utilities
- `src/protocol.rs`: de/serialization and client/daemon protocol messages.
### `src/concepts/`
reaction really is about its configuration, which is at the center of the code.
There is one file for each of its concepts: configuration, streams, filters, actions, patterns, plugins.
### `src/client/`
Client code: `reaction show`, `reaction flush`, `reaction trigger`, `reaction test-regex`, `reaction test-config`.
- `request.rs`: commands requiring client/server communication: `show`, `flush` & `trigger`.
- `test_config.rs`: `test-config` command.
- `test_regex.rs`: `test-regex` command.
### `src/daemon`
Daemon runtime structures and logic.
This code is async: it handles input streams and communication with clients using the tokio runtime.
- `mod.rs`: daemon main function. Initializes all tasks, handles synchronization and quitting, etc.
- `stream.rs`: Stream managers: start the stream `cmd` and dispatch its stdout lines to its Filter managers.
- `filter/`: Filter managers: handle lines and persistence, store matches and trigger actions. This is the main piece of runtime logic.
- `mod.rs`: High-level logic
- `state.rs`: Inner state operations
- `socket.rs`: The socket task, responsible for communication with clients.
- `plugin.rs`: Plugin startup, configuration loading and cleanup.
### `crates/treedb`
Persistence layer.
This is a database highly adapted to reaction's workload, making reaction faster than it is with general-purpose key-value databases
(the heed, sled and fjall crates have been tested).
Its design is explained in the comments of its files:
- `lib.rs`: main database code, with its two API structs: Tree and Database.
- `raw.rs`: low-level part, directly interacting with de/serialization and files.
- `time.rs`: time definitions shared with reaction.
- `helpers.rs`: utilities to ease db deserialization from disk.
### `plugins/reaction-plugin`
Shared plugin interface between the reaction daemon and its plugins.
Also defines some shared logic between them:
- `shutdown.rs`: Logic for passing shutdown signal across all tasks
- `parse_duration.rs`: Duration parsing
### `plugins/reaction-plugin-*`
All core plugins.

4985
Cargo.lock generated Normal file

File diff suppressed because it is too large

105
Cargo.toml Normal file

@ -0,0 +1,105 @@
[package]
name = "reaction"
version = "2.3.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
description = "Scan logs and take action"
readme = "README.md"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "fail2ban", "logs", "monitoring"]
build = "build.rs"
default-run = "reaction"
[package.metadata.deb]
section = "net"
extended-description = """A daemon that scans program outputs for repeated patterns, and takes action.
A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.
reaction aims at being a successor to fail2ban."""
maintainer-scripts = "packaging/"
systemd-units = { enable = false }
assets = [
# Executables
[ "target/release/reaction", "/usr/bin/reaction", "755" ],
[ "target/release/reaction-plugin-virtual", "/usr/bin/reaction-plugin-virtual", "755" ],
# Man pages
[ "target/release/reaction*.1", "/usr/share/man/man1/", "644" ],
# Shell completions
[ "target/release/reaction.bash", "/usr/share/bash-completion/completions/reaction", "644" ],
[ "target/release/reaction.fish", "/usr/share/fish/completions/", "644" ],
[ "target/release/_reaction", "/usr/share/zsh/vendor-completions/", "644" ],
# Slice
[ "packaging/system-reaction.slice", "/usr/lib/systemd/system/", "644" ],
]
[dependencies]
# Time types
chrono.workspace = true
# CLI parsing
clap = { version = "4.5.4", features = ["derive"] }
# Unix interfaces
nix = { version = "0.29.0", features = ["signal"] }
num_cpus = "1.16.0"
# Regex matching
regex = "1.10.4"
# Configuration languages, ser/deserialisation
serde.workspace = true
serde_json.workspace = true
serde_yaml = "0.9.34"
jrsonnet-evaluator = "0.4.2"
# Error macro
thiserror.workspace = true
# Async runtime & helpers
futures = { workspace = true }
tokio = { workspace = true, features = ["full", "tracing"] }
tokio-util = { workspace = true, features = ["codec"] }
# Async logging
tracing.workspace = true
tracing-subscriber = "0.3.18"
# Database
treedb.workspace = true
# Reaction plugin system
remoc.workspace = true
reaction-plugin.workspace = true
[build-dependencies]
clap = { version = "4.5.4", features = ["derive"] }
clap_complete = "4.5.2"
clap_mangen = "0.2.24"
regex = "1.10.4"
tracing = "0.1.40"
[dev-dependencies]
rand = "0.8.5"
treedb.workspace = true
treedb.features = ["test"]
tempfile.workspace = true
assert_fs.workspace = true
assert_cmd = "2.0.17"
predicates = "3.1.3"
[workspace]
members = [
"crates/treedb",
"plugins/reaction-plugin",
"plugins/reaction-plugin-cluster",
"plugins/reaction-plugin-ipset",
"plugins/reaction-plugin-nftables",
"plugins/reaction-plugin-virtual"
]
[workspace.dependencies]
assert_fs = "1.1.3"
chrono = { version = "0.4.38", features = ["std", "clock", "serde"] }
futures = "0.3.30"
remoc = { version = "0.18.3" }
serde = { version = "1.0.203", features = ["derive"] }
serde_json = { version = "1.0.117", features = ["arbitrary_precision"] }
tempfile = "3.12.0"
thiserror = "1.0.63"
tokio = { version = "1.40.0" }
tokio-util = { version = "0.7.12" }
tracing = "0.1.40"
reaction-plugin = { path = "plugins/reaction-plugin" }
treedb = { path = "crates/treedb" }

11
Dockerfile Normal file

@ -0,0 +1,11 @@
# This Dockerfile builds reaction and its plugins
# Use debian old-stable, so that it runs on both old-stable and stable
FROM rust:bookworm
RUN apt update && apt install -y \
clang \
libipset-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /reaction


@ -1,51 +1,23 @@
CC ?= gcc
PREFIX ?= /usr/local
BINDIR = $(PREFIX)/bin
MANDIR = $(PREFIX)/share/man/man1
SYSTEMDDIR ?= /etc/systemd
all: reaction ip46tables nft46
all: reaction
clean:
rm -f reaction ip46tables nft46 reaction*.deb reaction.minisig ip46tables.minisig nft46.minisig reaction*.deb.minisig
rm -rf debian-packaging
cargo clean
ip46tables: helpers_c/ip46tables.c
$(CC) -s -static helpers_c/ip46tables.c -o ip46tables
reaction:
cargo build --release
nft46: helpers_c/nft46.c
$(CC) -s -static helpers_c/nft46.c -o nft46
reaction: app/* reaction.go go.mod go.sum
CGO_ENABLED=0 go build -buildvcs=false -ldflags "-s -X main.version=`git tag --sort=v:refname | tail -n1` -X main.commit=`git rev-parse --short HEAD`"
reaction_%-1_amd64.deb:
apt-get -qq -y update
apt-get -qq -y install build-essential devscripts debhelper quilt wget
if [ -e debian-packaging ]; then rm -rf debian-packaging; fi
mkdir debian-packaging
wget "https://framagit.org/ppom/reaction/-/archive/v${*}/reaction-v${*}.tar.gz" -O "debian-packaging/reaction_${*}.orig.tar.gz"
cd debian-packaging && tar xf "reaction_${*}.orig.tar.gz"
cp -r debian "debian-packaging/reaction-v${*}"
if [ -e "debian/changelog" ]; then \
cd "debian-packaging/reaction-v${*}" && \
DEBFULLNAME=ppom DEBEMAIL=reaction@ppom.me dch --package reaction --newversion "${*}-1" "New upstream release."; \
else \
cd "debian-packaging/reaction-v${*}" && \
DEBFULLNAME=ppom DEBEMAIL=reaction@ppom.me dch --create --package reaction --newversion "${*}-1" "Initial release."; \
fi
cd "debian-packaging/reaction-v${*}" && DEBFULLNAME=ppom DEBEMAIL=reaction@ppom.me dch --release --distribution stable --urgency low ""
cd "debian-packaging/reaction-v${*}" && debuild --prepend-path=/go/bin:/usr/local/go/bin -us -uc
cp "debian-packaging/reaction-v${*}/debian/changelog" debian/
cp "debian-packaging/reaction_${*}-1_amd64.deb" .
signatures_%: reaction_%-1_amd64.deb reaction ip46tables nft46
minisign -Sm nft46 ip46tables reaction reaction_${*}-1_amd64.deb
install: all
install -m755 reaction $(DESTDIR)$(BINDIR)
install -m755 ip46tables $(DESTDIR)$(BINDIR)
install -m755 nft46 $(DESTDIR)$(BINDIR)
install: reaction
install -m755 target/release/reaction $(DESTDIR)$(BINDIR)
install_systemd: install
install -m644 config/reaction.example.service $(SYSTEMDDIR)/system/reaction.service
sed -i 's#/usr/bin#$(DESTDIR)$(BINDIR)#' $(SYSTEMDDIR)/system/reaction.service
install -m644 packaging/reaction.service $(SYSTEMDDIR)/system/reaction.service
sed -i 's#/usr/local/bin#$(DESTDIR)$(BINDIR)#' $(SYSTEMDDIR)/system/reaction.service
release:
nix-shell release.py

185
README.md

@ -4,32 +4,40 @@ A daemon that scans program outputs for repeated patterns, and takes action.
A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.
🚧 This program hasn't received external audit. however, it already works well on my servers 🚧
🚧 This program hasn't received an external security audit yet. However, it already works well on many servers 🚧
## Rationale
I was using the honorable fail2ban since quite a long time, but i was a bit frustrated by its cpu consumption
I had been using the honorable fail2ban for quite a long time, but I was a bit frustrated by its CPU consumption
and all its heavy default configuration.
In my view, a security-oriented program should be simple to configure
and an always-running daemon should be implemented in a fast*er* language.
reaction does not have all the features of the honorable fail2ban, but it's ~10x faster and has more manageable configuration.
reaction does not have all the features of the honorable fail2ban, but it's more than 10x faster and has a more manageable configuration.
[📽️ quick french name explanation 😉](https://u.ppom.me/reaction.webm)
[🇬🇧 in-depth blog article](https://blog.ppom.me/en-reaction)
/ [🇫🇷 french version](https://blog.ppom.me/fr-reaction)
## Rust rewrite
reaction v2.x is a complete Rust rewrite of reaction.
It's at feature parity with the Go version, v1.x, which is now deprecated.
See https://blog.ppom.me/en-reaction-v2.
## Configuration
YAML and [JSONnet](https://jsonnet.org/) (more powerful) are supported.
Both are extensions of JSON, so JSON is transitively supported.
- See [reaction.yml](./app/example.yml) or [reaction.jsonnet](./config/example.jsonnet) for a fully explained reference
- See [server.jsonnet](./config/server.jsonnet) for a real-world configuration
- See [reaction.example.service](./config/reaction.example.service) for a systemd service file
- This quick example shows what's needed to prevent brute force attacks on an ssh server:
- See [reaction.yml](./config/example.yml) or [reaction.jsonnet](./config/example.jsonnet) for a fully explained reference (ipv4 + ipv6)
- See the [wiki](https://reaction.ppom.me) for multiple examples, security recommendations and FAQ.
- See [server.jsonnet](https://reaction.ppom.me/configurations/ppom/server.jsonnet.html) for a real-world configuration
- See [reaction.service](./config/reaction.service) for a systemd service file
- This minimal example (ipv4 only) shows what's needed to prevent brute force attacks on an ssh server (please read at least the [Security](https://reaction.ppom.me/security.html) part of the wiki before starting 🆙):
<details open>
@ -37,18 +45,19 @@ both are extensions of JSON, so JSON is transitively supported.
```yaml
patterns:
ip: '(([ 0-9 ]{1,3}\.){3}[0-9]{1,3})|([0-9a-fA-F:]{2,90})'
ip:
type: ipv4
start:
- [ 'ip46tables', '-w', '-N', 'reaction' ]
- [ 'ip46tables', '-w', '-A', 'reaction', '-j', 'ACCEPT' ]
- [ 'ip46tables', '-w', '-I', 'reaction', '1', '-s', '127.0.0.1', '-j', 'ACCEPT' ]
- [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-N', 'reaction' ]
- [ 'iptables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
stop:
- [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-F', 'reaction' ]
- [ 'ip46tables', '-w', '-X', 'reaction' ]
- [ 'iptables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-F', 'reaction' ]
- [ 'iptables', '-w', '-X', 'reaction' ]
streams:
ssh:
@ -57,13 +66,16 @@ streams:
failedlogin:
regex:
- 'authentication failure;.*rhost=<ip>'
- 'Failed password for .* from <ip>'
- 'Invalid user .* from <ip>'
- 'banner exchange: Connection from <ip> port [0-9]*: invalid format'
retry: 3
retryperiod: '6h'
actions:
ban:
cmd: [ 'ip46tables', '-w', '-I', 'reaction', '1', '-s', '<ip>', '-j', 'block' ]
cmd: [ 'iptables', '-w', '-I', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
unban:
cmd: [ 'ip46tables', '-w', '-D', 'reaction', '1', '-s', '<ip>', '-j', 'block' ]
cmd: [ 'iptables', '-w', '-D', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
after: '48h'
```
@ -74,39 +86,44 @@ streams:
<summary><code>/etc/reaction.jsonnet</code></summary>
```jsonnet
local iptables(args) = [ 'ip46tables', '-w' ] + args;
local banFor(time) = {
ban: {
cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'reaction-log-refuse']),
cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
after: time,
cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'reaction-log-refuse']),
},
};
{
patterns: {
ip: {
regex: @'(?:(?:[ 0-9 ]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
type: 'ipv4',
},
},
start: [
iptables([ '-N', 'reaction' ]),
iptables([ '-A', 'reaction', '-j', 'ACCEPT' ]),
iptables([ '-I', 'reaction', '1', '-s', '127.0.0.1', '-j', 'ACCEPT' ]),
iptables([ '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]),
['iptables', '-N', 'reaction'],
['iptables', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
['iptables', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction'],
],
stop: [
iptables([ '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]),
iptables([ '-F', 'reaction' ]),
iptables([ '-X', 'reaction' ]),
['iptables', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
['iptables', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction'],
['iptables', '-F', 'reaction'],
['iptables', '-X', 'reaction'],
],
streams: {
ssh: {
cmd: [ 'journalctl', '-fu', 'sshd.service' ],
cmd: ['journalctl', '-fu', 'sshd.service'],
filters: {
failedlogin: {
regex: [ @'authentication failure;.*rhost=<ip>' ],
regex: [
@'authentication failure;.*rhost=<ip>',
@'Failed password for .* from <ip>',
@'banner exchange: Connection from <ip> port [0-9]*: invalid format',
@'Invalid user .* from <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
@ -119,30 +136,35 @@ local banFor(time) = {
</details>
> It is recommended to setup reaction with [`nftables`](https://reaction.ppom.me/actions/nftables.html)
> or [`ipset` + `iptables`](https://reaction.ppom.me/actions/ipset.html), which are much more performant
> solutions than `iptables` alone.
### Database
The embedded database is stored in the working directory.
The embedded database is stored in the working directory (but can be overridden by the `state_directory` config option).
If you don't know where to start reaction, `/var/lib/reaction` should be a sane choice.
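For systemd setups, the working-directory advice above translates directly into the service unit. A minimal sketch (all values here are illustrative assumptions; the shipped `config/reaction.service` is the actual reference, and the config path is an example):

```systemd
[Unit]
Description=reaction daemon
After=network.target

[Service]
# The embedded database lands in the working directory
WorkingDirectory=/var/lib/reaction
# Config path is an example; YAML and JSON also work
ExecStart=/usr/local/bin/reaction start -c /etc/reaction.jsonnet

[Install]
WantedBy=multi-user.target
```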
### CLI
- `reaction start` runs the server
- `reaction show` show pending actions (ie. bans)
- `reaction show` shows pending actions (ie. current bans)
- `reaction flush` runs pending actions ahead of time (ie. clears bans)
- `reaction trigger` manually triggers a filter (ie. runs a custom ban)
- `reaction test-regex` tests regexes against the configuration
- `reaction test-config` shows the loaded configuration
- `reaction help` for full usage.
### `ip46tables`
### Old binaries
`ip46tables` is a minimal c program present in its own subdirectory with only standard posix dependencies.
It permits to configure `iptables` and `ip6tables` at the same time.
It will execute `iptables` when detecting ipv4, `ip6tables` when detecting ipv6 and both if no ip address is present on the command line.
`ip46tables` and `nft46` binaries are no longer part of reaction. If you really need them, see
[the last commit that included them](https://framagit.org/ppom/reaction/-/tree/b7d997ca5e9a69c8572bb2ec9d27d0eb03b3cb9f/helpers_c).
## Wiki
You'll find more ressources, service configurations, etc. on the [Wiki](https://reaction.ppom.me)!
You'll find more resources, service configurations, etc. on [the wiki](https://reaction.ppom.me)!
We recommend that you read the ***Good Practices*** chapters before starting.
## Installation
@ -150,53 +172,23 @@ You'll find more ressources, service configurations, etc. on the [Wiki](https://
### Binaries
Executables are provided [here](https://framagit.org/ppom/reaction/-/releases/), for a standard x86-64 linux machine.
Executables and .deb packages are provided [in the releases page](https://framagit.org/ppom/reaction/-/releases/), for x86-64/amd64 linux and aarch64/arm64 linux.
A standard place to put such executables is `/usr/local/bin/`.
Signature verification and installation instructions are provided in the releases page.
> Provided binaries in the previous section are compiled this way:
```shell
$ docker run -it --rm -e HOME=/tmp/ -v $(pwd):/tmp/code -w /tmp/code -u $(id -u) golang:1.20 make clean reaction.deb
$ make signaturese
```
#### Signature verification
Starting at v1.0.3, all binaries are signed with public key `RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX`. You can check their authenticity with minisign:
```bash
minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m nft46
minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m ip46tables
minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m reaction
# or
minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m reaction.deb
```
#### Debian
The releases also contain a `reaction.deb` file, which packages reaction & ip46tables.
You can install it using `sudo apt install ./reaction.deb`.
You'll have to create a configuration at `/etc/reaction.jsonnet`.
If you want to use another configuration format (YAML or JSON), you can override systemd's `ExecStart` command in `/etc/systemd/system/reaction.service` like this:
```systemd
[Service]
# First an empty directive to reset the default one
ExecStart=
# Then put what you want
ExecStart=/usr/bin/reaction start -c /etc/reaction.yml
```
> Provided binaries are compiled by running `nix-shell release.py` on a NixOS machine with docker installed.
#### NixOS
- [ package ](https://framagit.org/ppom/nixos/-/blob/main/pkgs/reaction/default.nix)
- [ module ](https://framagit.org/ppom/nixos/-/blob/main/modules/common/reaction.nix)
reaction is packaged, but the [**module**](https://framagit.org/ppom/nixos/-/blob/main/modules/common/reaction.nix) has not yet been upstreamed.
#### OpenBSD
[wiki](https://reaction.ppom.me/configs/openbsd.html)
See the [wiki](https://reaction.ppom.me/configurations/OpenBSD.html).
### Compilation
You'll need the go (>= 1.20) toolchain for reaction and a c compiler for ip46tables.
You'll need a recent Rust toolchain for reaction, and a C compiler for the plugins.
```shell
$ make
```
@ -214,9 +206,46 @@ To install the systemd file as well
make install_systemd
```
## Development
## Contributing
Contributions are welcome. For any substantial feature, please file an issue first, to be assured that we agree on the feature, and to avoid unnecessary work.
> We, as participants in the open source ecosystem, are ethically responsible for the software
> and hardware we help create - as it can be used to perpetuate inequalities or help empower
> marginalized communities, and fight against patriarchy, capitalism, sexism, gender violence,
> racism, ableism, homophobia, colonialism, fascism, surveillance, and oppressive control.
This is a free time project, so I'm not working on schedule.
However, if you're willing to fund the project, I can priorise and plan paid work. This includes features, documentation and specific JSONnet configurations.
- [NGI's Diversity and Inclusion Guide](https://nlnet.nl/NGI0/bestpractices/DiversityAndInclusionGuide-v4.pdf)
I'll do my best to maintain a safe contribution place, as free as possible from discrimination and elitism.
### Ideas
Please take a look at issues which have the "Opinion Welcome 👀" label!
*Your opinion is welcome.*
Your ideas are welcome in the issues.
### Code
Contributions are welcome.
For any substantial feature, please file an issue first, to be assured that we agree on the feature, and to avoid unnecessary work.
I recommend reading [`ARCHITECTURE.md`](ARCHITECTURE.md) first. It's a quick tour of the codebase, which should save new contributors some time.
You can also join this Matrix development room: [#reaction-dev-en:club1.fr](https://matrix.to/#/#reaction-dev-en:club1.fr).
French version: [#reaction-dev-fr:club1.fr](https://matrix.to/#/#reaction-dev-fr:club1.fr).
## Help
You can ask for help in the issues or in this Matrix room: [#reaction-users-en:club1.fr](https://matrix.to/#/#reaction-users-en:club1.fr).
French version: [#reaction-users-fr:club1.fr](https://matrix.to/#/#reaction-users-fr:club1.fr).
You can alternatively send a mail: `reaction` on domain `ppom.me`.
## Funding
<!-- This is a free time project, so I'm not working on schedule.
However, if you're willing to fund the project, I can priorise and plan paid work. This includes features, documentation and specific JSONnet configurations. -->
This project is currently funded through the [NGI0 Core](https://nlnet.nl/core) Fund, a fund established by [NLnet](https://nlnet.nl) with financial support from the European Commission's [Next Generation Internet](https://ngi.eu) programme.
![NLnet logo](logo/nlnet.svg)

3
TODO Normal file

@ -0,0 +1,3 @@
Test what happens when a Filter's pattern Set changes (I think it's shitty)
DB: add tests on stress testing (lines should always be in order)
conf: merge filters


@ -1,393 +0,0 @@
package app
import (
"bufio"
"encoding/gob"
"encoding/json"
"fmt"
"net"
"os"
"regexp"
"slices"
"strings"
"time"
"framagit.org/ppom/reaction/logger"
"sigs.k8s.io/yaml"
)
const (
Info = 0
Flush = 1
)
type Request struct {
Request int
Flush PSF
}
type Response struct {
Err error
// Config Conf
Matches MatchesMap
Actions ActionsMap
}
func SendAndRetrieve(data Request) Response {
conn, err := net.Dial("unix", *SocketPath)
if err != nil {
logger.Fatalln("Error opening connection to daemon:", err)
}
defer conn.Close()
err = gob.NewEncoder(conn).Encode(data)
if err != nil {
logger.Fatalln("Can't send message:", err)
}
var response Response
err = gob.NewDecoder(conn).Decode(&response)
if err != nil {
logger.Fatalln("Invalid answer from daemon:", err)
}
return response
}
type PatternStatus struct {
Matches int `json:"matches,omitempty"`
Actions map[string][]string `json:"actions,omitempty"`
}
type MapPatternStatus map[Match]*PatternStatus
type MapPatternStatusFlush MapPatternStatus
type ClientStatus map[string]map[string]MapPatternStatus
type ClientStatusFlush ClientStatus
func (mps MapPatternStatusFlush) MarshalJSON() ([]byte, error) {
for _, v := range mps {
return json.Marshal(v)
}
return []byte(""), nil
}
func (csf ClientStatusFlush) MarshalJSON() ([]byte, error) {
ret := make(map[string]map[string]MapPatternStatusFlush)
for k, v := range csf {
ret[k] = make(map[string]MapPatternStatusFlush)
for kk, vv := range v {
ret[k][kk] = MapPatternStatusFlush(vv)
}
}
return json.Marshal(ret)
}
func pfMatches(streamName string, filterName string, regexes map[string]*regexp.Regexp, match Match, filter *Filter) bool {
// Check stream and filter match
if streamName != "" && streamName != filter.Stream.Name {
return false
}
if filterName != "" && filterName != filter.Name {
return false
}
// Check that all user requested patterns are in this filter
var nbMatched int
var localMatches = match.Split()
// For each pattern of this filter
for i, pattern := range filter.Pattern {
// Check that this pattern has user requested name
if reg, ok := regexes[pattern.Name]; ok {
// Check that the PF.p[i] matches user requested pattern
if reg.MatchString(localMatches[i]) {
nbMatched++
}
}
}
if len(regexes) != nbMatched {
return false
}
// All checks passed
return true
}
func addMatchToCS(cs ClientStatus, pf PF, times map[time.Time]struct{}) {
patterns, streamName, filterName := pf.P, pf.F.Stream.Name, pf.F.Name
if cs[streamName] == nil {
cs[streamName] = make(map[string]MapPatternStatus)
}
if cs[streamName][filterName] == nil {
cs[streamName][filterName] = make(MapPatternStatus)
}
cs[streamName][filterName][patterns] = &PatternStatus{len(times), nil}
}
func addActionToCS(cs ClientStatus, pa PA, times map[time.Time]struct{}) {
patterns, streamName, filterName, actionName := pa.P, pa.A.Filter.Stream.Name, pa.A.Filter.Name, pa.A.Name
if cs[streamName] == nil {
cs[streamName] = make(map[string]MapPatternStatus)
}
if cs[streamName][filterName] == nil {
cs[streamName][filterName] = make(MapPatternStatus)
}
if cs[streamName][filterName][patterns] == nil {
cs[streamName][filterName][patterns] = new(PatternStatus)
}
ps := cs[streamName][filterName][patterns]
if ps.Actions == nil {
ps.Actions = make(map[string][]string)
}
for then := range times {
ps.Actions[actionName] = append(ps.Actions[actionName], then.Format(time.DateTime))
}
}
func printClientStatus(cs ClientStatus, format string) {
var text []byte
var err error
if format == "json" {
text, err = json.MarshalIndent(cs, "", " ")
} else {
text, err = yaml.Marshal(cs)
}
if err != nil {
logger.Fatalln("Failed to convert daemon binary response to text format:", err)
}
fmt.Println(strings.ReplaceAll(string(text), "\\0", " "))
}
func compileKVPatterns(kvpatterns []string) map[string]*regexp.Regexp {
var regexes map[string]*regexp.Regexp
regexes = make(map[string]*regexp.Regexp)
for _, p := range kvpatterns {
// p syntax already checked in Main
key, value, found := strings.Cut(p, "=")
if !found {
logger.Printf(logger.ERROR, "Bad argument: no `=` in %v", p)
logger.Fatalln("Patterns must be prefixed by their name (e.g. ip=1.1.1.1)")
}
if regexes[key] != nil {
logger.Fatalf("Bad argument: same pattern name provided multiple times: %v", key)
}
compiled, err := regexp.Compile(fmt.Sprintf("^%v$", value))
if err != nil {
logger.Fatalf("Bad argument: Could not compile: `%v`: %v", value, err)
}
regexes[key] = compiled
}
return regexes
}
func ClientShow(format, stream, filter string, kvpatterns []string) {
response := SendAndRetrieve(Request{Info, PSF{}})
if response.Err != nil {
logger.Fatalln("Received error from daemon:", response.Err)
}
cs := make(ClientStatus)
var regexes map[string]*regexp.Regexp
if len(kvpatterns) != 0 {
regexes = compileKVPatterns(kvpatterns)
}
var found bool
// Painful data manipulation
for pf, times := range response.Matches {
// Check this PF is not empty
if len(times) == 0 {
continue
}
if !pfMatches(stream, filter, regexes, pf.P, pf.F) {
continue
}
addMatchToCS(cs, pf, times)
found = true
}
// Painful data manipulation
for pa, times := range response.Actions {
// Check this PF is not empty
if len(times) == 0 {
continue
}
if !pfMatches(stream, filter, regexes, pa.P, pa.A.Filter) {
continue
}
addActionToCS(cs, pa, times)
found = true
}
if !found {
logger.Println(logger.WARN, "No matching stream.filter items found. This does not mean it doesn't exist, maybe it just didn't receive any match.")
os.Exit(1)
}
printClientStatus(cs, format)
os.Exit(0)
}
// TODO : Show values we just flushed - for now we got no details :
/*
* % ./reaction flush -l ssh.failedlogin login=".*t"
* ssh:
* failedlogin:
* actions:
* unban:
* - "2024-04-30 15:27:28"
* - "2024-04-30 15:27:28"
* - "2024-04-30 15:27:28"
* - "2024-04-30 15:27:28"
*
*/
func ClientFlush(format, streamName, filterName string, patterns []string) {
requestedPatterns := compileKVPatterns(patterns)
// Remember which Filters are compatible with the query
filterCompatibility := make(map[SF]bool)
isCompatible := func(filter *Filter) bool {
sf := SF{filter.Stream.Name, filter.Name}
compatible, ok := filterCompatibility[sf]
// already tested
if ok {
return compatible
}
for k := range requestedPatterns {
if -1 == slices.IndexFunc(filter.Pattern, func(pattern *Pattern) bool {
return pattern.Name == k
}) {
filterCompatibility[sf] = false
return false
}
}
filterCompatibility[sf] = true
return true
}
// match functions
kvMatch := func(filter *Filter, filterPatterns []string) bool {
// For each user requested pattern
for k, v := range requestedPatterns {
// Find its index on the Filter.Pattern
for i, pattern := range filter.Pattern {
if k == pattern.Name {
// Test the match
if !v.MatchString(filterPatterns[i]) {
return false
}
}
}
}
return true
}
var found bool
fullMatch := func(filter *Filter, match Match) bool {
// Test if we limit by stream
if streamName == "" || filter.Stream.Name == streamName {
// Test if we limit by filter
if filterName == "" || filter.Name == filterName {
found = true
filterPatterns := match.Split()
return isCompatible(filter) && kvMatch(filter, filterPatterns)
}
}
return false
}
response := SendAndRetrieve(Request{Info, PSF{}})
if response.Err != nil {
logger.Fatalln("Received error from daemon:", response.Err)
}
commands := make([]PSF, 0)
cs := make(ClientStatus)
for pf, times := range response.Matches {
if fullMatch(pf.F, pf.P) {
commands = append(commands, PSF{pf.P, pf.F.Stream.Name, pf.F.Name})
addMatchToCS(cs, pf, times)
}
}
for pa, times := range response.Actions {
if fullMatch(pa.A.Filter, pa.P) {
commands = append(commands, PSF{pa.P, pa.A.Filter.Stream.Name, pa.A.Filter.Name})
addActionToCS(cs, pa, times)
}
}
if !found {
logger.Println(logger.WARN, "No matching stream.filter items found. This does not mean it doesn't exist, maybe it just didn't receive any match.")
os.Exit(1)
}
for _, psf := range commands {
response := SendAndRetrieve(Request{Flush, psf})
if response.Err != nil {
logger.Fatalln("Received error from daemon:", response.Err)
}
}
printClientStatus(cs, format)
os.Exit(0)
}
func TestRegex(confFilename, regex, line string) {
conf := parseConf(confFilename)
// Code close to app/startup.go
var usedPatterns []*Pattern
for _, pattern := range conf.Patterns {
if strings.Contains(regex, pattern.nameWithBraces) {
usedPatterns = append(usedPatterns, pattern)
regex = strings.Replace(regex, pattern.nameWithBraces, pattern.Regex, 1)
}
}
reg, err := regexp.Compile(regex)
if err != nil {
logger.Fatalln("ERROR the specified regex is invalid: %v", err)
os.Exit(1)
}
// Code close to app/daemon.go
match := func(line string) {
var ignored bool
if matches := reg.FindStringSubmatch(line); matches != nil {
if usedPatterns != nil {
var result []string
for _, p := range usedPatterns {
match := matches[reg.SubexpIndex(p.Name)]
result = append(result, match)
if !p.notAnIgnore(&match) {
ignored = true
}
}
if !ignored {
fmt.Printf("\033[32mmatching\033[0m %v: %v\n", WithBrackets(result), line)
} else {
fmt.Printf("\033[33mignore matching\033[0m %v: %v\n", WithBrackets(result), line)
}
} else {
fmt.Printf("\033[32mmatching\033[0m [%v]:\n", line)
}
} else {
fmt.Printf("\033[31mno match\033[0m: %v\n", line)
}
}
if line != "" {
match(line)
} else {
logger.Println(logger.INFO, "no second argument: reading from stdin")
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
match(scanner.Text())
}
}
}


@@ -1,447 +0,0 @@
package app
import (
"bufio"
"os"
"os/exec"
"os/signal"
"strings"
"sync"
"syscall"
"time"
"framagit.org/ppom/reaction/logger"
)
// Executes a command and channel-send its stdout
func cmdStdout(commandline []string) chan *string {
lines := make(chan *string)
go func() {
cmd := exec.Command(commandline[0], commandline[1:]...)
stdout, err := cmd.StdoutPipe()
if err != nil {
logger.Fatalln("couldn't open stdout on command:", err)
}
if err := cmd.Start(); err != nil {
logger.Fatalln("couldn't start command:", err)
}
defer stdout.Close()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
lines <- &line
logger.Println(logger.DEBUG, "stdout:", line)
}
close(lines)
}()
return lines
}
func runCommands(commands [][]string, moment string) bool {
ok := true
for _, command := range commands {
cmd := exec.Command(command[0], command[1:]...)
cmd.WaitDelay = time.Minute
logger.Printf(logger.INFO, "%v command: run %v\n", moment, command)
if err := cmd.Start(); err != nil {
logger.Printf(logger.ERROR, "%v command: run %v: %v", moment, command, err)
ok = false
} else {
err := cmd.Wait()
if err != nil {
logger.Printf(logger.ERROR, "%v command: run %v: %v", moment, command, err)
ok = false
}
}
}
return ok
}
func (p *Pattern) notAnIgnore(match *string) bool {
for _, regex := range p.compiledIgnoreRegex {
if regex.MatchString(*match) {
return false
}
}
for _, ignore := range p.Ignore {
if ignore == *match {
return false
}
}
return true
}
// Whether one of the filter's regexes is matched on a line
func (f *Filter) match(line *string) Match {
for _, regex := range f.compiledRegex {
if matches := regex.FindStringSubmatch(*line); matches != nil {
if f.Pattern != nil {
var result []string
for _, p := range f.Pattern {
match := matches[regex.SubexpIndex(p.Name)]
if p.notAnIgnore(&match) {
result = append(result, match)
}
}
if len(result) == len(f.Pattern) {
logger.Printf(logger.INFO, "%s.%s: match %s", f.Stream.Name, f.Name, WithBrackets(result))
return JoinMatch(result)
}
} else {
logger.Printf(logger.INFO, "%s.%s: match [.]\n", f.Stream.Name, f.Name)
// No pattern, so this match will never actually be used
return "."
}
}
}
return ""
}
func (f *Filter) sendActions(match Match, at time.Time) {
for _, a := range f.Actions {
actionsC <- PAT{match, a, at.Add(a.afterDuration)}
}
}
func (a *Action) exec(match Match) {
defer wgActions.Done()
var computedCommand []string
if a.Filter.Pattern != nil {
computedCommand = make([]string, 0, len(a.Cmd))
matches := match.Split()
for _, item := range a.Cmd {
for i, p := range a.Filter.Pattern {
item = strings.ReplaceAll(item, p.nameWithBraces, matches[i])
}
computedCommand = append(computedCommand, item)
}
} else {
computedCommand = a.Cmd
}
logger.Printf(logger.INFO, "%s.%s.%s: run %s\n", a.Filter.Stream.Name, a.Filter.Name, a.Name, computedCommand)
cmd := exec.Command(computedCommand[0], computedCommand[1:]...)
if ret := cmd.Run(); ret != nil {
logger.Printf(logger.ERROR, "%s.%s.%s: run %s, code %s\n", a.Filter.Stream.Name, a.Filter.Name, a.Name, computedCommand, ret)
}
}
func ActionsManager(concurrency int) {
// concurrency init
execActionsC := make(chan PA)
if concurrency > 0 {
for i := 0; i < concurrency; i++ {
go func() {
var pa PA
for {
pa = <-execActionsC
pa.A.exec(pa.P)
}
}()
}
} else {
go func() {
var pa PA
for {
pa = <-execActionsC
go func(pa PA) {
pa.A.exec(pa.P)
}(pa)
}
}()
}
execAction := func(a *Action, p Match) {
wgActions.Add(1)
execActionsC <- PA{p, a}
}
// main
pendingActionsC := make(chan PAT)
for {
select {
case pat := <-actionsC:
pa := PA{pat.P, pat.A}
pattern, action, then := pat.P, pat.A, pat.T
now := time.Now()
// check if must be executed now
if then.Compare(now) <= 0 {
execAction(action, pattern)
} else {
if actions[pa] == nil {
actions[pa] = make(map[time.Time]struct{})
}
actions[pa][then] = struct{}{}
go func(insidePat PAT, insideNow time.Time) {
time.Sleep(insidePat.T.Sub(insideNow))
pendingActionsC <- insidePat
}(pat, now)
}
case pat := <-pendingActionsC:
pa := PA{pat.P, pat.A}
pattern, action, then := pat.P, pat.A, pat.T
if actions[pa] != nil {
delete(actions[pa], then)
execAction(action, pattern)
}
case fo := <-flushToActionsC:
for pa := range actions {
if fo.S == pa.A.Filter.Stream.Name &&
fo.F == pa.A.Filter.Name &&
fo.P == pa.P {
for range actions[pa] {
execAction(pa.A, pa.P)
}
delete(actions, pa)
break
}
}
case <-stopActions:
for pa := range actions {
if pa.A.OnExit {
for range actions[pa] {
execAction(pa.A, pa.P)
}
}
}
wgActions.Done()
return
}
}
}
func MatchesManager() {
var fo PSF
var pft PFT
end := false
for !end {
select {
case fo = <-flushToMatchesC:
matchesManagerHandleFlush(fo)
case fo, ok := <-startupMatchesC:
if !ok {
end = true
} else {
_ = matchesManagerHandleMatch(fo)
}
}
}
for {
select {
case fo = <-flushToMatchesC:
matchesManagerHandleFlush(fo)
case pft = <-matchesC:
entry := LogEntry{pft.T, 0, pft.P, pft.F.Stream.Name, pft.F.Name, 0, false}
entry.Exec = matchesManagerHandleMatch(pft)
logsC <- entry
}
}
}
func matchesManagerHandleFlush(fo PSF) {
matchesLock.Lock()
for pf := range matches {
if fo.S == pf.F.Stream.Name &&
fo.F == pf.F.Name &&
fo.P == pf.P {
delete(matches, pf)
break
}
}
matchesLock.Unlock()
}
func matchesManagerHandleMatch(pft PFT) bool {
matchesLock.Lock()
defer matchesLock.Unlock()
filter, patterns, then := pft.F, pft.P, pft.T
pf := PF{pft.P, pft.F}
if filter.Retry > 1 {
// make sure map exists
if matches[pf] == nil {
matches[pf] = make(map[time.Time]struct{})
}
// add new match
matches[pf][then] = struct{}{}
// remove match when expired
go func(pf PF, then time.Time) {
time.Sleep(then.Sub(time.Now()) + filter.retryDuration)
matchesLock.Lock()
if matches[pf] != nil {
// FIXME replace this and all similar occurences
// by clear() when switching to go 1.21
delete(matches[pf], then)
}
matchesLock.Unlock()
}(pf, then)
}
if filter.Retry <= 1 || len(matches[pf]) >= filter.Retry {
delete(matches, pf)
filter.sendActions(patterns, then)
return true
}
return false
}
func StreamManager(s *Stream, endedSignal chan *Stream) {
defer wgStreams.Done()
logger.Printf(logger.INFO, "%s: start %s\n", s.Name, s.Cmd)
lines := cmdStdout(s.Cmd)
for {
select {
case line, ok := <-lines:
if !ok {
endedSignal <- s
return
}
for _, filter := range s.Filters {
if match := filter.match(line); match != "" {
matchesC <- PFT{match, filter, time.Now()}
}
}
case <-stopStreams:
return
}
}
}
var actions ActionsMap
var matches MatchesMap
var matchesLock sync.Mutex
var stopStreams chan bool
var stopActions chan bool
var wgActions sync.WaitGroup
var wgStreams sync.WaitGroup
/*
<StreamCmds>
StreamManager onstartup:matches
matches MatchesManager logs DatabaseManager ·
actions ActionsManager
SocketManager flushes··
<Clients>
*/
// DatabaseManager → MatchesManager
var startupMatchesC chan PFT
// StreamManager → MatchesManager
var matchesC chan PFT
// MatchesManager → DatabaseManager
var logsC chan LogEntry
// MatchesManager → ActionsManager
var actionsC chan PAT
// SocketManager, DatabaseManager → MatchesManager
var flushToMatchesC chan PSF
// SocketManager → ActionsManager
var flushToActionsC chan PSF
// SocketManager → DatabaseManager
var flushToDatabaseC chan LogEntry
func Daemon(confFilename string) {
conf := parseConf(confFilename)
startupMatchesC = make(chan PFT)
matchesC = make(chan PFT)
logsC = make(chan LogEntry)
actionsC = make(chan PAT)
flushToMatchesC = make(chan PSF)
flushToActionsC = make(chan PSF)
flushToDatabaseC = make(chan LogEntry)
stopActions = make(chan bool)
stopStreams = make(chan bool)
actions = make(ActionsMap)
matches = make(MatchesMap)
_ = runCommands(conf.Start, "start")
go DatabaseManager(conf)
go MatchesManager()
go ActionsManager(conf.Concurrency)
// Ready to start
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
endSignals := make(chan *Stream)
nbStreamsInExecution := len(conf.Streams)
for _, stream := range conf.Streams {
wgStreams.Add(1)
go StreamManager(stream, endSignals)
}
go SocketManager(conf)
for {
select {
case finishedStream := <-endSignals:
logger.Printf(logger.ERROR, "%s stream finished", finishedStream.Name)
nbStreamsInExecution--
if nbStreamsInExecution == 0 {
quit(conf, false)
}
case <-sigs:
logger.Printf(logger.INFO, "Received SIGINT/SIGTERM, exiting")
quit(conf, true)
}
}
}
func quit(conf *Conf, graceful bool) {
// send stop to StreamManager·s
close(stopStreams)
logger.Println(logger.INFO, "Waiting for Streams to finish...")
wgStreams.Wait()
// ActionsManager calls wgActions.Done() when it has launched all pending actions
wgActions.Add(1)
// send stop to ActionsManager
close(stopActions)
// stop all actions
logger.Println(logger.INFO, "Waiting for Actions to finish...")
wgActions.Wait()
// run stop commands
stopOk := runCommands(conf.Stop, "stop")
// delete pipe
err := os.Remove(*SocketPath)
if err != nil {
logger.Println(logger.ERROR, "Failed to remove socket:", err)
}
if !stopOk || !graceful {
os.Exit(1)
}
os.Exit(0)
}


@@ -1,108 +0,0 @@
---
# This example configuration file is a good starting point, but you're
# strongly encouraged to take a look at the full documentation: https://reaction.ppom.me
#
# This file is using the well-established YAML configuration language.
# Note that the more powerful JSONnet configuration language is also supported
# and that the documentation uses JSONnet
# definitions are just a place to put chunks of conf you want to reuse elsewhere,
# using YAML anchors `&name` and pointers `*name`
# definitions are not read by reaction
definitions:
- &iptablesban [ 'ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &iptablesunban [ 'ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
# ip46tables is a minimal C program (only POSIX dependencies) present as a subdirectory.
# it makes it possible to handle both ipv4/iptables and ipv6/ip6tables commands
# if set to a positive number → max number of concurrent actions
# if set to a negative number → no limit
# if not specified or set to 0 → defaults to the number of CPUs on the system
concurrency: 0
# patterns are substituted in regexes.
# when a filter performs an action, it replaces the found pattern
patterns:
ip:
# reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
# simple version: regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
regex: '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))'
ignore:
- 127.0.0.1
- ::1
# Patterns can also be ignored based on regexes; each regex is matched against the whole string detected by the pattern
# ignoreregex:
# - '10\.0\.[0-9]{1,3}\.[0-9]{1,3}'
# Those commands will be executed in order at start, before everything else
start:
- [ 'ip46tables', '-w', '-N', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
# Those commands will be executed in order at stop, after everything else
stop:
- [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-F', 'reaction' ]
- [ 'ip46tables', '-w', '-X', 'reaction' ]
# streams are commands
# they are run and their output is captured
# *example:* `tail -f /var/log/nginx/access.log`
# their output will be used by one or more filters
streams:
# streams have a user-defined name
ssh:
# note that if the command is not in the environment's `PATH`
# its full path must be given.
cmd: [ 'journalctl', '-n0', '-fu', 'sshd.service' ]
# filters run actions when they match regexes on a stream
filters:
# filters have a user-defined name
failedlogin:
# reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
regex:
# <ip> is predefined in the patterns section
# ip's regex is inserted in the following regex
- 'authentication failure;.*rhost=<ip>'
- 'Failed password for .* from <ip>'
- 'Connection (reset|closed) by (authenticating|invalid) user .* <ip>'
# if retry and retryperiod are defined,
# the actions will only take place if a same pattern is
# found `retry` times in a `retryperiod` interval
retry: 3
# format is defined here: https://pkg.go.dev/time#ParseDuration
retryperiod: 6h
# actions are run by the filter when regexes are matched
actions:
# actions have a user-defined name
ban:
# YAML substitutes *reference by the value anchored at &reference
cmd: *iptablesban
unban:
cmd: *iptablesunban
# if after is defined, the action will not take place immediately, but after a specified duration
# same format as retryperiod
after: 48h
# let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
# if you want reaction to run those pending commands before exiting, you can set this:
# onexit: true
# (defaults to false)
# here it is not useful because we will flush and delete the chain containing the bans anyway
# (with the stop commands)
# persistence
# tl;dr: when a filter has an action with `after` set, the filter acts as a 'jail',
# which is persisted after reboots.
#
# when a filter is triggered, there are 2 flows:
#
# if none of its actions have an `after` directive set:
# no action will be replayed.
#
# else (if at least one action has an `after` directive set):
# if reaction stops while `after` actions are pending:
# and reaction starts again while those actions would still be pending:
# reaction executes the past actions (actions without after or with then+after < now)
# and plans the execution of future actions (actions with then+after > now)


@@ -1,230 +0,0 @@
package app
import (
_ "embed"
"flag"
"fmt"
"os"
"strings"
"framagit.org/ppom/reaction/logger"
)
func addStringFlag(names []string, defvalue string, f *flag.FlagSet) *string {
var value string
for _, name := range names {
f.StringVar(&value, name, defvalue, "")
}
return &value
}
func addBoolFlag(names []string, f *flag.FlagSet) *bool {
var value bool
for _, name := range names {
f.BoolVar(&value, name, false, "")
}
return &value
}
var SocketPath *string
func addSocketFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"s", "socket"}, "/run/reaction/reaction.sock", f)
}
func addConfFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"c", "config"}, "", f)
}
func addFormatFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"f", "format"}, "yaml", f)
}
func addLimitFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"l", "limit"}, "", f)
}
func addLevelFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"l", "loglevel"}, "INFO", f)
}
func subCommandParse(f *flag.FlagSet, maxRemainingArgs int) {
help := addBoolFlag([]string{"h", "help"}, f)
f.Parse(os.Args[2:])
if *help {
basicUsage()
os.Exit(0)
}
// -1 = no limit to remaining args
if maxRemainingArgs > -1 && len(f.Args()) > maxRemainingArgs {
fmt.Printf("ERROR unrecognized argument(s): %v\n", f.Args()[maxRemainingArgs:])
basicUsage()
os.Exit(1)
}
}
func basicUsage() {
const (
bold = "\033[1m"
reset = "\033[0m"
)
fmt.Print(
bold + `reaction help` + reset + `
# print this help message
` + bold + `reaction start` + reset + `
# start the daemon
# options:
-c/--config CONFIG_FILE # configuration file in json, jsonnet or yaml format (required)
-l/--loglevel LEVEL # minimum log level to show, in DEBUG, INFO, WARN, ERROR, FATAL
# (default: INFO)
-s/--socket SOCKET # path to the client-daemon communication socket
# (default: /run/reaction/reaction.sock)
` + bold + `reaction example-conf` + reset + `
# print a configuration file example
` + bold + `reaction show` + reset + ` [NAME=PATTERN...]
# show current matches and which actions are still to be run for the specified PATTERN regex(es)
# (e.g. know what is currently banned)
reaction show
reaction show "ip=192.168.1.1"
reaction show "ip=192\.168\..*" login=root
# options:
-s/--socket SOCKET # path to the client-daemon communication socket
-f/--format yaml|json # (default: yaml)
-l/--limit STREAM[.FILTER] # only show items related to this STREAM (or STREAM.FILTER)
` + bold + `reaction flush` + reset + ` NAME=PATTERN [NAME=PATTERN...]
# remove currently active matches and run currently pending actions for the specified PATTERN regex(es)
# (then show flushed matches and actions)
reaction flush "ip=192.168.1.1"
reaction flush "ip=192\.168\..*" login=root
# options:
-s/--socket SOCKET # path to the client-daemon communication socket
-f/--format yaml|json # (default: yaml)
-l/--limit STREAM.FILTER # flush only items related to this STREAM.FILTER
` + bold + `reaction test-regex` + reset + ` REGEX LINE # test REGEX against LINE
cat FILE | ` + bold + `reaction test-regex` + reset + ` REGEX # test REGEX against each line of FILE
# options:
-c/--config CONFIG_FILE # configuration file in json, jsonnet or yaml format
# optional: permits to use configured patterns like <ip> in regex
` + bold + `reaction version` + reset + `
# print version information
see usage examples, service configurations and good practices
on the ` + bold + `wiki` + reset + `: https://reaction.ppom.me
`)
}
//go:embed example.yml
var exampleConf string
func Main(version, commit string) {
if len(os.Args) <= 1 {
logger.Fatalln("No argument provided. Try `reaction help`")
basicUsage()
os.Exit(1)
}
f := flag.NewFlagSet(os.Args[1], flag.ExitOnError)
switch os.Args[1] {
case "help", "-h", "-help", "--help":
basicUsage()
case "version", "-v", "--version":
fmt.Printf("reaction version %v commit %v\n", version, commit)
case "example-conf":
subCommandParse(f, 0)
fmt.Print(exampleConf)
case "start":
SocketPath = addSocketFlag(f)
confFilename := addConfFlag(f)
logLevel := addLevelFlag(f)
subCommandParse(f, 0)
if *confFilename == "" {
logger.Fatalln("no configuration file provided")
basicUsage()
os.Exit(1)
}
logLevelType := logger.FromString(*logLevel)
if logLevelType == logger.UNKNOWN {
logger.Fatalf("Log Level %v not recognized", logLevel)
basicUsage()
os.Exit(1)
}
logger.SetLogLevel(logLevelType)
Daemon(*confFilename)
case "show":
SocketPath = addSocketFlag(f)
queryFormat := addFormatFlag(f)
limit := addLimitFlag(f)
subCommandParse(f, -1)
if *queryFormat != "yaml" && *queryFormat != "json" {
logger.Fatalln("only yaml and json formats are supported")
}
stream, filter := "", ""
if *limit != "" {
splitSF := strings.Split(*limit, ".")
stream = splitSF[0]
if len(splitSF) == 2 {
filter = splitSF[1]
} else if len(splitSF) > 2 {
logger.Fatalln("-l/--limit: only one . separator is supported")
}
}
ClientShow(*queryFormat, stream, filter, f.Args())
case "flush":
SocketPath = addSocketFlag(f)
queryFormat := addFormatFlag(f)
limit := addLimitFlag(f)
subCommandParse(f, -1)
if *queryFormat != "yaml" && *queryFormat != "json" {
logger.Fatalln("only yaml and json formats are supported")
}
if len(f.Args()) == 0 {
logger.Fatalln("subcommand flush takes at least one TARGET argument")
}
stream, filter := "", ""
if *limit != "" {
splitSF := strings.Split(*limit, ".")
stream = splitSF[0]
if len(splitSF) == 2 {
filter = splitSF[1]
} else if len(splitSF) > 2 {
logger.Fatalln("-l/--limit: only one . separator is supported")
}
}
ClientFlush(*queryFormat, stream, filter, f.Args())
case "test-regex":
// socket not needed, no interaction with the daemon
confFilename := addConfFlag(f)
subCommandParse(f, 2)
if *confFilename == "" {
logger.Println(logger.WARN, "no configuration file provided. Can't make use of registered patterns.")
}
if f.Arg(0) == "" {
logger.Fatalln("subcommand test-regex takes at least one REGEX argument")
basicUsage()
os.Exit(1)
}
TestRegex(*confFilename, f.Arg(0), f.Arg(1))
default:
logger.Fatalf("subcommand %v not recognized. Try `reaction help`", os.Args[1])
basicUsage()
os.Exit(1)
}
}


@@ -1,264 +0,0 @@
package app
import (
"encoding/gob"
"errors"
"io"
"os"
"time"
"framagit.org/ppom/reaction/logger"
)
const (
logDBName = "./reaction-matches.db"
logDBNewName = "./reaction-matches.new.db"
flushDBName = "./reaction-flushes.db"
)
func openDB(path string) (bool, *ReadDB) {
file, err := os.Open(path)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
logger.Printf(logger.WARN, "No DB found at %s. It's ok if this is the first time reaction is running.\n", path)
return true, nil
}
logger.Fatalln("Failed to open DB:", err)
}
return false, &ReadDB{file, gob.NewDecoder(file)}
}
func createDB(path string) *WriteDB {
file, err := os.Create(path)
if err != nil {
logger.Fatalln("Failed to create DB:", err)
}
return &WriteDB{file, gob.NewEncoder(file)}
}
func DatabaseManager(c *Conf) {
logDB, flushDB := c.RotateDB(true)
close(startupMatchesC)
c.manageLogs(logDB, flushDB)
}
func (c *Conf) manageLogs(logDB *WriteDB, flushDB *WriteDB) {
cpt := 0
writeSF2int := make(map[SF]int)
writeCpt := 1
for {
select {
case entry := <-flushToDatabaseC:
flushDB.enc.Encode(entry)
case entry := <-logsC:
encodeOrFatal(logDB.enc, entry, writeSF2int, &writeCpt)
cpt++
// rough estimate: 100 000 entries ≈ 10 MB, so rotate every 500 000 (~50 MB)
if cpt == 500_000 {
cpt = 0
logger.Printf(logger.INFO, "Rotating database...")
logDB.file.Close()
flushDB.file.Close()
logDB, flushDB = c.RotateDB(false)
logger.Printf(logger.INFO, "Rotated database")
}
}
}
}
func (c *Conf) RotateDB(startup bool) (*WriteDB, *WriteDB) {
var (
doesntExist bool
err error
logReadDB *ReadDB
flushReadDB *ReadDB
logWriteDB *WriteDB
flushWriteDB *WriteDB
)
doesntExist, logReadDB = openDB(logDBName)
if doesntExist {
return createDB(logDBName), createDB(flushDBName)
}
doesntExist, flushReadDB = openDB(flushDBName)
if doesntExist {
logger.Println(logger.WARN, "Strange! No flushes db, opening /dev/null instead")
doesntExist, flushReadDB = openDB("/dev/null")
if doesntExist {
logger.Fatalln("Opening dummy /dev/null failed")
}
}
logWriteDB = createDB(logDBNewName)
rotateDB(c, logReadDB.dec, flushReadDB.dec, logWriteDB.enc, startup)
err = logReadDB.file.Close()
if err != nil {
logger.Fatalln("Failed to close old DB:", err)
}
// It should be ok to rename an open file
err = os.Rename(logDBNewName, logDBName)
if err != nil {
logger.Fatalln("Failed to replace old DB with new one:", err)
}
err = os.Remove(flushDBName)
if err != nil && !errors.Is(err, os.ErrNotExist) {
logger.Fatalln("Failed to delete old DB:", err)
}
flushWriteDB = createDB(flushDBName)
return logWriteDB, flushWriteDB
}
func rotateDB(c *Conf, logDec *gob.Decoder, flushDec *gob.Decoder, logEnc *gob.Encoder, startup bool) {
// This mapping is a space optimization feature
// It lets us compress stream+filter into a small number (a single byte in gob)
// We do this only for matches, not for flushes
readSF2int := make(map[int]SF)
writeSF2int := make(map[SF]int)
writeCounter := 1
// This extra code warns only once for each non-existent filter
discardedEntries := make(map[SF]int)
malformedEntries := 0
defer func() {
for sf, t := range discardedEntries {
if t > 0 {
logger.Printf(logger.WARN, "info discarded %v times from the DBs: stream/filter not found: %s.%s\n", t, sf.S, sf.F)
}
}
if malformedEntries > 0 {
logger.Printf(logger.WARN, "%v malformed entries discarded from the DBs\n", malformedEntries)
}
}()
// pattern, stream, filter → last flush
// use value keys: pointer keys compare by address, so lookups with fresh
// &PSF{...} literals would never hit
flushes := make(map[PSF]time.Time)
for {
var entry LogEntry
var filter *Filter
// decode entry
err := flushDec.Decode(&entry)
if err != nil {
if err == io.EOF {
break
}
malformedEntries++
continue
}
// retrieve related filter
if entry.Stream != "" || entry.Filter != "" {
if stream := c.Streams[entry.Stream]; stream != nil {
filter = stream.Filters[entry.Filter]
}
if filter == nil {
discardedEntries[SF{entry.Stream, entry.Filter}]++
continue
}
}
// store
flushes[PSF{entry.Pattern, entry.Stream, entry.Filter}] = entry.T
}
lastTimeCpt := int64(0)
now := time.Now()
for {
var entry LogEntry
var filter *Filter
// decode entry
err := logDec.Decode(&entry)
if err != nil {
if err == io.EOF {
break
}
malformedEntries++
continue
}
// retrieve related stream & filter
if entry.Stream == "" && entry.Filter == "" {
sf, ok := readSF2int[entry.SF]
if !ok {
discardedEntries[SF{"", ""}]++
continue
}
entry.Stream = sf.S
entry.Filter = sf.F
}
if stream := c.Streams[entry.Stream]; stream != nil {
filter = stream.Filters[entry.Filter]
}
if filter == nil {
discardedEntries[SF{entry.Stream, entry.Filter}]++
continue
}
if entry.SF != 0 {
readSF2int[entry.SF] = SF{entry.Stream, entry.Filter}
}
// check if number of patterns is in sync
if len(entry.Pattern.Split()) != len(filter.Pattern) {
continue
}
// check if it hasn't been flushed
lastGlobalFlush := flushes[PSF{entry.Pattern, "", ""}].Unix()
lastLocalFlush := flushes[PSF{entry.Pattern, entry.Stream, entry.Filter}].Unix()
entryTime := entry.T.Unix()
if lastLocalFlush > entryTime || lastGlobalFlush > entryTime {
continue
}
// restore time
if entry.T.IsZero() {
entry.T = time.Unix(entry.S, lastTimeCpt)
}
lastTimeCpt++
// store matches
if !entry.Exec && entry.T.Add(filter.retryDuration).Unix() > now.Unix() {
if startup {
startupMatchesC <- PFT{entry.Pattern, filter, entry.T}
}
encodeOrFatal(logEnc, entry, writeSF2int, &writeCounter)
}
// replay executions
if entry.Exec && entry.T.Add(*filter.longuestActionDuration).Unix() > now.Unix() {
if startup {
flushToMatchesC <- PSF{entry.Pattern, entry.Stream, entry.Filter}
filter.sendActions(entry.Pattern, entry.T)
}
encodeOrFatal(logEnc, entry, writeSF2int, &writeCounter)
}
}
}
func encodeOrFatal(enc *gob.Encoder, entry LogEntry, writeSF2int map[SF]int, writeCounter *int) {
// Stream/Filter reduction
sf, ok := writeSF2int[SF{entry.Stream, entry.Filter}]
if ok {
entry.SF = sf
entry.Stream = ""
entry.Filter = ""
} else {
entry.SF = *writeCounter
writeSF2int[SF{entry.Stream, entry.Filter}] = *writeCounter
*writeCounter++
}
// Time reduction
if !entry.T.IsZero() {
entry.S = entry.T.Unix()
entry.T = time.Time{}
}
err := enc.Encode(entry)
if err != nil {
logger.Fatalln("Failed to write to new DB:", err)
}
}


@@ -1,81 +0,0 @@
package app
import (
"encoding/gob"
"errors"
"net"
"os"
"path"
"time"
"framagit.org/ppom/reaction/logger"
)
func createOpenSocket() net.Listener {
err := os.MkdirAll(path.Dir(*SocketPath), 0755)
if err != nil {
logger.Fatalln("Failed to create socket directory")
}
_, err = os.Stat(*SocketPath)
if err == nil {
logger.Println(logger.WARN, "socket", *SocketPath, "already exists: Is the daemon already running? Deleting.")
err = os.Remove(*SocketPath)
if err != nil {
logger.Fatalln("Failed to remove socket:", err)
}
}
ln, err := net.Listen("unix", *SocketPath)
if err != nil {
logger.Fatalln("Failed to create socket:", err)
}
return ln
}
// Handle connections
//func SocketManager(streams map[string]*Stream) {
func SocketManager(conf *Conf) {
ln := createOpenSocket()
defer ln.Close()
for {
conn, err := ln.Accept()
if err != nil {
logger.Println(logger.ERROR, "Failed to open connection from cli:", err)
continue
}
go func(conn net.Conn) {
defer conn.Close()
var request Request
var response Response
err := gob.NewDecoder(conn).Decode(&request)
if err != nil {
logger.Println(logger.ERROR, "Invalid Message from cli:", err)
return
}
switch request.Request {
case Info:
// response.Config = *conf
response.Matches = matches
response.Actions = actions
case Flush:
le := LogEntry{time.Now(), 0, request.Flush.P, request.Flush.S, request.Flush.F, 0, false}
flushToMatchesC <- request.Flush
flushToActionsC <- request.Flush
flushToDatabaseC <- le
default:
logger.Println(logger.ERROR, "Invalid Message from cli: unrecognised command type")
// don't return here: fall through so the error is actually sent back
response.Err = errors.New("unrecognised command type")
}
err = gob.NewEncoder(conn).Encode(response)
if err != nil {
logger.Println(logger.ERROR, "Can't respond to cli:", err)
return
}
}(conn)
}
}


@@ -1,178 +0,0 @@
package app
import (
"encoding/json"
"fmt"
"os"
"regexp"
"runtime"
"slices"
"strings"
"time"
"framagit.org/ppom/reaction/logger"
"github.com/google/go-jsonnet"
)
func (c *Conf) setup() {
if c.Concurrency == 0 {
c.Concurrency = runtime.NumCPU()
}
// Ensure we iterate through the c.Patterns map in reproducible order
sortedPatternNames := make([]string, 0, len(c.Patterns))
for k := range c.Patterns {
sortedPatternNames = append(sortedPatternNames, k)
}
slices.Sort(sortedPatternNames)
for _, patternName := range sortedPatternNames {
pattern := c.Patterns[patternName]
pattern.Name = patternName
pattern.nameWithBraces = fmt.Sprintf("<%s>", pattern.Name)
if pattern.Regex == "" {
logger.Fatalf("Bad configuration: pattern's regex %v is empty!", patternName)
}
compiled, err := regexp.Compile(fmt.Sprintf("^%v$", pattern.Regex))
if err != nil {
logger.Fatalf("Bad configuration: pattern %v: %v", patternName, err)
}
pattern.Regex = fmt.Sprintf("(?P<%s>%s)", patternName, pattern.Regex)
for _, ignore := range pattern.Ignore {
if !compiled.MatchString(ignore) {
logger.Fatalf("Bad configuration: pattern ignore '%v' doesn't match pattern %v! It should be fixed or removed.", ignore, pattern.nameWithBraces)
}
}
// Compile ignore regexes
for _, regex := range pattern.IgnoreRegex {
// Enclose the regex to make sure that it matches the whole detected string
compiledRegex, err := regexp.Compile("^" + regex + "$")
if err != nil {
logger.Fatalf("Bad configuration: in ignoreregex of pattern %s: %v", pattern.Name, err)
}
pattern.compiledIgnoreRegex = append(pattern.compiledIgnoreRegex, *compiledRegex)
}
}
if len(c.Streams) == 0 {
logger.Fatalln("Bad configuration: no streams configured!")
}
for streamName := range c.Streams {
stream := c.Streams[streamName]
stream.Name = streamName
if strings.Contains(stream.Name, ".") {
logger.Fatalf("Bad configuration: character '.' is not allowed in stream names: '%v'", stream.Name)
}
if len(stream.Filters) == 0 {
logger.Fatalf("Bad configuration: no filters configured in %v", stream.Name)
}
for filterName := range stream.Filters {
filter := stream.Filters[filterName]
filter.Stream = stream
filter.Name = filterName
if strings.Contains(filter.Name, ".") {
logger.Fatalf("Bad configuration: character '.' is not allowed in filter names: '%v'", filter.Name)
}
// Parse Duration
if filter.RetryPeriod == "" {
if filter.Retry > 1 {
logger.Fatalf("Bad configuration: retry but no retryperiod in %v.%v", stream.Name, filter.Name)
}
} else {
retryDuration, err := time.ParseDuration(filter.RetryPeriod)
if err != nil {
logger.Fatalf("Bad configuration: Failed to parse retry time in %v.%v: %v", stream.Name, filter.Name, err)
}
filter.retryDuration = retryDuration
}
if len(filter.Regex) == 0 {
logger.Fatalf("Bad configuration: no regexes configured in %v.%v", stream.Name, filter.Name)
}
// Compute Regexes
// Look for Patterns inside Regexes
for _, regex := range filter.Regex {
// iterate through patterns in reproducible order
for _, patternName := range sortedPatternNames {
pattern := c.Patterns[patternName]
if strings.Contains(regex, pattern.nameWithBraces) {
if !slices.Contains(filter.Pattern, pattern) {
filter.Pattern = append(filter.Pattern, pattern)
}
regex = strings.Replace(regex, pattern.nameWithBraces, pattern.Regex, 1)
}
}
compiledRegex, err := regexp.Compile(regex)
if err != nil {
logger.Fatalf("Bad configuration: regex of filter %s.%s: %v", stream.Name, filter.Name, err)
}
filter.compiledRegex = append(filter.compiledRegex, *compiledRegex)
}
if len(filter.Actions) == 0 {
logger.Fatalf("Bad configuration: no actions configured in %s.%s", stream.Name, filter.Name)
}
for actionName := range filter.Actions {
action := filter.Actions[actionName]
action.Filter = filter
action.Name = actionName
if strings.Contains(action.Name, ".") {
logger.Fatalf("Bad configuration: character '.' is not allowed in action names: '%v'", action.Name)
}
// Parse Duration
if action.After != "" {
afterDuration, err := time.ParseDuration(action.After)
if err != nil {
logger.Fatalf("Bad configuration: Failed to parse after time in %v.%v.%v: %v", stream.Name, filter.Name, action.Name, err)
}
action.afterDuration = afterDuration
} else if action.OnExit {
logger.Fatalf("Bad configuration: Cannot have `onexit: true` without an `after` directive in %v.%v.%v", stream.Name, filter.Name, action.Name)
}
if filter.longuestActionDuration == nil || filter.longuestActionDuration.Milliseconds() < action.afterDuration.Milliseconds() {
filter.longuestActionDuration = &action.afterDuration
}
}
}
}
}
func parseConf(filename string) *Conf {
data, err := os.Open(filename)
if err != nil {
logger.Fatalln("Failed to read configuration file:", err)
}
var conf Conf
if strings.HasSuffix(filename, ".yml") || strings.HasSuffix(filename, ".yaml") {
err = jsonnet.NewYAMLToJSONDecoder(data).Decode(&conf)
if err != nil {
logger.Fatalln("Failed to parse yaml configuration file:", err)
}
} else {
var jsondata string
jsondata, err = jsonnet.MakeVM().EvaluateFile(filename)
if err == nil {
err = json.Unmarshal([]byte(jsondata), &conf)
}
if err != nil {
logger.Fatalln("Failed to parse json configuration file:", err)
}
}
conf.setup()
return &conf
}
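The pattern-interpolation loop in setup() above replaces each `<name>` placeholder with that pattern's regex, iterating over pattern names in sorted order and substituting once per regex. A minimal sketch of that step (Python for brevity; `interpolate` and its arguments are hypothetical names, not reaction's API):

```python
def interpolate(regex, patterns):
    # replace each "<name>" placeholder with the pattern's regex,
    # visiting pattern names in a deterministic (sorted) order
    for name, pat in sorted(patterns.items()):
        placeholder = f"<{name}>"
        if placeholder in regex:
            regex = regex.replace(placeholder, pat, 1)
    return regex

print(interpolate("^found <num>", {"num": "[0-9]{3}"}))  # ^found [0-9]{3}
```

The deterministic iteration order mirrors the `sortedPatternNames` slice in the Go code: without it, overlapping pattern names could be substituted in a different order between runs.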


@ -1,200 +0,0 @@
package app
import (
"bytes"
"encoding/gob"
"fmt"
"os"
"regexp"
"strings"
"time"
)
type Conf struct {
Concurrency int `json:"concurrency"`
Patterns map[string]*Pattern `json:"patterns"`
Streams map[string]*Stream `json:"streams"`
Start [][]string `json:"start"`
Stop [][]string `json:"stop"`
}
type Pattern struct {
Regex string `json:"regex"`
Ignore []string `json:"ignore"`
IgnoreRegex []string `json:"ignoreregex"`
compiledIgnoreRegex []regexp.Regexp `json:"-"`
Name string `json:"-"`
nameWithBraces string `json:"-"`
}
// Stream, Filter & Action structures must never be copied.
// They're always referenced through pointers
type Stream struct {
Name string `json:"-"`
Cmd []string `json:"cmd"`
Filters map[string]*Filter `json:"filters"`
}
type LilStream struct {
Name string
}
func (s *Stream) GobEncode() ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
err := enc.Encode(LilStream{s.Name})
return buf.Bytes(), err
}
func (s *Stream) GobDecode(b []byte) error {
var ls LilStream
dec := gob.NewDecoder(bytes.NewReader(b))
err := dec.Decode(&ls)
s.Name = ls.Name
return err
}
type Filter struct {
Stream *Stream `json:"-"`
Name string `json:"-"`
Regex []string `json:"regex"`
compiledRegex []regexp.Regexp `json:"-"`
Pattern []*Pattern `json:"-"`
Retry int `json:"retry"`
RetryPeriod string `json:"retryperiod"`
retryDuration time.Duration `json:"-"`
Actions map[string]*Action `json:"actions"`
longuestActionDuration *time.Duration
}
// those small versions are needed to prevent infinite recursion in gob because of
// data cycles: Stream <-> Filter, Filter <-> Action
type LilFilter struct {
Stream *Stream
Name string
Pattern []*Pattern
}
func (f *Filter) GobDecode(b []byte) error {
var lf LilFilter
dec := gob.NewDecoder(bytes.NewReader(b))
err := dec.Decode(&lf)
f.Stream = lf.Stream
f.Name = lf.Name
f.Pattern = lf.Pattern
return err
}
func (f *Filter) GobEncode() ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
err := enc.Encode(LilFilter{f.Stream, f.Name, f.Pattern})
return buf.Bytes(), err
}
type Action struct {
Filter *Filter `json:"-"`
Name string `json:"-"`
Cmd []string `json:"cmd"`
After string `json:"after"`
afterDuration time.Duration `json:"-"`
OnExit bool `json:"onexit"`
}
type LilAction struct {
Filter *Filter
Name string
}
func (a *Action) GobEncode() ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
err := enc.Encode(LilAction{a.Filter, a.Name})
return buf.Bytes(), err
}
func (a *Action) GobDecode(b []byte) error {
var la LilAction
dec := gob.NewDecoder(bytes.NewReader(b))
err := dec.Decode(&la)
a.Filter = la.Filter
a.Name = la.Name
return err
}
type LogEntry struct {
T time.Time
S int64
Pattern Match
Stream, Filter string
SF int
Exec bool
}
type ReadDB struct {
file *os.File
dec *gob.Decoder
}
type WriteDB struct {
file *os.File
enc *gob.Encoder
}
type MatchesMap map[PF]map[time.Time]struct{}
type ActionsMap map[PA]map[time.Time]struct{}
// A "\x00"-joined string
// containing all matches on a line.
type Match string
func (m *Match) Split() []string {
return strings.Split(string(*m), "\x00")
}
func JoinMatch(mm []string) Match {
return Match(strings.Join(mm, "\x00"))
}
func WithBrackets(mm []string) string {
var b strings.Builder
for _, match := range mm {
fmt.Fprintf(&b, "[%s]", match)
}
return b.String()
}
// Helper structs made to carry information
// Stream, Filter
type SF struct{ S, F string }
// Pattern, Stream, Filter
type PSF struct {
P Match
S, F string
}
type PF struct {
P Match
F *Filter
}
type PFT struct {
P Match
F *Filter
T time.Time
}
type PA struct {
P Match
A *Action
}
type PAT struct {
P Match
A *Action
T time.Time
}
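The Match helpers in the removed file above (JoinMatch, Split, WithBrackets) amount to joining all of a line's captures with a NUL separator and, for display, wrapping each capture in brackets. A minimal Python sketch with hypothetical names:

```python
SEP = "\x00"

def join_match(parts):
    # mirrors JoinMatch: one NUL-joined string per matched line
    return SEP.join(parts)

def with_brackets(parts):
    # mirrors WithBrackets: "[a][b]" display form
    return "".join(f"[{p}]" for p in parts)

m = join_match(["1.2.3.4", "alice"])
print(m.split(SEP))                 # ['1.2.3.4', 'alice']
print(with_brackets(["1.2.3.4", "alice"]))  # [1.2.3.4][alice]
```

NUL is a safe separator here because it cannot appear in a line read from a stream's output.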

bench/bench.sh Executable file (24 lines)

@ -0,0 +1,24 @@
set -e
if test "$(realpath "$PWD")" != "$(realpath "$(dirname "$0")/..")"
then
echo "You must be in reaction root directory"
exit 1
fi
if test ! -f "$1"
then
# shellcheck disable=SC2016
echo '$1 must be a configuration file (most probably in ./bench)'
exit 1
fi
rm -f reaction.db
cargo build --release --bins
sudo systemd-run --wait \
-p User="$(id -nu)" \
-p MemoryAccounting=yes \
-p IOAccounting=yes \
-p WorkingDirectory="$(pwd)" \
-p Environment=PATH=/run/current-system/sw/bin/ \
sh -c "for i in 1 2; do ./target/release/reaction start -c '$1' -l ERROR -s ./reaction.sock; done"

bench/heavy-load.yml Normal file (74 lines)

@ -0,0 +1,74 @@
---
# This configuration tests reaction's performance
# under a very high load.
#
# It keeps regexes very simple, to benchmark reaction's internals
# rather than the `regex` crate.
concurrency: 32
patterns:
num:
regex: '[0-9]{3}'
ip:
regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
ignore:
- 1.0.0.1
streams:
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 10001 | while read i; do echo found $i; done' ]
filters:
find1:
regex:
- '^found <num>'
retry: 9
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find2:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find3:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
find4:
regex:
- '^trouvé <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
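The retry/retryperiod semantics used throughout these bench configurations mean an action fires only once the same pattern has matched `retry` times within a `retryperiod` window. That can be sketched as a sliding window; this is an illustration with hypothetical names, not reaction's implementation:

```python
from collections import deque

def should_trigger(times, new_time, retry, period):
    # record the new match, drop matches older than the window,
    # then check whether the threshold is reached
    times.append(new_time)
    while times and new_time - times[0] > period:
        times.popleft()
    return len(times) >= retry

times = deque()
hits = [0, 10, 20]  # match timestamps in seconds
results = [should_trigger(times, t, retry=3, period=360) for t in hits]
print(results)  # [False, False, True]
```

With retry: 3 and a 6-minute period, the third match within the window is the one that triggers the actions.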

bench/nginx.yml Normal file (130 lines)

@ -0,0 +1,130 @@
# This is an extract of a real life configuration
#
# It reads an nginx's access.log in the following format:
# log_format '$remote_addr - $remote_user [$time_local] '
# '$host '
# '"$request" $status $bytes_sent '
# '"$http_referer" "$http_user_agent"';
#
# I can't make my access.log public for obvious privacy reasons.
#
# Unlike heavy-load.yml, this test is closer to real-life regex complexity.
#
# It has been created to test the performance improvements of
# the previous commit: ad6b0faa30c1af84360f66074a917b4bf6cda10a
#
# On this test, most lines don't match anything, so most time is spent matching regexes.
concurrency: 0
patterns:
ip:
ignore:
- 192.168.1.253
- 10.1.1.1
- 10.1.1.5
- 10.1.1.4
- 127.0.0.1
- ::1
regex: (?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))
untilEOL:
regex: .*$
streams:
nginx:
cmd:
- cat
- /tmp/access.log
filters:
directusFailedLogin:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip> .* "POST /repertoire/auth/login HTTP/..." 401 [0-9]+ .https://babos.land
- ^<ip> .* "POST /pompeani.art/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /leborddeleau/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /5eroue/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /edit/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.fr
retry: 6
retryperiod: 4h
gptbot:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip>.*"[^"]*AI2Bot[^"]*"$
- ^<ip>.*"[^"]*Amazonbot[^"]*"$
- ^<ip>.*"[^"]*Applebot[^"]*"$
- ^<ip>.*"[^"]*Applebot-Extended[^"]*"$
- ^<ip>.*"[^"]*Bytespider[^"]*"$
- ^<ip>.*"[^"]*CCBot[^"]*"$
- ^<ip>.*"[^"]*ChatGPT-User[^"]*"$
- ^<ip>.*"[^"]*ClaudeBot[^"]*"$
- ^<ip>.*"[^"]*Diffbot[^"]*"$
- ^<ip>.*"[^"]*DuckAssistBot[^"]*"$
- ^<ip>.*"[^"]*FacebookBot[^"]*"$
- ^<ip>.*"[^"]*GPTBot[^"]*"$
- ^<ip>.*"[^"]*Google-Extended[^"]*"$
- ^<ip>.*"[^"]*Kangaroo Bot[^"]*"$
- ^<ip>.*"[^"]*Meta-ExternalAgent[^"]*"$
- ^<ip>.*"[^"]*Meta-ExternalFetcher[^"]*"$
- ^<ip>.*"[^"]*OAI-SearchBot[^"]*"$
- ^<ip>.*"[^"]*PerplexityBot[^"]*"$
- ^<ip>.*"[^"]*Timpibot[^"]*"$
- ^<ip>.*"[^"]*Webzio-Extended[^"]*"$
- ^<ip>.*"[^"]*YouBot[^"]*"$
- ^<ip>.*"[^"]*omgili[^"]*"$
slskd-failedLogin:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip> .* "POST /slskd/api/v0/session HTTP/..." 401 [0-9]+ .https://ppom.me
- ^<ip> .* "POST /kiosque/api/v0/session HTTP/..." 401 [0-9]+ .https://babos.land
retry: 3
retryperiod: 1h
suspectRequests:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip> .*"GET /(?:[^/" ]*/)*wp-login\.php
- ^<ip> .*"GET /(?:[^/" ]*/)*wp-includes
- '^<ip> .*"GET /(?:[^/" ]*/)*\.env '
- '^<ip> .*"GET /(?:[^/" ]*/)*config\.json '
- '^<ip> .*"GET /(?:[^/" ]*/)*info\.php '
- '^<ip> .*"GET /(?:[^/" ]*/)*owa/auth/logon.aspx '
- '^<ip> .*"GET /(?:[^/" ]*/)*auth.html '
- '^<ip> .*"GET /(?:[^/" ]*/)*auth1.html '
- '^<ip> .*"GET /(?:[^/" ]*/)*password.txt '
- '^<ip> .*"GET /(?:[^/" ]*/)*passwords.txt '
- '^<ip> .*"GET /(?:[^/" ]*/)*dns-query '
- '^<ip> .*"GET /(?:[^/" ]*/)*\.git/ '
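As a sanity check of the kind of rule this file relies on, the wp-login regex can be exercised with Python's `re`; the `IP` stand-in below is a simplified ipv4-only pattern, not reaction's full `<ip>` regex:

```python
import re

# simplified ipv4-only stand-in for the <ip> pattern (illustration only)
IP = r"(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})"
pattern = re.compile(IP + r' .*"GET /(?:[^/" ]*/)*wp-login\.php')

line = ('203.0.113.7 - - [01/Jan/2026:00:00:00 +0000] example.org '
        '"GET /blog/wp-login.php HTTP/1.1" 404 0 "-" "-"')
m = pattern.match(line)
print(bool(m))  # True
```

The `(?:[^/" ]*/)*` group lets the rule match the probe at any path depth while refusing to cross into the quoted referer or user-agent fields.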


@ -0,0 +1,86 @@
---
# This configuration tests reaction's performance
# under a very high load.
#
# It keeps regexes very simple, to benchmark reaction's internals
# rather than the `regex` crate.
concurrency: 32
plugins:
- path: "/home/ppom/prg/reaction/target/release/reaction-plugin-virtual"
patterns:
num:
regex: '[0-9]{3}'
ip:
regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
ignore:
- 1.0.0.1
streams:
virtual:
type: virtual
filters:
find0:
regex:
- '^<num>$'
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 1001 | while read i; do echo found $i; done' ]
filters:
find1:
regex:
- '^found <num>'
retry: 9
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find2:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find3:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual
find4:
regex:
- '^trouvé <num>'
retry: 480
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual


@ -0,0 +1,74 @@
---
# This configuration tests reaction's performance
# under a very high load.
#
# It keeps regexes very simple, to benchmark reaction's internals
# rather than the `regex` crate.
concurrency: 32
patterns:
num:
regex: '[0-9]{3}'
ip:
regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
ignore:
- 1.0.0.1
streams:
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 1001 | while read i; do echo found $i; done' ]
filters:
find1:
regex:
- '^found <num>'
retry: 9
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find2:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find3:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
find4:
regex:
- '^trouvé <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false

build.rs Normal file (39 lines)

@ -0,0 +1,39 @@
use std::{
env::var_os,
io::{self, ErrorKind},
};
use clap_complete::shells;
// SubCommand defined here
include!("src/cli.rs");
fn main() -> io::Result<()> {
if var_os("PROFILE").ok_or(ErrorKind::NotFound)? == "release" {
let out_dir = PathBuf::from(var_os("OUT_DIR").ok_or(ErrorKind::NotFound)?).join("../../..");
// Build CLI
let cli = clap::Command::new("reaction");
let cli = SubCommand::augment_subcommands(cli);
// We have to manually add metadata because it is lost: only subcommands are appended
let cli = cli.about("Scan logs and take action").long_about(
"A daemon that scans program outputs for repeated patterns, and takes action.
Aims at being more versatile and flexible than fail2ban, while being faster and having simpler configuration.
See usage examples, service configurations and good practices on the wiki: https://reaction.ppom.me");
// Generate completions
clap_complete::generate_to(shells::Bash, &mut cli.clone(), "reaction", out_dir.clone())?;
clap_complete::generate_to(shells::Fish, &mut cli.clone(), "reaction", out_dir.clone())?;
clap_complete::generate_to(shells::Zsh, &mut cli.clone(), "reaction", out_dir.clone())?;
// Generate manpages
clap_mangen::generate_to(cli, out_dir.clone())?;
}
println!("cargo::rerun-if-changed=build.rs");
println!("cargo::rerun-if-changed=src/cli.rs");
Ok(())
}

config/README.md Normal file (8 lines)

@ -0,0 +1,8 @@
# Configuration
Here reside two equivalent configurations, one in YAML and one in JSONnet.
They serve as a configuration reference for now, pending a more complete reference in the wiki.
Please take a look at the [wiki](https://reaction.ppom.me) for security implications of using reaction,
FAQ, JSONnet tips, and multiple examples of filters and actions.


@ -1,101 +0,0 @@
// Those strings will be substituted in each shell() call
local substitutions = [
['OUTFILE', '"$HOME/.local/share/watch/logs-$(date +%F)"'],
['DATE', '"$(date "+%F %T")"'],
];
// Substitute each item of substitutions in str
local sub(str) = std.foldl(
(function(changedstr, kv) std.strReplace(changedstr, kv[0], kv[1])),
substitutions,
str
);
local shell(prg) = [
'sh',
'-c',
sub(prg),
];
local log(line) = shell('echo DATE ' + std.strReplace(line, '\n', ' ') + '>> OUTFILE');
{
start: [
shell('mkdir -p "$(dirname OUTFILE)"'),
log('start'),
],
stop: [
log('stop'),
],
patterns: {
all: { regex: '.*' },
},
streams: {
// Be notified about each window focus change
// FIXME DOESN'T WORK
sway: {
cmd: shell(|||
swaymsg -rm -t subscribe "['window']" | jq -r 'select(.change == "focus") | .container | if has("app_id") and .app_id != null then .app_id else .window_properties.class end'
|||),
filters: {
send: {
regex: ['^<all>$'],
actions: {
send: { cmd: log('focus <all>') },
},
},
},
},
// Be notified when user is away
swayidle: {
// FIXME echo stop and start instead?
cmd: ['swayidle', 'timeout', '30', 'echo sleep', 'resume', 'echo resume'],
filters: {
send: {
regex: ['^<all>$'],
actions: {
send: { cmd: log('<all>') },
},
},
},
},
// Be notified about tmux activity
// Limitation: can't handle multiple concurrently attached sessions
// tmux: {
// cmd: shell(|||
// LAST_TIME="0"
// LAST_ACTIVITY=""
// while true;
// do
// NEW_TIME=$(tmux display -p '#{session_activity}')
// if [ -n "$NEW_TIME" ] && [ "$NEW_TIME" -gt "$LAST_TIME" ]
// then
// LAST_TIME="$NEW_TIME"
// NEW_ACTIVITY="$(tmux display -p '#{pane_current_command} #{pane_current_path}')"
// if [ -n "$NEW_ACTIVITY" ] && [ "$NEW_ACTIVITY" != "$LAST_ACTIVITY" ]
// then
// LAST_ACTIVITY="$NEW_ACTIVITY"
// echo "tmux $NEW_ACTIVITY"
// fi
// fi
// sleep 10
// done
// |||),
// filters: {
// send: {
// regex: ['^tmux <all>$'],
// actions: {
// send: { cmd: log('tmux <all>') },
// },
// },
// },
// },
// Be notified about firefox activity
// TODO
},
}


@ -7,61 +7,106 @@
// strongly encouraged to take a look at the full documentation: https://reaction.ppom.me
// JSONnet functions
local iptables(args) = ['ip46tables', '-w'] + args;
// ip46tables is a minimal C program (only POSIX dependencies) present in a
// subdirectory of this repo.
// it handles both the ipv4/iptables and ipv6/ip6tables commands
local ipBan(cmd) = [cmd, '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'];
local ipUnban(cmd) = [cmd, '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'];
// See meaning and usage of this function around L106
// See meaning and usage of this function around L180
local banFor(time) = {
ban: {
cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ban4: {
cmd: ipBan('iptables'),
ipv4only: true,
},
unban: {
ban6: {
cmd: ipBan('ip6tables'),
ipv6only: true,
},
unban4: {
cmd: ipUnban('iptables'),
after: time,
cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ipv4only: true,
},
unban6: {
cmd: ipUnban('ip6tables'),
after: time,
ipv6only: true,
},
};
// See usage of this function around L90
// Generates commands for both iptables and ip6tables
local ip46tables(arguments) = [
['iptables', '-w'] + arguments,
['ip6tables', '-w'] + arguments,
];
{
// patterns are substituted in regexes.
// when a filter performs an action, it replaces the found pattern
patterns: {
ip: {
// reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
// jsonnet's @'string' is for verbatim strings
// simple version: regex: @'(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
regex: @'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))',
ignore: ['127.0.0.1', '::1'],
// Patterns can be ignored based on regexes; the regex must match the whole string detected by the pattern
// ignoreregex: [@'10\.0\.[0-9]{1,3}\.[0-9]{1,3}'],
name: {
// reaction regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
// common patterns have a 'regex' field
regex: '[a-z]+',
// patterns can ignore specific strings
ignore: ['cecilia'],
// patterns can also be ignored based on regexes; the regex must match the whole string detected by the pattern
ignoreregex: [
// ignore names starting with 'jo'
'jo.*',
],
},
ip: {
// patterns can have a special 'ip' type that matches both ipv4 and ipv6
// or 'ipv4' or 'ipv6' to match only that ip version
type: 'ip',
ignore: ['127.0.0.1', '::1'],
// they can also ignore whole CIDR ranges of ip
ignorecidr: ['10.0.0.0/8'],
// last but not least, patterns of type ip, ipv4, ipv6 can also group their matched ips by mask
// ipv4mask: 30
// this means that ipv6 matches will be converted to their network part.
ipv6mask: 64,
// for example, "2001:db8:85a3:9de5::8a2e:370:7334" will be converted to "2001:db8:85a3:9de5::/64".
},
// ipv4: {
// type: 'ipv4',
// ignore: ...
// ipv4mask: ...
// },
},
// where the state (database) is stored
// defaults to '.', which means reaction's working directory.
// The systemd service starts reaction in /var/lib/reaction.
state_directory: '.',
// if set to a positive number: max number of concurrent actions
// if set to a negative number: no limit
// if not specified or set to 0: defaults to the number of CPUs on the system
concurrency: 0,
// Those commands will be executed in order at start, before everything else
start: [
start:
// Create an iptables chain for reaction
iptables(['-N', 'reaction']),
ip46tables(['-N', 'reaction']) +
// Insert this chain as the first item of the INPUT & FORWARD chains (for incoming connections)
iptables(['-I', 'INPUT', '-p', 'all', '-j', 'reaction']),
iptables(['-I', 'FORWARD', '-p', 'all', '-j', 'reaction']),
],
ip46tables(['-I', 'INPUT', '-p', 'all', '-j', 'reaction']) +
ip46tables(['-I', 'FORWARD', '-p', 'all', '-j', 'reaction']),
// Those commands will be executed in order at stop, after everything else
stop: [
stop:
// Remove the chain from the INPUT & FORWARD chains
iptables(['-D', 'INPUT', '-p', 'all', '-j', 'reaction']),
iptables(['-D', 'FORWARD', '-p', 'all', '-j', 'reaction']),
ip46tables(['-D', 'INPUT', '-p', 'all', '-j', 'reaction']) +
ip46tables(['-D', 'FORWARD', '-p', 'all', '-j', 'reaction']) +
// Empty the chain
iptables(['-F', 'reaction']),
ip46tables(['-F', 'reaction']) +
// Delete the chain
iptables(['-X', 'reaction']),
],
ip46tables(['-X', 'reaction']),
// streams are commands
// they are run and their output is captured
@ -73,41 +118,85 @@ local banFor(time) = {
// note that if the command is not in environment's `PATH`
// its full path must be given.
cmd: ['journalctl', '-n0', '-fu', 'sshd.service'],
// filters run actions when they match regexes on a stream
filters: {
// filters have a user-defined name
failedlogin: {
// reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
// reaction's regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
regex: [
// <ip> is predefined in the patterns section
// ip's regex is inserted in the following regex
@'authentication failure;.*rhost=<ip>',
@'Failed password for .* from <ip>',
@'Invalid user .* from <ip>',
@'Connection (reset|closed) by (authenticating|invalid) user .* <ip>',
@'banner exchange: Connection from <ip> port [0-9]*: invalid format',
],
// if retry and retryperiod are defined,
// the actions will only take place if a same pattern is
// found `retry` times in a `retryperiod` interval
retry: 3,
// format is defined here: https://pkg.go.dev/time#ParseDuration
// format is defined as follows: <integer> <unit>
// - whitespace between the integer and unit is optional
// - integer must be positive (>= 0)
// - unit can be one of:
// - ms / millis / millisecond / milliseconds
// - s / sec / secs / second / seconds
// - m / min / mins / minute / minutes
// - h / hour / hours
// - d / day / days
retryperiod: '6h',
// duplicate specifies how to handle matches after an action has already been taken.
// 3 options are possible:
// - extend (default): update the pending actions' time, so they run later
// - ignore: don't do anything, ignore the match
// - rerun: run the actions again. so we may have the same pending actions multiple times.
// (this was the default before 2.2.0)
// duplicate: extend
// actions are run by the filter when regexes are matched
actions: {
// actions have a user-defined name
ban: {
cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ban4: {
cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
// this optional field makes the action run only when a pattern of type ip matched an ipv4
ipv4only: true,
},
unban: {
cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ban6: {
cmd: ['ip6tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
// this optional field makes the action run only when a pattern of type ip matched an ipv6
ipv6only: true,
},
unban4: {
cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
// if after is defined, the action will not take place immediately, but after a specified duration
// same format as retryperiod
after: '48h',
after: '2 days',
// let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
// if you want reaction to run those pending commands before exiting, you can set this:
// onexit: true,
// (defaults to false)
// here it is not useful because we will flush and delete the chain containing the bans anyway
// (with the stop commands)
ipv4only: true,
},
unban6: {
cmd: ['ip6tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
after: '2 days',
ipv6only: true,
},
mail: {
cmd: ['sendmail', '...', '<ip>'],
// some commands, such as alerting commands, are "oneshot".
// this means they'll be run only once, and won't be executed again when reaction is restarted
oneshot: true,
},
},
// or use the banFor function defined at the beginning!
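As that closing comment suggests, the four explicit ban/unban actions can be generated by the banFor helper defined at the top of this file. A hedged sketch of what such a filter could look like (field values illustrative):

```jsonnet
// inside streams.ssh.filters
failedlogin: {
  regex: [@'Failed password for .* from <ip>'],
  retry: 3,
  retryperiod: '6h',
  // banFor('2 days') expands to ban4/ban6/unban4/unban6,
  // with the unban actions scheduled 2 days later
  actions: banFor('2 days'),
},
```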


@ -1 +0,0 @@
../app/example.yml

config/example.yml Normal file (184 lines)

@ -0,0 +1,184 @@
---
# This example configuration file is a good starting point, but you're
# strongly encouraged to take a look at the full documentation: https://reaction.ppom.me
#
# This file is using the well-established YAML configuration language.
# Note that the more powerful JSONnet configuration language is also supported
# and that the documentation uses JSONnet
# definitions are just a place to put chunks of conf you want to reuse in another place
# using YAML anchors `&name` and pointers `*name`
# definitions are not read by reaction
definitions:
- &ip4tablesban [ 'iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip6tablesban [ 'ip6tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip4tablesunban [ 'iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip6tablesunban [ 'ip6tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
# ip46tables is a minimal C program (only POSIX dependencies) present as a subdirectory.
# it handles both the ipv4/iptables and ipv6/ip6tables commands
# where the state (database) is stored
# defaults to '.', which means reaction's working directory.
# The systemd service starts reaction in /var/lib/reaction.
state_directory: .
# if set to a positive number → max number of concurrent actions
# if set to a negative number → no limit
# if not specified or set to 0 → defaults to the number of CPUs on the system
concurrency: 0
# patterns are substituted in regexes.
# when a filter performs an action, it replaces the found pattern
patterns:
name:
# reaction regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
# common patterns have a 'regex' field
regex: '[a-z]+'
# patterns can ignore specific strings
ignore:
- 'cecilia'
# patterns can also be ignored based on regexes; the regex must match the whole string detected by the pattern
ignoreregex:
# ignore names starting with 'jo'
- 'jo.*'
ip:
# patterns can have a special 'ip' type that matches both ipv4 and ipv6
# or 'ipv4' or 'ipv6' to match only that ip version
type: ip
ignore:
- 127.0.0.1
- ::1
# they can also ignore whole CIDR ranges of ip
ignorecidr:
- 10.0.0.0/8
# last but not least, patterns of type ip, ipv4, ipv6 can also group their matched ips by mask
# ipv4mask: 30
# this means that ipv6 matches will be converted to their network part.
ipv6mask: 64
# for example, "2001:db8:85a3:9de5::8a2e:370:7334" will be converted to "2001:db8:85a3:9de5::/64".
# ipv4:
# type: ipv4
# ignore: ...
# Those commands will be executed in order at start, before everything else
start:
- [ 'iptables', '-w', '-N', 'reaction' ]
- [ 'ip6tables', '-w', '-N', 'reaction' ]
- [ 'iptables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
# Those commands will be executed in order at stop, after everything else
stop:
- [ 'iptables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-F', 'reaction' ]
- [ 'ip6tables', '-w', '-F', 'reaction' ]
- [ 'iptables', '-w', '-X', 'reaction' ]
- [ 'ip6tables', '-w', '-X', 'reaction' ]
# streams are commands
# they are run and their output is captured
# *example:* `tail -f /var/log/nginx/access.log`
# their output will be used by one or more filters
streams:
# streams have a user-defined name
ssh:
# note that if the command is not in environment's `PATH`
# its full path must be given.
cmd: [ 'journalctl', '-n0', '-fu', 'sshd.service' ]
# filters run actions when they match regexes on a stream
filters:
# filters have a user-defined name
failedlogin:
# reaction's regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
regex:
# <ip> is predefined in the patterns section
# ip's regex is inserted in the following regex
- 'authentication failure;.*rhost=<ip>'
- 'Failed password for .* from <ip>'
- 'Invalid user .* from <ip>'
- 'Connection (reset|closed) by (authenticating|invalid) user .* <ip>'
- 'banner exchange: Connection from <ip> port [0-9]*: invalid format'
# if retry and retryperiod are defined,
# the actions will only take place if a same pattern is
# found `retry` times in a `retryperiod` interval
retry: 3
# format is defined as follows: <integer> <unit>
# - whitespace between the integer and unit is optional
# - integer must be positive (>= 0)
# - unit can be one of:
# - ms / millis / millisecond / milliseconds
# - s / sec / secs / second / seconds
# - m / min / mins / minute / minutes
# - h / hour / hours
# - d / day / days
retryperiod: 6h
# duplicate specifies how to handle matches after an action has already been taken.
# 3 options are possible:
# - extend (default): update the pending actions' time, so they run later
# - ignore: don't do anything, ignore the match
# - rerun: run the actions again. so we may have the same pending actions multiple times.
# (this was the default before 2.2.0)
# duplicate: extend
# actions are run by the filter when regexes are matched
actions:
# actions have a user-defined name
ban4:
# YAML substitutes *reference with the value anchored at &reference
cmd: *ip4tablesban
# this optional field makes the action run only when a pattern of type ip contains an IPv4 address
ipv4only: true
ban6:
cmd: *ip6tablesban
# this optional field makes the action run only when a pattern of type ip contains an IPv6 address
ipv6only: true
unban4:
cmd: *ip4tablesunban
# if after is defined, the action will not take place immediately, but after a specified duration
# same format as retryperiod
after: '2 days'
# suppose reaction is quitting: does it run all the pending commands which had an `after` duration set?
# if you want reaction to run those pending commands before exiting, you can set this:
# onexit: true
# (defaults to false)
# here it is not useful because the stop commands will flush and delete the chain containing the bans anyway
ipv4only: true
unban6:
cmd: *ip6tablesunban
after: '2 days'
ipv6only: true
mail:
cmd: ['sendmail', '...', '<ip>']
# some commands, such as alerting commands, are "oneshot".
# this means they'll be run only once, and won't be executed again when reaction is restarted
oneshot: true
# persistence
# tl;dr: when an `after` action is set in a filter, that filter acts as a 'jail',
# which is persisted across reboots.
#
# when a filter is triggered, there are 2 flows:
#
# if none of its actions have an `after` directive set:
# no action will be replayed.
#
# else (if at least one action has an `after` directive set):
# if reaction stops while `after` actions are pending:
# and reaction starts again while those actions would still be pending:
# reaction executes the past actions (actions without after or with then+after < now)
# and plans the execution of future actions (actions with then+after > now)
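The duration format documented above (used by `retryperiod` and `after`) can be sketched as a small parser. This is illustrative only; reaction's actual parser may differ:

```rust
use std::time::Duration;

/// Parse durations like "6h", "2 days" or "500 ms".
/// Sketch of the documented format, not reaction's real parser:
/// optional whitespace between integer and unit, integer must be positive.
fn parse_duration(input: &str) -> Result<Duration, String> {
    let s = input.trim();
    // Split at the first non-digit character: digits first, unit after.
    let split = s
        .find(|c: char| !c.is_ascii_digit())
        .ok_or("missing unit")?;
    let (num, unit) = s.split_at(split);
    let n: u64 = num.parse().map_err(|_| "invalid integer")?;
    let millis = match unit.trim() {
        "ms" | "millis" | "millisecond" | "milliseconds" => n,
        "s" | "sec" | "secs" | "second" | "seconds" => n * 1_000,
        "m" | "min" | "mins" | "minute" | "minutes" => n * 60_000,
        "h" | "hour" | "hours" => n * 3_600_000,
        "d" | "day" | "days" => n * 86_400_000,
        other => return Err(format!("unknown unit: {other}")),
    };
    Ok(Duration::from_millis(millis))
}
```

With this, `parse_duration("6h")` and `parse_duration("2 days")` both succeed, while a bare integer like `"10"` is rejected for lacking a unit.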


@ -1,72 +0,0 @@
---
patterns:
num:
regex: '[0-9]+'
ip:
regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
ignore:
- 1.0.0.1
concurrency: 0
streams:
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 100010 | while read i; do echo found $(($i % 100)); done' ]
filters:
findIP:
regex:
- '^found <num>$'
retry: 50
retryperiod: 1m
actions:
damn:
cmd: [ 'sleep', '0.<num>' ]
undamn:
cmd: [ 'sleep', '0.<num>' ]
after: 1m
onexit: false
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 100010 | while read i; do echo prout $(($i % 100)); done' ]
filters:
findIP:
regex:
- '^prout <num>$'
retry: 50
retryperiod: 1m
actions:
damn:
cmd: [ 'sleep', '0.<num>' ]
undamn:
cmd: [ 'sleep', '0.<num>' ]
after: 1m
onexit: false
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 100010 | while read i; do echo nanana $(($i % 100)); done' ]
filters:
findIP:
regex:
- '^nanana <num>$'
retry: 50
retryperiod: 2m
actions:
damn:
cmd: [ 'sleep', '0.<num>' ]
undamn:
cmd: [ 'sleep', '0.<num>' ]
after: 1m
onexit: false
tailDown4:
cmd: [ 'sh', '-c', 'sleep 2; seq 100010 | while read i; do echo nanana $(($i % 100)); done' ]
filters:
findIP:
regex:
- '^nomatch <num>$'
retry: 50
retryperiod: 2m
actions:
damn:
cmd: [ 'sleep', '0.<num>' ]
undamn:
cmd: [ 'sleep', '0.<num>' ]
after: 1m
onexit: false


@ -1,50 +0,0 @@
{
patterns: {
num: {
regex: '[0-9]+',
},
},
streams: {
tailDown1: {
cmd: ['sh', '-c', "echo 01 02 03 04 05 | tr ' ' '\n' | while read i; do sleep 0.5; echo found $i; done"],
filters: {
findIP1: {
regex: ['^found <num>$'],
retry: 1,
retryperiod: '2m',
actions: {
damn: {
cmd: ['echo', '<num>'],
},
undamn: {
cmd: ['echo', 'undamn', '<num>'],
after: '1m',
onexit: true,
},
},
},
},
},
tailDown2: {
cmd: ['sh', '-c', "echo 11 12 13 14 15 11 13 15 | tr ' ' '\n' | while read i; do sleep 0.3; echo found $i; done"],
filters: {
findIP2: {
regex: ['^found <num>$'],
retry: 2,
retryperiod: '2m',
actions: {
damn: {
cmd: ['echo', '<num>'],
},
undamn: {
cmd: ['echo', 'undamn', '<num>'],
after: '1m',
onexit: true,
},
},
},
},
},
},
}


@ -2,15 +2,12 @@
[Unit]
Description=A daemon that scans program outputs for repeated patterns, and takes action.
Documentation=https://reaction.ppom.me
[Install]
WantedBy=multi-user.target
# Ensure reaction will insert its chain after docker has inserted theirs. Only useful when iptables & docker are used
# After=docker.service
# See `man systemd.exec` and `man systemd.service` for most options below
[Service]
ExecStart=/usr/bin/reaction start -c /etc/reaction.jsonnet
ExecStart=/usr/local/bin/reaction start -c /etc/reaction/
# Ask systemd to create /var/lib/reaction (/var/lib/ is implicit)
StateDirectory=reaction
@ -18,3 +15,8 @@ StateDirectory=reaction
RuntimeDirectory=reaction
# Start reaction in its state directory
WorkingDirectory=/var/lib/reaction
# Let reaction kill its child processes first
KillMode=mixed
[Install]
WantedBy=multi-user.target


@ -1,160 +0,0 @@
// This is the extensive configuration used on a **real** server!
local banFor(time) = {
ban: {
cmd: ['ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
after: time,
cmd: ['ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
};
{
patterns: {
// IPs can be IPv4 or IPv6
// ip46tables (C program also in this repo) handles running the right commands
ip: {
regex: @'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))',
// Ignore all from 192.168.1.1 to 192.168.1.255
ignore: std.makeArray(255, function(i) '192.168.1.' + (i + 1)),
},
},
start: [
['ip46tables', '-w', '-N', 'reaction'],
['ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
],
stop: [
['ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
['ip46tables', '-w', '-F', 'reaction'],
['ip46tables', '-w', '-X', 'reaction'],
],
streams: {
// Ban hosts failing to connect via ssh
ssh: {
cmd: ['journalctl', '-fn0', '-u', 'sshd.service'],
filters: {
failedlogin: {
regex: [
@'authentication failure;.*rhost=<ip>',
@'Connection (reset|closed) by (authenticating|invalid) user .* <ip>',
@'Failed password for .* from <ip>',
],
retry: 3,
retryperiod: '6h',
actions: banFor('48h'),
},
},
},
// Ban hosts which knock on closed ports.
// It needs this iptables chain to be used to drop packets:
// ip46tables -N log-refuse
// ip46tables -A log-refuse -p tcp --syn -j LOG --log-level info --log-prefix 'refused connection: '
// ip46tables -A log-refuse -m pkttype ! --pkt-type unicast -j nixos-fw-refuse
// ip46tables -A log-refuse -j DROP
kernel: {
cmd: ['journalctl', '-fn0', '-k'],
filters: {
portscan: {
regex: ['refused connection: .*SRC=<ip>'],
retry: 4,
retryperiod: '1h',
actions: banFor('720h'),
},
},
},
// Note: nextcloud and vaultwarden could also be filters on the nginx stream
// I used their own logs instead because there are fewer logs to parse than the front webserver's
// Ban hosts failing to connect to Nextcloud
nextcloud: {
cmd: ['journalctl', '-fn0', '-u', 'phpfpm-nextcloud.service'],
filters: {
failedLogin: {
regex: [
@'"remoteAddr":"<ip>".*"message":"Login failed:',
@'"remoteAddr":"<ip>".*"message":"Trusted domain error.',
],
retry: 3,
retryperiod: '1h',
actions: banFor('1h'),
},
},
},
// Ban hosts failing to connect to vaultwarden
vaultwarden: {
cmd: ['journalctl', '-fn0', '-u', 'vaultwarden.service'],
filters: {
failedlogin: {
actions: banFor('2h'),
regex: [@'Username or password is incorrect\. Try again\. IP: <ip>\. Username:'],
retry: 3,
retryperiod: '1h',
},
},
},
// Used with this nginx log configuration:
// log_format withhost '$remote_addr - $remote_user [$time_local] $host "$request" $status $bytes_sent "$http_referer" "$http_user_agent"';
// access_log /var/log/nginx/access.log withhost;
nginx: {
cmd: ['tail', '-n0', '-f', '/var/log/nginx/access.log'],
filters: {
// Ban hosts failing to connect to Directus
directus: {
regex: [
@'^<ip> .* "POST /auth/login HTTP/..." 401 [0-9]+ .https://directusdomain',
],
retry: 6,
retryperiod: '4h',
actions: banFor('4h'),
},
// Ban hosts presenting themselves as bots of ChatGPT
gptbot: {
regex: [@'^<ip>.*GPTBot/1.0'],
actions: banFor('720h'),
},
// Ban hosts failing to connect to slskd
slskd: {
regex: [
@'^<ip> .* "POST /api/v0/session HTTP/..." 401 [0-9]+ .https://slskddomain',
],
retry: 3,
retryperiod: '1h',
actions: banFor('6h'),
},
// Ban suspect HTTP requests
// Those are frequent malicious requests I got from bots
// Make sure you don't have honest use cases for those requests, or your clients may be banned for 30 days!
suspectRequests: {
regex: [
// (?:[^/" ]*/)* is a "non-capturing group" regex that allows for subpath(s)
// example: /code/.env should be matched as well as /.env
// ^^^^^
@'^<ip>.*"GET /(?:[^/" ]*/)*\.env ',
@'^<ip>.*"GET /(?:[^/" ]*/)*info\.php ',
@'^<ip>.*"GET /(?:[^/" ]*/)*owa/auth/logon.aspx ',
@'^<ip>.*"GET /(?:[^/" ]*/)*auth.html ',
@'^<ip>.*"GET /(?:[^/" ]*/)*auth1.html ',
@'^<ip>.*"GET /(?:[^/" ]*/)*password.txt ',
@'^<ip>.*"GET /(?:[^/" ]*/)*passwords.txt ',
@'^<ip>.*"GET /(?:[^/" ]*/)*dns-query ',
// Do not include this if you have a Wordpress website ;)
@'^<ip>.*"GET /(?:[^/" ]*/)*wp-login\.php',
@'^<ip>.*"GET /(?:[^/" ]*/)*wp-includes',
// Do not include this if a client must retrieve a config.json file ;)
@'^<ip>.*"GET /(?:[^/" ]*/)*config\.json ',
],
actions: banFor('720h'),
},
},
},
},
}
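As the comments in the `patterns` sections note, `<name>` tokens in filter regexes are replaced by the corresponding pattern's regex. That substitution can be sketched as plain string replacement; `compile_filter_regex` below is a hypothetical helper for illustration, not reaction's API:

```rust
use std::collections::HashMap;

/// Expand `<name>` placeholders in a filter regex with the pattern regexes,
/// wrapping each in a named capture group so the match can be extracted later.
/// Hypothetical helper, for illustration only.
fn compile_filter_regex(filter: &str, patterns: &HashMap<&str, &str>) -> String {
    let mut out = filter.to_string();
    for (name, regex) in patterns {
        // `<num>` becomes `(?P<num>[0-9]+)`, etc.
        out = out.replace(&format!("<{name}>"), &format!("(?P<{name}>{regex})"));
    }
    out
}
```

The resulting string would then be compiled with the regex engine linked from the filter docs; named groups are what let actions substitute `<ip>` back into their command arguments.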


@ -1,63 +0,0 @@
{
patterns: {
num: {
regex: '[0-9]+',
ignore: ['1'],
// ignoreregex: ['2.?'],
},
letter: {
regex: '[a-z]+',
ignore: ['b'],
// ignoreregex: ['b.?'],
},
},
streams: {
tailDown1: {
cmd: ['sh', '-c', "echo 1_abc 2_abc 3_abc abc_1 abc_2 abc_3 | tr ' ' '\n' | while read i; do sleep 1; echo found $i; done; sleep 30"],
filters: {
findIP: {
regex: [
'^found <num>_<letter>$',
'^found <letter>_<num>$',
],
retry: 2,
retryperiod: '30s',
actions: {
damn: {
cmd: ['echo', '<num>'],
},
undamn: {
cmd: ['echo', 'undamn', '<num>'],
after: '28s',
onexit: true,
},
},
},
},
},
tailDown2: {
cmd: ['sh', '-c', "echo 1_abc 2_abc 3_abc abc_1 abc_2 abc_3 | tr ' ' '\n' | while read i; do sleep 1; echo found $i; done; sleep 30"],
filters: {
findIP: {
regex: [
'^found <num>_<letter>$',
'^found <letter>_<num>$',
],
retry: 2,
retryperiod: '30s',
actions: {
damn: {
cmd: ['echo', '<num>'],
},
undamn: {
cmd: ['echo', 'undamn', '<num>'],
after: '28s',
onexit: true,
},
},
},
},
},
},
}

crates/treedb/Cargo.toml Normal file

@ -0,0 +1,23 @@
[package]
name = "treedb"
version = "1.0.0"
edition = "2024"
[features]
test = []
[dependencies]
chrono.workspace = true
futures.workspace = true
serde.workspace = true
serde_json.workspace = true
thiserror.workspace = true
tokio.workspace = true
tokio.features = ["rt-multi-thread", "macros", "io-util", "time", "fs", "tracing"]
tokio-util.workspace = true
tokio-util.features = ["rt"]
tracing.workspace = true
[dev-dependencies]
tempfile.workspace = true


@ -0,0 +1,188 @@
use std::{
collections::{BTreeMap, BTreeSet},
time::Duration,
};
use chrono::DateTime;
use serde_json::Value;
use crate::time::Time;
/// Tries to convert a [`Value`] into a [`String`]
pub fn to_string(val: &Value) -> Result<String, String> {
Ok(val.as_str().ok_or("not a string")?.to_owned())
}
/// Tries to convert a [`Value`] into a [`u64`]
pub fn to_u64(val: &Value) -> Result<u64, String> {
val.as_u64().ok_or("not a u64".into())
}
/// Old way of converting time: with chrono's serialization
fn old_string_to_time(val: &str) -> Result<Time, String> {
let time = DateTime::parse_from_rfc3339(val).map_err(|err| err.to_string())?;
Ok(Duration::new(time.timestamp() as u64, time.timestamp_subsec_nanos()).into())
}
/// New way of converting time: with our own implementation
fn new_string_to_time(val: &str) -> Result<Time, String> {
let nanos: u128 = val.parse().map_err(|_| "not a number")?;
Ok(Duration::new(
(nanos / 1_000_000_000) as u64,
(nanos % 1_000_000_000) as u32,
)
.into())
}
/// Tries to convert a [`&str`] into a [`Time`]
fn string_to_time(val: &str) -> Result<Time, String> {
match new_string_to_time(val) {
Err(err) => match old_string_to_time(val) {
Err(_) => Err(err),
ok => ok,
},
ok => ok,
}
}
/// Tries to convert a [`Value`] into a [`Time`]
pub fn to_time(val: &Value) -> Result<Time, String> {
string_to_time(val.as_str().ok_or("not a string number")?)
}
/// Tries to convert a [`Value`] into a [`Vec<String>`]
pub fn to_match(val: &Value) -> Result<Vec<String>, String> {
val.as_array()
.ok_or("not an array")?
.iter()
.map(to_string)
.collect()
}
/// Tries to convert a [`Value`] into a [`BTreeSet<Time>`]
pub fn to_timeset(val: &Value) -> Result<BTreeSet<Time>, String> {
val.as_array()
.ok_or("not an array")?
.iter()
.map(to_time)
.collect()
}
/// Tries to convert a [`Value`] into a [`BTreeMap<Time, u64>`]
pub fn to_timemap(val: &Value) -> Result<BTreeMap<Time, u64>, String> {
val.as_object()
.ok_or("not a map")?
.iter()
.map(|(key, value)| Ok((string_to_time(key)?, to_u64(value)?)))
.collect()
}
#[cfg(test)]
mod tests {
use std::collections::BTreeMap;
use super::*;
#[test]
fn test_to_string() {
assert_eq!(to_string(&("".into())), Ok("".into()));
assert_eq!(to_string(&("ploup".into())), Ok("ploup".into()));
assert!(to_string(&(["ploup"].into())).is_err());
assert!(to_string(&(true.into())).is_err());
assert!(to_string(&(8.into())).is_err());
assert!(to_string(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_u64() {
assert_eq!(to_u64(&(0.into())), Ok(0));
assert_eq!(to_u64(&(8.into())), Ok(8));
assert_eq!(to_u64(&(u64::MAX.into())), Ok(u64::MAX));
assert!(to_u64(&("ploup".into())).is_err());
assert!(to_u64(&(["ploup"].into())).is_err());
assert!(to_u64(&(true.into())).is_err());
assert!(to_u64(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_time() {
assert_eq!(to_time(&"123456".into()).unwrap(), Time::from_nanos(123456),);
assert!(to_time(&(u64::MAX.into())).is_err());
assert!(to_time(&(["ploup"].into())).is_err());
assert!(to_time(&(true.into())).is_err());
// assert!(to_time(&(12345.into())).is_err());
assert!(to_time(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_match() {
assert_eq!(to_match(&([""].into())), Ok(vec!["".into()]));
assert_eq!(
to_match(&(["plip", "ploup"].into())),
Ok(vec!["plip".into(), "ploup".into()])
);
assert!(to_match(&[Value::from("plip"), Value::from(10)].into()).is_err());
assert!(to_match(&("ploup".into())).is_err());
assert!(to_match(&(true.into())).is_err());
assert!(to_match(&(8.into())).is_err());
assert!(to_match(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_timeset() {
assert_eq!(
to_timeset(&Value::from([Value::from("123456789")])),
Ok(BTreeSet::from([Time::from_nanos(123456789)]))
);
assert_eq!(
to_timeset(&Value::from([Value::from("8"), Value::from("123456")])),
Ok(BTreeSet::from([
Time::from_nanos(8),
Time::from_nanos(123456),
]))
);
assert!(to_timeset(&[Value::from("plip"), Value::from(10)].into()).is_err());
assert!(to_timeset(&([""].into())).is_err());
assert!(to_timeset(&(["ploup"].into())).is_err());
assert!(to_timeset(&(true.into())).is_err());
assert!(to_timeset(&(8.into())).is_err());
assert!(to_timeset(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_timemap() {
let time1 = 1234567;
let time1_t = Time::from_nanos(time1);
let time2 = 123456789;
let time2_t = Time::from_nanos(time2);
assert_eq!(
to_timemap(&Value::from_iter([(time2.to_string(), 1)])),
Ok(BTreeMap::from([(time2_t, 1)]))
);
assert_eq!(
to_timemap(&Value::from_iter([
(time1.to_string(), 4),
(time2.to_string(), 0)
])),
Ok(BTreeMap::from([(time1_t.into(), 4), (time2_t.into(), 0)]))
);
assert!(to_timemap(&Value::from_iter([("1-1", time2)])).is_err());
// assert!(to_timemap(&Value::from_iter([(time2.to_string(), time2)])).is_err());
assert!(to_timemap(&Value::from_iter([(time2)])).is_err());
assert!(to_timemap(&Value::from_iter([(1)])).is_err());
assert!(to_timemap(&(["1970-01-01T01:20:34.567+01:00"].into())).is_err());
assert!(to_timemap(&([""].into())).is_err());
assert!(to_timemap(&(["ploup"].into())).is_err());
assert!(to_timemap(&(true.into())).is_err());
assert!(to_timemap(&(8.into())).is_err());
assert!(to_timemap(&(None::<String>.into())).is_err());
}
}
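The nanosecond-string format that `new_string_to_time` accepts round-trips using only `std`. A minimal standalone sketch mirroring the conversion above:

```rust
use std::time::Duration;

/// Serialize a Duration as the total-nanoseconds string used by the new format.
fn to_nanos_string(d: Duration) -> String {
    d.as_nanos().to_string()
}

/// Parse it back, splitting total nanoseconds into (secs, subsec_nanos),
/// mirroring the logic of `new_string_to_time`.
fn from_nanos_string(s: &str) -> Result<Duration, String> {
    let nanos: u128 = s.parse().map_err(|_| "not a number".to_string())?;
    Ok(Duration::new(
        (nanos / 1_000_000_000) as u64,
        (nanos % 1_000_000_000) as u32,
    ))
}
```

Any value written by `to_nanos_string` parses back to the identical `Duration`, which is what lets the database store times as plain decimal strings.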

crates/treedb/src/lib.rs Normal file

@ -0,0 +1,746 @@
/// This module implements an asynchronously persisted BTreeMap named [`Tree`],
/// via a unique "Write Behind Log" (as opposed to a WAL, "Write Ahead Log").
///
/// This permits RAM-speed read & write operations, while eventually
/// persisting them. The log is flushed to the kernel every 2 seconds.
///
/// Operations stored in the log have a timeout configured at the Tree level.
/// All operations are then stored for this lifetime.
///
/// Data is stored as JSONL. Each line stores a (key, value), plus the tree id
/// and an expiry timestamp in milliseconds.
use std::{
collections::{BTreeMap, HashMap},
io::{Error as IoError, ErrorKind},
ops::Deref,
path::{Path, PathBuf},
time::Duration,
};
use serde::{Deserialize, Serialize, de::DeserializeOwned};
use serde_json::Value;
use tokio::{
fs::{File, rename},
sync::{mpsc, oneshot},
time::{MissedTickBehavior, interval},
};
use tokio_util::{sync::CancellationToken, task::task_tracker::TaskTrackerToken};
// Database
use raw::{ReadDB, WriteDB};
use time::{Time, now};
pub mod helpers;
mod raw;
pub mod time;
/// Any order the Database can receive
enum Order {
Log(Entry),
OpenTree(OpenTree),
}
/// Entry sent from [`Tree`] to [`Database`]
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
struct Entry {
pub tree: String,
pub key: Value,
pub value: Option<Value>,
pub expiry: Time,
}
/// Order to receive a tree loaded from the previous database file
pub struct OpenTree {
name: String,
resp: oneshot::Sender<Option<LoadedTree>>,
}
type LoadedTree = HashMap<Value, Value>;
pub type LoadedDB = HashMap<String, LoadedTree>;
const DB_NAME: &str = "reaction.db";
const DB_NEW_NAME: &str = "reaction.new.db";
fn path_of(state_directory: &Path, name: &str) -> PathBuf {
if state_directory.as_os_str().is_empty() {
name.into()
} else {
PathBuf::from(state_directory).join(name)
}
}
pub type DatabaseErrorReceiver = oneshot::Receiver<Result<(), String>>;
/// Public-facing API for a treedb Database
pub struct Database {
entry_tx: Option<mpsc::Sender<Order>>,
error_rx: DatabaseErrorReceiver,
}
impl Database {
/// Open a new Database, whose task will start in the background.
/// You'll have to:
/// - drop all [`Tree`]s,
/// - call [`Self::quit`],
///
/// to have the Database properly quit.
///
/// You can wait on the channel returned by [`Self::quit`] to know how it went.
pub async fn open(
path_directory: &Path,
cancellation_token: CancellationToken,
task_tracker_token: TaskTrackerToken,
) -> Result<Database, IoError> {
let (manager, entry_tx) = DatabaseManager::open(path_directory).await?;
let error_rx = manager.manager(cancellation_token, task_tracker_token);
Ok(Self {
entry_tx: Some(entry_tx),
error_rx,
})
}
/// Consumes the Database so its channel can close.
/// Unless this function is called, the DB can't close.
pub fn quit(self) -> DatabaseErrorReceiver {
self.error_rx
}
}
// TODO rotate_db at a regular interval instead of every N bytes?
// This would make more sense, as actual garbage collection is time-based
/// A [`Database`] logs all write operations on [`Tree`]s in a single file.
/// Logs are written asynchronously, so the write operations in RAM will block only when the
/// underlying channel is full.
struct DatabaseManager {
/// Inner database
write_db: WriteDB,
/// [`Tree`]s loaded from disk
loaded_db: LoadedDB,
/// Path for the "normal" database
path: PathBuf,
/// Path for the "new" database, when rotating the database.
/// New database atomically replaces the old one when its writing is done.
new_path: PathBuf,
/// The receiver on [`Tree`] write operations
entry_rx: mpsc::Receiver<Order>,
/// The interval at which the database must be flushed to kernel
flush_every: Duration,
/// The maximum bytes that must be written until the database is rotated
max_bytes: usize,
/// Counter tracking the current number of bytes written
bytes_written: usize,
}
impl DatabaseManager {
pub async fn open(
path_directory: &Path,
) -> Result<(DatabaseManager, mpsc::Sender<Order>), IoError> {
let path = path_of(path_directory, DB_NAME);
let new_path = path_of(path_directory, DB_NEW_NAME);
let (write_db, loaded_db) = rotate_db(&path, &new_path, true).await?;
let (entry_tx, entry_rx) = mpsc::channel(1000);
Ok((
DatabaseManager {
write_db,
loaded_db,
path,
new_path,
entry_rx,
flush_every: Duration::from_secs(2),
max_bytes: 20 * 1024 * 1024, // 20 MiB
bytes_written: 0,
},
entry_tx,
))
}
pub fn manager(
mut self,
cancellation_token: CancellationToken,
_task_tracker_token: TaskTrackerToken,
) -> oneshot::Receiver<Result<(), String>> {
let (error_tx, error_rx) = oneshot::channel();
tokio::spawn(async move {
let mut interval = interval(self.flush_every);
// If we missed a tick, it will tick immediately, then wait
// flush_every for the next tick, resulting in a relaxed interval.
// Hoping this will smooth IO pressure when under heavy load.
interval.set_missed_tick_behavior(MissedTickBehavior::Delay);
let mut status = loop {
tokio::select! {
order = self.entry_rx.recv() => {
if let Err(err) = self.handle_order(order).await {
cancellation_token.cancel();
break err;
}
}
_ = interval.tick() => {
if let Err(err) = self.flush().await {
cancellation_token.cancel();
break Some(err);
}
}
_ = cancellation_token.cancelled() => break None
};
};
// Finish consuming received entries when shutdown is requested
if status.is_none() {
loop {
let order = self.entry_rx.recv().await;
if let Err(err) = self.handle_order(order).await {
status = err;
break;
}
}
}
// Shutdown
let close_status = self
.close()
.await
.map_err(|err| format!("while closing database: {err}"));
let _ = error_tx.send(if let Some(err) = status {
Err(err)
} else if close_status.is_err() {
close_status
} else {
Ok(())
});
});
error_rx
}
/// Executes an order. Returns:
/// - Err(Some) if there was an error,
/// - Err(None) if the channel is closed,
/// - Ok(()) in the general case.
async fn handle_order(&mut self, order: Option<Order>) -> Result<(), Option<String>> {
match order {
Some(Order::Log(entry)) => self.handle_entry(entry).await.map_err(Option::Some),
Some(Order::OpenTree(open_tree)) => {
self.handle_open_tree(open_tree);
Ok(())
}
None => Err(None),
}
}
/// Write a received entry.
async fn handle_entry(&mut self, entry: Entry) -> Result<(), String> {
match self.write_db.write_entry(&entry).await {
Ok(bytes_written) => {
self.bytes_written += bytes_written;
if self.bytes_written > self.max_bytes {
match self.rotate_db().await {
Ok(_) => {
self.bytes_written = 0;
Ok(())
}
Err(err) => Err(format!("while rotating database: {err}")),
}
} else {
Ok(())
}
}
Err(err) => Err(format!("while writing entry to database: {err}")),
}
}
/// Flush inner database.
async fn flush(&mut self) -> Result<(), String> {
self.write_db
.flush()
.await
.map_err(|err| format!("while flushing database: {err}"))
}
/// Close inner database.
async fn close(mut self) -> Result<(), IoError> {
self.write_db.close().await
}
/// Rotate inner database.
async fn rotate_db(&mut self) -> Result<(), IoError> {
self.write_db.close().await?;
let (write_db, _) = rotate_db(&self.path, &self.new_path, false).await?;
self.write_db = write_db;
Ok(())
}
}
async fn rotate_db(
path: &Path,
new_path: &Path,
startup: bool,
) -> Result<(WriteDB, LoadedDB), IoError> {
let file = match File::open(&path).await {
Ok(file) => file,
Err(err) => match (startup, err.kind()) {
// No need to rotate the database when it is new,
// we return here
(true, ErrorKind::NotFound) => {
return Ok((WriteDB::new(File::create(path).await?), HashMap::default()));
}
(_, _) => return Err(err),
},
};
let mut read_db = ReadDB::new(file);
let mut write_db = WriteDB::new(File::create(new_path).await?);
let loaded_db = read_db.read(&mut write_db, startup).await?;
rename(new_path, path).await?;
Ok((write_db, loaded_db))
}
// Tree
pub trait KeyType: Ord + Serialize + DeserializeOwned + Clone {}
pub trait ValueType: Ord + Serialize + DeserializeOwned + Clone {}
impl<T> KeyType for T where T: Ord + Serialize + DeserializeOwned + Clone {}
impl<T> ValueType for T where T: Ord + Serialize + DeserializeOwned + Clone {}
/// Main API of this crate.
/// [`Tree`] wraps and is meant to be used exactly like a standard [`std::collections::BTreeMap`].
/// Read operations are RAM only.
/// Write operations are asynchronously persisted on disk by its parent [`Database`].
/// They will never block.
pub struct Tree<K: KeyType, V: ValueType> {
// FIXME implement id as a u64 instead?
// Could permit to send more direct database entries.
// Database should write special entries as soon as a tree is opened.
/// The name of the tree
id: String,
/// The duration for which the data in the tree must be persisted to disk.
/// All write operations will stay logged on disk for this duration.
/// This property permits the database rotation to be `O(n)` in time and `O(1)` in RAM space,
/// `n` being the number of write operations from the last rotation plus the number of new
/// operations.
entry_timeout: Duration,
/// The inner BTreeMap
tree: BTreeMap<K, V>,
/// The sender that permits to asynchronously send write operations to database
tx: mpsc::Sender<Order>,
}
impl Database {
/// Creates a new Tree with the given name and entry timeout.
/// Takes a closure (or regular function) that converts (Value, Value) JSON entries
/// into (K, V) typed entries.
/// Helpers for this closure can be found in the [`helpers`] module.
pub async fn open_tree<K: KeyType, V: ValueType, F>(
&mut self,
name: String,
entry_timeout: Duration,
map_f: F,
) -> Result<Tree<K, V>, String>
where
F: Fn((Value, Value)) -> Result<(K, V), String>,
{
// Request the tree
let (tx, rx) = oneshot::channel();
let entry_tx = match self.entry_tx.clone() {
None => return Err("Database is closing".to_string()),
Some(entry_tx) => {
entry_tx
.send(Order::OpenTree(OpenTree {
name: name.clone(),
resp: tx,
}))
.await
.map_err(|_| "Database did not answer")?;
// Get a clone of the channel sender
entry_tx.clone()
}
};
// Load the tree from its JSON
let tree = if let Some(json_tree) = rx.await.map_err(|_| "Database did not respond")? {
json_tree
.into_iter()
.map(map_f)
.collect::<Result<BTreeMap<K, V>, String>>()?
} else {
BTreeMap::default()
};
Ok(Tree {
id: name,
entry_timeout,
tree,
tx: entry_tx,
})
}
}
impl DatabaseManager {
/// Answers an [`OpenTree`] request with the tree loaded from disk,
/// removing it from the loaded database so it can only be claimed once.
pub fn handle_open_tree(&mut self, open_tree: OpenTree) {
let _ = open_tree.resp.send(self.loaded_db.remove(&open_tree.name));
}
// TODO keep only tree names, and use it for next db rotation to remove associated entries
// Drops Trees that have not been loaded already
// pub fn drop_trees(&mut self) {
// self.loaded_db = HashMap::default();
// }
}
// Gives access to all read-only functions
impl<K: KeyType, V: ValueType> Deref for Tree<K, V> {
type Target = BTreeMap<K, V>;
fn deref(&self) -> &Self::Target {
&self.tree
}
}
// Reimplement write functions
impl<K: KeyType, V: ValueType> Tree<K, V> {
/// Log an [`Entry`] to the [`Database`]
async fn log(&mut self, k: &K, v: Option<&V>) {
let now = now();
let e = Entry {
tree: self.id.clone(),
key: serde_json::to_value(k).expect("could not serialize key"),
value: v.map(|v| serde_json::to_value(v).expect("could not serialize value")),
expiry: now + self.entry_timeout,
};
let tx = self.tx.clone();
// FIXME what if send fails?
let _ = tx.send(Order::Log(e)).await;
}
/// Asynchronously persisted version of [`BTreeMap::insert`]
pub async fn insert(&mut self, key: K, value: V) -> Option<V> {
self.log(&key, Some(&value)).await;
self.tree.insert(key, value)
}
/// Asynchronously persisted version of [`BTreeMap::pop_first`]
pub async fn pop_first(&mut self) -> Option<(K, V)> {
match self.tree.pop_first() {
Some((key, value)) => {
self.log(&key, None).await;
Some((key, value))
}
None => None,
}
}
/// Asynchronously persisted version of [`BTreeMap::pop_last`]
pub async fn pop_last(&mut self) -> Option<(K, V)> {
match self.tree.pop_last() {
Some((key, value)) => {
self.log(&key, None).await;
Some((key, value))
}
None => None,
}
}
/// Asynchronously persisted version of [`BTreeMap::remove`]
pub async fn remove(&mut self, key: &K) -> Option<V> {
self.log(key, None).await;
self.tree.remove(key)
}
/// Updates an item and returns the previous value.
/// Returning None removes the item if it existed before.
/// Asynchronously persisted.
/// *API design borrowed from [`fjall::WriteTransaction::fetch_update`].*
pub async fn fetch_update<F: FnMut(Option<V>) -> Option<V>>(
&mut self,
key: K,
mut f: F,
) -> Option<V> {
let old_value = self.get(&key).map(|v| v.to_owned());
let new_value = f(old_value);
self.log(&key, new_value.as_ref()).await;
if let Some(new_value) = new_value {
self.tree.insert(key, new_value)
} else {
self.tree.remove(&key)
}
}
#[cfg(any(test, feature = "test"))]
pub fn tree(&self) -> &BTreeMap<K, V> {
&self.tree
}
}
#[cfg(any(test, feature = "test"))]
impl DatabaseManager {
pub fn set_loaded_db(&mut self, loaded_db: LoadedDB) {
self.loaded_db = loaded_db;
}
}
#[cfg(any(test, feature = "test"))]
impl Database {
pub async fn from_dir(dir_path: &Path, loaded_db: Option<LoadedDB>) -> Result<Self, IoError> {
use tokio_util::task::TaskTracker;
let (mut manager, entry_tx) = DatabaseManager::open(dir_path).await?;
if let Some(loaded_db) = loaded_db {
manager.set_loaded_db(loaded_db)
}
let error_rx = manager.manager(CancellationToken::new(), TaskTracker::new().token());
Ok(Self {
entry_tx: Some(entry_tx),
error_rx,
})
}
}
#[cfg(test)]
mod tests {
use std::{
collections::{BTreeMap, BTreeSet, HashMap},
time::Duration,
};
use serde_json::Value;
use tempfile::{NamedTempFile, TempDir};
use tokio::fs::File;
use super::{DB_NAME, Database, Entry, Time, helpers::*, now, raw::WriteDB, rotate_db};
#[tokio::test]
async fn test_rotate_db() {
let now = now();
let expired = now - Time::from_secs(2);
let valid = now + Time::from_secs(2);
let entries = [
Entry {
tree: "tree1".into(),
key: "key1".into(),
value: Some("value1".into()),
expiry: valid,
},
Entry {
tree: "tree2".into(),
key: "key2".into(),
value: Some("value2".into()),
expiry: valid,
},
Entry {
tree: "tree1".into(),
key: "key2".into(),
value: Some("value2".into()),
expiry: valid,
},
Entry {
tree: "tree1".into(),
key: "key1".into(),
value: None,
expiry: valid,
},
Entry {
tree: "tree3".into(),
key: "key1".into(),
value: Some("value1".into()),
expiry: expired,
},
];
let path = NamedTempFile::new().unwrap().into_temp_path();
let new_path = NamedTempFile::new().unwrap().into_temp_path();
let mut write_db = WriteDB::new(File::create(&path).await.unwrap());
for entry in entries {
write_db.write_entry(&entry).await.unwrap();
}
write_db.close().await.unwrap();
let (mut write_db, loaded_db) = rotate_db(&path, &new_path, true).await.unwrap();
assert_eq!(
loaded_db,
HashMap::from([
(
"tree1".into(),
HashMap::from([("key2".into(), "value2".into())])
),
(
"tree2".into(),
HashMap::from([("key2".into(), "value2".into())])
)
])
);
// Test that we can write in new db
write_db
.write_entry(&Entry {
tree: "tree3".into(),
key: "key3".into(),
value: Some("value3".into()),
expiry: valid,
})
.await
.unwrap();
write_db.close().await.unwrap();
// And that we get the correct result
let (_, loaded_db) = rotate_db(&path, &new_path, true).await.unwrap();
assert_eq!(
loaded_db,
HashMap::from([
(
"tree1".into(),
HashMap::from([("key2".into(), "value2".into())])
),
(
"tree2".into(),
HashMap::from([("key2".into(), "value2".into())])
),
(
"tree3".into(),
HashMap::from([("key3".into(), "value3".into())])
),
])
);
// Test that asking not to load results in no load
let (_, loaded_db) = rotate_db(&path, &new_path, false).await.unwrap();
assert_eq!(loaded_db, HashMap::default());
}
#[tokio::test]
async fn test_open_tree() {
let now = now();
let now2 = now + Time::from_millis(2);
let now3 = now + Time::from_millis(3);
// let now_ms = now.as_nanos().to_string();
// let now2_ms = now2.as_nanos().to_string();
// let now3_ms = now3.as_nanos().to_string();
let valid = now + Time::from_secs(2);
let ip127 = vec!["127.0.0.1".to_string()];
let ip1 = vec!["1.1.1.1".to_string()];
let entries = [
Entry {
tree: "time-match".into(),
key: now.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "time-match".into(),
key: now2.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "time-match".into(),
key: now3.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "time-match".into(),
key: now2.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "match-timeset".into(),
key: ip127.clone().into(),
value: Some([Value::String(now.as_nanos().to_string())].into()),
expiry: valid,
},
Entry {
tree: "match-timeset".into(),
key: ip1.clone().into(),
value: Some([Value::String(now2.as_nanos().to_string())].into()),
expiry: valid,
},
Entry {
tree: "match-timeset".into(),
key: ip1.clone().into(),
value: Some(
[
Value::String(now2.as_nanos().to_string()),
Value::String(now3.as_nanos().to_string()),
]
.into(),
),
expiry: valid,
},
];
let dir = TempDir::new().unwrap();
let dir_path = dir.path();
let db_path = dir_path.join(DB_NAME);
let mut write_db = WriteDB::new(File::create(&db_path).await.unwrap());
for entry in entries {
write_db.write_entry(&entry).await.unwrap();
}
write_db.close().await.unwrap();
drop(write_db);
let mut database = Database::from_dir(dir_path, None).await.unwrap();
let time_match = database
.open_tree(
"time-match".into(),
Duration::from_secs(2),
|(key, value)| Ok((to_time(&key)?, to_match(&value)?)),
)
.await
.unwrap();
assert_eq!(
time_match.tree,
BTreeMap::from([
(now, ip127.clone()),
(now2, ip127.clone()),
(now3, ip127.clone())
])
);
let match_timeset = database
.open_tree(
"match-timeset".into(),
Duration::from_hours(2),
|(key, value)| Ok((to_match(&key)?, to_timeset(&value)?)),
)
.await
.unwrap();
assert_eq!(
match_timeset.tree,
BTreeMap::from([
(ip127.clone(), BTreeSet::from([now])),
(ip1.clone(), BTreeSet::from([now2, now3])),
])
);
let unknown_tree = database
.open_tree(
"unknown_tree".into(),
Duration::from_hours(2),
|(key, value)| Ok((to_match(&key)?, to_timeset(&value)?)),
)
.await
.unwrap();
assert_eq!(unknown_tree.tree, BTreeMap::default());
}
}

crates/treedb/src/raw.rs Normal file
@ -0,0 +1,497 @@
use std::{
collections::HashMap,
io::Error as IoError,
time::{SystemTime, UNIX_EPOCH},
};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use thiserror::Error;
use tokio::{
fs::File,
io::{AsyncBufReadExt, AsyncWriteExt, BufReader, BufWriter},
};
use tracing::error;
use crate::time::Time;
use super::{Entry, LoadedDB};
const DB_TREE_ID: u64 = 0;
// const DB_TREE_NAME: &str = "tree_names";
#[derive(Debug, Error)]
pub enum SerdeOrIoError {
#[error("{0}")]
IO(#[from] IoError),
#[error("{0}")]
Serde(#[from] serde_json::Error),
}
#[derive(Debug, Error)]
pub enum DatabaseError {
#[error("{0}")]
IO(#[from] IoError),
#[error("{0}")]
Serde(#[from] serde_json::Error),
#[error("missing key id: {0}")]
MissingKeyId(u64),
}
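For readers unfamiliar with `thiserror`, the `#[error("{0}")]` attributes above generate `Display` impls that forward to the wrapped value, and `#[from]` generates the corresponding `From` conversions. A hand-written sketch of the equivalent expansion (the `SketchError` name is illustrative, not part of the crate):

```rust
use std::fmt;
use std::io::Error as IoError;

// Hand-rolled equivalent of what thiserror derives for a two-variant
// error enum: Display forwards to the wrapped error (or formats the
// "missing key id" message), and From wires up the `?` conversion.
#[derive(Debug)]
pub enum SketchError {
    IO(IoError),
    MissingKeyId(u64),
}

impl fmt::Display for SketchError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SketchError::IO(err) => write!(f, "{err}"),
            SketchError::MissingKeyId(id) => write!(f, "missing key id: {id}"),
        }
    }
}

impl From<IoError> for SketchError {
    fn from(err: IoError) -> Self {
        SketchError::IO(err)
    }
}
```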
/// Entry in the custom database format, ready to be written to the database
#[derive(Serialize)]
struct WriteEntry<'a> {
#[serde(rename = "t")]
pub tree: u64,
#[serde(rename = "k")]
pub key: &'a Value,
#[serde(rename = "v")]
pub value: &'a Option<Value>,
#[serde(rename = "e")]
pub expiry: u64,
}
/// Entry in the custom database format, as read from the database
#[derive(Deserialize)]
struct ReadEntry {
#[serde(rename = "t")]
pub tree: u64,
#[serde(rename = "k")]
pub key: Value,
#[serde(rename = "v")]
pub value: Option<Value>,
#[serde(rename = "e")]
pub expiry: u64,
}
/// Writes entries to a database.
/// Entries are written as plain JSONL.
pub struct WriteDB {
file: BufWriter<File>,
names: HashMap<String, u64>,
next_id: u64,
buffer: Vec<u8>,
}
impl WriteDB {
/// Creates a wrapper around a file opened in write mode,
/// to store entries in it.
pub fn new(file: File) -> Self {
Self {
file: BufWriter::new(file),
// names: HashMap::from([(DB_TREE_NAME.into(), DB_TREE_ID)]),
names: HashMap::default(),
next_id: 1,
buffer: Vec::default(),
}
}
/// Writes an entry in the database
pub async fn write_entry(&mut self, entry: &Entry) -> Result<usize, SerdeOrIoError> {
let mut written = 0;
let tree_id = match self.names.get(&entry.tree) {
Some(id) => *id,
// Insert special database entry when the tree is not recorded yet
None => {
let id = self.next_id;
self.next_id += 1;
self.names.insert(entry.tree.clone(), id);
written = self
._write_entry(&WriteEntry {
tree: DB_TREE_ID,
key: &id.into(),
value: &Some(entry.tree.clone().into()),
// Expiry is not used for special entries
expiry: 0,
})
.await?;
id
}
};
self._write_entry(&WriteEntry {
tree: tree_id,
key: &entry.key,
value: &entry.value,
expiry: entry.expiry.as_millis() as u64,
})
.await
.map(|bytes_written| bytes_written + written)
}
async fn _write_entry(&mut self, raw_entry: &WriteEntry<'_>) -> Result<usize, SerdeOrIoError> {
self.buffer.clear();
serde_json::to_writer(&mut self.buffer, &raw_entry)?;
self.buffer.push(b'\n');
self.file.write_all(self.buffer.as_ref()).await?;
Ok(self.buffer.len())
}
/// Flushes the inner [`tokio::io::BufWriter`]
pub async fn flush(&mut self) -> Result<(), IoError> {
self.file.flush().await
}
/// Closes the inner [`tokio::io::BufWriter`]
/// WriteDB should not be used after this point.
pub async fn close(&mut self) -> Result<(), IoError> {
self.file.shutdown().await
}
}
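The on-disk layout `WriteDB` produces can be sketched without the crate: each tree name is interned once as a reserved entry under tree id `0` (key = numeric id, value = tree name), and data entries then refer to that numeric id, using the short field names `t`/`k`/`v`/`e` from the serde renames above. The helper names below are illustrative, not part of the crate:

```rust
// Sketch of WriteDB's JSONL wire format, built by hand with format!.
// A reserved line under tree id 0 registers a tree name; data lines
// then reference the interned numeric id.

/// Builds the reserved line that registers a tree name under a numeric id.
pub fn tree_registration_line(id: u64, name: &str) -> String {
    format!("{{\"t\":0,\"k\":{id},\"v\":\"{name}\",\"e\":0}}\n")
}

/// Builds a data line for a string key/value entry in tree `id`,
/// expiring at `expiry_ms` (milliseconds since the Unix epoch).
pub fn data_line(id: u64, key: &str, value: &str, expiry_ms: u64) -> String {
    format!("{{\"t\":{id},\"k\":\"{key}\",\"v\":\"{value}\",\"e\":{expiry_ms}}}\n")
}
```

These two lines match the expected contents asserted in the `write_db_write_entry` test below.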
/// Reads entries from a database.
/// The database is plain JSONL.
pub struct ReadDB {
file: BufReader<File>,
names: HashMap<u64, String>,
buffer: String,
}
impl ReadDB {
pub fn new(file: File) -> Self {
ReadDB {
file: BufReader::new(file),
// names: HashMap::from([(DB_TREE_ID, DB_TREE_NAME.into())]),
names: HashMap::default(),
buffer: String::default(),
}
}
pub async fn read(
&mut self,
write_db: &mut WriteDB,
load_db: bool,
) -> tokio::io::Result<LoadedDB> {
let mut data_maps = HashMap::new();
loop {
match self.next().await {
// EOF
Ok(None) => break,
// Can't recover io::Error here
Err(DatabaseError::IO(err)) => return Err(err),
// Just skip malformed entries
Err(DatabaseError::Serde(err)) => {
error!("malformed entry read from database: {err}")
}
Err(DatabaseError::MissingKeyId(id)) => {
error!("invalid entry read from database: {id}")
}
// Ok, we got an entry
Ok(Some(entry)) => {
// Add back in new DB
match write_db.write_entry(&entry).await {
Ok(_) => (),
Err(err) => match err {
SerdeOrIoError::IO(err) => return Err(err),
SerdeOrIoError::Serde(err) => error!(
"serde should be able to serialize an entry just deserialized: {err}"
),
},
}
// Insert data in RAM
if load_db {
let map: &mut HashMap<Value, Value> =
data_maps.entry(entry.tree).or_default();
match entry.value {
Some(value) => map.insert(entry.key, value),
None => map.remove(&entry.key),
};
}
}
}
}
Ok(data_maps)
}
async fn next(&mut self) -> Result<Option<Entry>, DatabaseError> {
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis() as u64;
// Loop until we get a non-special value
let raw_entry = loop {
self.buffer.clear();
let raw_entry = match self.file.read_line(&mut self.buffer).await {
// EOF
Ok(0) => return Ok(None),
Ok(_) => match serde_json::from_str::<ReadEntry>(&self.buffer) {
Ok(raw_entry) => Ok(raw_entry),
Err(err) => Err(DatabaseError::Serde(err)),
},
Err(err) => Err(DatabaseError::IO(err)),
}?;
if raw_entry.tree == DB_TREE_ID {
// Insert new tree
self.names.insert(
raw_entry
.key
.as_u64()
.expect("database reserved entry doesn't have a uint as key"),
raw_entry
.value
.expect("database reserved entry doesn't have a value")
.as_str()
.expect("database reserved entry doesn't have a string as value")
.to_owned(),
);
// Skip expired entries: only break out with still-valid ones
} else if raw_entry.expiry >= now {
break raw_entry;
}
};
match self.names.get(&raw_entry.tree) {
Some(tree) => Ok(Some(Entry {
tree: tree.to_owned(),
key: raw_entry.key,
value: raw_entry.value,
expiry: Time::from_millis(raw_entry.expiry),
})),
None => Err(DatabaseError::MissingKeyId(raw_entry.tree)),
}
}
}
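The replay rule `ReadDB::read` applies can be sketched in isolation (simplified here to a single tree with string keys and values; `replay` and `now_ms` are illustrative names, not crate API): later entries win, a `None` value acts as a tombstone that deletes the key, and entries already expired at replay time are dropped.

```rust
use std::collections::HashMap;

// Simplified replay of an append-only JSONL log into an in-memory map:
// iterate the log in write order; expired entries are skipped entirely,
// Some(value) inserts or overwrites, None removes (a tombstone).
pub fn replay(
    log: &[(&str, Option<&str>, u64)], // (key, value-or-tombstone, expiry_ms)
    now_ms: u64,
) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for (key, value, expiry_ms) in log {
        if *expiry_ms < now_ms {
            continue; // expired: never reaches the in-memory map
        }
        match value {
            Some(v) => {
                map.insert(key.to_string(), v.to_string());
            }
            None => {
                map.remove(*key);
            }
        }
    }
    map
}
```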
#[cfg(test)]
mod tests {
use std::collections::HashMap;
use serde_json::Value;
use tempfile::NamedTempFile;
use tokio::fs::{File, read, write};
use crate::{
Entry,
raw::{DB_TREE_ID, DatabaseError, ReadDB, WriteDB},
time::{Time, now},
};
#[tokio::test]
async fn write_db_write_entry() {
let now = now();
let expired = now - Time::from_secs(2);
let expired_ts = expired.as_millis();
// let valid = now + Time::from_secs(2);
// let valid_ts = valid.timestamp_millis();
let path = NamedTempFile::new().unwrap().into_temp_path();
let mut write_db = WriteDB::new(File::create(&path).await.unwrap());
write_db
.write_entry(&Entry {
tree: "yooo".into(),
key: "key1".into(),
value: Some("value1".into()),
expiry: expired,
})
.await
.unwrap();
write_db.flush().await.unwrap();
let contents = String::from_utf8(read(path).await.unwrap()).unwrap();
println!("{}", &contents);
assert_eq!(
contents,
format!(
"{{\"t\":0,\"k\":1,\"v\":\"yooo\",\"e\":0}}\n{{\"t\":1,\"k\":\"key1\",\"v\":\"value1\",\"e\":{expired_ts}}}\n"
)
);
}
#[tokio::test]
async fn read_db_next() {
let now = now();
let expired = now - Time::from_secs(2);
let expired_ts = expired.as_millis();
let valid = now + Time::from_secs(2);
let valid_ts = valid.as_millis();
// Truncate to millisecond precision
let valid = Time::new(valid.as_secs(), valid.subsec_millis() * 1_000_000);
let path = NamedTempFile::new().unwrap().into_temp_path();
write(
&path,
format!(
"{{\"t\": {DB_TREE_ID}, \"k\": 1, \"v\": \"test_tree\", \"e\": 0}}
{{\"t\": 1, \"k\": \"key1\", \"v\": 1, \"e\": {expired_ts}}}
{{\"t\": 1, \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
malformed entry: not json
{{\"t\": \"tree cant be string\", \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 1, \"k\": \"missing quote, \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 3, \"k\": \"missing key id\", \"v\": 2, \"e\": {valid_ts}}}"
),
)
.await
.unwrap();
let mut read_db = ReadDB::new(File::open(path).await.unwrap());
assert_eq!(
read_db.next().await.unwrap(),
Some(Entry {
tree: "test_tree".into(),
key: "key2".into(),
value: Some(2.into()),
expiry: valid,
})
);
assert!(matches!(read_db.next().await, Err(DatabaseError::Serde(_))));
assert!(matches!(read_db.next().await, Err(DatabaseError::Serde(_))));
assert!(matches!(read_db.next().await, Err(DatabaseError::Serde(_))));
assert!(matches!(read_db.next().await, Err(DatabaseError::MissingKeyId(3))));
assert!(read_db.next().await.unwrap().is_none());
}
#[tokio::test]
async fn read_db_read() {
let now = now();
let expired = now - Time::from_secs(2);
let expired_ts = expired.as_millis();
let valid = now + Time::from_secs(2);
let valid_ts = valid.as_millis();
let read_path = NamedTempFile::new().unwrap().into_temp_path();
let write_path = NamedTempFile::new().unwrap().into_temp_path();
write(
&read_path,
format!(
"{{\"t\": {DB_TREE_ID}, \"k\": 1, \"v\": \"test_tree\", \"e\": 0}}
{{\"t\": 1, \"k\": \"key1\", \"v\": 1, \"e\": {expired_ts}}}
{{\"t\": 1, \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
malformed entry: not json
{{\"t\": \"tree cant be string\", \"k\": \"key2\", \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 1, \"k\": \"missing quote, \"v\": 2, \"e\": {valid_ts}}}
{{\"t\": 3, \"k\": \"missing key id\", \"v\": 2, \"e\": {valid_ts}}}"
),
)
.await
.unwrap();
// First test loading in RAM
let mut read_db = ReadDB::new(File::open(&read_path).await.unwrap());
let mut write_db = WriteDB::new(File::create(&write_path).await.unwrap());
let maps = read_db.read(&mut write_db, true).await.unwrap();
// Test RAM
assert_eq!(
maps,
HashMap::from([(
"test_tree".into(),
HashMap::from([("key2".into(), 2.into())])
)])
);
// Test disk
write_db.close().await.unwrap();
let contents = String::from_utf8(read(&write_path).await.unwrap()).unwrap();
assert_eq!(
contents,
format!(
"{{\"t\":{DB_TREE_ID},\"k\":1,\"v\":\"test_tree\",\"e\":0}}\n\
{{\"t\":1,\"k\":\"key2\",\"v\":2,\"e\":{valid_ts}}}\n"
)
);
// Second test only rotating on disk
let mut read_db = ReadDB::new(File::open(&read_path).await.unwrap());
let mut write_db = WriteDB::new(File::create(&write_path).await.unwrap());
let maps = read_db.read(&mut write_db, false).await.unwrap();
// Test RAM
assert_eq!(maps, HashMap::default());
// Test disk
write_db.close().await.unwrap();
let contents = String::from_utf8(read(write_path).await.unwrap()).unwrap();
assert_eq!(
contents,
format!(
"{{\"t\":{DB_TREE_ID},\"k\":1,\"v\":\"test_tree\",\"e\":0}}\n\
{{\"t\":1,\"k\":\"key2\",\"v\":2,\"e\":{valid_ts}}}\n"
)
);
}
// write then read 1000 random entries
#[tokio::test]
async fn write_then_read_1000() {
// Generate entries
let now = now();
let entries: Vec<_> = (0..1000)
.map(|i| Entry {
tree: format!("tree{}", i % 4),
key: format!("key{}", i % 10).into(),
value: Some(format!("value{}", i % 10).into()),
expiry: now + Time::from_secs(i % 4) - Time::from_secs(1),
})
.collect();
let remove_entries: Vec<_> = (0..1000)
.filter(|i| i % 5 == 1)
.map(|i| Entry {
tree: format!("tree{}", i % 4),
key: format!("key{}", i % 10).into(),
value: None,
expiry: now + Time::from_secs(i % 4),
})
.collect();
let all_entries: Vec<_> = entries.iter().chain(remove_entries.iter()).collect();
let kept_entries: HashMap<String, HashMap<Value, Value>> = entries
.iter()
.filter(|entry| entry.expiry > now)
.filter(|entry| {
remove_entries
.iter()
.all(|rm_entry| rm_entry.tree != entry.tree || rm_entry.key != entry.key)
})
.fold(HashMap::default(), |mut acc, entry| {
acc.entry(entry.tree.clone())
.or_default()
.insert(entry.key.clone(), entry.value.clone().unwrap());
acc
});
// Write entries
let read_path = NamedTempFile::new().unwrap().into_temp_path();
let write_path = NamedTempFile::new().unwrap().into_temp_path();
let mut write_db = WriteDB::new(File::create(&read_path).await.unwrap());
for entry in all_entries {
write_db.write_entry(entry).await.unwrap();
}
write_db.close().await.unwrap();
// Read entries
let mut read_db = ReadDB::new(File::open(&read_path).await.unwrap());
let mut write_db = WriteDB::new(File::create(&write_path).await.unwrap());
let maps = read_db
.read(&mut write_db, true)
.await
.unwrap()
.into_iter()
.filter(|(_, map)| !map.is_empty())
.collect::<HashMap<_, _>>();
assert_eq!(maps, kept_entries);
}
}

crates/treedb/src/time.rs Normal file
@ -0,0 +1,117 @@
use std::{
fmt,
ops::{Add, Deref, Sub},
time::{Duration, SystemTime, UNIX_EPOCH},
};
use serde::{Deserialize, Serialize};
/// [`std::time::Duration`] since [`std::time::UNIX_EPOCH`]
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Time(Duration);
impl Deref for Time {
type Target = Duration;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl From<Duration> for Time {
fn from(value: Duration) -> Self {
Time(value)
}
}
impl From<Time> for Duration {
fn from(value: Time) -> Duration {
value.0
}
}
impl Add<Duration> for Time {
type Output = Time;
fn add(self, rhs: Duration) -> Self::Output {
Time(self.0 + rhs)
}
}
impl Add<Time> for Time {
type Output = Time;
fn add(self, rhs: Time) -> Self::Output {
Time(self.0 + rhs.0)
}
}
impl Sub<Duration> for Time {
type Output = Time;
fn sub(self, rhs: Duration) -> Self::Output {
Time(self.0 - rhs)
}
}
impl Sub<Time> for Time {
type Output = Time;
fn sub(self, rhs: Time) -> Self::Output {
Time(self.0 - rhs.0)
}
}
impl Serialize for Time {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
self.as_nanos().to_string().serialize(serializer)
}
}
struct TimeVisitor;
impl<'de> serde::de::Visitor<'de> for TimeVisitor {
type Value = Time;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a string representing nanoseconds")
}
fn visit_str<E>(self, s: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
match s.parse::<u128>() {
Ok(nanos) => Ok(Time(Duration::new(
(nanos / 1_000_000_000) as u64,
(nanos % 1_000_000_000) as u32,
))),
Err(_) => Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(s),
&self,
)),
}
}
}
impl<'de> Deserialize<'de> for Time {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
deserializer.deserialize_str(TimeVisitor)
}
}
impl Time {
pub fn new(secs: u64, nanos: u32) -> Time {
Time(Duration::new(secs, nanos))
}
pub fn from_hours(hours: u64) -> Time {
Time(Duration::from_hours(hours))
}
pub fn from_mins(mins: u64) -> Time {
Time(Duration::from_mins(mins))
}
pub fn from_secs(secs: u64) -> Time {
Time(Duration::from_secs(secs))
}
pub fn from_millis(millis: u64) -> Time {
Time(Duration::from_millis(millis))
}
pub fn from_nanos(nanos: u64) -> Time {
Time(Duration::from_nanos(nanos))
}
}
pub fn now() -> Time {
Time(SystemTime::now().duration_since(UNIX_EPOCH).unwrap())
}
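The serde round trip above can be sketched with plain `std`: `Time` serializes as its total nanoseconds rendered as a decimal string, and `TimeVisitor` parses that string back by splitting the `u128` into whole seconds and the nanosecond remainder. The helper names are illustrative:

```rust
use std::time::Duration;

// Serialize side: a Duration becomes its total nanoseconds as a string
// (a string because u128 doesn't fit in JSON's f64-safe number range).
pub fn to_nanos_string(d: Duration) -> String {
    d.as_nanos().to_string()
}

// Deserialize side: parse the decimal string and split it back into
// whole seconds and the sub-second nanosecond remainder.
pub fn from_nanos_string(s: &str) -> Option<Duration> {
    let nanos = s.parse::<u128>().ok()?;
    Some(Duration::new(
        (nanos / 1_000_000_000) as u64,
        (nanos % 1_000_000_000) as u32,
    ))
}
```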

debian/changelog vendored
@ -1,5 +0,0 @@
reaction (1.3.1-1) stable; urgency=low
* Initial release.
-- ppom <reaction@ppom.me> Sat, 06 Apr 2024 18:59:13 +0000

debian/control vendored
@ -1,23 +0,0 @@
Source: reaction
Maintainer: Luc Didry <luc.reaction@didry.org>
Section: utils
Priority: optional
Standards-Version: 4.6.2
Build-Depends: debhelper-compat (= 13)
Homepage: https://framagit.org/ppom/reaction
Package: reaction
Architecture: any
Package-Type: deb
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: daemon that scans program outputs for patterns, and takes action
A common use of reaction is to scan ssh and web server logs,
and ban hosts that cause multiple authentication errors.
reaction doesn't have all the features of the honorable fail2ban,
but it's ~10x faster and easier to configure.
Tag: admin::automation, admin::logging, admin::monitoring,
interface::commandline, interface::daemon,
network::firewall, protocol::ip, role::program,
security::authentication, security::firewall, security::ids,
security::log-analyzer, use::login, use::monitor,
works-with-format::plaintext, works-with::logfile, works-with::text

debian/copyright vendored
@ -1,678 +0,0 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Source: https://framagit.org/ppom/reaction
Upstream-Name: reaction
Upstream-Contact: ppom <reaction@ppom.me>
License: AGPL-3
Files:
*
Copyright: 2023 ppom
License: AGPL-3
Files:
debian/*
Copyright: 2024 Luc Didry
License: AGPL-3
License: AGPL-3
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
.
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
.
Preamble
.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
.
The precise terms and conditions for copying, distribution and
modification follow.
.
TERMS AND CONDITIONS
.
0. Definitions.
.
"This License" refers to version 3 of the GNU Affero General Public License.
.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
.
A "covered work" means either the unmodified Program or a work based
on the Program.
.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
.
1. Source Code.
.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
.
The Corresponding Source for a work in source code form is that
same work.
.
2. Basic Permissions.
.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
.
4. Conveying Verbatim Copies.
.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
.
5. Conveying Modified Source Versions.
.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
.
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
.
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
.
6. Conveying Non-Source Forms.
.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
.
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
.
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
.
7. Additional Terms.
.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
.
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
.
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
.
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
.
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
.
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
.
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
.
8. Termination.
.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
.
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
.
9. Acceptance Not Required for Having Copies.
.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
.
10. Automatic Licensing of Downstream Recipients.
.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
.
11. Patents.
.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
.
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
.
12. No Surrender of Others' Freedom.
.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
.
13. Remote Network Interaction; Use with the GNU General Public License.
.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
.
14. Revised Versions of this License.
.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
.
15. Disclaimer of Warranty.
.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
.
16. Limitation of Liability.
.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
.
17. Interpretation of Sections 15 and 16.
.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
.
END OF TERMS AND CONDITIONS
.
How to Apply These Terms to Your New Programs
.
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.
Also add information on how to contact you by electronic and paper mail.
.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.


@@ -1,3 +0,0 @@
usr/bin
usr/sbin
lib/systemd/system


@@ -1 +0,0 @@
reaction: initial-upload-closes-no-bugs


@@ -1,13 +0,0 @@
# vim: ft=systemd
[Unit]
Description=A daemon that scans program outputs for repeated patterns, and takes action.
Documentation=https://framagit.org/ppom/reaction-wiki
[Service]
ExecStart=/usr/bin/reaction start -c /etc/%i
StateDirectory=reaction
RuntimeDirectory=reaction
WorkingDirectory=/var/lib/reaction
[Install]
WantedBy=multi-user.target

debian/rules

@@ -1,8 +0,0 @@
#!/usr/bin/make -f
%:
dh $@
override_dh_auto_install:
install -m755 reaction $$(pwd)/debian/reaction/usr/bin
install -m755 nft46 $$(pwd)/debian/reaction/usr/sbin
install -m755 ip46tables $$(pwd)/debian/reaction/usr/sbin


@@ -1 +0,0 @@
3.0 (quilt)


@@ -1,18 +0,0 @@
{ buildGoModule, fetchFromGitLab }:
let
pname = "reaction";
version = "v0.1";
in buildGoModule {
inherit pname version;
src = ./.;
# src = fetchFromGitLab {
# domain = "framagit.org";
# owner = "ppom";
# repo = pname;
# rev = version;
# sha256 = "sha256-45ytTNZIbTIUOPBgAdD7o9hyWlJo//izUhGe53PcwNA=";
# };
vendorHash = "sha256-g+yaVIx4jxpAQ/+WrGKxhVeliYx7nLQe/zsGpxV4Fn4=";
}

go.mod

@@ -1,10 +0,0 @@
module framagit.org/ppom/reaction
go 1.21
require (
github.com/google/go-jsonnet v0.20.0
sigs.k8s.io/yaml v1.1.0
)
require gopkg.in/yaml.v2 v2.4.0 // indirect

go.sum

@@ -1,9 +0,0 @@
github.com/google/go-jsonnet v0.20.0 h1:WG4TTSARuV7bSm4PMB4ohjxe33IHT5WVTrJSU33uT4g=
github.com/google/go-jsonnet v0.20.0/go.mod h1:VbgWF9JX7ztlv770x/TolZNGGFfiHEVx9G6ca2eUmeA=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=


@@ -1,91 +0,0 @@
#include<ctype.h>
#include<errno.h>
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<unistd.h>
// If this program
// - receives an IPv4 address in its arguments:
//   → it executes iptables with the same arguments in place;
//
// - receives an IPv6 address in its arguments:
//   → it executes ip6tables with the same arguments in place;
//
// - receives neither an IPv4 nor an IPv6 address in its arguments:
//   → it executes both, with the same arguments in place.
int isIPv4(char *tab) {
int i,len;
// IPv4 addresses are at least 7 chars long
len = strlen(tab);
if (len < 7 || !isdigit(tab[0]) || !isdigit(tab[len-1])) {
return 0;
}
// Each char must be a digit or a dot between 2 digits
for (i=1; i<len-1; i++) {
if (!isdigit(tab[i]) && !(tab[i] == '.' && isdigit(tab[i-1]) && isdigit(tab[i+1]))) {
return 0;
}
}
return 1;
}
int isIPv6(char *tab) {
int i, len;
// IPv6 addresses are at least 3 chars long
len = strlen(tab);
if (len < 3) {
return 0;
}
// Each char must be a digit, :, a-f, or A-F
for (i=0; i<len; i++) {
if (!isdigit(tab[i]) && tab[i] != ':' && !(tab[i] >= 'a' && tab[i] <= 'f') && !(tab[i] >= 'A' && tab[i] <= 'F')) {
return 0;
}
}
return 1;
}
int guess_type(int len, char *tab[]) {
int i;
for (i=0; i<len; i++) {
if (isIPv4(tab[i])) {
return 4;
} else if (isIPv6(tab[i])) {
return 6;
}
}
return 0;
}
void exec(char *str, char **argv) {
argv[0] = str;
execvp(str, argv);
// execvp returns only if it fails
printf("ip46tables: exec failed %d\n", errno);
exit(1);
}
int main(int argc, char **argv) {
if (argc < 2) {
printf("ip46tables: At least one argument has to be given\n");
exit(1);
}
int type;
type = guess_type(argc, argv);
if (type == 4) {
exec("iptables", argv);
} else if (type == 6) {
exec("ip6tables", argv);
} else {
pid_t pid = fork();
if (pid == -1) {
printf("ip46tables: fork failed\n");
exit(1);
} else if (pid) {
exec("iptables", argv);
} else {
exec("ip6tables", argv);
}
}
}
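For illustration, the address-classification heuristic used by ip46tables.c can be sketched in Go, the project's main language. This is a hypothetical port for readability, not code from the repository, and it reproduces the C version's loose heuristic rather than doing full address validation:

```go
package main

import "fmt"

func isDigit(c byte) bool { return c >= '0' && c <= '9' }

// isIPv4 mirrors the C heuristic: at least 7 chars, digits at both ends,
// and every inner char either a digit or a dot between two digits.
func isIPv4(s string) bool {
	if len(s) < 7 || !isDigit(s[0]) || !isDigit(s[len(s)-1]) {
		return false
	}
	for i := 1; i < len(s)-1; i++ {
		if !isDigit(s[i]) && !(s[i] == '.' && isDigit(s[i-1]) && isDigit(s[i+1])) {
			return false
		}
	}
	return true
}

// isIPv6 mirrors the C heuristic: at least 3 chars, every char a hex digit or ':'.
func isIPv6(s string) bool {
	if len(s) < 3 {
		return false
	}
	for i := 0; i < len(s); i++ {
		c := s[i]
		if !isDigit(c) && c != ':' && !(c >= 'a' && c <= 'f') && !(c >= 'A' && c <= 'F') {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isIPv4("1.2.3.4"), isIPv6("a:b::c:d"), isIPv4("a:b::c:d"))
	// prints: true true false
}
```

Like the C original, this accepts some non-addresses (e.g. "abcdef" passes isIPv6); it is a routing heuristic for choosing iptables vs ip6tables, not a validator.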


@@ -1,97 +0,0 @@
#include<ctype.h>
#include<errno.h>
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<unistd.h>
// nft46 'add element inet reaction ipvXbans { 1.2.3.4 }' → nft 'add element inet reaction ipv4bans { 1.2.3.4 }'
// nft46 'add element inet reaction ipvXbans { a:b::c:d }' → nft 'add element inet reaction ipv6bans { a:b::c:d }'
//
// the character X is replaced by 4 or 6 depending on the address family of the specified IP
//
// Limitations:
// - nft46 must receive exactly one argument
// - only one IP must be given per command
// - the IP must be between { braces }
int isIPv4(char *tab, int len) {
int i;
// IPv4 addresses are at least 7 chars long
if (len < 7 || !isdigit(tab[0]) || !isdigit(tab[len-1])) {
return 0;
}
// Each char must be a digit or a dot between 2 digits
for (i=1; i<len-1; i++) {
if (!isdigit(tab[i]) && !(tab[i] == '.' && isdigit(tab[i-1]) && isdigit(tab[i+1]))) {
return 0;
}
}
return 1;
}
int isIPv6(char *tab, int len) {
int i;
// IPv6 addresses are at least 3 chars long
if (len < 3) {
return 0;
}
// Each char must be a digit, :, a-f, or A-F
for (i=0; i<len; i++) {
if (!isdigit(tab[i]) && tab[i] != ':' && tab[i] != '.' && !(tab[i] >= 'a' && tab[i] <= 'f') && !(tab[i] >= 'A' && tab[i] <= 'F')) {
return 0;
}
}
return 1;
}
int findchar(char *tab, char c, int i, int len) {
while (i < len && tab[i] != c) i++;
if (i == len) {
printf("nft46: one %c must be present\n", c);
exit(1);
}
return i;
}
void adapt_args(char *tab) {
int i, len, X, startIP, endIP;
X = startIP = endIP = -1;
len = strlen(tab);
i = 0;
X = i = findchar(tab, 'X', i, len);
startIP = i = findchar(tab, '{', i, len);
while (startIP + 1 <= (i = findchar(tab, ' ', i, len))) startIP = i + 1;
i = startIP;
endIP = i = findchar(tab, ' ', i, len) - 1;
if (isIPv4(tab+startIP, endIP-startIP+1)) {
tab[X] = '4';
return;
}
if (isIPv6(tab+startIP, endIP-startIP+1)) {
tab[X] = '6';
return;
}
printf("nft46: no IP address found\n");
exit(1);
}
void exec(char *str, char **argv) {
argv[0] = str;
execvp(str, argv);
// execvp returns only if it fails
printf("nft46: exec failed %d\n", errno);
exit(1);
}
int main(int argc, char **argv) {
if (argc != 2) {
printf("nft46: Exactly one argument must be given\n");
exit(1);
}
adapt_args(argv[1]);
exec("nft", argv);
}
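The X → 4/6 rewrite that nft46 performs can be sketched in Go. This is a simplified, hypothetical re-implementation, not code shipped in the repository: it classifies the address merely by the presence of ':' or '.', instead of the character-by-character validation the C code does, and assumes a single IP between the braces:

```go
package main

import (
	"fmt"
	"strings"
)

// adaptRule replaces the 'X' in an nft rule such as
// "add element inet reaction ipvXbans { 1.2.3.4 }" with '4' or '6',
// depending on the address family of the IP found between the braces.
func adaptRule(rule string) (string, error) {
	start := strings.IndexByte(rule, '{')
	end := strings.IndexByte(rule, '}')
	if start < 0 || end < start {
		return "", fmt.Errorf("no { braces } found")
	}
	ip := strings.TrimSpace(rule[start+1 : end])
	var family string
	switch {
	case strings.Contains(ip, ":"):
		family = "6" // IPv6 addresses always contain colons
	case strings.Contains(ip, "."):
		family = "4"
	default:
		return "", fmt.Errorf("no IP address found")
	}
	return strings.Replace(rule, "X", family, 1), nil
}

func main() {
	out, _ := adaptRule("add element inet reaction ipvXbans { 1.2.3.4 }")
	fmt.Println(out)
	// prints: add element inet reaction ipv4bans { 1.2.3.4 }
}
```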


@@ -1,80 +0,0 @@
package logger
import "log"
type Level int
const (
UNKNOWN = Level(-1)
DEBUG = Level(1)
INFO = Level(2)
WARN = Level(3)
ERROR = Level(4)
FATAL = Level(5)
)
func (l Level) String() string {
switch l {
case DEBUG:
return "DEBUG "
case INFO:
return "INFO "
case WARN:
return "WARN "
case ERROR:
return "ERROR "
case FATAL:
return "FATAL "
default:
return "????? "
}
}
func FromString(s string) Level {
switch s {
case "DEBUG":
return DEBUG
case "INFO":
return INFO
case "WARN":
return WARN
case "ERROR":
return ERROR
case "FATAL":
return FATAL
default:
return UNKNOWN
}
}
var LogLevel Level = 2
func SetLogLevel(level Level) {
LogLevel = level
}
func Println(level Level, args ...any) {
if level >= LogLevel {
newargs := make([]any, 0)
newargs = append(newargs, level)
newargs = append(newargs, args...)
log.Println(newargs...)
}
}
func Printf(level Level, format string, args ...any) {
if level >= LogLevel {
log.Printf(level.String()+format, args...)
}
}
func Fatalln(args ...any) {
newargs := make([]any, 0)
newargs = append(newargs, FATAL)
newargs = append(newargs, args...)
log.Fatalln(newargs...)
}
func Fatalf(format string, args ...any) {
log.Fatalf(FATAL.String()+format, args...)
}
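To show what the level gating in the logger package means in practice, here is a small self-contained Go sketch of the same semantics (a hypothetical illustration, not part of the package): with the default level 2 (INFO), DEBUG messages are dropped and everything at INFO or above is emitted.

```go
package main

import "fmt"

type Level int

const (
	UNKNOWN Level = -1
	DEBUG   Level = 1
	INFO    Level = 2
	WARN    Level = 3
	ERROR   Level = 4
	FATAL   Level = 5
)

var logLevel = INFO // mirrors the package default of 2

// shouldLog mirrors the gating in Println/Printf:
// a message is emitted only when its level is at or above the configured one.
func shouldLog(level Level) bool { return level >= logLevel }

// fromString mirrors FromString.
func fromString(s string) Level {
	switch s {
	case "DEBUG":
		return DEBUG
	case "INFO":
		return INFO
	case "WARN":
		return WARN
	case "ERROR":
		return ERROR
	case "FATAL":
		return FATAL
	default:
		return UNKNOWN
	}
}

func main() {
	fmt.Println(shouldLog(DEBUG), shouldLog(ERROR), fromString("WARN") == WARN)
	// prints: false true true
}
```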

logo/nlnet.svg

@@ -0,0 +1,34 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN" "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<!-- Created using Karbon14, part of koffice: http://www.koffice.org/karbon -->
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="449px" height="168px">
<defs>
</defs>
<g id="Layer">
</g>
<g id="Layer">
<path fill="#98bf00" d="M446.602 73.8789L449.102 60.234L436.207 60.234L439.957 40.145L424.512 46.191L422.012 60.234L412.617 60.234L410.117 73.8789L419.363 73.8789L416.215 91.1719C416.066 92.125 415.816 93.5234 415.566 95.3203C415.316 97.1211 415.164 98.7188 415.164 100.07C415.215 106.316 416.715 111.465 419.664 115.516C422.613 119.66 427.41 122.109 434.109 122.859L440.555 109.566C437.105 109.117 434.508 107.766 432.66 105.469C430.809 103.117 429.91 100.168 429.91 96.5703C429.91 95.8711 430.012 94.8711 430.16 93.5234C430.309 92.1719 430.461 91.0742 430.609 90.2227L433.609 73.8789L446.602 73.8789L446.602 73.8789Z" />
<path fill="#98bf00" d="M310.707 72.332C313.105 71.4805 315.207 71.0312 316.957 71.0312C318.855 71.0312 320.453 71.582 321.754 72.6797C323.004 73.7305 323.602 75.2812 323.602 77.4297C323.602 78.0273 323.504 78.9297 323.301 80.1797C323.102 81.3281 322.953 82.3789 322.805 83.2773L319.203 100.168C318.953 101.469 318.703 102.82 318.453 104.219C318.203 105.668 318.105 106.918 318.105 107.965C318.105 112.016 319.203 115.414 321.453 118.113C323.602 120.812 327.449 122.41 333 122.859L339.348 110.016C337.195 109.668 335.648 108.867 334.699 107.617C333.699 106.418 333.199 104.719 333.199 102.57C333.199 102.07 333.25 101.469 333.348 100.82C333.398 100.168 333.5 99.6211 333.547 99.2188L337.195 82.0273C337.496 80.5781 337.746 79.1289 337.945 77.6797C338.148 76.2812 338.246 74.8789 338.246 73.5312C338.246 68.582 336.797 64.586 333.898 61.637C330.949 58.688 326.852 57.188 321.602 57.188C318.555 57.188 315.656 57.688 312.809 58.688C310.008 59.637 306.609 61.234 302.66 63.586C302.512 62.637 302.16 61.484 301.66 60.188C301.113 58.938 300.512 57.836 299.863 56.836L286.469 62.586C287.617 64.336 288.516 66.184 289.066 68.082C289.566 69.9805 289.816 71.7812 289.816 73.4297C289.816 74.2812 289.766 75.3281 289.617 76.4805C289.516 77.6289 289.367 78.5273 289.215 79.1797L281.27 121.512L295.664 121.512L304.109 75.8281C306.16 74.2812 308.359 73.1289 310.707 72.332L310.707 72.332Z" />
<path fill="#98bf00" d="M350.742 80.0781C349.191 84.6758 348.441 89.5742 348.441 94.7227C348.441 99.2188 349.043 103.219 350.191 106.719C351.34 110.215 352.992 113.164 355.09 115.516C357.141 117.914 359.688 119.711 362.637 120.961C365.586 122.211 368.883 122.859 372.484 122.859C376.832 122.859 381.129 122.062 385.43 120.461C389.777 118.863 393.574 116.363 396.824 113.016L391.426 100.519C388.926 103.32 386.176 105.418 383.129 106.867C380.078 108.316 377.031 109.016 374.031 109.016C370.535 109.016 367.785 107.918 365.785 105.719C363.836 103.469 362.836 100.668 362.836 97.3711L362.836 96.4219C362.836 96.0234 362.887 95.6211 362.988 95.2227C365.637 94.8711 368.633 94.4219 371.984 93.8242C375.332 93.2227 378.73 92.5234 382.18 91.7227C385.629 90.875 388.977 89.9258 392.273 88.9258C395.523 87.9258 398.422 86.875 400.871 85.8242L400.871 80.0781C400.871 76.5312 400.32 73.332 399.223 70.4805C398.074 67.734 396.574 65.332 394.625 63.285C392.676 61.285 390.324 59.785 387.676 58.785C385.078 57.738 382.23 57.188 379.18 57.188C374.73 57.188 370.582 58.188 366.836 60.137C363.035 62.086 359.789 64.785 357.141 68.2344C354.391 71.6328 352.293 75.5781 350.742 80.0781L350.742 80.0781ZM372.383 69.9805C373.934 69.1328 375.684 68.7344 377.633 68.7344C380.281 68.7344 382.48 69.582 384.227 71.332C385.977 73.0312 386.879 75.5781 386.879 79.0273C385.43 79.4766 383.727 80.0273 381.73 80.5781C379.68 81.0781 377.633 81.5781 375.531 82.0273C373.383 82.4766 371.332 82.9258 369.285 83.3281C367.234 83.6758 365.484 83.9766 363.984 84.2266C364.234 82.1289 364.688 80.1289 365.387 78.2773C366.137 76.4297 367.086 74.7812 368.234 73.3789C369.484 71.9805 370.832 70.832 372.383 69.9805L372.383 69.9805Z" fill-rule="evenodd" />
<path fill="#000000" d="M404.172 140.453C404.172 139.203 403.969 138.055 403.57 137.055C403.172 136.055 402.621 135.207 401.973 134.457C401.27 133.758 400.473 133.207 399.523 132.856C398.574 132.508 397.523 132.309 396.422 132.309C394.973 132.309 393.625 132.606 392.375 133.156C391.125 133.707 390.027 134.508 389.078 135.504C388.125 136.504 387.379 137.656 386.828 139.004C386.277 140.356 385.977 141.805 385.977 143.402C385.977 144.652 386.176 145.75 386.578 146.801C386.926 147.801 387.477 148.652 388.176 149.352C388.828 150.101 389.676 150.648 390.625 151.051C391.574 151.399 392.625 151.598 393.773 151.598C395.176 151.598 396.523 151.301 397.773 150.75C399.023 150.199 400.121 149.398 401.07 148.402C402.02 147.449 402.77 146.25 403.32 144.902C403.871 143.551 404.172 142.055 404.172 140.453L404.172 140.453ZM390.277 140.402C390.574 139.504 390.977 138.703 391.477 138.004C392.023 137.305 392.676 136.754 393.426 136.305C394.176 135.856 394.973 135.656 395.922 135.656C397.371 135.656 398.422 136.106 399.172 137.004C399.922 137.856 400.32 139.106 400.32 140.652C400.32 141.602 400.172 142.555 399.871 143.504C399.621 144.402 399.223 145.203 398.672 145.902C398.121 146.602 397.473 147.152 396.723 147.601C395.973 148 395.125 148.199 394.223 148.199C392.773 148.199 391.727 147.75 390.977 146.902C390.227 146 389.824 144.801 389.824 143.254C389.824 142.305 389.977 141.352 390.277 140.402L390.277 140.402Z" fill-rule="evenodd" />
<path fill="#000000" d="M434.559 132.559L431.008 132.559L429.109 143.602C429.059 143.754 429.012 144.004 429.012 144.352C429.012 144.703 429.012 144.953 429.012 145.203L428.859 145.203L422.465 132.559L419.113 132.559L415.766 151.301L419.363 151.301L421.363 140.004C421.414 139.856 421.414 139.606 421.414 139.356C421.414 139.106 421.414 138.805 421.414 138.504L421.563 138.504L428.109 151.449L431.309 151.149L434.559 132.559L434.559 132.559Z" />
<path fill="#000000" d="M374.383 132.559L370.734 132.559L367.387 151.301L371.082 151.301L374.383 132.559L374.383 132.559Z" />
<path fill="#000000" d="M328.949 132.559L324.703 132.559C323.902 133.906 323.051 135.457 322.102 137.106C321.152 138.754 320.254 140.453 319.355 142.152C318.453 143.852 317.656 145.5 316.906 147.102C316.156 148.699 315.555 150.101 315.105 151.301L318.953 151.301C319.105 150.949 319.254 150.5 319.453 150.051C319.652 149.602 319.855 149.102 320.105 148.652C320.305 148.199 320.504 147.75 320.703 147.301C320.902 146.852 321.102 146.453 321.254 146.102L327.75 146.102C327.801 146.551 327.801 147 327.852 147.5L328 148.949C328.051 149.398 328.102 149.852 328.152 150.301C328.199 150.75 328.199 151.098 328.199 151.449L331.898 151.149C331.898 150.449 331.848 149.648 331.75 148.699C331.699 147.75 331.551 146.75 331.398 145.703C331.25 144.652 331.098 143.504 330.898 142.351C330.75 141.203 330.551 140.055 330.301 138.906C330.102 137.754 329.898 136.656 329.648 135.555C329.398 134.508 329.199 133.508 328.949 132.559L328.949 132.559ZM326.602 138.106C326.703 138.656 326.801 139.254 326.902 139.902C327 140.504 327.102 141.106 327.152 141.652C327.25 142.203 327.301 142.601 327.352 142.953L322.703 142.953C322.953 142.504 323.203 142.004 323.453 141.453C323.754 140.902 324.051 140.305 324.352 139.703C324.703 139.106 325 138.555 325.301 138.004C325.602 137.453 325.852 136.957 326.102 136.606L326.301 136.606C326.402 137.004 326.5 137.504 326.602 138.106L326.602 138.106Z" fill-rule="evenodd" />
<path fill="#000000" d="M357.641 135.957L358.188 132.559L345.395 132.559L344.844 135.957L349.391 135.957L346.742 151.301L350.391 151.301L353.09 135.957L357.641 135.957L357.641 135.957Z" />
<path fill="#000000" d="M297.465 132.309C296.414 132.309 295.363 132.356 294.312 132.457C293.266 132.606 292.266 132.758 291.316 133.008L288.168 150.852C289.117 151.098 290.215 151.25 291.414 151.399C292.566 151.551 293.664 151.598 294.715 151.598C296.262 151.598 297.664 151.348 299.012 150.852C300.363 150.301 301.562 149.602 302.562 148.652C303.559 147.699 304.359 146.551 304.961 145.203C305.508 143.852 305.809 142.305 305.809 140.606C305.809 139.254 305.609 138.106 305.211 137.055C304.762 136.004 304.211 135.156 303.461 134.457C302.711 133.758 301.812 133.207 300.812 132.856C299.762 132.508 298.664 132.309 297.465 132.309L297.465 132.309ZM296.664 135.707C297.414 135.707 298.113 135.805 298.762 135.957C299.414 136.106 299.961 136.406 300.41 136.805C300.91 137.203 301.312 137.703 301.562 138.356C301.812 138.953 301.961 139.703 301.961 140.652C301.961 141.852 301.812 142.902 301.461 143.852C301.16 144.801 300.711 145.602 300.113 146.25C299.512 146.902 298.812 147.352 297.961 147.699C297.113 148.051 296.215 148.199 295.164 148.199C294.715 148.199 294.266 148.199 293.715 148.152C293.164 148.102 292.664 148.051 292.316 148L294.465 135.906C294.766 135.856 295.164 135.805 295.613 135.754C296.062 135.707 296.414 135.707 296.664 135.707L296.664 135.707Z" fill-rule="evenodd" />
<path fill="#000000" d="M185.809 62.586C186.957 64.336 187.855 66.184 188.406 68.082C188.906 69.9805 189.156 71.7812 189.156 73.4297C189.156 74.2812 189.105 75.3281 188.957 76.4805C188.855 77.6289 188.707 78.5273 188.555 79.1797L180.609 121.512L195.004 121.512L203.449 75.8281C205.5 74.2812 207.699 73.1289 210.047 72.332C212.445 71.4805 214.547 71.0312 216.297 71.0312C218.195 71.0312 219.793 71.582 221.094 72.6797C222.344 73.7305 222.941 75.2812 222.941 77.4297C222.941 78.0273 222.844 78.9297 222.645 80.1797C222.441 81.3281 222.293 82.3789 222.145 83.2773L218.543 100.168C218.293 101.469 218.043 102.82 217.793 104.219C217.547 105.668 217.445 106.918 217.445 107.965C217.445 112.016 218.543 115.414 220.793 118.113C222.941 120.812 226.793 122.41 232.34 122.859L238.688 110.016C236.539 109.668 234.988 108.867 234.039 107.617C233.039 106.418 232.539 104.719 232.539 102.57C232.539 102.07 232.59 101.469 232.688 100.82C232.738 100.168 232.84 99.6211 232.891 99.2188L236.539 82.0273C236.836 80.5781 237.086 79.1289 237.285 77.6797C237.488 76.2812 237.586 74.8789 237.586 73.5312C237.586 68.582 236.137 64.586 233.238 61.637C230.289 58.688 226.191 57.188 220.945 57.188C217.895 57.188 214.996 57.688 212.148 58.688C209.348 59.637 205.949 61.234 202 63.586C201.852 62.637 201.5 61.484 201 60.188C200.453 58.938 199.852 57.836 199.203 56.836L185.809 62.586L185.809 62.586Z" />
<path fill="#000000" d="M276.82 31.547L262.676 31.547L251.883 90.0234C251.43 91.9727 251.082 94.0234 250.832 96.1719C250.582 98.2695 250.434 100.219 250.434 102.019C250.434 107.816 251.531 112.566 253.781 116.262C256.031 119.961 259.828 122.16 265.176 122.859L271.672 109.566C270.625 109.066 269.723 108.516 268.875 107.918C268.023 107.367 267.324 106.617 266.773 105.769C266.176 104.918 265.727 103.918 265.477 102.719C265.227 101.519 265.074 100.019 265.074 98.2695C265.074 97.4219 265.125 96.4727 265.227 95.4727C265.375 94.4219 265.527 93.3711 265.676 92.2734L276.82 31.547L276.82 31.547Z" />
<path fill="#000000" d="M246.434 132.559L242.785 132.559L240.387 146.25C239.887 146.801 239.285 147.25 238.535 147.652C237.785 148 236.988 148.199 236.086 148.199C235.188 148.199 234.488 148 233.988 147.601C233.438 147.152 233.188 146.453 233.188 145.402C233.188 145.203 233.238 144.902 233.289 144.504C233.34 144.152 233.34 143.801 233.387 143.504L235.387 132.559L231.688 132.559L229.738 143.453C229.691 143.902 229.641 144.352 229.59 144.801C229.539 145.25 229.539 145.602 229.539 145.953C229.539 146.953 229.691 147.801 229.988 148.551C230.289 149.301 230.691 149.852 231.191 150.301C231.738 150.75 232.34 151.098 232.988 151.301C233.688 151.5 234.387 151.598 235.137 151.598C236.988 151.598 238.637 151.051 240.137 149.898C240.137 150.148 240.137 150.449 240.188 150.75C240.188 151 240.188 151.25 240.234 151.5L243.883 151.25C243.836 151 243.836 150.75 243.836 150.449C243.785 150.199 243.785 149.898 243.785 149.551C243.785 148.949 243.836 148.301 243.883 147.652C243.934 146.953 243.984 146.301 244.133 145.703L246.434 132.559L246.434 132.559Z" />
<path fill="#000000" d="M276.621 132.559L273.074 132.559L271.172 143.602C271.125 143.754 271.074 144.004 271.074 144.352C271.074 144.703 271.074 144.953 271.074 145.203L270.922 145.203L264.527 132.559L261.176 132.559L257.828 151.301L261.426 151.301L263.426 140.004C263.477 139.856 263.477 139.606 263.477 139.356C263.477 139.106 263.477 138.805 263.477 138.504L263.625 138.504L270.176 151.449L273.371 151.149L276.621 132.559L276.621 132.559Z" />
<path fill="#000000" d="M214.797 134.457C214.098 133.758 213.297 133.207 212.348 132.856C211.398 132.508 210.348 132.309 209.25 132.309C207.801 132.309 206.449 132.606 205.199 133.156C203.949 133.707 202.852 134.508 201.902 135.504C200.953 136.504 200.203 137.656 199.652 139.004C199.102 140.356 198.801 141.805 198.801 143.402C198.801 144.652 199.004 145.75 199.402 146.801C199.754 147.801 200.301 148.652 201 149.352C201.652 150.101 202.5 150.648 203.449 151.051C204.398 151.399 205.449 151.598 206.598 151.598C208 151.598 209.348 151.301 210.598 150.75C211.848 150.199 212.945 149.398 213.895 148.402C214.848 147.449 215.598 146.25 216.145 144.902C216.695 143.551 216.996 142.055 216.996 140.453C216.996 139.203 216.797 138.055 216.395 137.055C215.996 136.055 215.445 135.207 214.797 134.457L214.797 134.457ZM204.301 138.004C204.852 137.305 205.5 136.754 206.25 136.305C207 135.856 207.801 135.656 208.75 135.656C210.199 135.656 211.246 136.106 211.996 137.004C212.746 137.856 213.148 139.106 213.148 140.652C213.148 141.602 212.996 142.555 212.695 143.504C212.445 144.402 212.047 145.203 211.496 145.902C210.949 146.602 210.297 147.152 209.547 147.601C208.797 148 207.949 148.199 207.051 148.199C205.602 148.199 204.551 147.75 203.801 146.902C203.051 146 202.652 144.801 202.652 143.254C202.652 142.305 202.801 141.352 203.102 140.402C203.402 139.504 203.801 138.703 204.301 138.004L204.301 138.004Z" fill-rule="evenodd" />
<path fill="#000000" d="M188.258 132.559L177.961 132.559L174.613 151.301L178.312 151.301L179.559 144.152L186.309 144.152L186.906 140.754L180.16 140.754L181.008 135.957L187.656 135.957L188.258 132.559L188.258 132.559Z" />
<path fill="#98bf00" d="M127.082 44.891C128.43 33.945 125.684 24.102 118.883 15.402C112.086 6.707 103.191 1.66 92.2461 0.309C81.3008 -1.039 71.4531 1.711 62.7578 8.508C54.7109 14.754 49.8125 22.801 48.0625 32.648C47.9141 33.496 47.7617 34.297 47.6641 35.145C47.5625 35.996 47.5117 36.797 47.4648 37.594C47.1133 42.191 47.5625 46.59 48.7617 50.789C50.1133 55.688 52.4609 60.285 55.8594 64.633C59.2578 68.9805 63.1563 72.3828 67.6055 74.9297C71.3516 77.0781 75.5 78.5273 80.0508 79.3281C80.8516 79.4766 81.6484 79.5781 82.5 79.7266C82.9492 79.7773 83.3984 79.8281 83.8477 79.8789C84.9492 75.4297 86.6484 71.2812 88.9961 67.531C87.4453 67.582 85.8477 67.531 84.25 67.383C84.1484 67.332 84.0977 67.332 84.0469 67.332C82.1992 67.082 80.3984 66.734 78.75 66.184C73.6016 64.535 69.2539 61.484 65.707 56.938C62.1562 52.391 60.2578 47.441 59.9062 42.043C59.8086 40.293 59.8594 38.543 60.1094 36.695C60.1094 36.645 60.1094 36.547 60.1094 36.496C61.0586 29.047 64.5078 23 70.4531 18.352C76.4531 13.703 83.1992 11.805 90.7461 12.754C98.293 13.656 104.391 17.102 109.039 23.102C113.688 29.098 115.586 35.844 114.688 43.395C114.438 45.094 114.137 46.691 113.688 48.242C117.887 46.891 122.281 46.191 126.883 46.242C126.93 45.793 127.031 45.344 127.082 44.891L127.082 44.891Z" />
<path fill="#98bf00" d="M132.328 51.488C131.48 51.391 130.68 51.289 129.828 51.238C125.23 50.941 120.832 51.391 116.637 52.539C111.738 53.887 107.141 56.289 102.789 59.688C98.4414 63.035 95.043 66.934 92.5469 71.3828C90.3945 75.1289 88.9453 79.2773 88.0977 83.8281C92.4453 84.5742 96.4453 85.8242 100.141 87.6758C100.391 85.875 100.742 84.1758 101.242 82.5781C102.891 77.4297 105.941 73.082 110.488 69.5312C115.035 65.984 119.984 64.035 125.434 63.684C127.18 63.586 128.93 63.633 130.781 63.883C130.828 63.883 130.879 63.883 130.93 63.883C138.375 64.836 144.426 68.332 149.074 74.2812C153.77 80.2266 155.668 86.9766 154.719 94.5234C153.77 102.07 150.32 108.168 144.375 112.863C138.426 117.512 131.68 119.363 124.23 118.461C125.082 122.512 125.332 126.758 125.031 131.156C134.977 131.809 143.973 128.957 152.02 122.711C160.719 115.914 165.766 107.016 167.113 96.0703C168.465 85.125 165.715 75.2812 158.918 66.582C152.621 58.535 144.574 53.637 134.777 51.891C133.93 51.738 133.129 51.59 132.328 51.488L132.328 51.488Z" />
<path fill="#000000" d="M128.93 78.7266C125.48 78.3281 122.434 79.1797 119.684 81.3281C116.934 83.4766 115.387 86.2266 114.984 89.625C114.535 93.0742 115.387 96.1211 117.535 98.8711C119.684 101.621 122.434 103.168 125.883 103.57C129.281 104.019 132.328 103.168 135.078 101.019C137.828 98.8711 139.375 96.1211 139.824 92.6719C140.227 89.2734 139.375 86.2266 137.227 83.4766C135.078 80.7266 132.328 79.1797 128.93 78.7266L128.93 78.7266Z" />
<path fill="#98bf00" d="M12.8281 73.6289C13.7773 66.082 17.2266 59.938 23.2227 55.289C29.1719 50.641 35.8672 48.742 43.3164 49.691C42.4648 45.641 42.1641 41.395 42.5156 36.996C32.5703 36.344 23.5742 39.145 15.5273 45.441C6.77734 52.238 1.78125 61.137 0.433594 72.082C-0.917969 83.0273 1.78125 92.8242 8.62891 101.57C14.875 109.617 22.9219 114.516 32.7695 116.262C33.5703 116.414 34.3672 116.512 35.2188 116.664C36.0664 116.762 36.8672 116.863 37.7188 116.914C42.3164 117.215 46.7148 116.762 50.9102 115.613C55.7578 114.215 60.4062 111.816 64.7578 108.465C69.0547 105.066 72.4531 101.168 75.0039 96.7695C77.1523 93.0234 78.6016 88.875 79.4492 84.3281C75.1016 83.5781 71.1055 82.2773 67.4062 80.4766C67.1563 82.2266 66.8047 83.9258 66.3047 85.5742C64.6562 90.7227 61.6055 95.0703 57.0586 98.6211C52.5117 102.168 47.5625 104.117 42.1641 104.469C40.4141 104.566 38.6172 104.519 36.7656 104.269C36.7188 104.269 36.668 104.269 36.6172 104.219C29.1719 103.269 23.1211 99.8203 18.4727 93.8711C13.7773 87.875 11.8789 81.1289 12.8281 73.6289L12.8281 73.6289Z" />
<path fill="#000000" d="M32.4688 67.133C29.7188 69.2305 28.1719 72.0312 27.7227 75.4805C27.3203 78.8281 28.1719 81.8789 30.3203 84.625C32.418 87.375 35.168 88.9727 38.6172 89.4258C42.0664 89.7734 45.1133 88.9258 47.8633 86.8242C50.5625 84.6758 52.1094 81.8789 52.5625 78.5273C53.0117 75.0781 52.1602 71.9805 50.0117 69.2812C47.8633 66.535 45.1133 64.984 41.6641 64.586C38.2148 64.133 35.168 64.984 32.4688 67.133L32.4688 67.133Z" />
<path fill="#000000" d="M97.293 32.348C95.1445 29.598 92.3438 28.047 88.9453 27.648C85.4961 27.199 82.4492 28.047 79.75 30.199C77 32.297 75.4023 35.098 75.0039 38.543C74.5508 41.941 75.4531 44.992 77.6016 47.742C79.6992 50.441 82.4492 52.039 85.8984 52.488C89.2969 52.84 92.3438 51.988 95.0938 49.891C97.8438 47.742 99.3906 44.941 99.8438 41.594C100.242 38.145 99.3906 35.047 97.293 32.348L97.293 32.348Z" />
<path fill="#98bf00" d="M85.0469 88.4258C84.5977 88.375 84.1484 88.3242 83.6992 88.2734C82.5977 92.7227 80.8984 96.8711 78.5508 100.621C80.1016 100.519 81.6992 100.57 83.3477 100.769C83.3984 100.769 83.4492 100.769 83.5 100.82C85.3477 101.019 87.0977 101.371 88.7969 101.918C93.9453 103.57 98.293 106.668 101.84 111.215C105.391 115.715 107.289 120.66 107.641 126.109C107.738 127.859 107.688 129.609 107.438 131.457C107.438 131.508 107.438 131.559 107.438 131.656C106.488 139.106 103.039 145.152 97.0938 149.801C91.0938 154.449 84.3477 156.348 76.8008 155.398C69.2539 154.449 63.1563 151 58.5078 145.051C53.8086 139.055 51.9102 132.309 52.8594 124.762C53.0625 123.062 53.4102 121.461 53.9102 119.91C49.6641 121.262 45.2656 121.91 40.6641 121.91C40.6172 122.359 40.5156 122.812 40.4648 123.262C39.1172 134.207 41.8164 144.004 48.6641 152.75C55.4609 161.445 64.3555 166.492 75.3008 167.844C86.2461 169.191 96.043 166.445 104.789 159.645C112.836 153.348 117.734 145.301 119.484 135.457C119.633 134.656 119.734 133.856 119.883 133.008C119.934 132.156 120.035 131.359 120.082 130.559C120.383 125.91 119.934 121.512 118.785 117.363C117.434 112.465 115.035 107.867 111.688 103.519C108.289 99.1719 104.391 95.7227 99.9922 93.2227C96.1953 91.0742 92.0469 89.625 87.4961 88.8242C86.6992 88.6758 85.8984 88.5234 85.0469 88.4258L85.0469 88.4258Z" />
<path fill="#000000" d="M89.9961 120.41C87.8477 117.664 85.0977 116.113 81.6484 115.664C78.1992 115.266 75.1523 116.113 72.4531 118.262C69.7031 120.41 68.1562 123.16 67.7031 126.559C67.2539 130.008 68.1562 133.059 70.3047 135.805C72.4024 138.555 75.1523 140.106 78.6016 140.504C82.0508 140.953 85.0977 140.106 87.8477 137.953C90.5469 135.805 92.0938 133.059 92.5469 129.609C92.9453 126.211 92.0938 123.16 89.9961 120.41L89.9961 120.41Z" />
</g>
</svg>


logo/reaction.svg Normal file

@@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" width="260" height="260">
<polygon points="0,0 0,260 260,260 260,0" style="fill:black;"/>
<polygon points="20,100 80,100 80,120 60,120 60,130 80,130 80,140 50,140 50,110 30,110 30,140 20,140" style="fill:#d18c6d;"/>
<polygon points="90,110 90,120 110,120 110,130 90,130 90,150 140,150 140,140 120,140 120,120 150,120 150,150 160,150 160,120 190,120 190,150 220,150 220,120 230,120 230,150 240,150 240,110" style="fill:white;"/>
<polygon points="170,130 180,130 180,150 170,150" style="fill:white;"/>
<polygon points="200,120 210,120 210,140 200,140" style="fill:black;"/>
</svg>


packaging/Makefile Normal file

@@ -0,0 +1,26 @@
PREFIX ?= /usr/local
BINDIR = $(PREFIX)/bin
MANDIR = $(PREFIX)/share/man/man1
SYSTEMDDIR ?= /etc/systemd

install:
	install -Dm755 reaction -t $(DESTDIR)$(BINDIR)/
	install -Dm755 reaction-plugin-virtual -t $(DESTDIR)$(BINDIR)/
	install -Dm644 reaction*.1 -t $(DESTDIR)$(MANDIR)/
	install -Dm644 reaction.bash $(DESTDIR)$(PREFIX)/share/bash-completion/completions/reaction
	install -Dm644 reaction.fish $(DESTDIR)$(PREFIX)/share/fish/vendor_completions.d/reaction.fish
	install -Dm644 _reaction $(DESTDIR)$(PREFIX)/share/zsh/vendor-completions/_reaction
	install -Dm644 reaction.service $(DESTDIR)$(SYSTEMDDIR)/system/reaction.service

install-ipset:
	install -Dm755 reaction-plugin-ipset -t $(DESTDIR)$(BINDIR)/

remove:
	rm -f $(DESTDIR)$(BINDIR)/reaction
	rm -f $(DESTDIR)$(BINDIR)/reaction-plugin-virtual
	rm -f $(DESTDIR)$(BINDIR)/reaction-plugin-ipset
	rm -f $(DESTDIR)$(MANDIR)/reaction*.1
	rm -f $(DESTDIR)$(PREFIX)/share/bash-completion/completions/reaction
	rm -f $(DESTDIR)$(PREFIX)/share/fish/vendor_completions.d/reaction.fish
	rm -f $(DESTDIR)$(PREFIX)/share/zsh/vendor-completions/_reaction
	rm -f $(DESTDIR)$(SYSTEMDDIR)/system/reaction.service


@@ -0,0 +1,24 @@
# vim: ft=systemd

[Unit]
Description=reaction daemon
Documentation=https://reaction.ppom.me
# Ensure reaction inserts its chain after docker has inserted its own. Only useful when iptables & docker are used.
# After=docker.service

# See `man systemd.exec` and `man systemd.service` for most options below
[Service]
ExecStart=/usr/bin/reaction start -c /etc/reaction/
# Ask systemd to create /var/lib/reaction (/var/lib/ is implicit)
StateDirectory=reaction
# Ask systemd to create /run/reaction at runtime (/run/ is implicit)
RuntimeDirectory=reaction
# Start reaction in its state directory
WorkingDirectory=/var/lib/reaction
# Let reaction kill its child processes first
KillMode=mixed
# Put reaction in its own slice so that plugins can be grouped within it.
Slice=system-reaction.slice

[Install]
WantedBy=multi-user.target
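The commented-out `After=docker.service` ordering can be enabled without editing the packaged unit by using a systemd drop-in. A hypothetical override (the path and contents are an example, not something reaction ships):

```ini
# /etc/systemd/system/reaction.service.d/override.conf
[Unit]
# Let docker insert its iptables chains before reaction inserts its own.
After=docker.service
```

After creating the drop-in, run `systemctl daemon-reload` so systemd picks it up.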


@@ -0,0 +1 @@
[Slice]


@@ -0,0 +1,23 @@
[package]
name = "reaction-plugin-cluster"
version = "0.1.0"
edition = "2024"

[dependencies]
reaction-plugin.workspace = true
chrono.workspace = true
futures.workspace = true
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio = { workspace = true, features = ["rt-multi-thread"] }
treedb.workspace = true
data-encoding = "2.9.0"
iroh = { version = "0.95.1", default-features = false }
rand = "0.9.2"

[dev-dependencies]
assert_fs.workspace = true


@@ -0,0 +1,165 @@
use std::{
collections::BTreeMap,
net::{SocketAddrV4, SocketAddrV6},
sync::Arc,
time::Duration,
};
use futures::future::join_all;
use iroh::{
Endpoint,
endpoint::{ConnectOptions, TransportConfig},
};
use reaction_plugin::{Line, shutdown::ShutdownController};
use remoc::rch::mpsc as remocMpsc;
use tokio::sync::mpsc as tokioMpsc;
use treedb::{Database, time::Time};
use crate::{ActionInit, StreamInit, connection::ConnectionManager, endpoint::EndpointManager};
pub const ALPN: [&[u8]; 1] = ["reaction_cluster_1".as_bytes()];
pub type UtcLine = (Arc<String>, Time);
pub fn transport_config() -> TransportConfig {
// FIXME higher timeouts and keep alive
let mut transport = TransportConfig::default();
transport
.max_idle_timeout(Some(Duration::from_millis(2000).try_into().unwrap()))
.keep_alive_interval(Some(Duration::from_millis(200)));
transport
}
pub fn connect_config() -> ConnectOptions {
ConnectOptions::new().with_transport_config(transport_config().into())
}
pub async fn bind(stream: &StreamInit) -> Result<Endpoint, String> {
let mut builder = Endpoint::builder()
.secret_key(stream.secret_key.clone())
.alpns(ALPN.iter().map(|slice| slice.to_vec()).collect())
.relay_mode(iroh::RelayMode::Disabled)
.clear_discovery()
.transport_config(transport_config());
if let Some(ip) = stream.bind_ipv4 {
builder = builder.bind_addr_v4(SocketAddrV4::new(ip, stream.listen_port));
}
if let Some(ip) = stream.bind_ipv6 {
builder = builder.bind_addr_v6(SocketAddrV6::new(ip, stream.listen_port, 0, 0));
}
builder.bind().await.map_err(|err| {
format!(
"Could not create socket address for cluster {}: {err}",
stream.name
)
})
}
pub async fn cluster_tasks(
endpoint: Endpoint,
mut stream: StreamInit,
mut actions: Vec<ActionInit>,
db: &mut Database,
shutdown: ShutdownController,
) -> Result<(), String> {
eprintln!("DEBUG cluster tasks starts running");
let (message_action2connection_txs, mut message_action2connection_rxs): (
Vec<tokioMpsc::Sender<UtcLine>>,
Vec<tokioMpsc::Receiver<UtcLine>>,
) = (0..stream.nodes.len())
.map(|_| tokioMpsc::channel(1))
.unzip();
// Spawn action tasks
while let Some(mut action) = actions.pop() {
let message_action2connection_txs = message_action2connection_txs.clone();
let own_cluster_tx = stream.tx.clone();
tokio::spawn(async move {
action
.serve(message_action2connection_txs, own_cluster_tx)
.await
});
}
let endpoint = Arc::new(endpoint);
let mut connection_endpoint2connection_txs = BTreeMap::new();
// Spawn connection managers
while let Some((pk, endpoint_addr)) = stream.nodes.pop_first() {
let cluster_name = stream.name.clone();
let endpoint = endpoint.clone();
let message_action2connection_rx = message_action2connection_rxs.pop().unwrap();
let stream_tx = stream.tx.clone();
let shutdown = shutdown.clone();
let (connection_manager, connection_endpoint2connection_tx) = ConnectionManager::new(
cluster_name,
endpoint_addr,
endpoint,
stream.message_timeout,
message_action2connection_rx,
stream_tx,
db,
shutdown,
)
.await?;
tokio::spawn(async move { connection_manager.task().await });
connection_endpoint2connection_txs.insert(pk, connection_endpoint2connection_tx);
}
// Spawn connection accepter
EndpointManager::new(
endpoint.clone(),
stream.name.clone(),
connection_endpoint2connection_txs,
shutdown.clone(),
);
eprintln!("DEBUG cluster tasks finished running");
Ok(())
}
impl ActionInit {
/// Receive messages from its reaction action and dispatch them to all connections and to the reaction stream.
async fn serve(
&mut self,
nodes_tx: Vec<tokioMpsc::Sender<UtcLine>>,
own_stream_tx: remocMpsc::Sender<Line>,
) {
while let Ok(Some(m)) = self.rx.recv().await {
eprintln!("DEBUG action: received a message to send to connections");
let line = self.send.line(m.match_);
if self.self_
&& let Err(err) = own_stream_tx.send((line.clone(), m.time)).await
{
eprintln!("ERROR while queueing message to be sent to own cluster stream: {err}");
}
let line = (Arc::new(line), m.time.into());
for result in join_all(nodes_tx.iter().map(|tx| tx.send(line.clone()))).await {
if let Err(err) = result {
eprintln!("ERROR while queueing message to be sent to cluster nodes: {err}");
};
}
}
}
}
#[cfg(test)]
mod tests {
use chrono::{DateTime, Local};
// As long as nodes communicate with UTC datetimes, them having different local timezones is not an issue!
#[test]
fn different_local_tz_is_ok() {
let dates: Vec<DateTime<Local>> = serde_json::from_str(
"[\"2025-11-02T17:47:21.716229569+01:00\",\"2025-11-02T18:47:21.716229569+02:00\"]",
)
.unwrap();
assert_eq!(dates[0].to_utc(), dates[1].to_utc());
}
}
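The `serve` loop above wraps each outgoing line in an `Arc<String>` before fanning it out to every per-connection queue, so the line is shared rather than copied once per node. A minimal std-only sketch of that pattern, using `std::sync::mpsc` in place of the tokio channels (`fan_out` is an illustrative name, not part of the plugin):

```rust
use std::sync::{Arc, mpsc};

// Fan one line out to several per-connection queues. Cloning an
// `Arc<String>` copies only the pointer, never the line contents.
fn fan_out(line: String, txs: &[mpsc::Sender<Arc<String>>]) -> usize {
    let line = Arc::new(line);
    txs.iter()
        .filter(|tx| tx.send(Arc::clone(&line)).is_ok())
        .count()
}

fn main() {
    let (tx1, rx1) = mpsc::channel();
    let (tx2, rx2) = mpsc::channel();
    let delivered = fan_out("192.0.2.1 failed login".to_string(), &[tx1, tx2]);
    assert_eq!(delivered, 2);
    let a = rx1.recv().unwrap();
    let b = rx2.recv().unwrap();
    // Both receivers see the very same allocation.
    assert!(Arc::ptr_eq(&a, &b));
    println!("delivered to {delivered} queues");
}
```

Counting successful sends (rather than aborting on the first error) mirrors how the real loop logs and continues when one node's queue is gone.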


@@ -0,0 +1,668 @@
use std::{cmp::max, io::Error as IoError, sync::Arc, time::Duration};
use futures::FutureExt;
use iroh::{
Endpoint, EndpointAddr,
endpoint::{Connection, RecvStream, SendStream, VarInt},
};
use rand::random_range;
use reaction_plugin::{Line, shutdown::ShutdownController};
use tokio::{
io::{AsyncReadExt, AsyncWriteExt, BufReader, BufWriter},
sync::mpsc,
time::sleep,
};
use treedb::{
Database, Tree,
helpers::{to_string, to_time},
time::{Time, now},
};
use crate::{
cluster::{ALPN, UtcLine, connect_config},
key::Show,
};
const PROTOCOL_VERSION: u32 = 1;
const CLOSE_RECV: (u32, &[u8]) = (1, b"error receiving from your stream");
const CLOSE_CLOSED: (u32, &[u8]) = (2, b"you closed your stream");
const CLOSE_SEND: (u32, &[u8]) = (3, b"could not send a message to your channel so I quit");
type MaybeRemoteLine = Result<Option<(String, Time)>, IoError>;
enum Event {
LocalMessageReceived(Option<UtcLine>),
RemoteMessageReceived(MaybeRemoteLine),
ConnectionReceived(Option<ConnOrConn>),
}
pub struct OwnConnection {
connection: Connection,
id: u64,
line_tx: BufWriter<SendStream>,
line_rx: BufReader<RecvStream>,
next_time_secs: Option<u64>,
next_time_nanos: Option<u32>,
next_len: Option<usize>,
next_line: Option<Vec<u8>>,
}
impl OwnConnection {
fn new(
connection: Connection,
id: u64,
line_tx: BufWriter<SendStream>,
line_rx: BufReader<RecvStream>,
) -> Self {
Self {
connection,
id,
line_tx,
line_rx,
next_time_secs: None,
next_time_nanos: None,
next_len: None,
next_line: None,
}
}
/// Send a line to the peer.
///
/// `Time` is a `std::time::Duration` since `UNIX_EPOCH`, which is defined in UTC,
/// so it is safe to exchange between nodes running in different timezones.
async fn send_line(&mut self, line: &String, time: &Time) -> Result<(), std::io::Error> {
self.line_tx.write_u64(time.as_secs()).await?;
self.line_tx.write_u32(time.subsec_nanos()).await?;
self.line_tx.write_u32(line.len() as u32).await?;
self.line_tx.write_all(line.as_bytes()).await?;
self.line_tx.flush().await?;
Ok(())
}
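// Wire format of one framed line, as written by `send_line` above and parsed
// incrementally by `recv_line` below (tokio's `write_u64`/`write_u32` encode
// big-endian):
//   u64        seconds since UNIX_EPOCH
//   u32        subsecond nanoseconds
//   u32        line length in bytes
//   [u8; len]  UTF-8 line contents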
/// Cancel-safe function that returns the next line from the peer.
/// Returns `Ok(None)` if we don't have all the data yet.
async fn recv_line(&mut self) -> MaybeRemoteLine {
if self.next_time_secs.is_none() {
self.next_time_secs = Some(self.line_rx.read_u64().await?);
}
if self.next_time_nanos.is_none() {
self.next_time_nanos = Some(self.line_rx.read_u32().await?);
}
if self.next_len.is_none() {
self.next_len = Some(self.line_rx.read_u32().await? as usize);
}
// Ok we have next_len.is_some()
let next_len = self.next_len.unwrap();
if self.next_line.is_none() {
self.next_line = Some(Vec::with_capacity(next_len));
}
// Ok we have next_line.is_some()
let next_line = self.next_line.as_mut().unwrap();
let actual_len = next_line.len();
// Resize to wanted length
next_line.resize(next_len, 0);
// Read bytes
let bytes_read = self
.line_rx
.read(&mut next_line[actual_len..next_len])
.await?;
// Truncate possibly unread bytes
next_line.truncate(actual_len + bytes_read);
// Let's test if we read all bytes
if next_line.len() == next_len {
// Ok we have a full line
self.next_len.take();
let line = String::try_from(self.next_line.take().unwrap()).map_err(|err| {
std::io::Error::new(std::io::ErrorKind::InvalidData, err.to_string())
})?;
let time = Time::new(
self.next_time_secs.take().unwrap(),
self.next_time_nanos.take().unwrap(),
);
Ok(Some((line, time)))
} else {
// Ok we don't have a full line, will be next time!
Ok(None)
}
}
}
pub enum ConnOrConn {
Connection(Connection),
OwnConnection(OwnConnection),
}
/// Handle a remote node.
/// Manage reception and sending of messages to this node.
/// Retry failed connections.
pub struct ConnectionManager {
/// Cluster's name (for logging)
cluster_name: String,
/// The remote node we're communicating with (for logging)
node_id: String,
/// Remote
remote: EndpointAddr,
/// Endpoint
endpoint: Arc<Endpoint>,
/// Cancel asking for a connection
cancel_ask_connection: Option<mpsc::Sender<()>>,
/// Create a delegated task to send ourselves a connection
connection_tx: mpsc::Sender<ConnOrConn>,
/// The EndpointManager or our delegated task sending us a connection (whether we asked for it or not)
connection_rx: mpsc::Receiver<ConnOrConn>,
/// Our own connection (when we have one)
connection: Option<OwnConnection>,
/// Last connection ID, used as a deterministic tie-breaker between conflicting connections
last_connection_id: u64,
/// Max duration before we drop pending messages to a node we can't connect to.
message_timeout: Duration,
/// Messages we receive from actions
message_rx: mpsc::Receiver<UtcLine>,
/// Our queue of messages to send
message_queue: Tree<Time, Arc<String>>,
/// Messages we send from remote nodes to our own stream
own_cluster_tx: remoc::rch::mpsc::Sender<Line>,
/// shutdown
shutdown: ShutdownController,
}
impl ConnectionManager {
pub async fn new(
cluster_name: String,
remote: EndpointAddr,
endpoint: Arc<Endpoint>,
message_timeout: Duration,
message_rx: mpsc::Receiver<UtcLine>,
own_cluster_tx: remoc::rch::mpsc::Sender<Line>,
db: &mut Database,
shutdown: ShutdownController,
) -> Result<(Self, mpsc::Sender<ConnOrConn>), String> {
let node_id = remote.id.show();
let message_queue = db
.open_tree(
format!("message_queue_{}_{}", endpoint.id().show(), node_id),
message_timeout,
|(key, value)| Ok((to_time(&key)?, Arc::new(to_string(&value)?))),
)
.await?;
let (connection_tx, connection_rx) = mpsc::channel(1);
Ok((
Self {
cluster_name,
node_id,
remote,
endpoint,
connection: None,
cancel_ask_connection: None,
connection_tx: connection_tx.clone(),
connection_rx,
last_connection_id: 0,
message_timeout,
message_rx,
message_queue,
own_cluster_tx,
shutdown,
},
connection_tx,
))
}
/// Main loop
pub async fn task(mut self) {
self.ask_connection();
loop {
let have_connection = self.connection.is_some();
let maybe_conn_rx = self
.connection
.as_mut()
.map(|conn| conn.recv_line().boxed())
// This future is never polled when we have no connection, thanks to the
// `if have_connection` guard in select!. It must still exist because the
// branch expression is evaluated regardless, so we can't unwrap here
.unwrap_or(false_recv().boxed());
let event = tokio::select! {
biased;
// Quitting
_ = self.shutdown.wait() => None,
// Receive a connection from EndpointManager
conn = self.connection_rx.recv() => Some(Event::ConnectionReceived(conn)),
// Receive remote message when we have a connection
msg = maybe_conn_rx, if have_connection => Some(Event::RemoteMessageReceived(msg)),
// Receive a message from local Actions
msg = self.message_rx.recv() => Some(Event::LocalMessageReceived(msg)),
};
match event {
Some(event) => {
self.handle_event(event).await;
self.send_queue_messages().await;
self.drop_timeout_messages().await;
}
None => break,
}
}
}
async fn handle_event(&mut self, event: Event) {
match event {
Event::ConnectionReceived(connection) => {
self.handle_connection(connection).await;
}
Event::LocalMessageReceived(utc_line) => {
self.handle_local_message(utc_line).await;
}
Event::RemoteMessageReceived(message) => {
self.handle_remote_message(message).await;
}
}
}
async fn send_queue_messages(&mut self) {
while let Some(connection) = &mut self.connection
&& let Some((time, line)) = self
.message_queue
.first_key_value()
.map(|(k, v)| (k.clone(), v.clone()))
{
if let Err(err) = connection.send_line(&line, &time).await {
eprintln!(
"INFO cluster {}: connection with node {} failed: {err}",
self.cluster_name, self.node_id,
);
self.close_connection(CLOSE_SEND).await;
} else {
self.message_queue.remove(&time).await;
eprintln!(
"DEBUG cluster {}: node {}: sent a local message to remote: {}",
self.cluster_name, self.node_id, line
);
}
}
}
async fn drop_timeout_messages(&mut self) {
let now = now();
let mut count = 0;
loop {
// We have a next key and it reached timeout
if let Some(next_key) = self.message_queue.first_key_value().map(|kv| kv.0.clone())
&& next_key + self.message_timeout < now
{
self.message_queue.remove(&next_key).await;
count += 1;
} else {
break;
}
}
if count > 0 {
eprintln!(
"DEBUG cluster {}: node {}: dropping {count} messages that reached timeout",
self.cluster_name, self.node_id,
)
}
}
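The timeout-dropping loop above can be sketched over a plain `BTreeMap` keyed by a millisecond timestamp (a minimal model, not the plugin's actual `message_queue` type; units and values are assumptions for illustration):

```rust
use std::collections::BTreeMap;

// Sketch of the drop_timeout_messages logic: repeatedly look at the
// oldest queued message and drop it if it has outlived the timeout.
fn drop_timed_out(queue: &mut BTreeMap<u64, String>, now: u64, timeout: u64) -> usize {
    let mut count = 0;
    loop {
        // BTreeMap keys iterate in ascending order, so the first key is
        // always the oldest pending message.
        let oldest = queue.keys().next().copied();
        match oldest {
            Some(t) if t + timeout < now => {
                queue.remove(&t);
                count += 1;
            }
            _ => break,
        }
    }
    count
}

fn main() {
    let mut q = BTreeMap::new();
    q.insert(100, "a".to_string());
    q.insert(200, "b".to_string());
    q.insert(900, "c".to_string());
    // With now = 1000 and a 500ms timeout, the messages queued at
    // t = 100 and t = 200 have expired; the one at t = 900 has not.
    assert_eq!(drop_timed_out(&mut q, 1000, 500), 2);
    assert_eq!(q.len(), 1);
}
```

Because the map is ordered, the loop can stop at the first non-expired key instead of scanning the whole queue.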
/// Bootstrap a new Connection
async fn handle_connection(&mut self, connection: Option<ConnOrConn>) {
match connection {
None => {
eprintln!(
"DEBUG cluster {}: ConnectionManager {}: quitting because EndpointManager has quit",
self.cluster_name, self.node_id,
);
self.quit();
}
Some(connection) => {
if let Some(cancel) = self.cancel_ask_connection.take() {
let _ = cancel.send(()).await;
}
let last_connection_id = self.last_connection_id;
let mut insert_connection = |own_connection: OwnConnection| {
if self
.connection
.as_ref()
.is_none_or(|old_own| old_own.id < own_connection.id)
{
self.last_connection_id = own_connection.id;
self.connection = Some(own_connection);
} else {
eprintln!(
"WARN cluster {}: node {}: ignoring incoming connection: we already have a valid connection with this node and our connection id is greater",
self.cluster_name, self.node_id,
);
}
};
match connection {
ConnOrConn::Connection(connection) => {
match open_channels(
connection,
last_connection_id,
&self.cluster_name,
&self.node_id,
)
.await
{
Ok(own_connection) => insert_connection(own_connection),
Err(err) => {
eprintln!(
"ERROR cluster {}: trying to initialize connection to node {}: {err}",
self.cluster_name, self.node_id,
);
if self.connection.is_none() {
self.ask_connection();
}
}
}
}
ConnOrConn::OwnConnection(own_connection) => insert_connection(own_connection),
}
}
}
}
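The replacement rule `insert_connection` applies can be sketched as a pure function (a minimal model, not the plugin's actual types): when both nodes open a connection at the same time, each side keeps only the one with the greater id, so both converge on the same single connection.

```rust
// Sketch of handle_connection's insert_connection rule: a new
// connection replaces the current one only if its id is greater.
fn should_replace(current_id: Option<u64>, incoming_id: u64) -> bool {
    current_id.is_none_or(|old| old < incoming_id)
}

fn main() {
    assert!(should_replace(None, 5)); // no connection yet: accept
    assert!(should_replace(Some(3), 5)); // incoming id greater: replace
    assert!(!should_replace(Some(7), 5)); // incoming id smaller: ignore
}
```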
async fn handle_remote_message(&mut self, message: MaybeRemoteLine) {
match message {
Err(err) => {
eprintln!(
"WARN cluster {}: node {}: connection {}: error receiving remote message: {err}",
self.cluster_name, self.node_id, self.last_connection_id
);
self.close_connection(CLOSE_RECV).await;
}
Ok(None) => {
eprintln!(
"WARN cluster {}: node {} closed its stream",
self.cluster_name, self.node_id,
);
self.close_connection(CLOSE_CLOSED).await;
}
Ok(Some(line)) => {
if let Err(err) = self
.own_cluster_tx
.send((line.0.clone(), line.1.into()))
.await
{
eprintln!(
"ERROR cluster {}: could not send message to reaction stream: {err}",
self.cluster_name
);
eprintln!(
"INFO cluster {}: line that can't be sent: {}",
self.cluster_name, line.0
);
self.quit();
} else {
eprintln!(
"DEBUG cluster {}: node {}: sent a remote message to local stream: {}",
self.cluster_name, self.node_id, line.0
);
}
}
}
}
async fn handle_local_message(&mut self, message: Option<UtcLine>) {
eprintln!(
"DEBUG cluster {}: node {}: received a local message",
self.cluster_name, self.node_id,
);
match message {
None => {
eprintln!(
"INFO cluster {}: no action remaining, quitting",
self.cluster_name
);
self.quit();
}
Some(message) => match &mut self.connection {
Some(connection) => {
if let Err(err) = connection.send_line(&message.0, &message.1).await {
eprintln!(
"INFO cluster {}: connection with node {} failed: {err}",
self.cluster_name, self.node_id,
);
self.message_queue.insert(message.1, message.0).await;
self.close_connection(CLOSE_SEND).await;
} else {
eprintln!(
"DEBUG cluster {}: node {}: sent a local message to remote: {}",
self.cluster_name, self.node_id, message.0
);
}
}
None => {
eprintln!(
"DEBUG cluster {}: node {}: no connection, saving local message to send later: {}",
self.cluster_name, self.node_id, message.0
);
self.message_queue.insert(message.1, message.0).await;
}
},
}
}
async fn close_connection(&mut self, code: (u32, &[u8])) {
if let Some(connection) = self.connection.take() {
connection
.connection
.close(VarInt::from_u32(code.0), code.1);
}
self.ask_connection();
}
fn ask_connection(&mut self) {
let (tx, rx) = mpsc::channel(1);
self.cancel_ask_connection = Some(tx);
try_connect(
self.cluster_name.clone(),
self.remote.clone(),
self.endpoint.clone(),
self.last_connection_id,
self.connection_tx.clone(),
rx,
);
}
fn quit(&mut self) {
self.shutdown.ask_shutdown();
}
}
/// Accept one stream and open one stream.
/// This way, there is no need to know if we created or accepted the connection.
async fn open_channels(
connection: Connection,
last_connexion_id: u64,
cluster_name: &str,
node_id: &str,
) -> Result<OwnConnection, IoError> {
eprintln!(
"DEBUG cluster {}: node {}: opening uni channel",
cluster_name, node_id
);
let mut output = BufWriter::new(connection.open_uni().await?);
let our_id = random_range(last_connexion_id + 1..last_connexion_id + 1_000_000);
eprintln!(
"DEBUG cluster {}: node {}: sending handshake in uni channel",
cluster_name, node_id
);
output.write_u32(PROTOCOL_VERSION).await?;
output.write_u64(our_id).await?;
output.flush().await?;
eprintln!(
"DEBUG cluster {}: node {}: accepting uni channel",
cluster_name, node_id
);
let mut input = BufReader::new(connection.accept_uni().await?);
eprintln!(
"DEBUG cluster {}: node {}: reading handshake from uni channel",
cluster_name, node_id
);
let their_version = input.read_u32().await?;
if their_version != PROTOCOL_VERSION {
return Err(IoError::new(
std::io::ErrorKind::InvalidData,
format!(
"incompatible version: {their_version}. We use {PROTOCOL_VERSION}. Consider upgrading the node running the older version."
),
));
}
let their_id = input.read_u64().await?;
// FIXME Do we need to test this? If so, this function should return their_id even when error in order to retry better next time
// if their_id < last_connexion_id
// ERROR
// else
let chosen_id = max(our_id, their_id);
eprintln!(
"DEBUG cluster {}: node {}: version handshake complete: last id: {last_connexion_id}, our id: {our_id}, their id: {their_id}: chosen id: {chosen_id}",
cluster_name, node_id
);
Ok(OwnConnection::new(connection, chosen_id, output, input))
}
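The handshake `open_channels` exchanges is just a u32 protocol version followed by a u64 connection id, big-endian (the byte order tokio's `write_u32`/`write_u64` use). A sketch serializing it into a plain buffer instead of a QUIC stream; `PROTOCOL_VERSION = 1` is an assumed value for illustration:

```rust
// Assumed version value, for illustration only.
const PROTOCOL_VERSION: u32 = 1;

// 4 bytes of version followed by 8 bytes of connection id, big-endian.
fn encode_handshake(id: u64) -> [u8; 12] {
    let mut buf = [0u8; 12];
    buf[..4].copy_from_slice(&PROTOCOL_VERSION.to_be_bytes());
    buf[4..].copy_from_slice(&id.to_be_bytes());
    buf
}

fn decode_handshake(buf: &[u8; 12]) -> Result<u64, String> {
    let version = u32::from_be_bytes(buf[..4].try_into().unwrap());
    if version != PROTOCOL_VERSION {
        return Err(format!("incompatible version: {version}"));
    }
    Ok(u64::from_be_bytes(buf[4..].try_into().unwrap()))
}

fn main() {
    let buf = encode_handshake(12345);
    assert_eq!(decode_handshake(&buf), Ok(12345));
    // A mismatched version is rejected before the id is read.
    assert!(decode_handshake(&[0u8; 12]).is_err());
}
```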
async fn false_recv() -> MaybeRemoteLine {
Ok(None)
}
const START_TIMEOUT: Duration = Duration::from_millis(500);
const MAX_TIMEOUT: Duration = Duration::from_hours(1);
const TIMEOUT_FACTOR: f64 = 1.5;
fn with_random(d: Duration) -> Duration {
let max_delta = d.as_micros() as f32 * 0.2;
d + Duration::from_micros(rand::random_range(0.0..max_delta) as u64)
}
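The bound `with_random` guarantees (the delay grows by at most 20%) can be checked deterministically by replacing the random draw with an explicit fraction in [0.0, 1.0] (a sketch; `fraction` is not part of the plugin's API):

```rust
use std::time::Duration;

// Same arithmetic as with_random, but with the random draw made
// explicit: fraction 0.0 adds nothing, fraction 1.0 adds the full 20%.
fn with_jitter(d: Duration, fraction: f32) -> Duration {
    let max_delta = d.as_micros() as f32 * 0.2;
    d + Duration::from_micros((max_delta * fraction) as u64)
}

fn main() {
    let d = Duration::from_millis(1000);
    assert_eq!(with_jitter(d, 0.0), d);
    assert_eq!(with_jitter(d, 1.0), Duration::from_millis(1200));
}
```

The jitter keeps nodes that restart together from retrying in lockstep.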
/// Compute the next wait Duration.
/// Multiplies the previous Duration by [`TIMEOUT_FACTOR`] and caps it at [`MAX_TIMEOUT`].
fn next_delta(delta: Option<Duration>) -> Duration {
with_random(match delta {
None => START_TIMEOUT,
Some(delta) => {
// Multiply timeout by TIMEOUT_FACTOR
let delta = Duration::from_millis(((delta.as_millis() as f64) * TIMEOUT_FACTOR) as u64);
// Cap to MAX_TIMEOUT
if delta > MAX_TIMEOUT {
MAX_TIMEOUT
} else {
delta
}
}
})
}
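Without the jitter, the retry schedule `next_delta` produces is a plain capped geometric backoff. A self-contained sketch with the same constants (writing the one-hour cap as 3600 s):

```rust
use std::time::Duration;

const START_TIMEOUT: Duration = Duration::from_millis(500);
const MAX_TIMEOUT: Duration = Duration::from_secs(3600);
const TIMEOUT_FACTOR: f64 = 1.5;

// next_delta without the random component: grow by TIMEOUT_FACTOR,
// clamp to MAX_TIMEOUT.
fn next_delta_no_jitter(delta: Option<Duration>) -> Duration {
    match delta {
        None => START_TIMEOUT,
        Some(d) => {
            let grown = Duration::from_millis((d.as_millis() as f64 * TIMEOUT_FACTOR) as u64);
            grown.min(MAX_TIMEOUT)
        }
    }
}

fn main() {
    // The retry delays grow geometrically: 500ms, 750ms, 1125ms, ...
    let mut d = None;
    let mut deltas = Vec::new();
    for _ in 0..4 {
        let next = next_delta_no_jitter(d);
        deltas.push(next.as_millis());
        d = Some(next);
    }
    assert_eq!(deltas, vec![500, 750, 1125, 1687]);
    // A delay past the cap is clamped to MAX_TIMEOUT.
    assert_eq!(next_delta_no_jitter(Some(Duration::from_secs(7200))), MAX_TIMEOUT);
}
```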
#[cfg(test)]
#[test]
fn test_with_random() {
for d in [
123, 1234, 12345, 123456, 1234567, 12345678, 123456789, 1234567890,
] {
let rd = with_random(Duration::from_micros(d)).as_micros();
assert!(rd as f32 >= d as f32, "{rd} < {d}");
assert!(rd as f32 <= (d + 1) as f32 * 1.2, "{rd} > ({d} + 1) * 1.2");
}
}
fn try_connect(
cluster_name: String,
remote: EndpointAddr,
endpoint: Arc<Endpoint>,
last_connection_id: u64,
connection_tx: mpsc::Sender<ConnOrConn>,
mut order_stop: mpsc::Receiver<()>,
) {
tokio::spawn(async move {
let node_id = remote.id.show();
// Until we have a connection or we're requested to stop
let mut keep_trying = true;
let mut delta = None;
while keep_trying {
delta = Some(next_delta(delta));
keep_trying = tokio::select! {
_ = sleep(delta.unwrap_or_default()) => true,
_ = order_stop.recv() => false,
};
if keep_trying {
eprintln!("DEBUG cluster {cluster_name}: node {node_id}: trying to connect...");
let connect = tokio::select! {
// conn = endpoint.connect(remote.clone(), ALPN[0]) => Some(conn),
conn = endpoint.connect_with_opts(remote.clone(), ALPN[0], connect_config()) => Some(conn),
_ = order_stop.recv() => None,
};
if let Some(connect) = connect {
let res = match connect {
Ok(connecting) => match connecting.await {
Ok(connection) => {
eprintln!(
"DEBUG cluster {cluster_name}: node {node_id}: created connection"
);
match open_channels(
connection,
last_connection_id,
&cluster_name,
&node_id,
)
.await
{
Ok(own_connection) => {
if let Err(err) = connection_tx
.send(ConnOrConn::OwnConnection(own_connection))
.await
{
eprintln!(
"DEBUG cluster {cluster_name}: node {node_id}: quitting because ConnectionManager has quit: {err}"
);
}
// successfully opened connection
keep_trying = false;
Ok(())
}
Err(err) => Err(err.to_string()),
}
}
Err(err) => Err(err.to_string()),
},
Err(err) => Err(err.to_string()),
};
if let Err(err) = res {
eprintln!(
"WARN cluster {cluster_name}: node {node_id}: while trying to connect: {err}"
);
}
} else {
// received stop order
eprintln!(
"DEBUG cluster {cluster_name}: node {node_id}: stopped trying to connect to node because we received a connection from it"
);
keep_trying = false;
}
}
}
});
}


@@ -0,0 +1,128 @@
use std::collections::BTreeMap;
use std::sync::Arc;
use iroh::{Endpoint, PublicKey, endpoint::Incoming};
use reaction_plugin::shutdown::ShutdownController;
use tokio::sync::mpsc;
use crate::{connection::ConnOrConn, key::Show};
enum Break {
Yes,
No,
}
pub struct EndpointManager {
/// The [`iroh::Endpoint`] to manage
endpoint: Arc<Endpoint>,
/// Cluster's name (for logging)
cluster_name: String,
/// Connection sender to the Connection Managers
connections_tx: BTreeMap<PublicKey, mpsc::Sender<ConnOrConn>>,
/// shutdown
shutdown: ShutdownController,
}
impl EndpointManager {
pub fn new(
endpoint: Arc<Endpoint>,
cluster_name: String,
connections_tx: BTreeMap<PublicKey, mpsc::Sender<ConnOrConn>>,
shutdown: ShutdownController,
) {
tokio::spawn(async move {
Self {
endpoint,
cluster_name,
connections_tx,
shutdown,
}
.task()
.await
});
}
async fn task(&mut self) {
loop {
let incoming = tokio::select! {
incoming = self.endpoint.accept() => incoming,
_ = self.shutdown.wait() => None,
};
match incoming {
Some(incoming) => {
if let Break::Yes = self.handle_incoming(incoming).await {
break;
}
}
None => break,
}
}
self.endpoint.close().await
}
async fn handle_incoming(&mut self, incoming: Incoming) -> Break {
eprintln!(
"DEBUG cluster {}: EndpointManager: receiving connection",
self.cluster_name,
);
// FIXME a malicious actor could maybe prevent a node from connecting to
// its cluster by sending lots of invalid slow connection requests?
// This function could be moved to a new 'oneshot' task instead
let remote_address = incoming.remote_address();
let remote_address_validated = incoming.remote_address_validated();
let connection = match incoming.await {
Ok(connection) => connection,
Err(err) => {
if remote_address_validated {
eprintln!("INFO refused connection from {}: {err}", remote_address)
} else {
eprintln!("INFO refused connection: {err}")
}
return Break::No;
}
};
let remote_id = connection.remote_id();
match self.connections_tx.get(&remote_id) {
None => {
eprintln!(
"WARN cluster {}: node '{}' (ip: {}) is not in our node list, refusing incoming connection.",
self.cluster_name,
remote_id.show(),
remote_address
);
eprintln!(
"INFO cluster {}: {}, {}",
self.cluster_name,
"maybe it's not from our cluster,",
"maybe this node's configuration has not yet been updated to add this new node."
);
return Break::No;
}
Some(tx) => {
if tx.send(ConnOrConn::Connection(connection)).await.is_err() {
eprintln!(
"DEBUG cluster {}: EndpointManager: quitting because ConnectionManager has quit",
self.cluster_name,
);
self.shutdown.ask_shutdown();
return Break::Yes;
}
eprintln!(
"DEBUG cluster {}: EndpointManager: receiving connection from {}",
self.cluster_name,
remote_id.show(),
);
}
}
// TODO persist the incoming address, so that we don't forget this address
Break::No
}
}


@@ -0,0 +1,188 @@
use std::io;
use data_encoding::DecodeError;
use iroh::{PublicKey, SecretKey};
use tokio::{
fs::{self, File},
io::AsyncWriteExt,
};
pub fn secret_key_path(dir: &str, cluster_name: &str) -> String {
format!("{dir}/secret_key_{cluster_name}.txt")
}
pub async fn secret_key(dir: &str, cluster_name: &str) -> Result<SecretKey, String> {
let path = secret_key_path(dir, cluster_name);
if let Some(key) = get_secret_key(&path).await? {
Ok(key)
} else {
let key = SecretKey::generate(&mut rand::rng());
set_secret_key(&path, &key).await?;
Ok(key)
}
}
async fn get_secret_key(path: &str) -> Result<Option<SecretKey>, String> {
let key = match fs::read_to_string(path).await {
Ok(key) => Ok(key),
Err(err) => match err.kind() {
io::ErrorKind::NotFound => return Ok(None),
_ => Err(format!("can't read secret key file: {err}")),
},
}?;
let bytes = match key_b64_to_bytes(&key) {
Ok(key) => Ok(key),
Err(err) => Err(format!(
"invalid secret key read from file: {err}. Please remove the `{path}` file from the plugin directory.",
)),
}?;
Ok(Some(SecretKey::from_bytes(&bytes)))
}
async fn set_secret_key(path: &str, key: &SecretKey) -> Result<(), String> {
let secret_key = key.show();
File::options()
.mode(0o600)
.write(true)
.create(true)
.open(path)
.await
.map_err(|err| format!("can't open `{path}` in the plugin directory: {err}"))?
.write_all(secret_key.as_bytes())
.await
.map_err(|err| format!("can't write to `{path}` in the plugin directory: {err}"))
}
pub fn key_b64_to_bytes(key: &str) -> Result<[u8; 32], DecodeError> {
let vec = data_encoding::BASE64URL.decode(key.as_bytes())?;
if vec.len() != 32 {
return Err(DecodeError {
position: vec.len(),
kind: data_encoding::DecodeKind::Length,
});
}
let mut bytes = [0u8; 32];
bytes.copy_from_slice(&vec);
Ok(bytes)
}
pub fn key_bytes_to_b64(key: &[u8; 32]) -> String {
data_encoding::BASE64URL.encode(key)
}
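The `Vec<u8>` to `[u8; 32]` conversion in `key_b64_to_bytes` can also be written with `TryInto`, which performs the same length check and hands back the offending length on failure (a sketch of the conversion only, without the base64 decoding):

```rust
// Convert a decoded key into a fixed 32-byte array; on a wrong length,
// report that length so the caller can build a useful error.
fn to_key_bytes(vec: Vec<u8>) -> Result<[u8; 32], usize> {
    let len = vec.len();
    vec.try_into().map_err(|_| len)
}

fn main() {
    assert!(to_key_bytes(vec![7u8; 32]).is_ok());
    assert_eq!(to_key_bytes(vec![7u8; 31]), Err(31));
    assert_eq!(to_key_bytes(vec![7u8; 33]), Err(33));
}
```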
/// Implemented by PublicKey & SecretKey to display keys as base64 instead of hexadecimal.
/// Similar to Display/ToString
pub trait Show {
fn show(&self) -> String;
}
impl Show for PublicKey {
fn show(&self) -> String {
key_bytes_to_b64(self.as_bytes())
}
}
impl Show for SecretKey {
fn show(&self) -> String {
key_bytes_to_b64(&self.to_bytes())
}
}
#[cfg(test)]
mod tests {
use assert_fs::{
TempDir,
prelude::{FileWriteStr, PathChild},
};
use iroh::{PublicKey, SecretKey};
use tokio::fs::read_to_string;
use crate::key::{
get_secret_key, key_b64_to_bytes, key_bytes_to_b64, secret_key_path, set_secret_key,
};
#[test]
fn secret_key_encode_decode() {
for (secret_key, public_key) in [
(
"g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=",
"HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=",
),
(
"5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=",
"LPSQ9pS7m_5vvNC-fhoBNeL2-eS2Fd6aO4ImSnXp3lc=",
),
] {
assert_eq!(
secret_key,
&key_bytes_to_b64(&key_b64_to_bytes(secret_key).unwrap())
);
assert_eq!(
public_key,
&key_bytes_to_b64(&key_b64_to_bytes(public_key).unwrap())
);
let secret_key_parsed = SecretKey::from_bytes(&key_b64_to_bytes(secret_key).unwrap());
let public_key_parsed =
PublicKey::from_bytes(&key_b64_to_bytes(public_key).unwrap()).unwrap();
assert_eq!(secret_key_parsed.public(), public_key_parsed);
}
}
#[tokio::test]
async fn secret_key_get() {
let tmp_dir = TempDir::new().unwrap();
let tmp_dir_str = tmp_dir.to_str().unwrap();
for (secret_key, cluster_name) in [
("g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=", "my_cluster"),
("5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=", "name"),
] {
tmp_dir
.child(&format!("secret_key_{cluster_name}.txt"))
.write_str(secret_key)
.unwrap();
let secret_key_parsed = SecretKey::from_bytes(&key_b64_to_bytes(secret_key).unwrap());
let path = secret_key_path(tmp_dir_str, cluster_name);
let secret_key_from_file = get_secret_key(&path).await.unwrap();
assert_eq!(
secret_key_parsed.to_bytes(),
secret_key_from_file.unwrap().to_bytes()
)
}
assert_eq!(
Ok(None),
get_secret_key(&format!("{tmp_dir_str}/non_existent"))
.await
// Secret keys can't be compared directly, so we map to bytes,
// even though we expect None here
.map(|opt| opt.map(|pk| pk.to_bytes()))
);
// Will fail if we're root, but who runs this as root??
assert!(
get_secret_key("/root/non_existent")
.await
.is_err()
);
}
#[tokio::test]
async fn secret_key_set() {
let tmp_dir = TempDir::new().unwrap();
let tmp_dir_str = tmp_dir.to_str().unwrap();
let path = format!("{tmp_dir_str}/secret");
let key = SecretKey::generate(&mut rand::rng());
assert!(set_secret_key(&path, &key).await.is_ok());
let read_file = read_to_string(&path).await;
assert!(read_file.is_ok());
assert_eq!(read_file.unwrap(), key_bytes_to_b64(&key.to_bytes()));
}
}


@@ -0,0 +1,273 @@
use std::{
collections::{BTreeMap, BTreeSet},
net::{Ipv4Addr, Ipv6Addr, SocketAddr},
path::PathBuf,
time::Duration,
};
use iroh::{EndpointAddr, PublicKey, SecretKey, TransportAddr};
use reaction_plugin::{
ActionConfig, ActionImpl, Exec, Hello, Line, Manifest, PluginInfo, RemoteResult, StreamConfig,
StreamImpl, line::PatternLine, main_loop, shutdown::ShutdownController, time::parse_duration,
};
use remoc::{rch::mpsc, rtc};
use serde::{Deserialize, Serialize};
use treedb::Database;
use crate::key::Show;
mod cluster;
mod connection;
mod endpoint;
mod key;
#[cfg(test)]
mod tests;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {
init: BTreeMap<String, (StreamInit, Vec<ActionInit>)>,
cluster_shutdown: ShutdownController,
}
/// Stream options as defined by the user
#[derive(Serialize, Deserialize)]
struct StreamOptions {
/// The UDP port to open
listen_port: u16,
/// The IPv4 to bind to. Defaults to 0.0.0.0.
/// Set to `null` to use IPv6 only.
#[serde(default = "ipv4_unspecified")]
bind_ipv4: Option<Ipv4Addr>,
/// The IPv6 to bind to. Defaults to ::.
/// Set to `null` to use IPv4 only.
#[serde(default = "ipv6_unspecified")]
bind_ipv6: Option<Ipv6Addr>,
/// Other nodes which are part of the cluster.
nodes: Vec<NodeOption>,
/// Max duration before we drop pending messages to a node we can't connect to.
message_timeout: String,
}
fn ipv4_unspecified() -> Option<Ipv4Addr> {
Some(Ipv4Addr::UNSPECIFIED)
}
fn ipv6_unspecified() -> Option<Ipv6Addr> {
Some(Ipv6Addr::UNSPECIFIED)
}
#[derive(Serialize, Deserialize)]
struct NodeOption {
public_key: String,
#[serde(default)]
addresses: Vec<SocketAddr>,
}
/// Stream information before start
struct StreamInit {
name: String,
listen_port: u16,
bind_ipv4: Option<Ipv4Addr>,
bind_ipv6: Option<Ipv6Addr>,
secret_key: SecretKey,
message_timeout: Duration,
nodes: BTreeMap<PublicKey, EndpointAddr>,
tx: mpsc::Sender<Line>,
}
#[derive(Serialize, Deserialize)]
struct ActionOptions {
/// The line to send to the corresponding cluster, example: "ban \<ip\>"
send: String,
/// The name of the corresponding cluster, example: "my_cluster_stream"
to: String,
/// Whether the stream of this node also receives the line
#[serde(default, rename = "self")]
self_: bool,
}
struct ActionInit {
name: String,
send: PatternLine,
self_: bool,
rx: mpsc::Receiver<Exec>,
}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::from(["cluster".into()]),
actions: BTreeSet::from(["cluster_send".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
let mut ret_streams = Vec::with_capacity(streams.len());
let mut ret_actions = Vec::with_capacity(actions.len());
for StreamConfig {
stream_name,
stream_type,
config,
} in streams
{
if &stream_type != "cluster" {
return Err("This plugin can't handle other stream types than cluster".into());
}
let options: StreamOptions = serde_json::from_value(config.into())
.map_err(|err| format!("invalid options: {err}"))?;
let mut nodes = BTreeMap::default();
let message_timeout = parse_duration(&options.message_timeout)
.map_err(|err| format!("invalid message_timeout: {err}"))?;
if options.bind_ipv4.is_none() && options.bind_ipv6.is_none() {
Err(
"At least one of bind_ipv4 and bind_ipv6 must be enabled. Leave at least one of them unset, or set at least one of them to an IP.",
)?;
}
if options.nodes.is_empty() {
Err("At least one remote node has to be configured for a cluster")?;
}
for node in options.nodes.into_iter() {
let bytes = key::key_b64_to_bytes(&node.public_key)
.map_err(|err| format!("invalid public key {}: {err}", node.public_key))?;
let public_key = PublicKey::from_bytes(&bytes)
.map_err(|err| format!("invalid public key {}: {err}", node.public_key))?;
nodes.insert(
public_key,
EndpointAddr {
id: public_key,
addrs: node
.addresses
.into_iter()
.map(TransportAddr::Ip)
.collect(),
},
);
}
let secret_key = key::secret_key(".", &stream_name).await?;
eprintln!(
"INFO public key of this node for cluster {stream_name}: {}",
secret_key.public().show()
);
let (tx, rx) = mpsc::channel(1);
let stream = StreamInit {
name: stream_name.clone(),
listen_port: options.listen_port,
bind_ipv4: options.bind_ipv4,
bind_ipv6: options.bind_ipv6,
secret_key,
message_timeout,
nodes,
tx,
};
if self.init.insert(stream_name, (stream, vec![])).is_some() {
return Err("this virtual stream has already been initialized".into());
}
ret_streams.push(StreamImpl {
stream: rx,
standalone: true,
})
}
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if &action_type != "cluster_send" {
return Err(
"This plugin can't handle other action types than 'cluster_send'".into(),
);
}
let options: ActionOptions = serde_json::from_value(config.into())
.map_err(|err| format!("invalid options: {err}"))?;
let (tx, rx) = mpsc::channel(1);
let init_action = ActionInit {
name: format!("{}.{}.{}", stream_name, filter_name, action_name),
send: PatternLine::new(options.send, patterns),
self_: options.self_,
rx,
};
match self.init.get_mut(&options.to) {
Some((_, actions)) => actions.push(init_action),
None => {
return Err(format!(
"ERROR action '{}' sends 'to' unknown stream '{}'",
init_action.name, options.to
)
.into());
}
}
ret_actions.push(ActionImpl { tx })
}
Ok((ret_streams, ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
self.cluster_shutdown.delegate().handle_quit_signals()?;
let mut db = {
let path = PathBuf::from(".");
let (cancellation_token, task_tracker_token) = self.cluster_shutdown.token().split();
Database::open(&path, cancellation_token, task_tracker_token)
.await
.map_err(|err| format!("Can't open database: {err}"))?
};
while let Some((_, (stream, actions))) = self.init.pop_first() {
let endpoint = cluster::bind(&stream).await?;
cluster::cluster_tasks(
endpoint,
stream,
actions,
&mut db,
self.cluster_shutdown.clone(),
)
.await?;
}
// Free containers
self.init = Default::default();
eprintln!("DEBUG started");
Ok(())
}
async fn close(self) -> RemoteResult<()> {
self.cluster_shutdown.ask_shutdown();
self.cluster_shutdown.wait_all_task_shutdown().await;
Ok(())
}
}


@@ -0,0 +1,293 @@
use std::env::set_current_dir;
use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig};
use serde_json::json;
use crate::{Plugin, tests::insert_secret_key};
use super::{PUBLIC_KEY_A, TEST_MUTEX, stream_ok};
#[tokio::test]
async fn conf_stream() {
// Minimal node configuration
let nodes = json!([{
"public_key": PUBLIC_KEY_A,
}]);
// Invalid type
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "clust".into(),
config: stream_ok().into(),
}],
vec![]
)
.await
.is_err()
);
for (json, is_ok) in [
(
json!({
"listen_port": 2048,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok as fn(&_) -> bool,
),
(
// invalid time
json!({
"listen_port": 2048,
"nodes": nodes,
"message_timeout": "30pv",
}),
Result::is_err,
),
(
json!({
"listen_port": 2048,
"bind_ipv4": "0.0.0.0",
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv6": "::",
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv4": "0.0.0.0",
"bind_ipv6": "::",
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv4": null,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv6": null,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
// No bind
json!({
"listen_port": 2048,
"bind_ipv4": null,
"bind_ipv6": null,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_err,
),
(json!({}), Result::is_err),
] {
assert!(is_ok(
&Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: json.into(),
}],
vec![]
)
.await
));
}
}
#[tokio::test]
async fn conf_action() {
let patterns = vec!["p1".into(), "p2".into()];
// Invalid type
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_sen".into(),
config: json!({
"send": "<p1>",
"to": "stream",
})
.into(),
patterns: patterns.clone(),
}]
)
.await
.is_err()
);
for (json, is_ok) in [
(
json!({
"send": "<p1>",
"to": "stream",
}),
true,
),
(
json!({
"send": "<p1>",
"to": "stream",
"self": true,
}),
true,
),
(
json!({
"send": "<p1>",
"to": "stream",
"self": false,
}),
true,
),
(
// missing to
json!({
"send": "<p1>",
}),
false,
),
(
// missing send
json!({
"to": "stream",
}),
false,
),
(
// invalid self
json!({
"send": "<p1>",
"to": "stream",
"self": "true",
}),
false,
),
(
// missing conf
json!({}),
false,
),
] {
let ret = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json.clone().into(),
patterns: patterns.clone(),
}],
)
.await;
assert!(
ret.is_ok() == is_ok,
"is_ok: {is_ok}, ret: {:?}, action conf: {json:?}",
ret.map(|_| ())
);
}
}
#[tokio::test]
async fn conf_send() {
let _lock = TEST_MUTEX.lock();
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
insert_secret_key().await;
// No action is ok
let res = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![],
)
.await;
assert!(res.is_ok(), "{:?}", res.map(|_| ()));
// An action is ok
let res = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({ "send": "message", "to": "stream" }).into(),
patterns: vec![],
}],
)
.await;
assert!(res.is_ok(), "{:?}", res.map(|_| ()));
// Invalid to: option
let res = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({ "send": "message", "to": "stream1" }).into(),
patterns: vec![],
}],
)
.await;
assert!(res.is_err(), "{:?}", res.map(|_| ()));
}


@@ -0,0 +1,319 @@
use std::{env::set_current_dir, time::Duration};
use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig};
use serde_json::json;
use tokio::{fs, time::timeout};
use treedb::time::now;
use crate::{
Plugin,
key::secret_key_path,
tests::{PUBLIC_KEY_A, PUBLIC_KEY_B, SECRET_KEY_A, SECRET_KEY_B, TEST_MUTEX},
};
#[derive(Clone)]
struct TestNode {
public_key: &'static str,
private_key: &'static str,
port: u16,
}
const POOL: [TestNode; 15] = [
TestNode {
public_key: PUBLIC_KEY_A,
private_key: SECRET_KEY_A,
port: 2055,
},
TestNode {
public_key: PUBLIC_KEY_B,
private_key: SECRET_KEY_B,
port: 2056,
},
TestNode {
public_key: "ZjEPlIdGikV_sPIAUzO3RFUidlERJUhJ9XwNAlieuvU=",
private_key: "SCbd8Ids3Dg9MwzyMNV1KFcUtsyRbeCp7GDmu-xXBSs=",
port: 2057,
},
TestNode {
public_key: "2FUpABLl9I6bU9a2XtWKMLDzwHfrVcNEG6K8Ix6sxWQ=",
private_key: "F0W8nIlVmuFVpelwYH4PDaBDM0COYOyXDmBEmnHyo5s=",
port: 2058,
},
TestNode {
public_key: "qR4JDI_yyPWUBrmBbQjqfFbGP14v9dEaQVPHPOjId1o=",
private_key: "S5pxTafNXPd_9TMT4_ERuPXlZ882UmggAHrf8Yntfqg=",
port: 2059,
},
TestNode {
public_key: "NjkPBwDO4IEOBjkcxufYtVXspJNQZ0qF6GamRq2TOB4=",
private_key: "zM_lXiFuwTkmPuuXqIghW_J0uwq0a53L_yhM57uy_R8=",
port: 2060,
},
TestNode {
public_key: "_mgTzrlE8b_zvka3LgfD5qH2h_d3S0hcDU1WzIL6C74=",
private_key: "6Obq7fxOXK-u-P3QB5FJvNnwXdKwP1FsVJ0555o7DXs=",
port: 2061,
},
TestNode {
public_key: "FLKxCSSjjzxH0ZWTpQ8xXcSIRutXUhIDhZimjamxO2s=",
private_key: "pBPcJ32bt4xGZIGZDLDtj0eedg7p5DENjAwA-wM-1vk=",
port: 2062,
},
TestNode {
public_key: "yYBWzhzXO4isdPW2SzI-Sv3mcy3dUl6Kl0oFN6YpuzE=",
private_key: "nC8F6prLAY9-86EZlfXwpOjQeghlPKf3PtT-zXsJZsA=",
port: 2063,
},
TestNode {
public_key: "QLbNxlLEUt0tieD9BX9of663gCm9WjKeqch0BIFJ3CE=",
private_key: "JL4bKNHJMaMX_ElnaDHc6Ql74HZbovcswNOrY6fN1sU=",
port: 2064,
},
TestNode {
public_key: "2cmAmcaEFW-9val6WMoHSfTW25IxiQHes7Jwy6NqLLc=",
private_key: "TCvfDLHLQ5RxfAs7_2Th2u1XF48ygxTLAAsUzVPBn_o=",
port: 2065,
},
TestNode {
public_key: "PfKYILyGmu0C6GFUOLw4MSLxN6gtkj0XUdvQW50A2xA=",
private_key: "LaQgDWsXpwSQlZZXd8UEllrgpeXw9biSye4zcjLclU0=",
port: 2066,
},
TestNode {
public_key: "OQMXwPl90gr-2y-f5qZIZuVG4WEae5cc8JOB39LTNYE=",
private_key: "blcigXzk0oeQ8J1jwYFiYHJ-pMiUqbUM4SJBlxA0MiI=",
port: 2067,
},
TestNode {
public_key: "DHpkBgnQUfpC7s4-mTfpn1_PN4dzj7hCCMF6GwO3Bus=",
private_key: "sw7-2gPOswznF2OJHJdbfyJxdjS-P5O0lie6SdOL_08=",
port: 2068,
},
TestNode {
public_key: "odjjaYd6lL1DG8N9AXHW9LGsrKIb5IlW0KZz-rgxfXA=",
private_key: "6JU6YHRBM_rJkuQmMaGaio_PZiyzZlTIU0qE8AHPGSE=",
port: 2069,
},
];
async fn stream_action(
name: &str,
index: usize,
nodes: &[TestNode],
) -> (StreamConfig, ActionConfig) {
let stream_name = format!("stream_{name}");
let this_node = &nodes[index];
let other_nodes: Vec<_> = nodes
.iter()
.filter(|node| node.public_key != this_node.public_key)
.map(|node| {
json!({
"public_key": node.public_key,
"addresses": [format!("[::1]:{}", node.port)]
})
})
.collect();
fs::write(secret_key_path(".", &stream_name), this_node.private_key)
.await
.unwrap();
(
StreamConfig {
stream_name: stream_name.clone(),
stream_type: "cluster".into(),
config: json!({
"message_timeout": "30s",
"listen_port": this_node.port,
"nodes": other_nodes,
})
.into(),
},
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({
"send": format!("from {name}: <test>"),
"to": stream_name,
})
.into(),
patterns: vec!["test".into()],
},
)
}
#[tokio::test]
async fn two_nodes_simultaneous_startup() {
for separate_plugin in [true /*, false */] {
let _lock = TEST_MUTEX.lock();
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
let ((mut stream_a, action_a), (mut stream_b, action_b)) = if separate_plugin {
let mut plugin_a = Plugin::default();
let (sa, aa) = stream_action("a", 0, &POOL[0..2]).await;
let (mut streams_a, mut actions_a) =
plugin_a.load_config(vec![sa], vec![aa]).await.unwrap();
plugin_a.start().await.unwrap();
let mut plugin_b = Plugin::default();
let (sb, ab) = stream_action("b", 1, &POOL[0..2]).await;
let (mut streams_b, mut actions_b) =
plugin_b.load_config(vec![sb], vec![ab]).await.unwrap();
plugin_b.start().await.unwrap();
(
(streams_a.remove(0), actions_a.remove(0)),
(streams_b.remove(0), actions_b.remove(0)),
)
} else {
let mut plugin = Plugin::default();
let a = stream_action("a", 0, &POOL[0..2]).await;
let b = stream_action("b", 1, &POOL[0..2]).await;
let (mut streams, mut actions) = plugin
.load_config(vec![a.0, b.0], vec![a.1, b.1])
.await
.unwrap();
plugin.start().await.unwrap();
(
(streams.remove(0), actions.remove(0)),
(streams.remove(1), actions.remove(1)),
)
};
for m in ["test1", "test2", "test3"] {
let time = now().into();
for (stream, action, from) in [
(&mut stream_b, &action_a, "a"),
(&mut stream_a, &action_b, "b"),
] {
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok(),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}"
);
let received = timeout(Duration::from_millis(5000), stream.stream.recv()).await;
assert!(
received.is_ok(),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}, did timeout"
);
let received = received.unwrap();
assert!(
received.is_ok(),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}, remoc receive error"
);
let received = received.unwrap();
assert_eq!(
received,
Some((format!("from {from}: {m}"), time)),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}"
);
}
}
}
}
#[tokio::test]
async fn n_nodes_simultaneous_startup() {
let _lock = TEST_MUTEX.lock();
// Ports can take some time to be really closed
let mut port_delta = 0;
for n in 3..=POOL.len() {
println!("\nNODES: {n}\n");
port_delta += n;
// for n in 3..=3 {
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
let mut plugins = Vec::with_capacity(n);
let mut streams = Vec::with_capacity(n);
let mut actions = Vec::with_capacity(n);
for i in 0..n {
let mut plugin = Plugin::default();
let name = format!("n{i}");
let (stream, action) = stream_action(
&name,
i,
&POOL[0..n]
.iter()
.cloned()
.map(|node| TestNode {
port: node.port + port_delta as u16,
..node
})
.collect::<Vec<_>>(),
)
.await;
let (mut stream, mut action) = plugin
.load_config(vec![stream], vec![action])
.await
.unwrap();
plugin.start().await.unwrap();
plugins.push(plugin);
streams.push(stream.pop().unwrap());
actions.push((action.pop().unwrap(), name));
}
for m in ["test1", "test2", "test3", "test4", "test5"] {
let time = now().into();
for (i, (action, from)) in actions.iter().enumerate() {
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok(),
"n nodes: {n}, n°action: {i}, message: {m}, from: {from}"
);
for (j, stream) in streams.iter_mut().enumerate().filter(|(j, _)| *j != i) {
let received = timeout(Duration::from_millis(5000), stream.stream.recv()).await;
assert!(
received.is_ok(),
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}, did timeout"
);
let received = received.unwrap();
assert!(
received.is_ok(),
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}, remoc receive error"
);
let received = received.unwrap();
assert_eq!(
received,
Some((format!("from {from}: {m}"), time)),
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}"
);
println!(
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}"
);
}
}
}
for plugin in plugins {
plugin.close().await.unwrap();
}
}
}
// TODO test:
// with nonexistent nodes
// different startup times
// stopping & restarting a node mid exchange


@@ -0,0 +1,40 @@
use std::sync::{LazyLock, Mutex};
use serde_json::json;
use tokio::fs::write;
mod conf;
mod e2e;
mod self_;
const SECRET_KEY_A: &str = "g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=";
const PUBLIC_KEY_A: &str = "HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=";
const SECRET_KEY_B: &str = "5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=";
const PUBLIC_KEY_B: &str = "LPSQ9pS7m_5vvNC-fhoBNeL2-eS2Fd6aO4ImSnXp3lc=";
// Tests that spawn a database in current directory must be run one at a time
static TEST_MUTEX: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(()));
fn stream_ok_port(port: u16) -> serde_json::Value {
json!({
"listen_port": port,
"nodes": [{
"public_key": PUBLIC_KEY_A,
}],
"message_timeout": "30m",
})
}
fn stream_ok() -> serde_json::Value {
stream_ok_port(2048)
}
async fn insert_secret_key() {
write(
"./secret_key_stream.txt",
b"pBPcJ32bt4xGZIGZDLDtj0eedg7p5DENjAwA-wM-1vk=",
)
.await
.unwrap();
}


@@ -0,0 +1,78 @@
use std::{env::set_current_dir, time::Duration};
use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig};
use serde_json::json;
use tokio::time::timeout;
use treedb::time::now;
use crate::{Plugin, tests::insert_secret_key};
use super::{TEST_MUTEX, stream_ok_port};
#[tokio::test]
async fn run_with_self() {
let _lock = TEST_MUTEX.lock();
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
insert_secret_key().await;
for self_ in [true, false] {
let mut plugin = Plugin::default();
let (mut streams, mut actions) = plugin
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok_port(2052).into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({
"send": "message <test>",
"to": "stream",
"self": self_,
})
.into(),
patterns: vec!["test".into()],
}],
)
.await
.unwrap();
let mut stream = streams.pop().unwrap();
let action = actions.pop().unwrap();
assert!(stream.standalone);
assert!(plugin.start().await.is_ok());
for m in ["test1", "test2", "test3", " a a a aa a a"] {
let time = now().into();
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok()
);
if self_ {
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
(format!("message {m}"), time),
);
} else {
// Don't receive anything
assert!(
timeout(Duration::from_millis(100), stream.stream.recv())
.await
.is_err()
);
}
}
}
}


@@ -0,0 +1,26 @@
[package]
name = "reaction-plugin-ipset"
description = "ipset plugin for reaction"
version = "1.0.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "fail2ban", "logs", "monitoring"]
default-run = "reaction-plugin-ipset"
[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
ipset = "0.9.0"
[package.metadata.deb]
section = "net"
assets = [
[ "target/release/reaction-plugin-ipset", "/usr/bin/reaction-plugin-ipset", "755" ],
]
depends = ["libipset-dev", "reaction"]


@@ -0,0 +1,419 @@
use std::fmt::Debug;
use reaction_plugin::{Exec, shutdown::ShutdownToken, time::parse_duration};
use remoc::rch::mpsc as remocMpsc;
use serde::{Deserialize, Serialize};
use crate::ipset::{CreateSet, IpSet, Order, SetChain, Version};
#[derive(Default, Serialize, Deserialize, PartialEq, Eq, Clone, Copy)]
pub enum IpVersion {
#[default]
#[serde(rename = "ip")]
Ip,
#[serde(rename = "ipv4")]
Ipv4,
#[serde(rename = "ipv6")]
Ipv6,
}
impl Debug for IpVersion {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
IpVersion::Ipv4 => "ipv4",
IpVersion::Ipv6 => "ipv6",
IpVersion::Ip => "ip",
}
)
}
}
#[derive(Default, Serialize, Deserialize)]
pub enum AddDel {
#[default]
#[serde(alias = "add")]
Add,
#[serde(alias = "del")]
Del,
}
/// User-facing action options
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ActionOptions {
/// The set that should be used by this action
pub set: String,
/// The pattern name of the IP.
/// Defaults to "ip"
#[serde(default = "serde_ip")]
pub pattern: String,
#[serde(skip)]
ip_index: usize,
// Whether the action is to "add" or "del" the ip from the set
#[serde(default)]
action: AddDel,
#[serde(flatten)]
pub set_options: SetOptions,
}
fn serde_ip() -> String {
"ip".into()
}
impl ActionOptions {
pub fn set_ip_index(&mut self, patterns: Vec<String>) -> Result<(), ()> {
self.ip_index = patterns
.into_iter()
.enumerate()
.find(|(_, name)| name == &self.pattern)
.ok_or(())?
.0;
Ok(())
}
}
/// Merged set options
#[derive(Default, Clone, Deserialize, Serialize, Debug, PartialEq, Eq)]
pub struct SetOptions {
/// The IP type.
/// Defaults to `46`.
/// If `ipv4`: creates an IPv4 set with this name
/// If `ipv6`: creates an IPv6 set with this name
/// If `ip`: creates an IPv4 set with its name suffixed by 'v4' AND an IPv6 set with its name suffixed by 'v6'
/// *Merged set-wise*.
#[serde(default)]
version: Option<IpVersion>,
/// Chains where the IP set should be inserted.
/// Defaults to `["INPUT", "FORWARD"]`
/// *Merged set-wise*.
#[serde(default)]
chains: Option<Vec<String>>,
/// Optional timeout, letting linux/netfilter handle entry expiry instead of reaction.
/// Note that `reaction show` and `reaction flush` won't work if this is set instead of an `after` action.
/// Same syntax as `after` and `retryperiod` in reaction.
/// *Merged set-wise*.
#[serde(skip_serializing_if = "Option::is_none")]
timeout: Option<String>,
#[serde(skip)]
timeout_u32: Option<u32>,
/// Target that iptables should use when the IP is encountered.
/// Defaults to `DROP`, but can also be `ACCEPT`, `RETURN` or any user-defined chain.
/// *Merged set-wise*.
#[serde(default)]
target: Option<String>,
}
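Deserialized through the serde derives above (`#[serde(flatten)]` folds `SetOptions` into `ActionOptions`), an `ipset` action config might look like the following sketch. All values are illustrative; `set` is the only required field, the others fall back to the documented defaults:

```json
{
  "set": "banned",
  "pattern": "ip",
  "action": "add",
  "version": "ip",
  "chains": ["INPUT", "FORWARD"],
  "timeout": "3h",
  "target": "DROP"
}
```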
impl SetOptions {
pub fn merge(&mut self, options: &SetOptions) -> Result<(), String> {
// merge two Option<T> and fail if there is conflict
fn inner_merge<T: Eq + Clone + std::fmt::Debug>(
a: &mut Option<T>,
b: &Option<T>,
name: &str,
) -> Result<(), String> {
match (&a, &b) {
(Some(aa), Some(bb)) => {
if aa != bb {
return Err(format!(
"Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
));
}
}
(None, Some(_)) => {
*a = b.clone();
}
_ => (),
};
Ok(())
}
inner_merge(&mut self.version, &options.version, "version")?;
inner_merge(&mut self.timeout, &options.timeout, "timeout")?;
inner_merge(&mut self.chains, &options.chains, "chains")?;
inner_merge(&mut self.target, &options.target, "target")?;
if let Some(timeout) = &self.timeout {
let duration = parse_duration(timeout)
.map_err(|err| format!("failed to parse timeout: {}", err))?
.as_secs();
if duration > u32::MAX as u64 {
return Err(format!(
"timeout is limited to {} seconds (approx {} days)",
u32::MAX,
49_710
));
}
self.timeout_u32 = Some(duration as u32);
}
Ok(())
}
}
pub struct Set {
sets: SetNames,
chains: Vec<String>,
timeout: Option<u32>,
target: String,
}
impl Set {
pub fn from(name: String, options: SetOptions) -> Self {
Self {
sets: SetNames::new(name, options.version),
timeout: options.timeout_u32,
target: options.target.unwrap_or("DROP".into()),
chains: options
.chains
.unwrap_or(vec!["INPUT".into(), "FORWARD".into()]),
}
}
pub async fn init(&self, ipset: &mut IpSet) -> Result<(), (usize, String)> {
for (set, version) in [
(&self.sets.ipv4, Version::IPv4),
(&self.sets.ipv6, Version::IPv6),
] {
if let Some(set) = set {
// create set
ipset
.order(Order::CreateSet(CreateSet {
name: set.clone(),
version,
timeout: self.timeout,
}))
.await
.map_err(|err| (0, err.to_string()))?;
// insert set in chains
for (i, chain) in self.chains.iter().enumerate() {
ipset
.order(Order::InsertSet(SetChain {
set: set.clone(),
chain: chain.clone(),
target: self.target.clone(),
}))
.await
.map_err(|err| (i + 1, err.to_string()))?;
}
}
}
Ok(())
}
pub async fn destroy(&self, ipset: &mut IpSet, until: Option<usize>) {
for set in [&self.sets.ipv4, &self.sets.ipv6] {
if let Some(set) = set {
for chain in self
.chains
.iter()
.take(until.map(|until| until.saturating_sub(1)).unwrap_or(usize::MAX))
{
let _ = ipset
.order(Order::RemoveSet(SetChain {
set: set.clone(),
chain: chain.clone(),
target: self.target.clone(),
}))
.await;
}
if until.is_none_or(|until| until != 0) {
let _ = ipset.order(Order::DestroySet(set.clone())).await;
}
}
}
}
}
pub struct SetNames {
pub ipv4: Option<String>,
pub ipv6: Option<String>,
}
impl SetNames {
pub fn new(name: String, version: Option<IpVersion>) -> Self {
Self {
ipv4: match version {
Some(IpVersion::Ipv4) => Some(name.clone()),
Some(IpVersion::Ipv6) => None,
None | Some(IpVersion::Ip) => Some(format!("{}v4", name)),
},
ipv6: match version {
Some(IpVersion::Ipv4) => None,
Some(IpVersion::Ipv6) => Some(name),
None | Some(IpVersion::Ip) => Some(format!("{}v6", name)),
},
}
}
}
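The naming rule in `SetNames::new` can be sketched standalone (a sketch, assuming the same suffix convention: an explicit `ipv4`/`ipv6` keeps the name for that family only, while `ip` — the default — derives one set per family by suffixing `v4`/`v6`):

```rust
// Standalone sketch of the set-naming rule; strings stand in for IpVersion.
fn set_names(name: &str, version: Option<&str>) -> (Option<String>, Option<String>) {
    match version {
        // IPv4-only: keep the name as-is, no IPv6 set.
        Some("ipv4") => (Some(name.to_string()), None),
        // IPv6-only: keep the name as-is, no IPv4 set.
        Some("ipv6") => (None, Some(name.to_string())),
        // Default / "ip": one set per family, suffixed.
        _ => (Some(format!("{name}v4")), Some(format!("{name}v6"))),
    }
}

fn main() {
    assert_eq!(set_names("banned", Some("ipv4")), (Some("banned".into()), None));
    assert_eq!(set_names("banned", Some("ipv6")), (None, Some("banned".into())));
    assert_eq!(
        set_names("banned", None),
        (Some("bannedv4".into()), Some("bannedv6".into()))
    );
}
```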
pub struct Action {
ipset: IpSet,
rx: remocMpsc::Receiver<Exec>,
shutdown: ShutdownToken,
sets: SetNames,
// index of pattern ip in match vec
ip_index: usize,
action: AddDel,
}
impl Action {
pub fn new(
ipset: IpSet,
shutdown: ShutdownToken,
rx: remocMpsc::Receiver<Exec>,
options: ActionOptions,
) -> Result<Self, String> {
Ok(Action {
ipset,
rx,
shutdown,
sets: SetNames::new(options.set, options.set_options.version),
ip_index: options.ip_index,
action: options.action,
})
}
pub async fn serve(mut self) {
loop {
let event = tokio::select! {
exec = self.rx.recv() => Some(exec),
_ = self.shutdown.wait() => None,
};
match event {
// shutdown asked
None => break,
// channel closed
Some(Ok(None)) => break,
// error from channel
Some(Err(err)) => {
eprintln!("ERROR {err}");
break;
}
// ok
Some(Ok(Some(exec))) => {
if let Err(err) = self.handle_exec(exec).await {
eprintln!("ERROR {err}");
break;
}
}
}
}
// eprintln!("DEBUG Asking for shutdown");
// self.shutdown.ask_shutdown();
}
async fn handle_exec(&mut self, mut exec: Exec) -> Result<(), String> {
// safeguard against Vec::remove's panic
if exec.match_.len() <= self.ip_index {
return Err(format!(
"match received from reaction is smaller than expected. looking for index {} but size is {}. this is a bug!",
self.ip_index,
exec.match_.len()
));
}
let ip = exec.match_.remove(self.ip_index);
// select set
let set = match (&self.sets.ipv4, &self.sets.ipv6) {
(None, None) => return Err("action is neither IPv4 nor IPv6, this is a bug!".to_string()),
(None, Some(set)) => set,
(Some(set), None) => set,
(Some(set4), Some(set6)) => {
if ip.contains(':') {
set6
} else {
set4
}
}
};
// add/remove ip to set
self.ipset
.order(match self.action {
AddDel::Add => Order::Add(set.clone(), ip),
AddDel::Del => Order::Del(set.clone(), ip),
})
.await?;
Ok(())
}
}
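`handle_exec` routes an IP to the v4 or v6 set purely on whether the string contains a colon. For valid textual addresses this agrees with a full parse, as this std-only check shows (a sketch, independent of the plugin):

```rust
use std::net::IpAddr;

// The colon heuristic used above: every textual IPv6 address contains
// at least one ':', while dotted-quad IPv4 never does.
fn is_v6_by_colon(ip: &str) -> bool {
    ip.contains(':')
}

fn main() {
    for (addr, v6) in [("192.0.2.1", false), ("2001:db8::1", true), ("::1", true)] {
        assert_eq!(is_v6_by_colon(addr), v6);
        // Cross-check the heuristic against the real parser.
        assert_eq!(addr.parse::<IpAddr>().unwrap().is_ipv6(), v6);
    }
}
```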
#[cfg(test)]
mod tests {
use crate::action::{IpVersion, SetOptions};
#[tokio::test]
async fn set_options_merge() {
let s1 = SetOptions {
version: None,
chains: None,
timeout: None,
timeout_u32: None,
target: None,
};
let s2 = SetOptions {
version: Some(IpVersion::Ipv4),
chains: Some(vec!["INPUT".into()]),
timeout: Some("3h".into()),
timeout_u32: Some(3 * 3600),
target: Some("DROP".into()),
};
assert_ne!(s1, s2);
assert_eq!(s1, SetOptions::default());
{
// s2 can be merged in s1
let mut s1 = s1.clone();
assert!(s1.merge(&s2).is_ok());
assert_eq!(s1, s2);
}
{
// s1 can be merged in s2
let mut s2 = s2.clone();
assert!(s2.merge(&s1).is_ok());
}
{
// s1 can be merged in itself
let mut s3 = s1.clone();
assert!(s3.merge(&s1).is_ok());
assert_eq!(s1, s3);
}
{
// s2 can be merged in itself
let mut s3 = s2.clone();
assert!(s3.merge(&s2).is_ok());
assert_eq!(s2, s3);
}
for s3 in [
SetOptions {
version: Some(IpVersion::Ipv6),
..Default::default()
},
SetOptions {
chains: Some(vec!["damn".into()]),
..Default::default()
},
SetOptions {
timeout: Some("30min".into()),
..Default::default()
},
SetOptions {
target: Some("log-refuse".into()),
..Default::default()
},
] {
// none with some is ok
assert!(s3.clone().merge(&s1).is_ok(), "s3: {s3:?}");
assert!(s1.clone().merge(&s3).is_ok(), "s3: {s3:?}");
// different some is ko
assert!(s3.clone().merge(&s2).is_err(), "s3: {s3:?}");
assert!(s2.clone().merge(&s3).is_err(), "s3: {s3:?}");
}
}
}


@@ -0,0 +1,248 @@
use std::{collections::BTreeMap, fmt::Display, net::Ipv4Addr, process::Command, thread};
use ipset::{
Session,
types::{HashNet, NetDataType, Parse},
};
use tokio::sync::{mpsc, oneshot};
#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub enum Version {
IPv4,
IPv6,
}
impl Display for Version {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Version::IPv4 => "IPv4",
Version::IPv6 => "IPv6",
})
}
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct CreateSet {
pub name: String,
pub version: Version,
pub timeout: Option<u32>,
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct SetChain {
pub set: String,
pub chain: String,
pub target: String,
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub enum Order {
CreateSet(CreateSet),
DestroySet(String),
InsertSet(SetChain),
RemoveSet(SetChain),
Add(String, String),
Del(String, String),
}
#[derive(Clone)]
pub struct IpSet {
tx: mpsc::Sender<OrderType>,
}
impl Default for IpSet {
fn default() -> Self {
let (tx, rx) = mpsc::channel(1);
thread::spawn(move || IPsetManager::default().serve(rx));
Self { tx }
}
}
impl IpSet {
pub async fn order(&mut self, order: Order) -> Result<(), IpSetError> {
let (tx, rx) = oneshot::channel();
self.tx
.send((order, tx))
.await
.map_err(|err| IpSetError::Thread(format!("ipset thread has quit: {err}")))?;
rx.await
.map_err(|err| IpSetError::Thread(format!("ipset thread didn't respond: {err}")))?
.map_err(IpSetError::IpSet)
}
}
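`IpSet` confines the blocking libipset session to a dedicated thread and talks to it over a tokio `mpsc` channel, pairing each `Order` with a `oneshot` reply channel. The same request/response actor shape can be sketched with only the standard library (std channels standing in for tokio's `mpsc`/`oneshot`; the worker logic is a placeholder):

```rust
use std::sync::mpsc;
use std::thread;

// Each request carries its own reply channel, so callers get a
// per-order Result back from the worker thread.
type Order = (String, mpsc::Sender<Result<(), String>>);

fn spawn_worker() -> mpsc::Sender<Order> {
    let (tx, rx) = mpsc::channel::<Order>();
    thread::spawn(move || {
        // Serve orders until all senders are dropped.
        for (order, reply) in rx {
            // Placeholder for the real blocking work (ipset calls).
            let result = if order.is_empty() {
                Err("empty order".to_string())
            } else {
                Ok(())
            };
            let _ = reply.send(result);
        }
    });
    tx
}

fn order(tx: &mpsc::Sender<Order>, what: &str) -> Result<(), String> {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send((what.to_string(), reply_tx))
        .map_err(|_| "worker thread has quit".to_string())?;
    reply_rx
        .recv()
        .map_err(|_| "worker thread didn't respond".to_string())?
}

fn main() {
    let tx = spawn_worker();
    assert!(order(&tx, "create set").is_ok());
    assert!(order(&tx, "").is_err());
}
```

Cloning the sender (as `IpSet` derives `Clone`) lets many actions share one serialized worker, which keeps the non-thread-safe session usable from async code.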
pub enum IpSetError {
Thread(String),
IpSet(()),
}
impl Display for IpSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
IpSetError::Thread(err) => err,
IpSetError::IpSet(()) => "ipset error",
}
)
}
}
impl From<IpSetError> for String {
fn from(value: IpSetError) -> Self {
match value {
IpSetError::Thread(err) => err,
IpSetError::IpSet(()) => "ipset error".to_string(),
}
}
}
pub type OrderType = (Order, oneshot::Sender<Result<(), ()>>);
struct Set {
session: Session<HashNet>,
version: Version,
}
#[derive(Default)]
struct IPsetManager {
// IPset sessions
sessions: BTreeMap<String, Set>,
}
impl IPsetManager {
fn serve(mut self, mut rx: mpsc::Receiver<OrderType>) {
loop {
match rx.blocking_recv() {
None => break,
Some((order, response)) => {
let result = self.handle_order(order);
let _ = response.send(result);
}
}
}
}
fn handle_order(&mut self, order: Order) -> Result<(), ()> {
match order {
Order::CreateSet(CreateSet {
name,
version,
timeout,
}) => {
eprintln!("INFO creating {version} set {name}");
let mut session: Session<HashNet> = Session::new(name.clone());
session
.create(|builder| {
let builder = if let Some(timeout) = timeout {
builder.with_timeout(timeout)?
} else {
builder
};
builder.with_ipv6(version == Version::IPv6)?.build()
})
.map_err(|err| eprintln!("ERROR Could not create set {name}: {err}"))?;
self.sessions.insert(name, Set { session, version });
}
Order::DestroySet(set) => {
if let Some(mut session) = self.sessions.remove(&set) {
eprintln!("INFO destroying {} set {set}", session.version);
session
.session
.destroy()
.map_err(|err| eprintln!("ERROR Could not destroy set {set}: {err}"))?;
}
}
Order::InsertSet(options) => self.insert_remove_set(options, true)?,
Order::RemoveSet(options) => self.insert_remove_set(options, false)?,
Order::Add(set, ip) => self.insert_remove_ip(set, ip, true)?,
Order::Del(set, ip) => self.insert_remove_ip(set, ip, false)?,
};
Ok(())
}
fn insert_remove_ip(&mut self, set: String, ip: String, insert: bool) -> Result<(), ()> {
self._insert_remove_ip(set, ip, insert)
.map_err(|err| eprintln!("ERROR {err}"))
}
fn _insert_remove_ip(&mut self, set: String, ip: String, insert: bool) -> Result<(), String> {
let session = self.sessions.get_mut(&set).ok_or(format!(
"No set handled by this plugin with this name: {set}. This likely is a bug."
))?;
let mut net_data = NetDataType::new(Ipv4Addr::LOCALHOST, 0);
net_data
.parse(&ip)
.map_err(|err| format!("`{ip}` is not recognized as an IP: {err}"))?;
let verb = if insert { "add" } else { "del" };
if insert {
session.session.add(net_data, &[])
} else {
session.session.del(net_data)
}
.map_err(|err| format!("Could not {verb} `{ip}` in set {set}: {err}"))?;
Ok(())
}
fn insert_remove_set(&self, options: SetChain, insert: bool) -> Result<(), ()> {
self._insert_remove_set(options, insert)
.map_err(|err| eprintln!("ERROR {err}"))
}
fn _insert_remove_set(&self, options: SetChain, insert: bool) -> Result<(), String> {
let SetChain { set, chain, target } = options;
let version = self
.sessions
.get(&set)
.ok_or(format!(
"No set managed by this plugin with this name: {set}"
))?
.version;
let (verb, verbing, from) = if insert {
("insert", "inserting", "in")
} else {
("remove", "removing", "from")
};
eprintln!("INFO {verbing} {version} set {set} {from} chain {chain}");
let command = match version {
Version::IPv4 => "iptables",
Version::IPv6 => "ip6tables",
};
let mut child = Command::new(command)
.args([
"-w",
if insert { "-I" } else { "-D" },
&chain,
"-m",
"set",
"--match-set",
&set,
"src",
"-j",
&target,
])
.spawn()
.map_err(|err| format!("Could not {verb} ipset {set} {from} chain {chain}: Could not execute {command}: {err}"))?;
let exit = child
.wait()
.map_err(|err| format!("Could not {verb} ipset {set} {from} chain {chain}: {err}"))?;
if exit.success() {
Ok(())
} else {
Err(format!(
"Could not {verb} ipset: exit code {}",
exit.code()
.map(|c| c.to_string())
.unwrap_or_else(|| "<unknown>".to_string())
))
}
}
}
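For an IPv4 set named `bannedv4` inserted in the `INPUT` chain with target `DROP`, the command assembled above corresponds to an invocation of this form (illustrative; requires root):

```sh
iptables -w -I INPUT -m set --match-set bannedv4 src -j DROP
```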

View file

@ -0,0 +1,159 @@
use std::collections::{BTreeMap, BTreeSet};
use reaction_plugin::{
ActionConfig, ActionImpl, Hello, Manifest, PluginInfo, RemoteError, RemoteResult, StreamConfig,
StreamImpl,
shutdown::{ShutdownController, ShutdownToken},
};
use remoc::rtc;
use crate::{
action::{Action, ActionOptions, Set, SetOptions},
ipset::IpSet,
};
#[cfg(test)]
mod tests;
mod action;
mod ipset;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
reaction_plugin::main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {
ipset: IpSet,
sets: Vec<Set>,
actions: Vec<Action>,
shutdown: ShutdownController,
}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::default(),
actions: BTreeSet::from(["ipset".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
if !streams.is_empty() {
return Err("This plugin can't handle any stream type".into());
}
let mut ret_actions = Vec::with_capacity(actions.len());
let mut set_options: BTreeMap<String, SetOptions> = BTreeMap::new();
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if &action_type != "ipset" {
return Err("This plugin can't handle other action types than ipset".into());
}
let mut options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
})?;
options.set_ip_index(patterns).map_err(|_|
format!(
"No pattern with name {} in filter {stream_name}.{filter_name}. Try setting the option `pattern` to your pattern name of type 'ip'",
&options.pattern
)
)?;
// Merge option
set_options
.entry(options.set.clone())
.or_default()
.merge(&options.set_options)
.map_err(|err| format!("ipset {}: {err}", options.set))?;
let (tx, rx) = remoc::rch::mpsc::channel(1);
self.actions.push(Action::new(
self.ipset.clone(),
self.shutdown.token(),
rx,
options,
)?);
ret_actions.push(ActionImpl { tx });
}
// Init all sets
while let Some((name, options)) = set_options.pop_first() {
self.sets.push(Set::from(name, options));
}
Ok((vec![], ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
self.shutdown.delegate().handle_quit_signals()?;
let mut first_error = None;
for (i, set) in self.sets.iter().enumerate() {
// Record the first error and stop initializing further sets
if let Err((failed_step, err)) = set.init(&mut self.ipset).await {
first_error = Some((i, failed_step, RemoteError::Plugin(err)));
break;
}
}
// Destroy initialized sets if error
if let Some((last_set, failed_step, err)) = first_error {
eprintln!("DEBUG last_set: {last_set} failed_step: {failed_step} err: {err}");
for (curr_set, set) in self.sets.iter().enumerate().take(last_set + 1) {
let until = if last_set == curr_set {
Some(failed_step)
} else {
None
};
let _ = set.destroy(&mut self.ipset, until).await;
}
return Err(err);
}
// Launch a task that will destroy the sets on shutdown
tokio::spawn(destroy_sets_at_shutdown(
self.ipset.clone(),
std::mem::take(&mut self.sets),
self.shutdown.token(),
));
// Launch all actions
while let Some(action) = self.actions.pop() {
tokio::spawn(async move { action.serve().await });
}
self.actions = Default::default();
Ok(())
}
async fn close(self) -> RemoteResult<()> {
self.shutdown.ask_shutdown();
self.shutdown.wait_all_task_shutdown().await;
Ok(())
}
}
async fn destroy_sets_at_shutdown(mut ipset: IpSet, sets: Vec<Set>, shutdown: ShutdownToken) {
shutdown.wait().await;
for set in sets {
set.destroy(&mut ipset, None).await;
}
}


@@ -0,0 +1,253 @@
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig, Value};
use serde_json::json;
use crate::Plugin;
#[tokio::test]
async fn conf_stream() {
// No stream is supported by ipset
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "ipset".into(),
config: Value::Null
}],
vec![]
)
.await
.is_err()
);
// Nothing is ok
assert!(Plugin::default().load_config(vec![], vec![]).await.is_ok());
}
#[tokio::test]
async fn conf_action_standalone() {
let p = vec!["name".into(), "ip".into(), "ip2".into()];
let p_noip = vec!["name".into(), "ip2".into()];
for (is_ok, conf, patterns) in [
// minimal set
(true, json!({ "set": "test" }), &p),
// missing set key
(false, json!({}), &p),
(false, json!({ "version": "ipv4" }), &p),
// unknown key
(false, json!({ "set": "test", "unknown": "yes" }), &p),
(false, json!({ "set": "test", "ip_index": 1 }), &p),
(false, json!({ "set": "test", "timeout_u32": 1 }), &p),
// pattern //
(true, json!({ "set": "test" }), &p),
(true, json!({ "set": "test", "pattern": "ip" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p_noip),
// unknown pattern "ip"
(false, json!({ "set": "test" }), &p_noip),
(false, json!({ "set": "test", "pattern": "ip" }), &p_noip),
// unknown pattern
(false, json!({ "set": "test", "pattern": "unknown" }), &p),
(false, json!({ "set": "test", "pattern": "uwu" }), &p_noip),
// bad type
(false, json!({ "set": "test", "pattern": 0 }), &p_noip),
(false, json!({ "set": "test", "pattern": true }), &p_noip),
// action //
(true, json!({ "set": "test", "action": "add" }), &p),
(true, json!({ "set": "test", "action": "del" }), &p),
// unknown action
(false, json!({ "set": "test", "action": "create" }), &p),
(false, json!({ "set": "test", "action": "insert" }), &p),
(false, json!({ "set": "test", "action": "delete" }), &p),
(false, json!({ "set": "test", "action": "destroy" }), &p),
// bad type
(false, json!({ "set": "test", "action": true }), &p),
(false, json!({ "set": "test", "action": 1 }), &p),
// ip version //
// ok
(true, json!({ "set": "test", "version": "ipv4" }), &p),
(true, json!({ "set": "test", "version": "ipv6" }), &p),
(true, json!({ "set": "test", "version": "ip" }), &p),
// unknown version
(false, json!({ "set": "test", "version": 4 }), &p),
(false, json!({ "set": "test", "version": 6 }), &p),
(false, json!({ "set": "test", "version": 46 }), &p),
(false, json!({ "set": "test", "version": "5" }), &p),
(false, json!({ "set": "test", "version": "ipv5" }), &p),
(false, json!({ "set": "test", "version": "4" }), &p),
(false, json!({ "set": "test", "version": "6" }), &p),
(false, json!({ "set": "test", "version": "46" }), &p),
// bad type
(false, json!({ "set": "test", "version": true }), &p),
// chains //
// everything is fine really
(true, json!({ "set": "test", "chains": [] }), &p),
(true, json!({ "set": "test", "chains": ["INPUT"] }), &p),
(true, json!({ "set": "test", "chains": ["FORWARD"] }), &p),
(
true,
json!({ "set": "test", "chains": ["custom_chain"] }),
&p,
),
(
true,
json!({ "set": "test", "chains": ["INPUT", "FORWARD"] }),
&p,
),
(
true,
json!({
"set": "test",
"chains": ["INPUT", "FORWARD", "my_iptables_chain"]
}),
&p,
),
// timeout //
(true, json!({ "set": "test", "timeout": "1m" }), &p),
(true, json!({ "set": "test", "timeout": "3 days" }), &p),
// bad
(false, json!({ "set": "test", "timeout": "3 dayz"}), &p),
(false, json!({ "set": "test", "timeout": 12 }), &p),
// target //
// anything is fine too
(true, json!({ "set": "test", "target": "DROP" }), &p),
(true, json!({ "set": "test", "target": "ACCEPT" }), &p),
(true, json!({ "set": "test", "target": "RETURN" }), &p),
(true, json!({ "set": "test", "target": "custom_chain" }), &p),
// bad
(false, json!({ "set": "test", "target": 11 }), &p),
(false, json!({ "set": "test", "target": ["DROP"] }), &p),
] {
let res = Plugin::default()
.load_config(
vec![],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "ipset".into(),
config: conf.clone().into(),
patterns: patterns.clone(),
}],
)
.await;
assert!(
res.is_ok() == is_ok,
"conf: {:?}, must be ok: {is_ok}, result: {:?}",
conf,
// empty Result::Ok because ActionImpl is not Debug
res.map(|_| ())
);
}
}
// TODO
#[tokio::test]
async fn conf_action_merge() {
let mut plugin = Plugin::default();
let set1 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action1".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"target": "DROP",
"chains": ["INPUT"],
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set2 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"target": "DROP",
"version": "ip",
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set3 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"action": "del",
})
.into(),
patterns: vec!["ip".into()],
};
let res = plugin
.load_config(
vec![],
vec![
// First set
set1.clone(),
// Same set, adding options, no conflict
set2.clone(),
// Same set, no new options, no conflict
set3.clone(),
// Unrelated set, so no conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "ipset".into(),
config: json!({
"set": "test2",
"target": "target1",
"version": "ipv6",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_ok(), "res: {:?}", res.map(|_| ()));
// Another set with conflict is not ok
let res = plugin
.load_config(
vec![],
vec![
// First set
set1,
// Same set, adding options, no conflict
set2,
// Same set, no new options, no conflict
set3,
// Another set with conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"target": "target2",
"action": "del",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_err(), "res: {:?}", res.map(|_| ()));
}


@@ -0,0 +1,13 @@
[package]
name = "reaction-plugin-nftables"
version = "0.1.0"
edition = "2024"
[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
nftables = { version = "0.6.3", features = ["tokio"] }
libnftables1-sys = { version = "0.1.1" }


@@ -0,0 +1,493 @@
use std::{
borrow::Cow,
collections::HashSet,
fmt::{Debug, Display},
};
use nftables::{
batch::Batch,
expr::Expression,
schema::{Element, NfListObject, Rule, SetFlag, SetType, SetTypeValue},
stmt::Statement,
types::{NfFamily, NfHook},
};
use reaction_plugin::{Exec, shutdown::ShutdownToken, time::parse_duration};
use remoc::rch::mpsc as remocMpsc;
use serde::{Deserialize, Serialize};
use crate::{helpers::Version, nft::NftClient};
#[derive(Default, Serialize, Deserialize, PartialEq, Eq, Clone, Copy)]
pub enum IpVersion {
#[default]
#[serde(rename = "ip")]
Ip,
#[serde(rename = "ipv4")]
Ipv4,
#[serde(rename = "ipv6")]
Ipv6,
}
impl Debug for IpVersion {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
IpVersion::Ipv4 => "ipv4",
IpVersion::Ipv6 => "ipv6",
IpVersion::Ip => "ip",
}
)
}
}
#[derive(Default, Debug, Serialize, Deserialize)]
pub enum AddDel {
#[default]
#[serde(alias = "add")]
Add,
#[serde(alias = "delete")]
Delete,
}
/// User-facing action options
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ActionOptions {
/// The set that should be used by this action
pub set: String,
/// The pattern name of the IP.
/// Defaults to "ip"
#[serde(default = "serde_ip")]
pub pattern: String,
#[serde(skip)]
ip_index: usize,
/// Whether the action should "add" or "delete" the IP from the set.
#[serde(default)]
action: AddDel,
#[serde(flatten)]
pub set_options: SetOptions,
}
fn serde_ip() -> String {
"ip".into()
}
impl ActionOptions {
pub fn set_ip_index(&mut self, patterns: Vec<String>) -> Result<(), ()> {
self.ip_index = patterns
.iter()
.position(|name| name == &self.pattern)
.ok_or(())?;
Ok(())
}
}
/// Merged set options
#[derive(Default, Clone, Deserialize, Serialize, Debug, PartialEq, Eq)]
pub struct SetOptions {
/// The IP type.
/// Defaults to `ip`.
/// If `ipv4`: creates an IPv4 set with this name
/// If `ipv6`: creates an IPv6 set with this name
/// If `ip`: creates an IPv4 set with its name suffixed by 'v4' AND an IPv6 set with its name suffixed by 'v6'
/// *Merged set-wise*.
#[serde(default)]
version: Option<IpVersion>,
/// Chains where the IP set should be inserted.
/// Defaults to `["input", "forward"]`
/// *Merged set-wise*.
#[serde(default)]
hooks: Option<Vec<RHook>>,
/// Optional timeout, letting linux/netfilter handle entry expiry instead of reaction.
/// Note that `reaction show` and `reaction flush` won't work when this is used instead of an `after` action.
/// Same syntax as `after` and `retryperiod` in reaction.
/// *Merged set-wise*.
#[serde(skip_serializing_if = "Option::is_none")]
timeout: Option<String>,
#[serde(skip)]
timeout_u32: Option<u32>,
/// Verdict applied when an IP from the set matches.
/// Defaults to `drop`; can also be `accept`, `continue` or `return`.
/// *Merged set-wise*.
#[serde(default)]
target: Option<RStatement>,
}
impl SetOptions {
pub fn merge(&mut self, options: &SetOptions) -> Result<(), String> {
// merge two Option<T> and fail if there is conflict
fn inner_merge<T: Eq + Clone + std::fmt::Debug>(
a: &mut Option<T>,
b: &Option<T>,
name: &str,
) -> Result<(), String> {
match (&a, &b) {
(Some(aa), Some(bb)) => {
if aa != bb {
return Err(format!(
"Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
));
}
}
(None, Some(_)) => {
*a = b.clone();
}
_ => (),
};
Ok(())
}
inner_merge(&mut self.version, &options.version, "version")?;
inner_merge(&mut self.timeout, &options.timeout, "timeout")?;
inner_merge(&mut self.hooks, &options.hooks, "hooks")?;
inner_merge(&mut self.target, &options.target, "target")?;
if let Some(timeout) = &self.timeout {
let duration = parse_duration(timeout)
.map_err(|err| format!("failed to parse timeout: {}", err))?
.as_secs();
if duration > u32::MAX as u64 {
return Err(format!(
"timeout is limited to {} seconds (approx {} days)",
u32::MAX,
49_710
));
}
self.timeout_u32 = Some(duration as u32);
}
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RHook {
Ingress,
Prerouting,
Forward,
Input,
Output,
Postrouting,
Egress,
}
impl RHook {
pub fn as_str(&self) -> &'static str {
match self {
RHook::Ingress => "ingress",
RHook::Prerouting => "prerouting",
RHook::Forward => "forward",
RHook::Input => "input",
RHook::Output => "output",
RHook::Postrouting => "postrouting",
RHook::Egress => "egress",
}
}
}
impl Display for RHook {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
impl From<&RHook> for NfHook {
fn from(value: &RHook) -> Self {
match value {
RHook::Ingress => Self::Ingress,
RHook::Prerouting => Self::Prerouting,
RHook::Forward => Self::Forward,
RHook::Input => Self::Input,
RHook::Output => Self::Output,
RHook::Postrouting => Self::Postrouting,
RHook::Egress => Self::Egress,
}
}
}
#[derive(Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RStatement {
Accept,
Drop,
Continue,
Return,
}
pub struct Set {
pub sets: SetNames,
pub hooks: Vec<RHook>,
pub timeout: Option<u32>,
pub target: RStatement,
}
impl Set {
pub fn from(name: String, options: SetOptions) -> Self {
Self {
sets: SetNames::new(name, options.version),
timeout: options.timeout_u32,
target: options.target.unwrap_or(RStatement::Drop),
hooks: options.hooks.unwrap_or(vec![RHook::Input, RHook::Forward]),
}
}
pub fn init<'a>(&self, batch: &mut Batch<'a>) -> Result<(), String> {
for (set, version) in [
(&self.sets.ipv4, Version::IPv4),
(&self.sets.ipv6, Version::IPv6),
] {
if let Some(set) = set {
let family = NfFamily::INet;
let table = Cow::from("reaction");
// create set
batch.add(NfListObject::<'a>::Set(Box::new(nftables::schema::Set::<
'a,
> {
family,
table: table.to_owned(),
name: Cow::Owned(set.to_owned()),
// TODO Try a set which is both ipv4 and ipv6?
set_type: SetTypeValue::Single(match version {
Version::IPv4 => SetType::Ipv4Addr,
Version::IPv6 => SetType::Ipv6Addr,
}),
flags: Some({
let mut flags = HashSet::from([SetFlag::Interval]);
if self.timeout.is_some() {
flags.insert(SetFlag::Timeout);
}
flags
}),
timeout: self.timeout.clone(),
..Default::default()
})));
// insert a rule in each chain: match the source address against the set,
// then apply the configured verdict (without the match, the verdict would
// apply to all traffic)
let expr = vec![
Statement::Match(nftables::stmt::Match {
left: Expression::Named(nftables::expr::NamedExpression::Payload(
nftables::expr::Payload::PayloadField(nftables::expr::PayloadField {
protocol: Cow::Borrowed(match version {
Version::IPv4 => "ip",
Version::IPv6 => "ip6",
}),
field: Cow::Borrowed("saddr"),
}),
)),
right: Expression::String(Cow::Owned(format!("@{set}"))),
op: nftables::stmt::Operator::EQ,
}),
match self.target {
RStatement::Accept => Statement::Accept(None),
RStatement::Drop => Statement::Drop(None),
RStatement::Continue => Statement::Continue(None),
RStatement::Return => Statement::Return(None),
},
];
for hook in &self.hooks {
batch.add(NfListObject::Rule(Rule {
family,
table: table.to_owned(),
chain: Cow::from(hook.to_string()),
expr: Cow::Owned(expr.clone()),
..Default::default()
}));
}
}
}
Ok(())
}
}
pub struct SetNames {
pub ipv4: Option<String>,
pub ipv6: Option<String>,
}
impl SetNames {
pub fn new(name: String, version: Option<IpVersion>) -> Self {
Self {
ipv4: match version {
Some(IpVersion::Ipv4) => Some(name.clone()),
Some(IpVersion::Ipv6) => None,
None | Some(IpVersion::Ip) => Some(format!("{}v4", name)),
},
ipv6: match version {
Some(IpVersion::Ipv4) => None,
Some(IpVersion::Ipv6) => Some(name),
None | Some(IpVersion::Ip) => Some(format!("{}v6", name)),
},
}
}
}
pub struct Action {
nft: NftClient,
rx: remocMpsc::Receiver<Exec>,
shutdown: ShutdownToken,
sets: SetNames,
// index of pattern ip in match vec
ip_index: usize,
action: AddDel,
}
impl Action {
pub fn new(
nft: NftClient,
shutdown: ShutdownToken,
rx: remocMpsc::Receiver<Exec>,
options: ActionOptions,
) -> Result<Self, String> {
Ok(Action {
nft,
rx,
shutdown,
sets: SetNames::new(options.set, options.set_options.version),
ip_index: options.ip_index,
action: options.action,
})
}
pub async fn serve(mut self) {
loop {
let event = tokio::select! {
exec = self.rx.recv() => Some(exec),
_ = self.shutdown.wait() => None,
};
match event {
// shutdown asked
None => break,
// channel closed
Some(Ok(None)) => break,
// error from channel
Some(Err(err)) => {
eprintln!("ERROR {err}");
break;
}
// ok
Some(Ok(Some(exec))) => {
if let Err(err) = self.handle_exec(exec).await {
eprintln!("ERROR {err}");
break;
}
}
}
}
// eprintln!("DEBUG Asking for shutdown");
// self.shutdown.ask_shutdown();
}
async fn handle_exec(&mut self, mut exec: Exec) -> Result<(), String> {
// safeguard against Vec::remove's panic
if exec.match_.len() <= self.ip_index {
return Err(format!(
"match received from reaction is smaller than expected. looking for index {} but size is {}. this is a bug!",
self.ip_index,
exec.match_.len()
));
}
let ip = exec.match_.remove(self.ip_index);
// select set
let set = match (&self.sets.ipv4, &self.sets.ipv6) {
(None, None) => return Err("action is neither IPv4 nor IPv6, this is a bug!".into()),
(None, Some(set)) => set,
(Some(set), None) => set,
(Some(set4), Some(set6)) => {
if ip.contains(':') {
set6
} else {
set4
}
}
};
// add/remove ip to set
let element = NfListObject::Element(Element {
family: NfFamily::INet,
table: Cow::from("reaction"),
name: Cow::from(set),
elem: Cow::from(vec![Expression::String(Cow::from(ip.clone()))]),
});
let mut batch = Batch::new();
match self.action {
AddDel::Add => batch.add(element),
AddDel::Delete => batch.delete(element),
};
match self.nft.send(batch).await {
Ok(ok) => {
eprintln!("DEBUG action ok {:?} {ip}: {ok}", self.action);
Ok(())
}
Err(err) => Err(format!("action ko {:?} {ip}: {err}", self.action)),
}
}
}
#[cfg(test)]
mod tests {
use crate::action::{IpVersion, RHook, RStatement, SetOptions};
#[tokio::test]
async fn set_options_merge() {
let s1 = SetOptions {
version: None,
hooks: None,
timeout: None,
timeout_u32: None,
target: None,
};
let s2 = SetOptions {
version: Some(IpVersion::Ipv4),
hooks: Some(vec![RHook::Input]),
timeout: Some("3h".into()),
timeout_u32: Some(3 * 3600),
target: Some(RStatement::Drop),
};
assert_ne!(s1, s2);
assert_eq!(s1, SetOptions::default());
{
// s2 can be merged in s1
let mut s1 = s1.clone();
assert!(s1.merge(&s2).is_ok());
assert_eq!(s1, s2);
}
{
// s1 can be merged in s2
let mut s2 = s2.clone();
assert!(s2.merge(&s1).is_ok());
}
{
// s1 can be merged in itself
let mut s3 = s1.clone();
assert!(s3.merge(&s1).is_ok());
assert_eq!(s1, s3);
}
{
// s2 can be merged in itself
let mut s3 = s2.clone();
assert!(s3.merge(&s2).is_ok());
assert_eq!(s2, s3);
}
for s3 in [
SetOptions {
version: Some(IpVersion::Ipv6),
..Default::default()
},
SetOptions {
hooks: Some(vec![RHook::Output]),
..Default::default()
},
SetOptions {
timeout: Some("30min".into()),
..Default::default()
},
SetOptions {
target: Some(RStatement::Continue),
..Default::default()
},
] {
// none with some is ok
assert!(s3.clone().merge(&s1).is_ok(), "s3: {s3:?}");
assert!(s1.clone().merge(&s3).is_ok(), "s3: {s3:?}");
// different some is ko
assert!(s3.clone().merge(&s2).is_err(), "s3: {s3:?}");
assert!(s2.clone().merge(&s3).is_err(), "s3: {s3:?}");
}
}
}

View file

@ -0,0 +1,15 @@
use std::fmt::Display;
#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub enum Version {
IPv4,
IPv6,
}
impl Display for Version {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Version::IPv4 => "IPv4",
Version::IPv6 => "IPv6",
})
}
}

View file

@ -0,0 +1,176 @@
use std::{
borrow::Cow,
collections::{BTreeMap, BTreeSet},
};
use nftables::{
batch::Batch,
schema::{Chain, NfListObject, Table},
types::{NfChainType, NfFamily},
};
use reaction_plugin::{
ActionConfig, ActionImpl, Hello, Manifest, PluginInfo, RemoteResult, StreamConfig, StreamImpl,
shutdown::ShutdownController,
};
use remoc::rtc;
use crate::{
action::{Action, ActionOptions, Set, SetOptions},
nft::NftClient,
};
#[cfg(test)]
mod tests;
mod action;
pub mod helpers;
mod nft;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
reaction_plugin::main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {
nft: NftClient,
sets: Vec<Set>,
actions: Vec<Action>,
shutdown: ShutdownController,
}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::default(),
actions: BTreeSet::from(["nftables".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
if !streams.is_empty() {
return Err("This plugin can't handle any stream type".into());
}
let mut ret_actions = Vec::with_capacity(actions.len());
let mut set_options: BTreeMap<String, SetOptions> = BTreeMap::new();
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if &action_type != "nftables" {
return Err("This plugin can't handle other action types than nftables".into());
}
let mut options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
})?;
options.set_ip_index(patterns).map_err(|_|
format!(
"No pattern with name {} in filter {stream_name}.{filter_name}. Try setting the option `pattern` to your pattern name of type 'ip'",
&options.pattern
)
)?;
// Merge option
set_options
.entry(options.set.clone())
.or_default()
.merge(&options.set_options)
.map_err(|err| format!("set {}: {err}", options.set))?;
let (tx, rx) = remoc::rch::mpsc::channel(1);
self.actions.push(Action::new(
self.nft.clone(),
self.shutdown.token(),
rx,
options,
)?);
ret_actions.push(ActionImpl { tx });
}
// Init all sets
while let Some((name, options)) = set_options.pop_first() {
self.sets.push(Set::from(name, options));
}
Ok((vec![], ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
self.shutdown.delegate().handle_quit_signals()?;
let mut batch = Batch::new();
batch.add(reaction_table());
// Create a chain for each registered netfilter hook
for hook in self
.sets
.iter()
.flat_map(|set| &set.hooks)
.collect::<BTreeSet<_>>()
{
batch.add(NfListObject::Chain(Chain {
family: NfFamily::INet,
table: Cow::Borrowed("reaction"),
name: Cow::from(hook.as_str()),
_type: Some(NfChainType::Filter),
hook: Some(hook.into()),
prio: Some(0),
..Default::default()
}));
}
for set in &self.sets {
set.init(&mut batch)?;
}
// apply the whole startup batch at once
self.nft.send(batch).await?;
// Launch a task that will destroy the table on shutdown
{
let token = self.shutdown.token();
let nft = self.nft.clone();
tokio::spawn(async move {
token.wait().await;
let mut batch = Batch::new();
batch.delete(reaction_table());
if let Err(err) = nft.send(batch).await {
eprintln!("ERROR couldn't delete the reaction table on shutdown: {err}");
}
});
}
// Launch all actions
while let Some(action) = self.actions.pop() {
tokio::spawn(async move { action.serve().await });
}
Ok(())
}
async fn close(self) -> RemoteResult<()> {
self.shutdown.ask_shutdown();
self.shutdown.wait_all_task_shutdown().await;
Ok(())
}
}
fn reaction_table() -> NfListObject<'static> {
NfListObject::Table(Table {
family: NfFamily::INet,
name: Cow::Borrowed("reaction"),
handle: None,
})
}
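For reference, the batch assembled in `start()` serializes to libnftables JSON along these lines (abridged sketch with one hook and one IPv4 set; exact field order and defaults depend on the `nftables` crate version):

```json
{
  "nftables": [
    { "add": { "table": { "family": "inet", "name": "reaction" } } },
    { "add": { "chain": { "family": "inet", "table": "reaction", "name": "input",
                          "type": "filter", "hook": "input", "prio": 0 } } },
    { "add": { "set": { "family": "inet", "table": "reaction", "name": "testv4",
                        "type": "ipv4_addr", "flags": ["interval"] } } }
  ]
}
```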

View file

@ -0,0 +1,81 @@
use std::{
ffi::{CStr, CString},
thread,
};
use libnftables1_sys::Nftables;
use nftables::batch::Batch;
use tokio::sync::{mpsc, oneshot};
/// A client with a dedicated server thread to libnftables.
/// Calling [`Default::default()`] spawns a new server thread.
/// Cloning just creates a new client to the same server thread.
#[derive(Clone)]
pub struct NftClient {
tx: mpsc::Sender<NftCommand>,
}
impl Default for NftClient {
fn default() -> Self {
let (tx, mut rx) = mpsc::channel(10);
thread::spawn(move || {
let mut conn = Nftables::new();
while let Some(NftCommand { json, ret }) = rx.blocking_recv() {
let (rc, output, error) = conn.run_cmd(json.as_ptr());
let res = match rc {
0 => to_rust_string(output)
.ok_or_else(|| "unknown ok (rc = 0 but no output buffer)".into()),
// a non-zero rc is a failure: the message must land in the Err variant
_ => Err(to_rust_string(error)
.map(|err| format!("error (rc = {rc}): {err}"))
.unwrap_or_else(|| format!("unknown error (rc = {rc}, no error buffer)"))),
};
let _ = ret.send(res);
}
});
NftClient { tx }
}
}
impl NftClient {
/// Send a batch to nftables.
pub async fn send(&self, batch: Batch<'_>) -> Result<String, String> {
// convert JSON to CString
let mut json = serde_json::to_vec(&batch.to_nftables())
.map_err(|err| format!("couldn't build json to send to nftables: {err}"))?;
json.push(b'\0');
let json = CString::from_vec_with_nul(json)
.map_err(|err| format!("invalid json with null char: {err}"))?;
// Send command
let (tx, rx) = oneshot::channel();
let command = NftCommand { json, ret: tx };
self.tx
.send(command)
.await
.map_err(|err| format!("nftables thread has quit, can't send command: {err}"))?;
// Wait for result
rx.await
.map_err(|_| "nftables thread has quit, no response for command".to_string())?
}
}
struct NftCommand {
json: CString,
ret: oneshot::Sender<Result<String, String>>,
}
fn to_rust_string(c_ptr: *const std::ffi::c_char) -> Option<String> {
if c_ptr.is_null() {
None
} else {
Some(
unsafe { CStr::from_ptr(c_ptr) }
.to_string_lossy()
.into_owned(),
)
}
}

View file

@ -0,0 +1,247 @@
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig, Value};
use serde_json::json;
use crate::Plugin;
#[tokio::test]
async fn conf_stream() {
// No stream is supported by nftables
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "nftables".into(),
config: Value::Null
}],
vec![]
)
.await
.is_err()
);
// Empty config is ok
assert!(Plugin::default().load_config(vec![], vec![]).await.is_ok());
}
#[tokio::test]
async fn conf_action_standalone() {
let p = vec!["name".into(), "ip".into(), "ip2".into()];
let p_noip = vec!["name".into(), "ip2".into()];
for (is_ok, conf, patterns) in [
// minimal set
(true, json!({ "set": "test" }), &p),
// missing set key
(false, json!({}), &p),
(false, json!({ "version": "ipv4" }), &p),
// unknown key
(false, json!({ "set": "test", "unknown": "yes" }), &p),
(false, json!({ "set": "test", "ip_index": 1 }), &p),
(false, json!({ "set": "test", "timeout_u32": 1 }), &p),
// pattern //
(true, json!({ "set": "test" }), &p),
(true, json!({ "set": "test", "pattern": "ip" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p_noip),
// unknown pattern "ip"
(false, json!({ "set": "test" }), &p_noip),
(false, json!({ "set": "test", "pattern": "ip" }), &p_noip),
// unknown pattern
(false, json!({ "set": "test", "pattern": "unknown" }), &p),
(false, json!({ "set": "test", "pattern": "uwu" }), &p_noip),
// bad type
(false, json!({ "set": "test", "pattern": 0 }), &p_noip),
(false, json!({ "set": "test", "pattern": true }), &p_noip),
// action //
(true, json!({ "set": "test", "action": "add" }), &p),
(true, json!({ "set": "test", "action": "delete" }), &p),
// unknown action
(false, json!({ "set": "test", "action": "create" }), &p),
(false, json!({ "set": "test", "action": "insert" }), &p),
(false, json!({ "set": "test", "action": "del" }), &p),
(false, json!({ "set": "test", "action": "destroy" }), &p),
// bad type
(false, json!({ "set": "test", "action": true }), &p),
(false, json!({ "set": "test", "action": 1 }), &p),
// ip version //
// ok
(true, json!({ "set": "test", "version": "ipv4" }), &p),
(true, json!({ "set": "test", "version": "ipv6" }), &p),
(true, json!({ "set": "test", "version": "ip" }), &p),
// unknown version
(false, json!({ "set": "test", "version": 4 }), &p),
(false, json!({ "set": "test", "version": 6 }), &p),
(false, json!({ "set": "test", "version": 46 }), &p),
(false, json!({ "set": "test", "version": "5" }), &p),
(false, json!({ "set": "test", "version": "ipv5" }), &p),
(false, json!({ "set": "test", "version": "4" }), &p),
(false, json!({ "set": "test", "version": "6" }), &p),
(false, json!({ "set": "test", "version": "46" }), &p),
// bad type
(false, json!({ "set": "test", "version": true }), &p),
// hooks //
// everything is fine really
(true, json!({ "set": "test", "hooks": [] }), &p),
(
true,
json!({ "set": "test", "hooks": ["input", "forward", "ingress", "prerouting", "output", "postrouting", "egress"] }),
&p,
),
(false, json!({ "set": "test", "hooks": ["INPUT"] }), &p),
(false, json!({ "set": "test", "hooks": ["FORWARD"] }), &p),
(
false,
json!({ "set": "test", "hooks": ["unknown_hook"] }),
&p,
),
// timeout //
(true, json!({ "set": "test", "timeout": "1m" }), &p),
(true, json!({ "set": "test", "timeout": "3 days" }), &p),
// bad
(false, json!({ "set": "test", "timeout": "3 dayz"}), &p),
(false, json!({ "set": "test", "timeout": 12 }), &p),
// target //
// anything is fine too
(true, json!({ "set": "test", "target": "drop" }), &p),
(true, json!({ "set": "test", "target": "accept" }), &p),
(true, json!({ "set": "test", "target": "return" }), &p),
(true, json!({ "set": "test", "target": "continue" }), &p),
// bad
(false, json!({ "set": "test", "target": "custom" }), &p),
(false, json!({ "set": "test", "target": "DROP" }), &p),
(false, json!({ "set": "test", "target": 11 }), &p),
(false, json!({ "set": "test", "target": ["DROP"] }), &p),
] {
let res = Plugin::default()
.load_config(
vec![],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "nftables".into(),
config: conf.clone().into(),
patterns: patterns.clone(),
}],
)
.await;
assert!(
res.is_ok() == is_ok,
"conf: {:?}, must be ok: {is_ok}, result: {:?}",
conf,
// empty Result::Ok because ActionImpl is not Debug
res.map(|_| ())
);
}
}
#[tokio::test]
async fn conf_action_merge() {
let mut plugin = Plugin::default();
let set1 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action1".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"target": "drop",
"hooks": ["input"],
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set2 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"target": "drop",
"version": "ip",
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set3 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"action": "delete",
})
.into(),
patterns: vec!["ip".into()],
};
let res = plugin
.load_config(
vec![],
vec![
// First set
set1.clone(),
// Same set, adding options, no conflict
set2.clone(),
// Same set, no new options, no conflict
set3.clone(),
// Unrelated set, so no conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "nftables".into(),
config: json!({
"set": "test2",
"target": "return",
"version": "ipv6",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_ok(), "res: {:?}", res.map(|_| ()));
// Another set with conflict is not ok
let res = plugin
.load_config(
vec![],
vec![
// First set
set1,
// Same set, adding options, no conflict
set2,
// Same set, no new options, no conflict
set3,
// Another set with conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"target": "target2",
"action": "del",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_err(), "res: {:?}", res.map(|_| ()));
}

View file

@ -0,0 +1,11 @@
[package]
name = "reaction-plugin-virtual"
version = "1.0.0"
edition = "2024"
[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true

View file

@ -0,0 +1,179 @@
use std::collections::{BTreeMap, BTreeSet};
use reaction_plugin::{
ActionConfig, ActionImpl, Exec, Hello, Line, Manifest, PluginInfo, RemoteResult, StreamConfig,
StreamImpl, Value, line::PatternLine,
};
use remoc::{rch::mpsc, rtc};
use serde::{Deserialize, Serialize};
#[cfg(test)]
mod tests;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
reaction_plugin::main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::from(["virtual".into()]),
actions: BTreeSet::from(["virtual".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
let mut ret_streams = Vec::with_capacity(streams.len());
let mut ret_actions = Vec::with_capacity(actions.len());
let mut local_streams = BTreeMap::new();
for StreamConfig {
stream_name,
stream_type,
config,
} in streams
{
if stream_type != "virtual" {
return Err("This plugin can't handle other stream types than virtual".into());
}
let (virtual_stream, receiver) = VirtualStream::new(config)?;
if local_streams.insert(stream_name, virtual_stream).is_some() {
return Err("this virtual stream has already been initialized".into());
}
ret_streams.push(StreamImpl {
stream: receiver,
standalone: false,
});
}
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if &action_type != "virtual" {
return Err("This plugin can't handle other action types than virtual".into());
}
let (mut virtual_action, tx) = VirtualAction::new(
stream_name,
filter_name,
action_name,
config,
patterns,
&local_streams,
)?;
tokio::spawn(async move { virtual_action.serve().await });
ret_actions.push(ActionImpl { tx });
}
Ok((ret_streams, ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
Ok(())
}
async fn close(self) -> RemoteResult<()> {
Ok(())
}
}
#[derive(Clone)]
struct VirtualStream {
tx: mpsc::Sender<Line>,
}
impl VirtualStream {
fn new(config: Value) -> Result<(Self, mpsc::Receiver<Line>), String> {
const CONFIG_ERROR: &str = "streams of type virtual take no options";
match config {
Value::Null => (),
Value::Object(map) => {
if !map.is_empty() {
return Err(CONFIG_ERROR.into());
}
}
_ => return Err(CONFIG_ERROR.into()),
}
let (tx, rx) = mpsc::channel(1);
Ok((Self { tx }, rx))
}
}
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
struct ActionOptions {
/// The line to send to the corresponding virtual stream, example: "ban \<ip\>"
send: String,
/// The name of the corresponding virtual stream, example: "my_stream"
to: String,
}
struct VirtualAction {
rx: mpsc::Receiver<Exec>,
send: PatternLine,
to: VirtualStream,
}
impl VirtualAction {
fn new(
stream_name: String,
filter_name: String,
action_name: String,
config: Value,
patterns: Vec<String>,
streams: &BTreeMap<String, VirtualStream>,
) -> Result<(Self, mpsc::Sender<Exec>), String> {
let options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
})?;
let send = PatternLine::new(options.send, patterns);
let stream = streams.get(&options.to).ok_or_else(|| {
format!(
"action {}.{}.{}: send \"{}\" matches no stream name",
stream_name, filter_name, action_name, options.to
)
})?;
let (tx, rx) = mpsc::channel(1);
Ok((
Self {
rx,
send,
to: stream.clone(),
},
tx,
))
}
async fn serve(&mut self) {
while let Ok(Some(exec)) = self.rx.recv().await {
let line = self.send.line(exec.match_);
// the stream end may already be closed; stop serving instead of panicking
if self.to.tx.send((line, exec.time)).await.is_err() {
break;
}
}
}
}

View file

@ -0,0 +1,322 @@
use std::time::{SystemTime, UNIX_EPOCH};
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig, Value};
use serde_json::json;
use crate::Plugin;
#[tokio::test]
async fn conf_stream() {
// Invalid type
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtu".into(),
config: Value::Null
}],
vec![]
)
.await
.is_err()
);
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null
}],
vec![]
)
.await
.is_ok()
);
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: json!({}).into(),
}],
vec![]
)
.await
.is_ok()
);
// Invalid conf: must be empty
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: json!({"key": "value" }).into(),
}],
vec![]
)
.await
.is_err()
);
}
#[tokio::test]
async fn conf_action() {
let streams = vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}];
let valid_conf = json!({ "send": "message", "to": "stream" });
let missing_send_conf = json!({ "to": "stream" });
let missing_to_conf = json!({ "send": "stream" });
let extra_attr_conf = json!({ "send": "message", "send2": "message", "to": "stream" });
let patterns = Vec::default();
// Invalid type
assert!(
Plugin::default()
.load_config(
streams.clone(),
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtu".into(),
config: Value::Null,
patterns: patterns.clone(),
}]
)
.await
.is_err()
);
assert!(
Plugin::default()
.load_config(
streams.clone(),
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: valid_conf.into(),
patterns: patterns.clone()
}]
)
.await
.is_ok()
);
for conf in [missing_send_conf, missing_to_conf, extra_attr_conf] {
assert!(
Plugin::default()
.load_config(
streams.clone(),
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: conf.clone().into(),
patterns: patterns.clone()
}]
)
.await
.is_err(),
"conf: {:?}",
conf
);
}
}
#[tokio::test]
async fn conf_send() {
// Valid to: option
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "message", "to": "stream" }).into(),
patterns: vec![],
}]
)
.await
.is_ok(),
);
// Invalid to: option
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "message", "to": "stream1" }).into(),
patterns: vec![],
}]
)
.await
.is_err(),
);
}
// Let's allow empty streams for now.
// I guess it can be useful to have manual only actions.
//
// #[tokio::test]
// async fn conf_empty_stream() {
// assert!(
// Plugin::default()
// .load_config(
// vec![StreamConfig {
// stream_name: "stream".into(),
// stream_type: "virtual".into(),
// config: Value::Null,
// }],
// vec![],
// )
// .await
// .is_err(),
// );
// }
#[tokio::test]
async fn run_simple() {
let mut plugin = Plugin::default();
let (mut streams, mut actions) = plugin
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "message <test>", "to": "stream" }).into(),
patterns: vec!["test".into()],
}],
)
.await
.unwrap();
let mut stream = streams.pop().unwrap();
let action = actions.pop().unwrap();
assert!(!stream.standalone);
for m in ["test1", "test2", "test3", " a a a aa a a"] {
let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok()
);
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
(format!("message {m}"), time),
);
}
}
#[tokio::test]
async fn run_two_actions() {
let mut plugin = Plugin::default();
let (mut streams, mut actions) = plugin
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "send <a>", "to": "stream" }).into(),
patterns: vec!["a".into(), "b".into()],
},
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "<b> send", "to": "stream" }).into(),
patterns: vec!["a".into(), "b".into()],
},
],
)
.await
.unwrap();
let mut stream = streams.pop().unwrap();
assert!(!stream.standalone);
let action2 = actions.pop().unwrap();
let action1 = actions.pop().unwrap();
let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
assert!(
action1
.tx
.send(Exec {
match_: vec!["aa".into(), "bb".into()],
time,
})
.await
.is_ok(),
);
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
("send aa".into(), time),
);
assert!(
action2
.tx
.send(Exec {
match_: vec!["aa".into(), "bb".into()],
time,
})
.await
.is_ok(),
);
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
("bb send".into(), time),
);
}


@ -0,0 +1,20 @@
[package]
name = "reaction-plugin"
version = "1.0.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "logs", "monitoring", "plugin"]
categories = ["security"]
description = "Plugin interface for reaction, a daemon that scans logs and takes action (alternative to fail2ban)"
[dependencies]
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tokio.features = ["io-std", "signal"]
tokio-util.workspace = true
tokio-util.features = ["rt"]
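The dotted-key entries above are TOML shorthand; the same dependency table can be written with inline tables, which may be more familiar:

```toml
[dependencies]
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio = { workspace = true, features = ["io-std", "signal"] }
tokio-util = { workspace = true, features = ["rt"] }
```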


@ -0,0 +1,599 @@
//! This crate defines the API between reaction's core and plugins.
//!
//! Plugins must be written in Rust, for now.
//!
//! This documentation assumes the reader has some knowledge of Rust.
//! However, if you find that something is unclear, don't hesitate to
//! [ask for help](https://framagit.org/ppom/reaction/#help), even if you're new to Rust.
//!
//! To implement a plugin, provide an implementation of [`PluginInfo`], which serves as
//! the plugin's entrypoint.
//! It can define `0` to `n` custom stream and action types.
//!
//! ## Note on reaction-plugin API stability
//!
//! This is the v1 of reaction's plugin interface.
//! It's quite efficient and complete, but it has the big drawback of being Rust-only and [`tokio`]-only.
//!
//! In the future, I'd like to define a language-agnostic interface, which will be a major breaking change in the API.
//! However, I'll try my best to reduce the necessary code changes for plugins that use this v1.
//!
//! ## Naming & calling conventions
//!
//! Your plugin should be named `reaction-plugin-$NAME`, e.g. `reaction-plugin-postgresql`.
//! It will be invoked with a single positional argument, `serve`:
//! ```bash
//! reaction-plugin-$NAME serve
//! ```
//! This can be useful if you want to provide CLI functionality to your users,
//! so you can distinguish between a human user and reaction.
//!
//! ### State directory
//!
//! It will be executed in its own directory, in which it should have write access.
//! The directory is `$reaction_state_directory/plugin_data/$NAME`.
//! reaction's [state_directory](https://reaction.ppom.me/reference.html#state_directory)
//! defaults to its working directory, which is `/var/lib/reaction` in most setups.
//!
//! So your plugin directory will most often be `/var/lib/reaction/plugin_data/$NAME`,
//! but the plugin shouldn't rely on that path and should use the current working directory instead.
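Since the working directory is the plugin's own state directory, persisting state reduces to relative-path I/O. A minimal sketch (the `state.txt` file name and helper names are made up for illustration):

```rust
use std::fs;
use std::io;

// Persist and reload a small piece of plugin state relative to the current
// working directory, which reaction sets to the plugin's own data directory.
// The file name `state.txt` is hypothetical.
fn save_state(data: &str) -> io::Result<()> {
    fs::write("state.txt", data)
}

fn load_state() -> io::Result<String> {
    fs::read_to_string("state.txt")
}
```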
//!
//! ## Communication
//!
//! Communication between the plugin and reaction is based on [`remoc`], which multiplexes
//! channels and remote object/function/trait calls over a single transport channel.
//! The transport's read and write channels are stdin and stdout, so you shouldn't use them for anything else.
//!
//! [`remoc`] builds upon [`tokio`], so you'll need to use tokio too.
//!
//! ### Errors
//!
//! Errors during:
//! - config loading in [`PluginInfo::load_config`]
//! - startup in [`PluginInfo::start`]
//!
//! should be returned to reaction through the function's return value, so that reaction can abort startup.
//!
//! During normal runtime, after the plugin has loaded its config and started, and before reaction quits, there is no *rusty* way to send errors to reaction.
//! Instead, errors can be printed to stderr.
//! They'll be captured line by line and re-printed by reaction, with the plugin name prepended.
//!
//! A line can start with `DEBUG `, `INFO `, `WARN `, `ERROR `.
//! If it starts with none of the above, the line is assumed to be an error.
//!
//! Example:
//! Those lines:
//! ```log
//! WARN This is an official warning from the plugin
//! Freeeee errrooooorrr
//! ```
//! Will become:
//! ```log
//! WARN plugin test: This is an official warning from the plugin
//! ERROR plugin test: Freeeee errrooooorrr
//! ```
//!
//! Plugins should not exit when there is an error: reaction quits only when told to do so,
//! or if all its streams exit, and won't retry starting a failing plugin or stream.
//! Please only exit if you're in a 100% failing state.
//! It's considered better to continue operating in a degraded state than to exit.
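The stderr convention above can be restated as a small function. This is an illustrative sketch of how a line could be classified, not reaction's actual implementation:

```rust
// Classify one stderr line from a plugin, following the documented
// convention: a recognized level prefix is kept, anything else is
// treated as an error. Returns (level, message).
fn classify_stderr_line(line: &str) -> (&'static str, &str) {
    for level in ["DEBUG", "INFO", "WARN", "ERROR"] {
        if let Some(msg) = line.strip_prefix(level).and_then(|r| r.strip_prefix(' ')) {
            return (level, msg);
        }
    }
    // No recognized prefix: the whole line is assumed to be an error.
    ("ERROR", line)
}
```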
//!
//! ## Getting started
//!
//! If you don't have Rust installed already, follow the [*Getting Started* documentation](https://rust-lang.org/learn/get-started/)
//! to get the Rust build tools and learn about editor support.
//!
//! Then create a new repository with cargo:
//!
//! ```bash
//! cargo new reaction-plugin-$NAME
//! cd reaction-plugin-$NAME
//! ```
//!
//! Add required dependencies:
//!
//! ```bash
//! cargo add reaction-plugin tokio
//! ```
//!
//! Replace `src/main.rs` with those contents:
//!
//! ```ignore
//! use reaction_plugin::PluginInfo;
//!
//! #[tokio::main]
//! async fn main() {
//! let plugin = MyPlugin::default();
//! reaction_plugin::main_loop(plugin).await;
//! }
//!
//! #[derive(Default)]
//! struct MyPlugin {}
//!
//! impl PluginInfo for MyPlugin {
//! // ...
//! }
//! ```
//!
//! Your IDE should now offer to implement the missing members of the [`PluginInfo`] trait.
//! Your journey starts!
//!
//! ## Examples
//!
//! Core plugins can be found here: <https://framagit.org/ppom/reaction/-/tree/main/plugins>.
//!
//! - The "virtual" plugin is the simplest and can serve as a good complete example that links custom stream types and custom action types.
//! - The "ipset" plugin is a good example of an action-only plugin.
use std::{
collections::{BTreeMap, BTreeSet},
env::args,
error::Error,
fmt::Display,
process::exit,
time::Duration,
};
use remoc::{
Connect, rch,
rtc::{self, Server},
};
use serde::{Deserialize, Serialize};
use serde_json::{Number, Value as JValue};
use tokio::io::{stdin, stdout};
pub mod line;
pub mod shutdown;
pub mod time;
/// The only trait that **must** be implemented by a plugin.
/// It provides lists of stream, filter and action types implemented by a dynamic plugin.
#[rtc::remote]
pub trait PluginInfo {
/// Return the manifest of the plugin.
/// This should not be dynamic: it must always return the same manifest.
///
/// Example implementation:
/// ```ignore
/// Ok(Manifest {
/// hello: Hello::new(),
/// streams: BTreeSet::from(["mystreamtype".into()]),
/// actions: BTreeSet::from(["myactiontype".into()]),
/// })
/// ```
///
/// First function called.
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError>;
/// Load all plugin stream and action configurations.
/// Must error if config is invalid.
///
/// The plugin should not start running side-effecting commands here:
/// it should be OK to quit without cleanup at this point.
///
/// Each [`StreamConfig`] from the `streams` arg should result in a corresponding [`StreamImpl`] returned, in the same order.
/// Each [`ActionConfig`] from the `actions` arg should result in a corresponding [`ActionImpl`] returned, in the same order.
///
/// Function called after [`PluginInfo::manifest`].
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)>;
/// Notify the plugin that setup is finished. This is the last chance to report an error that will make reaction abort startup.
/// All initialization (opening remote connections, starting streams, etc) should happen here.
///
/// Function called after [`PluginInfo::load_config`].
async fn start(&mut self) -> RemoteResult<()>;
/// Notify the plugin that reaction is quitting and that the plugin should quit too.
/// The plugin will receive SIGTERM a few seconds later,
/// and SIGKILL a few seconds after that.
///
/// Function called after [`PluginInfo::start`], when reaction is quitting.
async fn close(mut self) -> RemoteResult<()>;
}
/// The config for one Stream of a type advertised by this plugin.
///
/// For example this user config:
/// ```jsonnet
/// {
/// streams: {
/// mystream: {
/// type: "mystreamtype",
/// options: {
/// key: "value",
/// num: 3,
/// },
/// // filters: ...
/// },
/// },
/// }
/// ```
///
/// would result in the following `StreamConfig`:
///
/// ```ignore
/// StreamConfig {
/// stream_name: "mystream",
/// stream_type: "mystreamtype",
/// config: Value::Object(BTreeMap::from([
/// ("key", Value::String("value")),
/// ("num", Value::Integer(3)),
/// ])),
/// }
/// ```
///
/// Don't hesitate to take advantage of [`serde_json::from_value`], to deserialize the [`Value`] into a Rust struct:
///
/// ```ignore
/// #[derive(Deserialize)]
/// struct MyStreamOptions {
/// key: String,
/// num: i64,
/// }
///
/// fn validate_config(stream_config: Value) -> Result<MyStreamOptions, serde_json::Error> {
/// serde_json::from_value(stream_config.into())
/// }
/// ```
#[derive(Serialize, Deserialize, Clone)]
pub struct StreamConfig {
pub stream_name: String,
pub stream_type: String,
pub config: Value,
}
/// The config for one Action of a type advertised by this plugin.
///
/// For example this user config:
/// ```jsonnet
/// {
/// streams: {
/// mystream: {
/// // ...
/// filters: {
/// myfilter: {
/// // ...
/// actions: {
/// myaction: {
/// type: "myactiontype",
/// options: {
/// boolean: true,
/// array: ["item"],
/// },
/// },
/// },
/// },
/// },
/// },
/// },
/// }
/// ```
///
/// would result in the following `ActionConfig`:
///
/// ```ignore
/// ActionConfig {
/// action_name: "myaction",
/// action_type: "myactiontype",
/// config: Value::Object(BTreeMap::from([
/// ("boolean", Value::Boolean(true)),
/// ("array", Value::Array([Value::String("item")])),
/// ])),
/// }
/// ```
///
/// Don't hesitate to take advantage of [`serde_json::from_value`], to deserialize the [`Value`] into a Rust struct:
///
/// ```ignore
/// #[derive(Deserialize)]
/// struct MyActionOptions {
/// boolean: bool,
/// array: Vec<String>,
/// }
///
/// fn validate_config(action_config: Value) -> Result<MyActionOptions, serde_json::Error> {
/// serde_json::from_value(action_config.into())
/// }
/// ```
#[derive(Serialize, Deserialize, Clone)]
pub struct ActionConfig {
pub stream_name: String,
pub filter_name: String,
pub action_name: String,
pub action_type: String,
pub config: Value,
pub patterns: Vec<String>,
}
/// Mandatory announcement of a plugin's protocol version, stream and action types.
#[derive(Serialize, Deserialize)]
pub struct Manifest {
/// Protocol version.
/// Just use the [`Hello::new`] constructor, which uses this crate's current version.
pub hello: Hello,
/// Stream types that should be made available to reaction users
///
/// ```jsonnet
/// {
/// streams: {
/// my_stream: {
/// type: "..."
/// # ↑ all those exposed types
/// }
/// }
/// }
/// ```
pub streams: BTreeSet<String>,
/// Action types that should be made available to reaction users
///
/// ```jsonnet
/// {
/// streams: {
/// mystream: {
/// filters: {
/// myfilter: {
/// actions: {
/// myaction: {
/// type: "myactiontype",
/// # ↑ all those exposed types
/// },
/// },
/// },
/// },
/// },
/// },
/// }
/// ```
pub actions: BTreeSet<String>,
}
#[derive(Default, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct Hello {
/// Major version of the protocol.
/// An increment means a breaking change.
pub version_major: u32,
/// Minor version of the protocol.
/// An increment means reaction core can still handle plugins built against older minor versions.
pub version_minor: u32,
}
impl Hello {
/// Constructor that fills a [`Hello`] struct with [`crate`]'s version.
/// You should use this in your plugin [`Manifest`].
pub fn new() -> Hello {
Hello {
version_major: env!("CARGO_PKG_VERSION_MAJOR").parse().unwrap(),
version_minor: env!("CARGO_PKG_VERSION_MINOR").parse().unwrap(),
}
}
/// Used by the reaction daemon. Checks compatibility between two versions:
/// major versions must be the same between the daemon and the plugin,
/// and the daemon's minor version must be greater than or equal to the plugin's.
pub fn is_compatible(server: &Hello, plugin: &Hello) -> std::result::Result<(), String> {
if server.version_major == plugin.version_major
&& server.version_minor >= plugin.version_minor
{
Ok(())
} else if plugin.version_major > server.version_major
|| (plugin.version_major == server.version_major
&& plugin.version_minor > server.version_minor)
{
Err("consider upgrading reaction".into())
} else {
Err("consider upgrading the plugin".into())
}
}
}
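The compatibility rule can be condensed into a standalone sketch, restating `Hello::is_compatible` outside the crate for illustration (the `(major, minor)` tuples are a simplification of the `Hello` struct):

```rust
// Restatement of the version-compatibility rule: majors must match,
// and the daemon's minor must be at least the plugin's minor.
fn compatible(server: (u32, u32), plugin: (u32, u32)) -> bool {
    server.0 == plugin.0 && server.1 >= plugin.1
}
```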
/// A clone of [`serde_json::Value`].
/// Implements From & Into [`serde_json::Value`].
#[derive(Serialize, Deserialize, Clone)]
pub enum Value {
Null,
Bool(bool),
Integer(i64),
Float(f64),
String(String),
Array(Vec<Value>),
Object(BTreeMap<String, Value>),
}
impl From<JValue> for Value {
fn from(value: serde_json::Value) -> Self {
match value {
JValue::Null => Value::Null,
JValue::Bool(b) => Value::Bool(b),
JValue::Number(number) => {
if let Some(number) = number.as_i64() {
Value::Integer(number)
} else if let Some(number) = number.as_f64() {
Value::Float(number)
} else {
Value::Null
}
}
JValue::String(s) => Value::String(s),
JValue::Array(v) => Value::Array(v.into_iter().map(|e| e.into()).collect()),
JValue::Object(m) => Value::Object(m.into_iter().map(|(k, v)| (k, v.into())).collect()),
}
}
}
impl From<Value> for JValue {
fn from(value: Value) -> Self {
match value {
Value::Null => JValue::Null,
Value::Bool(v) => JValue::Bool(v),
Value::Integer(v) => JValue::Number(v.into()),
// from_f64 returns None for NaN and infinite values;
// map those to Null instead of panicking
Value::Float(v) => Number::from_f64(v).map(JValue::Number).unwrap_or(JValue::Null),
Value::String(v) => JValue::String(v),
Value::Array(v) => JValue::Array(v.into_iter().map(|e| e.into()).collect()),
Value::Object(m) => JValue::Object(m.into_iter().map(|(k, v)| (k, v.into())).collect()),
}
}
}
/// Represents a Stream handled by a plugin on reaction core's side.
///
/// During [`PluginInfo::load_config`], the plugin should create a [`remoc::rch::mpsc::channel`] of [`Line`].
/// It will keep the sending side for itself and put the receiving side in a [`StreamImpl`].
///
/// The plugin should start sending [`Line`]s in the channel only after [`PluginInfo::start`] has been called by reaction core.
#[derive(Debug, Serialize, Deserialize)]
pub struct StreamImpl {
pub stream: rch::mpsc::Receiver<Line>,
/// Whether this stream works standalone, or whether it needs other streams or actions to feed it.
/// Defaults to true.
/// When `false`, reaction will exit if it's the last one standing.
#[serde(default = "_true")]
pub standalone: bool,
}
fn _true() -> bool {
true
}
/// Messages passed from the [`StreamImpl`] of a plugin to reaction core
pub type Line = (String, Duration);
// // Filters
// // For now, plugins can't handle custom filter implementations.
// #[derive(Serialize, Deserialize)]
// pub struct FilterImpl {
// pub stream: rch::lr::Sender<Exec>,
// }
// #[derive(Serialize, Deserialize)]
// pub struct Match {
// pub match_: String,
// pub result: rch::oneshot::Sender<bool>,
// }
/// Represents an Action handled by a plugin on reaction core's side.
///
/// During [`PluginInfo::load_config`], the plugin should create a [`remoc::rch::mpsc::channel`] of [`Exec`].
/// It will keep the receiving side for itself and put the sending side in an [`ActionImpl`].
///
/// The plugin will start receiving [`Exec`]s in the channel from reaction only after [`PluginInfo::start`] has been called by reaction core.
#[derive(Clone, Serialize, Deserialize)]
pub struct ActionImpl {
pub tx: rch::mpsc::Sender<Exec>,
}
/// A [trigger](https://reaction.ppom.me/reference.html#trigger) of the Action, sent by reaction core to the plugin.
///
/// The plugin should perform the configured action for each received [`Exec`].
///
/// Any error during its execution should be logged to stderr; see [`crate#Errors`] for error handling recommendations.
#[derive(Serialize, Deserialize)]
pub struct Exec {
pub match_: Vec<String>,
pub time: Duration,
}
/// The main loop for a plugin.
///
/// Bootstraps the communication with reaction core on the process' stdin and stdout,
/// then holds the connection and maintains the plugin in a server state.
///
/// Your main function should only create a struct that implements [`PluginInfo`]
/// and then call [`main_loop`]:
/// ```ignore
/// #[tokio::main]
/// async fn main() {
/// let plugin = MyPlugin::default();
/// reaction_plugin::main_loop(plugin).await;
/// }
/// ```
pub async fn main_loop<T: PluginInfo + Send + Sync + 'static>(plugin_info: T) {
// First check that we're called by reaction
let mut args = args();
// skip 0th argument
let _skip = args.next();
if args.next().is_none_or(|arg| arg != "serve") {
eprintln!("This plugin is not meant to be called as-is.");
eprintln!(
"reaction daemon starts plugins itself and communicates with them on stdin, stdout and stderr."
);
eprintln!("See the doc on plugin configuration: https://reaction.ppom.me/plugins/");
exit(1);
} else {
let (conn, mut tx, _rx): (
_,
remoc::rch::base::Sender<PluginInfoClient>,
remoc::rch::base::Receiver<()>,
) = Connect::io(remoc::Cfg::default(), stdin(), stdout())
.await
.unwrap();
let (server, client) = PluginInfoServer::new(plugin_info, 1);
let (res1, (_, res2), res3) = tokio::join!(tx.send(client), server.serve(), conn);
let mut exit_code = 0;
if let Err(err) = res1 {
eprintln!("ERROR could not send plugin info to reaction: {err}");
exit_code = 1;
}
if let Err(err) = res2 {
eprintln!("ERROR could not launch plugin service for reaction: {err}");
exit_code = 2;
}
if let Err(err) = res3 {
eprintln!("ERROR connection error with reaction: {err}");
exit_code = 3;
}
exit(exit_code);
}
}
// Errors
pub type RemoteResult<T> = Result<T, RemoteError>;
/// reaction-plugin's Error type.
#[derive(Debug, Serialize, Deserialize)]
pub enum RemoteError {
/// A connection error that originates from [`remoc`], the crate used for communication on the plugin's `stdin`/`stdout`.
///
/// You should not instantiate this type of error yourself.
Remoc(rtc::CallError),
/// A free-form String for application-specific errors.
///
/// Instantiate this variant yourself only for errors encountered at startup or shutdown.
///
/// Otherwise, any error during the plugin's runtime should be logged to stderr; see [`crate#Errors`] for error handling recommendations.
Plugin(String),
}
impl Display for RemoteError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
RemoteError::Remoc(call_error) => write!(f, "communication error: {call_error}"),
RemoteError::Plugin(err) => write!(f, "{err}"),
}
}
}
impl Error for RemoteError {}
impl From<String> for RemoteError {
fn from(value: String) -> Self {
Self::Plugin(value)
}
}
impl From<&str> for RemoteError {
fn from(value: &str) -> Self {
Self::Plugin(value.into())
}
}
impl From<rtc::CallError> for RemoteError {
fn from(value: rtc::CallError) -> Self {
Self::Remoc(value)
}
}


@ -0,0 +1,237 @@
//! Helper module for templated lines (e.g. `bad password for <ip>`), like in Stream's and Action's `cmd`.
//!
//! Corresponding reaction core settings:
//! - [Stream's `cmd`](https://reaction.ppom.me/reference.html#cmd)
//! - [Action's `cmd`](https://reaction.ppom.me/reference.html#cmd-1)
//!
#[derive(Debug, PartialEq, Eq)]
enum SendItem {
Index(usize),
Str(String),
}
impl SendItem {
fn min_size(&self) -> usize {
match self {
Self::Index(_) => 0,
Self::Str(s) => s.len(),
}
}
}
/// Helper struct that instantiates a template line containing patterns from a match.
///
/// Useful when you let the user reconstruct lines from an action, like in reaction's native actions and in the virtual plugin:
/// ```yaml
/// actions:
/// native:
/// cmd: ["iptables", "...", "<ip>"]
///
/// virtual:
/// type: virtual
/// options:
/// send: "<ip>: bad password on user <user>"
/// to: "my_virtual_stream"
/// ```
///
/// Usage example:
/// ```
/// # use reaction_plugin::line::PatternLine;
/// #
/// let template = "<ip>: bad password on user <user>".to_string();
/// let patterns = vec!["ip".to_string(), "user".to_string()];
/// let pattern_line = PatternLine::new(template, patterns);
///
/// assert_eq!(
/// pattern_line.line(vec!["1.2.3.4".to_string(), "root".to_string()]),
/// "1.2.3.4: bad password on user root".to_string(),
/// );
/// ```
///
/// You can find full examples in those plugins:
/// `reaction-plugin-virtual`,
/// `reaction-plugin-cluster`.
///
#[derive(Debug)]
pub struct PatternLine {
line: Vec<SendItem>,
min_size: usize,
}
impl PatternLine {
/// Construct [`PatternLine`] from a template line and the list of patterns of the underlying [Filter](https://reaction.ppom.me/reference.html#filter).
///
/// This list of patterns comes from [`super::ActionConfig`].
pub fn new(template: String, patterns: Vec<String>) -> Self {
let line = Self::_from(patterns, Vec::from([SendItem::Str(template)]));
Self {
min_size: line.iter().map(SendItem::min_size).sum(),
line,
}
}
fn _from(mut patterns: Vec<String>, acc: Vec<SendItem>) -> Vec<SendItem> {
match patterns.pop() {
None => acc,
Some(pattern) => {
let enclosed_pattern = format!("<{pattern}>");
let acc = acc
.into_iter()
.flat_map(|item| match &item {
SendItem::Index(_) => vec![item],
SendItem::Str(str) => match str.find(&enclosed_pattern) {
Some(i) => {
let pattern_index = patterns.len();
let mut ret = vec![];
let (left, mid) = str.split_at(i);
if !left.is_empty() {
ret.push(SendItem::Str(left.into()))
}
ret.push(SendItem::Index(pattern_index));
if mid.len() > enclosed_pattern.len() {
let (_, right) = mid.split_at(enclosed_pattern.len());
ret.push(SendItem::Str(right.into()))
}
ret
}
None => vec![item],
},
})
.collect();
Self::_from(patterns, acc)
}
}
}
pub fn line(&self, match_: Vec<String>) -> String {
let mut res = String::with_capacity(self.min_size);
for item in &self.line {
match item {
SendItem::Index(i) => {
if let Some(element) = match_.get(*i) {
res.push_str(element);
}
}
SendItem::Str(str) => res.push_str(str),
}
}
res
}
}
#[cfg(test)]
mod tests {
use crate::line::{PatternLine, SendItem};
#[test]
fn line_0_pattern() {
let msg = "my message".to_string();
let line = PatternLine::new(msg.clone(), vec![]);
assert_eq!(line.line, vec![SendItem::Str(msg.clone())]);
assert_eq!(line.min_size, msg.len());
assert_eq!(line.line(vec![]), msg.clone());
}
#[test]
fn line_1_pattern() {
let patterns = vec![
"ignored".into(),
"oh".into(),
"ignored".into(),
"my".into(),
"test".into(),
];
let matches = vec!["yay", "oh", "my", "test", "<oh>", "<my>", "<test>"];
let tests = [
(
"<oh> my test",
1,
vec![SendItem::Index(1), SendItem::Str(" my test".into())],
vec![
("yay", "yay my test"),
("oh", "oh my test"),
("my", "my my test"),
("test", "test my test"),
("<oh>", "<oh> my test"),
("<my>", "<my> my test"),
("<test>", "<test> my test"),
],
),
(
"oh <my> test",
3,
vec![
SendItem::Str("oh ".into()),
SendItem::Index(3),
SendItem::Str(" test".into()),
],
vec![
("yay", "oh yay test"),
("oh", "oh oh test"),
("my", "oh my test"),
("test", "oh test test"),
("<oh>", "oh <oh> test"),
("<my>", "oh <my> test"),
("<test>", "oh <test> test"),
],
),
(
"oh my <test>",
4,
vec![SendItem::Str("oh my ".into()), SendItem::Index(4)],
vec![
("yay", "oh my yay"),
("oh", "oh my oh"),
("my", "oh my my"),
("test", "oh my test"),
("<oh>", "oh my <oh>"),
("<my>", "oh my <my>"),
("<test>", "oh my <test>"),
],
),
];
for (msg, index, expected_pl, lines) in tests {
let pattern_line = PatternLine::new(msg.to_string(), patterns.clone());
assert_eq!(pattern_line.line, expected_pl);
for (match_element, line) in lines {
for match_default in &matches {
let mut match_ = vec![
match_default.to_string(),
match_default.to_string(),
match_default.to_string(),
match_default.to_string(),
match_default.to_string(),
];
match_[index] = match_element.to_string();
assert_eq!(
pattern_line.line(match_.clone()),
line,
"match: {match_:?}, pattern_line: {pattern_line:?}"
);
}
}
}
}
#[test]
fn line_2_pattern() {
let pattern_line = PatternLine::new("<a> ; <b>".into(), vec!["a".into(), "b".into()]);
let matches = ["a", "b", "ab", "<a>", "<b>"];
for a in &matches {
for b in &matches {
assert_eq!(
pattern_line.line(vec![a.to_string(), b.to_string()]),
format!("{a} ; {b}"),
);
}
}
}
}


@ -0,0 +1,162 @@
//! Helper module that provides structures to ease shutdown when running multiple tokio tasks.
//!
//! It defines a [`ShutdownController`], which keeps track of ongoing tasks, asks them to shut down and waits for all of them to quit.
//!
//! You can have it as an attribute of your plugin struct.
//! ```ignore
//! struct MyPlugin {
//! shutdown: ShutdownController
//! }
//! ```
//!
//! You can then give a [`ShutdownToken`] to other tasks when creating them:
//!
//! ```ignore
//! impl PluginInfo for MyPlugin {
//! async fn start(&mut self) -> RemoteResult<()> {
//! let token = self.shutdown.token();
//!
//! tokio::spawn(async move {
//! token.wait().await;
//! eprintln!("DEBUG shutdown asked to quit, now quitting");
//! });
//! Ok(())
//! }
//! }
//! ```
//!
//! On closing, calling [`ShutdownController::ask_shutdown`] will inform all tasks waiting on [`ShutdownToken::wait`] that it's time to leave.
//! Then we can wait for [`ShutdownController::wait_all_task_shutdown`] to complete.
//!
//! ```ignore
//! impl PluginInfo for MyPlugin {
//! async fn close(self) -> RemoteResult<()> {
//! self.shutdown.ask_shutdown();
//! self.shutdown.wait_all_task_shutdown().await;
//! Ok(())
//! }
//! }
//! ```
//!
//! [`ShutdownDelegate::handle_quit_signals`] handles SIGHUP, SIGINT and SIGTERM by gracefully shutting down tasks.
use tokio::signal::unix::{SignalKind, signal};
use tokio_util::{
sync::{CancellationToken, WaitForCancellationFuture},
task::task_tracker::{TaskTracker, TaskTrackerToken},
};
/// Keeps track of ongoing tasks, asks them to shut down and waits for all of them to quit.
/// Thin wrapper around [`tokio_util::sync::CancellationToken`] and [`tokio_util::task::task_tracker::TaskTracker`].
#[derive(Default, Clone)]
pub struct ShutdownController {
shutdown_notifyer: CancellationToken,
task_tracker: TaskTracker,
}
impl ShutdownController {
pub fn new() -> Self {
Self::default()
}
/// Ask all tasks to quit.
pub fn ask_shutdown(&self) {
self.shutdown_notifyer.cancel();
self.task_tracker.close();
}
/// Wait for all tasks to quit.
/// This future may resolve even if [`ShutdownController::ask_shutdown`] was never
/// called, if all tasks quit by themselves.
pub async fn wait_all_task_shutdown(self) {
self.task_tracker.close();
self.task_tracker.wait().await;
}
/// Returns a new shutdown token, to be held by a task.
pub fn token(&self) -> ShutdownToken {
ShutdownToken::new(self.shutdown_notifyer.clone(), self.task_tracker.token())
}
/// Returns a [`ShutdownDelegate`], which is able to ask for shutdown,
/// without counting as a task that needs to be awaited.
pub fn delegate(&self) -> ShutdownDelegate {
ShutdownDelegate(self.shutdown_notifyer.clone())
}
/// Returns a future that will resolve only when a shutdown request happened.
pub fn wait(&self) -> WaitForCancellationFuture<'_> {
self.shutdown_notifyer.cancelled()
}
}
/// Permits to ask for shutdown, without counting as a task that needs to be awaited.
pub struct ShutdownDelegate(CancellationToken);
impl ShutdownDelegate {
/// Ask all tasks to quit.
pub fn ask_shutdown(&self) {
self.0.cancel();
}
/// Ensure [`Self::ask_shutdown`] is called whenever we receive SIGHUP,
/// SIGTERM or SIGINT. Spawns a task that consumes self.
pub fn handle_quit_signals(self) -> Result<(), String> {
let err_str = |err| format!("could not register signal: {err}");
let mut sighup = signal(SignalKind::hangup()).map_err(err_str)?;
let mut sigint = signal(SignalKind::interrupt()).map_err(err_str)?;
let mut sigterm = signal(SignalKind::terminate()).map_err(err_str)?;
tokio::spawn(async move {
let signal = tokio::select! {
_ = sighup.recv() => "SIGHUP",
_ = sigint.recv() => "SIGINT",
_ = sigterm.recv() => "SIGTERM",
};
eprintln!("received {signal}, closing...");
self.ask_shutdown();
});
Ok(())
}
}
/// Created by a [`ShutdownController`].
/// Serves two purposes:
///
/// - Wait for a shutdown request to happen with [`Self::wait`]
/// - Keep track of the current task. While this token is held,
/// [`ShutdownController::wait_all_task_shutdown`] will block.
#[derive(Clone)]
pub struct ShutdownToken {
shutdown_notifyer: CancellationToken,
_task_tracker_token: TaskTrackerToken,
}
impl ShutdownToken {
fn new(shutdown_notifyer: CancellationToken, _task_tracker_token: TaskTrackerToken) -> Self {
Self {
shutdown_notifyer,
_task_tracker_token,
}
}
/// Returns underlying [`CancellationToken`] and [`TaskTrackerToken`], consuming self.
pub fn split(self) -> (CancellationToken, TaskTrackerToken) {
(self.shutdown_notifyer, self._task_tracker_token)
}
/// Returns a future that will resolve only when a shutdown request happened.
pub fn wait(&self) -> WaitForCancellationFuture<'_> {
self.shutdown_notifyer.cancelled()
}
/// Returns true if the shutdown request happened
pub fn is_shutdown(&self) -> bool {
self.shutdown_notifyer.is_cancelled()
}
/// Ask all tasks to quit.
pub fn ask_shutdown(&self) {
self.shutdown_notifyer.cancel();
}
}


@ -0,0 +1,76 @@
//! This module provides [`parse_duration`], which parses durations in reaction's format (e.g. `6h`, `3 days`).
//!
//! Like in those reaction core settings:
//! - [Filters' `retryperiod`](https://reaction.ppom.me/reference.html#retryperiod)
//! - [Actions' `after`](https://reaction.ppom.me/reference.html#after).
use std::time::Duration;
/// Parses the `&str` argument as a [`Duration`].
/// Returns `Ok(Duration)` if successful, or `Err(String)` otherwise.
///
/// Format is defined as follows: `<integer> <unit>`
/// - whitespace between the integer and unit is optional
/// - integer must be positive (>= 0)
/// - unit can be one of:
/// - `ms` / `millis` / `millisecond` / `milliseconds`
/// - `s` / `sec` / `secs` / `second` / `seconds`
/// - `m` / `min` / `mins` / `minute` / `minutes`
/// - `h` / `hour` / `hours`
/// - `d` / `day` / `days`
pub fn parse_duration(d: &str) -> Result<Duration, String> {
let d_trimmed = d.trim();
let chars = d_trimmed.as_bytes();
let mut value = 0;
let mut i = 0;
while i < chars.len() && chars[i].is_ascii_digit() {
value = value * 10 + (chars[i] - b'0') as u32;
i += 1;
}
if i == 0 {
return Err(format!("duration '{}' doesn't start with digits", d));
}
let ok_as = |func: fn(u64) -> Duration| -> Result<_, String> { Ok(func(value as u64)) };
match d_trimmed[i..].trim() {
"ms" | "millis" | "millisecond" | "milliseconds" => ok_as(Duration::from_millis),
"s" | "sec" | "secs" | "second" | "seconds" => ok_as(Duration::from_secs),
"m" | "min" | "mins" | "minute" | "minutes" => ok_as(Duration::from_mins),
"h" | "hour" | "hours" => ok_as(Duration::from_hours),
"d" | "day" | "days" => ok_as(|d: u64| Duration::from_hours(d * 24)),
unit => Err(format!(
"unit {} not recognised. must be one of ms/millis/milliseconds, s/sec/seconds, m/min/minutes, h/hours, d/days",
unit
)),
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn char_conversion() {
assert_eq!(b'9' - b'0', 9);
}
#[test]
fn parse_duration_test() {
assert_eq!(parse_duration("1s"), Ok(Duration::from_secs(1)));
assert_eq!(parse_duration("12s"), Ok(Duration::from_secs(12)));
assert_eq!(parse_duration(" 12 secs "), Ok(Duration::from_secs(12)));
assert_eq!(parse_duration("2m"), Ok(Duration::from_mins(2)));
assert_eq!(parse_duration("6 hours"), Ok(Duration::from_hours(6)));
assert_eq!(parse_duration("1d"), Ok(Duration::from_hours(1 * 24)));
assert_eq!(parse_duration("365d"), Ok(Duration::from_hours(365 * 24)));
assert!(parse_duration("d 3").is_err());
assert!(parse_duration("d3").is_err());
assert!(parse_duration("3da").is_err());
assert!(parse_duration("3_days").is_err());
assert!(parse_duration("_3d").is_err());
assert!(parse_duration("3 3d").is_err());
assert!(parse_duration("3.3d").is_err());
}
}


@ -1,16 +0,0 @@
package main
import (
"framagit.org/ppom/reaction/app"
)
func main() {
app.Main(version, commit)
}
var (
// Must be passed when building
// go build -ldflags "-X app.commit XXX -X app.version XXX"
version string
commit string
)

release.py Normal file
@ -0,0 +1,360 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: with ps; [ requests ])" -p debian-devscripts git minisign docker cargo-deb
import argparse
import http.client
import json
import os
import shutil
import subprocess
import sys
import tempfile
def run_command(args, **kwargs):
print(f"\033[36mCMD: {args}\033[0m")
cmd = subprocess.run(args, **kwargs)
if cmd.returncode != 0:
print(f"\033[31mCMD failed with exit code {cmd.returncode}\033[0m")
sys.exit(1)
return cmd
def main():
# CLI arguments
parser = argparse.ArgumentParser(description="create a reaction release")
parser.add_argument(
"-p",
"--publish",
action="store_true",
help="publish a release. else build only",
)
args = parser.parse_args()
root_dir = os.getcwd()
# Git tag
cmd = run_command(
["git", "tag", "--sort=-creatordate"], capture_output=True, text=True
)
tag = ""
try:
tag = cmd.stdout.strip().split("\n")[0]
except Exception:
pass
if tag == "":
print("could not retrieve last git tag.")
sys.exit(1)
# Ask user
if (
args.publish
and input(
f"We will create a release for tag {tag}. Do you want to continue? (y/n) "
)
!= "y"
):
print("exiting.")
sys.exit(1)
# Minisign password
cmd = subprocess.run(["rbw", "get", "minisign"], capture_output=True, text=True)
minisign_password = cmd.stdout
if args.publish:
# Git push
run_command(["git", "push", "--tags"])
# Create directory
run_command(
[
"ssh",
"akesi",
# "-J", "pica01",
"mkdir",
"-p",
f"/var/www/static/reaction/releases/{tag}/",
]
)
else:
# Prepare directory for tarball and deb file.
# We must do a `cargo clean` before each build,
# so we have to move them out of `target/`
local_dir = os.path.join(root_dir, "local")
try:
os.mkdir(local_dir)
except FileExistsError:
pass
architectures = {
"x86_64-unknown-linux-gnu": "amd64",
# I would like to build for those targets instead:
# "x86_64-unknown-linux-musl": "amd64",
# "aarch64-unknown-linux-musl": "arm64",
# "arm-unknown-linux-gnueabihf": "armhf",
}
all_files = []
instructions = [
"## Changes",
"""
## Instructions
You'll need to install minisign to check the authenticity of the package.
After installing reaction, create your configuration file(s) in JSON, YAML or JSONnet in the
`/etc/reaction/` directory.
See <https://reaction.ppom.me> for documentation.
Reload systemd:
```bash
$ sudo systemctl daemon-reload
```
Then enable and start reaction with this command:
```bash
# write first your configuration file(s) in /etc/reaction/
$ sudo systemctl enable --now reaction.service
```
""".strip(),
]
for architecture_rs, architecture_pretty in architectures.items():
# Cargo clean
# run_command(["cargo", "clean"])
# Build docker image
run_command(["docker", "pull", "rust:bookworm"])
run_command(["docker", "build", "-t", "rust:reaction", "."])
binaries = [
# Binaries
"reaction",
"reaction-plugin-virtual",
"reaction-plugin-ipset",
]
# Build
run_command(
[
"docker",
"run",
"--rm",
"-u", str(os.getuid()),
"-v", ".:/reaction",
"rust:reaction",
"sh", "-c",
" && ".join([
f"cargo build --release --target {architecture_rs} --package {binary}"
for binary in binaries
])
]
)
# Build .deb
debs = [
"reaction",
"reaction-plugin-ipset",
]
for deb in debs:
cmd = run_command(
[
"cargo-deb",
"--target", architecture_rs,
"--package", deb,
"--no-build",
"--no-strip"
]
)
deb_dir = os.path.join("./target", architecture_rs, "debian")
deb_names = [f for f in os.listdir(deb_dir) if f.endswith(".deb")]
deb_paths = [os.path.join(deb_dir, deb_name) for deb_name in deb_names]
# Archive
files_path = os.path.join("./target", architecture_rs, "release")
pkg_name = f"reaction-{tag}-{architecture_pretty}"
tar_name = f"{pkg_name}.tar.gz"
tar_path = os.path.join(files_path, tar_name)
os.chdir(files_path)
try:
os.mkdir(pkg_name)
except FileExistsError:
pass
files = binaries + [
# Shell completion
"reaction.bash",
"reaction.fish",
"_reaction",
# Man pages
"reaction.1",
"reaction-flush.1",
"reaction-show.1",
"reaction-start.1",
"reaction-test-regex.1",
"reaction-test-config.1",
]
for file in files:
shutil.copy(file, pkg_name)
makefile = os.path.join(root_dir, "packaging", "Makefile")
shutil.copy(makefile, pkg_name)
systemd = os.path.join(root_dir, "config", "reaction.service")
shutil.copy(systemd, pkg_name)
run_command(["tar", "czf", tar_name, pkg_name])
os.chdir(root_dir)
# Sign
run_command(
["minisign", "-Sm", tar_path] + deb_paths,
text=True,
input=minisign_password,
)
deb_sig_paths = [f"{deb_path}.minisig" for deb_path in deb_paths]
deb_sig_names = [f"{deb_name}.minisig" for deb_name in deb_names]
tar_sig = f"{tar_path}.minisig"
if args.publish:
# Push
run_command(
[
"rsync",
"-az", # "-e", "ssh -J pica01",
tar_path,
tar_sig,
]
+ deb_paths
+ deb_sig_paths
+ [
f"akesi:/var/www/static/reaction/releases/{tag}/",
]
)
else:
# Copy
run_command(["cp", tar_path, tar_sig] + deb_paths + deb_sig_paths + [local_dir])
all_files.extend([tar_path, tar_sig])
all_files.extend(deb_paths)
all_files.extend(deb_sig_paths)
# Instructions
instructions.append(
f"""
## Tar installation ({architecture_pretty} linux)
```bash
curl -O https://static.ppom.me/reaction/releases/{tag}/{tar_name} \\
-O https://static.ppom.me/reaction/releases/{tag}/{tar_name}.minisig \\
&& minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {tar_name} \\
&& rm {tar_name}.minisig \\
&& tar xvf {tar_name} \\
&& cd {pkg_name} \\
&& sudo make install
```
If you want to install the ipset plugin as well:
```bash
sudo apt install -y libipset-dev && sudo make install-ipset
```
""".strip()
)
instructions.append(
f"""
## Debian installation ({architecture_pretty} linux)
```bash
curl \\
{"\n".join([
f" -O https://static.ppom.me/reaction/releases/{tag}/{deb_name} \\"
for deb_name in deb_names + deb_sig_names
])}
{"\n".join([
f" && minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {deb_name} \\"
for deb_name in deb_names
])}
&& rm {" ".join(deb_sig_names)} \\
&& sudo apt install {" ".join([f"./{deb_name}" for deb_name in deb_names])}
```
*You can also use [this third-party package repository](https://packages.azlux.fr).*
""".strip()
)
if not args.publish:
print("\n\n".join(instructions))
return
# Release
cmd = run_command(
["rbw", "get", "framagit.org", "token"], capture_output=True, text=True
)
token = cmd.stdout.strip()
if token == "":
print("Could not retrieve token")
sys.exit(1)
# Make user edit the description
tmpdir = tempfile.TemporaryDirectory()
desc_path = tmpdir.name + "/description.md"
with open(desc_path, "w+") as desc_file:
desc_file.write("\n\n".join(instructions))
run_command(["vi", desc_path])
with open(desc_path) as desc_file:
description = desc_file.read().strip()
if description == "":
print()
print("User deleted emptied description, exiting.")
sys.exit(1)
# Construct JSON payload
files = [os.path.basename(file) for file in all_files]
data = {
"tag_name": tag,
"description": description,
"assets": {
"links": [
{
"url": "https://"
+ f"static.ppom.me/reaction/releases/{tag}/{os.path.basename(file)}".replace(
"//", "/"
),
"name": file,
"link_type": "other" if file.endswith(".minisig") else "package",
}
for file in files
]
},
}
body = json.dumps(data)
print(body)
# Send POST request
headers = {
"Host": "framagit.org",
"Content-Type": "application/json",
"PRIVATE-TOKEN": token,
}
conn = http.client.HTTPSConnection("framagit.org")
conn.request("POST", "/api/v4/projects/90566/releases", body=body, headers=headers)
response = conn.getresponse()
body = json.loads(response.read())
if response.status != 201:
print(
f"sending message failed: status: {response.status}, reason: {response.reason}, message: {body.message}"
)
sys.exit(1)
if __name__ == "__main__":
main()


@ -1,40 +0,0 @@
#!/usr/bin/env bash
set -exu
git push --tags
TAG="$(git tag --sort=v:refname | tail -n1)"
docker run -it --rm -e HOME=/tmp/ -v "$(pwd)":/tmp/code -w /tmp/code golang:1.21-bullseye sh -c "git config --global --add safe.directory . && make reaction_${TAG:1}-1_amd64.deb reaction ip46tables nft46"
make "signatures_${TAG:1}"
rsync -avz -e 'ssh -J pica01' ./ip46tables ./nft46 ./reaction ./reaction_${TAG:1}-1_amd64.deb ./nft46.minisig ./ip46tables.minisig ./reaction.minisig ./reaction_${TAG:1}-1_amd64.deb.minisig akesi:/var/www/static/reaction/releases/"$TAG"
TOKEN="$(rbw get framagit.org token)"
DATA='{
"tag_name":"'"$TAG"'",
"description": "**Changes**\n\n**Download**\n\n```bash\nwget https://static.ppom.me/reaction/releases/'"$TAG"'/nft46 \\\n https://static.ppom.me/reaction/releases/'"$TAG"'/reaction \\\n https://static.ppom.me/reaction/releases/'"$TAG"'/ip46tables \\\n https://static.ppom.me/reaction/releases/'"$TAG"'/nft46.minisig \\\n https://static.ppom.me/reaction/releases/'"$TAG"'/reaction.minisig \\\n https://static.ppom.me/reaction/releases/'"$TAG"'/ip46tables.minisig\nfor i in nft46 ip46tables reaction; do\n minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m $i &&\n rm $i.minisig\ndone\n```\n\n**Debian Installation**\n\n```bash\nwget https://static.ppom.me/reaction/releases/'"$TAG"'/reaction_'"${TAG:1}"'-1_amd64.deb \\\n https://static.ppom.me/reaction/releases/'"$TAG"'/reaction_'"${TAG:1}"'-1_amd64.deb.minisig\nminisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m reaction_'"${TAG:1}"'-1_amd64.deb &&\n rm reaction_'"${TAG:1}"'-1_amd64.deb.minisig &&\n apt install ./reaction_'"${TAG:1}"'-1_amd64.deb\n```",
"assets":{"links":[
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/nft46", "name": "nft46 (x86-64)", "link_type": "package"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/reaction", "name": "reaction (x86-64)", "link_type": "package"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/ip46tables", "name": "ip46tables (x86-64)", "link_type": "package"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/reaction_'"${TAG:1}"'-1_amd64.deb", "name": "reaction_'"${TAG:1}"'-1_amd64.deb (x86-64)", "link_type": "package"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/nft46.minisig", "name": "nft46.minisig", "link_type": "other"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/reaction.minisig", "name": "reaction.minisig", "link_type": "other"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/ip46tables.minisig", "name": "ip46tables.minisig", "link_type": "other"},
{"url": "https://static.ppom.me/reaction/releases/'"$TAG"'/reaction_'"${TAG:1}"'-1_amd64.deb.minisig", "name": "reaction_'"${TAG:1}"'-1_amd64.deb.minisig", "link_type": "other"}
]}}'
curl \
--fail-with-body \
--location \
-X POST \
-H 'Content-Type: application/json' \
-H "PRIVATE-TOKEN: $TOKEN" \
'https://framagit.org/api/v4/projects/90566/releases' \
--data "$DATA"
make clean

shell.nix Normal file
@ -0,0 +1,14 @@
# This shell.nix for NixOS users is only needed when building reaction-plugin-ipset
with import <nixpkgs> {};
pkgs.mkShell {
name = "libipset";
buildInputs = [
ipset
nftables
clang
];
src = null;
shellHook = ''
export LIBCLANG_PATH="$(clang -print-file-name=libclang.so)"
'';
}

src/cli.rs Normal file
@ -0,0 +1,173 @@
use std::{fmt, path::PathBuf};
use clap::{Parser, Subcommand, ValueEnum};
use regex::Regex;
use tracing::Level;
#[derive(Parser)]
#[clap(version)]
#[command(
name = "reaction",
about = "Scan logs and take action",
long_about = "A daemon that scans program outputs for repeated patterns, and takes action.
Aims at being more versatile and flexible than fail2ban, while being faster and having simpler configuration.
See usage examples, service configurations and good practices
on the wiki: https://reaction.ppom.me
"
)]
pub struct Cli {
#[clap(subcommand)]
pub command: SubCommand,
}
#[derive(Subcommand)]
pub enum SubCommand {
/// Start reaction daemon
Start {
/// configuration file in json, jsonnet or yaml format, or directory containing those files. required.
#[clap(short = 'c', long)]
config: PathBuf,
/// minimum log level to show
#[clap(short = 'l', long, value_parser = parse_log_level, default_value_t = Level::INFO, ignore_case = true)]
loglevel: Level,
/// path to the client-daemon communication socket
#[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
socket: PathBuf,
},
/// Show current matches and actions
#[command(
long_about = "Show current matches and which actions are still to be run (e.g. know what is currently banned"
)]
Show {
/// path to the client-daemon communication socket
#[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
socket: PathBuf,
/// how to format output
#[clap(short = 'f', long, default_value_t = Format::YAML)]
format: Format,
/// only show items related to this STREAM[.FILTER]
#[clap(short = 'l', long, value_name = "STREAM[.FILTER]")]
limit: Option<String>,
/// only show items matching name=PATTERN regex
#[clap(value_parser = parse_named_regex, value_name = "NAME=PATTERN")]
patterns: Vec<(String, String)>,
},
/// Remove a target from reaction (e.g. unban)
#[command(
long_about = "Remove currently active matches and run currently pending actions for the specified TARGET. (e.g. unban)
Then prints the flushed matches and actions."
)]
Flush {
/// path to the client-daemon communication socket
#[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
socket: PathBuf,
/// how to format output
#[clap(short = 'f', long, default_value_t = Format::YAML)]
format: Format,
/// only show items related to this STREAM[.FILTER]
#[clap(short = 'l', long, value_name = "STREAM[.FILTER]")]
limit: Option<String>,
/// only show items matching name=PATTERN regex
#[clap(value_parser = parse_named_regex, value_name = "NAME=PATTERN")]
patterns: Vec<(String, String)>,
},
/// Trigger a target in reaction (e.g. ban)
#[command(
long_about = "Trigger actions and remove currently active matches for the specified PATTERNS in the specified STREAM.FILTER. (e.g. ban)"
)]
Trigger {
/// path to the client-daemon communication socket
#[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
socket: PathBuf,
/// STREAM.FILTER to trigger
#[clap(value_name = "STREAM.FILTER")]
limit: String,
/// PATTERNs to trigger on (e.g. ip=1.2.3.4)
#[clap(value_parser = parse_named_regex, value_name = "NAME=PATTERN")]
patterns: Vec<(String, String)>,
},
/// Test a regex
#[command(
name = "test-regex",
long_about = "Test a REGEX against one LINE, or against standard input.
Giving a configuration file permits to use its patterns in REGEX."
)]
TestRegex {
/// configuration file in json, jsonnet or yaml format, or directory containing those files. required.
#[clap(short = 'c', long)]
config: PathBuf,
/// Regex to test
#[clap(value_name = "REGEX")]
regex: String,
/// Line to be tested
#[clap(value_name = "LINE")]
line: Option<String>,
},
/// Test your configuration
TestConfig {
/// either a configuration file in json, jsonnet or yaml format, or a directory containing those files. required.
#[clap(short = 'c', long)]
config: PathBuf,
/// how to format output
#[clap(short = 'f', long, default_value_t = Format::YAML)]
format: Format,
/// whether to output additional information
#[clap(short = 'v', long, default_value_t = false)]
verbose: bool,
},
}
// Enums
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Debug, ValueEnum)]
pub enum Format {
JSON,
YAML,
}
impl fmt::Display for Format {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Format::JSON => write!(f, "json"),
Format::YAML => write!(f, "yaml"),
}
}
}
fn parse_named_regex(s: &str) -> Result<(String, String), String> {
let (name, v) = s
.split_once('=')
.ok_or("When given as a positional argument, a pattern must be prefixed with a name, ex: ip=192.168.0.1")?;
let _ = Regex::new(v).map_err(|err| format!("{}", err))?;
Ok((name.to_string(), v.to_string()))
}
fn parse_log_level(s: &str) -> Result<Level, String> {
match s.to_ascii_uppercase().as_str() {
"DEBUG" => Ok(Level::DEBUG),
"INFO" => Ok(Level::INFO),
"WARN" => Ok(Level::WARN),
"ERROR" => Ok(Level::ERROR),
"FATAL" => Ok(Level::ERROR),
_ => Err("must be one of ERROR, WARN, INFO, DEBUG".into()),
}
}
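The NAME=PATTERN positional arguments above are split on the first `=` before the pattern is validated; the split step can be sketched without the `regex` dependency (the compile-check performed by `parse_named_regex` is omitted here so the sketch stays dependency-free):

```rust
// Sketch of the NAME=PATTERN split done by parse_named_regex above,
// without the regex-compilation check.
fn parse_named(s: &str) -> Result<(String, String), String> {
    let (name, v) = s
        .split_once('=')
        .ok_or_else(|| "a pattern must be prefixed with a name, ex: ip=192.168.0.1".to_string())?;
    Ok((name.to_string(), v.to_string()))
}

fn main() {
    assert_eq!(
        parse_named("ip=1.2.3.4"),
        Ok(("ip".to_string(), "1.2.3.4".to_string()))
    );
    assert!(parse_named("no-equals").is_err());
    println!("ok");
}
```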

src/client/mod.rs Normal file
@ -0,0 +1,7 @@
mod request;
mod test_config;
mod test_regex;
pub use request::request;
pub use test_config::test_config;
pub use test_regex::test_regex;

src/client/request.rs Normal file
@ -0,0 +1,84 @@
use std::{
error::Error,
path::{Path, PathBuf},
};
use futures::{SinkExt, StreamExt};
use tokio::net::UnixStream;
use tokio_util::{
bytes::Bytes,
codec::{Framed, LengthDelimitedCodec},
};
use crate::{
cli::Format,
protocol::{Cleanable, ClientRequest, ClientStatus, DaemonResponse, Order},
};
macro_rules! or_quit {
($msg:expr, $expression:expr) => {
$expression.map_err(|err| format!("failed to communicate to daemon: {}, {}", $msg, err))?
};
}
async fn send_retrieve(socket: &Path, req: &ClientRequest) -> Result<DaemonResponse, String> {
let conn = or_quit!(
"opening connection to daemon",
UnixStream::connect(socket).await
);
// Encode
let mut transport = Framed::new(conn, LengthDelimitedCodec::new());
let encoded_request = or_quit!("failed to encode request", serde_json::to_string(req));
or_quit!(
"failed to send request",
transport.send(Bytes::from(encoded_request)).await
);
// Decode
let encoded_response = or_quit!(
"failed to read response",
transport.next().await.ok_or("empty response from server")
);
let encoded_response = or_quit!("failed to decode response", encoded_response);
Ok(or_quit!(
"failed to decode response",
serde_json::from_slice::<DaemonResponse>(&encoded_response)
))
}
fn print_status(cs: ClientStatus, format: Format) -> Result<(), Box<dyn Error>> {
let cs = cs.clean();
let encoded = match format {
Format::JSON => serde_json::to_string_pretty(&cs)?,
Format::YAML => serde_yaml::to_string(&cs)?,
};
println!("{}", encoded);
Ok(())
}
pub async fn request(
socket: PathBuf,
format: Format,
stream_filter: Option<String>,
patterns: Vec<(String, String)>,
order: Order,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let response = send_retrieve(
&socket,
&ClientRequest {
order,
stream_filter,
patterns,
},
)
.await;
match response? {
DaemonResponse::Order(cs) => {
print_status(cs, format).map_err(|err| format!("while printing response: {err}"))
}
DaemonResponse::Err(err) => Err(format!(
"failed to communicate to daemon: error response: {err}"
)),
DaemonResponse::Ok(_) => Ok(()),
}?;
Ok(())
}
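The client/daemon exchange above frames each JSON message with `LengthDelimitedCodec`. Assuming the codec's default settings (an assumption; the diff constructs it with `LengthDelimitedCodec::new()`), the wire format is a 4-byte big-endian length prefix followed by the payload, which can be sketched with std only:

```rust
// Std-only sketch of length-delimited framing: a 4-byte big-endian length,
// then the payload. Assumes LengthDelimitedCodec's default configuration.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = (payload.len() as u32).to_be_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

fn unframe(buf: &[u8]) -> Option<&[u8]> {
    // Read the 4-byte length, then slice out exactly that many payload bytes.
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_be_bytes(len_bytes) as usize;
    buf.get(4..4 + len)
}

fn main() {
    let msg = br#"{"order":"show"}"#;
    let framed = frame(msg);
    assert_eq!(&framed[..4], &(msg.len() as u32).to_be_bytes());
    assert_eq!(unframe(&framed), Some(&msg[..]));
    println!("ok");
}
```

Framing like this is what lets each side read one complete JSON document at a time off the Unix socket.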

src/client/test_config.rs Normal file
@ -0,0 +1,42 @@
use std::{error::Error, path::PathBuf};
use crate::{cli::Format, concepts::Config};
pub fn test_config(
config_path: PathBuf,
format: Format,
verbose: bool,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let (mut cfg, cfg_files) = Config::from_path_raw(&config_path)?;
if verbose {
if config_path.is_dir() {
println!(
"Loaded the configuration from the following files in the directory {} in this order:",
config_path.display()
);
println!("{}\n", cfg_files.join("\n"));
} else {
println!(
"Loaded the configuration from the file {}",
config_path.display()
);
}
}
// first serialize the raw config (before regexes are transformed and their original version
// discarded)
let cfg_str = match format {
Format::JSON => serde_json::to_string_pretty(&cfg).map_err(|e| e.to_string()),
Format::YAML => serde_yaml::to_string(&cfg).map_err(|e| e.to_string()),
}
.map_err(|e| format!("Error serializing back the configuration: {e}"))?;
// then try to finalize the configuration: will raise an error on an invalid config
cfg.setup()
.map_err(|e| format!("Configuration file {}: {}", config_path.display(), e))?;
// only print the serialized config if everything went well
println!("{cfg_str}");
Ok(())
}

src/client/test_regex.rs Normal file
@ -0,0 +1,78 @@
use std::{
collections::BTreeSet,
error::Error,
io::{stdin, BufRead, BufReader},
path::PathBuf,
sync::Arc,
};
use regex::Regex;
use crate::concepts::{Config, Pattern};
pub fn test_regex(
config_path: PathBuf,
mut regex: String,
line: Option<String>,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let config = Config::from_path(&config_path)?;
// Code close to Filter::setup()
let mut used_patterns: BTreeSet<Arc<Pattern>> = BTreeSet::new();
for pattern in config.patterns.values() {
if let Some(index) = regex.find(pattern.name_with_braces()) {
// we already `find` it, so we must be able to `rfind` it
#[allow(clippy::unwrap_used)]
if regex.rfind(pattern.name_with_braces()).unwrap() != index {
return Err(format!(
"pattern {} present multiple times in regex",
pattern.name_with_braces()
)
.into());
}
used_patterns.insert(pattern.clone());
}
regex = regex.replacen(pattern.name_with_braces(), &pattern.regex, 1);
}
let compiled = Regex::new(&regex).map_err(|err| format!("regex doesn't compile: {err}"))?;
let match_closure = |line: String| {
let mut ignored = false;
if let Some(matches) = compiled.captures(&line) {
let mut result = Vec::new();
if !used_patterns.is_empty() {
for pattern in used_patterns.iter() {
if let Some(match_) = matches.name(&pattern.name) {
result.push(match_.as_str().to_string());
if pattern.is_ignore(match_.as_str()) {
ignored = true;
}
}
}
if !ignored {
println!("\x1b[32mmatching\x1b[0m {result:?}: {line}");
} else {
println!("\x1b[33mignore matching\x1b[0m {result:?}: {line}");
}
} else {
println!("\x1b[32mmatching\x1b[0m: {line}");
}
} else {
println!("\x1b[31mno match\x1b[0m: {line}");
}
};
if let Some(line) = line {
match_closure(line);
} else {
eprintln!("no second argument: reading from stdin");
for line in BufReader::new(stdin()).lines() {
match line {
Ok(line) => match_closure(line),
Err(_) => break,
};
}
}
Ok(())
}
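`test_regex` substitutes each configured pattern into the user's regex at most once, rejecting duplicates. Assuming `name_with_braces` yields `{name}`-style placeholders (an assumption; the method body is not shown in this diff), the check-and-replace step looks like:

```rust
// Sketch of the single-occurrence check and substitution above, assuming
// placeholders look like {name} (name_with_braces is not shown in this diff).
fn substitute(regex: &str, name: &str, body: &str) -> Result<String, String> {
    let placeholder = format!("{{{}}}", name);
    if let Some(first) = regex.find(&placeholder) {
        // If find and rfind disagree, the placeholder appears more than once.
        if regex.rfind(&placeholder) != Some(first) {
            return Err(format!(
                "pattern {} present multiple times in regex",
                placeholder
            ));
        }
    }
    Ok(regex.replacen(&placeholder, body, 1))
}

fn main() {
    let re = substitute("failed login from {ip}", "ip", r"(?P<ip>\S+)").unwrap();
    assert_eq!(re, r"failed login from (?P<ip>\S+)");
    assert!(substitute("{ip} {ip}", "ip", "x").is_err());
    println!("ok");
}
```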

src/concepts/action.rs Normal file
@ -0,0 +1,300 @@
use std::{cmp::Ordering, collections::BTreeSet, fmt::Display, sync::Arc, time::Duration};
use reaction_plugin::{ActionConfig, time::parse_duration};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio::process::Command;
use super::{Match, Pattern, PatternType};
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
pub struct Action {
#[serde(default)]
pub cmd: Vec<String>,
// TODO one shot time deserialization
#[serde(skip_serializing_if = "Option::is_none")]
pub after: Option<String>,
#[serde(skip)]
pub after_duration: Option<Duration>,
#[serde(
rename = "onexit",
default = "set_false",
skip_serializing_if = "is_false"
)]
pub on_exit: bool,
#[serde(default = "set_false", skip_serializing_if = "is_false")]
pub oneshot: bool,
#[serde(default = "set_false", skip_serializing_if = "is_false")]
pub ipv4only: bool,
#[serde(default = "set_false", skip_serializing_if = "is_false")]
pub ipv6only: bool,
#[serde(skip)]
pub patterns: Arc<BTreeSet<Arc<Pattern>>>,
#[serde(skip)]
pub name: String,
#[serde(skip)]
pub filter_name: String,
#[serde(skip)]
pub stream_name: String,
// Plugin-specific
#[serde(default, rename = "type", skip_serializing_if = "Option::is_none")]
pub action_type: Option<String>,
#[serde(default, skip_serializing_if = "Value::is_null")]
pub options: Value,
}
fn set_false() -> bool {
false
}
fn is_false(b: &bool) -> bool {
!*b
}
impl Action {
pub fn is_plugin(&self) -> bool {
self.action_type
.as_ref()
.is_some_and(|action_type| action_type != "cmd")
}
pub fn setup(
&mut self,
stream_name: &str,
filter_name: &str,
name: &str,
patterns: Arc<BTreeSet<Arc<Pattern>>>,
) -> Result<(), String> {
self._setup(stream_name, filter_name, name, patterns)
.map_err(|msg| format!("action {}: {}", name, msg))
}
fn _setup(
&mut self,
stream_name: &str,
filter_name: &str,
name: &str,
patterns: Arc<BTreeSet<Arc<Pattern>>>,
) -> Result<(), String> {
self.stream_name = stream_name.to_string();
self.filter_name = filter_name.to_string();
self.name = name.to_string();
self.patterns = patterns;
if self.name.is_empty() {
return Err("action name is empty".into());
}
if self.name.contains('.') {
return Err("character '.' is not allowed in filter name".into());
}
if !self.is_plugin() {
if self.cmd.is_empty() {
return Err("cmd is empty".into());
}
if self.cmd[0].is_empty() {
return Err("cmd's first item is empty".into());
}
if !self.options.is_null() {
return Err("can't define options without a plugin type".into());
}
} else if !self.cmd.is_empty() {
return Err("can't define a cmd and a plugin type".into());
}
if let Some(after) = &self.after {
self.after_duration = Some(
parse_duration(after)
.map_err(|err| format!("failed to parse after time: {}", err))?,
);
self.after = None;
} else if self.on_exit {
return Err("cannot have `onexit: true`, without an `after` directive".into());
}
if self.ipv4only && self.ipv6only {
return Err("cannot have `ipv4only: true` and `ipv6only: true` in one action".into());
}
if self
.patterns
.iter()
.all(|pattern| pattern.pattern_type() != PatternType::Ip)
{
if self.ipv4only {
return Err("it makes no sense to have an action with `ipv4only: true` when no pattern of type ip is defined on the filter".into());
}
if self.ipv6only {
return Err("it makes no sense to have an action with `ipv6only: true` when no pattern of type ip is defined on the filter".into());
}
}
Ok(())
}
// TODO test
pub fn exec(&self, match_: &Match) -> Command {
let computed_command = if self.patterns.is_empty() {
self.cmd.clone()
} else {
self.cmd
.iter()
.map(|item| {
(0..match_.len())
.zip(self.patterns.as_ref())
.fold(item.clone(), |acc, (i, pattern)| {
acc.replace(pattern.name_with_braces(), &match_[i])
})
})
.collect()
};
let mut cmd = Command::new(&computed_command[0]);
cmd.args(&computed_command[1..]);
cmd
}
pub fn to_action_config(&self) -> Result<ActionConfig, String> {
Ok(ActionConfig {
stream_name: self.stream_name.clone(),
filter_name: self.filter_name.clone(),
action_name: self.name.clone(),
action_type: self
.action_type
.clone()
.ok_or_else(|| format!("action {} doesn't load a plugin. this is a bug!", self))?,
config: self.options.clone().into(),
patterns: self
.patterns
.iter()
.map(|pattern| pattern.name.clone())
.collect(),
})
}
}
impl PartialEq for Action {
fn eq(&self, other: &Self) -> bool {
self.stream_name == other.stream_name && self.name == other.name
}
}
impl Eq for Action {}
impl Ord for Action {
fn cmp(&self, other: &Self) -> Ordering {
match self.stream_name.cmp(&other.stream_name) {
Ordering::Equal => match self.filter_name.cmp(&other.filter_name) {
Ordering::Equal => self.name.cmp(&other.name),
o => o,
},
o => o,
}
}
}
impl PartialOrd for Action {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Display for Action {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}.{}.{}", self.stream_name, self.filter_name, self.name)
}
}
#[cfg(test)]
impl Action {
/// Test-only constructor designed to be easy to call
#[allow(clippy::too_many_arguments)]
pub fn new(
cmd: Vec<&str>,
after: Option<&str>,
on_exit: bool,
stream_name: &str,
filter_name: &str,
name: &str,
config_patterns: &super::Patterns,
ip_only: u8,
) -> Self {
let mut action = Self {
cmd: cmd.into_iter().map(|s| s.into()).collect(),
after: after.map(|s| s.into()),
on_exit,
ipv4only: ip_only == 4,
ipv6only: ip_only == 6,
..Default::default()
};
action
.setup(
stream_name,
filter_name,
name,
config_patterns
.clone()
.into_values()
.collect::<BTreeSet<_>>()
.into(),
)
.unwrap();
action
}
}
#[cfg(test)]
pub mod tests {
use super::*;
pub fn ok_action() -> Action {
Action {
cmd: vec!["command".into()],
..Default::default()
}
}
pub fn ok_action_with_after(d: String, name: &str) -> Action {
let mut action = Action {
cmd: vec!["command".into()],
after: Some(d),
..Default::default()
};
action
.setup("", "", name, Arc::new(BTreeSet::default()))
.unwrap();
action
}
#[test]
fn missing_config() {
let mut action;
let name = "name".to_string();
let patterns = Arc::new(BTreeSet::default());
// No command
action = Action::default();
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
// No command
action = Action::default();
action.cmd = vec!["".into()];
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
// No command
action = Action::default();
action.cmd = vec!["".into(), "arg1".into()];
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
// command ok
action = ok_action();
assert!(action.setup(&name, &name, &name, patterns.clone()).is_ok());
// command ok
action = ok_action();
action.cmd.push("arg1".into());
assert!(action.setup(&name, &name, &name, patterns.clone()).is_ok());
}
}
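The nested matches in `Ord for Action` above compare by stream, then filter, then action name. They are behaviourally equivalent to chaining `Ordering::then_with`, a common alternative formulation:

```rust
use std::cmp::Ordering;

// Equivalent formulation of Action::cmp above using Ordering::then_with:
// compare stream, then filter, then action name, short-circuiting on the
// first non-equal comparison.
fn cmp_names(a: (&str, &str, &str), b: (&str, &str, &str)) -> Ordering {
    a.0.cmp(b.0)
        .then_with(|| a.1.cmp(b.1))
        .then_with(|| a.2.cmp(b.2))
}

fn main() {
    assert_eq!(
        cmp_names(("ssh", "fail", "ban"), ("ssh", "fail", "unban")),
        Ordering::Less
    );
    // The first field dominates, regardless of the others.
    assert_eq!(cmp_names(("a", "z", "z"), ("b", "a", "a")), Ordering::Less);
    println!("ok");
}
```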

src/concepts/config.rs Normal file
@ -0,0 +1,812 @@
use std::{
collections::{BTreeMap, btree_map::Entry},
fs::File,
io,
path::Path,
process::{Command, Stdio},
sync::Arc,
};
use serde::{Deserialize, Serialize};
use thiserror::Error;
use tracing::{debug, error, info, warn};
use super::{Pattern, Plugin, Stream, merge_attrs};
pub type Patterns = BTreeMap<String, Arc<Pattern>>;
#[derive(Clone, Debug, Deserialize, Serialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Config {
#[serde(default = "num_cpus::get")]
pub concurrency: usize,
#[serde(default = "dot", skip_serializing_if = "String::is_empty")]
pub state_directory: String,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub plugins: BTreeMap<String, Plugin>,
#[serde(default)]
pub patterns: Patterns,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub start: Vec<Vec<String>>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub stop: Vec<Vec<String>>,
#[serde(default)]
pub streams: BTreeMap<String, Stream>,
// This field only serves the purpose of having a top-level place for saving YAML variables
#[serde(default, skip_serializing, rename = "definitions")]
_definitions: serde_json::Value,
}
fn dot() -> String {
".".into()
}
impl Config {
fn merge(&mut self, mut other: Config) -> Result<(), String> {
for (key, plugin) in other.plugins.into_iter() {
match self.plugins.entry(key) {
Entry::Vacant(e) => {
e.insert(plugin);
}
Entry::Occupied(e) => {
return Err(format!(
"plugin {} is already defined. plugin definitions can't be spread accross multiple files.",
e.key()
));
}
}
}
for (key, pattern) in other.patterns.into_iter() {
match self.patterns.entry(key) {
Entry::Vacant(e) => {
e.insert(pattern);
}
Entry::Occupied(e) => {
return Err(format!(
"pattern {} is already defined. pattern definitions can't be spread accross multiple files.",
e.key()
));
}
}
}
for (key, stream) in other.streams.into_iter() {
match self.streams.entry(key) {
Entry::Vacant(e) => {
e.insert(stream);
}
Entry::Occupied(mut e) => {
e.get_mut()
.merge(stream)
.map_err(|err| format!("Stream {}: {}", e.key(), err))?;
}
}
}
self.start.append(&mut other.start);
self.stop.append(&mut other.stop);
self.state_directory = merge_attrs(
self.state_directory.clone(),
other.state_directory,
".".into(),
"state_directory",
)?;
self.concurrency = merge_attrs(
self.concurrency,
other.concurrency,
num_cpus::get(),
"concurrency",
)?;
Ok(())
}
pub fn setup(&mut self) -> Result<(), String> {
if self.concurrency == 0 {
self.concurrency = num_cpus::get();
}
// Nullify this useless field
self._definitions = serde_json::Value::Null;
for (key, value) in &mut self.plugins {
value.setup(key)?;
}
if self.patterns.is_empty() {
return Err("no patterns configured".into());
}
let mut new_patterns = BTreeMap::new();
for (key, value) in &self.patterns {
let mut value = value.as_ref().clone();
value.setup(key)?;
new_patterns.insert(key.clone(), Arc::new(value));
}
self.patterns = new_patterns;
if self.streams.is_empty() {
return Err("no streams configured".into());
}
for (key, value) in &mut self.streams {
value.setup(key, &self.patterns)?;
}
Ok(())
}
pub fn start(&self) -> bool {
run_commands(&self.start, "start")
}
pub fn stop(&self) -> bool {
run_commands(&self.stop, "stop")
}
pub fn from_path(path: &Path) -> Result<Self, String> {
match Self::from_path_raw(path) {
Ok((mut cfg, files)) => {
cfg.setup().map_err(ConfigError::BadConfig).map_err(|e| {
if path.is_dir() {
format!(
"{e}\nWhile reading config from {}. List of files read, in that order:\n{}",
path.display(),
files.join("\n"),
)
} else {
format!("{e}\nWhile reading config from {}.", path.display())
}
})?;
Ok(cfg)
}
Err(e) => Err(e),
}
}
pub fn from_path_raw(path: &Path) -> Result<(Self, Vec<String>), String> {
match std::fs::metadata(path) {
Err(e) => Err(format!("Error accessing {}: {e}", path.to_string_lossy())),
Ok(m) => {
if m.is_file() {
Self::_from_file_raw(path)
.map(|cfg| {
let fname = path
.file_name()
.map(|s| s.to_string_lossy().to_string())
.unwrap_or("".to_string());
(cfg, vec![fname])
})
.map_err(|e| format!("Configuration file {}: {}", path.display(), e))
} else if m.is_dir() {
Self::_from_dir_raw(path)
} else {
Err(format!(
"Invalid file type for {}: neither a file nor a directory",
path.to_string_lossy()
))
}
}
}
}
fn _from_dir_raw(path: &Path) -> Result<(Self, Vec<String>), String> {
let dir = std::fs::read_dir(path)
.map_err(|e| format!("Error accessing directory {}: {e}", path.display()))?;
// sorts files by name
let mut cfg_files = BTreeMap::new();
for f in dir {
let f =
f.map_err(|e| format!("Error while reading directory {}: {e}", path.display()))?;
let fname = f.file_name();
let fname = match fname.to_str() {
Some(fname) => fname,
None => {
warn!(
"Ignoring file {} in {}",
f.file_name().to_string_lossy(),
path.display()
);
continue;
}
};
if fname.starts_with(".") || fname.starts_with("_") {
// silently ignore hidden file
debug!("Ignoring hidden file {fname} in {}", path.display());
continue;
}
let fpath = f.path();
let ext = match fpath.extension() {
None => {
// silently ignore files without extensions (may be directory)
debug!(
"Ignoring file without extension {fname} in {}",
path.display()
);
continue;
}
Some(ext) => {
if let Some(ext) = ext.to_str() {
ext
} else {
warn!(
"Ignoring file {} in {} with unexpected extension",
fname,
path.display()
);
continue;
}
}
};
match Self::_extension_to_format(ext) {
Ok(fmt) => cfg_files.insert(fname.to_string(), (fpath, fmt)),
Err(_) => {
// silently ignore files without an expected extension
debug!(
"Ignoring file with unrecognized extension {fname} in {}",
path.display()
);
continue;
}
};
}
let mut cfg: Option<Self> = None;
for (fname, (fpath, fmt)) in &cfg_files {
let cfg_part = Self::_load_file(fpath, *fmt)
.map_err(|e| format!("While reading {fname} in {}: {e}", path.display()))?;
if let Some(mut cfg_agg) = cfg.take() {
cfg_agg.merge(cfg_part)?;
cfg = Some(cfg_agg);
} else {
cfg = Some(cfg_part)
}
}
if let Some(cfg) = cfg {
Ok((cfg, cfg_files.into_keys().collect()))
} else {
Err(format!(
"No valid configuration files found in {}",
path.display()
))
}
}
fn _extension_to_format(extension: &str) -> Result<Format, ConfigError> {
match extension {
"yaml" | "yml" => Ok(Format::Yaml),
"json" => Ok(Format::Json),
"jsonnet" => Ok(Format::Jsonnet),
_ => Err(ConfigError::Extension(format!(
"extension {} is not recognized",
extension
))),
}
}
fn _load_file(path: &Path, format: Format) -> Result<Self, ConfigError> {
let cfg: Self = match format {
Format::Json => serde_json::from_reader(File::open(path)?)?,
Format::Yaml => serde_yaml::from_reader(File::open(path)?)?,
Format::Jsonnet => serde_json::from_str(&jsonnet::from_path(path)?)?,
};
Ok(cfg)
}
fn _from_file_raw(path: &Path) -> Result<Self, ConfigError> {
let extension = path
.extension()
.and_then(|ex| ex.to_str())
.ok_or(ConfigError::Extension("no file extension".into()))?;
let format = Self::_extension_to_format(extension)?;
Self::_load_file(path, format)
}
}
#[derive(Clone, Copy)]
enum Format {
Yaml,
Json,
Jsonnet,
}
#[derive(Error, Debug)]
enum ConfigError {
#[error("{0}")]
Any(String),
#[error("Bad configuration: {0}")]
BadConfig(String),
#[error("{0}. Must be json, jsonnet, yml or yaml.")]
Extension(String),
#[error("{0}")]
IO(#[from] io::Error),
#[error("{0}")]
JSON(#[from] serde_json::Error),
#[error("{0}")]
YAML(#[from] serde_yaml::Error),
}
mod jsonnet {
use std::path::Path;
use jrsonnet_evaluator::{EvaluationState, FileImportResolver, error::LocError};
use super::ConfigError;
pub fn from_path(path: &Path) -> Result<String, ConfigError> {
let state = EvaluationState::default();
state.with_stdlib();
state.set_import_resolver(Box::<FileImportResolver>::default());
// state.set_import_resolver(Box::new(FileImportResolver {
// library_paths: Vec::new(),
// }));
match evaluate(path, &state) {
Ok(val) => Ok(val),
Err(err) => Err(ConfigError::Any(state.stringify_err(&err))),
}
}
fn evaluate(path: &Path, state: &EvaluationState) -> Result<String, LocError> {
let val = state.evaluate_file_raw(path)?;
let result = state.manifest(val)?;
Ok(result.to_string())
}
}
fn run_commands(commands: &Vec<Vec<String>>, moment: &str) -> bool {
debug!("Running {moment} commands...");
let mut ok = true;
for command in commands {
info!("{} command: run {:?}\n", moment, command);
// TODO reaction-go waits for subprocess completion for at most one minute
match Command::new(&command[0])
.args(&command[1..])
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null())
.status()
{
Ok(exit_status) => {
if !exit_status.success() {
error!(
"{} command: run {:?}: exit code: {}",
moment,
command,
exit_status.code().unwrap_or(1)
);
ok = false;
}
}
Err(err) => {
error!(
"{} command: run {:?}: could not execute: {}",
moment, command, err
);
ok = false;
}
}
}
ok
}
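`run_commands` keeps executing the remaining commands even after one fails, and only the accumulated flag decides overall success. A reduced std-only sketch of that accumulation (the `run_all` helper is hypothetical; assumes a Unix-like environment where the `true` and `false` utilities exist):

```rust
use std::process::{Command, Stdio};

// Run every command, keep going on failure, and report whether all
// succeeded — the same all-or-nothing flag as run_commands above.
fn run_all(commands: &[Vec<&str>]) -> bool {
    let mut ok = true;
    for command in commands {
        let status = Command::new(command[0])
            .args(&command[1..])
            .stdin(Stdio::null())
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .status();
        match status {
            Ok(s) if s.success() => {}
            // Covers both a non-zero exit and a failure to spawn.
            _ => ok = false,
        }
    }
    ok
}

fn main() {
    assert!(run_all(&[vec!["true"], vec!["true"]]));
    assert!(!run_all(&[vec!["true"], vec!["false"]]));
    assert!(!run_all(&[vec!["definitely-not-a-real-binary-xyz"]]));
}
```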
#[cfg(test)]
#[allow(clippy::unwrap_used)]
mod tests {
use super::*;
#[test]
fn config_missing() {
let mut config = Config::default();
assert!(config.setup().is_err());
}
fn parse_config_json(cfg: &str) -> Config {
const DUMMY_PATTERNS: &str = r#"
"patterns": {
"zero": {
"regex": "0"
}
}"#;
const DUMMY_FILTERS: &str = r#"
"filters": {
"dummy": {
"regex": ["abc"],
"actions": {
"act": {
"cmd": ["echo", "1"]
}
}
}
}"#;
const DUMMY_STREAMS: &str = r#"
"streams": {
"dummy": {
"cmd": ["true"],
{{FILTERS}}
}
}
"#;
let cfg = cfg
.to_string()
.replace("{{STREAMS}}", DUMMY_STREAMS)
.replace("{{PATTERNS}}", DUMMY_PATTERNS)
.replace("{{FILTERS}}", DUMMY_FILTERS);
serde_json::from_str(&cfg)
.inspect_err(|_| {
eprintln!("{cfg}");
})
.unwrap()
}
#[test]
fn config_without_stream() {
let mut cfg1 = parse_config_json(
r#"{
{{PATTERNS}}
}"#,
);
let mut cfg2 = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {}
}"#,
);
let res = cfg1.setup();
assert!(res.is_err());
let res = cfg2.setup();
assert!(res.is_err());
}
#[test]
fn config_without_pattern() {
let mut cfg1 = parse_config_json(
r#"{
{{STREAMS}}
}"#,
);
let mut cfg2 = parse_config_json(
r#"{
"patterns": {},
{{STREAMS}}
}"#,
);
let res = cfg1.setup();
assert!(res.is_err());
let res = cfg2.setup();
assert!(res.is_err());
}
#[test]
fn merge_config_distinct_patterns() {
let mut cfg_org = parse_config_json(
r#"{
"patterns": {
"ip4": {
"regex": "ip4"
}
},
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"patterns": {
"ip6": {
"regex": "ip6"
}
}
}"#,
);
cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.patterns.contains_key("ip4"));
assert!(cfg_org.patterns.contains_key("ip6"));
assert_eq!(cfg_org.patterns.len(), 2);
assert!(cfg_org.streams.contains_key("dummy"));
assert_eq!(cfg_org.streams.len(), 1);
}
#[test]
fn merge_config_same_patterns() {
let mut cfg_org = parse_config_json(
r#"{
"patterns": {
"zero": {
"regex": "0"
}
},
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"patterns": {
"zero": {
"regex": "00"
}
}
}"#,
);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}
#[test]
fn merge_config_distinct_streams() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo1": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo2": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.patterns.contains_key("zero"));
assert_eq!(cfg_org.patterns.len(), 1);
assert!(cfg_org.streams.contains_key("echo1"));
assert!(cfg_org.streams.contains_key("echo2"));
assert_eq!(cfg_org.streams.len(), 2);
}
#[test]
fn merge_config_same_streams_distinct_filters() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
"cmd": ["echo"],
"filters": {
"f1": {
"regex": ["abc"],
"actions": {
"act": {
"cmd": ["echo", "1"]
}
}
}
}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["echo"],
"filters": {
"f2": {
"regex": ["abc"],
"actions": {
"act": {
"cmd": ["echo", "1"]
}
}
}
}
}
}
}"#,
);
cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.streams.contains_key("echo"));
assert_eq!(cfg_org.streams.len(), 1);
let filters = &cfg_org.streams.get("echo").unwrap().filters;
assert!(filters.contains_key("f1"));
assert!(filters.contains_key("f2"));
assert_eq!(filters.len(), 2);
}
#[test]
fn merge_config_same_streams_distinct_command() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["true"]
}
}
}"#,
);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}
#[test]
fn merge_config_same_streams_command_in_one() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["echo"]
}
}
}"#,
);
cfg_org.merge(cfg_oth).unwrap();
cfg_org.setup().unwrap();
assert!(cfg_org.streams.contains_key("echo"));
assert_eq!(cfg_org.streams.len(), 1);
let stream = cfg_org.streams.get("echo").unwrap();
assert_eq!(stream.cmd.len(), 1);
assert_eq!(stream.filters.len(), 1);
}
#[test]
fn merge_config_same_streams_same_filters() {
let mut cfg_org = parse_config_json(
r#"{
{{PATTERNS}},
"streams": {
"echo": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"streams": {
"echo": {
"cmd": ["echo"],
{{FILTERS}}
}
}
}"#,
);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}
#[test]
fn merge_config_same_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
"concurrency": 97,
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"concurrency": 97
}"#,
);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, 97);
}
#[test]
fn merge_config_distinct_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
"concurrency": 97,
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"concurrency": 96
}"#,
);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_err());
}
#[test]
fn merge_config_one_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
"concurrency": 97,
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(r#"{}"#);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, 97);
}
#[test]
fn merge_config_right_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{
"concurrency": 97
}"#,
);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, 97);
}
#[test]
fn merge_config_no_concurrency() {
let mut cfg_org = parse_config_json(
r#"{
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(r#"{ }"#);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
assert_eq!(cfg_org.concurrency, num_cpus::get());
}
}

src/concepts/filter.rs (new file, 993 lines)
@@ -0,0 +1,993 @@
use std::{
cmp::Ordering,
collections::{BTreeMap, BTreeSet},
fmt::Display,
hash::Hash,
sync::Arc,
time::Duration,
};
use reaction_plugin::time::parse_duration;
use regex::Regex;
use serde::{Deserialize, Serialize};
use super::{Action, Match, Pattern, PatternType, Patterns};
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
pub enum Duplicate {
#[default]
#[serde(rename = "extend")]
Extend,
#[serde(rename = "ignore")]
Ignore,
#[serde(rename = "rerun")]
Rerun,
}
// Only names are serialized.
// Computed fields are skipped during deserialization.
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
pub struct Filter {
#[serde(skip)]
pub longuest_action_duration: Duration,
#[serde(skip)]
pub has_ip: bool,
pub regex: Vec<String>,
#[serde(skip)]
pub compiled_regex: Vec<Regex>,
// We want patterns to be ordered
// This is necessary when using matches which contain multiple patterns
#[serde(skip)]
pub patterns: Arc<BTreeSet<Arc<Pattern>>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub retry: Option<u32>,
#[serde(rename = "retryperiod", skip_serializing_if = "Option::is_none")]
pub retry_period: Option<String>,
#[serde(skip)]
pub retry_duration: Option<Duration>,
#[serde(default)]
pub duplicate: Duplicate,
pub actions: BTreeMap<String, Action>,
#[serde(skip)]
pub name: String,
#[serde(skip)]
pub stream_name: String,
// // Plugin-specific
// #[serde(default, rename = "type")]
// pub filter_type: Option<String>,
// #[serde(default = "null_value")]
// pub options: Value,
}
impl Filter {
#[cfg(test)]
pub fn from_name(stream_name: &str, filter_name: &str) -> Filter {
Filter {
stream_name: stream_name.into(),
name: filter_name.into(),
..Filter::default()
}
}
#[cfg(test)]
pub fn from_name_and_patterns(
stream_name: &str,
filter_name: &str,
patterns: Vec<Pattern>,
) -> Filter {
Filter {
stream_name: stream_name.into(),
name: filter_name.into(),
patterns: Arc::new(patterns.into_iter().map(Arc::new).collect()),
..Filter::default()
}
}
pub fn setup(
&mut self,
stream_name: &str,
name: &str,
config_patterns: &Patterns,
) -> Result<(), String> {
self._setup(stream_name, name, config_patterns)
.map_err(|msg| format!("filter {}: {}", name, msg))
}
fn _setup(
&mut self,
stream_name: &str,
name: &str,
config_patterns: &Patterns,
) -> Result<(), String> {
self.stream_name = stream_name.to_string();
self.name = name.to_string();
if self.name.is_empty() {
return Err("filter name is empty".into());
}
if self.name.contains('.') {
return Err("character '.' is not allowed in filter name".into());
}
if self.retry.is_some() != self.retry_period.is_some() {
return Err("retry and retryperiod must be specified together".into());
}
if self.retry.is_some_and(|r| r < 2) {
return Err("retry must be >= 2. Remove 'retry' and 'retryperiod' to trigger at the first occurrence.".into());
}
if let Some(retry_period) = &self.retry_period {
self.retry_duration = Some(
parse_duration(retry_period)
.map_err(|err| format!("failed to parse retry period: {}", err))?,
);
self.retry_period = None;
}
if self.regex.is_empty() {
return Err("no regex configured".into());
}
let mut new_patterns = BTreeSet::new();
let mut new_regex = Vec::new();
let mut first = true;
for regex in &self.regex {
let mut regex_buf = regex.clone();
for pattern in config_patterns.values() {
if let Some(index) = regex.find(pattern.name_with_braces()) {
// we already `find` it, so we must be able to `rfind` it
#[allow(clippy::unwrap_used)]
if regex.rfind(pattern.name_with_braces()).unwrap() != index {
return Err(format!(
"pattern {} present multiple times in regex",
pattern.name_with_braces()
));
}
if first {
new_patterns.insert(pattern.clone());
} else if !new_patterns.contains(pattern) {
return Err(format!(
"pattern {} is not present in the first regex but is present in a following regex. all regexes must contain the same set of patterns",
&pattern.name_with_braces()
));
}
} else if !first && new_patterns.contains(pattern) {
return Err(format!(
"pattern {} is present in the first regex but is not present in a following regex. all regexes must contain the same set of patterns",
&pattern.name_with_braces()
));
}
regex_buf = regex_buf.replacen(pattern.name_with_braces(), &pattern.regex, 1);
}
new_regex.push(regex_buf.clone());
let compiled = Regex::new(&regex_buf).map_err(|err| err.to_string())?;
self.compiled_regex.push(compiled);
first = false;
}
self.regex = new_regex;
self.patterns = Arc::new(new_patterns);
if self.actions.is_empty() {
return Err("no actions configured".into());
}
for (key, action) in &mut self.actions {
action.setup(stream_name, name, key, self.patterns.clone())?;
}
self.has_ip = self
.actions
.values()
.any(|action| action.ipv4only || action.ipv6only);
self.longuest_action_duration =
self.actions
.values()
.fold(Duration::from_secs(0), |acc, v| {
v.after_duration
.map_or(acc, |v| if v > acc { v } else { acc })
});
Ok(())
}
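Pattern placeholders such as `<name>` are substituted into the raw regex with `replacen`, after a `find`/`rfind` comparison guarantees the placeholder appears at most once. A self-contained sketch of that one substitution step (the `substitute` helper is hypothetical; the real `_setup` additionally tracks which pattern set each regex uses):

```rust
// Replace a single occurrence of a placeholder such as "<name>" with its
// expansion, rejecting regexes that use the placeholder more than once.
fn substitute(regex: &str, placeholder: &str, pattern: &str) -> Result<String, String> {
    match regex.find(placeholder) {
        None => Ok(regex.to_string()),
        Some(index) => {
            // find() succeeded, so rfind() must succeed too; a differing
            // index means the placeholder occurs at least twice.
            if regex.rfind(placeholder) != Some(index) {
                return Err(format!(
                    "pattern {placeholder} present multiple times in regex"
                ));
            }
            Ok(regex.replacen(placeholder, pattern, 1))
        }
    }
}

fn main() {
    assert_eq!(
        substitute("insert <name> here$", "<name>", "(?P<name>[abc])").unwrap(),
        "insert (?P<name>[abc]) here$"
    );
    assert!(substitute("there <name> are two <name> s!", "<name>", "x").is_err());
    assert_eq!(
        substitute("no placeholder", "<name>", "x").unwrap(),
        "no placeholder"
    );
}
```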
pub fn get_match(&self, line: &str) -> Option<Match> {
for regex in &self.compiled_regex {
if let Some(matches) = regex.captures(line) {
if !self.patterns.is_empty() {
let mut result = Match::new();
for pattern in self.patterns.as_ref() {
// if the pattern is in an optional part of the regex,
// there may be no captured group for it.
if let Some(match_) = matches.name(&pattern.name)
&& !pattern.is_ignore(match_.as_str())
{
let mut match_ = match_.as_str().to_string();
pattern.normalize(&mut match_);
result.push(match_);
}
}
if result.len() == self.patterns.len() {
return Some(result);
}
} else {
return Some(vec![]);
}
}
}
None
}
/// Checks that the patterns map conforms to the filter's patterns,
/// then returns the corresponding [`Match`].
pub fn get_match_from_patterns(
&self,
mut patterns: BTreeMap<Arc<Pattern>, String>,
) -> Result<Match, String> {
// Check pattern length
if patterns.len() != self.patterns.len() {
return Err(format!(
"{} patterns specified, while the {}.{} filter has {} pattern(s): ({})",
patterns.len(),
self.stream_name,
self.name,
self.patterns.len(),
self.patterns
.iter()
.map(|pattern| pattern.name.clone())
.reduce(|acc, pattern| acc + ", " + &pattern)
.unwrap_or("".into()),
));
}
for (pattern, match_) in &mut patterns {
if !self.patterns.contains(pattern) {
return Err(format!(
"pattern {} is not present in the filter {}.{}",
pattern.name, self.stream_name, self.name
));
}
if !pattern.is_match(match_) {
return Err(format!(
"'{}' doesn't match pattern {}",
match_, pattern.name,
));
}
if pattern.is_ignore(match_) {
return Err(format!(
"'{}' is explicitly ignored by pattern {}",
match_, pattern.name,
));
}
pattern.normalize(match_);
}
for pattern in self.patterns.iter() {
if !patterns.contains_key(pattern) {
return Err(format!(
"pattern {} is missing; it is required by the filter {}.{}",
pattern.name, self.stream_name, self.name
));
}
}
Ok(patterns.into_values().collect())
}
/// Filters the [`Filter`]'s [`Action`]s according to the [`PatternType`]
/// of its [`Pattern`]s and of the given [`Match`].
pub fn filtered_actions_from_match(&self, m: &Match) -> Vec<&Action> {
let ip_type = if self.has_ip {
self.patterns
.iter()
.zip(m)
.find(|(p, _)| p.pattern_type() == PatternType::Ip)
.map(|(_, m)| -> _ {
// Using this dumb heuristic is ok,
// because we know we have a valid IP address.
if m.contains(':') {
PatternType::Ipv6
} else if m.contains('.') {
PatternType::Ipv4
} else {
// This branch should be unreachable, but falling back is safer
// than panicking. A warning might be worth logging here.
PatternType::Regex
}
})
.unwrap_or(PatternType::Regex)
} else {
PatternType::Regex
};
let mut actions: Vec<_> = self
.actions
.values()
// If specific ip version, check it
.filter(move |action| !action.ipv4only || ip_type == PatternType::Ipv4)
.filter(move |action| !action.ipv6only || ip_type == PatternType::Ipv6)
.collect();
// Sort by after
actions.sort_by(|a, b| {
a.after_duration
.unwrap_or_default()
.cmp(&b.after_duration.unwrap_or_default())
});
actions
}
}
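`filtered_actions_from_match` classifies a matched IP with a one-character heuristic: a `:` means IPv6, a `.` means IPv4, which is sound because the pattern has already validated the address. A std-only sketch that cross-checks the heuristic against `std::net::IpAddr` (the `IpKind` enum here is a stand-in for `PatternType`, not the real type):

```rust
use std::net::IpAddr;

#[derive(Debug, PartialEq)]
enum IpKind {
    V4,
    V6,
    Unknown,
}

// The same heuristic as filtered_actions_from_match: the input is assumed
// to already be a valid IP address, so one character check is enough.
fn classify(ip: &str) -> IpKind {
    if ip.contains(':') {
        IpKind::V6
    } else if ip.contains('.') {
        IpKind::V4
    } else {
        IpKind::Unknown
    }
}

fn main() {
    for (text, expected) in [("192.0.2.1", IpKind::V4), ("2001:db8::1", IpKind::V6)] {
        assert_eq!(classify(text), expected);
        // Cross-check against the real parser.
        match text.parse::<IpAddr>().unwrap() {
            IpAddr::V4(_) => assert_eq!(expected, IpKind::V4),
            IpAddr::V6(_) => assert_eq!(expected, IpKind::V6),
        }
    }
    assert_eq!(classify("nonsense"), IpKind::Unknown);
}
```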
impl Display for Filter {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}.{}", self.stream_name, self.name)
}
}
impl PartialEq for Filter {
fn eq(&self, other: &Self) -> bool {
self.stream_name == other.stream_name && self.name == other.name
}
}
impl Eq for Filter {}
impl Ord for Filter {
fn cmp(&self, other: &Self) -> Ordering {
match self.stream_name.cmp(&other.stream_name) {
Ordering::Equal => self.name.cmp(&other.name),
o => o,
}
}
}
impl PartialOrd for Filter {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Hash for Filter {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.stream_name.hash(state);
self.name.hash(state);
}
}
#[cfg(test)]
#[allow(clippy::too_many_arguments)]
impl Filter {
/// Test-only constructor designed to be easy to call
pub fn new(
actions: Vec<Action>,
regex: Vec<&str>,
retry: Option<u32>,
retry_period: Option<&str>,
stream_name: &str,
name: &str,
duplicate: Duplicate,
config_patterns: &Patterns,
) -> Self {
let mut filter = Self {
actions: actions.into_iter().map(|a| (a.name.clone(), a)).collect(),
regex: regex.into_iter().map(|s| s.into()).collect(),
retry,
retry_period: retry_period.map(|s| s.into()),
duplicate,
..Default::default()
};
filter.setup(stream_name, name, config_patterns).unwrap();
filter
}
pub fn new_static(
actions: Vec<Action>,
regex: Vec<&str>,
retry: Option<u32>,
retry_period: Option<&str>,
stream_name: &str,
name: &str,
duplicate: Duplicate,
config_patterns: &Patterns,
) -> &'static Self {
Box::leak(Box::new(Self::new(
actions,
regex,
retry,
retry_period,
stream_name,
name,
duplicate,
config_patterns,
)))
}
}
#[cfg(test)]
pub mod tests {
use crate::concepts::action::tests::{ok_action, ok_action_with_after};
use crate::concepts::pattern::PatternIp;
use crate::concepts::pattern::tests::{
boubou_pattern_with_ignore, default_pattern, number_pattern, ok_pattern_with_ignore,
};
use super::*;
pub fn ok_filter() -> Filter {
let mut filter = Filter::default();
let name = "name".to_string();
filter.regex = vec!["reg".into()];
filter.actions.insert(name.clone(), ok_action());
filter
}
#[test]
fn setup_missing_config() {
let mut filter;
let name = "name".to_string();
let empty_patterns = Patterns::new();
// action but no regex
filter = ok_filter();
filter.regex = Vec::new();
assert!(filter.setup(&name, &name, &empty_patterns).is_err());
// regex but no action
filter = ok_filter();
filter.actions = BTreeMap::new();
assert!(filter.setup(&name, &name, &empty_patterns).is_err());
// ok
filter = ok_filter();
assert!(filter.setup(&name, &name, &empty_patterns).is_ok());
}
#[test]
fn setup_retry() {
let mut filter;
let name = "name".to_string();
let empty_patterns = Patterns::new();
// retry but no retry_period
filter = ok_filter();
filter.retry = Some(2);
assert!(filter.setup(&name, &name, &empty_patterns).is_err());
// retry_period but no retry
filter = ok_filter();
filter.retry_period = Some("2d".into());
assert!(filter.setup(&name, &name, &empty_patterns).is_err());
// invalid retry_period
filter = ok_filter();
filter.retry = Some(2);
filter.retry_period = Some("2".into());
assert!(filter.setup(&name, &name, &empty_patterns).is_err());
// ok
filter = ok_filter();
filter.retry = Some(2);
filter.retry_period = Some("2d".into());
assert!(filter.setup(&name, &name, &empty_patterns).is_ok());
}
#[test]
fn setup_longuest_action_duration() {
let mut filter;
let name = "name".to_string();
let empty_patterns = Patterns::new();
let minute_str = "1m".to_string();
let minute = Duration::from_secs(60);
let two_minutes = Duration::from_secs(60 * 2);
let two_minutes_str = "2m".to_string();
// duration 0
filter = ok_filter();
filter.setup(&name, &name, &empty_patterns).unwrap();
assert_eq!(filter.longuest_action_duration, Duration::default());
let minute_action = ok_action_with_after(minute_str.clone(), &minute_str);
// duration 60
filter = ok_filter();
filter
.actions
.insert(minute_str.clone(), minute_action.clone());
filter.setup(&name, &name, &empty_patterns).unwrap();
assert_eq!(filter.longuest_action_duration, minute);
let two_minutes_action = ok_action_with_after(two_minutes_str.clone(), &two_minutes_str);
// duration 120
filter = ok_filter();
filter.actions.insert(two_minutes_str, two_minutes_action);
filter.actions.insert(minute_str, minute_action);
filter.setup(&name, &name, &empty_patterns).unwrap();
assert_eq!(filter.longuest_action_duration, two_minutes);
}
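The maximum exercised by this test is a fold over optional `after` durations, with `None` treated as zero. A std-only sketch of the same reduction the production `_setup` performs (the `longest` helper is hypothetical):

```rust
use std::time::Duration;

// Fold over optional "after" durations, keeping the largest and treating
// None as zero, like longuest_action_duration in Filter::_setup.
fn longest(afters: &[Option<Duration>]) -> Duration {
    afters.iter().copied().fold(Duration::from_secs(0), |acc, v| {
        v.map_or(acc, |v| if v > acc { v } else { acc })
    })
}

fn main() {
    assert_eq!(longest(&[]), Duration::from_secs(0));
    assert_eq!(
        longest(&[None, Some(Duration::from_secs(60))]),
        Duration::from_secs(60)
    );
    assert_eq!(
        longest(&[
            Some(Duration::from_secs(120)),
            Some(Duration::from_secs(60)),
            None
        ]),
        Duration::from_secs(120)
    );
}
```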
#[test]
fn setup_regexes() {
let name = "name".to_string();
let mut filter;
// make a Patterns
let mut patterns = Patterns::new();
let mut pattern = default_pattern();
pattern.regex = "[abc]".to_string();
assert!(pattern.setup(&name).is_ok());
patterns.insert(name.clone(), pattern.clone().into());
let unused_name = "unused".to_string();
let mut unused_pattern = default_pattern();
unused_pattern.regex = "compile[error".to_string();
assert!(unused_pattern.setup(&unused_name).is_err());
patterns.insert(unused_name.clone(), unused_pattern.clone().into());
let boubou_name = "boubou".to_string();
let mut boubou = boubou_pattern_with_ignore();
boubou.setup(&boubou_name).unwrap();
patterns.insert(boubou_name.clone(), boubou.clone().into());
// correct regex replacement
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter.regex.push("insert <name> here$".to_string());
assert!(filter.setup(&name, &name, &patterns).is_ok());
assert_eq!(
filter.compiled_regex[0].to_string(),
Regex::new("insert (?P<name>[abc]) here$")
.unwrap()
.to_string()
);
assert_eq!(&filter.regex[0].to_string(), "insert (?P<name>[abc]) here$");
assert_eq!(filter.patterns.len(), 1);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, pattern.regex);
// same pattern two times in regex
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter
.regex
.push("there <name> are two <name> s!".to_string());
assert!(filter.setup(&name, &name, &patterns).is_err());
// two patterns in one regex
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter
.regex
.push("insert <name> here and <boubou> there".to_string());
assert!(filter.setup(&name, &name, &patterns).is_ok());
assert_eq!(
filter.compiled_regex[0].to_string(),
Regex::new("insert (?P<name>[abc]) here and (?P<boubou>(?:bou){1,3}) there")
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[0].to_string(),
"insert (?P<name>[abc]) here and (?P<boubou>(?:bou){1,3}) there"
);
assert_eq!(filter.patterns.len(), 2);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, boubou.regex);
let stored_pattern = filter.patterns.last().unwrap();
assert_eq!(stored_pattern.regex, pattern.regex);
// multiple regexes with same pattern
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter.regex.push("insert <name> here".to_string());
filter.regex.push("also add <name> there".to_string());
assert!(filter.setup(&name, &name, &patterns).is_ok());
assert_eq!(
filter.compiled_regex[0].to_string(),
Regex::new("insert (?P<name>[abc]) here")
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[0].to_string(),
"insert (?P<name>[abc]) here"
);
assert_eq!(
filter.compiled_regex[1].to_string(),
Regex::new("also add (?P<name>[abc]) there")
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[1].to_string(),
"also add (?P<name>[abc]) there"
);
assert_eq!(filter.patterns.len(), 1);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, pattern.regex);
// multiple regexes with same patterns
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter
.regex
.push("insert <name> here and <boubou> there".to_string());
filter
.regex
.push("also add <boubou> here and <name> there".to_string());
assert!(filter.setup(&name, &name, &patterns).is_ok());
assert_eq!(
filter.compiled_regex[0].to_string(),
Regex::new("insert (?P<name>[abc]) here and (?P<boubou>(?:bou){1,3}) there")
.unwrap()
.to_string()
);
assert_eq!(
filter.compiled_regex[1].to_string(),
Regex::new("also add (?P<boubou>(?:bou){1,3}) here and (?P<name>[abc]) there")
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[1].to_string(),
"also add (?P<boubou>(?:bou){1,3}) here and (?P<name>[abc]) there"
);
assert_eq!(filter.patterns.len(), 2);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, boubou.regex);
let stored_pattern = filter.patterns.last().unwrap();
assert_eq!(stored_pattern.regex, pattern.regex);
// multiple regexes with different patterns 1
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter.regex.push("insert <name> here".to_string());
filter.regex.push("also add <boubou> there".to_string());
assert!(filter.setup(&name, &name, &patterns).is_err());
// multiple regexes with different patterns 2
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter
.regex
.push("insert <name> here and <boubou> there".to_string());
filter.regex.push("also add <boubou> there".to_string());
assert!(filter.setup(&name, &name, &patterns).is_err());
// multiple regexes with different patterns 3
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter.regex.push("also add <boubou> there".to_string());
filter
.regex
.push("insert <name> here and <boubou> there".to_string());
assert!(filter.setup(&name, &name, &patterns).is_err());
}
#[test]
fn get_match() {
let name = "name".to_string();
let mut filter;
let mut pattern = ok_pattern_with_ignore();
pattern.setup(&name).unwrap();
let pattern = Arc::new(pattern);
let boubou_name = "boubou".to_string();
let mut boubou = boubou_pattern_with_ignore();
boubou.setup(&boubou_name).unwrap();
let boubou = Arc::new(boubou);
let patterns = Patterns::from([
(name.clone(), pattern.clone()),
(boubou_name.clone(), boubou.clone()),
]);
let number_name = "number".to_string();
let mut number_pattern = number_pattern();
number_pattern.setup(&number_name).unwrap();
let number_pattern = Arc::new(number_pattern);
// one simple regex
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
filter.regex.push("insert <name> here$".to_string());
assert!(filter.setup(&name, &name, &patterns).is_ok());
assert_eq!(filter.get_match("insert b here"), Some(vec!("b".into())));
assert_eq!(filter.get_match("insert a here"), None);
assert_eq!(filter.get_match("youpi b youpi"), None);
assert_eq!(filter.get_match("insert here"), None);
// Ok
assert_eq!(
filter.get_match_from_patterns(BTreeMap::from([(pattern.clone(), "b".into())])),
Ok(vec!("b".into()))
);
// Doesn't match
assert!(
filter
.get_match_from_patterns(BTreeMap::from([(pattern.clone(), "abc".into())]))
.is_err()
);
// Ignored match
assert!(
filter
                .get_match_from_patterns(BTreeMap::from([(pattern.clone(), "a".into())]))
                .is_err()
        );
        // Bad pattern
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([(boubou.clone(), "bou".into())]))
                .is_err()
        );
        // Bad number of patterns
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (boubou.clone(), "bou".into()),
                ]))
                .is_err()
        );
        // Bad number of patterns
        assert!(filter.get_match_from_patterns(BTreeMap::from([])).is_err());
        // two patterns in one regex
        filter = Filter::default();
        filter.actions.insert(name.clone(), ok_action());
        filter
            .regex
            .push("insert <name> here and <boubou> there".to_string());
        assert!(filter.setup(&name, &name, &patterns).is_ok());
        assert_eq!(
            filter.get_match("insert b here and bouboubou there"),
            Some(vec!("bouboubou".into(), "b".into()))
        );
        assert_eq!(filter.get_match("insert a here and bouboubou there"), None);
        assert_eq!(filter.get_match("insert b here and boubou there"), None);
        // Ok
        assert_eq!(
            filter.get_match_from_patterns(BTreeMap::from([
                (pattern.clone(), "b".into()),
                (boubou.clone(), "bou".into()),
            ])),
            // Reordered by pattern name
            Ok(vec!("bou".into(), "b".into()))
        );
        // Doesn't match
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "abc".into()),
                    (boubou.clone(), "bou".into()),
                ]))
                .is_err()
        );
        // Ignored match
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (boubou.clone(), "boubou".into()),
                ]))
                .is_err()
        );
        // Bad pattern
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (number_pattern.clone(), "1".into()),
                ]))
                .is_err()
        );
        // Bad number of patterns
        assert!(
            filter
                .get_match_from_patterns(BTreeMap::from([
                    (pattern.clone(), "b".into()),
                    (boubou.clone(), "bou".into()),
                    (number_pattern.clone(), "1".into()),
                ]))
                .is_err()
        );
        // Bad number of patterns
        assert!(filter.get_match_from_patterns(BTreeMap::from([])).is_err());
        // multiple regexes with same pattern
        filter = Filter::default();
        filter.actions.insert(name.clone(), ok_action());
        filter.regex.push("insert <name> here".to_string());
        filter.regex.push("also add <name> there".to_string());
        assert!(filter.setup(&name, &name, &patterns).is_ok());
        assert_eq!(filter.get_match("insert a here"), None);
        assert_eq!(filter.get_match("insert b here"), Some(vec!("b".into())));
        assert_eq!(filter.get_match("also add a there"), None);
        assert_eq!(filter.get_match("also add b there"), Some(vec!("b".into())));
        // multiple regexes with same patterns
        filter = Filter::default();
        filter.actions.insert(name.clone(), ok_action());
        filter
            .regex
            .push("insert <name> here and <boubou> there".to_string());
        filter
            .regex
            .push("also add <boubou> here and <name> there".to_string());
        assert!(filter.setup(&name, &name, &patterns).is_ok());
        assert_eq!(
            filter.get_match("insert b here and bouboubou there"),
            Some(vec!("bouboubou".into(), "b".into()))
        );
        assert_eq!(
            filter.get_match("also add bouboubou here and b there"),
            Some(vec!("bouboubou".into(), "b".into()))
        );
        assert_eq!(filter.get_match("insert a here and bouboubou there"), None);
        assert_eq!(filter.get_match("also add bouboubou here and a there"), None);
        assert_eq!(filter.get_match("insert b here and boubou there"), None);
        assert_eq!(filter.get_match("also add boubou here and b there"), None);
    }
    #[test]
    fn get_match_from_patterns() {
        // TODO
    }
    #[test]
    fn filtered_actions_from_match_one_regex_pattern() {
        let az_patterns = Pattern::new_map("az", "[a-z]+").unwrap();
        let action = Action::new(
            vec!["zblorg <az>"],
            None,
            false,
            "test",
            "test",
            "a1",
            &az_patterns,
            0,
        );
        let filter = Filter::new(
            vec![action.clone()],
            vec![""],
            None,
            None,
            "test",
            "test",
            Duplicate::default(),
            &az_patterns,
        );
        assert_eq!(
            vec![&action],
            filter.filtered_actions_from_match(&vec!["zboum".into()])
        );
    }
    #[test]
    fn filtered_actions_from_match_two_regex_patterns() {
        let patterns = BTreeMap::from([
            (
                "az".to_string(),
                Arc::new(Pattern::new("az", "[a-z]+").unwrap()),
            ),
            (
                "num".to_string(),
                Arc::new(Pattern::new("num", "[0-9]{1,3}").unwrap()),
            ),
        ]);
        let action1 = Action::new(
            vec!["zblorg <az> <num>"],
            None,
            false,
            "test",
            "test",
            "a1",
            &patterns,
            0,
        );
        let action2 = Action::new(
            vec!["zbleurg <num> <az>"],
            None,
            false,
            "test",
            "test",
            "a2",
            &patterns,
            0,
        );
        let filter = Filter::new(
            vec![action1.clone(), action2.clone()],
            vec![""],
            None,
            None,
            "test",
            "test",
            Duplicate::default(),
            &patterns,
        );
        assert_eq!(
            vec![&action1, &action2],
            filter.filtered_actions_from_match(&vec!["zboum".into()])
        );
    }
    #[test]
    fn filtered_actions_from_match_one_regex_one_ip() {
        let patterns = BTreeMap::from([
            (
                "az".to_string(),
                Arc::new(Pattern::new("az", "[a-z]+").unwrap()),
            ),
            ("ip".to_string(), {
                let mut pattern = Pattern {
                    ip: PatternIp {
                        pattern_type: PatternType::Ip,
                        ..Default::default()
                    },
                    ..Default::default()
                };
                pattern.setup("ip").unwrap();
                Arc::new(pattern)
            }),
        ]);
        let action4 = Action::new(
            vec!["zblorg4 <az> <ip>"],
            None,
            false,
            "test",
            "test",
            "action4",
            &patterns,
            4,
        );
        let action6 = Action::new(
            vec!["zblorg6 <az> <ip>"],
            None,
            false,
            "test",
            "test",
            "action6",
            &patterns,
            6,
        );
        let action = Action::new(
            vec!["zblorg <az> <ip>"],
            None,
            false,
            "test",
            "test",
            "action",
            &patterns,
            0,
        );
        let filter = Filter::new(
            vec![action4.clone(), action6.clone(), action.clone()],
            vec!["<az>: <ip>"],
            None,
            None,
            "test",
            "test",
            Duplicate::default(),
            &patterns,
        );
        assert_eq!(
            filter.filtered_actions_from_match(&vec!["zboum".into(), "1.2.3.4".into()]),
            vec![&action, &action4],
        );
        assert_eq!(
            filter.filtered_actions_from_match(&vec!["zboum".into(), "ab4:35f::1".into()]),
            vec![&action, &action6],
        );
    }
}
