Compare commits

...

241 commits

Author SHA1 Message Date
ppom
8a34a1fa11
Remove useless gitlab ci file 2026-03-13 12:00:00 +01:00
ppom
3ca54c6c43
ipset: Better error handling and messages
- Clearer messages.
- Make sure logs are shown in order.
- When cleaning up after an error on startup,
  do not try to undo an action that failed.
2026-03-02 12:00:00 +01:00
ppom
16692731f0
Remove useless chrono dependency from reaction-plugin 2026-03-02 12:00:00 +01:00
ppom
938a366576
More useful error message when plugin can't launch and systemd=true 2026-03-01 12:00:00 +01:00
ppom
5a6c203c01
Add system-reaction.slice 2026-02-27 12:00:00 +01:00
ppom
f2b1accec0
Fix slice-inherit option 2026-02-26 12:00:00 +01:00
ppom
00725ed9e2
notif test: add a filter that shouldn't match 2026-02-26 12:00:00 +01:00
ppom
ea0e7177d9
nftables: Fix bad action advertised 2026-02-26 12:00:00 +01:00
ppom
c41c89101d
Fix #151: Move RegexSet creation from StreamManager to config Stream
This moves the potential error of a too-big regex set to config setup,
a place where it can be handled gracefully, instead of where it was,
where it would make reaction mess up start/stop, etc.
2026-02-26 12:00:00 +01:00
ppom
3d7e647ef7
Adapt tests to nftables configuration 2026-02-25 12:00:00 +01:00
ppom
5b6cc35deb
nftables: Fix compilation errors and actually use libnftables 2026-02-25 12:00:00 +01:00
ppom
0cd765251a
run plugins in the same slice as reaction
And reaction should be started in system-reaction.slice.
The plugins can then be grouped together with the daemon.
2026-02-20 12:00:00 +01:00
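A minimal sketch of the unit configuration this implies (illustrative fragment; the actual shipped unit files may differ):

```ini
# reaction.service (illustrative fragment): run the daemon inside a
# dedicated slice; plugin subprocesses spawned by reaction land in the
# same cgroup subtree and can be accounted and limited together.
[Service]
Slice=system-reaction.slice
```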
ppom
26cf3a96e7
First draft of an nftables plugin
Not compiling yet but I'm getting there.
Must be careful with the unsafe, C-wrapping code!
2026-02-20 12:00:00 +01:00
ppom
285954f7cd
Remove outdated FIXME 2026-02-18 12:00:00 +01:00
ppom
dc51d7d432
Add support for macOS 2026-02-17 12:00:00 +01:00
ppom
488dc6c66f
Update release instructions 2026-02-15 12:00:00 +01:00
ppom
88c99fff0f
Fix install instructions 2026-02-12 12:00:00 +01:00
ppom
645d72ac1e
.gitignore cleanup 2026-02-12 12:00:00 +01:00
ppom
a7e958f248
Update ARCHITECTURE.md 2026-02-12 12:00:00 +01:00
ppom
5577d4f46f
reaction-plugin: Add metadata 2026-02-12 12:00:00 +01:00
ppom
a8cd1af78d
Set CapabilityBoundingSet again 2026-02-12 12:00:00 +01:00
ppom
2f57f73ac9
Fix systemd functionality
- Non-absolute WorkingDirectory was refused by systemd
- Plugin specific-conf updated

Improvements:
- ReadOnlyPaths=/
- ProtectHome=true in release builds
- SystemCallFilter further restricted

Disabled:
- DynamicUser: breaks stdio communication, FIXME!
- RestrictAddressFamilies: seems impossible to override to default.
- CapabilityBoundingSet: too restrictive
2026-02-12 12:00:00 +01:00
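Assembled from the options this commit names, a hedged sketch of the resulting unit fragment (the path and the exact syscall set are assumptions, not taken from the log):

```ini
# Illustrative fragment only: options named in the commit message.
[Service]
WorkingDirectory=/var/lib/reaction   # must be absolute for systemd; path assumed
ReadOnlyPaths=/
ProtectHome=true                     # release builds
SystemCallFilter=@system-service     # "further restricted"; exact set assumed
# DynamicUser=yes                    # disabled: breaks stdio communication
# RestrictAddressFamilies=           # disabled: can't override back to default
# CapabilityBoundingSet=             # disabled: too restrictive
```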
ppom
d629d57a7e
Change ipset version option from 4/6/46 to ipv4/ipv6/ip 2026-02-12 12:00:00 +01:00
ppom
3c20d8f008
Fix merging of systemd options 2026-02-12 12:00:00 +01:00
ppom
5a030ffb7e
Make systemd default options more accessible for users by moving them up 2026-02-12 12:00:00 +01:00
ppom
a4ea173c13
Do not permit options key when stream/action is not a plugin 2026-02-12 12:00:00 +01:00
ppom
3a61db9e6f
plugin: shutdown: add function that permits graceful shutdown by signal
Handling SIGTERM (and similar) signals permits graceful shutdown, cleaning of resources, etc.

Added in ipset and cluster.
2026-02-12 12:00:00 +01:00
ppom
b4313699df
systemd: Let reaction stop its subprocesses before killing them
systemd by default sends SIGTERM to all processes in the cgroup, which
doesn't let reaction handle the shutdown of its plugins.
This is fixed by adding KillMode=mixed.
2026-02-12 12:00:00 +01:00
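The systemd behaviour described here is standard: with the default KillMode=control-group, systemd signals every process in the unit's cgroup at once. A minimal fragment:

```ini
[Service]
# Signal only the main reaction process on stop; reaction then shuts
# down its plugin subprocesses itself before systemd cleans up the rest.
KillMode=mixed
```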
ppom
270c6cb969
systemd service: config file must live in /etc/reaction/
This is a breaking change, but it unifies config
for yaml, json, jsonnet and directory users.
2026-02-12 12:00:00 +01:00
ppom
15f923ef64
Safeguard against users executing plugins themselves
main_loop now first checks that it has been started with the `serve` argument.
If not, it prints an info message and quits.
2026-02-11 12:00:00 +01:00
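The guard can be sketched like this (a minimal sketch with hypothetical names; `should_serve` and the message text are illustrative, not the project's actual code):

```rust
use std::env;

// Hypothetical sketch: a plugin only enters its main loop when reaction
// started it with the `serve` argument.
fn should_serve(args: &[String]) -> bool {
    args.get(1).map(|a| a == "serve").unwrap_or(false)
}

fn main() {
    let args: Vec<String> = env::args().collect();
    if !should_serve(&args) {
        eprintln!("info: this is a reaction plugin; it is launched by the reaction daemon, not by hand.");
        return;
    }
    // ... main_loop would run here ...
}
```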
ppom
a37a5e5752
release v2.3.0
- cross-rs project doesn't compile anymore: switching to debian12-amd64 only binary release
- package virtual plugin in reaction .deb
- package ipset plugin in separate .deb with its required libipset-dev dependency
2026-02-11 12:00:00 +01:00
ppom
a8651bf2e0
Removal of nft46 and ip46tables 2026-02-11 12:00:00 +01:00
ppom
b07b5064e9
Improve reaction-plugin developer documentation 2026-02-11 12:00:00 +01:00
ppom
b7d997ca5e
Slight change on the "no audit" sentence 2026-02-09 12:00:00 +01:00
ppom
cce850fc71
Add recommendation to use ipset or nftables rather than plain iptables 2026-02-09 12:00:00 +01:00
ppom
109fb6d869
Adapt reaction core to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
ae28cfbb31
cluster: adapt to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
b0dc3c56ad
ipset: adapt to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
57d6da5377
virtual: adapt to plugin interface change 2026-02-09 12:00:00 +01:00
ppom
12fc90535a
Change plugin interface: oneshot load_config and start
Instead of multiple stream_impl / action_impl and one finish_setup.
This made plugin implementations awkward: they often got some conf and
couldn't determine whether it was valid or not.
Now they get all the conf in one function and don't have to keep partial
state from one call to another.

This has the other important benefit that configuration loading is
separated from startup. This will make plugin lifecycle management less
clunky.
2026-02-09 12:00:00 +01:00
ppom
62933b55e4
Start plugins after start commands
Stop commands run after the plugins' shutdown, so it seems better
that commands wrap the plugins ({ plugins }).

Fix outdated comment about aborting on startup.
2026-02-09 12:00:00 +01:00
ppom
34e2a8f294
plugin: simpler crate version retrieval 2026-02-09 12:00:00 +01:00
ppom
41bc3525f8
Fix time-based test sometimes failing by increasing sleep 2026-02-09 12:00:00 +01:00
ppom
5ce773c8e5
cluster: ignore integration tests for now 2026-02-09 12:00:00 +01:00
ppom
6914f19fb8
fix assert_cmd::cargo_bin deprecation warning 2026-02-09 12:00:00 +01:00
ppom
7cd4a4305d
fix: merge plugins in configuration 2026-02-09 12:00:00 +01:00
ppom
c39fdecef3
ipset: add tests for configuration 2026-02-09 12:00:00 +01:00
ppom
885e6b7ef7
ipset: re-arrange spacing in logs 2026-02-09 12:00:00 +01:00
ppom
516e6956ab
fix double-printing of square brackets in plugin logs 2026-02-09 12:00:00 +01:00
ppom
79ec6d279f
ipset: Manual e2e test does pass 2026-02-09 12:00:00 +01:00
ppom
a83c93ac9d
ipset: do not shutdown plugin when one action errors 2026-02-09 12:00:00 +01:00
ppom
47947d18db
ipset: Fix dumb bug due to future not awaited
The edge case is so dumb, cargo is supposed to tell me about this ><

Just learnt that Python never warns about this btw:
https://trio.readthedocs.io/en/v0.9.0/tutorial.html#warning-don-t-forget-that-await
2026-02-09 12:00:00 +01:00
ppom
915e308015
Better plugin process management
The stderr-following task doesn't use shutdown anymore. It will simply follow
stderr until the end of reaction, which at worst is a negligible
memory leak if reaction continues running.
I tried closing stderr on the plugin side with a raw syscall on the file
descriptor, but the reaction side doesn't see that stderr is closed,
so I can't rely on that.
Quitting when shutdown.wait() returns is too early, because that's also
what makes reaction ask the plugin to close(), and it can print
important logs during its shutdown.
Having the task ignore the whole shutdown part is dead simple and is most
likely correct every time.

updated the wording of plugin-related errors.

also replaced futures::select! { future, sleep() } with more concise and
macro-less tokio::timeout.
2026-02-09 12:00:00 +01:00
ppom
41b8a661d2
Print on stderr instead of stdout
...stdout is already taken by remoc ;)
2026-02-09 12:00:00 +01:00
ppom
87a25cf04c
Extract ipset options from action options so that they're globally merged
Actions don't manage sets anymore.
Set options are merged at each new action,
then Sets are managed by themselves.
2026-02-09 12:00:00 +01:00
ppom
d6b6e9096b
ipset: Add the add/del option, journal orders & deduplicate them 2026-02-09 12:00:00 +01:00
ppom
3ccd471b45
ipset: so much ~~waow~~ code 2026-02-09 12:00:00 +01:00
ppom
3a6260fa26
reaction-plugin-ipset: first work session 2026-02-09 12:00:00 +01:00
kol3rby
959c32c01e Fix project not compiling on BSD & Solaris systems 2026-02-09 11:03:00 +01:00
ppom
05c6c1fbce
Fix tests
I initially wrote those tests with a test secret key file in the same directory.
Better to have them write their own secret key file in their own dir
than to keep a dangling test file in the source code and be sensitive to the directory tests are run in.
2026-01-19 12:00:00 +01:00
ppom
615d721c9a
cluster: Upgrade iroh to 0.95.1 2026-01-19 12:00:00 +01:00
ppom
19ee5688a7
Testing with clusters of up to 15 nodes. Fails at ~6 to 9 nodes.
Still a "connection lost" issue.
Happens irregularly.
Nodes tend to ignore incoming connections because their id is too small.
I should debug why it is the case.
Nodes may succeed in recreating connections,
but they should not lose connections on localhost like that...
2026-01-19 12:00:00 +01:00
ppom
fb6f54d84f
Disable test where one plugin is in multiple nodes of one cluster. Tests pass! 2026-01-19 12:00:00 +01:00
ppom
4fce6ecaf5
No long-living task to try to connect to a node; a one-shot task instead. Add interval randomness. 2026-01-19 12:00:00 +01:00
ppom
5bfcf318c7
Tests on a cluster of 2 nodes 2026-01-19 12:00:00 +01:00
ppom
7ede2fa79c
cluster: Fix use of stream timestamp in action 2026-01-19 12:00:00 +01:00
ppom
1e082086e5
cluster: add tests
- on configuration
- on sending messages to its own cluster
2026-01-19 12:00:00 +01:00
ppom
5a44ae89e9
sleep a bit more to fix time-sensitive test 2025-12-16 12:00:00 +01:00
ppom
8b3bde456e
cluster: UTC: no need for conversions, as Time already is UTC-aware 2025-12-14 12:00:00 +01:00
ppom
2095009fa9
cluster: use treedb for message queue persistence 2025-12-15 12:00:00 +01:00
ppom
f414245168
Separate treedb into its own crate 2025-12-14 12:00:00 +01:00
ppom
c595552504
plugin: Remove action oneshot response 2025-12-07 12:00:00 +01:00
ppom
96a551f7b9
Remove debug 2025-12-07 12:00:00 +01:00
ppom
f6e03496e1
Fix plugin_cluster test, now passing 🎉 2025-12-07 12:00:00 +01:00
ppom
fbf8c24e31
cluster: try_connect opens the channels and handshakes itself
This fixes a deadlock where each node is initiating a connection
and therefore unable to accept an incoming connection.

connection_rx can now be either a raw connection or an initialized connection.
cluster startup has been refactored to take this into account and make
ConnectionManager create this channel itself.
2025-12-08 12:00:00 +01:00
ppom
114dcd9945
Remove extra space in plugin relogging 2025-12-07 12:00:00 +01:00
ppom
da257966d9
Fix connection time out 🎉
I misinterpreted a millisecond arg as seconds, so the timeout was at 2ms
and the keep alive at 200ms, what could go wrong?

I also gave this TransportConfig option to connect too; otherwise the
default is used, not the Endpoint's own config.
https://github.com/n0-computer/iroh/issues/2872
2025-12-07 12:00:00 +01:00
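The unit mix-up is easy to reproduce with std::time::Duration (the numbers are taken from the commit message; the surrounding iroh API is not shown):

```rust
use std::time::Duration;

fn main() {
    // A value meant as "2 seconds" fed to a milliseconds parameter
    // yields a 2 ms timeout, well under the 200 ms keep-alive,
    // so every connection times out before it is kept alive.
    let intended_timeout = Duration::from_secs(2);
    let actual_timeout = Duration::from_millis(2);
    let keep_alive = Duration::from_millis(200);
    assert!(actual_timeout < keep_alive);
    assert!(keep_alive < intended_timeout);
}
```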
ppom
b14f781528
cluster: use reaction_plugin's PatternLine 2025-12-08 12:00:00 +01:00
ppom
c9e3a07fde
ignore cluster test for now 2025-12-07 12:00:00 +01:00
ppom
aac9a71d4e
DB migration for previous commit change 2025-12-07 12:00:00 +01:00
ppom
79d85c1df1
Reduce usage of chrono
TODO: handle migrations
2025-12-07 12:00:00 +01:00
ppom
1c423c5258
Fix panic caused by previous commit
Connections still close as soon as they idle :/
2025-12-07 12:00:00 +01:00
ppom
b667b1a373
Get rid of remoc for peer communications
I couldn't understand why all communications timed out as soon as all
messages were sent, with a remoc RecvError::ChMux "multiplexer terminated".

So I'm getting rid of remoc (for now at least) and sending/receiving
raw data over the stream.

For now it panics after the handshake completes, which is already good
after only one test O:D
2025-12-07 12:00:00 +01:00
ppom
83ac520d27
Connections have ids, to fix simultaneous connections races 2025-12-07 12:00:00 +01:00
ppom
81fa49aa5c
Add tests to virtual and use reaction-plugin's PatternLine
Those tests made it possible to find the bug that led me to create PatternLine.
Also add a serde option to deny extra keys in virtual action's config
2025-12-07 12:00:00 +01:00
ppom
e22429f92e
Add time to Exec messages, so that plugin actions don't have to calc this 2025-12-07 12:00:00 +01:00
ppom
da5c3afefb
Provide a correct implementation of user-configured match line parsing 2025-12-07 12:00:00 +01:00
ppom
3ed2ebd488
Two nodes succeeded to exchange messages 🎉
Separated try_connect into another task, to prevent deadlock

Send a byte to the new stream so that the other can see the stream
and accept it.
2025-12-07 12:00:00 +01:00
ppom
ff5200b0a0
cluster: add a lot of DEBUG msgs, Show trait to ease logging 2025-12-07 12:00:00 +01:00
ppom
a5d31f6c1a
cluster: First round of fixes and tests after first run
Still not working!
2025-12-07 12:00:00 +01:00
ppom
43fdd3a877
cluster: finish first draft
finish ConnectionManager main loop
handle local & remote messages, maintain local queue
2025-12-07 12:00:00 +01:00
ppom
2216edfba0
shutdown: permit ShutdownController to be cloned
When multiple tasks can ask to quit
2025-12-07 12:00:00 +01:00
ppom
0635bae544
cluster: created ConnectionManager
Reorganized code.
Moved some functionality from EndpointManager to ConnectionManager.
Still a lot to do there, but little in the rest of the code.
2025-12-07 12:00:00 +01:00
ppom
552b311ac4
Move shutdown module to reaction-plugin and use in cluster 2025-12-07 12:00:00 +01:00
ppom
71d26766f8
plugin: Stream plugins now pass time information along their lines
This will permit the cluster to accurately receive older-than-immediate
information, and it will permit potential log plugins (journald?) to go
back in time at startup.
2025-12-07 12:00:00 +01:00
ppom
bc0271b209
Fix test that did not pass when virtual was not previously built
This seems a bit hacky though because
the test needs to have `cargo` in `$PATH`
2025-12-07 12:00:00 +01:00
ppom
5782e3eb29
Fix reaction-plugin doctests 2025-12-07 12:00:00 +01:00
ppom
a70b45ba2d
Move parse_duration to reaction-plugin and fix dependency tree 2025-12-07 12:00:00 +01:00
ppom
40c6202cd4
WIP switch to one task per connection 2025-12-07 12:00:00 +01:00
ppom
7e680a3a66
Remove shared_secret option 2025-12-07 12:00:00 +01:00
ppom
9235873084
Expose parse_duration to the plugin
It may be better to put it in the reaction-plugin module instead
2025-12-07 12:00:00 +01:00
ppom
ba9ab4c319
Remove insecure handshake and just check if we know this public key 2025-12-07 12:00:00 +01:00
ppom
2e7fa016c6
Insecure hash-based handshake. I must find something else. 2025-12-07 12:00:00 +01:00
ppom
3f6e74d096
Accept remote connections. Prepare work for shared_secret handshake
Renamed ConnectionInitializer to EndpointManager.
Endpoint isn't shared with Cluster anymore.

Moved big `match` in `loop` to own function, mainly to separate it from
the select macro and reduce LSP latency. But that's cleaner too.
2025-12-07 12:00:00 +01:00
ppom
983eff13eb
cluster initialization
- Actions are connected to Cluster,
- Separate task to (re)initialize connections
2025-12-07 12:00:00 +01:00
ppom
db622eec53
show plugin stream exit error only when not quitting 2025-12-07 12:00:00 +01:00
ppom
cd2d337850
Fixed communication error: do not use serde_json::Value
So maybe serde_json's Value can't be serialized with postbag.
Recreated my own Value that can be converted from and to serde_json's.

removed one useless tokio::spawn.
2025-12-07 12:00:00 +01:00
ppom
ebf906ea51
Better doc and errors 2025-12-07 12:00:00 +01:00
ppom
310d3dbe99
Fix plugin build, one secret key per cluster, more work on cluster init 2025-12-07 12:00:00 +01:00
ppom
58180fe609
fmt, clippy, tests, fix some tests after startup refactor 2025-12-07 12:00:00 +01:00
ppom
20921be07d
Fix daemon startup: all subsystems will cleanly exit
Regardless of which startup error makes reaction exit.

Also made plugin stderr task exit when the ShutdownToken asks for it.
Also updated Rust edition to 2024.
2025-12-07 12:00:00 +01:00
ppom
a7604ca8d5
WIP allow plugins to print errors to stderr and capture them
I have a race condition where reaction quits before printing the process's stderr.
This will be the occasion to rework (again) reaction's daemon startup
2025-12-07 12:00:00 +01:00
ppom
124a2827d9
Cluster plugin init
- Remove PersistData utility
- Provide plugins a state directory instead, by starting them inside.
- Store the secret key as a file inside this directory.
- Use iroh's crate for base64 encoding, thus removing one dependency.
- Implement plugin's stream_impl and action_impl functions,
  creating all necessary data structures.
2025-12-07 12:00:00 +01:00
ppom
e3060d0404
cluster: retrieve, generate and store iroh SecretKey 2025-12-07 12:00:00 +01:00
ppom
c918910453
plugin: add simple way to store small data for plugins 2025-12-07 12:00:00 +01:00
ppom
61fe405b85
Add cluster plugin skeleton 2025-12-07 12:00:00 +01:00
ppom
8d864b1fb9
Add PersistData to trait 2025-12-07 12:00:00 +01:00
ppom
fa350310fd
plugin protocol: add manifest with version 2025-12-07 12:00:00 +01:00
ppom
0c4d19a4d7
plugins are now named
and fixed the virtual test
2025-12-07 12:00:00 +01:00
ppom
9f56e5d8d2
fmt 2025-12-07 12:00:00 +01:00
ppom
a5c563d55f
WIP systemd support
The logic seems to be fine.
Still need to think what security defaults are pertinent.
2025-12-07 12:00:00 +01:00
ppom
76bc551043
Specify reaction as default bin 2025-12-07 12:00:00 +01:00
ppom
b44800ed30
cargo build builds plugin
And benchmark for virtual plugin
2025-12-07 12:00:00 +01:00
ppom
7cbf482e4d
plugin improvements
- fix panic of channel(0)
- cleaner plugin interface with one level of Result
- standalone metadata for stream plugins
- new test for plugin virtual
2025-12-07 12:00:00 +01:00
ppom
f08762c3f3
First shot of "virtual stream" plugin 2025-12-07 12:00:00 +01:00
ppom
160d27f13a
Fix tests 2025-12-07 12:00:00 +01:00
ppom
147a4623b2
First building version of reaction with plugins 2025-12-07 12:00:00 +01:00
ppom
d887acf27e
Adapt Config and plugin loading
daemon::Stream integration TBD
2025-12-07 12:00:00 +01:00
ppom
a99dea4421
Adapt reaction-plugin to remoc 2025-12-07 12:00:00 +01:00
ppom
fc11234f12
Load plugins not on the config side, but on the stream/action manager side
Trying to implement this on the StreamManager first.
I get lifetime errors that make no sense to me, as if futures should
hold every argument with 'static.

I wonder if I should try to convert everything stabby to abi_stable &
async_ffi. I'll try this and see if it solves anything.
2025-12-07 12:00:00 +01:00
ppom
05f30c3c57
First WIP iteration on the plugin system, reaction side.
Delaying the implementation of plugin Filters. I'm not sure it's useful
(apart from JSON, what can be done?), and it's likely to be more painful
than the rest.
I'll probably just implement one custom JSON Filter like I did with
Pattern's IP support.
2025-12-07 12:00:00 +01:00
ppom
338aa8a8a2
Fix compilation error
The lifetime compilation error disappears when the filter methods are
async, so let's just do that for now
2025-12-07 12:00:00 +01:00
ppom
ae46932219
Fix workspace dependency 2025-12-07 12:00:00 +01:00
ppom
8229f01182
Dependency cleanup 2025-12-07 12:00:00 +01:00
ppom
22125dfd53
First plugin shot 2025-12-07 12:00:00 +01:00
ppom
d36e54c55b
README: add mail 2025-10-31 12:00:00 +01:00
ppom
fa63a9feb8
README: fix example YAML 2025-10-07 12:00:00 +02:00
ppom
7f0cf32666
v2.2.1 2025-09-20 12:00:00 +02:00
Baptiste Careil
c6e4af96cd
Fix some triggers no longer triggering after being loaded from db 2025-09-19 12:00:00 +02:00
ppom
278baaa3e6
Shorter variant of the heavy load benchmark for quicker results 2025-09-19 12:00:00 +02:00
ppom
974139610f
async db
Fixing deadlock on start.
FilterManager sends a lot of write operations on start.
Each of them spawned a new task to send the log on a channel.
All those writes were unblocked when the Database started, shortly after.
Now that the channel send is awaited, this made a deadlock.

The Database's API and startup have been rewritten, so that open_tree goes
across the same channel used to log write operations.

The Database is started as soon as it is opened.
The Database struct is now just a Sender to the real Database, now
DatabaseManager.

This removes the constraint that Tree opening happen before any write
operation!
2025-09-19 12:00:00 +02:00
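The shape of the rewrite, where the public handle is just a Sender to a manager task, can be sketched with std threads and channels (illustrative types only; the real code is async and persists to disk):

```rust
use std::sync::mpsc;
use std::thread;

// All operations cross one channel, so writes are applied strictly in
// the order they were sent, and tree opening can share the same path.
enum Op {
    Write(String, String),
    Dump(mpsc::Sender<Vec<(String, String)>>),
}

// The public handle is only a Sender; cloning it is cheap.
#[derive(Clone)]
struct Database {
    tx: mpsc::Sender<Op>,
}

fn start_database_manager() -> Database {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut store = Vec::new();
        for op in rx {
            match op {
                Op::Write(k, v) => store.push((k, v)),
                Op::Dump(reply) => {
                    let _ = reply.send(store.clone());
                }
            }
        }
    });
    Database { tx }
}

fn main() {
    let db = start_database_manager();
    db.tx.send(Op::Write("ip".into(), "1.2.3.4".into())).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    db.tx.send(Op::Dump(reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap().len(), 1);
}
```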
ppom
aec3bb54ed
async db 2025-09-07 12:00:00 +02:00
ppom
582889f71e
WIP async db
Fixes an inherent problem in the sync db, which spawned a new task for
persistence. This made the log unordered, which can cause consistency
issues.
2025-09-07 12:00:00 +02:00
ppom
e37bd6ebbe
Add a mention of Azlux's third-party repository
Related to #134
2025-09-06 12:00:00 +02:00
Baptiste Careil
1f734a516d Fix test load_conf_directory
- Fixed concurrency to 1 not to be platform dependent
- Added fields introduced by recent changes
- Used builtin str comparator that produces a diff instead of the eq
  predicate
2025-08-17 18:33:09 +02:00
ppom
e45963dd4c
Debian: Add extended-description
Fix #134
2025-08-11 12:00:00 +02:00
ppom
0e75514db3
Debian: Add section information
Fix #134
2025-08-11 12:00:00 +02:00
ppom
fc6a385574
Add armhf-gnu build for Raspberry Pis 2025-08-11 12:00:00 +02:00
ppom
dcc2e1ec4c
v2.2.0 2025-08-08 12:00:00 +02:00
ppom
ca89c7f72a
Fix filter commands executing before start commands
Now creating the socket file before starting its manager.
So I can launch start commands after its creation, and before creating
the filter managers.
2025-08-08 12:00:00 +02:00
ppom
e8f13dc9ff
cargo fmt 2025-08-08 12:00:00 +02:00
ppom
a7b63b69a8
Database: finish writing entries when quitting 2025-08-08 12:00:00 +02:00
ppom
10bd0a1859
Tree::fetch_update: Do not remove and re-add entries.
Better to clone the value than to write another entry!
2025-08-08 12:00:00 +02:00
ppom
f4b5ed20ab
Add debug on start/stop commands 2025-08-08 12:00:00 +02:00
ppom
58f4793308
Fix triggers being forgotten on after actions with on_exit: true
decrement_trigger does not delete triggers anymore when exiting

test still failing because filters start before start commands
2025-08-08 12:00:00 +02:00
ppom
607141f22f
Fix after action commands not being correctly awaited
We were scheduling the action with exec_now, but it spawns a new task
itself, which did not have the ShutdownToken.

Persistence part of the start_stop test doesn't work
because when the after actions are executed, they decrement the trigger,
which is then removed from DB.
So they should not decrement it anymore, just check that it's still
there. Next commit!
2025-08-06 12:00:00 +02:00
ppom
c824583613
Add new failing tests on start / stop sequences.
They fail because reaction doesn't correctly order stop commands after
2025-08-06 12:00:00 +02:00
ppom
91885e49bd
Ignore new tests that fail for now
FIXME check this later
2025-08-06 12:00:00 +02:00
ppom
eea708883b
Add example config equality test 2025-08-06 12:00:00 +02:00
Baptiste Careil
0337fcab1f
Automate some tests 2025-08-06 12:00:00 +02:00
ppom
90ec56902a
Add tests for triggers tree migration 2025-08-06 12:00:00 +02:00
ppom
eaf40cb579
test Filter::regex conformity after setup 2025-08-06 12:00:00 +02:00
ppom
441d981a20
Duplicate::Ignore: do not show ignored matches
move match logging from concepts/filter to daemon/filter
2025-08-06 12:00:00 +02:00
ppom
f36464299a
Duplicate::Extend: reschedule correctly actions not already triggered
Before, it rescheduled all actions with an `after` directive,
which is wrong when some after actions have already been executed
(in case of different actions with different after durations)
2025-08-06 12:00:00 +02:00
ppom
a1df62077c
cargo clippy 2025-08-05 12:00:00 +02:00
ppom
56e4d77854
Deduplication of triggers on start 2025-08-05 12:00:00 +02:00
ppom
f4d002c615
Fix trigger count on start
schedule_exec was called before inserting the data in triggers,
resulting in the action count being set again after the decrement in
schedule_exec.
This could lead to:
- trigger not disappearing after done
- second action with no "after" not being run
- ...
2025-08-05 12:00:00 +02:00
ppom
f477310a29
duplicates: Add failing tests for Deduplication on start 2025-08-05 12:00:00 +02:00
ppom
773eb76f92
Update README to advertise ip-specific features 2025-08-04 12:00:00 +02:00
ppom
59c7bfdd1d
Move action filtering logic from daemon to concepts and use at 3 places
Used in Filter::schedule_exec, Filter::handle_order, State::add_trigger
Add proper testing.
This also fix previously failing test.
2025-08-04 12:00:00 +02:00
ppom
cebdbc7ad0
ipv4 regex: do not accept numbers 0[0-9]
The Rust std won't accept them anyway, as it interprets numbers starting
with 0 as octal numbers and forbids that.
2025-08-04 12:00:00 +02:00
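std's parser indeed rejects zero-prefixed octets (they would be ambiguous with legacy octal notation), which can be checked directly:

```rust
use std::net::Ipv4Addr;

fn main() {
    // Plain dotted-decimal parses fine...
    assert!("192.168.0.1".parse::<Ipv4Addr>().is_ok());
    // ...but octets written 0[0-9] are refused by std, so the IP regex
    // may as well refuse them too.
    assert!("192.168.00.1".parse::<Ipv4Addr>().is_err());
    assert!("01.2.3.4".parse::<Ipv4Addr>().is_err());
}
```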
ppom
0b2bfe533b
Update example configs to get rid of ip46tables 2025-08-04 12:00:00 +02:00
ppom
a0b804811b
Refacto: make all Config structures' fields public
Config is 'static after setup anyway.
I don't need to hide all this, it's just cumbersome for tests.
2025-08-04 12:00:00 +02:00
ppom
6f63f49acd
Add failing test for flushing ipvXonly actions 2025-08-04 12:00:00 +02:00
ppom
b927ba4fdf
Add ipv4only/ipv6only logic to actions 2025-08-04 12:00:00 +02:00
ppom
e4e50dd03b
cargo clippy 2025-08-04 12:00:00 +02:00
ppom
0a9c7f97df
Split IP pattern code in 3 files 2025-08-04 12:00:00 +02:00
ppom
130607d28f
Add test for pattern deserialization 2025-08-04 12:00:00 +02:00
ppom
19e3b2bf98
Make IP regex much more robust and add tests
IP will be correctly extracted in any regex line, even if it is
surrounded by greedy catch-all: .*<ip>.*

This was actually hard to do!
2025-08-04 12:00:00 +02:00
ppom
421002442e
Add ip tests on daemon::filter
Fix PatternType deserialization
Fix regex deserialization (now optional)
Tests currently failing
2025-08-04 12:00:00 +02:00
ppom
4f79b476aa
Cut ip regexes in smaller blocks and add tests 2025-08-04 12:00:00 +02:00
ppom
6cde89cc4b
rename file 2025-08-04 12:00:00 +02:00
ppom
43f8b66870
Update config documentation 2025-08-04 12:00:00 +02:00
ppom
94b40c4a0b
Add more tests
Done: Tests on PatternIp.
Todo: Tests on Pattern.

Fixed a bug in is_ignore.
Checked a new possible misconfiguration.
2025-08-04 12:00:00 +02:00
ppom
a5f616e295
WIP pattern ip
add ipv{4,6}mask
factorize redundant code in util functions
normalize match
most tests done
2025-08-04 12:00:00 +02:00
ppom
04b5dfd95b
ip: Add includes, tests, more setup constraints 2025-08-04 12:00:00 +02:00
ppom
44e5757ae3
WIP pattern ip 2025-08-04 12:00:00 +02:00
ppom
ea0452f62c
Fix components starting order
Now Database and Socket components are created before start commands are
executed. So in case of error, start commands are not executed.

Also socket syscalls are now async instead of blocking, for better
integration with the async runtime.

New start order:
- DB
- Socket
- Start commands
- Streams
2025-08-04 12:00:00 +02:00
ppom
6b970e74c5
Update configuration reference 2025-07-14 12:00:00 +02:00
ppom
d8db2a1745
Add extensive test on Duplicate and fix related bug 2025-07-14 12:00:00 +02:00
ppom
6f346ff371
Test existing FilterManager tests for each Duplicate enum 2025-07-14 12:00:00 +02:00
ppom
81e5fb4c42
add State tests and fix trigger persistance
Triggers were only persisted for the retry duration, instead of the longest
action duration.
As retry is often shorter than after, this would make reaction forget
most triggers on restart.
entry_timeout is now set to longuest_action_duration.
2025-07-14 12:00:00 +02:00
ppom
270a1a9bdf
Duplicate: Fix tests, more tests 2025-07-14 12:00:00 +02:00
ppom
d9842c2340
Duplicate::Extend: Re-Trigger only after actions
- implement schedule_exec's only_after
2025-07-14 12:00:00 +02:00
ppom
22384a2cb4
rename React::Exec to React::Trigger 2025-07-14 12:00:00 +02:00
ppom
2cebb733b5
WIP duplicates
- change remove_trigger to remove all triggers for a Match
- schedule_exec will take only_after boolean
2025-07-14 12:00:00 +02:00
ppom
881fc76bf9
WIP duplicates
- new duplicate option
- change triggers Tree structure to keep O(log(n)) querying:
  now we need to know if a match already has a trigger.
- triggers migration
- triggers adaptations in State & FilterManager
2025-07-14 12:00:00 +02:00
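One way to get an O(log n) "does this match already have a trigger?" query, sketched with a std BTreeMap keyed by (match, time) (illustrative only; the project's actual tree schema is not shown in the log):

```rust
use std::collections::BTreeMap;

fn main() {
    // Keys sort by match first, then time, so all triggers of one match
    // are contiguous and one range lookup answers membership.
    let mut triggers: BTreeMap<(String, u64), ()> = BTreeMap::new();
    triggers.insert(("1.2.3.4".to_string(), 100), ());
    triggers.insert(("1.2.3.4".to_string(), 250), ());
    triggers.insert(("5.6.7.8".to_string(), 300), ());

    let has_trigger = |m: &str| {
        triggers
            .range((m.to_string(), 0)..=(m.to_string(), u64::MAX))
            .next()
            .is_some()
    };
    assert!(has_trigger("1.2.3.4"));
    assert!(!has_trigger("9.9.9.9"));
}
```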
ppom
4ddaf6c195
v2.1.2 2025-07-14 12:00:00 +02:00
ppom
b62f085e51
Fix trigger persistance
Triggers were only persisted for the retry duration, instead of the longest
action duration.
As retry is often shorter than after, this would make reaction forget
most triggers on restart.
entry_timeout is now set to longuest_action_duration.

Cherry picked from the duplicate branch.
2025-07-14 12:00:00 +02:00
ppom
fd0dc91824
Get rid of useless Buffer wrapper for Vec<u8>
Write is already implemented on Vec<u8>
2025-07-11 12:00:00 +02:00
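That std impl is easy to confirm: Vec<u8> satisfies std::io::Write out of the box, so no wrapper type is needed.

```rust
use std::io::Write;

fn main() {
    // write!/writeln! work on a plain Vec<u8>.
    let mut buf: Vec<u8> = Vec::new();
    write!(buf, "banned {}", "1.2.3.4").unwrap();
    assert_eq!(buf, b"banned 1.2.3.4");
}
```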
ppom
d880f7338b
Get rid of low-level async with Poll
use futures::stream::try_unfold to create a Stream from an async closure
2025-07-11 12:00:00 +02:00
ppom
e0609e3c3e
Move rewrite section 2025-07-08 12:00:00 +02:00
ppom
28f136f491
README update
project status update (rust rewrite ok)

contributing: separate ideas & code
2025-07-08 12:00:00 +02:00
ppom
5d9f2ceb6a
v2.1.1 2025-07-07 12:00:00 +02:00
ppom
bba113b6ab
Remove newline at the end of stream lines
Bug introduced by !24 which kept trailing `\n` and fed it to filters.
Thus regexes ending with `$` couldn't match anymore.

Fixes #128
2025-07-07 12:00:00 +02:00
ppom
5bf67860f4
Fix Filter::regex for StreamManager::compiled_regex_set
regexes were pushed multiple times, with pattern names not completely
replaced by their corresponding regexes.
They are now only pushed when pattern replacement is finished.
2025-07-07 12:00:00 +02:00
ppom
39bf662296
Fix example configs
- Fix comma issues
- Fix regex syntax doc
- Add ssh regexes
2025-06-28 12:00:00 +02:00
ppom
359957c58c
README: Add trigger command 2025-06-24 12:00:00 +02:00
ppom
3f3236cafb
v2.1.0 2025-06-24 12:00:00 +02:00
ppom
78056b6fc5
src/client/request.rs rename and ARCHITECTURE.md update 2025-06-24 12:00:00 +02:00
ppom
6a778f3d01
cargo fmt, cargo clippy --all-targets 2025-06-24 12:00:00 +02:00
ppom
35862d32fa
Fix trigger command
- Force STREAM.FILTER on the command line
- Fix typo
2025-06-24 12:00:00 +02:00
ppom
283d1867b8
Benchmark: Add real-life configuration file and benchmark wrapper
Performance on this real-life configuration:

Before last commit:
Service runtime: 2min 22.669s
CPU time consumed: 3min 44.299s
Memory peak: 50.7M (swap: 0B)

With last commit:
Service runtime: 7.569s
CPU time consumed: 21.998s
Memory peak: 105.6M (swap: 0B)
2025-06-23 12:00:00 +02:00
ppom
ad6b0faa30
Performance: Use a RegexSet for all regexes of a Stream
StreamManager is now a struct that has its own RegexSet created from all
the regexes inside its Filters. Instead of calling
FilterManager::handle_line on all its FilterManagers, resulting in m*n
regex passes, it matches on all the regexes with its RegexSet.
It then only calls FilterManager::handle_line on matching Filters.

This should increase performance in those cases:
- Streams with a lot of filters or a lot of regexes
- Filters that match a small proportion of their Stream lines

This may decrease performance when most lines are matched by all
Filters of a Stream.
2025-06-23 12:00:00 +02:00
ppom
55ed7b9c5f
Amend heavy-load test
- Add wrapper script
- Add non-matching lines
- Put two filters on the same stream, where either one of them matches
2025-06-23 12:00:00 +02:00
Baptiste Careil
d12a61c14a Fix #124: discard invalid utf8 sequences from input streams 2025-06-23 19:13:49 +00:00
Baptiste Careil
d4ffae8489 Fix #126: make config evaluation order predictable 2025-06-23 20:07:17 +02:00
ppom
529e40acd4
move State into its own file
This permits reducing the size of filter/mod.rs
2025-06-23 12:00:00 +02:00
ppom
39ae570ae5
rename file 2025-06-23 12:00:00 +02:00
ppom
4cb69fb0d4
Add test for trigger command 2025-06-23 12:00:00 +02:00
ppom
fad9ce1166
Add unit tests to FilterManager::handle_trigger 2025-06-23 12:00:00 +02:00
ppom
ff8ea60ce6
WIP trigger command and ignoreregex performance improvement
- ignoreregex is now a RegexSet for improved performance
- Vec::clear() replaced by new Vec to really free RAM
2025-06-23 12:00:00 +02:00
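The `Vec::clear()` point is easy to demonstrate: `clear()` drops the elements but keeps the heap buffer alive, while replacing the Vec with a fresh one releases the old allocation. A minimal sketch (`clear_vs_new` is an illustrative helper, not reaction code):

```rust
/// Returns (capacity after Vec::clear, capacity after replacing with Vec::new).
fn clear_vs_new(n: usize) -> (usize, usize) {
    let mut v: Vec<u64> = (0..n as u64).collect();
    v.clear(); // drops the elements but keeps the heap buffer
    let kept = v.capacity();
    v = Vec::new(); // old buffer is freed here; a fresh Vec owns no allocation
    let freed = v.capacity();
    (kept, freed)
}

fn main() {
    let (kept, freed) = clear_vs_new(1_000_000);
    assert!(kept >= 1_000_000);
    assert_eq!(freed, 0);
    println!("clear() kept {kept} slots; the new Vec holds {freed}");
}
```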
ppom
b0c307a9d2
WIP trigger command 2025-06-23 12:00:00 +02:00
ppom
731ad6ddfd
Simplify parse_duration tests by using appropriate units 2025-06-23 12:00:00 +02:00
ppom
0ff8fda607
cargo fmt, cargo clippy --all-targets 2025-06-17 12:00:00 +02:00
ppom
9963ef4192
Improve error message for retry < 2.
Fixes #125
2025-06-17 12:00:00 +02:00
ppom
ff84a31a7d
build: use CC env var if available. defaults to cc instead of gcc 2025-06-15 12:00:00 +02:00
ppom
0d9fc47016
Update duration format documentation
As it's no longer Go's format
2025-06-10 12:00:00 +02:00
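For context, the durations appearing throughout these configurations (`6h`, `1m`, `48h`) follow a simple `<number><unit>` shape. A hypothetical sketch of parsing such values, not reaction's actual `parse_duration` (whose accepted units may differ):

```rust
use std::time::Duration;

/// Hypothetical duration parser for values like "30s", "6h", "2d".
/// The unit set (s/m/h/d) is an assumption for this sketch.
fn parse_duration(s: &str) -> Option<Duration> {
    // Split at the first non-digit character: "48h" -> ("48", "h").
    let split = s.find(|c: char| !c.is_ascii_digit())?;
    let (num, unit) = s.split_at(split);
    let n: u64 = num.parse().ok()?;
    let secs = match unit {
        "s" => n,
        "m" => n * 60,
        "h" => n * 3600,
        "d" => n * 86400,
        _ => return None,
    };
    Some(Duration::from_secs(secs))
}

fn main() {
    assert_eq!(parse_duration("6h"), Some(Duration::from_secs(21_600)));
    assert_eq!(parse_duration("48h"), Some(Duration::from_secs(172_800)));
    assert_eq!(parse_duration("10x"), None);
}
```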
ppom
5bccdb5ba7
Add oneshot option for actions
Fixes #92
2025-06-10 12:00:00 +02:00
ppom
cc38c55fdb
Add test-config subcommand to README 2025-06-06 12:00:00 +02:00
ppom
c04168d4dc
Fix outdated links in README 2025-06-06 12:00:00 +02:00
ppom
2e9e7a2a7b
Remove old go codebase 2025-06-06 12:00:00 +02:00
ppom
e642620ae3
Cross-compile C binaries too 2025-06-06 12:00:00 +02:00
ppom
8f5511b415
v2.0.1 2025-06-05 12:00:00 +02:00
ppom
388d4dac90
Fix tarball Makefile, release.py
- Makefile creates missing directories
- release.py puts tarballs & debs in local/ directory when not
  publishing
2025-06-05 12:00:00 +02:00
ppom
f63502759f
make official release only with --publish flag 2025-06-05 12:00:00 +02:00
ppom
74280d0f45
Fix completions filenames and their removal 2025-06-05 12:00:00 +02:00
Martin
8543fead54 Fix makefile install
remove the duplicated /man/man1;
otherwise, installation fails with:
install: target '/usr/local/share/man/man1/man/man1/': No such file or directory

Use -D to create missing directories
2025-06-05 16:52:09 +02:00
ppom
3beca6d7a5
Document state_directory
Fixes #71
2025-06-05 12:00:00 +02:00
ppom
b53044323c
Add small doc for C helpers 2025-06-05 12:00:00 +02:00
ppom
02f13a263e
fix release 2025-06-05 12:00:00 +02:00
133 changed files with 18341 additions and 5376 deletions

1
.envrc Normal file

@@ -0,0 +1 @@
use_nix

13
.gitignore vendored

@@ -1,22 +1,15 @@
/reaction
/ip46tables
/nft46
reaction*.db
reaction*.db.old
reaction.db
reaction.db.old
/data
/lmdb
reaction*.export.json
/reaction*.sock
/result
/wiki
/deb
*.deb
*.minisig
*.qcow2
debian-packaging/*
*.swp
export-go-db/export-go-db
import-rust-db/target
/target
/local
.ccls-cache
.direnv


@@ -1,15 +0,0 @@
---
image: golang:1.20-bookworm
stages:
- build
variables:
DEBIAN_FRONTEND: noninteractive
test_building:
stage: build
before_script:
- apt-get -qq -y update
- apt-get -qq -y install build-essential devscripts debhelper quilt wget
script:
- make reaction ip46tables nft46


@@ -6,18 +6,18 @@ Here is a high-level overview of the codebase.
## Build
- `bench/`: Configuration that spawns a very high load on reaction. Useful to test performance improvements and regressions.
- `build.rs`: permits to create shell completions and man pages on build.
- `Cargo.toml`, `Cargo.lock`: manifest and dependencies.
- `config`: example / test configuration files. Look at its git history to discover more.
- `config/`: example / test configuration files. Look at its git history to discover more.
- `Makefile`: Makefile. Summarizes useful commands.
- `packaging`: Files useful for .deb and .tar generation.
- `packaging/`: Files useful for .deb and .tar generation.
- `release.py`: Build process for a release. Handles cross-compilation, .tar and .deb generation.
## Main source code
- `helpers_c`: C helpers. I wish to have special IP support in reaction and get rid of them. See #79 and #116.
- `tests`: Integration tests. For now they test basic reaction runtime behavior, persistence, and client-daemon communication.
- `src`: The source code, here we go!
- `tests/`: Integration tests. They test reaction runtime behavior, persistence, client-daemon communication, and plugin integrations.
- `src/`: The source code, here we go!
### Top-level files
@@ -25,24 +25,20 @@ Here is a high-level overview of the codebase.
- `src/lib.rs`: Second main entrypoint
- `src/cli.rs`: Command-line arguments
- `src/tests.rs`: Test utilities
- `src/protocol.rs`: de/serialization and client/daemon protocol messages.
### `src/concepts`
### `src/concepts/`
reaction really is about its configuration, which is at the center of the code.
There is one file for each of its concepts: configuration, streams, filters, actions, patterns.
There is one file for each of its concepts: configuration, streams, filters, actions, patterns, plugins.
### `src/protocol`
### `src/client/`
Low-level serialization/deserialization and client-daemon protocol messages.
Client code: `reaction show`, `reaction flush`, `reaction trigger`, `reaction test-regex`.
Shared by the client and daemon's socket. Also used by daemon's database.
### `src/client`
Client code: `reaction show`, `reaction flush`, `reaction test-regex`.
- `show_flush.rs`: `show` & `flush` commands.
- `request.rs`: commands requiring client/server communication: `show`, `flush` & `trigger`.
- `test_config.rs`: `test-config` command.
- `test_regex.rs`: `test-regex` command.
### `src/daemon`
@@ -53,16 +49,33 @@ This code has async code, to handle input streams and communication with clients
- `mod.rs`: daemon main function. Initializes all tasks, handles synchronization and quitting, etc.
- `stream.rs`: Stream managers: start the stream `cmd` and dispatch its stdout lines to its Filter managers.
- `filter.rs`: Filter managers: handle lines, persistance, store matches and trigger actions. This is the main piece of runtime logic.
- `filter/`: Filter managers: handle lines, persistance, store matches and trigger actions. This is the main piece of runtime logic.
- `mod.rs`: High-level logic
- `state.rs`: Inner state operations
- `socket.rs`: The socket task, responsible for communication with clients.
- `plugin.rs`: Plugin startup, configuration loading and cleanup.
### `src/tree`
### `crates/treedb`
Persistence layer.
This is a database highly adapted to reaction workload, making reaction faster than when used with general purpose key-value databases
(heed, sled and fjall crates ahve been tested).
(heed, sled and fjall crates have been tested).
Its design is explained in the comments of its files:
- `mod.rs`: main database code, with its two API structs: Tree and Database.
- `raw.rs` low-level part, directly interacting with de/serialization and files.
- `lib.rs`: main database code, with its two API structs: Tree and Database.
- `raw.rs`: low-level part, directly interacting with de/serialization and files.
- `time.rs`: time definitions shared with reaction.
- `helpers.rs`: utilities to ease db deserialization from disk.
### `plugins/reaction-plugin`
Shared plugin interface between reaction daemon and its plugins.
Also defines some shared logic between them:
- `shutdown.rs`: Logic for passing shutdown signal across all tasks
- `parse_duration.rs`: Duration parsing
### `plugins/reaction-plugin-*`
All core plugins.

4063
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,7 +1,7 @@
[package]
name = "reaction"
version = "2.0.0"
edition = "2021"
version = "2.3.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
description = "Scan logs and take action"
@@ -10,40 +10,58 @@ homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "fail2ban", "logs", "monitoring"]
build = "build.rs"
default-run = "reaction"
[package.metadata.deb]
section = "net"
extended-description = """A daemon that scans program outputs for repeated patterns, and takes action.
A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.
reaction aims at being a successor to fail2ban."""
maintainer-scripts = "packaging/"
systemd-units = { enable = false }
assets = [
# Executables
[ "target/release/reaction", "/usr/bin/reaction", "755" ],
[ "target/release/ip46tables", "/usr/bin/ip46tables", "755" ],
[ "target/release/nft46", "/usr/bin/nft46", "755" ],
[ "target/release/reaction-plugin-virtual", "/usr/bin/reaction-plugin-virtual", "755" ],
# Man pages
[ "target/release/reaction*.1", "/usr/share/man/man1/", "644" ],
# Shell completions
[ "target/release/reaction.bash", "/usr/share/bash-completion/completions/reaction", "644" ],
[ "target/release/reaction.fish", "/usr/share/fish/completions/", "644" ],
[ "target/release/_reaction", "/usr/share/zsh/vendor-completions/", "644" ],
# Slice
[ "packaging/system-reaction.slice", "/usr/lib/systemd/system/", "644" ],
]
[dependencies]
chrono = { version = "0.4.38", features = ["std", "clock", "serde"] }
# Time types
chrono.workspace = true
# CLI parsing
clap = { version = "4.5.4", features = ["derive"] }
jrsonnet-evaluator = "0.4.2"
# Unix interfaces
nix = { version = "0.29.0", features = ["signal"] }
num_cpus = "1.16.0"
# Regex matching
regex = "1.10.4"
serde = { version = "1.0.203", features = ["derive"] }
serde_json = "1.0.117"
# Configuration languages, ser/deserialisation
serde.workspace = true
serde_json.workspace = true
serde_yaml = "0.9.34"
thiserror = "1.0.63"
timer = "0.2.0"
futures = "0.3.30"
tokio = { version = "1.40.0", features = ["full", "tracing"] }
tokio-util = { version = "0.7.12", features = ["codec"] }
tracing = "0.1.40"
jrsonnet-evaluator = "0.4.2"
# Error macro
thiserror.workspace = true
# Async runtime & helpers
futures = { workspace = true }
tokio = { workspace = true, features = ["full", "tracing"] }
tokio-util = { workspace = true, features = ["codec"] }
# Async logging
tracing.workspace = true
tracing-subscriber = "0.3.18"
# Database
treedb.workspace = true
# Reaction plugin system
remoc.workspace = true
reaction-plugin.workspace = true
[build-dependencies]
clap = { version = "4.5.4", features = ["derive"] }
@@ -54,4 +72,34 @@ tracing = "0.1.40"
[dev-dependencies]
rand = "0.8.5"
treedb.workspace = true
treedb.features = ["test"]
tempfile.workspace = true
assert_fs.workspace = true
assert_cmd = "2.0.17"
predicates = "3.1.3"
[workspace]
members = [
"crates/treedb",
"plugins/reaction-plugin",
"plugins/reaction-plugin-cluster",
"plugins/reaction-plugin-ipset",
"plugins/reaction-plugin-nftables",
"plugins/reaction-plugin-virtual"
]
[workspace.dependencies]
assert_fs = "1.1.3"
chrono = { version = "0.4.38", features = ["std", "clock", "serde"] }
futures = "0.3.30"
remoc = { version = "0.18.3" }
serde = { version = "1.0.203", features = ["derive"] }
serde_json = { version = "1.0.117", features = ["arbitrary_precision"] }
tempfile = "3.12.0"
thiserror = "1.0.63"
tokio = { version = "1.40.0" }
tokio-util = { version = "0.7.12" }
tracing = "0.1.40"
reaction-plugin = { path = "plugins/reaction-plugin" }
treedb = { path = "crates/treedb" }

11
Dockerfile Normal file

@@ -0,0 +1,11 @@
# This Dockerfile permits to build reaction and its plugins
# Use debian old-stable, so that it runs on both old-stable and stable
FROM rust:bookworm
RUN apt update && apt install -y \
clang \
libipset-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /reaction


@@ -14,8 +14,6 @@ reaction:
install: reaction
install -m755 target/release/reaction $(DESTDIR)$(BINDIR)
install -m755 target/release/ip46tables $(DESTDIR)$(BINDIR)
install -m755 target/release/nft46 $(DESTDIR)$(BINDIR)
install_systemd: install
install -m644 packaging/reaction.service $(SYSTEMDDIR)/system/reaction.service

106
README.md

@@ -4,39 +4,40 @@ A daemon that scans program outputs for repeated patterns, and takes action.
A common usage is to scan ssh and webserver logs, and to ban hosts that cause multiple authentication errors.
🚧 This program hasn't received external audit. however, it already works well on my servers 🚧
## Current project status
reaction just reached v2.0.0-rc2 version, which is a complete rust rewrite of reaction.
It's in feature parity with the Go version, and breaking changes should be small.
See https://reaction.ppom.me/migrate-to-v2.html
🚧 This program hasn't received an external security audit yet. However, it already works well on many servers 🚧
## Rationale
I was using the honorable fail2ban since quite a long time, but i was a bit frustrated by its cpu consumption
I had been using the honorable fail2ban for quite a long time, but I was a bit frustrated by its CPU consumption
and all its heavy default configuration.
In my view, a security-oriented program should be simple to configure
and an always-running daemon should be implemented in a fast*er* language.
reaction does not have all the features of the honorable fail2ban, but it's ~10x faster and has more manageable configuration.
reaction does not have all the features of the honorable fail2ban, but it's more than 10x faster and has a more manageable configuration.
[📽️ quick french name explanation 😉](https://u.ppom.me/reaction.webm)
[🇬🇧 in-depth blog article](https://blog.ppom.me/en-reaction)
/ [🇫🇷 french version](https://blog.ppom.me/fr-reaction)
## Rust rewrite
reaction v2.x is a complete Rust rewrite of reaction.
It's in feature parity with the Go version, v1.x, which is now deprecated.
See https://blog.ppom.me/en-reaction-v2.
## Configuration
YAML and [JSONnet](https://jsonnet.org/) (more powerful) are supported.
Both are extensions of JSON, so JSON is transitively supported.
- See [reaction.yml](./app/example.yml) or [reaction.jsonnet](./config/example.jsonnet) for a fully explained reference
- See [server.jsonnet](./config/server.jsonnet) for a real-world configuration
- See [reaction.example.service](./config/reaction.example.service) for a systemd service file
- This minimal example shows what's needed to prevent brute force attacks on an ssh server (please read at least the [Security](https://reaction.ppom.me/security.html) part of the wiki before starting 🆙):
- See [reaction.yml](./config/example.yml) or [reaction.jsonnet](./config/example.jsonnet) for a fully explained reference (ipv4 + ipv6)
- See the [wiki](https://reaction.ppom.me) for multiple examples, security recommendations and FAQ.
- See [server.jsonnet](https://reaction.ppom.me/configurations/ppom/server.jsonnet.html) for a real-world configuration
- See [reaction.service](./config/reaction.service) for a systemd service file
- This minimal example (ipv4 only) shows what's needed to prevent brute force attacks on an ssh server (please read at least the [Security](https://reaction.ppom.me/security.html) part of the wiki before starting 🆙):
<details open>
@@ -45,21 +46,18 @@ both are extensions of JSON, so JSON is transitively supported.
```yaml
patterns:
ip:
regex: '(([0-9]{1,3}\.){3}[0-9]{1,3})|([0-9a-fA-F:]{2,90})'
ignore:
- '127.0.0.1'
- '::1'
type: ipv4
start:
- [ 'ip46tables', '-w', '-N', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-N', 'reaction' ]
- [ 'iptables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
stop:
- [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-F', 'reaction' ]
- [ 'ip46tables', '-w', '-X', 'reaction' ]
- [ 'iptables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-F', 'reaction' ]
- [ 'iptables', '-w', '-X', 'reaction' ]
streams:
ssh:
@@ -69,15 +67,15 @@ streams:
regex:
- 'authentication failure;.*rhost=<ip>'
- 'Failed password for .* from <ip>'
- 'Invalid user .* from <ip>',
- 'banner exchange: Connection from <ip> port [0-9]*: invalid format',
- 'Invalid user .* from <ip>'
- 'banner exchange: Connection from <ip> port [0-9]*: invalid format'
retry: 3
retryperiod: '6h'
actions:
ban:
cmd: [ 'ip46tables', '-w', '-I', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
cmd: [ 'iptables', '-w', '-I', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
unban:
cmd: [ 'ip46tables', '-w', '-D', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
cmd: [ 'iptables', '-w', '-D', 'reaction', '1', '-s', '<ip>', '-j', 'DROP' ]
after: '48h'
```
@@ -88,41 +86,40 @@ streams:
<summary><code>/etc/reaction.jsonnet</code></summary>
```jsonnet
local iptables(args) = [ 'ip46tables', '-w' ] + args;
local banFor(time) = {
ban: {
cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
},
unban: {
cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
after: time,
cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
},
};
{
patterns: {
ip: {
regex: @'(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
type: 'ipv4',
},
},
start: [
iptables([ '-N', 'reaction' ]),
iptables([ '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]),
iptables([ '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]),
['iptables', '-N', 'reaction'],
['iptables', '-I', 'INPUT', '-p', 'all', '-j', 'reaction'],
['iptables', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction'],
],
stop: [
iptables([ '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]),
iptables([ '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]),
iptables([ '-F', 'reaction' ]),
iptables([ '-X', 'reaction' ]),
['iptables', '-D', 'INPUT', '-p', 'all', '-j', 'reaction'],
['iptables', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction'],
['iptables', '-F', 'reaction'],
['iptables', '-X', 'reaction'],
],
streams: {
ssh: {
cmd: [ 'journalctl', '-fu', 'sshd.service' ],
cmd: ['journalctl', '-fu', 'sshd.service'],
filters: {
failedlogin: {
regex: [
@'authentication failure;.*rhost=<ip>'
@'authentication failure;.*rhost=<ip>',
@'Failed password for .* from <ip>',
@'banner exchange: Connection from <ip> port [0-9]*: invalid format',
@'Invalid user .* from <ip>',
@@ -139,6 +136,9 @@ local banFor(time) = {
</details>
> It is recommended to setup reaction with [`nftables`](https://reaction.ppom.me/actions/nftables.html)
> or [`ipset` + `iptables`](https://reaction.ppom.me/actions/ipset.html), which are much more performant
> solutions than `iptables` alone.
### Database
@@ -148,17 +148,17 @@ If you don't know where to start reaction, `/var/lib/reaction` should be a sane
### CLI
- `reaction start` runs the server
- `reaction show` show pending actions (ie. bans)
- `reaction show` shows pending actions (ie. current bans)
- `reaction flush` permits to run pending actions (ie. clear bans)
- `reaction trigger` permits to manually trigger a filter (ie. run custom ban)
- `reaction test-regex` permits to test regexes
- `reaction test-config` shows loaded configuration
- `reaction help` for full usage.
### `ip46tables`
### old binaries
`ip46tables` is a minimal c program present in its own subdirectory with only standard posix dependencies.
It permits to configure `iptables` and `ip6tables` at the same time.
It will execute `iptables` when detecting ipv4, `ip6tables` when detecting ipv6 and both if no ip address is present on the command line.
`ip46tables` and `nft46` binaries are no longer part of reaction. If you really need them, see
[the last commit that included them](https://framagit.org/ppom/reaction/-/tree/b7d997ca5e9a69c8572bb2ec9d27d0eb03b3cb9f/helpers_c).
## Wiki
@@ -216,7 +216,16 @@ make install_systemd
- [NGI's Diversity and Inclusion Guide](https://nlnet.nl/NGI0/bestpractices/DiversityAndInclusionGuide-v4.pdf)
I'll do my best to maintain a safe contribution place, as free as possible from discrimination and elitism.
### Ideas
Please take a look at issues which have the "Opinion Welcome 👀" label!
*Your opinion is welcome.*
Your ideas are welcome in the issues.
### Code
Contributions are welcome.
For any substantial feature, please file an issue first, to be assured that we agree on the feature, and to avoid unnecessary work.
@@ -230,6 +239,7 @@ French version: [#reaction-dev-fr:club1.fr](https://matrix.to/#/#reaction-dev-fr
You can ask for help in the issues or in this Matrix room: [#reaction-users-en:club1.fr](https://matrix.to/#/#reaction-users-en:club1.fr).
French version: [#reaction-users-fr:club1.fr](https://matrix.to/#/#reaction-users-fr:club1.fr).
You can alternatively send a mail: `reaction` on domain `ppom.me`.
## Funding

3
TODO Normal file

@@ -0,0 +1,3 @@
Test what happens when a Filter's pattern Set changes (I think it's shitty)
DB: add tests on stress testing (lines should always be in order)
conf: merge filters

24
bench/bench.sh Executable file

@@ -0,0 +1,24 @@
set -e
if test "$(realpath "$PWD")" != "$(realpath "$(dirname "$0")/..")"
then
echo "You must be in reaction root directory"
exit 1
fi
if test ! -f "$1"
then
# shellcheck disable=SC2016
echo '$1 must be a configuration file (most probably in ./bench)'
exit 1
fi
rm -f reaction.db
cargo build --release --bins
sudo systemd-run --wait \
-p User="$(id -nu)" \
-p MemoryAccounting=yes \
-p IOAccounting=yes \
-p WorkingDirectory="$(pwd)" \
-p Environment=PATH=/run/current-system/sw/bin/ \
sh -c "for i in 1 2; do ./target/release/reaction start -c '$1' -l ERROR -s ./reaction.sock; done"


@@ -1,6 +1,9 @@
---
# This configuration permits to test reaction's performance
# under a very high load
#
# It keeps regexes super simple, to avoid benchmarking the `regex` crate,
# and benchmark reaction's internals instead.
concurrency: 32
patterns:
@@ -15,7 +18,7 @@ streams:
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 10001 | while read i; do echo found $i; done' ]
filters:
find:
find1:
regex:
- '^found <num>'
retry: 9
@@ -28,9 +31,9 @@ streams:
after: 1m
onexit: false
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; done' ]
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find:
find2:
regex:
- '^found <num>'
retry: 480
@@ -43,9 +46,9 @@ streams:
after: 1m
onexit: false
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; done' ]
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find:
find3:
regex:
- '^found <num>'
retry: 480
@@ -57,12 +60,9 @@ streams:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown4:
cmd: [ 'sh', '-c', 'sleep 2; seq 1000100 | while read i; do echo found $i; done' ]
filters:
find:
find4:
regex:
- '^found <num>'
- '^trouvé <num>'
retry: 480
retryperiod: 6m
actions:

130
bench/nginx.yml Normal file

@@ -0,0 +1,130 @@
# This is an extract of a real life configuration
#
# It reads an nginx's access.log in the following format:
# log_format '$remote_addr - $remote_user [$time_local] '
# '$host '
# '"$request" $status $bytes_sent '
# '"$http_referer" "$http_user_agent"';
#
# I can't make my access.log public for obvious privacy reasons.
#
# Unlike heavy-load.yml, this test is closer to real-life regex complexity.
#
# It has been created to test the performance improvements of
# the previous commit: ad6b0faa30c1af84360f66074a917b4bf6cda10a
#
# On this test, most lines don't match anything, so most time is spent matching regexes.
concurrency: 0
patterns:
ip:
ignore:
- 192.168.1.253
- 10.1.1.1
- 10.1.1.5
- 10.1.1.4
- 127.0.0.1
- ::1
regex: (?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))
untilEOL:
regex: .*$
streams:
nginx:
cmd:
- cat
- /tmp/access.log
filters:
directusFailedLogin:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip> .* "POST /repertoire/auth/login HTTP/..." 401 [0-9]+ .https://babos.land
- ^<ip> .* "POST /pompeani.art/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /leborddeleau/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /5eroue/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /edit/auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.me
- ^<ip> .* "POST /auth/login HTTP/..." 401 [0-9]+ .https://edit.ppom.fr
retry: 6
retryperiod: 4h
gptbot:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip>.*"[^"]*AI2Bot[^"]*"$
- ^<ip>.*"[^"]*Amazonbot[^"]*"$
- ^<ip>.*"[^"]*Applebot[^"]*"$
- ^<ip>.*"[^"]*Applebot-Extended[^"]*"$
- ^<ip>.*"[^"]*Bytespider[^"]*"$
- ^<ip>.*"[^"]*CCBot[^"]*"$
- ^<ip>.*"[^"]*ChatGPT-User[^"]*"$
- ^<ip>.*"[^"]*ClaudeBot[^"]*"$
- ^<ip>.*"[^"]*Diffbot[^"]*"$
- ^<ip>.*"[^"]*DuckAssistBot[^"]*"$
- ^<ip>.*"[^"]*FacebookBot[^"]*"$
- ^<ip>.*"[^"]*GPTBot[^"]*"$
- ^<ip>.*"[^"]*Google-Extended[^"]*"$
- ^<ip>.*"[^"]*Kangaroo Bot[^"]*"$
- ^<ip>.*"[^"]*Meta-ExternalAgent[^"]*"$
- ^<ip>.*"[^"]*Meta-ExternalFetcher[^"]*"$
- ^<ip>.*"[^"]*OAI-SearchBot[^"]*"$
- ^<ip>.*"[^"]*PerplexityBot[^"]*"$
- ^<ip>.*"[^"]*Timpibot[^"]*"$
- ^<ip>.*"[^"]*Webzio-Extended[^"]*"$
- ^<ip>.*"[^"]*YouBot[^"]*"$
- ^<ip>.*"[^"]*omgili[^"]*"$
slskd-failedLogin:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip> .* "POST /slskd/api/v0/session HTTP/..." 401 [0-9]+ .https://ppom.me
- ^<ip> .* "POST /kiosque/api/v0/session HTTP/..." 401 [0-9]+ .https://babos.land
retry: 3
retryperiod: 1h
suspectRequests:
actions:
ban:
cmd:
- sleep
- 0.01
unban:
after: 4h
cmd:
- sleep
- 0.01
regex:
- ^<ip> .*"GET /(?:[^/" ]*/)*wp-login\.php
- ^<ip> .*"GET /(?:[^/" ]*/)*wp-includes
- '^<ip> .*"GET /(?:[^/" ]*/)*\.env '
- '^<ip> .*"GET /(?:[^/" ]*/)*config\.json '
- '^<ip> .*"GET /(?:[^/" ]*/)*info\.php '
- '^<ip> .*"GET /(?:[^/" ]*/)*owa/auth/logon.aspx '
- '^<ip> .*"GET /(?:[^/" ]*/)*auth.html '
- '^<ip> .*"GET /(?:[^/" ]*/)*auth1.html '
- '^<ip> .*"GET /(?:[^/" ]*/)*password.txt '
- '^<ip> .*"GET /(?:[^/" ]*/)*passwords.txt '
- '^<ip> .*"GET /(?:[^/" ]*/)*dns-query '
- '^<ip> .*"GET /(?:[^/" ]*/)*\.git/ '


@@ -0,0 +1,86 @@
---
# This configuration permits to test reaction's performance
# under a very high load
#
# It keeps regexes super simple, to avoid benchmarking the `regex` crate,
# and benchmark reaction's internals instead.
concurrency: 32
plugins:
- path: "/home/ppom/prg/reaction/target/release/reaction-plugin-virtual"
patterns:
num:
regex: '[0-9]{3}'
ip:
regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
ignore:
- 1.0.0.1
streams:
virtual:
type: virtual
filters:
find0:
regex:
- '^<num>$'
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 1001 | while read i; do echo found $i; done' ]
filters:
find1:
regex:
- '^found <num>'
retry: 9
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find2:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find3:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual
find4:
regex:
- '^trouvé <num>'
retry: 480
retryperiod: 6m
actions:
virtual:
type: virtual
options:
send: '<num>'
to: virtual


@@ -0,0 +1,74 @@
---
# This configuration permits to test reaction's performance
# under a very high load
#
# It keeps regexes super simple, to avoid benchmarking the `regex` crate,
# and benchmark reaction's internals instead.
concurrency: 32
patterns:
num:
regex: '[0-9]{3}'
ip:
regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
ignore:
- 1.0.0.1
streams:
tailDown1:
cmd: [ 'sh', '-c', 'sleep 2; seq 1001 | while read i; do echo found $i; done' ]
filters:
find1:
regex:
- '^found <num>'
retry: 9
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown2:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find2:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
tailDown3:
cmd: [ 'sh', '-c', 'sleep 2; seq 100100 | while read i; do echo found $i; echo trouvé $i; done' ]
filters:
find3:
regex:
- '^found <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false
find4:
regex:
- '^trouvé <num>'
retry: 480
retryperiod: 6m
actions:
damn:
cmd: [ 'sleep', '0.0<num>' ]
undamn:
cmd: [ 'sleep', '0.0<num>' ]
after: 1m
onexit: false


@@ -1,8 +1,6 @@
use std::{
env::var_os,
io::{self, ErrorKind},
path::Path,
process,
};
use clap_complete::shells;
@@ -10,25 +8,10 @@ use clap_complete::shells;
// SubCommand defined here
include!("src/cli.rs");
fn compile_helper(name: &str, out_dir: &Path) -> io::Result<()> {
process::Command::new("gcc")
.args([
&format!("helpers_c/{name}.c"),
"-o",
out_dir.join(name).to_str().expect("could not join path"),
])
.spawn()?;
Ok(())
}
fn main() -> io::Result<()> {
if var_os("PROFILE").ok_or(ErrorKind::NotFound)? == "release" {
let out_dir = PathBuf::from(var_os("OUT_DIR").ok_or(ErrorKind::NotFound)?).join("../../..");
// Compile C helpers
compile_helper("ip46tables", &out_dir)?;
compile_helper("nft46", &out_dir)?;
// Build CLI
let cli = clap::Command::new("reaction");
let cli = SubCommand::augment_subcommands(cli);
@@ -51,8 +34,6 @@ See usage examples, service configurations and good practices on the wiki: https
println!("cargo::rerun-if-changed=build.rs");
println!("cargo::rerun-if-changed=src/cli.rs");
println!("cargo::rerun-if-changed=helpers_c/ip46tables.c");
println!("cargo::rerun-if-changed=helpers_c/nft46.c");
Ok(())
}


@@ -7,61 +7,106 @@
// strongly encouraged to take a look at the full documentation: https://reaction.ppom.me
// JSONnet functions
local iptables(args) = ['ip46tables', '-w'] + args;
// ip46tables is a minimal C program (only POSIX dependencies) present in a
// subdirectory of this repo.
// it permits to handle both ipv4/iptables and ipv6/ip6tables commands
local ipBan(cmd) = [cmd, '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'];
local ipUnban(cmd) = [cmd, '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'];
// See meaning and usage of this function around L106
// See meaning and usage of this function around L180
local banFor(time) = {
ban: {
cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ban4: {
cmd: ipBan('iptables'),
ipv4only: true,
},
unban: {
ban6: {
cmd: ipBan('ip6tables'),
ipv6only: true,
},
unban4: {
cmd: ipUnban('iptables'),
after: time,
cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ipv4only: true,
},
unban6: {
cmd: ipUnban('ip6tables'),
after: time,
ipv6only: true,
},
};
// See usage of this function around L90
// Generates a command for iptables and ip46tables
local ip46tables(arguments) = [
['iptables', '-w'] + arguments,
['ip6tables', '-w'] + arguments,
];
{
// patterns are substitued in regexes.
// when a filter performs an action, it replaces the found pattern
patterns: {
ip: {
// reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
// jsonnet's @'string' is for verbatim strings
// simple version: regex: @'(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})',
regex: @'(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))',
ignore: ['127.0.0.1', '::1'],
// Patterns can be ignored based on regexes; the regex must match the whole string detected by the pattern
// ignoreregex: [@'10\.0\.[0-9]{1,3}\.[0-9]{1,3}'],
name: {
// reaction regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
// common patterns have a 'regex' field
regex: '[a-z]+',
// patterns can ignore specific strings
ignore: ['cecilia'],
// patterns can also be ignored based on regexes; the regex must match the whole string detected by the pattern
ignoreregex: [
// ignore names starting with 'jo'
'jo.*',
],
},
ip: {
// patterns can have a special 'ip' type that matches both ipv4 and ipv6
// or 'ipv4' or 'ipv6' to match only that ip version
type: 'ip',
ignore: ['127.0.0.1', '::1'],
// they can also ignore whole CIDR ranges of ip
ignorecidr: ['10.0.0.0/8'],
// last but not least, patterns of type ip, ipv4, ipv6 can also group their matched ips by mask
// ipv4mask: 30
// this means that ipv6 matches will be converted to their network part.
ipv6mask: 64,
// for example, "2001:db8:85a3:9de5::8a2e:370:7334" will be converted to "2001:db8:85a3:9de5::/64".
},
// ipv4: {
// type: 'ipv4',
// ignore: ...
// ipv4mask: ...
// },
},
// where the state (database) must be read
// defaults to . which means reaction's working directory.
// The systemd service starts reaction in /var/lib/reaction.
state_directory: '.',
// if set to a positive number → max number of concurrent actions
// if set to a negative number → no limit
// if not specified or set to 0 → defaults to the number of CPUs on the system
concurrency: 0,
// Those commands will be executed in order at start, before everything else
start: [
start:
// Create an iptables chain for reaction
iptables(['-N', 'reaction']),
ip46tables(['-N', 'reaction']) +
// Insert this chain as the first item of the INPUT & FORWARD chains (for incoming connections)
iptables(['-I', 'INPUT', '-p', 'all', '-j', 'reaction']),
iptables(['-I', 'FORWARD', '-p', 'all', '-j', 'reaction']),
],
ip46tables(['-I', 'INPUT', '-p', 'all', '-j', 'reaction']) +
ip46tables(['-I', 'FORWARD', '-p', 'all', '-j', 'reaction']),
// Those commands will be executed in order at stop, after everything else
stop: [
stop:
// Remove the chain from the INPUT & FORWARD chains
iptables(['-D', 'INPUT', '-p', 'all', '-j', 'reaction']),
iptables(['-D', 'FORWARD', '-p', 'all', '-j', 'reaction']),
ip46tables(['-D', 'INPUT', '-p', 'all', '-j', 'reaction']) +
ip46tables(['-D', 'FORWARD', '-p', 'all', '-j', 'reaction']) +
// Empty the chain
iptables(['-F', 'reaction']),
ip46tables(['-F', 'reaction']) +
// Delete the chain
iptables(['-X', 'reaction']),
],
ip46tables(['-X', 'reaction']),
// streams are commands
// they are run and their output is captured
@ -73,41 +118,85 @@ local banFor(time) = {
// note that if the command is not in environment's `PATH`
// its full path must be given.
cmd: ['journalctl', '-n0', '-fu', 'sshd.service'],
// filters run actions when they match regexes on a stream
filters: {
// filters have a user-defined name
failedlogin: {
// reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
// reaction's regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
regex: [
// <ip> is predefined in the patterns section
// ip's regex is inserted in the following regex
@'authentication failure;.*rhost=<ip>',
@'Failed password for .* from <ip>',
@'Invalid user .* from <ip>',
@'Connection (reset|closed) by (authenticating|invalid) user .* <ip>',
@'banner exchange: Connection from <ip> port [0-9]*: invalid format',
],
// if retry and retryperiod are defined,
// the actions will only take place if the same pattern is
// found `retry` times in a `retryperiod` interval
retry: 3,
// format is defined here: https://pkg.go.dev/time#ParseDuration
// format is defined as follows: <integer> <unit>
// - whitespace between the integer and unit is optional
// - integer must be positive (>= 0)
// - unit can be one of:
// - ms / millis / millisecond / milliseconds
// - s / sec / secs / second / seconds
// - m / min / mins / minute / minutes
// - h / hour / hours
// - d / day / days
retryperiod: '6h',
// duplicate specifies how to handle matches after an action has already been taken.
// 3 options are possible:
// - extend (default): update the pending actions' time, so they run later
// - ignore: don't do anything, ignore the match
// - rerun: run the actions again. so we may have the same pending actions multiple times.
// (this was the default before 2.2.0)
// duplicate: extend
// actions are run by the filter when regexes are matched
actions: {
// actions have a user-defined name
ban: {
cmd: iptables(['-A', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ban4: {
cmd: ['iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
// this optional field makes the action run only when a pattern of type ip contains an ipv4
ipv4only: true,
},
unban: {
cmd: iptables(['-D', 'reaction', '-s', '<ip>', '-j', 'DROP']),
ban6: {
cmd: ['ip6tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP'],
// this optional field makes the action run only when a pattern of type ip contains an ipv6
ipv6only: true,
},
unban4: {
cmd: ['iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
// if after is defined, the action will not take place immediately, but after a specified duration
// same format as retryperiod
after: '48h',
after: '2 days',
// let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
// if you want reaction to run those pending commands before exiting, you can set this:
// onexit: true,
// (defaults to false)
// here it is not useful because we will flush and delete the chain containing the bans anyway
// (with the stop commands)
ipv4only: true,
},
unban6: {
cmd: ['ip6tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP'],
after: '2 days',
ipv6only: true,
},
mail: {
cmd: ['sendmail', '...', '<ip>'],
// some commands, such as alerting commands, are "oneshot".
// this means they'll be run only once, and won't be executed again when reaction is restarted
oneshot: true,
},
},
// or use the banFor function defined at the beginning!
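The duration grammar spelled out in the comments above (`<integer> <unit>`, optional whitespace, a fixed set of unit spellings) can be sketched as a small parser. This is only an illustration of the documented grammar; the function name and error strings are assumptions, not reaction's actual implementation.

```rust
use std::time::Duration;

// Sketch of the `<integer> <unit>` duration grammar described in the config
// comments. Hypothetical helper; reaction's real parser may differ.
fn parse_duration(input: &str) -> Result<Duration, String> {
    let s = input.trim();
    // Split at the first non-digit character: integer part, then unit.
    let split = s.find(|c: char| !c.is_ascii_digit()).ok_or("missing unit")?;
    let (num, unit) = s.split_at(split);
    let n: u64 = num.parse().map_err(|_| "integer must be positive")?;
    let secs: u64 = match unit.trim() {
        "ms" | "millis" | "millisecond" | "milliseconds" => return Ok(Duration::from_millis(n)),
        "s" | "sec" | "secs" | "second" | "seconds" => 1,
        "m" | "min" | "mins" | "minute" | "minutes" => 60,
        "h" | "hour" | "hours" => 3_600,
        "d" | "day" | "days" => 86_400,
        other => return Err(format!("unknown unit: {other}")),
    };
    Ok(Duration::from_secs(n * secs))
}
```

Under this sketch, both `"6h"` (as used for `retryperiod`) and `"2 days"` (as used for `after`) parse successfully.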


@ -10,11 +10,18 @@
# using YAML anchors `&name` and pointers `*name`
# definitions are not read by reaction
definitions:
- &iptablesban [ 'ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &iptablesunban [ 'ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip4tablesban [ 'iptables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip6tablesban [ 'ip6tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip4tablesunban [ 'iptables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &ip6tablesunban [ 'ip6tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
# ip46tables is a minimal C program (only POSIX dependencies) present as a subdirectory.
# it handles both ipv4/iptables and ipv6/ip6tables commands
# where the state (database) must be read
# defaults to . which means reaction's working directory.
# The systemd service starts reaction in /var/lib/reaction.
state_directory: .
# if set to a positive number → max number of concurrent actions
# if set to a negative number → no limit
# if not specified or set to 0 → defaults to the number of CPUs on the system
@ -23,29 +30,57 @@ concurrency: 0
# patterns are substituted in regexes.
# when a filter performs an action, it replaces the found pattern
patterns:
name:
# reaction regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
# common patterns have a 'regex' field
regex: '[a-z]+'
# patterns can ignore specific strings
ignore:
- 'cecilia'
# patterns can also be ignored based on regexes; the regex must match the whole string detected by the pattern
ignoreregex:
# ignore names starting with 'jo'
- 'jo.*'
ip:
# reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
# simple version: regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
regex: '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))'
# patterns can have a special 'ip' type that matches both ipv4 and ipv6
# or 'ipv4' or 'ipv6' to match only that ip version
type: ip
ignore:
- 127.0.0.1
- ::1
# Patterns can be ignored based on regexes; the regex must match the whole string detected by the pattern
# ignoreregex:
# - '10\.0\.[0-9]{1,3}\.[0-9]{1,3}'
# they can also ignore whole CIDR ranges of ips
ignorecidr:
- 10.0.0.0/8
# last but not least, patterns of type ip, ipv4, ipv6 can also group their matched ips by mask
# ipv4mask: 30
# this means that ipv6 matches will be converted to their network part.
ipv6mask: 64
# for example, "2001:db8:85a3:9de5::8a2e:370:7334" will be converted to "2001:db8:85a3:9de5::/64".
# ipv4:
# type: ipv4
# ignore: ...
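The `ipv6mask` grouping described above keeps only the network part of a matched address. A short sketch of that conversion; `apply_v6_mask` is a hypothetical helper for illustration, not part of reaction's API:

```rust
use std::net::Ipv6Addr;

// Hypothetical helper illustrating the `ipv6mask` option: keep only the
// first `mask` bits of the address and render it as a CIDR network.
fn apply_v6_mask(addr: Ipv6Addr, mask: u32) -> String {
    let bits = u128::from(addr);
    // Guard mask == 0: shifting a u128 by 128 would panic.
    let netmask = if mask == 0 { 0 } else { u128::MAX << (128 - mask) };
    let network = Ipv6Addr::from(bits & netmask);
    format!("{network}/{mask}")
}
```

With `mask = 64`, the example address from the comment maps to its /64 network, so repeated matches from the same subnet group together.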
# Those commands will be executed in order at start, before everything else
start:
- [ 'ip46tables', '-w', '-N', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-N', 'reaction' ]
- [ 'ip6tables', '-w', '-N', 'reaction' ]
- [ 'iptables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
# Those commands will be executed in order at stop, after everything else
stop:
- [ 'ip46tables', '-w,', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w,', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-F', 'reaction' ]
- [ 'ip46tables', '-w', '-X', 'reaction' ]
- [ 'iptables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip6tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'iptables', '-w', '-F', 'reaction' ]
- [ 'ip6tables', '-w', '-F', 'reaction' ]
- [ 'iptables', '-w', '-X', 'reaction' ]
- [ 'ip6tables', '-w', '-X', 'reaction' ]
# streams are commands
# they are run and their output is captured
@ -57,40 +92,81 @@ streams:
# note that if the command is not in environment's `PATH`
# its full path must be given.
cmd: [ 'journalctl', '-n0', '-fu', 'sshd.service' ]
# filters run actions when they match regexes on a stream
filters:
# filters have a user-defined name
failedlogin:
# reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
# reaction's regex syntax is defined here: https://docs.rs/regex/latest/regex/#syntax
regex:
# <ip> is predefined in the patterns section
# ip's regex is inserted in the following regex
- 'authentication failure;.*rhost=<ip>'
- 'Failed password for .* from <ip>'
- 'Invalid user .* from <ip>'
- 'Connection (reset|closed) by (authenticating|invalid) user .* <ip>'
- 'banner exchange: Connection from <ip> port [0-9]*: invalid format'
# if retry and retryperiod are defined,
# the actions will only take place if the same pattern is
# found `retry` times in a `retryperiod` interval
retry: 3
# format is defined here: https://pkg.go.dev/time#ParseDuration
# format is defined as follows: <integer> <unit>
# - whitespace between the integer and unit is optional
# - integer must be positive (>= 0)
# - unit can be one of:
# - ms / millis / millisecond / milliseconds
# - s / sec / secs / second / seconds
# - m / min / mins / minute / minutes
# - h / hour / hours
# - d / day / days
retryperiod: 6h
# duplicate specifies how to handle matches after an action has already been taken.
# 3 options are possible:
# - extend (default): update the pending actions' time, so they run later
# - ignore: don't do anything, ignore the match
# - rerun: run the actions again. so we may have the same pending actions multiple times.
# (this was the default before 2.2.0)
# duplicate: extend
# actions are run by the filter when regexes are matched
actions:
# actions have a user-defined name
ban:
ban4:
# YAML substitutes *reference by the value anchored at &reference
cmd: *iptablesban
unban:
cmd: *iptablesunban
cmd: *ip4tablesban
# this optional field makes the action run only when a pattern of type ip contains an ipv4
ipv4only: true
ban6:
cmd: *ip6tablesban
# this optional field makes the action run only when a pattern of type ip contains an ipv6
ipv6only: true
unban4:
cmd: *ip4tablesunban
# if after is defined, the action will not take place immediately, but after a specified duration
# same format as retryperiod
after: 48h
after: '2 days'
# let's say reaction is quitting. does it run all those pending commands which had an `after` duration set?
# if you want reaction to run those pending commands before exiting, you can set this:
# onexit: true
# (defaults to false)
# here it is not useful because we will flush and delete the chain containing the bans anyway
# (with the stop commands)
ipv4only: true
unban6:
cmd: *ip6tablesunban
after: '2 days'
ipv6only: true
mail:
cmd: ['sendmail', '...', '<ip>']
# some commands, such as alerting commands, are "oneshot".
# this means they'll be run only once, and won't be executed again when reaction is restarted
oneshot: true
# persistence
# tl;dr: when an `after` action is set in a filter, such a filter acts as a 'jail',


@ -7,7 +7,7 @@ Documentation=https://reaction.ppom.me
# See `man systemd.exec` and `man systemd.service` for most options below
[Service]
ExecStart=/usr/local/bin/reaction start -c /etc/reaction.jsonnet
ExecStart=/usr/local/bin/reaction start -c /etc/reaction/
# Ask systemd to create /var/lib/reaction (/var/lib/ is implicit)
StateDirectory=reaction
@ -15,6 +15,8 @@ StateDirectory=reaction
RuntimeDirectory=reaction
# Start reaction in its state directory
WorkingDirectory=/var/lib/reaction
# Let reaction kill its child processes first
KillMode=mixed
[Install]
WantedBy=multi-user.target

crates/treedb/Cargo.toml Normal file

@ -0,0 +1,23 @@
[package]
name = "treedb"
version = "1.0.0"
edition = "2024"
[features]
test = []
[dependencies]
chrono.workspace = true
futures.workspace = true
serde.workspace = true
serde_json.workspace = true
thiserror.workspace = true
tokio.workspace = true
tokio.features = ["rt-multi-thread", "macros", "io-util", "time", "fs", "tracing"]
tokio-util.workspace = true
tokio-util.features = ["rt"]
tracing.workspace = true
[dev-dependencies]
tempfile.workspace = true


@ -0,0 +1,188 @@
use std::{
collections::{BTreeMap, BTreeSet},
time::Duration,
};
use chrono::DateTime;
use serde_json::Value;
use crate::time::Time;
/// Tries to convert a [`Value`] into a [`String`]
pub fn to_string(val: &Value) -> Result<String, String> {
Ok(val.as_str().ok_or("not a string")?.to_owned())
}
/// Tries to convert a [`Value`] into a [`u64`]
pub fn to_u64(val: &Value) -> Result<u64, String> {
val.as_u64().ok_or("not a u64".into())
}
/// Old way of converting time: with chrono's serialization
fn old_string_to_time(val: &str) -> Result<Time, String> {
let time = DateTime::parse_from_rfc3339(val).map_err(|err| err.to_string())?;
Ok(Duration::new(time.timestamp() as u64, time.timestamp_subsec_nanos()).into())
}
/// New way of converting time: with our own implementation
fn new_string_to_time(val: &str) -> Result<Time, String> {
let nanos: u128 = val.parse().map_err(|_| "not a number")?;
Ok(Duration::new(
(nanos / 1_000_000_000) as u64,
(nanos % 1_000_000_000) as u32,
)
.into())
}
/// Tries to convert a [`&str`] into a [`Time`]
fn string_to_time(val: &str) -> Result<Time, String> {
match new_string_to_time(val) {
Err(err) => match old_string_to_time(val) {
Err(_) => Err(err),
ok => ok,
},
ok => ok,
}
}
/// Tries to convert a [`Value`] into a [`Time`]
pub fn to_time(val: &Value) -> Result<Time, String> {
string_to_time(val.as_str().ok_or("not a string number")?)
}
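As `new_string_to_time` shows, the new on-disk format stores a timestamp as a decimal nanosecond count. A minimal round-trip sketch, with `Duration` standing in for the crate's `Time` wrapper (helper names here are hypothetical):

```rust
use std::time::Duration;

// Serialize a duration the way the new format stores times: as a decimal
// nanosecond count. Hypothetical inverse of `new_string_to_time` above.
fn time_to_string(d: Duration) -> String {
    d.as_nanos().to_string()
}

// Parse it back, mirroring the logic of `new_string_to_time`.
fn string_to_duration(s: &str) -> Result<Duration, String> {
    let nanos: u128 = s.parse().map_err(|_| "not a number".to_string())?;
    Ok(Duration::new(
        (nanos / 1_000_000_000) as u64,
        (nanos % 1_000_000_000) as u32,
    ))
}
```

A legacy RFC 3339 string fails this parse, which is why `string_to_time` falls back to the chrono-based `old_string_to_time`.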
/// Tries to convert a [`Value`] into a [`Vec<String>`]
pub fn to_match(val: &Value) -> Result<Vec<String>, String> {
val.as_array()
.ok_or("not an array")?
.iter()
.map(to_string)
.collect()
}
/// Tries to convert a [`Value`] into a [`BTreeSet<Time>`]
pub fn to_timeset(val: &Value) -> Result<BTreeSet<Time>, String> {
val.as_array()
.ok_or("not an array")?
.iter()
.map(to_time)
.collect()
}
/// Tries to convert a [`Value`] into a [`BTreeMap<Time, u64>`]
pub fn to_timemap(val: &Value) -> Result<BTreeMap<Time, u64>, String> {
val.as_object()
.ok_or("not a map")?
.iter()
.map(|(key, value)| Ok((string_to_time(key)?, to_u64(value)?)))
.collect()
}
#[cfg(test)]
mod tests {
use std::collections::BTreeMap;
use super::*;
#[test]
fn test_to_string() {
assert_eq!(to_string(&("".into())), Ok("".into()));
assert_eq!(to_string(&("ploup".into())), Ok("ploup".into()));
assert!(to_string(&(["ploup"].into())).is_err());
assert!(to_string(&(true.into())).is_err());
assert!(to_string(&(8.into())).is_err());
assert!(to_string(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_u64() {
assert_eq!(to_u64(&(0.into())), Ok(0));
assert_eq!(to_u64(&(8.into())), Ok(8));
assert_eq!(to_u64(&(u64::MAX.into())), Ok(u64::MAX));
assert!(to_u64(&("ploup".into())).is_err());
assert!(to_u64(&(["ploup"].into())).is_err());
assert!(to_u64(&(true.into())).is_err());
assert!(to_u64(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_time() {
assert_eq!(to_time(&"123456".into()).unwrap(), Time::from_nanos(123456),);
assert!(to_time(&(u64::MAX.into())).is_err());
assert!(to_time(&(["ploup"].into())).is_err());
assert!(to_time(&(true.into())).is_err());
// assert!(to_time(&(12345.into())).is_err());
assert!(to_time(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_match() {
assert_eq!(to_match(&([""].into())), Ok(vec!["".into()]));
assert_eq!(
to_match(&(["plip", "ploup"].into())),
Ok(vec!["plip".into(), "ploup".into()])
);
assert!(to_match(&[Value::from("plip"), Value::from(10)].into()).is_err());
assert!(to_match(&("ploup".into())).is_err());
assert!(to_match(&(true.into())).is_err());
assert!(to_match(&(8.into())).is_err());
assert!(to_match(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_timeset() {
assert_eq!(
to_timeset(&Value::from([Value::from("123456789")])),
Ok(BTreeSet::from([Time::from_nanos(123456789)]))
);
assert_eq!(
to_timeset(&Value::from([Value::from("8"), Value::from("123456")])),
Ok(BTreeSet::from([
Time::from_nanos(8),
Time::from_nanos(123456),
]))
);
assert!(to_timeset(&[Value::from("plip"), Value::from(10)].into()).is_err());
assert!(to_timeset(&([""].into())).is_err());
assert!(to_timeset(&(["ploup"].into())).is_err());
assert!(to_timeset(&(true.into())).is_err());
assert!(to_timeset(&(8.into())).is_err());
assert!(to_timeset(&(None::<String>.into())).is_err());
}
#[test]
fn test_to_timemap() {
let time1 = 1234567;
let time1_t = Time::from_nanos(time1);
let time2 = 123456789;
let time2_t = Time::from_nanos(time2);
assert_eq!(
to_timemap(&Value::from_iter([(time2.to_string(), 1)])),
Ok(BTreeMap::from([(time2_t, 1)]))
);
assert_eq!(
to_timemap(&Value::from_iter([
(time1.to_string(), 4),
(time2.to_string(), 0)
])),
Ok(BTreeMap::from([(time1_t.into(), 4), (time2_t.into(), 0)]))
);
assert!(to_timemap(&Value::from_iter([("1-1", time2)])).is_err());
// assert!(to_timemap(&Value::from_iter([(time2.to_string(), time2)])).is_err());
assert!(to_timemap(&Value::from_iter([(time2)])).is_err());
assert!(to_timemap(&Value::from_iter([(1)])).is_err());
assert!(to_timemap(&(["1970-01-01T01:20:34.567+01:00"].into())).is_err());
assert!(to_timemap(&([""].into())).is_err());
assert!(to_timemap(&(["ploup"].into())).is_err());
assert!(to_timemap(&(true.into())).is_err());
assert!(to_timemap(&(8.into())).is_err());
assert!(to_timemap(&(None::<String>.into())).is_err());
}
}


@ -17,49 +17,93 @@ use std::{
time::Duration,
};
use chrono::{Local, TimeDelta};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use serde::{Deserialize, Serialize, de::DeserializeOwned};
use serde_json::Value;
use tokio::{
fs::{rename, File},
fs::{File, rename},
sync::{mpsc, oneshot},
time::{interval, MissedTickBehavior},
time::{MissedTickBehavior, interval},
};
use crate::{
concepts::{Config, Time},
daemon::ShutdownToken,
};
pub mod helpers;
use tokio_util::{sync::CancellationToken, task::task_tracker::TaskTrackerToken};
// Database
use raw::{ReadDB, WriteDB};
use time::{Time, now};
pub mod helpers;
mod raw;
pub mod time;
/// Any order the Database can receive
enum Order {
Log(Entry),
OpenTree(OpenTree),
}
/// Entry sent from [`Tree`] to [`Database`]
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Entry {
struct Entry {
pub tree: String,
pub key: Value,
pub value: Option<Value>,
pub expiry: Time,
}
pub type LoadedDB = HashMap<String, HashMap<Value, Value>>;
/// Order to receive a tree from previous Database
pub struct OpenTree {
name: String,
resp: oneshot::Sender<Option<LoadedTree>>,
}
type LoadedTree = HashMap<Value, Value>;
pub type LoadedDB = HashMap<String, LoadedTree>;
const DB_NAME: &str = "reaction.db";
const DB_NEW_NAME: &str = "reaction.new.db";
impl Config {
fn path_of(&self, name: &str) -> PathBuf {
if self.state_directory().is_empty() {
name.into()
} else {
PathBuf::from(self.state_directory()).join(name)
}
fn path_of(state_directory: &Path, name: &str) -> PathBuf {
if state_directory.as_os_str().is_empty() {
name.into()
} else {
PathBuf::from(state_directory).join(name)
}
}
pub type DatabaseErrorReceiver = oneshot::Receiver<Result<(), String>>;
/// Public-facing API for a treedb Database
pub struct Database {
entry_tx: Option<mpsc::Sender<Order>>,
error_rx: DatabaseErrorReceiver,
}
impl Database {
/// Opens a new Database, whose task will start in the background.
/// You'll have to:
/// - drop all [`Tree`]s,
/// - call [`Self::quit`],
///
/// to have the Database properly quit.
///
/// You can wait on the channel returned by [`Self::quit`] to know how it went.
pub async fn open(
path_directory: &Path,
cancellation_token: CancellationToken,
task_tracker_token: TaskTrackerToken,
) -> Result<Database, IoError> {
let (manager, entry_tx) = DatabaseManager::open(path_directory).await?;
let error_rx = manager.manager(cancellation_token, task_tracker_token);
Ok(Self {
entry_tx: Some(entry_tx),
error_rx,
})
}
/// Permits closing the DB's channel.
/// Unless this function is called, the DB can't close.
pub fn quit(self) -> DatabaseErrorReceiver {
self.error_rx
}
}
@ -67,8 +111,9 @@ impl Config {
// This would make more sense, as actual garbage collection is time-based
/// A [`Database`] logs all write operations on [`Tree`]s in a single file.
/// Logs are written asynchronously, so the write operations in RAM will never block.
pub struct Database {
/// Logs are written asynchronously, so the write operations in RAM will block only when the
/// underlying channel is full.
struct DatabaseManager {
/// Inner database
write_db: WriteDB,
/// [`Tree`]s loaded from disk
@ -79,10 +124,7 @@ pub struct Database {
/// New database atomically replaces the old one when its writing is done.
new_path: PathBuf,
/// The receiver on [`Tree`] write operations
entry_rx: mpsc::Receiver<Entry>,
/// The sender on [`Tree`] write operations.
/// Only used to clone new senders for new Trees.
entry_tx: mpsc::Sender<Entry>,
entry_rx: mpsc::Receiver<Order>,
/// The interval at which the database must be flushed to kernel
flush_every: Duration,
/// The maximum bytes that must be written until the database is rotated
@ -91,29 +133,37 @@ pub struct Database {
bytes_written: usize,
}
impl Database {
pub async fn open(config: &Config) -> Result<Database, IoError> {
let path = config.path_of(DB_NAME);
let new_path = config.path_of(DB_NEW_NAME);
impl DatabaseManager {
pub async fn open(
path_directory: &Path,
) -> Result<(DatabaseManager, mpsc::Sender<Order>), IoError> {
let path = path_of(path_directory, DB_NAME);
let new_path = path_of(path_directory, DB_NEW_NAME);
let (write_db, loaded_db) = rotate_db(&path, &new_path, true).await?;
let (entry_tx, entry_rx) = mpsc::channel(1000);
Ok(Database {
write_db,
loaded_db,
path,
new_path,
entry_rx,
Ok((
DatabaseManager {
write_db,
loaded_db,
path,
new_path,
entry_rx,
flush_every: Duration::from_secs(2),
max_bytes: 20 * 1024 * 1024, // 20 MiB
bytes_written: 0,
},
entry_tx,
flush_every: Duration::from_secs(2),
max_bytes: 20 * 1024 * 1024, // 20 MiB
bytes_written: 0,
})
))
}
pub fn manager(mut self, shutdown: ShutdownToken) -> oneshot::Receiver<Result<(), String>> {
pub fn manager(
mut self,
cancellation_token: CancellationToken,
_task_tracker_token: TaskTrackerToken,
) -> oneshot::Receiver<Result<(), String>> {
let (error_tx, error_rx) = oneshot::channel();
tokio::spawn(async move {
let mut interval = interval(self.flush_every);
@ -121,24 +171,35 @@ impl Database {
// flush_every for the next tick, resulting in a relaxed interval.
// Hoping this will smooth IO pressure when under heavy load.
interval.set_missed_tick_behavior(MissedTickBehavior::Delay);
let status = loop {
let mut status = loop {
tokio::select! {
entry = self.entry_rx.recv() => {
if let Err(err) = self.handle_entry(entry).await {
shutdown.ask_shutdown();
order = self.entry_rx.recv() => {
if let Err(err) = self.handle_order(order).await {
cancellation_token.cancel();
break err;
}
}
_ = interval.tick() => {
if let Err(err) = self.flush().await {
shutdown.ask_shutdown();
cancellation_token.cancel();
break Some(err);
}
}
_ = shutdown.wait() => break None
_ = cancellation_token.cancelled() => break None
};
};
// Finish consuming received entries when shutdown was requested
if status.is_none() {
loop {
let order = self.entry_rx.recv().await;
if let Err(err) = self.handle_order(order).await {
status = err;
break;
}
}
}
// Shutdown
let close_status = self
.close()
@ -156,30 +217,42 @@ impl Database {
error_rx
}
/// Write a received entry. Return:
/// Executes an order. Returns:
/// - Err(Some) if there was an error,
/// - Err(None) if the channel is closed,
/// - Ok(()) in the general case.
async fn handle_entry(&mut self, entry: Option<Entry>) -> Result<(), Option<String>> {
match entry {
Some(entry) => match self.write_db.write_entry(&entry).await {
Ok(bytes_written) => {
self.bytes_written += bytes_written;
if self.bytes_written > self.max_bytes {
self.rotate_db()
.await
.and_then(|_| Ok(self.bytes_written = 0))
.map_err(|err| Some(format!("while rotating database: {err}")))
} else {
Ok(())
}
}
Err(err) => Err(Some(format!("while writing entry to database: {err}"))),
},
async fn handle_order(&mut self, order: Option<Order>) -> Result<(), Option<String>> {
match order {
Some(Order::Log(entry)) => self.handle_entry(entry).await.map_err(Option::Some),
Some(Order::OpenTree(open_tree)) => {
self.handle_open_tree(open_tree);
Ok(())
}
None => Err(None),
}
}
/// Write a received entry.
async fn handle_entry(&mut self, entry: Entry) -> Result<(), String> {
match self.write_db.write_entry(&entry).await {
Ok(bytes_written) => {
self.bytes_written += bytes_written;
if self.bytes_written > self.max_bytes {
match self.rotate_db().await {
Ok(_) => {
self.bytes_written = 0;
Ok(())
}
Err(err) => Err(format!("while rotating database: {err}")),
}
} else {
Ok(())
}
}
Err(err) => Err(format!("while writing entry to database: {err}")),
}
}
/// Flush inner database.
async fn flush(&mut self) -> Result<(), String> {
self.write_db
@ -213,7 +286,7 @@ async fn rotate_db(
// No need to rotate the database when it is new,
// so we return early here
(true, ErrorKind::NotFound) => {
return Ok((WriteDB::new(File::create(path).await?), HashMap::default()))
return Ok((WriteDB::new(File::create(path).await?), HashMap::default()));
}
(_, _) => return Err(err),
},
@ -253,34 +326,49 @@ pub struct Tree<K: KeyType, V: ValueType> {
/// This property permits the database rotation to be `O(n)` in time and `O(1)` in RAM space,
/// `n` being the number of write operations from the last rotation plus the number of new
/// operations.
entry_timeout: TimeDelta,
entry_timeout: Duration,
/// The inner BTreeMap
tree: BTreeMap<K, V>,
/// The sender that permits to asynchronously send write operations to database
tx: mpsc::Sender<Entry>,
tx: mpsc::Sender<Order>,
}
impl Database {
/// Creates a new Tree with the given name and entry timeout.
/// Takes a closure (or regular function) that converts (Value, Value) JSON entries
/// into (K, V) typed entries.
/// Helpers for this closure can be find in the [`helpers`] module.
pub fn open_tree<K: KeyType, V: ValueType, F>(
/// Helpers for this closure can be found in the [`helpers`] module.
pub async fn open_tree<K: KeyType, V: ValueType, F>(
&mut self,
name: String,
entry_timeout: TimeDelta,
entry_timeout: Duration,
map_f: F,
) -> Result<Tree<K, V>, String>
where
F: Fn((Value, Value)) -> Result<(K, V), String>,
{
// Request the tree
let (tx, rx) = oneshot::channel();
let entry_tx = match self.entry_tx.clone() {
None => return Err("Database is closing".to_string()),
Some(entry_tx) => {
entry_tx
.send(Order::OpenTree(OpenTree {
name: name.clone(),
resp: tx,
}))
.await
.map_err(|_| "Database did not answer")?;
// Get a clone of the channel sender
entry_tx.clone()
}
};
// Load the tree from its JSON
let tree = if let Some(json_tree) = self.loaded_db.remove(&name) {
let tree = if let Some(json_tree) = rx.await.map_err(|_| "Database did not respond")? {
json_tree
.into_iter()
.map(map_f)
.collect::<Result<BTreeMap<K, V>, String>>()
.unwrap()
.collect::<Result<BTreeMap<K, V>, String>>()?
} else {
BTreeMap::default()
};
@ -288,15 +376,25 @@ impl Database {
id: name,
entry_timeout,
tree,
tx: self.entry_tx.clone(),
tx: entry_tx,
})
}
}
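The `OpenTree` request above follows a classic request/response pattern: each request carries its own reply channel, and the manager task answers on it. A minimal sketch of the pattern using std channels (names are illustrative; reaction itself uses tokio `mpsc` plus `oneshot`):

```rust
use std::sync::mpsc;
use std::thread;

// A request that carries its own reply channel, like `OpenTree` does.
struct Request {
    name: String,
    resp: mpsc::Sender<Option<String>>,
}

fn open_tree_demo(name: &str) -> Option<String> {
    let (tx, rx) = mpsc::channel::<Request>();
    // The manager thread owns the loaded data and answers each request.
    let manager = thread::spawn(move || {
        while let Ok(req) = rx.recv() {
            let answer = (req.name == "known").then(|| "tree data".to_string());
            let _ = req.resp.send(answer);
        }
    });
    // Caller side: send the request with a fresh reply channel, then wait.
    let (resp_tx, resp_rx) = mpsc::channel();
    tx.send(Request { name: name.into(), resp: resp_tx }).ok()?;
    let answer = resp_rx.recv().ok()?;
    // Dropping the sender closes the channel, letting the manager exit.
    drop(tx);
    let _ = manager.join();
    answer
}
```

The payoff of this design is that only the manager touches `loaded_db`, so no locking is needed: ownership of a tree transfers through the reply channel exactly once.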
impl DatabaseManager {
/// Creates a new Tree with the given name and entry timeout.
/// Takes a closure (or regular function) that converts (Value, Value) JSON entries
/// into (K, V) typed entries.
/// Helpers for this closure can be found in the [`helpers`] module.
pub fn handle_open_tree(&mut self, open_tree: OpenTree) {
let _ = open_tree.resp.send(self.loaded_db.remove(&open_tree.name));
}
// TODO keep only tree names, and use them for the next db rotation to remove associated entries
/// Drops Trees that have not been loaded already
pub fn drop_trees(&mut self) {
self.loaded_db = HashMap::default();
}
// Drops Trees that have not been loaded already
// pub fn drop_trees(&mut self) {
// self.loaded_db = HashMap::default();
// }
}
// Gives access to all read-only functions
@@ -311,45 +409,50 @@ impl<K: KeyType, V: ValueType> Deref for Tree<K, V> {
// Reimplement write functions
impl<K: KeyType, V: ValueType> Tree<K, V> {
/// Log an [`Entry`] to the [`Database`]
fn log(&mut self, k: &K, v: Option<&V>) {
async fn log(&mut self, k: &K, v: Option<&V>) {
let now = now();
let e = Entry {
tree: self.id.clone(),
key: serde_json::to_value(k).expect("could not serialize key"),
value: v.map(|v| serde_json::to_value(v).expect("could not serialize value")),
expiry: Local::now() + self.entry_timeout,
expiry: now + self.entry_timeout,
};
let tx = self.tx.clone();
// FIXME what if send fails?
tokio::spawn(async move {
let _ = tx.send(e).await;
});
let _ = tx.send(Order::Log(e)).await;
}
/// Asynchronously persisted version of [`BTreeMap::insert`]
pub fn insert(&mut self, key: K, value: V) -> Option<V> {
self.log(&key, Some(&value));
pub async fn insert(&mut self, key: K, value: V) -> Option<V> {
self.log(&key, Some(&value)).await;
self.tree.insert(key, value)
}
/// Asynchronously persisted version of [`BTreeMap::pop_first`]
pub fn pop_first(&mut self) -> Option<(K, V)> {
self.tree.pop_first().map(|(key, value)| {
self.log(&key, None);
(key, value)
})
pub async fn pop_first(&mut self) -> Option<(K, V)> {
match self.tree.pop_first() {
Some((key, value)) => {
self.log(&key, None).await;
Some((key, value))
}
None => None,
}
}
/// Asynchronously persisted version of [`BTreeMap::pop_last`]
pub fn pop_last(&mut self) -> Option<(K, V)> {
self.tree.pop_last().map(|(key, value)| {
self.log(&key, None);
(key, value)
})
pub async fn pop_last(&mut self) -> Option<(K, V)> {
match self.tree.pop_last() {
Some((key, value)) => {
self.log(&key, None).await;
Some((key, value))
}
None => None,
}
}
/// Asynchronously persisted version of [`BTreeMap::remove`]
pub fn remove(&mut self, key: &K) -> Option<V> {
self.log(&key, None);
pub async fn remove(&mut self, key: &K) -> Option<V> {
self.log(key, None).await;
self.tree.remove(key)
}
@@ -357,22 +460,49 @@ impl<K: KeyType, V: ValueType> Tree<K, V> {
/// Returning None removes the item if it existed before.
/// Asynchronously persisted.
/// *API design borrowed from [`fjall::WriteTransaction::fetch_update`].*
pub fn fetch_update<F: FnMut(Option<&V>) -> Option<V>>(
pub async fn fetch_update<F: FnMut(Option<V>) -> Option<V>>(
&mut self,
key: K,
mut f: F,
) -> Option<V> {
let old_value = self.get(&key);
let old_value = self.get(&key).map(|v| v.to_owned());
let new_value = f(old_value);
if old_value != new_value.as_ref() {
self.log(&key, new_value.as_ref());
}
self.log(&key, new_value.as_ref()).await;
if let Some(new_value) = new_value {
self.tree.insert(key, new_value)
} else {
self.tree.remove(&key)
}
}
#[cfg(any(test, feature = "test"))]
pub fn tree(&self) -> &BTreeMap<K, V> {
&self.tree
}
}
#[cfg(any(test, feature = "test"))]
impl DatabaseManager {
pub fn set_loaded_db(&mut self, loaded_db: LoadedDB) {
self.loaded_db = loaded_db;
}
}
#[cfg(any(test, feature = "test"))]
impl Database {
pub async fn from_dir(dir_path: &Path, loaded_db: Option<LoadedDB>) -> Result<Self, IoError> {
use tokio_util::task::TaskTracker;
let (mut manager, entry_tx) = DatabaseManager::open(dir_path).await?;
if let Some(loaded_db) = loaded_db {
manager.set_loaded_db(loaded_db)
}
let error_rx = manager.manager(CancellationToken::new(), TaskTracker::new().token());
Ok(Self {
entry_tx: Some(entry_tx),
error_rx,
})
}
}
#[cfg(test)]
@@ -380,68 +510,21 @@ mod tests {
use std::{
collections::{BTreeMap, BTreeSet, HashMap},
io::Error as IoError,
path::Path,
time::Duration,
};
use chrono::{Local, TimeDelta};
use serde_json::Value;
use tempfile::{NamedTempFile, TempDir};
use tokio::fs::{write, File};
use tokio::fs::File;
use crate::concepts::Config;
use super::{
helpers::*, raw::WriteDB, rotate_db, Database, Entry, KeyType, LoadedDB, Tree, ValueType,
DB_NAME,
};
impl Database {
pub async fn from_dir(dir_path: &Path) -> Result<Self, IoError> {
let config_path = dir_path.join("reaction.jsonnet");
write(
&config_path,
format!(
"
{{
state_directory: {dir_path:?},
patterns: {{ pattern: {{ regex: \"prout\" }} }},
streams: {{ dummy: {{
cmd: [\"dummy\"],
filters: {{ dummy: {{
regex: [\"dummy\"],
actions: {{ dummy: {{
cmd: [\"dummy\"]
}} }}
}} }}
}} }}
}}
"
),
)
.await?;
let config = Config::from_path(&config_path).unwrap();
Database::open(&config).await
}
pub fn set_loaded_db(&mut self, loaded_db: LoadedDB) {
self.loaded_db = loaded_db;
}
}
impl<K: KeyType, V: ValueType> Tree<K, V> {
pub fn tree(&self) -> &BTreeMap<K, V> {
&self.tree
}
}
use super::{DB_NAME, Database, Entry, Time, helpers::*, now, raw::WriteDB, rotate_db};
#[tokio::test]
async fn test_rotate_db() {
let now = Local::now();
let now = now();
let expired = now - TimeDelta::seconds(2);
let valid = now + TimeDelta::seconds(2);
let expired = now - Time::from_secs(2);
let valid = now + Time::from_secs(2);
let entries = [
Entry {
@@ -539,15 +622,16 @@ mod tests {
#[tokio::test]
async fn test_open_tree() {
let now = Local::now();
let now2 = now + TimeDelta::milliseconds(2);
let now3 = now + TimeDelta::milliseconds(3);
let now = now();
let now_ms = now.to_rfc3339();
let now2_ms = now2.to_rfc3339();
let now3_ms = now3.to_rfc3339();
let now2 = now + Time::from_millis(2);
let now3 = now + Time::from_millis(3);
let valid = now + TimeDelta::seconds(2);
// let now_ms = now.as_nanos().to_string();
// let now2_ms = now2.as_nanos().to_string();
// let now3_ms = now3.as_nanos().to_string();
let valid = now + Time::from_secs(2);
let ip127 = vec!["127.0.0.1".to_string()];
let ip1 = vec!["1.1.1.1".to_string()];
@@ -555,44 +639,50 @@
let entries = [
Entry {
tree: "time-match".into(),
key: now_ms.clone().into(),
key: now.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "time-match".into(),
key: now2_ms.clone().into(),
key: now2.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "time-match".into(),
key: now3_ms.clone().into(),
key: now3.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "time-match".into(),
key: now2_ms.clone().into(),
key: now2.as_nanos().to_string().into(),
value: Some(ip127.clone().into()),
expiry: valid,
},
Entry {
tree: "match-timeset".into(),
key: ip127.clone().into(),
value: Some([Value::String(now_ms.into())].into()),
value: Some([Value::String(now.as_nanos().to_string())].into()),
expiry: valid,
},
Entry {
tree: "match-timeset".into(),
key: ip1.clone().into(),
value: Some([Value::String(now2_ms.clone().into())].into()),
value: Some([Value::String(now2.as_nanos().to_string())].into()),
expiry: valid,
},
Entry {
tree: "match-timeset".into(),
key: ip1.clone().into(),
value: Some([Value::String(now2_ms.clone().into()), now3_ms.into()].into()),
value: Some(
[
Value::String(now2.as_nanos().to_string()),
Value::String(now3.as_nanos().to_string()),
]
.into(),
),
expiry: valid,
},
];
@@ -608,14 +698,15 @@ mod tests {
write_db.close().await.unwrap();
drop(write_db);
let mut database = Database::from_dir(&dir_path).await.unwrap();
let mut database = Database::from_dir(dir_path, None).await.unwrap();
let time_match = database
.open_tree(
"time-match".into(),
TimeDelta::seconds(2),
Duration::from_secs(2),
|(key, value)| Ok((to_time(&key)?, to_match(&value)?)),
)
.await
.unwrap();
assert_eq!(
time_match.tree,
@@ -629,9 +720,10 @@
let match_timeset = database
.open_tree(
"match-timeset".into(),
TimeDelta::hours(2),
Duration::from_hours(2),
|(key, value)| Ok((to_match(&key)?, to_timeset(&value)?)),
)
.await
.unwrap();
assert_eq!(
match_timeset.tree,
@@ -644,9 +736,10 @@
let unknown_tree = database
.open_tree(
"unknown_tree".into(),
TimeDelta::hours(2),
Duration::from_hours(2),
|(key, value)| Ok((to_match(&key)?, to_timeset(&value)?)),
)
.await
.unwrap();
assert_eq!(unknown_tree.tree, BTreeMap::default());
}


@@ -1,9 +1,9 @@
use std::{
collections::HashMap,
io::{Error as IoError, Write},
io::Error as IoError,
time::{SystemTime, UNIX_EPOCH},
};
use chrono::{Local, TimeZone};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use thiserror::Error;
@@ -13,6 +13,8 @@ use tokio::{
};
use tracing::error;
use crate::time::Time;
use super::{Entry, LoadedDB};
const DB_TREE_ID: u64 = 0;
@@ -46,7 +48,7 @@ struct WriteEntry<'a> {
#[serde(rename = "v")]
pub value: &'a Option<Value>,
#[serde(rename = "e")]
pub expiry: i64,
pub expiry: u64,
}
/// Entry in custom database format, just read from database
@@ -59,7 +61,7 @@ struct ReadEntry {
#[serde(rename = "v")]
pub value: Option<Value>,
#[serde(rename = "e")]
pub expiry: i64,
pub expiry: u64,
}
/// Permits to write entries in a database.
@@ -68,7 +70,7 @@ pub struct WriteDB {
file: BufWriter<File>,
names: HashMap<String, u64>,
next_id: u64,
buffer: Buffer,
buffer: Vec<u8>,
}
impl WriteDB {
@@ -80,7 +82,7 @@ impl WriteDB {
// names: HashMap::from([(DB_TREE_NAME.into(), DB_TREE_ID)]),
names: HashMap::default(),
next_id: 1,
buffer: Buffer::new(),
buffer: Vec::default(),
}
}
@@ -112,7 +114,7 @@ impl WriteDB {
tree: tree_id,
key: &entry.key,
value: &entry.value,
expiry: entry.expiry.timestamp_millis(),
expiry: entry.expiry.as_millis() as u64,
})
.await
.map(|bytes_written| bytes_written + written)
@@ -121,11 +123,8 @@ impl WriteDB {
async fn _write_entry(&mut self, raw_entry: &WriteEntry<'_>) -> Result<usize, SerdeOrIoError> {
self.buffer.clear();
serde_json::to_writer(&mut self.buffer, &raw_entry)?;
self.buffer.push("\n".as_bytes());
self.file
.write(self.buffer.as_ref())
.await
.map_err(|err| err.into())
self.buffer.push(b'\n');
Ok(self.file.write(self.buffer.as_ref()).await?)
}
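This hunk replaces the custom `Buffer` wrapper with a plain `Vec<u8>`, which already implements `std::io::Write`. The point of reusing one buffer is that `clear()` truncates without freeing, so after warm-up no allocation happens per entry. A minimal stdlib sketch of that pattern (the JSON strings and the output `Vec` are stand-ins for `serde_json::to_writer` and the `BufWriter<File>`):

```rust
use std::io::Write;

fn main() {
    // One reusable line buffer: clear() keeps its capacity, so repeated
    // entries stop allocating once the buffer is large enough.
    let mut buffer: Vec<u8> = Vec::new();
    let mut out: Vec<u8> = Vec::new(); // stands in for the BufWriter<File>

    for entry in ["{\"k\":1}", "{\"k\":2}"] {
        buffer.clear(); // truncate, keep capacity
        buffer.write_all(entry.as_bytes()).unwrap(); // serde_json::to_writer would write here
        buffer.push(b'\n');
        out.write_all(&buffer).unwrap();
    }
    assert_eq!(&out[..], &b"{\"k\":1}\n{\"k\":2}\n"[..]);
}
```

Since `Vec<u8>: io::Write` appends and never fails, the wrapper's `Write` impl was pure boilerplate, which is why the diff deletes it outright.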
/// Flushes the inner [`tokio::io::BufWriter`]
@@ -182,12 +181,14 @@ impl ReadDB {
Ok(Some(entry)) => {
// Add back in new DB
match write_db.write_entry(&entry).await {
Ok(_) => (),
Err(err) => match err {
SerdeOrIoError::IO(err) => return Err(err),
SerdeOrIoError::Serde(err) => panic!("serde should be able to serialize an entry just deserialized: {err}"),
}
}
Ok(_) => (),
Err(err) => match err {
SerdeOrIoError::IO(err) => return Err(err),
SerdeOrIoError::Serde(err) => error!(
"serde should be able to serialize an entry just deserialized: {err}"
),
},
}
// Insert data in RAM
if load_db {
let map: &mut HashMap<Value, Value> =
@@ -205,7 +206,10 @@
}
async fn next(&mut self) -> Result<Option<Entry>, DatabaseError> {
let now = Local::now().timestamp_millis();
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis() as u64;
// Loop until we get a non-special value
let raw_entry = loop {
self.buffer.clear();
@@ -245,72 +249,33 @@ impl ReadDB {
tree: tree.to_owned(),
key: raw_entry.key,
value: raw_entry.value,
expiry: Local.timestamp_millis_opt(raw_entry.expiry).unwrap(),
expiry: Time::from_millis(raw_entry.expiry),
})),
None => Err(DatabaseError::MissingKeyId(raw_entry.tree)),
}
}
}
/// This [`String`] buffer implements [`Write`] to permit allocation reuse.
/// Using [`serde_json::to_string`] allocates for every entry.
/// This Buffer permits to use [`serde_json::to_writer`] instead.
struct Buffer {
b: Vec<u8>,
}
impl AsRef<Vec<u8>> for Buffer {
fn as_ref(&self) -> &Vec<u8> {
&self.b
}
}
impl Buffer {
fn new() -> Self {
Buffer { b: Vec::new() }
}
/// Truncates the buffer without touching its capacity
fn clear(&mut self) {
self.b.clear()
}
fn push(&mut self, buf: &[u8]) {
self.b.extend_from_slice(buf);
}
}
impl Write for Buffer {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
self.push(buf);
Ok(buf.len())
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::collections::HashMap;
use chrono::{Local, TimeDelta, TimeZone};
use serde_json::Value;
use tempfile::NamedTempFile;
use tokio::fs::{read, write, File};
use tokio::fs::{File, read, write};
use crate::treedb::{
raw::{DatabaseError, ReadDB, WriteDB, DB_TREE_ID},
use crate::{
Entry,
raw::{DB_TREE_ID, DatabaseError, ReadDB, WriteDB},
time::{Time, now},
};
#[tokio::test]
async fn write_db_write_entry() {
let now = Local::now();
let expired = now - TimeDelta::seconds(2);
let expired_ts = expired.timestamp_millis();
// let valid = now + TimeDelta::seconds(2);
let now = now();
let expired = now - Time::from_secs(2);
let expired_ts = expired.as_millis();
// let valid = now + Time::from_secs(2);
// let valid_ts = valid.timestamp_millis();
let path = NamedTempFile::new().unwrap().into_temp_path();
@@ -334,21 +299,23 @@ mod tests {
assert_eq!(
contents,
format!("{{\"t\":0,\"k\":1,\"v\":\"yooo\",\"e\":0}}\n{{\"t\":1,\"k\":\"key1\",\"v\":\"value1\",\"e\":{expired_ts}}}\n")
format!(
"{{\"t\":0,\"k\":1,\"v\":\"yooo\",\"e\":0}}\n{{\"t\":1,\"k\":\"key1\",\"v\":\"value1\",\"e\":{expired_ts}}}\n"
)
);
}
#[tokio::test]
async fn read_db_next() {
let now = Local::now();
let now = now();
let expired = now - TimeDelta::seconds(2);
let expired_ts = expired.timestamp_millis();
let expired = now - Time::from_secs(2);
let expired_ts = expired.as_millis();
let valid = now + TimeDelta::seconds(2);
let valid_ts = valid.timestamp_millis();
let valid = now + Time::from_secs(2);
let valid_ts = valid.as_millis();
// Truncate to millisecond precision
let valid = Local.timestamp_millis_opt(valid_ts).unwrap();
let valid = Time::new(valid.as_secs(), valid.subsec_millis() * 1_000_000);
let path = NamedTempFile::new().unwrap().into_temp_path();
@@ -379,34 +346,22 @@ mod tests {
})
);
assert!(match read_db.next().await {
Err(DatabaseError::Serde(_)) => true,
_ => false,
});
assert!(match read_db.next().await {
Err(DatabaseError::Serde(_)) => true,
_ => false,
});
assert!(match read_db.next().await {
Err(DatabaseError::Serde(_)) => true,
_ => false,
});
assert!(match read_db.next().await {
Err(DatabaseError::MissingKeyId(3)) => true,
_ => false,
});
matches!(read_db.next().await, Err(DatabaseError::Serde(_)));
matches!(read_db.next().await, Err(DatabaseError::Serde(_)));
matches!(read_db.next().await, Err(DatabaseError::Serde(_)));
matches!(read_db.next().await, Err(DatabaseError::MissingKeyId(3)));
assert!(read_db.next().await.unwrap().is_none());
}
#[tokio::test]
async fn read_db_read() {
let now = Local::now();
let now = now();
let expired = now - TimeDelta::seconds(2);
let expired_ts = expired.timestamp_millis();
let expired = now - Time::from_secs(2);
let expired_ts = expired.as_millis();
let valid = now + TimeDelta::seconds(2);
let valid_ts = valid.timestamp_millis();
let valid = now + Time::from_secs(2);
let valid_ts = valid.as_millis();
let read_path = NamedTempFile::new().unwrap().into_temp_path();
let write_path = NamedTempFile::new().unwrap().into_temp_path();
@@ -479,13 +434,13 @@ mod tests {
#[tokio::test]
async fn write_then_read_1000() {
// Generate entries
let now = Local::now();
let now = now();
let entries: Vec<_> = (0..1000)
.map(|i| Entry {
tree: format!("tree{}", i % 4),
key: format!("key{}", i % 10).into(),
value: Some(format!("value{}", i % 10).into()),
expiry: now + TimeDelta::seconds((i % 4) - 1),
expiry: now + Time::from_secs(i % 4) - Time::from_secs(1),
})
.collect();
@@ -495,7 +450,7 @@ mod tests {
tree: format!("tree{}", i % 4),
key: format!("key{}", i % 10).into(),
value: None,
expiry: now + TimeDelta::seconds(i % 4),
expiry: now + Time::from_secs(i % 4),
})
.collect();

crates/treedb/src/time.rs Normal file

@@ -0,0 +1,117 @@
use std::{
fmt,
ops::{Add, Deref, Sub},
time::{Duration, SystemTime, UNIX_EPOCH},
};
use serde::{Deserialize, Serialize};
/// [`std::time::Duration`] since [`std::time::UNIX_EPOCH`]
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Time(Duration);
impl Deref for Time {
type Target = Duration;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl From<Duration> for Time {
fn from(value: Duration) -> Self {
Time(value)
}
}
impl Into<Duration> for Time {
fn into(self) -> Duration {
self.0
}
}
impl Add<Duration> for Time {
type Output = Time;
fn add(self, rhs: Duration) -> Self::Output {
Time(self.0 + rhs)
}
}
impl Add<Time> for Time {
type Output = Time;
fn add(self, rhs: Time) -> Self::Output {
Time(self.0 + rhs.0)
}
}
impl Sub<Duration> for Time {
type Output = Time;
fn sub(self, rhs: Duration) -> Self::Output {
Time(self.0 - rhs)
}
}
impl Sub<Time> for Time {
type Output = Time;
fn sub(self, rhs: Time) -> Self::Output {
Time(self.0 - rhs.0)
}
}
impl Serialize for Time {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
self.as_nanos().to_string().serialize(serializer)
}
}
struct TimeVisitor;
impl<'de> serde::de::Visitor<'de> for TimeVisitor {
type Value = Time;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
write!(formatter, "a string representing nanoseconds")
}
fn visit_str<E>(self, s: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
match s.parse::<u128>() {
Ok(nanos) => Ok(Time(Duration::new(
(nanos / 1_000_000_000) as u64,
(nanos % 1_000_000_000) as u32,
))),
Err(_) => Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(s),
&self,
)),
}
}
}
impl<'de> Deserialize<'de> for Time {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
deserializer.deserialize_str(TimeVisitor)
}
}
impl Time {
pub fn new(secs: u64, nanos: u32) -> Time {
Time(Duration::new(secs, nanos))
}
pub fn from_hours(hours: u64) -> Time {
Time(Duration::from_hours(hours))
}
pub fn from_mins(mins: u64) -> Time {
Time(Duration::from_mins(mins))
}
pub fn from_secs(secs: u64) -> Time {
Time(Duration::from_secs(secs))
}
pub fn from_millis(millis: u64) -> Time {
Time(Duration::from_millis(millis))
}
pub fn from_nanos(nanos: u64) -> Time {
Time(Duration::from_nanos(nanos))
}
}
pub fn now() -> Time {
Time(SystemTime::now().duration_since(UNIX_EPOCH).unwrap())
}


@@ -1,4 +0,0 @@
This is the old Go codebase of reaction, ie. all 0.x and 1.x versions.
This codebase most probably won't be updated.
Development now continues in Rust for reaction 2.x.


@@ -1,393 +0,0 @@
package app
import (
"bufio"
"encoding/gob"
"encoding/json"
"fmt"
"net"
"os"
"regexp"
"slices"
"strings"
"time"
"framagit.org/ppom/reaction/logger"
"sigs.k8s.io/yaml"
)
const (
Info = 0
Flush = 1
)
type Request struct {
Request int
Flush PSF
}
type Response struct {
Err error
// Config Conf
Matches MatchesMap
Actions ActionsMap
}
func SendAndRetrieve(data Request) Response {
conn, err := net.Dial("unix", *SocketPath)
if err != nil {
logger.Fatalln("Error opening connection to daemon:", err)
}
defer conn.Close()
err = gob.NewEncoder(conn).Encode(data)
if err != nil {
logger.Fatalln("Can't send message:", err)
}
var response Response
err = gob.NewDecoder(conn).Decode(&response)
if err != nil {
logger.Fatalln("Invalid answer from daemon:", err)
}
return response
}
type PatternStatus struct {
Matches int `json:"matches,omitempty"`
Actions map[string][]string `json:"actions,omitempty"`
}
type MapPatternStatus map[Match]*PatternStatus
type MapPatternStatusFlush MapPatternStatus
type ClientStatus map[string]map[string]MapPatternStatus
type ClientStatusFlush ClientStatus
func (mps MapPatternStatusFlush) MarshalJSON() ([]byte, error) {
for _, v := range mps {
return json.Marshal(v)
}
return []byte(""), nil
}
func (csf ClientStatusFlush) MarshalJSON() ([]byte, error) {
ret := make(map[string]map[string]MapPatternStatusFlush)
for k, v := range csf {
ret[k] = make(map[string]MapPatternStatusFlush)
for kk, vv := range v {
ret[k][kk] = MapPatternStatusFlush(vv)
}
}
return json.Marshal(ret)
}
func pfMatches(streamName string, filterName string, regexes map[string]*regexp.Regexp, match Match, filter *Filter) bool {
// Check stream and filter match
if streamName != "" && streamName != filter.Stream.Name {
return false
}
if filterName != "" && filterName != filter.Name {
return false
}
// Check that all user requested patterns are in this filter
var nbMatched int
var localMatches = match.Split()
// For each pattern of this filter
for i, pattern := range filter.Pattern {
// Check that this pattern has user requested name
if reg, ok := regexes[pattern.Name]; ok {
// Check that the PF.p[i] matches user requested pattern
if reg.MatchString(localMatches[i]) {
nbMatched++
}
}
}
if len(regexes) != nbMatched {
return false
}
// All checks passed
return true
}
func addMatchToCS(cs ClientStatus, pf PF, times map[time.Time]struct{}) {
patterns, streamName, filterName := pf.P, pf.F.Stream.Name, pf.F.Name
if cs[streamName] == nil {
cs[streamName] = make(map[string]MapPatternStatus)
}
if cs[streamName][filterName] == nil {
cs[streamName][filterName] = make(MapPatternStatus)
}
cs[streamName][filterName][patterns] = &PatternStatus{len(times), nil}
}
func addActionToCS(cs ClientStatus, pa PA, times map[time.Time]struct{}) {
patterns, streamName, filterName, actionName := pa.P, pa.A.Filter.Stream.Name, pa.A.Filter.Name, pa.A.Name
if cs[streamName] == nil {
cs[streamName] = make(map[string]MapPatternStatus)
}
if cs[streamName][filterName] == nil {
cs[streamName][filterName] = make(MapPatternStatus)
}
if cs[streamName][filterName][patterns] == nil {
cs[streamName][filterName][patterns] = new(PatternStatus)
}
ps := cs[streamName][filterName][patterns]
if ps.Actions == nil {
ps.Actions = make(map[string][]string)
}
for then := range times {
ps.Actions[actionName] = append(ps.Actions[actionName], then.Format(time.DateTime))
}
}
func printClientStatus(cs ClientStatus, format string) {
var text []byte
var err error
if format == "json" {
text, err = json.MarshalIndent(cs, "", " ")
} else {
text, err = yaml.Marshal(cs)
}
if err != nil {
logger.Fatalln("Failed to convert daemon binary response to text format:", err)
}
fmt.Println(strings.ReplaceAll(string(text), "\\0", " "))
}
func compileKVPatterns(kvpatterns []string) map[string]*regexp.Regexp {
var regexes map[string]*regexp.Regexp
regexes = make(map[string]*regexp.Regexp)
for _, p := range kvpatterns {
// p syntax already checked in Main
key, value, found := strings.Cut(p, "=")
if !found {
logger.Printf(logger.ERROR, "Bad argument: no `=` in %v", p)
logger.Fatalln("Patterns must be prefixed by their name (e.g. ip=1.1.1.1)")
}
if regexes[key] != nil {
logger.Fatalf("Bad argument: same pattern name provided multiple times: %v", key)
}
compiled, err := regexp.Compile(fmt.Sprintf("^%v$", value))
if err != nil {
logger.Fatalf("Bad argument: Could not compile: `%v`: %v", value, err)
}
regexes[key] = compiled
}
return regexes
}
func ClientShow(format, stream, filter string, kvpatterns []string) {
response := SendAndRetrieve(Request{Info, PSF{}})
if response.Err != nil {
logger.Fatalln("Received error from daemon:", response.Err)
}
cs := make(ClientStatus)
var regexes map[string]*regexp.Regexp
if len(kvpatterns) != 0 {
regexes = compileKVPatterns(kvpatterns)
}
var found bool
// Painful data manipulation
for pf, times := range response.Matches {
// Check this PF is not empty
if len(times) == 0 {
continue
}
if !pfMatches(stream, filter, regexes, pf.P, pf.F) {
continue
}
addMatchToCS(cs, pf, times)
found = true
}
// Painful data manipulation
for pa, times := range response.Actions {
// Check this PF is not empty
if len(times) == 0 {
continue
}
if !pfMatches(stream, filter, regexes, pa.P, pa.A.Filter) {
continue
}
addActionToCS(cs, pa, times)
found = true
}
if !found {
logger.Println(logger.WARN, "No matching stream.filter items found. This does not mean it doesn't exist, maybe it just didn't receive any match.")
os.Exit(1)
}
printClientStatus(cs, format)
os.Exit(0)
}
// TODO : Show values we just flushed - for now we got no details :
/*
* % ./reaction flush -l ssh.failedlogin login=".*t"
* ssh:
* failedlogin:
* actions:
* unban:
* - "2024-04-30 15:27:28"
* - "2024-04-30 15:27:28"
* - "2024-04-30 15:27:28"
* - "2024-04-30 15:27:28"
*
*/
func ClientFlush(format, streamName, filterName string, patterns []string) {
requestedPatterns := compileKVPatterns(patterns)
// Remember which Filters are compatible with the query
filterCompatibility := make(map[SF]bool)
isCompatible := func(filter *Filter) bool {
sf := SF{filter.Stream.Name, filter.Name}
compatible, ok := filterCompatibility[sf]
// already tested
if ok {
return compatible
}
for k := range requestedPatterns {
if -1 == slices.IndexFunc(filter.Pattern, func(pattern *Pattern) bool {
return pattern.Name == k
}) {
filterCompatibility[sf] = false
return false
}
}
filterCompatibility[sf] = true
return true
}
// match functions
kvMatch := func(filter *Filter, filterPatterns []string) bool {
// For each user requested pattern
for k, v := range requestedPatterns {
// Find its index on the Filter.Pattern
for i, pattern := range filter.Pattern {
if k == pattern.Name {
// Test the match
if !v.MatchString(filterPatterns[i]) {
return false
}
}
}
}
return true
}
var found bool
fullMatch := func(filter *Filter, match Match) bool {
// Test if we limit by stream
if streamName == "" || filter.Stream.Name == streamName {
// Test if we limit by filter
if filterName == "" || filter.Name == filterName {
found = true
filterPatterns := match.Split()
return isCompatible(filter) && kvMatch(filter, filterPatterns)
}
}
return false
}
response := SendAndRetrieve(Request{Info, PSF{}})
if response.Err != nil {
logger.Fatalln("Received error from daemon:", response.Err)
}
commands := make([]PSF, 0)
cs := make(ClientStatus)
for pf, times := range response.Matches {
if fullMatch(pf.F, pf.P) {
commands = append(commands, PSF{pf.P, pf.F.Stream.Name, pf.F.Name})
addMatchToCS(cs, pf, times)
}
}
for pa, times := range response.Actions {
if fullMatch(pa.A.Filter, pa.P) {
commands = append(commands, PSF{pa.P, pa.A.Filter.Stream.Name, pa.A.Filter.Name})
addActionToCS(cs, pa, times)
}
}
if !found {
logger.Println(logger.WARN, "No matching stream.filter items found. This does not mean it doesn't exist, maybe it just didn't receive any match.")
os.Exit(1)
}
for _, psf := range commands {
response := SendAndRetrieve(Request{Flush, psf})
if response.Err != nil {
logger.Fatalln("Received error from daemon:", response.Err)
}
}
printClientStatus(cs, format)
os.Exit(0)
}
func TestRegex(confFilename, regex, line string) {
conf := parseConf(confFilename)
// Code close to app/startup.go
var usedPatterns []*Pattern
for _, pattern := range conf.Patterns {
if strings.Contains(regex, pattern.nameWithBraces) {
usedPatterns = append(usedPatterns, pattern)
regex = strings.Replace(regex, pattern.nameWithBraces, pattern.Regex, 1)
}
}
reg, err := regexp.Compile(regex)
if err != nil {
logger.Fatalln("ERROR the specified regex is invalid: %v", err)
os.Exit(1)
}
// Code close to app/daemon.go
match := func(line string) {
var ignored bool
if matches := reg.FindStringSubmatch(line); matches != nil {
if usedPatterns != nil {
var result []string
for _, p := range usedPatterns {
match := matches[reg.SubexpIndex(p.Name)]
result = append(result, match)
if !p.notAnIgnore(&match) {
ignored = true
}
}
if !ignored {
fmt.Printf("\033[32mmatching\033[0m %v: %v\n", WithBrackets(result), line)
} else {
fmt.Printf("\033[33mignore matching\033[0m %v: %v\n", WithBrackets(result), line)
}
} else {
fmt.Printf("\033[32mmatching\033[0m [%v]:\n", line)
}
} else {
fmt.Printf("\033[31mno match\033[0m: %v\n", line)
}
}
if line != "" {
match(line)
} else {
logger.Println(logger.INFO, "no second argument: reading from stdin")
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
match(scanner.Text())
}
}
}


@@ -1,454 +0,0 @@
package app
import (
"bufio"
"os"
"os/exec"
"os/signal"
"strings"
"sync"
"syscall"
"time"
"framagit.org/ppom/reaction/logger"
)
// Executes a command and channel-send its stdout
func cmdStdout(commandline []string) chan *string {
lines := make(chan *string)
go func() {
cmd := exec.Command(commandline[0], commandline[1:]...)
stdout, err := cmd.StdoutPipe()
if err != nil {
logger.Fatalln("couldn't open stdout on command:", err)
}
if err := cmd.Start(); err != nil {
logger.Fatalln("couldn't start command:", err)
}
defer stdout.Close()
scanner := bufio.NewScanner(stdout)
for scanner.Scan() {
line := scanner.Text()
lines <- &line
logger.Println(logger.DEBUG, "stdout:", line)
}
close(lines)
}()
return lines
}
func runCommands(commands [][]string, moment string) bool {
ok := true
for _, command := range commands {
cmd := exec.Command(command[0], command[1:]...)
cmd.WaitDelay = time.Minute
logger.Printf(logger.INFO, "%v command: run %v\n", moment, command)
if err := cmd.Start(); err != nil {
logger.Printf(logger.ERROR, "%v command: run %v: %v", moment, command, err)
ok = false
} else {
err := cmd.Wait()
if err != nil {
logger.Printf(logger.ERROR, "%v command: run %v: %v", moment, command, err)
ok = false
}
}
}
return ok
}
func (p *Pattern) notAnIgnore(match *string) bool {
for _, regex := range p.compiledIgnoreRegex {
if regex.MatchString(*match) {
return false
}
}
for _, ignore := range p.Ignore {
if ignore == *match {
return false
}
}
return true
}
// Whether one of the filter's regexes is matched on a line
func (f *Filter) match(line *string) Match {
for _, regex := range f.compiledRegex {
if matches := regex.FindStringSubmatch(*line); matches != nil {
if f.Pattern != nil {
var result []string
for _, p := range f.Pattern {
match := matches[regex.SubexpIndex(p.Name)]
if p.notAnIgnore(&match) {
result = append(result, match)
}
}
if len(result) == len(f.Pattern) {
logger.Printf(logger.INFO, "%s.%s: match %s", f.Stream.Name, f.Name, WithBrackets(result))
return JoinMatch(result)
}
} else {
logger.Printf(logger.INFO, "%s.%s: match [.]\n", f.Stream.Name, f.Name)
// No pattern, so this match will never actually be used
return "."
}
}
}
return ""
}
func (f *Filter) sendActions(match Match, at time.Time) {
for _, a := range f.Actions {
actionsC <- PAT{match, a, at.Add(a.afterDuration)}
}
}
func (a *Action) exec(match Match) {
defer wgActions.Done()
var computedCommand []string
if a.Filter.Pattern != nil {
computedCommand = make([]string, 0, len(a.Cmd))
matches := match.Split()
for _, item := range a.Cmd {
for i, p := range a.Filter.Pattern {
item = strings.ReplaceAll(item, p.nameWithBraces, matches[i])
}
computedCommand = append(computedCommand, item)
}
} else {
computedCommand = a.Cmd
}
logger.Printf(logger.INFO, "%s.%s.%s: run %s\n", a.Filter.Stream.Name, a.Filter.Name, a.Name, computedCommand)
cmd := exec.Command(computedCommand[0], computedCommand[1:]...)
if ret := cmd.Run(); ret != nil {
logger.Printf(logger.ERROR, "%s.%s.%s: run %s, code %s\n", a.Filter.Stream.Name, a.Filter.Name, a.Name, computedCommand, ret)
}
}
func ActionsManager(concurrency int) {
// concurrency init
execActionsC := make(chan PA)
if concurrency > 0 {
for i := 0; i < concurrency; i++ {
go func() {
var pa PA
for {
pa = <-execActionsC
pa.A.exec(pa.P)
}
}()
}
} else {
go func() {
var pa PA
for {
pa = <-execActionsC
go func(pa PA) {
pa.A.exec(pa.P)
}(pa)
}
}()
}
execAction := func(a *Action, p Match) {
wgActions.Add(1)
execActionsC <- PA{p, a}
}
// main
pendingActionsC := make(chan PAT)
for {
select {
case pat := <-actionsC:
pa := PA{pat.P, pat.A}
pattern, action, then := pat.P, pat.A, pat.T
now := time.Now()
// check if must be executed now
if then.Compare(now) <= 0 {
execAction(action, pattern)
} else {
if actions[pa] == nil {
actions[pa] = make(map[time.Time]struct{})
}
actions[pa][then] = struct{}{}
go func(insidePat PAT, insideNow time.Time) {
time.Sleep(insidePat.T.Sub(insideNow))
pendingActionsC <- insidePat
}(pat, now)
}
case pat := <-pendingActionsC:
pa := PA{pat.P, pat.A}
pattern, action, then := pat.P, pat.A, pat.T
if actions[pa] != nil {
delete(actions[pa], then)
execAction(action, pattern)
}
case fo := <-flushToActionsC:
for pa := range actions {
if fo.S == pa.A.Filter.Stream.Name &&
fo.F == pa.A.Filter.Name &&
fo.P == pa.P {
for range actions[pa] {
execAction(pa.A, pa.P)
}
delete(actions, pa)
break
}
}
case _, _ = <-stopActions:
for pa := range actions {
if pa.A.OnExit {
for range actions[pa] {
execAction(pa.A, pa.P)
}
}
}
wgActions.Done()
return
}
}
}
func MatchesManager() {
var fo PSF
var pft PFT
end := false
for !end {
select {
case fo = <-flushToMatchesC:
matchesManagerHandleFlush(fo)
case fo, ok := <-startupMatchesC:
if !ok {
end = true
} else {
_ = matchesManagerHandleMatch(fo)
}
}
}
for {
select {
case fo = <-flushToMatchesC:
matchesManagerHandleFlush(fo)
case pft = <-matchesC:
entry := LogEntry{pft.T, 0, pft.P, pft.F.Stream.Name, pft.F.Name, 0, false}
entry.Exec = matchesManagerHandleMatch(pft)
logsC <- entry
}
}
}
func matchesManagerHandleFlush(fo PSF) {
matchesLock.Lock()
for pf := range matches {
if fo.S == pf.F.Stream.Name &&
fo.F == pf.F.Name &&
fo.P == pf.P {
delete(matches, pf)
break
}
}
matchesLock.Unlock()
}
func matchesManagerHandleMatch(pft PFT) bool {
matchesLock.Lock()
defer matchesLock.Unlock()
filter, patterns, then := pft.F, pft.P, pft.T
pf := PF{pft.P, pft.F}
if filter.Retry > 1 {
// make sure map exists
if matches[pf] == nil {
matches[pf] = make(map[time.Time]struct{})
}
// add new match
matches[pf][then] = struct{}{}
// remove match when expired
go func(pf PF, then time.Time) {
time.Sleep(then.Sub(time.Now()) + filter.retryDuration)
matchesLock.Lock()
if matches[pf] != nil {
// FIXME replace this and all similar occurrences
// by clear() when switching to go 1.21
delete(matches[pf], then)
}
matchesLock.Unlock()
}(pf, then)
}
if filter.Retry <= 1 || len(matches[pf]) >= filter.Retry {
delete(matches, pf)
filter.sendActions(patterns, then)
return true
}
return false
}
func StreamManager(s *Stream, endedSignal chan *Stream) {
defer wgStreams.Done()
logger.Printf(logger.INFO, "%s: start %s\n", s.Name, s.Cmd)
lines := cmdStdout(s.Cmd)
for {
select {
case line, ok := <-lines:
if !ok {
endedSignal <- s
return
}
for _, filter := range s.Filters {
if match := filter.match(line); match != "" {
matchesC <- PFT{match, filter, time.Now()}
}
}
case <-stopStreams:
return
}
}
}
var actions ActionsMap
var matches MatchesMap
var matchesLock sync.Mutex
var stopStreams chan bool
var stopActions chan bool
var wgActions sync.WaitGroup
var wgStreams sync.WaitGroup
/*
<StreamCmds>
StreamManager onstartup:matches
matches MatchesManager logs DatabaseManager ·
actions ActionsManager
SocketManager flushes··
<Clients>
*/
// DatabaseManager → MatchesManager
var startupMatchesC chan PFT
// StreamManager → MatchesManager
var matchesC chan PFT
// MatchesManager → DatabaseManager
var logsC chan LogEntry
// MatchesManager → ActionsManager
var actionsC chan PAT
// SocketManager, DatabaseManager → MatchesManager
var flushToMatchesC chan PSF
// SocketManager → ActionsManager
var flushToActionsC chan PSF
// SocketManager → DatabaseManager
var flushToDatabaseC chan LogEntry
func Daemon(confFilename string) {
conf := parseConf(confFilename)
startupMatchesC = make(chan PFT)
matchesC = make(chan PFT)
logsC = make(chan LogEntry)
actionsC = make(chan PAT)
flushToMatchesC = make(chan PSF)
flushToActionsC = make(chan PSF)
flushToDatabaseC = make(chan LogEntry)
stopActions = make(chan bool)
stopStreams = make(chan bool)
actions = make(ActionsMap)
matches = make(MatchesMap)
_ = runCommands(conf.Start, "start")
go DatabaseManager(conf)
go MatchesManager()
go ActionsManager(conf.Concurrency)
// Ready to start
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
endSignals := make(chan *Stream)
nbStreamsInExecution := len(conf.Streams)
for _, stream := range conf.Streams {
wgStreams.Add(1)
go StreamManager(stream, endSignals)
}
go SocketManager(conf)
for {
select {
case finishedStream := <-endSignals:
logger.Printf(logger.ERROR, "%s stream finished", finishedStream.Name)
nbStreamsInExecution--
if nbStreamsInExecution == 0 {
quit(conf, false)
}
case <-sigs:
// Trap endSignals, which may cause a deadlock otherwise
go func() {
ok := true
for ok {
_, ok = <-endSignals
}
}()
logger.Printf(logger.INFO, "Received SIGINT/SIGTERM, exiting")
quit(conf, true)
}
}
}
func quit(conf *Conf, graceful bool) {
// send stop to StreamManager·s
close(stopStreams)
logger.Println(logger.INFO, "Waiting for Streams to finish...")
wgStreams.Wait()
// ActionsManager calls wgActions.Done() when it has launched all pending actions
wgActions.Add(1)
// send stop to ActionsManager
close(stopActions)
// stop all actions
logger.Println(logger.INFO, "Waiting for Actions to finish...")
wgActions.Wait()
// run stop commands
stopOk := runCommands(conf.Stop, "stop")
// delete pipe
err := os.Remove(*SocketPath)
if err != nil {
logger.Println(logger.ERROR, "Failed to remove socket:", err)
}
if !stopOk || !graceful {
os.Exit(1)
}
os.Exit(0)
}
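The shutdown choreography in `quit()` — close a channel to broadcast "stop" to every worker at once, then block on a WaitGroup until they have all returned — in a minimal standalone form. `runWorkers` is a hypothetical stand-in for the Stream/Actions managers:

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers starts n workers blocked on stop, broadcasts stop by
// closing the channel, waits for all of them, and returns how many
// observed the shutdown.
func runWorkers(n int) int {
	stop := make(chan struct{})
	var wg sync.WaitGroup
	var mu sync.Mutex
	stopped := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-stop // a real worker would select{} on this and its input channel
			mu.Lock()
			stopped++
			mu.Unlock()
		}()
	}
	close(stop) // like close(stopStreams): every receiver unblocks at once
	wg.Wait()   // like wgStreams.Wait()
	return stopped
}

func main() {
	fmt.Println(runWorkers(3))
}
```

Closing a channel (rather than sending n values) is what makes the broadcast reach an arbitrary number of receivers.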


@@ -1,108 +0,0 @@
---
# This example configuration file is a good starting point, but you're
# strongly encouraged to take a look at the full documentation: https://reaction.ppom.me
#
# This file is using the well-established YAML configuration language.
# Note that the more powerful JSONnet configuration language is also supported
# and that the documentation uses JSONnet
# definitions are just a place to put chunks of configuration you want to reuse elsewhere,
# using YAML anchors `&name` and aliases `*name`
# definitions are not read by reaction itself
definitions:
- &iptablesban [ 'ip46tables', '-w', '-A', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
- &iptablesunban [ 'ip46tables', '-w', '-D', 'reaction', '-s', '<ip>', '-j', 'DROP' ]
# ip46tables is a minimal C program (only POSIX dependencies) present as a subdirectory.
# it handles both ipv4/iptables and ipv6/ip6tables commands
# if set to a positive number → max number of concurrent actions
# if set to a negative number → no limit
# if not specified or set to 0 → defaults to the number of CPUs on the system
concurrency: 0
# patterns are substituted into regexes.
# when a filter performs an action, it replaces the found pattern
patterns:
ip:
# reaction regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
# simple version: regex: '(?:(?:[0-9]{1,3}\.){3}[0-9]{1,3})|(?:[0-9a-fA-F:]{2,90})'
regex: '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}|(?:(?:[0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,7}:|(?:[0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|(?:[0-9a-fA-F]{1,4}:){1,5}(?::[0-9a-fA-F]{1,4}){1,2}|(?:[0-9a-fA-F]{1,4}:){1,4}(?::[0-9a-fA-F]{1,4}){1,3}|(?:[0-9a-fA-F]{1,4}:){1,3}(?::[0-9a-fA-F]{1,4}){1,4}|(?:[0-9a-fA-F]{1,4}:){1,2}(?::[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:(?:(?::[0-9a-fA-F]{1,4}){1,6})|:(?:(?::[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(?::[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(?:ffff(?::0{1,4}){0,1}:){0,1}(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])|(?:[0-9a-fA-F]{1,4}:){1,4}:(?:(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(?:25[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9]))'
ignore:
- 127.0.0.1
- ::1
# Patterns can also be ignored based on regexes; each regex is matched against the whole string detected by the pattern
# ignoreregex:
# - '10\.0\.[0-9]{1,3}\.[0-9]{1,3}'
# Those commands will be executed in order at start, before everything else
start:
- [ 'ip46tables', '-w', '-N', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-I', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
# Those commands will be executed in order at stop, after everything else
stop:
- [ 'ip46tables', '-w', '-D', 'INPUT', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-D', 'FORWARD', '-p', 'all', '-j', 'reaction' ]
- [ 'ip46tables', '-w', '-F', 'reaction' ]
- [ 'ip46tables', '-w', '-X', 'reaction' ]
# streams are commands
# they are run and their output is captured
# *example:* `tail -f /var/log/nginx/access.log`
# their output will be used by one or more filters
streams:
# streams have a user-defined name
ssh:
# note that if the command is not in the environment's `PATH`,
# its full path must be given.
cmd: [ 'journalctl', '-n0', '-fu', 'sshd.service' ]
# filters run actions when they match regexes on a stream
filters:
# filters have a user-defined name
failedlogin:
# reaction's regex syntax is defined here: https://github.com/google/re2/wiki/Syntax
regex:
# <ip> is predefined in the patterns section
# ip's regex is inserted in the following regex
- 'authentication failure;.*rhost=<ip>'
- 'Failed password for .* from <ip>'
- 'Connection (reset|closed) by (authenticating|invalid) user .* <ip>'
# if retry and retryperiod are defined,
# the actions will only take place if the same pattern is
# found `retry` times within a `retryperiod` interval
retry: 3
# format is defined here: https://pkg.go.dev/time#ParseDuration
retryperiod: 6h
# actions are run by the filter when regexes are matched
actions:
# actions have a user-defined name
ban:
# YAML replaces *reference with the value anchored at &reference
cmd: *iptablesban
unban:
cmd: *iptablesunban
# if after is defined, the action will not take place immediately, but after a specified duration
# same format as retryperiod
after: 48h
# when reaction is quitting, does it run all the pending commands which had an `after` duration set?
# if you want reaction to run those pending commands before exiting, you can set this:
# onexit: true
# (defaults to false)
# here it is not useful because we will flush and delete the chain containing the bans anyway
# (with the stop commands)
# persistence
# tl;dr: when an action with `after` is set in a filter, that filter acts as a 'jail',
# which is persisted across restarts.
#
# when a filter is triggered, there are 2 flows:
#
# if none of its actions have an `after` directive set:
# no action will be replayed.
#
# else (if at least one action has an `after` directive set):
# if reaction stops while `after` actions are pending:
# and reaction starts again while those actions would still be pending:
# reaction executes the past actions (actions without after or with then+after < now)
# and plans the execution of future actions (actions with then+after > now)


@@ -1,230 +0,0 @@
package app
import (
_ "embed"
"flag"
"fmt"
"os"
"strings"
"framagit.org/ppom/reaction/logger"
)
func addStringFlag(names []string, defvalue string, f *flag.FlagSet) *string {
var value string
for _, name := range names {
f.StringVar(&value, name, defvalue, "")
}
return &value
}
func addBoolFlag(names []string, f *flag.FlagSet) *bool {
var value bool
for _, name := range names {
f.BoolVar(&value, name, false, "")
}
return &value
}
var SocketPath *string
func addSocketFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"s", "socket"}, "/run/reaction/reaction.sock", f)
}
func addConfFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"c", "config"}, "", f)
}
func addFormatFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"f", "format"}, "yaml", f)
}
func addLimitFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"l", "limit"}, "", f)
}
func addLevelFlag(f *flag.FlagSet) *string {
return addStringFlag([]string{"l", "loglevel"}, "INFO", f)
}
func subCommandParse(f *flag.FlagSet, maxRemainingArgs int) {
help := addBoolFlag([]string{"h", "help"}, f)
f.Parse(os.Args[2:])
if *help {
basicUsage()
os.Exit(0)
}
// -1 = no limit to remaining args
if maxRemainingArgs > -1 && len(f.Args()) > maxRemainingArgs {
fmt.Printf("ERROR unrecognized argument(s): %v\n", f.Args()[maxRemainingArgs:])
basicUsage()
os.Exit(1)
}
}
func basicUsage() {
const (
bold = "\033[1m"
reset = "\033[0m"
)
fmt.Print(
bold + `reaction help` + reset + `
# print this help message
` + bold + `reaction start` + reset + `
# start the daemon
# options:
-c/--config CONFIG_FILE # configuration file in json, jsonnet or yaml format (required)
-l/--loglevel LEVEL # minimum log level to show, in DEBUG, INFO, WARN, ERROR, FATAL
# (default: INFO)
-s/--socket SOCKET # path to the client-daemon communication socket
# (default: /run/reaction/reaction.sock)
` + bold + `reaction example-conf` + reset + `
# print a configuration file example
` + bold + `reaction show` + reset + ` [NAME=PATTERN...]
# show current matches and which actions are still to be run for the specified PATTERN regex(es)
# (e.g. know what is currently banned)
reaction show
reaction show "ip=192.168.1.1"
reaction show "ip=192\.168\..*" login=root
# options:
-s/--socket SOCKET # path to the client-daemon communication socket
-f/--format yaml|json # (default: yaml)
-l/--limit STREAM[.FILTER] # only show items related to this STREAM (or STREAM.FILTER)
` + bold + `reaction flush` + reset + ` NAME=PATTERN [NAME=PATTERN...]
# remove currently active matches and run currently pending actions for the specified PATTERN regex(es)
# (then show flushed matches and actions)
reaction flush "ip=192.168.1.1"
reaction flush "ip=192\.168\..*" login=root
# options:
-s/--socket SOCKET # path to the client-daemon communication socket
-f/--format yaml|json # (default: yaml)
-l/--limit STREAM.FILTER # flush only items related to this STREAM.FILTER
` + bold + `reaction test-regex` + reset + ` REGEX LINE # test REGEX against LINE
cat FILE | ` + bold + `reaction test-regex` + reset + ` REGEX # test REGEX against each line of FILE
# options:
-c/--config CONFIG_FILE # configuration file in json, jsonnet or yaml format
# optional: permits to use configured patterns like <ip> in regex
` + bold + `reaction version` + reset + `
# print version information
see usage examples, service configurations and good practices
on the ` + bold + `wiki` + reset + `: https://reaction.ppom.me
`)
}
//go:embed example.yml
var exampleConf string
func Main(version string) {
if len(os.Args) <= 1 {
logger.Fatalln("No argument provided. Try `reaction help`")
basicUsage()
os.Exit(1)
}
f := flag.NewFlagSet(os.Args[1], flag.ExitOnError)
switch os.Args[1] {
case "help", "-h", "-help", "--help":
basicUsage()
case "version", "-v", "--version":
fmt.Printf("reaction version %v\n", version)
case "example-conf":
subCommandParse(f, 0)
fmt.Print(exampleConf)
case "start":
SocketPath = addSocketFlag(f)
confFilename := addConfFlag(f)
logLevel := addLevelFlag(f)
subCommandParse(f, 0)
if *confFilename == "" {
logger.Fatalln("no configuration file provided")
basicUsage()
os.Exit(1)
}
logLevelType := logger.FromString(*logLevel)
if logLevelType == logger.UNKNOWN {
logger.Fatalf("Log Level %v not recognized", logLevel)
basicUsage()
os.Exit(1)
}
logger.SetLogLevel(logLevelType)
Daemon(*confFilename)
case "show":
SocketPath = addSocketFlag(f)
queryFormat := addFormatFlag(f)
limit := addLimitFlag(f)
subCommandParse(f, -1)
if *queryFormat != "yaml" && *queryFormat != "json" {
logger.Fatalln("only yaml and json formats are supported")
}
stream, filter := "", ""
if *limit != "" {
splitSF := strings.Split(*limit, ".")
stream = splitSF[0]
if len(splitSF) == 2 {
filter = splitSF[1]
} else if len(splitSF) > 2 {
logger.Fatalln("-l/--limit: only one . separator is supported")
}
}
ClientShow(*queryFormat, stream, filter, f.Args())
case "flush":
SocketPath = addSocketFlag(f)
queryFormat := addFormatFlag(f)
limit := addLimitFlag(f)
subCommandParse(f, -1)
if *queryFormat != "yaml" && *queryFormat != "json" {
logger.Fatalln("only yaml and json formats are supported")
}
if len(f.Args()) == 0 {
logger.Fatalln("subcommand flush takes at least one TARGET argument")
}
stream, filter := "", ""
if *limit != "" {
splitSF := strings.Split(*limit, ".")
stream = splitSF[0]
if len(splitSF) == 2 {
filter = splitSF[1]
} else if len(splitSF) > 2 {
logger.Fatalln("-l/--limit: only one . separator is supported")
}
}
ClientFlush(*queryFormat, stream, filter, f.Args())
case "test-regex":
// socket not needed, no interaction with the daemon
confFilename := addConfFlag(f)
subCommandParse(f, 2)
if *confFilename == "" {
logger.Println(logger.WARN, "no configuration file provided. Can't make use of registered patterns.")
}
if f.Arg(0) == "" {
logger.Fatalln("subcommand test-regex takes at least one REGEX argument")
basicUsage()
os.Exit(1)
}
TestRegex(*confFilename, f.Arg(0), f.Arg(1))
default:
logger.Fatalf("subcommand %v not recognized. Try `reaction help`", os.Args[1])
basicUsage()
os.Exit(1)
}
}


@@ -1,264 +0,0 @@
package app
import (
"encoding/gob"
"errors"
"io"
"os"
"time"
"framagit.org/ppom/reaction/logger"
)
const (
logDBName = "./reaction-matches.db"
logDBNewName = "./reaction-matches.new.db"
flushDBName = "./reaction-flushes.db"
)
func openDB(path string) (bool, *ReadDB) {
file, err := os.Open(path)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
logger.Printf(logger.WARN, "No DB found at %s. It's ok if this is the first time reaction is running.\n", path)
return true, nil
}
logger.Fatalln("Failed to open DB:", err)
}
return false, &ReadDB{file, gob.NewDecoder(file)}
}
func createDB(path string) *WriteDB {
file, err := os.Create(path)
if err != nil {
logger.Fatalln("Failed to create DB:", err)
}
return &WriteDB{file, gob.NewEncoder(file)}
}
func DatabaseManager(c *Conf) {
logDB, flushDB := c.RotateDB(true)
close(startupMatchesC)
c.manageLogs(logDB, flushDB)
}
func (c *Conf) manageLogs(logDB *WriteDB, flushDB *WriteDB) {
cpt := 0
writeSF2int := make(map[SF]int)
writeCpt := 1
for {
select {
case entry := <-flushToDatabaseC:
flushDB.enc.Encode(entry)
case entry := <-logsC:
encodeOrFatal(logDB.enc, entry, writeSF2int, &writeCpt)
cpt++
// let's say 100 000 entries ~ 10 MB
if cpt == 500_000 {
cpt = 0
logger.Printf(logger.INFO, "Rotating database...")
logDB.file.Close()
flushDB.file.Close()
logDB, flushDB = c.RotateDB(false)
logger.Printf(logger.INFO, "Rotated database")
}
}
}
}
func (c *Conf) RotateDB(startup bool) (*WriteDB, *WriteDB) {
var (
doesntExist bool
err error
logReadDB *ReadDB
flushReadDB *ReadDB
logWriteDB *WriteDB
flushWriteDB *WriteDB
)
doesntExist, logReadDB = openDB(logDBName)
if doesntExist {
return createDB(logDBName), createDB(flushDBName)
}
doesntExist, flushReadDB = openDB(flushDBName)
if doesntExist {
logger.Println(logger.WARN, "Strange! No flushes db, opening /dev/null instead")
doesntExist, flushReadDB = openDB("/dev/null")
if doesntExist {
logger.Fatalln("Opening dummy /dev/null failed")
}
}
logWriteDB = createDB(logDBNewName)
rotateDB(c, logReadDB.dec, flushReadDB.dec, logWriteDB.enc, startup)
err = logReadDB.file.Close()
if err != nil {
logger.Fatalln("Failed to close old DB:", err)
}
// It should be ok to rename an open file
err = os.Rename(logDBNewName, logDBName)
if err != nil {
logger.Fatalln("Failed to replace old DB with new one:", err)
}
err = os.Remove(flushDBName)
if err != nil && !errors.Is(err, os.ErrNotExist) {
logger.Fatalln("Failed to delete old DB:", err)
}
flushWriteDB = createDB(flushDBName)
return logWriteDB, flushWriteDB
}
func rotateDB(c *Conf, logDec *gob.Decoder, flushDec *gob.Decoder, logEnc *gob.Encoder, startup bool) {
// This mapping is a space optimization feature
// It compresses stream+filter into a small number (which is a byte in gob)
// We do this only for matches, not for flushes
readSF2int := make(map[int]SF)
writeSF2int := make(map[SF]int)
writeCounter := 1
// This extra code warns only once for each non-existent filter
discardedEntries := make(map[SF]int)
malformedEntries := 0
defer func() {
for sf, t := range discardedEntries {
if t > 0 {
logger.Printf(logger.WARN, "info discarded %v times from the DBs: stream/filter not found: %s.%s\n", t, sf.S, sf.F)
}
}
if malformedEntries > 0 {
logger.Printf(logger.WARN, "%v malformed entries discarded from the DBs\n", malformedEntries)
}
}()
// pattern, stream, filter → last flush
flushes := make(map[*PSF]time.Time)
for {
var entry LogEntry
var filter *Filter
// decode entry
err := flushDec.Decode(&entry)
if err != nil {
if err == io.EOF {
break
}
malformedEntries++
continue
}
// retrieve related filter
if entry.Stream != "" || entry.Filter != "" {
if stream := c.Streams[entry.Stream]; stream != nil {
filter = stream.Filters[entry.Filter]
}
if filter == nil {
discardedEntries[SF{entry.Stream, entry.Filter}]++
continue
}
}
// store
flushes[&PSF{entry.Pattern, entry.Stream, entry.Filter}] = entry.T
}
lastTimeCpt := int64(0)
now := time.Now()
for {
var entry LogEntry
var filter *Filter
// decode entry
err := logDec.Decode(&entry)
if err != nil {
if err == io.EOF {
break
}
malformedEntries++
continue
}
// retrieve related stream & filter
if entry.Stream == "" && entry.Filter == "" {
sf, ok := readSF2int[entry.SF]
if !ok {
discardedEntries[SF{"", ""}]++
continue
}
entry.Stream = sf.S
entry.Filter = sf.F
}
if stream := c.Streams[entry.Stream]; stream != nil {
filter = stream.Filters[entry.Filter]
}
if filter == nil {
discardedEntries[SF{entry.Stream, entry.Filter}]++
continue
}
if entry.SF != 0 {
readSF2int[entry.SF] = SF{entry.Stream, entry.Filter}
}
// check if number of patterns is in sync
if len(entry.Pattern.Split()) != len(filter.Pattern) {
continue
}
// check if it hasn't been flushed
lastGlobalFlush := flushes[&PSF{entry.Pattern, "", ""}].Unix()
lastLocalFlush := flushes[&PSF{entry.Pattern, entry.Stream, entry.Filter}].Unix()
entryTime := entry.T.Unix()
if lastLocalFlush > entryTime || lastGlobalFlush > entryTime {
continue
}
// restore time
if entry.T.IsZero() {
entry.T = time.Unix(entry.S, lastTimeCpt)
}
lastTimeCpt++
// store matches
if !entry.Exec && entry.T.Add(filter.retryDuration).Unix() > now.Unix() {
if startup {
startupMatchesC <- PFT{entry.Pattern, filter, entry.T}
}
encodeOrFatal(logEnc, entry, writeSF2int, &writeCounter)
}
// replay executions
if entry.Exec && entry.T.Add(*filter.longuestActionDuration).Unix() > now.Unix() {
if startup {
flushToMatchesC <- PSF{entry.Pattern, entry.Stream, entry.Filter}
filter.sendActions(entry.Pattern, entry.T)
}
encodeOrFatal(logEnc, entry, writeSF2int, &writeCounter)
}
}
}
func encodeOrFatal(enc *gob.Encoder, entry LogEntry, writeSF2int map[SF]int, writeCounter *int) {
// Stream/Filter reduction
sf, ok := writeSF2int[SF{entry.Stream, entry.Filter}]
if ok {
entry.SF = sf
entry.Stream = ""
entry.Filter = ""
} else {
entry.SF = *writeCounter
writeSF2int[SF{entry.Stream, entry.Filter}] = *writeCounter
*writeCounter++
}
// Time reduction
if !entry.T.IsZero() {
entry.S = entry.T.Unix()
entry.T = time.Time{}
}
err := enc.Encode(entry)
if err != nil {
logger.Fatalln("Failed to write to new DB:", err)
}
}
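The stream/filter compression done by `encodeOrFatal` and undone by `rotateDB` can be shown end to end: the first entry for a (stream, filter) pair keeps the strings and is assigned a small ID; later entries carry only the ID, which gob encodes compactly. A self-contained sketch — `entry`, `compress` and `decompress` are illustrative names, not reaction's:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

type entry struct {
	Stream, Filter string
	SF             int
	Pattern        string
}

func compress(entries []entry) []byte {
	var buf bytes.Buffer
	enc := gob.NewEncoder(&buf)
	sf2int := map[[2]string]int{}
	next := 1 // like writeCounter
	for _, e := range entries {
		key := [2]string{e.Stream, e.Filter}
		if id, ok := sf2int[key]; ok {
			e.SF, e.Stream, e.Filter = id, "", "" // strings elided on repeats
		} else {
			sf2int[key] = next
			e.SF = next
			next++
		}
		enc.Encode(e)
	}
	return buf.Bytes()
}

func decompress(b []byte) []entry {
	dec := gob.NewDecoder(bytes.NewReader(b))
	int2sf := map[int][2]string{} // like readSF2int
	var out []entry
	for {
		var e entry
		if dec.Decode(&e) != nil {
			break
		}
		if e.Stream == "" && e.Filter == "" {
			sf := int2sf[e.SF] // resolve ID back to names
			e.Stream, e.Filter = sf[0], sf[1]
		} else {
			int2sf[e.SF] = [2]string{e.Stream, e.Filter}
		}
		out = append(out, e)
	}
	return out
}

func main() {
	in := []entry{
		{"ssh", "failedlogin", 0, "1.2.3.4"},
		{"ssh", "failedlogin", 0, "5.6.7.8"},
	}
	for _, e := range decompress(compress(in)) {
		fmt.Println(e.Stream, e.Filter, e.Pattern)
	}
}
```

Because gob omits zero-valued fields, the elided empty strings cost nothing on the wire, which is where the space saving comes from.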


@@ -1,81 +0,0 @@
package app
import (
"encoding/gob"
"errors"
"net"
"os"
"path"
"time"
"framagit.org/ppom/reaction/logger"
)
func createOpenSocket() net.Listener {
err := os.MkdirAll(path.Dir(*SocketPath), 0755)
if err != nil {
logger.Fatalln("Failed to create socket directory")
}
_, err = os.Stat(*SocketPath)
if err == nil {
logger.Println(logger.WARN, "socket", *SocketPath, "already exists: Is the daemon already running? Deleting.")
err = os.Remove(*SocketPath)
if err != nil {
logger.Fatalln("Failed to remove socket:", err)
}
}
ln, err := net.Listen("unix", *SocketPath)
if err != nil {
logger.Fatalln("Failed to create socket:", err)
}
return ln
}
// Handle connections
//func SocketManager(streams map[string]*Stream) {
func SocketManager(conf *Conf) {
ln := createOpenSocket()
defer ln.Close()
for {
conn, err := ln.Accept()
if err != nil {
logger.Println(logger.ERROR, "Failed to open connection from cli:", err)
continue
}
go func(conn net.Conn) {
defer conn.Close()
var request Request
var response Response
err := gob.NewDecoder(conn).Decode(&request)
if err != nil {
logger.Println(logger.ERROR, "Invalid Message from cli:", err)
return
}
switch request.Request {
case Info:
// response.Config = *conf
response.Matches = matches
response.Actions = actions
case Flush:
le := LogEntry{time.Now(), 0, request.Flush.P, request.Flush.S, request.Flush.F, 0, false}
flushToMatchesC <- request.Flush
flushToActionsC <- request.Flush
flushToDatabaseC <- le
default:
logger.Println(logger.ERROR, "Invalid Message from cli: unrecognised command type")
response.Err = errors.New("unrecognised command type")
return
}
err = gob.NewEncoder(conn).Encode(response)
if err != nil {
logger.Println(logger.ERROR, "Can't respond to cli:", err)
return
}
}(conn)
}
}
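The client/daemon exchange handled by `SocketManager` is a plain gob request/response over a stream connection. A minimal sketch using an in-memory `net.Pipe` instead of the unix socket; `Request`/`Response` here are simplified stand-ins for the real types:

```go
package main

import (
	"encoding/gob"
	"fmt"
	"net"
)

type Request struct{ Request string }
type Response struct{ Matches int }

// query plays the client role against a daemon-like goroutine,
// mirroring SocketManager's decode-request / encode-response handler.
func query(req Request) Response {
	client, server := net.Pipe()
	go func() { // daemon side, like SocketManager's per-connection goroutine
		defer server.Close()
		var r Request
		gob.NewDecoder(server).Decode(&r)
		gob.NewEncoder(server).Encode(Response{Matches: 2})
	}()
	defer client.Close()
	gob.NewEncoder(client).Encode(req)
	var resp Response
	gob.NewDecoder(client).Decode(&resp)
	return resp
}

func main() {
	fmt.Println(query(Request{Request: "show"}).Matches)
}
```

Swapping `net.Pipe` for `net.Dial("unix", socketPath)` gives the real client side; the handler logic is unchanged.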


@@ -1,178 +0,0 @@
package app
import (
"encoding/json"
"fmt"
"os"
"regexp"
"runtime"
"slices"
"strings"
"time"
"framagit.org/ppom/reaction/logger"
"github.com/google/go-jsonnet"
)
func (c *Conf) setup() {
if c.Concurrency == 0 {
c.Concurrency = runtime.NumCPU()
}
// Ensure we iterate through the c.Patterns map in reproducible order
sortedPatternNames := make([]string, 0, len(c.Patterns))
for k := range c.Patterns {
sortedPatternNames = append(sortedPatternNames, k)
}
slices.Sort(sortedPatternNames)
for _, patternName := range sortedPatternNames {
pattern := c.Patterns[patternName]
pattern.Name = patternName
pattern.nameWithBraces = fmt.Sprintf("<%s>", pattern.Name)
if pattern.Regex == "" {
logger.Fatalf("Bad configuration: pattern's regex %v is empty!", patternName)
}
compiled, err := regexp.Compile(fmt.Sprintf("^%v$", pattern.Regex))
if err != nil {
logger.Fatalf("Bad configuration: pattern %v: %v", patternName, err)
}
pattern.Regex = fmt.Sprintf("(?P<%s>%s)", patternName, pattern.Regex)
for _, ignore := range pattern.Ignore {
if !compiled.MatchString(ignore) {
logger.Fatalf("Bad configuration: pattern ignore '%v' doesn't match pattern %v! It should be fixed or removed.", ignore, pattern.nameWithBraces)
}
}
// Compile ignore regexes
for _, regex := range pattern.IgnoreRegex {
// Enclose the regex to make sure that it matches the whole detected string
compiledRegex, err := regexp.Compile("^" + regex + "$")
if err != nil {
logger.Fatalf("Bad configuration: in ignoreregex of pattern %s: %v", pattern.Name, err)
}
pattern.compiledIgnoreRegex = append(pattern.compiledIgnoreRegex, *compiledRegex)
}
}
if len(c.Streams) == 0 {
logger.Fatalln("Bad configuration: no streams configured!")
}
for streamName := range c.Streams {
stream := c.Streams[streamName]
stream.Name = streamName
if strings.Contains(stream.Name, ".") {
logger.Fatalf("Bad configuration: character '.' is not allowed in stream names: '%v'", stream.Name)
}
if len(stream.Filters) == 0 {
logger.Fatalf("Bad configuration: no filters configured in %v", stream.Name)
}
for filterName := range stream.Filters {
filter := stream.Filters[filterName]
filter.Stream = stream
filter.Name = filterName
if strings.Contains(filter.Name, ".") {
logger.Fatalf("Bad configuration: character '.' is not allowed in filter names: '%v'", filter.Name)
}
// Parse Duration
if filter.RetryPeriod == "" {
if filter.Retry > 1 {
logger.Fatalf("Bad configuration: retry but no retryperiod in %v.%v", stream.Name, filter.Name)
}
} else {
retryDuration, err := time.ParseDuration(filter.RetryPeriod)
if err != nil {
logger.Fatalf("Bad configuration: Failed to parse retry time in %v.%v: %v", stream.Name, filter.Name, err)
}
filter.retryDuration = retryDuration
}
if len(filter.Regex) == 0 {
logger.Fatalf("Bad configuration: no regexes configured in %v.%v", stream.Name, filter.Name)
}
// Compute Regexes
// Look for Patterns inside Regexes
for _, regex := range filter.Regex {
// iterate through patterns in reproducible order
for _, patternName := range sortedPatternNames {
pattern := c.Patterns[patternName]
if strings.Contains(regex, pattern.nameWithBraces) {
if !slices.Contains(filter.Pattern, pattern) {
filter.Pattern = append(filter.Pattern, pattern)
}
regex = strings.Replace(regex, pattern.nameWithBraces, pattern.Regex, 1)
}
}
compiledRegex, err := regexp.Compile(regex)
if err != nil {
logger.Fatalf("Bad configuration: regex of filter %s.%s: %v", stream.Name, filter.Name, err)
}
filter.compiledRegex = append(filter.compiledRegex, *compiledRegex)
}
if len(filter.Actions) == 0 {
logger.Fatalln("Bad configuration: no actions configured in", stream.Name, ".", filter.Name)
}
for actionName := range filter.Actions {
action := filter.Actions[actionName]
action.Filter = filter
action.Name = actionName
if strings.Contains(action.Name, ".") {
logger.Fatalln("Bad configuration: character '.' is not allowed in action names", action.Name)
}
// Parse Duration
if action.After != "" {
afterDuration, err := time.ParseDuration(action.After)
if err != nil {
logger.Fatalln("Bad configuration: Failed to parse after time in ", stream.Name, ".", filter.Name, ".", action.Name, ":", err)
}
action.afterDuration = afterDuration
} else if action.OnExit {
logger.Fatalln("Bad configuration: Cannot have `onexit: true` without an `after` directive in", stream.Name, ".", filter.Name, ".", action.Name)
}
if filter.longuestActionDuration == nil || filter.longuestActionDuration.Milliseconds() < action.afterDuration.Milliseconds() {
filter.longuestActionDuration = &action.afterDuration
}
}
}
}
}
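The pattern substitution `setup()` performs — wrapping a pattern's regex in a named capture group and splicing it into the filter regex wherever `<name>` appears — can be shown standalone. The simplified IPv4-only regex below is illustrative, not the one from the example config:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// expand replaces <name> in filterRegex with a named capture group
// built from patternRegex, as setup() does for each filter regex.
func expand(filterRegex, name, patternRegex string) *regexp.Regexp {
	named := fmt.Sprintf("(?P<%s>%s)", name, patternRegex)
	return regexp.MustCompile(strings.Replace(filterRegex, "<"+name+">", named, 1))
}

func main() {
	re := expand(`Failed password for .* from <ip>`, "ip", `(?:[0-9]{1,3}\.){3}[0-9]{1,3}`)
	m := re.FindStringSubmatch("Failed password for root from 192.0.2.7 port 22")
	fmt.Println(m[re.SubexpIndex("ip")])
}
```

The named group is what lets the matcher later pull the captured value out by pattern name rather than by position.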
func parseConf(filename string) *Conf {
data, err := os.Open(filename)
if err != nil {
logger.Fatalln("Failed to read configuration file:", err)
}
var conf Conf
if filename[len(filename)-4:] == ".yml" || filename[len(filename)-5:] == ".yaml" {
err = jsonnet.NewYAMLToJSONDecoder(data).Decode(&conf)
if err != nil {
logger.Fatalln("Failed to parse yaml configuration file:", err)
}
} else {
var jsondata string
jsondata, err = jsonnet.MakeVM().EvaluateFile(filename)
if err == nil {
err = json.Unmarshal([]byte(jsondata), &conf)
}
if err != nil {
logger.Fatalln("Failed to parse json configuration file:", err)
}
}
conf.setup()
return &conf
}


@@ -1,200 +0,0 @@
package app
import (
"bytes"
"encoding/gob"
"fmt"
"os"
"regexp"
"strings"
"time"
)
type Conf struct {
Concurrency int `json:"concurrency"`
Patterns map[string]*Pattern `json:"patterns"`
Streams map[string]*Stream `json:"streams"`
Start [][]string `json:"start"`
Stop [][]string `json:"stop"`
}
type Pattern struct {
Regex string `json:"regex"`
Ignore []string `json:"ignore"`
IgnoreRegex []string `json:"ignoreregex"`
compiledIgnoreRegex []regexp.Regexp `json:"-"`
Name string `json:"-"`
nameWithBraces string `json:"-"`
}
// Stream, Filter & Action structures must never be copied.
// They're always referenced through pointers
type Stream struct {
Name string `json:"-"`
Cmd []string `json:"cmd"`
Filters map[string]*Filter `json:"filters"`
}
type LilStream struct {
Name string
}
func (s *Stream) GobEncode() ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
err := enc.Encode(LilStream{s.Name})
return buf.Bytes(), err
}
func (s *Stream) GobDecode(b []byte)(error) {
var ls LilStream
dec := gob.NewDecoder(bytes.NewReader(b))
err := dec.Decode(&ls)
s.Name = ls.Name
return err
}
type Filter struct {
Stream *Stream `json:"-"`
Name string `json:"-"`
Regex []string `json:"regex"`
compiledRegex []regexp.Regexp `json:"-"`
Pattern []*Pattern `json:"-"`
Retry int `json:"retry"`
RetryPeriod string `json:"retryperiod"`
retryDuration time.Duration `json:"-"`
Actions map[string]*Action `json:"actions"`
longuestActionDuration *time.Duration
}
// those small versions are needed to prevent infinite recursion in gob because of
// data cycles: Stream <-> Filter, Filter <-> Action
type LilFilter struct {
Stream *Stream
Name string
Pattern []*Pattern
}
func (f *Filter) GobDecode(b []byte)(error) {
var lf LilFilter
dec := gob.NewDecoder(bytes.NewReader(b))
err := dec.Decode(&lf)
f.Stream = lf.Stream
f.Name = lf.Name
f.Pattern = lf.Pattern
return err
}
func (f *Filter) GobEncode() ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
err := enc.Encode(LilFilter{f.Stream, f.Name, f.Pattern})
return buf.Bytes(), err
}
type Action struct {
Filter *Filter `json:"-"`
Name string `json:"-"`
Cmd []string `json:"cmd"`
After string `json:"after"`
afterDuration time.Duration `json:"-"`
OnExit bool `json:"onexit"`
}
type LilAction struct {
Filter *Filter
Name string
}
func (a *Action) GobEncode() ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
err := enc.Encode(LilAction{a.Filter, a.Name})
return buf.Bytes(), err
}
func (a *Action) GobDecode(b []byte)(error) {
var la LilAction
dec := gob.NewDecoder(bytes.NewReader(b))
err := dec.Decode(&la)
a.Filter = la.Filter
a.Name = la.Name
return err
}
type LogEntry struct {
T time.Time
S int64
Pattern Match
Stream, Filter string
SF int
Exec bool
}
type ReadDB struct {
file *os.File
dec *gob.Decoder
}
type WriteDB struct {
file *os.File
enc *gob.Encoder
}
type MatchesMap map[PF]map[time.Time]struct{}
type ActionsMap map[PA]map[time.Time]struct{}
// This is a "\x00"-joined string
// which contains all matches on a line.
type Match string
func (m *Match) Split() []string {
return strings.Split(string(*m), "\x00")
}
func JoinMatch(mm []string) Match {
return Match(strings.Join(mm, "\x00"))
}
func WithBrackets(mm []string) string {
var b strings.Builder
for _, match := range mm {
fmt.Fprintf(&b, "[%s]", match)
}
return b.String()
}
// Helper structs made to carry information
// Stream, Filter
type SF struct{ S, F string }
// Pattern, Stream, Filter
type PSF struct {
P Match
S, F string
}
type PF struct {
P Match
F *Filter
}
type PFT struct {
P Match
F *Filter
T time.Time
}
type PA struct {
P Match
A *Action
}
type PAT struct {
P Match
A *Action
T time.Time
}


@ -1,10 +0,0 @@
module framagit.org/ppom/reaction
go 1.21
require (
github.com/google/go-jsonnet v0.20.0
sigs.k8s.io/yaml v1.1.0
)
require gopkg.in/yaml.v2 v2.4.0 // indirect


@ -1,9 +0,0 @@
github.com/google/go-jsonnet v0.20.0 h1:WG4TTSARuV7bSm4PMB4ohjxe33IHT5WVTrJSU33uT4g=
github.com/google/go-jsonnet v0.20.0/go.mod h1:VbgWF9JX7ztlv770x/TolZNGGFfiHEVx9G6ca2eUmeA=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=


@ -1,80 +0,0 @@
package logger
import "log"
type Level int
const (
UNKNOWN = Level(-1)
DEBUG = Level(1)
INFO = Level(2)
WARN = Level(3)
ERROR = Level(4)
FATAL = Level(5)
)
func (l Level) String() string {
switch l {
case DEBUG:
return "DEBUG "
case INFO:
return "INFO "
case WARN:
return "WARN "
case ERROR:
return "ERROR "
case FATAL:
return "FATAL "
default:
return "????? "
}
}
func FromString(s string) Level {
switch s {
case "DEBUG":
return DEBUG
case "INFO":
return INFO
case "WARN":
return WARN
case "ERROR":
return ERROR
case "FATAL":
return FATAL
default:
return UNKNOWN
}
}
var LogLevel Level = 2
func SetLogLevel(level Level) {
LogLevel = level
}
func Println(level Level, args ...any) {
if level >= LogLevel {
newargs := make([]any, 0)
newargs = append(newargs, level)
newargs = append(newargs, args...)
log.Println(newargs...)
}
}
func Printf(level Level, format string, args ...any) {
if level >= LogLevel {
log.Printf(level.String()+format, args...)
}
}
func Fatalln(args ...any) {
newargs := make([]any, 0)
newargs = append(newargs, FATAL)
newargs = append(newargs, args...)
log.Fatalln(newargs...)
}
func Fatalf(format string, args ...any) {
log.Fatalf(FATAL.String()+format, args...)
}


@ -1,13 +0,0 @@
package main
import (
"framagit.org/ppom/reaction/app"
)
func main() {
app.Main(version)
}
var (
version = "v1.4.2"
)


@ -1,91 +0,0 @@
#include<ctype.h>
#include<errno.h>
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<unistd.h>
// If this program
// - receives an IPv4 address in its arguments:
//   → it executes iptables with the same arguments in place.
//
// - receives an IPv6 address in its arguments:
//   → it executes ip6tables with the same arguments in place.
//
// - doesn't receive an IPv4 or IPv6 address in its arguments:
//   → it executes both, with the same arguments in place.
int isIPv4(char *tab) {
int i,len;
// IPv4 addresses are at least 7 chars long
len = strlen(tab);
if (len < 7 || !isdigit(tab[0]) || !isdigit(tab[len-1])) {
return 0;
}
// Each char must be a digit or a dot between 2 digits
for (i=1; i<len-1; i++) {
if (!isdigit(tab[i]) && !(tab[i] == '.' && isdigit(tab[i-1]) && isdigit(tab[i+1]))) {
return 0;
}
}
return 1;
}
int isIPv6(char *tab) {
int i,len, twodots = 0;
// IPv6 addresses are at least 3 chars long
len = strlen(tab);
if (len < 3) {
return 0;
}
// Each char must be a digit, :, a-f, or A-F
for (i=0; i<len; i++) {
if (!isdigit(tab[i]) && tab[i] != ':' && !(tab[i] >= 'a' && tab[i] <= 'f') && !(tab[i] >= 'A' && tab[i] <= 'F')) {
return 0;
}
}
return 1;
}
int guess_type(int len, char *tab[]) {
int i;
for (i=0; i<len; i++) {
if (isIPv4(tab[i])) {
return 4;
} else if (isIPv6(tab[i])) {
return 6;
}
}
return 0;
}
void exec(char *str, char **argv) {
argv[0] = str;
execvp(str, argv);
// returns only if fails
printf("ip46tables: exec failed %d\n", errno);
}
int main(int argc, char **argv) {
if (argc < 2) {
printf("ip46tables: At least one argument has to be given\n");
exit(1);
}
int type;
type = guess_type(argc, argv);
if (type == 4) {
exec("iptables", argv);
} else if (type == 6) {
exec("ip6tables", argv);
} else {
pid_t pid = fork();
if (pid == -1) {
printf("ip46tables: fork failed\n");
exit(1);
} else if (pid) {
exec("iptables", argv);
} else {
exec("ip6tables", argv);
}
}
}


@ -1,97 +0,0 @@
#include<ctype.h>
#include<errno.h>
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<unistd.h>
// nft46 'add element inet reaction ipvXbans { 1.2.3.4 }' → nft 'add element inet reaction ipv4bans { 1.2.3.4 }'
// nft46 'add element inet reaction ipvXbans { a:b::c:d }' → nft 'add element inet reaction ipv6bans { a:b::c:d }'
//
// the character X is replaced by 4 or 6 depending on the address family of the specified IP
//
// Limitations:
// - nft46 must receive exactly one argument
// - only one IP must be given per command
// - the IP must be between { braces }
int isIPv4(char *tab, int len) {
int i;
// IPv4 addresses are at least 7 chars long
if (len < 7 || !isdigit(tab[0]) || !isdigit(tab[len-1])) {
return 0;
}
// Each char must be a digit or a dot between 2 digits
for (i=1; i<len-1; i++) {
if (!isdigit(tab[i]) && !(tab[i] == '.' && isdigit(tab[i-1]) && isdigit(tab[i+1]))) {
return 0;
}
}
return 1;
}
int isIPv6(char *tab, int len) {
int i;
// IPv6 addresses are at least 3 chars long
if (len < 3) {
return 0;
}
// Each char must be a digit, :, a-f, or A-F
for (i=0; i<len; i++) {
if (!isdigit(tab[i]) && tab[i] != ':' && tab[i] != '.' && !(tab[i] >= 'a' && tab[i] <= 'f') && !(tab[i] >= 'A' && tab[i] <= 'F')) {
return 0;
}
}
return 1;
}
int findchar(char *tab, char c, int i, int len) {
while (i < len && tab[i] != c) i++;
if (i == len) {
printf("nft46: one %c must be present\n", c);
exit(1);
}
return i;
}
void adapt_args(char *tab) {
int i, len, X, startIP, endIP;
X = startIP = endIP = -1;
len = strlen(tab);
i = 0;
X = i = findchar(tab, 'X', i, len);
startIP = i = findchar(tab, '{', i, len);
while (startIP + 1 <= (i = findchar(tab, ' ', i, len))) startIP = i + 1;
i = startIP;
endIP = i = findchar(tab, ' ', i, len) - 1;
if (isIPv4(tab+startIP, endIP-startIP+1)) {
tab[X] = '4';
return;
}
if (isIPv6(tab+startIP, endIP-startIP+1)) {
tab[X] = '6';
return;
}
printf("nft46: no IP address found\n");
exit(1);
}
void exec(char *str, char **argv) {
argv[0] = str;
execvp(str, argv);
// returns only if fails
printf("nft46: exec failed %d\n", errno);
}
int main(int argc, char **argv) {
if (argc != 2) {
printf("nft46: Exactly one argument must be given\n");
exit(1);
}
adapt_args(argv[1]);
exec("nft", argv);
}


@ -4,19 +4,23 @@ MANDIR = $(PREFIX)/share/man/man1
SYSTEMDDIR ?= /etc/systemd
install:
install -m755 reaction nft46 ip46tables $(DESTDIR)$(BINDIR)
install -m644 reaction*.1 $(DESTDIR)$(MANDIR)/man/man1/
install -m644 reaction.bash $(DESTDIR)/share/bash-completion/completions/reaction
install -m644 reaction.fish $(DESTDIR)/share/fish/completions/
install -m644 _reaction $(DESTDIR)/share/zsh/vendor-completions/
install -m644 reaction.service $(SYSTEMDDIR)/system/reaction.service
install -Dm755 reaction $(DESTDIR)$(BINDIR)
install -Dm755 reaction-plugin-virtual $(DESTDIR)$(BINDIR)
install -Dm644 reaction*.1 -t $(DESTDIR)$(MANDIR)/
install -Dm644 reaction.bash $(DESTDIR)$(PREFIX)/share/bash-completion/completions/reaction
install -Dm644 reaction.fish $(DESTDIR)$(PREFIX)/share/fish/vendor_completions.d/reaction.fish
install -Dm644 _reaction $(DESTDIR)$(PREFIX)/share/zsh/vendor-completions/_reaction
install -Dm644 reaction.service $(SYSTEMDDIR)/system/reaction.service
install-ipset:
install -Dm755 reaction-plugin-ipset $(DESTDIR)$(BINDIR)
remove:
rm -f $(DESTDIR)$(BINDIR)/bin/reaction
rm -f $(DESTDIR)$(BINDIR)/bin/nft46
rm -f $(DESTDIR)$(BINDIR)/bin/ip46tables
rm -f $(DESTDIR)$(MANDIR)/man/man1/reaction*.1
rm -f $(DESTDIR)/share/bash-completion/completions/reaction
rm -f $(DESTDIR)/share/fish/completions/
rm -f $(DESTDIR)/share/zsh/vendor-completions/
rm -f $(DESTDIR)$(BINDIR)/bin/reaction-plugin-virtual
rm -f $(DESTDIR)$(BINDIR)/bin/reaction-plugin-ipset
rm -f $(DESTDIR)$(MANDIR)/reaction*.1
rm -f $(DESTDIR)$(PREFIX)/share/bash-completion/completions/reaction
rm -f $(DESTDIR)$(PREFIX)/share/fish/vendor_completions.d/reaction.fish
rm -f $(DESTDIR)$(PREFIX)/share/zsh/vendor-completions/_reaction
rm -f $(SYSTEMDDIR)/system/reaction.service


@ -1,13 +1,13 @@
# vim: ft=systemd
[Unit]
Description=A daemon that scans program outputs for repeated patterns, and takes action.
Description=reaction daemon
Documentation=https://reaction.ppom.me
# Ensure reaction will insert its chain after docker has inserted theirs. Only useful when iptables & docker are used
# After=docker.service
# See `man systemd.exec` and `man systemd.service` for most options below
[Service]
ExecStart=/usr/bin/reaction start -c /etc/%i
ExecStart=/usr/bin/reaction start -c /etc/reaction/
# Ask systemd to create /var/lib/reaction (/var/lib/ is implicit)
StateDirectory=reaction
@ -15,6 +15,10 @@ StateDirectory=reaction
RuntimeDirectory=reaction
# Start reaction in its state directory
WorkingDirectory=/var/lib/reaction
# Let reaction kill its child processes first
KillMode=mixed
# Put reaction in its own slice so that plugins can be grouped within.
Slice=system-reaction.slice
[Install]
WantedBy=multi-user.target


@ -0,0 +1 @@
[Slice]


@ -0,0 +1,23 @@
[package]
name = "reaction-plugin-cluster"
version = "0.1.0"
edition = "2024"
[dependencies]
reaction-plugin.workspace = true
chrono.workspace = true
futures.workspace = true
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tokio.features = ["rt-multi-thread"]
treedb.workspace = true
data-encoding = "2.9.0"
iroh = { version = "0.95.1", default-features = false }
rand = "0.9.2"
[dev-dependencies]
assert_fs.workspace = true


@ -0,0 +1,165 @@
use std::{
collections::BTreeMap,
net::{SocketAddrV4, SocketAddrV6},
sync::Arc,
time::Duration,
};
use futures::future::join_all;
use iroh::{
Endpoint,
endpoint::{ConnectOptions, TransportConfig},
};
use reaction_plugin::{Line, shutdown::ShutdownController};
use remoc::rch::mpsc as remocMpsc;
use tokio::sync::mpsc as tokioMpsc;
use treedb::{Database, time::Time};
use crate::{ActionInit, StreamInit, connection::ConnectionManager, endpoint::EndpointManager};
pub const ALPN: [&[u8]; 1] = ["reaction_cluster_1".as_bytes()];
pub type UtcLine = (Arc<String>, Time);
pub fn transport_config() -> TransportConfig {
// FIXME higher timeouts and keep alive
let mut transport = TransportConfig::default();
transport
.max_idle_timeout(Some(Duration::from_millis(2000).try_into().unwrap()))
.keep_alive_interval(Some(Duration::from_millis(200)));
transport
}
pub fn connect_config() -> ConnectOptions {
ConnectOptions::new().with_transport_config(transport_config().into())
}
pub async fn bind(stream: &StreamInit) -> Result<Endpoint, String> {
let mut builder = Endpoint::builder()
.secret_key(stream.secret_key.clone())
.alpns(ALPN.iter().map(|slice| slice.to_vec()).collect())
.relay_mode(iroh::RelayMode::Disabled)
.clear_discovery()
.transport_config(transport_config());
if let Some(ip) = stream.bind_ipv4 {
builder = builder.bind_addr_v4(SocketAddrV4::new(ip, stream.listen_port));
}
if let Some(ip) = stream.bind_ipv6 {
builder = builder.bind_addr_v6(SocketAddrV6::new(ip, stream.listen_port, 0, 0));
}
builder.bind().await.map_err(|err| {
format!(
"Could not create socket address for cluster {}: {err}",
stream.name
)
})
}
pub async fn cluster_tasks(
endpoint: Endpoint,
mut stream: StreamInit,
mut actions: Vec<ActionInit>,
db: &mut Database,
shutdown: ShutdownController,
) -> Result<(), String> {
eprintln!("DEBUG cluster tasks starts running");
let (message_action2connection_txs, mut message_action2connection_rxs): (
Vec<tokioMpsc::Sender<UtcLine>>,
Vec<tokioMpsc::Receiver<UtcLine>>,
) = (0..stream.nodes.len())
.map(|_| tokioMpsc::channel(1))
.unzip();
// Spawn action tasks
while let Some(mut action) = actions.pop() {
let message_action2connection_txs = message_action2connection_txs.clone();
let own_cluster_tx = stream.tx.clone();
tokio::spawn(async move {
action
.serve(message_action2connection_txs, own_cluster_tx)
.await
});
}
let endpoint = Arc::new(endpoint);
let mut connection_endpoint2connection_txs = BTreeMap::new();
// Spawn connection managers
while let Some((pk, endpoint_addr)) = stream.nodes.pop_first() {
let cluster_name = stream.name.clone();
let endpoint = endpoint.clone();
let message_action2connection_rx = message_action2connection_rxs.pop().unwrap();
let stream_tx = stream.tx.clone();
let shutdown = shutdown.clone();
let (connection_manager, connection_endpoint2connection_tx) = ConnectionManager::new(
cluster_name,
endpoint_addr,
endpoint,
stream.message_timeout,
message_action2connection_rx,
stream_tx,
db,
shutdown,
)
.await?;
tokio::spawn(async move { connection_manager.task().await });
connection_endpoint2connection_txs.insert(pk, connection_endpoint2connection_tx);
}
// Spawn connection accepter
EndpointManager::new(
endpoint.clone(),
stream.name.clone(),
connection_endpoint2connection_txs,
shutdown.clone(),
);
eprintln!("DEBUG cluster tasks finished running");
Ok(())
}
impl ActionInit {
// Receive messages from its reaction action and dispatch them to all connections and to the reaction stream
async fn serve(
&mut self,
nodes_tx: Vec<tokioMpsc::Sender<UtcLine>>,
own_stream_tx: remocMpsc::Sender<Line>,
) {
while let Ok(Some(m)) = self.rx.recv().await {
eprintln!("DEBUG action: received a message to send to connections");
let line = self.send.line(m.match_);
if self.self_
&& let Err(err) = own_stream_tx.send((line.clone(), m.time)).await
{
eprintln!("ERROR while queueing message to be sent to own cluster stream: {err}");
}
let line = (Arc::new(line), m.time.into());
for result in join_all(nodes_tx.iter().map(|tx| tx.send(line.clone()))).await {
if let Err(err) = result {
eprintln!("ERROR while queueing message to be sent to cluster nodes: {err}");
};
}
}
}
}
#[cfg(test)]
mod tests {
use chrono::{DateTime, Local};
// As long as nodes communicate with UTC datetimes, them having different local timezones is not an issue!
#[test]
fn different_local_tz_is_ok() {
let dates: Vec<DateTime<Local>> = serde_json::from_str(
"[\"2025-11-02T17:47:21.716229569+01:00\",\"2025-11-02T18:47:21.716229569+02:00\"]",
)
.unwrap();
assert_eq!(dates[0].to_utc(), dates[1].to_utc());
}
}


@ -0,0 +1,668 @@
use std::{cmp::max, io::Error as IoError, sync::Arc, time::Duration};
use futures::FutureExt;
use iroh::{
Endpoint, EndpointAddr,
endpoint::{Connection, RecvStream, SendStream, VarInt},
};
use rand::random_range;
use reaction_plugin::{Line, shutdown::ShutdownController};
use tokio::{
io::{AsyncReadExt, AsyncWriteExt, BufReader, BufWriter},
sync::mpsc,
time::sleep,
};
use treedb::{
Database, Tree,
helpers::{to_string, to_time},
time::{Time, now},
};
use crate::{
cluster::{ALPN, UtcLine, connect_config},
key::Show,
};
const PROTOCOL_VERSION: u32 = 1;
const CLOSE_RECV: (u32, &[u8]) = (1, b"error receiving from your stream");
const CLOSE_CLOSED: (u32, &[u8]) = (2, b"you closed your stream");
const CLOSE_SEND: (u32, &[u8]) = (3, b"could not send a message to your channel so I quit");
type MaybeRemoteLine = Result<Option<(String, Time)>, IoError>;
enum Event {
LocalMessageReceived(Option<UtcLine>),
RemoteMessageReceived(MaybeRemoteLine),
ConnectionReceived(Option<ConnOrConn>),
}
pub struct OwnConnection {
connection: Connection,
id: u64,
line_tx: BufWriter<SendStream>,
line_rx: BufReader<RecvStream>,
next_time_secs: Option<u64>,
next_time_nanos: Option<u32>,
next_len: Option<usize>,
next_line: Option<Vec<u8>>,
}
impl OwnConnection {
fn new(
connection: Connection,
id: u64,
line_tx: BufWriter<SendStream>,
line_rx: BufReader<RecvStream>,
) -> Self {
Self {
connection,
id,
line_tx,
line_rx,
next_time_secs: None,
next_time_nanos: None,
next_len: None,
next_line: None,
}
}
/// Send a line to peer.
///
/// Time is a std::time::Duration since UNIX_EPOCH, which is defined as UTC
/// So it's safe to use between nodes using different timezones
async fn send_line(&mut self, line: &String, time: &Time) -> Result<(), std::io::Error> {
self.line_tx.write_u64(time.as_secs()).await?;
self.line_tx.write_u32(time.subsec_nanos()).await?;
self.line_tx.write_u32(line.len() as u32).await?;
self.line_tx.write_all(line.as_bytes()).await?;
self.line_tx.flush().await?;
Ok(())
}
/// Cancel-safe function that returns next line from peer
/// Returns None if we don't have all data yet.
async fn recv_line(&mut self) -> MaybeRemoteLine {
if self.next_time_secs.is_none() {
self.next_time_secs = Some(self.line_rx.read_u64().await?);
}
if self.next_time_nanos.is_none() {
self.next_time_nanos = Some(self.line_rx.read_u32().await?);
}
if self.next_len.is_none() {
self.next_len = Some(self.line_rx.read_u32().await? as usize);
}
// Ok we have next_len.is_some()
let next_len = self.next_len.unwrap();
if self.next_line.is_none() {
self.next_line = Some(Vec::with_capacity(next_len));
}
// Ok we have next_line.is_some()
let next_line = self.next_line.as_mut().unwrap();
let actual_len = next_line.len();
// Resize to wanted length
next_line.resize(next_len, 0);
// Read bytes
let bytes_read = self
.line_rx
.read(&mut next_line[actual_len..next_len])
.await?;
// Truncate possibly unread bytes
next_line.truncate(actual_len + bytes_read);
// Let's test if we read all bytes
if next_line.len() == next_len {
// Ok we have a full line
self.next_len.take();
let line = String::try_from(self.next_line.take().unwrap()).map_err(|err| {
std::io::Error::new(std::io::ErrorKind::InvalidData, err.to_string())
})?;
let time = Time::new(
self.next_time_secs.take().unwrap(),
self.next_time_nanos.take().unwrap(),
);
Ok(Some((line, time)))
} else {
// Ok we don't have a full line, will be next time!
Ok(None)
}
}
}
pub enum ConnOrConn {
Connection(Connection),
OwnConnection(OwnConnection),
}
/// Handle a remote node.
/// Manage reception and sending of messages to this node.
/// Retry failed connections.
pub struct ConnectionManager {
/// Cluster's name (for logging)
cluster_name: String,
/// The remote node we're communicating with (for logging)
node_id: String,
/// Remote
remote: EndpointAddr,
/// Endpoint
endpoint: Arc<Endpoint>,
/// Cancel asking for a connection
cancel_ask_connection: Option<mpsc::Sender<()>>,
/// Create a delegated task to send ourselves a connection
connection_tx: mpsc::Sender<ConnOrConn>,
/// The EndpointManager or our delegated task sending us a connection (whether we asked for it or not)
connection_rx: mpsc::Receiver<ConnOrConn>,
/// Our own connection (when we have one)
connection: Option<OwnConnection>,
/// Last connection ID, used as a deterministic way to choose between conflicting connections
last_connection_id: u64,
/// Max duration before we drop pending messages to a node we can't connect to.
message_timeout: Duration,
/// Message we receive from actions
message_rx: mpsc::Receiver<UtcLine>,
/// Our queue of messages to send
message_queue: Tree<Time, Arc<String>>,
/// Messages we send from remote nodes to our own stream
own_cluster_tx: remoc::rch::mpsc::Sender<Line>,
/// shutdown
shutdown: ShutdownController,
}
impl ConnectionManager {
pub async fn new(
cluster_name: String,
remote: EndpointAddr,
endpoint: Arc<Endpoint>,
message_timeout: Duration,
message_rx: mpsc::Receiver<UtcLine>,
own_cluster_tx: remoc::rch::mpsc::Sender<Line>,
db: &mut Database,
shutdown: ShutdownController,
) -> Result<(Self, mpsc::Sender<ConnOrConn>), String> {
let node_id = remote.id.show();
let message_queue = db
.open_tree(
format!("message_queue_{}_{}", endpoint.id().show(), node_id),
message_timeout,
|(key, value)| Ok((to_time(&key)?, Arc::new(to_string(&value)?))),
)
.await?;
let (connection_tx, connection_rx) = mpsc::channel(1);
Ok((
Self {
cluster_name,
node_id,
remote,
endpoint,
connection: None,
cancel_ask_connection: None,
connection_tx: connection_tx.clone(),
connection_rx,
last_connection_id: 0,
message_timeout,
message_rx,
message_queue,
own_cluster_tx,
shutdown,
},
connection_tx,
))
}
/// Main loop
pub async fn task(mut self) {
self.ask_connection();
loop {
let have_connection = self.connection.is_some();
let maybe_conn_rx = self
.connection
.as_mut()
.map(|conn| conn.recv_line().boxed())
// This Future will never be polled because of the if in select!
// It still needs to be present because the branch will be evaluated
// so we can't unwrap
.unwrap_or(false_recv().boxed());
let event = tokio::select! {
biased;
// Quitting
_ = self.shutdown.wait() => None,
// Receive a connection from EndpointManager
conn = self.connection_rx.recv() => Some(Event::ConnectionReceived(conn)),
// Receive remote message when we have a connection
msg = maybe_conn_rx, if have_connection => Some(Event::RemoteMessageReceived(msg)),
// Receive a message from local Actions
msg = self.message_rx.recv() => Some(Event::LocalMessageReceived(msg)),
};
match event {
Some(event) => {
self.handle_event(event).await;
self.send_queue_messages().await;
self.drop_timeout_messages().await;
}
None => break,
}
}
}
async fn handle_event(&mut self, event: Event) {
match event {
Event::ConnectionReceived(connection) => {
self.handle_connection(connection).await;
}
Event::LocalMessageReceived(utc_line) => {
self.handle_local_message(utc_line).await;
}
Event::RemoteMessageReceived(message) => {
self.handle_remote_message(message).await;
}
}
}
async fn send_queue_messages(&mut self) {
while let Some(connection) = &mut self.connection
&& let Some((time, line)) = self
.message_queue
.first_key_value()
.map(|(k, v)| (k.clone(), v.clone()))
{
if let Err(err) = connection.send_line(&line, &time).await {
eprintln!(
"INFO cluster {}: connection with node {} failed: {err}",
self.cluster_name, self.node_id,
);
self.close_connection(CLOSE_SEND).await;
} else {
self.message_queue.remove(&time).await;
eprintln!(
"DEBUG cluster {}: node {}: sent a local message to remote: {}",
self.cluster_name, self.node_id, line
);
}
}
}
async fn drop_timeout_messages(&mut self) {
let now = now();
let mut count = 0;
loop {
// We have a next key and it reached timeout
if let Some(next_key) = self.message_queue.first_key_value().map(|kv| kv.0.clone())
&& next_key + self.message_timeout < now
{
self.message_queue.remove(&next_key).await;
count += 1;
} else {
break;
}
}
if count > 0 {
eprintln!(
"DEBUG cluster {}: node {}: dropping {count} messages that reached timeout",
self.cluster_name, self.node_id,
)
}
}
/// Bootstrap a new Connection
/// Returns true if we have a valid connection now
async fn handle_connection(&mut self, connection: Option<ConnOrConn>) {
match connection {
None => {
eprintln!(
"DEBUG cluster {}: ConnectionManager {}: quitting because EndpointManager has quit",
self.cluster_name, self.node_id,
);
self.quit();
}
Some(connection) => {
if let Some(cancel) = self.cancel_ask_connection.take() {
let _ = cancel.send(()).await;
}
let last_connection_id = self.last_connection_id;
let mut insert_connection = |own_connection: OwnConnection| {
if self
.connection
.as_ref()
.is_none_or(|old_own| old_own.id < own_connection.id)
{
self.last_connection_id = own_connection.id;
self.connection = Some(own_connection);
} else {
eprintln!(
"WARN cluster {}: node {}: ignoring incoming connection, as we already have a valid connection with it and our connection id is greater",
self.cluster_name, self.node_id,
);
}
};
match connection {
ConnOrConn::Connection(connection) => {
match open_channels(
connection,
last_connection_id,
&self.cluster_name,
&self.node_id,
)
.await
{
Ok(own_connection) => insert_connection(own_connection),
Err(err) => {
eprintln!(
"ERROR cluster {}: trying to initialize connection to node {}: {err}",
self.cluster_name, self.node_id,
);
if self.connection.is_none() {
self.ask_connection();
}
}
}
}
ConnOrConn::OwnConnection(own_connection) => insert_connection(own_connection),
}
}
}
}
async fn handle_remote_message(&mut self, message: MaybeRemoteLine) {
match message {
Err(err) => {
eprintln!(
"WARN cluster {}: node {}: connection {}: error receiving remote message: {err}",
self.cluster_name, self.node_id, self.last_connection_id
);
self.close_connection(CLOSE_RECV).await;
}
Ok(None) => {
eprintln!(
"WARN cluster {}: node {} closed its stream",
self.cluster_name, self.node_id,
);
self.close_connection(CLOSE_CLOSED).await;
}
Ok(Some(line)) => {
if let Err(err) = self
.own_cluster_tx
.send((line.0.clone(), line.1.into()))
.await
{
eprintln!(
"ERROR cluster {}: could not send message to reaction stream: {err}",
self.cluster_name
);
eprintln!(
"INFO cluster {}: line that can't be sent: {}",
self.cluster_name, line.0
);
self.quit();
} else {
eprintln!(
"DEBUG cluster {}: node {}: sent a remote message to local stream: {}",
self.cluster_name, self.node_id, line.0
);
}
}
}
}
async fn handle_local_message(&mut self, message: Option<UtcLine>) {
eprintln!(
"DEBUG cluster {}: node {}: received a local message",
self.cluster_name, self.node_id,
);
match message {
None => {
eprintln!(
"INFO cluster {}: no action remaining, quitting",
self.cluster_name
);
self.quit();
}
Some(message) => match &mut self.connection {
Some(connection) => {
if let Err(err) = connection.send_line(&message.0, &message.1).await {
eprintln!(
"INFO cluster {}: connection with node {} failed: {err}",
self.cluster_name, self.node_id,
);
self.message_queue.insert(message.1, message.0).await;
self.close_connection(CLOSE_SEND).await;
} else {
eprintln!(
"DEBUG cluster {}: node {}: sent a local message to remote: {}",
self.cluster_name, self.node_id, message.0
);
}
}
None => {
eprintln!(
"DEBUG cluster {}: node {}: no connection, saving local message to send later: {}",
self.cluster_name, self.node_id, message.0
);
self.message_queue.insert(message.1, message.0).await;
}
},
}
}
async fn close_connection(&mut self, code: (u32, &[u8])) {
if let Some(connection) = self.connection.take() {
connection
.connection
.close(VarInt::from_u32(code.0), code.1);
}
self.ask_connection();
}
fn ask_connection(&mut self) {
// if self.node_id.starts_with('H') {
let (tx, rx) = mpsc::channel(1);
self.cancel_ask_connection = Some(tx);
try_connect(
self.cluster_name.clone(),
self.remote.clone(),
self.endpoint.clone(),
self.last_connection_id,
self.connection_tx.clone(),
rx,
);
}
fn quit(&mut self) {
self.shutdown.ask_shutdown();
}
}
/// Accept one stream and open one stream.
/// This way, there is no need to know if we created or accepted the connection.
async fn open_channels(
connection: Connection,
last_connexion_id: u64,
cluster_name: &str,
node_id: &str,
) -> Result<OwnConnection, IoError> {
eprintln!(
"DEBUG cluster {}: node {}: opening uni channel",
cluster_name, node_id
);
let mut output = BufWriter::new(connection.open_uni().await?);
let our_id = random_range(last_connexion_id + 1..last_connexion_id + 1_000_000);
eprintln!(
"DEBUG cluster {}: node {}: sending handshake in uni channel",
cluster_name, node_id
);
output.write_u32(PROTOCOL_VERSION).await?;
output.write_u64(our_id).await?;
output.flush().await?;
eprintln!(
"DEBUG cluster {}: node {}: accepting uni channel",
cluster_name, node_id
);
let mut input = BufReader::new(connection.accept_uni().await?);
eprintln!(
"DEBUG cluster {}: node {}: reading handshake from uni channel",
cluster_name, node_id
);
let their_version = input.read_u32().await?;
if their_version != PROTOCOL_VERSION {
return Err(IoError::new(
std::io::ErrorKind::InvalidData,
format!(
"incompatible version: {their_version}. We use {PROTOCOL_VERSION}. Consider upgrading the node with the older version."
),
));
}
let their_id = input.read_u64().await?;
// FIXME Do we need to test this? If so, this function should return their_id even when error in order to retry better next time
// if their_id < last_connexion_id
// ERROR
// else
let chosen_id = max(our_id, their_id);
eprintln!(
"DEBUG cluster {}: node {}: version handshake complete: last id: {last_connexion_id}, our id: {our_id}, their id: {their_id}: chosen id: {chosen_id}",
cluster_name, node_id
);
Ok(OwnConnection::new(connection, chosen_id, output, input))
}
async fn false_recv() -> MaybeRemoteLine {
Ok(None)
}
const START_TIMEOUT: Duration = Duration::from_millis(500);
const MAX_TIMEOUT: Duration = Duration::from_hours(1);
const TIMEOUT_FACTOR: f64 = 1.5;
fn with_random(d: Duration) -> Duration {
let max_delta = d.as_micros() as f32 * 0.2;
d + Duration::from_micros(rand::random_range(0.0..max_delta) as u64)
}
// Compute the next wait Duration.
// We're multiplying the Duration by [`TIMEOUT_FACTOR`] and cap it to [`MAX_TIMEOUT`].
fn next_delta(delta: Option<Duration>) -> Duration {
with_random(match delta {
None => START_TIMEOUT,
Some(delta) => {
// Multiply timeout by TIMEOUT_FACTOR
let delta = Duration::from_millis(((delta.as_millis() as f64) * TIMEOUT_FACTOR) as u64);
// Cap to MAX_TIMEOUT
if delta > MAX_TIMEOUT {
MAX_TIMEOUT
} else {
delta
}
}
})
}
#[cfg(test)]
#[test]
fn test_with_random() {
for d in [
123, 1234, 12345, 123456, 1234567, 12345678, 123456789, 1234567890,
] {
let rd = with_random(Duration::from_micros(d)).as_micros();
assert!(rd as f32 >= d as f32, "{rd} < {d}");
assert!(rd as f32 <= (d + 1) as f32 * 1.2, "{rd} > {d} * 1.2");
}
}
fn try_connect(
cluster_name: String,
remote: EndpointAddr,
endpoint: Arc<Endpoint>,
last_connection_id: u64,
connection_tx: mpsc::Sender<ConnOrConn>,
mut order_stop: mpsc::Receiver<()>,
) {
tokio::spawn(async move {
let node_id = remote.id.show();
// Until we have a connection or we're requested to stop
let mut keep_trying = true;
let mut delta = None;
while keep_trying {
delta = Some(next_delta(delta));
keep_trying = tokio::select! {
_ = sleep(delta.unwrap_or_default()) => true,
_ = order_stop.recv() => false,
};
if keep_trying {
eprintln!("DEBUG cluster {cluster_name}: node {node_id}: trying to connect...");
let connect = tokio::select! {
// conn = endpoint.connect(remote.clone(), ALPN[0]) => Some(conn),
conn = endpoint.connect_with_opts(remote.clone(), ALPN[0], connect_config()) => Some(conn),
_ = order_stop.recv() => None,
};
if let Some(connect) = connect {
let res = match connect {
Ok(connecting) => match connecting.await {
Ok(connection) => {
eprintln!(
"DEBUG cluster {cluster_name}: node {node_id}: created connection"
);
match open_channels(
connection,
last_connection_id,
&cluster_name,
&node_id,
)
.await
{
Ok(own_connection) => {
if let Err(err) = connection_tx
.send(ConnOrConn::OwnConnection(own_connection))
.await
{
eprintln!(
"DEBUG cluster {cluster_name}: node {node_id}: quitting because ConnectionManager has quit: {err}"
);
}
// successfully opened connection
keep_trying = false;
Ok(())
}
Err(err) => Err(err.to_string()),
}
}
Err(err) => Err(err.to_string()),
},
Err(err) => Err(err.to_string()),
};
if let Err(err) = res {
eprintln!(
"WARN cluster {cluster_name}: node {node_id}: while trying to connect: {err}"
);
}
} else {
// received stop order
eprintln!(
"DEBUG cluster {cluster_name}: node {node_id}: stopped trying to connect because we received a connection from it"
);
keep_trying = false;
}
}
}
});
}


@ -0,0 +1,128 @@
use std::collections::BTreeMap;
use std::sync::Arc;
use iroh::{Endpoint, PublicKey, endpoint::Incoming};
use reaction_plugin::shutdown::ShutdownController;
use tokio::sync::mpsc;
use crate::{connection::ConnOrConn, key::Show};
enum Break {
Yes,
No,
}
pub struct EndpointManager {
/// The [`iroh::Endpoint`] to manage
endpoint: Arc<Endpoint>,
/// Cluster's name (for logging)
cluster_name: String,
/// Connection sender to the Connection Managers
connections_tx: BTreeMap<PublicKey, mpsc::Sender<ConnOrConn>>,
/// shutdown
shutdown: ShutdownController,
}
impl EndpointManager {
pub fn new(
endpoint: Arc<Endpoint>,
cluster_name: String,
connections_tx: BTreeMap<PublicKey, mpsc::Sender<ConnOrConn>>,
shutdown: ShutdownController,
) {
tokio::spawn(async move {
Self {
endpoint,
cluster_name,
connections_tx,
shutdown,
}
.task()
.await
});
}
async fn task(&mut self) {
loop {
let incoming = tokio::select! {
incoming = self.endpoint.accept() => incoming,
_ = self.shutdown.wait() => None,
};
match incoming {
Some(incoming) => {
if let Break::Yes = self.handle_incoming(incoming).await {
break;
}
}
None => break,
}
}
self.endpoint.close().await
}
async fn handle_incoming(&mut self, incoming: Incoming) -> Break {
eprintln!(
"DEBUG cluster {}: EndpointManager: receiving connection",
self.cluster_name,
);
// FIXME a malicious actor could maybe prevent a node from connecting to
// its cluster by sending lots of invalid slow connection requests?
// This function could be moved to a new 'oneshot' task instead
let remote_address = incoming.remote_address();
let remote_address_validated = incoming.remote_address_validated();
let connection = match incoming.await {
Ok(connection) => connection,
Err(err) => {
if remote_address_validated {
eprintln!("INFO refused connection from {}: {err}", remote_address)
} else {
eprintln!("INFO refused connection: {err}")
}
return Break::No;
}
};
let remote_id = connection.remote_id();
match self.connections_tx.get(&remote_id) {
None => {
eprintln!(
"WARN cluster {}: incoming connection from node '{}' (ip: {}) which is not in our list, refusing it.",
self.cluster_name,
remote_id.show(),
remote_address
);
eprintln!(
"INFO cluster {}: {}, {}",
self.cluster_name,
"maybe it's not from our cluster,",
"maybe this node's configuration has not yet been updated to add this new node."
);
return Break::No;
}
Some(tx) => {
if tx.send(ConnOrConn::Connection(connection)).await.is_err() {
eprintln!(
"DEBUG cluster {}: EndpointManager: quitting because ConnectionManager has quit",
self.cluster_name,
);
self.shutdown.ask_shutdown();
return Break::Yes;
}
eprintln!(
"DEBUG cluster {}: EndpointManager: receiving connection from {}",
self.cluster_name,
remote_id.show(),
);
}
}
// TODO persist the incoming address, so that we don't forget this address
Break::No
}
}
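The hand-rolled `Break { Yes, No }` enum plays the same role as the standard library's `std::ops::ControlFlow`. A self-contained sketch of the accept-loop shape using the standard type — the handler logic here is hypothetical, standing in for `handle_incoming`:

```rust
use std::ops::ControlFlow;

// Hypothetical handler: a zero event asks the loop to stop,
// anything else is processed normally.
fn handle(event: u32) -> ControlFlow<()> {
    if event == 0 {
        ControlFlow::Break(())
    } else {
        ControlFlow::Continue(())
    }
}

// Mirrors the shape of `EndpointManager::task`: keep handling events
// until the handler signals that the loop should end.
fn run(events: &[u32]) -> u32 {
    let mut handled = 0;
    for &event in events {
        match handle(event) {
            ControlFlow::Break(()) => break,
            ControlFlow::Continue(()) => handled += 1,
        }
    }
    handled
}
```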

View file

@ -0,0 +1,188 @@
use std::io;
use data_encoding::DecodeError;
use iroh::{PublicKey, SecretKey};
use tokio::{
fs::{self, File},
io::AsyncWriteExt,
};
pub fn secret_key_path(dir: &str, cluster_name: &str) -> String {
format!("{dir}/secret_key_{cluster_name}.txt")
}
pub async fn secret_key(dir: &str, cluster_name: &str) -> Result<SecretKey, String> {
let path = secret_key_path(dir, cluster_name);
if let Some(key) = get_secret_key(&path).await? {
Ok(key)
} else {
let key = SecretKey::generate(&mut rand::rng());
set_secret_key(&path, &key).await?;
Ok(key)
}
}
async fn get_secret_key(path: &str) -> Result<Option<SecretKey>, String> {
let key = match fs::read_to_string(path).await {
Ok(key) => Ok(key),
Err(err) => match err.kind() {
io::ErrorKind::NotFound => return Ok(None),
_ => Err(format!("can't read secret key file: {err}")),
},
}?;
let bytes = match key_b64_to_bytes(&key) {
Ok(key) => Ok(key),
Err(err) => Err(format!(
"invalid secret key read from file: {err}. Please remove the `{path}` file from the plugin directory.",
)),
}?;
Ok(Some(SecretKey::from_bytes(&bytes)))
}
async fn set_secret_key(path: &str, key: &SecretKey) -> Result<(), String> {
let secret_key = key.show();
File::options()
.mode(0o600)
.write(true)
.create(true)
.open(path)
.await
.map_err(|err| format!("can't open `{path}` in the plugin directory: {err}"))?
.write_all(secret_key.as_bytes())
.await
.map_err(|err| format!("can't write to `{path}` in the plugin directory: {err}"))
}
pub fn key_b64_to_bytes(key: &str) -> Result<[u8; 32], DecodeError> {
let vec = data_encoding::BASE64URL.decode(key.as_bytes())?;
if vec.len() != 32 {
return Err(DecodeError {
position: vec.len(),
kind: data_encoding::DecodeKind::Length,
});
}
let mut bytes = [0u8; 32];
bytes.copy_from_slice(&vec);
Ok(bytes)
}
pub fn key_bytes_to_b64(key: &[u8; 32]) -> String {
data_encoding::BASE64URL.encode(key)
}
/// Implemented by PublicKey & SecretKey to display keys as base64 instead of hexadecimal.
/// Similar to Display/ToString
pub trait Show {
fn show(&self) -> String;
}
impl Show for PublicKey {
fn show(&self) -> String {
key_bytes_to_b64(self.as_bytes())
}
}
impl Show for SecretKey {
fn show(&self) -> String {
key_bytes_to_b64(&self.to_bytes())
}
}
#[cfg(test)]
mod tests {
use assert_fs::{
TempDir,
prelude::{FileWriteStr, PathChild},
};
use iroh::{PublicKey, SecretKey};
use tokio::fs::read_to_string;
use crate::key::{
get_secret_key, key_b64_to_bytes, key_bytes_to_b64, secret_key_path, set_secret_key,
};
#[test]
fn secret_key_encode_decode() {
for (secret_key, public_key) in [
(
"g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=",
"HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=",
),
(
"5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=",
"LPSQ9pS7m_5vvNC-fhoBNeL2-eS2Fd6aO4ImSnXp3lc=",
),
] {
assert_eq!(
secret_key,
&key_bytes_to_b64(&key_b64_to_bytes(secret_key).unwrap())
);
assert_eq!(
public_key,
&key_bytes_to_b64(&key_b64_to_bytes(public_key).unwrap())
);
let secret_key_parsed = SecretKey::from_bytes(&key_b64_to_bytes(secret_key).unwrap());
let public_key_parsed =
PublicKey::from_bytes(&key_b64_to_bytes(public_key).unwrap()).unwrap();
assert_eq!(secret_key_parsed.public(), public_key_parsed);
}
}
#[tokio::test]
async fn secret_key_get() {
let tmp_dir = TempDir::new().unwrap();
let tmp_dir_str = tmp_dir.to_str().unwrap();
for (secret_key, cluster_name) in [
("g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=", "my_cluster"),
("5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=", "name"),
] {
tmp_dir
.child(&format!("secret_key_{cluster_name}.txt"))
.write_str(secret_key)
.unwrap();
let secret_key_parsed = SecretKey::from_bytes(&key_b64_to_bytes(secret_key).unwrap());
let path = secret_key_path(tmp_dir_str, cluster_name);
let secret_key_from_file = get_secret_key(&path).await.unwrap();
assert_eq!(
secret_key_parsed.to_bytes(),
secret_key_from_file.unwrap().to_bytes()
)
}
assert_eq!(
Ok(None),
get_secret_key(&format!("{tmp_dir_str}/non_existent"))
.await
// SecretKey doesn't implement PartialEq, so we compare bytes,
// even though we expect None here
.map(|opt| opt.map(|pk| pk.to_bytes()))
);
// Will fail if we're root, but who runs this as root??
assert!(
get_secret_key("/root/non_existent")
.await
.is_err()
);
}
#[tokio::test]
async fn secret_key_set() {
let tmp_dir = TempDir::new().unwrap();
let tmp_dir_str = tmp_dir.to_str().unwrap();
let path = format!("{tmp_dir_str}/secret");
let key = SecretKey::generate(&mut rand::rng());
assert!(set_secret_key(&path, &key).await.is_ok());
let read_file = read_to_string(&path).await;
assert!(read_file.is_ok());
assert_eq!(read_file.unwrap(), key_bytes_to_b64(&key.to_bytes()));
}
}
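The manual byte-copy loop in `key_b64_to_bytes` can equivalently be written with `TryInto`, which performs the same length check. A minimal sketch, with the error simplified to the offending length:

```rust
// Length-checked conversion of a decoded Vec<u8> into a fixed key array.
// Sketch only: the real code returns a data_encoding::DecodeError instead.
fn to_key_bytes(vec: Vec<u8>) -> Result<[u8; 32], usize> {
    let len = vec.len();
    // `Vec<u8> -> [u8; 32]` fails (handing back the Vec) when len != 32
    vec.try_into().map_err(|_| len)
}
```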

View file

@ -0,0 +1,273 @@
use std::{
collections::{BTreeMap, BTreeSet},
net::{Ipv4Addr, Ipv6Addr, SocketAddr},
path::PathBuf,
time::Duration,
};
use iroh::{EndpointAddr, PublicKey, SecretKey, TransportAddr};
use reaction_plugin::{
ActionConfig, ActionImpl, Exec, Hello, Line, Manifest, PluginInfo, RemoteResult, StreamConfig,
StreamImpl, line::PatternLine, main_loop, shutdown::ShutdownController, time::parse_duration,
};
use remoc::{rch::mpsc, rtc};
use serde::{Deserialize, Serialize};
use treedb::Database;
use crate::key::Show;
mod cluster;
mod connection;
mod endpoint;
mod key;
#[cfg(test)]
mod tests;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {
init: BTreeMap<String, (StreamInit, Vec<ActionInit>)>,
cluster_shutdown: ShutdownController,
}
/// Stream options as defined by the user
#[derive(Serialize, Deserialize)]
struct StreamOptions {
/// The UDP port to open
listen_port: u16,
/// The IPv4 to bind to. Defaults to 0.0.0.0.
/// Set to `null` to use IPv6 only.
#[serde(default = "ipv4_unspecified")]
bind_ipv4: Option<Ipv4Addr>,
/// The IPv6 to bind to. Defaults to `::`.
/// Set to `null` to use IPv4 only.
#[serde(default = "ipv6_unspecified")]
bind_ipv6: Option<Ipv6Addr>,
/// Other nodes which are part of the cluster.
nodes: Vec<NodeOption>,
/// Max duration before we drop pending messages to a node we can't connect to.
message_timeout: String,
}
fn ipv4_unspecified() -> Option<Ipv4Addr> {
Some(Ipv4Addr::UNSPECIFIED)
}
fn ipv6_unspecified() -> Option<Ipv6Addr> {
Some(Ipv6Addr::UNSPECIFIED)
}
#[derive(Serialize, Deserialize)]
struct NodeOption {
public_key: String,
#[serde(default)]
addresses: Vec<SocketAddr>,
}
/// Stream information before start
struct StreamInit {
name: String,
listen_port: u16,
bind_ipv4: Option<Ipv4Addr>,
bind_ipv6: Option<Ipv6Addr>,
secret_key: SecretKey,
message_timeout: Duration,
nodes: BTreeMap<PublicKey, EndpointAddr>,
tx: mpsc::Sender<Line>,
}
#[derive(Serialize, Deserialize)]
struct ActionOptions {
/// The line to send to the corresponding cluster, example: "ban \<ip\>"
send: String,
/// The name of the corresponding cluster, example: "my_cluster_stream"
to: String,
/// Whether the stream of this node also receives the line
#[serde(default, rename = "self")]
self_: bool,
}
struct ActionInit {
name: String,
send: PatternLine,
self_: bool,
rx: mpsc::Receiver<Exec>,
}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::from(["cluster".into()]),
actions: BTreeSet::from(["cluster_send".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
let mut ret_streams = Vec::with_capacity(streams.len());
let mut ret_actions = Vec::with_capacity(actions.len());
for StreamConfig {
stream_name,
stream_type,
config,
} in streams
{
if &stream_type != "cluster" {
return Err("This plugin can't handle other stream types than cluster".into());
}
let options: StreamOptions = serde_json::from_value(config.into())
.map_err(|err| format!("invalid options: {err}"))?;
let mut nodes = BTreeMap::default();
let message_timeout = parse_duration(&options.message_timeout)
.map_err(|err| format!("invalid message_timeout: {err}"))?;
if options.bind_ipv4.is_none() && options.bind_ipv6.is_none() {
Err(
"At least one of bind_ipv4 and bind_ipv6 must be enabled: leave one unset (to use its default) or set one to an IP address.",
)?;
}
if options.nodes.is_empty() {
Err("At least one remote node has to be configured for a cluster")?;
}
for node in options.nodes.into_iter() {
let bytes = key::key_b64_to_bytes(&node.public_key)
.map_err(|err| format!("invalid public key {}: {err}", node.public_key))?;
let public_key = PublicKey::from_bytes(&bytes)
.map_err(|err| format!("invalid public key {}: {err}", node.public_key))?;
nodes.insert(
public_key,
EndpointAddr {
id: public_key,
addrs: node
.addresses
.into_iter()
.map(TransportAddr::Ip)
.collect(),
},
);
}
let secret_key = key::secret_key(".", &stream_name).await?;
eprintln!(
"INFO public key of this node for cluster {stream_name}: {}",
secret_key.public().show()
);
let (tx, rx) = mpsc::channel(1);
let stream = StreamInit {
name: stream_name.clone(),
listen_port: options.listen_port,
bind_ipv4: options.bind_ipv4,
bind_ipv6: options.bind_ipv6,
secret_key,
message_timeout,
nodes,
tx,
};
if self.init.insert(stream_name, (stream, vec![])).is_some() {
return Err("this virtual stream has already been initialized".into());
}
ret_streams.push(StreamImpl {
stream: rx,
standalone: true,
})
}
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if &action_type != "cluster_send" {
return Err(
"This plugin can't handle other action types than 'cluster_send'".into(),
);
}
let options: ActionOptions = serde_json::from_value(config.into())
.map_err(|err| format!("invalid options: {err}"))?;
let (tx, rx) = mpsc::channel(1);
let init_action = ActionInit {
name: format!("{}.{}.{}", stream_name, filter_name, action_name),
send: PatternLine::new(options.send, patterns),
self_: options.self_,
rx,
};
match self.init.get_mut(&options.to) {
Some((_, actions)) => actions.push(init_action),
None => {
return Err(format!(
"ERROR action '{}' sends 'to' unknown stream '{}'",
init_action.name, options.to
)
.into());
}
}
ret_actions.push(ActionImpl { tx })
}
Ok((ret_streams, ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
self.cluster_shutdown.delegate().handle_quit_signals()?;
let mut db = {
let path = PathBuf::from(".");
let (cancellation_token, task_tracker_token) = self.cluster_shutdown.token().split();
Database::open(&path, cancellation_token, task_tracker_token)
.await
.map_err(|err| format!("Can't open database: {err}"))?
};
while let Some((_, (stream, actions))) = self.init.pop_first() {
let endpoint = cluster::bind(&stream).await?;
cluster::cluster_tasks(
endpoint,
stream,
actions,
&mut db,
self.cluster_shutdown.clone(),
)
.await?;
}
// Free containers
self.init = Default::default();
eprintln!("DEBUG started");
Ok(())
}
async fn close(self) -> RemoteResult<()> {
self.cluster_shutdown.ask_shutdown();
self.cluster_shutdown.wait_all_task_shutdown().await;
Ok(())
}
}
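`start()` drains `self.init` with `BTreeMap::pop_first`, which moves each entry out in ascending key order so the owned init data can be handed to a spawned task. A stripped-down illustration of that drain pattern, with toy types standing in for the stream/action tuples:

```rust
use std::collections::BTreeMap;

// Sketch of the `while let Some(..) = map.pop_first()` drain in `start()`:
// entries come out smallest key first, and the map is consumed.
fn drain_in_order(mut map: BTreeMap<String, u32>) -> Vec<(String, u32)> {
    let mut out = Vec::new();
    while let Some(entry) = map.pop_first() {
        out.push(entry);
    }
    out
}
```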

View file

@ -0,0 +1,293 @@
use std::env::set_current_dir;
use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig};
use serde_json::json;
use crate::{Plugin, tests::insert_secret_key};
use super::{PUBLIC_KEY_A, TEST_MUTEX, stream_ok};
#[tokio::test]
async fn conf_stream() {
// Minimal node configuration
let nodes = json!([{
"public_key": PUBLIC_KEY_A,
}]);
// Invalid type
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "clust".into(),
config: stream_ok().into(),
}],
vec![]
)
.await
.is_err()
);
for (json, is_ok) in [
(
json!({
"listen_port": 2048,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok as fn(&_) -> bool,
),
(
// invalid time
json!({
"listen_port": 2048,
"nodes": nodes,
"message_timeout": "30pv",
}),
Result::is_err,
),
(
json!({
"listen_port": 2048,
"bind_ipv4": "0.0.0.0",
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv6": "::",
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv4": "0.0.0.0",
"bind_ipv6": "::",
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv4": null,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
json!({
"listen_port": 2048,
"bind_ipv6": null,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_ok,
),
(
// No bind
json!({
"listen_port": 2048,
"bind_ipv4": null,
"bind_ipv6": null,
"nodes": nodes,
"message_timeout": "30m",
}),
Result::is_err,
),
(json!({}), Result::is_err),
] {
assert!(is_ok(
&Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: json.into(),
}],
vec![]
)
.await
));
}
}
#[tokio::test]
async fn conf_action() {
let patterns = vec!["p1".into(), "p2".into()];
// Invalid type
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_sen".into(),
config: json!({
"send": "<p1>",
"to": "stream",
})
.into(),
patterns: patterns.clone(),
}]
)
.await
.is_err()
);
for (json, is_ok) in [
(
json!({
"send": "<p1>",
"to": "stream",
}),
true,
),
(
json!({
"send": "<p1>",
"to": "stream",
"self": true,
}),
true,
),
(
json!({
"send": "<p1>",
"to": "stream",
"self": false,
}),
true,
),
(
// missing to
json!({
"send": "<p1>",
}),
false,
),
(
// missing send
json!({
"to": "stream",
}),
false,
),
(
// invalid self
json!({
"send": "<p1>",
"to": "stream",
"self": "true",
}),
false,
),
(
// missing conf
json!({}),
false,
),
] {
let ret = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json.clone().into(),
patterns: patterns.clone(),
}],
)
.await;
assert!(
ret.is_ok() == is_ok,
"is_ok: {is_ok}, ret: {:?}, action conf: {json:?}",
ret.map(|_| ())
);
}
}
#[tokio::test]
async fn conf_send() {
let _lock = TEST_MUTEX.lock();
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
insert_secret_key().await;
// No action is ok
let res = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![],
)
.await;
assert!(res.is_ok(), "{:?}", res.map(|_| ()));
// An action is ok
let res = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({ "send": "message", "to": "stream" }).into(),
patterns: vec![],
}],
)
.await;
assert!(res.is_ok(), "{:?}", res.map(|_| ()));
// Invalid to: option
let res = Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok().into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({ "send": "message", "to": "stream1" }).into(),
patterns: vec![],
}],
)
.await;
assert!(res.is_err(), "{:?}", res.map(|_| ()));
}

View file

@ -0,0 +1,319 @@
use std::{env::set_current_dir, time::Duration};
use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig};
use serde_json::json;
use tokio::{fs, time::timeout};
use treedb::time::now;
use crate::{
Plugin,
key::secret_key_path,
tests::{PUBLIC_KEY_A, PUBLIC_KEY_B, SECRET_KEY_A, SECRET_KEY_B, TEST_MUTEX},
};
#[derive(Clone)]
struct TestNode {
public_key: &'static str,
private_key: &'static str,
port: u16,
}
const POOL: [TestNode; 15] = [
TestNode {
public_key: PUBLIC_KEY_A,
private_key: SECRET_KEY_A,
port: 2055,
},
TestNode {
public_key: PUBLIC_KEY_B,
private_key: SECRET_KEY_B,
port: 2056,
},
TestNode {
public_key: "ZjEPlIdGikV_sPIAUzO3RFUidlERJUhJ9XwNAlieuvU=",
private_key: "SCbd8Ids3Dg9MwzyMNV1KFcUtsyRbeCp7GDmu-xXBSs=",
port: 2057,
},
TestNode {
public_key: "2FUpABLl9I6bU9a2XtWKMLDzwHfrVcNEG6K8Ix6sxWQ=",
private_key: "F0W8nIlVmuFVpelwYH4PDaBDM0COYOyXDmBEmnHyo5s=",
port: 2058,
},
TestNode {
public_key: "qR4JDI_yyPWUBrmBbQjqfFbGP14v9dEaQVPHPOjId1o=",
private_key: "S5pxTafNXPd_9TMT4_ERuPXlZ882UmggAHrf8Yntfqg=",
port: 2059,
},
TestNode {
public_key: "NjkPBwDO4IEOBjkcxufYtVXspJNQZ0qF6GamRq2TOB4=",
private_key: "zM_lXiFuwTkmPuuXqIghW_J0uwq0a53L_yhM57uy_R8=",
port: 2060,
},
TestNode {
public_key: "_mgTzrlE8b_zvka3LgfD5qH2h_d3S0hcDU1WzIL6C74=",
private_key: "6Obq7fxOXK-u-P3QB5FJvNnwXdKwP1FsVJ0555o7DXs=",
port: 2061,
},
TestNode {
public_key: "FLKxCSSjjzxH0ZWTpQ8xXcSIRutXUhIDhZimjamxO2s=",
private_key: "pBPcJ32bt4xGZIGZDLDtj0eedg7p5DENjAwA-wM-1vk=",
port: 2062,
},
TestNode {
public_key: "yYBWzhzXO4isdPW2SzI-Sv3mcy3dUl6Kl0oFN6YpuzE=",
private_key: "nC8F6prLAY9-86EZlfXwpOjQeghlPKf3PtT-zXsJZsA=",
port: 2063,
},
TestNode {
public_key: "QLbNxlLEUt0tieD9BX9of663gCm9WjKeqch0BIFJ3CE=",
private_key: "JL4bKNHJMaMX_ElnaDHc6Ql74HZbovcswNOrY6fN1sU=",
port: 2064,
},
TestNode {
public_key: "2cmAmcaEFW-9val6WMoHSfTW25IxiQHes7Jwy6NqLLc=",
private_key: "TCvfDLHLQ5RxfAs7_2Th2u1XF48ygxTLAAsUzVPBn_o=",
port: 2065,
},
TestNode {
public_key: "PfKYILyGmu0C6GFUOLw4MSLxN6gtkj0XUdvQW50A2xA=",
private_key: "LaQgDWsXpwSQlZZXd8UEllrgpeXw9biSye4zcjLclU0=",
port: 2066,
},
TestNode {
public_key: "OQMXwPl90gr-2y-f5qZIZuVG4WEae5cc8JOB39LTNYE=",
private_key: "blcigXzk0oeQ8J1jwYFiYHJ-pMiUqbUM4SJBlxA0MiI=",
port: 2067,
},
TestNode {
public_key: "DHpkBgnQUfpC7s4-mTfpn1_PN4dzj7hCCMF6GwO3Bus=",
private_key: "sw7-2gPOswznF2OJHJdbfyJxdjS-P5O0lie6SdOL_08=",
port: 2068,
},
TestNode {
public_key: "odjjaYd6lL1DG8N9AXHW9LGsrKIb5IlW0KZz-rgxfXA=",
private_key: "6JU6YHRBM_rJkuQmMaGaio_PZiyzZlTIU0qE8AHPGSE=",
port: 2069,
},
];
async fn stream_action(
name: &str,
index: usize,
nodes: &[TestNode],
) -> (StreamConfig, ActionConfig) {
let stream_name = format!("stream_{name}");
let this_node = &nodes[index];
let other_nodes: Vec<_> = nodes
.iter()
.filter(|node| node.public_key != this_node.public_key)
.map(|node| {
json!({
"public_key": node.public_key,
"addresses": [format!("[::1]:{}", node.port)]
})
})
.collect();
fs::write(secret_key_path(".", &stream_name), this_node.private_key)
.await
.unwrap();
(
StreamConfig {
stream_name: stream_name.clone(),
stream_type: "cluster".into(),
config: json!({
"message_timeout": "30s",
"listen_port": this_node.port,
"nodes": other_nodes,
})
.into(),
},
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({
"send": format!("from {name}: <test>"),
"to": stream_name,
})
.into(),
patterns: vec!["test".into()],
},
)
}
#[tokio::test]
async fn two_nodes_simultaneous_startup() {
for separate_plugin in [true /*, false */] {
let _lock = TEST_MUTEX.lock();
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
let ((mut stream_a, action_a), (mut stream_b, action_b)) = if separate_plugin {
let mut plugin_a = Plugin::default();
let (sa, aa) = stream_action("a", 0, &POOL[0..2]).await;
let (mut streams_a, mut actions_a) =
plugin_a.load_config(vec![sa], vec![aa]).await.unwrap();
plugin_a.start().await.unwrap();
let mut plugin_b = Plugin::default();
let (sb, ab) = stream_action("b", 1, &POOL[0..2]).await;
let (mut streams_b, mut actions_b) =
plugin_b.load_config(vec![sb], vec![ab]).await.unwrap();
plugin_b.start().await.unwrap();
(
(streams_a.remove(0), actions_a.remove(0)),
(streams_b.remove(0), actions_b.remove(0)),
)
} else {
let mut plugin = Plugin::default();
let a = stream_action("a", 0, &POOL[0..2]).await;
let b = stream_action("b", 1, &POOL[0..2]).await;
let (mut streams, mut actions) = plugin
.load_config(vec![a.0, b.0], vec![a.1, b.1])
.await
.unwrap();
plugin.start().await.unwrap();
(
(streams.remove(0), actions.remove(0)),
(streams.remove(0), actions.remove(0)),
)
};
for m in ["test1", "test2", "test3"] {
let time = now().into();
for (stream, action, from) in [
(&mut stream_b, &action_a, "a"),
(&mut stream_a, &action_b, "b"),
] {
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok(),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}"
);
let received = timeout(Duration::from_millis(5000), stream.stream.recv()).await;
assert!(
received.is_ok(),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}, did timeout"
);
let received = received.unwrap();
assert!(
received.is_ok(),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}, remoc receive error"
);
let received = received.unwrap();
assert_eq!(
received,
Some((format!("from {from}: {m}"), time)),
"separate_plugin: {separate_plugin}, message: {m}, from: {from}"
);
}
}
}
}
#[tokio::test]
async fn n_nodes_simultaneous_startup() {
let _lock = TEST_MUTEX.lock();
// Ports can take some time to be really closed
let mut port_delta = 0;
for n in 3..=POOL.len() {
println!("\nNODES: {n}\n");
port_delta += n;
// for n in 3..=3 {
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
let mut plugins = Vec::with_capacity(n);
let mut streams = Vec::with_capacity(n);
let mut actions = Vec::with_capacity(n);
for i in 0..n {
let mut plugin = Plugin::default();
let name = format!("n{i}");
let (stream, action) = stream_action(
&name,
i,
&POOL[0..n]
.iter()
.cloned()
.map(|node| TestNode {
port: node.port + port_delta as u16,
..node
})
.collect::<Vec<_>>()
.as_slice(),
)
.await;
let (mut stream, mut action) = plugin
.load_config(vec![stream], vec![action])
.await
.unwrap();
plugin.start().await.unwrap();
plugins.push(plugin);
streams.push(stream.pop().unwrap());
actions.push((action.pop().unwrap(), name));
}
for m in ["test1", "test2", "test3", "test4", "test5"] {
let time = now().into();
for (i, (action, from)) in actions.iter().enumerate() {
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok(),
"n nodes: {n}, n°action: {i}, message: {m}, from: {from}"
);
for (j, stream) in streams.iter_mut().enumerate().filter(|(j, _)| *j != i) {
let received = timeout(Duration::from_millis(5000), stream.stream.recv()).await;
assert!(
received.is_ok(),
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}, did timeout"
);
let received = received.unwrap();
assert!(
received.is_ok(),
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}, remoc receive error"
);
let received = received.unwrap();
assert_eq!(
received,
Some((format!("from {from}: {m}"), time)),
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}"
);
println!(
"n nodes: {n}, n°action: {i}, n°stream: {j}, message: {m}, from: {from}"
);
}
}
}
for plugin in plugins {
plugin.close().await.unwrap();
}
}
}
// TODO test:
// with nonexistent nodes
// different startup times
// stopping & restarting a node mid exchange

View file

@ -0,0 +1,40 @@
use std::sync::{LazyLock, Mutex};
use serde_json::json;
use tokio::fs::write;
mod conf;
mod e2e;
mod self_;
const SECRET_KEY_A: &str = "g7U1LPq2cgGSyk6CH_v1QpoXowSFKVQ8IcFljd_ZKGw=";
const PUBLIC_KEY_A: &str = "HhVh7ghqpXM9375HZ82OOeB504HBSS25wgug-1vUggY=";
const SECRET_KEY_B: &str = "5EgRjwIpqd60IXWCGg5dFTtxkI-0fS1PlhoIhUjh1eY=";
const PUBLIC_KEY_B: &str = "LPSQ9pS7m_5vvNC-fhoBNeL2-eS2Fd6aO4ImSnXp3lc=";
// Tests that spawn a database in current directory must be run one at a time
static TEST_MUTEX: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(()));
fn stream_ok_port(port: u16) -> serde_json::Value {
json!({
"listen_port": port,
"nodes": [{
"public_key": PUBLIC_KEY_A,
}],
"message_timeout": "30m",
})
}
fn stream_ok() -> serde_json::Value {
stream_ok_port(2048)
}
async fn insert_secret_key() {
write(
"./secret_key_stream.txt",
b"pBPcJ32bt4xGZIGZDLDtj0eedg7p5DENjAwA-wM-1vk=",
)
.await
.unwrap();
}
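`TEST_MUTEX` exists because these tests call `set_current_dir`, which mutates process-global state, so they must not run concurrently. A minimal sketch of the pattern — the names here are illustrative, not from the codebase:

```rust
use std::sync::{LazyLock, Mutex};

// One process-wide lock; every test touching global state (like the
// current directory) holds it for its whole duration.
static GLOBAL_LOCK: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(()));

// Run `f` while holding the lock; the guard drops when `f` returns.
fn with_exclusive_cwd<T>(f: impl FnOnce() -> T) -> T {
    let _guard = GLOBAL_LOCK.lock().unwrap();
    f()
}
```

Holding the guard in a `let _guard` binding (rather than `let _`) is what keeps the lock alive until the end of the scope.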

View file

@ -0,0 +1,78 @@
use std::{env::set_current_dir, time::Duration};
use assert_fs::TempDir;
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig};
use serde_json::json;
use tokio::time::timeout;
use treedb::time::now;
use crate::{Plugin, tests::insert_secret_key};
use super::{TEST_MUTEX, stream_ok_port};
#[tokio::test]
async fn run_with_self() {
let _lock = TEST_MUTEX.lock();
let dir = TempDir::new().unwrap();
set_current_dir(&dir).unwrap();
insert_secret_key().await;
for self_ in [true, false] {
let mut plugin = Plugin::default();
let (mut streams, mut actions) = plugin
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "cluster".into(),
config: stream_ok_port(2052).into(),
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "cluster_send".into(),
config: json!({
"send": "message <test>",
"to": "stream",
"self": self_,
})
.into(),
patterns: vec!["test".into()],
}],
)
.await
.unwrap();
let mut stream = streams.pop().unwrap();
let action = actions.pop().unwrap();
assert!(stream.standalone);
assert!(plugin.start().await.is_ok());
for m in ["test1", "test2", "test3", " a a a aa a a"] {
let time = now().into();
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok()
);
if self_ {
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
(format!("message {m}"), time),
);
} else {
// Don't receive anything
assert!(
timeout(Duration::from_millis(100), stream.stream.recv())
.await
.is_err()
);
}
}
}
}

View file

@ -0,0 +1,26 @@
[package]
name = "reaction-plugin-ipset"
description = "ipset plugin for reaction"
version = "1.0.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "fail2ban", "logs", "monitoring"]
default-run = "reaction-plugin-ipset"
[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
ipset = "0.9.0"
[package.metadata.deb]
section = "net"
assets = [
[ "target/release/reaction-plugin-ipset", "/usr/bin/reaction-plugin-ipset", "755" ],
]
depends = ["libipset-dev", "reaction"]

View file

@ -0,0 +1,419 @@
use std::fmt::Debug;
use reaction_plugin::{Exec, shutdown::ShutdownToken, time::parse_duration};
use remoc::rch::mpsc as remocMpsc;
use serde::{Deserialize, Serialize};
use crate::ipset::{CreateSet, IpSet, Order, SetChain, Version};
#[derive(Default, Serialize, Deserialize, PartialEq, Eq, Clone, Copy)]
pub enum IpVersion {
#[default]
#[serde(rename = "ip")]
Ip,
#[serde(rename = "ipv4")]
Ipv4,
#[serde(rename = "ipv6")]
Ipv6,
}
impl Debug for IpVersion {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
IpVersion::Ipv4 => "ipv4",
IpVersion::Ipv6 => "ipv6",
IpVersion::Ip => "ip",
}
)
}
}
#[derive(Default, Serialize, Deserialize)]
pub enum AddDel {
#[default]
#[serde(alias = "add")]
Add,
#[serde(alias = "del")]
Del,
}
/// User-facing action options
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ActionOptions {
/// The set that should be used by this action
pub set: String,
/// The pattern name of the IP.
/// Defaults to "ip"
#[serde(default = "serde_ip")]
pub pattern: String,
#[serde(skip)]
ip_index: usize,
/// Whether the action is to "add" or "del" the IP from the set
#[serde(default)]
action: AddDel,
#[serde(flatten)]
pub set_options: SetOptions,
}
fn serde_ip() -> String {
"ip".into()
}
impl ActionOptions {
pub fn set_ip_index(&mut self, patterns: Vec<String>) -> Result<(), ()> {
self.ip_index = patterns
.iter()
.position(|name| name == &self.pattern)
.ok_or(())?;
Ok(())
}
}
/// Merged set options
#[derive(Default, Clone, Deserialize, Serialize, Debug, PartialEq, Eq)]
pub struct SetOptions {
/// The IP type.
/// Defaults to `ip` (both IPv4 and IPv6).
/// If `ipv4`: creates an IPv4 set with this name
/// If `ipv6`: creates an IPv6 set with this name
/// If `ip`: creates an IPv4 set with its name suffixed by 'v4' AND an IPv6 set with its name suffixed by 'v6'
/// *Merged set-wise*.
#[serde(default)]
version: Option<IpVersion>,
/// Chains where the IP set should be inserted.
/// Defaults to `["INPUT", "FORWARD"]`
/// *Merged set-wise*.
#[serde(default)]
chains: Option<Vec<String>>,
/// Optional timeout, letting linux/netfilter handle set removal instead of reaction.
/// Note that `reaction show` and `reaction flush` won't work if this is set instead of an `after` action.
/// Same syntax as after and retryperiod in reaction.
/// *Merged set-wise*.
#[serde(skip_serializing_if = "Option::is_none")]
timeout: Option<String>,
#[serde(skip)]
timeout_u32: Option<u32>,
/// Target that iptables should use when the IP is encountered.
/// Defaults to DROP, but can also be ACCEPT, RETURN or any user-defined chain.
/// *Merged set-wise*.
#[serde(default)]
target: Option<String>,
}
impl SetOptions {
pub fn merge(&mut self, options: &SetOptions) -> Result<(), String> {
// merge two Option<T> and fail if there is conflict
fn inner_merge<T: Eq + Clone + std::fmt::Debug>(
a: &mut Option<T>,
b: &Option<T>,
name: &str,
) -> Result<(), String> {
match (&a, &b) {
(Some(aa), Some(bb)) => {
if aa != bb {
return Err(format!(
"Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
));
}
}
(None, Some(_)) => {
*a = b.clone();
}
_ => (),
};
Ok(())
}
inner_merge(&mut self.version, &options.version, "version")?;
inner_merge(&mut self.timeout, &options.timeout, "timeout")?;
inner_merge(&mut self.chains, &options.chains, "chains")?;
inner_merge(&mut self.target, &options.target, "target")?;
if let Some(timeout) = &self.timeout {
let duration = parse_duration(timeout)
.map_err(|err| format!("failed to parse timeout: {}", err))?
.as_secs();
if duration > u32::MAX as u64 {
return Err(format!(
"timeout is limited to {} seconds (approx {} days)",
u32::MAX,
49_000
));
}
self.timeout_u32 = Some(duration as u32);
}
Ok(())
}
}
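The set-wise merge rule above can be sketched standalone: `None` adopts the other side's value, equal `Some`s are kept, and differing `Some`s are a configuration conflict (function name `merge_opt` is illustrative):

```rust
// Standalone sketch of the Option-merge used by SetOptions::merge.
fn merge_opt<T: Eq + Clone + std::fmt::Debug>(
    a: &mut Option<T>,
    b: &Option<T>,
    name: &str,
) -> Result<(), String> {
    match (&a, &b) {
        (Some(aa), Some(bb)) => {
            if aa != bb {
                // two actions configured conflicting values for the same set
                return Err(format!(
                    "Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
                ));
            }
        }
        (None, Some(_)) => {
            // adopt the value from the other action's options
            *a = b.clone();
        }
        _ => (),
    };
    Ok(())
}

fn main() {
    let mut timeout: Option<u32> = None;
    assert!(merge_opt(&mut timeout, &Some(3600), "timeout").is_ok());
    assert_eq!(timeout, Some(3600));
    // merging the same value again is a no-op
    assert!(merge_opt(&mut timeout, &Some(3600), "timeout").is_ok());
    // a different value is rejected
    assert!(merge_opt(&mut timeout, &Some(60), "timeout").is_err());
    println!("ok");
}
```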
pub struct Set {
sets: SetNames,
chains: Vec<String>,
timeout: Option<u32>,
target: String,
}
impl Set {
pub fn from(name: String, options: SetOptions) -> Self {
Self {
sets: SetNames::new(name, options.version),
timeout: options.timeout_u32,
target: options.target.unwrap_or("DROP".into()),
chains: options
.chains
.unwrap_or(vec!["INPUT".into(), "FORWARD".into()]),
}
}
pub async fn init(&self, ipset: &mut IpSet) -> Result<(), (usize, String)> {
for (set, version) in [
(&self.sets.ipv4, Version::IPv4),
(&self.sets.ipv6, Version::IPv6),
] {
if let Some(set) = set {
// create set
ipset
.order(Order::CreateSet(CreateSet {
name: set.clone(),
version,
timeout: self.timeout,
}))
.await
.map_err(|err| (0, err.to_string()))?;
// insert set in chains
for (i, chain) in self.chains.iter().enumerate() {
ipset
.order(Order::InsertSet(SetChain {
set: set.clone(),
chain: chain.clone(),
target: self.target.clone(),
}))
.await
.map_err(|err| (i + 1, err.to_string()))?;
}
}
}
Ok(())
}
pub async fn destroy(&self, ipset: &mut IpSet, until: Option<usize>) {
for set in [&self.sets.ipv4, &self.sets.ipv6] {
if let Some(set) = set {
for chain in self
.chains
.iter()
.take(until.map(|until| until.saturating_sub(1)).unwrap_or(usize::MAX))
{
let _ = ipset
.order(Order::RemoveSet(SetChain {
set: set.clone(),
chain: chain.clone(),
target: self.target.clone(),
}))
.await;
}
if until.is_none_or(|until| until != 0) {
let _ = ipset.order(Order::DestroySet(set.clone())).await;
}
}
}
}
}
pub struct SetNames {
pub ipv4: Option<String>,
pub ipv6: Option<String>,
}
impl SetNames {
pub fn new(name: String, version: Option<IpVersion>) -> Self {
Self {
ipv4: match version {
Some(IpVersion::Ipv4) => Some(name.clone()),
Some(IpVersion::Ipv6) => None,
None | Some(IpVersion::Ip) => Some(format!("{}v4", name)),
},
ipv6: match version {
Some(IpVersion::Ipv4) => None,
Some(IpVersion::Ipv6) => Some(name),
None | Some(IpVersion::Ip) => Some(format!("{}v6", name)),
},
}
}
}
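The name derivation above can be checked in isolation: an explicit `ipv4`/`ipv6` version reuses the configured name as-is, while the dual-stack default derives a `v4` and a `v6` set. A minimal sketch (string-keyed for brevity, unlike the enum-based original):

```rust
// Returns (ipv4 set name, ipv6 set name), mirroring SetNames::new.
fn set_names(name: &str, version: Option<&str>) -> (Option<String>, Option<String>) {
    match version {
        Some("ipv4") => (Some(name.to_string()), None),
        Some("ipv6") => (None, Some(name.to_string())),
        // the default "ip" (dual-stack) case
        _ => (Some(format!("{name}v4")), Some(format!("{name}v6"))),
    }
}

fn main() {
    assert_eq!(
        set_names("jail", None),
        (Some("jailv4".to_string()), Some("jailv6".to_string()))
    );
    assert_eq!(set_names("jail", Some("ipv4")), (Some("jail".to_string()), None));
    assert_eq!(set_names("jail", Some("ipv6")), (None, Some("jail".to_string())));
    println!("ok");
}
```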
pub struct Action {
ipset: IpSet,
rx: remocMpsc::Receiver<Exec>,
shutdown: ShutdownToken,
sets: SetNames,
// index of pattern ip in match vec
ip_index: usize,
action: AddDel,
}
impl Action {
pub fn new(
ipset: IpSet,
shutdown: ShutdownToken,
rx: remocMpsc::Receiver<Exec>,
options: ActionOptions,
) -> Result<Self, String> {
Ok(Action {
ipset,
rx,
shutdown,
sets: SetNames::new(options.set, options.set_options.version),
ip_index: options.ip_index,
action: options.action,
})
}
pub async fn serve(mut self) {
loop {
let event = tokio::select! {
exec = self.rx.recv() => Some(exec),
_ = self.shutdown.wait() => None,
};
match event {
// shutdown asked
None => break,
// channel closed
Some(Ok(None)) => break,
// error from channel
Some(Err(err)) => {
eprintln!("ERROR {err}");
break;
}
// ok
Some(Ok(Some(exec))) => {
if let Err(err) = self.handle_exec(exec).await {
eprintln!("ERROR {err}");
break;
}
}
}
}
// eprintln!("DEBUG Asking for shutdown");
// self.shutdown.ask_shutdown();
}
async fn handle_exec(&mut self, mut exec: Exec) -> Result<(), String> {
// safeguard against Vec::remove's panic
if exec.match_.len() <= self.ip_index {
return Err(format!(
"match received from reaction is smaller than expected. looking for index {} but size is {}. this is a bug!",
self.ip_index,
exec.match_.len()
));
}
let ip = exec.match_.remove(self.ip_index);
// select set
let set = match (&self.sets.ipv4, &self.sets.ipv6) {
(None, None) => return Err("action is neither IPv4 nor IPv6, this is a bug!".to_string()),
(None, Some(set)) => set,
(Some(set), None) => set,
(Some(set4), Some(set6)) => {
if ip.contains(':') {
set6
} else {
set4
}
}
};
// add/remove ip to set
self.ipset
.order(match self.action {
AddDel::Add => Order::Add(set.clone(), ip),
AddDel::Del => Order::Del(set.clone(), ip),
})
.await?;
Ok(())
}
}
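The set selection above keys on a simple heuristic: any textual IPv6 address contains a colon, while a dotted-quad IPv4 address never does. A sketch (not part of the plugin) showing how `std::net` can make the classification explicit, and that the heuristic agrees with it on valid addresses:

```rust
use std::net::IpAddr;

/// Some(true) for IPv6, Some(false) for IPv4, None if not an IP at all.
fn is_ipv6(ip: &str) -> Option<bool> {
    ip.parse::<IpAddr>().ok().map(|addr| addr.is_ipv6())
}

fn main() {
    assert_eq!(is_ipv6("192.0.2.1"), Some(false));
    assert_eq!(is_ipv6("2001:db8::1"), Some(true));
    assert_eq!(is_ipv6("not an ip"), None);
    // the colon heuristic used in handle_exec agrees on valid addresses
    assert!("2001:db8::1".contains(':'));
    assert!(!"192.0.2.1".contains(':'));
    println!("ok");
}
```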
#[cfg(test)]
mod tests {
use crate::action::{IpVersion, SetOptions};
#[tokio::test]
async fn set_options_merge() {
let s1 = SetOptions {
version: None,
chains: None,
timeout: None,
timeout_u32: None,
target: None,
};
let s2 = SetOptions {
version: Some(IpVersion::Ipv4),
chains: Some(vec!["INPUT".into()]),
timeout: Some("3h".into()),
timeout_u32: Some(3 * 3600),
target: Some("DROP".into()),
};
assert_ne!(s1, s2);
assert_eq!(s1, SetOptions::default());
{
// s2 can be merged in s1
let mut s1 = s1.clone();
assert!(s1.merge(&s2).is_ok());
assert_eq!(s1, s2);
}
{
// s1 can be merged in s2
let mut s2 = s2.clone();
assert!(s2.merge(&s1).is_ok());
}
{
// s1 can be merged in itself
let mut s3 = s1.clone();
assert!(s3.merge(&s1).is_ok());
assert_eq!(s1, s3);
}
{
// s2 can be merged in itself
let mut s3 = s2.clone();
assert!(s3.merge(&s2).is_ok());
assert_eq!(s2, s3);
}
for s3 in [
SetOptions {
version: Some(IpVersion::Ipv6),
..Default::default()
},
SetOptions {
chains: Some(vec!["damn".into()]),
..Default::default()
},
SetOptions {
timeout: Some("30min".into()),
..Default::default()
},
SetOptions {
target: Some("log-refuse".into()),
..Default::default()
},
] {
// none with some is ok
assert!(s3.clone().merge(&s1).is_ok(), "s3: {s3:?}");
assert!(s1.clone().merge(&s3).is_ok(), "s3: {s3:?}");
// different some is ko
assert!(s3.clone().merge(&s2).is_err(), "s3: {s3:?}");
assert!(s2.clone().merge(&s3).is_err(), "s3: {s3:?}");
}
}
}


@ -0,0 +1,248 @@
use std::{collections::BTreeMap, fmt::Display, net::Ipv4Addr, process::Command, thread};
use ipset::{
Session,
types::{HashNet, NetDataType, Parse},
};
use tokio::sync::{mpsc, oneshot};
#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub enum Version {
IPv4,
IPv6,
}
impl Display for Version {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Version::IPv4 => "IPv4",
Version::IPv6 => "IPv6",
})
}
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct CreateSet {
pub name: String,
pub version: Version,
pub timeout: Option<u32>,
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct SetChain {
pub set: String,
pub chain: String,
pub target: String,
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone)]
pub enum Order {
CreateSet(CreateSet),
DestroySet(String),
InsertSet(SetChain),
RemoveSet(SetChain),
Add(String, String),
Del(String, String),
}
#[derive(Clone)]
pub struct IpSet {
tx: mpsc::Sender<OrderType>,
}
impl Default for IpSet {
fn default() -> Self {
let (tx, rx) = mpsc::channel(1);
thread::spawn(move || IPsetManager::default().serve(rx));
Self { tx }
}
}
impl IpSet {
pub async fn order(&mut self, order: Order) -> Result<(), IpSetError> {
let (tx, rx) = oneshot::channel();
self.tx
.send((order, tx))
.await
.map_err(|err| IpSetError::Thread(format!("ipset thread has quit: {err}")))?;
rx.await
.map_err(|err| IpSetError::Thread(format!("ipset thread didn't respond: {err}")))?
.map_err(IpSetError::IpSet)
}
}
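`IpSet::order` follows a request/response pattern: every order carries its own reply channel, so concurrent callers get per-order results from the single blocking worker thread that owns the ipset sessions. A minimal synchronous sketch of the same shape using only `std` (the plugin uses tokio `mpsc`/`oneshot`; names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

// An order string paired with its dedicated reply channel.
type Request = (String, mpsc::Sender<Result<(), String>>);

/// Send one order and block until the worker replies,
/// mirroring the await in IpSet::order.
fn order(tx: &mpsc::Sender<Request>, ord: &str) -> Result<(), String> {
    let (rtx, rrx) = mpsc::channel();
    tx.send((ord.to_string(), rtx)).map_err(|e| e.to_string())?;
    rrx.recv().map_err(|e| e.to_string())?
}

fn main() {
    let (tx, rx) = mpsc::channel::<Request>();
    let worker = thread::spawn(move || {
        for (ord, reply) in rx {
            // pretend to apply the order; "bad" orders fail
            let result = if ord == "bad" {
                Err(format!("cannot apply {ord}"))
            } else {
                Ok(())
            };
            let _ = reply.send(result);
        }
    });
    assert!(order(&tx, "add 192.0.2.1").is_ok());
    assert!(order(&tx, "bad").is_err());
    drop(tx); // closing the channel stops the worker loop
    worker.join().unwrap();
    println!("ok");
}
```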
pub enum IpSetError {
Thread(String),
IpSet(()),
}
impl Display for IpSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
IpSetError::Thread(err) => err,
IpSetError::IpSet(()) => "ipset error",
}
)
}
}
impl From<IpSetError> for String {
fn from(value: IpSetError) -> Self {
match value {
IpSetError::Thread(err) => err,
IpSetError::IpSet(()) => "ipset error".to_string(),
}
}
}
pub type OrderType = (Order, oneshot::Sender<Result<(), ()>>);
struct Set {
session: Session<HashNet>,
version: Version,
}
#[derive(Default)]
struct IPsetManager {
// IPset sessions
sessions: BTreeMap<String, Set>,
}
impl IPsetManager {
fn serve(mut self, mut rx: mpsc::Receiver<OrderType>) {
loop {
match rx.blocking_recv() {
None => break,
Some((order, response)) => {
let result = self.handle_order(order);
let _ = response.send(result);
}
}
}
}
fn handle_order(&mut self, order: Order) -> Result<(), ()> {
match order {
Order::CreateSet(CreateSet {
name,
version,
timeout,
}) => {
eprintln!("INFO creating {version} set {name}");
let mut session: Session<HashNet> = Session::new(name.clone());
session
.create(|builder| {
let builder = if let Some(timeout) = timeout {
builder.with_timeout(timeout)?
} else {
builder
};
builder.with_ipv6(version == Version::IPv6)?.build()
})
.map_err(|err| eprintln!("ERROR Could not create set {name}: {err}"))?;
self.sessions.insert(name, Set { session, version });
}
Order::DestroySet(set) => {
if let Some(mut session) = self.sessions.remove(&set) {
eprintln!("INFO destroying {} set {set}", session.version);
session
.session
.destroy()
.map_err(|err| eprintln!("ERROR Could not destroy set {set}: {err}"))?;
}
}
Order::InsertSet(options) => self.insert_remove_set(options, true)?,
Order::RemoveSet(options) => self.insert_remove_set(options, false)?,
Order::Add(set, ip) => self.insert_remove_ip(set, ip, true)?,
Order::Del(set, ip) => self.insert_remove_ip(set, ip, false)?,
};
Ok(())
}
fn insert_remove_ip(&mut self, set: String, ip: String, insert: bool) -> Result<(), ()> {
self._insert_remove_ip(set, ip, insert)
.map_err(|err| eprintln!("ERROR {err}"))
}
fn _insert_remove_ip(&mut self, set: String, ip: String, insert: bool) -> Result<(), String> {
let session = self.sessions.get_mut(&set).ok_or(format!(
"No set named `{set}` is handled by this plugin. This is likely a bug."
))?;
let mut net_data = NetDataType::new(Ipv4Addr::LOCALHOST, 0);
net_data
.parse(&ip)
.map_err(|err| format!("`{ip}` is not recognized as an IP: {err}"))?;
if insert {
session.session.add(net_data, &[])
} else {
session.session.del(net_data)
}
.map_err(|err| {
let verb = if insert { "add" } else { "del" };
format!("Could not {verb} `{ip}` in set {set}: {err}")
})?;
Ok(())
}
fn insert_remove_set(&self, options: SetChain, insert: bool) -> Result<(), ()> {
self._insert_remove_set(options, insert)
.map_err(|err| eprintln!("ERROR {err}"))
}
fn _insert_remove_set(&self, options: SetChain, insert: bool) -> Result<(), String> {
let SetChain { set, chain, target } = options;
let version = self
.sessions
.get(&set)
.ok_or(format!(
"No set managed by this plugin with this name: {set}"
))?
.version;
let (verb, verbing, from) = if insert {
("insert", "inserting", "in")
} else {
("remove", "removing", "from")
};
eprintln!("INFO {verbing} {version} set {set} {from} chain {chain}");
let command = match version {
Version::IPv4 => "iptables",
Version::IPv6 => "ip6tables",
};
let mut child = Command::new(command)
.args([
"-w",
if insert { "-I" } else { "-D" },
&chain,
"-m",
"set",
"--match-set",
&set,
"src",
"-j",
&target,
])
.spawn()
.map_err(|err| format!("Could not {verb} ipset {set} {from} chain {chain}: Could not execute {command}: {err}"))?;
let exit = child
.wait()
.map_err(|err| format!("Could not {verb} ipset {set} {from} chain {chain}: {err}"))?;
if exit.success() {
Ok(())
} else {
Err(format!(
"Could not {verb} ipset: exit code {}",
exit.code()
.map(|c| c.to_string())
.unwrap_or_else(|| "<unknown>".to_string())
))
}
}
}
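The iptables invocation built in `insert_remove_set` can be sketched as pure argument assembly, so the shape of the generated rule is visible without spawning anything (helper name `iptables_args` is illustrative):

```rust
// Build the argv passed to iptables/ip6tables for inserting (-I) or
// deleting (-D) the rule that matches the set and jumps to the target.
fn iptables_args(insert: bool, chain: &str, set: &str, target: &str) -> Vec<String> {
    [
        "-w",                             // wait for the xtables lock
        if insert { "-I" } else { "-D" }, // insert or delete the rule
        chain,
        "-m", "set", "--match-set", set, "src",
        "-j", target,
    ]
    .iter()
    .map(|s| s.to_string())
    .collect()
}

fn main() {
    assert_eq!(
        iptables_args(true, "INPUT", "reactionv4", "DROP").join(" "),
        "-w -I INPUT -m set --match-set reactionv4 src -j DROP"
    );
    println!("ok");
}
```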


@ -0,0 +1,159 @@
use std::collections::{BTreeMap, BTreeSet};
use reaction_plugin::{
ActionConfig, ActionImpl, Hello, Manifest, PluginInfo, RemoteError, RemoteResult, StreamConfig,
StreamImpl,
shutdown::{ShutdownController, ShutdownToken},
};
use remoc::rtc;
use crate::{
action::{Action, ActionOptions, Set, SetOptions},
ipset::IpSet,
};
#[cfg(test)]
mod tests;
mod action;
mod ipset;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
reaction_plugin::main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {
ipset: IpSet,
sets: Vec<Set>,
actions: Vec<Action>,
shutdown: ShutdownController,
}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::default(),
actions: BTreeSet::from(["ipset".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
if !streams.is_empty() {
return Err("This plugin can't handle any stream type".into());
}
let mut ret_actions = Vec::with_capacity(actions.len());
let mut set_options: BTreeMap<String, SetOptions> = BTreeMap::new();
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if &action_type != "ipset" {
return Err("This plugin only handles the `ipset` action type".into());
}
let mut options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
})?;
options.set_ip_index(patterns).map_err(|_|
format!(
"No pattern with name {} in filter {stream_name}.{filter_name}. Try setting the option `pattern` to your pattern name of type 'ip'",
&options.pattern
)
)?;
// Merge option
set_options
.entry(options.set.clone())
.or_default()
.merge(&options.set_options)
.map_err(|err| format!("ipset {}: {err}", options.set))?;
let (tx, rx) = remoc::rch::mpsc::channel(1);
self.actions.push(Action::new(
self.ipset.clone(),
self.shutdown.token(),
rx,
options,
)?);
ret_actions.push(ActionImpl { tx });
}
// Init all sets
while let Some((name, options)) = set_options.pop_first() {
self.sets.push(Set::from(name, options));
}
Ok((vec![], ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
self.shutdown.delegate().handle_quit_signals()?;
let mut first_error = None;
for (i, set) in self.sets.iter().enumerate() {
// Stop at the first failing set, remembering which step failed
if let Err((failed_step, err)) = set.init(&mut self.ipset).await {
first_error = Some((i, failed_step, RemoteError::Plugin(err)));
break;
}
}
// Destroy initialized sets if error
if let Some((last_set, failed_step, err)) = first_error {
eprintln!("DEBUG last_set: {last_set} failed_step: {failed_step} err: {err}");
for (curr_set, set) in self.sets.iter().enumerate().take(last_set + 1) {
let until = if last_set == curr_set {
Some(failed_step)
} else {
None
};
set.destroy(&mut self.ipset, until).await;
}
return Err(err);
}
// Launch a task that will destroy the sets on shutdown
tokio::spawn(destroy_sets_at_shutdown(
self.ipset.clone(),
std::mem::take(&mut self.sets),
self.shutdown.token(),
));
// Launch all actions
while let Some(action) = self.actions.pop() {
tokio::spawn(async move { action.serve().await });
}
Ok(())
}
async fn close(self) -> RemoteResult<()> {
self.shutdown.ask_shutdown();
self.shutdown.wait_all_task_shutdown().await;
Ok(())
}
}
async fn destroy_sets_at_shutdown(mut ipset: IpSet, sets: Vec<Set>, shutdown: ShutdownToken) {
shutdown.wait().await;
for set in sets {
set.destroy(&mut ipset, None).await;
}
}


@ -0,0 +1,253 @@
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig, Value};
use serde_json::json;
use crate::Plugin;
#[tokio::test]
async fn conf_stream() {
// No stream is supported by ipset
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "ipset".into(),
config: Value::Null
}],
vec![]
)
.await
.is_err()
);
// Nothing is ok
assert!(Plugin::default().load_config(vec![], vec![]).await.is_ok());
}
#[tokio::test]
async fn conf_action_standalone() {
let p = vec!["name".into(), "ip".into(), "ip2".into()];
let p_noip = vec!["name".into(), "ip2".into()];
for (is_ok, conf, patterns) in [
// minimal set
(true, json!({ "set": "test" }), &p),
// missing set key
(false, json!({}), &p),
(false, json!({ "version": "ipv4" }), &p),
// unknown key
(false, json!({ "set": "test", "unknown": "yes" }), &p),
(false, json!({ "set": "test", "ip_index": 1 }), &p),
(false, json!({ "set": "test", "timeout_u32": 1 }), &p),
// pattern //
(true, json!({ "set": "test" }), &p),
(true, json!({ "set": "test", "pattern": "ip" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p_noip),
// unknown pattern "ip"
(false, json!({ "set": "test" }), &p_noip),
(false, json!({ "set": "test", "pattern": "ip" }), &p_noip),
// unknown pattern
(false, json!({ "set": "test", "pattern": "unknown" }), &p),
(false, json!({ "set": "test", "pattern": "uwu" }), &p_noip),
// bad type
(false, json!({ "set": "test", "pattern": 0 }), &p_noip),
(false, json!({ "set": "test", "pattern": true }), &p_noip),
// action //
(true, json!({ "set": "test", "action": "add" }), &p),
(true, json!({ "set": "test", "action": "del" }), &p),
// unknown action
(false, json!({ "set": "test", "action": "create" }), &p),
(false, json!({ "set": "test", "action": "insert" }), &p),
(false, json!({ "set": "test", "action": "delete" }), &p),
(false, json!({ "set": "test", "action": "destroy" }), &p),
// bad type
(false, json!({ "set": "test", "action": true }), &p),
(false, json!({ "set": "test", "action": 1 }), &p),
// ip version //
// ok
(true, json!({ "set": "test", "version": "ipv4" }), &p),
(true, json!({ "set": "test", "version": "ipv6" }), &p),
(true, json!({ "set": "test", "version": "ip" }), &p),
// unknown version
(false, json!({ "set": "test", "version": 4 }), &p),
(false, json!({ "set": "test", "version": 6 }), &p),
(false, json!({ "set": "test", "version": 46 }), &p),
(false, json!({ "set": "test", "version": "5" }), &p),
(false, json!({ "set": "test", "version": "ipv5" }), &p),
(false, json!({ "set": "test", "version": "4" }), &p),
(false, json!({ "set": "test", "version": "6" }), &p),
(false, json!({ "set": "test", "version": "46" }), &p),
// bad type
(false, json!({ "set": "test", "version": true }), &p),
// chains //
// everything is fine really
(true, json!({ "set": "test", "chains": [] }), &p),
(true, json!({ "set": "test", "chains": ["INPUT"] }), &p),
(true, json!({ "set": "test", "chains": ["FORWARD"] }), &p),
(
true,
json!({ "set": "test", "chains": ["custom_chain"] }),
&p,
),
(
true,
json!({ "set": "test", "chains": ["INPUT", "FORWARD"] }),
&p,
),
(
true,
json!({
"set": "test",
"chains": ["INPUT", "FORWARD", "my_iptables_chain"]
}),
&p,
),
// timeout //
(true, json!({ "set": "test", "timeout": "1m" }), &p),
(true, json!({ "set": "test", "timeout": "3 days" }), &p),
// bad
(false, json!({ "set": "test", "timeout": "3 dayz"}), &p),
(false, json!({ "set": "test", "timeout": 12 }), &p),
// target //
// anything is fine too
(true, json!({ "set": "test", "target": "DROP" }), &p),
(true, json!({ "set": "test", "target": "ACCEPT" }), &p),
(true, json!({ "set": "test", "target": "RETURN" }), &p),
(true, json!({ "set": "test", "target": "custom_chain" }), &p),
// bad
(false, json!({ "set": "test", "target": 11 }), &p),
(false, json!({ "set": "test", "target": ["DROP"] }), &p),
] {
let res = Plugin::default()
.load_config(
vec![],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "ipset".into(),
config: conf.clone().into(),
patterns: patterns.clone(),
}],
)
.await;
assert!(
res.is_ok() == is_ok,
"conf: {:?}, must be ok: {is_ok}, result: {:?}",
conf,
// empty Result::Ok because ActionImpl is not Debug
res.map(|_| ())
);
}
}
#[tokio::test]
async fn conf_action_merge() {
let mut plugin = Plugin::default();
let set1 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action1".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"target": "DROP",
"chains": ["INPUT"],
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set2 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"target": "DROP",
"version": "ip",
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set3 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"action": "del",
})
.into(),
patterns: vec!["ip".into()],
};
let res = plugin
.load_config(
vec![],
vec![
// First set
set1.clone(),
// Same set, adding options, no conflict
set2.clone(),
// Same set, no new options, no conflict
set3.clone(),
// Unrelated set, so no conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "ipset".into(),
config: json!({
"set": "test2",
"target": "target1",
"version": "ipv6",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_ok(), "res: {:?}", res.map(|_| ()));
// Another set with conflict is not ok
let res = plugin
.load_config(
vec![],
vec![
// First set
set1,
// Same set, adding options, no conflict
set2,
// Same set, no new options, no conflict
set3,
// Another set with conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "ipset".into(),
config: json!({
"set": "test",
"target": "target2",
"action": "del",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_err(), "res: {:?}", res.map(|_| ()));
}


@ -0,0 +1,13 @@
[package]
name = "reaction-plugin-nftables"
version = "0.1.0"
edition = "2024"
[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true
nftables = { version = "0.6.3", features = ["tokio"] }
libnftables1-sys = { version = "0.1.1" }


@ -0,0 +1,493 @@
use std::{
borrow::Cow,
collections::HashSet,
fmt::{Debug, Display},
};
use nftables::{
batch::Batch,
expr::Expression,
schema::{Element, NfListObject, Rule, SetFlag, SetType, SetTypeValue},
stmt::Statement,
types::{NfFamily, NfHook},
};
use reaction_plugin::{Exec, shutdown::ShutdownToken, time::parse_duration};
use remoc::rch::mpsc as remocMpsc;
use serde::{Deserialize, Serialize};
use crate::{helpers::Version, nft::NftClient};
#[derive(Default, Serialize, Deserialize, PartialEq, Eq, Clone, Copy)]
pub enum IpVersion {
#[default]
#[serde(rename = "ip")]
Ip,
#[serde(rename = "ipv4")]
Ipv4,
#[serde(rename = "ipv6")]
Ipv6,
}
impl Debug for IpVersion {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
IpVersion::Ipv4 => "ipv4",
IpVersion::Ipv6 => "ipv6",
IpVersion::Ip => "ip",
}
)
}
}
#[derive(Default, Debug, Serialize, Deserialize)]
pub enum AddDel {
#[default]
#[serde(alias = "add")]
Add,
#[serde(alias = "delete")]
Delete,
}
/// User-facing action options
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
pub struct ActionOptions {
/// The set that should be used by this action
pub set: String,
/// The pattern name of the IP.
/// Defaults to "ip"
#[serde(default = "serde_ip")]
pub pattern: String,
#[serde(skip)]
ip_index: usize,
// Whether the action is to "add" or "del" the ip from the set
#[serde(default)]
action: AddDel,
#[serde(flatten)]
pub set_options: SetOptions,
}
fn serde_ip() -> String {
"ip".into()
}
impl ActionOptions {
pub fn set_ip_index(&mut self, patterns: Vec<String>) -> Result<(), ()> {
self.ip_index = patterns
.iter()
.position(|name| name == &self.pattern)
.ok_or(())?;
Ok(())
}
}
/// Merged set options
#[derive(Default, Clone, Deserialize, Serialize, Debug, PartialEq, Eq)]
pub struct SetOptions {
/// The IP version.
/// Defaults to `ip`.
/// If `ipv4`: creates an IPv4 set with this name.
/// If `ipv6`: creates an IPv6 set with this name.
/// If `ip`: creates an IPv4 set with the name suffixed by 'v4' AND an IPv6 set with the name suffixed by 'v6'.
/// *Merged set-wise*.
#[serde(default)]
version: Option<IpVersion>,
/// Hooks on whose chains the set-matching rule should be installed.
/// Defaults to `["input", "forward"]`.
/// *Merged set-wise*.
#[serde(default)]
hooks: Option<Vec<RHook>>,
/// Optional timeout, letting linux/netfilter handle set removal instead of reaction.
/// Note that `reaction show` and `reaction flush` won't work if this is set instead of an `after` action.
/// Same syntax as `after` and `retryperiod` in reaction.
/// *Merged set-wise*.
#[serde(skip_serializing_if = "Option::is_none")]
timeout: Option<String>,
#[serde(skip)]
timeout_u32: Option<u32>,
/// Verdict statement applied when the IP is matched.
/// Defaults to `drop`, but can also be `accept`, `continue` or `return`.
/// *Merged set-wise*.
#[serde(default)]
target: Option<RStatement>,
}
impl SetOptions {
pub fn merge(&mut self, options: &SetOptions) -> Result<(), String> {
// merge two Option<T> and fail if there is conflict
fn inner_merge<T: Eq + Clone + std::fmt::Debug>(
a: &mut Option<T>,
b: &Option<T>,
name: &str,
) -> Result<(), String> {
match (&a, &b) {
(Some(aa), Some(bb)) => {
if aa != bb {
return Err(format!(
"Conflicting options for {name}: `{aa:?}` and `{bb:?}`"
));
}
}
(None, Some(_)) => {
*a = b.clone();
}
_ => (),
};
Ok(())
}
inner_merge(&mut self.version, &options.version, "version")?;
inner_merge(&mut self.timeout, &options.timeout, "timeout")?;
inner_merge(&mut self.hooks, &options.hooks, "hooks")?;
inner_merge(&mut self.target, &options.target, "target")?;
if let Some(timeout) = &self.timeout {
let duration = parse_duration(timeout)
.map_err(|err| format!("failed to parse timeout: {}", err))?
.as_secs();
if duration > u32::MAX as u64 {
return Err(format!(
"timeout is limited to {} seconds (approx {} days)",
u32::MAX,
49_000
));
}
self.timeout_u32 = Some(duration as u32);
}
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RHook {
Ingress,
Prerouting,
Forward,
Input,
Output,
Postrouting,
Egress,
}
impl RHook {
pub fn as_str(&self) -> &'static str {
match self {
RHook::Ingress => "ingress",
RHook::Prerouting => "prerouting",
RHook::Forward => "forward",
RHook::Input => "input",
RHook::Output => "output",
RHook::Postrouting => "postrouting",
RHook::Egress => "egress",
}
}
}
impl Display for RHook {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
impl From<&RHook> for NfHook {
fn from(value: &RHook) -> Self {
match value {
RHook::Ingress => Self::Ingress,
RHook::Prerouting => Self::Prerouting,
RHook::Forward => Self::Forward,
RHook::Input => Self::Input,
RHook::Output => Self::Output,
RHook::Postrouting => Self::Postrouting,
RHook::Egress => Self::Egress,
}
}
}
#[derive(Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RStatement {
Accept,
Drop,
Continue,
Return,
}
pub struct Set {
pub sets: SetNames,
pub hooks: Vec<RHook>,
pub timeout: Option<u32>,
pub target: RStatement,
}
impl Set {
pub fn from(name: String, options: SetOptions) -> Self {
Self {
sets: SetNames::new(name, options.version),
timeout: options.timeout_u32,
target: options.target.unwrap_or(RStatement::Drop),
hooks: options.hooks.unwrap_or(vec![RHook::Input, RHook::Forward]),
}
}
pub fn init<'a>(&self, batch: &mut Batch<'a>) -> Result<(), String> {
for (set, version) in [
(&self.sets.ipv4, Version::IPv4),
(&self.sets.ipv6, Version::IPv6),
] {
if let Some(set) = set {
let family = NfFamily::INet;
let table = Cow::from("reaction");
// create set
batch.add(NfListObject::Set(Box::new(nftables::schema::Set {
family,
table: table.to_owned(),
name: Cow::Owned(set.to_owned()),
// TODO Try a set which is both ipv4 and ipv6?
set_type: SetTypeValue::Single(match version {
Version::IPv4 => SetType::Ipv4Addr,
Version::IPv6 => SetType::Ipv6Addr,
}),
flags: Some({
let mut flags = HashSet::from([SetFlag::Interval]);
if self.timeout.is_some() {
flags.insert(SetFlag::Timeout);
}
flags
}),
timeout: self.timeout,
..Default::default()
})));
// insert set in chains
let expr = vec![match self.target {
RStatement::Accept => Statement::Accept(None),
RStatement::Drop => Statement::Drop(None),
RStatement::Continue => Statement::Continue(None),
RStatement::Return => Statement::Return(None),
}];
for hook in &self.hooks {
batch.add(NfListObject::Rule(Rule {
family,
table: table.to_owned(),
chain: Cow::from(hook.to_string()),
expr: Cow::Owned(expr.clone()),
..Default::default()
}));
}
}
}
Ok(())
}
}
pub struct SetNames {
pub ipv4: Option<String>,
pub ipv6: Option<String>,
}
impl SetNames {
pub fn new(name: String, version: Option<IpVersion>) -> Self {
Self {
ipv4: match version {
Some(IpVersion::Ipv4) => Some(name.clone()),
Some(IpVersion::Ipv6) => None,
None | Some(IpVersion::Ip) => Some(format!("{}v4", name)),
},
ipv6: match version {
Some(IpVersion::Ipv4) => None,
Some(IpVersion::Ipv6) => Some(name),
None | Some(IpVersion::Ip) => Some(format!("{}v6", name)),
},
}
}
}
pub struct Action {
nft: NftClient,
rx: remocMpsc::Receiver<Exec>,
shutdown: ShutdownToken,
sets: SetNames,
// index of pattern ip in match vec
ip_index: usize,
action: AddDel,
}
impl Action {
pub fn new(
nft: NftClient,
shutdown: ShutdownToken,
rx: remocMpsc::Receiver<Exec>,
options: ActionOptions,
) -> Result<Self, String> {
Ok(Action {
nft,
rx,
shutdown,
sets: SetNames::new(options.set, options.set_options.version),
ip_index: options.ip_index,
action: options.action,
})
}
pub async fn serve(mut self) {
loop {
let event = tokio::select! {
exec = self.rx.recv() => Some(exec),
_ = self.shutdown.wait() => None,
};
match event {
// shutdown asked
None => break,
// channel closed
Some(Ok(None)) => break,
// error from channel
Some(Err(err)) => {
eprintln!("ERROR {err}");
break;
}
// ok
Some(Ok(Some(exec))) => {
if let Err(err) = self.handle_exec(exec).await {
eprintln!("ERROR {err}");
break;
}
}
}
}
// eprintln!("DEBUG Asking for shutdown");
// self.shutdown.ask_shutdown();
}
async fn handle_exec(&mut self, mut exec: Exec) -> Result<(), String> {
// safeguard against Vec::remove's panic
if exec.match_.len() <= self.ip_index {
return Err(format!(
"match received from reaction is smaller than expected. looking for index {} but size is {}. this is a bug!",
self.ip_index,
exec.match_.len()
));
}
let ip = exec.match_.remove(self.ip_index);
// select set
let set = match (&self.sets.ipv4, &self.sets.ipv6) {
(None, None) => return Err("action is neither IPv4 nor IPv6, this is a bug!".to_string()),
(None, Some(set)) => set,
(Some(set), None) => set,
(Some(set4), Some(set6)) => {
if ip.contains(':') {
set6
} else {
set4
}
}
};
// add/remove ip to set
let element = NfListObject::Element(Element {
family: NfFamily::INet,
table: Cow::from("reaction"),
name: Cow::from(set),
elem: Cow::from(vec![Expression::String(Cow::from(ip.clone()))]),
});
let mut batch = Batch::new();
match self.action {
AddDel::Add => batch.add(element),
AddDel::Delete => batch.delete(element),
};
match self.nft.send(batch).await {
Ok(ok) => {
eprintln!("DEBUG action ok {:?} {ip}: {ok}", self.action);
Ok(())
}
Err(err) => Err(format!("action ko {:?} {ip}: {err}", self.action)),
}
}
}
#[cfg(test)]
mod tests {
use crate::action::{IpVersion, RHook, RStatement, SetOptions};
#[tokio::test]
async fn set_options_merge() {
let s1 = SetOptions {
version: None,
hooks: None,
timeout: None,
timeout_u32: None,
target: None,
};
let s2 = SetOptions {
version: Some(IpVersion::Ipv4),
hooks: Some(vec![RHook::Input]),
timeout: Some("3h".into()),
timeout_u32: Some(3 * 3600),
target: Some(RStatement::Drop),
};
assert_ne!(s1, s2);
assert_eq!(s1, SetOptions::default());
{
// s2 can be merged in s1
let mut s1 = s1.clone();
assert!(s1.merge(&s2).is_ok());
assert_eq!(s1, s2);
}
{
// s1 can be merged in s2
let mut s2 = s2.clone();
assert!(s2.merge(&s1).is_ok());
}
{
// s1 can be merged in itself
let mut s3 = s1.clone();
assert!(s3.merge(&s1).is_ok());
assert_eq!(s1, s3);
}
{
// s2 can be merged in itself
let mut s3 = s2.clone();
assert!(s3.merge(&s2).is_ok());
assert_eq!(s2, s3);
}
for s3 in [
SetOptions {
version: Some(IpVersion::Ipv6),
..Default::default()
},
SetOptions {
hooks: Some(vec![RHook::Output]),
..Default::default()
},
SetOptions {
timeout: Some("30min".into()),
..Default::default()
},
SetOptions {
target: Some(RStatement::Continue),
..Default::default()
},
] {
// none with some is ok
assert!(s3.clone().merge(&s1).is_ok(), "s3: {s3:?}");
assert!(s1.clone().merge(&s3).is_ok(), "s3: {s3:?}");
// different some is ko
assert!(s3.clone().merge(&s2).is_err(), "s3: {s3:?}");
assert!(s2.clone().merge(&s3).is_err(), "s3: {s3:?}");
}
}
}
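The merge semantics these tests exercise (a `None` field defers to a `Some`, two equal `Some`s agree, two different `Some`s conflict) can be sketched generically for a single field. This is an assumed reduction of `SetOptions::merge`, not the crate's code:

```rust
// Generic sketch of per-field merge semantics, mirroring the tests above:
// None defers to Some, identical values agree, differing values conflict.
fn merge_field<T: PartialEq + Clone>(a: &mut Option<T>, b: &Option<T>) -> Result<(), String> {
    match (a.as_ref(), b) {
        (_, None) => Ok(()),                    // nothing to merge in
        (None, Some(v)) => {
            *a = Some(v.clone());               // take the other side's value
            Ok(())
        }
        (Some(x), Some(y)) if x == y => Ok(()), // identical: no conflict
        _ => Err("conflicting values".into()),
    }
}

fn main() {
    let mut a: Option<u32> = None;
    assert!(merge_field(&mut a, &Some(3)).is_ok());
    assert_eq!(a, Some(3));
    assert!(merge_field(&mut a, &Some(3)).is_ok()); // same value: ok
    assert!(merge_field(&mut a, &Some(4)).is_err()); // conflict
    println!("ok");
}
```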


@ -0,0 +1,15 @@
use std::fmt::Display;
#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone)]
pub enum Version {
IPv4,
IPv6,
}
impl Display for Version {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Version::IPv4 => "IPv4",
Version::IPv6 => "IPv6",
})
}
}


@ -0,0 +1,176 @@
use std::{
borrow::Cow,
collections::{BTreeMap, BTreeSet},
};
use nftables::{
batch::Batch,
schema::{Chain, NfListObject, Table},
types::{NfChainType, NfFamily},
};
use reaction_plugin::{
ActionConfig, ActionImpl, Hello, Manifest, PluginInfo, RemoteResult, StreamConfig, StreamImpl,
shutdown::ShutdownController,
};
use remoc::rtc;
use crate::{
action::{Action, ActionOptions, Set, SetOptions},
nft::NftClient,
};
#[cfg(test)]
mod tests;
mod action;
pub mod helpers;
mod nft;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
reaction_plugin::main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {
nft: NftClient,
sets: Vec<Set>,
actions: Vec<Action>,
shutdown: ShutdownController,
}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::default(),
actions: BTreeSet::from(["nftables".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
if !streams.is_empty() {
return Err("This plugin can't handle any stream type".into());
}
let mut ret_actions = Vec::with_capacity(actions.len());
let mut set_options: BTreeMap<String, SetOptions> = BTreeMap::new();
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if action_type != "nftables" {
return Err("This plugin can't handle other action types than nftables".into());
}
let mut options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
})?;
options.set_ip_index(patterns).map_err(|_|
format!(
"No pattern with name {} in filter {stream_name}.{filter_name}. Try setting the option `pattern` to your pattern name of type 'ip'",
&options.pattern
)
)?;
// Merge option
set_options
.entry(options.set.clone())
.or_default()
.merge(&options.set_options)
.map_err(|err| format!("set {}: {err}", options.set))?;
let (tx, rx) = remoc::rch::mpsc::channel(1);
self.actions.push(Action::new(
self.nft.clone(),
self.shutdown.token(),
rx,
options,
)?);
ret_actions.push(ActionImpl { tx });
}
// Init all sets
while let Some((name, options)) = set_options.pop_first() {
self.sets.push(Set::from(name, options));
}
Ok((vec![], ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
self.shutdown.delegate().handle_quit_signals()?;
let mut batch = Batch::new();
batch.add(reaction_table());
// Create a chain for each registered netfilter hook
for hook in self
.sets
.iter()
.flat_map(|set| &set.hooks)
.collect::<BTreeSet<_>>()
{
batch.add(NfListObject::Chain(Chain {
family: NfFamily::INet,
table: Cow::Borrowed("reaction"),
name: Cow::from(hook.as_str()),
_type: Some(NfChainType::Filter),
hook: Some(hook.into()),
prio: Some(0),
..Default::default()
}));
}
for set in &self.sets {
set.init(&mut batch)?;
}
self.nft.send(batch).await?;
// Launch a task that will destroy the table on shutdown
{
let token = self.shutdown.token();
let nft = self.nft.clone();
tokio::spawn(async move {
token.wait().await;
// Actually send the deletion batch; building it alone does nothing.
let mut batch = Batch::new();
batch.delete(reaction_table());
if let Err(err) = nft.send(batch).await {
eprintln!("ERROR couldn't delete reaction table on shutdown: {err}");
}
});
}
// Launch all actions
while let Some(action) = self.actions.pop() {
tokio::spawn(async move { action.serve().await });
}
Ok(())
}
async fn close(self) -> RemoteResult<()> {
self.shutdown.ask_shutdown();
self.shutdown.wait_all_task_shutdown().await;
Ok(())
}
}
fn reaction_table() -> NfListObject<'static> {
NfListObject::Table(Table {
family: NfFamily::INet,
name: Cow::Borrowed("reaction"),
handle: None,
})
}


@ -0,0 +1,81 @@
use std::{
ffi::{CStr, CString},
thread,
};
use libnftables1_sys::Nftables;
use nftables::batch::Batch;
use tokio::sync::{mpsc, oneshot};
/// A client with a dedicated server thread to libnftables.
/// Calling [`Default::default()`] spawns a new server thread.
/// Cloning just creates a new client to the same server thread.
#[derive(Clone)]
pub struct NftClient {
tx: mpsc::Sender<NftCommand>,
}
impl Default for NftClient {
fn default() -> Self {
let (tx, mut rx) = mpsc::channel(10);
thread::spawn(move || {
let mut conn = Nftables::new();
while let Some(NftCommand { json, ret }) = rx.blocking_recv() {
let (rc, output, error) = conn.run_cmd(json.as_ptr());
let res = match rc {
0 => to_rust_string(output)
.ok_or_else(|| "unknown ok (rc = 0 but no output buffer)".into()),
// a non-zero rc is a failure: the error text belongs in Err, not Ok
_ => Err(to_rust_string(error)
.map(|err| format!("error (rc = {rc}): {err}"))
.unwrap_or_else(|| format!("unknown error (rc = {rc} but no error buffer)"))),
};
let _ = ret.send(res);
}
});
NftClient { tx }
}
}
impl NftClient {
/// Send a batch to nftables.
pub async fn send(&self, batch: Batch<'_>) -> Result<String, String> {
// convert JSON to CString
let mut json = serde_json::to_vec(&batch.to_nftables())
.map_err(|err| format!("couldn't build json to send to nftables: {err}"))?;
json.push(b'\0');
let json = CString::from_vec_with_nul(json)
.map_err(|err| format!("invalid json with null char: {err}"))?;
// Send command
let (tx, rx) = oneshot::channel();
let command = NftCommand { json, ret: tx };
self.tx
.send(command)
.await
.map_err(|err| format!("nftables thread has quit, can't send command: {err}"))?;
// Wait for result
rx.await
.map_err(|_| "nftables thread has quit, no response for command".to_string())?
}
}
struct NftCommand {
json: CString,
ret: oneshot::Sender<Result<String, String>>,
}
fn to_rust_string(c_ptr: *const std::ffi::c_char) -> Option<String> {
if c_ptr.is_null() {
None
} else {
Some(
unsafe { CStr::from_ptr(c_ptr) }
.to_string_lossy()
.into_owned(),
)
}
}
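`NftClient` confines the synchronous, non-`Send` libnftables handle to one dedicated thread; clients send a command plus a reply channel and await the answer. The same pattern can be reduced to a stdlib-only sketch (the "library" here is a stand-in, and `spawn_server`/`send` are hypothetical names, not this crate's API):

```rust
use std::sync::mpsc;
use std::thread;

// A command paired with a channel on which the server thread replies,
// playing the role of NftCommand's oneshot sender above.
struct Command {
    input: String,
    reply: mpsc::Sender<Result<String, String>>,
}

// Spawn the dedicated server thread that owns the (non-thread-safe)
// library handle; return the client side of the command channel.
fn spawn_server() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel::<Command>();
    thread::spawn(move || {
        // Stand-in for the libnftables connection owned by this thread.
        for Command { input, reply } in rx {
            let res = if input.is_empty() {
                Err("empty command".to_string())
            } else {
                Ok(format!("ran: {input}"))
            };
            let _ = reply.send(res);
        }
    });
    tx
}

// Client side: send a command and block until the server thread answers.
fn send(tx: &mpsc::Sender<Command>, input: &str) -> Result<String, String> {
    let (reply, rx) = mpsc::channel();
    tx.send(Command { input: input.into(), reply })
        .map_err(|_| "server thread has quit".to_string())?;
    rx.recv().map_err(|_| "no response".to_string())?
}

fn main() {
    let tx = spawn_server();
    assert_eq!(send(&tx, "list tables"), Ok("ran: list tables".into()));
    assert!(send(&tx, "").is_err());
    println!("ok");
}
```

Cloning the sender, as `NftClient` does with `#[derive(Clone)]`, gives every caller a handle to the same server thread; the thread exits once all senders are dropped and the channel closes.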


@ -0,0 +1,247 @@
use reaction_plugin::{ActionConfig, PluginInfo, StreamConfig, Value};
use serde_json::json;
use crate::Plugin;
#[tokio::test]
async fn conf_stream() {
// No stream is supported by nftables
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "nftables".into(),
config: Value::Null
}],
vec![]
)
.await
.is_err()
);
// Empty config is ok
assert!(Plugin::default().load_config(vec![], vec![]).await.is_ok());
}
#[tokio::test]
async fn conf_action_standalone() {
let p = vec!["name".into(), "ip".into(), "ip2".into()];
let p_noip = vec!["name".into(), "ip2".into()];
for (is_ok, conf, patterns) in [
// minimal set
(true, json!({ "set": "test" }), &p),
// missing set key
(false, json!({}), &p),
(false, json!({ "version": "ipv4" }), &p),
// unknown key
(false, json!({ "set": "test", "unknown": "yes" }), &p),
(false, json!({ "set": "test", "ip_index": 1 }), &p),
(false, json!({ "set": "test", "timeout_u32": 1 }), &p),
// pattern //
(true, json!({ "set": "test" }), &p),
(true, json!({ "set": "test", "pattern": "ip" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p),
(true, json!({ "set": "test", "pattern": "ip2" }), &p_noip),
// unknown pattern "ip"
(false, json!({ "set": "test" }), &p_noip),
(false, json!({ "set": "test", "pattern": "ip" }), &p_noip),
// unknown pattern
(false, json!({ "set": "test", "pattern": "unknown" }), &p),
(false, json!({ "set": "test", "pattern": "uwu" }), &p_noip),
// bad type
(false, json!({ "set": "test", "pattern": 0 }), &p_noip),
(false, json!({ "set": "test", "pattern": true }), &p_noip),
// action //
(true, json!({ "set": "test", "action": "add" }), &p),
(true, json!({ "set": "test", "action": "delete" }), &p),
// unknown action
(false, json!({ "set": "test", "action": "create" }), &p),
(false, json!({ "set": "test", "action": "insert" }), &p),
(false, json!({ "set": "test", "action": "del" }), &p),
(false, json!({ "set": "test", "action": "destroy" }), &p),
// bad type
(false, json!({ "set": "test", "action": true }), &p),
(false, json!({ "set": "test", "action": 1 }), &p),
// ip version //
// ok
(true, json!({ "set": "test", "version": "ipv4" }), &p),
(true, json!({ "set": "test", "version": "ipv6" }), &p),
(true, json!({ "set": "test", "version": "ip" }), &p),
// unknown version
(false, json!({ "set": "test", "version": 4 }), &p),
(false, json!({ "set": "test", "version": 6 }), &p),
(false, json!({ "set": "test", "version": 46 }), &p),
(false, json!({ "set": "test", "version": "5" }), &p),
(false, json!({ "set": "test", "version": "ipv5" }), &p),
(false, json!({ "set": "test", "version": "4" }), &p),
(false, json!({ "set": "test", "version": "6" }), &p),
(false, json!({ "set": "test", "version": "46" }), &p),
// bad type
(false, json!({ "set": "test", "version": true }), &p),
// hooks //
// everything is fine really
(true, json!({ "set": "test", "hooks": [] }), &p),
(
true,
json!({ "set": "test", "hooks": ["input", "forward", "ingress", "prerouting", "output", "postrouting", "egress"] }),
&p,
),
(false, json!({ "set": "test", "hooks": ["INPUT"] }), &p),
(false, json!({ "set": "test", "hooks": ["FORWARD"] }), &p),
(
false,
json!({ "set": "test", "hooks": ["unknown_hook"] }),
&p,
),
// timeout //
(true, json!({ "set": "test", "timeout": "1m" }), &p),
(true, json!({ "set": "test", "timeout": "3 days" }), &p),
// bad
(false, json!({ "set": "test", "timeout": "3 dayz"}), &p),
(false, json!({ "set": "test", "timeout": 12 }), &p),
// target //
// anything is fine too
(true, json!({ "set": "test", "target": "drop" }), &p),
(true, json!({ "set": "test", "target": "accept" }), &p),
(true, json!({ "set": "test", "target": "return" }), &p),
(true, json!({ "set": "test", "target": "continue" }), &p),
// bad
(false, json!({ "set": "test", "target": "custom" }), &p),
(false, json!({ "set": "test", "target": "DROP" }), &p),
(false, json!({ "set": "test", "target": 11 }), &p),
(false, json!({ "set": "test", "target": ["DROP"] }), &p),
] {
let res = Plugin::default()
.load_config(
vec![],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "nftables".into(),
config: conf.clone().into(),
patterns: patterns.clone(),
}],
)
.await;
assert!(
res.is_ok() == is_ok,
"conf: {:?}, must be ok: {is_ok}, result: {:?}",
conf,
// empty Result::Ok because ActionImpl is not Debug
res.map(|_| ())
);
}
}
// TODO
#[tokio::test]
async fn conf_action_merge() {
let mut plugin = Plugin::default();
let set1 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action1".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"target": "drop",
"hooks": ["input"],
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set2 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"target": "drop",
"version": "ip",
"action": "add",
})
.into(),
patterns: vec!["ip".into()],
};
let set3 = ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action2".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"action": "delete",
})
.into(),
patterns: vec!["ip".into()],
};
let res = plugin
.load_config(
vec![],
vec![
// First set
set1.clone(),
// Same set, adding options, no conflict
set2.clone(),
// Same set, no new options, no conflict
set3.clone(),
// Unrelated set, so no conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "nftables".into(),
config: json!({
"set": "test2",
"target": "return",
"version": "ipv6",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_ok(), "res: {:?}", res.map(|_| ()));
// Another set with conflict is not ok
let res = plugin
.load_config(
vec![],
vec![
// First set
set1,
// Same set, adding options, no conflict
set2,
// Same set, no new options, no conflict
set3,
// Another set with conflict
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action3".into(),
action_type: "nftables".into(),
config: json!({
"set": "test",
"target": "target2",
"action": "del",
})
.into(),
patterns: vec!["ip".into()],
},
],
)
.await;
assert!(res.is_err(), "res: {:?}", res.map(|_| ()));
}


@ -0,0 +1,11 @@
[package]
name = "reaction-plugin-virtual"
version = "1.0.0"
edition = "2024"
[dependencies]
tokio = { workspace = true, features = ["rt-multi-thread"] }
remoc.workspace = true
reaction-plugin.path = "../reaction-plugin"
serde.workspace = true
serde_json.workspace = true


@ -0,0 +1,179 @@
use std::collections::{BTreeMap, BTreeSet};
use reaction_plugin::{
ActionConfig, ActionImpl, Exec, Hello, Line, Manifest, PluginInfo, RemoteResult, StreamConfig,
StreamImpl, Value, line::PatternLine,
};
use remoc::{rch::mpsc, rtc};
use serde::{Deserialize, Serialize};
#[cfg(test)]
mod tests;
#[tokio::main]
async fn main() {
let plugin = Plugin::default();
reaction_plugin::main_loop(plugin).await;
}
#[derive(Default)]
struct Plugin {}
impl PluginInfo for Plugin {
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError> {
Ok(Manifest {
hello: Hello::new(),
streams: BTreeSet::from(["virtual".into()]),
actions: BTreeSet::from(["virtual".into()]),
})
}
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)> {
let mut ret_streams = Vec::with_capacity(streams.len());
let mut ret_actions = Vec::with_capacity(actions.len());
let mut local_streams = BTreeMap::new();
for StreamConfig {
stream_name,
stream_type,
config,
} in streams
{
if stream_type != "virtual" {
return Err("This plugin can't handle other stream types than virtual".into());
}
let (virtual_stream, receiver) = VirtualStream::new(config)?;
if local_streams.insert(stream_name, virtual_stream).is_some() {
return Err("this virtual stream has already been initialized".into());
}
ret_streams.push(StreamImpl {
stream: receiver,
standalone: false,
});
}
for ActionConfig {
stream_name,
filter_name,
action_name,
action_type,
config,
patterns,
} in actions
{
if action_type != "virtual" {
return Err("This plugin can't handle other action types than virtual".into());
}
let (mut virtual_action, tx) = VirtualAction::new(
stream_name,
filter_name,
action_name,
config,
patterns,
&local_streams,
)?;
tokio::spawn(async move { virtual_action.serve().await });
ret_actions.push(ActionImpl { tx });
}
Ok((ret_streams, ret_actions))
}
async fn start(&mut self) -> RemoteResult<()> {
Ok(())
}
async fn close(self) -> RemoteResult<()> {
Ok(())
}
}
#[derive(Clone)]
struct VirtualStream {
tx: mpsc::Sender<Line>,
}
impl VirtualStream {
fn new(config: Value) -> Result<(Self, mpsc::Receiver<Line>), String> {
const CONFIG_ERROR: &str = "streams of type virtual take no options";
match config {
Value::Null => (),
Value::Object(map) => {
if !map.is_empty() {
return Err(CONFIG_ERROR.into());
}
}
_ => return Err(CONFIG_ERROR.into()),
}
let (tx, rx) = mpsc::channel(1);
Ok((Self { tx }, rx))
}
}
#[derive(Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
struct ActionOptions {
/// The line to send to the corresponding virtual stream, example: "ban \<ip\>"
send: String,
/// The name of the corresponding virtual stream, example: "my_stream"
to: String,
}
struct VirtualAction {
rx: mpsc::Receiver<Exec>,
send: PatternLine,
to: VirtualStream,
}
impl VirtualAction {
fn new(
stream_name: String,
filter_name: String,
action_name: String,
config: Value,
patterns: Vec<String>,
streams: &BTreeMap<String, VirtualStream>,
) -> Result<(Self, mpsc::Sender<Exec>), String> {
let options: ActionOptions = serde_json::from_value(config.into()).map_err(|err| {
format!("invalid options for action {stream_name}.{filter_name}.{action_name}: {err}")
})?;
let send = PatternLine::new(options.send, patterns);
let stream = streams.get(&options.to).ok_or_else(|| {
format!(
"action {}.{}.{}: send \"{}\" matches no stream name",
stream_name, filter_name, action_name, options.to
)
})?;
let (tx, rx) = mpsc::channel(1);
Ok((
Self {
rx,
send,
to: stream.clone(),
},
tx,
))
}
async fn serve(&mut self) {
while let Ok(Some(exec)) = self.rx.recv().await {
let line = self.send.line(exec.match_);
self.to.tx.send((line, exec.time)).await.unwrap();
}
}
}
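`PatternLine::new` and `PatternLine::line` are not shown in this diff; judging from the tests (a template like `"message <test>"` with pattern `test` and match `test1` yields `"message test1"`), the substitution presumably works like the sketch below. `substitute` is a hypothetical stand-in, not the crate's implementation:

```rust
// Hypothetical sketch of the substitution PatternLine presumably performs:
// each "<name>" placeholder in the template is replaced by the match value
// at the same index as `name` in the patterns list.
fn substitute(template: &str, patterns: &[&str], matches: &[&str]) -> String {
    let mut line = template.to_string();
    for (name, value) in patterns.iter().zip(matches) {
        line = line.replace(&format!("<{name}>"), value);
    }
    line
}

fn main() {
    let out = substitute(
        "ban <ip> for <duration>",
        &["ip", "duration"],
        &["192.0.2.1", "3h"],
    );
    assert_eq!(out, "ban 192.0.2.1 for 3h");
    println!("ok");
}
```

A real implementation would likely precompute the placeholder positions in `new` (which is why `PatternLine::new` takes the patterns up front) rather than re-scan the template on every `Exec`.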


@ -0,0 +1,322 @@
use std::time::{SystemTime, UNIX_EPOCH};
use reaction_plugin::{ActionConfig, Exec, PluginInfo, StreamConfig, Value};
use serde_json::json;
use crate::Plugin;
#[tokio::test]
async fn conf_stream() {
// Invalid type
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtu".into(),
config: Value::Null
}],
vec![]
)
.await
.is_err()
);
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null
}],
vec![]
)
.await
.is_ok()
);
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: json!({}).into(),
}],
vec![]
)
.await
.is_ok()
);
// Invalid conf: must be empty
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: json!({"key": "value" }).into(),
}],
vec![]
)
.await
.is_err()
);
}
#[tokio::test]
async fn conf_action() {
let streams = vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}];
let valid_conf = json!({ "send": "message", "to": "stream" });
let missing_send_conf = json!({ "to": "stream" });
let missing_to_conf = json!({ "send": "stream" });
let extra_attr_conf = json!({ "send": "message", "send2": "message", "to": "stream" });
let patterns = Vec::default();
// Invalid type
assert!(
Plugin::default()
.load_config(
streams.clone(),
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtu".into(),
config: Value::Null,
patterns: patterns.clone(),
}]
)
.await
.is_err()
);
assert!(
Plugin::default()
.load_config(
streams.clone(),
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: valid_conf.into(),
patterns: patterns.clone()
}]
)
.await
.is_ok()
);
for conf in [missing_send_conf, missing_to_conf, extra_attr_conf] {
assert!(
Plugin::default()
.load_config(
streams.clone(),
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: conf.clone().into(),
patterns: patterns.clone()
}]
)
.await
.is_err(),
"conf: {:?}",
conf
);
}
}
#[tokio::test]
async fn conf_send() {
// Valid to: option
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "message", "to": "stream" }).into(),
patterns: vec![],
}]
)
.await
.is_ok(),
);
// Invalid to: option
assert!(
Plugin::default()
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "message", "to": "stream1" }).into(),
patterns: vec![],
}]
)
.await
.is_err(),
);
}
// Let's allow empty streams for now.
// I guess it can be useful to have manual only actions.
//
// #[tokio::test]
// async fn conf_empty_stream() {
// assert!(
// Plugin::default()
// .load_config(
// vec![StreamConfig {
// stream_name: "stream".into(),
// stream_type: "virtual".into(),
// config: Value::Null,
// }],
// vec![],
// )
// .await
// .is_err(),
// );
// }
#[tokio::test]
async fn run_simple() {
let mut plugin = Plugin::default();
let (mut streams, mut actions) = plugin
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "message <test>", "to": "stream" }).into(),
patterns: vec!["test".into()],
}],
)
.await
.unwrap();
let mut stream = streams.pop().unwrap();
let action = actions.pop().unwrap();
assert!(!stream.standalone);
for m in ["test1", "test2", "test3", " a a a aa a a"] {
let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
assert!(
action
.tx
.send(Exec {
match_: vec![m.into()],
time,
})
.await
.is_ok()
);
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
(format!("message {m}"), time),
);
}
}
#[tokio::test]
async fn run_two_actions() {
let mut plugin = Plugin::default();
let (mut streams, mut actions) = plugin
.load_config(
vec![StreamConfig {
stream_name: "stream".into(),
stream_type: "virtual".into(),
config: Value::Null,
}],
vec![
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "send <a>", "to": "stream" }).into(),
patterns: vec!["a".into(), "b".into()],
},
ActionConfig {
stream_name: "stream".into(),
filter_name: "filter".into(),
action_name: "action".into(),
action_type: "virtual".into(),
config: json!({ "send": "<b> send", "to": "stream" }).into(),
patterns: vec!["a".into(), "b".into()],
},
],
)
.await
.unwrap();
let mut stream = streams.pop().unwrap();
assert!(!stream.standalone);
let action2 = actions.pop().unwrap();
let action1 = actions.pop().unwrap();
let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
assert!(
action1
.tx
.send(Exec {
match_: vec!["aa".into(), "bb".into()],
time,
})
.await
.is_ok(),
);
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
("send aa".into(), time),
);
assert!(
action2
.tx
.send(Exec {
match_: vec!["aa".into(), "bb".into()],
time,
})
.await
.is_ok(),
);
assert_eq!(
stream.stream.recv().await.unwrap().unwrap(),
("bb send".into(), time),
);
}


@ -0,0 +1,20 @@
[package]
name = "reaction-plugin"
version = "1.0.0"
edition = "2024"
authors = ["ppom <reaction@ppom.me>"]
license = "AGPL-3.0"
homepage = "https://reaction.ppom.me"
repository = "https://framagit.org/ppom/reaction"
keywords = ["security", "sysadmin", "logs", "monitoring", "plugin"]
categories = ["security"]
description = "Plugin interface for reaction, a daemon that scans logs and takes action (alternative to fail2ban)"
[dependencies]
remoc.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tokio.features = ["io-std", "signal"]
tokio-util.workspace = true
tokio-util.features = ["rt"]


@ -0,0 +1,599 @@
//! This crate defines the API between reaction's core and plugins.
//!
//! Plugins must be written in Rust, for now.
//!
//! This documentation assumes the reader has some knowledge of Rust.
//! However, if you find that something is unclear, don't hesitate to
//! [ask for help](https://framagit.org/ppom/reaction/#help), even if you're new to Rust.
//!
//! To implement a plugin, one has to provide an implementation of [`PluginInfo`], that provides
//! the entrypoint for a plugin.
//! It allows defining `0` to `n` custom stream and action types.
//!
//! ## Note on reaction-plugin API stability
//!
//! This is the v1 of reaction's plugin interface.
//! It's quite efficient and complete, but it has the big drawback of being Rust-only and [`tokio`]-only.
//!
//! In the future, I'd like to define a language-agnostic interface, which will be a major breaking change in the API.
//! However, I'll try my best to reduce the necessary code changes for plugins that use this v1.
//!
//! ## Naming & calling conventions
//!
//! Your plugin should be named `reaction-plugin-$NAME`, e.g. `reaction-plugin-postgresql`.
//! It will be invoked with one positional argument "serve".
//! ```bash
//! reaction-plugin-$NAME serve
//! ```
//! This can be useful if you want to provide CLI functionality to your users:
//! the argument lets you distinguish between a human user and reaction.
//!
//! ### State directory
//!
//! It will be executed in its own directory, in which it should have write access.
//! The directory is `$reaction_state_directory/plugin_data/$NAME`.
//! reaction's [state_directory](https://reaction.ppom.me/reference.html#state_directory)
//! defaults to its working directory, which is `/var/lib/reaction` in most setups.
//!
//! So your plugin directory should most often be `/var/lib/reaction/plugin_data/$NAME`,
//! but the plugin shouldn't rely on that; it should use the current working directory instead.
//!
//! ## Communication
//!
//! Communication between the plugin and reaction is based on [`remoc`], which multiplexes channels and remote object/function/trait
//! calls over a single transport channel.
//! The transport's read and write channels are stdin and stdout, so you shouldn't use them for anything else.
//!
//! [`remoc`] builds upon [`tokio`], so you'll need to use tokio too.
//!
//! ### Errors
//!
//! Errors during:
//! - config loading in [`PluginInfo::load_config`]
//! - startup in [`PluginInfo::start`]
//!
//! should be returned to reaction by the function's return value, permitting reaction to abort startup.
//!
//! During normal runtime, after the plugin has loaded its config and started, and before reaction is quitting, there is no *rusty* way to send errors to reaction.
//! Instead, errors can be printed to stderr.
//! They'll be captured line by line and re-printed by reaction, with the plugin name prepended.
//!
//! A line can start with `DEBUG `, `INFO `, `WARN `, `ERROR `.
//! If it starts with none of the above, the line is assumed to be an error.
//!
//! Example:
//! Those lines:
//! ```log
//! WARN This is an official warning from the plugin
//! Freeeee errrooooorrr
//! ```
//! will become:
//! ```log
//! WARN plugin test: This is an official warning from the plugin
//! ERROR plugin test: Freeeee errrooooorrr
//! ```
//!
//! Plugins should not exit when there is an error: reaction quits only when told to do so,
//! or if all its streams exit, and won't retry starting a failing plugin or stream.
//! Please only exit if you're in a 100% failing state.
//! It's considered better to continue operating in a degraded state than exiting.
//!
//! ## Getting started
//!
//! If you don't have Rust installed already, follow the [*Getting Started* documentation](https://rust-lang.org/learn/get-started/)
//! to get the Rust build tools and learn about editor support.
//!
//! Then create a new repository with cargo:
//!
//! ```bash
//! cargo new reaction-plugin-$NAME
//! cd reaction-plugin-$NAME
//! ```
//!
//! Add required dependencies:
//!
//! ```bash
//! cargo add reaction-plugin tokio
//! ```
//!
//! Replace `src/main.rs` with those contents:
//!
//! ```ignore
//! use reaction_plugin::PluginInfo;
//!
//! #[tokio::main]
//! async fn main() {
//! let plugin = MyPlugin::default();
//! reaction_plugin::main_loop(plugin).await;
//! }
//!
//! #[derive(Default)]
//! struct MyPlugin {}
//!
//! impl PluginInfo for MyPlugin {
//! // ...
//! }
//! ```
//!
//! Your IDE should now propose to implement missing members of the [`PluginInfo`] trait.
//! Your journey starts!
//!
//! ## Examples
//!
//! Core plugins can be found here: <https://framagit.org/ppom/reaction/-/tree/main/plugins>.
//!
//! - The "virtual" plugin is the simplest and can serve as a good complete example that links custom stream types and custom action types.
//! - The "ipset" plugin is a good example of an action-only plugin.
use std::{
collections::{BTreeMap, BTreeSet},
env::args,
error::Error,
fmt::Display,
process::exit,
time::Duration,
};
use remoc::{
Connect, rch,
rtc::{self, Server},
};
use serde::{Deserialize, Serialize};
use serde_json::{Number, Value as JValue};
use tokio::io::{stdin, stdout};
pub mod line;
pub mod shutdown;
pub mod time;
/// The only trait that **must** be implemented by a plugin.
/// It provides lists of stream, filter and action types implemented by a dynamic plugin.
#[rtc::remote]
pub trait PluginInfo {
/// Return the manifest of the plugin.
/// This should not be dynamic, and should always return the same manifest.
///
/// Example implementation:
/// ```
/// Ok(Manifest {
/// hello: Hello::new(),
/// streams: BTreeSet::from(["mystreamtype".into()]),
/// actions: BTreeSet::from(["myactiontype".into()]),
/// })
/// ```
///
/// First function called.
async fn manifest(&mut self) -> Result<Manifest, rtc::CallError>;
/// Load all plugin stream and action configurations.
/// Must error if config is invalid.
///
/// The plugin should not start running mutable commands here:
/// It should be ok to quit without cleanup for now.
///
/// Each [`StreamConfig`] from the `streams` arg should result in a corresponding [`StreamImpl`] returned, in the same order.
/// Each [`ActionConfig`] from the `actions` arg should result in a corresponding [`ActionImpl`] returned, in the same order.
///
/// Function called after [`PluginInfo::manifest`].
async fn load_config(
&mut self,
streams: Vec<StreamConfig>,
actions: Vec<ActionConfig>,
) -> RemoteResult<(Vec<StreamImpl>, Vec<ActionImpl>)>;
/// Notify the plugin that setup is finished, providing a last opportunity to report an error that will make reaction exit.
/// All initialization (opening remote connections, starting streams, etc) should happen here.
///
/// Function called after [`PluginInfo::load_config`].
async fn start(&mut self) -> RemoteResult<()>;
/// Notify the plugin that reaction is quitting and that the plugin should quit too.
/// A few seconds later, the plugin will receive SIGTERM,
/// then a few seconds after that, SIGKILL.
///
/// Function called after [`PluginInfo::start`], when reaction is quitting.
async fn close(mut self) -> RemoteResult<()>;
}
/// The config for one Stream of a type advertised by this plugin.
///
/// For example this user config:
/// ```jsonnet
/// {
/// streams: {
/// mystream: {
/// type: "mystreamtype",
/// options: {
/// key: "value",
/// num: 3,
/// },
/// // filters: ...
/// },
/// },
/// }
/// ```
///
/// would result in the following `StreamConfig`:
///
/// ```
/// StreamConfig {
/// stream_name: "mystream",
/// stream_type: "mystreamtype",
/// config: Value::Object(BTreeMap::from([
/// ("key", Value::String("value")),
/// ("num", Value::Integer(3)),
/// ])),
/// }
/// ```
///
/// Don't hesitate to take advantage of [`serde_json::from_value`], to deserialize the [`Value`] into a Rust struct:
///
/// ```
/// #[derive(Deserialize)]
/// struct MyStreamOptions {
/// key: String,
/// num: i64,
/// }
///
/// fn validate_config(stream_config: Value) -> Result<MyStreamOptions, serde_json::Error> {
/// serde_json::from_value(stream_config.into())
/// }
/// ```
#[derive(Serialize, Deserialize, Clone)]
pub struct StreamConfig {
pub stream_name: String,
pub stream_type: String,
pub config: Value,
}
/// The config for one Action of a type advertised by this plugin.
///
/// For example this user config:
/// ```jsonnet
/// {
/// streams: {
/// mystream: {
/// // ...
/// filters: {
/// myfilter: {
/// // ...
/// actions: {
/// myaction: {
/// type: "myactiontype",
/// options: {
/// boolean: true,
/// array: ["item"],
/// },
/// },
/// },
/// },
/// },
/// },
/// },
/// }
/// ```
///
/// would result in the following `ActionConfig`:
///
/// ```rust
/// ActionConfig {
/// action_name: "myaction",
/// action_type: "myactiontype",
/// config: Value::Object(BTreeMap::from([
/// ("boolean", Value::Boolean(true)),
/// ("array", Value::Array([Value::String("item")])),
/// ])),
/// }
/// ```
///
/// Don't hesitate to use [`serde_json::from_value`] to deserialize the [`Value`] into a Rust struct:
///
/// ```rust
/// #[derive(Deserialize)]
/// struct MyActionOptions {
/// boolean: bool,
/// array: Vec<String>,
/// }
///
/// fn validate_config(action_config: Value) -> Result<MyActionOptions, serde_json::Error> {
/// serde_json::from_value(action_config.into())
/// }
/// ```
#[derive(Serialize, Deserialize, Clone)]
pub struct ActionConfig {
pub stream_name: String,
pub filter_name: String,
pub action_name: String,
pub action_type: String,
pub config: Value,
pub patterns: Vec<String>,
}
/// Mandatory announcement of a plugin's protocol version, stream and action types.
#[derive(Serialize, Deserialize)]
pub struct Manifest {
/// Protocol version.
/// Just use the [`Hello::new`] constructor that uses this crate's current version.
pub hello: Hello,
/// Stream types that should be made available to reaction users
///
/// ```jsonnet
/// {
/// streams: {
/// my_stream: {
/// type: "..."
/// # ↑ all those exposed types
/// }
/// }
/// }
/// ```
pub streams: BTreeSet<String>,
/// Action types that should be made available to reaction users
///
/// ```jsonnet
/// {
/// streams: {
/// mystream: {
/// filters: {
/// myfilter: {
/// actions: {
/// myaction: {
/// type: "myactiontype",
/// # ↑ all those exposed types
/// },
/// },
/// },
/// },
/// },
/// },
/// }
/// ```
pub actions: BTreeSet<String>,
}
#[derive(Default, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct Hello {
/// Major version of the protocol
/// Increment means breaking change
pub version_major: u32,
/// Minor version of the protocol
/// Increment means reaction core can handle older version plugins
pub version_minor: u32,
}
impl Hello {
/// Constructor that fills a [`Hello`] struct with [`crate`]'s version.
/// You should use this in your plugin [`Manifest`].
pub fn new() -> Hello {
Hello {
version_major: env!("CARGO_PKG_VERSION_MAJOR").parse().unwrap(),
version_minor: env!("CARGO_PKG_VERSION_MINOR").parse().unwrap(),
}
}
/// Used by the reaction daemon. Checks compatibility between two versions.
/// Major versions must be the same between the daemon and the plugin.
/// The daemon's minor version must be greater than or equal to the plugin's minor version.
pub fn is_compatible(server: &Hello, plugin: &Hello) -> std::result::Result<(), String> {
if server.version_major == plugin.version_major
&& server.version_minor >= plugin.version_minor
{
Ok(())
} else if plugin.version_major > server.version_major
|| (plugin.version_major == server.version_major
&& plugin.version_minor > server.version_minor)
{
Err("consider upgrading reaction".into())
} else {
Err("consider upgrading the plugin".into())
}
}
}
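The compatibility rule in `is_compatible` can be illustrated with a minimal, self-contained sketch (the `V` struct here is a hypothetical reduction of `Hello` to its two version fields, not the crate's actual type):

```rust
// Hypothetical reduction of `Hello`: just the two version numbers.
struct V {
    major: u32,
    minor: u32,
}

// Mirrors the rule above: majors must match, and the daemon's minor
// version must be at least the plugin's minor version.
fn compatible(server: &V, plugin: &V) -> Result<(), String> {
    if server.major == plugin.major && server.minor >= plugin.minor {
        Ok(())
    } else if plugin.major > server.major
        || (plugin.major == server.major && plugin.minor > server.minor)
    {
        Err("consider upgrading reaction".into())
    } else {
        Err("consider upgrading the plugin".into())
    }
}

fn main() {
    // Same major, daemon minor ahead of the plugin: compatible.
    assert!(compatible(&V { major: 2, minor: 3 }, &V { major: 2, minor: 1 }).is_ok());
    // Plugin is ahead of the daemon: the daemon needs an upgrade.
    assert_eq!(
        compatible(&V { major: 2, minor: 0 }, &V { major: 2, minor: 1 }),
        Err("consider upgrading reaction".to_string())
    );
    // Plugin major lags behind: the plugin needs an upgrade.
    assert_eq!(
        compatible(&V { major: 2, minor: 0 }, &V { major: 1, minor: 5 }),
        Err("consider upgrading the plugin".to_string())
    );
}
```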
/// A clone of [`serde_json::Value`].
/// Implements From & Into [`serde_json::Value`].
#[derive(Serialize, Deserialize, Clone)]
pub enum Value {
Null,
Bool(bool),
Integer(i64),
Float(f64),
String(String),
Array(Vec<Value>),
Object(BTreeMap<String, Value>),
}
impl From<JValue> for Value {
fn from(value: serde_json::Value) -> Self {
match value {
JValue::Null => Value::Null,
JValue::Bool(b) => Value::Bool(b),
JValue::Number(number) => {
if let Some(number) = number.as_i64() {
Value::Integer(number)
} else if let Some(number) = number.as_f64() {
Value::Float(number)
} else {
Value::Null
}
}
JValue::String(s) => Value::String(s.into()),
JValue::Array(v) => Value::Array(v.into_iter().map(|e| e.into()).collect()),
JValue::Object(m) => Value::Object(m.into_iter().map(|(k, v)| (k, v.into())).collect()),
}
}
}
impl From<Value> for JValue {
fn from(value: Value) -> JValue {
match value {
Value::Null => JValue::Null,
Value::Bool(v) => JValue::Bool(v),
Value::Integer(v) => JValue::Number(v.into()),
// Number::from_f64 returns None for NaN and infinities, which have
// no JSON representation; unwrap panics in that case.
Value::Float(v) => JValue::Number(Number::from_f64(v).unwrap()),
Value::String(v) => JValue::String(v),
Value::Array(v) => JValue::Array(v.into_iter().map(|e| e.into()).collect()),
Value::Object(m) => JValue::Object(m.into_iter().map(|(k, v)| (k, v.into())).collect()),
}
}
}
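A note on the number branch in `From<JValue>` above: `as_i64` is tried first, so integral JSON numbers become `Value::Integer`; only numbers representable solely as `f64` (e.g. `1.5`, or a `u64` above `i64::MAX`) become `Value::Float`, and anything else falls back to `Null`. A standalone sketch of that branch order (the two `Option` arguments are hypothetical stand-ins for `serde_json::Number`'s `as_i64`/`as_f64` accessors):

```rust
#[derive(Debug, PartialEq)]
enum Num {
    Integer(i64),
    Float(f64),
    Null,
}

// Mirrors the branch order above: integer first, then float, then Null.
fn convert(as_i64: Option<i64>, as_f64: Option<f64>) -> Num {
    if let Some(n) = as_i64 {
        Num::Integer(n)
    } else if let Some(n) = as_f64 {
        Num::Float(n)
    } else {
        Num::Null
    }
}

fn main() {
    // An integral number: the i64 accessor succeeds, so Integer wins.
    assert_eq!(convert(Some(3), Some(3.0)), Num::Integer(3));
    // A fractional number is only representable as f64.
    assert_eq!(convert(None, Some(1.5)), Num::Float(1.5));
    // Neither accessor succeeds: fall back to Null.
    assert_eq!(convert(None, None), Num::Null);
}
```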
/// Represents a Stream handled by a plugin on reaction core's side.
///
/// During [`PluginInfo::load_config`], the plugin should create a [`remoc::rch::mpsc::channel`] of [`Line`].
/// It will keep the sending side for itself and put the receiving side in a [`StreamImpl`].
///
/// The plugin should start sending [`Line`]s in the channel only after [`PluginInfo::start`] has been called by reaction core.
#[derive(Debug, Serialize, Deserialize)]
pub struct StreamImpl {
pub stream: rch::mpsc::Receiver<Line>,
/// Whether this stream works standalone, or needs other streams or actions to feed it.
/// Defaults to `true`.
/// When `false`, reaction will exit if this stream is the last one left running.
#[serde(default = "_true")]
pub standalone: bool,
}
fn _true() -> bool {
true
}
/// Messages passed from the [`StreamImpl`] of a plugin to reaction core
pub type Line = (String, Duration);
// // Filters
// // For now, plugins can't handle custom filter implementations.
// #[derive(Serialize, Deserialize)]
// pub struct FilterImpl {
// pub stream: rch::lr::Sender<Exec>,
// }
// #[derive(Serialize, Deserialize)]
// pub struct Match {
// pub match_: String,
// pub result: rch::oneshot::Sender<bool>,
// }
/// Represents an Action handled by a plugin on reaction core's side.
///
/// During [`PluginInfo::load_config`], the plugin should create a [`remoc::rch::mpsc::channel`] of [`Exec`].
/// It will keep the receiving side for itself and put the sending side in an [`ActionImpl`].
///
/// The plugin will start receiving [`Exec`]s in the channel from reaction only after [`PluginInfo::start`] has been called by reaction core.
#[derive(Clone, Serialize, Deserialize)]
pub struct ActionImpl {
pub tx: rch::mpsc::Sender<Exec>,
}
/// A [trigger](https://reaction.ppom.me/reference.html#trigger) of the Action, sent by reaction core to the plugin.
///
/// The plugin should perform the configured action for each received [`Exec`].
///
/// Any error during its execution should be logged to stderr; see [`crate#Errors`] for error handling recommendations.
#[derive(Serialize, Deserialize)]
pub struct Exec {
pub match_: Vec<String>,
pub time: Duration,
}
/// The main loop for a plugin.
///
/// Bootstraps the communication with reaction core on the process' stdin and stdout,
/// then holds the connection and maintains the plugin in a server state.
///
/// Your main function should only create a struct that implements [`PluginInfo`]
/// and then call [`main_loop`]:
/// ```ignore
/// #[tokio::main]
/// async fn main() {
/// let plugin = MyPlugin::default();
/// reaction_plugin::main_loop(plugin).await;
/// }
/// ```
pub async fn main_loop<T: PluginInfo + Send + Sync + 'static>(plugin_info: T) {
// First check that we're called by reaction
let mut args = args();
// skip 0th argument
let _skip = args.next();
if args.next().is_none_or(|arg| arg != "serve") {
eprintln!("This plugin is not meant to be called as-is.");
eprintln!(
"reaction daemon starts plugins itself and communicates with them on stdin, stdout and stderr."
);
eprintln!("See the doc on plugin configuration: https://reaction.ppom.me/plugins/");
exit(1);
} else {
let (conn, mut tx, _rx): (
_,
remoc::rch::base::Sender<PluginInfoClient>,
remoc::rch::base::Receiver<()>,
) = Connect::io(remoc::Cfg::default(), stdin(), stdout())
.await
.unwrap();
let (server, client) = PluginInfoServer::new(plugin_info, 1);
let (res1, (_, res2), res3) = tokio::join!(tx.send(client), server.serve(), conn);
let mut exit_code = 0;
if let Err(err) = res1 {
eprintln!("ERROR could not send plugin info to reaction: {err}");
exit_code = 1;
}
if let Err(err) = res2 {
eprintln!("ERROR could not launch plugin service for reaction: {err}");
exit_code = 2;
}
if let Err(err) = res3 {
eprintln!("ERROR connection error with reaction: {err}");
exit_code = 3;
}
exit(exit_code);
}
}
// Errors
pub type RemoteResult<T> = Result<T, RemoteError>;
/// reaction-plugin's Error type.
#[derive(Debug, Serialize, Deserialize)]
pub enum RemoteError {
/// A connection error originating from [`remoc`], the crate used for communication on the plugin's `stdin`/`stdout`.
///
/// You should not instantiate this type of error yourself.
Remoc(rtc::CallError),
/// A free-form String for application-specific errors.
///
/// Instantiate this type of error yourself only for errors encountered at startup and shutdown.
///
/// Any other error during the plugin's runtime should be logged to stderr; see [`crate#Errors`] for error handling recommendations.
Plugin(String),
}
impl Display for RemoteError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
RemoteError::Remoc(call_error) => write!(f, "communication error: {call_error}"),
RemoteError::Plugin(err) => write!(f, "{err}"),
}
}
}
impl Error for RemoteError {}
impl From<String> for RemoteError {
fn from(value: String) -> Self {
Self::Plugin(value)
}
}
impl From<&str> for RemoteError {
fn from(value: &str) -> Self {
Self::Plugin(value.into())
}
}
impl From<rtc::CallError> for RemoteError {
fn from(value: rtc::CallError) -> Self {
Self::Remoc(value)
}
}


@ -0,0 +1,237 @@
//! Helper module for using templated lines (e.g. `bad password for <ip>`), like in a Stream's or Action's `cmd`.
//!
//! Corresponding reaction core settings:
//! - [Stream's `cmd`](https://reaction.ppom.me/reference.html#cmd)
//! - [Action's `cmd`](https://reaction.ppom.me/reference.html#cmd-1)
//!
#[derive(Debug, PartialEq, Eq)]
enum SendItem {
Index(usize),
Str(String),
}
impl SendItem {
fn min_size(&self) -> usize {
match self {
Self::Index(_) => 0,
Self::Str(s) => s.len(),
}
}
}
/// Helper struct that turns a template line containing patterns into an instantiated line, given a match.
///
/// Useful when you let the user reconstruct lines from an action, like in reaction's native actions and in the virtual plugin:
/// ```yaml
/// actions:
/// native:
/// cmd: ["iptables", "...", "<ip>"]
///
/// virtual:
/// type: virtual
/// options:
/// send: "<ip>: bad password on user <user>"
/// to: "my_virtual_stream"
/// ```
///
/// Usage example:
/// ```
/// # use reaction_plugin::line::PatternLine;
/// #
/// let template = "<ip>: bad password on user <user>".to_string();
/// let patterns = vec!["ip".to_string(), "user".to_string()];
/// let pattern_line = PatternLine::new(template, patterns);
///
/// assert_eq!(
/// pattern_line.line(vec!["1.2.3.4".to_string(), "root".to_string()]),
/// "1.2.3.4: bad password on user root".to_string(),
/// );
/// ```
///
/// You can find full examples in those plugins:
/// `reaction-plugin-virtual`,
/// `reaction-plugin-cluster`.
///
#[derive(Debug)]
pub struct PatternLine {
line: Vec<SendItem>,
min_size: usize,
}
impl PatternLine {
/// Construct [`PatternLine`] from a template line and the list of patterns of the underlying [Filter](https://reaction.ppom.me/reference.html#filter).
///
/// This list of patterns comes from [`super::ActionConfig`].
pub fn new(template: String, patterns: Vec<String>) -> Self {
let line = Self::_from(patterns, Vec::from([SendItem::Str(template)]));
Self {
min_size: line.iter().map(SendItem::min_size).sum(),
line,
}
}
fn _from(mut patterns: Vec<String>, acc: Vec<SendItem>) -> Vec<SendItem> {
match patterns.pop() {
None => acc,
Some(pattern) => {
let enclosed_pattern = format!("<{pattern}>");
let acc = acc
.into_iter()
.flat_map(|item| match &item {
SendItem::Index(_) => vec![item],
SendItem::Str(str) => match str.find(&enclosed_pattern) {
Some(i) => {
let pattern_index = patterns.len();
let mut ret = vec![];
let (left, mid) = str.split_at(i);
if !left.is_empty() {
ret.push(SendItem::Str(left.into()))
}
ret.push(SendItem::Index(pattern_index));
if mid.len() > enclosed_pattern.len() {
let (_, right) = mid.split_at(enclosed_pattern.len());
ret.push(SendItem::Str(right.into()))
}
ret
}
None => vec![item],
},
})
.collect();
Self::_from(patterns, acc)
}
}
}
pub fn line(&self, match_: Vec<String>) -> String {
let mut res = String::with_capacity(self.min_size);
for item in &self.line {
match item {
SendItem::Index(i) => {
if let Some(element) = match_.get(*i) {
res.push_str(element);
}
}
SendItem::Str(str) => res.push_str(str),
}
}
res
}
}
#[cfg(test)]
mod tests {
use crate::line::{PatternLine, SendItem};
#[test]
fn line_0_pattern() {
let msg = "my message".to_string();
let line = PatternLine::new(msg.clone(), vec![]);
assert_eq!(line.line, vec![SendItem::Str(msg.clone())]);
assert_eq!(line.min_size, msg.len());
assert_eq!(line.line(vec![]), msg.clone());
}
#[test]
fn line_1_pattern() {
let patterns = vec![
"ignored".into(),
"oh".into(),
"ignored".into(),
"my".into(),
"test".into(),
];
let matches = vec!["yay", "oh", "my", "test", "<oh>", "<my>", "<test>"];
let tests = [
(
"<oh> my test",
1,
vec![SendItem::Index(1), SendItem::Str(" my test".into())],
vec![
("yay", "yay my test"),
("oh", "oh my test"),
("my", "my my test"),
("test", "test my test"),
("<oh>", "<oh> my test"),
("<my>", "<my> my test"),
("<test>", "<test> my test"),
],
),
(
"oh <my> test",
3,
vec![
SendItem::Str("oh ".into()),
SendItem::Index(3),
SendItem::Str(" test".into()),
],
vec![
("yay", "oh yay test"),
("oh", "oh oh test"),
("my", "oh my test"),
("test", "oh test test"),
("<oh>", "oh <oh> test"),
("<my>", "oh <my> test"),
("<test>", "oh <test> test"),
],
),
(
"oh my <test>",
4,
vec![SendItem::Str("oh my ".into()), SendItem::Index(4)],
vec![
("yay", "oh my yay"),
("oh", "oh my oh"),
("my", "oh my my"),
("test", "oh my test"),
("<oh>", "oh my <oh>"),
("<my>", "oh my <my>"),
("<test>", "oh my <test>"),
],
),
];
for (msg, index, expected_pl, lines) in tests {
let pattern_line = PatternLine::new(msg.to_string(), patterns.clone());
assert_eq!(pattern_line.line, expected_pl);
for (match_element, line) in lines {
for match_default in &matches {
let mut match_ = vec![
match_default.to_string(),
match_default.to_string(),
match_default.to_string(),
match_default.to_string(),
match_default.to_string(),
];
match_[index] = match_element.to_string();
assert_eq!(
pattern_line.line(match_.clone()),
line,
"match: {match_:?}, pattern_line: {pattern_line:?}"
);
}
}
}
}
#[test]
fn line_2_pattern() {
let pattern_line = PatternLine::new("<a> ; <b>".into(), vec!["a".into(), "b".into()]);
let matches = ["a", "b", "ab", "<a>", "<b>"];
for a in &matches {
for b in &matches {
assert_eq!(
pattern_line.line(vec![a.to_string(), b.to_string()]),
format!("{a} ; {b}"),
);
}
}
}
}
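The recursive `_from` splitter above can be hard to follow; here is a simplified, iterative sketch of the same idea (not the crate's implementation): the template is compiled once into literal and index segments, so placeholder-looking text inside match values is never re-expanded.

```rust
#[derive(Debug, PartialEq)]
enum Seg {
    Lit(String),
    Idx(usize),
}

// Compile the template once into literal/index segments. Only the first
// occurrence of each <pattern> is split out, as in PatternLine.
fn compile(template: &str, patterns: &[&str]) -> Vec<Seg> {
    let mut segs = vec![Seg::Lit(template.to_string())];
    for (i, pattern) in patterns.iter().enumerate() {
        let needle = format!("<{pattern}>");
        segs = segs
            .into_iter()
            .flat_map(|seg| match seg {
                Seg::Idx(_) => vec![seg],
                Seg::Lit(s) => match s.find(&needle) {
                    None => vec![Seg::Lit(s)],
                    Some(pos) => {
                        let mut parts = Vec::new();
                        if pos > 0 {
                            parts.push(Seg::Lit(s[..pos].to_string()));
                        }
                        parts.push(Seg::Idx(i));
                        let rest = &s[pos + needle.len()..];
                        if !rest.is_empty() {
                            parts.push(Seg::Lit(rest.to_string()));
                        }
                        parts
                    }
                },
            })
            .collect();
    }
    segs
}

// Instantiate a compiled template from a match; missing elements become "".
fn instantiate(segs: &[Seg], matches: &[&str]) -> String {
    segs.iter()
        .map(|seg| match seg {
            Seg::Lit(s) => s.as_str(),
            Seg::Idx(i) => matches.get(*i).copied().unwrap_or(""),
        })
        .collect()
}

fn main() {
    let segs = compile("<ip>: bad password on user <user>", &["ip", "user"]);
    assert_eq!(
        instantiate(&segs, &["1.2.3.4", "root"]),
        "1.2.3.4: bad password on user root"
    );
    // Placeholder-looking text inside a match value stays literal:
    assert_eq!(
        instantiate(&segs, &["<user>", "root"]),
        "<user>: bad password on user root"
    );
}
```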


@ -0,0 +1,162 @@
//! Helper module that provides structures to ease shutdown when running multiple tokio tasks.
//!
//! It defines a [`ShutdownController`], which keeps track of ongoing tasks, asks them to shut down, and waits for all of them to quit.
//!
//! You can have it as an attribute of your plugin struct.
//! ```
//! struct MyPlugin {
//! shutdown: ShutdownController
//! }
//! ```
//!
//! You can then give a [`ShutdownToken`] to other tasks when creating them:
//!
//! ```
//! impl PluginInfo for MyPlugin {
//! async fn start(&mut self) -> RemoteResult<()> {
//! let token = self.shutdown.token();
//!
//! tokio::spawn(async move {
//! token.wait().await;
//! eprintln!("DEBUG shutdown asked to quit, now quitting")
//! });
//!
//! Ok(())
//! }
//! }
//! ```
//!
//! On closing, calling [`ShutdownController::ask_shutdown`] will inform all tasks waiting on [`ShutdownToken::wait`] that it's time to leave.
//! Then we can wait for [`ShutdownController::wait_all_task_shutdown`] to complete.
//!
//! ```
//! impl PluginInfo for MyPlugin {
//! async fn close(self) -> RemoteResult<()> {
//! self.shutdown.ask_shutdown();
//! self.shutdown.wait_all_task_shutdown().await;
//! Ok(())
//! }
//! }
//! ```
//!
//! [`ShutdownDelegate::handle_quit_signals`] handles SIGHUP, SIGINT and SIGTERM by gracefully shutting down tasks.
use tokio::signal::unix::{SignalKind, signal};
use tokio_util::{
sync::{CancellationToken, WaitForCancellationFuture},
task::task_tracker::{TaskTracker, TaskTrackerToken},
};
/// Keeps track of ongoing tasks, asks them to shut down and waits for all of them to quit.
/// Thin wrapper around [`tokio_util::sync::CancellationToken`] and [`tokio_util::task::task_tracker::TaskTracker`].
#[derive(Default, Clone)]
pub struct ShutdownController {
shutdown_notifyer: CancellationToken,
task_tracker: TaskTracker,
}
impl ShutdownController {
pub fn new() -> Self {
Self::default()
}
/// Ask for all tasks to quit
pub fn ask_shutdown(&self) {
self.shutdown_notifyer.cancel();
self.task_tracker.close();
}
/// Wait for all tasks to quit.
/// This future may resolve even if [`ShutdownController::ask_shutdown`] was never
/// called, when all tasks quit by themselves.
pub async fn wait_all_task_shutdown(self) {
self.task_tracker.close();
self.task_tracker.wait().await;
}
/// Returns a new shutdown token, to be held by a task.
pub fn token(&self) -> ShutdownToken {
ShutdownToken::new(self.shutdown_notifyer.clone(), self.task_tracker.token())
}
/// Returns a [`ShutdownDelegate`], which is able to ask for shutdown,
/// without counting as a task that needs to be awaited.
pub fn delegate(&self) -> ShutdownDelegate {
ShutdownDelegate(self.shutdown_notifyer.clone())
}
/// Returns a future that will resolve only when a shutdown request happened.
pub fn wait(&self) -> WaitForCancellationFuture<'_> {
self.shutdown_notifyer.cancelled()
}
}
/// Allows asking for shutdown, without counting as a task that needs to be awaited.
pub struct ShutdownDelegate(CancellationToken);
impl ShutdownDelegate {
/// Ask for all tasks to quit
pub fn ask_shutdown(&self) {
self.0.cancel();
}
/// Ensure [`Self::ask_shutdown`] is called whenever we receive SIGHUP,
/// SIGTERM or SIGINT. Spawns a task that consumes self.
pub fn handle_quit_signals(self) -> Result<(), String> {
let err_str = |err| format!("could not register signal: {err}");
let mut sighup = signal(SignalKind::hangup()).map_err(err_str)?;
let mut sigint = signal(SignalKind::interrupt()).map_err(err_str)?;
let mut sigterm = signal(SignalKind::terminate()).map_err(err_str)?;
tokio::spawn(async move {
let signal = tokio::select! {
_ = sighup.recv() => "SIGHUP",
_ = sigint.recv() => "SIGINT",
_ = sigterm.recv() => "SIGTERM",
};
eprintln!("received {signal}, closing...");
self.ask_shutdown();
});
Ok(())
}
}
/// Created by a [`ShutdownController`].
/// Serves two purposes:
///
/// - Wait for a shutdown request to happen with [`Self::wait`]
/// - Keep track of the current task. While this token is held,
/// [`ShutdownController::wait_all_task_shutdown`] will block.
#[derive(Clone)]
pub struct ShutdownToken {
shutdown_notifyer: CancellationToken,
_task_tracker_token: TaskTrackerToken,
}
impl ShutdownToken {
fn new(shutdown_notifyer: CancellationToken, _task_tracker_token: TaskTrackerToken) -> Self {
Self {
shutdown_notifyer,
_task_tracker_token,
}
}
/// Returns underlying [`CancellationToken`] and [`TaskTrackerToken`], consuming self.
pub fn split(self) -> (CancellationToken, TaskTrackerToken) {
(self.shutdown_notifyer, self._task_tracker_token)
}
/// Returns a future that will resolve only when a shutdown request happened.
pub fn wait(&self) -> WaitForCancellationFuture<'_> {
self.shutdown_notifyer.cancelled()
}
/// Returns true if the shutdown request happened
pub fn is_shutdown(&self) -> bool {
self.shutdown_notifyer.is_cancelled()
}
/// Ask for all tasks to quit
pub fn ask_shutdown(&self) {
self.shutdown_notifyer.cancel();
}
}


@ -0,0 +1,76 @@
//! This module provides [`parse_duration`], which parses durations in reaction's format (e.g. `6h`, `3 days`).
//!
//! Like in those reaction core settings:
//! - [Filters' `retryperiod`](https://reaction.ppom.me/reference.html#retryperiod)
//! - [Actions' `after`](https://reaction.ppom.me/reference.html#after).
use std::time::Duration;
/// Parses the `&str` argument as a [`Duration`].
/// Returns `Ok(Duration)` if successful, or `Err(String)` otherwise.
///
/// Format is defined as follows: `<integer> <unit>`
/// - whitespace between the integer and unit is optional
/// - integer must be positive (>= 0)
/// - unit can be one of:
/// - `ms` / `millis` / `millisecond` / `milliseconds`
/// - `s` / `sec` / `secs` / `second` / `seconds`
/// - `m` / `min` / `mins` / `minute` / `minutes`
/// - `h` / `hour` / `hours`
/// - `d` / `day` / `days`
pub fn parse_duration(d: &str) -> Result<Duration, String> {
let d_trimmed = d.trim();
let chars = d_trimmed.as_bytes();
let mut value = 0;
let mut i = 0;
while i < chars.len() && chars[i].is_ascii_digit() {
value = value * 10 + (chars[i] - b'0') as u32;
i += 1;
}
if i == 0 {
return Err(format!("duration '{}' doesn't start with digits", d));
}
let ok_as = |func: fn(u64) -> Duration| -> Result<_, String> { Ok(func(value as u64)) };
match d_trimmed[i..].trim() {
"ms" | "millis" | "millisecond" | "milliseconds" => ok_as(Duration::from_millis),
"s" | "sec" | "secs" | "second" | "seconds" => ok_as(Duration::from_secs),
"m" | "min" | "mins" | "minute" | "minutes" => ok_as(Duration::from_mins),
"h" | "hour" | "hours" => ok_as(Duration::from_hours),
"d" | "day" | "days" => ok_as(|d: u64| Duration::from_hours(d * 24)),
unit => Err(format!(
"unit {} not recognised. must be one of ms/millis/milliseconds, s/sec/seconds, m/min/minutes, h/hours, d/days",
unit
)),
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn char_conversion() {
assert_eq!(b'9' - b'0', 9);
}
#[test]
fn parse_duration_test() {
assert_eq!(parse_duration("1s"), Ok(Duration::from_secs(1)));
assert_eq!(parse_duration("12s"), Ok(Duration::from_secs(12)));
assert_eq!(parse_duration(" 12 secs "), Ok(Duration::from_secs(12)));
assert_eq!(parse_duration("2m"), Ok(Duration::from_mins(2)));
assert_eq!(parse_duration("6 hours"), Ok(Duration::from_hours(6)));
assert_eq!(parse_duration("1d"), Ok(Duration::from_hours(1 * 24)));
assert_eq!(parse_duration("365d"), Ok(Duration::from_hours(365 * 24)));
assert!(parse_duration("d 3").is_err());
assert!(parse_duration("d3").is_err());
assert!(parse_duration("3da").is_err());
assert!(parse_duration("3_days").is_err());
assert!(parse_duration("_3d").is_err());
assert!(parse_duration("3 3d").is_err());
assert!(parse_duration("3.3d").is_err());
}
}


@ -1,10 +1,11 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p "python3.withPackages (ps: with ps; [ requests ])" -p debian-devscripts git minisign cargo-cross rustup cargo-deb
#!nix-shell -i python3 -p "python3.withPackages (ps: with ps; [ requests ])" -p debian-devscripts git minisign docker cargo-deb
import argparse
import http.client
import json
import os
import subprocess
import shutil
import subprocess
import sys
import tempfile
@ -19,6 +20,18 @@ def run_command(args, **kwargs):
def main():
# CLI arguments
parser = argparse.ArgumentParser(description="create a reaction release")
parser.add_argument(
"-p",
"--publish",
action="store_true",
help="publish a release. else build only",
)
args = parser.parse_args()
root_dir = os.getcwd()
# Git tag
cmd = run_command(
["git", "tag", "--sort=-creatordate"], capture_output=True, text=True
@ -33,36 +46,53 @@ def main():
sys.exit(1)
# Ask user
# if input(f"We will create a release for tag {tag}. Do you want to continue? (y/n) ") != "y":
# print("exiting.")
# sys.exit(1)
# Git push
# run_command(["git", "push", "--tags"])
if (
args.publish
and input(
f"We will create a release for tag {tag}. Do you want to continue? (y/n) "
)
!= "y"
):
print("exiting.")
sys.exit(1)
# Minisign password
cmd = subprocess.run(["rbw", "get", "minisign"], capture_output=True, text=True)
minisign_password = cmd.stdout
# Create directory
run_command(
[
"ssh",
"akesi",
# "-J", "pica01",
"mkdir",
"-p",
f"/var/www/static/reaction/releases/{tag}/",
]
)
if args.publish:
# Git push
run_command(["git", "push", "--tags"])
# Create directory
run_command(
[
"ssh",
"akesi",
# "-J", "pica01",
"mkdir",
"-p",
f"/var/www/static/reaction/releases/{tag}/",
]
)
else:
# Prepare directory for tarball and deb file.
# We must do a `cargo clean` before each build,
# So we have to move them out of `target/`
local_dir = os.path.join(root_dir, "local")
try:
os.mkdir(local_dir)
except FileExistsError:
pass
architectures = {
"x86_64-unknown-linux-musl": "amd64",
"aarch64-unknown-linux-musl": "arm64",
"x86_64-unknown-linux-gnu": "amd64",
# I would like to build for those targets instead:
# "x86_64-unknown-linux-musl": "amd64",
# "aarch64-unknown-linux-musl": "arm64",
# "arm-unknown-linux-gnueabihf": "armhf",
}
root_dir = os.getcwd()
all_files = []
instructions = [
@ -72,9 +102,8 @@ def main():
You'll need to install minisign to check the authenticity of the package.
After installing reaction, create your configuration file at
`/etc/reaction.json`, `/etc/reaction.jsonnet` or `/etc/reaction.yml`.
You can also provide a directory containing multiple configuration files in the previous formats.
After installing reaction, create your configuration file(s) in JSON, YAML or JSONnet in the
`/etc/reaction/` directory.
See <https://reaction.ppom.me> for documentation.
Reload systemd:
@ -84,44 +113,67 @@ $ sudo systemctl daemon-reload
Then enable and start reaction with this command
```bash
# replace `reaction.jsonnet` with the name of your configuration file in /etc/
$ sudo systemctl enable --now reaction@reaction.jsonnet.service
# write first your configuration file(s) in /etc/reaction/
$ sudo systemctl enable --now reaction.service
```
""".strip(),
]
for architecture in architectures.keys():
for architecture_rs, architecture_pretty in architectures.items():
# Cargo clean
run_command(["cargo", "clean"])
# run_command(["cargo", "clean"])
# Install toolchain
# Build docker image
run_command(["docker", "pull", "rust:bookworm"])
run_command(["docker", "build", "-t", "rust:reaction", "."])
binaries = [
# Binaries
"reaction",
"reaction-plugin-virtual",
"reaction-plugin-ipset",
]
# Build
run_command(
[
"rustup",
"toolchain",
"install",
f"stable-{architecture}",
"--force-non-host", # I know, I know!
"--profile",
"minimal",
"docker",
"run",
"--rm",
"-u", str(os.getuid()),
"-v", ".:/reaction",
"rust:reaction",
"sh", "-c",
" && ".join([
f"cargo build --release --target {architecture_rs} --package {binary}"
for binary in binaries
])
]
)
# Build
run_command(["cross", "build", "--release", "--target", architecture])
# Build .deb
cmd = run_command(
["cargo-deb", f"--target={architecture}", "--no-build", "--no-strip"]
)
debs = [
"reaction",
"reaction-plugin-ipset",
]
for deb in debs:
cmd = run_command(
[
"cargo-deb",
"--target", architecture_rs,
"--package", deb,
"--no-build",
"--no-strip"
]
)
deb_dir = os.path.join("./target", architecture, "debian")
deb_name = [f for f in os.listdir(deb_dir) if f.endswith(".deb")][0]
deb_path = os.path.join(deb_dir, deb_name)
deb_dir = os.path.join("./target", architecture_rs, "debian")
deb_names = [f for f in os.listdir(deb_dir) if f.endswith(".deb")]
deb_paths = [os.path.join(deb_dir, deb_name) for deb_name in deb_names]
# Archive
files_path = os.path.join("./target", architecture, "release")
pkg_name = f"reaction-{tag}-{architectures[architecture]}"
files_path = os.path.join("./target", architecture_rs, "release")
pkg_name = f"reaction-{tag}-{architecture_pretty}"
tar_name = f"{pkg_name}.tar.gz"
tar_path = os.path.join(files_path, tar_name)
@ -131,11 +183,7 @@ $ sudo systemctl enable --now reaction@reaction.jsonnet.service
except FileExistsError:
pass
files = [
# Binaries
"reaction",
"nft46",
"ip46tables",
files = binaries + [
# Shell completion
"reaction.bash",
"reaction.fish",
@ -163,56 +211,86 @@ $ sudo systemctl enable --now reaction@reaction.jsonnet.service
# Sign
run_command(
["minisign", "-Sm", deb_path, tar_path], text=True, input=minisign_password
["minisign", "-Sm", tar_path] + deb_paths,
text=True,
input=minisign_password,
)
deb_sig = f"{deb_path}.minisig"
deb_sig_paths = [f"{deb_path}.minisig" for deb_path in deb_paths]
deb_sig_names = [f"{deb_name}.minisig" for deb_name in deb_names]
tar_sig = f"{tar_path}.minisig"
# Push
run_command(
[
"rsync",
"-az", # "-e", "ssh -J pica01",
tar_path,
tar_sig,
deb_path,
deb_sig,
f"akesi:/var/www/static/reaction/releases/{tag}/",
]
)
all_files.extend([tar_path, tar_sig, deb_path, deb_sig])
if args.publish:
# Push
run_command(
[
"rsync",
"-az", # "-e", "ssh -J pica01",
tar_path,
tar_sig,
]
+ deb_paths
+ deb_sig_paths
+ [
f"akesi:/var/www/static/reaction/releases/{tag}/",
]
)
else:
# Copy
run_command(["cp", tar_path, tar_sig] + deb_paths + deb_sig_paths + [local_dir])
all_files.extend([tar_path, tar_sig])
all_files.extend(deb_paths)
all_files.extend(deb_sig_paths)
# Instructions
instructions.append(
f"""
## Tar installation ({architectures[architecture]} linux)
## Tar installation ({architecture_pretty} linux)
```bash
curl -O https://static.ppom.me/reaction/releases/{tag}/{tar_name} \\
-O https://static.ppom.me/reaction/releases/{tag}/{tar_name}.minisig \\
&& minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {tar_name} \\
&& rm {tar_name}.minisig \\
&& cd {tar_name} \\
&& tar xvf {tar_name} \\
&& cd {pkg_name} \\
&& sudo make install
```
""".strip()
If you want to install the ipset plugin as well:
```bash
sudo apt install -y libipset-dev && sudo make install-ipset
```
""".strip()
)
instructions.append(
f"""
## Debian installation ({architectures[architecture]} linux)
## Debian installation ({architecture_pretty} linux)
```bash
curl -O https://static.ppom.me/reaction/releases/{tag}/{deb_name} \\
-O https://static.ppom.me/reaction/releases/{tag}/{deb_name}.minisig \\
&& minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {deb_name} \\
&& rm {deb_name}.minisig \\
&& sudo apt install ./{deb_name}
curl \\
{"\n".join([
f" -O https://static.ppom.me/reaction/releases/{tag}/{deb_name} \\"
for deb_name in deb_names + deb_sig_names
])}
{"\n".join([
f" && minisign -VP RWSpLTPfbvllNqRrXUgZzM7mFjLUA7PQioAItz80ag8uU4A2wtoT2DzX -m {deb_name} \\"
for deb_name in deb_names
])}
&& rm {" ".join(deb_sig_names)} \\
&& sudo apt install {" ".join([f"./{deb_name}" for deb_name in deb_names])}
```
""".strip()
*You can also use [this third-party package repository](https://packages.azlux.fr).*
""".strip()
)
if not args.publish:
print("\n\n".join(instructions))
return
# Release
cmd = run_command(
["rbw", "get", "framagit.org", "token"], capture_output=True, text=True
@ -270,10 +348,11 @@ curl -O https://static.ppom.me/reaction/releases/{tag}/{deb_name} \\
conn = http.client.HTTPSConnection("framagit.org")
conn.request("POST", "/api/v4/projects/90566/releases", body=body, headers=headers)
response = conn.getresponse()
body = json.loads(response.read())
if response.status != 201:
print(
f"sending message failed: status: {response.status}, reason: {response.reason}"
f"sending message failed: status: {response.status}, reason: {response.reason}, message: {body.get('message')}"
)
sys.exit(1)

shell.nix (new file)

@ -0,0 +1,14 @@
# This shell.nix for NixOS users is only needed when building reaction-plugin-ipset
with import <nixpkgs> {};
pkgs.mkShell {
name = "libipset";
buildInputs = [
ipset
nftables
clang
];
src = null;
shellHook = ''
export LIBCLANG_PATH="$(clang -print-file-name=libclang.so)"
'';
}


@ -83,6 +83,24 @@ Then prints the flushed matches and actions."
patterns: Vec<(String, String)>,
},
/// Trigger a target in reaction (e.g. ban)
#[command(
long_about = "Trigger actions and remove currently active matches for the specified PATTERNS in the specified STREAM.FILTER. (e.g. ban)"
)]
Trigger {
/// path to the client-daemon communication socket
#[clap(short = 's', long, default_value = "/run/reaction/reaction.sock")]
socket: PathBuf,
/// STREAM.FILTER to trigger
#[clap(value_name = "STREAM.FILTER")]
limit: String,
/// PATTERNs to trigger on (e.g. ip=1.2.3.4)
#[clap(value_parser = parse_named_regex, value_name = "NAME=PATTERN")]
patterns: Vec<(String, String)>,
},
/// Test a regex
#[command(
name = "test-regex",


@ -1,7 +1,7 @@
mod show_flush;
mod request;
mod test_config;
mod test_regex;
pub use show_flush::request;
pub use request::request;
pub use test_config::test_config;
pub use test_regex::test_regex;


@ -78,6 +78,7 @@ pub async fn request(
DaemonResponse::Err(err) => Err(format!(
"failed to communicate to daemon: error response: {err}"
)),
DaemonResponse::Ok(_) => Ok(()),
}?;
Ok(())
}


@ -19,7 +19,7 @@ pub fn test_regex(
// Code close to Filter::setup()
let mut used_patterns: BTreeSet<Arc<Pattern>> = BTreeSet::new();
for pattern in config.patterns().values() {
for pattern in config.patterns.values() {
if let Some(index) = regex.find(pattern.name_with_braces()) {
// we already `find` it, so we must be able to `rfind` it
#[allow(clippy::unwrap_used)]
@ -43,9 +43,9 @@ pub fn test_regex(
let mut result = Vec::new();
if !used_patterns.is_empty() {
for pattern in used_patterns.iter() {
if let Some(match_) = matches.name(pattern.name()) {
if let Some(match_) = matches.name(&pattern.name) {
result.push(match_.as_str().to_string());
if !pattern.not_an_ignore(match_.as_str()) {
if pattern.is_ignore(match_.as_str()) {
ignored = true;
}
}


@ -1,39 +1,52 @@
use std::{cmp::Ordering, collections::BTreeSet, fmt::Display, sync::Arc};
use chrono::TimeDelta;
use std::{cmp::Ordering, collections::BTreeSet, fmt::Display, sync::Arc, time::Duration};
use reaction_plugin::{ActionConfig, time::parse_duration};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio::process::Command;
use super::parse_duration::*;
use super::{Match, Pattern};
use super::{Match, Pattern, PatternType};
#[derive(Clone, Debug, Default, Deserialize , Serialize)]
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
pub struct Action {
cmd: Vec<String>,
#[serde(default)]
pub cmd: Vec<String>,
// TODO one shot time deserialization
#[serde(skip_serializing_if = "Option::is_none")]
after: Option<String>,
pub after: Option<String>,
#[serde(skip)]
after_duration: Option<TimeDelta>,
pub after_duration: Option<Duration>,
#[serde(
rename = "onexit",
default = "set_false",
skip_serializing_if = "is_false"
)]
on_exit: bool,
pub on_exit: bool,
#[serde(default = "set_false", skip_serializing_if = "is_false")]
pub oneshot: bool,
#[serde(default = "set_false", skip_serializing_if = "is_false")]
pub ipv4only: bool,
#[serde(default = "set_false", skip_serializing_if = "is_false")]
pub ipv6only: bool,
#[serde(skip)]
patterns: Arc<BTreeSet<Arc<Pattern>>>,
pub patterns: Arc<BTreeSet<Arc<Pattern>>>,
#[serde(skip)]
name: String,
pub name: String,
#[serde(skip)]
filter_name: String,
pub filter_name: String,
#[serde(skip)]
stream_name: String,
pub stream_name: String,
// Plugin-specific
#[serde(default, rename = "type", skip_serializing_if = "Option::is_none")]
pub action_type: Option<String>,
#[serde(default, skip_serializing_if = "Value::is_null")]
pub options: Value,
}
fn set_false() -> bool {
@ -45,16 +58,10 @@ fn is_false(b: &bool) -> bool {
}
impl Action {
pub fn name(&self) -> &str {
&self.name
}
pub fn after_duration(&self) -> Option<TimeDelta> {
self.after_duration
}
pub fn on_exit(&self) -> bool {
self.on_exit
pub fn is_plugin(&self) -> bool {
self.action_type
.as_ref()
.is_some_and(|action_type| action_type != "cmd")
}
pub fn setup(
@ -87,11 +94,18 @@ impl Action {
return Err("character '.' is not allowed in filter name".into());
}
if self.cmd.is_empty() {
return Err("cmd is empty".into());
}
if self.cmd[0].is_empty() {
return Err("cmd's first item is empty".into());
if !self.is_plugin() {
if self.cmd.is_empty() {
return Err("cmd is empty".into());
}
if self.cmd[0].is_empty() {
return Err("cmd's first item is empty".into());
}
if !self.options.is_null() {
return Err("can't define options without a plugin type".into());
}
} else if !self.cmd.is_empty() {
return Err("can't define a cmd and a plugin type".into());
}
if let Some(after) = &self.after {
@ -104,6 +118,22 @@ impl Action {
return Err("cannot have `onexit: true`, without an `after` directive".into());
}
if self.ipv4only && self.ipv6only {
return Err("cannot have `ipv4only: true` and `ipv6only: true` in one action".into());
}
if self
.patterns
.iter()
.all(|pattern| pattern.pattern_type() != PatternType::Ip)
{
if self.ipv4only {
return Err("it makes no sense to have an action with `ipv4only: true` when no pattern of type ip is defined on the filter".into());
}
if self.ipv6only {
return Err("it makes no sense to have an action with `ipv6only: true` when no pattern of type ip is defined on the filter".into());
}
}
Ok(())
}
@ -127,6 +157,24 @@ impl Action {
cmd.args(&computed_command[1..]);
cmd
}
pub fn to_action_config(&self) -> Result<ActionConfig, String> {
Ok(ActionConfig {
stream_name: self.stream_name.clone(),
filter_name: self.filter_name.clone(),
action_name: self.name.clone(),
action_type: self
.action_type
.clone()
.ok_or_else(|| format!("action {} doesn't load a plugin. this is a bug!", self))?,
config: self.options.clone().into(),
patterns: self
.patterns
.iter()
.map(|pattern| pattern.name.clone())
.collect(),
})
}
}
impl PartialEq for Action {
@ -161,6 +209,7 @@ impl Display for Action {
#[cfg(test)]
impl Action {
/// Test-only constructor designed to be easy to call
#[allow(clippy::too_many_arguments)]
pub fn new(
cmd: Vec<&str>,
after: Option<&str>,
@ -169,11 +218,14 @@ impl Action {
filter_name: &str,
name: &str,
config_patterns: &super::Patterns,
ip_only: u8,
) -> Self {
let mut action = Self {
cmd: cmd.into_iter().map(|s| s.into()).collect(),
after: after.map(|s| s.into()),
on_exit,
ipv4only: ip_only == 4,
ipv6only: ip_only == 6,
..Default::default()
};
action
@ -197,29 +249,19 @@ pub mod tests {
use super::*;
fn default_action() -> Action {
pub fn ok_action() -> Action {
Action {
cmd: Vec::new(),
name: "".into(),
filter_name: "".into(),
stream_name: "".into(),
after: None,
after_duration: None,
on_exit: false,
patterns: Arc::new(BTreeSet::default()),
cmd: vec!["command".into()],
..Default::default()
}
}
pub fn ok_action() -> Action {
let mut action = default_action();
action.cmd = vec!["command".into()];
action
}
pub fn ok_action_with_after(d: String, name: &str) -> Action {
let mut action = default_action();
action.cmd = vec!["command".into()];
action.after = Some(d);
let mut action = Action {
cmd: vec!["command".into()],
after: Some(d),
..Default::default()
};
action
.setup("", "", name, Arc::new(BTreeSet::default()))
.unwrap();
@ -233,16 +275,16 @@ pub mod tests {
let patterns = Arc::new(BTreeSet::default());
// No command
action = default_action();
action = Action::default();
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
// No command
action = default_action();
action = Action::default();
action.cmd = vec!["".into()];
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());
// No command
action = default_action();
action = Action::default();
action.cmd = vec!["".into(), "arg1".into()];
assert!(action.setup(&name, &name, &name, patterns.clone()).is_err());


@ -1,5 +1,5 @@
use std::{
collections::{btree_map::Entry, BTreeMap},
collections::{BTreeMap, btree_map::Entry},
fs::File,
io,
path::Path,
@ -11,7 +11,7 @@ use serde::{Deserialize, Serialize};
use thiserror::Error;
use tracing::{debug, error, info, warn};
use super::{Filter, Pattern, Stream};
use super::{Pattern, Plugin, Stream, merge_attrs};
pub type Patterns = BTreeMap<String, Arc<Pattern>>;
@ -20,20 +20,23 @@ pub type Patterns = BTreeMap<String, Arc<Pattern>>;
#[serde(deny_unknown_fields)]
pub struct Config {
#[serde(default = "num_cpus::get")]
concurrency: usize,
pub concurrency: usize,
#[serde(default = "dot", skip_serializing_if = "String::is_empty")]
state_directory: String,
pub state_directory: String,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub plugins: BTreeMap<String, Plugin>,
#[serde(default)]
patterns: Patterns,
pub patterns: Patterns,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
start: Vec<Vec<String>>,
pub start: Vec<Vec<String>>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
stop: Vec<Vec<String>>,
pub stop: Vec<Vec<String>>,
#[serde(default)]
streams: BTreeMap<String, Stream>,
pub streams: BTreeMap<String, Stream>,
// This field only serve the purpose of having a top-level place for saving YAML variables
#[serde(default, skip_serializing, rename = "definitions")]
@ -45,43 +48,31 @@ fn dot() -> String {
}
impl Config {
pub fn streams(&self) -> &BTreeMap<String, Stream> {
&self.streams
}
pub fn patterns(&self) -> &Patterns {
&self.patterns
}
pub fn concurrency(&self) -> usize {
self.concurrency
}
pub fn state_directory(&self) -> &str {
&self.state_directory
}
pub fn filters(&self) -> Vec<&Filter> {
self.streams
.values()
.flat_map(|stream| stream.filters().values())
.collect()
}
pub fn get_filter(&self, name: &(String, String)) -> Option<&Filter> {
self.streams
.get(&name.0)
.and_then(|stream| stream.get_filter(&name.1))
}
fn merge(&mut self, mut other: Config) -> Result<(), String> {
for (key, plugin) in other.plugins.into_iter() {
match self.plugins.entry(key) {
Entry::Vacant(e) => {
e.insert(plugin);
}
Entry::Occupied(e) => {
return Err(format!(
"plugin {} is already defined. plugin definitions can't be spread accross multiple files.",
e.key()
));
}
}
}
for (key, pattern) in other.patterns.into_iter() {
match self.patterns.entry(key) {
Entry::Vacant(e) => {
e.insert(pattern);
}
Entry::Occupied(e) => {
return Err(format!("pattern {} is already defined. pattern definitions can't be spread accross multiple files.", e.key()));
return Err(format!(
"pattern {} is already defined. pattern definitions can't be spread accross multiple files.",
e.key()
));
}
}
}
@ -102,26 +93,19 @@ impl Config {
self.start.append(&mut other.start);
self.stop.append(&mut other.stop);
self.state_directory = merge_attrs(
self.state_directory.clone(),
other.state_directory,
".".into(),
"state_directory",
)?;
if !(self.state_directory == dot()
|| other.state_directory == dot()
|| self.state_directory == other.state_directory)
{
return Err("state_directory have conflicting definitions".into());
}
if self.state_directory == dot() {
self.state_directory = other.state_directory;
}
if !(self.concurrency == num_cpus::get()
|| other.concurrency == num_cpus::get()
|| self.concurrency == other.concurrency)
{
return Err("concurrency have conflicting definitions".into());
}
if self.concurrency == num_cpus::get() {
self.concurrency = other.concurrency;
}
self.concurrency = merge_attrs(
self.concurrency,
other.concurrency,
num_cpus::get(),
"concurrency",
)?;
Ok(())
}
@ -134,6 +118,10 @@ impl Config {
// Nullify this useless field
self._definitions = serde_json::Value::Null;
for (key, value) in &mut self.plugins {
value.setup(key)?;
}
if self.patterns.is_empty() {
return Err("no patterns configured".into());
}
@ -213,8 +201,8 @@ impl Config {
fn _from_dir_raw(path: &Path) -> Result<(Self, Vec<String>), String> {
let dir = std::fs::read_dir(path)
.map_err(|e| format!("Error accessing directory {}: {e}", path.display()))?;
let mut cfg: Option<Self> = None;
let mut read_cfg_fname = vec![];
// sorts files by name
let mut cfg_files = BTreeMap::new();
for f in dir {
let f =
f.map_err(|e| format!("Error while reading directory {}: {e}", path.display()))?;
@ -262,8 +250,8 @@ impl Config {
}
};
let cfg_format = match Self::_extension_to_format(ext) {
Ok(fmt) => fmt,
match Self::_extension_to_format(ext) {
Ok(fmt) => cfg_files.insert(fname.to_string(), (fpath, fmt)),
Err(_) => {
// silently ignore files without an expected extension
debug!(
@ -273,10 +261,12 @@ impl Config {
continue;
}
};
}
let cfg_part = Self::_load_file(&fpath, cfg_format)
let mut cfg: Option<Self> = None;
for (fname, (fpath, fmt)) in &cfg_files {
let cfg_part = Self::_load_file(fpath, *fmt)
.map_err(|e| format!("While reading {fname} in {}: {e}", path.display()))?;
read_cfg_fname.push(fname.to_string());
if let Some(mut cfg_agg) = cfg.take() {
cfg_agg.merge(cfg_part)?;
@ -287,7 +277,7 @@ impl Config {
}
if let Some(cfg) = cfg {
Ok((cfg, read_cfg_fname))
Ok((cfg, cfg_files.into_keys().collect()))
} else {
Err(format!(
"No valid configuration files found in {}",
@ -329,6 +319,7 @@ impl Config {
}
}
#[derive(Clone, Copy)]
enum Format {
Yaml,
Json,
@ -354,7 +345,7 @@ enum ConfigError {
mod jsonnet {
use std::path::Path;
use jrsonnet_evaluator::{error::LocError, EvaluationState, FileImportResolver};
use jrsonnet_evaluator::{EvaluationState, FileImportResolver, error::LocError};
use super::ConfigError;
@ -379,6 +370,7 @@ mod jsonnet {
}
fn run_commands(commands: &Vec<Vec<String>>, moment: &str) -> bool {
debug!("Running {moment} commands...");
let mut ok = true;
for command in commands {
info!("{} command: run {:?}\n", moment, command);
@ -642,7 +634,7 @@ mod tests {
assert!(cfg_org.streams.contains_key("echo"));
assert_eq!(cfg_org.streams.len(), 1);
let filters = cfg_org.streams.get("echo").unwrap().filters();
let filters = &cfg_org.streams.get("echo").unwrap().filters;
assert!(filters.contains_key("f1"));
assert!(filters.contains_key("f2"));
assert_eq!(filters.len(), 2);
@ -702,8 +694,8 @@ mod tests {
assert!(cfg_org.streams.contains_key("echo"));
assert_eq!(cfg_org.streams.len(), 1);
let stream = cfg_org.streams.get("echo").unwrap();
assert_eq!(stream.cmd().len(), 1);
assert_eq!(stream.filters().len(), 1);
assert_eq!(stream.cmd.len(), 1);
assert_eq!(stream.filters.len(), 1);
}
#[test]
@ -779,9 +771,7 @@ mod tests {
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{}"#,
);
let cfg_oth = parse_config_json(r#"{}"#);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());
@ -813,9 +803,7 @@ mod tests {
{{STREAMS}}
}"#,
);
let cfg_oth = parse_config_json(
r#"{ }"#,
);
let cfg_oth = parse_config_json(r#"{ }"#);
let res = cfg_org.merge(cfg_oth);
assert!(res.is_ok());


@ -4,15 +4,25 @@ use std::{
fmt::Display,
hash::Hash,
sync::Arc,
time::Duration,
};
use chrono::TimeDelta;
use reaction_plugin::time::parse_duration;
use regex::Regex;
use serde::{Deserialize, Serialize};
use tracing::info;
use super::parse_duration;
use super::{Action, Match, Pattern, Patterns};
use super::{Action, Match, Pattern, PatternType, Patterns};
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
pub enum Duplicate {
#[default]
#[serde(rename = "extend")]
Extend,
#[serde(rename = "ignore")]
Ignore,
#[serde(rename = "rerun")]
Rerun,
}
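The `#[serde(rename = "…")]` attributes above mean the config strings `extend`, `ignore`, and `rerun` select the corresponding variant, with `extend` as the default. A minimal dependency-free sketch of that mapping (the real crate deserializes via serde; `from_config` is an illustrative helper, not its API):

```rust
// Sketch: how the serde renames on `Duplicate` map config strings to variants.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
enum Duplicate {
    #[default]
    Extend,
    Ignore,
    Rerun,
}

// Hypothetical helper mirroring the serde `rename` attributes.
fn from_config(s: &str) -> Option<Duplicate> {
    match s {
        "extend" => Some(Duplicate::Extend),
        "ignore" => Some(Duplicate::Ignore),
        "rerun" => Some(Duplicate::Rerun),
        _ => None,
    }
}

fn main() {
    assert_eq!(from_config("rerun"), Some(Duplicate::Rerun));
    assert_eq!(from_config("bogus"), None);
    // An absent `duplicate` key falls back to the #[default] variant.
    assert_eq!(Duplicate::default(), Duplicate::Extend);
}
```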
// Only names are serialized
// Only computed fields are not deserialized
@ -20,29 +30,39 @@ use super::{Action, Match, Pattern, Patterns};
#[serde(deny_unknown_fields)]
pub struct Filter {
#[serde(skip)]
longuest_action_duration: TimeDelta,
regex: Vec<String>,
pub longuest_action_duration: Duration,
#[serde(skip)]
compiled_regex: Vec<Regex>,
pub has_ip: bool,
pub regex: Vec<String>,
#[serde(skip)]
pub compiled_regex: Vec<Regex>,
// We want patterns to be ordered
// This is necessary when using matches which contain multiple patterns
#[serde(skip)]
patterns: Arc<BTreeSet<Arc<Pattern>>>,
pub patterns: Arc<BTreeSet<Arc<Pattern>>>,
#[serde(skip_serializing_if = "Option::is_none")]
retry: Option<u32>,
pub retry: Option<u32>,
#[serde(rename = "retryperiod", skip_serializing_if = "Option::is_none")]
retry_period: Option<String>,
pub retry_period: Option<String>,
#[serde(skip)]
retry_duration: Option<TimeDelta>,
pub retry_duration: Option<Duration>,
actions: BTreeMap<String, Action>,
#[serde(default)]
pub duplicate: Duplicate,
pub actions: BTreeMap<String, Action>,
#[serde(skip)]
name: String,
pub name: String,
#[serde(skip)]
stream_name: String,
pub stream_name: String,
// // Plugin-specific
// #[serde(default, rename = "type")]
// pub filter_type: Option<String>,
// #[serde(default = "null_value")]
// pub options: Value,
}
impl Filter {
@ -64,39 +84,11 @@ impl Filter {
Filter {
stream_name: stream_name.into(),
name: filter_name.into(),
patterns: Arc::new(patterns.into_iter().map(|p| Arc::new(p)).collect()),
patterns: Arc::new(patterns.into_iter().map(Arc::new).collect()),
..Filter::default()
}
}
pub fn name(&self) -> &str {
&self.name
}
pub fn stream_name(&self) -> &str {
&self.stream_name
}
pub fn retry(&self) -> Option<u32> {
self.retry
}
pub fn retry_duration(&self) -> Option<TimeDelta> {
self.retry_duration
}
pub fn longuest_action_duration(&self) -> TimeDelta {
self.longuest_action_duration
}
pub fn actions(&self) -> &BTreeMap<String, Action> {
&self.actions
}
pub fn patterns(&self) -> &BTreeSet<Arc<Pattern>> {
&self.patterns
}
pub fn setup(
&mut self,
stream_name: &str,
@ -128,13 +120,13 @@ impl Filter {
}
if self.retry.is_some_and(|r| r < 2) {
return Err("retry has been specified but is < 2".into());
return Err("retry must be >= 2. Remove 'retry' and 'retryperiod' to trigger at the first occurence.".into());
}
if let Some(retry_period) = &self.retry_period {
self.retry_duration = Some(
parse_duration(retry_period)
.map_err(|err| format!("failed to parse retry time: {}", err))?,
.map_err(|err| format!("failed to parse retry period: {}", err))?,
);
self.retry_period = None;
}
@ -144,6 +136,7 @@ impl Filter {
}
let mut new_patterns = BTreeSet::new();
let mut new_regex = Vec::new();
let mut first = true;
for regex in &self.regex {
let mut regex_buf = regex.clone();
@ -167,17 +160,18 @@ impl Filter {
}
} else if !first && new_patterns.contains(pattern) {
return Err(format!(
"pattern {} is present in the first regex but is not present in a following regex. all regexes should contain the same set of regexes",
&pattern.name_with_braces()
));
"pattern {} is present in the first regex but is not present in a following regex. all regexes should contain the same set of regexes",
&pattern.name_with_braces()
));
}
regex_buf = regex_buf.replacen(pattern.name_with_braces(), &pattern.regex, 1);
}
new_regex.push(regex_buf.clone());
let compiled = Regex::new(&regex_buf).map_err(|err| err.to_string())?;
self.compiled_regex.push(compiled);
first = false;
}
self.regex.clear();
self.regex = new_regex;
self.patterns = Arc::new(new_patterns);
if self.actions.is_empty() {
@ -187,12 +181,18 @@ impl Filter {
for (key, action) in &mut self.actions {
action.setup(stream_name, name, key, self.patterns.clone())?;
}
self.has_ip = self
.actions
.values()
.any(|action| action.ipv4only || action.ipv6only);
self.longuest_action_duration =
self.actions.values().fold(TimeDelta::seconds(0), |acc, v| {
v.after_duration()
.map_or(acc, |v| if v > acc { v } else { acc })
});
self.actions
.values()
.fold(Duration::from_secs(0), |acc, v| {
v.after_duration
.map_or(acc, |v| if v > acc { v } else { acc })
});
Ok(())
}
@ -203,26 +203,128 @@ impl Filter {
if !self.patterns.is_empty() {
let mut result = Match::new();
for pattern in self.patterns.as_ref() {
// if the pattern is in an optional part of the regex, there may be no
// captured group for it.
if let Some(match_) = matches.name(pattern.name()) {
if pattern.not_an_ignore(match_.as_str()) {
result.push(match_.as_str().to_string());
}
// if the pattern is in an optional part of the regex,
// there may be no captured group for it.
if let Some(match_) = matches.name(&pattern.name)
&& !pattern.is_ignore(match_.as_str())
{
let mut match_ = match_.as_str().to_string();
pattern.normalize(&mut match_);
result.push(match_);
}
}
if result.len() == self.patterns.len() {
info!("{}: match {:?}", self, result);
return Some(result);
}
} else {
info!("{}: match []", self);
return Some(vec![".".to_string()]);
return Some(vec![]);
}
}
}
None
}
/// Test that the patterns map conforms to the filter's patterns.
/// Then returns a corresponding [`Match`].
pub fn get_match_from_patterns(
&self,
mut patterns: BTreeMap<Arc<Pattern>, String>,
) -> Result<Match, String> {
// Check pattern length
if patterns.len() != self.patterns.len() {
return Err(format!(
"{} patterns specified, while the {}.{} filter has {} pattern: ({})",
patterns.len(),
self.stream_name,
self.name,
self.patterns.len(),
self.patterns
.iter()
.map(|pattern| pattern.name.clone())
.reduce(|acc, pattern| acc + ", " + &pattern)
.unwrap_or("".into()),
));
}
for (pattern, match_) in &mut patterns {
if self.patterns.get(pattern).is_none() {
return Err(format!(
"pattern {} is not present in the filter {}.{}",
pattern.name, self.stream_name, self.name
));
}
if !pattern.is_match(match_) {
return Err(format!(
"'{}' doesn't match pattern {}",
match_, pattern.name,
));
}
if pattern.is_ignore(match_) {
return Err(format!(
"'{}' is explicitly ignored by pattern {}",
match_, pattern.name,
));
}
pattern.normalize(match_);
}
for pattern in self.patterns.iter() {
if !patterns.contains_key(pattern) {
return Err(format!(
"pattern {} is missing, because it's in the filter {}.{}",
pattern.name, self.stream_name, self.name
));
}
}
Ok(patterns.into_values().collect())
}
/// Filters [`Filter`]'s [`Action`]s according to its [`Pattern`]s [`PatternType`]
/// and those of the given [`Match`]
pub fn filtered_actions_from_match(&self, m: &Match) -> Vec<&Action> {
let ip_type = if self.has_ip {
self.patterns
.iter()
.zip(m)
.find(|(p, _)| p.pattern_type() == PatternType::Ip)
.map(|(_, m)| -> _ {
// Using this dumb heuristic is ok,
// because we know we have a valid IP address.
if m.contains(':') {
PatternType::Ipv6
} else if m.contains('.') {
PatternType::Ipv4
} else {
// This else should not happen, but better falling back on something than
// panicking, right? Maybe we should add a warning there?
PatternType::Regex
}
})
.unwrap_or(PatternType::Regex)
} else {
PatternType::Regex
};
let mut actions: Vec<_> = self
.actions
.values()
// If specific ip version, check it
.filter(move |action| !action.ipv4only || ip_type == PatternType::Ipv4)
.filter(move |action| !action.ipv6only || ip_type == PatternType::Ipv6)
.collect();
// Sort by after
actions.sort_by(|a, b| {
a.after_duration
.unwrap_or_default()
.cmp(&b.after_duration.unwrap_or_default())
});
actions
}
}
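The "dumb heuristic" in `filtered_actions_from_match` classifies an already-validated IP match by a cheap character test: any `:` means IPv6, otherwise any `.` means IPv4. A standalone sketch of that logic (names here are illustrative, not the crate's types):

```rust
// Minimal sketch of the IP-version heuristic: safe only because the
// matched string has already been validated as an IP address upstream.
#[derive(Debug, PartialEq)]
enum IpVersion {
    V4,
    V6,
    Unknown, // the "should not happen" fallback in the real code
}

fn classify(matched: &str) -> IpVersion {
    if matched.contains(':') {
        IpVersion::V6
    } else if matched.contains('.') {
        IpVersion::V4
    } else {
        IpVersion::Unknown
    }
}

fn main() {
    assert_eq!(classify("1.2.3.4"), IpVersion::V4);
    assert_eq!(classify("ab4:35f::1"), IpVersion::V6);
    // IPv4-mapped IPv6 contains both ':' and '.', and the ':' check wins,
    // so it is classified as IPv6 here.
    assert_eq!(classify("::ffff:1.2.3.4"), IpVersion::V6);
}
```

Checking `:` first is what makes mixed notations like IPv4-mapped IPv6 resolve as IPv6 rather than IPv4.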
impl Display for Filter {
@ -258,6 +360,7 @@ impl Hash for Filter {
}
#[cfg(test)]
#[allow(clippy::too_many_arguments)]
impl Filter {
/// Test-only constructor designed to be easy to call
pub fn new(
@ -267,13 +370,15 @@ impl Filter {
retry_period: Option<&str>,
stream_name: &str,
name: &str,
duplicate: Duplicate,
config_patterns: &Patterns,
) -> Self {
let mut filter = Self {
actions: actions.into_iter().map(|a| (a.name().into(), a)).collect(),
actions: actions.into_iter().map(|a| (a.name.clone(), a)).collect(),
regex: regex.into_iter().map(|s| s.into()).collect(),
retry,
retry_period: retry_period.map(|s| s.into()),
duplicate,
..Default::default()
};
filter.setup(stream_name, name, config_patterns).unwrap();
@ -287,6 +392,7 @@ impl Filter {
retry_period: Option<&str>,
stream_name: &str,
name: &str,
duplicate: Duplicate,
config_patterns: &Patterns,
) -> &'static Self {
Box::leak(Box::new(Self::new(
@ -296,6 +402,7 @@ impl Filter {
retry_period,
stream_name,
name,
duplicate,
config_patterns,
)))
}
@ -304,8 +411,9 @@ impl Filter {
#[cfg(test)]
pub mod tests {
use crate::concepts::action::tests::{ok_action, ok_action_with_after};
use crate::concepts::pattern::PatternIp;
use crate::concepts::pattern::tests::{
boubou_pattern_with_ignore, default_pattern, ok_pattern_with_ignore,
boubou_pattern_with_ignore, default_pattern, number_pattern, ok_pattern_with_ignore,
};
use super::*;
@ -374,14 +482,14 @@ pub mod tests {
let name = "name".to_string();
let empty_patterns = Patterns::new();
let minute_str = "1m".to_string();
let minute = TimeDelta::seconds(60);
let two_minutes = TimeDelta::seconds(60 * 2);
let minute = Duration::from_secs(60);
let two_minutes = Duration::from_secs(60 * 2);
let two_minutes_str = "2m".to_string();
// duration 0
filter = ok_filter();
filter.setup(&name, &name, &empty_patterns).unwrap();
assert_eq!(filter.longuest_action_duration, TimeDelta::default());
assert_eq!(filter.longuest_action_duration, Duration::default());
let minute_action = ok_action_with_after(minute_str.clone(), &minute_str);
@ -438,6 +546,7 @@ pub mod tests {
.unwrap()
.to_string()
);
assert_eq!(&filter.regex[0].to_string(), "insert (?P<name>[abc]) here$");
assert_eq!(filter.patterns.len(), 1);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, pattern.regex);
@ -463,6 +572,10 @@ pub mod tests {
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[0].to_string(),
"insert (?P<name>[abc]) here and (?P<boubou>(?:bou){1,3}) there"
);
assert_eq!(filter.patterns.len(), 2);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, boubou.regex);
@ -481,12 +594,20 @@ pub mod tests {
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[0].to_string(),
"insert (?P<name>[abc]) here"
);
assert_eq!(
filter.compiled_regex[1].to_string(),
Regex::new("also add (?P<name>[abc]) there")
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[1].to_string(),
"also add (?P<name>[abc]) there"
);
assert_eq!(filter.patterns.len(), 1);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, pattern.regex);
@ -513,6 +634,10 @@ pub mod tests {
.unwrap()
.to_string()
);
assert_eq!(
&filter.compiled_regex[1].to_string(),
"also add (?P<boubou>(?:bou){1,3}) here and (?P<name>[abc]) there"
);
assert_eq!(filter.patterns.len(), 2);
let stored_pattern = filter.patterns.first().unwrap();
assert_eq!(stored_pattern.regex, boubou.regex);
@ -550,17 +675,24 @@ pub mod tests {
let name = "name".to_string();
let mut filter;
// make a Patterns
let mut patterns = Patterns::new();
let mut pattern = ok_pattern_with_ignore();
pattern.setup(&name).unwrap();
patterns.insert(name.clone(), pattern.clone().into());
let pattern = Arc::new(pattern);
let boubou_name = "boubou".to_string();
let mut boubou = boubou_pattern_with_ignore();
boubou.setup(&boubou_name).unwrap();
patterns.insert(boubou_name.clone(), boubou.clone().into());
let boubou = Arc::new(boubou);
let patterns = Patterns::from([
(name.clone(), pattern.clone()),
(boubou_name.clone(), boubou.clone()),
]);
let number_name = "number".to_string();
let mut number_pattern = number_pattern();
number_pattern.setup(&number_name).unwrap();
let number_pattern = Arc::new(number_pattern);
// one simple regex
filter = Filter::default();
@ -572,6 +704,41 @@ pub mod tests {
assert_eq!(filter.get_match("youpi b youpi"), None);
assert_eq!(filter.get_match("insert here"), None);
// Ok
assert_eq!(
filter.get_match_from_patterns(BTreeMap::from([(pattern.clone(), "b".into())])),
Ok(vec!("b".into()))
);
// Doesn't match
assert!(
filter
.get_match_from_patterns(BTreeMap::from([(pattern.clone(), "abc".into())]))
.is_err()
);
// Ignored match
assert!(
filter
.get_match_from_patterns(BTreeMap::from([(pattern.clone(), "a".into())]))
.is_err()
);
// Bad pattern
assert!(
filter
.get_match_from_patterns(BTreeMap::from([(boubou.clone(), "bou".into())]))
.is_err()
);
// Bad number of patterns
assert!(
filter
.get_match_from_patterns(BTreeMap::from([
(pattern.clone(), "b".into()),
(boubou.clone(), "bou".into()),
]))
.is_err()
);
// Bad number of patterns
assert!(filter.get_match_from_patterns(BTreeMap::from([])).is_err());
// two patterns in one regex
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
@ -586,6 +753,55 @@ pub mod tests {
assert_eq!(filter.get_match("insert a here and bouboubou there"), None);
assert_eq!(filter.get_match("insert b here and boubou there"), None);
// Ok
assert_eq!(
filter.get_match_from_patterns(BTreeMap::from([
(pattern.clone(), "b".into()),
(boubou.clone(), "bou".into()),
])),
// Reordered by pattern name
Ok(vec!("bou".into(), "b".into()))
);
// Doesn't match
assert!(
filter
.get_match_from_patterns(BTreeMap::from([
(pattern.clone(), "abc".into()),
(boubou.clone(), "bou".into()),
]))
.is_err()
);
// Ignored match
assert!(
filter
.get_match_from_patterns(BTreeMap::from([
(pattern.clone(), "b".into()),
(boubou.clone(), "boubou".into()),
]))
.is_err()
);
// Bad pattern
assert!(
filter
.get_match_from_patterns(BTreeMap::from([
(pattern.clone(), "b".into()),
(number_pattern.clone(), "1".into()),
]))
.is_err()
);
// Bad number of patterns
assert!(
filter
.get_match_from_patterns(BTreeMap::from([
(pattern.clone(), "b".into()),
(boubou.clone(), "bou".into()),
(number_pattern.clone(), "1".into()),
]))
.is_err()
);
// Bad number of patterns
assert!(filter.get_match_from_patterns(BTreeMap::from([])).is_err());
// multiple regexes with same pattern
filter = Filter::default();
filter.actions.insert(name.clone(), ok_action());
@ -623,4 +839,155 @@ pub mod tests {
assert_eq!(filter.get_match("insert b here and boubou there"), None);
assert_eq!(filter.get_match("also add boubou here and b there"), None);
}
#[test]
fn get_match_from_patterns() {
// TODO
}
#[test]
fn filtered_actions_from_match_one_regex_pattern() {
let az_patterns = Pattern::new_map("az", "[a-z]+").unwrap();
let action = Action::new(
vec!["zblorg <az>"],
None,
false,
"test",
"test",
"a1",
&az_patterns,
0,
);
let filter = Filter::new(
vec![action.clone()],
vec![""],
None,
None,
"test",
"test",
Duplicate::default(),
&az_patterns,
);
assert_eq!(
vec![&action],
filter.filtered_actions_from_match(&vec!["zboum".into()])
);
}
#[test]
fn filtered_actions_from_match_two_regex_patterns() {
let patterns = BTreeMap::from([
(
"az".to_string(),
Arc::new(Pattern::new("az", "[a-z]+").unwrap()),
),
(
"num".to_string(),
Arc::new(Pattern::new("num", "[0-9]{1,3}").unwrap()),
),
]);
let action1 = Action::new(
vec!["zblorg <az> <num>"],
None,
false,
"test",
"test",
"a1",
&patterns,
0,
);
let action2 = Action::new(
vec!["zbleurg <num> <az>"],
None,
false,
"test",
"test",
"a2",
&patterns,
0,
);
let filter = Filter::new(
vec![action1.clone(), action2.clone()],
vec![""],
None,
None,
"test",
"test",
Duplicate::default(),
&patterns,
);
assert_eq!(
vec![&action1, &action2],
filter.filtered_actions_from_match(&vec!["zboum".into()])
);
}
#[test]
fn filtered_actions_from_match_one_regex_one_ip() {
let patterns = BTreeMap::from([
(
"az".to_string(),
Arc::new(Pattern::new("az", "[a-z]+").unwrap()),
),
("ip".to_string(), {
let mut pattern = Pattern {
ip: PatternIp {
pattern_type: PatternType::Ip,
..Default::default()
},
..Default::default()
};
pattern.setup("ip").unwrap();
Arc::new(pattern)
}),
]);
let action4 = Action::new(
vec!["zblorg4 <az> <ip>"],
None,
false,
"test",
"test",
"action4",
&patterns,
4,
);
let action6 = Action::new(
vec!["zblorg6 <az> <ip>"],
None,
false,
"test",
"test",
"action6",
&patterns,
6,
);
let action = Action::new(
vec!["zblorg <az> <ip>"],
None,
false,
"test",
"test",
"action",
&patterns,
0,
);
let filter = Filter::new(
vec![action4.clone(), action6.clone(), action.clone()],
vec!["<az>: <ip>"],
None,
None,
"test",
"test",
Duplicate::default(),
&patterns,
);
assert_eq!(
filter.filtered_actions_from_match(&vec!["zboum".into(), "1.2.3.4".into()]),
vec![&action, &action4],
);
assert_eq!(
filter.filtered_actions_from_match(&vec!["zboum".into(), "ab4:35f::1".into()]),
vec![&action, &action6],
);
}
}


@ -1,21 +1,22 @@
mod action;
mod config;
mod filter;
mod parse_duration;
mod pattern;
mod plugin;
mod stream;
use std::fmt::Debug;
use serde::{Deserialize, Serialize};
pub use action::Action;
pub use config::{Config, Patterns};
pub use filter::Filter;
use parse_duration::parse_duration;
pub use pattern::Pattern;
use serde::{Deserialize, Serialize};
pub use filter::{Duplicate, Filter};
pub use pattern::{Pattern, PatternType};
pub use plugin::Plugin;
pub use stream::Stream;
pub use treedb::time::{Time, now};
use chrono::{DateTime, Local};
pub type Time = DateTime<Local>;
pub type Match = Vec<String>;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize)]
@ -24,5 +25,66 @@ pub struct MatchTime {
pub t: Time,
}
fn merge_attrs<A: Default + Debug + PartialEq + Eq + Clone>(
this: A,
other: A,
default: A,
name: &str,
) -> Result<A, String> {
if !(this == default || other == default || this == other) {
return Err(format!(
"'{name}' has conflicting definitions: '{this:?}', '{other:?}'"
));
}
if this == default {
return Ok(other);
}
Ok(this)
}
#[cfg(test)]
pub use filter::tests as filter_tests;
#[cfg(test)]
mod tests {
use crate::concepts::merge_attrs;
#[test]
fn test_merge_attrs() {
assert_eq!(merge_attrs(None::<String>, None, None, "t"), Ok(None));
assert_eq!(
merge_attrs(Some("coucou"), None, None, "t"),
Ok(Some("coucou"))
);
assert_eq!(
merge_attrs(None, Some("coucou"), None, "t"),
Ok(Some("coucou"))
);
assert_eq!(
merge_attrs(Some("coucou"), Some("coucou"), None, "t"),
Ok(Some("coucou"))
);
assert_eq!(
merge_attrs(Some("coucou"), Some("hello"), None, "t"),
Err("'t' has conflicting definitions: 'Some(\"coucou\")', 'Some(\"hello\")'".into())
);
assert_eq!(merge_attrs("", "", "", "t"), Ok(""));
assert_eq!(merge_attrs("coucou", "", "", "t"), Ok("coucou"));
assert_eq!(merge_attrs("", "coucou", "", "t"), Ok("coucou"));
assert_eq!(merge_attrs("coucou", "coucou", "", "t"), Ok("coucou"));
assert_eq!(
merge_attrs("coucou", "hello", "", "t"),
Err("'t' has conflicting definitions: '\"coucou\"', '\"hello\"'".into())
);
assert_eq!(merge_attrs(0, 0, 0, "t"), Ok(0));
assert_eq!(merge_attrs(5, 0, 0, "t"), Ok(5));
assert_eq!(merge_attrs(0, 5, 0, "t"), Ok(5));
assert_eq!(merge_attrs(5, 5, 0, "t"), Ok(5));
assert_eq!(
merge_attrs(5, 6, 0, "t"),
Err("'t' has conflicting definitions: '5', '6'".into())
);
}
}


@ -1,66 +0,0 @@
use chrono::TimeDelta;
pub fn parse_duration(d: &str) -> Result<TimeDelta, String> {
let d_trimmed = d.trim();
let chars = d_trimmed.as_bytes();
let mut value = 0;
let mut i = 0;
while i < chars.len() && chars[i].is_ascii_digit() {
value = value * 10 + (chars[i] - b'0') as u32;
i += 1;
}
if i == 0 {
return Err(format!("duration '{}' doesn't start with digits", d));
}
let ok_as = |func: fn(i64) -> TimeDelta| -> Result<_, String> { Ok(func(value as i64)) };
match d_trimmed[i..].trim() {
"ms" | "millis" | "millisecond" | "milliseconds" => ok_as(TimeDelta::milliseconds),
"s" | "sec" | "secs" | "second" | "seconds" => ok_as(TimeDelta::seconds),
"m" | "min" | "mins" | "minute" | "minutes" => ok_as(TimeDelta::minutes),
"h" | "hour" | "hours" => ok_as(TimeDelta::hours),
"d" | "day" | "days" => ok_as(TimeDelta::days),
unit => Err(format!(
"unit {} not recognised. must be one of s/sec/seconds, m/min/minutes, h/hours, d/days",
unit
)),
}
}
#[cfg(test)]
mod tests {
use chrono::TimeDelta;
use super::*;
#[test]
fn char_conversion() {
assert_eq!(b'9' - b'0', 9);
}
#[test]
fn parse_duration_test() {
assert_eq!(parse_duration("1s"), Ok(TimeDelta::seconds(1)));
assert_eq!(parse_duration("12s"), Ok(TimeDelta::seconds(12)));
assert_eq!(parse_duration(" 12 secs "), Ok(TimeDelta::seconds(12)));
assert_eq!(parse_duration("2m"), Ok(TimeDelta::seconds(2 * 60)));
assert_eq!(
parse_duration("6 hours"),
Ok(TimeDelta::seconds(6 * 60 * 60))
);
assert_eq!(parse_duration("1d"), Ok(TimeDelta::seconds(24 * 60 * 60)));
assert_eq!(
parse_duration("365d"),
Ok(TimeDelta::seconds(365 * 24 * 60 * 60))
);
assert!(parse_duration("d 3").is_err());
assert!(parse_duration("d3").is_err());
assert!(parse_duration("3da").is_err());
assert!(parse_duration("3_days").is_err());
assert!(parse_duration("_3d").is_err());
assert!(parse_duration("3 3d").is_err());
assert!(parse_duration("3.3d").is_err());
}
}


@ -0,0 +1,239 @@
use std::{
fmt::Display,
net::{IpAddr, Ipv4Addr, Ipv6Addr},
str::FromStr,
};
use super::*;
/// Stores an IP and an associated mask.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum Cidr {
IPv4((Ipv4Addr, Ipv4Addr)),
IPv6((Ipv6Addr, Ipv6Addr)),
}
impl FromStr for Cidr {
type Err = String;
fn from_str(cidr: &str) -> Result<Self, Self::Err> {
let (ip, mask) = cidr.split_once('/').ok_or(format!(
"malformed IP/MASK. '{cidr}' doesn't contain any '/'"
))?;
let ip = normalize(ip).map_err(|err| format!("malformed IP '{ip}' in '{cidr}': {err}"))?;
let mask_count = u8::from_str(mask)
.map_err(|err| format!("malformed mask '{mask}' in '{cidr}': {err}"))?;
// Let's accept any mask size for now, as useless as it may seem
// if mask_count < 2 {
// return Err("Can't have a network mask of 0 or 1. You're either ignoring all Internet or half of it.".into());
// } else if mask_count
// < (match ip {
// IpAddr::V4(_) => 8,
// IpAddr::V6(_) => 16,
// })
// {
// warn!("With a mask of {mask_count}, you're ignoring a big part of Internet. Are you sure you want to do this?");
// }
Self::from_ip_and_mask(ip, mask_count)
}
}
impl Display for Cidr {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}/{}", self.network(), self.mask())
}
}
impl Cidr {
fn from_ip_and_mask(ip: IpAddr, mask_count: u8) -> Result<Self, String> {
match ip {
IpAddr::V4(mut ipv4_addr) => {
// Create bitmask
let mask = mask_to_ipv4(mask_count)?;
// Normalize IP from mask
ipv4_addr &= mask;
Ok(Cidr::IPv4((ipv4_addr, mask)))
}
IpAddr::V6(mut ipv6_addr) => {
let mask = mask_to_ipv6(mask_count)?;
// Normalize IP from mask
ipv6_addr &= mask;
Ok(Cidr::IPv6((ipv6_addr, mask)))
}
}
}
/// Whether an IP is included in this IP CIDR.
/// If the IP is not the same version as the CIDR, always returns false.
pub fn includes(&self, ip: &IpAddr) -> bool {
let ip = normalize_ip(*ip);
match self {
Cidr::IPv4((network_ipv4, mask)) => match ip {
IpAddr::V6(_) => false,
IpAddr::V4(ipv4_addr) => *network_ipv4 == ipv4_addr & mask,
},
Cidr::IPv6((network_ipv6, mask)) => match ip {
IpAddr::V4(_) => false,
IpAddr::V6(ipv6_addr) => *network_ipv6 == ipv6_addr & mask,
},
}
}
fn network(&self) -> IpAddr {
match self {
Cidr::IPv4((network, _)) => IpAddr::from(*network),
Cidr::IPv6((network, _)) => IpAddr::from(*network),
}
}
fn mask(&self) -> u8 {
let mut raw_mask = match self {
Cidr::IPv4((_, mask)) => mask.to_bits() as u128,
Cidr::IPv6((_, mask)) => mask.to_bits(),
};
let mut ret = 0;
for _ in 0..128 {
if raw_mask % 2 == 1 {
ret += 1;
}
raw_mask >>= 1;
}
ret
}
}
#[cfg(test)]
mod cidr_tests {
use std::{
net::{IpAddr, Ipv4Addr, Ipv6Addr},
str::FromStr,
};
use super::Cidr;
#[test]
fn cidrv4_from_str() {
assert_eq!(
Ok(Cidr::IPv4((Ipv4Addr::new(192, 168, 1, 4), u32::MAX.into()))),
Cidr::from_str("192.168.1.4/32")
);
// Test IP normalization from mask
assert_eq!(
Ok(Cidr::IPv4((
Ipv4Addr::new(192, 168, 1, 0),
Ipv4Addr::new(255, 255, 255, 0),
))),
Cidr::from_str("192.168.1.4/24")
);
// Another ok-test, one for the road
assert_eq!(
Ok(Cidr::IPv4((
Ipv4Addr::new(1, 1, 0, 0),
Ipv4Addr::new(255, 255, 0, 0),
))),
Cidr::from_str("1.1.248.25/16")
);
// Errors
assert!(Cidr::from_str("256.1.1.1/8").is_err());
// assert!(Cidr::from_str("1.1.1.1/0").is_err());
// assert!(Cidr::from_str("1.1.1.1/1").is_err());
// assert!(Cidr::from_str("1.1.1.1.1").is_err());
assert!(Cidr::from_str("1.1.1.1/16/16").is_err());
}
#[test]
fn cidrv6_from_str() {
assert_eq!(
Ok(Cidr::IPv6((
Ipv6Addr::new(0xfe80, 0, 0, 0, 0xdf68, 0x2ee, 0xe4f9, 0xe68),
u128::MAX.into()
))),
Cidr::from_str("fe80::df68:2ee:e4f9:e68/128")
);
// Test IP normalization from mask
assert_eq!(
Ok(Cidr::IPv6((
Ipv6Addr::new(0x2001, 0xdb8, 0x85a3, 0x9de5, 0, 0, 0, 0),
Ipv6Addr::new(u16::MAX, u16::MAX, u16::MAX, u16::MAX, 0, 0, 0, 0),
))),
Cidr::from_str("2001:db8:85a3:9de5::8a2e:370:7334/64")
);
// Another ok-test, one for the road
assert_eq!(
Ok(Cidr::IPv6((
Ipv6Addr::new(0x2001, 0xdb8, 0x85a3, 0x9d00, 0, 0, 0, 0),
Ipv6Addr::new(
u16::MAX,
u16::MAX,
u16::MAX,
u16::MAX - u8::MAX as u16,
0,
0,
0,
0
),
))),
Cidr::from_str("2001:db8:85a3:9d00::8a2e:370:7334/56")
);
assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/56").is_ok());
assert!(Cidr::from_str("2001:DB8:85A3:0:0:8A2E:370:7334/56").is_ok());
// Errors
assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:g334/56").is_err());
// assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/0").is_err());
// assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/1").is_err());
assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334:11/56").is_err());
assert!(Cidr::from_str("2001:db8:85a3:0:0:8a2e:370:7334/11/56").is_err());
}
#[test]
fn cidrv4_includes() {
let cidr = Cidr::from_str("192.168.1.0/24").unwrap();
assert!(cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 0))));
assert!(cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 1))));
assert!(cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 234))));
assert!(!cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 0, 1))));
assert!(!cidr.includes(&IpAddr::V6(Ipv6Addr::new(
0xfe80, 0, 0, 0, 0xdf68, 0x2ee, 0xe4f9, 0xe68
),)));
}
#[test]
fn cidrv6_includes() {
let cidr = Cidr::from_str("2001:db8:85a3:9d00:0:8a2e:370:7334/56").unwrap();
assert!(cidr.includes(&IpAddr::V6(Ipv6Addr::new(
0x2001, 0x0db8, 0x85a3, 0x9d00, 0, 0, 0, 0
))));
assert!(cidr.includes(&IpAddr::V6(Ipv6Addr::new(
0x2001, 0x0db8, 0x85a3, 0x9da4, 0x34fc, 0x0d8b, 0xffff, 0x1111
))));
assert!(!cidr.includes(&IpAddr::V6(Ipv6Addr::new(
0x2001, 0x0db8, 0x85a3, 0xad00, 0, 0, 0, 1
))));
assert!(!cidr.includes(&IpAddr::V4(Ipv4Addr::new(192, 168, 1, 0))));
}
#[test]
fn cidr_display() {
let cidrs = [
("192.168.1.4/32", "192.168.1.4/32"),
("192.168.1.4/24", "192.168.1.0/24"),
("1.1.248.25/16", "1.1.0.0/16"),
("fe80::df68:2ee:e4f9:e68/128", "fe80::df68:2ee:e4f9:e68/128"),
(
"2001:db8:85a3:9de5::8a2e:370:7334/64",
"2001:db8:85a3:9de5::/64",
),
(
"2001:db8:85a3:9d00::8a2e:370:7334/56",
"2001:db8:85a3:9d00::/56",
),
];
for (from, to) in cidrs {
assert_eq!(Cidr::from_str(from).unwrap().to_string(), to);
}
}
}
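`Cidr::mask` above recovers the prefix length by counting the set bits of the stored bitmask with a manual shift loop. The standard library's `count_ones` computes the same popcount; a standalone sketch comparing the two (the `prefix_len` helper is hypothetical, not part of the crate):

```rust
// Manual popcount, mirroring the shift loop in `Cidr::mask`.
// For a contiguous bitmask, the number of set bits is the prefix length.
fn prefix_len(mut raw_mask: u128) -> u8 {
    let mut ret = 0;
    for _ in 0..128 {
        if raw_mask % 2 == 1 {
            ret += 1;
        }
        raw_mask >>= 1;
    }
    ret
}

fn main() {
    let mask = u128::MAX << 64; // a /64 IPv6 bitmask
    assert_eq!(prefix_len(mask), 64);
    // Agrees with the built-in popcount.
    assert_eq!(prefix_len(mask), mask.count_ones() as u8);
    assert_eq!(prefix_len(0), 0);
}
```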


@ -0,0 +1,730 @@
use std::{
net::{IpAddr, Ipv4Addr, Ipv6Addr},
str::FromStr,
};
use serde::{Deserialize, Serialize};
use tracing::warn;
use cidr::Cidr;
use utils::*;
mod cidr;
mod utils;
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
pub enum PatternType {
#[default]
#[serde(rename = "regex")]
Regex,
#[serde(rename = "ip")]
Ip,
#[serde(rename = "ipv4")]
Ipv4,
#[serde(rename = "ipv6")]
Ipv6,
}
impl PatternType {
pub fn is_default(&self) -> bool {
*self == PatternType::default()
}
pub fn regex(&self) -> Option<String> {
// Those orders of preference are very important for <ip>
// patterns that have greedy catch-all regexes before or after them,
// for example: "Failed password .*<ip>.*"
let num4 = [
// Order is important, first is preferred.
// first 25x
"(?:25[0-5]",
// then 2xx
"2[0-4][0-9]",
// then 1xx
"1[0-9][0-9]",
// then 0xx
"[1-9][0-9]",
// then 0x
"[0-9])",
]
.join("|");
let numsix = "[0-9a-fA-F]{1,4}";
let ipv4 = format!(r#"{num4}(?:\.{num4}){{3}}"#);
#[allow(clippy::useless_format)]
let ipv6 = [
// We unroll all possibilities, longest IPv6 forms first,
// so the pattern is greedier than any .* placed
// before or after <ip>, which would otherwise "eat"
// its first or last blocks.
// Order is important, first is preferred.
// We put IPv4-suffixed regexes first
format!(r#"::(?:ffff(?::0{{1,4}})?:)?{ipv4}"#),
format!(r#"(?:{numsix}:){{1,4}}:{ipv4}"#),
// Then link-local addresses with interface name
format!(r#"fe80:(?::[0-9a-fA-F]{{0,4}}){{0,4}}%[0-9a-zA-Z]+"#),
// Full IPv6
format!("(?:{numsix}:){{7}}{numsix}"),
// 1 block cut
format!("(?:{numsix}:){{7}}:"),
format!("(?:{numsix}:){{6}}:{numsix}"),
format!("(?:{numsix}:){{5}}(?::{numsix}){{2}}"),
format!("(?:{numsix}:){{4}}(?::{numsix}){{3}}"),
format!("(?:{numsix}:){{3}}(?::{numsix}){{4}}"),
format!("(?:{numsix}:){{2}}(?::{numsix}){{5}}"),
format!("{numsix}:(?:(?::{numsix}){{6}})"),
format!(":(?:(?::{numsix}){{7}})"),
// 2 blocks cut
format!("(?:{numsix}:){{6}}:"),
format!("(?:{numsix}:){{5}}:{numsix}"),
format!("(?:{numsix}:){{4}}(?::{numsix}){{2}}"),
format!("(?:{numsix}:){{3}}(?::{numsix}){{3}}"),
format!("(?:{numsix}:){{2}}(?::{numsix}){{4}}"),
format!("{numsix}:(?:(?::{numsix}){{5}})"),
format!(":(?:(?::{numsix}){{6}})"),
// 3 blocks cut
format!("(?:{numsix}:){{5}}:"),
format!("(?:{numsix}:){{4}}:{numsix}"),
format!("(?:{numsix}:){{3}}(?::{numsix}){{2}}"),
format!("(?:{numsix}:){{2}}(?::{numsix}){{3}}"),
format!("{numsix}:(?:(?::{numsix}){{4}})"),
format!(":(?:(?::{numsix}){{5}})"),
// 4 blocks cut
format!("(?:{numsix}:){{4}}:"),
format!("(?:{numsix}:){{3}}:{numsix}"),
format!("(?:{numsix}:){{2}}(?::{numsix}){{2}}"),
format!("{numsix}:(?:(?::{numsix}){{3}})"),
format!(":(?:(?::{numsix}){{4}})"),
// 5 blocks cut
format!("(?:{numsix}:){{3}}:"),
format!("(?:{numsix}:){{2}}:{numsix}"),
format!("{numsix}:(?:(?::{numsix}){{2}})"),
format!(":(?:(?::{numsix}){{3}})"),
// 6 blocks cut
format!("(?:{numsix}:){{2}}:"),
format!("{numsix}::{numsix}"),
format!(":(?:(?::{numsix}){{2}})"),
// 7 blocks cut
format!("{numsix}::"),
format!("::{numsix}"),
// special cuts
// 8 blocks cut
format!("::"),
]
.join("|");
match self {
PatternType::Ipv4 => Some(ipv4),
PatternType::Ipv6 => Some(ipv6),
PatternType::Ip => Some(format!("{ipv4}|{ipv6}")),
PatternType::Regex => None,
}
}
}
#[derive(Clone, Debug, Default, PartialEq, Eq, Deserialize, Serialize)]
pub struct PatternIp {
#[serde(
default,
rename = "type",
skip_serializing_if = "PatternType::is_default"
)]
pub pattern_type: PatternType,
#[serde(default, rename = "ipv4mask")]
pub ipv4_mask: Option<u8>,
#[serde(default, rename = "ipv6mask")]
pub ipv6_mask: Option<u8>,
#[serde(skip)]
pub ipv4_bitmask: Option<Ipv4Addr>,
#[serde(skip)]
pub ipv6_bitmask: Option<Ipv6Addr>,
#[serde(default, rename = "ignorecidr", skip_serializing_if = "Vec::is_empty")]
pub ignore_cidr: Vec<String>,
#[serde(skip)]
pub ignore_cidr_normalized: Vec<Cidr>,
}
impl PatternIp {
pub fn pattern_type(&self) -> PatternType {
self.pattern_type
}
/// Set up the IP-specific part of a Pattern.
/// Returns the built-in regex string for IP-like types, else None.
/// Returns an error if one of:
/// - the type is not IP but there is IP-specific config
/// - the type is IP/IPv4/IPv6 and there is invalid IP-specific config
/// - the type is IPv4 and there is IPv6-specific config
/// - the type is IPv6 and there is IPv4-specific config
pub fn setup(&mut self) -> Result<Option<String>, String> {
match self.pattern_type {
PatternType::Regex => {
if self.ipv4_mask.is_some() {
return Err("ipv4mask is only allowed for patterns of `type: 'ip'`".into());
}
if self.ipv6_mask.is_some() {
return Err("ipv6mask is only allowed for patterns of `type: 'ip'`".into());
}
if !self.ignore_cidr.is_empty() {
return Err("ignorecidr is only allowed for patterns of `type: 'ip'`".into());
}
}
PatternType::Ip | PatternType::Ipv4 | PatternType::Ipv6 => {
if let Some(mask) = self.ipv4_mask {
self.ipv4_bitmask = Some(mask_to_ipv4(mask)?);
}
if let Some(mask) = self.ipv6_mask {
self.ipv6_bitmask = Some(mask_to_ipv6(mask)?);
}
for cidr in &self.ignore_cidr {
let cidr_normalized = Cidr::from_str(cidr)?;
let cidr_normalized_string = cidr_normalized.to_string();
if &cidr_normalized_string != cidr {
warn!(
"CIDR {cidr} should be rewritten in its normalized form: {cidr_normalized_string}"
);
}
self.ignore_cidr_normalized.push(cidr_normalized);
}
self.ignore_cidr = Vec::default();
match self.pattern_type {
PatternType::Regex => (),
PatternType::Ip => (),
PatternType::Ipv4 => {
if self.ipv6_mask.is_some() {
return Err("An IPv4-only pattern can't have an ipv6mask".into());
}
for cidr in &self.ignore_cidr_normalized {
if let Cidr::IPv6(_) = cidr {
return Err(format!(
"An IPv4-only pattern can't have an IPv6 ({}) as an ignore",
cidr
));
}
}
}
PatternType::Ipv6 => {
if self.ipv4_mask.is_some() {
return Err("An IPv6-only pattern can't have an ipv4mask".into());
}
for cidr in &self.ignore_cidr_normalized {
if let Cidr::IPv4(_) = cidr {
return Err(format!(
"An IPv6-only pattern can't have an IPv4 ({}) as an ignore",
cidr
));
}
}
}
}
}
}
Ok(self.pattern_type.regex())
}
/// Whether the IP match is included in one of [`Self::ignore_cidr`]
pub fn is_ignore(&self, match_: &str) -> bool {
let match_ip = match IpAddr::from_str(match_) {
Ok(ip) => ip,
Err(_) => return false,
};
self.ignore_cidr_normalized
.iter()
.any(|cidr| cidr.includes(&match_ip))
}
/// Normalize the pattern.
/// This should happen after the ignore checks.
/// No-op when the pattern is not an IP.
/// Otherwise, ANDs the IP with its configured bitmask
/// and appends the /<mask> suffix.
pub fn normalize(&self, match_: &mut String) {
let ip = match self.pattern_type {
PatternType::Regex => None,
// Attempt to normalize only if type is IP*
_ => normalize(match_)
.ok()
.and_then(|ip| match self.pattern_type {
PatternType::Ip => Some(ip),
PatternType::Ipv4 => match ip {
IpAddr::V4(_) => Some(ip),
_ => None,
},
PatternType::Ipv6 => match ip {
IpAddr::V6(_) => Some(ip),
_ => None,
},
_ => None,
}),
};
if let Some(ip) = ip {
*match_ = match ip {
IpAddr::V4(addr) => match self.ipv4_bitmask {
Some(bitmask) => {
format!("{}/{}", addr & bitmask, self.ipv4_mask.unwrap_or(32))
}
None => addr.to_string(),
},
IpAddr::V6(addr) => match self.ipv6_bitmask {
Some(bitmask) => {
format!("{}/{}", addr & bitmask, self.ipv6_mask.unwrap_or(128))
}
None => addr.to_string(),
},
};
}
}
}
#[cfg(test)]
mod patternip_tests {
use std::net::{Ipv4Addr, Ipv6Addr};
use tokio::{fs::read_to_string, task::JoinSet};
use crate::{
concepts::{Action, Duplicate, Filter, Pattern, now},
daemon::{React, tests::TestBed},
};
use super::{Cidr, PatternIp, PatternType};
#[test]
fn test_setup_type_regex() {
let mut regex_struct = PatternIp {
pattern_type: PatternType::Regex,
..Default::default()
};
let copy = regex_struct.clone();
// All default patterns is ok for regex type
assert!(regex_struct.setup().is_ok());
// Setup changes nothing
assert_eq!(regex_struct, copy);
// Any non-default field is err
let mut regex_struct = PatternIp {
pattern_type: PatternType::Regex,
ipv4_mask: Some(24),
..Default::default()
};
assert!(regex_struct.setup().is_err());
let mut regex_struct = PatternIp {
pattern_type: PatternType::Regex,
ipv6_mask: Some(64),
..Default::default()
};
assert!(regex_struct.setup().is_err());
let mut regex_struct = PatternIp {
pattern_type: PatternType::Regex,
ignore_cidr: vec!["192.168.1/24".into()],
..Default::default()
};
assert!(regex_struct.setup().is_err());
}
#[test]
fn test_setup_type_ip() {
for pattern_type in [PatternType::Ip, PatternType::Ipv4, PatternType::Ipv6] {
let mut ip_struct = PatternIp {
pattern_type,
..Default::default()
};
assert!(ip_struct.setup().is_ok());
let mut ip_struct = PatternIp {
pattern_type,
ipv4_mask: Some(24),
..Default::default()
};
match pattern_type {
PatternType::Ipv6 => assert!(ip_struct.setup().is_err()),
_ => {
assert!(ip_struct.setup().is_ok());
assert_eq!(
ip_struct.ipv4_bitmask,
Some(Ipv4Addr::new(255, 255, 255, 0))
);
}
}
let mut ip_struct = PatternIp {
pattern_type,
ipv6_mask: Some(64),
..Default::default()
};
match pattern_type {
PatternType::Ipv4 => assert!(ip_struct.setup().is_err()),
_ => {
assert!(ip_struct.setup().is_ok());
assert_eq!(
ip_struct.ipv6_bitmask,
Some(Ipv6Addr::new(0xffff, 0xffff, 0xffff, 0xffff, 0, 0, 0, 0))
);
}
}
let mut ip_struct = PatternIp {
pattern_type,
ignore_cidr: vec!["192.168.1.0/24".into()],
..Default::default()
};
match pattern_type {
PatternType::Ipv6 => assert!(ip_struct.setup().is_err()),
_ => {
assert!(ip_struct.setup().is_ok());
assert_eq!(
ip_struct.ignore_cidr_normalized,
vec![Cidr::IPv4((
Ipv4Addr::new(192, 168, 1, 0),
Ipv4Addr::new(255, 255, 255, 0)
))]
);
}
}
let mut ip_struct = PatternIp {
pattern_type,
ignore_cidr: vec!["::ffff:192.168.1.0/24".into()],
..Default::default()
};
match pattern_type {
PatternType::Ipv6 => assert!(ip_struct.setup().is_err()),
_ => {
assert!(ip_struct.setup().is_ok());
assert_eq!(
ip_struct.ignore_cidr_normalized,
vec![Cidr::IPv4((
Ipv4Addr::new(192, 168, 1, 0),
Ipv4Addr::new(255, 255, 255, 0)
))]
);
}
}
let mut ip_struct = PatternIp {
pattern_type,
ignore_cidr: vec!["2001:db8:85a3:9de5::8a2e:370:7334/64".into()],
..Default::default()
};
match pattern_type {
PatternType::Ipv4 => assert!(ip_struct.setup().is_err()),
_ => {
assert!(ip_struct.setup().is_ok());
assert_eq!(
ip_struct.ignore_cidr_normalized,
vec![Cidr::IPv6((
Ipv6Addr::new(0x2001, 0xdb8, 0x85a3, 0x9de5, 0, 0, 0, 0),
Ipv6Addr::new(u16::MAX, u16::MAX, u16::MAX, u16::MAX, 0, 0, 0, 0),
))]
);
}
}
}
}
#[test]
fn test_is_ignore() {
let mut ip_struct = PatternIp {
pattern_type: PatternType::Ip,
ignore_cidr: vec!["10.0.0.0/8".into(), "2001:db8:85a3:9de5::/64".into()],
..Default::default()
};
ip_struct.setup().unwrap();
assert!(!ip_struct.is_ignore("prout"));
assert!(!ip_struct.is_ignore("1.1.1.1"));
assert!(!ip_struct.is_ignore("11.1.1.1"));
assert!(!ip_struct.is_ignore("2001:db8:85a3:9de6::1"));
assert!(ip_struct.is_ignore("10.1.1.1"));
assert!(ip_struct.is_ignore("2001:db8:85a3:9de5::1"));
}
#[test]
fn test_normalize() {
let ipv4_32 = "1.1.1.1";
let ipv4_32_norm = "1.1.1.1";
let ipv4_24 = "1.1.1.0";
let ipv4_24_norm = "1.1.1.0";
let ipv4_24_mask = "1.1.1.0/24";
let ipv6_128 = "2001:db8:85a3:9de5:0:0:01:02";
let ipv6_128_norm = "2001:db8:85a3:9de5::1:2";
let ipv6_64 = "2001:db8:85a3:9de5:0:0:0:0";
let ipv6_64_norm = "2001:db8:85a3:9de5::";
let ipv6_64_mask = "2001:db8:85a3:9de5::/64";
for (ipv4_mask, ipv6_mask) in [(Some(24), None), (None, Some(64)), (Some(24), Some(64))] {
let mut ip_struct = PatternIp {
pattern_type: PatternType::Ip,
ipv4_mask,
ipv6_mask,
..Default::default()
};
ip_struct.setup().unwrap();
let mut ipv4_32_modified = ipv4_32.to_string();
let mut ipv4_24_modified = ipv4_24.to_string();
let mut ipv6_128_modified = ipv6_128.to_string();
let mut ipv6_64_modified = ipv6_64.to_string();
ip_struct.normalize(&mut ipv4_32_modified);
ip_struct.normalize(&mut ipv4_24_modified);
ip_struct.normalize(&mut ipv6_128_modified);
ip_struct.normalize(&mut ipv6_64_modified);
match ipv4_mask {
Some(_) => {
// modified with mask
assert_eq!(
ipv4_32_modified, ipv4_24_mask,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
assert_eq!(
ipv4_24_modified, ipv4_24_mask,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
}
None => {
// only normalized
assert_eq!(
ipv4_32_modified, ipv4_32_norm,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
assert_eq!(
ipv4_24_modified, ipv4_24_norm,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
}
}
match ipv6_mask {
Some(_) => {
// modified with mask
assert_eq!(
ipv6_128_modified, ipv6_64_mask,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
assert_eq!(
ipv6_64_modified, ipv6_64_mask,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
}
None => {
// only normalized
assert_eq!(
ipv6_128_modified, ipv6_128_norm,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
assert_eq!(
ipv6_64_modified, ipv6_64_norm,
"ipv4mask: {:?}, ipv6mask: {:?}",
ipv4_mask, ipv6_mask
);
}
}
}
}
pub const VALID_IPV4: [&str; 8] = [
"252.4.92.250",
"212.4.92.210",
"112.4.92.110",
"83.4.92.35",
"83.4.92.0",
"3.254.92.4",
"1.2.3.4",
"255.255.255.255",
];
pub const VALID_IPV6: [&str; 42] = [
// all accepted characters
"0123:4567:89:ab:cdef:AB:CD:EF",
// ipv6-mapped ipv4
"::ffff:1.2.3.4",
"ffff::1.2.3.4",
// 8 blocks
"1111:2:3:4:5:6:7:8888",
// 7 blocks
"::2:3:4:5:6:7:8888",
"1111::3:4:5:6:7:8888",
"1111:2::4:5:6:7:8888",
"1111:2:3::5:6:7:8888",
"1111:2:3:4::6:7:8888",
"1111:2:3:4:5::7:8888",
"1111:2:3:4:5:6::8888",
"1111:2:3:4:5:6:7::",
// 6 blocks
"::3:4:5:6:7:8888",
"1111::4:5:6:7:8888",
"1111:2::5:6:7:8888",
"1111:2:3::6:7:8888",
"1111:2:3:4::7:8888",
"1111:2:3:4:5::8888",
"1111:2:3:4:5:6::",
// 5 blocks
"::4:5:6:7:8888",
"1111::5:6:7:8888",
"1111:2::6:7:8888",
"1111:2:3::7:8888",
"1111:2:3:4::8888",
"1111:2:3:4:5::",
// 4 blocks
"::5:6:7:8888",
"1111::6:7:8888",
"1111:2::7:8888",
"1111:2:3::8888",
"1111:2:3:4::",
// 3 blocks
"::6:7:8888",
"1111::7:8888",
"1111:2::8888",
"1111:2:3::",
// 2 blocks
"::7:8888",
"1111::8888",
"1111:2::",
// 1 block
"::8",
"::8888",
"1::",
"1111::",
// 0 block
"::",
];
#[test]
fn test_ip_regexes() {
for pattern_type in [PatternType::Ip, PatternType::Ipv4, PatternType::Ipv6] {
let mut pattern = Pattern {
ip: PatternIp {
pattern_type,
..Default::default()
},
..Default::default()
};
assert!(pattern.setup("zblorg").is_ok());
let regex = pattern.compiled().unwrap();
let accepts_ipv4 = pattern_type == PatternType::Ip || pattern_type == PatternType::Ipv4;
let accepts_ipv6 = pattern_type == PatternType::Ip || pattern_type == PatternType::Ipv6;
macro_rules! assert2 {
($a:expr) => {
assert!($a, "PatternType: {pattern_type:?}");
};
}
for ip in VALID_IPV4 {
assert2!(accepts_ipv4 == regex.is_match(ip));
}
assert2!(!regex.is_match(".1.2.3.4"));
assert2!(!regex.is_match(" 1.2.3.4"));
assert2!(!regex.is_match("1.2.3.4 "));
assert2!(!regex.is_match("1.2. 3.4"));
assert2!(!regex.is_match("257.2.3.4"));
assert2!(!regex.is_match("074.2.3.4"));
assert2!(!regex.is_match("1.2.3.4.5"));
assert2!(!regex.is_match("1.2..4"));
assert2!(!regex.is_match("1.2..3.4"));
for ip in VALID_IPV6 {
assert2!(accepts_ipv6 == regex.is_match(ip));
}
assert2!(!regex.is_match("1:"));
assert2!(!regex.is_match("1:::"));
assert2!(!regex.is_match("1:::2"));
assert2!(!regex.is_match("1:2:3:4:5:6:7:8:9"));
assert2!(!regex.is_match("1:23456:3:4:5:6:7:8"));
assert2!(!regex.is_match("1:2:3:4:5:6:7:8:"));
}
}
#[tokio::test(flavor = "multi_thread")]
async fn ip_pattern_matches() {
let mut join_set = JoinSet::new();
for ip in VALID_IPV4.iter().chain(&VALID_IPV6) {
for line in [
format!("borned {ip} test"),
//
format!("right-unborned {ip} text"),
format!("right-unborned {ip}text"),
format!("right-unborned {ip}:"),
//
format!("left-unborned text {ip}"),
format!("left-unborned text{ip}"),
format!("left-unborned :{ip}"),
//
format!("full-unborned text {ip} text"),
format!("full-unborned text{ip} text"),
format!("full-unborned text {ip}text"),
format!("full-unborned text{ip}text"),
format!("full-unborned :{ip}:"),
format!("full-unborned : {ip}:"),
] {
join_set.spawn(tokio::spawn(async move {
let bed = TestBed::default();
let filter = Filter::new_static(
vec![Action::new(
vec!["sh", "-c", &format!("echo <ip> >> {}", &bed.out_file)],
None,
false,
"test",
"test",
"a1",
&bed.ip_patterns,
0,
)],
vec![
"^borned <ip> test",
"^right-unborned <ip>.*",
"^left-unborned .*<ip>",
"^full-unborned .*<ip>.*",
],
None,
None,
"test",
"test",
Duplicate::Ignore,
&bed.ip_patterns,
);
let bed = bed.part2(filter, now(), None).await;
assert_eq!(
bed.manager.handle_line(&line, now()).await,
React::Trigger,
"line: {line}"
);
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
assert_eq!(
&read_to_string(&bed.out_file).await.unwrap().trim_end(),
ip,
"line: {line}"
);
}));
}
}
join_set.join_all().await;
}
}


@ -0,0 +1,122 @@
use std::{
net::{AddrParseError, IpAddr, Ipv4Addr, Ipv6Addr},
str::FromStr,
};
/// Normalize a string as an IP address.
/// IPv6-mapped IPv4 addresses are cast to IPv4.
pub fn normalize(ip: &str) -> Result<IpAddr, AddrParseError> {
IpAddr::from_str(ip).map(normalize_ip)
}
/// Normalize an already-parsed IP address.
/// IPv6-mapped IPv4 addresses are cast to IPv4.
pub fn normalize_ip(ip: IpAddr) -> IpAddr {
match ip {
IpAddr::V4(_) => ip,
IpAddr::V6(ipv6) => match ipv6.to_ipv4_mapped() {
Some(ipv4) => IpAddr::V4(ipv4),
None => ip,
},
}
}
/// Creates an [`Ipv4Addr`] from a mask
pub fn mask_to_ipv4(mask_count: u8) -> Result<Ipv4Addr, String> {
if mask_count > 32 {
Err(format!(
"an IPv4 mask must be 32 max. {mask_count} is too big."
))
} else {
let mask = match mask_count {
0 => 0u32,
n => u32::MAX << (32 - n),
};
let mask = Ipv4Addr::from_bits(mask);
Ok(mask)
}
}
/// Creates an [`Ipv6Addr`] from a mask
pub fn mask_to_ipv6(mask_count: u8) -> Result<Ipv6Addr, String> {
if mask_count > 128 {
Err(format!(
"an IPv4 mask must be 128 max. {mask_count} is too big."
))
} else {
let mask = match mask_count {
0 => 0u128,
n => u128::MAX << (128 - n),
};
let mask = Ipv6Addr::from_bits(mask);
Ok(mask)
}
}
#[cfg(test)]
mod utils_tests {
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};
use super::{mask_to_ipv4, mask_to_ipv6, normalize};
#[test]
fn test_normalize_ip() {
assert_eq!(
normalize("83.44.23.14"),
Ok(IpAddr::V4(Ipv4Addr::new(83, 44, 23, 14)))
);
assert_eq!(
normalize("2001:db8:85a3::8a2e:370:7334"),
Ok(IpAddr::V6(Ipv6Addr::new(
0x2001, 0xdb8, 0x85a3, 0x0, 0x0, 0x8a2e, 0x370, 0x7334
)))
);
assert_eq!(
normalize("::ffff:192.168.1.34"),
Ok(IpAddr::V4(Ipv4Addr::new(192, 168, 1, 34)))
);
assert_eq!(
normalize("::ffff:1.2.3.4"),
Ok(IpAddr::V4(Ipv4Addr::new(1, 2, 3, 4)))
);
// octal numbers are forbidden
assert!(normalize("083.44.23.14").is_err());
}
#[test]
fn test_mask_to_ipv4() {
assert!(mask_to_ipv4(33).is_err());
assert!(mask_to_ipv4(100).is_err());
assert_eq!(mask_to_ipv4(16), Ok(Ipv4Addr::new(255, 255, 0, 0)));
assert_eq!(mask_to_ipv4(24), Ok(Ipv4Addr::new(255, 255, 255, 0)));
assert_eq!(mask_to_ipv4(25), Ok(Ipv4Addr::new(255, 255, 255, 128)));
assert_eq!(mask_to_ipv4(26), Ok(Ipv4Addr::new(255, 255, 255, 192)));
assert_eq!(mask_to_ipv4(32), Ok(Ipv4Addr::new(255, 255, 255, 255)));
}
#[test]
fn test_mask_to_ipv6() {
assert!(mask_to_ipv6(129).is_err());
assert!(mask_to_ipv6(254).is_err());
assert_eq!(
mask_to_ipv6(56),
Ok(Ipv6Addr::new(0xffff, 0xffff, 0xffff, 0xff00, 0, 0, 0, 0))
);
assert_eq!(
mask_to_ipv6(64),
Ok(Ipv6Addr::new(0xffff, 0xffff, 0xffff, 0xffff, 0, 0, 0, 0))
);
assert_eq!(
mask_to_ipv6(112),
Ok(Ipv6Addr::new(
0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0
))
);
assert_eq!(
mask_to_ipv6(128),
Ok(Ipv6Addr::new(
0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff, 0xffff
))
);
}
}
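The `0 => 0` arm in `mask_to_ipv4`/`mask_to_ipv6` above is load-bearing: shifting a `u32` by 32 (or a `u128` by 128) is an overflowing shift in Rust, which panics in debug builds. A minimal standalone sketch of the same construction (the `mask_bits` helper is hypothetical, not part of the crate):

```rust
// Build an IPv4-style bitmask from a prefix length (0..=32).
// `u32::MAX << 32` would be an overflowing shift, hence the explicit 0 case.
fn mask_bits(prefix: u8) -> u32 {
    match prefix {
        0 => 0,
        n => u32::MAX << (32 - n),
    }
}

fn main() {
    assert_eq!(mask_bits(24), 0xFFFF_FF00);
    assert_eq!(mask_bits(16), 0xFFFF_0000);
    assert_eq!(mask_bits(0), 0);
    assert_eq!(mask_bits(32), u32::MAX);
}
```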


@ -1,26 +1,34 @@
use std::cmp::Ordering;
use regex::Regex;
use regex::{Regex, RegexSet};
use serde::{Deserialize, Serialize};
mod ip;
pub use ip::{PatternIp, PatternType};
#[derive(Clone, Debug, Deserialize, Serialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Pattern {
#[serde(default)]
pub regex: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
ignore: Vec<String>,
pub ignore: Vec<String>,
#[serde(default, rename = "ignoreregex", skip_serializing_if = "Vec::is_empty")]
ignore_regex: Vec<String>,
pub ignore_regex: Vec<String>,
#[serde(skip)]
compiled_ignore_regex: Vec<Regex>,
pub compiled_ignore_regex: RegexSet,
#[serde(flatten)]
pub ip: PatternIp,
#[serde(skip)]
name: String,
pub name: String,
#[serde(skip)]
name_with_braces: String,
pub name_with_braces: String,
}
impl Pattern {
@ -31,19 +39,20 @@ impl Pattern {
..Pattern::default()
}
}
pub fn setup(&mut self, name: &str) -> Result<(), String> {
self._setup(name)
.map_err(|msg| format!("pattern {}: {}", name, msg))
}
pub fn name(&self) -> &String {
&self.name
}
pub fn name_with_braces(&self) -> &String {
&self.name_with_braces
}
pub fn _setup(&mut self, name: &str) -> Result<(), String> {
pub fn pattern_type(&self) -> PatternType {
self.ip.pattern_type
}
pub fn setup(&mut self, name: &str) -> Result<(), String> {
self._setup(name)
.map_err(|msg| format!("pattern {}: {}", name, msg))
}
fn _setup(&mut self, name: &str) -> Result<(), String> {
self.name = name.to_string();
self.name_with_braces = format!("<{}>", name);
@ -54,10 +63,17 @@ impl Pattern {
return Err("character '.' is not allowed in pattern name".into());
}
if let Some(regex) = self.ip.setup()? {
if !self.regex.is_empty() {
return Err("patterns of type ip, ipv4, ipv6 have a built-in regex defined. you should not define it yourself".into());
}
self.regex = regex;
}
if self.regex.is_empty() {
return Err("regex is empty".into());
}
let compiled = Regex::new(&format!("^{}$", self.regex)).map_err(|err| err.to_string())?;
let compiled = self.compiled()?;
self.regex = format!("(?P<{}>{})", self.name, self.regex);
@ -70,29 +86,65 @@ impl Pattern {
}
}
for ignore_regex in &self.ignore_regex {
let compiled_ignore = Regex::new(&format!("^{}$", ignore_regex))
.map_err(|err| format!("ignoreregex '{}': {}", ignore_regex, err))?;
self.compiled_ignore_regex.push(compiled_ignore);
}
self.ignore_regex.clear();
self.compiled_ignore_regex =
match RegexSet::new(self.ignore_regex.iter().map(|regex| format!("^{}$", regex))) {
Ok(set) => set,
Err(err) => {
// Recompile regexes one by one to display a more specific error
for ignore_regex in &self.ignore_regex {
Regex::new(&format!("^{}$", ignore_regex))
.map_err(|err| format!("ignoreregex '{}': {}", ignore_regex, err))?;
}
// Here we should have returned an error already.
// Returning a more generic error if not (which shouldn't happen).
return Err(format!("ignoreregex: {}", err));
}
};
self.ignore_regex = Vec::default();
Ok(())
}
pub fn not_an_ignore(&self, match_: &str) -> bool {
for regex in &self.compiled_ignore_regex {
if regex.is_match(match_) {
return false;
}
/// Returns the pattern's regex compiled standalone, enclosed in ^ and $
/// It's not kept as a field of the [`Pattern`] struct
/// because it's only used during setup and for the `trigger` manual command.
///
/// *Yes, I know, avoiding a few bytes of memory is certainly a bad idea.*
/// *I'm open to discussion.*
fn compiled(&self) -> Result<Regex, String> {
Regex::new(&format!("^{}$", self.regex)).map_err(|err| err.to_string())
}
/// Normalize the pattern.
/// This should happen after checking on ignores.
/// No-op when the pattern is not an IP.
/// Otherwise BitAnd the IP with its configured mask,
/// and add the /<mask>
pub fn normalize(&self, match_: &mut String) {
self.ip.normalize(match_)
}
/// Whether the provided string is a match for this pattern or not.
///
/// Doesn't take into account ignore and ignore_regex:
/// use [`Self::is_ignore`] to access this information.
pub fn is_match(&self, match_: &str) -> bool {
match self.compiled() {
Ok(regex) => regex.is_match(match_),
// Should not happen, this function should be called only after
// [`Pattern::setup`]
Err(_) => false,
}
for ignore in &self.ignore {
if ignore == match_ {
return false;
}
}
true
}
/// Whether the provided string is ignored by the ignore or ignoreregex
/// fields of this pattern.
///
/// Can be used in combination with [`Self::is_match`].
pub fn is_ignore(&self, match_: &str) -> bool {
self.ignore.iter().any(|ignore| ignore == match_)
|| self.compiled_ignore_regex.is_match(match_)
|| self.ip.is_ignore(match_)
}
}
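The anchoring and capture-group wrapping that `_setup` applies to a pattern's regex can be sketched standalone. This is a minimal, std-only illustration (the `ip` name and regex below are made-up examples, not taken from the config):

```rust
fn main() {
    // Hypothetical pattern name and user-supplied regex
    let name = "ip";
    let user_regex = r"\d+\.\d+\.\d+\.\d+";
    // Anchored form, used to validate the regex standalone (as in `compiled()`)
    let anchored = format!("^{}$", user_regex);
    // Named-capture form, embedded into the filters' combined regexes
    let named = format!("(?P<{}>{})", name, user_regex);
    assert_eq!(anchored, r"^\d+\.\d+\.\d+\.\d+$");
    assert_eq!(named, r"(?P<ip>\d+\.\d+\.\d+\.\d+)");
}
```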
@@ -129,7 +181,7 @@ impl Pattern {
}
/// Test-only constructor designed to be easy to call.
/// Constructs a full super::Paterns collection with one given pattern
/// Constructs a full [`super::Patterns`] collection with one given pattern
pub fn new_map(name: &str, regex: &str) -> Result<super::Patterns, String> {
Ok(std::iter::once((name.into(), Self::new(name, regex)?.into())).collect())
}
@@ -145,7 +197,8 @@ pub mod tests {
regex: "".into(),
ignore: Vec::new(),
ignore_regex: Vec::new(),
compiled_ignore_regex: Vec::new(),
compiled_ignore_regex: RegexSet::default(),
ip: PatternIp::default(),
name: "".into(),
name_with_braces: "".into(),
}
@@ -170,6 +223,12 @@ pub mod tests {
pattern
}
pub fn number_pattern() -> Pattern {
let mut pattern = ok_pattern();
pattern.regex = "[0-1]+".to_string();
pattern
}
#[test]
fn setup_missing_information() {
let mut pattern;
@@ -245,7 +304,27 @@ pub mod tests {
}
#[test]
fn not_an_ignore() {
fn setup_yml() {
let mut pattern: Pattern = serde_yaml::from_str("{}").unwrap();
assert!(pattern.setup("name").is_err());
let mut pattern: Pattern = serde_yaml::from_str(r#"regex: "[abc]""#).unwrap();
assert!(pattern.setup("name").is_ok());
let mut pattern: Pattern = serde_yaml::from_str(r#"type: ip"#).unwrap();
assert!(pattern.setup("name").is_ok());
let mut pattern: Pattern = serde_yaml::from_str(r#"type: ipv4"#).unwrap();
assert!(pattern.setup("name").is_ok());
let mut pattern: Pattern = serde_yaml::from_str(r#"type: ipv6"#).unwrap();
assert!(pattern.setup("name").is_ok());
assert!(serde_yaml::from_str::<Pattern>(r#"type: zblorg"#).is_err());
}
#[test]
fn is_ignore() {
let mut pattern;
// ignore ok
@@ -257,13 +336,13 @@ pub mod tests {
pattern.ignore_regex.push("[de]".into());
pattern.setup("name").unwrap();
assert!(!pattern.not_an_ignore("a"));
assert!(!pattern.not_an_ignore("b"));
assert!(!pattern.not_an_ignore("c"));
assert!(!pattern.not_an_ignore("d"));
assert!(!pattern.not_an_ignore("e"));
assert!(pattern.not_an_ignore("f"));
assert!(pattern.not_an_ignore("g"));
assert!(pattern.not_an_ignore("h"));
assert!(pattern.is_ignore("a"));
assert!(pattern.is_ignore("b"));
assert!(pattern.is_ignore("c"));
assert!(pattern.is_ignore("d"));
assert!(pattern.is_ignore("e"));
assert!(!pattern.is_ignore("f"));
assert!(!pattern.is_ignore("g"));
assert!(!pattern.is_ignore("h"));
}
}

218
src/concepts/plugin.rs Normal file

@@ -0,0 +1,218 @@
use std::{collections::BTreeMap, io::Error, path, process::Stdio};
#[cfg(target_os = "macos")]
use std::os::darwin::fs::MetadataExt;
#[cfg(target_os = "freebsd")]
use std::os::freebsd::fs::MetadataExt;
#[cfg(target_os = "illumos")]
use std::os::illumos::fs::MetadataExt;
#[cfg(target_os = "linux")]
use std::os::linux::fs::MetadataExt;
#[cfg(target_os = "netbsd")]
use std::os::netbsd::fs::MetadataExt;
#[cfg(target_os = "openbsd")]
use std::os::openbsd::fs::MetadataExt;
#[cfg(target_os = "solaris")]
use std::os::solaris::fs::MetadataExt;
use serde::{Deserialize, Serialize};
use tokio::{
fs,
process::{Child, Command},
};
use tracing::{debug, warn};
// TODO The commented-out options block execution of the program
// while developing in my home directory.
// Some options may still be useful in production environments.
fn systemd_default_options(working_directory: &str) -> BTreeMap<String, Vec<String>> {
BTreeMap::from(
[
// reaction slice (does nothing if it doesn't exist)
("Slice", vec!["reaction.slice"]),
// Started in its own directory
("WorkingDirectory", vec![working_directory]),
// No file access except own directory
("ReadWritePaths", vec![working_directory]),
("ReadOnlyPaths", vec!["/"]),
("InaccessiblePaths", vec!["/boot", "/etc"]),
// Protect special filesystems
("PrivateDevices", vec!["true"]),
("PrivateMounts", vec!["true"]),
("PrivateTmp", vec!["true"]),
// ("PrivateUsers", vec!["true"]),
("ProcSubset", vec!["pid"]),
("ProtectClock", vec!["true"]),
("ProtectControlGroups", vec!["true"]),
#[cfg(not(debug_assertions))]
("ProtectHome", vec!["true"]),
("ProtectHostname", vec!["true"]),
("ProtectKernelLogs", vec!["true"]),
("ProtectKernelModules", vec!["true"]),
("ProtectKernelTunables", vec!["true"]),
("ProtectProc", vec!["invisible"]),
("ProtectSystem", vec!["strict"]),
// Various Protections
("LockPersonality", vec!["true"]),
("NoNewPrivileges", vec!["true"]),
("AmbientCapabilities", vec![""]),
("CapabilityBoundingSet", vec![""]),
// Isolate File
("RemoveIPC", vec!["true"]),
("RestrictNamespaces", vec!["true"]),
("RestrictSUIDSGID", vec!["true"]),
("SystemCallArchitectures", vec!["native"]),
(
"SystemCallFilter",
vec!["@system-service", "~@privileged", "~@resources", "~@setuid"],
),
// User
// FIXME Setting another user doesn't work, because of stdio pipe permission errors
// ("DynamicUser", vec!["true"]),
// ("User", vec!["reaction-plugin-test"]),
// Too restrictive
// ("NoExecPaths", vec!["/"]),
// ("RestrictAddressFamilies", vec![""]),
]
.map(|(k, v)| (k.into(), v.into_iter().map(|v| v.into()).collect())),
)
}
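The override behaviour of `systemd_setup` (user-defined options replace the defaults key by key) boils down to `BTreeMap::insert`. A minimal std-only sketch with made-up option values:

```rust
use std::collections::BTreeMap;

fn main() {
    // Defaults, as produced by systemd_default_options (values are examples)
    let mut options: BTreeMap<String, Vec<String>> = BTreeMap::from([
        ("ProtectHome".to_string(), vec!["true".to_string()]),
        ("PrivateTmp".to_string(), vec!["true".to_string()]),
    ]);
    // User-defined option overrides the default with the same key
    let user = BTreeMap::from([("ProtectHome".to_string(), vec!["false".to_string()])]);
    for (option, value) in user {
        options.insert(option, value);
    }
    assert_eq!(options["ProtectHome"], vec!["false"]);
    assert_eq!(options["PrivateTmp"], vec!["true"]);
}
```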
#[derive(Clone, Debug, Deserialize, Serialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Plugin {
#[serde(skip)]
pub name: String,
pub path: String,
/// Check that plugin file owner is root
#[serde(default = "_true")]
pub check_root: bool,
/// Enable systemd containerization
#[serde(default = "_true")]
pub systemd: bool,
/// Options for `run0`
#[serde(default)]
pub systemd_options: BTreeMap<String, Vec<String>>,
}
fn _true() -> bool {
true
}
// NOTE
// `run0` can be used for security customisation.
// With the --pipe option, raw stdio fds are passed through to the underlying command, so there is no overhead.
impl Plugin {
pub fn setup(&mut self, name: &str) -> Result<(), String> {
self.name = name.to_string();
if self.path.is_empty() {
return Err("can't specify empty plugin path".into());
}
// Only when testing, make relative paths absolute
#[cfg(debug_assertions)]
if !self.path.starts_with("/") {
self.path = format!(
"{}/{}",
std::env::current_dir()
.map_err(|err| format!("error on working directory: {err}"))?
.to_string_lossy(),
self.path
);
}
// Disallow relative paths
if !self.path.starts_with("/") {
return Err(format!("plugin paths must be absolute: {}", self.path));
}
Ok(())
}
/// Override default options with user-defined options, when defined.
pub fn systemd_setup(&self, working_directory: &str) -> BTreeMap<String, Vec<String>> {
let mut new_options = systemd_default_options(working_directory);
for (option, value) in self.systemd_options.iter() {
new_options.insert(option.clone(), value.clone());
}
new_options
}
pub async fn launch(&self, state_directory: &str) -> Result<Child, std::io::Error> {
// owner check
if self.check_root {
let path = self.path.clone();
let stat = fs::metadata(path).await?;
if stat.st_uid() != 0 {
return Err(Error::other("plugin file is not owned by root"));
}
}
let self_uid = if self.systemd {
Some(
// We want to check whether we're running as root
#[allow(unsafe_code)]
unsafe {
nix::libc::geteuid()
},
)
} else {
None
};
// Create plugin working directory (also state directory)
let plugin_working_directory = format!("{state_directory}/plugin_data/{}", self.name);
fs::create_dir_all(&plugin_working_directory).await?;
let mut command = if self_uid.is_some_and(|self_uid| self_uid == 0) {
let mut command = Command::new("run0");
// --pipe gives direct, non-emulated stdio access, for better performance.
command.arg("--pipe");
// run the command inside the same slice as reaction
command.arg("--slice-inherit");
// Make path absolute for systemd
let full_workdir = path::absolute(&plugin_working_directory)?;
let full_workdir = full_workdir.to_str().ok_or_else(|| {
std::io::Error::new(
std::io::ErrorKind::InvalidFilename,
format!(
"Could not absolutize plugin working directory {plugin_working_directory}"
),
)
})?;
let merged_systemd_options = self.systemd_setup(full_workdir);
// run0 options
for (option, values) in merged_systemd_options.iter() {
for value in values.iter() {
command.arg("--property").arg(format!("{option}={value}"));
}
}
command.arg(&self.path);
command
} else {
if self.systemd {
warn!("Disabling systemd because reaction does not run as root");
}
let mut command = Command::new(&self.path);
command.current_dir(plugin_working_directory);
command
};
command.arg("serve");
debug!(
"plugin {}: running command: {:?}",
self.name,
command.as_std()
);
command
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.env("RUST_BACKTRACE", "1")
.spawn()
}
}
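The `run0` invocation built in `launch` amounts to `--pipe --slice-inherit`, one `--property Key=Value` pair per systemd option, then the plugin path and the `serve` subcommand. A std-only sketch of the argument list (the plugin path and option values here are hypothetical):

```rust
fn main() {
    // Arguments assembled the same way Plugin::launch does for run0
    let mut args = vec!["--pipe".to_string(), "--slice-inherit".to_string()];
    let options = [("Slice", "reaction.slice"), ("PrivateTmp", "true")];
    for (option, value) in options {
        args.push("--property".into());
        args.push(format!("{option}={value}"));
    }
    args.push("/usr/lib/reaction/plugin".into()); // hypothetical plugin path
    args.push("serve".into());
    assert_eq!(args[0], "--pipe");
    assert!(args.contains(&"--property".to_string()));
    assert!(args.contains(&"Slice=reaction.slice".to_string()));
    assert_eq!(args.last().map(String::as_str), Some("serve"));
}
```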


@@ -1,57 +1,64 @@
use std::{cmp::Ordering, collections::BTreeMap, hash::Hash};
use reaction_plugin::StreamConfig;
use regex::RegexSet;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use super::{Filter, Patterns};
use super::{Filter, Patterns, merge_attrs};
#[derive(Clone, Debug, Deserialize, Serialize)]
#[cfg_attr(test, derive(Default))]
#[serde(deny_unknown_fields)]
pub struct Stream {
#[serde(default)]
cmd: Vec<String>,
pub cmd: Vec<String>,
#[serde(default)]
filters: BTreeMap<String, Filter>,
pub filters: BTreeMap<String, Filter>,
#[serde(skip)]
name: String,
pub name: String,
#[serde(skip)]
pub compiled_regex_set: RegexSet,
#[serde(skip)]
pub regex_index_to_filter_name: Vec<String>,
// Plugin-specific
#[serde(default, rename = "type", skip_serializing_if = "Option::is_none")]
pub stream_type: Option<String>,
#[serde(default, skip_serializing_if = "Value::is_null")]
pub options: Value,
}
impl Stream {
pub fn filters(&self) -> &BTreeMap<String, Filter> {
&self.filters
}
pub fn get_filter(&self, filter_name: &str) -> Option<&Filter> {
self.filters.get(filter_name)
}
pub fn name(&self) -> &str {
&self.name
}
pub fn cmd(&self) -> &Vec<String> {
&self.cmd
}
pub fn merge(&mut self, other: Stream) -> Result<(), String> {
if !(self.cmd.is_empty() || other.cmd.is_empty() || self.cmd == other.cmd) {
return Err("cmd has conflicting definitions".into());
}
if self.cmd.is_empty() {
self.cmd = other.cmd;
}
self.cmd = merge_attrs(self.cmd.clone(), other.cmd, Vec::default(), "cmd")?;
self.stream_type = merge_attrs(self.stream_type.clone(), other.stream_type, None, "type")?;
for (key, filter) in other.filters.into_iter() {
if self.filters.insert(key.clone(), filter).is_some() {
return Err(format!("filter {} is already defined. filter definitions can't be spread across multiple files.", key));
return Err(format!(
"filter {} is already defined. filter definitions can't be spread across multiple files.",
key
));
}
}
Ok(())
}
pub fn is_plugin(&self) -> bool {
self.stream_type
.as_ref()
.is_some_and(|stream_type| stream_type != "cmd")
}
pub fn setup(&mut self, name: &str, patterns: &Patterns) -> Result<(), String> {
self._setup(name, patterns)
.map_err(|msg| format!("stream {}: {}", name, msg))
@@ -67,11 +74,18 @@ impl Stream {
return Err("character '.' is not allowed in stream name".into());
}
if self.cmd.is_empty() {
return Err("cmd is empty".into());
}
if self.cmd[0].is_empty() {
return Err("cmd's first item is empty".into());
if !self.is_plugin() {
if self.cmd.is_empty() {
return Err("cmd is empty".into());
}
if self.cmd[0].is_empty() {
return Err("cmd's first item is empty".into());
}
if !self.options.is_null() {
return Err("can't define options without a plugin type".into());
}
} else if !self.cmd.is_empty() {
return Err("can't define cmd and a plugin type".into());
}
if self.filters.is_empty() {
@@ -82,8 +96,33 @@ impl Stream {
filter.setup(name, key, patterns)?;
}
let all_regexes: BTreeMap<_, _> = self
.filters
.values()
.flat_map(|filter| {
filter
.regex
.iter()
.map(|regex| (regex, filter.name.clone()))
})
.collect();
self.compiled_regex_set = RegexSet::new(all_regexes.keys())
.map_err(|err| format!("too many regexes on the filters of this stream: {err}"))?;
self.regex_index_to_filter_name = all_regexes.into_values().collect();
Ok(())
}
pub fn to_stream_config(&self) -> Result<StreamConfig, String> {
Ok(StreamConfig {
stream_name: self.name.clone(),
stream_type: self.stream_type.clone().ok_or_else(|| {
format!("stream {} doesn't load a plugin. this is a bug!", self.name)
})?,
config: self.options.clone().into(),
})
}
}
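The mapping from a `RegexSet` match index back to a filter name in `_setup` relies on `BTreeMap` iterating in key order: the set is built from the map's keys, so index `i` in the compiled set corresponds to the `i`-th value. A std-only sketch with made-up regexes and filter names:

```rust
use std::collections::BTreeMap;

fn main() {
    // regex -> owning filter name, deduplicated and ordered by regex
    let all_regexes: BTreeMap<&str, &str> = BTreeMap::from([
        ("failed login", "auth"),
        ("invalid user", "auth"),
        ("404", "http"),
    ]);
    // RegexSet::new(all_regexes.keys()) would see: "404", "failed login", "invalid user"
    let regex_index_to_filter_name: Vec<&str> = all_regexes.into_values().collect();
    // so set-match index i maps to the i-th filter name
    assert_eq!(regex_index_to_filter_name, vec!["http", "auth", "auth"]);
}
```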
impl PartialEq for Stream {
@@ -114,19 +153,12 @@ mod tests {
use super::*;
use crate::concepts::filter::tests::ok_filter;
fn default_stream() -> Stream {
Stream {
cmd: Vec::new(),
name: "".into(),
filters: BTreeMap::new(),
}
}
fn ok_stream() -> Stream {
let mut stream = default_stream();
stream.cmd = vec!["command".into()];
stream.filters.insert("name".into(), ok_filter());
stream
Stream {
cmd: vec!["command".into()],
filters: BTreeMap::from([("name".into(), ok_filter())]),
..Default::default()
}
}
#[test]


@@ -1,445 +0,0 @@
#[cfg(test)]
mod tests;
use std::{
collections::{BTreeMap, BTreeSet},
process::Stdio,
sync::{Arc, Mutex, MutexGuard},
};
use regex::Regex;
use tokio::sync::Semaphore;
use tracing::{error, info};
use crate::{
concepts::{Action, Filter, Match, MatchTime, Pattern, Time},
protocol::{Order, PatternStatus},
treedb::{
helpers::{to_match, to_matchtime, to_time, to_u64},
Database, Tree,
},
};
use super::shutdown::ShutdownToken;
/// Responsible for handling all runtime logic dedicated to a [`Filter`].
/// Notably handles incoming lines from [`super::stream::stream_manager`]
/// and orders from the [`super::socket::socket_manager`]
#[derive(Clone)]
pub struct FilterManager {
/// the Filter managed
filter: &'static Filter,
/// Permits to limit concurrency of actions execution
exec_limit: Option<Arc<Semaphore>>,
/// Permits to run pending actions on shutdown
shutdown: ShutdownToken,
/// Inner state.
/// Protected by a [`Mutex`], permitting FilterManager to be cloned
/// and concurrently owned by its stream manager, the socket manager,
/// and actions' delayed tasks.
state: Arc<Mutex<State>>,
}
#[derive(Debug, PartialEq, Eq)]
pub enum React {
NoMatch,
Match,
Exec,
}
#[allow(clippy::unwrap_used)]
impl FilterManager {
pub fn new(
filter: &'static Filter,
exec_limit: Option<Arc<Semaphore>>,
shutdown: ShutdownToken,
db: &mut Database,
now: Time,
) -> Result<Self, String> {
let this = Self {
filter,
exec_limit,
shutdown,
state: Arc::new(Mutex::new(State::new(
filter,
!filter.longuest_action_duration().is_zero(),
db,
now,
)?)),
};
this.clear_past_triggers_and_schedule_future_actions(now);
Ok(this)
}
pub fn handle_line(&self, line: &str, now: Time) -> React {
if let Some(match_) = self.filter.get_match(line) {
if self.handle_match(match_, now) {
React::Exec
} else {
React::Match
}
} else {
React::NoMatch
}
}
fn handle_match(&self, m: Match, now: Time) -> bool {
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().unwrap();
state.clear_past_matches(now);
let exec = match self.filter.retry() {
None => true,
Some(retry) => {
state.add_match(m.clone(), now);
// Number of stored times for this match >= configured retry for this filter
state.get_times(&m) >= retry as usize
}
};
if exec {
state.remove_match(&m);
state.add_trigger(m.clone(), now);
self.schedule_exec(m, now, now, &mut state);
}
exec
}
pub fn handle_order(
&self,
patterns: &BTreeMap<Arc<Pattern>, Regex>,
order: Order,
now: Time,
) -> BTreeMap<String, PatternStatus> {
let is_match = |match_: &Match| {
match_
.iter()
.zip(self.filter.patterns())
.filter_map(|(a_match, pattern)| {
patterns.get(pattern.as_ref()).map(|regex| (a_match, regex))
})
.all(|(a_match, regex)| regex.is_match(a_match))
};
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().unwrap();
let mut cs: BTreeMap<_, _> = {
let cloned_matches = state
.matches
.keys()
// match filtering
.filter(|match_| is_match(match_))
// clone necessary to drop all references to State
.cloned()
.collect::<Vec<_>>();
cloned_matches
.into_iter()
.map(|match_| {
// mutable State required here
if let Order::Flush = order {
state.remove_match(&match_);
}
let matches = state
.matches
.get(&match_)
.map(|times| times.len())
.unwrap_or(0);
(
match_,
PatternStatus {
matches,
..Default::default()
},
)
})
.collect()
};
let cloned_triggers = state
.triggers
.keys()
// match filtering
.filter(|match_| is_match(&match_.m))
// clone necessary to drop all references to State
.cloned()
.collect::<Vec<_>>();
for mt in cloned_triggers.into_iter() {
// mutable State required here
// Remove the match from the triggers
if let Order::Flush = order {
// delete specific (Match, Time) tuple
state.remove_trigger(&mt.m, &mt.t);
}
let m = mt.m.clone();
let pattern_status = cs.entry(m).or_default();
for action in self.filter.actions().values() {
let action_time = mt.t + action.after_duration().unwrap_or_default();
if action_time > now {
// Insert action
pattern_status
.actions
.entry(action.name().into())
.or_default()
.push(action_time.to_rfc3339().chars().take(19).collect());
// Execute the action early
if let Order::Flush = order {
exec_now(&self.exec_limit, action, mt.m.clone());
}
}
}
}
cs.into_iter().map(|(k, v)| (k.join(" "), v)).collect()
}
/// Schedule execution for a given Action and Match.
/// We check first if the trigger is still here
/// because pending actions can be flushed.
fn schedule_exec(&self, m: Match, t: Time, now: Time, state: &mut MutexGuard<State>) {
for action in self.filter.actions().values() {
let exec_time = t + action.after_duration().unwrap_or_default();
let m = m.clone();
if exec_time <= now {
if state.decrement_trigger(&m, t) {
exec_now(&self.exec_limit, action, m);
}
} else {
let this = self.clone();
tokio::spawn(async move {
let dur = (exec_time - now)
.to_std()
// Could cause an error if t + after < now
// In this case, 0 is fine
.unwrap_or_default();
// Wait either for end of sleep
// or reaction exiting
let exiting = tokio::select! {
_ = tokio::time::sleep(dur) => false,
_ = this.shutdown.wait() => true,
};
// Exec action if triggered hasn't been already flushed
if !exiting || action.on_exit() {
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = this.state.lock().unwrap();
if state.decrement_trigger(&m, t) {
exec_now(&this.exec_limit, action, m);
}
}
});
}
}
}
fn clear_past_triggers_and_schedule_future_actions(&self, now: Time) {
let longuest_action_duration = self.filter.longuest_action_duration();
let number_of_actions = self.filter.actions().len();
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().unwrap();
let cloned_triggers = state
.triggers
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<Vec<_>>();
for (mt, remaining) in cloned_triggers.into_iter() {
if remaining > 0 && mt.t + longuest_action_duration > now {
// Insert back the upcoming times
state.triggers.insert(mt.clone(), number_of_actions as u64);
// Schedule the upcoming times
self.schedule_exec(mt.m, mt.t, now, &mut state);
} else {
state.triggers.remove(&mt);
}
}
}
}
fn exec_now(exec_limit: &Option<Arc<Semaphore>>, action: &'static Action, m: Match) {
let exec_limit = exec_limit.clone();
tokio::spawn(async move {
// Wait for semaphore's permission, if it is Some
let _permit = match exec_limit {
#[allow(clippy::unwrap_used)] // We know the semaphore is not closed
Some(semaphore) => Some(semaphore.acquire_owned().await.unwrap()),
None => None,
};
// Construct command
let mut command = action.exec(&m);
info!("{}: run [{:?}]", &action, command.as_std());
if let Err(err) = command
.stdin(Stdio::null())
.stderr(Stdio::null())
.stdout(Stdio::piped())
.status()
.await
{
error!("{}: run [{:?}], code {}", &action, command.as_std(), err);
}
});
}
fn filter_ordered_times_db_name(filter: &Filter) -> String {
format!(
"filter_ordered_times_{}.{}",
filter.stream_name(),
filter.name()
)
}
fn filter_triggers_db_name(filter: &Filter) -> String {
format!("filter_triggers_{}.{}", filter.stream_name(), filter.name())
}
/// Internal state of a [`FilterManager`].
/// Holds all data on current matches and triggers.
struct State {
/// the Filter managed
filter: &'static Filter,
/// Has the filter at least an action with an after directive?
has_after: bool,
/// Saves all the current Matches for this Filter
/// Has duplicate values for a key
/// Not persisted
matches: BTreeMap<Match, BTreeSet<Time>>,
/// Alternative view of the current Matches for O(1) cleaning of old Matches
/// without added async Tasks to remove them
/// Persisted
ordered_times: Tree<Time, Match>,
/// Saves all the current Triggers for this Filter
/// Persisted
triggers: Tree<MatchTime, u64>,
}
impl State {
fn new(
filter: &'static Filter,
has_after: bool,
db: &mut Database,
now: Time,
) -> Result<Self, String> {
let mut this = Self {
filter,
has_after,
matches: BTreeMap::new(),
ordered_times: db.open_tree(
filter_ordered_times_db_name(filter),
filter.retry_duration().unwrap_or_default(),
|(key, value)| Ok((to_time(&key)?, to_match(&value)?)),
)?,
triggers: db.open_tree(
filter_triggers_db_name(filter),
filter.retry_duration().unwrap_or_default(),
|(key, value)| Ok((to_matchtime(&key)?, to_u64(&value)?)),
)?,
};
this.clear_past_matches(now);
this.load_matches_from_ordered_times();
Ok(this)
}
fn add_match(&mut self, m: Match, t: Time) {
let set = self.matches.entry(m.clone()).or_default();
set.insert(t);
self.ordered_times.insert(t, m);
}
fn add_trigger(&mut self, m: Match, t: Time) {
// We record triggered filters only when there is an action with an `after` directive
if self.has_after {
// Add the (Match, Time) to the triggers map
self.triggers
.insert(MatchTime { m, t }, self.filter.actions().len() as u64);
}
}
// Completely remove a Match from the matches
fn remove_match(&mut self, m: &Match) {
if let Some(set) = self.matches.get(m) {
for t in set {
self.ordered_times.remove(&t);
}
self.matches.remove(m);
}
}
/// Completely remove a Match from the triggers
fn remove_trigger(&mut self, m: &Match, t: &Time) {
self.triggers.remove(&MatchTime {
m: m.clone(),
t: *t,
});
}
/// Returns whether we should still execute an action for this (Match, Time) trigger
fn decrement_trigger(&mut self, m: &Match, t: Time) -> bool {
// We record triggered filters only when there is an action with an `after` directive
if self.has_after {
let mut exec_needed = false;
let mt = MatchTime { m: m.clone(), t };
let count = self.triggers.get(&mt);
if let Some(count) = count {
exec_needed = true;
if *count <= 1 {
self.triggers.remove(&mt);
} else {
self.triggers.insert(mt, count - 1);
}
}
exec_needed
} else {
true
}
}
fn clear_past_matches(&mut self, now: Time) {
let retry_duration = self.filter.retry_duration().unwrap_or_default();
while self
.ordered_times
.first_key_value()
.is_some_and(|(t, _)| *t + retry_duration < now)
{
#[allow(clippy::unwrap_used)]
// unwrap: we just checked in the condition that first is_some
let (t, m) = {
let (t, m) = self.ordered_times.first_key_value().unwrap();
(t.clone(), m.clone())
};
self.ordered_times.remove(&t);
if let Some(set) = self.matches.get(&m) {
let mut set = set.clone();
set.remove(&t);
if set.is_empty() {
self.matches.remove(&m);
} else {
self.matches.insert(m, set);
}
}
}
}
fn get_times(&self, m: &Match) -> usize {
match self.matches.get(m) {
Some(vec) => vec.len(),
None => 0,
}
}
fn load_matches_from_ordered_times(&mut self) {
for (t, m) in self.ordered_times.iter() {
let set = self.matches.entry(m.clone()).or_default();
set.insert(*t);
}
}
}

458
src/daemon/filter/mod.rs Normal file

@@ -0,0 +1,458 @@
#[cfg(test)]
pub mod tests;
mod state;
use std::{collections::BTreeMap, process::Stdio, sync::Arc};
use chrono::TimeZone;
use reaction_plugin::{ActionImpl, shutdown::ShutdownToken};
use regex::Regex;
use tokio::sync::{Mutex, MutexGuard, Semaphore};
use tracing::{error, info};
use crate::{
concepts::{Action, Duplicate, Filter, Match, Pattern, Time},
daemon::plugin::Plugins,
protocol::{Order, PatternStatus},
};
use treedb::Database;
use state::State;
/// Responsible for handling all runtime logic dedicated to a [`Filter`].
/// Notably handles incoming lines from [`super::stream::stream_manager`]
/// and orders from the [`super::socket::socket_manager`]
#[derive(Clone)]
pub struct FilterManager {
/// the Filter managed
filter: &'static Filter,
/// Permits to limit concurrency of actions execution
exec_limit: Option<Arc<Semaphore>>,
/// Permits to run pending actions on shutdown
shutdown: ShutdownToken,
/// Action Plugins
action_plugins: BTreeMap<&'static String, ActionImpl>,
/// Inner state.
/// Protected by a [`Mutex`], permitting FilterManager to be cloned
/// and concurrently owned by its stream manager, the socket manager,
/// and actions' delayed tasks.
state: Arc<Mutex<State>>,
}
/// The result of handling a line.
#[derive(Debug, PartialEq, Eq)]
pub enum React {
/// This line doesn't match
NoMatch,
/// This line matches, but no execution is triggered
Match,
/// This line matches, and an execution is triggered
Trigger,
}
#[allow(clippy::unwrap_used)]
impl FilterManager {
pub async fn new(
filter: &'static Filter,
exec_limit: Option<Arc<Semaphore>>,
shutdown: ShutdownToken,
db: &mut Database,
plugins: &mut Plugins,
now: Time,
) -> Result<Self, String> {
let mut action_plugins = BTreeMap::default();
for (action_name, action) in filter.actions.iter().filter(|action| action.1.is_plugin()) {
action_plugins.insert(
action_name,
plugins.get_action_impl(action.to_string()).ok_or_else(|| {
format!("action {action} doesn't load a plugin. this is a bug!")
})?,
);
}
let this = Self {
filter,
exec_limit,
shutdown,
action_plugins,
state: Arc::new(Mutex::new(State::new(filter, db, now).await?)),
};
Ok(this)
}
pub async fn handle_line(&self, line: &str, now: Time) -> React {
if let Some(match_) = self.filter.get_match(line) {
if self.handle_match(match_, now).await {
React::Trigger
} else {
React::Match
}
} else {
React::NoMatch
}
}
async fn handle_match(&self, m: Match, now: Time) -> bool {
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().await;
state.clear_past_matches(now).await;
// if Duplicate::Ignore and already triggered, skip
if state.triggers.contains_key(&m) && Duplicate::Ignore == self.filter.duplicate {
return false;
}
info!("{}: match {:?}", self.filter, &m);
let trigger = match self.filter.retry {
None => true,
Some(retry) => {
state.add_match(m.clone(), now).await;
// Number of stored times for this match >= configured retry for this filter
state.get_times(&m).await >= retry as usize
}
};
if trigger {
state.remove_match(&m).await;
let actions_left = if Duplicate::Extend == self.filter.duplicate {
// Get number of actions left from last trigger
state
.remove_trigger(&m)
.await
// Only one entry in the map because Duplicate::Extend
.and_then(|map| map.first_key_value().map(|(_, n)| *n))
} else {
None
};
state.add_trigger(m.clone(), now, actions_left).await;
self.schedule_exec(m, now, now, &mut state, false, actions_left)
.await;
}
trigger
}
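The retry logic in `handle_match` (a trigger fires once the number of stored times for a match reaches the configured `retry`) can be sketched with only std types; the counts and timestamps below are arbitrary:

```rust
use std::collections::BTreeSet;

fn main() {
    // Configured retry threshold (hypothetical)
    let retry = 3usize;
    // Times recorded for one match, as in State::add_match
    let mut times: BTreeSet<u64> = BTreeSet::new();
    let mut triggered = false;
    for now in [100u64, 200, 300] {
        times.insert(now);
        // get_times(&m) >= retry
        if times.len() >= retry {
            triggered = true;
            times.clear(); // remove_match once the trigger fires
        }
    }
    assert!(triggered);
    assert!(times.is_empty());
}
```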
pub async fn handle_trigger(
&self,
patterns: BTreeMap<Arc<Pattern>, String>,
now: Time,
) -> Result<(), String> {
let match_ = self.filter.get_match_from_patterns(patterns)?;
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().await;
state.remove_match(&match_).await;
state.add_trigger(match_.clone(), now, None).await;
self.schedule_exec(match_, now, now, &mut state, false, None)
.await;
Ok(())
}
pub async fn handle_order(
&self,
patterns: &BTreeMap<Arc<Pattern>, Regex>,
order: Order,
now: Time,
) -> BTreeMap<String, PatternStatus> {
let is_match = |match_: &Match| {
match_
.iter()
.zip(self.filter.patterns.as_ref())
.filter_map(|(a_match, pattern)| {
patterns.get(pattern.as_ref()).map(|regex| (a_match, regex))
})
.all(|(a_match, regex)| regex.is_match(a_match))
};
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().await;
let mut cs: BTreeMap<_, _> = {
let cloned_matches = state
.matches
.keys()
// match filtering
.filter(|match_| is_match(match_))
// clone necessary to drop all references to State
.cloned()
.collect::<Vec<_>>();
let mut cs = BTreeMap::new();
for match_ in cloned_matches {
// mutable State required here
if let Order::Flush = order {
state.remove_match(&match_).await;
}
let matches = state
.matches
.get(&match_)
.map(|times| times.len())
.unwrap_or(0);
cs.insert(
match_,
PatternStatus {
matches,
..Default::default()
},
);
}
cs
};
let cloned_triggers = state
.triggers
.keys()
// match filtering
.filter(|match_| is_match(match_))
// clone necessary to drop all references to State
.cloned()
.collect::<Vec<_>>();
for m in cloned_triggers.into_iter() {
let map = state.triggers.get(&m).unwrap().clone();
if let Order::Flush = order {
state.remove_trigger(&m).await;
}
for (t, remaining) in map {
if remaining > 0 {
let pattern_status = cs.entry(m.clone()).or_default();
for action in self.filter.filtered_actions_from_match(&m) {
let action_time = t + action.after_duration.unwrap_or_default();
if action_time > now {
// Pretty print time
let time = chrono::Local
.timestamp_opt(
action_time.as_secs() as i64,
action_time.subsec_nanos(),
)
.unwrap()
.to_rfc3339()
.chars()
.take(19)
.collect();
// Insert action
pattern_status
.actions
.entry(action.name.clone())
.or_default()
.push(time);
// Execute the action early
if let Order::Flush = order {
self.exec_now(action, m.clone(), t);
}
}
}
}
}
}
cs.into_iter().map(|(k, v)| (k.join(" "), v)).collect()
}
/// Schedule execution for a given Match.
/// We check first if the trigger is still here
/// because pending actions can be flushed.
async fn schedule_exec(
&self,
m: Match,
t: Time,
now: Time,
state: &mut MutexGuard<'_, State>,
startup: bool,
actions_left: Option<u64>,
) {
let actions = self
.filter
.filtered_actions_from_match(&m)
.into_iter()
// On startup, skip oneshot actions
.filter(|action| !startup || !action.oneshot)
// Skip actions that already ran for this trigger
.skip(match actions_left {
Some(actions_left) => {
self.filter.filtered_actions_from_match(&m).len() - actions_left as usize
}
None => 0,
});
// Scheduling each action
for action in actions {
let exec_time = t + action.after_duration.unwrap_or_default();
let m = m.clone();
if exec_time <= now {
if state.decrement_trigger(&m, t, false).await {
self.exec_now(action, m, t);
}
} else {
let this = self.clone();
let action_impl = self.action_plugins.get(&action.name).cloned();
tokio::spawn(async move {
let dur = exec_time - now;
// Wait either for end of sleep
// or reaction exiting
let exiting = tokio::select! {
_ = tokio::time::sleep(dur.into()) => false,
_ = this.shutdown.wait() => true,
};
// Run the action unless reaction is exiting (on_exit actions still run)
if !exiting || action.on_exit {
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = this.state.lock().await;
if state.decrement_trigger(&m, t, exiting).await {
exec_now(&this.exec_limit, this.shutdown, action, action_impl, m, t);
}
}
});
}
}
}
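When `schedule_exec` resumes a trigger with `actions_left` executions remaining, it skips the actions that already ran by computing `total - actions_left`. A std-only sketch of that skip logic (action names are made up; `saturating_sub` is added here as a guard the real code does not need):

```rust
/// Return the actions still to run: skip the ones already executed
/// when `actions_left` is Some, otherwise keep them all.
fn remaining<'a>(actions: &[&'a str], actions_left: Option<u64>) -> Vec<&'a str> {
    let skip = match actions_left {
        // Guard against a count larger than the list (assumed never to happen).
        Some(left) => actions.len().saturating_sub(left as usize),
        None => 0,
    };
    actions.iter().skip(skip).copied().collect()
}

fn main() {
    let actions = ["ban", "notify", "unban"];
    assert_eq!(remaining(&actions, None), vec!["ban", "notify", "unban"]);
    assert_eq!(remaining(&actions, Some(1)), vec!["unban"]);
}
```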
/// Clear past triggers and schedule future actions
pub async fn start(&self, now: Time) {
let longuest_action_duration = self.filter.longuest_action_duration;
let number_of_actions = self
.filter
.actions
.values()
// On startup, skip oneshot actions
.filter(|action| !action.oneshot)
.count() as u64;
#[allow(clippy::unwrap_used)] // propagating panics is ok
let mut state = self.state.lock().await;
let cloned_triggers = state
.triggers
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<Vec<_>>();
for (m, map) in cloned_triggers.into_iter() {
let map: BTreeMap<_, _> = map
.into_iter()
// Keep only up-to-date triggers
.filter(|(t, remaining)| *remaining > 0 && *t + longuest_action_duration > now)
// Reset action count
.map(|(t, _)| (t, number_of_actions))
.collect();
if map.is_empty() {
state.triggers.remove(&m).await;
} else {
// Filter duplicates
// unwrap is fine because map is not empty (see if)
let map = match self.filter.duplicate {
// Keep only last item
Duplicate::Extend => BTreeMap::from([map.into_iter().next_back().unwrap()]),
// Keep only first item
Duplicate::Ignore => BTreeMap::from([map.into_iter().next().unwrap()]),
// No filtering
Duplicate::Rerun => map,
};
state.triggers.insert(m.clone(), map.clone()).await;
for (t, _) in map {
// Schedule the upcoming times
self.schedule_exec(m.clone(), t, now, &mut state, true, None)
.await;
}
}
}
}
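The duplicate filtering in `start` relies on `BTreeMap`'s ordered iteration: `next_back()` yields the newest trigger time, `next()` the oldest. A minimal sketch with plain `u64` timestamps (the real `Time` and `Duplicate` types are richer):

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the filter's Duplicate policy.
enum Duplicate {
    Extend, // keep only the last (newest) trigger time
    Ignore, // keep only the first (oldest) trigger time
    Rerun,  // keep everything
}

/// Apply the duplicate policy to a map of trigger time -> remaining actions.
fn dedup(map: BTreeMap<u64, u64>, policy: Duplicate) -> BTreeMap<u64, u64> {
    match policy {
        Duplicate::Extend => map.into_iter().next_back().into_iter().collect(),
        Duplicate::Ignore => map.into_iter().next().into_iter().collect(),
        Duplicate::Rerun => map,
    }
}

fn main() {
    let map = BTreeMap::from([(1, 3), (5, 3), (9, 3)]);
    assert_eq!(dedup(map.clone(), Duplicate::Extend), BTreeMap::from([(9, 3)]));
    assert_eq!(dedup(map.clone(), Duplicate::Ignore), BTreeMap::from([(1, 3)]));
    assert_eq!(dedup(map, Duplicate::Rerun).len(), 3);
}
```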
fn exec_now(&self, action: &'static Action, m: Match, t: Time) {
let action_impl = self.action_plugins.get(&action.name).cloned();
exec_now(
&self.exec_limit,
self.shutdown.clone(),
action,
action_impl,
m,
t,
)
}
}
fn exec_now(
exec_limit: &Option<Arc<Semaphore>>,
shutdown: ShutdownToken,
action: &'static Action,
action_impl: Option<ActionImpl>,
m: Match,
t: Time,
) {
let exec_limit = exec_limit.clone();
tokio::spawn(async move {
// Move ShutdownToken in task
let _shutdown = shutdown;
match action_impl {
Some(action_impl) => {
info!(
"{action}: run {} {:?}",
action.action_type.clone().unwrap_or_default(),
&m,
);
// Sending action
if let Err(err) = action_impl
.tx
.send(reaction_plugin::Exec {
match_: m,
time: t.into(),
})
.await
{
error!("{action}: communication with plugin failed: {err}");
return;
}
}
None => {
// Wait for semaphore's permission, if it is Some
let _permit = match exec_limit {
#[allow(clippy::unwrap_used)] // We know the semaphore is not closed
Some(semaphore) => Some(semaphore.acquire_owned().await.unwrap()),
None => None,
};
// Construct command
let mut command = action.exec(&m);
info!("{action}: run [{:?}]", command.as_std());
if let Err(err) = command
.stdin(Stdio::null())
.stderr(Stdio::null())
.stdout(Stdio::piped())
.status()
.await
{
error!("{action}: run [{:?}] failed: {err}", command.as_std());
}
}
}
});
}
impl PartialEq for FilterManager {
fn eq(&self, other: &Self) -> bool {
self.filter == other.filter
}
}
impl Eq for FilterManager {}
impl Ord for FilterManager {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.filter.cmp(other.filter)
}
}
impl PartialOrd for FilterManager {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}

src/daemon/filter/state.rs Normal file

@@ -0,0 +1,655 @@
use std::collections::{BTreeMap, BTreeSet};
use serde_json::Value;
use treedb::{Database, Tree, helpers::*};
use crate::concepts::{Filter, Match, MatchTime, Time};
pub fn filter_ordered_times_db_name(filter: &Filter) -> String {
format!(
"filter_ordered_times_{}.{}",
filter.stream_name, filter.name
)
}
pub fn filter_triggers_old_db_name(filter: &Filter) -> String {
format!("filter_triggers_{}.{}", filter.stream_name, filter.name)
}
pub fn filter_triggers_db_name(filter: &Filter) -> String {
format!("filter_triggers2_{}.{}", filter.stream_name, filter.name)
}
/// Internal state of a [`FilterManager`].
/// Holds all data on current matches and triggers.
pub struct State {
/// the Filter managed
filter: &'static Filter,
/// Has the filter at least an action with an after directive?
has_after: bool,
/// Saves all the current Matches for this Filter
/// Has duplicate values for a key
/// Not persisted
pub matches: BTreeMap<Match, BTreeSet<Time>>,
/// Alternative view of the current Matches for O(1) cleaning of old Matches
/// without added async Tasks to remove them
/// Persisted
///
/// I'm pretty confident that Time will always be unique, because it has enough precision.
/// See this code that gives different times, even in a minimal loop:
/// ```rust
/// use reaction::concepts::now;
///
/// let mut res = vec![];
/// for _ in 0..10 {
/// let now = now();
/// res.push(format!("Now: {}", now.as_nanos()));
/// }
/// for s in res {
/// println!("{s}");
/// }
/// ```
pub ordered_times: Tree<Time, Match>,
/// Saves all the current Triggers for this Filter
/// Persisted
pub triggers: Tree<Match, BTreeMap<Time, u64>>,
}
impl State {
pub async fn new(
filter: &'static Filter,
db: &mut Database,
now: Time,
) -> Result<Self, String> {
let ordered_times = db
.open_tree(
filter_ordered_times_db_name(filter),
filter.retry_duration.unwrap_or_default(),
|(key, value)| Ok((to_time(&key)?, to_match(&value)?)),
)
.await?;
let mut triggers = db
.open_tree(
filter_triggers_db_name(filter),
filter.longuest_action_duration,
|(key, value)| Ok((to_match(&key)?, to_timemap(&value)?)),
)
.await?;
if triggers.is_empty() {
let old_triggers = db
.open_tree(
filter_triggers_old_db_name(filter),
filter.longuest_action_duration,
|(key, value)| Ok((to_matchtime(&key)?, to_u64(&value)?)),
)
.await?;
for (mt, n) in old_triggers.iter() {
triggers
.fetch_update(mt.m.clone(), |map| {
Some(match map {
None => [(mt.t, *n)].into(),
Some(mut map) => {
map.insert(mt.t, *n);
map
}
})
})
.await;
}
}
let mut this = Self {
filter,
has_after: !filter.longuest_action_duration.is_zero(),
matches: BTreeMap::new(),
ordered_times,
triggers,
};
this.clear_past_matches(now).await;
this.load_matches_from_ordered_times().await;
Ok(this)
}
pub async fn add_match(&mut self, m: Match, t: Time) {
let set = self.matches.entry(m.clone()).or_default();
set.insert(t);
self.ordered_times.insert(t, m).await;
}
pub async fn add_trigger(&mut self, m: Match, t: Time, action_count: Option<u64>) {
// We record triggered filters only when there is an action with an `after` directive
if self.has_after {
// Add the (Match, Time) to the triggers map
let n = action_count
.unwrap_or_else(|| self.filter.filtered_actions_from_match(&m).len() as u64);
self.triggers
.fetch_update(m, |map| {
Some(match map {
None => [(t, n)].into(),
Some(mut value) => {
value.insert(t, n);
value
}
})
})
.await;
}
}
/// Completely remove a Match from the matches
pub async fn remove_match(&mut self, m: &Match) {
if let Some(set) = self.matches.get(m) {
for t in set {
self.ordered_times.remove(t).await;
}
self.matches.remove(m);
}
}
/// Completely remove a Match from the triggers
pub async fn remove_trigger(&mut self, m: &Match) -> Option<BTreeMap<Time, u64>> {
self.triggers.remove(m).await
}
/// Returns whether we should still execute an action for this (Match, Time) trigger
pub async fn decrement_trigger(&mut self, m: &Match, t: Time, exiting: bool) -> bool {
// We record triggered filters only when there is an action with an `after` directive
if self.has_after {
let mut exec_needed = false;
let mt = MatchTime { m: m.clone(), t };
let count = self
.triggers
.get(&mt.m)
.and_then(|map| map.get(&mt.t))
.cloned();
if let Some(count) = count {
exec_needed = true;
if count <= 1 {
if !exiting {
self.triggers
.fetch_update(mt.m, |map| {
map.and_then(|mut map| {
map.remove(&mt.t);
if map.is_empty() { None } else { Some(map) }
})
})
.await;
}
// else don't do anything
// Because that will remove the entry in the DB, and make
// it forget this trigger.
// Maybe we should have 2 maps for triggers:
// - The current for action counting, not persisted
// - Another like ordered_times, Tree<Time, Match>, persisted
} else {
self.triggers
.fetch_update(mt.m, |map| {
map.map(|mut map| {
map.insert(mt.t, count - 1);
map
})
})
.await;
}
}
exec_needed
} else {
true
}
}
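The counting in `decrement_trigger` boils down to: look up the remaining-actions counter for a trigger time, remove the entry when the last action fires, otherwise decrement. A simplified std-only sketch (no `exiting` flag, no persistence layer):

```rust
use std::collections::BTreeMap;

/// Returns whether an action should still run for time `t`,
/// decrementing the remaining-actions counter as it goes.
fn decrement(map: &mut BTreeMap<u64, u64>, t: u64) -> bool {
    match map.get(&t).copied() {
        // Trigger already flushed: nothing left to execute.
        None => false,
        // Last remaining action: drop the trigger entry entirely.
        Some(count) if count <= 1 => {
            map.remove(&t);
            true
        }
        // More actions pending: just count down.
        Some(count) => {
            map.insert(t, count - 1);
            true
        }
    }
}

fn main() {
    let mut map = BTreeMap::from([(42, 2)]);
    assert!(decrement(&mut map, 42)); // 2 -> 1
    assert!(decrement(&mut map, 42)); // 1 -> entry removed
    assert!(!decrement(&mut map, 42)); // gone: no execution
    assert!(map.is_empty());
}
```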
pub async fn clear_past_matches(&mut self, now: Time) {
let retry_duration = self.filter.retry_duration.unwrap_or_default();
while self
.ordered_times
.first_key_value()
.is_some_and(|(t, _)| *t + retry_duration < now)
{
#[allow(clippy::unwrap_used)]
// unwrap: we just checked in the condition that first is_some
let (t, m) = {
let (t, m) = self.ordered_times.first_key_value().unwrap();
(*t, m.clone())
};
self.ordered_times.remove(&t).await;
if let Some(set) = self.matches.get(&m) {
let mut set = set.clone();
set.remove(&t);
if set.is_empty() {
self.matches.remove(&m);
} else {
self.matches.insert(m, set);
}
}
}
}
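`clear_past_matches` exploits the ordering of the persisted tree: it repeatedly inspects the smallest key and pops it while it falls outside the retention window. A std-only sketch of the same pruning loop over a plain `BTreeMap` (values here are just labels):

```rust
use std::collections::BTreeMap;

/// Drop entries whose time plus `retention` is still before `now`,
/// walking the ordered map from its smallest key.
fn clear_past(times: &mut BTreeMap<u64, String>, retention: u64, now: u64) {
    while times
        .first_key_value()
        .is_some_and(|(t, _)| *t + retention < now)
    {
        // unwrap is safe: the loop condition just saw a first entry
        let t = *times.first_key_value().unwrap().0;
        times.remove(&t);
    }
}

fn main() {
    let mut times = BTreeMap::from([
        (1, "old".to_string()),
        (8, "recent".to_string()),
        (12, "future".to_string()),
    ]);
    clear_past(&mut times, 2, 10); // keep entries where t + 2 >= 10
    assert_eq!(times.keys().copied().collect::<Vec<_>>(), vec![8, 12]);
}
```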
pub async fn get_times(&self, m: &Match) -> usize {
match self.matches.get(m) {
Some(vec) => vec.len(),
None => 0,
}
}
async fn load_matches_from_ordered_times(&mut self) {
for (t, m) in self.ordered_times.iter() {
let set = self.matches.entry(m.clone()).or_default();
set.insert(*t);
}
}
}
/// Tries to convert a [`Value`] into a [`MatchTime`]
pub fn to_matchtime(val: &Value) -> Result<MatchTime, String> {
let map = val.as_object().ok_or("not an object")?;
Ok(MatchTime {
m: to_match(map.get("m").ok_or("no m in object")?)?,
t: to_time(map.get("t").ok_or("no t in object")?)?,
})
}
#[cfg(test)]
mod tests {
use std::collections::{BTreeMap, HashMap};
use serde_json::{Map, Value, json};
use crate::{
concepts::{
Action, Duplicate, Filter, MatchTime, Pattern, Time, filter_tests::ok_filter, now,
},
tests::TempDatabase,
};
use super::{State, to_matchtime};
// Tests `new`, `clear_past_matches` and `load_matches_from_ordered_times`
#[tokio::test]
async fn state_new() {
let patterns = Pattern::new_map("az", "[a-z]+").unwrap();
let filter = Filter::new_static(
vec![
Action::new(vec!["true"], None, false, "s1", "f1", "a1", &patterns, 0),
Action::new(
vec!["true"],
Some("3s"),
false,
"s1",
"f1",
"a2",
&patterns,
0,
),
],
vec!["test <az>"],
Some(3),
Some("2s"),
"s1",
"f1",
Duplicate::default(),
&patterns,
);
let now = Time::from_secs(1234567);
// DateTime::parse_from_rfc3339("2025-07-10T12:35:00.000+00:00")
// .unwrap()
// .with_timezone(&Local);
let now_plus_1m = now + Time::from_mins(1);
let now_plus_1m01 = now_plus_1m + Time::from_secs(1);
let now_less_1m = now - Time::from_mins(1);
let now_less_1s = now - Time::from_secs(1);
let now_less_4s = now - Time::from_secs(4);
let now_less_5s = now - Time::from_secs(5);
let triggers = [
// format v1
(
"filter_triggers_s1.f1".into(),
HashMap::from([
// Will stay
(
json!({
"t": now_plus_1m,
"m": ["one"],
}),
json!(1),
),
(
json!({
"t": now_less_1s,
"m": ["one"],
}),
json!(1),
),
// Will not get cleaned because it's FilterManager's task
(
json!({
"t": now_less_5s,
"m": ["one"],
}),
json!(1),
),
]),
),
// format v2 (since v2.2.0)
(
"filter_triggers2_s1.f1".into(),
HashMap::from([(
json!(["one"]),
json!({
// Will stay
now_plus_1m.as_nanos().to_string(): 1,
now_less_1s.as_nanos().to_string(): 1,
// Will not get cleaned because it's FilterManager's task
now_less_5s.as_nanos().to_string(): 1,
}),
)]),
),
];
for trigger_db in triggers {
let mut db = TempDatabase::from_loaded_db(HashMap::from([
(
"filter_ordered_times_s1.f1".into(),
HashMap::from([
// Will stay
(now_plus_1m.as_nanos().to_string().into(), ["one"].into()),
(now_plus_1m01.as_nanos().to_string().into(), ["one"].into()),
(now_less_1s.as_nanos().to_string().into(), ["two"].into()), // stays because retry: 2s
// Will get cleaned
(now_less_4s.as_nanos().to_string().into(), ["two"].into()),
(now_less_5s.as_nanos().to_string().into(), ["three"].into()),
(now_less_1m.as_nanos().to_string().into(), ["two"].into()),
]),
),
trigger_db,
]))
.await;
let state = State::new(filter, &mut db, now).await.unwrap();
assert_eq!(
state.ordered_times.tree(),
&BTreeMap::from([
(now_less_1s, vec!["two".into()]),
(now_plus_1m, vec!["one".into()]),
(now_plus_1m01, vec!["one".into()]),
])
);
assert_eq!(
state.matches,
BTreeMap::from([
(vec!["one".into()], [now_plus_1m, now_plus_1m01].into()),
(vec!["two".into()], [now_less_1s].into()),
])
);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(
vec!["one".into()],
BTreeMap::from([
(now_less_5s, 1u64),
(now_less_1s, 1u64),
(now_plus_1m, 1u64),
]),
)])
);
}
}
#[tokio::test]
async fn state_match_add_remove() {
let filter = Box::leak(Box::new(ok_filter()));
let one = vec!["one".into()];
let now = Time::from_secs(1234567);
let now_less_1s = now - Time::from_secs(1);
let now_less_4s = now - Time::from_secs(4);
let mut db = TempDatabase::default().await;
let mut state = State::new(filter, &mut db, now).await.unwrap();
assert!(state.ordered_times.tree().is_empty());
assert!(state.matches.is_empty());
// Add non-previously added match
state.add_match(one.clone(), now_less_1s).await;
assert_eq!(
state.ordered_times.tree(),
&BTreeMap::from([(now_less_1s, one.clone()),])
);
assert_eq!(
state.matches,
BTreeMap::from([(one.clone(), [now_less_1s].into())])
);
// Add previously added match
state.add_match(one.clone(), now_less_4s).await;
assert_eq!(
state.ordered_times.tree(),
&BTreeMap::from([(now_less_1s, one.clone()), (now_less_4s, one.clone())])
);
assert_eq!(
state.matches,
BTreeMap::from([(one.clone(), [now_less_1s, now_less_4s].into())])
);
// Remove added match
state.remove_match(&one).await;
assert!(state.ordered_times.tree().is_empty());
assert!(state.matches.is_empty());
}
#[tokio::test]
async fn state_trigger_no_after_add_remove_decrement() {
let filter = Box::leak(Box::new(ok_filter()));
let one = vec!["one".into()];
let now = now();
let mut db = TempDatabase::default().await;
let mut state = State::new(filter, &mut db, now).await.unwrap();
assert!(state.triggers.tree().is_empty());
// Add unique trigger
state.add_trigger(one.clone(), now, None).await;
// Nothing is really added
assert!(state.triggers.tree().is_empty());
// Called immediately afterwards, it returns true
assert!(state.decrement_trigger(&one, now, false).await);
}
#[tokio::test]
async fn state_trigger_has_after_add_remove_decrement() {
let patterns = Pattern::new_map("az", "[a-z]+").unwrap();
let filter = Filter::new_static(
vec![
Action::new(vec!["true"], None, false, "s1", "f1", "a1", &patterns, 0),
Action::new(
vec!["true"],
Some("1s"),
false,
"s1",
"f1",
"a2",
&patterns,
0,
),
Action::new(
vec!["true"],
Some("3s"),
false,
"s1",
"f1",
"a3",
&patterns,
0,
),
],
vec!["test <az>"],
Some(3),
Some("2s"),
"s1",
"f1",
Duplicate::default(),
&patterns,
);
let one = vec!["one".into()];
let now = now();
let now_plus_1s = now + Time::from_secs(1);
let mut db = TempDatabase::default().await;
let mut state = State::new(filter, &mut db, now).await.unwrap();
assert!(state.triggers.tree().is_empty());
// Add unique trigger
state.add_trigger(one.clone(), now, None).await;
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 3)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 2)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 1)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert!(state.triggers.tree().is_empty());
// Decrement → false
assert!(!state.decrement_trigger(&one, now, false).await);
// Add unique trigger (but decrement exiting-like)
state.add_trigger(one.clone(), now, None).await;
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 3)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, true).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 2)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, true).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 1)].into())])
);
// Decrement but exiting → true, does nothing
assert!(state.decrement_trigger(&one, now, true).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now, 1)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert!(state.triggers.tree().is_empty());
// Decrement → false
assert!(!state.decrement_trigger(&one, now, false).await);
// Add trigger with neighbour
state.add_trigger(one.clone(), now, None).await;
state.add_trigger(one.clone(), now_plus_1s, None).await;
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 3)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 2)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 1)].into())])
);
// Decrement → true
assert!(state.decrement_trigger(&one, now, false).await);
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now_plus_1s, 3)].into())])
);
// Decrement → false
assert!(!state.decrement_trigger(&one, now, false).await);
// Remove neighbour
state.remove_trigger(&one).await;
assert!(state.triggers.tree().is_empty());
// Add two neighbour triggers
state.add_trigger(one.clone(), now, None).await;
state.add_trigger(one.clone(), now_plus_1s, None).await;
assert_eq!(
state.triggers.tree(),
&BTreeMap::from([(one.clone(), [(now_plus_1s, 3), (now, 3)].into())])
);
// Remove them
state.remove_trigger(&one).await;
assert!(state.triggers.tree().is_empty());
}
#[test]
fn test_to_matchtime() {
assert_eq!(
to_matchtime(&Value::Object(Map::from_iter(
BTreeMap::from([
("m".into(), ["plip", "ploup"].into()),
("t".into(), "12345678".into()),
])
.into_iter()
))),
Ok(MatchTime {
m: vec!["plip".into(), "ploup".into()],
t: Time::from_nanos(12345678),
})
);
assert!(
to_matchtime(&Value::Object(Map::from_iter(
BTreeMap::from([("m".into(), ["plip", "ploup"].into()),]).into_iter()
)))
.is_err()
);
assert!(
to_matchtime(&Value::Object(Map::from_iter(
BTreeMap::from([("t".into(), 12345678.into()),]).into_iter()
)))
.is_err()
);
assert!(
to_matchtime(&Value::Object(Map::from_iter(
BTreeMap::from([("m".into(), "ploup".into()), ("t".into(), 12345678.into()),])
.into_iter()
)))
.is_err()
);
assert!(
to_matchtime(&Value::Object(Map::from_iter(
BTreeMap::from([
("m".into(), ["plip", "ploup"].into()),
("t".into(), [1234567].into()),
])
.into_iter()
)))
.is_err()
);
}
}

File diff suppressed because it is too large


@@ -3,114 +3,180 @@ use std::{
error::Error,
path::PathBuf,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
atomic::{AtomicBool, Ordering},
},
};
use chrono::Local;
use futures::future::join_all;
use reaction_plugin::shutdown::{ShutdownController, ShutdownDelegate, ShutdownToken};
use tokio::{
select,
signal::unix::{signal, SignalKind},
signal::unix::{SignalKind, signal},
sync::Semaphore,
};
use tracing::{debug, info};
use tracing::{debug, error, info};
use treedb::Database;
use crate::{concepts::Config, treedb::Database};
use crate::concepts::{Config, now};
use filter::FilterManager;
pub use shutdown::{ShutdownController, ShutdownDelegate, ShutdownToken};
use socket::socket_manager;
use stream::stream_manager;
pub use filter::React;
use plugin::Plugins;
use socket::Socket;
use stream::StreamManager;
#[cfg(test)]
pub use filter::tests;
mod filter;
mod shutdown;
mod plugin;
mod socket;
mod stream;
mod utils;
pub async fn daemon(
config_path: PathBuf,
socket: PathBuf,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let config: &'static Config = Box::leak(Box::new(Config::from_path(&config_path)?));
if !config.start() {
return Err("a start command failed, exiting.".into());
}
let mut stream_task_handles = Vec::new();
pub async fn daemon(config_path: PathBuf, socket: PathBuf) -> i32 {
// Load config or quit
let config: &'static Config = Box::leak(Box::new(match Config::from_path(&config_path) {
Ok(config) => config,
Err(err) => {
error!("{err}");
return 1;
}
}));
// Cancellation Token
let shutdown = ShutdownController::new();
// Semaphore limiting action execution concurrency
let exec_limit = match config.concurrency() {
0 => None,
n => Some(Arc::new(Semaphore::new(n))),
};
// Open Database
let mut db = Database::open(config).await?;
// Filter managers
let now = Local::now();
let mut state = HashMap::new();
for stream in config.streams().values() {
let mut filter_managers = HashMap::new();
for filter in stream.filters().values() {
let manager =
FilterManager::new(filter, exec_limit.clone(), shutdown.token(), &mut db, now)?;
filter_managers.insert(filter, manager);
}
state.insert(stream, filter_managers.clone());
let token = shutdown.token();
stream_task_handles.push(tokio::spawn(async move {
stream_manager(stream, filter_managers, token).await
}));
}
drop(exec_limit);
// Run database task
let mut db_status_rx = {
let token = shutdown.token();
db.manager(token)
};
// Close streams when we receive a quit signal
// Cancel when we receive a quit signal
let signal_received = Arc::new(AtomicBool::new(false));
handle_signals(shutdown.delegate(), signal_received.clone())?;
// Run socket task
{
let socket = socket.to_owned();
let token = shutdown.token();
tokio::spawn(async move { socket_manager(config, socket, state, token).await });
if let Err(err) = handle_signals(shutdown.delegate(), signal_received.clone()) {
error!("{err}");
return 1;
}
// Wait for all streams to quit
for task_handle in stream_task_handles {
let _ = task_handle.await;
let mut db = None;
let mut config_started = false;
let mut daemon_err = false;
// Start the real daemon 👹
if let Err(err) = daemon_start(
config,
socket,
shutdown.token(),
&mut db,
&mut config_started,
)
.await
{
error!("{err}");
daemon_err = true;
}
// Release last db's sender
let mut db_status = None;
if let Some(db) = db {
db_status = Some(db.quit());
}
debug!("Asking for all tasks to quit...");
shutdown.ask_shutdown();
debug!("Waiting for all tasks to quit...");
shutdown.wait_shutdown().await;
shutdown.wait_all_task_shutdown().await;
let db_status = db_status_rx.try_recv();
let stop_ok = config.stop();
if let Ok(Err(err)) = db_status {
Err(format!("database error: {}", err).into())
} else if !signal_received.load(Ordering::SeqCst) {
Err("quitting because all streams finished".into())
} else if !stop_ok {
Err("while executing stop command".into())
} else {
Ok(())
let mut stop_ok = true;
if config_started {
stop_ok = config.stop();
}
if daemon_err || !stop_ok {
return 1;
} else if let Some(mut db_status) = db_status
&& let Ok(Err(err)) = db_status.try_recv()
{
error!("database error: {}", err);
return 1;
} else if !signal_received.load(Ordering::SeqCst) {
error!("quitting because all streams finished");
return 1;
} else {
return 0;
}
}
async fn daemon_start(
config: &'static Config,
socket: PathBuf,
shutdown: ShutdownToken,
db: &mut Option<Database>,
config_started: &mut bool,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let mut plugins = Plugins::new(config, shutdown.clone()).await?;
// Open Database
let (cancellation, task_tracker) = shutdown.clone().split();
let path = PathBuf::from(config.state_directory.clone());
*db = Some(Database::open(&path, cancellation, task_tracker).await?);
let (state, stream_managers) = {
// Semaphore limiting action execution concurrency
let exec_limit = match config.concurrency {
0 => None,
n => Some(Arc::new(Semaphore::new(n))),
};
// Filter managers
let now = now();
let mut state = HashMap::new();
let mut stream_managers = Vec::new();
for stream in config.streams.values() {
let mut filter_managers = HashMap::new();
for filter in stream.filters.values() {
let manager = FilterManager::new(
filter,
exec_limit.clone(),
shutdown.clone(),
db.as_mut().unwrap(),
&mut plugins,
now,
)
.await?;
filter_managers.insert(filter, manager);
}
state.insert(stream, filter_managers.clone());
stream_managers.push(
StreamManager::new(stream, filter_managers, shutdown.clone(), &mut plugins).await?,
);
}
(state, stream_managers)
};
// Open socket and run task
let socket = Socket::open(socket).await?;
socket.manager(config, state, shutdown.clone());
// All core systems started; we can run start commands
*config_started = true;
if !config.start() {
return Err("a start command failed, exiting.".into());
}
// Finish plugin setup
plugins.start().await?;
plugins.manager();
// Start Stream managers
let stream_task_handles = stream_managers.into_iter().filter_map(|stream_manager| {
let standalone = stream_manager.is_standalone();
let handle = tokio::spawn(async move { stream_manager.start().await });
// Only wait for standalone streams
if standalone { Some(handle) } else { None }
});
// Wait for all streams to quit
join_all(stream_task_handles).await;
Ok(())
}
fn handle_signals(

src/daemon/plugin/mod.rs Normal file

@@ -0,0 +1,405 @@
use std::{
collections::{BTreeMap, BTreeSet},
fmt::Display,
io,
ops::{Deref, DerefMut},
process::ExitStatus,
time::Duration,
};
use futures::{StreamExt, future::join_all};
use reaction_plugin::{
ActionConfig, ActionImpl, Hello, PluginInfo, PluginInfoClient, StreamConfig, StreamImpl,
};
use remoc::Connect;
use tokio::{
process::{Child, ChildStderr},
time::timeout,
};
use tracing::{error, info};
use crate::{
concepts::{Action, Config, Plugin, Stream},
daemon::{ShutdownToken, stream::reader_to_stream, utils::kill_child},
};
pub struct PluginManager {
child: Child,
shutdown: ShutdownToken,
plugin: &'static Plugin,
plugin_info: PluginInfoClient,
streams: BTreeSet<String>,
actions: BTreeSet<String>,
}
impl Deref for PluginManager {
type Target = PluginInfoClient;
fn deref(&self) -> &Self::Target {
&self.plugin_info
}
}
impl DerefMut for PluginManager {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.plugin_info
}
}
impl PluginManager {
async fn new(
plugin: &'static Plugin,
state_directory: &str,
shutdown: ShutdownToken,
) -> Result<Self, String> {
let mut child = plugin
.launch(state_directory)
.await
.map_err(|err| systemd_error(plugin, "could not launch plugin", err))?;
{
let stderr = child.stderr.take().unwrap();
// let shutdown = shutdown.clone();
tokio::spawn(async move { handle_stderr(stderr, plugin.name.clone()).await });
}
let stdin = child.stdin.take().unwrap();
let stdout = child.stdout.take().unwrap();
let (conn, _tx, mut rx): (
_,
remoc::rch::base::Sender<()>,
remoc::rch::base::Receiver<PluginInfoClient>,
) = Connect::io(remoc::Cfg::default(), stdout, stdin)
.await
.map_err(|err| {
systemd_error(plugin, "could not init communication with plugin", err)
})?;
tokio::spawn(conn);
let mut plugin_info = rx
.recv()
.await
.map_err(|err| format!("could not retrieve initial information from plugin: {err}"))?
.ok_or("could not retrieve initial information from plugin: no data")?;
let manifest = plugin_info
.manifest()
.await
.map_err(|err| format!("error while getting plugin {} manifest: {err}", plugin.name))?;
let my_hello = Hello::new();
if let Err(hint) = Hello::is_compatible(&my_hello, &manifest.hello) {
return Err(format!(
"reaction can't handle plugin {} with incompatible version {}.{}: current version: {}.{}. {}",
plugin.name,
manifest.hello.version_major,
manifest.hello.version_minor,
my_hello.version_major,
my_hello.version_minor,
hint
));
}
Ok(Self {
child,
shutdown,
plugin,
plugin_info,
streams: manifest.streams,
actions: manifest.actions,
})
}
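The handshake above compares the host's `Hello` version against the plugin manifest's. One plausible reading of such a major/minor check, sketched here as an assumption (the real `Hello::is_compatible` may apply different rules): majors must match exactly, and the host's minor must be at least the plugin's.

```rust
/// Hypothetical major/minor compatibility rule: majors must match,
/// and the host must be at least as new as the plugin's minor.
fn is_compatible(host: (u32, u32), plugin: (u32, u32)) -> Result<(), String> {
    if host.0 != plugin.0 {
        Err("major protocol versions differ; upgrade reaction or the plugin".into())
    } else if host.1 < plugin.1 {
        Err("plugin needs a newer reaction".into())
    } else {
        Ok(())
    }
}

fn main() {
    assert!(is_compatible((2, 3), (2, 1)).is_ok()); // newer host: fine
    assert!(is_compatible((2, 1), (2, 3)).is_err()); // plugin too new
    assert!(is_compatible((1, 0), (2, 0)).is_err()); // major mismatch
}
```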
async fn handle_child(mut self) {
const PLUGIN_STOP_GRACE_TIME: u64 = 15;
// wait either for the child process to exit on its own or for the shutdown signal
tokio::select! {
status = self.child.wait() => {
self.print_exit(status);
return;
}
_ = self.shutdown.wait() => {}
}
match timeout(
Duration::from_secs(PLUGIN_STOP_GRACE_TIME),
self.plugin_info.close(),
)
.await
{
Ok(Ok(())) => (),
Ok(Err(err)) => {
error!("plugin {}: {err}", self.plugin.name);
}
// got timeout
Err(_) => {
error!(
"plugin {} did not respond to close request in time, killing",
self.plugin.name
);
kill_child(self.child, format!("plugin {}", self.plugin.name), 5).await;
}
}
}
fn print_exit(&self, status: io::Result<ExitStatus>) {
match status {
Ok(status) => match status.code() {
Some(code) => {
error!(
"plugin {}: process exited. exit code: {}",
self.plugin.name, code
);
}
None => {
error!("plugin {}: process exited.", self.plugin.name);
}
},
Err(err) => {
error!("plugin {}: process exited. {err}", self.plugin.name);
}
}
}
}
fn systemd_error(plugin: &Plugin, message: &str, err: impl Display) -> String {
if plugin.systemd {
format!(
"{message}: {err}. \
`plugins.{0}.systemd` is set to true, so this may be an issue with systemd's run0. \
please make sure `sudo run0 ls /` returns the same thing as `sudo ls /` as a test. \
if run0 can't be found or doesn't output anything, set `plugins.{0}.systemd` to false.",
plugin.name,
)
} else {
format!("{message}: {err}")
}
}
async fn handle_stderr(stderr: ChildStderr, plugin_name: String) {
// read lines until shutdown
let lines = reader_to_stream(stderr);
tokio::pin!(lines);
loop {
match lines.next().await {
Some(Ok(line)) => {
// Note: this can't be factored out because the tracing::event! macro
// requires its log level to be a constant.
if line.starts_with("DEBUG ") {
tracing::debug!("plugin {plugin_name}: {}", line.split_at(6).1)
} else if line.starts_with("INFO ") {
tracing::info!("plugin {plugin_name}: {}", line.split_at(5).1)
} else if line.starts_with("WARN ") {
tracing::warn!("plugin {plugin_name}: {}", line.split_at(5).1)
} else if line.starts_with("ERROR ") {
tracing::error!("plugin {plugin_name}: {}", line.split_at(6).1)
} else {
// If there is no log level, we assume it's an error (it may be a panic or similar)
tracing::error!("plugin {plugin_name}: {}", line)
}
}
Some(Err(err)) => {
tracing::error!("while trying to read plugin {plugin_name} stderr: {err}");
break;
}
None => break,
}
}
}
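The prefix matching in `handle_stderr` can be expressed as a small pure function when the level is returned as data rather than passed to the `tracing` macros (which, as the comment notes, need a constant level). A std-only sketch with the same fallback-to-error behavior:

```rust
/// Split a plugin stderr line into a log level and message.
/// Lines without a known prefix are treated as errors, mirroring
/// the fallback for panics and other raw output.
fn parse_level(line: &str) -> (&'static str, &str) {
    for (prefix, level) in [
        ("DEBUG ", "DEBUG"),
        ("INFO ", "INFO"),
        ("WARN ", "WARN"),
        ("ERROR ", "ERROR"),
    ] {
        if let Some(rest) = line.strip_prefix(prefix) {
            return (level, rest);
        }
    }
    ("ERROR", line)
}

fn main() {
    assert_eq!(parse_level("INFO plugin ready"), ("INFO", "plugin ready"));
    assert_eq!(
        parse_level("panicked at src/main.rs"),
        ("ERROR", "panicked at src/main.rs")
    );
}
```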
#[derive(Default)]
pub struct Plugins {
/// Loaded plugins
plugins: BTreeMap<String, PluginManager>,
/// stream_type to plugin name
stream_to_plugin: BTreeMap<String, String>,
/// action_type to plugin name
action_to_plugin: BTreeMap<String, String>,
/// plugin name to config list
plugin_to_confs: BTreeMap<String, (Vec<&'static Stream>, Vec<&'static Action>)>,
/// stream name to impl
stream_to_impl: BTreeMap<String, StreamImpl>,
/// action name to impl
action_to_impl: BTreeMap<String, ActionImpl>,
}
impl Plugins {
pub async fn new(config: &'static Config, shutdown: ShutdownToken) -> Result<Self, String> {
let mut this = Self::default();
for plugin in config.plugins.values() {
let name = plugin.name.clone();
this.load_plugin(plugin, &config.state_directory, shutdown.clone())
.await
.map_err(|err| format!("plugin {name}: {err}"))?;
}
this.aggregate_plugin_configs(config)?;
this.load_plugin_configs().await?;
Ok(this)
}
async fn load_plugin(
&mut self,
plugin: &'static Plugin,
state_directory: &str,
shutdown: ShutdownToken,
) -> Result<(), String> {
let name = plugin.name.clone();
let manager = PluginManager::new(plugin, state_directory, shutdown).await?;
for stream in &manager.streams {
if let Some(name) = self.stream_to_plugin.insert(stream.clone(), name.clone()) {
return Err(format!(
"plugin {name} already exposed a stream with type name '{stream}'",
));
}
}
for action in &manager.actions {
if let Some(name) = self.action_to_plugin.insert(action.clone(), name.clone()) {
return Err(format!(
"plugin {name} already exposed an action with type name '{action}'",
));
}
}
self.plugins.insert(name, manager);
Ok(())
}
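`load_plugin` detects two plugins claiming the same stream or action type name through `BTreeMap::insert`, which returns the previous value when the key already existed. A std-only sketch of that duplicate check (plugin and type names are invented):

```rust
use std::collections::BTreeMap;

/// Register `type_name -> plugin`, rejecting duplicates:
/// insert returning Some(..) means another plugin already
/// claimed that type name.
fn register(
    map: &mut BTreeMap<String, String>,
    type_name: &str,
    plugin: &str,
) -> Result<(), String> {
    if let Some(prev) = map.insert(type_name.into(), plugin.into()) {
        return Err(format!(
            "plugin {prev} already exposed a stream with type name '{type_name}'"
        ));
    }
    Ok(())
}

fn main() {
    let mut map = BTreeMap::new();
    assert!(register(&mut map, "journald", "systemd-plugin").is_ok());
    assert!(register(&mut map, "journald", "other-plugin").is_err());
}
```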
fn aggregate_plugin_configs(&mut self, config: &'static Config) -> Result<(), String> {
for stream in config.streams.values() {
if stream.is_plugin()
&& let Some(stream_type) = &stream.stream_type
{
let plugin_name = self.stream_to_plugin.get(stream_type).ok_or_else(|| {
display_plugin_exposed_types(&self.stream_to_plugin, "stream", stream_type)
})?;
let (streams, _) = self
.plugin_to_confs
.entry(plugin_name.to_owned())
.or_default();
streams.push(stream);
}
for action in stream
.filters
.values()
.flat_map(|filter| filter.actions.values())
{
if action.is_plugin()
&& let Some(action_type) = &action.action_type
{
let plugin_name = self.action_to_plugin.get(action_type).ok_or_else(|| {
display_plugin_exposed_types(&self.action_to_plugin, "action", action_type)
})?;
let (_, actions) = self
.plugin_to_confs
.entry(plugin_name.to_owned())
.or_default();
actions.push(action);
}
}
}
Ok(())
}
async fn load_plugin_configs(&mut self) -> Result<(), String> {
let plugin_to_confs = std::mem::take(&mut self.plugin_to_confs);
for (plugin_name, (streams, actions)) in plugin_to_confs {
let plugin = self
.plugins
.get_mut(&plugin_name)
.ok_or_else(|| format!("could not find plugin {plugin_name}. this is a bug!"))?;
let stream_names: Vec<String> =
streams.iter().map(|stream| stream.name.clone()).collect();
let action_names: Vec<String> =
actions.iter().map(|action| action.to_string()).collect();
let (stream_impls, action_impls) = plugin
.load_config(
streams
.into_iter()
.map(Stream::to_stream_config)
.collect::<Result<Vec<StreamConfig>, String>>()?,
actions
.into_iter()
.map(Action::to_action_config)
.collect::<Result<Vec<ActionConfig>, String>>()?,
)
.await
.map_err(|err| {
format!("plugin {plugin_name} is not happy with your config: {err}")
})?;
self.stream_to_impl
.extend(stream_names.into_iter().zip(stream_impls));
self.action_to_impl
.extend(action_names.into_iter().zip(action_impls));
}
Ok(())
}
pub fn get_stream_impl(&mut self, stream_name: String) -> Option<StreamImpl> {
self.stream_to_impl.remove(&stream_name)
}
pub fn get_action_impl(&mut self, action_fullname: String) -> Option<ActionImpl> {
self.action_to_impl.remove(&action_fullname)
}
pub async fn start(&mut self) -> Result<(), String> {
// Finish setup of all plugins
join_all(
self.plugins
.values_mut()
.map(|plugin_manager| plugin_manager.start()),
)
.await
// Convert Vec<Result> into a single Result
.into_iter()
.zip(self.plugins.values())
.try_for_each(|(result, plugin_manager)| {
result.map_err(|err| {
format!(
"plugin {}: {}",
plugin_manager.plugin.name,
err.to_string().replace('\n', " ")
)
})
})
}
pub fn manager(self) {
for plugin in self.plugins.into_values() {
tokio::spawn(async move {
plugin.handle_child().await;
});
}
}
}
fn display_plugin_exposed_types(
type_to_plugin: &BTreeMap<String, String>,
name: &str,
invalid: &str,
) -> String {
let mut plugin_to_types: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
for (type_, plugin) in type_to_plugin {
plugin_to_types.entry(plugin).or_default().push(type_);
}
for (plugin, types) in plugin_to_types {
info!(
"Plugin {plugin} exposes these {name} types: '{}'",
types.join("', '")
);
}
format!("No plugin provides the {name} type: {invalid}")
}
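The helper above inverts the type-to-plugin map so each plugin's exposed types can be listed together before reporting the invalid one. A std-only sketch of that inversion, with hypothetical plugin and type names standing in for real config:

```rust
use std::collections::BTreeMap;

// Invert a type -> plugin map into plugin -> list of types,
// as display_plugin_exposed_types does before logging.
fn invert(type_to_plugin: &BTreeMap<String, String>) -> BTreeMap<&str, Vec<&str>> {
    let mut plugin_to_types: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
    for (type_, plugin) in type_to_plugin {
        plugin_to_types.entry(plugin).or_default().push(type_);
    }
    plugin_to_types
}

fn main() {
    // Hypothetical names, for illustration only.
    let mut m = BTreeMap::new();
    m.insert("ban".to_string(), "nftables".to_string());
    m.insert("unban".to_string(), "nftables".to_string());
    m.insert("journal".to_string(), "systemd".to_string());
    let inv = invert(&m);
    // BTreeMap iterates in key order, so the type lists come out sorted.
    assert_eq!(inv["nftables"], vec!["ban", "unban"]);
    assert_eq!(inv["systemd"], vec!["journal"]);
    println!("ok");
}
```

Because both maps are `BTreeMap`s, the output is deterministic, which keeps the logged type lists stable across runs.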


@ -1,90 +0,0 @@
use tokio::sync::mpsc;
use tokio_util::sync::{CancellationToken, WaitForCancellationFuture};
// Thanks to this article for inspiration
// https://www.wcygan.io/post/tokio-graceful-shutdown/
// Now TaskTracker exists, but I'm not sure what I'd gain from using it instead:
// https://docs.rs/tokio-util/0.7.13/tokio_util/task/task_tracker/struct.TaskTracker.html
/// Keeps track of ongoing tasks and allows asking them to shut down.
pub struct ShutdownController {
shutdown_notifyer: CancellationToken,
task_tracker: mpsc::Sender<()>,
task_waiter: mpsc::Receiver<()>,
}
impl ShutdownController {
pub fn new() -> Self {
let (task_tracker, task_waiter) = mpsc::channel(1);
Self {
shutdown_notifyer: CancellationToken::new(),
task_tracker,
task_waiter,
}
}
/// Ask for all tasks to quit
pub fn ask_shutdown(&self) {
self.shutdown_notifyer.cancel();
}
/// Wait for all tasks to quit.
/// This method may return even without having called [`ShutdownController::ask_shutdown`]
/// first, if all tasks quit by themselves.
pub async fn wait_shutdown(mut self) {
drop(self.task_tracker);
self.task_waiter.recv().await;
}
/// Returns a new shutdown token, to be held by a task.
pub fn token(&self) -> ShutdownToken {
ShutdownToken::new(self.shutdown_notifyer.clone(), self.task_tracker.clone())
}
/// Returns a [`ShutdownDelegate`], which is able to ask for shutdown,
/// without counting as a task that needs to be awaited.
pub fn delegate(&self) -> ShutdownDelegate {
ShutdownDelegate(self.shutdown_notifyer.clone())
}
}
/// Allows asking for shutdown without counting as a task that needs to be awaited.
pub struct ShutdownDelegate(CancellationToken);
impl ShutdownDelegate {
/// Ask for all tasks to quit
pub fn ask_shutdown(&self) {
self.0.cancel();
}
}
/// Created by a [`ShutdownController`].
/// Serves two purposes:
///
/// - Wait for a shutdown request to happen.
/// - Keep track of the current task. While this token is held,
/// the [`ShutdownController::wait_shutdown`] will block.
#[derive(Clone)]
pub struct ShutdownToken {
shutdown_notifyer: CancellationToken,
_task_tracker: mpsc::Sender<()>,
}
impl ShutdownToken {
fn new(shutdown_notifyer: CancellationToken, _task_tracker: mpsc::Sender<()>) -> Self {
Self {
shutdown_notifyer,
_task_tracker,
}
}
/// Returns a future that will resolve only when a shutdown request happened.
pub fn wait(&self) -> WaitForCancellationFuture<'_> {
self.shutdown_notifyer.cancelled()
}
/// Ask for all tasks to quit
pub fn ask_shutdown(&self) {
self.shutdown_notifyer.cancel();
}
}
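The task tracking above hinges on channel closure rather than counting: every `ShutdownToken` holds a `Sender` clone, and `wait_shutdown` drops the controller's own `Sender` before blocking in `recv`, which only returns once all clones are gone. The same mechanism, sketched with std's sync channel (threads stand in for tokio tasks):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    // The controller keeps one Sender; each task holds a clone of it.
    let (tracker, waiter) = mpsc::channel::<()>();

    for i in 0..3 {
        let token = tracker.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(10 * i));
            drop(token); // task done: its tracker clone goes away
        });
    }

    // wait_shutdown equivalent: drop our own Sender, then block until
    // ALL clones are dropped -- recv() returns Err (channel closed)
    // only once no Sender remains anywhere.
    drop(tracker);
    assert!(waiter.recv().is_err());
    println!("all tasks finished");
}
```

No message is ever sent on the channel; only its closure carries information, which is why the token stores the `Sender` in a field named `_task_tracker` that is never otherwise used.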


@ -1,15 +1,13 @@
use std::{
collections::{BTreeMap, HashMap},
fs, io,
path::PathBuf,
process::exit,
sync::Arc,
};
use chrono::Local;
use futures::{SinkExt, StreamExt};
use reaction_plugin::shutdown::ShutdownToken;
use regex::Regex;
use tokio::net::UnixListener;
use tokio::{fs, net::UnixListener};
use tokio_util::{
bytes::Bytes,
codec::{Framed, LengthDelimitedCodec},
@ -17,36 +15,35 @@ use tokio_util::{
use tracing::{error, warn};
use crate::{
concepts::{Config, Filter, Pattern, Stream},
protocol::{ClientRequest, ClientStatus, DaemonResponse},
concepts::{Config, Filter, Pattern, Stream, now},
protocol::{ClientRequest, ClientStatus, DaemonResponse, Order},
};
use super::{filter::FilterManager, shutdown::ShutdownToken};
use super::filter::FilterManager;
macro_rules! err_str {
($expression:expr) => {
$expression.map_err(|err| err.to_string())
};
}
fn open_socket(path: PathBuf) -> Result<UnixListener, String> {
async fn open_socket(path: PathBuf) -> Result<UnixListener, String> {
macro_rules! err_str {
($expression:expr) => {
$expression.map_err(|err| err.to_string())
};
}
// First create all directories to the file
let dir = path
.parent()
.ok_or(format!("socket {path:?} has no parent directory"))?;
err_str!(fs::create_dir_all(dir))?;
err_str!(fs::create_dir_all(dir).await)?;
// Test if file exists
match fs::metadata(&path) {
match fs::metadata(&path).await {
Ok(meta) => {
if meta.file_type().is_dir() {
Err(format!("socket {path:?} is already a directory"))
} else {
warn!("socket {path:?} already exists: is the daemon already running? deleting.");
err_str!(fs::remove_file(&path))
err_str!(fs::remove_file(&path).await)
}
}
Err(err) => err_str!(match err.kind() {
io::ErrorKind::NotFound => Ok(()),
std::io::ErrorKind::NotFound => Ok(()),
_ => Err(err),
}),
}?;
@ -54,11 +51,105 @@ fn open_socket(path: PathBuf) -> Result<UnixListener, String> {
err_str!(UnixListener::bind(path))
}
fn answer_order(
async fn handle_trigger_order(
stream_name: Option<String>,
filter_name: Option<String>,
patterns: BTreeMap<Arc<Pattern>, String>,
shared_state: &HashMap<&'static Stream, HashMap<&'static Filter, FilterManager>>,
) -> DaemonResponse {
// Check names existence
let (stream_name, filter_name) = match (stream_name, filter_name) {
(Some(s), Some(p)) => (s, p),
_ => {
return DaemonResponse::Err(
"trigger must target a filter, e.g. `reaction trigger mystream.myfilter ...`"
.into(),
);
}
};
// Check patterns existence
if patterns.is_empty() {
return DaemonResponse::Err(
"trigger must specify patterns, e.g. `reaction trigger ... ip=1.2.3.4`".into(),
);
}
// Check stream existence
let filters = match shared_state
.iter()
.find(|(stream, _)| stream_name == stream.name)
{
Some((_, filters)) => filters,
None => {
return DaemonResponse::Err(format!("stream {stream_name} doesn't exist"));
}
};
// Check filter existence
let filter_manager = match filters
.iter()
.find(|(filter, _)| filter_name == filter.name)
{
Some((_, filter)) => filter,
None => {
return DaemonResponse::Err(format!(
"filter {stream_name}.{filter_name} doesn't exist"
));
}
};
match filter_manager.handle_trigger(patterns, now()).await {
Ok(()) => DaemonResponse::Ok(()),
Err(err) => DaemonResponse::Err(err),
}
}
async fn handle_show_or_flush_order(
stream_name: Option<String>,
filter_name: Option<String>,
patterns: BTreeMap<Arc<Pattern>, Regex>,
order: Order,
shared_state: &HashMap<&'static Stream, HashMap<&'static Filter, FilterManager>>,
) -> DaemonResponse {
let now = now();
let iter = shared_state
.iter()
// stream filtering
.filter(|(stream, _)| {
stream_name.is_none() || stream_name.clone().is_some_and(|name| name == stream.name)
});
let mut cs = ClientStatus::new();
for (stream, filter_manager) in iter {
let iter = filter_manager
.iter()
// filter filtering
.filter(|(filter, _)| {
filter_name.is_none() || filter_name.clone().is_some_and(|name| name == filter.name)
})
// pattern filtering
.filter(|(filter, _)| {
patterns
.iter()
.all(|(pattern, _)| filter.patterns.contains_key(pattern))
});
let mut inner_map = BTreeMap::new();
for (filter, manager) in iter {
inner_map.insert(
filter.name.to_owned(),
manager.handle_order(&patterns, order, now).await,
);
}
cs.insert(stream.name.to_owned(), inner_map);
}
DaemonResponse::Order(cs)
}
async fn answer_order(
config: &'static Config,
shared_state: &HashMap<&'static Stream, HashMap<&'static Filter, FilterManager>>,
options: ClientRequest,
) -> Result<ClientStatus, String> {
) -> DaemonResponse {
// Compute options
let (stream_name, filter_name) = match options.stream_filter {
Some(sf) => match sf.split_once(".") {
@ -68,64 +159,53 @@ fn answer_order(
None => (None, None),
};
// Compute the Vec<(pattern_name: String, regex: String)> into a BTreeMap<Arc<Pattern>, Regex>
let patterns = options
// Compute the Vec<(pattern_name: String, regex: String)> into a BTreeMap<Arc<Pattern>, String>
let patterns = match options
.patterns
.into_iter()
.map(|(name, reg)| {
// lookup pattern in config.patterns
config
.patterns()
.patterns
.iter()
// retrieve or Err
.find(|(pattern_name, _)| &name == *pattern_name)
.ok_or_else(|| format!("pattern '{name}' doesn't exist"))
// compile Regex or Err
.and_then(|(_, pattern)| match Regex::new(&reg) {
Ok(reg) => Ok((pattern.clone(), reg)),
Err(err) => Err(format!("pattern '{name}' regex doesn't compile: {err}")),
})
.map(|(_, pattern)| (pattern.clone(), reg))
})
.collect::<Result<BTreeMap<Arc<Pattern>, Regex>, String>>()?;
.collect::<Result<BTreeMap<Arc<Pattern>, String>, String>>()
{
Ok(p) => p,
Err(err) => return DaemonResponse::Err(err),
};
let now = Local::now();
let cs: ClientStatus = shared_state
.iter()
// stream filtering
.filter(|(stream, _)| {
stream_name.is_none()
|| stream_name
.clone()
.is_some_and(|name| name == stream.name())
})
.fold(BTreeMap::new(), |mut acc, (stream, filter_manager)| {
let inner_map = filter_manager
.iter()
// filter filtering
.filter(|(filter, _)| {
filter_name.is_none()
|| filter_name
.clone()
.is_some_and(|name| name == filter.name())
})
// pattern filtering
.filter(|(filter, _)| {
patterns
.iter()
.all(|(pattern, _)| filter.patterns().get(pattern).is_some())
})
.map(|(filter, manager)| {
(
filter.name().to_owned(),
manager.handle_order(&patterns, options.order, now),
)
})
.collect();
acc.insert(stream.name().to_owned(), inner_map);
acc
});
if let Order::Trigger = options.order {
handle_trigger_order(stream_name, filter_name, patterns, shared_state).await
} else {
let patterns = match patterns
.into_iter()
.map(|(pattern, reg)| match Regex::new(&reg) {
Ok(reg) => Ok((pattern, reg)),
Err(err) => Err(format!(
"pattern '{}' regex doesn't compile: {err}",
pattern.name
)),
})
.collect::<Result<BTreeMap<Arc<Pattern>, Regex>, String>>()
{
Ok(p) => p,
Err(err) => return DaemonResponse::Err(err),
};
Ok(cs)
handle_show_or_flush_order(
stream_name,
filter_name,
patterns,
options.order,
shared_state,
)
.await
}
}
macro_rules! or_next {
@ -140,60 +220,67 @@ macro_rules! or_next {
};
}
pub async fn socket_manager(
config: &'static Config,
socket: PathBuf,
shared_state: HashMap<&'static Stream, HashMap<&'static Filter, FilterManager>>,
shutdown: ShutdownToken,
) {
let listener = match open_socket(socket.clone()) {
Ok(l) => l,
Err(err) => {
error!("while creating communication socket: {err}");
exit(1);
}
};
pub struct Socket {
path: PathBuf,
socket: UnixListener,
}
loop {
tokio::select! {
_ = shutdown.wait() => break,
try_conn = listener.accept() => {
match try_conn {
Ok((conn, _)) => {
let mut transport = Framed::new(conn, LengthDelimitedCodec::new());
// Decode
let received = transport.next().await;
let encoded_request = match received {
Some(r) => or_next!("while reading request", r),
None => {
error!("failed to answer client: client sent no request");
continue;
impl Socket {
pub async fn open(socket: PathBuf) -> Result<Self, String> {
Ok(Socket {
socket: open_socket(socket.clone())
.await
.map_err(|err| format!("while creating communication socket: {err}"))?,
path: socket,
})
}
pub fn manager(
self,
config: &'static Config,
shared_state: HashMap<&'static Stream, HashMap<&'static Filter, FilterManager>>,
shutdown: ShutdownToken,
) {
tokio::spawn(async move {
loop {
tokio::select! {
_ = shutdown.wait() => break,
try_conn = self.socket.accept() => {
match try_conn {
Ok((conn, _)) => {
let mut transport = Framed::new(conn, LengthDelimitedCodec::new());
// Decode
let received = transport.next().await;
let encoded_request = match received {
Some(r) => or_next!("while reading request", r),
None => {
error!("failed to answer client: client sent no request");
continue;
}
};
let request = or_next!(
"failed to decode request",
serde_json::from_slice(&encoded_request)
);
// Process
let response = answer_order(config, &shared_state, request).await;
// Encode
let encoded_response =
or_next!("failed to serialize response", serde_json::to_string::<DaemonResponse>(&response));
or_next!(
"failed to send response:",
transport.send(Bytes::from(encoded_response)).await
);
}
};
let request = or_next!(
"failed to decode request",
serde_json::from_slice(&encoded_request)
);
// Process
let response = match answer_order(config, &shared_state, request) {
Ok(res) => DaemonResponse::Order(res),
Err(err) => DaemonResponse::Err(err),
};
// Encode
let encoded_response =
or_next!("failed to serialize response", serde_json::to_string::<DaemonResponse>(&response));
or_next!(
"failed to send response:",
transport.send(Bytes::from(encoded_response)).await
);
Err(err) => error!("failed to open connection from cli: {err}"),
}
}
Err(err) => error!("failed to open connection from cli: {err}"),
}
}
}
}
if let Err(err) = fs::remove_file(socket) {
error!("failed to remove socket: {}", err);
if let Err(err) = fs::remove_file(self.path).await {
error!("failed to remove socket: {}", err);
}
});
}
}


@ -1,146 +1,239 @@
use std::{collections::HashMap, process::Stdio, task::Poll, time::Duration};
use chrono::Local;
use futures::{FutureExt, StreamExt};
use tokio::{
io::{AsyncBufReadExt, BufReader, Lines},
pin,
process::{Child, ChildStderr, ChildStdout, Command},
time::sleep,
use std::{
collections::{BTreeSet, HashMap},
process::Stdio,
};
use tracing::{error, info, warn};
use futures::{FutureExt, Stream as AsyncStream, StreamExt, future::join_all};
use reaction_plugin::{StreamImpl, shutdown::ShutdownToken};
use tokio::{
io::{AsyncBufReadExt, BufReader},
process::{Child, ChildStderr, ChildStdout, Command},
};
use tracing::{debug, error, info};
use crate::{
concepts::{Filter, Stream},
daemon::filter::FilterManager,
concepts::{Filter, Stream, Time, now},
daemon::{filter::FilterManager, plugin::Plugins, utils::kill_child},
};
use super::shutdown::ShutdownToken;
fn lines_to_stream<T: tokio::io::AsyncBufRead + Unpin>(
mut lines: Lines<T>,
) -> futures::stream::PollFn<
impl FnMut(&mut std::task::Context) -> Poll<Option<Result<String, std::io::Error>>>,
> {
futures::stream::poll_fn(move |cx| {
let nl = lines.next_line();
pin!(nl);
futures::Future::poll(nl, cx).map(Result::transpose)
})
/// Converts bytes to line string, discarding invalid utf8 sequences and newlines at the end
fn to_line(data: &[u8]) -> String {
String::from_utf8_lossy(data)
.trim_end_matches('\n')
.replace(std::char::REPLACEMENT_CHARACTER, "")
}
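The normalization done by `to_line` can be checked in isolation: invalid UTF-8 bytes become U+FFFD via `from_utf8_lossy` and are then removed entirely, and only trailing newlines are stripped. A std-only sketch:

```rust
// Same normalization as to_line: lossy UTF-8 decode, strip trailing
// newlines, then drop the U+FFFD replacement characters produced by
// invalid byte sequences.
fn to_line(data: &[u8]) -> String {
    String::from_utf8_lossy(data)
        .trim_end_matches('\n')
        .replace(std::char::REPLACEMENT_CHARACTER, "")
}

fn main() {
    assert_eq!(to_line(b"hello\n"), "hello");
    // 0xFF is not valid UTF-8: it decodes to U+FFFD, which is removed.
    assert_eq!(to_line(b"he\xFFllo\n"), "hello");
    // Only trailing newlines are stripped, interior ones survive.
    assert_eq!(to_line(b"a\nb\n\n"), "a\nb");
    println!("ok");
}
```

One consequence worth knowing: any U+FFFD that was genuinely present in the input is also removed, since after decoding it is indistinguishable from one produced by an invalid sequence.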
pub async fn stream_manager(
pub fn reader_to_stream(
reader: impl tokio::io::AsyncRead + Unpin,
) -> impl AsyncStream<Item = Result<String, std::io::Error>> {
let buf_reader = BufReader::new(reader);
let buffer = vec![];
futures::stream::try_unfold(
(buf_reader, buffer),
|(mut buf_reader, mut buffer)| async move {
let nl = buf_reader.read_until(b'\n', &mut buffer).await?;
if nl > 0 {
let line = to_line(&buffer);
buffer.clear();
Ok(Some((line, (buf_reader, buffer))))
} else {
Ok(None)
}
},
)
}
pub struct StreamManager {
regex_index_to_filter_manager: Vec<FilterManager>,
stream: &'static Stream,
filter_managers: HashMap<&'static Filter, FilterManager>,
stream_plugin: Option<StreamImpl>,
shutdown: ShutdownToken,
) {
info!("{}: start {:?}", stream.name(), stream.cmd());
let mut child = match Command::new(&stream.cmd()[0])
.args(&stream.cmd()[1..])
.stdin(Stdio::null())
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.spawn()
{
Ok(child) => child,
Err(err) => {
error!("could not execute stream {} cmd: {}", stream.name(), err);
return;
}
};
// keep stdout/stderr before moving child to handle_child
#[allow(clippy::unwrap_used)]
// we know there is an stdout because we asked for Stdio::piped()
let child_stdout = child.stdout.take().unwrap();
#[allow(clippy::unwrap_used)]
// we know there is an stderr because we asked for Stdio::piped()
let child_stderr = child.stderr.take().unwrap();
tokio::join!(
handle_child(stream.name(), child, shutdown),
handle_io(stream.name(), child_stdout, child_stderr, filter_managers)
);
}
async fn handle_child(stream_name: &'static str, mut child: Child, shutdown: ShutdownToken) {
const STREAM_PROCESS_GRACE_TIME_SEC: u64 = 15;
const STREAM_PROCESS_KILL_WAIT_TIMEOUT_SEC: u64 = 5;
impl StreamManager {
pub async fn new(
stream: &'static Stream,
filter_managers: HashMap<&'static Filter, FilterManager>,
shutdown: ShutdownToken,
plugins: &mut Plugins,
) -> Result<Self, String> {
let stream_plugin = if stream.is_plugin() {
Some(
plugins
.get_stream_impl(stream.name.clone())
.ok_or_else(|| {
format!(
"no plugin implementation was loaded for stream {}. this is a bug!",
stream.name
)
})?,
)
} else {
None
};
// wait either for the child process to exit on its own or for the shutdown signal
futures::select! {
_ = child.wait().fuse() => {
error!("stream {stream_name} exited: its command returned.");
return;
}
_ = shutdown.wait().fuse() => {}
let regex_index_to_filter_manager = stream
.regex_index_to_filter_name
.iter()
.map(|filter_name| {
filter_managers
.iter()
.find(|(filter, _)| filter_name == &filter.name)
.unwrap()
.1
.clone()
})
.collect();
debug!("successfully initialized stream {}", stream.name);
Ok(StreamManager {
regex_index_to_filter_manager,
stream,
stream_plugin,
shutdown,
})
}
// first, try to ask nicely the child process to exit
if let Some(pid) = child.id() {
let pid = nix::unistd::Pid::from_raw(pid as i32);
// the most likely error is that the process does not exist anymore
// but we still need to reclaim it with Child::wait
let _ = nix::sys::signal::kill(pid, nix::sys::signal::SIGTERM);
futures::select! {
_ = child.wait().fuse() => {
return;
},
_ = sleep(Duration::from_secs(STREAM_PROCESS_GRACE_TIME_SEC)).fuse() => {},
}
} else {
warn!("could not get PID of child process for stream {stream_name}");
// still try to use tokio API to kill and reclaim the child process
}
// if that fails, or we cannot get the underlying PID, terminate the process.
// NOTE: processes killed with SIGKILL are not guaranteed to exit. They can be locked up in a
// syscall to a resource no-longer available (a notorious example is a read on a disconnected
// NFS share)
// as before, the only expected error is that the child process already terminated
// but we still need to reclaim it if that's the case.
let _ = child.start_kill();
futures::select! {
_ = child.wait().fuse() => {}
_ = sleep(Duration::from_secs(STREAM_PROCESS_KILL_WAIT_TIMEOUT_SEC)).fuse() => {
error!("child process of stream {stream_name} did not terminate");
pub fn is_standalone(&self) -> bool {
match &self.stream_plugin {
Some(plugin) => plugin.standalone,
None => true,
}
}
}
async fn handle_io(
stream_name: &'static str,
child_stdout: ChildStdout,
child_stderr: ChildStderr,
filter_managers: HashMap<&'static Filter, FilterManager>,
) {
let lines_stdout = lines_to_stream(BufReader::new(child_stdout).lines());
let lines_stderr = lines_to_stream(BufReader::new(child_stderr).lines());
// aggregate outputs, will end when both streams end
let mut lines = futures::stream::select(lines_stdout, lines_stderr);
pub async fn start(mut self) {
// First start FilterManagers persisted actions
let now = now();
join_all(
self.regex_index_to_filter_manager
.iter()
.map(|filter_manager| filter_manager.start(now)),
)
.await;
loop {
match lines.next().await {
Some(Ok(line)) => {
let now = Local::now();
for manager in filter_managers.values() {
manager.handle_line(&line, now);
// Then start stream
info!("{}: start {:?}", self.stream.name, self.stream.cmd);
if self.stream_plugin.is_some() {
self.start_plugin().await
} else {
self.start_cmd().await
}
}
async fn start_plugin(&mut self) {
let mut plugin = self.stream_plugin.take().unwrap();
loop {
match plugin.stream.recv().await {
Ok(Some((line, time))) => {
self.handle_line(line, time.into()).await;
}
Err(err) => {
if err.is_final() {
error!(
"error reading from plugin stream {}: {}",
self.stream.name, err
);
return;
} else {
error!(
"temporary error reading from plugin stream {}: {}",
self.stream.name, err
);
}
}
Ok(None) => {
if !self.shutdown.is_shutdown() {
error!("stream {} has exited", self.stream.name);
}
return;
}
}
Some(Err(err)) => {
error!(
"impossible to read output from stream {}: {}",
stream_name, err
);
}
}
async fn start_cmd(&self) {
let mut child = match Command::new(&self.stream.cmd[0])
.args(&self.stream.cmd[1..])
.stdin(Stdio::null())
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.spawn()
{
Ok(child) => child,
Err(err) => {
error!("could not execute stream {} cmd: {}", self.stream.name, err);
return;
}
None => {
};
// keep stdout/stderr before moving child to handle_child
#[allow(clippy::unwrap_used)]
// we know there is an stdout because we asked for Stdio::piped()
let child_stdout = child.stdout.take().unwrap();
#[allow(clippy::unwrap_used)]
// we know there is an stderr because we asked for Stdio::piped()
let child_stderr = child.stderr.take().unwrap();
tokio::join!(
self.handle_child(child),
self.handle_io(child_stdout, child_stderr),
);
}
async fn handle_child(&self, mut child: Child) {
// wait either for the child process to exit on its own or for the shutdown signal
futures::select! {
_ = child.wait().fuse() => {
error!("stream {} exited: its command returned.", self.stream.name);
return;
}
_ = self.shutdown.wait().fuse() => {}
}
kill_child(child, format!("stream {}", self.stream.name), 15).await;
}
async fn handle_io(&self, child_stdout: ChildStdout, child_stderr: ChildStderr) {
let lines_stdout = reader_to_stream(child_stdout);
let lines_stderr = reader_to_stream(child_stderr);
// aggregate outputs, will end when both streams end
let lines = futures::stream::select(lines_stdout, lines_stderr);
tokio::pin!(lines);
loop {
match lines.next().await {
Some(Ok(line)) => {
let now = now();
self.handle_line(line, now).await;
}
Some(Err(err)) => {
error!(
"impossible to read output from stream {}: {}",
self.stream.name, err
);
return;
}
None => {
return;
}
}
}
}
async fn handle_line(&self, line: String, time: Time) {
for manager in self.matching_filters(&line) {
manager.handle_line(&line, time).await;
}
}
fn matching_filters(&self, line: &str) -> BTreeSet<&FilterManager> {
let matches = self.stream.compiled_regex_set.matches(line);
matches
.into_iter()
.map(|match_| &self.regex_index_to_filter_manager[match_])
.collect()
}
}
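`matching_filters` maps every matching regex index back to its `FilterManager` and collects into a `BTreeSet`, so a filter whose several regexes all match a line is still handled exactly once. A std-only sketch of that dedup step, with hypothetical filter names standing in for managers:

```rust
use std::collections::BTreeSet;

// Deduplicate regex-set match indices into the set of filters to run,
// mirroring matching_filters (names stand in for FilterManagers).
fn fired<'a>(index_to_filter: &[&'a str], matches: &[usize]) -> BTreeSet<&'a str> {
    matches.iter().map(|&i| index_to_filter[i]).collect()
}

fn main() {
    // Two regexes (indices 0 and 1) belong to the same "ssh" filter.
    let index_to_filter = ["ssh", "ssh", "http"];
    // Pretend the regex set reported matches at indices 0 and 1.
    let hit = fired(&index_to_filter, &[0, 1]);
    // Collapsed: "ssh" appears once even though both its regexes matched.
    assert_eq!(hit.into_iter().collect::<Vec<_>>(), vec!["ssh"]);
    println!("ok");
}
```

This is also why the real code derives `Ord` semantics for `FilterManager` references: the set needs an ordering to deduplicate.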

src/daemon/utils.rs (new file)

@ -0,0 +1,51 @@
use std::time::Duration;
use tokio::{process::Child, time::timeout};
use tracing::{error, warn};
pub async fn kill_child(mut child: Child, context: String, grace_time_sec: u64) {
const STREAM_PROCESS_KILL_WAIT_TIMEOUT_SEC: u64 = 5;
// first, try to ask nicely the child process to exit
if let Some(pid) = child.id() {
let pid = nix::unistd::Pid::from_raw(pid as i32);
// the most likely error is that the process does not exist anymore
// but we still need to reclaim it with Child::wait
let _ = nix::sys::signal::kill(pid, nix::sys::signal::SIGTERM);
if timeout(Duration::from_secs(grace_time_sec), child.wait()).await.is_ok() {
return;
}
} else {
warn!("could not get PID of child process for {context}");
// still try to use tokio API to kill and reclaim the child process
}
// if that fails, or we cannot get the underlying PID, terminate the process.
// NOTE: processes killed with SIGKILL are not guaranteed to exit. They can be locked up in a
// syscall to a resource no-longer available (a notorious example is a read on a disconnected
// NFS share)
// as before, the only expected error is that the child process already terminated
// but we still need to reclaim it if that's the case.
warn!("process for {context} didn't exit {grace_time_sec}s after SIGTERM, sending SIGKILL");
let _ = child.start_kill();
match timeout(
Duration::from_secs(STREAM_PROCESS_KILL_WAIT_TIMEOUT_SEC),
child.wait(),
)
.await
{
Ok(_) => {}
Err(_) => match child.id() {
Some(id) => {
error!("child process of {context} did not terminate. PID: {id}");
}
None => {
error!("child process of {context} did not terminate");
}
},
}
}


@ -1,12 +1,5 @@
#![warn(
clippy::panic,
clippy::todo,
clippy::unimplemented,
clippy::unwrap_used,
unsafe_code
)]
#![warn(clippy::panic, clippy::todo, clippy::unimplemented, unsafe_code)]
#![allow(clippy::upper_case_acronyms, clippy::mutable_key_type)]
// Allow unwrap in tests
#![cfg_attr(test, allow(clippy::unwrap_used))]
@ -16,4 +9,3 @@ pub mod concepts;
pub mod daemon;
pub mod protocol;
pub mod tests;
pub mod treedb;


@ -2,12 +2,11 @@ use std::{io::IsTerminal, process::exit};
use clap::Parser;
use reaction::{
cli::{Cli, SubCommand},
cli::{Cli, Format, SubCommand},
client::{request, test_config, test_regex},
daemon::daemon,
protocol::Order,
};
use tracing::{error, Level};
#[tokio::main]
async fn main() {
@ -28,67 +27,64 @@ async fn main() {
let cli = Cli::parse();
let (is_daemon, level) = if let SubCommand::Start { loglevel, .. } = cli.command {
(true, loglevel)
} else {
(false, Level::DEBUG)
};
if is_daemon {
// Set log level
if let SubCommand::Start {
loglevel,
config,
socket,
} = cli.command
{
if let Err(err) = tracing_subscriber::fmt::fmt()
.without_time()
.with_target(false)
.with_ansi(std::io::stdout().is_terminal())
.with_max_level(level)
.with_max_level(loglevel)
// .with_max_level(Level::TRACE)
.try_init()
{
eprintln!("ERROR could not initialize logging: {err}");
exit(1);
}
}
let result = match cli.command {
SubCommand::Start {
config,
socket,
..
} => daemon(config, socket).await,
SubCommand::Show {
socket,
format,
limit,
patterns,
} => request(socket, format, limit, patterns, Order::Show).await,
SubCommand::Flush {
socket,
format,
limit,
patterns,
} => request(socket, format, limit, patterns, Order::Flush).await,
SubCommand::TestRegex {
config,
regex,
line,
} => test_regex(config, regex, line),
SubCommand::TestConfig {
config,
format,
verbose,
} => test_config(config, format, verbose),
};
match result {
Ok(()) => {
exit(0);
}
Err(err) => {
if is_daemon {
error!("{err}");
} else {
eprintln!("ERROR {err}");
exit(daemon(config, socket).await);
} else {
let result = match cli.command {
SubCommand::Show {
socket,
format,
limit,
patterns,
} => request(socket, format, limit, patterns, Order::Show).await,
SubCommand::Flush {
socket,
format,
limit,
patterns,
} => request(socket, format, limit, patterns, Order::Flush).await,
SubCommand::Trigger {
socket,
limit,
patterns,
} => request(socket, Format::JSON, Some(limit), patterns, Order::Trigger).await,
SubCommand::TestRegex {
config,
regex,
line,
} => test_regex(config, regex, line),
SubCommand::TestConfig {
config,
format,
verbose,
} => test_config(config, format, verbose),
// Can't be daemon
_ => Ok(()),
};
match result {
Ok(()) => {
exit(0);
}
Err(err) => {
eprintln!("ERROR {err}");
exit(1);
}
exit(1);
}
}
}
