Compare commits

149 commits

Author SHA1 Message Date
Joachim Bauch 8123be9551
Update changelog for 1.3.1 2024-05-23 21:18:48 +02:00
Joachim Bauch cad442c486
Merge pull request #747 from strukturag/improve-real-ip
Improve detection of actual client IP.
2024-05-23 21:16:16 +02:00
Joachim Bauch e8ebfed711
Merge pull request #749 from strukturag/docker-janus
docker: Update Janus in example image to 1.2.2
2024-05-23 10:18:35 +02:00
Joachim Bauch 8d8ec677f1
CI: Disable Janus "--version" check temporarily in example image.
Needs https://github.com/meetecho/janus-gateway/issues/3383 to be resolved.
2024-05-23 10:16:07 +02:00
Joachim Bauch 80d96916b9
docker: Compile example image on all cores. 2024-05-23 10:07:58 +02:00
Joachim Bauch 8a0ce7c9b6
docker: Update libsrtp in example image to 2.6.0 2024-05-23 10:05:56 +02:00
Joachim Bauch 1952bfc2be
docker: Update Janus in example image to 1.2.2 2024-05-23 10:03:43 +02:00
Joachim Bauch b3d2f7b02c
Merge pull request #748 from strukturag/ci-lint-deprecated-options
CI: Remove deprecated options from lint workflow.
2024-05-23 09:40:10 +02:00
Joachim Bauch 7583fb6486
CI: Remove deprecated options from lint workflow. 2024-05-23 09:37:44 +02:00
Joachim Bauch 040e663b37
Add examples on how to set "X-Real-IP" for Apache and Caddy. 2024-05-23 09:32:10 +02:00
Joachim Bauch 15b1214413
Add note that "X-Real-Ip" will take precedence. 2024-05-23 09:20:08 +02:00
Joachim Bauch 05810e10ce
Improve detection of actual client IP.
Based on recommendations from MDN.
2024-05-23 09:16:25 +02:00
Joachim Bauch 7e7a04ad6c
Merge pull request #746 from strukturag/dependabot/docker/docker/janus/alpine-3.20
Bump alpine from 3.19 to 3.20 in /docker/janus
2024-05-23 07:50:48 +02:00
dependabot[bot] d25169d0ff
Bump alpine from 3.19 to 3.20 in /docker/janus
Bumps alpine from 3.19 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-22 20:08:31 +00:00
Joachim Bauch 79b76b1ca4
Merge pull request #745 from strukturag/shellcheck
docker: Fix proxy entrypoint.
2024-05-22 14:12:50 +02:00
Joachim Bauch f8e37a1bca
docker: Add missing "fi" in proxy entrypoint. 2024-05-22 14:09:52 +02:00
Joachim Bauch b5cbb917c5
Fix shellcheck errors and make executable. 2024-05-22 14:09:52 +02:00
Joachim Bauch e2ac08ae67
CI: Run shellcheck on scripts. 2024-05-22 14:09:51 +02:00
Joachim Bauch 00d17bae97
Update changelog for 1.3.0 2024-05-22 11:05:10 +02:00
Joachim Bauch ff69a294a9
Add note on remote streams. 2024-05-22 10:59:34 +02:00
Joachim Bauch 5790e7a369
Merge pull request #744 from strukturag/backend-throttle
Add throttler for backend requests
2024-05-22 10:39:50 +02:00
Joachim Bauch 4c807c86e8
Throttle resume / internal hello. 2024-05-22 10:35:11 +02:00
Joachim Bauch e862392872
Add throttled requests to metrics. 2024-05-22 10:35:09 +02:00
Joachim Bauch 39f4b2eb11
server: Increase default write timeout so delayed responses can be sent out. 2024-05-22 10:34:29 +02:00
Joachim Bauch 7f8e44b3b5
Add bruteforce detection to backend server room handler. 2024-05-22 10:34:29 +02:00
Joachim Bauch 31b8c74d1c
Add throttler class. 2024-05-22 10:34:25 +02:00
Joachim Bauch 5f18913646
Merge pull request #743 from strukturag/dependabot/go_modules/github.com/nats-io/nats-server/v2-2.10.16
build(deps): Bump github.com/nats-io/nats-server/v2 from 2.10.15 to 2.10.16
2024-05-22 07:42:47 +02:00
dependabot[bot] 716a93538b
---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-server/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-21 20:19:36 +00:00
Joachim Bauch 2cd3418f09
Merge pull request #708 from strukturag/proxy-features
Add support for remote streams
2024-05-21 09:49:47 +02:00
Joachim Bauch c6cbe88d0e
Pass contexts when creating / starting MCUs. 2024-05-21 09:29:23 +02:00
Joachim Bauch f73ad7b508
Add tests for publishers on different hubs. 2024-05-21 09:29:23 +02:00
Joachim Bauch efb722a55e
Use interface instead of concrete "Hub" class for GRPC server. 2024-05-21 09:29:22 +02:00
Joachim Bauch d63b1cf14a
proxy: Add more token tests. 2024-05-21 09:29:22 +02:00
Joachim Bauch 75060b25aa
Add testcase for subscriber in different country but same continent. 2024-05-21 09:29:21 +02:00
Joachim Bauch 7e7a6d5c09
Support bandwidth limits when selecting proxy to use. 2024-05-21 09:29:20 +02:00
Joachim Bauch a4b8a81734
Automatically reconnect proxy connections if interrupted. 2024-05-21 09:29:19 +02:00
Joachim Bauch 3ce963ee91
Re-create publisher with new endpoint if it already exists. 2024-05-21 09:29:19 +02:00
Joachim Bauch 24c1a09662
Add methods to unpublish remotely. 2024-05-21 09:29:18 +02:00
Joachim Bauch 56f5a72f61
Get list of remote streams from offer/answer SDP. 2024-05-21 09:29:17 +02:00
Joachim Bauch a66c1d82bf
Move Janus classes to separate files, no functional changes. 2024-05-21 09:29:17 +02:00
Joachim Bauch d9deddfda7
Move remote classes to separate files and add event handlers. 2024-05-21 09:29:16 +02:00
Joachim Bauch 9c99129242
Make "skipverify" configurable for remote proxy requests. 2024-05-21 09:29:15 +02:00
Joachim Bauch 63c42dd84c
First draft of remote subscriber streams. 2024-05-21 09:29:15 +02:00
Joachim Bauch 92cbc28065
Add basic tests for mcu proxy client. 2024-05-21 09:29:14 +02:00
Joachim Bauch 132cf0d474
Add "String()" method to messages to help with debugging. 2024-05-21 09:29:11 +02:00
Joachim Bauch 4fd929c15a
Merge pull request #733 from strukturag/relax-message-validation
Relax "MessageClientMessageData" validation.
2024-05-21 09:28:50 +02:00
Joachim Bauch 879469df19
Merge pull request #741 from strukturag/dependabot/go_modules/github.com/nats-io/nats-server/v2-2.10.15
build(deps): Bump github.com/nats-io/nats-server/v2 from 2.10.14 to 2.10.15
2024-05-21 09:27:59 +02:00
Joachim Bauch fe0a002adf
Merge pull request #739 from strukturag/rawmessage-pointer
Don't use unnecessary pointer to "json.RawMessage".
2024-05-21 09:27:31 +02:00
dependabot[bot] 7b555e91ec
build(deps): Bump github.com/nats-io/nats-server/v2
Bumps [github.com/nats-io/nats-server/v2](https://github.com/nats-io/nats-server) from 2.10.14 to 2.10.15.
- [Release notes](https://github.com/nats-io/nats-server/releases)
- [Changelog](https://github.com/nats-io/nats-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-server/commits)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-server/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-21 07:21:47 +00:00
Joachim Bauch b2afa88bcc
Merge pull request #740 from strukturag/dependabot/go_modules/github.com/nats-io/nats.go-1.35.0
build(deps): Bump github.com/nats-io/nats.go from 1.34.1 to 1.35.0
2024-05-21 09:20:59 +02:00
Joachim Bauch 1bbc49351a
Merge pull request #742 from strukturag/leave-room-lock-order-inversion
Fix lock order inversion when leaving room / publishing room sessions.
2024-05-21 09:20:34 +02:00
Joachim Bauch dff78d0101
Fix lock order inversion when leaving room / publishing room sessions.
Deadlock could happen between

1 @ 0x44038e 0x451898 0x45186f 0x46f325 0x489f3d 0xbb7b76 0xbb7b45 0xc1fe52 0xc190f7 0x473461
0x46f324	sync.runtime_SemacquireMutex+0x24							/usr/lib/go-1.21/src/runtime/sema.go:77
0x489f3c	sync.(*Mutex).lockSlow+0x15c								/usr/lib/go-1.21/src/sync/mutex.go:171
0xbb7b75	sync.(*Mutex).Lock+0x55									/usr/lib/go-1.21/src/sync/mutex.go:90
0xbb7b44	github.com/strukturag/nextcloud-spreed-signaling.(*ClientSession).RoomSessionId+0x24	/build/nextcloud-spreed-signaling-1.2.3/clientsession.go:157
0xc1fe51	github.com/strukturag/nextcloud-spreed-signaling.(*Room).publishActiveSessions+0x231	/build/nextcloud-spreed-signaling-1.2.3/room.go:925
0xc190f6	github.com/strukturag/nextcloud-spreed-signaling.(*Room).run+0x36			/build/nextcloud-spreed-signaling-1.2.3/room.go:179

(which locks "mu" in the room and then "mu" in the client session) and

1 @ 0x44038e 0x451898 0x45186f 0x46f3e5 0x48b44a 0xc1ba76 0xbba37e 0xbe2aab 0xbdf8e5 0xbee0f8 0xbb6134 0x473461
0x46f3e4	sync.runtime_SemacquireRWMutex+0x24							/usr/lib/go-1.21/src/runtime/sema.go:87
0x48b449	sync.(*RWMutex).Lock+0x69								/usr/lib/go-1.21/src/sync/rwmutex.go:152
0xc1ba75	github.com/strukturag/nextcloud-spreed-signaling.(*Room).RemoveSession+0x35		/build/nextcloud-spreed-signaling-1.2.3/room.go:440
0xbba37d	github.com/strukturag/nextcloud-spreed-signaling.(*ClientSession).LeaveRoom+0xdd	/build/nextcloud-spreed-signaling-1.2.3/clientsession.go:489
0xbe2aaa	github.com/strukturag/nextcloud-spreed-signaling.(*Hub).processRoom+0x6a		/build/nextcloud-spreed-signaling-1.2.3/hub.go:1268
0xbdf8e4	github.com/strukturag/nextcloud-spreed-signaling.(*Hub).processMessage+0x984		/build/nextcloud-spreed-signaling-1.2.3/hub.go:909
0xbee0f7	github.com/strukturag/nextcloud-spreed-signaling.(*Hub).OnMessageReceived+0x17		/build/nextcloud-spreed-signaling-1.2.3/hub.go:2427
0xbb6133	github.com/strukturag/nextcloud-spreed-signaling.(*Client).processMessages+0x53		/build/nextcloud-spreed-signaling-1.2.3/client.go:347

(which locks "mu" in the client session and then "mu" in the room).
2024-05-21 09:09:10 +02:00
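The two traces above show a classic lock order inversion: one goroutine holds the room mutex and then takes the session mutex, while another does the reverse. A common fix, sketched below with illustrative types, is to snapshot the data needed under the first lock and release it before taking the second, so both locks are never held at once:

```go
package main

import (
	"fmt"
	"sync"
)

// session and room are simplified stand-ins for ClientSession and Room.
type session struct {
	mu            sync.Mutex
	roomSessionId string
}

// RoomSessionId holds only the session lock and returns a copy.
func (s *session) RoomSessionId() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.roomSessionId
}

type room struct {
	mu       sync.Mutex
	sessions map[*session]bool
}

// publishActiveSessions snapshots the session set under room.mu, then
// releases it before touching each session's own lock, avoiding the
// room.mu -> session.mu vs. session.mu -> room.mu inversion.
func (r *room) publishActiveSessions() []string {
	r.mu.Lock()
	snapshot := make([]*session, 0, len(r.sessions))
	for s := range r.sessions {
		snapshot = append(snapshot, s)
	}
	r.mu.Unlock() // room lock released before session locks are taken

	var ids []string
	for _, s := range snapshot {
		ids = append(ids, s.RoomSessionId())
	}
	return ids
}

func main() {
	s := &session{roomSessionId: "abc"}
	r := &room{sessions: map[*session]bool{s: true}}
	fmt.Println(r.publishActiveSessions()) // → [abc]
}
```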
dependabot[bot] 2ad2327090
build(deps): Bump github.com/nats-io/nats.go from 1.34.1 to 1.35.0
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.34.1 to 1.35.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.34.1...v1.35.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-17 20:42:38 +00:00
Joachim Bauch 4b76a49355
Don't use unnecessary pointer to "json.RawMessage".
The slice is a pointer already.
2024-05-16 20:58:42 +02:00
Joachim Bauch f6125dac3f
docker: Make trusted proxies configurable.
Follow-up to #738
2024-05-16 16:31:08 +02:00
Joachim Bauch c2e93cd92a
Merge pull request #738 from strukturag/trusted-proxies
Make trusted proxies configurable and default to loopback / private IPs.
2024-05-16 16:24:44 +02:00
Joachim Bauch 4f8349d4c1
Update tests. 2024-05-16 14:51:28 +02:00
Joachim Bauch aac4874e72
Make trusted proxies configurable and default to loopback / private IPs. 2024-05-16 14:44:00 +02:00
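The default described above — trusting only loopback and private addresses when no proxies are configured — can be sketched with the standard `net/netip` package (the function name is an assumption for illustration):

```go
package main

import (
	"fmt"
	"net/netip"
)

// defaultTrustedProxy reports whether a peer address falls into the default
// trusted set: loopback or private (RFC 1918 / ULA) ranges. An explicit
// "trustedproxies" configuration would replace this default.
func defaultTrustedProxy(addr string) bool {
	ip, err := netip.ParseAddr(addr)
	if err != nil {
		return false
	}
	return ip.IsLoopback() || ip.IsPrivate()
}

func main() {
	fmt.Println(defaultTrustedProxy("127.0.0.1"), defaultTrustedProxy("203.0.113.7"))
	// → true false
}
```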
Joachim Bauch 936f83feb9
Merge pull request #693 from strukturag/dependabot/go_modules/etcd-a88448dd84
build(deps): Bump the etcd group with 4 updates
2024-05-16 13:32:20 +02:00
dependabot[bot] c1e9e02087
build(deps): Bump the etcd group with 4 updates
Bumps the etcd group with 4 updates: [go.etcd.io/etcd/api/v3](https://github.com/etcd-io/etcd), [go.etcd.io/etcd/client/pkg/v3](https://github.com/etcd-io/etcd), [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd) and [go.etcd.io/etcd/server/v3](https://github.com/etcd-io/etcd).


Updates `go.etcd.io/etcd/api/v3` from 3.5.12 to 3.5.13
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.12...v3.5.13)

Updates `go.etcd.io/etcd/client/pkg/v3` from 3.5.12 to 3.5.13
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.12...v3.5.13)

Updates `go.etcd.io/etcd/client/v3` from 3.5.12 to 3.5.13
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.12...v3.5.13)

Updates `go.etcd.io/etcd/server/v3` from 3.5.12 to 3.5.13
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.12...v3.5.13)

---
updated-dependencies:
- dependency-name: go.etcd.io/etcd/api/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
- dependency-name: go.etcd.io/etcd/client/pkg/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
- dependency-name: go.etcd.io/etcd/client/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
- dependency-name: go.etcd.io/etcd/server/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-16 11:23:55 +00:00
Joachim Bauch beee423a7c
Merge pull request #694 from strukturag/ci-govuln-check
CI: Run "govulncheck".
2024-05-16 13:22:39 +02:00
Joachim Bauch 5a85fecb10
CI: Run "govulncheck". 2024-05-16 13:21:05 +02:00
Joachim Bauch 88575abea2
Merge pull request #737 from strukturag/remove-golang-1.20
Drop support for Golang 1.20
2024-05-16 13:19:46 +02:00
Joachim Bauch fdc43d12cd
Use new builtin "clear" to remove map entries. 2024-05-16 13:14:56 +02:00
Joachim Bauch d03ea86991
Function "min" is builtin with Go 1.21 2024-05-16 11:23:57 +02:00
Joachim Bauch 18300ce89e
Drop support for Golang 1.20 2024-05-16 11:17:06 +02:00
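Dropping Go 1.20 lets the commits above rely on builtins introduced in Go 1.21: `clear` empties a map in place (replacing a delete-in-range loop) and `min`/`max` replace small local helpers:

```go
package main

import "fmt"

func main() {
	sessions := map[string]int{"a": 1, "b": 2}
	clear(sessions) // Go 1.21 builtin: removes all map entries in place
	fmt.Println(len(sessions), min(3, 7)) // → 0 3
}
```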
Joachim Bauch d8f2f265ab
Merge pull request #736 from strukturag/log-mcu-proxy-client-closed
Log something if mcu publisher / subscriber was closed.
2024-05-16 10:37:08 +02:00
Joachim Bauch ddbf1065f6
Merge pull request #707 from strukturag/validate-received-sdp
Validate received SDP earlier.
2024-05-16 10:19:15 +02:00
Joachim Bauch bad52af35a
Validate received SDP earlier. 2024-05-16 10:04:57 +02:00
Joachim Bauch c58564c0e8
Log something if mcu publisher / subscriber was closed. 2024-05-16 09:44:47 +02:00
Joachim Bauch 0b259a8171
Merge pull request #732 from strukturag/close-context
Add Context to clients / sessions.
2024-05-16 09:36:34 +02:00
Joachim Bauch 3fc5f5253d
Merge pull request #735 from strukturag/read-error-after-close
Don't log read error after we closed the connection.
2024-05-16 09:36:07 +02:00
Joachim Bauch 3e92664edc
Don't log read error after we closed the connection. 2024-05-16 09:23:32 +02:00
Joachim Bauch 0ee976d377
Add Context to clients / sessions.
The Context will be closed when the client disconnects / the session is removed,
so any pending requests can be cancelled.
2024-05-16 09:07:59 +02:00
Joachim Bauch 552474f6f0
Merge pull request #734 from strukturag/dependabot/go_modules/google.golang.org/grpc-1.64.0
build(deps): Bump google.golang.org/grpc from 1.63.2 to 1.64.0
2024-05-16 08:51:38 +02:00
dependabot[bot] 09e010ee14
build(deps): Bump google.golang.org/grpc from 1.63.2 to 1.64.0
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.63.2 to 1.64.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.63.2...v1.64.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-15 20:58:48 +00:00
Joachim Bauch 70a5318973
Relax "MessageClientMessageData" validation.
Allow empty `roomType` values.
2024-05-15 13:12:25 +02:00
Joachim Bauch 94a8f0f02b
test: Reset logging to global defaults on cleanup. 2024-05-14 16:52:46 +02:00
Joachim Bauch 4603b2b290
test: Make sure tests that change global state are not executed concurrently. 2024-05-14 16:51:20 +02:00
Joachim Bauch a50d637107
etcd: Wait for server to be stopped in tests. 2024-05-14 16:13:13 +02:00
Joachim Bauch 307ffdc29a
Merge pull request #721 from strukturag/config-envvars
Support environment variables in some configuration.
2024-05-14 14:25:27 +02:00
Joachim Bauch ec3ac62474
Merge pull request #729 from strukturag/dependabot/github_actions/golangci/golangci-lint-action-6.0.1
build(deps): Bump golangci/golangci-lint-action from 5.1.0 to 6.0.1
2024-05-14 13:33:30 +02:00
Joachim Bauch e3a163fbe5
Support environment variables in URL / listener configuration. 2024-05-13 13:26:38 +02:00
Joachim Bauch cf36530b30
Add function to resolve environment variables in config values. 2024-05-13 13:26:35 +02:00
Joachim Bauch adc72aa578
Merge pull request #731 from strukturag/capabilities-race
Fix potential race in capabilities test.
2024-05-13 13:25:34 +02:00
Joachim Bauch ea0d31b0dc
Fix potential race in capabilities test. 2024-05-13 13:16:49 +02:00
Joachim Bauch 5b305f6f99
Merge pull request #730 from strukturag/dependabot/go_modules/github.com/prometheus/client_golang-1.19.1
build(deps): Bump github.com/prometheus/client_golang from 1.19.0 to 1.19.1
2024-05-13 08:56:20 +02:00
Joachim Bauch 3c923a9ef9
Merge pull request #725 from strukturag/dependabot/go_modules/google.golang.org/protobuf-1.34.1
build(deps): Bump google.golang.org/protobuf from 1.33.0 to 1.34.1
2024-05-13 08:55:55 +02:00
Joachim Bauch 1a692bc4bb
Merge pull request #726 from strukturag/dependabot/pip/docs/jinja2-3.1.4
build(deps): Bump jinja2 from 3.1.3 to 3.1.4 in /docs
2024-05-13 08:54:38 +02:00
Joachim Bauch 6a495bfc5c
Merge pull request #728 from strukturag/dependabot/github_actions/coverallsapp/github-action-2.3.0
build(deps): Bump coverallsapp/github-action from 2.2.3 to 2.3.0
2024-05-13 08:53:48 +02:00
dependabot[bot] 9a91e885cf
build(deps): Bump github.com/prometheus/client_golang
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.19.0 to 1.19.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.19.0...v1.19.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-09 20:55:05 +00:00
dependabot[bot] b4830b1fd3
build(deps): Bump golangci/golangci-lint-action from 5.1.0 to 6.0.1
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 5.1.0 to 6.0.1.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v5.1.0...v6.0.1)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-08 20:37:17 +00:00
dependabot[bot] 16da87106a
build(deps): Bump coverallsapp/github-action from 2.2.3 to 2.3.0
Bumps [coverallsapp/github-action](https://github.com/coverallsapp/github-action) from 2.2.3 to 2.3.0.
- [Release notes](https://github.com/coverallsapp/github-action/releases)
- [Commits](https://github.com/coverallsapp/github-action/compare/v2.2.3...v2.3.0)

---
updated-dependencies:
- dependency-name: coverallsapp/github-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-08 20:37:13 +00:00
dependabot[bot] e763f4519c
build(deps): Bump jinja2 from 3.1.3 to 3.1.4 in /docs
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-06 20:40:53 +00:00
dependabot[bot] bfb185f382
build(deps): Bump google.golang.org/protobuf from 1.33.0 to 1.34.1
Bumps google.golang.org/protobuf from 1.33.0 to 1.34.1.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-06 20:36:29 +00:00
Joachim Bauch 46e8ea9148
Merge pull request #722 from strukturag/docker-graceful-stop
docker: Add helper scripts to gracefully stop / wait for server.
2024-04-30 12:01:38 +02:00
Joachim Bauch 4eb1b6609d
Merge pull request #720 from strukturag/dependabot/github_actions/golangci/golangci-lint-action-5.1.0
build(deps): Bump golangci/golangci-lint-action from 5.0.0 to 5.1.0
2024-04-30 11:58:27 +02:00
Joachim Bauch 815088f269
docker: Add helper scripts to gracefully stop / wait for server. 2024-04-30 11:57:58 +02:00
dependabot[bot] 527061bbe2
build(deps): Bump golangci/golangci-lint-action from 5.0.0 to 5.1.0
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 5.0.0 to 5.1.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v5.0.0...v5.1.0)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-29 20:31:34 +00:00
Joachim Bauch a2f0bec564
Merge pull request #719 from strukturag/dependabot/github_actions/golangci/golangci-lint-action-5.0.0
build(deps): Bump golangci/golangci-lint-action from 4.0.0 to 5.0.0
2024-04-29 08:22:06 +02:00
dependabot[bot] 70f0519ca2
build(deps): Bump golangci/golangci-lint-action from 4.0.0 to 5.0.0
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 4.0.0 to 5.0.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v4.0.0...v5.0.0)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-25 20:47:19 +00:00
Joachim Bauch 9e2a896326
Catch log of embedded etcd in tests (follow-up to #718). 2024-04-25 16:07:51 +02:00
Joachim Bauch 2d48018b58
Catch log in GeoIP tests (follow-up to #718). 2024-04-25 15:47:28 +02:00
Joachim Bauch cf19b3b1b4
Merge pull request #718 from strukturag/speedup-tests
Speedup tests by running in parallel
2024-04-25 15:27:10 +02:00
Joachim Bauch ebb215c592
make: Don't run tests verbose by default. 2024-04-25 15:22:24 +02:00
Joachim Bauch 0eb234b24d
Run tests in parallel and catch log output from tests. 2024-04-25 15:21:54 +02:00
Joachim Bauch cad397e59e
Merge pull request #706 from strukturag/graceful-shutdown
Gracefully shut down signaling server on SIGUSR1.
2024-04-23 12:43:06 +02:00
Joachim Bauch f8899ef189
Add mutex for "handler" in client.
Fix flaky race as follow-up to #715
2024-04-23 12:42:31 +02:00
Joachim Bauch 54c4f1847a
Gracefully shut down signaling server on SIGUSR1.
This will wait for all non-internal sessions to be removed / expired
but stop accepting new connections.
2024-04-23 12:25:33 +02:00
Joachim Bauch d368a060fa
Merge pull request #715 from strukturag/resume-remote
Support resuming remote sessions
2024-04-23 11:58:07 +02:00
Joachim Bauch 602452fa25
Support resuming sessions that exist on a different Hub in the cluster. 2024-04-23 11:52:43 +02:00
Joachim Bauch 0c2cefa63a
Don't return "false" if message sending closed the connection. 2024-04-23 11:09:04 +02:00
Joachim Bauch 2468443572
Add "HandlerClient" interface to support custom implementations. 2024-04-23 11:03:30 +02:00
Joachim Bauch 3721fb131f
Don't include empty "auth" field in hello client messages. 2024-04-23 11:03:28 +02:00
Joachim Bauch 6960912681
Merge pull request #716 from strukturag/leak-grpc-goroutines
Prevent goroutine leaks in GRPC tests.
2024-04-23 10:59:23 +02:00
Joachim Bauch b77525603c
Enable goroutine leak checks for more tests. 2024-04-23 10:53:55 +02:00
Joachim Bauch 9adb762ccf
Close file watcher on shutdown to prevent goroutine leaks. 2024-04-23 10:53:28 +02:00
Joachim Bauch bf68a15943
Make sure "clientsMap" is updated so all clients are closed on shutdown. 2024-04-23 10:37:15 +02:00
Joachim Bauch bc7aea68f3
Merge pull request #714 from strukturag/dependabot/pip/docs/mkdocs-1.6.0
build(deps): Bump mkdocs from 1.5.3 to 1.6.0 in /docs
2024-04-23 08:20:27 +02:00
dependabot[bot] 69beea84cb
build(deps): Bump mkdocs from 1.5.3 to 1.6.0 in /docs
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.5.3 to 1.6.0.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.5.3...1.6.0)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 20:25:03 +00:00
Joachim Bauch 952b8ae460
Merge pull request #713 from strukturag/session-expiration
Don't keep expiration timestamp in each session.
2024-04-22 15:19:16 +02:00
Joachim Bauch 2e6cf7f86b
Don't keep expiration timestamp in each session.
Reduces memory size per session and make hub lock usage consistent.
2024-04-22 15:07:48 +02:00
Joachim Bauch dcec32be7e
Merge pull request #711 from strukturag/dependabot/go_modules/golang.org/x/net-0.23.0
build(deps): Bump golang.org/x/net from 0.21.0 to 0.23.0
2024-04-22 10:38:14 +02:00
Joachim Bauch b0d052c6ec
Merge pull request #712 from strukturag/dependabot/pip/docs/sphinx-7.3.7
build(deps): Bump sphinx from 7.3.5 to 7.3.7 in /docs
2024-04-22 10:38:00 +02:00
dependabot[bot] 318ed3700f
build(deps): Bump sphinx from 7.3.5 to 7.3.7 in /docs
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 7.3.5 to 7.3.7.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/master/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v7.3.5...v7.3.7)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-19 20:12:57 +00:00
dependabot[bot] ee16a8d8be
build(deps): Bump golang.org/x/net from 0.21.0 to 0.23.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.21.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-19 13:16:11 +00:00
Joachim Bauch 91033bf8c2
Merge pull request #709 from strukturag/dependabot/pip/docs/sphinx-7.3.5
build(deps): Bump sphinx from 7.2.6 to 7.3.5 in /docs
2024-04-18 11:22:56 +02:00
dependabot[bot] b541ebc4c6
build(deps): Bump sphinx from 7.2.6 to 7.3.5 in /docs
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 7.2.6 to 7.3.5.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/master/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v7.2.6...v7.3.5)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-17 20:24:44 +00:00
Joachim Bauch 0aed690463
Merge pull request #669 from strukturag/janus-multistream
Improve support for Janus 1.x
2024-04-16 16:23:58 +02:00
Joachim Bauch 71a4248568
Merge pull request #705 from strukturag/dependabot/go_modules/go.uber.org/zap-1.27.0
build(deps): Bump go.uber.org/zap from 1.17.0 to 1.27.0
2024-04-16 14:29:15 +02:00
dependabot[bot] df210a6a85
build(deps): Bump go.uber.org/zap from 1.17.0 to 1.27.0
Bumps [go.uber.org/zap](https://github.com/uber-go/zap) from 1.17.0 to 1.27.0.
- [Release notes](https://github.com/uber-go/zap/releases)
- [Changelog](https://github.com/uber-go/zap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/uber-go/zap/compare/v1.17.0...v1.27.0)

---
updated-dependencies:
- dependency-name: go.uber.org/zap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-15 20:22:57 +00:00
Joachim Bauch 5bc9ada233
Merge pull request #704 from strukturag/etcd-prev-value
Include previous value with etcd watch events.
2024-04-15 12:07:37 +02:00
Joachim Bauch d0d68f0d21
Include previous value with etcd watch events. 2024-04-15 11:57:52 +02:00
Joachim Bauch 9a892a194e
Merge pull request #701 from strukturag/etcd-watch-updates
Update etcd watch handling.
2024-04-15 08:58:46 +02:00
Joachim Bauch 26102e7acb
Backoff when retrying watch. 2024-04-15 08:37:52 +02:00
Joachim Bauch 88a575c36c
Cancel GRPC self-check if client is closed. 2024-04-15 08:37:52 +02:00
Joachim Bauch fdab3db819
Update etcd watch handling.
- Properly cancel watch if object is closed.
- Retry watch if interrupted.
- Pass revision to watch to not miss changes.
2024-04-15 08:37:52 +02:00
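The revision handling described above — remember the last seen revision and restart an interrupted watch from there so no change is missed — can be sketched generically. The watcher interface below is illustrative; the real code uses the etcd clientv3 API:

```go
package main

import "fmt"

// event is a simplified etcd watch event.
type event struct {
	Revision int64
	Key      string
}

// watcher delivers events with revision > from, or an error if interrupted.
type watcher interface {
	Watch(from int64) ([]event, error)
}

// consume tracks the last delivered revision and, on interruption, retries
// the watch from that revision so no intermediate change is lost.
func consume(w watcher, retries int) []string {
	var seen []string
	var rev int64
	for i := 0; i <= retries; i++ {
		events, err := w.Watch(rev)
		for _, ev := range events {
			seen = append(seen, ev.Key)
			rev = ev.Revision // resume point if the watch is interrupted
		}
		if err == nil {
			break
		}
	}
	return seen
}

// flakyWatcher simulates a watch that is interrupted after its first event.
type flakyWatcher struct{ calls int }

func (f *flakyWatcher) Watch(from int64) ([]event, error) {
	f.calls++
	all := []event{{1, "a"}, {2, "b"}, {3, "c"}}
	var out []event
	for _, ev := range all {
		if ev.Revision > from {
			out = append(out, ev)
		}
	}
	if f.calls == 1 {
		return out[:1], fmt.Errorf("interrupted") // drop the rest once
	}
	return out, nil
}

func main() {
	fmt.Println(consume(&flakyWatcher{}, 3)) // → [a b c], no duplicates, no gaps
}
```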
Joachim Bauch c8aa4c71e0
Merge pull request #702 from strukturag/dependabot/go_modules/github.com/nats-io/nats-server/v2-2.10.14
build(deps): Bump github.com/nats-io/nats-server/v2 from 2.10.12 to 2.10.14
2024-04-15 08:29:24 +02:00
Joachim Bauch ec9e44f5d6
Merge pull request #700 from strukturag/dependabot/go_modules/google.golang.org/grpc-1.63.2
build(deps): Bump google.golang.org/grpc from 1.63.0 to 1.63.2
2024-04-15 08:29:13 +02:00
dependabot[bot] 543a85f8aa
build(deps): Bump github.com/nats-io/nats-server/v2
Bumps [github.com/nats-io/nats-server/v2](https://github.com/nats-io/nats-server) from 2.10.12 to 2.10.14.
- [Release notes](https://github.com/nats-io/nats-server/releases)
- [Changelog](https://github.com/nats-io/nats-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-server/compare/v2.10.12...v2.10.14)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-server/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-12 20:31:46 +00:00
dependabot[bot] 9f104cb281
build(deps): Bump google.golang.org/grpc from 1.63.0 to 1.63.2
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.63.0 to 1.63.2.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.63.0...v1.63.2)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 20:22:09 +00:00
Joachim Bauch 4e623a8e08
Merge pull request #699 from strukturag/dependabot/go_modules/google.golang.org/grpc-1.63.0
build(deps): Bump google.golang.org/grpc from 1.62.1 to 1.63.0
2024-04-08 11:57:28 +02:00
Joachim Bauch 9ba5b4330a
Switch to "grpc.NewClient" from deprecated "grpc.Dial". 2024-04-08 11:50:15 +02:00
dependabot[bot] 4b6a4dbfe1
build(deps): Bump google.golang.org/grpc from 1.62.1 to 1.63.0
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.62.1 to 1.63.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.62.1...v1.63.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-04 20:04:08 +00:00
Joachim Bauch e1f40a024e
Merge pull request #698 from strukturag/filewatcher-rename
Improve detecting renames in file watcher.
2024-04-04 09:55:55 +02:00
Joachim Bauch 47fc6694ca
Merge pull request #697 from strukturag/dependabot/go_modules/github.com/nats-io/nats.go-1.34.1
build(deps): Bump github.com/nats-io/nats.go from 1.34.0 to 1.34.1
2024-04-04 09:48:07 +02:00
Joachim Bauch d0c711b500
Improve detecting renames in file watcher. 2024-04-04 09:47:59 +02:00
dependabot[bot] 7dc450350b
build(deps): Bump github.com/nats-io/nats.go from 1.34.0 to 1.34.1
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.34.0 to 1.34.1.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.34.0...v1.34.1)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-03 20:20:26 +00:00
Joachim Bauch bd445bd99b
Subscribe through "streams" list instead of "feed" for multistream Janus. 2024-02-27 17:00:43 +01:00
114 changed files with 9415 additions and 1993 deletions


@@ -34,7 +34,3 @@ jobs:
context: docker/janus
load: true
tags: ${{ env.TEST_TAG }}
- name: Test Docker image
run: |
docker run --rm ${{ env.TEST_TAG }} /usr/local/bin/janus --version

.github/workflows/govuln.yml

@@ -0,0 +1,46 @@
name: Go Vulnerability Checker
on:
push:
branches: [ master ]
paths:
- '.github/workflows/govuln.yml'
- '**.go'
- 'go.*'
pull_request:
branches: [ master ]
paths:
- '.github/workflows/govuln.yml'
- '**.go'
- 'go.*'
schedule:
- cron: "0 2 * * SUN"
permissions:
contents: read
jobs:
run:
runs-on: ubuntu-latest
strategy:
matrix:
go-version:
- "1.21"
- "1.22"
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go-version }}
- run: date
- name: Install dependencies
run: |
sudo apt -y update && sudo apt -y install protobuf-compiler
make common
- name: Install and run govulncheck
run: |
set -euo pipefail
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...


@@ -28,7 +28,7 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: "1.20"
go-version: "1.21"
- name: Install dependencies
run: |
@@ -36,13 +36,11 @@ jobs:
make common
- name: lint
uses: golangci/golangci-lint-action@v4.0.0
uses: golangci/golangci-lint-action@v6.0.1
with:
version: latest
args: --timeout=2m0s
skip-cache: true
skip-pkg-cache: true
skip-build-cache: true
dependencies:
name: dependencies
@@ -56,7 +54,7 @@ jobs:
- name: Check minimum supported version of Go
run: |
go mod tidy -go=1.20 -compat=1.20
go mod tidy -go=1.21 -compat=1.21
- name: Check go.mod / go.sum
run: |

.github/workflows/shellcheck.yml

@@ -0,0 +1,27 @@
name: shellcheck
on:
push:
branches: [ master ]
paths:
- '.github/workflows/shellcheck.yml'
- '**.sh'
pull_request:
branches: [ master ]
paths:
- '.github/workflows/shellcheck.yml'
- '**.sh'
permissions:
contents: read
jobs:
lint:
name: shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: shellcheck
run: |
find -name "*.sh" | xargs shellcheck


@@ -24,7 +24,6 @@ jobs:
strategy:
matrix:
go-version:
- "1.20"
- "1.21"
- "1.22"
runs-on: ubuntu-latest
@@ -53,7 +52,6 @@ jobs:
strategy:
matrix:
go-version:
- "1.20"
- "1.21"
- "1.22"
runs-on: ubuntu-latest


@@ -27,7 +27,6 @@ jobs:
strategy:
matrix:
go-version:
- "1.20"
- "1.21"
- "1.22"
runs-on: ubuntu-latest
@@ -64,7 +63,7 @@ jobs:
outfile: cover.lcov
- name: Coveralls Parallel
uses: coverallsapp/github-action@v2.2.3
uses: coverallsapp/github-action@v2.3.0
env:
COVERALLS_FLAG_NAME: run-${{ matrix.go-version }}
with:
@@ -79,7 +78,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Coveralls Finished
uses: coverallsapp/github-action@v2.2.3
uses: coverallsapp/github-action@v2.3.0
with:
github-token: ${{ secrets.github_token }}
parallel-finished: true


@@ -2,6 +2,122 @@
All notable changes to this project will be documented in this file.
## 1.3.1 - 2024-05-23
### Changed
- Bump alpine from 3.19 to 3.20 in /docker/janus
[#746](https://github.com/strukturag/nextcloud-spreed-signaling/pull/746)
- CI: Remove deprecated options from lint workflow.
[#748](https://github.com/strukturag/nextcloud-spreed-signaling/pull/748)
- docker: Update Janus in example image to 1.2.2
[#749](https://github.com/strukturag/nextcloud-spreed-signaling/pull/749)
- Improve detection of actual client IP.
[#747](https://github.com/strukturag/nextcloud-spreed-signaling/pull/747)
### Fixed
- docker: Fix proxy entrypoint.
[#745](https://github.com/strukturag/nextcloud-spreed-signaling/pull/745)
## 1.3.0 - 2024-05-22
### Added
- Support resuming remote sessions
[#715](https://github.com/strukturag/nextcloud-spreed-signaling/pull/715)
- Gracefully shut down signaling server on SIGUSR1.
[#706](https://github.com/strukturag/nextcloud-spreed-signaling/pull/706)
- docker: Add helper scripts to gracefully stop / wait for server.
[#722](https://github.com/strukturag/nextcloud-spreed-signaling/pull/722)
- Support environment variables in some configuration.
[#721](https://github.com/strukturag/nextcloud-spreed-signaling/pull/721)
- Add Context to clients / sessions.
[#732](https://github.com/strukturag/nextcloud-spreed-signaling/pull/732)
- Drop support for Golang 1.20
[#737](https://github.com/strukturag/nextcloud-spreed-signaling/pull/737)
- CI: Run "govulncheck".
[#694](https://github.com/strukturag/nextcloud-spreed-signaling/pull/694)
- Make trusted proxies configurable and default to loopback / private IPs.
[#738](https://github.com/strukturag/nextcloud-spreed-signaling/pull/738)
- Add support for remote streams (preview)
[#708](https://github.com/strukturag/nextcloud-spreed-signaling/pull/708)
- Add throttler for backend requests
[#744](https://github.com/strukturag/nextcloud-spreed-signaling/pull/744)
### Changed
- build(deps): Bump github.com/nats-io/nats.go from 1.34.0 to 1.34.1
[#697](https://github.com/strukturag/nextcloud-spreed-signaling/pull/697)
- build(deps): Bump google.golang.org/grpc from 1.62.1 to 1.63.0
[#699](https://github.com/strukturag/nextcloud-spreed-signaling/pull/699)
- build(deps): Bump google.golang.org/grpc from 1.63.0 to 1.63.2
[#700](https://github.com/strukturag/nextcloud-spreed-signaling/pull/700)
- build(deps): Bump github.com/nats-io/nats-server/v2 from 2.10.12 to 2.10.14
[#702](https://github.com/strukturag/nextcloud-spreed-signaling/pull/702)
- Include previous value with etcd watch events.
[#704](https://github.com/strukturag/nextcloud-spreed-signaling/pull/704)
- build(deps): Bump go.uber.org/zap from 1.17.0 to 1.27.0
[#705](https://github.com/strukturag/nextcloud-spreed-signaling/pull/705)
- Improve support for Janus 1.x
[#669](https://github.com/strukturag/nextcloud-spreed-signaling/pull/669)
- build(deps): Bump sphinx from 7.2.6 to 7.3.5 in /docs
[#709](https://github.com/strukturag/nextcloud-spreed-signaling/pull/709)
- build(deps): Bump sphinx from 7.3.5 to 7.3.7 in /docs
[#712](https://github.com/strukturag/nextcloud-spreed-signaling/pull/712)
- build(deps): Bump golang.org/x/net from 0.21.0 to 0.23.0
[#711](https://github.com/strukturag/nextcloud-spreed-signaling/pull/711)
- Don't keep expiration timestamp in each session.
[#713](https://github.com/strukturag/nextcloud-spreed-signaling/pull/713)
- build(deps): Bump mkdocs from 1.5.3 to 1.6.0 in /docs
[#714](https://github.com/strukturag/nextcloud-spreed-signaling/pull/714)
- Speedup tests by running in parallel
[#718](https://github.com/strukturag/nextcloud-spreed-signaling/pull/718)
- build(deps): Bump golangci/golangci-lint-action from 4.0.0 to 5.0.0
[#719](https://github.com/strukturag/nextcloud-spreed-signaling/pull/719)
- build(deps): Bump golangci/golangci-lint-action from 5.0.0 to 5.1.0
[#720](https://github.com/strukturag/nextcloud-spreed-signaling/pull/720)
- build(deps): Bump coverallsapp/github-action from 2.2.3 to 2.3.0
[#728](https://github.com/strukturag/nextcloud-spreed-signaling/pull/728)
- build(deps): Bump jinja2 from 3.1.3 to 3.1.4 in /docs
[#726](https://github.com/strukturag/nextcloud-spreed-signaling/pull/726)
- build(deps): Bump google.golang.org/protobuf from 1.33.0 to 1.34.1
[#725](https://github.com/strukturag/nextcloud-spreed-signaling/pull/725)
- build(deps): Bump github.com/prometheus/client_golang from 1.19.0 to 1.19.1
[#730](https://github.com/strukturag/nextcloud-spreed-signaling/pull/730)
- build(deps): Bump golangci/golangci-lint-action from 5.1.0 to 6.0.1
[#729](https://github.com/strukturag/nextcloud-spreed-signaling/pull/729)
- build(deps): Bump google.golang.org/grpc from 1.63.2 to 1.64.0
[#734](https://github.com/strukturag/nextcloud-spreed-signaling/pull/734)
- Validate received SDP earlier.
[#707](https://github.com/strukturag/nextcloud-spreed-signaling/pull/707)
- Log something if mcu publisher / subscriber was closed.
[#736](https://github.com/strukturag/nextcloud-spreed-signaling/pull/736)
- build(deps): Bump the etcd group with 4 updates
[#693](https://github.com/strukturag/nextcloud-spreed-signaling/pull/693)
- build(deps): Bump github.com/nats-io/nats.go from 1.34.1 to 1.35.0
[#740](https://github.com/strukturag/nextcloud-spreed-signaling/pull/740)
- Don't use unnecessary pointer to "json.RawMessage".
[#739](https://github.com/strukturag/nextcloud-spreed-signaling/pull/739)
- build(deps): Bump github.com/nats-io/nats-server/v2 from 2.10.14 to 2.10.15
[#741](https://github.com/strukturag/nextcloud-spreed-signaling/pull/741)
- build(deps): Bump github.com/nats-io/nats-server/v2 from 2.10.15 to 2.10.16
[#743](https://github.com/strukturag/nextcloud-spreed-signaling/pull/743)
### Fixed
- Improve detecting renames in file watcher.
[#698](https://github.com/strukturag/nextcloud-spreed-signaling/pull/698)
- Update etcd watch handling.
[#701](https://github.com/strukturag/nextcloud-spreed-signaling/pull/701)
- Prevent goroutine leaks in GRPC tests.
[#716](https://github.com/strukturag/nextcloud-spreed-signaling/pull/716)
- Fix potential race in capabilities test.
[#731](https://github.com/strukturag/nextcloud-spreed-signaling/pull/731)
- Don't log read error after we closed the connection.
[#735](https://github.com/strukturag/nextcloud-spreed-signaling/pull/735)
- Fix lock order inversion when leaving room / publishing room sessions.
[#742](https://github.com/strukturag/nextcloud-spreed-signaling/pull/742)
- Relax "MessageClientMessageData" validation.
[#733](https://github.com/strukturag/nextcloud-spreed-signaling/pull/733)
## 1.2.4 - 2024-04-03
### Added


@@ -52,6 +52,14 @@ ifneq ($(COUNT),)
TESTARGS := $(TESTARGS) -count $(COUNT)
endif
ifneq ($(PARALLEL),)
TESTARGS := $(TESTARGS) -parallel $(PARALLEL)
endif
ifneq ($(VERBOSE),)
TESTARGS := $(TESTARGS) -v
endif
ifeq ($(GOARCH), amd64)
GOPATHBIN := $(GOPATH)/bin
else
@@ -93,18 +101,18 @@ vet: common
$(GO) vet $(ALL_PACKAGES)
test: vet common
$(GO) test -v -timeout $(TIMEOUT) $(TESTARGS) $(ALL_PACKAGES)
$(GO) test -timeout $(TIMEOUT) $(TESTARGS) $(ALL_PACKAGES)
cover: vet common
rm -f cover.out && \
$(GO) test -v -timeout $(TIMEOUT) -coverprofile cover.out $(ALL_PACKAGES) && \
$(GO) test -timeout $(TIMEOUT) -coverprofile cover.out $(ALL_PACKAGES) && \
sed -i "/_easyjson/d" cover.out && \
sed -i "/\.pb\.go/d" cover.out && \
$(GO) tool cover -func=cover.out
coverhtml: vet common
rm -f cover.out && \
$(GO) test -v -timeout $(TIMEOUT) -coverprofile cover.out $(ALL_PACKAGES) && \
$(GO) test -timeout $(TIMEOUT) -coverprofile cover.out $(ALL_PACKAGES) && \
sed -i "/_easyjson/d" cover.out && \
sed -i "/\.pb\.go/d" cover.out && \
$(GO) tool cover -html=cover.out -o coverage.html


@@ -17,7 +17,7 @@ information on the API of the signaling server.
The following tools are required for building the signaling server.
- git
- go >= 1.20
- go >= 1.21
- make
- protobuf-compiler >= 3
@@ -171,7 +171,17 @@ proxy process gracefully after all clients have been disconnected. No new
publishers will be accepted in this case.
### Clustering
### Remote streams (preview)
With Janus 1.1.0 or newer, remote streams are supported, i.e. a subscriber can
receive a published stream from any server. For this, you need to configure
`hostname`, `token_id` and `token_key` in the proxy configuration. Each proxy
server additionally supports maximum `incoming` and `outgoing` bandwidth
settings, which are also taken into account when selecting servers for remote streams.
See `proxy.conf.in` in section `app` for details.
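Putting the options named above together, a proxy configuration for remote streams might look roughly like this. This is only a sketch: the values are placeholders, the `bandwidth` section name and the file-based `token_key` are assumptions here, and `proxy.conf.in` remains the authoritative reference for the actual option names, sections and units.

```ini
[app]
; Hostname under which this proxy can be reached by other proxy servers.
hostname = proxy1.example.invalid
; Token id / key used to authenticate between proxy servers.
token_id = proxy1
token_key = /etc/signaling/proxy1.key

[bandwidth]
; Maximum bandwidth limits; also used when selecting servers for
; remote streams (placeholder section and values, see proxy.conf.in).
incoming = 1024
outgoing = 1024
```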
## Clustering
The signaling server supports a clustering mode where multiple running servers
can be interconnected to form a single "virtual" server. This can be used to
@@ -299,6 +309,8 @@ interface on port `8080` below):
# Enable proxying Websocket requests to the standalone signaling server.
ProxyPass "/standalone-signaling/" "ws://127.0.0.1:8080/"
RequestHeader set X-Real-IP %{REMOTE_ADDR}s
RewriteEngine On
# Websocket connections from the clients.
RewriteRule ^/standalone-signaling/spreed/$ - [L]
@@ -334,6 +346,7 @@ myserver.domain.invalid {
route /standalone-signaling/* {
uri strip_prefix /standalone-signaling
reverse_proxy http://127.0.0.1:8080
header_up X-Real-IP {remote_host}
}
}
```


@@ -22,6 +22,7 @@
package signaling
import (
"bytes"
"fmt"
"net"
"strings"
@@ -31,6 +32,19 @@ type AllowedIps struct {
allowed []*net.IPNet
}
func (a *AllowedIps) String() string {
var b bytes.Buffer
b.WriteString("[")
for idx, n := range a.allowed {
if idx > 0 {
b.WriteString(", ")
}
b.WriteString(n.String())
}
b.WriteString("]")
return b.String()
}
func (a *AllowedIps) Empty() bool {
return len(a.allowed) == 0
}
@@ -99,3 +113,22 @@ func DefaultAllowedIps() *AllowedIps {
}
return result
}
var (
privateIpNets = []string{
// Loopback addresses.
"127.0.0.0/8",
// Private addresses.
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
}
)
func DefaultPrivateIps() *AllowedIps {
allowed, err := ParseAllowedIps(strings.Join(privateIpNets, ","))
if err != nil {
panic(fmt.Errorf("could not parse private ips %+v: %w", privateIpNets, err))
}
return allowed
}


@@ -34,6 +34,9 @@ func TestAllowedIps(t *testing.T) {
if a.Empty() {
t.Fatal("should not be empty")
}
if expected := `[127.0.0.1/32, 192.168.0.1/32, 192.168.1.0/24]`; a.String() != expected {
t.Errorf("expected %s, got %s", expected, a.String())
}
allowed := []string{
"127.0.0.1",


@@ -118,8 +118,8 @@ type BackendRoomInviteRequest struct {
UserIds []string `json:"userids,omitempty"`
// TODO(jojo): We should get rid of "AllUserIds" and find a better way to
// notify existing users the room has changed and they need to update it.
AllUserIds []string `json:"alluserids,omitempty"`
Properties *json.RawMessage `json:"properties,omitempty"`
AllUserIds []string `json:"alluserids,omitempty"`
Properties json.RawMessage `json:"properties,omitempty"`
}
type BackendRoomDisinviteRequest struct {
@@ -127,13 +127,13 @@ type BackendRoomDisinviteRequest struct {
SessionIds []string `json:"sessionids,omitempty"`
// TODO(jojo): We should get rid of "AllUserIds" and find a better way to
// notify existing users the room has changed and they need to update it.
AllUserIds []string `json:"alluserids,omitempty"`
Properties *json.RawMessage `json:"properties,omitempty"`
AllUserIds []string `json:"alluserids,omitempty"`
Properties json.RawMessage `json:"properties,omitempty"`
}
type BackendRoomUpdateRequest struct {
UserIds []string `json:"userids,omitempty"`
Properties *json.RawMessage `json:"properties,omitempty"`
UserIds []string `json:"userids,omitempty"`
Properties json.RawMessage `json:"properties,omitempty"`
}
type BackendRoomDeleteRequest struct {
@@ -154,7 +154,7 @@ type BackendRoomParticipantsRequest struct {
}
type BackendRoomMessageRequest struct {
Data *json.RawMessage `json:"data,omitempty"`
Data json.RawMessage `json:"data,omitempty"`
}
type BackendRoomSwitchToSessionsList []string
@@ -169,7 +169,7 @@ type BackendRoomSwitchToMessageRequest struct {
// In the map, the key is the session id, the value additional details
// (or null) for the session. The details will be included in the request
// to the connected client.
Sessions *json.RawMessage `json:"sessions,omitempty"`
Sessions json.RawMessage `json:"sessions,omitempty"`
// Internal properties
SessionsList BackendRoomSwitchToSessionsList `json:"sessionslist,omitempty"`
@@ -237,8 +237,8 @@ type BackendRoomDialoutResponse struct {
// Requests from the signaling server to the Nextcloud backend.
type BackendClientAuthRequest struct {
Version string `json:"version"`
Params *json.RawMessage `json:"params"`
Version string `json:"version"`
Params json.RawMessage `json:"params"`
}
type BackendClientRequest struct {
@@ -256,7 +256,7 @@
Session *BackendClientSessionRequest `json:"session,omitempty"`
}
func NewBackendClientAuthRequest(params *json.RawMessage) *BackendClientRequest {
func NewBackendClientAuthRequest(params json.RawMessage) *BackendClientRequest {
return &BackendClientRequest{
Type: "auth",
Auth: &BackendClientAuthRequest{
@@ -284,9 +284,9 @@ type BackendClientResponse struct {
}
type BackendClientAuthResponse struct {
Version string `json:"version"`
UserId string `json:"userid"`
User *json.RawMessage `json:"user"`
Version string `json:"version"`
UserId string `json:"userid"`
User json.RawMessage `json:"user"`
}
type BackendClientRoomRequest struct {
@@ -315,14 +315,14 @@ func NewBackendClientRoomRequest(roomid string, userid string, sessionid string)
}
type BackendClientRoomResponse struct {
Version string `json:"version"`
RoomId string `json:"roomid"`
Properties *json.RawMessage `json:"properties"`
Version string `json:"version"`
RoomId string `json:"roomid"`
Properties json.RawMessage `json:"properties"`
// Optional information about the Nextcloud Talk session. Can be used for
// example to define a "userid" for otherwise anonymous users.
// See "RoomSessionData" for a possible content.
Session *json.RawMessage `json:"session,omitempty"`
Session json.RawMessage `json:"session,omitempty"`
Permissions *[]Permission `json:"permissions,omitempty"`
}
@@ -359,12 +359,12 @@ type BackendClientRingResponse struct {
}
type BackendClientSessionRequest struct {
Version string `json:"version"`
RoomId string `json:"roomid"`
Action string `json:"action"`
SessionId string `json:"sessionid"`
UserId string `json:"userid,omitempty"`
User *json.RawMessage `json:"user,omitempty"`
Version string `json:"version"`
RoomId string `json:"roomid"`
Action string `json:"action"`
SessionId string `json:"sessionid"`
UserId string `json:"userid,omitempty"`
User json.RawMessage `json:"user,omitempty"`
}
type BackendClientSessionResponse struct {
@@ -396,8 +396,8 @@ type OcsMeta struct {
}
type OcsBody struct {
Meta OcsMeta `json:"meta"`
Data *json.RawMessage `json:"data"`
Meta OcsMeta `json:"meta"`
Data json.RawMessage `json:"data"`
}
type OcsResponse struct {


@@ -27,6 +27,7 @@ import (
)
func TestBackendChecksum(t *testing.T) {
t.Parallel()
rnd := newRandomString(32)
body := []byte{1, 2, 3, 4, 5}
secret := []byte("shared-secret")
@@ -58,6 +59,7 @@ }
}
func TestValidNumbers(t *testing.T) {
t.Parallel()
valid := []string{
"+12",
"+12345",


@@ -24,6 +24,7 @@ package signaling
import (
"encoding/json"
"fmt"
"net/url"
"github.com/golang-jwt/jwt/v4"
)
@@ -48,6 +49,14 @@ type ProxyClientMessage struct {
Payload *PayloadProxyClientMessage `json:"payload,omitempty"`
}
func (m *ProxyClientMessage) String() string {
data, err := json.Marshal(m)
if err != nil {
return fmt.Sprintf("Could not serialize %#v: %s", m, err)
}
return string(data)
}
func (m *ProxyClientMessage) CheckValid() error {
switch m.Type {
case "":
@@ -115,6 +124,14 @@ type ProxyServerMessage struct {
Event *EventProxyServerMessage `json:"event,omitempty"`
}
func (r *ProxyServerMessage) String() string {
data, err := json.Marshal(r)
if err != nil {
return fmt.Sprintf("Could not serialize %#v: %s", r, err)
}
return string(data)
}
func (r *ProxyServerMessage) CloseAfterSend(session Session) bool {
switch r.Type {
case "bye":
@@ -185,6 +202,14 @@ type CommandProxyClientMessage struct {
ClientId string `json:"clientId,omitempty"`
Bitrate int `json:"bitrate,omitempty"`
MediaTypes MediaType `json:"mediatypes,omitempty"`
RemoteUrl string `json:"remoteUrl,omitempty"`
remoteUrl *url.URL
RemoteToken string `json:"remoteToken,omitempty"`
Hostname string `json:"hostname,omitempty"`
Port int `json:"port,omitempty"`
RtcpPort int `json:"rtcpPort,omitempty"`
}
func (m *CommandProxyClientMessage) CheckValid() error {
@@ -202,6 +227,17 @@ func (m *CommandProxyClientMessage) CheckValid() error {
if m.StreamType == "" {
return fmt.Errorf("stream type missing")
}
if m.RemoteUrl != "" {
if m.RemoteToken == "" {
return fmt.Errorf("remote token missing")
}
remoteUrl, err := url.Parse(m.RemoteUrl)
if err != nil {
return fmt.Errorf("invalid remote url: %w", err)
}
m.remoteUrl = remoteUrl
}
case "delete-publisher":
fallthrough
case "delete-subscriber":
@@ -217,6 +253,8 @@ type CommandProxyServerMessage struct {
Sid string `json:"sid,omitempty"`
Bitrate int `json:"bitrate,omitempty"`
Streams []PublisherStream `json:"streams,omitempty"`
}
// Type "payload"
@@ -261,12 +299,41 @@ type PayloadProxyServerMessage struct {
// Type "event"
type EventProxyServerBandwidth struct {
// Incoming is the bandwidth utilization for publishers in percent.
Incoming *float64 `json:"incoming,omitempty"`
// Outgoing is the bandwidth utilization for subscribers in percent.
Outgoing *float64 `json:"outgoing,omitempty"`
}
func (b *EventProxyServerBandwidth) String() string {
if b.Incoming != nil && b.Outgoing != nil {
return fmt.Sprintf("bandwidth: incoming=%.3f%%, outgoing=%.3f%%", *b.Incoming, *b.Outgoing)
} else if b.Incoming != nil {
return fmt.Sprintf("bandwidth: incoming=%.3f%%, outgoing=unlimited", *b.Incoming)
} else if b.Outgoing != nil {
return fmt.Sprintf("bandwidth: incoming=unlimited, outgoing=%.3f%%", *b.Outgoing)
} else {
return "bandwidth: incoming=unlimited, outgoing=unlimited"
}
}
func (b EventProxyServerBandwidth) AllowIncoming() bool {
return b.Incoming == nil || *b.Incoming < 100
}
func (b EventProxyServerBandwidth) AllowOutgoing() bool {
return b.Outgoing == nil || *b.Outgoing < 100
}
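The bandwidth gating added above treats a missing value as "unlimited" and blocks new streams once utilization reaches 100%. A self-contained sketch of the same logic (the `Bandwidth` type here is a stand-in for `EventProxyServerMessage`'s `EventProxyServerBandwidth`, not the real type):

```go
package main

import "fmt"

// Bandwidth mirrors the event payload: a nil pointer means "unlimited",
// otherwise the value is a utilization percentage.
type Bandwidth struct {
	Incoming *float64
	Outgoing *float64
}

// AllowIncoming reports whether new publishers may still be accepted.
func (b Bandwidth) AllowIncoming() bool {
	return b.Incoming == nil || *b.Incoming < 100
}

// AllowOutgoing reports whether new subscribers may still be accepted.
func (b Bandwidth) AllowOutgoing() bool {
	return b.Outgoing == nil || *b.Outgoing < 100
}

func main() {
	half := 50.0
	full := 100.0
	fmt.Println(Bandwidth{}.AllowIncoming())                // true: unlimited
	fmt.Println(Bandwidth{Incoming: &half}.AllowIncoming()) // true: below 100%
	fmt.Println(Bandwidth{Incoming: &full}.AllowIncoming()) // false: saturated
}
```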
type EventProxyServerMessage struct {
Type string `json:"type"`
ClientId string `json:"clientId,omitempty"`
Load int64 `json:"load,omitempty"`
Sid string `json:"sid,omitempty"`
Bandwidth *EventProxyServerBandwidth `json:"bandwidth,omitempty"`
}
// Information on a proxy in the etcd cluster.


@@ -32,6 +32,7 @@ import (
"time"
"github.com/golang-jwt/jwt/v4"
"github.com/pion/sdp/v3"
)
const (
@@ -42,6 +43,11 @@
HelloVersionV2 = "2.0"
)
var (
ErrNoSdp = NewError("no_sdp", "Payload does not contain a SDP.")
ErrInvalidSdp = NewError("invalid_sdp", "Payload does not contain a valid SDP.")
)
// ClientMessage is a message that is sent from a client to the server.
type ClientMessage struct {
json.Marshaler
@@ -192,12 +198,12 @@ func (r *ServerMessage) CloseAfterSend(session Session) bool {
}
func (r *ServerMessage) IsChatRefresh() bool {
if r.Type != "message" || r.Message == nil || r.Message.Data == nil || len(*r.Message.Data) == 0 {
if r.Type != "message" || r.Message == nil || len(r.Message.Data) == 0 {
return false
}
var data MessageServerMessageData
if err := json.Unmarshal(*r.Message.Data, &data); err != nil {
if err := json.Unmarshal(r.Message.Data, &data); err != nil {
return false
}
@@ -360,7 +366,7 @@ func (p *HelloV2AuthParams) CheckValid() error {
type HelloV2TokenClaims struct {
jwt.RegisteredClaims
UserData *json.RawMessage `json:"userdata,omitempty"`
UserData json.RawMessage `json:"userdata,omitempty"`
}
type HelloClientMessageAuth struct {
@@ -368,7 +374,7 @@
// "HelloClientTypeClient"
Type string `json:"type,omitempty"`
Params *json.RawMessage `json:"params"`
Params json.RawMessage `json:"params"`
Url string `json:"url"`
parsedUrl *url.URL
@@ -387,7 +393,7 @@
Features []string `json:"features,omitempty"`
// The authentication credentials.
Auth HelloClientMessageAuth `json:"auth"`
Auth *HelloClientMessageAuth `json:"auth,omitempty"`
}
func (m *HelloClientMessage) CheckValid() error {
@@ -395,7 +401,7 @@
return InvalidHelloVersion
}
if m.ResumeId == "" {
if m.Auth.Params == nil || len(*m.Auth.Params) == 0 {
if m.Auth == nil || len(m.Auth.Params) == 0 {
return fmt.Errorf("params missing")
}
if m.Auth.Type == "" {
@@ -419,14 +425,14 @@
case HelloVersionV1:
// No additional validation necessary.
case HelloVersionV2:
if err := json.Unmarshal(*m.Auth.Params, &m.Auth.helloV2Params); err != nil {
if err := json.Unmarshal(m.Auth.Params, &m.Auth.helloV2Params); err != nil {
return err
} else if err := m.Auth.helloV2Params.CheckValid(); err != nil {
return err
}
}
case HelloClientTypeInternal:
if err := json.Unmarshal(*m.Auth.Params, &m.Auth.internalParams); err != nil {
if err := json.Unmarshal(m.Auth.Params, &m.Auth.internalParams); err != nil {
return err
} else if err := m.Auth.internalParams.CheckValid(); err != nil {
return err
@@ -528,8 +534,8 @@
}
type RoomServerMessage struct {
RoomId string `json:"roomid"`
Properties *json.RawMessage `json:"properties,omitempty"`
RoomId string `json:"roomid"`
Properties json.RawMessage `json:"properties,omitempty"`
}
type RoomErrorDetails struct {
@@ -554,7 +560,7 @@ type MessageClientMessageRecipient struct {
type MessageClientMessage struct {
Recipient MessageClientMessageRecipient `json:"recipient"`
Data *json.RawMessage `json:"data"`
Data json.RawMessage `json:"data"`
}
type MessageClientMessageData struct {
@@ -563,17 +569,44 @@
RoomType string `json:"roomType"`
Bitrate int `json:"bitrate,omitempty"`
Payload map[string]interface{} `json:"payload"`
offerSdp *sdp.SessionDescription // Only set if Type == "offer"
answerSdp *sdp.SessionDescription // Only set if Type == "answer"
}
func (m *MessageClientMessageData) CheckValid() error {
if !IsValidStreamType(m.RoomType) {
if m.RoomType != "" && !IsValidStreamType(m.RoomType) {
return fmt.Errorf("invalid room type: %s", m.RoomType)
}
if m.Type == "offer" || m.Type == "answer" {
sdpValue, found := m.Payload["sdp"]
if !found {
return ErrNoSdp
}
sdpText, ok := sdpValue.(string)
if !ok {
return ErrInvalidSdp
}
var sdp sdp.SessionDescription
if err := sdp.Unmarshal([]byte(sdpText)); err != nil {
return NewErrorDetail("invalid_sdp", "Error parsing SDP from payload.", map[string]interface{}{
"error": err.Error(),
})
}
switch m.Type {
case "offer":
m.offerSdp = &sdp
case "answer":
m.answerSdp = &sdp
}
}
return nil
}
func (m *MessageClientMessage) CheckValid() error {
if m.Data == nil || len(*m.Data) == 0 {
if len(m.Data) == 0 {
return fmt.Errorf("message empty")
}
switch m.Recipient.Type {
@@ -614,7 +647,7 @@
Sender *MessageServerMessageSender `json:"sender"`
Recipient *MessageClientMessageRecipient `json:"recipient,omitempty"`
Data *json.RawMessage `json:"data"`
Data json.RawMessage `json:"data"`
}
// Type "control"
@@ -631,7 +664,7 @@
Sender *MessageServerMessageSender `json:"sender"`
Recipient *MessageClientMessageRecipient `json:"recipient,omitempty"`
Data *json.RawMessage `json:"data"`
Data json.RawMessage `json:"data"`
}
// Type "internal"
@@ -660,10 +693,10 @@ type AddSessionOptions struct {
type AddSessionInternalClientMessage struct {
CommonSessionInternalClientMessage
UserId string `json:"userid,omitempty"`
User *json.RawMessage `json:"user,omitempty"`
Flags uint32 `json:"flags,omitempty"`
InCall *int `json:"incall,omitempty"`
UserId string `json:"userid,omitempty"`
User json.RawMessage `json:"user,omitempty"`
Flags uint32 `json:"flags,omitempty"`
InCall *int `json:"incall,omitempty"`
Options *AddSessionOptions `json:"options,omitempty"`
}
@@ -815,10 +848,10 @@ type InternalServerMessage struct {
// Type "event"
type RoomEventServerMessage struct {
RoomId string `json:"roomid"`
Properties *json.RawMessage `json:"properties,omitempty"`
RoomId string `json:"roomid"`
Properties json.RawMessage `json:"properties,omitempty"`
// TODO(jojo): Change "InCall" to "int" when #914 has landed in NC Talk.
InCall *json.RawMessage `json:"incall,omitempty"`
InCall json.RawMessage `json:"incall,omitempty"`
Changed []map[string]interface{} `json:"changed,omitempty"`
Users []map[string]interface{} `json:"users,omitempty"`
@@ -845,8 +878,8 @@ type RoomDisinviteEventServerMessage struct {
}
type RoomEventMessage struct {
RoomId string `json:"roomid"`
Data *json.RawMessage `json:"data,omitempty"`
RoomId string `json:"roomid"`
Data json.RawMessage `json:"data,omitempty"`
}
type RoomFlagsServerMessage struct {
@@ -896,10 +929,10 @@ func (m *EventServerMessage) String() {
}
type EventServerMessageSessionEntry struct {
SessionId string `json:"sessionid"`
UserId string `json:"userid"`
User *json.RawMessage `json:"user,omitempty"`
RoomSessionId string `json:"roomsessionid,omitempty"`
SessionId string `json:"sessionid"`
UserId string `json:"userid"`
User json.RawMessage `json:"user,omitempty"`
RoomSessionId string `json:"roomsessionid,omitempty"`
}
func (e *EventServerMessageSessionEntry) Clone() *EventServerMessageSessionEntry {
@@ -932,9 +965,9 @@ type AnswerOfferMessage struct {
type TransientDataClientMessage struct {
Type string `json:"type"`
Key string `json:"key,omitempty"`
Value *json.RawMessage `json:"value,omitempty"`
TTL time.Duration `json:"ttl,omitempty"`
Key string `json:"key,omitempty"`
Value json.RawMessage `json:"value,omitempty"`
TTL time.Duration `json:"ttl,omitempty"`
}
func (m *TransientDataClientMessage) CheckValid() error {


@@ -81,6 +81,7 @@ func testMessages(t *testing.T, messageType string, valid_messages []testCheckVa
}
func TestClientMessage(t *testing.T) {
t.Parallel()
// The message needs a type.
msg := ClientMessage{}
if err := msg.CheckValid(); err == nil {
@@ -89,30 +90,31 @@ }
}
func TestHelloClientMessage(t *testing.T) {
t.Parallel()
internalAuthParams := []byte("{\"backend\":\"https://domain.invalid\"}")
tokenAuthParams := []byte("{\"token\":\"invalid-token\"}")
valid_messages := []testCheckValid{
// Hello version 1
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Params: &json.RawMessage{'{', '}'},
Auth: &HelloClientMessageAuth{
Params: json.RawMessage("{}"),
Url: "https://domain.invalid",
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Type: "client",
Params: &json.RawMessage{'{', '}'},
Params: json.RawMessage("{}"),
Url: "https://domain.invalid",
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Type: "internal",
Params: (*json.RawMessage)(&internalAuthParams),
Params: internalAuthParams,
},
},
&HelloClientMessage{
@@ -122,16 +124,16 @@
// Hello version 2
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Params: (*json.RawMessage)(&tokenAuthParams),
Auth: &HelloClientMessageAuth{
Params: tokenAuthParams,
Url: "https://domain.invalid",
},
},
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Type: "client",
Params: (*json.RawMessage)(&tokenAuthParams),
Params: tokenAuthParams,
Url: "https://domain.invalid",
},
},
@@ -147,75 +149,75 @@
&HelloClientMessage{Version: HelloVersionV1},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Params: &json.RawMessage{'{', '}'},
Auth: &HelloClientMessageAuth{
Params: json.RawMessage("{}"),
Type: "invalid-type",
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Url: "https://domain.invalid",
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Params: &json.RawMessage{'{', '}'},
Auth: &HelloClientMessageAuth{
Params: json.RawMessage("{}"),
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Params: &json.RawMessage{'{', '}'},
Auth: &HelloClientMessageAuth{
Params: json.RawMessage("{}"),
Url: "invalid-url",
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Type: "internal",
Params: &json.RawMessage{'{', '}'},
Params: json.RawMessage("{}"),
},
},
&HelloClientMessage{
Version: HelloVersionV1,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Type: "internal",
Params: &json.RawMessage{'x', 'y', 'z'}, // Invalid JSON.
Params: json.RawMessage("xyz"), // Invalid JSON.
},
},
// Hello version 2
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Auth: &HelloClientMessageAuth{
Url: "https://domain.invalid",
},
},
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Params: (*json.RawMessage)(&tokenAuthParams),
Auth: &HelloClientMessageAuth{
Params: tokenAuthParams,
},
},
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Params: (*json.RawMessage)(&tokenAuthParams),
Auth: &HelloClientMessageAuth{
Params: tokenAuthParams,
Url: "invalid-url",
},
},
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Params: (*json.RawMessage)(&internalAuthParams),
Auth: &HelloClientMessageAuth{
Params: internalAuthParams,
Url: "https://domain.invalid",
},
},
&HelloClientMessage{
Version: HelloVersionV2,
Auth: HelloClientMessageAuth{
Params: &json.RawMessage{'x', 'y', 'z'}, // Invalid JSON.
Auth: &HelloClientMessageAuth{
Params: json.RawMessage("xyz"), // Invalid JSON.
Url: "https://domain.invalid",
},
},
@ -233,26 +235,27 @@ func TestHelloClientMessage(t *testing.T) {
}
func TestMessageClientMessage(t *testing.T) {
t.Parallel()
valid_messages := []testCheckValid{
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
Type: "session",
SessionId: "the-session-id",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
Type: "user",
UserId: "the-user-id",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
Type: "room",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
}
invalid_messages := []testCheckValid{
@ -267,20 +270,20 @@ func TestMessageClientMessage(t *testing.T) {
Recipient: MessageClientMessageRecipient{
Type: "session",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
Type: "session",
UserId: "the-user-id",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
Type: "user",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
@ -293,13 +296,13 @@ func TestMessageClientMessage(t *testing.T) {
Type: "user",
SessionId: "the-user-id",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
&MessageClientMessage{
Recipient: MessageClientMessageRecipient{
Type: "unknown-type",
},
Data: &json.RawMessage{'{', '}'},
Data: json.RawMessage("{}"),
},
}
testMessages(t, "message", valid_messages, invalid_messages)
@ -314,6 +317,7 @@ func TestMessageClientMessage(t *testing.T) {
}
func TestByeClientMessage(t *testing.T) {
t.Parallel()
// Any "bye" message is valid.
valid_messages := []testCheckValid{
&ByeClientMessage{},
@ -332,6 +336,7 @@ func TestByeClientMessage(t *testing.T) {
}
func TestRoomClientMessage(t *testing.T) {
t.Parallel()
// Any "room" message is valid.
valid_messages := []testCheckValid{
&RoomClientMessage{},
@ -350,6 +355,7 @@ func TestRoomClientMessage(t *testing.T) {
}
func TestErrorMessages(t *testing.T) {
t.Parallel()
id := "request-id"
msg := ClientMessage{
Id: id,
@ -382,12 +388,13 @@ func TestErrorMessages(t *testing.T) {
}
func TestIsChatRefresh(t *testing.T) {
t.Parallel()
var msg ServerMessage
data_true := []byte("{\"type\":\"chat\",\"chat\":{\"refresh\":true}}")
msg = ServerMessage{
Type: "message",
Message: &MessageServerMessage{
Data: (*json.RawMessage)(&data_true),
Data: data_true,
},
}
if !msg.IsChatRefresh() {
@ -398,7 +405,7 @@ func TestIsChatRefresh(t *testing.T) {
msg = ServerMessage{
Type: "message",
Message: &MessageServerMessage{
Data: (*json.RawMessage)(&data_false),
Data: data_false,
},
}
if msg.IsChatRefresh() {
@ -426,6 +433,7 @@ func assertEqualStrings(t *testing.T, expected, result []string) {
}
func Test_Welcome_AddRemoveFeature(t *testing.T) {
t.Parallel()
var msg WelcomeServerMessage
assertEqualStrings(t, []string{}, msg.Features)


@ -280,6 +280,8 @@ func (e *asyncEventsNats) Close() {
sub.close()
}
}(e.sessionSubscriptions)
// Can't use clear(...) here as the maps are processed asynchronously by the
// goroutines above.
e.backendRoomSubscriptions = make(map[string]*asyncBackendRoomSubscriberNats)
e.roomSubscriptions = make(map[string]*asyncRoomSubscriberNats)
e.userSubscriptions = make(map[string]*asyncUserSubscriberNats)


@ -194,7 +194,7 @@ func (b *BackendClient) PerformJSONRequest(ctx context.Context, u *url.URL, requ
if err := json.Unmarshal(body, &ocs); err != nil {
log.Printf("Could not decode OCS response %s from %s: %s", string(body), req.URL, err)
return err
} else if ocs.Ocs == nil || ocs.Ocs.Data == nil {
} else if ocs.Ocs == nil || len(ocs.Ocs.Data) == 0 {
log.Printf("Incomplete OCS response %s from %s", string(body), req.URL)
return ErrIncompleteResponse
}
@ -205,8 +205,8 @@ func (b *BackendClient) PerformJSONRequest(ctx context.Context, u *url.URL, requ
return ErrThrottledResponse
}
if err := json.Unmarshal(*ocs.Ocs.Data, response); err != nil {
log.Printf("Could not decode OCS response body %s from %s: %s", string(*ocs.Ocs.Data), req.URL, err)
if err := json.Unmarshal(ocs.Ocs.Data, response); err != nil {
log.Printf("Could not decode OCS response body %s from %s: %s", string(ocs.Ocs.Data), req.URL, err)
return err
}
} else if err := json.Unmarshal(body, response); err != nil {


@ -45,7 +45,7 @@ func returnOCS(t *testing.T, w http.ResponseWriter, body []byte) {
StatusCode: http.StatusOK,
Message: "OK",
},
Data: (*json.RawMessage)(&body),
Data: body,
},
}
if strings.Contains(t.Name(), "Throttled") {
@ -70,6 +70,8 @@ func returnOCS(t *testing.T, w http.ResponseWriter, body []byte) {
}
func TestPostOnRedirect(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
r := mux.NewRouter()
r.HandleFunc("/ocs/v2.php/one", func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "/ocs/v2.php/two", http.StatusTemporaryRedirect)
@ -125,6 +127,8 @@ func TestPostOnRedirect(t *testing.T) {
}
func TestPostOnRedirectDifferentHost(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
r := mux.NewRouter()
r.HandleFunc("/ocs/v2.php/one", func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "http://domain.invalid/ocs/v2.php/two", http.StatusTemporaryRedirect)
@ -165,6 +169,8 @@ func TestPostOnRedirectDifferentHost(t *testing.T) {
}
func TestPostOnRedirectStatusFound(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
r := mux.NewRouter()
r.HandleFunc("/ocs/v2.php/one", func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "/ocs/v2.php/two", http.StatusFound)
@ -217,6 +223,8 @@ func TestPostOnRedirectStatusFound(t *testing.T) {
}
func TestHandleThrottled(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
r := mux.NewRouter()
r.HandleFunc("/ocs/v2.php/one", func(w http.ResponseWriter, r *http.Request) {
returnOCS(t, w, []byte("[]"))


@ -92,6 +92,7 @@ func testBackends(t *testing.T, config *BackendConfiguration, valid_urls [][]str
}
func TestIsUrlAllowed_Compat(t *testing.T) {
CatchLogForTest(t)
// Old-style configuration
valid_urls := []string{
"http://domain.invalid",
@ -114,6 +115,7 @@ func TestIsUrlAllowed_Compat(t *testing.T) {
}
func TestIsUrlAllowed_CompatForceHttps(t *testing.T) {
CatchLogForTest(t)
// Old-style configuration, force HTTPS
valid_urls := []string{
"https://domain.invalid",
@ -135,6 +137,7 @@ func TestIsUrlAllowed_CompatForceHttps(t *testing.T) {
}
func TestIsUrlAllowed(t *testing.T) {
CatchLogForTest(t)
valid_urls := [][]string{
{"https://domain.invalid/foo", string(testBackendSecret) + "-foo"},
{"https://domain.invalid/foo/", string(testBackendSecret) + "-foo"},
@ -180,6 +183,7 @@ func TestIsUrlAllowed(t *testing.T) {
}
func TestIsUrlAllowed_EmptyAllowlist(t *testing.T) {
CatchLogForTest(t)
valid_urls := []string{}
invalid_urls := []string{
"http://domain.invalid",
@ -197,6 +201,7 @@ func TestIsUrlAllowed_EmptyAllowlist(t *testing.T) {
}
func TestIsUrlAllowed_AllowAll(t *testing.T) {
CatchLogForTest(t)
valid_urls := []string{
"http://domain.invalid",
"https://domain.invalid",
@ -222,6 +227,7 @@ type ParseBackendIdsTestcase struct {
}
func TestParseBackendIds(t *testing.T) {
CatchLogForTest(t)
testcases := []ParseBackendIdsTestcase{
{"", nil},
{"backend1", []string{"backend1"}},
@ -241,6 +247,7 @@ func TestParseBackendIds(t *testing.T) {
}
func TestBackendReloadNoChange(t *testing.T) {
CatchLogForTest(t)
current := testutil.ToFloat64(statsBackendsCurrent)
original_config := goconf.NewConfigFile()
original_config.AddOption("backend", "backends", "backend1, backend2")
@ -276,6 +283,7 @@ func TestBackendReloadNoChange(t *testing.T) {
}
func TestBackendReloadChangeExistingURL(t *testing.T) {
CatchLogForTest(t)
current := testutil.ToFloat64(statsBackendsCurrent)
original_config := goconf.NewConfigFile()
original_config.AddOption("backend", "backends", "backend1, backend2")
@ -316,6 +324,7 @@ func TestBackendReloadChangeExistingURL(t *testing.T) {
}
func TestBackendReloadChangeSecret(t *testing.T) {
CatchLogForTest(t)
current := testutil.ToFloat64(statsBackendsCurrent)
original_config := goconf.NewConfigFile()
original_config.AddOption("backend", "backends", "backend1, backend2")
@ -354,6 +363,7 @@ func TestBackendReloadChangeSecret(t *testing.T) {
}
func TestBackendReloadAddBackend(t *testing.T) {
CatchLogForTest(t)
current := testutil.ToFloat64(statsBackendsCurrent)
original_config := goconf.NewConfigFile()
original_config.AddOption("backend", "backends", "backend1")
@ -394,6 +404,7 @@ func TestBackendReloadAddBackend(t *testing.T) {
}
func TestBackendReloadRemoveHost(t *testing.T) {
CatchLogForTest(t)
current := testutil.ToFloat64(statsBackendsCurrent)
original_config := goconf.NewConfigFile()
original_config.AddOption("backend", "backends", "backend1, backend2")
@ -431,6 +442,7 @@ func TestBackendReloadRemoveHost(t *testing.T) {
}
func TestBackendReloadRemoveBackendFromSharedHost(t *testing.T) {
CatchLogForTest(t)
current := testutil.ToFloat64(statsBackendsCurrent)
original_config := goconf.NewConfigFile()
original_config.AddOption("backend", "backends", "backend1, backend2")
@ -486,6 +498,8 @@ func mustParse(s string) *url.URL {
}
func TestBackendConfiguration_Etcd(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
etcd, client := NewEtcdClientForTest(t)
url1 := "https://domain1.invalid/foo"
@ -619,6 +633,8 @@ func TestBackendConfiguration_Etcd(t *testing.T) {
}
func TestBackendCommonSecret(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
u1, err := url.Parse("http://domain1.invalid")
if err != nil {
t.Fatal(err)


@ -277,7 +277,7 @@ func (b *BackendServer) parseRequestBody(f func(http.ResponseWriter, *http.Reque
}
}
func (b *BackendServer) sendRoomInvite(roomid string, backend *Backend, userids []string, properties *json.RawMessage) {
func (b *BackendServer) sendRoomInvite(roomid string, backend *Backend, userids []string, properties json.RawMessage) {
msg := &AsyncMessage{
Type: "message",
Message: &ServerMessage{
@ -347,7 +347,7 @@ func (b *BackendServer) sendRoomDisinvite(roomid string, backend *Backend, reaso
wg.Wait()
}
func (b *BackendServer) sendRoomUpdate(roomid string, backend *Backend, notified_userids []string, all_userids []string, properties *json.RawMessage) {
func (b *BackendServer) sendRoomUpdate(roomid string, backend *Backend, notified_userids []string, all_userids []string, properties json.RawMessage) {
msg := &AsyncMessage{
Type: "message",
Message: &ServerMessage{
@ -553,11 +553,11 @@ func (b *BackendServer) sendRoomSwitchTo(roomid string, backend *Backend, reques
var wg sync.WaitGroup
var mu sync.Mutex
if request.SwitchTo.Sessions != nil {
if len(request.SwitchTo.Sessions) > 0 {
// We support either a list of sessions or a map with additional details per session.
if (*request.SwitchTo.Sessions)[0] == '[' {
if request.SwitchTo.Sessions[0] == '[' {
var sessionsList BackendRoomSwitchToSessionsList
if err := json.Unmarshal(*request.SwitchTo.Sessions, &sessionsList); err != nil {
if err := json.Unmarshal(request.SwitchTo.Sessions, &sessionsList); err != nil {
return err
}
@ -595,7 +595,7 @@ func (b *BackendServer) sendRoomSwitchTo(roomid string, backend *Backend, reques
request.SwitchTo.SessionsMap = nil
} else {
var sessionsMap BackendRoomSwitchToSessionsMap
if err := json.Unmarshal(*request.SwitchTo.Sessions, &sessionsMap); err != nil {
if err := json.Unmarshal(request.SwitchTo.Sessions, &sessionsMap); err != nil {
return err
}
@ -761,6 +761,16 @@ func (b *BackendServer) startDialout(roomid string, backend *Backend, backendUrl
}
func (b *BackendServer) roomHandler(w http.ResponseWriter, r *http.Request, body []byte) {
throttle, err := b.hub.throttler.CheckBruteforce(r.Context(), b.hub.getRealUserIP(r), "BackendRoomAuth")
if err == ErrBruteforceDetected {
http.Error(w, "Too many requests", http.StatusTooManyRequests)
return
} else if err != nil {
log.Printf("Error checking for bruteforce: %s", err)
http.Error(w, "Could not check for bruteforce", http.StatusInternalServerError)
return
}
v := mux.Vars(r)
roomid := v["roomid"]
@ -773,6 +783,7 @@ func (b *BackendServer) roomHandler(w http.ResponseWriter, r *http.Request, body
if backend == nil {
// Unknown backend URL passed, return immediately.
throttle(r.Context())
http.Error(w, "Authentication check failed", http.StatusForbidden)
return
}
@ -794,12 +805,14 @@ func (b *BackendServer) roomHandler(w http.ResponseWriter, r *http.Request, body
}
if backend == nil {
throttle(r.Context())
http.Error(w, "Authentication check failed", http.StatusForbidden)
return
}
}
if !ValidateBackendChecksum(r, body, backend.Secret()) {
throttle(r.Context())
http.Error(w, "Authentication check failed", http.StatusForbidden)
return
}
@ -814,7 +827,6 @@ func (b *BackendServer) roomHandler(w http.ResponseWriter, r *http.Request, body
request.ReceivedTime = time.Now().UnixNano()
var response any
var err error
switch request.Type {
case "invite":
b.sendRoomInvite(roomid, backend, request.Invite.UserIds, request.Invite.Properties)
@ -881,15 +893,9 @@ func (b *BackendServer) roomHandler(w http.ResponseWriter, r *http.Request, body
}
func (b *BackendServer) allowStatsAccess(r *http.Request) bool {
addr := getRealUserIP(r)
if strings.Contains(addr, ":") {
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
}
addr := b.hub.getRealUserIP(r)
ip := net.ParseIP(addr)
if ip == nil {
if len(ip) == 0 {
return false
}

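The `roomHandler` changes above check for brute force up front and invoke the returned `throttle` callback only when an authentication check actually fails. A standalone sketch of that pattern; the names here are illustrative, not the repository's real `CheckBruteforce` API:

```go
package main

import (
	"errors"
	"fmt"
)

var errBruteforceDetected = errors.New("bruteforce detected")

type throttler struct {
	attempts map[string]int
	max      int
}

// checkBruteforce reports whether the client is already blocked and, if not,
// returns a callback that records one more failed attempt.
func (t *throttler) checkBruteforce(client string) (func(), error) {
	if t.attempts[client] >= t.max {
		return nil, errBruteforceDetected
	}
	return func() { t.attempts[client]++ }, nil
}

func main() {
	tr := &throttler{attempts: map[string]int{}, max: 3}
	for i := 0; i < 5; i++ {
		throttle, err := tr.checkBruteforce("1.2.3.4")
		if err != nil {
			fmt.Println("blocked")
			continue
		}
		// Simulate a failed auth check: only then is the penalty recorded.
		throttle()
		fmt.Println("failed attempt", i+1)
	}
}
```

Successful requests simply never call `throttle`, so legitimate clients are not penalized.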

@ -30,6 +30,7 @@ import (
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"net/textproto"
@ -275,6 +276,8 @@ func expectRoomlistEvent(ch chan *AsyncMessage, msgType string) (*EventServerMes
}
func TestBackendServer_NoAuth(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, _, _, server := CreateBackendServerForTest(t)
roomId := "the-room-id"
@ -301,6 +304,8 @@ func TestBackendServer_NoAuth(t *testing.T) {
}
func TestBackendServer_InvalidAuth(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, _, _, server := CreateBackendServerForTest(t)
roomId := "the-room-id"
@ -329,6 +334,8 @@ func TestBackendServer_InvalidAuth(t *testing.T) {
}
func TestBackendServer_OldCompatAuth(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, _, _, server := CreateBackendServerForTest(t)
roomId := "the-room-id"
@ -343,7 +350,7 @@ func TestBackendServer_OldCompatAuth(t *testing.T) {
AllUserIds: []string{
userid,
},
Properties: &roomProperties,
Properties: roomProperties,
},
}
@ -378,6 +385,8 @@ func TestBackendServer_OldCompatAuth(t *testing.T) {
}
func TestBackendServer_InvalidBody(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, _, _, server := CreateBackendServerForTest(t)
roomId := "the-room-id"
@ -397,6 +406,8 @@ func TestBackendServer_InvalidBody(t *testing.T) {
}
func TestBackendServer_UnsupportedRequest(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, _, _, server := CreateBackendServerForTest(t)
msg := &BackendServerRoomRequest{
@ -423,8 +434,10 @@ func TestBackendServer_UnsupportedRequest(t *testing.T) {
}
func TestBackendServer_RoomInvite(t *testing.T) {
CatchLogForTest(t)
for _, backend := range eventBackendsForTest {
t.Run(backend, func(t *testing.T) {
t.Parallel()
RunTestBackendServer_RoomInvite(t)
})
}
@ -468,7 +481,7 @@ func RunTestBackendServer_RoomInvite(t *testing.T) {
AllUserIds: []string{
userid,
},
Properties: &roomProperties,
Properties: roomProperties,
},
}
@ -497,14 +510,16 @@ func RunTestBackendServer_RoomInvite(t *testing.T) {
t.Errorf("Expected invite, got %+v", event)
} else if event.Invite.RoomId != roomId {
t.Errorf("Expected room %s, got %+v", roomId, event)
} else if event.Invite.Properties == nil || !bytes.Equal(*event.Invite.Properties, roomProperties) {
t.Errorf("Room properties don't match: expected %s, got %s", string(roomProperties), string(*event.Invite.Properties))
} else if !bytes.Equal(event.Invite.Properties, roomProperties) {
t.Errorf("Room properties don't match: expected %s, got %s", string(roomProperties), string(event.Invite.Properties))
}
}
func TestBackendServer_RoomDisinvite(t *testing.T) {
CatchLogForTest(t)
for _, backend := range eventBackendsForTest {
t.Run(backend, func(t *testing.T) {
t.Parallel()
RunTestBackendServer_RoomDisinvite(t)
})
}
@ -568,7 +583,7 @@ func RunTestBackendServer_RoomDisinvite(t *testing.T) {
roomId + "-" + hello.Hello.SessionId,
},
AllUserIds: []string{},
Properties: &roomProperties,
Properties: roomProperties,
},
}
@ -596,8 +611,8 @@ func RunTestBackendServer_RoomDisinvite(t *testing.T) {
t.Errorf("Expected disinvite, got %+v", event)
} else if event.Disinvite.RoomId != roomId {
t.Errorf("Expected room %s, got %+v", roomId, event)
} else if event.Disinvite.Properties != nil {
t.Errorf("Room properties should be omitted, got %s", string(*event.Disinvite.Properties))
} else if len(event.Disinvite.Properties) > 0 {
t.Errorf("Room properties should be omitted, got %s", string(event.Disinvite.Properties))
} else if event.Disinvite.Reason != "disinvited" {
t.Errorf("Reason should be disinvited, got %s", event.Disinvite.Reason)
}
@ -616,6 +631,8 @@ func RunTestBackendServer_RoomDisinvite(t *testing.T) {
}
func TestBackendServer_RoomDisinviteDifferentRooms(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client1 := NewTestClient(t, server, hub)
@ -712,7 +729,7 @@ func TestBackendServer_RoomDisinviteDifferentRooms(t *testing.T) {
UserIds: []string{
testDefaultUserId,
},
Properties: (*json.RawMessage)(&testRoomProperties),
Properties: testRoomProperties,
},
}
@ -741,8 +758,10 @@ func TestBackendServer_RoomDisinviteDifferentRooms(t *testing.T) {
}
func TestBackendServer_RoomUpdate(t *testing.T) {
CatchLogForTest(t)
for _, backend := range eventBackendsForTest {
t.Run(backend, func(t *testing.T) {
t.Parallel()
RunTestBackendServer_RoomUpdate(t)
})
}
@ -762,7 +781,7 @@ func RunTestBackendServer_RoomUpdate(t *testing.T) {
if backend == nil {
t.Fatalf("Did not find backend")
}
room, err := hub.createRoom(roomId, &emptyProperties, backend)
room, err := hub.createRoom(roomId, emptyProperties, backend)
if err != nil {
t.Fatalf("Could not create room: %s", err)
}
@ -786,7 +805,7 @@ func RunTestBackendServer_RoomUpdate(t *testing.T) {
UserIds: []string{
userid,
},
Properties: &roomProperties,
Properties: roomProperties,
},
}
@ -814,8 +833,8 @@ func RunTestBackendServer_RoomUpdate(t *testing.T) {
t.Errorf("Expected update, got %+v", event)
} else if event.Update.RoomId != roomId {
t.Errorf("Expected room %s, got %+v", roomId, event)
} else if event.Update.Properties == nil || !bytes.Equal(*event.Update.Properties, roomProperties) {
t.Errorf("Room properties don't match: expected %s, got %s", string(roomProperties), string(*event.Update.Properties))
} else if !bytes.Equal(event.Update.Properties, roomProperties) {
t.Errorf("Room properties don't match: expected %s, got %s", string(roomProperties), string(event.Update.Properties))
}
// TODO: Use event to wait for asynchronous messages.
@ -825,14 +844,16 @@ func RunTestBackendServer_RoomUpdate(t *testing.T) {
if room == nil {
t.Fatalf("Room %s does not exist", roomId)
}
if string(*room.Properties()) != string(roomProperties) {
t.Errorf("Expected properties %s for room %s, got %s", string(roomProperties), room.Id(), string(*room.Properties()))
if string(room.Properties()) != string(roomProperties) {
t.Errorf("Expected properties %s for room %s, got %s", string(roomProperties), room.Id(), string(room.Properties()))
}
}
func TestBackendServer_RoomDelete(t *testing.T) {
CatchLogForTest(t)
for _, backend := range eventBackendsForTest {
t.Run(backend, func(t *testing.T) {
t.Parallel()
RunTestBackendServer_RoomDelete(t)
})
}
@ -852,7 +873,7 @@ func RunTestBackendServer_RoomDelete(t *testing.T) {
if backend == nil {
t.Fatalf("Did not find backend")
}
if _, err := hub.createRoom(roomId, &emptyProperties, backend); err != nil {
if _, err := hub.createRoom(roomId, emptyProperties, backend); err != nil {
t.Fatalf("Could not create room: %s", err)
}
@ -900,8 +921,8 @@ func RunTestBackendServer_RoomDelete(t *testing.T) {
t.Errorf("Expected disinvite, got %+v", event)
} else if event.Disinvite.RoomId != roomId {
t.Errorf("Expected room %s, got %+v", roomId, event)
} else if event.Disinvite.Properties != nil {
t.Errorf("Room properties should be omitted, got %s", string(*event.Disinvite.Properties))
} else if len(event.Disinvite.Properties) > 0 {
t.Errorf("Room properties should be omitted, got %s", string(event.Disinvite.Properties))
} else if event.Disinvite.Reason != "deleted" {
t.Errorf("Reason should be deleted, got %s", event.Disinvite.Reason)
}
@ -916,8 +937,10 @@ func RunTestBackendServer_RoomDelete(t *testing.T) {
}
func TestBackendServer_ParticipantsUpdatePermissions(t *testing.T) {
CatchLogForTest(t)
for _, subtest := range clusteredTests {
t.Run(subtest, func(t *testing.T) {
t.Parallel()
var hub1 *Hub
var hub2 *Hub
var server1 *httptest.Server
@ -1047,6 +1070,8 @@ func TestBackendServer_ParticipantsUpdatePermissions(t *testing.T) {
}
func TestBackendServer_ParticipantsUpdateEmptyPermissions(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client := NewTestClient(t, server, hub)
@ -1132,6 +1157,8 @@ func TestBackendServer_ParticipantsUpdateEmptyPermissions(t *testing.T) {
}
func TestBackendServer_ParticipantsUpdateTimeout(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client1 := NewTestClient(t, server, hub)
@ -1345,8 +1372,10 @@ func TestBackendServer_ParticipantsUpdateTimeout(t *testing.T) {
}
func TestBackendServer_InCallAll(t *testing.T) {
CatchLogForTest(t)
for _, subtest := range clusteredTests {
t.Run(subtest, func(t *testing.T) {
t.Parallel()
var hub1 *Hub
var hub2 *Hub
var server1 *httptest.Server
@ -1471,8 +1500,8 @@ func TestBackendServer_InCallAll(t *testing.T) {
t.Error(err)
} else if !in_call_1.All {
t.Errorf("All flag not set in message %+v", in_call_1)
} else if !bytes.Equal(*in_call_1.InCall, []byte("7")) {
t.Errorf("Expected inCall flag 7, got %s", string(*in_call_1.InCall))
} else if !bytes.Equal(in_call_1.InCall, []byte("7")) {
t.Errorf("Expected inCall flag 7, got %s", string(in_call_1.InCall))
}
if msg2_a, err := client2.RunUntilMessage(ctx); err != nil {
@ -1481,8 +1510,8 @@ func TestBackendServer_InCallAll(t *testing.T) {
t.Error(err)
} else if !in_call_1.All {
t.Errorf("All flag not set in message %+v", in_call_1)
} else if !bytes.Equal(*in_call_1.InCall, []byte("7")) {
t.Errorf("Expected inCall flag 7, got %s", string(*in_call_1.InCall))
} else if !bytes.Equal(in_call_1.InCall, []byte("7")) {
t.Errorf("Expected inCall flag 7, got %s", string(in_call_1.InCall))
}
if !room1.IsSessionInCall(session1) {
@ -1552,8 +1581,8 @@ func TestBackendServer_InCallAll(t *testing.T) {
t.Error(err)
} else if !in_call_1.All {
t.Errorf("All flag not set in message %+v", in_call_1)
} else if !bytes.Equal(*in_call_1.InCall, []byte("0")) {
t.Errorf("Expected inCall flag 0, got %s", string(*in_call_1.InCall))
} else if !bytes.Equal(in_call_1.InCall, []byte("0")) {
t.Errorf("Expected inCall flag 0, got %s", string(in_call_1.InCall))
}
if msg2_a, err := client2.RunUntilMessage(ctx); err != nil {
@ -1562,8 +1591,8 @@ func TestBackendServer_InCallAll(t *testing.T) {
t.Error(err)
} else if !in_call_1.All {
t.Errorf("All flag not set in message %+v", in_call_1)
} else if !bytes.Equal(*in_call_1.InCall, []byte("0")) {
t.Errorf("Expected inCall flag 0, got %s", string(*in_call_1.InCall))
} else if !bytes.Equal(in_call_1.InCall, []byte("0")) {
t.Errorf("Expected inCall flag 0, got %s", string(in_call_1.InCall))
}
if room1.IsSessionInCall(session1) {
@ -1595,6 +1624,8 @@ func TestBackendServer_InCallAll(t *testing.T) {
}
func TestBackendServer_RoomMessage(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client := NewTestClient(t, server, hub)
@ -1628,7 +1659,7 @@ func TestBackendServer_RoomMessage(t *testing.T) {
msg := &BackendServerRoomRequest{
Type: "message",
Message: &BackendRoomMessageRequest{
Data: &messageData,
Data: messageData,
},
}
@ -1654,12 +1685,14 @@ func TestBackendServer_RoomMessage(t *testing.T) {
t.Error(err)
} else if message.RoomId != roomId {
t.Errorf("Expected message for room %s, got %s", roomId, message.RoomId)
} else if !bytes.Equal(messageData, *message.Data) {
t.Errorf("Expected message data %s, got %s", string(messageData), string(*message.Data))
} else if !bytes.Equal(messageData, message.Data) {
t.Errorf("Expected message data %s, got %s", string(messageData), string(message.Data))
}
}
func TestBackendServer_TurnCredentials(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, _, _, server := CreateBackendServerForTestWithTurn(t)
q := make(url.Values)
@ -1703,7 +1736,9 @@ func TestBackendServer_TurnCredentials(t *testing.T) {
}
func TestBackendServer_StatsAllowedIps(t *testing.T) {
CatchLogForTest(t)
config := goconf.NewConfigFile()
config.AddOption("app", "trustedproxies", "1.2.3.4")
config.AddOption("stats", "allowed_ips", "127.0.0.1, 192.168.0.1, 192.168.1.1/24")
_, backend, _, _, _, _ := CreateBackendServerForTestFromConfig(t, config)
@ -1720,7 +1755,9 @@ func TestBackendServer_StatsAllowedIps(t *testing.T) {
}
for _, addr := range allowed {
addr := addr
t.Run(addr, func(t *testing.T) {
t.Parallel()
r1 := &http.Request{
RemoteAddr: addr,
}
@ -1728,6 +1765,10 @@ func TestBackendServer_StatsAllowedIps(t *testing.T) {
t.Errorf("should allow %s", addr)
}
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
r2 := &http.Request{
RemoteAddr: "1.2.3.4:12345",
Header: http.Header{
@ -1761,7 +1802,9 @@ func TestBackendServer_StatsAllowedIps(t *testing.T) {
}
for _, addr := range notAllowed {
addr := addr
t.Run(addr, func(t *testing.T) {
t.Parallel()
r := &http.Request{
RemoteAddr: addr,
}
@ -1773,6 +1816,7 @@ func TestBackendServer_StatsAllowedIps(t *testing.T) {
}
func Test_IsNumeric(t *testing.T) {
t.Parallel()
numeric := []string{
"0",
"1",
@ -1802,6 +1846,8 @@ func Test_IsNumeric(t *testing.T) {
}
func TestBackendServer_DialoutNoSipBridge(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client := NewTestClient(t, server, hub)
@ -1860,6 +1906,8 @@ func TestBackendServer_DialoutNoSipBridge(t *testing.T) {
}
func TestBackendServer_DialoutAccepted(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client := NewTestClient(t, server, hub)
@ -1966,6 +2014,8 @@ func TestBackendServer_DialoutAccepted(t *testing.T) {
}
func TestBackendServer_DialoutAcceptedCompat(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client := NewTestClient(t, server, hub)
@ -2072,6 +2122,8 @@ func TestBackendServer_DialoutAcceptedCompat(t *testing.T) {
}
func TestBackendServer_DialoutRejected(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
_, _, _, hub, _, server := CreateBackendServerForTest(t)
client := NewTestClient(t, server, hub)


@ -24,10 +24,10 @@ package signaling
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"net/url"
"sync"
"time"
"github.com/dlintw/goconf"
@ -43,8 +43,10 @@ type backendStorageEtcd struct {
initializedCtx context.Context
initializedFunc context.CancelFunc
initializedWg sync.WaitGroup
wakeupChanForTesting chan struct{}
closeCtx context.Context
closeFunc context.CancelFunc
}
func NewBackendStorageEtcd(config *goconf.ConfigFile, etcdClient *EtcdClient) (BackendStorage, error) {
@ -58,6 +60,7 @@ func NewBackendStorageEtcd(config *goconf.ConfigFile, etcdClient *EtcdClient) (B
}
initializedCtx, initializedFunc := context.WithCancel(context.Background())
closeCtx, closeFunc := context.WithCancel(context.Background())
result := &backendStorageEtcd{
backendStorageCommon: backendStorageCommon{
backends: make(map[string][]*Backend),
@ -68,6 +71,8 @@ func NewBackendStorageEtcd(config *goconf.ConfigFile, etcdClient *EtcdClient) (B
initializedCtx: initializedCtx,
initializedFunc: initializedFunc,
closeCtx: closeCtx,
closeFunc: closeFunc,
}
etcdClient.AddListener(result)
@ -95,15 +100,12 @@ func (s *backendStorageEtcd) wakeupForTesting() {
}
func (s *backendStorageEtcd) EtcdClientCreated(client *EtcdClient) {
s.initializedWg.Add(1)
go func() {
if err := client.Watch(context.Background(), s.keyPrefix, s, clientv3.WithPrefix()); err != nil {
log.Printf("Error processing watch for %s: %s", s.keyPrefix, err)
}
}()
if err := client.WaitForConnection(s.closeCtx); err != nil {
if errors.Is(err, context.Canceled) {
return
}
go func() {
if err := client.WaitForConnection(context.Background()); err != nil {
panic(err)
}
@ -111,41 +113,61 @@ func (s *backendStorageEtcd) EtcdClientCreated(client *EtcdClient) {
if err != nil {
panic(err)
}
for {
response, err := s.getBackends(client, s.keyPrefix)
for s.closeCtx.Err() == nil {
response, err := s.getBackends(s.closeCtx, client, s.keyPrefix)
if err != nil {
if err == context.DeadlineExceeded {
if errors.Is(err, context.Canceled) {
return
} else if errors.Is(err, context.DeadlineExceeded) {
log.Printf("Timeout getting initial list of backends, retry in %s", backoff.NextWait())
} else {
log.Printf("Could not get initial list of backends, retry in %s: %s", backoff.NextWait(), err)
}
backoff.Wait(context.Background())
backoff.Wait(s.closeCtx)
continue
}
for _, ev := range response.Kvs {
s.EtcdKeyUpdated(client, string(ev.Key), ev.Value)
s.EtcdKeyUpdated(client, string(ev.Key), ev.Value, nil)
}
s.initializedWg.Wait()
s.initializedFunc()
nextRevision := response.Header.Revision + 1
prevRevision := nextRevision
backoff.Reset()
for s.closeCtx.Err() == nil {
var err error
if nextRevision, err = client.Watch(s.closeCtx, s.keyPrefix, nextRevision, s, clientv3.WithPrefix()); err != nil {
log.Printf("Error processing watch for %s (%s), retry in %s", s.keyPrefix, err, backoff.NextWait())
backoff.Wait(s.closeCtx)
continue
}
if nextRevision != prevRevision {
backoff.Reset()
prevRevision = nextRevision
} else {
log.Printf("Processing watch for %s interrupted, retry in %s", s.keyPrefix, backoff.NextWait())
backoff.Wait(s.closeCtx)
}
}
return
}
}()
}
func (s *backendStorageEtcd) EtcdWatchCreated(client *EtcdClient, key string) {
s.initializedWg.Done()
}
func (s *backendStorageEtcd) getBackends(client *EtcdClient, keyPrefix string) (*clientv3.GetResponse, error) {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
func (s *backendStorageEtcd) getBackends(ctx context.Context, client *EtcdClient, keyPrefix string) (*clientv3.GetResponse, error) {
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
return client.Get(ctx, keyPrefix, clientv3.WithPrefix())
}
func (s *backendStorageEtcd) EtcdKeyUpdated(client *EtcdClient, key string, data []byte) {
func (s *backendStorageEtcd) EtcdKeyUpdated(client *EtcdClient, key string, data []byte, prevValue []byte) {
var info BackendInformationEtcd
if err := json.Unmarshal(data, &info); err != nil {
log.Printf("Could not decode backend information %s: %s", string(data), err)
@@ -205,7 +227,7 @@ func (s *backendStorageEtcd) EtcdKeyUpdated(client *EtcdClient, key string, data
s.wakeupForTesting()
}
func (s *backendStorageEtcd) EtcdKeyDeleted(client *EtcdClient, key string) {
func (s *backendStorageEtcd) EtcdKeyDeleted(client *EtcdClient, key string, prevValue []byte) {
s.mu.Lock()
defer s.mu.Unlock()
@@ -241,6 +263,7 @@ func (s *backendStorageEtcd) EtcdKeyDeleted(client *EtcdClient, key string) {
func (s *backendStorageEtcd) Close() {
s.etcdClient.RemoveListener(s)
s.closeFunc()
}
func (s *backendStorageEtcd) Reload(config *goconf.ConfigFile) {


@@ -21,6 +21,13 @@
*/
package signaling
import (
"testing"
"github.com/dlintw/goconf"
"go.etcd.io/etcd/server/v3/embed"
)
func (s *backendStorageEtcd) getWakeupChannelForTesting() <-chan struct{} {
s.mu.Lock()
defer s.mu.Unlock()
@@ -33,3 +40,38 @@ func (s *backendStorageEtcd) getWakeupChannelForTesting() <-chan struct{} {
s.wakeupChanForTesting = ch
return ch
}
type testListener struct {
etcd *embed.Etcd
closed chan struct{}
}
func (tl *testListener) EtcdClientCreated(client *EtcdClient) {
tl.etcd.Server.Stop()
close(tl.closed)
}
func Test_BackendStorageEtcdNoLeak(t *testing.T) {
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
etcd, client := NewEtcdClientForTest(t)
tl := &testListener{
etcd: etcd,
closed: make(chan struct{}),
}
client.AddListener(tl)
defer client.RemoveListener(tl)
config := goconf.NewConfigFile()
config.AddOption("backend", "backendtype", "etcd")
config.AddOption("backend", "backendprefix", "/backends")
cfg, err := NewBackendConfiguration(config, client)
if err != nil {
t.Fatal(err)
}
<-tl.closed
cfg.Close()
})
}


@@ -28,6 +28,7 @@ import (
)
func TestBackoff_Exponential(t *testing.T) {
t.Parallel()
backoff, err := NewExponentialBackoff(100*time.Millisecond, 500*time.Millisecond)
if err != nil {
t.Fatal(err)


@@ -48,9 +48,6 @@ const (
maxInvalidateInterval = time.Minute
)
// Can be overwritten by tests.
var getCapabilitiesNow = time.Now
type capabilitiesEntry struct {
nextUpdate time.Time
capabilities map[string]interface{}
@@ -59,6 +56,9 @@ type capabilitiesEntry struct {
type Capabilities struct {
mu sync.RWMutex
// Can be overwritten by tests.
getNow func() time.Time
version string
pool *HttpClientPool
entries map[string]*capabilitiesEntry
@@ -67,6 +67,8 @@ type Capabilities struct {
func NewCapabilities(version string, pool *HttpClientPool) (*Capabilities, error) {
result := &Capabilities{
getNow: time.Now,
version: version,
pool: pool,
entries: make(map[string]*capabilitiesEntry),
@@ -86,15 +88,15 @@ type CapabilitiesVersion struct {
}
type CapabilitiesResponse struct {
Version CapabilitiesVersion `json:"version"`
Capabilities map[string]*json.RawMessage `json:"capabilities"`
Version CapabilitiesVersion `json:"version"`
Capabilities map[string]json.RawMessage `json:"capabilities"`
}
func (c *Capabilities) getCapabilities(key string) (map[string]interface{}, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
now := getCapabilitiesNow()
now := c.getNow()
if entry, found := c.entries[key]; found && entry.nextUpdate.After(now) {
return entry.capabilities, true
}
@@ -103,14 +105,15 @@ func (c *Capabilities) getCapabilities(key string) (map[string]interface{}, bool
}
func (c *Capabilities) setCapabilities(key string, capabilities map[string]interface{}) {
now := getCapabilitiesNow()
c.mu.Lock()
defer c.mu.Unlock()
now := c.getNow()
entry := &capabilitiesEntry{
nextUpdate: now.Add(CapabilitiesCacheDuration),
capabilities: capabilities,
}
c.mu.Lock()
defer c.mu.Unlock()
c.entries[key] = entry
}
@@ -118,7 +121,7 @@ func (c *Capabilities) invalidateCapabilities(key string) {
c.mu.Lock()
defer c.mu.Unlock()
now := getCapabilitiesNow()
now := c.getNow()
if entry, found := c.nextInvalidate[key]; found && entry.After(now) {
return
}
@@ -188,25 +191,25 @@ func (c *Capabilities) loadCapabilities(ctx context.Context, u *url.URL) (map[st
if err := json.Unmarshal(body, &ocs); err != nil {
log.Printf("Could not decode OCS response %s from %s: %s", string(body), capUrl.String(), err)
return nil, false, err
} else if ocs.Ocs == nil || ocs.Ocs.Data == nil {
} else if ocs.Ocs == nil || len(ocs.Ocs.Data) == 0 {
log.Printf("Incomplete OCS response %s from %s", string(body), u)
return nil, false, fmt.Errorf("incomplete OCS response")
}
var response CapabilitiesResponse
if err := json.Unmarshal(*ocs.Ocs.Data, &response); err != nil {
log.Printf("Could not decode OCS response body %s from %s: %s", string(*ocs.Ocs.Data), capUrl.String(), err)
if err := json.Unmarshal(ocs.Ocs.Data, &response); err != nil {
log.Printf("Could not decode OCS response body %s from %s: %s", string(ocs.Ocs.Data), capUrl.String(), err)
return nil, false, err
}
capaObj, found := response.Capabilities[AppNameSpreed]
if !found || capaObj == nil {
if !found || len(capaObj) == 0 {
log.Printf("No capabilities received for app spreed from %s: %+v", capUrl.String(), response)
return nil, false, nil
}
var capa map[string]interface{}
if err := json.Unmarshal(*capaObj, &capa); err != nil {
if err := json.Unmarshal(capaObj, &capa); err != nil {
log.Printf("Unsupported capabilities received for app spreed from %s: %+v", capUrl.String(), response)
return nil, false, nil
}


@@ -80,9 +80,9 @@ func NewCapabilitiesForTestWithCallback(t *testing.T, callback func(*Capabilitie
Version: CapabilitiesVersion{
Major: 20,
},
Capabilities: map[string]*json.RawMessage{
"anotherApp": (*json.RawMessage)(&emptyArray),
"spreed": (*json.RawMessage)(&spreedCapa),
Capabilities: map[string]json.RawMessage{
"anotherApp": emptyArray,
"spreed": spreedCapa,
},
}
@@ -102,7 +102,7 @@ func NewCapabilitiesForTestWithCallback(t *testing.T, callback func(*Capabilitie
StatusCode: http.StatusOK,
Message: http.StatusText(http.StatusOK),
},
Data: (*json.RawMessage)(&data),
Data: data,
}
if data, err = json.Marshal(ocs); err != nil {
t.Fatal(err)
@@ -120,16 +120,25 @@ func NewCapabilitiesForTest(t *testing.T) (*url.URL, *Capabilities) {
return NewCapabilitiesForTestWithCallback(t, nil)
}
func SetCapabilitiesGetNow(t *testing.T, f func() time.Time) {
old := getCapabilitiesNow
func SetCapabilitiesGetNow(t *testing.T, capabilities *Capabilities, f func() time.Time) {
capabilities.mu.Lock()
defer capabilities.mu.Unlock()
old := capabilities.getNow
t.Cleanup(func() {
getCapabilitiesNow = old
capabilities.mu.Lock()
defer capabilities.mu.Unlock()
capabilities.getNow = old
})
getCapabilitiesNow = f
capabilities.getNow = f
}
func TestCapabilities(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
url, capabilities := NewCapabilitiesForTest(t)
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
@@ -192,6 +201,8 @@ func TestCapabilities(t *testing.T) {
}
func TestInvalidateCapabilities(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
var called atomic.Uint32
url, capabilities := NewCapabilitiesForTestWithCallback(t, func(cr *CapabilitiesResponse) {
called.Add(1)
@@ -244,7 +255,7 @@ func TestInvalidateCapabilities(t *testing.T) {
}
// At a later time, invalidating can be done again.
SetCapabilitiesGetNow(t, func() time.Time {
SetCapabilitiesGetNow(t, capabilities, func() time.Time {
return time.Now().Add(2 * time.Minute)
})


@@ -66,6 +66,11 @@ func NewCertificateReloader(certFile string, keyFile string) (*CertificateReload
return reloader, nil
}
func (r *CertificateReloader) Close() {
r.keyWatcher.Close()
r.certWatcher.Close()
}
func (r *CertificateReloader) reload(filename string) {
log.Printf("reloading certificate from %s with %s", r.certFile, r.keyFile)
pair, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
@@ -135,6 +140,10 @@ func NewCertPoolReloader(certFile string) (*CertPoolReloader, error) {
return reloader, nil
}
func (r *CertPoolReloader) Close() {
r.certWatcher.Close()
}
func (r *CertPoolReloader) reload(filename string) {
log.Printf("reloading certificate pool from %s", r.certFile)
pool, err := loadCertPool(r.certFile)


@@ -28,6 +28,9 @@ import (
)
func UpdateCertificateCheckIntervalForTest(t *testing.T, interval time.Duration) {
t.Helper()
// Make sure test is not executed with "t.Parallel()"
t.Setenv("PARALLEL_CHECK", "1")
old := deduplicateWatchEvents.Load()
t.Cleanup(func() {
deduplicateWatchEvents.Store(old)

client.go

@@ -23,8 +23,11 @@ package signaling
import (
"bytes"
"context"
"encoding/json"
"errors"
"log"
"net"
"strconv"
"strings"
"sync"
@@ -92,26 +95,49 @@ type WritableClientMessage interface {
CloseAfterSend(session Session) bool
}
type HandlerClient interface {
Context() context.Context
RemoteAddr() string
Country() string
UserAgent() string
IsConnected() bool
IsAuthenticated() bool
GetSession() Session
SetSession(session Session)
SendError(e *Error) bool
SendByeResponse(message *ClientMessage) bool
SendByeResponseWithReason(message *ClientMessage, reason string) bool
SendMessage(message WritableClientMessage) bool
Close()
}
type ClientHandler interface {
OnClosed(*Client)
OnMessageReceived(*Client, []byte)
OnRTTReceived(*Client, time.Duration)
OnClosed(HandlerClient)
OnMessageReceived(HandlerClient, []byte)
OnRTTReceived(HandlerClient, time.Duration)
}
type ClientGeoIpHandler interface {
OnLookupCountry(*Client) string
OnLookupCountry(HandlerClient) string
}
type Client struct {
ctx context.Context
conn *websocket.Conn
addr string
handler ClientHandler
agent string
closed atomic.Int32
country *string
logRTT bool
session atomic.Pointer[ClientSession]
handlerMu sync.RWMutex
handler ClientHandler
session atomic.Pointer[Session]
sessionId atomic.Pointer[string]
mu sync.Mutex
@@ -121,7 +147,7 @@ type Client struct {
messageChan chan *bytes.Buffer
}
func NewClient(conn *websocket.Conn, remoteAddress string, agent string, handler ClientHandler) (*Client, error) {
func NewClient(ctx context.Context, conn *websocket.Conn, remoteAddress string, agent string, handler ClientHandler) (*Client, error) {
remoteAddress = strings.TrimSpace(remoteAddress)
if remoteAddress == "" {
remoteAddress = "unknown remote address"
@@ -132,6 +158,7 @@ func NewClient(conn *websocket.Conn, remoteAddress string, agent string, handler
}
client := &Client{
ctx: ctx,
agent: agent,
logRTT: true,
}
@@ -142,12 +169,28 @@
func (c *Client) SetConn(conn *websocket.Conn, remoteAddress string, handler ClientHandler) {
c.conn = conn
c.addr = remoteAddress
c.handler = handler
c.SetHandler(handler)
c.closer = NewCloser()
c.messageChan = make(chan *bytes.Buffer, 16)
c.messagesDone = make(chan struct{})
}
func (c *Client) SetHandler(handler ClientHandler) {
c.handlerMu.Lock()
defer c.handlerMu.Unlock()
c.handler = handler
}
func (c *Client) getHandler() ClientHandler {
c.handlerMu.RLock()
defer c.handlerMu.RUnlock()
return c.handler
}
func (c *Client) Context() context.Context {
return c.ctx
}
func (c *Client) IsConnected() bool {
return c.closed.Load() == 0
}
@@ -156,12 +199,39 @@ func (c *Client) IsAuthenticated() bool {
return c.GetSession() != nil
}
func (c *Client) GetSession() *ClientSession {
return c.session.Load()
func (c *Client) GetSession() Session {
session := c.session.Load()
if session == nil {
return nil
}
return *session
}
func (c *Client) SetSession(session *ClientSession) {
c.session.Store(session)
func (c *Client) SetSession(session Session) {
if session == nil {
c.session.Store(nil)
} else {
c.session.Store(&session)
}
}
func (c *Client) SetSessionId(sessionId string) {
c.sessionId.Store(&sessionId)
}
func (c *Client) GetSessionId() string {
sessionId := c.sessionId.Load()
if sessionId == nil {
session := c.GetSession()
if session == nil {
return ""
}
return session.PublicId()
}
return *sessionId
}
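The session field becomes `atomic.Pointer[Session]`, where `Session` is an interface. Since `atomic.Pointer[T]` stores a `*T`, interface values are stored via their address, and a nil interface must be stored as a nil pointer, exactly the `SetSession`/`GetSession` dance above. A reduced sketch of that pattern (the `holder` and `fakeSession` names are illustrative):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type Session interface{ PublicId() string }

type fakeSession struct{ id string }

func (s *fakeSession) PublicId() string { return s.id }

// holder stores an interface value in atomic.Pointer by taking its address.
type holder struct {
	session atomic.Pointer[Session]
}

func (h *holder) Set(s Session) {
	if s == nil {
		// Store a nil pointer, not a pointer to a nil interface.
		h.session.Store(nil)
		return
	}
	h.session.Store(&s)
}

func (h *holder) Get() Session {
	p := h.session.Load()
	if p == nil {
		return nil
	}
	return *p
}

func main() {
	var h holder
	fmt.Println(h.Get() == nil)
	h.Set(&fakeSession{id: "abc"})
	fmt.Println(h.Get().PublicId())
	h.Set(nil)
	fmt.Println(h.Get() == nil)
}
```

The nil special-case matters: storing `&s` when `s` is a nil interface would make `Get()` return a non-nil pointer dereferencing to nil, breaking `GetSession() == nil` checks.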
func (c *Client) RemoteAddr() string {
@@ -175,7 +245,7 @@ func (c *Client) UserAgent() string {
func (c *Client) Country() string {
if c.country == nil {
var country string
if handler, ok := c.handler.(ClientGeoIpHandler); ok {
if handler, ok := c.getHandler().(ClientGeoIpHandler); ok {
country = handler.OnLookupCountry(c)
} else {
country = unknownCountry
@@ -214,7 +284,7 @@ func (c *Client) doClose() {
c.closer.Close()
<-c.messagesDone
c.handler.OnClosed(c)
c.getHandler().OnClosed(c)
c.SetSession(nil)
}
}
@@ -234,12 +304,14 @@ func (c *Client) SendByeResponse(message *ClientMessage) bool {
func (c *Client) SendByeResponseWithReason(message *ClientMessage, reason string) bool {
response := &ServerMessage{
Type: "bye",
Bye: &ByeServerMessage{},
}
if message != nil {
response.Id = message.Id
}
if reason != "" {
if response.Bye == nil {
response.Bye = &ByeServerMessage{}
}
response.Bye.Reason = reason
}
return c.SendMessage(response)
@@ -277,13 +349,13 @@ func (c *Client) ReadPump() {
rtt := now.Sub(time.Unix(0, ts))
if c.logRTT {
rtt_ms := rtt.Nanoseconds() / time.Millisecond.Nanoseconds()
if session := c.GetSession(); session != nil {
log.Printf("Client %s has RTT of %d ms (%s)", session.PublicId(), rtt_ms, rtt)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Client %s has RTT of %d ms (%s)", sessionId, rtt_ms, rtt)
} else {
log.Printf("Client from %s has RTT of %d ms (%s)", addr, rtt_ms, rtt)
}
}
c.handler.OnRTTReceived(c, rtt)
c.getHandler().OnRTTReceived(c, rtt)
}
return nil
})
@@ -292,12 +364,15 @@ func (c *Client) ReadPump() {
conn.SetReadDeadline(time.Now().Add(pongWait)) // nolint
messageType, reader, err := conn.NextReader()
if err != nil {
if _, ok := err.(*websocket.CloseError); !ok || websocket.IsUnexpectedCloseError(err,
// Gorilla websocket hides the original net.Error, so also compare error messages
if errors.Is(err, net.ErrClosed) || strings.Contains(err.Error(), net.ErrClosed.Error()) {
break
} else if _, ok := err.(*websocket.CloseError); !ok || websocket.IsUnexpectedCloseError(err,
websocket.CloseNormalClosure,
websocket.CloseGoingAway,
websocket.CloseNoStatusReceived) {
if session := c.GetSession(); session != nil {
log.Printf("Error reading from client %s: %v", session.PublicId(), err)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Error reading from client %s: %v", sessionId, err)
} else {
log.Printf("Error reading from %s: %v", addr, err)
}
@@ -306,8 +381,8 @@
}
if messageType != websocket.TextMessage {
if session := c.GetSession(); session != nil {
log.Printf("Unsupported message type %v from client %s", messageType, session.PublicId())
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Unsupported message type %v from client %s", messageType, sessionId)
} else {
log.Printf("Unsupported message type %v from %s", messageType, addr)
}
@@ -319,8 +394,8 @@
decodeBuffer.Reset()
if _, err := decodeBuffer.ReadFrom(reader); err != nil {
bufferPool.Put(decodeBuffer)
if session := c.GetSession(); session != nil {
log.Printf("Error reading message from client %s: %v", session.PublicId(), err)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Error reading message from client %s: %v", sessionId, err)
} else {
log.Printf("Error reading message from %s: %v", addr, err)
}
@@ -344,7 +419,7 @@ func (c *Client) processMessages() {
break
}
c.handler.OnMessageReceived(c, buffer.Bytes())
c.getHandler().OnMessageReceived(c, buffer.Bytes())
bufferPool.Put(buffer)
}
@@ -373,8 +448,8 @@ func (c *Client) writeInternal(message json.Marshaler) bool {
return false
}
if session := c.GetSession(); session != nil {
log.Printf("Could not send message %+v to client %s: %v", message, session.PublicId(), err)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Could not send message %+v to client %s: %v", message, sessionId, err)
} else {
log.Printf("Could not send message %+v to %s: %v", message, c.RemoteAddr(), err)
}
@@ -386,8 +461,8 @@
close:
c.conn.SetWriteDeadline(time.Now().Add(writeWait)) // nolint
if err := c.conn.WriteMessage(websocket.CloseMessage, closeData); err != nil {
if session := c.GetSession(); session != nil {
log.Printf("Could not send close message to client %s: %v", session.PublicId(), err)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Could not send close message to client %s: %v", sessionId, err)
} else {
log.Printf("Could not send close message to %s: %v", c.RemoteAddr(), err)
}
@@ -413,8 +488,8 @@ func (c *Client) writeError(e error) bool { // nolint
closeData := websocket.FormatCloseMessage(websocket.CloseInternalServerErr, e.Error())
c.conn.SetWriteDeadline(time.Now().Add(writeWait)) // nolint
if err := c.conn.WriteMessage(websocket.CloseMessage, closeData); err != nil {
if session := c.GetSession(); session != nil {
log.Printf("Could not send close message to client %s: %v", session.PublicId(), err)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Could not send close message to client %s: %v", sessionId, err)
} else {
log.Printf("Could not send close message to %s: %v", c.RemoteAddr(), err)
}
@@ -445,7 +520,6 @@ func (c *Client) writeMessageLocked(message WritableClientMessage) bool {
go session.Close()
}
go c.Close()
return false
}
return true
@@ -462,8 +536,8 @@ func (c *Client) sendPing() bool {
msg := strconv.FormatInt(now, 10)
c.conn.SetWriteDeadline(time.Now().Add(writeWait)) // nolint
if err := c.conn.WriteMessage(websocket.PingMessage, []byte(msg)); err != nil {
if session := c.GetSession(); session != nil {
log.Printf("Could not send ping to client %s: %v", session.PublicId(), err)
if sessionId := c.GetSessionId(); sessionId != "" {
log.Printf("Could not send ping to client %s: %v", sessionId, err)
} else {
log.Printf("Could not send ping to %s: %v", c.RemoteAddr(), err)
}


@@ -248,7 +248,7 @@ func (c *SignalingClient) PublicSessionId() string {
func (c *SignalingClient) processMessageMessage(message *signaling.ServerMessage) {
var msg MessagePayload
if err := json.Unmarshal(*message.Message.Data, &msg); err != nil {
if err := json.Unmarshal(message.Message.Data, &msg); err != nil {
log.Println("Error in unmarshal", err)
return
}
@@ -404,7 +404,7 @@ func (c *SignalingClient) SendMessages(clients []*SignalingClient) {
Type: "session",
SessionId: sessionIds[recipient],
},
Data: (*json.RawMessage)(&data),
Data: data,
},
}
sender.Send(msg)
@@ -461,7 +461,7 @@ func registerAuthHandler(router *mux.Router) {
StatusCode: http.StatusOK,
Message: http.StatusText(http.StatusOK),
},
Data: &rawdata,
Data: rawdata,
},
}
@@ -601,9 +601,9 @@ func main() {
Type: "hello",
Hello: &signaling.HelloClientMessage{
Version: signaling.HelloVersionV1,
Auth: signaling.HelloClientMessageAuth{
Auth: &signaling.HelloClientMessageAuth{
Url: backendUrl + "/auth",
Params: &json.RawMessage{'{', '}'},
Params: json.RawMessage("{}"),
},
},
}


@@ -36,9 +36,6 @@ import (
)
var (
// Sessions expire 30 seconds after the connection closed.
sessionExpireDuration = 30 * time.Second
// Warn if a session has 32 or more pending messages.
warnPendingMessagesCount = 32
@@ -54,11 +51,13 @@ type ClientSession struct {
privateId string
publicId string
data *SessionIdData
ctx context.Context
closeFunc context.CancelFunc
clientType string
features []string
userId string
userData *json.RawMessage
userData json.RawMessage
inCall Flags
supportsPermissions bool
@@ -68,14 +67,14 @@ type ClientSession struct {
backendUrl string
parsedBackendUrl *url.URL
expires time.Time
mu sync.Mutex
client *Client
room atomic.Pointer[Room]
roomJoinTime atomic.Int64
roomSessionId string
client HandlerClient
room atomic.Pointer[Room]
roomJoinTime atomic.Int64
roomSessionIdLock sync.RWMutex
roomSessionId string
publisherWaiters ChannelWaiters
@@ -96,12 +95,15 @@
}
func NewClientSession(hub *Hub, privateId string, publicId string, data *SessionIdData, backend *Backend, hello *HelloClientMessage, auth *BackendClientAuthResponse) (*ClientSession, error) {
ctx, closeFunc := context.WithCancel(context.Background())
s := &ClientSession{
hub: hub,
events: hub.events,
privateId: privateId,
publicId: publicId,
data: data,
ctx: ctx,
closeFunc: closeFunc,
clientType: hello.Auth.Type,
features: hello.Features,
@@ -145,6 +147,10 @@ func NewClientSession(hub *Hub, privateId string, publicId string, data *Session
return s, nil
}
func (s *ClientSession) Context() context.Context {
return s.ctx
}
func (s *ClientSession) PrivateId() string {
return s.privateId
}
@@ -154,8 +160,8 @@
}
func (s *ClientSession) RoomSessionId() string {
s.mu.Lock()
defer s.mu.Unlock()
s.roomSessionIdLock.RLock()
defer s.roomSessionIdLock.RUnlock()
return s.roomSessionId
}
@@ -309,25 +315,10 @@ func (s *ClientSession) UserId() string {
return userId
}
func (s *ClientSession) UserData() *json.RawMessage {
func (s *ClientSession) UserData() json.RawMessage {
return s.userData
}
func (s *ClientSession) StartExpire() {
// The hub mutex must be held when calling this method.
s.expires = time.Now().Add(sessionExpireDuration)
s.hub.expiredSessions[s] = true
}
func (s *ClientSession) StopExpire() {
// The hub mutex must be held when calling this method.
delete(s.hub.expiredSessions, s)
}
func (s *ClientSession) IsExpired(now time.Time) bool {
return now.After(s.expires)
}
func (s *ClientSession) SetRoom(room *Room) {
s.room.Store(room)
if room != nil {
@@ -357,7 +348,7 @@ func (s *ClientSession) getRoomJoinTime() time.Time {
func (s *ClientSession) releaseMcuObjects() {
if len(s.publishers) > 0 {
go func(publishers map[StreamType]McuPublisher) {
ctx := context.TODO()
ctx := context.Background()
for _, publisher := range publishers {
publisher.Close(ctx)
}
@@ -366,7 +357,7 @@
}
if len(s.subscribers) > 0 {
go func(subscribers map[string]McuSubscriber) {
ctx := context.TODO()
ctx := context.Background()
for _, subscriber := range subscribers {
subscriber.Close(ctx)
}
@@ -380,6 +371,7 @@ func (s *ClientSession) Close() {
}
func (s *ClientSession) closeAndWait(wait bool) {
s.closeFunc()
s.hub.removeSession(s)
s.mu.Lock()
@@ -413,8 +405,8 @@ func (s *ClientSession) SubscribeEvents() error {
}
func (s *ClientSession) UpdateRoomSessionId(roomSessionId string) error {
s.mu.Lock()
defer s.mu.Unlock()
s.roomSessionIdLock.Lock()
defer s.roomSessionIdLock.Unlock()
if s.roomSessionId == roomSessionId {
return nil
@@ -443,8 +435,8 @@ func (s *ClientSession) UpdateRoomSessionId(roomSessionId string) error {
}
func (s *ClientSession) SubscribeRoomEvents(roomid string, roomSessionId string) error {
s.mu.Lock()
defer s.mu.Unlock()
s.roomSessionIdLock.Lock()
defer s.roomSessionIdLock.Unlock()
if err := s.events.RegisterRoomListener(roomid, s.backend, s); err != nil {
return err
@@ -503,6 +495,9 @@ func (s *ClientSession) doUnsubscribeRoomEvents(notify bool) {
s.events.UnregisterRoomListener(room.Id(), s.Backend(), s)
}
s.hub.roomSessions.DeleteRoomSession(s)
s.roomSessionIdLock.Lock()
defer s.roomSessionIdLock.Unlock()
if notify && room != nil && s.roomSessionId != "" {
// Notify
go func(sid string) {
@@ -520,14 +515,14 @@ func (s *ClientSession) doUnsubscribeRoomEvents(notify bool) {
s.roomSessionId = ""
}
func (s *ClientSession) ClearClient(client *Client) {
func (s *ClientSession) ClearClient(client HandlerClient) {
s.mu.Lock()
defer s.mu.Unlock()
s.clearClientLocked(client)
}
func (s *ClientSession) clearClientLocked(client *Client) {
func (s *ClientSession) clearClientLocked(client HandlerClient) {
if s.client == nil {
return
} else if client != nil && s.client != client {
@@ -540,18 +535,18 @@ func (s *ClientSession) clearClientLocked(client *Client) {
prevClient.SetSession(nil)
}
func (s *ClientSession) GetClient() *Client {
func (s *ClientSession) GetClient() HandlerClient {
s.mu.Lock()
defer s.mu.Unlock()
return s.getClientUnlocked()
}
func (s *ClientSession) getClientUnlocked() *Client {
func (s *ClientSession) getClientUnlocked() HandlerClient {
return s.client
}
func (s *ClientSession) SetClient(client *Client) *Client {
func (s *ClientSession) SetClient(client HandlerClient) HandlerClient {
if client == nil {
panic("Use ClearClient to set the client to nil")
}
@@ -594,7 +589,7 @@ func (s *ClientSession) sendOffer(client McuClient, sender string, streamType St
Type: "session",
SessionId: sender,
},
Data: (*json.RawMessage)(&offer_data),
Data: offer_data,
},
}
@@ -624,7 +619,7 @@ func (s *ClientSession) sendCandidate(client McuClient, sender string, streamTyp
Type: "session",
SessionId: sender,
},
Data: (*json.RawMessage)(&candidate_data),
Data: candidate_data,
},
}
@@ -740,23 +735,6 @@ func (s *ClientSession) SubscriberClosed(subscriber McuSubscriber) {
}
}
type SdpError struct {
message string
}
func (e *SdpError) Error() string {
return e.message
}
type WrappedSdpError struct {
SdpError
err error
}
func (e *WrappedSdpError) Unwrap() error {
return e.err
}
type PermissionError struct {
permission Permission
}
@@ -769,23 +747,10 @@
return fmt.Sprintf("permission \"%s\" not found", e.permission)
}
func (s *ClientSession) isSdpAllowedToSendLocked(payload map[string]interface{}) (MediaType, error) {
sdpValue, found := payload["sdp"]
if !found {
return 0, &SdpError{"payload does not contain a sdp"}
}
sdpText, ok := sdpValue.(string)
if !ok {
return 0, &SdpError{"payload does not contain a valid sdp"}
}
var sdp sdp.SessionDescription
if err := sdp.Unmarshal([]byte(sdpText)); err != nil {
return 0, &WrappedSdpError{
SdpError: SdpError{
message: fmt.Sprintf("could not parse sdp: %s", err),
},
err: err,
}
func (s *ClientSession) isSdpAllowedToSendLocked(sdp *sdp.SessionDescription) (MediaType, error) {
if sdp == nil {
// Should have already been checked when data was validated.
return 0, ErrNoSdp
}
var mediaTypes MediaType
@@ -823,8 +788,8 @@ func (s *ClientSession) IsAllowedToSend(data *MessageClientMessageData) error {
// Client is allowed to publish any media (audio / video).
return nil
} else if data != nil && data.Type == "offer" {
// Parse SDP to check what user is trying to publish and check permissions accordingly.
if _, err := s.isSdpAllowedToSendLocked(data.Payload); err != nil {
// Check what user is trying to publish and check permissions accordingly.
if _, err := s.isSdpAllowedToSendLocked(data.offerSdp); err != nil {
return err
}
@@ -854,7 +819,7 @@ func (s *ClientSession) checkOfferTypeLocked(streamType StreamType, data *Messag
return MediaTypeScreen, nil
} else if data != nil && data.Type == "offer" {
mediaTypes, err := s.isSdpAllowedToSendLocked(data.Payload)
mediaTypes, err := s.isSdpAllowedToSendLocked(data.offerSdp)
if err != nil {
return 0, err
}
@@ -905,7 +870,7 @@ func (s *ClientSession) GetOrCreatePublisher(ctx context.Context, mcu Mcu, strea
if prev, found := s.publishers[streamType]; found {
// Another thread created the publisher while we were waiting.
go func(pub McuPublisher) {
closeCtx := context.TODO()
closeCtx := context.Background()
pub.Close(closeCtx)
}(publisher)
publisher = prev
@@ -969,9 +934,10 @@ func (s *ClientSession) GetOrCreateSubscriber(ctx context.Context, mcu Mcu, id s
subscriber, found := s.subscribers[getStreamId(id, streamType)]
if !found {
client := s.getClientUnlocked()
s.mu.Unlock()
var err error
subscriber, err = mcu.NewSubscriber(ctx, s, id, streamType)
subscriber, err = mcu.NewSubscriber(ctx, s, id, streamType, client)
s.mu.Lock()
if err != nil {
return nil, err
@@ -982,7 +948,7 @@ func (s *ClientSession) GetOrCreateSubscriber(ctx context.Context, mcu Mcu, id s
if prev, found := s.subscribers[getStreamId(id, streamType)]; found {
// Another thread created the subscriber while we were waiting.
go func(sub McuSubscriber) {
closeCtx := context.TODO()
closeCtx := context.Background()
sub.Close(closeCtx)
}(subscriber)
subscriber = prev
@@ -1056,7 +1022,7 @@ func (s *ClientSession) processAsyncMessage(message *AsyncMessage) {
case "sendoffer":
// Process asynchronously to not block other messages received.
go func() {
ctx, cancel := context.WithTimeout(context.Background(), s.hub.mcuTimeout)
ctx, cancel := context.WithTimeout(s.Context(), s.hub.mcuTimeout)
defer cancel()
mc, err := s.GetOrCreateSubscriber(ctx, s.hub.mcu, message.SendOffer.SessionId, StreamType(message.SendOffer.Data.RoomType))
@@ -1088,7 +1054,7 @@ func (s *ClientSession) processAsyncMessage(message *AsyncMessage) {
return
}
mc.SendMessage(context.TODO(), nil, message.SendOffer.Data, func(err error, response map[string]interface{}) {
mc.SendMessage(s.Context(), nil, message.SendOffer.Data, func(err error, response map[string]interface{}) {
if err != nil {
log.Printf("Could not send MCU message %+v for session %s to %s: %s", message.SendOffer.Data, message.SendOffer.SessionId, s.PublicId(), err)
if err := s.events.PublishSessionMessage(message.SendOffer.SessionId, s.backend, &AsyncMessage{
@@ -1146,13 +1112,13 @@ func (s *ClientSession) storePendingMessage(message *ServerMessage) {
func filterDisplayNames(events []*EventServerMessageSessionEntry) []*EventServerMessageSessionEntry {
result := make([]*EventServerMessageSessionEntry, 0, len(events))
for _, event := range events {
if event.User == nil {
if len(event.User) == 0 {
result = append(result, event)
continue
}
var userdata map[string]interface{}
if err := json.Unmarshal(*event.User, &userdata); err != nil {
if err := json.Unmarshal(event.User, &userdata); err != nil {
result = append(result, event)
continue
}
@@ -1178,7 +1144,7 @@ func filterDisplayNames(events []*EventServerMessageSessionEntry) []*EventServer
}
e := event.Clone()
e.User = (*json.RawMessage)(&data)
e.User = data
result = append(result, e)
}
return result
@@ -1273,12 +1239,12 @@ func (s *ClientSession) filterMessage(message *ServerMessage) *ServerMessage {
delete(s.seenJoinedEvents, e)
}
case "message":
if message.Event.Message == nil || message.Event.Message.Data == nil || len(*message.Event.Message.Data) == 0 || !s.HasPermission(PERMISSION_HIDE_DISPLAYNAMES) {
if message.Event.Message == nil || len(message.Event.Message.Data) == 0 || !s.HasPermission(PERMISSION_HIDE_DISPLAYNAMES) {
return message
}
var data RoomEventMessageData
if err := json.Unmarshal(*message.Event.Message.Data, &data); err != nil {
if err := json.Unmarshal(message.Event.Message.Data, &data); err != nil {
return message
}
@@ -1295,7 +1261,7 @@ func (s *ClientSession) filterMessage(message *ServerMessage) *ServerMessage {
Target: message.Event.Target,
Message: &RoomEventMessage{
RoomId: message.Event.Message.RoomId,
Data: (*json.RawMessage)(&encoded),
Data: encoded,
},
},
}
@@ -1305,9 +1271,9 @@ func (s *ClientSession) filterMessage(message *ServerMessage) *ServerMessage {
}
}
case "message":
if message.Message != nil && message.Message.Data != nil && len(*message.Message.Data) > 0 && s.HasPermission(PERMISSION_HIDE_DISPLAYNAMES) {
if message.Message != nil && len(message.Message.Data) > 0 && s.HasPermission(PERMISSION_HIDE_DISPLAYNAMES) {
var data MessageServerMessageData
if err := json.Unmarshal(*message.Message.Data, &data); err != nil {
if err := json.Unmarshal(message.Message.Data, &data); err != nil {
return message
}
@@ -1361,7 +1327,7 @@ func (s *ClientSession) filterAsyncMessage(msg *AsyncMessage) *ServerMessage {
}
}
func (s *ClientSession) NotifySessionResumed(client *Client) {
func (s *ClientSession) NotifySessionResumed(client HandlerClient) {
s.mu.Lock()
if len(s.pendingClientMessages) == 0 {
s.mu.Unlock()


@@ -117,6 +117,7 @@ func Test_permissionsEqual(t *testing.T) {
for idx, test := range tests {
test := test
t.Run(strconv.Itoa(idx), func(t *testing.T) {
t.Parallel()
equal := permissionsEqual(test.a, test.b)
if equal != test.equal {
t.Errorf("Expected %+v to be %s to %+v but was %s", test.a, equalStrings[test.equal], test.b, equalStrings[equal])
@@ -126,12 +127,17 @@
}
func TestBandwidth_Client(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
hub, _, _, server := CreateHubForTest(t)
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
defer cancel()
mcu, err := NewTestMCU()
if err != nil {
t.Fatal(err)
} else if err := mcu.Start(); err != nil {
} else if err := mcu.Start(ctx); err != nil {
t.Fatal(err)
}
defer mcu.Stop()
@ -145,9 +151,6 @@ func TestBandwidth_Client(t *testing.T) {
t.Fatal(err)
}
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
defer cancel()
hello, err := client.RunUntilHello(ctx)
if err != nil {
t.Fatal(err)
@ -198,6 +201,8 @@ func TestBandwidth_Client(t *testing.T) {
}
func TestBandwidth_Backend(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
hub, _, _, server := CreateHubWithMultipleBackendsForTest(t)
u, err := url.Parse(server.URL + "/one")
@ -212,10 +217,13 @@ func TestBandwidth_Backend(t *testing.T) {
backend.maxScreenBitrate = 1000
backend.maxStreamBitrate = 2000
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
defer cancel()
mcu, err := NewTestMCU()
if err != nil {
t.Fatal(err)
} else if err := mcu.Start(); err != nil {
} else if err := mcu.Start(ctx); err != nil {
t.Fatal(err)
}
defer mcu.Stop()
@ -227,9 +235,6 @@ func TestBandwidth_Backend(t *testing.T) {
StreamTypeScreen,
}
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
defer cancel()
for _, streamType := range streamTypes {
t.Run(string(streamType), func(t *testing.T) {
client := NewTestClient(t, server, hub)

View file

@ -23,10 +23,40 @@ package signaling
import (
"errors"
"os"
"regexp"
"github.com/dlintw/goconf"
)
var (
searchVarsRegexp = regexp.MustCompile(`\$\([A-Za-z][A-Za-z0-9_]*\)`)
)
func replaceEnvVars(s string) string {
return searchVarsRegexp.ReplaceAllStringFunc(s, func(name string) string {
name = name[2 : len(name)-1]
value, found := os.LookupEnv(name)
if !found {
return name
}
return value
})
}
// GetStringOptionWithEnv will get the string option and resolve any environment
// variable references in the form "$(VAR)".
func GetStringOptionWithEnv(config *goconf.ConfigFile, section string, option string) (string, error) {
value, err := config.GetString(section, option)
if err != nil {
return "", err
}
value = replaceEnvVars(value)
return value, nil
}
func GetStringOptions(config *goconf.ConfigFile, section string, ignoreErrors bool) (map[string]string, error) {
options, _ := config.GetOptions(section)
if len(options) == 0 {
@ -35,7 +65,7 @@ func GetStringOptions(config *goconf.ConfigFile, section string, ignoreErrors bo
result := make(map[string]string)
for _, option := range options {
value, err := config.GetString(section, option)
value, err := GetStringOptionWithEnv(config, section, option)
if err != nil {
if ignoreErrors {
continue

View file

@ -29,13 +29,19 @@ import (
)
func TestStringOptions(t *testing.T) {
t.Setenv("FOO", "foo")
expected := map[string]string{
"one": "1",
"two": "2",
"foo": "http://foo/1",
}
config := goconf.NewConfigFile()
for k, v := range expected {
config.AddOption("foo", k, v)
if k == "foo" {
config.AddOption("foo", k, "http://$(FOO)/1")
} else {
config.AddOption("foo", k, v)
}
}
config.AddOption("default", "three", "3")
@ -48,3 +54,39 @@ func TestStringOptions(t *testing.T) {
t.Errorf("expected %+v, got %+v", expected, options)
}
}
func TestStringOptionWithEnv(t *testing.T) {
t.Setenv("FOO", "foo")
t.Setenv("BAR", "")
t.Setenv("BA_R", "bar")
config := goconf.NewConfigFile()
config.AddOption("test", "foo", "http://$(FOO)/1")
config.AddOption("test", "bar", "http://$(BAR)/2")
config.AddOption("test", "bar2", "http://$(BA_R)/3")
config.AddOption("test", "baz", "http://$(BAZ)/4")
config.AddOption("test", "inv1", "http://$(FOO")
config.AddOption("test", "inv2", "http://$FOO)")
config.AddOption("test", "inv3", "http://$((FOO)")
config.AddOption("test", "inv4", "http://$(F.OO)")
expected := map[string]string{
"foo": "http://foo/1",
"bar": "http:///2",
"bar2": "http://bar/3",
"baz": "http://BAZ/4",
"inv1": "http://$(FOO",
"inv2": "http://$FOO)",
"inv3": "http://$((FOO)",
"inv4": "http://$(F.OO)",
}
for k, v := range expected {
value, err := GetStringOptionWithEnv(config, "test", k)
if err != nil {
t.Errorf("expected value for %s, got %s", k, err)
} else if value != v {
t.Errorf("expected value %s for %s, got %s", v, k, value)
}
}
}

View file

@ -35,6 +35,7 @@ func TestDeferredExecutor_MultiClose(t *testing.T) {
}
func TestDeferredExecutor_QueueSize(t *testing.T) {
t.Parallel()
e := NewDeferredExecutor(0)
defer e.waitForStop()
defer e.Close()
@ -100,6 +101,7 @@ func TestDeferredExecutor_CloseFromFunc(t *testing.T) {
}
func TestDeferredExecutor_DeferAfterClose(t *testing.T) {
CatchLogForTest(t)
e := NewDeferredExecutor(64)
defer e.waitForStop()

View file

@ -55,6 +55,7 @@ The running container can be configured through different environment variables:
- `GEOIP_OVERRIDES`: Optional space-separated list of overrides for GeoIP lookups.
- `CONTINENT_OVERRIDES`: Optional space-separated list of overrides for continent mappings.
- `STATS_IPS`: Comma-separated list of IP addresses that are allowed to access the stats endpoint.
- `TRUSTED_PROXIES`: Comma-separated list of IPs / networks that are trusted proxies.
- `GRPC_LISTEN`: IP and port to listen on for GRPC requests.
- `GRPC_SERVER_CERTIFICATE`: Certificate to use for the GRPC server.
- `GRPC_SERVER_KEY`: Private key to use for the GRPC server.
@ -99,9 +100,16 @@ The running container can be configured through different environment variables:
- `CONFIG`: Optional name of configuration file to use.
- `HTTP_LISTEN`: Address of HTTP listener.
- `COUNTRY`: Optional ISO 3166 country this proxy is located at.
- `EXTERNAL_HOSTNAME`: The external hostname for remote streams. Will try to autodetect if omitted.
- `TOKEN_ID`: Id of the token to use when connecting remote streams.
- `TOKEN_KEY`: Private key for the configured token id.
- `BANDWIDTH_INCOMING`: Optional incoming target bandwidth (in megabits per second).
- `BANDWIDTH_OUTGOING`: Optional outgoing target bandwidth (in megabits per second).
- `JANUS_URL`: Url to Janus server.
- `MAX_STREAM_BITRATE`: Optional maximum bitrate for audio/video streams.
- `MAX_SCREEN_BITRATE`: Optional maximum bitrate for screensharing streams.
- `STATS_IPS`: Comma-separated list of IP addresses that are allowed to access the stats endpoint.
- `TRUSTED_PROXIES`: Comma-separated list of IPs / networks that are trusted proxies.
- `ETCD_ENDPOINTS`: Static list of etcd endpoints (if etcd should be used).
- `ETCD_DISCOVERY_SRV`: Alternative domain to use for DNS SRV configuration of etcd endpoints (if etcd should be used).
- `ETCD_DISCOVERY_SERVICE`: Optional service name for DNS SRV configuration of etcd..

View file

@ -1,5 +1,5 @@
# Modified from https://gitlab.com/powerpaul17/nc_talk_backend/-/blob/dcbb918d8716dad1eb72a889d1e6aa1e3a543641/docker/janus/Dockerfile
FROM alpine:3.19
FROM alpine:3.20
RUN apk add --no-cache curl autoconf automake libtool pkgconf build-base \
glib-dev libconfig-dev libnice-dev jansson-dev openssl-dev zlib libsrtp-dev \
@ -15,30 +15,30 @@ RUN cd /tmp && \
git checkout $USRSCTP_VERSION && \
./bootstrap && \
./configure --prefix=/usr && \
make && make install
make -j$(nproc) && make install
# libsrtp
ARG LIBSRTP_VERSION=2.4.2
ARG LIBSRTP_VERSION=2.6.0
RUN cd /tmp && \
wget https://github.com/cisco/libsrtp/archive/v$LIBSRTP_VERSION.tar.gz && \
tar xfv v$LIBSRTP_VERSION.tar.gz && \
cd libsrtp-$LIBSRTP_VERSION && \
./configure --prefix=/usr --enable-openssl && \
make shared_library && \
make shared_library -j$(nproc) && \
make install && \
rm -fr /libsrtp-$LIBSRTP_VERSION && \
rm -f /v$LIBSRTP_VERSION.tar.gz
# JANUS
ARG JANUS_VERSION=0.14.1
ARG JANUS_VERSION=1.2.2
RUN mkdir -p /usr/src/janus && \
cd /usr/src/janus && \
curl -L https://github.com/meetecho/janus-gateway/archive/v$JANUS_VERSION.tar.gz | tar -xz && \
cd /usr/src/janus/janus-gateway-$JANUS_VERSION && \
./autogen.sh && \
./configure --disable-rabbitmq --disable-mqtt --disable-boringssl && \
make && \
make -j$(nproc) && \
make install && \
make configs

View file

@ -44,6 +44,22 @@ if [ ! -f "$CONFIG" ]; then
sed -i "s|#country =.*|country = $COUNTRY|" "$CONFIG"
fi
if [ -n "$EXTERNAL_HOSTNAME" ]; then
sed -i "s|#hostname =.*|hostname = $EXTERNAL_HOSTNAME|" "$CONFIG"
fi
if [ -n "$TOKEN_ID" ]; then
sed -i "s|#token_id =.*|token_id = $TOKEN_ID|" "$CONFIG"
fi
if [ -n "$TOKEN_KEY" ]; then
sed -i "s|#token_key =.*|token_key = $TOKEN_KEY|" "$CONFIG"
fi
if [ -n "$BANDWIDTH_INCOMING" ]; then
sed -i "s|#incoming =.*|incoming = $BANDWIDTH_INCOMING|" "$CONFIG"
fi
if [ -n "$BANDWIDTH_OUTGOING" ]; then
sed -i "s|#outgoing =.*|outgoing = $BANDWIDTH_OUTGOING|" "$CONFIG"
fi
HAS_ETCD=
if [ -n "$ETCD_ENDPOINTS" ]; then
sed -i "s|#endpoints =.*|endpoints = $ETCD_ENDPOINTS|" "$CONFIG"
@ -109,6 +125,10 @@ if [ ! -f "$CONFIG" ]; then
if [ -n "$STATS_IPS" ]; then
sed -i "s|#allowed_ips =.*|allowed_ips = $STATS_IPS|" "$CONFIG"
fi
if [ -n "$TRUSTED_PROXIES" ]; then
sed -i "s|#trustedproxies =.*|trustedproxies = $TRUSTED_PROXIES|" "$CONFIG"
fi
fi
echo "Starting signaling proxy with $CONFIG ..."

View file

@ -19,9 +19,12 @@ RUN adduser -D spreedbackend && \
COPY --from=builder /workdir/bin/signaling /usr/bin/nextcloud-spreed-signaling
COPY ./server.conf.in /config/server.conf.in
COPY ./docker/server/entrypoint.sh /
COPY ./docker/server/stop.sh /
COPY ./docker/server/wait.sh /
RUN chown spreedbackend /config
RUN /usr/bin/nextcloud-spreed-signaling -version
USER spreedbackend
STOPSIGNAL SIGUSR1
ENTRYPOINT [ "/entrypoint.sh" ]

View file

@ -157,6 +157,10 @@ if [ ! -f "$CONFIG" ]; then
sed -i "s|#allowed_ips =.*|allowed_ips = $STATS_IPS|" "$CONFIG"
fi
if [ -n "$TRUSTED_PROXIES" ]; then
sed -i "s|#trustedproxies =.*|trustedproxies = $TRUSTED_PROXIES|" "$CONFIG"
fi
if [ -n "$GRPC_LISTEN" ]; then
sed -i "s|#listen = 0.0.0.0:9090|listen = $GRPC_LISTEN|" "$CONFIG"

26
docker/server/stop.sh Executable file
View file

@ -0,0 +1,26 @@
#!/bin/bash
#
# Standalone signaling server for the Nextcloud Spreed app.
# Copyright (C) 2024 struktur AG
#
# @author Joachim Bauch <bauch@struktur.de>
#
# @license GNU AGPL version 3 or any later version
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
set -e
echo "Schedule signaling server to shutdown ..."
exec killall -USR1 nextcloud-spreed-signaling

33
docker/server/wait.sh Executable file
View file

@ -0,0 +1,33 @@
#!/bin/bash
#
# Standalone signaling server for the Nextcloud Spreed app.
# Copyright (C) 2024 struktur AG
#
# @author Joachim Bauch <bauch@struktur.de>
#
# @license GNU AGPL version 3 or any later version
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
set -e
echo "Waiting for signaling server to shutdown ..."
while true
do
if ! pgrep nextcloud-spreed-signaling > /dev/null ; then
echo "Signaling server has stopped"
exit 0
fi
sleep 1
done

View file

@ -49,3 +49,5 @@ The following metrics are available:
| `signaling_grpc_client_calls_total` | Counter | 1.0.0 | The total number of GRPC client calls | `method` |
| `signaling_grpc_server_calls_total` | Counter | 1.0.0 | The total number of GRPC server calls | `method` |
| `signaling_http_client_pool_connections` | Gauge | 1.2.4 | The current number of HTTP client connections per host | `host` |
| `signaling_throttle_delayed_total` | Counter | 1.2.5 | The total number of delayed requests | `action`, `delay` |
| `signaling_throttle_bruteforce_total` | Counter | 1.2.5 | The total number of rejected bruteforce requests | `action` |

View file

@ -1,6 +1,6 @@
jinja2==3.1.3
jinja2==3.1.4
markdown==3.6
mkdocs==1.5.3
mkdocs==1.6.0
readthedocs-sphinx-search==0.3.2
sphinx==7.2.6
sphinx==7.3.7
sphinx_rtd_theme==2.0.0

View file

@ -23,6 +23,7 @@ package signaling
import (
"context"
"errors"
"fmt"
"log"
"strings"
@ -34,6 +35,8 @@ import (
"go.etcd.io/etcd/client/pkg/v3/srv"
"go.etcd.io/etcd/client/pkg/v3/transport"
clientv3 "go.etcd.io/etcd/client/v3"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
type EtcdClientListener interface {
@ -42,8 +45,8 @@ type EtcdClientListener interface {
type EtcdClientWatcher interface {
EtcdWatchCreated(client *EtcdClient, key string)
EtcdKeyUpdated(client *EtcdClient, key string, value []byte)
EtcdKeyDeleted(client *EtcdClient, key string)
EtcdKeyUpdated(client *EtcdClient, key string, value []byte, prevValue []byte)
EtcdKeyDeleted(client *EtcdClient, key string, prevValue []byte)
}
type EtcdClient struct {
@ -112,6 +115,17 @@ func (c *EtcdClient) load(config *goconf.ConfigFile, ignoreErrors bool) error {
DialTimeout: time.Second,
}
if logLevel, _ := config.GetString("etcd", "loglevel"); logLevel != "" {
var l zapcore.Level
if err := l.Set(logLevel); err != nil {
return fmt.Errorf("Unsupported etcd log level %s: %w", logLevel, err)
}
logConfig := zap.NewProductionConfig()
logConfig.Level = zap.NewAtomicLevelAt(l)
cfg.LogConfig = &logConfig
}
clientKey := c.getConfigStringWithFallback(config, "clientkey")
clientCert := c.getConfigStringWithFallback(config, "clientcert")
caCert := c.getConfigStringWithFallback(config, "cacert")
@ -176,8 +190,8 @@ func (c *EtcdClient) getEtcdClient() *clientv3.Client {
return client.(*clientv3.Client)
}
func (c *EtcdClient) syncClient() error {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
func (c *EtcdClient) syncClient(ctx context.Context) error {
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
return c.getEtcdClient().Sync(ctx)
@ -223,8 +237,10 @@ func (c *EtcdClient) WaitForConnection(ctx context.Context) error {
return err
}
if err := c.syncClient(); err != nil {
if err == context.DeadlineExceeded {
if err := c.syncClient(ctx); err != nil {
if errors.Is(err, context.Canceled) {
return err
} else if errors.Is(err, context.DeadlineExceeded) {
log.Printf("Timeout waiting for etcd client to connect to the cluster, retry in %s", backoff.NextWait())
} else {
log.Printf("Could not sync etcd client with the cluster, retry in %s: %s", backoff.NextWait(), err)
@ -243,27 +259,37 @@ func (c *EtcdClient) Get(ctx context.Context, key string, opts ...clientv3.OpOpt
return c.getEtcdClient().Get(ctx, key, opts...)
}
func (c *EtcdClient) Watch(ctx context.Context, key string, watcher EtcdClientWatcher, opts ...clientv3.OpOption) error {
log.Printf("Wait for leader and start watching on %s", key)
func (c *EtcdClient) Watch(ctx context.Context, key string, nextRevision int64, watcher EtcdClientWatcher, opts ...clientv3.OpOption) (int64, error) {
log.Printf("Wait for leader and start watching on %s (rev=%d)", key, nextRevision)
opts = append(opts, clientv3.WithRev(nextRevision), clientv3.WithPrevKV())
ch := c.getEtcdClient().Watch(clientv3.WithRequireLeader(ctx), key, opts...)
log.Printf("Watch created for %s", key)
watcher.EtcdWatchCreated(c, key)
for response := range ch {
if err := response.Err(); err != nil {
return err
return nextRevision, err
}
nextRevision = response.Header.Revision + 1
for _, ev := range response.Events {
switch ev.Type {
case clientv3.EventTypePut:
watcher.EtcdKeyUpdated(c, string(ev.Kv.Key), ev.Kv.Value)
var prevValue []byte
if ev.PrevKv != nil {
prevValue = ev.PrevKv.Value
}
watcher.EtcdKeyUpdated(c, string(ev.Kv.Key), ev.Kv.Value, prevValue)
case clientv3.EventTypeDelete:
watcher.EtcdKeyDeleted(c, string(ev.Kv.Key))
var prevValue []byte
if ev.PrevKv != nil {
prevValue = ev.PrevKv.Value
}
watcher.EtcdKeyDeleted(c, string(ev.Kv.Key), prevValue)
default:
log.Printf("Unsupported watch event %s %q -> %q", ev.Type, ev.Kv.Key, ev.Kv.Value)
}
}
}
return nil
return nextRevision, nil
}

View file

@ -29,7 +29,6 @@ import (
"os"
"runtime"
"strconv"
"sync"
"syscall"
"testing"
"time"
@ -39,6 +38,8 @@ import (
clientv3 "go.etcd.io/etcd/client/v3"
"go.etcd.io/etcd/server/v3/embed"
"go.etcd.io/etcd/server/v3/lease"
"go.uber.org/zap"
"go.uber.org/zap/zaptest"
)
var (
@ -89,6 +90,7 @@ func NewEtcdForTest(t *testing.T) *embed.Etcd {
cfg.ListenPeerUrls = []url.URL{*peerListener}
cfg.AdvertisePeerUrls = []url.URL{*peerListener}
cfg.InitialCluster = "default=" + peerListener.String()
cfg.ZapLoggerBuilder = embed.NewZapLoggerBuilder(zaptest.NewLogger(t, zaptest.Level(zap.WarnLevel)))
etcd, err = embed.StartEtcd(cfg)
if isErrorAddressAlreadyInUse(err) {
continue
@ -103,6 +105,7 @@ func NewEtcdForTest(t *testing.T) *embed.Etcd {
t.Cleanup(func() {
etcd.Close()
<-etcd.Server.StopNotify()
})
// Wait for server to be ready.
<-etcd.Server.ReadyNotify()
@ -115,6 +118,7 @@ func NewEtcdClientForTest(t *testing.T) (*embed.Etcd, *EtcdClient) {
config := goconf.NewConfigFile()
config.AddOption("etcd", "endpoints", etcd.Config().ListenClientUrls[0].String())
config.AddOption("etcd", "loglevel", "error")
client, err := NewEtcdClient(config, "")
if err != nil {
@ -143,6 +147,8 @@ func DeleteEtcdValue(etcd *embed.Etcd, key string) {
}
func Test_EtcdClient_Get(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
etcd, client := NewEtcdClientForTest(t)
if response, err := client.Get(context.Background(), "foo"); err != nil {
@ -165,6 +171,8 @@ func Test_EtcdClient_Get(t *testing.T) {
}
func Test_EtcdClient_GetPrefix(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
etcd, client := NewEtcdClientForTest(t)
if response, err := client.Get(context.Background(), "foo"); err != nil {
@ -196,6 +204,8 @@ type etcdEvent struct {
t mvccpb.Event_EventType
key string
value string
prevValue string
}
type EtcdClientTestListener struct {
@ -204,9 +214,8 @@ type EtcdClientTestListener struct {
ctx context.Context
cancel context.CancelFunc
initial chan struct{}
initialWg sync.WaitGroup
events chan etcdEvent
initial chan struct{}
events chan etcdEvent
}
func NewEtcdClientTestListener(ctx context.Context, t *testing.T) *EtcdClientTestListener {
@ -227,15 +236,7 @@ func (l *EtcdClientTestListener) Close() {
}
func (l *EtcdClientTestListener) EtcdClientCreated(client *EtcdClient) {
l.initialWg.Add(1)
go func() {
if err := client.Watch(clientv3.WithRequireLeader(l.ctx), "foo", l, clientv3.WithPrefix()); err != nil {
l.t.Error(err)
}
}()
go func() {
defer close(l.initial)
if err := client.WaitForConnection(l.ctx); err != nil {
l.t.Errorf("error waiting for connection: %s", err)
return
@ -244,7 +245,8 @@ func (l *EtcdClientTestListener) EtcdClientCreated(client *EtcdClient) {
ctx, cancel := context.WithTimeout(l.ctx, time.Second)
defer cancel()
if response, err := client.Get(ctx, "foo", clientv3.WithPrefix()); err != nil {
response, err := client.Get(ctx, "foo", clientv3.WithPrefix())
if err != nil {
l.t.Error(err)
} else if response.Count != 1 {
l.t.Errorf("expected 1 responses, got %d", response.Count)
@ -253,30 +255,47 @@ func (l *EtcdClientTestListener) EtcdClientCreated(client *EtcdClient) {
} else if string(response.Kvs[0].Value) != "1" {
l.t.Errorf("expected value \"1\", got \"%s\"", string(response.Kvs[0].Value))
}
l.initialWg.Wait()
close(l.initial)
nextRevision := response.Header.Revision + 1
for l.ctx.Err() == nil {
var err error
if nextRevision, err = client.Watch(clientv3.WithRequireLeader(l.ctx), "foo", nextRevision, l, clientv3.WithPrefix()); err != nil {
l.t.Error(err)
}
}
}()
}
func (l *EtcdClientTestListener) EtcdWatchCreated(client *EtcdClient, key string) {
l.initialWg.Done()
}
func (l *EtcdClientTestListener) EtcdKeyUpdated(client *EtcdClient, key string, value []byte) {
l.events <- etcdEvent{
func (l *EtcdClientTestListener) EtcdKeyUpdated(client *EtcdClient, key string, value []byte, prevValue []byte) {
evt := etcdEvent{
t: clientv3.EventTypePut,
key: string(key),
value: string(value),
}
if len(prevValue) > 0 {
evt.prevValue = string(prevValue)
}
l.events <- evt
}
func (l *EtcdClientTestListener) EtcdKeyDeleted(client *EtcdClient, key string) {
l.events <- etcdEvent{
func (l *EtcdClientTestListener) EtcdKeyDeleted(client *EtcdClient, key string, prevValue []byte) {
evt := etcdEvent{
t: clientv3.EventTypeDelete,
key: string(key),
}
if len(prevValue) > 0 {
evt.prevValue = string(prevValue)
}
l.events <- evt
}
func Test_EtcdClient_Watch(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
etcd, client := NewEtcdClientForTest(t)
SetEtcdValue(etcd, "foo/a", []byte("1"))
@ -299,11 +318,23 @@ func Test_EtcdClient_Watch(t *testing.T) {
t.Errorf("expected value %s, got %s", "2", event.value)
}
SetEtcdValue(etcd, "foo/a", []byte("3"))
event = <-listener.events
if event.t != clientv3.EventTypePut {
t.Errorf("expected type %d, got %d", clientv3.EventTypePut, event.t)
} else if event.key != "foo/a" {
t.Errorf("expected key %s, got %s", "foo/a", event.key)
} else if event.value != "3" {
t.Errorf("expected value %s, got %s", "3", event.value)
}
DeleteEtcdValue(etcd, "foo/a")
event = <-listener.events
if event.t != clientv3.EventTypeDelete {
t.Errorf("expected type %d, got %d", clientv3.EventTypeDelete, event.t)
} else if event.key != "foo/a" {
t.Errorf("expected key %s, got %s", "foo/a", event.key)
} else if event.prevValue != "3" {
t.Errorf("expected previous value %s, got %s", "3", event.prevValue)
}
}

View file

@ -22,6 +22,7 @@
package signaling
import (
"context"
"errors"
"log"
"os"
@ -54,7 +55,9 @@ type FileWatcher struct {
target string
callback FileWatcherCallback
watcher *fsnotify.Watcher
watcher *fsnotify.Watcher
closeCtx context.Context
closeFunc context.CancelFunc
}
func NewFileWatcher(filename string, callback FileWatcherCallback) (*FileWatcher, error) {
@ -73,24 +76,28 @@ func NewFileWatcher(filename string, callback FileWatcherCallback) (*FileWatcher
return nil, err
}
if filename != realFilename {
if err := watcher.Add(path.Dir(filename)); err != nil {
watcher.Close() // nolint
return nil, err
}
if err := watcher.Add(path.Dir(filename)); err != nil {
watcher.Close() // nolint
return nil, err
}
closeCtx, closeFunc := context.WithCancel(context.Background())
w := &FileWatcher{
filename: filename,
target: realFilename,
callback: callback,
watcher: watcher,
closeCtx: closeCtx,
closeFunc: closeFunc,
}
go w.run()
return w, nil
}
func (f *FileWatcher) Close() error {
f.closeFunc()
return f.watcher.Close()
}
@ -154,6 +161,8 @@ func (f *FileWatcher) run() {
}
log.Printf("Error watching %s: %s", f.filename, err)
case <-f.closeCtx.Done():
return
}
}
}

View file

@ -47,6 +47,53 @@ func TestFileWatcher_NotExist(t *testing.T) {
}
func TestFileWatcher_File(t *testing.T) {
ensureNoGoroutinesLeak(t, func(t *testing.T) {
tmpdir := t.TempDir()
filename := path.Join(tmpdir, "test.txt")
if err := os.WriteFile(filename, []byte("Hello world!"), 0644); err != nil {
t.Fatal(err)
}
modified := make(chan struct{})
w, err := NewFileWatcher(filename, func(filename string) {
modified <- struct{}{}
})
if err != nil {
t.Fatal(err)
}
defer w.Close()
if err := os.WriteFile(filename, []byte("Updated"), 0644); err != nil {
t.Fatal(err)
}
<-modified
ctxTimeout, cancel := context.WithTimeout(context.Background(), testWatcherNoEventTimeout)
defer cancel()
select {
case <-modified:
t.Error("should not have received another event")
case <-ctxTimeout.Done():
}
if err := os.WriteFile(filename, []byte("Updated"), 0644); err != nil {
t.Fatal(err)
}
<-modified
ctxTimeout, cancel = context.WithTimeout(context.Background(), testWatcherNoEventTimeout)
defer cancel()
select {
case <-modified:
t.Error("should not have received another event")
case <-ctxTimeout.Done():
}
})
}
func TestFileWatcher_Rename(t *testing.T) {
tmpdir := t.TempDir()
filename := path.Join(tmpdir, "test.txt")
if err := os.WriteFile(filename, []byte("Hello world!"), 0644); err != nil {
@ -62,10 +109,10 @@ func TestFileWatcher_File(t *testing.T) {
}
defer w.Close()
if err := os.WriteFile(filename, []byte("Updated"), 0644); err != nil {
filename2 := path.Join(tmpdir, "test.txt.tmp")
if err := os.WriteFile(filename2, []byte("Updated"), 0644); err != nil {
t.Fatal(err)
}
<-modified
ctxTimeout, cancel := context.WithTimeout(context.Background(), testWatcherNoEventTimeout)
defer cancel()
@ -76,7 +123,7 @@ func TestFileWatcher_File(t *testing.T) {
case <-ctxTimeout.Done():
}
if err := os.WriteFile(filename, []byte("Updated"), 0644); err != nil {
if err := os.Rename(filename2, filename); err != nil {
t.Fatal(err)
}
<-modified
@ -211,3 +258,53 @@ func TestFileWatcher_OtherSymlink(t *testing.T) {
case <-ctxTimeout.Done():
}
}
func TestFileWatcher_RenameSymlinkTarget(t *testing.T) {
tmpdir := t.TempDir()
sourceFilename1 := path.Join(tmpdir, "test1.txt")
if err := os.WriteFile(sourceFilename1, []byte("Hello world!"), 0644); err != nil {
t.Fatal(err)
}
filename := path.Join(tmpdir, "test.txt")
if err := os.Symlink(sourceFilename1, filename); err != nil {
t.Fatal(err)
}
modified := make(chan struct{})
w, err := NewFileWatcher(filename, func(filename string) {
modified <- struct{}{}
})
if err != nil {
t.Fatal(err)
}
defer w.Close()
sourceFilename2 := path.Join(tmpdir, "test1.txt.tmp")
if err := os.WriteFile(sourceFilename2, []byte("Updated"), 0644); err != nil {
t.Fatal(err)
}
ctxTimeout, cancel := context.WithTimeout(context.Background(), testWatcherNoEventTimeout)
defer cancel()
select {
case <-modified:
t.Error("should not have received another event")
case <-ctxTimeout.Done():
}
if err := os.Rename(sourceFilename2, sourceFilename1); err != nil {
t.Fatal(err)
}
<-modified
ctxTimeout, cancel = context.WithTimeout(context.Background(), testWatcherNoEventTimeout)
defer cancel()
select {
case <-modified:
t.Error("should not have received another event")
case <-ctxTimeout.Done():
}
}

View file

@ -97,6 +97,7 @@ func runConcurrentFlags(t *testing.T, count int, f func()) {
}
func TestFlagsConcurrentAdd(t *testing.T) {
t.Parallel()
var flags Flags
var added atomic.Int32
@ -111,6 +112,7 @@ func TestFlagsConcurrentAdd(t *testing.T) {
}
func TestFlagsConcurrentRemove(t *testing.T) {
t.Parallel()
var flags Flags
flags.Set(1)
@ -126,6 +128,7 @@ func TestFlagsConcurrentRemove(t *testing.T) {
}
func TestFlagsConcurrentSet(t *testing.T) {
t.Parallel()
var flags Flags
var set atomic.Int32

View file

@ -78,6 +78,7 @@ func GetGeoIpUrlForTest(t *testing.T) string {
}
func TestGeoLookup(t *testing.T) {
CatchLogForTest(t)
reader, err := NewGeoLookupFromUrl(GetGeoIpUrlForTest(t))
if err != nil {
t.Fatal(err)
@ -92,6 +93,7 @@ func TestGeoLookup(t *testing.T) {
}
func TestGeoLookupCaching(t *testing.T) {
CatchLogForTest(t)
reader, err := NewGeoLookupFromUrl(GetGeoIpUrlForTest(t))
if err != nil {
t.Fatal(err)
@ -138,6 +140,7 @@ func TestGeoLookupContinent(t *testing.T) {
}
func TestGeoLookupCloseEmpty(t *testing.T) {
CatchLogForTest(t)
reader, err := NewGeoLookupFromUrl("ignore-url")
if err != nil {
t.Fatal(err)
@ -146,6 +149,7 @@ func TestGeoLookupCloseEmpty(t *testing.T) {
}
func TestGeoLookupFromFile(t *testing.T) {
CatchLogForTest(t)
geoIpUrl := GetGeoIpUrlForTest(t)
resp, err := http.Get(geoIpUrl)

53
go.mod
View file

@ -1,6 +1,6 @@
module github.com/strukturag/nextcloud-spreed-signaling
go 1.20
go 1.21
require (
github.com/dlintw/goconf v0.0.0-20120228082610-dcc070983490
@ -11,19 +11,20 @@ require (
github.com/gorilla/securecookie v1.1.2
github.com/gorilla/websocket v1.5.1
github.com/mailru/easyjson v0.7.7
github.com/nats-io/nats-server/v2 v2.10.12
github.com/nats-io/nats.go v1.34.0
github.com/nats-io/nats-server/v2 v2.10.16
github.com/nats-io/nats.go v1.35.0
github.com/notedit/janus-go v0.0.0-20200517101215-10eb8b95d1a0
github.com/oschwald/maxminddb-golang v1.12.0
github.com/pion/sdp/v3 v3.0.9
github.com/prometheus/client_golang v1.19.0
go.etcd.io/etcd/api/v3 v3.5.12
go.etcd.io/etcd/client/pkg/v3 v3.5.12
go.etcd.io/etcd/client/v3 v3.5.12
go.etcd.io/etcd/server/v3 v3.5.12
google.golang.org/grpc v1.62.1
github.com/prometheus/client_golang v1.19.1
go.etcd.io/etcd/api/v3 v3.5.13
go.etcd.io/etcd/client/pkg/v3 v3.5.13
go.etcd.io/etcd/client/v3 v3.5.13
go.etcd.io/etcd/server/v3 v3.5.13
go.uber.org/zap v1.27.0
google.golang.org/grpc v1.64.0
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.3.0
google.golang.org/protobuf v1.33.0
google.golang.org/protobuf v1.34.1
)
require (
@ -46,26 +47,26 @@ require (
github.com/jonboulle/clockwork v0.2.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.7 // indirect
github.com/klauspost/compress v1.17.8 // indirect
github.com/minio/highwayhash v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/nats-io/jwt/v2 v2.5.5 // indirect
github.com/nats-io/jwt/v2 v2.5.7 // indirect
github.com/nats-io/nkeys v0.4.7 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/sirupsen/logrus v1.7.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/soheilhy/cmux v0.1.5 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802 // indirect
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 // indirect
go.etcd.io/bbolt v1.3.8 // indirect
go.etcd.io/etcd/client/v2 v2.305.12 // indirect
go.etcd.io/etcd/pkg/v3 v3.5.12 // indirect
go.etcd.io/etcd/raft/v3 v3.5.12 // indirect
go.etcd.io/bbolt v1.3.9 // indirect
go.etcd.io/etcd/client/v2 v2.305.13 // indirect
go.etcd.io/etcd/pkg/v3 v3.5.13 // indirect
go.etcd.io/etcd/raft/v3 v3.5.13 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.0 // indirect
go.opentelemetry.io/otel v1.20.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.20.0 // indirect
@ -74,17 +75,15 @@ require (
go.opentelemetry.io/otel/sdk v1.20.0 // indirect
go.opentelemetry.io/otel/trace v1.20.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.17.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/net v0.21.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
go.uber.org/multierr v1.10.0 // indirect
golang.org/x/crypto v0.23.0 // indirect
golang.org/x/net v0.23.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
sigs.k8s.io/yaml v1.2.0 // indirect

129
go.sum

@@ -1,8 +1,10 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.112.0 h1:tpFCD7hpHFlQ8yPwT3x+QeXqc2T6+n6T+hmABHfDUSM=
cloud.google.com/go/compute v1.23.3 h1:6sVlXXBmbd7jNX0Ipq0trII3e4n1/MsADLK6a+aiVlk=
cloud.google.com/go/compute v1.25.1 h1:ZRpHJedLtTpKgr3RV1Fx23NuaAEN1Zfx9hw1u4aJdjU=
cloud.google.com/go/compute v1.25.1/go.mod h1:oopOIR53ly6viBYxaDhBfJwzUAxf1zE//uf3IB011ls=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
@@ -15,8 +17,10 @@ github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/xds/go v0.0.0-20231128003011-0fa0005c9caa h1:jQCWAUqqlij9Pgj2i/PB79y4KOPYVyFYdROxgaCwdTQ=
github.com/cncf/xds/go v0.0.0-20240318125728-8a4994d93e50 h1:DBmgJDC9dTfkVyGgipamEh2BpGYxScCH1TOF1LL1cXc=
github.com/cncf/xds/go v0.0.0-20240318125728-8a4994d93e50/go.mod h1:5e1+Vvlzido69INQaVO6d87Qn543Xr6nooe9Kz7oBFM=
github.com/cockroachdb/datadriven v1.0.2 h1:H9MtNqVoVhvd9nCBwOyDjUEdZCREqbIdCJD93PBm/jA=
github.com/cockroachdb/datadriven v1.0.2/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
@@ -33,6 +37,7 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v1.0.4 h1:gVPz/FMfvh57HdSJQyvBtF00j8JU4zdyUgIUNhlgg0A=
github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -51,6 +56,7 @@ github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOW
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.2.0 h1:uCdmnmatrKCgMBlM4rMuJZWOkPDqdbZPnrMXDY4gI68=
github.com/golang/glog v1.2.0/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -62,8 +68,10 @@ github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -89,12 +97,14 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.7 h1:ehO88t2UGzQK66LMdE8tibEd1ErmzZjNEqWkjLAKQQg=
github.com/klauspost/compress v1.17.7/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.17.8 h1:YcnTYrq7MikUT7k0Yb5eceMmALQPYBW/Xltxn0NAMnU=
github.com/klauspost/compress v1.17.8/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/minio/highwayhash v1.0.2 h1:Aak5U0nElisjDCfPSG79Tgzkn2gl66NxOMspRrKnA/g=
@@ -104,12 +114,12 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/nats-io/jwt/v2 v2.5.5 h1:ROfXb50elFq5c9+1ztaUbdlrArNFl2+fQWP6B8HGEq4=
github.com/nats-io/jwt/v2 v2.5.5/go.mod h1:ZdWS1nZa6WMZfFwwgpEaqBV8EPGVgOTDHN/wTbz0Y5A=
github.com/nats-io/nats-server/v2 v2.10.12 h1:G6u+RDrHkw4bkwn7I911O5jqys7jJVRY6MwgndyUsnE=
github.com/nats-io/nats-server/v2 v2.10.12/go.mod h1:H1n6zXtYLFCgXcf/SF8QNTSIFuS8tyZQMN9NguUHdEs=
github.com/nats-io/nats.go v1.34.0 h1:fnxnPCNiwIG5w08rlMcEKTUw4AV/nKyGCOJE8TdhSPk=
github.com/nats-io/nats.go v1.34.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/jwt/v2 v2.5.7 h1:j5lH1fUXCnJnY8SsQeB/a/z9Azgu2bYIDvtPVNdxe2c=
github.com/nats-io/jwt/v2 v2.5.7/go.mod h1:ZdWS1nZa6WMZfFwwgpEaqBV8EPGVgOTDHN/wTbz0Y5A=
github.com/nats-io/nats-server/v2 v2.10.16 h1:2jXaiydp5oB/nAx/Ytf9fdCi9QN6ItIc9eehX8kwVV0=
github.com/nats-io/nats-server/v2 v2.10.16/go.mod h1:Pksi38H2+6xLe1vQx0/EA4bzetM0NqyIHcIbmgXSkIU=
github.com/nats-io/nats.go v1.35.0 h1:XFNqNM7v5B+MQMKqVGAyHwYhyKb48jrenXNxIU20ULk=
github.com/nats-io/nats.go v1.35.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
@@ -123,12 +133,11 @@ github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/sdp/v3 v3.0.9 h1:pX++dCHoHUwq43kuwf3PyJfHlwIj4hXA7Vrifiq0IJY=
github.com/pion/sdp/v3 v3.0.9/go.mod h1:B5xmvENq5IXJimIO4zfp6LAe1fD9N+kFv+V/1lOdz8M=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU=
github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k=
github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE=
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
@@ -138,9 +147,10 @@ github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
@@ -165,22 +175,22 @@ github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.8 h1:xs88BrvEv273UsB79e0hcVrlUWmS0a8upikMFhSyAtA=
go.etcd.io/bbolt v1.3.8/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
go.etcd.io/etcd/api/v3 v3.5.12 h1:W4sw5ZoU2Juc9gBWuLk5U6fHfNVyY1WC5g9uiXZio/c=
go.etcd.io/etcd/api/v3 v3.5.12/go.mod h1:Ot+o0SWSyT6uHhA56al1oCED0JImsRiU9Dc26+C2a+4=
go.etcd.io/etcd/client/pkg/v3 v3.5.12 h1:EYDL6pWwyOsylrQyLp2w+HkQ46ATiOvoEdMarindU2A=
go.etcd.io/etcd/client/pkg/v3 v3.5.12/go.mod h1:seTzl2d9APP8R5Y2hFL3NVlD6qC/dOT+3kvrqPyTas4=
go.etcd.io/etcd/client/v2 v2.305.12 h1:0m4ovXYo1CHaA/Mp3X/Fak5sRNIWf01wk/X1/G3sGKI=
go.etcd.io/etcd/client/v2 v2.305.12/go.mod h1:aQ/yhsxMu+Oht1FOupSr60oBvcS9cKXHrzBpDsPTf9E=
go.etcd.io/etcd/client/v3 v3.5.12 h1:v5lCPXn1pf1Uu3M4laUE2hp/geOTc5uPcYYsNe1lDxg=
go.etcd.io/etcd/client/v3 v3.5.12/go.mod h1:tSbBCakoWmmddL+BKVAJHa9km+O/E+bumDe9mSbPiqw=
go.etcd.io/etcd/pkg/v3 v3.5.12 h1:OK2fZKI5hX/+BTK76gXSTyZMrbnARyX9S643GenNGb8=
go.etcd.io/etcd/pkg/v3 v3.5.12/go.mod h1:UVwg/QIMoJncyeb/YxvJBJCE/NEwtHWashqc8A1nj/M=
go.etcd.io/etcd/raft/v3 v3.5.12 h1:7r22RufdDsq2z3STjoR7Msz6fYH8tmbkdheGfwJNRmU=
go.etcd.io/etcd/raft/v3 v3.5.12/go.mod h1:ERQuZVe79PI6vcC3DlKBukDCLja/L7YMu29B74Iwj4U=
go.etcd.io/etcd/server/v3 v3.5.12 h1:EtMjsbfyfkwZuA2JlKOiBfuGkFCekv5H178qjXypbG8=
go.etcd.io/etcd/server/v3 v3.5.12/go.mod h1:axB0oCjMy+cemo5290/CutIjoxlfA6KVYKD1w0uue10=
go.etcd.io/bbolt v1.3.9 h1:8x7aARPEXiXbHmtUwAIv7eV2fQFHrLLavdiJ3uzJXoI=
go.etcd.io/bbolt v1.3.9/go.mod h1:zaO32+Ti0PK1ivdPtgMESzuzL2VPoIG1PCQNvOdo/dE=
go.etcd.io/etcd/api/v3 v3.5.13 h1:8WXU2/NBge6AUF1K1gOexB6e07NgsN1hXK0rSTtgSp4=
go.etcd.io/etcd/api/v3 v3.5.13/go.mod h1:gBqlqkcMMZMVTMm4NDZloEVJzxQOQIls8splbqBDa0c=
go.etcd.io/etcd/client/pkg/v3 v3.5.13 h1:RVZSAnWWWiI5IrYAXjQorajncORbS0zI48LQlE2kQWg=
go.etcd.io/etcd/client/pkg/v3 v3.5.13/go.mod h1:XxHT4u1qU12E2+po+UVPrEeL94Um6zL58ppuJWXSAB8=
go.etcd.io/etcd/client/v2 v2.305.13 h1:RWfV1SX5jTU0lbCvpVQe3iPQeAHETWdOTb6pxhd77C8=
go.etcd.io/etcd/client/v2 v2.305.13/go.mod h1:iQnL7fepbiomdXMb3om1rHq96htNNGv2sJkEcZGDRRg=
go.etcd.io/etcd/client/v3 v3.5.13 h1:o0fHTNJLeO0MyVbc7I3fsCf6nrOqn5d+diSarKnB2js=
go.etcd.io/etcd/client/v3 v3.5.13/go.mod h1:cqiAeY8b5DEEcpxvgWKsbLIWNM/8Wy2xJSDMtioMcoI=
go.etcd.io/etcd/pkg/v3 v3.5.13 h1:st9bDWNsKkBNpP4PR1MvM/9NqUPfvYZx/YXegsYEH8M=
go.etcd.io/etcd/pkg/v3 v3.5.13/go.mod h1:N+4PLrp7agI/Viy+dUYpX7iRtSPvKq+w8Y14d1vX+m0=
go.etcd.io/etcd/raft/v3 v3.5.13 h1:7r/NKAOups1YnKcfro2RvGGo2PTuizF/xh26Z2CTAzA=
go.etcd.io/etcd/raft/v3 v3.5.13/go.mod h1:uUFibGLn2Ksm2URMxN1fICGhk8Wu96EfDQyuLhAcAmw=
go.etcd.io/etcd/server/v3 v3.5.13 h1:V6KG+yMfMSqWt+lGnhFpP5z5dRUj1BDRJ5k1fQ9DFok=
go.etcd.io/etcd/server/v3 v3.5.13/go.mod h1:K/8nbsGupHqmr5MkgaZpLlH1QdX1pcNQLAkODy44XcQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.0 h1:PzIubN4/sjByhDRHLviCjJuweBXWFZWhghjg7cS28+M=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.0/go.mod h1:Ct6zzQEuGK3WpJs2n4dn+wfJYzd/+hNnxMRTWjGn30M=
go.opentelemetry.io/otel v1.20.0 h1:vsb/ggIY+hUjD/zCAQHpzTmndPqv/ml2ArbsbfBYTAc=
@@ -198,20 +208,19 @@ go.opentelemetry.io/otel/trace v1.20.0/go.mod h1:HJSK7F/hA5RlzpZ0zKDCHCDHm556LCD
go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I=
go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.17.0 h1:MTjgFu6ZLKvY6Pvaqk97GlxNBuMpV4Hy/3P6tRGlI2U=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -229,31 +238,34 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.21.0 h1:AQyQV4dYCvJ7vGmJyKki9+PBdyvhkSd8EIx/qb0AYv4=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.16.0 h1:aDkGMBSYxElaoP81NpoUoz2oo2R2wHdZpGToUxfyQrQ=
golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI=
golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190130150945-aca44879d564/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -271,30 +283,32 @@ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8T
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro=
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 h1:Lj5rbfG876hIAYFjqiJnPHfhXbv+nzTWfm04Fg/XSVU=
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80/go.mod h1:4jWUdICTdgc3Ibxmr8nAJiiLHwQBY0UI0XZcEMaFKaA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80 h1:AjyfHzEPEFp/NpvfN5g+KDla3EMojjhRVZc1i7cj+oM=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s=
google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de h1:F6qOa9AZTYJXOUEr4jDysRDLrm4PHePlge4v4TGAlxY=
google.golang.org/genproto v0.0.0-20240227224415-6ceb2ff114de/go.mod h1:VUhTRKeHn9wwcdrk73nvdC9gF178Tzhmt/qyaFcPLSo=
google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237 h1:RFiFrvy37/mpSpdySBDrUdipW/dHwsRwh3J3+A9VgT4=
google.golang.org/genproto/googleapis/api v0.0.0-20240318140521-94a12d6c2237/go.mod h1:Z5Iiy3jtmioajWHDGFk7CeugTyHtPvMHA4UTmUkyalE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237 h1:NnYq6UN9ReLM9/Y01KWNOWyI5xQ9kbIms5GGJVwS/Yc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240318140521-94a12d6c2237/go.mod h1:WtryC6hu0hhx87FDGxWCDptyssuo68sk10vYjF+T9fY=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.62.1 h1:B4n+nfKzOICUXMgyrNd19h/I9oH0L1pizfk1d4zSgTk=
google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=
google.golang.org/grpc v1.64.0 h1:KH3VH9y/MgNQg1dE7b3XfVK0GsPSIzJwdF617gUSbvY=
google.golang.org/grpc v1.64.0/go.mod h1:oxjF8E3FBnjp+/gVFYdWacaLDx9na1aqy9oovLpxQYg=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.3.0 h1:rNBFJjBCOgVr9pWD7rs/knKL4FRTKgpZmsRfV214zcA=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.3.0/go.mod h1:Dk1tviKTvMCz5tvh7t+fh94dhmQVHuCt2OzJB3CTW9Y=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -303,7 +317,6 @@ gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=


@@ -24,7 +24,9 @@ package signaling
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"net"
"net/url"
@@ -37,6 +39,8 @@ import (
clientv3 "go.etcd.io/etcd/client/v3"
"google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/resolver"
status "google.golang.org/grpc/status"
)
@@ -49,6 +53,8 @@
)
var (
ErrNoSuchResumeId = fmt.Errorf("unknown resume id")
customResolverPrefix atomic.Uint64
)
@@ -136,9 +142,9 @@ func NewGrpcClient(target string, ip net.IP, opts ...grpc.DialOption) (*GrpcClie
hostname: hostname,
}
opts = append(opts, grpc.WithResolvers(resolver))
conn, err = grpc.Dial(fmt.Sprintf("%s://%s", resolver.Scheme(), target), opts...)
conn, err = grpc.NewClient(fmt.Sprintf("%s://%s", resolver.Scheme(), target), opts...)
} else {
conn, err = grpc.Dial(target, opts...)
conn, err = grpc.NewClient(target, opts...)
}
if err != nil {
return nil, err
@@ -183,6 +189,26 @@ func (c *GrpcClient) GetServerId(ctx context.Context) (string, error) {
return response.GetServerId(), nil
}
func (c *GrpcClient) LookupResumeId(ctx context.Context, resumeId string) (*LookupResumeIdReply, error) {
statsGrpcClientCalls.WithLabelValues("LookupResumeId").Inc()
// TODO: Remove debug logging
log.Printf("Lookup resume id %s on %s", resumeId, c.Target())
response, err := c.impl.LookupResumeId(ctx, &LookupResumeIdRequest{
ResumeId: resumeId,
}, grpc.WaitForReady(true))
if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
return nil, ErrNoSuchResumeId
} else if err != nil {
return nil, err
}
if sessionId := response.GetSessionId(); sessionId == "" {
return nil, ErrNoSuchResumeId
}
return response, nil
}
func (c *GrpcClient) LookupSessionId(ctx context.Context, roomSessionId string, disconnectReason string) (string, error) {
statsGrpcClientCalls.WithLabelValues("LookupSessionId").Inc()
// TODO: Remove debug logging
@@ -256,6 +282,86 @@ func (c *GrpcClient) GetSessionCount(ctx context.Context, u *url.URL) (uint32, e
return response.GetCount(), nil
}
type ProxySessionReceiver interface {
RemoteAddr() string
Country() string
UserAgent() string
OnProxyMessage(message *ServerSessionMessage) error
OnProxyClose(err error)
}
type SessionProxy struct {
sessionId string
receiver ProxySessionReceiver
sendMu sync.Mutex
client RpcSessions_ProxySessionClient
}
func (p *SessionProxy) recvPump() {
var closeError error
defer func() {
p.receiver.OnProxyClose(closeError)
if err := p.Close(); err != nil {
log.Printf("Error closing proxy for session %s: %s", p.sessionId, err)
}
}()
for {
msg, err := p.client.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break
}
log.Printf("Error receiving message from proxy for session %s: %s", p.sessionId, err)
closeError = err
break
}
if err := p.receiver.OnProxyMessage(msg); err != nil {
log.Printf("Error processing message %+v from proxy for session %s: %s", msg, p.sessionId, err)
}
}
}
func (p *SessionProxy) Send(message *ClientSessionMessage) error {
p.sendMu.Lock()
defer p.sendMu.Unlock()
return p.client.Send(message)
}
func (p *SessionProxy) Close() error {
p.sendMu.Lock()
defer p.sendMu.Unlock()
return p.client.CloseSend()
}
func (c *GrpcClient) ProxySession(ctx context.Context, sessionId string, receiver ProxySessionReceiver) (*SessionProxy, error) {
statsGrpcClientCalls.WithLabelValues("ProxySession").Inc()
md := metadata.Pairs(
"sessionId", sessionId,
"remoteAddr", receiver.RemoteAddr(),
"country", receiver.Country(),
"userAgent", receiver.UserAgent(),
)
client, err := c.impl.ProxySession(metadata.NewOutgoingContext(ctx, md), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
proxy := &SessionProxy{
sessionId: sessionId,
receiver: receiver,
client: client,
}
go proxy.recvPump()
return proxy, nil
}
type grpcClientsList struct {
clients []*GrpcClient
entry *DnsMonitorEntry
@@ -274,21 +380,27 @@ type GrpcClients struct {
targetPrefix string
targetInformation map[string]*GrpcTargetInformationEtcd
dialOptions atomic.Value // []grpc.DialOption
creds credentials.TransportCredentials
initializedCtx context.Context
initializedFunc context.CancelFunc
initializedWg sync.WaitGroup
wakeupChanForTesting chan struct{}
selfCheckWaitGroup sync.WaitGroup
closeCtx context.Context
closeFunc context.CancelFunc
}
func NewGrpcClients(config *goconf.ConfigFile, etcdClient *EtcdClient, dnsMonitor *DnsMonitor) (*GrpcClients, error) {
initializedCtx, initializedFunc := context.WithCancel(context.Background())
closeCtx, closeFunc := context.WithCancel(context.Background())
result := &GrpcClients{
dnsMonitor: dnsMonitor,
etcdClient: etcdClient,
initializedCtx: initializedCtx,
initializedFunc: initializedFunc,
closeCtx: closeCtx,
closeFunc: closeFunc,
}
if err := result.load(config, false); err != nil {
return nil, err
@@ -302,6 +414,13 @@ func (c *GrpcClients) load(config *goconf.ConfigFile, fromReload bool) error {
return err
}
if c.creds != nil {
if cr, ok := c.creds.(*reloadableCredentials); ok {
cr.Close()
}
}
c.creds = creds
opts := []grpc.DialOption{grpc.WithTransportCredentials(creds)}
c.dialOptions.Store(opts)
@@ -375,6 +494,10 @@ loop:
id, err := c.getServerIdWithTimeout(ctx, client)
if err != nil {
if errors.Is(err, context.Canceled) {
return
}
if status.Code(err) != codes.Canceled {
log.Printf("Error checking GRPC server id of %s, retrying in %s: %s", client.Target(), backoff.NextWait(), err)
}
@@ -474,12 +597,13 @@ func (c *GrpcClients) loadTargetsStatic(config *goconf.ConfigFile, fromReload bo
}
c.selfCheckWaitGroup.Add(1)
go c.checkIsSelf(context.Background(), target, client)
go c.checkIsSelf(c.closeCtx, target, client)
log.Printf("Adding %s as GRPC target", client.Target())
entry, found := clientsMap[target]
if !found {
entry = &grpcClientsList{}
clientsMap[target] = entry
}
entry.clients = append(entry.clients, client)
clients = append(clients, client)
@@ -548,7 +672,7 @@ func (c *GrpcClients) onLookup(entry *DnsMonitorEntry, all []net.IP, added []net
}
c.selfCheckWaitGroup.Add(1)
go c.checkIsSelf(context.Background(), target, client)
go c.checkIsSelf(c.closeCtx, target, client)
log.Printf("Adding %s as GRPC target", client.Target())
newClients = append(newClients, client)
@@ -586,54 +710,72 @@ func (c *GrpcClients) loadTargetsEtcd(config *goconf.ConfigFile, fromReload bool
}
func (c *GrpcClients) EtcdClientCreated(client *EtcdClient) {
c.initializedWg.Add(1)
go func() {
if err := client.Watch(context.Background(), c.targetPrefix, c, clientv3.WithPrefix()); err != nil {
log.Printf("Error processing watch for %s: %s", c.targetPrefix, err)
}
}()
if err := client.WaitForConnection(c.closeCtx); err != nil {
if errors.Is(err, context.Canceled) {
return
}
go func() {
if err := client.WaitForConnection(context.Background()); err != nil {
panic(err)
}
backoff, _ := NewExponentialBackoff(initialWaitDelay, maxWaitDelay)
for {
response, err := c.getGrpcTargets(client, c.targetPrefix)
var nextRevision int64
for c.closeCtx.Err() == nil {
response, err := c.getGrpcTargets(c.closeCtx, client, c.targetPrefix)
if err != nil {
if err == context.DeadlineExceeded {
if errors.Is(err, context.Canceled) {
return
} else if errors.Is(err, context.DeadlineExceeded) {
log.Printf("Timeout getting initial list of GRPC targets, retry in %s", backoff.NextWait())
} else {
log.Printf("Could not get initial list of GRPC targets, retry in %s: %s", backoff.NextWait(), err)
}
backoff.Wait(context.Background())
backoff.Wait(c.closeCtx)
continue
}
for _, ev := range response.Kvs {
c.EtcdKeyUpdated(client, string(ev.Key), ev.Value)
c.EtcdKeyUpdated(client, string(ev.Key), ev.Value, nil)
}
c.initializedWg.Wait()
c.initializedFunc()
return
nextRevision = response.Header.Revision + 1
break
}
prevRevision := nextRevision
backoff.Reset()
for c.closeCtx.Err() == nil {
var err error
if nextRevision, err = client.Watch(c.closeCtx, c.targetPrefix, nextRevision, c, clientv3.WithPrefix()); err != nil {
log.Printf("Error processing watch for %s (%s), retry in %s", c.targetPrefix, err, backoff.NextWait())
backoff.Wait(c.closeCtx)
continue
}
if nextRevision != prevRevision {
backoff.Reset()
prevRevision = nextRevision
} else {
log.Printf("Processing watch for %s interrupted, retry in %s", c.targetPrefix, backoff.NextWait())
backoff.Wait(c.closeCtx)
}
}
}()
}
func (c *GrpcClients) EtcdWatchCreated(client *EtcdClient, key string) {
c.initializedWg.Done()
}
func (c *GrpcClients) getGrpcTargets(client *EtcdClient, targetPrefix string) (*clientv3.GetResponse, error) {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
func (c *GrpcClients) getGrpcTargets(ctx context.Context, client *EtcdClient, targetPrefix string) (*clientv3.GetResponse, error) {
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
return client.Get(ctx, targetPrefix, clientv3.WithPrefix())
}
func (c *GrpcClients) EtcdKeyUpdated(client *EtcdClient, key string, data []byte) {
func (c *GrpcClients) EtcdKeyUpdated(client *EtcdClient, key string, data []byte, prevValue []byte) {
var info GrpcTargetInformationEtcd
if err := json.Unmarshal(data, &info); err != nil {
log.Printf("Could not decode GRPC target %s=%s: %s", key, string(data), err)
@@ -666,7 +808,7 @@ func (c *GrpcClients) EtcdKeyUpdated(client *EtcdClient, key string, data []byte
}
c.selfCheckWaitGroup.Add(1)
go c.checkIsSelf(context.Background(), info.Address, cl)
go c.checkIsSelf(c.closeCtx, info.Address, cl)
log.Printf("Adding %s as GRPC target", cl.Target())
@@ -682,7 +824,7 @@ func (c *GrpcClients) EtcdKeyUpdated(client *EtcdClient, key string, data []byte
c.wakeupForTesting()
}
func (c *GrpcClients) EtcdKeyDeleted(client *EtcdClient, key string) {
func (c *GrpcClients) EtcdKeyDeleted(client *EtcdClient, key string, prevValue []byte) {
c.mu.Lock()
defer c.mu.Unlock()
@@ -766,6 +908,12 @@ func (c *GrpcClients) Close() {
if c.etcdClient != nil {
c.etcdClient.RemoveListener(c)
}
if c.creds != nil {
if cr, ok := c.creds.(*reloadableCredentials); ok {
cr.Close()
}
}
c.closeFunc()
}
func (c *GrpcClients) GetClients() []*GrpcClient {


@@ -112,27 +112,32 @@ func waitForEvent(ctx context.Context, t *testing.T, ch <-chan struct{}) {
}
func Test_GrpcClients_EtcdInitial(t *testing.T) {
_, addr1 := NewGrpcServerForTest(t)
_, addr2 := NewGrpcServerForTest(t)
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
_, addr1 := NewGrpcServerForTest(t)
_, addr2 := NewGrpcServerForTest(t)
etcd := NewEtcdForTest(t)
etcd := NewEtcdForTest(t)
SetEtcdValue(etcd, "/grpctargets/one", []byte("{\"address\":\""+addr1+"\"}"))
SetEtcdValue(etcd, "/grpctargets/two", []byte("{\"address\":\""+addr2+"\"}"))
SetEtcdValue(etcd, "/grpctargets/one", []byte("{\"address\":\""+addr1+"\"}"))
SetEtcdValue(etcd, "/grpctargets/two", []byte("{\"address\":\""+addr2+"\"}"))
client, _ := NewGrpcClientsWithEtcdForTest(t, etcd)
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if err := client.WaitForInitialized(ctx); err != nil {
t.Fatal(err)
}
client, _ := NewGrpcClientsWithEtcdForTest(t, etcd)
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if err := client.WaitForInitialized(ctx); err != nil {
t.Fatal(err)
}
if clients := client.GetClients(); len(clients) != 2 {
t.Errorf("Expected two clients, got %+v", clients)
}
if clients := client.GetClients(); len(clients) != 2 {
t.Errorf("Expected two clients, got %+v", clients)
}
})
}
func Test_GrpcClients_EtcdUpdate(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
etcd := NewEtcdForTest(t)
client, _ := NewGrpcClientsWithEtcdForTest(t, etcd)
ch := client.getWakeupChannelForTesting()
@@ -187,6 +192,8 @@ func Test_GrpcClients_EtcdUpdate(t *testing.T) {
}
func Test_GrpcClients_EtcdIgnoreSelf(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
etcd := NewEtcdForTest(t)
client, _ := NewGrpcClientsWithEtcdForTest(t, etcd)
ch := client.getWakeupChannelForTesting()
@@ -231,60 +238,65 @@ func Test_GrpcClients_EtcdIgnoreSelf(t *testing.T) {
}
func Test_GrpcClients_DnsDiscovery(t *testing.T) {
lookup := newMockDnsLookupForTest(t)
target := "testgrpc:12345"
ip1 := net.ParseIP("192.168.0.1")
ip2 := net.ParseIP("192.168.0.2")
targetWithIp1 := fmt.Sprintf("%s (%s)", target, ip1)
targetWithIp2 := fmt.Sprintf("%s (%s)", target, ip2)
lookup.Set("testgrpc", []net.IP{ip1})
client, dnsMonitor := NewGrpcClientsForTest(t, target)
ch := client.getWakeupChannelForTesting()
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
lookup := newMockDnsLookupForTest(t)
target := "testgrpc:12345"
ip1 := net.ParseIP("192.168.0.1")
ip2 := net.ParseIP("192.168.0.2")
targetWithIp1 := fmt.Sprintf("%s (%s)", target, ip1)
targetWithIp2 := fmt.Sprintf("%s (%s)", target, ip2)
lookup.Set("testgrpc", []net.IP{ip1})
client, dnsMonitor := NewGrpcClientsForTest(t, target)
ch := client.getWakeupChannelForTesting()
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
defer cancel()
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
defer cancel()
dnsMonitor.checkHostnames()
if clients := client.GetClients(); len(clients) != 1 {
t.Errorf("Expected one client, got %+v", clients)
} else if clients[0].Target() != targetWithIp1 {
t.Errorf("Expected target %s, got %s", targetWithIp1, clients[0].Target())
} else if !clients[0].ip.Equal(ip1) {
t.Errorf("Expected IP %s, got %s", ip1, clients[0].ip)
}
dnsMonitor.checkHostnames()
if clients := client.GetClients(); len(clients) != 1 {
t.Errorf("Expected one client, got %+v", clients)
} else if clients[0].Target() != targetWithIp1 {
t.Errorf("Expected target %s, got %s", targetWithIp1, clients[0].Target())
} else if !clients[0].ip.Equal(ip1) {
t.Errorf("Expected IP %s, got %s", ip1, clients[0].ip)
}
lookup.Set("testgrpc", []net.IP{ip1, ip2})
drainWakeupChannel(ch)
dnsMonitor.checkHostnames()
waitForEvent(ctx, t, ch)
lookup.Set("testgrpc", []net.IP{ip1, ip2})
drainWakeupChannel(ch)
dnsMonitor.checkHostnames()
waitForEvent(ctx, t, ch)
if clients := client.GetClients(); len(clients) != 2 {
t.Errorf("Expected two clients, got %+v", clients)
} else if clients[0].Target() != targetWithIp1 {
t.Errorf("Expected target %s, got %s", targetWithIp1, clients[0].Target())
} else if !clients[0].ip.Equal(ip1) {
t.Errorf("Expected IP %s, got %s", ip1, clients[0].ip)
} else if clients[1].Target() != targetWithIp2 {
t.Errorf("Expected target %s, got %s", targetWithIp2, clients[1].Target())
} else if !clients[1].ip.Equal(ip2) {
t.Errorf("Expected IP %s, got %s", ip2, clients[1].ip)
}
if clients := client.GetClients(); len(clients) != 2 {
t.Errorf("Expected two clients, got %+v", clients)
} else if clients[0].Target() != targetWithIp1 {
t.Errorf("Expected target %s, got %s", targetWithIp1, clients[0].Target())
} else if !clients[0].ip.Equal(ip1) {
t.Errorf("Expected IP %s, got %s", ip1, clients[0].ip)
} else if clients[1].Target() != targetWithIp2 {
t.Errorf("Expected target %s, got %s", targetWithIp2, clients[1].Target())
} else if !clients[1].ip.Equal(ip2) {
t.Errorf("Expected IP %s, got %s", ip2, clients[1].ip)
}
lookup.Set("testgrpc", []net.IP{ip2})
drainWakeupChannel(ch)
dnsMonitor.checkHostnames()
waitForEvent(ctx, t, ch)
lookup.Set("testgrpc", []net.IP{ip2})
drainWakeupChannel(ch)
dnsMonitor.checkHostnames()
waitForEvent(ctx, t, ch)
if clients := client.GetClients(); len(clients) != 1 {
t.Errorf("Expected one client, got %+v", clients)
} else if clients[0].Target() != targetWithIp2 {
t.Errorf("Expected target %s, got %s", targetWithIp2, clients[0].Target())
} else if !clients[0].ip.Equal(ip2) {
t.Errorf("Expected IP %s, got %s", ip2, clients[0].ip)
}
if clients := client.GetClients(); len(clients) != 1 {
t.Errorf("Expected one client, got %+v", clients)
} else if clients[0].Target() != targetWithIp2 {
t.Errorf("Expected target %s, got %s", targetWithIp2, clients[0].Target())
} else if !clients[0].ip.Equal(ip2) {
t.Errorf("Expected IP %s, got %s", ip2, clients[0].ip)
}
})
}
func Test_GrpcClients_DnsDiscoveryInitialFailed(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
lookup := newMockDnsLookupForTest(t)
target := "testgrpc:12345"
ip1 := net.ParseIP("192.168.0.1")
@@ -320,55 +332,58 @@ func Test_GrpcClients_DnsDiscoveryInitialFailed(t *testing.T) {
}
func Test_GrpcClients_Encryption(t *testing.T) {
serverKey, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)
}
clientKey, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)
}
serverCert := GenerateSelfSignedCertificateForTesting(t, 1024, "Server cert", serverKey)
clientCert := GenerateSelfSignedCertificateForTesting(t, 1024, "Testing client", clientKey)
dir := t.TempDir()
serverPrivkeyFile := path.Join(dir, "server-privkey.pem")
serverPubkeyFile := path.Join(dir, "server-pubkey.pem")
serverCertFile := path.Join(dir, "server-cert.pem")
WritePrivateKey(serverKey, serverPrivkeyFile) // nolint
WritePublicKey(&serverKey.PublicKey, serverPubkeyFile) // nolint
os.WriteFile(serverCertFile, serverCert, 0755) // nolint
clientPrivkeyFile := path.Join(dir, "client-privkey.pem")
clientPubkeyFile := path.Join(dir, "client-pubkey.pem")
clientCertFile := path.Join(dir, "client-cert.pem")
WritePrivateKey(clientKey, clientPrivkeyFile) // nolint
WritePublicKey(&clientKey.PublicKey, clientPubkeyFile) // nolint
os.WriteFile(clientCertFile, clientCert, 0755) // nolint
serverConfig := goconf.NewConfigFile()
serverConfig.AddOption("grpc", "servercertificate", serverCertFile)
serverConfig.AddOption("grpc", "serverkey", serverPrivkeyFile)
serverConfig.AddOption("grpc", "clientca", clientCertFile)
_, addr := NewGrpcServerForTestWithConfig(t, serverConfig)
clientConfig := goconf.NewConfigFile()
clientConfig.AddOption("grpc", "targets", addr)
clientConfig.AddOption("grpc", "clientcertificate", clientCertFile)
clientConfig.AddOption("grpc", "clientkey", clientPrivkeyFile)
clientConfig.AddOption("grpc", "serverca", serverCertFile)
clients, _ := NewGrpcClientsForTestWithConfig(t, clientConfig, nil)
ctx, cancel1 := context.WithTimeout(context.Background(), time.Second)
defer cancel1()
if err := clients.WaitForInitialized(ctx); err != nil {
t.Fatal(err)
}
for _, client := range clients.GetClients() {
if _, err := client.GetServerId(ctx); err != nil {
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
serverKey, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)
}
}
clientKey, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)
}
serverCert := GenerateSelfSignedCertificateForTesting(t, 1024, "Server cert", serverKey)
clientCert := GenerateSelfSignedCertificateForTesting(t, 1024, "Testing client", clientKey)
dir := t.TempDir()
serverPrivkeyFile := path.Join(dir, "server-privkey.pem")
serverPubkeyFile := path.Join(dir, "server-pubkey.pem")
serverCertFile := path.Join(dir, "server-cert.pem")
WritePrivateKey(serverKey, serverPrivkeyFile) // nolint
WritePublicKey(&serverKey.PublicKey, serverPubkeyFile) // nolint
os.WriteFile(serverCertFile, serverCert, 0755) // nolint
clientPrivkeyFile := path.Join(dir, "client-privkey.pem")
clientPubkeyFile := path.Join(dir, "client-pubkey.pem")
clientCertFile := path.Join(dir, "client-cert.pem")
WritePrivateKey(clientKey, clientPrivkeyFile) // nolint
WritePublicKey(&clientKey.PublicKey, clientPubkeyFile) // nolint
os.WriteFile(clientCertFile, clientCert, 0755) // nolint
serverConfig := goconf.NewConfigFile()
serverConfig.AddOption("grpc", "servercertificate", serverCertFile)
serverConfig.AddOption("grpc", "serverkey", serverPrivkeyFile)
serverConfig.AddOption("grpc", "clientca", clientCertFile)
_, addr := NewGrpcServerForTestWithConfig(t, serverConfig)
clientConfig := goconf.NewConfigFile()
clientConfig.AddOption("grpc", "targets", addr)
clientConfig.AddOption("grpc", "clientcertificate", clientCertFile)
clientConfig.AddOption("grpc", "clientkey", clientPrivkeyFile)
clientConfig.AddOption("grpc", "serverca", serverCertFile)
clients, _ := NewGrpcClientsForTestWithConfig(t, clientConfig, nil)
ctx, cancel1 := context.WithTimeout(context.Background(), time.Second)
defer cancel1()
if err := clients.WaitForInitialized(ctx); err != nil {
t.Fatal(err)
}
for _, client := range clients.GetClients() {
if _, err := client.GetServerId(ctx); err != nil {
t.Fatal(err)
}
}
})
}


@@ -125,6 +125,15 @@ func (c *reloadableCredentials) OverrideServerName(serverName string) error {
return nil
}
func (c *reloadableCredentials) Close() {
if c.loader != nil {
c.loader.Close()
}
if c.pool != nil {
c.pool.Close()
}
}
func NewReloadableCredentials(config *goconf.ConfigFile, server bool) (credentials.TransportCredentials, error) {
var prefix string
var caPrefix string

grpc_remote_client.go Normal file

@@ -0,0 +1,229 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2024 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"sync/atomic"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/status"
)
const (
grpcRemoteClientMessageQueue = 16
)
func getMD(md metadata.MD, key string) string {
if values := md.Get(key); len(values) > 0 {
return values[0]
}
return ""
}
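getMD returns the first value for a metadata key, or the empty string when the key is absent. The same pattern on a plain map; `mdMap` below stands in for grpc's `metadata.MD`, which has the same underlying shape (unlike `metadata.MD.Get`, this sketch does not normalize key case):

```go
package main

import "fmt"

// mdMap stands in for google.golang.org/grpc/metadata.MD, which is a
// map from header key to a list of values.
type mdMap map[string][]string

// firstValue mirrors getMD: first value for the key, or "".
func firstValue(m mdMap, key string) string {
	if values := m[key]; len(values) > 0 {
		return values[0]
	}
	return ""
}

func main() {
	m := mdMap{"sessionid": {"abc123"}}
	fmt.Println(firstValue(m, "sessionid"))     // abc123
	fmt.Println(firstValue(m, "country") == "") // true: missing key
}
```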
// remoteGrpcClient is a remote client connecting from a GRPC proxy to a Hub.
type remoteGrpcClient struct {
hub *Hub
client RpcSessions_ProxySessionServer
sessionId string
remoteAddr string
country string
userAgent string
closeCtx context.Context
closeFunc context.CancelCauseFunc
session atomic.Pointer[Session]
messages chan WritableClientMessage
}
func newRemoteGrpcClient(hub *Hub, request RpcSessions_ProxySessionServer) (*remoteGrpcClient, error) {
md, found := metadata.FromIncomingContext(request.Context())
if !found {
return nil, errors.New("no metadata provided")
}
closeCtx, closeFunc := context.WithCancelCause(context.Background())
result := &remoteGrpcClient{
hub: hub,
client: request,
sessionId: getMD(md, "sessionId"),
remoteAddr: getMD(md, "remoteAddr"),
country: getMD(md, "country"),
userAgent: getMD(md, "userAgent"),
closeCtx: closeCtx,
closeFunc: closeFunc,
messages: make(chan WritableClientMessage, grpcRemoteClientMessageQueue),
}
return result, nil
}
func (c *remoteGrpcClient) readPump() {
var closeError error
defer func() {
c.closeFunc(closeError)
c.hub.OnClosed(c)
}()
for {
msg, err := c.client.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
// Connection was closed locally.
break
}
if status.Code(err) != codes.Canceled {
log.Printf("Error reading from remote client for session %s: %s", c.sessionId, err)
closeError = err
}
break
}
c.hub.OnMessageReceived(c, msg.Message)
}
}
func (c *remoteGrpcClient) Context() context.Context {
return c.client.Context()
}
func (c *remoteGrpcClient) RemoteAddr() string {
return c.remoteAddr
}
func (c *remoteGrpcClient) UserAgent() string {
return c.userAgent
}
func (c *remoteGrpcClient) Country() string {
return c.country
}
func (c *remoteGrpcClient) IsConnected() bool {
return true
}
func (c *remoteGrpcClient) IsAuthenticated() bool {
return c.GetSession() != nil
}
func (c *remoteGrpcClient) GetSession() Session {
session := c.session.Load()
if session == nil {
return nil
}
return *session
}
func (c *remoteGrpcClient) SetSession(session Session) {
if session == nil {
c.session.Store(nil)
} else {
c.session.Store(&session)
}
}
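Session is an interface, and atomic.Pointer[T] needs a concrete type parameter, so the client stores a *Session (a pointer to the interface value) and dereferences it on load; storing nil clears it. A self-contained sketch of the same pattern, with illustrative stand-in types:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Session is a stand-in for the server's Session interface.
type Session interface{ PublicId() string }

type fakeSession struct{ id string }

func (s *fakeSession) PublicId() string { return s.id }

// sessionHolder mirrors the GetSession/SetSession pattern: an atomic
// pointer to an interface value, nil meaning "not authenticated".
type sessionHolder struct {
	session atomic.Pointer[Session]
}

func (h *sessionHolder) Get() Session {
	if s := h.session.Load(); s != nil {
		return *s
	}
	return nil
}

func (h *sessionHolder) Set(s Session) {
	if s == nil {
		h.session.Store(nil)
	} else {
		h.session.Store(&s)
	}
}

func main() {
	var h sessionHolder
	fmt.Println(h.Get() == nil) // true: no session yet
	h.Set(&fakeSession{id: "s1"})
	fmt.Println(h.Get().PublicId()) // s1
}
```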
func (c *remoteGrpcClient) SendError(e *Error) bool {
message := &ServerMessage{
Type: "error",
Error: e,
}
return c.SendMessage(message)
}
func (c *remoteGrpcClient) SendByeResponse(message *ClientMessage) bool {
return c.SendByeResponseWithReason(message, "")
}
func (c *remoteGrpcClient) SendByeResponseWithReason(message *ClientMessage, reason string) bool {
response := &ServerMessage{
Type: "bye",
}
if message != nil {
response.Id = message.Id
}
if reason != "" {
if response.Bye == nil {
response.Bye = &ByeServerMessage{}
}
response.Bye.Reason = reason
}
return c.SendMessage(response)
}
func (c *remoteGrpcClient) SendMessage(message WritableClientMessage) bool {
if c.closeCtx.Err() != nil {
return false
}
select {
case c.messages <- message:
return true
default:
log.Printf("Message queue for remote client of session %s is full, not sending %+v", c.sessionId, message)
return false
}
}
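SendMessage refuses to block the caller: the select with a default branch drops the message (and logs) when the buffered queue is full. The core pattern in isolation:

```go
package main

import "fmt"

// trySend enqueues msg without blocking: if the buffered channel is
// already full it returns false instead of stalling the caller, which
// is how SendMessage protects the hub from a slow remote client.
func trySend(ch chan string, msg string) bool {
	select {
	case ch <- msg:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan string, 1) // queue of size 1 for the demo
	fmt.Println(trySend(ch, "first"))  // true: buffer had room
	fmt.Println(trySend(ch, "second")) // false: buffer full, dropped
}
```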
func (c *remoteGrpcClient) Close() {
c.closeFunc(nil)
}
func (c *remoteGrpcClient) run() error {
go c.readPump()
for {
select {
case <-c.closeCtx.Done():
if err := context.Cause(c.closeCtx); err != context.Canceled {
return err
}
return nil
case msg := <-c.messages:
data, err := json.Marshal(msg)
if err != nil {
log.Printf("Error marshalling %+v for remote client for session %s: %s", msg, c.sessionId, err)
continue
}
if err := c.client.Send(&ServerSessionMessage{
Message: data,
}); err != nil {
return fmt.Errorf("error sending %+v to remote client for session %s: %w", msg, c.sessionId, err)
}
}
}
}


@@ -55,6 +55,14 @@ func init() {
GrpcServerId = hex.EncodeToString(md.Sum(nil))
}
type GrpcServerHub interface {
GetSessionByResumeId(resumeId string) Session
GetSessionByPublicId(sessionId string) Session
GetSessionIdByRoomSessionId(roomSessionId string) (string, error)
GetBackend(u *url.URL) *Backend
}
type GrpcServer struct {
UnimplementedRpcBackendServer
UnimplementedRpcInternalServer
@@ -66,7 +74,7 @@ type GrpcServer struct {
listener net.Listener
serverId string // can be overwritten from tests
hub *Hub
hub GrpcServerHub
}
func NewGrpcServer(config *goconf.ConfigFile) (*GrpcServer, error) {
@@ -108,13 +116,30 @@ func (s *GrpcServer) Run() error {
func (s *GrpcServer) Close() {
s.conn.GracefulStop()
if cr, ok := s.creds.(*reloadableCredentials); ok {
cr.Close()
}
}
func (s *GrpcServer) LookupResumeId(ctx context.Context, request *LookupResumeIdRequest) (*LookupResumeIdReply, error) {
statsGrpcServerCalls.WithLabelValues("LookupResumeId").Inc()
// TODO: Remove debug logging
log.Printf("Lookup session for resume id %s", request.ResumeId)
session := s.hub.GetSessionByResumeId(request.ResumeId)
if session == nil {
return nil, status.Error(codes.NotFound, "no such resume id")
}
return &LookupResumeIdReply{
SessionId: session.PublicId(),
}, nil
}
func (s *GrpcServer) LookupSessionId(ctx context.Context, request *LookupSessionIdRequest) (*LookupSessionIdReply, error) {
statsGrpcServerCalls.WithLabelValues("LookupSessionId").Inc()
// TODO: Remove debug logging
log.Printf("Lookup session id for room session id %s", request.RoomSessionId)
sid, err := s.hub.roomSessions.GetSessionId(request.RoomSessionId)
sid, err := s.hub.GetSessionIdByRoomSessionId(request.RoomSessionId)
if errors.Is(err, ErrNoSuchRoomSession) {
return nil, status.Error(codes.NotFound, "no such room session id")
} else if err != nil {
@@ -204,7 +229,7 @@ func (s *GrpcServer) GetSessionCount(ctx context.Context, request *GetSessionCou
return nil, status.Error(codes.InvalidArgument, "invalid url")
}
backend := s.hub.backend.GetBackend(u)
backend := s.hub.GetBackend(u)
if backend == nil {
return nil, status.Error(codes.NotFound, "no such backend")
}
@@ -213,3 +238,21 @@ func (s *GrpcServer) GetSessionCount(ctx context.Context, request *GetSessionCou
Count: uint32(backend.Len()),
}, nil
}
func (s *GrpcServer) ProxySession(request RpcSessions_ProxySessionServer) error {
statsGrpcServerCalls.WithLabelValues("ProxySession").Inc()
hub, ok := s.hub.(*Hub)
if !ok {
return status.Error(codes.Internal, "invalid hub type")
}
client, err := newRemoteGrpcClient(hub, request)
if err != nil {
return err
}
sid := hub.registerClient(client)
defer hub.unregisterClient(sid)
return client.run()
}


@@ -98,6 +98,7 @@ func NewGrpcServerForTest(t *testing.T) (server *GrpcServer, addr string) {
}
func Test_GrpcServer_ReloadCerts(t *testing.T) {
CatchLogForTest(t)
key, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)
@@ -178,6 +179,7 @@ func Test_GrpcServer_ReloadCerts(t *testing.T) {
}
func Test_GrpcServer_ReloadCA(t *testing.T) {
CatchLogForTest(t)
serverKey, err := rsa.GenerateKey(rand.Reader, 1024)
if err != nil {
t.Fatal(err)


@@ -26,8 +26,18 @@ option go_package = "github.com/strukturag/nextcloud-spreed-signaling;signaling"
package signaling;
service RpcSessions {
rpc LookupResumeId(LookupResumeIdRequest) returns (LookupResumeIdReply) {}
rpc LookupSessionId(LookupSessionIdRequest) returns (LookupSessionIdReply) {}
rpc IsSessionInCall(IsSessionInCallRequest) returns (IsSessionInCallReply) {}
rpc ProxySession(stream ClientSessionMessage) returns (stream ServerSessionMessage) {}
}
message LookupResumeIdRequest {
string resumeId = 1;
}
message LookupResumeIdReply {
string sessionId = 1;
}
message LookupSessionIdRequest {
@@ -49,3 +59,11 @@ message IsSessionInCallRequest {
message IsSessionInCallReply {
bool inCall = 1;
}
message ClientSessionMessage {
bytes message = 1;
}
message ServerSessionMessage {
bytes message = 1;
}


@@ -29,6 +29,7 @@ import (
)
func TestHttpClientPool(t *testing.T) {
t.Parallel()
if _, err := NewHttpClientPool(0, false); err == nil {
t.Error("should not be possible to create empty pool")
}

560
hub.go

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -258,8 +258,8 @@ type JanusGateway struct {
// return gateway, nil
// }
func NewJanusGateway(wsURL string, listener GatewayListener) (*JanusGateway, error) {
conn, _, err := janusDialer.Dial(wsURL, nil)
func NewJanusGateway(ctx context.Context, wsURL string, listener GatewayListener) (*JanusGateway, error) {
conn, _, err := janusDialer.DialContext(ctx, wsURL, nil)
if err != nil {
return nil, err
}
@@ -310,7 +310,7 @@ func (gateway *JanusGateway) cancelTransactions() {
t.quit()
}(t)
}
gateway.transactions = make(map[uint64]*transaction)
clear(gateway.transactions)
gateway.Unlock()
}
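The switch from make(map[uint64]*transaction) to clear(gateway.transactions) uses the Go 1.21 clear builtin: it deletes every entry in place and keeps the existing allocation, so other references to the same map stay valid. In isolation:

```go
package main

import "fmt"

// clearMap empties a map in place with the Go 1.21 "clear" builtin,
// unlike re-assigning with make(), which would leave any other
// reference still pointing at the old, populated map.
func clearMap(transactions map[uint64]string) {
	clear(transactions)
}

func main() {
	transactions := map[uint64]string{1: "create", 2: "attach"}
	alias := transactions // second reference to the same map
	clearMap(transactions)
	fmt.Println(len(alias)) // 0: the alias sees the cleared map too
	transactions[3] = "message"
	fmt.Println(len(alias)) // 1: still the same underlying map
}
```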


@@ -66,7 +66,7 @@ type McuInitiator interface {
}
type Mcu interface {
Start() error
Start(ctx context.Context) error
Stop()
Reload(config *goconf.ConfigFile)
@@ -76,7 +76,48 @@ type Mcu interface {
GetStats() interface{}
NewPublisher(ctx context.Context, listener McuListener, id string, sid string, streamType StreamType, bitrate int, mediaTypes MediaType, initiator McuInitiator) (McuPublisher, error)
NewSubscriber(ctx context.Context, listener McuListener, publisher string, streamType StreamType) (McuSubscriber, error)
NewSubscriber(ctx context.Context, listener McuListener, publisher string, streamType StreamType, initiator McuInitiator) (McuSubscriber, error)
}
// PublisherStream contains the available properties when creating a
// remote publisher in Janus.
type PublisherStream struct {
Mid string `json:"mid"`
Mindex int `json:"mindex"`
Type string `json:"type"`
Description string `json:"description,omitempty"`
Disabled bool `json:"disabled,omitempty"`
// For types "audio" and "video"
Codec string `json:"codec,omitempty"`
// For type "audio"
Stereo bool `json:"stereo,omitempty"`
Fec bool `json:"fec,omitempty"`
Dtx bool `json:"dtx,omitempty"`
// For type "video"
Simulcast bool `json:"simulcast,omitempty"`
Svc bool `json:"svc,omitempty"`
ProfileH264 string `json:"h264_profile,omitempty"`
ProfileVP9 string `json:"vp9_profile,omitempty"`
ExtIdVideoOrientation int `json:"videoorient_ext_id,omitempty"`
ExtIdPlayoutDelay int `json:"playoutdelay_ext_id,omitempty"`
}
type RemotePublisherController interface {
PublisherId() string
StartPublishing(ctx context.Context, publisher McuRemotePublisherProperties) error
GetStreams(ctx context.Context) ([]PublisherStream, error)
}
type RemoteMcu interface {
NewRemotePublisher(ctx context.Context, listener McuListener, controller RemotePublisherController, streamType StreamType) (McuRemotePublisher, error)
NewRemoteSubscriber(ctx context.Context, listener McuListener, publisher McuRemotePublisher) (McuRemoteSubscriber, error)
}
type StreamType string
@@ -116,6 +157,10 @@ type McuPublisher interface {
HasMedia(MediaType) bool
SetMedia(MediaType)
GetStreams(ctx context.Context) ([]PublisherStream, error)
PublishRemote(ctx context.Context, remoteId string, hostname string, port int, rtcpPort int) error
UnpublishRemote(ctx context.Context, remoteId string) error
}
type McuSubscriber interface {
@@ -123,3 +168,18 @@ type McuSubscriber interface {
Publisher() string
}
type McuRemotePublisherProperties interface {
Port() int
RtcpPort() int
}
type McuRemotePublisher interface {
McuClient
McuRemotePublisherProperties
}
type McuRemoteSubscriber interface {
McuSubscriber
}


@@ -28,3 +28,43 @@ import (
func TestCommonMcuStats(t *testing.T) {
collectAndLint(t, commonMcuStats...)
}
type MockMcuListener struct {
publicId string
}
func (m *MockMcuListener) PublicId() string {
return m.publicId
}
func (m *MockMcuListener) OnUpdateOffer(client McuClient, offer map[string]interface{}) {
}
func (m *MockMcuListener) OnIceCandidate(client McuClient, candidate interface{}) {
}
func (m *MockMcuListener) OnIceCompleted(client McuClient) {
}
func (m *MockMcuListener) SubscriberSidUpdated(subscriber McuSubscriber) {
}
func (m *MockMcuListener) PublisherClosed(publisher McuPublisher) {
}
func (m *MockMcuListener) SubscriberClosed(subscriber McuSubscriber) {
}
type MockMcuInitiator struct {
country string
}
func (m *MockMcuInitiator) Country() string {
return m.country
}

File diff suppressed because it is too large

mcu_janus_client.go Normal file

@@ -0,0 +1,216 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2017 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"log"
"reflect"
"strconv"
"sync"
"github.com/notedit/janus-go"
)
type mcuJanusClient struct {
mcu *mcuJanus
listener McuListener
mu sync.Mutex // nolint
id uint64
session uint64
roomId uint64
sid string
streamType StreamType
maxBitrate int
handle *JanusHandle
handleId uint64
closeChan chan struct{}
deferred chan func()
handleEvent func(event *janus.EventMsg)
handleHangup func(event *janus.HangupMsg)
handleDetached func(event *janus.DetachedMsg)
handleConnected func(event *janus.WebRTCUpMsg)
handleSlowLink func(event *janus.SlowLinkMsg)
handleMedia func(event *janus.MediaMsg)
}
func (c *mcuJanusClient) Id() string {
return strconv.FormatUint(c.id, 10)
}
func (c *mcuJanusClient) Sid() string {
return c.sid
}
func (c *mcuJanusClient) StreamType() StreamType {
return c.streamType
}
func (c *mcuJanusClient) MaxBitrate() int {
return c.maxBitrate
}
func (c *mcuJanusClient) Close(ctx context.Context) {
}
func (c *mcuJanusClient) SendMessage(ctx context.Context, message *MessageClientMessage, data *MessageClientMessageData, callback func(error, map[string]interface{})) {
}
func (c *mcuJanusClient) closeClient(ctx context.Context) bool {
if handle := c.handle; handle != nil {
c.handle = nil
close(c.closeChan)
if _, err := handle.Detach(ctx); err != nil {
if e, ok := err.(*janus.ErrorMsg); !ok || e.Err.Code != JANUS_ERROR_HANDLE_NOT_FOUND {
log.Println("Could not detach client", handle.Id, err)
}
}
return true
}
return false
}
func (c *mcuJanusClient) run(handle *JanusHandle, closeChan <-chan struct{}) {
loop:
for {
select {
case msg := <-handle.Events:
switch t := msg.(type) {
case *janus.EventMsg:
c.handleEvent(t)
case *janus.HangupMsg:
c.handleHangup(t)
case *janus.DetachedMsg:
c.handleDetached(t)
case *janus.MediaMsg:
c.handleMedia(t)
case *janus.WebRTCUpMsg:
c.handleConnected(t)
case *janus.SlowLinkMsg:
c.handleSlowLink(t)
case *TrickleMsg:
c.handleTrickle(t)
default:
log.Println("Received unsupported event type", msg, reflect.TypeOf(msg))
}
case f := <-c.deferred:
f()
case <-closeChan:
break loop
}
}
}
func (c *mcuJanusClient) sendOffer(ctx context.Context, offer map[string]interface{}, callback func(error, map[string]interface{})) {
handle := c.handle
if handle == nil {
callback(ErrNotConnected, nil)
return
}
configure_msg := map[string]interface{}{
"request": "configure",
"audio": true,
"video": true,
"data": true,
}
answer_msg, err := handle.Message(ctx, configure_msg, offer)
if err != nil {
callback(err, nil)
return
}
callback(nil, answer_msg.Jsep)
}
func (c *mcuJanusClient) sendAnswer(ctx context.Context, answer map[string]interface{}, callback func(error, map[string]interface{})) {
handle := c.handle
if handle == nil {
callback(ErrNotConnected, nil)
return
}
start_msg := map[string]interface{}{
"request": "start",
"room": c.roomId,
}
start_response, err := handle.Message(ctx, start_msg, answer)
if err != nil {
callback(err, nil)
return
}
log.Println("Started listener", start_response)
callback(nil, nil)
}
func (c *mcuJanusClient) sendCandidate(ctx context.Context, candidate interface{}, callback func(error, map[string]interface{})) {
handle := c.handle
if handle == nil {
callback(ErrNotConnected, nil)
return
}
if _, err := handle.Trickle(ctx, candidate); err != nil {
callback(err, nil)
return
}
callback(nil, nil)
}
func (c *mcuJanusClient) handleTrickle(event *TrickleMsg) {
if event.Candidate.Completed {
c.listener.OnIceCompleted(c)
} else {
c.listener.OnIceCandidate(c, event.Candidate)
}
}
func (c *mcuJanusClient) selectStream(ctx context.Context, stream *streamSelection, callback func(error, map[string]interface{})) {
handle := c.handle
if handle == nil {
callback(ErrNotConnected, nil)
return
}
if stream == nil || !stream.HasValues() {
callback(nil, nil)
return
}
configure_msg := map[string]interface{}{
"request": "configure",
}
stream.AddToMessage(configure_msg)
_, err := handle.Message(ctx, configure_msg, nil)
if err != nil {
callback(err, nil)
return
}
callback(nil, nil)
}
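The event loop above serializes work by pushing closures into the `deferred` channel and executing them from a single goroutine. A minimal standalone sketch of that pattern (names like `deferredRunner` are illustrative, not from the repository):

```go
package main

import "fmt"

// deferredRunner mirrors the pattern used by mcuJanusClient: callers push
// closures into a channel and one goroutine executes them in order,
// serializing access to shared state without explicit locks.
type deferredRunner struct {
	deferred chan func()
	closed   chan struct{}
}

func newDeferredRunner() *deferredRunner {
	r := &deferredRunner{
		deferred: make(chan func(), 64),
		closed:   make(chan struct{}),
	}
	go r.run()
	return r
}

func (r *deferredRunner) run() {
	for {
		select {
		case f := <-r.deferred:
			f()
		case <-r.closed:
			return
		}
	}
}

func main() {
	r := newDeferredRunner()
	done := make(chan int)
	r.deferred <- func() { done <- 42 }
	fmt.Println(<-done) // prints 42 once the runner executed the closure
	close(r.closed)
}
```

Because only the runner goroutine invokes the closures, fields touched exclusively from deferred functions need no additional synchronization.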

mcu_janus_publisher.go (new file)

@ -0,0 +1,457 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2017 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"errors"
"fmt"
"log"
"strconv"
"strings"
"sync/atomic"
"github.com/notedit/janus-go"
"github.com/pion/sdp/v3"
)
const (
ExtensionUrlPlayoutDelay = "http://www.webrtc.org/experiments/rtp-hdrext/playout-delay"
ExtensionUrlVideoOrientation = "urn:3gpp:video-orientation"
)
const (
sdpHasOffer = 1
sdpHasAnswer = 2
)
type mcuJanusPublisher struct {
mcuJanusClient
id string
bitrate int
mediaTypes MediaType
stats publisherStatsCounter
sdpFlags Flags
sdpReady *Closer
offerSdp atomic.Pointer[sdp.SessionDescription]
answerSdp atomic.Pointer[sdp.SessionDescription]
}
func (p *mcuJanusPublisher) handleEvent(event *janus.EventMsg) {
if videoroom := getPluginStringValue(event.Plugindata, pluginVideoRoom, "videoroom"); videoroom != "" {
ctx := context.TODO()
switch videoroom {
case "destroyed":
log.Printf("Publisher %d: associated room has been destroyed, closing", p.handleId)
go p.Close(ctx)
case "slow_link":
// Ignore, processed through "handleSlowLink" in the general events.
default:
log.Printf("Unsupported videoroom publisher event in %d: %+v", p.handleId, event)
}
} else {
log.Printf("Unsupported publisher event in %d: %+v", p.handleId, event)
}
}
func (p *mcuJanusPublisher) handleHangup(event *janus.HangupMsg) {
log.Printf("Publisher %d received hangup (%s), closing", p.handleId, event.Reason)
go p.Close(context.Background())
}
func (p *mcuJanusPublisher) handleDetached(event *janus.DetachedMsg) {
log.Printf("Publisher %d received detached, closing", p.handleId)
go p.Close(context.Background())
}
func (p *mcuJanusPublisher) handleConnected(event *janus.WebRTCUpMsg) {
log.Printf("Publisher %d received connected", p.handleId)
p.mcu.publisherConnected.Notify(getStreamId(p.id, p.streamType))
}
func (p *mcuJanusPublisher) handleSlowLink(event *janus.SlowLinkMsg) {
if event.Uplink {
log.Printf("Publisher %s (%d) is reporting %d lost packets on the uplink (Janus -> client)", p.listener.PublicId(), p.handleId, event.Lost)
} else {
log.Printf("Publisher %s (%d) is reporting %d lost packets on the downlink (client -> Janus)", p.listener.PublicId(), p.handleId, event.Lost)
}
}
func (p *mcuJanusPublisher) handleMedia(event *janus.MediaMsg) {
mediaType := StreamType(event.Type)
if mediaType == StreamTypeVideo && p.streamType == StreamTypeScreen {
// We want to differentiate between audio, video and screensharing
mediaType = p.streamType
}
p.stats.EnableStream(mediaType, event.Receiving)
}
func (p *mcuJanusPublisher) HasMedia(mt MediaType) bool {
return (p.mediaTypes & mt) == mt
}
func (p *mcuJanusPublisher) SetMedia(mt MediaType) {
p.mediaTypes = mt
}
func (p *mcuJanusPublisher) NotifyReconnected() {
ctx := context.TODO()
handle, session, roomId, _, err := p.mcu.getOrCreatePublisherHandle(ctx, p.id, p.streamType, p.bitrate)
if err != nil {
log.Printf("Could not reconnect publisher %s: %s", p.id, err)
// TODO(jojo): Retry
return
}
p.handle = handle
p.handleId = handle.Id
p.session = session
p.roomId = roomId
log.Printf("Publisher %s reconnected on handle %d", p.id, p.handleId)
}
func (p *mcuJanusPublisher) Close(ctx context.Context) {
notify := false
p.mu.Lock()
if handle := p.handle; handle != nil && p.roomId != 0 {
destroy_msg := map[string]interface{}{
"request": "destroy",
"room": p.roomId,
}
if _, err := handle.Request(ctx, destroy_msg); err != nil {
log.Printf("Error destroying room %d: %s", p.roomId, err)
} else {
log.Printf("Room %d destroyed", p.roomId)
}
p.mcu.mu.Lock()
delete(p.mcu.publishers, getStreamId(p.id, p.streamType))
p.mcu.mu.Unlock()
p.roomId = 0
notify = true
}
p.closeClient(ctx)
p.mu.Unlock()
p.stats.Reset()
if notify {
statsPublishersCurrent.WithLabelValues(string(p.streamType)).Dec()
p.mcu.unregisterClient(p)
p.listener.PublisherClosed(p)
}
p.mcuJanusClient.Close(ctx)
}
func (p *mcuJanusPublisher) SendMessage(ctx context.Context, message *MessageClientMessage, data *MessageClientMessageData, callback func(error, map[string]interface{})) {
statsMcuMessagesTotal.WithLabelValues(data.Type).Inc()
jsep_msg := data.Payload
switch data.Type {
case "offer":
p.deferred <- func() {
if data.offerSdp == nil {
// Should have been checked before.
go callback(errors.New("No sdp found in offer"), nil)
return
}
p.offerSdp.Store(data.offerSdp)
p.sdpFlags.Add(sdpHasOffer)
if p.sdpFlags.Get() == sdpHasAnswer|sdpHasOffer {
p.sdpReady.Close()
}
// TODO Tear down previous publisher and get a new one if sid does
// not match?
msgctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
p.sendOffer(msgctx, jsep_msg, func(err error, jsep map[string]interface{}) {
if err != nil {
callback(err, jsep)
return
}
sdpData, found := jsep["sdp"]
if !found {
log.Printf("No sdp found in answer %+v", jsep)
} else {
sdpString, ok := sdpData.(string)
if !ok {
log.Printf("Invalid sdp found in answer %+v", jsep)
} else {
var answerSdp sdp.SessionDescription
if err := answerSdp.UnmarshalString(sdpString); err != nil {
log.Printf("Error parsing answer sdp %+v: %s", sdpString, err)
p.answerSdp.Store(nil)
p.sdpFlags.Remove(sdpHasAnswer)
} else {
p.answerSdp.Store(&answerSdp)
p.sdpFlags.Add(sdpHasAnswer)
if p.sdpFlags.Get() == sdpHasAnswer|sdpHasOffer {
p.sdpReady.Close()
}
}
}
}
callback(nil, jsep)
})
}
case "candidate":
p.deferred <- func() {
msgctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
if data.Sid == "" || data.Sid == p.Sid() {
p.sendCandidate(msgctx, jsep_msg["candidate"], callback)
} else {
go callback(fmt.Errorf("Candidate message sid (%s) does not match publisher sid (%s)", data.Sid, p.Sid()), nil)
}
}
case "endOfCandidates":
// Ignore
default:
go callback(fmt.Errorf("Unsupported message type: %s", data.Type), nil)
}
}
func getFmtpValue(fmtp string, key string) (string, bool) {
parts := strings.Split(fmtp, ";")
for _, part := range parts {
kv := strings.SplitN(part, "=", 2)
if len(kv) != 2 {
continue
}
if strings.EqualFold(strings.TrimSpace(kv[0]), key) {
return strings.TrimSpace(kv[1]), true
}
}
return "", false
}
func (p *mcuJanusPublisher) GetStreams(ctx context.Context) ([]PublisherStream, error) {
offerSdp := p.offerSdp.Load()
answerSdp := p.answerSdp.Load()
if offerSdp == nil || answerSdp == nil {
select {
case <-ctx.Done():
return nil, ctx.Err()
case <-p.sdpReady.C:
offerSdp = p.offerSdp.Load()
answerSdp = p.answerSdp.Load()
if offerSdp == nil || answerSdp == nil {
// Can only happen with invalid SDPs.
return nil, errors.New("no offer and/or answer processed yet")
}
}
}
var streams []PublisherStream
for idx, m := range answerSdp.MediaDescriptions {
mid, found := m.Attribute(sdp.AttrKeyMID)
if !found {
continue
}
s := PublisherStream{
Mid: mid,
Mindex: idx,
Type: m.MediaName.Media,
}
if len(m.MediaName.Formats) == 0 {
continue
}
if strings.EqualFold(s.Type, "application") && strings.EqualFold(m.MediaName.Formats[0], "webrtc-datachannel") {
s.Type = "data"
streams = append(streams, s)
continue
}
pt, err := strconv.ParseInt(m.MediaName.Formats[0], 10, 8)
if err != nil {
continue
}
answerCodec, err := answerSdp.GetCodecForPayloadType(uint8(pt))
if err != nil {
continue
}
if strings.EqualFold(s.Type, "audio") {
s.Codec = answerCodec.Name
if value, found := getFmtpValue(answerCodec.Fmtp, "useinbandfec"); found && value == "1" {
s.Fec = true
}
if value, found := getFmtpValue(answerCodec.Fmtp, "usedtx"); found && value == "1" {
s.Dtx = true
}
if value, found := getFmtpValue(answerCodec.Fmtp, "stereo"); found && value == "1" {
s.Stereo = true
}
} else if strings.EqualFold(s.Type, "video") {
s.Codec = answerCodec.Name
// TODO: Determine if SVC is used.
s.Svc = false
if strings.EqualFold(answerCodec.Name, "vp9") {
// Parse VP9 profile from "profile-id=XXX"
// Example: "a=fmtp:98 profile-id=0"
if profile, found := getFmtpValue(answerCodec.Fmtp, "profile-id"); found {
s.ProfileVP9 = profile
}
} else if strings.EqualFold(answerCodec.Name, "h264") {
// Parse H.264 profile from "profile-level-id=XXX"
// Example: "a=fmtp:104 level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=42001f"
if profile, found := getFmtpValue(answerCodec.Fmtp, "profile-level-id"); found {
s.ProfileH264 = profile
}
}
var extmap sdp.ExtMap
for _, a := range m.Attributes {
switch a.Key {
case sdp.AttrKeyExtMap:
if err := extmap.Unmarshal(extmap.Name() + ":" + a.Value); err != nil {
log.Printf("Error parsing extmap %s: %s", a.Value, err)
continue
}
switch extmap.URI.String() {
case ExtensionUrlPlayoutDelay:
s.ExtIdPlayoutDelay = extmap.Value
case ExtensionUrlVideoOrientation:
s.ExtIdVideoOrientation = extmap.Value
}
case "simulcast":
s.Simulcast = true
case sdp.AttrKeySSRCGroup:
if strings.HasPrefix(a.Value, "SIM ") {
s.Simulcast = true
}
}
}
for _, a := range offerSdp.MediaDescriptions[idx].Attributes {
switch a.Key {
case "simulcast":
s.Simulcast = true
case sdp.AttrKeySSRCGroup:
if strings.HasPrefix(a.Value, "SIM ") {
s.Simulcast = true
}
}
}
} else if strings.EqualFold(s.Type, "data") { // nolint
// Already handled above.
} else {
log.Printf("Skip type %s", s.Type)
continue
}
streams = append(streams, s)
}
return streams, nil
}
func getPublisherRemoteId(id string, remoteId string) string {
return fmt.Sprintf("%s@%s", id, remoteId)
}
func (p *mcuJanusPublisher) PublishRemote(ctx context.Context, remoteId string, hostname string, port int, rtcpPort int) error {
msg := map[string]interface{}{
"request": "publish_remotely",
"room": p.roomId,
"publisher_id": streamTypeUserIds[p.streamType],
"remote_id": getPublisherRemoteId(p.id, remoteId),
"host": hostname,
"port": port,
"rtcp_port": rtcpPort,
}
response, err := p.handle.Request(ctx, msg)
if err != nil {
return err
}
errorMessage := getPluginStringValue(response.PluginData, pluginVideoRoom, "error")
errorCode := getPluginIntValue(response.PluginData, pluginVideoRoom, "error_code")
if errorMessage != "" || errorCode != 0 {
if errorCode == 0 {
errorCode = 500
}
if errorMessage == "" {
errorMessage = "unknown error"
}
return &janus.ErrorMsg{
Err: janus.ErrorData{
Code: int(errorCode),
Reason: errorMessage,
},
}
}
log.Printf("Publishing %s to %s (port=%d, rtcpPort=%d) for %s", p.id, hostname, port, rtcpPort, remoteId)
return nil
}
func (p *mcuJanusPublisher) UnpublishRemote(ctx context.Context, remoteId string) error {
msg := map[string]interface{}{
"request": "unpublish_remotely",
"room": p.roomId,
"publisher_id": streamTypeUserIds[p.streamType],
"remote_id": getPublisherRemoteId(p.id, remoteId),
}
response, err := p.handle.Request(ctx, msg)
if err != nil {
return err
}
errorMessage := getPluginStringValue(response.PluginData, pluginVideoRoom, "error")
errorCode := getPluginIntValue(response.PluginData, pluginVideoRoom, "error_code")
if errorMessage != "" || errorCode != 0 {
if errorCode == 0 {
errorCode = 500
}
if errorMessage == "" {
errorMessage = "unknown error"
}
return &janus.ErrorMsg{
Err: janus.ErrorData{
Code: int(errorCode),
Reason: errorMessage,
},
}
}
log.Printf("Unpublished remote %s for %s", p.id, remoteId)
return nil
}


@ -0,0 +1,92 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2024 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"testing"
)
func TestGetFmtpValueH264(t *testing.T) {
testcases := []struct {
fmtp string
profile string
}{
{
"",
"",
},
{
"level-asymmetry-allowed=1;packetization-mode=0;profile-level-id=42001f",
"42001f",
},
{
"level-asymmetry-allowed=1;packetization-mode=0",
"",
},
{
"level-asymmetry-allowed=1; packetization-mode=0; profile-level-id = 42001f",
"42001f",
},
}
for _, tc := range testcases {
value, found := getFmtpValue(tc.fmtp, "profile-level-id")
if !found && tc.profile != "" {
t.Errorf("did not find profile \"%s\" in \"%s\"", tc.profile, tc.fmtp)
} else if found && tc.profile == "" {
t.Errorf("did not expect profile in \"%s\" but got \"%s\"", tc.fmtp, value)
} else if found && tc.profile != value {
t.Errorf("expected profile \"%s\" in \"%s\" but got \"%s\"", tc.profile, tc.fmtp, value)
}
}
}
func TestGetFmtpValueVP9(t *testing.T) {
testcases := []struct {
fmtp string
profile string
}{
{
"",
"",
},
{
"profile-id=0",
"0",
},
{
"profile-id = 0",
"0",
},
}
for _, tc := range testcases {
value, found := getFmtpValue(tc.fmtp, "profile-id")
if !found && tc.profile != "" {
t.Errorf("did not find profile \"%s\" in \"%s\"", tc.profile, tc.fmtp)
} else if found && tc.profile == "" {
t.Errorf("did not expect profile in \"%s\" but got \"%s\"", tc.fmtp, value)
} else if found && tc.profile != value {
t.Errorf("expected profile \"%s\" in \"%s\" but got \"%s\"", tc.profile, tc.fmtp, value)
}
}
}


@ -0,0 +1,150 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2024 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"log"
"sync/atomic"
"github.com/notedit/janus-go"
)
type mcuJanusRemotePublisher struct {
mcuJanusPublisher
ref atomic.Int64
port int
rtcpPort int
}
func (p *mcuJanusRemotePublisher) addRef() int64 {
return p.ref.Add(1)
}
func (p *mcuJanusRemotePublisher) release() bool {
return p.ref.Add(-1) == 0
}
func (p *mcuJanusRemotePublisher) Port() int {
return p.port
}
func (p *mcuJanusRemotePublisher) RtcpPort() int {
return p.rtcpPort
}
func (p *mcuJanusRemotePublisher) handleEvent(event *janus.EventMsg) {
if videoroom := getPluginStringValue(event.Plugindata, pluginVideoRoom, "videoroom"); videoroom != "" {
ctx := context.TODO()
switch videoroom {
case "destroyed":
log.Printf("Remote publisher %d: associated room has been destroyed, closing", p.handleId)
go p.Close(ctx)
case "slow_link":
// Ignore, processed through "handleSlowLink" in the general events.
default:
log.Printf("Unsupported videoroom remote publisher event in %d: %+v", p.handleId, event)
}
} else {
log.Printf("Unsupported remote publisher event in %d: %+v", p.handleId, event)
}
}
func (p *mcuJanusRemotePublisher) handleHangup(event *janus.HangupMsg) {
log.Printf("Remote publisher %d received hangup (%s), closing", p.handleId, event.Reason)
go p.Close(context.Background())
}
func (p *mcuJanusRemotePublisher) handleDetached(event *janus.DetachedMsg) {
log.Printf("Remote publisher %d received detached, closing", p.handleId)
go p.Close(context.Background())
}
func (p *mcuJanusRemotePublisher) handleConnected(event *janus.WebRTCUpMsg) {
log.Printf("Remote publisher %d received connected", p.handleId)
p.mcu.publisherConnected.Notify(getStreamId(p.id, p.streamType))
}
func (p *mcuJanusRemotePublisher) handleSlowLink(event *janus.SlowLinkMsg) {
if event.Uplink {
log.Printf("Remote publisher %s (%d) is reporting %d lost packets on the uplink (Janus -> client)", p.listener.PublicId(), p.handleId, event.Lost)
} else {
log.Printf("Remote publisher %s (%d) is reporting %d lost packets on the downlink (client -> Janus)", p.listener.PublicId(), p.handleId, event.Lost)
}
}
func (p *mcuJanusRemotePublisher) NotifyReconnected() {
ctx := context.TODO()
handle, session, roomId, _, err := p.mcu.getOrCreatePublisherHandle(ctx, p.id, p.streamType, p.bitrate)
if err != nil {
log.Printf("Could not reconnect remote publisher %s: %s", p.id, err)
// TODO(jojo): Retry
return
}
p.handle = handle
p.handleId = handle.Id
p.session = session
p.roomId = roomId
log.Printf("Remote publisher %s reconnected on handle %d", p.id, p.handleId)
}
func (p *mcuJanusRemotePublisher) Close(ctx context.Context) {
if !p.release() {
return
}
p.mu.Lock()
if handle := p.handle; handle != nil {
response, err := p.handle.Request(ctx, map[string]interface{}{
"request": "remove_remote_publisher",
"room": p.roomId,
"id": streamTypeUserIds[p.streamType],
})
if err != nil {
log.Printf("Error removing remote publisher %s in room %d: %s", p.id, p.roomId, err)
} else {
log.Printf("Removed remote publisher: %+v", response)
}
if p.roomId != 0 {
destroy_msg := map[string]interface{}{
"request": "destroy",
"room": p.roomId,
}
if _, err := handle.Request(ctx, destroy_msg); err != nil {
log.Printf("Error destroying room %d: %s", p.roomId, err)
} else {
log.Printf("Room %d destroyed", p.roomId)
}
p.mcu.mu.Lock()
delete(p.mcu.remotePublishers, getStreamId(p.id, p.streamType))
p.mcu.mu.Unlock()
p.roomId = 0
}
}
p.closeClient(ctx)
p.mu.Unlock()
}
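The `addRef`/`release` pair above implements a simple atomic reference count: each consumer takes a reference, and only the caller that drops the last one performs the teardown in `Close`. A minimal standalone sketch of the idiom (the `refCounted` type is illustrative, not from the repository):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// refCounted mirrors the pattern of mcuJanusRemotePublisher: the object is
// shared by several holders and torn down only when the last one releases it.
type refCounted struct {
	ref atomic.Int64
}

func (r *refCounted) addRef() int64 { return r.ref.Add(1) }

// release reports whether the caller dropped the last reference and is
// therefore responsible for cleanup.
func (r *refCounted) release() bool { return r.ref.Add(-1) == 0 }

func main() {
	var r refCounted
	r.addRef()
	r.addRef()
	fmt.Println(r.release()) // false: one holder remains
	fmt.Println(r.release()) // true: last holder triggers cleanup
}
```

Since `atomic.Int64.Add` is atomic, concurrent holders can call `release` without a mutex; exactly one of them observes the count reach zero.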


@ -0,0 +1,115 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2024 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"log"
"strconv"
"sync/atomic"
"github.com/notedit/janus-go"
)
type mcuJanusRemoteSubscriber struct {
mcuJanusSubscriber
remote atomic.Pointer[mcuJanusRemotePublisher]
}
func (p *mcuJanusRemoteSubscriber) handleEvent(event *janus.EventMsg) {
if videoroom := getPluginStringValue(event.Plugindata, pluginVideoRoom, "videoroom"); videoroom != "" {
ctx := context.TODO()
switch videoroom {
case "destroyed":
log.Printf("Remote subscriber %d: associated room has been destroyed, closing", p.handleId)
go p.Close(ctx)
case "event":
// Handle renegotiations, but ignore other events like selected
// substream / temporal layer.
if getPluginStringValue(event.Plugindata, pluginVideoRoom, "configured") == "ok" &&
event.Jsep != nil && event.Jsep["type"] == "offer" && event.Jsep["sdp"] != nil {
p.listener.OnUpdateOffer(p, event.Jsep)
}
case "slow_link":
// Ignore, processed through "handleSlowLink" in the general events.
default:
log.Printf("Unsupported videoroom event %s for remote subscriber %d: %+v", videoroom, p.handleId, event)
}
} else {
log.Printf("Unsupported event for remote subscriber %d: %+v", p.handleId, event)
}
}
func (p *mcuJanusRemoteSubscriber) handleHangup(event *janus.HangupMsg) {
log.Printf("Remote subscriber %d received hangup (%s), closing", p.handleId, event.Reason)
go p.Close(context.Background())
}
func (p *mcuJanusRemoteSubscriber) handleDetached(event *janus.DetachedMsg) {
log.Printf("Remote subscriber %d received detached, closing", p.handleId)
go p.Close(context.Background())
}
func (p *mcuJanusRemoteSubscriber) handleConnected(event *janus.WebRTCUpMsg) {
log.Printf("Remote subscriber %d received connected", p.handleId)
p.mcu.SubscriberConnected(p.Id(), p.publisher, p.streamType)
}
func (p *mcuJanusRemoteSubscriber) handleSlowLink(event *janus.SlowLinkMsg) {
if event.Uplink {
log.Printf("Remote subscriber %s (%d) is reporting %d lost packets on the uplink (Janus -> client)", p.listener.PublicId(), p.handleId, event.Lost)
} else {
log.Printf("Remote subscriber %s (%d) is reporting %d lost packets on the downlink (client -> Janus)", p.listener.PublicId(), p.handleId, event.Lost)
}
}
func (p *mcuJanusRemoteSubscriber) handleMedia(event *janus.MediaMsg) {
// Only triggered for publishers
}
func (p *mcuJanusRemoteSubscriber) NotifyReconnected() {
ctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
handle, pub, err := p.mcu.getOrCreateSubscriberHandle(ctx, p.publisher, p.streamType)
if err != nil {
// TODO(jojo): Retry?
log.Printf("Could not reconnect remote subscriber for publisher %s: %s", p.publisher, err)
p.Close(context.Background())
return
}
p.handle = handle
p.handleId = handle.Id
p.roomId = pub.roomId
p.sid = strconv.FormatUint(handle.Id, 10)
p.listener.SubscriberSidUpdated(p)
log.Printf("Subscriber %d for publisher %s reconnected on handle %d", p.id, p.publisher, p.handleId)
}
func (p *mcuJanusRemoteSubscriber) Close(ctx context.Context) {
p.mcuJanusSubscriber.Close(ctx)
if remote := p.remote.Swap(nil); remote != nil {
remote.Close(context.Background())
}
}


@ -0,0 +1,110 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2017 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"database/sql"
"fmt"
)
type streamSelection struct {
substream sql.NullInt16
temporal sql.NullInt16
audio sql.NullBool
video sql.NullBool
}
func (s *streamSelection) HasValues() bool {
return s.substream.Valid || s.temporal.Valid || s.audio.Valid || s.video.Valid
}
func (s *streamSelection) AddToMessage(message map[string]interface{}) {
if s.substream.Valid {
message["substream"] = s.substream.Int16
}
if s.temporal.Valid {
message["temporal"] = s.temporal.Int16
}
if s.audio.Valid {
message["audio"] = s.audio.Bool
}
if s.video.Valid {
message["video"] = s.video.Bool
}
}
func parseStreamSelection(payload map[string]interface{}) (*streamSelection, error) {
var stream streamSelection
if value, found := payload["substream"]; found {
switch value := value.(type) {
case int:
stream.substream.Valid = true
stream.substream.Int16 = int16(value)
case float32:
stream.substream.Valid = true
stream.substream.Int16 = int16(value)
case float64:
stream.substream.Valid = true
stream.substream.Int16 = int16(value)
default:
return nil, fmt.Errorf("Unsupported substream value: %v", value)
}
}
if value, found := payload["temporal"]; found {
switch value := value.(type) {
case int:
stream.temporal.Valid = true
stream.temporal.Int16 = int16(value)
case float32:
stream.temporal.Valid = true
stream.temporal.Int16 = int16(value)
case float64:
stream.temporal.Valid = true
stream.temporal.Int16 = int16(value)
default:
return nil, fmt.Errorf("Unsupported temporal value: %v", value)
}
}
if value, found := payload["audio"]; found {
switch value := value.(type) {
case bool:
stream.audio.Valid = true
stream.audio.Bool = value
default:
return nil, fmt.Errorf("Unsupported audio value: %v", value)
}
}
if value, found := payload["video"]; found {
switch value := value.(type) {
case bool:
stream.video.Valid = true
stream.video.Bool = value
default:
return nil, fmt.Errorf("Unsupported video value: %v", value)
}
}
return &stream, nil
}

mcu_janus_subscriber.go (new file)

@ -0,0 +1,321 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2017 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"fmt"
"log"
"strconv"
"github.com/notedit/janus-go"
)
type mcuJanusSubscriber struct {
mcuJanusClient
publisher string
}
func (p *mcuJanusSubscriber) Publisher() string {
return p.publisher
}
func (p *mcuJanusSubscriber) handleEvent(event *janus.EventMsg) {
if videoroom := getPluginStringValue(event.Plugindata, pluginVideoRoom, "videoroom"); videoroom != "" {
ctx := context.TODO()
switch videoroom {
case "destroyed":
log.Printf("Subscriber %d: associated room has been destroyed, closing", p.handleId)
go p.Close(ctx)
case "event":
// Handle renegotiations, but ignore other events like selected
// substream / temporal layer.
if getPluginStringValue(event.Plugindata, pluginVideoRoom, "configured") == "ok" &&
event.Jsep != nil && event.Jsep["type"] == "offer" && event.Jsep["sdp"] != nil {
p.listener.OnUpdateOffer(p, event.Jsep)
}
case "slow_link":
// Ignore, processed through "handleSlowLink" in the general events.
default:
log.Printf("Unsupported videoroom event %s for subscriber %d: %+v", videoroom, p.handleId, event)
}
} else {
log.Printf("Unsupported event for subscriber %d: %+v", p.handleId, event)
}
}
func (p *mcuJanusSubscriber) handleHangup(event *janus.HangupMsg) {
log.Printf("Subscriber %d received hangup (%s), closing", p.handleId, event.Reason)
go p.Close(context.Background())
}
func (p *mcuJanusSubscriber) handleDetached(event *janus.DetachedMsg) {
log.Printf("Subscriber %d received detached, closing", p.handleId)
go p.Close(context.Background())
}
func (p *mcuJanusSubscriber) handleConnected(event *janus.WebRTCUpMsg) {
log.Printf("Subscriber %d received connected", p.handleId)
p.mcu.SubscriberConnected(p.Id(), p.publisher, p.streamType)
}
func (p *mcuJanusSubscriber) handleSlowLink(event *janus.SlowLinkMsg) {
if event.Uplink {
log.Printf("Subscriber %s (%d) is reporting %d lost packets on the uplink (Janus -> client)", p.listener.PublicId(), p.handleId, event.Lost)
} else {
log.Printf("Subscriber %s (%d) is reporting %d lost packets on the downlink (client -> Janus)", p.listener.PublicId(), p.handleId, event.Lost)
}
}
func (p *mcuJanusSubscriber) handleMedia(event *janus.MediaMsg) {
// Only triggered for publishers
}
func (p *mcuJanusSubscriber) NotifyReconnected() {
ctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
handle, pub, err := p.mcu.getOrCreateSubscriberHandle(ctx, p.publisher, p.streamType)
if err != nil {
// TODO(jojo): Retry?
log.Printf("Could not reconnect subscriber for publisher %s: %s", p.publisher, err)
p.Close(context.Background())
return
}
p.handle = handle
p.handleId = handle.Id
p.roomId = pub.roomId
p.sid = strconv.FormatUint(handle.Id, 10)
p.listener.SubscriberSidUpdated(p)
log.Printf("Subscriber %d for publisher %s reconnected on handle %d", p.id, p.publisher, p.handleId)
}
func (p *mcuJanusSubscriber) Close(ctx context.Context) {
p.mu.Lock()
closed := p.closeClient(ctx)
p.mu.Unlock()
if closed {
p.mcu.SubscriberDisconnected(p.Id(), p.publisher, p.streamType)
statsSubscribersCurrent.WithLabelValues(string(p.streamType)).Dec()
}
p.mcu.unregisterClient(p)
p.listener.SubscriberClosed(p)
p.mcuJanusClient.Close(ctx)
}
func (p *mcuJanusSubscriber) joinRoom(ctx context.Context, stream *streamSelection, callback func(error, map[string]interface{})) {
handle := p.handle
if handle == nil {
callback(ErrNotConnected, nil)
return
}
waiter := p.mcu.publisherConnected.NewWaiter(getStreamId(p.publisher, p.streamType))
defer p.mcu.publisherConnected.Release(waiter)
loggedNotPublishingYet := false
retry:
join_msg := map[string]interface{}{
"request": "join",
"ptype": "subscriber",
"room": p.roomId,
}
if p.mcu.isMultistream() {
join_msg["streams"] = []map[string]interface{}{
{
"feed": streamTypeUserIds[p.streamType],
},
}
} else {
join_msg["feed"] = streamTypeUserIds[p.streamType]
}
if stream != nil {
stream.AddToMessage(join_msg)
}
join_response, err := handle.Message(ctx, join_msg, nil)
if err != nil {
callback(err, nil)
return
}
if error_code := getPluginIntValue(join_response.Plugindata, pluginVideoRoom, "error_code"); error_code > 0 {
switch error_code {
case JANUS_VIDEOROOM_ERROR_ALREADY_JOINED:
// The subscriber is already connected to the room. This can happen
// if a client leaves a call but keeps the subscriber objects active.
// On joining the call again, the subscriber tries to join on the
// MCU, which will fail because it is still connected.
// To get a new Offer SDP, we have to tear down the session on the
// MCU and join again.
p.mu.Lock()
p.closeClient(ctx)
p.mu.Unlock()
var pub *mcuJanusPublisher
handle, pub, err = p.mcu.getOrCreateSubscriberHandle(ctx, p.publisher, p.streamType)
if err != nil {
// Reconnection didn't work, need to unregister/remove subscriber
// so a new object will be created if the request is retried.
p.mcu.unregisterClient(p)
p.listener.SubscriberClosed(p)
callback(fmt.Errorf("Already connected as subscriber for %s, error during re-joining: %s", p.streamType, err), nil)
return
}
p.handle = handle
p.handleId = handle.Id
p.roomId = pub.roomId
p.sid = strconv.FormatUint(handle.Id, 10)
p.listener.SubscriberSidUpdated(p)
p.closeChan = make(chan struct{}, 1)
go p.run(p.handle, p.closeChan)
log.Printf("Already connected subscriber %d for %s, leaving and re-joining on handle %d", p.id, p.streamType, p.handleId)
goto retry
case JANUS_VIDEOROOM_ERROR_NO_SUCH_ROOM:
fallthrough
case JANUS_VIDEOROOM_ERROR_NO_SUCH_FEED:
switch error_code {
case JANUS_VIDEOROOM_ERROR_NO_SUCH_ROOM:
log.Printf("Publisher %s not created yet for %s, wait and retry to join room %d as subscriber", p.publisher, p.streamType, p.roomId)
case JANUS_VIDEOROOM_ERROR_NO_SUCH_FEED:
log.Printf("Publisher %s not sending yet for %s, wait and retry to join room %d as subscriber", p.publisher, p.streamType, p.roomId)
}
if !loggedNotPublishingYet {
loggedNotPublishingYet = true
statsWaitingForPublisherTotal.WithLabelValues(string(p.streamType)).Inc()
}
if err := waiter.Wait(ctx); err != nil {
callback(err, nil)
return
}
log.Printf("Retry subscribing %s from %s", p.streamType, p.publisher)
goto retry
default:
// TODO(jojo): Should we handle other errors, too?
callback(fmt.Errorf("Error joining room as subscriber: %+v", join_response), nil)
return
}
}
//log.Println("Joined as listener", join_response)
p.session = join_response.Session
callback(nil, join_response.Jsep)
}
func (p *mcuJanusSubscriber) update(ctx context.Context, stream *streamSelection, callback func(error, map[string]interface{})) {
handle := p.handle
if handle == nil {
callback(ErrNotConnected, nil)
return
}
configure_msg := map[string]interface{}{
"request": "configure",
"update": true,
}
if stream != nil {
stream.AddToMessage(configure_msg)
}
configure_response, err := handle.Message(ctx, configure_msg, nil)
if err != nil {
callback(err, nil)
return
}
callback(nil, configure_response.Jsep)
}
func (p *mcuJanusSubscriber) SendMessage(ctx context.Context, message *MessageClientMessage, data *MessageClientMessageData, callback func(error, map[string]interface{})) {
statsMcuMessagesTotal.WithLabelValues(data.Type).Inc()
jsep_msg := data.Payload
switch data.Type {
case "requestoffer":
fallthrough
case "sendoffer":
p.deferred <- func() {
msgctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
stream, err := parseStreamSelection(jsep_msg)
if err != nil {
go callback(err, nil)
return
}
if data.Sid == "" || data.Sid != p.Sid() {
p.joinRoom(msgctx, stream, callback)
} else {
p.update(msgctx, stream, callback)
}
}
case "answer":
p.deferred <- func() {
msgctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
if data.Sid == "" || data.Sid == p.Sid() {
p.sendAnswer(msgctx, jsep_msg, callback)
} else {
go callback(fmt.Errorf("Answer message sid (%s) does not match subscriber sid (%s)", data.Sid, p.Sid()), nil)
}
}
case "candidate":
p.deferred <- func() {
msgctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
if data.Sid == "" || data.Sid == p.Sid() {
p.sendCandidate(msgctx, jsep_msg["candidate"], callback)
} else {
go callback(fmt.Errorf("Candidate message sid (%s) does not match subscriber sid (%s)", data.Sid, p.Sid()), nil)
}
}
case "endOfCandidates":
// Ignore
case "selectStream":
stream, err := parseStreamSelection(jsep_msg)
if err != nil {
go callback(err, nil)
return
}
if stream == nil || !stream.HasValues() {
// Nothing to do
go callback(nil, nil)
return
}
p.deferred <- func() {
msgctx, cancel := context.WithTimeout(context.Background(), p.mcu.mcuTimeout)
defer cancel()
p.selectStream(msgctx, stream, callback)
}
default:
// Return error asynchronously
go callback(fmt.Errorf("Unsupported message type: %s", data.Type), nil)
}
}


@@ -162,6 +162,7 @@ func (p *mcuProxyPublisher) SetMedia(mt MediaType) {
}
func (p *mcuProxyPublisher) NotifyClosed() {
log.Printf("Publisher %s at %s was closed", p.proxyId, p.conn)
p.listener.PublisherClosed(p)
p.conn.removePublisher(p)
}
@@ -185,7 +186,7 @@ func (p *mcuProxyPublisher) Close(ctx context.Context) {
return
}
log.Printf("Delete publisher %s at %s", p.proxyId, p.conn)
log.Printf("Deleted publisher %s at %s", p.proxyId, p.conn)
}
func (p *mcuProxyPublisher) SendMessage(ctx context.Context, message *MessageClientMessage, data *MessageClientMessageData, callback func(error, map[string]interface{})) {
@@ -217,13 +218,26 @@ func (p *mcuProxyPublisher) ProcessEvent(msg *EventProxyServerMessage) {
}
}
func (p *mcuProxyPublisher) GetStreams(ctx context.Context) ([]PublisherStream, error) {
return nil, errors.New("not implemented")
}
func (p *mcuProxyPublisher) PublishRemote(ctx context.Context, remoteId string, hostname string, port int, rtcpPort int) error {
return errors.New("remote publishing not supported for proxy publishers")
}
func (p *mcuProxyPublisher) UnpublishRemote(ctx context.Context, remoteId string) error {
return errors.New("remote publishing not supported for proxy publishers")
}
type mcuProxySubscriber struct {
mcuProxyPubSubCommon
publisherId string
publisherId string
publisherConn *mcuProxyConnection
}
func newMcuProxySubscriber(publisherId string, sid string, streamType StreamType, maxBitrate int, proxyId string, conn *mcuProxyConnection, listener McuListener) *mcuProxySubscriber {
func newMcuProxySubscriber(publisherId string, sid string, streamType StreamType, maxBitrate int, proxyId string, conn *mcuProxyConnection, listener McuListener, publisherConn *mcuProxyConnection) *mcuProxySubscriber {
return &mcuProxySubscriber{
mcuProxyPubSubCommon: mcuProxyPubSubCommon{
sid: sid,
@@ -234,7 +248,8 @@ func newMcuProxySubscriber(publisherId string, sid string, streamType StreamType
listener: listener,
},
publisherId: publisherId,
publisherId: publisherId,
publisherConn: publisherConn,
}
}
@@ -243,6 +258,11 @@ func (s *mcuProxySubscriber) Publisher() string {
}
func (s *mcuProxySubscriber) NotifyClosed() {
if s.publisherConn != nil {
log.Printf("Remote subscriber %s at %s (forwarded to %s) was closed", s.proxyId, s.conn, s.publisherConn)
} else {
log.Printf("Subscriber %s at %s was closed", s.proxyId, s.conn)
}
s.listener.SubscriberClosed(s)
s.conn.removeSubscriber(s)
}
@@ -259,14 +279,26 @@ func (s *mcuProxySubscriber) Close(ctx context.Context) {
}
if response, err := s.conn.performSyncRequest(ctx, msg); err != nil {
log.Printf("Could not delete subscriber %s at %s: %s", s.proxyId, s.conn, err)
if s.publisherConn != nil {
log.Printf("Could not delete remote subscriber %s at %s (forwarded to %s): %s", s.proxyId, s.conn, s.publisherConn, err)
} else {
log.Printf("Could not delete subscriber %s at %s: %s", s.proxyId, s.conn, err)
}
return
} else if response.Type == "error" {
log.Printf("Could not delete subscriber %s at %s: %s", s.proxyId, s.conn, response.Error)
if s.publisherConn != nil {
log.Printf("Could not delete remote subscriber %s at %s (forwarded to %s): %s", s.proxyId, s.conn, s.publisherConn, response.Error)
} else {
log.Printf("Could not delete subscriber %s at %s: %s", s.proxyId, s.conn, response.Error)
}
return
}
log.Printf("Delete subscriber %s at %s", s.proxyId, s.conn)
if s.publisherConn != nil {
log.Printf("Deleted remote subscriber %s at %s (forwarded to %s)", s.proxyId, s.conn, s.publisherConn)
} else {
log.Printf("Deleted subscriber %s at %s", s.proxyId, s.conn)
}
}
func (s *mcuProxySubscriber) SendMessage(ctx context.Context, message *MessageClientMessage, data *MessageClientMessageData, callback func(error, map[string]interface{})) {
@@ -308,6 +340,7 @@ type mcuProxyConnection struct {
ip net.IP
load atomic.Int64
bandwidth atomic.Pointer[EventProxyServerBandwidth]
mu sync.Mutex
closer *Closer
closedDone *Closer
@@ -326,7 +359,7 @@ type mcuProxyConnection struct {
msgId atomic.Int64
helloMsgId string
sessionId string
sessionId atomic.Value
country atomic.Value
callbacks map[string]func(*ProxyServerMessage)
@@ -359,6 +392,7 @@ func newMcuProxyConnection(proxy *mcuProxy, baseUrl string, ip net.IP) (*mcuProx
}
conn.reconnectInterval.Store(int64(initialReconnectInterval))
conn.load.Store(loadNotConnected)
conn.bandwidth.Store(nil)
conn.country.Store("")
return conn, nil
}
@@ -371,6 +405,54 @@ func (c *mcuProxyConnection) String() string {
return c.rawUrl
}
func (c *mcuProxyConnection) IsSameCountry(initiator McuInitiator) bool {
if initiator == nil {
return true
}
initiatorCountry := initiator.Country()
if initiatorCountry == "" {
return true
}
connCountry := c.Country()
if connCountry == "" {
return true
}
return initiatorCountry == connCountry
}
func (c *mcuProxyConnection) IsSameContinent(initiator McuInitiator) bool {
if initiator == nil {
return true
}
initiatorCountry := initiator.Country()
if initiatorCountry == "" {
return true
}
connCountry := c.Country()
if connCountry == "" {
return true
}
initiatorContinents, found := ContinentMap[initiatorCountry]
if found {
m := c.proxy.getContinentsMap()
// Map continents to other continents (e.g. use Europe for Africa).
for _, continent := range initiatorContinents {
if toAdd, found := m[continent]; found {
initiatorContinents = append(initiatorContinents, toAdd...)
}
}
}
connContinents := ContinentMap[connCountry]
return ContinentsOverlap(initiatorContinents, connContinents)
}
type mcuProxyConnectionStats struct {
Url string `json:"url"`
IP net.IP `json:"ip,omitempty"`
@@ -414,10 +496,29 @@ func (c *mcuProxyConnection) Load() int64 {
return c.load.Load()
}
func (c *mcuProxyConnection) Bandwidth() *EventProxyServerBandwidth {
return c.bandwidth.Load()
}
func (c *mcuProxyConnection) Country() string {
return c.country.Load().(string)
}
func (c *mcuProxyConnection) SessionId() string {
sid := c.sessionId.Load()
if sid == nil {
return ""
}
return sid.(string)
}
func (c *mcuProxyConnection) IsConnected() bool {
c.mu.Lock()
defer c.mu.Unlock()
return c.conn != nil && c.SessionId() != ""
}
func (c *mcuProxyConnection) IsTemporary() bool {
return c.temporary.Load()
}
@@ -443,7 +544,10 @@ func (c *mcuProxyConnection) readPump() {
}
}()
defer c.close()
defer c.load.Store(loadNotConnected)
defer func() {
c.load.Store(loadNotConnected)
c.bandwidth.Store(nil)
}()
c.mu.Lock()
conn := c.conn
@@ -744,8 +848,9 @@ func (c *mcuProxyConnection) clearPublishers() {
publisher.NotifyClosed()
}
}(c.publishers)
// Can't use clear(...) here as the map is processed by the goroutine above.
c.publishers = make(map[string]*mcuProxyPublisher)
c.publisherIds = make(map[string]string)
clear(c.publisherIds)
if c.closeScheduled.Load() || c.IsTemporary() {
go c.closeIfEmpty()
@@ -775,6 +880,7 @@ func (c *mcuProxyConnection) clearSubscribers() {
subscriber.NotifyClosed()
}
}(c.subscribers)
// Can't use clear(...) here as the map is processed by the goroutine above.
c.subscribers = make(map[string]*mcuProxySubscriber)
if c.closeScheduled.Load() || c.IsTemporary() {
@@ -786,7 +892,7 @@ func (c *mcuProxyConnection) clearCallbacks() {
c.mu.Lock()
defer c.mu.Unlock()
c.callbacks = make(map[string]func(*ProxyServerMessage))
clear(c.callbacks)
}
func (c *mcuProxyConnection) getCallback(id string) func(*ProxyServerMessage) {
@@ -806,11 +912,11 @@ func (c *mcuProxyConnection) processMessage(msg *ProxyServerMessage) {
switch msg.Type {
case "error":
if msg.Error.Code == "no_such_session" {
log.Printf("Session %s could not be resumed on %s, registering new", c.sessionId, c)
log.Printf("Session %s could not be resumed on %s, registering new", c.SessionId(), c)
c.clearPublishers()
c.clearSubscribers()
c.clearCallbacks()
c.sessionId = ""
c.sessionId.Store("")
if err := c.sendHello(); err != nil {
log.Printf("Could not send hello request to %s: %s", c, err)
c.scheduleReconnect()
@@ -821,8 +927,8 @@ func (c *mcuProxyConnection) processMessage(msg *ProxyServerMessage) {
log.Printf("Hello connection to %s failed with %+v, reconnecting", c, msg.Error)
c.scheduleReconnect()
case "hello":
resumed := c.sessionId == msg.Hello.SessionId
c.sessionId = msg.Hello.SessionId
resumed := c.SessionId() == msg.Hello.SessionId
c.sessionId.Store(msg.Hello.SessionId)
country := ""
if msg.Hello.Server != nil {
if country = msg.Hello.Server.Country; country != "" && !IsValidCountry(country) {
@@ -832,11 +938,11 @@ func (c *mcuProxyConnection) processMessage(msg *ProxyServerMessage) {
}
c.country.Store(country)
if resumed {
log.Printf("Resumed session %s on %s", c.sessionId, c)
log.Printf("Resumed session %s on %s", c.SessionId(), c)
} else if country != "" {
log.Printf("Received session %s from %s (in %s)", c.sessionId, c, country)
log.Printf("Received session %s from %s (in %s)", c.SessionId(), c, country)
} else {
log.Printf("Received session %s from %s", c.sessionId, c)
log.Printf("Received session %s from %s", c.SessionId(), c)
}
if c.trackClose.CompareAndSwap(false, true) {
statsConnectedProxyBackendsCurrent.WithLabelValues(c.Country()).Inc()
@@ -907,9 +1013,10 @@ func (c *mcuProxyConnection) processEvent(msg *ProxyServerMessage) {
return
case "update-load":
if proxyDebugMessages {
log.Printf("Load of %s now at %d", c, event.Load)
log.Printf("Load of %s now at %d (%s)", c, event.Load, event.Bandwidth)
}
c.load.Store(event.Load)
c.bandwidth.Store(event.Bandwidth)
statsProxyBackendLoadCurrent.WithLabelValues(c.url.String()).Set(float64(event.Load))
return
case "shutdown-scheduled":
@@ -944,8 +1051,8 @@ func (c *mcuProxyConnection) processBye(msg *ProxyServerMessage) {
bye := msg.Bye
switch bye.Reason {
case "session_resumed":
log.Printf("Session %s on %s was resumed by other client, resetting", c.sessionId, c)
c.sessionId = ""
log.Printf("Session %s on %s was resumed by other client, resetting", c.SessionId(), c)
c.sessionId.Store("")
default:
log.Printf("Received bye with unsupported reason from %s %+v", c, bye)
}
@@ -960,17 +1067,10 @@ func (c *mcuProxyConnection) sendHello() error {
Version: "1.0",
},
}
if c.sessionId != "" {
msg.Hello.ResumeId = c.sessionId
if sessionId := c.SessionId(); sessionId != "" {
msg.Hello.ResumeId = sessionId
} else {
claims := &TokenClaims{
jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now()),
Issuer: c.proxy.tokenId,
},
}
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
tokenString, err := token.SignedString(c.proxy.tokenKey)
tokenString, err := c.proxy.createToken("")
if err != nil {
return err
}
@@ -1091,7 +1191,48 @@ func (c *mcuProxyConnection) newSubscriber(ctx context.Context, listener McuList
proxyId := response.Command.Id
log.Printf("Created %s subscriber %s on %s for %s", streamType, proxyId, c, publisherSessionId)
subscriber := newMcuProxySubscriber(publisherSessionId, response.Command.Sid, streamType, response.Command.Bitrate, proxyId, c, listener)
subscriber := newMcuProxySubscriber(publisherSessionId, response.Command.Sid, streamType, response.Command.Bitrate, proxyId, c, listener, nil)
c.subscribersLock.Lock()
c.subscribers[proxyId] = subscriber
c.subscribersLock.Unlock()
statsSubscribersCurrent.WithLabelValues(string(streamType)).Inc()
statsSubscribersTotal.WithLabelValues(string(streamType)).Inc()
return subscriber, nil
}
func (c *mcuProxyConnection) newRemoteSubscriber(ctx context.Context, listener McuListener, publisherId string, publisherSessionId string, streamType StreamType, publisherConn *mcuProxyConnection) (McuSubscriber, error) {
if c == publisherConn {
return c.newSubscriber(ctx, listener, publisherId, publisherSessionId, streamType)
}
remoteToken, err := c.proxy.createToken(publisherId)
if err != nil {
return nil, err
}
msg := &ProxyClientMessage{
Type: "command",
Command: &CommandProxyClientMessage{
Type: "create-subscriber",
StreamType: streamType,
PublisherId: publisherId,
RemoteUrl: publisherConn.rawUrl,
RemoteToken: remoteToken,
},
}
response, err := c.performSyncRequest(ctx, msg)
if err != nil {
// TODO: Cancel request
return nil, err
} else if response.Type == "error" {
return nil, fmt.Errorf("Error creating remote %s subscriber for %s on %s (forwarded to %s): %+v", streamType, publisherSessionId, c, publisherConn, response.Error)
}
proxyId := response.Command.Id
log.Printf("Created remote %s subscriber %s on %s for %s (forwarded to %s)", streamType, proxyId, c, publisherSessionId, publisherConn)
subscriber := newMcuProxySubscriber(publisherSessionId, response.Command.Sid, streamType, response.Command.Bitrate, proxyId, c, listener, publisherConn)
c.subscribersLock.Lock()
c.subscribers[proxyId] = subscriber
c.subscribersLock.Unlock()
@@ -1254,7 +1395,7 @@ func (m *mcuProxy) loadContinentsMap(config *goconf.ConfigFile) error {
return nil
}
func (m *mcuProxy) Start() error {
func (m *mcuProxy) Start(ctx context.Context) error {
log.Printf("Maximum bandwidth %d bits/sec per publishing stream", m.maxStreamBitrate)
log.Printf("Maximum bandwidth %d bits/sec per screensharing stream", m.maxScreenBitrate)
@@ -1274,6 +1415,48 @@ func (m *mcuProxy) Stop() {
m.config.Stop()
}
func (m *mcuProxy) createToken(subject string) (string, error) {
claims := &TokenClaims{
jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now()),
Issuer: m.tokenId,
Subject: subject,
},
}
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
tokenString, err := token.SignedString(m.tokenKey)
if err != nil {
return "", err
}
return tokenString, nil
}
func (m *mcuProxy) hasConnections() bool {
m.connectionsMu.RLock()
defer m.connectionsMu.RUnlock()
for _, conn := range m.connections {
if conn.IsConnected() {
return true
}
}
return false
}
func (m *mcuProxy) WaitForConnections(ctx context.Context) error {
ticker := time.NewTicker(10 * time.Millisecond)
defer ticker.Stop()
for !m.hasConnections() {
select {
case <-ctx.Done():
return ctx.Err()
case <-ticker.C:
}
}
return nil
}
func (m *mcuProxy) AddConnection(ignoreErrors bool, url string, ips ...net.IP) error {
m.connectionsMu.Lock()
defer m.connectionsMu.Unlock()
@@ -1565,27 +1748,27 @@ func (m *mcuProxy) removePublisher(publisher *mcuProxyPublisher) {
delete(m.publishers, getStreamId(publisher.id, publisher.StreamType()))
}
func (m *mcuProxy) NewPublisher(ctx context.Context, listener McuListener, id string, sid string, streamType StreamType, bitrate int, mediaTypes MediaType, initiator McuInitiator) (McuPublisher, error) {
connections := m.getSortedConnections(initiator)
func (m *mcuProxy) createPublisher(ctx context.Context, listener McuListener, id string, sid string, streamType StreamType, bitrate int, mediaTypes MediaType, initiator McuInitiator, connections []*mcuProxyConnection, isAllowed func(c *mcuProxyConnection) bool) McuPublisher {
var maxBitrate int
if streamType == StreamTypeScreen {
maxBitrate = m.maxScreenBitrate
} else {
maxBitrate = m.maxStreamBitrate
}
if bitrate <= 0 {
bitrate = maxBitrate
} else {
bitrate = min(bitrate, maxBitrate)
}
for _, conn := range connections {
if conn.IsShutdownScheduled() || conn.IsTemporary() {
if !isAllowed(conn) || conn.IsShutdownScheduled() || conn.IsTemporary() {
continue
}
subctx, cancel := context.WithTimeout(ctx, m.proxyTimeout)
defer cancel()
var maxBitrate int
if streamType == StreamTypeScreen {
maxBitrate = m.maxScreenBitrate
} else {
maxBitrate = m.maxStreamBitrate
}
if bitrate <= 0 {
bitrate = maxBitrate
} else {
bitrate = min(bitrate, maxBitrate)
}
publisher, err := conn.newPublisher(subctx, listener, id, sid, streamType, bitrate, mediaTypes)
if err != nil {
log.Printf("Could not create %s publisher for %s on %s: %s", streamType, id, conn, err)
@@ -1596,11 +1779,61 @@ func (m *mcuProxy) NewPublisher(ctx context.Context, listener McuListener, id st
m.publishers[getStreamId(id, streamType)] = conn
m.mu.Unlock()
m.publisherWaiters.Wakeup()
return publisher, nil
return publisher
}
statsProxyNobackendAvailableTotal.WithLabelValues(string(streamType)).Inc()
return nil, fmt.Errorf("No MCU connection available")
return nil
}
func (m *mcuProxy) NewPublisher(ctx context.Context, listener McuListener, id string, sid string, streamType StreamType, bitrate int, mediaTypes MediaType, initiator McuInitiator) (McuPublisher, error) {
connections := m.getSortedConnections(initiator)
publisher := m.createPublisher(ctx, listener, id, sid, streamType, bitrate, mediaTypes, initiator, connections, func(c *mcuProxyConnection) bool {
bw := c.Bandwidth()
return bw == nil || bw.AllowIncoming()
})
if publisher == nil {
// No proxy has available bandwidth, select one with the lowest currently used bandwidth.
connections2 := make([]*mcuProxyConnection, 0, len(connections))
for _, c := range connections {
if c.Bandwidth() != nil {
connections2 = append(connections2, c)
}
}
SlicesSortFunc(connections2, func(a *mcuProxyConnection, b *mcuProxyConnection) int {
var incoming_a *float64
if bw := a.Bandwidth(); bw != nil {
incoming_a = bw.Incoming
}
var incoming_b *float64
if bw := b.Bandwidth(); bw != nil {
incoming_b = bw.Incoming
}
if incoming_a == nil && incoming_b == nil {
return 0
} else if incoming_a == nil && incoming_b != nil {
return -1
} else if incoming_a != nil && incoming_b == nil {
return -1
} else if *incoming_a < *incoming_b {
return -1
} else if *incoming_a > *incoming_b {
return 1
}
return 0
})
publisher = m.createPublisher(ctx, listener, id, sid, streamType, bitrate, mediaTypes, initiator, connections2, func(c *mcuProxyConnection) bool {
return true
})
}
if publisher == nil {
statsProxyNobackendAvailableTotal.WithLabelValues(string(streamType)).Inc()
return nil, fmt.Errorf("No MCU connection available")
}
return publisher, nil
}
func (m *mcuProxy) getPublisherConnection(publisher string, streamType StreamType) *mcuProxyConnection {
@@ -1641,7 +1874,38 @@ func (m *mcuProxy) waitForPublisherConnection(ctx context.Context, publisher str
}
}
func (m *mcuProxy) NewSubscriber(ctx context.Context, listener McuListener, publisher string, streamType StreamType) (McuSubscriber, error) {
type proxyPublisherInfo struct {
id string
conn *mcuProxyConnection
err error
}
func (m *mcuProxy) createSubscriber(ctx context.Context, listener McuListener, id string, publisher string, streamType StreamType, publisherConn *mcuProxyConnection, connections []*mcuProxyConnection, isAllowed func(c *mcuProxyConnection) bool) McuSubscriber {
for _, conn := range connections {
if !isAllowed(conn) || conn.IsShutdownScheduled() || conn.IsTemporary() {
continue
}
var subscriber McuSubscriber
var err error
if conn == publisherConn {
subscriber, err = conn.newSubscriber(ctx, listener, id, publisher, streamType)
} else {
subscriber, err = conn.newRemoteSubscriber(ctx, listener, id, publisher, streamType, publisherConn)
}
if err != nil {
log.Printf("Could not create subscriber for %s publisher %s on %s: %s", streamType, publisher, conn, err)
continue
}
return subscriber
}
return nil
}
func (m *mcuProxy) NewSubscriber(ctx context.Context, listener McuListener, publisher string, streamType StreamType, initiator McuInitiator) (McuSubscriber, error) {
var publisherInfo *proxyPublisherInfo
if conn := m.getPublisherConnection(publisher, streamType); conn != nil {
// Fast common path: publisher is available locally.
conn.publishersLock.Lock()
@@ -1651,113 +1915,190 @@ func (m *mcuProxy) NewSubscriber(ctx context.Context, listener McuListener, publ
return nil, fmt.Errorf("Unknown publisher %s", publisher)
}
return conn.newSubscriber(ctx, listener, id, publisher, streamType)
}
log.Printf("No %s publisher %s found yet, deferring", streamType, publisher)
ch := make(chan McuSubscriber)
getctx, cancel := context.WithCancel(ctx)
defer cancel()
// Wait for publisher to be created locally.
go func() {
if conn := m.waitForPublisherConnection(getctx, publisher, streamType); conn != nil {
cancel() // Cancel pending RPC calls.
conn.publishersLock.Lock()
id, found := conn.publisherIds[getStreamId(publisher, streamType)]
conn.publishersLock.Unlock()
if !found {
log.Printf("Unknown id for local %s publisher %s", streamType, publisher)
return
}
subscriber, err := conn.newSubscriber(ctx, listener, id, publisher, streamType)
if subscriber != nil {
ch <- subscriber
} else if err != nil {
log.Printf("Error creating local subscriber for %s publisher %s: %s", streamType, publisher, err)
}
publisherInfo = &proxyPublisherInfo{
id: id,
conn: conn,
}
}()
} else {
log.Printf("No %s publisher %s found yet, deferring", streamType, publisher)
ch := make(chan *proxyPublisherInfo, 1)
getctx, cancel := context.WithCancel(ctx)
defer cancel()
// Wait for publisher to be created on one of the other servers in the cluster.
if clients := m.rpcClients.GetClients(); len(clients) > 0 {
for _, client := range clients {
go func(client *GrpcClient) {
id, url, ip, err := client.GetPublisherId(getctx, publisher, streamType)
if errors.Is(err, context.Canceled) {
return
} else if err != nil {
log.Printf("Error getting %s publisher id %s from %s: %s", streamType, publisher, client.Target(), err)
return
} else if id == "" {
// Publisher not found on other server
return
}
var wg sync.WaitGroup
// Wait for publisher to be created locally.
wg.Add(1)
go func() {
defer wg.Done()
if conn := m.waitForPublisherConnection(getctx, publisher, streamType); conn != nil {
cancel() // Cancel pending RPC calls.
log.Printf("Found publisher id %s through %s on proxy %s", id, client.Target(), url)
m.connectionsMu.RLock()
connections := m.connections
m.connectionsMu.RUnlock()
var publisherConn *mcuProxyConnection
for _, conn := range connections {
if conn.rawUrl != url || !ip.Equal(conn.ip) {
continue
conn.publishersLock.Lock()
id, found := conn.publisherIds[getStreamId(publisher, streamType)]
conn.publishersLock.Unlock()
if !found {
ch <- &proxyPublisherInfo{
err: fmt.Errorf("Unknown id for local %s publisher %s", streamType, publisher),
}
// Simple case, signaling server has a connection to the same endpoint
publisherConn = conn
break
}
if publisherConn == nil {
publisherConn, err = newMcuProxyConnection(m, url, ip)
if err != nil {
log.Printf("Could not create temporary connection to %s for %s publisher %s: %s", url, streamType, publisher, err)
return
}
publisherConn.setTemporary()
publisherConn.start()
if err := publisherConn.waitUntilConnected(ctx); err != nil {
log.Printf("Could not establish new connection to %s: %s", publisherConn, err)
publisherConn.closeIfEmpty()
return
}
m.connectionsMu.Lock()
m.connections = append(m.connections, publisherConn)
conns, found := m.connectionsMap[url]
if found {
conns = append(conns, publisherConn)
} else {
conns = []*mcuProxyConnection{publisherConn}
}
m.connectionsMap[url] = conns
m.connectionsMu.Unlock()
}
subscriber, err := publisherConn.newSubscriber(ctx, listener, id, publisher, streamType)
if err != nil {
if publisherConn.IsTemporary() {
publisherConn.closeIfEmpty()
}
log.Printf("Could not create subscriber for %s publisher %s: %s", streamType, publisher, err)
return
}
ch <- subscriber
}(client)
ch <- &proxyPublisherInfo{
id: id,
conn: conn,
}
}
}()
// Wait for publisher to be created on one of the other servers in the cluster.
if clients := m.rpcClients.GetClients(); len(clients) > 0 {
for _, client := range clients {
wg.Add(1)
go func(client *GrpcClient) {
defer wg.Done()
id, url, ip, err := client.GetPublisherId(getctx, publisher, streamType)
if errors.Is(err, context.Canceled) {
return
} else if err != nil {
log.Printf("Error getting %s publisher id %s from %s: %s", streamType, publisher, client.Target(), err)
return
} else if id == "" {
// Publisher not found on other server
return
}
cancel() // Cancel pending RPC calls.
log.Printf("Found publisher id %s through %s on proxy %s", id, client.Target(), url)
m.connectionsMu.RLock()
connections := m.connections
m.connectionsMu.RUnlock()
var publisherConn *mcuProxyConnection
for _, conn := range connections {
if conn.rawUrl != url || !ip.Equal(conn.ip) {
continue
}
// Simple case, signaling server has a connection to the same endpoint
publisherConn = conn
break
}
if publisherConn == nil {
publisherConn, err = newMcuProxyConnection(m, url, ip)
if err != nil {
log.Printf("Could not create temporary connection to %s for %s publisher %s: %s", url, streamType, publisher, err)
return
}
publisherConn.setTemporary()
publisherConn.start()
if err := publisherConn.waitUntilConnected(ctx); err != nil {
log.Printf("Could not establish new connection to %s: %s", publisherConn, err)
publisherConn.closeIfEmpty()
return
}
m.connectionsMu.Lock()
m.connections = append(m.connections, publisherConn)
conns, found := m.connectionsMap[url]
if found {
conns = append(conns, publisherConn)
} else {
conns = []*mcuProxyConnection{publisherConn}
}
m.connectionsMap[url] = conns
m.connectionsMu.Unlock()
}
ch <- &proxyPublisherInfo{
id: id,
conn: publisherConn,
}
}(client)
}
}
wg.Wait()
select {
case ch <- &proxyPublisherInfo{
err: fmt.Errorf("No %s publisher %s found", streamType, publisher),
}:
default:
}
select {
case info := <-ch:
publisherInfo = info
case <-ctx.Done():
return nil, fmt.Errorf("No %s publisher %s found", streamType, publisher)
}
}
select {
case subscriber := <-ch:
return subscriber, nil
case <-ctx.Done():
return nil, fmt.Errorf("No %s publisher %s found", streamType, publisher)
if publisherInfo.err != nil {
return nil, publisherInfo.err
}
bw := publisherInfo.conn.Bandwidth()
allowOutgoing := bw == nil || bw.AllowOutgoing()
if !allowOutgoing || !publisherInfo.conn.IsSameCountry(initiator) {
connections := m.getSortedConnections(initiator)
if !allowOutgoing || len(connections) > 0 && !connections[0].IsSameCountry(publisherInfo.conn) {
// Connect to remote publisher through "closer" gateway.
subscriber := m.createSubscriber(ctx, listener, publisherInfo.id, publisher, streamType, publisherInfo.conn, connections, func(c *mcuProxyConnection) bool {
bw := c.Bandwidth()
return bw == nil || bw.AllowOutgoing()
})
if subscriber == nil {
connections2 := make([]*mcuProxyConnection, 0, len(connections))
for _, c := range connections {
if c.Bandwidth() != nil {
connections2 = append(connections2, c)
}
}
SlicesSortFunc(connections2, func(a *mcuProxyConnection, b *mcuProxyConnection) int {
var outgoing_a *float64
if bw := a.Bandwidth(); bw != nil {
outgoing_a = bw.Outgoing
}
var outgoing_b *float64
if bw := b.Bandwidth(); bw != nil {
outgoing_b = bw.Outgoing
}
if outgoing_a == nil && outgoing_b == nil {
return 0
} else if outgoing_a == nil && outgoing_b != nil {
return -1
} else if outgoing_a != nil && outgoing_b == nil {
return -1
} else if *outgoing_a < *outgoing_b {
return -1
} else if *outgoing_a > *outgoing_b {
return 1
}
return 0
})
subscriber = m.createSubscriber(ctx, listener, publisherInfo.id, publisher, streamType, publisherInfo.conn, connections2, func(c *mcuProxyConnection) bool {
return true
})
}
if subscriber != nil {
return subscriber, nil
}
}
}
subscriber, err := publisherInfo.conn.newSubscriber(ctx, listener, publisherInfo.id, publisher, streamType)
if err != nil {
if publisherInfo.conn.IsTemporary() {
publisherInfo.conn.closeIfEmpty()
}
log.Printf("Could not create subscriber for %s publisher %s on %s: %s", streamType, publisher, publisherInfo.conn, err)
return nil, err
}
return subscriber, nil
}
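Both NewPublisher and NewSubscriber above use the same two-pass selection: first skip connections whose reported bandwidth disallows a new stream, then fall back to sorting the remaining connections by currently used bandwidth. A standalone sketch of that comparator follows (type and field names are illustrative; also note that for a strict weak ordering the mixed-nil case should return 1 when only the second measurement is missing, whereas the comparators in the diff return -1 in both mixed-nil branches):

```go
package main

import (
	"fmt"
	"slices"
)

// bandwidth mirrors the shape used by the sort: a nil pointer means
// "no measurement reported yet".
type bandwidth struct {
	Incoming *float64
}

type conn struct {
	name string
	bw   *bandwidth
}

// byIncoming orders connections by currently used incoming bandwidth,
// preferring connections without a measurement.
func byIncoming(a, b conn) int {
	var ia, ib *float64
	if a.bw != nil {
		ia = a.bw.Incoming
	}
	if b.bw != nil {
		ib = b.bw.Incoming
	}
	switch {
	case ia == nil && ib == nil:
		return 0
	case ia == nil:
		return -1
	case ib == nil:
		return 1 // symmetric counterpart of the case above
	case *ia < *ib:
		return -1
	case *ia > *ib:
		return 1
	}
	return 0
}

func main() {
	f := func(v float64) *float64 { return &v }
	conns := []conn{
		{"a", &bandwidth{Incoming: f(80)}},
		{"b", &bandwidth{Incoming: f(20)}},
		{"c", &bandwidth{}},
	}
	slices.SortFunc(conns, byIncoming)
	fmt.Println(conns[0].name) // c
}
```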

File diff suppressed because it is too large.


@@ -23,6 +23,7 @@ package signaling
import (
"context"
"errors"
"fmt"
"log"
"sync"
@@ -49,7 +50,7 @@ func NewTestMCU() (*TestMCU, error) {
}, nil
}
func (m *TestMCU) Start() error {
func (m *TestMCU) Start(ctx context.Context) error {
return nil
}
@@ -117,7 +118,7 @@ func (m *TestMCU) GetPublisher(id string) *TestMCUPublisher {
return m.publishers[id]
}
func (m *TestMCU) NewSubscriber(ctx context.Context, listener McuListener, publisher string, streamType StreamType) (McuSubscriber, error) {
func (m *TestMCU) NewSubscriber(ctx context.Context, listener McuListener, publisher string, streamType StreamType, initiator McuInitiator) (McuSubscriber, error) {
m.mu.Lock()
defer m.mu.Unlock()
@@ -222,6 +223,18 @@ func (p *TestMCUPublisher) SendMessage(ctx context.Context, message *MessageClie
}()
}
func (p *TestMCUPublisher) GetStreams(ctx context.Context) ([]PublisherStream, error) {
return nil, errors.New("not implemented")
}
func (p *TestMCUPublisher) PublishRemote(ctx context.Context, remoteId string, hostname string, port int, rtcpPort int) error {
return errors.New("remote publishing not supported")
}
func (p *TestMCUPublisher) UnpublishRemote(ctx context.Context, remoteId string) error {
return errors.New("remote publishing not supported")
}
type TestMCUSubscriber struct {
TestMCUClient
@@ -253,6 +266,8 @@ func (s *TestMCUSubscriber) SendMessage(ctx context.Context, message *MessageCli
"type": "offer",
"sdp": sdp,
})
case "answer":
callback(nil, nil)
default:
callback(fmt.Errorf("Message type %s is not implemented", data.Type), nil)
}


@@ -104,6 +104,7 @@ func testNatsClient_Subscribe(t *testing.T, client NatsClient) {
}
func TestNatsClient_Subscribe(t *testing.T) {
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
client := CreateLocalNatsClientForTest(t)
@@ -120,6 +121,7 @@ func testNatsClient_PublishAfterClose(t *testing.T, client NatsClient) {
}
func TestNatsClient_PublishAfterClose(t *testing.T) {
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
client := CreateLocalNatsClientForTest(t)
@@ -137,6 +139,7 @@ func testNatsClient_SubscribeAfterClose(t *testing.T, client NatsClient) {
}
func TestNatsClient_SubscribeAfterClose(t *testing.T) {
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
client := CreateLocalNatsClientForTest(t)
@@ -159,6 +162,7 @@ func testNatsClient_BadSubjects(t *testing.T, client NatsClient) {
}
func TestNatsClient_BadSubjects(t *testing.T) {
CatchLogForTest(t)
ensureNoGoroutinesLeak(t, func(t *testing.T) {
client := CreateLocalNatsClientForTest(t)


@@ -118,6 +118,7 @@ func TestNotifierResetWillNotify(t *testing.T) {
}
func TestNotifierDuplicate(t *testing.T) {
t.Parallel()
var notifier Notifier
var wgStart sync.WaitGroup
var wgEnd sync.WaitGroup


@@ -8,6 +8,12 @@
# See "https://golang.org/pkg/net/http/pprof/" for further information.
#debug = false
# Comma separated list of trusted proxies (IPs or CIDR networks) that may set
# the "X-Real-Ip" or "X-Forwarded-For" headers. If both are provided, the
# "X-Real-Ip" header will take precedence (if valid).
# Leave empty to allow loopback and local addresses.
#trustedproxies =
# ISO 3166 country this proxy is located in. This will be used by the signaling
# servers to determine the closest proxy for publishers.
#country = DE
@@ -20,6 +26,36 @@
# - etcd: Token information is retrieved from an etcd cluster (see below).
tokentype = static
# The external hostname for remote streams. Leaving this empty will autodetect
# and use the first public IP found on the available network interfaces.
#hostname =
# The token id to use when connecting remote streams.
#token_id = server1
# The private key for the configured token id to use when connecting remote
# streams.
#token_key = privkey.pem
# If set to "true", certificate validation of remote stream requests will be
# skipped. This should only be enabled during development, e.g. to work with
# self-signed certificates.
#skipverify = false
[bandwidth]
# Target bandwidth limit for incoming streams (in megabits per second).
# Set to 0 to disable the limit. If the limit is reached, the proxy notifies
# the signaling servers that another proxy should be used for publishing if
# possible.
#incoming = 1024
# Target bandwidth limit for outgoing streams (in megabits per second).
# Set to 0 to disable the limit. If the limit is reached, the proxy notifies
# the signaling servers that another proxy should be used for subscribing if
# possible. Note that this might require additional outgoing bandwidth for the
# remote streams.
#outgoing = 1024
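The proxy reports bandwidth usage to the signaling servers as a percentage of these limits: `newLoadEvent` in `proxy_server.go` scales the configured MBit/s value by `1024 * 1024` and divides the current usage by it. A minimal sketch of that computation (the helper name is made up; a `nil` result stands for a disabled limit):

```go
package main

import "fmt"

// bandwidthPercent mirrors how the proxy derives the reported bandwidth
// percentage: the configured limit (MBit/s) is scaled to bits per second
// and current usage is expressed as a percentage of it. A limit of 0
// disables the check, modeled here as a nil result.
func bandwidthPercent(currentBits int64, limitMbit int) *float64 {
	if limitMbit <= 0 {
		return nil // unlimited
	}
	limitBits := int64(limitMbit) * 1024 * 1024
	value := float64(currentBits) / float64(limitBits) * 100
	return &value
}

func main() {
	// 512 MBit/s of incoming traffic against the default 1024 MBit/s limit.
	if p := bandwidthPercent(512*1024*1024, 1024); p != nil {
		fmt.Printf("incoming: %.1f%%\n", *p) // incoming: 50.0%
	}
}
```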
[tokens]
# For token type "static": Mapping of <tokenid> = <publickey> of signaling
# servers allowed to connect.


@@ -36,6 +36,8 @@ import (
"github.com/dlintw/goconf"
"github.com/gorilla/mux"
signaling "github.com/strukturag/nextcloud-spreed-signaling"
)
var (
@@ -90,7 +92,7 @@ func main() {
}
defer proxy.Stop()
if addr, _ := config.GetString("http", "listen"); addr != "" {
if addr, _ := signaling.GetStringOptionWithEnv(config, "http", "listen"); addr != "" {
readTimeout, _ := config.GetInt("http", "readtimeout")
if readTimeout <= 0 {
readTimeout = defaultReadTimeout


@@ -53,18 +53,18 @@ func (c *ProxyClient) SetSession(session *ProxySession) {
c.session.Store(session)
}
func (c *ProxyClient) OnClosed(client *signaling.Client) {
func (c *ProxyClient) OnClosed(client signaling.HandlerClient) {
if session := c.GetSession(); session != nil {
session.MarkUsed()
}
c.proxy.clientClosed(&c.Client)
}
func (c *ProxyClient) OnMessageReceived(client *signaling.Client, data []byte) {
func (c *ProxyClient) OnMessageReceived(client signaling.HandlerClient, data []byte) {
c.proxy.processMessage(c, data)
}
func (c *ProxyClient) OnRTTReceived(client *signaling.Client, rtt time.Duration) {
func (c *ProxyClient) OnRTTReceived(client signaling.HandlerClient, rtt time.Duration) {
if session := c.GetSession(); session != nil {
session.MarkUsed()
}

proxy/proxy_remote.go (new file, 490 lines)

@@ -0,0 +1,490 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2024 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package main
import (
"context"
"crypto/rsa"
"crypto/tls"
"encoding/json"
"errors"
"log"
"net/http"
"net/url"
"strconv"
"sync"
"sync/atomic"
"time"
"github.com/golang-jwt/jwt/v4"
"github.com/gorilla/websocket"
signaling "github.com/strukturag/nextcloud-spreed-signaling"
)
const (
initialReconnectInterval = 1 * time.Second
maxReconnectInterval = 32 * time.Second
// Time allowed to write a message to the peer.
writeWait = 10 * time.Second
// Time allowed to read the next pong message from the peer.
pongWait = 60 * time.Second
// Send pings to peer with this period. Must be less than pongWait.
pingPeriod = (pongWait * 9) / 10
)
var (
ErrNotConnected = errors.New("not connected")
)
type RemoteConnection struct {
mu sync.Mutex
url *url.URL
conn *websocket.Conn
closer *signaling.Closer
closed atomic.Bool
tokenId string
tokenKey *rsa.PrivateKey
tlsConfig *tls.Config
connectedSince time.Time
reconnectTimer *time.Timer
reconnectInterval atomic.Int64
msgId atomic.Int64
helloMsgId string
sessionId string
pendingMessages []*signaling.ProxyClientMessage
messageCallbacks map[string]chan *signaling.ProxyServerMessage
}
func NewRemoteConnection(proxyUrl string, tokenId string, tokenKey *rsa.PrivateKey, tlsConfig *tls.Config) (*RemoteConnection, error) {
u, err := url.Parse(proxyUrl)
if err != nil {
return nil, err
}
result := &RemoteConnection{
url: u,
closer: signaling.NewCloser(),
tokenId: tokenId,
tokenKey: tokenKey,
tlsConfig: tlsConfig,
reconnectTimer: time.NewTimer(0),
messageCallbacks: make(map[string]chan *signaling.ProxyServerMessage),
}
result.reconnectInterval.Store(int64(initialReconnectInterval))
go result.writePump()
return result, nil
}
func (c *RemoteConnection) String() string {
return c.url.String()
}
func (c *RemoteConnection) reconnect() {
u, err := c.url.Parse("proxy")
if err != nil {
log.Printf("Could not resolve url to proxy at %s: %s", c, err)
c.scheduleReconnect()
return
}
if u.Scheme == "http" {
u.Scheme = "ws"
} else if u.Scheme == "https" {
u.Scheme = "wss"
}
dialer := websocket.Dialer{
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: c.tlsConfig,
}
conn, _, err := dialer.DialContext(context.TODO(), u.String(), nil)
if err != nil {
log.Printf("Error connecting to proxy at %s: %s", c, err)
c.scheduleReconnect()
return
}
log.Printf("Connected to %s", c)
c.closed.Store(false)
c.mu.Lock()
c.connectedSince = time.Now()
c.conn = conn
c.mu.Unlock()
c.reconnectInterval.Store(int64(initialReconnectInterval))
if err := c.sendHello(); err != nil {
log.Printf("Error sending hello request to proxy at %s: %s", c, err)
c.scheduleReconnect()
return
}
if !c.sendPing() {
return
}
go c.readPump(conn)
}
func (c *RemoteConnection) scheduleReconnect() {
if err := c.sendClose(); err != nil && err != ErrNotConnected {
log.Printf("Could not send close message to %s: %s", c, err)
}
c.close()
interval := c.reconnectInterval.Load()
c.reconnectTimer.Reset(time.Duration(interval))
interval = interval * 2
if interval > int64(maxReconnectInterval) {
interval = int64(maxReconnectInterval)
}
c.reconnectInterval.Store(interval)
}
func (c *RemoteConnection) sendHello() error {
c.helloMsgId = strconv.FormatInt(c.msgId.Add(1), 10)
msg := &signaling.ProxyClientMessage{
Id: c.helloMsgId,
Type: "hello",
Hello: &signaling.HelloProxyClientMessage{
Version: "1.0",
},
}
if sessionId := c.sessionId; sessionId != "" {
msg.Hello.ResumeId = sessionId
} else {
tokenString, err := c.createToken("")
if err != nil {
return err
}
msg.Hello.Token = tokenString
}
return c.SendMessage(msg)
}
func (c *RemoteConnection) sendClose() error {
c.mu.Lock()
defer c.mu.Unlock()
if c.conn == nil {
return ErrNotConnected
}
c.conn.SetWriteDeadline(time.Now().Add(writeWait)) // nolint
return c.conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
}
func (c *RemoteConnection) close() {
c.mu.Lock()
defer c.mu.Unlock()
if c.conn != nil {
c.conn.Close()
c.conn = nil
}
}
func (c *RemoteConnection) Close() error {
c.mu.Lock()
defer c.mu.Unlock()
c.reconnectTimer.Stop()
if c.conn == nil {
return nil
}
c.sendClose()
err1 := c.conn.WriteControl(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""), time.Time{})
err2 := c.conn.Close()
c.conn = nil
if err1 != nil {
return err1
}
return err2
}
func (c *RemoteConnection) createToken(subject string) (string, error) {
claims := &signaling.TokenClaims{
RegisteredClaims: jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now()),
Issuer: c.tokenId,
Subject: subject,
},
}
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
tokenString, err := token.SignedString(c.tokenKey)
if err != nil {
return "", err
}
return tokenString, nil
}
func (c *RemoteConnection) SendMessage(msg *signaling.ProxyClientMessage) error {
c.mu.Lock()
defer c.mu.Unlock()
return c.sendMessageLocked(context.Background(), msg)
}
func (c *RemoteConnection) deferMessage(ctx context.Context, msg *signaling.ProxyClientMessage) {
c.pendingMessages = append(c.pendingMessages, msg)
if ctx.Done() != nil {
go func() {
<-ctx.Done()
c.mu.Lock()
defer c.mu.Unlock()
for idx, m := range c.pendingMessages {
if m == msg {
c.pendingMessages[idx] = nil
break
}
}
}()
}
}
func (c *RemoteConnection) sendMessageLocked(ctx context.Context, msg *signaling.ProxyClientMessage) error {
if c.conn == nil {
// Defer until connected.
c.deferMessage(ctx, msg)
return nil
}
if c.helloMsgId != "" && c.helloMsgId != msg.Id {
// Hello request is still inflight, defer.
c.deferMessage(ctx, msg)
return nil
}
c.conn.SetWriteDeadline(time.Now().Add(writeWait)) // nolint
return c.conn.WriteJSON(msg)
}
func (c *RemoteConnection) readPump(conn *websocket.Conn) {
defer func() {
if !c.closed.Load() {
c.scheduleReconnect()
}
}()
defer c.close()
for {
msgType, msg, err := conn.ReadMessage()
if err != nil {
if errors.Is(err, websocket.ErrCloseSent) {
break
} else if _, ok := err.(*websocket.CloseError); !ok || websocket.IsUnexpectedCloseError(err,
websocket.CloseNormalClosure,
websocket.CloseGoingAway,
websocket.CloseNoStatusReceived) {
log.Printf("Error reading from %s: %v", c, err)
}
break
}
if msgType != websocket.TextMessage {
log.Printf("unexpected message type %q (%s)", msgType, string(msg))
continue
}
var message signaling.ProxyServerMessage
if err := json.Unmarshal(msg, &message); err != nil {
log.Printf("could not decode message %s: %s", string(msg), err)
continue
}
c.mu.Lock()
helloMsgId := c.helloMsgId
c.mu.Unlock()
if helloMsgId != "" && message.Id == helloMsgId {
c.processHello(&message)
} else {
c.processMessage(&message)
}
}
}
func (c *RemoteConnection) sendPing() bool {
c.mu.Lock()
defer c.mu.Unlock()
if c.conn == nil {
return false
}
now := time.Now()
msg := strconv.FormatInt(now.UnixNano(), 10)
c.conn.SetWriteDeadline(now.Add(writeWait)) // nolint
if err := c.conn.WriteMessage(websocket.PingMessage, []byte(msg)); err != nil {
log.Printf("Could not send ping to proxy at %s: %v", c, err)
go c.scheduleReconnect()
return false
}
return true
}
func (c *RemoteConnection) writePump() {
ticker := time.NewTicker(pingPeriod)
defer func() {
ticker.Stop()
}()
defer c.reconnectTimer.Stop()
for {
select {
case <-c.reconnectTimer.C:
c.reconnect()
case <-ticker.C:
c.sendPing()
case <-c.closer.C:
return
}
}
}
func (c *RemoteConnection) processHello(msg *signaling.ProxyServerMessage) {
c.helloMsgId = ""
switch msg.Type {
case "error":
if msg.Error.Code == "no_such_session" {
log.Printf("Session %s could not be resumed on %s, registering new", c.sessionId, c)
c.sessionId = ""
if err := c.sendHello(); err != nil {
log.Printf("Could not send hello request to %s: %s", c, err)
c.scheduleReconnect()
}
return
}
log.Printf("Hello connection to %s failed with %+v, reconnecting", c, msg.Error)
c.scheduleReconnect()
case "hello":
resumed := c.sessionId == msg.Hello.SessionId
c.sessionId = msg.Hello.SessionId
country := ""
if msg.Hello.Server != nil {
if country = msg.Hello.Server.Country; country != "" && !signaling.IsValidCountry(country) {
log.Printf("Proxy %s sent invalid country %s in hello response", c, country)
country = ""
}
}
if resumed {
log.Printf("Resumed session %s on %s", c.sessionId, c)
} else if country != "" {
log.Printf("Received session %s from %s (in %s)", c.sessionId, c, country)
} else {
log.Printf("Received session %s from %s", c.sessionId, c)
}
pending := c.pendingMessages
c.pendingMessages = nil
for _, m := range pending {
if m == nil {
continue
}
if err := c.sendMessageLocked(context.Background(), m); err != nil {
log.Printf("Could not send pending message %+v to %s: %s", m, c, err)
}
}
default:
log.Printf("Received unsupported hello response %+v from %s, reconnecting", msg, c)
c.scheduleReconnect()
}
}
func (c *RemoteConnection) processMessage(msg *signaling.ProxyServerMessage) {
if msg.Id != "" {
c.mu.Lock()
ch, found := c.messageCallbacks[msg.Id]
if found {
delete(c.messageCallbacks, msg.Id)
c.mu.Unlock()
ch <- msg
return
}
c.mu.Unlock()
}
switch msg.Type {
case "event":
c.processEvent(msg)
default:
log.Printf("Received unsupported message %+v from %s", msg, c)
}
}
func (c *RemoteConnection) processEvent(msg *signaling.ProxyServerMessage) {
switch msg.Event.Type {
case "update-load":
default:
log.Printf("Received unsupported event %+v from %s", msg, c)
}
}
func (c *RemoteConnection) RequestMessage(ctx context.Context, msg *signaling.ProxyClientMessage) (*signaling.ProxyServerMessage, error) {
msg.Id = strconv.FormatInt(c.msgId.Add(1), 10)
c.mu.Lock()
defer c.mu.Unlock()
if err := c.sendMessageLocked(ctx, msg); err != nil {
return nil, err
}
ch := make(chan *signaling.ProxyServerMessage, 1)
c.messageCallbacks[msg.Id] = ch
c.mu.Unlock()
defer func() {
c.mu.Lock()
delete(c.messageCallbacks, msg.Id)
}()
select {
case <-ctx.Done():
// TODO: Cancel request.
return nil, ctx.Err()
case response := <-ch:
if response.Type == "error" {
return nil, response.Error
}
return response, nil
}
}


@@ -24,7 +24,10 @@ package main
import (
"context"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"log"
@@ -45,6 +48,7 @@ import (
"github.com/gorilla/mux"
"github.com/gorilla/securecookie"
"github.com/gorilla/websocket"
"github.com/notedit/janus-go"
"github.com/prometheus/client_golang/prometheus/promhttp"
signaling "github.com/strukturag/nextcloud-spreed-signaling"
@@ -63,6 +67,8 @@ const (
// Maximum age a token may have to prevent reuse of old tokens.
maxTokenAge = 5 * time.Minute
remotePublisherTimeout = 5 * time.Second
)
type ContextKey string
@@ -70,28 +76,35 @@ type ContextKey string
var (
ContextKeySession = ContextKey("session")
TimeoutCreatingPublisher = signaling.NewError("timeout", "Timeout creating publisher.")
TimeoutCreatingSubscriber = signaling.NewError("timeout", "Timeout creating subscriber.")
TokenAuthFailed = signaling.NewError("auth_failed", "The token could not be authenticated.")
TokenExpired = signaling.NewError("token_expired", "The token is expired.")
TokenNotValidYet = signaling.NewError("token_not_valid_yet", "The token is not valid yet.")
UnknownClient = signaling.NewError("unknown_client", "Unknown client id given.")
UnsupportedCommand = signaling.NewError("bad_request", "Unsupported command received.")
UnsupportedMessage = signaling.NewError("bad_request", "Unsupported message received.")
UnsupportedPayload = signaling.NewError("unsupported_payload", "Unsupported payload type.")
ShutdownScheduled = signaling.NewError("shutdown_scheduled", "The server is scheduled to shutdown.")
TimeoutCreatingPublisher = signaling.NewError("timeout", "Timeout creating publisher.")
TimeoutCreatingSubscriber = signaling.NewError("timeout", "Timeout creating subscriber.")
TokenAuthFailed = signaling.NewError("auth_failed", "The token could not be authenticated.")
TokenExpired = signaling.NewError("token_expired", "The token is expired.")
TokenNotValidYet = signaling.NewError("token_not_valid_yet", "The token is not valid yet.")
UnknownClient = signaling.NewError("unknown_client", "Unknown client id given.")
UnsupportedCommand = signaling.NewError("bad_request", "Unsupported command received.")
UnsupportedMessage = signaling.NewError("bad_request", "Unsupported message received.")
UnsupportedPayload = signaling.NewError("unsupported_payload", "Unsupported payload type.")
ShutdownScheduled = signaling.NewError("shutdown_scheduled", "The server is scheduled to shutdown.")
RemoteSubscribersNotSupported = signaling.NewError("unsupported_subscriber", "Remote subscribers are not supported.")
)
type ProxyServer struct {
version string
country string
welcomeMessage string
config *goconf.ConfigFile
url string
mcu signaling.Mcu
stopped atomic.Bool
load atomic.Int64
maxIncoming int64
currentIncoming atomic.Int64
maxOutgoing int64
currentOutgoing atomic.Int64
shutdownChannel chan struct{}
shutdownScheduled atomic.Bool
@@ -99,6 +112,7 @@ type ProxyServer struct {
tokens ProxyTokens
statsAllowedIps *signaling.AllowedIps
trustedProxies *signaling.AllowedIps
sid atomic.Uint64
cookie *securecookie.SecureCookie
@@ -108,6 +122,48 @@ type ProxyServer struct {
clients map[string]signaling.McuClient
clientIds map[string]string
clientsLock sync.RWMutex
tokenId string
tokenKey *rsa.PrivateKey
remoteTlsConfig *tls.Config
remoteHostname string
remoteConnections map[string]*RemoteConnection
remoteConnectionsLock sync.Mutex
}
func IsPublicIP(IP net.IP) bool {
if IP.IsLoopback() || IP.IsLinkLocalMulticast() || IP.IsLinkLocalUnicast() {
return false
}
if ip4 := IP.To4(); ip4 != nil {
switch {
case ip4[0] == 10:
return false
case ip4[0] == 172 && ip4[1] >= 16 && ip4[1] <= 31:
return false
case ip4[0] == 192 && ip4[1] == 168:
return false
default:
return true
}
}
return false
}
func GetLocalIP() (string, error) {
addrs, err := net.InterfaceAddrs()
if err != nil {
return "", err
}
for _, address := range addrs {
if ipnet, ok := address.(*net.IPNet); ok && IsPublicIP(ipnet.IP) {
if ipnet.IP.To4() != nil {
return ipnet.IP.String(), nil
}
}
}
return "", nil
}
func NewProxyServer(r *mux.Router, version string, config *goconf.ConfigFile) (*ProxyServer, error) {
@@ -153,6 +209,19 @@ func NewProxyServer(r *mux.Router, version string, config *goconf.ConfigFile) (*
statsAllowedIps = signaling.DefaultAllowedIps()
}
trustedProxies, _ := config.GetString("app", "trustedproxies")
trustedProxiesIps, err := signaling.ParseAllowedIps(trustedProxies)
if err != nil {
return nil, err
}
if !trustedProxiesIps.Empty() {
log.Printf("Trusted proxies: %s", trustedProxiesIps)
} else {
trustedProxiesIps = signaling.DefaultTrustedProxies
log.Printf("No trusted proxies configured, only allowing for %s", trustedProxiesIps)
}
country, _ := config.GetString("app", "country")
country = strings.ToUpper(country)
if signaling.IsValidCountry(country) {
@@ -173,10 +242,75 @@ func NewProxyServer(r *mux.Router, version string, config *goconf.ConfigFile) (*
return nil, err
}
tokenId, _ := config.GetString("app", "token_id")
var tokenKey *rsa.PrivateKey
var remoteHostname string
var remoteTlsConfig *tls.Config
if tokenId != "" {
tokenKeyFilename, _ := config.GetString("app", "token_key")
if tokenKeyFilename == "" {
return nil, fmt.Errorf("No token key configured")
}
tokenKeyData, err := os.ReadFile(tokenKeyFilename)
if err != nil {
return nil, fmt.Errorf("Could not read private key from %s: %s", tokenKeyFilename, err)
}
tokenKey, err = jwt.ParseRSAPrivateKeyFromPEM(tokenKeyData)
if err != nil {
return nil, fmt.Errorf("Could not parse private key from %s: %s", tokenKeyFilename, err)
}
log.Printf("Using \"%s\" as token id for remote streams", tokenId)
remoteHostname, _ = config.GetString("app", "hostname")
if remoteHostname == "" {
remoteHostname, err = GetLocalIP()
if err != nil {
return nil, fmt.Errorf("could not get local ip: %w", err)
}
}
if remoteHostname == "" {
log.Printf("WARNING: Could not determine hostname for remote streams, will be disabled. Please configure manually.")
} else {
log.Printf("Using \"%s\" as hostname for remote streams", remoteHostname)
}
skipverify, _ := config.GetBool("backend", "skipverify")
if skipverify {
log.Println("WARNING: Certificate verification for remote stream requests is disabled!")
remoteTlsConfig = &tls.Config{
InsecureSkipVerify: skipverify,
}
}
} else {
log.Printf("No token id configured, remote streams will be disabled")
}
maxIncoming, _ := config.GetInt("bandwidth", "incoming")
if maxIncoming < 0 {
maxIncoming = 0
}
if maxIncoming > 0 {
log.Printf("Target bandwidth for incoming streams: %d MBit/s", maxIncoming)
} else {
log.Printf("Target bandwidth for incoming streams: unlimited")
}
maxOutgoing, _ := config.GetInt("bandwidth", "outgoing")
if maxOutgoing < 0 {
maxOutgoing = 0
}
if maxOutgoing > 0 {
log.Printf("Target bandwidth for outgoing streams: %d MBit/s", maxOutgoing)
} else {
log.Printf("Target bandwidth for outgoing streams: unlimited")
}
result := &ProxyServer{
version: version,
country: country,
welcomeMessage: string(welcomeMessage) + "\n",
config: config,
maxIncoming: int64(maxIncoming) * 1024 * 1024,
maxOutgoing: int64(maxOutgoing) * 1024 * 1024,
shutdownChannel: make(chan struct{}),
@@ -187,12 +321,19 @@ func NewProxyServer(r *mux.Router, version string, config *goconf.ConfigFile) (*
tokens: tokens,
statsAllowedIps: statsAllowedIps,
trustedProxies: trustedProxiesIps,
cookie: securecookie.New(hashKey, blockKey).MaxAge(0),
sessions: make(map[uint64]*ProxySession),
clients: make(map[string]signaling.McuClient),
clientIds: make(map[string]string),
tokenId: tokenId,
tokenKey: tokenKey,
remoteTlsConfig: remoteTlsConfig,
remoteHostname: remoteHostname,
remoteConnections: make(map[string]*RemoteConnection),
}
result.upgrader.CheckOrigin = result.checkOrigin
@@ -223,7 +364,7 @@ func (s *ProxyServer) checkOrigin(r *http.Request) bool {
}
func (s *ProxyServer) Start(config *goconf.ConfigFile) error {
s.url, _ = config.GetString("mcu", "url")
s.url, _ = signaling.GetStringOptionWithEnv(config, "mcu", "url")
if s.url == "" {
return fmt.Errorf("No MCU server url configured")
}
@@ -245,7 +386,7 @@ func (s *ProxyServer) Start(config *goconf.ConfigFile) error {
for {
switch mcuType {
case signaling.McuTypeJanus:
mcu, err = signaling.NewMcuJanus(s.url, config)
mcu, err = signaling.NewMcuJanus(ctx, s.url, config)
if err == nil {
signaling.RegisterJanusMcuStats()
}
@@ -255,7 +396,7 @@ func (s *ProxyServer) Start(config *goconf.ConfigFile) error {
if err == nil {
mcu.SetOnConnected(s.onMcuConnected)
mcu.SetOnDisconnected(s.onMcuDisconnected)
err = mcu.Start()
err = mcu.Start(ctx)
if err != nil {
log.Printf("Could not create %s MCU at %s: %s", mcuType, s.url, err)
}
@@ -298,18 +439,7 @@ loop:
}
}
func (s *ProxyServer) updateLoad() {
load := s.GetClientsLoad()
if load == s.load.Load() {
return
}
s.load.Store(load)
if s.shutdownScheduled.Load() {
// Server is scheduled to shutdown, no need to update clients with current load.
return
}
func (s *ProxyServer) newLoadEvent(load int64, incoming int64, outgoing int64) *signaling.ProxyServerMessage {
msg := &signaling.ProxyServerMessage{
Type: "event",
Event: &signaling.EventProxyServerMessage{
@@ -317,7 +447,37 @@ func (s *ProxyServer) updateLoad() {
Load: load,
},
}
if s.maxIncoming > 0 || s.maxOutgoing > 0 {
msg.Event.Bandwidth = &signaling.EventProxyServerBandwidth{}
if s.maxIncoming > 0 {
value := float64(incoming) / float64(s.maxIncoming) * 100
msg.Event.Bandwidth.Incoming = &value
}
if s.maxOutgoing > 0 {
value := float64(outgoing) / float64(s.maxOutgoing) * 100
msg.Event.Bandwidth.Outgoing = &value
}
}
return msg
}
func (s *ProxyServer) updateLoad() {
load, incoming, outgoing := s.GetClientsLoad()
if load == s.load.Load() &&
incoming == s.currentIncoming.Load() &&
outgoing == s.currentOutgoing.Load() {
return
}
s.load.Store(load)
s.currentIncoming.Store(incoming)
s.currentOutgoing.Store(outgoing)
if s.shutdownScheduled.Load() {
// Server is scheduled to shutdown, no need to update clients with current load.
return
}
msg := s.newLoadEvent(load, incoming, outgoing)
s.IterateSessions(func(session *ProxySession) {
session.sendMessage(msg)
})
@@ -398,24 +558,6 @@ func (s *ProxyServer) setCommonHeaders(f func(http.ResponseWriter, *http.Request
}
}
func getRealUserIP(r *http.Request) string {
// Note this function assumes it is running behind a trusted proxy, so
// the headers can be trusted.
if ip := r.Header.Get("X-Real-IP"); ip != "" {
return ip
}
if ip := r.Header.Get("X-Forwarded-For"); ip != "" {
// Result could be a list "clientip, proxy1, proxy2", so only use first element.
if pos := strings.Index(ip, ","); pos >= 0 {
ip = strings.TrimSpace(ip[:pos])
}
return ip
}
return r.RemoteAddr
}
func (s *ProxyServer) welcomeHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(http.StatusOK)
@@ -423,7 +565,7 @@ func (s *ProxyServer) welcomeHandler(w http.ResponseWriter, r *http.Request) {
}
func (s *ProxyServer) proxyHandler(w http.ResponseWriter, r *http.Request) {
addr := getRealUserIP(r)
addr := signaling.GetRealUserIP(r, s.trustedProxies)
conn, err := s.upgrader.Upgrade(w, r, nil)
if err != nil {
log.Printf("Could not upgrade request from %s: %s", addr, err)
@@ -479,13 +621,7 @@ func (s *ProxyServer) onMcuDisconnected() {
}
func (s *ProxyServer) sendCurrentLoad(session *ProxySession) {
msg := &signaling.ProxyServerMessage{
Type: "event",
Event: &signaling.EventProxyServerMessage{
Type: "update-load",
Load: s.load.Load(),
},
}
msg := s.newLoadEvent(s.load.Load(), s.currentIncoming.Load(), s.currentOutgoing.Load())
session.sendMessage(msg)
}
@@ -613,6 +749,59 @@ func (i *emptyInitiator) Country() string {
return ""
}
type proxyRemotePublisher struct {
proxy *ProxyServer
remoteUrl string
publisherId string
}
func (p *proxyRemotePublisher) PublisherId() string {
return p.publisherId
}
func (p *proxyRemotePublisher) StartPublishing(ctx context.Context, publisher signaling.McuRemotePublisherProperties) error {
conn, err := p.proxy.getRemoteConnection(p.remoteUrl)
if err != nil {
return err
}
if _, err := conn.RequestMessage(ctx, &signaling.ProxyClientMessage{
Type: "command",
Command: &signaling.CommandProxyClientMessage{
Type: "publish-remote",
ClientId: p.publisherId,
Hostname: p.proxy.remoteHostname,
Port: publisher.Port(),
RtcpPort: publisher.RtcpPort(),
},
}); err != nil {
return err
}
return nil
}
func (p *proxyRemotePublisher) GetStreams(ctx context.Context) ([]signaling.PublisherStream, error) {
conn, err := p.proxy.getRemoteConnection(p.remoteUrl)
if err != nil {
return nil, err
}
response, err := conn.RequestMessage(ctx, &signaling.ProxyClientMessage{
Type: "command",
Command: &signaling.CommandProxyClientMessage{
Type: "get-publisher-streams",
ClientId: p.publisherId,
},
})
if err != nil {
return nil, err
}
return response.Command.Streams, nil
}
func (s *ProxyServer) processCommand(ctx context.Context, client *ProxyClient, session *ProxySession, message *signaling.ProxyClientMessage) {
cmd := message.Command
@@ -655,18 +844,89 @@ func (s *ProxyServer) processCommand(ctx context.Context, client *ProxyClient, s
case "create-subscriber":
id := uuid.New().String()
publisherId := cmd.PublisherId
subscriber, err := s.mcu.NewSubscriber(ctx, session, publisherId, cmd.StreamType)
if err == context.DeadlineExceeded {
log.Printf("Timeout while creating %s subscriber on %s for %s", cmd.StreamType, publisherId, session.PublicId())
session.sendMessage(message.NewErrorServerMessage(TimeoutCreatingSubscriber))
return
} else if err != nil {
var subscriber signaling.McuSubscriber
var err error
handleCreateError := func(err error) {
if err == context.DeadlineExceeded {
log.Printf("Timeout while creating %s subscriber on %s for %s", cmd.StreamType, publisherId, session.PublicId())
session.sendMessage(message.NewErrorServerMessage(TimeoutCreatingSubscriber))
return
} else if errors.Is(err, signaling.ErrRemoteStreamsNotSupported) {
session.sendMessage(message.NewErrorServerMessage(RemoteSubscribersNotSupported))
return
}
log.Printf("Error while creating %s subscriber on %s for %s: %s", cmd.StreamType, publisherId, session.PublicId(), err)
session.sendMessage(message.NewWrappedErrorServerMessage(err))
return
}
log.Printf("Created %s subscriber %s as %s for %s", cmd.StreamType, subscriber.Id(), id, session.PublicId())
if cmd.RemoteUrl != "" {
if s.tokenId == "" || s.tokenKey == nil || s.remoteHostname == "" {
session.sendMessage(message.NewErrorServerMessage(RemoteSubscribersNotSupported))
return
}
remoteMcu, ok := s.mcu.(signaling.RemoteMcu)
if !ok {
session.sendMessage(message.NewErrorServerMessage(RemoteSubscribersNotSupported))
return
}
claims, _, err := s.parseToken(cmd.RemoteToken)
if err != nil {
if e, ok := err.(*signaling.Error); ok {
client.SendMessage(message.NewErrorServerMessage(e))
} else {
client.SendMessage(message.NewWrappedErrorServerMessage(err))
}
return
}
if claims.Subject != publisherId {
session.sendMessage(message.NewErrorServerMessage(TokenAuthFailed))
return
}
subCtx, cancel := context.WithTimeout(ctx, remotePublisherTimeout)
defer cancel()
log.Printf("Creating remote subscriber for %s on %s", publisherId, cmd.RemoteUrl)
controller := &proxyRemotePublisher{
proxy: s,
remoteUrl: cmd.RemoteUrl,
publisherId: publisherId,
}
var publisher signaling.McuRemotePublisher
publisher, err = remoteMcu.NewRemotePublisher(subCtx, session, controller, cmd.StreamType)
if err != nil {
handleCreateError(err)
return
}
defer func() {
go publisher.Close(context.Background())
}()
subscriber, err = remoteMcu.NewRemoteSubscriber(subCtx, session, publisher)
if err != nil {
handleCreateError(err)
return
}
log.Printf("Created remote %s subscriber %s as %s for %s on %s", cmd.StreamType, subscriber.Id(), id, session.PublicId(), cmd.RemoteUrl)
} else {
subscriber, err = s.mcu.NewSubscriber(ctx, session, publisherId, cmd.StreamType, &emptyInitiator{})
if err != nil {
handleCreateError(err)
return
}
log.Printf("Created %s subscriber %s as %s for %s", cmd.StreamType, subscriber.Id(), id, session.PublicId())
}
session.StoreSubscriber(ctx, id, subscriber)
s.StoreClient(id, subscriber)
@@ -751,6 +1011,77 @@ func (s *ProxyServer) processCommand(ctx context.Context, client *ProxyClient, s
},
}
session.sendMessage(response)
case "publish-remote":
client := s.GetClient(cmd.ClientId)
if client == nil {
session.sendMessage(message.NewErrorServerMessage(UnknownClient))
return
}
publisher, ok := client.(signaling.McuPublisher)
if !ok {
session.sendMessage(message.NewErrorServerMessage(UnknownClient))
return
}
if err := publisher.PublishRemote(ctx, session.PublicId(), cmd.Hostname, cmd.Port, cmd.RtcpPort); err != nil {
var je *janus.ErrorMsg
if !errors.As(err, &je) || je.Err.Code != signaling.JANUS_VIDEOROOM_ERROR_ID_EXISTS {
log.Printf("Error publishing %s %s to remote %s (port=%d, rtcpPort=%d): %s", publisher.StreamType(), cmd.ClientId, cmd.Hostname, cmd.Port, cmd.RtcpPort, err)
session.sendMessage(message.NewWrappedErrorServerMessage(err))
return
}
if err := publisher.UnpublishRemote(ctx, session.PublicId()); err != nil {
log.Printf("Error unpublishing old %s %s to remote %s (port=%d, rtcpPort=%d): %s", publisher.StreamType(), cmd.ClientId, cmd.Hostname, cmd.Port, cmd.RtcpPort, err)
session.sendMessage(message.NewWrappedErrorServerMessage(err))
return
}
if err := publisher.PublishRemote(ctx, session.PublicId(), cmd.Hostname, cmd.Port, cmd.RtcpPort); err != nil {
log.Printf("Error publishing %s %s to remote %s (port=%d, rtcpPort=%d): %s", publisher.StreamType(), cmd.ClientId, cmd.Hostname, cmd.Port, cmd.RtcpPort, err)
session.sendMessage(message.NewWrappedErrorServerMessage(err))
return
}
}
response := &signaling.ProxyServerMessage{
Id: message.Id,
Type: "command",
Command: &signaling.CommandProxyServerMessage{
Id: cmd.ClientId,
},
}
session.sendMessage(response)
case "get-publisher-streams":
client := s.GetClient(cmd.ClientId)
if client == nil {
session.sendMessage(message.NewErrorServerMessage(UnknownClient))
return
}
publisher, ok := client.(signaling.McuPublisher)
if !ok {
session.sendMessage(message.NewErrorServerMessage(UnknownClient))
return
}
streams, err := publisher.GetStreams(ctx)
if err != nil {
log.Printf("Could not get streams of publisher %s: %s", publisher.Id(), err)
session.sendMessage(message.NewWrappedErrorServerMessage(err))
return
}
response := &signaling.ProxyServerMessage{
Id: message.Id,
Type: "command",
Command: &signaling.CommandProxyServerMessage{
Id: cmd.ClientId,
Streams: streams,
},
}
session.sendMessage(response)
default:
log.Printf("Unsupported command %+v", message.Command)
session.sendMessage(message.NewErrorServerMessage(UnsupportedCommand))
@@ -777,9 +1108,10 @@ func (s *ProxyServer) processPayload(ctx context.Context, client *ProxyClient, s
fallthrough
case "candidate":
mcuData = &signaling.MessageClientMessageData{
Type: payload.Type,
Sid: payload.Sid,
Payload: payload.Payload,
RoomType: string(mcuClient.StreamType()),
Type: payload.Type,
Sid: payload.Sid,
Payload: payload.Payload,
}
case "endOfCandidates":
// Ignore but confirm, not passed along to Janus anyway.
@@ -796,14 +1128,21 @@ func (s *ProxyServer) processPayload(ctx context.Context, client *ProxyClient, s
fallthrough
case "sendoffer":
mcuData = &signaling.MessageClientMessageData{
Type: payload.Type,
Sid: payload.Sid,
RoomType: string(mcuClient.StreamType()),
Type: payload.Type,
Sid: payload.Sid,
}
default:
session.sendMessage(message.NewErrorServerMessage(UnsupportedPayload))
return
}
if err := mcuData.CheckValid(); err != nil {
log.Printf("Received invalid payload %+v for %s client %s: %s", mcuData, mcuClient.StreamType(), payload.ClientId, err)
session.sendMessage(message.NewErrorServerMessage(UnsupportedPayload))
return
}
mcuClient.SendMessage(ctx, nil, mcuData, func(err error, response map[string]interface{}) {
var responseMsg *signaling.ProxyServerMessage
if err != nil {
@@ -825,13 +1164,9 @@ func (s *ProxyServer) processPayload(ctx context.Context, client *ProxyClient, s
})
}
func (s *ProxyServer) NewSession(hello *signaling.HelloProxyClientMessage) (*ProxySession, error) {
if proxyDebugMessages {
log.Printf("Hello: %+v", hello)
}
func (s *ProxyServer) parseToken(tokenValue string) (*signaling.TokenClaims, string, error) {
reason := "auth-failed"
token, err := jwt.ParseWithClaims(hello.Token, &signaling.TokenClaims{}, func(token *jwt.Token) (interface{}, error) {
token, err := jwt.ParseWithClaims(tokenValue, &signaling.TokenClaims{}, func(token *jwt.Token) (interface{}, error) {
// Don't forget to validate the alg is what you expect:
if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {
log.Printf("Unexpected signing method: %v", token.Header["alg"])
@@ -863,25 +1198,35 @@ func (s *ProxyServer) NewSession(hello *signaling.HelloProxyClientMessage) (*Pro
})
if err, ok := err.(*jwt.ValidationError); ok {
if err.Errors&jwt.ValidationErrorIssuedAt == jwt.ValidationErrorIssuedAt {
statsTokenErrorsTotal.WithLabelValues("not-valid-yet").Inc()
return nil, TokenNotValidYet
return nil, "not-valid-yet", TokenNotValidYet
}
}
if err != nil {
statsTokenErrorsTotal.WithLabelValues(reason).Inc()
return nil, TokenAuthFailed
return nil, reason, TokenAuthFailed
}
claims, ok := token.Claims.(*signaling.TokenClaims)
if !ok || !token.Valid {
statsTokenErrorsTotal.WithLabelValues("auth-failed").Inc()
return nil, TokenAuthFailed
return nil, "auth-failed", TokenAuthFailed
}
minIssuedAt := time.Now().Add(-maxTokenAge)
if issuedAt := claims.IssuedAt; issuedAt != nil && issuedAt.Before(minIssuedAt) {
statsTokenErrorsTotal.WithLabelValues("expired").Inc()
return nil, TokenExpired
return nil, "expired", TokenExpired
}
return claims, "", nil
}
func (s *ProxyServer) NewSession(hello *signaling.HelloProxyClientMessage) (*ProxySession, error) {
if proxyDebugMessages {
log.Printf("Hello: %+v", hello)
}
claims, reason, err := s.parseToken(hello.Token)
if err != nil {
statsTokenErrorsTotal.WithLabelValues(reason).Inc()
return nil, err
}
sid := s.sid.Add(1)
@@ -977,15 +1322,21 @@ func (s *ProxyServer) HasClients() bool {
return len(s.clients) > 0
}
func (s *ProxyServer) GetClientsLoad() int64 {
func (s *ProxyServer) GetClientsLoad() (load int64, incoming int64, outgoing int64) {
s.clientsLock.RLock()
defer s.clientsLock.RUnlock()
var load int64
for _, c := range s.clients {
load += int64(c.MaxBitrate())
bitrate := int64(c.MaxBitrate())
load += bitrate
if _, ok := c.(signaling.McuPublisher); ok {
incoming += bitrate
} else if _, ok := c.(signaling.McuSubscriber); ok {
outgoing += bitrate
}
}
return load / 1024
load = load / 1024
return
}
func (s *ProxyServer) GetClient(id string) signaling.McuClient {
@@ -994,6 +1345,22 @@ func (s *ProxyServer) GetClient(id string) signaling.McuClient {
return s.clients[id]
}
func (s *ProxyServer) GetPublisher(publisherId string) signaling.McuPublisher {
s.clientsLock.RLock()
defer s.clientsLock.RUnlock()
for _, c := range s.clients {
pub, ok := c.(signaling.McuPublisher)
if !ok {
continue
}
if pub.Id() == publisherId {
return pub
}
}
return nil
}
func (s *ProxyServer) GetClientId(client signaling.McuClient) string {
s.clientsLock.RLock()
defer s.clientsLock.RUnlock()
@@ -1010,15 +1377,9 @@ func (s *ProxyServer) getStats() map[string]interface{} {
}
func (s *ProxyServer) allowStatsAccess(r *http.Request) bool {
addr := getRealUserIP(r)
if strings.Contains(addr, ":") {
if host, _, err := net.SplitHostPort(addr); err == nil {
addr = host
}
}
addr := signaling.GetRealUserIP(r, s.trustedProxies)
ip := net.ParseIP(addr)
if ip == nil {
if len(ip) == 0 {
return false
}
@@ -1055,3 +1416,21 @@ func (s *ProxyServer) metricsHandler(w http.ResponseWriter, r *http.Request) {
// Expose prometheus metrics at "/metrics".
promhttp.Handler().ServeHTTP(w, r)
}
func (s *ProxyServer) getRemoteConnection(url string) (*RemoteConnection, error) {
s.remoteConnectionsLock.Lock()
defer s.remoteConnectionsLock.Unlock()
conn, found := s.remoteConnections[url]
if found {
return conn, nil
}
conn, err := NewRemoteConnection(url, s.tokenId, s.tokenKey, s.remoteTlsConfig)
if err != nil {
return nil, err
}
s.remoteConnections[url] = conn
return conn, nil
}
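The getRemoteConnection method above lazily creates one connection per remote URL and reuses it under a mutex. A minimal, self-contained sketch of that lazy-cache pattern (the conn and connCache names are illustrative stand-ins, not the proxy's actual types):

```go
package main

import (
	"fmt"
	"sync"
)

// conn is a stand-in for the proxy's RemoteConnection type.
type conn struct{ url string }

// connCache lazily creates and reuses one connection per URL,
// mirroring the getRemoteConnection pattern above.
type connCache struct {
	mu    sync.Mutex
	conns map[string]*conn
	dials int // counts how often a new connection was created
}

func (c *connCache) get(url string) *conn {
	c.mu.Lock()
	defer c.mu.Unlock()
	if existing, found := c.conns[url]; found {
		return existing
	}
	if c.conns == nil {
		c.conns = make(map[string]*conn)
	}
	c.dials++
	created := &conn{url: url}
	c.conns[url] = created
	return created
}

func main() {
	cache := &connCache{}
	a := cache.get("https://proxy1/")
	b := cache.get("https://proxy1/")
	fmt.Println(a == b, cache.dials) // the same connection is reused
}
```

Holding the lock for the whole lookup-or-create keeps the code simple; since creating a RemoteConnection only sets up state (the actual dial happens later), the coarse lock is not a bottleneck here.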
@@ -26,6 +26,7 @@ import (
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"net"
"os"
"testing"
"time"
@@ -92,7 +93,94 @@ func newProxyServerForTest(t *testing.T) (*ProxyServer, *rsa.PrivateKey) {
return server, key
}
func TestTokenValid(t *testing.T) {
signaling.CatchLogForTest(t)
server, key := newProxyServerForTest(t)
claims := &signaling.TokenClaims{
RegisteredClaims: jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now().Add(-maxTokenAge / 2)),
Issuer: TokenIdForTest,
},
}
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
tokenString, err := token.SignedString(key)
if err != nil {
t.Fatalf("could not create token: %s", err)
}
hello := &signaling.HelloProxyClientMessage{
Version: "1.0",
Token: tokenString,
}
session, err := server.NewSession(hello)
if session != nil {
defer session.Close()
} else if err != nil {
t.Error(err)
}
}
func TestTokenNotSigned(t *testing.T) {
signaling.CatchLogForTest(t)
server, _ := newProxyServerForTest(t)
claims := &signaling.TokenClaims{
RegisteredClaims: jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now().Add(-maxTokenAge / 2)),
Issuer: TokenIdForTest,
},
}
token := jwt.NewWithClaims(jwt.SigningMethodNone, claims)
tokenString, err := token.SignedString(jwt.UnsafeAllowNoneSignatureType)
if err != nil {
t.Fatalf("could not create token: %s", err)
}
hello := &signaling.HelloProxyClientMessage{
Version: "1.0",
Token: tokenString,
}
session, err := server.NewSession(hello)
if session != nil {
defer session.Close()
t.Errorf("should not have created session")
} else if err != TokenAuthFailed {
t.Errorf("should have failed with TokenAuthFailed, got %s", err)
}
}
func TestTokenUnknown(t *testing.T) {
signaling.CatchLogForTest(t)
server, key := newProxyServerForTest(t)
claims := &signaling.TokenClaims{
RegisteredClaims: jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now().Add(-maxTokenAge / 2)),
Issuer: TokenIdForTest + "2",
},
}
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
tokenString, err := token.SignedString(key)
if err != nil {
t.Fatalf("could not create token: %s", err)
}
hello := &signaling.HelloProxyClientMessage{
Version: "1.0",
Token: tokenString,
}
session, err := server.NewSession(hello)
if session != nil {
defer session.Close()
t.Errorf("should not have created session")
} else if err != TokenAuthFailed {
t.Errorf("should have failed with TokenAuthFailed, got %s", err)
}
}
func TestTokenInFuture(t *testing.T) {
signaling.CatchLogForTest(t)
server, key := newProxyServerForTest(t)
claims := &signaling.TokenClaims{
@@ -119,3 +207,67 @@ func TestTokenInFuture(t *testing.T) {
t.Errorf("should have failed with TokenNotValidYet, got %s", err)
}
}
func TestTokenExpired(t *testing.T) {
signaling.CatchLogForTest(t)
server, key := newProxyServerForTest(t)
claims := &signaling.TokenClaims{
RegisteredClaims: jwt.RegisteredClaims{
IssuedAt: jwt.NewNumericDate(time.Now().Add(-maxTokenAge * 2)),
Issuer: TokenIdForTest,
},
}
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
tokenString, err := token.SignedString(key)
if err != nil {
t.Fatalf("could not create token: %s", err)
}
hello := &signaling.HelloProxyClientMessage{
Version: "1.0",
Token: tokenString,
}
session, err := server.NewSession(hello)
if session != nil {
defer session.Close()
t.Errorf("should not have created session")
} else if err != TokenExpired {
t.Errorf("should have failed with TokenExpired, got %s", err)
}
}
func TestPublicIPs(t *testing.T) {
public := []string{
"8.8.8.8",
"172.15.1.2",
"172.32.1.2",
"192.167.0.1",
"192.169.0.1",
}
private := []string{
"127.0.0.1",
"10.1.2.3",
"172.16.1.2",
"172.31.1.2",
"192.168.0.1",
"192.168.254.254",
}
for _, s := range public {
ip := net.ParseIP(s)
if len(ip) == 0 {
t.Errorf("invalid IP: %s", s)
} else if !IsPublicIP(ip) {
t.Errorf("should be public IP: %s", s)
}
}
for _, s := range private {
ip := net.ParseIP(s)
if len(ip) == 0 {
t.Errorf("invalid IP: %s", s)
} else if IsPublicIP(ip) {
t.Errorf("should be private IP: %s", s)
}
}
}
@@ -299,8 +299,9 @@ func (s *ProxySession) clearPublishers() {
publisher.Close(context.Background())
}
}(s.publishers)
// Can't use clear(...) here as the map is processed by the goroutine above.
s.publishers = make(map[string]signaling.McuPublisher)
s.publisherIds = make(map[signaling.McuPublisher]string)
clear(s.publisherIds)
}
func (s *ProxySession) clearSubscribers() {
@@ -315,8 +316,9 @@ func (s *ProxySession) clearSubscribers() {
subscriber.Close(context.Background())
}
}(s.subscribers)
// Can't use clear(...) here as the map is processed by the goroutine above.
s.subscribers = make(map[string]signaling.McuSubscriber)
s.subscriberIds = make(map[signaling.McuSubscriber]string)
clear(s.subscriberIds)
}
func (s *ProxySession) NotifyDisconnected() {
@@ -39,6 +39,8 @@ import (
"github.com/dlintw/goconf"
"go.etcd.io/etcd/server/v3/embed"
"go.etcd.io/etcd/server/v3/lease"
signaling "github.com/strukturag/nextcloud-spreed-signaling"
)
var (
@@ -100,6 +102,7 @@ func newEtcdForTesting(t *testing.T) *embed.Etcd {
t.Cleanup(func() {
etcd.Close()
<-etcd.Server.StopNotify()
})
// Wait for server to be ready.
<-etcd.Server.ReadyNotify()
@@ -160,6 +163,7 @@ func generateAndSaveKey(t *testing.T, etcd *embed.Etcd, name string) *rsa.Privat
}
func TestProxyTokensEtcd(t *testing.T) {
signaling.CatchLogForTest(t)
tokens, etcd := newTokensEtcdForTesting(t)
key1 := generateAndSaveKey(t, etcd, "/foo")
@@ -41,6 +41,9 @@ type proxyConfigEtcd struct {
keyPrefix string
keyInfos map[string]*ProxyInformationEtcd
urlToKey map[string]string
closeCtx context.Context
closeFunc context.CancelFunc
}
func NewProxyConfigEtcd(config *goconf.ConfigFile, etcdClient *EtcdClient, proxy McuProxy) (ProxyConfig, error) {
@@ -48,12 +51,17 @@ func NewProxyConfigEtcd(config *goconf.ConfigFile, etcdClient *EtcdClient, proxy
return nil, errors.New("No etcd endpoints configured")
}
closeCtx, closeFunc := context.WithCancel(context.Background())
result := &proxyConfigEtcd{
proxy: proxy,
client: etcdClient,
keyInfos: make(map[string]*ProxyInformationEtcd),
urlToKey: make(map[string]string),
closeCtx: closeCtx,
closeFunc: closeFunc,
}
if err := result.configure(config, false); err != nil {
return nil, err
@@ -83,17 +91,16 @@ func (p *proxyConfigEtcd) Reload(config *goconf.ConfigFile) error {
func (p *proxyConfigEtcd) Stop() {
p.client.RemoveListener(p)
p.closeFunc()
}
func (p *proxyConfigEtcd) EtcdClientCreated(client *EtcdClient) {
go func() {
if err := client.Watch(context.Background(), p.keyPrefix, p, clientv3.WithPrefix()); err != nil {
log.Printf("Error processing watch for %s: %s", p.keyPrefix, err)
}
}()
if err := client.WaitForConnection(p.closeCtx); err != nil {
if errors.Is(err, context.Canceled) {
return
}
go func() {
if err := client.WaitForConnection(context.Background()); err != nil {
panic(err)
}
@@ -101,23 +108,47 @@ func (p *proxyConfigEtcd) EtcdClientCreated(client *EtcdClient) {
if err != nil {
panic(err)
}
for {
response, err := p.getProxyUrls(client, p.keyPrefix)
var nextRevision int64
for p.closeCtx.Err() == nil {
response, err := p.getProxyUrls(p.closeCtx, client, p.keyPrefix)
if err != nil {
if err == context.DeadlineExceeded {
if errors.Is(err, context.Canceled) {
return
} else if errors.Is(err, context.DeadlineExceeded) {
log.Printf("Timeout getting initial list of proxy URLs, retry in %s", backoff.NextWait())
} else {
log.Printf("Could not get initial list of proxy URLs, retry in %s: %s", backoff.NextWait(), err)
}
backoff.Wait(context.Background())
backoff.Wait(p.closeCtx)
continue
}
for _, ev := range response.Kvs {
p.EtcdKeyUpdated(client, string(ev.Key), ev.Value)
p.EtcdKeyUpdated(client, string(ev.Key), ev.Value, nil)
}
nextRevision = response.Header.Revision + 1
break
}
prevRevision := nextRevision
backoff.Reset()
for p.closeCtx.Err() == nil {
var err error
if nextRevision, err = client.Watch(p.closeCtx, p.keyPrefix, nextRevision, p, clientv3.WithPrefix()); err != nil {
log.Printf("Error processing watch for %s (%s), retry in %s", p.keyPrefix, err, backoff.NextWait())
backoff.Wait(p.closeCtx)
continue
}
if nextRevision != prevRevision {
backoff.Reset()
prevRevision = nextRevision
} else {
log.Printf("Processing watch for %s interrupted, retry in %s", p.keyPrefix, backoff.NextWait())
backoff.Wait(p.closeCtx)
}
return
}
}()
}
@@ -125,14 +156,14 @@ func (p *proxyConfigEtcd) EtcdClientCreated(client *EtcdClient) {
func (p *proxyConfigEtcd) EtcdWatchCreated(client *EtcdClient, key string) {
}
func (p *proxyConfigEtcd) getProxyUrls(client *EtcdClient, keyPrefix string) (*clientv3.GetResponse, error) {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
func (p *proxyConfigEtcd) getProxyUrls(ctx context.Context, client *EtcdClient, keyPrefix string) (*clientv3.GetResponse, error) {
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
return client.Get(ctx, keyPrefix, clientv3.WithPrefix())
}
func (p *proxyConfigEtcd) EtcdKeyUpdated(client *EtcdClient, key string, data []byte) {
func (p *proxyConfigEtcd) EtcdKeyUpdated(client *EtcdClient, key string, data []byte, prevValue []byte) {
var info ProxyInformationEtcd
if err := json.Unmarshal(data, &info); err != nil {
log.Printf("Could not decode proxy information %s: %s", string(data), err)
@@ -173,7 +204,7 @@ func (p *proxyConfigEtcd) EtcdKeyUpdated(client *EtcdClient, key string, data []
}
}
func (p *proxyConfigEtcd) EtcdKeyDeleted(client *EtcdClient, key string) {
func (p *proxyConfigEtcd) EtcdKeyDeleted(client *EtcdClient, key string, prevValue []byte) {
p.mu.Lock()
defer p.mu.Unlock()
@@ -62,6 +62,8 @@ func SetEtcdProxy(t *testing.T, etcd *embed.Etcd, path string, proxy *TestProxyI
}
func TestProxyConfigEtcd(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
proxy := newMcuProxyForConfig(t)
etcd, config := newProxyConfigEtcd(t, proxy)
@@ -86,7 +86,7 @@ func (p *proxyConfigStatic) configure(config *goconf.ConfigFile, fromReload bool
remove[u] = ips
}
mcuUrl, _ := config.GetString("mcu", "url")
mcuUrl, _ := GetStringOptionWithEnv(config, "mcu", "url")
for _, u := range strings.Split(mcuUrl, " ") {
u = strings.TrimSpace(u)
if u == "" {
@@ -59,6 +59,7 @@ func updateProxyConfigStatic(t *testing.T, config ProxyConfig, dns bool, urls ..
}
func TestProxyConfigStaticSimple(t *testing.T) {
CatchLogForTest(t)
proxy := newMcuProxyForConfig(t)
config, _ := newProxyConfigStatic(t, proxy, false, "https://foo/")
proxy.Expect("add", "https://foo/")
@@ -77,6 +78,7 @@ func TestProxyConfigStaticSimple(t *testing.T) {
}
func TestProxyConfigStaticDNS(t *testing.T) {
CatchLogForTest(t)
lookup := newMockDnsLookupForTest(t)
proxy := newMcuProxyForConfig(t)
config, dnsMonitor := newProxyConfigStatic(t, proxy, true, "https://foo/")
@@ -0,0 +1,99 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2021 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"sync"
)
type publisherStatsCounter struct {
mu sync.Mutex
streamTypes map[StreamType]bool
subscribers map[string]bool
}
func (c *publisherStatsCounter) Reset() {
c.mu.Lock()
defer c.mu.Unlock()
count := len(c.subscribers)
for streamType := range c.streamTypes {
statsMcuPublisherStreamTypesCurrent.WithLabelValues(string(streamType)).Dec()
statsMcuSubscriberStreamTypesCurrent.WithLabelValues(string(streamType)).Sub(float64(count))
}
c.streamTypes = nil
c.subscribers = nil
}
func (c *publisherStatsCounter) EnableStream(streamType StreamType, enable bool) {
c.mu.Lock()
defer c.mu.Unlock()
if enable == c.streamTypes[streamType] {
return
}
if enable {
if c.streamTypes == nil {
c.streamTypes = make(map[StreamType]bool)
}
c.streamTypes[streamType] = true
statsMcuPublisherStreamTypesCurrent.WithLabelValues(string(streamType)).Inc()
statsMcuSubscriberStreamTypesCurrent.WithLabelValues(string(streamType)).Add(float64(len(c.subscribers)))
} else {
delete(c.streamTypes, streamType)
statsMcuPublisherStreamTypesCurrent.WithLabelValues(string(streamType)).Dec()
statsMcuSubscriberStreamTypesCurrent.WithLabelValues(string(streamType)).Sub(float64(len(c.subscribers)))
}
}
func (c *publisherStatsCounter) AddSubscriber(id string) {
c.mu.Lock()
defer c.mu.Unlock()
if c.subscribers[id] {
return
}
if c.subscribers == nil {
c.subscribers = make(map[string]bool)
}
c.subscribers[id] = true
for streamType := range c.streamTypes {
statsMcuSubscriberStreamTypesCurrent.WithLabelValues(string(streamType)).Inc()
}
}
func (c *publisherStatsCounter) RemoveSubscriber(id string) {
c.mu.Lock()
defer c.mu.Unlock()
if !c.subscribers[id] {
return
}
delete(c.subscribers, id)
for streamType := range c.streamTypes {
statsMcuSubscriberStreamTypesCurrent.WithLabelValues(string(streamType)).Dec()
}
}
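The publisherStatsCounter above keeps per-stream-type Prometheus gauges in sync as streams are enabled and subscribers come and go. A simplified, dependency-free sketch of the same bookkeeping, with a plain map of integers standing in for the gauges (illustrative only):

```go
package main

import (
	"fmt"
	"sync"
)

// statsCounter mirrors publisherStatsCounter's bookkeeping; the
// gauges map replaces statsMcuSubscriberStreamTypesCurrent.
type statsCounter struct {
	mu          sync.Mutex
	streamTypes map[string]bool
	subscribers map[string]bool
	gauges      map[string]int // current subscribers per stream type
}

func newStatsCounter() *statsCounter {
	return &statsCounter{
		streamTypes: make(map[string]bool),
		subscribers: make(map[string]bool),
		gauges:      make(map[string]int),
	}
}

func (c *statsCounter) EnableStream(streamType string, enable bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if enable == c.streamTypes[streamType] {
		return // no change
	}
	if enable {
		c.streamTypes[streamType] = true
		// All existing subscribers now count against this stream type.
		c.gauges[streamType] += len(c.subscribers)
	} else {
		delete(c.streamTypes, streamType)
		c.gauges[streamType] -= len(c.subscribers)
	}
}

func (c *statsCounter) AddSubscriber(id string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.subscribers[id] {
		return // already counted
	}
	c.subscribers[id] = true
	for streamType := range c.streamTypes {
		c.gauges[streamType]++
	}
}

func main() {
	c := newStatsCounter()
	c.EnableStream("video", true)
	c.AddSubscriber("sub1")
	c.AddSubscriber("sub2")
	fmt.Println(c.gauges["video"]) // 2
}
```

The guard clauses (returning when a stream type or subscriber is already in the desired state) are what keep the gauges from drifting when callers enable or add the same entry twice.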
remotesession.go (new file)
@@ -0,0 +1,154 @@
/**
* Standalone signaling server for the Nextcloud Spreed app.
* Copyright (C) 2024 struktur AG
*
* @author Joachim Bauch <bauch@struktur.de>
*
* @license GNU AGPL version 3 or any later version
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package signaling
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"sync/atomic"
"time"
)
type RemoteSession struct {
hub *Hub
client *Client
remoteClient *GrpcClient
sessionId string
proxy atomic.Pointer[SessionProxy]
}
func NewRemoteSession(hub *Hub, client *Client, remoteClient *GrpcClient, sessionId string) (*RemoteSession, error) {
remoteSession := &RemoteSession{
hub: hub,
client: client,
remoteClient: remoteClient,
sessionId: sessionId,
}
client.SetSessionId(sessionId)
client.SetHandler(remoteSession)
// Don't use "client.Context()" here as it could close the proxy connection
// before any final messages are forwarded to the remote end.
proxy, err := remoteClient.ProxySession(context.Background(), sessionId, remoteSession)
if err != nil {
return nil, err
}
remoteSession.proxy.Store(proxy)
return remoteSession, nil
}
func (s *RemoteSession) Country() string {
return s.client.Country()
}
func (s *RemoteSession) RemoteAddr() string {
return s.client.RemoteAddr()
}
func (s *RemoteSession) UserAgent() string {
return s.client.UserAgent()
}
func (s *RemoteSession) IsConnected() bool {
return true
}
func (s *RemoteSession) Start(message *ClientMessage) error {
return s.sendMessage(message)
}
func (s *RemoteSession) OnProxyMessage(msg *ServerSessionMessage) error {
var message *ServerMessage
if err := json.Unmarshal(msg.Message, &message); err != nil {
return err
}
if !s.client.SendMessage(message) {
return fmt.Errorf("could not send message to client")
}
return nil
}
func (s *RemoteSession) OnProxyClose(err error) {
if err != nil {
log.Printf("Proxy connection for session %s to %s was closed with error: %s", s.sessionId, s.remoteClient.Target(), err)
}
s.Close()
}
func (s *RemoteSession) SendMessage(message WritableClientMessage) bool {
return s.sendMessage(message) == nil
}
func (s *RemoteSession) sendProxyMessage(message []byte) error {
proxy := s.proxy.Load()
if proxy == nil {
return errors.New("proxy already closed")
}
msg := &ClientSessionMessage{
Message: message,
}
return proxy.Send(msg)
}
func (s *RemoteSession) sendMessage(message interface{}) error {
data, err := json.Marshal(message)
if err != nil {
return err
}
return s.sendProxyMessage(data)
}
func (s *RemoteSession) Close() {
if proxy := s.proxy.Swap(nil); proxy != nil {
proxy.Close()
}
s.hub.unregisterRemoteSession(s)
s.client.Close()
}
func (s *RemoteSession) OnLookupCountry(client HandlerClient) string {
return s.hub.OnLookupCountry(client)
}
func (s *RemoteSession) OnClosed(client HandlerClient) {
s.Close()
}
func (s *RemoteSession) OnMessageReceived(client HandlerClient, message []byte) {
if err := s.sendProxyMessage(message); err != nil {
log.Printf("Error sending %s to the proxy for session %s: %s", string(message), s.sessionId, err)
s.Close()
}
}
func (s *RemoteSession) OnRTTReceived(client HandlerClient, rtt time.Duration) {
}
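RemoteSession.Close above relies on atomic.Pointer.Swap(nil): only the caller that wins the swap closes the proxy, so concurrent or repeated Close calls are safe. A self-contained sketch of that idempotent-close pattern (proxyStub and closer are illustrative stand-ins):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// proxyStub stands in for the SessionProxy used by RemoteSession.
type proxyStub struct{ closed bool }

func (p *proxyStub) Close() { p.closed = true }

// closer mirrors how RemoteSession.Close swaps the pointer to nil
// before closing, making Close idempotent and race-free.
type closer struct {
	proxy atomic.Pointer[proxyStub]
}

func (c *closer) Close() bool {
	// Swap returns the previous value; only one caller sees non-nil.
	if p := c.proxy.Swap(nil); p != nil {
		p.Close()
		return true
	}
	return false
}

func main() {
	c := &closer{}
	p := &proxyStub{}
	c.proxy.Store(p)
	fmt.Println(c.Close(), c.Close(), p.closed) // second Close is a no-op
}
```

The same pointer is also what sendProxyMessage loads, so a message sent after Close fails cleanly with "proxy already closed" instead of racing against the teardown.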
room.go
@@ -65,7 +65,7 @@ type Room struct {
events AsyncEvents
backend *Backend
properties *json.RawMessage
properties json.RawMessage
closer *Closer
mu *sync.RWMutex
@@ -95,7 +95,7 @@ func getRoomIdForBackend(id string, backend *Backend) string {
return backend.Id() + "|" + id
}
func NewRoom(roomId string, properties *json.RawMessage, hub *Hub, events AsyncEvents, backend *Backend) (*Room, error) {
func NewRoom(roomId string, properties json.RawMessage, hub *Hub, events AsyncEvents, backend *Backend) (*Room, error) {
room := &Room{
id: roomId,
hub: hub,
@@ -136,7 +136,7 @@ func (r *Room) Id() string {
return r.id
}
func (r *Room) Properties() *json.RawMessage {
func (r *Room) Properties() json.RawMessage {
r.mu.RLock()
defer r.mu.RUnlock()
return r.properties
@@ -270,12 +270,12 @@ func (r *Room) processBackendRoomRequestAsyncRoom(message *AsyncRoomMessage) {
}
}
func (r *Room) AddSession(session Session, sessionData *json.RawMessage) {
func (r *Room) AddSession(session Session, sessionData json.RawMessage) {
var roomSessionData *RoomSessionData
if sessionData != nil && len(*sessionData) > 0 {
if len(sessionData) > 0 {
roomSessionData = &RoomSessionData{}
if err := json.Unmarshal(*sessionData, roomSessionData); err != nil {
log.Printf("Error decoding room session data \"%s\": %s", string(*sessionData), err)
if err := json.Unmarshal(sessionData, roomSessionData); err != nil {
log.Printf("Error decoding room session data \"%s\": %s", string(sessionData), err)
roomSessionData = nil
}
}
@@ -480,11 +480,11 @@ func (r *Room) publish(message *ServerMessage) error {
})
}
func (r *Room) UpdateProperties(properties *json.RawMessage) {
func (r *Room) UpdateProperties(properties json.RawMessage) {
r.mu.Lock()
defer r.mu.Unlock()
if (r.properties == nil && properties == nil) ||
(r.properties != nil && properties != nil && bytes.Equal(*r.properties, *properties)) {
if (len(r.properties) == 0 && len(properties) == 0) ||
(len(r.properties) > 0 && len(properties) > 0 && bytes.Equal(r.properties, properties)) {
// Don't notify if properties didn't change.
return
}
@@ -769,7 +769,7 @@ func (r *Room) PublishUsersInCallChangedAll(inCall int) {
Type: "update",
Update: &RoomEventServerMessage{
RoomId: r.id,
InCall: &inCallMsg,
InCall: inCallMsg,
All: true,
},
},
@@ -63,6 +63,7 @@ func NewRoomPingForTest(t *testing.T) (*url.URL, *RoomPing) {
}
func TestSingleRoomPing(t *testing.T) {
CatchLogForTest(t)
u, ping := NewRoomPingForTest(t)
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
@@ -113,6 +114,7 @@ func TestSingleRoomPing(t *testing.T) {
}
func TestMultiRoomPing(t *testing.T) {
CatchLogForTest(t)
u, ping := NewRoomPingForTest(t)
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
@@ -159,6 +161,7 @@ func TestMultiRoomPing(t *testing.T) {
}
func TestMultiRoomPing_Separate(t *testing.T) {
CatchLogForTest(t)
u, ping := NewRoomPingForTest(t)
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
@@ -201,6 +204,7 @@ func TestMultiRoomPing_Separate(t *testing.T) {
}
func TestMultiRoomPing_DeleteRoom(t *testing.T) {
CatchLogForTest(t)
u, ping := NewRoomPingForTest(t)
ctx, cancel := context.WithTimeout(context.Background(), testTimeout)
@@ -73,6 +73,8 @@ func TestRoom_InCall(t *testing.T) {
}
func TestRoom_Update(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
hub, _, router, server := CreateHubForTest(t)
config, err := getTestConfig(server)
@@ -123,7 +125,7 @@ func TestRoom_Update(t *testing.T) {
UserIds: []string{
testDefaultUserId,
},
Properties: &roomProperties,
Properties: roomProperties,
},
}
@@ -164,13 +166,13 @@ func TestRoom_Update(t *testing.T) {
t.Error(err)
} else if msg.RoomId != roomId {
t.Errorf("Expected room id %s, got %+v", roomId, msg)
} else if msg.Properties == nil || !bytes.Equal(*msg.Properties, roomProperties) {
} else if len(msg.Properties) == 0 || !bytes.Equal(msg.Properties, roomProperties) {
t.Errorf("Expected room properties %s, got %+v", string(roomProperties), msg)
}
} else {
if msg.RoomId != roomId {
t.Errorf("Expected room id %s, got %+v", roomId, msg)
} else if msg.Properties == nil || !bytes.Equal(*msg.Properties, roomProperties) {
} else if len(msg.Properties) == 0 || !bytes.Equal(msg.Properties, roomProperties) {
t.Errorf("Expected room properties %s, got %+v", string(roomProperties), msg)
}
if err := checkMessageRoomId(message2, roomId); err != nil {
@@ -191,7 +193,7 @@ loop:
// The internal room has been updated with the new properties.
if room := hub.getRoom(roomId); room == nil {
err = fmt.Errorf("Room %s not found in hub", roomId)
} else if room.Properties() == nil || !bytes.Equal(*room.Properties(), roomProperties) {
} else if len(room.Properties()) == 0 || !bytes.Equal(room.Properties(), roomProperties) {
err = fmt.Errorf("Expected room properties %s, got %+v", string(roomProperties), room.Properties())
} else {
err = nil
@@ -210,6 +212,8 @@ loop:
}
func TestRoom_Delete(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
hub, _, router, server := CreateHubForTest(t)
config, err := getTestConfig(server)
@@ -352,6 +356,8 @@ loop:
}
func TestRoom_RoomSessionData(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
hub, _, router, server := CreateHubForTest(t)
config, err := getTestConfig(server)
@@ -421,6 +427,8 @@ func TestRoom_RoomSessionData(t *testing.T) {
}
func TestRoom_InCallAll(t *testing.T) {
t.Parallel()
CatchLogForTest(t)
hub, _, router, server := CreateHubForTest(t)
config, err := getTestConfig(server)
@@ -22,17 +22,21 @@
package signaling
import (
"context"
"encoding/json"
"errors"
"net/url"
"testing"
"time"
)
type DummySession struct {
publicId string
}
func (s *DummySession) Context() context.Context {
return context.Background()
}
func (s *DummySession) PrivateId() string {
return ""
}
@@ -53,7 +57,7 @@ func (s *DummySession) UserId() string {
return ""
}
func (s *DummySession) UserData() *json.RawMessage {
func (s *DummySession) UserData() json.RawMessage {
return nil
}
@@ -80,10 +84,6 @@ func (s *DummySession) LeaveRoom(notify bool) *Room {
return nil
}
func (s *DummySession) IsExpired(now time.Time) bool {
return false
}
func (s *DummySession) Close() {
}
@@ -91,6 +91,14 @@ func (s *DummySession) HasPermission(permission Permission) bool {
return false
}
func (s *DummySession) SendError(e *Error) bool {
return false
}
func (s *DummySession) SendMessage(message *ServerMessage) bool {
return false
}
func checkSession(t *testing.T, sessions RoomSessions, sessionId string, roomSessionId string) Session {
session := &DummySession{
publicId: sessionId,

scripts/log-simplifier.sh (now executable)
@@ -30,36 +30,40 @@
# Afterwards the script also creates a file per user and session
#
LOG_CONTENT="`cat $1`"
USER_SESSIONS=$(echo "$LOG_CONTENT" | egrep -o '[-a-zA-Z0-9_]{294,}==' | sort | uniq)
if [ -z "$1" ]; then
echo "USAGE: $0 <filename.log>"
exit 1
fi
LOG_CONTENT=$(cat "$1")
USER_SESSIONS=$(echo "$LOG_CONTENT" | grep -E -o '[-a-zA-Z0-9_]{294,}==' | sort | uniq)
NUM_USER_SESSIONS=$(echo "$USER_SESSIONS" | wc -l)
echo "User sessions found: $NUM_USER_SESSIONS"
for i in $(seq 1 $NUM_USER_SESSIONS);
for i in $(seq 1 "$NUM_USER_SESSIONS");
do
SESSION_NAME=$(echo "$USER_SESSIONS" | head -n $i | tail -n 1)
LOG_CONTENT=$(echo "${LOG_CONTENT//$SESSION_NAME/user$i}")
SESSION_NAME=$(echo "$USER_SESSIONS" | head -n "$i" | tail -n 1)
LOG_CONTENT="${LOG_CONTENT//$SESSION_NAME/user$i}"
done
ROOM_SESSIONS=$(echo "$LOG_CONTENT" | egrep -o '[-a-zA-Z0-9_+\/]{255}( |$)' | sort | uniq)
ROOM_SESSIONS=$(echo "$LOG_CONTENT" | grep -E -o '[-a-zA-Z0-9_+\/]{255}( |$)' | sort | uniq)
NUM_ROOM_SESSIONS=$(echo "$ROOM_SESSIONS" | wc -l)
echo "Room sessions found: $NUM_ROOM_SESSIONS"
for i in $(seq 1 $NUM_ROOM_SESSIONS);
for i in $(seq 1 "$NUM_ROOM_SESSIONS");
do
SESSION_NAME=$(echo "$ROOM_SESSIONS" | head -n $i | tail -n 1)
LOG_CONTENT=$(echo "${LOG_CONTENT//$SESSION_NAME/session$i}")
SESSION_NAME=$(echo "$ROOM_SESSIONS" | head -n "$i" | tail -n 1)
LOG_CONTENT="${LOG_CONTENT//$SESSION_NAME/session$i}"
done
echo "$LOG_CONTENT" > simple.log
for i in $(seq 1 $NUM_USER_SESSIONS);
for i in $(seq 1 "$NUM_USER_SESSIONS");
do
echo "$LOG_CONTENT" | egrep "user$i( |$)" > user$i.log
echo "$LOG_CONTENT" | grep -E "user$i( |$)" > "user$i.log"
done
for i in $(seq 1 $NUM_ROOM_SESSIONS);
for i in $(seq 1 "$NUM_ROOM_SESSIONS");
do
echo "$LOG_CONTENT" | egrep "session$i( |$)" > session$i.log
echo "$LOG_CONTENT" | grep -E "session$i( |$)" > "session$i.log"
done
@@ -7,7 +7,7 @@
#readtimeout = 15
# HTTP socket write timeout in seconds.
#writetimeout = 15
#writetimeout = 30
[https]
# IP and port to listen on for HTTPS requests.
@@ -18,7 +18,7 @@
#readtimeout = 15
# HTTPS socket write timeout in seconds.
#writetimeout = 15
#writetimeout = 30
# Certificate / private key to use for the HTTPS server.
certificate = /etc/nginx/ssl/server.crt
@@ -34,6 +34,12 @@ debug = false
# room and call can be subscribed.
#allowsubscribeany = false
# Comma separated list of trusted proxies (IPs or CIDR networks) that may set
# the "X-Real-Ip" or "X-Forwarded-For" headers. If both are provided, the
# "X-Real-Ip" header will take precedence (if valid).
# Leave empty to allow loopback and local addresses.
#trustedproxies =
[sessions]
# Secret value used to generate checksums of sessions. This should be a random
# string of 32 or 64 bytes.

Some files were not shown because too many files have changed in this diff.