Compare commits

...

13 commits

Author SHA1 Message Date
Nicola Murino 193d11587d
examples: update docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:40:20 +02:00
Nicola Murino 4d4d2ad801
remove obsolete rest-api-cli
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:11:14 +02:00
Nicola Murino 9b8407aeb0
remove fail2ban docs. Built-in defender should be used
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:09:41 +02:00
Nicola Murino aa4a7aa6f6
update some descriptions
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 14:01:33 +02:00
Nicola Murino 4bac74a149
remove docs and add a link to new documentation website
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 13:47:14 +02:00
Nicola Murino dd9b0b151f
sftpfs: simplify client creation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 12:03:38 +02:00
Nicola Murino 0a8a0ee771
revert #450
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-27 10:50:25 +02:00
Nicola Murino 2bcf05ca45
refactor for secrets management in API and private key handling in SFTPFs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 16:17:24 +02:00
Nicola Murino aa426016f2
sftpd: remove folder_prefix
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 11:43:25 +02:00
Nicola Murino 1fc0f21506
hooks: remove logging output from external programs
This reverts #1208 because the contributor did not respond to our
request to sign the CLA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 11:13:16 +02:00
Nicola Murino e1fdc10ef8
remove robots.txt endpoint
This reverts #833 because the contributor did not respond to our
request to sign the CLA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 11:00:55 +02:00
Nicola Murino 26d19abf61
remove reading data provider username and password from file
This reverts #1455 because the contributor cannot sign the CLA

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 10:57:38 +02:00
Nicola Murino 590a1f1429
update deps and CI
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2024-04-26 10:26:53 +02:00
141 changed files with 261 additions and 7931 deletions

View file

@@ -521,6 +521,6 @@ jobs:
go-version: '1.22'
- uses: actions/checkout@v4
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v4
uses: golangci/golangci-lint-action@v5
with:
version: latest

View file

@@ -178,7 +178,7 @@ jobs:
FEATURES=${{ steps.info.outputs.features }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support
org.opencontainers.image.description=Full-featured and highly configurable file transfer server: SFTP, HTTP/S, FTP/S, WebDAV
org.opencontainers.image.url=https://github.com/drakkan/sftpgo
org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=https://github.com/drakkan/sftpgo

README.md
View file

@@ -5,23 +5,27 @@
[![License: AGPL-3.0-only](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Fully featured and highly configurable SFTP server with optional HTTP/S, FTP/S and WebDAV support.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
Full-featured and highly configurable event-driven file transfer solution.
Server protocols: SFTP, HTTP/S, FTP/S, WebDAV.
Storage backends: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, other SFTP servers.
With SFTPGo you can leverage local and cloud storage backends for exchanging and storing files internally or with business partners using the same tools and processes you are already familiar with.
The WebAdmin UI allows you to easily create and manage your users, folders, groups and other resources.
The WebClient UI allows end users to change their credentials, browse and manage their files in the browser and set up two-factor authentication which works with Microsoft Authenticator, Google Authenticator, Authy and other compatible apps.
## Sponsors
If you find SFTPGo useful please consider supporting this Open Source project.
We strongly believe in the Open Source software model, so we decided to make SFTPGo available to everyone, but maintaining and evolving SFTPGo takes a lot of time and work. To make development and maintenance sustainable you should consider supporting the project with a [sponsorship](https://github.com/sponsors/drakkan).
Maintaining and evolving SFTPGo is a lot of work - easily the equivalent of a full-time job - for me.
We also provide [professional services](https://sftpgo.com/#pricing) to support you in using SFTPGo to the fullest.
I'd like to make SFTPGo a sustainable long-term project and would not like to introduce a dual licensing option and limit some features to the proprietary version only.
The open source license grants you freedom but no assurance of help. So why would you rely on free software without support or any guarantee it will stay healthy and maintained for the upcoming years?
If you use SFTPGo, it is in your best interest to ensure that the project you rely on stays healthy and well maintained.
This can only happen with your donations and [sponsorships](https://github.com/sponsors/drakkan) :heart:
Supporting the project benefits businesses and the community: if the project is financially sustainable using this business model, we don't have to restrict features and/or switch to an [Open-core](https://en.wikipedia.org/wiki/Open-core_model) model. The technology stays truly open source. Everyone wins.
You can also purchase, using many payment methods, support plans from the [SFTPGo website](https://sftpgo.com/#pricing).
With sponsorships/donations or support plans we establish a channel for reciprocal access, ensuring better outcomes for both you and the project.
You should support the project for its ongoing maintenance, even if you don't have any questions or need new features. If SFTPGo is no longer maintained you will run into trouble and your company will lose money: bugs and security vulnerabilities will no longer be fixed, new algorithms will not be added to support newer clients, and so on. You will be forced to switch to a similar proprietary product and pay for its license and the migration cost.
### Thank you to our sponsors
@@ -47,317 +51,15 @@ With sponsorships/donations or support plans we establish a channel for reciproc
## Support policy
SFTPGo is an Open Source project and you can of course use it for free but please don't ask for free support as well.
You can use SFTPGo for free, respecting the obligations of the Open Source license, but please do not ask or expect free support as well.
We will check reported issues to see whether you are experiencing a bug and, if so, it may or may not be fixed: we only provide support to project [sponsors/donors](#sponsors).
Use [discussions](https://github.com/drakkan/sftpgo/discussions) to ask questions and get support from the community.
If you report an invalid issue or ask for step-by-step support, your issue will remain open with no answer or will be closed as invalid without further explanation. Thanks for understanding.
If you report an invalid issue and/or ask for step-by-step support, your issue will be closed as invalid without further explanation. Invalid bug reports left open may confuse other users. Thanks for understanding.
## Features
## Documentation
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, a user with the S3 backend mapping a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users; for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir and rmdir, on SSH commands, and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per-user and per-directory virtual permissions, for each path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode and modification time.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- The [Event Manager](./docs/eventmanager.md) allows you to define custom workflows based on server events or schedules.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files in the browser.
- Public key and password authentication. Multiple public keys per-user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per-user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator, Microsoft Authenticator and other compatible apps.
- LDAP/Active Directory authentication using a [plugin](https://github.com/sftpgo/sftpgo-plugin-auth).
- Simplified user administration using [groups](./docs/groups.md).
- [Roles](./docs/roles.md) allow you to create limited administrators who can only create and manage users with their role.
- Custom authentication via [external programs/HTTP API](./docs/external-auth.md).
- Web Client and Web Admin user interfaces support [OpenID Connect](https://openid.net/connect/) authentication and so they can be integrated with identity providers such as [Keycloak](https://www.keycloak.org/). You can find more details [here](./docs/oidc.md).
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via [external programs/HTTP API](./docs/dynamic-user-mod.md).
- Quota support: accounts can have individual disk quota expressed as max total size and/or max number of files.
- Bandwidth throttling, with separate settings for upload and download and overrides based on the client's IP address.
- Data transfer bandwidth limits, with total limit or separate settings for uploads and downloads and overrides based on the client's IP address. Limits can be reset using the REST API.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per-user maximum concurrent sessions.
- Per-user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per-user and per-directory shell-like pattern filters: files can be allowed, denied and optionally hidden based on shell-like patterns.
- Automatically terminating idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Geo-IP filtering using a [plugin](https://github.com/sftpgo/sftpgo-plugin-geoipfilter).
- Atomic uploads are configurable.
- Per-user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- ACME protocol is supported. SFTPGo can obtain and automatically renew TLS certificates for HTTPS, WebDAV and FTPS from `Let's Encrypt` or other ACME compliant certificate authorities, using the `HTTP-01` or `TLS-ALPN-01` [challenge types](https://letsencrypt.org/docs/challenge-types/).
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per-user protocols restrictions. You can configure the allowed protocols (SSH/HTTP/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are supported.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP service without losing the information about the client's address.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- The configuration format is your choice: JSON, TOML, YAML, HCL and envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.
- Infrastructure as Code (IaC) support using the [Terraform provider](https://registry.terraform.io/providers/drakkan/sftpgo/latest).
- Partial (experimental) support for [internationalization](./docs/internationalization.md).
## Platforms
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS, Windows and FreeBSD. Other *BSD variants should work too.
## Requirements
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./.github/workflows).
- A suitable SQL server to use as data provider:
- upstream supported versions of PostgreSQL, MySQL and MariaDB.
- CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded SQLite, bolt or in-memory data provider.
## Installation
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
An official Docker image is available. Documentation is [here](./docker/README.md).
<details>
<summary>Some Linux distro packages are available</summary>
- For Arch Linux via AUR:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
- [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
- Void Linux provides an [official package](https://github.com/void-linux/void-packages/tree/master/srcpkgs/sftpgo).
</details>
APT and YUM repositories are [available](./docs/repo.md).
SFTPGo is also available on some marketplaces:
- [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335)
- Azure Marketplace: [SFTPGo for Linux](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/eliamarzia1667381463185.sftpgo_linux), [SFTPGo for Windows](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/eliamarzia1667381463185.sftpgo_windows), [SFTPGo for AKS](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/eliamarzia1667381463185.sftpgo_aks).
- [Elest.io](https://elest.io/open-source/sftpgo)
Purchasing from there will help keep SFTPGo a long-term sustainable project.
<details><summary>Windows packages</summary>
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/install) package to install and run SFTPGo as a Windows service: `winget install SFTPGo`.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
</details>
On macOS you can install from the Homebrew [Formula](https://formulae.brew.sh/formula/sftpgo).
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On DragonFlyBSD you can install SFTPGo from [DPorts](https://github.com/DragonFlyBSD/DPorts/tree/master/ftp/sftpgo).
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternatively, you can [build from source](./docs/build-from-source.md).
[Getting Started Guide for the Impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.
To start SFTPGo with the default settings, simply run:
```bash
sftpgo serve
```
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
### Data provider initialization and management
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
For PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require initialization but may require an update to the existing data after upgrading SFTPGo.
SFTPGo will attempt to automatically detect whether the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.
Alternatively, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
```bash
sftpgo initprovider
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```bash
sftpgo initprovider --help
```
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
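For example, a minimal sketch using the environment-variable override; the mapping of `update_mode` to `SFTPGO_DATA_PROVIDER__UPDATE_MODE` assumes the key lives under the data provider section, check the full configuration reference:
```bash
# assumption: update_mode is a data provider setting, so the override would be:
export SFTPGO_DATA_PROVIDER__UPDATE_MODE=1
sftpgo serve
```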
You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
```bash
sftpgo resetprovider --help
```
:warning: Please note that some data providers (e.g. MySQL and CockroachDB) do not support schema changes within a transaction; this means that you may end up with an inconsistent schema if migrations are forcibly aborted. CockroachDB doesn't support database-level locks, so make sure you don't execute migrations concurrently.
## Create the first admin
To start using SFTPGo you need to create an admin user. You can do this in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD` (see the sketch below)
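For the last option, a minimal sketch; the `SFTPGO_DATA_PROVIDER__CREATE_DEFAULT_ADMIN` mapping assumes `create_default_admin` lives under the data provider section, and the credentials are placeholders:
```bash
# assumption: create_default_admin is a data provider setting
export SFTPGO_DATA_PROVIDER__CREATE_DEFAULT_ADMIN=1
export SFTPGO_DEFAULT_ADMIN_USERNAME=admin
export SFTPGO_DEFAULT_ADMIN_PASSWORD='choose-a-strong-password'
sftpgo serve
```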
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples for supported upgrade paths are:
- from 2.1.x to 2.2.x
- from 2.2.x to 2.3.x and so on.
For supported upgrade paths, the data and schema are migrated automatically when SFTPGo starts, alternatively you can use the `initprovider` command before starting SFTPGo.
So if, for example, you want to upgrade from 2.0.x to 2.2.x, you must first install version 2.1.x, update the data provider (automatically, by starting SFTPGo, or manually using the `initprovider` command) and finally install version 2.2.x. It is recommended to always install the latest available minor version, i.e. do not install 2.1.0 if 2.1.2 is available.
Loading data from a provider independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.
So, if you plan to downgrade from 2.3.x to 2.2.x, before uninstalling 2.3.x version, you can prepare your data provider executing the following command from the configuration directory:
```shell
sftpgo revertprovider
```
Take a look at the CLI usage to learn how to specify a configuration file:
```shell
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it rather than downgrading to an older unsupported version.
## Users, groups, folders and other resource management
After starting SFTPGo you can manage users, groups, folders and other resources using:
- the [WebAdmin UI](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
To support embedded data providers like `bolt` and `SQLite`, which do not support concurrent connections, we can't have a CLI that directly writes users and other resources to the data provider; we always have to use the REST API.
Full details for users, groups, folders, admins and other resources are documented in the [OpenAPI](./openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
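For example, a minimal sketch that obtains a token and creates a user via the REST API; the `/api/v2/users` path and the JSON body shown are illustrative, see the OpenAPI schema for the authoritative endpoint and field list:
```bash
# get a JWT for the admin "admin" (token endpoint as documented in the OpenAPI schema)
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)

# create a user; the body below is a minimal illustrative example
curl -s -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"username":"user1","password":"user1-password","status":1,"permissions":{"/":["*"]},"home_dir":"/srv/sftpgo/data/user1"}' \
  "http://127.0.0.1:8080/api/v2/users"
```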
:warning: SFTPGo users, groups and folders are virtual and therefore unrelated to the system ones. There is no need to create system-wide users and groups.
## Tutorials
Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
## Authentication options
<details><summary> External Authentication</summary>
Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found [here](./docs/external-auth.md).
</details>
<details><summary> Keyboard Interactive Authentication</summary>
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
More information can be found [here](./docs/keyboard-interactive.md).
</details>
## Dynamic user creation or modification
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).
## Custom Actions
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.
More information about custom actions can be found [here](./docs/custom-actions.md).
## Virtual folders
Directories outside the user home directory or based on a different storage provider can be mapped as virtual folders, more information [here](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3/GCP/Azure
Each user can be mapped to an [S3 Compatible Object Storage](./docs/s3.md)/[Google Cloud Storage](./docs/google-cloud-storage.md)/[Azure Blob Storage](./docs/azure-blob-storage.md) bucket or a bucket virtual folder.
### SFTP backend
Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).
### Encrypted backend
Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).
### HTTP/S backend
HTTP/S backend allows you to write your own custom storage backend by implementing a REST API. More information can be found [here](./docs/httpfs.md).
### Other Storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./internal/vfs/vfs.go#L86 "interface for filesystem backends").
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
However, some backends require a pay-per-use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.
## Brute force protection
SFTPGo supports a built-in [defender](./docs/defender.md).
Alternatively you can use the [connection failed logs](./docs/logs.md) for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.
## Account's configuration properties
Detailed information about account configuration properties can be found [here](./docs/account.md).
## Performance
SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.
More in-depth analysis of performance can be found [here](./docs/performance.md).
You can read more about supported features and documentation at [sftpgo.github.io](https://sftpgo.github.io/).
## Release Cadence

View file

@@ -1,225 +0,0 @@
# Official Docker image
SFTPGo provides an official Docker image; it is available on both [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).
## Supported tags and respective Dockerfile links
- [v2.5.6, v2.5, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.5.6/Dockerfile)
- [v2.5.6-plugins, v2.5-plugins, v2-plugins, plugins](https://github.com/drakkan/sftpgo/blob/v2.5.6/Dockerfile)
- [v2.5.6-alpine, v2.5-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.5.6/Dockerfile.alpine)
- [v2.5.6-slim, v2.5-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.5.6/Dockerfile)
- [v2.5.6-alpine-slim, v2.5-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.5.6/Dockerfile.alpine)
- [edge](../Dockerfile)
- [edge-plugins](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
- [edge-distroless-slim](../Dockerfile.distroless)
## How to use the SFTPGo image
### Start a `sftpgo` server instance
Starting an SFTPGo instance is simple:
```shell
docker run --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), replacing `localhost` with the appropriate IP address if SFTPGo is not reachable on localhost, create the first admin and a new SFTPGo user. The SFTP service is available on port 2022.
If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:
```shell
docker run --rm --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.
### Enable FTP service
FTP is disabled by default. You can enable the FTP service by starting the SFTPGo instance in this way:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 2121:2121 \
-p 50000-50100:50000-50100 \
-e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
-e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
-d "drakkan/sftpgo:tag"
```
The FTP service is now available on port 2121 and SFTP on port 2022.
You can change the passive ports range (`50000-50100` by default) by setting the environment variables `SFTPGO_FTPD__PASSIVE_PORT_RANGE__START` and `SFTPGO_FTPD__PASSIVE_PORT_RANGE__END`.
It is recommended that you provide a certificate and key file to enable FTP over TLS. You should prefer SFTP to FTP even if you configure TLS, please don't blindly enable the old FTP protocol.
### Enable WebDAV service
WebDAV is disabled by default. You can enable the WebDAV service by starting the SFTPGo instance in this way:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 10080:10080 \
-e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
-d "drakkan/sftpgo:tag"
```
The WebDAV service is now available on port 10080 and SFTP on port 2022.
It is recommended that you provide a certificate and key file to enable WebDAV over HTTPS.
### Container shell access and viewing SFTPGo logs
The docker exec command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:
```shell
docker exec -it some-sftpgo sh
```
The logs are available through Docker's container log:
```shell
docker logs some-sftpgo
```
### Container graceful shutdown
```shell
docker run --name some-sftpgo \
-p 2022:2022 \
-e SFTPGO_GRACE_TIME=32 \
-d "drakkan/sftpgo:tag"
```
Setting the `SFTPGO_GRACE_TIME` environment variable to a non-zero value when creating or running a container enables a graceful shutdown period, in seconds, that allows existing connections to complete before being forcibly closed once the time has passed.
While the SFTPGo container is in graceful shutdown mode waiting for the last connection(s) to finish, no new connections will be allowed.
If no connections are active or `SFTPGO_GRACE_TIME=0` (default value if unset) the container will shut down immediately.
:warning: The default docker grace time is 10 seconds, so if your SFTPGO_GRACE_TIME is larger than the docker grace time, then any `docker stop some-sftpgo` command will terminate your container once the docker grace time has passed. To ensure that the full SFTPGO_GRACE_TIME can be used, you can send a SIGINT or SIGTERM signal. Those signals can be sent using one of these commands: `docker kill --signal=SIGINT some-sftpgo` or `docker kill --signal=SIGTERM some-sftpgo`.
Alternatively you can increase the default docker grace time to a value larger than your SFTPGO_GRACE_TIME. The default docker grace time can either be specified at creation/run time using `--stop-timeout <value>` or you can simply add `--time <value>` to the docker stop command, as in this 60-second example: `docker stop --time 60 some-sftpgo`.
### Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:
- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`. The user with ID `1000` must be able to write to this directory. Please note that you don't need an actual user with ID `1000` on your host system: `chown -R 1000:1000 /my/own/sftpgodata` is enough even if there is no user/group with UID/GID `1000`.
2. Create a home directory for the sftpgo container user on your host system e.g. `/my/own/sftpgohome`. As with the data directory above, make sure that the user with ID `1000` can write to this directory: `chown -R 1000:1000 /my/own/sftpgohome`
3. Start your SFTPGo container like this:
```shell
docker run --name some-sftpgo \
-p 8080:8090 \
-p 2022:2022 \
--mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
--mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
-e SFTPGO_HTTPD__BINDINGS__0__PORT=8090 \
-d "drakkan/sftpgo:tag"
```
As you can see SFTPGo uses two main volumes:
- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is the container working directory too, host keys will be created here when using the default configuration.
If you want fine-grained control, you can also mount `/srv/sftpgo/data` and `/srv/sftpgo/backups` as separate volumes instead of mounting `/srv/sftpgo`.
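For example, a sketch with separate mounts; the host source paths are placeholders:
```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo/data \
  --mount type=bind,source=/my/own/sftpgobackups,target=/srv/sftpgo/backups \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```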
### Configuration
The runtime configuration can be customized via environment variables that you can set passing the `-e` option to the `docker run` command or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).
Please take a look [here](../docs/full-configuration.md) to learn how to configure SFTPGo via environment variables.
Alternatively you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
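For example, a sketch assuming a JSON configuration file named `sftpgo.json` on the host; the file name and host path are placeholders, any supported configuration format works:
```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgo.json,target=/var/lib/sftpgo/sftpgo.json \
  -d "drakkan/sftpgo:tag"
```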
### Loading initial data
Initial data can be loaded in the following ways:
- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable. This flag is supported for both the `serve` command (load initial data and start the service) and the `initprovider` command (initialize the provider, load initial data and exit)
- by providing a dump file to the memory provider
Please take a look [here](../docs/full-configuration.md) for more details.
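For example, a sketch that loads a dump file at startup via the `SFTPGO_LOADDATA_FROM` environment variable; the dump file name and host path are placeholders:
```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/initial-data.json,target=/var/lib/sftpgo/initial-data.json \
  -e SFTPGO_LOADDATA_FROM=/var/lib/sftpgo/initial-data.json \
  -d "drakkan/sftpgo:tag"
```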
### Running as an arbitrary user
The SFTPGo image runs using `1000` as UID/GID by default. If you know the permissions of your data and/or configuration directory are already set appropriately, or you need to run SFTPGo with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root/0`) in order to achieve the desired access/configuration:
```shell
$ ls -lnd data
drwxr-xr-x 2 1100 1100 6 7 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 1100 6 7 nov 09.19 config
```
With the above directory permissions, you can start an SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
--user 1100:1100 \
-p 8080:8080 \
-p 2022:2022 \
--mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
--mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
-d "drakkan/sftpgo:tag"
```
Alternatively, build your own image using the official one as a base; here is a sample Dockerfile:
```Dockerfile
FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
```
## Image Variants
The `sftpgo` image comes in many flavors, each designed for a specific use case. The `edge`, `edge-slim`, `edge-alpine`, and `edge-alpine-slim` tags are updated after each new commit.
### `sftpgo:<version>`
This is the de facto image, based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.
### `sftpgo:<version>-alpine`
This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
### `sftpgo:<version>-distroless`
This image is based on the popular [Distroless project](https://github.com/GoogleContainerTools/distroless). We use the latest Debian based distroless image as base.
The distroless variant contains only a statically linked sftpgo binary and its minimal runtime dependencies, so it doesn't allow shell access (no shell is installed).
SQLite support is disabled since it requires CGO, and therefore a C runtime, which is not installed.
The default data provider is `bolt`, all the supported data providers except `sqlite` work.
We only provide the slim variant and so the optional `git` dependency is not available.
### `sftpgo:<suite>-slim`
These tags provide a slimmer image that does not include `jq` and the optional `git` and `rsync` dependencies.
### `sftpgo:<suite>-plugins`
These tags provide the standard image with the addition of all "official" plugins installed in `/usr/local/bin`.
## Helm Chart
A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).
This chart is not maintained by the SFTPGo project and any issues with it should be raised to the upstream repo.

View file

@@ -1,22 +0,0 @@
# Account's configuration properties
Please take a look at the [OpenAPI schema](../openapi/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly. You need to obtain an access token first, for example:
```shell
$ curl "http://admin:password@127.0.0.1:8080/api/v2/token"
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME","expires_at":"2021-02-14T20:41:01Z"}
curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME" "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1"
```
The dump is a JSON file with all SFTPGo data, including users, folders and admins.
These properties are stored inside the configured data provider.
SFTPGo supports checking passwords stored with argon2id, bcrypt, pbkdf2, md5crypt, sha256crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1` or `pbkdf2-sha256` or `pbkdf2-sha512` or `$pbkdf2-b64salt-sha256$`. For example the pbkdf2-sha256 of the word password using 150000 iterations and E86a9YMX3zC7 as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In the pbkdf2 variant with b64salt the salt is base64 encoded. For bcrypt the format must be the one supported by golang's crypto/bcrypt package, for example the password secret with cost 14 must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt, sha256crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$`, `$5$` and `$6$` prefix, this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as argon2id, bcrypt, pbkdf2, md5crypt, sha256crypt or sha512crypt and it will be stored as is.
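For example, a quick way to produce `/etc/shadow`-style hashes for testing is `openssl passwd`, assuming a reasonably recent OpenSSL; the word `secret` is a placeholder:
```shell
# sha512crypt ($6$ prefix)
openssl passwd -6 secret
# sha256crypt ($5$ prefix) and md5crypt ($1$ prefix)
openssl passwd -5 secret
openssl passwd -1 secret
```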
If you want to use your existing accounts, you have these options:
- you can import your users inside SFTPGo. Take a look at [convert users](../examples/convertusers) script, it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can use an external authentication program

View file

@@ -1,20 +0,0 @@
# Azure Blob Storage backend
To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials; we support:
1. Providing an account name and account key.
2. Providing a shared access signature (SAS).
If you authenticate using an account name and key you also need to specify a container. The endpoint can generally be left blank, the default is `blob.core.windows.net`.
If you provide a SAS URL the container is optional and if given it must match the one inside the shared access signature.
If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.
Specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service then the client should wait for the last parts to be uploaded to Azure after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured container must exist.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations. As with S3 `chtime` will fail with the default configuration, you can install the [metadata plugin](https://github.com/sftpgo/sftpgo-plugin-metadata) to make it work and thus be able to preserve/change file modification times.

View file

@@ -1,42 +0,0 @@
# Build SFTPGo from source
Download the sources and use `go build`.
The following build tags are available:
- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled
- `bundle`, embed static files and templates. Before building with this tag enabled you have to copy `openapi`, `static` and `templates` dirs to `internal/bundle` directory. Default disabled
- `unixcrypt`, enable linking to `libcrypt`, default disabled, requires `CGO`
If no build tag is specified the build will include the default features.
The optional [SQLite driver](https://github.com/mattn/go-sqlite3 "go-sqlite3") is a `CGO` package and so it requires a `C` compiler at build time.
On Linux and macOS, a compiler is easy to install or already installed. On Windows, you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.
The compiler is a build time only dependency. It is not required at runtime.
Version info, such as git commit and build date, can be embedded by setting the following string variables at build time:
- `github.com/drakkan/sftpgo/v2/internal/version.commit`
- `github.com/drakkan/sftpgo/v2/internal/version.date`
For example, you can build using the following command:
```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/internal/version.commit=`git describe --always --abbrev=8 --dirty` -X github.com/drakkan/sftpgo/v2/internal/version.date=`date -u +%FT%TZ`" -o sftpgo
```
You should get a version that includes git commit, build date and available features like this one:
```bash
$ ./sftpgo -v
SFTPGo 2.3.1-dev-c8158e1-2022-07-24T17:25:45Z +metrics +azblob +gcs +s3 +bolt +mysql +pgsql +sqlite +portable
```

View file

@@ -1,49 +0,0 @@
# Check password hook
This hook allows you to externally check the provided password. Its main use case is to easily support things like password+OTP for protocols without keyboard interactive support, such as FTP and WebDAV. You can ask your users to log in using a string consisting of a fixed password and a One Time Token; you verify the token inside the hook and ask SFTPGo to verify the fixed part.
The same thing can be achieved using [External authentication](./external-auth.md) but using this hook is simpler in some use cases.
The `check password hook` can be defined as the absolute path of your program or an HTTP URL.
The expected response is a JSON serialized struct containing the following keys:
- `status` integer. 0 means KO, 1 means OK, 2 means partial success
- `to_verify` string. For `status` = 2 SFTPGo will check this password against the one stored inside SFTPGo data provider
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must write, on its standard output, the expected JSON serialized response described above.
Any output of the program on its standard error will be recorded in the SFTPGo logs with sender `check_password_hook` and level `warn`.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `password`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
If authentication succeeds the HTTP response code must be 200 and the response body must contain the expected JSON serialized response described above.
The program hook must finish within 30 seconds; the HTTP hook timeout will use the global configuration for HTTP clients.
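A minimal sketch of an external program hook implementing the password+OTP scheme described above; the 6-character OTP length and the hard-coded token check are placeholders for a real TOTP validation, and a production hook should properly JSON-encode the password:
```shell
#!/bin/sh
# treat the last 6 characters of the submitted password as the OTP,
# the remainder is the fixed password that SFTPGo should verify
PASSWORD="$SFTPGO_AUTHD_PASSWORD"
FIXED="${PASSWORD%??????}"
OTP="${PASSWORD#"$FIXED"}"

if [ "$OTP" = "123456" ]; then
  # partial success: ask SFTPGo to check the fixed part against the stored password
  printf '{"status":2,"to_verify":"%s"}' "$FIXED"
else
  # authentication failed
  printf '{"status":0}'
fi
```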
You can also restrict the hook scope using the `check_password_scope` configuration key:
- `0` means all supported protocols.
- `1` means SSH only
- `2` means FTP only
- `4` means WebDAV only
You can combine the scopes. For example, 6 means FTP and WebDAV.
You can disable the hook on a per-user basis.
An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.

View file

@@ -1,131 +0,0 @@
# Custom Actions
SFTPGo can notify filesystem and provider events using custom actions. A custom action can be an external program or an HTTP URL.
## Filesystem events
The `actions` struct inside the `common` configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
The following `actions` are supported:
- `download`
- `first-download`
- `pre-download`
- `upload`
- `first-upload`
- `pre-upload`
- `delete`
- `pre-delete`
- `rename`
- `mkdir`
- `rmdir`
- `ssh_cmd`
- `copy`
The `upload` condition includes both uploads to new files and overwrites of existing ones. If an upload is aborted due to quota limits SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`. The `first-download` and `first-upload` actions are executed only if no error occurs and they don't exclude the `download` and `upload` notifications, so you will get both the `first-upload` and `upload` notifications after the first successful upload, and the same for the first successful download.
For cloud backends, directories are virtual: they are created implicitly when you upload a file and are implicitly removed when the last file within a directory is removed. The `mkdir` and `rmdir` notifications are sent only when a directory is explicitly created or removed.
The notification will indicate if an error is detected and so, for example, a partial file is uploaded.
The `pre-delete`, `pre-download` and `pre-upload` actions will be called before deleting, downloading and uploading files. If the external command completes with a zero exit status or the HTTP notification response code is `200`, SFTPGo will allow the operation, otherwise the client will get a permission denied error.
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `SFTPGO_ACTION`, supported action
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`, is the full filesystem path, can be empty for some ssh commands
- `SFTPGO_ACTION_TARGET`, full filesystem path, non-empty for `rename` `SFTPGO_ACTION` and for some SSH commands
- `SFTPGO_ACTION_VIRTUAL_PATH`, virtual path, seen by SFTPGo users
- `SFTPGO_ACTION_VIRTUAL_TARGET`, virtual target path, seen by SFTPGo users
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`, `upload`, `download`, `delete`, and `copy` actions if the file size is greater than `0`
- `SFTPGO_ACTION_ELAPSED`, elapsed time as milliseconds
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `OIDC`, `DataRetention`, `EventAction`
- `SFTPGO_ACTION_IP`, the action was executed from this IP address
- `SFTPGO_ACTION_SESSION_ID`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `SFTPGO_ACTION_OPEN_FLAGS`, integer. File open flags, can be non-zero for `pre-upload` action. If `SFTPGO_ACTION_FILE_SIZE` is greater than zero and `SFTPGO_ACTION_OPEN_FLAGS&512 == 0` the target file will not be truncated
- `SFTPGO_ACTION_ROLE`, string. Role of the user who executed the action
- `SFTPGO_ACTION_TIMESTAMP`, int64. Event timestamp as nanoseconds since epoch
- `SFTPGO_ACTION_METADATA`, string. Object metadata serialized as JSON. Omitted if there is no metadata
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must finish within 30 seconds.
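For example, a minimal sketch of an external program hook that simply appends each notification to a log file; the log path is an arbitrary choice for the example:
```shell
#!/bin/sh
# append one line per filesystem event notification
printf '%s action=%s user=%s path=%s size=%s status=%s\n' \
  "$(date -u +%FT%TZ)" "$SFTPGO_ACTION" "$SFTPGO_ACTION_USERNAME" \
  "$SFTPGO_ACTION_VIRTUAL_PATH" "$SFTPGO_ACTION_FILE_SIZE" "$SFTPGO_ACTION_STATUS" \
  >> /var/log/sftpgo-actions.log
```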
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `action`, string
- `username`, string
- `path`, string
- `target_path`, string, included for `rename` action and `sftpgo-copy` SSH command
- `virtual_path`, string, virtual path, seen by SFTPGo users
- `virtual_target_path`, string, virtual target path, seen by SFTPGo users
- `ssh_cmd`, string, included for `ssh_cmd` action
- `file_size`, int64, included for `pre-upload`, `upload`, `download`, `delete` and `copy` actions if the file size is greater than `0`
- `elapsed`, int64, elapsed time as milliseconds
- `fs_provider`, integer, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend, `6` for HTTPFs backend
- `bucket`, string, included for S3, GCS and Azure backends
- `endpoint`, string, included for S3, SFTP and Azure backend if configured
- `status`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `OIDC`, `DataRetention`, `EventAction`
- `ip`, string. The action was executed from this IP address
- `session_id`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
- `role`, string. Included if the user who executed the action has a role
- `timestamp`, int64. Event timestamp as nanoseconds since epoch
- `metadata`, struct. Object metadata. Both the keys and the values are string. Omitted if there is no metadata
The HTTP hook will use the global configuration for HTTP clients and will respect the retry, TLS and headers configurations. See the HTTP Clients (`http`) section of the [config reference](./full-configuration.md).
The `pre-*` actions are always executed synchronously while the other ones are asynchronous. You can specify the actions to run synchronously via the `execute_sync` configuration key. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. If your hook takes a long time to complete this could cause a timeout on the client side, which wouldn't receive the server response in a timely manner and would eventually drop the connection.
If you add the `upload` action to the `execute_sync` configuration key, SFTPGo will try to delete the uploaded file and return an error to the client if the hook fails. A hook is considered failed if the external command completes with a non-zero exit status or the HTTP notification response code is other than `200` (or the HTTP endpoint cannot be reached or times out).
After a hook failure, the uploaded size is removed from the quota if SFTPGo is able to remove the file.
## Provider events
The `actions` struct inside the `data_provider` configuration section allows you to configure actions to run when data provider objects are added, updated or deleted.
The supported object types are:
- `user`
- `folder`
- `group`
- `admin`
- `api_key`
- `share`
- `event_action`
- `event_rule`
Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `SFTPGO_PROVIDER_ACTION`, supported values are `add`, `update`, `delete`
- `SFTPGO_PROVIDER_OBJECT_TYPE`, affected object type
- `SFTPGO_PROVIDER_OBJECT_NAME`, unique identifier for the affected object, for example username or key id
- `SFTPGO_PROVIDER_USERNAME`, the admin username that executed the action. There are two special usernames: `__self__` identifies a user/admin that updates itself and `__system__` identifies an action that does not have an explicit executor associated with it, for example users/admins can be added/updated by loading them from initial data
- `SFTPGO_PROVIDER_IP`, the action was executed from this IP address
- `SFTPGO_PROVIDER_ROLE`, the action was executed by an admin with this role
- `SFTPGO_PROVIDER_TIMESTAMP`, event timestamp as nanoseconds since epoch
- `SFTPGO_PROVIDER_OBJECT`, object serialized as JSON with sensitive fields removed
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must finish within 15 seconds.
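For example, a minimal hook script (a sketch using only the variables listed above) could log who changed what:
```shell
#!/bin/sh
# Provider events hook sketch: log the action, the affected object and the executor.
{
  echo "timestamp=${SFTPGO_PROVIDER_TIMESTAMP}"
  echo "action=${SFTPGO_PROVIDER_ACTION} object_type=${SFTPGO_PROVIDER_OBJECT_TYPE} object_name=${SFTPGO_PROVIDER_OBJECT_NAME}"
  echo "executed_by=${SFTPGO_PROVIDER_USERNAME} from_ip=${SFTPGO_PROVIDER_IP}"
} >> /var/log/sftpgo-provider-events.log
```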
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action, username, ip, object_type, object_name, timestamp and role are added to the query string, for example `<hook>?action=update&username=admin&ip=127.0.0.1&object_type=user&object_name=user1&timestamp=1633860803249`, and the full object is sent serialized as JSON inside the POST body with sensitive fields removed. The role is added only if not empty.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry, TLS and headers configurations. See the HTTP Clients (`http`) section of the [config reference](./full-configuration.md).
The structure for SFTPGo objects can be found within the [OpenAPI schema](../openapi/openapi.yaml).
## Pub/Sub services
You can forward SFTPGo events to several publish/subscribe systems using the [sftpgo-plugin-pubsub](https://github.com/sftpgo/sftpgo-plugin-pubsub). Notifier plugins are not suitable for interactive actions such as `pre-*` events: their scope is simply to forward events to external services. A custom hook is a better choice if you need to react to `pre-*` events.
## Database services
You can store SFTPGo events in database systems using the [sftpgo-plugin-eventstore](https://github.com/sftpgo/sftpgo-plugin-eventstore) and you can search the stored events using the [sftpgo-plugin-eventsearch](https://github.com/sftpgo/sftpgo-plugin-eventsearch).

View file

@ -1,20 +0,0 @@
# Data At Rest Encryption (DARE)
SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system. In this mode SFTPGo transparently encrypts and decrypts data (to/from the local disk) on the fly during uploads and/or downloads, making sure that the files at rest on the server side are always encrypted.
Data At Rest Encryption is supported for the local filesystem; for cloud storage backends you can use their server-side encryption features.
Because of the way it works, as described above, when you set up an encrypted filesystem for a user you need to make sure it points to an empty path/directory (one that has no files in it). Otherwise, SFTPGo would try to decrypt existing files that were never encrypted in the first place and fail.
The SFTPGo's `cryptfs` is a tiny wrapper around [sio](https://github.com/minio/sio) therefore data is encrypted and authenticated using `AES-256-GCM` or `ChaCha20-Poly1305`. AES-GCM will be used if the CPU provides hardware support for it.
The only required configuration parameter is a `passphrase`: each file will be encrypted using a unique, randomly generated secret key derived from the given passphrase using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869). It is important to note that the per-object encryption key is never stored anywhere: it is derived from your `passphrase` and a randomly generated initialization vector just before encryption/decryption. The initialization vector is stored with the file.
The passphrase is stored encrypted itself according to your [KMS configuration](./kms.md) and is required to decrypt any file encrypted using an encryption key derived from it.
The encrypted filesystem has some limitations compared to the local, unencrypted, one:
- Resuming uploads is not supported.
- Opening a file for both reading and writing at the same time is not supported, so clients that require advanced filesystem-like features such as `sshfs` are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.
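As a hedged sketch, an encrypted-filesystem user could be created through the REST API roughly like this. The host, credentials, paths and passphrase below are placeholders and the field names are assumed from the OpenAPI schema; adjust them to your deployment.
```shell
# Obtain an admin token, then create a user backed by the cryptfs provider (4).
# jq is assumed to be available. home_dir must point to an empty directory, as explained above.
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)
curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{
        "username": "cryptuser",
        "status": 1,
        "password": "secret",
        "home_dir": "/srv/sftpgo/data/cryptuser",
        "permissions": {"/": ["*"]},
        "filesystem": {
          "provider": 4,
          "cryptconfig": {"passphrase": {"status": "Plain", "payload": "your-passphrase"}}
        }
      }' \
  "http://127.0.0.1:8080/api/v2/users"
```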

View file

@ -1,32 +0,0 @@
# Data retention hook
This hook runs after a data retention check completes if you include `Hook` among the notification methods when you start the check.
The `data_retention_hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variable:
- `SFTPGO_DATA_RETENTION_RESULT`, it contains the data retention check result JSON serialized.
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must finish within 20 seconds.
If the hook defines an HTTP URL then this URL will be invoked as HTTP POST and the POST body contains the data retention check result JSON serialized.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry, TLS and headers configurations. See the HTTP Clients (`http`) section of the [config reference](./full-configuration.md).
Here is the schema for the data retention check result:
- `username`, string
- `status`, int. 1 means success, 0 error
- `start_time`, int64. Start time as UNIX timestamp in milliseconds
- `total_deleted_files`, int. Total number of files deleted
- `total_deleted_size`, int64. Total size deleted in bytes
- `elapsed`, int64. Elapsed time in milliseconds
- `details`, list of struct with details for each checked path, each struct contains the following fields:
- `path`, string
- `retention`, int. Retention time in hours
- `deleted_files`, int. Number of files deleted
- `deleted_size`, int64. Size deleted in bytes
- `info`, string. Informative, non-fatal message, if any. For example it can indicate that the check was skipped because the user doesn't have the required permissions on this path
- `error`, string. Error message if any
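For example, a minimal hook script (a sketch assuming `jq` is available) could extract a short summary from the result:
```shell
#!/bin/sh
# Data retention hook sketch: summarize the serialized check result.
RESULT="$SFTPGO_DATA_RETENTION_RESULT"
USERNAME=$(echo "$RESULT" | jq -r .username)
DELETED_FILES=$(echo "$RESULT" | jq -r .total_deleted_files)
DELETED_SIZE=$(echo "$RESULT" | jq -r .total_deleted_size)
echo "retention check for ${USERNAME}: deleted ${DELETED_FILES} files (${DELETED_SIZE} bytes)" >> /var/log/sftpgo-retention.log
```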

View file

@ -1,45 +0,0 @@
# Defender
The built-in `defender` allows you to configure an auto-blocking policy for SFTPGo and thus helps to prevent DoS (Denial of Service) and brute force password guessing.
If enabled it will protect SFTP, HTTP (WebClient and user API), FTP and WebDAV services and it will automatically block hosts (IP addresses) that continually fail to log in or attempt to connect.
You can configure a score for the following events:
- `score_valid`, defines the score for valid login attempts, eg. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, eg. non-existent user accounts. Default `2`.
- `score_no_auth`, defines the score for clients disconnected without any authentication attempt. Default `0`.
- `score_limit_exceeded`, defines the score for hosts that exceeded the configured rate limits or the configured max connections per host. Default `3`.
You can set the score to `0` to not penalize some events.
And then you can configure:
- `observation_time`, defines the time window, in minutes, for tracking client errors.
- `threshold`, defines the threshold value before banning a host.
- `ban_time`, defines the time to ban a client, as minutes
So a host is banned, for `ban_time` minutes, if the sum of the scores has exceeded the defined threshold during the last observation time minutes.
By defining the scores, each type of event can be weighted. Let's see an example: if `score_invalid` is 3 and `threshold` is 8, a host will be banned after 3 login attempts with a non-existent user within the configured `observation_time`.
A banned IP has no score: it makes no sense to accumulate host events in memory for an already banned IP address.
If an already banned client tries to log in again, its ban time will be incremented according to the `ban_time_increment` configuration.
The `ban_time_increment` is calculated as a percentage of `ban_time`, so if `ban_time` is 30 minutes and `ban_time_increment` is 50 the host will be banned for an additional 15 minutes. You can also specify values greater than 100 for `ban_time_increment` if you want to increase the penalty for already banned hosts.
SFTPGo can store host scores and banned hosts in memory or within the configured data provider according to the `driver` set in the `defender` configuration section. The available drivers are `memory` and `provider`.
The `provider` driver is useful if you want to share the defender data across multiple SFTPGo instances and it requires a shared or distributed data provider: `MySQL`, `PostgreSQL` and `CockroachDB` are supported.
If you set the `provider` driver, the defender implementation may do many database queries (at least one query every time a new client connects, to check if it is banned); if you have a single SFTPGo instance the `memory` driver is recommended.
For the `memory` driver, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
The `provider` driver will periodically clean up expired hosts and events.
Using the REST API you can:
- list hosts within the defender's lists
- remove hosts from the defender's lists
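For example, using `curl` (a sketch: the endpoint paths are assumed from the OpenAPI schema, and the host, credentials and host id are placeholders):
```shell
# List hosts currently tracked by the defender, then remove one of them.
# jq is assumed to be available; use the id returned by the list call.
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)
curl -s -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/hosts"
curl -s -X DELETE -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/hosts/<host_id>"
```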
The `defender` can also check permanent block and safe lists of IP addresses/networks. You can define these lists using the WebAdmin UI or the REST API. In multi-nodes setups, the list entries propagation between nodes may take some minutes.

View file

@ -1,67 +0,0 @@
# Dynamic user creation or modification
Dynamic user creation or modification is supported via an external program or an HTTP URL that can be invoked just before the user login.
To enable dynamic user modification, you must set the absolute path of your program or an HTTP URL using the `pre_login_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to login:
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey`, `keyboard-interactive`, `TLSCertificate`, `IDP` (external identity provider) or empty if the hook is executed after receiving the FTP `USER` command
- `SFTPGO_LOGIND_IP`, ip address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`, `OIDC` (OpenID Connect)
The program must write, on its standard output:
- an empty string (or no response at all) if the user should not be created/updated
- or the SFTPGo user, JSON serialized, if you want to create or update the given user
Any output of the program on its standard error will be recorded in the SFTPGo logs with sender `pre_login_hook` and level `warn`.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol and the ip address of the user trying to login are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH`.
The request body will contain the user trying to login serialized as JSON. If no modification is needed the HTTP response code must be 204, otherwise the response code must be 200 and the response body a valid SFTPGo user serialized as JSON.
Actions defined for user updates will not be executed in this case, and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.
For very simple use cases, the JSON response can include only the fields to update instead of the full user. For example, if you want to disable the user, you can return a response like this:
```json
{"status": 0}
```
This will not work as expected for sub-structs and lists.
Please note that if you want to create a new user, the pre-login hook response must include all the mandatory user fields.
The program hook must finish within 30 seconds; the HTTP hook will use the global configuration for HTTP clients.
If an error happens while executing the hook then login will be denied.
"Dynamic user creation or modification" and "External Authentication" are mutually exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
For SFTPGo users (not admins) authenticating using an external identity provider such as OpenID Connect, the pre-login hook will be executed after a successful authentication against the external IDP so that you can create/update the SFTPGo user matching the one authenticated against the identity provider. In this case the pre-login hook is executed even if an external authentication hook is defined.
If you enable FTP and allow both encrypted and plain text sessions, the pre-login hook is executed after receiving the FTP `USER` command. If you return an SFTPGo user with `ftp_security` set to `1` and the FTP session is not encrypted, it will be terminated. In this case too, the pre-login hook is executed even if an external authentication hook is defined.
You can disable the hook on a per-user basis.
Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.
```shell
#!/bin/bash
CURRENT_TIME=$(date +%H:%M)
if [[ "$SFTPGO_LOGIND_USER" =~ "\"test_user\"" ]]
then
if [[ $CURRENT_TIME > "18:00" || $CURRENT_TIME < "10:00" ]]
then
echo '{"status":0}'
else
echo '{"status":1}'
fi
fi
```
Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching the username inside the JSON as shown here.
The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).

View file

@ -1,89 +0,0 @@
# Event Manager
The Event Manager allows an administrator to configure HTTP notifications, commands execution, email notifications and carry out certain server operations based on server events or schedules.
The following actions are supported:
- `HTTP notification`. You can notify an HTTP/S endpoint via GET, POST, PUT, DELETE methods. You can define custom headers, query parameters and a body for POST and PUT requests. Placeholders are supported for username, body, header and query parameter values.
- `Command execution`. You can launch custom commands passing parameters via environment variables. Placeholders are supported for environment variable values.
- `Email notification`. Placeholders are supported in subject and body. The email will be sent as plain text. For this action to work you have to configure an SMTP server in the SFTPGo configuration file.
- `Backup`. A backup will be saved in the configured backup directory. The backup will contain the week day and the hour in the file name.
- `User quota reset`. The quota used by users will be updated based on current usage.
- `Folder quota reset`. The quota used by virtual folders will be updated based on current usage.
- `Transfer quota reset`. The transfer quota values will be reset to `0`.
- `Data retention check`. You can define per-folder retention policies.
- `Password expiration check`. You can send an email notification to users whose password is about to expire.
- `User expiration check`. You can receive notifications listing expired users.
- `User inactivity check`. Allows you to disable or delete inactive users.
- `Identity Provider account check`. You can create/update accounts for users/admins logging in using an Identity Provider.
- `Filesystem`. For these actions, the required permissions are automatically granted. This is the same as executing the actions from an SFTP client and the same restrictions apply. Supported actions:
- `Rename`. You can rename one or more files or directories.
- `Delete`. You can delete one or more files and directories.
- `Create directories`. You can create one or more directories including sub-directories.
- `Path exists`. Check if the specified path exists.
- `Copy`. You can copy one or more files or directories.
- `Compress paths`. You can compress (currently as zip) one or more files and directories.
The following placeholders are supported:
- `{{Name}}`. Username, virtual folder name, admin username for provider events, domain name for TLS certificate events.
- `{{Event}}`. Event name, for example `upload`, `download` for filesystem events or `add`, `update` for provider events.
- `{{Status}}`. Status for filesystem events. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error.
- `{{StatusString}}`. Status as string. Possible values "OK", "KO".
- `{{ErrorString}}`. Error details. Replaced with an empty string if no errors occur.
- `{{VirtualPath}}`. Path seen by SFTPGo users, for example `/adir/afile.txt`.
- `{{VirtualDirPath}}`. Parent directory for VirtualPath, for example if VirtualPath is "/adir/afile.txt", VirtualDirPath is "/adir".
- `{{FsPath}}`. Full filesystem path, for example `/user/homedir/adir/afile.txt` or `C:/data/user/homedir/adir/afile.txt` on Windows.
- `{{ObjectName}}`. File/directory name, for example `afile.txt` or provider object name.
- `{{ObjectType}}`. Object type for provider events: `user`, `group`, `admin`, etc.
- `{{Ext}}`. File extension, for example `.txt` if the filename is `afile.txt`.
- `{{VirtualTargetPath}}`. Virtual target path for rename and copy operations.
- `{{VirtualTargetDirPath}}`. Parent directory for VirtualTargetPath.
- `{{TargetName}}`. Target object name for rename and copy operations.
- `{{FsTargetPath}}`. Full filesystem target path for rename and copy operations.
- `{{FileSize}}`. File size.
- `{{Elapsed}}`. Elapsed time as milliseconds for filesystem events.
- `{{Protocol}}`. Used protocol, for example `SFTP`, `FTP`.
- `{{IP}}`. Client IP address.
- `{{Role}}`. User or admin role.
- `{{Timestamp}}`. Event timestamp as nanoseconds since epoch.
- `{{Email}}`. For filesystem events, this is the email associated with the user performing the action. For the provider events, this is the email associated with the affected user or admin. Blank in all other cases.
- `{{ObjectData}}`. Provider object data serialized as JSON with sensitive fields removed.
- `{{ObjectDataString}}`. Provider object data as JSON escaped string with sensitive fields removed.
- `{{RetentionReports}}`. Data retention reports as zip compressed CSV files. Supported as email attachment, file path for multipart HTTP request and as single parameter for HTTP requests body. Data retention reports contain details on the number of files deleted and the total size deleted for each folder.
- `{{IDPField<fieldname>}}`. Identity Provider custom fields containing a string.
- `{{Metadata}}`. Cloud storage metadata for the downloaded file serialized as JSON.
- `{{MetadataString}}`. Cloud storage metadata for the downloaded file as JSON escaped string.
- `{{UID}}`. Unique ID.
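For example, a command-execution action could pass some of these placeholders to a script through environment variables. The variable names below are not built-in: you define them (with placeholder values such as `{{VirtualPath}}`) in the action configuration, so treat this as a sketch.
```shell
#!/bin/sh
# Sketch of a script launched by a "Command execution" action.
# EVENT_NAME, EVENT_USER and EVENT_PATH are user-defined environment variables,
# configured in the action with the values {{Event}}, {{Name}} and {{VirtualPath}}.
echo "$(date -u) event=${EVENT_NAME} user=${EVENT_USER} path=${EVENT_PATH}" >> /var/log/sftpgo-events.log
```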
Event rules are based on the premise that an event occurs. To each rule you can associate one or more actions.
The following trigger events are supported:
- `Filesystem events`, for example `upload`, `download` etc.
- `Provider events`, for example `add`, `update`, `delete` user or other resources.
- `Schedules`. The scheduler uses UTC time.
- `IP Blocked`, this event can be generated if you enable the [defender](./defender.md).
- `Certificate`, this event is generated when a certificate is renewed using the built-in ACME protocol. Both successful and failed renewals are notified.
- `On demand`, this trigger is generated manually using the WebAdmin or the REST API.
- `Identity Provider login`, this trigger is generated when a user/admin logs in using an external Identity Provider.
You can further restrict a rule by specifying additional conditions that must be met before the rule's actions are taken. For example you can react to uploads only if they are performed by a particular user or using a specified protocol.
Actions such as user quota reset, transfer quota reset, data retention check, folder quota reset and filesystem events are executed for all matching users if the trigger is a schedule or for the affected user if the trigger is a provider event or a filesystem action.
Actions are executed in a sequential order except for sync actions that are executed before the others. For each action associated to a rule you can define the following settings:
- `Stop on failure`, the next action will not be executed if the current one fails.
- `Failure action`, this action will be executed only if at least one other action fails. :warning: Please note that a failure action isn't executed if the event itself fails: for example if a download fails the main action is still executed. The failure action is executed only if one of the non-failure actions associated with a rule fails.
- `Execute sync`, for upload events, you can execute the action(s) synchronously. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your action has completed its execution. If your action takes a long time to complete, this could cause a timeout on the client side, which wouldn't receive the server response in a timely manner and may eventually drop the connection. For `pre-*` events at least one sync action is required. If the pre-delete, pre-upload, pre-download sync action(s) complete successfully, SFTPGo will allow the operation, otherwise the client will get a permission denied error.
If you are running multiple SFTPGo instances connected to the same data provider, you can choose whether to allow simultaneous execution for scheduled actions.
Some actions are not supported for some triggers, rules containing incompatible actions are skipped at runtime:
- `Filesystem events`, folder quota reset cannot be executed, we don't have a direct way to get the affected folder.
- `Provider events`, user quota reset, transfer quota reset, data retention check and filesystem actions can be executed only if a user is updated. They will be executed for the affected user. Folder quota reset can be executed only for folders. Filesystem actions are not executed for `delete` user events because the actions are executed after the user has been deleted.
- `IP Blocked`, user quota reset, folder quota reset, transfer quota reset, data retention check and filesystem actions cannot be executed, we only have an IP.
- `Certificate`, user quota reset, folder quota reset, transfer quota reset, data retention check and filesystem actions cannot be executed.
- `Email with attachments` are supported for filesystem events and provider events if a user is added/updated. We need a user to get the files to attach.
- `HTTP multipart requests with files as attachments` are supported for filesystem events and provider events if a user is added/updated. We need a user to get the files to attach.

View file

@ -1,83 +0,0 @@
# External Authentication
To enable external authentication, you must set the absolute path of your authentication program or an HTTP URL using the `external_auth_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_USER`, SFTPGo user serialized as JSON, empty if the user does not exist within the data provider
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_AUTHD_PASSWORD`, not empty for password authentication
- `SFTPGO_AUTHD_PUBLIC_KEY`, not empty for public key authentication
- `SFTPGO_AUTHD_KEYBOARD_INTERACTIVE`, not empty for keyboard interactive authentication
- `SFTPGO_AUTHD_TLS_CERT`, TLS client certificate PEM encoded. Not empty for TLS certificate authentication
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program can inspect the SFTPGo user, if it exists, using the `SFTPGO_AUTHD_USER` environment variable.
The program must write, on its standard output:
- a valid SFTPGo user serialized as JSON if the authentication succeeds. The user will be added/updated within the defined data provider
- an empty string, or no response at all, if authentication succeeds and the existing SFTPGo user does not need to be updated. This means that the credentials already stored in SFTPGo must match those used for the current authentication.
- a user with an empty username if the authentication fails
Any output of the program on its standard error will be recorded in the SFTPGo logs with sender `external_auth_hook` and level `warn`.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `ip`
- `user`, SFTPGo user, omitted if the user does not exist within the data provider
- `protocol`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `password`, not empty for password authentication
- `public_key`, not empty for public key authentication
- `keyboard_interactive`, not empty for keyboard interactive authentication
- `tls_cert`, TLS client certificate PEM encoded. Not empty for TLS certificate authentication
If authentication succeeds the HTTP response code must be 200 and the response body can be:
- a valid SFTPGo user serialized as JSON. The user will be added/updated within the defined data provider
- empty, the existing SFTPGo user does not need to be updated. Please note that in versions 2.0.x and earlier an empty response was interpreted as an authentication error
If the authentication fails the HTTP response code must be != 200 or the returned SFTPGo user must have an empty username.
If the hook returns a user who is only allowed to authenticate using public key + password (multi step authentication), your hook will be invoked for each authentication step, so it must validate the public key and password separately. SFTPGo will take care that the client uses the allowed sequence.
Actions defined for users added/updated will not be executed in this case and an already logged in user with the same username will not be disconnected.
The program hook must finish within 30 seconds; the HTTP hook timeout is taken from the global configuration for HTTP clients.
This method is slower than built-in authentication, but it's very flexible as anyone can easily write their own authentication hooks.
You can also restrict the authentication scope for the hook using the `external_auth_scope` configuration key:
- `0` means all supported authentication scopes. The external hook will be used for password, public key, keyboard interactive and TLS certificate authentication
- `1` means passwords only
- `2` means public keys only
- `4` means keyboard interactive only
- `8` means TLS certificate only
You can combine the scopes. For example, 3 means password and public key, 5 means password and keyboard interactive, and so on.
Let's see a very basic example. Our sample authentication program will only accept user `test_user` with any password or public key.
```shell
#!/bin/sh
if test "$SFTPGO_AUTHD_USERNAME" = "test_user"; then
echo '{"status":1,"username":"test_user","expiration_date":0,"home_dir":"/tmp/test_user","uid":0,"gid":0,"max_sessions":0,"quota_size":0,"quota_files":100000,"permissions":{"/":["*"],"/somedir":["list","download"]},"upload_bandwidth":0,"download_bandwidth":0,"filters":{"allowed_ip":[],"denied_ip":[]},"public_keys":[]}'
else
echo '{"username":""}'
fi
```
The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).
You can instruct SFTPGo to cache the external user by setting an `external_auth_cache_time` in the user object returned by your hook. The `external_auth_cache_time` defines the cache time in seconds.
You can disable the hook on a per-user basis so that you can mix external and internal users.
An example authentication program that authenticates users against an LDAP server can be found in the [ldapauth](../examples/ldapauth) directory of the source tree.
An example server, usable as an HTTP authentication hook, that authenticates against an LDAP server can be found in the [ldapauthserver](../examples/ldapauthserver) directory of the source tree.
If you have an external authentication hook that could be useful to others too, please let us know and/or please send a pull request.

View file

@ -1,604 +0,0 @@
# Configuring SFTPGo
<details><summary><font size=5> Command line options</font></summary>
The SFTPGo executable can be used this way:
```console
Usage:
sftpgo [command]
Available Commands:
acme Obtain TLS certificates from ACME-based CAs like Let's Encrypt
gen A collection of useful generators
help Help about any command
initprovider Initialize and/or updates the configured data provider
ping Issues an health check to SFTPGo
portable Serve a single directory/account
resetprovider Reset the configured provider, any data will be lost
resetpwd Reset the password for the specified administrator
revertprovider Revert the configured data provider to a previous version
serve Start the SFTPGo service
smtptest Test the SMTP configuration
startsubsys Use sftpgo as SFTP file transfer subsystem
Flags:
-h, --help help for sftpgo
-v, --version
Use "sftpgo [command] --help" for more information about a command.
```
The `serve` command supports the following flags:
- `--config-dir` string. Location of the config dir. This directory is used as the base for files with a relative path, e.g. the private keys for the SFTP server or the database file if you use a file-based data provider. The configuration file, if not explicitly set, is looked for in this dir. We support reading from JSON, TOML, YAML, HCL, envfile and Java properties config files. The default config file name is `sftpgo` and therefore `sftpgo.json`, `sftpgo.yaml` and so on are searched. The default value is the working directory (".") or the value of `SFTPGO_CONFIG_DIR` environment variable.
- `--config-file` string. This flag explicitly defines the path, name and extension of the config file. It must be an absolute path or a path relative to the configuration directory. The specified file name must have a supported extension (JSON, YAML, TOML, HCL or Java properties). The default value is empty or the value of `SFTPGO_CONFIG_FILE` environment variable.
- `--grace-time`, integer. Graceful shutdown is an option to initiate a shutdown without abrupt cancellation of the currently ongoing client-initiated transfer sessions. This grace time defines the number of seconds allowed for existing transfers to get completed before shutting down. 0 means disabled. The default value is `0` or the value of `SFTPGO_GRACE_TIME` environment variable. A graceful shutdown is triggered by an interrupt signal or by a service `stop` request on Windows, if a grace time is configured.
- `--loaddata-from` string. Load users and folders from this file. The file must be specified as absolute path and it must contain a backup obtained using the `dumpdata` REST API or compatible content. The default value is empty or the value of `SFTPGO_LOADDATA_FROM` environment variable.
- `--loaddata-clean` boolean. Determine if the loaddata-from file should be removed after a successful load. Default `false` or the value of `SFTPGO_LOADDATA_CLEAN` environment variable (1 or `true`, 0 or `false`).
- `--loaddata-mode`, integer. Restore mode for data to load. 0 means new users are added, existing users are updated. 1 means new users are added, existing users are not modified. Default 1 or the value of `SFTPGO_LOADDATA_MODE` environment variable.
- `--loaddata-scan`, integer. Quota scan mode after data load. 0 means no quota scan. 1 means quota scan. 2 means scan quota if the user has quota restrictions. Default 0 or the value of `SFTPGO_LOADDATA_QUOTA_SCAN` environment variable.
- `--log-compress` boolean. Determine if the rotated log files should be compressed using gzip. Default `false` or the value of `SFTPGO_LOG_COMPRESS` environment variable (1 or `true`, 0 or `false`). It is unused if `log-file-path` is empty.
- `--log-file-path` string. Location for the log file, default "sftpgo.log" or the value of `SFTPGO_LOG_FILE_PATH` environment variable. Leave empty to write logs to the standard error.
- `--log-max-age` int. Maximum number of days to retain old log files. Default 28 or the value of `SFTPGO_LOG_MAX_AGE` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-backups` int. Maximum number of old log files to retain. Default 5 or the value of `SFTPGO_LOG_MAX_BACKUPS` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-size` int. Maximum size in megabytes of the log file before it gets rotated. Default 10 or the value of `SFTPGO_LOG_MAX_SIZE` environment variable. It is unused if `log-file-path` is empty.
- `--log-level` string. Set the log level. Supported values: `debug`, `info`, `warn`, `error`. Default `debug` or the value of `SFTPGO_LOG_LEVEL` environment variable.
- `--log-utc-time` boolean. Enable UTC time for logging. Default `false` or the value of `SFTPGO_LOG_UTC_TIME` environment variable (1 or `true`, 0 or `false`)
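A typical invocation combining some of these flags might look like this (the paths are illustrative):
```shell
sftpgo serve \
  --config-dir /etc/sftpgo \
  --log-file-path /var/log/sftpgo/sftpgo.log \
  --log-level info \
  --grace-time 30
```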
Log file can be rotated on demand sending a `SIGUSR1` signal on Unix based systems and using the command `sftpgo service rotatelogs` on Windows.
If you don't configure any private host key, the daemon will use `id_rsa`, `id_ecdsa` and `id_ed25519` in the configuration directory. If these files don't exist, the daemon will attempt to autogenerate them. The server supports any private key format supported by [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/keys.go#L33).
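If you prefer to provide the host keys yourself instead of relying on autogeneration, you can create them with standard tools, for example (the configuration directory path is illustrative):
```shell
# Generate Ed25519 and ECDSA host keys inside the SFTPGo configuration directory.
ssh-keygen -t ed25519 -N "" -f /etc/sftpgo/id_ed25519
ssh-keygen -t ecdsa -N "" -f /etc/sftpgo/id_ecdsa
```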
The `gen` command allows you to generate completion scripts for your shell and man pages.
</details>
<details><summary><font size=5> Configuration file</font></summary>
The configuration file contains the following sections:
<details><summary><font size=4>Common</font></summary>
- **"common"**, configuration parameters shared among all the supported protocols
- `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
- `upload_mode` integer. `0` means standard: the files are uploaded directly to the requested path. `1` means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. `2` means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload. `4` means files for S3 backend are stored even if a client-side upload error is detected. `8` means files for Google Cloud Storage backend are stored even if a client-side upload error is detected. `16` means files for Azure Blob backend are stored even if a client-side upload error is detected. Ignored for SFTP backend if buffering is enabled. The flags can be combined, if you provide both `1` and `2`, `2` will be used. Default: `0`
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `pre-download`, `download`, `first-download`, `pre-upload`, `upload`, `first-upload`, `pre-delete`, `delete`, `rename`, `mkdir`, `rmdir`, `ssh_cmd`, `copy`. Leave empty to disable actions.
- `execute_sync`, list of strings. Actions, defined in the `execute_on` list above, to be performed synchronously. The `pre-*` actions are always executed synchronously while the other ones are asynchronous. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook have completed its execution. Leave empty to execute only the defined `pre-*` hook synchronously
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored. 2 means "ignore mode if not supported": requests for changing permissions and owner/group are silently ignored for cloud filesystems and executed for local/SFTP filesystem.
- `rename_mode`, integer. By default (`0`), renaming of non-empty directories is not allowed for cloud storage providers (S3, GCS, Azure Blob). Set to `1` to enable recursive renames for these providers, they may be slow, there is no atomic rename API like for local filesystem, so SFTPGo will recursively list the directory contents and do a rename for each entry (partial renaming and incorrect disk quota updates are possible in error cases). Default `0`.
- `resume_max_size`, integer. Defines the maximum size allowed, in bytes, to resume uploads on storage backends with immutable objects. By default, resuming uploads is not allowed for cloud storage providers (S3, GCS, Azure Blob) because SFTPGo must rewrite the entire file. Set to a value greater than 0 to allow resuming uploads of files smaller than or equal to the defined size. Please note that uploads for these backends are still atomic: the client must intentionally upload a portion of the target file and then resume uploading. Default `0`.
- `temp_path`, string. Defines the path for temporary files such as those used for atomic uploads or file pipes. If you set this option you must make sure that the defined path exists, is accessible for writing by the user running SFTPGo, and is on the same filesystem as the users' home directories, otherwise the renaming for atomic uploads will become a copy and therefore may take a long time. The temporary files are not namespaced. The default is generally fine. Leave empty for the default.
- `proxy_protocol`, integer. Support for [HAProxy PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable the proxy protocol. It provides a convenient way to safely transport connection information such as a client's address across multiple layers of NAT or TCP proxies to get the real client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported. If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too. For example, for HAProxy, add `send-proxy` or `send-proxy-v2` to each server configuration line. The PROXY protocol is supported for SSH/SFTP and FTP/S. The following modes are supported:
- 0, disabled
- 1, enabled. If the upstream IP is not allowed to send a proxy header, the header will be ignored. Using this mode does not mean that we can accept connections with and without the proxy header. We always try to read the proxy header and we ignore it if the upstream IP is not allowed to send a proxy header. Set `proxy_skipped` if you want to allow some IP/networks to connect without sending a proxy header
- 2, required. If the upstream IP is not allowed to send a proxy header, the connection will be rejected (unless the upstream IP is listed in `proxy_skipped`)
- `proxy_allowed`, list of IP addresses and IP ranges allowed to send the proxy header:
- If `proxy_protocol` is set to 1 and we receive a proxy header from an IP that is not in the list then the connection will be accepted and the header will be ignored
- If `proxy_protocol` is set to 2 and we receive a proxy header from an IP that is not in the list then the connection will be rejected
- `proxy_skipped`, list of IP address and IP ranges for which not to read the proxy header
- `startup_hook`, string. Absolute path to an external program or an HTTP URL to invoke as soon as SFTPGo starts. If you define an HTTP URL it will be invoked using a `GET` request. Please note that SFTPGo services may not yet be available when this hook is run. Leave empty to disable
- `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post-connect hook](./post-connect-hook.md) for more details. Leave empty to disable
- `post_disconnect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post-disconnect hook](./post-disconnect-hook.md) for more details. Leave empty to disable
- `data_retention_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Data retention hook](./data-retention-hook.md) for more details. Leave empty to disable
- `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited. Default: `0`.
- `max_per_host_connections`, integer. Maximum number of concurrent client connections from the same host (IP). If the defender is enabled, exceeding this limit will generate `score_limit_exceeded` events and thus hosts that repeatedly exceed the max allowed connections can be automatically blocked. 0 means unlimited. Default: `20`.
- `allowlist_status`, integer. Set to `1` to enable the allow list. The allow list can be populated using the WebAdmin or the REST API. If enabled, only the listed IPs/networks can access the configured services, all other client connections will be dropped before they even try to authenticate. Make sure to populate your allow list before enabling this setting. In multi-nodes setups, the list entries propagation between nodes may take some minutes. Default: `0`.
- `allow_self_connections`, integer. Allow users on this instance to use other users/virtual folders on this instance as storage backend. Enable this setting if you know what you are doing. Set to `1` to enable. Default: `0`.
- `umask`, string. Set the file mode creation mask, for example `002`. Leave blank to use the system umask. Supported on *NIX platforms. Default: blank.
- `metadata`, struct containing the configuration for managing the Cloud Storage backends metadata.
- `read`, integer. Set to `1` to read metadata before downloading files from Cloud Storage backends and making them available in notification events. Default: `0`.
- `defender`, struct containing the defender configuration. See [Defender](./defender.md) for more details.
- `enabled`, boolean. Default `false`.
- `driver`, string. Supported drivers are `memory` and `provider`. The `provider` driver will use the configured data provider to store defender events and it is supported for `MySQL`, `PostgreSQL` and `CockroachDB` data providers. Using the `provider` driver you can share the defender events among multiple SFTPGo instances. For a single instance the `memory` driver will be much faster. Default: `memory`.
- `ban_time`, integer. Ban time in minutes. Default: `30`.
- `ban_time_increment`, integer. Ban time increment, as a percentage, if a banned host tries to connect again. Default: `50`.
- `threshold`, integer. Threshold value for banning a client. Default: `15`.
- `score_invalid`, integer. Score for invalid login attempts, eg. non-existent user accounts. Default: `2`.
- `score_valid`, integer. Score for valid login attempts, eg. user accounts that exist. Default: `1`.
- `score_limit_exceeded`, integer. Score for hosts that exceeded the configured rate limits or the maximum, per-host, allowed connections. Default: `3`.
- `score_no_auth`, defines the score for clients disconnected without any authentication attempt. Default: `0`.
- `observation_time`, integer. Defines the time window, in minutes, for tracking client errors. A host is banned if it has exceeded the defined threshold during the last observation time minutes. Default: `30`.
- `entries_soft_limit`, integer. Ignored for `provider` driver. Default: `100`.
- `entries_hard_limit`, integer. The number of banned IPs and host scores kept in memory will vary between the soft and hard limit for `memory` driver. If you use the `provider` driver, this setting will limit the number of entries to return when you ask for the entire host list from the defender. Default: `150`.
- `rate_limiters`, list of structs containing the rate limiters configuration. Take a look [here](./rate-limiting.md) for more details. Each struct has the following fields:
- `average`, integer. Average defines the maximum rate allowed. 0 means disabled. Default: 0
- `period`, integer. Period defines the period as milliseconds. The rate is actually defined by dividing `average` by `period`. Default: 1000 (1 second).
- `burst`, integer. Burst defines the maximum number of requests allowed to go through in the same arbitrarily small period of time. Default: 1
- `type`, integer. 1 means a global rate limiter, independent from the source host. 2 means a per-ip rate limiter. Default: 2
- `protocols`, list of strings. Available protocols are `SSH`, `FTP`, `DAV`, `HTTP`. By default all supported protocols are enabled
- `generate_defender_events`, boolean. If set to `true`, and the defender is enabled and this is not a global rate limiter, a new defender event will be generated each time the configured limit is exceeded. Default `false`
- `entries_soft_limit`, integer.
- `entries_hard_limit`, integer. The number of per-ip rate limiters kept in memory will vary between the soft and hard limit
</details>
<details><summary><font size=4>ACME</font></summary>
- **"acme"**, Automatic Certificate Management Environment (ACME) protocol configuration. To obtain the certificates the first time you have to configure the ACME protocol and execute the `sftpgo acme run` command. The SFTPGo service will take care of the automatic renewal of certificates for the configured domains.
- `domains`, list of domains for which to obtain certificates. If a single certificate is to be valid for multiple domains specify the names separated by commas or spaces, for example: `example.com,www.example.com` or `example.com www.example.com`. An empty list means that ACME protocol is disabled. Default: empty.
- `email`, string. Email used for registration and recovery contact. Default: empty.
- `key_type`, string. Key type to use for private keys. Supported values: `2048` (RSA 2048), `3072` (RSA 3072), `4096` (RSA 4096), `8192` (RSA 8192), `P256` (EC 256), `P384` (EC 384). Default: `4096`
- `certs_path`, string. Directory, absolute or relative to the configuration directory, to use for storing certificates and related data.
- `ca_endpoint`, string. Default: `https://acme-v02.api.letsencrypt.org/directory`.
- `renew_days`, integer. The number of days left on a certificate to renew it. Default: `30`.
- `http01_challenge`, configuration for `HTTP-01` challenge type, the following fields are supported:
- `port`, integer. This challenge is expected to run on port `80`. If you set a port other than `80` you have to proxy the path `/.well-known/acme-challenge` from the port `80` to the configured port. Default: `80`.
- `proxy_header`, string. Validate against this HTTP header when solving HTTP based challenges behind a reverse proxy. Empty means `Host`. Default: empty.
- `webroot`, string. Set the absolute path to the webroot folder to use for HTTP based challenges to write directly in a file in `.well-known/acme-challenge`. Setting a `webroot` disables the built-in server (the `port` setting is ignored) and expects the given directory to be publicly served, on port `80`, with access to `.well-known/acme-challenge`. If `webroot` is empty and `port` is `0` the `HTTP-01` challenge is disabled. Default: empty.
- `tls_alpn01_challenge`, configuration for `TLS-ALPN-01` challenge type, the following fields are supported:
- `port`, integer. This challenge is expected to run on port `443`. `0` means `TLS-ALPN-01` is disabled. Default: `0`.
</details>
<details><summary><font size=4>SFTP Server</font></summary>
- **"sftpd"**, the configuration for the SFTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving SFTP requests. 0 means disabled. Default: 2022
- `address`, string. Leave blank to listen on all available network interfaces. Default: ""
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`
- `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
- `banner`, string. Allow some degree of customization for the advertised software version. Set to `short` to hide the SFTPGo version number, if different from `short` the default will be used. The default is `SFTPGo_<version>`, for example `SSH-2.0-SFTPGo_2.6.0`, if set to `short` the identification string will be `SSH-2.0-SFTPGo`. In previous SFTPGo releases we allowed customizing the whole software version; this is a bad idea because many clients rely on the version string to enable/disable some features.
- `host_keys`, list of strings. It contains the daemon's private host keys. Each host key can be defined as a path relative to the configuration directory or an absolute one. If empty, the daemon will search or try to generate `id_rsa`, `id_ecdsa` and `id_ed25519` keys inside the configuration directory. If you configure absolute paths to files named `id_rsa`, `id_ecdsa` and/or `id_ed25519` then SFTPGo will try to generate these keys using the default settings.
- `host_certificates`, list of strings. Public host certificates. Each certificate can be defined as a path relative to the configuration directory or an absolute one. Certificate's public key must match a private host key otherwise it will be silently ignored. Default: empty.
- `host_key_algorithms`, list of strings. Public key algorithms that the server will accept for host key authentication. The supported values are: `rsa-sha2-512-cert-v01@openssh.com`, `rsa-sha2-256-cert-v01@openssh.com`, `ssh-rsa-cert-v01@openssh.com`, `ssh-dss-cert-v01@openssh.com`, `ecdsa-sha2-nistp256-cert-v01@openssh.com`, `ecdsa-sha2-nistp384-cert-v01@openssh.com`, `ecdsa-sha2-nistp521-cert-v01@openssh.com`, `ssh-ed25519-cert-v01@openssh.com`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, `ecdsa-sha2-nistp521`, `rsa-sha2-512`, `rsa-sha2-256`, `ssh-rsa`, `ssh-dss`, `ssh-ed25519`. Certificate algorithms are listed for backward compatibility purposes only, they are not used. Default values: `rsa-sha2-512`, `rsa-sha2-256`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, `ecdsa-sha2-nistp521`, `ssh-ed25519`.
- `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values are: `curve25519-sha256` (will also enable the alias `curve25519-sha256@libssh.org`), `ecdh-sha2-nistp256`, `ecdh-sha2-nistp384`, `ecdh-sha2-nistp521`, `diffie-hellman-group14-sha256`, `diffie-hellman-group16-sha512`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `diffie-hellman-group-exchange-sha256`, `diffie-hellman-group-exchange-sha1`. Default values: `curve25519-sha256`, `ecdh-sha2-nistp256`, `ecdh-sha2-nistp384`, `ecdh-sha2-nistp521`, `diffie-hellman-group14-sha256`, `diffie-hellman-group-exchange-sha256`. SHA512 based KEXs are disabled by default because they are slow.
- `ciphers`, list of strings. Allowed ciphers in preference order. Leave empty to use default values. The supported values are: `aes128-gcm@openssh.com`, `aes256-gcm@openssh.com`, `chacha20-poly1305@openssh.com`, `aes128-ctr`, `aes192-ctr`, `aes256-ctr`, `aes128-cbc`, `aes192-cbc`, `aes256-cbc`, `3des-cbc`, `arcfour256`, `arcfour128`, `arcfour`. Default values: `aes128-gcm@openssh.com`, `aes256-gcm@openssh.com`, `chacha20-poly1305@openssh.com`, `aes128-ctr`, `aes192-ctr`, `aes256-ctr`. Please note that the ciphers disabled by default are insecure, you should expect that an active attacker can recover plaintext if you enable them.
- `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values are: `hmac-sha2-256-etm@openssh.com`, `hmac-sha2-256`, `hmac-sha2-512-etm@openssh.com`, `hmac-sha2-512`, `hmac-sha1`, `hmac-sha1-96`. Default values: `hmac-sha2-256-etm@openssh.com`, `hmac-sha2-256`.
- `public_key_algorithms`, list of strings. Public key algorithms that the server will accept for client authentication. The supported values are: `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, `ecdsa-sha2-nistp521`, `rsa-sha2-512`, `rsa-sha2-256`, `ssh-rsa`, `ssh-dss`, `ssh-ed25519`, `sk-ssh-ed25519@openssh.com`, `sk-ecdsa-sha2-nistp256@openssh.com`. Default values: `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, `ecdsa-sha2-nistp521`, `rsa-sha2-512`, `rsa-sha2-256`, `ssh-ed25519`, `sk-ssh-ed25519@openssh.com`, `sk-ecdsa-sha2-nistp256@openssh.com`.
- `trusted_user_ca_keys`, list of public keys paths of certificate authorities that are trusted to sign user certificates for authentication. The paths can be absolute or relative to the configuration directory.
- `revoked_user_certs_file`, path to a file containing the revoked user certificates. The path can be absolute or relative to the configuration directory. It must contain a JSON list with the public key fingerprints of the revoked certificates. Example content: `["SHA256:bsBRHC/xgiqBJdSuvSTNpJNLTISP/G356jNMCRYC5Es","SHA256:119+8cL/HH+NLMawRsJx6CzPF1I3xC+jpM60bQHXGE8"]`. The revocation list can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. Default: "".
- `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to disable login banner.
- `enabled_ssh_commands`, list of enabled SSH commands. `*` enables all supported commands. More information can be found [here](./ssh-commands.md).
- `keyboard_interactive_authentication`, boolean. This setting specifies whether keyboard interactive authentication is allowed. If no keyboard interactive hook or auth plugin is defined the default is to prompt for the user password and then the one time authentication code, if defined. Default: `true`.
- `keyboard_interactive_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for keyboard interactive authentication. See [Keyboard Interactive Authentication](./keyboard-interactive.md) for more details.
- `password_authentication`, boolean. Set to false to disable password authentication. This setting also disables the multi-step authentication method using public key + password. It is useful for public key only configurations if you need to manage old clients that will not attempt to authenticate with public keys if the password login method is advertised. Default: `true`.
- `folder_prefix`, string. Virtual root folder prefix to include in all file operations (ex: `/files`). The virtual paths used for per-directory permissions, file patterns etc. must not include the folder prefix. The prefix is only applied to SFTP requests (in SFTP server mode), SCP and other SSH commands will be automatically disabled if you configure a prefix. The prefix is ignored while running as OpenSSH's SFTP subsystem. This setting can help some specific migrations from SFTP servers based on OpenSSH and it is not recommended for general usage. Default: blank.
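As an illustration of the algorithm-related settings above, the following sketch (a fragment to merge into your `sftpgo.json`; only the keys discussed here are shown and the values are illustrative) restricts the SFTP service to a reduced set of KEX algorithms, ciphers and MACs and disables password authentication:

```json
"sftpd": {
  "kex_algorithms": [
    "curve25519-sha256",
    "ecdh-sha2-nistp256"
  ],
  "ciphers": [
    "aes128-gcm@openssh.com",
    "aes256-gcm@openssh.com",
    "chacha20-poly1305@openssh.com"
  ],
  "macs": [
    "hmac-sha2-256-etm@openssh.com"
  ],
  "password_authentication": false
}
```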
</details>
<details><summary><font size=4>FTP Server</font></summary>
- **"ftpd"**, the configuration for the FTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving FTP requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Please note that we expect the proxy header on control and data connections. Default `true`.
- `tls_mode`, integer. 0 means accept both cleartext and encrypted sessions. 1 means TLS is required for both control and data connection. 2 means implicit TLS. Please check that a proper TLS configuration is in place if you set `tls_mode` to a value other than 0.
- `tls_session_reuse`, integer. 0 means session reuse is not checked, clients may or may not reuse TLS sessions. 1 means TLS session reuse is required for explicit FTPS. Legacy reuse method based on session IDs is not supported, clients must use session tickets. Session reuse is not supported for implicit TLS. Default: `0`.
- `certificate_file`, string. Binding specific TLS certificate. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Binding specific private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If not set the global ones will be used, if any.
- `min_tls_version`, integer. Defines the minimum version of TLS to be enabled. `12` means TLS 1.2 (and therefore TLS 1.2 and TLS 1.3 will be enabled), `13` means TLS 1.3. Default: `12`.
- `force_passive_ip`, ip address. External IP address for passive connections. Leave empty to autodetect. If not empty, it must be a valid IPv4 address. Default: "".
- `passive_ip_overrides`, list of structs that allow returning a different passive IP based on the client IP address. Each struct has the following fields:
- `networks`, list of strings. Each string must define a network in CIDR notation, for example 192.168.1.0/24.
- `ip`, string. Passive IP to return if the client IP address belongs to the defined networks. Empty means autodetect.
- `passive_host`, string. Hostname for passive connections. This hostname will be resolved each time a passive connection is requested and this can, depending on the DNS configuration, take a noticeable amount of time. Enable this setting only if you have a dynamic IP address. Default: "".
- `client_auth_type`, integer. Set to `1` to require a client certificate and verify it. Set to `2` to request a client certificate during the TLS handshake and verify it if given, in this mode the client is allowed not to send a certificate. At least one certification authority must be defined in order to verify client certificates. If no certification authority is defined, this setting is ignored. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L53). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `passive_connections_security`, integer. Defines the security checks for passive data connections. Set to `0` to require matching peer IP addresses of control and data connection. Set to `1` to disable any checks. Please note that if you run the FTP service behind a proxy you must enable the proxy protocol for control and data connections. Default: `0`.
- `active_connections_security`, integer. Defines the security checks for active data connections. The supported values are the same as described for `passive_connections_security`. Please note that disabling the security checks you will make the FTP service vulnerable to bounce attacks on active data connections, so change the default value only if you are on a trusted/internal network. Default: `0`.
- `debug`, boolean. If enabled any FTP command will be logged. This will generate a lot of logs. Enable only if you are investigating a client compatibility issue or something similar. You shouldn't leave this setting enabled for production servers. Default `false`.
- `banner`, string. Greeting banner displayed when a connection first comes in. Leave empty to use the default banner. Default `SFTPGo <version> ready`, for example `SFTPGo 1.0.0-dev ready`.
- `banner_file`, path to the banner file. The contents of the specified file, if any, are displayed when someone connects to the server. It can be a path relative to the config dir or an absolute one. If set, it overrides the banner string provided by the `banner` option. Leave empty to disable.
- `active_transfers_port_non_20`, boolean. Do not impose the port 20 for active data transfers. Enabling this option allows running SFTPGo with fewer privileges. Default: `true`.
- `passive_port_range`, struct containing the key `start` and `end`. Port Range for data connections. Random if not specified. Default range is 50000-50100.
- `disable_active_mode`, boolean. Set to `true` to disable active FTP, default `false`.
- `enable_site`, boolean. Set to true to enable the FTP SITE command. We support `chmod` and `symlink` if SITE support is enabled. Default `false`
- `hash_support`, integer. Set to `1` to enable FTP commands that allow calculating the hash value of files. These FTP commands will be enabled: `HASH`, `XCRC`, `MD5/XMD5`, `XSHA/XSHA1`, `XSHA256`, `XSHA512`. Please keep in mind that to calculate the hash we need to read the whole file: for remote backends this means downloading the file, for the encrypted backend this means decrypting the file. Default `0`.
- `combine_support`, integer. Set to 1 to enable support for the non-standard `COMB` FTP command. Combine is only supported for the local filesystem, for cloud backends it has no advantage as it will download the partial files and upload the combined one. Cloud backends natively support multipart uploads. Default `0`.
- `certificate_file`, string. Certificate for FTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and the private key are required to enable explicit and implicit TLS. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The certificates are also polled for changes every 8 hours.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, used to check whether a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
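To tie the options above together, here is a minimal FTPS sketch (the certificate paths and the passive IP are illustrative placeholders) that requires TLS on both control and data connections, disables active mode and uses the default passive port range:

```json
"ftpd": {
  "bindings": [
    {
      "port": 2121,
      "address": "",
      "tls_mode": 1,
      "certificate_file": "/etc/sftpgo/certs/ftpd.crt",
      "certificate_key_file": "/etc/sftpgo/certs/ftpd.key",
      "force_passive_ip": "203.0.113.10"
    }
  ],
  "passive_port_range": {
    "start": 50000,
    "end": 50100
  },
  "disable_active_mode": true
}
```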
</details>
<details><summary><font size=4>WebDAV Server</font></summary>
- **"webdavd"**, the configuration for the WebDAV server, more info [here](./webdav.md)
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving WebDAV requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `certificate_file`, string. Binding specific TLS certificate. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Binding specific private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If not set the global ones will be used, if any.
- `min_tls_version`, integer. Defines the minimum version of TLS to be enabled. `12` means TLS 1.2 (and therefore TLS 1.2 and TLS 1.3 will be enabled), `13` means TLS 1.3. Default: `12`.
- `client_auth_type`, integer. Set to `1` to require a client certificate and verify it. Set to `2` to request a client certificate during the TLS handshake and verify it if given, in this mode the client is allowed not to send a certificate. At least one certification authority must be defined in order to verify client certificates. If no certification authority is defined, this setting is ignored. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L53). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `tls_protocols`, list of strings. HTTPS protocols in preference order. Supported values: `http/1.1`, `h2`. Default: `http/1.1`, `h2`.
- `prefix`, string. Prefix for WebDAV resources, if empty WebDAV resources will be available at the `/` URI. If defined it must be an absolute URI, for example `/dav`. Default: "".
- `proxy_allowed`, list of IP addresses and IP ranges allowed to set client IP proxy header such as `X-Forwarded-For`. Any client IP proxy headers, if set on requests from a connection address not in this list, will be silently ignored. Default: empty.
- `client_ip_proxy_header`, string. Defines the allowed client IP proxy header such as `X-Forwarded-For`, `X-Real-IP` etc. Default: empty
- `client_ip_header_depth`, integer. Some client IP headers such as `X-Forwarded-For` can contain multiple IP addresses, this setting defines the position to trust starting from the right. For example if we have: `10.0.0.1,11.0.0.1,12.0.0.1,13.0.0.1` and the depth is `0`, SFTPGo will use `13.0.0.1` as client IP, if depth is `1`, `12.0.0.1` will be used and so on. Default: `0`.
- `disable_www_auth_header`, boolean. Set to `true` to not add the WWW-Authenticate header after an authentication failure, only the `401` status code will be sent. Default: `false`.
- `certificate_file`, string. Certificate for WebDAV over HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and a private key are required to enable HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, used to check whether a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The certificates are also polled for changes every 8 hours.
- `cors` struct containing CORS configuration. SFTPGo uses [Go CORS handler](https://github.com/rs/cors), please refer to upstream documentation for fields meaning and their default values.
- `enabled`, boolean, set to true to enable CORS.
- `allowed_origins`, list of strings.
- `allowed_methods`, list of strings.
- `allowed_headers`, list of strings.
- `exposed_headers`, list of strings.
- `allow_credentials` boolean.
- `max_age`, integer.
- `options_passthrough`, boolean.
- `options_success_status`, integer.
- `allow_private_network`, boolean.
- `cache` struct containing cache configurations.
- `users`, cache configuration for the authenticated users.
- `expiration_time`, integer. Expiration time, in minutes, for the cached users. 0 means unlimited. Default: 0.
- `max_size`, integer. Maximum number of users to cache. 0 means unlimited. Default: 50.
- `mime_types`, cache configuration for mime types.
- `enabled`, boolean, set to true to enable mime types caching. Default: `true`.
- `max_size`, integer. Maximum number of mime types to cache. 0 means no cache. Default: 1000.
- `custom_mappings`, additional mime type mappings. This is a platform-independent way to add a few additional mappings. You can set a limited number of mappings here; if you want to add a large list, use the method provided by the OS of your choice. List of structs, each struct has the following fields:
- `ext`, string, file extension including the dot, for example `.json`
- `mime`, string, mime type, for example `application/json`
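For example, a WebDAV binding served over HTTPS under the `/dav` prefix, with user caching and a custom mime type mapping, could look like the following sketch (the certificate paths are illustrative placeholders):

```json
"webdavd": {
  "bindings": [
    {
      "port": 8090,
      "enable_https": true,
      "certificate_file": "/etc/sftpgo/certs/webdav.crt",
      "certificate_key_file": "/etc/sftpgo/certs/webdav.key",
      "prefix": "/dav"
    }
  ],
  "cache": {
    "users": {
      "expiration_time": 60,
      "max_size": 100
    },
    "mime_types": {
      "enabled": true,
      "max_size": 1000,
      "custom_mappings": [
        {
          "ext": ".json",
          "mime": "application/json"
        }
      ]
    }
  }
}
```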
</details>
<details><summary><font size=4>Data Provider</font></summary>
- **"data_provider"**, the configuration for the data provider
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `cockroachdb`, `bolt`, `memory`
- `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database. For driver `memory` this is the (optional) path relative to the config dir or the absolute path to the provider dump, obtained using the `dumpdata` REST API, to load. This dump will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted. If you plan to use a SQLite database over a `cifs` network share (this is not recommended in general) you must use the `nobrl` mount option otherwise you will get the `database is locked` error. Some users reported that the `bolt` provider works fine over `cifs` shares.
- `host`, string. Database host. For `postgresql` and `cockroachdb` drivers you can specify multiple hosts separated by commas. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `port`, integer. Database port. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `username`, string. Database user. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `username_file`, string. Defines the path to a file containing the database user. This can be an absolute path or a path relative to the config dir. If not empty it takes precedence over `username`. Default: blank.
- `password`, string. Database password. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `password_file`, string. Defines the path to a file containing the database password. This can be an absolute path or a path relative to the config dir. If not empty it takes precedence over `password`. Default: blank.
- `sslmode`, integer. Used for drivers `mysql` and `postgresql`. 0 disable TLS connections, 1 require TLS, 2 set TLS mode to `verify-ca` for driver `postgresql` and `skip-verify` for driver `mysql`, 3 set TLS mode to `verify-full` for driver `postgresql` and `preferred` for driver `mysql`, 4 set the TLS mode to `prefer` for driver `postgresql`, 5 set the TLS mode to `allow` for driver `postgresql`
- `root_cert`, string. Path to the root certificate authority used to verify that the server certificate was signed by a trusted CA
- `disable_sni`, boolean. Allows opting out of Server Name Indication (SNI) for TLS connections. Default: `false`
- `target_session_attrs`, string. This is a `postgresql` and `cockroachdb` specific option. It determines whether the session must have certain properties to be acceptable. It's typically used in combination with multiple host names to select the first acceptable alternative among several hosts. Supported values: `any`, `read-write`, `read-only`, `primary`, `standby`, `prefer-standby`. If empty, `any` is assumed. If you explicitly set `any` the connections will be randomly distributed among the specified hosts
- `client_cert`, string. Path to the client certificate for two-way TLS authentication
- `client_key`, string. Path to the client key for two-way TLS authentication
- `connection_string`, string. Provide a custom database connection string. If not empty, this connection string will be used instead of building one using the previous parameters. Leave empty for drivers `bolt` and `memory`
- `sql_tables_prefix`, string. Prefix for SQL tables
- `track_quota`, integer. Set the preferred mode to track users quota between the following choices:
- 0, disable quota tracking. REST API to scan users home directories/virtual folders and update quota will do nothing
- 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
- 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions and for virtual folders. With this configuration, the `quota scan` and `folder_quota_scan` REST API can still be used to periodically update space usage for users without quota restrictions and for folders
- `delayed_quota_update`, integer. This configuration parameter defines the number of seconds to accumulate quota updates. If there are a lot of close uploads, accumulating quota updates can save you many queries to the data provider. If you want to track quotas, a scheduled quota update is recommended in any case, the stored quota may be incorrect for several reasons, such as an unexpected shutdown while uploading files, temporary provider failures, files copied outside of SFTPGo, and so on. You could use the [quotascan example](../examples/quotascan) as a starting point. 0 means immediate quota update.
- `pool_size`, integer. Sets the maximum number of open connections for `mysql` and `postgresql` driver. Default 0 (unlimited)
- `users_base_dir`, string. Users default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained joining the base dir and the username
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `add`, `update`, `delete`. `update` action will not be fired for internal updates such as the last login or the user quota fields.
- `execute_for`, list of strings. Defines the provider objects that trigger the action. Valid values are `user`, `folder`, `group`, `admin`, `api_key`, `share`, `event_action`, `event_rule`.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See [External Authentication](./external-auth.md) for more details. Leave empty to disable.
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. 8 means TLS certificate. The flags can be combined, for example 6 means public keys and keyboard interactive
- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir
- `pre_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to modify user details just before the login. See [Dynamic user modification](./dynamic-user-mod.md) for more details. Leave empty to disable.
- `post_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to notify a successful or failed login. See [Post-login hook](./post-login-hook.md) for more details. Leave empty to disable.
- `post_login_scope`, defines the scope for the post-login hook. 0 means notify both failed and successful logins. 1 means notify failed logins. 2 means notify successful logins.
- `check_password_hook`, string. Absolute path to an external program or an HTTP URL to invoke to check the user provided password. See [Check password hook](./check-password-hook.md) for more details. Leave empty to disable.
- `check_password_scope`, defines the scope for the check password hook. 0 means all protocols, 1 means SSH, 2 means FTP, 4 means WebDAV. You can combine the scopes, for example 6 means FTP and WebDAV.
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses, by default, the `bcrypt` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
- `argon2_options`, struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
- `memory`, unsigned integer. The amount of memory used by the algorithm (in kibibytes). Default: 65536.
- `iterations`, unsigned integer. The number of iterations over the memory. Default: 1.
- `parallelism`. unsigned 8 bit integer. The number of threads (or lanes) used by the algorithm. Default: 2.
- `bcrypt_options`, struct containing the options for bcrypt hashing algorithm
- `cost`, integer between 4 and 31. Default: 10
- `algo`, string. Algorithm to use for hashing passwords. Available algorithms: `argon2id`, `bcrypt`. For bcrypt hashing we use the `$2a$` prefix. Default: `bcrypt`
- `password_validation` struct. It defines the password validation rules for admins and protocol users.
- `admins`, struct. It defines the password validation rules for SFTPGo admins.
- `min_entropy`, float. Defines the minimum password entropy. Take a look [here](https://github.com/wagslane/go-password-validator#what-entropy-value-should-i-use) for more details. `0` means disabled, any password will be accepted. Default: `0`.
- `users`, struct. It defines the password validation rules for SFTPGo protocol users.
- `min_entropy`, float. This value is used as fallback if no more specific password strength is set at user/group level. Default: `0`.
- `password_caching`, boolean. Verifying argon2id passwords has a high memory and computational cost, and verifying bcrypt passwords has a high computational cost; enabling in-memory password caching reduces these costs. Default: `true`
- `update_mode`, integer. Defines how the database will be initialized/updated. 0 means automatically. 1 means manually using the initprovider sub-command.
- `create_default_admin`, boolean. Before you can use SFTPGo you need to create an admin account. If you open the admin web UI, a setup screen will guide you in creating the first admin account. You can automatically create the first admin account by enabling this setting and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`. You can also create the first admin by loading initial data. This setting has no effect if an admin account is already found within the data provider. Default `false`.
- `naming_rules`, integer. Naming rules for usernames, folder, group, role and object names in general. `0` means no rules. `1` means you can use any UTF-8 character. The names are used in URIs for REST API and Web admin. If not set only unreserved URI characters are allowed: ALPHA / DIGIT / "-" / "." / "_" / "~". `2` means names are converted to lowercase before saving/matching and so case insensitive matching is possible. `4` means trimming trailing and leading white spaces before saving/matching, the WebAdmin needs this setting to work properly. Rules can be combined, for example `3` means both converting to lowercase and allowing any UTF-8 character. Enabling these options for existing installations could be backward incompatible, some users could be unable to login, for example existing users with mixed cases in their usernames. You have to ensure that all existing users respect the defined rules. Default: `5`.
- `is_shared`, integer. If the data provider is shared across multiple SFTPGo instances, set this parameter to `1`. `MySQL`, `PostgreSQL` and `CockroachDB` can be shared, this setting is ignored for other data providers. For shared data providers, active transfers are persisted in the database and thus quota checks between ongoing transfers will work across multiple instances. Password reset requests and OIDC tokens/states are also persisted in the database if the provider is shared. For shared data providers, scheduled event actions are only executed on a single SFTPGo instance by default, you can override this behavior on a per-action basis. The database table `shared_sessions` is used only to store temporary sessions. In performance critical installations, you might consider using a database-specific optimization, for example you might use an `UNLOGGED` table for PostgreSQL. This optimization is only required in very limited use cases. Default: `0`.
- `node`, struct. Node-specific configurations to allow inter-node communications. If your provider is shared across multiple nodes, the nodes can exchange information to present a uniform view for node-specific data. The current implementation allows obtaining active connections from all nodes. Nodes connect to each other using the REST API.
- `host`, string. IP address or hostname that other nodes can use to connect to this node via REST API. Empty means inter-node communications disabled. Default: empty.
- `port`, integer. The port that other nodes can use to connect to this node via REST API. Default: `0`
- `proto`, string. Supported values `http` or `https`. For `https` the configuration for HTTP clients is used, so you can, for example, enable mutual TLS authentication. Default: `http`
- `backups_path`, string. Path to the backup directory. This can be an absolute path or a path relative to the config dir. We don't allow backups in arbitrary paths for security reasons.
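As a concrete sketch of the options above, a PostgreSQL provider shared across multiple SFTPGo instances, with TLS required and delayed quota updates, could be configured like this (host names and credentials are illustrative placeholders; merge the fragment into your `sftpgo.json`):

```json
"data_provider": {
  "driver": "postgresql",
  "name": "sftpgo",
  "host": "db1.example.com,db2.example.com",
  "port": 5432,
  "username": "sftpgo",
  "password": "changeme",
  "sslmode": 1,
  "target_session_attrs": "read-write",
  "track_quota": 2,
  "delayed_quota_update": 120,
  "pool_size": 25,
  "is_shared": 1
}
```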
</details>
<details><summary><font size=4>HTTP Server</font></summary>
- **"httpd"**, the configuration for the HTTP server used to serve REST API and the built-in web interfaces
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving HTTP requests. Default: 8080.
- `address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: blank.
- `enable_web_admin`, boolean. Set to `false` to disable the built-in web admin for this binding. You also need to define `templates_path` and `static_files_path` to use the built-in web admin interface. Default `true`.
- `enable_web_client`, boolean. Set to `false` to disable the built-in web client for this binding. You also need to define `templates_path` and `static_files_path` to use the built-in web client interface. Default `true`.
- `enable_rest_api`, boolean. Set to `false` to disable REST API. Default `true`.
- `enabled_login_methods`, integer. Defines the login methods available for the WebAdmin and WebClient UIs. `0` means any configured method: username/password login form and OIDC, if enabled. `1` means OIDC for the WebAdmin UI. `2` means OIDC for the WebClient UI. `4` means login form for the WebAdmin UI. `8` means login form for the WebClient UI. You can combine the values. For example `3` means that you can only login using OIDC on both WebClient and WebAdmin UI. Default: `0`.
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `certificate_file`, string. Binding specific TLS certificate. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Binding specific private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If not set the global ones will be used, if any.
- `min_tls_version`, integer. Defines the minimum version of TLS to be enabled. `12` means TLS 1.2 (and therefore TLS 1.2 and TLS 1.3 will be enabled), `13` means TLS 1.3. Default: `12`.
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to JWT/Web authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L53). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `tls_protocols`, list of strings. HTTPS protocols in preference order. Supported values: `http/1.1`, `h2`. Default: `http/1.1`, `h2`.
- `proxy_allowed`, list of IP addresses and IP ranges allowed to set client IP proxy header such as `X-Forwarded-For`, `X-Real-IP` and any other headers defined in the `security` section. Any of the indicated headers, if set on requests from a connection address not in this list, will be silently ignored. Default: empty.
- `client_ip_proxy_header`, string. Defines the allowed client IP proxy header such as `X-Forwarded-For`, `X-Real-IP` etc. Default: empty
- `client_ip_header_depth`, integer. Some client IP headers such as `X-Forwarded-For` can contain multiple IP addresses, this setting defines the position to trust starting from the right. For example if we have: `10.0.0.1,11.0.0.1,12.0.0.1,13.0.0.1` and the depth is `0`, SFTPGo will use `13.0.0.1` as client IP, if depth is `1`, `12.0.0.1` will be used and so on. Default: `0`.
- `hide_login_url`, integer. If both web admin and web client are enabled each login page will show a link to the other one. This setting allows you to hide this link. 0 means that the login links are displayed on both admin and client login page. This is the default. 1 means that the login link to the web client login page is hidden on admin login page. 2 means that the login link to the web admin login page is hidden on client login page. The flags can be combined, for example 3 will disable both login links.
- `render_openapi`, boolean. Set to `false` to disable serving of the OpenAPI schema and renderer. Default `true`.
- `oidc`, struct. Defines the OpenID connect configuration. OpenID integration allows you to map your identity provider users to SFTPGo users and so you can login to SFTPGo Web Client and Web Admin user interfaces using your identity provider. The following fields are supported:
- `config_url`, string. Identifier for the service. If defined, SFTPGo will add `/.well-known/openid-configuration` to this url and attempt to retrieve the provider configuration on startup. SFTPGo will refuse to start if it fails to connect to the specified URL. Default: blank.
- `client_id`, string. Defines the application's ID. Default: blank.
- `client_secret`, string. Defines the application's secret. Default: blank.
- `client_secret_file`, string. Defines the path to a file containing the application secret. This can be an absolute path or a path relative to the config dir. If not empty, it takes precedence over `client_secret`. Default: blank.
- `redirect_base_url`, string. Defines the base URL to redirect to after OpenID authentication. The suffix `/web/oidc/redirect` will be added to this base URL, adding also the `web_root` if configured. Default: blank.
- `username_field`, string. Defines the ID token claims field to map to the SFTPGo username. Default: blank.
- `scopes`, list of strings. Request the OAuth provider to provide the scope information for an authenticated user. The `openid` scope is mandatory. Default: `"openid", "profile", "email"`.
- `role_field`, string. Defines the optional ID token claims field to map to a SFTPGo role. If the defined ID token claims field is set to `admin` the authenticated user is mapped to an SFTPGo admin. You don't need to specify this field if you want to use OpenID only for the Web Client UI. If the field is inside a nested structure, you can use the dot notation to traverse the structures. Default: blank.
- `implicit_roles`, boolean. If set, the `role_field` is ignored and the SFTPGo role is assumed based on the login link used. Default: `false`.
- `custom_fields`, list of strings. Custom token claims fields to pass to the pre-login hook. Default: empty.
- `insecure_skip_signature_check`, boolean. This setting causes SFTPGo to skip JWT signature validation. It's intended for special cases where providers, such as Azure, use the `none` algorithm. Skipping the signature validation can cause security issues. Default: `false`.
- `debug`, boolean. If set, the received id tokens will be logged at debug level. Default: `false`.
- `security`, struct. Defines security headers to add to HTTP responses and allows to restrict allowed hosts. The following parameters are supported:
- `enabled`, boolean. Set to `true` to enable security configurations. Default: `false`.
- `allowed_hosts`, list of strings. Fully qualified domain names that are allowed. An empty list allows any and all host names. Default: empty.
- `allowed_hosts_are_regex`, boolean. Determines if the provided allowed hosts contains valid regular expressions. Default: `false`.
- `hosts_proxy_headers`, list of strings. Defines a set of header keys that may hold a proxied hostname value for the request, for example `X-Forwarded-Host`. Default: empty.
- `https_redirect`, boolean. Set to `true` to redirect HTTP requests to HTTPS. If you redirect from port `80` and you get your TLS certificates using the built-in ACME protocol and the `HTTP-01` challenge type, you need to use the webroot method and set the ACME web root to a path writable by SFTPGo in order to renew your certificates. Default: `false`.
- `https_host`, string. Defines the host name that is used to redirect HTTP requests to HTTPS. Default is blank, which indicates to use the same host. For example, if `https_redirect` is enabled and `https_host` is blank, a request for `http://127.0.0.1/web/client/login` will be redirected to `https://127.0.0.1/web/client/login`, if `https_host` is set to `www.example.com` the same request will be redirected to `https://www.example.com/web/client/login`.
- `https_proxy_headers`, list of structs, each struct contains the fields `key` and `value`. Defines a list of header keys with associated values that would indicate a valid https request. For example `key` could be `X-Forwarded-Proto` and `value` `https`. Default: empty.
- `sts_seconds`, integer. Defines the max-age of the `Strict-Transport-Security` header. This header will be included for `https` responses or for HTTP requests if the request includes a defined HTTPS proxy header. Default: `0`, which would NOT include the header.
- `sts_include_subdomains`, boolean. Set to `true`, the `includeSubdomains` will be appended to the `Strict-Transport-Security` header. Default: `false`.
- `sts_preload`, boolean. Set to true, the `preload` flag will be appended to the `Strict-Transport-Security` header. Default: `false`.
- `content_type_nosniff`, boolean. Set to `true` to add the `X-Content-Type-Options` header with the value `nosniff`. Default: `false`.
- `content_security_policy`, string. Allows setting the `Content-Security-Policy` header value. Default: blank.
- `permissions_policy`, string. Allows setting the `Permissions-Policy` header value. Default: blank.
- `cross_origin_opener_policy`, string. Allows setting the `Cross-Origin-Opener-Policy` header value. Default: blank.
- `branding`, struct. Defines the supported customizations to suit your brand. It contains the `web_admin` and `web_client` structs that define customizations for the WebAdmin and the WebClient UIs. Each customization struct contains the following fields:
- `name`, string. Defines the UI name
- `short_name`, string. Defines the short name to show next to the logo image and on the login page
- `favicon_path`, string. Path to the favicon relative to `static_files_path`. For example, if you create a directory named `branding` inside the static dir and put the `favicon.ico` file in it, you must set `/branding/favicon.ico` as path.
- `logo_path`, string. Path to your logo relative to `static_files_path`. The preferred image size is 256x256 pixels
- `login_image_path`, string. Path to a custom image to show on the login screen relative to `static_files_path`. The preferred image size is 900x900 pixels
- `disclaimer_name`, string. Name for your optional disclaimer
- `disclaimer_path`, string. Path to the HTML page with the disclaimer relative to `static_files_path` or an absolute URL (http or https).
- `default_css`, list of strings. Optional path to custom CSS files, relative to `static_files_path`, which replaces the default CSS
- `extra_css`, list of strings. Defines the paths, relative to `static_files_path`, to additional CSS files
- `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
- `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir. If both `templates_path` and `static_files_path` are empty the built-in web interface will be disabled
- `openapi_path`, string. Path to the directory that contains the OpenAPI schema and the default renderer. This can be an absolute path or a path relative to the config dir. If empty the OpenAPI schema and the renderer will not be served regardless of the `render_openapi` directive
- `web_root`, string. Defines a base URL for the web admin and client interfaces. If empty web admin and client resources will be available at the root ("/") URI. If defined it must be an absolute URI or it will be ignored
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, you can enable HTTPS for the configured bindings. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The certificates are also polled for changes every 8 hours.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set a revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `signing_passphrase`, string. Passphrase to use to derive the signing key for JWT and CSRF tokens. If empty a random signing key will be generated each time SFTPGo starts. If you set a signing passphrase you should consider rotating it periodically for added security.
- `signing_passphrase_file`, string. Defines the path to a file containing the signing passphrase. This can be an absolute path or a path relative to the config dir. If not empty, it takes precedence over `signing_passphrase`. Default: blank.
- `token_validation`, integer. Define how to validate JWT tokens, cookies and CSRF tokens. By default all the available security checks are enabled. Set to 1 to disable the requirement that a token must be used by the same IP for which it was issued. Default: `0`.
- `max_upload_file_size`, integer. Defines the maximum request body size, in bytes, for Web Client/API HTTP upload requests. `0` means no limit. Default: `0`.
- `cors` struct containing CORS configuration. SFTPGo uses [Go CORS handler](https://github.com/rs/cors), please refer to upstream documentation for fields meaning and their default values.
- `enabled`, boolean, set to `true` to enable CORS.
- `allowed_origins`, list of strings.
- `allowed_methods`, list of strings.
- `allowed_headers`, list of strings.
- `exposed_headers`, list of strings.
- `allow_credentials` boolean.
- `max_age`, integer.
- `options_passthrough`, boolean.
- `options_success_status`, integer.
- `allow_private_network`, boolean.
- `setup` struct containing configurations for the initial setup screen
- `installation_code`, string. If set, this installation code will be required when creating the first admin account. Please note that even if set using an environment variable this field is read at SFTPGo startup and not at runtime. This is not a license key or similar, the purpose here is to prevent anyone who can access the initial setup screen from creating an admin user. Default: blank.
- `installation_code_hint`, string. Description for the installation code input field. Default: `Installation code`.
- `hide_support_link`, boolean. If set, the link to the [sponsors section](../README.md#sponsors) will not appear on the setup screen page. Default: `false`.
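Putting some of these options together, an HTTPS binding with security headers enabled and a persistent signing passphrase could look like the following sketch (the certificate and passphrase paths are illustrative placeholders):

```json
"httpd": {
  "bindings": [
    {
      "port": 8443,
      "address": "",
      "enable_https": true,
      "certificate_file": "/etc/sftpgo/certs/httpd.crt",
      "certificate_key_file": "/etc/sftpgo/certs/httpd.key",
      "security": {
        "enabled": true,
        "sts_seconds": 31536000,
        "sts_include_subdomains": true,
        "content_type_nosniff": true
      }
    }
  ],
  "signing_passphrase_file": "/etc/sftpgo/secrets/signing_passphrase"
}
```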
</details>
<details><summary><font size=4>Telemetry</font></summary>
- **"telemetry"**, the configuration for the telemetry server, more details [below](#telemetry-server)
- `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable HTTP server. Default: 0
- `bind_address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: `127.0.0.1`
- `enable_profiler`, boolean. Enable the built-in profiler. Default `false`
- `auth_user_file`, string. Path to a file used to store usernames and passwords for basic authentication. This can be an absolute path or a path relative to the config dir. We support HTTP basic authentication, and the file format must conform to the one generated using the Apache `htpasswd` tool. The supported password formats are bcrypt (`$2y$` prefix) and md5 crypt (`$apr1$` prefix). If empty, HTTP authentication is disabled. Authentication will be always disabled for the `/healthz` endpoint.
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `min_tls_version`, integer. Defines the minimum version of TLS to be enabled. `12` means TLS 1.2 (and therefore TLS 1.2 and TLS 1.3 will be enabled), `13` means TLS 1.3. Default: `12`.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L53). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `tls_protocols`, list of strings. HTTPS protocols in preference order. Supported values: `http/1.1`, `h2`. Default: `http/1.1`, `h2`.
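For example, to expose the telemetry endpoints only on localhost with HTTP basic authentication, a sketch could look like this (the auth file path is an illustrative placeholder):

```json
"telemetry": {
  "bind_port": 10000,
  "bind_address": "127.0.0.1",
  "enable_profiler": false,
  "auth_user_file": "/etc/sftpgo/telemetry.htpasswd"
}
```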
</details>
<details><summary><font size=4>HTTP clients</font></summary>
- **"http"**, the configuration for HTTP clients. HTTP clients are used for executing hooks. Some hooks use a retryable HTTP client, for these hooks you can configure the time between retries and the number of retries. Please check the hook specific documentation to understand which hooks use a retryable HTTP client.
- `timeout`, float. Timeout specifies a time limit, in seconds, for requests. For requests with retries this is the timeout for a single request
- `retry_wait_min`, integer. Defines the minimum waiting time between attempts in seconds.
- `retry_wait_max`, integer. Defines the maximum waiting time between attempts in seconds. The backoff algorithm will perform exponential backoff based on the attempt number and limited by the provided minimum and maximum durations.
- `retry_max`, integer. Defines the maximum number of retries if the first request fails.
- `ca_certificates`, list of strings. List of paths to extra CA certificates to trust. The paths can be absolute or relative to the config dir. Adding trusted CA certificates is a convenient way to use self-signed certificates without defeating the purpose of using TLS.
- `certificates`, list of certificate for mutual TLS. Each certificate is a struct with the following fields:
- `cert`, string. Path to the certificate file. The path can be absolute or relative to the config dir.
- `key`, string. Path to the key file. The path can be absolute or relative to the config dir.
- `skip_tls_verify`, boolean. If enabled, the HTTP client accepts any TLS certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks. This should be used only for testing.
- `headers`, list of structs. You can define a list of http headers to add to each hook. Each struct has the following fields:
- `key`, string
- `value`, string. The header is silently ignored if `key` or `value` are empty
- `url`, string, optional. If not empty, the header will be added only if the request URL starts with the one specified here
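As a sketch of the options above, the following configuration enables retries for retryable hooks, trusts an internal CA and adds an authentication header only for a specific hook endpoint (the CA path, header value and URL are illustrative placeholders):

```json
"http": {
  "timeout": 20,
  "retry_wait_min": 2,
  "retry_wait_max": 30,
  "retry_max": 3,
  "ca_certificates": ["/etc/sftpgo/ca/internal-ca.crt"],
  "skip_tls_verify": false,
  "headers": [
    {
      "key": "X-API-Key",
      "value": "changeme",
      "url": "https://hooks.example.com"
    }
  ]
}
```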
</details>
<details><summary><font size=4>Commands</font></summary>
- **command**, configuration for external commands such as program based hooks
- `timeout`, integer. Timeout specifies a time limit, in seconds, to execute external commands. Valid range: `1-300`. Default: `30`
- `env`, list of strings. Environment variables to pass to all the external commands. Global environment variables are cleared, for security reasons, you have to explicitly set any environment variable such as `PATH` etc. if you need them. Each entry is of the form `key=value`. Do not use environment variables prefixed with `SFTPGO_` to avoid conflicts with environment variables that SFTPGo hooks can set. Default: empty
- `commands`, list of structs. Allow to customize configuration per-command. Each struct has the following fields:
- `path`, string. Define the command path as defined in the hook configuration
- `timeout`, integer. This value overrides the global timeout if set
- `env`, list of strings. These values are added to the environment variables defined for all commands, if any. Default: empty
- `args`, list of strings. Arguments to pass to the command identified by `path`. Default: empty
- `hook`, string. If not empty this configuration only applies to the specified hook name. Supported hook names: `fs_actions`, `provider_actions`, `startup`, `post_connect`, `post_disconnect`, `data_retention`, `check_password`, `pre_login`, `post_login`, `external_auth`, `keyboard_interactive`. Default: empty
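For instance, a hypothetical post-login hook script could get a longer timeout and an extra environment variable with a configuration like the following sketch (the script path and variables are illustrative):

```json
"command": {
  "timeout": 30,
  "env": ["PATH=/usr/local/bin:/usr/bin:/bin"],
  "commands": [
    {
      "path": "/usr/local/bin/post-login.sh",
      "timeout": 60,
      "env": ["LOG_LEVEL=info"],
      "hook": "post_login"
    }
  ]
}
```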
</details>
<details><summary><font size=4>KMS</font></summary>
- **kms**, configuration for the Key Management Service, more details can be found [here](./kms.md)
- `secrets`
- `url`, string. Defines the URI to the KMS service. Default: blank.
- `master_key`, string. Defines the master encryption key as string. Default: blank.
- `master_key_path`, string. Defines the absolute path to a file containing the master encryption key. If not empty, it takes precedence over `master_key`. Default: blank.
</details>
<details><summary><font size=4>MFA</font></summary>
- **mfa**, multi-factor authentication settings
- `totp`, list of struct that define settings for time-based one time passwords (RFC 6238). Each struct has the following fields:
- `name`, string. Unique configuration name. This name should not be changed if there are users or admins using the configuration. The name is not visible to the authentication apps. Default: `Default`.
- `issuer`, string. Name of the issuing Organization/Company. Default: `SFTPGo`.
- `algo`, string. Algorithm to use for HMAC. The supported algorithms are: `sha1`, `sha256`, `sha512`. Currently Google Authenticator app on iPhone seems to only support `sha1`, please check the compatibility with your target apps/device before setting a different algorithm. You can also define multiple configurations, for example one that uses `sha256` or `sha512` and another one that uses `sha1` and instruct your users to use the appropriate configuration for their devices/apps. The algorithm should not be changed if there are users or admins using the configuration. Default: `sha1`.
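For example, you could keep the default `sha1` configuration for maximum authenticator app compatibility and add a second, hypothetical configuration using `sha256` for users whose apps support it (the configuration name is illustrative):

```json
"mfa": {
  "totp": [
    {
      "name": "Default",
      "issuer": "SFTPGo",
      "algo": "sha1"
    },
    {
      "name": "Default-sha256",
      "issuer": "SFTPGo",
      "algo": "sha256"
    }
  ]
}
```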
</details>
<details><summary><font size=4>SMTP</font></summary>
- **smtp**, SMTP configuration enables SFTPGo email sending capabilities
- `host`, string. Location of SMTP email server. Leave empty to disable email sending capabilities. Default: blank.
- `port`, integer. Port of SMTP email server.
- `from`, string. From address, for example `SFTPGo <sftpgo@example.com>`. Many SMTP servers reject emails without a `From` header so, if not set, SFTPGo will try to use the username as fallback, this may or may not be appropriate. Default: blank
- `user`, string. SMTP username. Default: blank
- `password`, string. SMTP password. If both username and password are empty, SMTP authentication will be disabled. Default: blank
- `auth_type`, integer. 0 means `Plain`, 1 means `Login`, 2 means `CRAM-MD5`, 3 means `XOAUTH2`. Default: `0`.
- `encryption`, integer. 0 means no encryption, 1 means `TLS`, 2 means `STARTTLS`. Default: `0`.
- `domain`, string. Domain to use for `HELO` command, if empty `localhost` will be used. Default: blank.
- `templates_path`, string. Path to the email templates. This can be an absolute path or a path relative to the config dir. Templates are searched within a subdirectory named "email" in the specified path. You can customize the email templates by simply specifying an alternate path and putting your custom templates there.
- `debug`, integer. Set to `1` to enable SMTP debug. Default: `0`.
- `oauth2`, struct containing OAuth2 related configurations:
- `provider`, integer, 0 means `Google`, 1 means `Microsoft`. Default: `0`.
- `tenant`, string. Azure Active Directory tenant for the Microsoft provider. Typical values are `common`, `organizations`, `consumers` or tenant identifier. If empty `common` is used. Default: blank.
- `client_id`, string. Default: blank.
- `client_secret`, string. Default: blank.
- `refresh_token`, string. Default: blank.
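A typical STARTTLS setup on the submission port could look like the following sketch (host, sender and credentials are illustrative placeholders):

```json
"smtp": {
  "host": "smtp.example.com",
  "port": 587,
  "from": "SFTPGo <sftpgo@example.com>",
  "user": "sftpgo@example.com",
  "password": "changeme",
  "auth_type": 0,
  "encryption": 2
}
```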
</details>
<details><summary><font size=4>Plugins</font></summary>
- **plugins**, list of external plugins. :warning: Please note that the plugin system is experimental, the configuration parameters and interfaces may change in a backward incompatible way in the future. Each plugin is configured using a struct with the following fields:
- `type`, string. Defines the plugin type. Supported types: `notifier`, `kms`, `auth`, `eventsearcher`, `ipfilter`.
- `notifier_options`, struct. Defines the options for notifier plugins.
- `fs_events`, list of strings. Defines the filesystem events that will be notified to this plugin.
- `provider_events`, list of strings. Defines the provider events that will be notified to this plugin.
- `provider_objects`, list of strings. Defines the provider objects that will be notified to this plugin.
- `log_events`, list of integers. Defines the log events that will be notified to this plugin. `1` means "Login failed", `2` means "Login with non-existent user", `3` means "No login tried", `4` means "Algorithm negotiation failed", `5` means "Login succeeded".
- `retry_max_time`, integer. Defines the maximum number of seconds an event can be late. SFTPGo adds a timestamp to each event and adds to an internal queue any events that the plugin fails to handle (the plugin returns an error or is not running). If a plugin fails to handle an event that is too late, based on this configuration, it will be discarded. SFTPGo will try to resend queued events every 30 seconds. 0 means no retry.
- `retry_queue_max_size`, integer. Defines the maximum number of events that the internal queue can hold. Once the queue is full, the events that cannot be sent to the plugin will be discarded. 0 means no limit.
- `kms_options`, struct. Defines the options for kms plugins.
- `scheme`, string. KMS scheme. Supported schemes are: `awskms`, `gcpkms`, `hashivault`, `azurekeyvault`.
- `encrypted_status`, string. Encrypted status for a KMS secret. Supported statuses are: `AWS`, `GCP`, `VaultTransit`, `AzureKeyVault`.
- `auth_options`, struct. Defines the options for auth plugins.
- `scope`, integer. 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. 8 means TLS certificate. The flags can be combined, for example 6 means public keys and keyboard interactive. The scope must be explicit, `0` is not a valid option.
- `cmd`, string. Path to the plugin executable.
- `args`, list of strings. Optional arguments to pass to the plugin executable.
- `sha256sum`, string. SHA256 checksum for the plugin executable. If not empty it will be used to verify the integrity of the executable.
- `auto_mtls`, boolean. If enabled the client and the server automatically negotiate mutual TLS for transport authentication. This ensures that only the original client will be allowed to connect to the server, and all other connections will be rejected. The client will also refuse to connect to any server that isn't the original instance started by the client.
- `env_prefix`, string. Defines the prefix for env vars to pass from the SFTPGo process environment to the plugin. Set to `none` to not pass any environment variable, set to `*` to pass all environment variables. If empty, the prefix is the plugin name in uppercase with `-` replaced by `_` and a trailing `_` appended. For example if the plugin name is `sftpgo-plugin-eventsearch` the prefix will be `SFTPGO_PLUGIN_EVENTSEARCH_`
- `env_vars`, list of strings. Additional environment variable names to pass from the SFTPGo process environment to the plugin.
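As a sketch, a hypothetical notifier plugin executable located at `/usr/local/bin/sftpgo-plugin-notifier` (the path and the event lists are illustrative, this is not a published plugin) could be registered like this:

```json
"plugins": [
  {
    "type": "notifier",
    "notifier_options": {
      "fs_events": ["upload", "delete"],
      "provider_events": ["add", "update", "delete"],
      "retry_max_time": 60,
      "retry_queue_max_size": 1000
    },
    "cmd": "/usr/local/bin/sftpgo-plugin-notifier",
    "auto_mtls": true
  }
]
```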
</details>
A full example showing the default config (in JSON format) can be found [here](../sftpgo.json).
If you want to use a private host key that uses an algorithm/setting different from the auto generated RSA/ECDSA keys, or more than two private keys, you can generate your own keys and replace the empty `keys` array with something like this:
```json
"host_keys": [
"id_rsa",
"id_ecdsa",
"id_ed25519"
]
```
where `id_rsa`, `id_ecdsa` and `id_ed25519`, in this example, are files containing your generated keys. You can use absolute paths or paths relative to the configuration directory specified via the `--config-dir` serve flag. By default the configuration directory is the working directory.
If you want the default host keys generation in a directory different from the config dir, please specify absolute paths to files named `id_rsa`, `id_ecdsa` or `id_ed25519` like this:
```json
"host_keys": [
"/etc/sftpgo/keys/id_rsa",
"/etc/sftpgo/keys/id_ecdsa",
"/etc/sftpgo/keys/id_ed25519"
]
```
then SFTPGo will try to create `id_rsa`, `id_ecdsa` and `id_ed25519`, if they are missing, inside the directory `/etc/sftpgo/keys`.
The configuration can be read from JSON, TOML, YAML, HCL, envfile and Java properties config files. If your `config-file` flag is set to `sftpgo` (default value), you need to create a configuration file called `sftpgo.json` or `sftpgo.yaml` and so on inside `config-dir`.
</details>
<details><summary><font size=5> Environment variables</font></summary>
You can also override all the available configuration options using environment variables. SFTPGo will check for environment variables with a name matching the key uppercased and prefixed with `SFTPGO_`. You need to use `__` to traverse a struct.
Let's see some examples:
- To set the `port` for the first sftpd binding, you need to define the env var `SFTPGO_SFTPD__BINDINGS__0__PORT`
- To set the `execute_on` actions, you need to define the env var `SFTPGO_COMMON__ACTIONS__EXECUTE_ON`. For example `SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download`
The configuration file can change between different versions, and merging your custom settings with the default configuration file after updating SFTPGo may be time-consuming. For this reason we suggest setting your custom settings using environment variables. This eliminates the need to merge your changes with the default configuration file after each update; you just have to check that your custom configuration keys still exist.
Setting configuration options from environment variables is natural in Docker/Kubernetes.
If you install SFTPGo on Linux using the official deb/rpm packages you can set your custom environment variables in the file `/etc/sftpgo/sftpgo.env` (create this file if it does not exist, it is defined as `EnvironmentFile` in the SFTPGo systemd unit).
SFTPGo also reads files inside the `env.d` directory relative to the config dir and then exports the valid variables into environment variables if they are not already set. With this method you can override any configuration option and set environment variables for SFTPGo plugins, but you cannot set command flags because these files are read after SFTPGo starts, when the config dir must already be set.
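For example, a sketch of an `env.d` file reusing the variables shown above (the file name is arbitrary):
```shell
# /etc/sftpgo/env.d/custom.env (path for the Linux deb/rpm layout)
SFTPGO_SFTPD__BINDINGS__0__PORT=2222
SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download
```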
Of course you can also set environment variables with the method provided by the operating system of your choice.
</details>
<details><summary><font size=5>Binding to privileged ports</font></summary>
On Linux, if you want to use Internet domain privileged ports (port numbers less than 1024) without running the SFTPGo service as the root user, you can set the `cap_net_bind_service` capability on the `sftpgo` binary. To set the capability you can use the following command:
```shell
$ sudo setcap cap_net_bind_service=+ep /usr/bin/sftpgo
# Check that the capability is added
$ getcap /usr/bin/sftpgo
/usr/bin/sftpgo cap_net_bind_service=ep
```
Now you can use privileged ports such as 21, 22, 443 etc. without running the SFTPGo service as the root user. You have to set the `cap_net_bind_service` capability each time you update the `sftpgo` binary.
The "official" deb/rpm packages attempt to set the `cap_net_bind_service` capability in their `postinstall` scripts.
An alternative method is to use `iptables`. For example, you can run the SFTP service on port `2022` and redirect traffic from port `22` to port `2022`:
```shell
sudo iptables -t nat -A PREROUTING -d <ip> -p tcp --dport 22 -m addrtype --dst-type LOCAL -j DNAT --to-destination <ip>:2022
sudo iptables -t nat -A OUTPUT -d <ip> -p tcp --dport 22 -m addrtype --dst-type LOCAL -j DNAT --to-destination <ip>:2022
```
</details>
<details><summary><font size=5>Supported Password Hashing Algorithms</font></summary>
SFTPGo can verify passwords in several formats and uses, by default, the `bcrypt` algorithm to hash passwords in plain-text before storing them inside the data provider. Each hashing algorithm is identified by a prefix.
Supported hash algorithms:
- bcrypt, prefix `$2a$`
- argon2id, prefix `$argon2id$`
- PBKDF2 sha1, prefix `$pbkdf2-sha1$`
- PBKDF2 sha256, prefix `$pbkdf2-sha256$`
- PBKDF2 sha512, prefix `$pbkdf2-sha512$`
- PBKDF2 sha256 with base64 salt, prefix `$pbkdf2-b64salt-sha256$`
- MD5 crypt, prefix `$1$`
- MD5 crypt APR1, prefix `$apr1$`
- SHA256 crypt, prefix `$5$`
- SHA512 crypt, prefix `$6$`
- MD5 digest, prefix `{MD5}`
- SHA256 digest, prefix `{SHA256}`
- SHA512 digest, prefix `{SHA512}`
If you set a password with one of these prefixes it will be stored as-is and will not be hashed again.
When users log in, if their passwords are stored with anything other than the preferred algorithm, SFTPGo will automatically upgrade the algorithm to the preferred one.
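For example, you can pre-hash a password on the command line and store the resulting value directly; a sketch using `openssl passwd` (assuming an OpenSSL version that supports the `-6` option):
```shell
# produce a SHA512 crypt hash, which starts with the supported $6$ prefix
openssl passwd -6 'your password here'
# set the printed value as the user's password: SFTPGo detects the prefix and stores it unchanged
```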
</details>
## Telemetry Server
The telemetry server publishes the following endpoints:
- `/healthz`, health information (for health checks)
- `/metrics`, Prometheus metrics
- `/debug/pprof`, if enabled via the `enable_profiler` configuration key, for profiling, more details [here](./profiling.md)
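For example, assuming the telemetry server is enabled and listening on `127.0.0.1:8081` (adjust to your `telemetry` configuration), you can query the endpoints with `curl`:
```shell
curl http://127.0.0.1:8081/healthz
curl http://127.0.0.1:8081/metrics
```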

View file

@ -1,11 +0,0 @@
# Google Cloud Storage backend
To connect SFTPGo to Google Cloud Storage you can use the Application Default Credentials (ADC) strategy to try to find your application's credentials automatically, or you can explicitly provide a JSON credentials file that you can obtain from the Google Cloud Console. Take a look [here](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application) for details.
By specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for the local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
You can optionally specify a [storage class](https://cloud.google.com/storage/docs/storage-classes) too. Leave it blank to use the default storage class.
The configured bucket must exist.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.
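As a minimal sketch, a user's filesystem section for this backend might look like the following. The field names (`provider`, `gcsconfig`, `bucket`, `key_prefix`, `storage_class`, `automatic_credentials`) are assumptions based on the REST API schema, check the OpenAPI definition for the authoritative names; `2` selects the Google Cloud Storage provider and `automatic_credentials` set to `1` enables the ADC strategy mentioned above:
```json
"filesystem": {
  "provider": 2,
  "gcsconfig": {
    "bucket": "my-bucket",
    "key_prefix": "users/nicola/",
    "storage_class": "",
    "automatic_credentials": 1
  }
}
```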

View file

@ -1,45 +0,0 @@
# Groups
Using groups simplifies the administration of multiple accounts by letting you assign settings once to a group, instead of multiple times to each individual user.
SFTPGo supports the following types of groups:
- primary groups
- secondary groups
- membership groups
A user can be a member of a primary group and many secondary and membership groups. Depending on the group type, the settings are inherited differently.
:warning: SFTPGo groups are completely unrelated to system groups. Therefore, it is not necessary to add Linux/Windows groups to use SFTPGo groups.
The following settings are inherited from the primary group:
- home dir, if set for the group, will replace the one defined for the user. The `%username%` placeholder is replaced with the username and the `%role%` placeholder with the role name
- filesystem config, if the provider set for the group is different from the "local provider", will replace the one defined for the user. The `%username%` and `%role%` placeholders will be replaced with the username and role name within the defined "prefix", for any vfs, and the "username" for the SFTP filesystem config
- max sessions, quota size/files, upload/download bandwidth, upload/download/total data transfer, max upload size, external auth cache time, ftp_security, default shares expiration, max shares expiration, password expiration, password strength: if they are set to `0` for the user they are replaced with the value set for the group, if different from `0`. The password strength defined at group level is only enforced when users change their password
- expires_in, if defined and the user does not have an expiration date set, defines the expiration of the account in number of days from the creation date
- TLS username, check password hook disabled, pre-login hook disabled, external auth hook disabled, filesystem checks disabled, allow API key authentication, anonymous user: if they are not set for the user they are replaced with the value set for the group
- starting directory, if the user does not have a starting directory set, the value set for the group is used, if any. The `%username%` placeholder is replaced with the username, the `%role%` placeholder will be replaced with the role name
The following settings are inherited from the primary and secondary groups:
- virtual folders, file patterns, permissions: they are added to the user configuration if the user does not already have a setting for the configured path. The `/` path is ignored for secondary groups. The `%username%` and `%role%` placeholders are replaced with the username and role name within the virtual path, the defined "prefix", for any vfs, and the "username" for the SFTP and HTTP filesystem config
- per-source bandwidth limits
- per-source data transfer limits
- allowed/denied IPs
- denied login methods and protocols
- two factor auth protocols
- web client/REST API permissions
The settings from the primary group are always merged first. No settings are inherited from "membership" groups.
The final settings are a combination of the user settings and the group ones.
For example you can define the following groups:
- "group1", it has a virtual directory to mount on `/vdir1`
- "group2", it has a virtual directory to mount on `/vdir2`
- "group3", it has a virtual directory to mount on `/vdir3`
If you define users with a virtual directory to mount on `/vdir` and make them members of all the above groups, they will have virtual directories mounted on `/vdir`, `/vdir1`, `/vdir2`, `/vdir3`. If users already have a virtual directory to mount on `/vdir1`, the group's one will be ignored.
Please note that if the same virtual path is set in more than one secondary group the behavior is undefined. For example if a user is a member of two secondary groups and each secondary group defines a virtual folder to mount on the `/vdir2` path, the virtual folder mounted on `/vdir2` may change with every login.

View file

@ -1,12 +0,0 @@
# Tutorials
Here we collect step-by-step tutorials. SFTPGo users are encouraged to contribute!
- [Getting Started](./getting-started.md)
- [Securing SFTPGo with a free Let's Encrypt TLS Certificate](./lets-encrypt-certificate.md)
- [Two-factor Authentication](./two-factor-authentication.md)
- [Event Manager](./eventmanager.md)
- [SFTPGo as OpenSSH's SFTP subsystem](./openssh-sftp-subsystem.md)
- [SFTPGo with PostgreSQL data provider and S3 backend](./postgresql-s3.md)
- [SFTPGo on Windows with Active Directory Integration + Caddy Static File Server](https://www.youtube.com/watch?v=M5UcJI8t4AI)
- [File Traefik: Serve files securely via SFTP, HTTPS, and WebDAV with SFTPGo proxied behind Traefik](https://thad.getterman.org/articles/file-traefik/)

View file

@ -1,130 +0,0 @@
# Event Manager
The Event Manager allows an administrator to configure HTTP notifications, command execution, email notifications and certain server operations based on server events or schedules. More details [here](../eventmanager.md).
Let's see some common use cases.
- [Preliminary Note](#preliminary-note)
- [Daily backups](#daily-backups)
- [Automatically create a folder structure](#automatically-create-a-folder-structure)
- [Upload notifications](#upload-notifications)
- [Recycle Bin](#recycle-bin)
## Preliminary Note
We will use email actions in the following paragraphs, so let's assume you have a working SMTP configuration.
You can adapt the following snippet to configure an SMTP server using environment variables.
```shell
SFTPGO_SMTP__HOST="your smtp server host"
SFTPGO_SMTP__FROM="SFTPGo <sftpgo@example.com>"
SFTPGO_SMTP__USER=sftpgo@example.com
SFTPGO_SMTP__PASSWORD="your password"
SFTPGO_SMTP__AUTH_TYPE=1 # change based on what your server supports
SFTPGO_SMTP__ENCRYPTION=2 # change based on what your server supports
```
SFTPGo supports several placeholders for event actions. You can see all supported placeholders by clicking on the "info" icon at the top right of the add/update action page.
## Daily backups
You can schedule SFTPGo data backups (users, folders, groups, admins etc.) on a regular basis, such as daily.
From the WebAdmin expand the `Event Manager` section, select `Event actions` and add a new action.
Create an action named `backup` and set the type to `Backup`.
![Backup action](./img/backup-action.png)
Create another action named `backup notification`, set the type to `Email` and fill the recipient/s.
As email subject set `Backup {{StatusString}}`. The `{{StatusString}}` placeholder will be expanded to `OK` or `KO`.
As email body set `Backup done {{ErrorString}}`. The error string will be empty if no errors occur.
![Backup notification action](./img/backup-notification-action.png)
Now select `Event rules` and create a rule named `Daily backup`, select `Schedule` as trigger and schedule a backup at midnight UTC time.
![Daily backup schedule](./img/daily-backup-schedule.png)
As actions select `backup` and `backup notification`.
![Daily backup actions](./img/daily-backup-actions.png)
Done! SFTPGo will make a new backup every day and you will receive an email with the status of the backup. The backup will be saved on the server side in the configured backup directory. The backup files will have names like this `backup_<week day>_<hour>.json`.
## Automatically create a folder structure
Suppose you want to automatically create the folders `in` and `out` when you create new users.
From the WebAdmin expand the `Event Manager` section, select `Event actions` and add a new action.
Create an action named `create dirs`, with the settings you can see in the following screen.
![Create dirs action](./img/create-dirs-action.png)
Create another action named `create dirs failure notification`, set the type to `Email` and fill the recipient/s.
As email subject set `Unable to create dirs for user {{ObjectName}}`.
As email body set `Error: {{ErrorString}}`.
![Create dirs notification](./img/create-dirs-failure-notification.png)
Now select `Event rules` and create a rule named `Create dirs for users`, select `Provider event` as trigger, `add` as provider event and `user` as object filters.
![Create dirs rule](./img/create-dirs-rule.png)
As actions select `create dirs` and `create dirs failure notification`, check `Is failure action` for the notification action.
This way you will only be notified by email if an error occurs.
![Create dirs rule actions](./img/create-dirs-rule-actions.png)
Done! Create a new user and check that the defined directories are automatically created.
## Upload notifications
Let's see how you can receive an email notification after each upload and, optionally, the uploaded file as well.
From the WebAdmin expand the `Event Manager` section, select `Event actions` and add a new action.
Create an action named `upload notification` with the settings you can see in the following screen.
![Upload notification action](./img/upload-notification.png)
You can optionally add the uploaded file as an attachment, but note that SFTPGo allows you to attach a maximum of 10MB, so the action will fail for files bigger than 10MB.
Now select `Event rules` and create a rule named `Upload rule`, select `Filesystem events` as trigger and `upload` as filesystem event.
You can also filter events based on protocol, user and group name, shell-like filepath patterns and file size. We omit these additional filters for simplicity.
![Upload rule](./img/upload-rule.png)
As actions, select `upload notification`.
Done! Try uploading a new file and you will receive the configured email notification.
## Recycle Bin
Let's see how we can configure a Recycle Bin style feature where files are not deleted straight away, but instead moved to a separate folder.
To easily apply the Recycle Bin to multiple users we will create a virtual folder and a group; this way all users who belong to the group will have a Recycle Bin.
Create a virtual folder named `recycle` with the settings you can see in the following screen.
![Recycle folder](./img/recycle-folder.png)
Create a group named `recycle` with the settings you can see in the following screen.
![Recycle group](./img/recycle-group.png)
Make your users members of the `recycle` group.
From the WebAdmin expand the `Event Manager` section, select `Event actions` and add a new action.
Create an action named `move to recycle` with the settings you can see in the following screen.
![Recycle move action](./img/recycle-move-action.png)
Now select `Event rules` and create a rule named `Recycle rule`, select `Filesystem events` as trigger, `pre-delete` as filesystem event and exclude the `/recycle` path.
![Recycle rule](./img/recycle-rule.png)
![Recycle rule exclude path](./img/recycle-rule-path.png)
As actions, select `move to recycle` and set `Execute sync`.
Done! Try deleting a file, it will be moved to the Recycle Bin.
You can also add a scheduled event rule to automatically delete files older than a configurable time from the `/recycle` folder.

View file

@ -1,657 +0,0 @@
# Getting Started
SFTPGo allows you to securely share your files over SFTP and, optionally, FTP/S and WebDAV too.
Several storage backends are supported and they are configurable per user, so you can serve a local directory for a user and an S3 bucket (or part of it) for another one.
SFTPGo also supports virtual folders; a virtual folder can use any of the supported storage backends. So you can have, for example, a user with the S3 backend that maps a GCS bucket (or part of it) on a specified virtual path and an encrypted local filesystem on another one.
Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
In this tutorial we explore the main features and concepts using the built-in web admin interface. Advanced users can also use the SFTPGo [REST API](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml)
- [Installation](#installation)
- [Initial configuration](#initial-configuration)
- [Creating users](#creating-users)
- [Creating users with a Cloud Storage backend](#creating-users-with-a-cloud-storage-backend)
- [Creating users with a local encrypted backend (Data At Rest Encryption)](#creating-users-with-a-local-encrypted-backend-data-at-rest-encryption)
- [Virtual permissions](#virtual-permissions)
- [Virtual folders](#virtual-folders)
- [Groups](#groups)
- [Usage example](#usage-example)
- [Simplify user page](#simplify-user-page)
- [Configuration parameters](#configuration-parameters)
- [Use PostgreSQL data provider](#use-postgresql-data-provider)
- [Use MySQL/MariaDB data provider](#use-mysqlmariadb-data-provider)
- [Use CockroachDB data provider](#use-cockroachdb-data-provider)
- [Enable FTP service](#enable-ftp-service)
- [Enable WebDAV service](#enable-webdav-service)
## Installation
You can easily install SFTPGo in several ways, more details [here](../../README.md#installation).
In this guide, we assume that SFTPGo is already installed and running using the default configuration.
## Initial configuration
Before you can use SFTPGo you need to create an admin account, so open [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web) in your web browser, replacing `127.0.0.1` with the appropriate IP address if SFTPGo is not running on localhost.
![Setup](./img/setup.png)
After creating the admin account you will be automatically logged in and redirected to the page to set up two-factor authentication. Setting up two-factor authentication is optional.
![Users list](./img/initial-screen.png)
The web admin is now available at the following URL:
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
From the `Status` page you see the active services.
![Status](./img/status.png)
The default configuration enables the SFTP service on port `2022` and uses an embedded data provider (`SQLite` or `bolt` based on the target OS and architecture).
## Creating users
Let's create our first local user:
- from the `Users` page click the `+` icon to open the `Add user page`
- the only required fields are the `Username` and a `Password` or a `Public key`
- if you are on Windows or you installed SFTPGo manually and no `users_base_dir` is defined in your configuration file you also have to set a `Home Dir`. It must be an absolute path, for example `/srv/sftpgo/data/username` on Linux or `C:\sftpgo\data\username` on Windows. SFTPGo will try to automatically create the home directory, if missing, when the user logs in. Each user can only access files and folders inside its home directory.
- click `Save`
![Add user](./img/add-user.png)
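If you install SFTPGo manually and don't want to set a `Home Dir` for every user, you can define a base directory for users instead; a sketch using an environment variable, assuming the `users_base_dir` key lives under the `data_provider` section as in the default configuration file:
```shell
# home directories will default to <users_base_dir>/<username> when not explicitly set
SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=/srv/sftpgo/data
```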
:warning: Please note that, on Linux, SFTPGo runs using a dedicated system user and group called `sftpgo`, for added security. If you want to be able to use directories outside the `/srv/sftpgo` path you need to set the appropriate system level permissions. For example if you define `/home/username/test` as home dir you have to create this directory yourself, if it doesn't exist, and set the appropriate system-level permissions:
```shell
sudo mkdir /home/username/test
sudo chown sftpgo:sftpgo /home/username/test
```
You also need to make sure that the `sftpgo` system user has at least the read permission for any parent directory, so in the example above `/home/username` and `/home` must not have `0700` permissions.
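A quick way to verify this is to access the directory as the `sftpgo` system user, for example:
```shell
# this should succeed without a "Permission denied" error
sudo -u sftpgo ls -ld /home/username/test
```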
Now let's test the new user. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> ls
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 4034 3.9MB/s 00:00
sftp> ls
file.txt
sftp> mkdir adir
sftp> cd adir/
sftp> put file.txt
Uploading file.txt to /adir/file.txt
file.txt 100% 4034 4.0MB/s 00:00
sftp> ls
file.txt
sftp> get file.txt
Fetching /adir/file.txt to file.txt
/adir/file.txt 100% 4034 1.9MB/s 00:00
```
It worked! We can upload/download files and create directories.
Each user can browse and download their files, share files with external users, change their credentials and configure two-factor authentication using the WebClient interface available at the following URL:
[http://127.0.0.1:8080/web/client](http://127.0.0.1:8080/web/client)
![WebClient files](./img/web-client-files.png)
![WebClient two-factor authentication](./img/web-client-two-factor-auth.png)
### Creating users with a Cloud Storage backend
The procedure is similar to the one described for local users; you only have to specify the Cloud Storage backend and its credentials.
The screenshot below shows an example configuration for an S3 backend.
![S3 user](./img/s3-user-1.png)
![S3 user](./img/s3-user-2.png)
The screenshot below shows an example configuration for an Azure Blob Storage backend.
![Azure Blob user](./img/az-user-1.png)
![Azure Blob user](./img/az-user-2.png)
The screenshot below shows an example configuration for a Google Cloud Storage backend.
![Google Cloud user](./img/gcs-user.png)
The screenshot below shows an example configuration for an SFTP server as storage backend.
![User using another SFTP server as storage backend](./img/sftp-user.png)
Setting a `Key Prefix` you restrict the user to a specific "sub-folder" in the bucket, so that the same bucket can be shared among different users.
### Creating users with a local encrypted backend (Data At Rest Encryption)
The procedure is similar to the one described for local users; you only have to specify the encryption passphrase.
The screenshot below shows an example configuration.
![User with cryptfs backend](./img/local-encrypted.png)
You can find more details about Data At Rest Encryption [here](../dare.md).
## Virtual permissions
SFTPGo supports per-directory virtual permissions. For each user you have to specify global permissions, which you can then override on a per-directory basis.
Take a look at the following screens.
![Permissions](./img/virtual-permissions.png)
This user has full access as default (`*`), can only list and download from `/read-only` path and has no permissions at all for the `/subdir` path.
Let's test it. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
Connected to 127.0.0.1.
sftp> ls
adir file.txt read-only subdir
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 4034 19.4MB/s 00:00
sftp> rm file.txt
Removing /file.txt
sftp> ls
adir read-only subdir
sftp> cd read-only/
sftp> ls
file.txt
sftp> put file1.txt
Uploading file1.txt to /read-only/file1.txt
remote open("/read-only/file1.txt"): Permission denied
sftp> get file.txt
Fetching /read-only/file.txt to file.txt
/read-only/file.txt 100% 4034 2.2MB/s 00:00
sftp> cd ..
sftp> ls
adir read-only subdir
sftp> cd /subdir
sftp> ls
remote readdir("/subdir"): Permission denied
```
As you can see, it worked as expected.
## Virtual folders
A virtual folder is a mapping between an SFTPGo virtual path and a filesystem path outside the user's home directory or on a different storage provider.
Therefore, there is no need to create virtual folders for the user's home directory or for directories within it.
From the web admin interface click `Folders` and then the `+` icon.
![Add folder](./img/add-folder.png)
To create a local folder you need to specify a `Name` and an `Absolute path`. For other backends you have to specify the backend type and its credentials; this is the same procedure already detailed for creating users with cloud backends.
Suppose we created two virtual folders named `localfolder` and `minio`, as you can see in the following screen.
![Folders](./img/folders.png)
- `localfolder` uses the local filesystem as storage backend
- `minio` uses MinIO (S3 compatible) as storage backend
Now click `Users` on the left menu, select a user and click the `Edit` icon to update the user and associate the virtual folders.
Virtual folders must be referenced using their unique name and you can map them on a configurable virtual path. Take a look at the following screenshot.
![Virtual Folders](./img/virtual-folders.png)
We mapped the folder named `localfolder` on the path `/vdirlocal` (this must be an absolute UNIX path on Windows too) and the folder named `minio` on the path `/vdirminio`. For `localfolder` the quota usage is included within the user quota, while for the `minio` folder we defined separate quota limits: at most 2 files and at most 100MB, whichever is reached first.
The folder `minio` can be shared with other users and we can define different quota limits on a per-user basis. The folder `localfolder` is considered private since we have included its quota limits within those of the user; if we shared it with other users we would break the quota calculation.
Let's test these virtual folders. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> ls
adir read-only subdir vdirlocal vdirminio
sftp> cd vdirlocal
sftp> put file.txt
Uploading file.txt to /vdirlocal/file.txt
file.txt 100% 4034 17.3MB/s 00:00
sftp> ls
file.txt
sftp> cd ..
sftp> cd vdirminio/
sftp> put file.txt
Uploading file.txt to /vdirminio/file.txt
file.txt 100% 4034 4.8MB/s 00:00
sftp> ls
file.txt
sftp> put file.txt file1.txt
Uploading file.txt to /vdirminio/file1.txt
file.txt 100% 4034 2.8MB/s 00:00
sftp> put file.txt file2.txt
Uploading file.txt to /vdirminio/file2.txt
remote open("/vdirminio/file2.txt"): Failure
sftp> quit
```
The last upload failed since we exceeded the number of files quota limit.
## Groups
Using groups simplifies the administration of multiple SFTPGo users: you can assign settings once to a group, instead of multiple times to each individual user.
SFTPGo supports the following types of groups:
- primary groups
- secondary groups
- membership groups
A user can be a member of a primary group and many secondary and membership groups. Depending on the group type, the settings are inherited differently, more details [here](../groups.md).
:warning: SFTPGo groups are completely unrelated to system groups. Therefore, it is not necessary to add Linux/Windows groups to use SFTPGo groups.
### Usage example
Suppose you have the following requirements:
- each user must be restricted to a local home directory containing the username as last element of the path, for example `/srv/sftpgo/data/<username>`
- for each user, the maximum upload size for a single file must be limited to 1GB
- each user must have an S3 virtual folder available in the path `/s3<username>` and must only be able to access a specified "prefix" of the S3 bucket, so that users cannot access each other's files
- each user must have an S3 virtual folder available in the path `/shared`. This is a folder shared with other users
- a group of users can only download and list contents in the `/shared` path while another group of users has full access
We can easily meet these requirements by defining two groups.
From the SFTPGo WebAdmin UI, click on `Folders` and then on the `+` icon.
Create a folder named `S3private`.
Set the storage to `AWS S3 (Compatible)` and fill the required parameters:
- bucket name
- region
- credentials: access key and access secret
![S3Private folder](./img/s3-private-folder.png)
The important part is the `Key Prefix`: set it to `users/%username%/`.
![S3Private Key Prefix](./img/s3-key-prefix.png)
The placeholder `%username%` will be replaced with the associated username.
Create another folder named `S3shared` with the same settings as `S3private` but this time set the `Key Prefix` to `shared/`.
The `Key Prefix` has no placeholder, so the folder will operate on a static path that won't change based on the associated user.
Now click on `Groups` and then on the `+` icon and add a group named `Primary`.
Set the `Home Dir` to `/srv/sftpgo/data/%username%`.
![Add group](./img/add-group.png)
As before, the placeholder `%username%` will be replaced with the associated username.
Add the two virtual folders to this group and set the `Max file upload size` to 1GB.
![Primary group settings](./img/primary-group-settings.png)
Add a new group and name it `SharedReadOnly`; in the ACLs section set the permissions on the `/shared` path so that read-only access is granted.
![Read-only share](./img/read-only-share.png)
The group setup is now complete. We can now create our users and set the primary group to `Primary`.
For the users who need read-only access to the `/shared` path we also have to set `SharedReadOnly` as a secondary group.
You can now login with any SFTP client like FileZilla, WinSCP etc. and verify that the requirements are met.
### Simplify user page
The add/update user page has many configuration options and can be intimidating for some administrators. We can hide most of the settings and automatically add groups to newly created users. This way the hidden settings are inherited from the automatically assigned groups and therefore administrators can add new users simply by setting the username and credentials.
Click on `Admins` and then on the `+` icon and add an admin named `simply`.
In the `Groups for users` section set `Primary` as the primary group and `SharedReadOnly` as a secondary group.
In the `User page preferences` section hide all the sections.
![Simplified admin](./img/simplified-admin.png)
Log in using the newly created administrator and try to add a new user. The user page is simplified as you can see in the following screen.
![Simplified user add](./img/add-user-simplified.png)
## Configuration parameters
Until now we used the default configuration. To change the global service parameters you have to edit the configuration file, or set the appropriate environment variables, and restart SFTPGo to apply the changes.
A full explanation of all configuration methods can be found [here](./../full-configuration.md); here we explore some common use cases. Please keep in mind that SFTPGo can also be configured via environment variables; this is very convenient if you are using Docker.
The default configuration file is `sftpgo.json` and it can be found within the `/etc/sftpgo` directory if you installed from Linux distro packages. On Windows the configuration file can be found within the `{commonappdata}\SFTPGo` directory where `{commonappdata}` is typically `C:\ProgramData`. SFTPGo also supports reading from TOML and YAML configuration files.
The configuration file can change between different versions and merging your custom settings with the default configuration file, after updating SFTPGo, may be time-consuming. For this reason we suggest to set your custom settings using environment variables.
If you install SFTPGo on Linux using the official deb/rpm packages you can set your custom environment variables in the file `/etc/sftpgo/sftpgo.env`.
SFTPGo also reads files inside the `env.d` directory relative to config dir (`/etc/sftpgo/env.d` on Linux and `{commonappdata}\SFTPGo\env.d` on Windows) and then exports the valid variables into environment variables if they are not already set.
Of course you can also set environment variables with the method provided by the operating system of your choice.
The following snippets assume you are running SFTPGo on Linux but they can be easily adapted for other operating systems.
### Use PostgreSQL data provider
Create a PostgreSQL database named `sftpgo` and a PostgreSQL user with the correct permissions, for example using the `psql` CLI.
```shell
sudo -i -u postgres psql
CREATE DATABASE "sftpgo" WITH ENCODING='UTF8' CONNECTION LIMIT=-1;
create user "sftpgo" with encrypted password 'your password here';
grant all privileges on database "sftpgo" to "sftpgo";
\q
```
You can open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "postgresql",
"name": "sftpgo",
"host": "127.0.0.1",
"port": 5432,
"username": "sftpgo",
"password": "your password here",
...
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/postgresql.env` with the following content.
```shell
SFTPGO_DATA_PROVIDER__DRIVER=postgresql
SFTPGO_DATA_PROVIDER__NAME=sftpgo
SFTPGO_DATA_PROVIDER__HOST=127.0.0.1
SFTPGO_DATA_PROVIDER__PORT=5432
SFTPGO_DATA_PROVIDER__USERNAME=sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD=your password here
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:21:54.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:21:54.000 INF updating database schema version: 8 -> 9
2021-05-19T22:21:54.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=postgresql.service
```
Restart SFTPGo to apply the changes.
### Use MySQL/MariaDB data provider
Create a MySQL database named `sftpgo` and a MySQL user with the correct permissions, for example using the `mysql` CLI.
```shell
$ mysql -u root
MariaDB [(none)]> CREATE DATABASE sftpgo CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> grant all privileges on sftpgo.* to sftpgo@localhost identified by 'your password here';
Query OK, 0 rows affected (0.027 sec)
MariaDB [(none)]> quit
Bye
```
You can open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "mysql",
"name": "sftpgo",
"host": "127.0.0.1",
"port": 3306,
"username": "sftpgo",
"password": "your password here",
...
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/mysql.env` with the following content.
```shell
SFTPGO_DATA_PROVIDER__DRIVER=mysql
SFTPGO_DATA_PROVIDER__NAME=sftpgo
SFTPGO_DATA_PROVIDER__HOST=127.0.0.1
SFTPGO_DATA_PROVIDER__PORT=3306
SFTPGO_DATA_PROVIDER__USERNAME=sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD=your password here
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:29:30.000 INF Initializing provider: "mysql" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:29:30.000 INF updating database schema version: 8 -> 9
2021-05-19T22:29:30.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=mariadb.service
```
Restart SFTPGo to apply the changes.
### Use CockroachDB data provider
We suppose you have installed CockroachDB this way:
```shell
sudo su
export CRDB_VERSION=22.1.8 # set the latest available version here
wget -qO- https://binaries.cockroachdb.com/cockroach-v${CRDB_VERSION}.linux-amd64.tgz | tar xvz
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/cockroach /usr/local/bin/
mkdir -p /usr/local/lib/cockroach
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
mkdir /var/lib/cockroach
chown sftpgo:sftpgo /var/lib/cockroach
mkdir -p /etc/cockroach/{certs,ca}
chmod 700 /etc/cockroach/ca
/usr/local/bin/cockroach cert create-ca --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
/usr/local/bin/cockroach cert create-node localhost $(hostname) --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
/usr/local/bin/cockroach cert create-client root --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
chown -R sftpgo:sftpgo /etc/cockroach/certs
exit
```
and you are running it using a systemd unit like this one:
```shell
[Unit]
Description=Cockroach Database single node
Requires=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/cockroach
ExecStart=/usr/local/bin/cockroach start-single-node --certs-dir=/etc/cockroach/certs --http-addr 127.0.0.1:8888 --listen-addr 127.0.0.1:26257 --cache=.25 --max-sql-memory=.25 --store=path=/var/lib/cockroach
TimeoutStopSec=60
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
User=sftpgo
[Install]
WantedBy=default.target
```
Create a CockroachDB database named `sftpgo`.
```shell
$ sudo /usr/local/bin/cockroach sql --certs-dir=/etc/cockroach/certs -e 'create database "sftpgo"'
CREATE DATABASE
Time: 13ms
```
You can open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "cockroachdb",
"name": "sftpgo",
"host": "localhost",
"port": 26257,
"username": "root",
"password": "",
"sslmode": 3,
"root_cert": "/etc/cockroach/certs/ca.crt",
"client_cert": "/etc/cockroach/certs/client.root.crt",
"client_key": "/etc/cockroach/certs/client.root.key",
...
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/cockroachdb.env` with the following content.
```shell
SFTPGO_DATA_PROVIDER__DRIVER=cockroachdb
SFTPGO_DATA_PROVIDER__NAME=sftpgo
SFTPGO_DATA_PROVIDER__HOST=localhost
SFTPGO_DATA_PROVIDER__PORT=26257
SFTPGO_DATA_PROVIDER__USERNAME=root
SFTPGO_DATA_PROVIDER__SSLMODE=3
SFTPGO_DATA_PROVIDER__ROOT_CERT="/etc/cockroach/certs/ca.crt"
SFTPGO_DATA_PROVIDER__CLIENT_CERT="/etc/cockroach/certs/client.root.crt"
SFTPGO_DATA_PROVIDER__CLIENT_KEY="/etc/cockroach/certs/client.root.key"
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2022-06-02T14:54:04.510 INF Initializing provider: "cockroachdb" config file: "/etc/sftpgo/sftpgo.json"
2022-06-02T14:54:04.554 INF creating initial database schema, version 15
2022-06-02T14:54:04.698 INF updating database schema version: 15 -> 16
2022-06-02T14:54:07.093 INF updating database schema version: 16 -> 17
2022-06-02T14:54:07.672 INF updating database schema version: 17 -> 18
2022-06-02T14:54:07.699 INF updating database schema version: 18 -> 19
2022-06-02T14:54:07.721 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=cockroachdb.service
```
Restart SFTPGo to apply the changes.
### Enable FTP service
You can set the configuration options to enable the FTP service by opening the SFTPGo configuration file, looking for the `ftpd` section and editing it as follows.
```json
"ftpd": {
"bindings": [
{
"port": 2121,
"address": "",
"apply_proxy_config": true,
"tls_mode": 0,
"certificate_file": "",
"certificate_key_file": "",
"min_tls_version": 12,
"force_passive_ip": "",
"passive_ip_overrides": [],
"client_auth_type": 0,
"tls_cipher_suites": [],
"passive_connections_security": 0,
"active_connections_security": 0,
"debug": false
}
],
"banner": "",
"banner_file": "",
"active_transfers_port_non_20": true,
"passive_port_range": {
"start": 50000,
"end": 50100
},
...
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/ftpd.env` with the following content.
```shell
SFTPGO_FTPD__BINDINGS__0__PORT=2121
```
Restart SFTPGo to apply the changes. The FTP service is now available on port `2121`.
You can also configure the passive ports range (`50000-50100` by default); these ports must be reachable for passive FTP to work. If your FTP server is on the private network side of a NAT configuration you have to set `force_passive_ip` to your external IP address. You may also need to open the passive port range on your firewall.
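For example, following the same environment variable convention, you could adjust the passive port range and the external IP like this (the IP address below is just a placeholder):
```shell
SFTPGO_FTPD__PASSIVE_PORT_RANGE__START=50000
SFTPGO_FTPD__PASSIVE_PORT_RANGE__END=50100
SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP="203.0.113.10"
```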
It is recommended that you provide a certificate and key file to allow FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
### Enable WebDAV service
You can set the configuration options to enable the WebDAV service by opening the SFTPGo configuration file, looking for the `webdavd` section and editing it as follows.
```json
"webdavd": {
"bindings": [
{
"port": 10080,
"address": "",
"enable_https": false,
"certificate_file": "",
"certificate_key_file": "",
"min_tls_version": 12,
"client_auth_type": 0,
"tls_cipher_suites": [],
"prefix": "",
"proxy_allowed": [],
"client_ip_proxy_header": "",
"client_ip_header_depth": 0,
"disable_www_auth_header": false
}
],
...
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/webdavd.env` with the following content.
```shell
SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080
```
Restart SFTPGo to apply the changes. The WebDAV service is now available on port `10080`. It is recommended that you provide a certificate and key file to allow WebDAV over https.

Binary screenshot files deleted (images not shown).
View file

@ -1,247 +0,0 @@
# Securing SFTPGo with a free Let's Encrypt TLS Certificate
This tutorial shows how to obtain and renew a free Let's encrypt TLS certificate for the SFTPGo Web UI and REST API, the WebDAV service and the FTP service.
Obtaining a Let's Encrypt certificate involves solving a domain validation challenge issued by an ACME (Automatic Certificate Management Environment) server. This challenge verifies your ownership of the domain(s) you're trying to obtain a certificate for. Different challenge types exist, the most commonly used being `HTTP-01`. As its name suggests, it uses the HTTP protocol. While HTTP servers can be configured to use any TCP port, this challenge will only work on port `80` due to security measures.
More info about the supported challenge types can be found [here](https://letsencrypt.org/docs/challenge-types/).
There are several tools that allow you to obtain a Let's Encrypt TLS certificate. In this tutorial we'll show how to use the [lego](https://github.com/go-acme/lego) CLI tool and the ACME protocol built into SFTPGo.
The `lego` CLI supports all the Let's encrypt challenge types.
The ACME protocol built into SFTPGo supports `HTTP-01` and `TLS-ALPN-01` challenge types.
In this tutorial we'll focus on `HTTP-01` challenge type and make the following assumptions:
- we are running SFTPGo on Linux
- we need a TLS certificate for the `sftpgo.com` domain
- we have an existing web server already running on port `80` for the `sftpgo.com` domain and the web root path is `/var/www/sftpgo.com`
## Overview
- [Obtaining a certificate using the Lego CLI tool](#obtaining-a-certificate-using-the-lego-cli-tool)
- [Automatic certificate renewal using the Lego CLI tool](#automatic-certificate-renewal-using-the-lego-cli-tool)
- [Obtaining a certificate using the ACME protocol built into SFTPGo](#obtaining-a-certificate-using-the-acme-protocol-built-into-sftpgo)
- [Enable HTTPS for SFTPGo Web UI and REST API](#enable-https-for-sftpgo-web-ui-and-rest-api)
- [Enable HTTPS for WebDAV service](#enable-https-for-webdav-service)
- [Enable explicit FTP over TLS](#enable-explicit-ftp-over-tls)
## Obtaining a certificate using the Lego CLI tool
Download the latest [lego release](https://github.com/go-acme/lego/releases) and extract the lego binary in `/usr/local/bin`, then verify that it works.
```shell
lego -v
lego version 4.4.0 linux/amd64
```
We'll store the certificates in `/var/lib/lego` so create this directory.
```shell
sudo mkdir -p /var/lib/lego
```
Now obtain a certificate. The HTTP based challenge will be created in a file in `/var/www/sftpgo.com/.well-known/acme-challenge`. This directory must be publicly served by your web server.
```shell
sudo lego --accept-tos --path="/var/lib/lego" --email="<your email address here>" --domains="sftpgo.com" --http.webroot="/var/www/sftpgo.com" --http run
```
You should now be able to list your certificate.
```shell
sudo lego --path="/var/lib/lego" list
Found the following certs:
Certificate Name: sftpgo.com
Domains: sftpgo.com
Expiry Date: 2021-09-09 19:41:51 +0000 UTC
Certificate Path: /var/lib/lego/certificates/sftpgo.com.crt
```
Now copy the certificate to a private path accessible to the SFTPGo service.
```shell
sudo mkdir -p /var/lib/sftpgo/certs
sudo cp /var/lib/lego/certificates/sftpgo.com.{crt,key} /var/lib/sftpgo/certs
sudo chown -R sftpgo:sftpgo /var/lib/sftpgo/certs
```
### Automatic certificate renewal using the Lego CLI tool
SFTPGo can reload TLS certificates without service interruption, so we'll create a small bash script that copies the certificates inside the SFTPGo private directory and instructs SFTPGo to load them. We then configure `lego` to run this script when the certificates are renewed.
Create the file `/usr/local/bin/sftpgo_lego_hook` with the following contents.
```shell
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
CERTS_DIR=/var/lib/sftpgo/certs
mkdir -p ${CERTS_DIR}
cp ${LEGO_CERT_PATH} ${LEGO_CERT_KEY_PATH} ${CERTS_DIR}
chown -R sftpgo:sftpgo ${CERTS_DIR}
systemctl reload sftpgo
```
Ensure that this script is executable.
```shell
sudo chmod 755 /usr/local/bin/sftpgo_lego_hook
```
Now create a daily cron job to check the certificate expiration and renew it if necessary. For example create the file `/etc/cron.daily/lego` with the following contents.
```shell
#!/bin/bash
lego --accept-tos --path="/var/lib/lego" --email="<your email address here>" --domains="sftpgo.com" --http-timeout 60 --http.webroot="/var/www/sftpgo.com" --http renew --renew-hook="/usr/local/bin/sftpgo_lego_hook"
```
Ensure that this cron script is executable.
```shell
sudo chmod 755 /etc/cron.daily/lego
```
When the certificate is renewed you should see SFTPGo logs like the following to confirm that the new certificate was successfully loaded.
```json
{"level":"debug","time":"2021-06-14T20:05:15.785","sender":"service","message":"Received reload request"}
{"level":"debug","time":"2021-06-14T20:05:15.785","sender":"httpd","message":"TLS certificate \"/var/lib/sftpgo/certs/sftpgo.com.crt\" successfully loaded"}
{"level":"debug","time":"2021-06-14T20:05:15.785","sender":"ftpd","message":"TLS certificate \"/var/lib/sftpgo/certs/sftpgo.com.crt\" successfully loaded"}
{"level":"debug","time":"2021-06-14T20:05:15.786","sender":"webdavd","message":"TLS certificate \"/var/lib/sftpgo/certs/sftpgo.com.crt\" successfully loaded"}
```
## Obtaining a certificate using the ACME protocol built into SFTPGo
:warning: Starting from SFTPGo v2.5.0 you can request certificates from the Server Manager -> Configurations -> ACME section of the WebAdmin UI.
You can open the SFTPGo configuration file, search for the `acme` section and change it as follows.
```json
"acme": {
"domains": ["sftpgo.com"],
"email": "<you email address here>",
"key_type": "4096",
"certs_path": "/var/lib/sftpgo/certs",
"ca_endpoint": "https://acme-v02.api.letsencrypt.org/directory",
"renew_days": 30,
"http01_challenge": {
"port": 80,
"proxy_header": "",
"webroot": "/var/www/sftpgo.com"
},
"tls_alpn01_challenge": {
"port": 0
}
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/acme.env` with the following content.
```shell
SFTPGO_ACME__DOMAINS="sftpgo.com"
SFTPGO_ACME__EMAIL="<your email address here>"
SFTPGO_ACME__HTTP01_CHALLENGE__WEBROOT="/var/www/sftpgo.com"
```
Make sure that the `sftpgo` user can write to the `/var/www/sftpgo.com` directory or pre-create the `/var/www/sftpgo.com/.well-known/acme-challenge` directory with the appropriate permissions.
This directory must be publicly served by your web server.
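For example, a sketch to pre-create the challenge directory with the appropriate ownership:
```shell
sudo mkdir -p /var/www/sftpgo.com/.well-known/acme-challenge
sudo chown -R sftpgo:sftpgo /var/www/sftpgo.com/.well-known
```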
:warning: in this example we assume you have an existing HTTP server. If not, you can leave the web root blank and SFTPGo will resolve the HTTP01 challenge by itself.
Register your account and obtain certificates by running the following command.
```bash
sudo -E su - sftpgo -m -s /bin/bash -c 'sftpgo acme run -c /etc/sftpgo'
```
If this command completes successfully, you are done. The SFTPGo service will take care of the automatic renewal of certificates for the configured domains. Make sure that the `sftpgo` system user can read and write to `/var/lib/sftpgo/certs` directory otherwise the certificate renewal will fail.
## Enable HTTPS for SFTPGo Web UI and REST API
You can open the SFTPGo configuration file, search for the `httpd` section and change it as follows.
```json
"httpd": {
"bindings": [
{
"port": 9443,
"address": "",
"enable_web_admin": true,
"enable_web_client": true,
"enable_rest_api": true,
"enable_https": true,
"certificate_file": "/var/lib/sftpgo/certs/sftpgo.com.crt",
"certificate_key_file": "/var/lib/sftpgo/certs/sftpgo.com.key",
.....
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/httpd.env` with the following content.
```shell
SFTPGO_HTTPD__BINDINGS__0__PORT=9443
SFTPGO_HTTPD__BINDINGS__0__ENABLE_HTTPS=1
SFTPGO_HTTPD__BINDINGS__0__CERTIFICATE_FILE="/var/lib/sftpgo/certs/sftpgo.com.crt"
SFTPGO_HTTPD__BINDINGS__0__CERTIFICATE_KEY_FILE="/var/lib/sftpgo/certs/sftpgo.com.key"
```
Restart SFTPGo to apply the changes. The HTTPS service is now available on port `9443`.
## Enable HTTPS for WebDAV service
You can open the SFTPGo configuration file, search for the `webdavd` section and change it as follows.
```json
"webdavd": {
"bindings": [
{
"port": 10443,
"address": "",
"enable_https": true,
"certificate_file": "/var/lib/sftpgo/certs/sftpgo.com.crt",
"certificate_key_file": "/var/lib/sftpgo/certs/sftpgo.com.key",
...
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/webdavd.env` with the following content.
```shell
SFTPGO_WEBDAVD__BINDINGS__0__PORT=10443
SFTPGO_WEBDAVD__BINDINGS__0__ENABLE_HTTPS=1
SFTPGO_WEBDAVD__CERTIFICATE_FILE="/var/lib/sftpgo/certs/sftpgo.com.crt"
SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE="/var/lib/sftpgo/certs/sftpgo.com.key"
```
Restart SFTPGo to apply the changes. WebDAV is now available over HTTPS on port `10443`.
## Enable explicit FTP over TLS
You can open the SFTPGo configuration file, search for the `ftpd` section and change it as follows.
```json
"ftpd": {
"bindings": [
{
"port": 2121,
"address": "",
"apply_proxy_config": true,
"tls_mode": 1,
"certificate_file": "/var/lib/sftpgo/certs/sftpgo.com.crt",
"certificate_key_file": "/var/lib/sftpgo/certs/sftpgo.com.key",
...
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/ftpd.env` with the following content.
```shell
SFTPGO_FTPD__BINDINGS__0__PORT=2121
SFTPGO_FTPD__BINDINGS__0__TLS_MODE=1
SFTPGO_FTPD__BINDINGS__0__CERTIFICATE_FILE="/var/lib/sftpgo/certs/sftpgo.com.crt"
SFTPGO_FTPD__BINDINGS__0__CERTIFICATE_KEY_FILE="/var/lib/sftpgo/certs/sftpgo.com.key"
```
Restart SFTPGo to apply the changes. FTPES service is now available on port `2121` and TLS is required for both control and data connection (`tls_mode` is 1).

View file

@ -1,230 +0,0 @@
# SFTPGo as OpenSSH's SFTP subsystem
This tutorial shows how to run SFTPGo as OpenSSH's SFTP subsystem and still use its advanced features.
Please note that when running in SFTP subsystem mode some SFTPGo features are not available: for example, SFTPGo cannot limit concurrent connections or user sessions, restrict the available ciphers, etc. In this mode OpenSSH accepts the network connection, handles the SSH handshake and user authentication and then executes a separate SFTPGo process for each SFTP connection.
Hooks may or may not work because OpenSSH stops the SFTPGo process when the user logs out and some hooks may not have run or ended yet. If you need these features use SFTPGo in standalone mode.
## Preliminary Note
Before proceeding further you need to have a basic minimal installation of Ubuntu 20.04. The instructions can easily be adapted to any other Linux distribution.
## Install SFTPGo
To install SFTPGo you can use the PPA [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
Start by adding the PPA.
```shell
sudo add-apt-repository ppa:sftpgo/sftpgo
sudo apt update
```
Next install SFTPGo.
```shell
sudo apt install sftpgo
```
After installation SFTPGo should already be running and configured to start automatically at boot; check its status using the following command.
```shell
systemctl status sftpgo
```
We don't want to run SFTPGo as a service, so let's stop and disable it.
```shell
sudo systemctl disable sftpgo
sudo systemctl stop sftpgo
```
## Configure OpenSSH to use SFTPGo as SFTP subsystem
We have several configuration options. Let's examine them in detail in the following sections.
### Chroot any existing OpenSSH user within their home directory
Open the OpenSSH configuration file `/etc/ssh/sshd_config` and find the following section:
```shell
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
```
and change it as follows.
```shell
# override default of no subsystems
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp /usr/bin/sftpgo startsubsys -j
```
The `-j` option instructs SFTPGo to write logs to `journald`. If unset the logs will be written to stdout.
Restart OpenSSH to apply the changes.
```shell
sudo systemctl restart sshd
```
Now try to log in via SFTP with an existing OpenSSH user: the login will work and you will not be able to escape the user's home directory.
### Change home dir, set virtual permissions and other SFTPGo specific features
The current setup is pretty straightforward and can also be easily achieved using OpenSSH. Can we set a different home directory or use specific SFTPGo features such as bandwidth throttling, virtual permissions and so on?
Of course we can: we need to configure SFTPGo with an appropriate configuration file and map OpenSSH users to SFTPGo users.
SFTPGo stores its users within a data provider; several data providers are supported. For this use case SQLite and bolt cannot be used, since OpenSSH starts multiple SFTPGo processes and it is not safe/possible to access these data providers from multiple separate processes. So we will use the memory provider. MySQL, PostgreSQL and CockroachDB can be used too.
Any unmapped OpenSSH user will work as explained in the previous section. So you could only map specific users.
The memory provider can load users from a JSON file. Theoretically you could create the JSON file by hand, but this is quite hard. An easier way is to create users from another SFTPGo instance and then export a dump.
Then, we temporarily launch the system's SFTPGo instance.
```shell
sudo systemctl start sftpgo.service
```
We assume that we have an OpenSSH/system user named `nicola` and its home directory is `/home/nicola`, adjust the following instructions according to your configuration.
Open [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin) in your web browser, replacing `127.0.0.1` with the appropriate IP address if SFTPGo is not running on localhost, and initialize SFTPGo. The full procedure is detailed within the [Getting Started](./getting-started.md#Initial-configuration) guide.
Now, from the SFTPGo web admin interface, create a user named `nicola` (like our OpenSSH/system user) and set his home directory to `/home/nicola/sftpdir`. You must set a password or a public key to be able to save the user; any password you set will be ignored, since it is OpenSSH that authenticates users.
You can also set some virtual permissions, for example for the path `/test` allow `list` and `upload`. You can also set a quota or bandwidth limits, for example you can set `5` as quota files and `128 KB/s` as upload bandwidth.
Save the user.
From the `Maintenance` section save a backup to `/home/nicola`. You should now have the file `/home/nicola/sftpgo-backup.json`.
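If you have `jq` installed, you can optionally run a quick sanity check on the exported dump before using it:
```shell
jq '.users[] | {username, home_dir, quota_files, upload_bandwidth}' /home/nicola/sftpgo-backup.json
```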
We can stop the SFTPGo instance now.
```shell
sudo systemctl stop sftpgo.service
```
If you check the JSON backup file, it should contain something like this.
```json
"users": [
{
"id": 1,
"status": 1,
"username": "nicola",
"expiration_date": 0,
"password": "$2a$10$dc.djrShrnyEdfpTEh5S2utQr2CTja1XOB2O4ZiGcvFxbrvcgu/WK",
"home_dir": "/home/nicola/sftpdir",
"uid": 0,
"gid": 0,
"max_sessions": 0,
"quota_size": 0,
"quota_files": 5,
"permissions": {
"/": [
"*"
],
"/test": [
"list",
"upload"
]
},
"used_quota_size": 0,
"used_quota_files": 0,
"last_quota_update": 0,
"upload_bandwidth": 128,
"download_bandwidth": 0,
...
```
Let's create a specific configuration directory for SFTPGo as a subsystem and copy the configuration file and the backup file there.
```shell
sudo mkdir /usr/local/etc/sftpgosubsys
sudo cp /etc/sftpgo/sftpgo.json /usr/local/etc/sftpgosubsys/
sudo chmod 644 /usr/local/etc/sftpgosubsys/sftpgo.json
sudo cp /home/nicola/sftpgo-backup.json /usr/local/etc/sftpgosubsys/
```
Open `/usr/local/etc/sftpgosubsys/sftpgo.json`, find the `data_provider` section and change it as follows.
```json
...
"data_provider": {
"driver": "memory",
"name": "/usr/local/etc/sftpgosubsys/sftpgo-backup.json",
...
```
Open `/etc/ssh/sshd_config` and set the following configuration.
```shell
# override default of no subsystems
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp /usr/bin/sftpgo startsubsys -c /usr/local/etc/sftpgosubsys -j -p
```
The `-c` option specifies where to search for the configuration file and the `-p` option tells SFTPGo to use the home directory from the data provider (the JSON backup file in this case) for the users defined there.
Restart OpenSSH to apply the changes.
```shell
sudo systemctl restart sshd
```
Now you can log in using the user `nicola` and verify that the new chroot directory is `/home/nicola/sftpdir` and that the other settings are working. For example, create a directory called `test`: you will be able to upload files to it but not download them.
### Configure a custom hook on file uploads
We show this feature by executing a simple bash script each time a file is uploaded.
Here is the test script.
```shell
#!/bin/bash
echo `date` "env, action: $SFTPGO_ACTION, username: $SFTPGO_ACTION_USERNAME, path: $SFTPGO_ACTION_PATH, file size: $SFTPGO_ACTION_FILE_SIZE, status: $SFTPGO_ACTION_STATUS" >> /tmp/command_sftp.log
```
It simply logs some environment variables that SFTPGo sets for the upload action. Please refer to [Custom Actions](../custom-actions.md) for more detailed info about hooks.
So copy the above script to the file `/usr/local/bin/sftpgo-action.sh` and make it executable.
```shell
sudo chmod 755 /usr/local/bin/sftpgo-action.sh
```
Open `/usr/local/etc/sftpgosubsys/sftpgo.json` and configure the custom action as follows.
```json
{
"common": {
"idle_timeout": 15,
"upload_mode": 0,
"actions": {
"execute_on": ["upload"],
"execute_sync": [],
"hook": "/usr/local/bin/sftpgo-action.sh"
},
...
```
Login and upload a file.
```shell
sftp nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> put file.txt
Uploading file.txt to /file.txt
sftp> quit
```
Verify that the custom action is executed.
```shell
cat /tmp/command_sftp.log
Fri 30 Jul 2021 06:56:50 PM UTC env, action: upload, username: nicola, path: /home/nicola/sftpdir/file.txt, file size: 4034, status: 1
```

View file

@ -1,226 +0,0 @@
# SFTPGo with PostgreSQL data provider and S3 backend
This tutorial shows the installation of SFTPGo on Ubuntu 20.04 (Focal Fossa) with PostgreSQL data provider and S3 backend. SFTPGo will run as an unprivileged (non-root) user. We assume that you want to serve a single S3 bucket and you want to assign different "virtual folders" of this bucket to different SFTPGo virtual users.
## Preliminary Note
Before proceeding further you need to have a basic minimal installation of Ubuntu 20.04.
## Install PostgreSQL
Before installing any packages on the Ubuntu system, update and upgrade all packages using the `apt` commands below.
```shell
sudo apt update
sudo apt upgrade
```
Install PostgreSQL with this `apt` command.
```shell
sudo apt -y install postgresql
```
Once installation is completed, start the PostgreSQL service and add it to the system boot.
```shell
sudo systemctl start postgresql
sudo systemctl enable postgresql
```
Next, check the PostgreSQL service using the following command.
```shell
systemctl status postgresql
```
## Configure PostgreSQL
PostgreSQL uses roles for user authentication and authorization, much like Unix-style permissions. By default, PostgreSQL creates a user called `postgres` for basic authentication.
In this step, we will create a new PostgreSQL user for SFTPGo.
Login to the PostgreSQL shell using the command below.
```shell
sudo -i -u postgres psql
```
Next, create a new role `sftpgo` with the password `sftpgo_pg_pwd` using the following query.
```sql
create user "sftpgo" with encrypted password 'sftpgo_pg_pwd';
```
Next, create a new database `sftpgo.db` for the SFTPGo service using the following queries.
```sql
create database "sftpgo.db";
grant all privileges on database "sftpgo.db" to "sftpgo";
```
Exit from the PostgreSQL shell by typing `\q`.
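Optionally, you can verify that the new role can connect to the new database before proceeding (a minimal check using a connection URI, assuming password authentication is allowed on localhost):
```shell
psql "postgresql://sftpgo:sftpgo_pg_pwd@127.0.0.1:5432/sftpgo.db" -c '\conninfo'
```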
## Install SFTPGo
To install SFTPGo you can use the PPA [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
Start by adding the PPA.
```shell
sudo add-apt-repository ppa:sftpgo/sftpgo
sudo apt-get update
```
Next install SFTPGo.
```shell
sudo apt install sftpgo
```
After installation SFTPGo should already be running with the default configuration and configured to start automatically at boot; check its status using the following command.
```shell
systemctl status sftpgo
```
## Configure AWS credentials
We assume that you want to serve a single S3 bucket and you want to assign different "virtual folders" of this bucket to different SFTPGo virtual users. In this case it is very convenient to configure a credentials file so SFTPGo will automatically use it and you don't need to specify the same AWS credentials for each user.
You can manually create the `/var/lib/sftpgo/.aws/credentials` file and write your AWS credentials like this.
```shell
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
Alternatively, you can install the `AWS CLI` and manage the credentials using this tool.
```shell
sudo apt install awscli
```
and now set your credentials, region, and output format with the following command.
```shell
aws configure
```
Confirm that you can list your bucket contents with the following command.
```shell
aws s3 ls s3://mybucket
```
The AWS CLI will create the credentials file in `~/.aws/credentials`. The SFTPGo service runs as the `sftpgo` system user, whose home directory is `/var/lib/sftpgo`, so you need to copy the credentials file to the sftpgo home directory and assign it the proper permissions.
```shell
sudo mkdir /var/lib/sftpgo/.aws
sudo cp ~/.aws/credentials /var/lib/sftpgo/.aws/
sudo chown -R sftpgo:sftpgo /var/lib/sftpgo/.aws
```
## Configure SFTPGo
Now open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "postgresql",
"name": "sftpgo.db",
"host": "127.0.0.1",
"port": 5432,
"username": "sftpgo",
"password": "sftpgo_pg_pwd",
...
}
```
Alternatively (recommended), you can use environment variables by creating the file `/etc/sftpgo/env.d/postgresql.env` with the following content.
```shell
SFTPGO_DATA_PROVIDER__DRIVER=postgresql
SFTPGO_DATA_PROVIDER__NAME="sftpgo.db"
SFTPGO_DATA_PROVIDER__HOST=127.0.0.1
SFTPGO_DATA_PROVIDER__PORT=5432
SFTPGO_DATA_PROVIDER__USERNAME=sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD=sftpgo_pg_pwd
```
This way we set the PostgreSQL connection parameters.
If you want to connect to PostgreSQL over a Unix Domain socket you have to set the value `/var/run/postgresql` for the `host` configuration key instead of `127.0.0.1`.
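For example, the Unix Domain socket variant expressed with environment variables would look like this (only the `host` value changes):
```shell
SFTPGO_DATA_PROVIDER__DRIVER=postgresql
SFTPGO_DATA_PROVIDER__NAME="sftpgo.db"
SFTPGO_DATA_PROVIDER__HOST=/var/run/postgresql
SFTPGO_DATA_PROVIDER__USERNAME=sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD=sftpgo_pg_pwd
```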
You can further customize your configuration adding custom actions and other hooks. A full explanation of all configuration parameters can be found [here](../full-configuration.md).
Next, initialize the data provider with the following command.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2020-10-09T21:07:50.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2020-10-09T21:07:50.000 INF updating database schema version: 1 -> 2
2020-10-09T21:07:50.000 INF updating database schema version: 2 -> 3
2020-10-09T21:07:50.000 INF updating database schema version: 3 -> 4
2020-10-09T21:07:50.000 INF Data provider successfully initialized/updated
```
The default sftpgo systemd service starts after the network target. In this setup it is more appropriate to start it after the PostgreSQL service, so edit the service using the following command.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=postgresql.service
```
Confirm that `sftpgo.service` will start after `postgresql.service` with the next command.
```shell
$ systemctl show sftpgo.service | grep After=
After=postgresql.service systemd-journald.socket system.slice -.mount systemd-tmpfiles-setup.service network.target sysinit.target basic.target
```
Next restart the sftpgo service to use the new configuration and check that it is running.
```shell
sudo systemctl restart sftpgo
systemctl status sftpgo
```
## Create the first admin
To start using SFTPGo you need to create an admin user. The easiest way is to use the built-in Web Admin interface, so open the Web Admin URL and create the first admin user.
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
## Add virtual users
The easiest way to add virtual users is to use the built-in Web interface.
So navigate to the Web Admin URL again and log in using the credentials you just set up.
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
Click `Add` and fill in the user details; the minimum required parameters are:
- `Username`
- `Password` or `Public keys`
- `Permissions`
- `Home Dir` can be empty since we defined a default base dir
- Select `AWS S3 (Compatible)` as storage and then set `Bucket`, `Region` and optionally a `Key Prefix` if you want to restrict the user to a specific virtual folder in the bucket. The specified virtual folder does not need to be pre-created. You can leave `Access Key` and `Access Secret` empty since we defined global credentials for the `sftpgo` user and we use this system user to run the SFTPGo service.
You are done! Now you can connect to your SFTPGo instance using any compatible `sftp` client on port `2022`.
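For example (replace `myuser` with the username you just created and `127.0.0.1` with your server address):
```shell
sftp -P 2022 myuser@127.0.0.1
```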
You can mix S3 users with local users, but please be aware that we are running the service as the unprivileged `sftpgo` system user, so if you set storage to `local` for an SFTPGo virtual user then the home directory for this user must be owned by the `sftpgo` system user. If you don't specify a home directory, the default will be `/srv/sftpgo/data/<username>`, which should be appropriate.

View file

@ -1,122 +0,0 @@
# Two-factor Authentication (2FA)
Two-factor authentication (also known as 2FA) is a subset of multi-factor authentication. It allows SFTPGo users and admins to enable additional protection for their account by requiring a combination of two different authentication factors:
- something they know (e.g. their password)
- something they have (usually their smartphone).
2FA is an excellent way to improve your security profile and provide an added layer of protection to your data.
SFTPGo supports authenticator apps that use TOTP (time based one-time password). These include apps such as Authy, Google Authenticator and any other apps that support time-based one time passwords ([RFC 6238](https://datatracker.ietf.org/doc/html/rfc6238)).
## Preliminary Note
Before proceeding further you need a working SFTPGo installation.
## Configure SFTPGo
Two-factor authentication is enabled by default with the following settings.
```json
"mfa": {
"totp": [
{
"name": "Default",
"issuer": "SFTPGo",
"algo": "sha1"
}
]
},
```
The `issuer` and `algo` are visible/used in the authenticator apps. For example, you could set your company/organization name as `issuer` and an `algo` appropriate for your target apps/devices. The supported algorithms are: `sha1`, `sha256`, `sha512`. Currently the Google Authenticator app on iPhone seems to only support `sha1`, please check the compatibility with your target apps/devices before setting a different algorithm.
You can also define multiple configurations, for example one that uses `sha256` or `sha512` and another one that uses `sha1` and instruct your users to use the appropriate configuration for their devices/apps. The algorithm should not be changed if there are users or admins using the configuration. The `name` is visible to the users/admins when they select the 2FA configuration to use and it must be unique. A configuration name should not be changed if there are users or admins using it.
SFTPGo can use 2FA for `HTTP`, `SSH` (SFTP, SCP) and `FTP` protocols.
## Enable 2FA for admins
Each admin can view/change his/her two-factor authentication by selecting the `Two-Factor Auth` link from the top-right web UI menu.
![Admin 2FA](./img/admin-2FA.png)
Then select a configuration and click "Generate new secret key". A QR code will be generated which you can scan with a compatible app. After you have configured your app, enter a test code to ensure everything works correctly and click on "Save".
![Enable 2FA](./img/admin-save-2FA.png)
Then save the configuration.
SFTPGo automatically generates some recovery codes. They are a set of one-time use codes that can be used in place of the TOTP to log in to the web UI. You can use them to log in to your account if you lose access to your phone, and then disable or regenerate the TOTP configuration.
2FA is now enabled, so the next time you login with this admin you have to provide a valid authentication code after your password.
![Login with 2FA](./img/admin-2FA-login.png)
Two-factor authentication will also be required to use the REST API as this admin. You can provide the authentication code using the `X-SFTPGO-OTP` HTTP header.
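For example, assuming the admin obtains a JWT token using HTTP Basic authentication, the passcode can be passed like this (the credentials and the `123456` passcode are placeholders):
```shell
curl -u "admin:password" -H "X-SFTPGO-OTP: 123456" "http://localhost:8080/api/v2/token"
```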
If an admin loses access to their second factor auth device and has no recovery codes, another admin can disable second factor authentication for him/her using the `/api/v2/admins/{username}/2fa/disable` REST endpoint.
For example:
```shell
curl -X 'PUT' \
'http://localhost:8080/api/v2/admins/admin/2fa/disable' \
-H 'accept: application/json' \
-H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYzOTkzMTE3MiwianRpIjoiYzZ2bGd0NjEwZDFxYjZrdTBiNWciLCJuYmYiOjE2Mzk5Mjk5NDIsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiV20rYTF2bnVVc1VRYXA0TVZmSGtseWxObmR4TCswYVM3OVVjc1hXZitzdz0iLCJ1c2VybmFtZSI6ImFkbWluMSJ9.043lQFq7WRfJ-rdoCbp_TXEHbAo8Ihj5CAh3k8JQVQQ'
```
If you prefer a web UI instead of a CLI command to disable 2FA you can use the swagger UI interface available, by default, at the following URL `http://localhost:8080/openapi/swagger-ui`.
## Enable 2FA for users
Two-factor authentication is enabled by default for all SFTPGo users. Admins can disable 2FA per-user by selecting `mfa-disabled` in the Web client/REST API restrictions.
![User 2FA disabled](./img/user-2FA-disabled.png)
If 2FA is not disabled each user can enable it by selecting "Two-factor auth" from the left menu in the web UI.
Users can enable 2FA for HTTP, SSH and FTP protocols.
SSH protocol (SFTP/SCP/SSH commands) will ask for the passcode if the client uses keyboard interactive authentication.
HTTP protocol means Web UI and REST APIs. Web UI will ask for the passcode using a specific page. For REST APIs you have to specify the passcode using the `X-SFTPGO-OTP` HTTP header.
FTP has no standard way to support two-factor authentication, so if you enable the FTP protocol you have to append the TOTP passcode to the password. For example, if your password is "password" and your one-time passcode is "123456", you have to use "password123456" as the password.
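For example, using `curl` with the values above (assuming the FTP service is enabled and listening on port 2121; adjust host and port to your setup):
```shell
curl --user "nicola:password123456" ftp://127.0.0.1:2121/
```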
To enable 2FA, select the desired protocols and a configuration, then click "Generate new secret key". A QR code will be generated which you can scan with a compatible app. After you have configured your app, enter a test code to ensure everything works correctly and click on "Save".
![Enable 2FA](./img/user-save-2FA.png)
Then save the configuration.
SFTPGo automatically generates some recovery codes. They are a set of one-time use codes that can be used in place of the TOTP to log in to the web UI. You can use them to log in to your account if you lose access to your phone, and then disable or regenerate the TOTP configuration.
2FA is now enabled, so the next time you login with this user you have to provide a valid authentication code after your password.
![Login with 2FA](./img/user-2FA-login.png)
Two-factor authentication will also be required to use the REST API as this user. You can provide the authentication code using the `X-SFTPGO-OTP` header.
If this user tries to log in via SFTP, they must provide a valid authentication code after the password.
```shell
$ sftp -P 2022 nicola@127.0.0.1
(nicola@127.0.0.1) Password:
(nicola@127.0.0.1) Authentication code:
Connected to 127.0.0.1.
sftp> quit
```
If a user loses access to their second factor auth device and has no recovery codes then an admin can disable the second factor authentication using the `/api/v2/users/{username}/2fa/disable` REST endpoint.
For example:
```shell
curl -X 'PUT' \
'http://localhost:8080/api/v2/users/nicola/2fa/disable' \
-H 'accept: application/json' \
-H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYzOTkzMzI1MywianRpIjoiYzZ2bTE1ZTEwZDFxcG9iamc3djAiLCJuYmYiOjE2Mzk5MzIwMjMsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiV20rYTF2bnVVc1VRYXA0TVZmSGtseWxObmR4TCswYVM3OVVjc1hXZitzdz0iLCJ1c2VybmFtZSI6ImFkbWluMSJ9.ntR0L2JTuwYwhBy6c0iu10rdmycLdtKZtmDObQ0PUoo'
```
If you prefer a web UI instead of a CLI command to disable 2FA you can use the swagger UI interface available, by default, at the following URL `http://localhost:8080/openapi/swagger-ui`.

View file

@ -1,27 +0,0 @@
# HTTP/S storage backend
SFTPGo can use custom storage backend implementations compliant with the REST API documented [here](./../openapi/httpfs.yaml).
:warning: HTTPFs is a work in progress and makes no API stability promises.
The only required parameter is the HTTP/S endpoint that SFTPGo must use to make API calls.
If you define `http://127.0.0.1:9999/api/v1` as the endpoint, SFTPGo will append the API path; for example, for the `stat` API it will invoke `http://127.0.0.1:9999/api/v1/stat/{name}`.
You can set a `username` and/or a `password` to instruct SFTPGo to use basic authentication, or you can set an API key to instruct SFTPGo to add it to each API call in the `X-API-KEY` HTTP header.
Here is a mapping between HTTP response codes and protocol errors:
- `401`, `403`: permission denied error
- `404`: not found error
- `501`: not supported error
- `200`, `201`: no error
- any other response code: generic error
HTTPFs can also connect to UNIX domain sockets. To use UNIX domain sockets you need to set an endpoint with the following conventions:
- the URL schema can be `http` or `https` as usual.
- The URL host must be `unix`.
- The socket path is mandatory and is set using the `socket_path` query parameter. The path must be query escaped.
- The optional API prefix can be set using the `api_prefix` query parameter. The prefix must be query escaped.
Here is an example endpoint for UNIX domain socket connections: `http://unix?socket_path=%2Ftmp%2Fsftpgofs.sock&api_prefix=%2Fapi%2Fv1`. In this case we are connecting using the `HTTP` protocol to the socket `/tmp/sftpgofs.sock` and we use the `/api/v1` prefix for API URLs.

View file

@ -1,15 +0,0 @@
# Internationalization
SFTPGo uses the [i18next](https://www.i18next.com/) framework to manage the translated phrases in WebAdmin and WebClient.
Support for internationalization is experimental and may be removed in future releases. It mainly depends on the maintenance effort required and how useful this feature is for SFTPGo users. We currently only support English and Italian.
The translations are available via [Crowdin](https://crowdin.com/project/sftpgo), who have granted us an open source license.
Translating phrases is hard, even lexically correct translations in the wrong context can completely change the meaning and confuse users.
Also, this is an ongoing effort: translations will be added, removed and updated in every release, even minor ones.
For these reasons we don't accept contributions from casual users. If you want to add support for a new language we require:
- you are a native speaker of the target language with the technical skills necessary to correctly contextualize SFTPGo translation phrases and/or a professional translator
- you/your company ensure long-term commitment to the project and help the project to be long-term sustainable

View file

@ -1,169 +0,0 @@
# Keyboard Interactive Authentication
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
There are no restrictions on the number of questions asked at a particular authentication stage; there are also no restrictions on the number of stages involving different sets of questions.
To enable keyboard interactive authentication, you must set the absolute path of your authentication program or an HTTP URL using the `keyboard_interactive_auth_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PASSWORD`, this is the hashed password as stored inside the data provider
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must write the questions on its standard output, in a single line, using the following struct serialized as JSON:
- `instruction`, string. A short description to show to the user that is trying to authenticate. Can be empty or omitted
- `questions`, list of questions to be asked to the user
- `echos` list of boolean flags corresponding to the questions (so the lengths of both lists must be the same) and indicating whether the user's reply for a particular question should be echoed on the screen while they are typing: true if it should be echoed, or false if it should be hidden.
- `check_password` optional integer. Ask exactly one question and set this field to `1` if the expected answer is the user password and you want SFTPGo to check it for you, or to `2` if the user has the SFTPGo built-in TOTP enabled and the expected answer is the user's one time passcode. If the password/passcode is correct, the returned response to the program is `OK`. If the password is wrong, the program will be terminated and an authentication error will be returned to the user that is trying to authenticate.
- `auth_result`, integer. Set this field to 1 to indicate successful authentication. 0 is ignored. Any other value means authentication error. If this field is found and it is different from 0 then SFTPGo will not read any other questions from the external program, and it will finalize the authentication.
SFTPGo writes the user answers to the program standard input, one per line, in the same order as the questions.
Please be sure that your program receives the answers for all the issued questions before asking for the next ones.
Keyboard interactive authentication can be chained to the external authentication.
The authentication must finish within 60 seconds.
Let's see a very basic example. Our sample keyboard interactive authentication program will ask for 2 sets of questions and accept the user if the answer to the last question is `answer3`.
```shell
#!/bin/sh
echo '{"questions":["Question1: ","Question2: "],"instruction":"This is a sample for keyboard interactive authentication","echos":[true,false]}'
read ANSWER1
read ANSWER2
echo '{"questions":["Question3: "],"instruction":"","echos":[true]}'
read ANSWER3
if test "$ANSWER3" = "answer3"; then
echo '{"auth_result":1}'
else
echo '{"auth_result":-1}'
fi
```
and here is an example where SFTPGo checks the user password for you:
```shell
#!/bin/sh
echo '{"questions":["Password: "],"instruction":"This is a sample for keyboard interactive authentication","echos":[false],"check_password":1}'
read ANSWER1
if test "$ANSWER1" != "OK"; then
exit 1
fi
echo '{"questions":["One time token: "],"instruction":"","echos":[false]}'
read ANSWER2
if test "$ANSWER2" = "token"; then
echo '{"auth_result":1}'
else
echo '{"auth_result":-1}'
fi
```
If the hook is an HTTP URL then it will be invoked as HTTP POST multiple times for each login request.
The request body will contain a JSON struct with the following fields:
- `request_id`, string. Unique request identifier
- `step`, integer. Counter starting from 1
- `username`, string
- `ip`, string
- `password`, string. This is the hashed password as stored inside the data provider
- `answers`, list of string. It will be null for the first request
- `questions`, list of string. It will contain the previously asked questions. It will be null for the first request
The HTTP response code must be 200 and the body must contain the same JSON struct described for the program.
Let's see a basic sample. The configured hook is `http://127.0.0.1:8000/keyIntHookPwd`; as soon as the user tries to log in, SFTPGo makes this HTTP POST request:
```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 189
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","step":1,"password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA=="}
```
As you can see, in this first request `answers` and `questions` are null.
Here is the response that instructs SFTPGo to ask for the user password and to check it:
```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:24 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 143
{"questions": ["Password: "], "check_password": 1, "instruction": "This is a sample for keyboard interactive authentication", "echos": [false]}
```
The user enters the correct password, so SFTPGo makes a new HTTP POST. Please note that the `request_id` is the same as in the previous request; this time the asked `questions` and the user's `answers` are not null:
```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 233
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","step":2,"username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["OK"],"questions":["Password: "]}
```
Here is the HTTP response that instructs SFTPGo to ask for a new question:
```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:27 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 66
{"questions": ["Question2: "], "instruction": "", "echos": [true]}
```
As soon as the user answers this question, SFTPGo makes a new HTTP POST request with the user's answers:
```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 239
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","step":3,"username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["answer2"],"questions":["Question2: "]}
```
Here is the final HTTP response that allows the user to log in:
```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:29 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 18
{"auth_result": 1}
```
An example keyboard interactive program that allows authentication using [Twilio Authy 2FA](https://www.twilio.com/docs/authy) can be found inside the source tree's [authy](../examples/OTP/authy) directory.

View file

@ -1,32 +0,0 @@
# Key Management Services
SFTPGo stores sensitive data such as Cloud account credentials or passphrases to derive per-object encryption keys. This data is stored as ciphertext and only loaded into RAM in plaintext when needed.
## Supported Services for encryption and decryption
The `secrets` section of the `kms` configuration allows you to configure how to encrypt and decrypt sensitive data. The following configuration parameters are available:
- `url` defines the URI to the KMS service
- `master_key`, defines the master encryption key as string. If not empty, it takes precedence over `master_key_path`.
- `master_key_path` defines the absolute path to a file containing the master encryption key. This could be, for example, a docker secret or a file protected with filesystem level permissions.
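A minimal sketch for generating a master key file and pointing SFTPGo at it via environment variables (the path, ownership and the `SFTPGO_KMS__SECRETS__MASTER_KEY_PATH` mapping follow the usual conventions but should be adapted and double-checked against your setup):
```shell
# Generate a random 256-bit master key and restrict access to it
sudo sh -c 'openssl rand -hex 32 > /etc/sftpgo/master.key'
sudo chown sftpgo:sftpgo /etc/sftpgo/master.key
sudo chmod 600 /etc/sftpgo/master.key

# Reference it from the kms configuration, e.g. in /etc/sftpgo/env.d/kms.env
SFTPGO_KMS__SECRETS__MASTER_KEY_PATH="/etc/sftpgo/master.key"
```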
### Local provider
If the `url` is empty SFTPGo uses local encryption for keeping secrets. Internally, it uses the [NaCl secret box](https://pkg.go.dev/golang.org/x/crypto/nacl/secretbox) algorithm to perform encryption and authentication.
We first generate a random key, then the per-object encryption key is derived from this random key in the following way:
1. a master key is provided: the encryption key is derived using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869)
2. no master key is provided: the encryption key is derived as simple hash of the random key. This is the default configuration.
For compatibility with SFTPGo versions 1.2.x and before we also support encryption based on `AES-256-GCM`. The data encrypted with this algorithm will never use the master key to keep backward compatibility. You can activate it using `builtin://` as `url` but this is not recommended.
### Cloud providers
Several cloud providers are supported using the [sftpgo-plugin-kms](https://github.com/sftpgo/sftpgo-plugin-kms).
### Notes
- The KMS configuration is global.
- If you set a master key you will be unable to decrypt the data without this key, and the SFTPGo users that need the data as plain text will be unable to log in.
- You can start using the local provider and then switch to an external one but you can't switch between external providers and still be able to decrypt the data encrypted using the previous provider.

View file

@ -1,67 +0,0 @@
# Logs
The log file is a stream of JSON structs. Each struct has a `sender` field that identifies the log type.
The logs can be divided into the following categories:
- **"app logs"**, internal logs used to debug SFTPGo:
- `sender` string. This is generally the package name that emits the log
- `time` string. Date/time with millisecond precision
- `level` string
- `connection_id`, string, optional
- `message` string
- **"transfer logs"**, SFTP/SCP transfer logs:
- `sender` string. `Upload` or `Download`
- `time` string. Date/time with millisecond precision
- `level` string
- `local_addr` string. IP/port of the local address the connection arrived on. For FTP protocol this is the address for the control connection. For example `127.0.0.1:1234`
- `remote_addr` string. IP and, optionally, port of the remote client. For example `127.0.0.1:1234` or `127.0.0.1`
- `elapsed_ms`, int64. Elapsed time, as milliseconds, for the upload/download
- `size_bytes`, int64. Size, as bytes, of the download/upload
- `username`, string
- `file_path` string
- `connection_id` string. Unique connection identifier
- `protocol` string. `SFTP`, `SCP`, `SSH`, `FTP`, `HTTP`, `HTTPShare`, `DAV`, `DataRetention`, `EventAction`
- `ftp_mode`, string. `active` or `passive`. Included only for `FTP` protocol
- **"command logs"**, SFTP/SCP command logs:
- `sender` string. `Rename`, `Rmdir`, `Mkdir`, `Symlink`, `Remove`, `Chmod`, `Chown`, `Chtimes`, `Truncate`, `Copy`, `SSHCommand`
- `level` string
- `local_addr` string. IP/port of the local address the connection arrived on. For example `127.0.0.1:1234`
- `remote_addr` string. IP and, optionally, port of the remote client. For example `127.0.0.1:1234` or `127.0.0.1`
- `username`, string
- `file_path` string
- `target_path` string
- `filemode` string. Valid for sender `Chmod` otherwise empty
- `uid` integer. Valid for sender `Chown` otherwise -1
- `gid` integer. Valid for sender `Chown` otherwise -1
- `access_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes` otherwise empty
- `modification_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes` otherwise empty
- `size` int64. Valid for sender `Truncate` otherwise -1
- `elapsed`, int64. Elapsed time, as milliseconds
- `ssh_command`, string. Valid for sender `SSHCommand` otherwise empty
- `connection_id` string. Unique connection identifier
- `protocol` string. `SFTP`, `SCP`, `SSH`, `FTP`, `HTTP`, `DAV`, `DataRetention`, `EventAction`
- **"http logs"**, REST API logs:
- `sender` string. `httpd`
- `level` string
- `time` string. Date/time with millisecond precision
- `local_addr` string. IP/port of the local address the connection arrived on. For example `127.0.0.1:1234`
- `remote_addr` string. IP and, optionally, port of the remote client. For example `127.0.0.1:1234` or `127.0.0.1`
- `proto` string, for example `HTTP/1.1`
- `method` string. HTTP method (`GET`, `POST`, `PUT`, `DELETE` etc.)
- `request_id` string. Omitted in telemetry logs
- `user_agent` string
- `uri` string. Full uri
- `resp_status` integer. HTTP response status code
- `resp_size` integer. Size in bytes of the HTTP response
- `elapsed_ms` int64. Elapsed time, as milliseconds, to complete the request
- `request_id` string. Unique request identifier
- **"connection failed logs"**, logs for failed attempts to initialize a connection. A connection can fail for an authentication error or other errors such as a client abort or a timeout if the login does not happen in two minutes
- `sender` string. `connection_failed`
- `level` string
- `time` string. Date/time with millisecond precision
- `username`, string. Can be empty if the connection is closed before an authentication attempt
- `client_ip` string.
- `protocol` string. Possible values are `SSH`, `FTP`, `DAV`
- `login_type` string. Can be `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tried`
- `error` string. Optional error description
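Since each log line is a self-contained JSON struct, tools like `jq` work well for ad-hoc filtering. A small sketch, assuming logs are written to a file (the path depends on how you installed and configured SFTPGo):
```shell
# Show only transfer logs for a given user
tail -f /var/log/sftpgo/sftpgo.log | jq -c 'select((.sender == "Upload" or .sender == "Download") and .username == "nicola")'
```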

View file

@ -1,20 +0,0 @@
# Metrics
SFTPGo supports [Prometheus](https://prometheus.io/) metrics at the `/metrics` HTTP endpoint of the telemetry server.
Several counters and gauges are available, for example:
- Total uploads and downloads
- Total upload and download size
- Total upload and download errors
- Total executed SSH commands
- Total SSH command errors
- Number of active connections
- Data provider availability
- Total successful and failed logins using password, public key, keyboard interactive authentication or supported multi-step authentications
- Total HTTP requests served and totals for response code
- Go's runtime details about GC, number of goroutines and OS threads
- Process information like CPU, memory, file descriptor usage and start time
Please check the `/metrics` page for more details.
The telemetry server is disabled by default in the released [sftpgo.json](https://raw.githubusercontent.com/drakkan/sftpgo/main/sftpgo.json). To enable it, look in [docs/full-configuration.md](https://raw.githubusercontent.com/drakkan/sftpgo/main/docs/full-configuration.md) for configuration details.
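As a sketch, assuming you enable the telemetry server on port `10000` via environment variables (check the configuration key names against the full configuration docs), you can then fetch the metrics with `curl`:
```shell
# /etc/sftpgo/env.d/telemetry.env
SFTPGO_TELEMETRY__BIND_PORT=10000
SFTPGO_TELEMETRY__BIND_ADDRESS=127.0.0.1

# after restarting SFTPGo
curl -s http://127.0.0.1:10000/metrics | head -n 30
```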

View file

@ -1,170 +0,0 @@
# OpenID Connect
OpenID Connect integration allows you to map your identity provider users to SFTPGo admins/users,
so you can log in to the SFTPGo Web Client and Web Admin user interfaces using your own identity provider.
SFTPGo allows you to configure OpenID Connect on a per-binding basis. The supported configuration parameters are documented within the `oidc` section [here](./full-configuration.md).
Let's see a basic integration with the [Keycloak](https://www.keycloak.org/) identity provider. Other OpenID Connect compatible providers should work when configured in a similar way.
We'll not go through the complete process of creating a realm/clients/users in Keycloak. You can look this up [here](https://www.keycloak.org/docs/latest/server_admin/index.html#admin-console).
Here is just an outline:
- create a realm named `sftpgo`
- in "Realm Settings" -> "Login" adjust the "Require SSL" setting as per your requirements
- create a client named `sftpgo-client`
- for the `sftpgo-client` set the `Access Type` to `confidential` and a valid redirect URI, for example if your SFTPGo instance is running on `http://192.168.1.50:8080` a valid redirect URI is `http://192.168.1.50:8080/*`
- for the `sftpgo-client`, in the `Mappers` settings, make sure that the username and the sftpgo role are added to the ID token. For example you can add the user attribute `sftpgo_role` as JSON string to the ID token and the `username` as `preferred_username` JSON string to the ID token
- for your users who need to be mapped as SFTPGo administrators add a custom attribute specifying `sftpgo_role` as key and `admin` as value
The resulting JSON configuration for the `sftpgo-client` that you can obtain from the "Installation" tab is something like this:
```json
{
"realm": "sftpgo",
"auth-server-url": "http://192.168.1.12:8086/auth/",
"ssl-required": "none",
"resource": "sftpgo-client",
"credentials": {
"secret": "jRsmE0SWnuZjP7djBqNq0mrf8QN77j2c"
},
"confidential-port": 0
}
```
Add the following configuration parameters to the SFTPGo configuration file.
```json
...
"oidc": {
"client_id": "sftpgo-client",
"client_secret": "jRsmE0SWnuZjP7djBqNq0mrf8QN77j2c",
"config_url": "http://192.168.1.12:8086/auth/realms/sftpgo",
"redirect_base_url": "http://192.168.1.50:8080",
"scopes": [
"openid",
"profile",
"email"
],
"username_field": "preferred_username",
"role_field": "sftpgo_role",
"implicit_roles": false,
"custom_fields": []
}
...
```
Alternatively (recommended), you can use environment variables by creating the file `oidc.env` in the `env.d` directory with the following content.
```shell
SFTPGO_HTTPD__BINDINGS__0__OIDC__CLIENT_ID="sftpgo-client"
SFTPGO_HTTPD__BINDINGS__0__OIDC__CLIENT_SECRET="jRsmE0SWnuZjP7djBqNq0mrf8QN77j2c"
SFTPGO_HTTPD__BINDINGS__0__OIDC__CONFIG_URL="http://192.168.1.12:8086/auth/realms/sftpgo"
SFTPGO_HTTPD__BINDINGS__0__OIDC__REDIRECT_BASE_URL="http://192.168.1.50:8080"
SFTPGO_HTTPD__BINDINGS__0__OIDC__USERNAME_FIELD="preferred_username"
SFTPGO_HTTPD__BINDINGS__0__OIDC__ROLE_FIELD="sftpgo_role"
```
SFTPGo will automatically add the `/.well-known/openid-configuration` suffix to the provided `config_url` and uses [OpenID Connect Discovery specifications](https://openid.net/specs/openid-connect-discovery-1_0.html) to obtain information needed to interact with it, including its OAuth 2.0 endpoint locations.
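You can check that the discovery document is reachable from the SFTPGo host before testing the login, for example:
```shell
curl http://192.168.1.12:8086/auth/realms/sftpgo/.well-known/openid-configuration
```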
From the SFTPGo login page, click the `Login with OpenID` button: you will be redirected to the Keycloak login page and, after a successful authentication, Keycloak will redirect back to the SFTPGo Web Admin or SFTPGo Web Client.
Please note that the ID token returned from Keycloak must contain the `username_field` specified in the SFTPGo configuration and optionally the `role_field`. The mapped usernames must exist in SFTPGo.
If you don't want to explicitly define SFTPGo roles in your identity provider, you can set `implicit_roles` to `true`. With this configuration, the SFTPGo role is assumed based on the login link used.
Here is an example ID token which allows the SFTPGo admin `root` to access the Web Admin UI.
```json
{
"exp": 1644758026,
"iat": 1644757726,
"auth_time": 1644757647,
"jti": "c6cf172d-08d6-41cf-8e5d-20b7ac0b8011",
"iss": "http://192.168.1.12:8086/auth/realms/sftpgo",
"aud": "sftpgo-client",
"sub": "48b0de4b-3090-4315-bbcb-be63c48be1d2",
"typ": "ID",
"azp": "sftpgo-client",
"nonce": "XLxfYDhMmWwiYctgLTCZjC",
"session_state": "e20ab97c-d3a9-4e53-872d-09d104cbd286",
"at_hash": "UwubF1W8H0XItHU_DIpjfQ",
"acr": "0",
"sid": "e20ab97c-d3a9-4e53-872d-09d104cbd286",
"email_verified": false,
"preferred_username": "root",
"sftpgo_role": "admin"
}
```
And the following is an example ID token which allows the SFTPGo user `user1` to access the Web Client UI.
```json
{
"exp": 1644758183,
"iat": 1644757883,
"auth_time": 1644757647,
"jti": "939de932-f941-4b04-90fc-7071b7cc6b10",
"iss": "http://192.168.1.12:8086/auth/realms/sftpgo",
"aud": "sftpgo-client",
"sub": "48b0de4b-3090-4315-bbcb-be63c48be1d2",
"typ": "ID",
"azp": "sftpgo-client",
"nonce": "wxcWPPi3H7ktembUdeToqQ",
"session_state": "e20ab97c-d3a9-4e53-872d-09d104cbd286",
"at_hash": "RSDpwzVG_6G2haaNF0jsJQ",
"acr": "0",
"sid": "e20ab97c-d3a9-4e53-872d-09d104cbd286",
"email_verified": false,
"preferred_username": "user1"
}
```
SFTPGo users (not admins) can be created/updated after successful OpenID authentication by defining a [pre-login hook](./dynamic-user-mod.md).
Users and admins can also be created/updated after successful OpenID authentication using the [EventManager](./eventmanager.md).
You can use `scopes` configuration to request additional information (claims) about authenticated users (See your provider's own documentation for more information).
By default the scopes `"openid", "profile", "email"` are retrieved.
The `custom_fields` configuration parameter can be used to define claim field names to pass to the pre-login hook;
these fields can be used, for example, to implement custom logic when creating/updating the SFTPGo user within the hook.
For example, if you have created a scope named `sftpgo` in your identity provider to provide a claim for `sftpgo_home_dir`,
then you can add it to the `custom_fields` in the SFTPGo configuration like this:
```json
...
"oidc": {
"client_id": "sftpgo-client",
"client_secret": "jRsmE0SWnuZjP7djBqNq0mrf8QN77j2c",
"config_url": "http://192.168.1.12:8086/auth/realms/sftpgo",
"redirect_base_url": "http://192.168.1.50:8080",
"username_field": "preferred_username",
"scopes": [ "openid", "profile", "email", "sftpgo" ],
"role_field": "sftpgo_role",
"custom_fields": ["sftpgo_home_dir"]
}
...
```
Alternatively (recommended), you can use environment variables by creating the file `oidc.env` in the `env.d` directory with the following content.
```shell
SFTPGO_HTTPD__BINDINGS__0__OIDC__CLIENT_ID="sftpgo-client"
SFTPGO_HTTPD__BINDINGS__0__OIDC__CLIENT_SECRET="jRsmE0SWnuZjP7djBqNq0mrf8QN77j2c"
SFTPGO_HTTPD__BINDINGS__0__OIDC__CONFIG_URL="http://192.168.1.12:8086/auth/realms/sftpgo"
SFTPGO_HTTPD__BINDINGS__0__OIDC__REDIRECT_BASE_URL="http://192.168.1.50:8080"
SFTPGO_HTTPD__BINDINGS__0__OIDC__USERNAME_FIELD="preferred_username"
SFTPGO_HTTPD__BINDINGS__0__OIDC__SCOPES="openid,profile,email,sftpgo"
SFTPGO_HTTPD__BINDINGS__0__OIDC__ROLE_FIELD="sftpgo_role"
SFTPGO_HTTPD__BINDINGS__0__OIDC__CUSTOM_FIELDS="sftpgo_home_dir"
```
The pre-login hook will receive a JSON serialized user with the following field:
```json
...
"oidc_custom_fields": {
"sftpgo_home_dir": "configured value"
},
...
```
In EventManager actions you can use the placeholder `{{IDPFieldsftpgo_home_dir}}` for string-based custom fields.

View file

@ -1,164 +0,0 @@
# Performance
SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.
For Multi-Gig connections, some performance improvements and comparisons with OpenSSH have been discussed [here](https://github.com/drakkan/sftpgo/issues/69); most of them have been included in the main branch. To summarize:
- In the current state, with all performance improvements applied, SFTP performance is very close to OpenSSH, however CPU usage is higher. SCP performance matches OpenSSH.
- The main bottlenecks are encryption and message authentication, so if you can use a fast cipher with implicit message authentication, such as `aes128-gcm@openssh.com`, you will get a big performance boost (see the example after this list).
- The SCP protocol is much simpler than SFTP, so SFTPGo's multi-platform SCP implementation performs better than its SFTP implementation.
- Load balancing with HAProxy can greatly improve performance if the CPU does not become the bottleneck.
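For example, you can ask an OpenSSH client to prefer that cipher when connecting to SFTPGo (a client-side setting; the server must also allow the cipher, and the host/user below are placeholders):
```shell
sftp -o Ciphers=aes128-gcm@openssh.com -P 2022 nicola@sftpgo.example.com
```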
## Benchmark
### Hardware specification
**Server** ||
--- | --- |
OS| Debian 10.2 x64 |
CPU| Ryzen5 3600 |
RAM| 64GB 2400MHz ECC |
Disk| Ramdisk |
Ethernet| Mellanox ConnectX-3 40GbE|
**Client** ||
--- | --- |
OS| Ubuntu 19.10 x64 |
CPU| Threadripper 1920X |
RAM| 64GB 2400MHz ECC |
Disk| Ramdisk |
Ethernet| Mellanox ConnectX-3 40GbE|
### Test configurations
- `Baseline`: SFTPGo version 0.9.6.
- `Devel`: SFTPGo commit b0ed1905918b9dcc22f9a20e89e354313f491734, compiled with Golang 1.14.2. This is basically the same as v1.0.0 as far as performance is concerned.
- `Optimized`: Various [optimizations](#Optimizations-applied) applied on top of `Devel`.
- `Balanced`: Two optimized instances, running on localhost, load balanced by HAProxy 2.1.3.
- `OpenSSH`: OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019
The server's CPU is in Eco mode; you can expect better results in certain cases with a stronger CPU, especially for multi-stream HAProxy balanced load.
#### Cipher aes128-ctr
The Message Authentication Code (MAC) used is `hmac-sha2-256`.
##### SFTP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|150|243|319|412|452|
2|267|452|600|740|735|
3|351|637|802|991|1045|
4|414|811|1002|1192|1265|
8|536|1451|1742|1552|1798|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|172|273|343|407|426|
2|284|469|595|673|738|
3|368|644|820|881|1090|
4|446|851|1041|1026|1244|
8|605|1210|1368|1273|1820|
##### SCP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|220|369|525|611|558|
2|437|659|941|1048|856|
3|635|1000|1365|1363|1201|
4|787|1272|1664|1610|1415|
8|1297|2129|2690|2100|1959|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|208|312|400|458|508|
2|360|516|647|745|926|
3|476|678|861|935|1254|
4|576|836|1080|1099|1569|
8|857|1161|1416|1433|2271|
#### Cipher aes128-gcm@openssh.com
With this cipher, message authentication is implicit; no SHA256 computation is needed.
##### SFTP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|332|423|<--|583|443|
2|533|755|<--|970|809|
3|666|1045|<--|1249|1098|
4|762|1276|<--|1461|1351|
8|886|2064|<--|1825|1933|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|348|410|<--|527|469|
2|596|729|<--|842|930|
3|778|974|<--|1088|1341|
4|886|1192|<--|1232|1494|
8|1042|1578|<--|1433|1893|
##### SCP
Download:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|776|793|<--|832|578|
2|1343|1415|<--|1435|938|
3|1815|1878|<--|1877|1279|
4|2192|2205|<--|2056|1567|
8|3237|3287|<--|2493|2036|
Upload:
Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|528|545|<--|608|584|
2|872|849|<--|975|1019|
3|1121|1138|<--|1217|1412|
4|1367|1387|<--|1368|1755|
8|1733|1744|<--|1664|2510|
### Optimizations applied
- AES-CTR optimization of the Go compiler for x86_64: there is a [patch](https://go-review.googlesource.com/c/go/+/51670) that hasn't been merged yet, which you can apply yourself.
### HAProxy configuration
Here is the relevant HAProxy configuration used for the `Balanced` test configuration:
```console
frontend sftp
bind :2222
mode tcp
timeout client 600s
default_backend sftpgo
backend sftpgo
mode tcp
balance roundrobin
timeout connect 10s
timeout server 600s
timeout queue 30s
option tcp-check
tcp-check expect string SSH-2.0-
server sftpgo1 127.0.0.1:2022 check send-proxy-v2 weight 10 inter 10s rise 2 fall 3
server sftpgo2 127.0.0.1:2024 check send-proxy-v2 weight 10 inter 10s rise 2 fall 3
```

View file

@ -1,38 +0,0 @@
# Plugin system
SFTPGo's plugins are completely separate, standalone applications that SFTPGo executes and communicates with over RPC. This means the plugin process does not share the same memory space as SFTPGo and therefore can only access the interfaces and arguments given to it. This also means a crash in a plugin cannot crash the entirety of SFTPGo.
## Configuration
The plugins are configured via the `plugins` section in the main SFTPGo configuration file. You basically have to set the path to your plugin, the plugin type and any plugin-specific configuration parameters. If you set the SHA256 checksum for the plugin executable, SFTPGo will verify the plugin integrity before starting it.
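For example, you can compute the checksum of a plugin executable like this (the plugin path is just an example):
```shell
sha256sum /usr/local/bin/sftpgo-plugin-auth
```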
For added security you can enable the automatic TLS. In this way, the client and the server automatically negotiate mutual TLS for transport authentication. This ensures that only the original client will be allowed to connect to the server, and all other connections will be rejected. The client will also refuse to connect to any server that isn't the original instance started by the client.
The following plugin types are supported:
- `auth`, allows you to authenticate users.
- `notifier`, allows you to receive notifications for supported filesystem events such as file uploads, downloads etc. and provider events such as objects add, update, delete.
- `kms`, allows you to support additional KMS providers.
- `ipfilter`, allows you to allow/deny access based on the client IP.
Full configuration details can be found [here](./full-configuration.md).
:warning: Please note that the plugin system is experimental; the configuration parameters and interfaces may change in a backward incompatible way in the future.
## Available plugins
Some "official" supported plugins can be found [here](https://github.com/sftpgo/).
## Plugin Development
:warning: Advanced topic! Plugin development is a highly advanced topic in SFTPGo, and is not required knowledge for day-to-day usage. If you don't plan on writing any plugins, we recommend not reading this section of the documentation.
Because SFTPGo communicates with plugins over an RPC interface, you can build and distribute a plugin for SFTPGo without having to rebuild SFTPGo itself.
In theory, because the plugin interface is HTTP, you could even develop a plugin using a completely different programming language! (Disclaimer: you would also have to re-implement the plugin API, which is not a trivial amount of work.)
Developing a plugin is simple. The only knowledge necessary to write a plugin is basic command-line skills and basic knowledge of the [Go programming language](http://golang.org/).
Your plugin implementation needs to satisfy the interface for the plugin type you want to build. You can find these definitions in the [docs](https://pkg.go.dev/github.com/sftpgo/sdk/plugin#section-directories).
The SFTPGo plugin system uses the HashiCorp [go-plugin](https://github.com/hashicorp/go-plugin) library. Please refer to its documentation for more in-depth information on writing plugins.

View file

@ -1,186 +0,0 @@
# Portable mode
SFTPGo allows you to share a single directory on demand using the `portable` subcommand:
```console
sftpgo portable --help
To serve the current working directory with auto generated credentials simply
use:
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters
Usage:
sftpgo portable [flags]
Flags:
--allowed-patterns stringArray Allowed file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"
--az-access-tier string Leave empty to use the default
container setting
--az-account-key string
--az-account-name string
--az-container string
--az-download-concurrency int How many parts are downloaded in
parallel (default 5)
--az-download-part-size int The buffer size for multipart downloads
(MB) (default 5)
--az-endpoint string Leave empty to use the default:
"blob.core.windows.net"
--az-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--az-sas-url string Shared access signature URL
--az-upload-concurrency int How many parts are uploaded in
parallel (default 5)
--az-upload-part-size int The buffer size for multipart uploads
(MB) (default 5)
--az-use-emulator
-c, --config-dir string Location of the config dir. This directory
is used as the base for files with a relative
path, e.g. the private keys for the SFTP
server or the database file if you use a
file-based data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
properties config files. The default config
file name is "sftpgo" and therefore
"sftpgo.json", "sftpgo.yaml" and so on are
searched.
This flag can be set using SFTPGO_CONFIG_DIR
env var too. (default ".")
--config-file string Path to SFTPGo configuration file.
This flag explicitly defines the path, name
and extension of the config file. If must be
an absolute path or a path relative to the
configuration directory. The specified file
name must have a supported extension (JSON,
YAML, TOML, HCL or Java properties).
This flag can be set using SFTPGO_CONFIG_FILE
env var too.
--crypto-passphrase string Passphrase for encryption/decryption
--denied-patterns stringArray Denied file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"
-d, --directory string Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
(default ".")
-f, --fs-provider string osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
azblobfs => Azure Blob Storage (legacy: 3)
cryptfs => Encrypted local filesystem (legacy: 4)
sftpfs => SFTP (legacy: 5) (default "osfs")
--ftpd-cert string Path to the certificate file for FTPS
--ftpd-key string Path to the key file for FTPS
--ftpd-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
--gcs-automatic-credentials int 0 means explicit credentials using
a JSON credentials file, 1 automatic
(default 1)
--gcs-bucket string
--gcs-credentials-file string Google Cloud Storage JSON credentials
file
--gcs-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--gcs-storage-class string
--grace-time int This grace time defines the number of
seconds allowed for existing transfers
to get completed before shutting down.
A graceful shutdown is triggered by an
interrupt signal.
-h, --help help for portable
--httpd-cert string Path to the certificate file for WebClient
over HTTPS
--httpd-key string Path to the key file for WebClient over
HTTPS
--httpd-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
-l, --log-file-path string Leave empty to disable logging
--log-level string Set the log level.
Supported values:
debug, info, warn, error.
(default "debug")
--log-utc-time Use UTC time for logging
-p, --password string Leave empty to use an auto generated
value
--password-file string Read the password from the specified
file path. Leave empty to use an auto
generated value
-g, --permissions strings User's permissions. "*" means any
permission (default [list,download])
-k, --public-key strings
--s3-access-key string
--s3-access-secret string
--s3-acl string
--s3-bucket string
--s3-endpoint string
--s3-force-path-style Force path style bucket URL
--s3-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--s3-region string
--s3-role-arn string
--s3-skip-tls-verify If enabled the S3 client accepts any TLS
certificate presented by the server and
any host name in that certificate.
In this mode, TLS is susceptible to
man-in-the-middle attacks.
This should be used only for testing.
--s3-storage-class string
--s3-upload-concurrency int How many parts are uploaded in
parallel (default 2)
--s3-upload-part-size int The buffer size for multipart uploads
(MB) (default 5)
--sftp-buffer-size int The size of the buffer (in MB) to use
for transfers. By enabling buffering,
the reads and writes, from/to the
remote SFTP server, are split in
multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times
--sftp-disable-concurrent-reads Concurrent reads are safe to use and
disabling them will degrade performance.
Disable for read once servers
--sftp-endpoint string SFTP endpoint as host:port for SFTP
provider
--sftp-fingerprints strings SFTP fingerprints to verify remote host
key for SFTP provider
--sftp-key-path string SFTP private key path for SFTP provider
--sftp-password string SFTP password for SFTP provider
--sftp-prefix string SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server
--sftp-username string SFTP user for SFTP provider
-s, --sftpd-port int 0 means a random unprivileged port,
< 0 disabled
--ssh-commands strings SSH commands to enable.
"*" means any supported SSH command
including scp
(default [md5sum,sha1sum,sha256sum,cd,pwd,scp])
--start-directory string Alternate start directory.
This is a virtual path not a filesystem
path (default "/")
-u, --username string Leave empty to use an auto generated
value
--webdav-cert string Path to the certificate file for WebDAV
over HTTPS
--webdav-key string Path to the key file for WebDAV over
HTTPS
--webdav-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
```
In portable mode you can apply further customizations using a configuration file or environment variables, just as in service mode.
SFTP, FTP, HTTP and WebDAV settings configured using the CLI flags are applied to the first binding; any additional bindings will not be affected.
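For example (a minimal sketch; the directory, ports and credentials are placeholders):

```shell
# share ./shared over SFTP on port 2022 with auto-generated credentials
sftpgo portable --directory ./shared --sftpd-port 2022

# share it over FTP and WebDAV too, with explicit credentials and permissions
sftpgo portable -d ./shared -s 2022 --ftpd-port 2121 --webdav-port 8090 \
  -u demo -p secret -g list,download,upload
```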

View file

@ -1,24 +0,0 @@
# Post-connect hook
This hook is executed as soon as a new connection is established. It notifies the connection's IP address and protocol. Based on the received response, the connection is accepted or rejected. By combining this hook with the [Post-login hook](./post-login-hook.md) you can implement your own blocklist/allowlist of IP addresses, even per protocol.
The `post_connect_hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_CONNECTION_IP`
- `SFTPGO_CONNECTION_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`, `OIDC` (OpenID Connect)
If the external command completes with a zero exit status the connection will be accepted, otherwise it will be rejected.
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must finish within 20 seconds.
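For example, an external program hook could be a minimal shell script like this (the blocked address is a placeholder):

```shell
#!/bin/sh
# reject FTP connections from a specific address, accept everything else
if [ "$SFTPGO_CONNECTION_PROTOCOL" = "FTP" ] && [ "$SFTPGO_CONNECTION_IP" = "203.0.113.10" ]; then
  exit 1
fi
exit 0
```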
If the hook defines an HTTP URL then this URL will be invoked as HTTP GET with the following query parameters:
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `OIDC` (OpenID Connect)
The connection is accepted if the HTTP response code is `200`, otherwise it is rejected.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry, TLS and headers configurations. See the HTTP Clients (`http`) section of the [config reference](./full-configuration.md).

View file

@ -1,26 +0,0 @@
# Post-disconnect hook
This hook is executed as soon as an SSH/FTP connection is closed. SSH is a multiplexing protocol: a client can open multiple channels on a single connection or can disconnect without opening any channels. For SSH-based connections (SFTP/SCP/SSH commands), SFTPGo notifies the disconnection of the channel, so there is no exact match with the post-connect hook.
The hook is not executed for stateless protocols such as HTTP and WebDAV.
The `post_disconnect_hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_CONNECTION_IP`
- `SFTPGO_CONNECTION_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`, `OIDC` (OpenID Connect)
- `SFTPGO_CONNECTION_USERNAME`, can be empty if the channel is closed before user authentication
- `SFTPGO_CONNECTION_DURATION`, connection duration in milliseconds
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must finish within 20 seconds.
If the hook defines an HTTP URL then this URL will be invoked as HTTP GET with the following query parameters:
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`, `OIDC` (OpenID Connect)
- `username`, can be empty if the channel is closed before user authentication
- `connection_duration`, connection duration in milliseconds
The HTTP hook will use the global configuration for HTTP clients and will respect the retry, TLS and headers configurations. See the HTTP Clients (`http`) section of the [config reference](./full-configuration.md).

View file

@ -1,29 +0,0 @@
# Post-login hook
This hook is executed after a login or after closing a connection for authentication timeout. Defining an appropriate `post_login_scope` you can get notifications for failed logins, successful logins or both.
The `post-login-hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_LOGIND_USER`, it contains the user serialized as JSON. The username is empty if the connection is closed for authentication timeout
- `SFTPGO_LOGIND_IP`
- `SFTPGO_LOGIND_METHOD`, possible values are `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive`, `TLSCertificate`, `TLSCertificate+password`, `no_auth_tried`, `IDP` (external identity provider)
- `SFTPGO_LOGIND_STATUS`, `1` means the login succeeded, `0` means it failed
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`, `OIDC` (OpenID Connect)
Global environment variables are cleared, for security reasons, when the script is called. You can set additional environment variables in the "command" configuration section.
The program must finish within 20 seconds.
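For example, an external program hook could log failed logins with a minimal script like this (the log path is a placeholder):

```shell
#!/bin/sh
# append failed logins (status 0) to a log file
if [ "$SFTPGO_LOGIND_STATUS" = "0" ]; then
  echo "$(date -u) failed $SFTPGO_LOGIND_PROTOCOL login from $SFTPGO_LOGIND_IP via $SFTPGO_LOGIND_METHOD" >> /var/lib/sftpgo/failed-logins.log
fi
```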
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol, the IP address and the status of the user are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH&status=1`.
The request body will contain the user serialized as JSON.
The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).
The HTTP hook will use the global configuration for HTTP clients and will respect the retry, TLS and headers configurations. See the HTTP Clients (`http`) section of the [config reference](./full-configuration.md).
The `post_login_scope` supports the following configuration values:
- `0` means notify both failed and successful logins
- `1` means notify failed logins. Connections closed for authentication timeout are notified as failed logins. You will get an empty username in this case
- `2` means notify successful logins

View file

@ -1,23 +0,0 @@
# Profiling SFTPGo
The built-in profiler lets you collect CPU profiles, traces, allocations and heap profiles that help you identify and correct specific bottlenecks.
You can enable the built-in profiler using the `telemetry` configuration section of the configuration file.
Profiling data are available via HTTP/HTTPS in the format expected by the [pprof](https://github.com/google/pprof/blob/main/doc/README.md) visualization tool. You can find the index page at the URL `/debug/pprof/`.
The following profiles are available; you can obtain them via HTTP GET requests:
- `allocs`, a sampling of all past memory allocations
- `block`, stack traces that led to blocking on synchronization primitives
- `goroutine`, stack traces of all current goroutines
- `heap`, a sampling of memory allocations of live objects. You can specify the `gc` GET parameter to run GC before taking the heap sample
- `mutex`, stack traces of holders of contended mutexes
- `profile`, CPU profile. You can specify the duration in the `seconds` GET parameter. After you get the profile file, use the `go tool pprof` command to investigate the profile
- `threadcreate`, stack traces that led to the creation of new OS threads
- `trace`, a trace of execution of the current program. You can specify the duration in the `seconds` GET parameter. After you get the trace file, use the `go tool trace` command to investigate the trace
For example you can:
- download a 30 seconds CPU profile from the URL `/debug/pprof/profile?seconds=30`
- download a sampling of memory allocations of live objects from the URL `/debug/pprof/heap?gc=1`
- download a sampling of all past memory allocations from the URL `/debug/pprof/allocs`
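Using curl, the first two examples look like this (a minimal sketch, assuming the telemetry listener is bound to `127.0.0.1:8000`; adjust host and port to your configuration):

```shell
# 30 seconds CPU profile, then inspect it with pprof
curl -o cpu.pprof "http://127.0.0.1:8000/debug/pprof/profile?seconds=30"
go tool pprof cpu.pprof

# heap sample of live objects after forcing a GC
curl -o heap.pprof "http://127.0.0.1:8000/debug/pprof/heap?gc=1"
go tool pprof heap.pprof
```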

View file

@ -1,64 +0,0 @@
# Rate limiting
Rate limiting allows you to control the number of requests going to the SFTPGo services.
SFTPGo implements a [token bucket](https://en.wikipedia.org/wiki/Token_bucket) initially full and refilled at the configured rate. The `burst` configuration parameter defines the size of the bucket. The rate is defined by dividing `average` by `period`, so for a rate below 1 req/s, one needs to define a period larger than a second.
Requests that exceed the configured limit will be delayed or denied if they exceed the maximum delay time.
SFTPGo allows you to define per-protocol rate limiters, so you can have different configurations for different protocols.
The supported protocols are:
- `SSH`, includes SFTP and SSH commands
- `FTP`, includes FTP, FTPES, FTPS
- `DAV`, WebDAV
- `HTTP`, REST API and web admin
You can also define two types of rate limiters:
- global, independent of the source host; it defines an aggregate limit for the configured protocol(s)
- per-host, this type of rate limiter can be connected to the built-in [defender](./defender.md) and generate `score_limit_exceeded` events, so hosts that repeatedly exceed the configured limit can be automatically blocked
If you configure a per-host rate limiter, SFTPGo will keep a rate limiter in memory for each host that connects to the service, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
You can exclude a list of IP addresses and IP ranges from rate limiters by adding them to the rate limiters allow list using the WebAdmin UI or the REST API. In multi-node setups, the propagation of list entries between nodes may take a few minutes.
You can define as many rate limiters as you want, but keep in mind that if you define multiple rate limiters, each request will be checked against all the configured limiters and so can potentially be delayed multiple times. Let's clarify with an example; here is a configuration that defines a global rate limiter and a per-host rate limiter for the FTP protocol:
```json
"rate_limiters": [
{
"average": 100,
"period": 1000,
"burst": 1,
"type": 1,
"protocols": [
"SSH",
"FTP",
"DAV",
"HTTP"
],
"generate_defender_events": false,
"entries_soft_limit": 100,
"entries_hard_limit": 150
},
{
"average": 10,
"period": 1000,
"burst": 1,
"type": 2,
"protocols": [
"FTP"
],
"allow_list": [],
"generate_defender_events": true,
"entries_soft_limit": 100,
"entries_hard_limit": 150
}
]
```
Here we have a global rate limiter that limits the aggregate rate for all the services to 100 req/s and an additional rate limiter that limits the `FTP` protocol to 10 req/s per host.
With this configuration, when a client connects via FTP it will be limited first by the global rate limiter and then by the per host rate limiter.
Clients connecting via SFTP/WebDAV will be checked only against the global rate limiter.

View file

@ -1,93 +0,0 @@
# SFTPGo repositories
These repositories are available through Oregon State University's free mirroring service. Special thanks to Lance Albertson, Director of the Oregon State University Open Source Lab, who helped me with the initial setup.
## APT repo
Supported distributions:
- Debian 10 "buster"
- Debian 11 "bullseye"
- Debian 12 "bookworm"
Import the public key used by the package management system:
```shell
curl -sS https://ftp.osuosl.org/pub/sftpgo/apt/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/sftpgo-archive-keyring.gpg
```
If you receive an error indicating that `gnupg` is not installed, you can install it using the following command:
```shell
sudo apt install gnupg
```
Create the SFTPGo source list file:
```shell
CODENAME=`lsb_release -c -s`
echo "deb [signed-by=/usr/share/keyrings/sftpgo-archive-keyring.gpg] https://ftp.osuosl.org/pub/sftpgo/apt ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/sftpgo.list
```
Reload the package database and install SFTPGo:
```shell
sudo apt update
sudo apt install sftpgo
```
## YUM repo
The YUM repository can be used on generic Red Hat-based distributions as well as on SUSE/openSUSE.
### Red Hat based distributions
Create the SFTPGo repository:
```shell
ARCH=`uname -m`
curl -sS https://ftp.osuosl.org/pub/sftpgo/yum/${ARCH}/sftpgo.repo | sudo tee /etc/yum.repos.d/sftpgo.repo
```
Reload the package database and install SFTPGo:
```shell
sudo yum update
sudo yum install sftpgo
```
Start the SFTPGo service and enable it to start at system boot:
```shell
sudo systemctl start sftpgo
sudo systemctl enable sftpgo
```
### SUSE/openSUSE
Import the public key used by the package management system:
```shell
sudo rpm --import https://ftp.osuosl.org/pub/sftpgo/apt/gpg.key
```
Add the SFTPGo repository:
```shell
ARCH=`uname -m`
sudo zypper addrepo -f "https://ftp.osuosl.org/pub/sftpgo/yum/${ARCH}" sftpgo
```
Reload the package database and install SFTPGo:
```shell
sudo zypper refresh
sudo zypper install sftpgo
```
Start the SFTPGo service and enable it to start at system boot:
```shell
sudo systemctl start sftpgo
sudo systemctl enable sftpgo
```

View file

@ -1,105 +0,0 @@
# REST API
SFTPGo provides REST APIs to manage, back up and restore users and folders, handle data retention, and get real-time reports of the active connections, with the ability to forcibly close a connection.
If quota tracking is enabled in the configuration file, then the used size and number of files are updated each time a file is added/removed. If files are added/removed without using SFTP/SCP, or if you change `track_quota` from `2` to `1`, you can rescan the users' home directories and update the used quota using the REST API.
The REST APIs are protected using JSON Web Token (JWT) authentication and can be served over HTTPS. You can also configure client certificate authentication in addition to JWT.
You can get a JWT token using the `/api/v2/token` endpoint: you need to authenticate using HTTP Basic authentication with the credentials of an active administrator. Here is a sample response:
```json
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTA4NzU5NDksImp0aSI6ImMwMjAzbGZjZHJwZDRsMGMxanZnIiwibmJmIjoxNjEwODc1MzE5LCJwZXJtaXNzaW9ucyI6WyIqIl0sInN1YiI6ImlHZ010NlZNU3AzN2tld3hMR3lUV1l2b2p1a2ttSjBodXlJZHBzSWRyOFE9IiwidXNlcm5hbWUiOiJhZG1pbiJ9.dt-UwcWdEMwoGauuiQw8BmgpBAv4YlTaXkyNK-7iRJ4","expires_at":"2021-01-17T09:32:29Z"}
```
Once the access token has expired, you need to get a new one.
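For example, you can obtain a token and use it to call a protected endpoint like this (a minimal sketch; host, port and credentials are placeholders):

```shell
# get a JWT token using the credentials of an active administrator
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)

# use the token to call a protected endpoint
curl -s -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/users"
```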
By default, JWT tokens are not stored and we use a randomly generated secret to sign them, so if you restart SFTPGo all previous tokens will be invalidated and you will get a 401 HTTP response code.
If you define multiple bindings, each binding will sign JWT tokens with a different secret so the token generated for a binding is not valid for the other ones.
If, instead, you want to use a persistent signing key for JWT tokens, you can define a signing passphrase via configuration file or environment variable.
REST API can be disabled within the `httpd` configuration via the `enable_rest_api` key.
You can create other administrators and assign them the following permissions:
- add users
- edit users
- del users
- view users
- view connections
- close connections
- view server status
- view and start quota scans
- view defender
- manage defender
- manage API keys
- manage system
- manage admins
- manage groups
- manage data retention
- view events
- manage event rules
You can also restrict administrator access based on the source IP address. If you are running SFTPGo behind a reverse proxy you need to allow both the proxy IP address and the real client IP.
As an alternative authentication method you can use API keys. API keys are mainly designed for machine-to-machine communication, and a static API key is intrinsically less secure than a short-lived JWT token. Although you can create permanent API keys, it is recommended to set an expiration date. Additionally, a JWT token can be verified without further data provider queries, while an API key requires one or more data provider queries to authenticate each request.
To generate API keys you first need to get a JWT token and then you can use the `/api/v2/apikeys` endpoint to manage your API keys.
The API keys allow the impersonation of users and administrators: using an API key you inherit the permissions of the associated user/admin.
The user/admin association can be:
- static, a user/admin is explicitly associated to the API key
- dynamic, the API key has no user/admin associated, you need to add ".username" at the end of the key to specify the user/admin to impersonate. For example if your API key is `6ajKLwswLccVBGpZGv596G.ySAXc8vtp9hMiwAuaLtzof` and you want to impersonate the admin with username `myadmin` you have to use `6ajKLwswLccVBGpZGv596G.ySAXc8vtp9hMiwAuaLtzof.myadmin` as API key.
The API key scope defines if the API key can impersonate users or admins.
Before you can impersonate a user/admin you have to set `allow_api_key_auth` at user/admin level. Each user/admin can always revoke this permission.
The generated API key is returned in the response body when you create a new API key object. It is not stored as plain text, so you need to save it after the initial creation: there is no way to display it as plain text later.
API keys are not allowed for the following REST APIs:
- manage API keys themselves. You cannot create, update, delete or enumerate API keys if you are logged in with an API key
- change password, public keys or second factor authentication for the associated user
- update the impersonated admin
Please keep in mind that, using an API key not associated with any administrator, it is still possible to create a new administrator with full permissions and then impersonate it: be careful if you share unassociated API keys with third parties. With the `manage admins` permission granted, they basically allow full access; the only restriction is that the impersonated admin cannot be modified.
The data retention APIs allow you to define per-folder retention policies for each user. To clarify this concept let's look at an example: a data retention check accepts a POST body like this one:
```json
[
{
"path": "/folder1",
"retention": 72
},
{
"path": "/folder1/subfolder",
"retention": 0
},
{
"path": "/folder2",
"retention": 24
}
]
```
In the above example we asked SFTPGo:
- to delete all the files with modification time older than 72 hours in `/folder1`
- to exclude `/folder1/subfolder`, no files will be deleted here
- to delete all the files with modification time older than 24 hours in `/folder2`
The check results can be, optionally, notified by e-mail.
You can find an example script that shows how to manage data retention [here](../examples/data-retention). Check the REST API schema for full details.
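As a quick sketch, a retention check could be started with curl like this, using a token obtained as shown earlier (host, credentials and username are placeholders; please verify the exact endpoint path against the OpenAPI schema):

```shell
# start a retention check for user "user1" with the policy above
curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '[{"path":"/folder1","retention":72},{"path":"/folder1/subfolder","retention":0},{"path":"/folder2","retention":24}]' \
  "http://127.0.0.1:8080/api/v2/retention/users/user1/check"
```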
:warning: Deleting files is an irreversible action, so please make sure you fully understand what you are doing before using this feature. You may have users with overlapping home directories or virtual folders shared between multiple users; it is relatively easy to inadvertently delete files you need.
The OpenAPI 3 schema for the supported APIs can be found inside the source tree: [openapi.yaml](../openapi/openapi.yaml "OpenAPI 3 specs"). You can render the schema and try the API using the `/openapi` endpoint. SFTPGo uses by default [Swagger UI](https://github.com/swagger-api/swagger-ui), you can use another renderer just by copying it to the defined OpenAPI path.
You can also explore the schema on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
You can generate your own REST API client in your preferred programming language, or even bash scripts, using an OpenAPI generator such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) or [OpenAPI Generator](https://openapi-generator.tech/).

View file

@ -1,13 +0,0 @@
# Roles
Roles can be assigned to users and administrators. Admins with a role are limited administrators: they can only view and manage users with their own role, and they cannot have the following permissions:
- manage_admins
- manage_system
- manage_event_rules
- manage_roles
- view_events
Users created by role administrators automatically inherit their role.
Admins without a role are global administrators and can manage all users (with and without a role) and assign a specific role to users.

View file

@ -1,39 +0,0 @@
# S3 Compatible Object Storage backends
To connect SFTPGo to AWS, you need to specify credentials, a `bucket` and a `region`. Here is the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example, if your bucket is in `Frankfurt`, you have to set the region to `eu-central-1`. You can specify an AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) too. Leave it blank to use the default AWS storage class. An endpoint is required if you are connecting to an S3-compatible storage such as [MinIO](https://min.io/).
The AWS SDK has different options for credentials. We support:
1. Providing [Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).
2. Using IAM roles for Amazon EC2.
3. Using IAM roles for tasks if your application uses an ECS task definition.
4. Using [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (IRSA) if you run SFTPGo on AWS EKS.
5. Assuming a specific IAM role by setting its ARN.
So, you need to provide access keys to activate option 1, or leave them blank to use the other ways to specify credentials.
You can also use a temporary session token or assume a role by setting its ARN.
Specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and S3 then the client should wait for the last parts to be uploaded to S3 after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured bucket must exist.
Some SFTP commands don't work over S3:
- `chown` and `chmod` will fail. If you want to silently ignore these methods set `setstat_mode` to `1` or `2` in your configuration file
- `truncate`, `symlink`, `readlink` are not supported
- opening a file for both reading and writing at the same time is not supported
- resuming uploads is tricky and disabled by default
Other notes:
- `rename` is a two-step operation: server-side copy and then deletion. So it is not atomic, unlike on a local filesystem.
- We don't support renaming non-empty directories, since we would have to rename all the contents too and this could take a long time: for a directory with thousands of files, each file would require an AWS API call.
- For server side encryption, you have to configure the mapped bucket to automatically encrypt objects.
- A local home directory is still required to store temporary files.
- Clients that require advanced filesystem-like features such as `sshfs` are not supported.
- `chtime` is not supported.

View file

@ -1,143 +0,0 @@
# Running SFTPGo as a service
Download a binary SFTPGo [release](https://github.com/drakkan/sftpgo/releases) or a build artifact for the [latest commit](https://github.com/drakkan/sftpgo/actions) or build SFTPGo yourself.
Run the following instructions from the directory that contains the sftpgo binary and the accompanying files.
## Linux
The easiest way to run SFTPGo as a service is to use the [deb/rpm repos](./repo.md) or download and install a pre-compiled deb/rpm package or use one of the Arch Linux PKGBUILDs we maintain.
This section describes the manual procedure.
A `systemd` sample [service](../init/sftpgo.service "systemd service") can be found inside the source tree.
Here are some basic instructions to run SFTPGo as a service using a dedicated `sftpgo` system account.
Please run the following commands from the directory where you downloaded/compiled SFTPGo:
```bash
# create the sftpgo user and group
sudo groupadd --system sftpgo
sudo useradd --system \
--gid sftpgo \
--no-create-home \
--home-dir /var/lib/sftpgo \
--shell /usr/sbin/nologin \
--comment "SFTPGo user" \
sftpgo
# create the required directories
sudo mkdir -p /etc/sftpgo \
/var/lib/sftpgo \
/usr/share/sftpgo
# install the sftpgo executable
sudo install -Dm755 sftpgo /usr/bin/sftpgo
# install the default configuration file, edit it if required
sudo install -Dm644 sftpgo.json /etc/sftpgo/
# override some configuration keys using environment variables
sudo sh -c 'echo "SFTPGO_HTTPD__BACKUPS_PATH=/var/lib/sftpgo/backups" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__CREDENTIALS_PATH=/var/lib/sftpgo/credentials" >> /etc/sftpgo/sftpgo.env'
# if you use a file based data provider such as sqlite or bolt consider to set the database path too, for example:
#sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db" >> /etc/sftpgo/sftpgo.env'
# also set the provider's PATH as env var to get initprovider to work with SQLite provider:
#export SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db
# install static files and templates for the web UI
sudo cp -r static templates openapi /usr/share/sftpgo/
# set files and directory permissions
sudo chown -R sftpgo:sftpgo /etc/sftpgo /var/lib/sftpgo
sudo chmod 750 /etc/sftpgo /var/lib/sftpgo
sudo chmod 640 /etc/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.env
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo -E su - sftpgo -m -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
# install the systemd service
sudo install -Dm644 init/sftpgo.service /etc/systemd/system
# start the service
sudo systemctl start sftpgo
# verify that the service is started
sudo systemctl status sftpgo
# automatically start sftpgo on boot
sudo systemctl enable sftpgo
# optional, create shell completion script, for example for bash
sudo sh -c '/usr/bin/sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo'
# optional, create man pages
sudo /usr/bin/sftpgo gen man -d /usr/share/man/man1
```
## macOS
The easiest way to run SFTPGo as a service is to install it from the Homebrew [Formula](https://formulae.brew.sh/formula/sftpgo).
This section describes the procedure to use if you prefer to build SFTPGo yourself or if you want to download and configure a pre-built release as tar.
For macOS, a `launchd` sample [service](../init/com.github.drakkan.sftpgo.plist "launchd plist") can be found inside the source tree. The `launchd` plist assumes that SFTPGo has `/usr/local/opt/sftpgo` as base directory.
Here are some basic instructions to run SFTPGo as a service; please run the following commands from the directory where you downloaded SFTPGo:
```bash
# create the required directories
sudo mkdir -p /usr/local/opt/sftpgo/init \
/usr/local/opt/sftpgo/var/lib \
/usr/local/opt/sftpgo/usr/share \
/usr/local/opt/sftpgo/var/log \
/usr/local/opt/sftpgo/etc \
/usr/local/opt/sftpgo/bin
# install sftpgo executable
sudo cp sftpgo /usr/local/opt/sftpgo/bin/
# install the launchd service
sudo cp init/com.github.drakkan.sftpgo.plist /usr/local/opt/sftpgo/init/
sudo chown root:wheel /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist
# install the default configuration file, edit it if required
sudo cp sftpgo.json /usr/local/opt/sftpgo/etc/
# install static files and templates for the web UI
sudo cp -r static templates openapi /usr/local/opt/sftpgo/usr/share/
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo /usr/local/opt/sftpgo/bin/sftpgo initprovider -c /usr/local/opt/sftpgo/etc/
# add sftpgo to the launch daemons
sudo ln -s /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# start the service and enable it to start on boot
sudo launchctl load -w /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# verify that the service is started
sudo launchctl list com.github.drakkan.sftpgo
```
## Windows
On Windows, you can register SFTPGo as a Windows Service. Take a look at the CLI usage to learn how to do this:
```powershell
PS> sftpgo.exe service --help
Manage SFTPGo Windows Service
Usage:
sftpgo service [command]
Available Commands:
install Install SFTPGo as Windows Service
reload Reload the SFTPGo Windows Service sending a "paramchange" request
rotatelogs Signal to the running service to rotate the logs
start Start SFTPGo Windows Service
status Retrieve the status for the SFTPGo Windows Service
stop Stop SFTPGo Windows Service
uninstall Uninstall SFTPGo Windows Service
Flags:
-h, --help help for service
Use "sftpgo service [command] --help" for more information about a command.
```
The `install` subcommand accepts the same flags that are valid for `serve`.
After installing as a Windows Service, please remember to allow network access to the SFTPGo executable using something like this:
```powershell
PS> netsh advfirewall firewall add rule name="SFTPGo Service" dir=in action=allow program="C:\Program Files\SFTPGo\sftpgo.exe"
```
Or through the Windows Firewall GUI.
The Windows installer will register the service and allow network access for it automatically.

View file

@ -1,72 +0,0 @@
# SFTP subsystem mode
In this mode SFTPGo speaks the server side of the SFTP protocol to stdout and expects client requests from stdin.
You can use SFTPGo as subsystem via the `startsubsys` command.
This mode is not intended to be called directly, but from sshd using the `Subsystem` option.
For example, add a line like this to `/etc/ssh/sshd_config`:
```shell
Subsystem sftp sftpgo startsubsys
```
Command-line flags should be specified in the Subsystem declaration.
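For example (a minimal sketch; the flag values are illustrative):

```shell
# preserve existing home directories and use /srv/sftpgo as the base for new users
Subsystem sftp sftpgo startsubsys --preserve-home --base-home-dir /srv/sftpgo
```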
```shell
Usage:
sftpgo startsubsys [flags]
Flags:
-d, --base-home-dir string If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:
[base-home-dir]/[username]
base-home-dir must be an absolute path.
-c, --config-dir string Location for the config dir. This directory
is used as the base for files with a relative
path, eg. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
properties config files. The default config
file name is "sftpgo" and therefore
"sftpgo.json", "sftpgo.yaml" and so on are
searched.
This flag can be set using SFTPGO_CONFIG_DIR
env var too. (default ".")
--config-file string Path to SFTPGo configuration file.
This flag explicitly defines the path, name
and extension of the config file. If must be
an absolute path or a path relative to the
configuration directory. The specified file
name must have a supported extension (JSON,
YAML, TOML, HCL or Java properties).
This flag can be set using SFTPGO_CONFIG_FILE
env var too.
-h, --help help for startsubsys
-j, --log-to-journald Send logs to journald. Only available on Linux.
Use:
$ journalctl -o verbose -f
To see full logs.
If not set, the logs will be sent to the standard
error
--log-utc-time Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
(default true)
-v, --log-verbose Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
(default true)
-p, --preserve-home If the user already exists, the existing home
directory will not be changed
```
In this mode the `bolt` and `sqlite` providers are not usable, as the same database file cannot be shared among multiple processes; if one of these providers is configured it will be automatically changed to the `memory` provider.
The username and home directory for the logged in user are determined using [user.Current()](https://golang.org/pkg/os/user/#Current).
If the user who is logging in is not found within the SFTPGo data provider, it is added automatically.
You can pre-configure the users inside the SFTPGo data provider; this way you can use a different home directory, restrict permissions and so on.

View file

@ -1,35 +0,0 @@
# SFTP as storage backend
An SFTP account on another server can be used as storage for an SFTPGo account, so the remote SFTP server can be accessed in a similar way to the local file system.
Here are the supported configuration parameters:
- `Endpoint`, ssh endpoint as `host:port`
- `Username`
- `Password`
- `PrivateKey`
- `Fingerprints`
- `Prefix`
- `BufferSize`
The mandatory parameters are the endpoint, the username and a password or a private key. If you define both a password and a private key the key is tried first. The provided private key should be PEM encoded, something like this:
```shell
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACA8LWc4SahqKkAr4L3rS19w1Vt8/IAf4th2FZmf+PJ/vwAAAJBvnZIJb52S
CQAAAAtzc2gtZWQyNTUxOQAAACA8LWc4SahqKkAr4L3rS19w1Vt8/IAf4th2FZmf+PJ/vw
AAAEBE6F5Az4wzNfNYLRdG8blDwvPBYFXE8BYDi4gzIhnd9zwtZzhJqGoqQCvgvetLX3DV
W3z8gB/i2HYVmZ/48n+/AAAACW5pY29sYUBwMQECAwQ=
-----END OPENSSH PRIVATE KEY-----
```
The password and the private key are stored as ciphertext according to your [KMS configuration](./kms.md).
SHA256 fingerprints for remote server host keys are optional but highly recommended: if you provide one or more fingerprints the server host key will be verified against them and the connection will be denied if none of the fingerprints provided match that for the server host key.
Specifying a prefix you can restrict all operations to a given path within the remote SFTP server. If you set a prefix, make sure it is not inside a symlinked directory and is not a symlink itself.
Buffering can be enabled by setting a buffer size (in MB) greater than 0. By enabling buffering, the reads and writes, from/to the remote SFTP server, are split in multiple concurrent requests and this allows data to be transferred at a faster rate, over high latency networks, by overlapping round-trip times. With buffering enabled, resuming uploads and truncate are not supported and a file cannot be opened for both reading and writing at the same time. 0 means disabled.
Some SFTP servers (e.g. AWS Transfer) do not support opening files for reading and writing at the same time; you can enable buffering to work with them.

View file

@ -1,50 +0,0 @@
# SSH commands
Some SSH commands are implemented directly inside SFTPGo, while for others we use system commands that need to be installed and in your system's `PATH`.
For system commands we have no direct control over file creation/deletion, so there are some limitations:
- we cannot allow them if the target directory contains virtual folders or file extension filters
- system commands work only on the local filesystem
- we cannot avoid leaking real filesystem paths
- quota check is suboptimal
- maximum size restriction on single file is not respected
- data at-rest encryption is not supported
If quota is enabled and SFTPGo receives a system command, the used size and number of files are checked at the command start and not while new files are created/deleted. While the command is running the number of files is not checked, the remaining size is calculated as the difference between the max allowed quota and the used one, and it is checked against the bytes transferred via SSH. The command is aborted if it uploads more bytes than the remaining allowed size calculated at the command start. Anyway, we only see the bytes that the remote command sends to the local one via SSH. These bytes contain both protocol commands and files, and so the size of the files is different from the size transferred via SSH: for example, a command can send compressed files, or a protocol command (few bytes) could delete a big file. To mitigate these issues, quotas are recalculated at the command end with a full scan of the directory specified for the system command. This could be heavy for big directories. If you need system commands and quotas you could consider disabling quota restrictions and periodically update quota usage yourself using the REST API.
For these reasons system command usage should be limited as much as possible; we currently support the following system commands:
- `git-receive-pack`, `git-upload-pack`, `git-upload-archive`. These commands enable support for Git repositories over SSH. They need to be installed and in your system's `PATH`.
- `rsync`. The `rsync` command needs to be installed and in your system's `PATH`.
At least the following permissions are required to be able to run system commands:
- `list`
- `download`
- `upload`
- `create_dirs`
- `overwrite`
- `delete`
For `rsync` we cannot prevent it from creating symlinks, so if the `create_symlinks` permission is granted we add the `--safe-links` option, if it is not already set, to the received `rsync` command. This should prevent the creation of symlinks that point outside the home directory.
If the user cannot create symlinks we add the `--munge-links` option, if it is not already set, to the received `rsync` command. This should make symlinks unusable (but manually recoverable).
**Note:** you might consider using SFTPGo as the SFTP backend for [rclone](https://rclone.org/sftp/) instead of `rsync`; this way there are no limitations and `rclone` does not need to be installed on the server side since it uses the SFTP protocol.
SFTPGo supports the following built-in SSH commands:
- `scp`, SFTPGo implements the SCP protocol so we can support it for cloud filesystems too and we can avoid the other system commands limitations. SCP between two remote hosts is supported using the `-3` scp option. Wildcard expansion is not supported.
- `md5sum`, `sha1sum`, `sha256sum`, `sha384sum`, `sha512sum`. Useful to check message digests for uploaded files. These commands work with any storage backend, but keep in mind that to calculate the hash we need to read the whole file: for remote backends this means downloading the file, for the encrypted backend this means decrypting it.
- `cd`, `pwd`. Some SFTP clients do not support the SFTP SSH_FXP_REALPATH packet type, so they use the `cd` and `pwd` SSH commands to get the initial directory. Currently `cd` does nothing and `pwd` always returns the `/` path.
- `sftpgo-copy`. This is a built-in copy implementation. It allows server side copy for files and directories. The first argument is the source file/directory and the second one is the destination file/directory, for example `sftpgo-copy <src> <dst>`.
- `sftpgo-remove`. This is a built-in remove implementation. It allows to remove single files and to recursively remove directories. The first argument is the file/directory to remove, for example `sftpgo-remove <dst>`. Removing directories spanning virtual folders is not supported.
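For example, assuming an SFTPGo user `demo` on host `sftpgo.example.com` (both placeholders), the built-in commands can be invoked like any other SSH command:

```shell
# server-side checksum of an uploaded file
ssh demo@sftpgo.example.com sha256sum /path/to/file.bin

# server-side copy of a directory
ssh demo@sftpgo.example.com sftpgo-copy /srcdir /dstdir
```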
The following SSH commands are enabled by default:
- `md5sum`
- `sha1sum`
- `sha256sum`
- `cd`
- `pwd`
- `scp`

View file

@ -1,41 +0,0 @@
# Virtual Folders
A virtual folder is a mapping between an SFTPGo virtual path and a filesystem path outside the user's home directory or on a different storage provider.
For example, you can have a local user with an S3-based virtual folder or vice versa.
SFTPGo will try to automatically create any missing parent directory for the configured virtual folders at user login.
For each virtual folder, the following properties can be configured:
- `folder_name`, is the ID for an existing folder. The folder structure contains the absolute filesystem path to map as virtual folder
- `filesystem`, the filesystem configuration for the folder: this way you can map a local path or a Cloud backend to mount as a virtual folder
- `virtual_path`, absolute path seen by SFTPGo users where the mapped path is accessible
- `quota_size`, maximum size allowed as bytes. 0 means unlimited, -1 included in user quota
- `quota_files`, maximum number of files allowed. 0 means unlimited, -1 included in user quota
For example if a folder is configured to use `/tmp/mapped` or `C:\mapped` as filesystem path and `/vfolder` as virtual path then SFTPGo users can access `/tmp/mapped` or `C:\mapped` via the `/vfolder` virtual path.
Nested SFTP folders using the same SFTPGo instance (identified using the host keys) are not allowed as they could cause infinite SFTP loops.
The same virtual folder can be shared among users, different folder quota limits for each user are supported.
Folder quota limits can also be included inside the user quota but in this case the folder is considered "private" and sharing it with other users will break user quota calculation.
The quota for a given user is calculated as the sum of the files contained in their home directory and those within each defined virtual folder included in the user quota.
If you define folders that point to nested paths or to the same path, the quota calculation will be incorrect. Example:
- `folder1` uses `/srv/data/mapped` or `C:\mapped` as mapped path
- `folder2` uses `/srv/data/mapped/subdir` or `C:\mapped\subdir` as mapped path
If you upload a file to `folder2` its quota will be updated but the quota of `folder1` will not. We allow this for more flexibility, but if you want to enforce disk quotas using SFTPGo, avoid folders with nested paths.
It is allowed to mount a virtual folder in the user's root path (`/`). This might be useful if you want to share the same virtual folder between different users. In this case the user's root filesystem is hidden from the virtual folder.
Using the REST API you can:
- monitor folders quota usage
- scan quota for folders
- inspect the relationships among users and folders
- delete a virtual folder. SFTPGo removes folders from the data provider, no files deletion will occur
If you remove a folder from the data provider, any user relationships will be cleared up. If the deleted folder is mounted on the user's root (`/`) path, the user is still valid and its root filesystem will no longer be hidden. If the deleted folder is included inside the user quota, you need to run a user quota scan to update the quota. An orphan virtual folder is not automatically deleted: if you add it again later a quota scan is needed, and it could be quite expensive. Anyway, you can easily list the orphan folders using the REST API and delete them if they are not needed anymore.

View file

@ -1,10 +0,0 @@
# Web Admin
You can easily build your own interface using the SFTPGo [REST API](./rest-api.md). However, SFTPGo also provides a basic built-in web interface that allows you to manage users, virtual folders, admins and connections.
With the default `httpd` configuration, the web admin is available at the following URL:
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
If no admin user is found within the data provider, typically after the initial installation, SFTPGo will ask you to create the first admin. You can also pre-create an admin user by loading initial data or by enabling the `create_default_admin` configuration key. Please take a look [here](./full-configuration.md) for more details.
The web interface can be configured over HTTPS and to require mutual TLS authentication in addition to administrator credentials.

View file

@ -1,15 +0,0 @@
# Web Client
SFTPGo provides a basic front-end web interface for your users. It allows end-users to browse and manage their files and change their credentials.
Each authorized user can create HTTP/S links to externally share files and folders securely, by setting limits on the number of downloads/uploads, protecting the share with a password, limiting access by source IP address, and setting an automatic expiration date.
The web client user interface also allows you to edit plain text files up to 512KB in size.
The web interface can be globally disabled within the `httpd` configuration via the `enable_web_client` key or on a per-user basis by adding `HTTP` to the denied protocols.
Public keys management can be disabled, per-user, using a specific permission.
The web client allows you to download multiple files or folders as a single zip file; any non-regular files (for example symlinks) will be silently ignored.
With the default `httpd` configuration, the web client is available at the following URL:
[http://127.0.0.1:8080/web/client](http://127.0.0.1:8080/web/client)

View file

@ -1,37 +0,0 @@
# WebDAV
The `WebDAV` support can be enabled by configuring one or more `bindings` inside the `webdavd` configuration section.
Each user can access their home directory using the path `http/s://<SFTPGo ip>:<WebDAV port>/<prefix>`. By default `prefix` is empty. If you define a prefix it must be an absolute URI, for example `/dav`.
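For example, you can list a user's root directory with a plain WebDAV `PROPFIND` request (a minimal sketch; host, port and credentials are placeholders, adjust them to your `webdavd` binding):

```shell
# list the root of the user's home directory, one level deep
curl -u demo:secret -X PROPFIND -H "Depth: 1" "http://127.0.0.1:8090/"
```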
WebDAV is quite a different protocol from SFTP/FTP: there is no session concept, and each command is a separate HTTP request that must be authenticated. To improve performance, SFTPGo caches authenticated users, so it doesn't need to do a data provider query and a password check for each request.
The user caching configuration allows you to set:
- `expiration_time` in minutes. If a user is cached for more than the specified minutes it will be removed from the cache and a new dataprovider query will be performed. Please note that the `last_login` field will not be updated and `external_auth_hook`, `pre_login_hook` and `check_password_hook` will not be executed if the user is obtained from the cache.
- `max_size`. Maximum number of users to cache. When this limit is reached the user with the oldest expiration date will be removed from the cache. 0 means no limit however the cache size cannot exceed the number of users so if you have a small number of users you can set this value to 0.
Users are automatically removed from the cache after an update/delete.
WebDAV protocol requires the MIME type for each file. SFTPGo will first try to guess the MIME type by extension. If this fails it will send a `HEAD` request for Cloud backends and, as last resort, it will try to guess the MIME type reading the first 512 bytes of the file. This may slow down the directory listing, especially for Cloud based backends, if you have directories containing many files with unregistered extensions. To mitigate this problem, you can enable caching of MIME types so that the MIME type detection is done only once.
The MIME types caching configuration allows you to set the maximum number of MIME types to cache. Once the cache reaches the configured maximum size, no new MIME types will be added. The MIME types cache is a non-persistent in-memory cache. If you need a persistent cache, add your MIME types to `/etc/mime.types` on Linux or inside the registry on Windows.
WebDAV should work as expected for most use cases but there are some minor issues and some missing features.
If you use WebDAV behind a reverse proxy, ensure the `Host` header is preserved or `COPY`/`MOVE` operations will fail. For example, for Apache you have to set `ProxyPreserveHost On`.
Known issues:
- removing a directory tree on Cloud Storage backends could generate a `not found` error when removing the last (virtual) directory. This happens if the client cycles the directories tree itself and removes files and directories one by one instead of issuing a single remove command
- to be able to properly list a directory you need to grant both `list` and `download` permissions, and to be able to upload files you need to grant both `list` and `upload` permissions
- if a file or a directory cannot be accessed, for example due to OS permission issues or because a mapped path for a virtual folder is missing, it will be omitted from the directory listing. If there is a different error then the whole directory listing will fail. This behavior is different from SFTP/FTP where you will be able to see the problematic file/directory in the directory listing; you will only get an error if you try to access it
- if you use the native Windows client please check its usage and pay particular attention to the [registry settings](https://docs.microsoft.com/en-us/iis/publish/using-webdav/using-the-webdav-redirector#webdav-redirector-registry-settings). The default file size limit is 50MB and if you don't configure SFTPGo to use HTTPS you have to set `BasicAuthLevel` to `2`
SFTPGo has a minimal implementation for [Dead Properties](https://tools.ietf.org/html/rfc4918#section-3). We support setting the last modification time and we return the value in the "live" properties, so basically we don't store anything.
To properly support dead properties we need a design decision, probably the best solution is to write a plugin and store them inside a supported data provider.
SFTPGo also supports setting the modification time using the `X-OC-Mtime` header. Nextcloud compatible clients set this header.
If you find any other quirks or problems please let us know opening a GitHub issue, thank you!

View file

@ -56,3 +56,5 @@ We provide the following examples:
- [Check password hook](./checkpwd/README.md) for 2FA using a password consisting of a fixed string and a One Time Token.
Please note that these are sample programs not intended for production use; you should write your own hook based on them, and you should prefer HTTP-based hooks if performance is a concern.
:warning: SFTPGo has also built-in 2FA support.

View file

@ -1,6 +1,6 @@
# Data Backup
:warning: Since v2.4.0 you can use the [EventManager](../../docs/eventmanager.md) to schedule backups.
:warning: Since v2.4.0 you can use the [EventManager](https://sftpgo.github.io/latest/eventmanager/) to schedule backups.
The `backup` example script shows how to use the SFTPGo REST API to backup your data.

View file

@ -1,6 +1,6 @@
# File retention policies
:warning: Since v2.4.0 you can use the [EventManager](../../docs/eventmanager.md) to schedule data retention checks.
:warning: Since v2.4.0 you can use the [EventManager](https://sftpgo.github.io/latest/eventmanager/) to schedule data retention checks.
The `checkretention` example script shows how to use the SFTPGo REST API to manage data retention.

Some files were not shown because too many files have changed in this diff