* Correct Docker documentation regarding the distribution versions used for the Docker images; this should have been updated with the December 2025 update
* Fix Docker entrypoint handling for non-root --user execution
This change updates the Docker entrypoint.sh to correctly support containers started with a numeric UID/GID via --user or user: (Docker Compose).
Previously, the entrypoint unconditionally attempted user and group management (useradd, groupadd, usermod) before checking whether the container was running as root. When the container was started as a non-root user, this resulted in immediate startup failures due to insufficient privileges.
The updated logic now:
* Detects whether the container is started as root or non-root
* Skips all user/group creation and ownership changes when running as non-root
* Treats --user / user: as authoritative when provided
* Preserves existing behaviour when the container is started as root (including optional privilege drop via gosu)
* Ensures ONEDRIVE_RUNAS_ROOT is only honoured when the container is actually running as root
This makes the container compatible with:
* Numeric UID/GID execution
* NFS-backed volumes where the user does not exist on the host
* Read-only bind mounts for upload-only scenarios
No changes are made to the OneDrive client itself; this update strictly improves container startup behaviour and correctness.
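The root/non-root decision tree described above lives in entrypoint.sh; as a minimal Python sketch of that logic (the function name and return values here are hypothetical, not part of the actual entrypoint):

```python
import os

def choose_startup_mode(euid: int, runas_root_env: str = "") -> str:
    """Sketch of the entrypoint decision tree.

    Returns one of:
      "non-root"  - started via --user / user:, skip all user/group management
      "root"      - started as root and ONEDRIVE_RUNAS_ROOT requests staying root
      "drop-priv" - started as root; manage user/group, then drop via gosu
    """
    if euid != 0:
        # --user / user: is authoritative: no useradd/groupadd/chown attempts
        return "non-root"
    # ONEDRIVE_RUNAS_ROOT is only honoured when actually running as root
    if runas_root_env.lower() in ("true", "1", "yes"):
        return "root"
    return "drop-priv"
```

In the container the effective UID would come from `os.geteuid()` (or `id -u` in shell); it is passed as a parameter here so the decision logic can be exercised directly.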
* Update docker documentation to clarify docker mounts as read-write and how to make them read-only for certain scenarios
* Update Ubuntu install guide to note that ARMHF packages are available for those platforms on the OpenSuSE Build Service
* --force and --resync cannot be used together: --resync deletes the local state database, so there is no way to calculate large local deletes
* Update --force-sync documentation
This change corrects how rename failures are handled when applying local path changes.
**What changed**
- Call sites now check the boolean return value from safeRename() instead of assuming the rename succeeded.
- Items are only marked as moved (itemWasMoved = true) when the rename operation actually completes successfully.
- Local timestamp updates (setLocalPathTimestamp()) are only applied after a confirmed rename.
**Why this is needed**
Previously, a failed rename (for example due to EBUSY or interrupted system calls) could still be treated as successful, causing the application to:
- Update internal state as if the item had moved
- Apply timestamps to paths that were never created
- Produce misleading or cascading errors during subsequent sync logic
This issue is reproducible in scenarios where a directory is busy (e.g. held as a working directory by another process), where retries cannot succeed.
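The call-site pattern described above can be sketched in Python as follows (the real implementation is in D; `apply_local_move` and the `item` dictionary are hypothetical stand-ins for the client's internal state):

```python
import os

def safe_rename(old: str, new: str) -> bool:
    """Minimal stand-in for the client's safeRename(): returns success."""
    try:
        os.rename(old, new)
        return True
    except OSError:
        return False

def apply_local_move(old: str, new: str, item: dict) -> None:
    # Only mark the item as moved when the rename actually succeeded;
    # timestamp updates would likewise only run on the confirmed new path
    if safe_rename(old, new):
        item["itemWasMoved"] = True
    else:
        item["itemWasMoved"] = False
```

The key point is that `itemWasMoved` is derived from the rename result rather than assumed, so a failed rename can no longer leave internal state claiming a move that never happened on disk.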
This change hardens setLocalPathTimestamp() by adding bounded retry handling for transient filesystem errors while preserving the existing timestamp-comparison logic used to align local files with OneDrive metadata.
**What changed**
- setLocalPathTimestamp() now retries underlying filesystem operations when they fail with transient errors such as:
- EINTR — interrupted system call
- EBUSY / EAGAIN — temporary filesystem conditions
- Timestamp reads (getTimes()) and writes (setTimes()) are now performed via protected helper logic with capped retries and small backoff.
- Existing behaviour is preserved:
- Fractional seconds are intentionally ignored when comparing timestamps, matching OneDrive’s second-level precision.
- Access time is preserved when updating modification time.
- Timestamps are only updated when a whole-second difference is detected.
- Dry-run behaviour is unchanged.
**Why this is needed**
On POSIX systems (Linux and FreeBSD), timestamp-related syscalls can legitimately fail with EINTR when interrupted by signals, or with other transient errors under temporary filesystem load. Previously, these conditions were treated as hard failures, leading to noisy logs and occasional missed timestamp updates.
This change aligns timestamp handling with the same resilience and retry semantics already applied to other filesystem operations (safeRemove(), safeRename(), safeBackup()), improving reliability without altering logical behaviour.
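A minimal Python rendering of the retry-and-compare behaviour described above (the client itself is written in D; the error set, retry cap, and backoff value here are illustrative):

```python
import errno
import os
import time

TRANSIENT = {errno.EINTR, errno.EBUSY, errno.EAGAIN}

def set_local_path_timestamp(path: str, new_mtime: float,
                             max_attempts: int = 3,
                             backoff: float = 0.05) -> bool:
    """Apply new_mtime only on a whole-second difference, retrying
    transient filesystem errors with a capped, backed-off loop."""
    for attempt in range(1, max_attempts + 1):
        try:
            st = os.stat(path)
            # OneDrive metadata has second-level precision: ignore fractions
            if int(st.st_mtime) == int(new_mtime):
                return True  # already aligned, nothing to do
            # Preserve access time, update only the modification time
            os.utime(path, (st.st_atime, new_mtime))
            return True
        except OSError as e:
            if e.errno in TRANSIENT and attempt < max_attempts:
                time.sleep(backoff)
                continue
            return False  # genuine failure, surfaced to the caller
    return False
```

Non-transient errors (for example a missing path) fail immediately rather than burning retries, matching the intent that only interruptible or temporarily-busy conditions are retried.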
This change fixes an intermittent crash observed during long-running --monitor sessions when WebSocket reconnections occurred.
When Microsoft Graph rotated the notificationUrl, the existing WebSocket instance was not being deterministically cleaned up before creating a new connection. Over time, this resulted in multiple inactive WebSocket instances retaining libcurl handles. Under memory pressure or explicit garbage collection, these orphaned instances were finalised, triggering unsafe cleanup paths and causing the application to crash.
**Key changes**
* Ensure WebSocket instances are explicitly cleaned up when a new notificationUrl is detected
* Introduce and consistently use cleanupCurlHandle() to deterministically release libcurl resources
* Ensure cleanupCurlHandle():
* Sets websocketConnected = false
* Performs no logging or memory allocation
* Ensure OneDriveSocketIo destructor performs explicit WebSocket cleanup before destruction
* Remove unsafe logging/allocation from WebSocket destructors
* Improve robustness of testInternetReachability() without altering runtime behaviour
* Add defensive exception handling in the WebSocket run loop to prevent unexpected thread termination
**Result**
* Prevents accumulation of inactive WebSocket instances during reconnects
* Eliminates GC-finalisation-time crashes caused by unsafe destructor behaviour
* Improves long-term stability during intermittent connectivity or Graph endpoint rotation
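The deterministic-cleanup pattern above can be sketched as follows (Python, with hypothetical names; the real code manages libcurl easy handles in D, and `curl_easy_cleanup()` is the call the placeholder stands in for):

```python
class WebSocketConnection:
    """Sketch of deterministic WebSocket cleanup.

    cleanup_curl_handle() is idempotent, performs no logging or
    allocation, and is invoked both before creating a replacement
    connection and from the destructor.
    """
    def __init__(self, url: str):
        self.url = url
        self.curl_handle = object()   # placeholder for a libcurl easy handle
        self.websocket_connected = True

    def cleanup_curl_handle(self) -> None:
        if self.curl_handle is not None:
            self.curl_handle = None   # curl_easy_cleanup() in the real code
        self.websocket_connected = False

    def __del__(self):
        # Explicit cleanup before destruction; never log or allocate here
        self.cleanup_curl_handle()

def reconnect(current, new_url: str):
    # Deterministically release the old handle before creating a new one,
    # so a rotated notificationUrl cannot leave orphaned instances behind
    if current is not None:
        current.cleanup_curl_handle()
    return WebSocketConnection(new_url)
```

Because cleanup is explicit and idempotent, later finalisation of an old instance is a no-op rather than an unsafe second teardown.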
This change updates safeBackup() to use the hardened safeRename() implementation when renaming local files to preserve existing data.
**What changed**
- safeBackup() now delegates file renaming to safeRename() instead of calling std.file.rename() directly.
- The result of the rename operation is checked via the boolean return value from safeRename() before updating renamedPath.
- Transient filesystem errors (EINTR, EBUSY) are now handled consistently during backup renames.
- Cross-filesystem rename failures (EXDEV) and other non-retryable errors are logged and handled safely.
**Why this is needed**
The previous implementation performed a direct rename() inside a local try/catch, which:
- Did not retry on interruptible system calls (EINTR)
- Could incorrectly treat transient filesystem conditions as hard failures
- Did not share the same resilience and logging behaviour as other filesystem operations
This change ensures that backup renames benefit from the same bounded retry logic and error handling already applied to safeRemove() and safeRename().
**Scope and behaviour**
- No change to functional behaviour for successful backups.
- Dry-run behaviour is unchanged.
- Genuine filesystem errors are still logged and surfaced.
- Reduces noisy or misleading error logs caused by transient rename failures, particularly during shutdown or signal handling.
This brings safeBackup() into alignment with the updated filesystem-safety model used elsewhere in the codebase and improves reliability when preserving local data.
This change improves the robustness of safeRename() by adding bounded retry handling for transient filesystem errors and ensuring rename failures are handled safely and consistently.
**What changed**
* safeRename() now catches and handles std.file.rename() failures instead of allowing exceptions to propagate unexpectedly.
* The function retries rename operations when the underlying syscall returns:
* EINTR — interrupted system call (signal delivery before completion)
* EBUSY — temporary “resource busy” conditions
* Retries are capped and include a small backoff to avoid tight retry loops.
* EXDEV (cross-filesystem rename) is explicitly detected, logged, and not retried.
* Existing --dry-run behaviour is preserved.
**Why this is needed**
On POSIX systems (Linux and FreeBSD), rename() can legitimately fail with EINTR when a signal interrupts the syscall. Treating this as a hard failure leads to noisy logs and unnecessary operation aborts, particularly during shutdown, signal handling, or transient connectivity events.
In rarer cases, EBUSY may be returned due to temporary filesystem conditions. A limited retry avoids false failure reporting while still surfacing persistent or logical errors.
**Scope and behaviour**
* Applies to Linux and FreeBSD.
* No change to functional semantics for successful renames.
* Genuine error conditions (permissions, missing paths, cross-filesystem moves, etc.) are still logged and surfaced immediately.
* Aligns rename handling with the retry semantics recently added to safeRemove().
This brings safeRename() in line with POSIX-recommended handling for interruptible system calls while keeping retries bounded and safe.
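As a minimal Python sketch of the retry semantics just described (the actual safeRename() is D code; the retry cap and backoff values here are illustrative):

```python
import errno
import os
import time

def safe_rename(old: str, new: str, max_attempts: int = 3,
                backoff: float = 0.05) -> bool:
    """Bounded-retry rename: retry EINTR/EBUSY, never retry EXDEV."""
    for attempt in range(1, max_attempts + 1):
        try:
            os.rename(old, new)
            return True
        except OSError as e:
            if e.errno == errno.EXDEV:
                # Cross-filesystem rename: report, never retry
                return False
            if e.errno in (errno.EINTR, errno.EBUSY) and attempt < max_attempts:
                time.sleep(backoff)  # small backoff avoids tight retry loops
                continue
            return False  # genuine failure (permissions, missing path, ...)
    return False
```

Callers consume the boolean result rather than relying on an exception propagating, which is what enables the call-site checks described elsewhere in these notes.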
This change improves the robustness of local file cleanup by enhancing util.safeRemove() to retry deletion when the underlying filesystem operation is interrupted or temporarily busy.
safeRemove() now retries remove() when:
* EINTR — Interrupted system call (signal received before syscall completion)
* EBUSY — transient “resource busy” conditions
* Retries are capped and include a small backoff to avoid tight retry loops.
Existing behaviour is preserved:
* ENOENT is treated as success (file already removed).
* All other error conditions are logged once and returned.
**Why this is needed**
Under normal operation (signal handling, network interruptions, shutdown sequences), POSIX systems such as Linux and FreeBSD can legitimately return EINTR for file deletion calls. Treating this as a hard failure creates noisy logs and can leave temporary files behind even though a retry would succeed.
In rarer cases, EBUSY may also be returned for transient filesystem conditions. A limited retry avoids false error reporting while still surfacing persistent failures.
**Scope**
* Applies to Linux and FreeBSD.
* No change to functional semantics or error visibility for genuine failures.
* Reduces spurious “Interrupted system call” errors observed in debug logs, particularly for temporary resume download files.
This aligns safeRemove() with POSIX-recommended retry behaviour for interruptible system calls while keeping retries bounded and safe.
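The same pattern for deletion, as a Python sketch (safeRemove() itself is D; the ENOENT-as-success behaviour matches the description above, while the cap and backoff are illustrative):

```python
import errno
import os
import time

def safe_remove(path: str, max_attempts: int = 3,
                backoff: float = 0.05) -> bool:
    """Bounded-retry remove: retry EINTR/EBUSY, treat ENOENT as success."""
    for attempt in range(1, max_attempts + 1):
        try:
            os.remove(path)
            return True
        except FileNotFoundError:
            return True   # ENOENT: file already removed, treat as success
        except OSError as e:
            if e.errno in (errno.EINTR, errno.EBUSY) and attempt < max_attempts:
                time.sleep(backoff)
                continue
            return False  # genuine failure: logged once in the real code
    return False
```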
* Fix performPermanentDelete() to ensure that when performing a POST operation, a zero content length is set, so that libcurl understands that it does not need to read any upload data; otherwise libcurl may fall back to reading from stdin in some cases, which hangs the client
Starting with FreeBSD 15.0-RELEASE, FreeBSD implements the inotify system calls. So, when the OS is FreeBSD, check the version (uname -U) and use libc (15.0-RELEASE and later) or the independent library (up to 14.3-RELEASE).
The FreeBSD implementation of the inotify system calls does not support some of the flags that the Linux equivalent supports, so the flags are modified for FreeBSD.
Co-authored-by: Hiroo Ono <hiroo@oikumene.net>
Co-authored-by: abraunegg <alex.braunegg@gmail.com>
* Update testInternetReachability() function to ensure that the same curl options used for general activity are used for the testInternetReachability() function, ensuring that CURLOPT_NOSIGNAL, CURLOPT_TCP_NODELAY and CURLOPT_FORBID_REUSE are set correctly and aligned to operational use within the sync engine itself.
This change fixes an issue where directories configured via 'skip_dir' were not consistently excluded due to path normalisation mismatches.
In some code paths, directory paths were evaluated with prefixes such as `./` or `/`, while 'skip_dir' rules are defined relative to the sync directory. This caused valid 'skip_dir' rules to fail matching, allowing excluded directories to be treated as in-scope and persisted to the local state database.
The directory exclusion logic has been reworked to:
* Canonicalise directory paths before evaluation
* Perform a full-path match using the canonical form
* Retain non-strict segment matching behaviour where applicable
This ensures 'skip_dir' rules are applied consistently across all directory processing paths, preventing excluded directories from being incorrectly tracked or classified as deleted locally.
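The canonicalise-then-match approach can be sketched in Python as follows (the client's matcher is D code with richer wildcard handling; the helper names here are hypothetical and the segment matching is deliberately simplified):

```python
import posixpath

def canonicalise(path: str) -> str:
    """Strip './' and leading '/' so paths compare relative to sync_dir."""
    return posixpath.normpath(path).lstrip("/")

def is_skipped_dir(path: str, skip_dirs) -> bool:
    candidate = canonicalise(path)
    for rule in skip_dirs:
        rule = canonicalise(rule)
        # Full-path match on the canonical form, plus non-strict
        # matching of the rule against any single path segment
        if candidate == rule or rule in candidate.split("/"):
            return True
    return False
```

With canonicalisation in place, `./Backup`, `/Backup`, and `Backup` all evaluate identically against a `skip_dir` rule of `Backup`, which is the mismatch the fix addresses.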
This change corrects shadow validation for 'sync_list' include rules when evaluating against both 'skip_dir' and 'skip_file'.
Previously, rooted 'sync_list' paths (those starting with /) were not consistently normalised before validation, allowing some non-viable configurations to pass undetected. This could result in 'sync_list' include rules being silently shadowed by 'skip_dir' or 'skip_file'.
The validation logic now normalises rooted 'sync_list' paths to match runtime filtering semantics, ensuring shadowed include rules are reliably detected and reported as configuration errors.
This brings 'skip_dir' and 'skip_file' shadow detection into parity with actual sync behaviour and prevents contradictory client-side filtering configurations.
This change adds validation to client-side filtering configuration to detect when 'sync_list' inclusion rules are rendered non-viable by 'skip_dir' or 'skip_file'.
The client now checks whether any 'sync_list' include paths would be excluded by the active 'skip_dir' or 'skip_file' rules, using the same runtime exclusion logic as the sync engine. If such a conflict is detected, the client reports a clear configuration error and exits early.
This prevents contradictory filtering configurations where explicitly included files or folders can never be synced, reducing confusion and avoiding unintended or unsafe sync behaviour.
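A Python sketch of the shadow-detection idea across both changes above (the real validation reuses the sync engine's D exclusion logic; `find_shadowed_includes` and its matching rules are simplified, hypothetical stand-ins):

```python
import fnmatch
import posixpath

def normalise(path: str) -> str:
    """Normalise a rooted sync_list path to runtime matching form."""
    return posixpath.normpath(path).lstrip("/")

def find_shadowed_includes(sync_list, skip_dir, skip_file):
    """Return sync_list include rules that can never match because an
    active skip_dir or skip_file rule excludes them."""
    shadowed = []
    for include in sync_list:
        if include.startswith("!"):
            continue  # exclusion rules cannot be shadowed
        norm = normalise(include)
        segments = norm.split("/")
        dir_hit = any(rule in segments or
                      fnmatch.fnmatch(norm, normalise(rule))
                      for rule in skip_dir)
        file_hit = any(fnmatch.fnmatch(segments[-1], rule)
                       for rule in skip_file)
        if dir_hit or file_hit:
            shadowed.append(include)
    return shadowed
```

If this returns a non-empty list, the configuration is contradictory and the client reports it as an error before any sync work begins.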
This change hardens the handling of 'skip_dir' and 'skip_file' when multiple entries are specified in the configuration file.
Previously, multiple config lines could be concatenated in a way that produced empty rule entries (for example ||), leading to confusing or unintended filtering behaviour. Rules could also be duplicated unnecessarily.
This update introduces safe, normalised merging of pipe-delimited rules by:
* Trimming whitespace
* Removing empty entries
* De-duplicating rules
* Ensuring a clean, predictable combined rule set
This prevents malformed rule sets from being generated and improves robustness and clarity of client-side filtering behaviour without changing existing, valid configurations.
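The merge steps above can be sketched directly (the real implementation is D; this Python helper mirrors the trim / drop-empty / de-duplicate behaviour):

```python
def merge_rules(*rule_strings: str) -> str:
    """Merge pipe-delimited rule strings: trim whitespace, drop empty
    entries, and de-duplicate while preserving first-seen order."""
    seen, merged = set(), []
    for rules in rule_strings:
        for entry in rules.split("|"):
            entry = entry.strip()
            if entry and entry not in seen:
                seen.add(entry)
                merged.append(entry)
    return "|".join(merged)
```

For example, merging `~*|.~*` with `.~*||*.tmp ` yields a clean `~*|.~*|*.tmp` with no empty `||` entries and no duplicates.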
* Enhance displayFileSystemErrorMessage() to include details of the actual path that generated the error message to make diagnostics easier when a file system issue is generated
* Add SQLITE_READONLY as a case to catch if the database file is read-only
* Based on an application crash output from OMV: if the client is unable to create the required path, the application crashes. Harden all calls to mkdirRecurse() by wrapping them in a try block.
* Update Dockerfiles December 2025:
- Update to Fedora 43 and GO 1.23
- Update to Alpine 3.23 and GO 1.25
- Update to Debian 13 and support relevant time64 package changes
This update refines the --resync user warning prompt to provide clearer, more accurate guidance on the behaviour and risks associated with performing a resynchronisation operation.
Key improvements include:
* Explicitly explaining that --resync deletes the client's local state database and rebuilds it entirely from the current OneDrive contents.
* Accurately describing possible outcomes, including overwrite scenarios, conflict-driven renames or duplications, increased upload/download activity, and potential Microsoft Graph API throttling (HTTP 429).
* Removing incorrect implications that local-only files may be deleted during --resync (they will instead be uploaded, unless destructive cleanup modes are explicitly used).
* Strengthening safety guidance by recommending:
  * Maintaining a current backup of the sync_dir
  * Running the same command with --dry-run before executing a real --resync
  * Enabling use_recycle_bin so locally triggered online deletions are preserved
* Improved formatting and readability for terminal output.
This change enhances user understanding, reduces the likelihood of accidental data loss, and aligns runtime messaging with the client's actual synchronisation logic and documented behaviour.
This update improves the robustness of application logging by ensuring that invalid or non-writeable log directory configurations no longer cause the client to exit unexpectedly.
Key changes:
* The calculateLogDirectory() function now performs an explicit writeability check on the resolved log_dir path.
* If the directory exists but cannot be written to (e.g., permissions such as /var/log/onedrive owned by root or another user), the client logs an error and automatically falls back to using the user’s home directory for runtime logs if enabled.
* Runtime behaviour now matches the intended design: logging misconfiguration must never stop or terminate the application.
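The fallback behaviour can be sketched as follows (the real calculateLogDirectory() is D code with logging; the function signature here is a hypothetical simplification):

```python
import os

def calculate_log_directory(configured: str, home: str) -> str:
    """Resolve the log directory, falling back to the user's home when
    the configured path is missing or not writeable. Misconfiguration
    must never terminate the application."""
    if os.path.isdir(configured) and os.access(configured, os.W_OK):
        return configured
    # The real client logs an error here before falling back
    return home
```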
This update adds validation to ensure the configured 'recycle_bin_path' is not located within the 'sync_dir'.
If the recycle bin is a child of the sync directory, any file moved into the recycle bin during online delete processing would be detected as a new local file and uploaded back to Microsoft OneDrive, creating a loop of re-uploads.
The client now:
* Expands and normalises the 'recycle_bin_path' (including tilde handling)
* Verifies that the resolved path is outside the configured 'sync_dir'
* Fails fast with a clear error message if the configuration is unsafe
* Prevents data churn, unexpected uploads, and confusing behaviour for users
This ensures correct, predictable behaviour when `use_recycle_bin = true` and avoids accidental mis-configuration.
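The containment check can be sketched in Python (the actual validation is D; the function name and error message here are illustrative):

```python
import os

def validate_recycle_bin_path(recycle_bin_path: str, sync_dir: str) -> None:
    """Fail fast if recycle_bin_path resolves inside sync_dir: files
    moved there would be re-detected as new local files and re-uploaded
    in a loop."""
    rb = os.path.realpath(os.path.expanduser(recycle_bin_path))
    sd = os.path.realpath(os.path.expanduser(sync_dir))
    if rb == sd or os.path.commonpath([rb, sd]) == sd:
        raise ValueError(
            "recycle_bin_path must be outside sync_dir: "
            "%s is within %s" % (rb, sd))
```

Note that `expanduser` handles the tilde expansion and `realpath` normalises symlinks, so a recycle bin reached indirectly through a symlink into the sync directory is still rejected.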
* When Microsoft OneDrive sends a JSON for a deleted item, rather than blindly calling safeBackup() when the item to be deleted is determined not to be currently in sync with the local filesystem, perform some additional validation and only perform safeBackup() if there is a hash difference to the prior known state
* Adjust default 'operation_timeout' value to align to CURLOPT_TIMEOUT default
* Update downloadFile() to ensure correct handling when operational timeouts occur to correctly resume download and use correct offset for download
* Fix WebSockets not working with SharePoint libraries by updating the 'websocketEndpoint' to use the 'drive_id' value when specified in the configuration file.
* Fix issue where the client creates many file versions when the file modified time differs locally from that online, and the client does not evaluate which timestamp should be corrected - online or local
* Add missing TOC entry in application-config-options.md
* Add configuration option 'disable_version_check' to allow users to configure whether the application will check the GitHub API for release information to assist in advising users of new application releases.
* When using --upload-only, the inbuilt WebSocket process is disabled and not used; thus, there is a spinlock that needs to be taken care of when using --upload-only
* When using WebSockets to listen for remote changes, a local change will trigger an 'echo' of that local change, which needs to be debounced if the signal is received within a short window
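The echo-debounce idea can be sketched as follows (the client implements this in D; the class name and the 2-second window here are illustrative, not the client's actual setting):

```python
import time

class EchoDebouncer:
    """Suppress remote-change notifications that arrive within a short
    window after a local change: these are echoes of our own upload."""

    def __init__(self, window_seconds: float = 2.0):
        self.window = window_seconds
        self.last_local_change = 0.0

    def record_local_change(self, now: float = None) -> None:
        self.last_local_change = time.monotonic() if now is None else now

    def should_process_remote_signal(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Inside the window: treat the signal as an echo and skip it
        return (now - self.last_local_change) >= self.window
```

A monotonic clock is used so the window cannot be distorted by wall-clock adjustments; signals arriving after the window expires are processed normally.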
Implement full “Display Manager Integration” support for both GNOME and KDE desktop environments. This new feature allows the OneDrive Client for Linux to detect the active desktop session and automatically:
* Register the configured sync_dir as a “special place” or sidebar entry within the file manager (Nautilus on GNOME; Dolphin on KDE).
* Apply a custom “onedrive” folder icon to the synchronisation directory when the installed icon theme supports it.
* Cleanly install and uninstall required resources (icons, bookmarks, file manager integration) via the Makefile’s install and uninstall targets, thereby supporting system-wide installations, packaging workflows, and per-user installs.
* Introduce a new configuration option display_manager_integration (boolean) to enable or disable this integration behaviour at runtime.
* Update documentation and usage guidance to clearly explain what “Display Manager Integration” means, what this client implements (sidebar entry + icon) and what features remain out-of-scope (context menus, overlay badges, tray icons).
* Ensure safe, idempotent integration logic for both GNOME and KDE (bookmark manipulation, icon theme detection, cache refresh) with fallbacks and minimal dependencies.
With this merge, users installing via make install or system packages will benefit from enhanced desktop usability: the OneDrive folder appears visibly and intuitively within their standard file manager sidebar, making access and identification simpler. At the same time, the core sync engine remains focused on reliable file synchronisation, with the desktop integration layer remaining optional and disabled by default unless explicitly enabled via configuration.