Merge branch 'onedrive-v2.5.0-alpha-5' into progress_new

Jcomp 2024-03-09 09:26:15 +08:00
commit d5582bb02e
60 changed files with 3109 additions and 1021 deletions


@ -55,7 +55,7 @@ endif
system_unit_files = contrib/systemd/onedrive@.service
user_unit_files = contrib/systemd/onedrive.service
DOCFILES = readme.md config LICENSE changelog.md docs/advanced-usage.md docs/application-config-options.md docs/application-security.md docs/business-shared-folders.md docs/docker.md docs/install.md docs/national-cloud-deployments.md docs/podman.md docs/privacy-policy.md docs/sharepoint-libraries.md docs/terms-of-service.md docs/ubuntu-package-install.md docs/usage.md
DOCFILES = readme.md config LICENSE changelog.md docs/advanced-usage.md docs/application-config-options.md docs/application-security.md docs/business-shared-items.md docs/client-architecture.md docs/docker.md docs/install.md docs/national-cloud-deployments.md docs/podman.md docs/privacy-policy.md docs/sharepoint-libraries.md docs/terms-of-service.md docs/ubuntu-package-install.md docs/usage.md docs/known-issues.md
ifneq ("$(wildcard /etc/redhat-release)","")
RHEL = $(shell cat /etc/redhat-release | grep -E "(Red Hat Enterprise Linux|CentOS)" | wc -l)
@ -74,6 +74,7 @@ SOURCES = \
src/qxor.d \
src/curlEngine.d \
src/onedrive.d \
src/webhook.d \
src/sync.d \
src/itemdb.d \
src/sqlite.d \


@ -71,9 +71,9 @@ When using the OneDrive Client for Linux, the above authentication scopes will b
This is similar to the Microsoft Windows OneDrive Client:
![Linux Authentication to Microsoft OneDrive](./puml/onedrive_windows_authentication.png)
![Windows Authentication to Microsoft OneDrive](./puml/onedrive_windows_authentication.png)
In a business environment, where IT Staff need to 'approve' the OneDrive Client for Linux, can do so knowing that the client is safe to use. The only concernt that the IT Staff should have is how is the client device, where the OneDrive Client for Linux is running, is being secured, as in a corporate setting, Windows would be controlled by Active Directory and applicable Group Policy Objects (GPO's) to ensure the security of corporate data on the client device. It is out of scope for this client to handle how Linux devices are being secure.
In a business setting, IT staff who need to authorise the use of the OneDrive Client for Linux in their environment can be assured of its safety. The primary concern for IT staff should be securing the device running the OneDrive Client for Linux. Unlike in a corporate environment where Windows devices are secured through Active Directory and Group Policy Objects (GPOs) to protect corporate data on the device, it is beyond the responsibility of this client to manage security on Linux devices.
## Configuring read-only access to your OneDrive data
In some situations, it may be desirable to configure the OneDrive Client for Linux to operate in a totally read-only manner.


@ -1,40 +0,0 @@
# How to configure OneDrive Business Shared Folder Sync
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Important Note
This feature has been 100% re-written from v2.5.0 onwards. A prerequisite for using this capability in v2.5.0 and above is that you revert any Shared Business Folder configuration you may currently be using, including, but not limited to:
* Removing `sync_business_shared_folders = "true|false"` from your 'config' file
* Removing the 'business_shared_folders' file
* Removing any local data | shared folder data from your configured 'sync_dir' to ensure that there are no conflicts or issues.
## Process Overview
Syncing OneDrive Business Shared Folders requires additional configuration for your 'onedrive' client:
1. From the OneDrive web interface, review the 'Shared' objects that have been shared with you.
2. Select the applicable folder, and click the 'Add shortcut to My files', which will then add this to your 'My files' folder
3. Update your OneDrive Client for Linux 'config' file to enable the feature by adding `sync_business_shared_items = "true"`. Adding this option will trigger a `--resync` requirement.
4. Test the configuration using '--dry-run'
5. Remove the use of '--dry-run' and sync the OneDrive Business Shared folders as required
**NOTE:** This documentation will be updated as this feature progresses.
### Enable syncing of OneDrive Business Shared Folders via config file
```text
sync_business_shared_items = "true"
```
### Disable syncing of OneDrive Business Shared Folders via config file
```text
sync_business_shared_items = "false"
```
## Known Issues
Shared folders that are shared with you by people outside of your 'organisation' cannot be synced. This is due to the Microsoft Graph API not presenting these folders.
Shared folders that match this scenario will display a 'world' symbol when you view 'Shared' via OneDrive online, as shown below:
![shared_with_me](./images/shared_with_me.JPG)
This issue is being tracked by: [#966](https://github.com/abraunegg/onedrive/issues/966)


@ -0,0 +1,251 @@
# How to sync OneDrive Business Shared Items
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Important Note
This feature has been 100% re-written from v2.5.0 onwards. A prerequisite for using this capability in v2.5.0 and above is that you revert any Shared Business Folder configuration you may currently be using, including, but not limited to:
* Removing `sync_business_shared_folders = "true|false"` from your 'config' file
* Removing the 'business_shared_folders' file
* Removing any local data and shared folder data from your configured 'sync_dir' to ensure that there are no conflicts or issues.
* Removing any configuration online that might be related to using this feature prior to v2.5.0
## Process Overview
Syncing OneDrive Business Shared Folders requires additional configuration for your 'onedrive' client:
1. From the OneDrive web interface, review the 'Shared' objects that have been shared with you.
2. Select the applicable folder, and click the 'Add shortcut to My files', which will then add this to your 'My files' folder
3. Update your OneDrive Client for Linux 'config' file to enable the feature by adding `sync_business_shared_items = "true"`. Adding this option will trigger a `--resync` requirement.
4. Test the configuration using '--dry-run'
5. Remove the use of '--dry-run' and sync the OneDrive Business Shared folders as required
**NOTE:** This documentation will be updated as this feature progresses.
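As a convenience, step 3 above can be scripted. A minimal sketch in Python, assuming the default config location `~/.config/onedrive/config` (the path and the state of any existing 'config' file are assumptions about your setup):

```python
from pathlib import Path

def enable_shared_items(config_path: Path) -> None:
    # Append the option only if it is not already present (sketch only;
    # your 'config' file location and existing contents may differ).
    text = config_path.read_text() if config_path.exists() else ""
    if "sync_business_shared_items" not in text:
        config_path.write_text(text + 'sync_business_shared_items = "true"\n')

# e.g. enable_shared_items(Path.home() / ".config" / "onedrive" / "config")
```

Remember that adding this option triggers a `--resync` requirement on the next sync.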
### Enable syncing of OneDrive Business Shared Items via config file
```text
sync_business_shared_items = "true"
```
### Disable syncing of OneDrive Business Shared Items via config file
```text
sync_business_shared_items = "false"
```
## Syncing OneDrive Business Shared Folders
Use the following steps to add a OneDrive Business Shared Folder to your account:
1. Login to Microsoft OneDrive online, and navigate to 'Shared' from the left hand side pane
![objects_shared_with_me](./images/objects_shared_with_me.png)
2. Select the respective folder you wish to sync, and click the 'Add shortcut to My files' at the top of the page
![add_shared_folder](./images/add_shared_folder.png)
3. The final result online will look like this:
![shared_folder_added](./images/shared_folder_added.png)
When using Microsoft Windows, this shared folder will appear as the following:
![windows_view_shared_folders](./images/windows_view_shared_folders.png)
4. Sync your data using `onedrive --sync --verbose`. If you have just enabled the `sync_business_shared_items = "true"` configuration option, you will be required to perform a resync. During the sync, the selected shared folder will be downloaded:
```
...
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 4
Finished processing /delta JSON response from the OneDrive API
Processing 3 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Creating local directory: ./my_shared_folder
Quota information is restricted or not available for this drive.
Syncing this OneDrive Business Shared Folder: my_shared_folder
Fetching /delta response from the OneDrive API for Drive ID: b!BhWyqa7K_kqXqHtSIlsqjR5iJogxpWxDradnpVGTU2VxBOJh82Y6S4he4rdnGPBT
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 6
Finished processing /delta JSON response from the OneDrive API
Processing 6 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Creating local directory: ./my_shared_folder/asdf
Creating local directory: ./my_shared_folder/original_data
Number of items to download from OneDrive: 3
Downloading file: my_shared_folder/asdf/asdfasdfhashdkfasdf.txt ... done
Downloading file: my_shared_folder/asdf/asdfasdf.txt ... done
Downloading file: my_shared_folder/original_data/file1.data ... done
Performing a database consistency and integrity check on locally stored data
...
```
When viewed locally on Linux, this shared folder appears as follows:
![linux_shared_folder_view](./images/linux_shared_folder_view.png)
Any shared folder you add can utilise any 'client side filtering' rules that you have created.
## Syncing OneDrive Business Shared Files
There are two methods to sync OneDrive Business Shared Files with the OneDrive application:
1. Add a 'shortcut' to your 'My files' for the file. This creates a URL shortcut to the file which can be followed when using a Linux window manager (GNOME, KDE etc.) and the link will open in a browser. This is the only option that Microsoft Windows supports.
2. Use the `--sync-shared-files` option to sync all files shared with you to your local disk. If you use this method, you can utilise any 'client side filtering' rules that you have created to filter out files you do not want locally. This option will create a new folder locally, with sub-folders named after the person who shared the data with you.
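For background on option 1, the `.url` file that ends up on disk is a standard 'internet shortcut'. A hypothetical sketch of that format (the client downloads these files from OneDrive; it does not generate them, so this is illustration only):

```python
def write_url_shortcut(target_url: str, dest: str) -> None:
    # Standard '[InternetShortcut]' file format, as used by the '.url'
    # links described above. Shown purely for illustration.
    with open(dest, "w") as f:
        f.write("[InternetShortcut]\nURL={}\n".format(target_url))
```

Opening such a file in a browser simply follows the `URL=` line.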
### Syncing OneDrive Business Shared Files using Option 1
1. As per the above method for adding folders, select the shared file, then select 'Add shortcut' for the file
![add_shared_file_shortcut](./images/add_shared_file_shortcut.png)
2. The final result online will look like this:
![add_shared_file_shortcut_added](./images/online_shared_file_link.png)
When using Microsoft Windows, this shared file will appear as the following:
![windows_view_shared_file_link](./images/windows_view_shared_file_link.png)
3. Sync your data using `onedrive --sync --verbose`. If you have just enabled the `sync_business_shared_items = "true"` configuration option, you will be required to perform a resync.
```
...
All application operations will be performed in the configured local 'sync_dir' directory: /home/alex/OneDrive
Fetching /delta response from the OneDrive API for Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 2
Finished processing /delta JSON response from the OneDrive API
Processing 1 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Number of items to download from OneDrive: 1
Downloading file: ./file to share.docx.url ... done
Syncing this OneDrive Business Shared Folder: my_shared_folder
Fetching /delta response from the OneDrive API for Drive ID: b!BhWyqa7K_kqXqHtSIlsqjR5iJogxpWxDradnpVGTU2VxBOJh82Y6S4he4rdnGPBT
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 0
Finished processing /delta JSON response from the OneDrive API
No additional changes or items that can be applied were discovered while processing the data received from Microsoft OneDrive
Quota information is restricted or not available for this drive.
Performing a database consistency and integrity check on locally stored data
Processing DB entries for this Drive ID: b!BhWyqa7K_kqXqHtSIlsqjR5iJogxpWxDradnpVGTU2VxBOJh82Y6S4he4rdnGPBT
Quota information is restricted or not available for this drive.
...
```
When viewed locally on Linux, this shared file link appears as follows:
![linux_view_shared_file_link](./images/linux_view_shared_file_link.png)
Any shared file link you add can utilise any 'client side filtering' rules that you have created.
### Syncing OneDrive Business Shared Files using Option 2
**NOTE:** When using option 2, all files that have been shared with you will be downloaded by default. To reduce this, first use `--list-shared-items` to list all items shared with your account, then use 'client side filtering' rules such as a 'sync_list' configuration to selectively sync only the required files to your local system.
1. Review all items that have been shared with you by using `onedrive --list-shared-items`. This should display output similar to the following:
```
...
Listing available OneDrive Business Shared Items:
-----------------------------------------------------------------------------------
Shared File: large_document_shared.docx
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared File: no_download_access.docx
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared File: online_access_only.txt
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared File: read_only.txt
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared File: qewrqwerwqer.txt
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared File: dummy_file_to_share.docx
Shared By: testuser2 testuser2 (testuser2@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared Folder: Sub Folder 2
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared File: file to share.docx
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared Folder: Top Folder
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared Folder: my_shared_folder
Shared By: testuser2 testuser2 (testuser2@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
Shared Folder: Jenkins
Shared By: test user (testuser@mynasau3.onmicrosoft.com)
-----------------------------------------------------------------------------------
...
```
2. If applicable, add entries to a 'sync_list' file, to only sync the shared files that are of importance to you.
3. Run the command `onedrive --sync --verbose --sync-shared-files` to sync the shared files to your local file system. This will create a new local folder called 'Files Shared With Me', containing sub-directories named after the entity account that shared each file with you; the shared files will reside in those sub-directories:
```
...
Finished processing /delta JSON response from the OneDrive API
No additional changes or items that can be applied were discovered while processing the data received from Microsoft OneDrive
Syncing this OneDrive Business Shared Folder: my_shared_folder
Fetching /delta response from the OneDrive API for Drive ID: b!BhWyqa7K_kqXqHtSIlsqjR5iJogxpWxDradnpVGTU2VxBOJh82Y6S4he4rdnGPBT
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 0
Finished processing /delta JSON response from the OneDrive API
No additional changes or items that can be applied were discovered while processing the data received from Microsoft OneDrive
Quota information is restricted or not available for this drive.
Creating the OneDrive Business Shared Files Local Directory: /home/alex/OneDrive/Files Shared With Me
Checking for any applicable OneDrive Business Shared Files which need to be synced locally
Creating the OneDrive Business Shared File Users Local Directory: /home/alex/OneDrive/Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)
Creating the OneDrive Business Shared File Users Local Directory: /home/alex/OneDrive/Files Shared With Me/testuser2 testuser2 (testuser2@mynasau3.onmicrosoft.com)
Number of items to download from OneDrive: 7
Downloading file: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/file to share.docx ... done
OneDrive returned a 'HTTP 403 - Forbidden' - gracefully handling error
Unable to download this file as this was shared as read-only without download permission: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/no_download_access.docx
ERROR: File failed to download. Increase logging verbosity to determine why.
Downloading file: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/no_download_access.docx ... failed!
Downloading file: Files Shared With Me/testuser2 testuser2 (testuser2@mynasau3.onmicrosoft.com)/dummy_file_to_share.docx ... done
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 0% | ETA --:--:--
Downloading file: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/online_access_only.txt ... done
Downloading file: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/read_only.txt ... done
Downloading file: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/qewrqwerwqer.txt ... done
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 5% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 10% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 15% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 20% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 25% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 30% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 35% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 40% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 45% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 50% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 55% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 60% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 65% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 70% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 75% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 80% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 85% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 90% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 95% | ETA 00:00:00
Downloading: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... 100% | DONE in 00:00:00
Quota information is restricted or not available for this drive.
Downloading file: Files Shared With Me/test user (testuser@mynasau3.onmicrosoft.com)/large_document_shared.docx ... done
Quota information is restricted or not available for this drive.
Quota information is restricted or not available for this drive.
Performing a database consistency and integrity check on locally stored data
Processing DB entries for this Drive ID: b!BhWyqa7K_kqXqHtSIlsqjR5iJogxpWxDradnpVGTU2VxBOJh82Y6S4he4rdnGPBT
Quota information is restricted or not available for this drive.
...
```
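The directory layout shown in the output above follows a simple pattern, sketched here for illustration (not the client's actual code):

```python
from pathlib import Path

def shared_file_local_path(sync_dir: str, display_name: str,
                           email: str, filename: str) -> Path:
    # 'Files Shared With Me/<sharer display name> (<sharer email>)/<filename>',
    # mirroring the directories created in the log output above.
    return Path(sync_dir) / "Files Shared With Me" / f"{display_name} ({email})" / filename
```

Because the sharer's identity forms the directory name, 'sync_list' entries can target individual sharers under 'Files Shared With Me'.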
When viewed locally on Linux, the 'Files Shared With Me' folder and its content appear as follows:
![files_shared_with_me_folder](./images/files_shared_with_me_folder.png)
Unfortunately, there is no Microsoft Windows equivalent for this capability.
## Known Issues
Shared folders that are shared with you by people outside of your 'organisation' cannot be synced. This is due to the Microsoft Graph API not presenting these folders.
Shared folders that match this scenario will display a 'world' symbol when you view 'Shared' via OneDrive online, as shown below:
![shared_with_me](./images/shared_with_me.JPG)
This issue is being tracked by: [#966](https://github.com/abraunegg/onedrive/issues/966)

docs/client-architecture.md

@ -0,0 +1,331 @@
# OneDrive Client for Linux Application Architecture
## How does the client work at a high level?
The client utilises the 'libcurl' library to communicate with the Microsoft Authentication Service and the Microsoft Graph API. The diagram below shows this high level interaction with the Microsoft services online:
![client_use_of_libcurl](./puml/client_use_of_libcurl.png)
Depending on your operational environment, it is possible to 'tweak' the following options, which modify how libcurl interacts with Microsoft OneDrive services:
* Downgrade all HTTPS operations to use HTTP1.1 ('force_http_11')
* Control how long a specific transfer should take before it is considered too slow and aborted ('operation_timeout')
* Control libcurl handling of DNS Cache Timeout ('dns_timeout')
* Control the maximum time allowed for the connection to be established ('connect_timeout')
* Control the timeout for activity on an established HTTPS connection ('data_timeout')
* Control what IP protocol version should be used when communicating with OneDrive ('ip_protocol_version')
* Control what User Agent is presented to Microsoft services ('user_agent')
**Note:** The default 'user_agent' value conforms to specific Microsoft requirements to identify as an ISV that complies with OneDrive traffic decoration requirements. Changing this value may impact how Microsoft sees your client, and your traffic may get throttled. For further information please read: https://learn.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
Diving a little deeper into how the client operates, the diagram below outlines at a high level the operational workflow of the OneDrive Client for Linux, demonstrating how it interacts with the OneDrive API to maintain synchronisation, manage local and cloud data integrity, and ensure that user data is accurately mirrored between the local filesystem and OneDrive cloud storage.
![High Level Application Sequence](./puml/high_level_operational_process.png)
The application operational processes have several high level key stages:
1. **Access Token Validation:** Initially, the client validates its access and the existing access token, refreshing it if necessary. This step ensures that the client has the required permissions to interact with the OneDrive API.
2. **Query Microsoft OneDrive API:** The client queries the /delta API endpoint of Microsoft OneDrive, which returns JSON responses. The /delta endpoint is particularly used for syncing changes, helping the client to identify any updates in the OneDrive storage.
3. **Process JSON Responses:** The client processes each JSON response to determine if it represents a 'root' or 'deleted' item. Items marked as 'root' or 'deleted' are processed immediately; all other items are evaluated against client-side filtering rules to decide whether to discard them or to process and save them in the local database cache for actions like creating directories or downloading files.
4. **Local Cache Database Processing for Data Integrity:** The client processes its local cache database to check for data integrity and differences compared to the OneDrive storage. If differences are found, such as a file or folder change including deletions, the client uploads these changes to OneDrive. Responses from the API, including item metadata, are saved to the local cache database.
5. **Local Filesystem Scanning:** The client scans the local filesystem for new files or folders. Each new item is checked against client-side filtering rules. If an item passes the filtering, it is uploaded to OneDrive. Otherwise, it is discarded if it doesn't meet the filtering criteria.
6. **Final Data True-Up:** Lastly, the client queries the /delta link for a final true-up, processing any further online JSON changes if required. This ensures that the local and OneDrive storages are fully synchronised.
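These stages can be summarised as a pseudocode sketch. All names here (`api`, `db`, `fs`, `filters` and their methods) are illustrative stand-ins, not the client's actual internal API:

```python
def sync_cycle(api, db, fs, filters):
    # Hypothetical components: 'api' wraps the Microsoft Graph calls, 'db' is
    # the local cache database, 'fs' the local filesystem, 'filters' the
    # client-side filtering rules.
    api.refresh_token_if_needed()              # 1. access token validation
    for item in api.get_delta():               # 2. query the /delta endpoint
        if item.is_root or item.is_deleted:
            db.apply(item)                     # 3. apply immediately...
        elif not filters.excludes(item.path):
            db.save(item)                      #    ...or cache after filtering
    for change in db.differences(fs):          # 4. cache vs local integrity check
        api.upload(change)                     #    upload local differences
    for new_path in fs.scan_new():             # 5. scan filesystem for new items
        if not filters.excludes(new_path):
            api.upload(new_path)
    for item in api.get_delta():               # 6. final /delta true-up
        db.apply(item)
```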
## What are the operational modes of the client?
There are two main operational modes that the client can utilise:
1. Standalone sync mode that performs a single sync action against Microsoft OneDrive. This method is used when you utilise `--sync`.
2. Ongoing sync mode that continuously syncs your data with Microsoft OneDrive and utilises 'inotify' to watch for local system changes. This method is used when you utilise `--monitor`.
By default, both modes consider all data stored online within Microsoft OneDrive as the 'source-of-truth' - that is, what is online, is the correct data (file version, file content, file timestamp, folder structure and so on). This consideration also matches how the Microsoft OneDrive Client for Windows operates.
However, in standalone mode (`--sync`), you can *change* what reference the client will use as the 'source-of-truth' for your data by using the `--local-first` option so that the application will look at your local files *first* and consider your local files as your 'source-of-truth' to replicate that directory structure to Microsoft OneDrive.
**Critical Advisory:** Please be aware that if you designate a network mount point (such as NFS, Windows Network Share, or Samba Network Share) as your `sync_dir`, this setup inherently lacks 'inotify' support. Support for 'inotify' is essential for real-time tracking of file changes, which means that the client's 'Monitor Mode' cannot immediately detect changes in files located on these network shares. Instead, synchronisation between your local filesystem and Microsoft OneDrive will occur at intervals specified by the `monitor_interval` setting. This limitation regarding 'inotify' support on network mount points like NFS or Samba is beyond the control of this client.
## OneDrive Client for Linux High Level Activity Flows
The diagrams below show the high-level process flow and decision making when running the application.
### Main functional activity flows
![Main Activity](./puml/main_activity_flows.png)
### Processing a potentially new local item
![applyPotentiallyNewLocalItem](./puml/applyPotentiallyNewLocalItem.png)
### Processing a potentially changed local item
![applyPotentiallyChangedItem](./puml/applyPotentiallyChangedItem.png)
### Download a file from Microsoft OneDrive
![downloadFile](./puml/downloadFile.png)
### Upload a modified file to Microsoft OneDrive
![uploadModifiedFile](./puml/uploadModifiedFile.png)
### Upload a new local file to Microsoft OneDrive
![uploadFile](./puml/uploadFile.png)
### Determining if an 'item' is synchronised between Microsoft OneDrive and the local file system
![Item Sync Determination](./puml/is_item_in_sync.png)
### Determining if an 'item' is excluded due to 'Client Side Filtering' rules
By default, the OneDrive Client for Linux will sync all files and folders between Microsoft OneDrive and the local filesystem.
Client Side Filtering in the context of this client refers to user-configured rules that determine what files and directories the client should upload or download from Microsoft OneDrive. These rules are crucial for optimising synchronisation, especially when dealing with large numbers of files or specific file types. The OneDrive Client for Linux offers several configuration options to facilitate this:
* **skip_dir:** This option allows the user to specify directories that should not be synchronised with OneDrive. It's particularly useful for omitting large or irrelevant directories from the sync process.
* **skip_dotfiles:** Dotfiles, usually configuration files or scripts, can be excluded from the sync. This is useful for users who prefer to keep these files local.
* **skip_file:** Specific files can be excluded from synchronisation using this option. It provides flexibility in selecting which files are essential for cloud storage.
* **skip_symlinks:** Symlinks often point to files outside the OneDrive directory or to locations that are not relevant for cloud storage. This option prevents them from being included in the sync.
This exclusion process can be illustrated by the following activity diagram. A 'true' return value means that the path being evaluated needs to be excluded:
![Client Side Filtering Determination](./puml/client_side_filtering_rules.png)
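A simplified sketch of that determination follows; the parameter names mirror the config options above, but the client's real matching rules are more involved (e.g. wildcard and path-anchoring behaviour), so treat this as illustrative only:

```python
import fnmatch
from pathlib import PurePosixPath

def is_excluded(path, skip_dirs=(), skip_files=(),
                skip_dotfiles=False, skip_symlinks=False, is_symlink=False):
    # Return True when 'path' should be excluded from sync, matching the
    # activity diagram's convention that 'true' means exclude.
    parts = PurePosixPath(path).parts
    if skip_symlinks and is_symlink:
        return True                                    # skip_symlinks
    if skip_dotfiles and any(p.startswith(".") for p in parts):
        return True                                    # skip_dotfiles
    if any(fnmatch.fnmatch(p, pat) for p in parts[:-1] for pat in skip_dirs):
        return True                                    # skip_dir
    if any(fnmatch.fnmatch(parts[-1], pat) for pat in skip_files):
        return True                                    # skip_file
    return False
```

For example, `is_excluded("build/out.o", skip_dirs=("build",))` evaluates to True, so that path would not be synced.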
## File conflict handling - default operational modes
When using the default operational modes (`--sync` or `--monitor`), the client application conforms to how the Microsoft Windows OneDrive client operates in terms of resolving file conflicts.
Additionally, when using `--resync`, this conflict resolution can differ slightly because `--resync` *deletes* the known application state, leaving the application with zero reference as to what was previously in sync with the local file system.
Due to this factor, when using `--resync` the online source is always considered accurate and the source-of-truth, regardless of the local file state, file timestamp or file hash.
### Default Operational Modes - Conflict Handling
#### Scenario
1. Create a local file
2. Perform a sync with Microsoft OneDrive using `onedrive --sync`
3. Modify file online
4. Modify file locally with different data|contents
5. Perform a sync with Microsoft OneDrive using `onedrive --sync`
![conflict_handling_default](./puml/conflict_handling_default.png)
#### Evidence of Conflict Handling
```
...
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 2
Finished processing /delta JSON response from the OneDrive API
Processing 1 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Number of items to download from OneDrive: 1
The local file to replace (./1.txt) has been modified locally since the last download. Renaming it to avoid potential local data loss.
The local item is out-of-sync with OneDrive, renaming to preserve existing file and prevent local data loss: ./1.txt -> ./1-onedrive-client-dev.txt
Downloading file ./1.txt ... done
Performing a database consistency and integrity check on locally stored data
Processing DB entries for this Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing ~/OneDrive
The directory has not changed
Processing α
...
The file has not changed
Processing เอกสาร
The directory has not changed
Processing 1.txt
The file has not changed
Scanning the local file system '~/OneDrive' for new data to upload
...
New items to upload to OneDrive: 1
Total New Data to Upload: 52 Bytes
Uploading new file ./1-onedrive-client-dev.txt ... done.
Performing a last examination of the most recent online data within Microsoft OneDrive to complete the reconciliation process
Fetching /delta response from the OneDrive API for Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 2
Finished processing /delta JSON response from the OneDrive API
Processing 1 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Sync with Microsoft OneDrive is complete
Waiting for all internal threads to complete before exiting application
```
### Default Operational Modes - Conflict Handling with --resync
#### Scenario
1. Create a local file
2. Perform a sync with Microsoft OneDrive using `onedrive --sync`
3. Modify file online
4. Modify file locally with different data|contents
5. Perform a sync with Microsoft OneDrive using `onedrive --sync --resync`
![conflict_handling_default_resync](./puml/conflict_handling_default_resync.png)
#### Evidence of Conflict Handling
```
...
Deleting the saved application sync status ...
Using IPv4 and IPv6 (if configured) for all network operations
Checking Application Version ...
...
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 14
Finished processing /delta JSON response from the OneDrive API
Processing 13 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Local file time discrepancy detected: ./1.txt
This local file has a different modified time 2024-Feb-19 19:32:55Z (UTC) when compared to remote modified time 2024-Feb-19 19:32:36Z (UTC)
The local file has a different hash when compared to remote file hash
Local item does not exist in local database - replacing with file from OneDrive - failed download?
The local item is out-of-sync with OneDrive, renaming to preserve existing file and prevent local data loss: ./1.txt -> ./1-onedrive-client-dev.txt
Number of items to download from OneDrive: 1
Downloading file ./1.txt ... done
Performing a database consistency and integrity check on locally stored data
Processing DB entries for this Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing ~/OneDrive
The directory has not changed
Processing α
...
Processing เอกสาร
The directory has not changed
Processing 1.txt
The file has not changed
Scanning the local file system '~/OneDrive' for new data to upload
...
New items to upload to OneDrive: 1
Total New Data to Upload: 52 Bytes
Uploading new file ./1-onedrive-client-dev.txt ... done.
Performing a last examination of the most recent online data within Microsoft OneDrive to complete the reconciliation process
Fetching /delta response from the OneDrive API for Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 2
Finished processing /delta JSON response from the OneDrive API
Processing 1 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Sync with Microsoft OneDrive is complete
Waiting for all internal threads to complete before exiting application
```
## File conflict handling - local-first operational mode
When using `--local-first` as your operational parameter, the client application treats your local filesystem data as the 'source-of-truth' for what should be stored online.
However, Microsoft OneDrive itself has *zero* acknowledgement of this concept, so conflict handling must remain aligned with how Microsoft OneDrive operates on other platforms: the offending local file is renamed.
Additionally, when using `--resync` you are *deleting* the known application state, so the application has zero reference as to what was previously in sync with the local file system.
Due to this factor, when using `--resync` the online source is always considered accurate and the source-of-truth, regardless of the local file state, file timestamp, file hash, or use of `--local-first`.
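The conflict outcomes described above, across both operational modes, can be summarised as a small decision function. This is an illustration derived from the evidence logs in this section, not the client's D code; 'changed' here means the file hash differs from the state recorded at the last sync.

```python
from enum import Enum

class Resolution(Enum):
    IN_SYNC = 1
    UPLOAD = 2          # only the local copy changed
    DOWNLOAD = 3        # only the online copy changed
    PRESERVE_BOTH = 4   # rename local copy, upload it, download online copy

def resolve(local_changed: bool, online_changed: bool) -> Resolution:
    """Decide the conflict outcome for a single tracked file."""
    if local_changed and online_changed:
        # Both sides diverged since the last sync: keep both files to
        # prevent local data loss (the local copy is renamed first)
        return Resolution.PRESERVE_BOTH
    if local_changed:
        return Resolution.UPLOAD
    if online_changed:
        return Resolution.DOWNLOAD
    return Resolution.IN_SYNC
```

Note that with `--resync` there is no recorded last-sync state, which is why the online copy is unconditionally treated as authoritative in that case.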
### Local First Operational Modes - Conflict Handling
#### Scenario
1. Create a local file
2. Perform a sync with Microsoft OneDrive using `onedrive --sync --local-first`
3. Modify file locally with different data|contents
4. Modify file online with different data|contents
5. Perform a sync with Microsoft OneDrive using `onedrive --sync --local-first`
![conflict_handling_local-first_default](./puml/conflict_handling_local-first_default.png)
#### Evidence of Conflict Handling
```
Reading configuration file: /home/alex/.config/onedrive/config
...
Using IPv4 and IPv6 (if configured) for all network operations
Checking Application Version ...
...
Sync Engine Initialised with new Onedrive API instance
All application operations will be performed in the configured local 'sync_dir' directory: /home/alex/OneDrive
Performing a database consistency and integrity check on locally stored data
Processing DB entries for this Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing ~/OneDrive
The directory has not changed
Processing α
The directory has not changed
...
The file has not changed
Processing เอกสาร
The directory has not changed
Processing 1.txt
Local file time discrepancy detected: 1.txt
The file content has changed locally and has a newer timestamp, thus needs to be uploaded to OneDrive
Changed local items to upload to OneDrive: 1
The local item is out-of-sync with OneDrive, renaming to preserve existing file and prevent local data loss: 1.txt -> 1-onedrive-client-dev.txt
Uploading new file 1-onedrive-client-dev.txt ... done.
Scanning the local file system '~/OneDrive' for new data to upload
...
Fetching /delta response from the OneDrive API for Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 3
Finished processing /delta JSON response from the OneDrive API
Processing 2 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Number of items to download from OneDrive: 1
Downloading file ./1.txt ... done
Sync with Microsoft OneDrive is complete
Waiting for all internal threads to complete before exiting application
```
### Local First Operational Modes - Conflict Handling with --resync
#### Scenario
1. Create a local file
2. Perform a sync with Microsoft OneDrive using `onedrive --sync --local-first`
3. Modify file locally with different data|contents
4. Modify file online with different data|contents
5. Perform a sync with Microsoft OneDrive using `onedrive --sync --local-first --resync`
![conflict_handling_local-first_resync](./puml/conflict_handling_local-first_resync.png)
#### Evidence of Conflict Handling
```
...
The usage of --resync will delete your local 'onedrive' client state, thus no record of your current 'sync status' will exist.
This has the potential to overwrite local versions of files with perhaps older versions of documents downloaded from OneDrive, resulting in local data loss.
If in doubt, backup your local data before using --resync
Are you sure you wish to proceed with --resync? [Y/N] y
Deleting the saved application sync status ...
Using IPv4 and IPv6 (if configured) for all network operations
...
Sync Engine Initialised with new Onedrive API instance
All application operations will be performed in the configured local 'sync_dir' directory: /home/alex/OneDrive
Performing a database consistency and integrity check on locally stored data
Processing DB entries for this Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing ~/OneDrive
The directory has not changed
Scanning the local file system '~/OneDrive' for new data to upload
Skipping item - excluded by sync_list config: ./random_25k_files
OneDrive Client requested to create this directory online: ./α
The requested directory to create was found on OneDrive - skipping creating the directory: ./α
...
New items to upload to OneDrive: 9
Total New Data to Upload: 49 KB
...
The file we are attempting to upload as a new file already exists on Microsoft OneDrive: ./1.txt
Skipping uploading this item as a new file, will upload as a modified file (online file already exists): ./1.txt
The local item is out-of-sync with OneDrive, renaming to preserve existing file and prevent local data loss: ./1.txt -> ./1-onedrive-client-dev.txt
Uploading new file ./1-onedrive-client-dev.txt ... done.
Fetching /delta response from the OneDrive API for Drive ID: b!bO8V7s9SSk6r7mWHpIjURotN33W1W2tEv3OXV_oFIdQimEdOHR-1So7CqeT1MfHA
Processing API Response Bundle: 1 - Quantity of 'changes|items' in this bundle to process: 15
Finished processing /delta JSON response from the OneDrive API
Processing 14 applicable changes and items received from Microsoft OneDrive
Processing OneDrive JSON item batch [1/1] to ensure consistent local state
Number of items to download from OneDrive: 1
Downloading file ./1.txt ... done
Sync with Microsoft OneDrive is complete
Waiting for all internal threads to complete before exiting application
```
## Client Functional Component Architecture Relationships
The diagram below shows the main functional relationships between the application's code components, and how these map to the relevant code modules within this application:
![Functional Code Components](./puml/code_functional_component_relationships.png)
## Database Schema
The diagram below shows the database schema that is used within the application:
![Database Schema](./puml/database_schema.png)


View file

@ -12,23 +12,25 @@ Distribution packages may be of an older release when compared to the latest rel
|---------------------------------|------------------------------------------------------------------------------|:---------------:|:----:|:------:|:-----:|:-------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Alpine Linux | [onedrive](https://pkgs.alpinelinux.org/packages?name=onedrive&branch=edge) |<a href="https://pkgs.alpinelinux.org/packages?name=onedrive&branch=edge"><img src="https://repology.org/badge/version-for-repo/alpine_edge/onedrive.svg?header=" alt="Alpine Linux Edge package" width="46" height="20"></a>|❌|✔|❌|✔ | |
| Arch Linux<br><br>Manjaro Linux | [onedrive-abraunegg](https://aur.archlinux.org/packages/onedrive-abraunegg/) |<a href="https://aur.archlinux.org/packages/onedrive-abraunegg"><img src="https://repology.org/badge/version-for-repo/aur/onedrive-abraunegg.svg?header=" alt="AUR package" width="46" height="20"></a>|✔|✔|✔|✔ | Install via: `pamac build onedrive-abraunegg` from the Arch Linux User Repository (AUR)<br><br>**Note:** You must first install 'base-devel' as this is a pre-requisite for using the AUR<br><br>**Note:** If asked regarding a provider for 'd-runtime' and 'd-compiler', select 'liblphobos' and 'ldc'<br><br>**Note:** System must have at least 1GB of memory & 1GB swap space
| Debian 11 | [onedrive](https://packages.debian.org/bullseye/source/onedrive) |<a href="https://packages.debian.org/bullseye/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_11/onedrive.svg?header=" alt="Debian 11 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories<br><br>It is recommended that for Debian 11 that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Debian 12 | [onedrive](https://packages.debian.org/bookworm/source/onedrive) |<a href="https://packages.debian.org/bookworm/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_12/onedrive.svg?header=" alt="Debian 12 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories<br><br>It is recommended that for Debian 12 that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| CentOS 8 | [onedrive](https://koji.fedoraproject.org/koji/packageinfo?packageID=26044) |<a href="https://koji.fedoraproject.org/koji/packageinfo?packageID=26044"><img src="https://repology.org/badge/version-for-repo/epel_8/onedrive.svg?header=" alt="CentOS 8 package" width="46" height="20"></a>|❌|✔|❌|✔| **Note:** You must install the EPEL Repository first |
| CentOS 9 | [onedrive](https://koji.fedoraproject.org/koji/packageinfo?packageID=26044) |<a href="https://koji.fedoraproject.org/koji/packageinfo?packageID=26044"><img src="https://repology.org/badge/version-for-repo/epel_9/onedrive.svg?header=" alt="CentOS 9 package" width="46" height="20"></a>|❌|✔|❌|✔| **Note:** You must install the EPEL Repository first |
| Debian 11 | [onedrive](https://packages.debian.org/bullseye/source/onedrive) |<a href="https://packages.debian.org/bullseye/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_11/onedrive.svg?header=" alt="Debian 11 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Debian 11 that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Debian 12 | [onedrive](https://packages.debian.org/bookworm/source/onedrive) |<a href="https://packages.debian.org/bookworm/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_12/onedrive.svg?header=" alt="Debian 12 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Debian 12 that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Debian Sid | [onedrive](https://packages.debian.org/sid/onedrive) |<a href="https://packages.debian.org/sid/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_unstable/onedrive.svg?header=" alt="Debian Sid package" width="46" height="20"></a>|✔|✔|✔|✔| |
| Fedora | [onedrive](https://koji.fedoraproject.org/koji/packageinfo?packageID=26044) |<a href="https://koji.fedoraproject.org/koji/packageinfo?packageID=26044"><img src="https://repology.org/badge/version-for-repo/fedora_rawhide/onedrive.svg?header=" alt="Fedora Rawhide package" width="46" height="20"></a>|✔|✔|✔|✔| |
| Gentoo | [onedrive](https://gpo.zugaina.org/net-misc/onedrive) | No API Available |✔|✔|❌|❌| |
| Homebrew | [onedrive](https://formulae.brew.sh/formula/onedrive) | <a href="https://formulae.brew.sh/formula/onedrive"><img src="https://repology.org/badge/version-for-repo/homebrew/onedrive.svg?header=" alt="Homebrew package" width="46" height="20"></a> |❌|✔|❌|❌| |
| Linux Mint 20.x | [onedrive](https://community.linuxmint.com/software/view/onedrive) |<a href="https://community.linuxmint.com/software/view/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_20_04/onedrive.svg?header=" alt="Ubuntu 20.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Linux Mint Repositories<br><br>It is recommended that for Linux Mint that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Linux Mint 21.x | [onedrive](https://community.linuxmint.com/software/view/onedrive) |<a href="https://community.linuxmint.com/software/view/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_22_04/onedrive.svg?header=" alt="Ubuntu 22.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Linux Mint Repositories<br><br>It is recommended that for Linux Mint that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Linux Mint 20.x | [onedrive](https://community.linuxmint.com/software/view/onedrive) |<a href="https://community.linuxmint.com/software/view/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_20_04/onedrive.svg?header=" alt="Ubuntu 20.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Linux Mint Repositories as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Linux Mint that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Linux Mint 21.x | [onedrive](https://community.linuxmint.com/software/view/onedrive) |<a href="https://community.linuxmint.com/software/view/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_22_04/onedrive.svg?header=" alt="Ubuntu 22.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Linux Mint Repositories as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Linux Mint that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| NixOS | [onedrive](https://search.nixos.org/packages?channel=20.09&from=0&size=50&sort=relevance&query=onedrive)|<a href="https://search.nixos.org/packages?channel=20.09&from=0&size=50&sort=relevance&query=onedrive"><img src="https://repology.org/badge/version-for-repo/nix_unstable/onedrive.svg?header=" alt="nixpkgs unstable package" width="46" height="20"></a>|❌|✔|❌|❌| Use package `onedrive` either by adding it to `configuration.nix` or by using the command `nix-env -iA <channel name>.onedrive`. This does not install a service. To install a service, use unstable channel (will stabilize in 20.09) and add `services.onedrive.enable=true` in `configuration.nix`. You can also add a custom package using the `services.onedrive.package` option (recommended since package lags upstream). Enabling the service installs a default package too (based on the channel). You can also add multiple onedrive accounts trivially, see [documentation](https://github.com/NixOS/nixpkgs/pull/77734#issuecomment-575874225). |
| OpenSuSE | [onedrive](https://software.opensuse.org/package/onedrive) |<a href="https://software.opensuse.org/package/onedrive"><img src="https://repology.org/badge/version-for-repo/opensuse_network_tumbleweed/onedrive.svg?header=" alt="openSUSE Tumbleweed package" width="46" height="20"></a>|✔|✔|❌|❌| |
| OpenSuSE Build Service | [onedrive](https://build.opensuse.org/package/show/home:npreining:debian-ubuntu-onedrive/onedrive) | No API Available |✔|✔|✔|✔| Package Build Service for Debian and Ubuntu |
| Raspbian | [onedrive](https://archive.raspbian.org/raspbian/pool/main/o/onedrive/) |<a href="https://archive.raspbian.org/raspbian/pool/main/o/onedrive/"><img src="https://repology.org/badge/version-for-repo/raspbian_stable/onedrive.svg?header=" alt="Raspbian Stable package" width="46" height="20"></a> |❌|❌|✔|✔| **Note:** Do not install from Raspbian Package Repositories<br><br>It is recommended that for Raspbian that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Raspbian | [onedrive](https://archive.raspbian.org/raspbian/pool/main/o/onedrive/) |<a href="https://archive.raspbian.org/raspbian/pool/main/o/onedrive/"><img src="https://repology.org/badge/version-for-repo/raspbian_stable/onedrive.svg?header=" alt="Raspbian Stable package" width="46" height="20"></a> |❌|❌|✔|✔| **Note:** Do not install from Raspbian Package Repositories as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Raspbian that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Slackware | [onedrive](https://slackbuilds.org/result/?search=onedrive&sv=) |<a href="https://slackbuilds.org/result/?search=onedrive&sv="><img src="https://repology.org/badge/version-for-repo/slackbuilds/onedrive.svg?header=" alt="SlackBuilds package" width="46" height="20"></a>|✔|✔|❌|❌| |
| Solus | [onedrive](https://dev.getsol.us/search/query/FB7PIf1jG9Z9/#R) |<a href="https://dev.getsol.us/search/query/FB7PIf1jG9Z9/#R"><img src="https://repology.org/badge/version-for-repo/solus/onedrive.svg?header=" alt="Solus package" width="46" height="20"></a>|✔|✔|❌|❌| |
| Ubuntu 20.04 | [onedrive](https://packages.ubuntu.com/focal/onedrive) |<a href="https://packages.ubuntu.com/focal/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_20_04/onedrive.svg?header=" alt="Ubuntu 20.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe<br><br>It is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 22.04 | [onedrive](https://packages.ubuntu.com/jammy/onedrive) |<a href="https://packages.ubuntu.com/jammy/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_22_04/onedrive.svg?header=" alt="Ubuntu 22.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe<br><br>It is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 23.04 | [onedrive](https://packages.ubuntu.com/lunar/onedrive) |<a href="https://packages.ubuntu.com/lunar/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_23_04/onedrive.svg?header=" alt="Ubuntu 23.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe<br><br>It is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 20.04 | [onedrive](https://packages.ubuntu.com/focal/onedrive) |<a href="https://packages.ubuntu.com/focal/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_20_04/onedrive.svg?header=" alt="Ubuntu 20.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 22.04 | [onedrive](https://packages.ubuntu.com/jammy/onedrive) |<a href="https://packages.ubuntu.com/jammy/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_22_04/onedrive.svg?header=" alt="Ubuntu 22.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 23.04 | [onedrive](https://packages.ubuntu.com/lunar/onedrive) |<a href="https://packages.ubuntu.com/lunar/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_23_04/onedrive.svg?header=" alt="Ubuntu 23.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe as the package is obsolete and is not supported<br><br>For a supported application version, it is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Void Linux | [onedrive](https://voidlinux.org/packages/?arch=x86_64&q=onedrive) |<a href="https://voidlinux.org/packages/?arch=x86_64&q=onedrive"><img src="https://repology.org/badge/version-for-repo/void_x86_64/onedrive.svg?header=" alt="Void Linux x86_64 package" width="46" height="20"></a>|✔|✔|❌|❌| |
#### Important information for all Ubuntu and Ubuntu based distribution users:
@ -102,7 +104,7 @@ For notifications the following is also necessary:
sudo yum install libnotify-devel
```
### Dependencies: Fedora > Version 18 / CentOS 8.x / RHEL 8.x / RHEL 9.x
### Dependencies: Fedora > Version 18 / CentOS 8.x / CentOS 9.x/ RHEL 8.x / RHEL 9.x
```text
sudo dnf groupinstall 'Development Tools'
sudo dnf install libcurl-devel sqlite-devel


View file

@ -0,0 +1,48 @@
@startuml
start
partition "applyPotentiallyChangedItem" {
:Check if existing item path differs from changed item path;
if (itemWasMoved) then (yes)
:Log moving item;
if (destination exists) then (yes)
if (item in database) then (yes)
:Check if item is synced;
if (item is synced) then (yes)
:Log destination is in sync;
else (no)
:Log destination occupied with a different item;
:Backup conflicting file;
note right: Local data loss prevention
endif
else (no)
:Log destination occupied by an un-synced file;
:Backup conflicting file;
note right: Local data loss prevention
endif
endif
:Try to rename path;
if (dry run) then (yes)
:Track as faked id item;
:Track path not renamed;
else (no)
:Rename item;
:Flag item as moved;
if (item is a file) then (yes)
:Set local timestamp to match online;
endif
endif
else (no)
endif
:Check if eTag changed;
if (eTag changed) then (yes)
if (item is a file and not moved) then (yes)
:Decide if to download based on hash;
else (no)
:Update database;
endif
else (no)
:Update database if timestamp differs or in specific operational mode;
endif
}
stop
@enduml


View file

@ -0,0 +1,90 @@
@startuml
start
partition "applyPotentiallyNewLocalItem" {
:Check if path exists;
if (Path exists?) then (yes)
:Log "Path on local disk already exists";
if (Is symbolic link?) then (yes)
:Log "Path is a symbolic link";
if (Can read symbolic link?) then (no)
:Log "Reading symbolic link failed";
:Log "Skipping item - invalid symbolic link";
stop
endif
endif
:Determine if item is in-sync;
note right: Execute 'isItemSynced()' function
if (Is item in-sync?) then (yes)
:Log "Item in-sync";
:Update/Insert item in DB;
stop
else (no)
:Log "Item not in-sync";
:Compare local & remote modification times;
if (Local time > Remote time?) then (yes)
if (ID in database?) then (yes)
:Log "Local file is newer & ID in DB";
:Fetch latest DB record;
if (Times equal?) then (yes)
:Log "Times match, keeping local file";
else (no)
:Log "Local time newer, keeping file";
note right: Online item has an 'older' modified timestamp wise than the local file\nIt is assumed that the local file is the file to keep
endif
stop
else (no)
:Log "Local item not in DB";
if (Bypass data protection?) then (yes)
:Log "WARNING: Data protection disabled";
else (no)
:Safe backup local file;
note right: Local data loss prevention
endif
stop
endif
else (no)
if (Remote time > Local time?) then (yes)
:Log "Remote item is newer";
if (Bypass data protection?) then (yes)
:Log "WARNING: Data protection disabled";
else (no)
:Safe backup local file;
note right: Local data loss prevention
endif
endif
if (Times equal?) then (yes)
note left: Specific handling if timestamp was\nadjusted by isItemSynced()
:Log "Times equal, no action required";
:Update/Insert item in DB;
stop
endif
endif
endif
else (no)
:Handle as potentially new item;
switch (Item type)
case (File)
:Add to download queue;
case (Directory)
:Log "Creating local directory";
if (Dry run?) then (no)
:Create directory & set attributes;
:Save item to DB;
else
:Log "Dry run, faking directory creation";
:Save item to dry-run DB;
endif
case (Unknown)
:Log "Unknown type, no action";
endswitch
endif
}
stop
@enduml


View file

@ -0,0 +1,71 @@
@startuml
start
partition "checkPathAgainstClientSideFiltering" {
:Get localFilePath;
if (Does path exist?) then (no)
:Return false;
stop
endif
if (Check .nosync?) then (yes)
:Check for .nosync file;
if (.nosync found) then (yes)
:Log and return true;
stop
endif
endif
if (Skip dotfiles?) then (yes)
:Check if dotfile;
if (Is dotfile) then (yes)
:Log and return true;
stop
endif
endif
if (Skip symlinks?) then (yes)
:Check if symlink;
if (Is symlink) then (yes)
if (Config says skip?) then (yes)
:Log and return true;
stop
    elseif (Symlink target missing?) then (yes)
:Check if relative link works;
if (Relative link ok) then (no)
:Log and return true;
stop
endif
endif
endif
endif
if (Skip dir or file?) then (yes)
:Check dir or file exclusion;
if (Excluded by config?) then (yes)
:Log and return true;
stop
endif
endif
if (Use sync_list?) then (yes)
:Check sync_list exclusions;
if (Excluded by sync_list?) then (yes)
:Log and return true;
stop
endif
endif
if (Check file size?) then (yes)
:Check for file size limit;
if (File size exceeds limit?) then (yes)
:Log and return true;
stop
endif
endif
:Return false;
}
stop
@enduml


View file

@ -0,0 +1,25 @@
@startuml
participant "OneDrive Client\nfor Linux" as od
participant "libcurl" as lc
participant "Microsoft Authentication Service\n(OAuth 2.0 Endpoint)" as oauth
participant "Microsoft Graph API" as graph
activate od
activate lc
od->oauth: Request access token
activate oauth
oauth-->od: Access token
deactivate oauth
loop API Communication
od->lc: Construct HTTPS request (with token)
activate lc
lc->graph: API Request
activate graph
graph-->lc: API Response
deactivate graph
lc-->od: Process response
deactivate lc
end
@enduml
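The sequence above boils down to two kinds of HTTPS requests: a token request against the OAuth 2.0 endpoint, and bearer-authenticated Graph API calls carried by libcurl. A Python sketch that constructs (but does not send) both requests — the endpoint URLs are Microsoft's public ones, while the function names and the exact parameter set are illustrative:

```python
from urllib.parse import urlencode

TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"
GRAPH_URL = "https://graph.microsoft.com/v1.0"

def build_token_request(client_id, refresh_token):
    # Refresh-token grant: returns (url, form-encoded POST body)
    body = urlencode({
        "client_id": client_id,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    })
    return TOKEN_URL, body

def build_graph_request(access_token, resource="/me/drive/root/delta"):
    # Every Graph call carries the bearer token in the Authorization header
    headers = {"Authorization": "Bearer " + access_token}
    return GRAPH_URL + resource, headers
```

In the diagram's loop, the client keeps reusing the access token for each Graph request until it expires, at which point the token request is replayed.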

Binary file not shown.

After

Width:  |  Height:  |  Size: 119 KiB

View file

@ -0,0 +1,78 @@
@startuml
!define DATABASE_ENTITY(x) entity x
component main {
}
component config {
}
component log {
}
component curlEngine {
}
component util {
}
component onedrive {
}
component syncEngine {
}
component itemdb {
}
component clientSideFiltering {
}
component monitor {
}
component sqlite {
}
component qxor {
}
DATABASE_ENTITY("Database")
main --> config
main --> log
main --> curlEngine
main --> util
main --> onedrive
main --> syncEngine
main --> itemdb
main --> clientSideFiltering
main --> monitor
config --> log
config --> util
clientSideFiltering --> config
clientSideFiltering --> util
clientSideFiltering --> log
syncEngine --> config
syncEngine --> log
syncEngine --> util
syncEngine --> onedrive
syncEngine --> itemdb
syncEngine --> clientSideFiltering
util --> log
util --> config
util --> qxor
util --> curlEngine
sqlite --> log
sqlite -> "Database" : uses
onedrive --> config
onedrive --> log
onedrive --> util
onedrive --> curlEngine
monitor --> config
monitor --> util
monitor --> log
monitor --> clientSideFiltering
monitor .> syncEngine : inotify event
itemdb --> sqlite
itemdb --> util
itemdb --> log
curlEngine --> log
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 64 KiB

View file

@ -0,0 +1,31 @@
@startuml
start
note left: Operational Mode 'onedrive --sync'
:Query OneDrive /delta API for online changes;
note left: This data is considered the 'source-of-truth'\nLocal data should be a 'replica' of this data
:Process received JSON data;
if (JSON item is a file) then (yes)
if (Does the file exist locally) then (yes)
:Compute relevant file hashes;
:Check DB for file record;
if (DB record found) then (yes)
:Compare file hash with DB hash;
if (Is the hash different) then (yes)
:Log that the local file was modified locally since last sync;
:Rename local file to avoid potential local data loss;
note left: Local data loss prevention\nRenamed file will be uploaded as new file
else (no)
endif
else (no)
endif
else (no)
endif
:Download file (as per online JSON item) as required;
else (no)
:Other handling for directories | root objects | deleted items;
endif
:Perform a database consistency and\nintegrity check on locally stored data;
:Scan file system for any new data to upload;
note left: The file that was renamed will be uploaded here
stop
@enduml
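The 'local data loss prevention' step in this and the following flows renames a conflicting local file rather than overwriting it, so the renamed copy can later be uploaded as a new file. A Python sketch of picking a non-clobbering rename target — the suffix format shown here is an assumption for illustration, not the client's actual naming scheme:

```python
import os

def safe_backup_name(path):
    """Pick a rename target that does not clobber an existing file.

    The '-safeBackup-NNNN' suffix is hypothetical; the real client uses
    its own naming scheme. The renamed copy is uploaded as a new file.
    """
    base, ext = os.path.splitext(path)
    n = 1
    candidate = f"{base}-safeBackup-{n:04d}{ext}"
    while os.path.exists(candidate):
        n += 1
        candidate = f"{base}-safeBackup-{n:04d}{ext}"
    return candidate
```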

Binary file not shown.

After

Width:  |  Height:  |  Size: 78 KiB

View file

@ -0,0 +1,35 @@
@startuml
start
note left: Operational Mode 'onedrive --sync --resync'
:Query OneDrive /delta API for online changes;
note left: This data is considered the 'source-of-truth'\nLocal data should be a 'replica' of this data
:Process received JSON data;
if (JSON item is a file) then (yes)
if (Does the file exist locally) then (yes)
note left: In a --resync scenario there are no DB\nrecords that can be used or referenced\nuntil the JSON item is processed and\nadded to the local database cache
if (Can the file be read) then (yes)
:Compute UTC timestamp data from local file and JSON data;
if (timestamps are equal) then (yes)
else (no)
:Log that a local file time discrepancy was detected;
if (Do file hashes match) then (yes)
:Correct the offending timestamp as hashes match;
else (no)
:Local file is technically different;
:Rename local file to avoid potential local data loss;
note left: Local data loss prevention\nRenamed file will be uploaded as new file
endif
endif
else (no)
endif
else (no)
endif
:Download file (as per online JSON item) as required;
else (no)
:Other handling for directories | root objects | deleted items;
endif
:Perform a database consistency and\nintegrity check on locally stored data;
:Scan file system for any new data to upload;
note left: The file that was renamed will be uploaded here
stop
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 99 KiB

View file

@ -0,0 +1,62 @@
@startuml
start
note left: Operational Mode 'onedrive --sync --local-first'
:Perform a database consistency and\nintegrity check on locally stored data;
note left: This data is considered the 'source-of-truth'\nOnline data should be a 'replica' of this data
repeat
:Process each DB record;
if (Is the DB record in sync with the local file) then (yes)
else (no)
:Log reason for discrepancy;
:Flag item to be processed as a modified local file;
endif
repeat while
:Process modified items to upload;
if (Does the local file's DB record match the latest online JSON data) then (yes)
else (no)
:Log that the local file was modified locally since last sync;
:Rename local file to avoid potential local data loss;
note left: Local data loss prevention\nRenamed file will be uploaded as new file
:Upload renamed local file as new file;
endif
:Upload modified file;
:Scan file system for any new data to upload;
:Query OneDrive /delta API for online changes;
:Process received JSON data;
if (JSON item is a file) then (yes)
if (Does the file exist locally) then (yes)
:Compute relevant file hashes;
:Check DB for file record;
if (DB record found) then (yes)
:Compare file hash with DB hash;
if (Is the hash different) then (yes)
:Log that the local file was modified locally since last sync;
:Rename local file to avoid potential local data loss;
note left: Local data loss prevention\nRenamed file will be uploaded as new file
else (no)
endif
else (no)
endif
else (no)
endif
:Download file (as per online JSON item) as required;
else (no)
:Other handling for directories | root objects | deleted items;
endif
stop
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 124 KiB

View file

@ -0,0 +1,70 @@
@startuml
start
note left: Operational Mode 'onedrive --sync --local-first --resync'
:Query OneDrive API and create new database with default root account objects;
:Perform a database consistency and\nintegrity check on locally stored data;
note left: This data is considered the 'source-of-truth'\nOnline data should be a 'replica' of this data\nHowever the database has only 1 record currently
:Scan file system for any new data to upload;
note left: This is where in this specific mode all local\n content is assessed for applicability for\nupload to Microsoft OneDrive
repeat
:For each new local item;
if (Is the item a directory) then (yes)
if (Is Directory found online) then (yes)
:Save directory details from online in local database;
else (no)
:Create directory online;
:Save details in local database;
endif
else (no)
:Flag file as a potentially new item to upload;
endif
repeat while
:Process potential new items to upload;
repeat
:For each potential file to upload;
if (Is File found online) then (yes)
if (Does the online JSON data match local file) then (yes)
:Save details in local database;
else (no)
:Log that the local file was modified locally since last sync;
:Rename local file to avoid potential local data loss;
note left: Local data loss prevention\nRenamed file will be uploaded as new file
:Upload renamed local file as new file;
endif
else (no)
:Upload new file;
endif
repeat while
:Query OneDrive /delta API for online changes;
:Process received JSON data;
if (JSON item is a file) then (yes)
if (Does the file exist locally) then (yes)
:Compute relevant file hashes;
:Check DB for file record;
if (DB record found) then (yes)
:Compare file hash with DB hash;
if (Is the hash different) then (yes)
:Log that the local file was modified locally since last sync;
:Rename local file to avoid potential local data loss;
note left: Local data loss prevention\nRenamed file will be uploaded as new file
else (no)
endif
else (no)
endif
else (no)
endif
:Download file (as per online JSON item) as required;
else (no)
:Other handling for directories | root objects | deleted items;
endif
stop
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 39 KiB

View file

@ -0,0 +1,39 @@
@startuml
class item {
driveId: TEXT
id: TEXT
name: TEXT
remoteName: TEXT
type: TEXT
eTag: TEXT
cTag: TEXT
mtime: TEXT
parentId: TEXT
quickXorHash: TEXT
sha256Hash: TEXT
remoteDriveId: TEXT
remoteParentId: TEXT
remoteId: TEXT
remoteType: TEXT
deltaLink: TEXT
syncStatus: TEXT
size: TEXT
}
note right of item::driveId
PRIMARY KEY (driveId, id)
FOREIGN KEY (driveId, parentId) REFERENCES item
end note
item --|> item : parentId
note "Indexes" as N1
note left of N1
name_idx ON item (name)
remote_idx ON item (remoteDriveId, remoteId)
item_children_idx ON item (driveId, parentId)
selectByPath_idx ON item (name, driveId, parentId)
end note
@enduml
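The class diagram above corresponds to the client's SQLite `item` table. A minimal Python `sqlite3` recreation with a subset of the columns (all columns are `TEXT`, as shown) plus the composite primary key and the indexes from the note:

```python
import sqlite3

# Minimal sketch of the 'item' table from the diagram (columns abbreviated)
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE item (
    driveId  TEXT NOT NULL,
    id       TEXT NOT NULL,
    name     TEXT NOT NULL,
    type     TEXT NOT NULL,
    parentId TEXT,
    PRIMARY KEY (driveId, id),
    FOREIGN KEY (driveId, parentId) REFERENCES item (driveId, id)
);
CREATE INDEX name_idx          ON item (name);
CREATE INDEX item_children_idx ON item (driveId, parentId);
CREATE INDEX selectByPath_idx  ON item (name, driveId, parentId);
""")
db.execute("INSERT INTO item VALUES ('d1', 'root', 'root', 'dir', NULL)")
db.execute("INSERT INTO item VALUES ('d1', 'i2', 'Documents', 'dir', 'root')")
children = db.execute(
    "SELECT name FROM item WHERE driveId = ? AND parentId = ?", ("d1", "root")
).fetchall()
```

The self-referencing foreign key is what lets the client walk an item's ancestry to compute its local path, and `item_children_idx` makes the child lookup above an index scan.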

BIN
docs/puml/downloadFile.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 83 KiB

View file

@ -0,0 +1,63 @@
@startuml
start
partition "Download File" {
:Get item specifics from JSON;
:Calculate item's path;
if (Is item malware?) then (yes)
:Log malware detected;
stop
else (no)
:Check for file size in JSON;
if (File size missing) then (yes)
:Log error;
stop
endif
:Configure hashes for comparison;
if (Hashes missing) then (yes)
:Log error;
stop
endif
if (Does file exist locally?) then (yes)
:Check DB for item;
if (DB hash match?) then (no)
:Log modification; Perform safe backup;
note left: Local data loss prevention
endif
endif
:Check local disk space;
if (Insufficient space?) then (yes)
:Log insufficient space;
stop
else (no)
if (Dry run?) then (yes)
:Fake download process;
else (no)
:Attempt to download file;
if (Download exception occurs?) then (yes)
:Handle exceptions; Retry download or log error;
endif
if (File downloaded successfully?) then (yes)
:Validate download;
if (Validation passes?) then (yes)
:Log success; Update DB;
else (no)
:Log validation failure; Remove file;
endif
else (no)
:Log download failed;
endif
endif
endif
endif
}
stop
@enduml
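The 'Validate download' step compares the received file against the metadata carried in the JSON item. A hedged Python sketch — size is checked first because it is cheap, then the full content is hashed; the names and the hex hash encoding are illustrative (the real client also supports quickXorHash):

```python
import hashlib
import os

def validate_download(path, expected_size, expected_sha256_hex):
    """Post-download integrity check (illustrative names).

    On failure the caller logs the mismatch and removes the file, as in
    the flow above, so a corrupt download is never recorded in the DB.
    """
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_sha256_hex.lower()
```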

Binary file not shown.

After

Width:  |  Height:  |  Size: 81 KiB

View file

@ -0,0 +1,55 @@
@startuml
participant "OneDrive Client\nfor Linux" as Client
participant "Microsoft OneDrive\nAPI" as API
== Access Token Validation ==
Client -> Client: Validate access and\nexisting access token\nRefresh if needed
== Query Microsoft OneDrive /delta API ==
Client -> API: Query /delta API
API -> Client: JSON responses
== Process JSON Responses ==
loop for each JSON response
Client -> Client: Determine if JSON is 'root'\nor 'deleted' item\nElse, push into temporary array for further processing
alt if 'root' or 'deleted'
Client -> Client: Process 'root' or 'deleted' items
else
Client -> Client: Evaluate against 'Client Side Filtering' rules
alt if unwanted
Client -> Client: Discard JSON
else
Client -> Client: Process JSON (create dir/download file)
Client -> Client: Save in local database cache
end
end
end
== Local Cache Database Processing for Data Integrity ==
Client -> Client: Process local cache database\nto check local data integrity and for differences
alt if difference found
Client -> API: Upload file/folder change including deletion
API -> Client: Response with item metadata
Client -> Client: Save response to local cache database
end
== Local Filesystem Scanning ==
Client -> Client: Scan local filesystem\nfor new files/folders
loop for each new item
Client -> Client: Check item against 'Client Side Filtering' rules
alt if item passes filtering
Client -> API: Upload new file/folder change including deletion
API -> Client: Response with item metadata
Client -> Client: Save response in local\ncache database
else
Client -> Client: Discard item\n(Does not meet filtering criteria)
end
end
== Final Data True-Up ==
Client -> API: Query /delta link for true-up
API -> Client: Process further online JSON changes if required
@enduml
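The '/delta' querying in the sequence above follows the Graph API's paging convention: keep following `@odata.nextLink` pages until a response carries `@odata.deltaLink`, which is persisted and replayed for the final true-up. A Python sketch with hypothetical callables:

```python
def process_delta_feed(fetch_page, apply_item):
    """Drain the /delta feed (sketch; 'fetch_page' and 'apply_item' are
    hypothetical callables standing in for the API and sync engine)."""
    link = None  # None = start a fresh delta query
    while True:
        page = fetch_page(link)
        for item in page.get("value", []):
            apply_item(item)  # root/deleted handling, filtering, download
        if "@odata.nextLink" in page:
            link = page["@odata.nextLink"]  # more pages to fetch
        else:
            # End of feed: persist the deltaLink for the next sync cycle
            return page.get("@odata.deltaLink")
```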

Binary file not shown.

After

Width:  |  Height:  |  Size: 109 KiB

View file

@ -0,0 +1,79 @@
@startuml
start
partition "Is item in sync" {
:Check if path exists;
if (path does not exist) then (no)
:Return false;
stop
else (yes)
endif
:Identify item type;
switch (item type)
case (file)
:Check if path is a file;
if (path is not a file) then (no)
:Log "item is a directory but should be a file";
:Return false;
stop
else (yes)
endif
:Attempt to read local file;
if (file is unreadable) then (no)
:Log "file cannot be read";
:Return false;
stop
else (yes)
endif
:Get local and input item modified time;
note right: The 'input item' could be a database reference object, or the online JSON object\nas provided by the Microsoft OneDrive API
:Reduce time resolution to seconds;
if (localModifiedTime == itemModifiedTime) then (yes)
:Return true;
stop
else (no)
:Log time discrepancy;
endif
:Check if file hash is the same;
if (hash is the same) then (yes)
:Log "hash match, correcting timestamp";
if (local time > item time) then (yes)
if (download only mode) then (no)
:Correct timestamp online if not dryRun;
else (yes)
:Correct local timestamp if not dryRun;
endif
else (no)
:Correct local timestamp if not dryRun;
endif
:Return false;
note right: Specifically return false here as we performed a time correction\nApplication logic will then perform additional handling based on this very specific response.
stop
else (no)
:Log "different hash";
:Return false;
stop
endif
case (dir or remote)
:Check if path is a directory;
if (path is a directory) then (yes)
:Return true;
stop
else (no)
:Log "item is a file but should be a directory";
:Return false;
stop
endif
case (unknown)
:Return true but do not sync;
stop
endswitch
}
@enduml
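The file branch of the flow above effectively classifies each file into one of three states. A compact Python sketch (all names illustrative); note the deliberate 'return false after a timestamp correction' behaviour called out in the diagram's note:

```python
def classify_file_sync_state(local_mtime, item_mtime, local_hash, item_hash):
    """Sketch of the 'file' branch above; names are illustrative.

    Timestamps are reduced to whole-second resolution before comparing,
    since filesystems and OneDrive differ in sub-second precision.
    """
    if int(local_mtime) == int(item_mtime):
        return "in-sync"
    if local_hash == item_hash:
        # Content identical, only the clock drifted: the client corrects
        # the offending timestamp but still reports 'not in sync' so the
        # caller can apply its correction-specific handling.
        return "timestamp-corrected"
    return "different"  # genuinely modified: triggers safe rename/upload
```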

Binary file not shown.

After

Width:  |  Height:  |  Size: 158 KiB

View file

@ -0,0 +1,81 @@
@startuml
start
:Validate access and existing access token\nRefresh if needed;
:Query /delta API;
note right: Query Microsoft OneDrive /delta API
:Receive JSON responses;
:Process JSON Responses;
partition "Process /delta JSON Responses" {
while (for each JSON response) is (yes)
:Determine if JSON is 'root'\nor 'deleted' item;
if ('root' or 'deleted') then (yes)
:Process 'root' or 'deleted' items;
if ('root' object) then (yes)
:Process 'root' JSON;
else (no)
if (Is 'deleted' object in sync) then (yes)
:Process deletion of the local item;
else (no)
:Rename local file as it is not in sync;
note right: Deletion event conflict handling\nLocal data loss prevention
endif
endif
else (no)
:Evaluate against 'Client Side Filtering' rules;
if (unwanted) then (yes)
:Discard JSON;
else (no)
:Process JSON (create dir/download file);
if (Is the 'JSON' item in the local cache) then (yes)
:Process JSON as a potentially changed local item;
note left: Run 'applyPotentiallyChangedItem' function
else (no)
:Process JSON as potentially new local item;
note right: Run 'applyPotentiallyNewLocalItem' function
endif
:Process objects in download queue;
:Download File;
note left: Download file from Microsoft OneDrive (Multi Threaded Download)
:Save in local database cache;
endif
endif
endwhile
}
partition "Perform data integrity check based on local cache database" {
:Process local cache database\nto check local data integrity and for differences;
if (difference found) then (yes)
:Upload file/folder change including deletion;
note right: Upload local change to Microsoft OneDrive
:Receive response with item metadata;
:Save response to local cache database;
else (no)
endif
}
partition "Local Filesystem Scanning" {
:Scan local filesystem\nfor new files/folders;
while (for each new item) is (yes)
:Check item against 'Client Side Filtering' rules;
if (item passes filtering) then (yes)
:Upload new file/folder change including deletion;
note right: Upload to Microsoft OneDrive
:Receive response with item metadata;
:Save response in local\ncache database;
else (no)
:Discard item\n(Does not meet filtering criteria);
endif
endwhile
}
partition "Final True-Up" {
:Query /delta link for true-up;
note right: Final Data True-Up
:Process further online JSON changes if required;
}
stop
@enduml

BIN
docs/puml/uploadFile.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 96 KiB

62
docs/puml/uploadFile.puml Normal file
View file

@ -0,0 +1,62 @@
@startuml
start
partition "Upload File" {
:Log "fileToUpload";
:Check database for parent path;
if (parent path found?) then (yes)
if (drive ID not empty?) then (yes)
:Proceed;
else (no)
:Use defaultDriveId;
endif
else (no)
stop
endif
:Check if file exists locally;
if (file exists?) then (yes)
:Read local file;
if (can read file?) then (yes)
if (parent path in DB?) then (yes)
:Get file size;
if (file size <= max?) then (yes)
:Check available space on OneDrive;
if (space available?) then (yes)
:Check if file exists on OneDrive;
if (file exists online?) then (yes)
:Save online metadata only;
if (local file newer?) then (yes)
:Local file is newer;
:Upload file as changed local file;
else (no)
:Remote file is newer;
:Perform safe backup;
note right: Local data loss prevention
:Upload renamed file as new file;
endif
else (no)
:Attempt upload;
endif
else (no)
:Log "Insufficient space";
endif
else (no)
:Log "File too large";
endif
else (no)
:Log "Parent path issue";
endif
else (no)
:Log "Cannot read file";
endif
else (no)
:Log "File disappeared locally";
endif
:Upload success or failure;
if (upload failed?) then (yes)
:Log failure;
else (no)
:Update cache;
endif
}
stop
@enduml

Binary file not shown.

After

Width:  |  Height:  |  Size: 83 KiB

View file

@ -0,0 +1,56 @@
@startuml
start
partition "Upload Modified File" {
:Initialize API Instance;
:Check for Dry Run;
if (Is Dry Run?) then (yes)
:Create Fake Response;
else (no)
:Get Current Online Data;
if (Error Fetching Data) then (yes)
:Handle Errors;
if (Retryable Error?) then (yes)
:Retry Fetching Data;
detach
else (no)
:Log and Display Error;
endif
endif
if (filesize > 0 and valid latest online data) then (yes)
if (is online file newer) then (yes)
:Log that online is newer;
:Perform safe backup;
note left: Local data loss prevention
:Upload renamed local file as new file;
endif
endif
:Determine Upload Method;
if (Use Simple Upload?) then (yes)
:Perform Simple Upload;
if (Upload Error) then (yes)
:Handle Upload Errors and Retries;
if (Retryable Upload Error?) then (yes)
:Retry Upload;
detach
else (no)
:Log and Display Upload Error;
endif
endif
else (no)
:Create Upload Session;
:Perform Upload via Session;
if (Session Upload Error) then (yes)
:Handle Session Upload Errors and Retries;
if (Retryable Session Error?) then (yes)
:Retry Session Upload;
detach
else (no)
:Log and Display Session Error;
endif
endif
endif
endif
:Finalize;
}
stop
@enduml
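Both upload flows distinguish retryable errors from fatal ones. A generic Python sketch of that retry loop with exponential backoff — the status-code set, attempt limit and delays are illustrative, not the client's actual values:

```python
import time

RETRYABLE = {408, 429, 500, 502, 503, 504}  # illustrative status codes

def with_retries(operation, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run 'operation' (returns (http_status, result)); retry retryable
    failures with exponential backoff, raise immediately on the rest."""
    for attempt in range(1, max_attempts + 1):
        status, result = operation()
        if status < 400:
            return result
        if status not in RETRYABLE or attempt == max_attempts:
            raise RuntimeError(f"upload failed with HTTP {status}")
        sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
```

The injectable `sleep` parameter is just a convenience for testing the backoff without real delays.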

View file

@ -634,7 +634,7 @@ By default, the location where your Microsoft OneDrive data is stored, is within
To change this location, the application configuration option 'sync_dir' is used to specify a new local directory where your Microsoft OneDrive data should be stored.
**Important Note:** If your `sync_dir` is pointing to a network mount point (a network share via NFS, Windows Network Share, Samba Network Share) these types of network mount points do not support 'inotify', thus tracking real-time changes via inotify of local files is not possible when using 'Monitor Mode'. Local filesystem changes will be replicated between the local filesystem and Microsoft OneDrive based on the `monitor_interval` value. This is not something (inotify support for NFS, Samba) that this client can fix.
**Critical Advisory:** Please be aware that if you designate a network mount point (such as NFS, Windows Network Share, or Samba Network Share) as your `sync_dir`, this setup inherently lacks 'inotify' support. Support for 'inotify' is essential for real-time tracking of file changes, which means that the client's 'Monitor Mode' cannot immediately detect changes in files located on these network shares. Instead, synchronisation between your local filesystem and Microsoft OneDrive will occur at intervals specified by the `monitor_interval` setting. This limitation regarding 'inotify' support on network mount points like NFS or Samba is beyond the control of this client.
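For example, a minimal `config` fragment for a network-mounted `sync_dir` (the path and interval values are illustrative):

```
# ~/.config/onedrive/config
# inotify events do not traverse NFS/Samba mounts, so changes under this
# sync_dir are only detected once per monitor_interval (seconds)
sync_dir = "/mnt/nas/OneDrive"
monitor_interval = "300"
```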
### How to change what file and directory permissions are assigned to data that is downloaded from Microsoft OneDrive?
The following are the application default permissions for any new directory or file that is created locally when downloaded from Microsoft OneDrive:

View file

@ -14,22 +14,21 @@ Originally derived as a 'fork' from the [skilion](https://github.com/skilion/one
This client represents a 100% re-imagining of the original work, addressing numerous notable bugs and issues while incorporating a significant array of new features. This client has been under active development since mid-2018.
## Features
* Supports 'Client Side Filtering' rules to determine what should be synced with Microsoft OneDrive
* Sync State Caching
* Real-Time local file monitoring with inotify
* Real-Time syncing of remote updates via webhooks
* File upload / download validation to ensure data integrity
* Resumable uploads
* Support OneDrive for Business (part of Office 365)
* Shared Folder support for OneDrive Personal and OneDrive Business accounts
* SharePoint / Office365 Shared Libraries
* Desktop notifications via libnotify
* Dry-run capability to test configuration changes
* Prevent major OneDrive accidental data deletion after configuration change
* Support for National cloud deployments (Microsoft Cloud for US Government, Microsoft Cloud Germany, Azure and Office 365 operated by 21Vianet in China)
* Supports single & multi-tenanted applications
* Supports rate limiting of traffic
* Supports multi-threaded uploads and downloads
* Compatible with OneDrive Personal, OneDrive for Business including accessing Microsoft SharePoint Libraries
* Provides rules for client-side filtering to select data for syncing with Microsoft OneDrive accounts
* Caches sync state for efficiency
* Supports a dry-run option for safe configuration testing
* Validates file transfers to ensure data integrity
* Monitors local files in real-time using inotify
* Supports interrupted uploads for completion at a later time
* Capability to sync remote updates immediately via webhooks
* Enhanced synchronisation speed with multi-threaded file transfers
* Manages traffic bandwidth use with rate limiting
* Supports seamless access to shared folders and files across both OneDrive Personal and OneDrive for Business accounts
* Supports national cloud deployments including Microsoft Cloud for US Government, Microsoft Cloud Germany and Azure and Office 365 operated by 21Vianet in China
* Supports sending desktop alerts using libnotify
* Protects against significant data loss on OneDrive after configuration changes
* Works with both single and multi-tenant applications
## What's missing
* Ability to encrypt/decrypt files on-the-fly when uploading/downloading files from OneDrive
@ -68,8 +67,8 @@ Refer to [docs/install.md](https://github.com/abraunegg/onedrive/blob/master/doc
### Configuration and Usage
Refer to [docs/usage.md](https://github.com/abraunegg/onedrive/blob/master/docs/usage.md)
### Configure OneDrive Business Shared Folders
Refer to [docs/business-shared-folders.md](https://github.com/abraunegg/onedrive/blob/master/docs/business-shared-folders.md)
### Configure OneDrive Business Shared Items
Refer to [docs/business-shared-items.md](https://github.com/abraunegg/onedrive/blob/master/docs/business-shared-items.md)
### Configure SharePoint / Office 365 Shared Libraries (Business or Education)
Refer to [docs/sharepoint-libraries.md](https://github.com/abraunegg/onedrive/blob/master/docs/sharepoint-libraries.md)

View file

@ -683,6 +683,7 @@ enum long defaultMaxContentLength = 5_000_000;
public import std.string;
public import std.stdio;
public import std.conv;
import std.concurrency;
import std.uri;
import std.uni;
import std.algorithm.comparison;
@ -3910,14 +3911,16 @@ struct RequestServer {
If you want the forking worker process server, you do need to compile with the embedded_httpd_processes config though.
+/
void serveEmbeddedHttp(alias fun, CustomCgi = Cgi, long maxContentLength = defaultMaxContentLength)(ThisFor!fun _this) {
shared void serveEmbeddedHttp(alias fun, T, CustomCgi = Cgi, long maxContentLength = defaultMaxContentLength)(shared T _this) {
globalStopFlag = false;
static if(__traits(isStaticFunction, fun))
alias funToUse = fun;
void funToUse(CustomCgi cgi) {
fun(_this, cgi);
}
else
void funToUse(CustomCgi cgi) {
static if(__VERSION__ > 2097)
__traits(child, _this, fun)(cgi);
__traits(child, _inst_this, fun)(_inst_this, cgi);
else static assert(0, "Not implemented in your compiler version!");
}
auto manager = new ListeningConnectionManager(listeningHost, listeningPort, &doThreadHttpConnection!(CustomCgi, funToUse), null, useFork, numberOfThreads);

View file

@ -20,7 +20,6 @@ class ClientSideFiltering {
// Class variables
ApplicationConfig appConfig;
string[] paths;
string[] businessSharedItemsList;
Regex!char fileMask;
Regex!char directoryMask;
bool skipDirStrictMatch = false;
@ -41,11 +40,6 @@ class ClientSideFiltering {
loadSyncList(appConfig.syncListFilePath);
}
// Load the Business Shared Items file if it exists
if (exists(appConfig.businessSharedItemsFilePath)){
loadBusinessSharedItems(appConfig.businessSharedItemsFilePath);
}
// Configure skip_dir, skip_file, skip-dir-strict-match & skip_dotfiles from config entries
// Handle skip_dir configuration in config file
addLogEntry("Configuring skip_dir ...", ["debug"]);
@ -91,7 +85,6 @@ class ClientSideFiltering {
void shutdown() {
object.destroy(appConfig);
object.destroy(paths);
object.destroy(businessSharedItemsList);
object.destroy(fileMask);
object.destroy(directoryMask);
}
@ -109,19 +102,6 @@ class ClientSideFiltering {
file.close();
}
// load business_shared_folders file
void loadBusinessSharedItems(string filepath) {
// open file as read only
auto file = File(filepath, "r");
auto range = file.byLine();
foreach (line; range) {
// Skip comments in file
if (line.length == 0 || line[0] == ';' || line[0] == '#') continue;
businessSharedItemsList ~= buildNormalizedPath(line);
}
file.close();
}
// Configure the regex that will be used for 'skip_file'
void setFileMask(const(char)[] mask) {
fileMask = wild2regex(mask);

View file

@ -41,6 +41,8 @@ class ApplicationConfig {
immutable string defaultLogFileDir = "/var/log/onedrive";
// - Default configuration directory
immutable string defaultConfigDirName = "~/.config/onedrive";
// - Default 'OneDrive Business Shared Files' Folder Name
immutable string defaultBusinessSharedFilesDirectoryName = "Files Shared With Me";
// Microsoft Requirements
// - Default Application ID (abraunegg)
@ -106,7 +108,6 @@ class ApplicationConfig {
bool debugLogging = false;
long verbosityCount = 0;
// Was the application just authorised - paste of response uri
bool applicationAuthorizeResponseUri = false;
@ -121,6 +122,7 @@ class ApplicationConfig {
// Store the 'session_upload.CRC32-HASH' file path
string uploadSessionFilePath = "";
// API initialisation flags
bool apiWasInitialised = false;
bool syncEngineWasInitialised = false;
@ -161,25 +163,23 @@ class ApplicationConfig {
private string applicableConfigFilePath = "";
// - Store the 'sync_list' file path
string syncListFilePath = "";
// - Store the 'business_shared_items' file path
string businessSharedItemsFilePath = "";
// OneDrive Business Shared File handling - what directory will be used?
string configuredBusinessSharedFilesDirectoryName = "";
// Hash files so that we can detect when the configuration has changed, in items that will require a --resync
private string configHashFile = "";
private string configBackupFile = "";
private string syncListHashFile = "";
private string businessSharedItemsHashFile = "";
// Store the actual 'runtime' hash
private string currentConfigHash = "";
private string currentSyncListHash = "";
private string currentBusinessSharedItemsHash = "";
// Store the previous config files hash values (file contents)
private string previousConfigHash = "";
private string previousSyncListHash = "";
private string previousBusinessSharedItemsHash = "";
// Store items that come in from the 'config' file, otherwise these need to be set to the defaults
private string configFileSyncDir = defaultSyncDir;
private string configFileSkipFile = defaultSkipFile;
@ -197,7 +197,6 @@ class ApplicationConfig {
string[string] stringValues;
long[string] longValues;
bool[string] boolValues;
bool shellEnvironmentSet = false;
// Initialise the application configuration
@ -275,7 +274,7 @@ class ApplicationConfig {
longValues["ip_protocol_version"] = defaultIpProtocol; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
// Number of concurrent threads
longValues["threads"] = defaultConcurrentThreads; // Default is 8, user can increase or decrease
longValues["threads"] = defaultConcurrentThreads; // Default is 8, user can increase to max of 16 or decrease
// - Do we wish to upload only?
boolValues["upload_only"] = false;
@ -469,9 +468,6 @@ class ApplicationConfig {
// - What is the full path for the system 'config' file if it is required
systemConfigFilePath = buildNormalizedPath(buildPath(systemConfigDirName, "config"));
// - What is the full path for the 'business_shared_items'
businessSharedItemsFilePath = buildNormalizedPath(buildPath(configDirName, "business_shared_items"));
// To determine if any configuration items has changed, where a --resync would be required, we need to have a hash file for the following items
// - 'config.backup' file
// - applicable 'config' file
@ -480,8 +476,7 @@ class ApplicationConfig {
configBackupFile = buildNormalizedPath(buildPath(configDirName, ".config.backup"));
configHashFile = buildNormalizedPath(buildPath(configDirName, ".config.hash"));
syncListHashFile = buildNormalizedPath(buildPath(configDirName, ".sync_list.hash"));
businessSharedItemsHashFile = buildNormalizedPath(buildPath(configDirName, ".business_shared_items.hash"));
// Debug Output for application set variables based on configDirName
addLogEntry("refreshTokenFilePath = " ~ refreshTokenFilePath, ["debug"]);
addLogEntry("deltaLinkFilePath = " ~ deltaLinkFilePath, ["debug"]);
@ -494,8 +489,6 @@ class ApplicationConfig {
addLogEntry("configBackupFile = " ~ configBackupFile, ["debug"]);
addLogEntry("configHashFile = " ~ configHashFile, ["debug"]);
addLogEntry("syncListHashFile = " ~ syncListHashFile, ["debug"]);
addLogEntry("businessSharedItemsFilePath = " ~ businessSharedItemsFilePath, ["debug"]);
addLogEntry("businessSharedItemsHashFile = " ~ businessSharedItemsHashFile, ["debug"]);
// Configure the Hash and Backup File Permission Value
string valueToConvert = to!string(defaultFilePermissionMode);
@ -900,6 +893,7 @@ class ApplicationConfig {
boolValues["synchronize"] = false;
boolValues["force"] = false;
boolValues["list_business_shared_items"] = false;
boolValues["sync_business_shared_files"] = false;
boolValues["force_sync"] = false;
boolValues["with_editing_perms"] = false;
@ -995,6 +989,12 @@ class ApplicationConfig {
"get-O365-drive-id",
"Query and return the Office 365 Drive ID for a given Office 365 SharePoint Shared Library (DEPRECIATED)",
&stringValues["sharepoint_library_name"],
"list-shared-items",
"List OneDrive Business Shared Items",
&boolValues["list_business_shared_items"],
"sync-shared-files",
"Sync OneDrive Business Shared Files to the local filesystem",
&boolValues["sync_business_shared_files"],
"local-first",
"Synchronize from the local directory source first, before downloading changes from OneDrive.",
&boolValues["local_first"],
@ -1365,20 +1365,7 @@ class ApplicationConfig {
// Is sync_business_shared_items enabled and configured ?
addLogEntry(); // used instead of an empty 'writeln();' to ensure the line break is correct in the buffered console output ordering
addLogEntry("Config option 'sync_business_shared_items' = " ~ to!string(getValueBool("sync_business_shared_items")));
if (exists(businessSharedItemsFilePath)){
addLogEntry("Selective Business Shared Items configured = true");
addLogEntry("sync_business_shared_items contents:");
// Output the sync_business_shared_items contents
auto businessSharedItemsFileList = File(businessSharedItemsFilePath, "r");
auto range = businessSharedItemsFileList.byLine();
foreach (line; range)
{
addLogEntry(to!string(line));
}
} else {
addLogEntry("Selective Business Shared Items configured = false");
}
addLogEntry("Config option 'Shared Files Directory' = " ~ configuredBusinessSharedFilesDirectoryName);
// Are webhooks enabled?
addLogEntry(); // used instead of an empty 'writeln();' to ensure the line break is correct in the buffered console output ordering
@ -1518,9 +1505,6 @@ class ApplicationConfig {
if (currentSyncListHash != previousSyncListHash)
logAndSetDifference("sync_list file has been updated, --resync needed", 0);
if (currentBusinessSharedItemsHash != previousBusinessSharedItemsHash)
logAndSetDifference("business_shared_folders file has been updated, --resync needed", 1);
// Check for updates in the config file
if (currentConfigHash != previousConfigHash) {
addLogEntry("Application configuration file has been updated, checking if --resync needed");
@ -1665,7 +1649,14 @@ class ApplicationConfig {
break;
}
}
// Final override
// In certain situations the calculated 'resync required' status must be ignored, so that the application can still display 'non-sync' information
// Options that trigger this override are:
// --list-shared-items
if (getValueBool("list_business_shared_items")) resyncRequired = false;
// Return the calculated boolean
return resyncRequired;
}
@ -1676,7 +1667,6 @@ class ApplicationConfig {
addLogEntry("Cleaning up configuration hash files", ["debug"]);
safeRemove(configHashFile);
safeRemove(syncListHashFile);
safeRemove(businessSharedItemsHashFile);
} else {
// --dry-run scenario ... technically we should not be making any local file changes .......
addLogEntry("DRY RUN: Not removing hash files as --dry-run has been used");
@ -1704,17 +1694,6 @@ class ApplicationConfig {
// Hash file should only be readable by the user who created it - 0600 permissions needed
syncListHashFile.setAttributes(convertedPermissionValue);
}
// Update 'business_shared_items' files
if (exists(businessSharedItemsFilePath)) {
// update business_shared_folders hash
addLogEntry("Updating business_shared_items hash", ["debug"]);
std.file.write(businessSharedItemsHashFile, computeQuickXorHash(businessSharedItemsFilePath));
// Hash file should only be readable by the user who created it - 0600 permissions needed
businessSharedItemsHashFile.setAttributes(convertedPermissionValue);
}
} else {
// --dry-run scenario ... technically we should not be making any local file changes .......
addLogEntry("DRY RUN: Not updating hash files as --dry-run has been used");
@ -1746,18 +1725,6 @@ class ApplicationConfig {
// Generate the runtime hash for the 'sync_list' file
currentSyncListHash = computeQuickXorHash(syncListFilePath);
}
// Does a 'business_shared_items' file exist with a valid hash file
if (exists(businessSharedItemsFilePath)) {
if (!exists(businessSharedItemsHashFile)) {
// no existing hash file exists
std.file.write(businessSharedItemsHashFile, "initial-hash");
// Hash file should only be readable by the user who created it - 0600 permissions needed
businessSharedItemsHashFile.setAttributes(convertedPermissionValue);
}
// Generate the runtime hash for the 'business_shared_items' file
currentBusinessSharedItemsHash = computeQuickXorHash(businessSharedItemsFilePath);
}
}
// Read in the text values of the previous configurations
@ -1783,16 +1750,7 @@ class ApplicationConfig {
return EXIT_FAILURE;
}
}
if (exists(businessSharedItemsHashFile)) {
try {
previousBusinessSharedItemsHash = readText(businessSharedItemsHashFile);
} catch (std.file.FileException e) {
// Unable to access required hash file
addLogEntry("ERROR: Unable to access " ~ e.msg);
// Use exit scopes to shutdown API
return EXIT_FAILURE;
}
}
return 0;
}
@ -1850,10 +1808,22 @@ class ApplicationConfig {
// --list-shared-items cannot be used with --resync and/or --resync-auth
if ((getValueBool("list_business_shared_items")) && ((getValueBool("resync")) || (getValueBool("resync_auth")))) {
addLogEntry("ERROR: --list-shared-folders cannot be used with --resync or --resync-auth");
addLogEntry("ERROR: --list-shared-items cannot be used with --resync or --resync-auth");
operationalConflictDetected = true;
}
// --list-shared-items cannot be used with --sync or --monitor
if ((getValueBool("list_business_shared_items")) && ((getValueBool("synchronize")) || (getValueBool("monitor")))) {
addLogEntry("ERROR: --list-shared-items cannot be used with --sync or --monitor");
operationalConflictDetected = true;
}
// --sync-shared-files can ONLY be used with sync_business_shared_items
if ((getValueBool("sync_business_shared_files")) && (!getValueBool("sync_business_shared_items"))) {
addLogEntry("ERROR: The --sync-shared-files option can only be utilised if the 'sync_business_shared_items' configuration setting is enabled.");
operationalConflictDetected = true;
}
// --display-sync-status cannot be used with --resync and/or --resync-auth
if ((getValueBool("display_sync_status")) && ((getValueBool("resync")) || (getValueBool("resync_auth")))) {
addLogEntry("ERROR: --display-sync-status cannot be used with --resync or --resync-auth");
@ -2057,6 +2027,9 @@ class ApplicationConfig {
// What will runtimeSyncDirectory be actually set to?
addLogEntry("sync_dir: runtimeSyncDirectory set to: " ~ runtimeSyncDirectory, ["debug"]);
// Configure configuredBusinessSharedFilesDirectoryName
configuredBusinessSharedFilesDirectoryName = buildNormalizedPath(buildPath(runtimeSyncDirectory, defaultBusinessSharedFilesDirectoryName));
return runtimeSyncDirectory;
}

View file

@ -18,6 +18,7 @@ import util;
import log;
enum ItemType {
none,
file,
dir,
remote,
@ -37,7 +38,9 @@ struct Item {
string quickXorHash;
string sha256Hash;
string remoteDriveId;
string remoteParentId;
string remoteId;
ItemType remoteType;
string syncStatus;
string size;
}
@ -144,8 +147,27 @@ Item makeDatabaseItem(JSONValue driveItem) {
// Is the object a remote drive item - living on another driveId ?
if (isItemRemote(driveItem)) {
item.remoteDriveId = driveItem["remoteItem"]["parentReference"]["driveId"].str;
item.remoteId = driveItem["remoteItem"]["id"].str;
// Check and assign remoteDriveId
if ("parentReference" in driveItem["remoteItem"] && "driveId" in driveItem["remoteItem"]["parentReference"]) {
item.remoteDriveId = driveItem["remoteItem"]["parentReference"]["driveId"].str;
}
// Check and assign remoteParentId
if ("parentReference" in driveItem["remoteItem"] && "id" in driveItem["remoteItem"]["parentReference"]) {
item.remoteParentId = driveItem["remoteItem"]["parentReference"]["id"].str;
}
// Check and assign remoteId
if ("id" in driveItem["remoteItem"]) {
item.remoteId = driveItem["remoteItem"]["id"].str;
}
// Check and assign remoteType
if ("file" in driveItem["remoteItem"].object) {
item.remoteType = ItemType.file;
} else {
item.remoteType = ItemType.dir;
}
}
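The remoteItem handling above can be mirrored in a short Python sketch (not the client's D code): the remote drive, parent, and item ids are pulled defensively, and the item is classified as a file only when the Graph "file" facet is present, otherwise as a directory. Field names follow the Microsoft Graph driveItem resource.

```python
def classify_remote_item(remote_item: dict) -> dict:
    # Defensive lookups mirror the "in" checks in makeDatabaseItem():
    # missing keys simply leave the corresponding field empty.
    parent = remote_item.get("parentReference", {})
    return {
        "remoteDriveId": parent.get("driveId", ""),
        "remoteParentId": parent.get("id", ""),
        "remoteId": remote_item.get("id", ""),
        # A "file" facet marks a file; anything else is treated as a directory.
        "remoteType": "file" if "file" in remote_item else "dir",
    }
```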
// We have 3 different operational modes where 'item.syncStatus' is used to flag if an item is synced or not:
@ -165,7 +187,7 @@ Item makeDatabaseItem(JSONValue driveItem) {
final class ItemDatabase {
// increment this for every change in the db schema
immutable int itemDatabaseVersion = 12;
immutable int itemDatabaseVersion = 13;
Database db;
string insertItemStmt;
@ -236,12 +258,12 @@ final class ItemDatabase {
db.exec("PRAGMA locking_mode = EXCLUSIVE");
insertItemStmt = "
INSERT OR REPLACE INTO item (driveId, id, name, remoteName, type, eTag, cTag, mtime, parentId, quickXorHash, sha256Hash, remoteDriveId, remoteId, syncStatus, size)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15)
INSERT OR REPLACE INTO item (driveId, id, name, remoteName, type, eTag, cTag, mtime, parentId, quickXorHash, sha256Hash, remoteDriveId, remoteParentId, remoteId, remoteType, syncStatus, size)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15, ?16, ?17)
";
updateItemStmt = "
UPDATE item
SET name = ?3, remoteName = ?4, type = ?5, eTag = ?6, cTag = ?7, mtime = ?8, parentId = ?9, quickXorHash = ?10, sha256Hash = ?11, remoteDriveId = ?12, remoteId = ?13, syncStatus = ?14, size = ?15
SET name = ?3, remoteName = ?4, type = ?5, eTag = ?6, cTag = ?7, mtime = ?8, parentId = ?9, quickXorHash = ?10, sha256Hash = ?11, remoteDriveId = ?12, remoteParentId = ?13, remoteId = ?14, remoteType = ?15, syncStatus = ?16, size = ?17
WHERE driveId = ?1 AND id = ?2
";
selectItemByIdStmt = "
@ -279,7 +301,9 @@ final class ItemDatabase {
quickXorHash TEXT,
sha256Hash TEXT,
remoteDriveId TEXT,
remoteParentId TEXT,
remoteId TEXT,
remoteType TEXT,
deltaLink TEXT,
syncStatus TEXT,
size TEXT,
@ -447,12 +471,14 @@ final class ItemDatabase {
bind(2, id);
bind(3, name);
bind(4, remoteName);
// type handling
string typeStr = null;
final switch (type) with (ItemType) {
case file: typeStr = "file"; break;
case dir: typeStr = "dir"; break;
case remote: typeStr = "remote"; break;
case unknown: typeStr = "unknown"; break;
case none: typeStr = null; break;
}
bind(5, typeStr);
bind(6, eTag);
@ -462,15 +488,26 @@ final class ItemDatabase {
bind(10, quickXorHash);
bind(11, sha256Hash);
bind(12, remoteDriveId);
bind(13, remoteId);
bind(14, syncStatus);
bind(15, size);
bind(13, remoteParentId);
bind(14, remoteId);
// remoteType handling
string remoteTypeStr = null;
final switch (remoteType) with (ItemType) {
case file: remoteTypeStr = "file"; break;
case dir: remoteTypeStr = "dir"; break;
case remote: remoteTypeStr = "remote"; break;
case unknown: remoteTypeStr = "unknown"; break;
case none: remoteTypeStr = null; break;
}
bind(15, remoteTypeStr);
bind(16, syncStatus);
bind(17, size);
}
}
private Item buildItem(Statement.Result result) {
assert(!result.empty, "The result must not be empty");
assert(result.front.length == 16, "The result must have 16 columns");
assert(result.front.length == 18, "The result must have 18 columns");
Item item = {
// column 0: driveId
@ -485,10 +522,12 @@ final class ItemDatabase {
// column 9: quickXorHash
// column 10: sha256Hash
// column 11: remoteDriveId
// column 12: remoteId
// column 13: deltaLink
// column 14: syncStatus
// column 15: size
// column 12: remoteParentId
// column 13: remoteId
// column 14: remoteType
// column 15: deltaLink
// column 16: syncStatus
// column 17: size
driveId: result.front[0].dup,
id: result.front[1].dup,
@ -502,17 +541,30 @@ final class ItemDatabase {
quickXorHash: result.front[9].dup,
sha256Hash: result.front[10].dup,
remoteDriveId: result.front[11].dup,
remoteId: result.front[12].dup,
// Column 13 is deltaLink - not set here
syncStatus: result.front[14].dup,
size: result.front[15].dup
remoteParentId: result.front[12].dup,
remoteId: result.front[13].dup,
// Column 14 is remoteType - not set here
// Column 15 is deltaLink - not set here
syncStatus: result.front[16].dup,
size: result.front[17].dup
};
// Configure item.type
switch (result.front[4]) {
case "file": item.type = ItemType.file; break;
case "dir": item.type = ItemType.dir; break;
case "remote": item.type = ItemType.remote; break;
default: assert(0, "Invalid item type");
}
// Configure item.remoteType
switch (result.front[14]) {
// We only care about 'dir' and 'file' for 'remote' items
case "file": item.remoteType = ItemType.file; break;
case "dir": item.remoteType = ItemType.dir; break;
default: item.remoteType = ItemType.none; break; // Default to ItemType.none
}
// Return item
return item;
}
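The schema bump above (itemDatabaseVersion 12 to 13) introduces two new columns, remoteParentId and remoteType. The upgrade path is not shown in this diff; in practice the client may simply require a --resync when versions differ. Purely as an illustration, an in-place migration could be sketched as follows (Python/sqlite3, hypothetical helper name). Note that buildItem() reads result columns by position, so the new columns must land at their exact v13 indices, which rules out an append-only ALTER TABLE.

```python
import sqlite3

# Column order mirrors the v13 CREATE TABLE shown in the diff.
V13_COLS = ["driveId", "id", "name", "remoteName", "type", "eTag", "cTag",
            "mtime", "parentId", "quickXorHash", "sha256Hash", "remoteDriveId",
            "remoteParentId", "remoteId", "remoteType", "deltaLink",
            "syncStatus", "size"]

def migrate_item_table_v12_to_v13(db: sqlite3.Connection) -> None:
    # Rebuild the table so remoteParentId / remoteType sit at their
    # positional v13 indices; existing rows are copied across and the
    # new columns default to NULL.
    old_cols = [c for c in V13_COLS if c not in ("remoteParentId", "remoteType")]
    db.execute("ALTER TABLE item RENAME TO item_v12")
    db.execute("CREATE TABLE item (%s)" % ", ".join(c + " TEXT" for c in V13_COLS))
    db.execute("INSERT INTO item (%s) SELECT %s FROM item_v12"
               % (", ".join(old_cols), ", ".join(old_cols)))
    db.execute("DROP TABLE item_v12")
```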

View file

@ -32,6 +32,7 @@ import syncEngine;
import itemdb;
import clientSideFiltering;
import monitor;
import webhook;
// What other constant variables do we require?
@ -39,7 +40,7 @@ const int EXIT_RESYNC_REQUIRED = 126;
// Class objects
ApplicationConfig appConfig;
OneDriveApi oneDriveApiInstance;
OneDriveWebhook oneDriveWebhook;
SyncEngine syncEngineInstance;
ItemDatabase itemDB;
ClientSideFiltering selectiveSync;
@ -341,19 +342,25 @@ int main(string[] cliArgs) {
processResyncDatabaseRemoval(runtimeDatabaseFile);
}
} else {
// Has any of our application configuration that would require a --resync been changed?
if (appConfig.applicationChangeWhereResyncRequired()) {
// Application configuration has changed however --resync not issued, fail fast
addLogEntry();
addLogEntry("An application configuration change has been detected where a --resync is required");
addLogEntry();
return EXIT_RESYNC_REQUIRED;
} else {
// No configuration change that requires a --resync to be issued
// Make a backup of the applicable configuration file
appConfig.createBackupConfigFile();
// Update hash files and generate a new config backup
appConfig.updateHashContentsForConfigFiles();
// Is the application currently authenticated? If not, it is pointless checking if a --resync is required until the application is authenticated
if (exists(appConfig.refreshTokenFilePath)) {
// Has any of our application configuration that would require a --resync been changed?
if (appConfig.applicationChangeWhereResyncRequired()) {
// Application configuration has changed however --resync not issued, fail fast
addLogEntry();
addLogEntry("An application configuration change has been detected where a --resync is required");
addLogEntry();
return EXIT_RESYNC_REQUIRED;
} else {
// No configuration change that requires a --resync to be issued
// Special cases need to be checked - if these options are enabled, they create a false 'Resync Required' flag, so do not create a backup
if ((!appConfig.getValueBool("list_business_shared_items"))) {
// Make a backup of the applicable configuration file
appConfig.createBackupConfigFile();
// Update hash files and generate a new config backup
appConfig.updateHashContentsForConfigFiles();
}
}
}
}
@ -416,13 +423,16 @@ int main(string[] cliArgs) {
// Initialise the OneDrive API
addLogEntry("Attempting to initialise the OneDrive API ...", ["verbose"]);
oneDriveApiInstance = new OneDriveApi(appConfig);
OneDriveApi oneDriveApiInstance = new OneDriveApi(appConfig);
appConfig.apiWasInitialised = oneDriveApiInstance.initialise();
if (appConfig.apiWasInitialised) {
addLogEntry("The OneDrive API was initialised successfully", ["verbose"]);
// Flag that we were able to initialise the API in the application config
oneDriveApiInstance.debugOutputConfiguredAPIItems();
oneDriveApiInstance.shutdown();
object.destroy(oneDriveApiInstance);
// Need to configure the itemDB and syncEngineInstance for 'sync' and 'non-sync' operations
addLogEntry("Opening the item database ...", ["verbose"]);
@ -447,8 +457,9 @@ int main(string[] cliArgs) {
// Are we performing some sort of 'no-sync' task?
// - Are we obtaining the Office 365 Drive ID for a given Office 365 SharePoint Shared Library?
// - Are we displaying the sync status?
// - Are we getting the URL for a file online
// - Are we listing who modified a file last online
// - Are we getting the URL for a file online?
// - Are we listing who modified a file last online?
// - Are we listing OneDrive Business Shared Items?
// - Are we creating a shareable link for an existing file on OneDrive?
// - Are we just creating a directory online, without any sync being performed?
// - Are we just deleting a directory online, without any sync being performed?
@ -500,6 +511,20 @@ int main(string[] cliArgs) {
return EXIT_SUCCESS;
}
// --list-shared-items - Are we listing OneDrive Business Shared Items
if (appConfig.getValueBool("list_business_shared_items")) {
// Is this a business account type?
if (appConfig.accountType == "business") {
// List OneDrive Business Shared Items
syncEngineInstance.listBusinessSharedObjects();
} else {
addLogEntry("ERROR: Unsupported account type for listing OneDrive Business Shared Items");
}
// Exit application
// Use exit scopes to shutdown API
return EXIT_SUCCESS;
}
// --create-share-link - Are we creating a shareable link for an existing file on OneDrive?
if (appConfig.getValueString("create_share_link") != "") {
// Query OneDrive for the file, and if valid, create a shareable link for the file
@ -851,15 +876,16 @@ int main(string[] cliArgs) {
addLogEntry("ERROR: The following inotify error was generated: " ~ e.msg);
}
}
// Webhook Notification reset to false for this loop
notificationReceived = false;
// Check for notifications pushed from Microsoft to the webhook
if (webhookEnabled) {
// Create a subscription on the first run, or renew the subscription
// on subsequent runs when it is about to expire.
oneDriveApiInstance.createOrRenewSubscription();
if (oneDriveWebhook is null) {
oneDriveWebhook = new OneDriveWebhook(thisTid, appConfig);
oneDriveWebhook.serve();
} else
oneDriveWebhook.createOrRenewSubscription();
}
// Get the current time this loop is starting
@ -1004,19 +1030,21 @@ int main(string[] cliArgs) {
if(filesystemMonitor.initialised || webhookEnabled) {
if(filesystemMonitor.initialised) {
// If local monitor is on
// If local monitor is on and is waiting (previous event was not from webhook)
// start the worker and wait for event
filesystemMonitor.send(true);
if (!notificationReceived)
filesystemMonitor.send(true);
}
if(webhookEnabled) {
// if onedrive webhook is enabled
// update sleep time based on renew interval
Duration nextWebhookCheckDuration = oneDriveApiInstance.getNextExpirationCheckDuration();
Duration nextWebhookCheckDuration = oneDriveWebhook.getNextExpirationCheckDuration();
if (nextWebhookCheckDuration < sleepTime) {
sleepTime = nextWebhookCheckDuration;
addLogEntry("Update sleeping time to " ~ to!string(sleepTime), ["debug"]);
}
// Webhook Notification reset to false for this loop
notificationReceived = false;
}
@ -1042,17 +1070,17 @@ int main(string[] cliArgs) {
// do not contain any actual changes, and we will always rely on the
// delta endpoint to sync to latest. Therefore, only one sync run is
// good enough to catch up for multiple notifications.
int signalCount = notificationReceived ? 1 : 0;
for (;; signalCount++) {
signalExists = receiveTimeout(dur!"seconds"(-1), (ulong _) {});
if (signalExists) {
notificationReceived = true;
} else {
if (notificationReceived) {
if (notificationReceived) {
int signalCount = 1;
while (true) {
signalExists = receiveTimeout(dur!"seconds"(-1), (ulong _) {});
if (signalExists) {
signalCount++;
} else {
addLogEntry("Received " ~ to!string(signalCount) ~ " refresh signals from the webhook");
oneDriveWebhookCallback();
break;
}
break;
}
}
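The refactored coalescing loop above can be sketched in Python: once one webhook signal has been received, every further signal already waiting is drained, so a burst of notifications results in a single delta sync rather than one sync per notification.

```python
import queue

def drain_refresh_signals(pending: "queue.Queue") -> int:
    # One signal has already woken us up; absorb everything else queued.
    signal_count = 1
    while True:
        try:
            pending.get_nowait()
            signal_count += 1
        except queue.Empty:
            # Queue drained: report how many signals this sync run covers.
            return signal_count
```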
@ -1091,11 +1119,10 @@ void performStandardExitProcess(string scopeCaller = null) {
addLogEntry("Running performStandardExitProcess due to: " ~ scopeCaller, ["debug"]);
}
// Shutdown the OneDrive API instance
if (oneDriveApiInstance !is null) {
addLogEntry("Shutdown OneDrive API instance", ["debug"]);
oneDriveApiInstance.shutdown();
object.destroy(oneDriveApiInstance);
// Shutdown the OneDrive Webhook instance
if (oneDriveWebhook !is null) {
oneDriveWebhook.stop();
object.destroy(oneDriveWebhook);
}
// Shutdown the sync engine
@ -1145,7 +1172,7 @@ void performStandardExitProcess(string scopeCaller = null) {
addLogEntry("Setting ALL Class Objects to null due to failure scope", ["debug"]);
itemDB = null;
appConfig = null;
oneDriveApiInstance = null;
oneDriveWebhook = null;
selectiveSync = null;
syncEngineInstance = null;
} else {

View file

@ -488,7 +488,9 @@ final class Monitor {
while (true) {
bool hasNotification = false;
while (true) {
int sleep_counter = 0;
// Batch events up to 5 seconds
while (sleep_counter < 5) {
int ret = poll(&fds, 1, 0);
if (ret == -1) throw new MonitorException("poll failed");
else if (ret == 0) break; // no events available
@ -621,7 +623,12 @@ final class Monitor {
skip:
i += inotify_event.sizeof + event.len;
}
Thread.sleep(dur!"seconds"(1));
// Sleep for one second to prevent missing fast-changing events.
if (poll(&fds, 1, 0) == 0) {
sleep_counter += 1;
Thread.sleep(dur!"seconds"(1));
}
}
if (!hasNotification) break;
processChanges();
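The batching change above can be restated as a small Python sketch: events keep being drained while any are pending, with a one-second pause whenever the descriptor goes momentarily quiet, and the whole batch is capped at five such quiet seconds (matching the monitor's sleep_counter). The sleep function is injectable here purely so the sketch stays testable.

```python
import time

def batch_events(poll_ready, read_events, max_sleeps=5, sleep=time.sleep):
    events, sleeps = [], 0
    while sleeps < max_sleeps:
        if not poll_ready():
            break                 # nothing pending at all: the batch is complete
        events.extend(read_events())
        if not poll_ready():
            sleeps += 1           # quiet right now: wait one second for stragglers
            sleep(1)              # matches Thread.sleep(dur!"seconds"(1))
    return events
```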

View file

@ -22,9 +22,6 @@ import std.uri;
import std.array;
// Required for webhooks
import arsd.cgi;
import std.concurrency;
import core.atomic : atomicOp;
import std.uuid;
// What other modules that we have created do we need to import?
@ -56,116 +53,10 @@ class OneDriveException: Exception {
}
}
class OneDriveWebhook {
// We need OneDriveWebhook.serve to be a static function, otherwise we would hit the member function
// "requires a dual-context, which is deprecated" warning. The root cause is described here:
// - https://issues.dlang.org/show_bug.cgi?id=5710
// - https://forum.dlang.org/post/fkyppfxzegenniyzztos@forum.dlang.org
// The problem is deemed a bug and should be fixed in the compilers eventually. The singleton stuff
// could be undone when it is fixed.
//
// Following the singleton pattern described here: https://wiki.dlang.org/Low-Lock_Singleton_Pattern
// Cache instantiation flag in thread-local bool
// Thread local
private static bool instantiated_;
private RequestServer server;
// Thread global
private __gshared OneDriveWebhook instance_;
private string host;
private ushort port;
private Tid parentTid;
private shared uint count;
private bool started;
static OneDriveWebhook getOrCreate(string host, ushort port, Tid parentTid) {
if (!instantiated_) {
synchronized(OneDriveWebhook.classinfo) {
if (!instance_) {
instance_ = new OneDriveWebhook(host, port, parentTid);
}
instantiated_ = true;
}
}
return instance_;
}
private this(string host, ushort port, Tid parentTid) {
this.host = host;
this.port = port;
this.parentTid = parentTid;
this.count = 0;
}
void serve() {
spawn(&serveStatic);
this.started = true;
addLogEntry("Started webhook server");
}
void stop() {
if (this.started) {
server.stop();
this.started = false;
}
addLogEntry("Stopped webhook server");
object.destroy(server);
}
// The static serve() is necessary because spawn() does not like instance methods
private static void serveStatic() {
// we won't create the singleton instance if it hasn't been created already
// such a case is a bug which should crash the program and be fixed
instance_.serveImpl();
}
// The static handle() is necessary to work around the dual-context warning mentioned above
private static void handle(Cgi cgi) {
// we won't create the singleton instance if it hasn't been created already
// such a case is a bug which should crash the program and be fixed
instance_.handleImpl(cgi);
}
private void serveImpl() {
server = RequestServer(host, port);
server.serveEmbeddedHttp!handle();
}
private void handleImpl(Cgi cgi) {
if (debugHTTPResponseOutput) {
addLogEntry("Webhook request: " ~ to!string(cgi.requestMethod) ~ " " ~ to!string(cgi.requestUri));
if (!cgi.postBody.empty) {
addLogEntry("Webhook post body: " ~ to!string(cgi.postBody));
}
}
cgi.setResponseContentType("text/plain");
if ("validationToken" in cgi.get) {
// For validation requests, respond with the validation token passed in the query string
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/webhook-receiver-validation-request
cgi.write(cgi.get["validationToken"]);
addLogEntry("Webhook: handled validation request");
} else {
// Notifications don't include any information about the changes that triggered them.
// Put a refresh signal in the queue and let the main monitor loop process it.
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/using-webhooks
count.atomicOp!"+="(1);
send(parentTid, to!ulong(count));
cgi.write("OK");
addLogEntry("Webhook: sent refresh signal #" ~ to!string(count));
}
}
}
class OneDriveApi {
// Class variables
ApplicationConfig appConfig;
CurlEngine curlEngine;
OneDriveWebhook webhook;
string clientId = "";
string companyName = "";
@ -179,20 +70,14 @@ class OneDriveApi {
string itemByPathUrl = "";
string siteSearchUrl = "";
string siteDriveUrl = "";
string subscriptionUrl = "";
string tenantId = "";
string authScope = "";
const(char)[] refreshToken = "";
bool dryRun = false;
bool debugResponse = false;
ulong retryAfterValue = 0;
// Webhook Subscriptions
string subscriptionUrl = "";
string subscriptionId = "";
SysTime subscriptionExpiration, subscriptionLastErrorAt;
Duration subscriptionExpirationInterval, subscriptionRenewalInterval, subscriptionRetryInterval;
string notificationUrl = "";
this(ApplicationConfig appConfig) {
// Configure the class variable to consume the application configuration
this.appConfig = appConfig;
@ -214,14 +99,9 @@ class OneDriveApi {
siteSearchUrl = appConfig.globalGraphEndpoint ~ "/v1.0/sites?search";
siteDriveUrl = appConfig.globalGraphEndpoint ~ "/v1.0/sites/";
// Subscriptions
subscriptionUrl = appConfig.globalGraphEndpoint ~ "/v1.0/subscriptions";
subscriptionExpiration = Clock.currTime(UTC());
subscriptionLastErrorAt = SysTime.fromUnixTime(0);
subscriptionExpirationInterval = dur!"seconds"(appConfig.getValueLong("webhook_expiration_interval"));
subscriptionRenewalInterval = dur!"seconds"(appConfig.getValueLong("webhook_renewal_interval"));
subscriptionRetryInterval = dur!"seconds"(appConfig.getValueLong("webhook_retry_interval"));
notificationUrl = appConfig.getValueString("webhook_public_url");
}
// Initialise the OneDrive API class
@ -475,20 +355,6 @@ class OneDriveApi {
// Shutdown OneDrive API Curl Engine
void shutdown() {
// Delete subscription if there exists any
try {
deleteSubscription();
} catch (OneDriveException e) {
logSubscriptionError(e);
}
// Shutdown webhook server if it is running
if (webhook !is null) {
webhook.stop();
object.destroy(webhook);
}
// Release curl instance
if (curlEngine !is null) {
curlEngine.release();
@ -646,6 +512,13 @@ class OneDriveApi {
return get(url);
}
// Return all the items that are shared with the user
// https://docs.microsoft.com/en-us/graph/api/drive-sharedwithme
JSONValue getSharedWithMe() {
checkAccessTokenExpired();
return get(sharedWithMeUrl);
}
// Create a shareable link for an existing file on OneDrive based on the accessScope JSON permissions
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_createlink
JSONValue createShareableLink(string driveId, string id, JSONValue accessScope) {
@ -834,6 +707,47 @@ class OneDriveApi {
url = siteDriveUrl ~ site_id ~ "/drives";
return get(url);
}
JSONValue createSubscription(string notificationUrl, SysTime expirationDateTime) {
checkAccessTokenExpired();
string driveId = appConfig.getValueString("drive_id");
string url = subscriptionUrl;
// Create a resource item based on if we have a driveId
string resourceItem;
if (driveId.length) {
resourceItem = "/drives/" ~ driveId ~ "/root";
} else {
resourceItem = "/me/drive/root";
}
// create JSON request to create webhook subscription
const JSONValue request = [
"changeType": "updated",
"notificationUrl": notificationUrl,
"resource": resourceItem,
"expirationDateTime": expirationDateTime.toISOExtString(),
"clientState": randomUUID().toString()
];
curlEngine.http.addRequestHeader("Content-Type", "application/json");
return post(url, request.toString());
}
JSONValue renewSubscription(string subscriptionId, SysTime expirationDateTime) {
string url;
url = subscriptionUrl ~ "/" ~ subscriptionId;
const JSONValue request = [
"expirationDateTime": expirationDateTime.toISOExtString()
];
curlEngine.http.addRequestHeader("Content-Type", "application/json");
return post(url, request.toString());
}
void deleteSubscription(string subscriptionId) {
string url;
url = subscriptionUrl ~ "/" ~ subscriptionId;
performDelete(url);
}
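The request body that createSubscription() assembles can be sketched in Python for clarity. Field names follow the Microsoft Graph subscriptions API; the 3600-second lifetime below is an arbitrary placeholder for the client's webhook_expiration_interval setting.

```python
import uuid
from datetime import datetime, timedelta, timezone

def build_subscription_request(notification_url: str, drive_id: str = "") -> dict:
    # Resource selection mirrors the driveId check above.
    resource = "/drives/%s/root" % drive_id if drive_id else "/me/drive/root"
    expiration = datetime.now(timezone.utc) + timedelta(seconds=3600)
    return {
        "changeType": "updated",
        "notificationUrl": notification_url,
        "resource": resource,
        "expirationDateTime": expiration.isoformat(),
        # Random clientState lets the receiver verify notification origin.
        "clientState": str(uuid.uuid4()),
    }
```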
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_get_content
void downloadById(const(char)[] driveId, const(char)[] id, string saveToPath, long fileSize) {
@ -893,277 +807,7 @@ class OneDriveApi {
retryAfterValue = 0;
}
// Create a new subscription or renew the existing subscription
void createOrRenewSubscription() {
checkAccessTokenExpired();
// Kick off the webhook server first
if (webhook is null) {
webhook = OneDriveWebhook.getOrCreate(
appConfig.getValueString("webhook_listening_host"),
to!ushort(appConfig.getValueLong("webhook_listening_port")),
thisTid
);
webhook.serve();
}
auto elapsed = Clock.currTime(UTC()) - subscriptionLastErrorAt;
if (elapsed < subscriptionRetryInterval) {
return;
}
try {
if (!hasValidSubscription()) {
createSubscription();
} else if (isSubscriptionUpForRenewal()) {
renewSubscription();
}
} catch (OneDriveException e) {
logSubscriptionError(e);
subscriptionLastErrorAt = Clock.currTime(UTC());
addLogEntry("Will retry creating or renewing subscription in " ~ to!string(subscriptionRetryInterval));
} catch (JSONException e) {
addLogEntry("ERROR: Unexpected JSON error when attempting to validate subscription: " ~ e.msg);
subscriptionLastErrorAt = Clock.currTime(UTC());
addLogEntry("Will retry creating or renewing subscription in " ~ to!string(subscriptionRetryInterval));
}
}
// Return the duration to next subscriptionExpiration check
Duration getNextExpirationCheckDuration() {
SysTime now = Clock.currTime(UTC());
if (hasValidSubscription()) {
Duration elapsed = Clock.currTime(UTC()) - subscriptionLastErrorAt;
// Check if we are waiting for the next retry
if (elapsed < subscriptionRetryInterval)
return subscriptionRetryInterval - elapsed;
else
return subscriptionExpiration - now - subscriptionRenewalInterval;
}
else
return subscriptionRetryInterval;
}
// Private functions
private bool hasValidSubscription() {
return !subscriptionId.empty && subscriptionExpiration > Clock.currTime(UTC());
}
private bool isSubscriptionUpForRenewal() {
return subscriptionExpiration < Clock.currTime(UTC()) + subscriptionRenewalInterval;
}
private void createSubscription() {
addLogEntry("Initializing subscription for updates ...");
auto expirationDateTime = Clock.currTime(UTC()) + subscriptionExpirationInterval;
string driveId = appConfig.getValueString("drive_id");
string url = subscriptionUrl;
// Create a resource item based on if we have a driveId
string resourceItem;
if (driveId.length) {
resourceItem = "/drives/" ~ driveId ~ "/root";
} else {
resourceItem = "/me/drive/root";
}
// create JSON request to create webhook subscription
const JSONValue request = [
"changeType": "updated",
"notificationUrl": notificationUrl,
"resource": resourceItem,
"expirationDateTime": expirationDateTime.toISOExtString(),
"clientState": randomUUID().toString()
];
curlEngine.http.addRequestHeader("Content-Type", "application/json");
try {
JSONValue response = post(url, request.toString());
// Save important subscription metadata including id and expiration
subscriptionId = response["id"].str;
subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
addLogEntry("Created new subscription " ~ subscriptionId ~ " with expiration: " ~ to!string(subscriptionExpiration.toISOExtString()));
} catch (OneDriveException e) {
if (e.httpStatusCode == 409) {
// Take over an existing subscription on HTTP 409.
//
// Sample 409 error:
// {
// "error": {
// "code": "ObjectIdentifierInUse",
// "innerError": {
// "client-request-id": "615af209-467a-4ab7-8eff-27c1d1efbc2d",
// "date": "2023-09-26T09:27:45",
// "request-id": "615af209-467a-4ab7-8eff-27c1d1efbc2d"
// },
// "message": "Subscription Id c0bba80e-57a3-43a7-bac2-e6f525a76e7c already exists for the requested combination"
// }
// }
// Make sure the error code is "ObjectIdentifierInUse"
try {
if (e.error["error"]["code"].str != "ObjectIdentifierInUse") {
throw e;
}
} catch (JSONException jsonEx) {
throw e;
}
// Extract the existing subscription id from the error message
import std.regex;
auto idReg = ctRegex!(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", "i");
auto m = matchFirst(e.error["error"]["message"].str, idReg);
if (!m) {
throw e;
}
// Save the subscription id and renew it immediately since we don't know the expiration timestamp
subscriptionId = m[0];
addLogEntry("Found existing subscription " ~ subscriptionId);
renewSubscription();
} else {
throw e;
}
}
}
private void renewSubscription() {
addLogEntry("Renewing subscription for updates ...");
auto expirationDateTime = Clock.currTime(UTC()) + subscriptionExpirationInterval;
string url;
url = subscriptionUrl ~ "/" ~ subscriptionId;
const JSONValue request = [
"expirationDateTime": expirationDateTime.toISOExtString()
];
curlEngine.http.addRequestHeader("Content-Type", "application/json");
try {
JSONValue response = patch(url, request.toString());
// Update subscription expiration from the response
subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
addLogEntry("Renewed subscription " ~ subscriptionId ~ " with expiration: " ~ to!string(subscriptionExpiration.toISOExtString()));
} catch (OneDriveException e) {
if (e.httpStatusCode == 404) {
addLogEntry("The subscription is not found on the server. Recreating subscription ...");
subscriptionId = null;
subscriptionExpiration = Clock.currTime(UTC());
createSubscription();
} else {
throw e;
}
}
}
private void deleteSubscription() {
if (!hasValidSubscription()) {
addLogEntry("No valid Microsoft OneDrive webhook subscription to delete", ["debug"]);
return;
}
string url = subscriptionUrl ~ "/" ~ subscriptionId;
performDelete(url);
addLogEntry("Deleted Microsoft OneDrive webhook subscription", ["debug"]);
}
private void logSubscriptionError(OneDriveException e) {
if (e.httpStatusCode == 400) {
// Log known 400 error where Microsoft cannot get a 200 OK from the webhook endpoint
//
// Sample 400 error:
// {
// "error": {
// "code": "InvalidRequest",
// "innerError": {
// "client-request-id": "<uuid>",
// "date": "<timestamp>",
// "request-id": "<uuid>"
// },
// "message": "Subscription validation request failed. Notification endpoint must respond with 200 OK to validation request."
// }
// }
try {
if (e.error["error"]["code"].str == "InvalidRequest") {
import std.regex;
auto msgReg = ctRegex!(r"Subscription validation request failed", "i");
auto m = matchFirst(e.error["error"]["message"].str, msgReg);
if (m) {
addLogEntry("ERROR: Cannot create or renew subscription: Microsoft did not get 200 OK from the webhook endpoint.");
return;
}
}
} catch (JSONException) {
// fallthrough
}
} else if (e.httpStatusCode == 401) {
// Log known 401 error where authentication failed
//
// Sample 401 error:
// {
// "error": {
// "code": "ExtensionError",
// "innerError": {
// "client-request-id": "<uuid>",
// "date": "<timestamp>",
// "request-id": "<uuid>"
// },
// "message": "Operation: Create; Exception: [Status Code: Unauthorized; Reason: Authentication failed]"
// }
// }
try {
if (e.error["error"]["code"].str == "ExtensionError") {
import std.regex;
auto msgReg = ctRegex!(r"Authentication failed", "i");
auto m = matchFirst(e.error["error"]["message"].str, msgReg);
if (m) {
addLogEntry("ERROR: Cannot create or renew subscription: Authentication failed.");
return;
}
}
} catch (JSONException) {
// fallthrough
}
} else if (e.httpStatusCode == 403) {
// Log known 403 error where the number of subscriptions on item has exceeded limit
//
// Sample 403 error:
// {
// "error": {
// "code": "ExtensionError",
// "innerError": {
// "client-request-id": "<uuid>",
// "date": "<timestamp>",
// "request-id": "<uuid>"
// },
// "message": "Operation: Create; Exception: [Status Code: Forbidden; Reason: Number of subscriptions on item has exceeded limit]"
// }
// }
try {
if (e.error["error"]["code"].str == "ExtensionError") {
import std.regex;
auto msgReg = ctRegex!(r"Number of subscriptions on item has exceeded limit", "i");
auto m = matchFirst(e.error["error"]["message"].str, msgReg);
if (m) {
addLogEntry("ERROR: Cannot create or renew subscription: Number of subscriptions has exceeded limit.");
return;
}
}
} catch (JSONException) {
// fallthrough
}
}
// Log detailed message for unknown errors
addLogEntry("ERROR: Cannot create or renew subscription.");
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
}
private void addAccessTokenHeader() {
curlEngine.http.addRequestHeader("Authorization", appConfig.accessToken);
}
@@ -1947,7 +1591,7 @@ class OneDriveApi {
case 403:
// OneDrive responded that the user is forbidden
addLogEntry("OneDrive returned a 'HTTP 403 - Forbidden' - gracefully handling error", ["verbose"]);
-break;
+throw new OneDriveException(curlEngine.http.statusLine.code, curlEngine.http.statusLine.reason);
// 404 - Item not found
case 404:

src/sync.d: 1353 changes (diff suppressed because it is too large)
@@ -49,7 +49,7 @@ static this() {
}
// Creates a safe backup of the given item, and only performs the function if not in a --dry-run scenario
-void safeBackup(const(char)[] path, bool dryRun) {
+void safeBackup(const(char)[] path, bool dryRun, out string renamedPath) {
auto ext = extension(path);
auto newPath = path.chomp(ext) ~ "-" ~ deviceName;
int n = 2;
@@ -88,7 +88,8 @@ void safeBackup(const(char)[] path, bool dryRun) {
//
// Use rename() as Linux is POSIX compliant, we have an atomic operation where at no point in time the 'to' is missing.
try {
-rename(path, newPath);
+rename(path, newPath);
+renamedPath = to!string(newPath);
} catch (Exception e) {
// Handle exceptions, e.g., log error
addLogEntry("Renaming of local file failed for " ~ to!string(path) ~ ": " ~ e.msg, ["error"]);
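The safeBackup() change above renames a conflicting local file to a device-suffixed backup name before the remote copy is written (strip the extension, append "-" plus the device name, and add a counter on collision). The naming scheme can be sketched in Python; this is an independent illustration, and the counter format and re-appending of the extension are assumptions, since the full D function is not shown in this hunk:

```python
import os

def backup_name(path, device_name, exists=os.path.exists):
    """Return a non-clobbering backup path, e.g. 'doc-host.txt', then 'doc-host-2.txt'.

    Assumed collision format; the real client's naming may differ in detail.
    """
    root, ext = os.path.splitext(path)
    candidate = f"{root}-{device_name}{ext}"
    n = 2  # mirrors the 'int n = 2;' counter in the D code above
    while exists(candidate):
        candidate = f"{root}-{device_name}-{n}{ext}"
        n += 1
    return candidate
```

Because the rename itself uses POSIX rename(), the backup appears atomically at the new path, which is why the D code can safely report `renamedPath` to the caller afterwards.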

src/webhook.d: 339 additions (new file)

@@ -0,0 +1,339 @@
module webhook;
// What does this module require to function?
import core.atomic : atomicOp;
import std.datetime;
import std.concurrency;
import std.json;
// What other modules that we have created do we need to import?
import arsd.cgi;
import config;
import onedrive;
import log;
import util;
class OneDriveWebhook {
private RequestServer server;
private string host;
private ushort port;
private Tid parentTid;
private bool started;
private ApplicationConfig appConfig;
private OneDriveApi oneDriveApiInstance;
string subscriptionId = "";
SysTime subscriptionExpiration, subscriptionLastErrorAt;
Duration subscriptionExpirationInterval, subscriptionRenewalInterval, subscriptionRetryInterval;
string notificationUrl = "";
private uint count;
this(Tid parentTid, ApplicationConfig appConfig) {
this.host = appConfig.getValueString("webhook_listening_host");
this.port = to!ushort(appConfig.getValueLong("webhook_listening_port"));
this.parentTid = parentTid;
this.appConfig = appConfig;
subscriptionExpiration = Clock.currTime(UTC());
subscriptionLastErrorAt = SysTime.fromUnixTime(0);
subscriptionExpirationInterval = dur!"seconds"(appConfig.getValueLong("webhook_expiration_interval"));
subscriptionRenewalInterval = dur!"seconds"(appConfig.getValueLong("webhook_renewal_interval"));
subscriptionRetryInterval = dur!"seconds"(appConfig.getValueLong("webhook_retry_interval"));
notificationUrl = appConfig.getValueString("webhook_public_url");
}
// serve() hands off to the static serveImpl(), because spawn() cannot take instance methods
void serve() {
if (this.started)
return;
this.started = true;
this.count = 0;
server.listeningHost = this.host;
server.listeningPort = this.port;
spawn(&serveImpl, cast(shared) this);
addLogEntry("Started webhook server");
// Subscriptions
oneDriveApiInstance = new OneDriveApi(this.appConfig);
oneDriveApiInstance.initialise();
createOrRenewSubscription();
}
void stop() {
if (!this.started)
return;
server.stop();
this.started = false;
addLogEntry("Stopped webhook server");
object.destroy(server);
// Delete subscription if there exists any
try {
deleteSubscription();
} catch (OneDriveException e) {
logSubscriptionError(e);
}
oneDriveApiInstance.shutdown();
object.destroy(oneDriveApiInstance);
}
private static void handle(shared OneDriveWebhook _this, Cgi cgi) {
if (debugHTTPResponseOutput) {
addLogEntry("Webhook request: " ~ to!string(cgi.requestMethod) ~ " " ~ to!string(cgi.requestUri));
if (!cgi.postBody.empty) {
addLogEntry("Webhook post body: " ~ to!string(cgi.postBody));
}
}
cgi.setResponseContentType("text/plain");
if ("validationToken" in cgi.get) {
// For validation requests, respond with the validation token passed in the query string
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/webhook-receiver-validation-request
cgi.write(cgi.get["validationToken"]);
addLogEntry("Webhook: handled validation request");
} else {
// Notifications don't include any information about the changes that triggered them.
// Put a refresh signal in the queue and let the main monitor loop process it.
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/using-webhooks
_this.count.atomicOp!"+="(1);
send(cast()_this.parentTid, to!ulong(_this.count));
cgi.write("OK");
addLogEntry("Webhook: sent refresh signal #" ~ to!string(_this.count));
}
}
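The validation branch in handle() above implements the Microsoft Graph webhook receiver handshake: Graph calls the notification URL with a `validationToken` query parameter and expects that token echoed back as `text/plain` with a 200 response; anything else is treated as a change notification and acknowledged immediately. A minimal Python sketch of the same decision, independent of this D implementation:

```python
from urllib.parse import urlparse, parse_qs

def webhook_response(request_uri):
    """Return (status, body) for an incoming Graph webhook request."""
    query = parse_qs(urlparse(request_uri).query)
    if "validationToken" in query:
        # Validation handshake: echo the token back as text/plain with 200 OK.
        return 200, query["validationToken"][0]
    # Change notification: acknowledge immediately; the actual delta sync is
    # triggered out of band (here, by signalling the parent monitor thread).
    return 200, "OK"
```

Responding quickly matters: Graph rejects the subscription if the endpoint does not return 200 to the validation request, which is exactly the 400 "Subscription validation request failed" case handled in logSubscriptionError() below.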
private static void serveImpl(shared OneDriveWebhook _this) {
_this.server.serveEmbeddedHttp!(handle, OneDriveWebhook)(_this);
}
// Create a new subscription or renew the existing subscription
void createOrRenewSubscription() {
auto elapsed = Clock.currTime(UTC()) - subscriptionLastErrorAt;
if (elapsed < subscriptionRetryInterval) {
return;
}
try {
if (!hasValidSubscription()) {
createSubscription();
} else if (isSubscriptionUpForRenewal()) {
renewSubscription();
}
} catch (OneDriveException e) {
logSubscriptionError(e);
subscriptionLastErrorAt = Clock.currTime(UTC());
addLogEntry("Will retry creating or renewing subscription in " ~ to!string(subscriptionRetryInterval));
} catch (JSONException e) {
addLogEntry("ERROR: Unexpected JSON error when attempting to validate subscription: " ~ e.msg);
subscriptionLastErrorAt = Clock.currTime(UTC());
addLogEntry("Will retry creating or renewing subscription in " ~ to!string(subscriptionRetryInterval));
}
}
// Return the duration to next subscriptionExpiration check
Duration getNextExpirationCheckDuration() {
SysTime now = Clock.currTime(UTC());
if (hasValidSubscription()) {
Duration elapsed = Clock.currTime(UTC()) - subscriptionLastErrorAt;
// Check if we are waiting for the next retry
if (elapsed < subscriptionRetryInterval)
return subscriptionRetryInterval - elapsed;
else
return subscriptionExpiration - now - subscriptionRenewalInterval;
}
else
return subscriptionRetryInterval;
}
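getNextExpirationCheckDuration() above decides how long the monitor loop may sleep: with no valid subscription it waits one retry interval; after a recent error it waits out the remainder of the retry window; otherwise it sleeps until the renewal window opens, i.e. renewal-interval before expiry. The same decision can be sketched in Python with plain epoch seconds in place of SysTime/Duration (names are illustrative):

```python
def next_check_seconds(now, expiration, last_error_at,
                       retry_interval, renewal_interval):
    """Seconds until the next subscription check (all arguments in epoch seconds)."""
    has_valid = expiration > now  # assumes a subscription id is already held
    if not has_valid:
        return retry_interval
    elapsed = now - last_error_at
    if elapsed < retry_interval:
        # Still inside the back-off window after a subscription error.
        return retry_interval - elapsed
    # Wake up renewal_interval before the subscription expires.
    return expiration - now - renewal_interval
```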
private bool hasValidSubscription() {
return !subscriptionId.empty && subscriptionExpiration > Clock.currTime(UTC());
}
private bool isSubscriptionUpForRenewal() {
return subscriptionExpiration < Clock.currTime(UTC()) + subscriptionRenewalInterval;
}
private void createSubscription() {
addLogEntry("Initializing subscription for updates ...");
auto expirationDateTime = Clock.currTime(UTC()) + subscriptionExpirationInterval;
try {
JSONValue response = oneDriveApiInstance.createSubscription(notificationUrl, expirationDateTime);
// Save important subscription metadata including id and expiration
subscriptionId = response["id"].str;
subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
addLogEntry("Created new subscription " ~ subscriptionId ~ " with expiration: " ~ to!string(subscriptionExpiration.toISOExtString()));
} catch (OneDriveException e) {
if (e.httpStatusCode == 409) {
// Take over an existing subscription on HTTP 409.
//
// Sample 409 error:
// {
// "error": {
// "code": "ObjectIdentifierInUse",
// "innerError": {
// "client-request-id": "615af209-467a-4ab7-8eff-27c1d1efbc2d",
// "date": "2023-09-26T09:27:45",
// "request-id": "615af209-467a-4ab7-8eff-27c1d1efbc2d"
// },
// "message": "Subscription Id c0bba80e-57a3-43a7-bac2-e6f525a76e7c already exists for the requested combination"
// }
// }
// Make sure the error code is "ObjectIdentifierInUse"
try {
if (e.error["error"]["code"].str != "ObjectIdentifierInUse") {
throw e;
}
} catch (JSONException jsonEx) {
throw e;
}
// Extract the existing subscription id from the error message
import std.regex;
auto idReg = ctRegex!(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", "i");
auto m = matchFirst(e.error["error"]["message"].str, idReg);
if (!m) {
throw e;
}
// Save the subscription id and renew it immediately since we don't know the expiration timestamp
subscriptionId = m[0];
addLogEntry("Found existing subscription " ~ subscriptionId);
renewSubscription();
} else {
throw e;
}
}
}
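The HTTP 409 takeover path above depends on the existing subscription id being embedded in the "ObjectIdentifierInUse" error message, recovered with a case-insensitive GUID regex. That extraction step can be sketched in Python, using the sample message from the comment above (an illustration only, not the client's code):

```python
import re

# Case-insensitive GUID pattern, matching the ctRegex used in the D code.
GUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I)

def extract_subscription_id(message):
    """Pull the first GUID out of an ObjectIdentifierInUse error message, or None."""
    m = GUID_RE.search(message)
    return m.group(0) if m else None
```

Because the expiration timestamp of the taken-over subscription is unknown, the D code renews it immediately after extracting the id, which also populates subscriptionExpiration from the renewal response.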
private void renewSubscription() {
addLogEntry("Renewing subscription for updates ...");
auto expirationDateTime = Clock.currTime(UTC()) + subscriptionExpirationInterval;
try {
JSONValue response = oneDriveApiInstance.renewSubscription(subscriptionId, expirationDateTime);
// Update subscription expiration from the response
subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
addLogEntry("Renewed subscription " ~ subscriptionId ~ " with expiration: " ~ to!string(subscriptionExpiration.toISOExtString()));
} catch (OneDriveException e) {
if (e.httpStatusCode == 404) {
addLogEntry("The subscription is not found on the server. Recreating subscription ...");
subscriptionId = null;
subscriptionExpiration = Clock.currTime(UTC());
createSubscription();
} else {
throw e;
}
}
}
private void deleteSubscription() {
if (!hasValidSubscription()) {
return;
}
oneDriveApiInstance.deleteSubscription(subscriptionId);
addLogEntry("Deleted subscription");
}
private void logSubscriptionError(OneDriveException e) {
if (e.httpStatusCode == 400) {
// Log known 400 error where Microsoft cannot get a 200 OK from the webhook endpoint
//
// Sample 400 error:
// {
// "error": {
// "code": "InvalidRequest",
// "innerError": {
// "client-request-id": "<uuid>",
// "date": "<timestamp>",
// "request-id": "<uuid>"
// },
// "message": "Subscription validation request failed. Notification endpoint must respond with 200 OK to validation request."
// }
// }
try {
if (e.error["error"]["code"].str == "InvalidRequest") {
import std.regex;
auto msgReg = ctRegex!(r"Subscription validation request failed", "i");
auto m = matchFirst(e.error["error"]["message"].str, msgReg);
if (m) {
addLogEntry("ERROR: Cannot create or renew subscription: Microsoft did not get 200 OK from the webhook endpoint.");
return;
}
}
} catch (JSONException) {
// fallthrough
}
} else if (e.httpStatusCode == 401) {
// Log known 401 error where authentication failed
//
// Sample 401 error:
// {
// "error": {
// "code": "ExtensionError",
// "innerError": {
// "client-request-id": "<uuid>",
// "date": "<timestamp>",
// "request-id": "<uuid>"
// },
// "message": "Operation: Create; Exception: [Status Code: Unauthorized; Reason: Authentication failed]"
// }
// }
try {
if (e.error["error"]["code"].str == "ExtensionError") {
import std.regex;
auto msgReg = ctRegex!(r"Authentication failed", "i");
auto m = matchFirst(e.error["error"]["message"].str, msgReg);
if (m) {
addLogEntry("ERROR: Cannot create or renew subscription: Authentication failed.");
return;
}
}
} catch (JSONException) {
// fallthrough
}
} else if (e.httpStatusCode == 403) {
// Log known 403 error where the number of subscriptions on item has exceeded limit
//
// Sample 403 error:
// {
// "error": {
// "code": "ExtensionError",
// "innerError": {
// "client-request-id": "<uuid>",
// "date": "<timestamp>",
// "request-id": "<uuid>"
// },
// "message": "Operation: Create; Exception: [Status Code: Forbidden; Reason: Number of subscriptions on item has exceeded limit]"
// }
// }
try {
if (e.error["error"]["code"].str == "ExtensionError") {
import std.regex;
auto msgReg = ctRegex!(r"Number of subscriptions on item has exceeded limit", "i");
auto m = matchFirst(e.error["error"]["message"].str, msgReg);
if (m) {
addLogEntry("ERROR: Cannot create or renew subscription: Number of subscriptions has exceeded limit.");
return;
}
}
} catch (JSONException) {
// fallthrough
}
}
// Log detailed message for unknown errors
addLogEntry("ERROR: Cannot create or renew subscription.");
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
}
}