Fix misspellings found by codespell (#2715)

* Fix misspellings found by codespell
* Fix merge conflict

---------

Co-authored-by: abraunegg <alex.braunegg@gmail.com>
Authored by Dimitri Papadopoulos Orfanos on 2024-05-06 06:43:55 +02:00, committed by GitHub
parent 773a05c496
commit 1f86759003
29 changed files with 122 additions and 122 deletions


@ -40,7 +40,7 @@ ifeq ($(DC_TYPE),dmd)
# Add DMD Debugging Flags
DCFLAGS += -g -debug -gs
else
-# Add LDC Debuggging Flags
+# Add LDC Debugging Flags
DCFLAGS += -g -d-debug -gc
endif
else


@ -101,7 +101,7 @@ case $(basename $DC) in
VERSION=`$DC --version`
# remove everything up to first (
VERSION=${VERSION#* (}
-# remove everthing after ):
+# remove everything after ):
VERSION=${VERSION%%):*}
# now version should be something like L.M.N
MINVERSION=1.18.0
@ -162,7 +162,7 @@ dnl value via pkg-config and put it into $def_systemdsystemunitdir
AS_IF([test "x$with_systemdsystemunitdir" = "xyes" -o "x$with_systemdsystemunitdir" = "xauto"],
[ dnl true part, so try to determine with pkg-config
def_systemdsystemunitdir=$($PKG_CONFIG --variable=systemdsystemunitdir systemd)
-dnl if we cannot find it via pkg-config, *and* the user explicitely passed it in with,
+dnl if we cannot find it via pkg-config, *and* the user explicitly passed it in with,
dnl we warn, and in all cases we unset (set to no) the respective variable
AS_IF([test "x$def_systemdsystemunitdir" = "x"],
[ dnl we couldn't find the default value via pkg-config


@ -129,7 +129,7 @@ ExecStart=/usr/local/bin/onedrive --monitor --confdir="/home/myusername/.config/
### Step 3: Enable the new systemd service
-Once the file is correctly editied, you can enable the new systemd service using the following commands.
+Once the file is correctly edited, you can enable the new systemd service using the following commands.
#### Red Hat Enterprise Linux, CentOS Linux
```text
@ -230,7 +230,7 @@ docker run -it --name onedrive -v onedrive_conf_sharepoint_site50:/onedrive/conf
```
> [!TIP]
-> To avoid 're-authenticating' and 'authorising' each individual Docker container, if all the Docker containers are using the 'same' OneDrive credentials, you can re-use the 'refresh_token' from one Docker container to another by copying this file to the configuration Docker volume of each Docker container.
+> To avoid 're-authenticating' and 'authorising' each individual Docker container, if all the Docker containers are using the 'same' OneDrive credentials, you can reuse the 'refresh_token' from one Docker container to another by copying this file to the configuration Docker volume of each Docker container.
>
> If the account credentials are different .. you will need to re-authenticate each Docker container individually.
@ -243,7 +243,7 @@ To fix the problem of windows turning all files (that should be kept offline) in
To find this setting, open the onedrive pop-up window from the taskbar, click "Help & Settings" > "Settings". This opens a new window. Go to the tab "Settings" and look for the section "Files On-Demand".
-After unchecking the option and clicking "OK", the Windows OneDrive client should restart itself and start actually downloading your files so they will truely be available on your disk when offline. These files will then be fully accessible under Linux and the Linux OneDrive client.
+After unchecking the option and clicking "OK", the Windows OneDrive client should restart itself and start actually downloading your files so they will truly be available on your disk when offline. These files will then be fully accessible under Linux and the Linux OneDrive client.
| OneDrive Personal | Onedrive Business<br>SharePoint |
|---|---|
@ -259,7 +259,7 @@ The issue here is - how does the client react if the mount point gets removed -
The client has zero knowledge of any event that causes a mountpoint to become unavailable, thus, the client (if you are running as a service) will assume that you deleted the files, thus, will go ahead and delete all your files on OneDrive. This is most certainly an undesirable action.
-There are a few options here which you can configure in your 'config' file to assist you to prevent this sort of item from occuring:
+There are a few options here which you can configure in your 'config' file to assist you to prevent this sort of item from occurring:
1. classify_as_big_delete
2. check_nomount
3. check_nosync
@ -285,7 +285,7 @@ After making this sort of change - test with `--dry-run` so you can see the impa
## Upload data from the local ~/OneDrive folder to a specific location on OneDrive
In some environments, you may not want your local ~/OneDrive folder to be uploaded directly to the root of your OneDrive account online.
-Unfortunatly, the OneDrive API lacks any facility to perform a re-direction of data during upload.
+Unfortunately, the OneDrive API lacks any facility to perform a re-direction of data during upload.
The workaround for this is to structure your local filesystem and reconfigure your client to achieve the desired goal.


@ -264,7 +264,7 @@ _**CLI Option Use:**_ `--disable-upload-validation`
> If you're uploading data to SharePoint or OneDrive Business Shared Folders, you might find it necessary to activate this option. It's important to note that any issues encountered aren't due to a problem with this client; instead, they should be regarded as issues with the Microsoft OneDrive technology stack. Enabling this option disables all upload integrity checks.
### display_running_config
-_**Description:**_ This option will include the running config of the application at application startup. This may be desirable to enable when running in containerised environments so that any application logging that is occuring, will have the application configuration being consumed at startup, written out to any applicable log file.
+_**Description:**_ This option will include the running config of the application at application startup. This may be desirable to enable when running in containerised environments so that any application logging that is occurring, will have the application configuration being consumed at startup, written out to any applicable log file.
_**Value Type:**_ Boolean
@ -304,7 +304,7 @@ _**Default Value:**_ *None*
_**Config Example:**_ `drive_id = "b!bO8V6s9SSk9R7mWhpIjUrotN73WlW3tEv3OxP_QfIdQimEdOHR-1So6CqeG1MfDB"`
> [!NOTE]
-> This option is typically only used when configuring the client to sync a specific SharePoint Library. If this configuration option is specified in your config file, a value must be specified otherwise the application will exit citing a fatal error has occured.
+> This option is typically only used when configuring the client to sync a specific SharePoint Library. If this configuration option is specified in your config file, a value must be specified otherwise the application will exit citing a fatal error has occurred.
### dry_run
_**Description:**_ This setting controls the application capability to test your application configuration without actually performing any actual activity (download, upload, move, delete, folder creation).
@ -407,7 +407,7 @@ _**CLI Option Use:**_ `--monitor-interval '600'`
> A minimum value of 300 is enforced for this configuration setting.
### monitor_log_frequency
-_**Description:**_ This configuration option controls the suppression of frequently printed log items to the system console when using `--monitor` mode. The aim of this configuration item is to reduce the log output when near zero sync activity is occuring.
+_**Description:**_ This configuration option controls the suppression of frequently printed log items to the system console when using `--monitor` mode. The aim of this configuration item is to reduce the log output when near zero sync activity is occurring.
_**Value Type:**_ Integer
@ -428,7 +428,7 @@ Sync Engine Initialised with new Onedrive API instance
All application operations will be performed in: /home/user/OneDrive
OneDrive synchronisation interval (seconds): 300
Initialising filesystem inotify monitoring ...
-Performing initial syncronisation to ensure consistent local state ...
+Performing initial synchronisation to ensure consistent local state ...
Starting a sync with Microsoft OneDrive
Fetching items from the OneDrive API for Drive ID: b!bO8V6s9SSk9R7mWhpIjUrotN73WlW3tEv3OxP_QfIdQimEdOHR-1So6CqeG1MfDB ..
Processing changes and items received from Microsoft OneDrive ...
@ -446,10 +446,10 @@ Syncing changes from Microsoft OneDrive ...
Sync with Microsoft OneDrive is complete
```
> [!NOTE]
-> The additional log output `Performing a database consistency and integrity check on locally stored data ...` will only be displayed when this activity is occuring which is triggered by 'monitor_fullscan_frequency'.
+> The additional log output `Performing a database consistency and integrity check on locally stored data ...` will only be displayed when this activity is occurring which is triggered by 'monitor_fullscan_frequency'.
> [!NOTE]
-> If verbose application output is being used (`--verbose`), then this configuration setting has zero effect, as application verbose output takes priority over application output surpression.
+> If verbose application output is being used (`--verbose`), then this configuration setting has zero effect, as application verbose output takes priority over application output suppression.
### no_remote_delete
_**Description:**_ This configuration option controls whether local file and folder deletes are actioned on Microsoft OneDrive.
@ -764,7 +764,7 @@ _**CLI Option Use:**_ `--sync-root-files`
> Although it's not mandatory, it's recommended that after enabling this option, you perform a `--resync`. This ensures that any previously excluded content is now included in your sync process.
### threads
-_**Description:**_ This configuration option controls the number of 'threads' for upload and download operations when files need to be transfered between your local system and Microsoft OneDrive.
+_**Description:**_ This configuration option controls the number of 'threads' for upload and download operations when files need to be transferred between your local system and Microsoft OneDrive.
_**Value Type:**_ Integer
@ -955,7 +955,7 @@ _**Usage Example:**_ `onedrive --auth-response https://login.microsoftonline.com
> ```text
> https://login.microsoftonline.com/common/oauth2/v2.0/authorise?client_id=22c49a0d-d21c-4792-aed1-8f163c982546&scope=Files.ReadWrite%20Files.ReadWrite.all%20Sites.ReadWrite.All%20offline_access&response_type=code&redirect_uri=https://login.microsoftonline.com/common/oauth2/nativeclient
> ```
-> With this URL being known, it is possible ahead of time to request an authentication token by visiting this URL, and performing the authenticaton access request.
+> With this URL being known, it is possible ahead of time to request an authentication token by visiting this URL, and performing the authentication access request.
### CLI Option: --confdir
_**Description:**_ This CLI option allows the user to specify where all the application configuration and relevant components are stored.
@ -963,7 +963,7 @@ _**Description:**_ This CLI option allows the user to specify where all the appl
_**Usage Example:**_ `onedrive --confdir '~/.config/onedrive-business/'`
> [!IMPORTANT]
-> If using this option, it must be specified each and every time the application is used. If this is ommited, the application default configuration directory will be used.
+> If using this option, it must be specified each and every time the application is used. If this is omitted, the application default configuration directory will be used.
### CLI Option: --create-directory
_**Description:**_ This CLI option allows the user to create the specified directory path on Microsoft OneDrive without performing a sync.
@ -1030,7 +1030,7 @@ _**Usage Example:**_ `onedrive --sync --verbose --force-sync --single-directory
>
> Are you sure you wish to proceed with --force-sync [Y/N]
> ```
-> To procceed with this sync task, you must risk accept the actions you are taking. If you have any concerns, first use `--dry-run` and evaluate the outcome before proceeding with the actual action.
+> To proceed with this sync task, you must risk accept the actions you are taking. If you have any concerns, first use `--dry-run` and evaluate the outcome before proceeding with the actual action.
### CLI Option: --get-file-link
_**Description:**_ This CLI option queries the OneDrive API and return's the WebURL for the given local file.
@ -1087,7 +1087,7 @@ Shared By: test user (testuser@domain.tld)
```
### CLI Option: --logout
-_**Description:**_ This CLI option removes this clients authentictaion status with Microsoft OneDrive. Any further application use will requrie the application to be re-authenticated with Microsoft OneDrive.
+_**Description:**_ This CLI option removes this clients authentictaion status with Microsoft OneDrive. Any further application use will require the application to be re-authenticated with Microsoft OneDrive.
_**Usage Example:**_ `onedrive --logout`
@ -1110,7 +1110,7 @@ _**Description:**_ Print the current access token being used to access Microsoft
_**Usage Example:**_ `onedrive --verbose --verbose --debug-https --print-access-token`
> [!CAUTION]
-> Do not use this option if you do not know why you are wanting to use it. Be highly cautious of exposing this object. Change your password if you feel that you have inadvertantly exposed this token.
+> Do not use this option if you do not know why you are wanting to use it. Be highly cautious of exposing this object. Change your password if you feel that you have inadvertently exposed this token.
### CLI Option: --reauth
_**Description:**_ This CLI option controls the ability to re-authenticate your client with Microsoft OneDrive.
@ -1177,7 +1177,7 @@ _**Depreciated Config Example:**_ `force_http_2 = "true"`
_**Depreciated CLI Option:**_ `--force-http-2`
-_**Reason for depreciation:**_ HTTP/2 will be used by default where possible, when the OneDrive API platform does not downgrade the connection to HTTP/1.1, thus this confuguration option is no longer required.
+_**Reason for depreciation:**_ HTTP/2 will be used by default where possible, when the OneDrive API platform does not downgrade the connection to HTTP/1.1, thus this configuration option is no longer required.
### min_notify_changes
_**Description:**_ Minimum number of pending incoming changes necessary to trigger a GUI desktop notification.


@ -1,5 +1,5 @@
# RPM Package Build Process
-The instuctions below have been tested on the following systems:
+The instructions below have been tested on the following systems:
* CentOS 7 x86_64
* CentOS 8 x86_64


@ -239,7 +239,7 @@ When this is viewed locally, on Linux, this 'Files Shared With Me' and content i
![files_shared_with_me_folder](./images/files_shared_with_me_folder.png)
-Unfortunatly there is no Microsoft Windows equivalent for this capability.
+Unfortunately there is no Microsoft Windows equivalent for this capability.
## Known Issues
Shared folders, shared with you from people outside of your 'organisation' are unable to be synced. This is due to the Microsoft Graph API not presenting these folders.


@ -72,7 +72,7 @@ The diagrams below show the high level process flow and decision making when run
### Upload a new local file to Microsoft OneDrive
![uploadFile](./puml/uploadFile.png)
-### Determining if an 'item' is syncronised between Microsoft OneDrive and the local file system
+### Determining if an 'item' is synchronised between Microsoft OneDrive and the local file system
![Item Sync Determination](./puml/is_item_in_sync.png)
### Determining if an 'item' is excluded due to 'Client Side Filtering' rules
@ -304,7 +304,7 @@ The requested directory to create was found on OneDrive - skipping creating the
New items to upload to OneDrive: 9
Total New Data to Upload: 49 KB
...
-The file we are attemtping to upload as a new file already exists on Microsoft OneDrive: ./1.txt
+The file we are attempting to upload as a new file already exists on Microsoft OneDrive: ./1.txt
Skipping uploading this item as a new file, will upload as a modified file (online file already exists): ./1.txt
The local item is out-of-sync with OneDrive, renaming to preserve existing file and prevent local data loss: ./1.txt -> ./1-onedrive-client-dev.txt
Uploading new file ./1-onedrive-client-dev.txt ... done.


@ -122,7 +122,7 @@ For reference, below are the available application logging output functions and
If the code changes any of the functionality that is documented, it is expected that any PR submission will also include updating the respective section of user documentation and/or man page as part of the code submission.
## Development Testing
-Whilst there are more modern DMD and LDC compilers available, ensuring client build compatability with older platforms is a key requirement.
+Whilst there are more modern DMD and LDC compilers available, ensuring client build compatibility with older platforms is a key requirement.
The issue stems from Debian and Ubuntu LTS versions - such as Ubuntu 20.04. It's [ldc package](https://packages.ubuntu.com/focal/ldc) is only v1.20.1 , thus, this is the minimum version that all compilation needs to be tested against.
@ -144,7 +144,7 @@ Application output that is doing whatever | or illustration of issue being fixed
```
Please also include validation of compilation using the minimum LDC package version.
-To assist with your testing validation against the minimum LDC compiler version, a script as per below could assit you with this validation:
+To assist with your testing validation against the minimum LDC compiler version, a script as per below could assist you with this validation:
```bash


@ -253,7 +253,7 @@ If you are experienced with docker and onedrive, you can use the following scrip
```bash
# Update ONEDRIVE_DATA_DIR with correct OneDrive directory path
ONEDRIVE_DATA_DIR="${HOME}/OneDrive"
-# Create directory if non-existant
+# Create directory if non-existent
mkdir -p ${ONEDRIVE_DATA_DIR}
firstRun='-d'


@ -3,7 +3,7 @@
> Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Process Overview
-In some cases it is a requirement to utilise specific Microsoft Azure cloud deployments to conform with data and security reuqirements that requires data to reside within the geographic borders of that country.
+In some cases it is a requirement to utilise specific Microsoft Azure cloud deployments to conform with data and security requirements that requires data to reside within the geographic borders of that country.
Current national clouds that are supported are:
* Microsoft Cloud for US Government
* Microsoft Cloud Germany
@ -106,7 +106,7 @@ azure_tenant_id = "insert valid entry here"
This will configure your client to use the specified tenant id in its Azure AD and Graph endpoint URIs, instead of "common".
The tenant id may be the GUID Directory ID (formatted "00000000-0000-0000-0000-000000000000"), or the fully qualified tenant name (e.g. "example.onmicrosoft.us").
-The GUID Directory ID may be located in the Azure administation page as per [https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id](https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id). Note that you may need to go to your national-deployment-specific administration page, rather than following the links within that document.
+The GUID Directory ID may be located in the Azure administration page as per [https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id](https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id). Note that you may need to go to your national-deployment-specific administration page, rather than following the links within that document.
The tenant name may be obtained by following the PowerShell instructions on [https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id](https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id); it is shown as the "TenantDomain" upon completion of the "Connect-AzureAD" command.
**Example:**


@ -22,7 +22,7 @@ if (JSON item is a file) then (yes)
endif
:Download file (as per online JSON item) as required;
else (no)
-:Other handling for directories | root ojects | deleted items;
+:Other handling for directories | root objects | deleted items;
endif
:Performing a database consistency and\nintegrity check on locally stored data;
:Scan file system for any new data to upload;


@ -26,7 +26,7 @@ if (JSON item is a file) then (yes)
endif
:Download file (as per online JSON item) as required;
else (no)
-:Other handling for directories | root ojects | deleted items;
+:Other handling for directories | root objects | deleted items;
endif
:Performing a database consistency and\nintegrity check on locally stored data;
:Scan file system for any new data to upload;


@ -56,7 +56,7 @@ if (JSON item is a file) then (yes)
:Download file (as per online JSON item) as required;
else (no)
-:Other handling for directories | root ojects | deleted items;
+:Other handling for directories | root objects | deleted items;
endif
stop
@enduml


@ -62,7 +62,7 @@ if (JSON item is a file) then (yes)
:Download file (as per online JSON item) as required;
else (no)
-:Other handling for directories | root ojects | deleted items;
+:Other handling for directories | root objects | deleted items;
endif


@ -18,7 +18,7 @@ partition "Process /delta JSON Responses" {
:Process 'root' JSON;
else (no)
if (Is 'deleted' object in sync) then (yes)
-:Process delection of local item;
+:Process deletion of local item;
else (no)
:Rename local file as it is not in sync;
note right: Deletion event conflict handling\nLocal data loss prevention


@ -174,7 +174,7 @@ ExecStart=/usr/local/bin/onedrive --monitor --confdir="/home/myusername/.config/
> When running the client manually, `--confdir="~/.config/......` is acceptable. In a systemd configuration file, the full path must be used. The `~` must be manually expanded when editing your systemd file.
### Step 3: Enable the new systemd service
-Once the file is correctly editied, you can enable the new systemd service using the following commands.
+Once the file is correctly edited, you can enable the new systemd service using the following commands.
#### Red Hat Enterprise Linux, CentOS Linux
```text


@ -38,7 +38,7 @@ OneDrive Client for Linux is not responsible for the Microsoft OneDrive Service
To the fullest extent permitted by law, we shall not be liable for any direct, indirect, incidental, special, consequential, or punitive damages, or any loss of profits or revenues, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from (a) your use or inability to use the Service, or (b) any other matter relating to the Service.
-This limitiation of liability explicitly relates to the use of the OneDrive Client for Linux software and does not affect your rights under the GPLv3.
+This limitation of liability explicitly relates to the use of the OneDrive Client for Linux software and does not affect your rights under the GPLv3.
## 7. Changes to Terms


@ -19,7 +19,7 @@ This document outlines the steps for installing the 'onedrive' client on Debian,
> Although packages for the 'onedrive' client are available through distribution repositories, it is strongly advised against installing them. These distribution-provided packages are outdated, unsupported, and contain bugs and issues that have already been resolved in newer versions. They should not be used.
## Determine which instructions to use
-Ubuntu and its clones are based on various different releases, thus, you must use the correct instructions below, otherwise you may run into package dependancy issues and will be unable to install the client.
+Ubuntu and its clones are based on various different releases, thus, you must use the correct instructions below, otherwise you may run into package dependency issues and will be unable to install the client.
### Step 1: Remove any configured PPA and associated 'onedrive' package and systemd service files
@ -43,7 +43,7 @@ This systemd entry is erroneous and needs to be removed. Without removing this e
Opening the item database ...
ERROR: onedrive application is already running - check system process list for active application instances
-- Use 'sudo ps aufxw | grep onedrive' to potentially determine acive running process
+- Use 'sudo ps aufxw | grep onedrive' to potentially determine active running process
Waiting for all internal threads to complete before exiting application
```


@ -127,7 +127,7 @@ Query OneDrive service and report on pending changes.
.TP
\fB\-\-auth-files\fR \fIARG\fR
-Perform authentication not via interactive dialog but via files that are read/writen when using this option. The two files are passed in as \fBARG\fP in the format \fBauthUrl:responseUrl\fP.
+Perform authentication not via interactive dialog but via files that are read/written when using this option. The two files are passed in as \fBARG\fP in the format \fBauthUrl:responseUrl\fP.
The authorisation URL is written to the \fBauthUrl\fP file, then \fBonedrive\fP waits for the file \fBresponseUrl\fP to be present, and reads the response from that file.
.br
Always specify the full path when using this option, otherwise the application will default to using the default configuration path for these files (~/.config/onedrive/)


@ -22,7 +22,7 @@ This client represents a 100% re-imagining of the original work, addressing nume
* Monitors local files in real-time using inotify
* Supports interrupted uploads for completion at a later time
* Capability to sync remote updates immediately via webhooks
-* Enhanced syncronisation speed with multi-threaded file transfers
+* Enhanced synchronisation speed with multi-threaded file transfers
* Manages traffic bandwidth use with rate limiting
* Supports seamless access to shared folders and files across both OneDrive Personal and OneDrive for Business accounts
* Supports national cloud deployments including Microsoft Cloud for US Government, Microsoft Cloud Germany and Azure and Office 365 operated by VNET in China
@ -43,7 +43,7 @@ This client represents a 100% re-imagining of the original work, addressing nume
Refer to [Frequently Asked Questions](https://github.com/abraunegg/onedrive/wiki/Frequently-Asked-Questions)
## Have a question
-If you have a question or need something clarified, please raise a new disscussion post [here](https://github.com/abraunegg/onedrive/discussions)
+If you have a question or need something clarified, please raise a new discussion post [here](https://github.com/abraunegg/onedrive/discussions)
## Reporting an Issue or Bug
If you encounter any bugs you can report them here on Github. Before filing an issue be sure to:


@ -1101,7 +1101,7 @@ class Cgi {
const(ubyte)[] delegate() readdata = null,
// finally, use this to do custom output if needed
void delegate(const(ubyte)[]) _rawDataOutput = null,
-// to flush teh custom output
+// to flush the custom output
void delegate() _flush = null
)
{
@ -2227,7 +2227,7 @@ class Cgi {
uri ~= "s";
uri ~= "://";
uri ~= host;
-/+ // the host has the port so p sure this never needed, cgi on apache and embedded http all do the right hting now
+/+ // the host has the port so p sure this never needed, cgi on apache and embedded http all do the right thing now
version(none)
if(!(!port || port == defaultPort)) {
uri ~= ":";
@ -2317,7 +2317,7 @@ class Cgi {
/// This is like setResponseExpires, but it can be called multiple times. The setting most in the past is the one kept.
/// If you have multiple functions, they all might call updateResponseExpires about their own return value. The program
-/// output as a whole is as cacheable as the least cachable part in the chain.
+/// output as a whole is as cacheable as the least cacheable part in the chain.
/// setCache(false) always overrides this - it is, by definition, the strictest anti-cache statement available. If your site outputs sensitive user data, you should probably call setCache(false) when you do, to ensure no other functions will cache the content, as it may be a privacy risk.
/// Conversely, setting here overrides setCache(true), since any expiration date is in the past of infinity.
@ -2329,7 +2329,7 @@ class Cgi {
}
/*
-/// Set to true if you want the result to be cached publically - that is, is the content shared?
+/// Set to true if you want the result to be cached publicly - that is, is the content shared?
/// Should generally be false if the user is logged in. It assumes private cache only.
/// setCache(true) also turns on public caching, and setCache(false) sets to private.
void setPublicCaching(bool allowPublicCaches) {
@ -7292,7 +7292,7 @@ private void serialize(T)(scope void delegate(scope ubyte[]) sink, T t) {
} else static assert(0, T.stringof);
}
-// all may be stack buffers, so use cautio
+// all may be stack buffers, so use caution
private void deserialize(T)(scope ubyte[] delegate(int sz) get, scope void delegate(T) dg) {
static if(is(T == struct)) {
T t;
@ -10181,7 +10181,7 @@ struct Redirection {
/++
Serves a class' methods, as a kind of low-state RPC over the web. To be used with [dispatcher].
-Usage of this function will add a dependency on [arsd.dom] and [arsd.jsvar] unless you have overriden
+Usage of this function will add a dependency on [arsd.dom] and [arsd.jsvar] unless you have overridden
the presenter in the dispatcher.
FIXME: explain this better
@ -10621,7 +10621,7 @@ template urlNamesForMethod(alias method, string default_) {
enum AccessCheck {
allowed,
denied,
-nonExistant,
+nonExistent,
}
enum Operation {


@ -26,7 +26,7 @@ class ClientSideFiltering {
bool skipDotfiles = false;
this(ApplicationConfig appConfig) {
-// Configure the class varaible to consume the application configuration
+// Configure the class variable to consume the application configuration
this.appConfig = appConfig;
}


@ -200,8 +200,8 @@ class ApplicationConfig {
bool initialise(string confdirOption, bool helpRequested) {
// Default runtime configuration - entries in config file ~/.config/onedrive/config or derived from variables above
-// An entry here means it can be set via the config file if there is a coresponding entry, read from config and set via update_from_args()
-// The below becomes the 'default' application configuration before config file and/or cli options are overlayed on top
+// An entry here means it can be set via the config file if there is a corresponding entry, read from config and set via update_from_args()
+// The below becomes the 'default' application configuration before config file and/or cli options are overlaid on top
// - Set the required default values
stringValues["application_id"] = defaultApplicationId;
@ -1435,7 +1435,7 @@ class ApplicationConfig {
// What did the user enter?
addLogEntry("--resync warning User Response Entered: " ~ to!string(response), ["debug"]);
-// Evaluate user repsonse
+// Evaluate user response
if ((to!string(response) == "y") || (to!string(response) == "Y")) {
// User has accepted --resync risk to proceed
userRiskAcceptance = true;
@ -1481,7 +1481,7 @@ class ApplicationConfig {
// What did the user enter?
addLogEntry("--force-sync warning User Response Entered: " ~ to!string(response), ["debug"]);
-// Evaluate user repsonse
+// Evaluate user response
if ((to!string(response) == "y") || (to!string(response) == "Y")) {
// User has accepted --force-sync risk to proceed
userRiskAcceptance = true;
@ -2124,17 +2124,17 @@ class ApplicationConfig {
// Are we performing some sort of 'no-sync' task?
// - Are we obtaining the Office 365 Drive ID for a given Office 365 SharePoint Shared Library?
-// - Are we displaying the sync satus?
+// - Are we displaying the sync status?
// - Are we getting the URL for a file online?
// - Are we listing who modified a file last online?
// - Are we listing OneDrive Business Shared Items?
-// - Are we createing a shareable link for an existing file on OneDrive?
+// - Are we creating a shareable link for an existing file on OneDrive?
// - Are we just creating a directory online, without any sync being performed?
// - Are we just deleting a directory online, without any sync being performed?
// - Are we renaming or moving a directory?
// - Are we displaying the quota information?
-// Return a true|false if any of these have been set, so that we use the 'dry-run' DB copy, to execute these tasks, incase the client is currently operational
+// Return a true|false if any of these have been set, so that we use the 'dry-run' DB copy, to execute these tasks, in case the client is currently operational
// --get-sharepoint-drive-id - Get the SharePoint Library drive_id
if (getValueString("sharepoint_library_name") != "") {
@ -2166,7 +2166,7 @@ class ApplicationConfig {
noSyncOperation = true;
}
-// --create-share-link - Are we createing a shareable link for an existing file on OneDrive?
+// --create-share-link - Are we creating a shareable link for an existing file on OneDrive?
if (getValueString("create_share_link") != "") {
// flag that a no sync operation has been requested
noSyncOperation = true;


@ -208,7 +208,7 @@ final class ItemDatabase {
if (e.msg == "database is locked") {
addLogEntry();
addLogEntry("ERROR: The 'onedrive' application is already running - please check system process list for active application instances");
addLogEntry(" - Use 'sudo ps aufxw | grep onedrive' to potentially determine acive running process");
addLogEntry(" - Use 'sudo ps aufxw | grep onedrive' to potentially determine active running process");
addLogEntry();
} else {
// A different error .. detail the message, detail the actual SQLite Error Code to assist with troubleshooting
@ -693,7 +693,7 @@ final class ItemDatabase {
// National Cloud Deployments (US and DE) do not support /delta as a query
// We need to track in the database that this item is in sync
// As we query /children to get all children from OneDrive, update anything in the database
-// to be flagged as not-in-sync, thus, we can use that flag to determing what was previously
+// to be flagged as not-in-sync, thus, we can use that flag to determine what was previously
// in-sync, but now deleted on OneDrive
void downgradeSyncStatusFlag(const(char)[] driveId, const(char)[] id) {
assert(driveId);


@ -73,7 +73,7 @@ int main(string[] cliArgs) {
bool online = false;
// Does the operating environment have shell environment variables set
bool shellEnvSet = false;
-// What is the runtime syncronisation directory that will be used
+// What is the runtime synchronisation directory that will be used
// Typically this will be '~/OneDrive' .. however tilde expansion is unreliable
string runtimeSyncDirectory = "";
@ -432,7 +432,7 @@ int main(string[] cliArgs) {
if (appConfig.apiWasInitialised) {
addLogEntry("The OneDrive API was initialised successfully", ["verbose"]);
-// Flag that we were able to initalise the API in the application config
+// Flag that we were able to initialise the API in the application config
oneDriveApiInstance.debugOutputConfiguredAPIItems();
oneDriveApiInstance.releaseCurlEngine();
object.destroy(oneDriveApiInstance);
@ -460,11 +460,11 @@ int main(string[] cliArgs) {
// Are we performing some sort of 'no-sync' task?
// - Are we obtaining the Office 365 Drive ID for a given Office 365 SharePoint Shared Library?
-// - Are we displaying the sync satus?
+// - Are we displaying the sync status?
// - Are we getting the URL for a file online?
// - Are we listing who modified a file last online?
// - Are we listing OneDrive Business Shared Items?
-// - Are we createing a shareable link for an existing file on OneDrive?
+// - Are we creating a shareable link for an existing file on OneDrive?
// - Are we just creating a directory online, without any sync being performed?
// - Are we just deleting a directory online, without any sync being performed?
// - Are we renaming or moving a directory?
@ -529,7 +529,7 @@ int main(string[] cliArgs) {
return EXIT_SUCCESS;
}
-// --create-share-link - Are we createing a shareable link for an existing file on OneDrive?
+// --create-share-link - Are we creating a shareable link for an existing file on OneDrive?
if (appConfig.getValueString("create_share_link") != "") {
// Query OneDrive for the file, and if valid, create a shareable link for the file
@ -620,7 +620,7 @@ int main(string[] cliArgs) {
}
}
-// Configure the sync direcory based on the runtimeSyncDirectory configured directory
+// Configure the sync directory based on the runtimeSyncDirectory configured directory
addLogEntry("All application operations will be performed in the configured local 'sync_dir' directory: " ~ runtimeSyncDirectory, ["verbose"]);
// Try and set the 'sync_dir', attempt to create if it does not exist
try {
@ -656,7 +656,7 @@ int main(string[] cliArgs) {
// Set the default thread pool value
defaultPoolThreads(to!int(appConfig.getValueLong("threads")));
-// Is the sync engine initiallised correctly?
+// Is the sync engine initialised correctly?
if (appConfig.syncEngineWasInitialised) {
// Configure some initial variables
string singleDirectoryPath;
@ -832,7 +832,7 @@ int main(string[] cliArgs) {
try {
addLogEntry("Initialising filesystem inotify monitoring ...");
filesystemMonitor.initialise();
addLogEntry("Performing initial syncronisation to ensure consistent local state ...");
addLogEntry("Performing initial synchronisation to ensure consistent local state ...");
} catch (MonitorException e) {
// monitor class initialisation failed
addLogEntry("ERROR: " ~ e.msg);
@ -919,13 +919,13 @@ int main(string[] cliArgs) {
// Loop Start
addLogEntry(loopStartOutputMessage, ["debug"]);
addLogEntry("Total Run-Time Loop Number: " ~ to!string(monitorLoopFullCount), ["debug"]);
addLogEntry("Full Scan Freqency Loop Number: " ~ to!string(fullScanFrequencyLoopCount), ["debug"]);
addLogEntry("Full Scan Frequency Loop Number: " ~ to!string(fullScanFrequencyLoopCount), ["debug"]);
SysTime startFunctionProcessingTime = Clock.currTime();
addLogEntry("Start Monitor Loop Time: " ~ to!string(startFunctionProcessingTime), ["debug"]);
-// Do we perform any monitor console logging output surpression?
+// Do we perform any monitor console logging output suppression?
// 'monitor_log_frequency' controls how often, in a non-verbose application output mode, how often
-// the full output of what is occuring is done. This is done to lessen the 'verbosity' of non-verbose
+// the full output of what is occurring is done. This is done to lessen the 'verbosity' of non-verbose
// logging, but only when running in --monitor
if (monitorLogOutputLoopCount > logOutputSupressionInterval) {
// unsurpress the logging output
@ -933,13 +933,13 @@ int main(string[] cliArgs) {
addLogEntry("Unsuppressing initial sync log output", ["debug"]);
appConfig.surpressLoggingOutput = false;
} else {
-// do we surpress the logging output to absolute minimal
+// do we suppress the logging output to absolute minimal
if (monitorLoopFullCount == 1) {
// application startup with --monitor
addLogEntry("Unsuppressing initial sync log output", ["debug"]);
appConfig.surpressLoggingOutput = false;
} else {
-// only surpress if we are not doing --verbose or higher
+// only suppress if we are not doing --verbose or higher
if (appConfig.verbosityCount == 0) {
addLogEntry("Suppressing --monitor log output", ["debug"]);
appConfig.surpressLoggingOutput = true;
@ -1146,7 +1146,7 @@ void performUploadOnlySyncProcess(string localPath, Monitor filesystemMonitor =
void performStandardSyncProcess(string localPath, Monitor filesystemMonitor = null) {
-// If we are performing log supression, output this message so the user knows what is happening
+// If we are performing log suppression, output this message so the user knows what is happening
if (appConfig.surpressLoggingOutput) {
addLogEntry("Syncing changes from Microsoft OneDrive ...");
}
@ -1214,7 +1214,7 @@ void performStandardSyncProcess(string localPath, Monitor filesystemMonitor = nu
}
// If we are not doing a 'force_children_scan' perform a true-up
-// 'force_children_scan' is used when using /children rather than /delta and it is not efficent to re-run this exact same process twice
+// 'force_children_scan' is used when using /children rather than /delta and it is not efficient to re-run this exact same process twice
if (!appConfig.getValueBool("force_children_scan")) {
// Perform the final true up scan to ensure we have correctly replicated the current online state locally
if (!appConfig.surpressLoggingOutput) {


@ -275,7 +275,7 @@ final class Monitor {
ActionHolder actionHolder;
-// Configure the class varaible to consume the application configuration including selective sync
+// Configure the class variable to consume the application configuration including selective sync
this(ApplicationConfig appConfig, ClientSideFiltering selectiveSync) {
this.appConfig = appConfig;
this.selectiveSync = selectiveSync;
@ -335,7 +335,7 @@ final class Monitor {
wdToDirName = null;
}
-// Recursivly add this path to be monitored
+// Recursively add this path to be monitored
private void addRecursive(string dirname) {
// skip non existing/disappeared items
if (!exists(dirname)) {
@ -365,7 +365,7 @@ final class Monitor {
return;
}
}
-// is the path exluded by sync_list?
+// is the path excluded by sync_list?
if (selectiveSync.isPathExcludedViaSyncList(buildNormalizedPath(dirname))) {
// dont add a watch for this item
addLogEntry("Skipping monitoring due to sync_list match: " ~ dirname, ["debug"]);
@ -392,7 +392,7 @@ final class Monitor {
if (isDir(dirname)) {
// This is a directory
-// is the path exluded if skip_dotfiles configured and path is a .folder?
+// is the path excluded if skip_dotfiles configured and path is a .folder?
if ((selectiveSync.getSkipDotfiles()) && (isDotFile(dirname))) {
// dont add a watch for this directory
return;
@ -407,7 +407,7 @@ final class Monitor {
wdToDirName[wd] = buildNormalizedPath(dirname) ~ "/";
}
-// if this is a directory, recursivly add this path
+// if this is a directory, recursively add this path
if (isDir(dirname)) {
// try and get all the directory entities for this path
try {


@ -90,7 +90,7 @@ class OneDriveApi {
bool keepAlive = false;
this(ApplicationConfig appConfig) {
-// Configure the class varaible to consume the application configuration
+// Configure the class variable to consume the application configuration
this.appConfig = appConfig;
this.curlEngine = null;
// Configure the major API Query URL's, based on using application configuration


@ -117,7 +117,7 @@ class SyncEngine {
string[] pathsRenamed;
// List of paths that were a POSIX case-insensitive match, thus could not be created online
string[] posixViolationPaths;
-// List of local paths, that, when using the OneDrive Business Shared Folders feature, then diabling it, folder still exists locally and online
+// List of local paths, that, when using the OneDrive Business Shared Folders feature, then disabling it, folder still exists locally and online
// This list of local paths need to be skipped
string[] businessSharedFoldersOnlineToSkip;
// List of interrupted uploads session files that need to be resumed
@ -202,9 +202,9 @@ class SyncEngine {
processPool = new TaskPool(to!int(appConfig.getValueLong("threads")));
addLogEntry("Initialised TaskPool worker with threads: " ~ to!string(processPool.size), ["debug"]);
-// Configure the class varaible to consume the application configuration
+// Configure the class variable to consume the application configuration
this.appConfig = appConfig;
-// Configure the class varaible to consume the database configuration
+// Configure the class variable to consume the database configuration
this.itemDB = itemDB;
// Configure the class variable to consume the selective sync (skip_dir, skip_file and sync_list) configuration
this.selectiveSync = selectiveSync;
@ -844,7 +844,7 @@ class SyncEngine {
//
// - Are we performing a --download-only --cleanup-local-files action?
// - If we are, and we use a normal /delta query, we get all the local 'deleted' objects as well.
-// - If the user deletes a folder online, then replaces it online, we download the deletion events and process the new 'upload' via the web iterface ..
+// - If the user deletes a folder online, then replaces it online, we download the deletion events and process the new 'upload' via the web interface ..
// the net effect of this, is that the valid local files we want to keep, are actually deleted ...... not desirable
if ((singleDirectoryScope) || (nationalCloudDeployment) || (cleanupLocalFiles)) {
// Generate a simulated /delta response so that we correctly capture the current online state, less any 'online' delete and replace activity
@ -887,7 +887,7 @@ class SyncEngine {
}
}
-// Dynamic output for non-verbose and verbose run so that the user knows something is being retreived from the OneDrive API
+// Dynamic output for non-verbose and verbose run so that the user knows something is being retrieved from the OneDrive API
if (appConfig.verbosityCount == 0) {
if (!appConfig.surpressLoggingOutput) {
addProcessingLogHeaderEntry("Fetching items from the OneDrive API for Drive ID: " ~ driveIdToQuery, appConfig.verbosityCount);
@ -910,7 +910,7 @@ class SyncEngine {
// If the initial deltaChanges response is an invalid JSON object, keep trying until we get a valid response ..
if (deltaChanges.type() != JSONType.object) {
while (deltaChanges.type() != JSONType.object) {
-// Handle the invalid JSON response adn retry
+// Handle the invalid JSON response and retry
addLogEntry("ERROR: Query of the OneDrive API via deltaChanges = getDeltaChangesByItemId() returned an invalid JSON response", ["debug"]);
deltaChanges = getDeltaChangesByItemId(driveIdToQuery, itemIdToQuery, currentDeltaLink, getDeltaQueryOneDriveApiInstance);
}
@ -1671,7 +1671,7 @@ class SyncEngine {
// What path are we checking?
addLogEntry("sync_list item to check: " ~ newItemPath, ["debug"]);
-// Unfortunatly there is no avoiding this call to check if the path is excluded|included via sync_list
+// Unfortunately there is no avoiding this call to check if the path is excluded|included via sync_list
if (selectiveSync.isPathExcludedViaSyncList(newItemPath)) {
// selective sync advised to skip, however is this a file and are we configured to upload / download files in the root?
if ((isItemFile(onedriveJSONItem)) && (appConfig.getValueBool("sync_root_files")) && (rootName(newItemPath) == "") ) {
@ -2150,7 +2150,7 @@ class SyncEngine {
fileJSONItemsToDownload ~= onedriveJSONItem;
} else {
// If the timestamp is different, or we are running a client operational mode that does not support /delta queries - we have to update the DB with the details from OneDrive
-// Unfortunatly because of the consequence of Nataional Cloud Deployments not supporting /delta queries, the application uses the local database to flag what is out-of-date / track changes
+// Unfortunately because of the consequence of Nataional Cloud Deployments not supporting /delta queries, the application uses the local database to flag what is out-of-date / track changes
// This means that the constant disk writing to the database fix implemented with https://github.com/abraunegg/onedrive/pull/2004 cannot be utilised when using these operational modes
// as all records are touched / updated when performing the OneDrive sync operations. The impacted operational modes are:
// - National Cloud Deployments do not support /delta as a query
@ -2184,7 +2184,7 @@ class SyncEngine {
// The existingDatabaseItem.eTag == changedOneDriveItem.eTag .. nothing has changed eTag wise
// If the timestamp is different, or we are running a client operational mode that does not support /delta queries - we have to update the DB with the details from OneDrive
-// Unfortunatly because of the consequence of Nataional Cloud Deployments not supporting /delta queries, the application uses the local database to flag what is out-of-date / track changes
+// Unfortunately because of the consequence of Nataional Cloud Deployments not supporting /delta queries, the application uses the local database to flag what is out-of-date / track changes
// This means that the constant disk writing to the database fix implemented with https://github.com/abraunegg/onedrive/pull/2004 cannot be utilised when using these operational modes
// as all records are touched / updated when performing the OneDrive sync operations. The impacted operational modes are:
// - National Cloud Deployments do not support /delta as a query
@ -2216,7 +2216,7 @@ class SyncEngine {
// Download items in parallel
void downloadOneDriveItemsInParallel(JSONValue[] array) {
-// This function recieved an array of 16 JSON items to download
+// This function received an array of 16 JSON items to download
foreach (i, onedriveJSONItem; processPool.parallel(array)) {
// Take each JSON item and
downloadFileItem(onedriveJSONItem);
@ -2432,7 +2432,7 @@ class SyncEngine {
downloadValueMismatch = true;
addLogEntry("Actual file size on disk: " ~ to!string(downloadFileSize), ["debug"]);
addLogEntry("OneDrive API reported size: " ~ to!string(jsonFileSize), ["debug"]);
addLogEntry("ERROR: File download size mis-match. Increase logging verbosity to determine why.");
addLogEntry("ERROR: File download size mismatch. Increase logging verbosity to determine why.");
}
// Hash Error
@ -2441,7 +2441,7 @@ class SyncEngine {
downloadValueMismatch = true;
addLogEntry("Actual local file hash: " ~ downloadedFileHash, ["debug"]);
addLogEntry("OneDrive API reported hash: " ~ onlineFileHash, ["debug"]);
addLogEntry("ERROR: File download hash mis-match. Increase logging verbosity to determine why.");
addLogEntry("ERROR: File download hash mismatch. Increase logging verbosity to determine why.");
}
// .heic data loss check
@ -2851,7 +2851,7 @@ class SyncEngine {
// Update 'originItem.cTag' with the correct cTag from the response
// Update 'originItem.quickXorHash' with the correct quickXorHash from the response
// Everything else should remain the same .. and then save this DB record to the DB ..
-// However, we did this, for the local modified file right before calling this function to update the online timestamp ... so .. do we need to do this again, effectivly performing a double DB write for the same data?
+// However, we did this, for the local modified file right before calling this function to update the online timestamp ... so .. do we need to do this again, effectively performing a double DB write for the same data?
if ((originItem.type != ItemType.remote) && (originItem.remoteType != ItemType.file)) {
// Save the response JSON
// Is the response a valid JSON object - validation checking done in saveItem
@ -3682,7 +3682,7 @@ class SyncEngine {
// What path are we checking?
addLogEntry("sync_list item to check: " ~ newItemPath, ["debug"]);
-// Unfortunatly there is no avoiding this call to check if the path is excluded|included via sync_list
+// Unfortunately there is no avoiding this call to check if the path is excluded|included via sync_list
if (selectiveSync.isPathExcludedViaSyncList(newItemPath)) {
// selective sync advised to skip, however is this a file and are we configured to upload / download files in the root?
if ((isItemFile(onedriveJSONItem)) && (appConfig.getValueBool("sync_root_files")) && (rootName(newItemPath) == "") ) {
@ -3731,7 +3731,7 @@ class SyncEngine {
}
}
-// Process all the changed local items in parrallel
+// Process all the changed local items in parallel
void processChangedLocalItemsToUploadInParallel(string[3][] array) {
foreach (i, localItemDetails; processPool.parallel(array)) {
@ -3763,11 +3763,11 @@ class SyncEngine {
// Flag for if space is available online
bool spaceAvailableOnline = false;
-// When we are uploading OneDrive Business Shared Files, we need to be targetting the right driveId and itemId
+// When we are uploading OneDrive Business Shared Files, we need to be targeting the right driveId and itemId
string targetDriveId;
string targetItemId;
-// Unfortunatly, we cant store an array of Item's ... so we have to re-query the DB again - unavoidable extra processing here
+// Unfortunately, we cant store an array of Item's ... so we have to re-query the DB again - unavoidable extra processing here
// This is because the Item[] has no other functions to allow is to parallel process those elements, so we have to use a string array as input to this function
Item dbItem;
itemDB.selectById(changedItemParentId, changedItemId, dbItem);
@ -3820,7 +3820,7 @@ class SyncEngine {
if (cachedOnlineDriveData.quotaAvailable) {
// Our query told us we have free space online .. if we upload this file, will we exceed space online - thus upload will fail during upload?
if (calculatedSpaceOnlinePostUpload > 0) {
-// Based on this thread action, we beleive that there is space available online to upload - proceed
+// Based on this thread action, we believe that there is space available online to upload - proceed
spaceAvailableOnline = true;
}
}
@ -3896,7 +3896,7 @@ class SyncEngine {
saveItem(uploadResponse);
}
-// Update the 'cachedOnlineDriveData' record for this 'targetDriveId' so that this is tracked as accuratly as possible for other threads
+// Update the 'cachedOnlineDriveData' record for this 'targetDriveId' so that this is tracked as accurately as possible for other threads
updateDriveDetailsCache(targetDriveId, cachedOnlineDriveData.quotaRestricted, cachedOnlineDriveData.quotaAvailable, thisFileSizeLocal);
// Check the integrity of the uploaded modified file if not in a --dry-run scenario
@ -3930,7 +3930,7 @@ class SyncEngine {
JSONValue uploadSessionData;
string currentETag;
-// When we are uploading OneDrive Business Shared Files, we need to be targetting the right driveId and itemId
+// When we are uploading OneDrive Business Shared Files, we need to be targeting the right driveId and itemId
string targetDriveId;
string targetParentId;
string targetItemId;
@ -3987,7 +3987,7 @@ class SyncEngine {
// If the filesize is greater than zero , and we have valid 'latest' online data is the online file matching what we think is in the database?
if ((thisFileSizeLocal > 0) && (currentOnlineData.type() == JSONType.object)) {
// Issue #2626 | Case 2-1
-// If the 'online' file is newer, this will be overwritten with the file from the local filesystem - potentially consituting online data loss
+// If the 'online' file is newer, this will be overwritten with the file from the local filesystem - potentially constituting online data loss
Item onlineFile = makeItem(currentOnlineData);
// Which file is technically newer? The local file or the remote file?
@ -4056,7 +4056,7 @@ class SyncEngine {
}
} else {
// As this is a unique thread, the sessionFilePath for where we save the data needs to be unique
-// The best way to do this is generate a 10 digit alphanumeric string, and use this as the file extention
+// The best way to do this is generate a 10 digit alphanumeric string, and use this as the file extension
string threadUploadSessionFilePath = appConfig.uploadSessionFilePath ~ "." ~ generateAlphanumericString();
// Create the upload session
@ -5091,7 +5091,7 @@ class SyncEngine {
// Can we read the file - as a permissions issue or actual file corruption will cause a failure
// Resolves: https://github.com/abraunegg/onedrive/issues/113
if (readLocalFile(fileToUpload)) {
-// The local file can be read - so we can read it to attemtp to upload it in this thread
+// The local file can be read - so we can read it to attempt to upload it in this thread
// Is the path parent in the DB?
if (parentPathFoundInDB) {
// Parent path is in the database
@ -5144,7 +5144,7 @@ class SyncEngine {
if (cachedOnlineDriveData.quotaAvailable) {
// Our query told us we have free space online .. if we upload this file, will we exceed space online - thus upload will fail during upload?
if (calculatedSpaceOnlinePostUpload > 0) {
// Based on this thread action, we beleive that there is space available online to upload - proceed
// Based on this thread action, we believe that there is space available online to upload - proceed
spaceAvailableOnline = true;
}
}
@ -5234,13 +5234,13 @@ class SyncEngine {
// Issue #2626 | Case 2-2 (resync)
-// If the 'online' file is newer, this will be overwritten with the file from the local filesystem - potentially consituting online data loss
+// If the 'online' file is newer, this will be overwritten with the file from the local filesystem - potentially constituting online data loss
// The file 'version history' online will have to be used to 'recover' the prior online file
string changedItemParentDriveId = fileDetailsFromOneDrive["parentReference"]["driveId"].str;
string changedItemId = fileDetailsFromOneDrive["id"].str;
addLogEntry("Skipping uploading this item as a new file, will upload as a modified file (online file already exists): " ~ fileToUpload);
// In order for the processing of the local item as a 'changed' item, unfortunatly we need to save the online data of the existing online file to the local DB
// In order for the processing of the local item as a 'changed' item, unfortunately we need to save the online data of the existing online file to the local DB
saveItem(fileDetailsFromOneDrive);
// Which file is technically newer? The local file or the remote file?
@ -5308,7 +5308,7 @@ class SyncEngine {
uploadFailed = true;
}
} else {
// skip file upload - insufficent space to upload
// skip file upload - insufficient space to upload
addLogEntry("Skipping uploading this new file as it exceeds the available free space on Microsoft OneDrive: " ~ fileToUpload);
uploadFailed = true;
}
@ -5340,7 +5340,7 @@ class SyncEngine {
// Upload success or failure?
if (!uploadFailed) {
// Update the 'cachedOnlineDriveData' record for this 'dbItem.driveId' so that this is tracked as accuratly as possible for other threads
// Update the 'cachedOnlineDriveData' record for this 'dbItem.driveId' so that this is tracked as accurately as possible for other threads
updateDriveDetailsCache(parentItem.driveId, cachedOnlineDriveData.quotaRestricted, cachedOnlineDriveData.quotaAvailable, thisFileSize);
} else {
@ -5428,7 +5428,7 @@ class SyncEngine {
// - All Business | Office365 | SharePoint files > 0 bytes
JSONValue uploadSessionData;
// As this is a unique thread, the sessionFilePath for where we save the data needs to be unique
// The best way to do this is generate a 10 digit alphanumeric string, and use this as the file extention
// The best way to do this is generate a 10 digit alphanumeric string, and use this as the file extension
string threadUploadSessionFilePath = appConfig.uploadSessionFilePath ~ "." ~ generateAlphanumericString();
// Attempt to upload the > 4MB file using an upload session for all account types
@ -5530,7 +5530,7 @@ class SyncEngine {
// OK as the upload did not fail, we need to save the response from OneDrive, but it has to be a valid JSON response
if (uploadResponse.type() == JSONType.object) {
// check if the path still exists locally before we try to set the file times online - as short lived files, whilst we uploaded it - it may not exist locally aready
// check if the path still exists locally before we try to set the file times online - as short lived files, whilst we uploaded it - it may not exist locally already
if (exists(fileToUpload)) {
if (!dryRun) {
// Check the integrity of the uploaded file, if the local file still exists
@ -6065,7 +6065,7 @@ class SyncEngine {
} else {
// Not a Microsoft OneNote Mime Type Object ..
string apiWarningMessage = "WARNING: OneDrive API inconsistency - this file does not have any hash: ";
// This is computationally expensive .. but we are only doing this if there are no hashses provided
// This is computationally expensive .. but we are only doing this if there are no hashes provided
bool parentInDatabase = itemDB.idInLocalDatabase(newDatabaseItem.driveId, newDatabaseItem.parentId);
// Is the parent id in the database?
if (parentInDatabase) {
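
On the "no hash supplied" fallback in this hunk: when the API response carries no hash, the client has to hash the local content itself, which is the computationally expensive part being called out. A hedged illustration that streams the file through SHA-256 purely for brevity; the client's actual hash handling may differ:

```d
import std.digest : toHexString;
import std.digest.sha : SHA256;
import std.stdio : File;

string computeLocalFileHash(string path) {
	SHA256 sha;
	sha.start();
	auto file = File(path, "rb");
	scope(exit) file.close();
	// Stream in 4 KiB chunks so large files never need to fit in memory
	foreach (ubyte[] chunk; file.byChunk(4096)) {
		sha.put(chunk);
	}
	return toHexString(sha.finish()).idup;
}
```
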
@ -6454,7 +6454,7 @@ class SyncEngine {
// The full parent path of the child, as per the JSON might be:
// /Level 1/Level 2/Level 3/Child Shared Folder/some folder/another folder
// But 'Child Shared Folder' is what is shared, thus '/Level 1/Level 2/Level 3/' is a potential information leak if logged.
// Plus, the application output now shows accuratly what is being shared - so that is a good thing.
// Plus, the application output now shows accurately what is being shared - so that is a good thing.
addLogEntry("Adding " ~ to!string(count(thisLevelChildren["value"].array)) ~ " OneDrive items for processing from " ~ pathForLogging, ["verbose"]);
}
foreach (child; thisLevelChildren["value"].array) {
@ -6569,7 +6569,7 @@ class SyncEngine {
queryOneDriveForSpecificPath.initialise();
foreach (thisFolderName; pathSplitter(thisNewPathToSearch)) {
addLogEntry("Testing for the existance online of this folder path: " ~ thisFolderName, ["debug"]);
addLogEntry("Testing for the existence online of this folder path: " ~ thisFolderName, ["debug"]);
directoryFoundOnline = false;
// If this is '.' this is the account root
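
The folder-by-folder probe in this hunk leans on pathSplitter(), which yields one path component at a time, with '.' standing in for the account root. A tiny standalone example; the actual online lookup is stubbed out with log output:

```d
import std.path : pathSplitter;
import std.stdio : writeln;

void main() {
	string thisNewPathToSearch = "./Documents/Projects/Reports";
	foreach (thisFolderName; pathSplitter(thisNewPathToSearch)) {
		if (thisFolderName == ".") {
			// '.' is the account root, nothing to look up online
			writeln("This is the account root");
		} else {
			// The real client queries the OneDrive API for each level here
			writeln("Testing for the existence online of this folder path: ", thisFolderName);
		}
	}
}
```
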
@ -6879,7 +6879,7 @@ class SyncEngine {
if (!itemDB.selectByPath(oldPath, appConfig.defaultDriveId, oldItem)) {
// The old path|item is not synced with the database, upload as a new file
addLogEntry("Moved local item was not in-sync with local databse - uploading as new item");
addLogEntry("Moved local item was not in-sync with local database - uploading as new item");
scanLocalFilesystemPathForNewData(newPath);
return;
}
@ -7312,7 +7312,7 @@ class SyncEngine {
// If the initial deltaChanges response is an invalid JSON object, keep trying until we get a valid response ..
if (deltaChanges.type() != JSONType.object) {
while (deltaChanges.type() != JSONType.object) {
// Handle the invalid JSON response adn retry
// Handle the invalid JSON response and retry
addLogEntry("ERROR: Query of the OneDrive API via deltaChanges = getDeltaChangesByItemId() returned an invalid JSON response", ["debug"]);
deltaChanges = getDeltaChangesByItemId(driveIdToQuery, itemIdToQuery, deltaLink, getDeltaQueryOneDriveApiInstance);
}
@ -7845,7 +7845,7 @@ class SyncEngine {
return false;
}
// Add 'sessionFilePath' to 'sessionFileData' so that it can be used when we re-use the JSON data to resume the upload
// Add 'sessionFilePath' to 'sessionFileData' so that it can be used when we reuse the JSON data to resume the upload
sessionFileData["sessionFilePath"] = sessionFilePath;
// Add sessionFileData to jsonItemsToResumeUpload as it is now valid
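
The 'sessionFilePath' key added in this hunk is what lets the resume pass map a parsed session JSON back to its on-disk file. A rough sketch of that restore step, with the validation trimmed down and the function name assumed:

```d
import std.file : readText;
import std.json : JSONType, JSONValue, parseJSON;

JSONValue[] jsonItemsToResumeUpload;  // items queued for session resumption

bool restoreSessionUploadData(string sessionFilePath) {
	JSONValue sessionFileData;
	try {
		sessionFileData = parseJSON(readText(sessionFilePath));
	} catch (Exception e) {
		return false;
	}
	// Only a JSON object is usable session data
	if (sessionFileData.type() != JSONType.object) return false;
	// Remember which on-disk file this JSON came from, for use when resuming
	sessionFileData["sessionFilePath"] = sessionFilePath;
	jsonItemsToResumeUpload ~= sessionFileData;
	return true;
}
```
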
@ -7853,9 +7853,9 @@ class SyncEngine {
return true;
}
// Resume all resumable session uploads in parrallel
// Resume all resumable session uploads in parallel
void resumeSessionUploadsInParallel(JSONValue[] array) {
// This function recieved an array of 16 JSON items to resume upload
// This function received an array of 16 JSON items to resume upload
foreach (i, jsonItemToResume; processPool.parallel(array)) {
// Take each JSON item and resume upload using the JSON data
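
The parallel resume in this hunk fans the saved sessions out across worker threads. A minimal sketch of the same foreach-with-index shape using std.parallelism's default taskPool; the client drives this through its own processPool, and the real upload logic sits where the placeholder comment is:

```d
import std.json : JSONValue, parseJSON;
import std.parallelism : taskPool;
import std.stdio : writeln;

void resumeSessionUploadsInParallel(JSONValue[] array) {
	foreach (i, jsonItemToResume; taskPool.parallel(array)) {
		// Each JSON item would be handed to the session-resume upload logic here
		writeln("Resuming the upload session for batch item: ", i);
	}
}

void main() {
	auto items = [parseJSON(`{"uploadUrl": "https://example.invalid/session/1"}`)];
	resumeSessionUploadsInParallel(items);
}
```
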
@ -7884,7 +7884,7 @@ class SyncEngine {
// Was the response from the OneDrive API a valid JSON item?
if (uploadResponse.type() == JSONType.object) {
// A valid JSON object was returned - session resumption upload sucessful
// A valid JSON object was returned - session resumption upload successful
// Are we in an --upload-only & --remove-source-files scenario?
// Use actual config values as we are doing an upload session recovery
@ -8252,7 +8252,7 @@ class SyncEngine {
try {
sharedWithMeItems = sharedWithMeOneDriveApiInstance.getSharedWithMe();
// We cant shutdown the API instance here, as we re-use it below
// We cant shutdown the API instance here, as we reuse it below
} catch (OneDriveException e) {
// Display error message

View file

@ -302,7 +302,7 @@ bool readLocalFile(string path) {
// Check if the read operation was successful
if (data.length != 1) {
// Read operation not sucessful
// Read operation not successful
addLogEntry("Failed to read the required amount from the file: " ~ path);
return false;
}
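
The one-byte probe in this final hunk is a cheap way to surface permission problems or disk-level read errors before an upload is attempted. A self-contained sketch that mirrors the same check, with the error handling simplified:

```d
import std.stdio : File, writeln;

bool readLocalFile(string path) {
	try {
		auto file = File(path, "rb");
		scope(exit) file.close();
		ubyte[1] buffer;
		auto data = file.rawRead(buffer[]);
		// Check if the read operation was successful
		if (data.length != 1) {
			// Read operation not successful
			writeln("Failed to read the required amount from the file: ", path);
			return false;
		}
	} catch (Exception e) {
		// A permissions issue or actual file corruption lands here
		writeln("Cannot read the file: ", path, " - ", e.msg);
		return false;
	}
	return true;
}
```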