Make webhooks more resilient

* Do not quit on subscription API errors. Instead, wait a configurable interval before retrying.
* Shorten the default webhook expiration and renewal intervals to reduce the chance of HTTP 409 errors (see the settings summary below).
* Handle HTTP 409 on subscription creation by taking over the existing subscription.
* Handle HTTP 404 on subscription renewal by creating a new subscription (current behavior, listed for completeness).
* Log other known HTTP errors, including 400 (bad webhook endpoint), 401 (authentication failed), and 403 (too many subscriptions).
* Log detailed messages when encountering unknown HTTP errors to assist with future debugging.
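For reference, these are the webhook settings this commit touches, shown with their new defaults (values in seconds) as they appear in the config-file and documentation diffs below; uncomment and adjust them in your own config file if the defaults do not suit:

```text
# Keep the Microsoft Graph subscription alive for 10 minutes
webhook_expiration_interval = "600"
# Renew the subscription when less than 5 minutes remain before it expires
webhook_renewal_interval = "300"
# Wait 1 minute before retrying after a subscription API error
webhook_retry_interval = "60"
```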
Lyncredible 2023-10-15 08:14:02 -07:00
parent 4a60654e3f
commit 30ee83a2ae
6 changed files with 362 additions and 193 deletions

config

@@ -48,8 +48,9 @@
 # webhook_public_url = ""
 # webhook_listening_host = ""
 # webhook_listening_port = "8888"
-# webhook_expiration_interval = "86400"
-# webhook_renewal_interval = "43200"
+# webhook_expiration_interval = "600"
+# webhook_renewal_interval = "300"
+# webhook_retry_interval = "60"
 # space_reservation = "50"
 # display_running_config = "false"
 # read_only_auth_scope = "false"


@@ -57,8 +57,8 @@ Before reading this document, please ensure you are running application version
 - [Running 'onedrive' in 'monitor' mode](#running-onedrive-in-monitor-mode)
   * [Use webhook to subscribe to remote updates in 'monitor' mode](#use-webhook-to-subscribe-to-remote-updates-in-monitor-mode)
   * [More webhook configuration options](#more-webhook-configuration-options)
-    + [webhook_listening_host and webhook_listening_port](#webhook_listening_host-and-webhook_listening_port)
-    + [webhook_expiration_interval and webhook_renewal_interval](#webhook_expiration_interval-and-webhook_renewal_interval)
+    + [Webhook listening host and port](#webhook-listening-host-and-port)
+    + [Webhook expiration, renewal and retry intervals](#webhook-expiration-renewal-and-retry-intervals)
 - [Running 'onedrive' as a system service](#running-onedrive-as-a-system-service)
   * [OneDrive service running as root user via init.d](#onedrive-service-running-as-root-user-via-initd)
   * [OneDrive service running as root user via systemd (Arch, Ubuntu, Debian, OpenSuSE, Fedora)](#onedrive-service-running-as-root-user-via-systemd-arch-ubuntu-debian-opensuse-fedora)
@@ -524,8 +524,8 @@ See the [config](https://raw.githubusercontent.com/abraunegg/onedrive/master/con
 # webhook_public_url = ""
 # webhook_listening_host = ""
 # webhook_listening_port = "8888"
-# webhook_expiration_interval = "86400"
-# webhook_renewal_interval = "43200"
+# webhook_expiration_interval = "600"
+# webhook_renewal_interval = "300"
 # space_reservation = "50"
 # display_running_config = "false"
 # read_only_auth_scope = "false"
@@ -891,8 +891,8 @@ By default, the application will reserve 50MB of disk space to prevent your file
 Example:
 ```text
 ...
-# webhook_expiration_interval = "86400"
-# webhook_renewal_interval = "43200"
+# webhook_expiration_interval = "600"
+# webhook_renewal_interval = "300"
 space_reservation = "10"
 ```
@@ -1059,7 +1059,7 @@ For any further nginx configuration assistance, please refer to: https://docs.ng
 Below options can be optionally configured. The default is usually good enough.
-#### webhook_listening_host and webhook_listening_port
+#### Webhook listening host and port
 Set `webhook_listening_host` and `webhook_listening_port` to change the webhook listening endpoint. If `webhook_listening_host` is left empty, which is the default, the webhook will bind to `0.0.0.0`. The default `webhook_listening_port` is `8888`.
@@ -1068,16 +1068,21 @@ webhook_listening_host = ""
 webhook_listening_port = "8888"
 ```
-#### webhook_expiration_interval and webhook_renewal_interval
+#### Webhook expiration, renewal and retry intervals
-Set `webhook_expiration_interval` and `webhook_renewal_interval` to change the frequency of subscription renewal. By default, the webhook asks Microsoft to keep subscriptions alive for 24 hours, and it renews subscriptions when it is less than 12 hours before their expiration.
+Set `webhook_expiration_interval` and `webhook_renewal_interval` to change the frequency of subscription renewal. Set `webhook_retry_interval` to change the delay before retrying after an error occurs while communicating with Microsoft's subscription endpoints.
+By default, the webhook asks Microsoft to keep subscriptions alive for 10 minutes and renews them when less than 5 minutes remain before expiration. When an error occurs, it waits 1 minute before retrying.
 ```
-# Default expiration interval is 24 hours
-webhook_expiration_interval = "86400"
-# Default renewal interval is 12 hours
-webhook_renewal_interval = "43200"
+# Default expiration interval is 10 minutes
+webhook_expiration_interval = "600"
+# Default renewal interval is 5 minutes
+webhook_renewal_interval = "300"
+# Default retry interval is 1 minute
+webhook_retry_interval = "60"
 ```
 ## Running 'onedrive' as a system service
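Concretely, with the new defaults: a subscription created at time T is requested to expire at T + 600 seconds; renewal attempts begin once less than 300 seconds remain (that is, from T + 300 onward); and if creating or renewing the subscription fails, the next attempt is deferred by 60 seconds rather than terminating the client.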


@ -43,9 +43,9 @@ final class Config
// Default file permission mode // Default file permission mode
public long defaultFilePermissionMode = 600; public long defaultFilePermissionMode = 600;
public int configuredFilePermissionMode; public int configuredFilePermissionMode;
// Bring in v2.5.0 config items // Bring in v2.5.0 config items
// HTTP Struct items, used for configuring HTTP() // HTTP Struct items, used for configuring HTTP()
// Curl Timeout Handling // Curl Timeout Handling
// libcurl dns_cache_timeout timeout // libcurl dns_cache_timeout timeout
@ -70,8 +70,8 @@ final class Config
immutable int defaultMaxRedirects = 5; immutable int defaultMaxRedirects = 5;
// Specify what IP protocol version should be used when communicating with OneDrive // Specify what IP protocol version should be used when communicating with OneDrive
immutable int defaultIpProtocol = 0; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only immutable int defaultIpProtocol = 0; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
this(string confdirOption) this(string confdirOption)
{ {
@ -106,7 +106,7 @@ final class Config
longValues["min_notify_changes"] = 5; longValues["min_notify_changes"] = 5;
longValues["monitor_log_frequency"] = 6; longValues["monitor_log_frequency"] = 6;
// Number of N sync runs before performing a full local scan of sync_dir // Number of N sync runs before performing a full local scan of sync_dir
// By default 12 which means every ~60 minutes a full disk scan of sync_dir will occur // By default 12 which means every ~60 minutes a full disk scan of sync_dir will occur
// 'monitor_interval' * 'monitor_fullscan_frequency' = 3600 = 1 hour // 'monitor_interval' * 'monitor_fullscan_frequency' = 3600 = 1 hour
longValues["monitor_fullscan_frequency"] = 12; longValues["monitor_fullscan_frequency"] = 12;
// Number of children in a path that is locally removed which will be classified as a 'big data delete' // Number of children in a path that is locally removed which will be classified as a 'big data delete'
@@ -158,8 +158,9 @@ final class Config
 		stringValues["webhook_public_url"] = "";
 		stringValues["webhook_listening_host"] = "";
 		longValues["webhook_listening_port"] = 8888;
-		longValues["webhook_expiration_interval"] = 3600 * 24;
-		longValues["webhook_renewal_interval"] = 3600 * 12;
+		longValues["webhook_expiration_interval"] = 60 * 10;
+		longValues["webhook_renewal_interval"] = 60 * 5;
+		longValues["webhook_retry_interval"] = 60;
 		// Log to application output running configuration values
 		boolValues["display_running_config"] = false;
 		// Configure read-only authentication scope
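These defaults are stored as seconds; the OneDriveApi changes further below convert them into `Duration` values with `dur!"seconds"(cfg.getValueLong(...))`. A minimal, self-contained illustration of that conversion (standalone sketch, not project code):

```d
// Standalone illustration (not project code) of how the new webhook defaults,
// stored as plain seconds, become core.time Durations.
import core.time : dur, Duration;
import std.stdio : writeln;

void main() {
    long expirationSeconds = 60 * 10; // webhook_expiration_interval default (600)
    long renewalSeconds    = 60 * 5;  // webhook_renewal_interval default (300)
    long retrySeconds      = 60;      // webhook_retry_interval default (60)

    Duration expiration = dur!"seconds"(expirationSeconds);
    Duration renewal    = dur!"seconds"(renewalSeconds);
    Duration retry      = dur!"seconds"(retrySeconds);

    // Prints human-readable durations, e.g. "10 minutes", "5 minutes", "1 minute"
    writeln(expiration, " / ", renewal, " / ", retry);
}
```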
@ -187,7 +188,7 @@ final class Config
// - Enabling this option will add function processing times to the console output // - Enabling this option will add function processing times to the console output
// - This then enables tracking of where the application is spending most amount of time when processing data when users have questions re performance // - This then enables tracking of where the application is spending most amount of time when processing data when users have questions re performance
boolValues["display_processing_time"] = false; boolValues["display_processing_time"] = false;
// HTTPS & CURL Operation Settings // HTTPS & CURL Operation Settings
// - Maximum time an operation is allowed to take // - Maximum time an operation is allowed to take
// This includes dns resolution, connecting, data transfer, etc. // This includes dns resolution, connecting, data transfer, etc.
@ -200,7 +201,7 @@ final class Config
longValues["data_timeout"] = defaultDataTimeout; longValues["data_timeout"] = defaultDataTimeout;
// What IP protocol version should be used when communicating with OneDrive // What IP protocol version should be used when communicating with OneDrive
longValues["ip_protocol_version"] = defaultIpProtocol; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only longValues["ip_protocol_version"] = defaultIpProtocol; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
// EXPAND USERS HOME DIRECTORY // EXPAND USERS HOME DIRECTORY
// Determine the users home directory. // Determine the users home directory.
// Need to avoid using ~ here as expandTilde() below does not interpret correctly when running under init.d or systemd scripts // Need to avoid using ~ here as expandTilde() below does not interpret correctly when running under init.d or systemd scripts
@ -282,7 +283,7 @@ final class Config
writeln("ERROR: ~/.config/onedrive is a file rather than a directory"); writeln("ERROR: ~/.config/onedrive is a file rather than a directory");
} }
// Must exit // Must exit
exit(EXIT_FAILURE); exit(EXIT_FAILURE);
} }
} }
@ -405,7 +406,7 @@ final class Config
&longValues["classify_as_big_delete"], &longValues["classify_as_big_delete"],
"cleanup-local-files", "cleanup-local-files",
"Cleanup additional local files when using --download-only. This will remove local data.", "Cleanup additional local files when using --download-only. This will remove local data.",
&boolValues["cleanup_local_files"], &boolValues["cleanup_local_files"],
"create-directory", "create-directory",
"Create a directory on OneDrive - no sync will be performed.", "Create a directory on OneDrive - no sync will be performed.",
&stringValues["create_directory"], &stringValues["create_directory"],
@ -642,11 +643,11 @@ final class Config
// Use exit scopes to shutdown API // Use exit scopes to shutdown API
return false; return false;
} }
// We were able to readText the config file - so, we should be able to open and read it // We were able to readText the config file - so, we should be able to open and read it
auto file = File(filename, "r"); auto file = File(filename, "r");
string lineBuffer; string lineBuffer;
// configure scopes // configure scopes
// - failure // - failure
scope(failure) { scope(failure) {
@ -711,7 +712,7 @@ final class Config
setValueString("skip_dir", configFileSkipDir); setValueString("skip_dir", configFileSkipDir);
} }
} }
// --single-directory Strip quotation marks from path // --single-directory Strip quotation marks from path
// This is an issue when using ONEDRIVE_SINGLE_DIRECTORY with Docker // This is an issue when using ONEDRIVE_SINGLE_DIRECTORY with Docker
if (key == "single_directory") { if (key == "single_directory") {
// Strip quotation marks from provided path // Strip quotation marks from provided path
@ -751,7 +752,7 @@ final class Config
if (key == "space_reservation") { if (key == "space_reservation") {
// temp value // temp value
ulong tempValue = to!long(c.front.dup); ulong tempValue = to!long(c.front.dup);
// a value of 0 needs to be made at least 1MB .. // a value of 0 needs to be made at least 1MB ..
if (tempValue == 0) { if (tempValue == 0) {
tempValue = 1; tempValue = 1;
} }
@ -823,7 +824,7 @@ final class Config
} }
return configuredFilePermissionMode; return configuredFilePermissionMode;
} }
void resetSkipToDefaults() { void resetSkipToDefaults() {
// reset skip_file and skip_dir to application defaults // reset skip_file and skip_dir to application defaults
// skip_file // skip_file


@ -58,12 +58,12 @@ int main(string[] args)
bool cleanupLocalFilesGlobal = false; bool cleanupLocalFilesGlobal = false;
bool synchronizeConfigured = false; bool synchronizeConfigured = false;
bool invalidSyncExit = false; bool invalidSyncExit = false;
// start and finish messages // start and finish messages
string startMessage = "Starting a sync with OneDrive"; string startMessage = "Starting a sync with OneDrive";
string finishMessage = "Sync with OneDrive is complete"; string finishMessage = "Sync with OneDrive is complete";
string helpMessage = "Please use 'onedrive --help' for further assistance in regards to running this application."; string helpMessage = "Please use 'onedrive --help' for further assistance in regards to running this application.";
// hash file permission values // hash file permission values
string hashPermissionValue = "600"; string hashPermissionValue = "600";
auto convertedPermissionValue = parse!long(hashPermissionValue, 8); auto convertedPermissionValue = parse!long(hashPermissionValue, 8);
@ -150,7 +150,7 @@ int main(string[] args)
"verbose|v+", "Print more details, useful for debugging (repeat for extra debugging)", &log.verbose, "verbose|v+", "Print more details, useful for debugging (repeat for extra debugging)", &log.verbose,
"version", "Print the version and exit", &printVersion "version", "Print the version and exit", &printVersion
); );
// print help and exit // print help and exit
if (opt.helpWanted) { if (opt.helpWanted) {
args ~= "--help"; args ~= "--help";
@ -182,7 +182,7 @@ int main(string[] args)
// Error message already printed // Error message already printed
return EXIT_FAILURE; return EXIT_FAILURE;
} }
// How was this application started - what options were passed in // How was this application started - what options were passed in
log.vdebug("passed in options: ", args); log.vdebug("passed in options: ", args);
log.vdebug("note --confdir and --verbose not listed in args"); log.vdebug("note --confdir and --verbose not listed in args");
@ -195,12 +195,12 @@ int main(string[] args)
// update configuration from command line args // update configuration from command line args
cfg.update_from_args(args); cfg.update_from_args(args);
// --resync should be a 'last resort item' .. the user needs to 'accept' to proceed // --resync should be a 'last resort item' .. the user needs to 'accept' to proceed
if ((cfg.getValueBool("resync")) && (!cfg.getValueBool("display_config"))) { if ((cfg.getValueBool("resync")) && (!cfg.getValueBool("display_config"))) {
// what is the risk acceptance? // what is the risk acceptance?
bool resyncRiskAcceptance = false; bool resyncRiskAcceptance = false;
if (!cfg.getValueBool("resync_auth")) { if (!cfg.getValueBool("resync_auth")) {
// need to prompt user // need to prompt user
char response; char response;
@ -209,7 +209,7 @@ int main(string[] args)
writeln("This has the potential to overwrite local versions of files with potentially older versions downloaded from OneDrive which can lead to data loss"); writeln("This has the potential to overwrite local versions of files with potentially older versions downloaded from OneDrive which can lead to data loss");
writeln("If in-doubt, backup your local data first before proceeding with --resync"); writeln("If in-doubt, backup your local data first before proceeding with --resync");
write("\nAre you sure you wish to proceed with --resync? [Y/N] "); write("\nAre you sure you wish to proceed with --resync? [Y/N] ");
try { try {
// Attempt to read user response // Attempt to read user response
readf(" %c\n", &response); readf(" %c\n", &response);
@ -217,7 +217,7 @@ int main(string[] args)
// Caught an error // Caught an error
return EXIT_FAILURE; return EXIT_FAILURE;
} }
// Evaluate user repsonse // Evaluate user repsonse
if ((to!string(response) == "y") || (to!string(response) == "Y")) { if ((to!string(response) == "y") || (to!string(response) == "Y")) {
// User has accepted --resync risk to proceed // User has accepted --resync risk to proceed
@ -229,7 +229,7 @@ int main(string[] args)
// resync_auth is true // resync_auth is true
resyncRiskAcceptance = true; resyncRiskAcceptance = true;
} }
// Action based on response // Action based on response
if (!resyncRiskAcceptance){ if (!resyncRiskAcceptance){
// --resync risk not accepted // --resync risk not accepted
@ -320,7 +320,7 @@ int main(string[] args)
if (exists(configFilePath)) currentConfigHash = computeQuickXorHash(configFilePath); if (exists(configFilePath)) currentConfigHash = computeQuickXorHash(configFilePath);
if (exists(syncListFilePath)) currentSyncListHash = computeQuickXorHash(syncListFilePath); if (exists(syncListFilePath)) currentSyncListHash = computeQuickXorHash(syncListFilePath);
if (exists(businessSharedFolderFilePath)) currentBusinessSharedFoldersHash = computeQuickXorHash(businessSharedFolderFilePath); if (exists(businessSharedFolderFilePath)) currentBusinessSharedFoldersHash = computeQuickXorHash(businessSharedFolderFilePath);
// read the existing hashes for each of the relevant configuration files if they exist // read the existing hashes for each of the relevant configuration files if they exist
if (exists(configHashFile)) { if (exists(configHashFile)) {
try { try {
@ -507,7 +507,7 @@ int main(string[] args)
if (cfg.getValueBool("list_business_shared_folders")) ignoreResyncRequirement = true; if (cfg.getValueBool("list_business_shared_folders")) ignoreResyncRequirement = true;
if ((!cfg.getValueString("get_o365_drive_id").empty)) ignoreResyncRequirement = true; if ((!cfg.getValueString("get_o365_drive_id").empty)) ignoreResyncRequirement = true;
if ((!cfg.getValueString("get_file_link").empty)) ignoreResyncRequirement = true; if ((!cfg.getValueString("get_file_link").empty)) ignoreResyncRequirement = true;
// Do we need to ignore a --resync requirement? // Do we need to ignore a --resync requirement?
if (!ignoreResyncRequirement) { if (!ignoreResyncRequirement) {
// We are not ignoring --requirement // We are not ignoring --requirement
@ -564,7 +564,7 @@ int main(string[] args)
} }
// configure databaseFilePathDryRunGlobal // configure databaseFilePathDryRunGlobal
databaseFilePathDryRunGlobal = cfg.databaseFilePathDryRun; databaseFilePathDryRunGlobal = cfg.databaseFilePathDryRun;
string dryRunShmFile = databaseFilePathDryRunGlobal ~ "-shm"; string dryRunShmFile = databaseFilePathDryRunGlobal ~ "-shm";
string dryRunWalFile = databaseFilePathDryRunGlobal ~ "-wal"; string dryRunWalFile = databaseFilePathDryRunGlobal ~ "-wal";
// If the dry run database exists, clean this up // If the dry run database exists, clean this up
@ -681,7 +681,7 @@ int main(string[] args)
// Exit // Exit
return EXIT_SUCCESS; return EXIT_SUCCESS;
} }
// Handle --reauth to re-authenticate the client // Handle --reauth to re-authenticate the client
if (cfg.getValueBool("reauth")) { if (cfg.getValueBool("reauth")) {
log.vdebug("--reauth requested"); log.vdebug("--reauth requested");
@ -690,20 +690,20 @@ int main(string[] args)
safeRemove(cfg.refreshTokenFilePath); safeRemove(cfg.refreshTokenFilePath);
} }
} }
// Display current application configuration // Display current application configuration
if ((cfg.getValueBool("display_config")) || (cfg.getValueBool("display_running_config"))) { if ((cfg.getValueBool("display_config")) || (cfg.getValueBool("display_running_config"))) {
if (cfg.getValueBool("display_running_config")) { if (cfg.getValueBool("display_running_config")) {
writeln("--------------- Application Runtime Configuration ---------------"); writeln("--------------- Application Runtime Configuration ---------------");
} }
// Display application version // Display application version
writeln("onedrive version = ", strip(import("version"))); writeln("onedrive version = ", strip(import("version")));
// Display all of the pertinent configuration options // Display all of the pertinent configuration options
writeln("Config path = ", cfg.configDirName); writeln("Config path = ", cfg.configDirName);
// Does a config file exist or are we using application defaults // Does a config file exist or are we using application defaults
writeln("Config file found in config path = ", exists(configFilePath)); writeln("Config file found in config path = ", exists(configFilePath));
// Is config option drive_id configured? // Is config option drive_id configured?
if (cfg.getValueString("drive_id") != ""){ if (cfg.getValueString("drive_id") != ""){
writeln("Config option 'drive_id' = ", cfg.getValueString("drive_id")); writeln("Config option 'drive_id' = ", cfg.getValueString("drive_id"));
@ -711,25 +711,25 @@ int main(string[] args)
// Config Options as per 'config' file // Config Options as per 'config' file
writeln("Config option 'sync_dir' = ", syncDir); writeln("Config option 'sync_dir' = ", syncDir);
// logging and notifications // logging and notifications
writeln("Config option 'enable_logging' = ", cfg.getValueBool("enable_logging")); writeln("Config option 'enable_logging' = ", cfg.getValueBool("enable_logging"));
writeln("Config option 'log_dir' = ", cfg.getValueString("log_dir")); writeln("Config option 'log_dir' = ", cfg.getValueString("log_dir"));
writeln("Config option 'disable_notifications' = ", cfg.getValueBool("disable_notifications")); writeln("Config option 'disable_notifications' = ", cfg.getValueBool("disable_notifications"));
writeln("Config option 'min_notify_changes' = ", cfg.getValueLong("min_notify_changes")); writeln("Config option 'min_notify_changes' = ", cfg.getValueLong("min_notify_changes"));
// skip files and directory and 'matching' policy // skip files and directory and 'matching' policy
writeln("Config option 'skip_dir' = ", cfg.getValueString("skip_dir")); writeln("Config option 'skip_dir' = ", cfg.getValueString("skip_dir"));
writeln("Config option 'skip_dir_strict_match' = ", cfg.getValueBool("skip_dir_strict_match")); writeln("Config option 'skip_dir_strict_match' = ", cfg.getValueBool("skip_dir_strict_match"));
writeln("Config option 'skip_file' = ", cfg.getValueString("skip_file")); writeln("Config option 'skip_file' = ", cfg.getValueString("skip_file"));
writeln("Config option 'skip_dotfiles' = ", cfg.getValueBool("skip_dotfiles")); writeln("Config option 'skip_dotfiles' = ", cfg.getValueBool("skip_dotfiles"));
writeln("Config option 'skip_symlinks' = ", cfg.getValueBool("skip_symlinks")); writeln("Config option 'skip_symlinks' = ", cfg.getValueBool("skip_symlinks"));
// --monitor sync process options // --monitor sync process options
writeln("Config option 'monitor_interval' = ", cfg.getValueLong("monitor_interval")); writeln("Config option 'monitor_interval' = ", cfg.getValueLong("monitor_interval"));
writeln("Config option 'monitor_log_frequency' = ", cfg.getValueLong("monitor_log_frequency")); writeln("Config option 'monitor_log_frequency' = ", cfg.getValueLong("monitor_log_frequency"));
writeln("Config option 'monitor_fullscan_frequency' = ", cfg.getValueLong("monitor_fullscan_frequency")); writeln("Config option 'monitor_fullscan_frequency' = ", cfg.getValueLong("monitor_fullscan_frequency"));
// sync process and method // sync process and method
writeln("Config option 'read_only_auth_scope' = ", cfg.getValueBool("read_only_auth_scope")); writeln("Config option 'read_only_auth_scope' = ", cfg.getValueBool("read_only_auth_scope"));
writeln("Config option 'dry_run' = ", cfg.getValueBool("dry_run")); writeln("Config option 'dry_run' = ", cfg.getValueBool("dry_run"));
@ -751,7 +751,7 @@ int main(string[] args)
writeln("Config option 'sync_dir_permissions' = ", cfg.getValueLong("sync_dir_permissions")); writeln("Config option 'sync_dir_permissions' = ", cfg.getValueLong("sync_dir_permissions"));
writeln("Config option 'sync_file_permissions' = ", cfg.getValueLong("sync_file_permissions")); writeln("Config option 'sync_file_permissions' = ", cfg.getValueLong("sync_file_permissions"));
writeln("Config option 'space_reservation' = ", cfg.getValueLong("space_reservation")); writeln("Config option 'space_reservation' = ", cfg.getValueLong("space_reservation"));
// curl operations // curl operations
writeln("Config option 'application_id' = ", cfg.getValueString("application_id")); writeln("Config option 'application_id' = ", cfg.getValueString("application_id"));
writeln("Config option 'azure_ad_endpoint' = ", cfg.getValueString("azure_ad_endpoint")); writeln("Config option 'azure_ad_endpoint' = ", cfg.getValueString("azure_ad_endpoint"));
@ -765,11 +765,11 @@ int main(string[] args)
writeln("Config option 'connect_timeout' = ", cfg.getValueLong("connect_timeout")); writeln("Config option 'connect_timeout' = ", cfg.getValueLong("connect_timeout"));
writeln("Config option 'data_timeout' = ", cfg.getValueLong("data_timeout")); writeln("Config option 'data_timeout' = ", cfg.getValueLong("data_timeout"));
writeln("Config option 'ip_protocol_version' = ", cfg.getValueLong("ip_protocol_version")); writeln("Config option 'ip_protocol_version' = ", cfg.getValueLong("ip_protocol_version"));
// Is sync_list configured ? // Is sync_list configured ?
writeln("Config option 'sync_root_files' = ", cfg.getValueBool("sync_root_files")); writeln("Config option 'sync_root_files' = ", cfg.getValueBool("sync_root_files"));
if (exists(syncListFilePath)){ if (exists(syncListFilePath)){
writeln("Selective sync 'sync_list' configured = true"); writeln("Selective sync 'sync_list' configured = true");
writeln("sync_list contents:"); writeln("sync_list contents:");
// Output the sync_list contents // Output the sync_list contents
@ -781,7 +781,7 @@ int main(string[] args)
} }
} else { } else {
writeln("Selective sync 'sync_list' configured = false"); writeln("Selective sync 'sync_list' configured = false");
} }
// Is business_shared_folders enabled and configured ? // Is business_shared_folders enabled and configured ?
@ -799,7 +799,7 @@ int main(string[] args)
} else { } else {
writeln("Business Shared Folders configured = false"); writeln("Business Shared Folders configured = false");
} }
// Are webhooks enabled? // Are webhooks enabled?
writeln("Config option 'webhook_enabled' = ", cfg.getValueBool("webhook_enabled")); writeln("Config option 'webhook_enabled' = ", cfg.getValueBool("webhook_enabled"));
if (cfg.getValueBool("webhook_enabled")) { if (cfg.getValueBool("webhook_enabled")) {
@@ -808,12 +808,13 @@ int main(string[] args)
 			writeln("Config option 'webhook_listening_port' = ", cfg.getValueLong("webhook_listening_port"));
 			writeln("Config option 'webhook_expiration_interval' = ", cfg.getValueLong("webhook_expiration_interval"));
 			writeln("Config option 'webhook_renewal_interval' = ", cfg.getValueLong("webhook_renewal_interval"));
+			writeln("Config option 'webhook_retry_interval' = ", cfg.getValueLong("webhook_retry_interval"));
 		}
 		if (cfg.getValueBool("display_running_config")) {
 			writeln("-----------------------------------------------------------------");
 		}
// Do we exit? We only exit if --display-config has been used // Do we exit? We only exit if --display-config has been used
if (cfg.getValueBool("display_config")) { if (cfg.getValueBool("display_config")) {
return EXIT_SUCCESS; return EXIT_SUCCESS;
@ -833,7 +834,7 @@ int main(string[] args)
log.vdebug("Testing if we have exclusive access to local database file"); log.vdebug("Testing if we have exclusive access to local database file");
// Are we the only running instance? Test that we can open the database file path // Are we the only running instance? Test that we can open the database file path
itemDb = new ItemDatabase(cfg.databaseFilePath); itemDb = new ItemDatabase(cfg.databaseFilePath);
// did we successfully initialise the database class? // did we successfully initialise the database class?
if (!itemDb.isDatabaseInitialised()) { if (!itemDb.isDatabaseInitialised()) {
// no .. destroy class // no .. destroy class
@ -841,7 +842,7 @@ int main(string[] args)
// exit application // exit application
return EXIT_FAILURE; return EXIT_FAILURE;
} }
// If we have exclusive access we will not have exited // If we have exclusive access we will not have exited
// destroy access test // destroy access test
destroy(itemDb); destroy(itemDb);
@ -853,7 +854,7 @@ int main(string[] args)
safeRemove(cfg.uploadStateFilePath); safeRemove(cfg.uploadStateFilePath);
} }
} }
// Test if OneDrive service can be reached, exit if it cant be reached // Test if OneDrive service can be reached, exit if it cant be reached
log.vdebug("Testing network to ensure network connectivity to Microsoft OneDrive Service"); log.vdebug("Testing network to ensure network connectivity to Microsoft OneDrive Service");
online = testNetwork(cfg); online = testNetwork(cfg);
@ -911,13 +912,13 @@ int main(string[] args)
} }
} }
} }
// Check application version and Initialize OneDrive API, check for authorization // Check application version and Initialize OneDrive API, check for authorization
if (online) { if (online) {
// Check Application Version // Check Application Version
log.vlog("Checking Application Version ..."); log.vlog("Checking Application Version ...");
checkApplicationVersion(); checkApplicationVersion();
// we can only initialise if we are online // we can only initialise if we are online
log.vlog("Initializing the OneDrive API ..."); log.vlog("Initializing the OneDrive API ...");
oneDrive = new OneDriveApi(cfg); oneDrive = new OneDriveApi(cfg);
@ -956,7 +957,7 @@ int main(string[] args)
emptyParameter = "--destination-directory"; emptyParameter = "--destination-directory";
dataParameter = "--source-directory"; dataParameter = "--source-directory";
} }
log.error("ERROR: " ~ dataParameter ~ " was passed in without also using " ~ emptyParameter); log.error("ERROR: " ~ dataParameter ~ " was passed in without also using " ~ emptyParameter);
} }
// Use exit scopes to shutdown API // Use exit scopes to shutdown API
writeln(); writeln();
@ -1023,7 +1024,7 @@ int main(string[] args)
log.vdebug("Using database file: ", asNormalizedPath(cfg.databaseFilePath)); log.vdebug("Using database file: ", asNormalizedPath(cfg.databaseFilePath));
itemDb = new ItemDatabase(cfg.databaseFilePath); itemDb = new ItemDatabase(cfg.databaseFilePath);
} }
// did we successfully initialise the database class? // did we successfully initialise the database class?
if (!itemDb.isDatabaseInitialised()) { if (!itemDb.isDatabaseInitialised()) {
// no .. destroy class // no .. destroy class
@ -1154,14 +1155,14 @@ int main(string[] args)
// performing this action could have undesirable effects .. the user must accept this risk // performing this action could have undesirable effects .. the user must accept this risk
// what is the risk acceptance? // what is the risk acceptance?
bool resyncRiskAcceptance = false; bool resyncRiskAcceptance = false;
// need to prompt user // need to prompt user
char response; char response;
// warning message // warning message
writeln("\nThe use of --force-sync will reconfigure the application to use defaults. This may have untold and unknown future impacts."); writeln("\nThe use of --force-sync will reconfigure the application to use defaults. This may have untold and unknown future impacts.");
writeln("By proceeding in using this option you accept any impacts including any data loss that may occur as a result of using --force-sync."); writeln("By proceeding in using this option you accept any impacts including any data loss that may occur as a result of using --force-sync.");
write("\nAre you sure you wish to proceed with --force-sync [Y/N] "); write("\nAre you sure you wish to proceed with --force-sync [Y/N] ");
try { try {
// Attempt to read user response // Attempt to read user response
readf(" %c\n", &response); readf(" %c\n", &response);
@ -1169,7 +1170,7 @@ int main(string[] args)
// Caught an error // Caught an error
return EXIT_FAILURE; return EXIT_FAILURE;
} }
// Evaluate user repsonse // Evaluate user repsonse
if ((to!string(response) == "y") || (to!string(response) == "Y")) { if ((to!string(response) == "y") || (to!string(response) == "Y")) {
// User has accepted --force-sync risk to proceed // User has accepted --force-sync risk to proceed
@ -1177,7 +1178,7 @@ int main(string[] args)
// Are you sure you wish .. does not use writeln(); // Are you sure you wish .. does not use writeln();
write("\n"); write("\n");
} }
// Action based on response // Action based on response
if (!resyncRiskAcceptance){ if (!resyncRiskAcceptance){
// --force-sync not accepted // --force-sync not accepted
@ -1188,7 +1189,7 @@ int main(string[] args)
cfg.resetSkipToDefaults(); cfg.resetSkipToDefaults();
// update sync engine regex with reset defaults // update sync engine regex with reset defaults
selectiveSync.setDirMask(cfg.getValueString("skip_dir")); selectiveSync.setDirMask(cfg.getValueString("skip_dir"));
selectiveSync.setFileMask(cfg.getValueString("skip_file")); selectiveSync.setFileMask(cfg.getValueString("skip_file"));
} }
} }
@ -1248,7 +1249,7 @@ int main(string[] args)
log.log("WARNING: Local data loss MAY occur in this scenario."); log.log("WARNING: Local data loss MAY occur in this scenario.");
sync.setBypassDataPreservation(); sync.setBypassDataPreservation();
} }
// Do we configure to clean up local files if using --download-only ? // Do we configure to clean up local files if using --download-only ?
if ((cfg.getValueBool("download_only")) && (cfg.getValueBool("cleanup_local_files"))) { if ((cfg.getValueBool("download_only")) && (cfg.getValueBool("cleanup_local_files"))) {
// --download-only and --cleanup-local-files were passed in // --download-only and --cleanup-local-files were passed in
@ -1270,13 +1271,13 @@ int main(string[] args)
sync.setNationalCloudDeployment(); sync.setNationalCloudDeployment();
} }
} }
// Are we forcing to use /children scan instead of /delta to simulate National Cloud Deployment use of /children? // Are we forcing to use /children scan instead of /delta to simulate National Cloud Deployment use of /children?
if (cfg.getValueBool("force_children_scan")) { if (cfg.getValueBool("force_children_scan")) {
log.log("Forcing client to use /children scan rather than /delta to simulate National Cloud Deployment use of /children"); log.log("Forcing client to use /children scan rather than /delta to simulate National Cloud Deployment use of /children");
sync.setNationalCloudDeployment(); sync.setNationalCloudDeployment();
} }
// Do we need to display the function processing timing // Do we need to display the function processing timing
if (cfg.getValueBool("display_processing_time")) { if (cfg.getValueBool("display_processing_time")) {
log.log("Forcing client to display function processing times"); log.log("Forcing client to display function processing times");
@ -1324,14 +1325,14 @@ int main(string[] args)
// --create-share-link - Are we createing a shareable link for an existing file on OneDrive? // --create-share-link - Are we createing a shareable link for an existing file on OneDrive?
if (cfg.getValueString("create_share_link") != "") { if (cfg.getValueString("create_share_link") != "") {
// Query OneDrive for the file, and if valid, create a shareable link for the file // Query OneDrive for the file, and if valid, create a shareable link for the file
// By default, the shareable link will be read-only. // By default, the shareable link will be read-only.
// If the user adds: // If the user adds:
// --with-editing-perms // --with-editing-perms
// this will create a writeable link // this will create a writeable link
bool writeablePermissions = cfg.getValueBool("with_editing_perms"); bool writeablePermissions = cfg.getValueBool("with_editing_perms");
sync.createShareableLinkForFile(cfg.getValueString("create_share_link"), writeablePermissions); sync.createShareableLinkForFile(cfg.getValueString("create_share_link"), writeablePermissions);
// Exit application // Exit application
// Use exit scopes to shutdown API // Use exit scopes to shutdown API
return EXIT_SUCCESS; return EXIT_SUCCESS;
@ -1345,7 +1346,7 @@ int main(string[] args)
// Use exit scopes to shutdown API // Use exit scopes to shutdown API
return EXIT_SUCCESS; return EXIT_SUCCESS;
} }
// --modified-by - Are we listing the modified-by details of a provided path? // --modified-by - Are we listing the modified-by details of a provided path?
if (cfg.getValueString("modified_by") != "") { if (cfg.getValueString("modified_by") != "") {
// Query OneDrive for the file link // Query OneDrive for the file link
@ -1379,7 +1380,7 @@ int main(string[] args)
log.error("ERROR: Unsupported account type for syncing OneDrive Business Shared Folders"); log.error("ERROR: Unsupported account type for syncing OneDrive Business Shared Folders");
} }
} }
// Ensure that the value stored for cfg.getValueString("single_directory") does not contain any extra quotation marks // Ensure that the value stored for cfg.getValueString("single_directory") does not contain any extra quotation marks
if (cfg.getValueString("single_directory") != ""){ if (cfg.getValueString("single_directory") != ""){
string originalSingleDirectoryValue = cfg.getValueString("single_directory"); string originalSingleDirectoryValue = cfg.getValueString("single_directory");
@ -1409,7 +1410,7 @@ int main(string[] args)
if (online) { if (online) {
// set flag for exit scope // set flag for exit scope
synchronizeConfigured = true; synchronizeConfigured = true;
// Check user entry for local path - the above chdir means we are already in ~/OneDrive/ thus singleDirectory is local to this path // Check user entry for local path - the above chdir means we are already in ~/OneDrive/ thus singleDirectory is local to this path
if (cfg.getValueString("single_directory") != "") { if (cfg.getValueString("single_directory") != "") {
// Does the directory we want to sync actually exist? // Does the directory we want to sync actually exist?
@ -1521,7 +1522,7 @@ int main(string[] args)
immutable long fullScanFrequency = cfg.getValueLong("monitor_fullscan_frequency"); immutable long fullScanFrequency = cfg.getValueLong("monitor_fullscan_frequency");
MonoTime lastCheckTime = MonoTime.currTime(); MonoTime lastCheckTime = MonoTime.currTime();
MonoTime lastGitHubCheckTime = MonoTime.currTime(); MonoTime lastGitHubCheckTime = MonoTime.currTime();
long logMonitorCounter = 0; long logMonitorCounter = 0;
long fullScanCounter = 0; long fullScanCounter = 0;
// set fullScanRequired to true so that at application startup we perform a full walk // set fullScanRequired to true so that at application startup we perform a full walk
@ -1589,7 +1590,7 @@ int main(string[] args)
// update when we have performed this check // update when we have performed this check
lastGitHubCheckTime = MonoTime.currTime(); lastGitHubCheckTime = MonoTime.currTime();
} }
// monitor sync loop // monitor sync loop
logOutputMessage = "################################################## NEW LOOP ##################################################"; logOutputMessage = "################################################## NEW LOOP ##################################################";
if (displaySyncOptions) { if (displaySyncOptions) {
@ -1647,7 +1648,7 @@ int main(string[] args)
try { try {
// performance timing // performance timing
SysTime startSyncProcessingTime = Clock.currTime(); SysTime startSyncProcessingTime = Clock.currTime();
// perform a --monitor sync // perform a --monitor sync
if ((cfg.getValueLong("verbose") > 0) || (logMonitorCounter == logInterval) || (fullScanRequired) ) { if ((cfg.getValueLong("verbose") > 0) || (logMonitorCounter == logInterval) || (fullScanRequired) ) {
// log to console and log file if enabled // log to console and log file if enabled
@ -1695,25 +1696,25 @@ int main(string[] args)
} }
// performSync complete, set lastCheckTime to current time // performSync complete, set lastCheckTime to current time
lastCheckTime = MonoTime.currTime(); lastCheckTime = MonoTime.currTime();
// Display memory details before cleanup // Display memory details before cleanup
if (displayMemoryUsage) log.displayMemoryUsagePreGC(); if (displayMemoryUsage) log.displayMemoryUsagePreGC();
// Perform Garbage Cleanup // Perform Garbage Cleanup
GC.collect(); GC.collect();
// Display memory details after cleanup // Display memory details after cleanup
if (displayMemoryUsage) log.displayMemoryUsagePostGC(); if (displayMemoryUsage) log.displayMemoryUsagePostGC();
// If we did a full scan, make sure we merge the conents of the WAL and SHM to disk // If we did a full scan, make sure we merge the conents of the WAL and SHM to disk
if (fullScanRequired) { if (fullScanRequired) {
// Write WAL and SHM data to file for this loop // Write WAL and SHM data to file for this loop
log.vdebug("Merge contents of WAL and SHM files into main database file"); log.vdebug("Merge contents of WAL and SHM files into main database file");
itemDb.performVacuum(); itemDb.performVacuum();
} }
// reset fullScanRequired and syncListConfiguredFullScanOverride // reset fullScanRequired and syncListConfiguredFullScanOverride
fullScanRequired = false; fullScanRequired = false;
if (syncListConfigured) syncListConfiguredFullScanOverride = false; if (syncListConfigured) syncListConfiguredFullScanOverride = false;
// monitor loop complete // monitor loop complete
logOutputMessage = "################################################ LOOP COMPLETE ###############################################"; logOutputMessage = "################################################ LOOP COMPLETE ###############################################";
@ -1840,13 +1841,13 @@ void performSync(SyncEngine sync, string singleDirectory, bool downloadOnly, boo
// OneDrive First // OneDrive First
if (logLevel < MONITOR_LOG_QUIET) log.log("Syncing changes from selected OneDrive path ..."); if (logLevel < MONITOR_LOG_QUIET) log.log("Syncing changes from selected OneDrive path ...");
sync.applyDifferencesSingleDirectory(remotePath); sync.applyDifferencesSingleDirectory(remotePath);
// Is this a --download-only --cleanup-local-files request? // Is this a --download-only --cleanup-local-files request?
// If yes, scan for local changes - but --cleanup-local-files is being used, a further flag will trigger local file deletes rather than attempt to upload files to OneDrive // If yes, scan for local changes - but --cleanup-local-files is being used, a further flag will trigger local file deletes rather than attempt to upload files to OneDrive
if (cleanupLocalFiles) { if (cleanupLocalFiles) {
// --download-only and --cleanup-local-files were passed in // --download-only and --cleanup-local-files were passed in
log.log("Searching local filesystem for extra files and folders which need to be removed"); log.log("Searching local filesystem for extra files and folders which need to be removed");
sync.scanForDifferencesFilesystemScan(localPath); sync.scanForDifferencesFilesystemScan(localPath);
} else { } else {
// is this a --download-only request? // is this a --download-only request?
if (!downloadOnly) { if (!downloadOnly) {
@ -1894,13 +1895,13 @@ void performSync(SyncEngine sync, string singleDirectory, bool downloadOnly, boo
log.vdebug(syncCallLogOutput); log.vdebug(syncCallLogOutput);
} }
sync.applyDifferences(false); sync.applyDifferences(false);
// Is this a --download-only --cleanup-local-files request? // Is this a --download-only --cleanup-local-files request?
// If yes, scan for local changes - but --cleanup-local-files is being used, a further flag will trigger local file deletes rather than attempt to upload files to OneDrive // If yes, scan for local changes - but --cleanup-local-files is being used, a further flag will trigger local file deletes rather than attempt to upload files to OneDrive
if (cleanupLocalFiles) { if (cleanupLocalFiles) {
// --download-only and --cleanup-local-files were passed in // --download-only and --cleanup-local-files were passed in
log.log("Searching local filesystem for extra files and folders which need to be removed"); log.log("Searching local filesystem for extra files and folders which need to be removed");
sync.scanForDifferencesFilesystemScan(localPath); sync.scanForDifferencesFilesystemScan(localPath);
} else { } else {
// is this a --download-only request? // is this a --download-only request?
if (!downloadOnly) { if (!downloadOnly) {
@ -1916,14 +1917,14 @@ void performSync(SyncEngine sync, string singleDirectory, bool downloadOnly, boo
log.vdebug(logOutputMessage); log.vdebug(logOutputMessage);
log.vdebug(syncCallLogOutput); log.vdebug(syncCallLogOutput);
} }
SysTime startIntegrityCheckProcessingTime = Clock.currTime(); SysTime startIntegrityCheckProcessingTime = Clock.currTime();
if (sync.getPerformanceProcessingOutput()) { if (sync.getPerformanceProcessingOutput()) {
// performance timing for DB and file system integrity check - start // performance timing for DB and file system integrity check - start
writeln("============================================================"); writeln("============================================================");
writeln("Start Integrity Check Processing Time: ", startIntegrityCheckProcessingTime); writeln("Start Integrity Check Processing Time: ", startIntegrityCheckProcessingTime);
} }
// What sort of local scan do we want to do? // What sort of local scan do we want to do?
// In --monitor mode, when performing the DB scan, a race condition occurs where by if a file or folder is moved during this process // In --monitor mode, when performing the DB scan, a race condition occurs where by if a file or folder is moved during this process
// the inotify event is discarded once performSync() is finished (see m.update(false) above), so these events need to be handled // the inotify event is discarded once performSync() is finished (see m.update(false) above), so these events need to be handled
@ -1936,12 +1937,12 @@ void performSync(SyncEngine sync, string singleDirectory, bool downloadOnly, boo
} else { } else {
// --monitor in use // --monitor in use
// Use individual calls with inotify checks between to avoid a race condition between these 2 functions // Use individual calls with inotify checks between to avoid a race condition between these 2 functions
// Database scan integrity check to compare DB data vs actual content on disk to ensure what we think is local, is local // Database scan integrity check to compare DB data vs actual content on disk to ensure what we think is local, is local
// and that the data 'hash' as recorded in the DB equals the hash of the actual content // and that the data 'hash' as recorded in the DB equals the hash of the actual content
// This process can be extremely expensive time and CPU processing wise // This process can be extremely expensive time and CPU processing wise
// //
// fullScanRequired is set to TRUE when the application starts up, or the config option 'monitor_fullscan_frequency' count is reached // fullScanRequired is set to TRUE when the application starts up, or the config option 'monitor_fullscan_frequency' count is reached
// By default, 'monitor_fullscan_frequency' = 12, and 'monitor_interval' = 300, meaning that by default, a full database consistency check // By default, 'monitor_fullscan_frequency' = 12, and 'monitor_interval' = 300, meaning that by default, a full database consistency check
// is done once an hour. // is done once an hour.
// //
// To change this behaviour adjust 'monitor_interval' and 'monitor_fullscan_frequency' to desired values in the application config file // To change this behaviour adjust 'monitor_interval' and 'monitor_fullscan_frequency' to desired values in the application config file
@ -1954,14 +1955,14 @@ void performSync(SyncEngine sync, string singleDirectory, bool downloadOnly, boo
log.vdebug("NOT performing Database Integrity Check .. fullScanRequired = FALSE"); log.vdebug("NOT performing Database Integrity Check .. fullScanRequired = FALSE");
m.update(true); m.update(true);
} }
// Filesystem walk to find new files not uploaded // Filesystem walk to find new files not uploaded
log.vdebug("Searching local filesystem for new data"); log.vdebug("Searching local filesystem for new data");
sync.scanForDifferencesFilesystemScan(localPath); sync.scanForDifferencesFilesystemScan(localPath);
// handle any inotify events that occured 'whilst' we were scanning the local filesystem // handle any inotify events that occured 'whilst' we were scanning the local filesystem
m.update(true); m.update(true);
} }
SysTime endIntegrityCheckProcessingTime = Clock.currTime(); SysTime endIntegrityCheckProcessingTime = Clock.currTime();
if (sync.getPerformanceProcessingOutput()) { if (sync.getPerformanceProcessingOutput()) {
// performance timing for DB and file system integrity check - finish // performance timing for DB and file system integrity check - finish
@ -1969,7 +1970,7 @@ void performSync(SyncEngine sync, string singleDirectory, bool downloadOnly, boo
writeln("Elapsed Function Processing Time: ", (endIntegrityCheckProcessingTime - startIntegrityCheckProcessingTime)); writeln("Elapsed Function Processing Time: ", (endIntegrityCheckProcessingTime - startIntegrityCheckProcessingTime));
writeln("============================================================"); writeln("============================================================");
} }
// At this point, all OneDrive changes / local changes should be uploaded and in sync // At this point, all OneDrive changes / local changes should be uploaded and in sync
// This MAY not be the case when using sync_list, thus a full walk of OneDrive ojects is required // This MAY not be the case when using sync_list, thus a full walk of OneDrive ojects is required


@@ -201,8 +201,8 @@ final class OneDriveApi
 	private SysTime accessTokenExpiration;
 	private HTTP http;
 	private OneDriveWebhook webhook;
-	private SysTime subscriptionExpiration;
-	private Duration subscriptionExpirationInterval, subscriptionRenewalInterval;
+	private SysTime subscriptionExpiration, subscriptionLastErrorAt;
+	private Duration subscriptionExpirationInterval, subscriptionRenewalInterval, subscriptionRetryInternal;
 	private string notificationUrl;
 	// if true, every new access token is printed
@@ -240,7 +240,7 @@ final class OneDriveApi
 		if (cfg.getValueBool("debug_https")) {
 			http.verbose = true;
 			.debugResponse = true;
 		// Output what options we are using so that in the debug log this can be tracked
 		log.vdebug("http.dnsTimeout = ", cfg.getValueLong("dns_timeout"));
 		log.vdebug("http.connectTimeout = ", cfg.getValueLong("connect_timeout"));
@@ -475,7 +475,7 @@ final class OneDriveApi
 		// https://curl.se/libcurl/c/CURLOPT_FORBID_REUSE.html
 		// Ensure that we ARE reusing connections - setting to 0 ensures that we are reusing connections
 		http.handle.set(CurlOption.forbid_reuse,0);
 		// Do we set the dryRun handlers?
 		if (cfg.getValueBool("dry_run")) {
 			.dryRun = true;
@@ -485,8 +485,10 @@ final class OneDriveApi
 		}
 		subscriptionExpiration = Clock.currTime(UTC());
+		subscriptionLastErrorAt = SysTime.fromUnixTime(0);
 		subscriptionExpirationInterval = dur!"seconds"(cfg.getValueLong("webhook_expiration_interval"));
 		subscriptionRenewalInterval = dur!"seconds"(cfg.getValueLong("webhook_renewal_interval"));
+		subscriptionRetryInternal = dur!"seconds"(cfg.getValueLong("webhook_retry_interval"));
 		notificationUrl = cfg.getValueString("webhook_public_url");
 	}
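Note that initialising `subscriptionLastErrorAt` to the Unix epoch guarantees that the first elapsed-time check in the retry gate (see the hunk below) always passes, so the initial subscription attempt is never delayed.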
@@ -576,7 +578,7 @@ final class OneDriveApi
 			// read-write authentication scopes will be used (default)
 			authScope = "&scope=Files.ReadWrite%20Files.ReadWrite.All%20Sites.ReadWrite.All%20offline_access&response_type=code&prompt=login&redirect_uri=";
 		}
 		string url = authUrl ~ "?client_id=" ~ clientId ~ authScope ~ redirectUrl;
 		string authFilesString = cfg.getValueString("auth_files");
 		string authResponseString = cfg.getValueString("auth_response");
@@ -586,7 +588,7 @@ final class OneDriveApi
 			string[] authFiles = authFilesString.split(":");
 			string authUrl = authFiles[0];
 			string responseUrl = authFiles[1];
 			try {
 				// Try and write out the auth URL to the nominated file
 				auto authUrlFile = File(authUrl, "w");
@@ -598,7 +600,7 @@ final class OneDriveApi
 				displayFileSystemErrorMessage(e.msg, getFunctionName!({}));
 				return false;
 			}
 			while (!exists(responseUrl)) {
 				Thread.sleep(dur!("msecs")(100));
 			}
@@ -636,7 +638,7 @@ final class OneDriveApi
 		redeemToken(c.front);
 		return true;
 	}
 	string getSiteSearchUrl()
 	{
 		// Return the actual siteSearchUrl being used and/or requested when performing 'siteQuery = onedrive.o365SiteSearch(nextLink);' call
@@ -1004,17 +1006,25 @@ final class OneDriveApi
spawn(&OneDriveWebhook.serve);
}
- if (!hasValidSubscription()) {
- createSubscription();
- } else if (isSubscriptionUpForRenewal()) {
- try {
+ auto elapsed = Clock.currTime(UTC()) - subscriptionLastErrorAt;
+ if (elapsed < subscriptionRetryInternal) {
+ return;
+ }
+ try {
+ if (!hasValidSubscription()) {
+ createSubscription();
+ } else if (isSubscriptionUpForRenewal()) {
renewSubscription();
- } catch (OneDriveException e) {
- if (e.httpStatusCode == 404) {
- log.log("The subscription is not found on the server. Recreating subscription ...");
- createSubscription();
- }
}
+ } catch (OneDriveException e) {
+ logSubscriptionError(e);
+ subscriptionLastErrorAt = Clock.currTime(UTC());
+ log.log("Will retry creating or renewing subscription in ", subscriptionRetryInternal);
+ } catch (JSONException e) {
+ log.error("ERROR: Unexpected JSON error: ", e.msg);
+ subscriptionLastErrorAt = Clock.currTime(UTC());
+ log.log("Will retry creating or renewing subscription in ", subscriptionRetryInternal);
}
}
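A quick illustration of the retry gate introduced above: each pass through the monitor loop subtracts the timestamp of the last subscription error from the current time and only attempts a create/renew once the configured retry interval has elapsed. Below is a minimal standalone sketch of the same pattern; the interval value and variable names are hypothetical stand-ins, not the client's actual members.

```d
import std.datetime.systime : Clock, SysTime;
import std.datetime.timezone : UTC;
import core.time : dur, Duration;
import std.stdio : writeln;

void main() {
    // Hypothetical stand-ins for the configured values
    Duration retryInterval = dur!"seconds"(60);    // analogous to webhook_retry_interval
    SysTime lastErrorAt = SysTime.fromUnixTime(0); // no error recorded yet

    // Same gate as the patch: skip this attempt if the last error is too recent
    auto elapsed = Clock.currTime(UTC()) - lastErrorAt;
    if (elapsed < retryInterval) {
        writeln("Last error too recent, skipping this attempt");
        return;
    }
    writeln("Proceeding with create/renew attempt");
}
```

Because the error timestamp is initialised to the Unix epoch, the very first attempt is never delayed; the gate only throttles attempts after a failure has been recorded.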
@@ -1039,7 +1049,7 @@ final class OneDriveApi
} else {
resourceItem = "/me/drive/root";
}
// create JSON request to create webhook subscription
const JSONValue request = [
"changeType": "updated",
@@ -1049,22 +1059,56 @@ final class OneDriveApi
"clientState": randomUUID().toString()
];
http.addRequestHeader("Content-Type", "application/json");
- JSONValue response;
try {
- response = post(url, request.toString());
+ JSONValue response = post(url, request.toString());
- } catch (OneDriveException e) {
- displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
- // We need to exit here, user needs to fix issue
- log.error("ERROR: Unable to initialize subscriptions for updates. Please fix this issue.");
- shutdown();
- exit(-1);
- }
// Save important subscription metadata including id and expiration
subscriptionId = response["id"].str;
subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
+ log.log("Created new subscription ", subscriptionId, " with expiration: ", subscriptionExpiration.toISOExtString());
+ } catch (OneDriveException e) {
+ if (e.httpStatusCode == 409) {
+ // Take over an existing subscription on HTTP 409.
+ //
+ // Sample 409 error:
+ // {
+ // "error": {
+ // "code": "ObjectIdentifierInUse",
+ // "innerError": {
+ // "client-request-id": "615af209-467a-4ab7-8eff-27c1d1efbc2d",
+ // "date": "2023-09-26T09:27:45",
+ // "request-id": "615af209-467a-4ab7-8eff-27c1d1efbc2d"
+ // },
+ // "message": "Subscription Id c0bba80e-57a3-43a7-bac2-e6f525a76e7c already exists for the requested combination"
+ // }
+ // }
+ // Make sure the error code is "ObjectIdentifierInUse"
+ try {
+ if (e.error["error"]["code"].str != "ObjectIdentifierInUse") {
+ throw e;
+ }
+ } catch (JSONException jsonEx) {
+ throw e;
+ }
+ // Extract the existing subscription id from the error message
+ import std.regex;
+ auto idReg = ctRegex!(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", "i");
+ auto m = matchFirst(e.error["error"]["message"].str, idReg);
+ if (!m) {
+ throw e;
+ }
+ // Save the subscription id and renew it immediately since we don't know the expiration timestamp
+ subscriptionId = m[0];
+ log.log("Found existing subscription ", subscriptionId);
+ renewSubscription();
+ } else {
+ throw e;
+ }
+ }
}
private void renewSubscription() {
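To see the HTTP 409 takeover path in isolation, the sketch below runs the same case-insensitive UUID regex against the sample "ObjectIdentifierInUse" message quoted in the comment above. It is a standalone illustration using only the D standard library, not part of the patch.

```d
import std.regex : ctRegex, matchFirst;
import std.stdio : writeln;

void main() {
    // Sample message from the 409 "ObjectIdentifierInUse" error shown above
    string message = "Subscription Id c0bba80e-57a3-43a7-bac2-e6f525a76e7c already exists for the requested combination";

    // Same pattern as the patch: a case-insensitive UUID
    auto idReg = ctRegex!(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", "i");
    auto m = matchFirst(message, idReg);
    if (!m) {
        writeln("No subscription id found in the error message");
        return;
    }
    // m[0] is the whole match, i.e. the existing subscription id to take over
    writeln("Existing subscription id: ", m[0]);
}
```

If no UUID can be extracted, the client rethrows the original exception rather than guessing, which is the same fallback the patch takes.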
@@ -1077,10 +1121,23 @@ final class OneDriveApi
"expirationDateTime": expirationDateTime.toISOExtString()
];
http.addRequestHeader("Content-Type", "application/json");
- JSONValue response = patch(url, request.toString());
- // Update subscription expiration from the response
- subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
+ try {
+ JSONValue response = patch(url, request.toString());
+ // Update subscription expiration from the response
+ subscriptionExpiration = SysTime.fromISOExtString(response["expirationDateTime"].str);
+ log.log("Renewed subscription ", subscriptionId, " with expiration: ", subscriptionExpiration.toISOExtString());
+ } catch (OneDriveException e) {
+ if (e.httpStatusCode == 404) {
+ log.log("The subscription is not found on the server. Recreating subscription ...");
+ subscriptionId = null;
+ subscriptionExpiration = Clock.currTime(UTC());
+ createSubscription();
+ } else {
+ throw e;
+ }
+ }
}
private void deleteSubscription() {
@@ -1094,6 +1151,100 @@ final class OneDriveApi
log.log("Deleted subscription");
}
+ private void logSubscriptionError(OneDriveException e) {
+ if (e.httpStatusCode == 400) {
+ // Log known 400 error where Microsoft cannot get a 200 OK from the webhook endpoint
+ //
+ // Sample 400 error:
+ // {
+ // "error": {
+ // "code": "InvalidRequest",
+ // "innerError": {
+ // "client-request-id": "<uuid>",
+ // "date": "<timestamp>",
+ // "request-id": "<uuid>"
+ // },
+ // "message": "Subscription validation request failed. Notification endpoint must respond with 200 OK to validation request."
+ // }
+ // }
+ try {
+ if (e.error["error"]["code"].str == "InvalidRequest") {
+ import std.regex;
+ auto msgReg = ctRegex!(r"Subscription validation request failed", "i");
+ auto m = matchFirst(e.error["error"]["message"].str, msgReg);
+ if (m) {
+ log.error("ERROR: Cannot create or renew subscription: Microsoft did not get 200 OK from the webhook endpoint.");
+ return;
+ }
+ }
+ } catch (JSONException) {
+ // fallthrough
+ }
+ } else if (e.httpStatusCode == 401) {
+ // Log known 401 error where authentication failed
+ //
+ // Sample 401 error:
+ // {
+ // "error": {
+ // "code": "ExtensionError",
+ // "innerError": {
+ // "client-request-id": "<uuid>",
+ // "date": "<timestamp>",
+ // "request-id": "<uuid>"
+ // },
+ // "message": "Operation: Create; Exception: [Status Code: Unauthorized; Reason: Authentication failed]"
+ // }
+ // }
+ try {
+ if (e.error["error"]["code"].str == "ExtensionError") {
+ import std.regex;
+ auto msgReg = ctRegex!(r"Authentication failed", "i");
+ auto m = matchFirst(e.error["error"]["message"].str, msgReg);
+ if (m) {
+ log.error("ERROR: Cannot create or renew subscription: Authentication failed.");
+ return;
+ }
+ }
+ } catch (JSONException) {
+ // fallthrough
+ }
+ } else if (e.httpStatusCode == 403) {
+ // Log known 403 error where the number of subscriptions on item has exceeded limit
+ //
+ // Sample 403 error:
+ // {
+ // "error": {
+ // "code": "ExtensionError",
+ // "innerError": {
+ // "client-request-id": "<uuid>",
+ // "date": "<timestamp>",
+ // "request-id": "<uuid>"
+ // },
+ // "message": "Operation: Create; Exception: [Status Code: Forbidden; Reason: Number of subscriptions on item has exceeded limit]"
+ // }
+ // }
+ try {
+ if (e.error["error"]["code"].str == "ExtensionError") {
+ import std.regex;
+ auto msgReg = ctRegex!(r"Number of subscriptions on item has exceeded limit", "i");
+ auto m = matchFirst(e.error["error"]["message"].str, msgReg);
+ if (m) {
+ log.error("ERROR: Cannot create or renew subscription: Number of subscriptions has exceeded limit.");
+ return;
+ }
+ }
+ } catch (JSONException) {
+ // fallthrough
+ }
+ }
+ // Log detailed message for unknown errors
+ log.error("ERROR: Cannot create or renew subscription.");
+ displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
+ }
private void redeemToken(const(char)[] authCode)
{
const(char)[] postData =
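As a rough standalone illustration of how logSubscriptionError classifies a known failure, the sketch below applies the same two-step check (error code first, then a regex over the message) to an abridged copy of the sample 403 body quoted above. The JSON literal is illustrative, not a captured API response.

```d
import std.json : parseJSON, JSONException;
import std.regex : ctRegex, matchFirst;
import std.stdio : writeln;

void main() {
    // Abridged form of the sample 403 error quoted above
    auto err = parseJSON(`{
        "error": {
            "code": "ExtensionError",
            "message": "Operation: Create; Exception: [Status Code: Forbidden; Reason: Number of subscriptions on item has exceeded limit]"
        }
    }`);

    try {
        if (err["error"]["code"].str == "ExtensionError") {
            auto msgReg = ctRegex!(r"Number of subscriptions on item has exceeded limit", "i");
            if (matchFirst(err["error"]["message"].str, msgReg)) {
                writeln("ERROR: Cannot create or renew subscription: Number of subscriptions has exceeded limit.");
                return;
            }
        }
    } catch (JSONException) {
        // Unexpected shape: fall through to the generic message
    }
    writeln("ERROR: Cannot create or renew subscription.");
}
```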
@@ -1148,7 +1299,7 @@ final class OneDriveApi
}
}
}
if ("access_token" in response){
accessToken = "bearer " ~ response["access_token"].str();
refreshToken = response["refresh_token"].str();
@@ -1229,11 +1380,11 @@ final class OneDriveApi
{
// Threshold for displaying download bar
long thresholdFileSize = 4 * 2^^20; // 4 MiB
// To support marking of partially-downloaded files,
string originalFilename = filename;
string downloadFilename = filename ~ ".partial";
// open downloadFilename as write in binary mode
auto file = File(downloadFilename, "wb");
@@ -1302,7 +1453,7 @@ final class OneDriveApi
// Data Received = 13685777
// Expected Total = 52428800
// Percent Complete = 26
if (cfg.getValueLong("rate_limit") > 0) {
// User configured rate limit
// How much data should be in each segment to qualify for 5%
@@ -1314,7 +1465,7 @@ final class OneDriveApi
if ((dlnow > thisSegmentData) && (dlnow < nextSegmentData) && (previousProgressPercent != currentDLPercent) || (dlnow == dltotal)) {
// Downloaded data equals approx 5%
log.vdebug("Incrementing Progress Bar using calculated 5% of data received");
// Downloading 50% |oooooooooooooooooooo | ETA 00:01:40
// increment progress bar
p.next();
// update values
@@ -1328,7 +1479,7 @@ final class OneDriveApi
if ((isIdentical(fmod(currentDLPercent, percentCheck), 0.0)) && (previousProgressPercent != currentDLPercent)) {
// currentDLPercent matches a new increment
log.vdebug("Incrementing Progress Bar using fmod match");
// Downloading 50% |oooooooooooooooooooo | ETA 00:01:40
// increment progress bar
p.next();
// update values
@@ -1491,7 +1642,7 @@ final class OneDriveApi
log.vdebug("onedrive.perform() Generated a OneDrive CurlException");
auto errorArray = splitLines(e.msg);
string errorMessage = errorArray[0];
// what is contained in the curl error message?
if (canFind(errorMessage, "Couldn't connect to server on handle") || canFind(errorMessage, "Couldn't resolve host name on handle") || canFind(errorMessage, "Timeout was reached on handle")) {
// This is a curl timeout
@@ -1505,12 +1656,12 @@ final class OneDriveApi
int timestampAlign = 0;
bool retrySuccess = false;
SysTime currentTime;
// what caused the initial curl exception?
if (canFind(errorMessage, "Couldn't connect to server on handle")) log.vdebug("Unable to connect to server - HTTPS access blocked?");
if (canFind(errorMessage, "Couldn't resolve host name on handle")) log.vdebug("Unable to resolve server - DNS access blocked?");
if (canFind(errorMessage, "Timeout was reached on handle")) log.vdebug("A timeout was triggered - data too slow, no response ... use --debug-https to diagnose further");
while (!retrySuccess){
try {
// configure libcurl to perform a fresh connection
@@ -1540,16 +1691,16 @@ final class OneDriveApi
if (canFind(e.msg, "Couldn't connect to server on handle")) {
log.log(" - Check HTTPS access or Firewall Rules");
timestampAlign = 9;
}
if (canFind(e.msg, "Couldn't resolve host name on handle")) {
log.log(" - Check DNS resolution or Firewall Rules");
timestampAlign = 0;
}
// increment backoff interval
backoffInterval++;
int thisBackOffInterval = retryAttempts*backoffInterval;
// display retry information
currentTime.fracSecs = Duration.zero;
auto timeString = currentTime.toString();
@@ -1558,13 +1709,13 @@ final class OneDriveApi
if (thisBackOffInterval > maxBackoffInterval) {
thisBackOffInterval = maxBackoffInterval;
}
// detail when the next attempt will be tried
// factor in the delay for curl to generate the exception - otherwise the next timestamp appears to be 'out' even though technically correct
auto nextRetry = currentTime + dur!"seconds"(thisBackOffInterval) + dur!"seconds"(timestampAlign);
log.vlog(" Next retry in approx: ", (thisBackOffInterval + timestampAlign), " seconds");
log.vlog(" Next retry approx: ", nextRetry);
// thread sleep
Thread.sleep(dur!"seconds"(thisBackOffInterval));
}
@@ -1585,7 +1736,7 @@ final class OneDriveApi
// Some other error was returned
log.error(" Error Message: ", errorMessage);
log.error(" Calling Function: ", getFunctionName!({}));
// Was this a curl initialization error?
if (canFind(errorMessage, "Failed initialization on handle")) {
// initialization error ... prevent a run-away process if we have zero disk space

View file

@@ -96,7 +96,7 @@ Regex!char wild2regex(const(char)[] pattern)
break;
case ' ':
str ~= "\\s+";
break;
case '/':
str ~= "\\/";
break;
@@ -129,10 +129,10 @@ bool testNetwork(Config cfg)
http.dataTimeout = (dur!"seconds"(cfg.getValueLong("data_timeout")));
// maximum time any operation is allowed to take
// This includes dns resolution, connecting, data transfer, etc.
http.operationTimeout = (dur!"seconds"(cfg.getValueLong("operation_timeout")));
// What IP protocol version should be used when using Curl - IPv4 & IPv6, IPv4 or IPv6
http.handle.set(CurlOption.ipresolve,cfg.getValueLong("ip_protocol_version")); // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
// HTTP connection test method
http.method = HTTP.Method.head;
// Attempt to contact the Microsoft Online Service
@@ -190,7 +190,7 @@ bool isValidName(string path)
// Restriction and limitations about windows naming files
// https://msdn.microsoft.com/en-us/library/aa365247
// https://support.microsoft.com/en-us/help/3125202/restrictions-and-limitations-when-you-sync-files-and-folders
// allow root item
if (path == ".") {
return true;
@@ -210,7 +210,7 @@ bool isValidName(string path)
);
auto m = match(itemName, invalidNameReg);
matched = m.empty;
// Additional explicit validation checks
if (itemName == ".lock") {matched = false;}
if (itemName == "desktop.ini") {matched = false;}
@@ -218,7 +218,7 @@ bool isValidName(string path)
if(canFind(itemName, "_vti_")){matched = false;}
// Item name cannot equal '~'
if (itemName == "~") {matched = false;}
// return response
return matched;
}
@@ -229,7 +229,7 @@ bool containsBadWhiteSpace(string path)
if (path == ".") {
return true;
}
// https://github.com/abraunegg/onedrive/issues/35
// Issue #35 presented an interesting issue where the filename contained a newline item
// 'State-of-the-art, challenges, and open issues in the integration of Internet of'$'\n''Things and Cloud Computing.pdf'
@@ -237,7 +237,7 @@ bool containsBadWhiteSpace(string path)
// /v1.0/me/drive/root:/.%2FState-of-the-art%2C%20challenges%2C%20and%20open%20issues%20in%20the%20integration%20of%20Internet%20of%0AThings%20and%20Cloud%20Computing.pdf
// The '$'\n'' is translated to %0A which causes the OneDrive query to fail
// Check for the presence of '%0A' via regex
string itemName = encodeComponent(baseName(path));
auto invalidWhitespaceReg =
ctRegex!(
@@ -254,12 +254,12 @@ bool containsASCIIHTMLCodes(string path)
// If a filename contains ASCII HTML codes, regardless of if it gets encoded, it generates an error
// Check if the filename contains an ASCII HTML code sequence
auto invalidASCIICode =
ctRegex!(
// Check to see if &#XXXX is in the filename
`(?:&#|&#[0-9][0-9]|&#[0-9][0-9][0-9]|&#[0-9][0-9][0-9][0-9])`
);
auto m = match(path, invalidASCIICode);
return m.empty;
}
@@ -276,14 +276,15 @@ void displayOneDriveErrorMessage(string message, string callingFunction)
// extra debug
log.vdebug("Raw Error Data: ", message);
log.vdebug("JSON Message: ", errorMessage);
// What is the reason for the error
if (errorMessage.type() == JSONType.object) {
// configure the error reason
string errorReason;
+ string errorCode;
string requestDate;
string requestId;
// set the reason for the error
try {
// Use error_description as reason
@@ -291,15 +292,15 @@ void displayOneDriveErrorMessage(string message, string callingFunction)
} catch (JSONException e) {
// we dont want to do anything here
}
// set the reason for the error
try {
// Use ["error"]["message"] as reason
errorReason = errorMessage["error"]["message"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
// Display the error reason
if (errorReason.startsWith("<!DOCTYPE")) {
// a HTML Error Reason was given
@@ -309,34 +310,43 @@ void displayOneDriveErrorMessage(string message, string callingFunction)
// a non HTML Error Reason was given
log.error(" Error Reason: ", errorReason);
}
+ // Get the error code if available
+ try {
+ // Use ["error"]["code"] as code
+ errorCode = errorMessage["error"]["code"].str;
+ } catch (JSONException e) {
+ // we dont want to do anything here
+ }
// Get the date of request if available
try {
// Use ["error"]["innerError"]["date"] as date
requestDate = errorMessage["error"]["innerError"]["date"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
// Get the request-id if available
try {
// Use ["error"]["innerError"]["request-id"] as request-id
requestId = errorMessage["error"]["innerError"]["request-id"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
- // Display the date and request id if available
+ // Display the error code, date and request id if available
+ if (errorCode != "") log.error(" Error Code: ", errorCode);
if (requestDate != "") log.error(" Error Timestamp: ", requestDate);
if (requestId != "") log.error(" API Request ID: ", requestId);
}
// Where in the code was this error generated
log.vlog(" Calling Function: ", callingFunction);
}
// Parse and display error message received from the local file system
void displayFileSystemErrorMessage(string message, string callingFunction)
{
writeln();
log.error("ERROR: The local file system returned an error with the following message:");
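The displayOneDriveErrorMessage change above adds the Graph "error.code" field to the error report when it is present. Below is a small self-contained sketch of that extraction using a hypothetical but typical Graph error envelope; when the field is missing the lookup throws and errorCode simply stays empty, so nothing extra is printed.

```d
import std.json : parseJSON, JSONException;
import std.stdio : writeln;

void main() {
    // A typical Graph-style error body; "code" may or may not be present
    auto errorMessage = parseJSON(`{"error": {"code": "ObjectIdentifierInUse", "message": "Subscription already exists"}}`);

    string errorCode;
    try {
        // Same lookup as the patch: use ["error"]["code"] when available
        errorCode = errorMessage["error"]["code"].str;
    } catch (JSONException e) {
        // Field absent or wrong type: leave errorCode empty
    }
    if (errorCode != "") writeln(" Error Code: ", errorCode);
}
```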
@@ -367,14 +377,14 @@ JSONValue getLatestReleaseDetails() {
JSONValue versionDetails;
string latestTag;
string publishedDate;
try {
content = get("https://api.github.com/repos/abraunegg/onedrive/releases/latest");
} catch (CurlException e) {
// curl generated an error - meaning we could not query GitHub
log.vdebug("Unable to query GitHub for latest release");
}
try {
githubLatest = content.parseJSON();
} catch (JSONException e) {
@@ -382,7 +392,7 @@ JSONValue getLatestReleaseDetails() {
log.vdebug("Unable to parse GitHub JSON response");
githubLatest = parseJSON("{}");
}
// githubLatest has to be a valid JSON object
if (githubLatest.type() == JSONType.object){
// use the returned tag_name
@@ -409,15 +419,15 @@ JSONValue getLatestReleaseDetails() {
log.vdebug("Invalid JSON Object. Setting GitHub 'tag_name' release version to 0.0.0");
latestTag = "0.0.0";
log.vdebug("Invalid JSON Object. Setting GitHub 'published_at' date to 2018-07-18T18:00:00Z");
publishedDate = "2018-07-18T18:00:00Z";
}
// return the latest github version and published date as our own JSON
versionDetails = [
"latestTag": JSONValue(latestTag),
"publishedDate": JSONValue(publishedDate)
];
// return JSON
return versionDetails;
}
@@ -431,14 +441,14 @@ JSONValue getCurrentVersionDetails(string thisVersion) {
JSONValue versionDetails;
string versionTag = "v" ~ thisVersion;
string publishedDate;
try {
content = get("https://api.github.com/repos/abraunegg/onedrive/releases");
} catch (CurlException e) {
// curl generated an error - meaning we could not query GitHub
log.vdebug("Unable to query GitHub for release details");
}
try {
githubDetails = content.parseJSON();
} catch (JSONException e) {
@@ -446,7 +456,7 @@ JSONValue getCurrentVersionDetails(string thisVersion) {
log.vdebug("Unable to parse GitHub JSON response");
githubDetails = parseJSON("{}");
}
// githubDetails has to be a valid JSON array
if (githubDetails.type() == JSONType.array){
foreach (searchResult; githubDetails.array) {
@@ -458,7 +468,7 @@ JSONValue getCurrentVersionDetails(string thisVersion) {
publishedDate = searchResult["published_at"].str;
}
}
if (publishedDate.empty) {
// empty .. no version match ?
// set to v2.0.0 release date
@@ -468,15 +478,15 @@ JSONValue getCurrentVersionDetails(string thisVersion) {
} else {
// JSONValue is not an Array
log.vdebug("Invalid JSON Array. Setting GitHub 'published_at' date to 2018-07-18T18:00:00Z");
publishedDate = "2018-07-18T18:00:00Z";
}
// return the latest github version and published date as our own JSON
versionDetails = [
"versionTag": JSONValue(thisVersion),
"publishedDate": JSONValue(publishedDate)
];
// return JSON
return versionDetails;
}
@@ -489,7 +499,7 @@ void checkApplicationVersion() {
SysTime publishedDate = SysTime.fromISOExtString(latestVersionDetails["publishedDate"].str).toUTC();
SysTime releaseGracePeriod = publishedDate;
SysTime currentTime = Clock.currTime().toUTC();
// drop fraction seconds
publishedDate.fracSecs = Duration.zero;
currentTime.fracSecs = Duration.zero;
@@ -500,20 +510,20 @@ void checkApplicationVersion() {
// what is this clients version?
auto currentVersionArray = strip(strip(import("version"), "v")).split("-");
string applicationVersion = currentVersionArray[0];
// debug output
log.vdebug("applicationVersion: ", applicationVersion);
log.vdebug("latestVersion: ", latestVersion);
log.vdebug("publishedDate: ", publishedDate);
log.vdebug("currentTime: ", currentTime);
log.vdebug("releaseGracePeriod: ", releaseGracePeriod);
// display details if not current
// is application version is older than available on GitHub
if (applicationVersion != latestVersion) {
// application version is different
bool displayObsolete = false;
// what warning do we present?
if (applicationVersion < latestVersion) {
// go get this running version details
@@ -521,12 +531,12 @@ void checkApplicationVersion() {
SysTime thisVersionPublishedDate = SysTime.fromISOExtString(thisVersionDetails["publishedDate"].str).toUTC();
thisVersionPublishedDate.fracSecs = Duration.zero;
log.vdebug("thisVersionPublishedDate: ", thisVersionPublishedDate);
// the running version grace period is its release date + 1 month
SysTime thisVersionReleaseGracePeriod = thisVersionPublishedDate;
thisVersionReleaseGracePeriod = thisVersionReleaseGracePeriod.add!"months"(1);
log.vdebug("thisVersionReleaseGracePeriod: ", thisVersionReleaseGracePeriod);
// is this running version obsolete ?
if (!displayObsolete) {
// if releaseGracePeriod > currentTime
@@ -539,7 +549,7 @@ void checkApplicationVersion() {
displayObsolete = true;
}
}
// display version response
writeln();
if (!displayObsolete) {