Fix Bug #3355: Fix long-running large uploads (250GB+) failing due to an expired access token (#3361)

* Revert to the v2.5.5 performSessionFileUpload() and apply a minimal change to upload session offset handling to prevent desynchronisation on large files
* Add specific 403 handler for when the upload session URL itself expires
* Add 'file_fragment_size' configuration option
* Clean up debug logging output
* Add 'tempauth' to spelling words
* Update documentation URLs
* Ensure that on each fragment upload, whilst the application is using the 'tempauth' token for the session upload, the global OAuth2 token is checked for validity and refreshed if required
* Add limit check for 'file_fragment_size' option
* Add to default 'config' file
* Update documentation for 'file_fragment_size'
* Add 'file_fragment_size' to --display-config output
* Add --file-fragment-size CLI option to enable use via Docker
* Add to manpage
* Update Docker entrypoint
* Update Docker | Podman documentation
* Update logging output to include connection method to URL
* Update Upload Session URL expiry logging to include UTC and LocalTime values
* Update comment which was dropped / missed
* Clarify that this is the OAuth2 Access Token
* Clarify that the expiry timestamp is localTime
* Update PR to dynamically use the maximum fragment size if fileSize > 100 MiB
* Enforce that the fragment size is a multiple of 320 KiB to align with Microsoft documentation
* Fix Docker entrypoint and confirm working for ONEDRIVE_FILE_FRAGMENT_SIZE
* Change 'defaultMaxFileFragmentSize' to 60
* Revise fragmentSize calculation to be as close to 60 MiB as possible without breaching the Microsoft-documented threshold
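The fragment-size rules described in the bullets above (default 10 MiB, automatic use of the maximum for files over 100 MiB, rounding down to a 320 KiB multiple, staying below the documented 60 MiB threshold) can be sketched as follows. This is a minimal Python illustration of the calculation only; the client itself implements this in D, and the function name here is hypothetical:

```python
# Sketch of the fragment-size selection logic (all sizes in bytes).
CHUNK_SIZE = 327_680                  # 320 KiB - Microsoft requires fragment
                                      # sizes to be a multiple of this value
MAX_FRAGMENT_BYTES = 60 * 1_048_576   # 60 MiB documented upper threshold
HUNDRED_MIB = 100 * 1_048_576

def calculate_fragment_size(file_size: int, configured_mib: int = 10,
                            max_mib: int = 60) -> int:
    # Files larger than 100 MiB automatically use the maximum fragment size,
    # otherwise the configured 'file_fragment_size' value applies
    base = (max_mib if file_size > HUNDRED_MIB else configured_mib) * 1_048_576
    if base >= MAX_FRAGMENT_BYTES:
        # Largest valid size strictly below 60 MiB, rounded down to 320 KiB
        return ((MAX_FRAGMENT_BYTES - 1) // CHUNK_SIZE) * CHUNK_SIZE
    # Round the configured size down to the nearest 320 KiB multiple
    return (base // CHUNK_SIZE) * CHUNK_SIZE
```

For a 250 GiB file this yields 62,586,880 bytes (roughly 59.7 MiB), the largest 320 KiB multiple below the 60 MiB limit.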
Commit ea7c3abd2d by abraunegg, 2025-07-03 17:21:16 +10:00, committed via GitHub
GPG key ID: B5690EEEBB952194
11 changed files with 281 additions and 85 deletions


@@ -445,6 +445,7 @@ systemdsystemunitdir
systemduserunitdir
tbh
tdcockers
tempauth
templ
testbuild
Thh

config

@@ -76,6 +76,9 @@
## This setting controls the application logging all actions to a separate file.
#enable_logging = "false"
## This setting controls the file fragment size when uploading large files to Microsoft OneDrive.
#file_fragment_size = "10"
## This setting controls the application HTTP protocol version, downgrading to HTTP/1.1 when enabled.
#force_http_11 = "false"


@@ -174,6 +174,13 @@ if [ "${ONEDRIVE_SYNC_SHARED_FILES:=0}" == "1" ]; then
ARGS=(--sync-shared-files ${ARGS[@]})
fi
# Tell client to use a different value for file fragment size for large file uploads
if [ -n "${ONEDRIVE_FILE_FRAGMENT_SIZE:=""}" ]; then
echo "# We are specifying the file fragment size for large file uploads (in MB)"
echo "# Adding --file-fragment-size ARG"
ARGS=(--file-fragment-size ${ONEDRIVE_FILE_FRAGMENT_SIZE} ${ARGS[@]})
fi
if [ ${#} -gt 0 ]; then
ARGS=("${@}")
fi


@@ -29,6 +29,7 @@ Before reading this document, please ensure you are running application version
- [drive_id](#drive_id)
- [dry_run](#dry_run)
- [enable_logging](#enable_logging)
- [file_fragment_size](#file_fragment_size)
- [force_http_11](#force_http_11)
- [force_session_upload](#force_session_upload)
- [inotify_delay](#inotify_delay)
@@ -412,6 +413,20 @@ _**CLI Option Use:**_ `--enable-logging`
> [!IMPORTANT]
> Additional configuration is potentially required to configure the default log directory. Refer to the [Enabling the Client Activity Log](./usage.md#enabling-the-client-activity-log) section in usage.md for details
### file_fragment_size
_**Description:**_ This option controls the fragment size when uploading large files to Microsoft OneDrive. The value specified is in MB.
_**Value Type:**_ Integer
_**Default Value:**_ 10
_**Maximum Value:**_ 60
_**Config Example:**_ `file_fragment_size = "25"`
_**CLI Option Use:**_ `--file-fragment-size '25'`
### force_http_11
_**Description:**_ This setting controls the application HTTP protocol version. By default, the application will use libcurl defaults for which HTTP protocol version will be used to interact with Microsoft OneDrive. Use this setting to downgrade libcurl to only use HTTP/1.1.
@@ -871,6 +886,10 @@ _**Config Example:**_ `skip_size = "50"`
_**CLI Option Use:**_ `--skip-size '50'`
> [!NOTE]
> This option is considered a 'Client Side Filtering Rule' and if configured, is utilised for all sync operations. After changing this option, you will be required to perform a resync.
### skip_symlinks
_**Description:**_ This configuration option controls whether the application will skip all symbolic links when performing sync operations. Microsoft OneDrive has no concept or understanding of symbolic links, and attempting to upload a symbolic link to Microsoft OneDrive generates a platform API error. All data (files and folders) that are uploaded to OneDrive must be whole files or actual directories.


@@ -290,6 +290,7 @@ docker run $firstRun --restart unless-stopped --name onedrive -v onedrive_conf:/
| <B>ONEDRIVE_SYNC_SHARED_FILES</B> | Controls "--sync-shared-files" option. Default is 0 | 1 |
| <B>ONEDRIVE_RUNAS_ROOT</B> | Controls if the Docker container should be run as the 'root' user instead of 'onedrive' user. Default is 0 | 1 |
| <B>ONEDRIVE_SYNC_ONCE</B> | Controls if the Docker container should be run in Standalone Mode. It will use Monitor Mode otherwise. Default is 0 | 1 |
| <B>ONEDRIVE_FILE_FRAGMENT_SIZE</B> | Controls the fragment size when uploading large files to Microsoft OneDrive. The value specified is in MB. Default is 10, Limit is 60 | 25 |
### Environment Variables Usage Examples
**Verbose Output:**


@@ -305,6 +305,7 @@ podman run -it --name onedrive_work --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" \
| <B>ONEDRIVE_SYNC_SHARED_FILES</B> | Controls "--sync-shared-files" option. Default is 0 | 1 |
| <B>ONEDRIVE_RUNAS_ROOT</B> | Controls if the Docker container should be run as the 'root' user instead of 'onedrive' user. Default is 0 | 1 |
| <B>ONEDRIVE_SYNC_ONCE</B> | Controls if the Docker container should be run in Standalone Mode. It will use Monitor Mode otherwise. Default is 0 | 1 |
| <B>ONEDRIVE_FILE_FRAGMENT_SIZE</B> | Controls the fragment size when uploading large files to Microsoft OneDrive. The value specified is in MB. Default is 10, Limit is 60 | 25 |
### Environment Variables Usage Examples
**Verbose Output:**


@@ -204,6 +204,10 @@ Perform a trial sync with no changes made.
\fB\-\-enable-logging\fR
Enable client activity to a separate log file.
.TP
\fB\-\-file-fragment-size\fR
Specify the file fragment size for large file uploads (in MB).
.TP
\fB\-\-force\fR
Force the deletion of data when a 'big delete' is detected.


@@ -46,6 +46,9 @@ class ApplicationConfig {
immutable string defaultConfigDirName = "~/.config/onedrive";
// - Default 'OneDrive Business Shared Files' Folder Name
immutable string defaultBusinessSharedFilesDirectoryName = "Files Shared With Me";
// - Default file fragment size for uploads
immutable long defaultFileFragmentSize = 10;
immutable long defaultMaxFileFragmentSize = 60;
// Microsoft Requirements
// - Default Application ID (abraunegg)
@@ -288,6 +291,8 @@ class ApplicationConfig {
longValues["rate_limit"] = 0;
// - To ensure we do not fill up the load disk, how much disk space should be reserved by default
longValues["space_reservation"] = 50 * 2^^20; // 50 MB as Bytes
// - How large should our file fragments be when uploading as an 'upload session' ?
longValues["file_fragment_size"] = defaultFileFragmentSize; // whole number, treated as MB, will be converted to bytes within performSessionFileUpload(). Default is 10.
// HTTPS & CURL Operation Settings
// - Maximum time an operation is allowed to take
@@ -1040,6 +1045,20 @@ class ApplicationConfig {
tempValue = 0;
}
setValueLong("skip_size", tempValue);
} else if (key == "file_fragment_size") {
ulong tempValue = thisConfigValue;
// If set, this must be greater than the default, but also aligning to Microsoft upper limit of 60 MiB
// Enforce lower bound (must be greater than default)
if (tempValue < defaultFileFragmentSize) {
addLogEntry("Invalid value for key in config file (too low) - using default value: " ~ key);
tempValue = defaultFileFragmentSize;
}
// Enforce upper bound (safe maximum)
else if (tempValue > defaultMaxFileFragmentSize) {
addLogEntry("Invalid value for key in config file (too high) - using maximum safe value: " ~ key);
tempValue = defaultMaxFileFragmentSize;
}
setValueLong("file_fragment_size", tempValue);
}
} else {
addLogEntry("Unknown key in config file: " ~ key);
@@ -1138,25 +1157,25 @@ class ApplicationConfig {
std.getopt.config.bundling,
std.getopt.config.caseSensitive,
"auth-files",
"Perform authentication not via interactive dialog but via files read/writes to these files.",
"Perform authentication not via interactive dialog but via files read/writes to these files",
&stringValues["auth_files"],
"auth-response",
"Perform authentication not via interactive dialog but via providing the response url directly.",
"Perform authentication not via interactive dialog but via providing the response url directly",
&stringValues["auth_response"],
"check-for-nomount",
"Check for the presence of .nosync in the syncdir root. If found, do not perform sync.",
"Check for the presence of .nosync in the syncdir root. If found, do not perform sync",
&boolValues["check_nomount"],
"check-for-nosync",
"Check for the presence of .nosync in each directory. If found, skip directory from sync.",
"Check for the presence of .nosync in each directory. If found, skip directory from sync",
&boolValues["check_nosync"],
"classify-as-big-delete",
"Number of children in a path that is locally removed which will be classified as a 'big data delete'",
&longValues["classify_as_big_delete"],
"cleanup-local-files",
"Cleanup additional local files when using --download-only. This will remove local data.",
"Cleanup additional local files when using --download-only. This will remove local data",
&boolValues["cleanup_local_files"],
"create-directory",
"Create a directory on OneDrive - no sync will be performed.",
"Create a directory on OneDrive - no sync will be performed",
&stringValues["create_directory"],
"create-share-link",
"Create a shareable link for an existing file on OneDrive",
@@ -1165,10 +1184,10 @@ class ApplicationConfig {
"Debug OneDrive HTTPS communication.",
&boolValues["debug_https"],
"destination-directory",
"Destination directory for renamed or move on OneDrive - no sync will be performed.",
"Destination directory for renamed or move on OneDrive - no sync will be performed",
&stringValues["destination_directory"],
"disable-notifications",
"Do not use desktop notifications in monitor mode.",
"Do not use desktop notifications in monitor mode",
&boolValues["disable_notifications"],
"disable-download-validation",
"Disable download validation when downloading from OneDrive",
@@ -1177,19 +1196,19 @@ class ApplicationConfig {
"Disable upload validation when uploading to OneDrive",
&boolValues["disable_upload_validation"],
"display-config",
"Display what options the client will use as currently configured - no sync will be performed.",
"Display what options the client will use as currently configured - no sync will be performed",
&boolValues["display_config"],
"display-running-config",
"Display what options the client has been configured to use on application startup.",
"Display what options the client has been configured to use on application startup",
&boolValues["display_running_config"],
"display-sync-status",
"Display the sync status of the client - no sync will be performed.",
"Display the sync status of the client - no sync will be performed",
&boolValues["display_sync_status"],
"display-quota",
"Display the quota status of the client - no sync will be performed.",
"Display the quota status of the client - no sync will be performed",
&boolValues["display_quota"],
"download-only",
"Replicate the OneDrive online state locally, by only downloading changes from OneDrive. Do not upload local changes to OneDrive.",
"Replicate the OneDrive online state locally, by only downloading changes from OneDrive. Do not upload local changes to OneDrive",
&boolValues["download_only"],
"dry-run",
"Perform a trial sync with no changes made",
@@ -1197,6 +1216,9 @@ class ApplicationConfig {
"enable-logging",
"Enable client activity to a separate log file",
&boolValues["enable_logging"],
"file-fragment-size",
"Specify the file fragment size for large file uploads (in MB)",
&longValues["file_fragment_size"],
"force-http-11",
"Force the use of HTTP 1.1 for all operations",
&boolValues["force_http_11"],
@@ -1222,10 +1244,10 @@ class ApplicationConfig {
"Sync OneDrive Business Shared Files to the local filesystem",
&boolValues["sync_business_shared_files"],
"local-first",
"Synchronize from the local directory source first, before downloading changes from OneDrive.",
"Synchronize from the local directory source first, before downloading changes from OneDrive",
&boolValues["local_first"],
"log-dir",
"Directory where logging output is saved to, needs to end with a slash.",
"Directory where logging output is saved to, needs to end with a slash",
&stringValues["log_dir"],
"logout",
"Logout the current user",
@@ -1237,7 +1259,7 @@ class ApplicationConfig {
"Keep monitoring for local and remote changes",
&boolValues["monitor"],
"monitor-interval",
"Number of seconds by which each sync operation is undertaken when idle under monitor mode.",
"Number of seconds by which each sync operation is undertaken when idle under monitor mode",
&longValues["monitor_interval"],
"monitor-fullscan-frequency",
"Number of sync runs before performing a full local scan of the synced directory",
@@ -1261,13 +1283,13 @@ class ApplicationConfig {
"Approve the use of performing a --resync action",
&boolValues["resync_auth"],
"remove-directory",
"Remove a directory on OneDrive - no sync will be performed.",
"Remove a directory on OneDrive - no sync will be performed",
&stringValues["remove_directory"],
"remove-source-files",
"Remove source file after successful transfer to OneDrive when using --upload-only",
&boolValues["remove_source_files"],
"single-directory",
"Specify a single local directory within the OneDrive root to sync.",
"Specify a single local directory within the OneDrive root to sync",
&stringValues["single_directory"],
"skip-dot-files",
"Skip dot files and folders from syncing",
@@ -1288,7 +1310,7 @@ class ApplicationConfig {
"Skip syncing of symlinks",
&boolValues["skip_symlinks"],
"source-directory",
"Source directory to rename or move on OneDrive - no sync will be performed.",
"Source directory to rename or move on OneDrive - no sync will be performed",
&stringValues["source_directory"],
"space-reservation",
"The amount of disk space to reserve (in MB) to avoid 100% disk space utilisation",
@@ -1306,10 +1328,10 @@ class ApplicationConfig {
"Perform a synchronisation with Microsoft OneDrive (DEPRECATED)",
&boolValues["synchronize"],
"sync-root-files",
"Sync all files in sync_dir root when using sync_list.",
"Sync all files in sync_dir root when using sync_list",
&boolValues["sync_root_files"],
"upload-only",
"Replicate the locally configured sync_dir state to OneDrive, by only uploading local changes to OneDrive. Do not download changes from OneDrive.",
"Replicate the locally configured sync_dir state to OneDrive, by only uploading local changes to OneDrive. Do not download changes from OneDrive",
&boolValues["upload_only"],
"confdir",
"Set the directory used to store the configuration files",
@@ -1325,7 +1347,7 @@ class ApplicationConfig {
&boolValues["with_editing_perms"]
);
// Was --syncdir used?
// Was --syncdir specified?
if (!getValueString("sync_dir_cli").empty) {
// Build the line we need to update and/or write out
string newConfigOptionSyncDirLine = "sync_dir = \"" ~ getValueString("sync_dir_cli") ~ "\"";
@@ -1411,12 +1433,24 @@ class ApplicationConfig {
setValueString("sync_dir", getValueString("sync_dir_cli"));
}
// was --monitor-interval used and now set to a value below minimum requirement?
// Was --monitor-interval specified and now set to a value below minimum requirement?
if (getValueLong("monitor_interval") < 300 ) {
addLogEntry("Invalid value for --monitor-interval - using default value: 300");
setValueLong("monitor_interval", 300);
}
// Was --file-fragment-size specified and now set to a value below or above maximum?
// Enforce lower bound (must be greater than default) for 'file_fragment_size'
if (getValueLong("file_fragment_size") < defaultFileFragmentSize) {
addLogEntry("Invalid value for --file-fragment-size (too low) - using default value: " ~ to!string(defaultFileFragmentSize));
setValueLong("file_fragment_size", defaultFileFragmentSize);
}
// Enforce upper bound (safe maximum) for 'file_fragment_size'
if (getValueLong("file_fragment_size") > defaultMaxFileFragmentSize) {
addLogEntry("Invalid value for --file-fragment-size (too high) - using maximum safe value: " ~ to!string(defaultMaxFileFragmentSize));
setValueLong("file_fragment_size", defaultMaxFileFragmentSize);
}
// Was --auth-files used?
if (!getValueString("auth_files").empty) {
// --auth-files used, need to validate that '~' was not used as a path identifier, and if yes, perform the correct expansion
@@ -1597,6 +1631,7 @@ class ApplicationConfig {
addLogEntry("Config option 'inotify_delay' = " ~ to!string(getValueLong("inotify_delay")));
addLogEntry("Config option 'display_transfer_metrics' = " ~ to!string(getValueBool("display_transfer_metrics")));
addLogEntry("Config option 'force_session_upload' = " ~ to!string(getValueBool("force_session_upload")));
addLogEntry("Config option 'file_fragment_size' = " ~ to!string(getValueLong("file_fragment_size")));
// data integrity
addLogEntry("Config option 'classify_as_big_delete' = " ~ to!string(getValueLong("classify_as_big_delete")));


@@ -994,24 +994,24 @@ class OneDriveApi {
return post(url, item.toString(), requestHeaders);
}
// https://dev.onedrive.com/items/upload_large_files.htm
// https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#upload-bytes-to-the-upload-session
JSONValue uploadFragment(string uploadUrl, string filepath, long offset, long offsetSize, long fileSize) {
// open file as read-only in binary mode
// If we upload a modified file, with the current known online eTag, this gets changed when the session is started - thus, the tail end of uploading
// a fragment fails with a 412 Precondition Failed and then a 416 Requested Range Not Satisfiable
// For the moment, comment out adding the If-Match header in createUploadSession, which then avoids this issue
string contentRange = "bytes " ~ to!string(offset) ~ "-" ~ to!string(offset + offsetSize - 1) ~ "/" ~ to!string(fileSize);
if (debugLogging) {
addLogEntry("", ["debug"]); // Add an empty newline before log output
addLogEntry("contentRange: " ~ contentRange, ["debug"]);
addLogEntry("fragment contentRange: " ~ contentRange, ["debug"]);
}
// Before we submit this 'HTTP PUT' request, pre-emptively check token expiry to avoid future 401s during long uploads
checkAccessTokenExpired();
// Perform the HTTP PUT action to upload the file fragment
return put(uploadUrl, filepath, true, contentRange, offset, offsetSize);
}
// https://dev.onedrive.com/items/upload_large_files.htm
// https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#resuming-an-in-progress-upload
JSONValue requestUploadStatus(string uploadUrl) {
return get(uploadUrl, true);
}
@@ -1324,10 +1324,10 @@ class OneDriveApi {
// Check if the existing access token has expired, if it has, generate a new one
private void checkAccessTokenExpired() {
if (Clock.currTime() >= appConfig.accessTokenExpiration) {
if (debugLogging) {addLogEntry("Microsoft OneDrive Access Token has expired. Must generate a new Microsoft OneDrive Access Token", ["debug"]);}
if (debugLogging) {addLogEntry("Microsoft OneDrive OAuth2 Access Token has expired. Must generate a new Microsoft OneDrive OAuth2 Access Token", ["debug"]);}
generateNewAccessToken();
} else {
if (debugLogging) {addLogEntry("Existing Microsoft OneDrive Access Token Expires: " ~ to!string(appConfig.accessTokenExpiration), ["debug"]);}
if (debugLogging) {addLogEntry("Microsoft OneDrive OAuth2 Access Token Valid Until (Local): " ~ to!string(appConfig.accessTokenExpiration), ["debug"]);}
}
}
@@ -1341,7 +1341,9 @@ class OneDriveApi {
}
private void connect(HTTP.Method method, const(char)[] url, bool skipToken, CurlResponse response, string[string] requestHeaders=null) {
if (debugLogging) {addLogEntry("Request URL = " ~ to!string(url), ["debug"]);}
// If we are debug logging, output the URL being accessed and the HTTP method being used to access that URL
if (debugLogging) {addLogEntry("HTTP " ~ to!string(method) ~ " request to URL: " ~ to!string(url), ["debug"]);}
// Check access token first in case the request is overridden
if (!skipToken) addAccessTokenHeader(&requestHeaders);
curlEngine.setResponseHolder(response);


@@ -9131,6 +9131,9 @@ class SyncEngine {
// Save this session
saveSessionFile(threadUploadSessionFilePath, uploadSession);
}
// When does this upload URL expire?
displayUploadSessionExpiry(uploadSession);
} else {
// no valid session was created
if (verboseLogging) {addLogEntry("Creation of OneDrive API Upload Session failed.", ["verbose"]);}
@@ -9148,6 +9151,28 @@ class SyncEngine {
return uploadSession;
}
// Display upload session expiry time
void displayUploadSessionExpiry(JSONValue uploadSessionData) {
try {
// Step 1: Extract the ISO 8601 UTC string from the JSON
string utcExpiry = uploadSessionData["expirationDateTime"].str;
// Step 2: Convert ISO 8601 string to SysTime (assumes Zulu / UTC timezone)
SysTime expiryUTC = SysTime.fromISOExtString(utcExpiry);
// Step 3: Convert to local time
auto expiryLocal = expiryUTC.toLocalTime();
// Step 4: Print both UTC and Local times
if (debugLogging) {
addLogEntry("Upload session URL expires at (UTC): " ~ to!string(expiryUTC), ["debug"]);
addLogEntry("Upload session URL expires at (Local): " ~ to!string(expiryLocal), ["debug"]);
}
} catch (Exception e) {
// Unable to parse the expiry timestamp - non-fatal, ignore and continue
}
}
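The displayUploadSessionExpiry() function above boils down to parsing the ISO 8601 Zulu timestamp from the session JSON and converting it to local time for the debug log. A minimal sketch of the same conversion, using Python purely for illustration (the client itself is written in D, and this helper name is not part of the client):

```python
from datetime import datetime

def session_expiry_times(expiration: str):
    """Parse an ISO 8601 'expirationDateTime' string in Zulu/UTC form and
    return (UTC, local) datetimes, mirroring the two debug log lines above."""
    # fromisoformat() does not accept the 'Z' suffix directly, so normalise it
    expiry_utc = datetime.fromisoformat(expiration.replace("Z", "+00:00"))
    # astimezone() with no argument converts to the system's local timezone
    return expiry_utc, expiry_utc.astimezone()
```

Both values refer to the same instant; only the display offset differs, which is why the client logs them as '(UTC)' and '(Local)' variants of one expiry.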
// Save the session upload data
void saveSessionFile(string threadUploadSessionFilePath, JSONValue uploadSessionData) {
// Function Start Time
@@ -9187,36 +9212,64 @@ class SyncEngine {
logKey = generateAlphanumericString();
displayFunctionProcessingStart(thisFunctionName, logKey);
}
// Response for upload
JSONValue uploadResponse;
// Session JSON needs to contain valid elements
// Get the offset details
long fragmentSize = 10 * 2^^20; // 10 MiB
// https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#upload-bytes-to-the-upload-session
// You can upload the entire file, or split the file into multiple byte ranges, as long as the maximum bytes in any given request is less than 60 MiB.
// Calculate File Fragment Size (must be valid multiple of 320 KiB)
long baseSize;
long fragmentSize;
enum HUNDRED_MIB = 100L * 1024L * 1024L; // 100 MiB = 104,857,600 bytes
enum CHUNK_SIZE = 327_680L; // 320 KiB
enum MAX_FRAGMENT_BYTES = 60L * 1_048_576L; // 60 MiB = 62,914,560 bytes
// If file is > 100 MiB then automatically use the larger fragment size
if (thisFileSize > HUNDRED_MIB) {
if (debugLogging) {
addLogEntry("Large file detected (" ~ to!string(thisFileSize) ~ " bytes), automatically using max fragment size: " ~ to!string(appConfig.defaultMaxFileFragmentSize), ["debug"]);
}
// Calculate base size using max fragment size
baseSize = appConfig.defaultMaxFileFragmentSize * 2^^20;
} else {
// Calculate base size using configured fragment size
baseSize = appConfig.getValueLong("file_fragment_size") * 2^^20;
}
// Ensure 'fragmentSize' is a multiple of 327680 bytes and < 60 MiB
if (baseSize >= MAX_FRAGMENT_BYTES) {
// Use the maximum valid size below 60 MiB, rounded down to nearest 320 KiB multiple
fragmentSize = ((MAX_FRAGMENT_BYTES - 1) / CHUNK_SIZE) * CHUNK_SIZE;
} else {
fragmentSize = (baseSize / CHUNK_SIZE) * CHUNK_SIZE;
}
// Set the fragment count and fragSize
size_t fragmentCount = 0;
long fragSize = 0;
// Extract current upload offset from session data
long offset = uploadSessionData["nextExpectedRanges"][0].str.splitter('-').front.to!long;
// Estimate total number of expected fragments
size_t expected_total_fragments = cast(size_t) ceil(double(thisFileSize) / double(fragmentSize));
long start_unix_time = Clock.currTime.toUnixTime();
int h, m, s;
string etaString;
string uploadLogEntry = "Uploading: " ~ uploadSessionData["localPath"].str ~ " ... ";
// If we get a 404, create a new upload session and store it here
JSONValue newUploadSession;
// Start the session upload using the active API instance for this thread
while (true) {
// fragment upload
fragmentCount++;
if (debugLogging) {addLogEntry("Fragment: " ~ to!string(fragmentCount) ~ " of " ~ to!string(expected_total_fragments), ["debug"]);}
// Calculate ETA
auto eta = calc_eta((fragmentCount - 1), expected_total_fragments, start_unix_time);
// What ETA string do we use?
auto eta = calc_eta((fragmentCount -1), expected_total_fragments, start_unix_time);
if (eta == 0) {
// Initial calculation ...
etaString = format!"| ETA --:--:--";
@@ -9226,28 +9279,33 @@ class SyncEngine {
etaString = format!"| ETA %02d:%02d:%02d"(h, m, s);
}
// Calculate upload percentage
// Calculate this progress output
auto ratio = cast(double)(fragmentCount - 1) / expected_total_fragments;
// Convert the ratio to a percentage and format it to two decimal places
string percentage = leftJustify(format("%d%%", cast(int)(ratio * 100)), 5, ' ');
addLogEntry(uploadLogEntry ~ percentage ~ etaString, ["consoleOnly"]);
// Determine actual fragment size
// What fragment size will be used?
if (debugLogging) {addLogEntry("fragmentSize: " ~ to!string(fragmentSize) ~ " offset: " ~ to!string(offset) ~ " thisFileSize: " ~ to!string(thisFileSize), ["debug"]);}
fragSize = fragmentSize < thisFileSize - offset ? fragmentSize : thisFileSize - offset;
if (debugLogging) {addLogEntry("Using fragSize: " ~ to!string(fragSize), ["debug"]);}
// Guard against negative fragSize
// fragSize must not be a negative value
if (fragSize < 0) {
// Session upload will fail
// invalid fragment size calculation - session upload will fail
if (verboseLogging) {addLogEntry("File upload session failed - invalid calculation of fragment size", ["verbose"]);}
if (exists(threadUploadSessionFilePath)) {
remove(threadUploadSessionFilePath);
}
// set uploadResponse to null as error
uploadResponse = null;
return uploadResponse;
}
// Upload this fragment
// If the resume upload fails, we need to check for a return code here
try {
uploadResponse = activeOneDriveApiInstance.uploadFragment(
uploadSessionData["uploadUrl"].str,
@@ -9257,14 +9315,21 @@ class SyncEngine {
thisFileSize
);
} catch (OneDriveException exception) {
// HTTP 100: continue silently
// if a 100 uploadResponse is generated, continue
if (exception.httpStatusCode == 100) {
continue;
}
// HTTP 404: recreate the session
if (exception.httpStatusCode == 404) {
if (debugLogging) {addLogEntry("The upload session was not found .... re-create session");}
// Issue #3355: https://github.com/abraunegg/onedrive/issues/3355
if (exception.httpStatusCode == 403 && (exception.msg.canFind("accessDenied") || exception.msg.canFind("You do not have authorization to access the file"))) {
addLogEntry("ERROR: Upload session has expired (403 - Access Denied)");
addLogEntry("Probable Cause: The 'tempauth' token embedded in the upload URL has most likely expired.");
addLogEntry(" Microsoft issues this token when the upload session is first created. It cannot be refreshed, extended, or queried for its expiry time.");
addLogEntry(" The only way to infer its validity is by measuring the time from session creation to this 403 failure.");
addLogEntry(" The upload session URL itself may still appear active (based on expirationDateTime), but the upload URL is no longer usable once this 'tempauth' token expires.");
addLogEntry(" A new upload session will now be created. Upload will restart from the beginning using the new session URL and new 'tempauth' token.");
// Attempt creation of new upload session
newUploadSession = createSessionForFileUpload(
activeOneDriveApiInstance,
uploadSessionData["localPath"].str,
@@ -9274,87 +9339,143 @@ class SyncEngine {
null,
threadUploadSessionFilePath
);
// Attempt retry (which will start upload again from scratch) with new session upload URL
continue;
}
// There was an error uploadResponse from OneDrive when uploading the file fragment
if (exception.httpStatusCode == 404) {
// The upload session was not found .. ?? we just created it .. maybe the backend is still creating it or failed to create it
if (debugLogging) {addLogEntry("The upload session was not found .... re-create session");}
newUploadSession = createSessionForFileUpload(
activeOneDriveApiInstance,
uploadSessionData["localPath"].str,
uploadSessionData["targetDriveId"].str,
uploadSessionData["targetParentId"].str,
baseName(uploadSessionData["localPath"].str),
null,
threadUploadSessionFilePath
);
}
// HTTP 416: continue silently
// Issue https://github.com/abraunegg/onedrive/issues/2747
// if a 416 uploadResponse is generated, continue
if (exception.httpStatusCode == 416) {
continue;
}
// Display error and handle fatal or retry cases
// Handle transient errors:
// 408 - Request Time Out
// 429 - Too Many Requests
// 503 - Service Unavailable
// 504 - Gateway Timeout
//
// Insert a new line as well, so that the below error is inserted on the console in the right location
if (verboseLogging) {addLogEntry("Fragment upload failed - received an exception response from OneDrive API", ["verbose"]);}
if (exception.httpStatusCode == 403) {
displayOneDriveErrorMessage(exception.msg, thisFunctionName);
uploadResponse = null;
return uploadResponse;
}
// display what the error is if we have not already continued
if (exception.httpStatusCode != 404) {
displayOneDriveErrorMessage(exception.msg, thisFunctionName);
}
// retry fragment upload in case error is transient
if (verboseLogging) {addLogEntry("Retrying fragment upload", ["verbose"]);}
// Retry logic
// Retry fragment upload logic
try {
string effectiveRetryUploadURL;
string effectiveLocalPath;
if ("uploadUrl" in newUploadSession) {
effectiveRetryUploadURL = newUploadSession["uploadUrl"].str;
effectiveLocalPath = newUploadSession["localPath"].str;
} else {
effectiveRetryUploadURL = uploadSessionData["uploadUrl"].str;
effectiveLocalPath = uploadSessionData["localPath"].str;
}
// If we re-created the session, use the new data on re-try
if (newUploadSession.type() == JSONType.object) {
if ("uploadUrl" in newUploadSession) {
// get this from 'newUploadSession'
effectiveRetryUploadURL = newUploadSession["uploadUrl"].str;
effectiveLocalPath = newUploadSession["localPath"].str;
} else {
// get this from the original input
effectiveRetryUploadURL = uploadSessionData["uploadUrl"].str;
effectiveLocalPath = uploadSessionData["localPath"].str;
}
uploadResponse = activeOneDriveApiInstance.uploadFragment(
effectiveRetryUploadURL,
effectiveLocalPath,
offset,
fragSize,
thisFileSize
);
// retry the fragment upload
uploadResponse = activeOneDriveApiInstance.uploadFragment(
effectiveRetryUploadURL,
effectiveLocalPath,
offset,
fragSize,
thisFileSize
);
} else {
// newUploadSession not a JSON
uploadResponse = null;
return uploadResponse;
}
} catch (OneDriveException e) {
// OneDrive threw another error on retry
if (verboseLogging) {addLogEntry("Retry to upload fragment failed", ["verbose"]);}
// display what the error is
displayOneDriveErrorMessage(e.msg, thisFunctionName);
// set uploadResponse to null as the fragment upload was in error twice
uploadResponse = null;
return uploadResponse;
} catch (std.exception.ErrnoException e) {
// There was a file system error - display the error message
displayFileSystemErrorMessage(e.msg, thisFunctionName);
return uploadResponse;
}
} catch (ErrnoException e) {
// There was a file system error
// display the error message
displayFileSystemErrorMessage(e.msg, thisFunctionName);
uploadResponse = null;
return uploadResponse;
}
// Post-upload: verify and update session progress
// was the fragment uploaded without issue?
if (uploadResponse.type() == JSONType.object) {
// Get new offset from updated server state
// Fragment uploaded
if (debugLogging) {addLogEntry("Fragment upload complete", ["debug"]);}
// Use updated offset from response, not fixed increment
if ("nextExpectedRanges" in uploadResponse &&
uploadResponse["nextExpectedRanges"].type() == JSONType.array &&
!uploadResponse["nextExpectedRanges"].array.empty) {
offset = uploadResponse["nextExpectedRanges"].array[0].str.splitter('-').front.to!long;
} else {
// No more expected ranges, upload is complete
// No nextExpectedRanges? Assume upload complete
break;
}
// Update session tracking
// update the uploadSessionData details
uploadSessionData["expirationDateTime"] = uploadResponse["expirationDateTime"];
uploadSessionData["nextExpectedRanges"] = uploadResponse["nextExpectedRanges"];
// Log URL 'updated' expirationDateTime as 'UTC' and 'localTime'
if (debugLogging) {
// Convert expiration time to localTime
string utcExpiry = uploadResponse["expirationDateTime"].str;
SysTime expiryUTC = SysTime.fromISOExtString(utcExpiry);
SysTime expiryLocal = expiryUTC.toLocalTime();
// Display updated URL expiry as UTC and localTime
addLogEntry("Upload Session URL expiration extended to (UTC): " ~ to!string(expiryUTC), ["debug"]);
addLogEntry("Upload Session URL expiration extended to (Local): " ~ to!string(expiryLocal), ["debug"]);
addLogEntry("", ["debug"]); // Add new line as this fragment is complete
}
// Save for reuse
saveSessionFile(threadUploadSessionFilePath, uploadSessionData);
} else {
// not a JSON object - fragment upload failed
if (verboseLogging) {addLogEntry("File upload session failed - invalid response from OneDrive API", ["verbose"]);}
// cleanup session data
if (exists(threadUploadSessionFilePath)) {
remove(threadUploadSessionFilePath);
}
// set uploadResponse to null as error
uploadResponse = null;
return uploadResponse;
}
@@ -9367,20 +9488,22 @@ class SyncEngine {
etaString = format!"| DONE in %02d:%02d:%02d"(h, m, s);
addLogEntry(uploadLogEntry ~ "100% " ~ etaString, ["consoleOnly"]);
// Cleanup session file
// Remove session file if it exists
if (exists(threadUploadSessionFilePath)) {
remove(threadUploadSessionFilePath);
}
// Display function processing time
// Display function processing time if configured to do so
if (appConfig.getValueBool("display_processing_time") && debugLogging) {
// Combine module name & running Function
displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
}
// Return final upload result
// Return the session upload response
return uploadResponse;
}
// Delete an item on OneDrive
void uploadDeletedItem(Item itemToDelete, string path) {
// Function Start Time


@@ -809,8 +809,8 @@ void displayOneDriveErrorMessage(string message, string callingFunction) {
}
// Where in the code was this error generated
if (verboseLogging) {addLogEntry(" Calling Function: " ~ callingFunction, ["verbose"]);}
if (debugLogging) {addLogEntry(" Calling Function: " ~ callingFunction, ["debug"]);}
if (verboseLogging) {addLogEntry(" Calling Function: " ~ callingFunction, ["verbose"]);} // will get printed in debug
// Extra Debug if we are using --verbose --verbose
if (debugLogging) {
addLogEntry("Raw Error Data: " ~ message, ["debug"]);