mirror of
https://github.com/abraunegg/onedrive
synced 2026-03-14 14:35:46 +01:00
* Implement resumable downloads (FR #2576) by using a state JSON file to track download progress and resuming the download from the recorded offset, after validating that: the offset matches the current size of the local partial file, and the online hash has not changed (i.e. the file was not modified online between the failed download and the resume)
This commit is contained in:
parent
3e61081515
commit
1321ae6306
8 changed files with 646 additions and 38 deletions
|
|
@ -37,6 +37,7 @@ Before reading this document, please ensure you are running application version
|
|||
- [GUI Notifications](#gui-notifications)
- [Handling a Microsoft OneDrive Account Password Change](#handling-a-microsoft-onedrive-account-password-change)
- [Determining the synchronisation result](#determining-the-synchronisation-result)
- [Resumable Transfers](#resumable-transfers)
- [Frequently Asked Configuration Questions](#frequently-asked-configuration-questions)
- [How to change the default configuration of the client?](#how-to-change-the-default-configuration-of-the-client)
- [How to change where my data from Microsoft OneDrive is stored?](#how-to-change-where-my-data-from-microsoft-onedrive-is-stored)
|
|
@ -1120,6 +1121,47 @@ In order to fix the upload or download failures, you may need to:
|
|||
* Review the application output to determine what happened
* Re-try your command utilising a resync to ensure your system is correctly synced with your Microsoft OneDrive Account

### Resumable Transfers

The OneDrive Client for Linux supports resumable transfers for both uploads and downloads. This capability enhances the reliability and robustness of file transfers by allowing interrupted operations to continue from the last successful point, instead of restarting from the beginning. This is especially important in environments with unstable network connections or during large file transfers.

#### What Are Resumable Transfers?

A resumable transfer is a process that:

* Detects when a file upload or download was interrupted due to a network error, system shutdown, or other external factors.
* Saves the current state of the transfer, including offsets, temporary filenames, and online session metadata.
* Automatically detects these incomplete operations on application restart and resumes them from where they left off.
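For downloads, the saved state is a small JSON document. The field names below match the `resumeDownloadData` structure created in this change (note that `resumeOffset` is stored as a string); the values are purely illustrative:

```json
{
  "driveId": "exampleDriveId0001",
  "itemId": "exampleItemId0001",
  "onlineHash": { "quickXorHash": "exampleQuickXorHashValue" },
  "originalFilename": "/home/user/OneDrive/video.mp4",
  "downloadFilename": "/home/user/OneDrive/video.mp4.partial",
  "resumeOffset": "52428800"
}
```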
#### When Does It Occur?

Resumable transfers are automatically engaged when:

* The application is not started with `--resync`.
* Interrupted downloads exist with associated metadata saved to disk.
* Interrupted uploads using session-based transfers are pending resumption.
> [!IMPORTANT]
> If a `--resync` operation is being performed, all resumable transfer metadata is purged to ensure a clean and consistent resynchronisation state.
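The purge behaviour can be sketched as follows. This is an illustrative Python sketch, not the client's actual D implementation; the `resume_download.*` file-name pattern and the `downloadFilename` key follow the conventions introduced by this change.

```python
import glob
import json
import os

def purge_resume_state(config_dir, dry_run=False):
    """Remove leftover 'resume_download.*' metadata files and their
    matching '.partial' downloads, mirroring what --resync does.
    Returns the list of paths removed (or that would be removed)."""
    removed = []
    for state_path in glob.glob(os.path.join(config_dir, "resume_download.*")):
        partial = None
        try:
            with open(state_path) as handle:
                partial = json.load(handle).get("downloadFilename")
        except (OSError, ValueError):
            pass  # unreadable metadata is still removed below
        for path in [p for p in (partial, state_path) if p]:
            removed.append(path)
            if not dry_run and os.path.exists(path):
                os.remove(path)
    return removed
```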
#### How It Works Internally

* **Downloads:** Partial download state is stored as a JSON metadata file, including the online hash, download URL, and byte offset. The file itself is saved with a `.partial` suffix. When detected, this metadata is parsed and the download resumes using HTTP range headers.
* **Uploads:** Session uploads use OneDrive Upload Sessions. If interrupted, the session URL and transfer state are persisted. On restart, the client attempts to resume the upload using the remaining byte ranges.
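The download-side validation described above (offset must match the local `.partial` file, online hash must be unchanged) can be sketched in Python. This is illustrative only; the field names mirror the state file written by this change, but the client itself implements this logic in D.

```python
import json
import os

def build_resume_plan(state_path, current_online_hash):
    """Decide whether a partial download can safely be resumed.

    Returns ("resume", headers) when the saved offset matches the
    '.partial' file size on disk AND the online hash is unchanged;
    otherwise ("restart", {}) so the file is downloaded from scratch.
    """
    with open(state_path) as handle:
        state = json.load(handle)

    offset = int(state["resumeOffset"])
    partial_file = state["downloadFilename"]  # the '.partial' file on disk

    # Validation 1: the local '.partial' file must be exactly 'offset' bytes
    if not os.path.isfile(partial_file) or os.path.getsize(partial_file) != offset:
        return ("restart", {})

    # Validation 2: the file must not have changed online in the interim
    if state["onlineHash"] != current_online_hash:
        return ("restart", {})

    # Safe to resume: request only the remaining bytes via an HTTP Range header
    return ("resume", {"Range": "bytes=%d-" % offset})
```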
#### Benefits of Resumable Transfers

* Saves bandwidth by avoiding full re-transfer of large files.
* Improves reliability in poor network conditions.
* Increases performance and reduces recovery time after unexpected shutdowns.

#### Considerations

Resumable state is only preserved if the client exits gracefully or the system preserves temporary files across sessions.

If `--resync` is used, all resumable data is discarded intentionally.

#### Recommendations

* Avoid using `--resync` unless explicitly required.
* Enable logging (`--enable-logging`) to help diagnose resumable transfer behaviour.
* For environments where network interruptions are common, ensure that the system does not clean temporary or cache files between reboots.

> [!NOTE]
> Resumable transfer support is built-in and requires no special configuration. It is automatically applied during both standalone and monitor operational modes when applicable.
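Resuming a session upload comes down to asking the service which byte ranges it still expects (the `nextExpectedRanges` field of a Microsoft Graph upload-session status query) and uploading the next fragment with a matching `Content-Range` header. A sketch of the range arithmetic, assuming the documented Graph behaviour:

```python
def next_fragment(next_expected_ranges, total_size, fragment_size=320 * 1024 * 10):
    """Compute the byte window and Content-Range header for the next
    fragment PUT of an interrupted upload session.

    'next_expected_ranges' is the list returned when querying the
    upload session, e.g. ["26214400-"]; fragment sizes should be
    multiples of 320 KiB per the Graph upload-session documentation.
    """
    if not next_expected_ranges:
        return None  # nothing left to upload; the session is complete
    start = int(next_expected_ranges[0].split("-")[0])
    end = min(start + fragment_size, total_size) - 1
    return (start, end, {"Content-Range": "bytes %d-%d/%d" % (start, end, total_size)})
```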

## Frequently Asked Configuration Questions

### How to change the default configuration of the client?
|
||||
|
|
|
|||
|
|
@ -11,7 +11,7 @@ Designed for maximum flexibility and reliability, this powerful and highly confi
|
|||
|
||||
|
||||
## Project Background
|
||||
This project originated as a fork of the skilion client in early 2018, after a number of proposed improvements and bug fixes — including [Pull Requests #82 and #314](https://github.com/skilion/onedrive/pulls?q=author%3Aabraunegg) — were not merged and development activity of the skilion client had largely stalled. While it’s unclear whether the original developer was unavailable or had stepped away from the project - bug reports and feature requests remained unanswered for extended periods. In 2020, the developer confirmed they had no intention of maintaining or supporting their work ([reference](https://github.com/skilion/onedrive/issues/518#issuecomment-717604726)).
|
||||
This project originated as a fork of the skilion client in early 2018, after a number of proposed improvements and bug fixes — including [Pull Requests #82 and #314](https://github.com/skilion/onedrive/pulls?q=author%3Aabraunegg) — were not merged and development activity of the skilion client had largely stalled. While it’s unclear whether the original developer was unavailable or had stepped away from the project - bug reports and feature requests remained unanswered for extended periods. In 2020, the original developer (skilion) confirmed they had no intention of maintaining or supporting their work ([reference](https://github.com/skilion/onedrive/issues/518#issuecomment-717604726)).
|
||||
|
||||
The original [skilion repository](https://github.com/skilion/onedrive) was formally archived and made read-only on GitHub in December 2024. While still publicly accessible as a historical reference, an archived repository is no longer maintained, cannot accept contributions, and reflects a frozen snapshot of the codebase. The last code change to the skilion client was merged in November 2021; however, active development had slowed significantly well before then. As such, the skilion client should no longer be considered current or supported — particularly given the major API changes and evolving Microsoft OneDrive platform requirements since that time.
|
||||
|
||||
|
|
@ -31,6 +31,7 @@ Since forking in early 2018, this client has evolved into a clean re-imagining o
|
|||
* Provides rules for client-side filtering to select data for syncing with Microsoft OneDrive accounts
|
||||
* Protects against significant data loss on OneDrive after configuration changes
|
||||
* Supports a dry-run option for safe configuration testing
|
||||
* Supports interruption-tolerant uploads and downloads by resuming file transfers from the point of failure, ensuring data integrity and efficiency
|
||||
* Validates file transfers to ensure data integrity
|
||||
* Caches sync state for efficiency
|
||||
* Monitors local files in real-time using inotify
|
||||
|
|
|
|||
|
|
@ -121,8 +121,10 @@ class ApplicationConfig {
|
|||
string refreshTokenFilePath = "";
|
||||
// Store the accessTokenExpiration for use within the application
|
||||
SysTime accessTokenExpiration;
|
||||
// Store the 'session_upload.CRC32-HASH' file path
|
||||
// Store the 'session_upload.UNIQUE_STRING' file path
|
||||
string uploadSessionFilePath = "";
|
||||
// Store the 'resume_download.UNIQUE_STRING' file path
|
||||
string resumeDownloadFilePath = "";
|
||||
// Store the Intune account information
|
||||
string intuneAccountDetails;
|
||||
// Store the Intune account information on disk for reuse
|
||||
|
|
@ -561,6 +563,8 @@ class ApplicationConfig {
|
|||
databaseFilePathDryRun = buildNormalizedPath(buildPath(configDirName, "items-dryrun.sqlite3"));
|
||||
// - What is the full path for the 'session_upload' file
|
||||
uploadSessionFilePath = buildNormalizedPath(buildPath(configDirName, "session_upload"));
|
||||
// - What is the full path for the 'resume_download' file
|
||||
resumeDownloadFilePath = buildNormalizedPath(buildPath(configDirName, "resume_download"));
|
||||
// - What is the full path for the 'sync_list' file
|
||||
syncListFilePath = buildNormalizedPath(buildPath(configDirName, "sync_list"));
|
||||
// - What is the full path for the 'config' - the user file to configure the application
|
||||
|
|
|
|||
|
|
@ -212,6 +212,7 @@ class CurlEngine {
|
|||
string internalThreadId;
|
||||
SysTime releaseTimestamp;
|
||||
ulong maxIdleTime;
|
||||
private long resumeFromOffset = -1;
|
||||
|
||||
this() {
|
||||
http = HTTP(); // Directly initializes HTTP using its default constructor
|
||||
|
|
@ -446,10 +447,13 @@ class CurlEngine {
|
|||
|
||||
CurlResponse download(string originalFilename, string downloadFilename) {
|
||||
setResponseHolder(null);
|
||||
// open downloadFilename as write in binary mode
|
||||
auto file = File(downloadFilename, "wb");
|
||||
|
||||
// Open the file in append mode if resuming, else write mode
|
||||
auto file = (resumeFromOffset > 0)
|
||||
? File(downloadFilename, "ab") // append binary
|
||||
: File(downloadFilename, "wb"); // write binary
|
||||
|
||||
// function scopes
|
||||
// Function exit scope
|
||||
scope(exit) {
|
||||
cleanup();
|
||||
if (file.isOpen()){
|
||||
|
|
@ -458,11 +462,19 @@ class CurlEngine {
|
|||
}
|
||||
}
|
||||
|
||||
// Apply Range header if resuming
|
||||
if (resumeFromOffset > 0) {
|
||||
string rangeHeader = format("bytes=%d-", resumeFromOffset);
|
||||
addRequestHeader("Range", rangeHeader);
|
||||
}
|
||||
|
||||
// Receive data
|
||||
http.onReceive = (ubyte[] data) {
|
||||
file.rawWrite(data);
|
||||
return data.length;
|
||||
};
|
||||
|
||||
// Perform HTTP Operation
|
||||
http.perform();
|
||||
|
||||
// close open file - avoids problems with renaming on GCS Buckets and other semi-POSIX systems
|
||||
|
|
@ -473,6 +485,7 @@ class CurlEngine {
|
|||
// Rename downloaded file
|
||||
rename(downloadFilename, originalFilename);
|
||||
|
||||
// Update response and return response
|
||||
response.update(&http);
|
||||
return response;
|
||||
}
|
||||
|
|
@ -580,6 +593,11 @@ class CurlEngine {
|
|||
addLogEntry("Enabling SSL peer verification");
|
||||
http.handle.set(CurlOption.ssl_verifypeer, 1);
|
||||
}
|
||||
|
||||
// Set an applicable resumable offset point when downloading a file
|
||||
void setDownloadResumeOffset(long offset) {
|
||||
resumeFromOffset = offset;
|
||||
}
|
||||
}
|
||||
|
||||
// Methods to control obtaining and releasing a CurlEngine instance from the curlEnginePool
|
||||
|
|
|
|||
17
src/main.d
|
|
@ -842,17 +842,28 @@ int main(string[] cliArgs) {
|
|||
string localPath = ".";
|
||||
string remotePath = "/";
|
||||
|
||||
// If not performing a --resync , interrupted upload session(s)
|
||||
// If not performing a --resync, check if there are interrupted downloads and/or uploads that need to be completed
|
||||
if (!appConfig.getValueBool("resync")) {
|
||||
// Check if there are any downloads that need to be resumed
|
||||
if (syncEngineInstance.checkForResumableDownloads) {
|
||||
// Need to re-process the 'resumable data' to resume the download
|
||||
addLogEntry("There are interrupted downloads that need to be resumed ...");
|
||||
// Process the resumable download files
|
||||
syncEngineInstance.processResumableDownloadFiles();
|
||||
}
|
||||
|
||||
// Check if there are interrupted upload session(s)
|
||||
if (syncEngineInstance.checkForInterruptedSessionUploads) {
|
||||
// Need to re-process the session upload files to resume the failed session uploads
|
||||
addLogEntry("There are interrupted session uploads that need to be resumed ...");
|
||||
// Process the session upload files
|
||||
syncEngineInstance.processForInterruptedSessionUploads();
|
||||
syncEngineInstance.processInterruptedSessionUploads();
|
||||
}
|
||||
} else {
|
||||
// Clean up any upload session files due to --resync being used
|
||||
// Clean up any downloads that were due to be resumed, but will not be resumed due to --resync being used
|
||||
syncEngineInstance.clearInterruptedDownloads();
|
||||
|
||||
// Clean up any uploads that were due to be resumed, but will not be resumed due to --resync being used
|
||||
syncEngineInstance.clearInterruptedSessionUploads();
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -1088,7 +1088,13 @@ class OneDriveApi {
|
|||
}
|
||||
|
||||
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_get_content
|
||||
void downloadById(const(char)[] driveId, const(char)[] id, string saveToPath, long fileSize) {
|
||||
void downloadById(const(char)[] driveId, const(char)[] itemId, string saveToPath, long fileSize, JSONValue onlineHash, long resumeOffset = 0) {
|
||||
// We pass through to 'downloadFile()'
|
||||
// - resumeOffset
|
||||
// - onlineHash
|
||||
// - driveId
|
||||
// - itemId
|
||||
|
||||
scope(failure) {
|
||||
if (exists(saveToPath)) {
|
||||
// try and remove the file, catch error
|
||||
|
|
@ -1101,22 +1107,22 @@ class OneDriveApi {
|
|||
}
|
||||
}
|
||||
|
||||
// Create the required local directory
|
||||
string newPath = dirName(saveToPath);
|
||||
// Create the required local parental path structure if this does not exist
|
||||
string parentalPath = dirName(saveToPath);
|
||||
|
||||
// Does the path exist locally?
|
||||
if (!exists(newPath)) {
|
||||
// Does the parental path exist locally?
|
||||
if (!exists(parentalPath)) {
|
||||
try {
|
||||
if (debugLogging) {addLogEntry("Requested local path does not exist, creating directory structure: " ~ newPath, ["debug"]);}
|
||||
mkdirRecurse(newPath);
|
||||
if (debugLogging) {addLogEntry("Requested local parental path does not exist, creating directory structure: " ~ parentalPath, ["debug"]);}
|
||||
mkdirRecurse(parentalPath);
|
||||
// Has the user disabled the setting of filesystem permissions?
|
||||
if (!appConfig.getValueBool("disable_permission_set")) {
|
||||
// Configure the applicable permissions for the folder
|
||||
if (debugLogging) {addLogEntry("Setting directory permissions for: " ~ newPath, ["debug"]);}
|
||||
newPath.setAttributes(appConfig.returnRequiredDirectoryPermissions());
|
||||
if (debugLogging) {addLogEntry("Setting directory permissions for: " ~ parentalPath, ["debug"]);}
|
||||
parentalPath.setAttributes(appConfig.returnRequiredDirectoryPermissions());
|
||||
} else {
|
||||
// Use inherited permissions
|
||||
if (debugLogging) {addLogEntry("Using inherited filesystem permissions for: " ~ newPath, ["debug"]);}
|
||||
if (debugLogging) {addLogEntry("Using inherited filesystem permissions for: " ~ parentalPath, ["debug"]);}
|
||||
}
|
||||
} catch (FileException exception) {
|
||||
// display the error message
|
||||
|
|
@ -1124,10 +1130,13 @@ class OneDriveApi {
|
|||
}
|
||||
}
|
||||
|
||||
const(char)[] url = driveByIdUrl ~ driveId ~ "/items/" ~ id ~ "/content?AVOverride=1";
|
||||
// Download file
|
||||
downloadFile(url, saveToPath, fileSize);
|
||||
// Does path exist?
|
||||
// Create the URL to download the file
|
||||
const(char)[] url = driveByIdUrl ~ driveId ~ "/items/" ~ itemId ~ "/content?AVOverride=1";
|
||||
|
||||
// Download file using the URL created above
|
||||
downloadFile(driveId, itemId, url, saveToPath, fileSize, onlineHash, resumeOffset);
|
||||
|
||||
// Does downloaded file now exist locally?
|
||||
if (exists(saveToPath)) {
|
||||
// Has the user disabled the setting of filesystem permissions?
|
||||
if (!appConfig.getValueBool("disable_permission_set")) {
|
||||
|
|
@ -1136,7 +1145,7 @@ class OneDriveApi {
|
|||
saveToPath.setAttributes(appConfig.returnRequiredFilePermissions());
|
||||
} else {
|
||||
// Use inherited permissions
|
||||
if (debugLogging) {addLogEntry("Using inherited filesystem permissions for: " ~ newPath, ["debug"]);}
|
||||
if (debugLogging) {addLogEntry("Using inherited filesystem permissions for: " ~ saveToPath, ["debug"]);}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -1369,18 +1378,40 @@ class OneDriveApi {
|
|||
}, validateJSONResponse, callingFunction, lineno);
|
||||
}
|
||||
|
||||
private void downloadFile(const(char)[] url, string filename, long fileSize, string callingFunction=__FUNCTION__, int lineno=__LINE__) {
|
||||
// Download a file based on the URL request
|
||||
private void downloadFile(const(char)[] driveId, const(char)[] itemId, const(char)[] url, string filename, long fileSize, JSONValue onlineHash, long resumeOffset = 0, string callingFunction=__FUNCTION__, int lineno=__LINE__) {
|
||||
// Threshold for displaying download bar
|
||||
long thresholdFileSize = 4 * 2^^20; // 4 MiB
|
||||
|
||||
// To support marking of partially-downloaded files, download to a '.partial' filename first
|
||||
string originalFilename = filename;
|
||||
string downloadFilename = filename ~ ".partial";
|
||||
|
||||
// To support resumable downloads, configure the 'resumable data' file path
|
||||
string threadResumeDownloadFilePath = appConfig.resumeDownloadFilePath ~ "." ~ generateAlphanumericString();
|
||||
|
||||
// Create a JSONValue with download state so this can be used when resuming, to evaluate if the online file has changed, and if we are able to resume in a safe manner
|
||||
JSONValue resumeDownloadData = JSONValue([
|
||||
"driveId": JSONValue(to!string(driveId)),
|
||||
"itemId": JSONValue(to!string(itemId)),
|
||||
"onlineHash": onlineHash,
|
||||
"originalFilename": JSONValue(originalFilename),
|
||||
"downloadFilename": JSONValue(downloadFilename),
|
||||
"resumeOffset": JSONValue(to!string(resumeOffset))
|
||||
]);
|
||||
|
||||
// Validate the JSON response
|
||||
bool validateJSONResponse = false;
|
||||
|
||||
oneDriveErrorHandlerWrapper((CurlResponse response) {
|
||||
connect(HTTP.Method.get, url, false, response);
|
||||
|
||||
if (fileSize >= thresholdFileSize){
|
||||
// If we have a resumable offset to use, set this as the offset to use
|
||||
if (resumeOffset > 0) {
|
||||
curlEngine.setDownloadResumeOffset(resumeOffset);
|
||||
}
|
||||
|
||||
// Download Progress variables
|
||||
size_t expected_total_segments = 20;
|
||||
|
||||
|
|
@ -1496,6 +1527,15 @@ class OneDriveApi {
|
|||
previousProgressPercent = currentDLPercent;
|
||||
}
|
||||
}
|
||||
|
||||
// Has 'dlnow' changed?
|
||||
if (dlnow > to!long(resumeDownloadData["resumeOffset"].str)) {
|
||||
// Update resumeOffset for this progress event with the latest 'dlnow' value
|
||||
resumeDownloadData["resumeOffset"] = JSONValue(to!string(dlnow));
|
||||
|
||||
// Save resumable download data - this needs to be saved on every onProgress event that is processed
|
||||
saveResumeDownloadFile(threadResumeDownloadFilePath, resumeDownloadData);
|
||||
}
|
||||
} else {
|
||||
if ((currentDLPercent == 0) && (!barInit)) {
|
||||
// Calculate the output
|
||||
|
|
@ -1506,15 +1546,35 @@ class OneDriveApi {
|
|||
barInit = true;
|
||||
}
|
||||
}
|
||||
|
||||
// return
|
||||
return 0;
|
||||
};
|
||||
} else {
|
||||
// No progress bar
|
||||
// No progress bar, no resumable download
|
||||
}
|
||||
|
||||
// Capture the result of the download action
|
||||
auto result = curlEngine.download(originalFilename, downloadFilename);
|
||||
|
||||
return curlEngine.download(originalFilename, downloadFilename);
|
||||
// Safely remove 'threadResumeDownloadFilePath' as, if we get to this point, the file has been downloaded successfully
|
||||
safeRemove(threadResumeDownloadFilePath);
|
||||
|
||||
// Return the applicable result
|
||||
return result;
|
||||
}, validateJSONResponse, callingFunction, lineno);
|
||||
}
|
||||
|
||||
// Save the resume download data
|
||||
private void saveResumeDownloadFile(string threadResumeDownloadFilePath, JSONValue resumeDownloadData) {
|
||||
string thisFunctionName = format("%s.%s", strip(__MODULE__) , strip(getFunctionName!({})));
|
||||
try {
|
||||
std.file.write(threadResumeDownloadFilePath, resumeDownloadData.toString());
|
||||
} catch (FileException e) {
|
||||
// display the error message
|
||||
displayFileSystemErrorMessage(e.msg, thisFunctionName);
|
||||
}
|
||||
}
|
||||
|
||||
private JSONValue get(string url, bool skipToken = false, string[string] requestHeaders=null, string callingFunction=__FUNCTION__, int lineno=__LINE__) {
|
||||
bool validateJSONResponse = true;
|
||||
|
|
|
|||
467
src/sync.d
|
|
@ -125,8 +125,12 @@ class SyncEngine {
|
|||
string[] businessSharedFoldersOnlineToSkip;
|
||||
// List of interrupted uploads session files that need to be resumed
|
||||
string[] interruptedUploadsSessionFiles;
|
||||
// List of interrupted downloads that need to be resumed
|
||||
string[] interruptedDownloadFiles;
|
||||
// List of validated interrupted uploads session JSON items to resume
|
||||
JSONValue[] jsonItemsToResumeUpload;
|
||||
// List of validated interrupted download JSON items to resume
|
||||
JSONValue[] jsonItemsToResumeDownload;
|
||||
// This list of local paths that need to be created online
|
||||
string[] pathsToCreateOnline;
|
||||
// Array of items from the database that have been deleted locally, that needs to be deleted online
|
||||
|
|
@ -1020,6 +1024,7 @@ class SyncEngine {
|
|||
jsonItemsToProcess = [];
|
||||
fileJSONItemsToDownload = [];
|
||||
jsonItemsToResumeUpload = [];
|
||||
jsonItemsToResumeDownload = [];
|
||||
|
||||
// String Arrays
|
||||
fileDownloadFailures = [];
|
||||
|
|
@ -1030,6 +1035,7 @@ class SyncEngine {
|
|||
posixViolationPaths = [];
|
||||
businessSharedFoldersOnlineToSkip = [];
|
||||
interruptedUploadsSessionFiles = [];
|
||||
interruptedDownloadFiles = [];
|
||||
pathsToCreateOnline = [];
|
||||
databaseItemsToDeleteOnline = [];
|
||||
|
||||
|
|
@ -3616,7 +3622,7 @@ class SyncEngine {
|
|||
|
||||
// This function receives an array of JSON items to download; the number of elements is based on appConfig.getValueLong("threads")
|
||||
foreach (i, onedriveJSONItem; processPool.parallel(array)) {
|
||||
// Take each JSON item and
|
||||
// Take each JSON item and download it
|
||||
downloadFileItem(onedriveJSONItem);
|
||||
}
|
||||
|
||||
|
|
@ -3628,7 +3634,7 @@ class SyncEngine {
|
|||
}
|
||||
|
||||
// Perform the actual download of an object from OneDrive
|
||||
void downloadFileItem(JSONValue onedriveJSONItem, bool ignoreDataPreservationCheck = false) {
|
||||
void downloadFileItem(JSONValue onedriveJSONItem, bool ignoreDataPreservationCheck = false, long resumeOffset = 0) {
|
||||
// Function Start Time
|
||||
SysTime functionStartTime;
|
||||
string logKey;
|
||||
|
|
@ -3648,6 +3654,9 @@ class SyncEngine {
|
|||
Item databaseItem;
|
||||
bool fileFoundInDB = false;
|
||||
|
||||
// Create a JSONValue to store the online hash for resumable file checking
|
||||
JSONValue onlineHash;
|
||||
|
||||
// Capture what time this download started
|
||||
SysTime downloadStartTime = Clock.currTime();
|
||||
|
||||
|
|
@ -3685,6 +3694,10 @@ class SyncEngine {
|
|||
if (onedriveJSONItem["file"]["hashes"]["quickXorHash"].str != "") {
|
||||
OneDriveFileXORHash = onedriveJSONItem["file"]["hashes"]["quickXorHash"].str;
|
||||
}
|
||||
// Assign to JSONValue as object for resumable file checking
|
||||
onlineHash = JSONValue([
|
||||
"quickXorHash": JSONValue(OneDriveFileXORHash)
|
||||
]);
|
||||
} else {
|
||||
// Fallback: Check for SHA256Hash
|
||||
if (hasSHA256Hash(onedriveJSONItem)) {
|
||||
|
|
@ -3692,11 +3705,19 @@ class SyncEngine {
|
|||
if (onedriveJSONItem["file"]["hashes"]["sha256Hash"].str != "") {
|
||||
OneDriveFileSHA256Hash = onedriveJSONItem["file"]["hashes"]["sha256Hash"].str;
|
||||
}
|
||||
// Assign to JSONValue as object for resumable file checking
|
||||
onlineHash = JSONValue([
|
||||
"sha256Hash": JSONValue(OneDriveFileSHA256Hash)
|
||||
]);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// file hash data missing
|
||||
if (debugLogging) {addLogEntry("ERROR: onedriveJSONItem['file']['hashes'] is missing - unable to compare file hash after download", ["debug"]);}
|
||||
if (debugLogging) {addLogEntry("ERROR: onedriveJSONItem['file']['hashes'] is missing - unable to compare file hash after download to verify integrity of the downloaded file", ["debug"]);}
|
||||
// Assign to JSONValue as object for resumable file checking
|
||||
onlineHash = JSONValue([
|
||||
"hashMissing": JSONValue("none")
|
||||
]);
|
||||
}
|
||||
|
||||
// Does the file already exist in the path locally?
|
||||
|
|
@ -3763,8 +3784,8 @@ class SyncEngine {
|
|||
downloadDriveId = onedriveJSONItem["remoteItem"]["parentReference"]["driveId"].str;
|
||||
}
|
||||
|
||||
// Perform the download
|
||||
downloadFileOneDriveApiInstance.downloadById(downloadDriveId, downloadItemId, newItemPath, jsonFileSize);
|
||||
// Perform the download with any applicable set offset
|
||||
downloadFileOneDriveApiInstance.downloadById(downloadDriveId, downloadItemId, newItemPath, jsonFileSize, onlineHash, resumeOffset);
|
||||
|
||||
// OneDrive API Instance Cleanup - Shutdown API, free curl object and memory
|
||||
downloadFileOneDriveApiInstance.releaseCurlEngine();
|
||||
|
|
@ -3773,7 +3794,7 @@ class SyncEngine {
|
|||
GC.collect();
|
||||
|
||||
} catch (OneDriveException exception) {
|
||||
if (debugLogging) {addLogEntry("downloadFileOneDriveApiInstance.downloadById(downloadDriveId, downloadItemId, newItemPath, jsonFileSize); generated a OneDriveException", ["debug"]);}
|
||||
if (debugLogging) {addLogEntry("downloadFileOneDriveApiInstance.downloadById(downloadDriveId, downloadItemId, newItemPath, jsonFileSize, onlineHash, resumeOffset); generated a OneDriveException", ["debug"]);}
|
||||
|
||||
// HTTP request returned status code 403
|
||||
if ((exception.httpStatusCode == 403) && (appConfig.getValueBool("sync_business_shared_files"))) {
|
||||
|
|
@ -4401,6 +4422,11 @@ class SyncEngine {
|
|||
static import core.exception;
|
||||
string calculatedPath;
|
||||
|
||||
// Issue #3336 - Convert thisDriveId to lowercase before any test
|
||||
if (appConfig.accountType == "personal") {
|
||||
thisDriveId = transformToLowerCase(thisDriveId);
|
||||
}
|
||||
|
||||
// What driveId and itemId are we trying to calculate the path for
|
||||
if (debugLogging) {
|
||||
string initialComputeLogMessage = format("Attempting to calculate local filesystem path for '%s' and '%s'", thisDriveId, thisItemId);
|
||||
|
|
@ -9725,7 +9751,7 @@ class SyncEngine {
|
|||
// Additional application logging
|
||||
addLogEntry("ERROR: The total number of items being deleted is: " ~ to!string(itemsToDelete));
|
||||
addLogEntry("ERROR: To delete a large volume of data use --force or increase the config value 'classify_as_big_delete' to a larger value");
|
||||
addLogEntry("ERROR: Optionally, perform a --resync to reset your local synchronisation state");
|
||||
|
||||
// Must exit here to preserve data online, but allow logging to be completed
|
||||
forceExit();
|
||||
}
|
||||
|
|
@ -10388,7 +10414,6 @@ class SyncEngine {
|
|||
return outputDriveId;
|
||||
}
|
||||
|
||||
|
||||
// Print the fileDownloadFailures and fileUploadFailures arrays if they are not empty
|
||||
void displaySyncFailures() {
|
||||
// Function Start Time
|
||||
|
|
@ -12480,7 +12505,6 @@ class SyncEngine {
|
|||
}
|
||||
|
||||
// Count all 'session_upload' files in appConfig.configDirName
|
||||
//interruptedUploadsCount = count(dirEntries(appConfig.configDirName, "session_upload.*", SpanMode.shallow));
|
||||
interruptedUploadsCount = count(interruptedUploadsSessionFiles);
|
||||
if (interruptedUploadsCount != 0) {
|
||||
interruptedUploads = true;
|
||||
|
|
@ -12496,6 +12520,46 @@ class SyncEngine {
|
|||
return interruptedUploads;
|
||||
}
|
||||
|
||||
// Query the system for resume_download.* files
|
||||
bool checkForResumableDownloads() {
|
||||
// Function Start Time
|
||||
SysTime functionStartTime;
|
||||
string logKey;
|
||||
string thisFunctionName = format("%s.%s", strip(__MODULE__) , strip(getFunctionName!({})));
|
||||
// Only set this if we are generating performance processing times
|
||||
if (appConfig.getValueBool("display_processing_time") && debugLogging) {
|
||||
functionStartTime = Clock.currTime();
|
||||
logKey = generateAlphanumericString();
|
||||
displayFunctionProcessingStart(thisFunctionName, logKey);
|
||||
}
|
||||
|
||||
bool resumableDownloads = false;
|
||||
long resumableDownloadsCount;
|
||||
|
||||
// Scan the filesystem for the files we are interested in, build up interruptedDownloadFiles array
|
||||
foreach (resumeDownloadFile; dirEntries(appConfig.configDirName, "resume_download.*", SpanMode.shallow)) {
|
||||
// calculate the full path
|
||||
string tempPath = buildNormalizedPath(buildPath(appConfig.configDirName, resumeDownloadFile));
|
||||
// add to array
|
||||
interruptedDownloadFiles ~= [tempPath];
|
||||
}
|
||||
|
||||
// Count all 'resume_download' files in appConfig.configDirName
|
||||
resumableDownloadsCount = count(interruptedDownloadFiles);
|
||||
if (resumableDownloadsCount != 0) {
|
||||
resumableDownloads = true;
|
||||
}
|
||||
|
||||
// Display function processing time if configured to do so
|
||||
if (appConfig.getValueBool("display_processing_time") && debugLogging) {
|
||||
// Combine module name & running Function
|
||||
displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
|
||||
}
|
||||
|
||||
// Return if there are resumable downloads to process
|
||||
return resumableDownloads;
|
||||
}
|
||||
|
||||
// Clear any session_upload.* files
|
||||
	void clearInterruptedSessionUploads() {
		// Function Start Time
@@ -12529,8 +12593,47 @@ class SyncEngine {
		}
	}
	
	// Clear any resume_download.* files
	void clearInterruptedDownloads() {
		// Function Start Time
		SysTime functionStartTime;
		string logKey;
		string thisFunctionName = format("%s.%s", strip(__MODULE__), strip(getFunctionName!({})));
		// Only set this if we are generating performance processing times
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			functionStartTime = Clock.currTime();
			logKey = generateAlphanumericString();
			displayFunctionProcessingStart(thisFunctionName, logKey);
		}
		
		// Scan the filesystem for the files we are interested in, build up interruptedDownloadFiles array
		foreach (resumeDownloadFile; dirEntries(appConfig.configDirName, "resume_download.*", SpanMode.shallow)) {
			// calculate the full path
			string tempPath = buildNormalizedPath(buildPath(appConfig.configDirName, resumeDownloadFile));
			
			JSONValue resumeFileData = readText(tempPath).parseJSON();
			addLogEntry("Removing interrupted download file due to --resync for: " ~ resumeFileData["originalFilename"].str, ["info"]);
			string resumeFilename = resumeFileData["downloadFilename"].str;
			
			// Process removal
			if (!dryRun) {
				// remove the .partial file
				safeRemove(resumeFilename);
				// remove the resume_download. file
				safeRemove(tempPath);
			}
		}
		
		// Display function processing time if configured to do so
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			// Combine module name & running Function
			displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
		}
	}
	
	// Process interrupted 'session_upload' files
	void processForInterruptedSessionUploads() {
	void processInterruptedSessionUploads() {
		// Function Start Time
		SysTime functionStartTime;
		string logKey;
@@ -12585,7 +12688,66 @@ class SyncEngine {
		}
	}
	
	// A resume session upload file need to be valid to be used
	// Process 'resumable download' files that were found
	void processResumableDownloadFiles() {
		// Function Start Time
		SysTime functionStartTime;
		string logKey;
		string thisFunctionName = format("%s.%s", strip(__MODULE__), strip(getFunctionName!({})));
		// Only set this if we are generating performance processing times
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			functionStartTime = Clock.currTime();
			logKey = generateAlphanumericString();
			displayFunctionProcessingStart(thisFunctionName, logKey);
		}
		
		// For each 'resume_download' file that has been found, process the data to ensure it is still valid
		foreach (resumeDownloadFile; interruptedDownloadFiles) {
			// What 'resumable data' are we trying to resume
			if (verboseLogging) {addLogEntry("Attempting to resume file download using this 'resumable data' file: " ~ resumeDownloadFile, ["verbose"]);}
			// Does this pass validation?
			if (!validateResumableDownloadFileData(resumeDownloadFile)) {
				// Remove 'resume_download' file as it is invalid
				if (verboseLogging) {addLogEntry("Resume file download verification failed - cleaning up resumable download data file: " ~ resumeDownloadFile, ["verbose"]);}
				// Cleanup 'resume_download' file
				if (exists(resumeDownloadFile)) {
					if (!dryRun) {
						remove(resumeDownloadFile);
					}
				}
			}
		}
		
		// At this point we should have an array of JSON items to resume downloading
		if (count(jsonItemsToResumeDownload) > 0) {
			// There are valid items to resume download
			// Lets deal with all the JSON items that need to be resumed for download in a batch process
			size_t batchSize = to!int(appConfig.getValueLong("threads"));
			long batchCount = (jsonItemsToResumeDownload.length + batchSize - 1) / batchSize;
			long batchesProcessed = 0;
			
			foreach (chunk; jsonItemsToResumeDownload.chunks(batchSize)) {
				// send an array containing 'appConfig.getValueLong("threads")' JSON items to resume download
				resumeDownloadsInParallel(chunk);
			}
			
			// For this set of items, perform a DB PASSIVE checkpoint
			itemDB.performCheckpoint("PASSIVE");
		}
		
		// Cleanup all 'resume_download' files
		foreach (resumeDownloadFile; interruptedDownloadFiles) {
			safeRemove(resumeDownloadFile);
		}
		
		// Display function processing time if configured to do so
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			// Combine module name & running Function
			displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
		}
	}
	
	// A resume session upload file needs to be valid to be used
	// This function validates this data
	bool validateUploadSessionFileData(string sessionFilePath) {
		// Function Start Time
@@ -12843,7 +13005,256 @@ class SyncEngine {
			displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
		}
		
		// return session file is invalid
		// return session file is valid
		return true;
	}
	
	// A 'resumable download' file needs to be valid to be used
	bool validateResumableDownloadFileData(string resumeDownloadFile) {
		// Function Start Time
		SysTime functionStartTime;
		string logKey;
		string thisFunctionName = format("%s.%s", strip(__MODULE__), strip(getFunctionName!({})));
		// Only set this if we are generating performance processing times
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			functionStartTime = Clock.currTime();
			logKey = generateAlphanumericString();
			displayFunctionProcessingStart(thisFunctionName, logKey);
		}
		
		// Function variables
		JSONValue resumeDownloadFileData;
		JSONValue latestOnlineFileDetails;
		OneDriveApi validateResumableDownloadFileDataApiInstance;
		string driveId;
		string itemId;
		string existingHash;
		string downloadFilename;
		long resumeOffset;
		string OneDriveFileXORHash;
		string OneDriveFileSHA256Hash;
		
		// Try and read the text from the 'resumable download' file as a JSON array
		try {
			if (getSize(resumeDownloadFile) > 0) {
				// There is data to read in
				resumeDownloadFileData = readText(resumeDownloadFile).parseJSON();
			} else {
				// No data to read in - invalid file
				if (debugLogging) {addLogEntry("SESSION-RESUME: Invalid JSON file: " ~ resumeDownloadFile, ["debug"]);}
				
				// Display function processing time if configured to do so
				if (appConfig.getValueBool("display_processing_time") && debugLogging) {
					// Combine module name & running Function
					displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
				}
				
				// Return 'resumable download' file is invalid
				return false;
			}
		} catch (JSONException e) {
			if (debugLogging) {addLogEntry("SESSION-RESUME: Invalid JSON data in: " ~ resumeDownloadFile, ["debug"]);}
			
			// Display function processing time if configured to do so
			if (appConfig.getValueBool("display_processing_time") && debugLogging) {
				// Combine module name & running Function
				displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
			}
			
			// Return 'resumable download' file is invalid
			return false;
		}
		
		// What needs to be checked?
		// - JSON has 'downloadFilename' - critical to check the online state
		// - JSON has 'driveId' - critical to check the online state
		// - JSON has 'itemId' - critical to check the online state
		// - JSON has 'resumeOffset' - critical to check the online state
		// - JSON has 'onlineHash' with an applicable hash value - critical to check the online state
		
		if (!hasDownloadFilename(resumeDownloadFileData)) {
			// no downloadFilename present - file invalid
			if (verboseLogging) {addLogEntry("The 'resumable download' file contains invalid data: Missing 'downloadFilename'", ["verbose"]);}
			// Return 'resumable download' file is invalid
			return false;
		} else {
			// Configure search variables
			downloadFilename = resumeDownloadFileData["downloadFilename"].str;
			// Does the file specified by 'downloadFilename' exist on disk?
			if (!exists(downloadFilename)) {
				// File that is supposed to contain our resumable download data no longer exists on disk
				if (verboseLogging) {addLogEntry("The 'resumable download' file no longer exists on your local disk: " ~ downloadFilename, ["verbose"]);}
				// Return 'resumable download' file is invalid
				return false;
			}
		}
		
		// If we get to this point 'downloadFilename' has a file name and the file exists on disk.
		// If any of the other validations fail, we can remove the file
		
		if (!hasDriveId(resumeDownloadFileData)) {
			// no driveId present - file invalid
			if (verboseLogging) {addLogEntry("The 'resumable download' file contains invalid data: Missing 'driveId'", ["verbose"]);}
			// Remove local file
			safeRemove(downloadFilename);
			// Return 'resumable download' file is invalid
			return false;
		} else {
			// Configure search variables
			driveId = resumeDownloadFileData["driveId"].str;
		}
		
		if (!hasItemId(resumeDownloadFileData)) {
			// no itemId present - file invalid
			if (verboseLogging) {addLogEntry("The 'resumable download' file contains invalid data: Missing 'itemId'", ["verbose"]);}
			// Remove local file
			safeRemove(downloadFilename);
			// Return 'resumable download' file is invalid
			return false;
		} else {
			// Configure search variables
			itemId = resumeDownloadFileData["itemId"].str;
		}
		
		if (!hasResumeOffset(resumeDownloadFileData)) {
			// no resumeOffset present - file invalid
			if (verboseLogging) {addLogEntry("The 'resumable download' file contains invalid data: Missing 'resumeOffset'", ["verbose"]);}
			// Remove local file
			safeRemove(downloadFilename);
			// Return 'resumable download' file is invalid
			return false;
		} else {
			// we have a resumeOffset value
			resumeOffset = to!long(resumeDownloadFileData["resumeOffset"].str);
			// We need to check 'resumeOffset' against the 'downloadFilename' on-disk size
			long onDiskSize = getSize(downloadFilename);
			
			if (resumeOffset != onDiskSize) {
				// The size of the offset location does not equal the size on disk .. if we resume that file, the file will be corrupt
				string logMessage = format("The 'resumable download' file on disk is a different size to the resumable offset: %s vs %s", to!string(resumeOffset), to!string(onDiskSize));
				if (verboseLogging) {addLogEntry(logMessage, ["verbose"]);}
				// Remove local file
				safeRemove(downloadFilename);
				// Return 'resumable download' file is invalid
				return false;
			}
		}
		
		if (!hasOnlineHash(resumeDownloadFileData)) {
			// no onlineHash present - file invalid
			if (verboseLogging) {addLogEntry("The 'resumable download' file contains invalid data: Missing 'onlineHash'", ["verbose"]);}
			// Remove local file
			safeRemove(downloadFilename);
			// Return 'resumable download' file is invalid
			return false;
		} else {
			// Configure hash variable from the resume data
			// QuickXorHash Check
			if (hasQuickXorHashResume(resumeDownloadFileData)) {
				// We have a quickXorHash value
				existingHash = resumeDownloadFileData["onlineHash"]["quickXorHash"].str;
			} else {
				// Fallback: Check for SHA256Hash
				if (hasSHA256HashResume(resumeDownloadFileData)) {
					// We have a sha256Hash value
					existingHash = resumeDownloadFileData["onlineHash"]["sha256Hash"].str;
				}
			}
			
			// At this point, if we do not have an existingHash value, it's a fail
			if (existingHash.empty) {
				if (verboseLogging) {addLogEntry("The 'resumable download' file contains invalid data: Missing 'onlineHash' value", ["verbose"]);}
				// Remove local file
				safeRemove(downloadFilename);
				// Return 'resumable download' file is invalid
				return false;
			}
		}
		
		// At this point we have elements in the 'resumable download' JSON data that will allow us to check if the online file has been modified - if it has, resuming the download is pointless
		try {
			// Create a new OneDrive API instance
			validateResumableDownloadFileDataApiInstance = new OneDriveApi(appConfig);
			validateResumableDownloadFileDataApiInstance.initialise();
			
			// Request latest file details
			latestOnlineFileDetails = validateResumableDownloadFileDataApiInstance.getPathDetailsById(driveId, itemId);
			
			// OneDrive API Instance Cleanup - Shutdown API, free curl object and memory
			validateResumableDownloadFileDataApiInstance.releaseCurlEngine();
			validateResumableDownloadFileDataApiInstance = null;
			// Perform Garbage Collection
			GC.collect();
			
			// no error .. potentially all still valid
		} catch (OneDriveException e) {
			// handle any onedrive error response as invalid
			
			// OneDrive API Instance Cleanup - Shutdown API, free curl object and memory
			validateResumableDownloadFileDataApiInstance.releaseCurlEngine();
			validateResumableDownloadFileDataApiInstance = null;
			// Perform Garbage Collection
			GC.collect();
			
			// Display function processing time if configured to do so
			if (appConfig.getValueBool("display_processing_time") && debugLogging) {
				// Combine module name & running Function
				displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
			}
			
			// Return 'resumable download' file is invalid
			return false;
		}
		
		// Configure the hashes from the online data for comparison
		if (hasHashes(latestOnlineFileDetails)) {
			// File details returned hash details
			// QuickXorHash
			if (hasQuickXorHash(latestOnlineFileDetails)) {
				// Use the provided quickXorHash as reported by OneDrive
				if (latestOnlineFileDetails["file"]["hashes"]["quickXorHash"].str != "") {
					OneDriveFileXORHash = latestOnlineFileDetails["file"]["hashes"]["quickXorHash"].str;
				}
			} else {
				// Fallback: Check for SHA256Hash
				if (hasSHA256Hash(latestOnlineFileDetails)) {
					// Use the provided sha256Hash as reported by OneDrive
					if (latestOnlineFileDetails["file"]["hashes"]["sha256Hash"].str != "") {
						OneDriveFileSHA256Hash = latestOnlineFileDetails["file"]["hashes"]["sha256Hash"].str;
					}
				}
			}
		}
		
		// Last check - has the online file changed since we attempted to do the download that we are trying to resume?
		// Test 'existingHash' against the potential 2 online hashes for a match
		// As we don't know what type of hash 'existingHash' is, we have to test it against the 2 known online types
		bool hashesMatch = (existingHash == OneDriveFileXORHash) || (existingHash == OneDriveFileSHA256Hash);
		
		// Do the hashes match?
		if (!hashesMatch) {
			// Hashes do not match
			if (verboseLogging) {addLogEntry("The 'online file' has changed in content since the download was last attempted. Aborting this resumable download attempt.", ["verbose"]);}
			// Remove local file
			safeRemove(downloadFilename);
			// Return 'resumable download' file is invalid
			return false;
		}
		
		// Display function processing time if configured to do so
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			// Combine module name & running Function
			displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
		}
		
		// Augment 'latestOnlineFileDetails' with our resume point
		latestOnlineFileDetails["resumeOffset"] = JSONValue(to!string(resumeOffset));
		
		// Add latestOnlineFileDetails to jsonItemsToResumeDownload as it is now valid
		jsonItemsToResumeDownload ~= latestOnlineFileDetails;
		
		// Return 'resumable download' file is valid
		return true;
	}
	
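For reference, the validation above expects each `resume_download.*` state file to carry exactly the keys it checks. A hypothetical example of the state JSON (all values here are illustrative, not taken from a real session):

```json
{
	"originalFilename": "Documents/large-video.mp4",
	"downloadFilename": "Documents/large-video.mp4.partial",
	"driveId": "b!exampleDriveId",
	"itemId": "EXAMPLEITEMID!101",
	"resumeOffset": "104857600",
	"onlineHash": {
		"quickXorHash": "exampleQuickXorHashValue=="
	}
}
```

Note that `resumeOffset` is stored as a string and converted with `to!long(...)`, and `onlineHash` may carry either `quickXorHash` or, as a fallback, `sha256Hash`.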
@@ -12916,6 +13327,38 @@ class SyncEngine {
		}
	}
	
	// Resume all resumable downloads in parallel
	void resumeDownloadsInParallel(JSONValue[] array) {
		// Function Start Time
		SysTime functionStartTime;
		string logKey;
		string thisFunctionName = format("%s.%s", strip(__MODULE__), strip(getFunctionName!({})));
		// Only set this if we are generating performance processing times
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			functionStartTime = Clock.currTime();
			logKey = generateAlphanumericString();
			displayFunctionProcessingStart(thisFunctionName, logKey);
		}
		
		// This function received an array of JSON items to resume download, the number of elements based on appConfig.getValueLong("threads")
		foreach (i, jsonItemToResume; processPool.parallel(array)) {
			// Take each JSON item and resume download using the JSON data
			
			// Extract the 'offset' from the JSON data
			long resumeOffset;
			resumeOffset = to!long(jsonItemToResume["resumeOffset"].str);
			
			// Take each JSON item and download it using the offset
			downloadFileItem(jsonItemToResume, false, resumeOffset);
		}
		
		// Display function processing time if configured to do so
		if (appConfig.getValueBool("display_processing_time") && debugLogging) {
			// Combine module name & running Function
			displayFunctionProcessingTime(thisFunctionName, functionStartTime, Clock.currTime(), logKey);
		}
	}
	
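The dispatch pattern used for resumed downloads - slicing the pending items into batches of `appConfig.getValueLong("threads")` elements (with a ceiling-division batch count) and resuming each batch in parallel - can be sketched as follows. This is a simplified Python illustration, not the D implementation; `download_one` stands in for `downloadFileItem`:

```python
from concurrent.futures import ThreadPoolExecutor

def chunks(items, size):
    """Yield successive fixed-size slices, mirroring std.range.chunks."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def resume_downloads(items, threads, download_one):
    # Ceiling division: number of batches needed to cover all items
    batch_count = (len(items) + threads - 1) // threads
    for batch in chunks(items, threads):
        # Resume up to 'threads' downloads concurrently
        with ThreadPoolExecutor(max_workers=threads) as pool:
            list(pool.map(download_one, batch))
    return batch_count
```

After each batch completes, the real code additionally performs a `PASSIVE` database checkpoint, which is omitted here.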
	// Function to process the path by removing prefix up to ':' - remove '/drive/root:' from a path string
	string processPathToRemoveRootReference(ref string pathToCheck) {
		// Function Start Time
29 src/util.d
@@ -1367,6 +1367,35 @@ bool hasExpiresOn(const ref JSONValue item) {
	return ("expiresOn" in item) != null;
}

// Resumable Download checks
bool hasDriveId(const ref JSONValue item) {
	return ("driveId" in item) != null;
}

bool hasItemId(const ref JSONValue item) {
	return ("itemId" in item) != null;
}

bool hasDownloadFilename(const ref JSONValue item) {
	return ("downloadFilename" in item) != null;
}

bool hasResumeOffset(const ref JSONValue item) {
	return ("resumeOffset" in item) != null;
}

bool hasOnlineHash(const ref JSONValue item) {
	return ("onlineHash" in item) != null;
}

bool hasQuickXorHashResume(const ref JSONValue item) {
	return ("quickXorHash" in item["onlineHash"]) != null;
}

bool hasSHA256HashResume(const ref JSONValue item) {
	return ("sha256Hash" in item["onlineHash"]) != null;
}
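Taken together, these helper checks drive a fail-fast validation of the resume state. A condensed Python sketch of that decision order (key presence, partial file existence, recorded offset vs on-disk size, recorded hash vs current online hashes) - illustrative only, with `online_hashes` standing in for the hash values the real code fetches via getPathDetailsById:

```python
import json
import os

REQUIRED_KEYS = ("downloadFilename", "driveId", "itemId", "resumeOffset", "onlineHash")

def validate_resume_state(state_path, online_hashes):
    """Mirror of the validation order used for 'resume_download.*' files:
    required keys present, partial file exists, offset matches on-disk size,
    and the recorded online hash still matches a current online hash."""
    try:
        with open(state_path) as f:
            state = json.load(f)
    except (OSError, json.JSONDecodeError):
        return False
    # All critical keys must be present
    if any(key not in state for key in REQUIRED_KEYS):
        return False
    # The partially downloaded file must still exist locally
    if not os.path.exists(state["downloadFilename"]):
        return False
    # The recorded offset must equal the partial file's on-disk size,
    # otherwise resuming from that offset would corrupt the file
    if int(state["resumeOffset"]) != os.path.getsize(state["downloadFilename"]):
        return False
    # The recorded hash (quickXorHash, else sha256Hash) must match a current
    # online hash, otherwise the file changed online and resuming is pointless
    recorded = state["onlineHash"].get("quickXorHash") or state["onlineHash"].get("sha256Hash")
    return recorded is not None and recorded in online_hashes
```

Any failed check invalidates the state file; in the D code the partial file is also removed at that point.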
// Convert bytes to GB
string byteToGibiByte(ulong bytes) {
	if (bytes == 0) {