Compare commits


No commits in common. "master" and "v2.4.8" have entirely different histories.

67 changed files with 3072 additions and 20274 deletions

.github/FUNDING.yml vendored (2 changed lines)

@@ -1,2 +0,0 @@
---
github: abraunegg

.github/ISSUE_TEMPLATE/bug_report.md vendored (new file, 61 lines)

@@ -0,0 +1,61 @@
---
name: Bug report
about: Create a report to help us improve
---
**Note:** Before submitting a bug report, please ensure you are running the latest 'onedrive' client as built from 'master', using the latest available DMD compiler. Refer to the readme on building the client for your system.
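As a rough sketch of building the latest client from 'master' (mirroring the steps used by this repository's own test build workflow; the required build dependencies vary by distribution):
```bash
# Build and install the latest client from 'master' (illustrative only)
git clone https://github.com/abraunegg/onedrive.git
cd onedrive
./configure
make clean; make
sudo make install
onedrive --version   # confirm the freshly built version
```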
### Bug Report Details ###
**Describe the bug**
A clear and concise description of what the bug is.
**Application and Operating System Details:**
* Provide your OS & version (CentOS 6.x, Ubuntu 18.x etc) and the output of: `uname -a`
* Are you using a headless system (no gui) or with a gui installed?
* OneDrive Account Type
* Did you build from source or install from a package?
* If you installed from source, what is your DMD or LDC compiler version: `dmd --version` or `ldmd2 --version`
* OneDrive Application Version: Output of `onedrive --version`
* OneDrive Application Configuration: Output of `onedrive --display-config`
* Provide the version of curl you are using: Output of `curl --version`
* Is your configured 'sync_dir' a local directory or a network mount point?
* If *not* local, provide all the mountpoints in your system: Output of: `mount`
* What partition format type does your configured 'sync_dir' reside on? Output of: `lsblk -f`
* Explain your entire configuration setup - is the OneDrive folder shared with any other system or platform at the same time? Is the OneDrive account you use shared across multiple systems / platforms / Operating Systems and in use at the same time?
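The details requested above can be captured with the commands already referenced in this list, for example:
```bash
# Gather the requested system and application details in one pass
uname -a
onedrive --version
onedrive --display-config
curl --version
mount
lsblk -f
```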
**Note:** Please generate a full debug log whilst reproducing the issue as per [https://github.com/abraunegg/onedrive/wiki/Generate-debug-log-for-support](https://github.com/abraunegg/onedrive/wiki/Generate-debug-log-for-support) and email to support@mynas.com.au
**To Reproduce**
Steps to reproduce the behavior if not causing an application crash:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
If the issue is replicated by a specific 'file' or 'path', please archive the file and path tree and email them to support@mynas.com.au
**Complete Verbose Log Output**
A clear and full log of the problem when running the application in the following manner (i.e., not in monitor mode):
```bash
onedrive --synchronize --verbose <any of your other needed options>
```
Run the application in a separate terminal window or SSH session and provide the entire application output, including the error and crash. When posting the logs, please format the log output to make it easier to read. See [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) for more details.
Application Log Output:
```bash
Verbose console log output goes here
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
### Bug Report Checklist ###
* [ ] Detailed description
* [ ] Application and Operating System Details provided in full
* [ ] Reproduction steps (if applicable)
* [ ] Verbose Log Output from your error
* [ ] Debug Log generated and submitted


@@ -1,176 +0,0 @@
name: "Bug Report"
description: Create a Bug Report to help us fix your issue
title: "Bug: "
labels: ["Bug"]
body:
- type: markdown
attributes:
value: |
**Note:** Before submitting a bug report, please ensure you are running the latest 'onedrive' client as built from 'master' and compiled using the latest available DMD or LDC compiler. Refer to the [INSTALL](https://github.com/abraunegg/onedrive/blob/master/docs/INSTALL.md) document on how to build the client for your system.
- type: textarea
id: bugDescription
attributes:
label: Describe the bug
description: |
Add a clear and concise description of what you think the bug is.
validations:
required: true
- type: textarea
id: operatingSystemDetails
attributes:
label: Operating System Details
description: |
* What is your Operating System (`uname -a`)
* Output of: (`cat /etc/redhat-release`) or (`lsb_release -a`)
render: shell
validations:
required: true
- type: dropdown
id: installMethod
attributes:
label: Client Installation Method
description: |
How did you install the client?
multiple: false
options:
- From Source
- From Distribution Package
- From 3rd Party Source (PPA, OpenSuSE Build Service etc)
validations:
required: true
- type: dropdown
id: accountType
attributes:
label: OneDrive Account Type
description: |
What is your OneDrive Account Type?
multiple: false
options:
- Personal
- Business | Office365
- SharePoint
validations:
required: true
- type: input
id: applicationVersion
attributes:
label: What is your OneDrive Application Version
description: |
* What is your 'onedrive' client version (`onedrive --version`)?
validations:
required: true
- type: textarea
id: applicationConfig
attributes:
label: What is your OneDrive Application Configuration
description: |
* What is your Application Configuration (`onedrive --display-config`)?
render: shell
validations:
required: true
- type: textarea
id: curlVersion
attributes:
label: What is your 'curl' version
description: |
* What is your output of (`curl --version`)?
render: shell
validations:
required: true
- type: dropdown
id: syncdirLocation
attributes:
label: Where is your 'sync_dir' located
description: |
Is your 'sync_dir' a local directory or on a network mount point?
multiple: false
options:
- Local
- Network
validations:
required: true
- type: textarea
id: mountPoints
attributes:
label: What are all your system 'mount points'
description: |
* What is your output of (`mount`)?
render: shell
validations:
required: true
- type: textarea
id: partitionTypes
attributes:
label: What are all your local file system partition types
description: |
* What is your output of (`lsblk -f`)?
render: shell
validations:
required: true
- type: textarea
id: usageDetails
attributes:
label: How do you use 'onedrive'
description: |
Explain your entire configuration setup - is the OneDrive folder shared with any other system or platform at the same time? Is the OneDrive account you use shared across multiple systems / platforms / Operating Systems and in use at the same time?
validations:
required: true
- type: textarea
id: howToReproduce
attributes:
label: Steps to reproduce the behaviour
description: |
List all the steps required to reproduce the issue.
If the issue is replicated by a specific 'file' or 'path', please archive the file and path tree and email them to support@mynas.com.au
validations:
required: true
- type: textarea
id: applicationVerboseLog
attributes:
label: Complete Verbose Log Output
description: |
A clear and full log of the problem when running the application in the following manner (i.e., not in monitor mode): (`onedrive --synchronize --verbose <any of your other needed options>`)
Run the application in a separate terminal window or SSH session and provide the entire application output including the error & crash.
Please also generate a full debug log whilst reproducing the issue as per [https://github.com/abraunegg/onedrive/wiki/Generate-debug-log-for-support](https://github.com/abraunegg/onedrive/wiki/Generate-debug-log-for-support) and email to support@mynas.com.au
render: shell
validations:
required: true
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: |
If applicable, add screenshots to help explain your problem.
- type: textarea
id: otherLogs
attributes:
label: Other Log Information or Details
description: |
If applicable, add the relevant output from `dmesg` or similar.
render: shell
- type: textarea
id: additionalContext
attributes:
label: Additional context
description: |
Add any other relevant additional context for the problem.


@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: "Have a question?"
url: https://github.com/abraunegg/onedrive/discussions
about: "Please do not raise a GitHub issue for asking questions - please post your question under GitHub Discussions. When opening a new discussion, please include all relevant details such as including your application version and how you installed the client. Thanks in advance for helping us keep the issue tracker clean!"


@@ -0,0 +1,17 @@
---
name: Feature request
about: Suggest an idea for this project
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when ...
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,45 +0,0 @@
name: "Feature Request"
description: Suggest an idea for this project
title: "Feature Request: "
labels: ["Feature Request"]
body:
- type: markdown
attributes:
value: |
Suggest an idea for this project
- type: textarea
id: featureProblem
attributes:
label: Is your feature request related to a problem? Please describe.
description: |
A clear and concise description of what the problem is. Ex. I'm always frustrated when ...
validations:
required: true
- type: textarea
id: featureSolution
attributes:
label: Describe the solution you'd like
description: |
A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: featureAlternatives
attributes:
label: Describe alternatives you've considered
description: |
A clear and concise description of any alternative solutions or features you've considered.
validations:
required: true
- type: textarea
id: additionalContext
attributes:
label: Additional context
description: |
Add any other context or information about the feature request here.
validations:
required: false

.github/lock.yml vendored (new file, 38 lines)

@@ -0,0 +1,38 @@
# Configuration for lock-threads - https://github.com/dessant/lock-threads
# Number of days of inactivity before a closed issue or pull request is locked
daysUntilLock: 30
# Skip issues and pull requests created before a given timestamp. Timestamp must
# follow ISO 8601 (`YYYY-MM-DD`). Set to `false` to disable
skipCreatedBefore: false
# Issues and pull requests with these labels will not be locked. Set to `[]` to disable
exemptLabels: []
# Label to add before locking, such as `outdated`. Set to `false` to disable
lockLabel: false
# Comment to post before locking. Set to `false` to disable
lockComment: >
This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.
# Assign `resolved` as the reason for locking. Set to `false` to disable
setLockReason: false
# Limit to only `issues` or `pulls`
# only: issues
# Optionally, specify configuration settings just for `issues` or `pulls`
# issues:
# exemptLabels:
# - help-wanted
# lockLabel: outdated
# pulls:
# daysUntilLock: 30
# Repository to extend settings from
# _extends: repo


@@ -1,96 +0,0 @@
name: Build Docker Images
on:
push:
branches: [ master ]
tags: [ 'v*' ]
pull_request:
# Comment these out to force a test build on a PR
branches:
- master
types: [closed]
env:
DOCKER_HUB_SLUG: driveone/onedrive
jobs:
build:
# Comment this out to force a test build on a PR
if: (!(github.event.action == 'closed' && github.event.pull_request.merged != true))
# Build runs on
runs-on: ubuntu-latest
strategy:
matrix:
flavor: [ fedora, debian, alpine ]
include:
- flavor: fedora
dockerfile: ./contrib/docker/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: debian
dockerfile: ./contrib/docker/Dockerfile-debian
platforms: linux/386,linux/amd64,linux/arm64,linux/arm/v7
- flavor: alpine
dockerfile: ./contrib/docker/Dockerfile-alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Check out code from GitHub
uses: actions/checkout@v3
with:
submodules: recursive
fetch-depth: 0
- name: Docker meta
id: docker_meta
uses: marcelcoding/ghaction-docker-meta@v2
with:
tag-edge: true
images: |
${{ env.DOCKER_HUB_SLUG }}
tag-semver: |
{{version}}
{{major}}.{{minor}}
flavor: ${{ matrix.flavor }}
main-flavor: ${{ matrix.flavor == 'debian' }}
- uses: docker/setup-qemu-action@v2
with:
image: tonistiigi/binfmt:latest
platforms: all
if: matrix.platforms != 'linux/amd64'
- uses: docker/setup-buildx-action@v2
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ matrix.flavor }}-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-${{ matrix.flavor }}
- name: Login to Docker Hub
uses: docker/login-action@v2
if: github.event_name != 'pull_request'
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Build and Push to Docker
uses: docker/build-push-action@v3
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.docker_meta.outputs.tags }}
labels: ${{ steps.docker_meta.outputs.labels }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
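For reference, a single matrix entry from the workflow above can be approximated locally. This is only an illustrative sketch: it assumes Docker with the buildx plugin is available, and the image tag used here is made up.
```bash
# Local approximation of the 'debian' matrix entry for a single platform
docker buildx build \
  --file ./contrib/docker/Dockerfile-debian \
  --platform linux/amd64 \
  --tag driveone/onedrive:local-test \
  .
```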


@@ -1,33 +0,0 @@
name: 'Lock Threads'
on:
schedule:
- cron: '19 0 * * *'
jobs:
lock:
runs-on: ubuntu-latest
steps:
- name: Lock Threads
uses: dessant/lock-threads@v2.0.3
with:
github-token: ${{ secrets.LOCK_THREADS }}
issue-lock-inactive-days: '7'
issue-exclude-created-before: ''
issue-exclude-labels: ''
issue-lock-labels: ''
issue-lock-comment: >
This issue has been automatically locked since there
has not been any recent activity after it was closed.
Please open a new issue for related bugs.
issue-lock-reason: 'resolved'
pr-lock-inactive-days: '7'
pr-exclude-created-before: ''
pr-exclude-labels: ''
pr-lock-labels: ''
pr-lock-comment: >
This pull request has been automatically locked since there
has not been any recent activity after it was closed.
Please open a new issue for related bugs.
pr-lock-reason: ''
process-only: ''


@@ -1,43 +0,0 @@
name: Test Build
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
build:
#runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: Check out code from GitHub
uses: actions/checkout@v3
with:
submodules: recursive
fetch-depth: 0
- name: Update Image
run: |
sudo apt-get clean
sudo apt-get update -y
- name: Install build-essential
run: sudo apt install -y build-essential
- name: Install build-dependencies
run: sudo apt install -y libcurl4-openssl-dev libsqlite3-dev pkg-config git curl ldc
- name: Configure
run: ./configure
- name: Compile
run: make clean; make;
- name: Install
run: sudo make install
- name: Run
run: onedrive --version

.gitignore vendored (1 changed line)

@@ -8,4 +8,3 @@ config.log
config.status
autom4te.cache/
contrib/pacman/PKGBUILD
contrib/spec/onedrive.spec

.travis-ci.sh (new file, 192 lines)

@@ -0,0 +1,192 @@
#!/bin/bash
# Based on a test script from avsm/ocaml repo https://github.com/avsm/ocaml
# Adapted from https://www.tomaz.me/2013/12/02/running-travis-ci-tests-on-arm.html
# Adapted from https://github.com/PJK/libcbor/blob/master/.travis-qemu.sh
# Adapted from https://gist.github.com/oznu/b5efd7784e5a820ec3746820f2183dc0
# Adapted from https://blog.lazy-evaluation.net/posts/linux/debian-armhf-bootstrap.html
# Adapted from https://blog.lazy-evaluation.net/posts/linux/debian-stretch-arm64.html
set -e
# CHROOT Directory
CHROOT_DIR=/tmp/chroot
# Debian package dependencies for the host to run ARM under QEMU
DEBIAN_MIRROR="http://httpredir.debian.org/debian"
HOST_DEPENDENCIES=(qemu-user-static binfmt-support debootstrap sbuild wget)
# Debian package dependencies for the chrooted environment
GUEST_DEPENDENCIES=(build-essential libcurl4-openssl-dev libsqlite3-dev libgnutls-openssl27 git pkg-config libxml2)
# LDC Version
# Different versions due to https://github.com/ldc-developers/ldc/issues/3027
# LDC v1.16.0 re-introduces ARMHF and ARM64 version - https://github.com/ldc-developers/ldc/releases/tag/v1.16.0
LDC_VERSION_ARMHF=1.16.0
LDC_VERSION_ARM64=1.16.0
function setup_arm32_chroot {
# Update apt repository details
sudo apt-get update
# 32Bit Variables
VERSION=jessie
CHROOT_ARCH=armhf
# Host dependencies
sudo apt-get install -qq -y "${HOST_DEPENDENCIES[@]}"
# Download LDC compiler
wget "https://github.com/ldc-developers/ldc/releases/download/v${LDC_VERSION_ARMHF}/ldc2-${LDC_VERSION_ARMHF}-linux-armhf.tar.xz"
tar -xf "ldc2-${LDC_VERSION_ARMHF}-linux-armhf.tar.xz"
mv "ldc2-${LDC_VERSION_ARMHF}-linux-armhf" "dlang-${ARCH}"
rm -rf "ldc2-${LDC_VERSION_ARMHF}-linux-armhf.tar.xz"
# Create chrooted environment
sudo mkdir "${CHROOT_DIR}"
sudo debootstrap --foreign --no-check-gpg --variant=buildd --arch="${CHROOT_ARCH}" "${VERSION}" "${CHROOT_DIR}" "${DEBIAN_MIRROR}"
sudo cp /usr/bin/qemu-arm-static "${CHROOT_DIR}"/usr/bin/
sudo chroot "${CHROOT_DIR}" /debootstrap/debootstrap --second-stage
sudo sbuild-createchroot --arch=${CHROOT_ARCH} --foreign --setup-only ${VERSION} ${CHROOT_DIR} ${DEBIAN_MIRROR}
configure_chroot
}
function setup_arm64_chroot {
# Update apt repository details
sudo apt-get update
# 64Bit Variables
VERSION64=stretch
CHROOT_ARCH64=arm64
# Host dependencies
sudo apt-get install -qq -y "${HOST_DEPENDENCIES[@]}"
# Download LDC compiler
wget "https://github.com/ldc-developers/ldc/releases/download/v${LDC_VERSION_ARM64}/ldc2-${LDC_VERSION_ARM64}-linux-aarch64.tar.xz"
tar -xf "ldc2-${LDC_VERSION_ARM64}-linux-aarch64.tar.xz"
mv "ldc2-${LDC_VERSION_ARM64}-linux-aarch64" "dlang-${ARCH}"
rm -rf "ldc2-${LDC_VERSION_ARM64}-linux-aarch64.tar.xz"
# ARM64 qemu-debootstrap needs to be 1.0.78, Trusty is 1.0.59
#sudo echo "deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse" >> /etc/apt/sources.list
echo "deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list > /dev/null
sudo apt-get update
sudo apt-get install -t xenial debootstrap
# Create chrooted environment
sudo mkdir "${CHROOT_DIR}"
sudo qemu-debootstrap --arch=${CHROOT_ARCH64} ${VERSION64} ${CHROOT_DIR} ${DEBIAN_MIRROR}
configure_chroot
}
function setup_x32_chroot {
# Update apt repository details
sudo apt-get update
# 32Bit Variables
VERSION=jessie
CHROOT_ARCH32=i386
# Host dependencies
sudo apt-get install -qq -y "${HOST_DEPENDENCIES[@]}"
# Download DMD compiler
DMDVER=2.083.1
wget "http://downloads.dlang.org/releases/2.x/${DMDVER}/dmd.${DMDVER}.linux.tar.xz"
tar -xf "dmd.${DMDVER}.linux.tar.xz"
mv dmd2 "dlang-${ARCH}"
rm -rf "dmd.${DMDVER}.linux.tar.xz"
# Create chrooted environment
sudo mkdir "${CHROOT_DIR}"
sudo debootstrap --foreign --no-check-gpg --variant=buildd --arch=${CHROOT_ARCH32} ${VERSION} ${CHROOT_DIR} ${DEBIAN_MIRROR}
sudo cp /usr/bin/qemu-i386-static "${CHROOT_DIR}/usr/bin/"
sudo cp /usr/bin/qemu-x86_64-static "${CHROOT_DIR}/usr/bin/"
sudo chroot "${CHROOT_DIR}" /debootstrap/debootstrap --second-stage
sudo sbuild-createchroot --arch=${CHROOT_ARCH32} --foreign --setup-only ${VERSION} ${CHROOT_DIR} ${DEBIAN_MIRROR}
configure_chroot
}
function configure_chroot {
# Create file with environment variables which will be used inside chrooted environment
echo "export ARCH=${ARCH}" > envvars.sh
echo "export TRAVIS_BUILD_DIR=${TRAVIS_BUILD_DIR}" >> envvars.sh
chmod a+x envvars.sh
# Install dependencies inside chroot
sudo chroot "${CHROOT_DIR}" apt-get update
sudo chroot "${CHROOT_DIR}" apt-get --allow-unauthenticated install -qq -y "${GUEST_DEPENDENCIES[@]}"
# Create build dir and copy travis build files to our chroot environment
sudo mkdir -p "${CHROOT_DIR}"/"${TRAVIS_BUILD_DIR}"
sudo rsync -a "${TRAVIS_BUILD_DIR}"/ "${CHROOT_DIR}"/"${TRAVIS_BUILD_DIR}"/
# Indicate chroot environment has been set up
sudo touch "${CHROOT_DIR}"/.chroot_is_done
# Call ourselves again which will cause tests to run
sudo chroot "${CHROOT_DIR}" bash -c "cd ${TRAVIS_BUILD_DIR} && chmod a+x ./.travis-ci.sh"
sudo chroot "${CHROOT_DIR}" bash -c "cd ${TRAVIS_BUILD_DIR} && ./.travis-ci.sh"
}
function build_onedrive {
# Depending on architecture, build onedrive using applicable tool
echo "$(uname -a)"
HOMEDIR=$(pwd)
if [ "${ARCH}" = "x64" ]; then
# Build on x86_64 as normal
./configure
make clean; make;
else
if [ "${ARCH}" = "x32" ]; then
# 32Bit DMD Build
./configure DC="${HOMEDIR}"/dlang-"${ARCH}"/linux/bin32/dmd
make clean;
make
else
# LDC Build - ARM32, ARM64
./configure DC="${HOMEDIR}"/dlang-"${ARCH}"/bin/ldmd2
make clean;
make
fi
fi
# Functional testing of built application
test_onedrive
}
function test_onedrive {
# Testing onedrive client - does the built application execute?
./onedrive --version
# Functional testing on x64 only
if [ "${ARCH}" = "x64" ]; then
chmod a+x ./tests/makefiles.sh
cd ./tests/
./makefiles.sh
cd ..
mkdir -p ~/.config/onedrive/
echo "$ODP" > ~/.config/onedrive/refresh_token
./onedrive --synchronize --verbose --syncdir '~/OneDriveALT'
# OneDrive Cleanup
rm -rf ~/OneDriveALT/*
./onedrive --synchronize --verbose --syncdir '~/OneDriveALT'
fi
}
if [ "${ARCH}" = "arm32" ] || [ "${ARCH}" = "arm64" ] || [ "${ARCH}" = "x32" ]; then
if [ -e "/.chroot_is_done" ]; then
# We are inside ARM chroot
echo "Running inside chrooted QEMU ${ARCH} environment"
. ./envvars.sh
export PATH="$PATH:/usr/sbin:/sbin:/bin"
build_onedrive
else
# Need to set up chrooted environment first
echo "Setting up chrooted ${ARCH} build environment"
if [ "${ARCH}" = "x32" ]; then
# 32Bit i386 Environment
setup_x32_chroot
else
if [ "${ARCH}" = "arm32" ]; then
# 32Bit ARM Environment
setup_arm32_chroot
else
# 64Bit ARM Environment
setup_arm64_chroot
fi
fi
fi
else
# Proceed as normal
echo "Running an x86_64 Build"
build_onedrive
fi

.travis.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
# sudo access is required
sudo: required
# Compilation language
language: d
# Use latest DMD
d:
- dmd
# Which build architectures we will build on
env:
- ARCH=x64
- ARCH=x32
- ARCH=arm32
- ARCH=arm64
script:
- "bash -ex .travis-ci.sh"


@@ -2,344 +2,6 @@
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## 2.4.25 - 2023-06-21
### Fixed
* Fixed that the application was reporting as v2.2.24 when in fact it was v2.4.24 (release tagging issue)
* Fixed that the running version obsolete flag (due to above issue) was causing a false flag as being obsolete
* Fixed that zero-byte files do not have a hash as reported by the OneDrive API and thus should not generate an error message
### Updated
* Update to Debian Docker file to resolve Docker image Operating System reported vulnerabilities
* Update to Alpine Docker file to resolve Docker image Operating System reported vulnerabilities
* Update to Fedora Docker file to resolve Docker image Operating System reported vulnerabilities
* Updated documentation (various)
## 2.4.24 - 2023-06-20
### Fixed
* Fix for extra encoded quotation marks surrounding Docker environment variables
* Fix webhook subscription creation for SharePoint Libraries
* Fix that an HTTP 504 - Gateway Timeout causes local files to be deleted when using --download-only & --cleanup-local-files mode
* Fix that folders are renamed despite using --dry-run
* Fix deprecation warnings with dmd 2.103.0
* Fix error that the application is unable to perform a database vacuum: out of memory when exiting
### Removed
* Remove sha1 from being used by the client as this is being deprecated by Microsoft in July 2023
* Complete the removal of crc32 elements
### Added
* Added ONEDRIVE_SINGLE_DIRECTORY configuration capability to Docker
* Added --get-file-link shell completion
* Added configuration to allow HTTP session timeout(s) tuning via config (taken from v2.5.x)
### Updated
* Update to Debian Docker file to resolve Docker image Operating System reported vulnerabilities
* Update to Alpine Docker file to resolve Docker image Operating System reported vulnerabilities
* Update to Fedora Docker file to resolve Docker image Operating System reported vulnerabilities
* Updated cgi.d to commit 680003a - last upstream change before requiring `core.d` dependency requirement
* Updated documentation (various)
## 2.4.23 - 2023-01-06
### Fixed
* Fixed RHEL7, RHEL8 and RHEL9 Makefile and SPEC file compatibility
### Removed
* Disable systemd 'PrivateUsers' due to issues with systemd running processes when option is enabled, causes local file deletes on RHEL based systems
### Updated
* Update --get-O365-drive-id error handling to display a more appropriate error message if the API cannot be found
* Update the GitHub version check to utilise the date a release was done, to allow 1 month grace period before generating obsolete version message
* Update Alpine Dockerfile to use Alpine 3.17 and Golang 1.19
* Update handling of --source-directory and --destination-directory if one is empty or missing and if used with --synchronize or --monitor
* Updated documentation (various)
## 2.4.22 - 2022-12-06
### Fixed
* Fix application crash when local file is changed to a symbolic link with non-existent target
* Fix build error with dmd-2.101.0
* Fix build error with LDC 1.28.1 on Alpine
* Fix issue of silent exit when unable to delete local files when using --cleanup-local-files
* Fix application crash due to access permissions on configured path for sync_dir
* Fix potential application crash when exiting due to failure state and unable to cleanly shutdown the database
* Fix creation of parent empty directories when parent is excluded by sync_list
### Added
* Added performance output details for key functions
### Changed
* Switch Docker 'latest' to point at Debian builds rather than Fedora due to ongoing Fedora build failures
* Align application logging events to actual application defaults for --monitor operations
* Performance Improvement: Avoid duplicate costly path calculations and DB operations if not required
* Disable non-working remaining sandboxing options within systemd service files
* Performance Improvement: Only check 'sync_list' if this has been enabled and configured
* Display 'Sync with OneDrive is complete' when using --synchronize
* Change the order of processing between Microsoft OneDrive restrictions and limitations check and skip_file|skip_dir check
### Removed
* Remove building Fedora ARMv7 builds due to ongoing build failures
### Updated
* Update config change detection handling
* Updated documentation (various)
## 2.4.21 - 2022-09-27
### Fixed
* Fix that the download progress bar doesn't always reach 100% when rate_limit is set
* Fix --resync handling of database file removal
* Fix Makefile to be consistent with permissions that are being used
* Fix that logging output for skipped uploaded files is missing
* Fix to allow non-sync tasks while sync is running
* Fix where --resync is enforced for non-sync operations
* Fix to resolve segfault when running 'onedrive --display-sync-status' when run as 2nd process
* Fix DMD 2.100.2 deprecation warning
### Added
* Add GitHub Action Test Build Workflow (replacing Travis CI)
* Add option --display-running-config to display the running configuration as used at application startup
* Add 'config' option to request readonly access in oauth authorization step
* Add option --cleanup-local-files to cleanup local files regardless of sync state when using --download-only
* Add option --with-editing-perms to create a read-write shareable link when used with --create-share-link <file>
### Changed
* Change the exit code of the application to 126 when a --resync is required
### Updated
* Updated --get-O365-drive-id implementation for data access
* Update what application options require an argument
* Update application logging output for error messages to remove certain \n prefix when logging to a file
* Update onedrive.spec.in to fix error building RPM
* Update GUI notification handling for specific skipped scenarios
* Updated documentation (various)
## 2.4.20 - 2022-07-20
### Fixed
* Fix 'foreign key constraint failed' when using OneDrive Business Shared Folders due to change to using /delta query
* Fix various little spelling fixes (check with lintian during Debian packaging)
* Fix handling of a custom configuration directory when using --confdir
* Fix to ensure that any active http instance is shutdown before any application exit
* Fix to enforce that --confdir must be a directory
### Added
* Added 'force_http_11' configuration option to allow forcing HTTP/1.1 operations
### Changed
* Increased thread sleep for better process I/O wait handling
* Removed 'force_http_2' configuration option
### Updated
* Update OneDrive API response handling for National Cloud Deployments
* Updated to switch to using curl defaults for HTTP/2 operations
* Updated documentation (various)
## 2.4.19 - 2022-06-15
### Fixed
* Update Business Shared Folders to use a /delta query
* Update when DB is updated by OneDrive API data and update when file hash is required to be generated
### Added
* Added ONEDRIVE_UPLOADONLY flag for Docker
### Updated
* Updated GitHub workflows
* Updated documentation (various)
## 2.4.18 - 2022-06-02
### Fixed
* Fixed various database related access issues stemming from running multiple instances of the application at the same time using the same configuration data
* Fixed --display-config being impacted by --resync flag
* Fixed installation permissions for onedrive man-pages file
* Fixed that in some situations that users try --upload-only and --download-only together which is not possible
* Fixed application crash if unable to read required hash files
### Added
* Added Feature Request to add an override for skip_dir|skip_file through flag to force sync
* Added a check to validate local filesystem available space before attempting file download
* Added GitHub Actions to build Docker containers and push to DockerHub
### Updated
* Updated all Docker build files to current distributions, using updated distribution LDC version
* Updated logging output to logfiles when an actual sync process is occurring
* Updated output of --display-config to be more relevant
* Updated manpage to align with application configuration
* Updated documentation and Docker files based on minimum compiler versions to dmd-2.088.0 and ldc-1.18.0
* Updated documentation (various)
## 2.4.17 - 2022-04-30
### Fixed
* Fix docker build, by add missing git package for Fedora builds
* Fix application crash when attempting to sync a broken symbolic link
* Fix Internet connect disruption retry handling and logging output
* Fix local folder creation timestamp with timestamp from OneDrive
* Fix logging output when download failed
### Added
* Add additional logging specifically for delete event to denote in log output the source of a deletion event when running in --monitor mode
### Changed
* Improve when the local database integrity check is performed and on what frequency the database integrity check is performed
### Updated
* Remove application output ambiguity on how to access 'help' for the client
* Update logging output when running in --monitor --verbose mode in regards to the inotify events
* Updated documentation (various)
## 2.4.16 - 2022-03-10
### Fixed
* Update application file logging error handling
* Explicitly set libcurl options
* Fix that when a sync_list exclusion is matched, the item needs to be excluded when using --resync
* Fix so that application can be compiled correctly on Android hosts
* Fix the handling of 429 and 5xx responses when they are generated by OneDrive in a self-referencing circular pattern
* Fix applying permissions to volume directories when running in rootless podman
* Fix unhandled errors from OneDrive when initialising subscriptions fail
### Added
* Enable GitHub Sponsors
* Implement --resync-auth to enable CLI passing in of --resync approval
* Add function to check client version vs latest GitHub release
* Add --reauth to allow easy re-authentication of the client
* Implement --modified-by to display who last modified a file and when the modification was done
* Implement feature request to mark partially-downloaded files as .partial during download
* Add documentation for Podman support
### Changed
* Document risk regarding using --resync and force user acceptance of usage risk to proceed
* Use YAML for Bug Reports and Feature Requests
* Update Dockerfiles to use more modern base Linux distribution
### Updated
* Updated documentation (various)
## 2.4.15 - 2021-12-31
### Fixed
* Fix unable to upload to OneDrive Business Shared Folders due to OneDrive API restricting quota information
* Update fixing edge case with OneDrive Personal Shared Folders and --resync --upload-only
### Added
* Add SystemD hardening
* Add --operation-timeout argument
### Changed
* Updated minimum compiler versions to dmd-2.087.0 and ldc-1.17.0
### Updated
* Updated Dockerfile-alpine to use Alpine 3.14
* Updated documentation (various)
## 2.4.14 - 2021-11-24
### Fixed
* Support DMD 2.097.0 as compiler for Docker Builds
* Fix getPathDetailsByDriveId query when using --dry-run and a nested path with --single-directory
* Fix edge case when syncing OneDrive Personal Shared Folders
* Catch unhandled API response errors when querying OneDrive Business Shared Folders
* Catch unhandled API response errors when listing OneDrive Business Shared Folders
* Fix error 'Key not found: remaining' with Business Shared Folders (OneDrive API change)
* Fix overwriting local files with older versions from OneDrive when items.sqlite3 does not exist and --resync is not used
### Added
* Added operation_timeout as a new configuration to assist in cases where operations take longer than 1h to complete
* Add Real-Time syncing of remote updates via webhooks
* Add --auth-response option and expose through entrypoint.sh for Docker
* Add --disable-download-validation
### Changed
* Always prompt for credentials for authentication rather than re-using cached browser details
* Do not re-auth on --logout
### Updated
* Updated documentation (various)
## 2.4.13 - 2021-7-14
### Fixed
* Support DMD 2.097.0 as compiler
* Fix to handle OneDrive API Bad Request response when querying if file exists
* Fix application crash and incorrect handling of --single-directory when syncing a OneDrive Business Shared Folder due to using 'Add Shortcut to My Files'
* Fix application crash due to invalid UTF-8 sequence in the pathname for the application configuration
* Fix error message when deleting a large number of files
* Fix Docker build process to source GOSU keys from updated GPG key location
* Fix application crash due to a conversion overflow when calculating file offset for session uploads
* Fix Docker Alpine build failing due to filesystem permissions issue due to Docker build system and Alpine Linux 3.14 incompatibility
* Fix that Business Shared Folders with parentheses are ignored
### Updated
* Updated Lock Bot to run daily
* Updated documentation (various)
## 2.4.12 - 2021-5-28
### Fixed
* Fix an unhandled Error 412 when uploading modified files to OneDrive Business Accounts
* Fix 'sync_list' handling of inclusions when name is included in another folders name
* Fix that options --upload-only & --remove-source-files are ignored on an upload session restore
* Fix to add file check when adding item to database if using --upload-only --remove-source-files
* Fix application crash when SharePoint displayName is being withheld
### Updated
* Updated Lock Bot to use GitHub Actions
* Updated documentation (various)
## 2.4.11 - 2021-4-07
### Fixed
* Fix support for '/*' regardless of location within sync_list file
* Fix 429 response handling correctly check for 'retry-after' response header and use set value
* Fix 'sync_list' path handling for sub item matching, so that items in parent are not implicitly matched when there is no wildcard present
* Fix --get-O365-drive-id to use 'nextLink' value if present when searching for specific SharePoint site names
* Fix OneDrive Business Shared Folder existing name conflict check
* Fix incorrect error message 'Item cannot be deleted from OneDrive because it was not found in the local database' when item is actually present
* Fix application crash when unable to rename folder structure due to unhandled file-system issue
* Fix uploading documents to Shared Business Folders when the shared folder exists on a SharePoint site due to Microsoft Sharepoint 'enrichment' of files
* Fix that a file record is kept in database when using --no-remote-delete & --remove-source-files
### Added
* Added support in --get-O365-drive-id to provide the 'drive_id' for multiple 'document libraries' within a single Shared Library Site
### Removed
* Removed the deprecated config option 'force_http_11' which was flagged as deprecated by PR #549 in v2.3.6 (June 2019)
### Updated
* Updated error output of --get-O365-drive-id to provide more details why an error occurred if a SharePoint site lacks the details we need to perform the match
* Updated Docker build files for Raspberry Pi to dedicated armhf & aarch64 Dockerfiles
* Updated logging output when in --monitor mode, avoid outputting misleading logging when the new or modified item is a file, not a directory
* Updated documentation (various)
## 2.4.10 - 2021-2-19
### Fixed
* Catch database assertion when item path cannot be calculated
* Fix alpine Docker build so it uses the same golang alpine version
* Search all distinct drive id's rather than just default drive id for --get-file-link
* Use correct driveId value to query for changes when using --single-directory
* Improve upload handling of files for SharePoint sites and detecting when SharePoint modifies the file post upload
* Correctly handle '~' when present in 'log_dir' configuration option
* Fix logging output when handing downloaded new files
* Fix to use correct path offset for sync_list exclusion matching
### Added
* Add upload speed metrics when files are uploaded and clarify that 'data to transfer' is what is needed to be downloaded from OneDrive
* Add new config option to rate limit connection to OneDrive
* Support new file maximum upload size of 250GB
* Support sync_list matching full path root wildcard with exclusions to simplify sync_list configuration
### Updated
* Rename Office365.md --> SharePoint-Shared-Libraries.md which better describes this document
* Updated Dockerfile config for arm64
* Updated documentation (various)
## 2.4.9 - 2020-12-27
### Fixed
* Fix to handle case where API provided deltaLink generates a further API error
* Fix application crash when unable to read a local file due to local file permissions
* Fix application crash when calculating the path length due to invalid UTF characters in local path
* Fix Docker build on Alpine due to missing symbols caused by using the edge version of ldc and ldc-runtime
* Fix application crash with --get-O365-drive-id when API response is restricted
### Added
* Add debug log output of the configured URL's which will be used throughout the application to remove any ambiguity as to using incorrect URL's when making API calls
* Improve application startup when using --monitor when there is no network connection to the OneDrive API and only initialise application once OneDrive API is reachable
* Add Docker environment variable to allow --logout for re-authentication
### Updated
* Remove duplicate code for error output functions and enhance error logging output
* Updated documentation
## 2.4.8 - 2020-11-30
### Fixed
* Fix to use config set option for 'remove_source_files' and 'skip_dir_strict_match' rather than ignore if set


@@ -55,10 +55,10 @@ endif
system_unit_files = contrib/systemd/onedrive@.service
user_unit_files = contrib/systemd/onedrive.service
DOCFILES = README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/SharePoint-Shared-Libraries.md docs/USAGE.md docs/BusinessSharedFolders.md docs/advanced-usage.md docs/application-security.md
DOCFILES = README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/Office365.md docs/USAGE.md docs/BusinessSharedFolders.md docs/advanced-usage.md
ifneq ("$(wildcard /etc/redhat-release)","")
RHEL = $(shell cat /etc/redhat-release | grep -E "(Red Hat Enterprise Linux|CentOS)" | wc -l)
RHEL = $(shell cat /etc/redhat-release | grep -E "(Red Hat Enterprise Linux Server|CentOS)" | wc -l)
RHEL_VERSION = $(shell rpm --eval "%{rhel}")
else
RHEL = 0
@@ -78,8 +78,7 @@ SOURCES = \
src/sync.d \
src/upload.d \
src/util.d \
src/progress.d \
src/arsd/cgi.d
src/progress.d
ifeq ($(NOTIFICATIONS),yes)
SOURCES += src/notifications/notify.d src/notifications/dnotify.d
@@ -107,10 +106,10 @@ onedrive: $(SOURCES)
install: all
$(INSTALL) -D onedrive $(DESTDIR)$(bindir)/onedrive
$(INSTALL) -D -m 0644 onedrive.1 $(DESTDIR)$(mandir)/man1/onedrive.1
$(INSTALL) -D -m 0644 contrib/logrotate/onedrive.logrotate $(DESTDIR)$(sysconfdir)/logrotate.d/onedrive
$(INSTALL) -D onedrive.1 $(DESTDIR)$(mandir)/man1/onedrive.1
$(INSTALL) -D -m 644 contrib/logrotate/onedrive.logrotate $(DESTDIR)$(sysconfdir)/logrotate.d/onedrive
mkdir -p $(DESTDIR)$(docdir)
$(INSTALL) -D -m 0644 $(DOCFILES) $(DESTDIR)$(docdir)
$(INSTALL) -D -m 644 $(DOCFILES) $(DESTDIR)$(docdir)
ifeq ($(HAVE_SYSTEMD),yes)
$(INSTALL) -d -m 0755 $(DESTDIR)$(systemduserunitdir) $(DESTDIR)$(systemdsystemunitdir)
ifeq ($(RHEL),1)
@@ -124,12 +123,12 @@ else
ifeq ($(RHEL_VERSION),6)
install -D contrib/init.d/onedrive.init $(DESTDIR)/etc/init.d/onedrive
install -D contrib/init.d/onedrive_service.sh $(DESTDIR)$(bindir)/onedrive_service.sh
endif
endif
endif
ifeq ($(COMPLETIONS),yes)
$(INSTALL) -D -m 0644 contrib/completions/complete.zsh $(DESTDIR)$(ZSH_COMPLETION_DIR)/_onedrive
$(INSTALL) -D -m 0644 contrib/completions/complete.bash $(DESTDIR)$(BASH_COMPLETION_DIR)/onedrive
$(INSTALL) -D -m 0644 contrib/completions/complete.fish $(DESTDIR)$(FISH_COMPLETION_DIR)/onedrive.fish
$(INSTALL) -D -m 644 contrib/completions/complete.zsh $(DESTDIR)$(ZSH_COMPLETION_DIR)/_onedrive
$(INSTALL) -D -m 644 contrib/completions/complete.bash $(DESTDIR)$(BASH_COMPLETION_DIR)/onedrive
$(INSTALL) -D -m 644 contrib/completions/complete.fish $(DESTDIR)$(FISH_COMPLETION_DIR)/onedrive.fish
endif


@@ -1,20 +1,19 @@
# OneDrive Client for Linux
[![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases)
[![Release Date](https://img.shields.io/github/release-date/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases)
[![Test Build](https://github.com/abraunegg/onedrive/actions/workflows/testbuild.yaml/badge.svg)](https://github.com/abraunegg/onedrive/actions/workflows/testbuild.yaml)
[![Build Docker Images](https://github.com/abraunegg/onedrive/actions/workflows/docker.yaml/badge.svg)](https://github.com/abraunegg/onedrive/actions/workflows/docker.yaml)
[![Travis CI](https://img.shields.io/travis/com/abraunegg/onedrive)](https://travis-ci.com/abraunegg/onedrive/builds)
[![Docker Build](https://img.shields.io/docker/cloud/automated/driveone/onedrive)](https://hub.docker.com/r/driveone/onedrive)
[![Docker Pulls](https://img.shields.io/docker/pulls/driveone/onedrive)](https://hub.docker.com/r/driveone/onedrive)
A free Microsoft OneDrive Client which supports OneDrive Personal, OneDrive for Business, OneDrive for Office365 and SharePoint.
This powerful and highly configurable client can run on all major Linux distributions, FreeBSD, or as a Docker container. It supports one-way and two-way sync capabilities and securely connects to Microsoft OneDrive services.
This client is a 'fork' of the [skilion](https://github.com/skilion/onedrive) client, whose developer has confirmed he has no desire to maintain or support it ([reference](https://github.com/skilion/onedrive/issues/518#issuecomment-717604726)). This fork has been in active development since mid 2018.
This client is a 'fork' of the [skilion](https://github.com/skilion/onedrive) client which was abandoned in 2018.
## Features
* State caching
* Real-Time local file monitoring with inotify
* Real-Time syncing of remote updates via webhooks
* Real-Time file monitoring with Inotify
* File upload / download validation to ensure data integrity
* Resumable uploads
* Support OneDrive for Business (part of Office 365)
@@ -24,69 +23,39 @@ This client is a 'fork' of the [skilion](https://github.com/skilion/onedrive) cl
* Dry-run capability to test configuration changes
* Prevent major OneDrive accidental data deletion after configuration change
* Support for National cloud deployments (Microsoft Cloud for US Government, Microsoft Cloud Germany, Azure and Office 365 operated by 21Vianet in China)
* Supports single & multi-tenanted applications
* Supports rate limiting of traffic
## What's missing
* While local changes are uploaded right away, remote changes are delayed until the next automated sync cycle when using --monitor
* Ability to encrypt/decrypt files on-the-fly when uploading/downloading files from OneDrive
* Support for Windows 'On-Demand' functionality so file is only downloaded when accessed locally
* A GUI for configuration management
## External Enhancements
* A GUI for configuration management: [OneDrive Client for Linux GUI](https://github.com/bpozdena/OneDriveGUI)
* Colorful log output terminal modification: [OneDrive Client for Linux Colorful log Output](https://github.com/zzzdeb/dotfiles/blob/master/scripts/tools/onedrive_log)
* System Tray Icon: [OneDrive Client for Linux System Tray Icon](https://github.com/DanielBorgesOliveira/onedrive_tray)
## Reporting issues
If you encounter any bugs, you can report them here on GitHub. Before filing an issue be sure to:
## Supported Application Version
Only the current application release version or greater is supported.
The current application release version is: [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases)
Check the version of the application you are using `onedrive --version` and ensure that you are running either the current release or compile the application yourself from master to get the latest version.
If you are not using the above application version or greater, you must upgrade your application to obtain support.
## Have a Question
If you have a question or need something clarified, please raise a new discussion post [here](https://github.com/abraunegg/onedrive/discussions)
Be sure to review the Frequently Asked Questions as well before raising a new discussion post.
## Frequently Asked Questions
Refer to [Frequently Asked Questions](https://github.com/abraunegg/onedrive/wiki/Frequently-Asked-Questions)
## Reporting an Issue or Bug
If you encounter any bugs you can report them here on GitHub. Before filing an issue be sure to:
1. Check the version of the application you are using `onedrive --version` and ensure that you are running a supported application version. If you are not using a supported application version, you must first upgrade your application to a supported version and then re-test for your issue.
2. If you are using a supported application version, fill in a new bug report using the [issue template](https://github.com/abraunegg/onedrive/issues/new?template=bug_report.md)
1. Check the version of the application you are using `onedrive --version` and ensure that you are running either the latest [release](https://github.com/abraunegg/onedrive/releases) or built from master.
2. Fill in a new bug report using the [issue template](https://github.com/abraunegg/onedrive/issues/new?template=bug_report.md)
3. Generate a debug log for support using the following [process](https://github.com/abraunegg/onedrive/wiki/Generate-debug-log-for-support)
* If you are in *any* way concerned regarding the sensitivity of the data contained within the verbose debug log file, create a new OneDrive account, configure the client to use that, use *dummy* data to simulate your environment and then replicate your original issue
* If you are still concerned, provide an NDA or confidentiality document to sign
4. Upload the debug log to [pastebin](https://pastebin.com/) or archive and email to support@mynas.com.au
* If you are concerned regarding the sensitivity of your debug data, encrypt + password protect the archive file and provide the decryption password via an out-of-band (OOB) mechanism. Email support@mynas.com.au for an OOB method for the password to be sent.
* If you are still concerned, provide an NDA or confidentiality document to sign
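A minimal sketch of steps 1 and 3 above (this assumes the double `--verbose` flag described on the linked wiki page for debug-level output, and the log file name is only an example):
```bash
# Confirm the client version you are running
onedrive --version

# Capture a full debug log while reproducing the issue
onedrive --synchronize --verbose --verbose > ~/onedrive_debug.log 2>&1
```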
## Known issues
Refer to [docs/known-issues.md](https://github.com/abraunegg/onedrive/blob/master/docs/known-issues.md)
See [docs/known-issues.md](https://github.com/abraunegg/onedrive/blob/master/docs/known-issues.md)
## Documentation and Configuration Assistance
### Installing from Distribution Packages or Building the OneDrive Client for Linux from source
Refer to [docs/INSTALL.md](https://github.com/abraunegg/onedrive/blob/master/docs/INSTALL.md)
### Building and Installation
See [docs/INSTALL.md](https://github.com/abraunegg/onedrive/blob/master/docs/INSTALL.md)
### Configuration and Usage
Refer to [docs/USAGE.md](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md)
See [docs/USAGE.md](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md)
### Configure OneDrive Business Shared Folders
Refer to [docs/BusinessSharedFolders.md](https://github.com/abraunegg/onedrive/blob/master/docs/BusinessSharedFolders.md)
See [docs/BusinessSharedFolders.md](https://github.com/abraunegg/onedrive/blob/master/docs/BusinessSharedFolders.md)
### Configure SharePoint / Office 365 Shared Libraries (Business or Education)
Refer to [docs/SharePoint-Shared-Libraries.md](https://github.com/abraunegg/onedrive/blob/master/docs/SharePoint-Shared-Libraries.md)
See [docs/Office365.md](https://github.com/abraunegg/onedrive/blob/master/docs/Office365.md)
### Configure National Cloud support
Refer to [docs/national-cloud-deployments.md](https://github.com/abraunegg/onedrive/blob/master/docs/national-cloud-deployments.md)
See [docs/national-cloud-deployments.md](https://github.com/abraunegg/onedrive/blob/master/docs/national-cloud-deployments.md)
### Docker support
Refer to [docs/Docker.md](https://github.com/abraunegg/onedrive/blob/master/docs/Docker.md)
### Podman support
Refer to [docs/Podman.md](https://github.com/abraunegg/onedrive/blob/master/docs/Podman.md)
See [docs/Docker.md](https://github.com/abraunegg/onedrive/blob/master/docs/Docker.md)

config (25 changed lines)

@@ -19,16 +19,16 @@
# disable_upload_validation = "false"
# enable_logging = "false"
# force_http_11 = "false"
# force_http_2 = "false"
# local_first = "false"
# no_remote_delete = "false"
# skip_symlinks = "false"
# debug_https = "false"
# skip_dotfiles = "false"
# skip_size = "1000"
# dry_run = "false"
# min_notify_changes = "5"
# monitor_log_frequency = "6"
# monitor_fullscan_frequency = "12"
# monitor_log_frequency = "5"
# monitor_fullscan_frequency = "10"
# sync_root_files = "false"
# classify_as_big_delete = "1000"
# user_agent = ""
@@ -36,26 +36,9 @@
# skip_dir_strict_match = "false"
# application_id = ""
# resync = "false"
# resync_auth = "false"
# bypass_data_preservation = "false"
# azure_ad_endpoint = ""
# azure_tenant_id = "common"
# sync_business_shared_folders = "false"
# sync_dir_permissions = "700"
# sync_file_permissions = "600"
# rate_limit = "131072"
# webhook_enabled = "false"
# webhook_public_url = ""
# webhook_listening_host = ""
# webhook_listening_port = "8888"
# webhook_expiration_interval = "86400"
# webhook_renewal_interval = "43200"
# space_reservation = "50"
# display_running_config = "false"
# read_only_auth_scope = "false"
# cleanup_local_files = "false"
# operation_timeout = "3600"
# dns_timeout = "60"
# connect_timeout = "10"
# data_timeout = "600"
# ip_protocol_version = "0"
# sync_file_permissions = "600"
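For reference, options in this template are enabled by removing the leading `# `. A minimal sketch, assuming the default `~/.config/onedrive/config` location used elsewhere in this repository:
```
# Example ~/.config/onedrive/config (option names taken from the template above)
enable_logging = "true"
skip_symlinks = "true"
monitor_fullscan_frequency = "10"
```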

configure vendored (24 changed lines)

@@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for onedrive v2.4.25.
# Generated by GNU Autoconf 2.69 for onedrive v2.4.8.
#
# Report bugs to <https://github.com/abraunegg/onedrive>.
#
@@ -579,8 +579,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='onedrive'
PACKAGE_TARNAME='onedrive'
PACKAGE_VERSION='v2.4.25'
PACKAGE_STRING='onedrive v2.4.25'
PACKAGE_VERSION='v2.4.8'
PACKAGE_STRING='onedrive v2.4.8'
PACKAGE_BUGREPORT='https://github.com/abraunegg/onedrive'
PACKAGE_URL=''
@@ -1219,7 +1219,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures onedrive v2.4.25 to adapt to many kinds of systems.
\`configure' configures onedrive v2.4.8 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@@ -1280,7 +1280,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of onedrive v2.4.25:";;
short | recursive ) echo "Configuration of onedrive v2.4.8:";;
esac
cat <<\_ACEOF
@@ -1393,7 +1393,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
onedrive configure v2.4.25
onedrive configure v2.4.8
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@@ -1410,7 +1410,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by onedrive $as_me v2.4.25, which was
It was created by onedrive $as_me v2.4.8, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@@ -2133,7 +2133,7 @@ case $(basename $DC) in
# remove everything after ):
VERSION=${VERSION%%):*}
# now version should be something like L.M.N
MINVERSION=1.18.0
MINVERSION=1.12.0
;;
dmd)
# DMD64 D Compiler v2.085.1\n...
@@ -2141,7 +2141,7 @@ case $(basename $DC) in
VERSION=${VERSION#*Compiler v}
VERSION=${VERSION%% *}
# now version should be something like L.M.N
MINVERSION=2.088.0
MINVERSION=2.083.1
;;
esac
@@ -2162,7 +2162,7 @@ fi
PACKAGE_DATE="June 2023"
PACKAGE_DATE="November 2020"
@@ -3159,7 +3159,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by onedrive $as_me v2.4.25, which was
This file was extended by onedrive $as_me v2.4.8, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@@ -3212,7 +3212,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
onedrive config.status v2.4.25
onedrive config.status v2.4.8
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"


@@ -9,7 +9,7 @@ dnl - commit the changed files (configure.ac, configure)
dnl - tag the release
AC_PREREQ([2.69])
AC_INIT([onedrive],[v2.4.25], [https://github.com/abraunegg/onedrive], [onedrive])
AC_INIT([onedrive],[v2.4.8], [https://github.com/abraunegg/onedrive], [onedrive])
AC_CONFIG_SRCDIR([src/main.d])
@@ -104,7 +104,7 @@ case $(basename $DC) in
# remove everything after ):
VERSION=${VERSION%%):*}
# now version should be something like L.M.N
MINVERSION=1.18.0
MINVERSION=1.12.0
;;
dmd)
# DMD64 D Compiler v2.085.1\n...
@@ -112,7 +112,7 @@ case $(basename $DC) in
VERSION=${VERSION#*Compiler v}
VERSION=${VERSION%% *}
# now version should be something like L.M.N
MINVERSION=2.088.0
MINVERSION=2.083.1
;;
esac


@ -1,3 +1,5 @@
#!/bin/bash
#
# BASH completion code for OneDrive Linux Client
# (c) 2019 Norbert Preining
# License: GPLv3+ (as with the rest of the OneDrive Linux client project)
@@ -10,8 +12,8 @@ _onedrive()
cur=${COMP_WORDS[COMP_CWORD]}
prev=${COMP_WORDS[COMP_CWORD-1]}
options='--check-for-nomount --check-for-nosync --debug-https --disable-notifications --display-config --display-sync-status --download-only --disable-upload-validation --dry-run --enable-logging --force-http-1.1 --force-http-2 --get-file-link --local-first --logout -m --monitor --no-remote-delete --print-token --reauth --resync --skip-dot-files --skip-symlinks --synchronize --upload-only -v --verbose --version -h --help'
argopts='--create-directory --get-O365-drive-id --operation-timeout --remove-directory --single-directory --source-directory'
options='--check-for-nomount --check-for-nosync --debug-https --disable-notifications --display-config --display-sync-status --download-only --disable-upload-validation --dry-run --enable-logging --force-http-1.1 --force-http-2 --local-first --logout -m --monitor --no-remote-delete --print-token --resync --skip-dot-files --skip-symlinks --synchronize --upload-only -v --verbose --version -h --help'
argopts='--create-directory --get-O365-drive-id --remove-directory --single-directory --source-directory'
# Loop on the arguments to manage conflicting options
for (( i=0; i < ${#COMP_WORDS[@]}-1; i++ )); do
@ -19,22 +21,13 @@ _onedrive()
[[ ${COMP_WORDS[i]} == '--synchronize' ]] && options=${options/--monitor}
[[ ${COMP_WORDS[i]} == '--monitor' ]] && options=${options/--synchronize}
done
case "$prev" in
--confdir|--syncdir)
_filedir
return 0
;;
--get-file-link)
if command -v sed &> /dev/null; then
pushd "$(onedrive --display-config | sed -n "/sync_dir/s/.*= //p")" &> /dev/null
_filedir
popd &> /dev/null
fi
return 0
;;
--create-directory|--get-O365-drive-id|--operation-timeout|--remove-directory|--single-directory|--source-directory)
--create-directory|--get-O365-drive-id|--remove-directory|--single-directory|--source-directory)
return 0
;;
*)
@ -42,7 +35,7 @@ _onedrive()
return 0
;;
esac
# notreached
return 0
}
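As a hedged aside, a completion script like the one above can be tried without installing it system-wide by sourcing it into the current shell (the path below is an assumption; adjust it to where the file lives in your checkout):
```bash
# Load the completion for the current bash session only (illustrative path)
source contrib/completions/complete.bash
# Then type: onedrive --<TAB><TAB> to list the options defined above
```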

View file

@ -16,17 +16,14 @@ complete -c onedrive -l dry-run -d 'Perform a trial sync with no changes made.'
complete -c onedrive -l enable-logging -d 'Enable client activity to a separate log file.'
complete -c onedrive -l force-http-1.1 -d 'Force the use of HTTP 1.1 for all operations.'
complete -c onedrive -l force-http-2 -d 'Force the use of HTTP 2 for all operations.'
complete -c onedrive -l get-file-link -d 'Display the file link of a synced file.'
complete -c onedrive -l get-O365-drive-id -d 'Query and return the Office 365 Drive ID for a given Office 365 SharePoint Shared Library.'
complete -c onedrive -s h -l help -d 'Print help information.'
complete -c onedrive -l local-first -d 'Synchronize from the local directory source first, before downloading changes from OneDrive.'
complete -c onedrive -l logout -d 'Logout the current user.'
complete -c onedrive -n "not __fish_seen_subcommand_from --synchronize" -a "-m --monitor" -d 'Keep monitoring for local and remote changes.'
complete -c onedrive -l no-remote-delete -d 'Do not delete local file deletes from OneDrive when using --upload-only.'
complete -c onedrive -l operation-timeout -d 'Specify the maximum amount of time (in seconds) an operation is allowed to take.'
complete -c onedrive -l print-token -d 'Print the access token, useful for debugging.'
complete -c onedrive -l remove-directory -d 'Remove a directory on OneDrive - no sync will be performed.'
complete -c onedrive -l reauth -d 'Reauthenticate the client with OneDrive.'
complete -c onedrive -l resync -d 'Forget the last saved state, perform a full sync.'
complete -c onedrive -l single-directory -d 'Specify a single local directory within the OneDrive root to sync.'
complete -c onedrive -l skip-dot-files -d 'Skip dot files and folders from syncing.'

View file

@ -21,15 +21,12 @@ all_opts=(
'--enable-logging[Enable client activity to a separate log file]'
'--force-http-1.1[Force the use of HTTP 1.1 for all operations]'
'--force-http-2[Force the use of HTTP 2 for all operations]'
'--get-file-link[Display the file link of a synced file.]:file name:'
'--get-O365-drive-id[Query and return the Office 365 Drive ID for a given Office 365 SharePoint Shared Library]:'
'--local-first[Synchronize from the local directory source first, before downloading changes from OneDrive.]'
'--logout[Logout the current user]'
'(-m --monitor)'{-m,--monitor}'[Keep monitoring for local and remote changes]'
'--no-remote-delete[Do not delete local file deletes from OneDrive when using --upload-only]'
'--operation-timeout[Specify the maximum amount of time (in seconds) an operation is allowed to take.]:seconds:'
'--print-token[Print the access token, useful for debugging]'
'--reauth[Reauthenticate the client with OneDrive]'
'--resync[Forget the last saved state, perform a full sync]'
'--remove-directory[Remove a directory on OneDrive - no sync will be performed.]:directory name:'
'--single-directory[Specify a single local directory within the OneDrive root to sync.]:source directory:_files -/'

View file

@ -1,41 +1,26 @@
# -*-Dockerfile-*-
ARG FEDORA_VERSION=38
ARG DEBIAN_VERSION=bullseye
ARG GO_VERSION=1.20
ARG GOSU_VERSION=1.16
FROM golang:${GO_VERSION}-${DEBIAN_VERSION} AS builder-gosu
ARG GOSU_VERSION
RUN go install -ldflags "-s -w" github.com/tianon/gosu@${GOSU_VERSION}
FROM fedora:${FEDORA_VERSION} AS builder-onedrive
RUN dnf install -y ldc pkgconf libcurl-devel sqlite-devel git
ENV PKG_CONFIG=/usr/bin/pkgconf
FROM centos:7
ENV GOSU_VERSION=1.11
RUN yum install -y make git gcc libcurl-devel sqlite-devel pkg-config && \
yum install -y http://downloads.dlang.org/releases/2.x/2.092.1/dmd-2.092.1-0.fedora.x86_64.rpm && \
rm -rf /var/cache/yum/ && \
# gosu installation
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/${GOSU_VERSION}/gosu-amd64" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/${GOSU_VERSION}/gosu-amd64.asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& rm -r /root/.gnupg/ \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
RUN mkdir -p /onedrive/conf /onedrive/data
COPY . /usr/src/onedrive
WORKDIR /usr/src/onedrive
RUN ./configure \
&& make clean \
&& make \
&& make install
FROM fedora:${FEDORA_VERSION}
RUN dnf clean all \
&& dnf -y update
RUN dnf install -y libcurl sqlite ldc-libs \
&& dnf clean all \
&& mkdir -p /onedrive/conf /onedrive/data
COPY --from=builder-gosu /go/bin/gosu /usr/local/bin/
COPY --from=builder-onedrive /usr/local/bin/onedrive /usr/local/bin/
COPY contrib/docker/entrypoint.sh /
RUN chmod +x /entrypoint.sh
RUN ./configure && \
make clean && \
make && \
make install
COPY contrib/docker/entrypoint.sh /entrypoint.sh
VOLUME ["/onedrive/conf"]
ENTRYPOINT ["/entrypoint.sh"]

View file

@ -1,38 +1,23 @@
# -*-Dockerfile-*-
ARG ALPINE_VERSION=3.18
ARG GO_VERSION=1.20
ARG GOSU_VERSION=1.16
FROM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS builder-gosu
ARG GOSU_VERSION
RUN go install -ldflags "-s -w" github.com/tianon/gosu@${GOSU_VERSION}
FROM alpine:${ALPINE_VERSION} AS builder-onedrive
RUN apk add --update --no-cache alpine-sdk gnupg xz curl-dev sqlite-dev binutils-gold autoconf automake ldc
FROM golang:alpine
RUN apk add -X http://dl-cdn.alpinelinux.org/alpine/edge/community \
-X http://dl-cdn.alpinelinux.org/alpine/edge/main \
alpine-sdk gnupg xz curl-dev sqlite-dev binutils-gold \
autoconf automake ldc
RUN go get github.com/tianon/gosu
COPY . /usr/src/onedrive
WORKDIR /usr/src/onedrive
RUN autoreconf -fiv \
&& ./configure \
&& make clean \
&& make \
&& make install
FROM alpine:${ALPINE_VERSION}
RUN apk add --upgrade apk-tools \
&& apk upgrade --available
RUN apk add --update --no-cache bash libcurl libgcc shadow sqlite-libs ldc-runtime \
&& mkdir -p /onedrive/conf /onedrive/data
COPY --from=builder-gosu /go/bin/gosu /usr/local/bin/
COPY --from=builder-onedrive /usr/local/bin/onedrive /usr/local/bin/
COPY contrib/docker/entrypoint.sh /
RUN chmod +x /entrypoint.sh
RUN cd /usr/src/onedrive/ && \
autoreconf -fiv && \
./configure && \
make clean && \
make && \
make install
FROM alpine
ENTRYPOINT ["/entrypoint.sh"]
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/community \
-X http://dl-cdn.alpinelinux.org/alpine/edge/main \
bash libcurl libgcc shadow sqlite-libs ldc-runtime && \
mkdir -p /onedrive/conf /onedrive/data
COPY contrib/docker/entrypoint.sh /
COPY --from=0 /go/bin/gosu /usr/local/bin/onedrive /usr/local/bin/

View file

@ -1,36 +0,0 @@
# -*-Dockerfile-*-
ARG DEBIAN_VERSION=stable
FROM debian:${DEBIAN_VERSION} AS builder-onedrive
RUN apt-get clean \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends build-essential curl ca-certificates libcurl4-openssl-dev libsqlite3-dev libxml2-dev pkg-config git ldc \
&& rm -rf /var/lib/apt/lists/*
COPY . /usr/src/onedrive
WORKDIR /usr/src/onedrive
RUN ./configure DC=/usr/bin/ldmd2 \
&& make clean \
&& make \
&& make install
FROM debian:${DEBIAN_VERSION}-slim
RUN apt-get clean \
&& apt-get update \
&& apt-get upgrade -y \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends gosu libcurl4 libsqlite3-0 ca-certificates libphobos2-ldc-shared100 \
&& rm -rf /var/lib/apt/lists/* \
# Fix bug with ssl on armhf: https://serverfault.com/a/1045189
&& /usr/bin/c_rehash \
&& mkdir -p /onedrive/conf /onedrive/data
COPY --from=builder-onedrive /usr/local/bin/onedrive /usr/local/bin/
COPY contrib/docker/entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

View file

@ -0,0 +1,21 @@
# -*-Dockerfile-*-
FROM debian:stretch
RUN apt update && \
apt install -y build-essential curl libcurl4-openssl-dev libsqlite3-dev pkg-config wget git
RUN wget https://github.com/ldc-developers/ldc/releases/download/v1.16.0/ldc2-1.16.0-linux-armhf.tar.xz && \
tar -xvf ldc2-1.16.0-linux-armhf.tar.xz
COPY . /usr/src/onedrive
RUN cd /usr/src/onedrive/ && \
./configure DC=/ldc2-1.16.0-linux-armhf/bin/ldmd2 && \
make clean && \
make && \
make install
FROM debian:stretch-slim
ENTRYPOINT ["/entrypoint.sh"]
RUN apt update && \
apt install -y gosu libcurl3 libsqlite3-0 && \
rm -rf /var/*/apt && \
mkdir -p /onedrive/conf /onedrive/data
COPY contrib/docker/entrypoint.sh /
COPY --from=0 /usr/local/bin/onedrive /usr/local/bin/

View file

@ -0,0 +1,23 @@
# -*-Dockerfile-*-
FROM debian:stretch
RUN apt update && \
apt install -y build-essential curl libcurl4-openssl-dev libsqlite3-dev pkg-config git wget
RUN wget http://downloads.dlang.org/releases/2.x/2.092.1/dmd_2.092.1-0_amd64.deb -O /tmp/dmd_amd64.deb && \
dpkg -i /tmp/dmd_amd64.deb
RUN rm -f /tmp/dmd_amd64.deb
COPY . /usr/src/onedrive
RUN cd /usr/src/onedrive/ && \
./configure && \
make clean && \
make && \
make install
FROM debian:stretch-slim
ENTRYPOINT ["/entrypoint.sh"]
RUN apt update && \
apt install -y gosu libcurl3 libsqlite3-0 && \
rm -rf /var/*/apt && \
mkdir -p /onedrive/conf /onedrive/data
COPY contrib/docker/entrypoint.sh /
COPY --from=0 /usr/local/bin/onedrive /usr/local/bin/

View file

@ -1,6 +1,6 @@
#!/bin/bash -eu
set +H -euo pipefail
set +H -xeuo pipefail
: ${ONEDRIVE_UID:=$(stat /onedrive/data -c '%u')}
: ${ONEDRIVE_GID:=$(stat /onedrive/data -c '%g')}
@ -23,110 +23,43 @@ else
grep -qv root <( groups "${oduser}" ) || { echo 'ROOT level privileges prohibited!'; exit 1; }
fi
chown "${oduser}:${odgroup}" /onedrive/ /onedrive/conf
# Default parameters
ARGS=(--monitor --confdir /onedrive/conf --syncdir /onedrive/data)
echo "Base Args: ${ARGS}"
# Make Verbose output optional, based on an environment variable
if [ "${ONEDRIVE_VERBOSE:=0}" == "1" ]; then
echo "# We are being verbose"
echo "# Adding --verbose"
ARGS=(--verbose ${ARGS[@]})
fi
# Tell client to perform debug output, based on an environment variable
if [ "${ONEDRIVE_DEBUG:=0}" == "1" ]; then
echo "# We are performing debug output"
echo "# Adding --verbose --verbose"
ARGS=(--verbose --verbose ${ARGS[@]})
fi
# Tell client to perform HTTPS debug output, based on an environment variable
if [ "${ONEDRIVE_DEBUG_HTTPS:=0}" == "1" ]; then
echo "# We are performing HTTPS debug output"
echo "# Adding --debug-https"
ARGS=(--debug-https ${ARGS[@]})
fi
# Tell client to perform a resync based on environment variable
if [ "${ONEDRIVE_RESYNC:=0}" == "1" ]; then
echo "# We are performing a --resync"
echo "# Adding --resync --resync-auth"
ARGS=(--resync --resync-auth ${ARGS[@]})
ARGS=(--resync ${ARGS[@]})
fi
# Tell client to sync in download-only mode based on environment variable
if [ "${ONEDRIVE_DOWNLOADONLY:=0}" == "1" ]; then
echo "# We are synchronizing in download-only mode"
echo "# Adding --download-only"
ARGS=(--download-only ${ARGS[@]})
fi
# Tell client to sync in upload-only mode based on environment variable
if [ "${ONEDRIVE_UPLOADONLY:=0}" == "1" ]; then
echo "# We are synchronizing in upload-only mode"
echo "# Adding --upload-only"
ARGS=(--upload-only ${ARGS[@]})
fi
# Tell client to sync in no-remote-delete mode based on environment variable
if [ "${ONEDRIVE_NOREMOTEDELETE:=0}" == "1" ]; then
echo "# We are synchronizing in no-remote-delete mode"
echo "# Adding --no-remote-delete"
ARGS=(--no-remote-delete ${ARGS[@]})
fi
# Tell client to logout based on environment variable
if [ "${ONEDRIVE_LOGOUT:=0}" == "1" ]; then
echo "# We are logging out"
echo "# Adding --logout"
ARGS=(--logout ${ARGS[@]})
fi
# Tell client to re-authenticate based on environment variable
if [ "${ONEDRIVE_REAUTH:=0}" == "1" ]; then
echo "# We are logging out to perform a reauthentication"
echo "# Adding --reauth"
ARGS=(--reauth ${ARGS[@]})
fi
# Tell client to utilize auth files at the provided locations based on environment variable
if [ -n "${ONEDRIVE_AUTHFILES:=""}" ]; then
echo "# We are using auth files to perform authentication"
echo "# Adding --auth-files ARG"
ARGS=(--auth-files ${ONEDRIVE_AUTHFILES} ${ARGS[@]})
fi
# Tell client to utilize provided auth response based on environment variable
if [ -n "${ONEDRIVE_AUTHRESPONSE:=""}" ]; then
echo "# We are providing the auth response directly to perform authentication"
echo "# Adding --auth-response ARG"
ARGS=(--auth-response \"${ONEDRIVE_AUTHRESPONSE}\" ${ARGS[@]})
fi
# Tell client to print the running configuration at application startup
if [ "${ONEDRIVE_DISPLAY_CONFIG:=0}" == "1" ]; then
echo "# We are printing the application running configuration at application startup"
echo "# Adding --display-running-config"
ARGS=(--display-running-config ${ARGS[@]})
fi
# Tell client to use sync single dir option
if [ -n "${ONEDRIVE_SINGLE_DIRECTORY:=""}" ]; then
echo "# We are synchronizing in single-directory mode"
echo "# Adding --single-directory ARG"
ARGS=(--single-directory \"${ONEDRIVE_SINGLE_DIRECTORY}\" ${ARGS[@]})
fi
if [ ${#} -gt 0 ]; then
ARGS=("${@}")
fi
echo "# Launching onedrive"
# Only switch user if not running as target uid (ie. Docker)
if [ "$ONEDRIVE_UID" = "$(id -u)" ]; then
/usr/local/bin/onedrive "${ARGS[@]}"
else
chown "${oduser}:${odgroup}" /onedrive/data /onedrive/conf
exec gosu "${oduser}" /usr/local/bin/onedrive "${ARGS[@]}"
fi
exec gosu "${oduser}" /usr/local/bin/onedrive "${ARGS[@]}"
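Worth noting from the `if [ ${#} -gt 0 ]` branch above: any arguments passed to the container replace the assembled defaults entirely. A minimal usage sketch, reusing the image tag, volumes and flags already referenced in this document (not the only way to run it):
```bash
# Override the default '--monitor' behaviour with a one-shot verbose sync (illustrative)
docker run -it --name onedrive \
  -v onedrive_conf:/onedrive/conf \
  -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" \
  driveone/onedrive:edge --synchronize --verbose --confdir /onedrive/conf --syncdir /onedrive/data
```
Because the defaults are replaced wholesale, `--confdir` and `--syncdir` are repeated here so the client still uses the mounted volumes.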

View file

@ -5,14 +5,8 @@
%global with_systemd 0
%endif
%if 0%{?rhel} >= 7
%global rhel_unitdir 1
%else
%global rhel_unitdir 0
%endif
Name: onedrive
Version: 2.4.25
Version: 2.4.8
Release: 1%{?dist}
Summary: Microsoft OneDrive Client
Group: System Environment/Network
@ -21,7 +15,7 @@ URL: https://github.com/abraunegg/onedrive
Source0: v%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
BuildRequires: dmd >= 2.088.0
BuildRequires: dmd >= 2.083.0
BuildRequires: sqlite-devel >= 3.7.15
BuildRequires: libcurl-devel
Requires: sqlite >= 3.7.15
@ -65,16 +59,11 @@ make
%{_docdir}/%{name}
%{_bindir}/%{name}
%if 0%{?with_systemd}
%if 0%{?rhel_unitdir}
%{_unitdir}/%{name}.service
%{_unitdir}/%{name}@.service
%else
%{_userunitdir}/%{name}.service
%{_unitdir}/%{name}@.service
%endif
%else
%{_bindir}/onedrive_service.sh
/etc/init.d/onedrive
%endif
%changelog
%changelog
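For orientation, a spec file like this is normally consumed with `rpmbuild` once the source tarball is staged; a minimal sketch, assuming a standard `~/rpmbuild` tree and that `configure` has already substituted the version into the final `.spec`:
```bash
# Stage the release tarball and generated spec, then build binary and source RPMs (illustrative paths)
cp v2.4.25.tar.gz ~/rpmbuild/SOURCES/
cp onedrive.spec  ~/rpmbuild/SPECS/
rpmbuild -ba ~/rpmbuild/SPECS/onedrive.spec
```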

View file

@ -5,23 +5,10 @@ After=network-online.target
Wants=network-online.target
[Service]
# Commented out hardenings are disabled because they may not work out of the box on your distribution
# If you know what you are doing please try to enable them.
ProtectSystem=full
#PrivateUsers=true
#PrivateDevices=true
ProtectHostname=true
#ProtectClock=true
ProtectKernelTunables=true
#ProtectKernelModules=true
#ProtectKernelLogs=true
ProtectControlGroups=true
RestrictRealtime=true
ExecStart=@prefix@/bin/onedrive --monitor
Restart=on-failure
RestartSec=3
RestartPreventExitStatus=3
[Install]
WantedBy=default.target
WantedBy=default.target

View file

@ -5,17 +5,6 @@ After=network-online.target
Wants=network-online.target
[Service]
# Commented out hardenings are disabled because they don't work out of the box.
# If you know what you are doing please try to enable them.
ProtectSystem=full
#PrivateDevices=true
ProtectHostname=true
#ProtectClock=true
ProtectKernelTunables=true
#ProtectKernelModules=true
#ProtectKernelLogs=true
ProtectControlGroups=true
RestrictRealtime=true
ExecStart=@prefix@/bin/onedrive --monitor --confdir=/home/%i/.config/onedrive
User=%i
Group=users
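As a hedged reminder of how these two units are typically used once installed: the first unit runs under the invoking user's own systemd instance, while the templated `@` unit above (note `User=%i`) runs system-wide on behalf of a named user:
```bash
# User-level service, run as your own user (illustrative)
systemctl --user enable --now onedrive
# System-wide templated service for a specific user, e.g. 'alex' (hypothetical username)
sudo systemctl enable --now onedrive@alex.service
```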

View file

@ -1,8 +1,4 @@
# How to configure OneDrive Business Shared Folder Sync
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Process Overview
Syncing OneDrive Business Shared Folders requires additional configuration for your 'onedrive' client:
1. List available shared folders to determine which folder you wish to sync & to validate that you have access to that folder
2. Create a new file called 'business_shared_folders' in your config directory which contains a list of the shared folders you wish to sync
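To illustrate step 2, the 'business_shared_folders' file is a plain-text list of shared folder names, one per line; a minimal sketch with hypothetical folder names:
```text
Accounting
Project Documents
```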

View file

@ -1,225 +1,112 @@
# Run the OneDrive Client for Linux under Docker
This client can be run as a Docker container, with 3 available container base options for you to choose from:
# onedrive docker image
| Container Base | Docker Tag | Description | i686 | x86_64 | ARMHF | AARCH64 |
|----------------|-------------|----------------------------------------------------------------|:------:|:------:|:-----:|:-------:|
| Alpine Linux | edge-alpine | Docker container based on Alpine 3.18 using 'master' |❌|✔|❌|✔|
| Alpine Linux | alpine | Docker container based on Alpine 3.18 using latest release |❌|✔|❌|✔|
| Debian | debian | Docker container based on Debian Stable using latest release |✔|✔|✔|✔|
| Debian | edge | Docker container based on Debian Stable using 'master' |✔|✔|✔|✔|
| Debian | edge-debian | Docker container based on Debian Stable using 'master' |✔|✔|✔|✔|
| Debian | latest | Docker container based on Debian Stable using latest release |✔|✔|✔|✔|
| Fedora | edge-fedora | Docker container based on Fedora 38 using 'master' |❌|✔|❌|✔|
| Fedora | fedora | Docker container based on Fedora 38 using latest release |❌|✔|❌|✔|
That's right folks, onedrive is now dockerized ;)
These containers offer a simple monitoring-mode service for the OneDrive Client for Linux.
This container offers a simple monitoring-mode service for the 'Free Client for OneDrive on Linux'.
The instructions below have been validated on:
* Fedora 38
## Basic Setup
The instructions below will utilise the 'edge' tag, however this can be substituted for any of the other docker tags such as 'latest' from the table above if desired.
### 0. Install docker under your own platform's instructions
The 'edge' Docker container aligns more closely with current documentation and features, whereas 'latest' is the release version from a static point in time. The 'latest' tag may therefore contain bugs and/or issues that have since been fixed; those fixes are contained in 'edge'.
### 1. Pull the image
Additionally there are specific version release tags for each release. Refer to https://hub.docker.com/r/driveone/onedrive/tags for any other Docker tags you may be interested in.
**Note:** The instructions below for docker have been tested and validated when logging into the system as an unprivileged user (non 'root' user).
## High Level Configuration Steps
1. Install 'docker' as per your distribution platform's instructions if not already installed.
2. Configure 'docker' to allow non-privileged users to run Docker commands
3. Disable 'SELinux' as per your distribution platform's instructions
4. Test 'docker' by running a test container without using `sudo`
5. Prepare the required docker volumes to store the configuration and data
6. Run the 'onedrive' container and perform authorisation
7. Running the 'onedrive' container under 'docker'
## Configuration Steps
### 1. Install 'docker' on your platform
Install 'docker' as per your distribution platform's instructions if not already installed.
### 2. Configure 'docker' to allow non-privileged users to run Docker commands
Read https://docs.docker.com/engine/install/linux-postinstall/ to configure the 'docker' user group with your user account to allow your non 'root' user to run 'docker' commands.
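In practice the linked Docker post-install steps boil down to something like the following (a sketch of the upstream instructions; adjust the user and log out and back in afterwards):
```bash
sudo groupadd docker            # the group usually exists already
sudo usermod -aG docker $USER   # add your user to the 'docker' group
newgrp docker                   # pick up the new group in the current shell
```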
### 3. Disable SELinux on your platform
In order to run the Docker container, SELinux must be disabled. Without doing this, when the application is authenticated in the steps below, the following error will be presented:
```text
ERROR: The local file system returned an error with the following message:
Error Message: /onedrive/conf/refresh_token: Permission denied
The database cannot be opened. Please check the permissions of ~/.config/onedrive/items.sqlite3
```
The only known work-around for the above problem at present is to disable SELinux. Please refer to your distribution platform's instructions on how to perform this step.
* Fedora: https://docs.fedoraproject.org/en-US/quick-docs/selinux-changing-states-and-modes/#_disabling_selinux
* Red Hat Enterprise Linux: https://access.redhat.com/solutions/3176
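On Fedora and Red Hat style systems the usual approach is to set `SELINUX=disabled` in `/etc/selinux/config` and reboot; a hedged sketch (consult the links above before changing this on your platform):
```bash
# Persistently disable SELinux, then reboot for the change to take effect (illustrative)
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo reboot
```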
After disabling SELinux and rebooting your system, confirm that `getenforce` returns `Disabled`:
```text
$ getenforce
Disabled
```
If you are still experiencing permission issues despite disabling SELinux, please read https://www.redhat.com/sysadmin/container-permission-denied-errors
### 4. Test 'docker' on your platform
Ensure that 'docker' is running as a system service, and is enabled to be activated on system reboot:
```bash
sudo systemctl enable --now docker
docker pull driveone/onedrive:latest
```
Test that 'docker' is operational for your 'non-root' user, as per below:
```bash
[alex@fedora-38-docker-host ~]$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
719385e32844: Pull complete
Digest: sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d
Status: Downloaded newer image for hello-world:latest
**NOTE:** The SELinux context needs to be configured or disabled for Docker to be able to write to the OneDrive host directory.
Hello from Docker!
This message shows that your installation appears to be working correctly.
### 2. Prepare config volume
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
Onedrive needs two volumes. One of them is the config volume. Create it with:
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
[alex@fedora-38-docker-host ~]$
```
### 5. Configure the required docker volumes
The 'onedrive' Docker container requires 2 docker volumes to operate:
* Config Volume
* Data Volume
The first volume is the configuration volume that stores all the applicable application configuration + current runtime state. In a non-containerised environment, this normally resides in `~/.config/onedrive` - in a containerised environment this is stored in the volume tagged as `/onedrive/conf`
The second volume is the data volume, where all your data from Microsoft OneDrive is stored locally. This volume is mapped to an actual directory point on your local filesystem and this is stored in the volume tagged as `/onedrive/data`
#### 5.1 Prepare the 'config' volume
Create the 'config' volume with the following command:
```bash
docker volume create onedrive_conf
```
This will create a docker volume labeled `onedrive_conf`, where all configuration of your onedrive account will be stored. You can add a custom config file in this location at a later point in time if required.
This will create a docker volume labeled `onedrive_conf`, where all configuration of your onedrive account will be stored. You can add a custom config file and other things later.
The second docker volume is for your data folder and is created in the next step. It needs the path to a folder on your filesystem that you want to keep in sync with OneDrive. Keep in mind that:
- The owner of your specified folder must not be root
- The owner of your specified folder must have permissions for its parent directory
### 3. First run
Onedrive needs to be authorized with your Microsoft account. This is achieved by running docker in interactive mode. Run the docker image with the two commands below and **make sure to change `onedriveDir` to the onedrive data directory on your filesystem (e.g. `"/home/abraunegg/OneDrive"`)**.
Additionally, the user id and group id should be added to remove any potential user conflicts, denoted by the environment variables `${ONEDRIVE_UID}` and `${ONEDRIVE_GID}`.
#### 5.2 Prepare the 'data' volume
Create the 'data' volume with the following command:
```bash
docker volume create onedrive_data
onedriveDir="${HOME}/OneDrive"
docker run -it --name onedrive -v onedrive_conf:/onedrive/conf -v "${onedriveDir}:/onedrive/data" -e "ONEDRIVE_UID:${ONEDRIVE_UID}" -e "ONEDRIVE_GID:${ONEDRIVE_GID}" driveone/onedrive:latest
```
This will create a docker volume labeled `onedrive_data` and will map to a path on your local filesystem. This is where your data from Microsoft OneDrive will be stored. Keep in mind that:
- You will be asked to open a specific link using your web browser
- Login to your Microsoft Account and give the application the permission
- After giving the permission, you will be redirected to a blank page.
- Copy the URI of the blank page into the application.
* The owner of this specified folder must not be root
* The owner of this specified folder must have permissions for its parent directory
* Docker will attempt to change the permissions of the volume to the user the container is configured to run as
The onedrive monitor is configured to start with your host system. If your onedrive is working as expected, you can detach from the container with Ctrl+p, Ctrl+q.
**NOTE:** Issues occur when this target folder is a mounted folder of an external system (NAS, SMB mount, USB Drive etc) as the 'mount' itself is owned by 'root'. If this is your use case, you *must* ensure your normal user can mount your desired target without having the target mounted by 'root'. If you do not fix this, your Docker container will fail to start with the following error message:
```bash
ROOT level privileges prohibited!
```
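A hedged way to sanity-check ownership of the target before starting the container (standard tools; the variable name matches the examples later in this document):
```bash
# Confirm the sync target is a mount/directory owned by your user, not root (illustrative)
findmnt -T "${ONEDRIVE_DATA_DIR}" -o TARGET,SOURCE,FSTYPE
stat -c '%U:%G %n' "${ONEDRIVE_DATA_DIR}"
```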
### 4. Status, stop, and restart
### 6. First run of Docker container under docker and performing authorisation
The 'onedrive' client within the container first needs to be authorised with your Microsoft account. This is achieved by initially running docker in interactive mode.
Check if the monitor service is running
Run the docker image with the commands below and make sure to change the value of `ONEDRIVE_DATA_DIR` to the actual onedrive data directory on your filesystem that you wish to use (e.g. `export ONEDRIVE_DATA_DIR="/home/abraunegg/OneDrive"`).
**Important:** The 'target' folder of `ONEDRIVE_DATA_DIR` must exist before running the docker container. The script below will create 'ONEDRIVE_DATA_DIR' so that it exists locally for the docker volume mapping to occur.
It is also a requirement that the container be run using a non-root UID and GID; you must provide a non-root UID and GID (e.g. `export ONEDRIVE_UID=1000` and `export ONEDRIVE_GID=1000`). The script below will use `id` to evaluate your system environment and use the correct values.
```bash
export ONEDRIVE_DATA_DIR="${HOME}/OneDrive"
export ONEDRIVE_UID=`id -u`
export ONEDRIVE_GID=`id -g`
mkdir -p ${ONEDRIVE_DATA_DIR}
docker run -it --name onedrive -v onedrive_conf:/onedrive/conf \
-v "${ONEDRIVE_DATA_DIR}:/onedrive/data" \
-e "ONEDRIVE_UID=${ONEDRIVE_UID}" \
-e "ONEDRIVE_GID=${ONEDRIVE_GID}" \
driveone/onedrive:edge
```
When the Docker container successfully starts:
* You will be asked to open a specific link using your web browser
* Login to your Microsoft Account and give the application the permission
* After giving the permission, you will be redirected to a blank page
* Copy the URI of the blank page into the application prompt to authorise the application
Once the 'onedrive' application is authorised, the client will automatically start monitoring your `ONEDRIVE_DATA_DIR` for data changes to be uploaded to OneDrive. Files stored on OneDrive will be downloaded to this location.
If the client is working as expected, you can detach from the container with Ctrl+p, Ctrl+q.
### 7. Running the 'onedrive' container under 'docker'
#### 7.1 Check if the monitor service is running
```bash
docker ps -f name=onedrive
```
#### 7.2 Show 'onedrive' runtime logs
Show monitor run logs
```bash
docker logs onedrive
```
#### 7.3 Stop running 'onedrive' container
Stop running monitor
```bash
docker stop onedrive
```
#### 7.4 Start 'onedrive' container
Resume monitor
```bash
docker start onedrive
```
#### 7.5 Remove 'onedrive' container
Remove onedrive monitor
```bash
docker rm -f onedrive
```
## Advanced Setup
## Advanced Usage
### 5. Docker-compose
### How to use Docker-compose
You can utilise `docker-compose` if it is available on your platform and you are able to use docker compose schemas > 3.
In the following example it is assumed you have a `ONEDRIVE_DATA_DIR` environment variable and have already created the `onedrive_conf` volume.
You can also use docker bind mounts for the configuration folder, e.g. `export ONEDRIVE_CONF="${HOME}/OneDriveConfig"`.
Also supports docker-compose schemas > 3.
In the following example it is assumed you have a `onedriveDir` environment variable and a `onedrive_conf` volume.
However, you can also use bind mounts for the configuration folder, e.g. `export ONEDRIVE_CONF="${HOME}/OneDriveConfig"`.
```
version: "3"
services:
onedrive:
image: driveone/onedrive:edge
image: driveone/onedrive:latest
restart: unless-stopped
environment:
- ONEDRIVE_UID=${PUID}
- ONEDRIVE_GID=${PGID}
volumes:
- onedrive_conf:/onedrive/conf
- ${ONEDRIVE_DATA_DIR}:/onedrive/data
- ${onedriveDir}:/onedrive/data
```
Note that you still have to perform step 3: First Run.
### Editing the running configuration and using a 'config' file
The 'onedrive' client should run in default configuration; however, you can change this default configuration by placing a custom config file in the `onedrive_conf` docker volume. First download the default config from [here](https://raw.githubusercontent.com/abraunegg/onedrive/master/config)
### 6. Edit the config
Onedrive should run in default configuration, however you can change your configuration by placing a custom config file in the `onedrive_conf` docker volume. First download the default config from [here](https://raw.githubusercontent.com/abraunegg/onedrive/master/config)
Then put it into your onedrive_conf volume path, which can be found with:
```bash
@ -230,35 +117,37 @@ Or you can map your own config folder to the config volume. Make sure to copy al
The detailed document for the config can be found here: [Configuration](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#configuration)
### Syncing multiple accounts
There are many ways to do this; the easiest is probably the following:
### 7. Sync multiple accounts
There are many ways to do this, the easiest is probably to
1. Create a second docker config volume (replace `Work` with your desired name): `docker volume create onedrive_conf_Work`
2. And start a second docker monitor container (again replace `Work` with your desired name):
```
export ONEDRIVE_DATA_DIR_WORK="/home/abraunegg/OneDriveWork"
mkdir -p ${ONEDRIVE_DATA_DIR_WORK}
docker run -it --restart unless-stopped --name onedrive_Work -v onedrive_conf_Work:/onedrive/conf -v "${ONEDRIVE_DATA_DIR_WORK}:/onedrive/data" driveone/onedrive:edge
onedriveDirWork="/home/abraunegg/OneDriveWork"
docker run -it --restart unless-stopped --name onedrive_Work -v onedrive_conf_Work:/onedrive/conf -v "${onedriveDirWork}:/onedrive/data" driveone/onedrive:latest
```
### Run or update the Docker container with one script
## Run or update with one script
If you are experienced with docker and onedrive, you can use the following script:
```bash
# Update ONEDRIVE_DATA_DIR with correct OneDrive directory path
ONEDRIVE_DATA_DIR="${HOME}/OneDrive"
# Create directory if non-existent
mkdir -p ${ONEDRIVE_DATA_DIR}
# Update onedriveDir with correct existing OneDrive directory path
onedriveDir="${HOME}/OneDrive"
firstRun='-d'
docker pull driveone/onedrive:edge
docker inspect onedrive_conf > /dev/null 2>&1 || { docker volume create onedrive_conf; firstRun='-it'; }
docker inspect onedrive > /dev/null 2>&1 && docker rm -f onedrive
docker run $firstRun --restart unless-stopped --name onedrive -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" driveone/onedrive:edge
docker pull driveone/onedrive:latest
docker inspect onedrive_conf > /dev/null || { docker volume create onedrive_conf; firstRun='-it'; }
docker inspect onedrive > /dev/null && docker rm -f onedrive
docker run $firstRun --restart unless-stopped --name onedrive -v onedrive_conf:/onedrive/conf -v "${onedriveDir}:/onedrive/data" driveone/onedrive:latest
```
## Supported Docker Environment Variables
| Variable | Purpose | Sample Value |
| ---------------- | --------------------------------------------------- |:--------------------------------------------------------------------------------------------------------------------------------:|
## Environment Variables
| Variable | Purpose | Sample Value |
| ---------------- | --------------------------------------------------- |:-------------:|
| <B>ONEDRIVE_UID</B> | UserID (UID) to run as | 1000 |
| <B>ONEDRIVE_GID</B> | GroupID (GID) to run as | 1000 |
| <B>ONEDRIVE_VERBOSE</B> | Controls "--verbose" switch on onedrive sync. Default is 0 | 1 |
@ -266,56 +155,32 @@ docker run $firstRun --restart unless-stopped --name onedrive -v onedrive_conf:/
| <B>ONEDRIVE_DEBUG_HTTPS</B> | Controls "--debug-https" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_RESYNC</B> | Controls "--resync" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_DOWNLOADONLY</B> | Controls "--download-only" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_UPLOADONLY</B> | Controls "--upload-only" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_NOREMOTEDELETE</B> | Controls "--no-remote-delete" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_LOGOUT</B> | Controls "--logout" switch. Default is 0 | 1 |
| <B>ONEDRIVE_REAUTH</B> | Controls "--reauth" switch. Default is 0 | 1 |
| <B>ONEDRIVE_AUTHFILES</B> | Controls "--auth-files" option. Default is "" | "authUrl:responseUrl" |
| <B>ONEDRIVE_AUTHRESPONSE</B> | Controls "--auth-response" option. Default is "" | See [here](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#authorize-the-application-with-your-onedrive-account) |
| <B>ONEDRIVE_DISPLAY_CONFIG</B> | Controls "--display-running-config" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_SINGLE_DIRECTORY</B> | Controls "--single-directory" option. Default = "" | "mydir" |
### Environment Variables Usage Examples
### Usage Examples
**Verbose Output:**
```bash
docker container run -e ONEDRIVE_VERBOSE=1 -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" driveone/onedrive:edge
docker container run -e ONEDRIVE_VERBOSE=1 -v onedrive_conf:/onedrive/conf -v "${onedriveDir}:/onedrive/data" driveone/onedrive:latest
```
**Debug Output:**
```bash
docker container run -e ONEDRIVE_DEBUG=1 -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" driveone/onedrive:edge
docker container run -e ONEDRIVE_DEBUG=1 -v onedrive_conf:/onedrive/conf -v "${onedriveDir}:/onedrive/data" driveone/onedrive:latest
```
**Perform a --resync:**
```bash
docker container run -e ONEDRIVE_RESYNC=1 -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" driveone/onedrive:edge
docker container run -e ONEDRIVE_RESYNC=1 -v onedrive_conf:/onedrive/conf -v "${onedriveDir}:/onedrive/data" driveone/onedrive:latest
```
**Perform a --resync and --verbose:**
```bash
docker container run -e ONEDRIVE_RESYNC=1 -e ONEDRIVE_VERBOSE=1 -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" driveone/onedrive:edge
```
**Perform a --logout and re-authenticate:**
```bash
docker container run -it -e ONEDRIVE_LOGOUT=1 -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" driveone/onedrive:edge
docker container run -e ONEDRIVE_RESYNC=1 -e ONEDRIVE_VERBOSE=1 -v onedrive_conf:/onedrive/conf -v "${onedriveDir}:/onedrive/data" driveone/onedrive:latest
```
## Building a custom Docker image
## Build instructions
### Build Environment Requirements
* Build environment must have at least 1GB of memory & 2GB swap space
You can validate your build environment memory status with the following command:
```text
cat /proc/meminfo | grep -E 'MemFree|Swap'
```
This should result in the following similar output:
```text
MemFree: 3704644 kB
SwapCached: 0 kB
SwapTotal: 8117244 kB
SwapFree: 8117244 kB
```
If you do not have enough swap space, you can use the following script to dynamically allocate a swapfile for building the Docker container:
There are 2 ways to satisfy this requirement:
* Modify the file `/etc/dphys-swapfile` and edit the `CONF_SWAPSIZE`, for example: `CONF_SWAPSIZE=2024`. A reboot is required to make this change effective.
* Dynamically allocate a swapfile for building:
```bash
cd /var
sudo fallocate -l 1.5G swapfile
@ -330,67 +195,24 @@ swapon -s
free -h
```
If you are running a Raspberry Pi, you will need to edit your system configuration to increase your swapfile:
* Modify the file `/etc/dphys-swapfile` and edit the `CONF_SWAPSIZE`, for example: `CONF_SWAPSIZE=2048`.
A reboot of your Raspberry Pi is required to make this change effective.
### Building and running a custom Docker image
You can also build your own image instead of pulling the one from [hub.docker.com](https://hub.docker.com/r/driveone/onedrive):
### Building the Docker image
You can also build your own image instead of pulling the one from Docker Hub:
```bash
git clone https://github.com/abraunegg/onedrive
cd onedrive
docker build . -t local-onedrive -f contrib/docker/Dockerfile
docker container run -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" local-onedrive:latest
```
There are alternate, smaller images available by using `Dockerfile-debian` or `Dockerfile-alpine`. These [multi-stage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/) Dockerfiles require Docker version at least 17.05.
There are alternate, smaller images available by building Dockerfile-stretch or Dockerfile-alpine. These [multi-stage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/) Dockerfiles require Docker version at least 17.05.
### How to build and run a custom Docker image based on Debian
``` bash
docker build . -t local-onedrive-debian -f contrib/docker/Dockerfile-debian
docker container run -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" local-onedrive-debian:latest
docker build . -t local-onedrive-stretch -f contrib/docker/Dockerfile-stretch
```
or
### How to build and run a custom Docker image based on Alpine Linux
``` bash
docker build . -t local-onedrive-alpine -f contrib/docker/Dockerfile-alpine
docker container run -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" local-onedrive-alpine:latest
```
### How to build and run a custom Docker image for ARMHF (Raspberry Pi)
Compatible with:
* Raspberry Pi
* Raspberry Pi 2
* Raspberry Pi Zero
* Raspberry Pi 3
* Raspberry Pi 4
``` bash
docker build . -t local-onedrive-armhf -f contrib/docker/Dockerfile-debian
docker container run -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" local-onedrive-armhf:latest
```
### How to build and run a custom Docker image for AARCH64 Platforms
``` bash
docker build . -t local-onedrive-aarch64 -f contrib/docker/Dockerfile-debian
docker container run -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" local-onedrive-aarch64:latest
```
### How to support double-byte languages
In some geographic regions, you may need to change and/or update the locale specification of the Docker container to better support the local language used for your local filesystem. To do this, follow the example below:
```
FROM driveone/onedrive
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y locales
RUN echo "ja_JP.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen ja_JP.UTF-8 && \
dpkg-reconfigure locales && \
/usr/sbin/update-locale LANG=ja_JP.UTF-8
ENV LC_ALL ja_JP.UTF-8
```
The above example changes the Docker container to support Japanese. To support your local language, change `ja_JP.UTF-8` to the required entry.
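A hedged follow-up on how such a derived image would typically be built and run (the file name and image tag below are assumptions):
```bash
# Build a locale-enabled derivative image from the snippet above (illustrative)
docker build . -t local-onedrive-ja -f Dockerfile-locale
docker container run -v onedrive_conf:/onedrive/conf -v "${ONEDRIVE_DATA_DIR}:/onedrive/data" local-onedrive-ja:latest
```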

View file

@ -1,34 +1,20 @@
# Installing or Upgrading using Distribution Packages or Building the OneDrive Client for Linux from source
# Building and Installing the OneDrive Free Client
## Installing or Upgrading using Distribution Packages
This project has been packaged for the following Linux distributions as per below. The current client release is: [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases)
## Linux Packages
This project has been packaged for the following Linux distributions:
Only the current release version or greater is supported. Earlier versions are not supported and should not be installed or used.
* Arch Linux, available from AUR as [onedrive-abraunegg](https://aur.archlinux.org/packages/onedrive-abraunegg/)
* Debian, available from the package repository as [onedrive](https://packages.debian.org/sid/net/onedrive)
* Fedora, available via package repositories as [onedrive](https://koji.fedoraproject.org/koji/packageinfo?packageID=26044)
* Gentoo, available via portage overlay as [onedrive](https://gpo.zugaina.org/net-misc/onedrive)
* NixOS, use package `onedrive` either by adding it to `configuration.nix` or by using the command `nix-env -iA <channel name>.onedrive`. This does not install a service. To install a service, use unstable channel (will stabilize in 20.09) and add `services.onedrive.enable=true` in `configuration.nix`. You can also add a custom package using the `services.onedrive.package` option (recommended since package lags upstream). Enabling the service installs a default package too (based on the channel). You can also add multiple onedrive accounts trivially; see [documentation](https://github.com/NixOS/nixpkgs/pull/77734#issuecomment-575874225).
* openSUSE, available for Tumbleweed, Leap 15.2, Leap 15.1 as [onedrive](https://software.opensuse.org/package/onedrive)
* Slackware, available from the slackbuilds.org repository as [onedrive](https://slackbuilds.org/repository/14.2/network/onedrive/)
* Solus, available from the package repository as [onedrive](https://dev.getsol.us/search/query/FB7PIf1jG9Z9/#R)
* Ubuntu, available as a package from the following PPA [onedrive](https://launchpad.net/~yann1ck/+archive/ubuntu/onedrive)
#### Important Note:
Distribution packages may be of an older release when compared to the latest release that is [available](https://github.com/abraunegg/onedrive/releases). If any package version indicator below is 'red' for your distribution, it is recommended that you build from source. Do not install the software from the available distribution package. If a package is out of date, please contact the package maintainer for resolution.
| Distribution | Package Name & Package Link | &nbsp;&nbsp;PKG_Version&nbsp;&nbsp; | &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 | Extra Details |
|---------------------------------|------------------------------------------------------------------------------|:---------------:|:----:|:------:|:-----:|:-------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Alpine Linux | [onedrive](https://pkgs.alpinelinux.org/packages?name=onedrive&branch=edge) |<a href="https://pkgs.alpinelinux.org/packages?name=onedrive&branch=edge"><img src="https://repology.org/badge/version-for-repo/alpine_edge/onedrive.svg?header=" alt="Alpine Linux Edge package" width="46" height="20"></a>|❌|✔|❌|✔ | |
| Arch Linux<br><br>Manjaro Linux | [onedrive-abraunegg](https://aur.archlinux.org/packages/onedrive-abraunegg/) |<a href="https://aur.archlinux.org/packages/onedrive-abraunegg"><img src="https://repology.org/badge/version-for-repo/aur/onedrive-abraunegg.svg?header=" alt="AUR package" width="46" height="20"></a>|✔|✔|✔|✔ | Install via: `pamac build onedrive-abraunegg` from the Arch Linux User Repository (AUR)<br><br>**Note:** If asked regarding a provider for 'd-runtime' and 'd-compiler', select 'liblphobos' and 'ldc'<br><br>**Note:** System must have at least 1GB of memory & 1GB swap space
| Debian 11 | [onedrive](https://packages.debian.org/bullseye/source/onedrive) |<a href="https://packages.debian.org/bullseye/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_11/onedrive.svg?header=" alt="Debian 11 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories<br><br>It is recommended that for Debian 11 that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Debian 12 | [onedrive](https://packages.debian.org/bookworm/source/onedrive) |<a href="https://packages.debian.org/bookworm/source/onedrive"><img src="https://repology.org/badge/version-for-repo/debian_12/onedrive.svg?header=" alt="Debian 12 package" width="46" height="20"></a>|✔|✔|✔|✔| **Note:** Do not install from Debian Package Repositories<br><br>It is recommended that for Debian 12 that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Fedora | [onedrive](https://koji.fedoraproject.org/koji/packageinfo?packageID=26044) |<a href="https://koji.fedoraproject.org/koji/packageinfo?packageID=26044"><img src="https://repology.org/badge/version-for-repo/fedora_rawhide/onedrive.svg?header=" alt="Fedora Rawhide package" width="46" height="20"></a>|✔|✔|✔|✔| |
| Gentoo | [onedrive](https://gpo.zugaina.org/net-misc/onedrive) | No API Available |✔|✔|❌|❌| |
| Homebrew | [onedrive](https://formulae.brew.sh/formula/onedrive) | <a href="https://formulae.brew.sh/formula/onedrive"><img src="https://repology.org/badge/version-for-repo/homebrew/onedrive.svg?header=" alt="Homebrew package" width="46" height="20"></a> |❌|✔|❌|❌| |
| Linux Mint 20.x | [onedrive](https://community.linuxmint.com/software/view/onedrive) |<a href="https://community.linuxmint.com/software/view/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_20_04/onedrive.svg?header=" alt="Ubuntu 20.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Linux Mint Repositories<br><br>It is recommended that for Linux Mint that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Linux Mint 21.x | [onedrive](https://community.linuxmint.com/software/view/onedrive) |<a href="https://community.linuxmint.com/software/view/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_22_04/onedrive.svg?header=" alt="Ubuntu 22.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Linux Mint Repositories<br><br>It is recommended that for Linux Mint that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| NixOS | [onedrive](https://search.nixos.org/packages?channel=20.09&from=0&size=50&sort=relevance&query=onedrive)|<a href="https://search.nixos.org/packages?channel=20.09&from=0&size=50&sort=relevance&query=onedrive"><img src="https://repology.org/badge/version-for-repo/nix_unstable/onedrive.svg?header=" alt="nixpkgs unstable package" width="46" height="20"></a>|❌|✔|❌|❌| Use package `onedrive` either by adding it to `configuration.nix` or by using the command `nix-env -iA <channel name>.onedrive`. This does not install a service. To install a service, use unstable channel (will stabilize in 20.09) and add `services.onedrive.enable=true` in `configuration.nix`. You can also add a custom package using the `services.onedrive.package` option (recommended since package lags upstream). Enabling the service installs a default package too (based on the channel). You can also add multiple onedrive accounts trivially, see [documentation](https://github.com/NixOS/nixpkgs/pull/77734#issuecomment-575874225). |
| OpenSuSE | [onedrive](https://software.opensuse.org/package/onedrive) |<a href="https://software.opensuse.org/package/onedrive"><img src="https://repology.org/badge/version-for-repo/opensuse_network_tumbleweed/onedrive.svg?header=" alt="openSUSE Tumbleweed package" width="46" height="20"></a>|✔|✔|❌|❌| |
| OpenSuSE Build Service | [onedrive](https://build.opensuse.org/package/show/home:npreining:debian-ubuntu-onedrive/onedrive) | No API Available |✔|✔|✔|✔| Package Build Service for Debian and Ubuntu |
| Raspbian | [onedrive](https://archive.raspbian.org/raspbian/pool/main/o/onedrive/) |<a href="https://archive.raspbian.org/raspbian/pool/main/o/onedrive/"><img src="https://repology.org/badge/version-for-repo/raspbian_stable/onedrive.svg?header=" alt="Raspbian Stable package" width="46" height="20"></a> |❌|❌|✔|✔| **Note:** Do not install from Raspbian Package Repositories<br><br>It is recommended that for Raspbian that you install from OpenSuSE Build Service using the Debian Package Install [Instructions](ubuntu-package-install.md) |
| Slackware | [onedrive](https://slackbuilds.org/result/?search=onedrive&sv=) |<a href="https://slackbuilds.org/result/?search=onedrive&sv="><img src="https://repology.org/badge/version-for-repo/slackbuilds/onedrive.svg?header=" alt="SlackBuilds package" width="46" height="20"></a>|✔|✔|❌|❌| |
| Solus | [onedrive](https://dev.getsol.us/search/query/FB7PIf1jG9Z9/#R) |<a href="https://dev.getsol.us/search/query/FB7PIf1jG9Z9/#R"><img src="https://repology.org/badge/version-for-repo/solus/onedrive.svg?header=" alt="Solus package" width="46" height="20"></a>|✔|✔|❌|❌| |
| Ubuntu 20.04 | [onedrive](https://packages.ubuntu.com/focal/onedrive) |<a href="https://packages.ubuntu.com/focal/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_20_04/onedrive.svg?header=" alt="Ubuntu 20.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe<br><br>It is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 22.04 | [onedrive](https://packages.ubuntu.com/jammy/onedrive) |<a href="https://packages.ubuntu.com/jammy/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_22_04/onedrive.svg?header=" alt="Ubuntu 22.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe<br><br>It is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Ubuntu 23.04 | [onedrive](https://packages.ubuntu.com/lunar/onedrive) |<a href="https://packages.ubuntu.com/lunar/onedrive"><img src="https://repology.org/badge/version-for-repo/ubuntu_23_04/onedrive.svg?header=" alt="Ubuntu 23.04 package" width="46" height="20"></a> |❌|✔|✔|✔| **Note:** Do not install from Ubuntu Universe<br><br>It is recommended that for Ubuntu that you install from OpenSuSE Build Service using the Ubuntu Package Install [Instructions](ubuntu-package-install.md) |
| Void Linux | [onedrive](https://voidlinux.org/packages/?arch=x86_64&q=onedrive) |<a href="https://voidlinux.org/packages/?arch=x86_64&q=onedrive"><img src="https://repology.org/badge/version-for-repo/void_x86_64/onedrive.svg?header=" alt="Void Linux x86_64 package" width="46" height="20"></a>|✔|✔|❌|❌| |
Distribution packages may be of an older release when compared to the latest release that is [available](https://github.com/abraunegg/onedrive/releases). If a package is out of date, please contact the package maintainer for resolution.
#### Important information for all Ubuntu and Ubuntu based distribution users:
This information is specifically for the following platforms and distributions:
@ -38,18 +24,17 @@ This information is specifically for the following platforms and distributions:
* POP OS
* Peppermint OS
Whilst there are [onedrive](https://packages.ubuntu.com/search?keywords=onedrive&searchon=names&suite=all&section=all) Universe packages available for Ubuntu, do not install 'onedrive' from these Universe packages. The default Universe packages are out-of-date and are not supported and should not be used. If you wish to use a package, it is highly recommended that you utilise the [OpenSuSE Build Service](ubuntu-package-install.md) to install packages for these platforms. If the OpenSuSE Build Service does not cater for your version, your only option is to build from source.
Whilst there are [onedrive](https://packages.ubuntu.com/search?keywords=onedrive&searchon=names&suite=all&section=all) packages available for Ubuntu, do not install 'onedrive' from these packages via `apt install onedrive`. These packages are out-of-date and should not be used. If you wish to use a package, it is highly recommended that you utilise the Ubuntu PPA listed above. If the Ubuntu PPA does not support your distribution or version, your only option is to compile from source using the relevant Ubuntu instructions below.
If you wish to change this situation so that you can just use the Universe packages via 'apt install onedrive', consider becoming the Ubuntu package maintainer and contributing back to your community.
If you wish to change this situation so that you can just use 'apt install onedrive', consider becoming the Ubuntu package maintainer and contribute back to the community.
## Building from Source - High Level Requirements
## Build Requirements
* Build environment must have at least 1GB of memory & 1GB swap space
* Install the required distribution package dependencies
* [libcurl](http://curl.haxx.se/libcurl/)
* [SQLite 3](https://www.sqlite.org/) >= 3.7.15
* [Digital Mars D Compiler (DMD)](http://dlang.org/download.html) or [LDC the LLVM-based D Compiler](https://github.com/ldc-developers/ldc)
**Note:** DMD version >= 2.088.0 or LDC version >= 1.18.0 is required to compile this application
**Note:** DMD version >= 2.083.1 or LDC version >= 1.12.0 is required to compile this application
### Example for installing DMD Compiler
```text
@ -62,16 +47,119 @@ curl -fsS https://dlang.org/install.sh | bash -s ldc
```
## Distribution Package Dependencies
### Dependencies: Ubuntu 16.x
Ubuntu Linux 16.x LTS reached the end of its five-year LTS window on April 30th 2021 and is no longer supported.
### Dependencies: Ubuntu 16.x - i386 / i686 (less than 1GB Memory)
**Important:** Build environment must have at least 512MB of memory & 1GB swap space
### Dependencies: Ubuntu 18.x / Lubuntu 18.x
Ubuntu Linux 18.x LTS reached the end of its five-year LTS window on May 31st 2023 and is no longer supported.
**Important:** Only use this method if you have <1GB of physical memory.
### Dependencies: Debian 9
Debian 9 reached the end of its five-year support window on June 30th 2022 and is no longer supported.
**Note:** Peppermint 7 was validated with the DMD compiler on the following i386 / i686 platform:
```text
DISTRIB_ID=Peppermint
DISTRIB_RELEASE=7
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Peppermint 7 Seven"
```
### Dependencies: Ubuntu 20.x -> Ubuntu 23.x / Debian 10 -> Debian 12 - x86_64
First install development dependencies as per below:
```text
sudo apt install build-essential
sudo apt install libcurl4-openssl-dev
sudo apt install libsqlite3-dev
sudo apt install pkg-config
sudo apt install git
sudo apt install curl
```
For notifications the following is also necessary:
```text
sudo apt install libnotify-dev
```
Second, install the DMD compiler as per below:
```text
sudo wget http://master.dl.sourceforge.net/project/d-apt/files/d-apt.list -O /etc/apt/sources.list.d/d-apt.list
sudo apt-get update && sudo apt-get -y --allow-unauthenticated install --reinstall d-apt-keyring
sudo apt-get update && sudo apt-get install dmd-compiler dub
```
### Dependencies: Ubuntu 16.x - i386 / i686 / x86_64 (1GB Memory or more)
**Note:** Ubuntu 16.x validated with the DMD compiler on the following Ubuntu i386 / i686 platform:
```text
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
```
First install development dependencies as per below:
```text
sudo apt install build-essential
sudo apt install libcurl4-openssl-dev
sudo apt install libsqlite3-dev
sudo apt install pkg-config
sudo apt install git
sudo apt install curl
```
For notifications the following is also necessary:
```text
sudo apt install libnotify-dev
```
Second, install the DMD compiler as per below:
```text
curl -fsS https://dlang.org/install.sh | bash -s dmd
```
### Dependencies: Ubuntu 18.x / Lubuntu 18.x / Debian 9 - i386 / i686
These dependencies are also applicable for all Ubuntu based distributions such as:
* Lubuntu
* Linux Mint
* POP OS
* Peppermint OS
**Important:** The DMD compiler cannot be used in its default configuration on Ubuntu 18.x / Lubuntu 18.x / Debian 9 i386 / i686 architectures due to an issue in the Ubuntu / Debian linking process. See [https://issues.dlang.org/show_bug.cgi?id=19116](https://issues.dlang.org/show_bug.cgi?id=19116) for further details.
**Note:** Ubuntu 18.x validated with the DMD compiler on the following Ubuntu i386 / i686 platform:
```text
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
```
**Note:** Lubuntu 18.x validated with the DMD compiler on the following Lubuntu i386 / i686 platform:
```text
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.10
DISTRIB_CODENAME=cosmic
DISTRIB_DESCRIPTION="Ubuntu 18.10"
```
**Note:** Debian 9 validated with the DMD compiler on the following Debian i386 / i686 platform:
```text
cat /etc/debian_version
9.11
```
First install development dependencies as per below:
```text
sudo apt install build-essential
sudo apt install libcurl4-openssl-dev
sudo apt install libsqlite3-dev
sudo apt install pkg-config
sudo apt install git
sudo apt install curl
```
For notifications the following is also necessary:
```text
sudo apt install libnotify-dev
```
Second, install the DMD compiler as per below:
```text
curl -fsS https://dlang.org/install.sh | bash -s dmd
```
Thirdly, reconfigure the default linker as per below:
```text
sudo update-alternatives --install "/usr/bin/ld" "ld" "/usr/bin/ld.gold" 20
sudo update-alternatives --install "/usr/bin/ld" "ld" "/usr/bin/ld.bfd" 10
```
### Dependencies: Ubuntu 18.x, Ubuntu 19.x, Ubuntu 20.x / Debian 9, Debian 10 - x86_64
These dependencies are also applicable for all Ubuntu based distributions such as:
* Lubuntu
* Linux Mint
@ -79,7 +167,11 @@ These dependencies are also applicable for all Ubuntu based distributions such a
* Peppermint OS
```text
sudo apt install build-essential
sudo apt install libcurl4-openssl-dev libsqlite3-dev pkg-config git curl
sudo apt install libcurl4-openssl-dev
sudo apt install libsqlite3-dev
sudo apt install pkg-config
sudo apt install git
sudo apt install curl
curl -fsS https://dlang.org/install.sh | bash -s dmd
```
For notifications the following is also necessary:
@ -88,23 +180,43 @@ sudo apt install libnotify-dev
```
### Dependencies: CentOS 6.x / RHEL 6.x
CentOS 6.x and RHEL 6.x reached End of Life status on November 30th 2020 and are no longer supported.
```text
sudo yum groupinstall 'Development Tools'
sudo yum install libcurl-devel
sudo yum install sqlite-devel
curl -fsS https://dlang.org/install.sh | bash -s dmd
```
For notifications the following is also necessary:
```text
sudo yum install libnotify-devel
```
In addition to the above requirements, the `sqlite` version used on CentOS 6.x / RHEL 6.x needs to be upgraded. Use the following instructions to update your version of `sqlite` so that it can support this client:
```text
sudo yum -y update
sudo yum -y install epel-release wget
sudo yum -y install mock
wget https://kojipkgs.fedoraproject.org//packages/sqlite/3.7.15.2/2.fc19/src/sqlite-3.7.15.2-2.fc19.src.rpm
mock --rebuild sqlite-3.7.15.2-2.fc19.src.rpm
sudo yum -y upgrade /var/lib/mock/epel-6-`arch`/result/sqlite-*
```
### Dependencies: Fedora < Version 18 / CentOS 7.x / RHEL 7.x
```text
sudo yum groupinstall 'Development Tools'
sudo yum install libcurl-devel sqlite-devel
curl -fsS https://dlang.org/install.sh | bash -s dmd-2.099.0
sudo yum install libcurl-devel
sudo yum install sqlite-devel
curl -fsS https://dlang.org/install.sh | bash -s dmd
```
For notifications the following is also necessary:
```text
sudo yum install libnotify-devel
```
### Dependencies: Fedora > Version 18 / CentOS 8.x / RHEL 8.x / RHEL 9.x
### Dependencies: Fedora > Version 18 / CentOS 8.x / RHEL 8.x
```text
sudo dnf groupinstall 'Development Tools'
sudo dnf install libcurl-devel sqlite-devel
sudo dnf install libcurl-devel
sudo dnf install sqlite-devel
curl -fsS https://dlang.org/install.sh | bash -s dmd
```
For notifications the following is also necessary:
@ -112,28 +224,38 @@ For notifications the following is also necessary:
sudo dnf install libnotify-devel
```
### Dependencies: Arch Linux & Manjaro Linux
### Dependencies: Arch Linux
```text
sudo pacman -S make pkg-config curl sqlite ldc
sudo pacman -S curl sqlite dmd
```
For notifications the following is also necessary:
```text
sudo pacman -S libnotify
```
### Dependencies: Raspbian (ARMHF) and Ubuntu 22.x / Debian 11 / Debian 12 / Raspbian (ARM64)
**Note:** The minimum LDC compiler version required to compile this application is now 1.18.0, which is not available for Debian Buster or distributions based on Debian Buster. You are advised to first upgrade your platform distribution to one that is based on Debian Bullseye (Debian 11) or later.
These instructions were validated using:
* `Linux raspberrypi 5.10.92-v8+ #1514 SMP PREEMPT Mon Jan 17 17:39:38 GMT 2022 aarch64` (2022-01-28-raspios-bullseye-armhf-lite) using Raspberry Pi 3B (revision 1.2)
* `Linux raspberrypi 5.10.92-v8+ #1514 SMP PREEMPT Mon Jan 17 17:39:38 GMT 2022 aarch64` (2022-01-28-raspios-bullseye-arm64-lite) using Raspberry Pi 3B (revision 1.2)
* `Linux ubuntu 5.15.0-1005-raspi #5-Ubuntu SMP PREEMPT Mon Apr 4 12:21:48 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux` (ubuntu-22.04-preinstalled-server-arm64+raspi) using Raspberry Pi 3B (revision 1.2)
### Dependencies: Raspbian (ARMHF)
**Note:** Build environment must have at least 1GB of memory & 1GB swap space. Check with `swapon`.
```text
sudo apt install build-essential
sudo apt install libcurl4-openssl-dev libsqlite3-dev pkg-config git curl ldc
sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libsqlite3-dev
sudo apt-get install libxml2
sudo apt-get install pkg-config
wget https://github.com/ldc-developers/ldc/releases/download/v1.16.0/ldc2-1.16.0-linux-armhf.tar.xz
tar -xvf ldc2-1.16.0-linux-armhf.tar.xz
```
For notifications the following is also necessary:
```text
sudo apt install libnotify-dev
```
### Dependencies: Debian (ARM64)
```text
sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libsqlite3-dev
sudo apt-get install libxml2
sudo apt-get install pkg-config
wget https://github.com/ldc-developers/ldc/releases/download/v1.16.0/ldc2-1.16.0-linux-aarch64.tar.xz
tar -xvf ldc2-1.16.0-linux-aarch64.tar.xz
```
For notifications the following is also necessary:
```text
@ -154,30 +276,8 @@ sudo emerge x11-libs/libnotify
### Dependencies: OpenSuSE Leap 15.0
```text
sudo zypper addrepo https://download.opensuse.org/repositories/devel:languages:D/openSUSE_Leap_15.0/devel:languages:D.repo
sudo zypper refresh
sudo zypper install gcc git libcurl-devel sqlite3-devel dmd phobos-devel phobos-devel-static
```
For notifications the following is also necessary:
```text
sudo zypper install libnotify-devel
```
### Dependencies: OpenSuSE Leap 15.1
```text
sudo zypper addrepo https://download.opensuse.org/repositories/devel:languages:D/openSUSE_Leap_15.1/devel:languages:D.repo
sudo zypper refresh
sudo zypper install gcc git libcurl-devel sqlite3-devel dmd phobos-devel phobos-devel-static
```
For notifications the following is also necessary:
```text
sudo zypper install libnotify-devel
```
### Dependencies: OpenSuSE Leap 15.2
```text
sudo zypper refresh
sudo zypper install gcc git libcurl-devel sqlite3-devel dmd phobos-devel phobos-devel-static
sudo zypper addrepo --check --refresh --name "D" http://download.opensuse.org/repositories/devel:/languages:/D/openSUSE_Leap_15.0/devel:languages:D.repo
sudo zypper install git libcurl-devel sqlite3-devel D:dmd D:libphobos2-0_81 D:phobos-devel D:phobos-devel-static
```
For notifications the following is also necessary:
```text
@ -185,22 +285,16 @@ sudo zypper install libnotify-devel
```
## Compilation & Installation
### High Level Steps
1. Install the platform dependencies for your Linux OS
2. Activate your DMD or LDC compiler
3. Clone the GitHub repository, run configure and make, then install
4. Deactivate your DMD or LDC compiler
### Building using DMD Reference Compiler
Before cloning and compiling, if you have installed DMD via curl for your OS, you will need to activate DMD as per example below:
```text
Run `source ~/dlang/dmd-2.088.0/activate` in your shell to use dmd-2.088.0.
Run `source ~/dlang/dmd-2.081.1/activate` in your shell to use dmd-2.081.1.
This will setup PATH, LIBRARY_PATH, LD_LIBRARY_PATH, DMD, DC, and PS1.
Run `deactivate` later on to restore your environment.
```
Without performing this step, the compilation process will fail.
**Note:** Depending on your DMD version, substitute `2.088.0` above with your DMD version that is installed.
**Note:** Depending on your DMD version, substitute `2.081.1` above with your DMD version that is installed.
```text
git clone https://github.com/abraunegg/onedrive.git
@ -230,46 +324,33 @@ as far as possible automatically, but can be overridden by passing
`--with-fish-completion-dir=<DIR>` to `configure`.
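For illustration, the override can be passed directly to `configure`; the directory shown below is only an example path for fish vendor completions and should be adjusted to suit your system:
```text
./configure --with-fish-completion-dir=/usr/share/fish/vendor_completions.d
```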
### Building using a different compiler (for example [LDC](https://wiki.dlang.org/LDC))
#### ARMHF Architecture (Raspbian) and ARM64 Architecture (Ubuntu 22.x / Debian 11 / Raspbian)
**Note:** The minimum LDC compiler version required to compile this application is now 1.18.0, which is not available for Debian Buster or distributions based on Debian Buster. You are advised to first upgrade your platform distribution to one that is based on Debian Bullseye (Debian 11) or later.
#### ARMHF Architecture (Raspbian etc)
**Note:** Build environment must have at least 1GB of memory & 1GB swap space. Check with `swapon`.
```text
git clone https://github.com/abraunegg/onedrive.git
cd onedrive
./configure DC=/usr/bin/ldmd2
./configure DC=~/ldc2-1.16.0-linux-armhf/bin/ldmd2
make clean; make
sudo make install
```
## Upgrading the client
If you have installed the client from a distribution package, the client will be updated to the new application version when the package maintainer updates the distribution package and you perform your package update.
#### ARM64 Architecture
**Note:** Build environment must have at least 1GB of memory & 1GB swap space. Check with `swapon`
```text
git clone https://github.com/abraunegg/onedrive.git
cd onedrive
./configure DC=~/ldc2-1.16.0-linux-aarch64/bin/ldmd2
make clean; make
sudo make install
```
If you have built the client from source, to upgrade your client, it is recommended that you first uninstall your existing 'onedrive' binary (see below), then re-install the client by re-cloning, re-compiling and re-installing the client again to install the new version.
**Note:** Following the uninstall process will remove all client components including *all* systemd files, including any custom files created for specific access such as SharePoint Libraries.
You can optionally choose not to perform this uninstallation step, and simply re-install the client by re-cloning, re-compiling and re-installing the client again - however the risk here is that you end up with two onedrive client binaries on your system, and your system search path preferences will determine which binary is used.
**Important:** Before performing any upgrade, it is highly recommended for you to stop any running systemd service if applicable to ensure that these services are restarted using the updated client version.
Post re-install, to confirm that you have the new version of the client installed, use `onedrive --version` to determine the client version that is now installed.
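A minimal sketch of the source-based upgrade flow described above, assuming a systemd user service named 'onedrive' and that your original repository clone is still available (the service name and paths are examples only):
```text
# stop any running systemd service before upgrading
systemctl --user stop onedrive
# from within your existing repository clone, remove the currently installed client
sudo make uninstall
# fetch the latest source, rebuild and reinstall
git pull
./configure
make clean; make
sudo make install
# confirm the installed version and restart the service
onedrive --version
systemctl --user start onedrive
```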
## Uninstalling the client
### Uninstalling the client if installed from distribution package
Follow your distribution documentation to uninstall the package that you installed
### Uninstalling the client if installed and built from source
From within your GitHub repository clone, perform the following to remove the 'onedrive' binary:
## Uninstall
```text
sudo make uninstall
```
If you are not upgrading your client, to remove your application state and configuration, perform the following additional step:
```
# delete the application state
rm -rf ~/.config/onedrive
```
**Note:** If you are using the `--confdir` option, substitute `~/.config/onedrive` for the correct directory storing your client configuration.
If you are using the `--confdir` option, substitute `~/.config/onedrive` above for that directory.
If you want to just delete the application key, but keep the items database:
```text

46
docs/Office365.md Normal file
View file

@ -0,0 +1,46 @@
# How to configure OneDrive SharePoint Shared Library sync
Syncing a OneDrive SharePoint library requires additional configuration for your 'onedrive' client:
1. Login to OneDrive and under 'Shared Libraries' obtain the shared library name
2. Query that shared library name using the client to obtain the required configuration details
3. Configure the client's config file with the required 'drive_id'
4. Test the configuration using '--dry-run'
5. Sync the SharePoint Library as required
## Listing available OneDrive SharePoint Libraries
1. Login to the OneDrive web interface and determine which shared library you wish to configure the client for:
![shared_libraries](./images/SharedLibraries.jpg)
## Query that shared library name using the client to obtain the required configuration details
2. Run the following command using the 'onedrive' client
```text
onedrive --get-O365-drive-id '<your library name>'
```
3. This will return the following:
```text
Configuration file successfully loaded
Configuring Global Azure AD Endpoints
Initializing the Synchronization Engine ...
Office 365 Library Name Query: <your library name>
SiteName: <your library name>
drive_id: b!6H_y8B...xU5
URL: <your site URL>
```
## Configure the client's config file with the required 'drive_id'
4. Once you have obtained the 'drive_id' above, add to your 'onedrive' configuration file (`~/.config/onedrive/config`) the following:
```text
drive_id = "insert the drive_id value from above here"
```
The OneDrive client will now be configured to sync this SharePoint shared library to your local system.
**Note:** After changing `drive_id`, you must perform a full re-synchronization by adding `--resync` to your existing command line.
## Test the configuration using '--dry-run'
5. Test your new configuration using the `--dry-run` option to validate the new configuration
## Sync the SharePoint Library as required
6. Sync the SharePoint Library to your system with either `--synchronize` or `--monitor` operations
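As a minimal sketch of steps 5 and 6 above (the `--resync` flag is only included because `drive_id` was changed in the configuration file):
```text
onedrive --synchronize --verbose --dry-run
onedrive --synchronize --verbose --resync
onedrive --monitor --verbose
```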
# How to configure multiple OneDrive SharePoint Shared Library sync
Refer to [./advanced-usage.md](advanced-usage.md) for configuration assistance.

View file

@ -1,360 +0,0 @@
# Run the OneDrive Client for Linux under Podman
This client can be run as a Podman container, with 3 available container base options for you to choose from:
| Container Base | Docker Tag | Description | i686 | x86_64 | ARMHF | AARCH64 |
|----------------|-------------|----------------------------------------------------------------|:------:|:------:|:-----:|:-------:|
| Alpine Linux | edge-alpine | Podman container based on Alpine 3.18 using 'master' |❌|✔|❌|✔|
| Alpine Linux | alpine | Podman container based on Alpine 3.18 using latest release |❌|✔|❌|✔|
| Debian | debian | Podman container based on Debian Stable using latest release |✔|✔|✔|✔|
| Debian | edge | Podman container based on Debian Stable using 'master' |✔|✔|✔|✔|
| Debian | edge-debian | Podman container based on Debian Stable using 'master' |✔|✔|✔|✔|
| Debian | latest | Podman container based on Debian Stable using latest release |✔|✔|✔|✔|
| Fedora | edge-fedora | Podman container based on Fedora 38 using 'master' |❌|✔|❌|✔|
| Fedora | fedora | Podman container based on Fedora 38 using latest release |❌|✔|❌|✔|
These containers offer a simple monitoring-mode service for the OneDrive Client for Linux.
The instructions below have been validated on:
* Fedora 38
The instructions below will utilise the 'edge' tag; however, this can be substituted for any of the other docker tags such as 'latest' from the table above if desired.
The 'edge' Docker Container will align more closely with all documentation and features, whereas 'latest' is the release version from a static point in time. The 'latest' tag may therefore contain bugs and/or issues that have since been fixed, and those fixes are contained in 'edge'.
Additionally there are specific version release tags for each release. Refer to https://hub.docker.com/r/driveone/onedrive/tags for any other Docker tags you may be interested in.
**Note:** The below instructions for podman have been tested and validated when logging into the system as an unprivileged user (non 'root' user).
## High Level Configuration Steps
1. Install 'podman' as per your distribution platform's instructions if not already installed.
2. Disable 'SELinux' as per your distribution platform's instructions
3. Test 'podman' by running a test container
4. Prepare the required podman volumes to store the configuration and data
5. Run the 'onedrive' container and perform authorisation
6. Running the 'onedrive' container under 'podman'
## Configuration Steps
### 1. Install 'podman' on your platform
Install 'podman' as per your distribution platform's instructions if not already installed.
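For example only (these instructions were validated on Fedora 38, where 'podman' is available from the distribution repositories; use your own distribution's package manager as appropriate):
```bash
sudo dnf install -y podman
podman --version
```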
### 2. Disable SELinux on your platform
In order to run the Docker container under 'podman', SELinux must be disabled. Without doing this, when the application is authenticated in the steps below, the following error will be presented:
```text
ERROR: The local file system returned an error with the following message:
Error Message: /onedrive/conf/refresh_token: Permission denied
The database cannot be opened. Please check the permissions of ~/.config/onedrive/items.sqlite3
```
The only known work-around for the above problem at present is to disable SELinux. Please refer to your distribution platform's instructions on how to perform this step.
* Fedora: https://docs.fedoraproject.org/en-US/quick-docs/selinux-changing-states-and-modes/#_disabling_selinux
* Red Hat Enterprise Linux: https://access.redhat.com/solutions/3176
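As a rough sketch only, for Fedora / Red Hat style systems (the linked documentation above is authoritative), SELinux is typically disabled by editing `/etc/selinux/config` and rebooting. On recent Fedora releases, fully disabling SELinux may additionally require booting with the `selinux=0` kernel parameter as per the Fedora documentation linked above.
```text
# /etc/selinux/config
SELINUX=disabled
```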
After disabling SELinux and rebooting your system, confirm that `getenforce` returns `Disabled`:
```text
$ getenforce
Disabled
```
If you are still experiencing permission issues despite disabling SELinux, please read https://www.redhat.com/sysadmin/container-permission-denied-errors
### 3. Test 'podman' on your platform
Test that 'podman' is operational for your 'non-root' user, as per below:
```bash
[alex@fedora38-podman ~]$ podman pull fedora
Resolved "fedora" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull registry.fedoraproject.org/fedora:latest...
Getting image source signatures
Copying blob b30887322388 done |
Copying config a1cd3cbf8a done |
Writing manifest to image destination
a1cd3cbf8adaa422629f2fcdc629fd9297138910a467b11c66e5ddb2c2753dff
[alex@fedora38-podman ~]$ podman run fedora /bin/echo "Welcome to the Podman World"
Welcome to the Podman World
[alex@fedora38-podman ~]$
```
### 4. Configure the required podman volumes
The 'onedrive' Docker container requires 2 podman volumes to operate:
* Config Volume
* Data Volume
The first volume is the configuration volume that stores all the applicable application configuration + current runtime state. In a non-containerised environment, this normally resides in `~/.config/onedrive` - in a containerised environment this is stored in the volume tagged as `/onedrive/conf`
The second volume is the data volume, where all your data from Microsoft OneDrive is stored locally. This volume is mapped to an actual directory point on your local filesystem and this is stored in the volume tagged as `/onedrive/data`
#### 4.1 Prepare the 'config' volume
Create the 'config' volume with the following command:
```bash
podman volume create onedrive_conf
```
This will create a podman volume labeled `onedrive_conf`, where all configuration of your onedrive account will be stored. You can add a custom config file in this location at a later point in time if required.
#### 4.2 Prepare the 'data' volume
Create the 'data' volume with the following command:
```bash
podman volume create onedrive_data
```
This will create a podman volume labeled `onedrive_data` and will map to a path on your local filesystem. This is where your data from Microsoft OneDrive will be stored. Keep in mind that:
* The owner of this specified folder must not be root
* Podman will attempt to change the permissions of the volume to the user the container is configured to run as
**NOTE:** Issues occur when this target folder is a mounted folder of an external system (NAS, SMB mount, USB Drive etc) as the 'mount' itself is owned by 'root'. If this is your use case, you *must* ensure your normal user can mount your desired target without having the target mounted by 'root'. If you do not fix this, your Podman container will fail to start with the following error message:
```bash
ROOT level privileges prohibited!
```
### 5. First run of Docker container under podman and performing authorisation
The 'onedrive' client within the container first needs to be authorised with your Microsoft account. This is achieved by initially running podman in interactive mode.
Run the podman image with the commands below and make sure to change the value of `ONEDRIVE_DATA_DIR` to the actual onedrive data directory on your filesystem that you wish to use (e.g. `export ONEDRIVE_DATA_DIR="/home/abraunegg/OneDrive"`).
**Important:** The 'target' folder of `ONEDRIVE_DATA_DIR` must exist before running the podman container. The script below will create 'ONEDRIVE_DATA_DIR' so that it exists locally for the podman volume mapping to occur.
It is also a requirement that the container be run using a non-root UID and GID; you must provide a non-root UID and GID (e.g. `export ONEDRIVE_UID=1000` and `export ONEDRIVE_GID=1000`). The script below will use `id` to evaluate your system environment and use the correct values.
```bash
export ONEDRIVE_DATA_DIR="${HOME}/OneDrive"
export ONEDRIVE_UID=`id -u`
export ONEDRIVE_GID=`id -g`
mkdir -p ${ONEDRIVE_DATA_DIR}
podman run -it --name onedrive --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" \
-v onedrive_conf:/onedrive/conf:U,Z \
-v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" \
driveone/onedrive:edge
```
**Important:** In some scenarios, 'podman' sets the configuration and data directories to a different UID & GID as specified. To resolve this situation, you must run 'podman' with the `--userns=keep-id` flag to ensure 'podman' uses the UID and GID as specified. The updated script example when using `--userns=keep-id` is below:
```bash
export ONEDRIVE_DATA_DIR="${HOME}/OneDrive"
export ONEDRIVE_UID=`id -u`
export ONEDRIVE_GID=`id -g`
mkdir -p ${ONEDRIVE_DATA_DIR}
podman run -it --name onedrive --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" \
--userns=keep-id \
-v onedrive_conf:/onedrive/conf:U,Z \
-v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" \
driveone/onedrive:edge
```
**Important:** If you plan to use the 'podman' built in auto-updating of container images described in 'Systemd Service & Auto Updating' below, you must pass an additional argument to set a label during the first run. The updated script example to support auto-updating of container images is below:
```bash
export ONEDRIVE_DATA_DIR="${HOME}/OneDrive"
export ONEDRIVE_UID=`id -u`
export ONEDRIVE_GID=`id -g`
mkdir -p ${ONEDRIVE_DATA_DIR}
podman run -it --name onedrive --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" \
--userns=keep-id \
-v onedrive_conf:/onedrive/conf:U,Z \
-v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" \
-e PODMAN=1 \
--label "io.containers.autoupdate=image" \
driveone/onedrive:edge
```
When the Podman container successfully starts:
* You will be asked to open a specific link using your web browser
* Login to your Microsoft Account and give the application the permission
* After giving the permission, you will be redirected to a blank page
* Copy the URI of the blank page into the application prompt to authorise the application
Once the 'onedrive' application is authorised, the client will automatically start monitoring your `ONEDRIVE_DATA_DIR` for data changes to be uploaded to OneDrive. Files stored on OneDrive will be downloaded to this location.
If the client is working as expected, you can detach from the container with Ctrl+p, Ctrl+q.
### 6. Running the 'onedrive' container under 'podman'
#### 6.1 Check if the monitor service is running
```bash
podman ps -f name=onedrive
```
#### 6.2 Show 'onedrive' runtime logs
```bash
podman logs onedrive
```
#### 6.3 Stop running 'onedrive' container
```bash
podman stop onedrive
```
#### 6.4 Start 'onedrive' container
```bash
podman start onedrive
```
#### 6.5 Remove 'onedrive' container
```bash
podman rm -f onedrive
```
## Advanced Usage
### Systemd Service & Auto Updating
Podman supports running containers as a systemd service and also auto updating of the container images. Using the existing running container you can generate a systemd unit file to be installed by the **root** user. To have your container image auto-update with podman, it must first be created with the label `"io.containers.autoupdate=image"` mentioned in step 5 above.
```
cd /tmp
podman generate systemd --new --restart-policy on-failure --name -f onedrive
/tmp/container-onedrive.service
# copy the generated systemd unit file to the systemd path and reload the daemon
cp -Z /tmp/container-onedrive.service /usr/lib/systemd/system
systemctl daemon-reload
#optionally enable it to startup on boot
systemctl enable container-onedrive.service
#check status
systemctl status container-onedrive
#start/stop/restart container as a systemd service
systemctl stop container-onedrive
systemctl start container-onedrive
```
To update the image using podman (Ad-hoc)
```
podman auto-update
```
To update the image using systemd (Automatic/Scheduled)
```
# Enable the podman-auto-update.timer service at system start:
systemctl enable podman-auto-update.timer
# Start the service
systemctl start podman-auto-update.timer
# Containers with the autoupdate label will be updated on the next scheduled timer
systemctl list-timers --all
```
### Editing the running configuration and using a 'config' file
The 'onedrive' client should run in its default configuration; however, you can change this default configuration by placing a custom config file in the `onedrive_conf` podman volume. First download the default config from [here](https://raw.githubusercontent.com/abraunegg/onedrive/master/config)
Then put it into your onedrive_conf volume path, which can be found with:
```bash
podman volume inspect onedrive_conf
```
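A minimal sketch of placing a custom config file into this volume, using the 'Mountpoint' reported by the inspect command above (the exact path will differ on your system):
```bash
# download the default config file
wget https://raw.githubusercontent.com/abraunegg/onedrive/master/config -O /tmp/config
# locate the volume mountpoint and copy the config file into it
CONF_PATH=$(podman volume inspect onedrive_conf --format '{{ .Mountpoint }}')
cp /tmp/config "${CONF_PATH}/config"
```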
Or you can map your own config folder to the config volume. Make sure to copy all files from the volume into your mapped folder first.
The detailed document for the config can be found here: [Configuration](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#configuration)
### Syncing multiple accounts
There are many ways to do this; the easiest is probably the following:
1. Create a second podman config volume (replace `work` with your desired name): `podman volume create onedrive_conf_work`
2. And start a second podman monitor container (again replace `work` with your desired name):
```bash
export ONEDRIVE_DATA_DIR_WORK="/home/abraunegg/OneDriveWork"
export ONEDRIVE_UID=`id -u`
export ONEDRIVE_GID=`id -g`
mkdir -p ${ONEDRIVE_DATA_DIR_WORK}
podman run -it --name onedrive_work --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" \
--userns=keep-id \
-v onedrive_conf_work:/onedrive/conf:U,Z \
-v "${ONEDRIVE_DATA_DIR_WORK}:/onedrive/data:U,Z" \
-e PODMAN=1 \
--label "io.containers.autoupdate=image" \
driveone/onedrive:edge
```
## Supported Podman Environment Variables
| Variable | Purpose | Sample Value |
| ---------------- | --------------------------------------------------- |:-------------:|
| <B>ONEDRIVE_UID</B> | UserID (UID) to run as | 1000 |
| <B>ONEDRIVE_GID</B> | GroupID (GID) to run as | 1000 |
| <B>ONEDRIVE_VERBOSE</B> | Controls "--verbose" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_DEBUG</B> | Controls "--verbose --verbose" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_DEBUG_HTTPS</B> | Controls "--debug-https" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_RESYNC</B> | Controls "--resync" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_DOWNLOADONLY</B> | Controls "--download-only" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_UPLOADONLY</B> | Controls "--upload-only" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_NOREMOTEDELETE</B> | Controls "--no-remote-delete" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_LOGOUT</B> | Controls "--logout" switch. Default is 0 | 1 |
| <B>ONEDRIVE_REAUTH</B> | Controls "--reauth" switch. Default is 0 | 1 |
| <B>ONEDRIVE_AUTHFILES</B> | Controls "--auth-files" option. Default is "" | "authUrl:responseUrl" |
| <B>ONEDRIVE_AUTHRESPONSE</B> | Controls "--auth-response" option. Default is "" | See [here](https://github.com/abraunegg/onedrive/blob/master/docs/USAGE.md#authorize-the-application-with-your-onedrive-account) |
| <B>ONEDRIVE_DISPLAY_CONFIG</B> | Controls "--display-running-config" switch on onedrive sync. Default is 0 | 1 |
| <B>ONEDRIVE_SINGLE_DIRECTORY</B> | Controls "--single-directory" option. Default = "" | "mydir" |
### Environment Variables Usage Examples
**Verbose Output:**
```bash
podman run -e ONEDRIVE_VERBOSE=1 -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" driveone/onedrive:edge
```
**Debug Output:**
```bash
podman run -e ONEDRIVE_DEBUG=1 -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" driveone/onedrive:edge
```
**Perform a --resync:**
```bash
podman run -e ONEDRIVE_RESYNC=1 -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" driveone/onedrive:edge
```
**Perform a --resync and --verbose:**
```bash
podman run -e ONEDRIVE_RESYNC=1 -e ONEDRIVE_VERBOSE=1 -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" driveone/onedrive:edge
```
**Perform a --logout and re-authenticate:**
```bash
podman run -it -e ONEDRIVE_LOGOUT=1 -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" driveone/onedrive:edge
```
## Building a custom Podman image
You can also build your own image instead of pulling the one from [hub.docker.com](https://hub.docker.com/r/driveone/onedrive):
```bash
git clone https://github.com/abraunegg/onedrive
cd onedrive
podman build . -t local-onedrive -f contrib/docker/Dockerfile
```
There are alternate, smaller images available by building
Dockerfile-debian or Dockerfile-alpine. These [multi-stage builder pattern](https://docs.docker.com/develop/develop-images/multistage-build/)
Dockerfiles require at least Docker version 17.05.
### How to build and run a custom Podman image based on Debian
``` bash
podman build . -t local-ondrive-debian -f contrib/docker/Dockerfile-debian
podman run -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" --userns=keep-id local-ondrive-debian:latest
```
### How to build and run a custom Podman image based on Alpine Linux
``` bash
podman build . -t local-ondrive-alpine -f contrib/docker/Dockerfile-alpine
podman run -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" --userns=keep-id local-ondrive-alpine:latest
```
### How to build and run a custom Podman image for ARMHF (Raspberry Pi)
Compatible with:
* Raspberry Pi
* Raspberry Pi 2
* Raspberry Pi Zero
* Raspberry Pi 3
* Raspberry Pi 4
``` bash
podman build . -t local-onedrive-armhf -f contrib/docker/Dockerfile-debian
podman run -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" --userns=keep-id local-onedrive-armhf:latest
```
### How to build and run a custom Podman image for AARCH64 Platforms
``` bash
podman build . -t local-onedrive-aarch64 -f contrib/docker/Dockerfile-debian
podman run -v onedrive_conf:/onedrive/conf:U,Z -v "${ONEDRIVE_DATA_DIR}:/onedrive/data:U,Z" --user "${ONEDRIVE_UID}:${ONEDRIVE_GID}" --userns=keep-id local-onedrive-aarch64:latest
```

View file

@ -1,228 +0,0 @@
# How to configure OneDrive SharePoint Shared Library sync
**WARNING:** Several users have reported files being overwritten causing data loss as a result of using this client with SharePoint Libraries when running as a systemd service.
When this has been investigated, the following has been noted as potential root causes:
* File indexing application such as Baloo File Indexer or Tracker3 constantly indexing your OneDrive data
* The use of WPS Office and how it 'saves' files by deleting the existing item and replaces it with the saved data
Additionally, there could be an as-yet unknown bug with the client; however, all debugging and data provided previously shows that a process 'external' to the 'onedrive' application modifies the files, triggering the undesirable upload to occur.
**Possible Preventative Actions:**
* Disable all File Indexing for your SharePoint Library data. It is out of scope to detail on how you should do this.
* Disable using a systemd service for syncing your SharePoint Library data.
* Do not use WPS Office to edit your documents. Use OpenOffice or LibreOffice as these do not exhibit the same 'delete to save' action that WPS Office has.
Additionally, please use caution when using this client with SharePoint.
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Process Overview
Syncing a OneDrive SharePoint library requires additional configuration for your 'onedrive' client:
1. Login to OneDrive and under 'Shared Libraries' obtain the shared library name
2. Query that shared library name using the client to obtain the required configuration details
3. Create a unique local folder which will be the SharePoint Library 'root'
4. Configure the client's config file with the required 'drive_id'
5. Test the configuration using '--dry-run'
6. Sync the SharePoint Library as required
**Note:** The `--get-O365-drive-id` process below requires a fully configured 'onedrive' configuration so that the applicable Drive ID for the given Office 365 SharePoint Shared Library can be determined. It is highly recommended that you do not use the application 'default' configuration directory for any SharePoint Site, and configure separate items for each site you wish to use.
## 1. Listing available OneDrive SharePoint Libraries
Login to the OneDrive web interface and determine which shared library you wish to configure the client for:
![shared_libraries](./images/SharedLibraries.jpg)
## 2. Query OneDrive API to obtain required configuration details
Run the following command using the 'onedrive' client to query the OneDrive API to obtain the required 'drive_id' of the SharePoint Library that you wish to sync:
```text
onedrive --get-O365-drive-id '<your site name to search>'
```
This will return something similar to the following:
```text
Configuration file successfully loaded
Configuring Global Azure AD Endpoints
Initializing the Synchronization Engine ...
Office 365 Library Name Query: <your site name to search>
-----------------------------------------------
Site Name: <your site name>
Library Name: <your library name>
drive_id: b!6H_y8B...xU5
Library URL: <your library URL>
-----------------------------------------------
```
If there are no matches to the site you are attempting to search, the following will be displayed:
```text
Configuration file successfully loaded
Configuring Global Azure AD Endpoints
Initializing the Synchronization Engine ...
Office 365 Library Name Query: blah
ERROR: The requested SharePoint site could not be found. Please check it's name and your permissions to access the site.
The following SharePoint site names were returned:
* <site name 1>
* <site name 2>
...
* <site name X>
```
This list of site names can be used to help identify the correct site name to query.
## 3. Create a new configuration directory and sync location for this SharePoint Library
Create a new configuration directory for this SharePoint Library in the following manner:
```text
mkdir ~/.config/SharePoint_My_Library_Name
```
Create a new local folder to store the SharePoint Library data in:
```text
mkdir ~/SharePoint_My_Library_Name
```
**Note:** Do not use spaces in the directory name, use '_' as a replacement
## 4. Configure SharePoint Library config file with the required 'drive_id' & 'sync_dir' options
Download a copy of the default configuration file from GitHub, saving it in the directory created above:
```text
wget https://raw.githubusercontent.com/abraunegg/onedrive/master/config -O ~/.config/SharePoint_My_Library_Name/config
```
Update your 'onedrive' configuration file (`~/.config/SharePoint_My_Library_Name/config`) with the local folder where you will store your data:
```text
sync_dir = "~/SharePoint_My_Library_Name"
```
Update your 'onedrive' configuration file (`~/.config/SharePoint_My_Library_Name/config`) with the 'drive_id' value obtained in the steps above:
```text
drive_id = "insert the drive_id value from above here"
```
The OneDrive client will now be configured to sync this SharePoint shared library to your local system and the location you have configured.
**Note:** After changing `drive_id`, you must perform a full re-synchronization by adding `--resync` to your existing command line.
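For example, the first synchronization after updating `drive_id` for this configuration would be run as follows (a sketch only; validate the configuration first as per the next section):
```text
onedrive --confdir="~/.config/SharePoint_My_Library_Name" --synchronize --verbose --resync
```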
## 5. Validate and Test the configuration
Use the `--display-config` option to validate that you have configured the application correctly:
```text
onedrive --confdir="~/.config/SharePoint_My_Library_Name" --display-config
```
Test your new configuration using the `--dry-run` option to validate the application configuration:
```text
onedrive --confdir="~/.config/SharePoint_My_Library_Name" --synchronize --verbose --dry-run
```
**Note:** As this is a *new* configuration, the application will be required to be re-authorised the first time this command is run with the new configuration.
## 6. Sync the SharePoint Library as required
Sync the SharePoint Library to your system with either `--synchronize` or `--monitor` operations:
```text
onedrive --confdir="~/.config/SharePoint_My_Library_Name" --synchronize --verbose
```
```text
onedrive --confdir="~/.config/SharePoint_My_Library_Name" --monitor --verbose
```
**Note:** As this is a *new* configuration, the application will be required to be re-authorised the first time this command is run with the new configuration.
## 7. Enable custom systemd service for SharePoint Library
Systemd can be used to automatically run this configuration in the background; however, a unique systemd service will need to be set up for this SharePoint Library instance.
In order to automatically start syncing each SharePoint Library, you will need to create a service file for each SharePoint Library, copied from the applicable 'systemd folder' where the default systemd service file exists:
* RHEL / CentOS: `/usr/lib/systemd/system`
* Others: `/usr/lib/systemd/user` and `/lib/systemd/system`
### Step 1: Create a new systemd service file
#### Red Hat Enterprise Linux, CentOS Linux
Copy the required service file to a new name:
```text
sudo cp /usr/lib/systemd/system/onedrive.service /usr/lib/systemd/system/onedrive-SharePoint_My_Library_Name.service
```
or
```text
sudo cp /usr/lib/systemd/system/onedrive@.service /usr/lib/systemd/system/onedrive-SharePoint_My_Library_Name@.service
```
#### Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
Copy the required service file to a new name:
```text
sudo cp /usr/lib/systemd/user/onedrive.service /usr/lib/systemd/user/onedrive-SharePoint_My_Library_Name.service
```
or
```text
sudo cp /lib/systemd/system/onedrive@.service /lib/systemd/system/onedrive-SharePoint_My_Library_Name@.service
```
### Step 2: Edit new systemd service file
Edit the new systemd file, updating the line beginning with `ExecStart` so that the confdir mirrors the one you used above:
```text
ExecStart=/usr/local/bin/onedrive --monitor --confdir="/full/path/to/config/dir"
```
Example:
```text
ExecStart=/usr/local/bin/onedrive --monitor --confdir="/home/myusername/.config/SharePoint_My_Library_Name"
```
**Note:** When running the client manually, `--confdir="~/.config/......` is acceptable. In a systemd configuration file, the full path must be used. The `~` must be expanded.
### Step 3: Enable the new systemd service
Once the file is correctly edited, you can enable the new systemd service using the following commands.
#### Red Hat Enterprise Linux, CentOS Linux
```text
systemctl enable onedrive-SharePoint_My_Library_Name
systemctl start onedrive-SharePoint_My_Library_Name
```
#### Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
```text
systemctl --user enable onedrive-SharePoint_My_Library_Name
systemctl --user start onedrive-SharePoint_My_Library_Name
```
or
```text
systemctl --user enable onedrive-SharePoint_My_Library_Name@myusername.service
systemctl --user start onedrive-SharePoint_My_Library_Name@myusername.service
```
### Step 4: Viewing systemd status and logs for the custom service
#### Viewing systemd service status - Red Hat Enterprise Linux, CentOS Linux
```text
systemctl status onedrive-SharePoint_My_Library_Name
```
#### Viewing systemd service status - Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
```text
systemctl --user status onedrive-SharePoint_My_Library_Name
```
#### Viewing journalctl systemd logs - Red Hat Enterprise Linux, CentOS Linux
```text
journalctl --unit=onedrive-SharePoint_My_Library_Name -f
```
#### Viewing journalctl systemd logs - Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
```text
journalctl --user --unit=onedrive-SharePoint_My_Library_Name -f
```
### Step 5: (Optional) Run custom systemd service at boot without user login
In some cases it may be desirable for the systemd service to start without having to log in as your 'user'.
All the systemd steps above that utilise the `--user` option will run the systemd service as your particular user. As such, the systemd service will not start unless you actually log in to your system.
To avoid this issue, you need to reconfigure your 'user' account so that the systemd services you have created will start up without you having to log in to your system:
```text
loginctl enable-linger <your_user_name>
```
Example:
```text
alex@ubuntu-headless:~$ loginctl enable-linger alex
```
## 8. Configuration for a SharePoint Library is complete
The 'onedrive' client configuration for this particular SharePoint Library is now complete.
# How to configure multiple OneDrive SharePoint Shared Library sync
Create a new configuration as per the process above. Repeat these steps for each SharePoint Library that you wish to use.

File diff suppressed because it is too large Load diff

View file

@ -1,13 +1,10 @@
# Advanced Configuration of the OneDrive Free Client
This document covers the following scenarios:
* [Configuring the client to use multiple OneDrive accounts / configurations](#configuring-the-client-to-use-multiple-onedrive-accounts--configurations)
* [Configuring the client to use multiple OneDrive accounts / configurations using Docker](#configuring-the-client-to-use-multiple-onedrive-accounts--configurations-using-docker)
* [Configuring the client for use in dual-boot (Windows / Linux) situations](#configuring-the-client-for-use-in-dual-boot-windows--linux-situations)
* [Configuring the client for use when 'sync_dir' is a mounted directory](#configuring-the-client-for-use-when-sync_dir-is-a-mounted-directory)
* [Upload data from the local ~/OneDrive folder to a specific location on OneDrive](#upload-data-from-the-local-onedrive-folder-to-a-specific-location-on-onedrive)
* Configuring the client to use multiple OneDrive accounts / configurations
* Configuring the client for use in dual-boot (Windows / Linux) situations
## Configuring the client to use multiple OneDrive accounts / configurations
Essentially, each OneDrive account or SharePoint Shared Library which you require to be synced needs to have its own and unique configuration, local sync directory and service files. To do this, the following steps are needed:
## Configuring the client to use multiple OneDrive accounts / configurations
Essentially, each OneDrive account or SharePoint Shared Library which you require to be synced needs to have its own unique configuration, local sync directory and service files. To do this, the following steps are needed:
1. Create a unique configuration folder for each onedrive client configuration that you need
2. Copy to this folder a copy of the default configuration file
3. Update the default configuration file as required, changing the required minimum config options and any additional options as needed to support your multi-account configuration (see the sketch after this list)
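A minimal sketch of steps 1-3, using the 'my-new-config' name that the rest of this document assumes (the sync_dir value is only an example):
```text
mkdir -p ~/.config/my-new-config
wget https://raw.githubusercontent.com/abraunegg/onedrive/master/config -O ~/.config/my-new-config/config
# edit ~/.config/my-new-config/config and set, at minimum, a unique sync directory, for example:
#   sync_dir = "~/OneDrive-my-new-config"
```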
@ -64,7 +61,7 @@ Test the configuration using '--display-config' and '--dry-run'. By doing so, th
#### Display the configuration
```text
onedrive --confdir="~/.config/my-new-config" --display-config
onedrive --confdir="~/.config/my-new-config --display-config"
```
#### Test the configuration by performing a dry-run
@ -92,29 +89,18 @@ In order to automatically start syncing your OneDrive accounts, you will need to
* RHEL / CentOS: `/usr/lib/systemd/system`
* Others: `/usr/lib/systemd/user` and `/lib/systemd/system`
### Step 1: Create a new systemd service file
#### Red Hat Enterprise Linux, CentOS Linux
**Note:** The `onedrive.service` runs the service as the 'root' user, whereas the `onedrive@.service` runs the service as your user account.
Copy the required service file to a new name:
```text
sudo cp /usr/lib/systemd/system/onedrive.service /usr/lib/systemd/system/onedrive-my-new-config
cp onedrive.service onedrive-my-new-config.service
```
or
```text
sudo cp /usr/lib/systemd/system/onedrive@.service /usr/lib/systemd/system/onedrive-my-new-config@.service
cp onedrive@.service onedrive-my-new-config@.service
```
#### Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
Copy the required service file to a new name:
```text
sudo cp /usr/lib/systemd/user/onedrive.service /usr/lib/systemd/user/onedrive-my-new-config.service
```
or
```text
sudo cp /lib/systemd/system/onedrive@.service /lib/systemd/system/onedrive-my-new-config@.service
```
### Step 2: Edit new systemd service file
Edit the new systemd file, updating the line beginning with `ExecStart` so that the confdir mirrors the one you used above:
Edit the line beginning with `ExecStart` so that the confdir mirrors the one you used above:
```text
ExecStart=/usr/local/bin/onedrive --monitor --confdir="/full/path/to/config/dir"
```
@ -124,18 +110,7 @@ Example:
ExecStart=/usr/local/bin/onedrive --monitor --confdir="/home/myusername/.config/my-new-config"
```
**Note:** When running the client manually, `--confdir="~/.config/......` is acceptable. In a systemd configuration file, the full path must be used. The `~` must be expanded.
### Step 3: Enable the new systemd service
Once the file is correctly edited, you can enable the new systemd service using the following commands.
#### Red Hat Enterprise Linux, CentOS Linux
```text
systemctl enable onedrive-my-new-config
systemctl start onedrive-my-new-config
```
#### Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
Then you can safely run these commands:
```text
systemctl --user enable onedrive-my-new-config
systemctl --user start onedrive-my-new-config
@ -146,92 +121,8 @@ systemctl --user enable onedrive-my-new-config@myusername.service
systemctl --user start onedrive-my-new-config@myusername.service
```
### Step 4: Viewing systemd status and logs for the custom service
#### Viewing systemd service status - Red Hat Enterprise Linux, CentOS Linux
```text
systemctl status onedrive-my-new-config
```
#### Viewing systemd service status - Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
```text
systemctl --user status onedrive-my-new-config
```
#### Viewing journalctl systemd logs - Red Hat Enterprise Linux, CentOS Linux
```text
journalctl --unit=onedrive-my-new-config -f
```
#### Viewing journalctl systemd logs - Others such as Arch, Ubuntu, Debian, OpenSuSE, Fedora
```text
journalctl --user --unit=onedrive-my-new-config -f
```
### Step 5: (Optional) Run custom systemd service at boot without user login
In some cases it may be desirable for the systemd service to start without having to log in as your 'user'.
All the systemd steps above that utilise the `--user` option will run the systemd service as your particular user. As such, the systemd service will not start unless you actually log in to your system.
To avoid this issue, you need to reconfigure your 'user' account so that the systemd services you have created will start up without you having to log in to your system:
```text
loginctl enable-linger <your_user_name>
```
Example:
```text
alex@ubuntu-headless:~$ loginctl enable-linger alex
```
Repeat these steps for each new OneDrive account that you wish to use.
## Configuring the client to use multiple OneDrive accounts / configurations using Docker
In some situations it may be desirable to run multiple Docker containers at the same time, each with their own configuration.
To run the Docker container successfully, it needs two unique Docker volumes to operate:
* Your configuration Docker volumes
* Your data Docker volume
When running multiple Docker containers, this is no different - each Docker container must have its own configuration and data volume.
### High level steps:
1. Create the required unique Docker volumes for the configuration volume
2. Create the required unique local path used for the Docker data volume
3. Start the multiple Docker containers with the required configuration for each container
#### Create the required unique Docker volumes for the configuration volume
Create the required unique Docker volumes for the configuration volume(s):
```text
docker volume create onedrive_conf_sharepoint_site1
docker volume create onedrive_conf_sharepoint_site2
docker volume create onedrive_conf_sharepoint_site3
...
docker volume create onedrive_conf_sharepoint_site50
```
#### Create the required unique local path used for the Docker data volume
Create the required unique local path used for the Docker data volume
```text
mkdir -p /use/full/local/path/no/tilda/SharePointSite1
mkdir -p /use/full/local/path/no/tilda/SharePointSite2
mkdir -p /use/full/local/path/no/tilda/SharePointSite3
...
mkdir -p /use/full/local/path/no/tilda/SharePointSite50
```
#### Start the Docker container with the required configuration (example)
```text
docker run -it --name onedrive -v onedrive_conf_sharepoint_site1:/onedrive/conf -v "/use/full/local/path/no/tilda/SharePointSite1:/onedrive/data" driveone/onedrive:latest
docker run -it --name onedrive -v onedrive_conf_sharepoint_site2:/onedrive/conf -v "/use/full/local/path/no/tilda/SharePointSite2:/onedrive/data" driveone/onedrive:latest
docker run -it --name onedrive -v onedrive_conf_sharepoint_site3:/onedrive/conf -v "/use/full/local/path/no/tilda/SharePointSite3:/onedrive/data" driveone/onedrive:latest
...
docker run -it --name onedrive -v onedrive_conf_sharepoint_site50:/onedrive/conf -v "/use/full/local/path/no/tilda/SharePointSite50:/onedrive/data" driveone/onedrive:latest
```
#### TIP
To avoid 're-authenticating' and 'authorising' each individual Docker container, if all the Docker containers are using the 'same' OneDrive credentials, you can re-use the 'refresh_token' from one Docker container to another by copying this file to the configuration Docker volume of each Docker container.
If the account credentials are different, you will need to re-authenticate each Docker container individually.
## Configuring the client for use in dual-boot (Windows / Linux) situations
When dual booting Windows and Linux, depending on the Windows OneDrive account configuration, the 'Files On-Demand' option may be enabled when running OneDrive within your Windows environment.
@ -246,57 +137,3 @@ After unchecking the option and clicking "OK", the Windows OneDrive client shoul
| OneDrive Personal | Onedrive Business<br>SharePoint |
|---|---|
| ![Uncheck-Personal](./images/personal-files-on-demand.png) | ![Uncheck-Business](./images/business-files-on-demand.png) |
## Configuring the client for use when 'sync_dir' is a mounted directory
In some environments, your setup might be that your configured 'sync_dir' is pointing to another mounted file system - an NFS|CIFS location, an external drive (USB stick, eSATA etc). As such, you configure your 'sync_dir' as follows:
```text
sync_dir = "/path/to/mountpoint/OneDrive"
```
The issue here is - how does the client react if the mount point gets removed - network loss, device removal?
The client has zero knowledge of any event that causes a mountpoint to become unavailable, thus, the client (if you are running as a service) will assume that you deleted the files, thus, will go ahead and delete all your files on OneDrive. This is most certainly an undesirable action.
There are a few options here which you can configure in your 'config' file to assist you in preventing this sort of event from occurring:
1. classify_as_big_delete
2. check_nomount
3. check_nosync
**Note:** Before making any change to your configuration, stop any sync process & stop any onedrive systemd service from running.
### classify_as_big_delete
By default, this uses a value of 1000 files|folders. In the event of an undesirable unmount, if more than 1000 files would be affected, this default level will prevent the client from executing the online delete. Modify this value up or down as desired.
### check_nomount & check_nosync
These two options are really the right safe guards to use.
In your 'mount point', *before* you mount your external folder|device, create an empty `.nosync` file, so that this is the *only* file present in the mount location before you mount your data to your mount point. When you mount your data, this '.nosync' file will not be visible, but, if the device you are mounting goes away - this '.nosync' file is the only file visible.
Next, in your 'config' file, configure the following options: `check_nomount = "true"` and `check_nosync = "true"`
What this will do is tell the client, if at *any* point you see this file - stop syncing - thus, protecting your online data from being deleted by the mounted device being suddenly unavailable.
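As a sketch (the mount point path is an example only):
```text
# with the external device unmounted, create the marker file in the otherwise empty mount point
touch /path/to/mountpoint/.nosync
# then add the following to your 'config' file
check_nomount = "true"
check_nosync = "true"
```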
After making this sort of change - test with `--dry-run` so you can see the impacts of your mount point being unavailable, and how the client is now reacting. Once you are happy with how the system will react, restart your sync processes.
## Upload data from the local ~/OneDrive folder to a specific location on OneDrive
In some environments, you may not want your local ~/OneDrive folder to be uploaded directly to the root of your OneDrive account online.
Unfortunately, the OneDrive API lacks any facility to perform a re-direction of data during upload.
The workaround for this is to structure your local filesystem and reconfigure your client to achieve the desired goal.
### High level steps:
1. Create a new folder, for example `/opt/OneDrive`
2. Configure your application config 'sync_dir' to look at this folder
3. Inside `/opt/OneDrive` create the folder you wish to sync the data online to, for example: `/opt/OneDrive/RemoteOnlineDestination`
4. Configure the application to only sync `/opt/OneDrive/RemoteOnlineDestination` via 'sync_list' (see the sketch below)
5. Symbolically link `~/OneDrive` -> `/opt/OneDrive/RemoteOnlineDestination`
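A minimal sketch of these steps - the paths, folder names and 'sync_list' entry below are illustrative, and the 'sync_list' entry should follow your normal selective sync configuration:
```bash
# Steps 1 & 3: create the new parent folder and the remote destination folder
sudo mkdir -p /opt/OneDrive/RemoteOnlineDestination
sudo chown -R "$USER": /opt/OneDrive

# Step 2: in ~/.config/onedrive/config:
#   sync_dir = "/opt/OneDrive"
# Step 4: in ~/.config/onedrive/sync_list:
#   RemoteOnlineDestination

# Step 5: link ~/OneDrive to the destination folder (~/OneDrive must not already exist)
ln -s /opt/OneDrive/RemoteOnlineDestination ~/OneDrive
```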
### Outcome:
* Your `~/OneDrive` will look / feel as per normal
* The data will be stored online under `/RemoteOnlineDestination`
### Testing:
* Validate your configuration with `onedrive --display-config`
* Test your configuration with `onedrive --dry-run`

@ -1,97 +0,0 @@
# OneDrive Client for Linux Application Security
This document details the following information:
* Why is this application an 'unverified publisher'?
* Application Security and Permission Scopes
* How to change Permission Scopes
* How to review your existing application access consent
## Why is this application an 'unverified publisher'?
Publisher Verification, as per the Microsoft [process](https://learn.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), has actually been configured and has actually been verified!
### Verified Publisher Configuration Evidence
As per the image below, the Azure portal shows that the 'Publisher Domain' has actually been verified:
![confirmed_verified_publisher](./images/confirmed_verified_publisher.jpg)
* The 'Publisher Domain' is: https://abraunegg.github.io/
* The required 'Microsoft Identity Association' is: https://abraunegg.github.io/.well-known/microsoft-identity-association.json
## Application Security and Permission Scopes
There are 2 main components regarding security for this application:
* Azure Application Permissions
* User Authentication Permissions
Keeping this in mind, security options should follow the security principle of 'least privilege':
> The principle that a security architecture should be designed so that each entity
> is granted the minimum system resources and authorizations that the entity needs
> to perform its function.
Reference: [https://csrc.nist.gov/glossary/term/least_privilege](https://csrc.nist.gov/glossary/term/least_privilege)
As such, the following API permissions are used by default:
### Default Azure Application Permissions
| API / Permissions name | Type | Description | Admin consent required |
|---|---|---|---|
| Files.Read | Delegated | Have read-only access to user files | No |
| Files.Read.All | Delegated | Have read-only access to all files user can access | No |
| Sites.Read.All | Delegated | Have read-only access to all items in all site collections | No |
| offline_access | Delegated | Maintain access to data you have given it access to | No |
![default_authentication_scopes](./images/default_authentication_scopes.jpg)
### Default User Authentication Permissions
When a user authenticates with Microsoft OneDrive, additional account permissions are provided by the service to give the user specific access to their data. These are delegated permissions provided by the platform:
| API / Permissions name | Type | Description | Admin consent required |
|---|---|---|---|
| Files.ReadWrite | Delegated | Have full access to user files | No |
| Files.ReadWrite.All | Delegated | Have full access to all files user can access | No |
| Sites.ReadWrite.All | Delegated | Have full access to all items in all site collections | No |
| offline_access | Delegated | Maintain access to data you have given it access to | No |
When these delegated API permissions are combined, these provide the effective authentication scope for the OneDrive Client for Linux to access your data. The resulting effective 'default' permissions will be:
| API / Permissions name | Type | Description | Admin consent required |
|---|---|---|---|
| Files.ReadWrite | Delegated | Have full access to user files | No |
| Files.ReadWrite.All | Delegated | Have full access to all files user can access | No |
| Sites.ReadWrite.All | Delegated | Have full access to all items in all site collections | No |
| offline_access | Delegated | Maintain access to data you have given it access to | No |
These 'default' permissions will allow the OneDrive Client for Linux to read, write and delete data associated with your OneDrive Account.
## Configuring read-only access to your OneDrive data
In some situations, it may be desirable to configure the OneDrive Client for Linux for totally read-only operation.
To change the application to 'read-only' access, add the following to your configuration file:
```text
read_only_auth_scope = "true"
```
This will change the user authentication scope request to use read-only access.
**Note:** When changing this value, you *must* re-authenticate the client using the `--reauth` option to utilise the change in authentication scopes.
When using read-only authentication scopes, the uploading of any data or local change to OneDrive will fail with the following error:
```
2022-Aug-06 13:16:45.3349625 ERROR: Microsoft OneDrive API returned an error with the following message:
2022-Aug-06 13:16:45.3351661 Error Message: HTTP request returned status code 403 (Forbidden)
2022-Aug-06 13:16:45.3352467 Error Reason: Access denied
2022-Aug-06 13:16:45.3352838 Error Timestamp: 2022-06-12T13:16:45
2022-Aug-06 13:16:45.3353171 API Request ID: <redacted>
```
As such, it is also advisable for you to add the following to your configuration file so that 'uploads' are prevented:
```text
download_only = "true"
```
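Putting the two options together, a fully read-only configuration would therefore contain:
```text
read_only_auth_scope = "true"
download_only = "true"
```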
**Important:** Additionally, when using 'read_only_auth_scope' you will also need to remove your existing application access consent, otherwise the old authentication consent will remain valid and will be used. This would mean the application still technically has the consent to upload data. See below on how to remove your prior application consent.
## Reviewing your existing application access consent
To review your existing application access consent, you need to access the following URL: https://account.live.com/consent/Manage
From here, you are able to review what applications have been given what access to your data, and remove application access as required.

@ -1,11 +1,12 @@
# RPM Package Build Process
The instructions below have been tested on the following systems:
* CentOS 6 x86_64
* CentOS 7 x86_64
* CentOS 8 x86_64
These instructions should also be applicable for RedHat & Fedora platforms, or any other RedHat RPM based distribution.
## Prepare Package Development Environment (CentOS 7, 8)
## Prepare Package Development Environment (CentOS 6, 7, 8)
Install the following dependencies on your build system:
```text
sudo yum groupinstall -y 'Development Tools'
@ -13,40 +14,226 @@ sudo yum install -y libcurl-devel
sudo yum install -y sqlite-devel
sudo yum install -y libnotify-devel
sudo yum install -y wget
sudo yum install -y http://downloads.dlang.org/releases/2.x/2.088.0/dmd-2.088.0-0.fedora.x86_64.rpm
sudo yum install -y http://downloads.dlang.org/releases/2.x/2.091.0/dmd-2.091.0-0.fedora.x86_64.rpm
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
```
### CentOS 6 Only
In addition to the above requirements, the `sqlite` version used on CentOS 6.x / RHEL 6.x needs to be upgraded. Use the following instructions to update your version of `sqlite` so that it can support this client:
```text
sudo yum -y install epel-release
sudo yum -y install mock
wget https://kojipkgs.fedoraproject.org//packages/sqlite/3.7.15.2/2.fc19/src/sqlite-3.7.15.2-2.fc19.src.rpm -O ~/rpmbuild/SRPMS/sqlite-3.7.15.2-2.fc19.src.rpm
mock -r epel-6-x86_64 --rebuild ~/rpmbuild/SRPMS/sqlite-3.7.15.2-2.fc19.src.rpm
mock -r epel-6-i386 --rebuild ~/rpmbuild/SRPMS/sqlite-3.7.15.2-2.fc19.src.rpm
mkdir ~/sqlite-upgrade
mv /var/lib/mock/epel-6-x86_64/result/sqlite-*.rpm ~/sqlite-upgrade/
mv /var/lib/mock/epel-6-i386/result/sqlite-*.i686.rpm ~/sqlite-upgrade/
sudo yum -y upgrade ~/sqlite-upgrade/sqlite-*
```
## Build RPM from spec file
Build the RPM from the provided spec file:
```text
wget https://github.com/abraunegg/onedrive/archive/refs/tags/v2.4.22.tar.gz -O ~/rpmbuild/SOURCES/v2.4.22.tar.gz
wget https://github.com/abraunegg/onedrive/archive/v2.4.0.tar.gz -O ~/rpmbuild/SOURCES/v2.4.0.tar.gz
wget https://raw.githubusercontent.com/abraunegg/onedrive/master/contrib/spec/onedrive.spec.in -O ~/rpmbuild/SPECS/onedrive.spec
rpmbuild -ba ~/rpmbuild/SPECS/onedrive.spec
```
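If the build completes successfully, the resulting package is written under the standard rpmbuild output directory and can then be installed, for example:
```text
ls ~/rpmbuild/RPMS/x86_64/onedrive-*.rpm
sudo yum -y install ~/rpmbuild/RPMS/x86_64/onedrive-*.rpm
```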
## RPM Build Example Results
Below are example output results of building, installing and running the RPM package on the respective platforms:
## RPM Build Results
Below are output results of building, installing and running the RPM package on the respective platforms:
### CentOS 6
```text
[alex@localhost ~]$ rpmbuild -ba ~/rpmbuild/SPECS/onedrive.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.Ve7WYf
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ LANG=C
+ export LANG
+ unset DISPLAY
+ cd /home/alex/rpmbuild/BUILD
+ rm -rf onedrive-2.4.0
+ /bin/tar -xf -
+ /usr/bin/gzip -dc /home/alex/rpmbuild/SOURCES/v2.4.0.tar.gz
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd onedrive-2.4.0
+ /bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.iB40rK
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.0
+ LANG=C
+ export LANG
+ unset DISPLAY
+ CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
+ export CFLAGS
+ CXXFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'
+ export CXXFLAGS
+ FFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/lib64/gfortran/modules'
+ export FFLAGS
+ ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info
checking for a BSD-compatible install... /usr/bin/install -c
checking for x86_64-redhat-linux-gnu-pkg-config... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for dmd... dmd
checking version of D compiler... 2.091.0
checking for curl... yes
checking for sqlite... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating contrib/pacman/PKGBUILD
config.status: creating contrib/spec/onedrive.spec
config.status: creating onedrive.1
config.status: creating contrib/systemd/onedrive.service
config.status: creating contrib/systemd/onedrive@.service
+ make
if [ -f .git/HEAD ] ; then \
git describe --tags > version ; \
else \
echo v2.4.0 > version ; \
fi
dmd -w -g -O -J. -L-lcurl -L-lsqlite3 -L-ldl src/config.d src/itemdb.d src/log.d src/main.d src/monitor.d src/onedrive.d src/qxor.d src/selective.d src/sqlite.d src/sync.d src/upload.d src/util.d src/progress.d -ofonedrive
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.VP7LUb
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ '[' /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64 '!=' / ']'
+ rm -rf /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64
++ dirname /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64
+ mkdir -p /home/alex/rpmbuild/BUILDROOT
+ mkdir /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64
+ cd onedrive-2.4.0
+ LANG=C
+ export LANG
+ unset DISPLAY
+ make install DESTDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64 PREFIX=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64
/usr/bin/install -c -D onedrive /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/bin/onedrive
/usr/bin/install -c -D onedrive.1 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/man/man1/onedrive.1
/usr/bin/install -c -D -m 644 contrib/logrotate/onedrive.logrotate /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/etc/logrotate.d/onedrive
mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/doc/onedrive
/usr/bin/install -c -D -m 644 README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/Office365.md docs/USAGE.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/doc/onedrive
install -D contrib/init.d/onedrive.init /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/etc/init.d/onedrive
install -D contrib/init.d/onedrive_service.sh /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/bin/onedrive_service.sh
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/redhat/brp-compress
+ /usr/lib/rpm/redhat/brp-strip /usr/bin/strip
+ /usr/lib/rpm/redhat/brp-strip-static-archive /usr/bin/strip
+ /usr/lib/rpm/redhat/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump
+ /usr/lib/rpm/brp-python-bytecompile /usr/bin/python
+ /usr/lib/rpm/redhat/brp-python-hardlink
+ /usr/lib/rpm/redhat/brp-java-repack-jars
Processing files: onedrive-2.4.0-1.el6.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.6wsc3O
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.0
+ DOCDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/doc/onedrive-2.4.0
+ export DOCDIR
+ rm -rf /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/doc/onedrive-2.4.0
+ /bin/mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/doc/onedrive-2.4.0
+ cp -pr README.md LICENSE CHANGELOG.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64/usr/share/doc/onedrive-2.4.0
+ exit 0
Provides: config(onedrive) = 2.4.0-1.el6
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(post): chkconfig
Requires(preun): chkconfig initscripts
Requires(postun): initscripts
Requires: /bin/bash /bin/sh ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.6)(64bit) libcurl.so.4()(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_4.2.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) libpthread.so.0(GLIBC_2.3.3)(64bit) libpthread.so.0(GLIBC_2.3.4)(64bit) librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) libsqlite3.so.0()(64bit) rtld(GNU_HASH)
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el6.x86_64
Wrote: /home/alex/rpmbuild/SRPMS/onedrive-2.4.0-1.el6.src.rpm
Wrote: /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el6.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.MuVJDP
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.0
+ exit 0
[alex@localhost ~]$ sudo yum -y install /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el6.x86_64.rpm
Loaded plugins: fastestmirror
Setting up Install Process
Examining /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el6.x86_64.rpm: onedrive-2.4.0-1.el6.x86_64
Marking /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el6.x86_64.rpm to be installed
Loading mirror speeds from cached hostfile
* base: mirror.internode.on.net
* epel: fedora.mirror.serversaustralia.com.au
* extras: mirror.internode.on.net
* updates: mirror.colocity.com
Resolving Dependencies
--> Running transaction check
---> Package onedrive.x86_64 0:2.4.0-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
onedrive x86_64 2.4.0-1.el6 /onedrive-2.4.0-1.el6.x86_64 5.8 M
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package(s)
Total size: 5.8 M
Installed size: 5.8 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : onedrive-2.4.0-1.el6.x86_64 1/1
Verifying : onedrive-2.4.0-1.el6.x86_64 1/1
Installed:
onedrive.x86_64 0:2.4.0-1.el6
Complete!
[alex@localhost ~]$ which onedrive
/usr/bin/onedrive
[alex@localhost ~]$ onedrive --version
onedrive v2.4.0
[alex@localhost ~]$ onedrive --display-config
onedrive version = v2.4.0
Config path = /home/alex/.config/onedrive
Config file found in config path = false
Config option 'check_nosync' = false
Config option 'sync_dir' = /home/alex/OneDrive
Config option 'skip_dir' =
Config option 'skip_file' = ~*|.~*|*.tmp
Config option 'skip_dotfiles' = false
Config option 'skip_symlinks' = false
Config option 'monitor_interval' = 45
Config option 'min_notify_changes' = 5
Config option 'log_dir' = /var/log/onedrive/
Config option 'classify_as_big_delete' = 1000
Config option 'sync_root_files' = false
Selective sync configured = false
[alex@localhost ~]$
```
### CentOS 7
```text
[alex@localhost ~]$ rpmbuild -ba ~/rpmbuild/SPECS/onedrive.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.wi6Tdz
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.BEprc0
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd /home/alex/rpmbuild/BUILD
+ rm -rf onedrive-2.4.15
+ rm -rf onedrive-2.4.0
+ /usr/bin/gzip -dc /home/alex/rpmbuild/SOURCES/v2.4.0.tar.gz
+ /usr/bin/tar -xf -
+ /usr/bin/gzip -dc /home/alex/rpmbuild/SOURCES/v2.4.15.tar.gz
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd onedrive-2.4.15
+ cd onedrive-2.4.0
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.dyeEuM
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.CSy74N
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.15
+ cd onedrive-2.4.0
+ CFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
+ export CFLAGS
+ CXXFLAGS='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
@ -67,7 +254,7 @@ checking for x86_64-redhat-linux-gnu-pkg-config... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for dmd... dmd
checking version of D compiler... 2.087.0
checking version of D compiler... 2.091.0
checking for curl... yes
checking for sqlite... yes
configure: creating ./config.status
@ -82,28 +269,28 @@ configure: WARNING: unrecognized options: --disable-dependency-tracking
if [ -f .git/HEAD ] ; then \
git describe --tags > version ; \
else \
echo v2.4.15 > version ; \
echo v2.4.0 > version ; \
fi
dmd -w -g -O -J. -L-lcurl -L-lsqlite3 -L-ldl src/config.d src/itemdb.d src/log.d src/main.d src/monitor.d src/onedrive.d src/qxor.d src/selective.d src/sqlite.d src/sync.d src/upload.d src/util.d src/progress.d src/arsd/cgi.d -ofonedrive
dmd -w -g -O -J. -L-lcurl -L-lsqlite3 -L-ldl src/config.d src/itemdb.d src/log.d src/main.d src/monitor.d src/onedrive.d src/qxor.d src/selective.d src/sqlite.d src/sync.d src/upload.d src/util.d src/progress.d -ofonedrive
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.L3JbHy
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.Ffintx
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ '[' /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64 '!=' / ']'
+ rm -rf /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64
++ dirname /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64
+ '[' /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64 '!=' / ']'
+ rm -rf /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64
++ dirname /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64
+ mkdir -p /home/alex/rpmbuild/BUILDROOT
+ mkdir /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64
+ cd onedrive-2.4.15
+ /usr/bin/make install DESTDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64 PREFIX=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64
/usr/bin/install -c -D onedrive /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/bin/onedrive
/usr/bin/install -c -D onedrive.1 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/man/man1/onedrive.1
/usr/bin/install -c -D -m 644 contrib/logrotate/onedrive.logrotate /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/etc/logrotate.d/onedrive
mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive
/usr/bin/install -c -D -m 644 README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/SharePoint-Shared-Libraries.md docs/USAGE.md docs/BusinessSharedFolders.md docs/advanced-usage.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive
/usr/bin/install -c -d -m 0755 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/lib/systemd/user /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/lib/systemd/system
/usr/bin/install -c -m 0644 contrib/systemd/onedrive@.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/lib/systemd/system
/usr/bin/install -c -m 0644 contrib/systemd/onedrive.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/lib/systemd/system
+ mkdir /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64
+ cd onedrive-2.4.0
+ /usr/bin/make install DESTDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64 PREFIX=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64
/usr/bin/install -c -D onedrive /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/bin/onedrive
/usr/bin/install -c -D onedrive.1 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/man/man1/onedrive.1
/usr/bin/install -c -D -m 644 contrib/logrotate/onedrive.logrotate /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/etc/logrotate.d/onedrive
mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive
/usr/bin/install -c -D -m 644 README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/Office365.md docs/USAGE.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive
/usr/bin/install -c -d -m 0755 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/lib/systemd/user /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/lib/systemd/system
/usr/bin/install -c -m 0644 contrib/systemd/onedrive@.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/lib/systemd/system
/usr/bin/install -c -m 0644 contrib/systemd/onedrive.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/lib/systemd/system
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/redhat/brp-compress
+ /usr/lib/rpm/redhat/brp-strip /usr/bin/strip
@ -112,73 +299,73 @@ mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/do
+ /usr/lib/rpm/brp-python-bytecompile /usr/bin/python 1
+ /usr/lib/rpm/redhat/brp-python-hardlink
+ /usr/lib/rpm/redhat/brp-java-repack-jars
Processing files: onedrive-2.4.15-1.el7.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.cpSXho
Processing files: onedrive-2.4.0-1.el7.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.EB5XJj
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.15
+ DOCDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive-2.4.15
+ cd onedrive-2.4.0
+ DOCDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive-2.4.0
+ export DOCDIR
+ /usr/bin/mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive-2.4.15
+ cp -pr README.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive-2.4.15
+ cp -pr LICENSE /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive-2.4.15
+ cp -pr CHANGELOG.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64/usr/share/doc/onedrive-2.4.15
+ /usr/bin/mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive-2.4.0
+ cp -pr README.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive-2.4.0
+ cp -pr LICENSE /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive-2.4.0
+ cp -pr CHANGELOG.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64/usr/share/doc/onedrive-2.4.0
+ exit 0
Provides: config(onedrive) = 2.4.15-1.el7 onedrive = 2.4.15-1.el7 onedrive(x86-64) = 2.4.15-1.el7
Provides: config(onedrive) = 2.4.0-1.el7 onedrive = 2.4.0-1.el7 onedrive(x86-64) = 2.4.0-1.el7
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.15)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.6)(64bit) libc.so.6(GLIBC_2.8)(64bit) libc.so.6(GLIBC_2.9)(64bit) libcurl.so.4()(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_4.2.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) libpthread.so.0(GLIBC_2.3.4)(64bit) librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) libsqlite3.so.0()(64bit) rtld(GNU_HASH)
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el7.x86_64
Wrote: /home/alex/rpmbuild/SRPMS/onedrive-2.4.15-1.el7.src.rpm
Wrote: /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.15-1.el7.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.nWoW33
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.6)(64bit) libcurl.so.4()(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_4.2.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) libpthread.so.0(GLIBC_2.3.3)(64bit) libpthread.so.0(GLIBC_2.3.4)(64bit) librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) libsqlite3.so.0()(64bit) rtld(GNU_HASH)
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el7.x86_64
Wrote: /home/alex/rpmbuild/SRPMS/onedrive-2.4.0-1.el7.src.rpm
Wrote: /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el7.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.2VzBVJ
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.15
+ cd onedrive-2.4.0
+ exit 0
[alex@localhost ~]$ sudo yum -y install /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.15-1.el7.x86_64.rpm
[alex@localhost ~]$ sudo yum -y install /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el7.x86_64.rpm
Loaded plugins: fastestmirror
Examining /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.15-1.el7.x86_64.rpm: onedrive-2.4.15-1.el7.x86_64
Marking /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.15-1.el7.x86_64.rpm to be installed
Examining /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el7.x86_64.rpm: onedrive-2.4.0-1.el7.x86_64
Marking /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package onedrive.x86_64 0:2.4.15-1.el7 will be installed
---> Package onedrive.x86_64 0:2.4.0-1.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================
Package Arch Version Repository Size
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
onedrive x86_64 2.4.15-1.el7 /onedrive-2.4.15-1.el7.x86_64 7.2 M
onedrive x86_64 2.4.0-1.el7 /onedrive-2.4.0-1.el7.x86_64 5.8 M
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package
Total size: 7.2 M
Installed size: 7.2 M
Total size: 5.8 M
Installed size: 5.8 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : onedrive-2.4.15-1.el7.x86_64 1/1
Verifying : onedrive-2.4.15-1.el7.x86_64 1/1
Installing : onedrive-2.4.0-1.el7.x86_64 1/1
Verifying : onedrive-2.4.0-1.el7.x86_64 1/1
Installed:
onedrive.x86_64 0:2.4.15-1.el7
onedrive.x86_64 0:2.4.0-1.el7
Complete!
[alex@localhost ~]$ which onedrive
/usr/bin/onedrive
[alex@localhost ~]$ onedrive --version
onedrive v2.4.15
onedrive v2.4.0
[alex@localhost ~]$ onedrive --display-config
onedrive version = v2.4.15
onedrive version = v2.4.0
Config path = /home/alex/.config/onedrive
Config file found in config path = false
Config option 'check_nosync' = false
@ -187,38 +374,34 @@ Config option 'skip_dir' =
Config option 'skip_file' = ~*|.~*|*.tmp
Config option 'skip_dotfiles' = false
Config option 'skip_symlinks' = false
Config option 'monitor_interval' = 300
Config option 'monitor_interval' = 45
Config option 'min_notify_changes' = 5
Config option 'log_dir' = /var/log/onedrive/
Config option 'classify_as_big_delete' = 1000
Config option 'upload_only' = false
Config option 'no_remote_delete' = false
Config option 'remove_source_files' = false
Config option 'sync_root_files' = false
Selective sync 'sync_list' configured = false
Business Shared Folders configured = false
Selective sync configured = false
[alex@localhost ~]$
```
### CentOS 8
```text
[alex@localhost ~]$ rpmbuild -ba ~/rpmbuild/SPECS/onedrive.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.UINFyE
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.5LOfYv
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd /home/alex/rpmbuild/BUILD
+ rm -rf onedrive-2.4.15
+ /usr/bin/gzip -dc /home/alex/rpmbuild/SOURCES/v2.4.15.tar.gz
+ rm -rf onedrive-2.4.0
+ /usr/bin/gzip -dc /home/alex/rpmbuild/SOURCES/v2.4.0.tar.gz
+ /usr/bin/tar -xof -
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd onedrive-2.4.15
+ cd onedrive-2.4.0
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.cX1WQa
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.HRIOjX
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.15
+ cd onedrive-2.4.0
+ CFLAGS='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
+ export CFLAGS
+ CXXFLAGS='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
@ -241,7 +424,7 @@ checking for a BSD-compatible install... /usr/bin/install -c
checking for x86_64-redhat-linux-gnu-pkg-config... /usr/bin/x86_64-redhat-linux-gnu-pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for dmd... dmd
checking version of D compiler... 2.087.0
checking version of D compiler... 2.091.0
checking for curl... yes
checking for sqlite... yes
configure: creating ./config.status
@ -256,31 +439,31 @@ configure: WARNING: unrecognized options: --disable-dependency-tracking
if [ -f .git/HEAD ] ; then \
git describe --tags > version ; \
else \
echo v2.4.15 > version ; \
echo v2.4.0 > version ; \
fi
dmd -w -g -O -J. -L-lcurl -L-lsqlite3 -L-ldl src/config.d src/itemdb.d src/log.d src/main.d src/monitor.d src/onedrive.d src/qxor.d src/selective.d src/sqlite.d src/sync.d src/upload.d src/util.d src/progress.d src/arsd/cgi.d -ofonedrive
dmd -w -g -O -J. -L-lcurl -L-lsqlite3 -L-ldl src/config.d src/itemdb.d src/log.d src/main.d src/monitor.d src/onedrive.d src/qxor.d src/selective.d src/sqlite.d src/sync.d src/upload.d src/util.d src/progress.d -ofonedrive
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.dNFPdx
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.u9F8Hd
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ '[' /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64 '!=' / ']'
+ rm -rf /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64
++ dirname /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64
+ '[' /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64 '!=' / ']'
+ rm -rf /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64
++ dirname /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64
+ mkdir -p /home/alex/rpmbuild/BUILDROOT
+ mkdir /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64
+ cd onedrive-2.4.15
+ /usr/bin/make install DESTDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64 'INSTALL=/usr/bin/install -p' PREFIX=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64
/usr/bin/install -p -D onedrive /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/bin/onedrive
/usr/bin/install -p -D onedrive.1 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/man/man1/onedrive.1
/usr/bin/install -p -D -m 644 contrib/logrotate/onedrive.logrotate /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/etc/logrotate.d/onedrive
mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
/usr/bin/install -p -D -m 644 README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/SharePoint-Shared-Libraries.md docs/USAGE.md docs/BusinessSharedFolders.md docs/advanced-usage.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
/usr/bin/install -p -d -m 0755 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/lib/systemd/user /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/lib/systemd/system
/usr/bin/install -p -m 0644 contrib/systemd/onedrive@.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/lib/systemd/system
/usr/bin/install -p -m 0644 contrib/systemd/onedrive.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/lib/systemd/system
+ mkdir /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64
+ cd onedrive-2.4.0
+ /usr/bin/make install DESTDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64 'INSTALL=/usr/bin/install -p' PREFIX=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64
/usr/bin/install -p -D onedrive /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/bin/onedrive
/usr/bin/install -p -D onedrive.1 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/man/man1/onedrive.1
/usr/bin/install -p -D -m 644 contrib/logrotate/onedrive.logrotate /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/etc/logrotate.d/onedrive
mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
/usr/bin/install -p -D -m 644 README.md config LICENSE CHANGELOG.md docs/Docker.md docs/INSTALL.md docs/Office365.md docs/USAGE.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
/usr/bin/install -p -d -m 0755 /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/lib/systemd/user /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/lib/systemd/system
/usr/bin/install -p -m 0644 contrib/systemd/onedrive@.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/lib/systemd/system
/usr/bin/install -p -m 0644 contrib/systemd/onedrive.service /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/lib/systemd/system
+ /usr/lib/rpm/check-buildroot
+ /usr/lib/rpm/redhat/brp-ldconfig
/sbin/ldconfig: Warning: ignoring configuration file that cannot be opened: /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/etc/ld.so.conf: No such file or directory
/sbin/ldconfig: Warning: ignoring configuration file that cannot be opened: /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/etc/ld.so.conf: No such file or directory
+ /usr/lib/rpm/brp-compress
+ /usr/lib/rpm/brp-strip /usr/bin/strip
+ /usr/lib/rpm/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump
@ -289,53 +472,53 @@ mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/do
+ /usr/lib/rpm/brp-python-hardlink
+ PYTHON3=/usr/libexec/platform-python
+ /usr/lib/rpm/redhat/brp-mangle-shebangs
Processing files: onedrive-2.4.15-1.el8.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.TnFKbZ
Processing files: onedrive-2.4.0-1.el8.x86_64
Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.zi889w
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.15
+ DOCDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
+ cd onedrive-2.4.0
+ DOCDIR=/home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
+ export LC_ALL=C
+ LC_ALL=C
+ export DOCDIR
+ /usr/bin/mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
+ cp -pr README.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
+ cp -pr LICENSE /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
+ cp -pr CHANGELOG.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64/usr/share/doc/onedrive
+ /usr/bin/mkdir -p /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
+ cp -pr README.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
+ cp -pr LICENSE /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
+ cp -pr CHANGELOG.md /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64/usr/share/doc/onedrive
+ exit 0
warning: File listed twice: /usr/share/doc/onedrive
warning: File listed twice: /usr/share/doc/onedrive/CHANGELOG.md
warning: File listed twice: /usr/share/doc/onedrive/LICENSE
warning: File listed twice: /usr/share/doc/onedrive/README.md
Provides: config(onedrive) = 2.4.15-1.el8 onedrive = 2.4.15-1.el8 onedrive(x86-64) = 2.4.15-1.el8
Provides: config(onedrive) = 2.4.0-1.el8 onedrive = 2.4.0-1.el8 onedrive(x86-64) = 2.4.0-1.el8
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.15)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.3.4)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.6)(64bit) libc.so.6(GLIBC_2.8)(64bit) libc.so.6(GLIBC_2.9)(64bit) libcurl.so.4()(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_4.2.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) libpthread.so.0(GLIBC_2.3.4)(64bit) librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) libsqlite3.so.0()(64bit) rtld(GNU_HASH)
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.15-1.el8.x86_64
Wrote: /home/alex/rpmbuild/SRPMS/onedrive-2.4.15-1.el8.src.rpm
Wrote: /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.15-1.el8.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.FAMTFz
Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.4)(64bit) libc.so.6(GLIBC_2.6)(64bit) libcurl.so.4()(64bit) libdl.so.2()(64bit) libdl.so.2(GLIBC_2.2.5)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_4.2.0)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libpthread.so.0()(64bit) libpthread.so.0(GLIBC_2.2.5)(64bit) libpthread.so.0(GLIBC_2.3.2)(64bit) libpthread.so.0(GLIBC_2.3.3)(64bit) libpthread.so.0(GLIBC_2.3.4)(64bit) librt.so.1()(64bit) librt.so.1(GLIBC_2.2.5)(64bit) libsqlite3.so.0()(64bit) rtld(GNU_HASH)
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/alex/rpmbuild/BUILDROOT/onedrive-2.4.0-1.el8.x86_64
Wrote: /home/alex/rpmbuild/SRPMS/onedrive-2.4.0-1.el8.src.rpm
Wrote: /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el8.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.XEoFDV
+ umask 022
+ cd /home/alex/rpmbuild/BUILD
+ cd onedrive-2.4.15
+ cd onedrive-2.4.0
+ exit 0
[alex@localhost ~]$ sudo yum -y install /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.15-1.el8.x86_64.rpm
Last metadata expiration check: 0:04:07 ago on Fri 14 Jan 2022 14:22:13 EST.
[alex@localhost ~]$ sudo yum -y install /home/alex/rpmbuild/RPMS/x86_64/onedrive-2.4.0-1.el8.x86_64.rpm
Last metadata expiration check: 0:34:12 ago on Fri 17 Apr 2020 18:11:23 EDT.
Dependencies resolved.
==============================================================================================================================================================================================
Package Architecture Version Repository Size
==============================================================================================================================================================================================
Installing:
onedrive x86_64 2.4.15-1.el8 @commandline 1.5 M
onedrive x86_64 2.4.0-1.el8 @commandline 1.2 M
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package
Total size: 1.5 M
Installed size: 7.1 M
Total size: 1.2 M
Installed size: 5.7 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
@ -343,20 +526,20 @@ Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : onedrive-2.4.15-1.el8.x86_64 1/1
Running scriptlet: onedrive-2.4.15-1.el8.x86_64 1/1
Verifying : onedrive-2.4.15-1.el8.x86_64 1/1
Installing : onedrive-2.4.0-1.el8.x86_64 1/1
Running scriptlet: onedrive-2.4.0-1.el8.x86_64 1/1
Verifying : onedrive-2.4.0-1.el8.x86_64 1/1
Installed:
onedrive-2.4.15-1.el8.x86_64
onedrive-2.4.0-1.el8.x86_64
Complete!
[alex@localhost ~]$ which onedrive
/usr/bin/onedrive
[alex@localhost ~]$ onedrive --version
onedrive v2.4.15
onedrive v2.4.0
[alex@localhost ~]$ onedrive --display-config
onedrive version = v2.4.15
onedrive version = v2.4.0
Config path = /home/alex/.config/onedrive
Config file found in config path = false
Config option 'check_nosync' = false
@ -365,15 +548,11 @@ Config option 'skip_dir' =
Config option 'skip_file' = ~*|.~*|*.tmp
Config option 'skip_dotfiles' = false
Config option 'skip_symlinks' = false
Config option 'monitor_interval' = 300
Config option 'monitor_interval' = 45
Config option 'min_notify_changes' = 5
Config option 'log_dir' = /var/log/onedrive/
Config option 'classify_as_big_delete' = 1000
Config option 'upload_only' = false
Config option 'no_remote_delete' = false
Config option 'remove_source_files' = false
Config option 'sync_root_files' = false
Selective sync 'sync_list' configured = false
Business Shared Folders configured = false
[alex@localhost ~]$
```
Selective sync configured = false
[alex@localhost ~]$
```

@ -6,7 +6,7 @@ The below are known issues with this client:
**Description:**
When running the client in standalone mode (`--synchronize`) moving folders that are successfully synced around between subsequent standalone syncs causes a deletion & re-upload of data to occur.
When running the client in standalone mode (`--synchronize`) moving folders that are sucessfully synced around between subseqant standalone syncs causes a deletion & re-upload of data to occur.
**Explanation:**
@ -17,13 +17,11 @@ Technically, the client is 'working' correctly, as, when moving files, you are '
If the tracking of moving data to new local directories is required, it is better to run the client in service mode (`--monitor`) rather than in standalone mode, as the 'move' of files can then be handled at the point when it occurs, so that the data is moved to the new location on OneDrive without the need to be deleted and re-uploaded.
## Application 'stops' running without any visible reason
**Issue Tracker:** [#494](https://github.com/abraunegg/onedrive/issues/494), [#753](https://github.com/abraunegg/onedrive/issues/753), [#792](https://github.com/abraunegg/onedrive/issues/792), [#884](https://github.com/abraunegg/onedrive/issues/884), [#1162](https://github.com/abraunegg/onedrive/issues/1162), [#1408](https://github.com/abraunegg/onedrive/issues/1408), [#1520](https://github.com/abraunegg/onedrive/issues/1520), [#1526](https://github.com/abraunegg/onedrive/issues/1526)
**Issue Tracker:** [#494](https://github.com/abraunegg/onedrive/issues/494), [#753](https://github.com/abraunegg/onedrive/issues/753), [#792](https://github.com/abraunegg/onedrive/issues/792), [#884](https://github.com/abraunegg/onedrive/issues/884)
**Description:**
When running the client and performing an upload or download operation, the application just stops working without any reason or explanation. If `echo $?` is used after the application has exited without visible reason, an error level of 141 may be provided.
Additionally, this issue has mainly been seen when the client is operating against Microsoft's Europe Data Centres.
When running the client and performing an upload or download operation, the application just stops working without any reason or explanation.
**Explanation:**

@ -1,15 +1,11 @@
# How to configure access to specific Microsoft Azure deployments
## Application Version
Before reading this document, please ensure you are running application version [![Version](https://img.shields.io/github/v/release/abraunegg/onedrive)](https://github.com/abraunegg/onedrive/releases) or greater. Use `onedrive --version` to determine what application version you are using and upgrade your client if required.
## Process Overview
In some cases it is a requirement to utilise specific Microsoft Azure cloud deployments to conform with data and security requirements that require data to reside within the geographic borders of that country.
Current national clouds that are supported are:
* Microsoft Cloud for US Government
* Microsoft Cloud Germany
* Azure and Office365 operated by 21Vianet in China
* Azure and Office 365 operated by 21Vianet in China
In order to successfully use these specific Microsoft Azure deployments, the following steps are required:
In order to sucessfully use these specific Microsoft Azure deployments, the following steps are required:
1. Register an application with the Microsoft identity platform using the Azure portal
2. Configure the new application with the appropriate authentication scopes
3. Validate that the authentication / redirect URI is correct for your application registration
@ -18,14 +14,7 @@ In order to successfully use these specific Microsoft Azure deployments, the fol
6. Authenticate the client
## Step 1: Register a new application with Microsoft Azure
1. Log into your applicable Microsoft Azure Portal with your applicable Office365 identity:
| National Cloud Environment | Microsoft Azure Portal |
|---|---|
| Microsoft Cloud for US Government | https://portal.azure.com/ |
| Microsoft Cloud Germany | https://portal.azure.com/ |
| Azure and Office365 operated by 21Vianet | https://portal.azure.cn/ |
1. Log into [Microsoft Azure](https://portal.azure.com/) with your applicable identity
2. Select 'Azure Active Directory' as the service you wish to configure
3. Under 'Manage', select 'App registrations' to register a new application
4. Click 'New registration'
@ -46,8 +35,9 @@ Configure the API permissions as per the following:
|---|---|---|---|
| Files.ReadWrite | Delegated | Have full access to user files | No |
| Files.ReadWrite.All | Delegated | Have full access to all files user can access | No |
| Sites.ReadWrite.All | Delegated | Have full access to all items in all site collections | No |
| offline_access | Delegated | Maintain access to data you have given it access to | No |
| Sites.Read.All | Delegated | Read items in all site collections | No |
| Sites.ReadWrite.All | Delegated | Edit or delete items in all site collections | No |
![authentication_scopes](./images/authentication_scopes.jpg)
@ -59,12 +49,12 @@ Add the appropriate redirect URI for your Azure deployment:
A valid entry for the response URI should be one of:
* https://login.microsoftonline.us/common/oauth2/nativeclient (Microsoft Cloud for US Government)
* https://login.microsoftonline.de/common/oauth2/nativeclient (Microsoft Cloud Germany)
* https://login.chinacloudapi.cn/common/oauth2/nativeclient (Azure and Office365 operated by 21Vianet in China)
* https://login.chinacloudapi.cn/common/oauth2/nativeclient (Azure and Office 365 operated by 21Vianet in China)
For a single-tenant application, it may be necessary to use your specific tenant id instead of "common":
* https://login.microsoftonline.us/example.onmicrosoft.us/oauth2/nativeclient (Microsoft Cloud for US Government)
* https://login.microsoftonline.de/example.onmicrosoft.de/oauth2/nativeclient (Microsoft Cloud Germany)
* https://login.chinacloudapi.cn/example.onmicrosoft.cn/oauth2/nativeclient (Azure and Office365 operated by 21Vianet in China)
* https://login.chinacloudapi.cn/example.onmicrosoft.cn/oauth2/nativeclient (Azure and Office 365 operated by 21Vianet in China)
## Step 4: Configure the onedrive client to use new application registration
Update to your 'onedrive' configuration file (`~/.config/onedrive/config`) the following:
@ -89,7 +79,7 @@ Valid entries are:
* USL4 (Microsoft Cloud for US Government)
* USL5 (Microsoft Cloud for US Government - DOD)
* DE (Microsoft Cloud Germany)
* CN (Azure and Office365 operated by 21Vianet in China)
* CN (Azure and Office 365 operated by 21Vianet in China)
This will configure your client to use the correct Azure AD and Graph endpoints as per [https://docs.microsoft.com/en-us/graph/deployments](https://docs.microsoft.com/en-us/graph/deployments)
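As a sketch, the relevant entries in `~/.config/onedrive/config` would look like the following - the 'application_id' value is a placeholder for the Application (client) ID generated by your own Azure app registration:
```text
application_id = "00000000-0000-0000-0000-000000000000"
azure_ad_endpoint = "USL4"
```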

@ -1,65 +0,0 @@
# Privacy Policy
Effective Date: May 16 2018
## Introduction
This Privacy Policy outlines how OneDrive Client for Linux ("we," "our," or "us") collects, uses, and protects information when you use our software ("OneDrive Client for Linux"). We respect your privacy and are committed to ensuring the confidentiality and security of any information you provide while using the Software.
## Information We Do Not Collect
We want to be transparent about the fact that we do not collect any personal data, usage data, or tracking data through the Software. This means:
1. **No Personal Data**: We do not collect any information that can be used to personally identify you, such as your name, email address, phone number, or physical address.
2. **No Usage Data**: We do not collect data about how you use the Software, such as the features you use, the duration of your sessions, or any interactions within the Software.
3. **No Tracking Data**: We do not use cookies or similar tracking technologies to monitor your online behavior or track your activities across websites or apps.
## How We Use Your Information
Since we do not collect any personal, usage, or tracking data, there is no information for us to use for any purpose.
## Third-Party Services
The Software may include links to third-party websites or services, but we do not have control over the privacy practices or content of these third-party services. We encourage you to review the privacy policies of any third-party services you access through the Software.
## Children's Privacy
Since we do not collect any personal, usage, or tracking data, there is no restriction on the use of this application by anyone under the age of 18.
## Information You Choose to Share
While we do not collect personal data, usage data, or tracking data through the Software, there may be instances where you voluntarily choose to share information with us, particularly when submitting bug reports. These bug reports may contain sensitive information such as account details, file names, and directory names. It's important to note that these details are included in the logs and debug logs solely for the purpose of diagnosing and resolving technical issues with the Software.
We want to emphasize that, even in these cases, we do not have access to your actual data. The logs and debug logs provided in bug reports are used exclusively for technical troubleshooting and debugging purposes. We take measures to treat this information with the utmost care, and it is only accessible to our technical support and development teams. We do not use this information for any other purpose, and we have strict security measures in place to protect it.
## Protecting Your Sensitive Data
We are committed to safeguarding your sensitive data and maintaining its confidentiality. To ensure its protection:
1. **Limited Access**: Only authorized personnel within our technical support and development teams have access to the logs and debug logs containing sensitive data, and they are trained in handling this information securely.
2. **Data Encryption**: We use industry-standard encryption protocols to protect the transmission and storage of sensitive data.
3. **Data Retention**: We retain bug report data for a limited time necessary for resolving the reported issue. Once the issue is resolved, we promptly delete or anonymize the data.
4. **Security Measures**: We employ robust security measures to prevent unauthorized access, disclosure, or alteration of sensitive data.
By submitting a bug report, you acknowledge and consent to the inclusion of sensitive information in logs and debug logs for the sole purpose of addressing technical issues with the Software.
## Your Responsibilities
While we take measures to protect your sensitive data, it is essential for you to exercise caution when submitting bug reports. Please refrain from including any sensitive or personally identifiable information that is not directly related to the technical issue you are reporting. You have the option to redact or obfuscate sensitive details in bug reports to further protect your data.
## Changes to this Privacy Policy
We may update this Privacy Policy from time to time to reflect changes in our practices or for other operational, legal, or regulatory reasons. We will notify you of any material changes by posting the updated Privacy Policy on our website or through the Software. We encourage you to review this Privacy Policy periodically.
## Contact Us
If you have any questions or concerns about this Privacy Policy or our privacy practices, please contact us at support@mynas.com.au or via GitHub (https://github.com/abraunegg/onedrive)
## Conclusion
By using the Software, you agree to the terms outlined in this Privacy Policy. If you do not agree with any part of this policy, please discontinue the use of the Software.

View file

@ -1,54 +0,0 @@
# OneDrive Client for Linux - Software Service Terms of Service
## 1. Introduction
These Terms of Service ("Terms") govern your use of the OneDrive Client for Linux ("Application") software and related Microsoft OneDrive services ("Service") provided by Microsoft. By accessing or using the Service, you agree to comply with and be bound by these Terms. If you do not agree to these Terms, please do not use the Service.
## 2. License Compliance
The OneDrive Client for Linux software is licensed under the GNU General Public License, version 3.0 (the "GPLv3"). Your use of the software must comply with the terms and conditions of the GPLv3. A copy of the GPLv3 can be found here: https://www.gnu.org/licenses/gpl-3.0.en.html
## 3. Use of the Service
### 3.1. Access and Accounts
You may need to create an account or provide personal information to access certain features of the Service. You are responsible for maintaining the confidentiality of your account information and are solely responsible for all activities that occur under your account.
### 3.2. Prohibited Activities
You agree not to:
- Use the Service in any way that violates applicable laws or regulations.
- Use the Service to engage in any unlawful, harmful, or fraudulent activity.
- Use the Service in any manner that disrupts, damages, or impairs the Service.
## 4. Intellectual Property
The OneDrive Client for Linux software is subject to the GPLv3, and you must respect all copyrights, trademarks, and other intellectual property rights associated with the software. Any contributions you make to the software must also comply with the GPLv3.
## 5. Disclaimer of Warranties
The OneDrive Client for Linux software is provided "as is" without any warranties, either expressed or implied. We do not guarantee that the use of the Application will be error-free or uninterrupted.
Microsoft is not responsible for OneDrive Client for Linux. Any issues or problems with OneDrive Client for Linux should be raised on GitHub at https://github.com/abraunegg/onedrive or email support@mynas.com.au
OneDrive Client for Linux is not responsible for the Microsoft OneDrive Service or the Microsoft Graph API Service that this Application utilizes. Any issue with either Microsoft OneDrive or Microsoft Graph API should be raised with Microsoft via their support channel in your country.
## 6. Limitation of Liability
To the fullest extent permitted by law, we shall not be liable for any direct, indirect, incidental, special, consequential, or punitive damages, or any loss of profits or revenues, whether incurred directly or indirectly, or any loss of data, use, goodwill, or other intangible losses, resulting from (a) your use or inability to use the Service, or (b) any other matter relating to the Service.
This limitation of liability explicitly relates to the use of the OneDrive Client for Linux software and does not affect your rights under the GPLv3.
## 7. Changes to Terms
We reserve the right to update or modify these Terms at any time without prior notice. Any changes will be effective immediately upon posting on GitHub. Your continued use of the Service after the posting of changes constitutes your acceptance of such changes. Changes can be reviewed on GitHub.
## 8. Governing Law
These Terms shall be governed by and construed in accordance with the laws of Australia, without regard to its conflict of law principles.
## 9. Contact Us
If you have any questions or concerns about these Terms, please contact us via GitHub at https://github.com/abraunegg/onedrive or email support@mynas.com.au.

View file

@ -1,414 +0,0 @@
# Installation of 'onedrive' package on Debian and Ubuntu
This document covers the appropriate steps to install the 'onedrive' client using the provided packages for Debian and Ubuntu.
#### Important information for all Ubuntu and Ubuntu based distribution users:
This information is specifically for the following platforms and distributions:
* Lubuntu
* Linux Mint
* POP OS
* Peppermint OS
* Raspbian
* Ubuntu
Whilst there are [onedrive](https://packages.ubuntu.com/search?keywords=onedrive&searchon=names&suite=all&section=all) Universe packages available for Ubuntu, do not install 'onedrive' from them: the Ubuntu Universe packages are out-of-date, are not supported, and should not be used.
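If you are unsure whether an existing 'onedrive' installation came from the Ubuntu Universe repository, the package origin can be checked before proceeding. The command below is a suggested check only:
```text
apt-cache policy onedrive
```
If the output shows an Ubuntu archive (Universe) as the installed source, remove that package first as described in Step 1 below.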
## Determine which instructions to use
Ubuntu and its clones are based on various different releases; you must therefore use the correct instructions below, otherwise you may run into package dependency issues and be unable to install the client.
### Step 1: Remove any configured PPA and associated 'onedrive' package and systemd service files
Many Internet 'help' pages provide inconsistent details on how to install the OneDrive Client for Linux. A number of these websites continue to point users to install the client via the yann1ck PPA repository; however, this PPA no longer exists and should not be used.
To remove the PPA repository and the older client, perform the following actions:
```text
sudo apt remove onedrive
sudo add-apt-repository --remove ppa:yann1ck/onedrive
```
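To confirm that no stale PPA entry remains, the configured apt sources can be searched. The exact pattern below is illustrative; no output means no matching entry was found:
```text
grep -ri yann1ck /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
```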
Additionally, Ubuntu and its clones have a bad habit of creating a 'default' systemd service file when installing the 'onedrive' package, so that the client runs automatically once it has been authenticated. This systemd entry is erroneous and needs to be removed. During package installation you will typically see the following output indicating that the link was created:
```
Created symlink /etc/systemd/user/default.target.wants/onedrive.service → /usr/lib/systemd/user/onedrive.service.
```
To remove this symbolic link, run the following command:
```
sudo rm /etc/systemd/user/default.target.wants/onedrive.service
```
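To verify the erroneous service link is gone, list the directory again; if 'onedrive.service' is no longer shown (or the directory itself no longer exists), the link has been removed:
```text
ls -l /etc/systemd/user/default.target.wants/ 2>/dev/null
```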
### Step 2: Ensure your system is up-to-date
Use a script similar to the following to ensure your system is updated correctly:
```text
#!/bin/bash
rm -rf /var/lib/dpkg/lock-frontend
rm -rf /var/lib/dpkg/lock
apt-get update
apt-get upgrade -y
apt-get dist-upgrade -y
apt-get autoremove -y
apt-get autoclean -y
```
Run this script as 'root', using `su -` to elevate your privileges. Example below:
```text
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-48-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
425 updates can be installed immediately.
208 of these updates are security updates.
To see these additional updates run: apt list --upgradable
Your Hardware Enablement Stack (HWE) is supported until April 2025.
Last login: Thu Jan 20 14:21:48 2022 from my.ip.address
alex@ubuntu-20-LTS:~$ su -
Password:
root@ubuntu-20-LTS:~# ls -la
total 28
drwx------ 3 root root 4096 Oct 10 2020 .
drwxr-xr-x 20 root root 4096 Oct 10 2020 ..
-rw------- 1 root root 175 Jan 20 14:23 .bash_history
-rw-r--r-- 1 root root 3106 Dec 6 2019 .bashrc
drwx------ 2 root root 4096 Apr 23 2020 .cache
-rw-r--r-- 1 root root 161 Dec 6 2019 .profile
-rwxr-xr-x 1 root root 174 Oct 10 2020 update-os.sh
root@ubuntu-20-LTS:~# cat update-os.sh
#!/bin/bash
rm -rf /var/lib/dpkg/lock-frontend
rm -rf /var/lib/dpkg/lock
apt-get update
apt-get upgrade -y
apt-get dist-upgrade -y
apt-get autoremove -y
apt-get autoclean -y
root@ubuntu-20-LTS:~# ./update-os.sh
Hit:1 http://au.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://au.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://au.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease
Reading package lists... 96%
...
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.13.0-27-generic
Found initrd image: /boot/initrd.img-5.13.0-27-generic
Found linux image: /boot/vmlinuz-5.4.0-48-generic
Found initrd image: /boot/initrd.img-5.4.0-48-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
Removing linux-modules-5.4.0-26-generic (5.4.0-26.30) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
Reading package lists... Done
Building dependency tree
Reading state information... Done
root@ubuntu-20-LTS:~#
```
Reboot your system once this update process has completed, before continuing with Step 3.
```text
reboot
```
### Step 3: Determine what your OS is based on
To determine what your OS is based on, run the following command:
```text
lsb_release -a
```
**Example:**
```text
alex@ubuntu-system:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04 LTS
Release: 22.04
Codename: jammy
```
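If `lsb_release` is not installed on your system, the same information can generally be read from `/etc/os-release`. For example, the following prints the distribution name, release and codename:
```text
grep -E '^(NAME|VERSION_ID|VERSION_CODENAME)=' /etc/os-release
```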
### Step 4: Pick the correct instructions to use
If required, review the table below based on your 'lsb_release' information to pick the appropriate instructions to use:
| Release & Codename | Instructions to use |
|--------------------|---------------------|
| Linux Mint 19.x | This platform is End-of-Life (EOL) and no longer supported. You must upgrade to Linux Mint 21.x |
| Linux Mint 20.x | Use [Ubuntu 20.04](#distribution-ubuntu-2004) instructions below |
| Linux Mint 21.x | Use [Ubuntu 22.04](#distribution-ubuntu-2204) instructions below |
| Linux Mint Debian Edition (LMDE) 5 / Elsie | Use [Debian 11](#distribution-debian-11) instructions below |
| Linux Mint Debian Edition (LMDE) 6 / Faye | Use [Debian 12](#distribution-debian-12) instructions below |
| Debian 9 | This platform is End-of-Life (EOL) and no longer supported. You must upgrade to Debian 12 |
| Debian 10 | You must build from source or upgrade your Operating System to Debian 12 |
| Debian 11 | Use [Debian 11](#distribution-debian-11) instructions below |
| Debian 12 | Use [Debian 12](#distribution-debian-12) instructions below |
| Raspbian GNU/Linux 10 | You must build from source or upgrade your Operating System to Raspbian GNU/Linux 12 |
| Raspbian GNU/Linux 11 | Use [Debian 11](#distribution-debian-11) instructions below |
| Raspbian GNU/Linux 12 | Use [Debian 12](#distribution-debian-12) instructions below |
| Ubuntu 18.04 / Bionic | This platform is End-of-Life (EOL) and no longer supported. You must upgrade to Ubuntu 22.04 |
| Ubuntu 20.04 / Focal | Use [Ubuntu 20.04](#distribution-ubuntu-2004) instructions below |
| Ubuntu 21.04 / Hirsute | Use [Ubuntu 21.04](#distribution-ubuntu-2104) instructions below |
| Ubuntu 21.10 / Impish | Use [Ubuntu 21.10](#distribution-ubuntu-2110) instructions below |
| Ubuntu 22.04 / Jammy | Use [Ubuntu 22.04](#distribution-ubuntu-2204) instructions below |
| Ubuntu 22.10 / Kinetic | Use [Ubuntu 22.10](#distribution-ubuntu-2210) instructions below |
| Ubuntu 23.04 / Lunar | Use [Ubuntu 23.04](#distribution-ubuntu-2304) instructions below |
| Ubuntu 23.10 / Mantic | Use [Ubuntu 23.10](#distribution-ubuntu-2310) instructions below |
## Distribution Package Install Instructions
### Distribution: Debian 11
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|✔|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/Debian_11/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/Debian_11/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
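Although not part of the packaged steps above, it can be useful to confirm the installation succeeded and that the package was sourced from the OpenSuSE Build Service repository rather than the distribution archives:
```text
onedrive --version
apt-cache policy onedrive
```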
### Distribution: Debian 12
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|✔|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/Debian_12/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/Debian_12/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 20.04
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_20.04/Release.key | sudo apt-key add -
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo 'deb https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_20.04/ ./' | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 21.04
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_21.04/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_21.04/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 21.10
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_21.10/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_21.10/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 22.04
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_22.04/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_22.04/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 22.10
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_22.10/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_22.10/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 23.04
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|✔|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_23.04/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_23.04/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
### Distribution: Ubuntu 23.10
The packages support the following platform architectures:
| &nbsp;i686&nbsp; | x86_64 | ARMHF | AARCH64 |
|:----:|:------:|:-----:|:-------:|
|❌|✔|❌|✔|
#### Step 1: Add the OpenSuSE Build Service repository release key
Add the OpenSuSE Build Service repository release key using the following command:
```text
wget -qO - https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_23.10/Release.key | gpg --dearmor | sudo tee /usr/share/keyrings/obs-onedrive.gpg > /dev/null
```
#### Step 2: Add the OpenSuSE Build Service repository
Add the OpenSuSE Build Service repository using the following command:
```text
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/obs-onedrive.gpg] https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_23.10/ ./" | sudo tee /etc/apt/sources.list.d/onedrive.list
```
#### Step 3: Update your apt package cache
Run: `sudo apt-get update`
#### Step 4: Install 'onedrive'
Run: `sudo apt install --no-install-recommends --no-install-suggests onedrive`
#### Step 5: Read 'Known Issues' with these packages
Read and understand the [known issues](#known-issues-with-installing-from-the-above-packages) with these packages below, taking any action that is needed.
## Known Issues with Installing from the above packages
### 1. The client may segfault | core-dump when exiting
When the client is run in `--monitor` mode manually, or when using the systemd service, the client may segfault on exit.
These packages are built with the distribution LDC package and the default distribution compiler options, which is the root cause of this issue. Refer to: https://bugs.launchpad.net/ubuntu/+source/ldc/+bug/1895969
**Additional references:**
* https://github.com/abraunegg/onedrive/issues/1053
* https://github.com/abraunegg/onedrive/issues/1609
**Resolution Options:**
* Uninstall the package and build the client from source (a minimal sketch of this process is shown below)
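The following is a minimal sketch of that process for a Debian or Ubuntu based system. The package names are assumptions based on typical build requirements; refer to the project's documentation on building from source for the authoritative dependency list for your platform:
```text
# Remove the packaged client
sudo apt remove onedrive

# Install assumed build dependencies (names may differ on your release)
sudo apt install build-essential libcurl4-openssl-dev libsqlite3-dev pkg-config git curl ldc

# Build and install the client from source
git clone https://github.com/abraunegg/onedrive.git
cd onedrive
./configure
make clean; make
sudo make install
```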

View file

@ -23,9 +23,6 @@ Perform authorization via two files passed in as \fBARG\fP in the format \fBauth
The authorization URL is written to the \fBauthUrl\fP, then \fBonedrive\fP waits for
the file \fBresponseUrl\fP to be present, and reads the response from that file.
.TP
\fB\-\-auth\-response\fP ARG
Perform authentication not via interactive dialog but via providing the response url directly.
.TP
\fB\-\-check\-for\-nomount\fP
Check for the presence of .nosync in the syncdir root. If found, do not perform sync.
.br
@ -41,11 +38,6 @@ Number of children in a path that is locally removed which will be classified as
.br
Configuration file key: \fBclassify_as_big_delete\fP (default: \fB1000\fP)
.TP
\fB\-\-cleanup\-local\-files\fP
Cleanup additional local files when using \-\-download-only. This will remove local data.
.br
Configuration file key: \fBcleanup_local_files\fP (default: \fBfalse\fP)
.TP
\fB\-\-confdir\fP ARG
Set the directory used to store the configuration files
.TP
@ -63,11 +55,6 @@ Configuration file key: \fBdebug_https\fP (default: \fBfalse\fP)
\fB\-\-destination\-directory\fP ARG
Destination directory for renamed or move on OneDrive \- no sync will be performed.
.TP
\fB\-\-disable\-download\-validation\fP
Disable download validation when downloading from OneDrive
.br
Configuration file key: \fBdisable_download_validation\fP (default: \fBfalse\fP)
.TP
\fB\-\-disable\-notifications\fP
Do not use desktop notifications in monitor mode
.br
@ -81,9 +68,6 @@ Configuration file key: \fBdisable_upload_validation\fP (default: \fBfalse\fP)
\fB\-\-display\-config\fP
Display what options the client will use as currently configured \- no sync will be performed.
.TP
\fB\-\-display\-running\-config\fP
Display what options the client has been configured to use on application startup.
.TP
\fB\-\-display\-sync\-status\fP
Display the sync status of the client \- no sync will be performed.
.TP
@ -105,15 +89,15 @@ Configuration file key: \fBenable_logging\fP (default: \fBfalse\fP)
\fB\-\-force\fP
Force the deletion of data when a 'big delete' is detected
.TP
\fB\-\-force\-http\-11\fP
Force the use of HTTP 1.1 for all operations
\fB\-\-force\-http\-1.1\fP
Force the use of HTTP 1.1 for all operations (DEPRECIATED)
.br
Configuration file key: \fBforce_http_11\fP (default: \fBfalse\fP)
.TP
\fB\-\-force\-sync\fP
Force a synchronization of a specific folder, only when using --synchronize --single-directory and ignore
\fB\-\-force\-http\-2\fP
Force the use of HTTP/2 for all operations where applicable
.br
all non-default skip_dir and skip_file rules
Configuration file key: \fBforce_http_2\fP (default: \fBfalse\fP)
.TP
\fB\-\-get\-O365\-drive\-id\fP ARG
Query and return the Office 365 Drive ID for a given Office 365 SharePoint Shared Library
@ -137,15 +121,12 @@ defines the directory where logging output is saved to, needs to end with a slas
.br
Configuration file key: \fBlog_dir\fP (default: \fB/var/log/onedrive/\fP)
.TP
\fB\-\-min\-notify\-changes\fP
\fB\-\-min-notify-changes\fP
the minimum number of pending incoming changes necessary to trigger
a desktop notification
.br
Configuration file key: \fBmin_notify_changes\fP (default: \fB5\fP)
.TP
\fB\-m \-\-modified\-by\fP ARG
Display the last modified by details of a given path
.TP
\fB\-m \-\-monitor\fP
Keep monitoring for local and remote changes
.TP
@ -170,16 +151,11 @@ Do not delete local file 'deletes' from OneDrive when using \fB\-\-upload\-only\
.br
Configuration file key: \fBno_remote_delete\fP (default: \fBfalse\fP)
.TP
\fB\-\-operation\-timeout\fP ARG
Set the maximum amount of time (seconds) a file operation is allowed to take. This includes DNS resolution, connecting, data transfer, etc.
.br
Configuration file key: \fBoperation_timeout\fP (default: \fB3600\fP)
.TP
\fB\-\-print\-token\fP
Print the access token, useful for debugging
.TP
\fB\-\-reauth\fP
Reauthenticate the client with OneDrive
\fB\-\-resync\fP
Forget the last saved state, perform a full sync
.TP
\fB\-\-remove\-directory\fP ARG
Remove a directory on OneDrive \- no sync will be performed.
@ -189,12 +165,6 @@ Remove source file after successful transfer to OneDrive when using \-\-upload-o
.br
Configuration file key: \fBremove_source_files\fP (default: \fBfalse\fP)
.TP
\fB\-\-resync\fP
Forget the last saved state, perform a full sync
.TP
\fB\-\-resync\-auth\fP
Approve the use of performing a --resync action without needing CLI authorization
.TP
\fB\-\-single\-directory\fP ARG
Specify a single local directory within the OneDrive root to sync.
.TP
@ -227,11 +197,9 @@ Configuration file key: \fBskip_symlinks\fP (default: \fBfalse\fP)
\fB\-\-source\-directory\fP ARG
Source directory to rename or move on OneDrive \- no sync will be performed.
.TP
\fB\-\-space\-reservation\fP ARG
The amount of disk space to reserve (in MB) to avoid 100% disk space utilisation
.TP
\fB\-\-sync\-root\-files\fP
Sync all files in sync_dir root when using sync_list.
.TP
\fB\-\-sync\-shared\-folders\fP
Sync OneDrive Business Shared Folders
@ -263,9 +231,6 @@ enables even more verbose debug statements.
\fB\-\-version\fP
Print the version and exit
.TP
\fB\-\-with\-editing\-perms\fP
Create a read-write shareable link for an existing file on OneDrive when used with --create-share-link <file>
.TP
\fB\-h \-\-help\fP
This help information.
.PP
@ -387,5 +352,5 @@ Further examples and documentation is available in
\f[C]docs/USAGE.md\f[]
\f[C]docs/advanced-usage.md\f[]
\f[C]docs/BusinessSharedFolders.md\f[]
\f[C]docs/SharePoint-Shared-Libraries.md\f[]
\f[C]docs/Office365.md\f[]
\f[C]docs/national-cloud-deployments.md\f[]

View file

@ -1,8 +0,0 @@
The files in this directory have been obtained from the following places:
cgi.d
https://github.com/adamdruppe/arsd/blob/a870179988b8881b04126856105f0fad2cc0018d/cgi.d
License: Boost Software License - Version 1.0
Copyright 2008-2021, Adam D. Ruppe
see https://github.com/adamdruppe/arsd/blob/a870179988b8881b04126856105f0fad2cc0018d/LICENSE

File diff suppressed because it is too large Load diff

View file

@ -10,7 +10,6 @@ final class Config
public string defaultSyncDir = "~/OneDrive";
public string defaultSkipFile = "~*|.~*|*.tmp";
public string defaultSkipDir = "";
public string defaultLogFileDir = "/var/log/onedrive/";
// application set items
public string refreshTokenFilePath = "";
public string deltaLinkFilePath = "";
@ -44,35 +43,6 @@ final class Config
public long defaultFilePermissionMode = 600;
public int configuredFilePermissionMode;
// Bring in v2.5.0 config items
// HTTP Struct items, used for configuring HTTP()
// Curl Timeout Handling
// libcurl dns_cache_timeout timeout
immutable int defaultDnsTimeout = 60;
// Connect timeout for HTTP|HTTPS connections
immutable int defaultConnectTimeout = 10;
// With the following settings we force
// - if there is no data flow for 10min, abort
// - if the download time for one item exceeds 1h, abort
//
// Timeout for activity on connection
// this translates into Curl's CURLOPT_LOW_SPEED_TIME
// which says:
// It contains the time in number seconds that the
// transfer speed should be below the CURLOPT_LOW_SPEED_LIMIT
// for the library to consider it too slow and abort.
immutable int defaultDataTimeout = 600;
// Maximum time any operation is allowed to take
// This includes dns resolution, connecting, data transfer, etc.
immutable int defaultOperationTimeout = 3600;
// Specify how many redirects should be allowed
immutable int defaultMaxRedirects = 5;
// Specify what IP protocol version should be used when communicating with OneDrive
immutable int defaultIpProtocol = 0; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
this(string confdirOption)
{
// default configuration - entries in config file ~/.config/onedrive/config
@ -80,7 +50,7 @@ final class Config
stringValues["sync_dir"] = defaultSyncDir;
stringValues["skip_file"] = defaultSkipFile;
stringValues["skip_dir"] = defaultSkipDir;
stringValues["log_dir"] = defaultLogFileDir;
stringValues["log_dir"] = "/var/log/onedrive/";
stringValues["drive_id"] = "";
stringValues["user_agent"] = "";
boolValues["upload_only"] = false;
@ -88,10 +58,10 @@ final class Config
boolValues["check_nosync"] = false;
boolValues["download_only"] = false;
boolValues["disable_notifications"] = false;
boolValues["disable_download_validation"] = false;
boolValues["disable_upload_validation"] = false;
boolValues["enable_logging"] = false;
boolValues["force_http_11"] = false;
boolValues["force_http_2"] = false;
boolValues["local_first"] = false;
boolValues["no_remote_delete"] = false;
boolValues["skip_symlinks"] = false;
@ -104,11 +74,10 @@ final class Config
longValues["monitor_interval"] = 300;
longValues["skip_size"] = 0;
longValues["min_notify_changes"] = 5;
longValues["monitor_log_frequency"] = 6;
// Number of N sync runs before performing a full local scan of sync_dir
// By default 12 which means every ~60 minutes a full disk scan of sync_dir will occur
// 'monitor_interval' * 'monitor_fullscan_frequency' = 3600 = 1 hour
longValues["monitor_fullscan_frequency"] = 12;
longValues["monitor_log_frequency"] = 5;
// Number of n sync runs before performing a full local scan of sync_dir
// By default 10 which means every ~7.5 minutes a full disk scan of sync_dir will occur
longValues["monitor_fullscan_frequency"] = 10;
// Number of children in a path that is locally removed which will be classified as a 'big data delete'
longValues["classify_as_big_delete"] = 1000;
// Delete source after successful transfer
@ -120,8 +89,6 @@ final class Config
stringValues["application_id"] = "";
// allow for resync to be set via config file
boolValues["resync"] = false;
// resync now needs to be acknowledged based on the 'risk' of using it
boolValues["resync_auth"] = false;
// Ignore data safety checks and overwrite local data rather than preserve & rename
// This is a config file option ONLY
boolValues["bypass_data_preservation"] = false;
@ -149,25 +116,8 @@ final class Config
longValues["sync_dir_permissions"] = defaultDirectoryPermissionMode;
// Configure the default file permission attributes for newly created file
longValues["sync_file_permissions"] = defaultFilePermissionMode;
// Configure download / upload rate limits
longValues["rate_limit"] = 0;
// To ensure we do not fill up the load disk, how much disk space should be reserved by default
longValues["space_reservation"] = 50 * 2^^20; // 50 MB as Bytes
// Webhook options
boolValues["webhook_enabled"] = false;
stringValues["webhook_public_url"] = "";
stringValues["webhook_listening_host"] = "";
longValues["webhook_listening_port"] = 8888;
longValues["webhook_expiration_interval"] = 3600 * 24;
longValues["webhook_renewal_interval"] = 3600 * 12;
// Log to application output running configuration values
boolValues["display_running_config"] = false;
// Configure read-only authentication scope
boolValues["read_only_auth_scope"] = false;
// Flag to cleanup local files when using --download-only
boolValues["cleanup_local_files"] = false;
// DEVELOPER OPTIONS
// DEVELOPER OPTIONS
// display_memory = true | false
// - It may be desirable to display the memory usage of the application to assist with diagnosing memory issues with the application
// - This is especially beneficial when debugging or performing memory tests with Valgrind
@ -179,30 +129,8 @@ final class Config
// display_sync_options = true | false
// - It may be desirable to see what options are being passed in to performSync() without enabling the full verbose debug logging
boolValues["display_sync_options"] = false;
// force_children_scan = true | false
// - Force client to use /children rather than /delta to query changes on OneDrive
// - This option flags nationalCloudDeployment as true, forcing the client to act like it is using a National Cloud Deployment
boolValues["force_children_scan"] = false;
// display_processing_time = true | false
// - Enabling this option will add function processing times to the console output
// - This then enables tracking of where the application is spending most amount of time when processing data when users have questions re performance
boolValues["display_processing_time"] = false;
// HTTPS & CURL Operation Settings
// - Maximum time an operation is allowed to take
// This includes dns resolution, connecting, data transfer, etc.
longValues["operation_timeout"] = defaultOperationTimeout;
// libcurl dns_cache_timeout timeout
longValues["dns_timeout"] = defaultDnsTimeout;
// Timeout for HTTPS connections
longValues["connect_timeout"] = defaultConnectTimeout;
// Timeout for activity on a HTTPS connection
longValues["data_timeout"] = defaultDataTimeout;
// What IP protocol version should be used when communicating with OneDrive
longValues["ip_protocol_version"] = defaultIpProtocol; // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
// EXPAND USERS HOME DIRECTORY
// Determine the users home directory.
// Determine the users home directory.
// Need to avoid using ~ here as expandTilde() below does not interpret correctly when running under init.d or systemd scripts
// Check for HOME environment variable
if (environment.get("HOME") != ""){
@ -222,17 +150,15 @@ final class Config
homePath = "~";
}
}
// Output homePath calculation
log.vdebug("homePath: ", homePath);
// Determine the correct configuration directory to use
string configDirBase;
string systemConfigDirBase;
if (confdirOption != "") {
// A CLI 'confdir' was passed in
// Clean up any stray " .. these should not be there ...
confdirOption = strip(confdirOption,"\"");
log.vdebug("configDirName: CLI override to set configDirName to: ", confdirOption);
if (canFind(confdirOption,"~")) {
// A ~ was found
@ -253,7 +179,7 @@ final class Config
// Also set up a path to pre-shipped shared configs (which can be overridden by supplying a config file in userspace)
systemConfigDirBase = "/etc";
}
// Output configDirBase calculation
log.vdebug("configDirBase: ", configDirBase);
// Set the default application configuration directory
@ -263,33 +189,19 @@ final class Config
// systemConfigDirBase contains the correct path so we do not need to check for presence of '~'
systemConfigDirName = systemConfigDirBase ~ "/onedrive";
}
// Config directory options all determined
if (!exists(configDirName)) {
// create the directory
mkdirRecurse(configDirName);
// Configure the applicable permissions for the folder
configDirName.setAttributes(returnRequiredDirectoryPermisions());
} else {
// The config path exists
// The path that exists must be a directory, not a file
if (!isDir(configDirName)) {
if (!confdirOption.empty) {
// the configuration path was passed in by the user .. user error
writeln("ERROR: --confdir entered value is an existing file instead of an existing directory");
} else {
// other error
writeln("ERROR: ~/.config/onedrive is a file rather than a directory");
}
// Must exit
exit(EXIT_FAILURE);
}
}
// configDirName has a trailing /
if (!configDirName.empty) log.vlog("Using 'user' Config Dir: ", configDirName);
if (!systemConfigDirName.empty) log.vlog("Using 'system' Config Dir: ", systemConfigDirName);
log.vlog("Using 'user' Config Dir: ", configDirName);
log.vlog("Using 'system' Config Dir: ", systemConfigDirName);
// Update application set variables based on configDirName
refreshTokenFilePath = buildNormalizedPath(configDirName ~ "/refresh_token");
deltaLinkFilePath = buildNormalizedPath(configDirName ~ "/delta_link");
@ -300,7 +212,7 @@ final class Config
syncListFilePath = buildNormalizedPath(configDirName ~ "/sync_list");
systemConfigFilePath = buildNormalizedPath(systemConfigDirName ~ "/config");
businessSharedFolderFilePath = buildNormalizedPath(configDirName ~ "/business_shared_folders");
// Debug Output for application set variables based on configDirName
log.vdebug("refreshTokenFilePath = ", refreshTokenFilePath);
log.vdebug("deltaLinkFilePath = ", deltaLinkFilePath);
@ -358,25 +270,20 @@ final class Config
stringValues["create_share_link"] = "";
stringValues["destination_directory"] = "";
stringValues["get_file_link"] = "";
stringValues["modified_by"] = "";
stringValues["get_o365_drive_id"] = "";
stringValues["remove_directory"] = "";
stringValues["single_directory"] = "";
stringValues["source_directory"] = "";
stringValues["auth_files"] = "";
stringValues["auth_response"] = "";
boolValues["display_config"] = false;
boolValues["display_sync_status"] = false;
boolValues["print_token"] = false;
boolValues["logout"] = false;
boolValues["reauth"] = false;
boolValues["monitor"] = false;
boolValues["synchronize"] = false;
boolValues["force"] = false;
boolValues["list_business_shared_folders"] = false;
boolValues["force_sync"] = false;
boolValues["with_editing_perms"] = false;
// Application Startup option validation
try {
string tmpStr;
@ -384,18 +291,15 @@ final class Config
long tmpVerb;
// duplicated from main.d to get full help output!
auto opt = getopt(
args,
std.getopt.config.bundling,
std.getopt.config.caseSensitive,
"auth-files",
"Perform authentication not via interactive dialog but via files read/writes to these files.",
&stringValues["auth_files"],
"auth-response",
"Perform authentication not via interactive dialog but via providing the response url directly.",
&stringValues["auth_response"],
"check-for-nomount",
"Check for the presence of .nosync in the syncdir root. If found, do not perform sync.",
"Check for the presence of .nosync in the syncdir root. If found, do not perform sync.",
&boolValues["check_nomount"],
"check-for-nosync",
"Check for the presence of .nosync in each directory. If found, skip directory from sync.",
@ -403,17 +307,14 @@ final class Config
"classify-as-big-delete",
"Number of children in a path that is locally removed which will be classified as a 'big data delete'",
&longValues["classify_as_big_delete"],
"cleanup-local-files",
"Cleanup additional local files when using --download-only. This will remove local data.",
&boolValues["cleanup_local_files"],
"create-directory",
"Create a directory on OneDrive - no sync will be performed.",
&stringValues["create_directory"],
"create-share-link",
"Create a shareable link for an existing file on OneDrive",
&stringValues["create_share_link"],
"debug-https",
"Debug OneDrive HTTPS communication.",
"debug-https",
"Debug OneDrive HTTPS communication.",
&boolValues["debug_https"],
"destination-directory",
"Destination directory for renamed or move on OneDrive - no sync will be performed.",
@ -421,18 +322,12 @@ final class Config
"disable-notifications",
"Do not use desktop notifications in monitor mode.",
&boolValues["disable_notifications"],
"disable-download-validation",
"Disable download validation when downloading from OneDrive",
&boolValues["disable_download_validation"],
"disable-upload-validation",
"Disable upload validation when uploading to OneDrive",
&boolValues["disable_upload_validation"],
"display-config",
"Display what options the client will use as currently configured - no sync will be performed.",
&boolValues["display_config"],
"display-running-config",
"Display what options the client has been configured to use on application startup.",
&boolValues["display_running_config"],
"display-sync-status",
"Display the sync status of the client - no sync will be performed.",
&boolValues["display_sync_status"],
@ -445,15 +340,15 @@ final class Config
"enable-logging",
"Enable client activity to a separate log file",
&boolValues["enable_logging"],
"force-http-11",
"Force the use of HTTP 1.1 for all operations",
"force-http-1.1",
"Force the use of HTTP/1.1 for all operations (DEPRECIATED)",
&boolValues["force_http_11"],
"force-http-2",
"Force the use of HTTP/2 for all operations where applicable",
&boolValues["force_http_2"],
"force",
"Force the deletion of data when a 'big delete' is detected",
&boolValues["force"],
"force-sync",
"Force a synchronization of a specific folder, only when using --synchronize --single-directory and ignore all non-default skip_dir and skip_file rules",
&boolValues["force_sync"],
"get-file-link",
"Display the file link of a synced file",
&stringValues["get_file_link"],
@ -472,9 +367,6 @@ final class Config
"min-notify-changes",
"Minimum number of pending incoming changes necessary to trigger a desktop notification",
&longValues["min_notify_changes"],
"modified-by",
"Display the last modified by details of a given path",
&stringValues["modified_by"],
"monitor|m",
"Keep monitoring for local and remote changes",
&boolValues["monitor"],
@ -493,15 +385,9 @@ final class Config
"print-token",
"Print the access token, useful for debugging",
&boolValues["print_token"],
"reauth",
"Reauthenticate the client with OneDrive",
&boolValues["reauth"],
"resync",
"Forget the last saved state, perform a full sync",
&boolValues["resync"],
"resync-auth",
"Approve the use of performing a --resync action",
&boolValues["resync_auth"],
"remove-directory",
"Remove a directory on OneDrive - no sync will be performed.",
&stringValues["remove_directory"],
@ -532,9 +418,6 @@ final class Config
"source-directory",
"Source directory to rename or move on OneDrive - no sync will be performed.",
&stringValues["source_directory"],
"space-reservation",
"The amount of disk space to reserve (in MB) to avoid 100% disk space utilisation",
&longValues["space_reservation"],
"syncdir",
"Specify the local directory used for synchronization to OneDrive",
&stringValues["sync_dir"],
@ -564,10 +447,7 @@ final class Config
&boolValues["list_business_shared_folders"],
"sync-shared-folders",
"Sync OneDrive Business Shared Folders",
&boolValues["sync_business_shared_folders"],
"with-editing-perms",
"Create a read-write shareable link for an existing file on OneDrive when used with --create-share-link <file>",
&boolValues["with_editing_perms"]
&boolValues["sync_business_shared_folders"]
);
if (opt.helpWanted) {
outputLongHelp(opt.options);
@ -634,16 +514,6 @@ final class Config
private bool load(string filename)
{
// configure function variables
try {
readText(filename);
} catch (std.file.FileException e) {
// Unable to access required file
log.error("ERROR: Unable to access ", e.msg);
// Use exit scopes to shutdown API
return false;
}
// We were able to readText the config file - so, we should be able to open and read it
auto file = File(filename, "r");
string lineBuffer;
@ -655,6 +525,7 @@ final class Config
// close open file
file.close();
}
return false;
}
// - exit
scope(exit) {
@ -664,7 +535,7 @@ final class Config
file.close();
}
}
// read file line by line
auto range = file.byLine();
foreach (line; range) {
@ -711,13 +582,7 @@ final class Config
setValueString("skip_dir", configFileSkipDir);
}
}
// --single-directory Strip quotation marks from path
// This is an issue when using ONEDRIVE_SINGLE_DIRECTORY with Docker
if (key == "single_directory") {
// Strip quotation marks from provided path
string configSingleDirectory = strip(to!string(c.front.dup), "\"");
setValueString("single_directory", configSingleDirectory);
}
// Azure AD Configuration
if (key == "azure_ad_endpoint") {
string azureConfigValue = c.front.dup;
@ -737,7 +602,7 @@ final class Config
case "CN":
log.log("Using config option for Azure AD China operated by 21Vianet");
break;
// Default - all other entries
// Default - all other entries
default:
log.log("Unknown Azure AD Endpoint - using Global Azure AD Endpoints");
}
@ -747,16 +612,6 @@ final class Config
if (ppp) {
c.popFront();
setValueLong(key, to!long(c.front.dup));
// if key is space_reservation we have to calculate MB -> bytes
if (key == "space_reservation") {
// temp value
ulong tempValue = to!long(c.front.dup);
// a value of 0 needs to be made at least 1MB ..
if (tempValue == 0) {
tempValue = 1;
}
setValueLong("space_reservation", to!long(tempValue * 2^^20));
}
} else {
log.log("Unknown key in config file: ", key);
return false;
@ -770,10 +625,10 @@ final class Config
}
return true;
}
void configureRequiredDirectoryPermisions() {
// return the directory permission mode required
// - return octal!defaultDirectoryPermissionMode; ... cant be used .. which is odd
// - return octal!defaultDirectoryPermissionMode; ... cant be used .. which is odd
// Error: variable defaultDirectoryPermissionMode cannot be read at compile time
if (getValueLong("sync_dir_permissions") != defaultDirectoryPermissionMode) {
// return user configured permissions as octal integer
@ -787,10 +642,10 @@ final class Config
configuredDirectoryPermissionMode = to!int(convertedValue);
}
}
void configureRequiredFilePermisions() {
// return the file permission mode required
// - return octal!defaultFilePermissionMode; ... cant be used .. which is odd
// - return octal!defaultFilePermissionMode; ... cant be used .. which is odd
// Error: variable defaultFilePermissionMode cannot be read at compile time
if (getValueLong("sync_file_permissions") != defaultFilePermissionMode) {
// return user configured permissions as octal integer
@ -804,7 +659,7 @@ final class Config
configuredFilePermissionMode = to!int(convertedValue);
}
}
int returnRequiredDirectoryPermisions() {
// read the configuredDirectoryPermissionMode and return
if (configuredDirectoryPermissionMode == 0) {
@ -814,7 +669,7 @@ final class Config
}
return configuredDirectoryPermissionMode;
}
int returnRequiredFilePermisions() {
// read the configuredFilePermissionMode and return
if (configuredFilePermissionMode == 0) {
@ -823,27 +678,11 @@ final class Config
}
return configuredFilePermissionMode;
}
void resetSkipToDefaults() {
// reset skip_file and skip_dir to application defaults
// skip_file
log.vdebug("original skip_file: ", getValueString("skip_file"));
log.vdebug("resetting skip_file");
setValueString("skip_file", defaultSkipFile);
log.vdebug("reset skip_file: ", getValueString("skip_file"));
// skip_dir
log.vdebug("original skip_dir: ", getValueString("skip_dir"));
log.vdebug("resetting skip_dir");
setValueString("skip_dir", defaultSkipDir);
log.vdebug("reset skip_dir: ", getValueString("skip_dir"));
}
}
void outputLongHelp(Option[] opt)
{
auto argsNeedingOptions = [
"--auth-files",
"--auth-response",
"--confdir",
"--create-directory",
"--create-share-link",
@ -852,18 +691,13 @@ void outputLongHelp(Option[] opt)
"--get-O365-drive-id",
"--log-dir",
"--min-notify-changes",
"--modified-by",
"--monitor-interval",
"--monitor-log-frequency",
"--monitor-fullscan-frequency",
"--operation-timeout",
"--remove-directory",
"--single-directory",
"--skip-dir",
"--skip-file",
"--skip-size",
"--source-directory",
"--space-reservation",
"--syncdir",
"--user-agent" ];
writeln(`OneDrive - a client for OneDrive Cloud Services

View file

@ -23,8 +23,9 @@ struct Item {
string cTag;
SysTime mtime;
string parentId;
string crc32Hash;
string sha1Hash;
string quickXorHash;
string sha256Hash;
string remoteDriveId;
string remoteId;
string syncStatus;
@ -33,7 +34,7 @@ struct Item {
final class ItemDatabase
{
// increment this for every change in the db schema
immutable int itemDatabaseVersion = 11;
immutable int itemDatabaseVersion = 10;
Database db;
string insertItemStmt;
@ -41,7 +42,6 @@ final class ItemDatabase
string selectItemByIdStmt;
string selectItemByParentIdStmt;
string deleteItemByIdStmt;
bool databaseInitialised = false;
this(const(char)[] filename)
{
@ -51,17 +51,8 @@ final class ItemDatabase
dbVersion = db.getVersion();
} catch (SqliteException e) {
// An error was generated - what was the error?
if (e.msg == "database is locked") {
writeln();
log.error("ERROR: onedrive application is already running - check system process list for active application instances");
log.vlog(" - Use 'sudo ps aufxw | grep onedrive' to potentially determine acive running process");
writeln();
} else {
writeln();
log.error("ERROR: An internal database error occurred: " ~ e.msg);
writeln();
}
return;
log.error("\nAn internal database error occurred: " ~ e.msg ~ "\n");
exit(-1);
}
if (dbVersion == 0) {
@ -93,18 +84,14 @@ final class ItemDatabase
// https://www.sqlite.org/pragma.html#pragma_auto_vacuum
// PRAGMA schema.auto_vacuum = 0 | NONE | 1 | FULL | 2 | INCREMENTAL;
db.exec("PRAGMA auto_vacuum = FULL");
// This pragma sets or queries the database connection locking-mode. The locking-mode is either NORMAL or EXCLUSIVE.
// https://www.sqlite.org/pragma.html#pragma_locking_mode
// PRAGMA schema.locking_mode = NORMAL | EXCLUSIVE
db.exec("PRAGMA locking_mode = EXCLUSIVE");
insertItemStmt = "
INSERT OR REPLACE INTO item (driveId, id, name, type, eTag, cTag, mtime, parentId, quickXorHash, sha256Hash, remoteDriveId, remoteId, syncStatus)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13)
INSERT OR REPLACE INTO item (driveId, id, name, type, eTag, cTag, mtime, parentId, crc32Hash, sha1Hash, quickXorHash, remoteDriveId, remoteId, syncStatus)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14)
";
updateItemStmt = "
UPDATE item
SET name = ?3, type = ?4, eTag = ?5, cTag = ?6, mtime = ?7, parentId = ?8, quickXorHash = ?9, sha256Hash = ?10, remoteDriveId = ?11, remoteId = ?12, syncStatus = ?13
SET name = ?3, type = ?4, eTag = ?5, cTag = ?6, mtime = ?7, parentId = ?8, crc32Hash = ?9, sha1Hash = ?10, quickXorHash = ?11, remoteDriveId = ?12, remoteId = ?13, syncStatus = ?14
WHERE driveId = ?1 AND id = ?2
";
selectItemByIdStmt = "
@ -114,14 +101,6 @@ final class ItemDatabase
";
selectItemByParentIdStmt = "SELECT * FROM item WHERE driveId = ? AND parentId = ?";
deleteItemByIdStmt = "DELETE FROM item WHERE driveId = ? AND id = ?";
// flag that the database is accessible and we have control
databaseInitialised = true;
}
bool isDatabaseInitialised()
{
return databaseInitialised;
}
void createTable()
@ -135,8 +114,9 @@ final class ItemDatabase
cTag TEXT,
mtime TEXT NOT NULL,
parentId TEXT,
crc32Hash TEXT,
sha1Hash TEXT,
quickXorHash TEXT,
sha256Hash TEXT,
remoteDriveId TEXT,
remoteId TEXT,
deltaLink TEXT,
@ -319,18 +299,19 @@ final class ItemDatabase
bind(6, cTag);
bind(7, mtime.toISOExtString());
bind(8, parentId);
bind(9, quickXorHash);
bind(10, sha256Hash);
bind(11, remoteDriveId);
bind(12, remoteId);
bind(13, syncStatus);
bind(9, crc32Hash);
bind(10, sha1Hash);
bind(11, quickXorHash);
bind(12, remoteDriveId);
bind(13, remoteId);
bind(14, syncStatus);
}
}
private Item buildItem(Statement.Result result)
{
assert(!result.empty, "The result must not be empty");
assert(result.front.length == 14, "The result must have 14 columns");
assert(result.front.length == 15, "The result must have 15 columns");
Item item = {
driveId: result.front[0].dup,
id: result.front[1].dup,
@ -339,11 +320,12 @@ final class ItemDatabase
cTag: result.front[5].dup,
mtime: SysTime.fromISOExtString(result.front[6]),
parentId: result.front[7].dup,
quickXorHash: result.front[8].dup,
sha256Hash: result.front[9].dup,
remoteDriveId: result.front[10].dup,
remoteId: result.front[11].dup,
syncStatus: result.front[12].dup
crc32Hash: result.front[8].dup,
sha1Hash: result.front[9].dup,
quickXorHash: result.front[10].dup,
remoteDriveId: result.front[11].dup,
remoteId: result.front[12].dup,
syncStatus: result.front[14].dup
};
switch (result.front[3]) {
case "file": item.type = ItemType.file; break;
@ -406,9 +388,6 @@ final class ItemDatabase
}
} else {
// broken tree
log.vdebug("The following generated a broken tree query:");
log.vdebug("Drive ID: ", driveId);
log.vdebug("Item ID: ", id);
assert(0);
}
}
@ -499,27 +478,7 @@ final class ItemDatabase
// Perform a vacuum on the database, commit WAL / SHM to file
void performVacuum()
{
try {
auto stmt = db.prepare("VACUUM;");
stmt.exec();
} catch (SqliteException e) {
writeln();
log.error("ERROR: Unable to perform a database vacuum: " ~ e.msg);
writeln();
}
}
// Select distinct driveId items from database
string[] selectDistinctDriveIds()
{
string[] driveIdArray;
auto stmt = db.prepare("SELECT DISTINCT driveId FROM item;");
auto res = stmt.exec();
if (res.empty) return driveIdArray;
while (!res.empty) {
driveIdArray ~= res.front[0].dup;
res.step();
}
return driveIdArray;
auto stmt = db.prepare("VACUUM;");
stmt.exec();
}
}

View file

@ -13,7 +13,6 @@ version(Notifications) {
// enable verbose logging
long verbose;
bool writeLogFile = false;
bool logFileWriteFailFlag = false;
private bool doNotifications;
@ -36,7 +35,7 @@ void init(string logDir)
// we got an error ..
writeln("\nUnable to access ", logFilePath);
writeln("Please manually create '",logFilePath, "' and set appropriate permissions to allow write access");
writeln("The requested client activity log will instead be located in your users home directory");
writeln("The requested client activity log will instead be located in the users home directory\n");
}
}
}
@ -155,7 +154,6 @@ void notify(T...)(T args)
private void logfileWriteLine(T...)(T args)
{
static import std.exception;
// Write to log file
string logFileName = .logFilePath ~ .username ~ ".onedrive.log";
auto currentTime = Clock.currTime();
@ -170,17 +168,6 @@ private void logfileWriteLine(T...)(T args)
// We cannot open the log file in logFilePath location for writing
// The user is not part of the standard 'users' group (GID 100)
// Change logfile to ~/onedrive.log putting the log file in the users home directory
if (!logFileWriteFailFlag) {
// write out error message that we cant log to the requested file
writeln("\nUnable to write activity log to ", logFileName);
writeln("Please set appropriate permissions to allow write access to the logging directory for your user account");
writeln("The requested client activity log will instead be located in your users home directory\n");
// set the flag so we dont keep printing this error message
logFileWriteFailFlag = true;
}
string homePath = environment.get("HOME");
string logFileNameAlternate = homePath ~ "/onedrive.log";
logFile = File(logFileNameAlternate, "a");

1410
src/main.d

File diff suppressed because it is too large Load diff

View file

@ -151,7 +151,7 @@ final class Monitor
// catch any error which is generated
} catch (std.file.FileException e) {
// Standard filesystem error
displayFileSystemErrorMessage(e.msg, getFunctionName!({}));
displayFileSystemErrorMessage(e.msg);
return;
} catch (Exception e) {
// Issue #1154 handling
@@ -166,7 +166,7 @@ final class Monitor
exit(-1);
} else {
// some other error
displayFileSystemErrorMessage(e.msg, getFunctionName!({}));
displayFileSystemErrorMessage(e.msg);
return;
}
}
@@ -388,4 +388,12 @@ final class Monitor
}
}
}
// Parse and display error message received from the local file system
private void displayFileSystemErrorMessage(string message)
{
log.error("ERROR: The local file system returned an error with the following message:");
auto errorArray = splitLines(message);
log.error(" Error Message: ", errorArray[0]);
}
}


@@ -7,4 +7,4 @@ dnotify.d
notify.d
https://github.com/D-Programming-Deimos/libnotify/blob/master/deimos/notify/notify.d
License: GNU Lesser General Public License (LGPL) 2.1 or upwards, see file
License: GPL 2.1 or upwards, see file


@@ -163,7 +163,7 @@ class Notification {
this(in char[] summary, in char[] body_, in char[] icon="")
in { assert(is_initted(), "call dnotify.init() before using Notification"); }
do {
body {
this.summary = summary;
this.body_ = body_;
this.icon = icon;

File diff suppressed because it is too large.


@@ -24,7 +24,6 @@ class Progress
size_t getTerminalWidth() {
size_t column = default_width;
version (CRuntime_Musl) {
} else version(Android) {
} else {
winsize ws;
if(ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) != -1 && ws.ws_col > 0) {


@@ -1,88 +1,88 @@
import std.algorithm;
import std.digest;
// implementation of the QuickXorHash algorithm in D
// https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/code-snippets/quickxorhash.md
struct QuickXor
{
private enum int widthInBits = 160;
private enum size_t lengthInBytes = (widthInBits - 1) / 8 + 1;
private enum size_t lengthInQWords = (widthInBits - 1) / 64 + 1;
private enum int bitsInLastCell = widthInBits % 64; // 32
private enum int shift = 11;
private ulong[lengthInQWords] _data;
private ulong _lengthSoFar;
private int _shiftSoFar;
nothrow @safe void put(scope const(ubyte)[] array...)
{
int vectorArrayIndex = _shiftSoFar / 64;
int vectorOffset = _shiftSoFar % 64;
immutable size_t iterations = min(array.length, widthInBits);
for (size_t i = 0; i < iterations; i++) {
immutable bool isLastCell = vectorArrayIndex == _data.length - 1;
immutable int bitsInVectorCell = isLastCell ? bitsInLastCell : 64;
if (vectorOffset <= bitsInVectorCell - 8) {
for (size_t j = i; j < array.length; j += widthInBits) {
_data[vectorArrayIndex] ^= cast(ulong) array[j] << vectorOffset;
}
} else {
int index1 = vectorArrayIndex;
int index2 = isLastCell ? 0 : (vectorArrayIndex + 1);
ubyte low = cast(ubyte) (bitsInVectorCell - vectorOffset);
ubyte xoredByte = 0;
for (size_t j = i; j < array.length; j += widthInBits) {
xoredByte ^= array[j];
}
_data[index1] ^= cast(ulong) xoredByte << vectorOffset;
_data[index2] ^= cast(ulong) xoredByte >> low;
}
vectorOffset += shift;
if (vectorOffset >= bitsInVectorCell) {
vectorArrayIndex = isLastCell ? 0 : vectorArrayIndex + 1;
vectorOffset -= bitsInVectorCell;
}
}
_shiftSoFar = cast(int) (_shiftSoFar + shift * (array.length % widthInBits)) % widthInBits;
_lengthSoFar += array.length;
}
nothrow @safe void start()
{
_data = _data.init;
_shiftSoFar = 0;
_lengthSoFar = 0;
}
nothrow @trusted ubyte[lengthInBytes] finish()
{
ubyte[lengthInBytes] tmp;
tmp[0 .. lengthInBytes] = (cast(ubyte*) _data)[0 .. lengthInBytes];
for (size_t i = 0; i < 8; i++) {
tmp[lengthInBytes - 8 + i] ^= (cast(ubyte*) &_lengthSoFar)[i];
}
return tmp;
}
}
unittest
{
assert(isDigest!QuickXor);
}
unittest
{
QuickXor qxor;
qxor.put(cast(ubyte[]) "The quick brown fox jumps over the lazy dog");
assert(qxor.finish().toHexString() == "6CC4A56F2B26C492FA4BBE57C1F31C4193A972BE");
}
alias QuickXorDigest = WrapperDigest!(QuickXor);
import std.algorithm;
import std.digest;
// implementation of the QuickXorHash algorithm in D
// https://github.com/OneDrive/onedrive-api-docs/blob/live/docs/code-snippets/quickxorhash.md
struct QuickXor
{
private immutable int widthInBits = 160;
private immutable size_t lengthInBytes = (widthInBits - 1) / 8 + 1;
private immutable size_t lengthInQWords = (widthInBits - 1) / 64 + 1;
private immutable int bitsInLastCell = widthInBits % 64; // 32
private immutable int shift = 11;
private ulong[lengthInQWords] _data;
private ulong _lengthSoFar;
private int _shiftSoFar;
nothrow @safe void put(scope const(ubyte)[] array...)
{
int vectorArrayIndex = _shiftSoFar / 64;
int vectorOffset = _shiftSoFar % 64;
immutable size_t iterations = min(array.length, widthInBits);
for (size_t i = 0; i < iterations; i++) {
immutable bool isLastCell = vectorArrayIndex == _data.length - 1;
immutable int bitsInVectorCell = isLastCell ? bitsInLastCell : 64;
if (vectorOffset <= bitsInVectorCell - 8) {
for (size_t j = i; j < array.length; j += widthInBits) {
_data[vectorArrayIndex] ^= cast(ulong) array[j] << vectorOffset;
}
} else {
int index1 = vectorArrayIndex;
int index2 = isLastCell ? 0 : (vectorArrayIndex + 1);
ubyte low = cast(ubyte) (bitsInVectorCell - vectorOffset);
ubyte xoredByte = 0;
for (size_t j = i; j < array.length; j += widthInBits) {
xoredByte ^= array[j];
}
_data[index1] ^= cast(ulong) xoredByte << vectorOffset;
_data[index2] ^= cast(ulong) xoredByte >> low;
}
vectorOffset += shift;
if (vectorOffset >= bitsInVectorCell) {
vectorArrayIndex = isLastCell ? 0 : vectorArrayIndex + 1;
vectorOffset -= bitsInVectorCell;
}
}
_shiftSoFar = cast(int) (_shiftSoFar + shift * (array.length % widthInBits)) % widthInBits;
_lengthSoFar += array.length;
}
nothrow @safe void start()
{
_data = _data.init;
_shiftSoFar = 0;
_lengthSoFar = 0;
}
nothrow @trusted ubyte[lengthInBytes] finish()
{
ubyte[lengthInBytes] tmp;
tmp[0 .. lengthInBytes] = (cast(ubyte*) _data)[0 .. lengthInBytes];
for (size_t i = 0; i < 8; i++) {
tmp[lengthInBytes - 8 + i] ^= (cast(ubyte*) &_lengthSoFar)[i];
}
return tmp;
}
}
unittest
{
assert(isDigest!QuickXor);
}
unittest
{
QuickXor qxor;
qxor.put(cast(ubyte[]) "The quick brown fox jumps over the lazy dog");
assert(qxor.finish().toHexString() == "6CC4A56F2B26C492FA4BBE57C1F31C4193A972BE");
}
alias QuickXorDigest = WrapperDigest!(QuickXor);
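OneDrive publishes this digest as a Base64 string rather than hex. A minimal usage sketch of the struct above, mirroring the chunked-file hashing used elsewhere in this diff; it assumes the QuickXor struct is in scope (for example via the qxor module shown here), and the file name is illustrative:

```d
import std.base64 : Base64;
import std.stdio : File, chunks, writeln;

void main()
{
    QuickXor qxor;                            // struct defined above
    auto file = File("example.bin", "rb");
    foreach (ubyte[] data; chunks(file, 4096))
        qxor.put(data);
    // The value OneDrive reports as quickXorHash is the Base64 form of the 160-bit digest
    writeln(Base64.encode(qxor.finish()));
}
```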


@@ -179,14 +179,6 @@ final class SelectiveSync
if (!name.matchFirst(businessSharedFoldersList).empty) {
return true;
} else {
// try a direct comparison just in case
foreach (userFolder; businessSharedFoldersList) {
if (userFolder == name) {
// direct match
log.vdebug("'matchFirst' failed to match, however direct comparison was matched: ", name);
return true;
}
}
return false;
}
}
@@ -221,9 +213,7 @@ private bool isPathExcluded(string path, string[] allowedPaths)
{
// function variables
bool exclude = false;
bool exludeDirectMatch = false; // will get updated to true, if there is a pattern match to sync_list entry
bool excludeMatched = false; // will get updated to true, if there is a pattern match to sync_list entry
bool finalResult = true; // will get updated to false, if pattern match to sync_list entry
bool finalResult = true; // will get updated to false, if pattern matched to sync_list entry
int offset;
string wildcard = "*";
@@ -232,10 +222,7 @@ private bool isPathExcluded(string path, string[] allowedPaths)
// if there are no allowed paths always return false
if (allowedPaths.empty) return false;
path = buildNormalizedPath(path);
log.vdebug("Evaluation against 'sync_list' for this path: ", path);
log.vdebug("[S]exclude = ", exclude);
log.vdebug("[S]exludeDirectMatch = ", exludeDirectMatch);
log.vdebug("[S]excludeMatched = ", excludeMatched);
log.vdebug("Evaluation against 'sync_list' for: ", path);
// unless path is an exact match, entire sync_list entries need to be processed to ensure
// negative matches are also correctly detected
@@ -243,31 +230,17 @@ private bool isPathExcluded(string path, string[] allowedPaths)
// is this an inclusion path or finer grained exclusion?
switch (allowedPath[0]) {
case '-':
// sync_list path starts with '-', this user wants to exclude this path
// allowed path starts with '-', this user wants to exclude this path
exclude = true;
// If the sync_list entry starts with '-/' offset needs to be 2, else 1
if (startsWith(allowedPath, "-/")){
// Offset needs to be 2
offset = 2;
} else {
// Offset needs to be 1
offset = 1;
}
offset = 1;
break;
case '!':
// sync_list path starts with '!', this user wants to exclude this path
// allowed path starts with '!', this user wants to exclude this path
exclude = true;
// If the sync_list entry starts with '!/' offset needs to be 2, else 1
if (startsWith(allowedPath, "!/")){
// Offset needs to be 2
offset = 2;
} else {
// Offset needs to be 1
offset = 1;
}
offset = 1;
break;
case '/':
// sync_list path starts with '/', this user wants to include this path
// allowed path starts with '/', this user wants to include this path
// but a '/' at the start causes matching issues, so use the offset for comparison
exclude = false;
offset = 1;
@@ -285,75 +258,33 @@ private bool isPathExcluded(string path, string[] allowedPaths)
// Generate the common prefix from the path vs the allowed path
auto comm = commonPrefix(path, allowedPath[offset..$]);
// Is path is an exact match of the allowed path?
// is path is an exact match of the allowed path
if (comm.length == path.length) {
// we have a potential exact match
// strip any potential '/*' from the allowed path, to avoid a potential lesser common match
string strippedAllowedPath = strip(allowedPath[offset..$], "/*");
if (path == strippedAllowedPath) {
// we have an exact path match
log.vdebug("exact path match");
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: direct match");
finalResult = false;
// direct match, break and go sync
break;
} else {
log.vdebug("Evaluation against 'sync_list' result: direct match - path to be excluded");
// do not set excludeMatched = true here, otherwise parental path also gets excluded
// flag exludeDirectMatch so that a 'wildcard match' will not override this exclude
exludeDirectMatch = true;
// final result
finalResult = true;
}
// the given path is contained in an allowed path
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: direct match");
finalResult = false;
// direct match, break and go sync
break;
} else {
// no exact path match, but something common does match
log.vdebug("something 'common' matches the input path");
auto splitAllowedPaths = pathSplitter(strippedAllowedPath);
string pathToEvaluate = "";
foreach(base; splitAllowedPaths) {
pathToEvaluate ~= base;
if (path == pathToEvaluate) {
// The input path matches what we want to evaluate against as a direct match
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: direct match for parental path item");
finalResult = false;
// direct match, break and go sync
break;
} else {
log.vdebug("Evaluation against 'sync_list' result: direct match for parental path item but to be excluded");
finalResult = true;
// do not set excludeMatched = true here, otherwise parental path also gets excluded
}
}
pathToEvaluate ~= dirSeparator;
}
log.vdebug("Evaluation against 'sync_list' result: direct match but to be excluded");
finalResult = true;
}
}
// Is path is a subitem/sub-folder of the allowed path?
// is path is a subitem of the allowed path
if (comm.length == allowedPath[offset..$].length) {
// The given path is potentially a subitem of an allowed path
// We want to capture sub-folders / files of allowed paths here, but not explicitly match other items
// if there is no wildcard
auto subItemPathCheck = allowedPath[offset..$] ~ "/";
if (canFind(path, subItemPathCheck)) {
// The 'path' includes the allowed path, and is 'most likely' a sub-path item
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: parental path match");
finalResult = false;
// parental path matches, break and go sync
break;
} else {
log.vdebug("Evaluation against 'sync_list' result: parental path match but must be excluded");
finalResult = true;
excludeMatched = true;
}
// the given path is a subitem of an allowed path
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: parental path match");
finalResult = false;
} else {
log.vdebug("Evaluation against 'sync_list' result: parental path match but to be excluded");
finalResult = true;
}
}
// Does the allowed path contain a wildcard? (*)
// does the allowed path contain a wildcard? (*)
if (canFind(allowedPath[offset..$], wildcard)) {
// allowed path contains a wildcard
// manually replace '*' for '.*' to be compatible with regex
@@ -361,29 +292,16 @@ private bool isPathExcluded(string path, string[] allowedPaths)
auto allowedMask = regex(regexCompatiblePath);
if (matchAll(path, allowedMask)) {
// regex wildcard evaluation matches
// if we have a prior pattern match for an exclude, excludeMatched = true
if (!exclude && !excludeMatched && !exludeDirectMatch) {
// nothing triggered an exclusion before evaluation against wildcard match attempt
if (!exclude) {
log.vdebug("Evaluation against 'sync_list' result: wildcard pattern match");
finalResult = false;
} else {
log.vdebug("Evaluation against 'sync_list' result: wildcard pattern matched but must be excluded");
log.vdebug("Evaluation against 'sync_list' result: wildcard pattern match but to be excluded");
finalResult = true;
excludeMatched = true;
}
}
}
}
// Interim results
log.vdebug("[F]exclude = ", exclude);
log.vdebug("[F]exludeDirectMatch = ", exludeDirectMatch);
log.vdebug("[F]excludeMatched = ", excludeMatched);
// If exclude or excludeMatched is true, then finalResult has to be true
if ((exclude) || (excludeMatched) || (exludeDirectMatch)) {
finalResult = true;
}
// results
if (finalResult) {
log.vdebug("Evaluation against 'sync_list' final result: EXCLUDED");

src/sync.d: 3055 changed lines. File diff suppressed because it is too large.


@@ -173,7 +173,6 @@ struct UploadSession
Progress p = new Progress(iteration);
p.title = "Uploading";
long fragmentCount = 0;
long fragSize = 0;
// Initialise the download bar at 0%
p.next();
@@ -182,23 +181,7 @@ struct UploadSession
fragmentCount++;
log.vdebugNewLine("Fragment: ", fragmentCount, " of ", iteration);
p.next();
log.vdebugNewLine("fragmentSize: ", fragmentSize, "offset: ", offset, " fileSize: ", fileSize );
fragSize = fragmentSize < fileSize - offset ? fragmentSize : fileSize - offset;
log.vdebugNewLine("Using fragSize: ", fragSize);
// fragSize must not be a negative value
if (fragSize < 0) {
// Session upload will fail
// not a JSON object - fragment upload failed
log.vlog("File upload session failed - invalid calculation of fragment size");
if (exists(sessionFilePath)) {
remove(sessionFilePath);
}
// set response to null as error
response = null;
return response;
}
long fragSize = fragmentSize < fileSize - offset ? fragmentSize : fileSize - offset;
// If the resume upload fails, we need to check for a return code here
try {
response = onedrive.uploadFragment(
@@ -228,7 +211,7 @@ struct UploadSession
// insert a new line as well, so that the below error is inserted on the console in the right location
log.vlog("\nFragment upload failed - received an exception response from OneDrive");
// display what the error is
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
displayOneDriveErrorMessage(e.msg);
// retry fragment upload in case error is transient
log.vlog("Retrying fragment upload");
}
@@ -245,7 +228,7 @@ struct UploadSession
// OneDrive threw another error on retry
log.vlog("Retry to upload fragment failed");
// display what the error is
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
displayOneDriveErrorMessage(e.msg);
// set response to null as the fragment upload was in error twice
response = null;
}
@@ -285,16 +268,16 @@ struct UploadSession
}
}
string getUploadSessionLocalFilePath() {
// return the session file path
string localPath = "";
if ("localPath" in session){
localPath = session["localPath"].str;
}
return localPath;
// Parse and display error message received from OneDrive
private void displayOneDriveErrorMessage(string message) {
log.error("ERROR: OneDrive returned an error with the following message:");
auto errorArray = splitLines(message);
log.error(" Error Message: ", errorArray[0]);
// extract 'message' as the reason
JSONValue errorMessage = parseJSON(replace(message, errorArray[0], ""));
log.error(" Error Reason: ", errorMessage["error"]["message"].str);
}
// save session details to temp file
private void save()
{
std.file.write(sessionFilePath, session.toString());


@@ -11,13 +11,8 @@ import std.stdio;
import std.string;
import std.algorithm;
import std.uri;
import std.json;
import std.traits;
import qxor;
import core.stdc.stdlib;
import log;
import config;
static import log;
shared string deviceName;
@@ -50,6 +45,28 @@ void safeRemove(const(char)[] path)
if (exists(path)) remove(path);
}
// returns the crc32 hex string of a file
string computeCrc32(string path)
{
CRC32 crc;
auto file = File(path, "rb");
foreach (ubyte[] data; chunks(file, 4096)) {
crc.put(data);
}
return crc.finish().toHexString().dup;
}
// returns the sha1 hash hex string of a file
string computeSha1Hash(string path)
{
SHA1 sha;
auto file = File(path, "rb");
foreach (ubyte[] data; chunks(file, 4096)) {
sha.put(data);
}
return sha.finish().toHexString().dup;
}
// returns the quickXorHash base64 string of a file
string computeQuickXorHash(string path)
{
@@ -61,16 +78,6 @@ string computeQuickXorHash(string path)
return Base64.encode(qxor.finish());
}
// returns the SHA256 hex string of a file
string computeSHA256Hash(string path) {
SHA256 sha256;
auto file = File(path, "rb");
foreach (ubyte[] data; chunks(file, 4096)) {
sha256.put(data);
}
return sha256.finish().toHexString().dup;
}
// converts wildcards (*, ?) to regex
Regex!char wild2regex(const(char)[] pattern)
{
@@ -116,43 +123,19 @@ Regex!char wild2regex(const(char)[] pattern)
}
// returns true if the network connection is available
bool testNetwork(Config cfg)
bool testNetwork()
{
// Use low level HTTP struct
auto http = HTTP();
http.url = "https://login.microsoftonline.com";
// DNS lookup timeout
http.dnsTimeout = (dur!"seconds"(cfg.getValueLong("dns_timeout")));
// Timeout for connecting
http.connectTimeout = (dur!"seconds"(cfg.getValueLong("connect_timeout")));
// Data Timeout for HTTPS connections
http.dataTimeout = (dur!"seconds"(cfg.getValueLong("data_timeout")));
// maximum time any operation is allowed to take
// This includes dns resolution, connecting, data transfer, etc.
http.operationTimeout = (dur!"seconds"(cfg.getValueLong("operation_timeout")));
// What IP protocol version should be used when using Curl - IPv4 & IPv6, IPv4 or IPv6
http.handle.set(CurlOption.ipresolve,cfg.getValueLong("ip_protocol_version")); // 0 = IPv4 + IPv6, 1 = IPv4 Only, 2 = IPv6 Only
// HTTP connection test method
http.dnsTimeout = (dur!"seconds"(5));
http.method = HTTP.Method.head;
// Attempt to contact the Microsoft Online Service
try {
log.vdebug("Attempting to contact online service");
try {
http.perform();
log.vdebug("Shutting down HTTP engine as successfully reached OneDrive Online Service");
http.shutdown();
return true;
} catch (SocketException e) {
// Socket issue
log.vdebug("HTTP Socket Issue");
log.error("Cannot connect to Microsoft OneDrive Service - Socket Issue");
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
return false;
} catch (CurlException e) {
// No network connection to OneDrive Service
log.vdebug("No Network Connection");
log.error("Cannot connect to Microsoft OneDrive Service - Network Connection Issue");
displayOneDriveErrorMessage(e.msg, getFunctionName!({}));
} catch (SocketException) {
return false;
}
}
@@ -168,7 +151,7 @@ bool readLocalFile(string path)
read(path,1);
} catch (std.file.FileException e) {
// unable to read the new local file
displayFileSystemErrorMessage(e.msg, getFunctionName!({}));
log.log("Skipping uploading this file as it cannot be read (file permissions or file corruption): ", path);
return false;
}
return true;
@@ -264,299 +247,6 @@ bool containsASCIIHTMLCodes(string path)
return m.empty;
}
// Parse and display error message received from OneDrive
void displayOneDriveErrorMessage(string message, string callingFunction)
{
writeln();
log.error("ERROR: Microsoft OneDrive API returned an error with the following message:");
auto errorArray = splitLines(message);
log.error(" Error Message: ", errorArray[0]);
// Extract 'message' as the reason
JSONValue errorMessage = parseJSON(replace(message, errorArray[0], ""));
// extra debug
log.vdebug("Raw Error Data: ", message);
log.vdebug("JSON Message: ", errorMessage);
// What is the reason for the error
if (errorMessage.type() == JSONType.object) {
// configure the error reason
string errorReason;
string requestDate;
string requestId;
// set the reason for the error
try {
// Use error_description as reason
errorReason = errorMessage["error_description"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
// set the reason for the error
try {
// Use ["error"]["message"] as reason
errorReason = errorMessage["error"]["message"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
// Display the error reason
if (errorReason.startsWith("<!DOCTYPE")) {
// a HTML Error Reason was given
log.error(" Error Reason: A HTML Error response was provided. Use debug logging (--verbose --verbose) to view this error");
log.vdebug(errorReason);
} else {
// a non HTML Error Reason was given
log.error(" Error Reason: ", errorReason);
}
// Get the date of request if available
try {
// Use ["error"]["innerError"]["date"] as date
requestDate = errorMessage["error"]["innerError"]["date"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
// Get the request-id if available
try {
// Use ["error"]["innerError"]["request-id"] as request-id
requestId = errorMessage["error"]["innerError"]["request-id"].str;
} catch (JSONException e) {
// we dont want to do anything here
}
// Display the date and request id if available
if (requestDate != "") log.error(" Error Timestamp: ", requestDate);
if (requestId != "") log.error(" API Request ID: ", requestId);
}
// Where in the code was this error generated
log.vlog(" Calling Function: ", callingFunction);
}
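An illustrative, standalone sketch of the parsing performed above: the first line of the exception text is treated as the error message and the remainder is parsed as the OneDrive JSON error body. The sample error string below is made up for demonstration only:

```d
import std.array : replace;
import std.json : parseJSON;
import std.stdio : writeln;
import std.string : splitLines;

void main()
{
    // Made-up OneDrive-style error text: status line followed by a JSON body
    string message = "HTTP request returned status code 404 (Not Found)\n"
        ~ `{"error": {"code": "itemNotFound", "message": "The resource could not be found.",`
        ~ ` "innerError": {"date": "2023-01-01T00:00:00", "request-id": "00000000-aaaa"}}}`;
    auto errorArray = splitLines(message);
    writeln("Error Message: ", errorArray[0]);
    // Strip the status line, then parse the remaining JSON error body
    auto errorMessage = parseJSON(replace(message, errorArray[0], ""));
    writeln("Error Reason: ", errorMessage["error"]["message"].str);
    writeln("API Request ID: ", errorMessage["error"]["innerError"]["request-id"].str);
}
```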
// Parse and display error message received from the local file system
void displayFileSystemErrorMessage(string message, string callingFunction)
{
writeln();
log.error("ERROR: The local file system returned an error with the following message:");
auto errorArray = splitLines(message);
// What was the error message
log.error(" Error Message: ", errorArray[0]);
// Where in the code was this error generated
log.vlog(" Calling Function: ", callingFunction);
// If we are out of disk space (despite download reservations) we need to exit the application
ulong localActualFreeSpace = to!ulong(getAvailableDiskSpace("."));
if (localActualFreeSpace == 0) {
// force exit
exit(-1);
}
}
// Get the function name that is being called to assist with identifying where an error is being generated
string getFunctionName(alias func)() {
return __traits(identifier, __traits(parent, func)) ~ "()\n";
}
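The template above relies on __traits(parent) of an anonymous lambda resolving to the enclosing function, which is why callers pass a throwaway {} literal. A standalone sketch of the same pattern; the caller name performSync is illustrative:

```d
import std.stdio : write;

// Same template as above
string getFunctionName(alias func)() {
    return __traits(identifier, __traits(parent, func)) ~ "()\n";
}

void performSync() {
    // The {} lambda is declared inside performSync, so its parent is performSync
    write(" Calling Function: ", getFunctionName!({}));   // prints "performSync()"
}

void main() { performSync(); }
```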
// Get the latest release version from GitHub
JSONValue getLatestReleaseDetails() {
// Import curl just for this function
import std.net.curl;
char[] content;
JSONValue githubLatest;
JSONValue versionDetails;
string latestTag;
string publishedDate;
try {
content = get("https://api.github.com/repos/abraunegg/onedrive/releases/latest");
} catch (CurlException e) {
// curl generated an error - meaning we could not query GitHub
log.vdebug("Unable to query GitHub for latest release");
}
try {
githubLatest = content.parseJSON();
} catch (JSONException e) {
// unable to parse the content JSON, set to blank JSON
log.vdebug("Unable to parse GitHub JSON response");
githubLatest = parseJSON("{}");
}
// githubLatest has to be a valid JSON object
if (githubLatest.type() == JSONType.object){
// use the returned tag_name
if ("tag_name" in githubLatest) {
// use the provided tag
// "tag_name": "vA.B.CC" and strip 'v'
latestTag = strip(githubLatest["tag_name"].str, "v");
} else {
// set to latestTag zeros
log.vdebug("'tag_name' unavailable in JSON response. Setting GitHub 'tag_name' release version to 0.0.0");
latestTag = "0.0.0";
}
// use the returned published_at date
if ("published_at" in githubLatest) {
// use the provided value
publishedDate = githubLatest["published_at"].str;
} else {
// set to v2.0.0 release date
log.vdebug("'published_at' unavailable in JSON response. Setting GitHub 'published_at' date to 2018-07-18T18:00:00Z");
publishedDate = "2018-07-18T18:00:00Z";
}
} else {
// JSONValue is not an object
log.vdebug("Invalid JSON Object. Setting GitHub 'tag_name' release version to 0.0.0");
latestTag = "0.0.0";
log.vdebug("Invalid JSON Object. Setting GitHub 'published_at' date to 2018-07-18T18:00:00Z");
publishedDate = "2018-07-18T18:00:00Z";
}
// return the latest github version and published date as our own JSON
versionDetails = [
"latestTag": JSONValue(latestTag),
"publishedDate": JSONValue(publishedDate)
];
// return JSON
return versionDetails;
}
// Get the release details from the 'current' running version
JSONValue getCurrentVersionDetails(string thisVersion) {
// Import curl just for this function
import std.net.curl;
char[] content;
JSONValue githubDetails;
JSONValue versionDetails;
string versionTag = "v" ~ thisVersion;
string publishedDate;
try {
content = get("https://api.github.com/repos/abraunegg/onedrive/releases");
} catch (CurlException e) {
// curl generated an error - meaning we could not query GitHub
log.vdebug("Unable to query GitHub for release details");
}
try {
githubDetails = content.parseJSON();
} catch (JSONException e) {
// unable to parse the content JSON, set to blank JSON
log.vdebug("Unable to parse GitHub JSON response");
githubDetails = parseJSON("{}");
}
// githubDetails has to be a valid JSON array
if (githubDetails.type() == JSONType.array){
foreach (searchResult; githubDetails.array) {
// searchResult["tag_name"].str;
if (searchResult["tag_name"].str == versionTag) {
log.vdebug("MATCHED version");
log.vdebug("tag_name: ", searchResult["tag_name"].str);
log.vdebug("published_at: ", searchResult["published_at"].str);
publishedDate = searchResult["published_at"].str;
}
}
if (publishedDate.empty) {
// empty .. no version match ?
// set to v2.0.0 release date
log.vdebug("'published_at' unavailable in JSON response. Setting GitHub 'published_at' date to 2018-07-18T18:00:00Z");
publishedDate = "2018-07-18T18:00:00Z";
}
} else {
// JSONValue is not an Array
log.vdebug("Invalid JSON Array. Setting GitHub 'published_at' date to 2018-07-18T18:00:00Z");
publishedDate = "2018-07-18T18:00:00Z";
}
// return the latest github version and published date as our own JSON
versionDetails = [
"versionTag": JSONValue(thisVersion),
"publishedDate": JSONValue(publishedDate)
];
// return JSON
return versionDetails;
}
// Check the application version versus GitHub latestTag
void checkApplicationVersion() {
// Get the latest details from GitHub
JSONValue latestVersionDetails = getLatestReleaseDetails();
string latestVersion = latestVersionDetails["latestTag"].str;
SysTime publishedDate = SysTime.fromISOExtString(latestVersionDetails["publishedDate"].str).toUTC();
SysTime releaseGracePeriod = publishedDate;
SysTime currentTime = Clock.currTime().toUTC();
// drop fraction seconds
publishedDate.fracSecs = Duration.zero;
currentTime.fracSecs = Duration.zero;
releaseGracePeriod.fracSecs = Duration.zero;
// roll the grace period forward to allow distributions to catch up based on their release cycles
releaseGracePeriod = releaseGracePeriod.add!"months"(1);
// what is this clients version?
auto currentVersionArray = strip(strip(import("version"), "v")).split("-");
string applicationVersion = currentVersionArray[0];
// debug output
log.vdebug("applicationVersion: ", applicationVersion);
log.vdebug("latestVersion: ", latestVersion);
log.vdebug("publishedDate: ", publishedDate);
log.vdebug("currentTime: ", currentTime);
log.vdebug("releaseGracePeriod: ", releaseGracePeriod);
// display details if not current
// is application version is older than available on GitHub
if (applicationVersion != latestVersion) {
// application version is different
bool displayObsolete = false;
// what warning do we present?
if (applicationVersion < latestVersion) {
// go get this running version details
JSONValue thisVersionDetails = getCurrentVersionDetails(applicationVersion);
SysTime thisVersionPublishedDate = SysTime.fromISOExtString(thisVersionDetails["publishedDate"].str).toUTC();
thisVersionPublishedDate.fracSecs = Duration.zero;
log.vdebug("thisVersionPublishedDate: ", thisVersionPublishedDate);
// the running version grace period is its release date + 1 month
SysTime thisVersionReleaseGracePeriod = thisVersionPublishedDate;
thisVersionReleaseGracePeriod = thisVersionReleaseGracePeriod.add!"months"(1);
log.vdebug("thisVersionReleaseGracePeriod: ", thisVersionReleaseGracePeriod);
// is this running version obsolete ?
if (!displayObsolete) {
// if releaseGracePeriod > currentTime
// display an information warning that there is a new release available
if (releaseGracePeriod.toUnixTime() > currentTime.toUnixTime()) {
// inside release grace period ... set flag to false
displayObsolete = false;
} else {
// outside grace period
displayObsolete = true;
}
}
// display version response
writeln();
if (!displayObsolete) {
// display the new version is available message
log.logAndNotify("INFO: A new onedrive client version is available. Please upgrade your client version when possible.");
} else {
// display the obsolete message
log.logAndNotify("WARNING: Your onedrive client version is now obsolete and unsupported. Please upgrade your client version.");
}
log.log("Current Application Version: ", applicationVersion);
log.log("Version Available: ", latestVersion);
writeln();
}
}
}
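A standalone sketch of the grace-period arithmetic above, using illustrative dates: a newer release only triggers the obsolete warning once a month has passed since it was published, giving distributions time to catch up:

```d
import std.datetime : Clock, SysTime;
import std.stdio : writeln;

void main()
{
    // Illustrative published date for the latest GitHub release
    SysTime publishedDate = SysTime.fromISOExtString("2023-01-15T18:00:00Z").toUTC();
    SysTime releaseGracePeriod = publishedDate;
    // roll the grace period forward one month, as the client does above
    releaseGracePeriod = releaseGracePeriod.add!"months"(1);
    SysTime currentTime = Clock.currTime().toUTC();
    // true once the grace period has lapsed, i.e. the obsolete warning would be shown
    writeln(releaseGracePeriod.toUnixTime() <= currentTime.toUnixTime());
}
```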
// Unit Tests
unittest
{
assert(multiGlobMatch(".hidden", ".*"));