[pull] master from rclone:master #1 (Open)
pull wants to merge 958 commits into BingoKingo:master from rclone:master
Conversation
These tests fail for --vfs-cache-mode minimal on Linux for the same reason they don't work properly with --vfs-cache-mode off
This fixes various cache invalidation bugs
This adds a new optional parameter to the backend, specifying a path to a unix domain socket to connect to, instead of the specified URL. The URL itself is still used for the rest of the HTTP client, allowing host and subpath to stay intact. This allows using rclone with the webdav backend to connect to a WebDAV server provided at a Unix Domain socket:

rclone serve webdav --addr unix:///tmp/my.socket remote:path
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
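The mechanism is a custom dialer: the HTTP request keeps the URL's host and subpath, but the underlying connection goes to the socket. A minimal Go sketch (the socket path and URL are illustrative, and this is not rclone's actual code):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	socketPath := "/tmp/my.socket" // hypothetical socket path
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the network/address derived from the URL and
			// dial the Unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", socketPath)
			},
		},
	}
	// The URL's host ("localhost") and path still appear in the HTTP
	// request; only the transport-level connection changes.
	resp, err := client.Get("http://localhost/remote/path")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```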
This adds an additional flag, --unix-socket; if supplied, rclone connects to the given unix socket:

rclone rcd --rc-addr unix:///tmp/my.socket
rclone rc --unix-socket /tmp/my.socket core/stats
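On the serving side, a plain net.Listener bound to a unix socket can be handed to the HTTP server. A minimal sketch of what `rclone rcd --rc-addr unix:///tmp/my.socket` does conceptually (the path and handler are placeholders, not rclone's rc implementation):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Listen on a unix domain socket instead of a TCP port.
	l, err := net.Listen("unix", "/tmp/my.socket") // hypothetical path
	if err != nil {
		panic(err)
	}
	defer l.Close()
	http.HandleFunc("/core/stats", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, `{"bytes":0}`) // placeholder response
	})
	// Serve HTTP over the socket; clients connect with --unix-socket.
	if err := http.Serve(l, nil); err != nil {
		fmt.Println("serve:", err)
	}
}
```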
Before this change, macOS-specific metadata was not preserved by rclone, even for local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata limited to xattrs). Additionally, rclone did not take advantage of APFS's native "cloning" functionality for fast and deduplicated transfers. After this change, local (on macOS only) supports "server-side copy" similarly to other remotes, and achieves this by using (when possible) macOS's native APFS "cloning", which is the same underlying mechanism deployed when a user duplicates a file via the Finder UI. This has several advantages over the previous behavior:
- It is extremely fast (even large files can be cloned instantly).
- It is very efficient in terms of storage, as it automatically deduplicates when possible (so that having two identical files does not consume more storage than having just one). The concept is similar to a "hard link", but subsequent modifications will not affect the original file.
- It preserves Mac-specific metadata to the maximum degree, including not only xattrs but also metadata not easily settable by other methods, including Finder and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it falls back to server-side "copy" (still preserving metadata but using more disk storage). It is only used when both remotes are local (and not wrapped by other remotes, such as crypt). The behavior of local on non-mac systems is unchanged.
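On darwin the clone can be performed with the clonefile(2) syscall, which golang.org/x/sys/unix exposes as unix.Clonefile. A minimal sketch of clone-with-fallback, assuming that package (the paths are hypothetical and this is not rclone's actual implementation):

```go
//go:build darwin

package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries an APFS clone first (instant, deduplicated, keeps
// Mac-specific metadata) and falls back to a byte-for-byte copy when
// the volume does not support cloning (e.g. non-APFS).
func cloneOrCopy(src, dst string) error {
	if err := unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW); err == nil {
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths; note Clonefile fails if dst already exists.
	if err := cloneOrCopy("/tmp/src.bin", "/tmp/dst.bin"); err != nil {
		fmt.Println("copy failed:", err)
	}
}
```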
This flag allows users to disable the reflink cloning feature and instead force "deep" copies, for certain use cases where data redundancy is preferable. It is functionally equivalent to using `--disable Copy` on local.
Sometimes (particularly on macOS amd64) the serve s3 test fails with TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an object but it did. This is likely caused by a race between the serve s3 goroutine deleting the half uploaded file and the fstests code looking for it to not exist. This fix treats it like any other eventual consistency problem and retries the check using the test framework.
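A generic sketch of that retry shape, with a hypothetical check callback standing in for the fstests framework's actual helpers:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryCheck polls check until it succeeds or attempts run out,
// sleeping between tries to let the deleting goroutine win the race.
func retryCheck(attempts int, delay time.Duration, check func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryCheck(5, time.Second, func() error {
		return errors.New("object still exists") // placeholder check
	})
	fmt.Println(err)
}
```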
This was causing a conflict error. This was fixed by renaming the existing file first and, if the copy was successful, deleting it, or else renaming it back. (The same fix was applied in several of these commits; a sketch of the pattern follows below.)
This was causing a conflict error. This was fixed by checking for the existing object and deleting it after the file was server-side copied.
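A sketch of that rename-then-copy pattern in the abstract; the callbacks stand in for backend operations and are not rclone's actual interface:

```go
package sketch

import "fmt"

// overwriteViaRename works around servers that reject copying onto an
// existing file with a conflict error: move the existing file aside,
// copy, then delete the moved file on success or rename it back on
// failure.
func overwriteViaRename(renameAside, copyFile, deleteOld, renameBack func() error) error {
	if err := renameAside(); err != nil {
		return err
	}
	if err := copyFile(); err != nil {
		// Copy failed: restore the original file.
		if rerr := renameBack(); rerr != nil {
			return fmt.Errorf("copy failed (%w) and restore failed (%v)", err, rerr)
		}
		return err
	}
	// Copy succeeded: the set-aside original can go.
	return deleteOld()
}
```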
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](golang-jwt/jwt@v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Before this change, upgrading the sftp package to v1.13.7 caused a deadlock in the tests. This was caused by additional locking in the sftp package exposing a bad choice by the rclone code. See pkg/sftp#603 and thanks to @puellanivis for the fix suggestion.
Before this change, if rclone was used as a library and logrus was used after a call to rc `sync/bisync`, logging no longer worked and wrote to a closed pipe. This change restores the output correctly. Fixes #8158
We changed the precision of the onedrive personal backend in c053429 from 1ms to 1s, but the tests did not get updated. This changes the time tests to use `fstest.AssertTimeEqualWithPrecision`, which compares with the configured precision, so hopefully they won't break again.
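The essence of a precision-aware comparison, as a minimal sketch (this helper is illustrative, not the fstest implementation):

```go
package sketch

import "time"

// timeEqualWithPrecision treats two times as equal when they differ
// by no more than the backend's declared precision.
func timeEqualWithPrecision(want, got time.Time, precision time.Duration) bool {
	dt := want.Sub(got)
	if dt < 0 {
		dt = -dt
	}
	return dt <= precision
}
```

With a 1s precision, timestamps that differ only in the sub-second part compare equal, which is exactly the slack the onedrive personal backend now needs.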
Before this change, if writing to a local backend with --metadata and --links, and the incoming metadata contained mode or ownership information, rclone would apply the mode/ownership to the destination of the link, not the link itself. This fixes the problem by using the link-safe syscall variants lchown/fchmodat when --links and --metadata are in use. Note that Linux does not support setting permissions on symlinks, so rclone emits a debug message in this case. This also fixes setting times on symlinks on Windows, which wasn't implemented for atime and mtime, and was incorrectly setting the target of the symlink for btime. See: GHSA-hrxh-9w67-g4cv
This reverts commit 1e2b354.
Before this change, if writing to a local backend with --metadata and --links, and the incoming metadata contained mode or ownership information, rclone would apply the mode/ownership to the destination of the link, not the link itself. This fixes the problem by using the link-safe syscall variants lchown/fchmodat when --links and --metadata are in use. Note that Linux does not support setting permissions on symlinks, so rclone emits a debug message in this case. This also fixes setting times on symlinks on Windows, which wasn't implemented for atime and mtime, and was incorrectly setting the target of the symlink for btime. See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
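A minimal sketch of the link-safe variants via golang.org/x/sys/unix; the symlink path and ids are hypothetical and this is not rclone's actual code. Note how fchmodat with AT_SYMLINK_NOFOLLOW reports EOPNOTSUPP on Linux, which is why rclone can only emit a debug message there:

```go
//go:build linux

package main

import (
	"errors"
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	link := "/tmp/example-link" // hypothetical symlink
	// Lchown sets ownership on the link itself, not its target.
	if err := unix.Lchown(link, 1000, 1000); err != nil {
		fmt.Println("lchown:", err)
	}
	// Linux cannot set permissions on symlinks: the wrapper returns
	// EOPNOTSUPP when AT_SYMLINK_NOFOLLOW is requested.
	err := unix.Fchmodat(unix.AT_FDCWD, link, 0o644, unix.AT_SYMLINK_NOFOLLOW)
	if errors.Is(err, unix.EOPNOTSUPP) {
		fmt.Println("symlink permissions unsupported on Linux; debug only")
	}
}
```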
Before this change, attempting to download a file with `Content-Encoding: gzip` from Cloudflare R2 gave this error:

corrupted on transfer: sizes differ src 0 vs dst 999

This was caused by the SDK v2 overriding our attempt to set `Accept-Encoding: gzip`. This fixes the problem by disabling the middleware that does that overriding.
* `head -number` is not allowed by POSIX.1-2024; use `head -n number` instead. See: https://pubs.opengroup.org/onlinepubs/9799919799/utilities/head.html and https://devmanual.gentoo.org/tools-reference/head-and-tail/index.html
CEPH uses a special bucket form `tenant:bucket` for multitenant access using S3, as documented here: https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3 However, when doing multipart uploads, the `tenant:` was missing from the `Bucket` in the reply from `CreateMultipart`, which rclone was using to build the `UploadPart` request. This caused a 404 failure. This may be a CEPH bug, but it is easy to work around. This changes the code to use the `Bucket` and `Key` that we used in `CreateMultipart` in `UploadPart`, rather than the ones returned from `CreateMultipart`, which fixes the problem. See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
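A sketch of the workaround with the AWS SDK for Go v2 (a current SDK version is assumed; the bucket and key are illustrative, and this is not rclone's actual backend code):

```go
package sketch

import (
	"context"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadFirstPart reuses the Bucket/Key from the CreateMultipartUpload
// request when building UploadPart, instead of the Bucket echoed back
// in the response, which CEPH returns without the "tenant:" prefix.
func uploadFirstPart(ctx context.Context, client *s3.Client) error {
	bucket := aws.String("tenant:bucket") // CEPH multitenant bucket form
	key := aws.String("path/to/object")
	create, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: bucket,
		Key:    key,
	})
	if err != nil {
		return err
	}
	// Deliberately not create.Bucket / create.Key: reusing the echoed
	// bucket would drop "tenant:" and make UploadPart return 404.
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     bucket,
		Key:        key,
		UploadId:   create.UploadId,
		PartNumber: aws.Int32(1),
		Body:       strings.NewReader("part data"),
	})
	return err
}
```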
Also update the Filescom icon.
See Commits and Changes for more details.
Created by pull[bot] (v2.0.0-alpha.1)
Can you help keep this open source service alive? 💖 Please sponsor : )