Caja (file browser for MATE, generally better than Nautilus, Dolphin, Thunar):

1. Does not give destination files temporary names while copying is in progress.
2. Does not provide an option to verify-after-copying or even just compare-these-folders (you have to run a separate app such as kdiff3, not included, and re-navigate to the folders in question). If a copy operation is interrupted in the middle of a file, there is no clear indication of whether you have the whole file or not.

#2 at least is a problem with just about every file browser in existence -- and yet incompletely copied files are a quite common problem.

Also, why is copying via browser so much slower than rsync? Why don't browsers offer an option to use rsync for copying multiple files?


@woozle I think the answer to "why is it slower" is that rsync first determines what files to transfer, then streams them all back-to-back; while HTTP clients generally have to make a separate request for each file, so you get extra delay due to round-trip-time between each file. In the worst case, the HTTP client might close and re-open the TCP connection every time, which would also trigger TCP's slow-start behavior.

I think it might be easy to encapsulate the rsync protocol in a single HTTP request, at least if you're retrieving files from the server, but that would require special code on both the client and the server.

You'd get a similarly good result if your server could hand you a single tarball or zip file of all the files you want. That, at least, would work with an unmodified HTTP client, so you'd only need server-side support.

Whether that's worth implementing is another question 🤔🤷

@jamey When I say "browser", I mean "file browser" not "web browser". The connection is using sftp -- which is also what rsync uses (or at least *can* use, and that's always how I've used it).

That said, it *could* be making a separate request for each file -- but that re-opens the question of why it's doing that.

And yes, zipping up the files remotely would be another option to offer.

The option to begin copying files right away, without first accumulating totals for number of files and total size (which can be done concurrently), would also be a time-saver. (...and if it's going to accumulate those totals, why not show the discrepancy when the copy is incomplete?)

Another wish-item: in cases where the copy completed but there were errors, how about listing the files that gave errors?

A clickable list, even, so you can browse directly to the source file and resolve the issue, or try the operation again.

@woozle Oh, even though I saw that you were talking about file managers I still managed to get confused. 😅

I don't think rsync ever uses sftp? It can tunnel over SSH, and sftp also tunnels over SSH, but as far as I know they don't have anything else in common.

I don't know anything about what protocol sftp uses. I wouldn't be surprised if it requires a separate request from the client for each file, but I would be disappointed.

Tunneling tar or zip over SSH is easy and the client could do that without any special provisions on the server. So as long as the server has either rsync or an archiver installed, any file manager that can use sftp could use that instead. I agree, it's weird that they don't!
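The tar-over-SSH trick can be sketched in one pipeline. Here it is demonstrated with local stand-in directories; over a network, the left-hand side would be wrapped in `ssh user@host '...'` (hostname and paths being placeholders), and the pipe semantics are identical:

```shell
set -e
# Local stand-in directories for the remote and local ends.
mkdir -p /tmp/tar_src /tmp/tar_dst
echo data > /tmp/tar_src/f.txt
# One streamed archive, one connection, no per-file round trips:
tar cf - -C /tmp/tar_src . | tar xf - -C /tmp/tar_dst
cat /tmp/tar_dst/f.txt
```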


@jamey You're right that rsync doesn't use sftp but rather tunnels over ssh, as sftp does; they both work with the same security keys, which is why I got confused.

SFTP is its own protocol, however.

@woozle Ah-hah, after reading the last draft spec for SFTP: it does allow sending new requests before previous ones have completed. So a client could in theory do everything you want with it, limited only by how many requests the server is willing to read before blocking. Unfortunately people rarely bother using those sorts of features correctly 😢
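For what it's worth, OpenSSH's command-line sftp client does expose this kind of pipelining: its `-R` flag sets how many requests may be outstanding at once, and `-B` sets the buffer size per request. The sketch below only prints the invocation (user@host and the paths are placeholders, so actually running it would need a real server):

```shell
# user@host and the file paths are placeholders; we print the
# invocation rather than run it, so the sketch works without a server.
# -R: number of requests allowed in flight (the pipelining above);
# -B: bytes per read/write request.
cmd="sftp -R 256 -B 32768 user@host:/remote/file /local/file"
echo "$cmd"
```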
