Any chance this work can be upstreamed into mainline SSH? I'd love to have better performance for SSH, but I'm probably not going to install and remember to use this just for the few times it would be relevant.

I doubt this would ever be accepted upstream. That said, if one wants speed, play around with lftp [1]. It has a mirror subsystem that can replicate much of rsync's functionality against a chrooted, sftp-only destination, and it can use multiple TCP/SFTP streams both across a batch of files and within a single file, meaning one can saturate just about any upstream link. I have used this for transferring massive Postgres backups, and because I am paranoid when applications automatically do multipart transfers, I include a checksum file for the source and then verify the destination files.

The only downside I have found with lftp is that, since there is no corresponding daemon on the destination as there is with rsync, directory enumeration can be slow if there are a lot of nested sub-directories. Oh, and the syntax is a little odd, for me anyway; I always have to look at my existing scripts when setting up new automation.

Demo to play with, download only. Try different values. This will be faster on your servers, especially anything within the data-center.

    ssh mirror@mirror.newsdump.org # do this once to accept key as ssh-keyscan will choke on my big banner

    mkdir -p /dev/shm/test && cd /dev/shm/test

    lftp -u mirror, -e "mirror --parallel=4 --use-pget=8 --no-perms --verbose /pub/big_file_test/ /dev/shm/test;bye" sftp://mirror.newsdump.org

For automation, add --loop to repeat the job until nothing has changed.

[1] - https://linux.die.net/man/1/lftp

The normal answer that I have heard to the performance problems in the conversion from scp to sftp is to use rsync.

The design of sftp is such that it cannot exploit "TCP sliding windows" to maximize bandwidth on high-latency connections. Thus, the migration from scp to sftp has involved a performance loss, which is well-known.

https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers...

The rsync question is not a workable answer, as OpenBSD has reimplemented the rsync protocol in a new codebase:

https://www.openrsync.org/

An attempt to combine the BSD-licensed openrsync with OpenSSH would likely see it stripped out of GPL-focused distributions, where the original GPL rsync has long standing.

It would be more straightforward to design a new SFTP implementation that implements sliding windows.

I understand (but have not measured) that forcibly reverting to the original scp protocol will also raise performance in high-latency conditions. This does introduce an attack surface, should not be the default transfer tool, and demands thoughtful care.

https://lwn.net/Articles/835962/

I included LFTP using mirror+sftp in my example because it is a secure way to give less-than-trusted people access to files, and one can work around the lack of sliding windows by spawning as many TCP flows as one wishes with LFTP. I would love to see SFTP evolve to use sliding windows, but for now, using it in the data-center or over WAN-accelerated links is still fast.

Rsync is great when moving files between trusted systems that one has a shell on, but the downside is that rsync cannot split a file into multiple streams, so there is still a limit set by the source and destination buffers and the RTT. One also has to either give people a shell, bolt on some clunky wrapper to prevent a shell, or use the native rsync daemon on port 873, which is not encrypted. Some people break up jobs on the client side and spawn multiple rsync jobs in the background. It appears that openrsync is still very much a work in progress.
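
A rough sketch of that client-side splitting, assuming GNU findutils and made-up paths and host (adjust to taste):

    # run 4 single-file rsync jobs at a time; -R keeps the relative path on the far end
    cd /data && find pgbackup -type f -print0 \
        | xargs -0 -P4 -I{} rsync -aR {} user@dest.example.com:/data/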

SCP is being (or has been) deprecated, but the binaries still exist for now. People who want to keep it will have to hold onto old binaries, and should probably compile them statically, as the linked libraries will likely go away at some point.

The scp program switched to calling sftp as the server in OpenSSH 9.0, and notably Windows is now shipping 9.5, so a large segment of scp users are now invoking sftp behind the scenes.

If you want to use the historic scp server instead, a command line option is provided to allow this:

"In case of incompatibility, the scp(1) client may be instructed to use the legacy scp/rcp using the -O flag."

https://www.openssh.org/releasenotes.html

The old scp behavior hasn't been removed, but you need to specifically request it. It is not the default.
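
For example (hypothetical file and host):

    scp -O bigfile user@host.example.com:/data/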

It would seem to me that an alternate invocation for file transfer could be tested against sftp in high latency situations:

  ssh yourhost 'cat somefile' > somefile

That would be slightly faster than tar, which adds some overhead. Using tar on both sides allows transfer of special files and soft links, and retains hard links, none of which scp or sftp will do.

  ssh yourhost 'tar cf - yourdir' | tar xpf -

Windows has also recently added a tar command.

Keep in mind that SCP/SSH might be faster in some cases than SFTP, but in both cases it is still limited to a 2MB application-layer receive window, which is drastically undersized in a lot of situations. It doesn't matter what the TCP window is set to, because the OpenSSH window overrides that value. Basically, if your bandwidth-delay product is more than 2MB (e.g. 1Gbps @ 17ms RTT) you're going to be application-limited by OpenSSH. HPN-SSH gets most of its performance benefit by normalizing the application-layer receive window to the TCP receive window (up to 128MB). In some cases you'll see a 100X throughput improvement on well-tuned hosts over a high-delay path.
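
A quick back-of-the-envelope check of that example, as a one-liner (the numbers are just illustrative):

    # 1 Gbit/s at 17 ms RTT -> a BDP of roughly 2.1 MB, just over the 2MB window
    awk 'BEGIN { bw = 1e9/8; rtt = 0.017; printf "BDP = %.2f MB\n", bw * rtt / 1e6 }'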

If your BDP is less than 2MB you still might get some benefit if you are CPU limited and use the parallel ciphers. However, the fastest cipher is AES-GCM and we haven't parallelized that as of yet (that's next on the list).

When I need speed, I drop down to FTP/rcp or some other cleartext protocol.

Moving a terabyte database in an upgrade, I have connected three ports directly (no switch), then used xargs to keep all three connections busy transferring the 2GB data files. I can get the transfer done in under an hour this way.
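
Roughly like this, with made-up paths and link addresses (the real layout will differ):

    # deal the data files across three direct links, one transfer stream per link
    ls /db/base/*.dat | sed -n '1~3p' | xargs -I{} rcp {} 10.0.0.1:/db/base/ &
    ls /db/base/*.dat | sed -n '2~3p' | xargs -I{} rcp {} 10.0.0.2:/db/base/ &
    ls /db/base/*.dat | sed -n '3~3p' | xargs -I{} rcp {} 10.0.0.3:/db/base/ &
    wait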

I don't currently have a performance need for an encrypted transfer, but one may arise.

I fully understand that. We're using this, along with parsyncfp2 (which you should check out), to move 1.5PB of data a month across a 40Gb link. Not saying that HPN-SSH is only useful in that context, but different people certainly do have different needs.

If encryption was absolutely required, I might try it over s_client/s_server or two stunnels, one in client=yes mode.

I'm assuming that would have different limits than you outlined above, although I don't think they do multi core.
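
A minimal sketch of the s_server/s_client idea (hypothetical host and cert, untested; the window caveats above may or may not apply):

    # receiver: write whatever arrives to somefile
    openssl s_server -quiet -accept 9000 -cert cert.pem -key key.pem > somefile
    # sender: -no_ign_eof so s_client exits when the file ends
    openssl s_client -quiet -no_ign_eof -connect receiver.example.com:9000 < somefile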

Rsync commonly uses SSH as the transport layer so it won't necessarily be any faster than SFTP unless you are using the rsync daemon (usually on port 873). However, the rsync daemon won't provide any encryption and I can't suggest using it unless it's on a private network.
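
For reference, the two invocations look something like this (hypothetical host and module names):

    # over ssh: encrypted, but subject to the OpenSSH window limits discussed above
    rsync -av /data/ user@host.example.com:/data/
    # against an rsync daemon on port 873: no encryption
    rsync -av /data/ rsync://host.example.com/data/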

> The rsync question is not a workable answer, as OpenBSD has reimplemented the rsync protocol in a new codebase

I thought openrsync existed solely because of RPKI. Even OpenBSD devs recommend using the real version from ports.

Wow, I hadn't heard of this before. You're saying it can "chunk" large files when operating against a remote sftp-subsystem (OpenSSH)?

I often find myself needing to move a single large file rather than many smaller ones but TCP overhead and latency will always keep speeds down.

Not every OS or SSH daemon supports byte ranges, but most up-to-date Linux systems and OpenSSH absolutely do. One should not assume this exists on legacy systems and daemons.
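
As a concrete illustration, lftp's pget does that chunking client-side; reusing the demo host from upthread with a hypothetical file name:

    # -n 8 opens 8 parallel byte-range reads of a single file
    lftp -u mirror, -e "pget -n 8 /pub/big_file_test/somebigfile -o /dev/shm/test/somebigfile; bye" sftp://mirror.newsdump.org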

Byte ranges are the only way to access files over sftp. Look at the read and write requests in https://datatracker.ietf.org/doc/html/draft-ietf-secsh-filex...

I agree but there are legacy daemons that do not follow the spec. Most here will never see them in their lifetime but I had to deal with it in the financial world. People would be amazed and terrified at all the old non-standard crap that their payroll data is flying across. They just ignore the range and send the entire file. I am happy to not have to deal with that any more.

I use lftp a lot because of its better UI compared to sftp. However, for large files, even with scp I can pin GigE with an old Xeon-D system acting as a server.

Yes, for local access this is my experience too. For trans-oceanic file transfers I can really see the limits and parallelization is essential.

Also, upstream is extremely well audited. That's a huge benefit I don't want to lose by using a fork.

I do want to say that HPN-SSH is also well audited; you can see the results of CI tests on GitHub. We also do fuzz testing, static analysis, extensive code reviews, and functionality testing. We build directly on top of OpenSSH and work with them when we can. We don't touch the authentication code, and the parallel ciphers are built directly on top of OpenSSL.

I've been developing it for 20+ years and if you have any specific questions I'd be happy to answer them.

This; I'm not going to start using a random ssh fork with modified ciphers.

It may still be sensible if you only expose it to private networks.

So could this safely be used on Tailscale then? I’m very curious, though also a bit paranoid.

> So could this safely be used on Tailscale then? I’m very curious, though also a bit paranoid.

You may as well just use tailscale ssh in that case. It already disables ssh encryption because your connection is encrypted with WireGuard anyway.

It could safely be used on the public internet; all this fearmongering has no basis.

The better question is 'does it have any actual improvements in day-to-day operations?' Because it seems like it mostly changes up some ciphers, which are already very fast.

> It could safely be used on the public internet; all this fearmongering has no basis.

On what basis are you making that claim? Because AFAICT, concern about it being less secure is entirely reasonable and is one of the big caveats to it.

Concern about it being less secure is fully justified. I'm the lead developer and have been for the past 20 years. I'm happy to answer any questions you might happen to have.

I'm not fearmongering. I'm just saying:

- IF you don't trust it

- AND you want to use it

=> run it on a private network

You don't have to trust it for security to use it. Putting services on secure networks when the public doesn't need access is standard practice.

I remember the last time I really cared to look into this was in the 2000s: I had these WDTV embedded boxes with CPUs so anemic that doing local copies with scp was slow as hell from the cipher overhead. I believe at the time it was possible to disable ciphers in scp, but it was still slower than smbfs. NFS was to be avoided, as wifi was shit then and losing the connection meant risking the system locking up. This was of course a local LAN, so I did not really care about encryption.

But I don’t miss having those limitations.

It's still possible, but we only suggest doing it on private, known-secure networks or when it's data you don't care about. Authentication is still fully encrypted - we just rekey post-authentication with a null cipher.
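
Something along these lines (see the HPN-README for the exact option names; treat this as illustrative, with a hypothetical host and path):

    # authentication stays encrypted; the bulk transfer switches to the null cipher
    scp -oNoneEnabled=yes -oNoneSwitch=yes bigfile user@host.example.com:/data/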

OpenSSH is from the people at OpenBSD, which means performance improvements have to be carefully vetted against bugs, and, judging by the fact that they're still on fastfs and the lack of TRIM in 2025, that will not happen.

There's nothing inherently slow about UFS2; the theoretical performance profile should be nearly identical to Ext4. For basic filesystem operations UFS2 and Ext4 will often be faster than more modern filesystems.

OpenBSD's filesystem operations are slow not because of UFS2, but because they simply haven't been optimized up and down the stack the way Ext4 has been on Linux or UFS2 on FreeBSD. And of course, OpenBSD's implementation doesn't have a journal (both UFS and Ext had journaling bolted on late in life), so filesystem checks (triggered on an unclean shutdown or after N boots) can take a long time, which often causes people to think their system has frozen or didn't come up. That user-interface problem notwithstanding, UFS2 is extremely robust. OpenBSD is very conservative about optimizations, especially when they increase code complexity, and particularly for subsystems where the project doesn't have the time available to give them the necessary attention.

OpenBSD's UFS did have "soft updates", which were a kind of alternative to journaling.

I believe that these were recently removed. Perhaps they don't play well with SMP.

McKusick, who wrote the original BSD FFS, also later came up with SU:

* https://en.wikipedia.org/wiki/Soft_updates

I'm the lead developer. I can go into this a bit more when I get back from an appointment, if people are interested.

I’m interested, mainly to update the documentation on it for Gentoo; people have asked about it over the years. Also, TIL that HN has a sort of account dormancy status, which it appears you are in.

For Gentoo I should put you in touch with my co-developer. He's active in Gentoo and has been maintaining a port for it. I'll point him at this conversation. That said, documentation-wise, the HPN-README goes into a lot of detail about the HPN-SSH-specific changes. I should point out that while HPN-SSH is a fork, we follow OpenSSH: whenever they come out with a new release, we come out with one that incorporates their changes - usually we get this out in about a week.

I admittedly don't really know how SSH is built, but it looks to me like the patch that "makes" it HPN-SSH is already present upstream [1]; it's just not applied by default? Nixpkgs seems to allow you to build the package with the patch [2].

[1] https://github.com/freebsd/freebsd-ports/blob/main/security/...

[2] https://github.com/NixOS/nixpkgs/blob/d85ef06512a3afbd6f9082...

Upstream is either OpenBSD itself or https://github.com/openssh/openssh-portable , not the FreeBSD port. I'm... not sure why Nixpkgs is pulling the patch from FreeBSD; that's odd.

There’s a third party ZFS utility (zrepl, I think) that solves this in a nice way: ssh is used as a control channel to coordinate a new TLS connection over which the actual data is sent. It is considerably faster, apparently.

Unlikely. These patches have been carried out-of-tree for over a decade precisely because upstream OpenSSH won't accept them.

More than two decades at this point. The primary reason is that the full patch set would be a burden for them to integrate, and they don't prioritize performance for bulk data transfers, which is perfectly understandable from their perspective. HPN-SSH builds on the expertise of OpenSSH and we follow their work closely: when they make a new release, we incorporate it and follow with our own release inside of a week or two (depending on how long the code review and functionality/regression testing takes). We focus on throughput performance, which involves receive buffer normalization, cipher speed, code optimization, and so forth. We tend to steer clear of anything involving authentication, and we never roll our own when it comes to the ciphers.

Depending on your hardware architecture and security needs, fiddling with ciphers in mainline might improve speed.
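
For example, something along these lines (hypothetical host; whether it helps depends entirely on your CPU, so benchmark it):

    # try a cheaper cipher for the bulk copy
    scp -c aes128-gcm@openssh.com bigfile user@host.example.com:/data/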