0x677378 - gsx


filesystem

Create directory tree: mkdir ~/tmp/{rand,sshfs,9p}
Create 1024 files mrand_{0000..1023} (1 MiB each): for i in {0000..1023}; do head -c 1M /dev/urandom > ~/tmp/rand/mrand_"$i"; done
Create 1 file grand (1 GiB): head -c 1G /dev/urandom > ~/tmp/rand/grand
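A quick sanity check of the generated test data, sketched here at a reduced scale (1 KiB/1 MiB instead of 1 MiB/1 GiB, and 16 files instead of 1024) so it runs instantly; drop the size reduction for the real setup:

```shell
# Recreate the layout above in a throwaway directory and verify it.
dir=$(mktemp -d)
mkdir "$dir"/{rand,sshfs,9p}
for i in {0000..0015}; do head -c 1K /dev/urandom > "$dir"/rand/mrand_"$i"; done
head -c 1M /dev/urandom > "$dir"/rand/grand
ls "$dir"/rand/mrand_* | wc -l   # number of small files (16 here, 1024 in the real run)
stat -c %s "$dir"/rand/grand     # 1048576 bytes = 1 MiB (1 GiB in the real run)
rm -rf "$dir"
```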

Everything was read from a ramdisk and written either to /dev/null or to a ramdisk, so disk I/O does not distort the results.

ncat unencrypted

1 big file grand

server

ncat -l -k -p 1337 > /dev/null

client

hyperfine --warmup=3 -r 10 'ncat $HOST 1337 < ~/tmp/rand/grand'

result

  Time (mean ± σ):      9.415 s ±  0.005 s    [User: 0.499 s, System: 1.127 s]
  Range (min … max):    9.410 s …  9.425 s    10 runs
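That works out to 1024 MiB over 9.415 s, which a one-liner can convert to throughput (interpreting this as a near-saturated gigabit link is my assumption; the link speed isn't stated here):

```shell
# Throughput from the mean above: 1 GiB = 1024 MiB over 9.415 s.
awk 'BEGIN { printf "%.1f MiB/s\n", 1024 / 9.415 }'
# -> 108.8 MiB/s (~0.91 Gbit/s)
```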

1024 small files mrand

server

ncat -l -k -p 1337 > /dev/null

client

Yes, I know you could pipe the files through tar, but I explicitly wanted to see the overhead of sending 1024 individual small files.

hyperfine --warmup=3 -r 10 'for i in $(ls ~/tmp/rand/mrand_*); do ncat $HOST 1337 < "$i"; done'

result

  Time (mean ± σ):     28.716 s ±  0.153 s    [User: 15.873 s, System: 3.117 s]
  Range (min … max):   28.460 s … 28.902 s    10 runs
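The small files move the same 1 GiB in total, so the extra ~19 s compared to the single-file run is per-connection cost (the loop opens one ncat TCP connection per file):

```shell
# Per-file overhead vs. the 9.415 s single-connection run above.
awk 'BEGIN { printf "%.1f ms per connection\n", (28.716 - 9.415) / 1024 * 1000 }'
# -> 18.8 ms per connection
```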

ncat through wireguard

1 big file grand

server

ncat -l -k -p 1337 > /dev/null

client

hyperfine --warmup=3 -r 10 'ncat $VPNHOST 1337 < ~/tmp/rand/grand'

result

  Time (mean ± σ):     10.103 s ±  0.038 s    [User: 0.646 s, System: 1.575 s]
  Range (min … max):   10.053 s … 10.173 s    10 runs

1024 small files mrand

server

ncat -l -k -p 1337 > /dev/null

client

Yes, I know you could pipe the files through tar, but I explicitly wanted to see the overhead of sending 1024 individual small files.

hyperfine --warmup=3 -r 10 'for i in $(ls ~/tmp/rand/mrand_*); do ncat $VPNHOST 1337 < "$i"; done'

result

  Time (mean ± σ):     35.018 s ±  0.239 s    [User: 16.584 s, System: 4.138 s]
  Range (min … max):   34.724 s … 35.535 s    10 runs
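Comparing against the unencrypted ncat runs (9.415 s and 28.716 s) gives the WireGuard penalty for each workload:

```shell
awk 'BEGIN {
  printf "grand (1 GiB):   +%.1f%%\n", (10.103 / 9.415  - 1) * 100
  printf "mrand (1024x):   +%.1f%%\n", (35.018 / 28.716 - 1) * 100
}'
# -> about +7% on the bulk transfer, +22% when every file also
#    pays its per-connection setup through the tunnel
```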

scp

1 big file grand

client

hyperfine --warmup=3 -r 10 'scp ~/tmp/rand/grand [$HOST]:tmp/'

result

  Time (mean ± σ):      9.574 s ±  0.013 s    [User: 4.326 s, System: 2.555 s]
  Range (min … max):    9.558 s …  9.593 s    10 runs

1024 small files mrand

client

hyperfine --warmup=3 -r 10 'scp ~/tmp/rand/mrand_* [$HOST]:tmp/'

result

  Time (mean ± σ):     10.788 s ±  0.054 s    [User: 4.188 s, System: 2.893 s]
  Range (min … max):   10.736 s … 10.899 s    10 runs
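scp pushes all 1024 files over a single SSH connection, so the per-file cost collapses compared to the one-ncat-connection-per-file loop:

```shell
# Per-file overhead relative to scp's own 1 GiB single-file run.
awk 'BEGIN { printf "%.1f ms per file\n", (10.788 - 9.574) / 1024 * 1000 }'
# -> 1.2 ms per file, vs. ~18.8 ms per ncat connection
```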

sftp

1 big file grand

client

hyperfine --warmup=3 -r 10 'sftp -b ~/tmp/sftp_grand.batch [$HOST]'

content batch file

put tmp/rand/grand tmp/

result

  Time (mean ± σ):      9.688 s ±  0.009 s    [User: 4.785 s, System: 2.197 s]
  Range (min … max):    9.672 s …  9.701 s    10 runs

1024 small files mrand

client

hyperfine --warmup=3 -r 10 'sftp -b ~/tmp/sftp_mrand.batch [$HOST]'

content batch file

put tmp/rand/mrand_* tmp/

result

  Time (mean ± σ):     13.462 s ±  0.089 s    [User: 4.742 s, System: 3.646 s]
  Range (min … max):   13.339 s … 13.597 s    10 runs

rsync

1 big file grand

client

There is probably some overhead from removing the file via ssh after each transfer.

hyperfine --warmup=3 -r 10 'rsync ~/tmp/rand/grand [$HOST]:tmp/ && ssh $HOST rm ~/tmp/grand'

result

  Time (mean ± σ):      9.861 s ±  0.020 s    [User: 4.845 s, System: 2.472 s]
  Range (min … max):    9.834 s …  9.900 s    10 runs

1024 small files mrand

client

There is probably some overhead from removing the files via ssh after each transfer.

hyperfine --warmup=3 -r 10 'rsync ~/tmp/rand/mrand_* [$HOST]:tmp/ && ssh $HOST rm ~/tmp/mrand_*'

result

  Time (mean ± σ):      9.868 s ±  0.015 s    [User: 4.721 s, System: 2.242 s]
  Range (min … max):    9.844 s …  9.888 s    10 runs

sshfs

1 big file grand

client

sshfs [$HOST]:tmp/ ~/tmp/sshfs

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/grand ~/tmp/sshfs/'

result

  Time (mean ± σ):      9.686 s ±  0.020 s    [User: 0.002 s, System: 1.383 s]
  Range (min … max):    9.637 s …  9.707 s    10 runs

1024 small files mrand

client

sshfs [$HOST]:tmp/ ~/tmp/sshfs

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/mrand_* ~/tmp/sshfs/'

result

  Time (mean ± σ):     12.167 s ±  0.045 s    [User: 0.019 s, System: 1.304 s]
  Range (min … max):   12.084 s … 12.237 s    10 runs

9p diod v9fs unencrypted

1 big file grand

server

diod -f -l [$HOST]:564 -n -e /home/gsx/tmp

client

mount -t diod -n [$HOST]:/home/gsx/tmp /home/gsx/tmp/9p

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/grand ~/tmp/9p/'

result

  Time (mean ± σ):     26.584 s ±  0.504 s    [User: 0.004 s, System: 1.719 s]
  Range (min … max):   25.731 s … 27.409 s    10 runs

1 big file grand -o msize=1048576

server

diod -f -l [$HOST]:564 -n -e /home/gsx/tmp

client

mount -t diod -n -o msize=1048576 [$HOST]:/home/gsx/tmp /home/gsx/tmp/9p

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/grand ~/tmp/9p/'

result

  Time (mean ± σ):     23.876 s ±  0.175 s    [User: 0.003 s, System: 4.167 s]
  Range (min … max):   23.592 s … 24.177 s    10 runs
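A larger msize lets each 9p request carry up to 1 MiB instead of the kernel's default, cutting round trips. The relative difference between the two runs:

```shell
# The default-msize run takes this much longer than the msize=1048576 run.
awk 'BEGIN { printf "+%.1f%%\n", (26.584 / 23.876 - 1) * 100 }'
# -> +11.3%
```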

1024 small files mrand

server

diod -f -l [$HOST]:564 -n -e /home/gsx/tmp

client

mount -t diod -n [$HOST]:/home/gsx/tmp /home/gsx/tmp/9p

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/mrand_* ~/tmp/9p/'

result

  Time (mean ± σ):     29.581 s ±  0.337 s    [User: 0.029 s, System: 2.115 s]
  Range (min … max):   28.993 s … 30.113 s    10 runs

9p diod v9fs through wireguard

1 big file grand

server

diod -f -l [$VPNHOST]:564 -n -e /home/gsx/tmp

client

mount -t diod -n [$VPNHOST]:/home/gsx/tmp /home/gsx/tmp/9p

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/grand ~/tmp/9p/'

result

  Time (mean ± σ):     37.667 s ±  0.350 s    [User: 0.003 s, System: 1.992 s]
  Range (min … max):   37.260 s … 38.272 s    10 runs

1024 small files mrand

server

diod -f -l [$VPNHOST]:564 -n -e /home/gsx/tmp

client

mount -t diod -n [$VPNHOST]:/home/gsx/tmp /home/gsx/tmp/9p

command

hyperfine --warmup=3 -r 10 'cp ~/tmp/rand/mrand_* ~/tmp/9p/'

result

  Time (mean ± σ):     44.168 s ±  0.231 s    [User: 0.042 s, System: 3.077 s]
  Range (min … max):   43.901 s … 44.548 s    10 runs
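For an at-a-glance comparison, this converts every 1 GiB mean from the results above into throughput (values copied from this post; note awk's for-in iteration order is unspecified, so the lines may print in any order):

```shell
awk 'BEGIN {
  t["ncat"]           = 9.415
  t["ncat+wireguard"] = 10.103
  t["scp"]            = 9.574
  t["sftp"]           = 9.688
  t["rsync"]          = 9.861
  t["sshfs"]          = 9.686
  t["9p"]             = 26.584
  t["9p msize=1M"]    = 23.876
  t["9p+wireguard"]   = 37.667
  for (k in t) printf "%-16s %6.1f MiB/s\n", k, 1024 / t[k]
}'
```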