Date:   Thu, 4 Apr 2019 12:47:34 +0200
From:   Stefano Garzarella <sgarzare@...hat.com>
To:     Alex Bennée <alex.bennee@...aro.org>
Cc:     qemu devel list <qemu-devel@...gnu.org>, netdev@...r.kernel.org,
        Stefan Hajnoczi <stefanha@...hat.com>
Subject: Re: VSOCK benchmark and optimizations

On Tue, Apr 02, 2019 at 04:19:25AM +0000, Alex Bennée wrote:
> 
> My main interest is how it stacks up against:
> 
>   --device virtio-net-pci and I guess the vhost equivalent
> 

Hi Alex,
I added TCP tests on virtio-net, and I also ran a test with TCP_NODELAY,
just to be fair, because VSOCK doesn't implement anything like it
(adding something similar could be an improvement for maximizing
throughput). I set the MTU to the maximum allowed (65520).

I also redid the VSOCK tests. There are some differences because I'm now
using tuned to reduce fluctuations, and I removed batching from the VSOCK
optimization because it is not ready to be published.
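
As a rough sketch, selecting a throughput-oriented tuned profile looks
like this (the profile name is just an example, not necessarily the exact
one used for these runs):

host$ tuned-adm profile throughput-performance   # switch to a throughput-oriented profile
host$ tuned-adm active                           # verify the active profile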

                   VSOCK               TCP + virtio-net + vhost
             host -> guest [Gbps]         host -> guest [Gbps]
pkt_size    before opt.  optimized      TCP_NODELAY    default
  64            0.060       0.096           0.16        0.15
  256           0.22        0.36            0.32        0.57
  512           0.42        0.74            1.2         1.2
  1K            0.7         1.5             2.1         2.1
  2K            1.5         2.9             3.5         3.4
  4K            2.5         5.3             5.5         5.3
  8K            3.9         8.8             8.0         7.9
  16K           6.6        12.8             9.8        10.2
  32K           9.9        18.1            11.8        10.7
  64K          13.5        21.4            11.4        11.3
  128K         17.9        23.6            11.2        11.0
  256K         18.0        24.4            11.1        11.0
  512K         18.4        25.3            10.1        10.7

Note: Maybe I have something misconfigured, because TCP on virtio-net
doesn't exceed 11 Gbps in this direction.

                   VSOCK               TCP + virtio-net + vhost
             guest -> host [Gbps]         guest -> host [Gbps]
pkt_size    before opt.  optimized      TCP_NODELAY    default
  64            0.088       0.101           0.24        0.24
  256           0.35        0.41            0.36        1.03
  512           0.70        0.73            0.69        1.6
  1K            1.1         1.3             1.1         3.0
  2K            2.4         2.6             2.1         5.5
  4K            4.3         4.5             3.8         8.8
  8K            7.3         7.6             6.6        20.0
  16K           9.2        11.1            12.3        29.4
  32K           8.3        18.1            19.3        28.2
  64K           8.3        25.4            20.6        28.7
  128K          7.2        26.7            23.1        27.9
  256K          7.7        24.9            28.5        29.4
  512K          7.7        25.0            28.3        29.3

virtio-net is better optimized than VSOCK, but we are getting close :).
Maybe we will use virtio-net as a transport for VSOCK, in order to avoid
duplicating optimizations.

How to reproduce TCP tests:

host$ ip link set dev br0 mtu 65520
host$ ip link set dev tap0 mtu 65520
host$ qemu-system-x86_64 ... \
      -netdev tap,id=net0,vhost=on,ifname=tap0,script=no,downscript=no \
      -device virtio-net-pci,netdev=net0

guest$ ip link set dev eth0 mtu 65520
guest$ iperf3 -s

host$ iperf3 -c ${VM_IP} -N -l ${pkt_size}      # host -> guest
host$ iperf3 -c ${VM_IP} -N -l ${pkt_size} -R   # guest -> host
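
The VSOCK side is not shown above; as a minimal sketch, the guest gets a
vsock device roughly like this (guest-cid=3 is just an example, and the
actual VSOCK benchmark invocation is omitted):

host$ qemu-system-x86_64 ... \
      -device vhost-vsock-pci,guest-cid=3

The guest is then reachable from the host at the given CID (3 here); the
host itself always has CID 2.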


Cheers,
Stefano
