Message-ID: <87zhp9599u.fsf@zen.linaroharston>
Date:   Tue, 02 Apr 2019 04:19:25 +0000
From:   Alex Bennée <alex.bennee@...aro.org>
To:     Stefano Garzarella <sgarzare@...hat.com>
Cc:     qemu devel list <qemu-devel@...gnu.org>, netdev@...r.kernel.org
Subject: Re: VSOCK benchmark and optimizations


Stefano Garzarella <sgarzare@...hat.com> writes:

> Hi Alex,
> I'm sending you some benchmarks and information about VSOCK, CCing qemu-devel
> and linux-netdev (this info may be useful to others as well :))
>
> One of VSOCK's advantages is its simple configuration: you don't need to set
> up IP addresses for guest/host, and it can be used through the standard POSIX
> socket API. [1]
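>
> A minimal sketch of what "standard POSIX socket API" means here in practice
> (the CID and port below are just example values, and error handling is kept
> to a minimum):
>
>   /* vsock_client.c - connect to a guest over VSOCK and send a message */
>   #include <stdio.h>
>   #include <string.h>
>   #include <unistd.h>
>   #include <sys/socket.h>
>   #include <linux/vm_sockets.h>   /* AF_VSOCK, struct sockaddr_vm */
>
>   int main(void)
>   {
>       int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
>       if (fd < 0) {
>           perror("socket");
>           return 1;
>       }
>
>       struct sockaddr_vm addr = {
>           .svm_family = AF_VSOCK,
>           .svm_cid    = 3,      /* guest CID, as set on the QEMU command line */
>           .svm_port   = 5201,   /* example port */
>       };
>
>       if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
>           perror("connect");
>           return 1;
>       }
>
>       const char msg[] = "hello over vsock\n";
>       write(fd, msg, strlen(msg));
>       close(fd);
>       return 0;
>   }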
>
> I'm currently working on it, so the "optimized" values are still a work in
> progress; I'll send the patches upstream (Linux) as soon as possible
> (hopefully within 1 or 2 weeks).
>
> Optimizations:
> + reduce the number of credit update packets
>   - previously, the RX side sent an empty packet for every packet received,
>     just to inform the TX side about the available space in the RX buffer
>     (a small standalone sketch of this idea follows after the list)
> + increase the RX buffer size to 64 KB (from 4 KB)
> + merge packets to fill RX buffers
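>
> To illustrate the credit-update change mentioned above, here is a standalone
> sketch (this is not the actual virtio-vsock code; the names and the threshold
> are hypothetical). Instead of advertising freed RX buffer space after every
> received packet, an update is sent only once enough space has been freed:
>
>   /* credit_sketch.c - threshold-based credit updates (illustrative only) */
>   #include <stdbool.h>
>   #include <stdint.h>
>   #include <stdio.h>
>
>   #define RX_BUF_SIZE   (64 * 1024)          /* per-socket RX buffer */
>   #define CREDIT_THRESH (RX_BUF_SIZE / 2)    /* update peer after half is freed */
>
>   struct rx_state {
>       uint32_t fwd_cnt;        /* bytes consumed by the local reader so far   */
>       uint32_t last_fwd_sent;  /* fwd_cnt value advertised in the last update */
>   };
>
>   /* Called when the application reads 'len' bytes from the RX buffer.
>    * Returns true if a credit-update packet should be sent now. */
>   static bool rx_consumed(struct rx_state *rx, uint32_t len)
>   {
>       rx->fwd_cnt += len;
>       /* Old behaviour: return true unconditionally (one update per packet). */
>       if (rx->fwd_cnt - rx->last_fwd_sent >= CREDIT_THRESH) {
>           rx->last_fwd_sent = rx->fwd_cnt;
>           return true;
>       }
>       return false;
>   }
>
>   int main(void)
>   {
>       struct rx_state rx = {0};
>       int updates = 0;
>
>       /* Simulate the reader draining 64 packets of 4 KiB each. */
>       for (int i = 0; i < 64; i++)
>           if (rx_consumed(&rx, 4 * 1024))
>               updates++;
>
>       printf("credit updates sent: %d (instead of 64)\n", updates);
>       return 0;
>   }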
>
> As a benchmark tool I used iperf3 [2] modified with VSOCK support:
>
>              host -> guest [Gbps]      guest -> host [Gbps]
> pkt_size    before opt.  optimized    before opt.  optimized
>   1K            0.5         1.6           1.4         1.4
>   2K            1.1         3.1           2.3         2.5
>   4K            2.0         5.6           4.2         4.4
>   8K            3.2        10.2           7.2         7.5
>   16K           6.4        14.2           9.4        11.3
>   32K           9.8        18.9           9.2        17.8
>   64K          13.8        22.9           8.8        25.0
>   128K         17.6        24.5           7.7        25.7
>   256K         19.0        24.8           8.1        25.6
>   512K         20.8        25.1           8.1        25.4
>
>
> How to reproduce:
>
> host$ modprobe vhost_vsock
> host$ qemu-system-x86_64 ... -device vhost-vsock-pci,guest-cid=3
>       # Note: Guest CID should be >= 3
>       # (0, 1 are reserved and 2 identifies the host)
>
> guest$ iperf3 --vsock -s
>
> host$ iperf3 --vsock -c 3 -l ${pkt_size}      # host -> guest
> host$ iperf3 --vsock -c 3 -l ${pkt_size} -R   # guest -> host
>
>
> If you want, I can run a similar benchmark (with iperf3) using a network
> card (do you have a specific configuration in mind?).

My main interest is how it stacks up against:

  --device virtio-net-pci and I guess the vhost equivalent

AIUI one of the motivations was being able to run something like NFS for
a guest FS over vsock, avoiding the overhead of UDP and the additional
complication of needing a working network setup.
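
Something along these lines is what I have in mind for the comparison (the
tap/vhost setup and the guest IP are placeholders, adjust to whatever your
environment uses):

  host$ qemu-system-x86_64 ... \
        -netdev tap,id=net0,vhost=on,script=no,downscript=no \
        -device virtio-net-pci,netdev=net0

  guest$ iperf3 -s

  host$ iperf3 -c ${guest_ip} -l ${pkt_size}      # host -> guest
  host$ iperf3 -c ${guest_ip} -l ${pkt_size} -R   # guest -> host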

>
> Let me know if you need more details!
>
> Thanks,
> Stefano
>
> [1] https://wiki.qemu.org/Features/VirtioVsock
> [2] https://github.com/stefano-garzarella/iperf/


-- 
Alex Bennée
