Message-ID: <20190403123654.GF32425@stefanha-x1.localdomain>
Date: Wed, 3 Apr 2019 13:36:54 +0100
From: Stefan Hajnoczi <stefanha@...il.com>
To: Stefano Garzarella <sgarzare@...hat.com>
Cc: Alex Bennée <alex.bennee@...aro.org>,
netdev@...r.kernel.org, qemu devel list <qemu-devel@...gnu.org>,
Stefan Hajnoczi <stefanha@...hat.com>
Subject: Re: [Qemu-devel] VSOCK benchmark and optimizations
On Tue, Apr 02, 2019 at 09:37:06AM +0200, Stefano Garzarella wrote:
> On Tue, Apr 02, 2019 at 04:19:25AM +0000, Alex Bennée wrote:
> >
> > Stefano Garzarella <sgarzare@...hat.com> writes:
> >
> > > Hi Alex,
> > > I'm sending you some benchmarks and information about VSOCK, CCing qemu-devel
> > > and linux-netdev (this info may be useful for others too :))
> > >
> > > One of VSOCK's advantages is its simple configuration: you don't need to set
> > > up IP addresses for the guest/host, and it can be used through the standard
> > > POSIX socket API. [1]
> > >
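For anyone who hasn't used AF_VSOCK before: it is the normal POSIX socket API
with a different address family.  A minimal client sketch (port 1234 is just an
arbitrary example):

    /* Minimal AF_VSOCK client sketch: guest connects to a host service.
     * VMADDR_CID_HOST (2) is the host's well-known CID; port 1234 is an
     * arbitrary example port. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        struct sockaddr_vm addr;
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = VMADDR_CID_HOST;  /* CID 2 = the host */
        addr.svm_port = 1234;            /* arbitrary example port */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        /* From here on it behaves like any other stream socket. */
        write(fd, "hello\n", 6);
        close(fd);
        return 0;
    }
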
> > > I'm currently working on it, so the "optimized" values are still a work in
> > > progress; I'll send the patches upstream (Linux) as soon as possible
> > > (hopefully within 1 or 2 weeks).
> > >
> > > Optimizations:
> > > + reduce the number of credit update packets
> > >   - previously the RX side sent an empty packet for every packet received,
> > >     only to inform the TX side about the free space in its RX buffer
> > > + increase the RX buffer size to 64 KB (from 4 KB)
> > > + merge packets to fill RX buffers
> > >
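To expand on the credit update point above: virtio-vsock flow control is
credit based.  Each packet header carries the sender's buf_alloc (its total
receive buffer space) and fwd_cnt (bytes it has consumed so far), so the peer
can compute how much it may still transmit.  A simplified sketch of that
accounting (illustrative only, not the actual kernel code):

    /* Simplified sketch of virtio-vsock credit accounting (illustrative,
     * not the real kernel data structures). */
    #include <stdint.h>

    struct credit_state {
        uint32_t peer_buf_alloc; /* peer's advertised RX buffer space  */
        uint32_t peer_fwd_cnt;   /* bytes the peer has consumed so far */
        uint32_t tx_cnt;         /* bytes we have transmitted so far   */
    };

    /* How many more bytes we are allowed to keep in flight.  The data
     * packets themselves carry buf_alloc/fwd_cnt, so the receiver does
     * not have to send an explicit empty packet for every packet it
     * gets just to refresh these values. */
    static inline uint32_t credit_available(const struct credit_state *c)
    {
        return c->peer_buf_alloc - (c->tx_cnt - c->peer_fwd_cnt);
    }

Reducing how often the receiver sends that explicit empty update is what the
first optimization above is about.
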
> > > As a benchmark tool, I used iperf3 [2] modified with VSOCK support:
> > >
> > >             host -> guest [Gbps]       guest -> host [Gbps]
> > > pkt_size    before opt.  optimized     before opt.  optimized
> > >   1K            0.5         1.6            1.4         1.4
> > >   2K            1.1         3.1            2.3         2.5
> > >   4K            2.0         5.6            4.2         4.4
> > >   8K            3.2        10.2            7.2         7.5
> > >  16K            6.4        14.2            9.4        11.3
> > >  32K            9.8        18.9            9.2        17.8
> > >  64K           13.8        22.9            8.8        25.0
> > > 128K           17.6        24.5            7.7        25.7
> > > 256K           19.0        24.8            8.1        25.6
> > > 512K           20.8        25.1            8.1        25.4
> > >
> > >
> > > How to reproduce:
> > >
> > > host$ modprobe vhost_vsock
> > > host$ qemu-system-x86_64 ... -device vhost-vsock-pci,guest-cid=3
> > > # Note: Guest CID should be >= 3
> > > # (CIDs 0 and 1 are reserved; 2 identifies the host)
> > >
> > > guest$ iperf3 --vsock -s
> > >
> > > host$ iperf3 --vsock -c 3 -l ${pkt_size} # host -> guest
> > > host$ iperf3 --vsock -c 3 -l ${pkt_size} -R # guest -> host
> > >
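If you want to double-check which CID the guest actually got, it can be
queried from inside the guest via /dev/vsock; a minimal sketch:

    /* Print the local CID (from inside the guest it should match the
     * guest-cid passed to QEMU, e.g. 3 in the example above). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        unsigned int cid;
        int fd = open("/dev/vsock", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/vsock");
            return 1;
        }
        if (ioctl(fd, IOCTL_VM_SOCKETS_GET_LOCAL_CID, &cid) < 0) {
            perror("IOCTL_VM_SOCKETS_GET_LOCAL_CID");
            close(fd);
            return 1;
        }
        printf("local CID: %u\n", cid);
        close(fd);
        return 0;
    }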
> > >
> > > If you want, I can run a similar benchmark (with iperf3) over a network
> > > card (do you have a specific configuration in mind?).
> >
> > My main interest is how it stacks up against:
> >
> > --device virtio-net-pci and I guess the vhost equivalent
>
> I'll do some tests with virtio-net and vhost.
>
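For reference, the comparison run can use the same iperf3 invocation over a
tap device; a rough sketch (tap0 and ${guest_ip} are placeholders for whatever
your setup uses):

  # vhost=on uses the vhost-net kernel backend; vhost=off gives plain
  # virtio-net.
  host$ qemu-system-x86_64 ... \
        -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
        -device virtio-net-pci,netdev=net0

  guest$ iperf3 -s

  host$ iperf3 -c ${guest_ip} -l ${pkt_size}       # host -> guest
  host$ iperf3 -c ${guest_ip} -l ${pkt_size} -R    # guest -> host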
> >
> > AIUI one of the motivators was being able to run something like NFS for
> > a guest FS over vsock, avoiding the overhead of UDP and the additional
> > complication of needing a working network setup.
> >
>
> CCing Stefan.
>
> I know he is working on virtio-fs, which may suit your use case better.
> He also worked on VSOCK support for NFS, but I don't think it has been
> merged upstream.
Hi Alex,
David Gilbert, Vivek Goyal, Miklos Szeredi, and I are working on
virtio-fs for host<->guest file sharing. It performs better than
virtio-9p and we're currently working on getting it upstream (first the
VIRTIO device spec, then Linux and QEMU patches).
You can read about it and try it here:
https://virtio-fs.gitlab.io/
Stefan