Message-ID: <5703B0F3.7050700@nagafix.co.uk>
Date: Tue, 5 Apr 2016 19:34:59 +0700
From: Antoine Martin <antoine@...afix.co.uk>
To: netdev@...r.kernel.org
Cc: "stefanha@...hat.com >> Stefan Hajnoczi" <stefanha@...hat.com>
Subject: AF_VSOCK status
Hi,
Forgive me if these questions are obvious, I am not a kernel developer.
From what I am reading here:
http://lists.linuxfoundation.org/pipermail/virtualization/2015-December/030935.html
the code has been removed from mainline. Is it queued for 4.6? If not,
when are you planning on re-submitting it?
We now have a vsock transport merged into xpra, which works very well
with the kernel and qemu versions found here:
http://qemu-project.org/Features/VirtioVsock
Congratulations on making this easy to use!
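For reference, the guest side of our transport boils down to something
like this C sketch (error handling trimmed; the port number is only an
example, struct sockaddr_vm and VMADDR_CID_HOST come from
<linux/vm_sockets.h>):

#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <string.h>
#include <unistd.h>

/* Connect from the guest to a service listening on the host (CID 2). */
int vsock_connect_to_host(unsigned int port)
{
    struct sockaddr_vm addr;
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_HOST;   /* CID 2: the hypervisor/host */
    addr.svm_port = port;             /* example port, not fixed */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}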
Is the upcoming revised interface likely to cause incompatibilities with
existing binaries?
It seems impossible for the host to connect to a guest: the guest has to
initiate the connection. Is this a feature / known limitation or am I
missing something? For some of our use cases, it would be more practical
to connect in the other direction.
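For completeness, this is roughly what we do on the host side today:
bind to VMADDR_CID_ANY and wait for the guest to dial in (again a
sketch, the port is illustrative):

#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <string.h>
#include <unistd.h>

/* Listen on the host and accept a connection initiated by a guest. */
int vsock_accept_from_guest(unsigned int port)
{
    struct sockaddr_vm addr;
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_ANY;    /* accept from any guest CID */
    addr.svm_port = port;

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 1) < 0) {
        close(fd);
        return -1;
    }
    return accept(fd, NULL, NULL);    /* returns the connected socket */
}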
In terms of raw performance, I am getting about 10Gbps on an Intel
Skylake i7 (the data stream arrives from the socket recv() syscall
split into 256KB chunks). That's good, but not much faster than
virtio-net, and since the packets avoid all sorts of OS layer
overheads I was hoping to get a little closer to the ~200Gbps memory
bandwidth that this CPU+RAM combination is capable of. Am I dreaming,
or just doing it wrong?
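In case the measurement itself is suspect: the figure comes from
timing a plain recv() loop with 256KB buffers, roughly like this
(a sketch, not the actual xpra code):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>

#define CHUNK (256 * 1024)

/* Time a recv() loop on an already-connected socket and print Gbps. */
void measure_recv_throughput(int fd)
{
    char *buf = malloc(CHUNK);
    unsigned long long total = 0;
    struct timespec t0, t1;
    ssize_t n;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = recv(fd, buf, CHUNK, 0)) > 0)
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) +
                  (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%llu bytes in %.2fs = %.2f Gbps\n",
           total, secs, total * 8.0 / secs / 1e9);
    free(buf);
}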
In terms of bandwidth requirements, we're nowhere near that level per
guest - but since there are a lot of guests per host, I use this as a
rough guesstimate of the efficiency and suitability of the transport.
How hard would it be to introduce a virtio mmap-like transport of some
sort so that the guest and host could share some memory region?
I assume this would give us the best possible performance when
transferring large amounts of data? (we already have a local mmap
transport we could adapt)
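For context, our existing local mmap transport amounts to something
like the sketch below; shm_open() and the region name/size are only
illustrative of the local case, and a guest/host variant would
obviously need help from qemu/virtio to expose the shared region:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (16 * 1024 * 1024)   /* illustrative size */

/* Both sides map the same shared-memory object and copy data through it. */
void *map_shared_region(const char *name, int create)
{
    int flags = create ? (O_CREAT | O_RDWR) : O_RDWR;
    int fd = shm_open(name, flags, 0600);
    if (fd < 0)
        return NULL;
    if (create && ftruncate(fd, REGION_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                           /* mapping stays valid after close */
    return p == MAP_FAILED ? NULL : p;
}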
Thanks
Antoine