lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <4A8B8F4E.80207@gmail.com>
Date:	Wed, 19 Aug 2009 01:36:14 -0400
From:	Gregory Haskins <gregory.haskins@...il.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	Avi Kivity <avi@...hat.com>,
	Anthony Liguori <anthony@...emonkey.ws>,
	Ingo Molnar <mingo@...e.hu>,
	Gregory Haskins <ghaskins@...ell.com>, kvm@...r.kernel.org,
	alacrityvm-devel@...ts.sourceforge.net,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for vbus_driver
 objects

Michael S. Tsirkin wrote:
> On Tue, Aug 18, 2009 at 11:51:59AM -0400, Gregory Haskins wrote:
>>> It's not laughably trivial when you try to support the full feature set
>>> of kvm (for example, live migration will require dirty memory tracking,
>>> and exporting all state stored in the kernel to userspace).
>> Doesn't vhost suffer from the same issue?  If not, could I also apply
>> the same technique to support live-migration in vbus?
> 
> vhost does this by switching to userspace for the duration of live
> migration. venet could do this I guess, but you'd need to write a
> userspace implementation. vhost just reuses existing userspace virtio.
> 
>> With all due respect, I didnt ask you do to anything, especially not
>> abandon something you are happy with.
>>
>> All I did was push guest drivers to LKML.  The code in question is
>> independent of KVM, and its proven to improve the experience of using
>> Linux as a platform.  There are people interested in using them (by
>> virtue of the number of people that have signed up for the AlacrityVM
>> list, and have mailed me privately about this work).
>>
>> So where is the problem here?
> 
> If virtio net in guest could be improved instead, everyone would
> benefit.

So if I whip up a virtio-net backend for vbus with a PCI compliant
connector, you are happy?


> I am doing this, and I wish more people would join.  Instead,
> you change ABI in a incompatible way.

Only by choice of my particular connector.  The ABI is a function of the
connector design.  So one such model is to terminate the connector in
qemu, and surface the resulting objects as PCI devices.  I choose not to
use this particular design for my connector that I am pushing upstream
because I am of the opinion that I can do better by terminating it in
the guest directly as a PV optimized bus.  However, both connectors can
theoretically coexist peacefully.

The advantage that this would give us is that one in-kernel virtio-net
model could be surfaced to all vbus users (pci, or otherwise), which
will hopefully be growing over time.  This would have gained vbus a
virtio-net backend, and it would have saved you from re-inventing the
various abstractions and management interfaces that vbus has in place.


> So now, there's no single place to
> work on kvm networking performance. Now, it would all be understandable
> if the reason was e.g. better performance. But you say yourself it
> isn't.

Actually, I really didn't say that.  As far as I know, your patch hasnt
been performance proven to my knowledge, but I just gave you the benefit
of the doubt.  What I said was that for a limited type of benchmark, it
*may* get similar numbers if you implemented vhost optimally.  For
others (for instance, when we can start to take advantage of priority,
or scaling the number of interfaces) it may not since my proposed
connector was designed to optimize this over raw PCI facilities.

But I digress.  Please post results when you have numbers, as I had to
give up my 10GE rig in the lab.  I suspect you will have performance
issues until you at least address GSO, but you may already be there by now.

Kind Regards,
-Greg




Download attachment "signature.asc" of type "application/pgp-signature" (268 bytes)

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ