Message-ID: <491371F0.7020805@codemonkey.ws>
Date:	Thu, 06 Nov 2008 16:38:40 -0600
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	Matthew Wilcox <matthew@....cx>
CC:	"Fischer, Anna" <anna.fischer@...com>, Greg KH <greg@...ah.com>,
	H L <swdevyid@...oo.com>,
	"randy.dunlap@...cle.com" <randy.dunlap@...cle.com>,
	"grundler@...isc-linux.org" <grundler@...isc-linux.org>,
	"Chiang, Alexander" <achiang@...com>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"rdreier@...co.com" <rdreier@...co.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"jbarnes@...tuousgeek.org" <jbarnes@...tuousgeek.org>,
	"virtualization@...ts.linux-foundation.org" 
	<virtualization@...ts.linux-foundation.org>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"mingo@...e.hu" <mingo@...e.hu>
Subject: Re: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

Matthew Wilcox wrote:
> [Anna, can you fix your word-wrapping please?  Your lines appear to be
> infinitely long, which is most unpleasant to reply to]
>
> On Thu, Nov 06, 2008 at 05:38:16PM +0000, Fischer, Anna wrote:
>   
>>> Where would the VF drivers have to be associated?  On the "pci_dev"
>>> level or on a higher one?
>>>       
>> A VF appears to the Linux OS as a standard (full, additional) PCI
>> device. The driver is associated in the same way as for a normal PCI
>> device. Ideally, you would use SR-IOV devices on a virtualized system,
>> for example, using Xen. A VF can then be assigned to a guest domain as
>> a full PCI device.
>>     
>
> It's not clear that's the right solution.  If the VF devices are _only_
> going to be used by the guest, then arguably, we don't want to create
> pci_devs for them in the host.  (I think it _is_ the right answer, but I
> want to make it clear there are multiple opinions on this).
>   

The VFs shouldn't be limited to being used by the guest.

SR-IOV is actually an incredibly painful thing.  You need to have a VF 
driver in the guest, do hardware passthrough, have a PV driver stub in 
the guest that's hypervisor-specific (a VF is not usable on its own), 
have a device-specific backend in the VMM, and, if you want to do live 
migration, have another PV driver in the guest that you can do teaming 
with.  It's just a mess.

What we would rather do in KVM is have the VFs appear in the host as 
standard network devices.  We would then like to back our existing PV 
driver with this VF directly, bypassing the host networking stack.  A 
key feature here is being able to fill the VF's receive queue with 
guest memory instead of host kernel memory, so that you get zero-copy 
receive traffic.  This will perform at least as well as doing 
passthrough and avoids all the ugliness of dealing with SR-IOV in the 
guest.
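
To make the zero-copy part concrete, here's a rough sketch of the sort
of thing I mean.  This is hypothetical only: vf_rx_ring and
vf_post_rx_buffer() are made-up names standing in for whatever RX ring
interface a VF driver would actually export, and unpinning on buffer
completion is left out.

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct vf_rx_ring;                        /* placeholder for the VF driver's RX ring */
int vf_post_rx_buffer(struct vf_rx_ring *ring,
                      struct page *page); /* placeholder "post buffer" helper */

/*
 * Pin a guest userspace buffer and hand its pages to the VF's receive
 * ring, so the device DMAs incoming frames straight into guest memory.
 */
static int post_guest_rx_buffers(struct vf_rx_ring *ring,
                                 unsigned long guest_uaddr, int nr_pages)
{
        struct page **pages;
        int i, got;

        pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* Pin the guest pages so they stay resident for DMA. */
        down_read(&current->mm->mmap_sem);
        got = get_user_pages(current, current->mm, guest_uaddr,
                             nr_pages, 1 /* write */, 0, pages, NULL);
        up_read(&current->mm->mmap_sem);

        if (got < 0) {
                kfree(pages);
                return got;
        }

        /* Each pinned guest page becomes a receive buffer on the ring. */
        for (i = 0; i < got; i++)
                vf_post_rx_buffer(ring, pages[i]);

        kfree(pages);
        return 0;
}

The pages would of course need to be put_page()'d once the device
completes (or drops) each buffer, but that's the shape of it: the host
sees a normal netdev, and the PV backend just feeds it guest pages.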

This eliminates all of the mess of various drivers in the guest and all 
the associated baggage of doing hardware passthrough.

So IMHO, having VFs be usable in the host is absolutely critical because 
I think it's the only reasonable usage model.

Regards,

Anthony Liguori
