Date:	Tue, 01 Sep 2009 17:54:36 +0000
From:	Alok Kataria <akataria@...are.com>
To:	James Bottomley <James.Bottomley@...e.de>
Cc:	Roland Dreier <rdreier@...co.com>,
	Dmitry Torokhov <dtor@...are.com>,
	Matthew Wilcox <matthew@....cx>,
	Bart Van Assche <bvanassche@....org>,
	Robert Love <robert.w.love@...el.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	Mike Christie <michaelc@...wisc.edu>,
	"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Rolf Eike Beer <eike-kernel@...tec.de>,
	Maxime Austruy <maustruy@...are.com>
Subject: Re: [PATCH] SCSI driver for VMware's virtual HBA.


On Tue, 2009-09-01 at 10:40 -0700, James Bottomley wrote:
> On Tue, 2009-09-01 at 10:25 -0700, Roland Dreier wrote:
> > > Nobody said you had to have the exact same driver for every hypervisor.
> > > What people are suggesting is that we look at commonalities in the
> > > interfaces both from a control plane point of view (transport class) and
> > > from a code sharing point of view (libscsivirt).  However, all the
> > > hypervisor interfaces I've seen are basically DMA rings ...
> > 
> > I don't think that's anything special about hypervisors though -- pretty
> > much all modern device interfaces are basically DMA rings, aren't they?
> > I'm definitely in favor of common code to handle commonality but on the
> > other hand I don't see what's so special about virtual devices vs. real
> > HW devices.  On the one side we have VMware's closed hypervisor code
> > and on the other side we have vendor XYZ's closed RTL and firmware code.
> 
> But the main difference between actual hardware and hypervisors is the
> fact that to set up a DMA transfer you have to poke registers on the
> card, set up a mailbox and manage queues of commands to the card.  For a
> hypervisor, sending a DMA transaction is a hypercall.

Not really; it depends on how you look at it. VMware exports I/O
registers too, which need to be bit-banged to start an I/O, so starting
an I/O is not just a hypercall but a series of commands; look at
pvscsi_kick_io. Also, the driver and the hypervisor code share the
request rings and completion rings, which is quite similar to how a
command queue is managed for a card.
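
To make that concrete, the pattern looks roughly like the sketch below
(the names are made up for illustration, not the actual vmw_pvscsi
definitions):

#include <linux/io.h>
#include <linux/types.h>

#define KICK_OFFSET	0x2c	/* hypothetical doorbell register offset */

struct req_desc {		/* one slot in the shared request ring */
	u64	data_addr;
	u32	data_len;
	u8	cdb[16];
};

struct adapter {
	void __iomem	*regs;		/* I/O registers exported by the device */
	struct req_desc	*req_ring;	/* ring shared with the hypervisor */
	u32		req_prod;	/* producer index */
	u32		ring_mask;	/* ring size - 1, power of two */
};

static void post_and_kick(struct adapter *adap, const struct req_desc *req)
{
	/* place the request descriptor on the shared ring */
	adap->req_ring[adap->req_prod & adap->ring_mask] = *req;

	/* the descriptor must be visible before the producer index moves */
	wmb();
	adap->req_prod++;

	/* bit-bang the doorbell register to start the I/O: an MMIO
	 * write, not a bare hypercall */
	iowrite32(0, adap->regs + KICK_OFFSET);
}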

Also note that the way all of this is implemented will differ for each
hypervisor's devices, and getting every hypervisor vendor to agree on a
common set of things is not a very attractive proposition (at least,
that much I can say from my past experience).


> 
> Now for most physical drivers, take for example FCP ones, we have a
> common control plane interface (fc transport class), we're evolving a
> frame handling library (libfc), so all the drivers really have is
> specific code to bit-bang the hardware.  Some of the libfc handling is
> actually done in intelligent offload firmware on the HBAs, so some will
> use more or less of the libfc handling (same is true for SAS and
> libsas).  When there's no actual hardware to be bit banged, and no real
> firmware offload, it does make one wonder what would be left unique to
> the driver.
> 
> James
> 
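(For reference, the driver-side shape of the split James describes
looks roughly like the skeleton below; the names are hypothetical and
the real template fields and error handling are elided.)

#include <linux/errno.h>
#include <linux/module.h>
#include <scsi/scsi_transport_fc.h>

/* control-plane attributes come from the common fc transport class */
static struct fc_function_template my_fc_template = {
	.show_host_port_id	= 1,
	.show_host_port_state	= 1,
	/* ... remote-port handling and the rest of the template ... */
};

static struct scsi_transport_template *my_fc_transport;

static int __init my_driver_init(void)
{
	my_fc_transport = fc_attach_transport(&my_fc_template);
	if (!my_fc_transport)
		return -ENODEV;
	/*
	 * Hosts allocated at probe time would point at this shared
	 * transport template, and frame handling could come from libfc,
	 * leaving the driver only the code that bit-bangs its hardware.
	 */
	return 0;
}
module_init(my_driver_init);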

Alok

