Date:	Tue, 01 Sep 2009 12:40:04 -0500
From:	James Bottomley <James.Bottomley@...e.de>
To:	Roland Dreier <rdreier@...co.com>
Cc:	Dmitry Torokhov <dtor@...are.com>, Matthew Wilcox <matthew@....cx>,
	Bart Van Assche <bvanassche@....org>,
	Alok Kataria <akataria@...are.com>,
	Robert Love <robert.w.love@...el.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	Mike Christie <michaelc@...wisc.edu>,
	"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Rolf Eike Beer <eike-kernel@...tec.de>,
	Maxime Austruy <maustruy@...are.com>
Subject: Re: [PATCH] SCSI driver for VMware's virtual HBA.

On Tue, 2009-09-01 at 10:25 -0700, Roland Dreier wrote:
> > Nobody said you had to have the exact same driver for every hypervisor.
> > What people are suggesting is that we look at commonalities in the
> > interfaces both from a control plane point of view (transport class) and
> > from a code sharing point of view (libscsivirt).  However, all the
> > hypervisor interfaces I've seen are basically DMA rings ...
> 
> I don't think that's anything special about hypervisors though -- pretty
> much all modern device interfaces are basically DMA rings, aren't they?
> I'm definitely in favor of common code to handle commonality but on the
> other hand I don't see what's so special about virtual devices vs. real
> HW devices.  On the one side we have VMware's closed hypervisor code
> and on the other side we have vendor XYZ's closed RTL and firmware code.

But the main difference between actual hardware and hypervisors is that
to set up a DMA transfer on real hardware you have to poke registers on
the card, set up a mailbox and manage queues of commands to the card.
For a hypervisor, sending a DMA transaction is a single hypercall.
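To make the contrast concrete, here is a minimal C sketch of the two
submission paths.  All names (hw_ring, cmd_desc, hw_submit,
hypercall_submit) are invented for illustration and are not from any
real driver; the doorbell field just stands in for an MMIO register:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 16

struct cmd_desc { uint64_t buf_addr; uint32_t len; uint8_t cdb[16]; };

struct hw_ring {
	struct cmd_desc desc[RING_SIZE];
	uint32_t producer;           /* driver-owned ring index          */
	volatile uint32_t doorbell;  /* stands in for an MMIO register   */
};

/* Physical-HBA style: build a descriptor in the shared DMA ring, then
 * poke a register (the doorbell) so the card notices the new work.   */
static void hw_submit(struct hw_ring *r, const struct cmd_desc *c)
{
	r->desc[r->producer % RING_SIZE] = *c;
	r->producer++;
	/* a write barrier (wmb()) would go here in real kernel code */
	r->doorbell = r->producer;   /* MMIO write: card DMAs the desc   */
}

/* Paravirtual style: no registers, no mailbox, no ring management --
 * the whole submission collapses into one trap into the hypervisor.  */
static int hypercall_submit(const struct cmd_desc *c)
{
	(void)c;                     /* hypervisor takes it from here    */
	return 0;
}
```

The physical path is all about ring and register choreography; the
paravirtual path has essentially none of it.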

Now for most physical drivers (take FCP ones, for example) we have a
common control plane interface (the fc transport class) and we're
evolving a frame handling library (libfc), so all the drivers really
contain is the specific code to bit-bang the hardware.  Some of the
libfc handling is actually done by intelligent offload firmware on the
HBAs, so drivers will use more or less of the libfc handling (the same
is true for SAS and libsas).  When there's no actual hardware to be
bit-banged and no real firmware offload, it does make one wonder what
would be left that's unique to the driver.
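The layering argument can be sketched in C as an ops table: the shared
library owns everything above the hardware poke, and each driver
supplies only the hardware-specific hook.  The names here (hba_ops,
lib_queue_command, fc_hw_send, pv_send) are hypothetical, not libfc's
actual API:

```c
#include <assert.h>

/* Each driver supplies only its hardware-specific hooks. */
struct hba_ops {
	int (*send_frame)(const void *frame, unsigned len); /* bit-bang HW */
};

/* Shared library code: common state machine, error handling, retries
 * all live here, above the hardware-specific hook.                   */
static int lib_queue_command(const struct hba_ops *ops,
			     const void *frame, unsigned len)
{
	/* ... common queueing and protocol handling would go here ... */
	return ops->send_frame(frame, len);
}

/* Physical driver: real register/mailbox work would live in the hook. */
static int fc_hw_send(const void *f, unsigned l)
{
	(void)f; (void)l;
	return 0;   /* imagine ring setup + doorbell write here */
}

/* Paravirtual driver: the hook degenerates to a single hypercall,
 * leaving almost no driver-unique code -- which is the point above.  */
static int pv_send(const void *f, unsigned l)
{
	(void)f; (void)l;
	return 0;   /* imagine one hypercall here */
}
```

With the common layer doing the real work, the virtual driver's hook is
nearly empty, which is exactly what makes one wonder what remains
unique to it.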

James

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/