Date:	Tue, 01 Sep 2009 09:17:53 -0500
From:	James Bottomley <James.Bottomley@...e.de>
To:	Bart Van Assche <bvanassche@....org>
Cc:	akataria@...are.com, Robert Love <robert.w.love@...el.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	Mike Christie <michaelc@...wisc.edu>,
	linux-scsi@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dmitry Torokhov <dtor@...are.com>,
	Rolf Eike Beer <eike-kernel@...tec.de>,
	Maxime Austruy <maustruy@...are.com>
Subject: Re: [PATCH] SCSI driver for VMware's virtual HBA.

On Tue, 2009-09-01 at 13:12 +0200, Bart Van Assche wrote:
> On Mon, Aug 31, 2009 at 8:00 PM, James Bottomley
> <James.Bottomley@...e.de> wrote:
> >
> > On Mon, 2009-08-31 at 10:28 -0700, Alok Kataria wrote:
> > > VMware PVSCSI driver - v2.
> >
> > OK, so the first thing that springs to mind is that we already have one
> > of these things: the ibmvscsi ... is there no way we can share code
> > between this and the other PV drivers?
> 
> Good question. But shouldn't the ibmvscsi driver be refactored before
> considering sharing ibmvscsi code with other paravirtualized drivers?

Not really; that would make it a chicken-and-egg problem.  The question
was meant to direct attention to whether we should share code for PV
drivers or not.  I think the answer to that is yes; the next question
is how to do it.

The one thing I'm not really keen on having is half a dozen totally
different virtual SCSI drivers for our half a dozen virtualisation
solutions.  Apart from the coding waste, each will have new and
different bugs and a much smaller pool of users to find them.

The IBM vscsi driver operates slightly differently from the way newer
PV drivers may be expected to operate, but the SRP abstraction does
look like a reasonable one for a PV driver.
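
To make the sharing idea concrete, I could imagine a common SRP core
that builds and parses the IUs, with each hypervisor supplying a small
transport underneath.  Something like this (purely a sketch; none of
these names exist in the tree):

	struct srp_pv_transport;	/* invented for illustration */

	struct srp_pv_transport_ops {
		/* hand one SRP IU to the hypervisor's queue */
		int  (*send_iu)(struct srp_pv_transport *t,
				void *iu, size_t len);
		/* poke the hypervisor so it looks at the queue */
		void (*kick)(struct srp_pv_transport *t);
		/* tear the transport down */
		void (*destroy)(struct srp_pv_transport *t);
	};

ibmvscsi, pvscsi and friends would then each shrink to one such
transport plus whatever setup their hypervisor needs.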

> A quote from the ibmvscsi.c source code:
> 
>  * TODO: This is currently pretty tied to the IBM i/pSeries hypervisor
>  * interfaces.  It would be really nice to abstract this above an RDMA
>  * layer.
> 
> Splitting the ibmvscsi.c driver in an SRP initiator and an RDMA driver
> would make the following possible:
> - Reuse the existing SRP initiator (ib_srp). Currently there are two
> SRP initiators present in the Linux kernel -- one that uses the RDMA
> verbs API (ib_srp) and one that only works with IBM's i/pSeries
> hypervisor (ibmvscsi).
> - Reuse the ib_ipoib kernel module to provide an IP stack on top of
> the new RDMA driver instead of having to maintain a separate network
> driver for this hardware (ibmveth).

So the RDMA piece is what I'm not sure about.  As a protocol
abstraction, SRP makes a lot of sense.  As a hypervisor interface, it's
not really clear that RDMA is the best way to go.  In fact, a more
minimal DMA ring implementation seems to be the way most hypervisors
are set up, but it's still possible to run a nice SRP abstraction over
one.
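
By "minimal DMA ring" I mean something of roughly this shape (again,
invented names, just to illustrate):

	#define PV_RING_SIZE 256	/* arbitrary for the sketch */

	struct pv_ring_desc {
		u64 addr;	/* guest-physical address of the buffer */
		u32 len;	/* buffer length in bytes */
		u32 flags;	/* direction, chaining, and the like */
	};

	struct pv_ring {
		/* descriptor array shared with the host */
		struct pv_ring_desc desc[PV_RING_SIZE];
		u32 prod;	/* guest advances after posting */
		u32 cons;	/* host advances after consuming */
	};

The SRP IUs would simply travel in the buffers the descriptors point
at; the ring itself stays dumb.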

> More information about the architecture the ibmvscsi and the ibmveth
> drivers have been developed for can be found in the following paper:
> D. Boutcher and D. Engebretsen, Linux Virtualization on IBM POWER5
> Systems, Proceedings of the Linux Symposium, Vol. 1, July 2004, pp.
> 113-120 (http://www.kernel.org/doc/mirror/ols2004v1.pdf).

The other piece of this is that it's not clear that SCSI is actually the
best layer for this abstraction.  For a simple, fast storage interface,
nbd is probably the easiest abstraction to do (the disadvantage being
the lack of ioctl support, so it really only does storage).
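
(For comparison, the entire nbd request header is roughly the
following; I'm paraphrasing include/linux/nbd.h from memory, so check
the real header before relying on it:)

	struct nbd_request {
		__be32 magic;		/* NBD_REQUEST_MAGIC */
		__be32 type;		/* read, write or disconnect */
		char   handle[8];	/* echoed back in the reply */
		__be64 from;		/* byte offset into the device */
		__be32 len;		/* transfer length in bytes */
	} __attribute__((packed));

Everything else is a socket and a matching reply header, which is what
makes it so easy to implement.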

James


