Message-ID: <20110501204748.GB1017@infradead.org>
Date:	Sun, 1 May 2011 16:47:48 -0400
From:	Christoph Hellwig <hch@...radead.org>
To:	KY Srinivasan <kys@...rosoft.com>
Cc:	Greg KH <greg@...ah.com>, Christoph Hellwig <hch@...radead.org>,
	"gregkh@...e.de" <gregkh@...e.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>
Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code

On Sun, May 01, 2011 at 06:56:58PM +0000, KY Srinivasan wrote:
> > Yeah, it seems to me that no matter how the user specifies the disk
> > "type" for the guest configuration, we should use the same Linux driver,
> > with the same naming scheme for both ways.
> > 
> > As Christoph points out, it's just a matter of hooking the device up to
> > the scsi subsystem.  We do that today for ide, usb, scsi, and loads of
> > other types of devices all with the common goal of making it easier for
> > userspace to handle the devices in a standard manner.
> 
> This is not what is being done in Xen and KVM - they both have a PV front-end
> block driver that is not managed by the scsi stack. The Hyper-V block driver is
> equivalent to what we have in Xen and KVM in this respect.

Xen also has a PV SCSI driver, although it isn't used very much.
For virtio we now think it was a mistake not to speak SCSI, and we
are pondering a virtio-scsi driver to replace virtio-blk.

But that's not the point here at all.  The point is that blkvsc
speaks a SCSI protocol over the wire, so it should be implemented
as a SCSI LLDD unless you have a good reason not to.  This is
especially important because it gets you advanced features like
block level cache flush and FUA support, device topology, and
discard support for free.  Cache flush and FUA are good examples
of something that blkvsc currently gets wrong, btw.
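
To make the "for free" part concrete, here is a minimal sketch of what
hooking into the midlayer as an LLDD looks like.  The names
(blkvsc_host_priv, blkvsc_probe) and template values are made up for
illustration, not taken from the actual blkvsc/storvsc code, and it
assumes the 2.6.37+ lock-free queuecommand signature:

#include <linux/module.h>
#include <linux/device.h>
#include <scsi/scsi.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_cmnd.h>

/* Hypothetical per-host state; a real driver would keep its VMBus
 * channel here and complete commands from the channel callback. */
struct blkvsc_host_priv {
        void *channel;
};

/* Lock-free queuecommand signature as of 2.6.37+. */
static int blkvsc_queuecommand(struct Scsi_Host *shost,
                               struct scsi_cmnd *scmnd)
{
        /*
         * A real LLDD would translate scmnd->cmnd (the CDB) plus the
         * command's scatterlist into an on-the-wire request.  Because
         * the CDB passes through unmodified, SYNCHRONIZE CACHE, FUA
         * writes, READ CAPACITY(16) topology data and UNMAP/discard
         * are all generated by the midlayer and sd - no driver code.
         */
        scmnd->result = DID_NO_CONNECT << 16;   /* stub: fail everything */
        scmnd->scsi_done(scmnd);
        return 0;
}

static struct scsi_host_template blkvsc_template = {
        .module         = THIS_MODULE,
        .name           = "blkvsc",
        .queuecommand   = blkvsc_queuecommand,
        .this_id        = -1,
        .can_queue      = 32,
        .sg_tablesize   = SG_ALL,
        .cmd_per_lun    = 1,
};

/* Called from the bus probe routine; 'parent' is the bus device. */
static int blkvsc_probe(struct device *parent)
{
        struct Scsi_Host *shost;
        int ret;

        shost = scsi_host_alloc(&blkvsc_template,
                                sizeof(struct blkvsc_host_priv));
        if (!shost)
                return -ENOMEM;

        ret = scsi_add_host(shost, parent);
        if (ret) {
                scsi_host_put(shost);
                return ret;
        }

        scsi_scan_host(shost);  /* midlayer probes LUNs, sd binds */
        return 0;
}

Once scsi_scan_host() runs, sd probes the LUNs and drives cache flush,
FUA and discard from standard SCSI commands and VPD/mode page data -
which is exactly the "for free" part above.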
