Message-ID: <6E21E5352C11B742B20C142EB499E0481DD6B7@TK5EX14MBXC124.redmond.corp.microsoft.com>
Date: Fri, 29 Apr 2011 17:32:43 +0000
From: KY Srinivasan <kys@...rosoft.com>
To: Greg KH <greg@...ah.com>
CC: Christoph Hellwig <hch@...radead.org>,
"gregkh@...e.de" <gregkh@...e.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>
Subject: RE: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
> -----Original Message-----
> From: Greg KH [mailto:greg@...ah.com]
> Sent: Friday, April 29, 2011 12:40 PM
> To: KY Srinivasan
> Cc: Christoph Hellwig; gregkh@...e.de; linux-kernel@...r.kernel.org;
> devel@...uxdriverproject.org; virtualization@...ts.osdl.org
> Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
>
> On Fri, Apr 29, 2011 at 04:32:35PM +0000, KY Srinivasan wrote:
> >
> >
> > > -----Original Message-----
> > > From: Christoph Hellwig [mailto:hch@...radead.org]
> > > Sent: Wednesday, April 27, 2011 8:19 AM
> > > To: KY Srinivasan
> > > Cc: Christoph Hellwig; Greg KH; gregkh@...e.de; linux-kernel@...r.kernel.org;
> > > devel@...uxdriverproject.org; virtualization@...ts.osdl.org
> > > Subject: Re: [PATCH 00/25] Staging: hv: Cleanup vmbus driver code
> > >
> > > On Wed, Apr 27, 2011 at 11:47:03AM +0000, KY Srinivasan wrote:
> > > > On the host side, Windows emulates the standard PC hardware
> > > > to permit hosting of fully virtualized operating systems.
> > > > To enhance disk I/O performance, we support a virtual block driver.
> > > > This block driver currently handles disks that have been set up as
> > > > IDE disks for the guest, as specified in the guest configuration.
> > > >
> > > > On the SCSI side, we emulate a SCSI HBA. Devices configured
> > > > under the SCSI controller for the guest are handled via this
> > > > emulated HBA (SCSI front-end). So, SCSI disks configured for
> > > > the guest are handled through native SCSI upper-level drivers.
> > > > Currently, if this SCSI front-end driver is not loaded, the guest
> > > > cannot see devices that have been configured as SCSI devices. So,
> > > > while the virtual block driver described earlier could potentially
> > > > handle all block devices, the implementation choices made on the
> > > > host do not permit it. Also, the only SCSI device that can currently
> > > > be configured for the guest is a disk device.
> > > >
> > > > Both the block device driver (hv_blkvsc) and the SCSI front-end
> > > > driver (hv_storvsc) communicate with the host via unique channels
> > > > that are implemented as bi-directional ring buffers. Each (storage)
> > > > channel carries with it enough state to uniquely identify the device
> > > > on the host side. Microsoft has chosen to use SCSI verbs for this
> > > > storage channel communication.
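To make that concrete, here is a minimal sketch of how a SCSI-verb
request might be framed on such a ring-buffer channel. The struct and
field names below are illustrative assumptions for exposition, not the
actual Hyper-V wire format:

/*
 * Illustrative sketch only -- not the real Hyper-V protocol. A storage
 * request as it might be framed on the bi-directional ring-buffer
 * channel: enough state to identify the device on the host side, plus
 * a standard SCSI CDB (the "SCSI verbs") describing the operation.
 */
#include <linux/types.h>

#define SKETCH_MAX_CDB_SIZE	16

struct sketch_stor_packet {
	u32 operation;		/* request vs. completion, etc. */
	u32 host_device_id;	/* identifies the device on the host */
	u32 status;		/* filled in by the host on completion */
	u32 data_len;		/* length of any data transfer */
	u8  cdb_len;
	u8  cdb[SKETCH_MAX_CDB_SIZE];	/* the SCSI command itself */
} __packed;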
> > >
> > > This doesn't really explain much at all. The only important piece
> > > of information I can read from this statement is that both blkvsc
> > > and storvsc only support disks, but not any other kind of device,
> > > and that choosing either one is an arbitrary selection when setting
> > > up a VM configuration.
> > >
> > > But this still isn't an excuse to implement a block layer driver
> > > for a SCSI protocol, and it doesn't explain in what way the two
> > > protocols actually differ. You really should implement blkvsc as a
> > > SCSI LLDD, too - and from the looks of it, it doesn't even have to
> > > be a separate one; just adding the IDs to storvsc would do the work.
> >
> > On the host side, as part of configuring a guest, you can specify
> > block devices as being under either an IDE controller or a SCSI
> > controller; those are the only options you have. Devices configured
> > under the IDE controller cannot be seen in the guest through the
> > emulated SCSI front-end, which is the SCSI driver (storvsc_drv).
>
> Are you sure the libata core can't see this IDE controller and connect
> to it? That way you would be using the SCSI system, and you would need
> a much smaller IDE driver, perhaps one that could be merged with your
> SCSI driver.
If we don't load the blkvsc driver, the emulated IDE controller exposed
to the guest can and will be seen by the libata core. In that case,
though, disk I/O takes the fully emulated path, with the usual
performance hit. When the blkvsc driver is loaded, device access does
not go through the emulated IDE controller at all. Blkvsc is truly a
generic block driver: it registers with the block layer in the guest
and talks to the appropriate device driver on the host, communicating
over the vmbus. In this respect it is identical to the block front-end
drivers we have for guests on other virtualization platforms (Xen
etc.). The only difference is that, on the host side, the only way to
assign a SCSI disk to the guest is to configure that disk under the
SCSI controller. So, while blkvsc is a generic block driver, the
restrictions on the host side mean it only ends up managing block
devices that carry IDE majors.
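For illustration, the overall shape of such a driver might look like
the skeleton below. This is a sketch against the circa-2011 request_fn
block API; the vmbus send path is left as a placeholder, and none of
the names here are the actual blkvsc code:

/*
 * Sketch only: the general shape of a para-virtualized block driver
 * that registers with the guest block layer and forwards requests to
 * the host. None of this is the actual blkvsc code; the vmbus send is
 * left as a placeholder comment.
 */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/spinlock.h>

static int sketch_major;
static struct gendisk *sketch_disk;
static DEFINE_SPINLOCK(sketch_lock);

static const struct block_device_operations sketch_fops = {
	.owner = THIS_MODULE,
};

/* Pull requests off the queue and hand them to the host. */
static void sketch_request_fn(struct request_queue *q)
{
	struct request *req;

	while ((req = blk_fetch_request(q)) != NULL) {
		/*
		 * Placeholder: package req into a channel packet and
		 * post it on the ring buffer; the real completion would
		 * arrive asynchronously over the same channel.
		 */
		__blk_end_request_all(req, 0);
	}
}

static int __init sketch_init(void)
{
	struct request_queue *q;

	sketch_major = register_blkdev(0, "sketchvsc");
	if (sketch_major < 0)
		return sketch_major;

	q = blk_init_queue(sketch_request_fn, &sketch_lock);
	if (!q)
		goto err_unregister;

	sketch_disk = alloc_disk(16);
	if (!sketch_disk)
		goto err_queue;

	sketch_disk->major = sketch_major;
	sketch_disk->first_minor = 0;
	sketch_disk->fops = &sketch_fops;
	sketch_disk->queue = q;
	sprintf(sketch_disk->disk_name, "svda");
	/* The real capacity would come from the host at channel setup. */
	set_capacity(sketch_disk, 0);
	add_disk(sketch_disk);
	return 0;

err_queue:
	blk_cleanup_queue(q);
err_unregister:
	unregister_blkdev(sketch_major, "sketchvsc");
	return -ENOMEM;
}
module_init(sketch_init);

MODULE_LICENSE("GPL");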
>
> We really don't want to write new IDE drivers anymore that don't use
> libata.
As I noted earlier, it is incorrect to view the Hyper-V blkvsc driver
as an IDE driver; there is nothing IDE-specific about it. It is very
much like other block front-end drivers (such as Xen's) that get their
device information from the host and register the block device
accordingly with the guest. It just happens that, in the current
version of the Windows host, only devices configured as IDE devices on
the host end up being managed by this driver. To make this clear, in my
recent cleanup of this driver (these patches have been applied), all
IDE major information has been properly consolidated in one place.
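As a sketch of what that consolidation amounts to, a single helper can
map the disk's position, as reported by the host, onto the legacy IDE
majors. The helper and the device_number encoding below are
illustrative assumptions, not the applied patch:

/*
 * Illustrative sketch, not the actual patch: keep all knowledge of the
 * legacy IDE majors in one helper, keyed off the controller/device
 * position the host reports for the disk.
 */
#include <linux/major.h>	/* IDE0_MAJOR, IDE1_MAJOR */
#include <linux/errno.h>

static int sketch_ide_major(unsigned int device_number)
{
	/*
	 * Assumption: two emulated IDE controllers with two devices
	 * each; map the position onto the legacy majors.
	 */
	switch (device_number / 2) {
	case 0:
		return IDE0_MAJOR;	/* would be hda, hdb */
	case 1:
		return IDE1_MAJOR;	/* would be hdc, hdd */
	default:
		return -EINVAL;
	}
}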
Regards,
K. Y