Message-ID: <6E21E5352C11B742B20C142EB499E0481AA6D232@TK5EX14MBXC128.redmond.corp.microsoft.com>
Date:	Fri, 18 Nov 2011 04:53:03 +0000
From:	KY Srinivasan <kys@...rosoft.com>
To:	James Bottomley <James.Bottomley@...senPartnership.com>
CC:	"gregkh@...e.de" <gregkh@...e.de>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>,
	"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
	"ohering@...e.com" <ohering@...e.com>,
	"hch@...radead.org" <hch@...radead.org>
Subject: RE: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out
 of the staging area



> -----Original Message-----
> From: James Bottomley [mailto:James.Bottomley@...senPartnership.com]
> Sent: Thursday, November 17, 2011 1:27 PM
> To: KY Srinivasan
> Cc: gregkh@...e.de; linux-kernel@...r.kernel.org;
> devel@...uxdriverproject.org; virtualization@...ts.osdl.org; linux-
> scsi@...r.kernel.org; ohering@...e.com; hch@...radead.org
> Subject: Re: [PATCH 1/1] Staging: hv: storvsc: Move the storage driver out of the
> staging area
> 
> On Tue, 2011-11-08 at 10:13 -0800, K. Y. Srinivasan wrote:
> > The storage driver (storvsc_drv.c) handles all block storage devices
> > assigned to Linux guests hosted on Hyper-V. This driver has been in the
> > staging tree for a while and this patch moves it out of the staging area.
> > As per Greg's recommendation, this patch makes no changes to the staging/hv
> > directory. Once the driver moves out of staging, we will cleanup the
> > staging/hv directory.
> >
> > This patch includes all the patches that I have sent against the staging/hv
> > tree to address the comments I have gotten to date on this storage driver.
> 
> First comment is that it would have been easier to see the individual
> patches for comment before you committed them.

I am not sure the patches have been committed yet. All of them were sent
to the various mailing lists, and you were copied as well. In the future, I will
add the scsi mailing list to the set of lists I copy on staging patches. 
 
> 
> The way you did mempool isn't entirely right: the problem is that to
> prevent a memory-to-I/O deadlock we need to ensure forward progress on
> the drain device.  Just having 64 commands available to the host doesn't
> necessarily achieve this, because LUN1 could consume them all and starve
> LUN0, which is the drain device, leading to the deadlock; so the mempool
> really needs to be per device, using slave_alloc.

I will do this per LUN.
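
Roughly along these lines, with the pool created from slave_alloc() and torn
down from slave_destroy(). This is only a sketch: the pool depth, the
STORVSC_MIN_CMDS_PER_LUN constant, and the storvsc_device_destroy() name are
illustrative rather than final code, and the existing sdev_bflags setup is
omitted here:

#include <linux/mempool.h>
#include <scsi/scsi_device.h>

/* Sketch only: the per-LUN reserve depth is illustrative. */
#define STORVSC_MIN_CMDS_PER_LUN	64

static int storvsc_device_alloc(struct scsi_device *sdevice)
{
	mempool_t *pool;

	/*
	 * Give each LUN its own reserve of request packets so the
	 * drain device can always make forward progress, even when
	 * another LUN has exhausted the shared allocation.
	 */
	pool = mempool_create_kmalloc_pool(STORVSC_MIN_CMDS_PER_LUN,
				sizeof(struct storvsc_cmd_request));
	if (!pool)
		return -ENOMEM;

	sdevice->hostdata = pool;
	return 0;
}

static void storvsc_device_destroy(struct scsi_device *sdevice)
{
	/* slave_destroy is only called after a successful slave_alloc,
	 * so hostdata is valid here. */
	mempool_destroy(sdevice->hostdata);
	sdevice->hostdata = NULL;
}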

> 
> +static int storvsc_device_alloc(struct scsi_device *sdevice)
> +{
> +	/*
> +	 * This enables LUNs to be located sparsely. Otherwise, we may not
> +	 * discover them.
> +	 */
> +	sdevice->sdev_bflags |= BLIST_SPARSELUN | BLIST_LARGELUN;
> +	return 0;
> +}
> 
> Looks bogus ... this should happen automatically for SCSI-3 devices ...
> unless your hypervisor has some strange (and wrong) identification?  I
> really think you want to use SCSI-3 because it will do report LUN
> scanning, which consumes far fewer resources.

I will see if I can clean this up.
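
If the host's INQUIRY data cannot be fixed to report SCSI-3 (which would get
us REPORT LUNS scanning automatically), one possible fallback is sketched
below: ask for report-LUN scanning explicitly instead of a sparse sequential
scan. This assumes the midlayer honors sdev_bflags set from slave_alloc(),
which is the same mechanism the current code already relies on:

static int storvsc_device_alloc(struct scsi_device *sdevice)
{
	/*
	 * BLIST_REPORTLUN2 asks the midlayer to issue REPORT LUNS even
	 * for devices that claim SCSI-2, avoiding a walk of the whole
	 * LUN space during discovery.
	 */
	sdevice->sdev_bflags |= BLIST_REPORTLUN2;
	return 0;
}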

> 
> I still think you need to disable clustering and junk the bvec merge
> function.  Your object seems to be to accumulate in page size multiples
> (and not aggregate over this) ... that's what clustering is designed to
> do.

As part of addressing your first round of comments, I experimented with these
suggestions, but I was unable to get rid of the bounce buffer handling code:
even with your suggestions in place, I could still generate I/O patterns that
required bounce buffers.
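
For reference, the clustering part of the suggestion amounts to a one-line
change in the driver's scsi_host_template; the sketch below elides the other
fields, which would stay as they are today:

static struct scsi_host_template scsi_driver = {
	.module			= THIS_MODULE,
	.name			= "storvsc_host_t",
	/* ... other fields unchanged ... */
	/*
	 * With clustering disabled, the block layer will not merge
	 * adjacent bio_vecs into multi-page segments, so every
	 * scatterlist entry the driver sees stays within one page.
	 */
	.use_clustering		= DISABLE_CLUSTERING,
};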

Regards,

K. Y  
