Message-ID: <6E21E5352C11B742B20C142EB499E0480816C384@TK5EX14MBXC126.redmond.corp.microsoft.com>
Date: Thu, 30 Jun 2011 23:32:34 +0000
From: KY Srinivasan <kys@...rosoft.com>
To: Christoph Hellwig <hch@...radead.org>
CC: "gregkh@...e.de" <gregkh@...e.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>
Subject: RE: [PATCH 00/40] Staging: hv: Driver cleanup
> -----Original Message-----
> From: Christoph Hellwig [mailto:hch@...radead.org]
> Sent: Thursday, June 30, 2011 3:34 PM
> To: KY Srinivasan
> Cc: gregkh@...e.de; linux-kernel@...r.kernel.org;
> devel@...uxdriverproject.org; virtualization@...ts.osdl.org
> Subject: Re: [PATCH 00/40] Staging: hv: Driver cleanup
>
> On Wed, Jun 29, 2011 at 07:38:21AM -0700, K. Y. Srinivasan wrote:
> > Further cleanup of the hv drivers:
> >
> > 1) Cleanup the reference counting mess for both stor and net devices.
>
> I really don't understand the need for reference counting on the storage
> side, especially now that you only have a SCSI driver. The SCSI
> midlayer does proper counting on its objects (Scsi_Host, scsi_device,
> scsi_cmnd), so you'll get that for free, given that SCSI drivers just
> piggyback on the midlayer lifetime rules.
>
> For now your patches should probably go in as-is, but mid-term you
> should be able to completely remove that code on the storage side.
>
Greg,

I am thinking of going back to my original implementation, where I had one scsi host
per IDE device. This will certainly simplify the code. Let me know what you think. If you
agree with this approach, please drop this patch-set and I will send you a new set of patches.

Regards,

K. Y