Message-ID: <6E21E5352C11B742B20C142EB499E0480816C4A5@TK5EX14MBXC126.redmond.corp.microsoft.com>
Date: Fri, 1 Jul 2011 13:25:26 +0000
From: KY Srinivasan <kys@...rosoft.com>
To: Stephen Hemminger <shemminger@...tta.com>
CC: Christoph Hellwig <hch@...radead.org>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"gregkh@...e.de" <gregkh@...e.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>
Subject: RE: [PATCH 00/40] Staging: hv: Driver cleanup
> -----Original Message-----
> From: Stephen Hemminger [mailto:shemminger@...tta.com]
> Sent: Friday, July 01, 2011 12:45 AM
> To: KY Srinivasan
> Cc: Christoph Hellwig; devel@...uxdriverproject.org; gregkh@...e.de;
> linux-kernel@...r.kernel.org; virtualization@...ts.osdl.org
> Subject: Re: [PATCH 00/40] Staging: hv: Driver cleanup
>
> On Fri, 1 Jul 2011 00:19:38 +0000
> KY Srinivasan <kys@...rosoft.com> wrote:
>
> >
> >
> > > -----Original Message-----
> > > From: Stephen Hemminger [mailto:shemminger@...tta.com]
> > > Sent: Thursday, June 30, 2011 7:48 PM
> > > To: KY Srinivasan
> > > Cc: Christoph Hellwig; devel@...uxdriverproject.org; gregkh@...e.de;
> > > linux-kernel@...r.kernel.org; virtualization@...ts.osdl.org
> > > Subject: Re: [PATCH 00/40] Staging: hv: Driver cleanup
> > >
> > > On Thu, 30 Jun 2011 23:32:34 +0000
> > > KY Srinivasan <kys@...rosoft.com> wrote:
> > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Christoph Hellwig [mailto:hch@...radead.org]
> > > > > Sent: Thursday, June 30, 2011 3:34 PM
> > > > > To: KY Srinivasan
> > > > > Cc: gregkh@...e.de; linux-kernel@...r.kernel.org;
> > > > > devel@...uxdriverproject.org; virtualization@...ts.osdl.org
> > > > > Subject: Re: [PATCH 00/40] Staging: hv: Driver cleanup
> > > > >
> > > > > On Wed, Jun 29, 2011 at 07:38:21AM -0700, K. Y. Srinivasan wrote:
> > > > > > Further cleanup of the hv drivers:
> > > > > >
> > > > > > 1) Cleanup the reference counting mess for both stor and net
> > > > > > devices.
> > > > >
> > > > > I really don't understand the need for reference counting on the storage
> > > > > side, especially now that you only have a SCSI driver. The SCSI
> > > > > midlayer does proper counting on its objects (Scsi_Host, scsi_device,
> > > > > scsi_cmnd), so you'll get that for free given that SCSI drivers just
> > > > > piggyback on the midlayer lifetime rules.
> > > > >
> > > > > For now your patches should probably go in as-is, but mid-term you
> > > > > should be able to completely remove that code on the storage side.
> > > > >
> > > >
> > > > Greg,
> > > >
> > > > I am thinking of going back to my original implementation where I had
> > > > one scsi host per IDE device. This will certainly simplify the code.
> > > > Let me know what you think. If you agree with this approach, please
> > > > drop this patch-set and I will send you a new set of patches.
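
Just to illustrate what relying on the midlayer buys us here, a rough
sketch of the one-host-per-device pattern Christoph describes, where the
Scsi_Host lifetime is owned entirely by the midlayer. The names below
(storvsc_template, storvsc_probe/storvsc_remove) are illustrative, not
the actual staging code:

#include <scsi/scsi_host.h>

static struct scsi_host_template storvsc_template; /* .queuecommand etc. elided */

/* Sketch only: one Scsi_Host per device, no private refcounting. */
static int storvsc_probe(struct hv_device *dev)
{
	struct Scsi_Host *host;
	int ret;

	/* The midlayer holds the reference from here on. */
	host = scsi_host_alloc(&storvsc_template, sizeof(void *));
	if (!host)
		return -ENOMEM;

	ret = scsi_add_host(host, &dev->device);
	if (ret) {
		scsi_host_put(host);
		return ret;
	}
	scsi_scan_host(host);
	dev_set_drvdata(&dev->device, host);
	return 0;
}

static int storvsc_remove(struct hv_device *dev)
{
	struct Scsi_Host *host = dev_get_drvdata(&dev->device);

	/*
	 * The midlayer quiesces and releases its objects (Scsi_Host,
	 * scsi_device, scsi_cmnd) in order; we only drop our reference.
	 */
	scsi_remove_host(host);
	scsi_host_put(host);
	return 0;
}
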
> > >
> > > I think the ref counting on network devices is also unneeded
> > > as long as the unregister logic handles RCU correctly. The network layer
> > > calls the driver's unregister routine after all packets are gone.
> > On the networking side, what about incoming packets that may be racing
> > with the device destruction? The current ref counting scheme deals with
> > that case.
>
> Not sure how the HV driver tells the hypervisor to stop sending packets. But the
> destructor is not called until after all other CPUs are done processing
> packets from that device.
The issue I was concerned about is one where, on packet reception, we need
to dereference the ext field in the struct hv_device (this is the pointer
to the net device), and this could happen concurrently with the guest
trying to shut down the device. The current code deals with this
condition.
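
If we went the RCU route Stephen suggests, a rough sketch of how that
dereference could be protected without a private refcount might look
like the following. The ext field would need to be retyped with the
__rcu annotation, and the function names here are illustrative:

#include <linux/rcupdate.h>
#include <linux/netdevice.h>

/* Sketch only: RCU-protected hv_device -> net_device pointer. */
static void netvsc_recv(struct hv_device *device_obj, struct sk_buff *skb)
{
	struct net_device *net;

	rcu_read_lock();
	net = rcu_dereference(device_obj->ext);
	if (!net || !netif_running(net)) {
		/* Device is being torn down; drop the packet. */
		rcu_read_unlock();
		dev_kfree_skb_any(skb);
		return;
	}
	/* ... hand skb to the stack (netif_rx() etc.) ... */
	rcu_read_unlock();
}

static void netvsc_remove(struct hv_device *device_obj)
{
	RCU_INIT_POINTER(device_obj->ext, NULL);
	synchronize_rcu();	/* wait out in-flight receive paths */
	/* Safe to unregister and free the net device now. */
}
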
Regards,
K. Y