Message-ID: <20110824201729.GB28667@kroah.com>
Date: Wed, 24 Aug 2011 13:17:29 -0700
From: Greg KH <greg@...ah.com>
To: KY Srinivasan <kys@...rosoft.com>
Cc: "gregkh@...e.de" <gregkh@...e.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"virtualization@...ts.osdl.org" <virtualization@...ts.osdl.org>,
Haiyang Zhang <haiyangz@...rosoft.com>
Subject: Re: [PATCH 086/117] Staging: hv: storvsc: Leverage the spinlock to
manage ref_cnt
On Wed, Aug 24, 2011 at 04:25:18PM +0000, KY Srinivasan wrote:
> > > > On Fri, Jul 15, 2011 at 10:47:14AM -0700, K. Y. Srinivasan wrote:
> > > > > Now that we have a spin lock protecting access to the stor device pointer,
> > > > > use it to manage the reference count as well.
> > > > >
> > > > > Signed-off-by: K. Y. Srinivasan <kys@...rosoft.com>
> > > > > Signed-off-by: Haiyang Zhang <haiyangz@...rosoft.com>
> > > > > ---
> > > > > drivers/staging/hv/hyperv_storage.h | 8 ++++----
> > > > > drivers/staging/hv/storvsc.c | 10 +++++-----
> > > > > 2 files changed, 9 insertions(+), 9 deletions(-)
> > > > >
> > > > > diff --git a/drivers/staging/hv/hyperv_storage.h b/drivers/staging/hv/hyperv_storage.h
> > > > > index 53b65be..d946211 100644
> > > > > --- a/drivers/staging/hv/hyperv_storage.h
> > > > > +++ b/drivers/staging/hv/hyperv_storage.h
> > > > > @@ -265,7 +265,7 @@ struct storvsc_device {
> > > > > struct hv_device *device;
> > > > >
> > > > > /* 0 indicates the device is being destroyed */
> > > > > - atomic_t ref_count;
> > > > > + int ref_count;
> > > >
> > > > Is this really needed? Can't you rely on the reference count of the
> > > > hv_device itself?
> > >
> > > We don't have a reference count on the hv_device
> >
> > Wait, why not? You sure better have a reference count on that device
> > if you have a pointer to it, if not, you have a bug, and that needs to
> > be fixed. Please reread Documentation/CodingStyle for details.
>
> Greg,
>
> I am confused here. The model we have is identical to other device/bus
> abstractions in the kernel. For instance, neither struct pci_dev nor struct
> virtio_device has an explicit reference count. However, they both embed
> struct device, which has the kobject structure embedded to manage
> the object life cycle. This is exactly the model we have for the vmbus devices:
> struct hv_device embeds struct device, which embeds the struct kobject that
> manages the lifecycle.
Yes, that is correct.
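For reference, the embedding chain being agreed on here looks roughly
like this (a sketch; the vmbus-specific fields are trimmed, and only the
lifecycle-relevant members are shown):

        struct hv_device {
                /* ... vmbus-specific fields (channel, device ids, ...) ... */
                struct device device;   /* embeds a kobject; its refcount
                                         * controls the hv_device lifetime */
                void *ext;              /* class-specific state, cf. priv in
                                         * struct virtio_device */
        };

get_device()/put_device() on &hv_dev->device are what pin and release
the object.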
> Like other bus specific devices in the kernel (pci devices, virtio devices, etc.),
> class specific vmbus devices - struct storvsc_device and struct netvsc_device -
> embed a pointer to the underlying struct hv_device.
And when you save that pointer, you ARE incrementing the reference count
properly, right? If not, you just caused a bug.
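Concretely, the usual pattern is something like the following sketch
(the function names are illustrative, and it assumes struct hv_device
embeds its generic struct device as ->device):

        #include <linux/device.h>
        #include <linux/slab.h>

        static struct storvsc_device *alloc_stor_device(struct hv_device *device)
        {
                struct storvsc_device *stor_device;

                stor_device = kzalloc(sizeof(*stor_device), GFP_KERNEL);
                if (!stor_device)
                        return NULL;

                /* Pin the hv_device for as long as we hold the pointer. */
                get_device(&device->device);
                stor_device->device = device;
                return stor_device;
        }

        static void free_stor_device(struct storvsc_device *stor_device)
        {
                /* Drop the reference taken in alloc_stor_device(). */
                put_device(&stor_device->device->device);
                kfree(stor_device);
        }

Without the get_device()/put_device() pair, nothing stops the hv_device
from being freed while storvsc still holds the pointer.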
> Furthermore, a pointer to the class specific device structure is
> stashed in the struct hv_device (the ext pointer).
> This is identical to what is done in the virtio blk device - look at the
> priv element in struct virtio_device.
Yes, but the "base" structure of virtio_device does not contain a lock
that the users of that priv pointer are using to control access to data
_within_ the priv pointer, right?
That locking is up to the users of the priv pointer.
Now I see how you were trying to move the lock "deeper" as it seemed
that everyone needed it, but you just moved the lock away from the thing
that it was trying to protect, which might cause problems, and at the
least, is confusing as heck because you are mixing two different layers
here, in ways that should not be mixed.
If you really need a lock to protect some private data within the priv
pointer, then put it somewhere else, like in the priv pointer, as almost
all other subsystems do.
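In other words, roughly the virtio_blk shape, as a sketch (the spinlock
placement is the point; the storvsc field names here are illustrative,
and struct hv_device comes from the driver's own headers):

        #include <linux/spinlock.h>
        #include <linux/slab.h>

        struct storvsc_device {
                struct hv_device *device;       /* refcounted back-pointer */
                spinlock_t lock;                /* protects ref_count and other
                                                 * storvsc-private state */
                int ref_count;
        };

        static int storvsc_dev_init(struct hv_device *device)
        {
                struct storvsc_device *stor_device;

                stor_device = kzalloc(sizeof(*stor_device), GFP_KERNEL);
                if (!stor_device)
                        return -ENOMEM;

                spin_lock_init(&stor_device->lock);
                stor_device->ref_count = 1;
                device->ext = stor_device;      /* analogous to vdev->priv */
                return 0;
        }

That keeps the lock at the same layer as the data it protects, instead
of in the generic hv_device.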
greg k-h