Message-ID: <20070419154849.2e722762@gondolin.boeblingen.de.ibm.com>
Date: Thu, 19 Apr 2007 15:48:49 +0200
From: Cornelia Huck <cornelia.huck@...ibm.com>
To: "Dmitry Torokhov" <dmitry.torokhov@...il.com>
Cc: "Tejun Heo" <htejun@...il.com>,
"Alan Stern" <stern@...land.harvard.edu>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Greg K-H" <greg@...ah.com>,
"Rusty Russell" <rusty@...tcorp.com.au>
Subject: Re: [PATCH RFD] alternative kobject release wait mechanism
On Thu, 19 Apr 2007 09:13:43 -0400,
"Dmitry Torokhov" <dmitry.torokhov@...il.com> wrote:
> Because they are managed by two different entities: the struct device
> objects are managed by the driver core and driver-specific objects are
> managed by their respective driver.
Not sure if I understand you here. My view of this was always that the
embedding object was kind of an extended device and that the relevant
driver/subsystem managed it through the driver core infrastructure.
>
> > > Pretty much drivers have 2 options:
> > >
> > > struct my_device {
> > > void *private_data;
> > > struct device dev;
> > > };
> > >
> > > In this case ->release must live in subsystem code; individual
> > > drivers kfree(my_dev->private_data) and do any additional cleanup
> > > after calling device_unregister(&my_dev->dev);
> >
> > They must do this in the ->remove callback.
>
> Why? If the driver truly stops the hardware, then any driver-specific
> data is no longer needed. With sysfs severing access to removed
> attributes there is no need to have a "global release"; cleanup can be
> done in stages.
I think I meant the same thing :) Freeing the data in the ->release
callback is obviously too late. Freeing it in the ->remove callback
means that the device is no longer really used (and can't be looked up
any more); only some further references may linger (and those are of no
consequence with the sysfs disconnect).
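
For illustration, a minimal sketch of the split being discussed; the
names (struct my_device, my_release(), my_remove()) are hypothetical
and not taken from any real subsystem. The subsystem supplies the
->release callback that frees the embedding structure once the last
reference is dropped, while the driver frees its private data in
->remove, at which point the device can no longer be looked up.

	#include <linux/device.h>
	#include <linux/slab.h>

	struct my_device {
		void *private_data;	/* owned by the driver */
		struct device dev;	/* managed by the driver core */
	};

	#define to_my_device(d) container_of(d, struct my_device, dev)

	/* Subsystem code: called when the last reference to dev is dropped. */
	static void my_release(struct device *dev)
	{
		kfree(to_my_device(dev));
	}

	/* Driver code: called during unregistration; the device can no
	 * longer be looked up, so driver-private data can be freed here.
	 * Lingering references only keep the embedding structure alive
	 * until my_release() runs. */
	static int my_remove(struct device *dev)
	{
		struct my_device *my_dev = to_my_device(dev);

		kfree(my_dev->private_data);
		my_dev->private_data = NULL;
		return 0;
	}

In this scheme the subsystem would set dev->release = my_release before
device_register(), and my_remove() would be hooked up as the bus or
driver ->remove callback; after device_unregister() the driver's data is
gone, and struct my_device itself lives only as long as outstanding
references to &my_dev->dev.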