Message-ID: <YKgbzO0AkYN4J7Ye@kroah.com>
Date: Fri, 21 May 2021 22:45:00 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Luis Chamberlain <mcgrof@...nel.org>
Cc: Minchan Kim <minchan@...nel.org>, Hannes Reinecke <hare@...e.de>,
Douglas Gilbert <dgilbert@...erlog.com>, ngupta@...are.org,
sergey.senozhatsky.work@...il.com, axboe@...nel.dk,
mbenes@...e.com, jpoimboe@...hat.com, tglx@...utronix.de,
keescook@...omium.org, jikos@...nel.org, rostedt@...dmis.org,
peterz@...radead.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/4] zram: fix few sysfs races
On Fri, May 21, 2021 at 08:16:18PM +0000, Luis Chamberlain wrote:
> On Fri, May 21, 2021 at 10:01:52PM +0200, Greg Kroah-Hartman wrote:
> > On Wed, May 19, 2021 at 08:20:23PM +0000, Luis Chamberlain wrote:
> > > Greg,
> > >
> > > your feedback would be appreciated here.
> >
> > Appreciated where? This is a zram patchset, what do I need to mess with
> > it for?
>
> This patchset has two issues, which I noted in the last series, that
> are generic and could best be dealt with in sysfs; I also suggested
> how this could actually be done in sysfs / kernfs.
>
> > > Greg, can you comment on technical levels why a general core fix is not
> > > desirable upstream for those two issues?
> >
> > What issues exactly?
>
> When I suggested the generic way to fix this, your main argument
> against a generic solution was that we don't support module removal.
> Given that that argument did not seem to hold water, it raises the
> question of whether you still would rather not see this fixed in
> sysfs / kernfs.
>
> If, however, you are more open to it now, I can take on that work and
> send a proper patch for review.
I looked at the last patch here and I really do not see the issue.
In order for the module to be removed, zram_exit() has to return, right?
And that function calls destroy_devices() which will then remove all
devices in sysfs that are associated with this driver. At that point in
time, sysfs detaches the attributes from kernfs so that any open file
handle that happened to be around for an attribute file will not call
back into the show/store function for that device.
Then destroy_devices() returns, and zram_exit() returns, and the module
is unloaded.
So how can a show/store function in zram_drv.c be called after
destroy_devices() returns?
The changelog text in patch 4/4 is odd; destroy_devices() shouldn't be
racing with anything, as devices have reference counts in order to
protect this type of thing from happening, right? How can a store
function be called when a device is somehow removed from memory at the
same time? Don't we properly increment/decrement the device
structure's reference count? If not, wouldn't that be the simplest
solution here?
And who is ripping out zram drivers while the system is running anyway?
What workflow causes this to happen so much so that the sysfs files need
to be "protected"? What tool/script/whatever is hammering on those
sysfs files so much while someone wants to unload the module?
What am I missing?
thanks,
greg k-h