Message-ID: <4CD14089.7050709@kernel.org>
Date: Wed, 03 Nov 2010 11:59:21 +0100
From: Tejun Heo <tj@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: lkml <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH v2.6.36-rc7] init: don't call flush_scheduled_work() from
do_initcalls()
Hello, Andrew.
On 10/22/2010 08:09 PM, Andrew Morton wrote:
> mm.. I think we'd be OK to merge it. Any such code is pretty badly
> buggy and is probably also crashable with a well-timed rmmod.
>
> It'll also be code which few people ever use, so any runtime checks
> won't get us very good coverage.
>
> Still, if it's not too hard to implement an "are there any scheduled
> works which live in initmem" check then I guess that would be the
> prudent approach. A quite gross way of implementing that might be
> something like

I've been trying to implement a proper check, but there is a problem.
It's possible to scan all pending work items and warn if either the
work_struct itself or the work function lives in initmem.
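
Just to show what I mean, the pending-work part would look something
like the below (completely untested sketch; in_initmem() and
check_init_work() are made-up names, and walking the worklists would
of course need the proper gcwq locking):

	#include <linux/kernel.h>
	#include <linux/workqueue.h>
	#include <asm/sections.h>	/* __init_begin, __init_end */

	/* true if @addr points into init memory, which is freed after boot */
	static bool in_initmem(const void *addr)
	{
		return addr >= (const void *)__init_begin &&
		       addr <  (const void *)__init_end;
	}

	/* called for each pending work item while scanning the worklists */
	static void check_init_work(struct work_struct *work)
	{
		if (in_initmem(work) || in_initmem((void *)work->func))
			pr_warning("work %p (fn %pf) lives in initmem\n",
				   work, work->func);
	}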

The problem is with currently running works.  As the work_struct isn't
accessible once the work starts executing, struct worker would need to
cache it for later reference.  A worker already remembers the
work_struct pointer itself and its cwq, and adding one more field to
remember the currently running work function would be easy.  However,
it would only be useful for this unlikely buggy case during init.
Given that anything still depending on initmem will blow up pretty
reliably anyway, I don't think it's worthwhile to add extra tracking
just for this.  So, I think I'll just go ahead and drop the flush call
and deal with the unlikely fallouts if there are any.
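
For reference, the extra tracking would be as simple as this (sketch
only, not a real patch; current_work and current_cwq already exist in
struct worker, current_func would be the new field):

	/* kernel/workqueue.c */
	struct worker {
		/* ... */
		struct work_struct	*current_work;	/* work being processed */
		struct cpu_workqueue_struct *current_cwq; /* its cwq */
		work_func_t		current_func;	/* new: current_work's fn */
		/* ... */
	};

and then process_one_work() would set worker->current_func = work->func
right where it records current_work, so the check could also cover
works which are already running.  As said above, though, I don't think
that's worth it just for init.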
Thank you.
--
tejun