Message-ID: <20120907202249.GH9426@google.com>
Date: Fri, 7 Sep 2012 13:22:49 -0700
From: Tejun Heo <tj@...nel.org>
To: Lai Jiangshan <laijs@...fujitsu.com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH wq/for-3.6-fixes 3/3] workqueue: fix possible idle
worker depletion during CPU_ONLINE
Hello again, Lai.
On Fri, Sep 07, 2012 at 12:29:39PM -0700, Tejun Heo wrote:
> > Since we introduced manage_mutex(), any place should be allowed to grab it
> > when its context allows, so this bug is not the hotplug code's
> > responsibility.
> >
> > manage_workers() just uses mutex_trylock() to grab the lock; it does not
> > try hard to do its job when needed, and it does not find out why the
> > trylock failed. So I think it is manage_workers()'s responsibility to
> > handle this bug, and a manage_workers_slowpath() is enough to fix it.
>
> It doesn't really matter how the synchronization between the regular
> manager and the hotplug path is done. The point is that the hotplug
> path, as much as possible, should be responsible for any complexity it
> incurs, so I'd really like to stay away from adding a completely
> different path through which the manager can be invoked in the usual
> (non-hotplug) case, if at all possible. Let's try to solve this from
> the hotplug side.
So, how about something like the following?
* Make manage_workers() callable outside gcwq->lock (or drop gcwq->lock
  after checking MANAGING). worker_thread() can jump back to woke_up:
  instead.

* Distinguish synchronization among workers from synchronization against
  hotplug. Was this what you tried with non_manager_mutex? Anyway,
  revive WORKER_MANAGING to synchronize among workers. If the worker
  wins MANAGING, drop gcwq->lock, mutex_lock() gcwq->hotplug_mutex and
  then do the rest.
This should prevent any idle worker from passing through
manage_workers() while hotplug is in progress. Do you think it would
work?
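For concreteness, here is a rough, untested sketch of the flow I have in
mind. The names (WORKER_MANAGING, gcwq->hotplug_mutex and the
gcwq_managing_workers() test) are only placeholders for whatever the
actual patch would end up using:

	/* called from worker_thread() with gcwq->lock NOT held */
	static bool manage_workers(struct worker *worker)
	{
		struct global_cwq *gcwq = worker->pool->gcwq;

		spin_lock_irq(&gcwq->lock);

		/* synchronization among workers: only one wins MANAGING */
		if (gcwq_managing_workers(gcwq)) {	/* placeholder test */
			spin_unlock_irq(&gcwq->lock);
			return false;
		}
		worker->flags |= WORKER_MANAGING;

		/*
		 * Synchronization against hotplug: drop gcwq->lock and sleep
		 * on the hotplug mutex so that no idle worker can slip
		 * through manage_workers() while CPU_ONLINE/DOWN is running.
		 */
		spin_unlock_irq(&gcwq->lock);
		mutex_lock(&gcwq->hotplug_mutex);

		spin_lock_irq(&gcwq->lock);
		/* maybe_destroy_workers() / maybe_create_worker() as before */
		worker->flags &= ~WORKER_MANAGING;
		spin_unlock_irq(&gcwq->lock);

		mutex_unlock(&gcwq->hotplug_mutex);

		/* have worker_thread() jump back to woke_up: and recheck */
		return true;
	}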
Thanks.
--
tejun