Message-ID: <20070104163139.GA312@tv-sign.ru>
Date: Thu, 4 Jan 2007 19:31:39 +0300
From: Oleg Nesterov <oleg@...sign.ru>
To: Srivatsa Vaddagiri <vatsa@...ibm.com>
Cc: Andrew Morton <akpm@...l.org>, David Howells <dhowells@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...l.org>,
linux-kernel@...r.kernel.org, Gautham shenoy <ego@...ibm.com>
Subject: Re: [PATCH, RFC] reimplement flush_workqueue()
On 01/04, Srivatsa Vaddagiri wrote:
>
> On Thu, Jan 04, 2007 at 05:29:36PM +0300, Oleg Nesterov wrote:
> > Thanks, I need to think about this.
> >
> > However I am not sure I fully understand the problem.
> >
> > First, this deadlock was not introduced by recent changes (including "single
> > threaded flush_workqueue() takes workqueue_mutex too"), yes?
>
> AFAIK this deadlock originated from Andrew's patch here:
>
> http://lkml.org/lkml/2006/12/7/231
I don't think so. The core problem is not that this patch does unlock/sleep/lock.
The point is that work->func() can't take workqueue_mutex (and thus can't use
flush_work/flush_workqueue), because CPU_DEAD may be holding that mutex while it
waits for us to complete(kthread_stop_info). I believe this bug is old.
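To spell out the ordering (just a sketch of the two call paths, not the exact
code, I am hand-waving over the details):

	/* CPU_DEAD path (the cpu-hotplug callback in workqueue.c): */
	mutex_lock(&workqueue_mutex);
	...
	kthread_stop(cwq->thread);
		/* sleeps in wait_for_completion(), waiting for the worker
		   thread to notice kthread_stop_info and exit */

	/* meanwhile, that worker thread is inside run_workqueue(): */
	work->func()
	    -> flush_workqueue() (or anything else taking workqueue_mutex)
	        -> mutex_lock(&workqueue_mutex);  /* held by CPU_DEAD above */

	/* the worker never gets back to its kthread_should_stop() check,
	   CPU_DEAD never drops workqueue_mutex => deadlock */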
> (Yes, your patches didn't introduce this. I was just reiterating my earlier
> point here that the workqueue code has been broken of late wrt cpu hotplug).
>
> > Also, it seems to me we have a much simpler scenario for deadlock.
> >
> > events/0 runs run_workqueue(), work->func() sleeps or is preempted. CPU 0
> > dies, the keventd thread migrates to another CPU. CPU_DEAD calls kthread_stop()
> > under workqueue_mutex and waits until the keventd thread exits. Now, if this
> > work (or another work pending on cwq->worklist) takes workqueue_mutex (for
> > example, does flush_workqueue()) we have a deadlock.
> >
> > No?
>
> Yes, the above scenario will also cause a deadlock.
Ok, thanks for the acknowledgement.
> I suppose one could avoid the deadlock by having a 'workqueue_mutex_held'
> flag and avoiding taking the mutex when that flag is set under some conditions,
I am thinking about the same thing right now. Probably we can do something like this:
	int xxx_lock(void)
	{
		for (;;) {
			if (mutex_trylock(&workqueue_mutex))
				return 1;
			/*
			 * Can't get the mutex. If we were asked to stop, the
			 * owner is CPU_DEAD sleeping in kthread_stop() waiting
			 * for us, so we can safely proceed without the mutex.
			 */
			if (kthread_should_stop())
				return 0;
			cpu_relax();	/* don't hammer the lock while spinning */
		}
	}

	void xxx_unlock(int locked)
	{
		if (locked)
			mutex_unlock(&workqueue_mutex);
	}
and then do

	locked = xxx_lock();
	...
	xxx_unlock(locked);

in flush_xxx() instead of plain lock/unlock.
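For example, in flush_workqueue() it would look roughly like this (only a
sketch, assuming the single-threaded case also takes the mutex as with the
recent patch; the xxx_ names are placeholders, of course):

	void fastcall flush_workqueue(struct workqueue_struct *wq)
	{
		int locked;

		might_sleep();
		locked = xxx_lock();	/* was: mutex_lock(&workqueue_mutex) */

		if (is_single_threaded(wq))
			flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, singlethread_cpu));
		else {
			int cpu;
			for_each_online_cpu(cpu)
				flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
		}

		xxx_unlock(locked);	/* was: mutex_unlock(&workqueue_mutex) */
	}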
Yes, ugly. I'll try to do something else over the weekend.
> but IMHO a
> neater solution is to provide a cpu-hotplug lock which works in
> all these corner cases. One such proposal was made here:
>
> http://lkml.org/lkml/2006/10/26/65
I'll take a look later, thanks.
Oleg.