Message-Id: <200901301633.54013.rusty@rustcorp.com.au>
Date: Fri, 30 Jan 2009 16:33:53 +1030
From: Rusty Russell <rusty@...tcorp.com.au>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Mike Travis <travis@....com>, Ingo Molnar <mingo@...hat.com>,
Dave Jones <davej@...hat.com>, cpufreq@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.
On Thursday 29 January 2009 12:42:05 Andrew Morton wrote:
> On Thu, 29 Jan 2009 12:13:32 +1030 Rusty Russell <rusty@...tcorp.com.au> wrote:
>
> > On Thursday 29 January 2009 06:14:40 Andrew Morton wrote:
> > > It's vulnerable to the same deadlock, I think? Suppose we have:
> > ...
> > > - A calls work_on_cpu() and takes woc_mutex.
> > >
> > > - Before function_which_takes_L() has started to execute, task B takes L
> > > then calls work_on_cpu() and task B blocks on woc_mutex.
> > >
> > > - Now function_which_takes_L() runs, and blocks on L
> >
> > Agreed, but now it's a fairly simple case. Both sides have to take lock L, and both have to call work_on_cpu.
> >
> > Workqueues are more generic and widespread, and an amazing amount of stuff gets called from them. That's why I felt uncomfortable with removing the one known problematic caller.
> >
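To spell out that two-caller case, a rough sketch (lock_L, fn_which_takes_L(), fn_harmless(), thread_a() and thread_b() are made-up names; woc_mutex stands for the single mutex serialising work_on_cpu() callers):

	#include <linux/mutex.h>
	#include <linux/workqueue.h>

	static DEFINE_MUTEX(lock_L);

	static long fn_which_takes_L(void *arg)
	{
		mutex_lock(&lock_L);	/* 3. blocks: B already holds L */
		mutex_unlock(&lock_L);
		return 0;
	}

	static long fn_harmless(void *arg)
	{
		return 0;
	}

	static void thread_a(void)
	{
		/* 1. takes woc_mutex, then waits for fn_which_takes_L()
		 *    to complete on CPU 1. */
		work_on_cpu(1, fn_which_takes_L, NULL);
	}

	static void thread_b(void)
	{
		mutex_lock(&lock_L);			/* 2. takes L... */
		work_on_cpu(2, fn_harmless, NULL);	/* ...then blocks on woc_mutex */
		mutex_unlock(&lock_L);
	}

A holds woc_mutex and waits for fn_which_takes_L(), which waits for L, which B holds while waiting for woc_mutex.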
>
> hm. it's a bit of a timebomb.
>
> y'know, the original way in which acpi-cpufreq did this is starting to
> look attractive. Migrate self to that CPU then just call the dang
> function. Slow, but no deadlocks (I think)?

Just buggy. Which random thread's cpumask was it mugging? If there's any path
where the caller isn't a kthread, what happens when userspace changes that
thread's affinity at the same time? We risk running on the wrong cpu, *then*
overriding userspace's setting when we restore the old mask.
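
The pattern, roughly (a sketch from memory, not the exact acpi-cpufreq code;
call_on_cpu() is a made-up name and error checks are omitted):

	#include <linux/cpumask.h>
	#include <linux/sched.h>

	static long call_on_cpu(int cpu, long (*fn)(void *), void *arg)
	{
		cpumask_t saved_mask = current->cpus_allowed;
		long ret;

		set_cpus_allowed_ptr(current, cpumask_of(cpu));
		/* If userspace changes our affinity here via sched_setaffinity(),
		 * fn() may not run on 'cpu' at all... */
		ret = fn(arg);
		/* ...and this restore stomps on whatever mask userspace just set. */
		set_cpus_allowed_ptr(current, &saved_mask);
		return ret;
	}
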
In general these cpumask games are a bad idea.
Rusty.