[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <200902101924.08656.rusty@rustcorp.com.au>
Date: Tue, 10 Feb 2009 19:24:07 +1030
From: Rusty Russell <rusty@...tcorp.com.au>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: travis@....com, mingo@...hat.com, davej@...hat.com,
cpufreq@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.
On Thursday 05 February 2009 02:06:36 Andrew Morton wrote:
> On Wed, 4 Feb 2009 21:11:35 +1030 Rusty Russell <rusty@...tcorp.com.au> wrote:
>
> > On Wednesday 04 February 2009 13:31:11 Andrew Morton wrote:
> > > On Wed, 4 Feb 2009 13:14:31 +1030 Rusty Russell <rusty@...tcorp.com.au> wrote:
> > > > I think you're right though: smp_call_function_single (or neat wrappers)
> > > > where possible, work_on_cpu which can fail for the others, and we'll just
> > > > have to plumb in the error returns.
> > >
> > > I bet a lot of those can use plain old schedule_work_on().
> >
> > Which is where work_on_cpu started: a little wrapper around schedule_work_on.
> >
> > We're going in circles, no?
>
> No, we've made some progress. We have a better understanding of what
> the restrictions, shortcomings and traps are in this stuff. We've
> learned (surprise!) that a one-size-fits-all big hammer wasn't such a
> great idea.
>
> Proposed schedule_work_on() rule: either the flush_work() caller or the
> callback should not hold any explicit or implicit sleeping locks.
But as you found out looking through these, it's really hard to tell. I can
guess, but that's a little fraught...
How about we make work_on_cpu spawn a temp thread; if you care, use
something cleverer? Spawning a thread just isn't that slow.
Meanwhile, I'll prepare patches to convert all the non-controversial cases
(ie. smp_call_function-style ones).
Cheers,
Rusty.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists