Message-Id: <20090127085126.730dd77a.akpm@linux-foundation.org>
Date: Tue, 27 Jan 2009 08:51:26 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Ingo Molnar <mingo@...e.hu>
Cc: Rusty Russell <rusty@...tcorp.com.au>,
Mike Travis <travis@....com>, Ingo Molnar <mingo@...hat.com>,
Dave Jones <davej@...hat.com>, cpufreq@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.

On Tue, 27 Jan 2009 16:28:30 +0100 Ingo Molnar <mingo@...e.hu> wrote:
>
> * Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> > > But it's a general comment about fixing a general issue. The
> > > currently known case is not directly relevant; that it can happen and
> > > it's restricting the use of this otherwise-general API is.
> >
> > I think we should switch acpi-cpufreq to smp_call_function(), revert
> > this stuff and ban the calling of work_on_cpu() under locks.
>
> I agree that do_drv_read()/write() should be converted to
> smp_call_function() (what it does is atomic: MSR or PIO cycles).
>
> Then work_on_cpu() can be removed for good, to not lure people into using
> it. You seem to agree that work_on_cpu() is unfixable, so it's far better
> to offer nothing than to offer such a deceptively named but fundamentally
> limited facility.
>
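(For reference, the conversion being discussed amounts to something like the
sketch below. This is only an illustration of the MSR-read case; the struct
and function names are made up here and are not the driver's actual ones.)

#include <linux/smp.h>
#include <linux/types.h>
#include <asm/msr.h>

/*
 * Sketch only: an MSR read is safe from IPI context, so
 * smp_call_function_single() can stand in for work_on_cpu().
 */
struct msr_read_cmd {
	u32 msr;	/* MSR number to read */
	u64 val;	/* value read on the target CPU */
};

/* Runs on the target CPU, in IPI (interrupt) context. */
static void read_msr_on_cpu(void *info)
{
	struct msr_read_cmd *cmd = info;

	rdmsrl(cmd->msr, cmd->val);
}

static u64 drv_read_sketch(int cpu, u32 msr)
{
	struct msr_read_cmd cmd = { .msr = msr };

	/* wait=1: don't return until the target CPU has filled in cmd.val */
	smp_call_function_single(cpu, read_msr_on_cpu, &cmd, 1);
	return cmd.val;
}

The wait=1 argument keeps the call synchronous, which mirrors what
work_on_cpu() gives its callers today.
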
Well, I don't think it's unfixable. But a full fix would, I think,
require a kernel thread for each callback invocation. As discussed
earlier, this could be optimised to only create the new kernel thread
if the keventd thread is presently off doing something else.
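(Roughly, that per-invocation thread idea would look something like the
sketch below; the names are invented for illustration, and the
keventd-is-idle optimisation is left out.)

#include <linux/kthread.h>
#include <linux/completion.h>
#include <linux/sched.h>
#include <linux/err.h>

struct wocpu_call {
	long (*fn)(void *arg);
	void *arg;
	long ret;
	struct completion done;
};

static int wocpu_thread(void *data)
{
	struct wocpu_call *c = data;

	c->ret = c->fn(c->arg);
	complete(&c->done);
	return 0;
}

/* Illustrative stand-in for work_on_cpu(); the name is made up. */
static long work_on_cpu_kthread(unsigned int cpu, long (*fn)(void *), void *arg)
{
	struct wocpu_call c = { .fn = fn, .arg = arg };
	struct task_struct *t;

	init_completion(&c.done);

	t = kthread_create(wocpu_thread, &c, "wocpu/%u", cpu);
	if (IS_ERR(t))
		return PTR_ERR(t);

	kthread_bind(t, cpu);
	wake_up_process(t);
	wait_for_completion(&c.done);

	return c.ret;
}

Because the thread is created fresh and bound to the target CPU for just
this one call, it never shares a queue with other work items, so it cannot
deadlock against locks that those items might hold.
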
Is work_on_cpu() valuable enough to justify doing all that? Dunno.
It appears to have six callers in three drivers at present, which is
quite a large number.

Or perhaps there's a smarter way of fixing it all.