Message-ID: <20090126235331.GA8726@elte.hu>
Date: Tue, 27 Jan 2009 00:53:31 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: oleg@...hat.com, a.p.zijlstra@...llo.nl, rusty@...tcorp.com.au,
travis@....com, mingo@...hat.com, davej@...hat.com,
cpufreq@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] work_on_cpu: Use our own workqueue.

* Andrew Morton <akpm@...ux-foundation.org> wrote:

> > The problem is the intrinsic utility of work_on_cpu(): we _really_
> > want such a generic facility to be usable from any (blockable)
> > context, just like on_each_cpu(func, info) does for atomic functions,
> > without restrictions on locking context.
>
> Do we? work_on_cpu() is some last-gasp oh-i-screwed-my-code-up thing.
> We _really_ want people to use on_each_cpu()!

Why? on_each_cpu() is limited: it runs the callback in hard-IRQ (IPI)
context on every CPU. Is there a requirement that worklets be atomic?
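
To illustrate the context difference, here is a minimal sketch
(on_each_cpu() and work_on_cpu() are the real APIs; the two callbacks
are made up for illustration):

#include <linux/smp.h>
#include <linux/workqueue.h>

/* on_each_cpu() callback: runs in IPI/IRQ context on every CPU, must not sleep */
static void poke_counters(void *info)
{
	/* atomic context only: no mutexes, no GFP_KERNEL allocations */
}

/* work_on_cpu() callback: runs in a workqueue thread on one CPU, may block */
static long slow_setup(void *arg)
{
	/* can take mutexes, wait for I/O, etc. */
	return 0;
}

static void example(void)
{
	on_each_cpu(poke_counters, NULL, 1);	/* wait == 1: wait for all CPUs */
	work_on_cpu(2, slow_setup, NULL);	/* runs (and can sleep) on CPU 2 */
}
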
> We should bust a gut to keep the number of callers to the
> resource-intensive (deadlocky!) work_on_cpu() to a minimum.

I wouldn't call +10K 'resource intensive'.

> (And to think that adding add_timer_on() creeped me out).
>
> hm. None of that was very helpful. How to move forward?
>
> I think I disagree that work_on_cpu() should be made into some robust,
> smiled-upon core kernel facility. It _is_ slow, it _is_ deadlockable.

Uhm, why is it slow? It could in fact be faster in some cases: the main
overhead in on_each_cpu() is having to wait for the IPIs - with a
thread-based approach, if the other CPUs are idle we can get an IPI-less
wakeup.
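
For reference, the thread-based path is roughly this (paraphrased from
memory, not an exact copy of the current code - the patch under
discussion changes the queueing from the shared keventd workqueue to a
dedicated one):

#include <linux/workqueue.h>

struct work_for_cpu {
	struct work_struct work;
	long (*fn)(void *);
	void *arg;
	long ret;
};

static void do_work_for_cpu(struct work_struct *w)
{
	struct work_for_cpu *wfc = container_of(w, struct work_for_cpu, work);

	wfc->ret = wfc->fn(wfc->arg);
}

long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
{
	struct work_for_cpu wfc;

	INIT_WORK(&wfc.work, do_work_for_cpu);
	wfc.fn = fn;
	wfc.arg = arg;
	schedule_work_on(cpu, &wfc.work);	/* wake the target CPU's worker thread */
	flush_work(&wfc.work);			/* sleep until it has run */

	return wfc.ret;
}
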
> It should be positioned as something which is only used as a last
> resort. And if you _have_ to use it, sort out your locking!
>
> Plus the number of code sites which want to fiddle with other CPUs in
> this manner will always be small. cpufreq, MCE, irq-affinity, things
> like that.
>
> What is the deadlock in acpi-cpufreq? Which lock, and who is the
> "other" holder of that lock?

A quick look suggests that it's dbs_mutex.
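
For context, the kind of dependency chain that makes the shared keventd
workqueue deadlocky here - a hedged sketch, the exact acpi-cpufreq /
ondemand call paths may differ:

  governor / cpufreq path                keventd worker thread
  -----------------------                ---------------------
  mutex_lock(&dbs_mutex);
  work_on_cpu(cpu, fn, arg)
    queues work on the shared keventd    busy running some other work
    workqueue, flush_work() sleeps ...   item that itself wants to take
                                         dbs_mutex -> blocks
  -> circular wait, neither side makes progress

Which is the dependency that giving work_on_cpu() its own workqueue is
meant to break.
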
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/