Message-ID: <20060727014049.GA13187@elte.hu>
Date: Thu, 27 Jul 2006 03:40:49 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Linus Torvalds <torvalds@...l.org>
Cc: Srivatsa Vaddagiri <vatsa@...ibm.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Dave Jones <davej@...hat.com>,
Chuck Ebbert <76306.1226@...puserve.com>,
Ashok Raj <ashok.raj@...el.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...l.org>
Subject: Re: [patch] Reorganize the cpufreq cpu hotplug locking to not be totally bizare

* Linus Torvalds <torvalds@...l.org> wrote:
> But I agree with Arjan - I think the fundamental problem is that cpu
> hotplug locking is just fundamentally badly designed as-is. There's
> really very little point to making it a _lock_ per se, since most
> people really want more of a "I'm using this CPU, don't try to remove
> it right now" thing which is more of a ref-counting-like issue.

we'd also need a facility to wait on that refcount - i.e. a waitqueue.
Which means we'd have a "refcount + waitqueue", which is equivalent to a
"recursive, sleeping read-lock", where the write-side could be used as a
simple facility to "wait for all readers to go away and block new
readers from entering the critical sections". [That type of lock Linux
does not have right now; rwsems come closest, but they don't recurse.]
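
To illustrate, here is a minimal userspace sketch of such a "refcount +
waitqueue" construct, modeled with POSIX primitives - a condition
variable standing in for the waitqueue. All names here (cpu_ref,
cpu_ref_get/put, cpu_ref_drain/release) are made up for illustration,
and writers are assumed to be serialized externally, the way hotplug
operations already are:

#include <pthread.h>

struct cpu_ref {
	pthread_mutex_t lock;
	pthread_cond_t  wq;		/* the "waitqueue" */
	int		refcount;	/* readers: "I'm using this CPU" */
	int		blocked;	/* writer draining: block new readers */
};

#define CPU_REF_INIT \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

static __thread int nesting;		/* per-thread read-side nesting depth */

/* Read side: cheap, sleeps only while a writer is draining. */
static void cpu_ref_get(struct cpu_ref *r)
{
	pthread_mutex_lock(&r->lock);
	/* nested gets must not block, or a draining writer deadlocks us: */
	while (r->blocked && nesting == 0)
		pthread_cond_wait(&r->wq, &r->lock);
	r->refcount++;
	nesting++;
	pthread_mutex_unlock(&r->lock);
}

static void cpu_ref_put(struct cpu_ref *r)
{
	pthread_mutex_lock(&r->lock);
	nesting--;
	if (--r->refcount == 0)
		pthread_cond_broadcast(&r->wq);	/* wake a draining writer */
	pthread_mutex_unlock(&r->lock);
}

/* Write side: block new readers, then wait for existing ones to go away. */
static void cpu_ref_drain(struct cpu_ref *r)
{
	pthread_mutex_lock(&r->lock);
	r->blocked = 1;
	while (r->refcount > 0)
		pthread_cond_wait(&r->wq, &r->lock);
	pthread_mutex_unlock(&r->lock);
}

static void cpu_ref_release(struct cpu_ref *r)
{
	pthread_mutex_lock(&r->lock);
	r->blocked = 0;
	pthread_cond_broadcast(&r->wq);		/* let readers back in */
	pthread_mutex_unlock(&r->lock);
}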

Also, the hotplug lock is global right now, which is pretty unscalable,
so the rw-mutex should also be per-CPU, and the hotplug locking API
should be changed to something like:

	cpu = cpu_hotplug_lock();
	...
	cpu_hotplug_unlock(cpu);

This would enable a task to schedule away (and potentially migrate to
another CPU) with the per-CPU lock held, while still being able to
unlock the right per-CPU lock. [This approach is quite similar to how
we do sleeping per-cpu locks in -rt.]
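
Building on the cpu_ref sketch above, the API shape could look like the
following - hotplug_ref, NR_CPUS and hotplug_ref_init() are all
illustrative assumptions, not existing kernel interfaces:

#define _GNU_SOURCE
#include <sched.h>		/* sched_getcpu() */

#define NR_CPUS 64		/* illustrative compile-time limit */

static struct cpu_ref hotplug_ref[NR_CPUS];	/* one per CPU */

static void hotplug_ref_init(void)
{
	for (int i = 0; i < NR_CPUS; i++) {
		pthread_mutex_init(&hotplug_ref[i].lock, NULL);
		pthread_cond_init(&hotplug_ref[i].wq, NULL);
	}
}

static int cpu_hotplug_lock(void)
{
	/*
	 * In the kernel this would be done preemption-safely (get_cpu());
	 * sched_getcpu() is the closest userspace stand-in.
	 */
	int cpu = sched_getcpu();

	cpu_ref_get(&hotplug_ref[cpu]);
	return cpu;		/* caller remembers which CPU's lock it took */
}

static void cpu_hotplug_unlock(int cpu)
{
	/* unlock the lock we actually took, not the CPU we run on now: */
	cpu_ref_put(&hotplug_ref[cpu]);
}

A CPU-down operation would then call cpu_ref_drain() on that CPU's ref
to wait readers out, and cpu_ref_release() once the unplug has completed
or failed.
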
Ingo