Message-ID: <52F9ED11.5010800@linux.vnet.ibm.com>
Date: Tue, 11 Feb 2014 14:57:45 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Toshi Kani <toshi.kani@...com>
CC: paulus@...ba.org, oleg@...hat.com, rusty@...tcorp.com.au,
peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, mingo@...nel.org,
paulmck@...ux.vnet.ibm.com, tj@...nel.org, walken@...gle.com,
ego@...ux.vnet.ibm.com, linux@....linux.org.uk,
linux-kernel@...r.kernel.org,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH 01/51] CPU hotplug: Provide lockless versions of callback
registration functions
On 02/11/2014 06:56 AM, Toshi Kani wrote:
> On Thu, 2014-02-06 at 03:34 +0530, Srivatsa S. Bhat wrote:
> :
[...]
>>
>> Also, since cpu_maps_update_begin/done() is like a super-set of
>> get/put_online_cpus(), the former naturally protects the critical sections
>> from concurrent hotplug operations.
>
> get/put_online_cpus() is a reader-lock and concurrent executions are
> allowed among the readers. They won't be serialized until a cpu
> online/offline operation begins. By replacing this lock with
> cpu_maps_update_begin/done(), we now serialize all readers. Isn't that
> too restrictive?
That's an excellent line of thought! It doesn't really hurt at the moment,
because the for_each_online_cpu() style loops that the initcalls of various
subsystems run (before registering their notifiers) are really tiny (typically
each loop runs for just 1 CPU, the boot CPU). In other words, this change
represents only a tiny increase in the critical section size, so its effect
shouldn't be noticeable. (Note that in the old model, register_cpu_notifier()
already takes the cpu_add_remove_lock, so the registrations get serialized at
that point anyway, and that serialization is necessary).
However, going forward, when we start using more aggressive CPU onlining
techniques during boot (such as parallel CPU hotplug), the issue you pointed
out can become a real bottleneck, since for_each_online_cpu() can become
quite a large loop, and hence explicit (and unnecessary) mutual exclusion
will start hurting.
> Can we fix the issue with CPU_POST_DEAD and continue
> to use get_online_cpus()?
>
We don't want to get rid of CPU_POST_DEAD, so unfortunately we can't continue
to use get_online_cpus(). However, I am thinking of introducing a Reader-Writer
semaphore for this purpose, so that the registration routines can run in
parallel most of the time. (Basically, the rw-semaphore is like
get/put_online_cpus(), except that it protects the full hotplug critical section,
including the CPU_POST_DEAD stage.)
The usage would be like this:
	cpu_notifier_register_begin(); //does down_read(&cpu_hotplug_rwsem);

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Takes cpu_add_remove_lock mutex */
	register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_end(); //does up_read(&cpu_hotplug_rwsem);
An untested RFC patch is shown below. With this, the task performing CPU hotplug
will take the locks in this order:
	down_write(&cpu_hotplug_rwsem);
	mutex_lock(&cpu_add_remove_lock);
	mutex_lock(&cpu_hotplug.lock);

	//Perform CPU hotplug

	mutex_unlock(&cpu_hotplug.lock);
	mutex_unlock(&cpu_add_remove_lock);
	up_write(&cpu_hotplug_rwsem);
The APIs register/unregister_cpu_notifier() will take only the
cpu_add_remove_lock mutex during callback registration.
-----------------------------------------------------------------------------
diff --git a/kernel/cpu.c b/kernel/cpu.c
index deff2e6..8054e6f 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -23,6 +23,19 @@
 #include "smpboot.h"
 
 #ifdef CONFIG_SMP
+
+static DECLARE_RWSEM(cpu_hotplug_rwsem);
+
+void cpu_notifier_register_begin(void)
+{
+	down_read(&cpu_hotplug_rwsem);
+}
+
+void cpu_notifier_register_end(void)
+{
+	up_read(&cpu_hotplug_rwsem);
+}
+
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
 
@@ -32,12 +45,14 @@ static DEFINE_MUTEX(cpu_add_remove_lock);
  */
 void cpu_maps_update_begin(void)
 {
+	down_write(&cpu_hotplug_rwsem);
 	mutex_lock(&cpu_add_remove_lock);
 }
 
 void cpu_maps_update_done(void)
 {
 	mutex_unlock(&cpu_add_remove_lock);
+	up_write(&cpu_hotplug_rwsem);
 }
 
 static RAW_NOTIFIER_HEAD(cpu_chain);
@@ -160,9 +175,10 @@ void cpu_hotplug_enable(void)
 int __ref register_cpu_notifier(struct notifier_block *nb)
 {
 	int ret;
-	cpu_maps_update_begin();
+
+	mutex_lock(&cpu_add_remove_lock);
 	ret = raw_notifier_chain_register(&cpu_chain, nb);
-	cpu_maps_update_done();
+	mutex_unlock(&cpu_add_remove_lock);
 	return ret;
 }
 
@@ -192,9 +208,9 @@ EXPORT_SYMBOL(register_cpu_notifier);
 
 void __ref unregister_cpu_notifier(struct notifier_block *nb)
 {
-	cpu_maps_update_begin();
+	mutex_lock(&cpu_add_remove_lock);
 	raw_notifier_chain_unregister(&cpu_chain, nb);
-	cpu_maps_update_done();
+	mutex_unlock(&cpu_add_remove_lock);
 }
 EXPORT_SYMBOL(unregister_cpu_notifier);
Regards,
Srivatsa S. Bhat