Message-ID: <20080217234556.GA655@tv-sign.ru>
Date: Mon, 18 Feb 2008 02:45:56 +0300
From: Oleg Nesterov <oleg@...sign.ru>
To: Jarek Poplawski <jarkao2@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Dipankar Sarma <dipankar@...ibm.com>,
Gautham R Shenoy <ego@...ibm.com>,
Jarek Poplawski <jarkao2@...pl>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] workqueues: shrink cpu_populated_map when CPU dies
On 02/17, Jarek Poplawski wrote:
>
> This patch looks OK to me.
Thanks for looking at this!
> But while reading this I got some doubts
> in nearby places, so BTW 2 small questions:
>
> 1) ... workqueue_cpu_callback(...)
> {
> ...
> list_for_each_entry(wq, &workqueues, list) {
> cwq = per_cpu_ptr(wq->cpu_wq, cpu);
>
> switch (action) {
> case CPU_UP_PREPARE:
> ...
>
> It looks like not all CPU_ cases are served here: shouldn't
> list_for_each_entry() be omitted for them?
Yes, but this is harmless. cpu-hotplug callbacks are not time-critical,
cpu_down/cpu_up does not happen often, and LIST_HEAD(workqueues) is not
very long, so ...
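Just to illustrate what you mean (a rough sketch only, not a proposed
patch; the exact set of CPU_ actions this callback handles is from memory
and may differ in your tree), the list walk could be skipped for the
actions we do not care about:

	static int workqueue_cpu_callback(struct notifier_block *nfb,
					  unsigned long action, void *hcpu)
	{
		unsigned int cpu = (unsigned long)hcpu;
		struct cpu_workqueue_struct *cwq;
		struct workqueue_struct *wq;

		switch (action) {
		case CPU_UP_PREPARE:
		case CPU_ONLINE:
		case CPU_UP_CANCELED:
		case CPU_DEAD:
			break;
		default:
			/* nothing to do for this action, skip the walk */
			return NOTIFY_OK;
		}

		list_for_each_entry(wq, &workqueues, list) {
			cwq = per_cpu_ptr(wq->cpu_wq, cpu);
			/* handle "action" for this cwq, as before */
			...
		}

		return NOTIFY_OK;
	}

But, as said above, I don't think this is worth changing.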
> 2) ... __create_workqueue_key(...)
> {
> ...
> if (singlethread) {
> ...
> } else {
> get_online_cpus();
> spin_lock(&workqueue_lock);
> list_add(&wq->list, &workqueues);
>
> Shouldn't this list_add() be done after all these inits below?
>
> spin_unlock(&workqueue_lock);
>
> for_each_possible_cpu(cpu) {
> cwq = init_cpu_workqueue(wq, cpu);
> ...
> }
> ...
This doesn't matter. Please note that get_online_cpus() blocks
cpu_up/cpu_down: both must take cpu_hotplug_begin() first.
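To spell the ordering out (a sketch only; the loop body is trimmed as in
your quote, and the final put_online_cpus() is my addition for balance):

	} else {
		/*
		 * cpu_up()/cpu_down() must take cpu_hotplug_begin() first,
		 * and get_online_cpus() holds them off. So no hotplug
		 * callback can walk the workqueues list and see the
		 * not-yet-initialized cwq's we add below.
		 */
		get_online_cpus();

		spin_lock(&workqueue_lock);
		list_add(&wq->list, &workqueues);
		spin_unlock(&workqueue_lock);

		for_each_possible_cpu(cpu) {
			cwq = init_cpu_workqueue(wq, cpu);
			...
		}

		put_online_cpus();
	}

That is why doing list_add() before the per-cpu init is fine.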
Oleg.