Message-Id: <20060827110143.663d8207.akpm@osdl.org>
Date: Sun, 27 Aug 2006 11:01:43 -0700
From: Andrew Morton <akpm@...l.org>
To: dipankar@...ibm.com
Cc: Linus Torvalds <torvalds@...l.org>, Dave Jones <davej@...hat.com>,
ego@...ibm.com, rusty@...tcorp.com.au,
linux-kernel@...r.kernel.org, arjan@...el.linux.com, mingo@...e.hu,
vatsa@...ibm.com, ashok.raj@...el.com
Subject: Re: [RFC][PATCH 0/4] Redesign cpu_hotplug locking.
On Sun, 27 Aug 2006 23:19:46 +0530
Dipankar Sarma <dipankar@...ibm.com> wrote:
> I don't see why this
> is needed -
>
> + break;
> +
> + case CPU_DOWN_PREPARE:
> + mutex_lock(&workqueue_mutex);
> + break;
> +
> + case CPU_DOWN_FAILED:
> + mutex_unlock(&workqueue_mutex);
> break;
>
> This seems like some implicit code locking to me. Why is it not
> sufficient to hold the lock in the CPU_DEAD code while walking
> the workqueues ?
We need to hold workqueue_mutex to protect the per-cpu workqueue resources
while cpu_online_map is changing and while per-cpu memory is being
allocated or freed.
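
Roughly, the workqueue notifier ends up looking like this (a condensed
sketch of the callback the hunk above is from, not the exact source --
the CPU_UP_* cases and the actual workqueue walking are elided):

	static int workqueue_cpu_callback(struct notifier_block *nfb,
					  unsigned long action, void *hcpu)
	{
		switch (action) {
		case CPU_DOWN_PREPARE:
			/* Pin the per-cpu workqueue resources before
			   cpu_online_map starts changing. */
			mutex_lock(&workqueue_mutex);
			break;
		case CPU_DOWN_FAILED:
			/* The offline attempt aborted: drop the lock. */
			mutex_unlock(&workqueue_mutex);
			break;
		case CPU_DEAD:
			/* ... walk the workqueues, clean up the dead
			   CPU's threads, then drop the lock. */
			mutex_unlock(&workqueue_mutex);
			break;
		}
		return NOTIFY_OK;
	}

So the lock is taken before the offline begins and is released only
after the offline has either failed or fully completed, covering the
whole window in which the per-cpu state is unstable.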
Look at cpu_down() and mentally replace the
blocking_notifier_call_chain(CPU_DOWN_PREPARE) call with
mutex_lock(&workqueue_mutex), etc.  The __stop_machine_run() in there
modifies (ie: potentially frees) the workqueue code's per-cpu memory,
so we take that resource's lock while doing so.
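
With that substitution made, cpu_down() reads roughly like this (a
sketch condensed from the 2.6-era kernel/cpu.c -- error handling and
details omitted):

	int cpu_down(unsigned int cpu)
	{
		int err;

		/* CPU_DOWN_PREPARE -> mutex_lock(&workqueue_mutex) */
		blocking_notifier_call_chain(&cpu_chain, CPU_DOWN_PREPARE,
					     (void *)(long)cpu);

		/* Takes the CPU out of cpu_online_map; runs with
		   workqueue_mutex held. */
		err = __stop_machine_run(take_cpu_down, NULL, cpu);
		if (err) {
			/* CPU_DOWN_FAILED -> mutex_unlock(&workqueue_mutex) */
			blocking_notifier_call_chain(&cpu_chain,
					CPU_DOWN_FAILED, (void *)(long)cpu);
			return err;
		}

		/* CPU_DEAD: the notifier walks the workqueues, cleans up
		   the dead CPU's resources, then drops workqueue_mutex. */
		blocking_notifier_call_chain(&cpu_chain, CPU_DEAD,
					     (void *)(long)cpu);
		return 0;
	}

The point is that everything from DOWN_PREPARE through DEAD (or
DOWN_FAILED) executes under workqueue_mutex, so holding the lock only
inside the CPU_DEAD handler would leave __stop_machine_run() itself
unprotected.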