Message-ID: <20060827174946.GB11710@in.ibm.com>
Date: Sun, 27 Aug 2006 23:19:46 +0530
From: Dipankar Sarma <dipankar@...ibm.com>
To: Andrew Morton <akpm@...l.org>
Cc: Linus Torvalds <torvalds@...l.org>, Dave Jones <davej@...hat.com>,
ego@...ibm.com, rusty@...tcorp.com.au,
linux-kernel@...r.kernel.org, arjan@...el.linux.com, mingo@...e.hu,
vatsa@...ibm.com, ashok.raj@...el.com
Subject: Re: [RFC][PATCH 0/4] Redesign cpu_hotplug locking.
On Sun, Aug 27, 2006 at 10:21:16AM -0700, Andrew Morton wrote:
> On Sun, 27 Aug 2006 16:36:58 +0530
> Dipankar Sarma <dipankar@...ibm.com> wrote:
>
> > > Did you look? workqueue_mutex is used to protect per-cpu workqueue
> > > resources. The lock is taken prior to modification of per-cpu resources
> > > and is released after their modification. Very very simple.
> >
> > I did and there is no lock named workqueue_mutex. workqueue_cpu_callback()
> > is fairly simple and doesn't have the issues in cpufreq that
> > we are talking about (lock_cpu_hotplug() in cpu callback path).
>
> http://www.kernel.org/git/gitweb.cgi?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=9b41ea7289a589993d3daabc61f999b4147872c4
Ah, I didn't realize that it was already in git. It does take
care of the create_workqueue() callers; however, I don't see why
this is needed -
+ break;
+
+ case CPU_DOWN_PREPARE:
+ mutex_lock(&workqueue_mutex);
+ break;
+
+ case CPU_DOWN_FAILED:
+ mutex_unlock(&workqueue_mutex);
break;
This seems like implicit code locking to me - the mutex is acquired
in one notifier callback and only released in another. Why is it not
sufficient to hold the lock in the CPU_DEAD code while walking the
workqueues?
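IOW, something like this minimal sketch - take_over_work() and the
global workqueues list here are stand-ins for whatever the CPU_DEAD
path actually does, so treat the names as illustrative:

	/*
	 * Sketch: serialize only the CPU_DEAD walk over the workqueue
	 * list, instead of holding workqueue_mutex all the way from
	 * CPU_DOWN_PREPARE to CPU_DOWN_FAILED/CPU_DEAD.
	 */
	static int workqueue_cpu_callback(struct notifier_block *nfb,
					  unsigned long action, void *hcpu)
	{
		unsigned int hotcpu = (unsigned long)hcpu;
		struct workqueue_struct *wq;

		switch (action) {
		case CPU_DEAD:
			mutex_lock(&workqueue_mutex);
			list_for_each_entry(wq, &workqueues, list)
				take_over_work(wq, hotcpu); /* illustrative */
			mutex_unlock(&workqueue_mutex);
			break;
		}
		return NOTIFY_OK;
	}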
Thanks
Dipankar