Message-ID: <AANLkTikpNG2Y3S3AyxAbCkMynKu1u5yKPrw=bh+uy=9R@mail.gmail.com>
Date: Tue, 3 Aug 2010 20:44:01 -0700
From: Paul Menage <menage@...gle.com>
To: Ben Blum <bblum@...rew.cmu.edu>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org, ebiederm@...ssion.com,
lizf@...fujitsu.com, matthltc@...ibm.com, oleg@...hat.com
Subject: Re: [PATCH v4 1/2] cgroups: read-write lock CLONE_THREAD forking per
threadgroup
On Fri, Jul 30, 2010 at 4:57 PM, Ben Blum <bblum@...rew.cmu.edu> wrote:
> + * The threadgroup_fork_lock prevents threads from forking with
> + * CLONE_THREAD while held for writing. Use this for fork-sensitive
> + * threadgroup-wide operations. It's taken for reading in fork.c in
> + * copy_process().
> + * Currently only needed write-side by cgroups.
> + */
> + struct rw_semaphore threadgroup_fork_lock;
> +#endif
I'm not sure how best to word this comment, but I'd prefer something like:
"The threadgroup_fork_lock is taken in read mode during a CLONE_THREAD
fork operation; taking it in write mode prevents the owning
threadgroup from adding any new threads and thus allows you to
synchronize against the addition of unseen threads when performing
threadgroup-wide operations. New-process forks (without CLONE_THREAD)
are not affected."
As far as the #ifdef mess goes, it's true that some people don't have
CONFIG_CGROUPS defined. I'd imagine that these are likely to be
embedded systems with a fairly small number of processes and threads
per process. Are there really any such platforms where the cost of a
single extra rwsem per process is going to make a difference, in terms
of either memory or lock contention? I think you should consider
making these additions unconditional, as sketched below.
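Concretely, that would just mean dropping the CONFIG_CGROUPS guards around the
field and its initialization, along these lines (a sketch only, with the
struct placement assumed as above):

    /* declared unconditionally -- no #ifdef CONFIG_CGROUPS */
    struct rw_semaphore threadgroup_fork_lock;

    /* and wherever the struct is set up (copy_signal() today): */
    init_rwsem(&sig->threadgroup_fork_lock);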
Paul