Date:	Mon, 28 Jun 2010 11:43:59 -0400
From:	Ben Blum <bblum@...rew.cmu.edu>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	Ben Blum <bblum@...rew.cmu.edu>, linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, akpm@...ux-foundation.org,
	ebiederm@...ssion.com, lizf@...fujitsu.com, matthltc@...ibm.com,
	menage@...gle.com, oleg@...hat.com
Subject: Re: [RFC] [PATCH v3 1/2] cgroups: read-write lock CLONE_THREAD
 forking per threadgroup

On Mon, Jun 28, 2010 at 04:10:31PM +0900, KAMEZAWA Hiroyuki wrote:
> On Fri, 25 Jun 2010 01:46:54 -0400
> Ben Blum <bblum@...rew.cmu.edu> wrote:
> 
> > Adds functionality to read/write lock CLONE_THREAD fork()ing per-threadgroup
> > 
> > From: Ben Blum <bblum@...rew.cmu.edu>
> > 
> > This patch adds an rwsem that lives in a threadgroup's signal_struct that's
> > taken for reading in the fork path, under CONFIG_CGROUPS. If another part of
> > the kernel later wants to use such a locking mechanism, the CONFIG_CGROUPS
> > ifdefs should be changed to a higher-up flag that CGROUPS and the other system
> > would both depend on.
> > 
> > This is a pre-patch for cgroups-procs-write.patch.
> > 
> 
> Hmm, when adding a new lock, please describe its semantics.
> The following is my understanding -- is it right?
> 
> Lock range:
> This rwsem is read-locked from cgroup_fork() to cgroup_post_fork().
> Most of the work for fork() happens between cgroup_fork() and
> cgroup_post_fork(). This means that while sig->threadgroup_fork_lock is
> held for writing, no new do_fork() can make progress in this process
> group. This rwsem is taken for reading only when CLONE_THREAD is in
> clone_flags; IOW, it is taken only when creating a new thread.
> 
> What we can do with this:
> By write-locking sig->threadgroup_fork_lock, code can visit _all_ the
> threads in a process group without racing against thread creation. So,
> if you want to implement an atomic operation against a process, taking
> this lock is one way to do it.
> 
> For what:
> To implement an atomic process move in cgroup, we need this lock.

All good. Should a description like this go in Documentation/ somewhere,
or above the declaration of the lock?
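
If the latter, maybe something like this above the field in
signal_struct (just a sketch; the wording only restates your summary):

  #ifdef CONFIG_CGROUPS
  	/*
  	 * Held for reading across the CLONE_THREAD fork path, from
  	 * cgroup_fork() to cgroup_post_fork(). Take it for writing to
  	 * keep threadgroup membership stable, e.g. to visit all
  	 * threads in the group without racing against new forks.
  	 */
  	struct rw_semaphore threadgroup_fork_lock;
  #endif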

> Why this implementation:
> Considering cgroups, the threads of one process can be in several
> different cgroups. So, we can't put the lock inside a cgroup itself; we
> use signal_struct instead.

Not entirely, though that's an additional restriction... The reason it's
in signal_struct: signal_struct is per-threadgroup and has exactly the
lifetime rules we want. Putting the lock in task_struct and taking
current->group_leader->signal... seems like it would also work, but
introduces cacheline conflicts that weren't there before, since fork
doesn't look at group_leader (but it does bump signal's count).
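
Concretely, the fork path can take the rwsem straight off
current->signal, never touching group_leader; a sketch of the read side
(field name as in the patch description, conditions as above):

  if (clone_flags & CLONE_THREAD)
  	down_read(&current->signal->threadgroup_fork_lock);
  /* ... the bulk of copy_process() runs read-locked ... */
  if (clone_flags & CLONE_THREAD)
  	up_read(&current->signal->threadgroup_fork_lock);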

> By the way, IMHO, hiding the lock inside cgroup_fork() and
> cgroup_post_fork() doesn't seem like a good idea. How about code like
> this?
> 
>   read_lock_thread_clone(current);
>   cgroup_fork();
>   .....
>   cgroup_post_fork();
>   read_unlock_thread_clone(current);
> 
> We may then have a chance to move these locks to a better position if
> cgroup is the only user.

I avoided that out of a desire to change fork.c as little as possible,
but it does look better than what I've got. Those two functions should
live in fork.c under #ifdef CONFIG_CGROUPS, as sketched below.
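
Roughly like this, using your names (a sketch only; whether they end up
as static inlines in a header instead is a detail):

  #ifdef CONFIG_CGROUPS
  static inline void read_lock_thread_clone(struct task_struct *tsk)
  {
  	down_read(&tsk->signal->threadgroup_fork_lock);
  }

  static inline void read_unlock_thread_clone(struct task_struct *tsk)
  {
  	up_read(&tsk->signal->threadgroup_fork_lock);
  }
  #else
  /* !CONFIG_CGROUPS: no-op stubs so callers need no ifdefs */
  static inline void read_lock_thread_clone(struct task_struct *tsk) {}
  static inline void read_unlock_thread_clone(struct task_struct *tsk) {}
  #endif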

> 
> Thanks,
> -Kame

Thanks,
-- Ben
