Date:	Thu, 8 Sep 2011 14:58:05 -0400
From:	Ben Blum <bblum@...rew.cmu.edu>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Ben Blum <bblum@...rew.cmu.edu>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, fweisbec@...il.com, neilb@...e.de,
	paul@...lmenage.org, paulmck@...ux.vnet.ibm.com
Subject: Re: + cgroups-more-safe-tasklist-locking-in-cgroup_attach_proc.patch added to -mm tree

On Thu, Sep 08, 2011 at 07:35:59PM +0200, Oleg Nesterov wrote:
> On 09/07, Ben Blum wrote:
> >
> > On Fri, Sep 02, 2011 at 05:55:34PM +0200, Oleg Nesterov wrote:
> > > On 09/02, Ben Blum wrote:
> > > >
> > > > But I don't think the check becomes pointless? If a sub-thread execs
> > > > right before read_lock(&tasklist_lock) (but after the find_task_by_vpid
> > > > in attach_task_by_pid), that causes the case that the comment refers to.
> > >
> > > How so? The comment says:
> > >
> > > 	* a race with de_thread from another thread's exec() may strip
> > > 	* us of our leadership, making while_each_thread unsafe
> > >
> > > This is not true.
> >
> > Sorry, the comment is unclear.
> 
> No, the comment is clear. In fact it was me who pointed out we can't
> do while_each_thread() blindly. And now I have tried to confuse you ;)
> 
> So, sorry for noise, and thanks for correcting me. Somehow I forgot
> this is not safe even under tasklist.
> 
> Partly I was confused because I was thinking about the patch I suggested,
> if we use ->siglock we are safe. If lock_task_sighand(task) succeeds,
> this task should be on list.
> 
> Anyway, I was wrong, sorry.
> 
> Oleg.

All right, no problem.

As for the patch below (which is the same as it was last time?): did you
mean for Andrew to replace the old tasklist_lock patch with this one, or
should one of us rewrite this against it? Either way, it should have
something like the comment I proposed in the first thread.

Thanks,
Ben

> 
> --- x/kernel/cgroup.c
> +++ x/kernel/cgroup.c
> @@ -2000,6 +2000,7 @@ int cgroup_attach_proc(struct cgroup *cg
>  	/* threadgroup list cursor and array */
>  	struct task_struct *tsk;
>  	struct flex_array *group;
> +	unsigned long flags;
>  	/*
>  	 * we need to make sure we have css_sets for all the tasks we're
>  	 * going to move -before- we actually start moving them, so that in
> @@ -2027,19 +2028,10 @@ int cgroup_attach_proc(struct cgroup *cg
>  		goto out_free_group_list;
>  
>  	/* prevent changes to the threadgroup list while we take a snapshot. */
> -	rcu_read_lock();
> -	if (!thread_group_leader(leader)) {
> -		/*
> -		 * a race with de_thread from another thread's exec() may strip
> -		 * us of our leadership, making while_each_thread unsafe to use
> -		 * on this task. if this happens, there is no choice but to
> -		 * throw this task away and try again (from cgroup_procs_write);
> -		 * this is "double-double-toil-and-trouble-check locking".
> -		 */
> -		rcu_read_unlock();
> -		retval = -EAGAIN;
> +	retval = -EAGAIN;
> +	if (!lock_task_sighand(leader, &flags))
>  		goto out_free_group_list;
> -	}
> +
>  	/* take a reference on each task in the group to go in the array. */
>  	tsk = leader;
>  	i = 0;
> @@ -2055,9 +2047,9 @@ int cgroup_attach_proc(struct cgroup *cg
>  		BUG_ON(retval != 0);
>  		i++;
>  	} while_each_thread(leader, tsk);
> +	unlock_task_sighand(leader, &flags);
>  	/* remember the number of threads in the array for later. */
>  	group_size = i;
> -	rcu_read_unlock();
>  
>  	/*
>  	 * step 1: check that we can legitimately attach to the cgroup.
> 
> 
