Message-ID: <20100618193403.GA17314@redhat.com>
Date:	Fri, 18 Jun 2010 21:34:03 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Don Zickus <dzickus@...hat.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Ingo Molnar <mingo@...e.hu>,
	Jerome Marchand <jmarchan@...hat.com>,
	Mandeep Singh Baines <msb@...gle.com>,
	Roland McGrath <roland@...hat.com>,
	linux-kernel@...r.kernel.org, stable@...nel.org,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: while_each_thread() under rcu_read_lock() is broken?

(add cc's)

Hmm. Once I sent this patch, I suddenly realized with horror that
while_each_thread() is NOT safe under rcu_read_lock(). Both
do_each_thread/while_each_thread and a bare do/while_each_thread() loop
can race with exec().

Yes, it is safe to do next_thread() or next_task(). But:

	#define while_each_thread(g, t) \
		while ((t = next_thread(t)) != g)

suppose that t is not the group leader, it execs, and thus does de_thread()
and then release_task(g). After that next_thread(t) returns t, not g, and
the loop will never stop.
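
To spell out the interleaving I have in mind (a rough sketch, assuming I am
reading de_thread()/release_task() right):

	/*
	 * CPU0 (reader):  t = next_thread(g);   // t is a sub-thread of g
	 * CPU1 (t execs): de_thread();          // t becomes the new leader
	 * CPU1:           release_task(g);      // the old leader g is unhashed
	 * CPU0:           t = next_thread(t);   // t is alone now, returns t
	 * CPU0:           t != g                // always true, loops forever
	 */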

I _really_ hope I missed something; I will recheck tomorrow with a fresh
head. Still, I'd like to share my concerns...

If I am right, we can probably fix this with something like

	#define while_each_thread(g, t) \
		while ((t = next_thread(t)) != g && pid_alive(g))

[we can't simply do while (!thread_group_leader(t = next_thread(t)))].
But this needs barriers, and we should validate the callers anyway.
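
For context, the callers I mean are the plain lockless walkers, something
like this (a made-up example, just to show the shape we would have to
validate):

	rcu_read_lock();
	do_each_thread(g, t) {
		/* look at t; it can exit or exec at any point */
	} while_each_thread(g, t);
	rcu_read_unlock();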

Or, perhaps,

	#define XXX(t)	({					\
		struct task_struct *__prev = t;			\
		t = next_thread(t);				\
		t != g && t != __prev;				\
	})

	#define while_each_thread(g, t) \
		while (XXX(t))
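
That is, the hope (again, assuming I am not confused) is that once t is the
only thread left after exec, next_thread(t) returns t itself and the __prev
check stops the loop:

	/*
	 * after de_thread() + release_task(g) above:
	 *
	 *	__prev = t;
	 *	t = next_thread(t);	// t's ->thread_group points to itself
	 *	t != g			// true, g is gone
	 *	t != __prev		// false, the loop terminates
	 */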

Please tell me I am wrong!

Oleg.

On 06/18, Oleg Nesterov wrote:
>
> check_hung_uninterruptible_tasks()->rcu_lock_break() introduced by
> "softlockup: check all tasks in hung_task" commit ce9dbe24 looks
> absolutely wrong.
>
> 	- rcu_lock_break() does put_task_struct(). If the task has exited
> 	  it is not safe to even read its ->state, nothing protects this
> 	  task_struct.
>
> 	- The TASK_DEAD checks are wrong too. Contrary to the comment, we
> 	  can't use TASK_DEAD to check whether the task was unhashed. It can
> 	  be unhashed without TASK_DEAD, or it can be valid with TASK_DEAD.
>
> 	  For example, an autoreaping task can do release_task(current)
> 	  long before it sets TASK_DEAD in do_exit().
>
> 	  Or, a zombie task can have ->state == TASK_DEAD but release_task()
> 	  was not called, and in this case we must not break the loop.
>
> Change this code to check pid_alive() instead, and do this before we
> drop the reference to the task_struct.
>
> Signed-off-by: Oleg Nesterov <oleg@...hat.com>
> ---
>
>  kernel/hung_task.c |   11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
>
> --- 35-rc2/kernel/hung_task.c~CHT_FIX_RCU_LOCK_BREAK	2009-12-18 19:05:38.000000000 +0100
> +++ 35-rc2/kernel/hung_task.c	2010-06-18 20:06:11.000000000 +0200
> @@ -113,15 +113,20 @@ static void check_hung_task(struct task_
>   * For preemptible RCU it is sufficient to call rcu_read_unlock in order
>   * to exit the grace period. For classic RCU, a reschedule is required.
>   */
> -static void rcu_lock_break(struct task_struct *g, struct task_struct *t)
> +static bool rcu_lock_break(struct task_struct *g, struct task_struct *t)
>  {
> +	bool can_cont;
> +
>  	get_task_struct(g);
>  	get_task_struct(t);
>  	rcu_read_unlock();
>  	cond_resched();
>  	rcu_read_lock();
> +	can_cont = pid_alive(g) && pid_alive(t);
>  	put_task_struct(t);
>  	put_task_struct(g);
> +
> +	return can_cont;
>  }
>
>  /*
> @@ -148,9 +153,7 @@ static void check_hung_uninterruptible_t
>  			goto unlock;
>  		if (!--batch_count) {
>  			batch_count = HUNG_TASK_BATCHING;
> -			rcu_lock_break(g, t);
> -			/* Exit if t or g was unhashed during refresh. */
> -			if (t->state == TASK_DEAD || g->state == TASK_DEAD)
> +			if (!rcu_lock_break(g, t))
>  				goto unlock;
>  		}
>  		/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
