Message-ID: <20130813075550.GS27162@twins.programming.kicks-ass.net>
Date:	Tue, 13 Aug 2013 09:55:50 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Oleg Nesterov <oleg@...hat.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
	Long Gao <gaolong@...inos.com.cn>,
	Al Viro <viro@...iv.linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH] sched: fix the theoretical signal_wake_up() vs
 schedule() race

On Mon, Aug 12, 2013 at 07:02:57PM +0200, Oleg Nesterov wrote:
> This is only theoretical, but after try_to_wake_up(p) was changed
> to check p->state under p->pi_lock, code like
> 
> 	__set_current_state(TASK_INTERRUPTIBLE);
> 	schedule();
> 
> can miss a signal. This is the special case of wait-for-condition:
> it relies on the try_to_wake_up()/schedule() interaction and thus
> does not need an mb() between __set_current_state() and the
> if (signal_pending()) check.
> 
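For context, the pattern in question is the classic open-coded wait
loop; a minimal sketch, where "cond" is a hypothetical wakeup condition
and the signal check is left to schedule() itself (signal_pending_state()
under rq->lock):

	for (;;) {
		__set_current_state(TASK_INTERRUPTIBLE);  /* STORE ->state */
		if (cond)                                 /* hypothetical  */
			break;
		schedule();          /* LOADs TIF_SIGPENDING under rq->lock */
	}
	__set_current_state(TASK_RUNNING);
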
> However, this __set_current_state() can move into the critical
> section protected by rq->lock. Now that try_to_wake_up() takes
> another lock, we need to ensure that the store can't be reordered
> with the "if (signal_pending(current))" check inside that section.
> 
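Roughly, the problematic interleaving (the caller's ->state STORE only
becoming visible after the TIF_SIGPENDING LOAD done under rq->lock)
would be something like:

	sleeper (CPU 0)				signal sender (CPU 1)
	---------------				---------------------
	raw_spin_lock_irq(&rq->lock);
	LOAD TIF_SIGPENDING == 0
						set_tsk_thread_flag(p, TIF_SIGPENDING);
						signal_wake_up() -> try_to_wake_up():
						  raw_spin_lock_irqsave(&p->pi_lock, flags);
						  LOAD p->state == TASK_RUNNING
						  -> nothing to wake up
	STORE p->state = TASK_INTERRUPTIBLE
	deactivate_task(rq, prev, ...);		/* sleeps with a signal pending */
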
> The patch is actually a one-liner: it simply adds smp_wmb() before
> spin_lock_irq(rq->lock). This is what try_to_wake_up() already
> does, for the same reason.
> 
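I.e., in __schedule() the change amounts to something like this (a
sketch, not the exact hunk):

	/*
	 * The caller stored TASK_INTERRUPTIBLE into ->state before calling
	 * us; order that STORE against the signal-pending LOAD done under
	 * rq->lock below.
	 */
	smp_mb__before_spinlock();
	raw_spin_lock_irq(&rq->lock);
	...
	if (signal_pending_state(prev->state, prev))
		prev->state = TASK_RUNNING;
	else
		deactivate_task(rq, prev, DEQUEUE_SLEEP);
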
> We turn this wmb() into a new helper, smp_mb__before_spinlock(),
> for better documentation and to allow architectures to change
> the default implementation.
> 
> While at it, kill smp_mb__after_lock(), it has no callers.
> 
> Perhaps we can also add smp_mb__before/after_spinunlock() for
> prepare_to_wait().
> 
> Signed-off-by: Oleg Nesterov <oleg@...hat.com>

Thanks!

> +/*
> + * Despite its name, it doesn't necessarily have to be a full barrier.
> + * It should only guarantee that a STORE before the critical section
> + * cannot be reordered with a LOAD inside this section.
> + * So the default implementation simply ensures that a STORE cannot
> + * move into the critical section; smp_wmb() should serialize it with
> + * another STORE done by spin_lock().
> + */
> +#ifndef smp_mb__before_spinlock
> +#define smp_mb__before_spinlock()	smp_wmb()
>  #endif

I would have expected mention of the ACQUIRE of the lock keeping the
LOAD inside the locked section.
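
I.e., the intended argument, roughly, collapsing the caller's store and
__schedule()'s lock/check into one place for illustration:

	__set_current_state(TASK_INTERRUPTIBLE);	/* STORE ->state           */
	smp_mb__before_spinlock();			/* default smp_wmb(): the  */
							/* STORE cannot pass the   */
							/* lock-word STORE below   */
	raw_spin_lock_irq(&rq->lock);			/* ACQUIRE: the LOAD below */
							/* stays inside the locked */
							/* section                 */
	if (signal_pending_state(prev->state, prev))	/* LOAD TIF_SIGPENDING     */
		...;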


