Message-ID: <20140608130718.GA11129@redhat.com>
Date:	Sun, 8 Jun 2014 15:07:18 +0200
From:	Oleg Nesterov <oleg@...hat.com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Clark Williams <williams@...hat.com>
Subject: safety of *mutex_unlock() (Was: [BUG] signal: sighand unprotected
	when accessed by /proc)

On 06/06, Paul E. McKenney wrote:
>
> On Tue, Jun 03, 2014 at 10:01:25PM +0200, Oleg Nesterov wrote:
> >
> > I'll try to recheck rt_mutex_unlock() tomorrow. _Perhaps_ rcu_read_unlock()
> > should be shifted from lock_task_sighand() to unlock_task_sighand() to
> > ensure that rt_mutex_unlock() does nothing with this memory after it
> > makes another lock/unlock possible.
> >
> > But if we need this (currently I do not think so), this doesn't depend on
> > SLAB_DESTROY_BY_RCU. And, at first glance, in this case rcu_read_unlock_special()
> > might be wrong too.
>
> OK, I will bite...  What did I mess up in rcu_read_unlock_special()?
>
> This function does not report leaving the RCU read-side critical section
> until after its call to rt_mutex_unlock() has returned, so any RCU
> read-side critical sections in rt_mutex_unlock() will be respected.

Sorry for the confusion.

I only meant that afaics rcu_read_unlock_special() equally depends on the
fact that rt_mutex_unlock() does nothing with "struct rt_mutex" after it
makes another rt_mutex_lock() + rt_mutex_unlock() possible; otherwise this
code is wrong (and unlock_task_sighand() would be wrong too).
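
Just to recall why this matters for rcu_read_unlock_special(): iirc the
boost mutex lives on rcu_boost()'s stack, so the picture is roughly this
(heavily simplified from memory, not the exact code):

	static int rcu_boost(struct rcu_node *rnp)
	{
		struct rt_mutex mtx;	/* lives on this stack frame */
		struct task_struct *t;
		unsigned long flags;
		...
		rt_mutex_init_proxy_locked(&mtx, t);
		t->rcu_boost_mutex = &mtx;
		raw_spin_unlock_irqrestore(&rnp->lock, flags);

		rt_mutex_lock(&mtx);	/* returns once the reader unlocks */
		rt_mutex_unlock(&mtx);
		...
		/* "mtx" is gone as soon as we return */
	}

	void rcu_read_unlock_special(struct task_struct *t)
	{
		struct rt_mutex *rbmp = NULL;
		...
		rbmp = t->rcu_boost_mutex;
		t->rcu_boost_mutex = NULL;
		...
		if (rbmp)
			rt_mutex_unlock(rbmp);	/* must not touch *rbmp after it
						   lets rcu_boost() proceed */
	}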

Just to simplify the discussion... suppose we add "atomic_t nr_slow_unlock"
into "struct rt_mutex" and change rt_mutex_slowunlock() to do
atomic_inc(&lock->nr_slow_unlock) after it drops ->wait_lock. Of course this
would be ugly, just for illustration.
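
IOW, something like this (purely hypothetical, only to make the point
concrete; the real rt_mutex_slowunlock() details are elided):

	static void __sched rt_mutex_slowunlock(struct rt_mutex *lock)
	{
		raw_spin_lock(&lock->wait_lock);
		...
		wakeup_next_waiter(lock);
		raw_spin_unlock(&lock->wait_lock);

		/*
		 * HYPOTHETICAL: at this point another thread can already
		 * lock/unlock/free this rt_mutex, so this write can hit
		 * freed or reused memory.
		 */
		atomic_inc(&lock->nr_slow_unlock);

		rt_mutex_adjust_prio(current);
	}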

In this case the atomic_inc() above can write to rcu_boost()'s stack after
that function has returned to its caller. And unlock_task_sighand() would be
wrong too: atomic_inc() could write to memory which was already returned to
the system, because the "unlock" path runs outside of the rcu-protected section.
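
To spell out the unlock_task_sighand() case (simplified; on -rt, iirc,
->siglock ends up being an rt_mutex, and ->sighand is SLAB_DESTROY_BY_RCU):

	struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
						   unsigned long *flags)
	{
		struct sighand_struct *sighand;

		for (;;) {
			local_irq_save(*flags);
			rcu_read_lock();
			sighand = rcu_dereference(tsk->sighand);
			...
			spin_lock(&sighand->siglock);
			if (likely(sighand == tsk->sighand)) {
				/* we return with ->siglock held but outside
				   of the rcu read-side critical section */
				rcu_read_unlock();
				break;
			}
			/* raced with de_thread() or __exit_signal(), retry */
			...
		}

		return sighand;
	}

	static inline void unlock_task_sighand(struct task_struct *tsk,
					       unsigned long *flags)
	{
		/* nothing else protects this sighand_struct */
		spin_unlock_irqrestore(&tsk->sighand->siglock, *flags);
	}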

But it seems to me that currently we are safe: rt_mutex_unlock() doesn't do
anything like this, and a concurrent rt_mutex_lock() must always take
->wait_lock too.


And while this is off-topic and I can easily be wrong, it seems that the
normal "struct mutex" is not safe in this respect. If nothing else, once
__mutex_unlock_common_slowpath()->__mutex_slowpath_needs_to_unlock() sets
lock->count = 1, a concurrent mutex_lock() can take and then release this
mutex before __mutex_unlock_common_slowpath() takes ->wait_lock.
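
The race, spelled out (assuming the generic slowpath; arch details differ):

	CPU 0 (mutex_unlock)                            CPU 1
	--------------------                            -----
	__mutex_unlock_common_slowpath:
	  atomic_set(&lock->count, 1);
	                                                mutex_lock();    /* fastpath */
	                                                ...
	                                                mutex_unlock();  /* fastpath */
	                                                kfree(object containing this mutex);
	  spin_lock_mutex(&lock->wait_lock, flags);     /* use-after-free */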

So _perhaps_ we should not rely on this property of rt_mutex "too much".

Oleg.
