Message-ID: <20140609154114.20585056@gandalf.local.home>
Date:	Mon, 9 Jun 2014 15:41:14 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Oleg Nesterov <oleg@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>,
	Clark Williams <williams@...hat.com>
Subject: Re: safety of *mutex_unlock() (Was: [BUG] signal: sighand
 unprotected when accessed by /proc)

On Mon, 9 Jun 2014 11:51:09 -0700
Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Mon, Jun 9, 2014 at 11:29 AM, Steven Rostedt <rostedt@...dmis.org> wrote:
> >>
> >> And once again, note that the normal mutex is already unsafe (unless I missed
> >> something).
> >
> > Is it unsafe?
> >
> > This thread was started because of a bug we triggered in -rt, which
> > ended up being a change specific to -rt that modified the way slub
> > handled SLAB_DESTROY_BY_RCU. What else was wrong with it?
> 
> There's a different issue with freeing of mutexes, which is not a bug,
> but "by design". Namely that mutexes aren't truly "atomic". They are
> complex data structures, and they have issues that a spinlock does not
> have.
> 
> When unlocking a mutex, the thread doing the unlocking will still
> touch the mutex itself _after_ another thread could already have
> successfully acquired the mutex. This is not a problem in any normal
> use, since all of this is perfectly coherent in general, but it means
> that code sequences like:
> 
>   mutex_lock(mem->mutex);
>   kill_it = !--mem->refcount;
>   mutex_unlock(mem->mutex);
>   if (kill_it)
>      free(mem);

Oh this again. I remember you having a discussion about it in the past.
I'm trying to wrap my head around it so let me make sure I understand
the spinlock (working) case, and then a bad mutex case.

For this to work, you have some item, let's say on a list. When it is
created, the refcount is 1 and it's added to the list. To remove it,
you remove it from the list and dec the refcount, and if it is zero
then you can free it. The issue is that you don't know what may have
seen it while it's on the list, and the last one to dec it to zero
frees it. Is this a pattern that's actually used?

Because spinlocks are atomic, this is all fine, but if you have a
mutex, then this is where you can have issues. I think rtmutex has an
issue with it too. Specifically in the slow_unlock case:

	if (!rt_mutex_has_waiters(lock)) {
		lock->owner = NULL;
		raw_spin_unlock(&lock->wait_lock);
		return;
	}

That is, you can have:


	CPU0				CPU1
	----				----
				[refcount = 1]
				read foo from list
				[refcount = 2]
  [refcount 2]
  remove foo from list

  mutex_lock(foo->mutex);
  kill_it = !--foo->refcount;
  [refcount = 1]
  [kill_it == 0]
  mutex_unlock(foo->mutex)
    lock->owner = NULL;

				mutex_lock()
				 [ owner == NULL, fast path!]
				kill_it = !--foo->refcount;
				 [refcount = 0]
				 [kill_it = 1]
				mutex_unlock()
				 [ owner == current, fast path!]

				if (kill_it)
				   free(foo);

  raw_spin_unlock(&lock->wait_lock);
   [access freed foo's wait_lock, BUG!]

Is this the problem? I'm just trying to visualize it, and maybe even
visualize it for others that are reading this thread.

Thanks,

-- Steve
