Date:	Thu, 8 Jan 2009 15:12:45 -0500 (EST)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
cc:	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>, paulmck@...ux.vnet.ibm.com,
	Gregory Haskins <ghaskins@...ell.com>,
	Matthew Wilcox <matthew@....cx>,
	Andi Kleen <andi@...stfloor.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-btrfs <linux-btrfs@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Nick Piggin <npiggin@...e.de>,
	Peter Morreale <pmorreale@...ell.com>,
	Sven Dietrich <SDietrich@...ell.com>
Subject: Re: [PATCH -v7][RFC]: mutex: implement adaptive spinning

On Thu, 8 Jan 2009, Linus Torvalds wrote:
> 
> The problem? Setting "lock->count" to 0. That will mean that the next 
> "mutex_unlock()" will not necessarily enter the slowpath at all, and won't 
> necessarily wake things up like it should.
> 
> Normally we set lock->count to 0 after getting the lock, and only _inside_ 
> the spinlock, and then we check the waiters after that. The comment says 
> it all:
> 
>         /* set it to 0 if there are no waiters left: */
>         if (likely(list_empty(&lock->wait_list)))
>                 atomic_set(&lock->count, 0);
> 
> and the spinning case violates that rule. 

The difference is that there the count is set to -1 (in the unpatched 
code), and we have to decide whether to change it to 0. Changing it 
from -1 to 0 requires the protection of the spinlock.

In the spin loop, we only change the count from 1 to 0, which is the 
same transition the fastpath makes, so it should not cause any 
problems.
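
To make the distinction concrete, here is a rough sketch of the two 
transitions (my paraphrase, not the exact -v7 code; owner_running() is 
just an illustrative name for the patch's spin condition):

	/*
	 * Spinner: only ever attempts the 1 -> 0 transition, exactly
	 * like the fastpath.  A count of -1 (sleepers queued) is never
	 * touched here, so no wakeup can be lost.
	 */
	while (owner_running(lock, owner)) {
		if (atomic_cmpxchg(&lock->count, 1, 0) == 1)
			goto got_the_lock;
		cpu_relax();
	}

	/*
	 * Sleeper: the -1 -> 0 transition happens only while holding
	 * lock->wait_lock, after checking for remaining waiters:
	 */
	spin_lock_irqsave(&lock->wait_lock, flags);
	if (list_empty(&lock->wait_list))
		atomic_set(&lock->count, 0);
	spin_unlock_irqrestore(&lock->wait_lock, flags);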


> 
> Now, the spinning case only sets it to 0 if we saw it set to 1, so I think 
> the argument can go something like:

Yep.

> 
>  - if it was 1, and we _have_ seen contention, then we know that at 
>    least _one_ person that set it to 1 must have gone through the unlock 
>    slowpath (ie it wasn't just a normal "locked increment").

Correct.

> 
>  - So even if _we_ (in the spinning part of stealing that lock) didn't 
>    wake the waiter up, the slowpath wakeup case (that did _not_ wake 
>    us up, since we were spinning and hadn't added ourselves to the wait 
>    list) must have done so.

Agreed.
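
For reference, the unlock slowpath does that wakeup with the count 
already set back to 1, all under wait_lock. Roughly (simplified; debug 
and arch details elided):

	spin_lock_irqsave(&lock->wait_lock, flags);
	atomic_set(&lock->count, 1);
	if (!list_empty(&lock->wait_list)) {
		/* wake the first waiter; it will retry the lock */
		struct mutex_waiter *waiter =
			list_entry(lock->wait_list.next,
				   struct mutex_waiter, list);

		wake_up_process(waiter->task);
	}
	spin_unlock_irqrestore(&lock->wait_lock, flags);

So even if a spinner steals the lock with the 1 -> 0 cmpxchg first, the 
woken waiter just fails its retry and goes back to sleep, still queued 
on the wait_list; every sleep still has a matching wakeup.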

> 
> So maybe it's all really really safe, and we're still guaranteed to have 
> as many wakeups as we had go-to-sleeps. But I have to admit that my brain 
> hurts a bit from worrying about this.

I do not think the previous bug that Chris showed had anything to do 
with the actual sleepers. The sleeper side of the slowpath never sets 
the count to 1 (only an unlock does), so it should not affect the 
spinners. We can think of the spinners as not truly contending for the 
lock; they are effectively just doing:

	while (cond) {
		if (mutex_trylock(lock))
			goto got_the_lock;
	}

> 
> Sleeping mutexes are not ever simple. 

Now you see why in -rt we did all this in the slow path ;-)

-- Steve
