Message-ID: <alpine.LFD.1.10.0805071032280.3024@woody.linux-foundation.org>
Date:	Wed, 7 May 2008 10:47:20 -0700 (PDT)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Ingo Molnar <mingo@...e.hu>
cc:	Matthew Wilcox <matthew@....cx>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"J. Bruce Fields" <bfields@...i.umich.edu>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Alexander Viro <viro@....linux.org.uk>,
	linux-fsdevel@...r.kernel.org
Subject: Re: AIM7 40% regression with 2.6.26-rc1



On Wed, 7 May 2008, Linus Torvalds wrote:
> 
> All the "normal" mutex code use fine-grained locking, so even if you slow 
> down the fast path, that won't cause the same kind of fastpath->slowpath 
> increase.

Put another way: let's say that the "good fastpath" is basically a single 
locked instruction - ~12 cycles on AMD, ~35 on Core 2. That's the 
no-bouncing, no-contention case.
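
To make that concrete, that good fastpath is basically nothing but one 
atomic op on the lock word. Something like this (an illustrative sketch, 
not the actual arch-specific code - "slow_path" is a made-up name):

	/* sketch of a mutex-style fastpath: count 1 = free, 0 = held */
	static inline void fast_lock(atomic_t *count)
	{
		if (likely(atomic_dec_return(count) >= 0))
			return;			/* one locked instruction, done */
		slow_path(count);		/* hypothetical out-of-line path */
	}

All the cycles quoted above are in that single locked decrement.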

Doing it with debugging (call overhead, spinlocks, local irq saving etc) 
will probably easily triple it or more, but we're not changing anything 
else. There's no "downstream" effect: the behaviour itself doesn't change. 
It doesn't get more bouncing, it doesn't start sleeping.
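
The debug/out-of-line version is roughly of this shape (hand-waving from 
memory - "struct slow_sem" and "wait_for_it" are just illustrative names, 
not the real kernel code):

	struct slow_sem {
		spinlock_t	lock;
		int		count;
	};

	void heavy_lock(struct slow_sem *sem)
	{
		unsigned long flags;

		/* call overhead + irq save + a spinlock round-trip */
		spin_lock_irqsave(&sem->lock, flags);
		if (likely(sem->count > 0))
			sem->count--;		/* still the uncontended case */
		else
			wait_for_it(sem);	/* hypothetical contended path */
		spin_unlock_irqrestore(&sem->lock, flags);
	}

That's easily three or more times the cost of the single locked 
instruction, but it's still the exact same *pattern*: no extra bouncing, 
no sleeping.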

But what happens when the lock has the *potential* for conflicts is a 
different story.

There, a "longish pause + fast lock + short average code sequence + fast 
unlock" is quite likely to stay uncontended for a fair amount of time, and 
while it will be much slower than the no-contention-at-all case (because 
you do get a pretty likely cacheline event at the "fast lock" part), with 
a fairly low number of CPU's and a long enough pause, you *also* easily 
get into a pattern where the thing that got the lock will likely also get 
to unlock without dropping the cacheline.

So far so good.

But that basically depends on the fact that "lock + work + unlock" is 
_much_ shorter than the "longish pause" in between, so that even if you 
have <n> CPU's all doing the same thing, their pauses between the locked 
section are still bigger than <n> times that short time.
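
To put some made-up but plausible numbers on that condition: if the 
locked section ("lock + work + unlock") is ~100 cycles and each CPU then 
goes away and does ~10000 cycles of its own work before coming back, you 
can have on the order of 10000/100 = 100 CPUs hammering that lock before 
the critical sections start overlapping. Shrink the pause to ~1000 cycles 
(or grow the locked section) and that headroom drops to ~10 CPUs - and 
past that point every acquisition sees contention.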

Once that is no longer true, you now start to bounce both at the lock 
*and* the unlock, and now that whole sequence got likely ten times slower. 
*AND* because it now actually has real contention, it actually got even 
worse: if the lock is a sleeping one, you get *another* order of magnitude 
just because you now started doing scheduling overhead too!

So the thing is, it just breaks down very badly. A spinlock that gets 
contention probably gets ten times slower due to bouncing the cacheline. A 
semaphore that gets contention probably gets a *hundred* times slower, or 
more.
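
Very rough orders of magnitude, just to show the scale (not measured 
numbers):

	uncontended locked op:		~12-35 cycles
	+ cacheline ping-pong at both
	  the lock *and* the unlock:	a few hundred cycles each
	+ sleep + wakeup + reschedule:	a microsecond or more, ie
					thousands of cycles

which is why a contended spinlock is "only" ~10x worse, while a 
contended sleeping lock can easily be ~100x worse.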

And so my bet is that both the old and the new semaphores had the same bad 
break-down situation, but the new semaphores just make it a lot easier to 
trigger because they are at least three times costlier than the old ones, 
so you hit the bad behaviour with much lower loads (or with fewer CPU's).

But spinlocks really do behave much better when contended, because at 
least they don't get the even bigger hit of also hitting the scheduler. So 
the old semaphores would have behaved badly too *eventually*, they just 
needed a more extreme case to show that bad behavior.

		Linus
