Message-ID: <511BE4A3.8050607@redhat.com>
Date:	Wed, 13 Feb 2013 14:08:19 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	Ingo Molnar <mingo@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, rostedt@...dmiss.org,
	aquini@...hat.com, Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Michel Lespinasse <walken@...gle.com>,
	linux-tip-commits@...r.kernel.org
Subject: Re: [tip:core/locking] x86/smp: Move waiting on contended ticket
 lock out of line

On 02/13/2013 11:20 AM, Linus Torvalds wrote:
> On Wed, Feb 13, 2013 at 4:06 AM, tip-bot for Rik van Riel
> <riel@...hat.com> wrote:
>>
>> x86/smp: Move waiting on contended ticket lock out of line
>>
>> Moving the wait loop for contended locks to its own function
>> allows us to add things to that wait loop, without growing the
>> size of the kernel text appreciably.
>
> Did anybody actually look at the code generation of this?

Good catch.

This looks like something that may be fixable, though I
do not know whether it actually matters. Adding an unlikely()
annotation to the if condition where we call into the
contention path does seem to clean up the generated code
a little bit...
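To make the above concrete, here is a userspace sketch of the idea,
assuming a simplified ticket lock; the struct layout, field names and
function names are illustrative, not the kernel's actual
arch_spinlock_t or the patch under discussion:

```c
#include <stdatomic.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical simplified ticket lock: 'head' is the ticket currently
 * being served, 'tail' is the next ticket to hand out. */
struct ticket_lock {
	atomic_uint head;
	atomic_uint tail;
};

/* Out-of-line slow path: spin until our ticket comes up. Keeping this
 * in its own non-inlined function keeps the fast path small and gives
 * a single place to later add things (such as backoff) to the wait
 * loop without growing the inlined lock sites. */
__attribute__((noinline))
static void ticket_wait(struct ticket_lock *lock, unsigned int ticket)
{
	while (atomic_load_explicit(&lock->head,
				    memory_order_acquire) != ticket)
		; /* a real kernel would cpu_relax() here */
}

static void ticket_lock_acquire(struct ticket_lock *lock)
{
	unsigned int ticket = atomic_fetch_add_explicit(
		&lock->tail, 1, memory_order_relaxed);

	/* unlikely(): tell the compiler the contended branch is cold,
	 * so the uncontended path stays straight-line code. */
	if (unlikely(atomic_load_explicit(&lock->head,
					  memory_order_acquire) != ticket))
		ticket_wait(lock, ticket);
}

static void ticket_lock_release(struct ticket_lock *lock)
{
	atomic_fetch_add_explicit(&lock->head, 1, memory_order_release);
}
```

With the hint, the fall-through path of the branch is the uncontended
case, which is what most lock acquisitions hit.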

> This is apparently for the auto-tuning, which came with absolutely no
> performance numbers (except for the *regressions* it caused), and
> which is something we have actively *avoided* in the past, because
> back-off is a f*cking idiotic thing, and the only real fix for
> contended spinlocks is to try to avoid the contention and fix the
> caller to do something smarter to begin with.
>
> In other words, the whole f*cking thing looks incredibly broken. At
> least give some good explanations for why crap like this is needed,
> instead of just implementing backoff without even numbers for real
> loads. And no, don't bother to give numbers for pointless benchmarks.
> It's easy to get contention on a benchmark, but spinlock backoff is
> only remotely interesting on real loads.

Lock contention falls into two categories. One is contention
on resources that are used inside the kernel, which may be
fixable by changing the data and the code.

The second is lock contention driven by external factors,
like userspace processes all trying to access the same file,
or grab the same semaphore. Not all of these cases may be
fixable on the kernel side.

A further complication is that these kinds of performance
issues often get discovered on production systems, which
are stuck on a particular kernel and cannot introduce
drastic changes.

The spinlock backoff code prevents these last cases from
experiencing large performance regressions when the hardware
is upgraded.
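As a rough sketch of what proportional backoff means here (this is an
illustration of the general technique, not the actual patch series;
the delay parameter and names are hypothetical): a waiter that is N
tickets away from the head pauses roughly N * delay iterations between
reloads, so distant waiters poll the lock's cache line less often and
generate less coherence traffic.

```c
#include <stdatomic.h>

struct ticket_lock {
	atomic_uint head; /* ticket currently being served */
	atomic_uint tail; /* next ticket to hand out */
};

static void ticket_wait_backoff(struct ticket_lock *lock,
				unsigned int ticket,
				unsigned int delay /* tuned per lock */)
{
	for (;;) {
		unsigned int head = atomic_load_explicit(
			&lock->head, memory_order_acquire);
		if (head == ticket)
			return;

		/* Pause proportionally to our distance from the front
		 * of the queue before touching the cache line again. */
		unsigned int waiters_ahead = ticket - head;
		for (unsigned int i = 0; i < waiters_ahead * delay; i++)
			__asm__ __volatile__("" ::: "memory");
			/* cpu_relax() in a real kernel */
	}
}
```

The auto-tuning in the series adjusts the per-lock delay at runtime;
this sketch just takes it as a parameter.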

None of the scalable locking systems magically make things
scale. All they do is prevent catastrophic performance drops
when moving from N to N+x CPUs, allowing user systems to
continue working while kernel developers address the actual
underlying scalability issues.

As a car analogy, think of this not as an accelerator, but
as an airbag. Spinlock backoff (or other scalable locking
code) exists to keep things from going horribly wrong when
we hit a scalability wall.

Does that make more sense?

-- 
All rights reversed
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
