Message-ID: <CA+55aFxYRDZvisB7iZ5a-bcp5_2pkvcC9Opk6=yJtjfK57EWTw@mail.gmail.com>
Date:	Mon, 6 May 2013 12:32:58 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"Frédéric Weisbecker" <fweisbec@...il.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [GIT PULL, RFC] Full dynticks, CONFIG_NO_HZ_FULL feature

On Mon, May 6, 2013 at 8:35 AM, Paul E. McKenney
<paulmck@...ux.vnet.ibm.com> wrote:
>>
>> I think Linus might have referred to my 'future plans' entry:

Indeed. I feel that HPC is entirely irrelevant to just about anybody,
and *especially* HPC benchmarks. In real life, even HPC doesn't tend
to show the nice behavior its much-touted benchmarks do.

So as long as NOHZ is only for HPC-style loads, quite frankly, I
don't feel it is worth it. The _only_ thing that makes it worth it is
the "future plans" part, where it would actually help real loads.
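
For reference, the feature in question is adaptive ticks: with
CONFIG_NO_HZ_FULL=y and a nohz_full= boot parameter, the periodic tick
is stopped on a CPU that has exactly one runnable task. A minimal
sketch of the kind of setup that targets, assuming the kernel was
booted with something like nohz_full=3 (CPU 3 is just an example
value):

    /*
     * Sketch only: pin a CPU-bound task to an adaptive-tick CPU.
     * Assumes CONFIG_NO_HZ_FULL=y and a boot line containing e.g.
     * nohz_full=3; CPU 3 is an arbitrary example.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            cpu_set_t set;
            volatile unsigned long work = 0;

            CPU_ZERO(&set);
            CPU_SET(3, &set);               /* the nohz_full CPU */
            if (sched_setaffinity(0, sizeof(set), &set)) {
                    perror("sched_setaffinity");
                    return 1;
            }

            /*
             * With only this task runnable on CPU 3, the tick should
             * stop and the loop below runs without the periodic
             * interruption that the HZ=1000 numbers below measure.
             */
            for (;;)
                    work++;                 /* stand-in for real work */
    }

Whether the tick actually goes quiet can be eyeballed on x86 by
watching the per-CPU local-timer counts (the LOC line) in
/proc/interrupts while the loop runs.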

>>
>> Interesting that HZ=1000 caused 8% overhead there. On a regular x86 server
>> PC I've measured the HZ=1000 overhead to pure user-space execution to be
>> around 1% (sometimes a bit less, sometimes a bit more).
>>
>> But even 1% is worth it.
>
> I believe that the difference is tick skew

Quite possibly it is also virtualization.

The VM people are the ones who complain the loudest about how certain
things make their performance go down the toilet. Interrupts tend to
be high on that list, and unless you have hardware support for
virtual timer interrupts, I can easily see a factor-of-four cost or
more.
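
To make the interrupt-cost point concrete: one rough way to see what
interruptions cost pure user-space execution, on bare metal or inside
a guest, is to spin reading the clock and record any gap much larger
than a loop iteration; gaps of that size are almost always interrupts
or preemptions. A sketch, where the 5 us threshold and the iteration
count are arbitrary picks:

    /*
     * Sketch only: detect interruptions of a user-space busy loop by
     * looking for unusually large gaps between clock readings.  The
     * threshold and loop count are arbitrary example values.
     */
    #include <stdio.h>
    #include <time.h>

    static long long now_ns(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
            long long prev = now_ns(), lost = 0;
            long gaps = 0;

            for (long i = 0; i < 200000000L; i++) {
                    long long t = now_ns();

                    if (t - prev > 5000) {  /* > 5 us: likely an interrupt */
                            lost += t - prev;
                            gaps++;
                    }
                    prev = t;
            }
            printf("%ld gaps, %lld ns lost to interruptions\n", gaps, lost);
            return 0;
    }

Run on the host and then inside a guest without hardware timer
virtualization, the difference in time lost per gap is the kind of
factor being talked about here.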

And the VM people then flail around wildly, always blaming everybody
else. *Anything* other than the VM overhead itself.

It also depends a lot on the architecture. The ia64 people had much
bigger problems with the timer interrupt than x86 ever did. Again,
they saw this mainly on the HPC benchmarks, because those benchmarks
were carefully tuned for huge-page support and were doing largely
irrelevant things like big LINPACK runs, so the timer irq ended up
blowing their carefully tuned caches and TLBs out.

Never mind that nobody sane ever *cared*. Afaik, no real HPC load has
anything like that behavior, much less any other kind of load. But
they had numbers to prove how bad it was, and it was a load with very
stable numbers.
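
For what it's worth, that cache/TLB eviction is measurable from
user space. A rough sketch, assuming the hardware exposes the generic
dTLB-miss cache event described in perf_event_open(2): count misses
around a tuned compute loop, then compare the numbers with the tick
running versus on a tickless CPU. The array size and pass count below
are arbitrary example values.

    /*
     * Sketch only: count dTLB read misses around a compute loop using
     * perf_event_open(2), so the same loop can be compared with the
     * tick running and on a tickless CPU.
     */
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    static double a[1 << 22];               /* ~32 MB working set */

    int main(void)
    {
            struct perf_event_attr attr;
            uint64_t misses = 0;
            double sum = 0;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HW_CACHE;
            attr.config = PERF_COUNT_HW_CACHE_DTLB |
                          (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                          (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
            attr.disabled = 1;

            fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

            for (int pass = 0; pass < 50; pass++)
                    for (size_t i = 0; i < sizeof(a) / sizeof(a[0]); i++)
                            sum += a[i];    /* stand-in for the tuned kernel */

            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
            if (read(fd, &misses, sizeof(misses)) != sizeof(misses))
                    return 1;
            printf("dTLB read misses: %llu  (sum %g)\n",
                   (unsigned long long)misses, sum);
            return 0;
    }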

Combine the two (bad HPC benchmarks and VM), and you can make an
argument for just about anything. And people have.

I am personally less than impressed with some of the benchmarks I've
seen, in case that wasn't clear.

            Linus