Message-ID: <alpine.DEB.2.20.1610262000410.5013@nanos>
Date:   Wed, 26 Oct 2016 20:09:54 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Tim Chen <tim.c.chen@...ux.intel.com>
cc:     Peter Zijlstra <peterz@...radead.org>, rjw@...ysocki.net,
        mingo@...hat.com, bp@...e.de, x86@...nel.org,
        linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-acpi@...r.kernel.org, jolsa@...hat.com,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
Subject: Re: [PATCH v6 5/9] x86/sysctl: Add sysctl for ITMT scheduling feature

On Wed, 26 Oct 2016, Tim Chen wrote:
> On Wed, 2016-10-26 at 13:24 +0200, Thomas Gleixner wrote:
> > > There were reservations on the multi-socket case of ITMT, maybe it would
> > > help to spell those out in great detail here. That is, have the comment
> > > explain the policy instead of simply stating what the code does (which
> > > is always bad comment policy, you can read the code just fine).
> > What is the objection for multi sockets? If it improves the behaviour then
> > why would this be a bad thing for multi sockets?
> 
> For multi-socket (server) systems, it is much more likely that multiple
> CPUs in a socket will be busy and not running in turbo mode. In that
> scenario the extra work of migrating the workload to the CPU with extra
> headroom will not pay off, since there is no headroom to use. I will
> update the comment to reflect this policy.
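
(For reference, the knob this patch adds is a runtime sysctl; a minimal
sketch of inspecting and toggling it, assuming the sched_itmt_enabled
name used in this series, would be:)

```shell
# Check whether ITMT-aware scheduling is currently enabled
# (assuming the sysctl name from this patch series; 1 = on, 0 = off).
cat /proc/sys/kernel/sched_itmt_enabled

# Disable it at runtime, e.g. on a loaded multi-socket server where
# the migration overhead is not expected to pay off.
echo 0 > /proc/sys/kernel/sched_itmt_enabled
```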

So on a single socket server system the extra work does not matter, right?
Don't tell me that single socket server systems are irrelevant. Intel is
actively promoting single socket CPUs, like Xeon D, for high density
servers...

Instead of handwaving arguments I prefer a proper analysis of what the
overhead is and why it is not a good thing for loaded servers in general.

Then, instead of slapping half-baked heuristics into the code, we should
sit down and think a bit harder about it.

Thanks,

	tglx
