Message-ID: <20160824175057.GA5032@linux.intel.com>
Date:   Wed, 24 Aug 2016 10:50:57 -0700
From:   Tim Chen <tim.c.chen@...ux.intel.com>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        mingo@...hat.com, tglx@...utronix.de, hpa@...or.com,
        rjw@...ysocki.net, peterz@...radead.org, x86@...nel.org,
        bp@...e.de, sudeep.holla@....com, ak@...ux.intel.com,
        linux-acpi@...r.kernel.org, linux-pm@...r.kernel.org,
        alexey.klimov@....com, viresh.kumar@...aro.org,
        akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
        lenb@...nel.org, paul.gortmaker@...driver.com, jpoimboe@...hat.com,
        mcgrof@...nel.org, jgross@...e.com, robert.moore@...el.com,
        dvyukov@...gle.com, jeyu@...hat.com,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH 04/11] sched,x86: Enable Turbo Boost Max Technology

On Wed, Aug 24, 2016 at 12:18:53PM +0200, Ingo Molnar wrote:
> 
> * Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com> wrote:
> 
> > From: Tim Chen <tim.c.chen@...ux.intel.com>
> > 
> > On some Intel processors, certain cores can be boosted to a higher
> > turbo frequency than the other cores on the same die.  We therefore
> > prefer to run processes on those cores rather than on the
> > lower-frequency ones for extra performance.
> > 
> > We extend the asym packing feature in the scheduler to support
> > packing tasks onto the higher-frequency cores at the core sched
> > domain level.
> > 
> > We set up a core priority metric to abstract the core preferences
> > based on the maximum boost frequency.  The priority is instantiated
> > such that a core with higher priority is favored over a core with
> > lower priority when making scheduling decisions using ASYM_PACKING.
> > SMT threads with higher thread numbers are given a discounted
> > priority so we will not try to pack tasks onto all the threads of a
> > favored core before using other CPU cores.  The CPU with the highest
> > priority in a sched_group is recorded in sched_group->asym_prefer_cpu
> > during initialization to save lookups during load balancing.
> > 
> > A sysctl variable /proc/sys/kernel/sched_itmt_enabled is provided so
> > that scheduling based on the favored cores can be turned on or off at
> > run time.
> 
> > +/*
> > + * Boolean to control whether we want to move processes to CPUs
> > + * capable of a higher turbo frequency, on systems supporting Intel
> > + * Turbo Boost Max Technology 3.0.
> > + *
> > + * It can be set via /proc/sys/kernel/sched_itmt_enabled
> > + */
> > +unsigned int __read_mostly sysctl_sched_itmt_enabled = 0;
> 
> Ugh, no.
> 
> We don't add features to the scheduler in the hope that they might or might not 
> help. We either enable a new feature by default (and make damn sure it helps!),
> or don't add the feature at all.
> 
> Thanks,
> 
> 	Ingo
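
For reference, a minimal sketch of the priority scheme the quoted
commit message describes (the helper names and the exact discount
formula below are illustrative assumptions, not the patch's actual
interface):

	/*
	 * Illustrative sketch only: each CPU's priority is derived from
	 * its core's maximum turbo frequency, and SMT siblings with
	 * higher thread numbers get a discounted priority so a favored
	 * core's extra threads are not filled before other cores get
	 * used.
	 */
	static int itmt_cpu_priority(int max_turbo_freq_khz, int smt_id)
	{
		return max_turbo_freq_khz / (smt_id + 1);
	}

	/* ASYM_PACKING then favors the CPU with the higher priority. */
	static bool asym_prefer(int prio_a, int prio_b)
	{
		return prio_a > prio_b;
	}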

Ingo,

This feature is a clear benefit for client machines; the benefit is
less clear on servers.

This feature is most beneficial to single-threaded workloads running on
a single socket that operates mostly in Turbo mode.  Client platforms
like the Broadwell High End Desktop are the first to support it.
Enabling this feature by default on such platforms will be a win, as
they run single-threaded workloads much of the time (10%-15%
performance upside).

On the other hand, a heavily loaded server that rarely operates in Turbo
mode will benefit much less from this feature.  There is some overhead
incurred by migrating load to the favored cores.  Some server folks
have asked us to be cautious here and not to turn on ITMT scheduling
by default.  Even so, when the server is lightly loaded, this feature
can still be a win.  That said, this is forward-looking, as we don't
have any server with this feature today.

So if we take the approach of enabling this feature by default only on
single-node systems (using that as the criterion for a client
platform), would that seem reasonable to you?
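
For illustration, a rough sketch of how such a default could be chosen
(the hook point, the helper name, and the use of num_online_nodes() as
the single-node test are assumptions for discussion, not a finished
patch):

	#include <linux/nodemask.h>	/* num_online_nodes() */

	extern unsigned int sysctl_sched_itmt_enabled;

	/*
	 * Sketch only: default-enable ITMT scheduling when the machine
	 * has a single NUMA node, treating node count as a proxy for a
	 * client platform.  Multi-node (server) systems keep it off
	 * unless the admin enables it through
	 * /proc/sys/kernel/sched_itmt_enabled.
	 */
	static void __init sched_itmt_set_default(void)
	{
		if (num_online_nodes() == 1)
			sysctl_sched_itmt_enabled = 1;
	}

Either way, the knob can still be flipped at run time, so the default
only changes the out-of-the-box behavior.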

Thanks.

Tim
