Message-Id: <20191206081654.GA22330@linux.vnet.ibm.com>
Date: Fri, 6 Dec 2019 13:46:54 +0530
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Rik van Riel <riel@...riel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <valentin.schneider@....com>
Subject: Re: [PATCH] sched/fair: Optimize select_idle_core
* Vincent Guittot <vincent.guittot@...aro.org> [2019-12-05 19:52:40]:
> On Thu, 5 Dec 2019 at 18:52, Srikar Dronamraju
> <srikar@...ux.vnet.ibm.com> wrote:
> >
> > * Vincent Guittot <vincent.guittot@...aro.org> [2019-12-05 18:27:51]:
> >
> > > Hi Srikar,
> > >
> > > On Thu, 5 Dec 2019 at 18:23, Srikar Dronamraju
> > > <srikar@...ux.vnet.ibm.com> wrote:
> > > >
> > > > Currently we loop through all threads of a core to evaluate if the core
> > > > is idle or not. This is unnecessary. If a thread of a core is not
> > > > idle, skip evaluating the other threads of that core.
> > >
> > > I think that the goal is also to clear all CPUs of the core from the
> > > cpumask of the loop above so it will not try the same core next time
> > >
> > > >
> >
> > We still maintain that goal by way of cpumask_andnot:
> > instead of clearing CPUs one at a time, we clear all the CPUs in the
> > core in one shot.
>
> ah yes sorry, I have been too quick and overlooked the cpumask_andnot line
>
Just to reiterate why this is necessary:
Currently, even if the first thread of a core is not idle, we still
iterate through all threads of the core, individually clearing each CPU
from the cpumask.
Collecting ticks around select_idle_core on a Power 9 SMT-8 system
while running schbench shows the following
(units are ticks, so lower is better):
Without patch
     N   Min    Max   Median   Avg        Stddev
x  130   151   1083      284   322.72308  144.41494

With patch
     N   Min    Max   Median   Avg        Stddev     Improvement
x  164    88    610      201   225.79268  106.78943  30.03%
--
Thanks and Regards
Srikar Dronamraju