Message-ID: <20210114093543.GM3592@techsingularity.net>
Date: Thu, 14 Jan 2021 09:35:43 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Li Aubrey <aubrey.li@...ux.intel.com>,
Qais Yousef <qais.yousef@....com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/5] sched/fair: Merge select_idle_core/cpu()
On Wed, Jan 13, 2021 at 06:03:00PM +0100, Vincent Guittot wrote:
> > @@ -6159,16 +6171,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > for_each_cpu_wrap(cpu, cpus, target) {
> > if (!--nr)
> > return -1;
> > - if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > - break;
> > + if (smt) {
>
> If we want to stay on something similar to the previous behavior, we
> want to check on all cores if test_idle_cores is true so nr should be
> set to number of cores
>
I don't think we necessarily want to do that. has_idle_cores is an
effective throttling mechanism but it's not perfect. If the full domain
is always scanned for a core then there can be excessive scanning in
workloads like hackbench, which tend to have has_idle_cores return false
positives. This becomes important once the average number of busy CPUs
exceeds half of the domain for SMT2.
At least with the patch, even if that change were made, we would still
not scan the same runqueues twice, so it would still be an improvement.
It would be nice, though, to avoid excessive deep scanning when there
are many busy CPUs but individual tasks are rapidly idling.
However, in the event regressions are found, changing to your suggested
behaviour would be an obvious starting point.
--
Mel Gorman
SUSE Labs