Message-ID: <20210917133512.GH3959@techsingularity.net>
Date: Fri, 17 Sep 2021 14:35:12 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Aubrey Li <aubrey.li@...ux.intel.com>
Cc: Barry Song <21cnbao@...il.com>, linux-kernel@...r.kernel.org,
mingo@...nel.org, peterz@...radead.org, song.bao.hua@...ilicon.com,
valentin.schneider@....com, vincent.guittot@...aro.org,
yangyicong@...wei.com
Subject: Re: [PATCH 8/9] sched/fair: select idle cpu from idle cpumask for
task wakeup
On Fri, Sep 17, 2021 at 05:11:11PM +0800, Aubrey Li wrote:
> On 9/17/21 12:15 PM, Barry Song wrote:
> >> @@ -4965,6 +4965,7 @@ void scheduler_tick(void)
> >>
> >> #ifdef CONFIG_SMP
> >> rq->idle_balance = idle_cpu(cpu);
> >> + update_idle_cpumask(cpu, rq->idle_balance);
> >> trigger_load_balance(rq);
> >> #endif
> >> }
> >
> > This might be a stupid question, but one bothering Yicong and me: why
> > don't we choose to update_idle_cpumask() when the idle task exits and
> > switches to a normal task?
>
> I implemented it that way and we discussed it before (RFC v1?). Updating
> a cpumask on every idle enter/exit is more expensive than we expected,
> even though it's per LLC domain; Vincent saw a significant regression
> IIRC. You can also take a look at nohz.idle_cpus_mask as a reference.
>
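For reference, the enter/exit variant being discussed boils down to
something like the sketch below -- every idle transition writes a cpumask
shared by all CPUs in the LLC, which is presumably where the cost comes
from. The field names are from memory of the earlier RFC, so treat it as
illustrative only:

static void update_idle_cpumask(int cpu, bool idle)
{
	struct sched_domain_shared *sds;

	rcu_read_lock();
	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
	if (sds) {
		/* One cacheline shared by the whole LLC, dirtied on every transition */
		if (idle)
			cpumask_set_cpu(cpu, sds->idle_cpus_span);
		else
			cpumask_clear_cpu(cpu, sds->idle_cpus_span);
	}
	rcu_read_unlock();
}
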
It's possible to track it differently and I prototyped it some time
back. The results were mixed at the time. It helped some workloads
and was marginal on others. It appeared to help hackbench, but I found
that hackbench is much more vulnerable to wakeup_granularity and
overscheduling. For hackbench, it makes more sense to target that directly
before revisiting the alt-idlecore approach to see how much it really
helps. I'm waiting on test results for various ways wakeup_gran can be
scaled depending on rq activity.
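To give an idea of the sort of thing being tested (purely illustrative,
not necessarily what ends up in the series), scaling wakeup_gran by rq
activity would be along these lines:

static unsigned long scaled_wakeup_gran(struct rq *rq)
{
	unsigned long gran = sysctl_sched_wakeup_granularity;
	unsigned int nr = rq->cfs.h_nr_running;

	/*
	 * A busier rq gets a larger granularity and therefore fewer
	 * wakeup preemptions, which is roughly the overscheduling
	 * problem hackbench runs into.
	 */
	if (nr > 1)
		gran *= nr;

	return gran;
}
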
For alternative idle core tracking, the current 5.15-rc1 rebased
prototype looks like this:
https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git/commit/?h=sched-altidlecore-v2r8&id=b2af1a88245f6cbeb28343e89f3183a77b29d52d
Test results are still pending and, as usual, the queue is busy. I swear,
my primary bottleneck for doing anything is benchmarking and validation :(
--
Mel Gorman
SUSE Labs