Message-ID: <jhjh7qdozu3.mognet@arm.com>
Date: Thu, 29 Oct 2020 14:45:40 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel <linux-kernel@...r.kernel.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Tao Zhou <ouwen210@...mail.com>
Subject: Re: [PATCH v2] sched/fair: prefer prev cpu in asymmetric wakeup path
On 29/10/20 14:19, Vincent Guittot wrote:
> On Thu, 29 Oct 2020 at 12:16, Valentin Schneider
> <valentin.schneider@....com> wrote:
>> On legacy big.LITTLE systems, sd_asym_cpucapacity spans all CPUs, so we
>> would iterate over those in select_idle_capacity() anyway - the policy
>> we've been going for is that capacity fitness trumps cache use.
>>
>> This does require the system to have a decent interconnect, cache snooping
>> & co, but that is IMO a requirement of any sane asymmetric system.
>>
>> To put words into code, this is the kind of check I would see:
>>
>>   if (static_branch_unlikely(&sched_asym_cpucapacity))
>>           return fits_capacity(task_util, capacity_of(cpu));
>>   else
>
> You can't make the shortcut that prev will always belong to the domain,
> so you have to check that prev belongs to sd_asym_cpucapacity.
> Even if that's true with current mobile SoCs, this code is generic core
> code and must handle any kind of funny topology that HW guys could
> imagine.
>
Don't give them any funny ideas! :-)

But yes, you're right that we could have more than one asym domain span,
although AFAIA that would only be permitted by DynamIQ.

I was about to say that for DynamIQ the shared L3 should mean the asym
domain commonly matches MC (thus cpus_share_cache()), but phantom domains
wreck that :/ Arguably that isn't upstream's problem though.
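
For the sake of discussion, here's a rough sketch of the kind of combined
check I have in mind (the helper name is made up, and it deliberately keeps
the existing cpus_share_cache() test rather than addressing the "prev
outside the asym span" case you raise):

  /*
   * Made-up helper: only enforce the capacity fit when the system is
   * asymmetric; on a symmetric system every CPU fits by definition.
   */
  static inline bool asym_task_fits_cpu(unsigned long task_util, int cpu)
  {
          if (static_branch_unlikely(&sched_asym_cpucapacity))
                  return fits_capacity(task_util, capacity_of(cpu));

          return true;
  }

  /* In select_idle_sibling(), prev would then have to pass both checks: */
  if (prev != target && cpus_share_cache(prev, target) &&
      (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
      asym_task_fits_cpu(task_util, prev))
          return prev;

On symmetric systems the helper degenerates to "always fits", so behaviour
there would be unchanged.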