Message-ID: <12061140-4f09-b83f-843c-2fb8ff9f6e81@amd.com>
Date: Thu, 18 May 2023 15:56:12 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Chen Yu <yu.c.chen@...el.com>
Cc: Mike Galbraith <efault@....de>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Tim Chen <tim.c.chen@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Abel Wu <wuyun.abel@...edance.com>,
Yicong Yang <yangyicong@...ilicon.com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Len Brown <len.brown@...el.com>,
Chen Yu <yu.chen.surf@...il.com>,
Arjan Van De Ven <arjan.van.de.ven@...el.com>,
Aaron Lu <aaron.lu@...el.com>, Barry Song <baohua@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched/fair: Introduce SIS_PAIR to wakeup task on
local idle core first
Hello Chenyu,
On 5/18/2023 9:47 AM, Chen Yu wrote:
> Hi Prateek,
> On 2023-05-18 at 09:00:38 +0530, K Prateek Nayak wrote:
>> [..snip..]
>>> + /*
>>> + * If the waker and the wakee are good friends to each other,
>>> + * putting them within the same SMT domain could reduce C2C
>>> + * overhead. But this only applies when there is no idle core
>>> + * available. SMT idle sibling should be preferred to wakee's
>>> + * previous CPU, because the latter could still have the risk of C2C
>>> + * overhead.
>>> + *
>>> + */
>>> + if (sched_feat(SIS_PAIR) && sched_smt_active() && !has_idle_core &&
>>
>> "has_idle_core" is not populated at this point and will always be false
>> from the initialization. Should there be a:
>>
>> has_idle_core = test_idle_cores(? /* Which CPU? */);
> Yes, you are right. I have 2 patches; the first one checks has_idle_core
> at the beginning, but I forgot to send it out and only sent the second one.
>> if (sched_feat(SIS_PAIR) ...) {
>> ...
>> }
>> has_idle_core = false;
>>
>> ?
>>
>> "has_idle_core" is currently used in select_idle_sibling() from the
>> perspective of the target MC. Does switching target to current core
>> (which may not be on the same MC) if target MC does not have an idle core
>> make sense in all scenarios?
> Right, we should check whether the target equals the current CPU. Since I tested
> with 1 socket online, this issue was not exposed.
Ah! I see. Yup, a single socket (with one MC) will not run into any issues
with the current implementation.
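
For reference, something along these lines is what I had in mind (an
untested sketch on top of your quoted hunk, assuming the SIS_PAIR check
sits in select_idle_sibling() where "target" and the existing
test_idle_cores()/select_idle_smt() helpers are available):

	if (sched_feat(SIS_PAIR) && sched_smt_active() &&
	    current->last_wakee == p && p->last_wakee == current) {
		/* Populate has_idle_core before relying on it */
		has_idle_core = test_idle_cores(target);

		if (!has_idle_core) {
			i = select_idle_smt(p, smp_processor_id());
			if ((unsigned int)i < nr_cpumask_bits)
				return i;
		}
	}

(The existing sched_smt_active() block below could then reuse the
populated "has_idle_core" rather than testing it again.)
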
>>
>>> + current->last_wakee == p && p->last_wakee == current) {
>>> + i = select_idle_smt(p, smp_processor_id());
>>
>> Also wondering if an asym_fits_cpu() check is needed in some way here.
>> Consider a case where waker is on a weaker capacity CPU but wakee
>> previously ran on a stronger capacity CPU. It might be worthwhile
>> to wake the wakee on previous CPU if the current CPU does not fit
>> the task's utilization and move the pair to the CPU with larger
>> capacity during the next wakeup. wake_affine_weight() would select
>> a target based on load and capacity consideration but here we
>> switch the wakeup target to a thread on the current core.
>>
>> Wondering if the capacity details are already considered in the path?
>>
> Good point. I guess what you mean is that the target could be a CPU other than
> the current one, so there should be a check whether the target equals the current CPU.
Yup. That should handle the asymmetric capacity condition too, but I'm
wondering if it makes the case too narrow to see the same benefit.
Can you perhaps try "cpus_share_cache(target, smp_processor_id())"
instead of "target == smp_processor_id()", since we use similar
logic to test whether p->recent_used_cpu is a good target?
This will be equivalent to your current implementation for a single
socket with one LLC, and for the dual socket or multiple LLC case,
we can be sure "has_idle_core" indicates the status of the MC which
is shared by both the target and the current CPU.
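
i.e. something like the following (again an untested sketch; I've only
added the cache-sharing check to your quoted condition and kept
everything else as-is):

	if (sched_feat(SIS_PAIR) && sched_smt_active() && !has_idle_core &&
	    cpus_share_cache(target, smp_processor_id()) &&
	    current->last_wakee == p && p->last_wakee == current) {
		i = select_idle_smt(p, smp_processor_id());
		if ((unsigned int)i < nr_cpumask_bits)
			return i;
	}

An asym_fits_cpu() style check could possibly be folded into the same
condition if the capacity concern above turns out to matter in practice.
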
> Let me refine the patch and have a test.
>
I'll hold off queuing a full test run until then.
> thanks,
> Chenyu
>
> [..snip..]
--
Thanks and Regards,
Prateek