Message-ID: <778f6247-749a-47c0-bc37-c42ced4c078b@amd.com>
Date: Sat, 14 Dec 2024 01:30:29 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>
CC: "H. Peter Anvin" <hpa@...or.com>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>, Tim Chen
<tim.c.chen@...ux.intel.com>, Mario Limonciello <mario.limonciello@....com>,
Meng Li <li.meng@....com>, Huang Rui <ray.huang@....com>, "Gautham R. Shenoy"
<gautham.shenoy@....com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar
<mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave Hansen
<dave.hansen@...ux.intel.com>, Peter Zijlstra <peterz@...radead.org>, "Juri
Lelli" <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>,
<x86@...nel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 8/8] sched/fair: Uncache asym_prefer_cpu and find it
during update_sd_lb_stats()
Hello Shrikanth,
On 12/13/2024 8:32 PM, Shrikanth Hegde wrote:
>
>
> On 12/12/24 00:25, K Prateek Nayak wrote:
>> On AMD processors supporting dynamic preferred core ranking, the
>> asym_prefer_cpu cached in sched_group can change dynamically. Since
>> asym_prefer_cpu is cached when the sched domain hierarchy is built,
>> updating the cached value across the system would require rebuilding
>> the sched domain which is prohibitively expensive.
>>
>> All the asym_prefer_cpu comparisons in the load balancing path are only
>> carried out post the sched group stats have been updated after iterating
>> all the CPUs in the group. Uncache the asym_prefer_cpu and compute it
>> while sched group statistics are being updated as a part of sg_lb_stats.
>>
>> Fixes: f3a052391822 ("cpufreq: amd-pstate: Enable amd-pstate preferred core support")
>> Signed-off-by: K Prateek Nayak <kprateek.nayak@....com>
>> ---
>> kernel/sched/fair.c | 21 +++++++++++++++++++--
>> kernel/sched/sched.h | 1 -
>> kernel/sched/topology.c | 15 +--------------
>> 3 files changed, 20 insertions(+), 17 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 3f36805ecdca..166b8e831064 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -9911,6 +9911,8 @@ struct sg_lb_stats {
>> unsigned int sum_nr_running; /* Nr of all tasks running in the group */
>> unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
>> unsigned int idle_cpus; /* Nr of idle CPUs in the group */
>> + unsigned int asym_prefer_cpu; /* CPU with highest asym priority */
>> + int highest_asym_prio; /* Asym priority of asym_prefer_cpu */
>
> It's better to move this after the group_asym_packing field, so all related fields are together.
Sure, I'll move them around in the next iteration if folks are okay
with this approach.
>
>> unsigned int group_weight;
>> enum group_type group_type;
>> unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
>> [..snip..]
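To make the intent above a bit more concrete, here is a minimal,
purely illustrative sketch of the tracking (not the exact hunk from
this patch; the loop body is simplified and only the asym-priority
bookkeeping is shown):

	/* Illustrative: find the preferred CPU while walking the group */
	sgs->highest_asym_prio = INT_MIN;

	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
		int prio = arch_asym_cpu_priority(i);

		/* Remember the highest-priority CPU seen so far */
		if (prio > sgs->highest_asym_prio) {
			sgs->highest_asym_prio = prio;
			sgs->asym_prefer_cpu = i;
		}

		/* ... existing per-CPU stats accumulation ... */
	}

Since sg_lb_stats is re-initialized for every group, the value tracked
here is always current by the time the asym_prefer_cpu comparisons run
later in the load balancing path, which is what allows the copy cached
in sched_group at domain-build time to go away.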
>
> Tried minimal testing of ASYM_PACKING behavior on a Power10 shared VM. It is working as expected with the patch as well (functionality-wise; performance isn't tested).
Thank you for testing! Let me know if there are any visible regressions,
in which case let's see whether the alternate approach suggested in the
cover letter fares any better.
Thanks a ton for reviewing and testing the series.
--
Thanks and Regards,
Prateek