Message-ID: <f988e37f-cb28-4ac3-9e93-1e5fa6750e59@amd.com>
Date: Mon, 24 Mar 2025 09:28:45 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Libo Chen <libo.chen@...cle.com>, Ingo Molnar <mingo@...hat.com>, "Peter
Zijlstra" <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
"Vincent Guittot" <vincent.guittot@...aro.org>, Chen Yu
<yu.c.chen@...el.com>, <linux-kernel@...r.kernel.org>
CC: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman
<mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>, David Vernet
<void@...ifault.com>, "Gautham R. Shenoy" <gautham.shenoy@....com>, "Swapnil
Sapkal" <swapnil.sapkal@....com>, Shrikanth Hegde <sshegde@...ux.ibm.com>
Subject: Re: [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the
sched domain hierarchy

Hello Libo,

Thank you for taking a look at the series and sorry for the late
response.

On 3/21/2025 3:34 PM, Libo Chen wrote:
>
>
> On 3/13/25 02:37, K Prateek Nayak wrote:
>
>> Benchmark results
>> =================
>>
>
> Hi Prateek,
>
> Definitely like the idea, esp. if we can pull this off on newidle lb
> which tends to be more problematic on systems with a large number
> of cores. But the data below on periodic lb isn't, I guess, as good
> as I expected. So I am wondering whether the cost of
> update_[sd|sg]_lb_stats() actually went down as a result of the
> caching?
I have some numbers for the versioning idea that I got working just
before OSPM, posted in [1]. The benchmark results don't move much, but
the total time spent in newidle balance reduces by ~5% overall.
There is a ~30% overhead from aggregating and propagating the stats
upwards at the SMT domain that offsets some of the benefits of
propagation at higher domains, but I'm working to see if this can be
reduced and only done when required.
Some ideas were discussed at OSPM to reduce the overheads further and
to share the burden of busy load balancing across all the CPUs in the
domain; I'll tackle that next.
If you have any benchmark where this shows up prominently, please do let
me know and I can try adding it to the bunch.
[1] https://lore.kernel.org/lkml/20250316102916.10614-1-kprateek.nayak@amd.com/
--
Thanks and Regards,
Prateek
>
> Thanks,
> Libo