Message-ID: <20250815092910.GA33@bytedance>
Date: Fri, 15 Aug 2025 17:30:08 +0800
From: Aaron Lu <ziqianlu@...edance.com>
To: Valentin Schneider <vschneid@...hat.com>
Cc: Ben Segall <bsegall@...gle.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Peter Zijlstra <peterz@...radead.org>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Xi Wang <xii@...gle.com>, linux-kernel@...r.kernel.org,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chuyi Zhou <zhouchuyi@...edance.com>,
Jan Kiszka <jan.kiszka@...mens.com>,
Florian Bezdeka <florian.bezdeka@...mens.com>,
Songtang Liu <liusongtang@...edance.com>
Subject: Re: [PATCH v3 3/5] sched/fair: Switch to task based throttle model
On Thu, Aug 14, 2025 at 05:54:34PM +0200, Valentin Schneider wrote:
> On 12/08/25 16:48, Aaron Lu wrote:
> > On Fri, Aug 08, 2025 at 01:45:11PM +0200, Valentin Schneider wrote:
> >> On 08/08/25 18:13, Aaron Lu wrote:
> >> > Let me run some scheduler benchmarks to see how this impacts performance.
> >> >
> >> > I'm thinking of running something like hackbench on server
> >> > platforms: first with no quota set to see if performance changes,
> >> > then with quota set to see how performance changes.
> >> >
> >> > Does this sound good to you? Or do you have any specific benchmark and
> >> > test methodology in mind?
> >> >
> >>
> >> Yeah hackbench is pretty good for stressing the EQ/DQ paths.
> >>
> >
> > Tested hackbench/pipe and netperf/UDP_RR on Intel EMR (2 sockets/240
> > cpus) and AMD Genoa (2 sockets/384 cpus); the tl;dr is: there is no
> > clear performance change between base and this patchset (head). Below
> > is the detailed test data:
> > (turbo/boost disabled, cpuidle disabled, cpufreq governor set to performance)
> >
> > hackbench/pipe/loops=150000
> > (seconds, smaller is better)
> >
> > On Intel EMR:
> >
> > nr_group base head change
> > 1 3.62±2.99% 3.61±10.42% +0.28%
> > 8 8.06±1.58% 7.88±5.82% +2.23%
> > 16 11.40±2.57% 11.25±3.72% +1.32%
> >
> > For the nr_group=16 case, configure a cgroup with quota set to half of
> > the CPUs and then let hackbench run in this cgroup:
> >
> > base head change
> > quota=50% 18.35±2.40% 18.78±1.97% -2.34%
> >
> > On AMD Genoa:
> >
> > nr_group base head change
> > 1 17.05±1.92% 16.99±2.81% +0.35%
> > 8 16.54±0.71% 16.73±1.18% -1.15%
> > 16 27.04±0.39% 26.72±2.37% +1.18%
> >
> > For the nr_group=16 case, configure a cgroup with quota set to half of
> > the CPUs and then let hackbench run in this cgroup:
> >
> > base head change
> > quota=50% 43.79±1.10% 44.65±0.37% -1.96%
> >
> > netperf/UDP_RR/testlen=30s
> > (throughput, higher is better)
> >
> > 25% means nr_clients is set to 1/4 of nr_cpu, 50% means nr_clients is
> > 1/2 of nr_cpu, etc.
> >
> > On Intel EMR:
> >
> > nr_clients base head change
> > 25% 83,567±0.06% 84,298±0.23% +0.87%
> > 50% 61,336±1.49% 60,816±0.63% -0.85%
> > 75% 40,592±0.97% 40,461±0.14% -0.32%
> > 100% 31,277±2.11% 30,948±1.84% -1.05%
> >
> > For the nr_clients=100% case, configure a cgroup with quota set to half
> > of the CPUs and then let netperf run in this cgroup:
> >
> > nr_clients base head change
> > 100% 25,532±0.56% 26,772±3.05% +4.86%
> >
> > On AMD Genoa:
> >
> > nr_clients base head change
> > 25% 12,443±0.40% 12,525±0.06% +0.66%
> > 50% 11,403±0.35% 11,472±0.50% +0.61%
> > 75% 10,070±0.19% 10,071±0.95% 0.00%
> > 100% 9,947±0.80% 9,881±0.58% -0.66%
> >
> > For the nr_clients=100% case, configure a cgroup with quota set to half
> > of the CPUs and then let netperf run in this cgroup:
> >
> > nr_clients base head change
> > 100% 4,954±0.24% 4,952±0.14% 0.00%
>
> Thank you for running these; it looks like mostly a slightly bigger
> variance on a few of them, but that's about it.
>
> I would also suggest running similar benchmarks but with deeper
> hierarchies, to get an idea of how much worse unthrottle_cfs_rq() can get
> when tg_unthrottle_up() goes up a bigger tree.
No problem.
I suppose I can reuse the previous shared test script:
https://lore.kernel.org/lkml/CANCG0GdOwS7WO0k5Fb+hMd8R-4J_exPTt2aS3-0fAMUC5pVD8g@mail.gmail.com/
There I used:
nr_level1=2
nr_level2=100
nr_level3=10
But I can tweak these numbers for this performance evaluation: make the
hierarchy 5 levels deep, place tasks in the leaf-level cgroups and
configure quota on the 1st-level cgroups, roughly as sketched below.
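A minimal sketch of the setup I have in mind (Python, assuming cgroup v2
is mounted at /sys/fs/cgroup with the cpu controller enabled in the root's
cgroup.subtree_control; the "test" root, the per-level fanout and the
quota value are made-up placeholders, the real run will follow the script
linked above):

import os

CGROOT = "/sys/fs/cgroup"        # assumes cgroup v2 mounted here
FANOUT = [2, 10, 5, 5, 4]        # made-up per-level fanout, 5 levels deep
PERIOD_US = 100000
QUOTA_US = os.cpu_count() * PERIOD_US // 2   # quota = half of the CPUs

def make_tree(parent, depth):
    # enable the cpu controller for the children so cpu.max shows up there
    with open(os.path.join(parent, "cgroup.subtree_control"), "w") as f:
        f.write("+cpu")
    leaves = []
    for i in range(FANOUT[depth]):
        child = os.path.join(parent, "lv%d_%d" % (depth + 1, i))
        os.mkdir(child)
        if depth == 0:           # quota only on the 1st-level cgroups
            with open(os.path.join(child, "cpu.max"), "w") as f:
                f.write("%d %d" % (QUOTA_US, PERIOD_US))
        if depth + 1 == len(FANOUT):
            leaves.append(child) # leaf cgroup: tasks go here
        else:
            leaves += make_tree(child, depth + 1)
    return leaves

root = os.path.join(CGROOT, "test")
os.mkdir(root)
leaves = make_tree(root, 0)
# benchmark tasks are then attached by writing their pids to
# <leaf>/cgroup.procs

With this made-up fanout that is 2*10*5*5*4 = 2000 leaf cgroups, each of
them 4 levels below the quota-carrying 1st-level cgroups.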
I'll get back to you once I finish collecting data; feel free to let me
know if you have other ideas for testing this :)