Message-ID: <ae65e4aa-3407-4fb0-b1f1-eb7c2626f768@linux.ibm.com>
Date: Mon, 7 Oct 2024 22:50:11 +0530
From: Anjali K <anjalik@...ux.ibm.com>
To: Qais Yousef <qyousef@...alina.io>,
    "Rafael J. Wysocki" <rafael@...nel.org>,
    Viresh Kumar <viresh.kumar@...aro.org>, Ingo Molnar <mingo@...nel.org>,
    Peter Zijlstra <peterz@...radead.org>,
    Vincent Guittot <vincent.guittot@...aro.org>,
    Juri Lelli <juri.lelli@...hat.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
    Dietmar Eggemann <dietmar.eggemann@....com>,
    Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
    Valentin Schneider <vschneid@...hat.com>,
    Christian Loehle <christian.loehle@....com>,
    Hongyan Xia <hongyan.xia2@....com>, John Stultz <jstultz@...gle.com>,
    linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7] sched: Consolidate cpufreq updates
Hi,

I tested this patch to check whether it causes any regressions on bare-metal POWER9 systems using microbenchmarks.
The test system is a 2-NUMA-node, 128-CPU powernv POWER9 system with the conservative cpufreq governor enabled.
I took the 6.10.0-rc1 tip sched/core kernel as the baseline; the results below are normalized to it.
No regressions were found.
+------------------------------------------------------+--------------------+----------+
| Benchmark | Baseline | Baseline |
| | (6.10.0-rc1 tip | + patch |
| | sched/core) | |
+------------------------------------------------------+--------------------+----------+
|Hackbench run duration (sec) | 1 | 1.01 |
|Lmbench simple fstat (usec) | 1 | 0.99 |
|Lmbench simple open/close (usec) | 1 | 1.02 |
|Lmbench simple read (usec) | 1 | 1 |
|Lmbench simple stat (usec) | 1 | 1.01 |
|Lmbench simple syscall (usec) | 1 | 1.01 |
|Lmbench simple write (usec) | 1 | 1 |
|stressng (bogo ops) | 1 | 0.94 |
|Unixbench execl throughput (lps) | 1 | 0.97 |
|Unixbench Pipebased Context Switching throughput (lps)| 1 | 0.94 |
|Unixbench Process Creation (lps) | 1 | 1 |
|Unixbench Shell Scripts (1 concurrent) (lpm) | 1 | 1 |
|Unixbench Shell Scripts (8 concurrent) (lpm) | 1 | 1.01 |
+------------------------------------------------------+--------------------+----------+
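For reference, here is a minimal sketch (not part of the original test setup; the helper name and use of Python are assumptions) showing how the conservative governor setting can be confirmed on every CPU before a run, using the standard cpufreq sysfs interface:

    # verify_governor.py: hypothetical helper to confirm that all CPUs
    # report the expected cpufreq governor via sysfs before benchmarking.
    import glob

    def check_governor(expected="conservative"):
        paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")
        mismatched = []
        for path in paths:
            with open(path) as f:
                governor = f.read().strip()
            if governor != expected:
                mismatched.append((path, governor))
        for path, governor in mismatched:
            print(f"{path}: {governor} (expected {expected})")
        if not mismatched:
            print(f"All {len(paths)} CPUs report the {expected} governor")

    if __name__ == "__main__":
        check_governor()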
Thank you,
Anjali K