Message-ID: <92f30c7e-a0e2-4c50-9ae8-f97a915e2c8d@amazon.com>
Date: Wed, 4 Feb 2026 22:48:48 +0000
From: "Mohamed Abuelfotoh, Hazem" <abuehaze@...zon.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Mario Roy <marioeroy@...il.com>, Chris Mason <clm@...a.com>, "Joseph
Salisbury" <joseph.salisbury@...cle.com>, Adam Li
<adamli@...amperecomputing.com>, Josh Don <joshdon@...gle.com>,
<mingo@...hat.com>, <juri.lelli@...hat.com>, <vincent.guittot@...aro.org>,
<dietmar.eggemann@....com>, <rostedt@...dmis.org>, <bsegall@...gle.com>,
<mgorman@...e.de>, <vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/4] sched/fair: Proportional newidle balance
On 04/02/2026 14:05, Peter Zijlstra wrote:
>> I will try to
>> come-up with standalone reproduction steps that can be used to investigate
>> this memcached regression. Meanwhile we will share the fio regression
>> reproduction steps that I mentioned in my previous update. This should be
>> much simpler in steps and can be done on a single machine.
>
> Thanks! I have a few machines with a 'spare' nvme drive to run things
> on, hopefully that is sufficient.
It looks like the previously reported fio regression has been fully
mitigated by the proposed patch [1]. I verified this on both 6.18.5 and
6.12.66. I will try to come up with a standalone reproduction for the
memcached regression to make debugging easier.
**fio regression reproduction environment**
AWS EC2 instance: c5ad.24xlarge
96 vCPUs = 48 cores with HT
12 CCDs
Memory: 192 GiB
SSD disk space: 1900 GiB
SSD disk max write IOPS: 180K
SSD disk max write bandwidth: 760 MB/sec
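For anyone reproducing this, a quick sketch of how the CPU topology and the
target NVMe device can be sanity-checked before the run (generic commands;
the /dev/nvme1n1 device name is taken from the fio command lines below):
# lscpu | grep -E '^CPU\(s\)|Thread|Socket|NUMA node\(s\)'
# lsblk -d -o NAME,SIZE,MODEL,ROTA /dev/nvme1n1
# grep . /sys/block/nvme1n1/queue/max_sectors_kb /sys/block/nvme1n1/queue/nr_requests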
Below are the results of three different runs:
6.18.5          - baseline 6.18.5
6.18.5_revert   - 6.18.5 with 1b9c118fe318 ("sched/fair: Proportional
                  newidle balance") reverted
6.18.5_proposed - 6.18.5 with patch [1] applied
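For reference, a rough sketch of how the three variants can be built from a
stable tree (assumptions: the revert applies cleanly on top of v6.18.5, and
the patch from [1] has been saved locally as proposed.patch):
git checkout -b 6.18.5_baseline v6.18.5        # plain 6.18.5
git checkout -b 6.18.5_revert   v6.18.5
git revert 1b9c118fe318                        # drop "sched/fair: Proportional newidle balance"
git checkout -b 6.18.5_proposed v6.18.5
git am proposed.patch                          # apply the proposed fix from [1]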
---------------------------------------------------------------
Version 6.18.5
# sudo fio --time_based --name=benchmark --size=50G --runtime=60
--filename=/dev/nvme1n1 --ioengine=psync --randrepeat=0 --iodepth=1
--fsync=64 --invalidate=1 --verify=0 --verify_fatal=0 --blocksize=4k
--group_reporting --rw=randwrite --numjobs=4
Run status group 0 (all jobs):
WRITE: bw=478MiB/s (501MB/s), 478MiB/s-478MiB/s (501MB/s-501MB/s),
io=28.0GiB (30.1GB), run=60003-60003msec
----------------------------------------------------------------
Version 6.18.5_revert
# sudo fio --time_based --name=benchmark --size=50G --runtime=60
--filename=/dev/nvme1n1 --ioengine=psync --randrepeat=0 --iodepth=1
--fsync=64 --invalidate=1 --verify=0 --verify_fatal=0 --blocksize=4k
--group_reporting --rw=randwrite --numjobs=4
Run status group 0 (all jobs):
WRITE: bw=549MiB/s (575MB/s), 549MiB/s-549MiB/s (575MB/s-575MB/s),
io=32.2GiB (34.5GB), run=60002-60002msec
-----------------------------------------------------------------
Version 6.18.5_proposed
# sudo fio --time_based --name=benchmark --size=50G --runtime=60
--filename=/dev/nvme1n1 --ioengine=psync --randrepeat=0 --iodepth=1
--fsync=64 --invalidate=1 --verify=0 --verify_fatal=0 --blocksize=4k
--group_reporting --rw=randwrite --numjobs=4
Run status group 0 (all jobs):
WRITE: bw=551MiB/s (578MB/s), 551MiB/s-551MiB/s (578MB/s-578MB/s),
io=32.3GiB (34.7GB), run=60003-60003msec
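For repeated runs, a small wrapper along these lines can be used to collect
just the group-report summary from each iteration (a sketch only; it reuses
the exact fio command above and filters its output):
#!/bin/bash
# Run the fio job three times and keep only the "Run status group" summary lines.
for i in 1 2 3; do
    echo "run $i:"
    sudo fio --time_based --name=benchmark --size=50G --runtime=60 \
        --filename=/dev/nvme1n1 --ioengine=psync --randrepeat=0 --iodepth=1 \
        --fsync=64 --invalidate=1 --verify=0 --verify_fatal=0 --blocksize=4k \
        --group_reporting --rw=randwrite --numjobs=4 | grep -A1 'Run status group'
done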
[1] https://lore.kernel.org/all/20260127151748.GA1079264@noisy.programming.kicks-ass.net/T/#u