Message-ID: <CAGudoHFpQyxOx7SU4O5XMSK--JCtkOFc_He13UdtCYQLLwGu8w@mail.gmail.com>
Date: Mon, 31 Mar 2025 05:22:30 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: kernel test robot <oliver.sang@...el.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, linux-kernel@...r.kernel.org,
Christian Brauner <brauner@...nel.org>
Subject: Re: [linus:master] [wait] 84654c7f47: reaim.jobs_per_min 3.0% regression
On Mon, Mar 31, 2025 at 4:58 AM kernel test robot <oliver.sang@...el.com> wrote:
>
>
>
> Hello,
>
> kernel test robot noticed a 3.0% regression of reaim.jobs_per_min on:
>
>
> commit: 84654c7f47307692d47ea914d01287c8c54b3532 ("wait: avoid spurious calls to prepare_to_wait_event() in ___wait_event()")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> [still regression on linus/master 1a9239bb4253f9076b5b4b2a1a4e8d7defd77a95]
> [still regression on linux-next/master db8da9da41bced445077925f8a886c776a47440c]
>
> testcase: reaim
> config: x86_64-rhel-9.4
> compiler: gcc-12
> test machine: 192 threads 2 sockets Intel(R) Xeon(R) Platinum 8468V CPU @ 2.4GHz (Sapphire Rapids) with 384G memory
> parameters:
>
[snip]
> =========================================================================================
> compiler/cpufreq_governor/disk/fs/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
> gcc-12/performance/1SSD/ext4/x86_64-rhel-9.4/100%/debian-12-x86_64-20240206.cgz/300s/igk-spr-2sp4/disk/reaim
>
> commit:
> 46af8e2406 ("pipe: cache 2 pages instead of 1")
> 84654c7f47 ("wait: avoid spurious calls to prepare_to_wait_event() in ___wait_event()")
>
> 46af8e2406c27cc2 84654c7f47307692d47ea914d01
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
[snip]
> 75.63 +4.3% 78.87 iostat.cpu.idle
> 2.05 -2.9% 1.99 iostat.cpu.iowait
> 21.28 ą 2% -14.8% 18.12 iostat.cpu.system
> 1.04 -2.7% 1.01 iostat.cpu.user
[snip]
So this test spends most of its time off CPU. With my change there was
some drop in system time, which most likely shifted timings elsewhere;
note there is *more* idle.
The actual perf problem is lock contention, and this is not a real
regression in the sense that there would be a clear speedup if it were
not for the locks.
I don't remember the details now, but there was something funky about
the last dequeue from adaptive spinning, where threads would refuse to
spin when they *could*. I suspect this is part of the problem here
(setting aside the fact that there is contention at all, ofc). Maybe
I'll get around to writing a proper writeup about that, but I could not
bring myself to seriously dig into it.
In the meantime I don't believe this report warrants any action
concerning the patch at hand.
--
Mateusz Guzik <mjguzik gmail.com>