Message-ID: <CAFULd4a1t8LVMqFKNcjanKUimaxpPSQttKTQHROHrAvxGcyPEA@mail.gmail.com>
Date: Thu, 6 Mar 2025 10:38:02 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Linus Torvalds <torvalds@...uxfoundation.org>
Subject: Re: [PATCH -tip] x86/locking/atomic: Use asm_inline for atomic
locking insns
On Wed, Mar 5, 2025 at 9:21 PM Ingo Molnar <mingo@...nel.org> wrote:
>
>
> * Uros Bizjak <ubizjak@...il.com> wrote:
>
> > I simply run the lmbench command, where the benchmark was obtained as
> > .rpm for Fedora 41 [1], assuming that the benchmark itself sets the
> > benchmarked system to the correct state and does enough repetitions
> > to obtain a meaningful result [2].
>
> These assumptions are not valid in a lot of cases - see my other email
> with an example.
Thanks for the post; it was very thorough and informative, and an
interesting read indeed.
OTOH, the proposed approach requires deep specialist knowledge, which
IMO is unreasonable to expect from a prospective patch submitter. I
totally agree with Dave's request for some performance numbers [partial
quote from his post earlier in the thread]:
"I'm seriously not picky: will-it-scale, lmbench, dbench, kernel
compiles. *ANYTHING*. *ANY* hardware. Run it on your laptop."
that would indicate whether the patch delivers on its promise. However,
"Run it on your laptop" already implies the ability to compile, install,
and run a distribution kernel, which is not exactly a trivial task
compared to, e.g., running a patched kernel under QEMU. Easy-to-run
benchmark scripts, or documented easy-to-follow steps, would be of
immense help to a patch submitter in assessing the performance aspects
of a patch. As I said elsewhere, code size is a nice metric, but it is
not exactly appropriate for an -O2 compiled kernel.
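To make the code-size part concrete, this is the kind of low-effort
check I have in mind (the patch file name is hypothetical;
scripts/bloat-o-meter ships with the kernel tree, and the QEMU smoke
boot avoids installing the kernel on the host):

```shell
# Build vmlinux before and after the patch, then diff per-symbol sizes.
make -j"$(nproc)" vmlinux
cp vmlinux vmlinux.orig
git am asm_inline.patch            # hypothetical patch file name
make -j"$(nproc)" vmlinux bzImage
./scripts/bloat-o-meter vmlinux.orig vmlinux

# Smoke-test the patched kernel under QEMU; no installation needed.
qemu-system-x86_64 -kernel arch/x86/boot/bzImage \
        -append "console=ttyS0" -nographic -m 1G
```

That is obviously no substitute for a real benchmark run, but it is
something any submitter can do without touching the boot setup of
their own machine.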
As probably everybody reading LKML has noticed, there is quite some CI
infrastructure available that tests posted patches. That
infrastructure is set up and maintained by experts who *can* precisely
measure the performance impact of a patch. My proposal would be to ease
the requirements on patches to something like what Dave requested, in
order to allow patches to be integrated and then further tested by the
CI infrastructure.
Disclaimer: Please read the above as feedback from a wannabe
contributor, not as a critique of the established development process.
Thanks,
Uros.