Message-ID: <Z8mACAi4-kN4uBLz@gmail.com>
Date: Thu, 6 Mar 2025 11:59:20 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Dirk Gouders <dirk@...ders.net>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Namhyung Kim <namhyung@...nel.org>
Cc: Uros Bizjak <ubizjak@...il.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Linus Torvalds <torvalds@...uxfoundation.org>
Subject: Re: [PATCH -tip] x86/locking/atomic: Use asm_inline for atomic
locking insns
( I've Cc:-ed some perf gents regarding the measurement artifacts
observed below. Full report quoted below. )
* Dirk Gouders <dirk@...ders.net> wrote:
> Hi Ingo,
>
> my interest comes from the fact that I just started trying to better
> understand PCL and am reading the perf manual pages. Perhaps I should
> therefore keep my RO-bit set for some more months, but:
>
> > And if the benchmark is context-switching heavy, you'll want to use
> > 'perf stat -a' option to not have PMU context switching costs, and the
>
> I'm sure you know what you are talking about, so I don't doubt the
> above is correct, but perhaps the manual page should also clarify -a:
>
> -a::
> --all-cpus::
> system-wide collection from all CPUs (default if no target is specified)
>
> In the last example -a is combined with -C 2, which is even more
> confusing when you have just started with the manual pages.
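>
> If I read the man pages correctly, the combination means: -a switches
> to system-wide (per-CPU) counting, so the counters stay armed on the
> CPUs instead of being switched in and out together with the benchmark
> tasks, and -C 2 then restricts that collection to CPU 2. So something
> like
>
>     perf stat -a -C 2 -e instructions -- sleep 1
>
> should count the instructions of everything that runs on CPU 2 during
> that second, no matter which task it is. (That is only my reading of
> perf-stat(1), please correct me if it is wrong -- a sentence like that
> under -a/-C would have helped me a lot.)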
>
>
> But the main reason why I thought it might be OK to toggle my RO-bit
> for once is that I tried your examples, and with the first one I get
> much higher numbers than yours. I thought that must be because you
> simply own the faster machine (as I would have expected):
>
> > starship:~> perf bench sched pipe
> > # Running 'sched/pipe' benchmark:
> > # Executed 1000000 pipe operations between two processes
> >
> > Total time: 6.939 [sec]
> >
> > 6.939128 usecs/op
> > 144110 ops/sec
>
> lena:~> perf bench sched pipe
> # Running 'sched/pipe' benchmark:
> # Executed 1000000 pipe operations between two processes
>
> Total time: 11.129 [sec]
>
> 11.129952 usecs/op
> 89847 ops/sec
>
> And I expected this to continue throughout the examples.
>
> But -- to keep this short -- with the last example, my numbers are
> suddenly significantly lower than yours:
>
> > starship:~> taskset 0x4 perf stat -a -C 2 -e instructions --repeat 5 perf bench sched pipe
> > 5.808068 usecs/op
> > 5.843716 usecs/op
> > 5.826543 usecs/op
> > 5.801616 usecs/op
> > 5.793129 usecs/op
> >
> > Performance counter stats for 'system wide' (5 runs):
> >
> > 32,244,691,275 instructions ( +- 0.21% )
> >
> > 5.81624 +- 0.00912 seconds time elapsed ( +- 0.16% )
>
> lena:~> taskset 0x4 perf stat -a -C 2 -e instructions --repeat 5 perf bench sched pipe
> 4.204444 usecs/op
> 4.169279 usecs/op
> 4.186812 usecs/op
> 4.217039 usecs/op
> 4.208538 usecs/op
>
> Performance counter stats for 'system wide' (5 runs):
>
> 14,196,762,588 instructions ( +- 0.04% )
>
> 4.20203 +- 0.00854 seconds time elapsed ( +- 0.20% )
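>
> Just so I am sure I understand what the last command measures (please
> correct me if this is wrong): taskset 0x4 pins perf stat and thus also
> the two perf-bench processes to CPU 2 (mask bit 2), so the two
> processes have to take turns on that CPU, and -a -C 2 makes perf count
> only what runs there. I believe
>
>     taskset -c 2 perf stat -a -C 2 -e instructions,context-switches \
>         --repeat 5 -- perf bench sched pipe
>
> with a CPU list instead of the raw mask should be equivalent, and
> adding the context-switches event should show how switch-heavy the
> benchmark really is on a single CPU.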
>
>
>
> Of course, I don't want to waste anyone's time if this is such an
> obvious thing that only newbies don't understand it. So feel free to
> just ignore this.
>
> Regards
>
> Dirk