Message-ID: <aBCoesvpVU0-njjH@gmail.com>
Date: Tue, 29 Apr 2025 12:22:50 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Arnd Bergmann <arnd@...nel.org>
Cc: "H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
"Ahmed S . Darwish" <darwi@...utronix.de>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Ard Biesheuvel <ardb@...nel.org>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
John Ogness <john.ogness@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 13/15] x86/cpu: Make CONFIG_X86_CX8 unconditional
* Arnd Bergmann <arnd@...nel.org> wrote:
> On Mon, Apr 28, 2025, at 11:16, Ingo Molnar wrote:
> > * Arnd Bergmann <arnd@...nel.org> wrote:
> >>
> >> b) always build with -march=i586 and leave only the -mtune
> >> flags; see if anyone cares enough to even benchmark
> >> and pick one of the other options if they can show
> >> a meaningful regression over -march=i686 -mtune=
> >
> > That's actually a good idea IMO. I looked at the code generation with
> > current compilers and it turns out that M686 is *substantially* worse
> > in code generation than M586, as apparently the extra CMOV instructions
> > bloat up the generated code:
> >
> > text data bss dec hex filename
> > 15427023 7601010 1744896 24772929 17a0141 vmlinux.M586
> > 16578295 7598826 1744896 25922017 18b89e1 vmlinux.M686
> >
> > - +7.5% increase in text size (5.6% according to bloat-o-meter),
> > - +2% increase in instruction count,
> > - while number of branches increases by +1.3%.
> >
> > But it's not about CMOV: I checked about a dozen functions that end up
> > using CMOV, and the 'conditional' part of CMOV does seem to reduce
> > branches for those functions by a minor degree and ends up reducing
> > their size as well. So CMOV helps, a bit.
> >
> > The substantial code bloat comes from some other aspect of GCC's
> > march=i686 flag ... I bet it's primarily inlining: there's a 0.7%
> > reduction in number of calls done.
>
> I had tried the same thing already, but saw a different result,
Just to clarify, my measurements only compare -march=i586 to
-march=i686, not -mtune. Your results are primarily -mtune figures.
So unless you see something different from my figures with -march only,
it's an apples-to-oranges comparison.
> There is a good chance that the -mtune= optimizations totally dwarf
> cmov not just in code size difference but also actual performance,
> the bit I'm unsure about is whether we still need to worry about any
> core where this is not the case (I'm guessing not but have no way to
> prove that).
I didn't use -mtune - I only tested two Kconfig variants:
CONFIG_M686=y vs. CONFIG_M586TSC=y
... which map to two -march flags, not different -mtune flags:
arch/x86/Makefile_32.cpu:cflags-$(CONFIG_M586TSC) += -march=i586
...
arch/x86/Makefile_32.cpu:cflags-$(CONFIG_M686) += -march=i686
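( For reference, one way to build and compare the two kernels -
  scripts/config and scripts/bloat-o-meter are the stock upstream
  tools, the exact sequence below is only a sketch: )

    $ make ARCH=i386 defconfig
    $ ./scripts/config -d M686 -e M586TSC
    $ make ARCH=i386 olddefconfig
    $ make ARCH=i386 -j$(nproc) vmlinux && cp vmlinux vmlinux.M586
    # ... repeat with '-d M586TSC -e M686' to get vmlinux.M686 ...
    $ size vmlinux.M586 vmlinux.M686
    $ ./scripts/bloat-o-meter vmlinux.M586 vmlinux.M686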
This is the current upstream status quo of x86-32 compiler flags, which
results in significant .text bloat:
text data bss dec hex filename
15427023 7601010 1744896 24772929 17a0141 vmlinux.M586
16578295 7598826 1744896 25922017 18b89e1 vmlinux.M686
- +7.5% increase in text size (+5.6% according to bloat-o-meter),
- +2% increase in instruction count,
- +1.3% increase in the number of branches,
- while there's a -0.7% reduction in the number of CALLs done.
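( Static counts like these can be approximated straight from the two
  disassemblies - a sketch, the grep patterns are only rough filters: )

    $ objdump -d vmlinux.M586 > m586.s
    $ objdump -d vmlinux.M686 > m686.s
    $ grep -cE '^ *[0-9a-f]+:' m586.s m686.s    # instructions
    $ grep -cwE 'j[a-z]{1,4}'  m586.s m686.s    # branches (Jcc + JMP)
    $ grep -cw call            m586.s m686.s    # CALLs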
I believe this is mostly the result of the increased amount of inlining
GCC 14.2.0 does at -march=i686 vs. -march=i586.
The extra CMOV use on -march=i686 helps a bit but is overwhelmed by the
effects of inlining.
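( For illustration, the classic CMOV codegen difference on a trivial
  function - a minimal sketch, exact compiler output will vary by GCC
  version: )

    /* min.c */
    int min(int a, int b)
    {
        return (a < b) ? a : b;
    }

    /*
     * gcc -m32 -O2 -march=i586 -S min.c: compare + conditional jump,
     *   since CMOV does not exist on i586.
     * gcc -m32 -O2 -march=i686 -S min.c: cmp + cmov, branchless.
     */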
Obviously these metrics cannot be automatically transformed into
performance figures, but such inlining-driven bloat almost always
reduces the kernel's performance even on CPUs with large caches, for
all but a few select 'hot' functions.
An interesting 'modern' twist: the reduced number of CALL sites due to
increased inlining is almost certainly reflected in a reduced number of
CALLs - and thus RETs - executed in real workloads as well, which would
be a disproportionately positive factor on x86 kernels and CPUs with
retbleed-style mitigations activated (which is almost all of them).
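( On kernels built with return thunks every compiler-generated RET is
  a jump through the thunk, so each CALL/RET pair that inlining
  eliminates also saves a thunk round-trip - the return sites are
  visible in the disassembly, as a sketch: )

    $ objdump -d vmlinux | grep -c 'jmp.*__x86_return_thunk'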
Thanks,
Ingo