Message-ID: <CAFULd4Yuhb-BbV9LAJ+edMRGEi2kTYfcq70=TTMaSXP3oxwfQQ@mail.gmail.com>
Date: Thu, 6 Mar 2025 14:07:03 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: David Laight <david.laight.linux@...il.com>
Cc: Linus Torvalds <torvalds@...uxfoundation.org>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...el.com>, x86@...nel.org, linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH -tip] x86/locking/atomic: Use asm_inline for atomic
locking insns
On Thu, Mar 6, 2025 at 11:45 AM Uros Bizjak <ubizjak@...il.com> wrote:
>
> On Wed, Mar 5, 2025 at 9:14 PM David Laight
> <david.laight.linux@...il.com> wrote:
> >
> > On Wed, 5 Mar 2025 07:04:08 -1000
> > Linus Torvalds <torvalds@...uxfoundation.org> wrote:
> >
> > > On Tue, 4 Mar 2025 at 22:54, Uros Bizjak <ubizjak@...il.com> wrote:
> > > >
> > > > Even to my surprise, the patch has some noticeable effects on the
> > > > performance, please see the attachment in [1] for LMBench data or [2]
> > > > for some excerpts from the data. So, I think the patch has potential
> > > > to improve the performance.
> > >
> > > I suspect some of the performance difference - which looks
> > > unexpectedly large - is due to having run them on a CPU with the
> > > horrendous indirect return costs, and then inlining can make a huge
> > > difference.
> > ...
> >
> > Another possibility is that the processes are getting bounced around
> > cpu in a slightly different way.
> > An idle cpu might be running at 800MHz, run something that spins on it
> > and the clock speed will soon jump to 4GHz.
> > But if your 'spinning' process is migrated to a different cpu it starts
> > again at 800MHz.
> >
> > (I had something where an FPGA compile went from 12 mins to over 20 because
> > the kernel RSB stuffing caused the scheduler to behave differently even
> > though nothing was doing a lot of system calls.)
> >
> > All sorts of things can affect that - possibly even making some code faster!
> >
> > The (IIRC) 30k increase in code size will be a few functions being inlined.
> > The bloat-o-meter might show which, and forcing a few inlines the same way
> > should reduce that difference.
>
> bloat-o-meter is an excellent idea, I'll analyse binaries some more
> and report my findings.
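The comparison below was generated with the kernel's
scripts/bloat-o-meter, run on vmlinux binaries built without and with
the patch; roughly the following, with vmlinux-old/vmlinux-new as
placeholder names:

  ./scripts/bloat-o-meter vmlinux-old vmlinux-new > bloat.txt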
Please find attached bloat.txt where:
a) some functions now have once-called functions inlined into them (a
sketch of the mechanism follows the list). The growing functions are:
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
and they now include the following functions, which shrank or
disappeared as standalone symbols:
ext4_mark_iloc_dirty 1735 106 -1629
samsung_gamepad_input_mapping.isra 2046 - -2046
icl_program_input_csc 2203 - -2203
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
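To illustrate the mechanism with made-up names: once the asm size
estimates shrink, GCC is willing to inline a static function with a
single call site, so the callee disappears as a standalone symbol and
its caller grows by roughly the callee's size:

static void helper(int *v)	/* single call site */
{
	*v += 42;
}

void process(int *v)
{
	helper(v);	/* now inlined; "helper" drops out of the symbol table */
}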
b) ISRA (interprocedural scalar replacement of aggregates): the GCC
interprocedural pass that removes unused function return values
(turning functions whose return value is never used into void
functions) and removes unused function parameters. It can also
replace an aggregate parameter with a set of parameters representing
parts of the original, turning arguments passed by reference into
ones that pass the value directly (a hypothetical sketch follows the
list):
ext4_do_update_inode.isra - 1526 +1526
nfs4_begin_drain_session.isra - 249 +249
nfs4_end_drain_session.isra - 168 +168
__guc_action_register_multi_lrc_v70.isra 335 500 +165
__i915_gem_free_objects.isra - 144 +144
...
membarrier_register_private_expedited.isra 108 - -108
syncobj_eventfd_entry_func.isra 445 314 -131
__ext4_sb_bread_gfp.isra 140 - -140
class_preempt_notrace_destructor.isra 145 - -145
p9_fid_put.isra 151 - -151
__mm_cid_try_get.isra 238 - -238
membarrier_global_expedited.isra 294 - -294
mm_cid_get.isra 295 - -295
samsung_gamepad_input_mapping.isra.cold 604 - -604
samsung_gamepad_input_mapping.isra 2046 - -2046
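A hypothetical sketch of an ISRA transformation, with made-up names:

struct config { int factor; int other; };

static int scale(struct config *cfg, int *v)	/* return value unused */
{
	*v *= cfg->factor;	/* only one scalar field of *cfg is read */
	return *v;
}

void apply(struct config *cfg, int *v)
{
	scale(cfg, v);	/* GCC may emit a clone scale.isra.0(int factor, int *v)
			   that returns void and takes the scalar by value */
}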
c) different hot/cold split points that just move code around (the
mechanism is sketched after the list):
samsung_input_mapping.cold 900 1500 +600
__i915_request_reset.cold 311 389 +78
nfs_update_inode.cold 77 153 +76
__do_sys_swapon.cold 404 455 +51
copy_process.cold - 45 +45
tg3_get_invariants.cold 73 115 +42
...
hibernate.cold 671 643 -28
copy_mm.cold 31 - -31
software_resume.cold 249 207 -42
io_poll_wake.cold 106 54 -52
samsung_gamepad_input_mapping.isra.cold 604 - -604
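The mechanism, sketched with made-up names: GCC moves paths it
considers unlikely into a separate func.cold fragment (placed in
.text.unlikely), and changed inlining decisions can shift where that
split falls:

extern void handle_error(int err);
extern void do_work(void);

void process(int err)
{
	if (__builtin_expect(err != 0, 0)) {
		/* unlikely path: may be split out as process.cold */
		handle_error(err);
	}
	do_work();
}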
d) full inlining of small functions containing a locking insn (~150
cases). These bring in most of the performance increase, because there
is no call setup at all. E.g.:
0000000000a50e10 <release_devnum>:
a50e10: 48 63 07 movslq (%rdi),%rax
a50e13: 85 c0 test %eax,%eax
a50e15: 7e 10 jle a50e27 <release_devnum+0x17>
a50e17: 48 8b 4f 50 mov 0x50(%rdi),%rcx
a50e1b: f0 48 0f b3 41 50 lock btr %rax,0x50(%rcx)
a50e21: c7 07 ff ff ff ff movl $0xffffffff,(%rdi)
a50e27: e9 00 00 00 00 jmp a50e2c <release_devnum+0x1c>
a50e28: R_X86_64_PLT32 __x86_return_thunk-0x4
a50e2c: 0f 1f 40 00 nopl 0x0(%rax)
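For reference, the C source this corresponds to (release_devnum() in
drivers/usb/core/hub.c) is roughly the following; the lock btr above
is the inlined clear_bit():

static void release_devnum(struct usb_device *udev)
{
	if (udev->devnum > 0) {
		clear_bit(udev->devnum, udev->bus->devmap.devicemap);
		udev->devnum = -1;
	}
}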
IMO, for a 0.14% code size increase, these changes are desirable.
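For context, the underlying mechanism: gcc estimates the size of an
asm() statement from the length of the template string, so the
multi-line LOCK_PREFIX/alternatives boilerplate makes functions with
atomic insns look much bigger to the inliner than the code they
actually emit. asm_inline (mapped to gcc's "asm inline", where
available) makes the statement count as minimum size for the inlining
heuristics. A simplified sketch, not the actual kernel macros:

static __always_inline void my_atomic_inc(int *v)
{
	asm_inline volatile("lock incl %0" : "+m" (*v));
}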
Thanks,
Uros.