Message-ID: <CANn89i+MiCZdTJJRaKHTkmu7DTuXVvqgCLi+YBpx3UqybOZ5zA@mail.gmail.com>
Date: Fri, 5 Dec 2025 05:03:33 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>, Andy Lutomirski <luto@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>, Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH] entry: always inline local_irq_{enable,disable}_exit_to_user()
On Fri, Dec 5, 2025 at 4:45 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Dec 05, 2025 at 02:54:26AM -0800, Eric Dumazet wrote:
> > On Fri, Dec 5, 2025 at 2:51 AM Peter Zijlstra <peterz@...radead.org> wrote:
> > >
> > > On Thu, Dec 04, 2025 at 03:31:27PM +0000, Eric Dumazet wrote:
> > > > clang needs __always_inline instead of inline, even for tiny helpers.
> > > >
> > > > This saves some cycles in system call fast path, and saves 195 bytes
> > > > on x86_64 build:
> > > >
> > > > $ size vmlinux.before vmlinux.after
> > > > text data bss dec hex filename
> > > > 34652814 22291961 5875180 62819955 3be8e73 vmlinux.before
> > > > 34652619 22291961 5875180 62819760 3be8db0 vmlinux.after
> > > >
> > > > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > >
> > > Yeah, sometimes these inline heuristics drive me mad. I've picked up
> > > this and the rseq one. I'll do something with them after rc1.
> >
> > Thanks Peter.
> >
> > I forgot to include perf numbers for this one, but apparently having a
> > local_irq_enable() in an out-of-line function in the syscall path was
> > adding a 5% penalty on some platforms.
> >
> > Crazy...
>
> Earlier Zen with RET mitigation? ;-)
This was AMD Rome ("AMD EPYC 7B12 64-Core Processor"),
but also AMD Turin ("AMD EPYC 9B45 128-Core Processor") to a certain extent.

When you say RET mitigation, do you mean the five int3 after retq?
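
For reference, the change boils down to forcing these two tiny helpers
inline so clang cannot emit them as standalone functions on the
exit-to-user path. A minimal sketch of the resulting code (the #ifndef
override guards and the ti_work argument are assumptions on my side,
not copied verbatim from the patch):

/*
 * Sketch only: mark the helpers __always_inline instead of inline so
 * the compiler keeps them inlined at every call site in the generic
 * entry code.
 */
#ifndef local_irq_enable_exit_to_user
static __always_inline void local_irq_enable_exit_to_user(unsigned long ti_work)
{
        local_irq_enable();
}
#endif

#ifndef local_irq_disable_exit_to_user
static __always_inline void local_irq_disable_exit_to_user(void)
{
        local_irq_disable();
}
#endif

With plain "inline", clang was free to keep them out of line, so every
syscall exit paid a call/ret (plus whatever the RET mitigation adds) on
top of the actual irq flag update.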