Message-ID: <20250708185412.GC477119@noisy.programming.kicks-ass.net>
Date: Tue, 8 Jul 2025 20:54:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Wander Lairson Costa <wander@...hat.com>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Thomas Gleixner <tglx@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>,
open list <linux-kernel@...r.kernel.org>,
"open list:TRACING" <linux-trace-kernel@...r.kernel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Clark Williams <williams@...hat.com>,
Gabriele Monaco <gmonaco@...hat.com>
Subject: Re: [PATCH v3 2/2] tracing/preemptirq: Optimize
preempt_disable/enable() tracepoint overhead
On Tue, Jul 08, 2025 at 09:54:06AM -0300, Wander Lairson Costa wrote:
> On Mon, Jul 07, 2025 at 01:20:03PM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 04, 2025 at 02:07:43PM -0300, Wander Lairson Costa wrote:
> > > Similar to the IRQ tracepoint, the preempt tracepoints are typically
> > > disabled in production systems due to the significant overhead they
> > > introduce even when not in use.
> > >
> > > The overhead primarily comes from two sources: First, when tracepoints
> > > are compiled into the kernel, preempt_count_add() and preempt_count_sub()
> > > become external function calls rather than inlined operations. Second,
> > > these functions perform unnecessary preempt_count() checks even when the
> > > tracepoint itself is disabled.
> > >
> > > This optimization introduces an early check of the tracepoint static key,
> > > which allows us to skip both the function call overhead and the redundant
> > > preempt_count() checks when tracing is disabled. The change maintains all
> > > existing functionality when tracing is active while significantly
> > > reducing overhead for the common case where tracing is inactive.
> > >
> >
> > This one in particular I worry about the code gen impact. There are a
> > *LOT* of preempt_{dis,en}able() sites in the kernel and now they all get
> > this static branch and call crud on.
> >
> > We spend significant effort to make preempt_{dis,en}able() as small as
> > possible.
> >
>
> Thank you for the feedback, it's much appreciated. I just want to make sure
> I'm on the right track. If I understand your concern correctly, it revolves
> around the overhead this patch might introduce, specifically to the binary
> size and its effect on the iCache, when the kernel is built with preempt
> tracepoints enabled. Is that an accurate summary?
Yes, specifically:

preempt_disable()
	incl	%gs:__preempt_count

preempt_enable()
	decl	%gs:__preempt_count
	jz	do_schedule
1:	...

do_schedule:
	call	__SCT__preemptible_schedule
	jmp	1b

your proposal adds significantly to this.