Message-ID: <33jax5mmu7mdt6ph5t5bb7fvprbypxhefrvgrc2ru4p2dpqldg@d6af6oc6442r>
Date: Tue, 8 Jul 2025 09:54:06 -0300
From: Wander Lairson Costa <wander@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>, 
	Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>, 
	Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, 
	Valentin Schneider <vschneid@...hat.com>, Masami Hiramatsu <mhiramat@...nel.org>, 
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, David Woodhouse <dwmw@...zon.co.uk>, 
	Thomas Gleixner <tglx@...utronix.de>, Boqun Feng <boqun.feng@...il.com>, 
	open list <linux-kernel@...r.kernel.org>, "open list:TRACING" <linux-trace-kernel@...r.kernel.org>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Clark Williams <williams@...hat.com>, 
	Gabriele Monaco <gmonaco@...hat.com>
Subject: Re: [PATCH v3 2/2] tracing/preemptirq: Optimize
 preempt_disable/enable() tracepoint overhead

On Mon, Jul 07, 2025 at 01:20:03PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 04, 2025 at 02:07:43PM -0300, Wander Lairson Costa wrote:
> > Similar to the IRQ tracepoint, the preempt tracepoints are typically
> > disabled in production systems due to the significant overhead they
> > introduce even when not in use.
> > 
> > The overhead primarily comes from two sources: First, when tracepoints
> > are compiled into the kernel, preempt_count_add() and preempt_count_sub()
> > become external function calls rather than inlined operations. Second,
> > these functions perform unnecessary preempt_count() checks even when the
> > tracepoint itself is disabled.
> > 
> > This optimization introduces an early check of the tracepoint static key,
> > which allows us to skip both the function call overhead and the redundant
> > preempt_count() checks when tracing is disabled. The change maintains all
> > existing functionality when tracing is active while significantly
> > reducing overhead for the common case where tracing is inactive.
> > 
> 
> This one in particular I worry about the code gen impact. There are a
> *LOT* of preempt_{dis,en}able() sites in the kernel and now they all get
> this static branch and call crud on.
> 
> We spend significant effort to make preempt_{dis,en}able() as small as
> possible.
> 

Thank you for the feedback; it's much appreciated. I just want to make sure
I'm on the right track. If I understand correctly, your concern is about the
overhead this patch adds when the kernel is built with preempt tracepoints
enabled, specifically the growth in binary size and its effect on the iCache.
Is that an accurate summary?

