Message-ID: <dcc3a509-d8fd-4f46-8051-683d7277fde7@linux.dev>
Date: Fri, 9 Jan 2026 11:19:55 -0800
From: Yonghong Song <yonghong.song@...ux.dev>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>,
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Steven Rostedt <rostedt@...dmis.org>, LKML
 <linux-kernel@...r.kernel.org>,
 Linux trace kernel <linux-trace-kernel@...r.kernel.org>,
 bpf <bpf@...r.kernel.org>, Masami Hiramatsu <mhiramat@...nel.org>,
 "Paul E. McKenney" <paulmck@...nel.org>,
 Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v5] tracing: Guard __DECLARE_TRACE() use of
 __DO_TRACE_CALL() with SRCU-fast



On 1/9/26 11:10 AM, Alexei Starovoitov wrote:
> On Fri, Jan 9, 2026 at 6:45 AM Mathieu Desnoyers
> <mathieu.desnoyers@...icios.com> wrote:
>> On 2026-01-08 22:05, Steven Rostedt wrote:
>>> From: "Paul E. McKenney" <paulmck@...nel.org>
>> [...]
>>
>> I disagree with many elements of the proposed approach.
>>
>> On one end we have BPF wanting to hook on arbitrary tracepoints without
>> adding significant latency to PREEMPT RT kernels.
>>
>> On the other hand, we have high-speed tracers which execute very short
>> critical sections to serialize trace data into ring buffers.
>>
>> All of those users register to the tracepoint API.
>>
>> We also have to consider that migrate disable is *not* cheap at all
>> compared to preempt disable.
> Looks like your complaint comes from lack of engagement in kernel
> development.
> migrate_disable _was_ not cheap.
> Try to benchmark it now.
> It's inlined. It's a fraction of extra overhead on top of preempt_disable.
>
The following are the related patches that inlined migrate_disable():

35561bab768977c9e05f1f1a9bc00134c85f3e28 arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c
88a90315a99a9120cd471bf681515cc77cd7cdb8 rcu: Replace preempt.h with sched.h in include/linux/rcupdate.h
378b7708194fff77c9020392067329931c3fcc04 sched: Make migrate_{en,dis}able() inline
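
To make the comparison concrete, below is a minimal illustrative sketch of the
two guards being discussed. The probe name and its body are hypothetical and
not taken from the patch; it only shows the usage pattern, not the actual
tracepoint plumbing:

	#include <linux/preempt.h>	/* preempt_disable()/preempt_enable() */
	#include <linux/sched.h>	/* migrate_disable()/migrate_enable() */

	/* Hypothetical tracepoint-style callback contrasting the two guards. */
	static void example_probe(void *data)
	{
		/*
		 * A high-speed tracer keeps its critical section short and has
		 * traditionally used preempt_disable() around the ring-buffer
		 * write.
		 */
		preempt_disable();
		/* ... serialize the trace record into the ring buffer ... */
		preempt_enable();

		/*
		 * On PREEMPT_RT, callers such as BPF use migrate_disable(),
		 * which only pins the task to the current CPU and leaves it
		 * preemptible; with the commits above it is inlined, so its
		 * cost is a small increment over preempt_disable().
		 */
		migrate_disable();
		/* ... run the attached program ... */
		migrate_enable();
	}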

