Message-ID: <20241215214034.GE2472262@mit.edu>
Date: Sun, 15 Dec 2024 16:40:34 -0500
From: "Theodore Ts'o" <tytso@....edu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Steven Rostedt <rostedt@...dmis.org>, LKML <linux-kernel@...r.kernel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Al Viro <viro@...iv.linux.org.uk>, Michal Simek <monstr@...str.eu>
Subject: Re: [GIT PULL] ftrace: Fixes for v6.13
On Sun, Dec 15, 2024 at 09:23:18AM -0800, Linus Torvalds wrote:
>
> You are literally mis-using va_list. The code is *wrong*. It depends
> on the exact calling convention of varargs, and just happens to work
> on many platforms.
It seems to me that the disagreement is fundamentally about whether we
can depend on implementation details, or only on the formal
abstraction of interfaces like varargs.  One school of thought is that
we should depend only on the formally defined abstraction, so that we
are proof against random breakage caused by compilers trying to win
benchmarketing wars by proving that they are 0.001% faster, because
that makes a big difference when you can advertise it on the back
cover of Businessweek.  (OK, that's really more a trick that
enterprise databases play, but you get the point.)
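(For concreteness, the formally defined contract looks something like
this -- a minimal userspace sketch, not the ftrace code in question.
The only portable operations on a va_list are va_start, va_arg,
va_copy, and va_end; anything that peeks behind them is betting on the
ABI:)

    #include <stdarg.h>
    #include <stdio.h>

    /*
     * Illustrative only.  Copies of a va_list must be made with
     * va_copy(), never by assignment, because on some ABIs (x86-64
     * SysV, for one) va_list is an array holding register-save
     * state, not a simple pointer into the argument area.
     */
    static void print_twice(const char *fmt, va_list args)
    {
        va_list copy;

        va_copy(copy, args);    /* portable: explicit copy */
        vprintf(fmt, args);     /* first traversal consumes args */
        vprintf(fmt, copy);     /* second traversal uses the copy */
        va_end(copy);
    }

    static void report(const char *fmt, ...)
    {
        va_list args;

        va_start(args, fmt);
        print_twice(fmt, args);
        va_end(args);
    }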
The other school of thought is that when we are trying to squeeze out
every last cycle of performance (because *we* are the ones engaging in
benchmarketing wars), it's fair game to depend on implementation
details if doing so gets us a sufficiently large performance
advantage, or if it allows us to preserve interface semantics (perhaps
semantics that we imprudently guaranteed when we or the code were
younger and more foolish, but that we really don't want to break for
the programs which now depend on them).
I've been on both sides of this debate, although when I take the
second position, it's often because I know something specific about my
operating environment (such as the fact that $WORK's data centers will
*never* need to worry about big-endian systems, or some such).  I
*have* gotten in trouble when I do this, so these days I insist on
documenting, with big red flags, which abstractions I am violating and
the justification for doing so, and on adding tests that check that
the assumptions I am making won't suddenly break with a new version of
the compiler, or when someone who might not know about the terrible
assumptions we are making tries to do something like introduce Rust
bindings.
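(As a concrete -- and entirely hypothetical -- example of such a
guard: the structure below is made up, but the technique is just C11
static_assert; in the kernel one would reach for BUILD_BUG_ON() or the
static_assert() wrapper from <linux/build_bug.h>.  The point is that a
new compiler or ABI then breaks the build loudly instead of the data
silently:)

    #include <assert.h>    /* C11 static_assert */

    /*
     * Hypothetical on-disk record whose readers assume this exact
     * 16-byte layout.  If a compiler or ABI change ever repacks it,
     * fail the build rather than corrupt the data.
     */
    struct on_disk_rec {
        unsigned int       magic;
        unsigned int       flags;
        unsigned long long offset;
    };

    static_assert(sizeof(struct on_disk_rec) == 16,
                  "on-disk record layout changed; audit every reader");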
I'm not convinced the tradeoff is justified in this particular case,
so I think I side with Linus here; maybe all of this hackery just
isn't worth it?  Steven, what am I missing?  Why did we go down this
particular path in the first place?  I assume there must have been
something that seemed like a good reason at the time?
- Ted