Message-ID: <61a7e7db786d9549cbe201b153647689cbe12d75.camel@tugraz.at>
Date: Fri, 21 Feb 2025 17:28:30 +0100
From: Martin Uecker <uecker@...raz.at>
To: Dan Carpenter <dan.carpenter@...aro.org>
Cc: Greg KH <gregkh@...uxfoundation.org>, Boqun Feng <boqun.feng@...il.com>,
"H. Peter Anvin" <hpa@...or.com>, Miguel Ojeda
<miguel.ojeda.sandonis@...il.com>, Christoph Hellwig <hch@...radead.org>,
rust-for-linux <rust-for-linux@...r.kernel.org>, Linus Torvalds
<torvalds@...ux-foundation.org>, David Airlie <airlied@...il.com>,
linux-kernel@...r.kernel.org, ksummit@...ts.linux.dev
Subject: Re: Rust kernel policy
On Friday, 2025-02-21 at 12:48 +0300, Dan Carpenter wrote:
> On Thu, Feb 20, 2025 at 04:40:02PM +0100, Martin Uecker wrote:
> > I mean "memory safe" in the sense that you can not have an OOB access
> > or use-after-free or any other UB. The idea would be to mark certain
> > code regions as safe, e.g.
> >
> > #pragma MEMORY_SAFETY STATIC
>
> Could we tie this type of thing to a scope instead? Maybe there
> would be a compiler parameter to default on/off and then functions
> and scopes could be on/off if we need more fine control.
At the moment my feeling is that tying it to a specific scope
would not be flexible enough.
The model I have in mind is the pragmas GCC has to turn
diagnostics on and off for regions of code
(i.e. #pragma GCC diagnostic warning, etc.). These memory
safety modes would still be based on many individual
warnings that can then be jointly toggled using such
pragmas, but which could also be toggled individually as usual.
>
> This kind of #pragma is basically banned in the kernel. It's used
> in drivers/gpu/drm but it disables the Sparse static checker.
Why is this?
>
> > unsigned int foo_u(unsigned int a, unsigned int b)
> > {
> > 	return a * b;
> > }
> >
> > static int foo(const int a[static 2])
> > {
> > 	int r = 0;
> >
> > 	if (ckd_mul(&r, a[0], a[1]))
> > 		return -1;
> > 	return r;
> > }
> >
> > static int bar(int x)
> > {
> > 	int a[2] = { x, x };
> >
> > 	return foo(a);
> > }
> >
> >
> > and the compiler would be required to emit a diagnostic when there
> > is any operation that could potentially cause UB.
>
> I'm less convinced by the static analysis parts of this... The kernel
> disables checking for unsigned less than zero by default because there
> are too many places which do:
>
> if (x < 0 || x >= 10) {
>
> That code is perfectly fine so why is the compiler complaining? But at
> the same time, being super strict is the whole point of Rust and people
> love Rust so maybe I have misread the room.
What is a bit weird is that, on the one side, there are people
who think we absolutely need compiler-ensured memory safety,
and that this might even be worth rewriting code from scratch,
while on the other side there are people who think that dealing
with new false positives in existing code when adding new
warnings is already too much of a burden.
> >
> > I would also have a DYNAMIC mode that traps for UB detected at
> > run-time (but I understand that this is not useful for the kernel).
>
> No, this absolutely is useful. This is what UBSan does now.
>
Yes, it is similar to UBSan. The idea is to make sure that in this
mode there is *either* a compile-time warning *or* a run-time
trap for any UB. So if you fix all warnings, then any remaining
UB is trapped at run-time.
> You're
> basically talking about exception handling. How could that not be
> the most useful thing ever?
At the moment, I wasn't thinking about a mechanism to catch those
exceptions, but just to abort the program directly (or to emit
a diagnostic and continue).
BTW: Another option I am investigating is to have UBSan insert traps
into the code and then have the compiler emit a warning only when
it actually emits the trapping instruction after optimization. So
you only get the warning if the optimizer does not remove the trap.
Essentially, this means that one can use the optimizer to prove that
the code does not have certain issues. For example, you could use the
signed-overflow sanitizer to insert a conditional trap everywhere
signed overflow could occur, and if the optimizer happens to remove
all such traps because they are unreachable, then it has shown
that the code can never have a signed overflow at run-time.
This is super easy to implement (I have a patch for GCC) and
seems promising. One problem with this is that any change in the
optimizer could change whether you get a warning or not.
Martin
>
> regards,
> dan carpenter
>