Message-ID: <20240304184725.55449e70@gandalf.local.home>
Date: Mon, 4 Mar 2024 18:47:25 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Masami Hiramatsu
 <mhiramat@...nel.org>, Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Sachin Sant <sachinp@...ux.ibm.com>
Subject: Re: [GIT PULL] tracing: Prevent trace_marker being bigger than
 unsigned short

On Mon, 4 Mar 2024 15:20:52 -0800
Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Mon, 4 Mar 2024 at 14:08, Steven Rostedt <rostedt@...dmis.org> wrote:
> >
> > Fine, I'll just remove the precision as that's not needed. There were no
> > other overflows involved here.
> 
> I really want you to add the size check on the trace buffer *creation* side.
> 
> I don't understand why you refuse to accept the fact that the
> precision warning found a PROBLEM.

And what exactly was that problem? That someone wrote a huge string into
the trace_marker on a machine that had huge page sizes?

What exactly broke?

There was no overflow. No bad memory. KASAN is happy.
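
(For anyone following along, the report boils down to roughly this userspace
sketch; the path and sizes are illustrative, not Sachin's actual test case:)

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        static char buf[64 * 1024];
        int fd = open("/sys/kernel/tracing/trace_marker", O_WRONLY);

        if (fd < 0)
                return 1;
        memset(buf, 'A', sizeof(buf));
        /*
         * On a 64K PAGE_SIZE machine the marker buffer can hold nearly
         * all of this, and the resulting length later exceeds the
         * 16-bit precision field vsnprintf keeps internally for "%.*s";
         * that is what fired the warning. No memory was corrupted.
         */
        if (write(fd, buf, sizeof(buf)) < 0)
                return 1;
        close(fd);
        return 0;
}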


> 
> And no, the fix was never to paper over the problem by limiting the
> precision field. Hiding a problem isn't fixing it.

What was broken? The fact that we allowed the buffer to be 64K on a system
with a 64K PAGE_SIZE, and that we limited the strings to what the buffer can
hold?

I would limit the buffers themselves, but they are used in page mappings,
which need to be PAGE_SIZE, and in most cases that is only 4K.
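
(Conceptually the write-side bound amounts to no more than this; the names
here are stand-ins, not the in-tree identifiers:)

#include <stddef.h>

/*
 * The marker buffer is sized from PAGE_SIZE (4K on most systems, 64K on
 * e.g. ppc64), so a write is simply cut down to what the buffer can
 * hold.  MARKER_BUF_SIZE is a hypothetical name for illustration.
 */
#define MARKER_BUF_SIZE 4096

static size_t clamp_marker_write(size_t cnt)
{
        return cnt > MARKER_BUF_SIZE ? MARKER_BUF_SIZE : cnt;
}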

> 
> And no, the fix was also never to chop up the printing of the string
> in smaller pieces to paper over the precision field. Again,
> hiding a problem isn't fixing it.

Yes, I didn't like that change either.

> 
> And finally, NO, the fix was also never to add extra debug code to see
> that there was a NUL character there.

That was not a fix, just a paranoid check. My paranoid checks usually only
trigger during development, where they save a few hours of debugging by
pointing out exactly what broke.

> 
> The fix was *always* to simply not accept insanely long strings in the
> first place, and make sure that the field was correctly *set*.

And what exactly is an "insane size"?

The buffer is allocated by page size, and when writing to it I make sure the
write fits. I'm now supposed to add code to tell the user, "No, you can't
write that much into the buffer, even though we allocated enough to fit it"?

If I had never added that precision (which I just recently added), there
would have been no bug report, because there would have been no bug.
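
(The precision in question is the "%.*s" field used when the marker text is
printed back out; the pattern is roughly the following, with illustrative
names, not the actual tracing code:)

#include <stdio.h>

/*
 * Print a buffer that may not be NUL-terminated by passing an explicit
 * precision.  With a ~64K string the precision no longer fits the
 * 16-bit field vsnprintf uses internally, hence the new warning;
 * without the precision, printing simply stops at the NUL and nothing
 * ever complains.
 */
static void print_marker(const char *buf, int len)
{
        printf("%.*s\n", len, buf);
}

int main(void)
{
        print_marker("hello world", 5); /* prints "hello" */
        return 0;
}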


> 
> IOW, at *creation* time the code needed a proper check for length
> (which obviously indirectly includes checking for the terminating NUL
> character at that point).
> 
> Why do these threads with you always have to end up this long? Why do
> I have to explain every single step of the way that you need to *FIX*
> the problem, not try to hide it with new extra code.

Because I still don't know what exactly the problem is! I'm supposed to add
some arbitrary limit to what people can enter just because we think it's
crazy to have big strings?

If I remove the precision, as I did in this patch, then adding a limit is
not fixing any bug. It's just adding a limit.
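
(For completeness, the check Linus is asking for would sit at write time and
reject rather than clamp; a sketch with a made-up limit, not the eventual
patch:)

#include <errno.h>
#include <stddef.h>

/*
 * Hypothetical cap; a real limit would be derived from what the output
 * path can represent, e.g. a value that fits the 16-bit precision field.
 */
#define MARKER_MAX_WRITE 4096

static long check_marker_write(size_t cnt)
{
        if (cnt > MARKER_MAX_WRITE)
                return -EINVAL; /* refuse oversized writes up front */
        return (long)cnt;
}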

-- Steve
