Date:   Sun, 10 Dec 2023 12:28:32 -0500
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH] tracing: Allow for max buffer data size trace_marker
 writes

On 2023-12-10 11:38, Steven Rostedt wrote:
> On Sun, 10 Dec 2023 11:07:22 -0500
> Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> 
>>> It just allows more to be written in one go.
>>>
>>> I don't see why the tests need to cover this or detect this change.
>>
>> If the purpose of this change is to ensure that the entire
>> trace marker payload is shown within a single event, then
>> there should be a test which exercises this, and which
>> validates that the end result is that the entire payload
>> is indeed shown within a single event record.
> 
> No, the purpose of the change is not to do that, because there can always
> be a bigger trace marker write than a single event can hold. This is the
> way it has always worked. This is an optimization or "enhancement". The 1KB
> restriction was actually because of a previous implementation years ago
> (before selftests even existed) that wrote into a temp buffer before
> copying into the ring buffer. But since we now can copy directly into the
> ring buffer, there's no reason not to use the maximum that the ring buffer
> can accept.

My point is that the difference between the new "enhanced" behavior
and the previous behavior is not tested for.

> 
>>
>> Otherwise there is no permanent validation that this change
>> indeed does what it is intended to do, so it can regress
>> at any time without any test noticing it.
> 
> What regression? The amount of a trace_marker write that can make it into
> the buffer in one go? Now, I agree that we should have a test to make sure
> that all of the trace marker write gets into the buffer.

Yes. This is pretty much my point.
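
For what it's worth, here is a minimal sketch of the kind of check I have
in mind. The tracefs path, payload size and pass/fail criterion are
illustrative assumptions, not the actual ftrace selftest:

/*
 * Illustrative only: write a payload larger than the old 1KB limit to
 * trace_marker and check whether the whole string shows up on a single
 * line of the trace output.  Paths and sizes are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define TRACEFS     "/sys/kernel/tracing"
#define PAYLOAD_LEN 2000		/* above the old 1KB temp-buffer limit */

int main(void)
{
	char payload[PAYLOAD_LEN + 1], line[8192];
	FILE *trace;
	int fd, found = 0;

	memset(payload, 'A', PAYLOAD_LEN);
	payload[PAYLOAD_LEN] = '\0';

	fd = open(TRACEFS "/trace_marker", O_WRONLY);
	if (fd < 0)
		return 1;
	if (write(fd, payload, PAYLOAD_LEN) != PAYLOAD_LEN)
		perror("short write");	/* a short/truncated write is itself a finding */
	close(fd);

	trace = fopen(TRACEFS "/trace", "r");
	if (!trace)
		return 1;
	while (fgets(line, sizeof(line), trace)) {
		if (strstr(line, payload)) {	/* full payload within one event line */
			found = 1;
			break;
		}
	}
	fclose(trace);

	printf("full %d-byte payload in a single event: %s\n",
	       PAYLOAD_LEN, found ? "yes" : "no");
	return found ? 0 : 1;
}

Run as root with tracing enabled; the point is simply that a regression
in the new code path would flip the result, which nothing flags today.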


> But it's always
> been allowed to break up that write however it wanted to.

And the enhanced behavior increases the amount of data that can be
written into a single sub-buffer, and that increase is not tested.

> 
> Note, because different architectures have different page sizes, how much
> can make it in one go is architecture dependent. So you can have a
> "regression" by simply running your application on a different architecture.

Which is why, in your follow-up patches, expressing the sub-buffer size
in bytes rather than in pages is important at the ABI level: it
facilitates portability of tests and reduces the documentation / user
burden.
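
As a sketch of what a byte-based ABI buys us, a portable test could
derive its upper bound for a single write from one tracefs file instead
of getpagesize(). The file name below (buffer_subbuf_size_kb) and the
per-event overhead constant are assumptions based on the proposed
patches, not the final interface:

/*
 * Sketch only: derive a portable upper bound for one trace_marker write
 * from a byte-based sub-buffer size ABI.  The tracefs file name and the
 * overhead constant are assumptions, not the final interface.
 */
#include <stdio.h>
#include <unistd.h>

#define EVENT_OVERHEAD_GUESS 64	/* assumed headroom for ring-buffer/event headers */

static long subbuf_size_bytes(void)
{
	FILE *f = fopen("/sys/kernel/tracing/buffer_subbuf_size_kb", "r");
	long kb = 0;

	if (!f)
		return sysconf(_SC_PAGESIZE);	/* fall back to the page size */
	if (fscanf(f, "%ld", &kb) != 1)
		kb = 4;
	fclose(f);
	return kb * 1024;
}

int main(void)
{
	printf("portable upper bound for one trace_marker write: %ld bytes\n",
	       subbuf_size_bytes() - EVENT_OVERHEAD_GUESS);
	return 0;
}

A test written against such a bound needs no per-architecture special
casing, which is exactly the portability argument above.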

> Again, it's not a requirement, it's just an enhancement.

How does that justify not testing the new behavior? If the new behavior
has a bug that causes it to silently truncate trace marker payloads,
how would the current tests catch it?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
