Message-ID: <20cbb02b-762f-4a3f-ba40-aae018388b3b@yandex-team.ru>
Date: Sat, 13 Sep 2025 20:38:24 +0300
From: Daniil Tatianin <d-tatianin@...dex-team.ru>
To: John Ogness <john.ogness@...utronix.de>, Petr Mladek <pmladek@...e.com>
Cc: linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH v2 0/2] printk_ringbuffer: don't needlessly wrap data
blocks around
On 9/12/25 9:43 PM, John Ogness wrote:
> Hi Petr,
>
> Summary: printk() is not in danger but we should correct a loose bounds
> check.
>
> On 2025-09-12, Petr Mladek <pmladek@...e.com> wrote:
>> Honestly, I would really like to limit the maximal record size to
>> 1/4 of the buffer size. I do not want to make the design more
>> complicated just to be able to fill just one record, definitely.
> So I was able to track this down. Your usage of
>
> DEFINE_PRINTKRB(test_rb, 4, 4);
>
> actually made it relatively easy because there are only 16
> descriptors. All I needed to do was dump the descriptors before each
> reserve, between reserve and commit, after commit, and when reserve
> fails. This allowed me to easily see exactly how the ringbuffer is
> behaving.
>
> The problem can be reproduced with a single writer, no reader
> needed. Using
>
> #define MAX_RBDATA_TEXT_SIZE (0x256 - sizeof(struct prbtest_rbdata))
>
> provides a wild range of attempts that trigger the problem within about
> 20 write cycles.
>
> The problem comes from the function data_make_reusable(). The job of
> this function is to push the data_ring tail forward, one data block at a
> time, while setting the related descriptors to reusable.
>
> After pushing the tail forward, if it still has not pushed it far enough
> for the newly requested reservation, it must push it further. For this it
> _assumes the data at the current tail position is the descriptor ID of the
> next data block_. But what if the tail was pushed all the way to the
> head? Then there is no next data block and it will read in garbage,
> thinking it is the next descriptor ID to set reusable. And from there it
> just goes crazy because it is reading garbage to determine how big the
> data block is so that it can continue pushing the tail (beyond the
> head!).
>
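
To make that assumption concrete, here is a small self-contained toy of
the walk (not the kernel code; the block/descriptor layout, the names,
and the lookup below are simplified stand-ins): the loop's only source
of truth is the word it reads at the tail, and it never compares the
tail against the head.

#include <stdio.h>
#include <string.h>

#define RING_SIZE       96

struct toy_desc {
        size_t begin, next;     /* block owned by this descriptor: [begin, next) */
};

static unsigned char ring[RING_SIZE];   /* zeroed, like a fresh .bss buffer */
static struct toy_desc descs[16];
static size_t tail, head;

/*
 * Advance the tail to @target, trusting whatever header sits at the tail.
 * (Toy simplification: headers never straddle the ring end here.)
 */
static void make_reusable_upto(size_t target)
{
        while (tail < target) {
                unsigned long id;

                /* The word at the tail is assumed to be a descriptor ID. */
                memcpy(&id, &ring[tail % RING_SIZE], sizeof(id));
                if (id >= 16 || descs[id].begin != tail) {
                        printf("lpos %zu: word %lu matches no block "
                               "(garbage, like a desc_miss)\n", tail, id);
                        return;
                }
                printf("lpos %zu: marking descriptor %lu reusable\n", tail, id);
                tail = descs[id].next;  /* jump to where the next block should begin */
        }
}

int main(void)
{
        unsigned long id = 3;           /* one 64-byte block, owned by descriptor 3 */

        descs[id].begin = 0;
        descs[id].next = 64;
        memcpy(&ring[0], &id, sizeof(id));
        head = 64;

        make_reusable_upto(72);         /* needs the tail pushed past the head */
        printf("tail=%zu head=%zu (the walk never looked at head)\n", tail, head);
        return 0;
}

With a zeroed ring the toy stops right at the head because the word there
matches nothing; with stale bytes left over from earlier blocks it could
just as well chase a bogus "next" position past the head.
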
> Example: Assume the 96-byte ringbuffer has a single message of 64
> bytes. Then we try to reserve space for a 72-byte
> message. data_make_reusable() will first set the descriptor of the
> 64-byte message to reusable and push the tail forward to index 64. But
> the new message needs 72 bytes, so data_make_reusable() will keep going
> and read the descriptor ID at index 64, but there is only random garbage
> at that position. 64 is the head and there is nothing valid after it.
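
For what it's worth, a back-of-the-envelope for why the tail has to go
past the head here (DATA_SIZE and next_lpos() below are made-up
stand-ins that only loosely mimic the wrap placement, not the kernel's
own macros): a 72-byte block starting at logical position 64 would cross
the 96-byte wrap boundary, so it gets placed at the start of the next
wrap, and for it to fit the tail must reach the block's end minus the
ring size.

#include <stdio.h>

#define DATA_SIZE       96UL    /* data ring size from the example */

/*
 * Where a block of @size starting at @begin would end, assuming a block
 * that would cross a wrap boundary is instead placed at the start of the
 * next wrap so its data stays contiguous.
 */
static unsigned long next_lpos(unsigned long begin, unsigned long size)
{
        unsigned long next = begin + size;

        if (begin / DATA_SIZE != next / DATA_SIZE)
                next = (next / DATA_SIZE) * DATA_SIZE + size;
        return next;
}

int main(void)
{
        unsigned long head = 64, size = 72;
        unsigned long next = next_lpos(head, size);

        printf("block occupies lpos %lu..%lu\n", next - size, next);
        /* The ring holds DATA_SIZE bytes, so the tail must first reach: */
        printf("tail must reach lpos %lu, head is only at %lu\n",
               next - DATA_SIZE, head);
        return 0;
}

So the walk is asked to free everything up to logical position 72 while
the only valid block ends at 64, which is exactly where it starts reading
whatever happens to be there.
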
Good catch, although I'm not sure I understand why this produces the
error Petr is seeing. Why would it see garbage in a zeroed .bss buffer?
Is this because of previous test runs dirtying the same data ring, or
something like that?

At least in my tests it seems like it would try to free a descriptor
with an id of 0, which would result in a "desc_miss", so it would
basically fail to allocate anything.

Either way, I guess after your patch is accepted I'm going to resend
mine so that it only strips the trailing id, but the records must still
be less than half of the data ring size so that data_make_reusable()
never invalidates past the current head.
> This situation can never happen for printk because of your 1/4 limit
> (MAX_LOG_TAKE_PART), although it is over-conservative. It is enough to
> limit messages to 1/2 of the data ring (with Daniil's series). Otherwise
> the limit must be "1/2 - sizeof(long)" to also leave room for the
> trailing ID of a wrapping data block.
>
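
For illustration, a sketch of what such a tightened bound could look
like, with toy stand-ins for the ring parameters (DATA_SIZE, BLK_HDR and
to_blk_size() here are invented for the 96-byte example; this is not the
actual patch):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define DATA_SIZE       96                      /* toy data array size */
#define BLK_HDR         sizeof(unsigned long)   /* leading descriptor ID */

/* Aligned block size including the leading ID (toy stand-in). */
static size_t to_blk_size(size_t size)
{
        size += BLK_HDR;
        return (size + BLK_HDR - 1) & ~(BLK_HDR - 1);
}

/*
 * At most half of the data array, minus room for the trailing ID of a
 * wrapping block, so that making room for one new block never requires
 * pushing the tail past the head. With the no-wrap series the trailing
 * ID reservation could be dropped and the plain half used.
 */
static bool data_check_size(size_t size)
{
        if (size == 0)
                return true;
        return to_blk_size(size) <= DATA_SIZE / 2 - BLK_HDR;
}

int main(void)
{
        printf("32-byte text allowed? %d\n", data_check_size(32));      /* 40 <= 40: yes */
        printf("64-byte text allowed? %d\n", data_check_size(64));      /* 72 >  40: no */
        return 0;
}

In the toy 96-byte ring this caps a block at 40 bytes including its
header, which is the kind of hard limit that keeps the tail walk on the
safe side of the head.
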
> I am still positive about Daniil's series. And we should fix
> data_check_size() to provide a proper limit as well as describe the
> critical relationship between data_check_size() and
> data_make_reusable().
>
> I prefer not to modify data_make_reusable() to handle this case. Currently
> data_make_reusable() does nothing with the head, so it would introduce
> new memory barriers. Also, the "push tail beyond head" scenario is a bit
> odd to handle. It is better just to document the assumption and put in
> the correct bounds checks.
>
> I will put together a patch without Daniil's series that improves the
> comments and puts a proper bound on data_check_size().
>
> John Ogness