Message-ID: <aMPt8y-8Wazh6ZmO@pathway.suse.cz>
Date: Fri, 12 Sep 2025 11:54:59 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Daniil Tatianin <d-tatianin@...dex-team.ru>,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH v2 0/2] printk_ringbuffer: don't needlessly wrap data
blocks around
On Fri 2025-09-12 11:25:09, Petr Mladek wrote:
> On Thu 2025-09-11 18:18:32, John Ogness wrote:
> > On 2025-09-11, Petr Mladek <pmladek@...e.com> wrote:
> > > diff --git a/kernel/printk/printk_ringbuffer_kunit_test.c b/kernel/printk/printk_ringbuffer_kunit_test.c
> > > index 2282348e869a..241f7ef49ac6 100644
> > > --- a/kernel/printk/printk_ringbuffer_kunit_test.c
> > > +++ b/kernel/printk/printk_ringbuffer_kunit_test.c
> > > @@ -56,7 +56,7 @@ struct prbtest_rbdata {
> > > char text[] __counted_by(size);
> > > };
> > >
> > > -#define MAX_RBDATA_TEXT_SIZE 0x80
> > > +#define MAX_RBDATA_TEXT_SIZE (0x256 - sizeof(struct prbtest_rbdata))
> >
> > I guess this should be:
> >
> > #define MAX_RBDATA_TEXT_SIZE (256 - sizeof(struct prbtest_rbdata))
>
> Great catch!
>
> But the KUnit test fails even with this change, see below. And I am
> not surprised. The test should work even with larger-than-allowed
> messages. prbtest_writer() should skip then because prb_reserve()
> should fail.
>
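
(To spell out the "skip" I mean here: the writer would do roughly the
following. This is only a sketch against the printk_ringbuffer API,
not the exact prbtest_writer() code; "data" and "text_size" stand in
for whatever the test generates, and the record metadata setup is
omitted.)

	struct prb_reserved_entry e;
	struct printk_record r;

	prb_rec_init_wr(&r, text_size);
	if (!prb_reserve(&e, &test_rb, &r)) {
		/* Too large for the ring (or reservation failed): skip it. */
		return;
	}
	memcpy(r.text_buf, data, text_size);
	prb_commit(&e);
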
> Here is test result with:
>
> #define MAX_RBDATA_TEXT_SIZE (256 - sizeof(struct prbtest_rbdata))
> #define MAX_PRB_RECORD_SIZE (sizeof(struct prbtest_rbdata) + MAX_RBDATA_TEXT_SIZE)
>
> DEFINE_PRINTKRB(test_rb, 4, 4);
>
> and with this patchset reverted, aka, sources from
> printk/linux.git, branch for-next:
>
> It is well reproducible. It always fails after reading a few records.
> Here are the results from a few other runs:

And I am no longer able to reproduce it after limiting the size of
a record to 1/4 of the data buffer size. I did it with the following
change:

diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
index bc811de18316..2f02254705aa 100644
--- a/kernel/printk/printk_ringbuffer.c
+++ b/kernel/printk/printk_ringbuffer.c
@@ -398,8 +398,6 @@ static unsigned int to_blk_size(unsigned int size)
  */
 static bool data_check_size(struct prb_data_ring *data_ring, unsigned int size)
 {
-	struct prb_data_block *db = NULL;
-
 	if (size == 0)
 		return true;
 
@@ -409,7 +407,7 @@ static bool data_check_size(struct prb_data_ring *data_ring, unsigned int size)
 	 * at least the ID of the next block.
 	 */
 	size = to_blk_size(size);
-	if (size > DATA_SIZE(data_ring) - sizeof(db->id))
+	if (size > DATA_SIZE(data_ring) / 4)
 		return false;
 
 	return true;

I guess there is a race when all existing records have to be made
reusable to make space for the next one.

Another aspect might be the very small number of descriptors (16).
They get recycled quickly. But it is no longer a problem after
limiting the record size to 1/4 of the data buffer.
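
For reference, here is how the numbers work out for this test ring
buffer (my reading of the macros, so double-check me):

	DEFINE_PRINTKRB(test_rb, 4, 4);
		/* 1 << 4       = 16 descriptors          */
		/* 1 << (4 + 4) = 256 bytes of text data  */

	/*
	 * With the DATA_SIZE(data_ring) / 4 limit above, data_check_size()
	 * rejects any padded block bigger than 256 / 4 = 64 bytes, so a
	 * single record can never force most of the ring to be recycled
	 * at once.
	 */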

Note that my test system is an x86_64 KVM guest with 12 CPUs.

Best Regards,
Petr