Message-ID: <aLm_SpmQP3UwzkqQ@pathway>
Date: Thu, 4 Sep 2025 18:33:14 +0200
From: Petr Mladek <pmladek@...e.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Daniil Tatianin <d-tatianin@...dex-team.ru>,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH v1] printk_ringbuffer: don't needlessly wrap data blocks
around
On Thu 2025-09-04 16:04:30, John Ogness wrote:
> On 2025-09-03, Daniil Tatianin <d-tatianin@...dex-team.ru> wrote:
> > Previously, data blocks that perfectly fit the data ring buffer would
> > get wrapped around to the beginning for no reason since the calculated
> > offset of the next data block would belong to the next wrap. Since this
> > offset is not actually part of the data block, but rather the offset of
> > where the next data block is going to start, there is no reason to
> > include it when deciding whether the current block fits the buffer.
>
> This is a nice catch!
>
> Although note that this patch avoids wasting a maximum of 8 bytes of
> ringbuffer space. If you are interested in tackling the wasted-space
> issue of the printk ringbuffer there are much larger [0] fish to catch.
>
> [0] https://lore.kernel.org/lkml/84y10vz7ty.fsf@jogness.linutronix.de
>
> My comments below...
>
> > diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
> > index d9fb053cff67..f885ba8be5e6 100644
> > --- a/kernel/printk/printk_ringbuffer.c
> > +++ b/kernel/printk/printk_ringbuffer.c
> > @@ -1002,6 +1002,18 @@ static bool desc_reserve(struct printk_ringbuffer *rb, unsigned long *id_out)
> > return true;
> > }
> >
> > +static bool same_lpos_wraps(struct prb_data_ring *data_ring,
> > + unsigned long begin_lpos, unsigned long next_lpos)
>
> We need a better name here since it is not actually using the value of
> @next_lpos to check the wrap count. Perhaps invert the return value
> and name it blk_lpos_wraps(), so it would identify whether the given
> blk_lpos values lead to a wrapping data block. Something like this:
>
> diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
> index d9fb053cff67d..cf0fcd9b106ae 100644
> --- a/kernel/printk/printk_ringbuffer.c
> +++ b/kernel/printk/printk_ringbuffer.c
> @@ -1002,6 +995,17 @@ static bool desc_reserve(struct printk_ringbuffer *rb, unsigned long *id_out)
> return true;
> }
>
> +/* Identify if given blk_lpos values represent a wrapping data block. */
> +static bool blk_lpos_wraps(struct prb_data_ring *data_ring,
> + unsigned long begin_lpos, unsigned long next_lpos)
> +{
> + /*
> + * Subtract one from @next_lpos since it is not actually part of this
> + * data block. This allows perfectly fitting records to not wrap.
> + */
> + return (DATA_WRAPS(data_ring, begin_lpos) != DATA_WRAPS(data_ring, next_lpos - 1));
> +}

Or a name combining my proposal and this one: is_blk_wrapped().

> +
> /* Determine the end of a data block. */
> static unsigned long get_next_lpos(struct prb_data_ring *data_ring,
> unsigned long lpos, unsigned int size)
>
> > +{
> > + /*
> > + * Subtract one from next_lpos since it's not actually part of this data
> > + * block. We do this to prevent perfectly fitting records from wrapping
> > + * around for no reason.
> > + */
> > + return DATA_WRAPS(data_ring, begin_lpos) ==
> > + DATA_WRAPS(data_ring, next_lpos - 1);
> > +}
> > +
> > /* Determine the end of a data block. */
> > static unsigned long get_next_lpos(struct prb_data_ring *data_ring,
> > unsigned long lpos, unsigned int size)
>
> The rest looked fine to me and also passed various private
> tests. However, we should also update data_check_size(), since data
> blocks are now allowed to occupy the entire data ring. Something like
> this:
>
> diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
> index d9fb053cff67d..e6bdfb8237a3d 100644
> --- a/kernel/printk/printk_ringbuffer.c
> +++ b/kernel/printk/printk_ringbuffer.c
> @@ -397,21 +397,14 @@ static unsigned int to_blk_size(unsigned int size)
> */
> static bool data_check_size(struct prb_data_ring *data_ring, unsigned int size)
> {
> - struct prb_data_block *db = NULL;
> -
> if (size == 0)
> return true;
>
> /*
> * Ensure the alignment padded size could possibly fit in the data
> - * array. The largest possible data block must still leave room for
> - * at least the ID of the next block.
> + * array.
> */
> - size = to_blk_size(size);
> - if (size > DATA_SIZE(data_ring) - sizeof(db->id))
> - return false;
> -
> - return true;
> + return (to_blk_size(size) <= DATA_SIZE(data_ring));
> }
I hope that we would never reach this limit. A buffer holding just
one message does not look practical. I originally suggested avoiding
messages bigger than 1/4 of the buffer size ;-)

That said, strictly speaking, the above change looks correct.
I would just do it in a separate patch. Using the full buffer
and limiting the maximal message size are related, but they are
not the same thing. Also, a separate patch might help with
bisection in case of problems.
Best Regards,
Petr