Message-ID: <YCHwruhEJ7yi/tK3@jagdpanzerIV.localdomain>
Date: Tue, 9 Feb 2021 11:17:18 +0900
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To: John Ogness <john.ogness@...utronix.de>
Cc: Petr Mladek <pmladek@...e.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, "J. Avila" <elavila@...gle.com>
Subject: Re: [PATCH] printk: avoid prb_first_valid_seq() where possible
On (21/02/05 15:23), John Ogness wrote:
> If message sizes average larger than expected (more than 32
> characters), the data_ring will wrap before the desc_ring. Once the
> data_ring wraps, it will start invalidating descriptors. These
> invalid descriptors hang around until they are eventually recycled
> when the desc_ring wraps. Readers do not care about invalid
> descriptors, but they still need to iterate past them. If the
> average message size is much larger than 32 characters, then there
> will be many invalid descriptors preceding the valid descriptors.
>
> The function prb_first_valid_seq() always begins at the oldest
> descriptor and searches for the first valid descriptor. This can
> be rather expensive for the above scenario. And, in fact, because
> of its heavy usage in /dev/kmsg, there have been reports of long
> delays and even RCU stalls.
>
> For code that does not need to search from the oldest record,
> replace prb_first_valid_seq() usage with prb_read_valid_*()
> functions, which provide a start sequence number to search from.
>
> Fixes: 896fbe20b4e2333fb55 ("printk: use the lockless ringbuffer")
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Reported-by: J. Avila <elavila@...gle.com>
> Signed-off-by: John Ogness <john.ogness@...utronix.de>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index 5a95c688621f..035aae771ea1 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -735,9 +735,9 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf,
> logbuf_lock_irq();
> }
>
> - if (user->seq < prb_first_valid_seq(prb)) {
> + if (r->info->seq != user->seq) {
> /* our last seen message is gone, return error and reset */
> - user->seq = prb_first_valid_seq(prb);
Yeah, I can see how this pattern can be expensive. It would have been
less obvious and harder to spot had it been something like this:
	valid_seq = prb_first_valid_seq(prb);
	if (user->seq < valid_seq) {
		user->seq = valid_seq;
		...
	}
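
To put a number on it, here is a rough userspace toy model (not the
printk ringbuffer itself; the ring size, the invalid-descriptor count
and all the names below are made up): a first_valid_seq()-style search
that always starts at the oldest descriptor has to walk past every
invalid descriptor on every call, while a reader that remembers its own
sequence number only validates a single entry.

/* toy model, not the printk ringbuffer: NUM_DESC descriptors, the
 * oldest NUM_INVAL of them already invalidated by data_ring wrap */
#include <stdbool.h>
#include <stdio.h>

#define NUM_DESC	(1 << 15)
#define NUM_INVAL	(NUM_DESC - 64)

static bool desc_valid[NUM_DESC];

/* prb_first_valid_seq()-style search: always starts at the oldest */
static long first_valid(long tail_seq, long head_seq, long *steps)
{
	long seq;

	for (seq = tail_seq; seq < head_seq; seq++) {
		(*steps)++;
		if (desc_valid[seq % NUM_DESC])
			return seq;
	}
	return head_seq;
}

/* reader that remembers its own seq: one validity check per call */
static bool read_valid(long seq, long *steps)
{
	(*steps)++;
	return desc_valid[seq % NUM_DESC];
}

int main(void)
{
	long steps_scan = 0, steps_reader = 0;
	long user_seq = NUM_INVAL;	/* reader already caught up */
	long i;

	for (i = 0; i < NUM_DESC; i++)
		desc_valid[i] = (i >= NUM_INVAL);

	/* e.g. a /dev/kmsg reader polling 1000 times */
	for (i = 0; i < 1000; i++) {
		first_valid(0, NUM_DESC, &steps_scan);
		read_valid(user_seq, &steps_reader);
	}

	printf("scan from oldest : %ld descriptor checks\n", steps_scan);
	printf("check own seq    : %ld descriptor checks\n", steps_reader);
	return 0;
}

With the ring mostly invalid, the scan-from-oldest side ends up doing
tens of millions of descriptor checks for a thousand reads versus a
thousand checks for the reader that starts from its own sequence
number, which is the kind of blow-up the /dev/kmsg reports point at.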
Great analysis, John.
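
And for anyone reading the archive later, my understanding of why the
new "if (r->info->seq != user->seq)" check is enough: prb_read_valid()
returns the first readable record at or after the requested sequence
number, and r->info->seq tells the caller which record it actually got,
so a mismatch means the reader's record has already been overwritten.
A toy sketch of that contract (again made-up names, not the kernel
code):

#include <stdbool.h>
#include <stdio.h>

/* toy ring state: records [oldest_seq, next_seq) are still available,
 * everything below oldest_seq has been overwritten */
static long oldest_seq = 5;
static long next_seq = 10;

/* like prb_read_valid(): read the first available record at or after
 * req_seq and report which sequence number was actually read */
static bool toy_read_valid(long req_seq, long *read_seq)
{
	if (req_seq < oldest_seq)
		req_seq = oldest_seq;	/* requested record is gone */
	if (req_seq >= next_seq)
		return false;		/* nothing to read yet */
	*read_seq = req_seq;
	return true;
}

int main(void)
{
	long user_seq = 2;	/* reader's next expected record */
	long got;

	if (toy_read_valid(user_seq, &got) && got != user_seq) {
		/* same test as "r->info->seq != user->seq" */
		printf("messages dropped: wanted %ld, got %ld\n",
		       user_seq, got);
		user_seq = got;
	}
	return 0;
}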
I wonder if the Intel test robot measures all test execution times; I do
recall "we saw an N% performance improvement after patch P" emails, but
I'm not sure whether all of the tests are being tracked.
-ss