Message-ID: <20180928090939.GE1160@jagdpanzerIV>
Date: Fri, 28 Sep 2018 18:09:39 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Petr Mladek <pmladek@...e.com>,
Steven Rostedt <rostedt@...dmis.org>,
Alexander Potapenko <glider@...gle.com>,
Dmitriy Vyukov <dvyukov@...gle.com>,
kbuild test robot <fengguang.wu@...el.com>,
syzkaller <syzkaller@...glegroups.com>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] printk: inject caller information into the body of
message
On (09/24/18 17:11), Tetsuo Handa wrote:
> The reason for using statically preallocated global buffers is that I think
> it is inconvenient for KERN_CONT users to calculate the necessary number of
> bytes just to avoid message truncation. The pr_line might be passed deep
> into the call chain, and adjusting the buffer size whenever the content's
> possible max length changes is as painful as changing printk() to accept
> only one "const char *" argument. Even if we guarantee that any context can
> allocate a buffer from the kernel stack, we cannot guarantee that many
> concurrent printk() calls won't trigger a lockup. Thus, I think that trying
> to allocate from finite static buffers, with a fallback to unbuffered
> printk() on failure, is sufficient.
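
If I understand the idea correctly, it would be roughly something like the
sketch below (just an illustration; the helper names and sizes here are
mine, not from your patch):

	#include <linux/atomic.h>
	#include <linux/printk.h>

	#define NR_LINE_BUFS	16
	#define LINE_BUF_SZ	1024

	static char line_bufs[NR_LINE_BUFS][LINE_BUF_SZ];
	static atomic_t line_buf_busy[NR_LINE_BUFS];

	/*
	 * Grab a free static buffer, or return NULL so the caller
	 * falls back to plain, unbuffered printk().
	 */
	static char *line_buf_get(void)
	{
		int i;

		for (i = 0; i < NR_LINE_BUFS; i++) {
			if (atomic_cmpxchg(&line_buf_busy[i], 0, 1) == 0)
				return line_bufs[i];
		}
		return NULL;
	}

	static void line_buf_put(char *buf)
	{
		int i = (buf - line_bufs[0]) / LINE_BUF_SZ;

		atomic_set(&line_buf_busy[i], 0);
	}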
Yes, this makes sense. At the same time, we can keep the pr_line buffer
in .bss:

	static char buffer[1024];
	static DEFINE_PR_LINE_BUF(..., buffer);

just like you have already mentioned. But that's going to require
case-by-case handling, so a big list of printk buffers is a simpler
option. The fallback, though, can be painful. On a system with 1024 CPUs,
can one have more than 16 concurrent cont printks? If the answer is yes,
then we are looking at the same broken cont output as before.
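
The caller-side view of that fallback would be something like this (again
just a sketch, reusing the made-up line_buf_get()/line_buf_put() helpers
from above, not the proposed pr_line API):

	static void report_cpu_state(int cpu, int state, int err)
	{
		char *buf = line_buf_get();

		if (buf) {
			/* buffered: the whole line goes out in one printk() */
			snprintf(buf, LINE_BUF_SZ, "cpu%d: state=%d err=%d",
				 cpu, state, err);
			printk(KERN_INFO "%s\n", buf);
			line_buf_put(buf);
		} else {
			/*
			 * unbuffered fallback: these pieces can interleave
			 * with cont output from other CPUs
			 */
			printk(KERN_INFO "cpu%d:", cpu);
			printk(KERN_CONT " state=%d", state);
			printk(KERN_CONT " err=%d\n", err);
		}
	}

If enough CPUs hit the pool at once, they end up on the else branch, which
is exactly the broken cont output we have today.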
-ss