Message-ID: <20180926110500.vfroi373mookavua@pathway.suse.cz>
Date: Wed, 26 Sep 2018 13:05:00 +0200
From: Petr Mladek <pmladek@...e.com>
To: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
He Zhe <zhe.he@...driver.com>,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] printk: Fix panic caused by passing log_buf_len
to command line
On Tue 2018-09-25 22:31:43, Sergey Senozhatsky wrote:
> On (09/25/18 14:23), Petr Mladek wrote:
> > The 32GB was mentioned as an example one year ago. This is not enough
> > to justify a new syscall from my point of view.
>
> I agree. I didn't think of syslog(); I was merely thinking about logbuf
> and flushing it to the consoles. The syslog() side is a bit more complex.
> We don't really expect user space to allocate 64G just to read all the
> log_buf messages, do we?
>
> I'm wondering if we can do something like this
>
> ---
>
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index cf275f4d7912..1b48b61da8fe 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -1110,9 +1110,15 @@ static void __init log_buf_len_update(unsigned size)
> /* save requested log_buf_len since it's too early to process it */
> static int __init log_buf_len_setup(char *str)
> {
> - unsigned size = memparse(str, &str);
> + u64 size = memparse(str, &str);
>
> - log_buf_len_update(size);
> + if (size > UINT_MAX) {
> + size = UINT_MAX;
> + pr_err("log_buf over 4G is not supported. "
> + "Please contact printk maintainers.\n");
> + }
> +
> + log_buf_len_update((unsigned int)size);
>
> return 0;
> }
>
> ---
>
> So we could know that "the day has come".
Sounds good to me. Just two nits.

First, I would move the check into log_buf_len_update() so that
we catch the overflow in other situations as well.

Second, please keep only the first line of the message. It is enough
to describe the problem. Upstream kernel maintainers are not
responsible for implementing every missing feature.
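
I mean something like this (a rough, untested sketch only; the
LOG_BUF_LEN_MAX name is just a placeholder, and the cap has to be a
power of two that still fits into 32 bits, because of the
roundup_pow_of_two() call that the function already does):

	/* Largest size we can still round up without overflowing u32. */
	#define LOG_BUF_LEN_MAX (1U << 31)

	static void __init log_buf_len_update(u64 size)
	{
		if (size > LOG_BUF_LEN_MAX) {
			size = LOG_BUF_LEN_MAX;
			pr_err("log_buf over 2G is not supported.\n");
		}

		if (size)
			size = roundup_pow_of_two(size);

		if (size > log_buf_len)
			new_log_buf_len = (unsigned int)size;
	}

log_buf_len_setup() could then simply pass the u64 value from
memparse() through without any cast or check of its own.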
Best Regards,
Petr