Date:   Sat, 29 Sep 2018 20:15:14 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To:     Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc:     Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
        Petr Mladek <pmladek@...e.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Alexander Potapenko <glider@...gle.com>,
        Dmitriy Vyukov <dvyukov@...gle.com>,
        kbuild test robot <fengguang.wu@...el.com>,
        syzkaller <syzkaller@...glegroups.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] printk: inject caller information into the body of
 message

On 2018/09/29 19:51, Sergey Senozhatsky wrote:
> On (09/28/18 20:01), Tetsuo Handa wrote:
>>> Yes, this makes sense. At the same time we can keep pr_line buffer
>>> in .bss
>>>
>>> 	static char buffer[1024];
>>> 	static DEFINE_PR_LINE_BUF(..., buffer);
>>>
>>> just like you have already mentioned. But that's going to require a
>>> case-by-case handling; so a big list of printk buffers is a simpler
>>> option. Fallback, tho, can be painful. On a system with 1024 CPUs can
>>> one have more than 16 concurrent cont printks? If the answer is yes,
>>> then we are looking at the same broken cont output as before.
>>
>> I'm OK with making "16" configurable (at kernel configuration and/or
>> at kernel boot like log_buf_len= kernel command line parameter).
> 
> Do we really want this? Why .bss placement doesn't work for you?
> 
> 	void oom(...)
> 	{
> 		static DEFINE_PR_LINE(KERN_ERR, pr);
> 
> 		pr_line(&pr, ....);
> 		pr_line(&pr, "\n");
> 	}
> 
> the underlying buffer will be static; the pr_line will get re-init
> (offset = 0) every time we call the function, which is OK. And we can
> pass &pr to any function oom() invokes. What am I missing?

Because there is no guarantee that memory information is dumped with the
oom_lock mutex held. The oom_lock is held when calling out_of_memory(), but it
cannot be held when reporting GFP_ATOMIC memory allocation failures.
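
A minimal sketch of the concern, reusing the static DEFINE_PR_LINE buffer from
your example (the dump_memory_info() name is made up for illustration):

	void dump_memory_info(void)
	{
		/* One static buffer shared by every caller of this function. */
		static DEFINE_PR_LINE(KERN_ERR, pr);

		pr_line(&pr, "Mem-Info:");
		/* ... more pr_line() calls ... */
		pr_line(&pr, "\n");
	}

out_of_memory() calls such a dump under oom_lock, but warn_alloc() for a failed
GFP_ATOMIC allocation may call it from interrupt context where oom_lock cannot
be taken, so two contexts can write into the single static buffer at the same
time and their output interleaves.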

> 
>> We could even allow each "struct task_struct" to have corresponding
>> "struct printk_buffer".
> 
> Tetsuo, realistically, we can't. Sorry. No one will let us have a printk
> buffer on a per-task_struct basis. Even if someone did let us do this,
> by a miracle, a single per-task_struct buffer won't work. Because then
> someone will discover that a very simple API
> 
> 	buffered_printk(current->printk_buffer, "......");
> 
> does not work if buffered_printk() gets interrupted by an IRQ, etc., and the
> interrupting context also does
> 
> 	buffered_printk(current->printk_buffer, "......");
> 
> So then we will have per-context per-task_struct printk buffer: for task,
> for exceptions, for softirq, for hardirq, for NMI, etc. This is not worth
> it.

The number of "struct task_struct" instances varies over time, but the number
of non-"struct task_struct" contexts is finite and can be determined at boot
(or initialization) time.

My intention is to allocate a "struct printk_buffer" when a "struct task_struct"
is created (i.e. upon dup_task_struct()) and release it when the
"struct task_struct" is destroyed (i.e. upon free_task_struct()), and to
allocate "struct printk_buffer" instances for non-"struct task_struct" contexts
when a CPU is onlined and release them when the CPU is offlined. Then it is
guaranteed that there are enough "struct printk_buffer" instances for any
caller.

> 
> Let's just have a very simple seq_buf based pr_line API. No config options,
> no command line arguments - heap, bss or stack for buffer placement. Or even
> simpler.

We cannot avoid "** %u printk messages dropped **\n" inside printk() when it
runs out of space. But I don't want the line buffered printk() API to truncate
output when the line buffer runs out of space; I want the buffered printk() API
to flush the incomplete line even if the result is printed across multiple
lines. Injecting caller information can mitigate the "printed across multiple
lines" case.
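
To show what I mean by flushing rather than truncating, here is a sketch (the
buf_printf() helper and its overflow handling are only an assumption for
illustration, building on the hypothetical struct printk_buffer above):

	static void buf_printf(struct printk_buffer *buf, const char *fmt, ...)
	{
		va_list args;
		int len;

		va_start(args, fmt);
		len = vsnprintf(buf->buf + buf->len,
				sizeof(buf->buf) - buf->len, fmt, args);
		va_end(args);

		if (len < (int)(sizeof(buf->buf) - buf->len)) {
			buf->len += len;
			return;
		}
		/*
		 * Out of space: flush what is already buffered as an
		 * (incomplete) line instead of truncating, then retry into
		 * the now-empty buffer.  The two fragments become separate
		 * printk() lines; injected caller information lets readers
		 * join them back together.
		 */
		printk(KERN_INFO "%s\n", buf->buf);
		va_start(args, fmt);
		len = vsnprintf(buf->buf, sizeof(buf->buf), fmt, args);
		va_end(args);
		buf->len = min_t(int, len, sizeof(buf->buf) - 1);
	}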
