Message-ID: <880ef52f-dff7-af91-5353-f63513265ffe@i-love.sakura.ne.jp>
Date:   Mon, 1 Oct 2018 20:21:05 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To:     Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:     Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        Petr Mladek <pmladek@...e.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Alexander Potapenko <glider@...gle.com>,
        Dmitriy Vyukov <dvyukov@...gle.com>,
        kbuild test robot <fengguang.wu@...el.com>,
        syzkaller <syzkaller@...glegroups.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] printk: inject caller information into the body of
 message

On 2018/10/01 11:37, Sergey Senozhatsky wrote:
> On (09/29/18 20:15), Tetsuo Handa wrote:
>>
>> Because there is no guarantee that memory information is dumped under the
>> oom_lock mutex. The oom_lock is held when calling out_of_memory(), and it
>> cannot be held when reporting GFP_ATOMIC memory allocation failures.
> 
> IOW, static pr_line buffer needs additional synchronization for OOM. Correct?

Yes (assuming that your "OOM" refers to both out_of_memory() and warn_alloc()).
And since warn_alloc() might be called from atomic/interrupt contexts, we can't
use locks for synchronization.

> 
> If we are about to have a list of printk buffers then we probably can
> define a list of NR_CPUS cont buffers. And we probably can reuse the
> existing struct cont for buffered printk, having 2 different struct-s
> for the same thing - struct cont and struct printk_buffer - is not very
> cool.

My plan is to remove "struct cont" after most of the KERN_CONT users are
converted to use buffered_printk(). The two structs will coexist only during
the transition period.

By the way, at most two threads (the active printing thread and the thread
which is marked as console_waiter) can stall inside printk(), can't they?
Then, can you imagine a situation where 1024 (NR_CPUS) threads are stalling
inside printk() waiting for a flush? Such a system is already dead. All callers
except those two should release their printk_buffer as soon as their printk()
has added its message to the log buffer.

Maybe "struct printk_buffer" after all becomes identical to "struct cont". But
I guess that even 16 printk_buffer-s is practically sufficient for 1024 CPUs
system, and allocating NR_CPUS printk_buffer-s will be too wasteful.

> 
>> But I don't want line buffered printk() API to truncate upon out of
>> space for line buffered printk() API.
> 
> All printk()-s are limited by LOG_LINE_MAX. Buffered printk() is not
> special.

I'm saying that I don't like discarding the overflowed part, because you are
using seq_buf_vprintf(), which just marks the buffer as "overflowed" rather
than flushing the incomplete line and then storing the new data.

 DEFINE_PR_LINE(pr);

 pr_line(&pr, "1234567890123456789012345678901234567890123456789012345678901234567890");
 pr_line(&pr, "1234567890abcde\n");

will discard the "1234567890abcde\n" part, won't it?
I think that getting

 1234567890123456789012345678901234567890123456789012345678901234567890\n
 1234567890abcde\n

is better than getting

 1234567890123456789012345678901234567890123456789012345678901234567890\n

because we can still understand such output once caller information is prefixed
to each line.

Your DEFINE_PR_LINE() limits the buffer size to far less than LOG_LINE_MAX.
Since your version has to worry about the "buffer full" (i.e. hitting
seq_buf_set_overflow()) case, it might become a headache for API users.
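
In other words, when the buffer cannot hold the new fragment, I would rather
see behaviour like the sketch below (the struct pr_line layout shown here is
my assumption, not your actual definition): flush whatever is already buffered
as one (possibly incomplete) line, then start over with the new data, instead
of marking the seq_buf as overflowed and dropping the fragment.

 #include <linux/kernel.h>
 #include <linux/printk.h>
 #include <stdarg.h>

 /* Assumed minimal layout for illustration only. */
 struct pr_line {
 	char buf[256];
 	int len;
 };

 static void pr_line_add(struct pr_line *pr, const char *fmt, ...)
 {
 	va_list args;
 	int needed;

 	va_start(args, fmt);
 	needed = vsnprintf(NULL, 0, fmt, args);
 	va_end(args);

 	if (pr->len && pr->len + needed >= sizeof(pr->buf)) {
 		/* Flush the incomplete line instead of discarding new data. */
 		printk("%s\n", pr->buf);
 		pr->len = 0;
 	}

 	va_start(args, fmt);
 	pr->len += vscnprintf(pr->buf + pr->len,
 			      sizeof(pr->buf) - pr->len, fmt, args);
 	va_end(args);
 }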
