Message-ID: <e855cbef-03ff-b4ac-3e06-d2f2f48cfb1f@i-love.sakura.ne.jp>
Date:   Fri, 14 Sep 2018 21:03:20 +0900
From:   Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
To:     Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc:     Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
        Petr Mladek <pmladek@...e.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Alexander Potapenko <glider@...gle.com>,
        Dmitriy Vyukov <dvyukov@...gle.com>,
        kbuild test robot <fengguang.wu@...el.com>,
        syzkaller <syzkaller@...glegroups.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH] printk: inject caller information into the body of
 message

On 2018/09/14 20:50, Sergey Senozhatsky wrote:
>>> +#define DEFINE_PR_LINE_BUF(lev, name, buf, sz)			\
>>> +	struct pr_line	name = {				\
>>> +		.sb	= __SEQ_BUF_INITIALIZER(buf, (sz)),	\
>>> +		.level	= lev,					\
>>> +	}
>>> +
>>
>> I would use this one for the OOM killer. 80 bytes is too short.
> 
> 80 bytes is quite short for OOM, agreed.
> 
>>   static char oom_print_buf[1024];
>>   DEFINE_PR_LINE_BUF(level, oom_print_buf);
> 
> Do I get it right that you suggest to drop the "size" param?

No. I just forgot to add params. ;-)
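For the record, what I meant was something along these lines (the struct
name "oom_pr_line" is just a placeholder; "level" as in my previous mail):

  static char oom_print_buf[1024];
  DEFINE_PR_LINE_BUF(level, oom_pr_line, oom_print_buf, sizeof(oom_print_buf));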

> Do OOM people agree on 1024 bytes stack usage?

I won't allocate oom_print_buf on the stack. Since its users are serialized
by the oom_lock mutex, a static buffer is sufficient. Also, since a memory
allocation request might arrive when the stack is already tight, we should
not try to allocate that much from the stack anyway.
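
To make the intent concrete, a rough sketch of how the OOM path could fill
and flush such a buffer while holding oom_lock (the function name and the
plain printk() flush are placeholders; your patch presumably provides a
proper flush helper that honors .level):

  #include <linux/lockdep.h>
  #include <linux/oom.h>		/* oom_lock */
  #include <linux/printk.h>
  #include <linux/seq_buf.h>

  /* oom_print_buf / oom_pr_line as defined above. */

  static void oom_dump_example(void)
  {
  	/* All users are serialized by oom_lock, so one static buffer is enough. */
  	lockdep_assert_held(&oom_lock);

  	seq_buf_clear(&oom_pr_line.sb);
  	seq_buf_printf(&oom_pr_line.sb, "Out of memory: example line\n");

  	/* Flush: a plain printk() for now, pending a real helper in the patch. */
  	printk("%s", oom_pr_line.sb.buffer);
  }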
