Date:	Thu, 3 Apr 2014 13:18:55 -0400
From:	Theodore Ts'o <>
To:	"H. Peter Anvin" <>
Cc:	Joerg Roedel <>,
	Linus Torvalds <>,
	Jiri Kosina <>,
	Andrew Morton <>,
	Mateusz Guzik <>,
	Greg Kroah-Hartman <>,
	Steven Rostedt <>,
	LKML <>,
	Thomas Gleixner <>,
	Borislav Petkov <>, Ingo Molnar <>,
	Mel Gorman <>, Kay Sievers <>
Subject: Re: [RFC PATCH] cmdline: Hide "debug" from /proc/cmdline

On Thu, Apr 03, 2014 at 10:09:29AM -0700, H. Peter Anvin wrote:
> Having the kernel be the keeper of the logging IPC isn't at all
> unreasonable.  However, kmsg in its current form isn't adequate.
> Augmenting it into a proper logging IPC might be the right thing to do.
>  (Hmm... new IPC... does this sound a bit like kdbus to anyone?)

I'm not sure it makes sense for the kernel to be stashing log entries
until there is a good place to save them.  If systemd wants to send
hundreds of thousands of messages before the file system is remounted
read-write, we don't really want to be storing them in non-swappable
kernel memory.  In a userspace process, you can do things like
compression, and in the worst case, the OOM killer can kill the
logging daemon if the systemd verbosity has been turned up too high
and it's taking too long to get /var/log mounted and writable.

That's why I wrote logsave(8) --- because IMHO, this is a userspace
problem, and not something that we should even be trying to solve in
kernel space.
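[Editorial note: the userspace approach described above can be sketched roughly as follows. This is a minimal illustration, not logsave(8)'s actual implementation: buffer log data compressed in ordinary (swappable, OOM-killable) process memory, and flush it out once the destination directory is writable. The `LogBuffer` class and paths are hypothetical.]

```python
import zlib

class LogBuffer:
    """Hypothetical userspace log buffer: holds early-boot messages
    compressed in process memory until /var/log is writable."""

    def __init__(self):
        # Compressing in userspace keeps the footprint small; unlike
        # kernel memory, this heap is swappable, and the whole process
        # can be OOM-killed if verbosity gets out of hand.
        self._compressor = zlib.compressobj()
        self._chunks = []

    def append(self, line: str) -> None:
        self._chunks.append(self._compressor.compress(line.encode()))

    def flush_to(self, path: str) -> None:
        # Called once the target file system has been remounted
        # read-write; decompress and write everything out.
        self._chunks.append(self._compressor.flush())
        data = zlib.decompress(b"".join(self._chunks))
        with open(path, "wb") as f:
            f.write(data)

buf = LogBuffer()
buf.append("systemd[1]: very verbose early-boot message\n")
buf.append("systemd[1]: another message\n")
buf.flush_to("/tmp/saved.log")  # stand-in for a file under /var/log
```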

						- Ted