Message-Id: <20100622012303.BD72E402AD@magilla.sf.frob.com>
Date:	Mon, 21 Jun 2010 18:23:03 -0700 (PDT)
From:	Roland McGrath <roland@...hat.com>
To:	Edward Allcutt <edward@...cutt.me.uk>
Cc:	Alexander Viro <viro@...iv.linux.org.uk>,
	Randy Dunlap <rdunlap@...otime.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jiri Kosina <jkosina@...e.cz>,
	Dave Young <hidave.darkstar@...il.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	"H. Peter Anvin" <hpa@...or.com>, Oleg Nesterov <oleg@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Neil Horman <nhorman@...driver.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] fs: limit maximum concurrent coredumps

A core dump is just an instance of a process suddenly reading lots of its
address space and doing lots of filesystem writes, producing the kinds of
thrashing that any such instance might entail.  It really seems like the
real solution to this kind of problem will lie in some more general kind of
throttling of processes (or whatever manner of collections thereof) when
they go hog-wild on page-ins or filesystem writes, or whatever else.  I'm
not trying to get into the details of what that would be.  But I have to
cite this hack as the off-topic kludge that it really is.  That said, I do
certainly sympathize with the desire for a quick hack that addresses the
scenario you experience.

For the case you described, it seems to me that constraining concurrency
per se would be better than punting core dumps once too many are in
flight.  That is, you should not skip the dump when you hit the limit.
Rather, you should block in do_coredump() until the next dump already in
progress finishes.  (It should be possible to use TASK_KILLABLE so that
dumps waiting in the queue can be aborted with a follow-on SIGKILL.  But
Oleg will have to check that the signal details are right for that.)
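
To sketch what I mean (purely an illustration, not a patch: the helper
names coredump_serialize/coredump_release and the global mutex are
invented here), do_coredump() could take a killable mutex before it
writes anything:

#include <linux/mutex.h>

/* Hypothetical global lock: only one core dump writes at a time. */
static DEFINE_MUTEX(core_dump_mutex);

/*
 * Called from do_coredump() before any output is written.
 * mutex_lock_killable() sleeps in TASK_KILLABLE, so a dumper stuck
 * waiting its turn returns -EINTR on SIGKILL instead of sleeping
 * uninterruptibly.
 */
static int coredump_serialize(void)
{
	return mutex_lock_killable(&core_dump_mutex);
}

/* Called once the dump attempt is finished, success or not. */
static void coredump_release(void)
{
	mutex_unlock(&core_dump_mutex);
}

If coredump_serialize() returns -EINTR, the dumper was killed while
queued and should bail out without writing a dump; otherwise it writes
the core file and calls coredump_release() when done.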

That won't make your crashers each complete quickly, but it will prevent
the thrashing.  Instead of some crashers suddenly not producing dumps at
all, they'll just all queue up waiting to finish crashing without using
any CPU or IO resources.  That way you don't lose any core dumps unless
you want to start SIGKILL'ing things (which oom_kill might do if need
be); you just don't die in flames trying to do nothing but dump cores.


Thanks,
Roland