Date:	Tue, 30 Jun 2009 13:38:36 -0400
From:	Neil Horman <nhorman@...driver.com>
To:	linux-kernel@...r.kernel.org
Cc:	akpm@...ux-foundation.org, earl_chew@...lent.com,
	Oleg Nesterov <oleg@...hat.com>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Andi Kleen <andi@...stfloor.org>, nhorman@...driver.com
Subject: Re: [PATCH 0/3] exec: Make do_coredump more robust and safer when
	using pipes in core_pattern (v4)

Ok, here's version 4 of this patch set.  Based on feedback from the past few days
I've made some changes (noted below).  I've tested all of these patches here,
and they work quite well: I'm able to prevent recursive core dumps, wait on
dumps to complete, and limit the number of dumps I handle in parallel.
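
To illustrate the parallel-dump limit (the names core_pipe_limit, core_dump_count,
and the helper functions below are just illustrative, not necessarily what the
patches use), the idea is an atomic count of in-flight pipe dumps checked against
a sysctl-tunable ceiling, with 0 meaning unlimited:

/*
 * Rough sketch only: bound the number of core dumps being piped to
 * user space helpers at any one time.  Names are hypothetical.
 */
#include <asm/atomic.h>		/* <linux/atomic.h> in newer trees */
#include <linux/errno.h>

static atomic_t core_dump_count = ATOMIC_INIT(0);
static unsigned int core_pipe_limit;	/* exposed via a sysctl; 0 == no limit */

static int core_dump_slot_get(void)
{
	int in_flight = atomic_inc_return(&core_dump_count);

	if (core_pipe_limit && in_flight > core_pipe_limit) {
		atomic_dec(&core_dump_count);
		return -EAGAIN;		/* too many dumps in flight; drop this one */
	}
	return 0;
}

static void core_dump_slot_put(void)
{
	atomic_dec(&core_dump_count);
}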

Change Notes:

1) Diffed against the latest Linus kernel plus Oleg's cleanup patches to do_coredump

2) Refactored into 3 patches instead of two (I still don't think it's needed, but
I've received more than one request to pull the sysctl into a separate patch, so
I'm going with the consensus here, and it won't hurt anything anyway)

3) Changed how we detect completed user space processes.  We need to be able to
close our end of the pipe and then wait for the pipe reader to finish with it.
As such we have to do some trickery with the pipe's readers and writers counts
to make that happen; a simplified sketch of the idea follows these notes.
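
To sketch the idea behind that trickery (a simplified illustration, not
necessarily the exact code in the patches): the kernel briefly poses as a
reader so the pipe isn't torn down, drops its writer count so the user space
helper sees EOF, and then sleeps until the helper closes its end:

/*
 * Simplified sketch: wait for the user space core collector to finish
 * reading the dump pipe.  Assumes the pipe_inode_info readers/writers
 * counters and the pipe_lock()/pipe_wait() helpers.
 */
#include <linux/fs.h>
#include <linux/pipe_fs_i.h>
#include <linux/sched.h>

static void wait_for_dump_helper(struct file *file)
{
	struct pipe_inode_info *pipe = file->f_path.dentry->d_inode->i_pipe;

	pipe_lock(pipe);
	pipe->readers++;	/* keep the pipe alive while we wait */
	pipe->writers--;	/* let the helper see EOF when it finishes reading */

	while (pipe->readers > 1 && !signal_pending(current)) {
		wake_up_interruptible_sync(&pipe->wait);
		pipe_wait(pipe);	/* releases and re-takes the pipe lock */
	}

	pipe->readers--;	/* undo our temporary reader reference */
	pipe->writers++;
	pipe_unlock(pipe);
}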

Patches in subsequent mails
Regards
Neil
