Message-ID: <202408051018.F7BA4C0A6@keescook>
Date: Mon, 5 Aug 2024 10:25:14 -0700
From: Kees Cook <kees@...nel.org>
To: Brian Mak <makb@...iper.net>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] binfmt_elf: Dump smaller VMAs first in ELF cores

On Thu, Aug 01, 2024 at 05:58:06PM +0000, Brian Mak wrote:
> On Jul 31, 2024, at 7:52 PM, Eric W. Biederman <ebiederm@...ssion.com> wrote:
> > One practical concern with this approach is that I think the ELF
> > specification says that program headers should be written in memory
> > order.  So a comment on your testing to see if gdb or rr or any of
> > the other debuggers that read core dumps cares would be appreciated.
> 
> I've already tested readelf and gdb on core dumps (truncated and whole)
> with this patch, and they are able to read and use these core dumps in
> these scenarios with a proper backtrace.

Can you compare the "rr" selftest before/after the patch? They have been
the most sensitive to changes to ELF, ptrace, seccomp, etc, so I've
tried to double-check "user visible" changes with their tree. :)
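
To make the ordering point concrete: reordering only bites a consumer that
assumes program headers appear in memory order; a reader that keys off
p_vaddr and p_offset does not care. A minimal userspace sketch of such a
reader (illustrative only, not part of the patch; error handling mostly
omitted):

#include <elf.h>
#include <stdio.h>

/* Walk a core file's program headers and report each PT_LOAD segment
 * by its p_vaddr/p_offset, regardless of header order. */
int main(int argc, char **argv)
{
	Elf64_Ehdr eh;
	Elf64_Phdr ph;
	FILE *f;

	if (argc < 2 || !(f = fopen(argv[1], "rb")))
		return 1;
	if (fread(&eh, sizeof(eh), 1, f) != 1)
		return 1;

	for (unsigned int i = 0; i < eh.e_phnum; i++) {
		if (fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET) ||
		    fread(&ph, sizeof(ph), 1, f) != 1)
			return 1;
		if (ph.p_type == PT_LOAD)
			printf("load: vaddr 0x%llx at file offset 0x%llx (filesz 0x%llx)\n",
			       (unsigned long long)ph.p_vaddr,
			       (unsigned long long)ph.p_offset,
			       (unsigned long long)ph.p_filesz);
	}
	fclose(f);
	return 0;
}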

> > Since your concern is about stacks, and the kernel has information about
> > stacks it might be worth using that information explicitly when sorting
> > vmas, instead of just assuming stacks will be small.
> 
> This was originally the approach that we explored, but ultimately moved
> away from. We need more than just stacks to form a proper backtrace. I
> didn't narrow down exactly what it was that we needed because the sorting
> solution seemed to be cleaner than trying to narrow down each of these
> pieces that we'd need. At the very least, we need information about shared
> libraries (.dynamic, etc.) and stacks, but my testing showed that we need a
> third piece sitting in an anonymous R/W VMA, which is the point at which I
> stopped exploring this path. I was having a difficult time narrowing down
> what this last piece was.

And those VMAs weren't thread stacks?
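
For reference, the size-first ordering under discussion could be sketched
roughly as below. This is only a sketch, not the RFC patch itself: it
assumes the cprm->vma_meta / cprm->vma_count snapshot and the dump_size
field of struct core_vma_metadata from include/linux/coredump.h, and the
helper name and call site are hypothetical.

#include <linux/coredump.h>
#include <linux/sort.h>

/* Order the VMA snapshot so the smallest dump sizes come first, keeping
 * stacks, .dynamic data, and similar small mappings near the front of a
 * core file that might later be truncated. */
static int cmp_vma_dump_size(const void *a, const void *b)
{
	const struct core_vma_metadata *x = a, *y = b;

	if (x->dump_size < y->dump_size)
		return -1;
	if (x->dump_size > y->dump_size)
		return 1;
	return 0;
}

/* Hypothetical call site, run after dump_vma_snapshot() has filled the
 * cprm->vma_meta array and before the segments are written out. */
static void sort_vmas_smallest_first(struct coredump_params *cprm)
{
	sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta),
	     cmp_vma_dump_size, NULL);
}

Whether the program headers can be emitted in that same sorted order, or
must stay in memory order per the ELF spec concern above, is exactly what
the gdb/readelf/rr testing should confirm.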

> Please let me know your thoughts!

I echo all of Eric's comments, especially the "let's make this the
default if we can". My only bit of discomfort is with making this change
is that it falls into the "it happens to work" case, and we don't really
understand _why_ it works for you. :)

It also feels like part of the overall problem is that systemd has no way
to know the process is crashing, and so it ends up creating the truncation
problem. (I.e., we're trying to use the kernel to work around a visibility
issue in userspace.)

All this said, if it doesn't create problems for gdb and rr, I would be
fine giving it a shot.

-Kees

-- 
Kees Cook
