Message-ID: <20170318192328.31dc1fde@hananiah.suse.cz>
Date:   Sat, 18 Mar 2017 19:23:28 +0100
From:   Petr Tesarik <ptesarik@...e.cz>
To:     Xunlei Pang <xpang@...hat.com>
Cc:     Baoquan He <bhe@...hat.com>, xlpang@...hat.com,
        Dave Young <dyoung@...hat.com>, akpm@...ux-foundation.org,
        kexec@...ts.infradead.org, linux-kernel@...r.kernel.org,
        Eric Biederman <ebiederm@...ssion.com>
Subject: Re: [PATCH] kexec: Update vmcoreinfo after crash happened

On Thu, 16 Mar 2017 21:40:58 +0800
Xunlei Pang <xpang@...hat.com> wrote:

> On 03/16/2017 at 09:18 PM, Baoquan He wrote:
> > On 03/16/17 at 08:36pm, Xunlei Pang wrote:
> >> On 03/16/2017 at 08:27 PM, Baoquan He wrote:
> >>> Hi Xunlei,
> >>>
> >>> Did you actually see this happen? Because the vmcore size estimate
> >>> feature, namely the --mem-usage option of makedumpfile, depends on the
> >>> vmcoreinfo in the 1st kernel, your change will break it.
> >> Hi Baoquan,
> >>
> >> I can reproduce it using a kernel module which modifies the vmcoreinfo,
> >> so it's a problem that can actually happen.
> >>
> >>> If not, it might not be good to change that.
> >> That's a good point; then I guess we can keep crash_save_vmcoreinfo_init()
> >> and store all the vmcoreinfo again after a crash. What do you think?
> > Well, then it will also make makedumpfile segfault when the command
> > below is executed in the 1st kernel, if the corruption already exists:
> > 	makedumpfile --mem-usage /proc/kcore
> 
> Yes, if the initial vmcoreinfo data was modified before "makedumpfile --mem-usage", that might happen;
> after all, something is already going wrong in the system. That's why we deploy the kdump service at
> the very beginning, when the system still has a low probability of going wrong.
> 
> But we have to guarantee that the kdump vmcore can be generated as correctly as possible.
> 
> >
> > So we still need to face that problem and fix it. vmcoreinfo_note is in
> > the kernel data area; how does a module intrude into this area? And can
> > we fix the module code?
> >
> 
> Bugs always exist in products; we can't know in advance what will happen and fix every error,
> and that's why we need kdump.
> 
> I think the following update should guarantee the correct vmcoreinfo for kdump.

I'm still not convinced. I would probably have more trust in a clean
kernel (right after boot) than in a kernel that has already crashed
(presumably because of a serious bug). How can reliability be improved
by running more code in an unsafe environment?

If some code overwrites reserved areas (such as vmcoreinfo), then it's
seriously buggy. And in my opinion, it is more difficult to identify
such bugs if they are masked by re-initializing vmcoreinfo after a crash.
In fact, if makedumpfile in the kexec'ed kernel complains that it
didn't find valid VMCOREINFO content, that's already a hint.

As a side note, if you're debugging a vmcoreinfo corruption, it's
possible to use a standalone VMCOREINFO file with makedumpfile, so you
can pre-generate it and save it in the kdump initrd.
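
For example, a rough sketch using makedumpfile's -g/-i options (the paths
and the dump level below are illustrative and may differ between
distributions and makedumpfile versions):

	# In the healthy 1st kernel: generate VMCOREINFO from the matching
	# debuginfo vmlinux and stash the file in the kdump initrd.
	makedumpfile -g VMCOREINFO -x /usr/lib/debug/lib/modules/$(uname -r)/vmlinux

	# In the kdump kernel: read the pre-generated file instead of the
	# note exported by the (possibly corrupted) crashed kernel.
	makedumpfile -i VMCOREINFO -d 31 /proc/vmcore /var/crash/vmcore

That way makedumpfile does not have to trust the in-memory vmcoreinfo
note at all.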

In short, I don't see a compelling case for this change.

Just my two cents,
Petr T
