Date:	Fri, 8 Mar 2013 11:19:12 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Jingbai Ma <jingbai.ma@...com>
Cc:	mingo@...hat.com, kumagai-atsushi@....nes.nec.co.jp,
	ebiederm@...ssion.com, hpa@...or.com, yinghai@...nel.org,
	kexec@...ts.infradead.org, linux-kernel@...r.kernel.org,
	"Mitchell, Lisa (MCLinux in Fort Collins)" <lisa.mitchell@...com>
Subject: Re: [RFC PATCH 0/5] crash dump bitmap: scan memory pages in kernel
 to speedup kernel dump process

On Fri, Mar 08, 2013 at 06:06:31PM +0800, Jingbai Ma wrote:

[..]
> >- First of all it is doing more stuff in first kernel. And that runs
> >   contrary to kdump design where we want to do stuff in second kernel.
> >   After a kernel crash, you can't trust running kernel's data structures.
> >   So to improve reliability just do minial stuff in crashed kernel and
> >   get out quickly.
> 
> I agree with you: the first kernel should do as little as possible.
> Intuitively, filtering memory pages in the first kernel would harm
> the reliability of the kernel dump, but let's think it through:
> 
> 1. It only relies on the memory management data structures that
> makedumpfile also relies on, so there is no reliability degradation
> on this point.

It's not the same. If there is something wrong with the memory
management data structures, the crashed kernel can panic() again,
lock itself up, and never even transition to the second kernel.

With makedumpfile, if something is wrong, either we will save the
wrong bits or get a segmentation fault. But one can still try to be
careful, or save the whole dump and extract specific pieces afterwards.

So it is not an apples-to-apples comparison.

[..]
> >Looks like now hpa and yinghai have done the work to be able to load
> >kdump kernel above 4GB. I am assuming this also removes the restriction
> >that we can only reserve 512MB or 896MB in second kernel. If that's
> >the case, then I don't see why people can't get away with reserving
> >64MB per TB.
> 
> That's true. With kernel 3.9-rc1 and kexec-tools 2.0.4, the capture
> kernel will have enough memory to run, and makedumpfile could always
> run in non-cyclic mode. But we are still concerned about kernel dump
> performance on systems with huge memory (above 4TB).

I would suggest we first try to make mmap() on /proc/vmcore work,
optimize makedumpfile to make use of it, and then see whether the
performance is acceptable on large machines. Then take it from there.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
