Message-ID: <20131115225540.GA5485@anatevka.fc.hp.com>
Date:	Fri, 15 Nov 2013 15:55:40 -0700
From:	jerry.hoemann@...com
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	Vivek Goyal <vgoyal@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
	Ingo Molnar <mingo@...nel.org>,
	Pekka Enberg <penberg@...nel.org>,
	Rob Landley <rob@...dley.net>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	x86 maintainers <x86@...nel.org>,
	Matt Fleming <matt.fleming@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"list@...ederm.org:DOCUMENTATION" <linux-doc@...r.kernel.org>,
	"list@...ederm.org:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
	linux-efi@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/3] Early use of boot service memory

On Fri, Nov 15, 2013 at 02:24:25PM -0800, Yinghai Lu wrote:
> On Fri, Nov 15, 2013 at 10:03 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> > On Fri, Nov 15, 2013 at 09:33:41AM -0800, Yinghai Lu wrote:
> 
> >> I have one system with 6TiB of memory where kdump does not work even
> >> with crashkernel=512M in legacy mode. (It only works on a system with
> >> 4.5TiB.)
> >
> > Recently I tested one system with 6TB of memory and dumped successfully
> > with 512MB reserved under 896MB. I have also heard reports of a successful
> > dump of a 12TB system with 512MB reserved below 896MB (thanks to the cyclic
> > mode of makedumpfile).
> >
> > So with newer releases the only reason one might want to reserve more
> > memory is that it might provide speed benefits. We need more testing
> > to quantify this.
> 
> You may need a bunch of PCIe cards installed.
> 
> On the system with 6TiB + 16 PCIe cards, the second kernel OOMs.
> On the system with 4.5TiB + 16 PCIe cards, the second kernel works and the vmcore is dumped.

Yinghai,

Your original email said you were using "legacy mode".  Does this mean
you're not running makedumpfile in cyclic mode?  Cyclic mode makes
a *big* difference in the memory footprint of makedumpfile.
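
For reference, an illustrative sketch of the distinction (the dump path,
dump level, and compression flag below are just placeholders, and the exact
option names depend on your makedumpfile version):

    # Non-cyclic: the bitmaps for all of memory are built in RAM at once,
    # so the footprint grows with the size of the machine being dumped.
    makedumpfile --non-cyclic -c -d 31 /proc/vmcore /mnt/dump/vmcore

    # Cyclic: memory is processed in fixed-size chunks, so the footprint
    # stays roughly constant regardless of system size.
    makedumpfile --cyclic -c -d 31 /proc/vmcore /mnt/dump/vmcore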

thanks


Jerry


> 
> >
> >> --- the first kernel can reserve the 512M under 896M, but the second kernel
> >> will OOM as it loads drivers for every PCI device...
> >>
> >> So why don't the RH guys spend some time optimizing your kdump initrd
> >> build scripts to put only the dump-device-related drivers in it?
> >
> > Try the latest Fedora; that's what we do. We have now moved to dracut-based
> > initramfs generation, and we tell dracut to build the initramfs for the
> > host plus the additional dump destination, and dracut builds it for those only.
> > I think there might be scope for further optimization, but I don't think
> > that's the problem any more.
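
(As an aside, an illustrative approximation of a host-only kdump initramfs
built by hand with dracut -- this is not the actual kexec-tools/kdump script,
and the driver name and image path here are made up:

    # Build a host-only initramfs and explicitly add the driver needed for
    # a hypothetical dump-target disk, instead of pulling in every driver.
    dracut --force --hostonly --add-drivers "megaraid_sas" \
        /boot/initramfs-kdump.img $(uname -r)
)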
> 
> Good. I assume that will be in RHEL 7.
> 
> >
> > So the issue remains that crashkernel=X,high is not a good default choice
> > because it consumes an extra 72M that we don't have to.
> 
> Then if it falls into 896M~4G, the user may still need to update kexec-tools?
> 
> Thanks
> 
> Yinghai
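
For anyone following the crashkernel=X,high discussion above, my rough
understanding of the reservation variants being compared (kernel command
line examples only; the sizes are illustrative and the low-memory default
varies by kernel version):

    # Legacy/low reservation: memory is carved out below 896M and is usable
    # by older kexec-tools without an update.
    crashkernel=512M

    # High reservation: the main region goes above 4G, and the kernel also
    # reserves a small low region (the ~72M mentioned above) for swiotlb/DMA;
    # loading into it needs a newer kexec-tools.
    crashkernel=512M,high

    # The low portion can also be sized explicitly alongside ,high:
    crashkernel=512M,high crashkernel=128M,low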

-- 

----------------------------------------------------------------------------
Jerry Hoemann            Software Engineer              Hewlett-Packard

3404 E Harmony Rd. MS 57                        phone:  (970) 898-1022
Ft. Collins, CO 80528                           FAX:    (970) 898-XXXX
                                                email:  jerry.hoemann@...com
----------------------------------------------------------------------------

