Date:	Fri, 23 Jan 2015 16:44:53 +0800
From:	Baoquan He <bhe@...hat.com>
To:	Joerg Roedel <joro@...tes.org>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	x86@...nel.org, linux-kernel@...r.kernel.org,
	Joerg Roedel <jroedel@...e.de>
Subject: Re: [PATCH 3/3] x86, crash: Allocate enough low-mem when
 crashkernel=high

Hi Joerg,

Yeah, it does happen if there are too many devices. I guess the reason no
reports have come to us on RHEL is that we always use an auto mechanism:
we first try to allocate from below 896M; if that fails, we try below 4G;
if that fails, we try above 4G.

This could be solved in 2 ways:

1) We could optimize the distro shell scripts which build the initramfs
for kdump so that only the devices which are necessary are included in the
kdump kernel. E.g. for network dumping, only the NIC which connects to the
dump target would be brought up; this can decrease the DMA memory
requirement.

2) Increase low-mem when crashkernel=high. But we have to be careful
here. We implemented crashkernel=high not only because of the unhappiness
that crashkernel reservation is limited to below 4G, but also because
dma/dma32 memory space is precious on some systems. If crashkernel=high
still reserves too much low memory by default, that defeats the purpose,
so it's important to find the balance. If we have to increase the default
low-mem, how much memory is enough, and why 256M? Why not
128M/192M/320M/384M? And if 256M works on your system, what if another
person says it doesn't work because there are more devices on his system?

Anyway, I understand the requirement, but we need to find out how much
memory can satisfy most systems.


Thanks
Baoquan

On 01/06/15 at 03:51pm, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@...e.de>
> 
> When the crashkernel is loaded above 4GiB in memory the
> first kernel only allocates 72MiB of low-memory for the DMA
> requirements of the second kernel. On systems with many
> devices this is not enough and causes device driver
> initialization errors and failed crash dumps. Set this
> default value to 256MiB to make sure there is enough memory
> available for DMA.
> 
> Signed-off-by: Joerg Roedel <jroedel@...e.de>
> ---
>  arch/x86/kernel/setup.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index ab4734e..d6e6a6d 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -536,8 +536,11 @@ static void __init reserve_crashkernel_low(void)
>  		 *	swiotlb overflow buffer: now is hardcoded to 32k.
>  		 *		We round it to 8M for other buffers that
>  		 *		may need to stay low too.
> +		 *		Also make sure we allocate enough extra
> +		 *		low memory so that we don't run out of DMA
> +		 *		buffers for 32bit devices.
>  		 */
> -		low_size = swiotlb_size_or_default() + (8UL<<20);
> +		low_size = max(swiotlb_size_or_default() + (8UL<<20), 256UL<<20);
>  		auto_set = true;
>  	} else {
>  		/* passed with crashkernel=0,low ? */
> -- 
> 1.9.1
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
