Message-ID: <Y1xVeP/RtG37zLYm@zn.tnic>
Date: Sat, 29 Oct 2022 00:19:36 +0200
From: Borislav Petkov <bp@...en8.de>
To: Eric DeVolder <eric.devolder@...cle.com>
Cc: Baoquan He <bhe@...hat.com>, david@...hat.com,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, x86@...nel.org,
kexec@...ts.infradead.org, ebiederm@...ssion.com,
dyoung@...hat.com, vgoyal@...hat.com, tglx@...utronix.de,
mingo@...hat.com, dave.hansen@...ux.intel.com, hpa@...or.com,
nramas@...ux.microsoft.com, thomas.lendacky@....com,
robh@...nel.org, efault@....de, rppt@...nel.org,
sourabhjain@...ux.ibm.com, linux-mm@...ck.org
Subject: Re: [PATCH v12 7/7] x86/crash: Add x86 crash hotplug support
On Fri, Oct 28, 2022 at 04:22:54PM -0500, Eric DeVolder wrote:
> /*
> * For the kexec_file_load() syscall path, specify the maximum number of
> * memory regions that the elfcorehdr buffer/segment can accommodate.
> * These regions are obtained via walk_system_ram_res(); eg. the
> * 'System RAM' entries in /proc/iomem.
> * This value is combined with NR_CPUS and multiplied by sizeof(Elf64_Phdr)
NR_CPUS_DEFAULT
> * to determine the final elfcorehdr memory buffer/segment size.
> * The value 8192, for example, covers a (sparsely populated) 1TiB system
> * consisting of 128MiB memblock size, while resulting in an elfcorehdr
> * memory buffer/segment size under 1MiB.
... and it is a sane choice trying to accommodate both actual bare-metal
and VM configurations."
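FWIW, the sizing arithmetic the comment describes boils down to something
like the sketch below - purely an illustration, the macro name and the
512-CPU figure are made up here and not what the patch actually uses:

/*
 * Minimal userspace sketch of the elfcorehdr segment sizing described
 * in the comment above. MAX_CRASH_MEMORY_RANGES and the nr_cpus value
 * are illustrative stand-ins, not the patch's actual identifiers.
 */
#include <elf.h>
#include <stdio.h>

#define MAX_CRASH_MEMORY_RANGES 8192	/* 1TiB / 128MiB memblock size */

int main(void)
{
	unsigned long nr_cpus = 512;	/* example NR_CPUS_DEFAULT-style value */
	unsigned long phdrs = MAX_CRASH_MEMORY_RANGES + nr_cpus;
	unsigned long bytes = phdrs * sizeof(Elf64_Phdr);

	/* 8192 * 56 bytes = 448KiB for the memory ranges alone, so the
	 * total stays comfortably under 1MiB even with the CPU PT_NOTEs. */
	printf("elfcorehdr segment: %lu bytes (%lu KiB)\n",
	       bytes, bytes / 1024);
	return 0;
}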
Yeah, it's a good start.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette