Message-ID: <Y1lIoTBNvHrKX5K9@MiWiFi-R3L-srv>
Date: Wed, 26 Oct 2022 22:48:01 +0800
From: Baoquan He <bhe@...hat.com>
To: Borislav Petkov <bp@...en8.de>, david@...hat.com
Cc: Eric DeVolder <eric.devolder@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, x86@...nel.org,
kexec@...ts.infradead.org, ebiederm@...ssion.com,
dyoung@...hat.com, vgoyal@...hat.com, tglx@...utronix.de,
mingo@...hat.com, dave.hansen@...ux.intel.com, hpa@...or.com,
nramas@...ux.microsoft.com, thomas.lendacky@....com,
robh@...nel.org, efault@....de, rppt@...nel.org,
sourabhjain@...ux.ibm.com, linux-mm@...ck.org
Subject: Re: [PATCH v12 7/7] x86/crash: Add x86 crash hotplug support
On 10/25/22 at 12:31pm, Borislav Petkov wrote:
> On Thu, Oct 13, 2022 at 10:57:28AM +0800, Baoquan He wrote:
> > The concern about the range number is mainly for virt guest systems.
>
> And why would virt emulate 1K hotpluggable DIMM slots and not emulate a
> real machine?
Well, currently memory hotplug is an important feature on virt systems
for dynamically growing/shrinking a system's memory. If virt only
emulated a real machine, it would be no different from a bare metal
system.
IIRC, the balloon driver or the virtio-mem feature can add a memory
board, e.g. 1G with a block size of 128M, i.e. 8 blocks added. When
shrinking this 1G of memory later, it takes a best-effort approach to
hot removing memory, meaning any memory block that is occupied will be
kept. In the end we may only be able to remove every second block, 4
blocks altogether, and the remaining un-removed blocks then produce 4
separate memory regions. Like this, a virt guest can end up with many
memory regions in the kernel after memory has been added and removed;
the sketch below illustrates the effect.
If I am wrong, please correct me, David.
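
To make it concrete, here is a rough user space sketch (my own
illustration, not kernel or virtio-mem code; the block count and size
are just the example numbers from above):

#include <stdio.h>
#include <stdbool.h>

#define NR_BLOCKS	8	/* 1G added as 8 x 128M blocks */

int main(void)
{
	bool online[NR_BLOCKS];
	int i, regions = 0;

	/* assume every second block stays busy and cannot be removed */
	for (i = 0; i < NR_BLOCKS; i++)
		online[i] = (i % 2 == 0);

	/* count the contiguous runs of blocks left behind */
	for (i = 0; i < NR_BLOCKS; i++)
		if (online[i] && (i == 0 || !online[i - 1]))
			regions++;

	/* prints 4: one hot added 1G board became 4 separate regions */
	printf("%d separate memory regions left\n", regions);
	return 0;
}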
>
> > On bare metal systems, basically only very high end servers support
> > memory hotplug. I once visited a customer's lab and saw one server
> > with 8 slots; into each slot a box containing about 20 cpus and at
> > most 2T of memory can be plugged at one time. So people won't make
> > too many slots for hotplugging since it's too expensive.
>
> There you have it - the persuading argument.
>
> > I checked the user space kexec code; the maximum memory range
> > number for x86_64 was settled because of a HPE SGI system, and
> > nobody has complained about it since. Please see the user space
> > kexec-tools commit below in
> > https://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
> >
> > The memory ranges may not all be made by different DIMM slots; some could be
> > firmware reservation, e.g. efi/BIOS dug out physical memory,
>     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> I don't know what that means.
>
> If it is firmware crap, you want to exclude that from kdump anyway.
Yes. Now assume we have a HPE SGI system which has memory hotplug
capability. The system itself already has more than 1024 memory
regions. When we then hot add an extra memory board, we want the newly
added memory regions included in the elfcorehdr so that they will be
dumped out in the kdump kernel.

That's why I earlier suggested 2048 for the number of memory regions.
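
To put numbers on it, here is how I understand the phdr count
composes (illustration only; the cpu and range counts are just
examples, not taken from a real system):

#include <stdio.h>

int main(void)
{
	unsigned int nr_cpu_notes = 1024;  /* one PT_NOTE per possible cpu */
	unsigned int nr_mem_ranges = 2048; /* one PT_LOAD per 'System RAM' range */
	unsigned int nr_phdrs;

	/* +1 for the vmcoreinfo PT_NOTE */
	nr_phdrs = nr_cpu_notes + nr_mem_ranges + 1;

	/* hot adding memory grows nr_mem_ranges, hence the headroom */
	printf("phdrs needed: %u\n", nr_phdrs);
	return 0;
}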
>
> > Now CONFIG_NR_CPUS has a maximum of 8192, and user space
> > kexec-tools has a maximum memory range number of 2048. We can
> > conservatively take the current 8192 + 2048 = 10K as the default
> > value, or take 8192 + 2048 * 2 = 12K, which is two times the
> > maximum memory range number in kexec-tools. What do you think?
>
> I still think that we should stick to reality and support what is
> possible not what is potentially and theoretically there.
Yes, agreed. We should try to pick a number which satisfies the needs
seen in reality.

For the Kconfig option CRASH_MAX_MEMORY_RANGES in this patch, I have
three items to suggest:
1) The name is not good; it doesn't reflect the fact that it's the
number of program headers of the elfcorehdr, which includes both the
cpu note numbers and the memory region numbers.
2) Default cpu number: I suggest 512 or 1024. The biggest number I
have ever seen in reality is 384, and on virt systems it won't be too
big. A smaller value is also OK; we can enlarge it when people really
have a super machine and run into the problem. Below is an excerpt
from arch/x86/Kconfig:
config NR_CPUS_DEFAULT
int
depends on X86_64
default 8192 if MAXSMP
default 64 if SMP
default 1 if !SMP
3) For memory regions, I would suggest 2048. Likewise, a smaller value
is also fine; we can enlarge it when a real system runs into this.
I made a draft here for reference, based on my understanding. Please
feel free to change it.
+config CRASH_ELF_CORE_PHDRS_NUM
+	int
+	depends on CRASH_DUMP && KEXEC_FILE && (HOTPLUG_CPU || MEMORY_HOTPLUG)
+	default 3072
+	help
+	  For the kexec_file_load path, specify the default number of
+	  phdrs for the vmcore, e.g. the memory regions represented by the
+	  'System RAM' entries in /proc/iomem and the cpu notes of each
+	  present cpu stored in /sys/devices/system/cpu/cpuX/crash_notes.
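
For what it's worth, the buffer cost of such a default stays small.
Here's a rough sizing sketch (my own illustration, reusing the draft
option name from above just for the arithmetic):

#include <elf.h>
#include <stdio.h>

#define CRASH_ELF_CORE_PHDRS_NUM	3072

int main(void)
{
	unsigned long sz = sizeof(Elf64_Ehdr) +
			   CRASH_ELF_CORE_PHDRS_NUM * sizeof(Elf64_Phdr);

	/* 64 + 3072 * 56 = 172096 bytes, i.e. roughly 168K */
	printf("elfcorehdr buffer: %lu bytes\n", sz);
	return 0;
}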
Thanks