Message-Id: <20230407011507.17572-1-bhe@redhat.com>
Date: Fri, 7 Apr 2023 09:15:04 +0800
From: Baoquan He <bhe@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: catalin.marinas@....com, rppt@...nel.org,
thunder.leizhen@...wei.com, will@...nel.org, ardb@...nel.org,
horms@...nel.org, John.p.donnelly@...cle.com,
kexec@...ts.infradead.org, linux-arm-kernel@...ts.infradead.org,
Baoquan He <bhe@...hat.com>
Subject: [PATCH v2 0/3] arm64: kdump: take off the protection on crashkernel memory region

Problem:
=======
On arm64, block and section mappings are supported when building page
tables. However, base page mapping is currently enforced for the whole
linear mapping if CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled and
the crashkernel= kernel parameter is set. This lengthens the linear
mapping process during boot and causes severe performance degradation
at run time.
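
For reference, below is a condensed sketch of the pre-series logic; the
helper and call site are abridged from arch/arm64/include/asm/memory.h
and arch/arm64/mm/mmu.c, and the exact code differs across kernel
versions:

        /* True when the crashkernel region cannot be known at mapping time. */
        static inline bool defer_reserve_crashkernel(void)
        {
                return IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
        }

        static void __init map_mem(pgd_t *pgdp)
        {
                int flags = NO_EXEC_MAPPINGS;

                /* ... */
                /*
                 * crashkernel= is set but the region is not known yet, so
                 * fall back to base pages for the whole linear mapping.
                 */
                if (crash_mem_map && defer_reserve_crashkernel())
                        flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
                /* ... map all memblock regions with 'flags' ... */
        }
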
Root cause:
==========
On arm64, crashkernel reservation relies on knowing the upper limit of
the low memory zone, because it needs to reserve memory in that zone so
that DMA addressing of devices in the kdump kernel can be satisfied.
However, the upper limit of low memory on arm64 is variable, and it can
only be determined late, once bootmem_init() has been called [1].
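
To illustrate, the reservation is bounded by arm64_dma_phys_limit,
which is only finalized during bootmem_init(). The snippet below is an
abridged sketch along the lines of arch/arm64/mm/init.c; details
differ between kernel versions:

        /* Current arm64 boot protocol requires 2MB alignment */
        #define CRASH_ALIGN             SZ_2M
        #define CRASH_ADDR_LOW_MAX      arm64_dma_phys_limit /* set in bootmem_init() */

        static void __init reserve_crashkernel(void)
        {
                phys_addr_t crash_base, crash_size;

                /* ... parse crashkernel= into crash_size (and maybe a base) ... */
                crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
                                                       0, CRASH_ADDR_LOW_MAX);
                /* ... */
        }
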
In addition, the crashkernel region needs to be mapped with base page
granularity when the linear mapping is created, because kdump protects
the crashkernel region via set_memory_valid(,0) after the kdump kernel
is loaded, and arm64 does not support splitting an already built block
or section mapping well due to a CPU restriction [2]. Unfortunately,
the linear mapping is done before bootmem_init().
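
The protection in question is sketched below, abridged from the
pre-series arch/arm64/kernel/machine_kexec.c. set_memory_valid() flips
the valid bit of individual PTEs, which only works if the region was
mapped with base pages in the first place:

        void arch_kexec_protect_crashkres(void)
        {
                int i;

                for (i = 0; i < kexec_crash_image->nr_segments; i++)
                        set_memory_valid(
                                __phys_to_virt(kexec_crash_image->segment[i].mem),
                                kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 0);
        }

        /* arch_kexec_unprotect_crashkres() mirrors this with the last argument 1. */
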
To resolve the above conflict on arm64, the compromise was to enforce
base page mapping for the entire linear mapping whenever crashkernel is
set and CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled. Hence
performance is sacrificed.

Solution:
=========
Compared with always taking base page mapping for the whole linear
region, it is better to take off the protection on the crashkernel
memory region for now: the protection only matters in the
one-in-a-million case where something stamps on the crashkernel region,
while base page mapping of the whole linear mapping constantly
penalizes every arm64 system booted with crashkernel set.

This also gives distros a chance to back-port this patchset to fix the
performance issue caused by base page mapping of the whole linear
region.
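
With the arm64 overrides deleted, kexec falls back to the generic weak
no-op stubs in kernel/kexec_core.c, so no page-granularity requirement
remains on the crashkernel region:

        void __weak arch_kexec_protect_crashkres(void)
        {}

        void __weak arch_kexec_unprotect_crashkres(void)
        {}
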
TODO:
======
Add a purgatory to the kexec_file_load interface on arm64; checksum
verification can then be done there to detect whether the crashkernel
memory region has been stamped on.

v1->v2:
- When trying to revert commit 031495635b46, two hunks were missed in
  the v1 post. Remove them in v2. Thanks to Leizhen for pointing this
  out.
- Remove the code comment above the arm64_dma_phys_limit definition
  that was added in commit 031495635b46.
- Move the arm64_dma_phys_limit assignment back into zone_sizes_init()
  for the case where neither CONFIG_ZONE_DMA nor CONFIG_ZONE_DMA32 is
  enabled; see the sketch after this list.
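
A condensed sketch of the resulting zone_sizes_init() in
arch/arm64/mm/init.c; the DMA-zone branches are simplified here:

        static void __init zone_sizes_init(void)
        {
                unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};

        #ifdef CONFIG_ZONE_DMA
                arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
                max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
        #endif
        #ifdef CONFIG_ZONE_DMA32
                max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
                if (!arm64_dma_phys_limit)
                        arm64_dma_phys_limit = max_zone_phys(32);
        #endif
                /* No DMA zones configured: all of memory counts as "low". */
                if (!arm64_dma_phys_limit)
                        arm64_dma_phys_limit = PHYS_MASK + 1;
                max_zone_pfns[ZONE_NORMAL] = max_pfn;

                free_area_init(max_zone_pfns);
        }
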
[1] https://lore.kernel.org/all/YrIIJkhKWSuAqkCx@arm.com/T/#u
[2] https://lore.kernel.org/linux-arm-kernel/20190911182546.17094-1-nsaenzjulienne@suse.de/T/

Baoquan He (3):
arm64: kdump: take off the protection on crashkernel memory region
arm64: kdump: do not map crashkernel region specifically
arm64: kdump: defer the crashkernel reservation for platforms with no
DMA memory zones
arch/arm64/include/asm/kexec.h | 6 -----
arch/arm64/include/asm/memory.h | 5 ----
arch/arm64/kernel/machine_kexec.c | 20 --------------
arch/arm64/mm/init.c | 34 +++---------------------
arch/arm64/mm/mmu.c | 43 -------------------------------
5 files changed, 3 insertions(+), 105 deletions(-)
--
2.34.1