Message-ID: <52a86012-026e-12e5-2c56-7e86537bab73@huawei.com>
Date: Mon, 6 Dec 2021 22:10:06 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>,
<linux-arm-kernel@...ts.infradead.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] arm64: mm: Make randomization work again in some cases
Hello, Ard and Catalin, kindly ping...
On 2021/11/4 14:27, Kefeng Wang wrote:
> After commit 97d6786e0669 ("arm64: mm: account for hotplug memory when
> randomizing the linear region"), KASLR may no longer work in some
> cases, e.g. without memory hotplug and with va=39/pa=44, i.e. when the
> linear region size is smaller than the CPU's addressable PA range: the
> randomization fails now but worked before this commit. Let's calculate
> the PA range from the memblock end/start when CONFIG_MEMORY_HOTPLUG is
> not enabled.
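>
> For example, with va=39 the linear region covers half of the kernel VA
> space, i.e. BIT(38) = 256GB, while pa=44 makes the parange-derived span
> BIT(44) = 16TB, so the margin 256GB - 16TB is negative and the
> randomization is skipped. With the memblock-based calculation and, say,
> 4GB of populated DRAM, the margin is 256GB - 4GB, which leaves plenty
> of room to randomize.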
>
> Meanwhile, let's add a warning message if the linear region size is too
> small for randomization.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
> ---
> Hi Ard, one more question: the parange from the mmfr0 register may also be
> too large, so even with this patch the randomization still could not work.
>
> If we know the max physical memory range (including hotplug memory), could
> we add a way (maybe a cmdline option) to set the max parange, so that
> randomization works in more cases? Any thoughts?
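>
> A rough sketch of the idea (the parameter name "max_parange" and the
> helper parse_max_parange() below are made up for illustration, not an
> existing interface), e.g. in arch/arm64/mm/init.c:
>
>   static unsigned int max_phys_shift __initdata;
>
>   /* e.g. max_parange=40 would cap the PA range assumed for the KASLR margin */
>   static int __init parse_max_parange(char *str)
>   {
>   	return kstrtouint(str, 0, &max_phys_shift);
>   }
>   early_param("max_parange", parse_max_parange);
>
> and then arm64_memblock_init() could clamp the parange-derived phys shift
> to max_phys_shift (when non-zero) before computing 'range'.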
>
> arch/arm64/mm/init.c | 30 +++++++++++++++++++++---------
> 1 file changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index a8834434af99..27ec7f2c6fdb 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -284,21 +284,33 @@ void __init arm64_memblock_init(void)
>  
>  	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
>  		extern u16 memstart_offset_seed;
> -		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> -		int parange = cpuid_feature_extract_unsigned_field(
> -					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
> -		s64 range = linear_region_size -
> -			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
> +		s64 range;
> +
> +		if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
> +			u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
> +			int parange = cpuid_feature_extract_unsigned_field(
> +					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
> +			range = linear_region_size -
> +				BIT(id_aa64mmfr0_parange_to_phys_shift(parange));
> +
> +		} else {
> +			range = linear_region_size -
> +				(memblock_end_of_DRAM() - memblock_start_of_DRAM());
> +		}
>  
>  		/*
>  		 * If the size of the linear region exceeds, by a sufficient
>  		 * margin, the size of the region that the physical memory can
>  		 * span, randomize the linear region as well.
>  		 */
> -		if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
> -			range /= ARM64_MEMSTART_ALIGN;
> -			memstart_addr -= ARM64_MEMSTART_ALIGN *
> -					 ((range * memstart_offset_seed) >> 16);
> +		if (memstart_offset_seed > 0) {
> +			if (range < (s64)ARM64_MEMSTART_ALIGN) {
> +				pr_warn("linear mappings size is too small for KASLR\n");
> +			} else {
> +				range /= ARM64_MEMSTART_ALIGN;
> +				memstart_addr -= ARM64_MEMSTART_ALIGN *
> +						 ((range * memstart_offset_seed) >> 16);
> +			}
>  		}
>  	}
>  