[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAMj1kXGxyV0=s6jVZ674O_2amkYSnwSnubnozbzD6g6GOMJE-A@mail.gmail.com>
Date: Mon, 15 Feb 2021 20:27:14 +0100
From: Ard Biesheuvel <ardb@...nel.org>
To: Pavel Tatashin <pasha.tatashin@...een.com>
Cc: Tyler Hicks <tyhicks@...ux.microsoft.com>,
James Morris <jmorris@...ei.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Anshuman Khandual <anshuman.khandual@....com>,
Mike Rapoport <rppt@...nel.org>,
Logan Gunthorpe <logang@...tatee.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/1] arm64: mm: correct the inside linear map
boundaries during hotplug check
On Mon, 15 Feb 2021 at 20:22, Pavel Tatashin <pasha.tatashin@...een.com> wrote:
>
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
>
> The start physical address that linear map covers can be actually at the
> end of the range because of randomization. Check that and if so reduce it
> to 0.
>
> This can be verified on QEMU with setting kaslr-seed to ~0ul:
>
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END: __pa(PAGE_END - 1) = 1000bfffffff
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@...een.com>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> Tested-by: Tyler Hicks <tyhicks@...ux.microsoft.com>
> ---
> arch/arm64/mm/mmu.c | 20 ++++++++++++++++++--
> 1 file changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ae0c3d023824..cc16443ea67f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1444,14 +1444,30 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>
> static bool inside_linear_region(u64 start, u64 size)
> {
> + u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> + u64 end_linear_pa = __pa(PAGE_END - 1);
> +
> + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> + /*
> + * Check for a wrap, it is possible because of randomized linear
> + * mapping the start physical address is actually bigger than
> + * the end physical address. In this case set start to zero
> + * because [0, end_linear_pa] range must still be able to cover
> + * all addressable physical addresses.
> + */
> + if (start_linear_pa > end_linear_pa)
> + start_linear_pa = 0;
> + }
> +
> + WARN_ON(start_linear_pa > end_linear_pa);
> +
> /*
> * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
> * accommodating both its ends but excluding PAGE_END. Max physical
> * range which can be mapped inside this linear mapping range, must
> * also be derived from its end points.
> */
> - return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
> - (start + size - 1) <= __pa(PAGE_END - 1);
Can't we simply use signed arithmetic here? This expression works fine
if the quantities are all interpreted as s64 instead of u64
> + return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
> }
>
> int arch_add_memory(int nid, u64 start, u64 size,
> --
> 2.25.1
>
Powered by blists - more mailing lists