Message-ID: <73bf9ad0-37a9-78f4-3583-2845dcb24f34@arm.com>
Date: Tue, 16 Feb 2021 08:25:34 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Ard Biesheuvel <ardb@...nel.org>,
Pavel Tatashin <pasha.tatashin@...een.com>
Cc: Tyler Hicks <tyhicks@...ux.microsoft.com>,
James Morris <jmorris@...ei.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
Logan Gunthorpe <logang@...tatee.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/1] arm64: mm: correct the inside linear map
boundaries during hotplug check
On 2/16/21 12:57 AM, Ard Biesheuvel wrote:
> On Mon, 15 Feb 2021 at 20:22, Pavel Tatashin <pasha.tatashin@...een.com> wrote:
>>
>> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
>> linear map range is not checked correctly.
>>
>> The start physical address that linear map covers can be actually at the
>> end of the range because of randomization. Check that and if so reduce it
>> to 0.
>>
>> This can be verified on QEMU with setting kaslr-seed to ~0ul:
>>
>> memstart_offset_seed = 0xffff
>> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
>> END: __pa(PAGE_END - 1) = 1000bfffffff
>>
>> Signed-off-by: Pavel Tatashin <pasha.tatashin@...een.com>
>> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
>> Tested-by: Tyler Hicks <tyhicks@...ux.microsoft.com>
>
>> ---
>> arch/arm64/mm/mmu.c | 20 ++++++++++++++++++--
>> 1 file changed, 18 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index ae0c3d023824..cc16443ea67f 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1444,14 +1444,30 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>>
>> static bool inside_linear_region(u64 start, u64 size)
>> {
>> + u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
>> + u64 end_linear_pa = __pa(PAGE_END - 1);
>> +
>> + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
>> + /*
>> + * Check for a wrap: with a randomized linear mapping, the
>> + * start physical address can be larger than the end physical
>> + * address. In that case set the start to zero, because the
>> + * [0, end_linear_pa] range must still cover all addressable
>> + * physical addresses.
>> + */
>> + if (start_linear_pa > end_linear_pa)
>> + start_linear_pa = 0;
>> + }
>> +
>> + WARN_ON(start_linear_pa > end_linear_pa);
>> +
>> /*
>> * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
>> * accommodating both its ends but excluding PAGE_END. Max physical
>> * range which can be mapped inside this linear mapping range, must
>> * also be derived from its end points.
>> */
>> - return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
>> - (start + size - 1) <= __pa(PAGE_END - 1);
>> + return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
>
> Can't we simply use signed arithmetic here? This expression works fine
> if the quantities are all interpreted as s64 instead of u64.
There is a new generic framework that expects the platform to provide two
distinct range points (low and high) for hotplug address comparison. Those
range points can differ depending on whether address randomization is
enabled and the flip occurs. Either way, this comparison here in the
platform code is going away.
This patch needs to be rebased on the new framework, which is part of linux-next.
https://patchwork.kernel.org/project/linux-mm/list/?series=425051