Message-ID: <CA+CK2bBvaSzfD_mN3HdBdLrvQ3XDEPY0o2J8Ho8sViWX2apWyA@mail.gmail.com>
Date: Mon, 15 Feb 2021 08:42:17 -0500
From: Pavel Tatashin <pasha.tatashin@...een.com>
To: Anshuman Khandual <anshuman.khandual@....com>
Cc: Tyler Hicks <tyhicks@...ux.microsoft.com>,
James Morris <jmorris@...ei.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
Logan Gunthorpe <logang@...tatee.com>, ardb@...nel.org,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] arm64: mm: correct the start of physical address in
linear map
On Mon, Feb 15, 2021 at 12:26 AM Anshuman Khandual
<anshuman.khandual@....com> wrote:
>
> Hello Pavel,
>
> On 2/13/21 6:53 AM, Pavel Tatashin wrote:
> > Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> > linear map range is not checked correctly.
> >
> > The start physical address that the linear map covers can actually be at the
> > end of the range because of randomization. Check for this and, if so, reduce
> > it to 0.
>
> Looking at the code, this seems possible if memstart_addr which is a signed
> value becomes large (after falling below 0) during arm64_memblock_init().
Right.
>
> >
> > This can be verified on QEMU with setting kaslr-seed to ~0ul:
> >
> > memstart_offset_seed = 0xffff
> > START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> > END: __pa(PAGE_END - 1) = 1000bfffffff
> >
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@...een.com>
> > Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> > ---
> > arch/arm64/mm/mmu.c | 15 +++++++++++++--
> > 1 file changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index ae0c3d023824..6057ecaea897 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -1444,14 +1444,25 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
> >
> > static bool inside_linear_region(u64 start, u64 size)
> > {
> > + u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> > + u64 end_linear_pa = __pa(PAGE_END - 1);
> > +
> > + /*
> > + * Check for a wrap: with a randomized linear mapping the start
> > + * physical address can actually be bigger than the end physical
> > + * address. In this case set start to zero because the [0, end_linear_pa]
> > + * range must still be able to cover all addressable physical addresses.
> > + */
>
> If this is possible only with randomized linear mapping, could you please
> add IS_ENABLED(CONFIG_RANDOMIZE_BASE) during the switch over. Wondering
> if WARN_ON(start_linear_pa > end_linear_pa) should be added otherwise, i.e.
> when linear mapping randomization is not enabled.
Yeah, good idea, I will add an ifdef for CONFIG_RANDOMIZE_BASE.
>
> > + if (start_linear_pa > end_linear_pa)
> > + start_linear_pa = 0;
>
> This looks okay but will double check and give it some more testing.
Thank you,
Pasha