Message-ID: <Z-kCkh1XWX8Rwjwz@kernel.org>
Date: Sun, 30 Mar 2025 11:36:34 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Dev Jain <dev.jain@....com>
Cc: Ryan Roberts <ryan.roberts@....com>, catalin.marinas@....com,
will@...nel.org, gshan@...hat.com, steven.price@....com,
suzuki.poulose@....com, tianyaxiong@...inos.cn, ardb@...nel.org,
david@...hat.com, urezki@...il.com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: pageattr: Explicitly bail out when changing
permissions for vmalloc_huge mappings
On Sun, Mar 30, 2025 at 01:53:57PM +0530, Dev Jain wrote:
>
>
> On 30/03/25 1:02 pm, Mike Rapoport wrote:
> > On Sat, Mar 29, 2025 at 09:46:56AM +0000, Ryan Roberts wrote:
> > > On 28/03/2025 18:50, Mike Rapoport wrote:
> > > > On Fri, Mar 28, 2025 at 11:51:03AM +0530, Dev Jain wrote:
> > > > > arm64 uses apply_to_page_range to change permissions for kernel VA mappings,
> > > >
> > > > for vmalloc mappings ^
> > > >
> > > > arm64 does not allow changing permissions to any VA address right now.
> > > >
> > > > > which does not support changing permissions for leaf mappings. This function
> > > > > will change permissions until it encounters a leaf mapping, and will bail
> > > > > out. To avoid this partial change, explicitly disallow changing permissions
> > > > > for VM_ALLOW_HUGE_VMAP mappings.
> > > > >
> > > > > Signed-off-by: Dev Jain <dev.jain@....com>
> > >
> > > I wonder if we want a Fixes: tag here? It's certainly a *latent* bug.
> >
> > We have only a few places that use vmalloc_huge() or VM_ALLOW_HUGE_VMAP, and
> > if there were code that played permission games on these allocations, x86
> > set_memory would blow up immediately, so I don't think Fixes: is needed
> > here.
>
> But I think x86 can handle this (split_large_page() in __change_page_attr())?
Yes, but it also updates the corresponding direct map entries when vmalloc
permissions change, and that direct map update presumes physical contiguity
of the range.
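
Just to picture the physical-contiguity assumption above, a rough sketch
(hypothetical helper, not existing kernel code) of the property that plain
vmalloc()/vmap() areas do not guarantee:

/*
 * Hypothetical sketch only: returns true if the pages backing a vmalloc
 * area happen to be physically contiguous.  Ordinary vmalloc areas give
 * no such guarantee, so an alias update that assumes a single contiguous
 * physical range cannot be applied to them blindly.
 */
static bool vm_area_phys_contig(const struct vm_struct *area)
{
	unsigned int i;

	for (i = 1; i < area->nr_pages; i++) {
		if (page_to_phys(area->pages[i]) !=
		    page_to_phys(area->pages[i - 1]) + PAGE_SIZE)
			return false;
	}
	return true;
}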
> > > > > ---
> > > > > arch/arm64/mm/pageattr.c | 4 ++--
> > > > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > > > > index 39fd1f7ff02a..8337c88eec69 100644
> > > > > --- a/arch/arm64/mm/pageattr.c
> > > > > +++ b/arch/arm64/mm/pageattr.c
> > > > > @@ -96,7 +96,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> > > > > * we are operating on does not result in such splitting.
> > > > > *
> > > > > * Let's restrict ourselves to mappings created by vmalloc (or vmap).
> > > > > - * Those are guaranteed to consist entirely of page mappings, and
> > > > > + * Disallow VM_ALLOW_HUGE_VMAP vmalloc mappings so that
> > > >
> > > > I'd keep mention of page mappings in the comment, e.g
> > > >
> > > > * Disallow VM_ALLOW_HUGE_VMAP mappings to guarantee that only page
> > > > * mappings are updated and splitting is never needed.
> > > >
> > > > With this and changelog updates Ryan asked for
> > > >
> > > > Reviewed-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
> > > >
> > > >
> > > > > * splitting is never needed.
> > > > > *
> > > > > * So check whether the [addr, addr + size) interval is entirely
> > > > > @@ -105,7 +105,7 @@ static int change_memory_common(unsigned long addr, int numpages,
> > > > > area = find_vm_area((void *)addr);
> > > > > if (!area ||
> > > > > end > (unsigned long)kasan_reset_tag(area->addr) + area->size ||
> > > > > - !(area->flags & VM_ALLOC))
> > > > > + ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC))
> > > > > return -EINVAL;
> > > > > if (!numpages)
> > > > > --
> > > > > 2.30.2
> > > > >
> > > >
> > >
> >
>
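As an aside for anyone reading along, the effect of the combined-flag test in
the hunk quoted above can be summarised with a small sketch (illustration
only, not code from the patch):

/*
 * Mirrors the check added by the patch: only areas created by
 * vmalloc()/vmap() without VM_ALLOW_HUGE_VMAP may have their
 * permissions changed.
 */
static bool perms_change_allowed(unsigned long vm_flags)
{
	return (vm_flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) == VM_ALLOC;
}

/*
 * perms_change_allowed(VM_ALLOC)                      -> true
 * perms_change_allowed(VM_ALLOC | VM_ALLOW_HUGE_VMAP) -> false, -EINVAL
 * perms_change_allowed(0)                             -> false, -EINVAL
 */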
--
Sincerely yours,
Mike.