Message-ID: <Z3glWkXg6EnVx8WU@arm.com>
Date: Fri, 3 Jan 2025 17:58:50 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: Zhenhua Huang <quic_zhenhuah@...cinc.com>
Cc: Anshuman Khandual <anshuman.khandual@....com>, will@...nel.org,
ardb@...nel.org, ryan.roberts@....com, mark.rutland@....com,
joey.gouly@....com, dave.hansen@...ux.intel.com,
akpm@...ux-foundation.org, chenfeiyang@...ngson.cn,
chenhuacai@...nel.org, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] arm64: mm: implement vmemmap_check_pmd for arm64
On Fri, Jan 03, 2025 at 10:43:51AM +0800, Zhenhua Huang wrote:
> On 2025/1/3 2:12, Catalin Marinas wrote:
> > On Fri, Dec 27, 2024 at 08:27:18AM +0530, Anshuman Khandual wrote:
> > > On 12/21/24 00:05, Catalin Marinas wrote:
> > > > On Mon, Dec 09, 2024 at 05:42:27PM +0800, Zhenhua Huang wrote:
> > > > > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > > > > index fd59ee44960e..41c7978a92be 100644
> > > > > --- a/arch/arm64/mm/mmu.c
> > > > > +++ b/arch/arm64/mm/mmu.c
> > > > > @@ -1169,7 +1169,8 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
> > > > >  			       unsigned long addr, unsigned long next)
> > > > >  {
> > > > >  	vmemmap_verify((pte_t *)pmdp, node, addr, next);
> > > > > -	return 1;
> > > > > +
> > > > > +	return pmd_sect(*pmdp);
> > >
> > > Please change this as pmd_sect(READ_ONCE(*pmdp)) instead.
> > >
> > > > >  }
> > > > > 
> > > > >  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> > > >
> > > > Don't we need this patch only if we implement the first one? Please fold
> > > > it into the other patch.
> > >
> > > Seems like these patches might not be related.
> > >
> > > While creating huge page based vmemmap mappings in vmemmap_populate_hugepages(),
> > > vmemmap_check_pmd() validates whether an already populated (i.e. !pmd_none()) PMD
> > > represents a huge mapping that can be skipped thereafter.
> > >
> > > The current arm64 implementation of vmemmap_check_pmd() unconditionally returns 1,
> > > thus asserting that the given populated PMD entry is indeed a huge one, which will
> > > be the case unless something is wrong. vmemmap_verify() only checks that the node
> > > the pfn was allocated from is local.
> > >
> > > int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
> > > 				unsigned long addr, unsigned long next)
> > > {
> > > 	vmemmap_verify((pte_t *)pmdp, node, addr, next);
> > > 	return 1;
> > > }
> > >
> > > However, it does not actually check that the entry is a section mapping, which it
> > > should. Returning pmd_sect(READ_ONCE(*pmdp)) is the right thing to do, and should
> > > have been done from the beginning when vmemmap_check_pmd() was added. I guess it
> > > was not because arm64's original vmemmap_populate() relied only on vmemmap_verify()
> > > as well, so this probably does not need a "Fixes:" tag.
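
Spelling out the end state with both of the above folded in (a sketch of
the proposed change, not a tested patch):

int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
				unsigned long addr, unsigned long next)
{
	vmemmap_verify((pte_t *)pmdp, node, addr, next);

	/* report a huge mapping only if the entry is a section mapping */
	return pmd_sect(READ_ONCE(*pmdp));
}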
> >
> > I did not say the patch is wrong, only that it wouldn't be needed unless
> > we have the other patch in this series. However, if we do apply the
> > other patch, we definitely need this change, so keeping them together
> > would make it easier to backport.
>
> Hi Catalin,
>
> Based on our current discussion of patch #1, we will prohibit
> hugepages (vmemmap mappings) for all hotplugged sections. The flow:
> vmemmap_populate
> vmemmap_populate_hugepages
> vmemmap_check_pmd
>
> will *only* be called for non-early sections. Therefore, with patch #1, I
> don't see this patch as essential. Would it be acceptable if we do not
> backport it? Anshuman's suggestion seems reasonable to me, so I have
> separated the patch out:
> https://lore.kernel.org/lkml/20250102074047.674156-1-quic_zhenhuah@quicinc.com/
Ah, ok, so if you only call vmemmap_populate_basepages() for hotplugged
memory, vmemmap_check_pmd() won't even be called. So yeah, in this
case there won't be any dependency on this change. If we somehow end up
with a mix of vmemmap basepages and hugepages for hotplugged memory, we
probably need to update vmemmap_check_pmd() as well (and backport
together).
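
To illustrate what I mean, a purely hypothetical sketch, with a made-up
is_early_section_range() helper standing in for whatever predicate patch
#1 ends up using:

int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
			       struct vmem_altmap *altmap)
{
	/*
	 * Hypothetical dispatch: if hotplugged (non-early) sections are
	 * always mapped with base pages, the hugepages path, and with it
	 * vmemmap_check_pmd(), is never reached for them.
	 */
	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
	    !is_early_section_range(start, end))
		return vmemmap_populate_basepages(start, end, node, altmap);

	return vmemmap_populate_hugepages(start, end, node, altmap);
}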
--
Catalin