Message-ID: <20240508163526.GM4650@nvidia.com>
Date: Wed, 8 May 2024 13:35:26 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Zi Yan <ziy@...dia.com>
Cc: Lance Yang <ioworker0@...il.com>, Alistair Popple <apopple@...dia.com>,
akpm@...ux-foundation.org, willy@...radead.org, sj@...nel.org,
maskray@...gle.com, ryan.roberts@....com, david@...hat.com,
21cnbao@...il.com, mhocko@...e.com, fengwei.yin@...el.com,
zokeefe@...gle.com, shy828301@...il.com, xiehuan09@...il.com,
libang.li@...group.com, wangkefeng.wang@...wei.com,
songmuchun@...edance.com, peterx@...hat.com, minchan@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Baolin Wang <baolin.wang@...ux.alibaba.com>
Subject: Re: [PATCH v4 2/3] mm/rmap: integrate PMD-mapped folio splitting
into pagewalk loop
On Wed, May 08, 2024 at 12:22:08PM -0400, Zi Yan wrote:
> On 8 May 2024, at 11:52, Jason Gunthorpe wrote:
>
> > On Wed, May 08, 2024 at 10:56:34AM -0400, Zi Yan wrote:
> >
> >> Lance is improving try_to_unmap_one() to support unmapping a PMD-mapped THP
> >> as a whole, so he moves split_huge_pmd_address() inside the
> >> while (page_vma_mapped_walk(&pvmw)) loop, after
> >> mmu_notifier_invalidate_range_start(), as split_huge_pmd_locked(), and drops
> >> the mmu notifier ops that split_huge_pmd_address() used to contain.
> >> I wonder if that could cause issues, since the mmu_notifier_invalidate_range_start()
> >> before the while loop only covers the range of the original address, while
> >> splitting the huge PMD affects the entire PMD address range; these two ranges
> >> might not be the same.
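(For readers following along: the shape of the change being described is
roughly this. Simplified, and the exact code in Lance's patch may differ.)

	range.end = vma_address_end(&pvmw);
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
				address, range.end);	/* may be < PMD size */
	mmu_notifier_invalidate_range_start(&range);

	while (page_vma_mapped_walk(&pvmw)) {
		if (flags & TTU_SPLIT_HUGE_PMD) {
			/*
			 * Splits the whole PMD range, but no longer does
			 * its own invalidate_range_start/end the way
			 * split_huge_pmd_address() did.
			 */
			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
					      false, folio);
		}
		/* ... unmap at PTE granularity ... */
	}

	mmu_notifier_invalidate_range_end(&range);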
> >
> > That does not sound entirely good..
> >
> > I suppose it depends on what the split does: if the MM page table has the
> > same translation before and after the split, then perhaps no invalidation
> > is even necessary.
>
> Before the split, a single PMD maps a PMD THP (order-9). After the split,
> 512 PTEs map the same THP. Unless the secondary TLB does not support PMD
> mappings and uses 512 PTEs instead, it seems to be an issue, from my
> understanding.
I may not recall fully, but I don't think any secondaries are
so sensitive to the PMD/PTE distinction.. At least the ones using
hmm_range_fault() are not.
When the PTE eventually comes up for invalidation, the secondary
should wipe out whatever granule it may have captured.
Though, perhaps KVM should be checked carefully.
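For example, the interval-notifier invalidate callback of a typical
hmm_range_fault() user just drops every secondary translation overlapping
the invalidated range, regardless of the granularity it was captured at.
A minimal sketch (my_mirror and my_mirror_unmap are illustrative names,
not a real driver):

	struct my_mirror {
		struct mmu_interval_notifier mni;
		/* device page table state, locks, ... */
	};

	static bool my_mirror_invalidate(struct mmu_interval_notifier *mni,
					 const struct mmu_notifier_range *range,
					 unsigned long cur_seq)
	{
		struct my_mirror *mirror =
			container_of(mni, struct my_mirror, mni);

		/* Simplified: a real driver would trylock here instead */
		if (!mmu_notifier_range_blockable(range))
			return false;

		/* Make concurrent hmm_range_fault() callers retry */
		mmu_interval_set_seq(mni, cur_seq);

		/*
		 * Throw away everything overlapping the range, whether it
		 * was captured from a PMD or a PTE; the next
		 * hmm_range_fault() re-reads the page tables at whatever
		 * granularity they have by then.
		 */
		my_mirror_unmap(mirror, range->start, range->end);
		return true;
	}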
> In terms of the two mmu_notifier ranges, the first is in split_huge_pmd_address()[1]
> and the second is in try_to_unmap_one()[2]. When try_to_unmap_one() is unmapping
> a subpage in the middle of a PMD THP, the former notifies about the PMD range
> change due to one PMD being split into 512 PTEs, and the latter only needs to
> notify about the invalidation of the unmapped PTE. I do not think the latter
> can replace the former, although a potential optimization could be removing
> the latter, as it is included in the range of the former.
I think we probably don't need both; either size might be fine, but
the larger size is definitely fine..
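E.g. one way to keep a single start/end pair that also covers the split
would be to widen the range up front when a split may happen. A sketch
(illustrative, not necessarily what the patch should do):

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
				address, vma_address_end(&pvmw));
	if (flags & TTU_SPLIT_HUGE_PMD) {
		/* Widen to the PMD boundaries the split will touch */
		range.start = address & HPAGE_PMD_MASK;
		range.end = max(range.end, range.start + HPAGE_PMD_SIZE);
	}
	mmu_notifier_invalidate_range_start(&range);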
> Regarding Lance's current code change, is it OK to change the mmu_notifier
> range after mmu_notifier_invalidate_range_start()?
No, it cannot be changed during a start/stop transaction.
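The contract is roughly this (see the comments in
include/linux/mmu_notifier.h):

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, start, end);
	mmu_notifier_invalidate_range_start(&range);
	/*
	 * Every page table change made here must fall inside
	 * [range.start, range.end); listeners may already have acted on
	 * exactly this range, so it must not be grown mid-transaction.
	 */
	mmu_notifier_invalidate_range_end(&range);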
Jason