Message-ID: <CAG48ez1UNFfpTUph6ZMuN61CoTGne05NLAweL_=ijRfqj5evMg@mail.gmail.com>
Date: Thu, 1 Jun 2023 00:18:43 +0200
From: Jann Horn <jannh@...gle.com>
To: Peter Xu <peterx@...hat.com>
Cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Yang Shi <shy828301@...il.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Alistair Popple <apopple@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
Ira Weiny <ira.weiny@...el.com>,
Steven Price <steven.price@....com>,
SeongJae Park <sj@...nel.org>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Zack Rusin <zackr@...are.com>, Jason Gunthorpe <jgg@...pe.ca>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Minchan Kim <minchan@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Song Liu <song@...nel.org>,
Thomas Hellstrom <thomas.hellstrom@...ux.intel.com>,
Russell King <linux@...linux.org.uk>,
"David S. Miller" <davem@...emloft.net>,
Michael Ellerman <mpe@...erman.id.au>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
linux-arm-kernel@...ts.infradead.org, sparclinux@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-s390@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock

On Wed, May 31, 2023 at 10:54 PM Peter Xu <peterx@...hat.com> wrote:
> On Wed, May 31, 2023 at 05:34:58PM +0200, Jann Horn wrote:
> > On Mon, May 29, 2023 at 8:25 AM Hugh Dickins <hughd@...gle.com> wrote:
> > > -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> > > - struct mm_struct *target_mm,
> > > - unsigned long target_addr, struct page *hpage,
> > > - struct collapse_control *cc)
> > > +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> > > {
> > > struct vm_area_struct *vma;
> > > - int target_result = SCAN_FAIL;
> > >
> > > - i_mmap_lock_write(mapping);
> > > + i_mmap_lock_read(mapping);
> > > vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> > > - int result = SCAN_FAIL;
> > > - struct mm_struct *mm = NULL;
> > > - unsigned long addr = 0;
> > > - pmd_t *pmd;
> > > - bool is_target = false;
> > > + struct mm_struct *mm;
> > > + unsigned long addr;
> > > + pmd_t *pmd, pgt_pmd;
> > > + spinlock_t *pml;
> > > + spinlock_t *ptl;
> > >
> > > /*
> > > * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> > > - * got written to. These VMAs are likely not worth investing
> > > - * mmap_write_lock(mm) as PMD-mapping is likely to be split
> > > - * later.
> > > + * got written to. These VMAs are likely not worth removing
> > > + * page tables from, as PMD-mapping is likely to be split later.
> > > *
> > > - * Note that vma->anon_vma check is racy: it can be set up after
> > > - * the check but before we took mmap_lock by the fault path.
> > > - * But page lock would prevent establishing any new ptes of the
> > > - * page, so we are safe.
> > > - *
> > > - * An alternative would be drop the check, but check that page
> > > - * table is clear before calling pmdp_collapse_flush() under
> > > - * ptl. It has higher chance to recover THP for the VMA, but
> > > - * has higher cost too. It would also probably require locking
> > > - * the anon_vma.
> > > + * Note that vma->anon_vma check is racy: it can be set after
> > > + * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> > > + * prevented establishing new ptes of the page. So we are safe
> > > + * to remove page table below, without even checking it's empty.
> >
> > This "we are safe to remove page table below, without even checking
> > it's empty" assumes that the only way to create new anonymous PTEs is
> > to use existing file PTEs, right? What about private shmem VMAs that
> > are registered with userfaultfd as VM_UFFD_MISSING? I think for those,
> > the UFFDIO_COPY ioctl lets you directly insert anonymous PTEs without
> > looking at the mapping and its pages (except for checking that the
> > insertion point is before end-of-file), protected only by mmap_lock
> > (shared) and pte_offset_map_lock().
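(For concreteness, I think a userspace sequence roughly like the
following exercises that path. Completely untested sketch: the memfd +
MAP_PRIVATE mapping just stands in for any private shmem mapping, 4K
pages are assumed, and error handling is omitted.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 20;	/* one PMD-sized area */
	int memfd = memfd_create("shm", 0);
	char *area;
	int uffd;

	ftruncate(memfd, len);
	/* private shmem mapping, nothing populated yet */
	area = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE,
		    memfd, 0);

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* installs an anonymous PTE directly, without consulting the
	 * mapping's pages */
	char src[4096];
	memset(src, 0xaa, sizeof(src));
	struct uffdio_copy copy = {
		.dst = (unsigned long)area,
		.src = (unsigned long)src,
		.len = sizeof(src),
		.mode = 0,
	};
	ioctl(uffd, UFFDIO_COPY, &copy);
	return 0;
}
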
>
> Hmm, yes. We probably need to keep that though, and 5b51072e97 explained
> the reason (to still respect file permissions).
>
> Maybe the anon_vma check can also be moved into the pgtable lock section,
> with some comments explaining (but it's getting a bit ugly..)?
Or check that all entries are pte_none() or something like that inside
the pgtable-locked section?
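Something like this completely untested sketch; it reuses the variable
names from the new retract_page_tables() and adds hypothetical
start_pte/pte/i/skip locals and a drop_locks label:

	pml = pmd_lock(mm, pmd);
	ptl = pte_lockptr(mm, pmd);
	if (ptl != pml)
		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

	/* bail out if anything (e.g. UFFDIO_COPY on a VM_UFFD_MISSING
	 * private shmem VMA) installed ptes after the unlocked checks */
	start_pte = pte_offset_map(pmd, addr);
	if (!start_pte)
		goto drop_locks;	/* page table already gone */
	skip = false;
	for (i = 0, pte = start_pte; i < PTRS_PER_PTE; i++, pte++) {
		if (!pte_none(ptep_get(pte))) {
			skip = true;
			break;
		}
	}
	pte_unmap(start_pte);
	if (skip)
		goto drop_locks;

	pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
	/* ... drop the locks and free the page table as in the patch ... */

No idea whether that's better than just moving the anon_vma check under
the locks as you suggest, though.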
[...]
> > The old code called collapse_and_free_pmd(), which involves MMU
> > notifier invocation...
[...]
> > ... while the new code only does pmdp_collapse_flush(), which clears
> > the pmd entry and does a TLB flush, but AFAICS doesn't use MMU
> > notifiers. My understanding is that that's problematic - maybe (?) it
> > is sort of okay with regards to classic MMU notifier users like KVM,
> > but it's probably wrong for IOMMUv2 users, where an IOMMU directly
> > consumes the normal page tables?
>
> The iommuv2 wasn't "consuming" the pgtables?
My wording was confusing, I meant that as "the iommuv2 hardware
directly uses/walks the page tables".
> IIUC it relies on that to
> make sure no secondary (and illegal) tlb exists in the iommu tlbs.
>
> For this case if the pgtable _must_ be empty when reaching here (we'd
> better make sure of it..), maybe we're good? Because we should have just
> invalidated once when unmapping all the pages in the thp range, so no stale
> tlb entries should be generated anyway for either cpu or iommu hardware.
My headcanon is that there are roughly three reasons why we normally
have to do IOMMUv2 invalidations, and I think one or two of them might
still apply here. Admittedly I haven't actually dug up documentation
on how this stuff works for IOMMUv2, so maybe one of y'all can tell me
that my concerns here are unfounded:
1. We have to flush normal TLB entries. This is probably not necessary
if the page table contains no entries.
2. We might have to flush "paging-structure caches" / "intermediate
table walk caches", if the IOMMU caches the physical addresses of page
tables to skip some levels of page table walk. IDK if IOMMUs do that,
but normal MMUs definitely do it, so I wouldn't be surprised if the
IOMMUs did it too (or reserved the right to do it in a future hardware
generation or whatever).
3. We have to *serialize* with page table walks performed by the
IOMMU. We're doing an RCU barrier to synchronize against page table
walks from the MMU, but without an appropriate mmu_notifier call, we
have nothing to ensure that we aren't yanking a page table out from
under an IOMMU page table walker while it's in the middle of its walk.
Sure, this isn't very likely in practice (the IOMMU page table walker
is probably pretty fast), but we still need some kind of explicit
synchronization to make this robust, I think.
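Concretely, I would have expected us to keep something along the lines
of what collapse_and_free_pmd() did, i.e. an invalidation spanning the
PMD clearing (rough, untested sketch using the variables from the new
retract_page_tables(); the notifier calls would have to sit outside the
spinlocked region, since invalidate_range_start() can sleep):

	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
				addr, addr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	/* ... take pml/ptl and do the checks, then: ... */
	pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
	/* ... drop the locks ... */

	mmu_notifier_invalidate_range_end(&range);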
> However OTOH, maybe it'll also be safer to just have the mmu notifiers like
> before (e.g., no idea whether anything can cache invalid tlb
> translations from the empty pgtable)? That doesn't seem to defeat the
> purpose of the patchset, as notifiers shouldn't fail.
>
> >
> > (FWIW, last I looked, there also seemed to be some other issues with
> > MMU notifier usage wrt IOMMUv2, see the thread
> > <https://lore.kernel.org/linux-mm/Yzbaf9HW1%2FreKqR8@nvidia.com/>.)