Message-ID: <CAG48ez2uMXLigojbF3HD20Q5jL4ZMSZf6GS-5Y7P=jiB7gibpQ@mail.gmail.com>
Date: Thu, 28 Sep 2023 17:29:43 +0200
From: Jann Horn <jannh@...gle.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>, brauner@...nel.org,
Shuah Khan <shuah@...nel.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...e.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mike Rapoport <rppt@...nel.org>, willy@...radead.org,
Liam.Howlett@...cle.com, zhangpeng362@...wei.com,
Brian Geffon <bgeffon@...gle.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
Nicolas Geoffray <ngeoffray@...gle.com>,
Jared Duke <jdduke@...gle.com>, Linux-MM <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
kernel list <linux-kernel@...r.kernel.org>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
kernel-team <kernel-team@...roid.com>
Subject: Re: potential new userfaultfd vs khugepaged conflict [was: Re: [PATCH
v2 2/3] userfaultfd: UFFDIO_REMAP uABI]
On Wed, Sep 27, 2023 at 7:12 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Wed, Sep 27, 2023 at 3:07 AM Jann Horn <jannh@...gle.com> wrote:
> >
> > [moving Hugh into "To:" recipients as FYI for khugepaged interaction]
> >
> > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > > From: Andrea Arcangeli <aarcange@...hat.com>
> > >
> > > This implements the uABI of UFFDIO_REMAP.
> > >
> > > Notably, one mode bitflag is also forwarded to (and in turn known by)
> > > the low-level remap_pages method.
> > >
> > > Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
> > > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > [...]
> > > +/*
> > > + * The mmap_lock for reading is held by the caller. Just move the page
> > > + * from src_pmd to dst_pmd if possible, and return 0 if we succeeded
> > > + * in moving the page.
> > > + */
> > > +static int remap_pages_pte(struct mm_struct *dst_mm,
> > > + struct mm_struct *src_mm,
> > > + pmd_t *dst_pmd,
> > > + pmd_t *src_pmd,
> > > + struct vm_area_struct *dst_vma,
> > > + struct vm_area_struct *src_vma,
> > > + unsigned long dst_addr,
> > > + unsigned long src_addr,
> > > + __u64 mode)
> > > +{
> > > + swp_entry_t entry;
> > > + pte_t orig_src_pte, orig_dst_pte;
> > > + spinlock_t *src_ptl, *dst_ptl;
> > > + pte_t *src_pte = NULL;
> > > + pte_t *dst_pte = NULL;
> > > +
> > > + struct folio *src_folio = NULL;
> > > + struct anon_vma *src_anon_vma = NULL;
> > > + struct mmu_notifier_range range;
> > > + int err = 0;
> > > +
> > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> > > + src_addr, src_addr + PAGE_SIZE);
> > > + mmu_notifier_invalidate_range_start(&range);
> > > +retry:
> > > + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > > +
> > > + /* If a huge pmd materialized from under us, fail */
> > > + if (unlikely(!dst_pte)) {
> > > + err = -EFAULT;
> > > + goto out;
> > > + }
> > > +
> > > + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> > > +
> > > + /*
> > > + * We hold the mmap_lock only for reading, so MADV_DONTNEED
> > > + * can zap transparent huge pages under us, or a
> > > + * transparent huge page fault can establish new
> > > + * transparent huge pages under us.
> > > + */
> > > + if (unlikely(!src_pte)) {
> > > + err = -EFAULT;
> > > + goto out;
> > > + }
> > > +
> > > + BUG_ON(pmd_none(*dst_pmd));
> > > + BUG_ON(pmd_none(*src_pmd));
> > > + BUG_ON(pmd_trans_huge(*dst_pmd));
> > > + BUG_ON(pmd_trans_huge(*src_pmd));
> >
> > This works for now, but note that Hugh Dickins has recently been
> > reworking khugepaged such that PTE-based mappings can be collapsed
> > into transhuge mappings under the mmap lock held in *read mode*;
> > holders of the mmap lock in read mode can only synchronize against
> > this by taking the right page table spinlock and rechecking the pmd
> > value. This is only the case for file-based mappings so far, not for
> > anonymous private VMAs; and this code only operates on anonymous
> > private VMAs so far, so it works out.
> >
> > But if either Hugh further reworks khugepaged such that anonymous VMAs
> > can be collapsed under the mmap lock in read mode, or you expand this
> > code to work on file-backed VMAs, then it will become possible to hit
> > these BUG_ON() calls. I'm not sure what the plans are for khugepaged
> > going forward, but the number of edge cases everyone has to keep in
> > mind would go down if you changed this function to deal gracefully
> > with page tables disappearing under you.
> >
> > In the newest version of mm/pgtable-generic.c, above
> > __pte_offset_map_lock(), there is a big comment block explaining the
> > current rules for page table access; in particular, regarding the
> > helper pte_offset_map_nolock() that you're using:
> >
> > * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
> > * but when successful, it also outputs a pointer to the spinlock in ptlp - as
> > * pte_offset_map_lock() does, but in this case without locking it. This helps
> > * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
> > * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
> > * pointer for the page table that it returns. In principle, the caller should
> > * recheck *pmd once the lock is taken; in practice, no callsite needs that -
> > * either the mmap_lock for write, or pte_same() check on contents, is enough.
> >
> > If this becomes hittable in the future, I think you will need to
> > recheck *pmd, at least for dst_pte, to avoid copying PTEs into a
> > detached page table.
>
> Thanks for the warning, Jann. It sounds to me like it would be better
> to add this pmd check now, even though it's not yet hittable. Does
> that sound good to everyone?
Sounds good to me.
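
To sketch what I mean for the dst_pte side (untested, and
map_and_lock_dst_pte() is just a made-up helper name for
illustration): snapshot the pmd with pmdp_get_lockless() before
walking down to the PTE level, then recheck it with pmd_same() once
the PTL is held, and retry if the page table got replaced in between:

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Untested sketch, not the actual patch: map the destination PTE with
 * pte_offset_map_nolock(), then take the PTL and recheck the pmd
 * before installing anything. Returns the mapped PTE with *ptlp held,
 * or NULL if a huge pmd materialized under us.
 */
static pte_t *map_and_lock_dst_pte(struct mm_struct *dst_mm,
				   pmd_t *dst_pmd, unsigned long dst_addr,
				   spinlock_t **ptlp)
{
	pmd_t dst_pmdval;
	pte_t *dst_pte;

retry:
	/* Snapshot the pmd before walking down to the PTE level. */
	dst_pmdval = pmdp_get_lockless(dst_pmd);

	dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, ptlp);
	if (!dst_pte)
		return NULL;

	spin_lock(*ptlp);
	/*
	 * If khugepaged (or anything else) replaced the page table
	 * while we were unlocked, dst_pte now points into a detached
	 * page table; unmap and retry instead of writing into it.
	 */
	if (unlikely(!pmd_same(dst_pmdval, pmdp_get_lockless(dst_pmd)))) {
		spin_unlock(*ptlp);
		pte_unmap(dst_pte);
		goto retry;
	}

	return dst_pte;
}

With a recheck like that, the pmd_trans_huge() BUG_ON()s above could
presumably be dropped too: a THP materializing under us just shows up
as a NULL return from pte_offset_map_nolock() or a failed pmd_same()
check, rather than as a crash.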