Message-Id: <20250527083722.27309-1-21cnbao@gmail.com>
Date: Tue, 27 May 2025 20:37:22 +1200
From: Barry Song <21cnbao@...il.com>
To: david@...hat.com
Cc: 21cnbao@...il.com,
aarcange@...hat.com,
akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
lokeshgidra@...gle.com,
peterx@...hat.com,
ryncsn@...il.com,
surenb@...gle.com
Subject: Re: [BUG]userfaultfd_move fails to move a folio when swap-in occurs concurrently with swap-out
On Tue, May 27, 2025 at 4:17 PM Barry Song <21cnbao@...il.com> wrote:
>
> On Tue, May 27, 2025 at 12:39 AM David Hildenbrand <david@...hat.com> wrote:
> >
> > On 23.05.25 01:23, Barry Song wrote:
> > > Hi All,
> >
> > Hi!
> >
> > >
> > > I'm encountering another bug that can be easily reproduced using the small
> > > program below[1], which performs swap-out and swap-in in parallel.
> > >
> > > The issue occurs when a folio is being swapped out while it is accessed
> > > concurrently. In this case, do_swap_page() handles the access. However,
> > > because the folio is under writeback, do_swap_page() completely removes
> > > its exclusive attribute.
> > >
> > > do_swap_page:
> > > 	} else if (exclusive && folio_test_writeback(folio) &&
> > > 		   data_race(si->flags & SWP_STABLE_WRITES)) {
> > > 		...
> > > 		exclusive = false;
> > >
> > > As a result, userfaultfd_move() will return -EBUSY, even though the
> > > folio is not shared and is in fact exclusively owned.
> > >
> > > 	folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> > > 	if (!folio || !PageAnonExclusive(&folio->page)) {
> > > 		spin_unlock(src_ptl);
> > > +		pr_err("%s %d folio:%lx exclusive:%d swapcache:%d\n",
> > > +		       __func__, __LINE__, folio,
> > > +		       PageAnonExclusive(&folio->page),
> > > +		       folio_test_swapcache(folio));
> > > 		err = -EBUSY;
> > > 		goto out;
> > > 	}
> > >
> > > I understand that shared folios should not be moved. However, in this
> > > case, the folio is not shared, yet its exclusive flag is not set.
> > >
> > > Therefore, I believe PageAnonExclusive is not a reliable indicator of
> > > whether a folio is truly exclusive to a process.
> >
> > It is. The flag *not* being set is not a reliable indicator whether it
> > is really shared. ;)
> >
> > The reason why we have this PAE workaround (dropping the flag) in place
> > is because the page must not be written to (SWP_STABLE_WRITES). CoW
> > reuse is not possible.
> >
> > uffd moving that page -- and in that same process setting it writable,
> > see move_present_pte()->pte_mkwrite() -- would be very bad.
>
> An alternative approach is to make the folio writable only when we are
> reasonably certain it is exclusive; otherwise, it remains read-only. If the
> destination is later written to and the folio has become exclusive, it can
> be reused directly. If not, a copy-on-write will occur on the destination
> address, transparently to userspace. This avoids Lokesh’s userspace-based
> strategy, which requires forcing a write to the source address.
Conceptually, I mean something like this:
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index bc473ad21202..70eaabf4f1a3 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1047,7 +1047,8 @@ static int move_present_pte(struct mm_struct *mm,
 	}
 	if (folio_test_large(src_folio) ||
 	    folio_maybe_dma_pinned(src_folio) ||
-	    !PageAnonExclusive(&src_folio->page)) {
+	    (!PageAnonExclusive(&src_folio->page) &&
+	     folio_mapcount(src_folio) != 1)) {
 		err = -EBUSY;
 		goto out;
 	}
@@ -1070,7 +1071,8 @@ static int move_present_pte(struct mm_struct *mm,
 #endif
 	if (pte_dirty(orig_src_pte))
 		orig_dst_pte = pte_mkdirty(orig_dst_pte);
-	orig_dst_pte = pte_mkwrite(orig_dst_pte, dst_vma);
+	if (PageAnonExclusive(&src_folio->page))
+		orig_dst_pte = pte_mkwrite(orig_dst_pte, dst_vma);
 	set_pte_at(mm, dst_addr, dst_pte, orig_dst_pte);
 out:
@@ -1268,7 +1270,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 	}
 	folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
-	if (!folio || !PageAnonExclusive(&folio->page)) {
+	if (!folio || (!PageAnonExclusive(&folio->page) &&
+		       folio_mapcount(folio) != 1)) {
 		spin_unlock(src_ptl);
 		err = -EBUSY;
 		goto out;
I'm not trying to push this approach—unless Lokesh clearly sees that it
could reduce userspace noise. I'm mainly just curious how we might make
the fixup transparent to userspace. :-)
>
> >
> > >
> > > The kernel log output is shown below:
> > > [ 23.009516] move_pages_pte 1285 folio:fffffdffc01bba40 exclusive:0 swapcache:1
> > >
> > > I'm still struggling to find a real fix; it seems quite challenging.
> >
> > PAE tells you that you can immediately write to that page without going
> > through CoW. However, here, CoW is required.
> >
> > > Please let me know if you have any ideas. In any case, it seems
> > > userspace should fall back to userfaultfd_copy.
> >
> > We could try detecting whether the page is now exclusive, to reset PAE.
> > That will only be possible after writeback completed, so it adds
> > complexity without being able to move the page in all cases (during
> > writeback).
> >
> > Letting userspace deal with that in these rare scenarios is
> > significantly easier.
>
> Right, this appears to introduce the least change—essentially none—to the
> kernel, while shifting more noise to userspace :-)
>
> >
> > --
> > Cheers,
> >
> > David / dhildenb
> >
>
Thanks
Barry