Message-ID: <CAJuCfpGLcxSLNek7bUALKcg8HwF8vd9piaBf+cvjYRhY=xOfrA@mail.gmail.com>
Date: Tue, 25 Feb 2025 14:12:30 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Peter Xu <peterx@...hat.com>
Cc: akpm@...ux-foundation.org, lokeshgidra@...gle.com, aarcange@...hat.com, 
	21cnbao@...il.com, v-songbaohua@...o.com, david@...hat.com, 
	willy@...radead.org, Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com, 
	hughd@...gle.com, jannh@...gle.com, kaleshsingh@...gle.com, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] userfaultfd: do not block on locking a large folio
 with raised refcount

On Tue, Feb 25, 2025 at 1:32 PM Peter Xu <peterx@...hat.com> wrote:
>
> On Tue, Feb 25, 2025 at 12:46:13PM -0800, Suren Baghdasaryan wrote:
> > Lokesh recently raised an issue about UFFDIO_MOVE getting into a deadlock
> > state when it goes into split_folio() with a raised folio refcount.
> > split_folio() expects the reference count to be exactly
> > mapcount + num_pages_in_folio + 1 (see can_split_folio()) and fails with
> > EAGAIN otherwise. If multiple processes are trying to move the same
> > large folio, they all succeed in raising the refcount, then one of them
> > locks the folio while the others block in folio_lock() with their
> > refcounts still raised. The winner of this race proceeds to call
> > split_folio(), which fails, so it returns EAGAIN to the caller and
> > unlocks the folio. The next competing process then gets the folio lock
> > and goes through the same flow. In the meantime the original winner is
> > retried and blocks in folio_lock(), joining the queue of waiting
> > processes only to repeat the same path. All this results in a livelock.
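
(For context, a rough paraphrase of the refcount expectation described
above; the helper below is purely illustrative and not the real
can_split_folio(), whose accounting has a few more cases.)

	/*
	 * Illustrative only: any additional transient reference (such as
	 * another task pinning the folio while it waits for the lock)
	 * breaks this equality and makes split_folio() fail with EAGAIN.
	 */
	static bool refcount_allows_split(struct folio *folio)
	{
		return folio_ref_count(folio) ==
		       folio_mapcount(folio) + folio_nr_pages(folio) + 1;
	}
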
> > An easy fix would be to avoid waiting for the folio lock while holding
> > a folio refcount, similar to madvise_free_huge_pmd() where the folio
> > lock is acquired before the folio refcount is raised (see the sketch
> > below). Modify move_pages_pte() to try locking the folio first; if that
> > fails and the folio is large, return EAGAIN without touching the folio
> > refcount. If the folio is single-page then split_folio() is not called,
> > so we don't have this issue.
> > Lokesh has a reproducer [1] and I verified that this change fixes the
> > issue.
> >
> > [1] https://github.com/lokeshgidra/uffd_move_ioctl_deadlock
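
(A minimal sketch of the lock-before-pin ordering referenced above,
modeled on the madvise_free_huge_pmd() pattern; simplified, not the
exact kernel code.)

	/* Take the folio lock before pinning the folio. */
	if (!folio_trylock(folio))
		goto out;	/* back off without leaving a stray refcount */
	folio_get(folio);
	spin_unlock(ptl);

	err = split_folio(folio);	/* no waiter holds an extra refcount */
	folio_unlock(folio);
	folio_put(folio);
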
> >
> > Reported-by: Lokesh Gidra <lokeshgidra@...gle.com>
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
>
> Reviewed-by: Peter Xu <peterx@...hat.com>
>
> One question unrelated to this change, below..
>
> > ---
> >  mm/userfaultfd.c | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 867898c4e30b..f17f8290c523 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -1236,6 +1236,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> >                */
> >               if (!src_folio) {
> >                       struct folio *folio;
> > +                     bool locked;
> >
> >                       /*
> >                        * Pin the page while holding the lock to be sure the
> > @@ -1255,12 +1256,26 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> >                               goto out;
> >                       }
> >
> > +                     locked = folio_trylock(folio);
> > +                     /*
> > +                      * We avoid waiting for folio lock with a raised refcount
> > +                      * for large folios because extra refcounts will result in
> > +                      * split_folio() failing later and retrying. If multiple
> > +                      * tasks are trying to move a large folio we can end up
> > +                      * livelocking.
> > +                      */
> > +                     if (!locked && folio_test_large(folio)) {
> > +                             spin_unlock(src_ptl);
> > +                             err = -EAGAIN;
> > +                             goto out;
> > +                     }
> > +
> >                       folio_get(folio);
> >                       src_folio = folio;
> >                       src_folio_pte = orig_src_pte;
> >                       spin_unlock(src_ptl);
> >
> > -                     if (!folio_trylock(src_folio)) {
> > +                     if (!locked) {
> >                               pte_unmap(&orig_src_pte);
> >                               pte_unmap(&orig_dst_pte);
>
> .. just noticed this.  Are these problematic?  I mean, orig_*_pte are stack
> variables, afaict.  I'd expect these things to blow up on HIGHPTE..

Ugh! Yes, I think so. From a quick look, move_pages_pte() is the only
place we have this issue and I don't see a reason for copying src_pte
and dst_pte values. I'll spend some more time trying to understand if
we really need these local copies.
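
(To illustrate the concern, a simplified sketch rather than the actual
move_pages_pte() code: pte_unmap() must be handed the pointer returned
by pte_offset_map(), because with CONFIG_HIGHPTE it kunmaps the
page-table page that pointer maps; passing the address of an on-stack
copy of the pte value would try to unmap the stack instead.)

	pte_t *src_pte = pte_offset_map(src_pmd, src_addr); /* kmaps the PTE page on HIGHPTE */
	pte_t orig_src_pte = ptep_get(src_pte);             /* value copy lives on the stack */

	/* correct: undo the mapping that pte_offset_map() created */
	pte_unmap(src_pte);

	/* wrong: &orig_src_pte is a stack address, not the mapped PTE page */
	/* pte_unmap(&orig_src_pte); */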

>
> >                               src_pte = dst_pte = NULL;
> >
> > base-commit: 801d47bd96ce22acd43809bc09e004679f707c39
> > --
> > 2.48.1.658.g4767266eb4-goog
> >
>
> --
> Peter Xu
>
