Message-ID: <20240130034627.4aupq27mksswisqg@revolver>
Date: Mon, 29 Jan 2024 22:46:27 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Lokesh Gidra <lokeshgidra@...gle.com>
Cc: akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        selinux@...r.kernel.org, surenb@...gle.com, kernel-team@...roid.com,
        aarcange@...hat.com, peterx@...hat.com, david@...hat.com,
        axelrasmussen@...gle.com, bgeffon@...gle.com, willy@...radead.org,
        jannh@...gle.com, kaleshsingh@...gle.com, ngeoffray@...gle.com,
        timmurray@...gle.com, rppt@...nel.org
Subject: Re: [PATCH v2 2/3] userfaultfd: protect mmap_changing with rw_sem in
 userfaulfd_ctx

* Lokesh Gidra <lokeshgidra@...gle.com> [240129 17:35]:
> On Mon, Jan 29, 2024 at 1:00 PM Liam R. Howlett <Liam.Howlett@...cle.com> wrote:
> >
> > * Lokesh Gidra <lokeshgidra@...gle.com> [240129 14:35]:
> > > Increments and loads to mmap_changing are always in mmap_lock
> > > critical section.
> >
> > Read or write?
> >
> It's write-mode when incrementing (except in case of
> userfaultfd_remove() where it's done in read-mode) and loads are in
> mmap_lock (read-mode). I'll clarify this in the next version.
> >
> > > This ensures that if userspace requests event
> > > notification for non-cooperative operations (e.g. mremap), userfaultfd
> > > operations don't occur concurrently.
> > >
> > > This can be achieved by using a separate read-write semaphore in
> > > userfaultfd_ctx such that increments are done in write-mode and loads
> > > in read-mode, thereby eliminating the dependency on mmap_lock for this
> > > purpose.
> > >
> > > This is a preparatory step before we replace mmap_lock usage with
> > > per-vma locks in fill/move ioctls.
> > >
> > > Signed-off-by: Lokesh Gidra <lokeshgidra@...gle.com>
> > > ---
> > >  fs/userfaultfd.c              | 40 ++++++++++++----------
> > >  include/linux/userfaultfd_k.h | 31 ++++++++++--------
> > >  mm/userfaultfd.c              | 62 ++++++++++++++++++++---------------
> > >  3 files changed, 75 insertions(+), 58 deletions(-)
> > >
> > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > > index 58331b83d648..c00a021bcce4 100644
> > > --- a/fs/userfaultfd.c
> > > +++ b/fs/userfaultfd.c
> > > @@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
> > >               ctx->flags = octx->flags;
> > >               ctx->features = octx->features;
> > >               ctx->released = false;
> > > +             init_rwsem(&ctx->map_changing_lock);
> > >               atomic_set(&ctx->mmap_changing, 0);
> > >               ctx->mm = vma->vm_mm;
> > >               mmgrab(ctx->mm);
> > >
> > >               userfaultfd_ctx_get(octx);
> > > +             down_write(&octx->map_changing_lock);
> > >               atomic_inc(&octx->mmap_changing);
> > > +             up_write(&octx->map_changing_lock);

On init, I don't think taking the lock is strictly necessary - unless
there is a way to access it before this increment?  Not that it would
cost much.

> >
> > This can potentially hold up your writer as the readers execute.  I
> > think this will change your priority (ie: priority inversion)?
> 
> Priority inversion, if any, is already happening due to mmap_lock, no?
> Also, I thought rw_semaphore implementation is fair, so the writer
> will eventually get the lock right? Please correct me if I'm wrong.

You are correct.  Any writer will stop any new readers, but readers
currently in the section must finish before the writer.

> 
> At this patch: there can't be any readers as they need to acquire
> mmap_lock in read-mode first. While writers, at the point of
> incrementing mmap_changing, already hold mmap_lock in write-mode.
> 
> With per-vma locks, the same synchronization that mmap_lock achieved
> around mmap_changing, will be achieved by ctx->map_changing_lock.

The inversion I was thinking of was that the writer cannot complete the
write until the readers are done failing because the atomic_inc has
happened..?  I saw the writer as having priority since readers cannot
complete within the write, but I read it wrong.  I think the readers are
fine if they happen before, during, or after a write.  The work is thrown
out if the reader happens during the transition between those states,
which is detected through the atomic.  This makes sense now.

> >
> > You could use the first bit of the atomic_inc as indication of a write.
> > So if the mmap_changing is even, then there are no writers.  If it
> > didn't change and it's even then you know no modification has happened
> > (or it overflowed and hit the same number which would be rare, but
> > maybe okay?).
> 
> This is already achievable, right? If mmap_changing is >0 then we know
> there are writers. The problem is that we want writers (like mremap
> operations) to block as long as there is a userfaultfd operation (also
> reader of mmap_changing) going on. Please note that I'm inferring this
> from current implementation.
> 
> AFAIU, mmap_changing isn't required for correctness, because all
> operations are happening under the right mode of mmap_lock. It's used
> to ensure that while a non-cooperative operation is happening, if the
> user has asked to be notified, then no other userfaultfd operations
> should take place until the user gets the event notification.

I think it is needed: mmap_changing is read before the mmap_lock is
taken, then compared after the mmap_lock is taken (both in read mode) to
ensure nothing has changed.

..

> > > @@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
> > >               return true;
> > >
> > >       userfaultfd_ctx_get(ctx);
> > > +     down_write(&ctx->map_changing_lock);
> > >       atomic_inc(&ctx->mmap_changing);
> > > +     up_write(&ctx->map_changing_lock);
> > >       mmap_read_unlock(mm);
> > >
> > >       msg_init(&ewq.msg);

If this happens in read mode, then why are you waiting for the readers
to leave?  Can't you just increment the atomic?  It's fine happening in
read mode today, so it should be fine with this new rwsem.

Thanks,
Liam

..
