Message-ID: <ZEcMhOywwzsc6CN5@google.com>
Date:   Mon, 24 Apr 2023 23:11:00 +0000
From:   Carlos Llamas <cmllamas@...gle.com>
To:     "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Arve Hjønnevåg <arve@...roid.com>,
        Todd Kjos <tkjos@...roid.com>,
        Martijn Coenen <maco@...roid.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Christian Brauner <brauner@...nel.org>,
        Suren Baghdasaryan <surenb@...gle.com>,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com,
        linux-mm@...ck.org
Subject: Re: [RFC PATCH 2/3] Revert "android: binder: stop saving a pointer
 to the VMA"

On Mon, Apr 24, 2023 at 06:34:19PM -0400, Liam R. Howlett wrote:
> Cc linux-mm
> 
> * Carlos Llamas <cmllamas@...gle.com> [230424 16:56]:
> > This reverts commit a43cfc87caaf46710c8027a8c23b8a55f1078f19.
> > 
> > This patch fixed an issue reported by syzkaller in [1]. However, this
> > turned out to be only a band-aid in binder. The root cause, as bisected
> > by syzkaller, was fixed by commit 5789151e48ac ("mm/mmap: undo ->mmap()
> > when mas_preallocate() fails"). We no longer need the patch for binder.
> > 
> > Reverting this patch allows us to have lockless access to alloc->vma
> > in specific cases where the mmap_lock is not required.
> 
> Can you elaborate on the situation where recording a VMA pointer and
> later accessing it outside the mmap_lock is okay?

The specifics are in the third patch of this patchset, but the gist of it
is that during the ->mmap() handler binder completes the initialization
of the binder_alloc structure, with the last step of this process being
the caching of the vma pointer. Since the ordering is protected with a
barrier, we can then check alloc->vma to determine whether the
initialization has been completed.
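
As a rough illustration of that publish-then-check ordering (not
binder's actual code; the my_alloc_* names are made up for this sketch),
the pattern is a store-release of the vma pointer as the final init
step, paired with a load-acquire on the lockless readers:

#include <linux/mm_types.h>
#include <asm/barrier.h>

struct my_alloc {
	/* ... fields initialized earlier in the ->mmap() handler ... */
	struct vm_area_struct *vma;	/* set last; doubles as "init done" flag */
};

/* Called at the very end of ->mmap(), under mmap_lock. */
static void my_alloc_set_vma(struct my_alloc *alloc,
			     struct vm_area_struct *vma)
{
	/* pairs with smp_load_acquire() in my_alloc_get_vma() */
	smp_store_release(&alloc->vma, vma);
}

/* Lockless read: a non-NULL result means initialization has completed. */
static struct vm_area_struct *my_alloc_get_vma(struct my_alloc *alloc)
{
	/* pairs with smp_store_release() in my_alloc_set_vma() */
	return smp_load_acquire(&alloc->vma);
}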

Since this check is part of the critical path for every single binder
transaction, performance plummeted when we started contending for the
mmap_lock. In this particular case, binder doesn't actually use the
vma; it only needs to know whether the internal structure has been fully
initialized and is safe to use.
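
Building on the hypothetical sketch above, the fast-path check then
reduces to something like the following (the -ESRCH return value is
just an assumption for illustration):

static int my_transaction(struct my_alloc *alloc)
{
	/* lockless: only checks that ->mmap() finished publishing alloc */
	if (!my_alloc_get_vma(alloc))
		return -ESRCH;

	/* ... proceed using the fully initialized alloc ... */
	return 0;
}

No mmap_lock is taken here; the acquire/release pair is enough to
guarantee that a non-NULL alloc->vma implies all earlier init stores
are visible to the reader.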

FWIW, this had been the design for ~15 years. The original patch is
this: https://git.kernel.org/torvalds/c/457b9a6f09f0
