Message-ID: <Zyz--bjvkVXngc5U@google.com>
Date: Thu, 7 Nov 2024 17:55:05 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>,
Christian Brauner <brauner@...nel.org>,
linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>,
Barry Song <v-songbaohua@...o.com>, Hillf Danton <hdanton@...a.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Subject: Re: [PATCH v2 8/8] binder: use per-vma lock in page installation
On Thu, Nov 07, 2024 at 08:16:39AM -0800, Suren Baghdasaryan wrote:
> On Wed, Nov 6, 2024 at 8:03 PM Carlos Llamas <cmllamas@...gle.com> wrote:
> >
> > Use per-vma locking for concurrent page installations; this minimizes
> > contention with unrelated vmas, improving performance. The mmap_lock is
> > still acquired when needed, though, e.g. before get_user_pages_remote().
> >
> > Many thanks to Barry Song who posted a similar approach [1].
> >
> > Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
> > Cc: Nhat Pham <nphamcs@...il.com>
> > Cc: Johannes Weiner <hannes@...xchg.org>
> > Cc: Barry Song <v-songbaohua@...o.com>
> > Cc: Suren Baghdasaryan <surenb@...gle.com>
> > Cc: Hillf Danton <hdanton@...a.com>
> > Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> > Signed-off-by: Carlos Llamas <cmllamas@...gle.com>
> > ---
> > drivers/android/binder_alloc.c | 85 +++++++++++++++++++++++-----------
> > 1 file changed, 57 insertions(+), 28 deletions(-)
> >
> > diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> > index 814435a2601a..debfa541e01b 100644
> > --- a/drivers/android/binder_alloc.c
> > +++ b/drivers/android/binder_alloc.c
> > @@ -233,6 +233,56 @@ static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
> >  	return smp_load_acquire(&alloc->mapped);
> >  }
> >
> > +static struct page *binder_page_lookup(struct mm_struct *mm,
>
> Maybe pass "struct binder_alloc" in both binder_page_lookup() and
> binder_page_insert()?
I'm not sure this is worth it though. Yeah, it would match
binder_page_insert() nicely, but there is also no usage for alloc in
binder_page_lookup(). Its only purpose would be to access the mm:

static struct page *binder_page_lookup(struct binder_alloc *alloc,
                                       unsigned long addr)
{
        struct mm_struct *mm = alloc->mm;

If you think this is cleaner, I really don't mind adding it for v3.
> I like how the previous code stabilized the mm with mmget_not_zero() once,
> whereas now binder_page_lookup() and binder_page_insert() each have to
> mmget/mmput individually. Not a big deal, but it looked cleaner.
Sure, I can factor this out (the way it was in v1).
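
Roughly, the caller would then be the only place taking the mm reference,
something like this (simplified sketch with a made-up wrapper name; it
assumes binder_page_lookup()/binder_page_insert() drop their own
mmget_not_zero()/mmput_async() calls, and it leaves out the page allocation
and bookkeeping the real caller has to do):

static int binder_install_page_sketch(struct binder_alloc *alloc,
                                      unsigned long addr,
                                      struct page *page)
{
        struct mm_struct *mm = alloc->mm;
        int ret;

        /* stabilize the mm once for both helpers */
        if (!mmget_not_zero(mm))
                return -ESRCH;

        ret = binder_page_insert(alloc, addr, page);
        if (ret == -EBUSY) {
                /*
                 * Lost the race: a page is already mapped at addr. The
                 * real code would free the local page and use the one
                 * returned by the lookup; that part is elided here.
                 */
                ret = binder_page_lookup(mm, addr) ? 0 : -ESRCH;
        }

        mmput_async(mm);
        return ret;
}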
>
> > +                                       unsigned long addr)
> > +{
> > +        struct page *page;
> > +        long ret;
> > +
> > +        if (!mmget_not_zero(mm))
> > +                return NULL;
> > +
> > +        mmap_read_lock(mm);
> > +        ret = get_user_pages_remote(mm, addr, 1, 0, &page, NULL);
> > +        mmap_read_unlock(mm);
> > +        mmput_async(mm);
> > +
> > +        return ret > 0 ? page : NULL;
> > +}
> > +
> > +static int binder_page_insert(struct binder_alloc *alloc,
> > +                              unsigned long addr,
> > +                              struct page *page)
> > +{
> > +        struct mm_struct *mm = alloc->mm;
> > +        struct vm_area_struct *vma;
> > +        int ret = -ESRCH;
> > +
> > +        if (!mmget_not_zero(mm))
> > +                return -ESRCH;
> > +
> > +        /* attempt per-vma lock first */
> > +        vma = lock_vma_under_rcu(mm, addr);
> > +        if (!vma)
> > +                goto lock_mmap;
> > +
> > +        if (binder_alloc_is_mapped(alloc))
>
> I don't think you need this check here. lock_vma_under_rcu() ensures
> that the VMA was not detached from the tree after locking the VMA, so
> if you got a VMA it's in the tree and it can't be removed (because
> it's locked). remove_vma()->vma_close()->vma->vm_ops->close() is
> called after VMA gets detached from the tree and that won't happen
> while VMA is locked. So, if lock_vma_under_rcu() returns a VMA,
> binder_alloc_is_mapped() has to always return true. A WARN_ON() check
> here to ensure that might be a better option.
Yes, we are guaranteed to have _a_ non-isolated vma. However, the check
validates that it's the _expected_ vma. IIUC, our vma could have been
unmapped (clearing alloc->mapped) and a _new_, unrelated vma could have
been assigned the same address range?

The binder_alloc_is_mapped() check verifies that the vma still belongs to
binder. This reminds me, I should also do this check for the
get_user_pages_remote() path.
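
That is, binder_page_lookup() would validate the mapping under the
mmap_lock before calling GUP, which would mean taking the alloc argument
after all. Rough sketch, otherwise keeping the body from the patch above:

static struct page *binder_page_lookup(struct binder_alloc *alloc,
                                       unsigned long addr)
{
        struct mm_struct *mm = alloc->mm;
        struct page *page = NULL;
        long npages = 0;

        if (!mmget_not_zero(mm))
                return NULL;

        mmap_read_lock(mm);
        /*
         * Bail out if the binder mapping is gone; addr could now
         * belong to an unrelated vma.
         */
        if (binder_alloc_is_mapped(alloc))
                npages = get_user_pages_remote(mm, addr, 1, 0, &page, NULL);
        mmap_read_unlock(mm);
        mmput_async(mm);

        return npages > 0 ? page : NULL;
}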
>
> > +                ret = vm_insert_page(vma, addr, page);
> > +        vma_end_read(vma);
> > +        goto done;
>
> I think the code would be more readable without these jumps:
>
>         vma = lock_vma_under_rcu(mm, addr);
>         if (vma) {
>                 if (!WARN_ON(!binder_alloc_is_mapped(alloc)))
>                         ret = vm_insert_page(vma, addr, page);
>                 vma_end_read(vma);
>         } else {
>                 /* fall back to mmap_lock */
>                 mmap_read_lock(mm);
>                 vma = vma_lookup(mm, addr);
>                 if (vma && binder_alloc_is_mapped(alloc))
>                         ret = vm_insert_page(vma, addr, page);
>                 mmap_read_unlock(mm);
>         }
>         mmput_async(mm);
>         return ret;
Ok. I'm thinking that, with mmput_async() being factored out, I'll add an
early return instead, e.g.:
        vma = lock_vma_under_rcu(mm, addr);
        if (vma) {
                if (binder_alloc_is_mapped(alloc))
                        ret = vm_insert_page(vma, addr, page);
                vma_end_read(vma);
                return ret;
        }

        /* fall back to mmap_lock */
        mmap_read_lock(mm);
        [...]
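
For reference, the whole thing could end up looking roughly like this
(untested sketch; it assumes the mmget_not_zero()/mmput_async() pair moves
to the caller as mentioned above and keeps the binder_alloc_is_mapped()
check in both paths):

static int binder_page_insert(struct binder_alloc *alloc,
                              unsigned long addr,
                              struct page *page)
{
        struct mm_struct *mm = alloc->mm;
        struct vm_area_struct *vma;
        int ret = -ESRCH;

        /* attempt per-vma lock first */
        vma = lock_vma_under_rcu(mm, addr);
        if (vma) {
                /* make sure this is still the binder vma, not a new one */
                if (binder_alloc_is_mapped(alloc))
                        ret = vm_insert_page(vma, addr, page);
                vma_end_read(vma);
                return ret;
        }

        /* fall back to mmap_lock */
        mmap_read_lock(mm);
        vma = vma_lookup(mm, addr);
        if (vma && binder_alloc_is_mapped(alloc))
                ret = vm_insert_page(vma, addr, page);
        mmap_read_unlock(mm);

        return ret;
}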
Thanks,
Carlos Llamas