Message-ID: <CAJuCfpESdY4L_sSwiCYVCX+5y1WOuAjLNPw35pEGzTSyoHFYPA@mail.gmail.com>
Date: Fri, 22 Nov 2024 08:12:12 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Carlos Llamas <cmllamas@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>, Christian Brauner <brauner@...nel.org>,
linux-kernel@...r.kernel.org, kernel-team@...roid.com,
"Liam R. Howlett" <Liam.Howlett@...cle.com>
Subject: Re: [PATCH v4 9/9] binder: use per-vma lock in page reclaiming
On Tue, Nov 19, 2024 at 10:33 AM Carlos Llamas <cmllamas@...gle.com> wrote:
>
> Use per-vma locking in the shrinker's callback when reclaiming pages,
> similar to the page installation logic. This minimizes contention with
> unrelated vmas, improving performance. The mmap_lock is still acquired
> if the per-vma lock cannot be obtained.
>
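
IIUC, the locking pattern here, condensed (a minimal sketch, with the
error labels simplified):

	vma = lock_vma_under_rcu(mm, page_addr); /* per-vma read lock */
	if (!vma) {
		/* fall back to mmap_lock */
		if (!mmap_read_trylock(mm))
			goto err;
		mm_locked = 1;
		vma = vma_lookup(mm, page_addr);
	}
	...
	/*
	 * The unlock must match whichever lock was taken:
	 * vma_end_read() is only valid on a vma locked by
	 * lock_vma_under_rcu().
	 */
	if (mm_locked)
		mmap_read_unlock(mm);
	else
		vma_end_read(vma);

which looks right to me.
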
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Suggested-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> Signed-off-by: Carlos Llamas <cmllamas@...gle.com>
> ---
> drivers/android/binder_alloc.c | 29 ++++++++++++++++++++++-------
> 1 file changed, 22 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index aea35d475ee8..85753897efa1 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -1129,19 +1129,28 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
> struct mm_struct *mm = alloc->mm;
> struct vm_area_struct *vma;
> unsigned long page_addr;
> + int mm_locked = 0;
> size_t index;
>
> if (!mmget_not_zero(mm))
> goto err_mmget;
> - if (!mmap_read_trylock(mm))
> - goto err_mmap_read_lock_failed;
> - if (!mutex_trylock(&alloc->mutex))
> - goto err_get_alloc_mutex_failed;
>
> index = page->index;
> page_addr = alloc->vm_start + index * PAGE_SIZE;
>
> - vma = vma_lookup(mm, page_addr);
> + /* attempt per-vma lock first */
> + vma = lock_vma_under_rcu(mm, page_addr);
> + if (!vma) {
> + /* fall back to mmap_lock */
> + if (!mmap_read_trylock(mm))
> + goto err_mmap_read_lock_failed;
> + mm_locked = 1;
> + vma = vma_lookup(mm, page_addr);
> + }
> +
> + if (!mutex_trylock(&alloc->mutex))
> + goto err_get_alloc_mutex_failed;
> +
> if (vma && !binder_alloc_is_mapped(alloc))
On the previous version we had a long discussion about this
binder_alloc_is_mapped() check and how it works here only because
binder does not allow mapping the same buffer more than once (the
alloc->buffer_size check inside binder_alloc_mmap_handler()). If not
for that limitation, the following race could happen:
Proc A                            Proc B

mmap(addr, binder_fd)
                                  mmget_not_zero()
munmap(addr) // alloc->mapped = false;
...
mmap(addr, other_fd) // mmap other vma but same addr
mmap(other_addr, binder_fd) // alloc->mapped = true;
                                  vma = lock_vma_under_rcu(addr)
                                  if (vma && !binder_alloc_is_mapped(alloc))
                                      // yields true but wrong vma
I think adding a comment before the binder_alloc_is_mapped() check
would help avoid confusion in the future.
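Something along these lines, perhaps (just a sketch of the wording, to
be adjusted as you see fit):

	/*
	 * This check only works because binder does not allow the same
	 * alloc to be mapped more than once (the alloc->buffer_size
	 * check in binder_alloc_mmap_handler()). Otherwise, a mapped
	 * alloc would not guarantee that the vma found at this address
	 * is the binder vma rather than an unrelated mapping that
	 * reused the address after a munmap().
	 */
	if (vma && !binder_alloc_is_mapped(alloc))
		goto err_invalid_vma;
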
Other than that:
Reviewed-by: Suren Baghdasaryan <surenb@...gle.com>
> goto err_invalid_vma;
>
> @@ -1163,7 +1172,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
> }
>
> mutex_unlock(&alloc->mutex);
> - mmap_read_unlock(mm);
> + if (mm_locked)
> + mmap_read_unlock(mm);
> + else
> + vma_end_read(vma);
> mmput_async(mm);
> __free_page(page);
>
> @@ -1173,7 +1185,10 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
> err_invalid_vma:
> mutex_unlock(&alloc->mutex);
> err_get_alloc_mutex_failed:
> - mmap_read_unlock(mm);
> + if (mm_locked)
> + mmap_read_unlock(mm);
> + else
> + vma_end_read(vma);
> err_mmap_read_lock_failed:
> mmput_async(mm);
> err_mmget:
> --
> 2.47.0.338.g60cca15819-goog
>