Message-ID: <20220830185751.gpudzi2pv3jpjoib@revolver>
Date: Tue, 30 Aug 2022 18:57:59 +0000
From: Liam Howlett <liam.howlett@...cle.com>
To: Carlos Llamas <cmllamas@...gle.com>
CC: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>,
Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>,
Christian Brauner <brauner@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
"kernel-team@...roid.com" <kernel-team@...roid.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/7] binder: remove binder_alloc_set_vma()

* Carlos Llamas <cmllamas@...gle.com> [220829 16:13]:
> The mmap_locked asserts here are not needed since this is only called
> back from the mmap stack in ->mmap() and ->close() which always acquire
> the lock first. Remove these asserts along with binder_alloc_set_vma()
> altogether since it's trivial enough to be consumed by callers.

I agree that it is not called anywhere else today. I still think the
helper is worthwhile, though: these asserts do nothing unless you build
with lockdep and/or debug_vm enabled, so keeping them costs nothing in a
normal build while catching locking mistakes in a debug build. I was
hoping that having them here would help avoid future mistakes, much like
the assert we have in the mm code's find_vma() (mm/mmap.c ~L2297).
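
For reference, the two asserts boil down to roughly this (paraphrasing
include/linux/mmap_lock.h from memory, so the exact lines may differ):

	static inline void mmap_assert_locked(struct mm_struct *mm)
	{
		/* compiles out without lockdep */
		lockdep_assert_held(&mm->mmap_lock);
		/* compiles out without CONFIG_DEBUG_VM */
		VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
	}

	static inline void mmap_assert_write_locked(struct mm_struct *mm)
	{
		lockdep_assert_held_write(&mm->mmap_lock);
		VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
	}

so a production build pays nothing for them, while a debug build will
complain loudly if someone later calls in without the right lock.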
>
> Cc: Liam R. Howlett <Liam.Howlett@...cle.com>
> Signed-off-by: Carlos Llamas <cmllamas@...gle.com>
> ---
> drivers/android/binder_alloc.c | 25 ++-----------------------
> 1 file changed, 2 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index 749a4cd30a83..1c39cfce32fa 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -309,27 +309,6 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
>  	return vma ? -ENOMEM : -ESRCH;
>  }
>  
> -
> -static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
> -					struct vm_area_struct *vma)
> -{
> -	unsigned long vm_start = 0;
> -
> -	/*
> -	 * Allow clearing the vma with holding just the read lock to allow
> -	 * munmapping downgrade of the write lock before freeing and closing the
> -	 * file using binder_alloc_vma_close().
> -	 */
> -	if (vma) {
> -		vm_start = vma->vm_start;
> -		mmap_assert_write_locked(alloc->mm);
> -	} else {
> -		mmap_assert_locked(alloc->mm);
> -	}
> -
> -	alloc->vma_addr = vm_start;
> -}
> -
>  static inline struct vm_area_struct *binder_alloc_get_vma(
>  		struct binder_alloc *alloc)
>  {
> @@ -793,7 +772,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
>  	buffer->free = 1;
>  	binder_insert_free_buffer(alloc, buffer);
>  	alloc->free_async_space = alloc->buffer_size / 2;
> -	binder_alloc_set_vma(alloc, vma);
> +	alloc->vma_addr = vma->vm_start;
>  
>  	return 0;
>  
> @@ -983,7 +962,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
>   */
>  void binder_alloc_vma_close(struct binder_alloc *alloc)
>  {
> -	binder_alloc_set_vma(alloc, NULL);
> +	alloc->vma_addr = 0;
>  }
>  
>  /**
> --
> 2.37.2.672.g94769d06f0-goog
>
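
If you do end up dropping the helper, I'd still keep the asserts
open-coded at the two call sites. Completely untested sketch, just to
illustrate what I mean:

	/* binder_alloc_mmap_handler(): reached from ->mmap(), so the
	 * mmap lock is held for writing */
	mmap_assert_write_locked(alloc->mm);
	alloc->vma_addr = vma->vm_start;

	/* binder_alloc_vma_close(): reached from ->close(), where munmap
	 * may have downgraded the write lock to a read lock */
	mmap_assert_locked(alloc->mm);
	alloc->vma_addr = 0;

That way the lockdep/debug_vm builds keep catching locking mistakes even
without binder_alloc_set_vma().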