Message-ID: <YvPcX6/1E0hAhOO5@google.com>
Date: Wed, 10 Aug 2022 16:27:11 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Liam Howlett <liam.howlett@...cle.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
syzbot <syzbot+a7b60a176ec13cafb793@...kaller.appspotmail.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ondrej Mosnacek <omosnace@...hat.com>,
"syzkaller-bugs@...glegroups.com" <syzkaller-bugs@...glegroups.com>,
Minchan Kim <minchan@...nel.org>,
Christian Brauner <brauner@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Hridya Valsaraju <hridya@...gle.com>,
Joel Fernandes <joel@...lfernandes.org>,
Martijn Coenen <maco@...roid.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Todd Kjos <tkjos@...roid.com>,
Matthew Wilcox <willy@...radead.org>,
Arve Hjønnevåg <arve@...roid.com>
Subject: Re: [PATCH v2] binder_alloc: Add missing mmap_lock calls when using
the VMA
On Wed, Aug 10, 2022 at 04:02:25PM +0000, Liam Howlett wrote:
> Take the mmap_read_lock() when using the VMA in
> binder_alloc_print_pages() and when checking for a VMA in
> binder_alloc_new_buf_locked().
>
> It is worth noting that binder_alloc_new_buf_locked() drops the VMA read
> lock after it verifies a VMA exists, but the lock may be taken again
> deeper in the call stack, if necessary.
>
> Reported-by: Ondrej Mosnacek <omosnace@...hat.com>
> Reported-by: syzbot+a7b60a176ec13cafb793@...kaller.appspotmail.com
> Fixes: a43cfc87caaf ("android: binder: stop saving a pointer to the VMA")
> Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> ---
> drivers/android/binder_alloc.c | 31 +++++++++++++++++++++----------
> 1 file changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index f555eebceef6..c5e7c6d3a844 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -395,12 +395,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
>  	size_t size, data_offsets_size;
>  	int ret;
>  
> +	mmap_read_lock(alloc->vma_vm_mm);
>  	if (!binder_alloc_get_vma(alloc)) {
> +		mmap_read_unlock(alloc->vma_vm_mm);
>  		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
>  				   "%d: binder_alloc_buf, no vma\n",
>  				   alloc->pid);
>  		return ERR_PTR(-ESRCH);
>  	}
> +	mmap_read_unlock(alloc->vma_vm_mm);
>  
>  	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
>  		ALIGN(offsets_size, sizeof(void *));
> @@ -922,17 +925,25 @@ void binder_alloc_print_pages(struct seq_file *m,
>  	 * Make sure the binder_alloc is fully initialized, otherwise we might
>  	 * read inconsistent state.
>  	 */
> -	if (binder_alloc_get_vma(alloc) != NULL) {
> -		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
> -			page = &alloc->pages[i];
> -			if (!page->page_ptr)
> -				free++;
> -			else if (list_empty(&page->lru))
> -				active++;
> -			else
> -				lru++;
> -		}
> +
> +	mmap_read_lock(alloc->vma_vm_mm);
> +	if (binder_alloc_get_vma(alloc) == NULL) {
> +		mmap_read_unlock(alloc->vma_vm_mm);
> +		goto uninitialized;
>  	}
> +
> +	mmap_read_unlock(alloc->vma_vm_mm);
> +	for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
> +		page = &alloc->pages[i];
> +		if (!page->page_ptr)
> +			free++;
> +		else if (list_empty(&page->lru))
> +			active++;
> +		else
> +			lru++;
> +	}
> +
> +uninitialized:
>  	mutex_unlock(&alloc->mutex);
>  	seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
>  	seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);
> --
> 2.35.1
Thanks Liam, this looks good.
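
For context, the pattern the patch settles on around the VMA check is
roughly the below (a trimmed sketch of the first hunk, with the debug
print dropped):

	mmap_read_lock(alloc->vma_vm_mm);
	if (!binder_alloc_get_vma(alloc)) {
		/* mmap is gone or was never set up: bail out */
		mmap_read_unlock(alloc->vma_vm_mm);
		return ERR_PTR(-ESRCH);
	}
	mmap_read_unlock(alloc->vma_vm_mm);
	/*
	 * Buffer bookkeeping continues under alloc->mutex; the mmap read
	 * lock may be retaken deeper in the call stack if needed.
	 */

binder_alloc_print_pages() follows the same lock/check/unlock order before
walking alloc->pages.
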
Acked-by: Carlos Llamas <cmllamas@...gle.com>