Message-Id: <20220829105814.857786586@linuxfoundation.org>
Date: Mon, 29 Aug 2022 12:59:48 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Ondrej Mosnacek <omosnace@...hat.com>,
syzbot+a7b60a176ec13cafb793@...kaller.appspotmail.com,
Carlos Llamas <cmllamas@...gle.com>,
Minchan Kim <minchan@...nel.org>,
"Christian Brauner (Microsoft)" <brauner@...nel.org>,
Hridya Valsaraju <hridya@...gle.com>,
Joel Fernandes <joel@...lfernandes.org>,
Martijn Coenen <maco@...roid.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Todd Kjos <tkjos@...roid.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Arve Hjønnevåg <arve@...roid.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: [PATCH 5.19 138/158] binder_alloc: add missing mmap_lock calls when using the VMA
From: Liam Howlett <liam.howlett@...cle.com>
commit 44e602b4e52f70f04620bbbf4fe46ecb40170bde upstream.
Take the mmap_read_lock() when using the VMA in binder_alloc_print_pages()
and when checking for a VMA in binder_alloc_new_buf_locked().
It is worth noting that binder_alloc_new_buf_locked() drops the mmap read
lock after it verifies a VMA exists; the lock may be taken again deeper in
the call stack, if necessary.
Link: https://lkml.kernel.org/r/20220810160209.1630707-1-Liam.Howlett@oracle.com
Fixes: a43cfc87caaf ("android: binder: stop saving a pointer to the VMA")
Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
Reported-by: Ondrej Mosnacek <omosnace@...hat.com>
Reported-by: <syzbot+a7b60a176ec13cafb793@...kaller.appspotmail.com>
Acked-by: Carlos Llamas <cmllamas@...gle.com>
Tested-by: Ondrej Mosnacek <omosnace@...hat.com>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Christian Brauner (Microsoft) <brauner@...nel.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Hridya Valsaraju <hridya@...gle.com>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Martijn Coenen <maco@...roid.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>
Cc: Todd Kjos <tkjos@...roid.com>
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: "Arve Hjønnevåg" <arve@...roid.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/android/binder_alloc.c | 31 +++++++++++++++++++++----------
1 file changed, 21 insertions(+), 10 deletions(-)
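
As an illustration of the locking pattern applied below, here is a minimal
userspace sketch, assuming a pthread rwlock in place of the kernel's mmap
lock. The names (fake_alloc, get_vma_ptr, new_buf_locked) are hypothetical
and are not binder code; the point is only to show the shape of the fix:
take the read side of the lock around the "does a mapping exist?" check and
release it as soon as the check is done.

/*
 * Sketch of the pattern used in this patch: take a read lock around the
 * existence check of the cached mapping pointer and drop it right after.
 * A pthread rwlock stands in for mmap_read_lock()/mmap_read_unlock();
 * all names here are hypothetical, not binder code.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_alloc {
	pthread_rwlock_t map_lock;	/* stands in for the mmap lock */
	void *vma;			/* stands in for the cached VMA pointer */
};

/* Caller must hold map_lock (read) while the returned pointer is used. */
static void *get_vma_ptr(struct fake_alloc *a)
{
	return a->vma;
}

static int new_buf_locked(struct fake_alloc *a)
{
	pthread_rwlock_rdlock(&a->map_lock);
	if (!get_vma_ptr(a)) {
		pthread_rwlock_unlock(&a->map_lock);
		fprintf(stderr, "no vma\n");	/* mirrors the -ESRCH bail-out */
		return -1;
	}
	pthread_rwlock_unlock(&a->map_lock);

	/* ... allocation work continues without the read lock held ... */
	return 0;
}

int main(void)
{
	struct fake_alloc a = { .vma = NULL };

	pthread_rwlock_init(&a.map_lock, NULL);
	new_buf_locked(&a);		/* fails the existence check */

	a.vma = (void *)&a;		/* pretend a mapping now exists */
	new_buf_locked(&a);		/* passes the check this time */

	pthread_rwlock_destroy(&a.map_lock);
	return 0;
}

As in binder_alloc_new_buf_locked(), the lock is dropped immediately after
the check because the rest of the allocation path does not need it held;
code deeper in the call stack can retake it if it needs the VMA again.
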
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -395,12 +395,15 @@ static struct binder_buffer *binder_allo
size_t size, data_offsets_size;
int ret;

+	mmap_read_lock(alloc->vma_vm_mm);
if (!binder_alloc_get_vma(alloc)) {
+ mmap_read_unlock(alloc->vma_vm_mm);
binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
"%d: binder_alloc_buf, no vma\n",
alloc->pid);
return ERR_PTR(-ESRCH);
}
+	mmap_read_unlock(alloc->vma_vm_mm);

	data_offsets_size = ALIGN(data_size, sizeof(void *)) +
ALIGN(offsets_size, sizeof(void *));
@@ -922,17 +925,25 @@ void binder_alloc_print_pages(struct seq
* Make sure the binder_alloc is fully initialized, otherwise we might
* read inconsistent state.
*/
- if (binder_alloc_get_vma(alloc) != NULL) {
- for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
- page = &alloc->pages[i];
- if (!page->page_ptr)
- free++;
- else if (list_empty(&page->lru))
- active++;
- else
- lru++;
- }
+
+ mmap_read_lock(alloc->vma_vm_mm);
+ if (binder_alloc_get_vma(alloc) == NULL) {
+ mmap_read_unlock(alloc->vma_vm_mm);
+ goto uninitialized;
+ }
+
+ mmap_read_unlock(alloc->vma_vm_mm);
+ for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+ page = &alloc->pages[i];
+ if (!page->page_ptr)
+ free++;
+ else if (list_empty(&page->lru))
+ active++;
+ else
+ lru++;
}
+
+uninitialized:
mutex_unlock(&alloc->mutex);
seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high);