Message-ID: <ZyqRvLlPdvKknbWE@google.com>
Date: Tue, 5 Nov 2024 21:44:28 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>,
Christian Brauner <brauner@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>,
Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH 8/8] binder: use per-vma lock in page installation
On Tue, Nov 05, 2024 at 08:02:50PM +0000, Carlos Llamas wrote:
> +static struct page *binder_page_lookup(struct mm_struct *mm,
> +				       struct vm_area_struct *vma,
> +				       unsigned long addr,
> +				       bool mm_locked)
> +{
> +	struct folio_walk fw;
> +	struct page *page;
> +
> +	/* folio_walk_start() requires the mmap_lock */
> +	if (!mm_locked)
> +		mmap_read_lock(mm);
> +
> +	if (!folio_walk_start(&fw, vma, addr, 0))
> +		return NULL;

Just realized that the mmap_lock needs to be released if folio_walk_start()
fails and !mm_locked. I'll add the fix for v2.
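
For reference, the failure path would need to drop the lock it took a few
lines above, roughly along these lines (just a sketch of the idea, not the
actual v2 change):

	if (!folio_walk_start(&fw, vma, addr, 0)) {
		/* drop the mmap_lock taken in this function before bailing */
		if (!mm_locked)
			mmap_read_unlock(mm);
		return NULL;
	}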