Message-ID: <YvLLQ/qasEJffdnn@google.com>
Date: Tue, 9 Aug 2022 21:01:55 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Liam Howlett <liam.howlett@...cle.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
syzbot <syzbot+a7b60a176ec13cafb793@...kaller.appspotmail.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ondrej Mosnacek <omosnace@...hat.com>,
"syzkaller-bugs@...glegroups.com" <syzkaller-bugs@...glegroups.com>,
Minchan Kim <minchan@...nel.org>,
Christian Brauner <brauner@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Hridya Valsaraju <hridya@...gle.com>,
Joel Fernandes <joel@...lfernandes.org>,
Martijn Coenen <maco@...roid.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Todd Kjos <tkjos@...roid.com>,
Matthew Wilcox <willy@...radead.org>,
Arve Hjønnevåg <arve@...roid.com>
Subject: Re: [PATCH] binder_alloc: Add missing mmap_lock calls when using the
VMA
On Tue, Aug 09, 2022 at 07:02:17PM +0000, Liam Howlett wrote:
> >
> > do we need to hold on to the lock while we loop through the pages here?
>
> I think we do? Holding this lock will ensure the pages don't go away, I
> believe (looking at mm/rmap.c comments on locking at the top)?
>
> In any case, this function is called from print_binder_proc_stats(),
> which looks to be a debugfs/debugging call, so I thought safer was
> better than faster with a potential race.
The pages are protected by alloc->mutex, so you could release the mmap
lock immediately after the binder_alloc_get_vma() call. I agree this is
a debugging path, so it would be nice to reduce contention there.
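
For illustration, a rough sketch of the ordering being suggested (this is
not the actual patch; the function body, and the use of alloc->vma_vm_mm
and the alloc->pages array, are assumptions based on binder_alloc.c of
that era):

	/*
	 * Hypothetical sketch: hold mmap_read_lock only around
	 * binder_alloc_get_vma(), then walk the page array under
	 * alloc->mutex alone, since that mutex protects the pages.
	 */
	static void binder_alloc_print_pages_sketch(struct seq_file *m,
						    struct binder_alloc *alloc)
	{
		struct vm_area_struct *vma;
		int i, active = 0;

		mutex_lock(&alloc->mutex);
		mmap_read_lock(alloc->vma_vm_mm);
		vma = binder_alloc_get_vma(alloc);
		/* released early: pages stay protected by alloc->mutex */
		mmap_read_unlock(alloc->vma_vm_mm);
		if (vma) {
			for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++)
				if (alloc->pages[i].page_ptr)
					active++;
		}
		mutex_unlock(&alloc->mutex);
		seq_printf(m, "  pages active: %d\n", active);
	}

The point being that only the binder_alloc_get_vma() lookup needs the mmap
lock, so the page loop does not extend the mmap_lock hold time.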