Message-ID: <CAH5fLghwpEE8GAjVTFOj0pBJ-HW=LvaWf_K3P9+optpjTsAfmw@mail.gmail.com>
Date: Thu, 8 May 2025 14:26:13 +0200
From: Alice Ryhl <aliceryhl@...gle.com>
To: Tiffany Yang <ynaffit@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Arve Hjønnevåg <arve@...roid.com>,
Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
Joel Fernandes <joel@...lfernandes.org>, Christian Brauner <brauner@...nel.org>,
Carlos Llamas <cmllamas@...gle.com>, Suren Baghdasaryan <surenb@...gle.com>, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v2 1/2] binder: Refactor binder_node print synchronization
On Thu, May 8, 2025 at 2:18 PM Alice Ryhl <aliceryhl@...gle.com> wrote:
>
> On Mon, May 05, 2025 at 09:42:32PM +0000, Tiffany Yang wrote:
> > + if (node->proc)
> > + binder_inner_proc_unlock(node->proc);
> > + else
> > + spin_unlock(&binder_dead_nodes_lock);
>
> I don't buy this logic. Imagine the following scenario:
>
> 1. print_binder_proc is called, and we loop over proc->nodes.
> 2. We call binder_inner_proc_unlock(node->proc).
> 3. On another thread, binder_deferred_release() is called.
> 4. The node is removed from proc->nodes and node->proc is set to NULL.
> 5. Back in print_next_binder_node_ilocked(), we now call
> spin_lock(&binder_dead_nodes_lock) and return.
> 6. In print_binder_proc(), we think that we hold the proc lock, but
> actually we hold the dead nodes lock instead. BOOM.
>
> What happens with the current code is that print_binder_proc() takes the
> proc lock again after the node was removed from proc->nodes, and then it
> exits the loop because rb_next(n) returns NULL when called on a node not
> in any rb-tree.
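
To make the hazard concrete, here is a minimal userspace sketch of the
pattern I'm objecting to. This is not the binder code itself; the demo_*
names and the pthread mutexes are just stand-ins for the proc inner lock
and binder_dead_nodes_lock:

	#include <pthread.h>
	#include <stddef.h>

	static pthread_mutex_t proc_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t dead_nodes_lock = PTHREAD_MUTEX_INITIALIZER;

	struct demo_proc {
		pthread_mutex_t *inner_lock;	/* stand-in for the proc lock */
	};

	struct demo_node {
		struct demo_proc *proc;		/* cleared when the proc is torn down */
	};

	/*
	 * Caller holds &proc_lock on entry and expects to still hold
	 * &proc_lock on return.
	 */
	void demo_print_node(struct demo_node *node)
	{
		/* Drop the lock for the duration of the "printing". */
		pthread_mutex_unlock(&proc_lock);

		/*
		 * Window: another thread may tear the proc down here and
		 * set node->proc = NULL (steps 3-4 above).
		 */

		/*
		 * Buggy: which lock we re-take is re-derived from
		 * node->proc. If the proc died in the window, we return
		 * holding dead_nodes_lock while the caller believes it
		 * holds proc_lock (steps 5-6 above).
		 */
		if (node->proc)
			pthread_mutex_lock(node->proc->inner_lock);
		else
			pthread_mutex_lock(&dead_nodes_lock);
	}

	int main(void)
	{
		struct demo_proc p = { .inner_lock = &proc_lock };
		struct demo_node n = { .proc = &p };

		/*
		 * Single-threaded, so the race cannot trigger here; the
		 * hazard needs a second thread clearing n.proc inside the
		 * unlocked window.
		 */
		pthread_mutex_lock(&proc_lock);
		demo_print_node(&n);
		pthread_mutex_unlock(&proc_lock);
		return 0;
	}

One way to avoid this is to decide which lock is held once, while the
decision is still valid, instead of re-deriving it from node->proc after
the lock has been dropped.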
Oh, there's a v3 of this. Let me resend it there.
Alice