Message-ID: <dbx8a57nrkod.fsf@ynaffit-andsys.c.googlers.com>
Date: Thu, 08 May 2025 19:01:38 +0000
From: Tiffany Yang <ynaffit@...gle.com>
To: Alice Ryhl <aliceryhl@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Arve Hjønnevåg
<arve@...roid.com>, Todd Kjos <tkjos@...roid.com>, Martijn Coenen
<maco@...roid.com>, Joel Fernandes <joel@...lfernandes.org>, Christian
Brauner <brauner@...nel.org>, Carlos Llamas <cmllamas@...gle.com>, Suren
Baghdasaryan <surenb@...gle.com>, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v3 1/2] binder: Refactor binder_node print synchronization

Alice Ryhl <aliceryhl@...gle.com> writes:

> On Wed, May 07, 2025 at 09:10:05PM +0000, Tiffany Y. Yang wrote:
>> +/**
>> + * print_next_binder_node_ilocked() - Print binder_node from a locked list
>> + * @m: struct seq_file for output via seq_printf()
>> + * @node: struct binder_node to print fields of
>> + * @prev_node: struct binder_node we hold a temporary reference to (if any)
>> + *
>> + * Helper function to handle synchronization around printing a struct
>> + * binder_node while iterating through @node->proc->nodes or the dead nodes
>> + * list. Caller must hold either @node->proc->inner_lock (for live nodes) or
>> + * binder_dead_nodes_lock. This lock will be released during the body of this
>> + * function, but it will be reacquired before returning to the caller.
>> + *
>> + * Return: pointer to the struct binder_node we hold a tmpref on
>> + */
>> +static struct binder_node *
>> +print_next_binder_node_ilocked(struct seq_file *m, struct binder_node *node,
>> +			       struct binder_node *prev_node)
>> +{
>> +	/*
>> +	 * Take a temporary reference on the node so that it isn't removed
>> +	 * from its proc's tree or the dead nodes list while we print it.
>> +	 */
>> +	binder_inc_node_tmpref_ilocked(node);
>> +	/*
>> +	 * Live nodes need to drop the inner proc lock and dead nodes need to
>> +	 * drop the binder_dead_nodes_lock before trying to take the node lock.
>> +	 */
>> +	if (node->proc)
>> +		binder_inner_proc_unlock(node->proc);
>> +	else
>> +		spin_unlock(&binder_dead_nodes_lock);
>> +	if (prev_node)
>> +		binder_put_node(prev_node);
>
> I don't buy this logic. Imagine the following scenario:
>
> 1. print_binder_proc is called, and we loop over proc->nodes.
> 2. We call binder_inner_proc_unlock(node->proc).
> 3. On another thread, binder_deferred_release() is called.
> 4. The node is removed from proc->nodes and node->proc is set to NULL.
> 5. Back in print_next_binder_node_ilocked(), we now call
>    spin_lock(&binder_dead_nodes_lock) and return.
> 6. In print_binder_proc(), we think that we hold the proc lock, but
>    actually we hold the dead nodes lock instead. BOOM.
>
> What happens with the current code is that print_binder_proc() takes the
> proc lock again after the node was removed from proc->nodes, and then it
> exits the loop because rb_next(n) returns NULL when called on a node not
> in any rb-tree.
>
> Alice
Thanks for catching this!! I think this race could be solved by passing
"proc" in as a parameter (NULL if iterating over the dead_nodes_list),
and locking/unlocking based on that instead of node->proc. WDYT?
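
I.e., roughly something like this (completely untested sketch; I've kept
the names from the patch, assumed binder_inner_proc_lock() as the
counterpart to the unlock, and elided the node-lock/print section in the
middle):

static struct binder_node *
print_next_binder_node_ilocked(struct seq_file *m, struct binder_proc *proc,
			       struct binder_node *node,
			       struct binder_node *prev_node)
{
	binder_inc_node_tmpref_ilocked(node);
	/*
	 * Drop whichever lock the caller is iterating under; @proc is
	 * NULL when walking the dead nodes list, so we no longer look
	 * at node->proc here at all.
	 */
	if (proc)
		binder_inner_proc_unlock(proc);
	else
		spin_unlock(&binder_dead_nodes_lock);
	if (prev_node)
		binder_put_node(prev_node);

	/* ... take the node lock, print the node, drop the node lock ... */

	/*
	 * Reacquire the same lock we dropped, even if node->proc was
	 * cleared by binder_deferred_release() in the meantime.
	 */
	if (proc)
		binder_inner_proc_lock(proc);
	else
		spin_lock(&binder_dead_nodes_lock);
	return node;
}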
--
Tiffany Y. Yang