Message-ID: <aByjK9-FR6KsYx_7@google.com>
Date: Thu, 8 May 2025 12:27:23 +0000
From: Alice Ryhl <aliceryhl@...gle.com>
To: "Tiffany Y. Yang" <ynaffit@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, 
	"Arve Hjønnevåg" <arve@...roid.com>, Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>, 
	Joel Fernandes <joel@...lfernandes.org>, Christian Brauner <brauner@...nel.org>, 
	Carlos Llamas <cmllamas@...gle.com>, Suren Baghdasaryan <surenb@...gle.com>, linux-kernel@...r.kernel.org, 
	kernel-team@...roid.com
Subject: Re: [PATCH v3 1/2] binder: Refactor binder_node print synchronization

On Wed, May 07, 2025 at 09:10:05PM +0000, Tiffany Y. Yang wrote:
> +/**
> + * print_next_binder_node_ilocked() - Print binder_node from a locked list
> + * @m:          struct seq_file for output via seq_printf()
> + * @node:       struct binder_node to print fields of
> + * @prev_node:	struct binder_node we hold a temporary reference to (if any)
> + *
> + * Helper function to handle synchronization around printing a struct
> + * binder_node while iterating through @node->proc->nodes or the dead nodes
> + * list. Caller must hold either @node->proc->inner_lock (for live nodes) or
> + * binder_dead_nodes_lock. This lock will be released during the body of this
> + * function, but it will be reacquired before returning to the caller.
> + *
> + * Return:	pointer to the struct binder_node we hold a tmpref on
> + */
> +static struct binder_node *
> +print_next_binder_node_ilocked(struct seq_file *m, struct binder_node *node,
> +			       struct binder_node *prev_node)
> +{
> +	/*
> +	 * Take a temporary reference on the node so that it isn't removed from
> +	 * its proc's tree or the dead nodes list while we print it.
> +	 */
> +	binder_inc_node_tmpref_ilocked(node);
> +	/*
> +	 * Live nodes need to drop the inner proc lock and dead nodes need to
> +	 * drop the binder_dead_nodes_lock before trying to take the node lock.
> +	 */
> +	if (node->proc)
> +		binder_inner_proc_unlock(node->proc);
> +	else
> +		spin_unlock(&binder_dead_nodes_lock);
> +	if (prev_node)
> +		binder_put_node(prev_node);

I don't buy this logic. Imagine the following scenario:

1. print_binder_proc is called, and we loop over proc->nodes.
2. We call binder_inner_proc_unlock(node->proc).
3. On another thread, binder_deferred_release() is called.
4. The node is removed from proc->nodes and node->proc is set to NULL.
5. Back in print_next_binder_node_ilocked(), we now call
   spin_lock(&binder_dead_nodes_lock) and return.
6. In print_binder_proc(), we think that we hold the proc lock, but
   actually we hold the dead nodes lock instead. BOOM.
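
To spell out why step 6 bites: I'm assuming the tail of this function
(not part of the quoted hunk, so the exact code may differ) re-reads
node->proc to decide which lock to retake, roughly:

	/*
	 * Assumed tail of print_next_binder_node_ilocked(): node->proc
	 * is re-read after the unlock above. If the node died in that
	 * window, node->proc is now NULL, so binder_dead_nodes_lock is
	 * retaken even though the caller entered holding its proc's
	 * inner lock.
	 */
	if (node->proc)
		binder_inner_proc_lock(node->proc);
	else
		spin_lock(&binder_dead_nodes_lock);
	return node;

That re-read is what lets the lock released on entry and the lock held
on return be two different locks.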

What happens with the current code is that print_binder_proc() takes the
proc lock again after the node was removed from proc->nodes, and then it
exits the loop because rb_next(n) returns NULL when called on a node not
in any rb-tree.
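
For comparison, the existing loop in print_binder_proc() is shaped
roughly like this (simplified from memory, so details may be off); the
important part is that it always retakes the lock on its own proc
pointer instead of re-reading node->proc:

	binder_inner_proc_lock(proc);
	for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) {
		/*
		 * (details elided) pin the node with a tmpref, drop
		 * the inner lock, print the node under its lock, ...
		 */
		binder_inner_proc_unlock(proc);
		/* ... and always retake the lock we started with. */
		binder_inner_proc_lock(proc);
	}
	binder_inner_proc_unlock(proc);

Since the lock is retaken on proc unconditionally, a node that moved to
the dead list in the meantime only makes rb_next() return NULL and end
the loop, rather than leaving the caller holding the wrong lock.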

Alice
