Message-Id: <1251225483.2706.4.camel@josh-work.beaverton.ibm.com>
Date:	Tue, 25 Aug 2009 11:38:03 -0700
From:	Josh Triplett <josht@...ux.vnet.ibm.com>
To:	paulmck@...ux.vnet.ibm.com
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...ymtl.ca, dvhltc@...ibm.com, niv@...ibm.com,
	tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org
Subject: Re: [PATCH -tip] Create rcutree plugins to handle hotplug CPU for
 multi-level trees

On Tue, 2009-08-25 at 11:22 -0700, Paul E. McKenney wrote:
> When offlining CPUs from a multi-level tree, there is the possibility
> of offlining the last CPU from a given node when there are preempted
> RCU read-side critical sections that started life on one of the CPUs on
> that node.  In this case, the corresponding tasks will be enqueued via
> the task_struct's rcu_node_entry list_head onto one of the rcu_node's
> blocked_tasks[] lists.  These tasks need to be moved somewhere else
> so that they will prevent the current grace period from ending.
> That somewhere is the root rcu_node.
> 
> With this patch, TREE_PREEMPT_RCU passes moderate rcutorture testing
> with aggressive CPU-hotplugging (no delay between inserting/removing
> randomly selected CPU).
> 
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

Looks good.  A couple of comments below.

> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1208,7 +1208,7 @@ struct task_struct {
>  #ifdef CONFIG_TREE_PREEMPT_RCU
>  	int rcu_read_lock_nesting;
>  	char rcu_read_unlock_special;
> -	int rcu_blocked_cpu;
> +	void *rcu_blocked_node;

This should use struct rcu_node *, not void *.  That would eliminate
several casts in the changes below.  You can forward-declare struct
rcu_node if you want to avoid including RCU headers in sched.h.
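
Something along these lines (untested, and the placement of the forward
declaration is only illustrative):

	struct rcu_node;	/* forward declaration, no RCU headers needed */

	#ifdef CONFIG_TREE_PREEMPT_RCU
		int rcu_read_lock_nesting;
		char rcu_read_unlock_special;
		struct rcu_node *rcu_blocked_node;
	#endif /* CONFIG_TREE_PREEMPT_RCU */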

> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -92,7 +92,7 @@ static void rcu_preempt_qs(int cpu)
>  		rnp = rdp->mynode;
>  		spin_lock(&rnp->lock);
>  		t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BLOCKED;
> -		t->rcu_blocked_cpu = cpu;
> +		t->rcu_blocked_node = (void *)rnp;

Regardless of whether you change the type in the structure, you never
need to cast a pointer to type void *; any non-function pointer will
become void * without complaint.
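
In other words, even with the current void * member, plain assignment
does the right thing:

	t->rcu_blocked_node = rnp;	/* implicit conversion to void * */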

> @@ -170,12 +170,21 @@ static void rcu_read_unlock_special(struct task_struct *t)
>  	if (special & RCU_READ_UNLOCK_BLOCKED) {
>  		t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BLOCKED;
> 
> -		/* Remove this task from the list it blocked on. */
> -		rnp = rcu_preempt_state.rda[t->rcu_blocked_cpu]->mynode;
> -		spin_lock(&rnp->lock);
> +		/*
> +		 * Remove this task from the list it blocked on.  The
> +		 * task can migrate while we acquire the lock, but at
> +		 * most one time.  So at most two passes through loop.
> +		 */
> +		for (;;) {
> +			rnp = (struct rcu_node *)t->rcu_blocked_node;
> +			spin_lock(&rnp->lock);
> +			if (rnp == (struct rcu_node *)t->rcu_blocked_node)
> +				break;
> +			spin_unlock(&rnp->lock);
> +		}

Both of the casts of t->rcu_blocked_node can go away here, given the
type change in the structure.
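
With the member declared as struct rcu_node *, the loop becomes (same
logic, just no casts):

	for (;;) {
		rnp = t->rcu_blocked_node;
		spin_lock(&rnp->lock);
		if (rnp == t->rcu_blocked_node)
			break;
		spin_unlock(&rnp->lock);
	}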

- Josh Triplett
