Date:	Tue, 25 Aug 2009 14:48:00 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	josht@...ux.vnet.ibm.com, dvhltc@...ibm.com, niv@...ibm.com,
	tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org
Subject: Re: [PATCH -tip] Create rcutree plugins to handle hotplug CPU for
	multi-level trees

* Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> When offlining CPUs from a multi-level tree, there is the possibility
> of offlining the last CPU from a given node when there are preempted
> RCU read-side critical sections that started life on one of the CPUs on
> that node.  In this case, the corresponding tasks will be enqueued via
> the task_struct's rcu_node_entry list_head onto one of the rcu_node's
> blocked_tasks[] lists.  These tasks need to be moved somewhere else
> so that they will prevent the current grace period from ending.
> That somewhere is the root rcu_node.
> 
> With this patch, TREE_PREEMPT_RCU passes moderate rcutorture testing
> with aggressive CPU-hotplugging (no delay between inserting/removing
> randomly selected CPU).
> 
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> ---
[...]
>  /*
> + * Handle tasklist migration for case in which all CPUs covered by the
> + * specified rcu_node have gone offline.  Move them up to the root
> + * rcu_node.  The reason for not just moving them to the immediate
> + * parent is to remove the need for rcu_read_unlock_special() to
> + * make more than two attempts to acquire the target rcu_node's lock.
> + *
> + * The caller must hold rnp->lock with irqs disabled.
> + */
> +static void rcu_preempt_offline_tasks(struct rcu_state *rsp,
> +				      struct rcu_node *rnp)
> +{
> +	int i;
> +	struct list_head *lp;
> +	struct list_head *lp_root;
> +	struct rcu_node *rnp_root = rcu_get_root(rsp);
> +	struct task_struct *tp;
> +
> +	if (rnp == rnp_root)
> +		return;  /* Shouldn't happen: at least one CPU online. */
> +

Hrm, is it "shouldn't happen" or "could be called, but we should not
move anything"?

If it is really the former, we could put a WARN_ON_ONCE() (or, more
aggressively, a BUG_ON()) there, so we catch the caller misbehaving
instead of silently ignoring the error.
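
Something like this, say (just a sketch of the idea, using the names
from your patch):

	/*
	 * Reaching the root rcu_node here would mean no CPU is left
	 * online, which should be impossible.  Complain loudly, once,
	 * rather than silently bailing out.
	 */
	if (WARN_ON_ONCE(rnp == rnp_root))
		return;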

> +	/*
> +	 * Move tasks up to root rcu_node.  Rely on the fact that the
> +	 * root rcu_node can be at most one ahead of the rest of the
> +	 * rcu_nodes in terms of gp_num value.

Do you gather the description of such constraints in a central place,
either around the code or in the design documentation in the kernel
tree?  I just want to point out that every clever assumption like this
one, which is based on the constraints imposed by the current design,
should be easy to enumerate a year from now if we ever decide to move
from tree to hashed RCU (or whatever next step turns out to be
necessary then).

I am just worried that these migration helpers seem to have been added
to the design as an afterthought, and might therefore make future
evolution more difficult.
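
For instance, the "root can be at most one grace period ahead"
assumption could be checked cheaply right where it is relied upon, so
that a future redesign which invalidates it fails loudly instead of
quietly corrupting the blocked_tasks[] lists.  A rough sketch -- I am
assuming here that the per-node grace-period counter is the rcu_node's
->gpnum field (adjust to whatever it is actually called):

	/*
	 * The root rcu_node is assumed to be at most one grace period
	 * ahead of any other rcu_node.  Scream once if that ever stops
	 * being true rather than moving tasks onto the wrong list.
	 */
	WARN_ON_ONCE(rnp_root->gpnum - rnp->gpnum > 1);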

Thanks,

Mathieu

>  This fact allows us to
> +	 * move the blocked_tasks[] array directly, element by element.
> +	 */
> +	for (i = 0; i < 2; i++) {
> +		lp = &rnp->blocked_tasks[i];
> +		lp_root = &rnp_root->blocked_tasks[i];
> +		while (!list_empty(lp)) {
> +			tp = list_entry(lp->next, typeof(*tp), rcu_node_entry);
> +			spin_lock(&rnp_root->lock); /* irqs already disabled */
> +			list_del(&tp->rcu_node_entry);
> +			tp->rcu_blocked_node = rnp_root;
> +			list_add(&tp->rcu_node_entry, lp_root);
> +			spin_unlock(&rnp_root->lock); /* irqs remain disabled */
> +		}
> +	}
> +}
> +
> +/*
>   * Do CPU-offline processing for preemptable RCU.
>   */
>  static void rcu_preempt_offline_cpu(int cpu)
> @@ -410,6 +460,15 @@ static int rcu_preempted_readers(struct rcu_node *rnp)
>  #ifdef CONFIG_HOTPLUG_CPU
>  
>  /*
> + * Because preemptable RCU does not exist, it never needs to migrate
> + * tasks that were blocked within RCU read-side critical sections.
> + */
> +static void rcu_preempt_offline_tasks(struct rcu_state *rsp,
> +				      struct rcu_node *rnp)
> +{
> +}
> +
> +/*
>   * Because preemptable RCU does not exist, it never needs CPU-offline
>   * processing.
>   */

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68