Date:   Mon, 14 Nov 2016 09:25:12 -0800
From:   Josh Triplett <josh@...htriplett.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
        dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
        fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 6/7] rcu: Make expedited grace periods
 recheck dyntick idle state

On Mon, Nov 14, 2016 at 08:57:12AM -0800, Paul E. McKenney wrote:
> Expedited grace periods check dyntick-idle state, and avoid sending
> IPIs to idle CPUs, including those running guest OSes, and, on NOHZ_FULL
> kernels, nohz_full CPUs.  However, the kernel has been observed checking
> a CPU while it was non-idle, but sending the IPI only after it had gone
> idle.  This commit therefore rechecks the idle state immediately before
> sending the IPI, refraining from IPIing CPUs that have since gone idle.
> 
> Reported-by: Rik van Riel <riel@...hat.com>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>

atomic_add_return(0, ...) seems odd.  Do you actually want that, rather
than atomic_read(...)?  If so, can you please document exactly why?
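
To spell out the distinction I have in mind: atomic_add_return() is a
value-returning RMW and so implies a full memory barrier on each side
of the operation, whereas atomic_read() is a plain load with no
ordering guarantees.  A rough userspace analogy in C11 atomics
(illustrative only, these are not the kernel primitives):

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int v;

	int main(void)
	{
		/* RMW of zero: returns the current value and is fully
		 * ordered (seq_cst) on both sides, analogous to
		 * atomic_add_return(0, &v) in the kernel.
		 */
		int snap_rmw = atomic_fetch_add(&v, 0);

		/* Plain load: no ordering against surrounding accesses,
		 * analogous to the kernel's atomic_read(&v).
		 */
		int snap_read = atomic_load_explicit(&v,
						     memory_order_relaxed);

		printf("%d %d\n", snap_rmw, snap_read);
		return 0;
	}

If the full barrier is load-bearing here, that seems worth a comment in
the code.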

>  kernel/rcu/tree.h     |  1 +
>  kernel/rcu/tree_exp.h | 12 +++++++++++-
>  2 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index e99a5234d9ed..fe98dd24adf8 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -404,6 +404,7 @@ struct rcu_data {
>  	atomic_long_t exp_workdone1;	/* # done by others #1. */
>  	atomic_long_t exp_workdone2;	/* # done by others #2. */
>  	atomic_long_t exp_workdone3;	/* # done by others #3. */
> +	int exp_dynticks_snap;		/* Double-check need for IPI. */
>  
>  	/* 7) Callback offloading. */
>  #ifdef CONFIG_RCU_NOCB_CPU
> diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> index 24343eb87b58..d3053e99fdb6 100644
> --- a/kernel/rcu/tree_exp.h
> +++ b/kernel/rcu/tree_exp.h
> @@ -358,8 +358,10 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  			struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
>  			struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
>  
> +			rdp->exp_dynticks_snap =
> +				atomic_add_return(0, &rdtp->dynticks);
>  			if (raw_smp_processor_id() == cpu ||
> -			    !(atomic_add_return(0, &rdtp->dynticks) & 0x1) ||
> +			    !(rdp->exp_dynticks_snap & 0x1) ||
>  			    !(rnp->qsmaskinitnext & rdp->grpmask))
>  				mask_ofl_test |= rdp->grpmask;
>  		}
> @@ -377,9 +379,17 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  		/* IPI the remaining CPUs for expedited quiescent state. */
>  		for_each_leaf_node_possible_cpu(rnp, cpu) {
>  			unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
> +			struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
> +			struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
> +
>  			if (!(mask_ofl_ipi & mask))
>  				continue;
>  retry_ipi:
> +			if (atomic_add_return(0, &rdtp->dynticks) !=
> +			    rdp->exp_dynticks_snap) {
> +				mask_ofl_test |= mask;
> +				continue;
> +			}
>  			ret = smp_call_function_single(cpu, func, rsp, 0);
>  			if (!ret) {
>  				mask_ofl_ipi &= ~mask;
> -- 
> 2.5.2
> 

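Also, to make sure I follow the overall flow: the first loop snapshots
each CPU's ->dynticks counter into ->exp_dynticks_snap, and the recheck
just before the IPI compares against that snapshot.  Any movement of
the counter means the CPU has entered or passed through idle in the
meantime, which is itself a quiescent state, so the IPI can be skipped.
A distilled standalone sketch of that two-phase pattern (hypothetical
names, not the actual RCU internals):

	#include <stdio.h>

	static int dynticks = 5;	/* odd: CPU non-idle at snapshot */

	int main(void)
	{
		int snap = dynticks;	/* phase 1: snapshot the counter */

		dynticks++;		/* CPU goes idle: counter goes even */

		/* Phase 2, immediately before sending the IPI: if the
		 * counter moved, the CPU went through an idle
		 * transition, so report the quiescent state instead of
		 * IPIing it.
		 */
		if (dynticks != snap)
			printf("counter moved (%d -> %d): skip IPI\n",
			       snap, dynticks);
		else
			printf("counter unchanged: IPI still needed\n");
		return 0;
	}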