Message-ID: <20200527193914.GW2869@paulmck-ThinkPad-P72>
Date:   Wed, 27 May 2020 12:39:14 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     tglx@...utronix.de, frederic@...nel.org,
        linux-kernel@...r.kernel.org, x86@...nel.org, cai@....pw,
        mgorman@...hsingularity.net, joel@...lfernandes.org
Subject: Re: [RFC][PATCH 4/7] smp: Optimize send_call_function_single_ipi()

On Wed, May 27, 2020 at 07:12:36PM +0200, Peter Zijlstra wrote:
> On Wed, May 27, 2020 at 06:35:43PM +0200, Peter Zijlstra wrote:
> > Right, I went through them and didn't find anything obviously amiss.
> > OK, let me do a nicer patch.
> 
> something like so then?
> 
> ---
> Subject: rcu: Allow for smp_call_function() running callbacks from idle
> 
> RCU currently has a hard dependency on smp_call_function() callbacks
> running from interrupt context. A pending optimization is going to
> break that: it will allow idle CPUs to run the callbacks directly from
> the idle loop, which avoids raising the IPI on the requesting CPU and
> avoids handling an exception on the receiving CPU.
> 
> Change rcu_is_cpu_rrupt_from_idle() to also accept task context,
> provided it is the idle task.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>

Looks good to me!

Reviewed-by: Paul E. McKenney <paulmck@...nel.org>

> ---
>  kernel/rcu/tree.c   | 25 +++++++++++++++++++------
>  kernel/sched/idle.c |  4 ++++
>  2 files changed, 23 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index d8e9dbbefcfa..c716eadc7617 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -418,16 +418,23 @@ void rcu_momentary_dyntick_idle(void)
>  EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
>  
>  /**
> - * rcu_is_cpu_rrupt_from_idle - see if interrupted from idle
> + * rcu_is_cpu_rrupt_from_idle - see if 'interrupted' from idle
>   *
>   * If the current CPU is idle and running at a first-level (not nested)
> - * interrupt from idle, return true.  The caller must have at least
> - * disabled preemption.
> + * interrupt, or directly from the idle loop, return true.
> + *
> + * The caller must have at least disabled IRQs.
>   */
>  static int rcu_is_cpu_rrupt_from_idle(void)
>  {
> -	/* Called only from within the scheduling-clock interrupt */
> -	lockdep_assert_in_irq();
> +	long nesting;
> +
> +	/*
> +	 * Usually called from the tick; but also used from smp_call_function()
> +	 * for expedited grace periods. The latter can result in running from
> +	 * the idle task, rather than from an actual IPI.
> +	 */
> +	lockdep_assert_irqs_disabled();
>  
>  	/* Check for counter underflows */
>  	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) < 0,
> @@ -436,9 +443,15 @@ static int rcu_is_cpu_rrupt_from_idle(void)
>  			 "RCU dynticks_nmi_nesting counter underflow/zero!");
>  
>  	/* Are we at first interrupt nesting level? */
> -	if (__this_cpu_read(rcu_data.dynticks_nmi_nesting) != 1)
> +	nesting = __this_cpu_read(rcu_data.dynticks_nmi_nesting);
> +	if (nesting > 1)
>  		return false;
>  
> +	/*
> +	 * If we're not in an interrupt, we must be in the idle task!
> +	 */
> +	WARN_ON_ONCE(!nesting && !is_idle_task(current));
> +
>  	/* Does CPU appear to be idle from an RCU standpoint? */
>  	return __this_cpu_read(rcu_data.dynticks_nesting) == 0;
>  }
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index e9cef84c2b70..05deb81bb3e3 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -289,6 +289,10 @@ static void do_idle(void)
>  	 */
>  	smp_mb__after_atomic();
>  
> +	/*
> +	 * RCU relies on this call being made outside of an RCU read-side
> +	 * critical section.
> +	 */
>  	flush_smp_call_function_from_idle();
>  	schedule_idle();
>  
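
To make the new calling context concrete, below is a minimal user-space
sketch of the idea (illustrative assumptions only: queue_call() and
flush_from_idle() are simplified, single-threaded stand-ins for the
per-CPU call_single_queue machinery in kernel/smp.c and for
flush_smp_call_function_from_idle(); the real code is per-CPU and
lock-free). A callback aimed at an idle CPU is queued rather than
IPI'd, and the idle loop later runs it in the context of the idle task:

/* toy_flush_from_idle.c -- illustrative only, NOT kernel code. */
#include <stdio.h>

struct call_single_data {
	void (*func)(void *info);
	void *info;
	struct call_single_data *next;
};

/* Stand-in for the target CPU's per-CPU call_single_queue. */
static struct call_single_data *cfd_queue;

/* Stand-in for smp_call_function_single() aimed at an idle CPU. */
static void queue_call(struct call_single_data *csd)
{
	csd->next = cfd_queue;
	cfd_queue = csd;
	/* Target CPU is idle: no IPI is raised; the idle loop will flush. */
}

/* Stand-in for flush_smp_call_function_from_idle(). */
static void flush_from_idle(void)
{
	while (cfd_queue) {
		struct call_single_data *csd = cfd_queue;

		cfd_queue = csd->next;
		csd->func(csd->info);	/* runs in the idle task, not in IRQ */
	}
}

static void my_func(void *info)
{
	printf("callback ran from: %s\n", (const char *)info);
}

int main(void)
{
	struct call_single_data csd = {
		.func	= my_func,
		.info	= (void *)"the idle loop, no IPI",
	};

	queue_call(&csd);	/* requesting CPU's side */
	flush_from_idle();	/* what do_idle() now does on the target */
	return 0;
}

This is exactly the case the patch must tolerate: a callback that RCU
could previously assume ran in interrupt context now runs from the
idle task.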

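And a corresponding toy model of the check the patched
rcu_is_cpu_rrupt_from_idle() performs (again illustrative: the two
counters are drastically simplified versions of
rcu_data.dynticks_nesting and rcu_data.dynticks_nmi_nesting, and the
sample values are assumptions rather than the kernel's exact
invariants):

/* toy_rrupt_from_idle.c -- illustrative only, NOT kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct ctx {
	const char *name;
	long nesting;		/* ~ rcu_data.dynticks_nesting     */
	long nmi_nesting;	/* ~ rcu_data.dynticks_nmi_nesting */
	bool is_idle_task;	/* ~ is_idle_task(current)         */
};

static bool rrupt_from_idle(const struct ctx *c)
{
	/* Deeper than a first-level interrupt: not "from idle". */
	if (c->nmi_nesting > 1)
		return false;

	/* Not in an interrupt at all: must be the idle task. */
	if (!c->nmi_nesting && !c->is_idle_task)
		printf("WARN: task context but not the idle task!\n");

	/* Idle from RCU's standpoint: no process-level nesting. */
	return c->nesting == 0;
}

int main(void)
{
	const struct ctx cases[] = {
		{ "tick IRQ hitting idle",	0, 1, false },
		{ "idle task, direct call",	0, 0, true  },
		{ "nested IRQ from idle",	0, 2, false },
		{ "IRQ hitting a busy task",	1, 1, false },
	};
	unsigned int i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-26s -> %d\n", cases[i].name,
		       rrupt_from_idle(&cases[i]));
	return 0;
}

Here nmi_nesting == 1 is the classic tick-interrupting-idle case,
nmi_nesting == 0 with the idle task is the new case the patch admits,
and anything nested deeper is rejected, matching the nesting > 1 test
above.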