Date:   Tue, 19 Sep 2017 11:58:59 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     Neeraj Upadhyay <neeraju@...eaurora.org>, josh@...htriplett.org,
        mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
        linux-kernel@...r.kernel.org, sramana@...eaurora.org,
        prsood@...eaurora.org, pkondeti@...eaurora.org,
        markivx@...eaurora.org, peterz@...radead.org
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu

On Tue, 19 Sep 2017 08:31:26 -0700
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:

> commit bc43e2e7e08134e6f403ac845edcf4f85668d803
> Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Date:   Mon Sep 18 08:54:40 2017 -0700
> 
>     sched: Make resched_cpu() unconditional
>     
>     The current implementation of synchronize_sched_expedited() incorrectly
>     assumes that resched_cpu() is unconditional, which it is not.  This means
>     that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
>     fails as follows (analysis by Neeraj Upadhyay):
>     
>     o    CPU1 is waiting for the expedited grace period to complete:
>     
>          sync_rcu_exp_select_cpus
>              rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
>              IPI sent to CPU5
>     
>          synchronize_sched_expedited_wait
>              ret = swait_event_timeout(
>                        rsp->expedited_wq,
>                        sync_rcu_preempt_exp_done(rnp_root),
>                        jiffies_stall);
>     
>          expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter())
>     
>     o    CPU5 handles the IPI but fails to acquire rq->lock:
>     
>          sync_sched_exp_handler
>              resched_cpu
>                  returns after failing to trylock rq->lock
>              need_resched is not set
>     
>     o    CPU5 calls rcu_idle_enter() and, because need_resched is not set,
>          goes idle (schedule() is not called).
>     
>     o    CPU1 reports an RCU stall.
>     
>     Given that resched_cpu() is used only by RCU, this commit fixes the

"is now only used by RCU", as it was created for another purpose.

>     assumption by making resched_cpu() unconditional.
>     
>     Reported-by: Neeraj Upadhyay <neeraju@...eaurora.org>
>     Suggested-by: Neeraj Upadhyay <neeraju@...eaurora.org>
>     Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>     Cc: Peter Zijlstra <peterz@...radead.org>
>     Cc: Steven Rostedt <rostedt@...dmis.org>

Acked-by: Steven Rostedt (VMware) <rostedt@...dmis.org>

-- Steve

> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index cab8c5ec128e..b2281971894c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -505,8 +505,7 @@ void resched_cpu(int cpu)
>  	struct rq *rq = cpu_rq(cpu);
>  	unsigned long flags;
>  
> -	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
> -		return;
> +	raw_spin_lock_irqsave(&rq->lock, flags);
>  	resched_curr(rq);
>  	raw_spin_unlock_irqrestore(&rq->lock, flags);
>  }
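
For anyone following along outside the kernel tree, here is a standalone
userspace sketch of the failure mode described above (pthreads, not kernel
code; every name below, such as kick_cpu_trylock() and wait_for_kick(), is
made up for illustration -- only the trylock-vs-unconditional-lock
difference mirrors the patch). A kick that silently gives up when the lock
is contended leaves the waiter stuck until its timeout fires, while the
unconditional version always lands:

/*
 * kick_cpu_trylock() models the old resched_cpu(): if the "rq lock" is
 * contended it silently does nothing, so the waiter only wakes up when
 * its timeout (the "stall") fires.  kick_cpu_locked() models the patched
 * version: it always takes the lock, so the kick is never dropped.
 */
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wait_cv = PTHREAD_COND_INITIALIZER;
static bool need_resched;

static void set_need_resched(void)
{
        pthread_mutex_lock(&wait_lock);
        need_resched = true;
        pthread_cond_signal(&wait_cv);
        pthread_mutex_unlock(&wait_lock);
}

/* Old behaviour: bail out silently if rq_lock is contended. */
static void kick_cpu_trylock(void)
{
        if (pthread_mutex_trylock(&rq_lock) != 0)
                return;                 /* the kick is silently dropped */
        set_need_resched();
        pthread_mutex_unlock(&rq_lock);
}

/* New behaviour: wait for rq_lock, so the kick always lands. */
static void kick_cpu_locked(void)
{
        pthread_mutex_lock(&rq_lock);
        set_need_resched();
        pthread_mutex_unlock(&rq_lock);
}

/* Stand-in for CPU5 holding its rq lock when the "IPI" arrives. */
static void *lock_holder(void *unused)
{
        pthread_mutex_lock(&rq_lock);
        usleep(200 * 1000);
        pthread_mutex_unlock(&rq_lock);
        return NULL;
}

/* Stand-in for synchronize_sched_expedited_wait(): wait with a timeout. */
static const char *wait_for_kick(void (*kick)(void))
{
        struct timespec deadline;
        pthread_t holder;
        int ret = 0;

        need_resched = false;
        pthread_create(&holder, NULL, lock_holder, NULL);
        usleep(50 * 1000);              /* let the holder grab rq_lock first */

        kick();                         /* "IPI" arrives while rq_lock is held */

        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;           /* stand-in for jiffies_stall */

        pthread_mutex_lock(&wait_lock);
        while (!need_resched && ret == 0)
                ret = pthread_cond_timedwait(&wait_cv, &wait_lock, &deadline);
        pthread_mutex_unlock(&wait_lock);

        pthread_join(holder, NULL);
        return ret == ETIMEDOUT ? "timed out (stall)" : "completed";
}

int main(void)
{
        printf("trylock kick:       %s\n", wait_for_kick(kick_cpu_trylock));
        printf("unconditional kick: %s\n", wait_for_kick(kick_cpu_locked));
        return 0;
}

Built with something like "gcc -pthread", the first run should hit the
one-second timeout and the second should complete immediately, which is
the same contract the patch restores for synchronize_sched_expedited().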
