Open Source and information security mailing list archives
Date: Mon, 18 Sep 2017 09:01:25 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Neeraj Upadhyay <neeraju@...eaurora.org>, josh@...htriplett.org,
	mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
	linux-kernel@...r.kernel.org, sramana@...eaurora.org,
	prsood@...eaurora.org, pkondeti@...eaurora.org,
	markivx@...eaurora.org, peterz@...radead.org
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu

On Mon, Sep 18, 2017 at 11:11:05AM -0400, Steven Rostedt wrote:
> On Sun, 17 Sep 2017 11:37:06 +0530
> Neeraj Upadhyay <neeraju@...eaurora.org> wrote:
> 
> > Hi Paul, how about replacing raw_spin_trylock_irqsave with
> > raw_spin_lock_irqsave in resched_cpu()? Are there any paths
> > in RCU code, which depend on trylock check/spinlock recursion?
> 
> It looks to me that resched_cpu() was added for nohz full sched
> balancing, but is no longer used by that. The only user is currently
> RCU. Perhaps we should change that from a trylock to a lock.

That certainly is a much simpler fix than the one I was thinking of!
So how about the following patch?

							Thanx, Paul

------------------------------------------------------------------------

commit bc43e2e7e08134e6f403ac845edcf4f85668d803
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date:   Mon Sep 18 08:54:40 2017 -0700

    sched: Make resched_cpu() unconditional
    
    The current implementation of synchronize_sched_expedited()
    incorrectly assumes that resched_cpu() is unconditional, which
    it is not.
    This means that synchronize_sched_expedited() can hang when
    resched_cpu()'s trylock fails as follows (analysis by Neeraj
    Upadhyay):
    
    o	CPU1 is waiting for expedited wait to complete:
    
    	sync_rcu_exp_select_cpus
    		rdp->exp_dynticks_snap & 0x1	// returns 1 for CPU5
    		IPI sent to CPU5
    
    	synchronize_sched_expedited_wait
    		ret = swait_event_timeout(rsp->expedited_wq,
    					  sync_rcu_preempt_exp_done(rnp_root),
    					  jiffies_stall);
    
    	expmask = 0x20, and CPU5 is in the idle path (in cpuidle_enter()).
    
    o	CPU5 handles the IPI and fails to acquire the rq lock:
    
    	Handles IPI
    		sync_sched_exp_handler
    			resched_cpu
    				returns while failing to trylock
    				acquire rq->lock
    				need_resched is not set
    
    o	CPU5 calls rcu_idle_enter() and, as need_resched is not set,
    	goes to idle (schedule() is not called).
    
    o	CPU1 reports an RCU stall.
    
    Given that resched_cpu() is used only by RCU, this commit fixes the
    assumption by making resched_cpu() unconditional.

Reported-by: Neeraj Upadhyay <neeraju@...eaurora.org>
Suggested-by: Neeraj Upadhyay <neeraju@...eaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cab8c5ec128e..b2281971894c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
-	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-		return;
+	raw_spin_lock_irqsave(&rq->lock, flags);
 	resched_curr(rq);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }