[<prev] [next>] [<thread-prev] [thread-next>]
Date:   Mon, 18 Sep 2017 09:01:25 -0700
From:   "Paul E. McKenney" <>
To:     Steven Rostedt <>
Cc:     Neeraj Upadhyay <>, ...
Subject: Re: Query regarding synchronize_sched_expedited and resched_cpu

On Mon, Sep 18, 2017 at 11:11:05AM -0400, Steven Rostedt wrote:
> On Sun, 17 Sep 2017 11:37:06 +0530
> Neeraj Upadhyay <> wrote:
> > Hi Paul, how about replacing raw_spin_trylock_irqsave with
> > raw_spin_lock_irqsave in resched_cpu()? Are there any paths
> > in RCU code, which depend on trylock check/spinlock recursion?
> It looks to me that resched_cpu() was added for nohz full sched
> balancing, but is no longer used by that. The only user is currently
> RCU. Perhaps we should change that from a trylock to a lock.

That certainly is a much simpler fix than the one I was thinking of!

So how about the following patch?

							Thanx, Paul


commit bc43e2e7e08134e6f403ac845edcf4f85668d803
Author: Paul E. McKenney <>
Date:   Mon Sep 18 08:54:40 2017 -0700

    sched: Make resched_cpu() unconditional

    The current implementation of synchronize_sched_expedited() incorrectly
    assumes that resched_cpu() is unconditional, which it is not.  This means
    that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
    fails as follows (analysis by Neeraj Upadhyay):

    o   CPU1 is waiting for the expedited wait to complete:

            rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
            IPI sent to CPU5
            ret = swait_event_timeout(...)

        expmask = 0x20, and CPU 5 is in the idle path (in cpuidle_enter()).

    o   CPU5 handles the IPI but fails to acquire the rq lock:

            Handles IPI
            resched_cpu() returns after failing to trylock rq->lock
            need_resched is not set

    o   CPU5 calls rcu_idle_enter() and, as need_resched is not set, goes
        idle (schedule() is not called).

    o   CPU1 reports an RCU stall.

    Given that resched_cpu() is used only by RCU, this commit fixes the
    assumption by making resched_cpu() unconditional.

    Reported-by: Neeraj Upadhyay <>
    Suggested-by: Neeraj Upadhyay <>
    Signed-off-by: Paul E. McKenney <>
    Cc: Peter Zijlstra <>
    Cc: Steven Rostedt <>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cab8c5ec128e..b2281971894c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
-	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-		return;
+	raw_spin_lock_irqsave(&rq->lock, flags);
 	resched_curr(rq);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
