Date:	Sat, 13 Dec 2014 08:36:34 +0100
From:	Ingo Molnar <>
To:	Linus Torvalds <>
Cc:	Dave Jones <>, Chris Mason <>,
	Mike Galbraith <>,
	Peter Zijlstra <>,
	Dâniel Fraga <>,
	Sasha Levin <>,
	"Paul E. McKenney" <>,
	Linux Kernel Mailing List <>
Subject: [PATCH] sched: Fix lost reschedule in __cond_resched()

* Linus Torvalds <> wrote:

> I'm also not sure if the bug ever happens with preemption 
> disabled. Sasha, was that you who reported that you cannot 
> reproduce it without preemption? It strikes me that there's a 
> race condition in __cond_resched() wrt preemption, for example: 
> we do
>         __preempt_count_add(PREEMPT_ACTIVE);
>         __schedule();
>         __preempt_count_sub(PREEMPT_ACTIVE);
> and in between the __schedule() and __preempt_count_sub(), if 
> an interrupt comes in and wakes up some important process, it 
> won't reschedule (because preemption is active), but then we 
> enable preemption again and don't check whether we should 
> reschedule (again), and we just go on our merry ways.

Indeed, that's a really good find regardless of whether it's the 
source of these lockups - the (untested) patch below ought to 
cure that.

> Now, I don't see how that could really matter for a long time - 
> returning to user space will check need_resched, and sleeping 
> will obviously force a reschedule anyway, so these kinds of 
> races should at most delay things by just a tiny amount, but 
> maybe there is some case where we screw up in a bigger way. So 
> I do *not* believe that the one in __cond_resched() matters, 
> but I'm giving it as an example of the kind of things that 
> could go wrong.

(as you later note) NOHZ is somewhat special in this regard, 
because there we try really hard not to run anything 
periodically, so a lost reschedule will matter more.

But ... I'd be surprised if this patch made a difference: it 
should normally not be possible to go idle with tasks on the 
runqueue (even with this bug present), and with at least one busy 
task on the CPU we get the regular scheduler tick which ought to 
hide such latencies.

It's nevertheless a good thing to fix; I'm just not sure it's 
the root cause of the observed lockup here.




Reported-by: Linus Torvalds <> 

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bb398c0c5f08..532809aa0544 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4207,6 +4207,8 @@ static void __cond_resched(void)
 	__preempt_count_add(PREEMPT_ACTIVE);
 	__schedule();
 	__preempt_count_sub(PREEMPT_ACTIVE);
+	if (need_resched())
+		__schedule();
 }
 
 int __sched _cond_resched(void)