Message-ID: <20141213073634.GD32572@gmail.com>
Date: Sat, 13 Dec 2014 08:36:34 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Dave Jones <davej@...hat.com>, Chris Mason <clm@...com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Dâniel Fraga <fragabr@...il.com>,
Sasha Levin <sasha.levin@...cle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [PATCH] sched: Fix lost reschedule in __cond_resched()

* Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> I'm also not sure if the bug ever happens with preemption
> disabled. Sasha, was that you who reported that you cannot
> reproduce it without preemption? It strikes me that there's a
> race condition in __cond_resched() wrt preemption, for example:
> we do
>
> __preempt_count_add(PREEMPT_ACTIVE);
> __schedule();
> __preempt_count_sub(PREEMPT_ACTIVE);
>
> and in between the __schedule() and __preempt_count_sub(), if
> an interrupt comes in and wakes up some important process, it
> won't reschedule (because preemption is active), but then we
> enable preemption again and don't check whether we should
> reschedule (again), and we just go on our merry ways.

Indeed, that's a really good find regardless of whether it's the
source of these lockups - the (untested) patch below ought to
cure that.
> Now, I don't see how that could really matter for a long time -
> returning to user space will check need_resched, and sleeping
> will obviously force a reschedule anyway, so these kinds of
> races should at most delay things by just a tiny amount, but
> maybe there is some case where we screw up in a bigger way. So
> I do *not* believe that the one in __cond_resched() matters,
> but I'm giving it as an example of the kind of things that
> could go wrong.

(as you later note) NOHZ is somewhat special in this regard,
because there we try really hard not to run anything
periodically, so a lost reschedule will matter more.

But ... I'd be surprised if this patch made a difference: it
should normally not be possible to go idle with tasks on the
runqueue (even with this bug present), and with at least one busy
task on the CPU we get the regular scheduler tick which ought to
hide such latencies.

It's nevertheless a good thing to fix; I'm just not sure it's the
root cause of the lockup observed here.

Thanks,

	Ingo
--

Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bb398c0c5f08..532809aa0544 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4207,6 +4207,8 @@ static void __cond_resched(void)
 	__preempt_count_add(PREEMPT_ACTIVE);
 	__schedule();
 	__preempt_count_sub(PREEMPT_ACTIVE);
+	if (need_resched())
+		__schedule();
 }
 
 int __sched _cond_resched(void)
--