Message-ID: <CA+55aFwZufdV5Q7Wm+b2F8KurtgXsJ_eNe9b6_TOSUhuW_GfSg@mail.gmail.com>
Date: Fri, 6 Mar 2015 11:05:28 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Jason Low <jason.low2@...com>
Cc: Davidlohr Bueso <dave@...olabs.net>,
Ingo Molnar <mingo@...nel.org>,
Sasha Levin <sasha.levin@...cle.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Dave Jones <davej@...emonkey.org.uk>
Subject: Re: sched: softlockups in multi_cpu_stop
On Fri, Mar 6, 2015 at 10:57 AM, Jason Low <jason.low2@...com> wrote:
>
> Right, the can_spin_on_owner() was originally added to the mutex
> spinning code for optimization purposes, particularly so that we can
> avoid adding the spinner to the OSQ only to find that it doesn't need to
> spin. Whether this function returns an accurate value should really only
> affect performance, so yes, lockups due to this seem surprising.
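
[For illustration only, a minimal userspace sketch of that pre-check;
the names struct my_mutex, struct my_task and on_cpu are invented for
the example, and this only roughly mirrors what mutex_can_spin_on_owner()
does in kernel/locking/mutex.c:

#include <stdatomic.h>
#include <stdbool.h>

struct my_task {
	atomic_bool on_cpu;		/* kernel analogue: task_struct::on_cpu */
};

struct my_mutex {
	_Atomic(struct my_task *) owner;	/* NULL when unlocked */
};

/*
 * Cheap check before joining the spinner queue (the OSQ): spinning
 * only pays off if the lock owner is currently running on another
 * CPU and is therefore likely to release the lock soon.  Otherwise
 * the would-be spinner should just go to sleep.
 */
static bool can_spin_on_owner(struct my_mutex *m)
{
	struct my_task *owner;

	owner = atomic_load_explicit(&m->owner, memory_order_acquire);
	if (!owner)
		return true;	/* lock may be about to go free */
	return atomic_load_explicit(&owner->on_cpu, memory_order_relaxed);
}
]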
Well, softlockups aren't about "correct behavior". They are about
certain things not happening in a timely manner.
Clearly the mutex code now tries to hold on to the CPU too aggressively.
At some point people need to admit that busy-looping isn't always a
good idea. Especially if
(a) we could idle the core instead
(b) the tuning has been done based on some special-purpose benchmark
that is likely not realistic
(c) we get reports from people that it causes problems.
In other words: Let's just undo that excessive busy-looping. The
performance numbers were dubious to begin with. Real scalability comes
from fixing the locking, not from trying to play games with the locks
themselves. Particularly games that then cause problems.
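
[To make the point concrete, one way to bound the spinning, as a
userspace sketch rather than a patch; bounded_lock() and SPIN_LIMIT
are invented for the example:

#include <pthread.h>

enum { SPIN_LIMIT = 100 };	/* illustrative bound, not a tuned value */

/*
 * Spin briefly in case the holder releases the lock soon, but once
 * the bound is hit, block instead of burning the CPU indefinitely.
 */
static void bounded_lock(pthread_mutex_t *m)
{
	for (int i = 0; i < SPIN_LIMIT; i++)
		if (pthread_mutex_trylock(m) == 0)
			return;
	pthread_mutex_lock(m);	/* give the core back; sleep until free */
}
]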
Linus