Message-ID: <CA+55aFy4-tseM84CTTgCo_8KrPPA7JP8yF0kC1zJrmA2u2sq_Q@mail.gmail.com>
Date: Tue, 9 Feb 2016 09:04:18 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Tejun Heo <tj@...nel.org>
Cc: Mike Galbraith <umgwanakikbuti@...il.com>,
Michal Hocko <mhocko@...nel.org>, Jiri Slaby <jslaby@...e.cz>,
Thomas Gleixner <tglx@...utronix.de>,
Petr Mladek <pmladek@...e.com>, Jan Kara <jack@...e.cz>,
Ben Hutchings <ben@...adent.org.uk>,
Sasha Levin <sasha.levin@...cle.com>, Shaohua Li <shli@...com>,
LKML <linux-kernel@...r.kernel.org>,
stable <stable@...r.kernel.org>,
Daniel Bilik <daniel.bilik@...system.cz>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: Crashes with 874bbfe600a6 in 3.18.25
On Tue, Feb 9, 2016 at 8:50 AM, Tejun Heo <tj@...nel.org> wrote:
>
> idk, not doing so is likely to cause subtle bugs which are difficult
> to track down. The problem with -stable is 874bbfe6 being backported
> without the matching timer fix.
Well, according to this thread, even with the timer fix the end result
still shows odd problems, _and_ has a NO_HZ_FULL regression.

I do agree about subtle bugs, but we haven't actually seen any other
than the vmstat breakage so far.
Also, I suspect that to flush out any bugs, we might want to

 (a) actually dequeue timers and work items that are bound to a
particular CPU when that CPU goes down.

     Sure, we *could* make it a rule that everybody who binds a timer
to a particular CPU has to register the cpu-down callback too, but why
make a rule that creates extra work? People who do per-cpu work need a
setup function for when a new CPU comes _up_, but why make people do
pointless extra crap for the cpu-down case when the generic code could
just do it for them (see the sketch below).

 (b) maybe one of the test-bots could be encouraged to do a lot of cpu
offlining/onlining as a stress test.
That (a) part is important in that it avoids the subtle bug where some
timer or workqueue entry ends up being run on the wrong CPU after all,
just because the target CPU went down.
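
To make (a) concrete: today, anybody who wants to be careful has to
hand-roll the cpu-down side with a notifier. A minimal sketch of the
kind of per-driver boilerplate we'd get rid of (the names are made up,
but the pattern is the usual cpu notifier dance):

    #include <linux/cpu.h>
    #include <linux/notifier.h>
    #include <linux/percpu.h>
    #include <linux/workqueue.h>

    static DEFINE_PER_CPU(struct delayed_work, my_work);

    static int my_cpu_callback(struct notifier_block *nb,
                               unsigned long action, void *hcpu)
    {
            unsigned int cpu = (unsigned long)hcpu;

            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_ONLINE:
                    /*
                     * The cpu-up setup everybody needs anyway
                     * (assume INIT_DELAYED_WORK was done at boot).
                     */
                    schedule_delayed_work_on(cpu, &per_cpu(my_work, cpu), HZ);
                    break;
            case CPU_DOWN_PREPARE:
                    /* the cpu-down leg the generic code could do for us */
                    cancel_delayed_work_sync(&per_cpu(my_work, cpu));
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block my_cpu_notifier = {
            .notifier_call = my_cpu_callback,
    };

    /* somewhere in driver init: register_cpu_notifier(&my_cpu_notifier); */

With (a) done in the core, the whole CPU_DOWN_PREPARE leg just goes away.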
And the (b) part would hopefully flush out code that doesn't set
things up properly when a new cpu comes online.
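
And (b) doesn't need anything fancy - just hammer the sysfs online
files in a loop while running a per-cpu workqueue load. Something like
this trivial user-space hammer (cpu range hard-coded, purely
illustrative):

    #include <stdio.h>
    #include <unistd.h>

    /* Toggle /sys/devices/system/cpu/cpuN/online. Needs root. */
    static void set_online(int cpu, int on)
    {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/online", cpu);
            f = fopen(path, "w");
            if (!f)
                    return; /* cpu0 typically can't be offlined */
            fprintf(f, "%d\n", on);
            fclose(f);
    }

    int main(void)
    {
            int iter, cpu;

            for (iter = 0; iter < 1000; iter++) {
                    for (cpu = 1; cpu < 4; cpu++) { /* assumes cpus 1-3 exist */
                            set_online(cpu, 0);
                            usleep(10000);
                            set_online(cpu, 1);
                    }
            }
            return 0;
    }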
Hmm? The above is obviously a longer-term thing and a bigger change,
but I think we should be able to just revert 874bbfe6 on its own, since
I don't think we ever found anything other than vmstat that had issues.
Linus