lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Tue, 9 Feb 2016 12:51:01 -0500
From:	Tejun Heo <tj@...nel.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Mike Galbraith <umgwanakikbuti@...il.com>,
	Michal Hocko <mhocko@...nel.org>, Jiri Slaby <jslaby@...e.cz>,
	Thomas Gleixner <tglx@...utronix.de>,
	Petr Mladek <pmladek@...e.com>, Jan Kara <jack@...e.cz>,
	Ben Hutchings <ben@...adent.org.uk>,
	Sasha Levin <sasha.levin@...cle.com>, Shaohua Li <shli@...com>,
	LKML <linux-kernel@...r.kernel.org>,
	stable <stable@...r.kernel.org>,
	Daniel Bilik <daniel.bilik@...system.cz>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: Crashes with 874bbfe600a6 in 3.18.25

Hello,

On Tue, Feb 09, 2016 at 09:04:18AM -0800, Linus Torvalds wrote:
> On Tue, Feb 9, 2016 at 8:50 AM, Tejun Heo <tj@...nel.org> wrote:
> > idk, not doing so is likely to cause subtle bugs which are difficult
> > to track down.  The problem with -stable is 874bbfe6 being backported
> > without the matching timer fix.
> 
> Well, according to this thread, even witht he timer fix the end result
> then shows odd problems, _and_ has a NO_HZ_FULL regression.

I don't know what that odd problem is indicating but it's likely we're
seeing another issue exposed by these changes or a bug during
backport, but yeah it's problematic.

> I do agree about subtle bugs, but we haven't actually seen any other
> ones than the vmstat breakage so far.

The thing with vmstat is that it's a work item which is most likely to
expose the issue as it runs constantly on all systems and we started
seeing it triggering soon after timer migration becomes more common.
I'd be surprised if we don't discover a lot more subtler ones down the
road.  Maybe it's that most of them won't trigger often enough to
matter much but it's a bit scary.

> Also, I suspect that to flush out any bugs, we might want to
> 
>  (a) actually dequeue timers and work queues that are bound to a
> particular CPU when a CPU goes down.
> 
>      Sure, we *could* make it a rule that everybody who binds a timer
> to a particular CPU should just register the cpu-down thing, but why
> make a rule that you have to make extra work? People who do per-cpu
> work should have a setup function for when a new CPU comes _up_, but
> why make people do pointless extra crap for the cpu-down case when the
> generic code could just do ti for them.

This goes the same for work items and timers.  If we want to do
explicit dequeueing or flushing of cpu-bound stuff on cpu down, we'll
have to either dedicate *_on() interfaces for correctness or introduce
a separate set of interfaces to use for optimization and correctness.
The current situation is that work itmes which are explicitly shut
down on cpu-down are correctness usages while the ones which are not
are optimization usages.  I'll try to scan through the usages and see
what the actual proportions are like.  Maybe we can get away with
declaring that _on() usages are absolute.

>  (b) maybe one of the test-bots could be encouraged to do a lot of cpu
> offlining/onlining as a stress test>
> 
> That (a) part is important in that it avoids the subtle bug where some
> timer or workqueue entry ends up being run on the wrong CPU after all,
> just because the target CPU went down.
> 
> And the (b) part would hopefully flush out things that didn't start
> things properly when a new cpu comes online.
> 
> Hmm? The above is obviously a longer-term thing and a bigger change,
> but I think we should be able to just revert 874bbfe6 without anything
> else going on, since I don't think we ever found anything else than
> vmstat that had issues.

So, how about reverting 874bbfe6 and performing random foreign
queueing during -rc's for a couple cycles so that we can at least find
out the broken ones quickly in devel branch and backport fixes as
they're found?

Thanks.

-- 
tejun

Powered by blists - more mailing lists