Date:	Mon, 17 Aug 2015 20:39:20 -0700
From:	John Stultz <john.stultz@...aro.org>
To:	Shaohua Li <shli@...com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	lkml <linux-kernel@...r.kernel.org>,
	Prarit Bhargava <prarit@...hat.com>,
	Richard Cochran <richardcochran@...il.com>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 8/9] clocksource: Improve unstable clocksource detection

On Mon, Aug 17, 2015 at 7:57 PM, Shaohua Li <shli@...com> wrote:
> On Mon, Aug 17, 2015 at 03:17:28PM -0700, John Stultz wrote:
>> That said, I agree the "should"s and other vague qualifiers in the
>> commit description you point out should have more specifics to back
>> things up. And I'm fine delaying this (and the follow-on) patch until
>> those details are provided.
>
> It's not something I'm guessing at; we do see the issue from time to time.
> The IPMI driver accesses some IO ports in softirq context and hogs the cpu
> for a very long time, and then the watchdog raises an alert. The false alert,
> on the other hand, has a much worse effect: it forces HPET to be used as the
> clocksource, which carries a big performance penalty. We can't even manually
> switch back to TSC, since the current interface doesn't allow it, so we can
> only reboot the system. I agree the driver should be fixed, but the watchdog
> raises a false alert, and we definitely should fix that.

I think Thomas is requesting that some of the vague terms be
quantified. Seeing the issue "from time to time" isn't super
informative. When the IPMI driver hogs the cpu "for a very long time",
how long does that actually take? You've provided the HPET frequency and
the wrapping interval on your hardware. Do these intervals all line up
properly?
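
Just to make the kind of arithmetic I'm asking for concrete (illustrative
numbers only, not necessarily what your hardware has): a 32-bit HPET main
counter running at the classic 14.318180 MHz wraps after roughly
2^32 / 14318180 ~= 300 seconds, so for a wrap itself to be the culprit the
watchdog read would have to be delayed on the order of minutes. If your
HPET's frequency or counter width differ, that number changes, which is
exactly why the concrete values matter.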

I sympathize that the "show-your-work" math problem aspect of this
request might be a little remedial and irritating, especially when the
patch fixes the problem for you. But it's important, so that later on,
when some bug crops up in nearby code, folks can easily repeat your
calculation and know the problem isn't coming from your code.

> The 1s interval is arbitrary. If you think there is a better way to fix the
> issue, please let me know.

I don't think 1s is necessarily arbitrary. Maybe not much conscious
thought was put into it, but clearly 0.001 seconds wasn't chosen, and
neither was 10 minutes, for a reason.

So given the intervals you're seeing the problem with, would a larger
max interval (say, 30 seconds) make more or less sense? What would the
tradeoffs be? (i.e. Would that exclude clocksources with faster wraps
from being used as watchdogs, with your patches?)
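
To make that tradeoff concrete, here is a rough back-of-the-envelope
sketch (plain userspace C, purely illustrative; the masks and frequencies
below are common textbook values, not taken from your machine, and the 2x
headroom margin is an arbitrary choice of mine, not something the kernel
mandates):

/* Illustrative only: does a candidate watchdog clocksource wrap slowly
 * enough for a given watchdog sampling interval? */
#include <stdio.h>
#include <stdint.h>

static double wrap_seconds(uint64_t mask, double freq_hz)
{
	return (double)(mask + 1) / freq_hz;	/* full counter period */
}

int main(void)
{
	double interval = 30.0;	/* hypothetical larger max watchdog interval */

	/* 24-bit ACPI PM timer at 3.579545 MHz: wraps in ~4.7s */
	double acpi_pm = wrap_seconds(0xffffff, 3579545.0);
	/* 32-bit HPET at 14.318180 MHz: wraps in ~300s */
	double hpet = wrap_seconds(0xffffffffULL, 14318180.0);

	printf("acpi_pm wraps in %.1fs: %s at a %.0fs interval\n",
	       acpi_pm, acpi_pm > 2 * interval ? "usable" : "excluded", interval);
	printf("hpet wraps in %.1fs: %s at a %.0fs interval\n",
	       hpet, hpet > 2 * interval ? "usable" : "excluded", interval);
	return 0;
}

With those made-up but typical numbers, a 30 second interval would rule
out the ACPI PM timer as a watchdog while HPET would still be fine; that
is the kind of tradeoff I'd like to see spelled out.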

I'm sure a good interval could be chosen with some thought, and the
rationale explained. :)

thanks
-john
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
