Date:   Mon, 12 Apr 2021 12:57:22 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, john.stultz@...aro.org,
        sboyd@...nel.org, corbet@....net, Mark.Rutland@....com,
        maz@...nel.org, kernel-team@...com, neeraju@...eaurora.org,
        ak@...ux.intel.com
Subject: Re: [PATCH v7 clocksource 3/5] clocksource: Check per-CPU clock
 synchronization when marked unstable

On Mon, Apr 12, 2021 at 08:54:03PM +0200, Thomas Gleixner wrote:
> Paul,
> 
> On Mon, Apr 12 2021 at 11:20, Paul E. McKenney wrote:
> > On Mon, Apr 12, 2021 at 03:08:16PM +0200, Thomas Gleixner wrote:
> >> The reason for irqsave is again historical AFAICT and nobody bothered to
> >> clean it up. spin_lock_bh() should be sufficient to serialize against
> >> the watchdog timer, though I haven't looked at all possible scenarios.
> >
> > Though if BH is disabled, there is not so much advantage to
> > invoking it from __clocksource_watchdog_kthread().  Might as
> > well just invoke it directly from clocksource_watchdog().
> >
> >> > 2.	Invoke clocksource_verify_percpu() from its original
> >> > 	location in clocksource_watchdog(), just before the call to
> >> > 	__clocksource_unstable().  This relies on the fact that
> >> > 	clocksource_watchdog() acquires watchdog_lock without
> >> > 	disabling interrupts.
> >> 
> >> That should be fine, but this might cause the softirq to 'run' for a
> >> very long time which is not pretty either.
> >> 
> >> Aside of that, do we really need to check _all_ online CPUs? What you
> >> are trying to figure out is whether the wreckage is CPU local or global,
> >> right?
> >> 
> >> Wouldn't a shirt-sleeve approach of just querying _one_ CPU be good
> >> enough? Either the other CPU has the same wreckage, then it's global or
> >> it hasn't which points to a per CPU local issue.
> >> 
> >> Sure it does not catch the case where a subset (>1) of all CPUs is
> >> affected, but I'm not seeing how that really buys us anything.
> >
> > Good point!  My thought is to randomly pick eight CPUs to keep the
> > duration reasonable while having a good chance of hitting "interesting"
> > CPU choices in multicore and multisocket systems.
> >
> > However, if a hard-to-reproduce problem occurred, it would be good to take
> > the hit and scan all the CPUs.  Additionally, there are some workloads
> > for which the switch from TSC to HPET is fatal anyway due to increased
> > overhead.  For these workloads, the full CPU scan is no additional pain.
> >
> > So I am thinking in terms of a default that probes eight randomly selected
> > CPUs without worrying about duplicates (as in there would be some chance
> > that fewer CPUs would actually be probed), but with a boot-time flag
> > that does all CPUs.  I would add the (default) random selection as a
> > separate patch.
> 
> You can't do without making it complex, right? Keep it simple is not an
> option for an RCU hacker it seems :)

But it was simple!  It just hit all the CPUs.

However, you (quite rightly) pointed out that this simple approach had
a few shortcomings.  ;-)
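
For illustration, a rough sketch of the eight-random-CPUs selection
described above.  The cpumask, the boot-time flag, and the function name
below are placeholders of mine, not code from the posted series:

/*
 * Illustrative sketch only: verify_all_cpus, cpus_chosen, and this
 * function name are placeholders, not taken from the actual patches.
 */
#include <linux/cpumask.h>
#include <linux/prandom.h>

static bool verify_all_cpus;		/* hypothetical boot-time override */
static struct cpumask cpus_chosen;	/* CPUs selected for verification */

static void clocksource_verify_choose_cpus(void)
{
	int cpu, i;

	cpumask_clear(&cpus_chosen);

	/* Boot-time override: probe every online CPU. */
	if (verify_all_cpus) {
		cpumask_copy(&cpus_chosen, cpu_online_mask);
		return;
	}

	/*
	 * Default: pick eight online CPUs at random.  Duplicates are
	 * tolerated, so fewer than eight distinct CPUs may be probed.
	 */
	for (i = 0; i < 8; i++) {
		cpu = prandom_u32() % nr_cpu_ids;
		cpu = cpumask_next(cpu - 1, cpu_online_mask);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(cpu_online_mask);
		cpumask_set_cpu(cpu, &cpus_chosen);
	}
}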

> > I will send a new series out later today, Pacific Time.
> 
> Can you do me a favour and send it standalone and not as yet another
> reply to this existing thread maze. A trivial lore link to the previous
> version gives enough context.

Will do!

Of course, it turns out that lockdep also doesn't like waited-on
smp_call_function_single() invocations from timer handlers,
so I am currently looking at other options for dealing with that
potential use-after-free.  I am starting to like the looks of "only set
CLOCK_SOURCE_VERIFY_PERCPU on statically allocated clocksource structures
and let KASAN enforce this restriction", but I have not quite given up
on making it more general.
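
For reference, a minimal sketch of the waited-on cross-call pattern at
issue.  The helper and variable names are mine and this is not the actual
clocksource_verify_percpu() implementation:

/*
 * Illustrative sketch only; names are made up and this is not the real
 * clocksource_verify_percpu() code.
 */
#include <linux/clocksource.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

static u64 csnow_remote;

static void clocksource_verify_one_cpu(void *csin)
{
	struct clocksource *cs = csin;

	/* Runs on the target CPU; dereferences cs, so cs must still exist. */
	csnow_remote = cs->read(cs);
}

static void clocksource_verify_percpu_sketch(struct clocksource *cs)
{
	u64 csnow_begin, csnow_end;
	int cpu;

	/* The proposal would restrict this loop to the chosen subset. */
	for_each_online_cpu(cpu) {
		csnow_begin = cs->read(cs);
		/*
		 * Waited-on cross-call: acceptable from kthread/process
		 * context, but lockdep complains when this is attempted
		 * from a timer (softirq) handler.  And because the
		 * callback dereferences cs, a dynamically allocated
		 * clocksource freed in the meantime would be a
		 * use-after-free, hence the idea of allowing
		 * CLOCK_SOURCE_VERIFY_PERCPU only on statically
		 * allocated clocksource structures.
		 */
		smp_call_function_single(cpu, clocksource_verify_one_cpu,
					 cs, 1);
		csnow_end = cs->read(cs);
		/* Compare csnow_remote against [csnow_begin, csnow_end]. */
	}
}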

							Thanx, Paul
