Message-ID: <20170530185957.jxl5tfnqfyjot75x@hirez.programming.kicks-ass.net>
Date: Tue, 30 May 2017 20:59:57 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andi Kleen <ak@...ux.intel.com>
Cc: Stephane Eranian <eranian@...gle.com>,
Vince Weaver <vincent.weaver@...ne.edu>,
"Liang, Kan" <kan.liang@...el.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alexander.shishkin@...ux.intel.com"
<alexander.shishkin@...ux.intel.com>,
"acme@...hat.com" <acme@...hat.com>,
"jolsa@...hat.com" <jolsa@...hat.com>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
"tglx@...utronix.de" <tglx@...utronix.de>
Subject: Re: [PATCH 1/2] perf/x86/intel: enable CPU ref_cycles for GP counter
On Tue, May 30, 2017 at 10:51:51AM -0700, Andi Kleen wrote:
> On Tue, May 30, 2017 at 07:40:14PM +0200, Peter Zijlstra wrote:
> > On Tue, May 30, 2017 at 10:22:08AM -0700, Andi Kleen wrote:
> > > > > You would only need a single one per system however, not one per CPU.
> > > > > RCU already tracks all the CPUs, all we need is a single NMI watchdog
> > > > > that makes sure RCU itself does not get stuck.
> > > > >
> > > > > So we just have to find a single watchdog somewhere that can trigger
> > > > > NMI.
> > > >
> > > > But then you have to IPI broadcast the NMI, which is less than ideal.
> > >
> > > Only when the watchdog times out to print the backtraces.
> >
> > The current NMI watchdog has a per-cpu state. So that means either doing
> > for_all_cpu() loops or IPI broadcasts from the NMI tickle. Neither is
> > something you really want.
>
> The normal case is that the RCU stall only prints the backtrace for
> the CPU that stalled.
>
> The extra NMI watchdog should only kick in when RCU is broken too,
> or the CPU that owns the stall detection stalled too, which should be rare.
Well, if we can drive the RCU watchdog from NMI (using RTC/HPET or
whatever) it might be good enough, provided we can convince ourselves
there are no other holes in it.

The obvious hole is a CPU being locked up while RCU doesn't consider
it 'interesting'.
> In this case it's reasonable to print backtrace for all, like sysrq would do.
> In theory could try to figure out what the current CPU that would own stall
> detection is, but it's probably safer to do it for all.
>
> BTW there's an alternative solution in cycling the NMI watchdog over
> all available CPUs. Then it would eventually cover all. But that's
> less real time friendly than relying on RCU.
I don't think we need to worry too much about the watchdog being rt
friendly. Robustness is the thing that worries me most.
> > > > RCU doesn't have that problem because the quiescent state is a global
> > > > thing. CPU progress, which is what the NMI watchdog tests, is very much
> > > > per logical CPU though.
> > >
> > > RCU already has a CPU stall detector. It should work (and usually
> > > triggers before the NMI watchdog in my experience unless the
> > > whole system is dead)
> >
> > It only goes look at CPU state once it detects the global QS is stalled
> > I think. But I've not had much luck with the RCU one -- although I think
> > its been improved since I last had a hard problem.
>
> I've seen it trigger.
Oh, I've seen it trigger plenty... just not when I needed it, and/or it
didn't contain useful bits.