Message-ID: <20190307165123.GE7535@tassilo.jf.intel.com>
Date: Thu, 7 Mar 2019 08:51:23 -0800
From: Andi Kleen <ak@...ux.intel.com>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Jiri Olsa <jolsa@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Jonas Rabenstein <jonas.rabenstein@...dium.uni-erlangen.de>,
Nageswara R Sastry <nasastry@...ibm.com>,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Subject: Re: [PATCHv2 5/8] perf tools: Get precise_ip from the pmu config
On Thu, Mar 07, 2019 at 04:35:00PM +0100, Jiri Olsa wrote:
> On Tue, Mar 05, 2019 at 08:40:17AM -0800, Andi Kleen wrote:
> > On Tue, Mar 05, 2019 at 05:28:54PM +0100, Jiri Olsa wrote:
> > > On Tue, Mar 05, 2019 at 08:13:19AM -0800, Andi Kleen wrote:
> > > > On Tue, Mar 05, 2019 at 04:25:33PM +0100, Jiri Olsa wrote:
> > > > > Get the precise_ip field from the perf_pmu::max_precise
> > > > > config read from sysfs. If it's not available, fall
> > > > > back to the current detection function.
> > > >
> > > > max_precise depends on the event. This won't work for all
> > > > events. For example only instructions and cycles support
> > > > ppp
> > >
> > > I'm getting precise_ip=3 on mem-* events as well, that's why I
> > > was fixing this... now it's not working for any event
> >
> > I don't think it means anything for mem-*
> >
> > There's some support for it on Goldmont Plus for other events,
> > but it doesn't support mem-*. On big core it's only
> > for instructions and cycles, all implemented with the same
> > event. All other PEBS events only have two levels,
> > switching between the two IPs.
>
> ok, so how about this, it's the change I posted merged with the patch
It still seems like a hack, even though I don't know of a case that
would break now. It would be better to have the precise probing
in the open retry loop instead of trying to reinvent it here.
-Andi