Message-ID: <37D7C6CF3E00A74B8858931C1DB2F077536F079F@SHSMSX103.ccr.corp.intel.com>
Date: Mon, 22 May 2017 16:55:47 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: "mingo@...hat.com" <mingo@...hat.com>,
"eranian@...gle.com" <eranian@...gle.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alexander.shishkin@...ux.intel.com"
<alexander.shishkin@...ux.intel.com>,
"acme@...hat.com" <acme@...hat.com>,
"jolsa@...hat.com" <jolsa@...hat.com>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"vincent.weaver@...ne.edu" <vincent.weaver@...ne.edu>,
"ak@...ux.intel.com" <ak@...ux.intel.com>
Subject: RE: [PATCH 1/2] perf/x86/intel: enable CPU ref_cycles for GP counter
> On Fri, May 19, 2017 at 10:06:21AM -0700, kan.liang@...el.com wrote:
> > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index
> > 580b60f..e8b2326 100644
> > --- a/arch/x86/events/core.c
> > +++ b/arch/x86/events/core.c
> > @@ -101,6 +101,10 @@ u64 x86_perf_event_update(struct perf_event *event)
> > delta = (new_raw_count << shift) - (prev_raw_count << shift);
> > delta >>= shift;
> >
> > + /* Correct the count number if applying ref_cycles replacement */
> > + if (!is_sampling_event(event) &&
> > + (hwc->flags & PERF_X86_EVENT_REF_CYCLES_REP))
> > + delta *= x86_pmu.ref_cycles_factor;
>
> That condition seems wrong, why only correct for !sampling events?
>
For sampling events, there are only two cases: fixed-frequency mode and fixed-period mode.
- In fixed-frequency mode, nothing needs to be done here, because the adaptive
frequency algorithm compensates automatically.
- In fixed-period mode, the period has already been adjusted in
ref_cycles_rep().
Therefore, only !sampling events need the correction here.
> > local64_add(delta, &event->count);
> > local64_sub(delta, &hwc->period_left);
> >
>
>
> > @@ -934,6 +938,21 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
> > for (i = 0; i < n; i++) {
> > e = cpuc->event_list[i];
> > e->hw.flags |= PERF_X86_EVENT_COMMITTED;
> > +
> > + /*
> > + * 0x0300 is pseudo-encoding for REF_CPU_CYCLES.
> > + * It indicates that fixed counter 2 should be used.
> > + *
> > + * If fixed counter 2 is occupied and a GP counter
> > + * is assigned, an alternative event which can be
> > + * counted in GP counter will be used to replace
> > + * the pseudo-encoding REF_CPU_CYCLES event.
> > + */
> > + if (((e->hw.config & X86_RAW_EVENT_MASK) == 0x0300) &&
> > + (assign[i] < INTEL_PMC_IDX_FIXED) &&
> > + x86_pmu.ref_cycles_rep)
> > + x86_pmu.ref_cycles_rep(e);
> > +
> > if (x86_pmu.commit_scheduling)
> > + x86_pmu.commit_scheduling(cpuc, i, assign[i]);
> > }
>
> This looks dodgy, this is the branch were we managed to schedule all events.
> Why would we need to consider anything here?
>
> I was expecting a retry if there are still unscheduled events and one of the
> events was our 0x0300 event. In that case you have to reset the event and
> retry the whole scheduling thing.
Will do it.
Thanks,
Kan