Message-ID: <20160708220047.GK30909@twins.programming.kicks-ass.net>
Date: Sat, 9 Jul 2016 00:00:47 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jiri Olsa <jolsa@...hat.com>
Cc: mingo@...nel.org, acme@...nel.org, linux-kernel@...r.kernel.org,
andi@...stfloor.org, eranian@...gle.com, jolsa@...nel.org,
torvalds@...ux-foundation.org, davidcc@...gle.com,
alexander.shishkin@...ux.intel.com, namhyung@...nel.org,
kan.liang@...el.com, khandual@...ux.vnet.ibm.com
Subject: Re: [RFC][PATCH 1/7] perf/x86/intel: Rework the large PEBS setup code
On Fri, Jul 08, 2016 at 06:36:16PM +0200, Jiri Olsa wrote:
> On Fri, Jul 08, 2016 at 03:31:00PM +0200, Peter Zijlstra wrote:
>
> SNIP
>
> >  	/*
> > -	 * When the event is constrained enough we can use a larger
> > -	 * threshold and run the event with less frequent PMI.
> > +	 * Use auto-reload if possible to save a MSR write in the PMI.
> > +	 * This must be done in pmu::start(), because PERF_EVENT_IOC_PERIOD.
> >  	 */
> > -	if (hwc->flags & PERF_X86_EVENT_FREERUNNING) {
> > -		threshold = ds->pebs_absolute_maximum -
> > -			x86_pmu.max_pebs_events * x86_pmu.pebs_record_size;
> > -
> > -		if (first_pebs)
> > -			perf_sched_cb_inc(event->ctx->pmu);
> > -	} else {
> > -		threshold = ds->pebs_buffer_base + x86_pmu.pebs_record_size;
> > -
> > -		/*
> > -		 * If not all events can use larger buffer,
> > -		 * roll back to threshold = 1
> > -		 */
> > -		if (!first_pebs &&
> > -		    (ds->pebs_interrupt_threshold > threshold))
> > -			perf_sched_cb_dec(event->ctx->pmu);
> > -	}
>
> hum, the original code switched back the perf_sched_cb,
> in case a !freerunning event was detected.. I don't see it
> in the new code.. just the threshold update
> +static inline bool pebs_needs_sched_cb(struct cpu_hw_events *cpuc)
> +{
> +	return cpuc->n_pebs && (cpuc->n_pebs == cpuc->n_large_pebs);
> +}
> +static void intel_pmu_pebs_add(struct perf_event *event)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	struct hw_perf_event *hwc = &event->hw;
> +	bool needs_cb = pebs_needs_sched_cb(cpuc);
> +
> +	cpuc->n_pebs++;
> +	if (hwc->flags & PERF_X86_EVENT_FREERUNNING)
> +		cpuc->n_large_pebs++;
> +
> +	if (!needs_cb && pebs_needs_sched_cb(cpuc))
> +		perf_sched_cb_inc(event->ctx->pmu);
Ah, you're saying this,
> +	pebs_update_threshold(cpuc);
> +}
> +static void intel_pmu_pebs_del(struct perf_event *event)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	struct hw_perf_event *hwc = &event->hw;
> +	bool needs_cb = pebs_needs_sched_cb(cpuc);
> +
> +	cpuc->n_pebs--;
> +	if (hwc->flags & PERF_X86_EVENT_FREERUNNING)
> +		cpuc->n_large_pebs--;
> +
> +	if (needs_cb && !pebs_needs_sched_cb(cpuc))
> +		perf_sched_cb_dec(event->ctx->pmu);
and this, should also have something like
	if (!needs_cb && pebs_needs_sched_cb(cpuc))
		perf_sched_cb_inc(event->ctx->pmu);
Because the event we just removed was the one inhibiting FREERUNNING and
we can now let it rip again.
Yes, you're right. Let me try and see if I can make that better.
Thanks!
> +
> +	if (cpuc->n_pebs)
> +		pebs_update_threshold(cpuc);
> +}