Message-ID: <37D7C6CF3E00A74B8858931C1DB2F077017C7149@SHSMSX103.ccr.corp.intel.com>
Date: Wed, 15 Apr 2015 17:48:39 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"acme@...radead.org" <acme@...radead.org>,
"eranian@...gle.com" <eranian@...gle.com>,
"andi@...stfloor.org" <andi@...stfloor.org>
Subject: RE: [PATCH V6 3/6] perf, x86: large PEBS interrupt threshold
> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@...radead.org]
> Sent: Wednesday, April 15, 2015 1:15 PM
> To: Liang, Kan
> Cc: linux-kernel@...r.kernel.org; mingo@...nel.org;
> acme@...radead.org; eranian@...gle.com; andi@...stfloor.org
> Subject: Re: [PATCH V6 3/6] perf, x86: large PEBS interrupt threshold
>
> On Thu, Apr 09, 2015 at 12:37:43PM -0400, Kan Liang wrote:
> > This patch also makes AUTO_RELOAD conditional on large PEBS. Auto
> > reload is only enabled with a fixed period and large PEBS.
>
> What's a large PEBS?
>
It should read "large PEBS available", i.e.
!(event->attr.sample_type & ~PEBS_FREERUNNING_FLAGS)
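In other words, the event requests no sample fields beyond what a single
PEBS record can supply, so no PMI is needed per sample. A small stand-alone
illustration in user space, built against the uapi <linux/perf_event.h>
header (the helper name and the copy of PEBS_FREERUNNING_FLAGS are mine,
taken from the hunk below, not kernel code):

	#include <linux/perf_event.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* same set as the PEBS_FREERUNNING_FLAGS define in the patch */
	#define PEBS_FREERUNNING_FLAGS \
		(PERF_SAMPLE_IP | PERF_SAMPLE_TID | PERF_SAMPLE_ADDR | \
		 PERF_SAMPLE_ID | PERF_SAMPLE_CPU | PERF_SAMPLE_STREAM_ID | \
		 PERF_SAMPLE_DATA_SRC | PERF_SAMPLE_IDENTIFIER | \
		 PERF_SAMPLE_TRANSACTION)

	/* "large PEBS available": nothing requested outside the PEBS record */
	static bool large_pebs_available(const struct perf_event_attr *attr)
	{
		return !(attr->sample_type & ~PEBS_FREERUNNING_FLAGS);
	}

	int main(void)
	{
		struct perf_event_attr attr = {
			.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID,
		};

		/* 1: the PEBS threshold can be raised for this event */
		printf("%d\n", large_pebs_available(&attr));

		/* callchains need a PMI per sample, so: 0 */
		attr.sample_type |= PERF_SAMPLE_CALLCHAIN;
		printf("%d\n", large_pebs_available(&attr));
		return 0;
	}
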
> > +++ b/arch/x86/kernel/cpu/perf_event.h
> > @@ -87,6 +87,17 @@ struct amd_nb {
> > #define MAX_PEBS_EVENTS 8
> >
> > /*
> > + * Flags PEBS can handle without a PMI.
> > + *
> > + * TID can only be handled by flushing at context switch.
> > + */
> > +#define PEBS_FREERUNNING_FLAGS \
> > + (PERF_SAMPLE_IP | PERF_SAMPLE_TID | PERF_SAMPLE_ADDR | \
> > +	PERF_SAMPLE_ID | PERF_SAMPLE_CPU | PERF_SAMPLE_STREAM_ID | \
> > + PERF_SAMPLE_DATA_SRC | PERF_SAMPLE_IDENTIFIER | \
> > + PERF_SAMPLE_TRANSACTION)
> > +
> > +/*
> > * A debug store configuration.
> > *
> > * We only support architectures that use 64bit fields.
> > diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
> > index 0a7b5ca..6c8579a 100644
> > --- a/arch/x86/kernel/cpu/perf_event_intel.c
> > +++ b/arch/x86/kernel/cpu/perf_event_intel.c
> > @@ -2306,7 +2306,9 @@ static int intel_pmu_hw_config(struct perf_event *event)
> > return ret;
> >
> > if (event->attr.precise_ip) {
> > - if (!event->attr.freq)
> > +		/* only enable auto reload when fixed period and large PEBS */
> > + if (!event->attr.freq &&
> > +		    !(event->attr.sample_type & ~PEBS_FREERUNNING_FLAGS))
> > 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
> > if (x86_pmu.pebs_aliases)
> > x86_pmu.pebs_aliases(event);
>
> I suspect you meant the above change right?
Yes.
>
> But this negates part of the benefit of the auto reload; where previously it
> saved an MSR write for pretty much all PEBS usage, it now becomes a burden
> for pretty much everyone.
>
> Why cannot we retain the win for all PEBS users?
This change was intended to address your earlier comments:
https://lkml.org/lkml/2015/3/30/294
Yes, we can retain the win. To do that, I think we need to introduce another
flag such as PERF_X86_EVENT_LARGE_PEBS and check it in pebs_is_enabled(),
or just keep the previous V5 patch unchanged.
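For illustration, roughly what the extra-flag variant would look like, as an
untested sketch on top of the quoted hw_config hunk (PERF_X86_EVENT_LARGE_PEBS
is only the name suggested above; the exact flag bit and placement are
hypothetical, not merged code):

	if (event->attr.precise_ip) {
		/* keep the MSR-write win for every fixed-period PEBS event */
		if (!event->attr.freq) {
			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
			/*
			 * ... but only allow the large PEBS threshold when
			 * no sampled field needs a PMI per record.
			 */
			if (!(event->attr.sample_type & ~PEBS_FREERUNNING_FLAGS))
				event->hw.flags |= PERF_X86_EVENT_LARGE_PEBS;
		}
		if (x86_pmu.pebs_aliases)
			x86_pmu.pebs_aliases(event);
	}

pebs_is_enabled()/the threshold setup would then test PERF_X86_EVENT_LARGE_PEBS
instead of PERF_X86_EVENT_AUTO_RELOAD, so plain PEBS users still get auto reload.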
Thanks,
Kan