Message-ID: <d23c9f80-b73e-693a-f4ab-507e4108c46b@linux.intel.com>
Date: Wed, 10 Apr 2019 09:57:50 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...hat.com, linux-kernel@...r.kernel.org, tglx@...utronix.de,
acme@...nel.org, jolsa@...nel.org, eranian@...gle.com,
alexander.shishkin@...ux.intel.com, ak@...ux.intel.com
Subject: Re: [PATCH 1/2] perf/x86/intel: Support adaptive PEBS for fixed
counters
On 4/10/2019 3:41 AM, Peter Zijlstra wrote:
> On Tue, Apr 09, 2019 at 06:09:59PM -0700, kan.liang@...ux.intel.com wrote:
>> From: Kan Liang <kan.liang@...ux.intel.com>
>>
>> Fixed counters can also generate an adaptive PEBS record if the
>> corresponding bit in IA32_FIXED_CTR_CTRL is set.
>> Otherwise, only a basic record is generated.
>>
>> Unconditionally set the bit when PEBS is enabled on fixed counters.
>> Let MSR_PEBS_CFG decide which format of PEBS record should be generated.
>> It is not harmful to leave the bit set.
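(For readers following along: a minimal sketch of the bit layout described
above. This is illustrative userspace-style code, not the kernel
implementation; the shift arithmetic mirrors the hunk quoted below, and
fixed_ctrl_bits() is a hypothetical helper.)

    #include <stdint.h>

    #define ICL_FIXED_0_ADAPTIVE    (1ULL << 32)

    /*
     * Each fixed counter idx owns a 4-bit control field in
     * IA32_FIXED_CTR_CTRL; its per-counter ADAPTIVE bit sits at
     * position 32 + 4 * idx. Compute the value to OR into the MSR
     * for one fixed counter.
     */
    static uint64_t fixed_ctrl_bits(unsigned int idx, uint64_t bits,
                                    int adaptive)
    {
            uint64_t val = bits << (idx * 4);

            if (adaptive)
                    val |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
            return val;
    }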
>
> I'll merge this back into:
>
> Subject: perf/x86/intel: Support adaptive PEBSv4
>
> such that this bug never existed, ok?
Yes, please.
Thanks,
Kan
>
>>
>> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
>> ---
>> arch/x86/events/intel/core.c | 5 +++++
>> arch/x86/include/asm/perf_event.h | 1 +
>> 2 files changed, 6 insertions(+)
>>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 56df0f6..f34d92b 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -2174,6 +2174,11 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
>> bits <<= (idx * 4);
>> mask = 0xfULL << (idx * 4);
>>
>> + if (x86_pmu.intel_cap.pebs_baseline && event->attr.precise_ip) {
>> + bits |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
>> + mask |= ICL_FIXED_0_ADAPTIVE << (idx * 4);
>> + }
>> +
>> rdmsrl(hwc->config_base, ctrl_val);
>> ctrl_val &= ~mask;
>> ctrl_val |= bits;
>> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
>> index dcb8bac..ce0dc88 100644
>> --- a/arch/x86/include/asm/perf_event.h
>> +++ b/arch/x86/include/asm/perf_event.h
>> @@ -33,6 +33,7 @@
>> #define HSW_IN_TX (1ULL << 32)
>> #define HSW_IN_TX_CHECKPOINTED (1ULL << 33)
>> #define ICL_EVENTSEL_ADAPTIVE (1ULL << 34)
>> +#define ICL_FIXED_0_ADAPTIVE (1ULL << 32)
>>
>> #define AMD64_EVENTSEL_INT_CORE_ENABLE (1ULL << 36)
>> #define AMD64_EVENTSEL_GUESTONLY (1ULL << 40)
>> --
>> 2.7.4
>>
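(A footnote on why leaving the bit set is harmless: with the ADAPTIVE bit
on, the hardware consults the PEBS data-configuration MSR to decide which
groups beyond the Basic group go into each record, so a configuration of
0 still yields only the Basic record. A rough sketch, assuming the
PEBS_DATACFG_* names and bit positions from this patch series; treat the
exact identifiers as illustrative.)

    #include <stdint.h>

    /* Group-select bits in the PEBS data-configuration MSR. */
    #define PEBS_DATACFG_MEMINFO    (1ULL << 0)   /* Memory Info group */
    #define PEBS_DATACFG_GP         (1ULL << 1)   /* general-purpose regs */
    #define PEBS_DATACFG_XMMS       (1ULL << 2)   /* XMM registers */
    #define PEBS_DATACFG_LBRS       (1ULL << 3)   /* LBR entries */
    #define PEBS_DATACFG_LBR_SHIFT  24            /* number of LBR entries */

    /* Build a data-configuration value; 0 means Basic records only. */
    static uint64_t pebs_data_cfg(int want_gprs, unsigned int nr_lbr)
    {
            uint64_t cfg = 0;

            if (want_gprs)
                    cfg |= PEBS_DATACFG_GP;
            if (nr_lbr)
                    cfg |= PEBS_DATACFG_LBRS |
                           ((uint64_t)nr_lbr << PEBS_DATACFG_LBR_SHIFT);
            return cfg;
    }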