Message-ID: <20180723145944.GB2458@hirez.programming.kicks-ass.net>
Date: Mon, 23 Jul 2018 16:59:44 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: kan.liang@...ux.intel.com
Cc: tglx@...utronix.de, mingo@...hat.com, linux-kernel@...r.kernel.org,
acme@...nel.org, alexander.shishkin@...ux.intel.com,
vincent.weaver@...ne.edu, jolsa@...hat.com, ak@...ux.intel.com
Subject: Re: [PATCH 3/4] perf/x86/intel/ds: Handle PEBS overflow for fixed
counters

On Thu, Mar 08, 2018 at 06:15:41PM -0800, kan.liang@...ux.intel.com wrote:
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index ef47a418d819..86149b87cce8 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2280,7 +2280,10 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
> * counters from the GLOBAL_STATUS mask and we always process PEBS
> * events via drain_pebs().
> */
> - status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
> + if (x86_pmu.flags & PMU_FL_PEBS_ALL)
> + status &= ~(cpuc->pebs_enabled & EXTENDED_PEBS_COUNTER_MASK);
> + else
> + status &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
>
> /*
> * PEBS overflow sets bit 62 in the global status register

Doesn't this re-introduce the problem fixed in commit fd583ad1563be,
where pebs_enabled bits 32-34 are the PEBS Load Latency enable bits, not
fixed counters?
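
For readers following along, here is a minimal user-space sketch of the
bit overlap being questioned. PEBS_COUNTER_MASK follows the definition
added by fd583ad1563be (general-purpose counter bits only); the value of
EXTENDED_PEBS_COUNTER_MASK is an assumption for illustration, chosen to
also cover the fixed-counter bits starting at bit 32, since its actual
definition is not visible in this hunk:

/*
 * Illustrative sketch only: the EXTENDED_PEBS_COUNTER_MASK value below
 * is an assumption (the real definition lives elsewhere in the series);
 * the other constants follow the upstream definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_PEBS_EVENTS		8
#define INTEL_PMC_IDX_FIXED	32

/* PEBS enable bits for the general-purpose counters only (fd583ad1563be). */
#define PEBS_COUNTER_MASK	((1ULL << MAX_PEBS_EVENTS) - 1)

/* Assumed extended mask that also covers fixed counters at bit 32 and up. */
#define EXTENDED_PEBS_COUNTER_MASK \
	(PEBS_COUNTER_MASK | (((1ULL << 4) - 1) << INTEL_PMC_IDX_FIXED))

int main(void)
{
	/*
	 * pebs_enabled for a Load Latency event on GP counter 0:
	 * bit 0 is the PEBS enable, bit 32 is the load-latency enable,
	 * i.e. bit 32 does *not* mean "fixed counter 0 uses PEBS".
	 */
	uint64_t pebs_enabled = (1ULL << 0) | (1ULL << 32);

	/* GLOBAL_STATUS with fixed counter 0 (bit 32) having overflowed. */
	uint64_t status = 1ULL << 32;

	uint64_t old = status & ~(pebs_enabled & PEBS_COUNTER_MASK);
	uint64_t new = status & ~(pebs_enabled & EXTENDED_PEBS_COUNTER_MASK);

	/* old keeps bit 32 set; new clears it, losing the overflow. */
	printf("old mask: %#llx, extended mask: %#llx\n",
	       (unsigned long long)old, (unsigned long long)new);
	return 0;
}

With the extended mask the fixed counter 0 overflow bit is cleared from
the status word whenever a Load Latency event has bit 32 set in
pebs_enabled, so the generic overflow path never sees it.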