Message-ID: <20201005082516.GG2628@hirez.programming.kicks-ass.net>
Date: Mon, 5 Oct 2020 10:25:16 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Kim Phillips <kim.phillips@....com>
Cc: "Liang, Kan" <kan.liang@...ux.intel.com>, mingo@...hat.com,
linux-kernel@...r.kernel.org, ak@...ux.intel.com
Subject: [PATCH] perf/x86: Fix n_pair for cancelled txn
On Fri, Oct 02, 2020 at 04:10:42PM -0500, Kim Phillips wrote:
> Tested-by: Kim Phillips <kim.phillips@....com>
---
Subject: perf/x86: Fix n_pair for cancelled txn
From: Peter Zijlstra <peterz@...radead.org>
Date: Mon Oct 5 10:09:06 CEST 2020
Kan reported that n_metric gets corrupted for cancelled transactions;
a similar issue exists for n_pair for AMD's Large Increment per Cycle
events. The problem was confirmed, and the fix verified, by Kim using:
sudo perf stat -e "{cycles,cycles,cycles,cycles}:D" -a sleep 10 &
# should succeed:
sudo perf stat -e "{fp_ret_sse_avx_ops.all}:D" -a workload
# should fail:
sudo perf stat -e "{fp_ret_sse_avx_ops.all,fp_ret_sse_avx_ops.all,cycles}:D" -a workload
# previously failed, now succeeds with this patch:
sudo perf stat -e "{fp_ret_sse_avx_ops.all}:D" -a workload
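For anyone following along, the accounting is easy to model in plain
user-space C. This is only a sketch mirroring the fields the patch
touches, not the kernel code (the kernel bumps n_txn/n_added in
x86_pmu_add(), not in collect_event(), and the real struct
cpu_hw_events is much bigger):

  #include <stdio.h>

  /* Sketch of the fields the patch touches; not the real struct. */
  struct cpu_hw_events {
  	int n_events;	/* events collected so far */
  	int n_added;	/* events added since the PMU was last enabled */
  	int n_txn;	/* events added in the current transaction */
  	int n_pair;	/* events using a paired (Merge) counter */
  	int n_txn_pair;	/* pair events in the current transaction (new) */
  };

  static void start_txn(struct cpu_hw_events *c)
  {
  	c->n_txn = 0;
  	c->n_txn_pair = 0;		/* the patch adds this reset */
  }

  static void collect_event(struct cpu_hw_events *c, int is_pair)
  {
  	c->n_events++;
  	c->n_added++;
  	c->n_txn++;
  	if (is_pair) {
  		c->n_pair++;
  		c->n_txn_pair++;	/* the patch adds this count */
  	}
  }

  static void cancel_txn(struct cpu_hw_events *c)
  {
  	c->n_added -= c->n_txn;
  	c->n_events -= c->n_txn;
  	c->n_pair -= c->n_txn_pair;	/* the patch adds this rollback */
  }

  int main(void)
  {
  	struct cpu_hw_events c = { 0 };

  	start_txn(&c);
  	collect_event(&c, 1);	/* fp_ret_sse_avx_ops.all, paired */
  	collect_event(&c, 1);	/* fp_ret_sse_avx_ops.all, paired */
  	collect_event(&c, 0);	/* cycles */
  	cancel_txn(&c);		/* group doesn't fit, roll it back */

  	/* Without the three patched lines, n_pair stays stuck at 2. */
  	printf("n_events=%d n_pair=%d\n", c.n_events, c.n_pair);
  	return 0;
  }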
Fixes: 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events")
Reported-by: Kan Liang <kan.liang@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Tested-by: Kim Phillips <kim.phillips@....com>
---
arch/x86/events/core.c | 6 +++++-
arch/x86/events/perf_event.h | 1 +
2 files changed, 6 insertions(+), 1 deletion(-)
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1089,8 +1089,10 @@ static int collect_event(struct cpu_hw_e
 		return -EINVAL;
 
 	cpuc->event_list[n] = event;
-	if (is_counter_pair(&event->hw))
+	if (is_counter_pair(&event->hw)) {
 		cpuc->n_pair++;
+		cpuc->n_txn_pair++;
+	}
 
 	return 0;
 }
@@ -2062,6 +2064,7 @@ static void x86_pmu_start_txn(struct pmu
 
 	perf_pmu_disable(pmu);
 	__this_cpu_write(cpu_hw_events.n_txn, 0);
+	__this_cpu_write(cpu_hw_events.n_txn_pair, 0);
 }
 
 /*
@@ -2087,6 +2090,7 @@ static void x86_pmu_cancel_txn(struct pm
 	 */
 	__this_cpu_sub(cpu_hw_events.n_added, __this_cpu_read(cpu_hw_events.n_txn));
 	__this_cpu_sub(cpu_hw_events.n_events, __this_cpu_read(cpu_hw_events.n_txn));
+	__this_cpu_sub(cpu_hw_events.n_pair, __this_cpu_read(cpu_hw_events.n_txn_pair));
 	perf_pmu_enable(pmu);
 }
 
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -235,6 +235,7 @@ struct cpu_hw_events {
 					     they've never been enabled yet */
 	int			n_txn;    /* the # last events in the below arrays;
 					     added in the current transaction */
+	int			n_txn_pair;
 	int			assign[X86_PMC_IDX_MAX]; /* event to counter assignment */
 	u64			tags[X86_PMC_IDX_MAX];
 
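Why a stale n_pair bites: when scheduling, each Large Increment event
also claims an adjacent Merge counter, so the generic-counter budget
shrinks by n_pair (the PMU_FL_PAIR handling in x86_schedule_events()).
Leak n_pair and later, perfectly valid groups stop fitting. A rough
user-space sketch of that budget check, assuming the 6 generic counters
of a Zen core PMU and the event counts from the reproducer above (not
the kernel's actual code):

  #include <stdio.h>

  #define NUM_COUNTERS 6	/* general-purpose counters, Zen core PMU */

  /*
   * Rough model of the gpmax reduction for paired events: each Large
   * Increment event also needs an adjacent Merge counter, so the
   * generic-counter budget shrinks by n_pair.
   */
  static int events_fit(int n_events, int n_pair)
  {
  	int gpmax = NUM_COUNTERS - n_pair;

  	return n_events <= gpmax;
  }

  int main(void)
  {
  	/* 4 pinned cycles events + 1 paired event, clean state: fits. */
  	printf("clean state:  %s\n", events_fit(5, 1) ? "ok" : "-EINVAL");

  	/*
  	 * Same request, but n_pair still inflated by the 2 paired events
  	 * of the cancelled transaction: 5 > 6 - 3, so the add fails.
  	 */
  	printf("stale n_pair: %s\n", events_fit(5, 1 + 2) ? "ok" : "-EINVAL");
  	return 0;
  }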