Message-ID: <156724236532.10782.17844013749559561663.tip-bot2@tip-bot2>
Date: Sat, 31 Aug 2019 09:06:05 -0000
From: "tip-bot2 for Kim Phillips" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Kim Phillips <kim.phillips@....com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Arnaldo Carvalho de Melo" <acme@...nel.org>, <x86@...nel.org>,
Ingo Molnar <mingo@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Jiri Olsa <jolsa@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Borislav Petkov" <bp@...en8.de>,
Stephane Eranian <eranian@...gle.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
"Namhyung Kim" <namhyung@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: [tip: perf/urgent] perf/x86/amd/ibs: Fix sample bias for dispatched micro-ops
The following commit has been merged into the perf/urgent branch of tip:
Commit-ID: 0f4cd769c410e2285a4e9873a684d90423f03090
Gitweb: https://git.kernel.org/tip/0f4cd769c410e2285a4e9873a684d90423f03090
Author: Kim Phillips <kim.phillips@....com>
AuthorDate: Mon, 26 Aug 2019 14:57:30 -05:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 30 Aug 2019 14:27:47 +02:00
perf/x86/amd/ibs: Fix sample bias for dispatched micro-ops
When counting dispatched micro-ops with cnt_ctl=1, in order to prevent
sample bias, IBS hardware preloads the least significant 7 bits of the
current count (IbsOpCurCnt) with random values, so that, after the
interrupt is handled and counting resumes, the point at which the next
sample is taken is slightly perturbed.
The current count bitfield is in the IBS execution control h/w register,
alongside the maximum count field.
Currently, the IBS driver writes that register with the maximum count,
leaving zeroes to fill the current count field, thereby overwriting
the random bits the hardware preloaded for itself.
Fix the driver to actually retain and carry those random bits from the
read of the IBS control register through to its write, instead of
overwriting the lower current count bits with zeroes.
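In code terms, a minimal standalone sketch of that idea (not the driver
change itself, which is in the hunk below; ibs_op_rearm_count() and the
have_rdwropcnt flag are illustrative names, and the masks are the ones
defined in the patched asm/perf_event.h):

#include <stdint.h>
#include <stdbool.h>

/* Masks as defined in the patched arch/x86/include/asm/perf_event.h */
#define IBS_OP_CUR_CNT_RAND   (0x0007FULL << 32)  /* random CurCnt bits, MSR bits 38:32 */
#define IBS_OP_CNT_CTL        (1ULL << 19)        /* count dispatched ops instead of cycles */

/*
 * Compute the count value OR'ed into the IBS op control MSR when
 * re-arming after an NMI.  "config" is the MSR value read back in the
 * handler, "period" the sample period; the MaxCnt field is written in
 * units of 16 ops, hence the >> 4 the driver applies.  have_rdwropcnt
 * stands in for the driver's (ibs_caps & IBS_CAPS_RDWROPCNT) test.
 */
static uint64_t ibs_op_rearm_count(uint64_t config, uint64_t period,
                                   bool have_rdwropcnt)
{
        uint64_t cnt = period >> 4;

        /*
         * Carry the hardware-preloaded random IbsOpCurCnt bits through
         * to the write instead of clearing them; only done when the
         * CPU can read/write IbsOpCurCnt and cnt_ctl=1 is in use.
         */
        if (have_rdwropcnt && (config & IBS_OP_CNT_CTL))
                cnt |= config & IBS_OP_CUR_CNT_RAND;

        return cnt;
}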
Tested with:
perf record -c 100001 -e ibs_op/cnt_ctl=1/pp -a -C 0 taskset -c 0 <workload>
'perf annotate' output before:
15.70 65: addsd %xmm0,%xmm1
17.30 add $0x1,%rax
15.88 cmp %rdx,%rax
je 82
17.32 72: test $0x1,%al
jne 7c
7.52 movapd %xmm1,%xmm0
5.90 jmp 65
8.23 7c: sqrtsd %xmm1,%xmm0
12.15 jmp 65
'perf annotate' output after:
16.63 65: addsd %xmm0,%xmm1
16.82 add $0x1,%rax
16.81 cmp %rdx,%rax
je 82
16.69 72: test $0x1,%al
jne 7c
8.30 movapd %xmm1,%xmm0
8.13 jmp 65
8.24 7c: sqrtsd %xmm1,%xmm0
8.39 jmp 65
Tested on Family 15h and 17h machines.
Machines prior to family 10h Rev. C don't have the RDWROPCNT capability,
and have the IbsOpCurCnt bitfield reserved, so this patch shouldn't
affect their operation.
It is unknown why commit db98c5faf8cb ("perf/x86: Implement 64-bit
counter support for IBS") ignored the lower 4 bits of the IbsOpCurCnt
field; the number of preloaded random bits has always been 7, AFAICT.
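For reference, my reading of the old and new masks in the header hunk
below (the _OLD name is only for comparison here, it does not exist in
the tree):

/* Before this patch: the low 4 bits of IbsOpCurCnt were left out. */
#define IBS_OP_CUR_CNT_OLD    (0xFFFF0ULL << 32)  /* MSR bits 51:36 */

/* After this patch: the low 7 bits are the hardware-randomized ones. */
#define IBS_OP_CUR_CNT        (0xFFF80ULL << 32)  /* MSR bits 51:39 */
#define IBS_OP_CUR_CNT_RAND   (0x0007FULL << 32)  /* MSR bits 38:32, 7 random bits */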
Signed-off-by: Kim Phillips <kim.phillips@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: "Arnaldo Carvalho de Melo" <acme@...nel.org>
Cc: <x86@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: "Borislav Petkov" <bp@...en8.de>
Cc: Stephane Eranian <eranian@...gle.com>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: "Namhyung Kim" <namhyung@...nel.org>
Cc: "H. Peter Anvin" <hpa@...or.com>
Link: https://lkml.kernel.org/r/20190826195730.30614-1-kim.phillips@amd.com
---
arch/x86/events/amd/ibs.c | 13 ++++++++++---
arch/x86/include/asm/perf_event.h | 12 ++++++++----
2 files changed, 18 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index 62f317c..5b35b7e 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -661,10 +661,17 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
throttle = perf_event_overflow(event, &data, &regs);
out:
- if (throttle)
+ if (throttle) {
perf_ibs_stop(event, 0);
- else
- perf_ibs_enable_event(perf_ibs, hwc, period >> 4);
+ } else {
+ period >>= 4;
+
+ if ((ibs_caps & IBS_CAPS_RDWROPCNT) &&
+ (*config & IBS_OP_CNT_CTL))
+ period |= *config & IBS_OP_CUR_CNT_RAND;
+
+ perf_ibs_enable_event(perf_ibs, hwc, period);
+ }
perf_event_update_userpage(event);
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 1392d5e..ee26e92 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -252,16 +252,20 @@ struct pebs_lbr {
#define IBSCTL_LVT_OFFSET_VALID (1ULL<<8)
#define IBSCTL_LVT_OFFSET_MASK 0x0F
-/* ibs fetch bits/masks */
+/* IBS fetch bits/masks */
#define IBS_FETCH_RAND_EN (1ULL<<57)
#define IBS_FETCH_VAL (1ULL<<49)
#define IBS_FETCH_ENABLE (1ULL<<48)
#define IBS_FETCH_CNT 0xFFFF0000ULL
#define IBS_FETCH_MAX_CNT 0x0000FFFFULL
-/* ibs op bits/masks */
-/* lower 4 bits of the current count are ignored: */
-#define IBS_OP_CUR_CNT (0xFFFF0ULL<<32)
+/*
+ * IBS op bits/masks
+ * The lower 7 bits of the current count are random bits
+ * preloaded by hardware and ignored in software
+ */
+#define IBS_OP_CUR_CNT (0xFFF80ULL<<32)
+#define IBS_OP_CUR_CNT_RAND (0x0007FULL<<32)
#define IBS_OP_CNT_CTL (1ULL<<19)
#define IBS_OP_VAL (1ULL<<18)
#define IBS_OP_ENABLE (1ULL<<17)