Message-ID: <5624DAC6.900@intel.com>
Date: Mon, 19 Oct 2015 14:57:58 +0300
From: Adrian Hunter <adrian.hunter@...el.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org,
Stephane Eranian <eranian@...gle.com>,
Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH V2 1/1] perf/x86: Fix time_shift in perf_event_mmap_page

From 96bb79ce53b9373f5b6a212fcc2cd4d364d708af Mon Sep 17 00:00:00 2001
From: Adrian Hunter <adrian.hunter@...el.com>
Date: Fri, 16 Oct 2015 11:41:56 +0300
Subject: [PATCH 1/1] perf/x86: Fix time_shift in perf_event_mmap_page

Commit b20112edeadf ("perf/x86: Improve accuracy of perf/sched clock")
allowed the time_shift value in perf_event_mmap_page to be as much
as 32. Unfortunately the documented algorithms for using time_shift
have it shifting an integer, whereas to work correctly with the value
32, the type must be u64.

In the case of perf tools, Intel PT decodes correctly but the timestamps
that are output (for example by perf script) have lost 32 bits of
granularity, so they look like they are not changing at all.

Fix by limiting the shift to 31 and adjusting the multiplier accordingly.

Also update the documentation of perf_event_mmap_page so that new code
based on it will be more future-proof.

Fixes: b20112edeadf ("perf/x86: Improve accuracy of perf/sched clock")
Signed-off-by: Adrian Hunter <adrian.hunter@...el.com>
---
Changes in V2:
	Add the symptoms of the misbehaviour, and how users would
	notice it, to the commit message.

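For illustration only (not part of the patch): the multiplier adjustment
works because the conversion is essentially ns = (cyc * mult) >> shift,
so halving the multiplier while reducing the shift from 32 to 31 gives
the same result, apart from the one bit of multiplier precision lost to
the right shift. A minimal userspace sketch, with made-up values:

/* Demonstrates that (cyc * (mult >> 1)) >> 31 == (cyc * mult) >> 32,
 * up to the rounding of mult/2.  Mirrors what mul_u64_u32_shr()
 * computes in the kernel: a 64x32-bit multiply followed by a right
 * shift.  The cyc and mult values below are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t mul_u64_u32_shr_demo(uint64_t cyc, uint32_t mult, int shift)
{
	return (uint64_t)(((unsigned __int128)cyc * mult) >> shift);
}

int main(void)
{
	uint64_t cyc  = 123456789012ULL;	/* hypothetical TSC count  */
	uint32_t mult = 0x9876543AU;		/* hypothetical cyc2ns_mul */

	printf("shift 32: %llu ns\n",
	       (unsigned long long)mul_u64_u32_shr_demo(cyc, mult, 32));
	printf("shift 31: %llu ns\n",
	       (unsigned long long)mul_u64_u32_shr_demo(cyc, mult >> 1, 31));
	return 0;
}
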
arch/x86/kernel/tsc.c | 11 +++++++++++
include/uapi/linux/perf_event.h | 4 ++--
2 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 69b84a26ea17..c7c4d9c51e99 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -259,6 +259,17 @@ static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
clocks_calc_mult_shift(&data->cyc2ns_mul, &data->cyc2ns_shift, cpu_khz,
NSEC_PER_MSEC, 0);
+ /*
+ * cyc2ns_shift is exported via arch_perf_update_userpage() where it is
+ * not expected to be greater than 31 due to the original published
+ * conversion algorithm shifting a 32-bit value (now specifies a 64-bit
+ * value) - refer perf_event_mmap_page documentation in perf_event.h.
+ */
+ if (data->cyc2ns_shift == 32) {
+ data->cyc2ns_shift = 31;
+ data->cyc2ns_mul >>= 1;
+ }
+
data->cyc2ns_offset = ns_now -
mul_u64_u32_shr(tsc_now, data->cyc2ns_mul, data->cyc2ns_shift);
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 2881145cda86..6c72e72e975c 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -476,7 +476,7 @@ struct perf_event_mmap_page {
* u64 delta;
*
* quot = (cyc >> time_shift);
- * rem = cyc & ((1 << time_shift) - 1);
+ * rem = cyc & (((u64)1 << time_shift) - 1);
* delta = time_offset + quot * time_mult +
* ((rem * time_mult) >> time_shift);
*
@@ -507,7 +507,7 @@ struct perf_event_mmap_page {
* And vice versa:
*
* quot = cyc >> time_shift;
- * rem = cyc & ((1 << time_shift) - 1);
+ * rem = cyc & (((u64)1 << time_shift) - 1);
* timestamp = time_zero + quot * time_mult +
* ((rem * time_mult) >> time_shift);
*/
--
1.9.1
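
For anyone implementing the documented conversion in user space, a
minimal sketch of the corrected algorithm follows (not part of the
patch; the helper name and values are made up, while the field names
match perf_event_mmap_page):

/* Convert a captured cycle count to a timestamp using the
 * time_zero/time_mult/time_shift fields exported in the mmap page, as
 * documented above.  Note the 64-bit constant in the mask: with
 * time_shift == 32, "1 << time_shift" would shift a 32-bit int by its
 * full width, which is undefined in C and loses the low 32 bits of
 * granularity that "rem" is meant to supply.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t cyc_to_timestamp(uint64_t cyc, uint32_t time_mult,
				 uint16_t time_shift, uint64_t time_zero)
{
	uint64_t quot = cyc >> time_shift;
	uint64_t rem  = cyc & (((uint64_t)1 << time_shift) - 1);

	return time_zero + quot * time_mult +
	       ((rem * time_mult) >> time_shift);
}

int main(void)
{
	/* Hypothetical field values purely for demonstration. */
	printf("%llu\n", (unsigned long long)
	       cyc_to_timestamp(123456789012ULL, 0x4C4B4000U, 31, 0));
	return 0;
}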