Message-ID: <171258183964.10875.16766485938132892744.tip-bot2@tip-bot2>
Date: Mon, 08 Apr 2024 13:10:39 -0000
From: "tip-bot2 for Adrian Hunter" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Adrian Hunter <adrian.hunter@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: timers/core] x86/vdso: Make delta calculation overflow safe
The following commit has been merged into the timers/core branch of tip:
Commit-ID: 7e90ffb716d289b3b82fb41892bb52a11bdadfd9
Gitweb: https://git.kernel.org/tip/7e90ffb716d289b3b82fb41892bb52a11bdadfd9
Author: Adrian Hunter <adrian.hunter@...el.com>
AuthorDate: Mon, 25 Mar 2024 08:40:12 +02:00
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Mon, 08 Apr 2024 15:03:07 +02:00
x86/vdso: Make delta calculation overflow safe
Kernel timekeeping is designed to keep the change in cycles (since the last
timer interrupt) below max_cycles, which prevents multiplication overflow
when converting cycles to nanoseconds. However, if timer interrupts stop,
the calculation will eventually overflow.
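For illustration, a minimal user-space sketch of where that multiplication wraps; the multiplier below is a made-up example value (2^22), not a real clocksource parameter:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t mult  = 4194304;           /* hypothetical cycles->ns multiplier (2^22) */
		uint64_t limit = UINT64_MAX / mult; /* largest delta whose delta * mult fits in a u64 */

		printf("delta * mult wraps a u64 once delta > %llu cycles\n",
		       (unsigned long long)limit);
		/* At a hypothetical 3 GHz TSC, that many cycles accumulate after
		 * roughly this many seconds without a timer interrupt: */
		printf("~%llu seconds at 3 GHz\n",
		       (unsigned long long)(limit / 3000000000ULL));
		return 0;
	}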
Add protection against that. Select GENERIC_VDSO_OVERFLOW_PROTECT so that
max_cycles is made available in the VDSO data page. Check against
max_cycles, falling back to a slower, higher-precision calculation. Take
advantage of the opportunity to move the masking and negative-motion check
into the slow path.
The result is a calculation with performance similar to before. Newer
machines showed a performance benefit, whereas older Skylake-based hardware
such as Intel Kaby Lake measured <1% worse.
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Adrian Hunter <adrian.hunter@...el.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Link: https://lore.kernel.org/r/20240325064023.2997-9-adrian.hunter@intel.com
---
 arch/x86/Kconfig                         |  1 +-
 arch/x86/include/asm/vdso/gettimeofday.h | 31 ++++++++++++++++-------
 2 files changed, 23 insertions(+), 9 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4fff6ed..4e251ba 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -168,6 +168,7 @@ config X86
 	select GENERIC_TIME_VSYSCALL
 	select GENERIC_GETTIMEOFDAY
 	select GENERIC_VDSO_TIME_NS
+	select GENERIC_VDSO_OVERFLOW_PROTECT
 	select GUP_GET_PXX_LOW_HIGH if X86_PAE
 	select HARDIRQS_SW_RESEND
 	select HARDLOCKUP_CHECK_TIMESTAMP if X86_64
diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
index 5727ded..0ef3619 100644
--- a/arch/x86/include/asm/vdso/gettimeofday.h
+++ b/arch/x86/include/asm/vdso/gettimeofday.h
@@ -319,18 +319,31 @@ static inline bool arch_vdso_cycles_ok(u64 cycles)
  */
 static __always_inline u64 vdso_calc_ns(const struct vdso_data *vd, u64 cycles, u64 base)
 {
-	/*
-	 * Due to the MSB/Sign-bit being used as invalid marker (see
-	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX.
-	 */
-	u64 delta = (cycles - vd->cycle_last) & S64_MAX;
+	u64 delta = cycles - vd->cycle_last;
 
 	/*
-	 * Due to the above mentioned TSC wobbles, filter out negative motion.
-	 * Per the above masking, the effective sign bit is now bit 62.
+	 * Negative motion and deltas which can cause multiplication
+	 * overflow require special treatment. This check covers both as
+	 * negative motion is guaranteed to be greater than @vd::max_cycles
+	 * due to unsigned comparison.
+	 *
+	 * Due to the MSB/Sign-bit being used as invalid marker (see
+	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX,
+	 * but that case is also unlikely and will also take the unlikely path
+	 * here.
 	 */
-	if (unlikely(delta & (1ULL << 62)))
-		return base >> vd->shift;
+	if (unlikely(delta > vd->max_cycles)) {
+		/*
+		 * Due to the above mentioned TSC wobbles, filter out
+		 * negative motion. Per the above masking, the effective
+		 * sign bit is now bit 62.
+		 */
+		if (delta & (1ULL << 62))
+			return base >> vd->shift;
+
+		/* Handle multiplication overflow gracefully */
+		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vd->mult, base, vd->shift);
+	}
 
 	return ((delta * vd->mult) + base) >> vd->shift;
 }
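For reference, a plausible user-space model of the 128-bit widening computation the slow-path helper performs, i.e. (delta * mult + base) >> shift without intermediate truncation; this assumes a compiler with __uint128_t support (GCC/Clang on 64-bit targets), and the in-kernel mul_u64_u32_add_u64_shr() may be implemented differently per architecture:

	#include <stdint.h>

	/* Widen to 128 bits so a * mul cannot truncate, add b, then shift. */
	static inline uint64_t mul_u64_u32_add_u64_shr_model(uint64_t a, uint32_t mul,
							     uint64_t b, unsigned int shift)
	{
		return (uint64_t)((((unsigned __int128)a * mul) + b) >> shift);
	}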