Message-Id: <c8ce9baaef0dc7273e4bcc31f353b17b655113d1.1579196675.git.christophe.leroy@c-s.fr>
Date: Thu, 16 Jan 2020 17:58:34 +0000 (UTC)
From: Christophe Leroy <christophe.leroy@....fr>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>, nathanl@...ux.ibm.com,
arnd@...db.de, tglx@...utronix.de, vincenzo.frascino@....com,
luto@...nel.org
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-arm-kernel@...ts.infradead.org, linux-mips@...r.kernel.org,
x86@...nel.org
Subject: [RFC PATCH v4 10/11] lib: vdso: Allow arches to override the ns shift operation

On powerpc/32, GCC (8.1) generates pretty bad code for the
ns >>= vd->shift operation, taking into account that the shift
is always < 32 and the upper part of the result is likely to be
zero. GCC makes the reverse assumptions, considering the shift
likely to be >= 32 and the upper part likely to be non-zero.
unsigned long long shift(unsigned long long x, unsigned char s)
{
	return x >> s;
}
results in:
00000018 <shift>:
18: 35 25 ff e0 addic. r9,r5,-32
1c: 41 80 00 10 blt 2c <shift+0x14>
20: 7c 64 4c 30 srw r4,r3,r9
24: 38 60 00 00 li r3,0
28: 4e 80 00 20 blr
2c: 54 69 08 3c rlwinm r9,r3,1,0,30
30: 21 45 00 1f subfic r10,r5,31
34: 7c 84 2c 30 srw r4,r4,r5
38: 7d 29 50 30 slw r9,r9,r10
3c: 7c 63 2c 30 srw r3,r3,r5
40: 7d 24 23 78 or r4,r9,r4
44: 4e 80 00 20 blr
Even when the shift is forced into range with an &= 31, GCC
still considers the shift as likely >= 32.
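For illustration, the masked variant (hypothetical, not taken
from an actual build) looks like:

unsigned long long shift(unsigned long long x, unsigned char s)
{
	/* mask the count so the compiler can see it is < 32 */
	return x >> (s & 31);
}

and GCC still emits the same branch on the count.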
Define a vdso_shift_ns() macro that can be overridden by
arches.
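
As an illustration only, a 32-bit arch could override it along
these lines (a sketch assuming kernel types u32/u64 and that
vd->shift is always in the range 1..31; the actual arch
implementation may differ):

static __always_inline u64 vdso_shift_ns(u64 ns, unsigned long shift)
{
	u32 hi = ns >> 32;	/* upper word, expected to be zero or small */
	u32 lo = ns;		/* lower word */

	/* shift is known to be 1..31, so no >= 32 code path is needed */
	lo >>= shift;
	lo |= hi << (32 - shift);
	hi >>= shift;

	return ((u64)hi << 32) | lo;
}
#define vdso_shift_ns vdso_shift_ns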
Signed-off-by: Christophe Leroy <christophe.leroy@....fr>
---
lib/vdso/gettimeofday.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
index 724b45c3e8ac..9ba92058cfd7 100644
--- a/lib/vdso/gettimeofday.c
+++ b/lib/vdso/gettimeofday.c
@@ -39,6 +39,13 @@ u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
 }
 #endif
 
+#ifndef vdso_shift_ns
+static __always_inline u64 vdso_shift_ns(u64 ns, unsigned long shift)
+{
+	return ns >> shift;
+}
+#endif
+
 #ifndef __arch_vdso_hres_capable
 static inline bool __arch_vdso_hres_capable(void)
 {
@@ -148,7 +155,7 @@ static __always_inline int do_hres(const struct vdso_data *vd, clockid_t clk,
 		ns = vdso_ts->nsec;
 		last = vd->cycle_last;
 		ns += vdso_calc_delta(cycles, last, vd->mask, vd->mult);
-		ns >>= vd->shift;
+		ns = vdso_shift_ns(ns, vd->shift);
 		sec = vdso_ts->sec;
 	} while (unlikely(vdso_read_retry(vd, seq)));
 
--
2.13.3