Message-Id: <20200622180803.1449-3-mathieu.desnoyers@efficios.com>
Date: Mon, 22 Jun 2020 14:08:02 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Florian Weimer <fweimer@...hat.com>
Cc: Carlos O'Donell <carlos@...hat.com>,
Joseph Myers <joseph@...esourcery.com>,
Szabolcs Nagy <szabolcs.nagy@....com>,
libc-alpha@...rceware.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ben Maurer <bmaurer@...com>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Boqun Feng <boqun.feng@...il.com>,
Will Deacon <will.deacon@....com>,
Paul Turner <pjt@...gle.com>, linux-kernel@...r.kernel.org,
linux-api@...r.kernel.org
Subject: [PATCH 2/3] Linux: Use rseq in sched_getcpu if available (v9)

When rseq is available, use the cpu_id field from __rseq_abi on Linux to
implement sched_getcpu(). Fall back on the vgetcpu vDSO when rseq is
unavailable.

Benchmarks:

x86-64: Intel E5-2630 v3@...0GHz, 16-core, hyperthreading

glibc sched_getcpu(): 13.7 ns (baseline)
glibc sched_getcpu() using rseq: 2.5 ns (speedup: 5.5x)
inline load cpuid from __rseq_abi TLS: 0.8 ns (speedup: 17.1x)
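
(For reference, a minimal sketch, not part of this patch, of the kind of
inline fast path the last line above measures, assuming rseq registration
has succeeded through glibc and <sys/rseq.h> exposes the __rseq_abi TLS
symbol as in this series; the helper name my_sched_getcpu is made up for
illustration.)

#include <sched.h>
#include <sys/rseq.h>

/* Read the current CPU number straight from the rseq TLS area.
   cpu_id is negative (RSEQ_CPU_ID_UNINITIALIZED or
   RSEQ_CPU_ID_REGISTRATION_FAILED) when rseq is not usable, in which
   case fall back to sched_getcpu ().  */
static inline int
my_sched_getcpu (void)
{
  int cpu_id = __atomic_load_n (&__rseq_abi.cpu_id, __ATOMIC_RELAXED);

  return cpu_id >= 0 ? cpu_id : sched_getcpu ();
}
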
CC: Carlos O'Donell <carlos@...hat.com>
CC: Florian Weimer <fweimer@...hat.com>
CC: Joseph Myers <joseph@...esourcery.com>
CC: Szabolcs Nagy <szabolcs.nagy@....com>
CC: Thomas Gleixner <tglx@...utronix.de>
CC: Ben Maurer <bmaurer@...com>
CC: Peter Zijlstra <peterz@...radead.org>
CC: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@...il.com>
CC: Will Deacon <will.deacon@....com>
CC: Paul Turner <pjt@...gle.com>
CC: libc-alpha@...rceware.org
CC: linux-kernel@...r.kernel.org
CC: linux-api@...r.kernel.org
---
Changes since v1:
- rseq is only used if both __NR_rseq and RSEQ_SIG are defined.
Changes since v2:
- remove duplicated __rseq_abi extern declaration.
Changes since v3:
- update ChangeLog.
Changes since v4:
- Use atomic_load_relaxed to load the __rseq_abi.cpu_id field, a
consequence of the fact that __rseq_abi is not volatile anymore.
- Include atomic.h which provides atomic_load_relaxed.
Changes since v5:
- Use __ASSUME_RSEQ to detect rseq availability.
Changes since v6:
- Remove use of __ASSUME_RSEQ.
Changes since v7:
- Fix incorrect merge with commit d0def09ff6 ("linux: Fix vDSO macros
build with time64 interfaces")
Changes since v8:
- Update patch title.
- Add /* RSEQ_SIG */ for #else and #endif.
---
 sysdeps/unix/sysv/linux/sched_getcpu.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/sysdeps/unix/sysv/linux/sched_getcpu.c b/sysdeps/unix/sysv/linux/sched_getcpu.c
index c019cfb3cf..c0f992e056 100644
--- a/sysdeps/unix/sysv/linux/sched_getcpu.c
+++ b/sysdeps/unix/sysv/linux/sched_getcpu.c
@@ -18,10 +18,12 @@
 #include <errno.h>
 #include <sched.h>
 #include <sysdep.h>
+#include <atomic.h>
 #include <sysdep-vdso.h>
+#include <sys/rseq.h>
 
-int
-sched_getcpu (void)
+static int
+vsyscall_sched_getcpu (void)
 {
   unsigned int cpu;
   int r = -1;
@@ -32,3 +34,19 @@ sched_getcpu (void)
 #endif
   return r == -1 ? r : cpu;
 }
+
+#ifdef RSEQ_SIG
+int
+sched_getcpu (void)
+{
+  int cpu_id = atomic_load_relaxed (&__rseq_abi.cpu_id);
+
+  return cpu_id >= 0 ? cpu_id : vsyscall_sched_getcpu ();
+}
+#else /* RSEQ_SIG */
+int
+sched_getcpu (void)
+{
+  return vsyscall_sched_getcpu ();
+}
+#endif /* RSEQ_SIG */
--
2.17.1