Message-Id: <20221122203932.231377-26-mathieu.desnoyers@efficios.com>
Date: Tue, 22 Nov 2022 15:39:27 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"H . Peter Anvin" <hpa@...or.com>, Paul Turner <pjt@...gle.com>,
linux-api@...r.kernel.org, Christian Brauner <brauner@...nel.org>,
Florian Weimer <fw@...eb.enyo.de>, David.Laight@...LAB.COM,
carlos@...hat.com, Peter Oskolkov <posk@...k.io>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Chris Kennelly <ckennelly@...gle.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: [PATCH 25/30] rseq: Extend struct rseq with per-memory-map NUMA-aware Concurrency ID

Expose a per-memory-map NUMA-aware concurrency ID to userspace. Each
concurrency ID remains associated with the same NUMA node, except when
the NUMA topology is reconfigured.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
---
 include/uapi/linux/rseq.h |  9 +++++++++
 kernel/rseq.c             | 10 +++++++++-
2 files changed, 18 insertions(+), 1 deletion(-)
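
A minimal user-space sketch (illustrative only, not part of the patch) of
how a thread could read the new field: it assumes a libc exporting
__rseq_offset/__rseq_size (glibc >= 2.35), a compiler providing
__builtin_thread_pointer(), and a uapi struct rseq that already contains
mm_numa_cid. The size check is only a simplistic guard; a real application
would rather rely on the rseq feature-size detection introduced by this
series.

#include <stddef.h>
#include <stdio.h>
#include <sys/rseq.h>	/* struct rseq, __rseq_offset, __rseq_size */

static inline struct rseq *rseq_self(void)
{
	/* glibc registers the rseq area at the thread pointer + __rseq_offset. */
	return (struct rseq *)((char *)__builtin_thread_pointer() + __rseq_offset);
}

int main(void)
{
	struct rseq *rs = rseq_self();

	/* Guard against running on a kernel/libc without the new field. */
	if (__rseq_size < offsetof(struct rseq, mm_numa_cid) + sizeof(rs->mm_numa_cid)) {
		fprintf(stderr, "mm_numa_cid not available\n");
		return 1;
	}

	/*
	 * Single-copy atomic read; per the uapi comment, only the thread
	 * which registered this rseq area should read the field.
	 */
	unsigned int numa_cid = __atomic_load_n(&rs->mm_numa_cid, __ATOMIC_RELAXED);

	printf("mm_numa_cid = %u\n", numa_cid);
	return 0;
}
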
diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
index c233aae5eac9..5779249ed37f 100644
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -148,6 +148,15 @@ struct rseq {
 	 */
 	__u32 mm_cid;
 
+	/*
+	 * Restartable sequences mm_numa_cid field. Updated by the kernel.
+	 * Read by user-space with single-copy atomicity semantics. This field
+	 * should only be read by the thread which registered this data
+	 * structure. Aligned on 32-bit. Contains the current thread's
+	 * NUMA-aware concurrency ID (allocated uniquely within a memory map).
+	 */
+	__u32 mm_numa_cid;
+
 	/*
 	 * Flexible array member at end of structure, after last feature field.
 	 */
diff --git a/kernel/rseq.c b/kernel/rseq.c
index cb2512ab3256..58b09de0de47 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -91,14 +91,17 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
 	u32 cpu_id = raw_smp_processor_id();
 	u32 node_id = cpu_to_node(cpu_id);
 	u32 mm_cid = task_mm_cid(t);
+	u32 mm_numa_cid = task_mm_numa_cid(t);
 
 	WARN_ON_ONCE((int) mm_cid < 0);
+	WARN_ON_ONCE((int) mm_numa_cid < 0);
 	if (!user_write_access_begin(rseq, t->rseq_len))
 		goto efault;
 	unsafe_put_user(cpu_id, &rseq->cpu_id_start, efault_end);
 	unsafe_put_user(cpu_id, &rseq->cpu_id, efault_end);
 	unsafe_put_user(node_id, &rseq->node_id, efault_end);
 	unsafe_put_user(mm_cid, &rseq->mm_cid, efault_end);
+	unsafe_put_user(mm_numa_cid, &rseq->mm_numa_cid, efault_end);
 	/*
 	 * Additional feature fields added after ORIG_RSEQ_SIZE
 	 * need to be conditionally updated only if
@@ -117,7 +120,7 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
 static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
 {
 	u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED, node_id = 0,
-	    mm_cid = 0;
+	    mm_cid = 0, mm_numa_cid = 0;
 
 	/*
 	 * Reset cpu_id_start to its initial state (0).
@@ -141,6 +144,11 @@ static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
 	 */
 	if (put_user(mm_cid, &t->rseq->mm_cid))
 		return -EFAULT;
+	/*
+	 * Reset mm_numa_cid to its initial state (0).
+	 */
+	if (put_user(mm_numa_cid, &t->rseq->mm_numa_cid))
+		return -EFAULT;
 	/*
 	 * Additional feature fields added after ORIG_RSEQ_SIZE
 	 * need to be conditionally reset only if
--
2.25.1