Message-Id: <20210716184719.269033-4-posk@google.com>
Date: Fri, 16 Jul 2021 11:47:18 -0700
From: Peter Oskolkov <posk@...k.io>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linux-api@...r.kernel.org
Cc: Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>,
Peter Oskolkov <posk@...gle.com>,
Peter Oskolkov <posk@...k.io>,
Joel Fernandes <joel@...lfernandes.org>,
Andrei Vagin <avagin@...gle.com>,
Jim Newsome <jnewsome@...project.org>,
Jann Horn <jannh@...gle.com>,
Thierry Delisle <tdelisle@...terloo.ca>
Subject: [RFC PATCH 3/4 v0.3] sched/umcg: RFC: add userspace sll helpers
The helpers below operate on singly linked lists in userspace concurrently,
without spinning indefinitely in the kernel. Specifically:
push (add a node to the head of the list):

	step = 0
	while (++step < N)
		old = *head
		*node = old
		if (cmp_xchng(head, &old, node))
			return		/* success */
	return EAGAIN
pop (remove the first element from the list):

	mark the first live node as deleted by setting the lowest bit
	of its next field, without actually unlinking the node:

	curr = *head
	step = 0
	while (curr && ++step < N)
		next = *curr
		if (next & 1)		/* curr is already deleted; skip it */
			curr = next & ~1
			continue
		if (cmp_xchng(curr, &next, next | 1))
			return curr	/* success */
		else
			curr = next & ~1
	return EAGAIN
It is userspace's responsibility to actually unlink the nodes marked
as deleted from the list.
v0.2->v0.3 changes:
- removed spinning from the kernel (other than capped cmpxchg).
Signed-off-by: Peter Oskolkov <posk@...gle.com>
---
kernel/sched/umcg.h | 116 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 116 insertions(+)
diff --git a/kernel/sched/umcg.h b/kernel/sched/umcg.h
index 288017f5368c..435531d751f2 100644
--- a/kernel/sched/umcg.h
+++ b/kernel/sched/umcg.h
@@ -165,5 +165,121 @@ static inline int cmpxchg_user_64(u64 __user *uaddr, u64 *old, u64 new)
ret; \
})
+/**
+ * umcg_sll_push - push a node onto a singly linked list (stack) in userspace
+ * @head: a pointer to the head of the stack;
+ * @node: a pointer to the node to push;
+ * @max_tries: the maximum number of attempts to push the node onto the list.
+ *
+ * Push a node onto a singly linked list (stack). The function makes at most
+ * @max_tries attempts and returns -EAGAIN if there are too many concurrent
+ * updates, so that the caller may yield the CPU.
+ *
+ * Return:
+ * 0 - success;
+ * -EFAULT - error;
+ * -EAGAIN - try again.
+ */
+static inline int umcg_sll_push(u64 __user *head, u64 __user *node, int max_tries)
+{
+ int step;
+
+ for (step = 0; step < max_tries; step++) {
+ int ret;
+ u64 first;
+
+ smp_mb(); /* Make the read below clean. */
+ if (get_user(first, head))
+ return -EFAULT;
+
+ if (put_user(first, node))
+ return -EFAULT;
+ smp_mb(); /* Make the write above visible. */
+
+ ret = cmpxchg_user_64(head, &first, (u64)node);
+ if (!ret)
+ return 0;
+
+ if (ret == -EAGAIN) {
+ cpu_relax();
+ continue;
+ }
+
+ /* Any other error from cmpxchg_user_64 is a fault. */
+ WARN_ONCE(ret != -EFAULT, "unexpected cmpxchg_user_64 result");
+
+ return -EFAULT;
+ }
+
+ return -EAGAIN;
+}
+
+/**
+ * umcg_sll_pop - pop a node from a singly linked list (stack) in userspace
+ * @head: a pointer to the head of the stack;
+ * @value: a pointer to where the popped value should be stored;
+ * @max_tries: the maximum number of nodes to try to pop.
+ *
+ * Pop a node from a singly linked list (stack). Internally, popped nodes
+ * are only marked as deleted, not actually removed from the list, to avoid
+ * the ABA problem. It is userspace's responsibility to garbage-collect them.
+ *
+ * As an additional protection, the function checks only the first
+ * @max_tries nodes.
+ *
+ * Note: on success, @value should be cast to (u64 __user *) if not zero.
+ *
+ * Return:
+ * 0 - success;
+ * -EFAULT - error;
+ * -EAGAIN - try again.
+ */
+static inline int umcg_sll_pop(u64 __user *head, u64 *value, int max_tries)
+{
+ int step;
+ u64 curr;
+
+ smp_mb(); /* Make the read below clean. */
+ if (get_user(curr, head))
+ return -EFAULT;
+
+ for (step = 0; step < max_tries; step++) {
+ int ret;
+ u64 next;
+
+ if (!curr) {
+ *value = 0UL;
+ return 0;
+ }
+
+ if (curr & 1UL)
+ return -EFAULT; /* Should not happen. */
+
+ smp_mb(); /* Make the read below clean. */
+ if (get_user(next, (u64 __user *)curr))
+ return -EFAULT;
+
+ if (next & 1UL) { /* curr is deleted */
+ curr = next & ~1UL;
+ continue;
+ }
+
+ ret = cmpxchg_user_64((u64 __user *)curr, &next, next | 1UL);
+ if (!ret) {
+ *value = curr;
+ return 0;
+ }
+ if (ret == -EAGAIN) {
+ curr = next & ~1UL;
+ cpu_relax();
+ continue;
+ }
+
+ /* Any other error is a fault. */
+ return -EFAULT;
+ }
+
+ return -EAGAIN;
+}
#endif /* CONFIG_X86_64 */
#endif /* _KERNEL_SCHED_UMCG_H */
--
2.25.1