Message-ID: <20150420182451.6602.qmail@ns.horizon.com>
Date:	20 Apr 2015 14:24:51 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	dave@...olabs.net
Cc:	linux@...izon.com, linux-kernel@...r.kernel.org,
	peterz@...radead.org
Subject: Re: [PATCH 1/2] sched: lockless wake-queues

+struct wake_q_head {
+	struct wake_q_node *first;
+	struct wake_q_node *last;
+};
+
+#define WAKE_Q_TAIL ((struct wake_q_node *) 0x01)
+
+#define WAKE_Q(name)					\
+	struct wake_q_head name = { WAKE_Q_TAIL, WAKE_Q_TAIL }

Is there some reason you don't use the simpler singly-linked list
construction with the tail being a pointer to a pointer:

struct wake_q_head {
       struct wake_q_node *first, **lastp;
};

#define WAKE_Q(name)                                   \
       struct wake_q_head name = { WAKE_Q_TAIL, &name.first }
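
For contrast, the two-pointer head in the patch presumably needs to
special-case the empty queue somewhere, e.g. (my reconstruction, not
taken from the patch; assumes first and last both start as WAKE_Q_TAIL):

	if (head->first == WAKE_Q_TAIL)
		head->first = node;		/* empty: set the head */
	else
		head->last->next = node;	/* link after the old tail */
	head->last = node;

With the lastp form the empty case falls out naturally: lastp initially
points at ->first, so the first append simply overwrites the WAKE_Q_TAIL
sentinel stored there.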


That removes a conditional from wake_q_add:

+/*
+ * Queue a task for later wake-up by wake_up_q().  If the task is already
+ * queued by someone else, leave it to them to deliver the wakeup.
+ *
+ * This property makes it impossible to guarantee the order of wakeups,
+ * but for efficiency we try to deliver wakeups in the order tasks
+ * are added.  If we didn't mind reversing the order, a LIFO stack
+ * would be simpler.
+ */
+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
+{
+	struct wake_q_node *node = &task->wake_q;
+
+	/*
+	 * Atomically grab the task; if ->wake_q is already non-NULL, it
+	 * means the task is already queued (either by us or someone else)
+	 * and will get the wakeup due to that.
+	 *
+	 * This cmpxchg() implies a full barrier, which pairs with the write
+	 * barrier implied by the wakeup in wake_up_q().
+	 */
+	if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
+		return;
+
+	get_task_struct(task);
+
+	/*
+	 * The head is context local; there can be no concurrency.
+	 */
+	*head->lastp = node;
+	head->lastp = &node->next;
+}

It may also be worth documenting that wake_up_q() leaves the struct
wake_q_head in a corrupt state, so it must not be used again without
re-initialization.
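
For reference, wake_up_q() presumably consumes the list along these
lines (my sketch from the semantics above, not the patch itself):

void wake_up_q(struct wake_q_head *head)
{
	struct wake_q_node *node = head->first;

	while (node != WAKE_Q_TAIL) {
		struct task_struct *task;

		task = container_of(node, struct task_struct, wake_q);

		/* Advance first; clearing ->next lets the task requeue. */
		node = node->next;
		task->wake_q.next = NULL;

		/*
		 * wake_up_process() provides the ordering that pairs with
		 * the cmpxchg() in wake_q_add().
		 */
		wake_up_process(task);
		put_task_struct(task);	/* drop the wake_q_add() reference */
	}
}

Note that neither head->first nor head->lastp is reset at the end, which
is exactly why the head must be re-initialized (or simply not reused)
after a wake_up_q() pass.  The usual pattern is WAKE_Q(wq) on the stack,
wake_q_add() calls under the lock, then one wake_up_q(&wq) after the
lock is dropped.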
