Date:	Mon, 20 Apr 2015 13:08:20 -0700
From:	Davidlohr Bueso <dave@...olabs.net>
To:	George Spelvin <linux@...izon.com>
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org
Subject: Re: [PATCH 1/2] sched: lockless wake-queues

On Mon, 2015-04-20 at 14:24 -0400, George Spelvin wrote:
> +struct wake_q_head {
> +	struct wake_q_node *first;
> +	struct wake_q_node *last;
> +};
> +
> +#define WAKE_Q_TAIL ((struct wake_q_node *) 0x01)
> +
> +#define WAKE_Q(name)					\
> +	struct wake_q_head name = { WAKE_Q_TAIL, WAKE_Q_TAIL }
> 
> Is there some reason you don't use the simpler singly-linked list
> construction with the tail being a pointer to a pointer:

Sure, that would also work.

> 
> struct wake_q_head {
>        struct wake_q_node *first, **lastp;
> };
> 
> #define WAKE_Q(name)                                   \
>        struct wake_q_head name = { WAKE_Q_TAIL, &name.first }
> 
> 
> That removes a conditional from wake_q_add:
> 
> +/*
> + * Queue a task for later wake-up by wake_up_q().  If the task is already
> + * queued by someone else, leave it to them to deliver the wakeup.

This point is already made in the comment above the cmpxchg.

> + *
> + * This property makes it impossible to guarantee the order of wakeups,
> + * but for efficiency we try to deliver wakeups in the order tasks
> + * are added.  

Ok.

> If we didn't mind reversing the order, a LIFO stack
> + * would be simpler.

While true, I don't think it belongs here. (For reference, a sketch of
the LIFO alternative follows the function below.)

> + */
> +void wake_q_add(struct wake_q_head *head, struct task_struct *task)
> +{
> +	struct wake_q_node *node = &task->wake_q;
> +
> +	/*
> +	 * Atomically grab the task, if ->wake_q is !nil already it means
> +	 * it's already queued (either by us or someone else) and will get the
> +	 * wakeup due to that.
> +	 *
> +	 * This cmpxchg() implies a full barrier, which pairs with the write
> +	 * barrier implied by the wakeup in wake_up_list().
> +	 */
> +	if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
> +		return;
> +
> +	get_task_struct(task);
> +
> +	/*
> +	 * The head is context local, there can be no concurrency.
> +	 */
> +	*head->lastp = node;
> +	head->lastp = &node->next;
> +}
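
For comparison, the first/last variant has to branch on the empty case
when appending. A sketch only, reconstructed from the struct quoted at
the top rather than copied from the patch:

	/*
	 * Empty queue: there is no node whose ->next we can write,
	 * so the head itself must be updated.
	 */
	if (head->first == WAKE_Q_TAIL)
		head->first = node;
	else
		head->last->next = node;
	head->last = node;

With lastp initialized to &name.first, both cases collapse into the
two unconditional stores shown above.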
> 
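As for the LIFO remark: pushing onto a stack needs neither a tail
pointer nor the branch. A sketch, not taken from the patch, reusing
the claiming cmpxchg from wake_q_add():

	/*
	 * LIFO push. node->next stays non-NULL either way
	 * (WAKE_Q_TAIL when the queue was empty, a node otherwise),
	 * so the "already queued" test keeps working.
	 */
	node->next = head->first;
	head->first = node;

The price is that wake_up_q() would then deliver wakeups in reverse
order of queueing.
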
> It may also be worth commenting the fact that wake_up_q() leaves the
> struct wake_q_head in a corrupt state, so don't try to do it again.

Right, we could re-init the list once the loop completes, yes. But it
shouldn't matter, given how we use wake-queues.
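
Such a re-init would look roughly like this. A sketch assuming the
lastp layout quoted above; the patch's actual wake_up_q() is not part
of this hunk:

	void wake_up_q(struct wake_q_head *head)
	{
		struct wake_q_node *node = head->first;

		while (node != WAKE_Q_TAIL) {
			struct task_struct *task;

			task = container_of(node, struct task_struct, wake_q);
			/* Load ->next before the wakeup can recycle the node. */
			node = node->next;
			/* Clear ->next so the task can be queued again. */
			task->wake_q.next = NULL;

			/* Pairs with the cmpxchg() in wake_q_add(). */
			wake_up_process(task);
			put_task_struct(task);
		}

		/* The optional re-init: leave the head empty, not stale. */
		head->first = WAKE_Q_TAIL;
		head->lastp = &head->first;
	}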

Thanks,
Davidlohr

