Message-ID: <20130314042227.GA15675@dcvr.yhbt.net>
Date:	Thu, 14 Mar 2013 04:22:27 +0000
From:	Eric Wong <normalperson@...t.net>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	Lai Jiangshan <laijs@...fujitsu.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Stephen Hemminger <shemminger@...tta.com>,
	Davide Libenzi <davidel@...ilserver.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] Linux kernel Wait-Free Concurrent Queue
 Implementation

Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:
> Ported to the Linux kernel from Userspace RCU library, at commit
> 108a92e5b97ee91b2b902dba2dd2e78aab42f420.
> 
> Ref: http://git.lttng.org/userspace-rcu.git
> 
> It is provided as a starting point only. Test cases should be ported
> from Userspace RCU to kernel space and thoroughly ran on a wide range of
> architectures before considering this port production-ready.

Thanks, this seems to work.  Will post an early epoll patch in a minute.

Minor comments below.

> +/*
> + * Load a data from shared memory.
> + */
> +#define CMM_LOAD_SHARED(p)		ACCESS_ONCE(p)

When iterating through the queue by dequeueing, I needed a way to
snapshot the tail at the start of the iteration and use it as a
sentinel, so I access the tail like this:

	struct wfcq_node *p = CMM_LOAD_SHARED(ep->rdltail.p);

I hope this is supported... it seems to work :)

Unlike most queue users, I need to stop iteration to prevent the same
item from appearing more than once in the events returned by
epoll_wait, since a dequeued item may be re-enqueued onto the wfcqueue
immediately.
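
Roughly, the dequeue loop I have in mind looks like this.  It's only a
sketch: the ep/rdlhead/rdltail names are stand-ins from my epoll usage,
and I'm assuming a dequeue helper along the lines of
__wfcq_dequeue(head, tail) -- adjust for whatever the port actually
exports:

/* minimal stand-in; the real fields live in struct eventpoll */
struct ep_sketch {
	struct wfcq_head rdlhead;
	struct wfcq_tail rdltail;
};

static void ep_send_events_sketch(struct ep_sketch *ep)
{
	/* snapshot the tail once and use it as the iteration sentinel */
	struct wfcq_node *sentinel = CMM_LOAD_SHARED(ep->rdltail.p);
	struct wfcq_node *node;

	for (;;) {
		node = __wfcq_dequeue(&ep->rdlhead, &ep->rdltail);
		if (!node)
			break;

		/* ... hand the event for this node to userspace ... */

		/*
		 * Stop at the node that was the tail when we started;
		 * anything enqueued after that (including items we just
		 * dequeued that got re-added) waits for the next
		 * epoll_wait call, so nothing is reported twice.
		 */
		if (node == sentinel)
			break;
	}
}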

> +struct wfcq_head {
> +	struct wfcq_node node;
> +	struct mutex lock;
> +};

I'm not using this lock at all, since I already have ep->mtx (which
also protects ep->rbr).  Perhaps it should not be included; normal
linked lists and most other data structures I see in the kernel do not
provide their own locks, either.
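
i.e. I'd expect just something like this (sketch only):

/* sketch: no embedded lock; callers provide their own serialization */
struct wfcq_head {
	struct wfcq_node node;
};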

> +static inline void wfcq_init(struct wfcq_head *head,
> +		struct wfcq_tail *tail)
> +{
> +	/* Set queue head and tail */
> +	wfcq_node_init(&head->node);
> +	tail->p = &head->node;
> +	mutex_init(&head->lock);
> +}

There's no corresponding mutex_destroy call, so I'm just destroying
the mutex myself in my cleanup path.
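
If the lock does stay in the structure, maybe a trivial counterpart
like this is worth adding (hypothetical helper, not in your patch):

/*
 * Hypothetical counterpart to wfcq_init(); callers must have drained
 * the queue first.  All this does is pair up the mutex_init().
 */
static inline void wfcq_destroy(struct wfcq_head *head)
{
	mutex_destroy(&head->lock);
}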