Date:	Tue, 21 Jan 2014 10:58:23 -0500
From:	Waiman Long <waiman.long@...com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
	linux-arch@...r.kernel.org, x86@...nel.org,
	linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michel Lespinasse <walken@...gle.com>,
	Andi Kleen <andi@...stfloor.org>,
	Rik van Riel <riel@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	George Spelvin <linux@...izon.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>, "" <aswin@...com>,
	Scott J Norton <scott.norton@...com>
Subject: Re: [PATCH v9 1/5] qrwlock: A queue read/write lock implementation

On 01/20/2014 10:21 AM, Peter Zijlstra wrote:
> On Tue, Jan 14, 2014 at 11:44:03PM -0500, Waiman Long wrote:
>> +#ifndef arch_mutex_cpu_relax
>> +# define arch_mutex_cpu_relax() cpu_relax()
>> +#endif
> Include <linux/mutex.h>
>

Will do so.
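That is, the local fallback above goes away and the header just pulls in
the existing one (a sketch; this assumes <linux/mutex.h> keeps providing
the cpu_relax() default for architectures without their own):

#include <linux/mutex.h>	/* provides the arch_mutex_cpu_relax() fallback */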

>> +#ifndef smp_load_acquire
>> +# ifdef CONFIG_X86
>> +#   define smp_load_acquire(p)				\
>> +	({						\
>> +		typeof(*p) ___p1 = ACCESS_ONCE(*p);	\
>> +		barrier();				\
>> +		___p1;					\
>> +	})
>> +# else
>> +#   define smp_load_acquire(p)				\
>> +	({						\
>> +		typeof(*p) ___p1 = ACCESS_ONCE(*p);	\
>> +		smp_mb();				\
>> +		___p1;					\
>> +	})
>> +# endif
>> +#endif
>> +
>> +#ifndef smp_store_release
>> +# ifdef CONFIG_X86
>> +#   define smp_store_release(p, v)			\
>> +	do {						\
>> +		barrier();				\
>> +		ACCESS_ONCE(*p) = v;			\
>> +	} while (0)
>> +# else
>> +#   define smp_store_release(p, v)			\
>> +	do {						\
>> +		smp_mb();				\
>> +		ACCESS_ONCE(*p) = v;			\
>> +	} while (0)
>> +# endif
>> +#endif
> Remove these.

Will do that.
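For reference, the generic fallbacks that went into <asm-generic/barrier.h>
with the load-acquire/store-release series look roughly like this (quoting
from memory, so take it as a sketch rather than the exact tree contents):

#define smp_store_release(p, v)					\
do {								\
	compiletime_assert_atomic_type(*p);			\
	smp_mb();	/* order prior accesses before the store */	\
	ACCESS_ONCE(*p) = (v);					\
} while (0)

#define smp_load_acquire(p)					\
({								\
	typeof(*p) ___p1 = ACCESS_ONCE(*p);			\
	compiletime_assert_atomic_type(*p);			\
	smp_mb();	/* order the load before later accesses */	\
	___p1;							\
})

so there is nothing x86-specific to carry here; x86 gets the
barrier()-only versions from its own barrier.h.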

>> +/*
>> + * If an xadd (exchange-add) macro isn't available, simulate one with
>> + * the atomic_add_return() function.
>> + */
>> +#ifdef xadd
>> +# define qrw_xadd(rw, inc)	xadd(&(rw).rwc, inc)
>> +#else
>> +# define qrw_xadd(rw, inc)	(u32)(atomic_add_return(inc,&(rw).rwa) - inc)
>> +#endif
> Is GCC really so stupid that you cannot always use the
> atomic_add_return()? The x86 atomic_add_return is i + xadd(), so you'll
> end up with:
>
>   i + xadd() - i
>
> Surely it can just remove the two i terms?

I guess gcc should do the right thing. I will remove the macro.
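With the xadd special case gone, every architecture uses the
atomic_add_return() form. Since atomic_add_return() returns the new value
(old + inc), subtracting inc recovers the old value that xadd would have
produced, and gcc can fold the +inc/-inc pair when it inlines. A sketch of
the single remaining form (assuming the atomic_t member keeps its "rwa"
name):

/* atomic_add_return() yields old + inc; subtract inc to get old back */
#define qrw_xadd(rw, inc)	((u32)(atomic_add_return(inc, &(rw).rwa) - inc))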

>> +/**
>> + * wait_in_queue - Add to queue and wait until it is at the head
>> + * @lock: Pointer to queue rwlock structure
>> + * @node: Node pointer to be added to the queue
>> + */
>> +static inline void wait_in_queue(struct qrwlock *lock, struct qrwnode *node)
>> +{
>> +	struct qrwnode *prev;
>> +
>> +	node->next = NULL;
>> +	node->wait = true;
>> +	prev = xchg(&lock->waitq, node);
>> +	if (prev) {
>> +		prev->next = node;
>> +		/*
>> +		 * Wait until the waiting flag is off
>> +		 */
>> +		while (smp_load_acquire(&node->wait))
>> +			arch_mutex_cpu_relax();
>> +	}
>> +}
> Please rebase on top of the MCS lock patches such that this is gone.

I would like to keep this for as long as the MCS patches have not been
merged into tip. However, I will take it out if the MCS patches are
already in by the time I need to revise the qrwlock patches.
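For context, once that rebase happens wait_in_queue() would be replaced by
the generic MCS helper, which looks roughly like this in the patches
currently in flight (a sketch; field and function names may still change
before they land):

struct mcs_spinlock {
	struct mcs_spinlock *next;
	int locked;		/* 1 if lock acquired */
};

static inline
void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
{
	struct mcs_spinlock *prev;

	/* Initialize our queue node before publishing it. */
	node->locked = 0;
	node->next   = NULL;

	prev = xchg(lock, node);	/* atomically append to the tail */
	if (likely(prev == NULL))
		return;			/* queue was empty: lock acquired */

	/* Link in behind the previous tail and spin on our own flag. */
	ACCESS_ONCE(prev->next) = node;
	while (!smp_load_acquire(&node->locked))
		arch_mutex_cpu_relax();
}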

-Longman
