Message-ID: <20160105211834.GR6344@twins.programming.kicks-ass.net>
Date:	Tue, 5 Jan 2016 22:18:34 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	ling.ma.program@...il.com
Cc:	waiman.long@....com, mingo@...hat.com,
	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	ling.ml@...baba-inc.com
Subject: Re: [RFC PATCH] alispinlock: acceleration from lock integration on
 multi-core platform

On Thu, Dec 31, 2015 at 04:09:34PM +0800, ling.ma.program@...il.com wrote:
> +void alispinlock(struct ali_spinlock *lock, struct ali_spinlock_info *ali)
> +{
> +	struct ali_spinlock_info *next, *old;
> +
> +	ali->next = NULL;
> +	ali->locked = 1;
> +	old = xchg(&lock->lock_p, ali);
> +
> +	/* If NULL we are the first one */
> +	if (old) {
> +		WRITE_ONCE(old->next, ali);
> +		if(ali->flags & ALI_LOCK_FREE)
> +			return;
> +		while((READ_ONCE(ali->locked)))
> +			cpu_relax_lowlatency();
> +		return;
> +	}
> +	old = READ_ONCE(lock->lock_p);
> +
> +	/* Handle all pending works */
> +repeat:	
> +	if(old == ali)
> +		goto end;
> +
> +	while (!(next = READ_ONCE(ali->next)))
> +		cpu_relax();
> +	
> +	ali->fn(ali->para);
> +	ali->locked = 0;
> +
> +	if(old != next) {
> +		while (!(ali = READ_ONCE(next->next)))
> +			cpu_relax();
> +		next->fn(next->para);
> +		next->locked = 0;
> +		goto repeat;
> +		
> +	} else
> +		ali = next;

So I have a whole bunch of problems with this thing. For one, I object
to this being called a lock. It's much more like an async work-queue-like
thing.

It suffers from the typical problems all those constructs do; namely, it
wrecks accountability.
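
To make that concrete, a caller ends up looking roughly like the below.
This is a sketch only; the helper names and the counter_work wrapper are
made up, and only fn, para, flags and ALI_LOCK_FREE come from the posted
patch:

	struct counter_work {
		struct ali_spinlock_info ali;
		unsigned long count;
	};

	static void bump(void *arg)
	{
		((struct counter_work *)arg)->count++;
	}

	/* 'w' must stay live until bump() has actually run somewhere. */
	static void submit_bump(struct ali_spinlock *lock, struct counter_work *w)
	{
		w->ali.fn    = bump;
		w->ali.para  = w;
		w->ali.flags = ALI_LOCK_FREE;	/* fire and forget */

		alispinlock(lock, &w->ali);
		/*
		 * We can return before bump() has run; it executes on
		 * whatever CPU happens to be the combiner, in that task's
		 * context.  That is work submission, not locking, and it
		 * is exactly what makes the time unaccountable.
		 */
	}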

But here that is compounded by the fact that you inject other people's
work into 'your' lock region, thereby bloating lock hold times. Worse,
afaict (from a quick reading) there really isn't a bound on the amount
of work you inject.

This will completely wreck scheduling latency. At the very least the
callback loop should have a need_resched() test in it, but even that will
not work if this is run with IRQs disabled.
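
For illustration only, something like the below is the minimum I would
expect in the combining loop. It is a simplified rewrite of the walk in
the patch ('tail' being the lock->lock_p snapshot the patch calls 'old'),
not a fix, and it still does nothing for the IRQs-disabled case:

	static void combine(struct ali_spinlock_info *ali,
			    struct ali_spinlock_info *tail)
	{
		struct ali_spinlock_info *next;

		while (ali != tail) {
			while (!(next = READ_ONCE(ali->next)))
				cpu_relax();

			/* run the injected work, then release that waiter */
			ali->fn(ali->para);
			WRITE_ONCE(ali->locked, 0);

			/*
			 * The missing latency bound.  This only helps in
			 * preemptible context; with IRQs disabled we must
			 * not schedule here at all, which is the point.
			 */
			if (need_resched())
				cond_resched();

			ali = next;
		}
	}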


And while it's a cute collapse of an MCS lock and a lockless-list-style
work queue (MCS, after all, is a lockless list), saving a few cycles over
the naive spinlock+llist implementation of the same thing, I really
do not see enough justification for any of this.
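
For reference, the naive version I mean is not much more than this (again
a sketch; ali_work, ali_queue and ali_flush are names I just made up):

	#include <linux/llist.h>
	#include <linux/spinlock.h>

	struct ali_work {
		struct llist_node node;
		void (*fn)(void *);
		void *para;
	};

	static LLIST_HEAD(ali_list);
	static DEFINE_SPINLOCK(ali_lock);

	/* lockless producer side */
	static void ali_queue(struct ali_work *w)
	{
		llist_add(&w->node, &ali_list);
	}

	/* whoever takes the spinlock runs everything queued so far */
	static void ali_flush(void)
	{
		struct llist_node *first;
		struct ali_work *w, *t;

		spin_lock(&ali_lock);
		first = llist_del_all(&ali_list);
		/* note: llist_del_all() hands the entries back newest-first */
		llist_for_each_entry_safe(w, t, first, node)
			w->fn(w->para);
		spin_unlock(&ali_lock);
	}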
