Date:	Wed, 6 Jan 2016 09:16:43 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>
Cc:	ling.ma.program@...il.com, waiman.long@....com, mingo@...hat.com,
	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
	ling.ml@...baba-inc.com
Subject: Re: [RFC PATCH] alispinlock: acceleration from lock integration on
 multi-core platform

On Tue, Jan 05, 2016 at 09:42:27PM +0000, One Thousand Gnomes wrote:
> > It suffers from the typical problems all those constructs do; namely, it
> > wrecks accountability.
> 
> That's "government thinking" ;-) - for most real users, throughput is
> more important than accountability. With the right API it ought to also
> be compile-time switchable.

It's to do with having been involved with -rt. RT wants accountability
for such things because of PI and the like.

> > But here that is compounded by the fact that you inject other people's
> > work into 'your' lock region, thereby bloating lock hold times. Worse,
> > afaict (from a quick reading) there really isn't a bound on the amount
> > of work you inject.
> 
> That should be relatively easy to fix, but for this kind of lock you
> normally get the big wins from stuff that is only a short amount of
> executing code. The fairness you trade away in the cases where it is
> useful should be tiny except under extreme load, where the
> "accountability first" behaviour would be to fall over in a heap.
> 
> If your "lock" involves a lot of work then it should probably be a work
> queue, or not use this kind of locking at all.

Sure, but the fact that it was not even mentioned/considered doesn't
give me a warm fuzzy feeling.

> > And while it's a cute collapse of an MCS lock and a lockless-list-style
> > work queue (MCS, after all, is a lockless list), saving a few cycles over
> > the naive spinlock+llist implementation of the same thing, I really do
> > not see enough justification for any of this.
> 
> I've only personally dealt with such locks in the embedded space but
> there it was a lot more than a few cycles because you go from

Nah, what I meant was that you can do the same callback-style construct
with an llist and a spinlock.
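
For readers who haven't seen the pattern: a minimal, hypothetical sketch of
that spinlock+llist equivalent follows. It is not taken from the alispinlock
patch; the names (cb_lock, cb_work, cb_lock_queue) are invented, and it only
illustrates the shape of the construct: callers queue a callback on a
lockless list, and whoever owns the spinlock drains the list and runs
everyone's callbacks inside its critical section.

/* Hypothetical sketch only; not from the alispinlock patch. */
#include <linux/spinlock.h>
#include <linux/llist.h>

struct cb_work {
	struct llist_node node;
	void (*fn)(void *arg);
	void *arg;
};

struct cb_lock {
	spinlock_t lock;
	struct llist_head pending;
};

static void cb_lock_queue(struct cb_lock *cl, struct cb_work *work)
{
	llist_add(&work->node, &cl->pending);

	/*
	 * If someone else holds the lock, they run our callback for us;
	 * that is the "injecting work into 'your' lock region" behaviour
	 * discussed above, and nothing here bounds how much gets drained.
	 */
	while (spin_trylock(&cl->lock)) {
		struct llist_node *list = llist_del_all(&cl->pending);
		struct cb_work *w, *t;

		/* llist order is LIFO; llist_reverse_order() if FIFO matters. */
		llist_for_each_entry_safe(w, t, list, node)
			w->fn(w->arg);

		spin_unlock(&cl->lock);

		/*
		 * A callback queued between llist_del_all() and the unlock
		 * may have failed its trylock; don't leave it stranded.
		 */
		if (llist_empty(&cl->pending))
			break;
	}
}

Nothing clever, which is the point: the delegation behaviour falls out of a
plain spinlock and an llist without inventing a new lock type.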

> The claim in the original post is a 3x performance gain, but it doesn't
> explain performance doing what, which kernel locks were switched, or what
> patches were used. I don't find the numbers hard to believe for a big, big
> box, but I'd like to see the actual use-case patches so they can be benched
> with other workloads, and for latency and the like.

Very much agreed; those claims need to be substantiated with actual
patches using this thing, and independently verified.
