Date:	Tue, 8 Jul 2008 18:41:50 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Rik van Riel <riel@...hat.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Christoph Lameter <clameter@....com>,
	Petr Tesarik <ptesarik@...e.cz>, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org
Subject: Re: Spinlocks: Factor out GENERIC_LOCKBREAK in order to avoid spinning with irqs disabled

On Tuesday 08 July 2008 15:57, Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:

> > They prevent lots of unfairness and starvation problems. The most
> > prominent one (ie. actually observed in Linux) was a single CPU
> > being totally starved by N others (to the point where lockup timers
> > would kick in).
>
> Yep.  My understanding was that the specific case was that cpu A was
> repeatedly taking and releasing the lock, while other cpus spin waiting
> for it, and that the cache coherence logic kept the cacheline owned by A
> (presumably because it kept modifying it).

In that case, the CPUs I've tested seem to do a reasonable job of
avoiding hogging the lock, and they all clearly make some attempt
at fairness/starvation avoidance for plain cacheline access...

But spinlocks require cacheline access *at exactly the right time*
in order to make progress. So simple cacheline fairness can break
down quite easily.
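
To illustrate (this is just a sketch using GCC's __sync builtins, not
the kernel's actual asm implementation, and it glosses over memory
ordering), the old test-and-set style of spinlock looks roughly like
this. Every waiter must win the atomic exchange in the narrow window
after release, so whichever CPU the coherence protocol happens to
favour at that moment takes the lock:

/*
 * Hypothetical sketch of a classic test-and-set spinlock.
 * Not the kernel's code.
 */
typedef struct { volatile int locked; } old_spinlock_t;

static void old_spin_lock(old_spinlock_t *lock)
{
	/*
	 * Every contending CPU performs atomic RMW ops on the same
	 * cacheline, so the lock goes to whoever owns the line at
	 * release time -- there is no queueing or ordering at all.
	 */
	while (__sync_lock_test_and_set(&lock->locked, 1))
		while (lock->locked)
			;	/* spin on plain reads until it looks free */
}

static void old_spin_unlock(old_spinlock_t *lock)
{
	__sync_lock_release(&lock->locked);	/* store 0, release semantics */
}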


> Ticket locks work well in
> this case because, as you say, they enforce a fairness policy that the
> hardware doesn't implement.  Are there other cases that ticket locks
> help with?

Well, just starvation and fairness, mainly.

Interestingly, ticket locks sometimes actually have better scalability
properties under high contention (because the lock acquisition slowpath
requires no further stores to the lock cacheline).
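
To make that concrete, here is a minimal ticket lock sketch (again a
sketch using GCC's __sync builtins, not the kernel's asm, which packs
head and tail into a single word): the only atomic RMW is the
fetch-and-add that takes a ticket, after which the waiter just reads
the head until its number comes up:

/*
 * Minimal ticket lock sketch; not the kernel's implementation,
 * and memory-ordering details are glossed over.
 */
typedef struct {
	volatile unsigned int head;	/* ticket currently being served */
	volatile unsigned int tail;	/* next ticket to hand out */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *lock)
{
	/* One atomic RMW on the lock cacheline to take a ticket... */
	unsigned int my_ticket = __sync_fetch_and_add(&lock->tail, 1);

	/*
	 * ...then the slowpath only loads, so waiters don't keep
	 * stealing the line from each other with stores, and they
	 * are served strictly in FIFO order.
	 */
	while (lock->head != my_ticket)
		;	/* the kernel would use cpu_relax() here */
}

static void ticket_unlock(ticket_lock_t *lock)
{
	lock->head++;	/* only the lock owner ever stores to head */
}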

If a single CPU is allowed to take and release a contended lock
multiple times, I don't consider that in itself a bad property of
a lock implementation. I just reasoned that (maybe a little
counterintuitively) it isn't such an important thing to have either...


> Does the algorithm above solve the starvation issue? 

I don't believe so because there is still no guarantee you get out of
the slowpath (it's fairly similar to old spinlocks in that regard).


> > As an aside, these locks you propose are also a lot more costly in
> > the contended path. 4 vs 1 atomic operations on the lock cacheline
> > is not so great.
>
> Yep, that's not great.  But it doesn't bounce cache lines around as much
> either, so perhaps it doesn't make much difference.
>
> But really, I was being my own devil's advocate, to see if there's some
> other lock algorithm which satisfies both the normal ticket-lock case
> and the virtualization case.

Yeah, I'm not sure. I expect it would be hard to match the ticket
lock's simplicity and guarantees.


> I have no real objections to ticket locks, so long as I can turn them off
> ;)

Sure. BTW, I have no problem with your patchset, but if SMP guests
are going to be used seriously on x86, just remember the fairness
issue. At some point you might find you need an even more advanced
lock for Xen.