Message-ID: <20110104064542.GF3402@amd>
Date:	Tue, 4 Jan 2011 17:45:42 +1100
From:	Nick Piggin <npiggin@...nel.dk>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Chris Mason <chris.mason@...cle.com>,
	Frank Rowand <frank.rowand@...sony.com>,
	Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mike Galbraith <efault@....de>,
	Oleg Nesterov <oleg@...hat.com>, Paul Turner <pjt@...gle.com>,
	Jens Axboe <axboe@...nel.dk>,
	Yong Zhang <yong.zhang0@...il.com>,
	linux-kernel@...r.kernel.org, Nick Piggin <npiggin@...nel.dk>,
	Jeremy Fitzhardinge <jeremy@...p.org>
Subject: Re: [RFC][PATCH 05/17] x86: Optimize arch_spin_unlock_wait()

On Mon, Jan 03, 2011 at 12:32:42PM +0100, Peter Zijlstra wrote:
> On Fri, 2010-12-24 at 10:26 -0800, Linus Torvalds wrote:
> > On Fri, Dec 24, 2010 at 4:23 AM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> > > Only wait for the current holder to release the lock.
> > >
> > > spin_unlock_wait() can only be about the current holder, since
> > > completion of this function is inherently racy with new contenders.
> > > Therefore, there is no reason to wait until the lock is completely
> > > unlocked.
> > 
> > Is there really any reason for this patch? I'd rather keep the simpler
> > and more straightforward code unless you have actual numbers.
> 
> No numbers; the testcase I use for this series is too unstable to really
> give results that fine-grained. It's more a result of seeing the code and
> going: "oohh that can wait a long time when the lock is severely contended".

It would be kind of nice to fix: with ticket locks, a dumb spin_unlock_wait
can starve indefinitely if the lock queue is never empty, whereas with the
simple spinlocks it would at least have a statistical chance of being handed
the cacheline in the unlocked state.
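
To make the starvation point concrete, here is a rough sketch of the
difference (purely illustrative: the head/tail field names and simplified
layout are my assumptions, not the real arch_spinlock_t, and it just uses
the kernel's ACCESS_ONCE()/cpu_relax()):

/* Simplified ticket lock: "head" is the ticket currently being served,
 * "tail" is the next ticket to be handed out. */
struct ticket_lock {
	unsigned char head;
	unsigned char tail;
};

/* Dumb wait: spin until nobody holds the lock at all.  With a steady
 * stream of new contenders head never catches up with tail, so this
 * can starve indefinitely. */
static void dumb_unlock_wait(struct ticket_lock *lock)
{
	while (ACCESS_ONCE(lock->head) != ACCESS_ONCE(lock->tail))
		cpu_relax();
}

/* Wait only for the current holder: sample the ticket being served and
 * spin until it moves on (or the lock was already free). */
static void holder_unlock_wait(struct ticket_lock *lock)
{
	unsigned char head = ACCESS_ONCE(lock->head);

	if (head == ACCESS_ONCE(lock->tail))
		return;
	while (ACCESS_ONCE(lock->head) == head)
		cpu_relax();
}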

 
> But I think I can get rid of the need for calling this primitive
> altogether, which is even better.

I always hated it because it seems hard to use right and to verify that the
result is correct, particularly because it has no memory ordering guarantees.

assert(active == 1);
spin_lock(&blah);
if (should_die)
  active = 0;
counter++;
spin_unlock(&blah);

if (active) {
  spin_lock(&blah);
  /* do something */
  spin_unlock(&blah);
} else {
  /* wait for the last holder to go away */
  spin_unlock_wait(&blah);
  counter++;    /* unlocked access; only the wait above "protects" it */
}

I don't know; it's a stupid example, but I can't really think of good ways
to use it off the top of my head.

Anyway, this has a lost-update problem even on x86, because counter can be
speculatively loaded out of order with respect to the load of the lock word.
So the nice simple lock APIs, which supposedly don't require any thought
about barriers, have tricked us!
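
If a caller like the sketch above really had to stay, presumably it would at
least need an explicit barrier between the wait and the unlocked access,
something like the following (just illustrating the kind of fix a caller
would need; the smp_mb() is my assumption, not something the API promises,
and new contenders can of course still slip in after the wait):

} else {
  /* wait for the current holder to go away */
  spin_unlock_wait(&blah);
  /* order the lock-word loads above against the counter access below */
  smp_mb();
  counter++;
}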

So I agree, taking it out the back and shooting it in the head would make
the world a better place.

