Date:   Mon, 1 Oct 2018 21:10:22 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will.deacon@....com>
Cc:     mingo@...nel.org, linux-kernel@...r.kernel.org, longman@...hat.com,
        andrea.parri@...rulasolutions.com, tglx@...utronix.de
Subject: Re: [RFC][PATCH 2/3] locking/qspinlock: Rework some comments

On Mon, Oct 01, 2018 at 06:17:08PM +0100, Will Deacon wrote:
> On Wed, Sep 26, 2018 at 01:01:19PM +0200, Peter Zijlstra wrote:
> > +
> >  	/*
> > -	 * If we observe any contention; undo and queue.
> > +	 * If we observe contention, there was a concurrent lock.
> 
> Nit: I think "concurrent lock" is confusing here, because that implies to
> me that the lock was actually taken behind our back, which isn't necessarily
> the case. How about "there is a concurrent locker"?

Yes, that's better.

> > +	 *
> > +	 * Undo and queue; our setting of PENDING might have made the
> > +	 * n,0,0 -> 0,0,0 transition fail and it will now be waiting
> > +	 * on @next to become !NULL.
> >  	 */
> 
> Hmm, but it could also fail another concurrent set of PENDING (and the lock
> could just be held the entire time).

Right. What I wanted to convey is that if we observe _any_ contention,
we must abort and queue, because our PENDING may be what made that
n,0,0 -> 0,0,0 transition fail, leaving the queue head waiting on
@next to become !NULL.

The other cases weren't as critical, but that one really does require us
to queue in order to make forward progress.

Or did I misunderstand your concern?
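
FWIW, a minimal user-space C11 sketch of the interleaving I mean; the
word layout and the plain cmpxchg are stand-ins of mine, not the
kernel's actual _Q_* constants and helpers:

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative layout of the (tail, pending, locked) word. */
	#define LOCKED_VAL	(1u << 0)
	#define PENDING_VAL	(1u << 8)
	#define TAIL_VAL(n)	((uint32_t)(n) << 16)

	static _Atomic uint32_t lock_val;

	int main(void)
	{
		/* Queue head with tail n sees the lock free: n,0,0. */
		atomic_store(&lock_val, TAIL_VAL(1));

		/* We set PENDING behind its back: n,0,0 -> n,1,0. */
		atomic_fetch_or(&lock_val, PENDING_VAL);

		/*
		 * Its n,0,0 -> 0,0,1 cmpxchg now fails and it falls
		 * back to waiting on @next; forward progress depends
		 * on us actually queueing, hence: any observed
		 * contention -> undo and queue.
		 */
		uint32_t expect = TAIL_VAL(1);
		if (!atomic_compare_exchange_strong(&lock_val, &expect,
						    LOCKED_VAL))
			printf("head cmpxchg failed; word now %#x\n",
			       (unsigned)atomic_load(&lock_val));
		return 0;
	}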

> >  	if (unlikely(val & ~_Q_LOCKED_MASK)) {
> > +
> > +		/* Undo PENDING if we set it. */
> >  		if (!(val & _Q_PENDING_MASK))
> >  			clear_pending(lock);
> > +
> >  		goto queue;
> >  	}
> >  
> > @@ -466,7 +473,7 @@ void queued_spin_lock_slowpath(struct qs
> >  	 * claim the lock:
> >  	 *
> >  	 * n,0,0 -> 0,0,1 : lock, uncontended
> > -	 * *,*,0 -> *,*,1 : lock, contended
> > +	 * *,0,0 -> *,0,1 : lock, contended
> 
> Pending can be set behind our back in the contended case, in which case
> we take the lock with a single byte store and don't clear pending. You
> mention this in the updated comment below, but I think we should leave this
> comment alone.

Ah, the reason I wrote it like that is that when we get here,
val.locked_pending == 0, per the atomic_cond_read_acquire() condition.
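
Roughly, in a user-space sketch with made-up helper names standing in
for atomic_cond_read_acquire() and the byte store (not the real
slowpath code):

	#include <stdatomic.h>
	#include <stdint.h>

	#define LOCKED_PENDING_MASK	0x0000ffffu  /* locked + pending bytes */
	#define LOCKED_VAL		(1u << 0)

	static _Atomic uint32_t lock_val;

	/* Stand-in for atomic_cond_read_acquire(&lock->val, ...). */
	static uint32_t wait_locked_pending_clear(void)
	{
		uint32_t val;

		do {
			val = atomic_load_explicit(&lock_val,
						   memory_order_acquire);
		} while (val & LOCKED_PENDING_MASK);

		return val;	/* at this read: val.locked_pending == 0 */
	}

	/* Stand-in for the single-byte store that takes the lock. */
	static void take_lock(void)
	{
		/*
		 * Leaves the pending byte alone; another CPU can set
		 * PENDING between the read above and this store, so
		 * the globally visible transition may be
		 * *,1,0 -> *,1,1 (your point) even though the read
		 * itself returned *,0,0 (mine).
		 */
		atomic_fetch_or_explicit(&lock_val, LOCKED_VAL,
					 memory_order_relaxed);
	}

	int main(void)
	{
		wait_locked_pending_clear();	/* word starts at 0,0,0 */
		take_lock();
		return 0;
	}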

