Date:   Tue, 2 Oct 2018 14:20:05 +0100
From:   Will Deacon <will.deacon@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...nel.org, linux-kernel@...r.kernel.org, longman@...hat.com,
        andrea.parri@...rulasolutions.com, tglx@...utronix.de
Subject: Re: [RFC][PATCH 2/3] locking/qspinlock: Rework some comments

Hi Peter,

On Mon, Oct 01, 2018 at 09:10:22PM +0200, Peter Zijlstra wrote:
> On Mon, Oct 01, 2018 at 06:17:08PM +0100, Will Deacon wrote:
> > On Wed, Sep 26, 2018 at 01:01:19PM +0200, Peter Zijlstra wrote:
> > > +
> > >  	/*
> > > -	 * If we observe any contention; undo and queue.
> > > +	 * If we observe contention, there was a concurrent lock.
> > 
> > Nit: I think "concurrent lock" is confusing here, because that implies to
> > me that the lock was actually taken behind our back, which isn't necessarily
> > the case. How about "there is a concurrent locker"?
> 
> Yes, that's better.

Thanks.

> > > +	 *
> > > +	 * Undo and queue; our setting of PENDING might have made the
> > > +	 * n,0,0 -> 0,0,0 transition fail and it will now be waiting
> > > +	 * on @next to become !NULL.
> > >  	 */
> > 
> > Hmm, but it could also fail another concurrent set of PENDING (and the lock
> > could just be held the entire time).
> 
> Right. What I wanted to convey was that if we observe _any_ contention,
> we must abort and queue, because of that above condition failing and
> waiting on @next.
> 
> The other cases weren't as critical, but that one really does require us
> to queue in order to make forward progress.
> 
> Or did I misunderstand your concern?

See below, since I think my comments are related.
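
(To spell out the forward-progress part for the archive -- a paraphrase
of the queue-head side from memory, so the details may be slightly off:

	/* queue head: the n,0,0 -> 0,0,1 claim failed; take the lock anyway */
	set_locked(lock);

	/* ...then wait for our successor to show up */
	if (!next)
		next = smp_cond_load_relaxed(&node->next, (VAL));

If it was our PENDING that made the claim fail, @next only becomes !NULL
once we actually queue, so queueing really is required for progress.)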

> > >  	if (unlikely(val & ~_Q_LOCKED_MASK)) {
> > > +
> > > +		/* Undo PENDING if we set it. */
> > >  		if (!(val & _Q_PENDING_MASK))
> > >  			clear_pending(lock);
> > > +
> > >  		goto queue;
> > >  	}
> > >  
> > > @@ -466,7 +473,7 @@ void queued_spin_lock_slowpath(struct qs
> > >  	 * claim the lock:
> > >  	 *
> > >  	 * n,0,0 -> 0,0,1 : lock, uncontended
> > > -	 * *,*,0 -> *,*,1 : lock, contended
> > > +	 * *,0,0 -> *,0,1 : lock, contended
> > 
> > Pending can be set behind our back in the contended case, in which case
> > we take the lock with a single byte store and don't clear pending. You
> > mention this in the updated comment below, but I think we should leave this
> > comment alone.
> 
> Ah, so the reason I wrote it that way is that when we get here,
> val.locked_pending == 0, per the atomic_cond_read_acquire() condition.
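
(For reference, that would be the head's

	val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));

i.e. both the locked byte and pending are zero by the time we get here --
quoting from memory, so the exact spelling may differ.)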

Ah, and I vaguely remember discussing this before. The way I read these
transition diagrams, I find it most useful if they correspond to the lock
word in memory. That way, it makes clear exactly which fields are stable
and which can be concurrently modified. So in the comment above,
saying:

	 *,*,0 -> *,*,1 : lock, contended

is really helpful, because it clearly says "we're taking the lock, but the
rest of the lock word could be modified by others at the same time", whereas
saying:

	 *,0,0 -> *,0,1 : lock, contended

implies to me that pending is stable and cannot be set concurrently.
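
(For anyone reading along: the triples are (tail, pending, locked), per
the lock word layout documented at the top of qspinlock.c -- abridged
from memory, so check the file for the exact bit split:

	 0- 7: locked byte
	    8: pending
	16-31: tail (index + CPU)

so the leading '*' is the tail field and the middle digit is pending.)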

Will
