Message-ID: <20180424091510.GB4064@hirez.programming.kicks-ass.net>
Date:   Tue, 24 Apr 2018 11:15:10 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Andrea Parri <andrea.parri@...rulasolutions.com>
Cc:     Waiman Long <longman@...hat.com>, Ingo Molnar <mingo@...hat.com>,
        linux-kernel@...r.kernel.org, Dave Chinner <david@...morbit.com>,
        Eric Sandeen <sandeen@...hat.com>,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH] locking/rwsem: Synchronize task state & waiter->task of
 readers

On Mon, Apr 23, 2018 at 10:55:14PM +0200, Andrea Parri wrote:
> > > +	/*
> > > +	 * To avoid missed wakeup of reader, we need to make sure
> > > +	 * that task state and waiter->task are properly synchronized.
> > > +	 *
> > > +	 *     wakeup		      sleep
> > > +	 *     ------		      -----
> > > +	 * __rwsem_mark_wake:	rwsem_down_read_failed*:
> > > +	 *   [S] waiter->task	  [S] set_current_state(state)
> > > +	 *	 MB		      MB
> > > +	 * try_to_wake_up:
> > > +	 *   [L] state		  [L] waiter->task
> > > +	 *
> > > +	 * For the wakeup path, the original lock release-acquire pair
> > > +	 * does not provide enough guarantee of proper synchronization.
> > > +	 */
> > > +	smp_mb();
> > > +
> > >  	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
> > >  	if (list_empty(&sem->wait_list)) {
> > >  		/* hit end of list above */
> > 

> try_to_wake_up() does:
> 
> 	raw_spin_lock_irqsave(&p->pi_lock, flags);
> 	smp_mb__after_spinlock();
> 	if (!(p->state & state))
> 
> My understanding is that this smp_mb__after_spinlock() provides us with
> the guarantee you described above.  The smp_mb__after_spinlock() should
> represent a 'cheaper way' to provide such a guarantee.

Right, I don't see what problem is being fixed here either. The scenario
in the comment is already closed by the smp_mb__after_spinlock() you
mention.

And it is fine to rely on that; we do so in other places.
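
For completeness, a litmus-style sketch of the scenario (illustrative only,
not taken from the patch; the variable names are made up, and the smp_mb()s
stand in for the barrier implied by set_current_state() on the sleep side and
for smp_mb__after_spinlock() in try_to_wake_up() on the wakeup side):

	C rwsem-reader-wakeup-sketch

	{}

	P0(int *waiter_task, int *task_state)
	{
		int r0;

		/* __rwsem_mark_wake(): [S] waiter->task */
		WRITE_ONCE(*waiter_task, 1);
		/* stands in for smp_mb__after_spinlock() in try_to_wake_up() */
		smp_mb();
		/* try_to_wake_up(): [L] state */
		r0 = READ_ONCE(*task_state);
	}

	P1(int *waiter_task, int *task_state)
	{
		int r1;

		/* rwsem_down_read_failed*(): [S] set_current_state(state) */
		WRITE_ONCE(*task_state, 1);
		/* stands in for the barrier implied by set_current_state() */
		smp_mb();
		/* rwsem_down_read_failed*(): [L] waiter->task */
		r1 = READ_ONCE(*waiter_task);
	}

	exists (0:r0=0 /\ 1:r1=0)

With a full barrier on both sides the exists clause is forbidden: either the
waker observes the new task state, or the sleeper observes the waiter->task
store and does not block, so the wakeup cannot be missed.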

> If this understanding is correct, the remaining question would be about
> whether you want to rely on (and document) the smp_mb__after_spinlock()
> in the callsite in question (the comment in wake_up_q()
> 
>    /*
>     * wake_up_process() implies a wmb() to pair with the queueing
>     * in wake_q_add() so as not to miss wakeups.
>     */
> 

So that comment is about the ordering required for wake_q_add() vs
wake_up_q(). But yes, wmb is a little confusing. I suppose I was
thinking of the NULL store vs the wakeup (store), but that doesn't
really make much sense.

And wake_up_process() being a full mb means it also implies a wmb, if that is
all that is required for the scenario at hand.
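
For context, the usage pattern under discussion is the usual wake_q one
(sketch only; 'sem' and 'waiter' are placeholders for the rwsem wake path):

	unsigned long flags;
	DEFINE_WAKE_Q(wake_q);

	raw_spin_lock_irqsave(&sem->wait_lock, flags);
	/* collect the readers to be woken off sem->wait_list */
	wake_q_add(&wake_q, waiter->task);
	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);

	/* calls wake_up_process() on each queued task */
	wake_up_q(&wake_q);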


