Message-ID: <CA+55aFyHQqrBAnod2BUA3X88bNdamzLZrFyZcZDLwudxjaib1Q@mail.gmail.com>
Date:	Fri, 20 May 2016 10:00:45 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Boqun Feng <boqun.feng@...il.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Manfred Spraul <manfred@...orfullife.com>,
	Waiman Long <Waiman.Long@....com>,
	Ingo Molnar <mingo@...nel.org>, ggherdovich@...e.com,
	Mel Gorman <mgorman@...hsingularity.net>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	Will Deacon <will.deacon@....com>
Subject: Re: sem_lock() vs qspinlocks

On Fri, May 20, 2016 at 9:04 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> +        * queued_spin_lock_slowpath() can ACQUIRE the lock before
> +        * issuing the unordered store that sets _Q_LOCKED_VAL.

Ugh. This was my least favorite part of the queued locks, and I really
liked the completely unambiguous semantics of the x86 atomics-based
versions we used to have.

But I guess we're stuck with it.

That said, it strikes me that almost all of the users of
"spin_is_locked()" are using it for verification purposes (not locking
correctness), and that the people who are playing games with locking
correctness are few and already have to play *other* games anyway.

See for example "ipc_smp_acquire__after_spin_is_unlocked()", which has
a big comment atop it that now becomes nonsensical with this patch.
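
Roughly, the pattern in ipc/sem.c is (paraphrasing from memory, not the
exact kernel code):

    /* simple-op fast path: hold only the per-semaphore lock */
    spin_lock(&sem->lock);
    if (!spin_is_locked(&sma->sem_perm.lock)) {
        /*
         * spin_is_locked() is only a control dependency, so we
         * still need a real barrier to order against the side
         * that held the global lock.
         */
        ipc_smp_acquire__after_spin_is_unlocked();
        return;
    }
    spin_unlock(&sem->lock);
    /* fall back to taking the global sem_perm.lock */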

So I do wonder if we should make that smp_mb() be something the
*caller* has to do, and document rules for it. IOW, introduce a new
spinlock primitive called "spin_lock_synchronize()", and then spinlock
implementations that have this non-atomic behavior with an unordered
store would do something like

    static inline void queued_spin_lock_synchronize(struct qspinlock *a,
                                                    struct qspinlock *b)
    {
        smp_mb();
    }

and then we'd document that *if* you need ordering guarantees between

   spin_lock(a);
   .. spin_is_locked/spin_wait_lock(b) ..

you have to have a

    spin_lock_synchronize(a, b);

in between.
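
i.e. the fully ordered sequence would read

    spin_lock(a);
    spin_lock_synchronize(a, b);
    ... spin_is_locked(b) / spin_unlock_wait(b) ...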

A spin-lock implementation with the old x86 atomic semantics would
make it a no-op.
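
Something like (just a sketch, the name is made up):

    static inline void spin_lock_synchronize(spinlock_t *a, spinlock_t *b)
    {
        /* lock word was set by a full-barrier atomic; nothing to do */
    }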

We should also introduce something like that
"spin_lock_acquire_after_unlock()" so that the ipc/sem.c behavior
would be documented too. Partly because some spinlock implementations
might have stronger unlock ordering (and it could be a no-op for
them), but mostly for documentation of the rules.
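
A strawman, mirroring what ipc/sem.c open-codes today (the name and the
exact barrier strength are just for illustration):

    static inline void spin_lock_acquire_after_unlock(spinlock_t *lock)
    {
        /*
         * Generic fallback: full barrier. Implementations whose
         * unlock already gives the required ordering can turn
         * this into a no-op.
         */
        smp_mb();
    }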

Now, I'd take Peter's patch as-is, because I don't think any of this
matters from a *performance* standpoint, and Peter's patch is much
smaller and simpler.

But the reason I think it might be a good thing to introduce those
spin_lock_synchronize() and spin_lock_acquire_after_unlock() concepts
would be to make it very very clear what those subtle implementations
in mutexes and the multi-level locks in the ipc layer are doing and
what they rely on.

Comments? There are arguments for Peter's simple approach too ("robust
and simpler interface" vs "make our requirements explicit").

               Linus
