Date:   Mon, 12 Sep 2016 14:54:03 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Nicholas Piggin <npiggin@...il.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Will Deacon <will.deacon@....com>,
        Oleg Nesterov <oleg@...hat.com>,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
        Alan Stern <stern@...land.harvard.edu>
Subject: Re: Question on smp_mb__before_spinlock

On Mon, Sep 12, 2016 at 12:27:08PM +1000, Nicholas Piggin wrote:
> On Wed, 7 Sep 2016 15:23:54 +0200
> Peter Zijlstra <peterz@...radead.org> wrote:

> > Interesting idea..
> > 
> > So I'm not a fan of that raw_spin_lock wrapper, since that would end up
> > with a lot more boiler-plate code than just the one extra barrier.
> 
> #ifndef sched_ctxsw_raw_spin_lock
> #define sched_ctxsw_raw_spin_lock(lock) raw_spin_lock(lock)
> #endif
> 
> #define sched_ctxsw_raw_spin_lock(lock) do { smp_mb(); raw_spin_lock(lock); } while (0)

I was thinking you wanted to avoid the lwsync in arch_spin_lock()
entirely, at which point you'll grow more layers. Because then you get
an arch_spin_lock_mb() or something and then you'll have to do the
raw_spin_lock wrappery for that.

Or am I missing the point of having the raw_spin_lock wrapper, as
opposed to the extra barrier after it?

Afaict the benefit of having that wrapper is so you can avoid issuing
multiple barriers.

> > But moving MMIO/DMA/TLB etc.. barriers into this spinlock might not be a
> > good idea, since those are typically fairly heavy barriers, and its
> > quite common to call schedule() without ending up in switch_to().
> 
> That's true I guess, but if we already have the arch specific smp_mb__
> specifically for this context switch code, and you are asking for them to
> implement *cacheable* memory barrier vs migration, then I see no reason
> not to allow them to implement uncacheable as well.
> 
> You make a good point about schedule() without switch_to(), but
> architectures will still have no less flexibility than they do now.

Ah, so you're saying make it optional where they put it? I was initially
thinking you wanted to add it to the list of requirements. Sure,
optional works.
