Date:   Thu, 2 Nov 2017 21:08:40 +0100
From:   Andrea Arcangeli <aarcange@...hat.com>
To:     Laurent Dufour <ldufour@...ux.vnet.ibm.com>
Cc:     paulmck@...ux.vnet.ibm.com, peterz@...radead.org,
        akpm@...ux-foundation.org, kirill@...temov.name,
        ak@...ux.intel.com, mhocko@...nel.org, dave@...olabs.net,
        jack@...e.cz, Matthew Wilcox <willy@...radead.org>,
        benh@...nel.crashing.org, mpe@...erman.id.au, paulus@...ba.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, hpa@...or.com,
        Will Deacon <will.deacon@....com>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        haren@...ux.vnet.ibm.com, khandual@...ux.vnet.ibm.com,
        npiggin@...il.com, bsingharora@...il.com,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        linuxppc-dev@...ts.ozlabs.org, x86@...nel.org
Subject: Re: [PATCH v5 07/22] mm: Protect VMA modifications using VMA
 sequence count

On Thu, Nov 02, 2017 at 06:25:11PM +0100, Laurent Dufour wrote:
> I think there is a memory barrier missing when the VMA is modified, so
> currently the modifications done in the VMA structure may not be
> visible by the time the pte is locked. Doing that change will therefore
> also require calling smp_wmb() before locking the page tables. In the
> current patch this is ensured by the call to write_seqcount_end().
> Doing so will still require a memory barrier when touching the VMA. I'm
> not sure we'd get much better performance compared to the sequence
> count change. But I'll give it a try anyway ;)
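
For reference, the ordering the quoted patch relies on is roughly this
(a sketch only; vm_sequence follows the naming in this series, and
new_flags is just a placeholder):

	/*
	 * write_seqcount_end() issues an smp_wmb() before bumping the
	 * sequence, so the vma fields written inside the write side
	 * are visible before a reader can observe the new count.
	 */
	write_seqcount_begin(&vma->vm_sequence);
	vma->vm_flags = new_flags;		/* example vma modification */
	write_seqcount_end(&vma->vm_sequence);	/* smp_wmb(), then count++ */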

Luckily smp_wmb is a noop on x86. I would suggest ignoring the above
issue completely if you give it a try, and then if this performs, we
can just embed an smp_wmb() before spin_lock() somewhere in
pte_offset_map_lock/pte_lockptr/spin_lock_nested for those archs whose
spin_lock isn't an smp_wmb() equivalent. I would focus on flushing
writes before every pagetable spin_lock for non-x86 archs, rather than
after all vma modifications. That should be easier to keep under
control, and it's going to be more efficient too, since if anything
there are fewer spin locks than vma modifications.

For non-x86 archs we may then need an smp_wmb__before_spin_lock. That
looks more self-contained than surrounding all vma modifications, and
it's a noop on x86 anyway.
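
A sketch of how that could look (smp_wmb__before_spin_lock() doesn't
exist yet, and pte_offset_map_lock() below is simplified):

	/*
	 * Hypothetical helper: on x86 spin_lock() already orders prior
	 * stores (smp_wmb() is just a compiler barrier there), other
	 * archs need an explicit write barrier first.
	 */
	#ifdef CONFIG_X86
	#define smp_wmb__before_spin_lock()	barrier()
	#else
	#define smp_wmb__before_spin_lock()	smp_wmb()
	#endif

	#define pte_offset_map_lock(mm, pmd, address, ptlp)	\
	({							\
		spinlock_t *__ptl = pte_lockptr(mm, pmd);	\
		pte_t *__pte = pte_offset_map(pmd, address);	\
		/* flush vma writes before the pagetable lock */\
		smp_wmb__before_spin_lock();			\
		spin_lock(__ptl);				\
		*(ptlp) = __ptl;				\
		__pte;						\
	})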

I thought about the contention detection logic too yesterday: to
detect contention we could have a mm->mmap_sem_contention_jiffies, and
if down_read_trylock_exclusive() [same as down_read_if_not_hold in the
prev mail] fails (and it'll fail if either the read or the write
mmap_sem is held, so it also covers mremap/mprotect etc.), we set
mm->mmap_sem_contention_jiffies = jiffies. Then, to know if you must
not touch the mmap_sem at all, you compare jiffies against
mmap_sem_contention_jiffies; if they're equal we go speculative. If
that's not enough we can just keep going speculative for a few more
jiffies with time_before(). The srcu lock is not a concern because the
inc/dec of the fast path is in a per-CPU cacheline of course; no false
sharing is possible there, or it wouldn't be any better than a normal
lock.
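
In pseudo-code (down_read_trylock_exclusive() and
mmap_sem_contention_jiffies are the names proposed above, nothing of
this exists yet):

	static bool mmap_sem_recently_contended(struct mm_struct *mm)
	{
		/* stay speculative for a couple of jiffies after contention */
		return time_before(jiffies,
				   READ_ONCE(mm->mmap_sem_contention_jiffies) + 2);
	}

	/* in the page fault path */
	if (mmap_sem_recently_contended(mm))
		return handle_speculative_fault(mm, address); /* don't touch mmap_sem */

	if (!down_read_trylock_exclusive(&mm->mmap_sem)) {
		/* fails if the read *or* the write side is held */
		WRITE_ONCE(mm->mmap_sem_contention_jiffies, jiffies);
		return handle_speculative_fault(mm, address);
	}
	/* uncontended: take the classic path under the mmap_sem */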

The vma revalidation is already done by khugepaged and mm/userfaultfd;
both need to drop the mmap_sem and continue working on the pagetables,
so we already know it's workable and not too slow.
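
The pattern is roughly the same (cf. hugepage_vma_revalidate() in
mm/khugepaged.c):

	/* re-take the mmap_sem and look the vma up again */
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	if (!vma || vma->vm_start > address) {
		/* vma is gone or no longer covers the address: bail out */
		up_read(&mm->mmap_sem);
		return VM_FAULT_RETRY;
	}
	/* then re-check vm_flags, anon_vma etc. before trusting it */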

Summarizing: by using a contention-triggered speculative design that
goes speculative only when contention is detected at runtime using the
above logic (or equivalent), and by having to revalidate the vma by
hand with find_vma() without knowing instantly whether the vma became
stale, we will run with a substantially slower speculative page fault
than with your current always-on speculative design, but the slower
speculative page fault will still scale 100% in SMP, so it should
still be faster on large SMP systems. The upside is that it won't
regress the mmap/brk vma modifications. The whole complexity of
tracking the vma modifications should also go away, and the resulting
code should be more maintainable and less risky to break in subtle
ways that are impossible to reproduce.

Thanks!
Andrea
