Date:   Thu, 2 Nov 2017 18:25:11 +0100
From:   Laurent Dufour <>
To:     Andrea Arcangeli <>
Cc:     Matthew Wilcox <>,
        Thomas Gleixner <>,
        Ingo Molnar <>,
        Will Deacon <>,
        Sergey Senozhatsky <>,
        Alexei Starovoitov <>,
        Tim Chen <>
Subject: Re: [PATCH v5 07/22] mm: Protect VMA modifications using VMA sequence

On 02/11/2017 16:16, Laurent Dufour wrote:
> Hi Andrea,
> Thanks for reviewing this series, and sorry for the late answer, I took a
> few days off...
> On 26/10/2017 12:18, Andrea Arcangeli wrote:
>> Hello Laurent,
>> Message-ID: <> shows
>> significant slowdown even for brk/malloc ops both single and
>> multi threaded.
>> The single threaded case I think is the most important because it has
>> zero chance of getting back any benefit later during page faults.
>> Could you check if:
>> 1. it's possible to change vm_write_begin to be a noop if mm->mm_count is
>>    <= 1? Hint: clone() will run single threaded so there's no way it can run
>>    in the middle of a begin/end critical section (clone could set an
>>    MMF flag to possibly keep the sequence counter activated if a child
>>    thread exits and mm_count drops to 1 while the other cpu is in the
>>    middle of a critical section in the other thread).
> This sounds to be a good idea, I'll dig on that.
> The major risk here is to have a thread calling vm_*_begin() with
> mm->mm_count > 1 and later calling vm_*_end() with mm->mm_count <= 1, but
> as you mentioned we should find a way to work around this.
>> 2. Same thing with RCU freeing of vmas. Wouldn't it be nicer if RCU
>>    freeing happened only once a MMF flag is set? That will at least
>>    reduce the risk of temporary memory waste until the next RCU grace
>>    period. The read of the MMF will scale fine. Of course, to allow
>>    points 1 and 2, the page fault should also take the mmap_sem
>>    until the MMF flag is set.
> I think we could also deal with the mm->mm_count value here: if there is
> only one thread, there is no need to postpone the VMA's free operation, is
> there?
> Also, if mm->mm_count <= 1, there is no need to try the speculative path.
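A minimal userspace sketch of that idea (names are illustrative; the
deferred list stands in for call_rcu() and the grace period): free the
VMA immediately when the mm is single threaded, defer it otherwise.

```c
#include <stdlib.h>

/* Illustrative model: a VMA that can sit on a deferred-free list. */
struct vma_model {
    struct vma_model *next;       /* deferred-free list linkage */
};

struct mm_free_model {
    int mm_count;                 /* users of this mm */
    struct vma_model *deferred;   /* stand-in for the RCU callback list */
};

/* Returns 1 if the free was deferred, 0 if it was immediate. */
static int vma_free(struct mm_free_model *mm, struct vma_model *vma)
{
    if (mm->mm_count <= 1) {
        free(vma);                /* no speculative reader can exist */
        return 0;
    }
    vma->next = mm->deferred;     /* defer, as call_rcu() would */
    mm->deferred = vma;
    return 1;
}
```

This captures the memory-waste point: the deferred list only grows when
more than one thread shares the mm.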
>> Could you also investigate a much bigger change: I wonder if it's
>> possible to drop the sequence number from the vma and stop
>> using sequence numbers entirely (which are likely the source of the
>> single threaded regression in point 1 and may explain the report in
>> the above message-id), and instead call the vma rbtree lookup once again
>> and check that everything is still the same in the vma and that the PT lock
>> obtained is still a match, to finish the anon page fault and fill the
>> pte?
> That's an interesting idea. The big deal here would be to detect that the
> VMA has been touched behind our back, but not that many VMA fields are
> involved in the speculative path, so that sounds reasonable. The other point
> is to identify the impact of the vma rbtree lookup; its cost is known and
> bounded, but the vma_srcu lock is involved too.

I think there is a memory barrier missing when the VMA is modified, so
currently the modifications done to the VMA structure may not yet be visible
at the time the pte is locked. Doing that change will therefore also require
calling smp_wmb() before locking the page tables. In the current patch this
is ensured by the call to write_seqcount_end().
So this approach will still require a memory barrier when touching the VMA.
I'm not sure we'd get much better performance compared to the sequence count
change, but I'll give it a try anyway ;)

>> Then of course we also need to add a method to the read-write
>> semaphore so it tells us if there's already one user holding the read
>> mmap_sem and we're the second one.  If we're the second one (or more
>> than second) only then we should skip taking the down_read mmap_sem.
>> Even a multithreaded app won't ever skip taking the mmap_sem until
>> there's sign of runtime contention, and it won't have to run the way
>> more expensive sequence number-less revalidation during page faults,
>> unless we get an immediate scalability payoff because we already know
>> the mmap_sem is already contended and there are multiple nested
>> threads in the page fault handler of the same mm.
> The problem is that we may have a thread entering the page fault path,
> seeing that the mmap_sem is free, grabbing it and continuing to process the
> page fault. Then another thread enters mprotect or any other mm service
> which grabs the mmap_sem, and it will be blocked until the page fault is
> done. The idea with the speculative page fault is also to not block the
> other thread which may need to grab the mmap_sem.
>> Perhaps we'd need something more advanced than a
>> down_read_trylock_if_not_hold() (which has to guaranteed not to write
>> to any cacheline) and we'll have to count the per-thread exponential
>> backoff of mmap_sem frequency, but starting with
>> down_read_trylock_if_not_hold() would be good I think.
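down_read_trylock_if_not_hold() does not exist; it is the hypothetical
primitive Andrea proposes. A userspace sketch of its intended semantics
(readers only, writers omitted for brevity): take a read hold only when
the semaphore is completely free, and report contention, without writing
to the lock's cacheline, when it is not.

```c
#include <stdatomic.h>

/* Simplified read-side model of an rwsem: 'readers' counts holders. */
struct rwsem_model {
    atomic_int readers;
};

/* Returns 1 and takes a read hold if the sem was completely free;
 * returns 0 if someone already holds it, telling the fault path to go
 * speculative instead.  On failure the compare-exchange only reads,
 * matching the "no cacheline write" requirement quoted above. */
static int down_read_trylock_if_not_hold(struct rwsem_model *sem)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&sem->readers, &expected, 1);
}

static void up_read_model(struct rwsem_model *sem)
{
    atomic_fetch_sub(&sem->readers, 1);
}
```

The contended-failure return is exactly the "we're the second one"
signal: only then would the fault handler skip the mmap_sem and pay the
revalidation cost.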
>> This is not how the current patch works; the current patch uses a
>> sequence number because it pretends to go lockless always, and in turn
>> has to slow down all vma update fast paths, or the revalidation
>> slows down page fault performance too much (as it always
>> revalidates).
>> I think it would be much better to go speculative only when there's
>> "detected" runtime contention on the mmap_sem with
>> down_read_trylock_if_not_hold() and that will make the revalidation
>> cost not an issue to worry about because normally we won't have to
>> revalidate the vma at all during page fault. In turn, even though the
>> revalidation becomes more expensive (a vma rbtree lookup from
>> scratch), we can drop the sequence number entirely, and that should
>> simplify the patch tremendously because all vm_write_begin/end calls
>> would disappear from the patch; in turn the mmap/brk slowdown measured
>> in the message-id above should disappear as well.
> As I mentioned above, I'm not sure about checking the lock contention when
> entering the page fault path; checking mm->mm_count or a dedicated
> mm flag should be enough, but removing the sequence lock would be a very
> good simplification. I'll dig further here, and come back soon.
> Thanks a lot,
> Laurent.
