Message-ID: <20180403203718.GE5935@redhat.com>
Date:   Tue, 3 Apr 2018 16:37:18 -0400
From:   Jerome Glisse <jglisse@...hat.com>
To:     Laurent Dufour <ldufour@...ux.vnet.ibm.com>
Cc:     paulmck@...ux.vnet.ibm.com, peterz@...radead.org,
        akpm@...ux-foundation.org, kirill@...temov.name,
        ak@...ux.intel.com, mhocko@...nel.org, dave@...olabs.net,
        jack@...e.cz, Matthew Wilcox <willy@...radead.org>,
        benh@...nel.crashing.org, mpe@...erman.id.au, paulus@...ba.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, hpa@...or.com,
        Will Deacon <will.deacon@....com>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        kemi.wang@...el.com, sergey.senozhatsky.work@...il.com,
        Daniel Jordan <daniel.m.jordan@...cle.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        haren@...ux.vnet.ibm.com, khandual@...ux.vnet.ibm.com,
        npiggin@...il.com, bsingharora@...il.com,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        linuxppc-dev@...ts.ozlabs.org, x86@...nel.org
Subject: Re: [PATCH v9 00/24] Speculative page faults

On Tue, Mar 13, 2018 at 06:59:30PM +0100, Laurent Dufour wrote:
> This is a port to kernel 4.16 of the work done by Peter Zijlstra to
> handle page faults without holding the mm semaphore [1].
> 
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> processes, since the page fault handler will not wait for other threads'
> memory layout changes to complete, assuming that those changes happen in
> another part of the process's memory space. This type of page fault is
> named a speculative page fault. If the speculative page fault fails
> because concurrency is detected or because the underlying PMD or PTE
> tables are not yet allocated, its processing is aborted and a classic
> page fault is tried instead.
> 
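As a rough sketch of the fallback contract described above (the entry
point name handle_speculative_fault() is my guess at the interface, not
necessarily what the series implements):

  /*
   * Sketch: try speculatively first, fall back to the classic path
   * (which takes the mmap_sem) when the speculative attempt aborts.
   */
  ret = handle_speculative_fault(mm, address, flags);
  if (ret & VM_FAULT_RETRY) {
      down_read(&mm->mmap_sem);
      vma = find_vma(mm, address);
      if (vma)
          ret = handle_mm_fault(vma, address, flags);
      up_read(&mm->mmap_sem);
  }
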
> The speculative page fault (SPF) has to look for the VMA matching the fault
> address without holding the mmap_sem. This is done by introducing a rwlock
> which protects the access to the mm_rb tree. Previously this was done using
> SRCU, but it introduced a lot of scheduling to process the VMA freeing
> operation, which hit performance by 20% as reported by Kemi Wang [2].
> Using a rwlock to protect access to the mm_rb tree limits the locking
> contention to these operations, which are expected to be of O(log n)
> order. In addition, to ensure that the VMA is not freed behind our back,
> a reference count is added, and two services (get_vma() and put_vma())
> are introduced to handle it. When a VMA is fetched from the RB tree using
> get_vma(), it must later be released using put_vma(). Furthermore, to
> allow the VMA to be used again by the classic page fault handler, a
> service can_reuse_spf_vma() is introduced. It is expected to be called
> with the mmap_sem held. It checks that the VMA still matches the
> specified address and releases its reference count; since the mmap_sem
> is held, it is ensured that the VMA will not be freed behind our back.
> In general, the VMA's reference count can be decremented when holding
> the mmap_sem, but it should not be increased, as holding the mmap_sem
> ensures that the VMA is stable. I can no longer see the overhead I was
> getting with the will-it-scale benchmark.
> 
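To make the lookup/refcount pairing concrete, here is a minimal sketch
of what get_vma()/put_vma() could look like; the identifiers mm_rb_lock,
vm_ref_count and __free_vma() are placeholders of mine, not necessarily
the series' actual names:

  struct vm_area_struct *get_vma(struct mm_struct *mm, unsigned long addr)
  {
      struct vm_area_struct *vma;

      read_lock(&mm->mm_rb_lock);          /* new rwlock protecting mm_rb */
      vma = find_vma(mm, addr);            /* RB tree walk */
      if (vma)
          atomic_inc(&vma->vm_ref_count);  /* pin it against freeing */
      read_unlock(&mm->mm_rb_lock);
      return vma;
  }

  void put_vma(struct vm_area_struct *vma)
  {
      if (atomic_dec_and_test(&vma->vm_ref_count))
          __free_vma(vma);                 /* last reference, free it */
  }
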
> The VMA's attributes checked during the speculative page fault processing
> have to be protected against parallel changes. This is done by using a
> per-VMA sequence lock. This sequence lock allows the speculative page
> fault handler to quickly check for parallel changes in progress and to
> abort the speculative page fault in that case.
> 
> Once the VMA is found, the speculative page fault handler checks the
> VMA's attributes to verify whether the page fault can be handled this
> way. The VMA is protected through a sequence lock which allows fast
> detection of concurrent VMA changes. If such a change is detected, the
> speculative page fault is aborted and a *classic* page fault is tried.
> VMA sequence lock write sections are added wherever VMA attributes that
> are checked during the page fault are modified.
> 
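The read side of that per-VMA sequence lock presumably follows the usual
seqcount pattern; a sketch, assuming a seqcount_t field named
vm_sequence (my name for it, not necessarily the series'):

  unsigned int seq;

  seq = read_seqcount_begin(&vma->vm_sequence);   /* snapshot generation */
  /* ... read vma->vm_flags, vma->vm_start, vma->vm_end, ... */
  if (read_seqcount_retry(&vma->vm_sequence, seq))
      goto spf_abort;   /* a writer touched the VMA, take the classic path */
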
> Once the PTE is fetched, the VMA is re-checked to see whether it has
> changed; thus, once the page table is locked, the VMA is known to be
> valid. Any other change touching this PTE would need to take the page
> table lock, so no parallel change is possible at this point.

What would have been nice is some high-level pseudo code before all of
the above detailed description. Something like:
  speculative_fault(addr) {
    mm_lock_for_vma_snapshot()
    vma_snapshot = snapshot_vma_infos(addr)
    mm_unlock_for_vma_snapshot()
    ...
    if (!vma_can_speculatively_fault(vma_snapshot, addr))
        return;
    ...
    /* Do the fault, i.e. alloc memory, read from file, ... */
    page = ...;

    preempt_disable();
    if (vma_snapshot_still_valid(vma_snapshot, addr) &&
        vma_pte_map_lock(vma_snapshot, addr)) {
        if (pte_same(*ptep, orig_pte)) {
            /* Setup the new pte */
            page = NULL;    /* page now owned by the page table */
        }
    }
    preempt_enable();
    if (page)
        put(page);          /* we raced, release the unused page */
  }

I just find pseudo code easier for grasping the high-level view of the
expected code flow.


> 
> The locking of the PTE is done with interrupts disabled; this allows
> checking the PMD to ensure that there is no ongoing collapsing operation.
> Since khugepaged first sets the PMD to pmd_none and then waits for the
> other CPUs to have caught the IPI interrupt, if the PMD is valid at the
> time the PTE is locked, we have the guarantee that the collapsing
> operation will have to wait on the PTE lock to move forward. This allows
> the SPF handler to map the PTE safely. If the PMD value is different
> from the one recorded at the beginning of the SPF operation, the classic
> page fault handler will be called to handle the operation while holding
> the mmap_sem. As the PTE lock is taken with interrupts disabled, the
> lock is taken using spin_trylock() to avoid deadlocking when handling a
> page fault while a TLB invalidate is requested by another CPU holding
> the PTE lock.
> 
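The PMD check plus trylock sequence described above might look roughly
like this (simplified; orig_pmd stands for the PMD value sampled at the
start of the speculative fault):

  local_irq_disable();                     /* hold off khugepaged's IPI */
  if (!pmd_same(READ_ONCE(*pmd), orig_pmd))
      goto spf_abort;                      /* collapse raced us, fall back */
  ptl = pte_lockptr(mm, pmd);
  if (!spin_trylock(ptl))                  /* never spin with irqs off */
      goto spf_abort;
  /* ... re-validate the PTE and install the new entry ... */
  spin_unlock(ptl);
  local_irq_enable();
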
> Support for THP is not done because, when checking the PMD, we can be
> confused by an in-progress collapsing operation done by khugepaged. The
> issue is that pmd_none() could be true either if the PMD is not yet
> populated or if the underlying PTEs are in the process of being
> collapsed. So we cannot safely allocate a PMD if pmd_none() is true.

Might be a good topic for LSF/MM: should we set the pmd to something
other than 0 when collapsing a pmd (and the same for the pud)? This
would allow supporting THP.

[...]

> 
> Ebizzy:
> -------
> The test counts the number of records per second it can manage; higher
> is better. I run it as 'ebizzy -mTRp'. To get consistent results I
> repeated the test 100 times and measured the average. The number is
> records processed per second; higher is better.
> 
>                     BASE        SPF         delta
> 16 CPUs x86 VM      14902.6     95905.16    543.55%
> 80 CPUs P8 node     37240.24    78185.67    109.95%

I find those results interesting, as it seems that SPF does not scale
well on big configurations. Note that it still shows a sizeable
improvement, so it is still a very interesting feature, I believe.

Still, understanding what is happening here might be a good idea. From
the numbers below it seems there are 2 causes of the scaling issue:
first, PTE lock contention (kind of expected, I guess); second, changes
to the VMA while faulting.

Have you thought about this? Am I reading those numbers the wrong way?

> 
> Here are the performance counter read during a run on a 16 CPUs x86 VM:
>  Performance counter stats for './ebizzy -mRTp':
>             888157      faults
>             884773      spf
>                 92      pagefault:spf_pte_lock
>               2379      pagefault:spf_vma_changed
>                  0      pagefault:spf_vma_noanon
>                 80      pagefault:spf_vma_notsup
>                  0      pagefault:spf_vma_access
>                  0      pagefault:spf_pmd_changed
> 
> And the ones captured during a run on a 80 CPUs Power node:
>  Performance counter stats for './ebizzy -mRTp':
>             762134      faults
>             728663      spf
>              19101      pagefault:spf_pte_lock
>              13969      pagefault:spf_vma_changed
>                  0      pagefault:spf_vma_noanon
>                272      pagefault:spf_vma_notsup
>                  0      pagefault:spf_vma_access
>                  0      pagefault:spf_pmd_changed
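
For anyone wanting to reproduce these numbers: the spf_* lines look like
tracepoint counts, so I assume they were collected with something along
the lines of

  perf stat -e faults -e 'pagefault:*' ./ebizzy -mRTp

with the pagefault:* tracepoints being the ones the series adds.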


There is one aspect that I would like to see covered. Maybe I am not
understanding something fundamental, but it seems to me that SPF can
trigger OOM, or at the very least over-stress page allocation.

Assume you have a lot of concurrent SPFs to an anonymous vma and they
all allocate new pages; then you might overallocate for a single
address by a factor correlated with the number of CPUs in your system.
Now multiply this across several distinct addresses and you might be
allocating a lot of memory transiently, i.e. just for a short period
of time. The fact that you quickly free the page when you fail should
keep the OOM reaper at bay, but this might still severely stress the
memory allocation path.

Am I missing something in how this all works? Or is the above something
that might be of concern? Should there be some bound on the maximum
number of concurrent SPFs (and thus a bound on the maximum temporary
page allocation)?
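
To make that last question concrete, the kind of bound I have in mind
could be as simple as a global in-flight counter; spf_inflight and
SPF_MAX_INFLIGHT are made-up names, just to illustrate:

  static atomic_t spf_inflight = ATOMIC_INIT(0);

  /* In the speculative path, before allocating anything: */
  if (atomic_inc_return(&spf_inflight) > SPF_MAX_INFLIGHT) {
      atomic_dec(&spf_inflight);
      return VM_FAULT_RETRY;   /* too many in flight, use the classic path */
  }
  /* ... do the speculative fault ... */
  atomic_dec(&spf_inflight);   /* on every exit path */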

Cheers,
Jérôme
