Message-ID: <CAJuCfpE10y78SNPQ+LRY5EonDFhOG=1XjZ9FUUDiyhfhjZ54NA@mail.gmail.com>
Date:   Mon, 7 Mar 2022 21:37:11 -0800
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Michel Lespinasse <michel@...pinasse.org>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>, kernel-team@...com,
        Laurent Dufour <ldufour@...ux.ibm.com>,
        Jerome Glisse <jglisse@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Davidlohr Bueso <dave@...olabs.net>,
        Matthew Wilcox <willy@...radead.org>,
        Liam Howlett <liam.howlett@...cle.com>,
        Rik van Riel <riel@...riel.com>,
        Paul McKenney <paulmck@...nel.org>,
        Song Liu <songliubraving@...com>,
        Minchan Kim <minchan@...gle.com>,
        Joel Fernandes <joelaf@...gle.com>,
        David Rientjes <rientjes@...gle.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Andy Lutomirski <luto@...nel.org>
Subject: Re: [PATCH v2 00/35] Speculative page faults

On Wed, Feb 23, 2022 at 8:11 AM Mel Gorman <mgorman@...hsingularity.net> wrote:
>
> On Fri, Jan 28, 2022 at 05:09:31AM -0800, Michel Lespinasse wrote:
> > This patchset is my take on speculative page faults (spf).
> > It builds on ideas previously proposed by Laurent Dufour,
> > Peter Zijlstra and others. While Laurent's previous proposal
> > was rejected around the time of LSF/MM 2019, I am hoping we can revisit
> > this now based on what I think is a simpler and more bisectable approach,
> > much improved scaling numbers in the anonymous vma case, and the Android
> > use case that has since emerged. I will expand on these points towards
> > the end of this message.
> >
> > The patch series applies on top of linux v5.17-rc1;
> > a git tree is also available:
> > git fetch https://github.com/lespinasse/linux.git v5.17-rc1-spf-anon
> >
> > I would like these patches to be considered for inclusion into v5.18.
> > Several Android vendors are using Laurent Dufour's previous SPF work in
> > their kernel trees to improve application startup performance; they
> > want to converge on an upstream-accepted solution, and have reported good
> > numbers with previous versions of this patchset. Also, there is broader
> > interest in reducing mmap lock dependencies in critical MM paths,
> > and I think this patchset would be a good first step in that direction.
> >
>
> I think there is a serious lack of performance data here. The only
> performance point offered is the Android Application Startup case.
> Unfortunately, that benefit may be specific to the Zygote process that
> preloads classes that may be required and listens for new applications to
> start. I suspect the benefit wouldn't apply to most Linux distributions
> and even JVM-based workloads are not primarily constrained by the startup
> cost. Improving application startup costs is not a great justification
> for this level of code complexity, even though I recognise why it is a
> key performance indicator for Android given that startup times affect
> the user experience.
>
> Laurent's original work was partially motivated by the performance of
> a proprietary application. While I cannot replicate a full production
> workload as that can only be done by the company, I could do a basic
> evaluation commonly conducted on standalone systems. It was extremely
> fault intensive with SPF success rates greater than 96% but almost no
> change in actual performance. It's perfectly possible that the application
> has changed since SPF was first proposed. The developers did spend a fair
> amount of effort at making the application NUMA-aware and reusing memory
> more aggressively to avoid faults. It's still very fault intensive but,
> judging from the data, does not appear to suffer due to parallel memory
> operations.
>
> In my own tests, the only preliminary test that was a clear winner
> was will-it-scale's page-fault workloads using threads, and
> page-fault-test (pft) run with threads. To be fair, the increases there
> are dramatic, with a high success rate of speculative faults.
>
> pft timings
>                                  5.17.0-rc3             5.17.0-rc3
>                                     vanilla        mm-spfault-v2r1
> Amean     elapsed-1        32.66 (   0.00%)       32.77 *  -0.36%*
> Amean     elapsed-4         9.17 (   0.00%)        8.89 *   3.07%*
> Amean     elapsed-7         5.53 (   0.00%)        5.26 *   4.95%*
> Amean     elapsed-12        4.13 (   0.00%)        3.50 *  15.16%*
> Amean     elapsed-21        3.93 (   0.00%)        2.79 *  29.03%*
> Amean     elapsed-30        4.02 (   0.00%)        2.94 *  26.79%*
> Amean     elapsed-48        4.37 (   0.00%)        2.83 *  35.24%*
> Amean     elapsed-79        4.13 (   0.00%)        2.17 *  47.36%*
> Amean     elapsed-80        4.12 (   0.00%)        2.13 *  48.22%*
>
> Ops SPFault Attempt                        0.00  4734439786.00
> Ops SPFault Abort                          0.00     9360014.00
> Ops SPFault Success                        0.00          99.80
>
> This is the ideal case for SPF but not very realistic. Interestingly,
> ebizzy barely benefitted even though it's threaded because it's not
> guaranteed to be address space modification intensive.
>
> Hackbench took a performance hit of 0-5% depending on the exact
> configuration and machine used. It is threaded and had high SPF abort rates
> (up to 50%). It's not a great example, but it shows at least one case
> where SPF hurts more than it helps, and there may be other applications
> that are harmed by having to retry faults.
>
> The scope of SPF is narrow relative to the much older discussion of
> breaking up mmap_sem. The only time SPF benefits is when faults are racing
> against parallel memory address updates holding mmap_sem for write.
> That requires a threaded application that is both intense in terms of
> address space updates and fault intensive. That is much narrower than
> threaded applications that are address space update intensive (e.g.
> using mprotect to avoid accidentally leaking data, mapping data files
> for IO etc). Do we have examples of realistic applications that meet all
> the criteria of "threaded", "address-space intensive" and "fault
> intensive" that are common enough to justify the complexity?
>
> Admittedly, I initially just threw this series at a collection of
> workloads that simply stress the allocator, since that stresses faults as
> a side-effect, but most of them did not match the criteria for "threaded
> application that is both address space update intensive and fault
> intensive". I'm struggling to think of good examples although redis
> is a possibility. HPC workloads like NPB parallelised with OpenMP are a
> possibility, but I looked at some old results and, while NPB does trap
> faults, the vast majority are related to NUMA balancing. The other ones I
> normally consider for scaling purposes are process oriented, not threaded.
>
> On the patches themselves, I'm not sure the optimisation for ignoring SPF
> is guaranteed to work, as mm_users could be temporarily elevated (although
> probably not enough to matter). I also think patch 5 stands on its own and
> could be sent separately. For the others, I didn't read them in sufficient
> depth but noted that the amount of similar logic between the speculative
> and non-speculative paths could be a maintenance headache, making it hard
> to keep the speculative and !speculative rules in sync. I didn't see
> obvious problems
> as such but I still think the complexity is high for a corner case.

Hi Mel,
Thank you for taking the time to analyze SPF's effects on different
workloads. Your feedback drove me to look into the reasons Android
benefits from this patchset. What we know is that the apps which benefit
the most are the ones with a high number of threads (~100), and when I
strace'd one of these apps I could see that each thread mmaps several
areas upon startup (stack, thread-local storage (TLS), thread signal
stack, indirect ref table).
So, I created a simple test that spawns a given number of threads, each
of which mmaps and faults in a given number of vmas with a given number
of pages each. Each thread records the time it takes to mmap the vmas
and fault in the pages, and the test reports the total and average times
measured. You can find my test program here:
https://github.com/surenbaghdasaryan/spf_test/blob/main/spf_test.c
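
For reference, below is a minimal sketch of the shape of such a test,
written for this reply as an illustration; it is not the actual
spf_test.c above, and the argument validation and repeated-run
averaging are simplified:

/*
 * Each thread mmaps <vmas> anonymous areas of <pages> pages and
 * touches every page, timing the whole sequence.
 * Build: gcc -O2 -pthread spf_sketch.c -o spf_sketch
 * Run:   ./spf_sketch <threads> <vmas> <pages>
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static int nvmas, npages;
static long pagesize;

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void *worker(void *arg)
{
	long long *elapsed = arg;
	long long start = now_ns();
	int i, j;

	for (i = 0; i < nvmas; i++) {
		size_t len = (size_t)npages * pagesize;
		/* Each mmap creates a new vma while holding the mmap lock
		 * for write. */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		/* Touch one byte per page to fault the whole vma in. */
		for (j = 0; j < npages; j++)
			p[(size_t)j * pagesize] = 1;
	}
	*elapsed = now_ns() - start;
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t *threads;
	long long *times, total = 0;
	int i, nthreads;

	if (argc != 4) {
		fprintf(stderr, "usage: %s threads vmas pages\n", argv[0]);
		return 1;
	}
	nthreads = atoi(argv[1]);
	nvmas = atoi(argv[2]);
	npages = atoi(argv[3]);
	pagesize = sysconf(_SC_PAGESIZE);

	threads = calloc(nthreads, sizeof(*threads));
	times = calloc(nthreads, sizeof(*times));
	for (i = 0; i < nthreads; i++)
		pthread_create(&threads[i], NULL, worker, &times[i]);
	for (i = 0; i < nthreads; i++) {
		pthread_join(threads[i], NULL);
		total += times[i];
	}
	printf("total: %lld ns, avg per thread: %lld ns\n",
	       total, total / nthreads);
	return 0;
}

Each mmap takes the mmap lock for write while page faults in the other
threads proceed in parallel, which is exactly the racing pattern you
describe above.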

I ran a number of tests on my Pixel 6, and SPF shows quite positive
results even with a small number of vmas and pages. A couple of examples:

100 threads, 2 vmas, 10 pages (cmdline: spf_test 100 2 10)
Baseline avg time: 1,889,398.01ns
SPF avg time: 327,299.36ns
Improvement: 83%

100 threads, 10 vmas, 2 pages (cmdline: spf_test 100 10 2)
Baseline avg time: 1,234,861.48ns
SPF avg time: 800,392.82ns
Improvement: 35%

100 threads, 10 vmas, 10 pages (cmdline: spf_test 100 10 10)
Baseline avg time: 12,199,939.04ns
SPF avg time: 3,223,206.41ns
Improvement: 74%

100 threads, 30 vmas, 30 pages (cmdline: spf_test 100 30 30)
Baseline avg time: 255,827,268.16ns
SPF avg time: 41,538,348.47ns
Improvement: 84%

To minimize noise, each configuration was run several hundred times
with the same parameters and the results averaged across runs.
I think this test represents an example of what you were describing as
a "threaded application that is both address space update intensive
and fault intensive", because the mmaps modify the address space while
page faults happen in parallel. We can call it an artificial workload,
but it does not strike me as very unusual. I can imagine systems other
than Android spawning multiple threads, with each thread mapping some
memory area to work with and using that area immediately.
Thanks,
Suren.


>
> --
> Mel Gorman
> SUSE Labs
