Message-ID: <CA+EESO5ABYANQuynOs57UGYMcOaMjKN9TQdv4T2PObY5ng_1nw@mail.gmail.com>
Date: Wed, 30 Sep 2020 15:42:17 -0700
From: Lokesh Gidra <lokeshgidra@...gle.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Kalesh Singh <kaleshsingh@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Minchan Kim <minchan@...gle.com>,
Joel Fernandes <joelaf@...gle.com>, kernel-team@...roid.com,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Kees Cook <keescook@...omium.org>,
Peter Zijlstra <peterz@...radead.org>,
Sami Tolvanen <samitolvanen@...gle.com>,
Masahiro Yamada <masahiroy@...nel.org>,
Arnd Bergmann <arnd@...db.de>,
Frederic Weisbecker <frederic@...nel.org>,
Krzysztof Kozlowski <krzk@...nel.org>,
Hassan Naveed <hnaveed@...ecomp.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...nel.org>, Gavin Shan <gshan@...hat.com>,
Zhenyu Ye <yezhenyu2@...wei.com>, Jia He <justin.he@....com>,
John Hubbard <jhubbard@...dia.com>,
William Kucharski <william.kucharski@...cle.com>,
Sandipan Das <sandipan@...ux.ibm.com>,
Ralph Campbell <rcampbell@...dia.com>,
Mina Almasry <almasrymina@...gle.com>,
Ram Pai <linuxram@...ibm.com>,
Dave Hansen <dave.hansen@...el.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Brian Geffon <bgeffon@...gle.com>,
SeongJae Park <sjpark@...zon.de>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH 0/5] Speed up mremap on large regions
On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
<kirill.shutemov@...ux.intel.com> wrote:
>
> On Wed, Sep 30, 2020 at 10:21:17PM +0000, Kalesh Singh wrote:
> > mremap time can be optimized by moving entries at the PMD/PUD level if
> > the source and destination addresses are PMD/PUD-aligned and
> > PMD/PUD-sized. Enable moving at the PMD and PUD levels on arm64 and
> > x86. Other architectures where this type of move is supported and known to
> > be safe can also opt in to these optimizations by enabling HAVE_MOVE_PMD
> > and HAVE_MOVE_PUD.
> >
> > Observed Performance Improvements for remapping a PUD-aligned 1GB-sized
> > region on x86 and arm64:
> >
> > - HAVE_MOVE_PMD is already enabled on x86 : N/A
> > - Enabling HAVE_MOVE_PUD on x86 : ~13x speed up
> >
> > - Enabling HAVE_MOVE_PMD on arm64 : ~8x speed up
> > - Enabling HAVE_MOVE_PUD on arm64 : ~19x speed up
> >
> > Altogether, HAVE_MOVE_PMD and HAVE_MOVE_PUD give a total of ~150x
> > speed up on arm64 (the two gains compound: ~8x * ~19x ≈ 150x).
>
> Is there a *real* workload that benefits from HAVE_MOVE_PUD?
>
We have a Java garbage collector under development which requires
moving physical pages of a multi-gigabyte heap using mremap. During this
move, the application threads have to be paused for correctness. It is
critical to keep this pause as short as possible to avoid jitter
during user interaction. This is where HAVE_MOVE_PUD will greatly
help.
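
For concreteness, here is a minimal userspace sketch of the kind of move
that can take the PUD path on a kernel with HAVE_MOVE_PUD. The 1GB size,
the over-reserve-and-align trick and the PROT_NONE placeholder for the
destination are illustrative assumptions on my part, not something taken
from the patch set or from our collector:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define PUD_SIZE (1UL << 30)  /* 1GB: PUD-sized with 4K pages on x86-64/arm64 */

/* Reserve 2*PUD_SIZE of anonymous memory and return a PUD-aligned chunk. */
static char *reserve_pud_aligned(int prot)
{
        char *raw = mmap(NULL, 2 * PUD_SIZE, prot,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (raw == MAP_FAILED)
                return NULL;
        return (char *)(((uintptr_t)raw + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
}

int main(void)
{
        char *src = reserve_pud_aligned(PROT_READ | PROT_WRITE);
        char *dst = reserve_pud_aligned(PROT_NONE); /* placeholder target */

        if (!src || !dst) {
                perror("mmap");
                return 1;
        }

        /* Fault the source in so there are page tables worth moving. */
        for (unsigned long off = 0; off < PUD_SIZE; off += 4096)
                src[off] = 1;

        /*
         * Source, destination and length are all PUD-aligned, so a kernel
         * with HAVE_MOVE_PUD can move a single PUD entry here instead of
         * copying ~262144 individual PTEs.
         */
        if (mremap(src, PUD_SIZE, PUD_SIZE,
                   MREMAP_MAYMOVE | MREMAP_FIXED, dst) == MAP_FAILED) {
                perror("mremap");
                return 1;
        }
        printf("moved 1GB of mappings from %p to %p\n",
               (void *)src, (void *)dst);
        return 0;
}

If either end of the move is not PMD/PUD-aligned, or the length is not a
whole multiple of the PMD/PUD size, the kernel falls back to moving the
smaller entries for the unaligned parts, which is exactly the cost the
series is trying to avoid for large, aligned regions.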
> --
> Kirill A. Shutemov