Message-Id: <20250211182833.4193-1-sj@kernel.org>
Date: Tue, 11 Feb 2025 10:28:33 -0800
From: SeongJae Park <sj@...nel.org>
To: Vern Hao <haoxing990@...il.com>
Cc: SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
David Hildenbrand <david@...hat.com>,
Davidlohr Bueso <dave@...olabs.net>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 0/4] mm/madvise: remove redundant mmap_lock operations from process_madvise()
Hi Vern,
On Tue, 11 Feb 2025 16:48:06 +0800 Vern Hao <haoxing990@...il.com> wrote:
>
> On 2025/2/6 14:15, SeongJae Park wrote:
> > process_madvise() calls do_madvise() for each address range. Each
> > do_madvise() invocation therefore takes and releases the same
> > mmap_lock. Optimize away the redundant lock operations by splitting
> > the internal logic of do_madvise(), including the mmap_lock
> > operations, and calling the split-out pieces directly from
> > process_madvise() in a sequence that avoids the repeated locking. As a
> > result, process_madvise() becomes more efficient and less racy in
> > terms of its results and latency.
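(To make the intended change easier to see, below is a rough before/after
sketch.  This is simplified pseudo-code for illustration only; the helper
names madvise_lock(), madvise_do_behavior() and madvise_unlock() stand in
for the split-out pieces, and the real functions, signatures and error
handling in the series differ.)

/* Before: each range takes and drops mmap_lock inside do_madvise(). */
static ssize_t apply_ranges_old(struct mm_struct *mm,
				const struct iovec *iov, size_t vlen,
				int behavior)
{
	size_t i;

	for (i = 0; i < vlen; i++)
		do_madvise(mm, (unsigned long)iov[i].iov_base,
			   iov[i].iov_len, behavior);
	return 0;
}

/* After: lock once, apply the behavior to every range, unlock once. */
static ssize_t apply_ranges_new(struct mm_struct *mm,
				const struct iovec *iov, size_t vlen,
				int behavior)
{
	size_t i;

	madvise_lock(mm, behavior);		/* mmap_lock taken once */
	for (i = 0; i < vlen; i++)
		madvise_do_behavior(mm, (unsigned long)iov[i].iov_base,
				    iov[i].iov_len, behavior);
	madvise_unlock(mm, behavior);		/* and released once */
	return 0;
}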
[...]
> >
> > Evaluation
> > ==========
> >
[...]
> > The measurement results are below. The 'sz_batches' column shows the
> > batch size of the process_madvise() calls. A batch size of '0' means
> > plain madvise() calls.
> Hi, I just wonder why these patches can reduce the latency of
> madvise(MADV_DONTNEED) calls.
Thank you for asking this!
> > The 'before' and 'after' columns are the measured times, in
> > nanoseconds, to apply MADV_DONTNEED to the 256 MiB memory buffer on
> > kernels built without and with the last patch of this series,
> > respectively. So a lower value means better efficiency. The
> > 'after/before' column is the ratio of 'after' to 'before'.
> >
> > sz_batches   before        after         after/before
> > 0            146294215.2   121280536.2   0.829017989769427
> > 1            165851018.8   136305598.2   0.821855658085351
> > 2            129469321.2   103740383.6   0.801273866569094
> > 4            110369232.4   87835896.2    0.795836795182785
> > 8            102906232.4   77420920.2    0.752344327397609
> > 16           97551017.4    74959714.4    0.768415506038587
> > 32           94809848.2    71200848.4    0.750985786305689
> > 64           96087575.6    72593180      0.755489765942227
> > 128          96154163.8    68517055.4    0.712575022154163
> > 256          92901257.6    69054216.6    0.743307662177439
> > 512          93646170.8    67053296.2    0.716028168874151
> > 1024         92663219.2    70168196.8    0.75723892830177
[...]
> > Also note that this patch has somehow decreased the latencies of
> > madvise() and single-batch process_madvise(). It seems this code path
> > is small enough to be significantly affected by compiler
> > optimizations, including inlining of the split-out functions. Please
> > focus only on how the improvement changes with the batch size.
I believe the above paragraph may answer your question. Please let me know if
not.
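
In case a concrete example helps, below is a minimal userspace sketch of
what a batched call looks like compared to looping over madvise().  This
is not the benchmark code used for the numbers above, just an
illustration; it assumes recent kernel headers for SYS_pidfd_open and
SYS_process_madvise, and process_madvise() may require CAP_SYS_NICE
depending on the kernel version.

#define _GNU_SOURCE
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>

#define BUF_SZ		(256UL << 20)	/* 256 MiB, as in the evaluation */
#define NR_RANGES	16		/* illustrative 'sz_batches' value */

int main(void)
{
	size_t chunk = BUF_SZ / NR_RANGES;
	struct iovec iov[NR_RANGES];
	char *buf;
	int pidfd, i;

	buf = mmap(NULL, BUF_SZ, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 1, BUF_SZ);		/* fault the pages in */

	/* Roughly the 'sz_batches == 0' case: one madvise() per range. */
	for (i = 0; i < NR_RANGES; i++)
		madvise(buf + (size_t)i * chunk, chunk, MADV_DONTNEED);

	memset(buf, 1, BUF_SZ);		/* fault the pages back in */

	/* Batched case: all ranges in a single process_madvise() call. */
	pidfd = syscall(SYS_pidfd_open, getpid(), 0);
	if (pidfd < 0)
		return 1;
	for (i = 0; i < NR_RANGES; i++) {
		iov[i].iov_base = buf + (size_t)i * chunk;
		iov[i].iov_len = chunk;
	}
	if (syscall(SYS_process_madvise, pidfd, iov, (size_t)NR_RANGES,
		    MADV_DONTNEED, 0) < 0)
		return 1;

	return 0;
}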
Thanks,
SJ
[...]