Message-ID: <CAOUHufZq9=3wtFtUg0y_=OJjBcG59W9Dw2gkSG1nOOfKP83EoA@mail.gmail.com>
Date: Mon, 18 Mar 2024 20:51:01 -0400
From: Yu Zhao <yuzhao@...gle.com>
To: Aravinda Prasad <aravinda.prasad@...el.com>
Cc: damon@...ts.linux.dev, linux-mm@...ck.org, sj@...nel.org,
linux-kernel@...r.kernel.org, s2322819@...ac.uk, sandeep4.kumar@...el.com,
ying.huang@...el.com, dave.hansen@...el.com, dan.j.williams@...el.com,
sreenivas.subramoney@...el.com, antti.kervinen@...el.com,
alexander.kanevskiy@...el.com
Subject: Re: [PATCH v2 0/3] mm/damon: Profiling enhancements for DAMON
On Mon, Mar 18, 2024 at 9:24 AM Aravinda Prasad
<aravinda.prasad@...el.com> wrote:
>
> DAMON randomly samples one or more pages in every region and tracks
> accesses to them using the ACCESSED bit in the PTE (or PMD for 2MB
> pages). When the region size is large (e.g., several GBs), which is
> common for large-footprint applications, whether a region is detected
> as accessed depends entirely on whether the pages that are actively
> accessed in the region happen to be picked during random sampling.
> If such pages are not picked for sampling, DAMON fails to identify
> the region as accessed. However, increasing the sampling rate or the
> number of regions increases the CPU overhead of kdamond.
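>
> As a rough illustration (hypothetical, not the actual DAMON vaddr
> code, which also clears the bit and handles huge pages), the per-page
> check behind this sampling boils down to reading one PTE and testing
> its accessed bit:
>
>   #include <linux/mm.h>
>   #include <linux/pagewalk.h>
>
>   struct sample_priv {
>           bool young;
>   };
>
>   static int sample_pte_entry(pte_t *pte, unsigned long addr,
>                               unsigned long next, struct mm_walk *walk)
>   {
>           struct sample_priv *priv = walk->private;
>           pte_t ptent = ptep_get(pte);
>
>           /* pte_young() tests the hardware ACCESSED bit of this 4K page. */
>           if (pte_present(ptent) && pte_young(ptent))
>                   priv->young = true;
>           return 0;
>   }
>
>   static const struct mm_walk_ops sample_ops = {
>           .pte_entry = sample_pte_entry,
>   };
>
>   /* Check one sampled address; THP handling omitted for brevity. */
>   static bool sampled_page_accessed(struct mm_struct *mm, unsigned long addr)
>   {
>           struct sample_priv priv = { .young = false };
>
>           mmap_read_lock(mm);
>           walk_page_range(mm, addr, addr + PAGE_SIZE, &sample_ops, &priv);
>           mmap_read_unlock(mm);
>           return priv.young;
>   }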
>
> This patch set proposes profiling different levels of the application’s
> page table tree to detect whether a region is accessed or not. It is
> based on the observation that, when the accessed bit for a page is
> set, the accessed bits at the higher levels of the page table tree
> (PMD/PUD/PGD) along the page table walk path are also set. Hence, it
> is efficient to check the accessed bits at the higher levels of the
> page table tree to detect whether a region is accessed or not. For
> example, if the accessed bit for a PUD entry is set, then one or more
> pages in the 1GB mapping covered by that PUD entry have been accessed.
> Hence, instead of sampling thousands of 4K/2M pages to detect accesses
> in a large region, sampling at a higher level of the page table tree
> is faster and more efficient.
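>
> As a hypothetical sketch of that idea (not the patch itself), the same
> kind of page walk can consult the accessed bit of an upper-level entry
> and stop descending altogether; pud_young() is assumed to be available
> on the architecture (x86 provides it), and struct sample_priv is
> reused from the earlier sketch:
>
>   static int region_pud_entry(pud_t *pud, unsigned long addr,
>                               unsigned long next, struct mm_walk *walk)
>   {
>           struct sample_priv *priv = walk->private;
>           pud_t pudval = READ_ONCE(*pud);
>
>           /* One PUD-level accessed bit stands for the whole 1GB range
>            * mapped below this entry. */
>           if (pud_present(pudval) && pud_young(pudval))
>                   priv->young = true;
>
>           /* Never descend: a clear bit means nothing below was touched,
>            * a set bit already answers the question for this range. */
>           walk->action = ACTION_CONTINUE;
>
>           /* A non-zero return ends the walk once an access is found. */
>           return priv->young ? 1 : 0;
>   }
>
>   static const struct mm_walk_ops region_pud_ops = {
>           .pud_entry = region_pud_entry,
>   };
>
> Analogous checks apply at the other levels (PMD/PGD) mentioned above.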
>
> This patch set is based on 6.8-rc5 kernel (commit: f48159f8, mm-unstable
> tree)
>
> Changes since v1 [1]
> ====================
>
> - Added support for 5-level page table tree
> - Split the patch to mm infrastructure changes and DAMON enhancements
> - Code changes as per comments on v1
> - Added kerneldoc comments
>
> [1] https://lkml.org/lkml/2023/12/15/272
>
> Evaluation:
>
> - MASIM benchmark with 1GB, 10GB, and 100GB footprints with 10% hot
>   data, and a 5TB footprint with 10GB of hot data.
> - DAMON: 5ms sampling and 200ms aggregation interval (see the sketch
>   after this list); all other parameters set to their default values.
> - DAMON+PTP: page table profiling applied to DAMON with the above
>   parameters.
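>
> A minimal sketch of the evaluated intervals, expressed with DAMON's
> in-kernel attribute API (values are in microseconds; the remaining
> fields keep whatever the context already carries):
>
>   #include <linux/damon.h>
>
>   static int apply_eval_intervals(struct damon_ctx *ctx)
>   {
>           struct damon_attrs attrs = ctx->attrs;
>
>           attrs.sample_interval = 5000;    /* 5ms sampling interval */
>           attrs.aggr_interval = 200000;    /* 200ms aggregation interval */
>           return damon_set_attrs(ctx, &attrs);
>   }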
>
> Profiling efficiency in detecting hot data:
>
> Footprint     1GB    10GB    100GB   5TB
> ---------------------------------------------
> DAMON         >90%   <50%    ~0%     0%
> DAMON+PTP     >90%   >90%    >90%    >90%
>
> CPU overheads (in billion cycles) for kdamond:
>
> Footprint     1GB    10GB    100GB   5TB
> ---------------------------------------------
> DAMON         1.15   19.53   3.52    9.55
> DAMON+PTP     0.83   3.20    1.27    2.55
>
> A detailed explanation and evaluation can be found in the arXiv paper:
> https://arxiv.org/pdf/2311.10275.pdf

NAK, on the grounds of citing the nonfactual source above and
misrepresenting the existing idea as your own invention [1].

The existing idea was purposely not patented so that all CPU vendors
are free to use it. Not sure what kind of peer review that source had,
but it's not getting past the reviewers here easily. Please do feel
free to ask any third party that has no conflict of interest to
override my NAK, though.

[1] https://lore.kernel.org/CAOUHufbDzy5dMcLR9ex25VdB_QBmSrW_We-2+KftZVYKNn4s9g@mail.gmail.com/
[2] https://lore.kernel.org/YE6yrQC1Ps195wPw@google.com/