Message-Id: <20240118171756.80356-1-sj@kernel.org>
Date: Thu, 18 Jan 2024 09:17:56 -0800
From: SeongJae Park <sj@...nel.org>
To: Hyeongtak Ji <hyeongtak.ji@...com>
Cc: sj@...nel.org,
akpm@...ux-foundation.org,
apopple@...dia.com,
baolin.wang@...ux.alibaba.com,
damon@...ts.linux.dev,
dave.jiang@...el.com,
honggyu.kim@...com,
kernel_team@...ynix.com,
linmiaohe@...wei.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-trace-kernel@...r.kernel.org,
lizhijian@...fujitsu.com,
mathieu.desnoyers@...icios.com,
mhiramat@...nel.org,
rakie.kim@...com,
rostedt@...dmis.org,
surenb@...gle.com,
yangx.jy@...itsu.com,
ying.huang@...el.com,
ziy@...dia.com
Subject: Re: [RFC PATCH 0/4] DAMON based 2-tier memory management for CXL memory
On Thu, 18 Jan 2024 19:40:16 +0900 Hyeongtak Ji <hyeongtak.ji@...com> wrote:
> Hi SeongJae,
>
> On Wed, 17 Jan 2024 SeongJae Park <sj@...nel.org> wrote:
>
> [...]
> >> Let's say there are 3 nodes in the system; node0 and node1 are in the
> >> first tier, and node2 is in the second tier.
> >>
> >> $ cat /sys/devices/virtual/memory_tiering/memory_tier4/nodelist
> >> 0-1
> >>
> >> $ cat /sys/devices/virtual/memory_tiering/memory_tier22/nodelist
> >> 2
> >>
> >> Here is the result of partitioning hot/cold memory; I put the execution
> >> command at the right side of the numastat result. I initially ran each
> >> hot_cold program with a preferred memory policy so that it initially
> >> allocates memory on either node0 or node2, but the pages gradually
> >> migrated based on their access frequencies.
> >>
> >> $ numastat -c -p hot_cold
> >> Per-node process memory usage (in MBs)
> >> PID               Node 0 Node 1 Node 2 Total
> >> ----------------  ------ ------ ------ -----
> >> 754 (hot_cold)      1800      0   2000  3800 <- hot_cold 1800 2000
> >> 1184 (hot_cold)      300      0    500   800 <- hot_cold 300 500
> >> 1818 (hot_cold)      801      0   3199  4000 <- hot_cold 800 3200
> >> 30289 (hot_cold)       4      0      5    10 <- hot_cold 3 5
> >> 30325 (hot_cold)      31      0     51    81 <- hot_cold 30 50
> >> ----------------  ------ ------ ------ -----
> >> Total               2938      0   5756  8695
> >>
> >> The final node placement result shows that DAMON accurately migrated
> >> pages by their hotness for multiple processes.
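
(Side note for readers who want to reproduce a similar setup: the exact
invocation is not shown above, so the below is only my assumption of how
the preferred initial placement could have been done, using numactl.)

$ # prefer allocating from the fast tier (node 0) at start; access
$ # frequencies then drive the DAMON-based migration between tiers
$ numactl --preferred=0 ./hot_cold 1800 2000 &
$ # or prefer starting from the CXL tier (node 2) instead
$ numactl --preferred=2 ./hot_cold 300 500 &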
> >
> > What was the result when the corner case handling logic was not applied?
>
> This is the result of the same test that Honggyu did, but with insufficient
> corner case handling logic.
>
> $ numastat -c -p hot_cold
>
> Per-node process memory usage (in MBs)
> PID              Node 0 Node 1 Node 2 Total
> ---------------  ------ ------ ------ -----
> 862 (hot_cold)     2256      0   1545  3801 <- hot_cold 1800 2000
> 863 (hot_cold)      403      0    398   801 <- hot_cold 300 500
> 864 (hot_cold)     1520      0   2482  4001 <- hot_cold 800 3200
> 865 (hot_cold)        6      0      3     9 <- hot_cold 3 5
> 866 (hot_cold)       29      0     52    81 <- hot_cold 30 50
> ---------------  ------ ------ ------ -----
> Total              4215      0   4480  8695
>
> As time goes by, DAMON keeps trying to split the hot/cold regions, but that
> alone does not seem to be enough.
>
> $ numastat -c -p hot_cold
>
> Per-node process memory usage (in MBs)
> PID              Node 0 Node 1 Node 2 Total
> ---------------  ------ ------ ------ -----
> 862 (hot_cold)     2022      0   1780  3801 <- hot_cold 1800 2000
> 863 (hot_cold)      351      0    450   801 <- hot_cold 300 500
> 864 (hot_cold)     1134      0   2868  4001 <- hot_cold 800 3200
> 865 (hot_cold)        7      0      2     9 <- hot_cold 3 5
> 866 (hot_cold)       43      0     39    81 <- hot_cold 30 50
> ---------------  ------ ------ ------ -----
> Total              3557      0   5138  8695
>
> >
> > And, what is the corner case handling logic that seemed essential? I assume
> > the page granularity active/reference check could indeed provide many
> > improvements, but that's only my humble assumption.
>
> Yes, the page granularity active/reference check is essential. To produce the
> above "insufficient" result, the only thing I did was to let
> inactive/not_referenced pages be promoted as well.
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f03be320f9ad..c2aefb883c54 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1127,9 +1127,7 @@ static unsigned int __promote_folio_list(struct list_head *folio_list,
>          VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
> 
>          references = folio_check_references(folio, sc);
> -        if (references == FOLIOREF_KEEP ||
> -            references == FOLIOREF_RECLAIM ||
> -            references == FOLIOREF_RECLAIM_CLEAN)
> +        if (references == FOLIOREF_KEEP)
>              goto keep_locked;
> 
>          /* Relocate its contents to another node. */
Thank you for sharing the details :) I think the DAMOS filters based approach
could be worth trying, then.
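
To sketch what I have in mind (note the 'young' filter type below does not
exist yet; it is only a name I'm assuming for a page granularity access
recheck), such a filter could be wired up via the DAMON sysfs interface:

$ # assuming a promotion-doing scheme is already installed as scheme 0
$ cd /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/schemes/0/filters
$ echo 1 > nr_filters
$ # hypothetical page granularity filter: let only young (recently
$ # accessed) pages pass to the promotion action
$ echo young > 0/type
$ echo Y > 0/matching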
>
> >
> > If the corner cases are indeed better handled in page granularity, I agree
> > we need some more effort, since DAMON monitoring results are not page
> > granularity aware by design. Users could increase min_nr_regions to make
> > them more accurate, and we have a plan to support page granularity
> > monitoring, though. But the overhead might be unacceptable.
> >
> > The ideal solution would be making DAMON more accurate while keeping the
> > current level of overhead. We indeed have TODO items for DAMON accuracy
> > improvement, but this may take some time, which might be unacceptable for
> > your case.
> >
> > If that's the case, I think the additional corner case handling (or, the
> > additional page granularity access check) could be made as DAMOS filters[1],
> > since DAMOS filters can be applied in page granularity and are designed for
> > this kind of handling based on information that DAMON monitoring results
> > cannot provide. More specifically, we could have filters for
> > promotion-qualifying pages and demotion-qualifying pages. In this way, I
> > think we can keep the action more flexible while the filters can be applied
> > in creative ways.
>
> Making the corner case handling new DAMOS filters is a good idea. I'm just a
> bit concerned that adding new filters might give users more things to care
> about.
I prefer keeping the DAMON API and sysfs interface flexible and easy to extend
even if that increases the number of parameters, while providing simplified
high level interfaces for end users aiming to use DAMON for specific use cases,
as DAMON_RECLAIM, DAMON_LRU_SORT, and damo do. Hence I'm not very concerned.
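
For example, DAMON_RECLAIM already hides the scheme details behind a few
module parameters. A minimal sketch (the values below are arbitrary ones
for illustration; its documentation has the full parameter list):

$ # reclaim regions that were not accessed for 30 seconds or more
$ echo 30000000 > /sys/module/damon_reclaim/parameters/min_age
$ echo Y > /sys/module/damon_reclaim/parameters/enabled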
Thanks,
SJ
>
> Kind regards,
> Hyeongtak