Message-ID: <5cb98e07-1e51-e376-8e67-dffc92f24941@linux.alibaba.com>
Date: Fri, 18 Mar 2022 18:01:19 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: sj@...nel.org
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/damon: Make the sampling more accurate
On 3/18/2022 5:40 PM, sj@...nel.org wrote:
> Hi Baolin,
>
> On Fri, 18 Mar 2022 17:23:13 +0800 Baolin Wang <baolin.wang@...ux.alibaba.com> wrote:
>
>> When I tried to use DAMON's physical address sampling to migrate pages
>> on a tiered memory system, I found it mistakenly demotes some regions as cold.
>> Currently we choose a physical address in the region randomly, but if
>> its corresponding page is not an online LRU page, we ignore the
>> access status in this sampling cycle, so the region is effectively
>> treated as not accessed. Thus a region that includes some non-LRU pages
>> will very likely be treated as a cold region, and may be
>> merged with adjacent cold regions, even though some of its pages may
>> have been accessed and we missed that.
>>
>> So instead of ignoring the access status of the region when the current
>> sampling address does not yield a valid page, we can fall back to the last
>> valid sampling address. This makes the sampling more accurate and lets us
>> make a better decision.
>
> Well... Offlined pages are also a valid part of the memory region, so treating
> those as not accessed and making the memory region containing the offlined
> pages looks colder seems legal to me. IOW, this approach could make memory
> regions containing many non-online-LRU pages as hot.
IMO I don't think this is a problem: if a region containing many 
non-online-LRU pages is treated as hot, that means some of its pages 
are hot, right? We can find them and promote them to fast memory 
(or apply other schemes). Meanwhile, for the non-online-LRU pages 
themselves we can filter them out and do nothing, since we cannot get 
a valid page struct for them.
>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> ---
>> include/linux/damon.h | 2 ++
>> mm/damon/core.c | 2 ++
>> mm/damon/paddr.c | 15 ++++++++++++---
>> 3 files changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/damon.h b/include/linux/damon.h
>> index f23cbfa..3311e15 100644
>> --- a/include/linux/damon.h
>> +++ b/include/linux/damon.h
>> @@ -38,6 +38,7 @@ struct damon_addr_range {
>> * struct damon_region - Represents a monitoring target region.
>> * @ar: The address range of the region.
>> * @sampling_addr: Address of the sample for the next access check.
>> + * @last_sampling_addr: Last valid address of the sampling.
>> * @nr_accesses: Access frequency of this region.
>> * @list: List head for siblings.
>> * @age: Age of this region.
>> @@ -50,6 +51,7 @@ struct damon_addr_range {
>> struct damon_region {
>> struct damon_addr_range ar;
>> unsigned long sampling_addr;
>> + unsigned long last_sampling_addr;
>> unsigned int nr_accesses;
>> struct list_head list;
>>
>> diff --git a/mm/damon/core.c b/mm/damon/core.c
>> index c1e0fed..957704f 100644
>> --- a/mm/damon/core.c
>> +++ b/mm/damon/core.c
>> @@ -108,6 +108,7 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end)
>> region->ar.start = start;
>> region->ar.end = end;
>> region->nr_accesses = 0;
>> + region->last_sampling_addr = 0;
>> INIT_LIST_HEAD(&region->list);
>>
>> region->age = 0;
>> @@ -848,6 +849,7 @@ static void damon_split_region_at(struct damon_ctx *ctx,
>> return;
>>
>> r->ar.end = new->ar.start;
>> + r->last_sampling_addr = 0;
>>
>> new->age = r->age;
>> new->last_nr_accesses = r->last_nr_accesses;
>> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
>> index 21474ae..5f15068 100644
>> --- a/mm/damon/paddr.c
>> +++ b/mm/damon/paddr.c
>> @@ -31,10 +31,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
>> return true;
>> }
>>
>> -static void damon_pa_mkold(unsigned long paddr)
>> +static void damon_pa_mkold(struct page *page)
>> {
>> struct folio *folio;
>> - struct page *page = damon_get_page(PHYS_PFN(paddr));
>> struct rmap_walk_control rwc = {
>> .rmap_one = __damon_pa_mkold,
>> .anon_lock = folio_lock_anon_vma_read,
>> @@ -66,9 +65,19 @@ static void damon_pa_mkold(unsigned long paddr)
>> static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
>> struct damon_region *r)
>> {
>> + struct page *page;
>> +
>> r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
>>
>> - damon_pa_mkold(r->sampling_addr);
>> + page = damon_get_page(PHYS_PFN(r->sampling_addr));
>> + if (page) {
>> + r->last_sampling_addr = r->sampling_addr;
>> + } else if (r->last_sampling_addr) {
>> + r->sampling_addr = r->last_sampling_addr;
>> + page = damon_get_page(PHYS_PFN(r->last_sampling_addr));
>> + }
>> +
>> + damon_pa_mkold(page);
>> }
>>
>> static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
>> --
>> 1.8.3.1