Message-ID: <20241011113919epcms1p4d6ae59bece34d4a930a5db31a2f00173@epcms1p4>
Date: Fri, 11 Oct 2024 20:39:19 +0900
From: Jaewon Kim <jaewon31.kim@...sung.com>
To: Vlastimil Babka <vbabka@...e.cz>, "sj@...nel.org" <sj@...nel.org>,
"minchan@...nel.org" <minchan@...nel.org>, "kaleshsingh@...gle.com"
<kaleshsingh@...gle.com>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jaewon31.kim@...il.com" <jaewon31.kim@...il.com>
Subject: RE: [PATCH] vmscan: add a vmscan event for reclaim_pages
>On 10/11/24 10:25 AM, Jaewon Kim wrote:
>> Hi
>>
>> Thank you for your comment. Yes, if it is allowed, I can do it that way.
>> When I checked, the following functions would need to do the memset():
>>
>> reclaim_clean_pages_from_list
>> shrink_inactive_list
>> reclaim_folio_list
>> evict_folios
>>
>> Actually I was planning to move trace_mm_vmscan_reclaim_pages into
>> reclaim_folio_list so that we don't have to sum up, and we may be able
>> to print the node number, too. As we will see a log for each node, if
>> we'd like to know the sum, that would be the post-parser's job.
>>
>> Option 1. No change to the memset, but print one event per node.
>> mm_vmscan_reclaim_pages: nid=0 nr_scanned=112 nr_reclaimed=112 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate_anon=0 nr_activate_file=0 nr_ref_keep=0 nr_unmap_fail=0
>> mm_vmscan_reclaim_pages: nid=1 ...
>> mm_vmscan_reclaim_pages: nid=2 ...
>
>I see. Note it processes a list that might be from multiple nodes and
>will group consecutive pages from the same node, but if pages come from
>random nodes, the nodes will repeat and there might be many trace
>events, each covering only a few pages.
>
>I guess it depends on the workload whether its pages come from the same node.
>Maybe you can try and see how noisy it is in practice?
>
Hi
Actually my Android test device has only one node, so I cannot test the
multi-node case. But it does show quite a lot of mm_vmscan_reclaim_pages
events when the Android shmem-based path calls reclaim_pages. I would have
to do post-parsing, or set up an ftrace hist trigger as shown below.
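
For example, assuming the v2 event is added under the vmscan group and keeps
the nid field from option 1, a hist trigger could sum things up per node
without any post-parsing:

  echo 'hist:keys=nid:vals=nr_scanned,nr_reclaimed' > \
      /sys/kernel/tracing/events/vmscan/mm_vmscan_reclaim_pages/trigger
  cat /sys/kernel/tracing/events/vmscan/mm_vmscan_reclaim_pages/hist
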
I think madvise and damon, the other callers of reclaim_pages, are not
interested in a per-node trace log.
I'm just worried that changing the policy, so that the memset(0) becomes the
caller's responsibility, could be error-prone if we forget it someday. So if
possible, let me take option 1.
To show clean code, let me submit the v2 patch.
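
Roughly, the idea is to emit the event once per node inside
reclaim_folio_list(). A sketch only, assuming the tracepoint takes
(nid, nr_scanned, nr_reclaimed, stat); the actual v2 code may differ:

  static unsigned int reclaim_folio_list(struct list_head *folio_list,
                                         struct pglist_data *pgdat)
  {
          struct reclaim_stat stat;
          unsigned int nr_reclaimed;
          struct folio *folio;
          struct scan_control sc = {
                  .gfp_mask = GFP_KERNEL,
                  .may_writepage = 1,
                  .may_unmap = 1,
                  .may_swap = 1,
                  .no_demotion = 1,
          };

          /* shrink_folio_list() fills 'stat' for this node's folios only */
          nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &stat, true);
          while (!list_empty(folio_list)) {
                  folio = lru_to_folio(folio_list);
                  list_del(&folio->lru);
                  folio_putback_lru(folio);
          }

          /* one event per node, so no cross-node accumulation is needed */
          trace_mm_vmscan_reclaim_pages(pgdat->node_id, sc.nr_scanned,
                                        nr_reclaimed, &stat);

          return nr_reclaimed;
  }
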
Thank you.
>> Option 2. Change the memset, but we don't care about the stats from each node.
>> mm_vmscan_reclaim_pages: nr_scanned=35 nr_reclaimed=35 nr_dirty=0 nr_writeback=0 nr_congested=0 nr_immediate=0 nr_activate_anon=0 nr_activate_file=0 nr_ref_keep=0 nr_unmap_fail=0
>>
>> Would you give me your preference between the two options?
>>
>> Thank you
>> Jaewon Kim
>>
>>>
>>> AFAICS shrink_folio_list() only cares about these fields:
>>>
>>> pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
>>>
>>> in order to do
>>>
>>> count_vm_events(PGACTIVATE, pgactivate);
>>>
>>> Which could be adjusted to deal with accumulating stat - i.e. take an
>>> initial sum of the fields in stat and subtract from the final sum to get
>>> the delta.
>>>
>>>> unsigned long reclaim_pages(struct list_head *folio_list)
>>>> {
>>>> int nid;
>>>> + unsigned int nr_scanned = 0;
>>>> unsigned int nr_reclaimed = 0;
>>>> LIST_HEAD(node_folio_list);
>>>> unsigned int noreclaim_flag;
>>>> + struct reclaim_stat stat_total, stat_one;
>>>>
>>>> if (list_empty(folio_list))
>>>> return nr_reclaimed;
>>>>
>>>> + memset(&stat_total, 0, sizeof(stat_total));
>>>> noreclaim_flag = memalloc_noreclaim_save();
>>>>
>>>> nid = folio_nid(lru_to_folio(folio_list));
>>>> @@ -2168,14 +2192,20 @@ unsigned long reclaim_pages(struct list_head *folio_list)
>>>> if (nid == folio_nid(folio)) {
>>>> folio_clear_active(folio);
>>>> list_move(&folio->lru, &node_folio_list);
>>>> + nr_scanned += folio_nr_pages(folio);
>>>> continue;
>>>> }
>>>>
>>>> - nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
>>>> + nr_reclaimed += reclaim_folio_list(&node_folio_list,
>>>> + NODE_DATA(nid), &stat_one);
>>>> + reclaim_stat_add(&stat_one, &stat_total);
>>>> nid = folio_nid(lru_to_folio(folio_list));
>>>> } while (!list_empty(folio_list));
>>>>
>>>> - nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
>>>> + nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid),
>>>> + &stat_one);
>>>> + reclaim_stat_add(&stat_one, &stat_total);
>>>> + trace_mm_vmscan_reclaim_pages(nr_scanned, nr_reclaimed, &stat_total);
>>>>
>>>> memalloc_noreclaim_restore(noreclaim_flag);
>>>>
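
For reference, the reclaim_stat_add() helper used in the diff above was only
meant as field-wise accumulation. A sketch, with the field list assumed from
struct reclaim_stat and the trace output above:

  static void reclaim_stat_add(const struct reclaim_stat *one,
                               struct reclaim_stat *total)
  {
          total->nr_dirty       += one->nr_dirty;
          total->nr_writeback   += one->nr_writeback;
          total->nr_congested   += one->nr_congested;
          total->nr_immediate   += one->nr_immediate;
          total->nr_activate[0] += one->nr_activate[0];
          total->nr_activate[1] += one->nr_activate[1];
          total->nr_ref_keep    += one->nr_ref_keep;
          total->nr_unmap_fail  += one->nr_unmap_fail;
  }

With option 1 this helper becomes unnecessary, since each node's stat is
traced directly.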