Message-ID: <CAHbLzkowZgcXc9Oqcr0yr6X0TPmU5T55FLXJpV=5q+_NK8O4iQ@mail.gmail.com>
Date: Thu, 20 Aug 2020 15:56:15 -0700
From: Yang Shi <shy828301@...il.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Yang Shi <yang.shi@...ux.alibaba.com>,
David Rientjes <rientjes@...gle.com>,
Huang Ying <ying.huang@...el.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [RFC][PATCH 6/9] mm/vmscan: add page demotion counter

On Tue, Aug 18, 2020 at 11:53 AM Dave Hansen
<dave.hansen@...ux.intel.com> wrote:
>
>
> From: Yang Shi <yang.shi@...ux.alibaba.com>
>
> Account the number of demoted pages into reclaim_state->nr_demoted.
>
> Add pgdemote_kswapd and pgdemote_direct VM counters shown in
> /proc/vmstat.

BTW, we'd better add promotion counters as well. NUMA balancing can
already promote pages to local nodes without any modification. One
could argue it may need optimization for PMEM use cases, but it does
work. And promotion counters would make the patchset more
self-contained.

You could refer to:
https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/
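
For illustration, a rough sketch (not part of this patch, and not what
the referenced series does verbatim) of what a promotion counter could
look like, mirroring the PGDEMOTE_* events below; the PGPROMOTE_SUCCESS
name and the exact hook point in the NUMA balancing path are
assumptions, not settled API:

    /* include/linux/vm_event_item.h: hypothetical event, next to the
     * PGDEMOTE_* entries added by this patch */
    PGPROMOTE_SUCCESS,

    /* NUMA balancing promotion path (e.g. after a misplaced page has
     * been successfully migrated to the target node): bump the
     * hypothetical counter by the number of base pages moved */
    count_vm_events(PGPROMOTE_SUCCESS, nr_succeeded);

    /* mm/vmstat.c: matching /proc/vmstat name */
    "pgpromote_success",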
>
> [ daveh:
>    - tweaked __count_vm_events() a bit, and made them look at the THP
>      size directly rather than getting data from migrate_pages()
> ]
>
> Signed-off-by: Yang Shi <yang.shi@...ux.alibaba.com>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Huang Ying <ying.huang@...el.com>
> Cc: Dan Williams <dan.j.williams@...el.com>
> ---
>
> b/include/linux/vm_event_item.h | 2 ++
> b/mm/vmscan.c | 6 ++++++
> b/mm/vmstat.c | 2 ++
> 3 files changed, 10 insertions(+)
>
> diff -puN include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter include/linux/vm_event_item.h
> --- a/include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter 2020-08-18 11:36:54.062583176 -0700
> +++ b/include/linux/vm_event_item.h 2020-08-18 11:36:54.070583176 -0700
> @@ -32,6 +32,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS
> PGREFILL,
> PGSTEAL_KSWAPD,
> PGSTEAL_DIRECT,
> + PGDEMOTE_KSWAPD,
> + PGDEMOTE_DIRECT,
> PGSCAN_KSWAPD,
> PGSCAN_DIRECT,
> PGSCAN_DIRECT_THROTTLE,
> diff -puN mm/vmscan.c~mm-vmscan-add-page-demotion-counter mm/vmscan.c
> --- a/mm/vmscan.c~mm-vmscan-add-page-demotion-counter 2020-08-18 11:36:54.064583176 -0700
> +++ b/mm/vmscan.c 2020-08-18 11:36:54.072583176 -0700
> @@ -147,6 +147,7 @@ struct scan_control {
> unsigned int immediate;
> unsigned int file_taken;
> unsigned int taken;
> + unsigned int demoted;
> } nr;
>
> /* for recording the reclaimed slab by now */
> @@ -1146,6 +1147,11 @@ static unsigned int demote_page_list(str
> list_splice(ret_list, demote_pages);
> }
>
> + if (current_is_kswapd())
> + __count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
> + else
> + __count_vm_events(PGDEMOTE_DIRECT, nr_succeeded);
> +
> return nr_succeeded;
> }
>
> diff -puN mm/vmstat.c~mm-vmscan-add-page-demotion-counter mm/vmstat.c
> --- a/mm/vmstat.c~mm-vmscan-add-page-demotion-counter 2020-08-18 11:36:54.067583176 -0700
> +++ b/mm/vmstat.c 2020-08-18 11:36:54.072583176 -0700
> @@ -1200,6 +1200,8 @@ const char * const vmstat_text[] = {
> "pgrefill",
> "pgsteal_kswapd",
> "pgsteal_direct",
> + "pgdemote_kswapd",
> + "pgdemote_direct",
> "pgscan_kswapd",
> "pgscan_direct",
> "pgscan_direct_throttle",
> _
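
For reference, once applied the new counters show up as lines in
/proc/vmstat. A minimal userspace reader (illustrative only; it assumes
nothing beyond the "pgdemote_" names added above):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[128];
            FILE *f = fopen("/proc/vmstat", "r");

            if (!f) {
                    perror("fopen /proc/vmstat");
                    return 1;
            }
            /* print the pgdemote_kswapd / pgdemote_direct counters */
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "pgdemote_", 9))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }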