Message-ID: <CAJqJ8ihKy133afN=qTAiYAV3W4ifop+b18PbANVZf3FT-9auzg@mail.gmail.com>
Date: Tue, 10 Sep 2024 13:28:15 +0800
From: jingxiang zeng <jingxiangzeng.cas@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Jingxiang Zeng <linuszeng@...cent.com>, linux-mm@...ck.org,
Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Shakeel Butt <shakeel.butt@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/memcontrol: add per-memcg pgpgin/pswpin counter
On Tue, 10 Sept 2024 at 06:46, Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Fri, Aug 30, 2024 at 1:23 AM Jingxiang Zeng
> <jingxiangzeng.cas@...il.com> wrote:
> >
> > From: Jingxiang Zeng <linuszeng@...cent.com>
> >
> > In proactive memory reclaim scenarios, it is necessary to
> > estimate the pswpin and pswpout metrics of a cgroup to
> > determine whether to continue reclaiming anonymous pages in
> > the current batch. This patch collects these metrics and
> > exposes them.
>
> Could you add more details about the use case?
>
> By "reclaiming anonymous pages", do you mean using memory.reclaim with
> swappiness=200?
Yes.
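
For reference, the request we issue looks roughly like the minimal
sketch below. It assumes a cgroup v2 hierarchy mounted at
/sys/fs/cgroup and a kernel that accepts the swappiness= argument to
memory.reclaim; the cgroup path and reclaim amount are placeholders,
not part of this patch:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Ask the kernel to reclaim `amount` from `cgroup`, biasing fully
 * toward anonymous pages via swappiness=200.
 */
static int proactive_reclaim(const char *cgroup, const char *amount)
{
        char path[256], req[64];
        int fd, n;

        snprintf(path, sizeof(path), "%s/memory.reclaim", cgroup);
        fd = open(path, O_WRONLY);
        if (fd < 0) {
                perror("open memory.reclaim");
                return -1;
        }
        n = snprintf(req, sizeof(req), "%s swappiness=200", amount);
        if (write(fd, req, n) < 0) {
                perror("write memory.reclaim");
                close(fd);
                return -1;
        }
        close(fd);
        return 0;
}

int main(void)
{
        /* Placeholder cgroup; adjust to a real v2 cgroup. */
        return proactive_reclaim("/sys/fs/cgroup/test", "64M");
}
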
>
> Why not just use PGPGOUT to figure out how many pages were reclaimed?
> Do you find a significant amount of file pages getting reclaimed with
> swappiness=200?
>
Currently it is not possible to observe a cgroup's swap-out activity.
The PGPGOUT metric counts reclaimed file pages as well as anonymous
pages, so it cannot accurately reflect how many pages were swapped
out.
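
To make the intended consumer concrete, here is a rough userspace
sketch of how we would use the new counter; the cgroup path and the
stop condition are placeholders, not part of this patch. It reads
pswpout from memory.stat before and after a reclaim batch and uses
the delta to decide whether to keep reclaiming anonymous pages:

#include <stdio.h>
#include <string.h>

/* Return the pswpout value from the cgroup's memory.stat, or -1. */
static long read_pswpout(const char *cgroup)
{
        char path[256], key[64];
        long val;
        FILE *f;

        snprintf(path, sizeof(path), "%s/memory.stat", cgroup);
        f = fopen(path, "r");
        if (!f)
                return -1;
        while (fscanf(f, "%63s %ld", key, &val) == 2) {
                if (!strcmp(key, "pswpout")) {
                        fclose(f);
                        return val;
                }
        }
        fclose(f);
        return -1;
}

int main(void)
{
        const char *cg = "/sys/fs/cgroup/test";  /* placeholder */
        long before, after;

        before = read_pswpout(cg);
        /* ... issue one memory.reclaim batch here ... */
        after = read_pswpout(cg);

        if (before >= 0 && after > before)
                printf("batch swapped out %ld pages, continue\n",
                       after - before);
        else
                printf("no swap-out observed, stop reclaiming anon\n");
        return 0;
}
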
> >
> > Signed-off-by: Jingxiang Zeng <linuszeng@...cent.com>
> > ---
> >  mm/memcontrol-v1.c | 2 ++
> >  mm/memcontrol.c    | 2 ++
> >  mm/page_io.c       | 4 ++++
> >  3 files changed, 8 insertions(+)
> >
> > diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
> > index b37c0d870816..44803cbea38a 100644
> > --- a/mm/memcontrol-v1.c
> > +++ b/mm/memcontrol-v1.c
> > @@ -2729,6 +2729,8 @@ static const char *const memcg1_stat_names[] = {
> >  static const unsigned int memcg1_events[] = {
> >          PGPGIN,
> >          PGPGOUT,
> > +        PSWPIN,
> > +        PSWPOUT,
>
> memory.reclaim is not exposed in cgroup v1, so assuming these are only
> used for such proactive reclaim, we don't need to add them here.
Your point makes sense. I will remove these fields in the v2 version.
>
> >          PGFAULT,
> >          PGMAJFAULT,
> >  };
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 087a8cb1a6d8..dde3d026f174 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -418,6 +418,8 @@ static const unsigned int memcg_vm_event_stat[] = {
> >          PGPGIN,
> >          PGPGOUT,
> >  #endif
> > +        PSWPIN,
> > +        PSWPOUT,
> >          PGSCAN_KSWAPD,
> >          PGSCAN_DIRECT,
> >          PGSCAN_KHUGEPAGED,
> > diff --git a/mm/page_io.c b/mm/page_io.c
> > index b6f1519d63b0..4bc77d1c6bfa 100644
> > --- a/mm/page_io.c
> > +++ b/mm/page_io.c
> > @@ -310,6 +310,7 @@ static inline void count_swpout_vm_event(struct folio *folio)
> >          }
> >          count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT);
> >  #endif
> > +        count_memcg_folio_events(folio, PSWPOUT, folio_nr_pages(folio));
> >          count_vm_events(PSWPOUT, folio_nr_pages(folio));
> >  }
> >
> > @@ -505,6 +506,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> >                  for (p = 0; p < sio->pages; p++) {
> >                          struct folio *folio = page_folio(sio->bvec[p].bv_page);
> >
> > +                        count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
> >                          folio_mark_uptodate(folio);
> >                          folio_unlock(folio);
> >                  }
> > @@ -588,6 +590,7 @@ static void swap_read_folio_bdev_sync(struct folio *folio,
> >           * attempt to access it in the page fault retry time check.
> >           */
> >          get_task_struct(current);
> > +        count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
> >          count_vm_event(PSWPIN);
> >          submit_bio_wait(&bio);
> >          __end_swap_bio_read(&bio);
> > @@ -603,6 +606,7 @@ static void swap_read_folio_bdev_async(struct folio *folio,
> >          bio->bi_iter.bi_sector = swap_folio_sector(folio);
> >          bio->bi_end_io = end_swap_bio_read;
> >          bio_add_folio_nofail(bio, folio, folio_size(folio), 0);
> > +        count_memcg_folio_events(folio, PSWPIN, folio_nr_pages(folio));
> >          count_vm_event(PSWPIN);
> >          submit_bio(bio);
> >  }
> > --
> > 2.43.5
> >
> >
>