Message-ID: <CAJD7tkbrjV3Px8h1p950VZFi9FnzxZPn2Kg+vZD69eEcsQvtxg@mail.gmail.com>
Date: Wed, 23 Oct 2024 11:02:50 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Usama Arif <usamaarif642@...il.com>
Cc: Barry Song <21cnbao@...il.com>, senozhatsky@...omium.org, minchan@...nel.org,
hanchuanhua@...o.com, v-songbaohua@...o.com, akpm@...ux-foundation.org,
linux-mm@...ck.org, hannes@...xchg.org, david@...hat.com, willy@...radead.org,
kanchana.p.sridhar@...el.com, nphamcs@...il.com, chengming.zhou@...ux.dev,
ryan.roberts@....com, ying.huang@...el.com, riel@...riel.com,
shakeel.butt@...ux.dev, kernel-team@...a.com, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org
Subject: Re: [RFC 0/4] mm: zswap: add support for zswapin of large folios
[..]
> >> I suspect the regression occurs because you're running an edge case
> >> where the memory cgroup stays nearly full most of the time (this isn't
> >> an inherent issue with large folio swap-in). As a result, swapping in
> >> mTHP quickly triggers a memcg overflow, causing a swap-out. The
> >> next swap-in then recreates the overflow, leading to a repeating
> >> cycle.
> >>
> >
> > Yes, agreed! Looking at the swap counters, I think this is what is going
> > on as well.
> >
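(Purely for illustration, a tiny userspace C sketch of the cycle described
above -- not kernel code, and the limit/fault numbers are made up. Each
large-folio swapin against a nearly-full memcg overflows the limit by
roughly the folio size, so a whole batch of pages gets pushed back to swap
on every fault, while order-0 faults overflow by at most one page.)

/* Illustrative only: models a memcg as a bare page counter. */
#include <stdio.h>

#define LIMIT_PAGES 25600UL             /* pretend 100 MiB limit, 4K pages */
#define NR_FAULTS   1000

/* Charge nr pages; anything over the limit is "reclaimed" (swapped out). */
static unsigned long fault(unsigned long *used, unsigned long nr)
{
        unsigned long over = 0;

        *used += nr;
        if (*used > LIMIT_PAGES) {
                over = *used - LIMIT_PAGES;
                *used = LIMIT_PAGES;
        }
        return over;
}

static void run(const char *label, unsigned long pages_per_fault)
{
        unsigned long used = LIMIT_PAGES - 4;   /* cgroup nearly full */
        unsigned long reclaimed = 0;
        int i;

        for (i = 0; i < NR_FAULTS; i++)
                reclaimed += fault(&used, pages_per_fault);
        printf("%-11s %d faults -> %lu pages pushed back to swap\n",
               label, NR_FAULTS, reclaimed);
}

int main(void)
{
        run("4K folios:", 1);           /* order-0 swapin */
        run("64K folios:", 16);         /* mTHP swapin */
        return 0;
}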
> >> We need a way to stop the cup from repeatedly filling to the brim and
> >> overflowing. While not a definitive fix, the following change might help
> >> improve the situation:
> >>
> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >> index 17af08367c68..f2fa0eeb2d9a 100644
> >> --- a/mm/memcontrol.c
> >> +++ b/mm/memcontrol.c
> >> @@ -4559,7 +4559,10 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> >>          memcg = get_mem_cgroup_from_mm(mm);
> >>          rcu_read_unlock();
> >>
> >> -        ret = charge_memcg(folio, memcg, gfp);
> >> +        if (folio_test_large(folio) && mem_cgroup_margin(memcg) < MEMCG_CHARGE_BATCH)
> >> +                ret = -ENOMEM;
> >> +        else
> >> +                ret = charge_memcg(folio, memcg, gfp);
> >>
> >>          css_put(&memcg->css);
> >>          return ret;
> >>  }
> >>
> >
> > The diff makes sense to me. Let me test later today and get back to you.
> >
> > Thanks!
> >
> >> Please confirm whether it makes the kernel build under the memcg limit
> >> faster. If so, let's work together to figure out an official patch :-)
> >> The above code doesn't consider the parent memcg's overflow, so it isn't
> >> an ideal fix.
> >>
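(If it helps the discussion: below is a rough userspace-style sketch of one
way "considering the parent memcg" could look, the idea being that the fault
path would then fall back to smaller folios rather than forcing reclaim up
the hierarchy. struct memcg_node, margin_pages() and the parent walk are
made-up stand-ins for the kernel's mem_cgroup, mem_cgroup_margin() and the
cgroup hierarchy, not the real API.)

#include <stdbool.h>
#include <stdio.h>

#define CHARGE_BATCH_PAGES 64   /* stand-in for MEMCG_CHARGE_BATCH */

struct memcg_node {
        unsigned long limit_pages;
        unsigned long usage_pages;
        struct memcg_node *parent;
};

static unsigned long margin_pages(const struct memcg_node *m)
{
        return m->limit_pages > m->usage_pages ?
               m->limit_pages - m->usage_pages : 0;
}

/* Reject a large-folio swapin charge if this memcg *or any ancestor*
 * is too close to its limit. */
static bool large_swapin_charge_ok(const struct memcg_node *m,
                                   unsigned long nr_pages)
{
        unsigned long need = nr_pages > CHARGE_BATCH_PAGES ?
                             nr_pages : CHARGE_BATCH_PAGES;

        for (; m; m = m->parent)
                if (margin_pages(m) < need)
                        return false;
        return true;
}

int main(void)
{
        struct memcg_node parent = { .limit_pages = 25600, .usage_pages = 25590 };
        struct memcg_node child  = { .limit_pages = 25600, .usage_pages = 1000,
                                     .parent = &parent };

        /* Child has plenty of room, but the parent is nearly full, so a
         * 16-page (64K) swapin should fall back rather than charge. */
        printf("%s\n", large_swapin_charge_ok(&child, 16) ? "charge" : "fall back");
        return 0;
}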
>
> Thanks Barry, I think this fixes the regression, and even gives an improvement!
> I think the below might be better:
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index c098fd7f5c5e..0a1ec55cc079 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4550,7 +4550,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
>          memcg = get_mem_cgroup_from_mm(mm);
>          rcu_read_unlock();
>
> -        ret = charge_memcg(folio, memcg, gfp);
> +        if (folio_test_large(folio) &&
> +            mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio)))
> +                ret = -ENOMEM;
> +        else
> +                ret = charge_memcg(folio, memcg, gfp);
>
>          css_put(&memcg->css);
>          return ret;
>
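(A side note on how the two checks differ: the sketch below, illustration
only, prints the headroom each version demands per folio order. CHARGE_BATCH
is a stand-in for MEMCG_CHARGE_BATCH, which I believe is 64 pages in current
kernels, and the shift plays the role of folio_nr_pages().)

#include <stdio.h>

#define CHARGE_BATCH 64UL       /* stand-in for MEMCG_CHARGE_BATCH, in pages */

int main(void)
{
        unsigned int order;

        for (order = 0; order <= 9; order++) {          /* 4K .. 2M folios */
                unsigned long nr = 1UL << order;        /* folio_nr_pages() */
                unsigned long batch_only = CHARGE_BATCH;
                unsigned long batch_or_folio = nr > CHARGE_BATCH ? nr : CHARGE_BATCH;

                printf("order %u (%4lu pages): %3lu vs %4lu pages of margin\n",
                       order, nr, batch_only, batch_or_folio);
        }
        return 0;
}

(If MEMCG_CHARGE_BATCH really is 64, the two checks behave identically for
the 16K-64K folios tested here; the max() only starts to matter for folios
larger than the batch, e.g. 2M THP.)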
>
> AMD 16K+32K THP=always
>   (A) mm-unstable
>   (B) mm-unstable + large folio zswapin series
>   (C) mm-unstable + large folio zswapin + no swap thrashing fix
>
> metric             (A)           (B)           (C)
> real         1m23.038s     1m23.050s     1m22.704s
> user        53m57.210s    53m53.437s    53m52.577s
> sys          7m24.592s     7m48.843s     7m22.519s
> zswpin          612070        999244        815934
> zswpout        2226403       2347979       2054980
> pgfault       20667366      20481728      20478690
> pgmajfault      385887        269117        309702
>
> AMD 16K+32K+64K THP=always
>   (A) mm-unstable
>   (B) mm-unstable + large folio zswapin series
>   (C) mm-unstable + large folio zswapin + no swap thrashing fix
>
> metric             (A)           (B)           (C)
> real         1m22.975s     1m23.266s     1m22.549s
> user        53m51.302s    53m51.069s    53m46.471s
> sys          7m40.168s     7m57.104s     7m25.012s
> zswpin          676492       1258573       1225703
> zswpout        2449839       2714767       2899178
> pgfault       17540746      17296555      17234663
> pgmajfault      429629        307495        287859
>
Thanks Usama and Barry for looking into this. It seems like this would
fix a regression with large folio swapin regardless of zswap. Can the
same result be reproduced on zram without this series?