Message-ID: <CAGsJ_4zp=E7izB5oAAiWu14UCqNCSvWhveGoHCP6Wr030SHH1A@mail.gmail.com>
Date: Thu, 24 Oct 2024 07:52:01 +1300
From: Barry Song <21cnbao@...il.com>
To: Usama Arif <usamaarif642@...il.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>, senozhatsky@...omium.org, minchan@...nel.org, 
	hanchuanhua@...o.com, v-songbaohua@...o.com, akpm@...ux-foundation.org, 
	linux-mm@...ck.org, hannes@...xchg.org, david@...hat.com, willy@...radead.org, 
	kanchana.p.sridhar@...el.com, nphamcs@...il.com, chengming.zhou@...ux.dev, 
	ryan.roberts@....com, ying.huang@...el.com, riel@...riel.com, 
	shakeel.butt@...ux.dev, kernel-team@...a.com, linux-kernel@...r.kernel.org, 
	linux-doc@...r.kernel.org
Subject: Re: [RFC 0/4] mm: zswap: add support for zswapin of large folios

On Thu, Oct 24, 2024 at 7:31 AM Usama Arif <usamaarif642@...il.com> wrote:
>
>
>
> On 23/10/2024 19:02, Yosry Ahmed wrote:
> > [..]
> >>>> I suspect the regression occurs because you're running an edge case
> >>>> where the memory cgroup stays nearly full most of the time (this isn't
> >>>> an inherent issue with large folio swap-in). As a result, swapping in
> >>>> mTHP quickly triggers a memcg overflow, causing a swap-out. The
> >>>> next swap-in then recreates the overflow, leading to a repeating
> >>>> cycle.
> >>>>
> >>>
> >>> Yes, agreed! Looking at the swap counters, I think this is what is going
> >>> on as well.
> >>>
> >>>> We need a way to stop the cup from repeatedly filling to the brim and
> >>>> overflowing. While not a definitive fix, the following change might help
> >>>> improve the situation:
> >>>>
> >>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >>>> index 17af08367c68..f2fa0eeb2d9a 100644
> >>>> --- a/mm/memcontrol.c
> >>>> +++ b/mm/memcontrol.c
> >>>> @@ -4559,7 +4559,10 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> >>>>                 memcg = get_mem_cgroup_from_mm(mm);
> >>>>         rcu_read_unlock();
> >>>>
> >>>> -       ret = charge_memcg(folio, memcg, gfp);
> >>>> +       if (folio_test_large(folio) && mem_cgroup_margin(memcg) < MEMCG_CHARGE_BATCH)
> >>>> +               ret = -ENOMEM;
> >>>> +       else
> >>>> +               ret = charge_memcg(folio, memcg, gfp);
> >>>>
> >>>>         css_put(&memcg->css);
> >>>>         return ret;
> >>>>
> >>>>
> >>>
> >>> The diff makes sense to me. Let me test later today and get back to you.
> >>>
> >>> Thanks!
> >>>
> >>>> Please confirm whether it makes the kernel build with the memcg
> >>>> limitation faster. If so, let's work together to figure out an
> >>>> official patch :-) The above code doesn't consider the parent
> >>>> memcg's overflow, so it's not an ideal fix.
> >>>>
> >>
> >> Thanks Barry, I think this fixes the regression, and even gives an improvement!
> >> I think the below might be better:
> >>
> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >> index c098fd7f5c5e..0a1ec55cc079 100644
> >> --- a/mm/memcontrol.c
> >> +++ b/mm/memcontrol.c
> >> @@ -4550,7 +4550,11 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
> >>                 memcg = get_mem_cgroup_from_mm(mm);
> >>         rcu_read_unlock();
> >>
> >> -       ret = charge_memcg(folio, memcg, gfp);
> >> +       if (folio_test_large(folio) &&
> >> +           mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio)))
> >> +               ret = -ENOMEM;
> >> +       else
> >> +               ret = charge_memcg(folio, memcg, gfp);
> >>
> >>         css_put(&memcg->css);
> >>         return ret;
> >>
> >>
> >> AMD 16K+32K THP=always
> >> metric         mm-unstable      mm-unstable + large folio zswapin series    mm-unstable + large folio zswapin + no swap thrashing fix
> >> real           1m23.038s        1m23.050s                                   1m22.704s
> >> user           53m57.210s       53m53.437s                                  53m52.577s
> >> sys            7m24.592s        7m48.843s                                   7m22.519s
> >> zswpin         612070           999244                                      815934
> >> zswpout        2226403          2347979                                     2054980
> >> pgfault        20667366         20481728                                    20478690
> >> pgmajfault     385887           269117                                      309702
> >>
> >> AMD 16K+32K+64K THP=always
> >> metric         mm-unstable      mm-unstable + large folio zswapin series   mm-unstable + large folio zswapin + no swap thrashing fix
> >> real           1m22.975s        1m23.266s                                  1m22.549s
> >> user           53m51.302s       53m51.069s                                 53m46.471s
> >> sys            7m40.168s        7m57.104s                                  7m25.012s
> >> zswpin         676492           1258573                                    1225703
> >> zswpout        2449839          2714767                                    2899178
> >> pgfault        17540746         17296555                                   17234663
> >> pgmajfault     429629           307495                                     287859
> >>
> >
> > Thanks Usama and Barry for looking into this. It seems like this would
> > fix a regression with large folio swapin regardless of zswap. Can the
> > same result be reproduced on zram without this series?
>
>
> Yes, it's a regression in large folio swapin support regardless of zswap/zram.
>
> We need to run 3 tests: one with (probably) the below diff to remove large folio
> support, one with current upstream, and one with upstream + the swap thrashing fix.
>
> We only use zswap and don't have a zram setup (and I am a bit lazy to create one :)).
> Any zram volunteers to try this?

Hi Usama,

I tried a quick experiment:

echo 1 > /sys/module/zswap/parameters/enabled
echo 0 > /sys/module/zswap/parameters/enabled

This was to test the zRAM scenario. Enabling zswap even
once disables mTHP swap-in. :)

I noticed a similar regression with zRAM alone, but the change resolved
the issue and even sped up the kernel build compared to the setup without
mTHP swap-in.
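
For anyone volunteering to reproduce this, a minimal zram-as-swap setup
would be something like the below (just a sketch; the 8G size and lzo-rle
compressor are arbitrary choices, adjust them to your machine):

modprobe zram
echo lzo-rle > /sys/block/zram0/comp_algorithm
echo 8G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0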

However, I’m still working on a proper patch to address this. The current
approach:

mem_cgroup_margin(memcg) < max(MEMCG_CHARGE_BATCH, folio_nr_pages(folio))

isn’t sufficient, as it doesn’t cover cases where group A contains group B, and
we’re operating within group B. The problem occurs not at the boundary of
group B but at the boundary of group A.
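
One direction is to require a sufficient margin at every level of the
hierarchy rather than only at the leaf. A rough, untested sketch, where
mem_cgroup_swapin_margin_ok() is a hypothetical helper living in
mm/memcontrol.c (so that mem_cgroup_margin() is visible):

/*
 * Untested sketch: the overflow may happen at an ancestor's limit,
 * so check the margin of every non-root ancestor, not just the memcg
 * we are charging against.
 */
static bool mem_cgroup_swapin_margin_ok(struct mem_cgroup *memcg,
					unsigned int nr_pages)
{
	unsigned long min_margin = max_t(unsigned long, MEMCG_CHARGE_BATCH,
					 nr_pages);

	for (; memcg && !mem_cgroup_is_root(memcg);
	     memcg = parent_mem_cgroup(memcg)) {
		if (mem_cgroup_margin(memcg) < min_margin)
			return false;
	}
	return true;
}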

I believe there’s still room for improvement. For example, if a 64KB charge
attempt fails, there’s no need to waste time trying 32KB or 16KB. We can
directly fall back to 4KB, as 32KB and 16KB will also fail based on our
margin detection logic.
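
In alloc_swap_folio(), that could look roughly like the below (a sketch
against the existing order-scanning loop; the hypothetical margin helper
above is assumed, and looking up memcg via get_mem_cgroup_from_mm() plus
the matching css_put() are elided):

	/* Try allocating the highest of the remaining orders. */
	gfp = vma_thp_gfp_mask(vma);
	order = highest_order(orders);
	while (orders) {
		/*
		 * If the margin is too small for this order, every smaller
		 * mTHP order fails the same check (the MEMCG_CHARGE_BATCH
		 * floor dominates), so skip 32KB/16KB and go straight to
		 * the order-0 fallback.
		 */
		if (!mem_cgroup_swapin_margin_ok(memcg, 1 << order))
			goto fallback;
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio)
			return folio;
		order = next_order(&orders, order);
	}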

>
> diff --git a/mm/memory.c b/mm/memory.c
> index fecdd044bc0b..62f6b087beb3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4124,6 +4124,8 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>         gfp_t gfp;
>         int order;
>
> +       goto fallback;
> +
>         /*
>          * If uffd is active for the vma we need per-page fault fidelity to
>          * maintain the uffd semantics.

Thanks
Barry
