Message-ID: <CAGsJ_4yjU0NmQe0cM2xDkMYVdAWRc2Q1FUMGxpo8cVkEt5ifVQ@mail.gmail.com>
Date: Mon, 10 Nov 2025 15:21:35 +0800
From: Barry Song <21cnbao@...il.com>
To: Kairui Song <ryncsn@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>, 
	Baoquan He <bhe@...hat.com>, Chris Li <chrisl@...nel.org>, Nhat Pham <nphamcs@...il.com>, 
	Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>, 
	David Hildenbrand <david@...hat.com>, Youngjun Park <youngjun.park@....com>, 
	Hugh Dickins <hughd@...gle.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>, 
	"Huang, Ying" <ying.huang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>, 
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, 
	"Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 13/19] mm, swap: remove workaround for unsynchronized swap
 map cache state

On Sun, Nov 9, 2025 at 10:18 PM Kairui Song <ryncsn@...il.com> wrote:
>
> On Fri, Nov 7, 2025 at 11:07 AM Barry Song <21cnbao@...il.com> wrote:
> >
> > >  struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
> > >                                      struct mempolicy *mpol, pgoff_t ilx,
> > > -                                    bool *new_page_allocated,
> > > -                                    bool skip_if_exists)
> > > +                                    bool *new_page_allocated)
> > >  {
> > >         struct swap_info_struct *si = __swap_entry_to_info(entry);
> > >         struct folio *folio;
> > > @@ -548,8 +542,7 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
> > >         if (!folio)
> > >                 return NULL;
> > >         /* Try add the new folio, returns existing folio or NULL on failure. */
> > > -       result = __swap_cache_prepare_and_add(entry, folio, gfp_mask,
> > > -                                             false, skip_if_exists);
> > > +       result = __swap_cache_prepare_and_add(entry, folio, gfp_mask, false);
> > >         if (result == folio)
> > >                 *new_page_allocated = true;
> > >         else
> > > @@ -578,7 +571,7 @@ struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
> > >         unsigned long nr_pages = folio_nr_pages(folio);
> > >
> > >         entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
> > > -       swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true, false);
> > > +       swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
> > >         if (swapcache == folio)
> > >                 swap_read_folio(folio, NULL);
> > >         return swapcache;
> >
> > I wonder if we could also drop the "charged" parameter — it doesn’t
> > seem difficult to move the charging step before
> > __swap_cache_prepare_and_add(), even for swap_cache_alloc_folio()?
>
> Hi Barry, thanks for the review and suggestion.
>
> It may cause much more serious cgroup thrashing. Charging may trigger
> reclaim, so raced swapins would have a much larger race window and
> cause a lot of repeated folio alloc / charge.
>
> This param exists because anon / shmem do their own charging for
> large folio swapin and then insert the folio into the swap cache,
> which already causes more memory pressure. I think ideally we want to
> unify all alloc & charging for swap-in folio allocation, and have a
> swap_cache_alloc_folio that supports `orders`. For raced swapins,
> only one will successfully insert a folio into the swap cache and
> charge it, which should make the race window very tiny, or maybe
> avoid redundant folio allocation completely with further work. I did
> some tests, and they show this improves memory usage and avoids some
> OOMs under pressure for (m)THP.

This is quite interesting. I wonder if the change below could help reduce
mTHP swap thrashing. The fallback order-0 path also charges after
swap_cache_add_folio(), since order-0 pages are typically the ones
triggering memcg reclamation.

diff --git a/mm/memory.c b/mm/memory.c
index 27d91ae3648a..d97f1a8a5ca3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4470,11 +4470,13 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
                return NULL;

        entry = pte_to_swp_entry(vmf->orig_pte);
+#if 0
        if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
                                           GFP_KERNEL, entry)) {
                folio_put(folio);
                return NULL;
        }
+#endif

        return folio;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 2bf72d58f6ee..9d0b55deacc6 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -605,7 +605,7 @@ struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
        unsigned long nr_pages = folio_nr_pages(folio);

        entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
-       swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
+       swapcache = __swap_cache_prepare_and_add(entry, folio, 0, folio_order(folio));
        if (swapcache == folio)
                swap_read_folio(folio, NULL);
        return swapcache;
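
(Not part of the diff above, just to make the intent concrete: a rough
sketch of where the charge could end up if it moves after the swap cache
insertion, so only the folio that wins the race gets charged. The extra
mm argument and the error unwind are assumptions for illustration, not
something in this series.)

struct folio *swapin_folio(swp_entry_t entry, struct folio *folio,
                           struct mm_struct *mm)
{
        struct folio *swapcache;
        pgoff_t offset = swp_offset(entry);
        unsigned long nr_pages = folio_nr_pages(folio);

        entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
        swapcache = __swap_cache_prepare_and_add(entry, folio, 0,
                                                 folio_order(folio));
        if (swapcache == folio) {
                /* Only the winner of the race pays the charge. */
                if (mem_cgroup_swapin_charge_folio(folio, mm, GFP_KERNEL,
                                                   entry)) {
                        /*
                         * Hypothetical unwind; a real version would also
                         * need to drop the folio from the swap cache.
                         */
                        return NULL;
                }
                swap_read_folio(folio, NULL);
        }
        return swapcache;
}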

>
> BTW, with the current SWAP_HAS_CACHE design, we also have redundant
> folio alloc for order 0 under global pressure, as the folio alloc is
> done before setting SWAP_HAS_CACHE. But setting SWAP_HAS_CACHE first
> and then doing the folio alloc would increase the chance of hitting
> the idle/busy loop on SWAP_HAS_CACHE, which is also kind of
> problematic. We should be able to clean this up in later phases.
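
To make that trade-off concrete, the two orderings look roughly like this
(simplified sketch, not the actual code; swap_claim_has_cache() is a
made-up helper standing in for whatever sets SWAP_HAS_CACHE):

        /*
         * Current ordering: allocate first, then try to claim
         * SWAP_HAS_CACHE. Under global pressure several racers can each
         * allocate a folio, but only one wins, so the losers free their
         * folios again (redundant alloc and possible reclaim).
         */
        folio = folio_alloc(gfp, 0);
        if (!swap_claim_has_cache(entry))
                folio_put(folio);       /* loser throws the folio away */

        /*
         * Flipped ordering: claim SWAP_HAS_CACHE first, then allocate.
         * No redundant alloc, but other swapins hitting the same entry
         * now spin on SWAP_HAS_CACHE while the owner is still inside a
         * possibly slow allocation, which is the busy-loop concern above.
         */
        if (swap_claim_has_cache(entry))
                folio = folio_alloc(gfp, 0);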

Thanks
Barry
