Message-ID: <CAMgjq7BU5S3cPQSRA2+RriPRNEZzZZK-VeuRiMtAzOgva-ZUKw@mail.gmail.com>
Date: Mon, 17 Nov 2025 00:01:29 +0800
From: Kairui Song <ryncsn@...il.com>
To: Barry Song <21cnbao@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Baoquan He <bhe@...hat.com>, Chris Li <chrisl@...nel.org>, Nhat Pham <nphamcs@...il.com>,
Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>,
David Hildenbrand <david@...hat.com>, Youngjun Park <youngjun.park@....com>,
Hugh Dickins <hughd@...gle.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Huang, Ying" <ying.huang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 13/19] mm, swap: remove workaround for unsynchronized swap
map cache state
On Mon, Nov 10, 2025 at 3:21 PM Barry Song <21cnbao@...il.com> wrote:
>
> On Sun, Nov 9, 2025 at 10:18 PM Kairui Song <ryncsn@...il.com> wrote:
> >
> > On Fri, Nov 7, 2025 at 11:07 AM Barry Song <21cnbao@...il.com> wrote:
> > >
> > > > struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
> > > > struct mempolicy *mpol, pgoff_t ilx,
> > > > - bool *new_page_allocated,
> > > > - bool skip_if_exists)
> > > > + bool *new_page_allocated)
> > > > {
> > > > struct swap_info_struct *si = __swap_entry_to_info(entry);
> > > > struct folio *folio;
> > > > @@ -548,8 +542,7 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
> > > > if (!folio)
> > > > return NULL;
> > > > /* Try add the new folio, returns existing folio or NULL on failure. */
> > > > - result = __swap_cache_prepare_and_add(entry, folio, gfp_mask,
> > > > - false, skip_if_exists);
> > > > + result = __swap_cache_prepare_and_add(entry, folio, gfp_mask, false);
> > > > if (result == folio)
> > > > *new_page_allocated = true;
> > > > else
> > > > @@ -578,7 +571,7 @@ struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
> > > > unsigned long nr_pages = folio_nr_pages(folio);
> > > >
> > > > entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
> > > > - swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true, false);
> > > > + swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
> > > > if (swapcache == folio)
> > > > swap_read_folio(folio, NULL);
> > > > return swapcache;
> > >
> > > I wonder if we could also drop the "charged" — it doesn’t seem
> > > difficult to move the charging step before
> > > __swap_cache_prepare_and_add(), even for swap_cache_alloc_folio()?
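> > >
> > > Something like this, I mean (untested sketch, just to illustrate the
> > > reordering; assuming the folio_alloc_mpol() allocation path and the
> > > existing mem_cgroup_swapin_charge_folio() helper):
> > >
> > > 	folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
> > > 	if (!folio)
> > > 		return NULL;
> > > 	/* Charge up front so __swap_cache_prepare_and_add() never has to */
> > > 	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry)) {
> > > 		folio_put(folio);
> > > 		return NULL;
> > > 	}
> > > 	result = __swap_cache_prepare_and_add(entry, folio, gfp_mask, true);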
> >
> > Hi Barry, thanks for the review and suggestion.
> >
> > It may cause much more serious cgroup thrashing. Charging may cause
> > reclaim, so raced swapins will have a much larger race window and cause
> > a lot of repeated folio allocation / charging.
> >
> > This param exists because anon / shmem do their own charging for large
> > folio swapin and then insert the folio into the swap cache, which
> > already causes more memory pressure. I think ideally we want to unify
> > all allocation & charging for swapin folio allocation, and have a
> > swap_cache_alloc_folio that supports `orders`. For raced swapins only
> > one will successfully insert a folio into the swap cache and charge
> > it, which should make the race window very tiny, or maybe avoid
> > redundant folio allocation completely with further work. I did some
> > tests and they show that it improves memory usage and avoids some
> > OOMs under pressure for (m)THP.
>
> This is quite interesting. I wonder if the change below could help reduce
> mTHP swap thrashing. With it, the fallback order-0 path also charges after
> swap_cache_add_folio(), as order-0 pages are typically the ones triggering
> memcg reclamation.
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 27d91ae3648a..d97f1a8a5ca3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4470,11 +4470,13 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
> return NULL;
>
> entry = pte_to_swp_entry(vmf->orig_pte);
> +#if 0
> if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> GFP_KERNEL, entry)) {
> folio_put(folio);
> return NULL;
> }
> +#endif
>
> return folio;
> }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 2bf72d58f6ee..9d0b55deacc6 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -605,7 +605,7 @@ struct folio *swapin_folio(swp_entry_t entry, struct folio *folio)
> unsigned long nr_pages = folio_nr_pages(folio);
>
> entry = swp_entry(swp_type(entry), round_down(offset, nr_pages));
> - swapcache = __swap_cache_prepare_and_add(entry, folio, 0, true);
> + swapcache = __swap_cache_prepare_and_add(entry, folio, 0, folio_order(folio));
> if (swapcache == folio)
> swap_read_folio(folio, NULL);
> return swapcache;
Yeah, that will surely improve the thrashing issue. Having a
`folio_order` check as the charged parameter looks strange though.
Ideally we will have swap_cache_alloc_folio do all the folio
allocation, so there won't be many different swapin folio charging
callsites (currently we have more than three: anon THP, anon order 0,
shmem THP, and the common order 0 path in swap_cache_alloc_folio). That
will also help remove a WARN_ON check in Patch 3.
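On the caller side that would collapse to something like this
(hypothetical, ignoring the mpol / ilx plumbing and assuming a
swap_cache_alloc_folio that takes an order):

	/* e.g. anon swapin, no separate __alloc_swap_folio() + charge */
	folio = swap_cache_alloc_folio(entry, GFP_HIGHUSER_MOVABLE, mpol,
				       ilx, order, &page_allocated);

so all the charging callsites funnel through the one place that already
knows whether it won the swap cache insertion race.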