Message-ID: <CAMgjq7A1hqQ+yboCtT+JF=5Tfijph2s4ooSqNwnexQ9kwJOCtA@mail.gmail.com>
Date: Tue, 16 Sep 2025 00:24:39 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chris Mason <clm@...a.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Hugh Dickins <hughd@...gle.com>, Chris Li <chrisl@...nel.org>,
Barry Song <baohua@...nel.org>, Baoquan He <bhe@...hat.com>, Nhat Pham <nphamcs@...il.com>,
Kemeng Shi <shikemeng@...weicloud.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Ying Huang <ying.huang@...ux.alibaba.com>, Johannes Weiner <hannes@...xchg.org>,
David Hildenbrand <david@...hat.com>, Yosry Ahmed <yosryahmed@...gle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Zi Yan <ziy@...dia.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 14/15] mm, swap: implement dynamic allocation of swap table
On Mon, Sep 15, 2025 at 11:55 PM Chris Mason <clm@...a.com> wrote:
>
> On Thu, 11 Sep 2025 00:08:32 +0800 Kairui Song <ryncsn@...il.com> wrote:
>
> > From: Kairui Song <kasong@...cent.com>
> >
> > The swap table is now cluster based, which means a free cluster can free
> > its table, since no one should modify it.
> >
> > There could be speculative readers, like swap cache lookups; protect
> > against them by making the tables RCU protected. All swap tables should
> > be filled with null entries before being freed, so such readers will
> > either see a NULL pointer or a null-filled table being lazily freed.
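
For illustration, the reader side described above roughly follows the
usual RCU pattern. A minimal sketch (the entries[] layout and the local
variable names are stand-ins, not the actual mm/swap_table.h interface):

	struct swap_table *table;
	unsigned long swp_tb = 0;	/* treat 0 as a null entry here */

	rcu_read_lock();
	table = rcu_dereference(ci->table);
	if (table)	/* a free cluster may have dropped its table */
		swp_tb = READ_ONCE(table->entries[ci_off]);
	rcu_read_unlock();
	/*
	 * Either way the lookup sees "nothing": a NULL table pointer, or a
	 * null-filled table that is still waiting out its RCU grace period.
	 */
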
> >
> > On allocation, allocate the table when a cluster is used by any order.
> >
> > This way, we can reduce the memory usage of a large swap device
> > significantly.
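
As a rough, illustrative estimate (not from the patch itself): with 4 KiB
pages, 512-slot clusters and an 8-byte table entry per slot, each cluster
table is about 4 KiB, so a 64 GiB swap device would otherwise pin roughly
128 MiB of table memory up front, while with dynamic allocation only the
clusters actually in use pay that cost.
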
> >
> > The idea of dynamically releasing unused swap cluster data was initially
> > suggested by Chris Li while proposing the cluster swap allocator, and it
> > suits the swap table idea very well.
> >
> > Co-developed-by: Chris Li <chrisl@...nel.org>
> > Signed-off-by: Chris Li <chrisl@...nel.org>
> > Signed-off-by: Kairui Song <kasong@...cent.com>
> > Acked-by: Chris Li <chrisl@...nel.org>
> > ---
> >  mm/swap.h       |   2 +-
> >  mm/swap_state.c |   9 +--
> >  mm/swap_table.h |  37 ++++++++-
> >  mm/swapfile.c   | 202 ++++++++++++++++++++++++++++++++++++++----------
> >  4 files changed, 199 insertions(+), 51 deletions(-)
> >
>
> [ ... ]
>
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 89659928465e..faf867a6c5c1 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> >
>
> [ ... ]
>
> > +/*
> > + * Allocating a swap table may need to sleep, which can lead to migration,
> > + * so attempt an atomic allocation first, then fall back and handle the
> > + * potential race.
> > + */
> > +static struct swap_cluster_info *
> > +swap_cluster_alloc_table(struct swap_info_struct *si,
> > + struct swap_cluster_info *ci,
> > + int order)
> > {
> > - unsigned int ci_off;
> > - unsigned long swp_tb;
> > + struct swap_cluster_info *pcp_ci;
> > + struct swap_table *table;
> > + unsigned long offset;
> >
> > - if (!ci->table)
> > - return;
> > + /*
> > + * Only cluster isolation from the allocator does table allocation.
> > + * Swap allocator uses a percpu cluster and holds the local lock.
> > + */
> > + lockdep_assert_held(&ci->lock);
> > + lockdep_assert_held(&this_cpu_ptr(&percpu_swap_cluster)->lock);
> > +
> > + table = kmem_cache_zalloc(swap_table_cachep,
> > + __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
> > + if (table) {
> > + rcu_assign_pointer(ci->table, table);
> > + return ci;
> > + }
> > +
> > + /*
> > + * Try a sleep allocation. Each isolated free cluster may cause
> > + * a sleep allocation, but there is a limited number of them, so
> > + * the potential recursive allocation should be limited.
> > + */
> > + spin_unlock(&ci->lock);
> > + if (!(si->flags & SWP_SOLIDSTATE))
> > + spin_unlock(&si->global_cluster_lock);
> > + local_unlock(&percpu_swap_cluster.lock);
> > + table = kmem_cache_zalloc(swap_table_cachep, __GFP_HIGH | GFP_KERNEL);
> >
> > - for (ci_off = 0; ci_off < SWAPFILE_CLUSTER; ci_off++) {
> > - swp_tb = __swap_table_get(ci, ci_off);
> > - if (!swp_tb_is_null(swp_tb))
> > - pr_err_once("swap: unclean swap space on swapoff: 0x%lx",
> > - swp_tb);
> > + local_lock(&percpu_swap_cluster.lock);
> > + if (!(si->flags & SWP_SOLIDSTATE))
> > + spin_lock(&si->global_cluster_lock);
> > + /*
> > + * Back to atomic context. First, check if we migrated to a new
> > + * CPU with a usable percpu cluster. If so, try using that instead.
> > + * No need to check it for spinning devices, as swap allocation is
> > + * serialized by the global lock on them.
> > + *
> > + * The is_usable check is a bit rough, but ensures order 0 success.
> > + */
> > + offset = this_cpu_read(percpu_swap_cluster.offset[order]);
> > + if ((si->flags & SWP_SOLIDSTATE) && offset) {
> > + pcp_ci = swap_cluster_lock(si, offset);
> > + if (cluster_is_usable(pcp_ci, order) &&
> > + pcp_ci->count < SWAPFILE_CLUSTER) {
> > + ci = pcp_ci;
> ^^^^^^^^^^^^^
> ci came from the caller, and in the case of isolate_lock_cluster() they
> had just removed it from a list. We overwrite ci and return something
> different.
Yes, that's expected. See the comment above. We have just dropped the
local lock, so it's possible that we migrated to another CPU which has
its own percpu cached ci (percpu_swap_cluster.offset).

To avoid fragmentation, we drop the isolated ci and use the percpu ci
instead. But you are right that I need to add the ci back to the list,
or it will be leaked. Thanks!
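
A rough sketch of what that follow-up could look like (illustrative only:
free_cluster() is a stand-in for whatever helper the actual fix uses to
put the cluster back, swap_cluster_unlock() is assumed as the counterpart
of swap_cluster_lock(), and the locking around ci is elided):

	offset = this_cpu_read(percpu_swap_cluster.offset[order]);
	if ((si->flags & SWP_SOLIDSTATE) && offset) {
		pcp_ci = swap_cluster_lock(si, offset);
		if (cluster_is_usable(pcp_ci, order) &&
		    pcp_ci->count < SWAPFILE_CLUSTER) {
			/*
			 * Prefer the percpu cluster, but first return the
			 * isolated ci to the free list so it is not leaked.
			 */
			free_cluster(si, ci);
			ci = pcp_ci;
		} else {
			swap_cluster_unlock(pcp_ci);
		}
	}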