Message-ID: <87r0m1ftvu.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Wed, 11 Oct 2023 16:25:25 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Gao Xiang <xiang@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [RFC PATCH v1 2/2] mm: swap: Swap-out small-sized THP without
splitting

Ryan Roberts <ryan.roberts@....com> writes:
> The upcoming anonymous small-sized THP feature enables performance
> improvements by allocating large folios for anonymous memory. However,
> I've observed that on an arm64 system running a parallel workload (e.g.
> kernel compilation) across many cores, under high memory pressure, the
> speed regresses. This is due to bottlenecking on the increased number
> of TLBIs issued for all the extra folio splitting.
>
> Therefore, solve this regression by adding support for swapping out
> small-sized THP without needing to split the folio, just as is already
> done for PMD-sized THP. This change only applies when CONFIG_THP_SWAP
> is enabled and the swap backing store is a non-rotating block device -
> these are the same constraints as for the existing PMD-sized THP
> swap-out support.
>
> Note that no attempt is made to swap in THP here - this is still done
> page-by-page, as for PMD-sized THP.
>
> The main change here is to improve the swap entry allocator so that it
> can allocate any power-of-2 number of contiguous entries in the range
> [4, (1 << PMD_ORDER)]. This is done by allocating a cluster for each
> distinct order and allocating sequentially from it until the cluster
> is full. This ensures that we don't need to search the map, and we get
> no fragmentation due to alignment padding for different orders in the
> cluster. If there is no current cluster for a given order, we attempt
> to allocate a free cluster from the list. If there are no free
> clusters, we fail the allocation and the caller falls back to
> splitting the folio and allocating individual entries (as per the
> existing PMD-sized THP fallback).
>
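To make the per-order allocation scheme described above concrete, here is a
minimal, untested userspace model of it (all names - alloc_large_entries,
CLUSTER_SIZE, the toy free-cluster counter - are illustrative, not taken
from the patch):

/*
 * Userspace model of the allocator above: each order has its own
 * "current cluster"; entries are handed out sequentially until the
 * cluster is exhausted, then a new free cluster is claimed, or the
 * allocation fails and the caller would split the folio.
 */
#include <limits.h>
#include <stdio.h>

#define CLUSTER_SIZE	512	/* swap entries per cluster */
#define MAX_ORDER	9	/* stand-in for PMD_ORDER */
#define NR_CLUSTERS	16	/* toy stand-in for the free cluster list */

static unsigned int large_next[MAX_ORDER];	/* index is (order - 1) */
static unsigned int next_free_cluster;

/* Return the first offset of a run of (1 << order) entries, or
 * UINT_MAX if no cluster is available. */
static unsigned int alloc_large_entries(int order)
{
	unsigned int nr = 1u << order;
	unsigned int *next = &large_next[order - 1];

	if (*next == UINT_MAX) {
		if (next_free_cluster >= NR_CLUSTERS)
			return UINT_MAX;	/* no free cluster: fail */
		*next = next_free_cluster++ * CLUSTER_SIZE;
	}

	unsigned int offset = *next;

	*next += nr;
	if (*next % CLUSTER_SIZE == 0)
		*next = UINT_MAX;	/* cluster full: need a new one */
	return offset;
}

int main(void)
{
	for (int i = 0; i < MAX_ORDER; i++)
		large_next[i] = UINT_MAX;	/* no current cluster yet */

	for (int i = 0; i < 4; i++)
		printf("order-2 run starts at offset %u\n",
		       alloc_large_entries(2));
	return 0;
}

Because each power-of-2 run divides CLUSTER_SIZE exactly, a cluster
dedicated to a single order packs with no padding, which is where the
"no fragmentation due to alignment padding" claim above comes from.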
> As far as I can tell, this should not cause any extra fragmentation
> concerns, given how similar it is to the existing PMD-sized THP
> allocation mechanism. There will, however, be up to (PMD_ORDER-1)
> clusters in concurrent use; in practice, the number of orders in use
> will be small.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> ---
> include/linux/swap.h | 7 ++++++
> mm/swapfile.c | 60 +++++++++++++++++++++++++++++++++-----------
> mm/vmscan.c | 10 +++++---
> 3 files changed, 59 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a073366a227c..fc55b760aeff 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -320,6 +320,13 @@ struct swap_info_struct {
> */
> struct work_struct discard_work; /* discard worker */
> struct swap_cluster_list discard_clusters; /* discard clusters list */
> + unsigned int large_next[PMD_ORDER]; /*
> + * next free offset within current
> + * allocation cluster for large
> + * folios, or UINT_MAX if no current
> + * cluster. Index is (order - 1).
> + * Only when cluster_info is used.
> + */
I think that it is better to make this per-CPU, that is, to extend the
percpu_cluster mechanism; otherwise, we may have a scalability issue.
And this should be enclosed in CONFIG_THP_SWAP. A rough sketch of what
I mean follows.
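Something along these lines, perhaps (untested; the percpu_large_cluster
type and percpu_large field name are illustrative, not existing kernel
definitions):

#ifdef CONFIG_THP_SWAP
/*
 * Per-CPU cursors, one per order; index is (order - 1) as in the
 * patch. Each CPU advances its own cursor, so concurrent swap-outs
 * on different CPUs don't contend on a single shared offset.
 */
struct percpu_large_cluster {
	unsigned int large_next[PMD_ORDER];
};
#endif

struct swap_info_struct {
	/* ... existing fields ... */
#ifdef CONFIG_THP_SWAP
	struct percpu_large_cluster __percpu *percpu_large;
#endif
};

The per-CPU area would then be allocated with alloc_percpu() during
swapon, mirroring how percpu_cluster is set up today, and the whole
thing compiles away when CONFIG_THP_SWAP is off.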
> struct plist_node avail_lists[]; /*
> * entries in swap_avail_heads, one
> * entry per node.
--
Best Regards,
Huang, Ying