Message-ID: <ab9070f8-a949-2fb0-5f7b-e392f3242928@google.com>
Date: Sun, 25 Aug 2024 15:05:30 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
cc: akpm@...ux-foundation.org, hughd@...gle.com, willy@...radead.org,
david@...hat.com, wangkefeng.wang@...wei.com, chrisl@...nel.org,
ying.huang@...el.com, 21cnbao@...il.com, ryan.roberts@....com,
shy828301@...il.com, ziy@...dia.com, ioworker0@...il.com,
da.gomez@...sung.com, p.raghav@...sung.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 6/9] mm: shmem: support large folio allocation for
shmem_replace_folio()
On Mon, 12 Aug 2024, Baolin Wang wrote:
> To support large folio swapin for shmem in the following patches, add
> large folio allocation for the new replacement folio in shmem_replace_folio().
> Moreover, large folios occupy N consecutive entries in the swap cache
> instead of using multi-index entries like the page cache, therefore
> we should replace each of those consecutive entries in the swap cache
> instead of using shmem_replace_entry().
>
> Also update the statistics and the folio reference count using the
> number of pages in the folio.
>
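Not part of the patch, but to illustrate the shape of the per-entry
replacement described above: an untested sketch, reusing the xas/nr_pages
names from the hunk below; the patch's actual error handling may differ.

	/*
	 * Each of the nr_pages consecutive swap cache slots points to
	 * the folio itself, so store the new folio in every slot rather
	 * than in one multi-index entry.
	 */
	xas_lock_irq(&xas);
	for (i = 0; i < nr_pages; i++) {
		xas_store(&xas, new);
		xas_next(&xas);
	}
	xas_unlock_irq(&xas);

And presumably the reference count then scales with folio size too,
e.g. folio_ref_add(new, nr_pages) in place of folio_get(new).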
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/shmem.c | 54 +++++++++++++++++++++++++++++++-----------------------
> 1 file changed, 31 insertions(+), 23 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f6bab42180ea..d94f02ad7bd1 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1889,28 +1889,24 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
> static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
> struct shmem_inode_info *info, pgoff_t index)
> {
> - struct folio *old, *new;
> - struct address_space *swap_mapping;
> - swp_entry_t entry;
> - pgoff_t swap_index;
> - int error;
> -
> - old = *foliop;
> - entry = old->swap;
> - swap_index = swap_cache_index(entry);
> - swap_mapping = swap_address_space(entry);
> + struct folio *new, *old = *foliop;
> + swp_entry_t entry = old->swap;
> + struct address_space *swap_mapping = swap_address_space(entry);
> + pgoff_t swap_index = swap_cache_index(entry);
> + XA_STATE(xas, &swap_mapping->i_pages, swap_index);
> + int nr_pages = folio_nr_pages(old);
> + int error = 0, i;
>
> /*
> * We have arrived here because our zones are constrained, so don't
> * limit chance of success by further cpuset and node constraints.
> */
> gfp &= ~GFP_CONSTRAINT_MASK;
> - VM_BUG_ON_FOLIO(folio_test_large(old), old);
> - new = shmem_alloc_folio(gfp, 0, info, index);
> + new = shmem_alloc_folio(gfp, folio_order(old), info, index);
It is not clear to me whether folio_order(old) will ever be more than 0
here: but if it can be, then care will need to be taken over the gfp flags,
to make sure they are suited to allocating a large folio; and there will
need to be (could be awkward!) fallback to order 0 when that allocation fails.
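Something like this is the shape I mean: untested, and eliding whatever
gfp adjustment the order>0 attempt would need (limit_gfp_mask() or
similar; that part is my assumption, not from the patch):

	new = shmem_alloc_folio(gfp, folio_order(old), info, index);
	if (!new && folio_order(old))
		/*
		 * Awkward fallback: the swap cache still holds nr_pages
		 * consecutive entries, but new would now be order 0.
		 */
		new = shmem_alloc_folio(gfp, 0, info, index);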
My own testing never comes to shmem_replace_folio(): it was originally for
one low-end graphics driver; but IIRC there's now a more common case for it.
Hugh