Message-ID: <CAGsJ_4wFNDQ8vJDjt70kG+pShi13_BDa7P2=DtoNZ8Ypy+Gd0w@mail.gmail.com>
Date: Wed, 7 Aug 2024 20:02:24 +1200
From: Barry Song <21cnbao@...il.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, hughd@...gle.com, willy@...radead.org, 
	david@...hat.com, wangkefeng.wang@...wei.com, chrisl@...nel.org, 
	ying.huang@...el.com, ryan.roberts@....com, shy828301@...il.com, 
	ziy@...dia.com, ioworker0@...il.com, da.gomez@...sung.com, 
	p.raghav@...sung.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 02/10] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting

On Wed, Aug 7, 2024 at 7:31 PM Baolin Wang
<baolin.wang@...ux.alibaba.com> wrote:
>
> To support shmem large folio swap operations, add a new parameter to
> swap_shmem_alloc() that allows batch SWAP_MAP_SHMEM flag setting for
> shmem swap entries.
>
> While we are at it, use folio_nr_pages() to get the number of pages in
> the folio, as preparation for handling large folios.
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>

Reviewed-by: Barry Song <baohua@...nel.org>
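
For readers following along, here is a minimal userspace sketch of what
the batched call amounts to: swap_shmem_alloc(entry, nr) asks
__swap_duplicate() to tag nr consecutive swap-map slots, starting at the
entry's offset, as shmem-owned in one pass instead of once per subpage.
This is not kernel code; swap_map, SWAP_MAP_SHMEM and mark_shmem_batch()
below are simplified stand-ins used only for illustration.

#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's per-device swap map. */
#define SWAP_MAP_SHMEM	0xbf		/* "owned by shmem/tmpfs" marker */

static unsigned char swap_map[1024];	/* one usage byte per swap slot */

/*
 * Toy model of the batched operation: tag nr consecutive slots,
 * starting at first_offset, as belonging to shmem/tmpfs.
 */
static void mark_shmem_batch(size_t first_offset, int nr)
{
	for (int i = 0; i < nr; i++) {
		/* in this model the slots were just allocated and are free */
		assert(swap_map[first_offset + i] == 0);
		swap_map[first_offset + i] = SWAP_MAP_SHMEM;
	}
}

int main(void)
{
	/* e.g. a 16-page shmem mTHP whose swap entries are slots 32..47 */
	mark_shmem_batch(32, 16);
	return 0;
}

The real __swap_duplicate() does the equivalent walk over the swap map
under the appropriate locking.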

> ---
>  include/linux/swap.h | 4 ++--
>  mm/shmem.c           | 6 ++++--
>  mm/swapfile.c        | 4 ++--
>  3 files changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 1c8f844a9f0f..248db1dd7812 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -481,7 +481,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry);
>  extern swp_entry_t get_swap_page_of_type(int);
>  extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
>  extern int add_swap_count_continuation(swp_entry_t, gfp_t);
> -extern void swap_shmem_alloc(swp_entry_t);
> +extern void swap_shmem_alloc(swp_entry_t, int);
>  extern int swap_duplicate(swp_entry_t);
>  extern int swapcache_prepare(swp_entry_t entry, int nr);
>  extern void swap_free_nr(swp_entry_t entry, int nr_pages);
> @@ -548,7 +548,7 @@ static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
>         return 0;
>  }
>
> -static inline void swap_shmem_alloc(swp_entry_t swp)
> +static inline void swap_shmem_alloc(swp_entry_t swp, int nr)
>  {
>  }
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 4a5254bfd610..22cdc10f27ea 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1452,6 +1452,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>         struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
>         swp_entry_t swap;
>         pgoff_t index;
> +       int nr_pages;
>
>         /*
>          * Our capabilities prevent regular writeback or sync from ever calling
> @@ -1484,6 +1485,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>         }
>
>         index = folio->index;
> +       nr_pages = folio_nr_pages(folio);
>
>         /*
>          * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
> @@ -1536,8 +1538,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>         if (add_to_swap_cache(folio, swap,
>                         __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
>                         NULL) == 0) {
> -               shmem_recalc_inode(inode, 0, 1);
> -               swap_shmem_alloc(swap);
> +               shmem_recalc_inode(inode, 0, nr_pages);
> +               swap_shmem_alloc(swap, nr_pages);
>                 shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
>
>                 mutex_unlock(&shmem_swaplist_mutex);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index ea023fc25d08..88d73880aada 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3604,9 +3604,9 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
>   * Help swapoff by noting that swap entry belongs to shmem/tmpfs
>   * (in which case its reference count is never incremented).
>   */
> -void swap_shmem_alloc(swp_entry_t entry)
> +void swap_shmem_alloc(swp_entry_t entry, int nr)
>  {
> -       __swap_duplicate(entry, SWAP_MAP_SHMEM, 1);
> +       __swap_duplicate(entry, SWAP_MAP_SHMEM, nr);
>  }
>
>  /*
> --
> 2.39.3
>
