Message-ID: <CAJD7tkZziEBd7QAiNXsZgd27k_tRSYNOC72iyojmG2aJD=mwYw@mail.gmail.com>
Date: Mon, 23 Sep 2024 17:32:14 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, hughd@...gle.com, 
	shakeel.butt@...ux.dev, ryan.roberts@....com, ying.huang@...el.com, 
	chrisl@...nel.org, david@...hat.com, kasong@...cent.com, willy@...radead.org, 
	viro@...iv.linux.org.uk, baohua@...nel.org, chengming.zhou@...ux.dev, 
	linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/2] swap: shmem: remove SWAP_MAP_SHMEM

On Mon, Sep 23, 2024 at 4:11 PM Nhat Pham <nphamcs@...il.com> wrote:
>
> The SWAP_MAP_SHMEM state was introduced in the commit aaa468653b4a
> ("swap_info: note SWAP_MAP_SHMEM"), to quickly determine if a swap entry
> belongs to shmem during swapoff.
>
> However, swapoff has since been rewritten in the commit b56a2d8af914
> ("mm: rid swapoff of quadratic complexity"). Now having swap count ==
> SWAP_MAP_SHMEM value is basically the same as having swap count == 1,
> and swap_shmem_alloc() behaves analogously to swap_duplicate().

It's probably worth pointing out that swap_shmem_alloc() is equivalent
to swap_duplicate() here because __swap_duplicate() should never return
-ENOMEM for shmem: each entry's swap count is only ever incremented
once, from 0 to 1.
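
For reference, a simplified sketch of that increment path (a model I
wrote for illustration, not the verbatim mm/swapfile.c code; the
swap_count_increment() helper is hypothetical, and the constants mirror
the ones in the quoted hunks):

	#include <errno.h>

	#define SWAP_MAP_MAX	0x3e	/* max count in a swap_map byte */
	#define COUNT_CONTINUED	0x80	/* count spills into a continuation page */

	/* Model of bumping one swap_map count by 1, as __swap_duplicate() does. */
	int swap_count_increment(unsigned char *count)
	{
		if ((*count & ~COUNT_CONTINUED) < SWAP_MAP_MAX) {
			/* shmem always takes this branch: count goes 0 -> 1 */
			*count += 1;
			return 0;
		}
		/*
		 * Only reachable after SWAP_MAP_MAX prior duplications of the
		 * same entry, where a continuation page may need to be
		 * allocated and that allocation can fail.  Shmem never
		 * re-duplicates an entry, so it can never reach this path.
		 */
		return -ENOMEM;
	}

Since shmem only ever moves a count from 0 to 1, the -ENOMEM leg is dead
code for it, so switching shmem_writepage() to swap_duplicate_nr() does
not introduce a new failure case.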

>
> Remove this state and the associated helper to simplify the state
> machine (both mentally and in terms of actual code). We will also have
> an extra state/special value that can be repurposed (for swap entries
> that never get re-duplicated).
>
> Signed-off-by: Nhat Pham <nphamcs@...il.com>
> ---
>  include/linux/swap.h |  6 ------
>  mm/shmem.c           |  2 +-
>  mm/swapfile.c        | 15 ---------------
>  3 files changed, 1 insertion(+), 22 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index e6ab234be7be..017f3c03ff7a 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -232,7 +232,6 @@ enum {
>  /* Special value in first swap_map */
>  #define SWAP_MAP_MAX   0x3e    /* Max count */
>  #define SWAP_MAP_BAD   0x3f    /* Note page is bad */
> -#define SWAP_MAP_SHMEM 0xbf    /* Owned by shmem/tmpfs */
>
>  /* Special value in each swap_map continuation */
>  #define SWAP_CONT_MAX  0x7f    /* Max count */
> @@ -482,7 +481,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry);
>  extern swp_entry_t get_swap_page_of_type(int);
>  extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
>  extern int add_swap_count_continuation(swp_entry_t, gfp_t);
> -extern void swap_shmem_alloc(swp_entry_t, int);
>  extern int swap_duplicate_nr(swp_entry_t, int);
>  extern int swapcache_prepare(swp_entry_t entry, int nr);
>  extern void swap_free_nr(swp_entry_t entry, int nr_pages);
> @@ -549,10 +547,6 @@ static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
>         return 0;
>  }
>
> -static inline void swap_shmem_alloc(swp_entry_t swp, int nr)
> -{
> -}
> -
>  static inline int swap_duplicate_nr(swp_entry_t swp, int nr)
>  {
>         return 0;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 361affdf3990..1875f2521dc6 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1559,7 +1559,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>                         __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
>                         NULL) == 0) {
>                 shmem_recalc_inode(inode, 0, nr_pages);
> -               swap_shmem_alloc(swap, nr_pages);
> +               swap_duplicate_nr(swap, nr_pages);
>                 shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
>
>                 mutex_unlock(&shmem_swaplist_mutex);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 47a2cd5f590d..cebc244ee60f 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1381,12 +1381,6 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *si,
>         if (usage == SWAP_HAS_CACHE) {
>                 VM_BUG_ON(!has_cache);
>                 has_cache = 0;
> -       } else if (count == SWAP_MAP_SHMEM) {
> -               /*
> -                * Or we could insist on shmem.c using a special
> -                * swap_shmem_free() and free_shmem_swap_and_cache()...
> -                */
> -               count = 0;
>         } else if ((count & ~COUNT_CONTINUED) <= SWAP_MAP_MAX) {
>                 if (count == COUNT_CONTINUED) {
>                         if (swap_count_continued(si, offset, count))
> @@ -3686,15 +3680,6 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
>         return err;
>  }
>
> -/*
> - * Help swapoff by noting that swap entry belongs to shmem/tmpfs
> - * (in which case its reference count is never incremented).
> - */
> -void swap_shmem_alloc(swp_entry_t entry, int nr)
> -{
> -       __swap_duplicate(entry, SWAP_MAP_SHMEM, nr);
> -}
> -
>  /**
>   * swap_duplicate_nr() - Increase reference count of nr contiguous swap entries
>   *                       by 1.
> --
> 2.43.5
