Message-ID: <CAMgjq7D5WuNW_WpZe=U+U9pQ3xaYFxkG6kOXK8PD8E+VaBEoiA@mail.gmail.com>
Date: Sun, 6 Jul 2025 19:50:10 +0800
From: Kairui Song <ryncsn@...il.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>, 
	Hugh Dickins <hughd@...gle.com>, Matthew Wilcox <willy@...radead.org>, 
	Kemeng Shi <shikemeng@...weicloud.com>, Chris Li <chrisl@...nel.org>, 
	Nhat Pham <nphamcs@...il.com>, Baoquan He <bhe@...hat.com>, Barry Song <baohua@...nel.org>, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 4/9] mm/shmem, swap: tidy up swap entry splitting

On Sun, Jul 6, 2025 at 11:38 AM Baolin Wang
<baolin.wang@...ux.alibaba.com> wrote:
>
>
>
> On 2025/7/5 02:17, Kairui Song wrote:
> > From: Kairui Song <kasong@...cent.com>
> >
> > Instead of keeping different paths for splitting the entry before
> > swapin starts, move the entry splitting to after swapin has put
> > the folio in the swap cache (or set the SWAP_HAS_CACHE bit). This way
> > we only need one place and one unified way to split the large entry.
> > Whenever swapin brings in a folio smaller than the shmem swap entry,
> > split the entry and recalculate the entry and index for verification.
> >
> > This removes duplicated code and function calls, reduces LOC, and
> > makes the split less racy, as it is now guarded by the swap cache,
> > so there is a lower chance of repeated faults due to a raced split.
> > The compiler is also able to optimize the code further:
> >
> > bloat-o-meter results with GCC 14:
> >
> > With DEBUG_SECTION_MISMATCH (-fno-inline-functions-called-once):
> > ./scripts/bloat-o-meter mm/shmem.o.old mm/shmem.o
> > add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-82 (-82)
> > Function                                     old     new   delta
> > shmem_swapin_folio                          2361    2279     -82
> > Total: Before=33151, After=33069, chg -0.25%
> >
> > With !DEBUG_SECTION_MISMATCH:
> > ./scripts/bloat-o-meter mm/shmem.o.old mm/shmem.o
> > add/remove: 0/1 grow/shrink: 1/0 up/down: 949/-750 (199)
> > Function                                     old     new   delta
> > shmem_swapin_folio                          2878    3827    +949
> > shmem_split_large_entry.isra                 750       -    -750
> > Total: Before=33086, After=33285, chg +0.60%
> >
> > Since shmem_split_large_entry is only called in one place now, the
> > compiler will either generate more compact code or inline it for
> > better performance.
> >
> > Signed-off-by: Kairui Song <kasong@...cent.com>
> > ---
> >   mm/shmem.c | 53 +++++++++++++++++++++--------------------------------
> >   1 file changed, 21 insertions(+), 32 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index e43becfa04b3..217264315842 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2266,14 +2266,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >       struct address_space *mapping = inode->i_mapping;
> >       struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
> >       struct shmem_inode_info *info = SHMEM_I(inode);
> > +     swp_entry_t swap, index_entry;
> >       struct swap_info_struct *si;
> >       struct folio *folio = NULL;
> >       bool skip_swapcache = false;
> > -     swp_entry_t swap;
> >       int error, nr_pages, order, split_order;
> > +     pgoff_t offset;
> >
> >       VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> > -     swap = radix_to_swp_entry(*foliop);
> > +     swap = index_entry = radix_to_swp_entry(*foliop);
> >       *foliop = NULL;
> >
> >       if (is_poisoned_swp_entry(swap))
> > @@ -2321,46 +2322,35 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >               }
> >
> >               /*
> > -              * Now swap device can only swap in order 0 folio, then we
> > -              * should split the large swap entry stored in the pagecache
> > -              * if necessary.
> > -              */
> > -             split_order = shmem_split_large_entry(inode, index, swap, gfp);
> > -             if (split_order < 0) {
> > -                     error = split_order;
> > -                     goto failed;
> > -             }
> > -
> > -             /*
> > -              * If the large swap entry has already been split, it is
> > +              * Now the swap device can only swap in order 0 folios, so it is
> >                * necessary to recalculate the new swap entry based on
> > -              * the old order alignment.
> > +              * the offset, as the swapin index might be unaligned.
> >                */
> > -             if (split_order > 0) {
> > -                     pgoff_t offset = index - round_down(index, 1 << split_order);
> > -
> > +             if (order) {
> > +                     offset = index - round_down(index, 1 << order);
> >                       swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> >               }
> >
> > -             /* Here we actually start the io */
> >               folio = shmem_swapin_cluster(swap, gfp, info, index);
> >               if (!folio) {
> >                       error = -ENOMEM;
> >                       goto failed;
> >               }
> > -     } else if (order > folio_order(folio)) {
> > +     }
> > +alloced:
> > +     if (order > folio_order(folio)) {
> >               /*
> > -              * Swap readahead may swap in order 0 folios into swapcache
> > +              * Swapin may get smaller folios for various reasons:
> > +              * it may fall back to order 0 due to memory pressure or race,
> > +              * swap readahead may swap in order 0 folios into swapcache
> >                * asynchronously, while the shmem mapping can still store
> >                * large swap entries. In such cases, we should split the
> >                * large swap entry to prevent possible data corruption.
> >                */
> > -             split_order = shmem_split_large_entry(inode, index, swap, gfp);
> > +             split_order = shmem_split_large_entry(inode, index, index_entry, gfp);
> >               if (split_order < 0) {
> > -                     folio_put(folio);
> > -                     folio = NULL;
> >                       error = split_order;
> > -                     goto failed;
> > +                     goto failed_nolock;
> >               }
> >
> >               /*
> > @@ -2369,15 +2359,13 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >                * the old order alignment.
> >                */
> >               if (split_order > 0) {
> > -                     pgoff_t offset = index - round_down(index, 1 << split_order);
> > -
> > +                     offset = index - round_down(index, 1 << split_order);
> >                       swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
>
> Obviously, you should use the original swap value 'index_entry' to
> calculate the new swap value.

Thanks, good catch.
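
(Editorial illustration, not part of the patch: a small userspace toy of
the arithmetic involved, using simplified stand-ins for the kernel's
swp_entry()/swp_type()/swp_offset() helpers; the shift and offsets are
made up. It shows why recomputing from the already-adjusted 'swap'
double-counts the offset, while the saved 'index_entry' gives the
correct per-page entry.)

#include <assert.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's swp_entry_t and helpers. */
typedef struct { unsigned long val; } swp_entry_t;
#define TYPE_SHIFT 58

static swp_entry_t swp_entry(unsigned long type, unsigned long offset)
{
	swp_entry_t e = { (type << TYPE_SHIFT) | offset };
	return e;
}
static unsigned long swp_type(swp_entry_t e)   { return e.val >> TYPE_SHIFT; }
static unsigned long swp_offset(swp_entry_t e) { return e.val & ((1UL << TYPE_SHIFT) - 1); }
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	/* An order-4 large entry stored at the aligned index 32,
	 * backed by swap offset 0x100; the fault hits index 35. */
	unsigned long index = 35, order = 4;
	swp_entry_t index_entry = swp_entry(1, 0x100);
	unsigned long offset = index - round_down(index, 1UL << order); /* 3 */

	/* The synchronous swapin path has already advanced 'swap'. */
	swp_entry_t swap = swp_entry(swp_type(index_entry),
				     swp_offset(index_entry) + offset);

	/* Recomputing from 'swap' after the split double-counts the offset, */
	assert(swp_offset(swap) + offset == 0x106);
	/* while the original large entry yields the correct per-page entry. */
	assert(swp_offset(index_entry) + offset == 0x103);

	printf("index %lu -> swap offset 0x%lx\n",
	       index, swp_offset(index_entry) + offset);
	return 0;
}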

>
> With the following fix, you can add:
> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Tested-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d530df550f7f..1e8422ac863e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2361,7 +2361,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>                  */
>                 if (split_order > 0) {
>                         offset = index - round_down(index, 1 << split_order);
> -                       swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> +                       swap = swp_entry(swp_type(swap), swp_offset(index_entry) + offset);
>                 }
>         } else if (order < folio_order(folio)) {
>                 swap.val = round_down(swap.val, 1 << folio_order(folio));
>
>
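
(Also illustrative, with made-up numbers: the "else if (order <
folio_order(folio))" branch quoted above rounds the entry down to the
swapped-in folio's alignment. A toy check of that rounding:)

#include <assert.h>

#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	/* An order-2 folio (4 pages) was swapped in covering swap offsets
	 * 0x100..0x103, but the entry seen at the faulting index points
	 * at offset 0x103; round it down to the folio boundary. */
	unsigned long swap_val = 0x103;
	unsigned long folio_order = 2;

	swap_val = round_down(swap_val, 1UL << folio_order);
	assert(swap_val == 0x100);
	return 0;
}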
