Message-ID: <CAGsJ_4zX1r8aQRuAbnTc0O5sPxDs11yhScz2T2t9uJ84GEjOoA@mail.gmail.com>
Date: Wed, 22 May 2024 09:21:38 +1200
From: Barry Song <21cnbao@...il.com>
To: akpm@...ux-foundation.org, linux-mm@...ck.org
Cc: baolin.wang@...ux.alibaba.com, chrisl@...nel.org, david@...hat.com, 
	hanchuanhua@...o.com, hannes@...xchg.org, hughd@...gle.com, 
	kasong@...cent.com, linux-kernel@...r.kernel.org, ryan.roberts@....com, 
	surenb@...gle.com, v-songbaohua@...o.com, willy@...radead.org, 
	xiang@...nel.org, ying.huang@...el.com, yosryahmed@...gle.com, 
	yuzhao@...gle.com, ziy@...dia.com
Subject: Re: [PATCH v4 0/6] large folios swap-in: handle refault cases first

Hi Andrew,

This patchset missed the merge window, but I've checked that it still applies
cleanly to today's mm-unstable. Would you like me to resend it, or will you
just proceed with this v4 version?

Thanks
Barry

On Thu, May 9, 2024 at 10:41 AM Barry Song <21cnbao@...il.com> wrote:
>
> From: Barry Song <v-songbaohua@...o.com>
>
> This patchset is extracted from the large folio swapin series[1]. It primarily
> addresses the handling of large folios that are found in the swap cache, and
> currently focuses on the refault of mTHP that are still undergoing reclamation.
> Splitting this part out aims to streamline code review and expedite its
> integration into the MM tree.
>
> It relies on Ryan's swap-out series[2], leveraging the helper function
> swap_pte_batch() introduced by that series.
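>
> As a rough illustration only (not code taken from either series), the idea is
> that the swap-in fault path can ask swap_pte_batch() how many of the following
> PTEs hold consecutive swap entries and then operate on that whole batch at
> once. Assuming a signature along the lines of swap_pte_batch(start_ptep,
> max_nr, pte) returning the batch length:
>
>         nr = swap_pte_batch(start_ptep, max_nr, vmf->orig_pte);
>         /* nr PTEs starting at start_ptep refer to consecutive swap entries,
>          * so they can be checked, freed and restored in one go rather than
>          * one page at a time.
>          */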
>
> Presently, do_swap_page only encounters a large folio in the swap
> cache before the large folio is released by vmscan. However, the code
> should remain equally useful once we support large folio swap-in via
> swapin_readahead(). This approach can effectively reduce page faults
> and eliminate most of the redundant checks and early exits for MTE
> restoration in the recent MTE patchset[3].
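>
> For the refault case, a large folio found in the swap cache that fully covers
> the faulting range can be mapped with a single rmap/PTE batch rather than nr
> separate minor faults. A much-simplified fragment of that idea (illustrative
> only, not the actual patch; locking, checks and error handling are omitted,
> and variable names are assumed):
>
>         /* map all nr_pages of the refaulted large folio in one go */
>         folio_ref_add(folio, nr_pages - 1);
>         folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_addr,
>                                  rmap_flags);
>         set_ptes(vma->vm_mm, start_addr, start_pte, pte, nr_pages);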
>
> The large folio swap-in for SWP_SYNCHRONOUS_IO and swapin_readahead()
> will be split into separate patch sets and sent at a later time.
>
> -v4:
>  - collect acked-by/reviewed-by of Ryan, "Huang, Ying", Chris, David and
>    Khalid, many thanks!
>  - Simplify reuse code in do_swap_page() by checking refcount==1, per
>    David;
>  - Initialize large folio-related variables later in do_swap_page(), per
>    Ryan;
>  - define swap_free() as swap_free_nr(1), per Ying and Ryan; a rough sketch
>    of that wrapper follows below.
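>
> "define swap_free() as swap_free_nr(1)" presumably reduces swap_free() to a
> trivial inline wrapper around the batched helper; a minimal sketch of that
> shape (assuming swap_free_nr() takes the entry plus a count):
>
>         static inline void swap_free(swp_entry_t entry)
>         {
>                 /* free a single swap entry via the batched helper */
>                 swap_free_nr(entry, 1);
>         }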
>
> -v3:
>  - optimize swap_free_nr() by using a bitmap within a single "long", per
>    "Huang, Ying";
>  - drop swap_free() as suggested by "Huang, Ying"; now hibernation can get
>    batched;
>  - lots of cleanup in do_swap_page() as commented on by Ryan Roberts and
>    "Huang, Ying";
>  - handle arch_do_swap_page() with nr pages, even though the only platform
>    that needs it (sparc) doesn't support THP_SWAPOUT, as suggested by
>    "Huang, Ying";
>  - introduce pte_move_swp_offset() as suggested by "Huang, Ying" (see the
>    illustrative sketch after this changelog);
>  - drop the "any_shared" check of swap entries, per David's comment;
>  - drop the swapin_refault counter and keep it for debug purposes only, per
>    Ying;
>  - collect reviewed-by tags
>  Link:
>   https://lore.kernel.org/linux-mm/20240503005023.174597-1-21cnbao@gmail.com/
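>
> As an illustrative sketch only (not necessarily the code in the patch),
> pte_move_swp_offset() can be thought of as rebuilding a swap PTE whose offset
> is shifted by a signed delta while preserving the auxiliary swap PTE bits:
>
>         static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
>         {
>                 swp_entry_t entry = pte_to_swp_entry(pte);
>                 pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
>                                                 swp_offset(entry) + delta));
>
>                 /* carry over soft-dirty, exclusive and uffd-wp markers */
>                 if (pte_swp_soft_dirty(pte))
>                         new = pte_swp_mksoft_dirty(new);
>                 if (pte_swp_exclusive(pte))
>                         new = pte_swp_mkexclusive(new);
>                 if (pte_swp_uffd_wp(pte))
>                         new = pte_swp_mkuffd_wp(new);
>                 return new;
>         }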
>
> -v2:
>  - rebase on top of mm-unstable in which Ryan's swap_pte_batch() has changed
>    a lot.
>  - remove folio_add_new_anon_rmap() for !folio_test_anon(),
>    as currently large folios are always anon (refault).
>  - add mTHP swpin refault counters
>   Link:
>   https://lore.kernel.org/linux-mm/20240409082631.187483-1-21cnbao@gmail.com/
>
> -v1:
>   Link: https://lore.kernel.org/linux-mm/20240402073237.240995-1-21cnbao@gmail.com/
>
> Differences from the original large folios swap-in series
>  - collect reviewed-by and acked-by tags;
>  - rename swap_nr_free to swap_free_nr, per Ryan;
>  - limit the maximum kernel stack usage of swap_free_nr, per Ryan;
>  - add an output argument to swap_pte_batch to expose whether all entries
>    are exclusive;
>  - many cleanup refinements; handle the corner case where a folio's virtual
>    address might not be naturally aligned (a sketch of that handling follows
>    below).
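>
> The alignment corner case is that the faulting address need not be the first
> page of the folio, so the code has to step back to the folio's start before
> batching. A simplified sketch (variable names are illustrative):
>
>         /* derive the folio-aligned start of the fault */
>         idx = folio_page_idx(folio, page);
>         start_addr = vmf->address - idx * PAGE_SIZE;
>         start_pte = vmf->pte - idx;
>         /* only batch if the whole range stays within the VMA and within
>          * the same page table
>          */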
>
> [1] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@gmail.com/
> [2] https://lore.kernel.org/linux-mm/20240408183946.2991168-1-ryan.roberts@arm.com/
> [3] https://lore.kernel.org/linux-mm/20240322114136.61386-1-21cnbao@gmail.com/
>
> Barry Song (3):
>   mm: remove the implementation of swap_free() and always use
>     swap_free_nr()
>   mm: introduce pte_move_swp_offset() helper which can move offset
>     bidirectionally
>   mm: introduce arch_do_swap_page_nr() which allows restore metadata for
>     nr pages
>
> Chuanhua Han (3):
>   mm: swap: introduce swap_free_nr() for batched swap_free()
>   mm: swap: make should_try_to_free_swap() support large-folio
>   mm: swap: entirely map large folios found in swapcache
>
>  include/linux/pgtable.h | 26 +++++++++++++-----
>  include/linux/swap.h    |  9 +++++--
>  kernel/power/swap.c     |  5 ++--
>  mm/internal.h           | 25 ++++++++++++++---
>  mm/memory.c             | 60 +++++++++++++++++++++++++++++++++--------
>  mm/swapfile.c           | 48 +++++++++++++++++++++++++++++----
>  6 files changed, 142 insertions(+), 31 deletions(-)
>
> --
> 2.34.1
>
