Message-ID: <CAF8kJuPLVp6s0iYm24tLf-h+sTmD2m-NOcw6xMPcL4HtoDnxxw@mail.gmail.com>
Date: Sat, 6 Sep 2025 08:35:31 -0700
From: Chris Li <chrisl@...nel.org>
To: Kairui Song <kasong@...cent.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>, 
	Matthew Wilcox <willy@...radead.org>, Hugh Dickins <hughd@...gle.com>, Barry Song <baohua@...nel.org>, 
	Baoquan He <bhe@...hat.com>, Nhat Pham <nphamcs@...il.com>, 
	Kemeng Shi <shikemeng@...weicloud.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>, 
	Ying Huang <ying.huang@...ux.alibaba.com>, Johannes Weiner <hannes@...xchg.org>, 
	David Hildenbrand <david@...hat.com>, Yosry Ahmed <yosryahmed@...gle.com>, 
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Zi Yan <ziy@...dia.com>, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 12/15] mm, swap: mark swap address space ro and add
 context debug check

I think adding one more check is fine.

I don't think there are any exceptions, but I am not 100% sure either. If
there are any violations, catching them now is a good thing.
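
To make the kind of violation concrete, here is a minimal, purely
illustrative sketch (none of these identifiers are from the series) of
a caller that the new VM_WARN_ON_ONCE(!in_task()) check would flag; a
timer callback runs in softirq context, where in_task() is false:

#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/mmdebug.h>	/* VM_WARN_ON_ONCE() */
#include <linux/preempt.h>	/* in_task() */

static DEFINE_SPINLOCK(example_ci_lock);

/* Stand-in for __swap_cluster_lock() with the new context check. */
static void example_cluster_lock(void)
{
	/* Warns once when called from hard or soft IRQ context. */
	VM_WARN_ON_ONCE(!in_task());
	spin_lock(&example_ci_lock);
}

/* Timer callbacks run in softirq context, so this would trigger it. */
static void example_timer_fn(struct timer_list *t)
{
	example_cluster_lock();
	/* ... touch swap cache here ... */
	spin_unlock(&example_ci_lock);
}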

Acked-by: Chris Li <chrisl@...nel.org>

Chris

On Fri, Sep 5, 2025 at 12:15 PM Kairui Song <ryncsn@...il.com> wrote:
>
> From: Kairui Song <kasong@...cent.com>
>
> Swap cache is now backed by the swap table, and the address space no
> longer holds any mutable data. The swap cache is now protected by the
> swap cluster lock instead of the XArray lock. All access to the swap
> cache is wrapped by swap cache helpers, and locking is mostly handled
> internally by those helpers; only a few __swap_cache_* helpers require
> the caller to lock the cluster themselves.
>
> Worth noting that, unlike the XArray, the cluster lock is not IRQ
> safe. The swap cache was already very different from the filemap, and
> it is now completely separated from it: nothing needs to mark or
> change anything, or issue a writeback callback, from IRQ context.
>
> So explicitly document this and add a debug check to avoid further
> potential misuse, and mark the swap cache address space as read-only
> to keep users from wrongly mixing unexpected filemap helpers with the
> swap cache.
>
> Signed-off-by: Kairui Song <kasong@...cent.com>
> ---
>  mm/swap.h       | 12 +++++++++++-
>  mm/swap_state.c |  3 ++-
>  2 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swap.h b/mm/swap.h
> index bf4e54f1f6b6..e48431a26f89 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -99,6 +99,16 @@ static __always_inline struct swap_cluster_info *__swap_cluster_lock(
>  {
>         struct swap_cluster_info *ci = __swap_offset_to_cluster(si, offset);
>
> +       /*
> +        * Nothing modifies swap cache in an IRQ context. All access to
> +        * swap cache is wrapped by swap_cache_* helpers, and swap cache
> +        * writeback is handled outside of IRQs. Swapin or swapout never
> +        * occurs in IRQ, and neither does in-place split or replace.
> +        *
> +        * Besides, modifying swap cache requires synchronization with
> +        * swap_map, which was never IRQ safe.
> +        */
> +       VM_WARN_ON_ONCE(!in_task());
>         VM_WARN_ON_ONCE(percpu_ref_is_zero(&si->users)); /* race with swapoff */
>         if (irq)
>                 spin_lock_irq(&ci->lock);
> @@ -191,7 +201,7 @@ void __swap_writepage(struct folio *folio, struct swap_iocb **swap_plug);
>  #define SWAP_ADDRESS_SPACE_SHIFT       14
>  #define SWAP_ADDRESS_SPACE_PAGES       (1 << SWAP_ADDRESS_SPACE_SHIFT)
>  #define SWAP_ADDRESS_SPACE_MASK                (SWAP_ADDRESS_SPACE_PAGES - 1)
> -extern struct address_space swap_space;
> +extern struct address_space swap_space __ro_after_init;
>  static inline struct address_space *swap_address_space(swp_entry_t entry)
>  {
>         return &swap_space;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 7147b390745f..209d5e9e8d90 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -37,7 +37,8 @@ static const struct address_space_operations swap_aops = {
>  #endif
>  };
>
> -struct address_space swap_space __read_mostly = {
> +/* Set swap_space as read only as swap cache is handled by swap table */
> +struct address_space swap_space __ro_after_init = {
>         .a_ops = &swap_aops,
>  };
>
> --
> 2.51.0
>
>
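
For readers less familiar with __ro_after_init: a minimal,
illustrative sketch of the pattern the patch relies on (identifiers
here are made up). Data in this section stays writable during boot and
is remapped read-only once init completes, so a later stray write,
say from an unexpected filemap helper poking at swap_space, would
fault instead of silently corrupting the structure:

#include <linux/cache.h>	/* __ro_after_init */
#include <linux/init.h>

static unsigned long example_setting __ro_after_init;

static int __init example_setup(void)
{
	example_setting = 42;	/* writes are fine during init */
	return 0;
}
early_initcall(example_setup);

/*
 * After init, the .data..ro_after_init section is mapped read-only,
 * so a write such as "example_setting = 0" here would fault.
 */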
