Message-ID: <6c9a1261-1256-0239-72bd-a713c959ce85@google.com>
Date: Sun, 20 Jul 2025 00:07:27 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Hugh Dickins <hughd@...gle.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>, Baoquan He <bhe@...hat.com>,
Barry Song <21cnbao@...il.com>, Chris Li <chrisl@...nel.org>,
Kairui Song <ryncsn@...il.com>, Kemeng Shi <shikemeng@...weicloud.com>,
Shakeel Butt <shakeel.butt@...ux.dev>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH mm-new 1/2] mm/shmem: hold shmem_swaplist spinlock (not
mutex) much less

On Wed, 16 Jul 2025, Hugh Dickins wrote:
> A flamegraph (from an MGLRU load) showed shmem_writeout()'s use of the
> global shmem_swaplist_mutex worryingly hot: improvement is long overdue.
>
> 3.1 commit 6922c0c7abd3 ("tmpfs: convert shmem_writepage and enable swap")
> apologized for extending shmem_swaplist_mutex across add_to_swap_cache(),
> and hoped to find another way: yes, there may be lots of work to allocate
> radix tree nodes in there. Then 6.15 commit b487a2da3575 ("mm, swap:
> simplify folio swap allocation") will have made it worse, by moving
> shmem_writeout()'s swap allocation under that mutex too (but the worrying
> flamegraph was observed even before that change).
>
> There's a useful comment about the page lock no longer protecting from
> eviction once the page is moved to swap cache: but that protection holds
> until shmem_delete_from_page_cache() replaces the page pointer by the
> swap entry, so move the swaplist add between them.
>
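For anyone reading along, the window being exploited looks roughly like
this (a sketch of the shape, not the exact hunk; error handling omitted):

        /*
         * Sketch of the ordering in shmem_writeout(): once the folio is
         * in the swap cache its lock no longer pins the inode, but the
         * inode cannot be evicted until shmem_delete_from_page_cache()
         * replaces the folio pointer with the swap entry, so a swaplist
         * add is safe anywhere in between.
         */
        if (add_to_swap_cache(folio, swap, GFP_ATOMIC | __GFP_NOWARN, NULL) == 0) {
                /* ... account the folio as swapped, add inode to swaplist ... */
                shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
        }
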
> We would much prefer to take the global lock once per inode rather than
> once per page: given the possible races with shmem_unuse() pruning when
> !swapped (and other tasks racing to swap other pages out or in), try the
> swaplist add whenever swapped was incremented from 0 (the inode may
> already be on the list - only unuse and evict bother to remove it).
>
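The once-per-inode attempt then reduces to acting only on the swapped
count's 0 -> 1 transition. Assuming the new spinlock is named
shmem_swaplist_lock, and with first_swapped standing in for however the
patch actually records that transition, it would look something like:

        /*
         * Only the 0 -> 1 transition of info->swapped tries the add
         * (first_swapped is a stand-in name for that condition). A
         * racing writeout may find the inode already listed, which is
         * fine: only shmem_unuse() and eviction ever remove it.
         */
        if (first_swapped) {
                spin_lock(&shmem_swaplist_lock);
                if (list_empty(&info->swaplist))
                        list_add(&info->swaplist, &shmem_swaplist);
                spin_unlock(&shmem_swaplist_lock);
        }
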
> This technique is more subtle than it looks (we're avoiding the very lock
> which would make it easy), but it works: an unlocked list_empty() check
> would run the risk of the inode being unqueued and left off the swaplist
> forever, with swapoff then only completing when the page is faulted in
> or removed.
>
> The need for a sleepable mutex went away in 5.1 commit b56a2d8af914
> ("mm: rid swapoff of quadratic complexity"): a spinlock works better now.
>
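The definition change itself is then one line; assuming the rename above,
roughly:

        /* was: static DEFINE_MUTEX(shmem_swaplist_mutex); */
        static DEFINE_SPINLOCK(shmem_swaplist_lock);

with each mutex_lock()/mutex_unlock() pair on the swaplist becoming
spin_lock()/spin_unlock(), which is only possible because, since that
swapoff rewrite, nothing needs to sleep while holding it.
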
> This commit is certain to take shmem_swaplist_mutex out of contention,
> and has been seen to make a practical improvement (but there is likely
> to have been an underlying issue which made its contention so visible).
>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>

Tested-by: David Rientjes <rientjes@...gle.com>