[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <65a66eb9-41f8-4790-8db2-0c70ea15979f@redhat.com>
Date: Mon, 4 Mar 2024 21:50:27 +0100
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>, Huang Ying <ying.huang@...el.com>,
Gao Xiang <xiang@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Yang Shi <shy828301@...il.com>, Michal Hocko <mhocko@...e.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from
swap_cluster_info:flags
>>>
>>> This is the existing free_swap_and_cache(). I think _swap_info_get() would break
>>> if this could race with swapoff(), and __swap_entry_free() looks up the cluster
>>> from an array, which would also be freed by swapoff if racing:
>>>
>>> int free_swap_and_cache(swp_entry_t entry)
>>> {
>>> struct swap_info_struct *p;
>>> unsigned char count;
>>>
>>> if (non_swap_entry(entry))
>>> return 1;
>>>
>>> p = _swap_info_get(entry);
>>> if (p) {
>>> count = __swap_entry_free(p, entry);
>>
>> If count dropped to 0 and
>>
>>> if (count == SWAP_HAS_CACHE)
>>
>>
>> count is now SWAP_HAS_CACHE, there is in fact no swap entry anymore. We removed
>> it. That one would have to be reclaimed asynchronously.
>>
>> The existing code we would call swap_page_trans_huge_swapped() with the SI it
>> obtained via _swap_info_get().
>>
>> I also don't see what should be left protecting the SI. It's not locked anymore,
>> the swapcounts are at 0. We don't hold the folio lock.
>>
>> try_to_unuse() will stop as soon as si->inuse_pages is at 0. Hm ...
>
> But, assuming the caller of free_swap_and_cache() acquires the PTL first, I
> think this all works out ok? While free_swap_and_cache() is running,
> try_to_unuse() will wait for the PTL. Or if try_to_unuse() runs first, then
> free_swap_and_cache() will never be called because the swap entry will have been
> removed from the PTE?
But can't try_to_unuse() run, detect !si->inuse_pages and not even
bother about scanning any further page tables?
But my head hurts from digging through that code.
Let me try again:
__swap_entry_free() might be the last user and result in "count ==
SWAP_HAS_CACHE".
swapoff->try_to_unuse() will stop as soon as soon as si->inuse_pages==0.
So the question is: could someone reclaim the folio and turn
si->inuse_pages==0, before we completed swap_page_trans_huge_swapped().
Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are
still references by swap entries.
Process 1 still references subpage 0 via swap entry.
Process 2 still references subpage 1 via swap entry.
Process 1 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE
[then, preempted in the hypervisor etc.]
Process 2 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE
Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
__try_to_reclaim_swap().
__try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->put_swap_folio()->
free_swap_slot()->swapcache_free_entries()->swap_entry_free()->swap_range_free()->
..
WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
What stops swapoff to succeed after process 2 reclaimed the swap cache
but before process 1 finished its call to swap_page_trans_huge_swapped()?
>
> That just leaves shmem... I suspected there might be some serialization between
> shmem_unuse() (called from try_to_unuse()) and the shmem free_swap_and_cache()
> callsites, but I can't see it. Hmm...
>
>>
>> Would performing the overall operation under lock_cluster_or_swap_info help? Not
>> so sure :(
>
> No - that function relies on being able to access the cluster from the array in
> the swap_info and lock it. And I think that array has the same lifetime as
> swap_map, so same problem. You'd need get_swap_device()/put_swap_device() and a
> bunch of refactoring for the internals not to take the locks, I guess. I think
> its doable, just not sure if neccessary...
Agreed.
--
Cheers,
David / dhildenb
Powered by blists - more mailing lists