Message-ID: <851940cd-64f1-9e59-3de9-b50701a99281@redhat.com>
Date: Tue, 16 May 2023 14:35:23 +0200
From: David Hildenbrand <david@...hat.com>
To: Peter Collingbourne <pcc@...gle.com>,
Catalin Marinas <catalin.marinas@....com>
Cc: Qun-wei Lin (林群崴)
<Qun-wei.Lin@...iatek.com>, linux-arm-kernel@...ts.infradead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
"surenb@...gle.com" <surenb@...gle.com>,
Chinwen Chang (張錦文)
<chinwen.chang@...iatek.com>,
"kasan-dev@...glegroups.com" <kasan-dev@...glegroups.com>,
Kuan-Ying Lee (李冠穎)
<Kuan-Ying.Lee@...iatek.com>,
Casper Li (李中榮) <casper.li@...iatek.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
vincenzo.frascino@....com,
Alexandru Elisei <alexandru.elisei@....com>, will@...nel.org,
eugenis@...gle.com, Steven Price <steven.price@....com>,
stable@...r.kernel.org
Subject: Re: [PATCH 1/3] mm: Move arch_do_swap_page() call to before
swap_free()
On 16.05.23 01:40, Peter Collingbourne wrote:
> On Mon, May 15, 2023 at 06:34:30PM +0100, Catalin Marinas wrote:
>> On Sat, May 13, 2023 at 05:29:53AM +0200, David Hildenbrand wrote:
>>> On 13.05.23 01:57, Peter Collingbourne wrote:
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 01a23ad48a04..83268d287ff1 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -3914,19 +3914,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>                  }
>>>>          }
>>>>  
>>>> -        /*
>>>> -         * Remove the swap entry and conditionally try to free up the swapcache.
>>>> -         * We're already holding a reference on the page but haven't mapped it
>>>> -         * yet.
>>>> -         */
>>>> -        swap_free(entry);
>>>> -        if (should_try_to_free_swap(folio, vma, vmf->flags))
>>>> -                folio_free_swap(folio);
>>>> -
>>>> -        inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>>>> -        dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
>>>>          pte = mk_pte(page, vma->vm_page_prot);
>>>> -
>>>>          /*
>>>>           * Same logic as in do_wp_page(); however, optimize for pages that are
>>>>           * certainly not shared either because we just allocated them without
>>>> @@ -3946,8 +3934,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>                  pte = pte_mksoft_dirty(pte);
>>>>          if (pte_swp_uffd_wp(vmf->orig_pte))
>>>>                  pte = pte_mkuffd_wp(pte);
>>>> +        arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>>>>          vmf->orig_pte = pte;
>>>>  
>>>> +        /*
>>>> +         * Remove the swap entry and conditionally try to free up the swapcache.
>>>> +         * We're already holding a reference on the page but haven't mapped it
>>>> +         * yet.
>>>> +         */
>>>> +        swap_free(entry);
>>>> +        if (should_try_to_free_swap(folio, vma, vmf->flags))
>>>> +                folio_free_swap(folio);
>>>> +
>>>> +        inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>>>> +        dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
>>>> +
>>>>          /* ksm created a completely new copy */
>>>>          if (unlikely(folio != swapcache && swapcache)) {
>>>>                  page_add_new_anon_rmap(page, vma, vmf->address);
>>>> @@ -3959,7 +3960,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>          VM_BUG_ON(!folio_test_anon(folio) ||
>>>>                          (pte_write(pte) && !PageAnonExclusive(page)));
>>>>          set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>>>> -        arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>>>>  
>>>>          folio_unlock(folio);
>>>>          if (folio != swapcache && swapcache) {
>>>
>>>
>>> You are moving the folio_free_swap() call after the folio_ref_count(folio)
>>> == 1 check, which means that (previously) swapped pages that are in fact
>>> exclusive can no longer be detected as exclusive.
>>>
>>> There must be a better way to handle MTE here.
>>>
>>> Where are the tags stored, how is the location identified, and when are they
>>> effectively restored right now?
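For reference, the exclusivity check in question looks roughly like this
(paraphrased from mm/memory.c of around that time; simplified, not an exact
excerpt):

        /*
         * A folio that still holds its swapcache reference has
         * folio_ref_count() > 1 here, so deferring folio_free_swap()
         * past this point defeats the exclusivity detection.
         */
        if (!folio_test_ksm(folio) &&
            (exclusive || folio_ref_count(folio) == 1)) {
                if (vmf->flags & FAULT_FLAG_WRITE) {
                        pte = maybe_mkwrite(pte_mkdirty(pte), vma);
                        vmf->flags &= ~FAULT_FLAG_UNSHARE;
                }
                rmap_flags |= RMAP_EXCLUSIVE;
        }
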
>>
>> I haven't gone through Peter's patches yet but a pretty good description
>> of the problem is here:
>> https://lore.kernel.org/all/5050805753ac469e8d727c797c2218a9d780d434.camel@mediatek.com/.
>> I couldn't reproduce it with my swap setup but both Qun-wei and Peter
>> triggered it.
>
> In order to reproduce this bug it is necessary for the swap slot cache
> to be disabled, which is unlikely to occur during normal operation. I
> was only able to reproduce the bug by disabling it forcefully with the
> following patch:
>
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 0bec1f705f8e0..25afba16980c7 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -79,7 +79,7 @@ void disable_swap_slots_cache_lock(void)
>  
>  static void __reenable_swap_slots_cache(void)
>  {
> -        swap_slot_cache_enabled = has_usable_swap();
> +        swap_slot_cache_enabled = false;
>  }
>  
>  void reenable_swap_slots_cache_unlock(void)
>
> With that I can trigger the bug on an MTE-utilizing process by running
> a program that enumerates the process's private anonymous mappings and
> calls process_madvise(MADV_PAGEOUT) on all of them.
>
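
Such a reproducer might look like the sketch below (hypothetical, not the
exact program used above; error handling omitted and the /proc/<pid>/maps
parsing deliberately crude):

        /* Sketch: page out a target's private anonymous mappings via
         * process_madvise(MADV_PAGEOUT). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/syscall.h>
        #include <sys/uio.h>
        #include <unistd.h>

        #ifndef MADV_PAGEOUT
        #define MADV_PAGEOUT 21         /* from <linux/mman.h> */
        #endif

        int main(int argc, char **argv)
        {
                pid_t pid = atoi(argv[1]);
                int pidfd = syscall(SYS_pidfd_open, pid, 0);
                char path[64], line[512];

                snprintf(path, sizeof(path), "/proc/%d/maps", pid);
                FILE *maps = fopen(path, "r");

                while (fgets(line, sizeof(line), maps)) {
                        unsigned long start, end;
                        char perms[8];

                        if (sscanf(line, "%lx-%lx %7s", &start, &end, perms) != 3)
                                continue;
                        /* private anonymous: "p" flag, no backing path */
                        if (perms[3] != 'p' || strchr(line, '/'))
                                continue;

                        struct iovec iov = {
                                .iov_base = (void *)start,
                                .iov_len  = end - start,
                        };
                        syscall(SYS_process_madvise, pidfd, &iov, 1,
                                MADV_PAGEOUT, 0);
                }
                return 0;
        }
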
>> When a tagged page is swapped out, the arm64 code stores the metadata
>> (tags) in a local xarray indexed by the swap pte. When restoring from
>> swap, the arm64 set_pte_at() checks this xarray using the old swap pte
>> and spills the tags onto the new page. Apparently something changed in
>> the kernel recently that causes swap_range_free() to be called before
>> set_pte_at(). The arm64 arch_swap_invalidate_page() frees the metadata
>> from the xarray and the subsequent set_pte_at() won't find it.
>>
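To make that concrete, the arm64 side looks roughly like the following
(a simplified sketch loosely based on arch/arm64/mm/mteswap.c at the time;
names and details may not match any exact kernel version):

        static DEFINE_XARRAY(mte_pages);        /* swap entry.val -> saved tags */

        /* swap-out path: save the page's tags, keyed by its swap entry */
        int mte_save_tags(struct page *page)
        {
                void *tag_storage, *ret;

                if (!page_mte_tagged(page))
                        return 0;

                tag_storage = mte_allocate_tag_storage();
                if (!tag_storage)
                        return -ENOMEM;
                mte_save_page_tags(page_address(page), tag_storage);

                /* page_private() holds the swap entry for a swapcache page */
                ret = xa_store(&mte_pages, page_private(page), tag_storage,
                               GFP_KERNEL);
                if (xa_is_err(ret)) {
                        mte_free_tag_storage(tag_storage);
                        return xa_err(ret);
                } else if (ret) {
                        /* slot was already populated; free the old storage */
                        mte_free_tag_storage(ret);
                }
                return 0;
        }

        /* swap-in path: called via set_pte_at() with the old swap pte */
        void mte_restore_tags(pte_t pte, struct page *page)
        {
                swp_entry_t entry = pte_to_swp_entry(pte);
                void *tags = xa_load(&mte_pages, entry.val);

                if (tags && try_page_mte_tagging(page)) {
                        mte_restore_page_tags(page_address(page), tags);
                        set_page_mte_tagged(page);
                }
        }

        /* swap slot freed (arch_swap_invalidate_page()): drop the saved tags */
        void mte_invalidate_tags(int type, pgoff_t offset)
        {
                swp_entry_t entry = swp_entry(type, offset);
                void *tags = xa_erase(&mte_pages, entry.val);

                mte_free_tag_storage(tags);
        }
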
>> If we have the page, the metadata can be restored before set_pte_at()
>> and I guess that's what Peter is trying to do (again, I haven't looked
>> at the details yet; leaving it for tomorrow).
>>
>> Is there any other way of handling this? E.g. not releasing the metadata
>> in arch_swap_invalidate_page() but only later, in set_pte_at(), once it
>> has been restored. But then we may leak this metadata if there is no
>> set_pte_at() (e.g. the process mapping the swap entry died).
>
> Another problem that I can see with this approach is that it does not
> respect reference counts for swap entries, and it's unclear whether that
> can be done in a non-racy fashion.
>
> Another approach that I considered was to move the hook to swap_readpage()
> as in the patch below (sorry, it only applies to an older version
> of Android's android14-6.1 branch and not mainline, but you get the
> idea). But during a stress test (running the aforementioned program that
> calls process_madvise(MADV_PAGEOUT) in a loop during an Android "monkey"
> test) I discovered the following racy use-after-free that can occur when
> two tasks T1 and T2 concurrently restore the same page:
>
> T1:                  | T2:
> arch_swap_readpage() |
>                      | arch_swap_readpage() -> mte_restore_tags() -> xa_load()
> swap_free()          |
>                      | arch_swap_readpage() -> mte_restore_tags() -> mte_restore_page_tags()
>
> We can avoid it by taking the swap_info_struct::lock spinlock in
> mte_restore_tags(), but it seems like it would lead to lock contention.
>
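For illustration, such a locked variant might look like this hypothetical
sketch (mte_restore_tags_locked() is an assumed name, not existing kernel
code; swp_swap_info() and swap_info_struct::lock do exist, but whether this
is correct and acceptably scalable is exactly the open question):

        /* Serialize the restore against swap_free() releasing the tag
         * storage, at the cost of contention on si->lock. */
        bool mte_restore_tags_locked(swp_entry_t entry, struct page *page)
        {
                struct swap_info_struct *si = swp_swap_info(entry);
                void *tags;
                bool restored = false;

                spin_lock(&si->lock);
                tags = xa_load(&mte_pages, entry.val);
                if (tags && try_page_mte_tagging(page)) {
                        mte_restore_page_tags(page_address(page), tags);
                        set_page_mte_tagged(page);
                        restored = true;
                }
                spin_unlock(&si->lock);

                return restored;
        }
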
Would the idea be to fail swap_readpage() on the one that comes last,
simply retrying to look up the page?
This might be a naive question, but how does MTE play along with shared
anonymous pages?
--
Thanks,
David / dhildenb