Message-ID: <Z75nokRl5Bp0ywiX@x1.local>
Date: Tue, 25 Feb 2025 20:00:18 -0500
From: Peter Xu <peterx@...hat.com>
To: Barry Song <21cnbao@...il.com>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, Barry Song <v-songbaohua@...o.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Al Viro <viro@...iv.linux.org.uk>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Brian Geffon <bgeffon@...gle.com>,
Christian Brauner <brauner@...nel.org>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>, Jann Horn <jannh@...gle.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>, Mike Rapoport <rppt@...nel.org>,
Nicolas Geoffray <ngeoffray@...gle.com>,
Ryan Roberts <ryan.roberts@....com>, Shuah Khan <shuah@...nel.org>,
ZhangPeng <zhangpeng362@...wei.com>,
Tangquan Zheng <zhengtangquan@...o.com>, stable@...r.kernel.org
Subject: Re: [PATCH v2] mm: Fix kernel BUG when userfaultfd_move encounters
swapcache
On Wed, Feb 26, 2025 at 01:14:00PM +1300, Barry Song wrote:
> From: Barry Song <v-songbaohua@...o.com>
>
> userfaultfd_move() checks whether the PTE entry is present or a
> swap entry.
>
> - If the PTE entry is present, move_present_pte() handles folio
> migration by setting:
>
> src_folio->index = linear_page_index(dst_vma, dst_addr);
>
> - If the PTE entry is a swap entry, move_swap_pte() simply copies
> the PTE to the new dst_addr.
>
> This approach is incorrect because, even if the PTE is a swap entry,
> it can still reference a folio that remains in the swap cache.
>
> This creates a race window during swap-out, between steps 2 and 4 below:
> 
> 1. add_to_swap: The folio is added to the swapcache.
> 2. try_to_unmap: PTEs are converted to swap entries.
> 3. pageout: The folio is written back.
> 4. The swapcache is cleared.
> 
> If userfaultfd_move() occurs in the window between steps 2 and 4,
> after the swap PTE has been moved to the destination, accessing the
> destination triggers do_swap_page(), which may locate the folio in
> the swapcache. However, since the folio's index has not been updated
> to match the destination VMA, do_swap_page() will detect a mismatch.
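
Side note for readers tracing the path: even with a swap PTE installed,
swap-in can still hit the folio in the swapcache.  Roughly, simplified
from do_swap_page() (exact code differs across kernel versions):

	folio = swap_cache_get_folio(entry, vma, vmf->address);
	if (folio)
		page = folio_file_page(folio, swp_offset(entry));
	...
	/* later, when mapping it back, the stale folio->index is observed */
	folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
				 rmap_flags);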
>
> This can result in two critical issues depending on the system
> configuration.
>
> If KSM is disabled, both small and large folios can trigger a BUG
> during the add_rmap operation due to:
>
> page_pgoff(folio, page) != linear_page_index(vma, address)
>
> [ 13.336953] page: refcount:6 mapcount:1 mapping:00000000f43db19c index:0xffffaf150 pfn:0x4667c
> [ 13.337520] head: order:2 mapcount:1 entire_mapcount:0 nr_pages_mapped:1 pincount:0
> [ 13.337716] memcg:ffff00000405f000
> [ 13.337849] anon flags: 0x3fffc0000020459(locked|uptodate|dirty|owner_priv_1|head|swapbacked|node=0|zone=0|lastcpupid=0xffff)
> [ 13.338630] raw: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
> [ 13.338831] raw: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
> [ 13.339031] head: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
> [ 13.339204] head: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
> [ 13.339375] head: 03fffc0000000202 fffffdffc0199f01 ffffffff00000000 0000000000000001
> [ 13.339546] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
> [ 13.339736] page dumped because: VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address))
> [ 13.340190] ------------[ cut here ]------------
> [ 13.340316] kernel BUG at mm/rmap.c:1380!
> [ 13.340683] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
> [ 13.340969] Modules linked in:
> [ 13.341257] CPU: 1 UID: 0 PID: 107 Comm: a.out Not tainted 6.14.0-rc3-gcf42737e247a-dirty #299
> [ 13.341470] Hardware name: linux,dummy-virt (DT)
> [ 13.341671] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [ 13.341815] pc : __page_check_anon_rmap+0xa0/0xb0
> [ 13.341920] lr : __page_check_anon_rmap+0xa0/0xb0
> [ 13.342018] sp : ffff80008752bb20
> [ 13.342093] x29: ffff80008752bb20 x28: fffffdffc0199f00 x27: 0000000000000001
> [ 13.342404] x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001
> [ 13.342575] x23: 0000ffffaf0d0000 x22: 0000ffffaf0d0000 x21: fffffdffc0199f00
> [ 13.342731] x20: fffffdffc0199f00 x19: ffff000006210700 x18: 00000000ffffffff
> [ 13.342881] x17: 6c203d2120296567 x16: 6170202c6f696c6f x15: 662866666f67705f
> [ 13.343033] x14: 6567617028454741 x13: 2929737365726464 x12: ffff800083728ab0
> [ 13.343183] x11: ffff800082996bf8 x10: 0000000000000fd7 x9 : ffff80008011bc40
> [ 13.343351] x8 : 0000000000017fe8 x7 : 00000000fffff000 x6 : ffff8000829eebf8
> [ 13.343498] x5 : c0000000fffff000 x4 : 0000000000000000 x3 : 0000000000000000
> [ 13.343645] x2 : 0000000000000000 x1 : ffff0000062db980 x0 : 000000000000005f
> [ 13.343876] Call trace:
> [ 13.344045] __page_check_anon_rmap+0xa0/0xb0 (P)
> [ 13.344234] folio_add_anon_rmap_ptes+0x22c/0x320
> [ 13.344333] do_swap_page+0x1060/0x1400
> [ 13.344417] __handle_mm_fault+0x61c/0xbc8
> [ 13.344504] handle_mm_fault+0xd8/0x2e8
> [ 13.344586] do_page_fault+0x20c/0x770
> [ 13.344673] do_translation_fault+0xb4/0xf0
> [ 13.344759] do_mem_abort+0x48/0xa0
> [ 13.344842] el0_da+0x58/0x130
> [ 13.344914] el0t_64_sync_handler+0xc4/0x138
> [ 13.345002] el0t_64_sync+0x1ac/0x1b0
> [ 13.345208] Code: aa1503e0 f000f801 910f6021 97ff5779 (d4210000)
> [ 13.345504] ---[ end trace 0000000000000000 ]---
> [ 13.345715] note: a.out[107] exited with irqs disabled
> [ 13.345954] note: a.out[107] exited with preempt_count 2
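
For reference, the check that fires here is __page_check_anon_rmap();
roughly (paraphrased from mm/rmap.c, may vary slightly by version):

	static void __page_check_anon_rmap(struct folio *folio, struct page *page,
			struct vm_area_struct *vma, unsigned long address)
	{
		/* the folio's anon_vma and index must match the faulting VMA */
		VM_BUG_ON_FOLIO(folio_anon_vma(folio)->root != vma->anon_vma->root,
				folio);
		VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address),
			       page);
	}

Since move_swap_pte() never updated src_folio->index, the second assertion
trips on the first fault at dst_addr.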
>
> If KSM is enabled, Peter Xu also discovered that do_swap_page() may
> trigger an unexpected CoW operation for small folios because
> ksm_might_need_to_copy() allocates a new folio when the folio index
> does not match linear_page_index(vma, addr).
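
For context, ksm_might_need_to_copy() skips the copy only when the index
and anon_vma still match the faulting VMA; abridged (from mm/ksm.c, may
vary by version):

	if (!folio_test_ksm(folio)) {
		if (!anon_vma)
			return folio;		/* no need to copy */
		if (folio->index == linear_page_index(vma, addr) &&
		    anon_vma->root == vma->anon_vma->root)
			return folio;		/* still no need to copy */
	}
	/* otherwise allocate a new folio and copy: the unexpected CoW */
	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr);

so a stale folio->index after the move forces the copy path even for
non-KSM folios.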
>
> This patch also checks the swapcache when handling swap entries. If a
> match is found there, the folio is processed similarly to a present
> PTE. There are some differences, however: for example, the folio is no
> longer exclusive, because folio_try_share_anon_rmap_pte() was performed
> during unmapping. Furthermore, in the swapcache case the folio has
> already been unmapped, which eliminates the risk of concurrent rmap
> walks and removes the need to acquire src_folio's anon_vma lock.
>
> Note that for large folios in the swapcache handling path, we directly
> return -EBUSY, since split_folio() will return -EBUSY in either case,
> whether the folio is under writeback or unmapped. This is not an urgent
> issue, so a follow-up patch may address it separately.
>
> Fixes: adef440691bab ("userfaultfd: UFFDIO_MOVE uABI")
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Cc: Al Viro <viro@...iv.linux.org.uk>
> Cc: Axel Rasmussen <axelrasmussen@...gle.com>
> Cc: Brian Geffon <bgeffon@...gle.com>
> Cc: Christian Brauner <brauner@...nel.org>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Jann Horn <jannh@...gle.com>
> Cc: Kalesh Singh <kaleshsingh@...gle.com>
> Cc: Liam R. Howlett <Liam.Howlett@...cle.com>
> Cc: Lokesh Gidra <lokeshgidra@...gle.com>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Mike Rapoport (IBM) <rppt@...nel.org>
> Cc: Nicolas Geoffray <ngeoffray@...gle.com>
> Cc: Peter Xu <peterx@...hat.com>
> Cc: Ryan Roberts <ryan.roberts@....com>
> Cc: Shuah Khan <shuah@...nel.org>
> Cc: ZhangPeng <zhangpeng362@...wei.com>
> Cc: Tangquan Zheng <zhengtangquan@...o.com>
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Barry Song <v-songbaohua@...o.com>
Acked-by: Peter Xu <peterx@...hat.com>
Some nitpicks below, maybe not worth a repost..
> ---
> mm/userfaultfd.c | 76 ++++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 67 insertions(+), 9 deletions(-)
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 867898c4e30b..2df5d100e76d 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -18,6 +18,7 @@
> #include <asm/tlbflush.h>
> #include <asm/tlb.h>
> #include "internal.h"
> +#include "swap.h"
>
> static __always_inline
> bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
> @@ -1072,16 +1073,14 @@ static int move_present_pte(struct mm_struct *mm,
> return err;
> }
>
> -static int move_swap_pte(struct mm_struct *mm,
> +static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
> unsigned long dst_addr, unsigned long src_addr,
> pte_t *dst_pte, pte_t *src_pte,
> pte_t orig_dst_pte, pte_t orig_src_pte,
> pmd_t *dst_pmd, pmd_t dst_pmdval,
> - spinlock_t *dst_ptl, spinlock_t *src_ptl)
> + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> + struct folio *src_folio)
> {
> - if (!pte_swp_exclusive(orig_src_pte))
> - return -EBUSY;
> -
> double_pt_lock(dst_ptl, src_ptl);
>
> if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
> @@ -1090,10 +1089,20 @@ static int move_swap_pte(struct mm_struct *mm,
> return -EAGAIN;
> }
>
> + /*
> + * The src_folio resides in the swapcache, requiring an update to its
> + * index and mapping to align with the dst_vma, where a swap-in may
> + * occur and hit the swapcache after moving the PTE.
> + */
> + if (src_folio) {
> + folio_move_anon_rmap(src_folio, dst_vma);
> + src_folio->index = linear_page_index(dst_vma, dst_addr);
> + }
> +
> orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
> set_pte_at(mm, dst_addr, dst_pte, orig_src_pte);
> - double_pt_unlock(dst_ptl, src_ptl);
>
> + double_pt_unlock(dst_ptl, src_ptl);
Unnecessary line move.
> return 0;
> }
>
> @@ -1137,6 +1146,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> __u64 mode)
> {
> swp_entry_t entry;
> + struct swap_info_struct *si = NULL;
> pte_t orig_src_pte, orig_dst_pte;
> pte_t src_folio_pte;
> spinlock_t *src_ptl, *dst_ptl;
> @@ -1318,6 +1328,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> orig_dst_pte, orig_src_pte, dst_pmd,
> dst_pmdval, dst_ptl, src_ptl, src_folio);
> } else {
> + struct folio *folio = NULL;
> +
> entry = pte_to_swp_entry(orig_src_pte);
> if (non_swap_entry(entry)) {
> if (is_migration_entry(entry)) {
> @@ -1331,9 +1343,53 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> goto out;
> }
>
> - err = move_swap_pte(mm, dst_addr, src_addr, dst_pte, src_pte,
> - orig_dst_pte, orig_src_pte, dst_pmd,
> - dst_pmdval, dst_ptl, src_ptl);
> + if (!pte_swp_exclusive(orig_src_pte)) {
> + err = -EBUSY;
> + goto out;
> + }
> +
> + si = get_swap_device(entry);
> + if (unlikely(!si)) {
> + err = -EAGAIN;
> + goto out;
> + }
> + /*
> + * Verify the existence of the swapcache. If present, the folio's
> + * index and mapping must be updated even when the PTE is a swap
> + * entry. The anon_vma lock is not taken during this process since
> + * the folio has already been unmapped, and the swap entry is
> + * exclusive, preventing rmap walks.
> + *
> + * For large folios, return -EBUSY immediately, as split_folio()
> + * also returns -EBUSY when attempting to split unmapped large
> + * folios in the swapcache. This issue needs to be resolved
> + * separately to allow proper handling.
> + */
> + if (!src_folio)
> + folio = filemap_get_folio(swap_address_space(entry),
> + swap_cache_index(entry));
> + if (!IS_ERR_OR_NULL(folio)) {
> + if (folio && folio_test_large(folio)) {
This folio check can be dropped, as we just checked "!IS_ERR_OR_NULL(folio)"..
> + err = -EBUSY;
> + folio_put(folio);
> + goto out;
> + }
> + src_folio = folio;
> + src_folio_pte = orig_src_pte;
> + if (!folio_trylock(src_folio)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + folio_lock(src_folio);
> + put_swap_device(si);
> + si = NULL;
Not sure if it can do any harm, but it may still be nicer to put the swap
device before locking the folio, e.g.:
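
	pte_unmap(&orig_src_pte);
	pte_unmap(&orig_dst_pte);
	src_pte = dst_pte = NULL;
	put_swap_device(si);
	si = NULL;
	/* now we can block and wait */
	folio_lock(src_folio);
	goto retry;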
Thanks,
> + goto retry;
> + }
> + }
> + err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
> + orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
> + dst_ptl, src_ptl, src_folio);
> }
>
> out:
> @@ -1350,6 +1406,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> if (src_pte)
> pte_unmap(src_pte);
> mmu_notifier_invalidate_range_end(&range);
> + if (si)
> + put_swap_device(si);
>
> return err;
> }
> --
> 2.39.3 (Apple Git-146)
>
--
Peter Xu