Message-Id: <20240710-bug12-v1-1-0e5440f9b8d3@gmail.com>
Date: Wed, 10 Jul 2024 22:13:17 -0700
From: Pei Li <peili.dev@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
skhan@...uxfoundation.org, syzkaller-bugs@...glegroups.com,
linux-kernel-mentees@...ts.linuxfoundation.org,
syzbot+35a4414f6e247f515443@...kaller.appspotmail.com,
Pei Li <peili.dev@...il.com>
Subject: [PATCH] mm: Fix mmap_assert_locked() in follow_pte()
This patch fixes the warning by acquiring the mmap read lock around
untrack_pfn() when the write lock is not held.
syzbot has tested the proposed patch and the reproducer did not
trigger any issue.
Reported-by: syzbot+35a4414f6e247f515443@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=35a4414f6e247f515443
Tested-by: syzbot+35a4414f6e247f515443@...kaller.appspotmail.com
Signed-off-by: Pei Li <peili.dev@...il.com>
---
Syzbot reported the following warning in follow_pte():
WARNING: CPU: 3 PID: 5192 at include/linux/rwsem.h:195 rwsem_assert_held include/linux/rwsem.h:195 [inline]
WARNING: CPU: 3 PID: 5192 at include/linux/rwsem.h:195 mmap_assert_locked include/linux/mmap_lock.h:65 [inline]
WARNING: CPU: 3 PID: 5192 at include/linux/rwsem.h:195 follow_pte+0x414/0x4c0 mm/memory.c:5980
This is because follow_pte() now assumes that mm->mmap_lock is held when
it is entered. This assertion was added in commit c5541ba378e3 ("mm:
follow_pte() improvements").
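
For reference, the check that fires is the assertion near the top of
follow_pte(); roughly (a sketch based on the trace above, not the exact
upstream code):

	/* mm/memory.c:follow_pte(), after commit c5541ba378e3 (sketch) */
	struct mm_struct *mm = vma->vm_mm;

	mmap_assert_locked(mm);	/* include/linux/mmap_lock.h:65, the WARNING above */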
However, in the following call stack, the lock is not acquired:
follow_phys arch/x86/mm/pat/memtype.c:957 [inline]
get_pat_info+0xf2/0x510 arch/x86/mm/pat/memtype.c:991
untrack_pfn+0xf7/0x4d0 arch/x86/mm/pat/memtype.c:1104
unmap_single_vma+0x1bd/0x2b0 mm/memory.c:1819
zap_page_range_single+0x326/0x560 mm/memory.c:1920
In zap_page_range_single(), mm_wr_locked is passed as false, as the
write lock is not expected to be held.
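
That is, unmap_single_vma() is reached with mm_wr_locked == false,
roughly (sketch; argument names are approximate):

	/* mm/memory.c:zap_page_range_single() (sketch) */
	unmap_single_vma(&tlb, vma, address, end, details,
			 false);	/* mm_wr_locked */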
In the special case where vma->vm_flags has VM_PFNMAP set, we hit
untrack_pfn(), which eventually calls into follow_phys().
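
follow_phys() in turn calls follow_pte(), which is where the assertion
fires; roughly (sketch, not the exact upstream code):

	/* arch/x86/mm/pat/memtype.c:follow_phys() (sketch) */
	if (follow_pte(vma, vma->vm_start, &ptep, &ptl))
		return -EINVAL;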
This patch fixes the warning by acquiring the mmap read lock around
untrack_pfn() when the write lock is not held.
syzbot has tested the proposed patch and the reproducer did not trigger any issue:
Tested on:
commit: 9d9a2f29 Merge tag 'mm-hotfixes-stable-2024-07-10-13-1..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13be8021980000
kernel config: https://syzkaller.appspot.com/x/.config?x=3456bae478301dc8
dashboard link: https://syzkaller.appspot.com/bug?extid=35a4414f6e247f515443
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=145e3441980000
Note: testing is done by a robot and is best-effort only.
---
mm/memory.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index d10e616d7389..75d7959b835b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1815,9 +1815,16 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 	if (vma->vm_file)
 		uprobe_munmap(vma, start, end);
 
-	if (unlikely(vma->vm_flags & VM_PFNMAP))
+	if (unlikely(vma->vm_flags & VM_PFNMAP)) {
+		if (!mm_wr_locked)
+			mmap_read_lock(vma->vm_mm);
+
 		untrack_pfn(vma, 0, 0, mm_wr_locked);
 
+		if (!mm_wr_locked)
+			mmap_read_unlock(vma->vm_mm);
+	}
+
 	if (start != end) {
 		if (unlikely(is_vm_hugetlb_page(vma))) {
 			/*
---
base-commit: 734610514cb0234763cc97ddbd235b7981889445
change-id: 20240710-bug12-fecfe4bb0dcb
Best regards,
--
Pei Li <peili.dev@...il.com>