Message-ID: <4e578713-c907-4bec-b2c2-f585772eae13@linux.alibaba.com>
Date: Thu, 13 Jun 2024 20:08:29 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: syzbot <syzbot+d6e5c328862b5ae6cbfe@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [mm?] KASAN: slab-use-after-free Read in finish_fault
On 2024/6/13 19:38, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: d35b2284e966 Add linux-next specific files for 20240607
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=178b77ba980000
> kernel config: https://syzkaller.appspot.com/x/.config?x=d8bf5cd6bcca7343
> dashboard link: https://syzkaller.appspot.com/bug?extid=d6e5c328862b5ae6cbfe
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=174c680a980000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=111b9696980000
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/e0055a00a2cb/disk-d35b2284.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/192cbb8cf833/vmlinux-d35b2284.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/57804c9c9319/bzImage-d35b2284.xz
>
> The issue was bisected to:
>
> commit 1c05047ad01693ad92bdf8347fad3b5c2b25e8bb
> Author: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Date: Tue Jun 4 10:17:45 2024 +0000
>
> mm: memory: extend finish_fault() to support large folio
>
> bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=11267f94980000
> final oops: https://syzkaller.appspot.com/x/report.txt?x=13267f94980000
> console output: https://syzkaller.appspot.com/x/log.txt?x=15267f94980000
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+d6e5c328862b5ae6cbfe@...kaller.appspotmail.com
> Fixes: 1c05047ad016 ("mm: memory: extend finish_fault() to support large folio")
>
> ==================================================================
> BUG: KASAN: use-after-free in ptep_get include/linux/pgtable.h:317 [inline]
> BUG: KASAN: use-after-free in ptep_get_lockless include/linux/pgtable.h:581 [inline]
> BUG: KASAN: use-after-free in pte_range_none mm/memory.c:4409 [inline]
> BUG: KASAN: use-after-free in finish_fault+0xf87/0x1460 mm/memory.c:4905
> Read of size 8 at addr ffff88807bfb7000 by task syz-executor149/5117
>
> CPU: 0 PID: 5117 Comm: syz-executor149 Not tainted 6.10.0-rc2-next-20240607-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:91 [inline]
> dump_stack_lvl+0x241/0x360 lib/dump_stack.c:117
> print_address_description mm/kasan/report.c:377 [inline]
> print_report+0x169/0x550 mm/kasan/report.c:488
> kasan_report+0x143/0x180 mm/kasan/report.c:601
> ptep_get include/linux/pgtable.h:317 [inline]
> ptep_get_lockless include/linux/pgtable.h:581 [inline]
> pte_range_none mm/memory.c:4409 [inline]
> finish_fault+0xf87/0x1460 mm/memory.c:4905
> do_read_fault mm/memory.c:5052 [inline]
> do_fault mm/memory.c:5178 [inline]
> do_pte_missing mm/memory.c:3948 [inline]
> handle_pte_fault+0x3db5/0x7130 mm/memory.c:5502
> __handle_mm_fault mm/memory.c:5645 [inline]
> handle_mm_fault+0x10df/0x1ba0 mm/memory.c:5810
> faultin_page mm/gup.c:1339 [inline]
> __get_user_pages+0x6ef/0x1590 mm/gup.c:1638
> populate_vma_page_range+0x264/0x330 mm/gup.c:2078
> __mm_populate+0x27a/0x460 mm/gup.c:2181
> mm_populate include/linux/mm.h:3442 [inline]
> __do_sys_remap_file_pages mm/mmap.c:3177 [inline]
> __se_sys_remap_file_pages+0x7a1/0x9a0 mm/mmap.c:3103
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f

Thanks for reporting. I think the problem is that I should also take the
PMD-sized page table boundary into account, so that the PTE entries for a
large folio do not overflow past the end of the page table. I will fix this
issue ASAP.
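
To illustrate the boundary condition, here is a minimal userspace sketch of
the arithmetic only (not the actual kernel patch; the constant, helper name
and example address below are assumptions, based on x86-64 with 4K pages):

#include <stdio.h>
#include <stdbool.h>

#define PTRS_PER_PTE 512	/* PTE entries per PMD-sized page table (x86-64, 4K pages) */

/*
 * When finish_fault() maps a large folio with multiple PTEs, all of the
 * entries must live in the same page table.  If the starting PTE index
 * plus the number of pages crosses PTRS_PER_PTE, the walk would run off
 * the end of the page table (the use-after-free syzbot hit), so the safe
 * fallback is to map a single page instead.
 */
static bool fits_in_one_page_table(unsigned long addr, unsigned int nr_pages)
{
	unsigned long pte_off = (addr >> 12) & (PTRS_PER_PTE - 1);

	return pte_off + nr_pages <= PTRS_PER_PTE;
}

int main(void)
{
	/* Fault address whose PTE index is 510: only 2 entries remain in this table. */
	unsigned long addr = 0x7f0000000000UL + 510 * 4096UL;

	printf("map 2 pages:  %s\n", fits_in_one_page_table(addr, 2) ? "ok" : "fall back");
	printf("map 16 pages: %s\n", fits_in_one_page_table(addr, 16) ? "ok" : "fall back");
	return 0;
}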