Message-ID: <CANpmjNMA4sOXAUdCpDDYz-4D9F_BaTFF8DL3Vkhx=q7vPfYG+A@mail.gmail.com>
Date: Fri, 26 Sep 2025 08:47:45 +0200
From: Marco Elver <elver@...gle.com>
To: syzbot <syzbot+60192c8877d0bc92a92b@...kaller.appspotmail.com>
Cc: Liam.Howlett@...cle.com, akpm@...ux-foundation.org, david@...hat.com,
harry.yoo@...cle.com, jannh@...gle.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, lorenzo.stoakes@...cle.com, riel@...riel.com,
syzkaller-bugs@...glegroups.com, vbabka@...e.cz
Subject: Re: [syzbot] [mm?] KCSAN: data-race in try_to_migrate_one / zap_page_range_single_batched

On Fri, 26 Sept 2025 at 08:44, syzbot
<syzbot+60192c8877d0bc92a92b@...kaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: cec1e6e5d1ab Merge tag 'sched_ext-for-6.17-rc7-fixes' of g..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=145d4f12580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=6e0c213d0735f5dd
> dashboard link: https://syzkaller.appspot.com/bug?extid=60192c8877d0bc92a92b
> compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/10b7c8fdfdec/disk-cec1e6e5.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/cbecc36962db/vmlinux-cec1e6e5.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/214f107d0a3e/bzImage-cec1e6e5.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+60192c8877d0bc92a92b@...kaller.appspotmail.com
>
> ==================================================================
> BUG: KCSAN: data-race in try_to_migrate_one / zap_page_range_single_batched
>
> write to 0xffff88810adfd798 of 8 bytes by task 13594 on cpu 1:
> update_hiwater_rss include/linux/mm.h:2657 [inline]
> try_to_migrate_one+0x918/0x16e0 mm/rmap.c:2455
> __rmap_walk_file+0x1ec/0x2b0 mm/rmap.c:2905
> try_to_migrate+0x1db/0x210 mm/rmap.c:-1
> migrate_folio_unmap mm/migrate.c:1324 [inline]
> migrate_pages_batch+0x6e1/0x1ae0 mm/migrate.c:1873
> migrate_pages_sync mm/migrate.c:1996 [inline]
> migrate_pages+0xf5f/0x1770 mm/migrate.c:2105
> do_mbind mm/mempolicy.c:1539 [inline]
> kernel_mbind mm/mempolicy.c:1682 [inline]
> __do_sys_mbind mm/mempolicy.c:1756 [inline]
> __se_sys_mbind+0x975/0xac0 mm/mempolicy.c:1752
> __x64_sys_mbind+0x78/0x90 mm/mempolicy.c:1752
> x64_sys_call+0x2932/0x2ff0 arch/x86/include/generated/asm/syscalls_64.h:238
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xd2/0x200 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> write to 0xffff88810adfd798 of 8 bytes by task 13595 on cpu 0:
> update_hiwater_rss include/linux/mm.h:2657 [inline]
> zap_page_range_single_batched+0x182/0x450 mm/memory.c:2007
> zap_page_range_single mm/memory.c:2041 [inline]
> unmap_mapping_range_vma mm/memory.c:4020 [inline]
> unmap_mapping_range_tree+0xfd/0x160 mm/memory.c:4037
> unmap_mapping_pages mm/memory.c:4103 [inline]
> unmap_mapping_range+0xe4/0xf0 mm/memory.c:4140
> shmem_fallocate+0x262/0x840 mm/shmem.c:3746
> vfs_fallocate+0x3b6/0x400 fs/open.c:342
> madvise_remove mm/madvise.c:1049 [inline]
> madvise_vma_behavior+0x192d/0x1cf0 mm/madvise.c:1346
> madvise_walk_vmas mm/madvise.c:1669 [inline]
> madvise_do_behavior+0x5b7/0x970 mm/madvise.c:1885
> do_madvise+0x10e/0x190 mm/madvise.c:1978
> __do_sys_madvise mm/madvise.c:1987 [inline]
> __se_sys_madvise mm/madvise.c:1985 [inline]
> __x64_sys_madvise+0x64/0x80 mm/madvise.c:1985
> x64_sys_call+0x1f1a/0x2ff0 arch/x86/include/generated/asm/syscalls_64.h:29
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xd2/0x200 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> value changed: 0x0000000000001645 -> 0x0000000000002165

One of these writes is getting lost, which means hiwater_rss is
lossy/approximate - does it matter?
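
For context, update_hiwater_rss() in include/linux/mm.h is a plain
read-compare-write on mm->hiwater_rss with no annotation or atomicity,
so two racing updaters can both read the old value and a larger update
can then be overwritten by a smaller, stale one - the high-water mark
undershoots. If the mark were meant to be precise, it would need
something like a cmpxchg loop; a minimal sketch only, not a proposal:

static inline void update_hiwater_rss(struct mm_struct *mm)
{
	unsigned long hw, _rss = get_mm_rss(mm);

	/*
	 * Sketch: retry until we either observe a value >= _rss or our
	 * cmpxchg wins. try_cmpxchg() refreshes 'hw' with the current
	 * value on failure, so the bound is re-checked each iteration.
	 */
	hw = READ_ONCE(mm->hiwater_rss);
	while (hw < _rss && !try_cmpxchg(&mm->hiwater_rss, &hw, _rss))
		;
}

If the lossiness is acceptable, the cheaper alternative is to keep the
current logic and just annotate it (data_race() on the read plus
WRITE_ONCE() on the store) so the approximation is documented and KCSAN
stops flagging it.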