Message-ID: <ef33d962-1749-4d5f-acd7-c2ba7e0fc008@redhat.com>
Date: Mon, 13 Jan 2025 16:39:52 +0100
From: David Hildenbrand <david@...hat.com>
To: syzbot <syzbot+c0673e1f1f054fac28c2@...kaller.appspotmail.com>,
akpm@...ux-foundation.org, hdanton@...a.com, liam.howlett@...cle.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
syzkaller-bugs@...glegroups.com, willy@...radead.org
Subject: Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
On 11.01.25 10:54, syzbot wrote:
> Hello,
>
> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> WARNING in __folio_rmap_sanity_checks
>
> page last free pid 7533 tgid 7532 stack trace:
> reset_page_owner include/linux/page_owner.h:25 [inline]
> free_pages_prepare mm/page_alloc.c:1127 [inline]
> free_unref_folios+0xe39/0x18b0 mm/page_alloc.c:2706
> folios_put_refs+0x76c/0x860 mm/swap.c:962
> folio_batch_release include/linux/pagevec.h:101 [inline]
> truncate_inode_pages_range+0x460/0x10e0 mm/truncate.c:330
> iomap_write_failed fs/iomap/buffered-io.c:668 [inline]
> iomap_write_iter fs/iomap/buffered-io.c:999 [inline]
> iomap_file_buffered_write+0xca5/0x11c0 fs/iomap/buffered-io.c:1039
> xfs_file_buffered_write+0x2de/0xac0 fs/xfs/xfs_file.c:792
> new_sync_write fs/read_write.c:586 [inline]
> vfs_write+0xaeb/0xd30 fs/read_write.c:679
> ksys_write+0x18f/0x2b0 fs/read_write.c:731
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 7538 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
> Modules linked in:
> CPU: 0 UID: 0 PID: 7538 Comm: syz.1.57 Not tainted 6.13.0-rc6-syzkaller-gcd6313beaeae #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
> RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
> Code: 0f 0b 90 e9 b7 fd ff ff e8 ee af ab ff 48 ff cb e9 f8 fd ff ff e8 e1 af ab ff 4c 89 e7 48 c7 c6 c0 9c 15 8c e8 82 6f f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 c4 af ab ff 48 ff cb e9 34 fe ff ff e8
> RSP: 0018:ffffc9000c38efd8 EFLAGS: 00010246
> RAX: f8a45fcd41963a00 RBX: ffffea00014f8000 RCX: ffffc9000c38eb03
> RDX: 0000000000000005 RSI: ffffffff8c0aa3e0 RDI: ffffffff8c5fa860
> RBP: 0000000000013186 R08: ffffffff901978b7 R09: 1ffffffff2032f16
> R10: dffffc0000000000 R11: fffffbfff2032f17 R12: ffffea00014f0000
> R13: ffffea00014f8080 R14: 0000000000000000 R15: 0000000000000002
> FS: 00007f14451f96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000020000140 CR3: 0000000073716000 CR4: 00000000003526f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
> <TASK>
> __folio_add_rmap mm/rmap.c:1170 [inline]
> __folio_add_file_rmap mm/rmap.c:1489 [inline]
> folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
> set_pte_range+0x30c/0x750 mm/memory.c:5065
> filemap_map_folio_range mm/filemap.c:3563 [inline]
> filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3672
> do_fault_around mm/memory.c:5280 [inline]
> do_read_fault mm/memory.c:5313 [inline]
> do_fault mm/memory.c:5456 [inline]
> do_pte_missing mm/memory.c:3979 [inline]
> handle_pte_fault+0x3888/0x5ed0 mm/memory.c:5801
> __handle_mm_fault mm/memory.c:5944 [inline]
> handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
> faultin_page mm/gup.c:1196 [inline]
> __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
> populate_vma_page_range+0x264/0x330 mm/gup.c:1932
> __mm_populate+0x27a/0x460 mm/gup.c:2035
> mm_populate include/linux/mm.h:3397 [inline]
> vm_mmap_pgoff+0x2c3/0x3d0 mm/util.c:580
> ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:546
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f1445385d29
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f14451f9038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
> RAX: ffffffffffffffda RBX: 00007f1445575fa0 RCX: 00007f1445385d29
> RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
> RBP: 00007f1445401b08 R08: 0000000000000004 R09: 0000000000000000
> R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: 00007f1445575fa0 R15: 00007ffe4c3a7978
> </TASK>
>
>
> Tested on:
>
> commit: cd6313be Revert "vmstat: disable vmstat_work on vmstat..
> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-stable
> console output: https://syzkaller.appspot.com/x/log.txt?x=10b34bc4580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=d18955ff6936aa88
> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
I tried reproducing this manually in an x86-64 VM with the provided
config and C reproducer; no luck so far :(
Looking at the reports, we always seem to be dealing with an order-9
(PMD-sized) XFS folio with dentry name(?):"memory.current".
Apparently, we're PTE-mapping that PMD-sized folio.
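
For reference, the check we're tripping should be one of the range
sanity checks in __folio_rmap_sanity_checks() -- roughly the following
(paraphrased from include/linux/rmap.h around line 216; the exact line
numbering may differ in this tree):

	/*
	 * The warning fires when the [page, page + nr_pages) range
	 * handed to the rmap code does not lie entirely within the
	 * folio:
	 */
	VM_WARN_ON_ONCE(nr_pages <= 0);
	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
	VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);

So either the first or the last page of the PTE range we're mapping
ends up outside the order-9 folio.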
[ 141.392393][ T7538] page: refcount:1025 mapcount:1 mapping:ffff88805b10ba48 index:0x400 pfn:0x53c00
[ 141.402708][ T7538] head: order:9 mapcount:512 entire_mapcount:0 nr_pages_mapped:512 pincount:0
[ 141.411562][ T7538] memcg:ffff88805b82e000
[ 141.415930][ T7538] aops:xfs_address_space_operations ino:42a dentry name(?):"memory.current"
[ 141.424695][ T7538] flags: 0xfff5800000027d(locked|referenced|uptodate|dirty|lru|workingset|head|node=0|zone=1|lastcpupid=0x7ff)
[ 141.436464][ T7538] raw: 00fff5800000027d ffffea00014d0008 ffffea00014f8008 ffff88805b10ba48
[ 141.445242][ T7538] raw: 0000000000000400 0000000000000000 0000040100000000 ffff88805b82e000
[ 141.454649][ T7538] head: 00fff5800000027d ffffea00014d0008 ffffea00014f8008 ffff88805b10ba48
[ 141.463708][ T7538] head: 0000000000000400 0000000000000000 0000040100000000 ffff88805b82e000
[ 141.472549][ T7538] head: 00fff00000000209 ffffea00014f0001 ffffffff000001ff 0000000000000200
[ 141.481225][ T7538] head: 0000000000000200 0000000000000000 0000000000000000 0000000000000000
[ 141.490004][ T7538] page dumped because: VM_WARN_ON_FOLIO((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))) != folio)
[ 141.508510][ T7538] page_owner tracks the page as allocated
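
(For anyone decoding the dumped condition: the _Generic() expression is
just page_folio() expanded -- it's defined via _compound_head() in
include/linux/page-flags.h:

	#define page_folio(p)		(_Generic((p),			\
		const struct page *:	(const struct folio *)_compound_head(p), \
		struct page *:		(struct folio *)_compound_head(p)))

so the failed assertion boils down to page_folio(page) != folio: some
page in the range does not belong to the folio we're operating on.)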
--
Cheers,
David / dhildenb