Message-ID: <8bb29431064fc1f70a42edef75a8788dd4a0eecc.camel@sapience.com>
Date: Sat, 30 Dec 2023 10:23:26 -0500
From: Genes Lists <lists@...ience.com>
To: linux-kernel@...r.kernel.org
Cc: "MatthewWilcox(Oracle)" <willy@...radead.org>, Andrew Morton
<akpm@...ux-foundation.org>, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org
Subject: 6.6.8 stable: crash in folio_mark_dirty
Apologies in advance, but I cannot git bisect this since the machine had
been running 6.6.8 for 10 days before this happened.

Reporting in case it's useful (and not a hardware failure).

There is nothing interesting in the journal ahead of the crash - the
previous entry, 2 minutes prior, was from the user-space dhcp server.
- Root, efi is on nvme
- Spare root, efi is on sdg
- md raid6 on sda-sd with lvmcache from one partition on the nvme drive
- All filesystems are ext4 (other than efi)
- 32 GB mem
regards
gene
Details attached, which show:
Dec 30 07:00:36 s6 kernel: <TASK>
Dec 30 07:00:36 s6 kernel: ? __folio_mark_dirty+0x21c/0x2a0
Dec 30 07:00:36 s6 kernel: ? __warn+0x81/0x130
Dec 30 07:00:36 s6 kernel: ? __folio_mark_dirty+0x21c/0x2a0
Dec 30 07:00:36 s6 kernel: ? report_bug+0x171/0x1a0
Dec 30 07:00:36 s6 kernel: ? handle_bug+0x3c/0x80
Dec 30 07:00:36 s6 kernel: ? exc_invalid_op+0x17/0x70
Dec 30 07:00:36 s6 kernel: ? asm_exc_invalid_op+0x1a/0x20
Dec 30 07:00:36 s6 kernel: ? __folio_mark_dirty+0x21c/0x2a0
Dec 30 07:00:36 s6 kernel: block_dirty_folio+0x8a/0xb0
Dec 30 07:00:36 s6 kernel: unmap_page_range+0xd17/0x1120
Dec 30 07:00:36 s6 kernel: unmap_vmas+0xb5/0x190
Dec 30 07:00:36 s6 kernel: exit_mmap+0xec/0x340
Dec 30 07:00:36 s6 kernel: __mmput+0x3e/0x130
Dec 30 07:00:36 s6 kernel: do_exit+0x31c/0xb20
Dec 30 07:00:36 s6 kernel: do_group_exit+0x31/0x80
Dec 30 07:00:36 s6 kernel: __x64_sys_exit_group+0x18/0x20
Dec 30 07:00:36 s6 kernel: do_syscall_64+0x5d/0x90
Dec 30 07:00:36 s6 kernel: ? count_memcg_events.constprop.0+0x1a/0x30
Dec 30 07:00:36 s6 kernel: ? handle_mm_fault+0xa2/0x360
Dec 30 07:00:36 s6 kernel: ? do_user_addr_fault+0x30f/0x660
Dec 30 07:00:36 s6 kernel: ? exc_page_fault+0x7f/0x180
Dec 30 07:00:36 s6 kernel: entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Dec 30 07:00:36 s6 kernel: RIP: 0033:0x7fb3c581ee2d
Dec 30 07:00:36 s6 kernel: Code: Unable to access opcode bytes at 0x7fb3c581ee03.
Dec 30 07:00:36 s6 kernel: RSP: 002b:00007fff620541e8 EFLAGS: 00000206 ORIG_RAX: 00000000000000e7
Dec 30 07:00:36 s6 kernel: RAX: ffffffffffffffda RBX: 00007fb3c591efa8 RCX: 00007fb3c581ee2d
Dec 30 07:00:36 s6 kernel: RDX: 00000000000000e7 RSI: ffffffffffffff88 RDI: 0000000000000000
Dec 30 07:00:36 s6 kernel: RBP: 0000000000000002 R08: 0000000000000000 R09: 00007fb3c5924920
Dec 30 07:00:36 s6 kernel: R10: 00005650f2e615f0 R11: 0000000000000206 R12: 0000000000000000
Dec 30 07:00:36 s6 kernel: R13: 0000000000000000 R14: 00007fb3c591d680 R15: 00007fb3c591efc0
Dec 30 07:00:36 s6 kernel: </TASK>
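
For reference (my reading only, not verified against this exact 6.6.8
build): block_dirty_folio() passes warn=1 into __folio_mark_dirty(), so
the WARN at __folio_mark_dirty+0x21c is presumably the "dirtying a folio
that is not uptodate" check, roughly (paraphrased from 6.6-era
mm/page-writeback.c, not copied verbatim, offset not matched):

	void __folio_mark_dirty(struct folio *folio,
				struct address_space *mapping, int warn)
	{
		unsigned long flags;

		xa_lock_irqsave(&mapping->i_pages, flags);
		if (folio->mapping) {	/* Race with truncate? */
			/* warn == 1 when called via block_dirty_folio() */
			WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
			folio_account_dirtied(folio, mapping);
			__xa_set_mark(&mapping->i_pages, folio_index(folio),
					PACHECACHE_TAG_DIRTY is a typo guard: PAGECACHE_TAG_DIRTY);
		}
		xa_unlock_irqrestore(&mapping->i_pages, flags);
	}

If that is the check that fired, it would mean a mapped, file-backed
(ext4 / buffer-head) folio was being marked dirty at exit/unmap time
while not uptodate in the page cache.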