Message-ID: <ZZBbNm5RRSGEDlqk@casper.infradead.org>
Date: Sat, 30 Dec 2023 18:02:30 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Genes Lists <lists@...ience.com>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: 6.6.8 stable: crash in folio_mark_dirty

On Sat, Dec 30, 2023 at 10:23:26AM -0500, Genes Lists wrote:
> Apologies in advance, but I cannot git bisect this since machine was
> running for 10 days on 6.6.8 before this happened.

Thanks for the report. Apologies, I'm on holiday until the middle of
the week so this will be extremely terse.

> - Root, efi is on nvme
> - Spare root,efi is on sdg
> - md raid6 on sda-sd with lvmcache from one partition on nvme drive.
> - all filesystems are ext4 (other than efi).
> - 32 GB mem.
> Dec 30 07:00:36 s6 kernel: ------------[ cut here ]------------
> Dec 30 07:00:36 s6 kernel: WARNING: CPU: 0 PID: 521524 at mm/page-writeback.c:2668 __folio_mark_dirty (??:?)

This is:

	WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
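
For context (from memory, so this is a paraphrase of 6.6-era
mm/page-writeback.c rather than a verbatim quote; helper names may be
slightly off), the check sits in __folio_mark_dirty() roughly like so:

void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
			     int warn)
{
	unsigned long flags;

	xa_lock_irqsave(&mapping->i_pages, flags);
	if (folio->mapping) {		/* Race with truncate? */
		/*
		 * Callers that ask to be warned (block_dirty_folio()
		 * passes warn == 1, if I remember right) should never
		 * be dirtying a folio that was never brought uptodate.
		 */
		WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
		folio_account_dirtied(folio, mapping);
		__xa_set_mark(&mapping->i_pages, folio_index(folio),
				PAGECACHE_TAG_DIRTY);
	}
	xa_unlock_irqrestore(&mapping->i_pages, flags);
}

So the question is how we ended up dirtying a folio here that was never
brought uptodate.
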
> Dec 30 07:00:36 s6 kernel: CPU: 0 PID: 521524 Comm: rsync Not tainted 6.6.8-stable-1 #13 d238f5ab6a206cdb0cc5cd72f8688230f23d58df

So rsync is exiting. Do you happen to know what rsync is doing?

> Dec 30 07:00:36 s6 kernel: block_dirty_folio (??:?)
> Dec 30 07:00:36 s6 kernel: unmap_page_range (??:?)
> Dec 30 07:00:36 s6 kernel: unmap_vmas (??:?)
> Dec 30 07:00:36 s6 kernel: exit_mmap (??:?)
> Dec 30 07:00:36 s6 kernel: __mmput (??:?)
> Dec 30 07:00:36 s6 kernel: do_exit (??:?)
> Dec 30 07:00:36 s6 kernel: do_group_exit (??:?)
> Dec 30 07:00:36 s6 kernel: __x64_sys_exit_group (??:?)
> Dec 30 07:00:36 s6 kernel: do_syscall_64 (??:?)

It looks like rsync has a page from the block device mmaped? I'll have
to investigate this properly when I'm back. If you haven't heard from
me in a week, please ping me.
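
To illustrate what I mean by that (purely hypothetical -- I have no idea
whether rsync actually touches the block device, and the device path
below is made up), a mapping like the following would be enough to send
writeback down the exit_mmap() -> unmap_vmas() -> block_dirty_folio()
path in the trace above. It is not a reproducer for the warning itself,
since a normal write fault reads the page and marks it uptodate first:

/* map-blockdev.c: dirty one page of a block device via a shared
 * mapping and exit without msync()/munmap(), so the dirty folio is
 * handled from the process-exit unmap path.  Needs root and a
 * scratch device you do not care about.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sdX";	/* made up */
	int fd = open(dev, O_RDWR);
	char *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	p[0] ^= 1;	/* write fault dirties the first device page */
	return 0;	/* no munmap/msync: cleanup happens at exit */
}
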
(I don't think I caused this, but I think I stand a fighting chance of
tracking down what the problem is, just not right now).