Message-Id: <73ea63fe-0c8e-412b-9fb2-94c08933180a@www.fastmail.com>
Date: Sat, 20 Aug 2022 15:39:46 -0400
From: "Chris Murphy" <lists@...orremedies.com>
To: linux-kernel <linux-kernel@...r.kernel.org>,
"Linux List" <linux-mm@...ck.org>,
"Linux Devel" <linux-fsdevel@...r.kernel.org>
Subject: 6.0-rc1 BUG squashfs_decompress, and sleeping function called from invalid
context at include/linux/sched/mm.h
I'm seeing the following on every boot with kernel 6.0-rc1 when booting a Fedora Rawhide Live ISO under qemu-kvm. Full dmesg at:
https://drive.google.com/file/d/15u38HZD9NSihIvz4P9M0W3dx6FZWq0MX/view?usp=sharing
Excerpt:
[ 72.111934] kernel: BUG: sleeping function called from invalid context at include/linux/sched/mm.h:274
[ 72.111960] kernel: in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 94, name: kworker/u6:5
[ 72.111965] kernel: preempt_count: 1, expected: 0
[ 72.111969] kernel: RCU nest depth: 0, expected: 0
[ 72.111975] kernel: 4 locks held by kworker/u6:5/94:
[ 72.111980] kernel: #0: ffff9e87f4fc4d48 ((wq_completion)loop1){+.+.}-{0:0}, at: process_one_work+0x20b/0x600
[ 72.112059] kernel: #1: ffffb741c0b83e78 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_one_work+0x20b/0x600
[ 72.112079] kernel: #2: ffff9e87f654ad50 (mapping.invalidate_lock#3){.+.+}-{3:3}, at: page_cache_ra_unbounded+0x69/0x1a0
[ 72.112100] kernel: #3: ffffd741bfa132f8 (&stream->lock){+.+.}-{2:2}, at: squashfs_decompress+0x5/0x1b0 [squashfs]
[ 72.112122] kernel: Preemption disabled at:
[ 72.112125] kernel: [<ffffffffc0605f1d>] squashfs_decompress+0x2d/0x1b0 [squashfs]
[ 72.112139] kernel: CPU: 2 PID: 94 Comm: kworker/u6:5 Not tainted 6.0.0-0.rc1.20220818git3b06a2755758.15.fc38.x86_64 #1
[ 72.112144] kernel: Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[ 72.112147] kernel: Workqueue: loop1 loop_workfn [loop]
[ 72.112163] kernel: Call Trace:
[ 72.112169] kernel: <TASK>
[ 72.112175] kernel: dump_stack_lvl+0x5b/0x77
[ 72.112190] kernel: __might_resched.cold+0xff/0x13a
[ 72.112202] kernel: kmem_cache_alloc_trace+0x207/0x370
[ 72.112217] kernel: handle_next_page+0x76/0xe0 [squashfs]
[ 72.112228] kernel: squashfs_xz_uncompress+0x58/0x200 [squashfs]
[ 72.112236] kernel: ? __wait_for_common+0xab/0x1d0
[ 72.112253] kernel: squashfs_decompress+0xbd/0x1b0 [squashfs]
[ 72.112268] kernel: squashfs_read_data+0xe7/0x5a0 [squashfs]
[ 72.112291] kernel: squashfs_readahead+0x4cd/0xb60 [squashfs]
[ 72.112306] kernel: ? kvm_sched_clock_read+0x14/0x40
[ 72.112310] kernel: ? sched_clock_cpu+0xb/0xc0
[ 72.112350] kernel: read_pages+0x4a/0x390
[ 72.112365] kernel: page_cache_ra_unbounded+0x118/0x1a0
[ 72.112386] kernel: filemap_get_pages+0x3d0/0x6b0
[ 72.112402] kernel: ? lock_is_held_type+0xe8/0x140
[ 72.112427] kernel: filemap_read+0xb4/0x410
[ 72.112437] kernel: ? avc_has_perm_noaudit+0xd3/0x1c0
[ 72.112452] kernel: ? __lock_acquire+0x388/0x1ef0
[ 72.112467] kernel: ? avc_has_perm+0x37/0xb0
[ 72.112488] kernel: do_iter_readv_writev+0xfa/0x110
[ 72.112511] kernel: do_iter_read+0xeb/0x1e0
[ 72.112525] kernel: loop_process_work+0x6fb/0xad0 [loop]
[ 72.112550] kernel: ? lock_acquire+0xde/0x2d0
[ 72.112576] kernel: process_one_work+0x29d/0x600
[ 72.112602] kernel: worker_thread+0x4f/0x3a0
[ 72.112615] kernel: ? process_one_work+0x600/0x600
[ 72.112619] kernel: kthread+0xf2/0x120
[ 72.112625] kernel: ? kthread_complete_and_exit+0x20/0x20
[ 72.112638] kernel: ret_from_fork+0x1f/0x30
[ 72.112676] kernel: </TASK>
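For context, the lockdep output above shows the classic shape of this splat: lock #3 (&stream->lock, a {2:2} spinlock) is taken in squashfs_decompress(), which disables preemption, and further down the call chain kmem_cache_alloc_trace() is reached via handle_next_page(), tripping __might_resched(). The following is a minimal illustrative sketch of that pattern, not the actual squashfs code; the function and lock names here are hypothetical stand-ins, and the fragment assumes a kernel built with CONFIG_DEBUG_ATOMIC_SLEEP:

#include <linux/spinlock.h>
#include <linux/slab.h>

static DEFINE_SPINLOCK(stream_lock);    /* stand-in for stream->lock */

static void *decompress_step(size_t len)
{
	void *buf;

	spin_lock(&stream_lock);	/* disables preemption: atomic context */

	/*
	 * GFP_KERNEL allocations may sleep; doing one with a spinlock
	 * held is what triggers "BUG: sleeping function called from
	 * invalid context". A GFP_ATOMIC allocation, or allocating
	 * before taking the lock, would avoid it.
	 */
	buf = kmalloc(len, GFP_KERNEL);

	spin_unlock(&stream_lock);
	return buf;
}

This is only a sketch of the failure mode the trace points at; whether the 6.0-rc1 squashfs readahead changes introduced exactly this allocation-under-lock, or moved an existing allocation into atomic context, would need confirmation from the actual commit.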
--
Chris Murphy