Date: Tue, 6 Feb 2024 20:50:14 +0000
From: Al Viro <viro@...iv.linux.org.uk>
To: Calvin Owens <jcalvinowens@...il.com>
Cc: Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [BUG] Infinite loop in cleanup_mnt() task_work on 6.8-rc3

On Tue, Feb 06, 2024 at 11:52:58AM -0800, Calvin Owens wrote:
> Hello all,
> 
> A couple times in the past week, my laptop has been wedged by a spinning
> cleanup_mnt() task_work from an exiting container runtime (bwrap).
> 
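[ Context for readers: task_work items are callbacks queued on a task and
  drained in a loop when it returns to userspace or exits; the final
  mntput() of a mount defers cleanup_mnt() that way, which is where an
  exiting bwrap would be spinning.  Below is a minimal userspace sketch of
  that drain loop -- an analogy only, not the kernel source; the names
  mirror kernel/task_work.c but everything is simplified:

	#include <stdio.h>
	#include <stdlib.h>

	struct callback_head {
		struct callback_head *next;
		void (*func)(struct callback_head *);
	};

	/* per-task list of pending callbacks */
	static struct callback_head *task_works;

	static void task_work_add(struct callback_head *work)
	{
		work->next = task_works;
		task_works = work;
	}

	/* Shaped like task_work_run(): keep draining until the list stays
	 * empty.  A callback that never completes, or work that keeps
	 * getting re-queued, pins the exiting task in this loop forever. */
	static void task_work_run(void)
	{
		while (task_works) {
			struct callback_head *work = task_works;

			task_works = NULL;
			while (work) {
				struct callback_head *next = work->next;

				work->func(work);   /* e.g. __cleanup_mnt() */
				work = next;
			}
		}
	}

	static void cleanup_cb(struct callback_head *head)
	{
		printf("cleanup ran\n");
		free(head);
	}

	int main(void)
	{
		struct callback_head *w = malloc(sizeof(*w));

		w->func = cleanup_cb;
		task_work_add(w);
		task_work_run();    /* normally drains once and returns */
		return 0;
	}
]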
> The first time it reproduced was while writing to dm-crypt on nvme, so I
> blew it off as a manifestation of the dm-crypt tasklet corruption. But I
> hit it again last night on rc3, which already contains the fix for that
> (commit 0a9bab391e33), so that's not it.
> 
> I'm sorry to say I have very little to go on. Both times it happened, I
> was using Nautilus to browse around in some directories, but I've tried
> monkeying around with that and had no luck reproducing it. The spinning
> happens late enough in the exit path that /proc/self/ is gutted, so I
> don't know what the bwrap container was actually doing.
> 
> The NMI stacktrace and the kconfig I'm running are below. The spinning
> task still moves between CPUs. No hung task notifications appear except
> for random sync() calls happening afterwards from userspace, which all
> block on super_lock() in iterate_supers(). Trying to ptrace the stuck
> process also hangs the tracing process forever.
> 
> I rebuilt with lockdep this morning, but haven't seen any splats, and
> haven't hit the bug again.
> 
> Please let me know if you see anything specific I can test or try that
> might help narrow the problem down. Otherwise, I'll keep working on
> finding a reliable reproducer.

Check if git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs.git #fixes

helps.
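[ One way to test that branch, for anyone following along (the remote and
  local branch names here are placeholders):

	git remote add vfs-viro git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs.git
	git fetch vfs-viro fixes
	git checkout -b test-vfs-fixes vfs-viro/fixes

  then rebuild, boot the result, and retry the bwrap workload. ]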
