Message-ID: <CAH2r5muRnhFevDR29k=DkmD_B44xQ5jOXd5RnRqkyH27pKzNDQ@mail.gmail.com>
Date: Mon, 29 Jul 2024 12:33:48 -0500
From: Steve French <smfrench@...il.com>
To: Christian Brauner <brauner@...nel.org>
Cc: linux-fsdevel <linux-fsdevel@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>, 
	CIFS <linux-cifs@...r.kernel.org>
Subject: Re: Why do very few filesystems have umount helpers

> umount.udisks talks to the udisks daemon which keeps state
> on the block devices it manages and it also cleans up things that were
> created (directories etc.) at mount time

That does sound similar to a problem that some network filesystems face:
how to clean up resources (e.g. cached metadata) better at umount time,
since kill_sb can take a while to be invoked.

> The first step should be to identify what exactly keeps your mount busy
> in generic/044 and generic/043.

That is a little tricky to debug (AFAIK there is no easy way to tell exactly
which reference is preventing the VFS from proceeding with the umount and
calling kill_sb).  My best guess was something related to deferred close
(cached network file handles) holding a brief refcount on something
being checked by umount, but when I experimented with the deferred close
settings that did not seem to affect the problem, so I am looking for
other possible causes.

I just did a quick experiment by adding a 1-second wait inside umount
and confirmed that it does fix those two tests when mounted to Samba,
but it is not clear why the slight delay in umount helps, as there is no
pending network traffic at that point.

On Mon, Jul 29, 2024 at 4:50 AM Christian Brauner <brauner@...nel.org> wrote:
>
> On Sun, Jul 28, 2024 at 02:09:14PM GMT, Steve French wrote:
> > I noticed that nfs has a umount helper (/sbin/umount.nfs) as does hfs
> > (as does /sbin/umount.udisks2).  Any ideas why those are the only
> > three filesystems that have them but other fs don't?
>
> Helpers such as mount.* or umount.* are used by util-linux. They're not
> (usually) supposed to be used directly.
>
> For example, umount.udisks talks to the udisks daemon which keeps state
> on the block devices it manages and it also cleans up things that were
> created (directories etc.) at mount time. Such mounts are usually marked,
> e.g. via helper=udisks, to instruct util-linux to call umount.udisks.
>
> Similar things probably apply to the others.
>
> > Since umount does not notify the filesystem on unmount until
> > references are closed (unless you do "umount --force") and therefore
> > the filesystem is only notified at kill_sb time, an easier approach to
> > fixing some of the problems where resources are kept around too long
> > (e.g. cached handles or directory entries etc. or references on the
> > mount are held) may be to add a mount helper which notifies the fs
> > (e.g. via fs specific ioctl) when umount has begun.   That may be an
> > easier solution than adding a VFS call to notify the fs when umount
> > begins.   As you can see from fs/namespace.c there is no umount
> > notification normally (only on "force" unmounts)
>
> The first step should be to identify what exactly keeps your mount busy
> in generic/044 and generic/043. If you don't know what the cause of this
> is no notification from VFS will help you. My guess is that this ends up
> being fixable in cifs.



-- 
Thanks,

Steve
