Message-ID: <CAH2r5muGmbafMMJozVxan+=qz3fXyLgV74pgEoewsfn30rbAQg@mail.gmail.com>
Date: Mon, 29 Jul 2024 15:26:31 -0500
From: Steve French <smfrench@...il.com>
To: Andreas Dilger <adilger@...ger.ca>
Cc: linux-fsdevel <linux-fsdevel@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>, 
	CIFS <linux-cifs@...r.kernel.org>
Subject: Re: Why do very few filesystems have umount helpers

>  even though the filesystem on the host is kept mounted the whole time.  If the host filesystem
> is flushing its cache "in anticipation" of being fully unmounted, but is actually servicing dozens
> of guests, then it could be a significant hit to system performance

The good news (at least with cifs.ko) is that when multiple
superblocks are mounted on the same share ("tree connection" in
cifs.ko terms), the cached files (deferred close of network handles
with file leases) and cached directory entries (with directory
leases) won't be freed until the last superblock mounted to that
//server/sharename is unmounted.
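
To make that lifetime rule concrete, here is a minimal sketch of the
refcounting idea (hypothetical names, not actual cifs.ko code): the
cached handles and directory entries hang off one object per share,
and that object is only torn down when the last reference, one per
mounted superblock, is dropped.

    /* Hypothetical sketch of per-share cache lifetime (not cifs.ko code). */
    #include <stdatomic.h>
    #include <stdlib.h>

    struct share_conn {              /* one object per //server/sharename */
            atomic_int refcount;     /* one reference per mounted superblock */
            void *cached_handles;    /* deferred-close handles (file leases) */
            void *cached_dirents;    /* cached directory entries (dir leases) */
    };

    /* Each additional mount of the same share just takes a reference. */
    static void share_get(struct share_conn *sc)
    {
            atomic_fetch_add(&sc->refcount, 1);
    }

    /* The caches are freed only when the last superblock unmounts. */
    static void share_put(struct share_conn *sc)
    {
            if (atomic_fetch_sub(&sc->refcount, 1) == 1) {
                    free(sc->cached_handles);
                    free(sc->cached_dirents);
                    free(sc);
            }
    }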


On Mon, Jul 29, 2024 at 12:31 PM Andreas Dilger <adilger@...ger.ca> wrote:
>
> On Jul 28, 2024, at 1:09 PM, Steve French <smfrench@...il.com> wrote:
> >
> > I noticed that nfs has a umount helper (/sbin/umount.nfs), as does hfs
> > (as does /sbin/umount.udisks2).  Any ideas why those are the only
> > three filesystems that have them, but other fs don't?
>
> I think one of the reasons for this is that *unmount* helpers have been
> available only for a relatively short time compared to *mount* helpers,
> so not nearly as many filesystems have created them (though I'd wanted
> this functionality on occasion over the years).
>
> > Since umount does not notify the filesystem on unmount until
> > references are closed (unless you do "umount --force"), and the
> > filesystem is therefore only notified at kill_sb time, an easier
> > approach to fixing some of the problems where resources are kept
> > around too long (e.g. cached handles or directory entries, or
> > references held on the mount) may be to add an umount helper which
> > notifies the fs (e.g. via an fs-specific ioctl) when umount has
> > begun.  That may be an easier solution than adding a VFS call to
> > notify the fs when umount begins.
>
> I don't think that would be easier in the end, since you still need to
> change the kernel code to handle the new ioctl and coordinate the
> umount helper to call that ioctl from userspace, rather than just
> having the kernel notify the filesystem that an unmount has started.
>
> One potential issue is with namespaces and virtualization, which may
> "unmount" the filesystem pretty frequently, even though the filesystem
> on the host is kept mounted the whole time.  If the host filesystem is
> flushing its cache "in anticipation" of being fully unmounted, but is
> actually servicing dozens of guests, then it could be a significant hit
> to system performance each time a guest/container starts and stops.
>
> Cheers, Andreas
>
> > As you can see from fs/namespace.c, there is normally no umount
> > notification (only on "force" unmounts):
> >
> >        /*
> >         * If we may have to abort operations to get out of this
> >         * mount, and they will themselves hold resources we must
> >         * allow the fs to do things. In the Unix tradition of
> >         * 'Gee thats tricky lets do it in userspace' the umount_begin
> >         * might fail to complete on the first run through as other tasks
> >         * must return, and the like. Thats for the mount program to worry
> >         * about for the moment.
> >         */
> >
> >        if (flags & MNT_FORCE && sb->s_op->umount_begin) {
> >                sb->s_op->umount_begin(sb);
> >        }
> >
> >
> > Any thoughts on why those three filesystems are the only ones with
> > umount helpers?  And why they were added?
> >
> > I do notice umount failures (which can cause the subsequent mount to
> > fail) on some of our functional test runs, e.g. generic/043 and
> > generic/044 often fail against Samba with
> >
> >     QA output created by 043
> >    +umount: /mnt-local-xfstest/scratch: target is busy.
> >    +mount error(16): Device or resource busy
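
Coming back to the helper idea quoted above: a minimal sketch of what
such an umount helper could look like, assuming a hypothetical
FS_IOC_UMOUNT_BEGIN ioctl that the filesystem would have to implement
(no such ioctl exists today). The retry loop is there because, as the
generic/043 output shows, the target can still be busy on the first
attempt.

    /* umount.<fstype> sketch: notify the fs via a hypothetical ioctl,
     * then call umount(2), retrying briefly while the target is busy. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mount.h>
    #include <unistd.h>

    #define FS_IOC_UMOUNT_BEGIN _IO('f', 0x42)  /* hypothetical request */

    int main(int argc, char **argv)
    {
            if (argc != 2) {
                    fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
                    return 1;
            }

            /* Tell the fs an unmount is starting so it can drop cached
             * handles, directory leases, etc. before kill_sb time. */
            int fd = open(argv[1], O_RDONLY | O_DIRECTORY);
            if (fd >= 0) {
                    ioctl(fd, FS_IOC_UMOUNT_BEGIN);  /* best effort */
                    close(fd);
            }

            /* Do the real unmount, retrying a few times on EBUSY. */
            for (int i = 0; i < 5; i++) {
                    if (umount(argv[1]) == 0)
                            return 0;
                    if (errno != EBUSY)
                            break;
                    sleep(1);
            }
            perror("umount");
            return 1;
    }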


-- 
Thanks,

Steve
