Message-ID: <CAF_S4t8pjpOhvNG+U-RUP1QerEJ1of7Me9smXvrFDiOQtzD_Vg@mail.gmail.com>
Date:	Sun, 11 Sep 2011 18:01:06 -0400
From:	Bryan Donlan <bdonlan@...il.com>
To:	Amit Sahrawat <amit.sahrawat83@...il.com>
Cc:	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	linkinjeon@...il.com
Subject: Re: Issue with lazy umount and closing file descriptor in between

On Wed, Sep 7, 2011 at 12:37, Amit Sahrawat <amit.sahrawat83@...il.com> wrote:
> I know that lazy umount was designed so that the mount point is not
> accessible to any future I/O, while ongoing I/O continues to work; the
> umount actually occurs only after that I/O has finished. But this can
> be tricky at times: there are situations where an operation keeps
> running longer than expected, and you cannot unplug the device during
> that period because doing so risks filesystem corruption.
> Is there anything that could be done in this context? Simply reading
> the fd table and closing fds will not serve the purpose, and there is
> every chance of an OOPS occurring because of the closing. Should we
> instead signal all processes with open fds on that mount point to
> close them, i.e., handle this from the user-space applications? Does
> this make sense?
>
> Please throw some insight into this. I am not looking for an exact
> solution, just opinions that can add to this.

Essentially what you want here is a 'forced unmount' option.

It's difficult to do this directly in the existing VFS model; you'd
essentially need to swap out the operations structure for all open
files/inodes on that filesystem in a race-free manner, _and_ wait for
any outstanding operations to complete. The VFS isn't really designed
to support something like this. What you could try instead, however,
is creating a wrapper filesystem - one that redirects all requests to
an underlying filesystem, but supports an operation to:

1) Make all future requests fail with -EIO
2) Invalidate any existing VMA mappings
3) Wait for all outstanding requests to complete
4) Unmount (i.e., unreference) the underlying filesystem
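As a rough illustration of steps 1, 3 and 4, here is a minimal
userspace sketch of the "revocation gate" such a wrapper could place
in front of every forwarded operation. All names here (revoke_gate,
gate_enter, etc.) are invented for illustration - this is not a real
VFS interface, and a kernel version would sleep on a waitqueue rather
than busy-wait, and would also have to handle step 2 (tearing down
existing VMA mappings):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-superblock state for the wrapper filesystem. */
struct revoke_gate {
	atomic_bool revoked;   /* set once: all future requests fail */
	atomic_int  in_flight; /* outstanding forwarded operations   */
};

/*
 * Called at the top of every wrapped file/inode operation.  The
 * counter is raised *before* the flag is checked, so a concurrent
 * revoke either sees this operation in in_flight and waits for it,
 * or this operation sees the flag and backs out - no window where
 * an operation slips through after the drain completes (step 1).
 */
static int gate_enter(struct revoke_gate *g)
{
	atomic_fetch_add(&g->in_flight, 1);
	if (atomic_load(&g->revoked)) {
		atomic_fetch_sub(&g->in_flight, 1);
		return -EIO;
	}
	return 0;
}

/* Called when a forwarded operation finishes. */
static void gate_exit(struct revoke_gate *g)
{
	atomic_fetch_sub(&g->in_flight, 1);
}

/*
 * The forced-unmount path: flip the flag, then wait for in-flight
 * operations to drain (step 3).  Once this returns, the underlying
 * filesystem can safely be unreferenced (step 4).
 */
static void gate_revoke_and_drain(struct revoke_gate *g)
{
	atomic_store(&g->revoked, true);
	while (atomic_load(&g->in_flight) > 0)
		; /* busy-wait placeholder for a real waitqueue */
}
```

The ordering in gate_enter (raise the counter, then check the flag)
is what makes the scheme race-free without a lock on the fast path.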

This will result in some overhead, of course, but would seem to be the
safest route.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
