Open Source and information security mailing list archives
Date:	Sun, 24 Oct 2010 12:21:31 -0400
From:	Christoph Hellwig <hch@...radead.org>
To:	Al Viro <viro@...IV.linux.org.uk>
Cc:	Dave Chinner <david@...morbit.com>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 16/21] fs: Protect inode->i_state with the inode->i_lock

On Sun, Oct 24, 2010 at 10:13:10AM -0400, Christoph Hellwig wrote:
> On Sat, Oct 23, 2010 at 10:37:52PM +0100, Al Viro wrote:
> > 	* invalidate_inodes() - collect I_FREEING/I_WILL_FREE on a separate
> > list, then (after we'd evicted the stuff we'd decided to evict) wait until
> > they get freed by whatever's freeing them already.
> 
> Note that we would only have to do this for the umount case.  For others
> it's pretty pointless.

Now that I've looked into it I think we're basically fine right now.

If we're in umount there should be no other I_FREEING inodes.

 - concurrent prune_icache is prevented by iprune_sem.
 - another concurrent invalidate_inodes call should not happen because
   we're in umount and the filesystem should not be reachable any more;
   even if it were, iprune_sem would protect us.
 - how could a concurrent iput_final happen?  filesystem is not
   accessible anymore, and iput of fs internal inodes is single-threaded
   with the rest of the actual umount process.

So just skipping over I_FREEING inodes here should be fine for
non-umount callers, and for umount we could even do a WARN_ON.
