Message-ID: <20150713181751.GZ4568@sgi.com>
Date: Mon, 13 Jul 2015 13:17:51 -0500
From: Ben Myers <bpm@....com>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
"J. Bruce Fields" <bfields@...ldses.org>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [RFC] freeing unlinked file indefinitely delayed
Hey Al,
On Sun, Jul 12, 2015 at 04:00:35PM +0100, Al Viro wrote:
> On Wed, Jul 08, 2015 at 10:41:43AM -0500, Ben Myers wrote:
>
> > The bug rings a bell for me so I will stick my neck out instead of
> > lurking. Don't you need to sample that link count under the filesystem's
> > internal lock in order to avoid an unlink/iget race? I suggest creating
> > a helper to prune disconnected dentries which a filesystem could call in
> > .unlink. That would avoid the risk of unintended side effects with the
> > d_alloc/d_free/icache approach and have provable link count correctness.
>
> For one thing, this patch does *not* check for i_nlink at all.
I agree that not checking i_nlink has the advantage of brevity.
Anyone who is using dentry.d_fsdata with an open_by_handle workload (if
there are any such users) will be affected.
> For another, there's no such thing as 'filesystem's internal lock' for
> i_nlink protection - that's handled by i_mutex... And what does
> iget() have to do with any of that?
i_mutex is good enough only for local filesystems.
Network/clustered/distributed filesystems need to take an internal lock
to provide exclusion between this .unlink and a .link on another host.
That's where I'm coming from with iget().
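
To make that concrete, here is a hand-wavy sketch of what I mean (not
compiled; the example_* names are made-up stand-ins for whatever DLM or
lease machinery a real filesystem would use, while d_inode() and
drop_nlink() are the existing VFS calls):

#include <linux/fs.h>
#include <linux/dcache.h>

/*
 * Sketch of a clustered filesystem's ->unlink.  The cluster lock,
 * not i_mutex, is what excludes a concurrent ->link to this inode
 * from another host.
 */
static int example_unlink(struct inode *dir, struct dentry *dentry)
{
	struct inode *inode = d_inode(dentry);
	int err;

	err = example_cluster_lock(inode);		/* hypothetical */
	if (err)
		return err;

	err = example_remove_dirent(dir, dentry);	/* hypothetical */
	if (!err)
		drop_nlink(inode);	/* existing VFS helper */

	example_cluster_unlock(inode);			/* hypothetical */
	return err;
}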
Maybe plumbing i_op.unlink with another argument to return i_nlink is
something to consider? A helper for the few filesystems that need to do
this might be good enough in the near term.
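
Something like this rough sketch is the helper I have in mind (again not
compiled; example_inode_info and link_lock are made-up names for the
filesystem's private inode and its internal lock, while d_prune_aliases()
is the existing VFS call that drops unused dentries for an inode):

#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/dcache.h>
#include <linux/spinlock.h>

/* made-up per-inode structure; link_lock stands in for whatever
 * internal lock serializes link-count updates across hosts */
struct example_inode_info {
	spinlock_t	link_lock;
	struct inode	vfs_inode;
};

static inline struct example_inode_info *EXAMPLE_I(struct inode *inode)
{
	return container_of(inode, struct example_inode_info, vfs_inode);
}

/*
 * Sample the link count under the filesystem's internal lock, then
 * let the VFS prune any unused aliases once the file is really gone.
 * A filesystem would call this at the end of its ->unlink.
 */
static void example_prune_unlinked(struct inode *inode)
{
	bool unlinked;

	spin_lock(&EXAMPLE_I(inode)->link_lock);
	unlinked = (inode->i_nlink == 0);
	spin_unlock(&EXAMPLE_I(inode)->link_lock);

	if (unlinked)
		d_prune_aliases(inode);	/* existing VFS helper */
}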
Thanks,
Ben