Message-ID: <20160219205147.GN13177@mtj.duckdns.org>
Date: Fri, 19 Feb 2016 15:51:47 -0500
From: Tejun Heo <tj@...nel.org>
To: Al Viro <viro@...IV.linux.org.uk>
Cc: Jan Kara <jack@...e.cz>, Tahsin Erdogan <tahsin@...gle.com>,
Jens Axboe <axboe@...nel.dk>, cgroups@...r.kernel.org,
Theodore Ts'o <tytso@....edu>,
Nauman Rafique <nauman@...gle.com>,
linux-kernel@...r.kernel.org, Jan Kara <jack@...e.com>
Subject: Re: [PATCH block/for-4.5-fixes] writeback: keep superblock pinned
during cgroup writeback association switches

Hello, Al.

On Fri, Feb 19, 2016 at 08:18:06PM +0000, Al Viro wrote:
> On Thu, Feb 18, 2016 at 08:00:33AM -0500, Tejun Heo wrote:
> > So, the question is why aren't we just using s_active and draining it
> > on umount of the last mountpoint. Because, right now, the behavior is
> > weird in that we allow umounts to proceed but then let the superblock
> > hang onto the block device till s_active is drained. This really
> > should be synchronous.
>
> This really should not. First of all, umount -l (or exit of the last
> namespace user, for that matter) can leave you with actual fs shutdown
> postponed until some opened files get closed. Nothing synchronous about
> that.

I see, I suppose that's what distinguishes the s_active and s_umount
usages - whether pinning should block umounting?
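
Just to make sure I have the model right - the two idioms as I
understand them (a rough sketch, not lifted from any particular call
site):

  /*
   * s_active: keeps the sb alive but doesn't delay umount; the
   * actual fs shutdown is deferred until the last active ref is
   * dropped.  Bumping the count directly is only safe if the sb
   * is known to be active at this point.
   */
  atomic_inc(&sb->s_active);
  /* ... use sb ... */
  deactivate_super(sb);

  /*
   * s_umount: excludes umount / remount for the duration; umount
   * blocks until the rwsem is released.
   */
  down_read(&sb->s_umount);
  /* ... sb can't be unmounted or remounted here ... */
  up_read(&sb->s_umount);
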
> If you need details on s_active/s_umount/etc., I can give you a braindump,
> but I suspect your real question is a lot more specific. Details, please...

So, the problem is that the cgroup writeback path sometimes schedules
a work item to change the cgroup an inode is associated with. Until
now, only the inode was pinned, so the underlying sb could go away
while the work item was still pending. The work item performs iput()
at the end, and that explodes if the underlying sb is already gone.
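
For context, the switch path looks roughly like this (heavily
simplified from fs/fs-writeback.c - the wb switching itself, the RCU
deferral and the error handling are all elided):

  struct inode_switch_wbs_context {
          struct inode            *inode;
          struct bdi_writeback    *new_wb;
          struct work_struct      work;
  };

  static void inode_switch_wbs_work_fn(struct work_struct *work)
  {
          struct inode_switch_wbs_context *isw =
                  container_of(work, struct inode_switch_wbs_context,
                               work);

          /* ... move isw->inode from its old wb to isw->new_wb ... */

          iput(isw->inode);       /* explodes if the sb is gone */
          kfree(isw);
  }

  static void inode_switch_wbs(struct inode *inode, int new_wb_id)
  {
          struct inode_switch_wbs_context *isw;

          isw = kzalloc(sizeof(*isw), GFP_ATOMIC);
          /* ... look up isw->new_wb from new_wb_id ... */

          ihold(inode);           /* pins the inode, not the sb */
          isw->inode = inode;

          INIT_WORK(&isw->work, inode_switch_wbs_work_fn);
          schedule_work(&isw->work);
  }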

As the writeback path relies on s_umount for synchronization anyway,
I think that'd be the most natural way to hold onto the sb, but
unfortunately there's no way to pass on the down_read to the async
execution context, so I made it grab s_active instead. That worked
fine but made the sb hang around until such work items are finished.
It's an unlikely race to hit but still broken.
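
(i.e. the patch in question boils down to bracketing the work item
with an s_active reference - abbreviated:

  /* when scheduling the switch */
  ihold(inode);
  atomic_inc(&inode->i_sb->s_active);     /* pin the sb too */

  /* at the end of the work fn, after the switch is done */
  iput(inode);
  deactivate_super(sb);   /* may now drive the final deactivation */

and it's that lingering s_active ref which keeps the sb around past
umount.)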

The last option would be canceling / flushing these work items from
the sb shutdown path, which is likely more involved.
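
e.g. putting the switch items on a dedicated workqueue and flushing
it from the sb shutdown path - something like the below, where the
names (isw_wq, isw_nr_in_flight, cgroup_writeback_umount()) are all
made up for illustration:

  static struct workqueue_struct *isw_wq;
  static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);

  /* would be called early in the sb shutdown path */
  void cgroup_writeback_umount(void)
  {
          /* cheap exit for the common case of no pending switches */
          if (atomic_read(&isw_nr_in_flight))
                  flush_workqueue(isw_wq);
  }

Flushing the whole workqueue on every umount is coarse, but it avoids
having to track per-sb lists of pending switches.
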
What should it be doing?

Thanks!

--
tejun