Message-Id: <E1HQt7o-00042r-00@dorka.pomaz.szeredi.hu>
Date: Mon, 12 Mar 2007 23:36:16 +0100
From: Miklos Szeredi <miklos@...redi.hu>
To: dgc@....com
CC: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [patch 3/8] per backing_dev dirty and writeback page accounting
I'll try to explain the reason for the deadlock first.
> IIUC, your problem is that there's another bdi that holds all the
> dirty pages, and this throttle loop never flushes pages from that
> other bdi and we sleep instead. It seems to me that the fundamental
> problem is that to clean the pages we need to flush both bdi's, not
> just the bdi we are directly dirtying.
This is what happens:
write fault on upper filesystem
balance_dirty_pages
  submit write requests
  loop ...

------- fuse IPC ---------------

[fuse loopback fs thread 1]

read request
sys_write
mutex_lock(i_mutex)
...
balance_dirty_pages
  submit write requests
  loop ... write requests completed ... dirty still over limit ...
  ... loop forever

[fuse loopback fs thread 2]

read request
sys_write
mutex_lock(i_mutex) blocks
So the queue for the upper filesystem is full. The queue for the
lower filesystem is empty. There are no dirty pages in the lower
filesystem.
So kicking pdflush for the lower filesystem doesn't help, there's
nothing to do. balance_dirty_pages() for the lower filesystem should
just realize that there's nothing to do and return, and then there
would be progress.
So there's really no need to do any accounting, just some
logic to determine that a backing dev is nearly or completely
quiescent.
And getting out of this tight situation doesn't have to be efficient.
This is probably a very rare corner case that almost never happens in
real life, only with aggressive test tools like bash_shared_mapping.
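Something like this in the balance_dirty_pages() throttle loop would
be enough.  This is only an untested sketch: bdi_write_requests() is a
made-up helper standing in for whatever per-bdi queue state we end up
exporting, it is not an existing function.

  /* made-up helper: true if throttling against this bdi is pointless */
  static int bdi_quiescent(struct backing_dev_info *bdi)
  {
          /* no flusher active and nothing queued for this device */
          return !writeback_in_progress(bdi) &&
                  bdi_write_requests(bdi) == 0;
  }

  /* ... in the balance_dirty_pages() loop: */
  if (bdi_quiescent(mapping->backing_dev_info))
          break;  /* waiting here can't clean anything */

It doesn't matter if this check is racy or slow, as long as it
eventually fires and lets the loop exit.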
> > OK. How about just accounting writeback pages? That should be much
> > less of a problem, since normally writeback is started from
> > pdflush/kupdate in large batches without any concurrency.
>
> Except when you are throttling you bounce the cacheline around
> each cpu as it triggers foreground writeback.....
Yeah, we'd lose a bit of CPU, but not any write performance, since it
is being throttled back anyway.
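To be concrete, the naive accounting would be one shared counter per
bdi, something like this (sketch only, the field name is invented):

  struct backing_dev_info {
          ...
          atomic_long_t nr_writeback;     /* invented field name */
  };

  /* where writeback starts and completes, e.g. around
     test_set_page_writeback()/end_page_writeback(): */
  atomic_long_inc(&bdi->nr_writeback);
  ...
  atomic_long_dec(&bdi->nr_writeback);

That one atomic is exactly the cacheline you mention: under foreground
throttling every CPU hammers it.  But that only happens while we're
being throttled anyway.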
> > Or is it possible to export the state of the device queue to mm?
> > E.g. could balance_dirty_pages() query the backing dev if there are
> > any outstanding write requests?
>
> Not directly - writeback_in_progress(bdi) is a coarse measure
> indicating pdflush is active on this bdi (which implies outstanding
> write requests).
Hmm, not quite what I need.
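If I'm reading fs/fs-writeback.c right, writeback_in_progress() is
just a bit test:

  int writeback_in_progress(struct backing_dev_info *bdi)
  {
          return test_bit(BDI_pdflush, &bdi->state);
  }

So it tells me that pdflush is working on this bdi, but nothing about
whether the queue has drained, which is what I'd need here.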
> > > I'd call this a showstopper right now - maybe you need to look at
> > > something like the ZVC code that Christoph Lameter wrote, perhaps?
> >
> > That's rather a heavyweight approach for this I think.
>
> But if you want to use per-page accounting, you are going to
> need a per-cpu or per-zone set of counters on each bdi to do
> this without introducing regressions.
Yes, this is an option, but I hope for a simpler solution.
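If it has to be per-cpu, maybe a percpu_counter in the bdi would
already do, and be lighter than full ZVC (again just a sketch, the
field name is invented):

  #include <linux/percpu_counter.h>

  struct backing_dev_info {
          ...
          struct percpu_counter nr_writeback;     /* invented field */
  };

  percpu_counter_init(&bdi->nr_writeback, 0);

  /* writers mostly touch only their local per-cpu slot: */
  percpu_counter_inc(&bdi->nr_writeback);
  percpu_counter_dec(&bdi->nr_writeback);

  /* readers get a cheap, approximate value: */
  if (percpu_counter_read_positive(&bdi->nr_writeback) == 0)
          /* bdi looks quiescent */;

The read side is only approximate, but as said above, getting out of
this situation doesn't have to be exact or fast.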
Thanks,
Miklos