Message-Id: <1255519348.8392.412.camel@twins>
Date: Wed, 14 Oct 2009 13:22:28 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Peter Staubach <staubach@...hat.com>,
Myklebust Trond <Trond.Myklebust@...app.com>,
Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Theodore Tso <tytso@....edu>,
Christoph Hellwig <hch@...radead.org>,
Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
"Li, Shaohua" <shaohua.li@...el.com>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
Nick Piggin <npiggin@...e.de>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Richard Kennedy <richard@....demon.co.uk>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 01/45] writeback: reduce calls to global_page_state in
balance_dirty_pages()
On Wed, 2009-10-14 at 09:38 +0800, Wu Fengguang wrote:
> > > Hmm, probably you've discussed this in some other email but why do we
> > > cycle in this loop until we get below dirty limit? We used to leave the
> > > loop after writing write_chunk... So the time we spend in
> > > balance_dirty_pages() is no longer limited, right?
>
> Right, this is a legitimate concern.
Quite.
> > Wu was saying that without the loop nr_writeback wasn't limited, but
> > since bdi_writeback_wakeup() is driven from writeout completion, I'm
> > again not sure how that was so.
>
> Let me summarize the ideas :)
>
> There are two cases:
>
> - there is no bdi or block IO queue to limit nr_writeback
>   This must be fixed. It either lets nr_writeback grow to dirty_thresh
>   (with the loop) and thus squeeze nr_dirty, or grow out of control
>   entirely (without the loop). The current state is: the nr_writeback
>   wait queue for NFS is there; the one for btrfs is still missing.
>
> - there is a nr_writeback limit, but it is larger than dirty_thresh
>   In this case nr_dirty will be close to 0 regardless of the loop.
>   The loop will help to keep
>       nr_dirty + nr_writeback + nr_unstable < dirty_thresh
>   Without the loop, the "real" dirty threshold would be larger
>   (determined by the nr_writeback limit).
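The first of the two cases above can be made concrete with a toy userspace model (illustrative names and constants only, not kernel code): if writeback never completes and no queue caps it, the throttle loop still bounds nr_writeback at dirty_thresh, because the dirtier blocks once dirty + writeback reaches the threshold.

```c
#include <assert.h>

/* Toy model of case one: no bdi or block-IO queue caps nr_writeback,
 * and writeback completion is modelled as infinitely slow.  The flusher
 * immediately converts each dirtied page to writeback.  With the loop,
 * the dirtier blocks at dirty_thresh, so nr_writeback can grow at most
 * to dirty_thresh (squeezing nr_dirty toward zero); without the loop it
 * would grow without bound. */
enum { DIRTY_THRESH = 100 };

static unsigned long nr_dirty, nr_writeback;

/* returns 1 if the page could be dirtied, 0 if the dirtier must block */
static int try_dirty_page_with_loop(void)
{
    if (nr_dirty + nr_writeback >= DIRTY_THRESH)
        return 0;               /* the loop: wait for completions */
    nr_writeback++;             /* dirtied and immediately under writeback */
    return 1;
}
```

Running many attempts shows nr_writeback saturating at exactly DIRTY_THRESH, matching the "grow to dirty_thresh and thus squeeze nr_dirty" description.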
>
> > We can move all of bdi_dirty to bdi_writeout, if the bdi writeout queue
> > permits, but it cannot grow beyond the total limit, since we're actually
> > waiting for writeout completion.
>
> Yes, this explains the second case. It's a trade-off: the
> nr_writeback limit cannot be trusted on small-memory systems, so we do
> the loop to impose dirty_thresh, which unfortunately can hurt
> responsiveness on all systems through prolonged wait times.
Ok, so I'm still puzzled.
set_page_dirty()
  balance_dirty_pages_ratelimited()
    balance_dirty_pages_ratelimited_nr(1)
      balance_dirty_pages(nr);
So we call balance_dirty_pages() with an appropriate count for each
successful set_page_dirty() invocation, right?
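A rough userspace sketch of the ratelimiting in that call chain (a single global counter and a fixed RATELIMIT are simplifications; the kernel uses per-CPU counters and a tunable ratelimit_pages): each dirtied page is accounted, but the expensive balancing only runs about once per RATELIMIT pages.

```c
/* Simplified sketch: accumulate dirtied pages and only invoke the
 * (expensive) balancing once a batch of RATELIMIT pages has built up,
 * passing the accumulated count along. */
enum { RATELIMIT = 32 };

static unsigned long pages_since_balance;
static unsigned long balance_calls;

static void balance_dirty_pages(unsigned long nr)
{
    (void)nr;
    balance_calls++;            /* stand-in for the real throttling work */
}

static void balance_dirty_pages_ratelimited_nr(unsigned long nr_pages_dirtied)
{
    pages_since_balance += nr_pages_dirtied;
    if (pages_since_balance >= RATELIMIT) {
        balance_dirty_pages(pages_since_balance);
        pages_since_balance = 0;
    }
}
```

This is the "ratelimit fuzz" mentioned below: up to RATELIMIT-1 pages can be dirtied between checks, which evens out across processes.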
balance_dirty_pages() guarantees that:
  nr_dirty + nr_writeback + nr_unstable < dirty_thresh &&
  (nr_dirty + nr_writeback + nr_unstable <
      (dirty_thresh + background_thresh)/2 ||
   bdi_dirty + bdi_writeback + bdi_unstable < bdi_thresh)
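Transcribed as a small predicate (userspace sketch; the struct and function names mirror the mail, not the actual kernel API), the guarantee reads:

```c
/* Sketch of the exit condition balance_dirty_pages() guarantees, using
 * the names from the mail rather than real kernel code. */
struct vm_state {
    unsigned long nr_dirty, nr_writeback, nr_unstable;
    unsigned long bdi_dirty, bdi_writeback, bdi_unstable;
};

static int dirty_limits_ok(const struct vm_state *s,
                           unsigned long dirty_thresh,
                           unsigned long background_thresh,
                           unsigned long bdi_thresh)
{
    unsigned long global = s->nr_dirty + s->nr_writeback + s->nr_unstable;
    unsigned long bdi = s->bdi_dirty + s->bdi_writeback + s->bdi_unstable;

    return global < dirty_thresh &&
           (global < (dirty_thresh + background_thresh) / 2 ||
            bdi < bdi_thresh);
}
```

The second clause lets a process exit early when its own bdi is below its share, as long as the global count is under the midpoint of the two thresholds.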
Now, without the loop and without a writeback limit, I still see no way
to actually generate more 'dirty' pages than dirty_thresh.
As soon as we hit dirty_thresh, a process will wait for exactly the same
amount of pages to get cleaned (writeback completed) as were dirtied
(+/- the ratelimit fuzz, which should even out across processes).
That should bound things to dirty_thresh -- the wait is on writeback
complete, so nr_writeback is bounded too.
[ I forget the exact semantics of unstable; if we clear writeback before
unstable, we need to fix something ]
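The bounding argument above can be checked with a second toy model (all names and constants illustrative, not kernel code): no loop, just a single wait per throttle event for as many completions as a write chunk, with writeback actually completing this time.

```c
/* Toy model of the no-loop argument: a dirtier that, on reaching
 * dirty_thresh, performs one wait for WRITE_CHUNK pages to complete
 * writeback.  Tracking the peak of dirty + writeback shows it never
 * exceeds dirty_thresh. */
enum { DIRTY_THRESH = 100, WRITE_CHUNK = 16 };

static unsigned long nr_dirty, nr_writeback, max_total;

static void start_writeback(unsigned long n)   /* flusher: dirty -> writeback */
{
    while (n-- && nr_dirty) {
        nr_dirty--;
        nr_writeback++;
    }
}

static void complete_writeback(unsigned long n) /* device: writeback done */
{
    while (n-- && nr_writeback)
        nr_writeback--;
}

static void dirty_one_page(void)
{
    nr_dirty++;
    if (nr_dirty + nr_writeback > max_total)
        max_total = nr_dirty + nr_writeback;
    if (nr_dirty + nr_writeback >= DIRTY_THRESH) {
        /* single wait, no loop: one chunk written out and completed */
        start_writeback(WRITE_CHUNK);
        complete_writeback(WRITE_CHUNK);
    }
}
```

Because the wait is on writeback completion, each throttle event removes WRITE_CHUNK pages from the total, so the peak stays pinned at dirty_thresh even over many dirtied pages.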
Now, a nr_writeback queue that limits writeback will still be useful,
especially for high-speed devices. Once they ramp up and bdi_thresh
exceeds the queue size, it will take effect. So you reap the benefits
when needed.