Message-ID: <20110417021111.GA11352@localhost>
Date: Sun, 17 Apr 2011 10:11:11 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Jan Kara <jack@...e.cz>, Dave Chinner <david@...morbit.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Richard Kennedy <richard@....demon.co.uk>,
Hugh Dickins <hughd@...gle.com>,
Rik van Riel <riel@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 4/4] writeback: reduce per-bdi dirty threshold ramp up
time
On Sat, Apr 16, 2011 at 10:21:14PM +0800, Wu Fengguang wrote:
> On Sat, Apr 16, 2011 at 04:33:29PM +0800, Peter Zijlstra wrote:
> > On Sat, 2011-04-16 at 00:13 +0200, Jan Kara wrote:
> > >
> > > So what is a takeaway from this for me is that scaling the period
> > > with the dirty limit is not the right thing. If you'd have 4-times more
> > > memory, your choice of "dirty limit" as the period would be as bad as
> > > current 4*"dirty limit". What would seem like a better choice of period
> > > to me would be to have the period in an order of a few seconds worth of
> > > writeback. That would allow the bdi limit to scale up reasonably fast when
> > > new bdi starts to be used and still not make it fluctuate that much
> > > (hopefully).
> >
> > No best would be to scale the period with the writeout bandwidth, but
> > lacking that the dirty limit had to do. Since we're counting pages, and
> > bandwidth is pages/second we'll end up with a time measure, exactly the
> > thing you wanted.
>
> I owe you the patch :) Here is a tested one for doing the bandwidth
> based scaling. It's based on the attached global writeout bandwidth
> estimation.
>
> I tried updating the shift on both risen and fallen bandwidth, however
> that leads to a reset of the accumulated proportion values. So here the
> shift will only be increased and never decreased.
I cannot reproduce the issue now. It may be because the bandwidth
estimation went wrong and produced tiny values at times in an earlier
patch, thus "resetting" the proportion values.
I'll carry the below version in future tests. In theory we could do
coarser tracking with

	if (abs(shift - vm_completions.pg[0].shift) <= 1)
		return;

But let's be more diligent for now.
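
For illustration only, here is an untested kernel-style sketch of what
that coarser variant would look like if folded into the same function as
the hunk below (it is not part of the patch):

	/*
	 * Illustrative only: skip the prop_change_shift() calls whenever
	 * the new shift is within one step of the current one, tolerating
	 * small drift instead of churning the proportions.
	 */
	static void update_completion_period(void)
	{
		int shift = calc_period_shift();

		if (shift > PROP_MAX_SHIFT)
			shift = PROP_MAX_SHIFT;

		/* tolerate +/-1 drift in the computed shift */
		if (abs(shift - vm_completions.pg[0].shift) <= 1)
			return;

		prop_change_shift(&vm_completions, shift);
		prop_change_shift(&vm_dirties, shift);
	}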
Thanks,
Fengguang
---
@@ -143,6 +136,13 @@ static int calc_period_shift(void)
static void update_completion_period(void)
{
int shift = calc_period_shift();
+
+ if (shift > PROP_MAX_SHIFT)
+ shift = PROP_MAX_SHIFT;
+
+ if (shift == vm_completions.pg[0].shift)
+ return;
+
prop_change_shift(&vm_completions, shift);
prop_change_shift(&vm_dirties, shift);
}
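
For reference, a rough user-space sketch of the bandwidth based idea:
derive the period shift from the estimated writeout bandwidth so that the
period amounts to a few seconds worth of writeback. The names PERIOD_SECS
and PROP_MAX_SHIFT_STANDIN, and the sample bandwidths, are made up for
this sketch; the real clamp value comes from lib/proportions.

	#include <stdio.h>

	#define PERIOD_SECS		3	/* "a few seconds worth of writeback" */
	#define PROP_MAX_SHIFT_STANDIN	48	/* stand-in for the kernel's PROP_MAX_SHIFT */

	/* integer log2, rounded down; returns 0 for inputs 0 and 1 */
	static int ilog2_ul(unsigned long v)
	{
		int r = 0;

		while (v >>= 1)
			r++;
		return r;
	}

	/* period shift so that 1 << shift ~= bandwidth (pages/s) * PERIOD_SECS */
	static int calc_period_shift_bw(unsigned long bw_pages_per_sec)
	{
		int shift = ilog2_ul(bw_pages_per_sec * PERIOD_SECS);

		if (shift > PROP_MAX_SHIFT_STANDIN)
			shift = PROP_MAX_SHIFT_STANDIN;
		return shift;
	}

	int main(void)
	{
		/* e.g. 100 MB/s with 4k pages is 25600 pages/s */
		unsigned long bw[] = { 25600, 51200, 256000 };
		unsigned int i;

		for (i = 0; i < sizeof(bw) / sizeof(bw[0]); i++)
			printf("bandwidth %lu pages/s -> period shift %d\n",
			       bw[i], calc_period_shift_bw(bw[i]));
		return 0;
	}

Since we count completed pages and bandwidth is pages/second, the
resulting period is a time measure, which is the point of the scheme.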