Message-ID: <20110415034300.GA23430@localhost>
Date: Fri, 15 Apr 2011 11:43:00 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Jan Kara <jack@...e.cz>
Cc: Dave Chinner <david@...morbit.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Richard Kennedy <richard@....demon.co.uk>,
Hugh Dickins <hughd@...gle.com>,
Rik van Riel <riel@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 4/4] writeback: reduce per-bdi dirty threshold ramp up time

On Fri, Apr 15, 2011 at 02:16:09AM +0800, Jan Kara wrote:
> On Thu 14-04-11 23:14:25, Wu Fengguang wrote:
> > On Thu, Apr 14, 2011 at 08:23:02AM +0800, Wu Fengguang wrote:
> > > On Thu, Apr 14, 2011 at 07:52:11AM +0800, Dave Chinner wrote:
> > > > On Thu, Apr 14, 2011 at 07:31:22AM +0800, Wu Fengguang wrote:
> > > > > On Thu, Apr 14, 2011 at 06:04:44AM +0800, Jan Kara wrote:
> > > > > > On Wed 13-04-11 16:59:41, Wu Fengguang wrote:
> > > > > > > Reduce the dampening for the control system, yielding faster
> > > > > > > convergence. The change is a bit conservative, as smaller values
> > > > > > > may lead to noticeable bdi threshold fluctuations in low-memory
> > > > > > > JBOD setups.
> > > > > > >
> > > > > > > CC: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> > > > > > > CC: Richard Kennedy <richard@....demon.co.uk>
> > > > > > > Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> > > > > > Well, I have nothing against this change as such, but what I don't
> > > > > > like is that it just changes a magical +2 for a similarly magical
> > > > > > +0. It's clear that
> > > > >
> > > > > The patch tends to make the ramp-up time a bit more reasonable for
> > > > > common desktops: from 100s down to 25s (see below).
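
To make the +2 -> +0 concrete: the shift sets the period of the
proportion estimation, so dropping the +2 shrinks that period (and
hence the ramp-up time) by roughly 4x. Below is a quick user-space
sketch of the arithmetic; the 6G memory size, 20% dirty ratio and
~40MB/s writeback speed are purely example numbers, and my_ilog2() is
just a stand-in for the kernel's ilog2().

/*
 * User-space sketch only, not kernel code: how the +2 -> +0 change to
 * the period shift cuts the ramp-up period by 4x.  The 6G memory size
 * and 20% dirty ratio below are example inputs.
 */
#include <stdio.h>

static int my_ilog2(unsigned long v)	/* floor(log2(v)) */
{
	int r = -1;

	while (v) {
		v >>= 1;
		r++;
	}
	return r;
}

int main(void)
{
	unsigned long mem_pages   = 6UL << 18;		  /* 6G of 4k pages */
	unsigned long dirty_total = mem_pages * 20 / 100; /* 20% dirty ratio */
	int old_shift = 2 + my_ilog2(dirty_total - 1);	  /* current "+2" */
	int new_shift = my_ilog2(dirty_total - 1);	  /* this patch */

	printf("dirty_total: %lu pages\n", dirty_total);
	printf("old period:  %lu pages\n", 1UL << old_shift);
	printf("new period:  %lu pages\n", 1UL << new_shift);
	/*
	 * The period is (roughly) counted in completed writeback pages,
	 * so at ~40MB/s the old period needs on the order of 100s to
	 * cycle through, while the new one needs ~25s.
	 */
	return 0;
}
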
> > > > >
> > > > > > this will lead to more rapid updates of the proportions of a bdi's
> > > > > > share of writeback and a thread's share of dirtying, but why +0?
> > > > > > Why not +1 or -1? So
> > > > >
> > > > > Yes, it will especially be a problem on _small memory_ JBOD setups.
> > > > > Richard actually requested a much more radical change (a decrease by
> > > > > 6), but that looks like too much.
> > > > >
> > > > > My team has a 12-disk JBOD with only 6G memory. The memory is pretty
> > > > > small for a server, but it's a real setup and serves well as the
> > > > > reference minimal setup that Linux should be able to run well on.
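
Rough per-bdi numbers for that box, assuming the default 20% dirty
ratio and an equal split between the 12 disks purely for illustration:

/*
 * Illustration only: how little dirty-threshold headroom each bdi gets
 * on a 6G, 12-disk JBOD, assuming the default 20% dirty ratio and an
 * equal split between bdis.
 */
#include <stdio.h>

int main(void)
{
	unsigned long mem_kb    = 6UL * 1024 * 1024;	/* 6G of memory */
	unsigned long thresh_kb = mem_kb * 20 / 100;	/* global dirty threshold */

	printf("global dirty threshold:   ~%lu MB\n", thresh_kb / 1024);
	printf("per-bdi share (12 disks): ~%lu MB\n", thresh_kb / 1024 / 12);
	return 0;
}

With only ~100MB of threshold per bdi, even small swings in the
estimated proportions show up as noticeable bdi threshold fluctuations,
which is the worry mentioned above.
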
> > > >
> > > > FWIW, linux runs on a lot of low power NAS boxes with jbod and/or
> > > > raid setups that have <= 1GB of RAM (many of them run XFS), so even
> > > > your setup could be considered large by a significant fraction of
> > > > the storage world. Hence you need to be careful of optimising for
> > > > what you think is a "normal" server, because there simply isn't such
> > > > a thing....
> > >
> > > Good point! This patch is likely to hurt a loaded 1GB 4-disk NAS box...
> > > I'll test the setup.
> >
> > Just did a comparison of the IO-less patches' performance with and
> > without this patch. I hardly noticed any differences besides some more
> > bdi goal fluctuations in the attached graphs. The write throughput is
> > a bit higher with this patch (80MB/s vs 76MB/s); however, the delta is
> > well within the even larger stddev range (20MB/s).
> Thanks for the test, but I cannot tell from the numbers you provided
> how much the per-bdi thresholds fluctuated in this low-memory NAS case.
> You can gather the current bdi threshold from
> /sys/kernel/debug/bdi/<dev>/stats, so it shouldn't be hard to get the
> numbers...
Hi Jan, attached are your results w/o this patch. The "bdi goal" (gray
line) is calculated as (bdi_thresh - bdi_thresh/8); it is fluctuating
all over the place, and the average wkB/s is only 49MB/s.
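
In case you want to reproduce the sampling, something like the sketch
below is enough to watch the per-bdi threshold you pointed at. The
"8:0" device id is only an example, and the field names are quoted from
memory; the plotted bdi goal is then simply 7/8 of the sampled
BdiDirtyThresh value.

/*
 * Minimal sampler for the bdi debugfs stats file mentioned above.  The
 * "8:0" device id is an example -- substitute the bdi under test.  It
 * prints the *DirtyThresh lines once per second.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/kernel/debug/bdi/8:0/stats";
	char line[256];
	FILE *f;

	for (;;) {
		f = fopen(path, "r");
		if (!f) {
			perror(path);
			return 1;
		}
		while (fgets(line, sizeof(line), f))
			if (strstr(line, "DirtyThresh"))
				fputs(line, stdout);
		fclose(f);
		putchar('\n');
		sleep(1);
	}
}
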
Thanks,
Fengguang
---
wfg ~/bee% cat xfs-1dd-1M-16p-5907M-3:2-2.6.39-rc3-jan-bdp+-2011-04-15.11:11/iostat-avg
avg-cpu:    %user  %nice  %system  %iowait  %steal     %idle
sum          2.460  0.000   71.080  767.240   0.000  1859.220
avg          0.091  0.000    2.633   28.416   0.000    68.860
stddev       0.064  0.000    0.659    7.903   0.000     7.792

Device:     rrqm/s  wrqm/s    r/s       w/s  rkB/s        wkB/s   avgrq-sz  avgqu-sz     await   svctm     %util
sum           0.000  58.100  0.000  2926.980  0.000  1331730.590  18278.540   962.290  4850.450  97.470  1315.600
avg           0.000   2.152  0.000   108.407  0.000    49323.355    676.983    35.640   179.646   3.610    48.726
stddev        0.000   5.336  0.000   104.398  0.000    47602.790    400.410    40.696   169.289   2.212    45.870
Download attachment "balance_dirty_pages-pages.png" of type "image/png" (111238 bytes)
Download attachment "balance_dirty_pages-task-bw.png" of type "image/png" (36656 bytes)
Download attachment "balance_dirty_pages-pause.png" of type "image/png" (28377 bytes)
View attachment "iostat" of type "text/plain" (65183 bytes)