Message-ID: <20090608122302.GA8524@duck.suse.cz>
Date: Mon, 8 Jun 2009 14:23:02 +0200
From: Jan Kara <jack@...e.cz>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Frederic Weisbecker <fweisbec@...il.com>, Jan Kara <jack@...e.cz>,
Chris Mason <chris.mason@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
tytso@....edu, david@...morbit.com, hch@...radead.org,
yanmin_zhang@...ux.intel.com, richard@....demon.co.uk,
damien.wyart@...e.fr
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9
On Mon 08-06-09 11:23:38, Jens Axboe wrote:
> On Sat, Jun 06 2009, Frederic Weisbecker wrote:
> > On Sat, Jun 06, 2009 at 02:23:40AM +0200, Jan Kara wrote:
> > > On Fri 05-06-09 20:18:15, Chris Mason wrote:
> > > > On Fri, Jun 05, 2009 at 11:14:38PM +0200, Jan Kara wrote:
> > > > > On Fri 05-06-09 21:15:28, Jens Axboe wrote:
> > > > > > On Fri, Jun 05 2009, Frederic Weisbecker wrote:
> > > > > > > The result with noop is even more impressive.
> > > > > > >
> > > > > > > See: http://kernel.org/pub/linux/kernel/people/frederic/dbench-noop.pdf
> > > > > > >
> > > > > > > Also a comparison, noop with pdflush against noop with bdi writeback:
> > > > > > >
> > > > > > > http://kernel.org/pub/linux/kernel/people/frederic/dbench-noop-cmp.pdf
> > > > > >
> > > > > > OK, so things aren't exactly peachy here to begin with. It may not
> > > > > > actually BE an issue, or at least not a new one, but that doesn't mean
> > > > > > that we should not attempt to quantify the impact.
> > > > > What looks interesting is also the overall throughput. With pdflush we
> > > > > get to 2.5 MB/s + 26 MB/s while with per-bdi we get to 2.7 MB/s + 13 MB/s.
> > > > > So per-bdi seems to be *more* fair but throughput suffers a lot (which
> > > > > might be inevitable due to incurred seeks).
> > > > > Frederic, how much does dbench achieve for you on just one partition
> > > > > (test both consecutively if possible) with as many threads as those
> > > > > two dbench instances have together? Thanks.
> > > >
> > > > Is the graph showing us dbench tput or disk tput? I'm assuming it is
> > > > disk tput, so bdi may just be writing less?
> > > Good question. I was assuming dbench throughput :).
> > >
> > > Honza
> >
> >
> > Yeah, it's dbench. Maybe that's not the right tool to measure the writeback
> > layer, even though dbench results are necessarily influenced by the writeback
> > behaviour.
> >
> > Maybe I should use something else?
> >
> > Note that if you want I can put some surgical trace_printk() calls
> > in fs/fs-writeback.c.
>
> FWIW, I ran a similar test here just now. CFQ was used, two partitions
> on an (otherwise) idle drive. I used 30 clients per dbench and 600s
> runtime. Results are nearly identical, both throughout the run and
> total:
>
> /dev/sdb1
> Throughput 165.738 MB/sec 30 clients 30 procs max_latency=459.002 ms
>
> /dev/sdb2
> Throughput 165.773 MB/sec 30 clients 30 procs max_latency=607.198 ms
Hmm, interesting. 165 MB/sec (in fact 330 MB/sec total for that drive) sounds
like quite a lot ;). This usually happens with dbench when the processes
manage to delete / redirty data before the writeback thread gets to it (so
some IO happens in memory only and throughput is bound by the CPU / memory
speed). So I think you are on a different part of the performance curve
than Frederic. Probably you'd have to run with more threads so that the dbench
threads get throttled because of the total amount of dirty data generated...
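To illustrate the point (this is only a simplified sketch of the idea, not the
actual kernel logic): writers are slowed down only once the total amount of
dirty page-cache memory crosses a global limit, so with few clients dbench can
stay under that limit and run at memory speed:

/*
 * Simplified illustration of the throttling effect described above
 * (not actual kernel code).  A writer is only made to wait once the
 * amount of dirty page-cache memory crosses a global limit, so a small
 * dbench working set can be created, redirtied and deleted entirely in
 * memory without ever hitting the disk.
 */
#include <stdbool.h>

struct dirty_state {
	unsigned long dirty_pages;	/* currently dirty page-cache pages */
	unsigned long dirty_limit;	/* derived from e.g. vm.dirty_ratio */
};

/* Does this write push us over the limit, forcing the writer to wait? */
static bool writer_gets_throttled(const struct dirty_state *s,
				  unsigned long new_pages)
{
	return s->dirty_pages + new_pages > s->dirty_limit;
}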
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR
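
For reference, the kind of "surgical" trace_printk() instrumentation Frederic
offers above might look roughly like the sketch below. Only trace_printk()
itself and wbc->nr_to_write are existing kernel interfaces; the helper name
and the exact hook points in fs/fs-writeback.c are illustrative.

/*
 * Sketch of a "surgical" trace_printk() helper as mentioned in the
 * thread.  The helper and its call sites are illustrative; the output
 * lands in the ftrace ring buffer (/sys/kernel/debug/tracing/trace).
 */
#include <linux/kernel.h>
#include <linux/writeback.h>

static void trace_wb_batch(const char *who, struct writeback_control *wbc,
			   long nr_written)
{
	trace_printk("%s: wrote %ld pages, nr_to_write now %ld\n",
		     who, nr_written, wbc->nr_to_write);
}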