Message-ID: <20090605211438.GA11650@duck.suse.cz>
Date: Fri, 5 Jun 2009 23:14:38 +0200
From: Jan Kara <jack@...e.cz>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
tytso@....edu, chris.mason@...cle.com, david@...morbit.com,
hch@...radead.org, jack@...e.cz, yanmin_zhang@...ux.intel.com,
richard@....demon.co.uk, damien.wyart@...e.fr
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9
On Fri 05-06-09 21:15:28, Jens Axboe wrote:
> On Fri, Jun 05 2009, Frederic Weisbecker wrote:
> > The result with noop is even more impressive.
> >
> > See: http://kernel.org/pub/linux/kernel/people/frederic/dbench-noop.pdf
> >
> > Also a comparison, noop with pdflush against noop with bdi writeback:
> >
> > http://kernel.org/pub/linux/kernel/people/frederic/dbench-noop-cmp.pdf
>
> OK, so things aren't exactly peachy here to begin with. It may not
> actually BE an issue, or at least not a new one, but that doesn't mean
> we should not attempt to quantify the impact.
What also looks interesting is the overall throughput. With pdflush we
get 2.5 MB/s + 26 MB/s, while with per-bdi writeback we get 2.7 MB/s +
13 MB/s (i.e. roughly 28.5 MB/s aggregate versus 15.7 MB/s). So per-bdi
seems to be *more* fair, but total throughput suffers a lot (which might
be inevitable due to the extra seeks it incurs).
Frederic, how much does dbench achieve for you on just one partition
(testing each of the two partitions consecutively, if possible), with as
many threads as the two dbench instances have together? A sketch of the
kind of run I mean is below. Thanks.
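Just a sketch, assuming each of your two dbench instances used 4 clients;
the runtime, mount points and client counts below are placeholders, so
adjust them to match your actual setup:

  # partition 1 alone, with the combined client count of both instances
  dbench -t 600 -D /mnt/part1 8
  # then partition 2 alone, same client count
  dbench -t 600 -D /mnt/part2 8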
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR