Message-Id: <1232591728.3782.6.camel@mercury.localdomain>
Date: Wed, 21 Jan 2009 21:35:28 -0500
From: Ben Gamari <bgamari@...il.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Andrea Arcangeli <andrea@...e.de>, akpm@...ux-foundation.org,
Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, ltt-dev@...ts.casi.polymtl.ca
Subject: Re: [RFC PATCH] block: Fix bio merge induced high I/O latency
I just completed another set of benchmarks using Jens' patch and a
variety of device parameters. I don't know whether this will help
anyone, but it should at least help quantify the differences between
the device parameters (a sketch of the sysfs knobs involved follows
the tables). Let me know if there's any other benchmarking or testing
that I can do. Thanks,
- Ben
                      mint        maxt
==========================================================
queue_depth=1, slice_async_rq=1, quantum=1, patched
anticipatory        25 msec    4410 msec
cfq                 27 msec    1466 msec
deadline            36 msec   10735 msec
noop                48 msec   37439 msec
==========================================================
queue_depth=1, slice_async_rq=1, quantum=4, patched
anticipatory        38 msec    3579 msec
cfq                 35 msec     822 msec
deadline            37 msec   10072 msec
noop                32 msec   45535 msec
==========================================================
queue_depth=1, slice_async_rq=2, quantum=1, patched
anticipatory        33 msec    4480 msec
cfq                 28 msec     353 msec
deadline            30 msec    6738 msec
noop                36 msec   39691 msec
==========================================================
queue_depth=1, slice_async_rq=2, quantum=4, patched
anticipatory        40 msec    4498 msec
cfq                 35 msec    1395 msec
deadline            41 msec    6877 msec
noop                38 msec   46410 msec
==========================================================
queue_depth=31, slice_async_rq=1, quantum=1, patched
anticipatory        31 msec    6011 msec
cfq                 36 msec    4575 msec
deadline            41 msec   18599 msec
noop                38 msec   46347 msec
==========================================================
queue_depth=31, slice_async_rq=2, quantum=1, patched
anticipatory        30 msec    9985 msec
cfq                 33 msec    4200 msec
deadline            38 msec   22285 msec
noop                25 msec   40245 msec
==========================================================
queue_depth=31, slice_async_rq=2, quantum=4, patched
anticipatory        30 msec   12197 msec
cfq                 30 msec    3457 msec
deadline            35 msec   18969 msec
noop                34 msec   42803 msec
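
For reference, the knobs behind the parameters above can be flipped
between runs with something like the sketch below. This is only a
minimal sketch under my own assumptions: it assumes the test disk is
sda and the 2.6.28-era sysfs layout, where slice_async_rq and quantum
live under the CFQ iosched directory; the device name and the script
itself are mine, not part of the actual test harness.

#!/usr/bin/env python
# Minimal sketch (assumes /dev/sda and the standard sysfs layout) of how
# the parameters varied in the tables above could be set between runs.

DEV = "sda"  # assumption: the benchmarked disk

def write_sysfs(path, value):
    """Write a single value to a sysfs attribute (requires root)."""
    with open(path, "w") as f:
        f.write(str(value))

def configure(scheduler, queue_depth, slice_async_rq, quantum):
    # NCQ queue depth of the SATA device (1 = queuing effectively off)
    write_sysfs("/sys/block/%s/device/queue_depth" % DEV, queue_depth)
    # active I/O scheduler: anticipatory, cfq, deadline or noop
    write_sysfs("/sys/block/%s/queue/scheduler" % DEV, scheduler)
    if scheduler == "cfq":
        # CFQ tunables exercised in the runs above
        write_sysfs("/sys/block/%s/queue/iosched/slice_async_rq" % DEV,
                    slice_async_rq)
        write_sysfs("/sys/block/%s/queue/iosched/quantum" % DEV, quantum)

if __name__ == "__main__":
    # e.g. the "queue_depth=1, slice_async_rq=1, quantum=1" configuration
    configure("cfq", 1, 1, 1)

Note that the scheduler is selected before the iosched tunables are
touched, since the iosched directory is torn down and recreated when
the scheduler changes.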
On Tue, 2009-01-20 at 15:22 -0500, Ben Gamari wrote:
> On Tue, Jan 20, 2009 at 2:37 AM, Jens Axboe <jens.axboe@...cle.com> wrote:
> > On Mon, Jan 19 2009, Mathieu Desnoyers wrote:
> >> * Jens Axboe (jens.axboe@...cle.com) wrote:
> >> Yes, ideally I should re-run those directly on the disk partitions.
> >
> > At least for comparison.
> >
>
> I just completed my own set of benchmarks using the fio job file
> Mathieu provided. This was run on an ext3 partition of a 2.5 inch
> 7200 RPM SATA drive. As you can see, I tested all of the available
> schedulers with queuing both enabled and disabled. I'll test Jens'
> patch soon. Would a blktrace of the fio run help? Let me know if
> there's any other benchmarking or profiling that could be done.
> Thanks,
>
> - Ben
>
>
>                       mint        maxt
> ==========================================================
> queue_depth=31:
> anticipatory        35 msec   11036 msec
> cfq                 37 msec    3350 msec
> deadline            36 msec   18144 msec
> noop                39 msec   41512 msec
>
> ==========================================================
> queue_depth=1:
> anticipatory        45 msec    9561 msec
> cfq                 28 msec    3974 msec
> deadline            47 msec   16802 msec
> noop                35 msec   38173 msec
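
Regarding the blktrace question in the quoted message above: if a trace
would help, the fio run can be wrapped in a capture along these lines.
This is only a sketch under assumed names (latency.fio standing in for
Mathieu's job file, /dev/sda for the disk); nothing here comes from the
actual runs.

#!/usr/bin/env python
# Rough sketch: run the fio job while capturing a blktrace of the same disk.
# "latency.fio" and "/dev/sda" are placeholder names, not from this thread.

import subprocess
import signal

DEV = "/dev/sda"
JOBFILE = "latency.fio"

# start blktrace in the background; it writes trace.blktrace.<cpu> files
trace = subprocess.Popen(["blktrace", "-d", DEV, "-o", "trace"])
try:
    # run the fio job while the trace is being captured
    subprocess.call(["fio", JOBFILE])
finally:
    # stop blktrace cleanly so the per-CPU trace files are flushed
    trace.send_signal(signal.SIGINT)
    trace.wait()

# the raw trace can later be decoded with: blkparse -i trace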