Message-ID: <4807F8A8.4090605@cse.unsw.edu.au>
Date: Fri, 18 Apr 2008 11:26:00 +1000
From: Aaron Carroll <aaronc@....unsw.edu.au>
To: Fabio Checconi <fchecconi@...il.com>
CC: Jens Axboe <jens.axboe@...cle.com>, linux-kernel@...r.kernel.org,
paolo.valente@...more.it
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler

Jens Axboe wrote:
> On Tue, Apr 01 2008, Fabio Checconi wrote:
>> [sorry for reposting, wrong subject]
>>
>> Hi,
>> we are working on a new I/O scheduler based on CFQ, aiming at
>> improved predictability and fairness of the service, while maintaining
>> the high throughput it already provides.

Here are some microbenchmark results. The test setup is a 2-way IA64 with a
single 15k RPM 73GiB SCSI disk, TCQ depth set to 1. Workloads are generated
with FIO: 128 processes issuing synchronous, O_DIRECT, 16KiB block size
requests.

Figures are quoted as average (stdev). CFQ (i=0) means CFQ with the idle
window disabled. All other tunables are at their defaults.
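
For reference, here is a rough C sketch of what each of the 128 random-reader
processes is doing: synchronous 16KiB O_DIRECT reads at random aligned
offsets. This is only an approximation of the FIO job, not the job file
itself; the device path, run length and buffer alignment below are
placeholders. The sequential case is the same loop with a 1MiB block size and
a monotonically increasing offset.

#define _GNU_SOURCE                     /* for O_DIRECT */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE      (16 * 1024)     /* 16KiB request size */

int main(void)
{
        off_t dev_size = (off_t)73 * 1024 * 1024 * 1024; /* placeholder size */
        off_t blocks = dev_size / BLOCK_SIZE;
        void *buf;
        int fd, i;

        fd = open("/dev/sdb", O_RDONLY | O_DIRECT);     /* placeholder device */
        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* O_DIRECT requires an aligned buffer */
        if (posix_memalign(&buf, 4096, BLOCK_SIZE)) {
                close(fd);
                return 1;
        }
        srandom(getpid());
        for (i = 0; i < 10000; i++) {   /* arbitrary number of requests */
                off_t off = (random() % blocks) * BLOCK_SIZE;

                if (pread(fd, buf, BLOCK_SIZE, off) != BLOCK_SIZE) {
                        perror("pread");
                        break;
                }
        }
        free(buf);
        close(fd);
        return 0;
}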
==================================x8=======================================
                Random Readers
-----------------------------------------------
            Latency (ms)       Bandwidth (KiB/s)
-----------------------------------------------
CFQ         841.788 (4070.3)   2428.032 (23.1)
CFQ (i=0)   536.728 (216.9)    3841.024 (8.5)
BFQ         884.4 (8816.0)     2439.04 (1375.0)

            Sequential 1MiB Readers
-----------------------------------------------
            Latency (ms)       Bandwidth (KiB/s)
-----------------------------------------------
CFQ         2865.331 (737.2)   46866.048 (103.1)
CFQ (i=0)   2544.618 (1047.2)  52685.952 (294.2)
BFQ         2860.795 (419.1)   46850.944 (81.5)

Clearly BFQ suffers from the same idle-window problems as CFQ, but otherwise
the performance looks comparable in bandwidth terms. I'm guessing the
variability in service for the random workload is due to max budget being too
large compared to CFQ's default time-slice. Sequential access looks nice and
consistent, though.