Message-ID: <480709A2.7040606@unimore.it>
Date: Thu, 17 Apr 2008 10:26:10 +0200
From: Paolo Valente <paolo.valente@...more.it>
To: Jens Axboe <jens.axboe@...cle.com>
CC: Pavel Machek <pavel@....cz>, linux-kernel@...r.kernel.org
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler
Jens Axboe wrote:
>> Actually, in the worst case among our tests, the aggregate throughput
>> with 4k sectors was ~ 20 MB/s, hence the time for 4k sectors ~ 4k * 512
>> / 20M = 100 ms.
>>
>
> That's not worse case, it is pretty close to BEST case.
Yes. 100 ms is just the worst case among our tests with 4k sectors, but
those tests cover little more than simultaneous sequential reads.
> Worst case is 4k
> of sectors, with each being a 512b IO and causing a full stroke seek.
> For that type of workload, even a modern sata hard drive will be doing
> 500kb/sec or less. That's roughly a thousand sectors per second, so ~4
> seconds worst case for 4k sectors.
>
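To make the two figures above concrete, here is a back-of-the-envelope
check (a sketch with the assumed values from this thread, not
measurements) of the time needed to serve a 4k-sector budget at each
throughput:

```python
SECTOR_BYTES = 512
BUDGET_SECTORS = 4 * 1024  # the "4k sectors" budget discussed above

def service_time(throughput_bytes_per_s):
    """Seconds needed to transfer the whole budget at the given rate."""
    return BUDGET_SECTORS * SECTOR_BYTES / throughput_bytes_per_s

# Near-best case from the tests: ~20 MB/s aggregate throughput.
print(f"{service_time(20e6) * 1000:.0f} ms")   # prints "105 ms", i.e. ~100 ms

# Worst case: 512b IOs each causing a full-stroke seek, ~500 kB/s.
print(f"{service_time(500e3):.1f} s")          # prints "4.2 s", i.e. ~4 seconds
```

The ~40x spread between the two cases is exactly why a fixed sector
budget translates into wildly different service times depending on the
access pattern.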
In my opinion, the time-slice approach of cfq is definitely better
suited than the (sector) budget approach for this type of workload. At
the opposite end, the price of time slices is unfairness towards, e.g.,
threads performing sequential accesses. With bfq we were mainly thinking
about file copy, ftp, video streaming and so on. I was not able to find
a single solution that is good for both types of workloads.
BTW, there is also another possibility. The internal scheduler of bfq
may be used to schedule time slices instead of budgets. By doing so, the
O(1) instead of O(N) delay/jitter would still be guaranteed (as is
probably already clear, bfq is obtained from cfq by just turning slices
into budgets, and the Round-Robin-like scheduling policy into a Weighted
Fair Queueing one).
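A hypothetical sketch (not bfq's actual code) of that last point: the
same internal Weighted Fair Queueing engine can dispense either budgets
or time slices, because only the unit of the "amount" charged to each
queue changes. Queues are ordered by virtual finish time, so a light
queue is served ahead of a heavy one regardless of arrival order:

```python
import heapq

class WFQ:
    """Minimal Weighted Fair Queueing core, ordered by virtual finish time."""

    def __init__(self):
        self.vtime = 0.0   # global virtual time
        self.heap = []     # (virtual finish time, tie-breaker, queue name)
        self.seq = 0
        self.finish = {}   # last virtual finish time per queue

    def enqueue(self, queue, amount, weight):
        # 'amount' may be a budget in sectors or a slice in ms; the
        # scheduling logic is identical either way.
        start = max(self.vtime, self.finish.get(queue, 0.0))
        f = start + amount / weight
        self.finish[queue] = f
        heapq.heappush(self.heap, (f, self.seq, queue))
        self.seq += 1

    def dispatch(self):
        # Serve the queue with the smallest virtual finish time,
        # then advance the global virtual time past it.
        f, _, queue = heapq.heappop(self.heap)
        self.vtime = max(self.vtime, f)
        return queue

sched = WFQ()
sched.enqueue("ftp", amount=100, weight=2)    # large request, double weight
sched.enqueue("editor", amount=10, weight=1)  # small request
print(sched.dispatch())  # prints "editor": 10/1 finishes before 100/2
```

This is what distinguishes the WFQ policy from plain Round Robin: the
next queue is chosen by per-queue virtual time, not by position in a
fixed rotation, which is what yields the O(1) rather than O(N)
delay/jitter bound mentioned above.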
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/