Message-ID: <4CFF99E3.6020501@fusionio.com>
Date: Wed, 8 Dec 2010 22:44:51 +0800
From: Jens Axboe <jaxboe@...ionio.com>
To: Shaohua Li <shaohua.li@...el.com>
CC: lkml <linux-kernel@...r.kernel.org>,
"vgoyal@...hat.com" <vgoyal@...hat.com>
Subject: Re: [RFC]block: change sort order of elv_dispatch_sort

On 2010-12-08 22:39, Shaohua Li wrote:
> On Wed, 2010-12-08 at 16:01 +0800, Jens Axboe wrote:
>> On 2010-12-08 15:50, Shaohua Li wrote:
>>> On Wed, 2010-12-08 at 14:56 +0800, Jens Axboe wrote:
>>>> On 2010-12-08 13:42, Shaohua Li wrote:
>>>>> Change the sort order a little bit: sort requests with sectors above the
>>>>> boundary in ascending order, and requests with sectors below the boundary
>>>>> in descending order. The goal is less disk spindle movement.
>>>>> For example, boundary is 7, we add sector 8, 1, 9, 2, 3, 4, 10, 12, 5, 11, 6
>>>>> In the original sort, the sorted list is:
>>>>> 8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6
>>>>> the spindle moves 8->12->1->6, a total movement of roughly 12*2 sectors
>>>>> with the new sort, the list is:
>>>>> 8, 9, 10, 11, 12, 6, 5, 4, 3, 2, 1
>>>>> the spindle moves 8->12->6->1, a total movement of roughly 12*1.5 sectors
>>>>
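[Editor's note: the two orderings quoted above can be reproduced and measured exactly with a small standalone sketch (this is an illustration in Python, not the kernel's elv_dispatch_sort code). The exact distances for the example come out as 20 vs 15 sectors, the same 4:3 ratio as the approximate 12*2 vs 12*1.5 figures above.]

```python
def one_way(reqs, boundary):
    # Original elv_dispatch_sort order: ascending on both sides,
    # with sectors at/above the boundary placed first (wrap at boundary).
    return sorted(reqs, key=lambda s: (s < boundary, s))

def two_way(reqs, boundary):
    # Proposed order: ascending above the boundary, then descending below it.
    return sorted(reqs, key=lambda s: (s < boundary, -s if s < boundary else s))

def movement(order):
    # Total seek distance between consecutive requests.
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

reqs = [8, 1, 9, 2, 3, 4, 10, 12, 5, 11, 6]
print(one_way(reqs, 7))             # [8, 9, 10, 11, 12, 1, 2, 3, 4, 5, 6]
print(movement(one_way(reqs, 7)))   # 20 sectors (8->12->1->6)
print(two_way(reqs, 7))             # [8, 9, 10, 11, 12, 6, 5, 4, 3, 2, 1]
print(movement(two_way(reqs, 7)))   # 15 sectors (8->12->6->1)
```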
>>>> It was actually done this way on purpose; it's been a while since we
>>>> have done two-way elevators, even outside the dispatch list sorting
>>>> itself.
>>>>
>>>> Do you have any results to back this change up? I'd argue that
>>>> continuing to the end, sweeping back, and reading forwards again will
>>>> usually be faster than doing backwards reads.
>>> No, I have no data; that is why this is an RFC patch. Part of the reason
>>> is I don't know when we dispatch several requests to the list; it appears
>>> the driver only takes one request at a time. What kind of test do you
>>> suggest?
>>
>> Yes, that is usually the case. The list is mainly meant as a holding point
>> for dispatch, for requeues, or for requests that don't have a sort
>> ordering. Or on I/O scheduler switches, for instance.
>
> I ran a test in a hacked-up way: I used a modified noop iosched, and every
> time noop tries to dispatch a request, it dispatches all the requests in
> its list. The test does random reads. The result is actually quite stable;
> the changed order always gives slightly better throughput, but the
> improvement is quite small (<1%).

First of all, I think 1% is too close to call, unless your results are
REALLY stable. Secondly, a truly random workload is not a good test
case, since requests are going to be all over the map anyway. For something
more realistic (like your example, but of course not fully contiguous) it
would be interesting to see.
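[Editor's note: a quick way to explore the "more realistic, not fully contiguous" case suggested above, without touching the kernel, is to simulate clustered request batches and compare total head movement under the two orderings. This is a sketch under assumed parameters (cluster count, span, disk size are illustrative, not from the thread).]

```python
import random

def total_movement(order):
    """Sum of absolute seek distances between consecutive requests."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

def clustered_requests(n_clusters=8, span=64, disk=100_000, rng=None):
    """Semi-contiguous workload: short runs of nearby sectors at random offsets."""
    rng = rng or random.Random(0)
    reqs = []
    for _ in range(n_clusters):
        base = rng.randrange(disk)
        reqs += [base + rng.randrange(span) for _ in range(4)]
    rng.shuffle(reqs)
    return reqs

def sort_dispatch(reqs, boundary, two_way=False):
    if two_way:
        # Proposed: ascending above the boundary, descending below it.
        return sorted(reqs, key=lambda s: (s < boundary, -s if s < boundary else s))
    # Original: ascending on both sides, sectors at/above the boundary first.
    return sorted(reqs, key=lambda s: (s < boundary, s))

reqs = clustered_requests()
boundary = 50_000
print("one-way:", total_movement(sort_dispatch(reqs, boundary)))
print("two-way:", total_movement(sort_dispatch(reqs, boundary, two_way=True)))
```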
>>> I'm curious why the sweep back is faster. It definitely needs more
>>> spindle movement. Is there any hardware trick here?
>>
>> The idea is that, while the initial seek is longer, drive prefetch makes
>> serving the latter half of the request series after the sweep faster.
>>
>> I know that classic OS books mention this as a good method, but I don't
>> think that has been the case for a long time.
>
> Hmm, if this is sequential I/O, then the requests are already merged. If
> not, how could the drive know what to prefetch?

Certainly, the requests are not going to look like those in your example; I
didn't take them literally, I assumed you just meant increasing order on
both sides. Once the drive has positioned the head, it is going to read
more than just the single sector in that request. They do read caching,
after all.
--
Jens Axboe
--