Message-ID: <4A3CD62B.1020407@vlnb.net>
Date: Sat, 20 Jun 2009 16:29:31 +0400
From: Vladislav Bolkhovitin <vst@...b.net>
To: Wu Fengguang <fengguang.wu@...el.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
"kosaki.motohiro@...fujitsu.com" <kosaki.motohiro@...fujitsu.com>,
"Alan.Brunelle@...com" <Alan.Brunelle@...com>,
"hifumi.hisashi@....ntt.co.jp" <hifumi.hisashi@....ntt.co.jp>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
"randy.dunlap@...cle.com" <randy.dunlap@...cle.com>,
Beheer InterCommIT <intercommit@...il.com>
Subject: Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev
Wu Fengguang, on 06/20/2009 07:55 AM wrote:
> On Fri, Jun 19, 2009 at 03:04:36AM +0800, Andrew Morton wrote:
>> On Sun, 7 Jun 2009 06:45:38 +0800
>> Wu Fengguang <fengguang.wu@...el.com> wrote:
>>
>>>>> Do you have a place where the raw blktrace data can be retrieved for
>>>>> more in-depth analysis?
>>>> I think your comment is quite apt. In another thread, Wu Fengguang pointed
>>>> out the same issue.
>>>> Wu and I are also waiting for his analysis.
>>> And do it with a large readahead size :)
>>>
>>> Alan, this was my analysis:
>>>
>>> : Hifumi, can you help retest with some large readahead size?
>>> :
>>> : Your readahead size (128K) is smaller than your max_sectors_kb (256K),
>>> : so two readahead IO requests get merged into one real IO, that means
>>> : half of the readahead requests are delayed.
>>>
>>> I.e., the two readahead requests get merged and complete together, so the
>>> effective IO size is doubled, but at the same time the IO becomes completely
>>> synchronous [the merge condition is sketched in code after the traces below].
>>>
>>> :
>>> : The IO completion size goes down from 512 to 256 sectors:
>>> :
>>> : before patch:
>>> : 8,0 3 177955 50.050313976 0 C R 8724991 + 512 [0]
>>> : 8,0 3 177966 50.053380250 0 C R 8725503 + 512 [0]
>>> : 8,0 3 177977 50.056970395 0 C R 8726015 + 512 [0]
>>> : 8,0 3 177988 50.060326743 0 C R 8726527 + 512 [0]
>>> : 8,0 3 177999 50.063922341 0 C R 8727039 + 512 [0]
>>> :
>>> : after patch:
>>> : 8,0 3 257297 50.000760847 0 C R 9480703 + 256 [0]
>>> : 8,0 3 257306 50.003034240 0 C R 9480959 + 256 [0]
>>> : 8,0 3 257307 50.003076338 0 C R 9481215 + 256 [0]
>>> : 8,0 3 257323 50.004774693 0 C R 9481471 + 256 [0]
>>> : 8,0 3 257332 50.006865854 0 C R 9481727 + 256 [0]
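
For what it's worth, the difference above is easy to verify over a whole run
rather than by eye. Here is a minimal sketch (mine, not from the original
report; it assumes the default blkparse text format seen in the traces) that
reads blkparse output on stdin and prints the average completed request size
in sectors:

	/* avg_complete.c - average completion ("C") size from blkparse output.
	 * Build: cc -o avg_complete avg_complete.c
	 * Use:   blkparse -i <trace> | ./avg_complete
	 */
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[512], dev[32], act[8], rwbs[16];
		unsigned int cpu, pid;
		unsigned long long seq, sector, nsect, total = 0, events = 0;
		double t;

		while (fgets(line, sizeof(line), stdin)) {
			/* "8,0 3 177955 50.050313976 0 C R 8724991 + 512 [0]" */
			if (sscanf(line, "%31s %u %llu %lf %u %7s %15s %llu + %llu",
				   dev, &cpu, &seq, &t, &pid, act, rwbs,
				   &sector, &nsect) != 9)
				continue;
			if (strcmp(act, "C"))	/* completions only */
				continue;
			total += nsect;
			events++;
		}
		if (events)
			printf("%llu completions, avg %.1f sectors\n",
			       events, (double)total / events);
		return 0;
	}

Run against the two traces it should report roughly 512 sectors per
completion before the patch and roughly 256 after.
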
>>>
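
Whether a given system sits in the regime described above (readahead chunks
small enough that two of them merge into one request and complete together)
can be checked by comparing two sysfs attributes of the queue. Another
minimal sketch, also mine and not from the thread; "sda" below is just a
placeholder device name:

	/* ra_check.c - compare readahead size with the request size cap.
	 * Build: cc -o ra_check ra_check.c
	 * Use:   ./ra_check sdb
	 */
	#include <stdio.h>

	static long read_attr(const char *dev, const char *attr)
	{
		char path[256];
		long val = -1;
		FILE *f;

		snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
		f = fopen(path, "r");
		if (!f)
			return -1;
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
		return val;
	}

	int main(int argc, char **argv)
	{
		const char *dev = argc > 1 ? argv[1] : "sda";
		long ra = read_attr(dev, "read_ahead_kb");
		long max = read_attr(dev, "max_sectors_kb");

		if (ra < 0 || max < 0) {
			fprintf(stderr, "cannot read queue attributes of %s\n", dev);
			return 1;
		}
		printf("%s: read_ahead_kb=%ld max_sectors_kb=%ld\n", dev, ra, max);
		if (2 * ra <= max)
			printf("two readahead chunks fit in one request -> "
			       "they can merge and complete together\n");
		else
			printf("readahead chunks cannot fully merge pairwise\n");
		return 0;
	}

With the values from the report (read_ahead_kb=128, max_sectors_kb=256), two
128K readahead chunks fit exactly into one 256K request, which is the
512-sector completion size visible in the "before patch" trace.
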
>> I haven't sent readahead-add-blk_run_backing_dev.patch in to Linus yet
>> and it's looking like 2.6.32 material, if ever.
>>
>> If it turns out to be wonderful, we could always ask the -stable
>> maintainers to put it in 2.6.x.y I guess.
>
> Agreed. The expected (and interesting) test on a properly configured
> HW RAID has not happened yet, hence the theory remains unsupported.
Hmm, do you see anything improper in Ronald's setup (see
http://sourceforge.net/mailarchive/forum.php?thread_name=a0272b440906030714g67eabc5k8f847fb1e538cc62%40mail.gmail.com&forum_name=scst-devel)?
It is HW-RAID based.
As I already wrote, we can ask Ronald to perform any needed tests.
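
For readers without the patch at hand: as I understand it, the patch adds a
call to blk_run_backing_dev() in the read path of mm/filemap.c right after
readahead has been issued, so that a just-queued readahead request is
dispatched immediately instead of sitting in the plugged queue until it gets
merged with the next one. The helper itself is tiny; in 2.6.30-era kernels
it looks roughly like this (a sketch from memory, not a verbatim quote of
include/linux/backing-dev.h):

	static inline void blk_run_backing_dev(struct backing_dev_info *bdi,
					       struct page *page)
	{
		/* Kick the unplug callback of the underlying queue, if any. */
		if (bdi && bdi->unplug_io_fn)
			bdi->unplug_io_fn(bdi, page);
	}
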
> Thanks,
> Fengguang