Message-ID: <20090527025721.GA11153@localhost>
Date: Wed, 27 May 2009 10:57:21 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"kosaki.motohiro@...fujitsu.com" <kosaki.motohiro@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>
Subject: Re: [PATCH] readahead: add blk_run_backing_dev
On Wed, May 27, 2009 at 10:47:47AM +0800, Hisashi Hifumi wrote:
>
> At 11:36 09/05/27, Wu Fengguang wrote:
> >On Wed, May 27, 2009 at 10:21:53AM +0800, Hisashi Hifumi wrote:
> >>
> >> At 11:09 09/05/27, Wu Fengguang wrote:
> >> >On Wed, May 27, 2009 at 08:25:04AM +0800, Hisashi Hifumi wrote:
> >> >>
> >> >> At 08:42 09/05/27, Andrew Morton wrote:
> >> >> >On Fri, 22 May 2009 10:33:23 +0800
> >> >> >Wu Fengguang <fengguang.wu@...el.com> wrote:
> >> >> >
> >> >> >> > I tested the above patch, and I got the same performance number.
> >> >> >> > I wonder why the if (PageUptodate(page)) check is there...
> >> >> >>
> >> >> >> Thanks! This is an interesting micro-timing behavior that
> >> >> >> demands some research work. The above check is there to confirm
> >> >> >> whether it is the PageUptodate() case that makes the difference.
> >> >> >> So why does that case happen frequently enough to impact
> >> >> >> performance? Will it also happen on NFS?
> >> >> >>
> >> >> >> The problem is that the readahead IO pipeline is not running
> >> >> >> smoothly, which is undesirable and not yet well understood.
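
(For reference: the change being debated is essentially a one-hunk patch
to do_generic_file_read() in mm/filemap.c. Below is a minimal sketch
using the 2.6.30-era interfaces; the exact hook point is assumed from
the thread context rather than taken from the literal submission:)

	if (PageReadahead(page)) {
		page_cache_async_readahead(mapping, ra, filp, page,
					   index, last_index - index);
		/*
		 * If the page is already uptodate, the reader will not
		 * lock or wait on it, so nothing unplugs the device
		 * queue and the readahead IO just queued may sit idle.
		 * Kick the queue explicitly.
		 */
		if (PageUptodate(page))
			blk_run_backing_dev(mapping->backing_dev_info, NULL);
	}

blk_run_backing_dev() simply invokes the backing device's unplug_io_fn,
i.e. it starts whatever requests are already sitting in the queue.
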
> >> >> >
> >> >> >The patch causes a remarkably large performance increase. A 9%
> >> >> >reduction in time for a linear read? I'd be surprised if the workload
> >> >>
> >> >> Hi Andrew.
> >> >> Yes, I tested this with dd.
> >> >>
> >> >> >even consumed 9% of a CPU, so where on earth has the kernel gone to?
> >> >> >
> >> >> >Have you been able to reproduce this in your testing?
> >> >>
> >> >> Yes, this test is reproducible in my environment.
> >> >
> >> >Hisashi, does your environment have any special configuration?
> >>
> >> Hi.
> >> My testing environment is as follows:
> >> Hardware: HP DL580
> >> CPU: Xeon 3.2GHz x4, HT enabled
> >> Memory: 8GB
> >> Storage: Dothill SANNet2 FC (7-disk RAID-0 array)
> >
> >This is a big hardware RAID. What's the readahead size?
> >
> >The numbers look too small for a 7-disk RAID:
> >
> > > #dd if=testdir/testfile of=/dev/null bs=16384
> > >
> > > -2.6.30-rc6
> > > 1048576+0 records in
> > > 1048576+0 records out
> > > 17179869184 bytes (17 GB) copied, 224.182 seconds, 76.6 MB/s
> > >
> > > -2.6.30-rc6-patched
> > > 1048576+0 records in
> > > 1048576+0 records out
> > > 17179869184 bytes (17 GB) copied, 206.465 seconds, 83.2 MB/s
> >
> >I'd suggest you configure the array properly before coming back to
> >measure the impact of this patch.
>
>
> I created a 16GB file on this disk array, mounted it at testdir, and ran dd against that directory.
I mean, you should get >300MB/s throughput with 7 disks, and you
should seek ways to achieve that before testing out this patch :-)
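
For example, the per-device readahead size can be inspected and raised
with blockdev; /dev/sdX below stands in for the array's block device,
and --setra counts 512-byte sectors:

	#blockdev --getra /dev/sdX          # current readahead, in 512-byte sectors
	#blockdev --setra 2048 /dev/sdX     # raise it to 1MB

The same knob is exposed as /sys/block/sdX/queue/read_ahead_kb.
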
Thanks,
Fengguang