Message-ID: <20100203062756.GB22890@localhost>
Date: Wed, 3 Feb 2010 14:27:56 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <jens.axboe@...cle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linux Memory Management List <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 00/11] [RFC] 512K readahead size with thrashing safe readahead

Vivek,

On Wed, Feb 03, 2010 at 06:38:03AM +0800, Vivek Goyal wrote:
> On Tue, Feb 02, 2010 at 11:28:35PM +0800, Wu Fengguang wrote:
> > Andrew,
> >
> > This is to lift the default readahead size to 512KB, which I believe yields
> > more I/O throughput without noticeably increasing I/O latency for today's HDDs.
> >
>
> Hi Fengguang,
>
> I was doing a quick test with the patches. I was using fio to run some
> sequential reader threads. I have access to one LUN from an HP
> EVA. In my case it looks like throughput has come down with the patches.

Thank you for the quick testing!

This patchset does 3 things:
1) 512K readahead size
2) new readahead algorithms
3) new readahead tracing/stats interfaces

(1) will impact performance, while (2) _might_ impact performance in
case of bugs.
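
To double check which readahead size is actually in effect on the test
LUN, something like this should do (assuming the device shows up as sda;
adjust to your device name):

cat /sys/block/sda/queue/read_ahead_kb    # should print 512 with this patchset
blockdev --getra /dev/sda                 # same value in 512-byte sectors (1024)
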
Would you kindly retest the patchset with readahead size manually set
to 128KB? That would help identify the root cause of the performance
drop:

DEV=sda
echo 128 > /sys/block/$DEV/queue/read_ahead_kb
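
If it helps, an (untested) loop along these lines could sweep a few
readahead sizes in one go; it assumes a sequential-read fio job file
named seqread.fio and sda as the test device, both hypothetical:

DEV=sda
for ra in 128 256 512; do
        echo $ra > /sys/block/$DEV/queue/read_ahead_kb
        echo 3 > /proc/sys/vm/drop_caches        # drop page cache between runs
        fio seqread.fio --output=fio-ra-$ra.log
done
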
The readahead stats provided by the patchset are very useful for
analyzing the problem:

mount -t debugfs none /debug
# for each benchmark:
echo > /debug/readahead/stats # reset counters
# do benchmark
cat /debug/readahead/stats # check counters
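
If it helps, a tiny wrapper along these lines (hypothetical ra-bench.sh,
untested) can reset and capture the counters around each run:

#!/bin/sh
# usage: ./ra-bench.sh <benchmark command...>
mountpoint -q /debug || mount -t debugfs none /debug
echo > /debug/readahead/stats                      # reset counters
"$@"                                               # run the benchmark
cat /debug/readahead/stats | tee ra-stats-$$.txt   # dump and keep a copy
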
Thanks,
Fengguang