Message-ID: <4B1D47C5.7030804@pardus.org.tr>
Date: Mon, 07 Dec 2009 20:21:57 +0200
From: Ozan Çağlayan <ozan@...dus.org.tr>
To: "Miller, Mike (OS Dev)" <Mike.Miller@...com>
CC: linux-kernel <linux-kernel@...r.kernel.org>,
"scameron@...rdog.cce.hp.com" <scameron@...rdog.cce.hp.com>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>
Subject: Re: CCISS performance drop in buffered disk reads in newer kernels
Miller, Mike (OS Dev) wrote:
> Ozan,
> I'm aware of the performance drop. Please see: http://bugzilla.kernel.org/show_bug.cgi?id=13127. I removed the huge read-ahead value of 1024 that we used because users were complaining about small writes being starved. That was back around the 2.6.25 timeframe. Since that timeframe there have been no changes in the main i/o path. I'll get back on this as time allows.
>
> Meanwhile, you can tweak some of the block layer tunables as such.
>
> echo 64 > /sys/block/cciss\!c0d1/queue/read_ahead_kb
> OR
> blockdev --setra 128 /dev/cciss/c0d1
>
> These are just example values. There are also max_hw_sectors_kb and max_sectors_kb that can be adjusted.
>
Hi,
Actually the "#define READ_AHEAD 1024" was removed on March 2008 which
was included in the 2.6.25.y tree so 2.6.25.20 has 128kB read_ahead
value too.
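(For reference, the current value can be checked either through sysfs
or with blockdev; the device name is the same example as above, and
blockdev --getra reports 512-byte sectors, so 128 kB shows up as 256:)

cat /sys/block/cciss\!c0d1/queue/read_ahead_kb
blockdev --getra /dev/cciss/c0d1
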
*But* setting read_ahead to 2048 increases the average buffered disk
read speed from ~60 MB/s to ~190 MB/s, and the kernel compile time
drops to 2 minutes as a result. So maybe the regression/change is
somewhere else?
The server is just a compile farm: it's triggered by hand, compiles
the distribution's packages, and stays idle until the next compilation
queue. Is it safe/OK to use that 2048 kB read_ahead value for such a
workload?
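For reference, applying that by hand would look like the following
(device name as in your example; blockdev --setra counts 512-byte
sectors, so 2048 kB corresponds to 4096):

echo 2048 > /sys/block/cciss\!c0d1/queue/read_ahead_kb
OR
blockdev --setra 4096 /dev/cciss/c0d1
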
(max_hw_sectors_kb is 512 on my 2.6.25.20 setup and 1024 on 2.6.30.9,
but it seems to be read-only.)
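max_sectors_kb, on the other hand, does appear to be writable; the 256
below is just an illustrative value and has to stay at or below
max_hw_sectors_kb:

echo 256 > /sys/block/cciss\!c0d1/queue/max_sectors_kb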
Thanks!