Message-Id: <1231536871.29452.1.camel@twins>
Date: Fri, 09 Jan 2009 22:34:31 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: david@...g.hm
Cc: Jan Kara <jack@...e.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Chris Mason <chris.mason@...cle.com>,
David Miller <davem@...emloft.net>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, npiggin@...e.de
Subject: Re: Increase dirty_ratio and dirty_background_ratio?
On Fri, 2009-01-09 at 14:31 -0800, david@...g.hm wrote:
> for that matter, it's now getting to where it makes sense to have wildly
> different storage on a machine
>
> 10's of GB of SSD for super-fast read-mostly
> 100's of GB of high-speed SCSI for fast writes
> TB's of SATA for high capacity
>
> does it make sense to consider tracking the dirty pages per-destination so
> that in addition to only having one process writing to the drive at a time
> you can also allow for different amounts of data to be queued per device?
>
> on a machine with 10's of GB of ram it becomes possible to hit the point
> where at one moment you could have an entire SSD's worth of data queued up
> to write, while at another moment the same total amount of data is queued
> for the SATA storage, where it amounts to a fraction of a percent of the
> size of the storage.
That's exactly what we do today. Dirty pages are tracked per backing
device and the writeback cache size is proportionally divided based on
recent write speed ratios of the devices.
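The mechanism Peter describes can be sketched roughly as follows: each backing device keeps a decayed count of recent writeback completions, and the global dirty limit is split among devices in proportion to those counts, so faster devices earn a larger share of the writeback cache. This is a simplified illustration, not the actual kernel code; the decay factor, device names, and page counts are made-up assumptions for the example.

```python
# Hypothetical sketch of per-backing-device dirty limits divided in
# proportion to recent write speed. NOT the real kernel implementation;
# the decay factor and all numbers below are invented for illustration.

DECAY = 0.5  # exponential decay applied to completion counts each period

class BackingDev:
    def __init__(self, name):
        self.name = name
        self.completions = 0.0  # decayed count of recent writeback completions

    def record_completions(self, pages):
        # Called as writeback finishes; fast devices accumulate more.
        self.completions += pages

def end_period(devs):
    # Age the counters so the split tracks *recent* write speed.
    for d in devs:
        d.completions *= DECAY

def dirty_limits(devs, global_limit):
    # Split the global dirty limit proportionally to recent completions.
    total = sum(d.completions for d in devs)
    if total == 0:
        # No history yet: fall back to an even split.
        share = global_limit / len(devs)
        return {d.name: share for d in devs}
    return {d.name: global_limit * d.completions / total for d in devs}

ssd = BackingDev("ssd")
scsi = BackingDev("scsi")
sata = BackingDev("sata")
devs = [ssd, scsi, sata]

# The fast SSD completes far more writeback than the SATA disk,
# so it is allowed to hold far more dirty pages.
ssd.record_completions(8000)
scsi.record_completions(1500)
sata.record_completions(500)

limits = dirty_limits(devs, global_limit=100_000)
```

With the numbers above, the SSD receives 80% of the dirty limit and the slow SATA disk only 5%, which is the behavior the question was asking for: a slow device can no longer monopolize the writeback cache.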