Message-ID: <20090901235645.GD1321@shareable.org>
Date: Wed, 2 Sep 2009 00:56:45 +0100
From: Jamie Lokier <jamie@...reable.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Jens Axboe <jens.axboe@...cle.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
chris.mason@...cle.com, david@...morbit.com, tytso@....edu,
akpm@...ux-foundation.org, jack@...e.cz
Subject: Re: [PATCH 8/8] vm: Add an tuning knob for vm.max_writeback_pages
Jamie Lokier wrote:
> Christoph Hellwig wrote:
> > On Tue, Sep 01, 2009 at 08:38:55PM +0200, Peter Zijlstra wrote:
> > > Do we really need a tunable for this?
> >
> > It will make increasing it in the field a lot easier. And having dealt
> > with really large systems, I have the fear that there are I/O topologies
> > out there for which every "reasonable" value is too low.
> >
> > > I guess we need a limit to avoid it writing out everything, but can't we
> > > have something automagic?
> >
> > Some automatic adjustment would be nice. But finding the right auto
> > tuning will be an interesting exercise.
>
> I have embedded systems with 32MB RAM and no MMU, where I deliberately
> make the equivalent of max_writeback_pages *smaller* to limit the
> number of dirty pages causing fragmentation and preventing allocation
> of high-order pages... Write performance is less important than being
> able to allocate contiguous memory for reads.
>
> They are still using 2.4 kernels, but the principle still applies,
> maybe even more so on 2.6, which is more prone to fragmentation on
> small no-MMU devices.
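If the knob from this series does get merged as a vm.* sysctl, bumping it
in the field as Christoph describes above should just be the usual /proc/sys
dance; a sketch, assuming the path follows the proposed name, with a
made-up value:

    # runtime change (path assumed from the proposed sysctl name; value illustrative)
    echo 4096 > /proc/sys/vm/max_writeback_pages

    # or persistently across reboots, in /etc/sysctl.conf:
    #   vm.max_writeback_pages = 4096
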
Sorry, I must get more sleep. I confused max_writeback_pages with the
limit on dirty pages in the system, which is completely different.
So please ignore my previous mail.
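For clarity, the system-wide dirty limit is the one controlled by the
existing vm.dirty_* sysctls, which is a separate mechanism from the per-call
writeback batch size this patch is about. Purely as an illustration for a
small-memory box (byte values made up, not a recommendation):

    # cap total dirty memory rather than the writeback batch size (2.6.29+ knobs)
    echo 4194304 > /proc/sys/vm/dirty_background_bytes  # background writeback starts around 4MB dirty
    echo 8388608 > /proc/sys/vm/dirty_bytes             # writers get throttled once ~8MB is dirty
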
-- Jamie