Message-ID: <17640.65491.458305.525471@cse.unsw.edu.au>
Date: Mon, 21 Aug 2006 10:35:31 +1000
From: Neil Brown <neilb@...e.de>
To: Andi Kleen <ak@...e.de>
Cc: Jens Axboe <axboe@...e.de>, David Chinner <dgc@....com>,
linux-kernel@...r.kernel.org, akpm@...l.org
Subject: Re: RFC - how to balance Dirty+Writeback in the face of slow writeback.
On August 18, ak@...e.de wrote:
> Jens Axboe <axboe@...e.de> writes:
>
> > On Thu, Aug 17 2006, Andrew Morton wrote:
> > > It seems that the many-writers-to-different-disks workloads don't happen
> > > very often. We know this because
> > >
> > > a) The 2.4 performance is utterly awful, and I never saw anybody
> > > complain and
> >
> > Talk to some of the people that used DVD-RAM devices (or other
> > excruciatingly slow writers) on their system, and they would disagree
> > violently :-)
>
> I hit this recently while doing backups to a slow external USB disk.
> The system was quite unusable (some commands blocked for over a minute)
Ouch.
I suspect we are going to see more of this, as a USB drive for backups
is probably a very attractive option for many people.
The 'obvious' solution would be to count dirty pages per backing_dev
and rate limit writes based on this.
But counting pages can be expensive. I wonder if there might be some
way to throttle the required writes without doing too much counting.
Could we watch when the backing_dev is congested and use that?
e.g.
When Dirty+Writeback is between max_dirty/2 and max_dirty,
balance_dirty_pages waits until mapping->backing_dev_info
is not congested.
That might slow things down, but it is hard to know if it would slow
things down the right amount...
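
To make the rule concrete, here is a rough userspace model of what I
have in mind (purely illustrative -- the names, the limits and the
congestion flag are made up stand-ins, not the real kernel structures
or interfaces):

/*
 * Hypothetical model of the rule above: below max_dirty/2 nothing is
 * throttled; between max_dirty/2 and max_dirty a writer only proceeds
 * once its own backing device is no longer congested; at max_dirty
 * everyone is throttled as today.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_DIRTY 1000          /* assumed global dirty+writeback limit */

struct bdi_model {
	bool congested;         /* stands in for backing_dev congestion */
};

static bool writer_must_wait(long dirty, long writeback,
			     const struct bdi_model *bdi)
{
	long total = dirty + writeback;

	if (total >= MAX_DIRTY)
		return true;                    /* hard limit, as now    */
	if (total >= MAX_DIRTY / 2)
		return bdi->congested;          /* proposed soft limit   */
	return false;
}

int main(void)
{
	struct bdi_model slow_usb  = { .congested = true };
	struct bdi_model fast_disk = { .congested = false };

	printf("slow usb at 600 pages: wait=%d\n",
	       writer_must_wait(400, 200, &slow_usb));
	printf("fast disk at 600 pages: wait=%d\n",
	       writer_must_wait(400, 200, &fast_disk));
	return 0;
}

The attraction is that a process writing to the slow device gets held
up while a process writing to an uncongested device sails through,
without any per-device page counting.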
Given that large machines are likely to have lots of different
backing_devs, maybe counting all the dirty pages per backing_dev
wouldn't be too expensive?
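
If we did count per backing_dev, the crudest possible scheme would be
something like this (again just a toy userspace sketch; the counter,
the equal-share limit and the numbers are all made up for
illustration):

/*
 * Hypothetical per-backing_dev dirty accounting: each device keeps its
 * own dirty count, and a writer is throttled against its own device's
 * share of the global budget rather than only against the global limit.
 */
#include <stdio.h>

#define MAX_DIRTY 1000          /* assumed global limit                 */
#define NR_BDIS   2

struct bdi_counter {
	const char *name;
	long nr_dirty;          /* pages dirtied against this device    */
};

/* Crude per-device limit: an equal share of the global budget. */
static long per_bdi_limit(void)
{
	return MAX_DIRTY / NR_BDIS;
}

static int bdi_over_limit(const struct bdi_counter *bdi)
{
	return bdi->nr_dirty > per_bdi_limit();
}

int main(void)
{
	struct bdi_counter usb  = { "usb",  700 };
	struct bdi_counter sata = { "sata", 100 };

	printf("%s over its limit: %d\n", usb.name,  bdi_over_limit(&usb));
	printf("%s over its limit: %d\n", sata.name, bdi_over_limit(&sata));
	return 0;
}

An equal split is obviously too naive for real use, but it shows the
shape of the accounting that would be needed.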
NeilBrown