Message-ID: <20070804063217.GA25069@elte.hu>
Date: Sat, 4 Aug 2007 08:32:17 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, miklos@...redi.hu,
akpm@...ux-foundation.org, neilb@...e.de, dgc@....com,
tomoki.sekiyama.qu@...achi.com, nikita@...sterfs.com,
trond.myklebust@....uio.no, yingchao.zhou@...il.com,
richard@....demon.co.uk
Subject: Re: [PATCH 00/23] per device dirty throttling -v8
* Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> On Fri, 3 Aug 2007, Peter Zijlstra wrote:
> >
> > These patches aim to improve balance_dirty_pages() and directly address three
> > issues:
> > 1) inter device starvation
> > 2) stacked device deadlocks
> > 3) inter process starvation
>
> Ok, the patches certainly look pretty enough, and you fixed the only
> thing I complained about last time (naming), so as far as I'm
> concerned it's now just a matter of whether it *works* or not. I guess
> being in -mm will help somewhat, but it would be good to have people
> with several disks etc actively test this out.
There are positive reports in the never-ending "my system crawls like an
XT when copying large files" bugzilla entry:
http://bugzilla.kernel.org/show_bug.cgi?id=7372
" vfs_cache_pressure=1

    TCQ   nr_requests
      8           128   not that bad
      1           128   snappiest configuration, almost no pauses
                        (or unnoticeable ones) "
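For reference, the tunables that tester is varying can all be adjusted at
runtime; a minimal sketch (sdX is a placeholder for the device under test,
and the values are the ones from the report, not a recommendation):

```shell
# vfs_cache_pressure: how aggressively the VM reclaims dentry/inode
# caches (the report above used 1; the kernel default is 100).
echo 1 > /proc/sys/vm/vfs_cache_pressure

# TCQ depth for a SCSI disk; the report compares depths of 8 and 1.
echo 1 > /sys/block/sdX/device/queue_depth

# Block-layer request queue size; 128 is the usual default.
echo 128 > /sys/block/sdX/queue/nr_requests
```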
" 1) vfs_cache_pressure at 100, 2.6.21.5 + per-bdi throttling patch.
The result is good, not as snappy as I'd want during a large copy, but
still usable. No process seems stuck for ages, but there are some
short (second or subsecond) moments where everything is stuck
(e.g. if you run a top d 0.5, the screen is not updated on a regular
basis).

2) vfs_cache_pressure at 1, 2.6.21.5 + per-bdi throttling patch. The
result is at the 2.6.17 level. It is the best combination since 2.6.17. "
" 1) I've applied the patches posted by Peter Zijlstra in comment #76
to the 2.6.21-mm2 kernel to check if they remove the problem. My
impression is that the problem is still there with those patches,
although less visible than with the clean 2.6.21 kernel. "
So the whole problem area seems to be a "perfect storm" created by a
combination of TCQ, IO scheduling and VM dirty-handling weaknesses. Per
device dirty throttling is a good step forward, and it makes a very
visible positive difference.
Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/