Message-Id: <E1IdQJn-0002Cv-00@dorka.pomaz.szeredi.hu>
Date: Thu, 04 Oct 2007 15:00:43 +0200
From: Miklos Szeredi <miklos@...redi.hu>
To: a.p.zijlstra@...llo.nl
CC: miklos@...redi.hu, akpm@...ux-foundation.org, wfg@...l.ustc.edu.cn,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] remove throttle_vm_writeout()
> > 1) File backed pages -> file
> >
> > dirty + writeback count remains constant
> >
> > 2) Anonymous pages -> swap
> >
> > writeback count increases, dirty balancing will hold back file
> > writeback in favor of swap
> >
> > So the real question is: does case 2 need rate limiting, or is it OK
> > to let the device queue fill with swap pages as fast as possible?
>
> Because balance_dirty_pages() maintains:
>
>   nr_dirty + nr_unstable + nr_writeback <
>      total_dirty + nr_cpus * ratelimit_pages
>
> throttle_vm_writeout() _should_ not deadlock on that, unless you're
> caught in the error term: nr_cpus * ratelimit_pages.
And it does get caught on that on small-memory machines. This
deadlock is easily reproducible on a 32MB UML instance. I haven't yet
tested with the per-bdi patches, but I don't think they make a
difference in this case.
> Which can only happen when it is larger than 10% of dirty_thresh.
>
> Which is even more unlikely since it doesn't account nr_dirty (as I
> think it should).
I think nr_dirty is totally irrelevant: we don't care about case 1),
and in case 2) nr_dirty doesn't play any role.
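To spell out the accounting in the two cases (roughly; ignoring
details like NFS unstable pages):

  case 1, file page written out:  NR_FILE_DIRTY--, NR_WRITEBACK++  (sum constant)
  case 2, anon page to swap:      NR_WRITEBACK++  (never was in NR_FILE_DIRTY)

In case 2 only the writeback count moves, so nr_dirty can't affect
the outcome.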
> As for 2), yes I think having a limit on the total number of pages in
> flight is a good thing.
Why?
> But that said, there might be better ways to do that.
Sure, if we do need to globally limit the number of under-writeback
pages, then I think we need to do it independently of the dirty
accounting.
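Something along these lines, purely as an untested sketch (the
function and the max_writeback_pages knob are made-up names):

	/* Hypothetical: block until the global writeback count drops
	 * below a limit that has nothing to do with dirty_thresh. */
	static unsigned long max_writeback_pages;	/* made-up tunable */

	void throttle_writeback_pages(void)
	{
		while (global_page_state(NR_WRITEBACK) > max_writeback_pages)
			congestion_wait(WRITE, HZ/10);
	}

Callers in the writeout paths would invoke it before
set_page_writeback(), so the limit acts on NR_WRITEBACK alone and
stays decoupled from the dirty accounting.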
Miklos