Message-Id: <20081104190908.295a3d53.akpm@linux-foundation.org>
Date: Tue, 4 Nov 2008 19:09:08 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Christoph Lameter <cl@...ux-foundation.org>, npiggin@...e.de,
dfults@....com, linux-kernel@...r.kernel.org, rientjes@...gle.com,
containers@...ts.osdl.org, menage@...gle.com
Subject: Re: [patch 0/7] cpuset writeback throttling
On Wed, 5 Nov 2008 10:31:23 +0900 KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> >
> > Yes? Someone help me out here. I don't yet have my head around the
> > overlaps and incompatibilities here. Perhaps the containers guys will
> > wake up and put their thinking caps on?
> >
> >
> >
> > What happens if cpuset A uses nodes 0,1,2,3,4,5,6,7,8,9 and cpuset B
> > uses nodes 0,1? Can activity in cpuset A cause ooms in cpuset B?
> >
> To help with this, per-node dirty-ratio throttling is necessary.
>
> Shouldn't we just have a new parameter such as /proc/sys/vm/dirty_ratio_per_node?
I guess that would work. But it is a general solution and will be less
efficient for the particular setups which are triggering this problem.
> /proc/sys/vm/dirty_ratio works for throttling the whole system dirty pages.
> /proc/sys/vm/dirty_ratio_per_node works for throttling dirty pages in a node.
>
> Implementation will not be difficult, and it works well enough to avoid OOM.
Yup. Just track per-node dirtiness and walk the LRU when it is over
threshold.
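
For illustration, a minimal user-space sketch of that scheme (not kernel
code; every name below is invented for the example): keep a per-node dirty
counter, derive a per-node limit from a ratio like the proposed
dirty_ratio_per_node knob, and throttle (i.e. start writeback / walk that
node's LRU) once the counter crosses the limit.

/*
 * User-space sketch only.  struct node_stat, node_dirty_inc() and
 * node_dirty_exceeded() are made-up names for illustration; in the
 * kernel the "exceeded" case is where writeback would be kicked off
 * and the node's LRU walked instead of dirtying more pages.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_NODES 4

struct node_stat {
	unsigned long nr_pages;		/* total pages on this node */
	unsigned long nr_dirty;		/* dirty pages on this node */
};

static struct node_stat nodes[MAX_NODES];
static unsigned int dirty_ratio_per_node = 10;	/* percent, sysctl-like knob */

/* Account one page becoming dirty on a given node. */
static void node_dirty_inc(int nid)
{
	nodes[nid].nr_dirty++;
}

/* True when this node has crossed its per-node dirty threshold. */
static bool node_dirty_exceeded(int nid)
{
	unsigned long limit = nodes[nid].nr_pages * dirty_ratio_per_node / 100;

	return nodes[nid].nr_dirty > limit;
}

int main(void)
{
	int nid, i;

	/* Pretend node 0 is small (the "cpuset B on nodes 0,1" case). */
	nodes[0].nr_pages = 1000;
	nodes[1].nr_pages = 100000;

	/* Dirty 200 pages on node 0: 20% > 10%, so it must throttle. */
	for (i = 0; i < 200; i++)
		node_dirty_inc(0);

	for (nid = 0; nid < 2; nid++)
		printf("node %d: dirty=%lu exceeded=%d\n", nid,
		       nodes[nid].nr_dirty, node_dirty_exceeded(nid));

	return 0;
}

The point of the small-node case is exactly the one raised earlier in the
thread: a global dirty_ratio lets activity elsewhere fill a small node with
dirty pages, while a per-node check throttles the dirtier before that node
is driven to OOM.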