Message-Id: <200701171659.16290.ak@suse.de>
Date: Wed, 17 Jan 2007 16:59:15 +1100
From: Andi Kleen <ak@...e.de>
To: Paul Jackson <pj@....com>
Cc: clameter@....com, akpm@...l.org, menage@...gle.com,
linux-kernel@...r.kernel.org, nickpiggin@...oo.com.au,
linux-mm@...ck.org, dgc@....com
Subject: Re: [RFC 5/8] Make writeout during reclaim cpuset aware

On Wednesday 17 January 2007 15:36, Paul Jackson wrote:
> > With a per node dirty limit ...
>
> What would this mean?
>
> Let's say we have a simple machine with 4 nodes, cpusets disabled.

For one thing, NUMA memory policy can always be in use without cpusets.

> Let's say all tasks are allowed to use all nodes, and no set_mempolicy
> either.

Ok.

> If a task happens to fill up 80% of one node with dirty pages, but
> we have no dirty pages yet on other nodes, and we have a dirty ratio
> of 40%, then do we throttle that task's writes?

Yes, we should, actually. Every node should be able to supply memory
(barring extreme circumstances like mlock), and that much dirty memory
on a node will make that hard.
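
To put numbers on that example: with four equal nodes and a dirty ratio
of 40%, 80% dirty pages on one node is only 20% of total memory, so the
global limit never trips even though that node has almost nothing left
to hand out locally. The arithmetic as a toy user-space model (just an
illustration, not kernel code; node sizes are made up):

/* Four equal nodes, all dirty pages concentrated on node 0. */
#include <stdio.h>

#define NODES		4
#define NODE_PAGES	1000UL		/* pages per node, arbitrary */
#define DIRTY_RATIO	40		/* percent, as in the example */

int main(void)
{
	unsigned long dirty[NODES] = { 800, 0, 0, 0 };	/* 80% of node 0 */
	unsigned long total_dirty = 0;
	int n;

	for (n = 0; n < NODES; n++)
		total_dirty += dirty[n];

	/* Roughly what a global dirty ratio check sees. */
	printf("global: %lu%% dirty -> %s\n",
	       total_dirty * 100 / (NODES * NODE_PAGES),
	       total_dirty * 100 > NODES * NODE_PAGES * DIRTY_RATIO ?
	       "throttle" : "no throttle");

	/* A hypothetical per-node check. */
	for (n = 0; n < NODES; n++)
		printf("node %d: %lu%% dirty -> %s\n", n,
		       dirty[n] * 100 / NODE_PAGES,
		       dirty[n] * 100 > NODE_PAGES * DIRTY_RATIO ?
		       "throttle" : "no throttle");
	return 0;
}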

> I am surprised you are asking for this, Andi. I would have thought
> that on no-cpuset systems, the system-wide throttling served your
> needs fine.

No, actually people are fairly unhappy when one node gets filled with
file data and they then no longer get local memory from it. I get
regular complaints about that on Opteron. A per-node dirty limit
wouldn't be a full solution, but it would be a good step.

> If not, then I can only guess that is because NUMA
> mempolicy constraints on allowed nodes are causing the same dirty page
> problems as cpuset-constrained systems -- is that your concern?

That is another concern. I haven't checked recently, but it used to be
fairly easy to bring a system to its knees by oversubscribing a single
node with a strict memory policy. Fixing that would be good.
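
Something like this was typically enough to hit it from user space
(untested sketch using libnuma; the node number, path and sizes are
arbitrary, and it assumes page cache allocations follow the task's
policy, which is exactly the case we are talking about):

/* Bind all allocations of this task to node 0, then dirty a lot of
 * page cache from it. Build with -lnuma. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	struct bitmask *mask;
	char *buf;
	FILE *f;
	int i;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	/* Strict policy: memory may only come from node 0. */
	mask = numa_allocate_nodemask();
	numa_bitmask_setbit(mask, 0);
	numa_set_membind(mask);
	numa_free_nodemask(mask);

	/* Write out ~1GB; all of the dirty page cache lands on node 0. */
	buf = malloc(1 << 20);
	if (!buf)
		return 1;
	memset(buf, 'x', 1 << 20);
	f = fopen("/tmp/numa-fill", "w");
	if (!f)
		return 1;
	for (i = 0; i < 1024; i++)
		fwrite(buf, 1, 1 << 20, f);
	fclose(f);
	free(buf);
	return 0;
}
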
-Andi