Message-ID: <alpine.DEB.2.00.0903161512120.26565@chino.kir.corp.google.com>
Date: Mon, 16 Mar 2009 15:17:10 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Christoph Lameter <cl@...ux-foundation.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...helsinki.fi>,
Matt Mackall <mpm@...enic.com>,
Paul Menage <menage@...gle.com>,
Randy Dunlap <randy.dunlap@...cle.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-kernel@...r.kernel.org
Subject: Re: [patch -mm v2] cpusets: add memory_slab_hardwall flag
On Mon, 16 Mar 2009, Christoph Lameter wrote:
> If the nodes are exclusive to a load then the cpus attached to those nodes
> are also exclusive?
No, they are not exclusive.

Here is my example (for the third time), where mems are grouped by the cpus for which they have affinity:

/dev/cpuset
  --> cpuset_A (cpus 0-1, mems 0-3)
  --> cpuset_B (cpus 2-3, mems 4-7)
  --> cpuset_C (cpus 4-5, mems 8-11)
  --> ...

Within that, we isolate mems for specific jobs:

/dev/cpuset
  --> cpuset_A (cpus 0-1, mems 0-3)
        --> job_1 (mem 0)
        --> job_2 (mems 1-2)
        --> job_3 (mem 3)
        --> ...
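
As a concrete illustration, here is a minimal sketch of how the cpuset_A/job_1
portion of that hierarchy could be created from userspace.  It assumes the
cpuset filesystem is already mounted at /dev/cpuset and uses the legacy
"cpus"/"mems" file names from kernels of this era; write_file() is just a
local helper, not a kernel interface:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Helper: write a single value to a cpuset control file. */
static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	/* cpuset_A: cpus 0-1, mems 0-3 */
	mkdir("/dev/cpuset/cpuset_A", 0755);
	write_file("/dev/cpuset/cpuset_A/cpus", "0-1");
	write_file("/dev/cpuset/cpuset_A/mems", "0-3");

	/* job_1 shares cpuset_A's cpus but is restricted to mem 0 */
	mkdir("/dev/cpuset/cpuset_A/job_1", 0755);
	write_file("/dev/cpuset/cpuset_A/job_1/cpus", "0-1");
	write_file("/dev/cpuset/cpuset_A/job_1/mems", "0");

	return 0;
}
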
> If so then there is no problem since the percpu queues
> are only in use for a specific load with a consistent restriction on
> cpusets and a consistent memory policy. Thus there is no need for
> memory_slab_hardwall.
>
All of those jobs may have different mempolicy requirements.
Specifically, some cpusets may require slab hardwall behavior for true
memory isolation, while others do not and prefer the NUMA optimizations
instead.
In other words, there is _no_ way with slub to isolate slab allocations
for job_1 from job_2, job_3, etc. That is what memory_slab_hardwall
intends to address.
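
With the flag in place, a job that needs isolation would opt in on its own
cpuset while its siblings keep the existing fastpath behavior.  A minimal
sketch, assuming the per-cpuset file is named memory_slab_hardwall as in the
patch and behaves like the other per-cpuset boolean flags (write "1" to
enable):

#include <stdio.h>

int main(void)
{
	/* Opt job_1 in to hardwalled slab allocations while leaving
	 * job_2 and job_3 untouched.  The 0/1 flag semantics here are
	 * an assumption based on the other cpuset flag files.
	 */
	FILE *f = fopen("/dev/cpuset/cpuset_A/job_1/memory_slab_hardwall", "w");

	if (!f) {
		perror("memory_slab_hardwall");
		return 1;
	}
	fputs("1", f);
	fclose(f);
	return 0;
}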