Date:	Thu, 12 Mar 2009 12:32:44 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Matt Mackall <mpm@...enic.com>,
	Paul Menage <menage@...gle.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch -mm v2] cpusets: add memory_slab_hardwall flag

On Thu, 12 Mar 2009, Christoph Lameter wrote:

> > Yes, jobs are running in the leaf with my above example.  And it's quite
> > possible that the higher level has segmented the machine for NUMA locality
> > and then further divided that memory for individual jobs.  When a job
> > completes or is killed, the slab cache that it has allocated can be freed
> > in its entirety with no partial slab fragmentation (i.e. there are no
> > objects allocated from its slabs for disjoint, still running jobs).  That
> > cpuset may then serve another job.
> 
> Looks like we are talking about a differing project here. Partial slabs
> are shared between all processors with SLUB. Slab shares the partial slabs
> for the processors on the same node.
> 

If `memory_slab_hardwall' is set for a cpuset, its tasks will only pull a 
slab off the partial list that was allocated on an allowed node.  So in my 
earlier example which segments the machine via cpusets for NUMA locality 
and then divides those cpusets further for exclusive memory to provide to 
individual jobs, slab allocations will be constrained within the cpuset of 
the task that allocated them.  When a job dies, all slab allocations are 
freed so that no objects remain on the memory allowed to that job and, 
thus, no partial slabs remain (i.e. there were no object allocations on 
the job's slabs from disjoint cpusets because of the exclusivity).
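The selection rule described above can be sketched as a small userspace model: a task scanning the partial list skips any slab whose backing node is outside its cpuset's allowed set when the hardwall flag is on. This is an illustrative simulation only, not the actual SLUB code; the names (`struct slab`, `get_partial`, `mems_allowed` as a plain bitmask) are simplifications for the example.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model: a partial slab remembers the NUMA node its
 * pages were allocated on; the cpuset's allowed nodes are a bitmask. */
struct slab {
	int node;           /* NUMA node the slab's pages came from */
	struct slab *next;  /* next slab on the partial list */
};

/* With the hardwall flag set, a task may only pull a partial slab
 * whose node is in its cpuset's allowed mask; without it, any
 * partial slab on the list is fair game. */
static struct slab *get_partial(struct slab *partial_list,
				unsigned long mems_allowed,
				bool slab_hardwall)
{
	struct slab *s;

	for (s = partial_list; s; s = s->next) {
		if (!slab_hardwall || (mems_allowed & (1UL << s->node)))
			return s;
	}
	return NULL;  /* caller falls back to allocating a new slab */
}
```

The `NULL` return is where the isolation cost shows up: a hardwalled task that finds no partial slab on an allowed node must allocate a fresh slab there, which is what keeps a dying job's slabs free of objects from disjoint cpusets.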