Date:	Mon, 9 Mar 2009 13:26:15 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Matt Mackall <mpm@...enic.com>,
	Paul Menage <menage@...gle.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch -mm] cpusets: add memory_slab_hardwall flag

On Mon, 9 Mar 2009, KOSAKI Motohiro wrote:

> My question was: why does anyone need this isolation?  Your patch
> inserts a new branch into the hotpath, which makes the hotpath
> slightly slower even for users who never use this feature.
> 

On large NUMA machines, it is currently possible for a very large 
percentage (if not all) of your slab allocations to come from memory 
that is distant from your application's set of allowable cpus.  
Allocations that are long-lived would benefit from having affinity to 
those processors.  Again, this is the typical use case for cpusets: 
binding memory nodes to the group of cpus that has affinity to them, 
for the tasks attached to the cpuset.
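
To make the hotpath cost concrete, here is a minimal sketch of the 
kind of check being discussed; the helper name 
cpuset_slab_hardwall_enabled() and its placement are illustrative 
assumptions for discussion, not what the patch actually adds:

#include <linux/cpuset.h>
#include <linux/nodemask.h>
#include <linux/sched.h>

/*
 * Hypothetical per-allocation check: is the node backing the current
 * cpu slab within the task's cpuset-allowed nodes?  When the flag is
 * off, this costs exactly one extra (well-predicted) branch.
 */
static inline bool slab_hardwall_node_allowed(int node)
{
	if (!cpuset_slab_hardwall_enabled(current))	/* assumed helper */
		return true;
	return node_isset(node, current->mems_allowed);
}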

> Typically, slab caches don't need strict node binding, because
> inode/dentry objects are touched from multiple cpus.
> 

This change would obviously require inode and dentry objects to 
originate from a node in the cpuset's mems_allowed.  That incurs a 
performance penalty whenever the cpu slab is not from such a node, but 
that cost is accepted by the user who has enabled the option.
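
The penalty would come from punting to the allocator's slow path 
whenever the fast path's cpu slab sits on a disallowed node.  
Schematically (a fragment modeled loosely on SLUB's fast/slow-path 
split of this era; slab_hardwall_node_allowed() is the assumed helper 
sketched above):

	/* c is the per-cpu slab state (struct kmem_cache_cpu). */
	if (unlikely(!c->freelist ||
		     !slab_hardwall_node_allowed(c->node)))
		/* Slow path: find or allocate a slab on an allowed node. */
		object = __slab_alloc(s, gfpflags, node, addr, c);
	else {
		/* Fast path: pop the first object off the cpu freelist. */
		object = c->freelist;
		c->freelist = object[c->offset];
	}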

> In addition, on large NUMA systems the slab cache is relatively
> small compared to the page cache, so this feature's improvement
> seems relatively small too.
> 

That's irrelevant; large NUMA machines may still require memory 
affinity to a specific group of cpus, and the size of the global slab 
cache doesn't matter if that is the goal.  When the option is enabled 
for cpusets that require such memory locality, we happily trade 
partial list fragmentation and an increased number of slab allocations 
for long-lived local allocations.
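
For completeness, this would be a per-cpuset opt-in.  A hedged 
userspace sketch, assuming the flag appears as a memory_slab_hardwall 
file in the cpuset directory (the mount point and cpuset name below 
are examples, not part of the patch):

#include <stdio.h>

int main(void)
{
	/* Assumed interface: a boolean file per cpuset, per the patch
	 * subject; the path below is an example, not a guarantee. */
	FILE *f = fopen("/dev/cpuset/myapp/memory_slab_hardwall", "w");

	if (!f) {
		perror("memory_slab_hardwall");
		return 1;
	}
	fputs("1\n", f);
	fclose(f);
	return 0;
}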
