Message-ID: <alpine.DEB.1.10.0903101651480.20363@qirst.com>
Date:	Tue, 10 Mar 2009 16:59:05 -0400 (EDT)
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	David Rientjes <rientjes@...gle.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Matt Mackall <mpm@...enic.com>,
	Paul Menage <menage@...gle.com>,
	Randy Dunlap <randy.dunlap@...cle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch -mm v2] cpusets: add memory_slab_hardwall flag

On Mon, 9 Mar 2009, David Rientjes wrote:

> This interface is lockless and very quick in the slab allocator fastpath
> when not enabled because a new task flag, PF_SLAB_HARDWALL, is added to
> determine whether or not its cpuset has mandated objects be allocated on
> the set of allowed nodes.  If the option is not set for a task's cpuset
> (or only a single cpuset exists), this reduces to only checking for a
> specific bit in current->flags.

We already have PF_SPREAD_PAGE, PF_SPREAD_SLAB and PF_MEMPOLICY.
PF_MEMPOLICY in slab can serve the same role as PF_SLAB_HARDWALL; it
attempts what you describe. On the one hand you are duplicating
functionality that is already there, and on the other you want to put code
into hot paths that we have intentionally kept it out of for ages.
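
For illustration only, the kind of fastpath test being discussed would look
roughly like the sketch below. This is not the actual patch: the helper
slab_hardwall_check() and the cpuset predicate it calls are made-up names,
and PF_SLAB_HARDWALL is the flag proposed by the patch, not something in
mainline. The point is just that with the flag clear the whole check
reduces to a single test on current->flags.

	/*
	 * Sketch only, not the actual patch.  slab_hardwall_check() and
	 * hardwall_cpuset_node_allowed() are illustrative names.
	 */
	static inline int slab_hardwall_check(gfp_t gfpflags, int node)
	{
		/* Common case: flag clear, one bit test and we are done. */
		if (likely(!(current->flags & PF_SLAB_HARDWALL)))
			return 1;

		/* Slowpath: ask the cpuset code whether this node is allowed. */
		return hardwall_cpuset_node_allowed(node, gfpflags);
	}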

The description is not accurate. This feature is only useful if someone
comes up with a crummy cpuset configuration in which a processor is a
member of multiple cpusets, so that the per-cpu queues of multiple
subsystems receive objects depending on which cpuset is currently active.

If a processor is only used from one cpuset (natural use) then these
problems do not occur.

There is still no use case for this on a NUMA platform. The NUMA jobs I
know about where people care about latencies use cpusets that do not share
processors, so this problem does not occur there either.

