Message-ID: <Pine.LNX.4.64.0708021827280.13538@schroedinger.engr.sgi.com>
Date:	Thu, 2 Aug 2007 18:34:04 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Nick Piggin <npiggin@...e.de>
cc:	Andi Kleen <ak@...e.de>, Ingo Molnar <mingo@...e.hu>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [rfc] balance-on-fork NUMA placement

On Fri, 3 Aug 2007, Nick Piggin wrote:

> Yeah it only gets set if the parent is initially using a default policy
> at this stage (and then is restored afterwards of course).

Uggh. Looks like more hackery ahead. I think this cannot be done in the
desired clean way until we rework the memory policy subsystem to make
policies task-context independent, so that you can do

alloc_pages(...., memory_policy)
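
A rough sketch of the kind of interface I mean; alloc_pages_policy() and
the helper around it are made-up names, not something that exists today:

/*
 * Hypothetical sketch only -- not an existing interface.  The point is
 * that the policy is an explicit argument instead of being read
 * implicitly from current->mempolicy, so any context (e.g. the child's
 * policy during fork) can drive the allocation.
 */
struct page *alloc_pages_policy(gfp_t gfp_mask, unsigned int order,
				struct mempolicy *pol);

static struct page *alloc_page_for_child(struct task_struct *child)
{
	/* Use the child's policy, not the current task context. */
	return alloc_pages_policy(GFP_HIGHUSER, 0, child->mempolicy);
}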

The cleanest solution that I can think of at this point is to switch to 
another processor and do the allocation and copying from there. We have 
the migration process context, right? Can that be used to start the new 
thread, with the original processor waiting on some flag until that is 
complete?
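
Roughly the shape I have in mind, expressed with the generic workqueue and
completion primitives rather than the migration thread itself (assuming a
helper that queues work on a given cpu, schedule_work_on() in today's
workqueue code); fork_copy_request, fork_copy_on_target() and the
commented-out allocate_and_copy_mm() are made-up names:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/sched.h>

/* Sketch only: run the allocation/copy on a cpu of the target node and
 * let the original cpu wait until it is done. */
struct fork_copy_request {
	struct work_struct work;
	struct completion done;
	struct task_struct *child;	/* new task being set up */
};

static void fork_copy_on_target(struct work_struct *work)
{
	struct fork_copy_request *req =
		container_of(work, struct fork_copy_request, work);

	/* Runs on a cpu of the target node: allocations follow that node's
	 * locality and the copies warm that cpu's caches instead of the
	 * parent's. */
	/* allocate_and_copy_mm(req->child);   <- made-up helper */

	complete(&req->done);
}

static void fork_copy_remote(int target_cpu, struct task_struct *child)
{
	struct fork_copy_request req;

	INIT_WORK(&req.work, fork_copy_on_target);
	init_completion(&req.done);
	req.child = child;

	/* Pin the work item to a cpu on the target node. */
	schedule_work_on(target_cpu, &req.work);

	/* Original cpu waits on the "flag" until the remote side is done. */
	wait_for_completion(&req.done);
}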

Forking off from there not only places the data correctly, it also warms 
up the caches for the new process and avoids evicting cachelines on the 
original processor.