Date:	Tue, 30 Oct 2007 16:53:40 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Paul Jackson <pj@....com>
cc:	clameter@....com, Lee.Schermerhorn@...com,
	akpm@...ux-foundation.org, ak@...e.de, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] cpusets: add interleave_over_allowed option

On Tue, 30 Oct 2007, Paul Jackson wrote:

> > Those applications that currently rely on the remapping are going to be 
> > broken anyway because they are unknowingly receiving different nodes than 
> > they intended; this is the objection to remapping that Lee agreed with.
> 
> No, they may or may not be broken.  That depends on whether or not they had
> specific hardware locality or affinity needs.
> 

Of course they have specific affinity needs; that's why they used 
mempolicies.  Remapping those policies to a set of nodes that resembles 
the original mempolicy's nodemask in construction, but without regard for 
the affinity those nodes have with respect to the system topology, could 
lead to performance degradations.
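
For concreteness, this is roughly what such an application does today (a 
minimal userspace sketch using the set_mempolicy() wrapper from libnuma; 
the node numbers are invented for illustration):

	#include <numaif.h>	/* set_mempolicy(), MPOL_INTERLEAVE; link with -lnuma */
	#include <stdio.h>

	int main(void)
	{
		/* The app deliberately interleaves over nodes 0 and 2,
		 * presumably because of their distance to its CPUs. */
		unsigned long nodemask = (1UL << 0) | (1UL << 2);

		if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
				  sizeof(nodemask) * 8) < 0)
			perror("set_mempolicy");
		return 0;
	}

If the cpuset is later moved and the policy is remapped, the interleave 
silently lands on whatever nodes happen to occupy the same positions in 
the new mems, regardless of their distance to the task's CPUs.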

> If you're running apps that have specific hardware affinity requirements,
> then perhaps you shouldn't be moving them about in the first place ;).
> And if they did have such needs, aren't they just as likely to be busted
> by AND'ing off some of their nodes as they are by remapping those nodes?
> 

No, because you're interleaving over the set of actual nodes you wanted to 
interleave over in the first place and not some pseudo-random set that 
your cpuset has access to.
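
The difference is easy to see on a toy mask (invented masks, not kernel 
code; this only illustrates the two rebind semantics):

	#include <stdio.h>

	int main(void)
	{
		unsigned long wanted  = (1UL << 0) | (1UL << 2); /* app asked for {0,2} */
		unsigned long allowed = (1UL << 4) | (1UL << 5); /* cpuset now permits {4,5} */

		/* AND'ing: only nodes the app actually asked for survive.
		 * Here nothing survives, so there is no surprise placement. */
		printf("AND'ed:   %#lx\n", wanted & allowed);

		/* Remapping: same *shape* (two nodes), but hardware the
		 * app never requested. */
		printf("remapped: %#lx\n", allowed);
		return 0;
	}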

> I sure wish I knew what real world, actual, not hypothetical, situations
> were motivating this.
> 

You're defending the current remap behavior in terms of the semantics of 
mempolicies?  My position, and Choice C's position, is that you either get 
the exact (or partially-constructed) policy you asked for, or you get 
MPOL_DEFAULT behavior.  What you don't get, even though that's what the 
current code does, is a completely different set of nodes that you never 
intended to have a specific policy over.
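
In other words, the rebind semantics argued for here boil down to 
something like this (a hypothetical helper for illustration, not the 
kernel's actual mpol_rebind_policy()):

	#include <numaif.h>	/* MPOL_INTERLEAVE, MPOL_DEFAULT */

	/* Keep whatever part of the requested nodemask the cpuset still
	 * allows; if nothing survives, fall back to MPOL_DEFAULT rather
	 * than remapping onto nodes that were never requested. */
	static int effective_mode(unsigned long requested,
				  unsigned long allowed,
				  unsigned long *effective)
	{
		*effective = requested & allowed;
		return *effective ? MPOL_INTERLEAVE : MPOL_DEFAULT;
	}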

		David
