Message-Id: <20071029125754.7d7cf172.pj@sgi.com>
Date:	Mon, 29 Oct 2007 12:57:54 -0700
From:	Paul Jackson <pj@....com>
To:	Lee Schermerhorn <Lee.Schermerhorn@...com>
Cc:	rientjes@...gle.com, clameter@....com, akpm@...ux-foundation.org,
	ak@...e.de, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] cpusets: add interleave_over_allowed option

Lee wrote:
> > 	Indeed, if there was much such usage, I suspect they'd
> > 	be complaining that the current kernel API was borked, and
> > 	they'd be filing a request for enhancement -asking- for just
> > 	this subtle change in the kernel API's here.  In other words,
> > 	this subtle API change is a feature, not a bug ;)
> 
> Agreed. 

Hmmm ... put on your thinking hat for my next comment ...

I could do one of two things in mm/mempolicy.c:
 B1) continue accepting nodemasks across the set_mempolicy and mbind
     system call APIs just as now (only nodes in the task's current
     cpuset matter), but remember what was passed in, so that if the
     task's cpuset subsequently shrank and then expanded back to its
     original size, the task would end up with the same memory policy
     placement it first had, or
 B2) accept nodemasks as if relative to the entire system, regardless
     of what cpuset the task was in at the moment (all nodes in the
     system matter and can be specified).
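To make the difference concrete, here's a minimal userspace sketch of my
own (not part of the patch); it assumes libnuma's <numaif.h> wrapper for
set_mempolicy, and the node numbers are made up:

#include <numaif.h>
#include <stdio.h>

int main(void)
{
        /* Ask to interleave over nodes 0-3 (bits 0..3 set). */
        unsigned long nodemask = 0x0fUL;

        /*
         * B1 (today's semantics, plus remembering the mask): the bits
         * name system nodes, but only those currently in the task's
         * cpuset matter.
         *
         * B2: the bits name nodes of the whole system, independent of
         * whichever cpuset the task happens to be in when it calls.
         */
        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask, sizeof(nodemask) * 8))
                perror("set_mempolicy");

        return 0;
}

(Build with -lnuma; the same question applies unchanged to the nodemask
argument of mbind.)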

If I did B1, then that's just a subtle change in the API, and what
you agreed to above holds.

If I did B2, then that's a serious change in the way that nodes
are numbered in the nodemasks passed into mbind and set_mempolicy,
from being only nodes that happen to be in the task's current cpuset,
to being nodes relative to all possible nodes on the system.

We need B2, I think.  Otherwise, if a job happens to be running in
a shrunken cpuset, it can't request the memory policy placement it
wants should it end up in a larger cpuset later on.  With B1, we
would keep the timing dependency between when a task is moved
between different-sized cpusets and when it happens to issue its
mbind/set_mempolicy calls.
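To spell that timing dependency out with an example of my own (the
single-node cpuset and the node numbers are assumptions, purely for
illustration):

#include <numaif.h>

/*
 * The job is currently confined to a shrunken, single-node cpuset, but
 * wants interleaving across system nodes 0-7 for later, when the batch
 * scheduler restores its full cpuset.
 */
int request_future_interleave(void)
{
        unsigned long wanted = 0xffUL;          /* nodes 0-7 */

        /*
         * B1: nodes outside the current (shrunken) cpuset don't count,
         * so this request is refused or clipped now, and has to be
         * reissued once the cpuset grows again -- the timing dependency
         * above.  B2: the call can be made once, at any time, from any
         * cpuset.
         */
        return set_mempolicy(MPOL_INTERLEAVE, &wanted, sizeof(wanted) * 8);
}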

But B2 is an across-the-board change in how we number the nodes
passed into mbind and set_mempolicy.  That is in no way an upward-
compatible change.

I am strongly inclined toward B2, but it must be a non-default optional
mode, at least for a while, perhaps a long while.
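Just to sketch what a non-default opt-in might look like from userspace
-- this is purely hypothetical on my part: MPOL_F_ALL_NODES does not
exist, and the real knob may well end up being a per-cpuset file such as
the interleave_over_allowed one in this patch rather than a per-call
flag:

#include <numaif.h>

/*
 * Hypothetical flag, OR'd into the mode, opting this one call into the
 * B2 (whole-system) interpretation of the nodemask.  The bit value is
 * arbitrary and only for illustration.
 */
#define MPOL_F_ALL_NODES        (1 << 13)

int request_system_wide_interleave(void)
{
        /* Every node the system could have -- only meaningful under the
         * hypothetical B2 semantics. */
        unsigned long all = ~0UL;

        /* Without the flag, the existing (B1) semantics would apply,
         * so nothing changes for current users. */
        return set_mempolicy(MPOL_INTERLEAVE | MPOL_F_ALL_NODES,
                             &all, sizeof(all) * 8);
}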

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.925.600.0401
