Message-Id: <20071025192945.82962eea.pj@sgi.com>
Date:	Thu, 25 Oct 2007 19:29:45 -0700
From:	Paul Jackson <pj@....com>
To:	David Rientjes <rientjes@...gle.com>
Cc:	clameter@....com, akpm@...ux-foundation.org, ak@...e.de,
	Lee.Schermerhorn@...com, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/2] cpusets: add interleave_over_allowed option

> Yes, when a task with MPOL_INTERLEAVE has its cpuset mems_allowed expanded 
> to include more memory.  The task itself can't access all that memory with 
> the memory policy of its choice.

That much I could have guessed (did guess, actually).

Are you seeing this in a real world situation?  Can you describe the
situation?  I don't mean just describing how it looks to this kernel
code, but what is going on in the system, what sort of job mix or
applications, what kind of users, ...  In short, a "use case", or brief
approximation thereto.  See further:

  http://en.wikipedia.org/wiki/Use_case

I have no need of a full-blown use case; just a three-sentence
mini-story should suffice.  But it should (if you can, without
revealing proprietary knowledge) describe a situation you actually
need to address.

> So my change allows those tasks that have already expressed the
> desire to interleave their memory with MPOL_INTERLEAVE to always
> use the full range of memory available that is dynamically changing
> beneath them as a result of cpusets.

Yup, that it does.  Note that it is a special case -- "the full range",
not any application-controlled specific subset thereof, short of
reissuing set_mempolicy() calls any time the application's cpuset
'mems' changes.
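
To make "reissuing set_mempolicy()" concrete, here is a rough userspace
sketch of what an application would have to do whenever its cpuset
'mems' changes.  It is only an illustration: reading "Mems_allowed_list"
from /proc/self/status is one assumed way to discover the current
'mems', not part of anything proposed in this thread, and it relies on
libnuma for the set_mempolicy() wrapper.

/*
 * Illustrative only: rebuild an MPOL_INTERLEAVE mask from the cpuset's
 * current 'mems'.  The /proc parsing below is an assumption made for
 * this example, not part of the proposed interleave_over_allowed option.
 */
#include <numaif.h>	/* set_mempolicy(), MPOL_INTERLEAVE; link -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_NODES	1024
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

static int reinterleave_over_allowed(void)
{
	unsigned long mask[MAX_NODES / BITS_PER_LONG] = { 0 };
	char line[4096];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "Mems_allowed_list:", 18) != 0)
			continue;
		/* Parse a node list such as "0-3,8"; set each node's bit. */
		for (char *tok = strtok(line + 18, " ,\t\n"); tok;
		     tok = strtok(NULL, " ,\t\n")) {
			int lo, hi;

			if (sscanf(tok, "%d-%d", &lo, &hi) != 2)
				lo = hi = atoi(tok);
			if (hi >= MAX_NODES)
				hi = MAX_NODES - 1;
			for (int n = lo; n <= hi; n++)
				mask[n / BITS_PER_LONG] |= 1UL << (n % BITS_PER_LONG);
		}
		break;
	}
	fclose(f);

	/* Interleave over everything the cpuset currently allows. */
	return set_mempolicy(MPOL_INTERLEAVE, mask, MAX_NODES);
}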

> The only other way to support such a feature is through a modification to 
> mempolicies themselves, which Lee has already proposed.  The problem with 
> that is it requires mempolicy support for cpuset cases and modification to 
> the set_mempolicy() API.

Do you have a link to what Lee proposed?  I agree that a fully general
solution would seem to require a new or changed set_mempolicy() API,
which may well be more than we want to do, absent a more compelling
"use case" requiring it than we have now.

> I find it hard to believe that a single cpuset with a single
> memory_spread_user boolean is going to include multiple tasks that
> request interleaved mempolicies over differing nodes within the
> cpuset's mems_allowed.  That, to me, is the special case.

That may well be, to you.  To me, pretty much -all- uses of
set_mempolicy() are special cases ;).  I have no way of telling
whether or not there are users who would require multiple tasks
in the same cpuset to have different interleave masks, but since
the API clearly supports that (except when changes to the cpuset
'mems' setting mess things up), I have been presuming that somewhere
in the universe, such users exist or might come to exist.
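
For what it's worth, the case being presumed here looks something like
the sketch below: two tasks in the same cpuset, each issuing its own
set_mempolicy(MPOL_INTERLEAVE, ...) over a different subset of the
cpuset's nodes.  The node numbers are purely illustrative, and again
this links against libnuma.

/*
 * Illustrative only: two tasks in the same cpuset, each with its own
 * interleave mask.  Node numbers are arbitrary examples; link -lnuma.
 */
#include <numaif.h>	/* set_mempolicy(), MPOL_INTERLEAVE */
#include <unistd.h>
#include <sys/wait.h>

static void interleave_over(int first, int last)
{
	unsigned long mask = 0;

	for (int n = first; n <= last; n++)
		mask |= 1UL << n;

	/* Memory policy is per task, so each caller keeps its own mask. */
	set_mempolicy(MPOL_INTERLEAVE, &mask, 8 * sizeof(mask));
}

int main(void)
{
	if (fork() == 0) {
		interleave_over(0, 1);	/* task A interleaves over nodes 0-1 */
		/* ... allocate and touch memory here ... */
		_exit(0);
	}
	interleave_over(2, 3);		/* task B interleaves over nodes 2-3 */
	/* ... allocate and touch memory here ... */
	wait(NULL);
	return 0;
}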

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.925.600.0401
