Message-Id: <1193670056.5035.27.camel@localhost>
Date:	Mon, 29 Oct 2007 11:00:56 -0400
From:	Lee Schermerhorn <Lee.Schermerhorn@...com>
To:	Christoph Lameter <clameter@....com>
Cc:	David Rientjes <rientjes@...gle.com>, Paul Jackson <pj@....com>,
	akpm@...ux-foundation.org, ak@...e.de,
	linux-kernel@...r.kernel.org, Mel Gorman <mel@....ul.ie>
Subject: Re: [patch 2/2] cpusets: add interleave_over_allowed option

On Fri, 2007-10-26 at 14:37 -0700, Christoph Lameter wrote:
> On Fri, 26 Oct 2007, Lee Schermerhorn wrote:
> 
> > > > Now, if we could replace the 'cpuset_mems_allowed' nodemask with a
> > > > pointer to something stable, it might be a win.
> > > 
> > > The memory policies are already shared and have refcounters for that 
> > > purpose.
> > 
> > I must have missed that in the code I'm reading :)
> 
> What is the benefit of having pointers to nodemasks? We likely would need 
> to have refcounts in those nodemasks too? So we duplicate a lot of 
> the characteristics of memory policies?

Hi, Christoph:

Removing the nodemasks from the mempolicy and allocating them only when
needed is something that you and Mel and I discussed last month, in the
context of Mel's "one zonelist filtered by nodemask" patches.  I just
put together the dynamic nodemask patch [included below FYI, NOT for
serious consideration] to see what it looked like and whether it helped.
Conclusion:  it's ugly/complex [especially trying to keep the nodemasks
embedded for systems that don't require more than a pointer's worth of
bits], and they probably don't help much if most uses of non-default
mempolicy require a nodemask.
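
For reference, the shape of what that patch does is roughly this
[an illustrative sketch only--field names and layout are approximate,
not the actual patch]:

/*
 * Embed the nodemask directly when the whole mask fits in one
 * word; otherwise keep only a pointer and allocate the mask
 * together with the policy.  Sketch, not the real struct.
 */
struct mempolicy {
	atomic_t refcnt;
	short policy;		/* MPOL_DEFAULT, MPOL_BIND, ... */
#if MAX_NUMNODES <= BITS_PER_LONG
	nodemask_t nodes;	/* small config: embedded, no allocation */
#else
	nodemask_t *nodes;	/* large config: kmalloc'd with the policy */
#endif
	/* ... zonelist/preferred-node union as in the current struct ... */
};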

I only brought it up again because now you all are considering another
nodemask per policy.  In fact, I only considered it in the first place
because nodemasks on our [HP's] platform don't require more than a
pointer's worth of bits [today, at least--I don't know about future
plans].  However, since we share an arch--ia64--with SGI, and distros
don't want to support special kernels for different vendors if they can
avoid it, we have 1K-bit nodemasks.  Since this is ia64 we're talking
about, most folks don't care.  Now that you're going to do the same for
x86_64, it might become more visible.  Then again, maybe there are few
enough mempolicy structs that no one will care anyway.
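
For scale [my arithmetic, not from any patch]: nodemask_t is a bitmap
of MAX_NUMNODES bits rounded up to unsigned longs, so the per-mask
cost is fixed at build time:

/*
 * sizeof(nodemask_t) = BITS_TO_LONGS(MAX_NUMNODES) * sizeof(long)
 *
 *   MAX_NUMNODES = 1024, 64-bit kernel: 16 longs = 128 bytes
 *   MAX_NUMNODES <=  64, 64-bit kernel:  1 long  =   8 bytes
 *
 * so each additional nodemask per mempolicy costs 128 bytes on the
 * 1K-node config vs. 8 bytes on a small-node config.
 */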

Note:  I don't [didn't] think I need to ref count the nodemasks
associated with the mempolicies, because they are allocated when the
mempolicy is and destroyed when the policy is--they're not shared.
That's just like the custom zonelist for bind policy, which has no ref
count of its own.  I.e., they're protected by the mempol's ref.
However, now that you bring it up, I'm wondering about the effects of
policy remapping, and whether we have the reference counting or
indirect protection [mmap_sem, whatever] correct there in current
code.  I'll have to take a look.
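
To spell out that lifetime argument [a sketch against the dynamic
nodemask approach, not current mainline--the kmalloc'd ->nodes field
is the sketch's, and details are approximate]:

static struct mempolicy *mpol_new(int mode, nodemask_t *nodes)
{
	struct mempolicy *pol;

	pol = kmem_cache_alloc(policy_cache, GFP_KERNEL);
	if (!pol)
		return ERR_PTR(-ENOMEM);
	atomic_set(&pol->refcnt, 1);	/* the policy's own refcount */
	pol->nodes = kmalloc(sizeof(nodemask_t), GFP_KERNEL);
	if (!pol->nodes) {
		kmem_cache_free(policy_cache, pol);
		return ERR_PTR(-ENOMEM);
	}
	*pol->nodes = *nodes;		/* mask lives and dies with pol */
	pol->policy = mode;
	return pol;
}

void __mpol_free(struct mempolicy *pol)
{
	/* reached only when the last reference is dropped, so the
	 * mask needs no separate refcount of its own */
	kfree(pol->nodes);
	kmem_cache_free(policy_cache, pol);
}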

Lee

[Attachment: "dynamically-allocate-mempolicy-nodemasks.patch", text/x-patch, 10132 bytes]
