Message-Id: <20061106122446.8269f7bc.akpm@osdl.org>
Date:	Mon, 6 Nov 2006 12:24:46 -0800
From:	Andrew Morton <akpm@...l.org>
To:	Christoph Lameter <clameter@....com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Avoid allocating during interleave from almost full nodes

On Mon, 6 Nov 2006 12:12:50 -0800 (PST)
Christoph Lameter <clameter@....com> wrote:

> On Mon, 6 Nov 2006, Andrew Morton wrote:
> 
> > > It should do interleaving because the data is to be accessed from multiple 
> > > nodes.
> > 
> > I think you missed the point.
> > 
> > At present the code does interleaving by taking one page from each zone and
> > then advancing onto the next zone, yes?
> 
> s/zone/node/ then yes (zone == node if we just have a single zone).
> 
> > If so, this is pretty awful from a cache utilisation POV.  It'd be much
> > better to take 16 pages from one zone before advancing onto the next one.
> 
> The L1/L2 cpu cache or the pageset hot / cold caches?

I'm referring to the metadata rather than to the pages themselves: the zone
structure at least.  I bet there are a couple of cache misses in there.
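
(Purely as an illustration of the per-allocation round robin being
described above -- a minimal userspace sketch; the node count and the
il_next name are invented here, this is not the actual mm/mempolicy.c
code:)

/*
 * Current behaviour: every allocation takes one page from the "next"
 * node and then advances, so consecutive allocations touch a different
 * node's metadata every time.
 */
#include <stdio.h>

#define NR_NODES 4

static int il_next;     /* per-task "next interleave node" */

static int interleave_nodes(void)
{
        int nid = il_next;

        il_next = (il_next + 1) % NR_NODES;     /* advance on every page */
        return nid;
}

int main(void)
{
        for (int i = 0; i < 8; i++)
                printf("page %d -> node %d\n", i, interleave_nodes());
        return 0;
}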

> Take N pages 
> from a node instead of 1? That would mean we need to have more complex 
> interleaving logic that keeps track of how many pages we took.

It's hardly rocket science.  Stick a nid and a counter in the task_struct
for a simple implementation.
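
A rough sketch of that nid-plus-counter idea, again in userspace C just
to show the shape of it (the batch size, the field names and the toy
task struct are invented for illustration, not a proposed patch):

/*
 * Stay on the same node for INTERLEAVE_BATCH allocations before
 * advancing, so consecutive allocations hit the same node/zone
 * metadata instead of bouncing across all nodes.
 */
#include <stdio.h>

#define NR_NODES         4
#define INTERLEAVE_BATCH 16

struct task {
        int il_nid;     /* node we're currently allocating from */
        int il_count;   /* pages already taken from il_nid */
};

static int interleave_nodes_batched(struct task *t)
{
        if (t->il_count++ >= INTERLEAVE_BATCH) {
                t->il_nid = (t->il_nid + 1) % NR_NODES;
                t->il_count = 1;
        }
        return t->il_nid;
}

int main(void)
{
        struct task t = { 0, 0 };

        for (int i = 0; i < 40; i++)
                printf("page %d -> node %d\n", i, interleave_nodes_batched(&t));
        return 0;
}

With a batch of 16 this hands out pages 0-15 from node 0, 16-31 from
node 1 and so on, so one node's metadata stays cache-hot across 16
consecutive allocations instead of changing on every page.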

> The number 
> of pages to take will vary depending on the size of the shared data. For 
> shared data areas that are just a couple of pages, this won't work.

What is "shared data"?

> > > Clustering on a single node may create hotspots or imbalances. 
> > 
> > Umm, but that's exactly what the patch we're discussing will do.
> 
> Not if we have a set of remaining nodes.

Yes it is.  You're proposing taking an arbitrarily large number of
successive pages from the same node rather than interleaving the allocations.
That will create "hotspots or imbalances" (whatever they are).
