Date:	Thu, 1 Oct 2009 16:03:46 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	Pekka Enberg <penberg@...helsinki.fi>,
	Nick Piggin <npiggin@...e.de>, heiko.carstens@...ibm.com,
	sachinp@...ibm.com, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, Tejun Heo <tj@...nel.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH 2/4] slqb: Record what node is local to a kmem_cache_cpu

On Thu, Oct 01, 2009 at 10:32:54AM -0400, Christoph Lameter wrote:
> On Thu, 1 Oct 2009, Mel Gorman wrote:
> 
> > > Frees are done directly to the target slab page if they are not to the
> > > current active slab page. No centralized locks. Concurrent frees from
> > > processors on the same node to multiple other nodes (or different pages
> > > on the same node) can occur.
> > >
> >
> > So as a total aside, SLQB has an advantage in that it always uses objects
> > in LIFO order and is more likely to be cache hot. SLUB has an advantage
> > when one CPU allocates and another one frees because it potentially
> > avoids a cache line bounce. Might be something worth bearing in mind
> > when/if a comparison happens later.
> 
> SLQB may use cache hot objects regardless of their locality. SLUB
> always serves objects that have the same locality first (same page).
> SLAB returns objects via the alien caches to the remote node.
> So object allocations with SLUB will generate less TLB pressure since they
> are localized.

True, although it could be improved further if SLUB knew which local hugepage
it resided within, since the kernel portion of the address space is backed by
huge TLB entries. Note that SLQB could have an advantage here early in boot,
as the page allocator will tend to give it back pages within a single huge
TLB entry. It loses that advantage once the system has been running for a
long time, but it might be enough to skew benchmark results on cold-booted
systems.
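
To make that trade-off concrete, here is a small user-space toy. It is not
kernel code, and toy_object, lifo_alloc(), page_free() and friends are
made-up names rather than slab/slub/slqb internals. It contrasts a per-CPU
LIFO freelist, which hands back the most recently freed (and therefore likely
cache-hot) object regardless of which page it came from, with freeing an
object straight back onto its owning page's freelist, which keeps objects
grouped by page but makes the next allocation less likely to be cache-hot:

/*
 * Toy model of the two strategies being compared above.  Not kernel code;
 * every name here is invented for illustration.
 */
#include <stdio.h>

struct toy_object {
	struct toy_object *next;
	int owning_page;		/* stand-in for "which slab page owns me" */
};

/* SLQB-like: one LIFO freelist per CPU, ignoring the owning page. */
struct percpu_lifo {
	struct toy_object *head;
};

static void lifo_free(struct percpu_lifo *q, struct toy_object *obj)
{
	obj->next = q->head;		/* most recently freed goes on top */
	q->head = obj;
}

static struct toy_object *lifo_alloc(struct percpu_lifo *q)
{
	struct toy_object *obj = q->head;	/* likely still cache-hot here */

	if (obj)
		q->head = obj->next;
	return obj;
}

/* SLUB-like: objects go straight back onto their owning page's freelist. */
struct toy_page {
	struct toy_object *freelist;
};

static void page_free(struct toy_page *pages, struct toy_object *obj)
{
	struct toy_page *pg = &pages[obj->owning_page];

	obj->next = pg->freelist;	/* grouped by page, not by recency */
	pg->freelist = obj;
}

int main(void)
{
	struct toy_object a = { .owning_page = 0 };
	struct toy_object b = { .owning_page = 1 };
	struct percpu_lifo cpu0 = { 0 };
	struct toy_page pages[2] = { { 0 }, { 0 } };

	/* LIFO reuse: b was freed last, so b comes back first. */
	lifo_free(&cpu0, &a);
	lifo_free(&cpu0, &b);
	printf("LIFO alloc returns the object on page %d\n",
	       lifo_alloc(&cpu0)->owning_page);

	/* Per-page free: each object lands back on its own page's list. */
	page_free(pages, &a);
	page_free(pages, &b);
	printf("object a back on page %d's list, object b on page %d's\n",
	       pages[0].freelist->owning_page, pages[1].freelist->owning_page);
	return 0;
}

The second strategy is also what keeps the remote-free path cheap: the
freeing CPU only touches the owning page's freelist, never a shared queue.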

> SLUB objects are immediately returned to the remote node.
> SLAB/SLQB keeps them around for reallocation or queue processing.
> 
> > > Look at fallback_alloc() in slab. You can likely copy much of it. It
> > > considers memory policies and cpuset constraints.
> > >
> > True, it looks like some of the logic should be taken from there all right. Can
> > the treatment of memory policies be dealt with as a separate thread though? I'd
> > prefer to get memoryless nodes sorted out before considering the next two
> > problems (per-cpu instability on ppc64 and memory policy handling in SLQB).
> 
> Separate email thread? Ok.
> 

Yes, but I'll be honest: it'll be at least two weeks before I can tackle
memory-policy-related issues in SLQB, and it's not high on my list of
priorities. I'm more concerned with the breakage on ppc64 and the patch that
forces SLQB to be disabled there. At a minimum, I want that resolved before
getting distracted by another thread.
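
For reference when that thread does happen, the shape of the fallback logic
Christoph is pointing at is roughly the loop below. This is only a user-space
sketch of the idea, not a copy of fallback_alloc(); NR_TOY_NODES,
node_allowed() and alloc_from_node() are invented stand-ins for the zonelist
walk and the cpuset/mempolicy checks the real code does.

/*
 * Rough sketch of node-fallback allocation, as discussed above.  Not the
 * real fallback_alloc(); the helpers below are invented stand-ins.
 */
#include <stdbool.h>
#include <stddef.h>

#define NR_TOY_NODES	4

/* Stand-in: the kernel would consult cpusets and the memory policy here. */
static bool node_allowed(int node)
{
	(void)node;
	return true;
}

/* Stand-in: the kernel would try that node's partial slabs, then its pages. */
static void *alloc_from_node(int node)
{
	(void)node;
	return NULL;			/* pretend the node is exhausted */
}

void *toy_fallback_alloc(int preferred_node)
{
	void *obj = NULL;
	int node;

	/* Preferred node first; on a memoryless node this always fails. */
	if (node_allowed(preferred_node))
		obj = alloc_from_node(preferred_node);

	/* Then walk the remaining allowed nodes until one succeeds. */
	for (node = 0; !obj && node < NR_TOY_NODES; node++) {
		if (node == preferred_node || !node_allowed(node))
			continue;
		obj = alloc_from_node(node);
	}
	return obj;
}

The detail that matters for the memoryless-node case is the first step: the
preferred node can legitimately fail every single time, so the fallback walk
has to be treated as a normal path rather than a rare slow path.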

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
