Message-ID: <Pine.LNX.4.64.0711031954080.17078@blonde.wat.veritas.com>
Date:	Sat, 3 Nov 2007 20:03:19 +0000 (GMT)
From:	Hugh Dickins <hugh@...itas.com>
To:	Christoph Lameter <clameter@....com>
cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] slub: fix Objects count

On Sat, 3 Nov 2007, Christoph Lameter wrote:
> On Sat, 3 Nov 2007, Hugh Dickins wrote:
> 
> > I was afraid you might say something like that.
> > Perhaps it'll be a patch I need to use in my own builds.
> > Though I'd have thought others would want that accuracy too.
> > Didn't SLAB give it?  (The "r*gr*ss**n" word!)
> 
> Slab also only counts objects that are not in the queues.  See
> free_block(), for example.

I'll take your word for it, and apologize for my slur on slub!
(Slub has a great deal to admire in it, I should say.)
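
Just so I'm sure I follow where the discrepancy comes from, here's a
toy sketch of my reading, names invented, nothing like the real mm/
code:

	/*
	 * Toy model only: objects parked in a per-cpu queue/slab look
	 * allocated from the lists' point of view, so a count taken
	 * from the lists alone runs high.
	 */
	struct toy_cache {
		unsigned long on_lists;	/* objects in full/partial slabs */
		unsigned long in_cpu_q;	/* cached per cpu, actually free */
	};

	static unsigned long reported_in_use(const struct toy_cache *c)
	{
		return c->on_lists;			/* what the stats show */
	}

	static unsigned long really_in_use(const struct toy_cache *c)
	{
		return c->on_lists - c->in_cpu_q;	/* what I'd like to see */
	}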

> 
> We could improve the situation by flushing all cpu slabs before counts are 
> determined.
> 
> Which can be done manually. Run
> 
> 	slabinfo -s
> 
> and then look at the numbers.

Mmm, I'd been doing slabinfo -v sometimes.  These are fine in some
situations, but it's always better when the observer can avoid
interfering with the observed.  Impossible, we know, but...

Also, many caches too quickly re-equip themselves with cpu slabs,
which again obscure the numbers.
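
For anyone wanting to poke at this by hand: my understanding is that
slabinfo -s works through the sysfs shrink files, roughly as in the
fragment below.  The paths are from memory, so treat them as
approximate rather than gospel.

	/*
	 * Hand-rolled version of the "flush, then look" step: writing to
	 * /sys/kernel/slab/<cache>/shrink asks SLUB to drop its cpu and
	 * empty slabs, then <cache>/objects gives a calmer count.
	 */
	#include <stdio.h>

	static void flush_and_count(const char *cache)
	{
		char path[256], buf[128];
		FILE *f;

		snprintf(path, sizeof(path), "/sys/kernel/slab/%s/shrink", cache);
		f = fopen(path, "w");
		if (f) {
			fputs("1\n", f);	/* roughly what slabinfo -s does */
			fclose(f);
		}

		snprintf(path, sizeof(path), "/sys/kernel/slab/%s/objects", cache);
		f = fopen(path, "r");
		if (f) {
			if (fgets(buf, sizeof(buf), f))
				printf("%s objects: %s", cache, buf);
			fclose(f);
		}
	}

	int main(void)
	{
		flush_and_count("kmalloc-64");	/* any cache name will do */
		return 0;
	}

Writing to shrink ought to flush the cpu slabs before objects is read,
which I take to be the manual version of what you suggest.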

> > > Adds too much overhead to the fast paths
> > 
> > You've come to that conclusion very quickly!
> 
> I have just spent a few weeks optimizing the fast and slow paths and there 
> is some additional overhead that I am still trying to eliminate.
> 
> > Any numbers to back it up?
> 
> The performance in the fast paths depends on updating only a single word
> for an allocation. Adding another counter makes that impossible.

Gosh, that's a tighter corner than any I've been in.
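
Just to check I have the shape of it, here's a cartoon of the
constraint as I understand it, nothing to do with the real slub.c
fastpath:

	/*
	 * The common case touches one cpu-local word (the freelist head);
	 * an accurate object counter would mean touching a second,
	 * possibly shared, word on every alloc and free.
	 */
	struct toy_cpu_slab {
		void **freelist;	/* the single word the fastpath updates */
	};

	static inline void *toy_alloc(struct toy_cpu_slab *c)
	{
		void **object = c->freelist;

		if (!object)
			return NULL;	/* slow path: refill from a new slab */

		c->freelist = *object;	/* the one-word update */
		/* atomic_long_inc(&objects_in_use);  <- the cost in question */
		return object;
	}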

> 
> See the recent post on SLUB regression on SMP.

I'll have to read up on that, thanks for the pointer.

Hugh
