Date:	Wed, 14 May 2008 14:53:57 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Matthew Wilcox <matthew@....cx>
cc:	Andi Kleen <andi@...stfloor.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Mel Gorman <mel@...net.ie>, mpm@...enic.com,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: Re: [patch 21/21] slab defrag: Obsolete SLAB

On Wed, 14 May 2008, Matthew Wilcox wrote:

> > Oh. The last message I got was an enthusiastic report on the performance
> > gains you saw by pinning the process, after we looked at slub statistics
> > that showed that the behavior of the tests was different from your
> > expectations. The messages I got here indicated that this was a scsi
> > testing program that you had under development. And yes, we saw the
> > remote freeing degradations there.
> 
> What I said was:
> 
> : I've also been playing around with locking the scsi_ram_0 thread to
> : one CPU and it has a huge effect on the numbers.
> 
> : So we can see that scsi_ram_0 is clearly wandering between the two
> : CPUs normally; it takes up a significant fraction (3 seconds ~= 7-8%) of the
> : execution time, and that locking it to one CPU (which interrupts tend
> : to be) improves the number of ops per second ... even of the CPU which
> : is forced to take all the extra work of running it!
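
(For reference: pinning a task the way you describe is a one-liner with
sched_setaffinity(), or `taskset -c 0 <cmd>` from the shell. A minimal,
untested sketch, with the CPU number purely illustrative:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);			/* illustrative CPU number */
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");	/* pid 0 == calling task */
		return 1;
	}
	/* ... the workload now stays on CPU 0 ... */
	return 0;
}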


The last message that I got on March 31st said:

> I have a version below which tries to start the tasks at a similar time
> by using pause() and then signalling to wake all the tasks up.  I don't
> know a better way to start threads simultaneously ... maybe MAP_SHARED a
> file and write to it in one task while spinning in the other tasks
> waiting for it to change value?
>
> I've also been playing around with locking the scsi_ram_0 thread to one
> CPU and it has a huge effect on the numbers.

This indicated to me that you were still developing the test and had
discovered some startling things along the way.
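
The MAP_SHARED start flag you describe could be as simple as the
following (an untested sketch; the worker count and workload are
illustrative, not taken from your test):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define NWORKERS 4			/* illustrative */

int main(void)
{
	/* One shared flag, visible to all forked children. */
	volatile int *go = mmap(NULL, sizeof(*go),
				PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	int i;

	if (go == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	*go = 0;

	for (i = 0; i < NWORKERS; i++) {
		if (fork() == 0) {
			while (!*go)	/* spin until the parent flips the flag */
				;
			/* ... run the timed workload here ... */
			_exit(0);
		}
	}

	sleep(1);			/* crude: let children reach the spin loop */
	*go = 1;			/* release all workers at once */

	while (wait(NULL) > 0)
		;
	return 0;
}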

> Note the complete lack of comparison between slub and slab here!  As far
> as I know, slub still loses against slab by a few % -- but I haven't
> finished running a comparison with -rc2 yet.

Indeed, remote frees are slightly slower in some situations; I don't
really dispute that. I am just not sure that the TPC test is actually
suffering from that symptom. I thought for a long time that the tbench
regression was due to a similar effect too, until I got to the bottom
of it.
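
To make the terminology concrete: a "remote free" is a free issued from
a CPU other than the one that allocated the object (in the kernel,
kmem_cache_alloc() on one CPU, kmem_cache_free() on another). A
hypothetical userspace pattern that provokes it could look like the
following; the volatile busy-wait handoff is kept deliberately simple
and ignores memory ordering, and malloc/free merely stand in for the
slab calls:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>

#define ITERS 100000			/* illustrative iteration count */

static void *volatile slot;		/* single-slot handoff */

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *producer(void *arg)
{
	int i;

	pin_to_cpu(0);
	for (i = 0; i < ITERS; i++) {
		while (slot)		/* wait until the consumer took it */
			;
		slot = malloc(64);	/* "allocated" on CPU 0 */
	}
	return NULL;
}

static void *consumer(void *arg)
{
	int i;

	pin_to_cpu(1);
	for (i = 0; i < ITERS; i++) {
		void *p;

		while (!(p = slot))
			;
		slot = NULL;
		free(p);		/* freed on CPU 1: a remote free */
	}
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_create(&p, NULL, producer, NULL);
	pthread_create(&c, NULL, consumer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;			/* build with: gcc -pthread */
}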

> I thought you'd already run this test and were asking for the results of
> this to be validated against a real TPC run.

AFAICT the last state was that you were tinkering around with a test.

> I'm rather annoyed by this.  You demand a test-case to reproduce the
> problem and then when I come up with one, you ignore it!

Ignore it? That is a pretty strange statement, given that I helped you
analyze the behavior of your test and understand what was going on in
the system.

