Message-ID: <20080515181356.GP9921@parisc-linux.org>
Date:	Thu, 15 May 2008 12:13:56 -0600
From:	Matthew Wilcox <matthew@....cx>
To:	Christoph Lameter <clameter@....com>
Cc:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>, akpm@...ux-foundation.org,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Mel Gorman <mel@...net.ie>, mpm@...enic.com
Subject: Re: [patch 21/21] slab defrag: Obsolete SLAB

On Thu, May 15, 2008 at 10:58:01AM -0700, Christoph Lameter wrote:
> On Thu, 15 May 2008, Matthew Wilcox wrote:
> 
> > On Thu, May 15, 2008 at 10:05:35AM -0700, Christoph Lameter wrote:
> > > Thanks for using the slab statistics. I wish I had these numbers for the 
> > > TPC benchmark. That would allow us to understand what is going on while it 
> > > is running.
> > 
> > Hang on, you want slab statistics for the TPC run?  You didn't tell me
> > that.  We're trying to gather oprofile data (and having trouble because
> > the machine crashes when we start using oprofile -- this is with the git
> > tree you/pekka put together for us to test).
> 
> Well, we talked about this when you sent me the test program. I just 
> thought that it would be logical to do the same for the real case.

You ran the test ... you didn't say "It would be helpful if you could
get these results for me for TPC-C".

> Details of the crash please?

I don't have any.

> You could just start with 2.6.25.X which already contains the slab 
> statistics.

Certainly.  Exactly how does collecting these stats work?  Am I supposed
to zero the counters after the TPC has done its initial ramp-up?  What
commands should I run, and at exactly which points?
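For SLUB, the per-cache counters live under sysfs when the kernel is built
with CONFIG_SLUB_STATS. A rough sampling sketch follows; the cache name
"kmalloc-64", the exact stat-file names, and the reset-by-writing-zero
behaviour are assumptions to check against your kernel, not confirmed
details from this thread:

```shell
# Sketch: sample SLUB per-cache statistics around a benchmark interval.
# Assumes CONFIG_SLUB_STATS=y; "kmalloc-64" is only an example cache.
slab_stats() {
    local dir="/sys/kernel/slab/$1" f
    for f in alloc_fastpath alloc_slowpath free_fastpath free_slowpath; do
        if [ -r "$dir/$f" ]; then
            # First field is the total; per-CPU breakdown may follow it.
            printf '%s %s\n' "$f" "$(cut -d' ' -f1 "$dir/$f")"
        else
            printf '%s unavailable\n' "$f"
        fi
    done
}

# After ramp-up, reset each counter (writing 0 clears a counter on
# kernels that accept stores to the stat files), e.g.:
#   echo 0 > /sys/kernel/slab/kmalloc-64/alloc_fastpath
# ...run the measured interval, then snapshot:
snapshot=$(slab_stats kmalloc-64)
echo "$snapshot"
```

The idea is simply two snapshots bracketing the steady-state interval, so
ramp-up allocations don't pollute the numbers.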

> Also, re: the test program: since pinning a process increases the 
> performance by orders of magnitude, are you sure that the application was 
> properly tuned for an 8p configuration? Pinning is usually not necessary 
> for lower numbers of processors because the scheduler thrashing effect is 
> less of an issue.  If the test program is an accurate representation of 
> the TPC-C benchmark then you can drastically increase its performance by 
> doing the same to the real test.

The application does nothing except submit IO and wait for it to complete.
It doesn't need to be tuned.  It's not an accurate representation of
TPC-C, it just simulates the amount of IO that a TPC-C run will generate
(and simulates it coming from all CPUs, which is accurate).
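A caricature of that shape of load (not the actual test program): one
submitter per online CPU, each pinned to its own CPU so requests originate
everywhere. Writing to /tmp with dd is a stand-in for the real direct-I/O
submitters:

```shell
# Sketch: drive I/O from every CPU at once. dd against /tmp stands in
# for the real submitters; taskset pins each one when it is available.
ncpus=$(getconf _NPROCESSORS_ONLN)
for cpu in $(seq 0 $((ncpus - 1))); do
    if command -v taskset >/dev/null 2>&1; then
        taskset -c "$cpu" dd if=/dev/zero of="/tmp/io_$cpu" \
            bs=4k count=16 conv=fsync 2>/dev/null &
    else
        dd if=/dev/zero of="/tmp/io_$cpu" bs=4k count=16 \
            conv=fsync 2>/dev/null &  # unpinned fallback
    fi
done
wait
submitters=$(ls /tmp/io_* 2>/dev/null | wc -l)
echo "ran $submitters submitters"
rm -f /tmp/io_*
```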

I don't want to get into details of how a TPC benchmark is tuned, because
it's not relevant.  Trust me, there are people who dedicate months of
their lives per year to tuning how TPC runs are scheduled.

The pinning I was talking about was pinning the scsi_ram_0 kernel thread
to one CPU to simulate interrupts being tied to one CPU.
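Assuming the thread shows up under that name, the pinning step itself is a
single taskset call; the thread name and CPU number here come from this
thread's test setup, so adjust both for your system:

```shell
# Sketch: pin the scsi_ram_0 kernel thread to CPU 0. The thread only
# exists with the scsi_ram test driver loaded, so handle absence cleanly.
pid=$(pgrep -x scsi_ram_0 || true)
if [ -n "$pid" ]; then
    result=$(taskset -pc 0 "$pid")
else
    result="scsi_ram_0 not running"
fi
echo "$result"
```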

-- 
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
