Message-ID: <Pine.LNX.4.64.0805141426440.19073@schroedinger.engr.sgi.com>
Date: Wed, 14 May 2008 14:33:11 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Matthew Wilcox <matthew@....cx>
cc: Andi Kleen <andi@...stfloor.org>,
Pekka Enberg <penberg@...helsinki.fi>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Rik van Riel <riel@...hat.com>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Mel Gorman <mel@...net.ie>, mpm@...enic.com,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: Re: [patch 21/21] slab defrag: Obsolete SLAB
On Wed, 14 May 2008, Matthew Wilcox wrote:
> > No. I thought you were satisfied with the performance increase you saw
> > when pinning the process to a single processor?
>
> Er, no. That program emulates a TPC-C run from the point of view of
> doing as much IO as possible from all CPUs. Pinning the process to one
> CPU would miss the point somewhat.
Oh. The last message I got was an enthusiastic report on the performance
gains you saw from pinning the process, after we looked at the slub
statistics and found that the behavior of the test was different from your
expectations. The messages here indicated that this was a scsi testing
program you had under development. And yes, we saw the remote freeing
degradation there.
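
Just so we are talking about the same thing: by pinning the process I mean
plain CPU affinity on the test process. A minimal sketch of that, using
sched_setaffinity(2) rather than your actual test code (which I do not
have here), equivalent to running it under taskset -c 0:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* run the test only on CPU 0 */

	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		exit(EXIT_FAILURE);
	}

	/* ... the I/O generating workload would run here ... */
	return 0;
}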
> I seem to remember telling you that you might get more realistic
> performance numbers by pinning the scsi_ram_0 kernel thread to a single
> CPU (ie emulating an interrupt tied to one CPU rather than letting the
> scheduler choose to run the thread on the 'best' CPU).
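Pinning the scsi_ram_0 kthread from userspace would look roughly the same,
assuming its PID is looked up first (e.g. with ps) and assuming the kthread
accepts affinity changes. A sketch only, equivalent to taskset -pc 0 <pid>:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
	cpu_set_t set;
	pid_t pid;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	pid = (pid_t)atoi(argv[1]);

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* confine the thread to CPU 0 */

	if (sched_setaffinity(pid, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}
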
If this program is a stand-in for TPC-C, then why did you not point that
out when Pekka and I recently asked you to retest some configurations?