Message-ID: <516CEBA6.9060703@sr71.net>
Date:	Mon, 15 Apr 2013 23:11:50 -0700
From:	Dave Hansen <dave@...1.net>
To:	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
CC:	Andrea Arcangeli <aarcange@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Al Viro <viro@...iv.linux.org.uk>,
	Hugh Dickins <hughd@...gle.com>,
	Wu Fengguang <fengguang.wu@...el.com>, Jan Kara <jack@...e.cz>,
	Mel Gorman <mgorman@...e.de>, linux-mm@...ck.org,
	Andi Kleen <ak@...ux.intel.com>,
	Matthew Wilcox <matthew.r.wilcox@...el.com>,
	"Kirill A. Shutemov" <kirill@...temov.name>,
	Hillf Danton <dhillf@...il.com>, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RESEND] IOZone with transparent huge page cache
On 04/15/2013 10:57 PM, Kirill A. Shutemov wrote:
>>> ** Initial writers **
>>> threads:	        1        2        4        8       16       32       64      128      256
>>> baseline:	  1103360   912585   500065   260503   128918    62039    34799    18718     9376
>>> patched:	  2127476  2155029  2345079  1942158  1127109   571899   127090    52939    25950
>>> speed-up(times):     1.93     2.36     4.69     7.46     8.74     9.22     3.65     2.83     2.77
>>
>> I'm a _bit_ surprised that iozone scales _that_ badly, especially while
>> threads < nr_cpus.  Is this normal for iozone?  What are the units and
>> metric there, btw?
> The units are KB/sec per process (I used 'Avg throughput per process' from
> the iozone report). So it doesn't scale that badly.
> I will use total children throughput next time to avoid confusion.
Wow.  Well, it's cool that your patches just fix it up inherently.  I'd
still really like to see some analysis of exactly where the benefit is
coming from, though.
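[Editor's note: not part of the original thread. A quick sanity check on the quoted numbers: since the units are KB/sec *per process*, the speed-up row is simply the patched row divided by the baseline row, element by element. A minimal Python sketch reproducing it:]

```python
# Per-process throughput (KB/sec) from the quoted iozone table.
threads  = [1, 2, 4, 8, 16, 32, 64, 128, 256]
baseline = [1103360, 912585, 500065, 260503, 128918, 62039, 34799, 18718, 9376]
patched  = [2127476, 2155029, 2345079, 1942158, 1127109, 571899, 127090, 52939, 25950]

# speed-up(times) = patched / baseline, rounded to two decimals as in the table
speedup = [round(p / b, 2) for p, b in zip(patched, baseline)]
print(speedup)
# → [1.93, 2.36, 4.69, 7.46, 8.74, 9.22, 3.65, 2.83, 2.77]
```

This matches the quoted speed-up row exactly, confirming the figures are per-process ratios rather than aggregate-throughput ratios.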