Message-Id: <20081007131719.8bb24698.akpm@linux-foundation.org>
Date:	Tue, 7 Oct 2008 13:17:19 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc:	riel@...hat.com, Lee.Schermerhorn@...com,
	kosaki.motohiro@...fujitsu.com, a.p.zijlstra@...llo.nl,
	torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, netdev@...r.kernel.org,
	trond.myklebust@....uio.no, dlezcano@...ibm.com,
	penberg@...helsinki.fi, neilb@...e.de, davem@...emloft.net
Subject: Re: split-lru performance measurement part2

On Tue,  7 Oct 2008 23:26:54 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> wrote:

> Hi
> 
> > yup,
> > I know many people want other benchmark results too.
> > I'll try to measure other benchmarks next week.
> 
> I ran another benchmark today.
> I chose dbench because it is one of the most famous I/O benchmarks and close to a real workload.
> 
> 
> % dbench client.txt 4000
> 
> mainline:  Throughput 13.4231 MB/sec  4000 clients  4000 procs  max_latency=1421988.159 ms
> mmotm(*):  Throughput  7.0354 MB/sec  4000 clients  4000 procs  max_latency=2369213.380 ms
> 
> (*) mmotm 2/Oct + Hugh's recent slub fix
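
For reproducibility, here is a rough sketch of how each kernel's run can be
captured; it assumes the same loadfile and client count as the command above
and relies on dbench's one-line "Throughput ..." summary quoted above:

  # sketch: run the identical load on the currently booted kernel and
  # keep its final "Throughput ..." summary line in a per-kernel log
  kver=$(uname -r)
  dbench client.txt 4000 2>&1 | tee "dbench-$kver.log" | grep Throughput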
> 
> 
> Wow!
> mmotm is much slower than mainline (about half the throughput).
> 
> Therefore, I measured it on a "mainline + split-lru(only)" build.
> 
> 
> mainline + split-lru(only): Throughput 14.4062 MB/sec  4000 clients  4000 procs  max_latency=1152231.896 ms
> 
> 
> OK!
> split-lru outperforms mainline in both throughput and latency :)
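
Working the ratios from the figures quoted above:

  7.0354 / 13.4231          ~= 0.52  (mmotm: about half of mainline's throughput)
  14.4062 / 13.4231         ~= 1.07  (split-lru only: ~7% more throughput than mainline)
  1152231.896 / 1421988.159 ~= 0.81  (and ~19% lower max_latency)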
> 
> 
> 
> However, I don't understand why this regression happened.

erk.

dbench is pretty chaotic and it could be that a good change causes
dbench to get worse.  That's happened plenty of times in the past.


> Do you have any suggestion?


One of these:

vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
vm-dont-run-touch_buffer-during-buffercache-lookups.patch

perhaps?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
