Message-ID: <20071118204724.GS19691@waste.org>
Date:	Sun, 18 Nov 2007 14:47:24 -0600
From:	Matt Mackall <mpm@...enic.com>
To:	Abhishek Rai <abhishekrai@...gle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Andreas Dilger <adilger@....com>, linux-kernel@...r.kernel.org,
	Ken Chen <kenchen@...gle.com>,
	Mike Waychison <mikew@...gle.com>
Subject: Re: [PATCH] Clustering indirect blocks in Ext3

On Sun, Nov 18, 2007 at 07:52:36AM -0800, Abhishek Rai wrote:
> Thanks for the suggestion Matt.
> 
> It took me some time to get compilebench working because of the known
> circular lock dependency between j_list_lock and inode_lock that
> drop_caches exposes (compilebench triggers drop_caches quite
> frequently). Here are the results for compilebench run with options
> "-i 30 -r 30". I repeated the test 5 times on each of the vanilla and
> mc (metaclustering) configurations.
> 
> Setup: 4 cpu, 8GB RAM, 400GB disk.
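[A sketch of the benchmark procedure described above. The compilebench
path, the mount point, and the DRY_RUN wrapper are assumptions for
illustration, not from the message.]

```shell
#!/bin/sh
# Sketch of the benchmark loop described above. The compilebench
# path and mount point are assumptions; DRY_RUN=1 (the default)
# only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

for i in 1 2 3 4 5; do
    run sync
    # Drop page cache, dentries and inodes between runs (needs root);
    # this is the drop_caches step that hit the lockdep issue above.
    run sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    run ./compilebench -D /mnt/test -i 30 -r 30
done
```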
> 
> Average vanilla results
> ==========================================================================
> initial create total runs 30 avg 46.49 MB/s (user 1.12s sys 2.25s)
> create total runs 5 avg 12.90 MB/s (user 1.08s sys 1.97s)
> patch total runs 4 avg 8.70 MB/s (user 0.60s sys 2.31s)
> compile total runs 7 avg 21.44 MB/s (user 0.32s sys 2.95s)
> clean total runs 4 avg 59.91 MB/s (user 0.05s sys 0.26s)
> read tree total runs 2 avg 21.85 MB/s (user 1.12s sys 2.89s)
> read compiled tree total runs 1 avg 23.47 MB/s (user 1.45s sys 4.89s)
> delete tree total runs 2 avg 13.18 seconds (user 0.64s sys 1.02s)
> no runs for delete compiled tree
> stat tree total runs 4 avg 4.76 seconds (user 0.70s sys 0.50s)
> stat compiled tree total runs 1 avg 7.84 seconds (user 0.74s sys 0.54s)
> 
> Average metaclustering results
> ==========================================================================
> initial create total runs 30 avg 45.04 MB/s (user 1.13s sys 2.42s)
> create total runs 5 avg 15.64 MB/s (user 1.08s sys 1.98s)
> patch total runs 4 avg 10.50 MB/s (user 0.61s sys 3.11s)
> compile total runs 7 avg 28.07 MB/s (user 0.33s sys 4.06s)
> clean total runs 4 avg 83.27 MB/s (user 0.04s sys 0.27s)
> read tree total runs 2 avg 21.17 MB/s (user 1.15s sys 2.91s)
> read compiled tree total runs 1 avg 22.79 MB/s (user 1.38s sys 4.89s)
> delete tree total runs 2 avg 9.23 seconds (user 0.62s sys 1.01s)
> no runs for delete compiled tree
> stat tree total runs 4 avg 4.72 seconds (user 0.71s sys 0.50s)
> stat compiled tree total runs 1 avg 6.50 seconds (user 0.79s sys 0.53s)
> 
> Overall, metaclustering does better than vanilla except in a few cases.
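[As a quick cross-check, the relative changes implied by the two tables
can be computed directly; the figures below are copied from the averages
reported above. Note the delete-tree entry is seconds, where lower is
better, unlike the MB/s throughput rows.]

```python
# Averages copied from the two result tables above.
# Throughput rows are MB/s (higher is better); "delete tree (s)"
# is wall-clock seconds (lower is better).
vanilla = {"create": 12.90, "patch": 8.70, "compile": 21.44,
           "clean": 59.91, "delete tree (s)": 13.18}
metacluster = {"create": 15.64, "patch": 10.50, "compile": 28.07,
               "clean": 83.27, "delete tree (s)": 9.23}

for name in vanilla:
    v, m = vanilla[name], metacluster[name]
    pct = 100.0 * (m - v) / v
    print(f"{name:16s} {v:7.2f} -> {m:7.2f} ({pct:+.1f}%)")
```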

Well, it strikes me as about half up and half down, but the ups are
indeed much more substantial. Looks quite promising.

-- 
Mathematics is the supreme nostalgia of our time.
