Message-ID: <20140106131042.GA5145@destitution>
Date:	Tue, 7 Jan 2014 00:10:42 +1100
From:	Dave Chinner <david@...morbit.com>
To:	fengguang.wu@...el.com
Cc:	Glauber Costa <glommer@...allels.com>,
	Linux Memory Management List <linux-mm@...ck.org>,
	linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	lkp@...ux.intel.com
Subject: Re: [numa shrinker] 9b17c62382: -36.6% regression on sparse file copy

On Mon, Jan 06, 2014 at 04:20:48PM +0800, fengguang.wu@...el.com wrote:
> Hi Dave,
> 
> We noticed throughput drop in test case
> 
>         vm-scalability/300s-lru-file-readtwice (*)
> 
> between v3.11 and v3.12, and it's still low as of v3.13-rc6:
> 
>           v3.11                      v3.12                  v3.13-rc6
> ---------------  -------------------------  -------------------------
>   14934707 ~ 0%     -48.8%    7647311 ~ 0%     -47.6%    7829487 ~ 0%  vm-scalability.throughput
>              ^^     ^^^^^^
>         stddev%    change%
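[For reference, the change% column is the relative delta of each kernel's throughput against the v3.11 baseline. A minimal sketch recomputing it from the numbers quoted above (the awk invocation is illustrative and not part of the vm-scalability harness):

```shell
# Recompute change% from the quoted table:
# change% = (value - baseline) / baseline * 100, baseline = v3.11 throughput.
baseline=14934707
for v in 7647311 7829487; do
    awk -v b="$baseline" -v x="$v" \
        'BEGIN { printf "%.1f%%\n", (x - b) / b * 100 }'
done
# prints:
# -48.8%
# -47.6%
```
]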

What does this vm-scalability.throughput number mean?

> (*) The test case basically does
> 
>         truncate -s 135080058880 /tmp/vm-scalability.img
>         mkfs.xfs -q /tmp/vm-scalability.img
>         mount -o loop /tmp/vm-scalability.img /tmp/vm-scalability 
> 
>         nr_cpu=120
>         for i in $(seq 1 $nr_cpu)
>         do     
>                 sparse_file=/tmp/vm-scalability/sparse-lru-file-readtwice-$i
>                 truncate $sparse_file -s 36650387592
>                 dd if=$sparse_file of=/dev/null &
>                 dd if=$sparse_file of=/dev/null &
>         done

So a page cache load of reading 120x36GB files twice concurrently?
There's no increase in system time, so it can't be that the
shrinkers are running wild.
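[One quick way to confirm the "no increase in system time" observation on a live system: field 4 of the "cpu" line in /proc/stat is cumulative system-mode jiffies, so sampling it around the workload shows whether the kernel is burning extra CPU. An illustrative sketch, not part of the vm-scalability harness:

```shell
# Measure system CPU time consumed across a workload by sampling
# /proc/stat (field 4 of the "cpu" line = system-mode jiffies).
sys_jiffies() { awk '/^cpu /{ print $4 }' /proc/stat; }

before=$(sys_jiffies)
# ... run the workload under test here ...
after=$(sys_jiffies)
echo "system jiffies consumed: $((after - before))"
```
]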

FWIW, I'm at LCA right now, so it's going to be a week before I can
look at this. If you can find any behavioural difference in the
shrinkers in the meantime (e.g. from perf profiles, on different
filesystems, etc.) I'd appreciate it...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
