Message-ID: <20140108111440.GA10467@localhost>
Date: Wed, 8 Jan 2014 19:14:40 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Dave Chinner <david@...morbit.com>
Cc: Glauber Costa <glommer@...allels.com>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
lkp@...ux.intel.com
Subject: Re: [numa shrinker] 9b17c62382: -36.6% regression on sparse file copy
On Tue, Jan 07, 2014 at 12:10:42AM +1100, Dave Chinner wrote:
> On Mon, Jan 06, 2014 at 04:20:48PM +0800, fengguang.wu@...el.com wrote:
> > Hi Dave,
> >
> > We noticed throughput drop in test case
> >
> > vm-scalability/300s-lru-file-readtwice (*)
> >
> > between v3.11 and v3.12, and it's still low as of v3.13-rc6:
> >
> >        v3.11                             v3.12                      v3.13-rc6
> > ---------------  -------------------------  -------------------------
> >   14934707 ~ 0%      -48.8%    7647311 ~ 0%      -47.6%    7829487 ~ 0%  vm-scalability.throughput
> >              ^^      ^^^^^^
> >         stddev%     change%
>
> What does this vm-scalability.throughput number mean?
It's the sum of the throughput reported by all 240 dd processes (two readers per file, 120 files):
8781176832 bytes (8.8 GB) copied, 299.97 s, 29.3 MB/s
2124931+0 records in
2124930+0 records out
8703713280 bytes (8.7 GB) copied, 299.97 s, 29.0 MB/s
2174078+0 records in
2174077+0 records out
...
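A minimal sketch of that aggregation, assuming each dd's stderr had been
captured to a per-process log (the dd-*.log path below is hypothetical,
not what vm-scalability actually uses):

    # Hypothetical sketch: sum the MB/s column of each dd summary line,
    # e.g. "8781176832 bytes (8.8 GB) copied, 299.97 s, 29.3 MB/s".
    # Assumes the 240 dd stderr streams were saved as dd-*.log.
    awk '/copied,/ { total += $(NF-1) }
         END { printf "total throughput: %.1f MB/s\n", total }' \
        /tmp/vm-scalability/dd-*.log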
> > (*) The test case basically does
> >
> > truncate -s 135080058880 /tmp/vm-scalability.img
> > mkfs.xfs -q /tmp/vm-scalability.img
> > mount -o loop /tmp/vm-scalability.img /tmp/vm-scalability
> >
> > nr_cpu=120
> > for i in $(seq 1 $nr_cpu)
> > do
> >     sparse_file=/tmp/vm-scalability/sparse-lru-file-readtwice-$i
> >     truncate $sparse_file -s 36650387592
> >     dd if=$sparse_file of=/dev/null &
> >     dd if=$sparse_file of=/dev/null &
> > done
>
> So a page cache load of reading 120x36GB files twice concurrently?
Yes.
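Each dd keeps streaming its sparse file into /dev/null until the 300s
window ends (hence the ~299.97s figures above). A sketch of how the run
gets wound down, though the exact harness logic may differ:

    # Sketch only: bound the run to the 300s window seen in the dd output.
    sleep 300
    killall -INT dd   # GNU dd prints its transfer statistics on SIGINT
    wait              # reap the background dd processes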
> There's no increase in system time, so it can't be that the
> shrinkers are running wild.
>
> FWIW, I'm at LCA right now, so it's going to be a week before I can
> look at this, so if you can find any behavioural difference in the
> shrinkers (e.g. from perf profiles, on different filesystems, etc)
> I'd appreciate it...
OK, enjoy your time! I'll try different parameters and check if that
makes any difference.
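For the perf profiles you mentioned, something like the below should show
whether the shrinker/list_lru paths behave differently between the two
kernels (a rough sketch; the symbol filter is only a guess):

    # Rough sketch: profile the whole system during the steady read phase
    # on each kernel, then compare time spent in shrinker / list_lru paths.
    perf record -a -g -o perf.data.$(uname -r) -- sleep 60
    perf report -i perf.data.$(uname -r) --stdio | grep -iE 'shrink|list_lru'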
Thanks,
Fengguang