Message-ID: <20131221154925.GA7450@localhost>
Date: Sat, 21 Dec 2013 23:49:25 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Mel Gorman <mgorman@...e.de>
Cc: Alex Shi <alex.shi@...aro.org>, Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
H Peter Anvin <hpa@...or.com>, Linux-X86 <x86@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] Fix ebizzy performance regression due to X86 TLB
range flush v2

Hi Mel,
On Fri, Dec 20, 2013 at 04:44:26PM +0000, Mel Gorman wrote:
> On Fri, Dec 20, 2013 at 11:51:43PM +0800, Fengguang Wu wrote:
> > On Thu, Dec 19, 2013 at 02:34:50PM +0000, Mel Gorman wrote:
[snip]
> > > I doubt hackbench is doing any flushes and the 1.2% is noise.
> >
> > Here are the proc-vmstat.nr_tlb_remote_flush numbers for hackbench:
> >
> > 513 ~ 3% +4.3e+16% 2.192e+17 ~85% lkp-nex05/micro/hackbench/800%-process-pipe
> > 603 ~ 3% +7.7e+16% 4.669e+17 ~13% lkp-nex05/micro/hackbench/800%-process-socket
> > 6124 ~17% +5.7e+15% 3.474e+17 ~26% lkp-nex05/micro/hackbench/800%-threads-pipe
> > 7565 ~49% +5.5e+15% 4.128e+17 ~68% lkp-nex05/micro/hackbench/800%-threads-socket
> > 21252 ~ 6% +1.3e+15% 2.728e+17 ~39% lkp-snb01/micro/hackbench/1600%-threads-pipe
> > 24516 ~16% +8.3e+14% 2.034e+17 ~53% lkp-snb01/micro/hackbench/1600%-threads-socket
> >
>
> This is a surprise. The differences I can understand because of changes
> in accounting but not the flushes themselves. The only flushes I would
> expect are when the process exits and the regions are torn down.
>
> The exception would be if automatic NUMA balancing was enabled and this
> was a NUMA machine. In that case, NUMA hinting faults could be migrating
> memory and triggering flushes.

You are right: the kconfig (attached) does have CONFIG_NUMA_BALANCING=y,
and lkp-nex05 is a 4-socket NHM-EX machine; lkp-snb01 is a 2-socket
SNB machine.
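
(For reference, the NUMA balancing activity can be cross-checked from the
standard /proc/vmstat counters; a minimal sketch, assuming the counters
exposed when CONFIG_NUMA_BALANCING=y:)

	# Non-zero deltas across a hackbench run would confirm that automatic
	# NUMA balancing is faulting and migrating pages, which is what
	# triggers the remote flushes.
	grep -E 'numa_(hint_faults|hint_faults_local|pages_migrated)' /proc/vmstat
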
> Could you do something like
>
> # perf probe native_flush_tlb_others
> # cd /sys/kernel/debug/tracing
> # echo sym-offset > trace_options
> # echo sym-addr > trace_options
> # echo stacktrace > trace_options
> # echo 1 > events/probe/native_flush_tlb_others/enable
> # cat trace_pipe > /tmp/log
>
> and get a breakdown of what the source of these remote flushes are
> please?

Sure. Attached is the log file.
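
(In case it is useful, here is a rough sketch of how the flush call sites
could be tallied from such a log; it assumes the "=>" frame lines that the
stacktrace option emits in the trace_pipe output:)

	# Count how often each function shows up in the recorded stacks for
	# native_flush_tlb_others, e.g. NUMA hinting fault paths vs. the
	# munmap/exit teardown paths.
	grep ' => ' /tmp/log | awk '{ sub(/\+.*/, "", $2); print $2 }' |
		sort | uniq -c | sort -rn | head -20
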
> > This time, the ebizzy params are refreshed and the test case is
> > exercised on all our test machines. The results that have changed are:
> >
> > v3.13-rc3 eabb1f89905a0c809d13
> > --------------- -------------------------
> > 873 ~ 0% +0.7% 879 ~ 0% lkp-a03/micro/ebizzy/200%-100-10
> > 873 ~ 0% +0.7% 879 ~ 0% lkp-a04/micro/ebizzy/200%-100-10
> > 873 ~ 0% +0.8% 880 ~ 0% lkp-a06/micro/ebizzy/200%-100-10
> > 49242 ~ 0% -1.2% 48650 ~ 0% lkp-ib03/micro/ebizzy/200%-100-10
> > 26176 ~ 0% -1.6% 25760 ~ 0% lkp-sbx04/micro/ebizzy/200%-100-10
> > 2738 ~ 0% +0.2% 2744 ~ 0% lkp-t410/micro/ebizzy/200%-100-10
> > 80776 -1.2% 79793 TOTAL ebizzy.throughput
> >
>
> No change on lkp-ib03, where I would have expected some difference. Thing
> is, for ebizzy to notice, the number of TLB entries matters. On both
> machines I tested, the last level TLB had 512 entries. How many entries
> are on the last level TLB on lkp-ib03?

[    0.116154] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[    0.116154] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0

> > > I do see a few major regressions like this
> > >
> > > > 324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
> > >
> > > but I have no idea what the test is doing and whether something happened
> > > that the test broke that time or if it's something to be really
> > > concerned about.
> >
> > This test case simply creates sparse files, populates them with zeros,
> > then deletes them in parallel. Here $mem is the physical memory size
> > (128G) and $nr_cpu is 120.
> >
> > for i in `seq $nr_cpu`
> > do
> > 	create_sparse_file $SPARSE_FILE-$i $((mem / nr_cpu))
> > 	cp $SPARSE_FILE-$i /dev/null
> > done
> >
> > for i in `seq $nr_cpu`
> > do
> > 	rm $SPARSE_FILE-$i &
> > done
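
(create_sparse_file is a helper in our test scripts; a minimal sketch of
what it amounts to, assuming a plain truncate-based implementation, the
real helper may differ:)

	# Roughly: create_sparse_file <path> <size> makes a file of the
	# requested size without allocating data blocks, so the later
	# cp to /dev/null just reads back zero pages.
	create_sparse_file()
	{
		truncate -s "$2" "$1"
	}
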
> >
>
> In itself, that does not explain why the result was 0 with the series
> applied. The 3.13-rc3 result was "324497". 324497 what?

It's the proc-vmstat.nr_tlb_local_flush_one number; the metric name is shown
at the end of every "TOTAL" line:
v3.13-rc3 eabb1f89905a0c809d13
--------------- -------------------------
...
324497 ~ 0% -100.0% 0 ~ 0% brickland2/micro/vm-scalability/16G-truncate
...
99986527 +3e+14% 2.988e+20 TOTAL proc-vmstat.nr_tlb_local_flush_one
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
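
(The proc-vmstat.* metrics are the kernel's TLB flush counters as exposed
in /proc/vmstat; on a kernel that has them compiled in they can be
inspected directly, e.g.:)

	# Lists nr_tlb_remote_flush, nr_tlb_remote_flush_received,
	# nr_tlb_local_flush_all and nr_tlb_local_flush_one.
	grep ^nr_tlb /proc/vmstat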

BTW, I've got the full test results for hackbench; attached are the new
comparison results. There are small ups and downs, but overall no big
regressions.

Thanks,
Fengguang
View attachment "perf-probe" of type "text/plain" (323123 bytes)
View attachment "config-3.13.0-rc3-00004-geabb1f8" of type "text/plain" (81251 bytes)
View attachment "eabb1f89905a0c809d13ec27795ced089c107eb8" of type "text/plain" (35733 bytes)