Message-ID: <20180720103627.GA29084@n2100.armlinux.org.uk>
Date:   Fri, 20 Jul 2018 11:36:27 +0100
From:   Russell King - ARM Linux <linux@...linux.org.uk>
To:     "Ooi, Tzy Way" <tzy.way.ooi@...el.com>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "See, Chin Liang" <chin.liang.see@...el.com>,
        "Tan, Ley Foon" <ley.foon.tan@...el.com>,
        "Nguyen, Dinh" <dinh.nguyen@...el.com>,
        "Aw, Khai Liang" <khai.liang.aw@...el.com>
Subject: Re: Enquiry on unbalanced memory throughput for dual-Cortex A9 core.

On Fri, Jul 20, 2018 at 08:49:47AM +0000, Ooi, Tzy Way wrote:
> Hi Russell,
> 
> I am testing the memory write operation with the lmbench benchmark. I
> ran the memory write test described here
> <http://lmbench.sourceforge.net/cgi-bin/man?section=8&keyword=bw_mem>
> twice, so that each Cortex A9 core works on one of the two processes.
> Both processors perform the write operation to memory at almost the
> same time.
> 
> As shown in the picture below, the memory throughput from one of the
> cores is about double the throughput of the other core, i.e. 377MB/s
> vs 728MB/s.
> 
> [inline image: image001.png - screenshot of the bw_mem throughput results]
> 
> I have tested this operation across a few dual-core Cortex A9 boards
> and all of them show the same result. The test was run on kernel
> version 4.9 and on the newest kernel, 4.18.0-rc2.

Here's how 4.14 behaves on an iMX6D SoC (also dual core Cortex A9):

$ taskset -c 0 ./bw_mem -N 1000 1M fwr & taskset -c 1 ./bw_mem -N 1000 1M fwr
[1] 21799
1.00 521.10
1.00 497.27
[1]+  Done                    taskset -c 0 ./bw_mem -N 1000 1M fwr
$ taskset -c 0 ./bw_mem -N 1000 1M fwr & taskset -c 1 ./bw_mem -N 1000 1M fwr
[1] 21803
1.00 520.83
1.00 496.44

which shows some asymmetry but nowhere near yours.

I'm using taskset to pin each instance to a particular CPU - you'll
see why further down.  Even without it, I get results similar to those
I mention above.
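
For reference, taskset just sets the process's CPU affinity mask; the
same thing can be done from inside a program.  A minimal sketch using
sched_setaffinity(2) - an illustration only, not code from bw_mem:

/* Sketch: pin the calling process to one CPU, as taskset -c does.
 * Illustration only - not part of bw_mem or lmbench. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
	cpu_set_t set;
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);	/* allow only this one CPU */
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	/* ... the timed benchmark loop would run here ... */
	return 0;
}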

Now, playing around with this so that we can identify which bw_mem
output is which:

$ taskset -c 0 ./bw_mem -N 1000 1M fwr & c1=$(taskset -c 1 ./bw_mem -N 1000 1M fwr 2>&1); echo "c1: $c1"
[1] 21876
1.00 521.92
c1: 1.00 496.69
$ taskset -c 1 ./bw_mem -N 1000 1M fwr & c1=$(taskset -c 0 ./bw_mem -N 1000 1M fwr 2>&1); echo "c0: $c1"
[1] 21881
c0: 1.00 521.83
1.00 496.20

CPU0 is always the slightly faster of the two.  If we use /usr/bin/time
to time these:

CPU0:
6.10user 0.25system 0:06.56elapsed 96%CPU (0avgtext+0avgdata 1664maxresident)k
0inputs+0outputs (0major+407minor)pagefaults 0swaps

CPU1:
6.36user 0.24system 0:06.77elapsed 97%CPU (0avgtext+0avgdata 1600maxresident)k
0inputs+0outputs (0major+399minor)pagefaults 0swaps

So, CPU1 takes slightly longer in userspace, and has fewer resident
pages and fewer minor faults, which is rather odd.  Repeatedly running
just one instance gives different results each time... disabling
virtual address space randomisation solves that:

  echo 0 >/proc/sys/kernel/randomize_va_space

which then gives me:

CPU0: 1.00 520.20
6.18user 0.20system 0:06.59elapsed 96%CPU (0avgtext+0avgdata 1700maxresident)k
0inputs+0outputs (0major+403minor)pagefaults 0swaps
CPU1: 1.00 496.61
6.46user 0.14system 0:06.77elapsed 97%CPU (0avgtext+0avgdata 1700maxresident)k
0inputs+0outputs (0major+403minor)pagefaults 0swaps

CPU0: 1.00 521.10
6.13user 0.21system 0:06.57elapsed 96%CPU (0avgtext+0avgdata 1700maxresident)k
0inputs+0outputs (0major+403minor)pagefaults 0swaps
CPU1: 1.00 498.01
6.40user 0.18system 0:06.75elapsed 97%CPU (0avgtext+0avgdata 1700maxresident)k
0inputs+0outputs (0major+403minor)pagefaults 0swaps

which is rather more stable as far as resource usage between the two
CPUs goes, but there is still an asymmetry in the reported bandwidths
and times.  So, this rules out differences in VA layout.
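
Note that the sysctl above disables randomisation system-wide.  If you
only want it off for the process under test, setarch -R does the same
thing per-process via the ADDR_NO_RANDOMIZE personality flag; a rough
sketch of that mechanism (my illustration, untested here):

/* Sketch: run a program with address-space randomisation disabled,
 * scoped to this process tree rather than system-wide.  Roughly
 * what setarch -R does. */
#define _GNU_SOURCE
#include <sys/personality.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s prog [args...]\n", argv[0]);
		return 1;
	}
	/* fetch the current persona and OR in ADDR_NO_RANDOMIZE */
	if (personality(personality(0xffffffff) | ADDR_NO_RANDOMIZE) < 0) {
		perror("personality");
		return 1;
	}
	execvp(argv[1], &argv[1]);	/* the exec'd image inherits it */
	perror("execvp");
	return 1;
}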

Now for the interesting bit... it's important to understand what is
being measured and how.  Looking at bw_mem.c and the associated
source code, it measures the performance against the wall clock, which
includes everything that the system is doing on each particular CPU.
So, if a CPU is interrupted by another thread wanting to run, it'll
affect the results.  Hence, it's best to run on an otherwise quiet
system, eg, without an init daemon (eg, booted with init=/bin/sh on
the kernel command line - but note there won't be any job control,
so ^C won't work!)
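
To make that concrete, the heart of such a wall-clock measurement
boils down to something like this - a simplified sketch of the idea,
not the actual lmbench code:

/* Sketch: wall-clock bandwidth measurement.  Write a buffer
 * repeatedly and divide bytes moved by *elapsed* time, so anything
 * else that runs on the CPU inflates the elapsed time and deflates
 * the reported MB/s. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

int main(void)
{
	size_t size = 1024 * 1024;	/* 1M, as in the runs above */
	int i, iters = 1000;		/* -N 1000 */
	char *buf = malloc(size);
	struct timeval t0, t1;
	double secs;

	gettimeofday(&t0, NULL);
	for (i = 0; i < iters; i++)
		memset(buf, 1, size);	/* stand-in for the write loop */
	gettimeofday(&t1, NULL);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%.2f MB/s\n", (double)size * iters / secs / 1e6);
	free(buf);
	return 0;
}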

However, continuing on...

If I run bw_mem on just one CPU:

CPU1: 1.00 2617.31
5.74user 0.18system 0:06.03elapsed 98%CPU (0avgtext+0avgdata 1700maxresident)k
0inputs+0outputs (0major+403minor)pagefaults 0swaps

Same number of iterations, same memory size, but notice that bw_mem
reports a much higher bandwidth while the time taken is about the
same.  cpufreq comes to mind, but that's disabled on this system.

So, it brings up a rather obvious question: what exactly is bw_mem
measuring, and is it measuring it correctly?

$ /usr/bin/time taskset -c 1 ./bw_mem -P 1 -N 1000 1M fwr
1.00 2601.26
5.80user 0.16system 0:06.06elapsed 98%CPU (0avgtext+0avgdata 1700maxresident)k
0inputs+0outputs (0major+403minor)pagefaults 0swaps
$ /usr/bin/time ./bw_mem -P 2 -N 1000 1M fwr
^CCommand terminated by signal 2
5.54user 0.13system 1:12.20elapsed 7%CPU (0avgtext+0avgdata 1696maxresident)k
0inputs+0outputs (0major+365minor)pagefaults 0swaps

so requesting a parallelism of 2 results in the program seemingly never
ending in a reasonable period of time, which suggests a bug somewhere.
Are we sure that bw_mem is actually working as intended?
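
For anyone wanting to dig into that: as far as I know, lmbench runs
parallel copies through its benchmp() harness in lib_timing.c, which
forks the children and synchronises them over pipes before timing.  A
much-simplified sketch of that fork-and-sync pattern (my illustration,
not lmbench's actual code) shows the kind of place such a hang can
come from:

/* Sketch: fork N workers, wait until all are ready, then release
 * them together.  If a child dies before writing its ready byte, or
 * the counts get out of step, the parent blocks in read() forever -
 * the sort of failure mode that would match the -P 2 hang above. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static void worker(int ready_fd, int go_fd)
{
	char c = 'r';

	write(ready_fd, &c, 1);	/* tell the parent we're ready */
	read(go_fd, &c, 1);	/* block until the start signal */
	/* ... the timed benchmark loop would run here ... */
	_exit(0);
}

int main(void)
{
	int i, nproc = 2, ready[2], go[2];
	char c;

	pipe(ready);
	pipe(go);
	for (i = 0; i < nproc; i++)
		if (fork() == 0)
			worker(ready[1], go[0]);

	for (i = 0; i < nproc; i++)
		read(ready[0], &c, 1);	/* hangs if a child never reports */
	for (i = 0; i < nproc; i++)
		write(go[1], &c, 1);	/* start them all at once */

	while (wait(NULL) > 0)
		;
	return 0;
}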

Maybe if Larry is reading this, he could share some thoughts.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 13.8Mbps down 630kbps up
According to speedtest.net: 13Mbps down 490kbps up
