Date:   Mon, 4 Jan 2021 16:34:38 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Feng Tang <feng.tang@...el.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, andi.kleen@...el.com,
        tim.c.chen@...el.com, dave.hansen@...el.com, ying.huang@...el.com,
        Roman Gushchin <guro@...com>
Subject: Re: [PATCH 1/2] mm: page_counter: relayout structure to reduce false
 sharing

On Mon 04-01-21 22:44:02, Feng Tang wrote:
> On Mon, Jan 04, 2021 at 03:11:40PM +0100, Michal Hocko wrote:
> > On Mon 04-01-21 21:34:45, Feng Tang wrote:
> > > Hi Michal,
> > > 
> > > On Mon, Jan 04, 2021 at 02:03:57PM +0100, Michal Hocko wrote:
> > > > On Tue 29-12-20 22:35:13, Feng Tang wrote:
> > > > > When checking a memory cgroup related performance regression [1],
> > > > > from the perf c2c profiling data, we found high false sharing for
> > > > > accessing 'usage' and 'parent'.
> > > > > 
> > > > > On a 64-bit system, 'usage' and 'parent' are close to each other,
> > > > > and likely to share one cacheline (for cacheline sizes of 64+ bytes).
> > > > > 'usage' is usually written, while 'parent' is mostly read, due to the
> > > > > cgroup's hierarchical counting nature.
> > > > > 
> > > > > So move the 'parent' to the end of the structure to make sure they
> > > > > are in different cache lines.
> > > > 
> > > > Yes, parent is a write-once field, so having it away from other heavy
> > > > RW fields makes sense to me.
> > > >  
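For reference, a minimal sketch of the relayout being described; it is
simplified, and the exact field list of struct page_counter is assumed from
the description above rather than copied from the patch:

#include <linux/atomic.h>	/* atomic_long_t */

/*
 * Sketch only (not the exact mainline layout): keep the hot,
 * frequently written counters together and push the read-mostly
 * 'parent' pointer to the end of the structure, so it ends up in
 * a different cacheline on systems with 64-byte (or larger)
 * cachelines.
 */
struct page_counter {
	atomic_long_t usage;		/* written on every charge/uncharge */
	unsigned long min;
	unsigned long low;
	unsigned long high;
	unsigned long max;
	unsigned long watermark;
	unsigned long failcnt;

	/*
	 * 'parent' is read on every hierarchical charge but written
	 * only once at init, so keep it away from the RW fields above
	 * to avoid false sharing.
	 */
	struct page_counter *parent;
};
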
> > > > > Following are some performance data with the patch, against
> > > > > v5.11-rc1, on several generations of Xeon platforms. Most of the
> > > > > results are improvements, with only one malloc case on one platform
> > > > > showing a -4.0% regression. Each category below has several subcases
> > > > > run on different platforms, and only the worst and best scores are
> > > > > listed:
> > > > > 
> > > > > fio:				 +1.8% ~  +8.3%
> > > > > will-it-scale/malloc1:		 -4.0% ~  +8.9%
> > > > > will-it-scale/page_fault1:	 no change
> > > > > will-it-scale/page_fault2:	 +2.4% ~  +20.2%
> > > > 
> > > > What is the second number? Std?
> > > 
> > > For each case like 'page_fault2', I ran several subcases on different
> > > generations of Xeon, and only listed the lowest (first number) and
> > > highest (second number) scores.
> > > 
> > > There were 5 runs and the results are: +3.6%, +2.4%, +10.4%, +20.2%,
> > > and +4.7%; the +2.4% and +20.2% are the ones listed.
> > 
> > This should really be explained in the changelog, ideally mentioning the
> > model as well. Seeing a std would also be appreciated.
> 
> I guess I haven't made it clear (due to my poor English :))
> 
> The five scores are for different parameters on different HW:
> 
> Cascadelake (100%)    77844    +3.6%    80667   will-it-scale.per_process_ops
> Cascadelake  (50%)   182475    +2.4%   186866   will-it-scale.per_process_ops
> Haswell     (100%)    84870   +10.4%    93671   will-it-scale.per_process_ops
> Haswell      (50%)   197684   +20.2%   237585   will-it-scale.per_process_ops
> Newer Xeon   (50%)   268569    +4.7%   281320   will-it-scale.per_process_ops
> 
> +2.4% is the lowest improvement, while +20.2% is the highest. 

Please make sure to document these results in the changelog.

> 100% means the number of forked test processes equals the CPU number,
> while 50% means half of it. Each line has been run several times, and the
> scores are consistent without big deviations.

It is still a good practice to mention the number of runs and std.
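As a side note, the std being asked for here is just the sample standard
deviation over the repeated scores of one configuration. A standalone
sketch of the computation (the scores array is a made-up placeholder, not
measured data):

#include <math.h>
#include <stdio.h>

/* Placeholder per-run scores for ONE configuration (not real data). */
static const double scores[] = { 182475, 183010, 181900, 182700, 182300 };

int main(void)
{
	size_t n = sizeof(scores) / sizeof(scores[0]);
	double sum = 0.0, var = 0.0, mean, std;
	size_t i;

	for (i = 0; i < n; i++)
		sum += scores[i];
	mean = sum / n;

	for (i = 0; i < n; i++)
		var += (scores[i] - mean) * (scores[i] - mean);
	std = sqrt(var / (n - 1));	/* sample standard deviation */

	printf("mean=%.1f std=%.1f (%.2f%% of mean)\n",
	       mean, std, 100.0 * std / mean);
	return 0;
}
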
-- 
Michal Hocko
SUSE Labs
