Date: Wed, 22 May 2024 21:35:57 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>, 
	Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>, 
	Muchun Song <muchun.song@...ux.dev>, ying.huang@...el.com, feng.tang@...el.com, 
	fengwei.yin@...el.com, oliver.sang@...el.com, kernel-team@...a.com, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] memcg: rearrange fields of mem_cgroup_per_node

On Wed, May 22, 2024 at 8:48 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> The kernel test robot reported [1] a performance regression in the
> will-it-scale test suite's page_fault2 test case for commit 70a64b7919cb
> ("memcg: dynamically allocate lruvec_stats"). On inspection, the commit
> appears to have unintentionally introduced false cacheline sharing.
>
> After the commit, the fields of mem_cgroup_per_node which get read on
> the performance-critical path share a cacheline with the fields which
> get updated often on LRU page allocations and deallocations. This has
> caused contention on that cacheline, and workloads which manipulate a
> lot of LRU pages regressed, as the test report shows.
>
> The solution is to rearrange the fields of mem_cgroup_per_node such
> that the false sharing is eliminated: move all the read-only pointers
> to the start of the struct, followed by the memcg-v1-only fields, and
> place the fields which get updated often at the end.
>
> Experiment setup: ran fallocate1, fallocate2, page_fault1, page_fault2
> and page_fault3 from the will-it-scale test suite inside a three-level
> memcg, with /tmp mounted as tmpfs, on two different machines: one with
> a single NUMA node and one with two NUMA nodes.
>
>  $ ./[testcase]_processes -t $NR_CPUS -s 50
>
> Results for single node, 52 CPU machine:
>
> Testcase        base        with-patch
>
> fallocate1      1031081     1431291  (38.80 %)
> fallocate2      1029993     1421421  (38.00 %)
> page_fault1     2269440     3405788  (50.07 %)
> page_fault2     2375799     3572868  (50.30 %)
> page_fault3     28641143    28673950 ( 0.11 %)
>
> Results for dual node, 80 CPU machine:
>
> Testcase        base        with-patch
>
> fallocate1      2976288     3641185  (22.33 %)
> fallocate2      2979366     3638181  (22.11 %)
> page_fault1     6221790     7748245  (24.53 %)
> page_fault2     6482854     7847698  (21.05 %)
> page_fault3     28804324    28991870 ( 0.65 %)

Great analysis :)
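
For anyone less familiar with the failure mode, here is a minimal
userspace sketch of the layout bug being fixed (not kernel code; the
struct and field names are made up for illustration). Two fields that
land on the same cacheline make every hot write evict the read-mostly
data from other CPUs' caches:

    #include <stdatomic.h>

    /* Mirrors the bug: a read-mostly pointer shares a cacheline with
     * a counter that is bumped on every LRU operation, so readers of
     * backptr keep taking cache misses. */
    struct bad_layout {
            void *backptr;          /* read on the hot fault path */
            _Atomic long nr_pages;  /* written constantly */
    };

    /* The fix in miniature. The patch relies on field order and
     * distance rather than explicit alignment, but forcing the hot
     * field onto its own (typically 64-byte) cacheline makes the
     * separation explicit: */
    struct good_layout {
            void *backptr;
            _Atomic long nr_pages __attribute__((aligned(64)));
    };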

>
> Fixes: 70a64b7919cb ("memcg: dynamically allocate lruvec_stats")
> Reported-by: kernel test robot <oliver.sang@...el.com>
> Closes: https://lore.kernel.org/oe-lkp/202405171353.b56b845-oliver.sang@intel.com
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> ---
>  include/linux/memcontrol.h | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 030d34e9d117..16efd9737be9 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -96,23 +96,25 @@ struct mem_cgroup_reclaim_iter {
>   * per-node information in memory controller.
>   */
>  struct mem_cgroup_per_node {
> -       struct lruvec           lruvec;
> +       /* Keep the read-only fields at the start */
> +       struct mem_cgroup       *memcg;         /* Back pointer, we cannot */
> +                                               /* use container_of        */
>
>         struct lruvec_stats_percpu __percpu     *lruvec_stats_percpu;
>         struct lruvec_stats                     *lruvec_stats;
> -
> -       unsigned long           lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
> -
> -       struct mem_cgroup_reclaim_iter  iter;
> -
>         struct shrinker_info __rcu      *shrinker_info;
>
> +       /* memcg-v1 only stuff in middle */
> +
>         struct rb_node          tree_node;      /* RB tree node */
>         unsigned long           usage_in_excess;/* Set to the value by which */
>                                                 /* the soft limit is exceeded*/
>         bool                    on_tree;
> -       struct mem_cgroup       *memcg;         /* Back pointer, we cannot */
> -                                               /* use container_of        */

Do we need CACHELINE_PADDING() here (or maybe to make struct lruvec
cache-aligned) to make sure the false cacheline sharing doesn't happen
again with the fields below? Or is the idea that the fields read in hot
paths (memcg, lruvec_stats_percpu, lruvec_stats) are at the top, far
away, and the memcg v1 fields in the middle act as a buffer?

IOW, is sharing between the fields below and the memcg v1 fields okay
because the latter are not read in the hot path? If so, I believe it's
worth a comment. The assumption could easily be missed if, for example,
the memcg v1 soft limit is removed later.
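
To make the question concrete, I am thinking of something like this on
top of your patch (untested sketch; CACHELINE_PADDING() is the padding
helper from include/linux/cache.h):

    	bool			on_tree;
    +
    +	CACHELINE_PADDING(_pad1_);

    	/* Fields which get updated often at the end. */
    	struct lruvec		lruvec;

i.e. explicitly start the frequently-updated part of the struct on a
fresh cacheline instead of relying on the v1 fields as incidental
padding.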

> +
> +       /* Fields which get updated often at the end. */
> +       struct lruvec           lruvec;
> +       unsigned long           lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
> +       struct mem_cgroup_reclaim_iter  iter;
>  };
>
>  struct mem_cgroup_threshold {
> --
> 2.43.0
>
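
On the "easily missed later" point: one way to make the layout
assumption self-documenting would be a build-time check, roughly along
these lines (hypothetical, not part of the patch; it assumes the struct
itself starts on a cacheline boundary, which slab alignment does not
strictly guarantee):

    #include <linux/build_bug.h>
    #include <linux/cache.h>

    /* Fail the build if the hot lruvec ever slides back onto the
     * cacheline that holds the read-mostly memcg back-pointer. */
    static_assert(offsetof(struct mem_cgroup_per_node, lruvec) / SMP_CACHE_BYTES !=
    	      offsetof(struct mem_cgroup_per_node, memcg) / SMP_CACHE_BYTES);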
