Message-ID: <YwLlsr0jNq5m6v8z@feng-clx>
Date: Mon, 22 Aug 2022 10:10:58 +0800
From: Feng Tang <feng.tang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <songmuchun@...edance.com>,
Michal Koutný <mkoutny@...e.com>,
Eric Dumazet <edumazet@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
"Sang, Oliver" <oliver.sang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"lkp@...ts.01.org" <lkp@...ts.01.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] mm: page_counter: rearrange struct page_counter fields
On Mon, Aug 22, 2022 at 08:17:36AM +0800, Shakeel Butt wrote:
> With memcg v2 enabled, memcg->memory.usage is a very hot member for
> workloads doing memcg charging on multiple CPUs concurrently,
> particularly network-intensive workloads. In addition, there is false
> cache line sharing between memory.usage and memory.high on the charge
> path. This patch moves usage into its own cacheline and moves all the
> read-mostly fields into a separate cacheline.
>
> To evaluate the impact of this optimization, we ran the following
> workload on a 72-CPU machine in a three-level cgroup hierarchy, with
> the top level having memory.min and memory.low set appropriately.
> More specifically, memory.min was set to the size of the netperf
> binary and memory.low to double that.
>
> $ netserver -6
> # 36 instances of netperf with the following params
> $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K
>
> Results (average throughput of netperf):
> Without (6.0-rc1) 10482.7 Mbps
> With patch 12413.7 Mbps (18.4% improvement)
>
> One side-effect of this patch is an increase in the size of struct
> mem_cgroup. However, the additional size is worth it for the
> performance improvement. In addition, there are opportunities to
> reduce the size of struct mem_cgroup, such as deprecating the kmem
> and tcpmem page counters and better packing.
>
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> Reported-by: kernel test robot <oliver.sang@...el.com>
Looks good to me, with one nit below.
Reviewed-by: Feng Tang <feng.tang@...el.com>
> ---
> include/linux/page_counter.h | 34 +++++++++++++++++++++++-----------
> 1 file changed, 23 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> index 679591301994..8ce99bde645f 100644
> --- a/include/linux/page_counter.h
> +++ b/include/linux/page_counter.h
> @@ -3,15 +3,27 @@
> #define _LINUX_PAGE_COUNTER_H
>
> #include <linux/atomic.h>
> +#include <linux/cache.h>
> #include <linux/kernel.h>
> #include <asm/page.h>
>
> +#if defined(CONFIG_SMP)
> +struct pc_padding {
> + char x[0];
> +} ____cacheline_internodealigned_in_smp;
> +#define PC_PADDING(name) struct pc_padding name
> +#else
> +#define PC_PADDING(name)
> +#endif
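IIUC, PC_PADDING() compiles to nothing on !SMP, while on SMP it
inserts a zero-size member aligned to the internode cacheline size, so
the fields after it start on a fresh cacheline. A minimal sketch of the
usage pattern (the field names below are illustrative, not the exact
layout from this patch):

struct example_counter {
	/* Hot: written on every charge/uncharge from many CPUs. */
	atomic_long_t usage;
	PC_PADDING(_pad1_);
	/* Read-mostly limits, no longer false-sharing with usage. */
	unsigned long min;
	unsigned long low;
	unsigned long high;
	unsigned long max;
};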
There are 2 similar padding definitions in memcontrol.h and mmzone.h:

struct memcg_padding {
	char x[0];
} ____cacheline_internodealigned_in_smp;
#define MEMCG_PADDING(name)	struct memcg_padding name

struct zone_padding {
	char x[0];
} ____cacheline_internodealigned_in_smp;
#define ZONE_PADDING(name)	struct zone_padding name;

Maybe we can generalize them and lift the definition into
include/linux/cache.h, so that more places can reuse it in the future,
along the lines of the sketch below.
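Just a rough sketch of what such a generalized helper could look like
(the name is illustrative, not an existing kernel symbol):

#if defined(CONFIG_SMP)
/* Zero-size member that pushes the following fields onto a new cacheline. */
struct cacheline_padding {
	char x[0];
} ____cacheline_internodealigned_in_smp;
#define CACHELINE_PADDING(name)	struct cacheline_padding name
#else
#define CACHELINE_PADDING(name)
#endif

Then PC_PADDING, MEMCG_PADDING and ZONE_PADDING could all become plain
users of the shared macro.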
Thanks,
Feng