Message-ID: <aW86/Nc2+bkopFd7@intel.com>
Date: Tue, 20 Jan 2026 16:21:16 +0800
From: Zhao Liu <zhao1.liu@...el.com>
To: Hao Li <hao.li@...ux.dev>
Cc: Vlastimil Babka <vbabka@...e.cz>, Hao Li <haolee.swjtu@...il.com>,
akpm@...ux-foundation.org, harry.yoo@...cle.com, cl@...two.org,
rientjes@...gle.com, roman.gushchin@...ux.dev, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, tim.c.chen@...el.com,
yu.c.chen@...el.com, zhao1.liu@...el.com
Subject: Re: [PATCH v2] slub: keep empty main sheaf as spare in
__pcs_replace_empty_main()
> 1. Machine Configuration
>
> The topology of my machine is as follows:
>
> CPU(s): 384
> On-line CPU(s) list: 0-383
> Thread(s) per core: 2
> Core(s) per socket: 96
> Socket(s): 2
> NUMA node(s): 2
It seems like this is a GNR machine - maybe SNC could be enabled.
> Since my machine only has 192 cores when counting physical cores, I had to
> enable SMT to support the higher number of tasks in the LKP test cases. My
> configuration was as follows:
>
> will-it-scale:
> mode: process
> test: mmap2
> no_affinity: 0
> smt: 1
For lkp, the smt parameter is disabled. I tried with smt=1 locally, and
the difference between "with fix" & "w/o fix" is not significant. Maybe
the smt parameter could be set to 0.
On another machine (2 sockets with SNC3 enabled - 6 NUMA nodes), a
similar regression happens when tasks fill up a socket and there are
more get_partial_node() calls.
> Here's the "perf report --no-children -g" output with the patch:
>
> ```
> + 30.36% mmap2_processes [kernel.kallsyms] [k] perf_iterate_ctx
> - 28.80% mmap2_processes [kernel.kallsyms] [k] native_queued_spin_lock_slowpath
> - 24.72% testcase
> - 24.71% __mmap
> - 24.68% entry_SYSCALL_64_after_hwframe
> - do_syscall_64
> - 24.61% ksys_mmap_pgoff
> - 24.57% vm_mmap_pgoff
> - 24.51% do_mmap
> - 24.30% __mmap_region
> - 18.33% mas_preallocate
> - 18.30% mas_alloc_nodes
> - 18.30% kmem_cache_alloc_noprof
> - 18.28% __pcs_replace_empty_main
> + 9.06% barn_replace_empty_sheaf
> + 6.12% barn_get_empty_sheaf
> + 3.09% refill_sheaf
> ```
This is the difference from my previous perf report: here the proportion
of refill_sheaf is low, which indicates the sheaves are sufficient most
of the time.
Back to my previous test: I'm guessing that with this fix, under extreme
conditions of massive mmap usage, each CPU now keeps an empty spare sheaf
locally, whereas previously each CPU's spare sheaf was NULL. So memory
pressure increases with more spare sheaves held locally, and in that
extreme scenario cross-socket remote NUMA access incurs significant
overhead, which is why the regression shows up here.
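
To make that a bit more concrete, here is a tiny user-space sketch of how
I understand the before/after behaviour. This is not the real slub.c
code - struct sheaf, struct pcs and the barn_swap_for_full() helper are
simplified stand-ins I made up for illustration, not the actual kernel
interfaces:

```
/*
 * Toy model of the spare-sheaf handling discussed above -- NOT the
 * kernel implementation.  All types and helpers are made-up stand-ins.
 */
#include <stddef.h>
#include <stdio.h>

struct sheaf {
	int objects;		/* objects cached in this sheaf */
};

struct pcs {			/* per-CPU sheaves: main + optional spare */
	struct sheaf *main;
	struct sheaf *spare;
};

static struct sheaf barn_full = { .objects = 32 };

/* Stand-in for getting a full sheaf from the (possibly remote) barn. */
static struct sheaf *barn_swap_for_full(struct sheaf *empty)
{
	(void)empty;		/* old flow: the empty sheaf goes back to the barn */
	return &barn_full;
}

/* Before the fix: empty main sheaf returns to the barn, spare stays NULL. */
static void replace_empty_main_before(struct pcs *pcs)
{
	pcs->main = barn_swap_for_full(pcs->main);
}

/*
 * After the fix (as I understand it): the empty main sheaf is kept
 * locally as the spare, so a later refill can reuse it without a barn
 * round-trip, at the cost of one extra sheaf pinned per CPU.
 */
static void replace_empty_main_after(struct pcs *pcs)
{
	struct sheaf *empty = pcs->main;

	pcs->main = barn_swap_for_full(NULL);
	if (!pcs->spare)
		pcs->spare = empty;
}

int main(void)
{
	struct sheaf e1 = { .objects = 0 }, e2 = { .objects = 0 };
	struct pcs before = { .main = &e1, .spare = NULL };
	struct pcs after  = { .main = &e2, .spare = NULL };

	replace_empty_main_before(&before);
	replace_empty_main_after(&after);

	printf("before: spare kept locally? %s\n", before.spare ? "yes" : "no");
	printf("after:  spare kept locally? %s\n", after.spare ? "yes" : "no");
	return 0;
}
```

The point is just the tail of replace_empty_main_after(): the empty sheaf
stays on the CPU instead of going back to the barn, which is where the
extra per-CPU memory footprint comes from.
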
However, testing from 1 task up to the maximum (nr_tasks =
nr_logical_cpus) shows significant improvements overall in most
scenarios; regressions only occur at the specific topology boundaries
described above. I believe the cases with performance gains are more
common, so I think the regression is a corner case. If it does impact
certain workloads in the future, we may need to reconsider the
optimization at that time. For now this data can serve as a reference.
Thanks,
Zhao