Message-ID: <aR0i2f91VGv47swo@fedora>
Date: Wed, 19 Nov 2025 09:52:25 +0800
From: Ming Lei <ming.lei@...hat.com>
To: "Guo, Wangyang" <wangyang.guo@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...com>,
	Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
	linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
	virtualization@...ts.linux-foundation.org,
	linux-block@...r.kernel.org, Tianyou Li <tianyou.li@...el.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Dan Liang <dan.liang@...el.com>
Subject: Re: [PATCH RESEND] lib/group_cpus: make group CPU cluster aware

On Tue, Nov 18, 2025 at 02:29:20PM +0800, Guo, Wangyang wrote:
> On 11/13/2025 9:38 AM, Ming Lei wrote:
> > On Wed, Nov 12, 2025 at 11:02:47AM +0800, Guo, Wangyang wrote:
> > > On 11/11/2025 8:08 PM, Ming Lei wrote:
> > > > On Tue, Nov 11, 2025 at 01:31:04PM +0800, Guo, Wangyang wrote:
> > > > They should still share the same L3 cache, so cpus_share_cache() should be
> > > > true when the IO completes on a CPU that belongs to a different L2 than the
> > > > submission CPU, and remote completion via IPI won't be triggered.
> > > Yes, remote IPI not triggered.
> > 
> > OK, in my test on AMD Zen 4, NVMe performance can drop to 1/2 - 1/3 if the
> > remote IPI is triggered when crossing L3, which is understandable.
> > 
> > I will check whether the topo cluster can cover L3; if so, the patch can
> > still be simplified a lot by introducing sub-node spreading, changing
> > build_node_to_cpumask() and adding nr_sub_nodes.
> 
> Do you mean using clusters as "NUMA" nodes to spread CPUs, instead of
> two-level NUMA-cluster spreading?

Yes, I think the change can be minimized by introducing a sub-numa-node to
cover it. What do you think of this approach?

However, using the cluster as the sub-numa-node by default is a bad idea: a
cluster is aligned with the CPUs sharing an L2 cache, so on many systems a
cluster includes just two CPUs. There could then be far too many clusters, and
the final calculated mapping inevitably crosses clusters because nr_queues is
less than nr_clusters.
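
As an illustration (the numbers here are made up): a node with 64 CPUs
and two CPUs per cluster has 32 clusters, so with nr_queues = 16 each
queue's group has to cover at least two clusters anyway, and the cluster
boundary buys nothing.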

I'd suggest mapping the CPUs sharing an L3 cache into one sub-numa-node.
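
Roughly like the following (an untested sketch, not a patch: cpu_l3_mask()
below is a placeholder for whatever helper exposes the LLC sibling mask --
x86 has cpu_llc_shared_mask() -- and the caller is assumed to have
allocated nr_cpu_ids masks):

static unsigned int build_subnode_to_cpumask(cpumask_var_t *masks)
{
	unsigned int cpu, i, nr_sub_nodes = 0;

	for_each_possible_cpu(cpu) {
		/* placeholder: mask of CPUs sharing this CPU's L3 */
		const struct cpumask *l3 = cpu_l3_mask(cpu);

		/* skip CPUs already assigned to a sub-node */
		for (i = 0; i < nr_sub_nodes; i++)
			if (cpumask_test_cpu(cpu, masks[i]))
				break;
		if (i < nr_sub_nodes)
			continue;

		/* new sub-node: all CPUs sharing this L3 */
		cpumask_copy(masks[nr_sub_nodes++], l3);
	}

	return nr_sub_nodes;
}

Each sub-node then plays the role of a NUMA node in the existing
spreading logic, with nr_sub_nodes replacing nr_node_ids as the outer
loop bound.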

For your case, we could either add a kernel parameter or add a
group_cpus_cluster() API for the unusual case, with both sharing a single
code path.
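
Something like the following (sketch only; the enum and the shared helper
are hypothetical and the signatures are abbreviated -- the point is just
that both entry points funnel into one implementation):

enum group_cpus_level {
	GROUP_CPUS_NODE,	/* today's behaviour: spread by NUMA node */
	GROUP_CPUS_L3,		/* spread by L3 sub-node */
};

/* hypothetical shared path for both entry points */
static struct cpumask *grp_cpus_spread(unsigned int numgrps,
				       enum group_cpus_level level);

struct cpumask *group_cpus_evenly(unsigned int numgrps)
{
	return grp_cpus_spread(numgrps, GROUP_CPUS_NODE);
}

/* new API for the unusual cluster/L3-aware case */
struct cpumask *group_cpus_cluster(unsigned int numgrps)
{
	return grp_cpus_spread(numgrps, GROUP_CPUS_L3);
}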


Thanks,
Ming

