Message-ID: <CAFj5m9JY4TSY1dYE0qBVGsRcEOmyNuA4utf+G2=SBU2n5Ks==w@mail.gmail.com>
Date: Wed, 5 Nov 2025 11:35:51 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>, Andrew Morton <akpm@...ux-foundation.org>, 
	Jens Axboe <axboe@...nel.dk>
Cc: linux-kernel@...r.kernel.org, linux-block@...r.kernel.org
Subject: Re: [PATCH] lib/group_cpus: fix cross-NUMA CPU assignment in group_cpus_evenly

On Mon, Oct 27, 2025 at 9:07 AM Ming Lei <ming.lei@...hat.com> wrote:
>
> On Mon, Oct 20, 2025 at 08:46:46PM +0800, Ming Lei wrote:
> > When numgrps > nodes, group_cpus_evenly() can incorrectly assign CPUs
> > from different NUMA nodes to the same group because of its wrapping
> > logic, which hurts block IO performance since IO completions then land
> > on remote CPUs. This can be avoided entirely when numgrps > nodes,
> > because each NUMA node contains at least as many CPUs as a single group.
> >
> > The issue occurs when curgrp reaches last_grp and wraps to 0. This causes
> > CPUs from later-processed nodes to be added to groups that already contain
> > CPUs from earlier-processed nodes, violating NUMA locality.
> >
> > Example with 8 NUMA nodes, 16 groups:
> > - Each node gets 2 groups allocated
> > - After processing nodes, curgrp reaches 16
> > - Wrapping to 0 causes CPUs from node N to be added to group 0, which
> >   already has CPUs from node 0
> >
> > Fix this by adding a find_next_node_group() helper that searches,
> > starting from group 0, for a group that already contains CPUs from the
> > same NUMA node. When wrapping is needed, use this helper instead of
> > blindly wrapping to 0, so that CPUs are only ever added to groups of
> > the same NUMA node.
> >
> > Signed-off-by: Ming Lei <ming.lei@...hat.com>
>
> Hello,
>
> ping...

Hello,

Ping...
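
For reference, a rough sketch of the approach described in the quoted
changelog (the patch body itself is not quoted here, so the helper's exact
signature and the masks/nmsk names below are illustrative, not the actual
code):

static unsigned int find_next_node_group(const struct cpumask *masks,
					 unsigned int last_grp,
					 const struct cpumask *nmsk)
{
	unsigned int grp;

	/*
	 * Scan the groups from 0 and pick the first one that already
	 * holds CPUs of the current node (i.e. intersects nmsk), so a
	 * wrapped assignment never mixes CPUs from different nodes.
	 */
	for (grp = 0; grp < last_grp; grp++) {
		if (cpumask_intersects(&masks[grp], nmsk))
			return grp;
	}

	/* No such group found: keep the old wrap-to-0 behaviour. */
	return 0;
}

and at the wrap point, instead of resetting curgrp to 0 unconditionally:

	if (curgrp >= last_grp)
		curgrp = find_next_node_group(masks, last_grp, nmsk);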

Thanks,
Ming

