Message-ID: <8ba50768-2f05-40a8-b8e8-4364f33ad269@intel.com>
Date: Sat, 10 Jan 2026 10:24:31 +0800
From: "Guo, Wangyang" <wangyang.guo@...el.com>
To: Radu Rendec <rrendec@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
Tianyou Li <tianyou.li@...el.com>, Tim Chen <tim.c.chen@...ux.intel.com>,
Dan Liang <dan.liang@...el.com>
Subject: Re: [PATCH] lib/group_cpus: make group CPU cluster aware
On 1/10/2026 3:13 AM, Radu Rendec wrote:
> Hi all,
>
> On Mon, 2025-12-22 at 11:03 +0800, Guo, Wangyang wrote:
>> On 12/22/2025 3:10 AM, Andrew Morton wrote:
>>> On Fri, 24 Oct 2025 10:30:38 +0800 Wangyang Guo <wangyang.guo@...el.com> wrote:
>>>
>>>> As CPU core counts increase, the number of NVMe IRQs may be smaller than
>>>> the total number of CPUs. This forces multiple CPUs to share the same
>>>> IRQ. If the IRQ affinity and the CPU’s cluster do not align, a
>>>> performance penalty can be observed on some platforms.
>>>
>>> It would be helpful to quantify "performance penalty". At least give
>>> readers some approximate understanding of how serious this issue is,
>>> please.
>>>
>> Thanks for your reminder; I will update the changelog in the next version.
>> We see a 15%+ performance difference with FIO (libaio, randread, bs=8k).
>>
>>>> This patch improves IRQ affinity by grouping CPUs by cluster within each
>>>> NUMA domain, ensuring better locality between CPUs and their assigned
>>>> NVMe IRQs.
>>>>
>>>> Reviewed-by: Tianyou Li <tianyou.li@...el.com>
>>>> Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
>>>> Tested-by: Dan Liang <dan.liang@...el.com>
>>>> Signed-off-by: Wangyang Guo <wangyang.guo@...el.com>
>>>
>>> The patch hasn't attracted additional review, so I'll queue this version
>>> for some testing in mm.git's mm-nonmm-unstable branch. I'll add a
>>> note-to-self that a changelog addition is desirable.
>>
>> Thanks a lot for your time and support! Please let me know if you have
>> any further comments or guidance. Any feedback would be appreciated.
>
> With this patch applied, I see a weird issue in a qemu x86_64 vm if I
> start it with a higher number of max CPUs than active CPUs, for example
> `-smp 4,maxcpus=8` on the qemu command line.
>
> What I see is the `while (1)` loop in alloc_cluster_groups() spinning
> forever. Removing `maxcpus=8` from the qemu command line fixes the issue,
> but so does reverting the patch :)
Thanks for reporting this. I will investigate the problem.
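
My first, unverified guess: when maxcpus is larger than the number of
present CPUs, the cluster mask of a possible-but-never-onlined CPU can
still be empty, so the grouping loop never makes progress. Something
like the pattern below (a hypothetical sketch of that failure mode,
with made-up mask names, not the actual alloc_cluster_groups() code)
would spin forever under those conditions:

	/* CPUs of the current NUMA node that still need a group */
	cpumask_copy(remaining, node_mask);
	while (!cpumask_empty(remaining)) {
		unsigned int cpu = cpumask_first(remaining);
		/* may be empty for a CPU that was never brought up */
		const struct cpumask *cluster = topology_cluster_cpumask(cpu);

		cpumask_andnot(remaining, remaining, cluster);
		/* if 'cluster' is empty, 'remaining' never shrinks */
	}

I will check whether this matches what actually happens on your setup.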
BR
Wangyang