[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <9847EC49-8F55-486A-985D-C3EDD168762D@linux.alibaba.com>
Date: Fri, 1 Nov 2024 11:03:08 +0800
From: mapicccy <guanjun@...ux.alibaba.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: corbet@....net,
axboe@...nel.dk,
mst@...hat.com,
jasowang@...hat.com,
xuanzhuo@...ux.alibaba.com,
eperezma@...hat.com,
vgoyal@...hat.com,
stefanha@...hat.com,
miklos@...redi.hu,
peterz@...radead.org,
akpm@...ux-foundation.org,
paulmck@...nel.org,
thuth@...hat.com,
rostedt@...dmis.org,
bp@...en8.de,
xiongwei.song@...driver.com,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org,
virtualization@...ts.linux.dev,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH RFC v1 1/2] genirq/affinity: add support for limiting
managed interrupts
> 2024年10月31日 18:35,Thomas Gleixner <tglx@...utronix.de> 写道:
>
> On Thu, Oct 31 2024 at 15:46, guanjun@...ux.alibaba.com wrote:
>> #ifdef CONFIG_SMP
>>
>> +static unsigned int __read_mostly managed_irqs_per_node;
>> +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] __cacheline_aligned_in_smp = {
>> + [0 ... MAX_NUMNODES-1] = {CPU_BITS_ALL}
>> +};
>>
>> +static void __group_prepare_affinity(struct cpumask *premask,
>> + cpumask_var_t *node_to_cpumask)
>> +{
>> + nodemask_t nodemsk = NODE_MASK_NONE;
>> + unsigned int ncpus, n;
>> +
>> + get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);
>> +
>> + for_each_node_mask(n, nodemsk) {
>> + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
>> + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> How is this managed_irqs_cpumsk array protected against concurrency?
My intention was to allocate up to `managed_irq_per_node` cpu bits from `managed_irqs_cpumask[n]`,
even if another task modifies some of the bits in the `managed_irqs_cpumask[n]` at the same time.
>
>> + ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
>> + if (ncpus < managed_irqs_per_node) {
>> + /* Reset node n to current node cpumask */
>> + cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> This whole logic is incomprehensible and aside of the concurrency
> problem it's broken when CPUs are made present at run-time because these
> cpu masks are static and represent the stale state of the last
> invocation.
Sorry, I realize there is indeed a logic issue here (caused by developing on 5.10 LTS and rebase to the latest linux-next).
>
> Given the limitations of the x86 vector space, which is not going away
> anytime soon, there are only two options IMO to handle such a scenario.
>
> 1) Tell the nvme/block layer to disable queue affinity management
>
> 2) Restrict the devices and queues to the nodes they sit on
I have tried fixing this issue through nvme driver, but later discovered that the same issue exists with virtio net.
Therefore, I want to address this with a more general solution.
Thanks,
Guanjun
>
> Thanks,
>
> tglx
Powered by blists - more mailing lists