Message-ID: <CAFj5m9KZRjupM+bsuc-r_kTu1h8+wtc_fdmkHWS=cNbg4aU03g@mail.gmail.com>
Date: Thu, 31 Oct 2024 18:50:51 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>, Christoph Hellwig <hch@....de>
Cc: Guanjun <guanjun@...ux.alibaba.com>, corbet@....net, axboe@...nel.dk,
mst@...hat.com, jasowang@...hat.com, xuanzhuo@...ux.alibaba.com,
eperezma@...hat.com, vgoyal@...hat.com, stefanha@...hat.com,
miklos@...redi.hu, peterz@...radead.org, akpm@...ux-foundation.org,
paulmck@...nel.org, thuth@...hat.com, rostedt@...dmis.org, bp@...en8.de,
xiongwei.song@...driver.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
virtualization@...ts.linux.dev, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH RFC v1 1/2] genirq/affinity: add support for limiting
managed interrupts
On Thu, Oct 31, 2024 at 6:35 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>
> On Thu, Oct 31 2024 at 15:46, guanjun@...ux.alibaba.com wrote:
> > #ifdef CONFIG_SMP
> >
> > +static unsigned int __read_mostly managed_irqs_per_node;
> > +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] __cacheline_aligned_in_smp = {
> > + [0 ... MAX_NUMNODES-1] = {CPU_BITS_ALL}
> > +};
> >
> > +static void __group_prepare_affinity(struct cpumask *premask,
> > + cpumask_var_t *node_to_cpumask)
> > +{
> > + nodemask_t nodemsk = NODE_MASK_NONE;
> > + unsigned int ncpus, n;
> > +
> > + get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);
> > +
> > + for_each_node_mask(n, nodemsk) {
> > + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
> > + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> How is this managed_irqs_cpumsk array protected against concurrency?
>
> > + ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
> > + if (ncpus < managed_irqs_per_node) {
> > + /* Reset node n to current node cpumask */
> > + cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> This whole logic is incomprehensible and, aside from the concurrency
> problem, it's broken when CPUs are made present at run-time because these
> cpu masks are static and represent the stale state of the last
> invocation.
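
For illustration, the concurrency part could at least be addressed by
serializing access to the shared masks; a minimal sketch, where the
global mutex is hypothetical and not part of the posted patch (and note
that locking alone does nothing for the stale-state problem on hotplug):

#include <linux/cpumask.h>
#include <linux/mutex.h>
#include <linux/nodemask.h>

static DEFINE_MUTEX(managed_irqs_lock);

static void __group_prepare_affinity(struct cpumask *premask,
				     cpumask_var_t *node_to_cpumask)
{
	nodemask_t nodemsk = NODE_MASK_NONE;
	unsigned int ncpus, n;

	get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);

	/* Serialize all updates of the shared managed_irqs_cpumsk[] */
	mutex_lock(&managed_irqs_lock);
	for_each_node_mask(n, nodemsk) {
		cpumask_and(&managed_irqs_cpumsk[n],
			    &managed_irqs_cpumsk[n], premask);
		cpumask_and(&managed_irqs_cpumsk[n],
			    &managed_irqs_cpumsk[n], node_to_cpumask[n]);

		ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
		if (ncpus < managed_irqs_per_node) {
			/* Reset node n to current node cpumask */
			cpumask_copy(&managed_irqs_cpumsk[n],
				     node_to_cpumask[n]);
		}
	}
	mutex_unlock(&managed_irqs_lock);
}
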
>
> Given the limitations of the x86 vector space, which is not going away
> anytime soon, there are only two options IMO to handle such a scenario.
>
> 1) Tell the nvme/block layer to disable queue affinity management
+1
There are other use cases, such as cpu isolation, which can benefit from
this approach too:
https://lore.kernel.org/linux-nvme/20240702104112.4123810-1-ming.lei@redhat.com/
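
For reference, option 1 boils down to the driver allocating plain
(non-managed) vectors instead of managed ones. A rough sketch of what
such a toggle could look like on the PCI side; manage_queue_irqs and
alloc_queue_vectors() are hypothetical names, not from any posted patch:

#include <linux/interrupt.h>
#include <linux/pci.h>

static bool manage_queue_irqs = true;	/* hypothetical opt-out knob */

static int alloc_queue_vectors(struct pci_dev *pdev, unsigned int nr_queues)
{
	/* One pre-vector kept out of the spread, e.g. for an admin queue */
	struct irq_affinity affd = { .pre_vectors = 1 };

	if (manage_queue_irqs)
		/* Managed: the core spreads and pins per-queue affinity */
		return pci_alloc_irq_vectors_affinity(pdev, 1, nr_queues + 1,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);

	/*
	 * Non-managed: affinity stays user-settable (irqbalance and cpu
	 * isolation policies apply) and no vectors are reserved on CPUs
	 * that may never submit I/O.
	 */
	return pci_alloc_irq_vectors(pdev, 1, nr_queues + 1, PCI_IRQ_ALL_TYPES);
}
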
Thanks,
Ming