Message-ID: <CACGkMEs+vzJS9mh-yYPg6vRPC0sWW_OGOb4i8Q5Y9sjLkY8y2Q@mail.gmail.com>
Date: Fri, 1 Nov 2024 11:34:42 +0800
From: Jason Wang <jasowang@...hat.com>
To: mapicccy <guanjun@...ux.alibaba.com>
Cc: Ming Lei <ming.lei@...hat.com>, Thomas Gleixner <tglx@...utronix.de>,
Christoph Hellwig <hch@....de>, corbet@....net, axboe@...nel.dk, mst@...hat.com,
xuanzhuo@...ux.alibaba.com, eperezma@...hat.com, vgoyal@...hat.com,
stefanha@...hat.com, miklos@...redi.hu, peterz@...radead.org,
akpm@...ux-foundation.org, paulmck@...nel.org, thuth@...hat.com,
rostedt@...dmis.org, bp@...en8.de, xiongwei.song@...driver.com,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org, virtualization@...ts.linux.dev,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH RFC v1 1/2] genirq/affinity: add support for limiting
managed interrupts
On Fri, Nov 1, 2024 at 11:12 AM mapicccy <guanjun@...ux.alibaba.com> wrote:
>
>
>
> On Oct 31, 2024, at 18:50, Ming Lei <ming.lei@...hat.com> wrote:
>
> On Thu, Oct 31, 2024 at 6:35 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>
>
> On Thu, Oct 31 2024 at 15:46, guanjun@...ux.alibaba.com wrote:
>
> #ifdef CONFIG_SMP
>
> +static unsigned int __read_mostly managed_irqs_per_node;
> +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] __cacheline_aligned_in_smp = {
> + [0 ... MAX_NUMNODES-1] = {CPU_BITS_ALL}
> +};
>
> +static void __group_prepare_affinity(struct cpumask *premask,
> + cpumask_var_t *node_to_cpumask)
> +{
> + nodemask_t nodemsk = NODE_MASK_NONE;
> + unsigned int ncpus, n;
> +
> + get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);
> +
> + for_each_node_mask(n, nodemsk) {
> + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
> + cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
>
> How is this managed_irqs_cpumsk array protected against concurrency?
>
> + ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
> + if (ncpus < managed_irqs_per_node) {
> + /* Reset node n to current node cpumask */
> + cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
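
[ Purely as an illustration of the concurrency question above, not a
  proposed fix: the per-node masks are plain globals, so two drivers
  allocating managed vectors at the same time can interleave inside
  __group_prepare_affinity(). Serializing them would need something along
  these lines (sketch only, the lock name is made up; the hotplug/stale-mask
  problem pointed out below is a separate issue this does not touch): ]

/* Sketch: serialize updates to the shared per-node masks. */
static DEFINE_RAW_SPINLOCK(managed_irqs_lock);

static void __group_prepare_affinity(struct cpumask *premask,
				     cpumask_var_t *node_to_cpumask)
{
	nodemask_t nodemsk = NODE_MASK_NONE;
	unsigned int ncpus, n;

	get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);

	raw_spin_lock(&managed_irqs_lock);
	for_each_node_mask(n, nodemsk) {
		cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
		cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);

		ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
		if (ncpus < managed_irqs_per_node) {
			/* Reset node n to the full node cpumask */
			cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
		}
	}
	raw_spin_unlock(&managed_irqs_lock);
}
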
>
>
> This whole logic is incomprehensible and, aside from the concurrency
> problem, it's broken when CPUs are made present at run time because these
> cpu masks are static and represent the stale state of the last
> invocation.
>
> Given the limitations of the x86 vector space, which is not going away
> anytime soon, there are only two options IMO to handle such a scenario.
>
> 1) Tell the nvme/block layer to disable queue affinity management
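
[ To make option 1 concrete (sketch only, not the actual patch in the link
  Ming posts below, and the opt-out knob is hypothetical): for NVMe this
  roughly means allocating the I/O queue vectors without PCI_IRQ_AFFINITY,
  so the interrupts are plain unmanaged vectors whose affinity stays under
  user-space/irqbalance control: ]

/*
 * Today nvme allocates its vectors as managed interrupts, along the
 * lines of:
 *
 *	struct irq_affinity affd = { .pre_vectors = 1, ... };
 *
 *	pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
 *				       PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY,
 *				       &affd);
 *
 * "Disable queue affinity management" would mean dropping PCI_IRQ_AFFINITY
 * (e.g. behind a driver option) and taking unmanaged vectors instead:
 */
irqs = pci_alloc_irq_vectors(pdev, 1, nr_io_queues + 1, PCI_IRQ_ALL_TYPES);
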
>
>
> +1
>
> There are other use cases, such as cpu isolation, which can benefit from
> this approach too.
>
> https://lore.kernel.org/linux-nvme/20240702104112.4123810-1-ming.lei@redhat.com/
>
I wonder if we need to do the same for virtio-blk.
>
> Thanks for your reminder. However, the patch in this link only modifies the
> NVMe driver, but the same issue exists in the virtio net driver as well.
I guess you meant virtio-blk actually?
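
[ For reference (paraphrased from memory, not a verbatim quote of the
  current source): virtio-blk gets its managed vectors through virtio-pci,
  which turns on PCI_IRQ_AFFINITY whenever the driver passes a non-NULL
  irq_affinity descriptor, so any opt-out would have to be plumbed through
  there as well: ]

/* virtio_pci MSI-X setup, roughly: */
unsigned int flags = PCI_IRQ_MSIX;

if (desc) {
	flags |= PCI_IRQ_AFFINITY;
	desc->pre_vectors++;	/* reserve the config interrupt vector */
}

err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors, nvectors,
				     flags, desc);
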
>
> Guanjun
>
>
> Thanks,
>
Thanks