Message-ID: <ZYC8wYdyHi3KA3Bp@yury-ThinkPad>
Date: Mon, 18 Dec 2023 13:42:25 -0800
From: Yury Norov <yury.norov@...il.com>
To: Jacob Keller <jacob.e.keller@...el.com>
Cc: Souradeep Chakrabarti <schakrabarti@...ux.microsoft.com>,
kys@...rosoft.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
decui@...rosoft.com, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, longli@...rosoft.com,
leon@...nel.org, cai.huoqing@...ux.dev, ssengar@...ux.microsoft.com,
vkuznets@...hat.com, tglx@...utronix.de,
linux-hyperv@...r.kernel.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-rdma@...r.kernel.org,
schakrabarti@...rosoft.com, paulros@...rosoft.com
Subject: Re: [PATCH 3/3] net: mana: add a function to spread IRQs per CPUs
On Mon, Dec 18, 2023 at 01:17:53PM -0800, Jacob Keller wrote:
>
>
> On 12/17/2023 1:32 PM, Yury Norov wrote:
> > +static __maybe_unused int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > +{
> > + const struct cpumask *next, *prev = cpu_none_mask;
> > + cpumask_var_t cpus __free(free_cpumask_var);
> > + int cpu, weight;
> > +
> > + if (!alloc_cpumask_var(&cpus, GFP_KERNEL))
> > + return -ENOMEM;
> > +
> > + rcu_read_lock();
> > + for_each_numa_hop_mask(next, node) {
> > + weight = cpumask_weight_andnot(next, prev);
> > + while (weight-- > 0) {
> > + cpumask_andnot(cpus, next, prev);
> > + for_each_cpu(cpu, cpus) {
> > + if (len-- == 0)
> > + goto done;
> > + irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > + cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > + }
> > + }
> > + prev = next;
> > + }
> > +done:
> > + rcu_read_unlock();
> > + return 0;
> > +}
> > +
>
> You're adding a function here, but it's not called and is even marked
> __maybe_unused?
I expect that Souradeep will build his driver improvement on top of
this function. The cpumask API is somewhat tricky to use properly
here, so this is an attempt to help him, instead of going back and
forth in review.
Sorry, I should have been more explicit.
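For readers outside the thread, the intended iteration order can be
modeled with a small userspace sketch. This is illustrative only, not
kernel code: the `hop_of[]` and `sibling_of[]` arrays below are
hypothetical stand-ins for for_each_numa_hop_mask() and
topology_sibling_cpumask(), and the topology (8 CPUs, two NUMA hops,
adjacent SMT sibling pairs) is assumed for the example.

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed topology: CPUs 0-3 on the device's local node (hop 0),
 * CPUs 4-7 one hop away; SMT siblings are adjacent pairs. */
#define NCPUS   8
#define MAX_HOP 1

static const int hop_of[NCPUS]     = { 0, 0, 0, 0, 1, 1, 1, 1 };
static const int sibling_of[NCPUS] = { 1, 0, 3, 2, 5, 4, 7, 6 };

/* Give each IRQ one CPU per SMT sibling pair, nearest NUMA hop
 * first, mirroring the cpumask_andnot() that strips a CPU's
 * siblings after each assignment. Returns the number of IRQs
 * that received a CPU. */
static int spread_irqs(int *irq_cpu, int nirqs)
{
	bool used[NCPUS] = { false };
	int assigned = 0;

	for (int hop = 0; hop <= MAX_HOP && assigned < nirqs; hop++) {
		for (int cpu = 0; cpu < NCPUS && assigned < nirqs; cpu++) {
			if (hop_of[cpu] != hop || used[cpu])
				continue;
			irq_cpu[assigned++] = cpu;
			used[cpu] = true;             /* this CPU is taken */
			used[sibling_of[cpu]] = true; /* and so is its sibling */
		}
	}
	return assigned;
}
```

With this topology, four IRQs land on CPUs 0 and 2 (local node) and
then 4 and 6 (next hop), one per sibling pair, which is the spread
irq_setup() above aims for.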
Thanks,
Yury