Message-ID: <CALuQH+VAvfAX1Gs1tNDa7e_wvZj2yyu1ZGpiLLt2ywssSF4sNQ@mail.gmail.com>
Date: Wed, 9 Jul 2025 10:55:51 -0700
From: Joshua Washington <joshwash@...gle.com>
To: Simon Horman <horms@...nel.org>
Cc: Jeroen de Borst <jeroendb@...gle.com>, netdev@...r.kernel.org,
Harshitha Ramamurthy <hramamurthy@...gle.com>, davem@...emloft.net,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Willem de Bruijn <willemb@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Bailey Forrest <bcf@...gle.com>
Subject: Re: [PATCH net-next v2] gve: make IRQ handlers and page allocation
NUMA aware
Thanks for the feedback.
> > + cur_cpu = cpumask_next(cur_cpu, node_mask);
> > + /* Wrap once CPUs in the node have been exhausted, or when
> > + * starting RX queue affinities. TX and RX queues of the same
> > + * index share affinity.
> > + */
> > + if (cur_cpu >= nr_cpu_ids || (i + 1) == priv->tx_cfg.max_queues)
> > + cur_cpu = cpumask_first(node_mask);
>
> FWIIW, maybe this can be written more succinctly as follows.
> (Completely untested!)
>
> /* TX and RX queues of the same index share affinity. */
> if (i + 1 == priv->tx_cfg.max_queues)
> cur_cpu = cpumask_first(node_mask);
> else
> cur_cpu = cpumask_next_wrap(cur_cpu, node_mask);
I don't have a strong opinion on this either way, so I'll fold it in
if other feedback requires another revision of the patch. Otherwise I
will leave it as-is, since this comment does not appear to be
blocking.