Message-ID: <Z32sncx9K4iFLsJN@li-4c4c4544-0047-5210-804b-b8c04f323634.ibm.com>
Date: Tue, 7 Jan 2025 16:37:17 -0600
From: Nick Child <nnac123@...ux.ibm.com>
To: Yury Norov <yury.norov@...il.com>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, Haren Myneni <haren@...ux.ibm.com>,
Rick Lindsley <ricklind@...ux.ibm.com>,
Thomas Falcon <tlfalcon@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Naveen N Rao <naveen@...nel.org>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>
Subject: Re: [PATCH 03/14] ibmvnic: simplify ibmvnic_set_queue_affinity()
On Sat, Dec 28, 2024 at 10:49:35AM -0800, Yury Norov wrote:
> A loop based on cpumask_next_wrap() opencodes the dedicated macro
> for_each_online_cpu_wrap(). Using the macro allows to avoid setting
> bits affinity mask more than once when stride >= num_online_cpus.
>
> This also helps to drop cpumask handling code in the caller function.
>
> Signed-off-by: Yury Norov <yury.norov@...il.com>
> ---
> drivers/net/ethernet/ibm/ibmvnic.c | 17 ++++++++++-------
> 1 file changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index e95ae0d39948..4cfd90fb206b 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -234,11 +234,16 @@ static int ibmvnic_set_queue_affinity(struct ibmvnic_sub_crq_queue *queue,
> (*stragglers)--;
> }
> /* atomic write is safer than writing bit by bit directly */
> - for (i = 0; i < stride; i++) {
> - cpumask_set_cpu(*cpu, mask);
> - *cpu = cpumask_next_wrap(*cpu, cpu_online_mask,
> - nr_cpu_ids, false);
> + for_each_online_cpu_wrap(i, *cpu) {
> + if (!stride--)
> + break;
> + cpumask_set_cpu(i, mask);
> }
> +
> + /* For the next queue we start from the first unused CPU in this queue */
> + if (i < nr_cpu_ids)
> + *cpu = i + 1;
> +
This should read '*cpu = i': the loop breaks only after the macro has already advanced 'i' to the first unused CPU, so 'i + 1' would skip one CPU.
Thanks!
> /* set queue affinity mask */
> cpumask_copy(queue->affinity_mask, mask);
> rc = irq_set_affinity_and_hint(queue->irq, queue->affinity_mask);
> @@ -256,7 +261,7 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
> int num_rxqs = adapter->num_active_rx_scrqs, i_rxqs = 0;
> int num_txqs = adapter->num_active_tx_scrqs, i_txqs = 0;
> int total_queues, stride, stragglers, i;
> - unsigned int num_cpu, cpu;
> + unsigned int num_cpu, cpu = 0;
> bool is_rx_queue;
> int rc = 0;
>
> @@ -274,8 +279,6 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
> stride = max_t(int, num_cpu / total_queues, 1);
> /* number of leftover cpu's */
> stragglers = num_cpu >= total_queues ? num_cpu % total_queues : 0;
> - /* next available cpu to assign irq to */
> - cpu = cpumask_next(-1, cpu_online_mask);
>
> for (i = 0; i < total_queues; i++) {
> is_rx_queue = false;
> --
> 2.43.0
>