Message-ID: <ed72aae3-e9c2-4768-a400-cb99cb2a0f24@cornelisnetworks.com>
Date: Mon, 23 Jun 2025 12:23:36 -0400
From: Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>
To: Yury Norov <yury.norov@...il.com>, Leon Romanovsky <leon@...nel.org>
Cc: Jason Gunthorpe <jgg@...pe.ca>,
Rasmus Villemoes <linux@...musvillemoes.dk>, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] RDMA: hfi1: cpumasks usage fixes
On 6/21/25 11:03 AM, Yury Norov wrote:
> On Thu, Jun 12, 2025 at 11:12:29AM +0300, Leon Romanovsky wrote:
>> On Wed, Jun 04, 2025 at 03:39:36PM -0400, Yury Norov wrote:
>>> The driver uses cpumasks API in a non-optimal way; partially because of
>>> absence of proper functions. Fix this and nearby logic.
>>>
>>> Yury Norov [NVIDIA] (7):
>>> cpumask: add cpumask_clear_cpus()
>>> RDMA: hfi1: fix possible divide-by-zero in find_hw_thread_mask()
>>> RDMA: hfi1: simplify find_hw_thread_mask()
>>> RDMA: hfi1: simplify init_real_cpu_mask()
>>> RDMA: hfi1: use rounddown in find_hw_thread_mask()
>>> RDMA: hfi1: simplify hfi1_get_proc_affinity()
>>>   RDMA: hfi1: drop cpumask_empty() call in hfi1/affinity.c
>>>
>>> drivers/infiniband/hw/hfi1/affinity.c | 96 +++++++++++----------------
>>> include/linux/cpumask.h | 12 ++++
>>> 2 files changed, 49 insertions(+), 59 deletions(-)
>>
>> Dennis?
>
> So?.. Any feedback?
I'm ambivalent about this patch series. It looks OK, but I don't think it's
really fixing anything.