Message-ID: <6a18009f-c9ea-4026-ac68-8cb753f4b001@nvidia.com>
Date: Wed, 21 Jan 2026 12:35:55 +0200
From: Yishai Hadas <yishaih@...dia.com>
To: Jason Gunthorpe <jgg@...pe.ca>, Edward Srouji <edwards@...dia.com>
CC: Leon Romanovsky <leon@...nel.org>, Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
<linux-kernel@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
<linux-media@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH rdma-next 2/2] RDMA/mlx5: Implement DMABUF export ops
On 20/01/2026 20:18, Jason Gunthorpe wrote:
> On Thu, Jan 08, 2026 at 01:11:15PM +0200, Edward Srouji wrote:
>> +static int phys_addr_to_bar(struct pci_dev *pdev, phys_addr_t pa)
>> +{
>> +	resource_size_t start, end;
>> +	int bar;
>> +
>> +	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
>> +		/* Skip BARs not present or not memory-mapped */
>> +		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
>> +			continue;
>> +
>> +		start = pci_resource_start(pdev, bar);
>> +		end = pci_resource_end(pdev, bar);
>> +
>> +		if (!start || !end)
>> +			continue;
>> +
>> +		if (pa >= start && pa <= end)
>> +			return bar;
>> +	}
>
> Don't we know which of the two BARs the mmap entry came from based on
> its type? This seems like overkill..
>
Actually no.
Currently, a given mmap entry type can reside on different BARs depending on
the function type (i.e. PF vs. SF).
As we have no capability bit or other way to query that mapping, we prefer
the above code, which finds the correct BAR (for now, 0 or 2) dynamically.
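To illustrate, here is a minimal userspace model of that dynamic lookup: each
BAR is a [start, end] range plus a memory flag standing in for IORESOURCE_MEM,
and the scan returns the first memory BAR whose range contains the physical
address, else -1. The struct and names are hypothetical, not the kernel API;
the search logic mirrors the patch.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_BARS 6	/* models PCI_STD_NUM_BARS */

/* Hypothetical stand-in for the pci_resource_*() accessors. */
struct fake_bar {
	uint64_t start;
	uint64_t end;	/* inclusive; start == 0 && end == 0 means absent */
	int is_mem;	/* models pci_resource_flags() & IORESOURCE_MEM */
};

/* Same scan as the patch: skip non-memory or absent BARs, return the
 * index of the BAR whose range contains pa, or -1 if none matches. */
static int phys_addr_to_bar(const struct fake_bar *bars, uint64_t pa)
{
	for (int bar = 0; bar < NUM_BARS; bar++) {
		if (!bars[bar].is_mem)
			continue;
		if (!bars[bar].start || !bars[bar].end)
			continue;
		if (pa >= bars[bar].start && pa <= bars[bar].end)
			return bar;
	}
	return -1;
}
```

With, say, memory BARs populated at indexes 0 and 2 (the two cases the PF/SF
split can produce), any address inside either range resolves to the right
index without the caller knowing the mapping up front.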
Yishai