Message-ID: <20180614085015.3f39b367@w520.home>
Date: Thu, 14 Jun 2018 08:50:15 -0600
From: Alex Williamson <alex.williamson@...hat.com>
To: Srinath Mannam <srinath.mannam@...adcom.com>
Cc: Sinan Kaya <okaya@...eaurora.org>, Christoph Hellwig <hch@....de>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Abhishek Shah <abhishek.shah@...adcom.com>,
Vikram Prakash <vikram.prakash@...adcom.com>,
linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org, kvm@...r.kernel.org,
linux-pci-owner@...r.kernel.org
Subject: Re: Requirement to get BAR pci_bus_address in user space
On Thu, 14 Jun 2018 16:18:15 +0530
Srinath Mannam <srinath.mannam@...adcom.com> wrote:
> Hi Sinan Kaya,
>
> Here are the details:
>
> The issue is that for CMB cards the SQs are allocated inside the
> device's BAR memory, which is different from normal cards.
> On normal cards the SQ memory is allocated on the host side.
> In both cases the physical address of the SQ memory is programmed
> into an NVMe controller register.
> This works for normal cards because the SQ memory is on the host
> side, but for CMB cards the PCI bus address corresponding to the SQ
> memory must be programmed instead.
>
> More details are in the patch: nvme-pci: Use PCI bus address for
> data/queues in CMB.
>
> That patch fixes the issue in the NVMe kernel driver, but a similar
> fix is required in the SPDK library as well.
> So we need a mechanism to get the pci_bus_address in user space
> libraries to address this issue.
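
As a rough illustration of what the quoted patch ("nvme-pci: Use PCI bus
address for data/queues in CMB") boils down to, the address handed to the
controller is derived from the BAR's bus address rather than its CPU
physical address. The helper name below is hypothetical; only
pci_bus_address() is a real kernel API:

        /*
         * Illustrative-only sketch: pci_bus_address() returns the BAR
         * address as seen from the PCI bus, which can differ from
         * pci_resource_start() when the host bridge translates
         * addresses.  The CMB queue address programmed into the
         * controller must be the bus address plus the CMB offset.
         */
        #include <linux/pci.h>

        static u64 example_cmb_bus_addr(struct pci_dev *pdev, int bar,
                                        u64 offset)
        {
                return (u64)pci_bus_address(pdev, bar) + offset;
        }
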
I don't fully follow the SQ vs CMB details, but I gather that there's
some sort of buffer allocated from within the device's MMIO BAR and
some programming of the device needs to reference that buffer.
Wouldn't you therefore use the vfio type1 IOMMU MAP_DMA ioctl to map
the BAR into the IOVA address space, and then use the IOVA plus the
offset into the BAR for the device to reference the buffer?  It seems
this is the same way we'd set up a peer-to-peer mapping, except here
we're effectively using it for the device to reference itself.
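
A minimal sketch of that flow, assuming vfio-pci is already bound to the
device; the group number, device address, BAR index, and IOVA below are
placeholders, and error handling is trimmed for brevity:

        /*
         * Sketch: mmap a BAR through the vfio-pci device fd, register
         * that mapping with the type1 IOMMU via VFIO_IOMMU_MAP_DMA,
         * then program the device with the chosen IOVA (+ offset).
         */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <linux/vfio.h>

        int main(void)
        {
                int container = open("/dev/vfio/vfio", O_RDWR);
                int group = open("/dev/vfio/42", O_RDWR);  /* example group */

                /* Attach the group to a container using type1. */
                ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
                ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

                int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                                   "0000:01:00.0");  /* example device */

                /* Size and fd offset of the BAR holding the buffer. */
                struct vfio_region_info reg = {
                        .argsz = sizeof(reg),
                        .index = VFIO_PCI_BAR2_REGION_INDEX, /* assumed BAR */
                };
                ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

                /* Map the BAR into the process like any vfio region. */
                void *bar = mmap(NULL, (size_t)reg.size,
                                 PROT_READ | PROT_WRITE, MAP_SHARED,
                                 device, (off_t)reg.offset);
                if (bar == MAP_FAILED)
                        return 1;

                /*
                 * Map that same virtual range into the device's IOVA
                 * space.  The IOVA is chosen by user space; the value
                 * here is only an example of an unused range.
                 */
                struct vfio_iommu_type1_dma_map map = {
                        .argsz = sizeof(map),
                        .flags = VFIO_DMA_MAP_FLAG_READ |
                                 VFIO_DMA_MAP_FLAG_WRITE,
                        .vaddr = (uint64_t)(uintptr_t)bar,
                        .iova  = 0x100000000ULL,
                        .size  = reg.size,
                };
                if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map))
                        perror("VFIO_IOMMU_MAP_DMA");

                /*
                 * The device can now be programmed with map.iova plus
                 * the offset of the buffer inside its own BAR (e.g.
                 * CMB queues), without user space needing the raw PCI
                 * bus address.
                 */
                printf("buffer IOVA: 0x%llx\n",
                       (unsigned long long)map.iova);

                munmap(bar, (size_t)reg.size);
                close(device);
                close(group);
                close(container);
                return 0;
        }
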
Thanks,
Alex