Message-ID: <20180301211817.GC6742@redhat.com>
Date: Thu, 1 Mar 2018 16:18:18 -0500
From: Jerome Glisse <jglisse@...hat.com>
To: Logan Gunthorpe <logang@...tatee.com>
Cc: benh@....ibm.com, Dan Williams <dan.j.williams@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-pci@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-rdma <linux-rdma@...r.kernel.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
linux-block@...r.kernel.org, Stephen Bates <sbates@...thlin.com>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <keith.busch@...el.com>,
Sagi Grimberg <sagi@...mberg.me>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Jason Gunthorpe <jgg@...lanox.com>,
Max Gurtovoy <maxg@...lanox.com>,
Alex Williamson <alex.williamson@...hat.com>,
Oliver OHalloran <oliveroh@....ibm.com>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
On Thu, Mar 01, 2018 at 02:11:34PM -0700, Logan Gunthorpe wrote:
>
>
> On 01/03/18 02:03 PM, Benjamin Herrenschmidt wrote:
> > However, what happens if anything calls page_address() on them ? Some
> > DMA ops do that for example, or some devices might ...
>
> Although we could probably work around it with some pain, we rely on
> page_address() and virt_to_phys(), etc to work on these pages. So on x86,
> yes, it makes it into the linear mapping.
This is pretty easy to do with HMM:
unsigned long hmm_page_to_phys_pfn(struct page *page)
{
	struct hmm_devmem *devmem;
	unsigned long ppfn;

	/* Sanity test maybe BUG_ON() */
	if (!is_device_private_page(page))
		return -1UL;

	devmem = page->pgmap->data;
	/* Offset of the page within the device memory region. */
	ppfn = page_to_pfn(page) - devmem->pfn_first;
	return ppfn + devmem->device_phys_base_pfn;
}
Note that the last field does not exist in today's HMM because I did not
need such a helper so far, but it can be added.
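
For illustration, a rough sketch of what that addition might look like.
The field name device_phys_base_pfn comes from the helper above; the
setter name and the idea that the driver supplies the base from its PCI
BAR are only assumptions here, not existing HMM API:

/*
 * Sketch only: new field added next to pfn_first/pfn_last in
 * struct hmm_devmem (include/linux/hmm.h). Not in current HMM.
 *
 *	unsigned long device_phys_base_pfn;	first PFN of the memory
 *						as seen on the bus
 */

/*
 * Hypothetical setter the driver would call after hmm_devmem_add(),
 * passing the physical base of its memory aperture (for a PCI device
 * this could come from pci_resource_start() on the relevant BAR).
 */
static inline void hmm_devmem_set_phys_base(struct hmm_devmem *devmem,
					    phys_addr_t base)
{
	devmem->device_phys_base_pfn = PHYS_PFN(base);
}

With that in place, a DMA path could call hmm_page_to_phys_pfn() instead
of page_to_pfn() when it sees a device private page.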
Cheers,
Jérôme