Message-ID: <1de70207-40ce-29f0-6093-337112852475@deltatee.com>
Date: Thu, 1 Mar 2018 14:11:34 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: benh@....ibm.com, Dan Williams <dan.j.williams@...el.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-pci@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-rdma <linux-rdma@...r.kernel.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
linux-block@...r.kernel.org, Stephen Bates <sbates@...thlin.com>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <keith.busch@...el.com>,
Sagi Grimberg <sagi@...mberg.me>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Jason Gunthorpe <jgg@...lanox.com>,
Max Gurtovoy <maxg@...lanox.com>,
Jérôme Glisse <jglisse@...hat.com>,
Alex Williamson <alex.williamson@...hat.com>,
Oliver OHalloran <oliveroh@....ibm.com>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory

On 01/03/18 02:03 PM, Benjamin Herrenschmidt wrote:
> However, what happens if anything calls page_address() on them ? Some
> DMA ops do that for example, or some devices might ...

Although we could probably work around it with some pain, we rely on
page_address(), virt_to_phys(), etc., working on these pages. So on
x86, yes, this memory makes it into the linear mapping.
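
For illustration, a minimal sketch (hypothetical helper, not from the
patch set) of the kind of caller we have in mind; it is only correct
because the p2pmem pages end up in the linear mapping:

    #include <linux/mm.h>
    #include <linux/io.h>

    /*
     * Hypothetical example: resolve a p2pmem page to a physical
     * address the way some DMA ops effectively do. page_address()
     * is only valid here because the page is covered by the
     * kernel's linear mapping.
     */
    static phys_addr_t p2p_page_to_phys(struct page *page)
    {
            void *vaddr = page_address(page);

            return virt_to_phys(vaddr);
    }
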
Logan