Message-Id: <1519936815.4592.25.camel@au1.ibm.com>
Date: Fri, 02 Mar 2018 07:40:15 +1100
From: Benjamin Herrenschmidt <benh@....ibm.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: Logan Gunthorpe <logang@...tatee.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-pci@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-rdma <linux-rdma@...r.kernel.org>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
linux-block@...r.kernel.org, Stephen Bates <sbates@...thlin.com>,
Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Keith Busch <keith.busch@...el.com>,
Sagi Grimberg <sagi@...mberg.me>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Jason Gunthorpe <jgg@...lanox.com>,
Max Gurtovoy <maxg@...lanox.com>,
Jérôme Glisse <jglisse@...hat.com>,
Alex Williamson <alex.williamson@...hat.com>,
Oliver OHalloran <oliveroh@....ibm.com>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI
Memory
On Fri, 2018-03-02 at 07:34 +1100, Benjamin Herrenschmidt wrote:
>
> But what happens with that PCI memory? Is it effectively turned into
> normal memory (i.e., usable for normal allocations, potentially used to
> populate user pages etc...) or is it kept aside?
(What I mean is: is it added to the page allocator, basically?)
Also we need to be able to hard block MEMREMAP_WB mappings of non-RAM
on ppc64 (maybe via an arch hook, as it might depend on the processor
family). Server powerpc cannot do cacheable accesses to IO memory
(unless it's special OpenCAPI or NVLink, but not on PCIe).
> Also on ppc64, the physical addresses of PCIe are so far apart
> that there's no way we can map them into the linear mapping at the
> normal offset of PAGE_OFFSET + (pfn << PAGE_SHIFT), so things like
> page_address or virt_to_page cannot work as-is on PCIe addresses.
Talking of which... is there any documentation on the whole
memremap_page mechanism? My grep turned up empty...
Cheers,
Ben.