Message-ID: <a705afc5-779d-baf4-e5d2-e2da04c82743@ozlabs.ru>
Date: Thu, 26 Mar 2020 12:26:54 +1100
From: Alexey Kardashevskiy <aik@...abs.ru>
To: Christoph Hellwig <hch@....de>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
iommu@...ts.linux-foundation.org, linuxppc-dev@...ts.ozlabs.org,
Lu Baolu <baolu.lu@...ux.intel.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Joerg Roedel <joro@...tes.org>,
Robin Murphy <robin.murphy@....com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] dma-mapping: add a dma_ops_bypass flag to struct
device
On 25/03/2020 19:37, Christoph Hellwig wrote:
> On Wed, Mar 25, 2020 at 03:51:36PM +1100, Alexey Kardashevskiy wrote:
>>>> This is for persistent memory which you can DMA to/from, yet it does
>>>> not appear in the system as normal memory and therefore requires
>>>> special handling anyway (O_DIRECT or DAX, I do not know the exact
>>>> mechanics). All other devices in the system should just run as usual,
>>>> i.e. use 1:1 mapping if possible.
>>>
>>> On other systems (x86 and arm) pmem as long as it is page backed does
>>> not require any special handling. This must be some weird way powerpc
>>> fucked up again, and I suspect you'll have to suffer from it.
>>
>>
>> It does not matter whether it is backed by pages or not; the problem may
>> also appear if we wanted, for example, p2p PCI via IOMMU (between PHBs),
>> where MMIO might be mapped way too high in the system address space and
>> make 1:1 impossible.
>
> How can it be mapped too high for a direct mapping with a 64-bit DMA
> mask?
The window size is limited, and often the table backing it is not even
sparse. It requires an 8-byte entry per IOMMU page (which is most commonly
64K at most), so a 1TB limit (the guest RAM size) is quite a real thing.
MMIO is mapped into the guest physical address space outside of this 1TB
(on PPC).
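
For illustration only, a back-of-the-envelope sketch (not kernel code; the
64K IOMMU page size and 8-byte entry size are the assumptions from above)
of how big a flat, non-sparse table for a 1TB window gets:

/* Sketch: size of a flat table mapping a 1TB DMA window,
 * assuming 64K IOMMU pages and 8 bytes per table entry. */
#include <stdio.h>

int main(void)
{
        unsigned long long window = 1ULL << 40;        /* 1TB DMA window */
        unsigned long long iommu_page = 64 * 1024;     /* 64K IOMMU page */
        unsigned long long entry_size = 8;             /* bytes per entry */

        unsigned long long entries = window / iommu_page;
        unsigned long long table_bytes = entries * entry_size;

        printf("entries: %llu, table size: %llu MB\n",
               entries, table_bytes >> 20);            /* 16M entries, 128 MB */
        return 0;
}

So even the first 1TB already costs a 128MB contiguous table; MMIO placed
above that point simply does not fit in the window.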
--
Alexey