Message-ID: <41cc93b1-62b5-7fb6-060d-01982e68503b@amd.com>
Date: Tue, 6 Aug 2019 13:38:57 +0000
From: "Lendacky, Thomas" <Thomas.Lendacky@....com>
To: Christoph Hellwig <hch@....de>,
Lucas Stach <l.stach@...gutronix.de>
CC: "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Thiago Jung Bauermann <bauerman@...ux.ibm.com>,
Halil Pasic <pasic@...ux.ibm.com>
Subject: Re: Regression due to d98849aff879 (dma-direct: handle
DMA_ATTR_NO_KERNEL_MAPPING in common code)
On 8/6/19 6:33 AM, Christoph Hellwig wrote:
> On Tue, Aug 06, 2019 at 11:13:29AM +0200, Lucas Stach wrote:
>> Hi Christoph,
>>
>> I just found a regression where my NVMe device is no longer able to set
>> up its HMB.
>>
>> After the subject commit, dma_direct_alloc_pages() no longer
>> initializes dma_handle when DMA_ATTR_NO_KERNEL_MAPPING is set, as the
>> function now returns early.
>>
>> Now this could easily be fixed by adding the phys_to_dma translation
>> to the NO_KERNEL_MAPPING code path, but I'm not sure how this
>> interacts with the memory encryption handling set up later in the
>> function, so this should probably be looked at by someone with more
>> experience with this code than me.
>
> There is not much we can do about the memory encryption case here,
> as that requires a kernel address to mark the memory as unencrypted.
>
> So the obvious trivial fix is probably the right one:
This will present problems under SEV (and probably under SME if the DMA
mask doesn't support 48-bit DMA) when an NVMe device is passed through.
The documentation states that DMA_ATTR_NO_KERNEL_MAPPING exists to avoid
the time and resources involved in creating a kernel mapping on some
archs. Would it make sense to check for memory encryption using
force_dma_unencrypted() and override the attribute in that case, so that
the mapping is only created when the DMA must be forced unencrypted? Or
does x86 have issues that require this attribute? The code wouldn't be as
clean, and the dma_direct_free_pages() path would have to be handled as
well. I suspect Power and s390 may have the same concerns here (adding
them on Cc: just in case).
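Something like this near the top of dma_direct_alloc_pages(), ahead of
the DMA_ATTR_NO_KERNEL_MAPPING check, is roughly what I have in mind
(completely untested, just a sketch of the idea):

	/*
	 * Untested sketch: marking memory unencrypted requires a kernel
	 * virtual address, so don't honor DMA_ATTR_NO_KERNEL_MAPPING
	 * when the allocation must be returned unencrypted.
	 */
	if (force_dma_unencrypted())
		attrs &= ~DMA_ATTR_NO_KERNEL_MAPPING;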
Thanks,
Tom
>
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 59bdceea3737..c49120193309 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -135,6 +135,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>  		if (!PageHighMem(page))
>  			arch_dma_prep_coherent(page, size);
>  		/* return the page pointer as the opaque cookie */
> +		*dma_handle = phys_to_dma(dev, page_to_phys(page));
>  		return page;
>  	}
>
>