Message-ID: <yq5abjjl4o0j.fsf@kernel.org>
Date: Fri, 26 Dec 2025 14:29:24 +0530
From: Aneesh Kumar K.V <aneesh.kumar@...nel.org>
To: Suzuki K Poulose <suzuki.poulose@....com>, linux-kernel@...r.kernel.org,
iommu@...ts.linux.dev, linux-coco@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>, will@...nel.org,
maz@...nel.org, tglx@...utronix.de, robin.murphy@....com,
akpm@...ux-foundation.org, jgg@...pe.ca, steven.price@....com
Subject: Re: [PATCH v2 4/4] dma: direct: set decrypted flag for remapped dma
 allocations

Aneesh Kumar K.V <aneesh.kumar@...nel.org> writes:

> Suzuki K Poulose <suzuki.poulose@....com> writes:
>
>> On 21/12/2025 16:09, Aneesh Kumar K.V (Arm) wrote:
>>> Devices that are DMA non-coherent and need a remap were skipping
>>> dma_set_decrypted(), leaving buffers encrypted even when the device
>>> requires unencrypted access. Move the call after the remap
>>> branch so both paths mark the allocation decrypted (or fail cleanly)
>>> before use.
>>>
>>> Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
>>> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
>>> ---
>>> kernel/dma/direct.c | 8 +++-----
>>> 1 file changed, 3 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>>> index 3448d877c7c6..a62dc25524cc 100644
>>> --- a/kernel/dma/direct.c
>>> +++ b/kernel/dma/direct.c
>>> @@ -271,9 +271,6 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>>>  	if (remap) {
>>>  		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
>>>
>>> -		if (force_dma_unencrypted(dev))
>>> -			prot = pgprot_decrypted(prot);
>>
>> This would be problematic, wouldn't it? We don't support decrypted
>> mappings on a vmap area for arm64. If we move this down, we might
>> actually use the vmapped area. I'm not sure other archs are fine with
>> "decrypting" a "vmap" address.
>>
>> If we map the "vmap" address with pgprot_decrypted, we could go ahead
>> and further map the linear map (i.e., page_address(page)) decrypted
>> and get everything working.
>
> We still have a problem w.r.t. the free path:
>
> dma_direct_free():
>
> 	if (is_vmalloc_addr(cpu_addr)) {
> 		vunmap(cpu_addr);
> 	} else {
> 		if (dma_set_encrypted(dev, cpu_addr, size))
> 			return;
> 	}
>
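
One way out is to not key the re-encrypt step off cpu_addr at all: the
linear-map alias can always be recovered from the dma address, whether
or not the buffer was remapped. A rough sketch of that idea (error
unwinding and the uncached handling omitted; lm_addr is just an
illustrative local name):

	/*
	 * Sketch only: recover the linear-map alias from the dma
	 * address, so re-encryption no longer depends on which branch
	 * the allocation took. cpu_addr may be a vmalloc alias here.
	 */
	void *lm_addr = phys_to_virt(dma_to_phys(dev, dma_addr));

	if (is_vmalloc_addr(cpu_addr))
		vunmap(cpu_addr);

	if (force_dma_unencrypted(dev) &&
	    set_memory_encrypted((unsigned long)lm_addr, PFN_UP(size)))
		return;	/* better to leak than free pages we can't re-encrypt */
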
How about the change below?

commit 8261c528961c6959b85de87c5659ce9081dc85b7
Author: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
Date: Fri Dec 19 14:46:20 2025 +0530

dma: direct: set decrypted flag for remapped DMA allocations

Devices that are DMA non-coherent and require a remap were skipping
dma_set_decrypted(), leaving DMA buffers encrypted even when the device
requires unencrypted access. Move the call after the if (remap) branch
so that both direct and remapped allocation paths correctly mark the
allocation as decrypted (or fail cleanly) before use.

If CMA allocations return highmem pages, treat this as an allocation
error so that dma_direct_alloc() falls back to the standard allocation
path. This is required because some architectures (e.g. arm64) cannot
mark vmap addresses as decrypted, and highmem pages necessarily require
a vmap remap. As a result, such allocations cannot be safely marked
unencrypted for DMA.

Other architectures (e.g. x86) do not have this limitation, but instead
of making the check architecture-specific, I have made the restriction
apply whenever the device requires unencrypted DMA access. This was
done for simplicity.

Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7c0b55ca121f..811de37ad81c 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -264,6 +264,15 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	 * remapped to return a kernel virtual address.
 	 */
 	if (PageHighMem(page)) {
+		/*
+		 * Unencrypted/shared DMA requires a linear-mapped buffer
+		 * address to look up the PFN and set architecture-required PFN
+		 * attributes. This is not possible with HighMem, so return
+		 * failure.
+		 */
+		if (force_dma_unencrypted(dev))
+			goto out_free_pages;
+
 		remap = true;
 		set_uncached = false;
 	}
@@ -284,7 +293,13 @@
 			goto out_free_pages;
 	} else {
 		ret = page_address(page);
-		if (dma_set_decrypted(dev, ret, size))
+	}
+
+	if (force_dma_unencrypted(dev)) {
+		void *lm_addr;
+
+		lm_addr = page_address(page);
+		if (set_memory_decrypted((unsigned long)lm_addr, PFN_UP(size)))
 			goto out_leak_pages;
 	}
 
@@ -349,8 +364,16 @@ void dma_direct_free(struct device *dev, size_t size,
 	} else {
 		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 			arch_dma_clear_uncached(cpu_addr, size);
-		if (dma_set_encrypted(dev, cpu_addr, size))
+	}
+
+	if (force_dma_unencrypted(dev)) {
+		void *lm_addr;
+
+		lm_addr = phys_to_virt(dma_to_phys(dev, dma_addr));
+		if (set_memory_encrypted((unsigned long)lm_addr, PFN_UP(size))) {
+			pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
 			return;
+		}
 	}
 
 	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
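
For review purposes, a condensed, illustration-only view of the
allocation path with this applied (labels kept, error unwinding and the
uncached handling elided; this is not the literal patched function):

	if (PageHighMem(page) && force_dma_unencrypted(dev))
		goto out_free_pages;	/* no linear-map alias to decrypt */

	if (remap)
		ret = dma_common_contiguous_remap(page, size, prot,
						  __builtin_return_address(0));
	else
		ret = page_address(page);
	if (!ret)
		goto out_free_pages;

	/* decrypt the linear-map alias on both paths */
	if (force_dma_unencrypted(dev) &&
	    set_memory_decrypted((unsigned long)page_address(page),
				 PFN_UP(size)))
		goto out_leak_pages;

Keying both the decrypt in alloc and the re-encrypt in free off the
linear-map alias keeps the two paths symmetric, regardless of whether
the CPU-visible mapping is remapped.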