Message-ID: <20250811002357.714109-2-sdonthineni@nvidia.com>
Date: Sun, 10 Aug 2025 19:23:56 -0500
From: Shanker Donthineni <sdonthineni@...dia.com>
To: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>, Suzuki K Poulose
<suzuki.poulose@....com>, Steven Price <steven.price@....com>,
<linux-arm-kernel@...ts.infradead.org>
CC: Robin Murphy <robin.murphy@....com>, Gavin Shan <gshan@...hat.com>, "Mike
Rapoport" <rppt@...nel.org>, Shanker Donthineni <sdonthineni@...dia.com>,
Vikram Sethi <vsethi@...dia.com>, Jason Sequeira <jsequeira@...dia.com>, "Dev
Jain" <dev.jain@....com>, David Rientjes <rientjes@...gle.com>,
<linux-kernel@...r.kernel.org>, <iommu@...ts.linux.dev>
Subject: [PATCH 1/2] dma/pool: Use vmap() address for memory encryption helpers on ARM64
In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
are currently called with page_to_virt(page). On ARM64 with
CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
page_to_virt(page) returns the linear-map address, not the address the
pool is actually mapped and accessed through.
Using this incorrect address can cause encryption attribute updates to
be applied to the wrong memory region. On ARM64 systems with memory
encryption enabled (e.g. CCA), this can lead to data corruption or
crashes.
Fix this by using the vmap() address ('addr') on ARM64 when invoking
the memory encryption helpers, while retaining the existing
page_to_virt(page) usage for other architectures.
Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
Signed-off-by: Shanker Donthineni <sdonthineni@...dia.com>
---
kernel/dma/pool.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 7b04f7575796b..ba08a301590fd 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
{
unsigned int order;
struct page *page = NULL;
+ void *vaddr;
void *addr;
int ret = -ENOMEM;
@@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
* Memory in the atomic DMA pools must be unencrypted, the pools do not
* shrink so no re-encryption occurs in dma_direct_free().
*/
- ret = set_memory_decrypted((unsigned long)page_to_virt(page),
- 1 << order);
+ vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
+ ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);
if (ret)
goto remove_mapping;
ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
@@ -126,8 +127,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
return 0;
encrypt_mapping:
- ret = set_memory_encrypted((unsigned long)page_to_virt(page),
- 1 << order);
+ ret = set_memory_encrypted((unsigned long)vaddr, 1 << order);
if (WARN_ON_ONCE(ret)) {
/* Decrypt succeeded but encrypt failed, purposely leak */
goto out;
--
2.25.1