Message-ID: <8fd39851-4bbc-4a31-84d9-5939b519d308@arm.com>
Date: Mon, 11 Aug 2025 13:09:52 +0100
From: Robin Murphy <robin.murphy@....com>
To: Mike Rapoport <rppt@...nel.org>,
Shanker Donthineni <sdonthineni@...dia.com>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Steven Price <steven.price@....com>, linux-arm-kernel@...ts.infradead.org,
Gavin Shan <gshan@...hat.com>, Vikram Sethi <vsethi@...dia.com>,
Jason Sequeira <jsequeira@...dia.com>, Dev Jain <dev.jain@....com>,
David Rientjes <rientjes@...gle.com>, linux-kernel@...r.kernel.org,
iommu@...ts.linux.dev
Subject: Re: [RESEND PATCH 1/2] dma/pool: Use vmap() address for memory
encryption helpers on ARM64
On 2025-08-11 9:48 am, Mike Rapoport wrote:
> On Sun, Aug 10, 2025 at 07:50:34PM -0500, Shanker Donthineni wrote:
>> In atomic_pool_expand(), set_memory_encrypted()/set_memory_decrypted()
>> are currently called with page_to_virt(page). On ARM64 with
>> CONFIG_DMA_DIRECT_REMAP=y, the atomic pool is mapped via vmap(), so
>> page_to_virt(page) does not reference the actual mapped region.
>>
>> Using this incorrect address can cause encryption attribute updates to
>> be applied to the wrong memory region. On ARM64 systems with memory
>> encryption enabled (e.g. CCA), this can lead to data corruption or
>> crashes.
>>
>> Fix this by using the vmap() address ('addr') on ARM64 when invoking
>> the memory encryption helpers, while retaining the existing
>> page_to_virt(page) usage for other architectures.
>>
>> Fixes: 76a19940bd62 ("dma-direct: atomic allocations must come from atomic coherent pools")
>> Signed-off-by: Shanker Donthineni <sdonthineni@...dia.com>
>> ---
>> kernel/dma/pool.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
>> index 7b04f7575796b..ba08a301590fd 100644
>> --- a/kernel/dma/pool.c
>> +++ b/kernel/dma/pool.c
>> @@ -81,6 +81,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  {
>>  	unsigned int order;
>>  	struct page *page = NULL;
>> +	void *vaddr;
>>  	void *addr;
>>  	int ret = -ENOMEM;
>>
>> @@ -113,8 +114,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  	 * Memory in the atomic DMA pools must be unencrypted, the pools do not
>>  	 * shrink so no re-encryption occurs in dma_direct_free().
>>  	 */
>> -	ret = set_memory_decrypted((unsigned long)page_to_virt(page),
>> -				   1 << order);
>> +	vaddr = IS_ENABLED(CONFIG_ARM64) ? addr : page_to_virt(page);
>
> There's address calculation just before this code:
>
> #ifdef CONFIG_DMA_DIRECT_REMAP
> 	addr = dma_common_contiguous_remap(page, pool_size,
> 					   pgprot_dmacoherent(PAGE_KERNEL),
> 					   __builtin_return_address(0));
> 	if (!addr)
> 		goto free_page;
> #else
> 	addr = page_to_virt(page);
> #endif
>
> It should be enough to s/page_to_virt(page)/addr in the call to
> set_memory_decrypted().
Indeed, and either way this is clearly a DMA_DIRECT_REMAP concern rather
than just an ARM64 one.
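
i.e. something like the below (untested sketch, not a formal patch) ought
to cover both configurations, since 'addr' is already page_to_virt(page)
in the !CONFIG_DMA_DIRECT_REMAP case:

```c
	/*
	 * 'addr' is the vmap() address under CONFIG_DMA_DIRECT_REMAP and
	 * page_to_virt(page) otherwise, so it is the correct address to
	 * pass to the memory encryption helpers either way, with no need
	 * for an ARM64-specific 'vaddr'.
	 */
	ret = set_memory_decrypted((unsigned long)addr, 1 << order);
	if (ret)
		goto remove_mapping;
```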
Thanks,
Robin.
>> +	ret = set_memory_decrypted((unsigned long)vaddr, 1 << order);
>>  	if (ret)
>>  		goto remove_mapping;
>>  	ret = gen_pool_add_virt(pool, (unsigned long)addr, page_to_phys(page),
>> @@ -126,8 +127,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>>  	return 0;
>>
>>  encrypt_mapping:
>> -	ret = set_memory_encrypted((unsigned long)page_to_virt(page),
>> -				   1 << order);
>> +	ret = set_memory_encrypted((unsigned long)vaddr, 1 << order);
>>  	if (WARN_ON_ONCE(ret)) {
>>  		/* Decrypt succeeded but encrypt failed, purposely leak */
>>  		goto out;
>> --
>> 2.25.1
>>
>