Message-ID: <b5ffd56c-ef29-4889-b2ac-ba334e86d059@arm.com>
Date: Tue, 25 Feb 2025 16:14:29 +0000
From: Suzuki K Poulose <suzuki.poulose@....com>
To: Robin Murphy <robin.murphy@....com>, will@...nel.org,
catalin.marinas@....com
Cc: maz@...nel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, gregkh@...uxfoundation.org,
aneesh.kumar@...nel.org, steven.price@....com,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Christoph Hellwig <hch@....de>, Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v2 3/3] arm64: realm: Use aliased addresses for device DMA
to shared buffers
On 25/02/2025 13:04, Robin Murphy wrote:
> On 2025-02-19 10:07 pm, Suzuki K Poulose wrote:
>> When a device performs DMA to a shared buffer using physical
>> addresses (i.e. without Stage 1 translation), the device must use the
>> "{I}PA address" with the top bit set in a Realm. This is to make sure
>> that a trusted device is able to write to shared buffers as well as
>> to the protected buffers. Thus, a Realm must always program the full
>> address including the "protection" bit, much like the AMD SME
>> encryption bit.
>>
>> Enable this by providing arm64-specific
>> dma_{encrypted,decrypted,clear_encryption} helpers for Realms. Please
>> note that the VMM needs to similarly make sure that the SMMU Stage 2
>> in the Non-secure world is set up accordingly to map the IPA at the
>> unprotected alias.
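
(To illustrate the addressing scheme above with made-up numbers, not
part of the patch: assume a hypothetical 40-bit Realm IPA space, so the
protection attribute is bit 39 and prot_ns_shared == BIT(39).)

	/* Hypothetical 40-bit IPA space: protection attribute is bit 39. */
	dma_addr_t prot_bit = 1UL << 39;	/* prot_ns_shared            */
	dma_addr_t buf      = 0x80000000;	/* protected alias of buffer */
	dma_addr_t dev_addr = buf | prot_bit;	/* 0x8080000000: unprotected
						 * alias the device must use */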
>>
>> Cc: Will Deacon <will@...nel.org>
>> Cc: Jean-Philippe Brucker <jean-philippe@...aro.org>
>> Cc: Catalin Marinas <catalin.marinas@....com>
>> Cc: Robin Murphy <robin.murphy@....com>
>> Cc: Steven Price <steven.price@....com>
>> Cc: Christoph Hellwig <hch@....de>
>> Cc: Tom Lendacky <thomas.lendacky@....com>
>> Cc: Aneesh Kumar K.V <aneesh.kumar@...nel.org>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
>> ---
>> arch/arm64/include/asm/mem_encrypt.h | 22 ++++++++++++++++++++++
>> 1 file changed, 22 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>> index f8f78f622dd2..aeda3bba255e 100644
>> --- a/arch/arm64/include/asm/mem_encrypt.h
>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>>  	return is_realm_world();
>>  }
>> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
>> +{
>> +	if (is_realm_world())
>> +		daddr |= prot_ns_shared;
>> +	return daddr;
>> +}
>> +#define dma_decrypted dma_decrypted
>> +
>> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
>> +{
>> +	if (is_realm_world())
>> +		daddr &= prot_ns_shared - 1;
>
> Nit: is there a reason this isn't the direct inverse of the other
> operation, i.e. "daddr &= ~prot_ns_shared"? If so, it might be worth
It could be. The IPA space of a realm is split in half, with the lower
half protected/encrypted and everything above it unprotected.
Technically any addr >= prot_ns_shared is "unencrypted" (even though it
may be invalid, if >= BIT(IPA_size)), so to cover that, I masked off
everything from the protection bit upwards. But now that I think of it,
it is much better to trigger a Stage 2 fault if the address is illegal
(i.e., >= BIT(IPA_size)) than to corrupt some valid memory by masking
off the top bits (beyond prot_ns_shared).
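
For example (a rough sketch, using the same hypothetical 40-bit IPA
space as above, i.e. prot_ns_shared == BIT(39)):

	dma_addr_t bogus = (1UL << 41) | 0x1000; /* illegal: beyond the IPA space */

	/*
	 * v2 behaviour: clears the protection bit and everything above
	 * it, so the bogus address is silently folded onto (potentially
	 * valid) protected memory at 0x1000.
	 */
	dma_addr_t masked = bogus & (prot_ns_shared - 1);

	/*
	 * Direct inverse: only the protection bit is cleared; the
	 * illegal bit 41 survives, so the access takes a Stage 2 fault
	 * instead of corrupting memory.
	 */
	dma_addr_t faulting = bogus & ~prot_ns_shared;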
Cheers
Suzuki
> dropping a comment why we're doing slightly unintuitive arithmetic on a
> pagetable attribute (and if not then maybe just do the more obvious
> thing). I doubt anyone's in a rush to support TBI for DMA, and this
> would be far from the only potential hiccup for that, but still... :)
>
> Thanks,
> Robin.
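
If we go with the direct inverse, a comment along the lines you suggest
could look something like this (just a sketch, not a respin):

	static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
	{
		/*
		 * prot_ns_shared is the top IPA bit, selecting the
		 * unprotected alias. Clear only that bit: any address
		 * beyond the IPA range then faults at Stage 2 instead
		 * of silently aliasing onto protected memory.
		 */
		if (is_realm_world())
			daddr &= ~prot_ns_shared;
		return daddr;
	}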
>
>> +	return daddr;
>> +}
>> +#define dma_encrypted dma_encrypted
>> +
>> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
>> +{
>> +	return dma_encrypted(daddr);
>> +}
>> +#define dma_clear_encryption dma_clear_encryption
>> +
>> #endif /* __ASM_MEM_ENCRYPT_H */
>