Message-ID: <5a135acb-b94d-47f5-9436-2d558cf78268@arm.com>
Date: Tue, 25 Feb 2025 16:31:10 +0000
From: Suzuki K Poulose <suzuki.poulose@....com>
To: Gavin Shan <gshan@...hat.com>, will@...nel.org, robin.murphy@....com,
catalin.marinas@....com
Cc: maz@...nel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, gregkh@...uxfoundation.org,
aneesh.kumar@...nel.org, steven.price@....com,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Christoph Hellwig <hch@....de>, Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v2 3/3] arm64: realm: Use aliased addresses for device DMA
to shared buffers
Hi Gavin,
Thanks for the review.
On 25/02/2025 05:28, Gavin Shan wrote:
> On 2/25/25 3:24 PM, Gavin Shan wrote:
>> On 2/20/25 8:07 AM, Suzuki K Poulose wrote:
>>> When a device performs DMA to a shared buffer using physical addresses
>>> (without Stage 1 translation), the device must use the "{I}PA address"
>>> with the top bit set in the Realm. This is to make sure that a trusted
>>> device will be able to write to the shared buffers as well as the
>>> protected buffers. Thus, a Realm must always program the full address,
>>> including the "protection" bit, like the AMD SME encryption bits.
>>>
>>> Enable this by providing arm64-specific
>>> dma_{encrypted,decrypted,clear_encryption} helpers for Realms. Please
>>> note that the VMM needs to similarly make sure that the SMMU Stage 2
>>> in the Non-secure world is set up accordingly to map the IPA at the
>>> unprotected alias.
>>>
>>> Cc: Will Deacon <will@...nel.org>
>>> Cc: Jean-Philippe Brucker <jean-philippe@...aro.org>
>>> Cc: Catalin Marinas <catalin.marinas@....com>
>>> Cc: Robin Murphy <robin.murphy@....com>
>>> Cc: Steven Price <steven.price@....com>
>>> Cc: Christoph Hellwig <hch@....de>
>>> Cc: Tom Lendacky <thomas.lendacky@....com>
>>> Cc: Aneesh Kumar K.V <aneesh.kumar@...nel.org>
>>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@....com>
>>> ---
>>> arch/arm64/include/asm/mem_encrypt.h | 22 ++++++++++++++++++++++
>>> 1 file changed, 22 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>>> index f8f78f622dd2..aeda3bba255e 100644
>>> --- a/arch/arm64/include/asm/mem_encrypt.h
>>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>>> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>>> return is_realm_world();
>>> }
>>> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
>>> +{
>>> + if (is_realm_world())
>>> + daddr |= prot_ns_shared;
>>> + return daddr;
>>> +}
>>> +#define dma_decrypted dma_decrypted
>>> +
>>
>> There is an existing macro (PROT_NS_SHARED) which is the preferred way
>> to return prot_ns_shared or 0, depending on the availability of the
>> realm capability. However, that macro needs to be improved a bit so
>> that it can be used here: we need it to return 0UL to match the type
>> of prot_ns_shared (unsigned long).
>>
>> -#define PROT_NS_SHARED (is_realm_world() ? prot_ns_shared : 0)
>> +#define PROT_NS_SHARED (is_realm_world() ? prot_ns_shared : 0UL)
>>
>> After that, the chunk of code can be as below.
>>
>> return daddr | PROT_NS_SHARED;
>>
>>> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
>>> +{
>>> + if (is_realm_world())
>>> + daddr &= prot_ns_shared - 1;
>>> + return daddr;
>>> +}
>>> +#define dma_encrypted dma_encrypted
>>> +
>>
>> With PROT_NS_SHARED, it can become something like below. (PROT_NS_SHARED - 1)
>> is equivalent to -1UL, and 'daddr & -1UL' is fine since it does nothing.
>>
>
> I meant (PROT_NS_SHARED - 1) is equivalent to -1UL when no realm capability
> is around :)
I didn't want this to be there ;-). But with Robin's comment, I think we
can revert to PROT_NS_SHARED.
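
For what it's worth, the suggested shape can be sketched as a stand-alone
snippet. The prot_ns_shared value (bit 55) and the is_realm_world() stub
below are illustrative stand-ins, not the kernel's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Stand-ins: in the kernel, prot_ns_shared is the Realm's IPA
 * "protection" bit and is_realm_world() probes the realm capability.
 * Bit 55 here is purely illustrative. */
static const unsigned long prot_ns_shared = 1UL << 55;
static bool realm = true;
static bool is_realm_world(void) { return realm; }

/* Gavin's suggested form: fold the realm check into the macro,
 * returning 0UL outside a Realm so the helpers become plain masks. */
#define PROT_NS_SHARED	(is_realm_world() ? prot_ns_shared : 0UL)

static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
{
	/* Set the protection bit; a no-op outside a Realm. */
	return daddr | PROT_NS_SHARED;
}

static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
{
	/* Outside a Realm, PROT_NS_SHARED - 1 is -1UL, so the mask does
	 * nothing; inside, it clears the protection bit (valid because
	 * prot_ns_shared is a single bit above the IPA range). */
	return daddr & (PROT_NS_SHARED - 1);
}
```

This keeps the is_realm_world() test in one place and leaves both helpers
branch-free at the call sites.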
Cheers
Suzuki
>
>> return daddr & (PROT_NS_SHARED - 1);
>>
>>> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
>>> +{
>>> + return dma_encrypted(daddr);
>>> +}
>>> +#define dma_clear_encryption dma_clear_encryption
>>> +
>>> #endif /* __ASM_MEM_ENCRYPT_H */
>
> Thanks,
> Gavin
>