Message-ID: <e55c8490-4342-4ef4-9d18-23c3dc3a526f@arm.com>
Date: Thu, 27 Jun 2024 15:34:15 +0100
From: Steven Price <steven.price@....com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: kvm@...r.kernel.org, kvmarm@...ts.linux.dev,
Suzuki K Poulose <suzuki.poulose@....com>, Marc Zyngier <maz@...nel.org>,
Will Deacon <will@...nel.org>, James Morse <james.morse@....com>,
Oliver Upton <oliver.upton@...ux.dev>, Zenghui Yu <yuzenghui@...wei.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Joey Gouly <joey.gouly@....com>, Alexandru Elisei
<alexandru.elisei@....com>, Christoffer Dall <christoffer.dall@....com>,
Fuad Tabba <tabba@...gle.com>, linux-coco@...ts.linux.dev,
Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>
Subject: Re: [PATCH v3 09/14] arm64: Enable memory encrypt for Realms
On 10/06/2024 18:27, Catalin Marinas wrote:
> On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
>> +static int __set_memory_encrypted(unsigned long addr,
>> +				  int numpages,
>> +				  bool encrypt)
>> +{
>> +	unsigned long set_prot = 0, clear_prot = 0;
>> +	phys_addr_t start, end;
>> +	int ret;
>> +
>> +	if (!is_realm_world())
>> +		return 0;
>> +
>> +	if (!__is_lm_address(addr))
>> +		return -EINVAL;
>> +
>> +	start = __virt_to_phys(addr);
>> +	end = start + numpages * PAGE_SIZE;
>> +
>> +	/*
>> +	 * Break the mapping before we make any changes to avoid stale TLB
>> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
>> +	 */
>> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
>> +				     __pgprot(0),
>> +				     __pgprot(PTE_VALID));
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (encrypt) {
>> +		clear_prot = PROT_NS_SHARED;
>> +		ret = rsi_set_memory_range_protected(start, end);
>> +	} else {
>> +		set_prot = PROT_NS_SHARED;
>> +		ret = rsi_set_memory_range_shared(start, end);
>> +	}
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	set_prot |= PTE_VALID;
>> +
>> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
>> +				     __pgprot(set_prot),
>> +				     __pgprot(clear_prot));
>> +}
>
> This works, does break-before-make and also rejects vmalloc() ranges
> (for the time being).
>
> One particular aspect I don't like is doing the TLBI twice. It's
> sufficient to do it when you first make the pte invalid. We could guess
> this in __change_memory_common() if set_mask has PTE_VALID. The call
> sites are restricted to this file, just add a comment. An alternative
> would be to add a bool flush argument to this function.
>
I'm always a bit scared of changing this sort of thing, but AFAICT the
below should be safe:
-	flush_tlb_kernel_range(start, start + size);
+	/*
+	 * If the memory is being made valid without changing any other bits
+	 * then a TLBI isn't required as a non-valid entry cannot be cached in
+	 * the TLB.
+	 */
+	if (set_mask != PTE_VALID || clear_mask)
+		flush_tlb_kernel_range(start, start + size);
It will affect users of set_memory_valid() by removing the TLBI when
marking memory as valid.
I'll add this change as a separate patch so it can be reverted easily
if there's something I've overlooked.
Thanks,
Steve