Message-ID: <ZS+bA2l/yh0zZLmd@gmail.com>
Date: Wed, 18 Oct 2023 10:44:51 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Rick Edgecombe <rick.p.edgecombe@...el.com>
Cc: x86@...nel.org, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, hpa@...or.com, luto@...nel.org,
peterz@...radead.org, kirill.shutemov@...ux.intel.com,
elena.reshetova@...el.com, isaku.yamahata@...el.com,
seanjc@...gle.com, Michael Kelley <mikelley@...rosoft.com>,
thomas.lendacky@....com, decui@...rosoft.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org
Subject: Re: [PATCH 02/10] x86/mm/cpa: Reject incorrect encryption change
requests
* Rick Edgecombe <rick.p.edgecombe@...el.com> wrote:
> Kernel memory is "encrypted" by default. Some callers may "decrypt" it
> in order to share it with things outside the kernel like a device or an
> untrusted VMM.
>
> There is nothing to stop set_memory_encrypted() from being passed memory
> that is already "encrypted" (aka. "private" on TDX). In fact, some
> callers do this because ... $REASONS. Unfortunately, part of the TDX
> decrypted=>encrypted transition is truly one way*. It can't handle
> being asked to encrypt an already encrypted page
>
> Allow __set_memory_enc_pgtable() to detect already-encrypted memory
> before it hits the TDX code.
>
> * The one way part is "page acceptance"
>
> [commit log written by Dave Hansen]
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
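
To make the failure mode concrete for anyone else reading the thread: the
pattern being rejected is a caller requesting a transition into the state
the memory is already in. Hypothetical example (not from this patch or any
in-tree user):

    static int demo_share_and_reclaim(unsigned long addr, int numpages)
    {
            int ret;

            ret = set_memory_decrypted(addr, numpages);     /* private -> shared */
            if (ret)
                    return ret;

            /* ... hand the buffer to the untrusted VMM, then take it back ... */

            ret = set_memory_encrypted(addr, numpages);     /* shared -> private, page gets accepted */
            if (ret)
                    return ret;

            /*
             * Redundant second request: this used to reach the TDX code and
             * trip over the one-way page acceptance; with this patch it is
             * caught in __set_memory_enc_pgtable() instead.
             */
            return set_memory_encrypted(addr, numpages);
    }
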
> ---
> arch/x86/mm/pat/set_memory.c | 41 +++++++++++++++++++++++++++++++++++-
> 1 file changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index bda9f129835e..1238b0db3e33 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2122,6 +2122,21 @@ int set_memory_global(unsigned long addr, int numpages)
> __pgprot(_PAGE_GLOBAL), 0);
> }
>
> +static bool kernel_vaddr_encryped(unsigned long addr, bool enc)
> +{
> + unsigned int level;
> + pte_t *pte;
> +
> + pte = lookup_address(addr, &level);
> + if (!pte)
> + return false;
> +
> + if (enc)
> + return pte_val(*pte) == cc_mkenc(pte_val(*pte));
> +
> + return pte_val(*pte) == cc_mkdec(pte_val(*pte));
> +}
> +
> /*
> * __set_memory_enc_pgtable() is used for the hypervisors that get
> * informed about "encryption" status via page tables.
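
Side note for readers following along: the helper works because cc_mkenc()
and cc_mkdec() are no-ops on a PTE that is already in the requested state,
so "value unchanged by cc_mkenc()" means "already encrypted". A stripped
down illustration of the idea, with a single hypothetical shared bit
standing in for the real cc_mask handling in arch/x86/coco/core.c:

    /* Illustration only -- not the in-tree cc_mkenc()/cc_mkdec(). */
    #define DEMO_SHARED_BIT (1ULL << 51)            /* hypothetical bit position */

    static u64 demo_mkenc(u64 val)
    {
            return val & ~DEMO_SHARED_BIT;          /* TDX: private == shared bit clear */
    }

    static bool demo_pte_already_encrypted(u64 pteval)
    {
            /* "make encrypted" being a no-op means the PTE is already private */
            return pteval == demo_mkenc(pteval);
    }
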
> @@ -2130,7 +2145,7 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
> {
> pgprot_t empty = __pgprot(0);
> struct cpa_data cpa;
> - int ret;
> + int ret, numpages_in_state = 0;
>
> /* Should not be working on unaligned addresses */
> if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
> @@ -2143,6 +2158,30 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
> cpa.mask_clr = enc ? pgprot_decrypted(empty) : pgprot_encrypted(empty);
> cpa.pgd = init_mm.pgd;
>
> + /*
> + * If any page is already in the right state, bail with an error
> + * because the code doesn't handled it. This is likely because
Grammar mistake here: s/doesn't handled it/doesn't handle it/
> + * something has gone wrong and isn't worth optimizing for.
> + *
> + * If all the memory pages are already in the desired state return
> + * success.
Missing comma: "... already in the desired state, return success."
> + *
> + * kernel_vaddr_encryped() does not synchronize against huge page
> + * splits so take pgd_lock. A caller doing strange things could
Missing comma: "... against huge page splits, so take pgd_lock."
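
Also, my quoting cut the hunk off above, so for the archives: the shape of
the check being described is roughly the following (my sketch, not the
patch text; the error value is just a placeholder):

    int i;

    /*
     * Hold pgd_lock so a concurrent huge page split cannot rewrite the
     * page tables underneath kernel_vaddr_encryped().
     */
    spin_lock(&pgd_lock);
    for (i = 0; i < numpages; i++) {
            if (kernel_vaddr_encryped(addr + i * PAGE_SIZE, enc))
                    numpages_in_state++;
    }
    spin_unlock(&pgd_lock);

    if (numpages_in_state == numpages)
            return 0;               /* all pages already in the desired state */
    if (numpages_in_state)
            return -EINVAL;         /* mixed state: bail with an error */
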
Thanks,
Ingo