Message-ID: <20230530005736.ndwza2i5n7wrki5h@box.shutemov.name>
Date: Tue, 30 May 2023 03:57:36 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: Sathyanarayanan Kuppuswamy
<sathyanarayanan.kuppuswamy@...ux.intel.com>,
thomas.lendacky@....com
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
dave.hansen@...el.com, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, decui@...rosoft.com, rick.p.edgecombe@...el.com,
seanjc@...gle.com, x86@...nel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted()
and load_unaligned_zeropad()
On Fri, May 26, 2023 at 03:10:56PM -0700, Sathyanarayanan Kuppuswamy wrote:
>
>
> On 5/26/23 5:02 AM, Kirill A. Shutemov wrote:
> > Touching privately mapped GPA that is not properly converted to private
> > with MapGPA and accepted leads to unrecoverable exit to VMM.
> >
> > load_unaligned_zeropad() can touch memory that is not owned by the
> > caller, but just happened to next after the owned memory.
>
> /s/to/to be ?
Yep, my bad.
> > This load_unaligned_zeropad() behaviour makes it important when kernel
> > asks VMM to convert a GPA from shared to private or back. Kernel must
> > never have a page mapped into direct mapping (and aliases) as private
> > when the GPA is already converted to shared or when GPA is not yet
> > converted to private.
>
> I am wondering whether this issue exist in the AMD code?
>
> IMO, you can add some info on the window in set_memory_encrypted()
> where this race exists.
I don't think AMD is affected by load_unaligned_zeropad() the same way
Intel is. But I'm not sure.
Tom, do you have any comments?
>
> >
> > guest.enc_status_change_prepare() called before adjusting direct mapping
> > and therefore it is responsible for converting the memory to private.
> >
> > guest.enc_status_change_finish() called after adjusting direct mapping
> > and it converts the memory to shared.
> >
> > It is okay to have a shared mapping of memory that is not converted
> > properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> > stepping on it.
>
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
> > Cc: stable@...r.kernel.org
> > ---
> > arch/x86/coco/tdx/tdx.c | 56 ++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 53 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> > index e146b599260f..59cc13e41aa6 100644
> > --- a/arch/x86/coco/tdx/tdx.c
> > +++ b/arch/x86/coco/tdx/tdx.c
> > @@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
> > return true;
> > }
> >
> > +static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
> > + bool enc)
> > +{
> > + /*
> > + * Only handle shared->private conversion here.
> > + * See the comment in tdx_early_init().
> > + */
> > + if (enc)
> > + return tdx_enc_status_changed(vaddr, numpages, enc);
> > + return true;
> > +}
> > +
> > +static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> > + bool enc)
> > +{
> > + /*
> > + * Only handle private->shared conversion here.
> > + * See the comment in tdx_early_init().
> > + */
> > + if (!enc)
> > + return tdx_enc_status_changed(vaddr, numpages, enc);
> > + return true;
> > +}
> > +
> > void __init tdx_early_init(void)
> > {
> > u64 cc_mask;
> > @@ -867,9 +891,35 @@ void __init tdx_early_init(void)
> > */
> > physical_mask &= cc_mask - 1;
> >
> > - x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
> > - x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
>
> I think you don't need to change the order here.
I wanted to emphasise that the comment is for the _prepare/_finish
callbacks and I hoped the re-ordering would help make that clear.
> > - x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
> > + /*
> > + * Touching privately mapped GPA that is not properly converted to
> > + * private with MapGPA and accepted leads to unrecoverable exit
> > + * to VMM.
> > + *
> > + * load_unaligned_zeropad() can touch memory that is not owned by
> > + * the caller, but just happened to next after the owned memory.
> > + * This load_unaligned_zeropad() behaviour makes it important when
> > + * kernel asks VMM to convert a GPA from shared to private or back.
> > + * Kernel must never have a page mapped into direct mapping (and
> > + * aliases) as private when the GPA is already converted to shared or
> > + * when GPA is not yet converted to private.
> > + *
> > + * guest.enc_status_change_prepare() called before adjusting direct
> > + * mapping and therefore it is responsible for converting the memory
> > + * to private.
> > + *
> > + * guest.enc_status_change_finish() called after adjusting direct
> > + * mapping and it converts the memory to shared.
> > + *
> > + * It is okay to have a shared mapping of memory that is not converted
> > + * properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> > + * stepping on it.
> > + */
> > + x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
> > + x86_platform.guest.enc_status_change_finish = tdx_enc_status_change_finish;
> > +
> > + x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
> > + x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
> >
> > pr_info("Guest detected\n");
> > }
>
> --
> Sathyanarayanan Kuppuswamy
> Linux Kernel Developer
--
Kiryl Shutsemau / Kirill A. Shutemov