Message-ID: <xqi2bkulnhen2vax5msbzczlaywx3dsc7ezpn7oo5qn7u7xzap@xmaseinov7tf>
Date: Mon, 18 Aug 2025 09:04:04 +0100
From: Kiryl Shutsemau <kirill@...temov.name>
To: Shixuan Zhao <shixuan.zhao@...mail.com>
Cc: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, linux-coco@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/tdx: support VM area addresses for
 tdx_enc_status_changed

On Fri, Aug 15, 2025 at 02:18:34PM -0400, Shixuan Zhao wrote:
> Sorry, I got the Message-ID wrong. Resending it.
>
> > Could you tell me more about the use-case?
>
> So basically I'm writing a project involving a kernel module that
> communicates with the host, which we plan to do via a shared buffer.
> That shared buffer has to be marked as shared so that the hypervisor
> can read it. The shared buffer needs a fixed physical address in our
> case, so we reserved a range and did ioremap for it.
So on the host side it is going to be non-contiguous. Is it going to
be some kind of scatter-gather? Seems inefficient.
What sizes are we talking about? When do you allocate it?
If it is small enough and/or allocated early enough, I would rather go
with a guest-physically-contiguous allocation.
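
For illustration, a minimal sketch of the contiguous approach; the
order, names, and error handling below are my assumptions, not code
from the patch:

/*
 * Hypothetical sketch: allocate a physically contiguous buffer from
 * the page allocator and share it with the host. Memory from
 * alloc_pages() lives in the direct mapping, so the existing
 * tdx_enc_status_changed() path handles it without this patch.
 */
#include <linux/gfp.h>
#include <linux/set_memory.h>

#define SHARED_BUF_ORDER 4	/* 64 KiB; illustrative size */

static struct page *shared_pages;

static int shared_buf_init(void)
{
	void *buf;
	int ret;

	shared_pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, SHARED_BUF_ORDER);
	if (!shared_pages)
		return -ENOMEM;

	buf = page_address(shared_pages);

	/* Flip the GPA range to shared via the enc_status_change hooks. */
	ret = set_memory_decrypted((unsigned long)buf, 1 << SHARED_BUF_ORDER);
	if (ret) {
		__free_pages(shared_pages, SHARED_BUF_ORDER);
		return ret;
	}

	/* The host can locate the buffer via page_to_phys(shared_pages). */
	return 0;
}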
> > I am not sure we ever want to convert vmalloc()ed memory to shared,
> > as it will result in fracturing the direct mapping.
>
> Currently in this patch, memory in the linear mapping is still
> handled the old way, so there is technically no change to existing
> behaviour: those ranges are still converted as one whole chunk
> instead of page-by-page. The patch merely adds a fallback path for
> vmalloc'ed, ioremap'ed, or any other mapping that is not in the
> linear mapping.
You cannot leave the same GPAs mapped as private in the direct mapping,
as it will cause an unrecoverable SEPT violation when someone touches
this memory, for instance via load_unaligned_zeropad().
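
For context, load_unaligned_zeropad() intentionally reads a full word
that may extend past the end of a buffer, relying on the fault handler
to zero-pad the overrun. A simplified illustration of the access
pattern (the wrapper is hypothetical, not kernel code):

/*
 * If "s" ends near a page boundary, the word-sized load below can
 * spill into the next page. When that page's GPA has been converted
 * to shared but the direct mapping still maps it as private, the
 * access raises an SEPT violation the guest cannot recover from.
 */
#include <asm/word-at-a-time.h>

static unsigned long peek_word(const char *s)
{
	/* May read bytes beyond the end of the object backing "s". */
	return load_unaligned_zeropad(s);
}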
> tdx_enc_status_changed is called by set_memory_decrypted/encrypted,
> which take vmalloc'ed addresses just fine on other platforms like
> SEV. It would make TDX an exception in not supporting VM area
> mappings.
>
> > And it seems to be the wrong layer to do it in. If we really need
> > to go down this path (I am not convinced), it has to be done in
> > set_memory.c.
>
> set_memory_decrypted handles vmalloc'ed memory. It's just that on
> TDX it has to call the TDX-specific enc_status_change_finish hook,
> which is tdx_enc_status_changed, and that function does not handle
> vmalloc'ed memory. This means that when someone calls
> set_memory_decrypted with a vmalloc'ed address, it fails on TDX but
> succeeds on other platforms (e.g., SEV).
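
In simplified form, the call path described above looks like this (a
sketch, not a verbatim excerpt; the hook signatures vary across kernel
versions):

/*
 * set_memory_decrypted() ends up in the platform's
 * enc_status_change_finish hook, which TDX wires to
 * tdx_enc_status_changed() -- the function this patch extends.
 */
int set_memory_decrypted(unsigned long addr, int numpages)
{
	/* ...update the shared/private bit in the page tables... */

	/* Tell the VMM/TDX module about the conversion. */
	if (!x86_platform.guest.enc_status_change_finish(addr, numpages,
							 false))
		return -EIO;

	return 0;
}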
I don't know the SEV specifics, but on TDX I don't want to add support
for vmalloc unless it is a must. It requires fracturing the direct
mapping, and we need a really strong reason to do that.
--
Kiryl Shutsemau / Kirill A. Shutemov