Message-ID: <aG3SnUUwFnIhiBp0@yzhao56-desk.sh.intel.com>
Date: Wed, 9 Jul 2025 10:23:25 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>, "seanjc@...gle.com"
<seanjc@...gle.com>, "Shutemov, Kirill" <kirill.shutemov@...el.com>,
"quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, "Li, Xiaoyao"
<xiaoyao.li@...el.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Hansen,
Dave" <dave.hansen@...el.com>, "david@...hat.com" <david@...hat.com>,
"thomas.lendacky@....com" <thomas.lendacky@....com>, "tabba@...gle.com"
<tabba@...gle.com>, "Li, Zhiquan1" <zhiquan1.li@...el.com>, "Du, Fan"
<fan.du@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "michael.roth@....com"
<michael.roth@....com>, "Weiny, Ira" <ira.weiny@...el.com>, "vbabka@...e.cz"
<vbabka@...e.cz>, "binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>,
"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "Yamahata, Isaku"
<isaku.yamahata@...el.com>, "Peng, Chao P" <chao.p.peng@...el.com>,
"Annapurve, Vishal" <vannapurve@...gle.com>, "jroedel@...e.de"
<jroedel@...e.de>, "Miao, Jun" <jun.miao@...el.com>, "pgonda@...gle.com"
<pgonda@...gle.com>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [RFC PATCH 02/21] x86/virt/tdx: Enhance tdh_mem_page_aug() to
support huge pages
On Tue, Jul 08, 2025 at 09:55:39PM +0800, Edgecombe, Rick P wrote:
> On Tue, 2025-07-08 at 16:48 +0800, Yan Zhao wrote:
> > On Thu, Apr 24, 2025 at 11:04:28AM +0800, Yan Zhao wrote:
> > > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> > > index f5e2a937c1e7..a66d501b5677 100644
> > > --- a/arch/x86/virt/vmx/tdx/tdx.c
> > > +++ b/arch/x86/virt/vmx/tdx/tdx.c
> > > @@ -1595,9 +1595,18 @@ u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u
> > According to the discussion in DPAMT [*],
> > "hpa here points to a 2M region that pamt_pages covers. We don't have
> > struct page that represents it. Passing 4k struct page would be
> > misleading IMO."
> >
> > Should we update tdh_mem_page_aug() accordingly to use hpa?
> > Or use struct folio instead?
> >
> > [*] https://lore.kernel.org/all/3coaqkcfp7xtpvh2x4kph55qlopupknm7dmzqox6fakzaedhem@a2oysbvbshpm/
>
> The original seamcall wrapper patches used "u64 hpa", etc everywhere. The
> feedback was that it was too error prone to not have types. We looked at using
> kvm types (hpa_t, etc), but the type checking was still just surface level [0].
>
> So the goal is to reduce errors and improve code readability. We can consider
> breaking symmetry if it is better that way. In this case though, why not use
> struct folio?
I'm OK with using struct folio.
My previous ask was based on two considerations:
1. hpa is simpler, and I didn't find Dave's NAK to Kirill's patch (v1 or v2).
2. When using struct folio, I also need to introduce a "start_idx" parameter
   (as below), because guest_memfd may provide a huge folio while KVM wants to
   map it at 4KB granularity.
u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct folio *folio,
		     unsigned long start_idx, u64 *ext_err1, u64 *ext_err2)
{
	struct page *start = folio_page(folio, start_idx);
	/* Number of 4KB pages covered by a mapping at @level */
	unsigned long npages = 1 << (level * PTE_SHIFT);
	struct tdx_module_args args = {
		.rcx = gpa | level,
		.rdx = tdx_tdr_pa(td),
		.r8 = page_to_phys(start),
	};
	u64 ret;

	/* The mapping must not run past the end of the folio */
	if (start_idx + npages > folio_nr_pages(folio))
		return TDX_SW_ERROR;

	/* Flush cache lines of each 4KB page backing the mapping before AUG */
	for (unsigned long i = 0; i < npages; i++)
		tdx_clflush_page(nth_page(start, i));

	ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);

	*ext_err1 = args.rcx;
	*ext_err2 = args.rdx;

	return ret;
}
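
Just to illustrate how the extra parameter would be used (a rough sketch only,
not part of the patch): the caller on the KVM side could compute start_idx from
the pfn and folio it already has, e.g. via folio_page_idx(). The surrounding
helper names below are borrowed from arch/x86/kvm/vmx/tdx.c as I remember them
and should be treated as illustrative, with error handling simplified.

/* Hypothetical caller sketch, assuming the folio-based prototype above */
static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
			    enum pg_level level, kvm_pfn_t pfn)
{
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
	int tdx_level = pg_level_to_tdx_sept_level(level);
	struct page *page = pfn_to_page(pfn);
	struct folio *folio = page_folio(page);
	/* Offset of the first 4KB page of this mapping within the folio */
	unsigned long start_idx = folio_page_idx(folio, page);
	gpa_t gpa = gfn_to_gpa(gfn);
	u64 entry, level_state;
	u64 err;

	err = tdh_mem_page_aug(&kvm_tdx->td, gpa, tdx_level, folio,
			       start_idx, &entry, &level_state);
	if (unlikely(tdx_operand_busy(err)))
		return -EBUSY;
	if (KVM_BUG_ON(err, kvm))
		return -EIO;

	return 0;
}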
> [0] https://lore.kernel.org/kvm/30d0cef5-82d5-4325-b149-0e99833b8785@intel.com/