Message-ID: <c5c30ac58b3b9ac84ec2b4e77c25a56763e80aa9.camel@intel.com>
Date: Wed, 11 Dec 2024 01:33:30 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "Zhao, Yan Y" <yan.y.zhao@...el.com>
CC: "Hansen, Dave" <dave.hansen@...el.com>, "seanjc@...gle.com"
<seanjc@...gle.com>, "Huang, Kai" <kai.huang@...el.com>, "x86@...nel.org"
<x86@...nel.org>, "binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>,
"Li, Xiaoyao" <xiaoyao.li@...el.com>, "isaku.yamahata@...il.com"
<isaku.yamahata@...il.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "tony.lindgren@...ux.intel.com"
<tony.lindgren@...ux.intel.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>, "Yamahata, Isaku"
<isaku.yamahata@...el.com>, "Hunter, Adrian" <adrian.hunter@...el.com>,
"yuan.yao@...el.com" <yuan.yao@...el.com>
Subject: Re: [RFC PATCH v2 4/6] x86/virt/tdx: Add SEAMCALL wrappers for TDX
page cache management
On Wed, 2024-12-11 at 09:23 +0800, Yan Zhao wrote:
> On Mon, Dec 02, 2024 at 05:03:14PM -0800, Rick Edgecombe wrote:
> ...
> > +u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
> > +{
> > +	struct tdx_module_args args = {};
> > +
> > +	args.rcx = tdx_tdr_pa(td) | ((u64)tdx_global_keyid << boot_cpu_data.x86_phys_bits);
> > +
> > +	return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
> > +}
> > +EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_tdr);
> tdx_global_keyid is of type u16 in the TDX spec and the TDX module.
> As Reinette pointed out, using u64 could cause overflow.
>
> Do we need to change all keyids to u16, including the one in
> tdh_mng_create() in patch 2, global_keyid and tdx_guest_keyid_start in
> arch/x86/virt/vmx/tdx/tdx.c, and kvm_tdx->hkid in arch/x86/kvm/vmx/tdx.c?
It seems like a good idea.
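
Sketch only, reusing the wrapper body quoted above: the keyid variables
get narrowed to the spec's 16 bits, but the cast up to u64 before the
shift still has to stay, since a bare u16 is promoted to int and shifting
an int by x86_phys_bits (> 31) is undefined behavior.

	u16 tdx_global_keyid;		/* narrowed to 16 bits per the spec */
	u16 tdx_guest_keyid_start;

	/* The (u64) cast must remain so the shift happens in a 64-bit type */
	args.rcx = tdx_tdr_pa(td) |
		   ((u64)tdx_global_keyid << boot_cpu_data.x86_phys_bits);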
>
> BTW, is it a good idea to move set_hkid_to_hpa() from KVM TDX to a
> common x86 header?
>
> static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
> {
> 	return pa | ((hpa_t)hkid << boot_cpu_data.x86_phys_bits);
> }
Ah, yep.
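
With the helper moved to a common header (say <asm/tdx.h>, just as an
example location) and the keyids narrowed to u16 as discussed above, the
wrapper quoted earlier could collapse to something like:

	u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
	{
		struct tdx_module_args args = {};

		/* Stamp the global keyid into the TDR PA via the shared helper */
		args.rcx = set_hkid_to_hpa(tdx_tdr_pa(td), tdx_global_keyid);

		return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
	}

That keeps the keyid-to-PA encoding in one place instead of open-coding
the shift in each SEAMCALL wrapper.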