Message-ID: <aCcFNWiX7qFzTLF+@yzhao56-desk.sh.intel.com>
Date: Fri, 16 May 2025 17:28:21 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>, "seanjc@...gle.com"
	<seanjc@...gle.com>, "Shutemov, Kirill" <kirill.shutemov@...el.com>,
	"quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, "Li, Xiaoyao"
	<xiaoyao.li@...el.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Hansen,
 Dave" <dave.hansen@...el.com>, "david@...hat.com" <david@...hat.com>,
	"thomas.lendacky@....com" <thomas.lendacky@....com>, "tabba@...gle.com"
	<tabba@...gle.com>, "Li, Zhiquan1" <zhiquan1.li@...el.com>, "Du, Fan"
	<fan.du@...el.com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "michael.roth@....com"
	<michael.roth@....com>, "Weiny, Ira" <ira.weiny@...el.com>, "vbabka@...e.cz"
	<vbabka@...e.cz>, "binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>,
	"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "Yamahata, Isaku"
	<isaku.yamahata@...el.com>, "Peng, Chao P" <chao.p.peng@...el.com>,
	"Annapurve, Vishal" <vannapurve@...gle.com>, "jroedel@...e.de"
	<jroedel@...e.de>, "Miao, Jun" <jun.miao@...el.com>, "pgonda@...gle.com"
	<pgonda@...gle.com>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [RFC PATCH 09/21] KVM: TDX: Enable 2MB mapping size after TD is
 RUNNABLE

On Wed, May 14, 2025 at 04:10:10AM +0800, Edgecombe, Rick P wrote:
> On Thu, 2025-04-24 at 11:06 +0800, Yan Zhao wrote:
> > Allow TDX's .private_max_mapping_level hook to return 2MB after the TD is
> > RUNNABLE, enabling KVM to map TDX private pages at the 2MB level. Remove
> > TODOs and adjust KVM_BUG_ON()s accordingly.
> > 
> > Note: Instead of placing this patch at the tail of the series, it's
> > positioned here to show the code changes for basic mapping of private huge
> > pages (i.e., transitioning from non-present to present).
> > 
> > However, since this patch also allows KVM to trigger the merging of small
> > entries into a huge leaf entry or the splitting of a huge leaf entry into
> > small entries, errors are expected if any of these operations are triggered
> > due to the current lack of splitting/merging support.
> > 
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> > Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> > ---
> >  arch/x86/kvm/vmx/tdx.c | 16 +++++++---------
> >  1 file changed, 7 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index e23dce59fc72..6b3a8f3e6c9c 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -1561,10 +1561,6 @@ int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
> >  	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> >  	struct page *page = pfn_to_page(pfn);
> >  
> > -	/* TODO: handle large pages. */
> > -	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
> > -		return -EINVAL;
> > -
> >  	/*
> >  	 * Because guest_memfd doesn't support page migration with
> >  	 * a_ops->migrate_folio (yet), no callback is triggered for KVM on page
> > @@ -1612,8 +1608,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
> >  	gpa_t gpa = gfn_to_gpa(gfn);
> >  	u64 err, entry, level_state;
> >  
> > -	/* TODO: handle large pages. */
> > -	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
> > +	if (KVM_BUG_ON(kvm_tdx->state != TD_STATE_RUNNABLE && level != PG_LEVEL_4K, kvm))
> 
> It's not clear why some of these warnings are here and some are in patch 4.
Patch 4 contains only the changes for the !TD_STATE_RUNNABLE stage.
This patch allows huge pages after the TD reaches TD_STATE_RUNNABLE.
So this patch relaxes the condition that triggers the BUG_ON, i.e.,
before this patch, always bug on level > 4K;
after this patch, only bug on level > 4K before the TD is runnable.
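
To make the before/after concrete (just restating the hunk above for
tdx_sept_drop_private_spte()), the check goes from

	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
		return -EINVAL;

to

	if (KVM_BUG_ON(kvm_tdx->state != TD_STATE_RUNNABLE && level != PG_LEVEL_4K, kvm))
		return -EINVAL;

so a mapping level above 4K is only treated as a bug while the TD is not
yet runnable.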

> >  		return -EINVAL;
> >  
> >  	if (KVM_BUG_ON(!is_hkid_assigned(kvm_tdx), kvm))
> > @@ -1714,8 +1709,8 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
> >  	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
> >  	u64 err, entry, level_state;
> >  
> > -	/* For now large page isn't supported yet. */
> > -	WARN_ON_ONCE(level != PG_LEVEL_4K);
> > +	/* Before TD runnable, large page is not supported */
> > +	WARN_ON_ONCE(kvm_tdx->state != TD_STATE_RUNNABLE && level != PG_LEVEL_4K);
> >  
> >  	err = tdh_mem_range_block(&kvm_tdx->td, gpa, tdx_level, &entry, &level_state);
> >  
> > @@ -1817,6 +1812,9 @@ int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
> >  	struct page *page = pfn_to_page(pfn);
> >  	int ret;
> >  
> > +	WARN_ON_ONCE(folio_page_idx(page_folio(page), page) + KVM_PAGES_PER_HPAGE(level) >
> > +		     folio_nr_pages(page_folio(page)));
> > +
> >  	/*
> >  	 * HKID is released after all private pages have been removed, and set
> >  	 * before any might be populated. Warn if zapping is attempted when
> > @@ -3265,7 +3263,7 @@ int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
> >  	if (unlikely(to_kvm_tdx(kvm)->state != TD_STATE_RUNNABLE))
> >  		return PG_LEVEL_4K;
> >  
> > -	return PG_LEVEL_4K;
> > +	return PG_LEVEL_2M;
> 
> Maybe combine this with patch 4, or split them into sensible categories.
Sorry for the confusion.

As explained in the patch message, this change to return PG_LEVEL_2M actually
needs to be placed at the end of the series, after the patches for page
splitting/merging.

As an initial RFC, it's placed earlier to show the changes needed to enable
basic TDX huge pages (without splitting/merging).
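
For reference, after this patch the hook in the last hunk effectively
becomes (only the lines visible in the hunk):

int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
{
	if (unlikely(to_kvm_tdx(kvm)->state != TD_STATE_RUNNABLE))
		return PG_LEVEL_4K;

	return PG_LEVEL_2M;
}

i.e. KVM is still limited to 4K mappings until the TD is RUNNABLE and only
advertises 2M afterwards, which is why splitting/merging support is a
prerequisite once this patch moves to the end of the series.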

> >  }
> >  
> >  static int tdx_online_cpu(unsigned int cpu)
> 
