Message-ID: <aCb1F4D6nIFVokNm@yzhao56-desk.sh.intel.com>
Date: Fri, 16 May 2025 16:19:35 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>, "seanjc@...gle.com"
	<seanjc@...gle.com>, "Shutemov, Kirill" <kirill.shutemov@...el.com>,
	"quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, "Li, Xiaoyao"
	<xiaoyao.li@...el.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Hansen,
 Dave" <dave.hansen@...el.com>, "david@...hat.com" <david@...hat.com>,
	"thomas.lendacky@....com" <thomas.lendacky@....com>, "tabba@...gle.com"
	<tabba@...gle.com>, "Li, Zhiquan1" <zhiquan1.li@...el.com>, "Du, Fan"
	<fan.du@...el.com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "michael.roth@....com"
	<michael.roth@....com>, "Weiny, Ira" <ira.weiny@...el.com>, "vbabka@...e.cz"
	<vbabka@...e.cz>, "binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>,
	"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "Yamahata, Isaku"
	<isaku.yamahata@...el.com>, "Peng, Chao P" <chao.p.peng@...el.com>,
	"Annapurve, Vishal" <vannapurve@...gle.com>, "jroedel@...e.de"
	<jroedel@...e.de>, "Miao, Jun" <jun.miao@...el.com>, "pgonda@...gle.com"
	<pgonda@...gle.com>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [RFC PATCH 19/21] KVM: gmem: Split huge boundary leafs for punch
 hole of private memory

On Wed, May 14, 2025 at 06:59:01AM +0800, Edgecombe, Rick P wrote:
> On Thu, 2025-04-24 at 11:08 +0800, Yan Zhao wrote:
> > +static int kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
> > +				     pgoff_t end, bool need_split)
> >  {
> >  	bool flush = false, found_memslot = false;
> >  	struct kvm_memory_slot *slot;
> >  	struct kvm *kvm = gmem->kvm;
> >  	unsigned long index;
> > +	int ret = 0;
> >  
> >  	xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
> >  		pgoff_t pgoff = slot->gmem.pgoff;
> > @@ -319,14 +320,23 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
> >  			kvm_mmu_invalidate_begin(kvm);
> >  		}
> >  
> > +		if (need_split) {
> > +			ret = kvm_split_boundary_leafs(kvm, &gfn_range);
> 
> What is the effect for other guestmemfd users? SEV doesn't need this, right? Oh
> I see, down in tdp_mmu_split_boundary_leafs() it bails on non-mirror roots. I
> don't like the naming then. It sounds deterministic, but it's really only
> necessary splits for certain VM types.
Right, kvm_split_boundary_leafs() only takes effect on the mirror root.

> I guess it all depends on how well teaching kvm_mmu_unmap_gfn_range() to fail
> goes. But otherwise, we should call it like kvm_prepare_zap_range() or
Hmm, if we call it kvm_prepare_zap_range(), we would have to invoke it for all
zaps. However, aside from kvm_gmem_punch_hole(), the other two callers,
kvm_gmem_error_folio() and kvm_gmem_release(), have no need to perform
splitting before zapping.
Passing the invalidation reason into kvm_gmem_invalidate_begin() would also
complicate things.
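For what it's worth, the error/flush plumbing in the hunk can be modeled in
isolation (a standalone userspace sketch, not kernel code; split_boundary_leafs()
and invalidate_range() below are made-up stand-ins for kvm_split_boundary_leafs()
and the xa_for_each_range() loop body): the int return carries a negative errno
on failure, or 0/1 as a "flush needed" bool on success, which is why the patch
can do "flush |= ret" after the error check.

```c
#include <errno.h>
#include <stdbool.h>

/*
 * Stand-in for kvm_split_boundary_leafs(): returns a negative errno on
 * failure, 0 if nothing was split (no flush needed), or 1 if boundary
 * leafs were split and a TLB flush is required.  Non-mirror roots bail
 * out early, mirroring the tdp_mmu_split_boundary_leafs() behavior
 * discussed above.
 */
static int split_boundary_leafs(bool mirror_root, bool oom)
{
	if (oom)
		return -ENOMEM;
	if (!mirror_root)
		return 0;	/* e.g. SEV today: nothing to split */
	return 1;
}

/*
 * Models one iteration of the invalidate loop: optionally split first,
 * propagate errors, accumulate the flush requirement, then unmap.
 */
static int invalidate_range(bool need_split, bool mirror_root, bool oom,
			    bool *flush)
{
	int ret = 0;

	*flush = false;
	if (need_split) {
		ret = split_boundary_leafs(mirror_root, oom);
		if (ret < 0)
			goto out;
		*flush |= ret;
	}
	*flush |= true;		/* the unmap itself always flushes here */
	ret = 0;
out:
	return ret;
}
```

The point of the sketch is just that a single int return can serve as both an
errno and a bool, so the split failure path can unwind before any unmapping
has happened.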

> something. And have it make it clearly do nothing for non-TDX high up where it's
> easy to see.
Would a name like kvm_split_boundary_leafs_for_mirror() be too TDX specific?

If we name it kvm_split_boundary_leafs(), SEV can simply remove the bail-out
in the future if it ever needs splitting.

> 
> > +			if (ret < 0)
> > +				goto out;
> > +
> > +			flush |= ret;
> > +		}
> >  		flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range);
> >  	}
> >  
> > +out:
> >  	if (flush)
> >  		kvm_flush_remote_tlbs(kvm);
> >  
> >  	if (found_memslot)
> >  		KVM_MMU_UNLOCK(kvm);
> > +	
> 
