Date:   Fri, 17 Nov 2023 00:08:04 -0800
From:   Isaku Yamahata <isaku.yamahata@...ux.intel.com>
To:     Yuan Yao <yuan.yao@...ux.intel.com>
Cc:     isaku.yamahata@...el.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, isaku.yamahata@...il.com,
        Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
        Sean Christopherson <seanjc@...gle.com>,
        Sagi Shahar <sagis@...gle.com>,
        David Matlack <dmatlack@...gle.com>,
        Kai Huang <kai.huang@...el.com>,
        Zhi Wang <zhi.wang.linux@...il.com>, chen.bo@...el.com,
        hang.yuan@...el.com, tina.zhang@...el.com,
        isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH v17 071/116] KVM: TDX: handle vcpu migration over logical
 processor

On Wed, Nov 15, 2023 at 02:49:56PM +0800,
Yuan Yao <yuan.yao@...ux.intel.com> wrote:

> On Tue, Nov 07, 2023 at 06:56:37AM -0800, isaku.yamahata@...el.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@...el.com>
> >
> > For vcpu migration, in the case of VMX, the VMCS is flushed on the source
> > pcpu and loaded on the target pcpu.  There are corresponding TDX SEAMCALL
> > APIs; call them on vcpu migration.  The logic is mostly the same as for
> > VMX, except that the TDX SEAMCALLs are used.
> >
> > When shutting down the machine, (VMX or TDX) vcpus need to be shut down on
> > each pcpu.  Do the same for TDX with the TDX SEAMCALL APIs.
> >
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> > ---
> >  arch/x86/kvm/vmx/main.c    |  32 ++++++-
> >  arch/x86/kvm/vmx/tdx.c     | 190 ++++++++++++++++++++++++++++++++++++-
> >  arch/x86/kvm/vmx/tdx.h     |   2 +
> >  arch/x86/kvm/vmx/x86_ops.h |   4 +
> >  4 files changed, 221 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
> > index e7c570686736..8b109d0fe764 100644
> > --- a/arch/x86/kvm/vmx/main.c
> > +++ b/arch/x86/kvm/vmx/main.c
> > @@ -44,6 +44,14 @@ static int vt_hardware_enable(void)
> >  	return ret;
> >  }
> >
> ......
> > -void tdx_mmu_release_hkid(struct kvm *kvm)
> > +static int __tdx_mmu_release_hkid(struct kvm *kvm)
> >  {
> >  	bool packages_allocated, targets_allocated;
> >  	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> >  	cpumask_var_t packages, targets;
> > +	struct kvm_vcpu *vcpu;
> > +	unsigned long j;
> > +	int i, ret = 0;
> >  	u64 err;
> > -	int i;
> >
> >  	if (!is_hkid_assigned(kvm_tdx))
> > -		return;
> > +		return 0;
> >
> >  	if (!is_td_created(kvm_tdx)) {
> >  		tdx_hkid_free(kvm_tdx);
> > -		return;
> > +		return 0;
> >  	}
> >
> >  	packages_allocated = zalloc_cpumask_var(&packages, GFP_KERNEL);
> >  	targets_allocated = zalloc_cpumask_var(&targets, GFP_KERNEL);
> >  	cpus_read_lock();
> >
> > +	kvm_for_each_vcpu(j, vcpu, kvm)
> > +		tdx_flush_vp_on_cpu(vcpu);
> > +
> >  	/*
> >  	 * Multiple guest TDs can be destroyed simultaneously.  Prevent
> >  	 * tdh_phymem_cache_wb from returning TDX_BUSY by serialization.
> > @@ -236,6 +361,19 @@ void tdx_mmu_release_hkid(struct kvm *kvm)
> >  	 */
> >  	write_lock(&kvm->mmu_lock);
> >
> > +	err = tdh_mng_vpflushdone(kvm_tdx->tdr_pa);
> > +	if (err == TDX_FLUSHVP_NOT_DONE) {
> 
> Not sure if IIUC: __tdx_mmu_release_hkid() is called from the MMU release
> callback, which means all threads of the process have already dropped the mm
> via do_exit(), so they won't run KVM code anymore, and tdx_flush_vp_on_cpu()
> has been called for each pcpu they last ran on.  So can this error really
> happen?

KVM_TDX_RELEASE_VM calls this function too. Maybe this check should be
introduced with the patch that adds KVM_TDX_RELEASE_VM.
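
To make the distinction between the two call paths concrete, here is a
minimal sketch (not the actual patch) of a release-time flush helper.  It
reuses tdx_flush_vp_on_cpu(), tdh_mng_vpflushdone(), to_kvm_tdx() and
TDX_FLUSHVP_NOT_DONE from the patch above; the helper name
tdx_release_hkid_flush() and the -EAGAIN/-EIO return mapping are made up
for illustration only.

static int tdx_release_hkid_flush(struct kvm *kvm)
{
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
	struct kvm_vcpu *vcpu;
	unsigned long i;
	u64 err;

	/*
	 * Flush each vcpu's TD state on the pcpu it last ran on.  From the
	 * mmu_notifier release path all threads have already exited, so no
	 * vcpu can become associated with a pcpu again after this loop.
	 */
	kvm_for_each_vcpu(i, vcpu, kvm)
		tdx_flush_vp_on_cpu(vcpu);

	err = tdh_mng_vpflushdone(kvm_tdx->tdr_pa);
	if (err == TDX_FLUSHVP_NOT_DONE) {
		/*
		 * Practically only reachable via an ioctl such as
		 * KVM_TDX_RELEASE_VM, where a vcpu may still be associated
		 * with a pcpu; let the caller retry (illustrative choice).
		 */
		return -EAGAIN;
	}
	if (WARN_ON_ONCE(err))
		return -EIO;
	return 0;
}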
-- 
Isaku Yamahata <isaku.yamahata@...ux.intel.com>
