Message-ID: <20240529005519.GA386318@ls.amr.corp.intel.com>
Date: Tue, 28 May 2024 17:55:19 -0700
From: Isaku Yamahata <isaku.yamahata@...el.com>
To: Chen Yu <yu.c.chen@...el.com>
Cc: Chao Gao <chao.gao@...el.com>, isaku.yamahata@...el.com,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
	isaku.yamahata@...il.com, Paolo Bonzini <pbonzini@...hat.com>,
	erdemaktas@...gle.com, Sean Christopherson <seanjc@...gle.com>,
	Sagi Shahar <sagis@...gle.com>, Kai Huang <kai.huang@...el.com>,
	chen.bo@...el.com, hang.yuan@...el.com, tina.zhang@...el.com,
	Fengwei Yin <fengwei.yin@...el.com>, isaku.yamahata@...ux.intel.com
Subject: Re: [PATCH v19 070/130] KVM: TDX: TDP MMU TDX support

On Sun, May 26, 2024 at 04:45:15PM +0800,
Chen Yu <yu.c.chen@...el.com> wrote:

> On 2024-03-28 at 11:12:57 +0800, Chao Gao wrote:
> > >+#if IS_ENABLED(CONFIG_HYPERV)
> > >+static int vt_flush_remote_tlbs(struct kvm *kvm);
> > >+#endif
> > >+
> > > static __init int vt_hardware_setup(void)
> > > {
> > > 	int ret;
> > >@@ -49,11 +53,29 @@ static __init int vt_hardware_setup(void)
> > > 		pr_warn_ratelimited("TDX requires mmio caching.  Please enable mmio caching for TDX.\n");
> > > 	}
> > > 
> > >+#if IS_ENABLED(CONFIG_HYPERV)
> > >+	/*
> > >+	 * TDX KVM overrides flush_remote_tlbs method and assumes
> > >+	 * flush_remote_tlbs_range = NULL that falls back to
> > >+	 * flush_remote_tlbs.  Disable TDX if there are conflicts.
> > >+	 */
> > >+	if (vt_x86_ops.flush_remote_tlbs ||
> > >+	    vt_x86_ops.flush_remote_tlbs_range) {
> > >+		enable_tdx = false;
> > >+		pr_warn_ratelimited("TDX requires baremetal. Not Supported on VMM guest.\n");
> > >+	}
> > >+#endif
> > >+
> > > 	enable_tdx = enable_tdx && !tdx_hardware_setup(&vt_x86_ops);
> > > 	if (enable_tdx)
> > > 		vt_x86_ops.vm_size = max_t(unsigned int, vt_x86_ops.vm_size,
> > > 					   sizeof(struct kvm_tdx));
> > > 
> > >+#if IS_ENABLED(CONFIG_HYPERV)
> > >+	if (enable_tdx)
> > >+		vt_x86_ops.flush_remote_tlbs = vt_flush_remote_tlbs;
> > 
> > Is this hook necessary/beneficial to TDX?
> >
> 
> I think so.
> 
> We happened to encounter the following error, which breaks the boot:
> "SEAMCALL (0x000000000000000f) failed: 0xc0000b0800000001"
> 0xc0000b0800000001 indicates TDX_TLB_TRACKING_NOT_DONE; it is caused by
> doing page demotion before the required TLB shootdown (TLB tracking).
> 
> 
> It turned out that CONFIG_HYPERV is not set on my system, so
> kvm_arch_flush_remote_tlbs() does not invoke tdx_track() before
> tdh_mem_page_demote(), which caused the problem.
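
To spell out why the TDX override never runs in that configuration, here is a
simplified sketch of the relevant upstream code after commit 0277022a77a5
(trimmed; details may differ from the exact tree):

/* arch/x86/include/asm/kvm_host.h: the hooks only exist with HYPERV */
struct kvm_x86_ops {
...
#if IS_ENABLED(CONFIG_HYPERV)
	int (*flush_remote_tlbs)(struct kvm *kvm);
	int (*flush_remote_tlbs_range)(struct kvm *kvm, gfn_t gfn,
				       gfn_t nr_pages);
#endif
...
};

#if IS_ENABLED(CONFIG_HYPERV)
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
	/* Only this path can reach vt_flush_remote_tlbs() -> tdx_track(). */
	if (kvm_x86_ops.flush_remote_tlbs &&
	    !static_call(kvm_x86_flush_remote_tlbs)(kvm))
		return 0;
	else
		return -EOPNOTSUPP;
}
#endif

With CONFIG_HYPERV=n the generic stub is used instead, kvm_flush_remote_tlbs()
only raises KVM_REQ_TLB_FLUSH on the vCPUs, TDH.MEM.TRACK is never issued, and
the subsequent TDH.MEM.PAGE.DEMOTE fails with TDX_TLB_TRACKING_NOT_DONE.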
> 
> > if no, we can leave .flush_remote_tlbs as NULL. if yes, we should do:
> > 
> > struct kvm_x86_ops {
> > ...
> > #if IS_ENABLED(CONFIG_HYPERV) || IS_ENABLED(TDX...)
> > 	int  (*flush_remote_tlbs)(struct kvm *kvm);
> > 	int  (*flush_remote_tlbs_range)(struct kvm *kvm, gfn_t gfn,
> > 					gfn_t nr_pages);
> > #endif
> 
> If flush_remote_tlbs implementations are available for both HYPERV and TDX,
> does it make sense to remove the config checks? I thought that when commit
> 0277022a77a5 was introduced, the only user of flush_remote_tlbs() was
> Hyper-V, and now there is TDX.

You don't like IS_ENABLED(CONFIG_HYPERV) || IS_ENABLED(CONFIG_TDX_HOST) in many
places?  Then we can do something like the following.  Although it would be a
bit uglier than commit 0277022a77a5, it keeps the intention of that commit.

#if IS_ENABLED(CONFIG_HYPERV) || IS_ENABLED(CONFIG_TDX_HOST)
# define KVM_X86_WANT_FLUSH_REMOTE_TLBS
#endif

#ifdef KVM_X86_WANT_FLUSH_REMOTE_TLBS
..
#endif
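
The hook declaration and the arch helper would then key off that single
define, roughly like this (an illustrative sketch only, config and macro names
as in the snippet above):

struct kvm_x86_ops {
...
#ifdef KVM_X86_WANT_FLUSH_REMOTE_TLBS
	int  (*flush_remote_tlbs)(struct kvm *kvm);
	int  (*flush_remote_tlbs_range)(struct kvm *kvm, gfn_t gfn,
					gfn_t nr_pages);
#endif
...
};

#ifdef KVM_X86_WANT_FLUSH_REMOTE_TLBS
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
	...
}
#endif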

-- 
Isaku Yamahata <isaku.yamahata@...el.com>
