Message-ID: <957b26d18ba7db611ed6582366066667267d10b8.camel@intel.com>
Date: Mon, 8 Apr 2024 23:46:19 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "seanjc@...gle.com" <seanjc@...gle.com>
CC: "davidskidmore@...gle.com" <davidskidmore@...gle.com>, "Li, Xiaoyao"
	<xiaoyao.li@...el.com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "srutherford@...gle.com"
	<srutherford@...gle.com>, "pankaj.gupta@....com" <pankaj.gupta@....com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Yamahata, Isaku"
	<isaku.yamahata@...el.com>, "Wang, Wei W" <wei.w.wang@...el.com>
Subject: Re: [ANNOUNCE] PUCK Notes - 2024.04.03 - TDX Upstreaming Strategy

On Mon, 2024-04-08 at 15:36 -0700, Sean Christopherson wrote:
> > Currently the values for the directly settable CPUID leafs come via a TDX
> > specific init VM userspace API.
> 
> Is guest.MAXPHYADDR one of those?  If so, use that.

No, it is not configurable. I'm looking into making it configurable, but that is
not likely to happen before we were hoping to get basic support upstream. An
alternative would be to have the KVM API peek at the value and then discard it
(i.e. not pass the leaf value to the TDX module). Not ideal. Or have a dedicated
GPAW field and expose the concept to userspace, like Xiaoyao was talking about.

> 
> > So should we look at making the TDX side follow a
> > KVM_GET_SUPPORTED_CPUID/KVM_SET_CPUID pattern for feature enablement? Or am
> > I
> > misreading general guidance out of this specific suggestion around GPAW? 
> 
> No?  Where I was going with that, is _if_ vCPUs can be created (in KVM) before
> the GPAW is set (in the TDX module), then using vCPU0's guest.MAXPHYADDR to
> compute the desired GPAW may be the least awful solution, all things
> considered.

Sorry, I was trying to uplevel the conversation to be about the general concept
of matching TD configuration to CPUID bits. Let me try to articulate the problem
a little better.

Today, KVM's KVM_GET_SUPPORTED_CPUID is a way to specify which features are
virtualizable by KVM. Communicating this via CPUID leaf values works for the
most part, because CPUID is already designed to communicate which features are
supported. But TDX has a different language for communicating which features
are supported: special fields that are passed when creating a VM, namely XFAM
(matching XCR0 features) and ATTRIBUTES (TDX-specific flags for MSR-based
features like PKS, etc.). So compared to KVM_GET_SUPPORTED_CPUID/KVM_SET_CPUID,
the TDX module instead accepts only a few CPUID bits to be set directly by the
VMM, and sets the other CPUID leafs to match the features configured via XFAM
and ATTRIBUTES.

There are also some bits/features that have fixed values. Which leafs are fixed,
and what their values are, isn't exposed by any current TDX module API. Instead
they are only known via documentation, which is subject to change. The queryable
information is limited to communicating which bits are directly configurable.

So the current interface won't allow us to perfectly match the
KVM_GET_SUPPORTED_CPUID/KVM_SET_CPUID pattern, even setting aside the vm-scoped
vs vcpu-scoped differences. However, we could try to match the general design a
little better.

Here we were discussing making GPAW configurable via a dedicated named field,
but the suggestion is to instead include it in the CPUID bits. The current API
takes ATTRIBUTES as a dedicated field too, but there actually are CPUID bits for
some of those features; those CPUID bits are controlled via the associated
ATTRIBUTES. So we could expose such features via CPUID as well. Userspace would,
for example, pass the PKS CPUID bit in, and KVM would see it and configure PKS
via the ATTRIBUTES bit.

So what I was looking to understand is: what is the enthusiasm for generally
continuing to use CPUID as the main method for specifying which features should
be enabled/virtualized, given that we can't match the existing
KVM_GET_SUPPORTED_CPUID/KVM_SET_CPUID APIs? Is the hope just to make userspace's
code more unified between TDX and normal VMs?
