Message-ID: <aVxgo4Kb-Ng3g2Ci@google.com>
Date: Mon, 5 Jan 2026 17:08:51 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: dan.j.williams@...el.com, Jon Lange <jlange@...rosoft.com>, 
	Paolo Bonzini <pbonzini@...hat.com>, john.starks@...rosoft.com, 
	Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>, 
	"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>, LKML <linux-kernel@...r.kernel.org>, 
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>, 
	Rick P Edgecombe <rick.p.edgecombe@...el.com>, Andrew Cooper <andrew.cooper3@...rix.com>
Subject: Re: "Paravisor" Feature Enumeration

On Mon, Jan 05, 2026, Dave Hansen wrote:
> On 1/5/26 16:01, dan.j.williams@...el.com wrote:
> > Dave Hansen wrote:
> ...
> >> 	X86_FEATURE_KVM_CLOCKSOURCE in arm,pvclock
> >> or
> >> 	X86_FEATURE_KVM_STEAL_TIME  in arm,kvm-steal-time
> >>
> >> As far as I can tell, these aliases are all done ad-hoc. This approach
> >> could obviously be extended to paravisor features, but it would probably
> >> be on the slow side to do it for each new feature.
> > 
> > "Slow" as in standardization time?
> 
> Yes.
> 
> ...
> >> Is there anything stopping us from carving out a chunk of CPUID for
> >> this purpose?
> > 
> > At what point does an ACPI property become a CPUID? In other words if
> > there is an ACPI / DeviceTree enumeration of CPU/platform capabilities
> > in firmware that can supersede / extend native enumeration, does it
> > matter if x86 maps that to extended CPUID space and ARM maps it however
> > is convenient?
> > 
> > I have no problem with an extended CPUID concept, just trying to
> > understand more about the assumptions.
> 
> The way it _seems_ to have worked until now is that KVM/x86 has led the
> way by defining a CPUID bit for things like KVM_CLOCK or KVM_STEAL_TIME.
> Then, the ARM folks came along and added DeviceTree enumerations. Last,
> ACPI came along with a way to package up all the DeviceTree enumerations
> into a single table.
> 
> So, maybe that's a hack on a hack on a hack and we should just start
> with ACPI this time. That would certainly make this pretty straightforward.
> 
> I'd love to hear a take from the x86/KVM folks, though.

KVM x86 is blissfully unaware of ACPI.  I believe the same goes for DeviceTree on
ARM64, but don't quote me on that.  I can't envision a world where KVM would ever
enumerate or parse ACPI, let alone make ACPI a hard requirement, so any features
that need KVM support need KVM specific uAPI and/or arch-specific enumeration.

KVM uses CPUID for *KVM-defined* PV features on x86 because KVM already advertises
support for CPUID-based features via KVM_GET_SUPPORTED_CPUID.  And KVM is handed a
userspace-defined virtual CPU model that includes virtual CPUID information
(KVM_SET_CPUID{,2}), which KVM can then use to know whether or not a feature is
enabled for a given guest.  I.e. using CPUID gets KVM all the uAPI and guest ABI
it needs for super cheap.

PV features/devices that are provided solely by the VMM are a completely different
matter.  E.g. KVM similarly has no direct knowledge of VirtIO.  There are plenty of
optimizations in KVM that exist to make VirtIO go faster, but like ACPI, KVM is
blissfully unaware of what VirtIO devices are exposed to a guest, where they reside
in the platform topology, how they are enumerated to the guest, etc.

Concretely, exactly what type of PV features are we talking about?  To me,
"Confidential Services" sounds like things that should be implemented as virtual
devices in userspace, attached via whatever bus the VMM is using (e.g. vmbus vs.
PCIe), and enumerated to the guest via whatever mechanism the VMM chooses (which
on x86 is pretty much guaranteed to be ACPI).

Trying to use CPUID for any such virtual devices will never fly in a KVM-based
setup (outside of completely private/proprietary environments).  KVM shouldn't
ever accept a patch to define a CPUID feature for something that is conceptually
a device, and Linux-as-a-guest shouldn't ever accept a patch to consume CPUID
entries defined by a VMM (even if that VMM is QEMU).

So unless we're talking about services that require specific, dedicated KVM
support, i.e. where the KVM involvement can't be abstracted in some generic way,
I don't think there's a whole lot to discuss (in a good way).
