Message-ID: <aAk4N0wYQeeYPLVM@google.com>
Date: Wed, 23 Apr 2025 11:57:59 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Zack Rusin <zack.rusin@...adcom.com>
Cc: Xin Li <xin@...or.com>, linux-kernel@...r.kernel.org, 
	Doug Covelli <doug.covelli@...adcom.com>, Paolo Bonzini <pbonzini@...hat.com>, 
	Jonathan Corbet <corbet@....net>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, 
	Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, 
	"H. Peter Anvin" <hpa@...or.com>, kvm@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH v2 4/5] KVM: x86: Add support for legacy VMware backdoors
 in nested setups

On Wed, Apr 23, 2025, Zack Rusin wrote:
> On Wed, Apr 23, 2025 at 1:16 PM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Wed, Apr 23, 2025, Zack Rusin wrote:
> > > On Wed, Apr 23, 2025 at 11:54 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > > > > I'd say that if we desperately want to use a single cap for all of
> > > > > these, then I'd probably prefer a different approach, because this would
> > > > > make vmware_backdoor_enabled behavior really wacky.
> > > >
> > > > How so?  If kvm.enable_vmware_backdoor is true, then the backdoor is enabled
> > > > for all VMs, else it's disabled by default but can be enabled on a per-VM basis
> > > > by the new capability.
> > >
> > > Like you said, if kvm.enable_vmware_backdoor is true, then it's
> > > enabled for all VMs, so it'd make sense to allow disabling it on a
> > > per-VM basis on those systems.
> > > Just like when kvm.enable_vmware_backdoor is false, the cap can be
> > > used to enable it on a per-VM basis.
> >
> > Why?  What use case does that serve?
> 
> Testing purposes?

Heh, testing what?  To have heterogeneous VMware emulation settings on a single
host, at least one of the VMMs needs to have been updated to utilize the new
capability.  Updating the VMM that doesn't want VMware emulation makes zero sense,
because that would limit testing to only the non-nested backdoor.
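
For reference, the per-VM opt-in under discussion would presumably be the
standard KVM_ENABLE_CAP vm ioctl (assuming that's how this series wires it up).
A rough userspace sketch, with the caveat that the KVM_CAP_X86_VMWARE_BACKDOOR
name is taken from these patches rather than from a released kernel:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/*
	 * Sketch only: KVM_CAP_X86_VMWARE_BACKDOOR is the capability name
	 * proposed in this series, not an upstream constant.
	 * KVM_CHECK_EXTENSION and KVM_ENABLE_CAP are the standard KVM ioctls.
	 */
	static int enable_vmware_backdoor(int vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_X86_VMWARE_BACKDOOR,
		};

		/* Probe first; on an older kernel, fall back to the module param. */
		if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_X86_VMWARE_BACKDOOR) <= 0)
			return -1;

		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}

A VMM that never calls this keeps today's behavior, i.e. whatever
kvm.enable_vmware_backdoor says.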

> > > > > It's the one that currently can only be set via kernel boot flags, so on
> > > > > systems where the boot flag is on, disabling it on a per-VM basis makes
> > > > > sense, and this approach breaks that.
> > > >
> > > > We could go this route, e.g. KVM does something similar for PMU virtualization.
> > > > But the key difference is that enable_pmu is enabled by default, whereas
> > > > enable_vmware_backdoor is disabled by default.  I.e. it makes far more sense for
> > > > the capability to let userspace opt-in, as opposed to opt-out.
> > > >
> > > > > I'd probably still write the code to be able to disable/enable all of them
> > > > > because it makes sense for vmware_backdoor_enabled.
> > > >
> > > > Again, that's not KVM's default, and it will never be KVM's default.
> > >
> > > All I'm saying is that you can enable it on a whole system via the
> > > boot flags, and on systems where it has been turned on, it'd make
> > > sense to allow disabling it on a per-VM basis.
> >
> > Again, why would anyone do that?  If you *know* you're going to run some VMs
> > with VMware emulation and some without, the sane approach is to not touch the
> > module param and rely entirely on the capability.  Otherwise the VMM must be
> > able to opt out, which means an older userspace that doesn't know
> > about the new capability *can't* opt out.
> >
> > The only reason to even keep the module param is to not break existing users,
> > e.g. to be able to run VMs that want VMware functionality using an existing VMM.
> >
> > > Anyway, I'm sure I can make it work correctly under any constraints, so let
> > > me try to understand the issue because I'm not sure what we're solving here.
> > > Is the problem the fact that we have three caps and instead want to squeeze
> > > all of the functionality under one cap?
> >
> > The "problem" is that I don't want to add complexity and create ABI for a use
> > case that doesn't exist.
> 
> Would you like to see a v3 where I specifically do not allow disabling
> those caps?

Yes.  Though I recommend waiting to send a v3 until I (and others) have had a
chance to review the rest of the patches, e.g. to avoid wasting your time.
