Message-ID: <20260108065516.GA8281@k08j02272.eu95sqa>
Date: Thu, 8 Jan 2026 14:55:16 +0800
From: Hou Wenlong <houwenlong.hwl@...group.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
	Anish Ghulati <aghulati@...gle.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
	hpa@...or.com, Vitaly Kuznetsov <vkuznets@...hat.com>,
	peterz@...radead.org, paulmck@...nel.org,
	Mark Rutland <mark.rutland@....com>
Subject: Re: [RFC PATCH 00/14] Support multiple KVM modules on the same host

On Wed, Jan 07, 2026 at 07:54:53AM -0800, Sean Christopherson wrote:
> On Mon, Jan 05, 2026, Hou Wenlong wrote:
> > Sorry for revisiting this topic after a long time. I haven't seen any
> > new updates regarding this topic/series, and I didn’t find any recent
> > activity on the GitHub repository. Is the multi-KVM topic still being
> > considered for upstreaming, or is there anything blocking this?
> 
> We have abandoned upstreaming multi-KVM.  The operational cost+complexity is too
> high relative to the benefits, especially when factoring in things like ASI and
> live patching, and the benefits are almost entirely obsoleted by kernel live update
> support.
>

We would need to look into those new features to see whether they are
compatible with multi-KVM and how we could integrate them effectively,
which is definitely challenging work. Exploring kernel live update is
also an option for us now.

> > As Lai pointed out, we also have a similar multi-KVM implementation in
> > our internal environment, so we are quite interested in this topic.
> > Recently, when we upgraded our kernel version, we found that maintaining
> > multi-KVM has become a significant burden.
> 
> Yeah, I can imagine the pain all too well.  :-/
> 
> > We are willing to move forward with it if multi-KVM is still accepted for
> > upstream. So I look forward to feedback from the maintainers.
> >
> > From what I've seen, the recent patch set that enables VMX/SVM during
> > booting is a good starting point for multi-KVM as well.
> 
> I have mixed feelings on multi-KVM.  Without considering maintenance and support
> costs, I still love the idea of reworking the kernel to support running multiple
> hypervisors concurrently.  But as I explained in the first cover letter of that
> series[0], there is a massive amount of complexity, both in initial development
> and ongoing maintenance, needed to provide such infrastructure:
> 
>  : I got quite far along on rebasing some internal patches we have to extract the
>  : core virtualization bits out of KVM x86, but as I paged back in all of the
>  : things we had punted on (because they were waaay out of scope for our needs),
>  : I realized more and more that providing truly generic virtualization
>  : infrastructure is vastly different than providing infrastructure that can be
>  : shared by multiple instances of KVM (or things very similar to KVM)[1].
>  :
>  : So while I still don't want to blindly do VMXON, I also think that trying to
>  : actually support another in-tree hypervisor, without an imminent user to drive
>  : the development, is a waste of resources, and would saddle KVM with a pile of
>  : pointless complexity.
> 
> For deployment to a relatively homogeneous fleet, many of the pain points can be
> avoided by either avoiding them entirely or making the settings "inflexible",
> because there is effectively one use case and so such caveats are a non-issue.
> But those types of simplifications don't work upstream, e.g. saying "eVMCS is
> unsupported if multi-KVM is possible" instead of moving eVMCS enabling to a base
> module isn't acceptable.
> 
> So I guess my "official" stance is that I'm not opposed to upstreaming multi-KVM
> (or similar) functionality, but I'm not exactly in favor of it either.  And
> practically speaking, because multi-KVM would be in constant conflict with so
> much ongoing/new feature support (both in software and hardware), and is not a
> priority for anyone pursuing kernel live update, upstreaming would likely take
> several years, without any guarantee of a successful landing.
> 
> [0] https://lore.kernel.org/all/20251010220403.987927-1-seanjc@google.com
> [1] https://lore.kernel.org/all/aOl5EutrdL_OlVOO@google.com

Got it, thanks for your feedback!
