Message-ID: <Y1gG/W/q/VIydpMu@google.com>
Date: Tue, 25 Oct 2022 15:55:41 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Christian Borntraeger <borntraeger@...ux.ibm.com>,
Emanuele Giuseppe Esposito <eesposit@...hat.com>,
kvm@...r.kernel.org, Jonathan Corbet <corbet@....net>,
Maxim Levitsky <mlevitsk@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] KVM: API to block and resume all running vcpus in a vm

On Tue, Oct 25, 2022, Paolo Bonzini wrote:
> On 10/25/22 00:45, Sean Christopherson wrote:
> > > Yes that helps and should be part of the cover letter for the next iterations.
> >
> > But that doesn't explain why KVM needs to get involved, it only explains why QEMU
> > can't use its existing pause_all_vcpus(). I do not understand why this is a
> > problem QEMU needs KVM's help to solve.
>
> I agree that it's not KVM's problem that QEMU cannot use pause_all_vcpus().
> Having an ioctl in KVM, rather than coding the same in QEMU, is *mostly* a
> matter of programmer and computer efficiency because the code is pretty
> simple.
>
> That said, I believe the limited memslot API makes it more than just a QEMU
> problem. Because KVM_GET_DIRTY_LOG cannot be combined atomically with
> KVM_SET_USER_MEMORY_REGION(MR_DELETE), any VMM that uses dirty-log regions
> while the VM is running is liable to lose the dirty status of some pages.

... and doesn't already do the sane thing and pause vCPUs _and anything else that
can touch guest memory_ before modifying memslots. I honestly think QEMU is the
only VMM that would ever use this API.
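
For reference, the racy flow is just two back-to-back ioctls with nothing
bridging them.  Minimal sketch, error handling and the bitmap allocation
omitted, variable names made up:

	struct kvm_dirty_log log = {
		.slot = slot_id,
		.dirty_bitmap = bitmap,
	};
	struct kvm_userspace_memory_region region = {
		.slot = slot_id,
		.memory_size = 0,	/* size == 0 deletes the memslot */
	};

	ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

	/*
	 * A still-running vCPU can dirty pages in the slot here, and those
	 * writes land in a bitmap that is about to be destroyed.
	 */

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
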
> That's also a reason to provide this API in KVM.

It's frankly a terrible API though. Providing a way to force vCPUs out of KVM_RUN
is at best half of the solution.

Userspace still needs:
- a refcounting scheme to track the number of "holds" put on the system
- serialization to ensure KVM_RESUME_ALL_KICKED_VCPUS completes before a new
  KVM_KICK_ALL_RUNNING_VCPUS is initiated (roughly sketched below this list)
- to prevent _all_ ioctls() because it's not just KVM_RUN that consumes memslots
- to stop anything else in the system that consumes KVM memslots, e.g. KVMGT
- to signal vCPU tasks so that the system doesn't livelock if a vCPU is stuck
  outside of KVM, e.g. in get_user_pages_unlocked() (Peter Xu's series)
And because of the nature of KVM, to support this API on all architectures, KVM
needs to make changes on all architectures, whereas userspace should be able to
implement a generic solution.
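
E.g. the core of a generic pause_all_vcpus() doesn't need anything from KVM
that doesn't already exist.  Very rough sketch, assumes one thread per vCPU;
SIG_VCPU_KICK stands in for whatever signal the VMM reserves for knocking
vCPUs out of KVM_RUN (it just needs a handler installed so that KVM_RUN
returns -EINTR):

	struct vcpu {
		pthread_t thread;
		struct kvm_run *run;	/* the mmap'd run struct */
		bool pause_requested;
	};

	static pthread_mutex_t pause_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t pause_cond = PTHREAD_COND_INITIALIZER;
	static int nr_paused;

	void pause_all_vcpus(struct vcpu *vcpus, int nr_vcpus)
	{
		int i;

		pthread_mutex_lock(&pause_lock);
		for (i = 0; i < nr_vcpus; i++) {
			vcpus[i].pause_requested = true;
			vcpus[i].run->immediate_exit = 1;
			pthread_kill(vcpus[i].thread, SIG_VCPU_KICK);
		}
		while (nr_paused < nr_vcpus)
			pthread_cond_wait(&pause_cond, &pause_lock);
		pthread_mutex_unlock(&pause_lock);
	}

	void resume_all_vcpus(struct vcpu *vcpus, int nr_vcpus)
	{
		int i;

		pthread_mutex_lock(&pause_lock);
		for (i = 0; i < nr_vcpus; i++)
			vcpus[i].pause_requested = false;
		pthread_cond_broadcast(&pause_cond);
		pthread_mutex_unlock(&pause_lock);
	}

	/* Called by each vCPU thread when KVM_RUN pops out with -EINTR. */
	void vcpu_maybe_pause(struct vcpu *vcpu)
	{
		pthread_mutex_lock(&pause_lock);
		if (vcpu->pause_requested) {
			nr_paused++;
			pthread_cond_broadcast(&pause_cond);
			while (vcpu->pause_requested)
				pthread_cond_wait(&pause_cond, &pause_lock);
			nr_paused--;
		}
		vcpu->run->immediate_exit = 0;
		pthread_mutex_unlock(&pause_lock);
	}

Combine that with Peter's series for the livelock item above and none of
this needs new uAPI.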