Message-ID: <5a26c107-9ab5-60ee-0e9c-a9955dfe313d@redhat.com>
Date: Tue, 25 Oct 2022 11:33:29 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>
Cc: Emanuele Giuseppe Esposito <eesposit@...hat.com>,
kvm@...r.kernel.org, Jonathan Corbet <corbet@....net>,
Maxim Levitsky <mlevitsk@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] KVM: API to block and resume all running vcpus in a vm

On 10/25/22 00:45, Sean Christopherson wrote:
>> Yes that helps and should be part of the cover letter for the next iterations.
> But that doesn't explain why KVM needs to get involved, it only explains why QEMU
> can't use its existing pause_all_vcpus(). I do not understand why this is a
> problem QEMU needs KVM's help to solve.

I agree that it's not KVM's problem that QEMU cannot use
pause_all_vcpus(). Having an ioctl in KVM, rather than coding the same
thing in QEMU, is *mostly* a matter of programmer and computer
efficiency, because the code is pretty simple.
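
For reference, the userspace-only pattern looks roughly like this (a
minimal sketch, not QEMU's actual code; the vcpu bookkeeping, the no-op
SIGUSR1 handler and how the VMM drives resume are assumed to exist
elsewhere in a hypothetical VMM):

#include <linux/kvm.h>
#include <pthread.h>
#include <signal.h>
#include <stdbool.h>

struct vcpu {
    pthread_t thread;
    struct kvm_run *run;        /* mmap()ed KVM_RUN shared area */
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool pause_requested;
static int n_parked;

/* Main thread: ask every vCPU to leave KVM_RUN, wait until all are parked. */
static void pause_all_vcpus(struct vcpu *vcpus, int n_vcpus)
{
    pthread_mutex_lock(&lock);
    pause_requested = true;
    for (int i = 0; i < n_vcpus; i++) {
        vcpus[i].run->immediate_exit = 1;        /* make KVM_RUN return */
        pthread_kill(vcpus[i].thread, SIGUSR1);  /* kick it out of the guest */
    }
    while (n_parked < n_vcpus)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

static void resume_all_vcpus(void)
{
    pthread_mutex_lock(&lock);
    pause_requested = false;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

/* vCPU thread: called every time ioctl(vcpu_fd, KVM_RUN, 0) returns. */
static void maybe_park(struct vcpu *vcpu)
{
    pthread_mutex_lock(&lock);
    if (pause_requested) {
        vcpu->run->immediate_exit = 0;
        n_parked++;
        pthread_cond_broadcast(&cond);           /* tell the pauser we parked */
        while (pause_requested)
            pthread_cond_wait(&cond, &lock);     /* sleep until resumed */
        n_parked--;
    }
    pthread_mutex_unlock(&lock);
}
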
That said, I believe the limited memslot API makes it more than just a
QEMU problem. Because KVM_GET_DIRTY_LOG cannot be combined atomically
with KVM_SET_USER_MEMORY_REGION(MR_DELETE), any VMM that uses dirty-log
regions while the VM is running is liable to lose the dirty status of
some pages. That's also a reason to provide this API in KVM.
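
To make the race concrete, this is roughly what deleting a dirty-logged
slot looks like with today's API (a sketch with made-up helper and
variable names, not code from this series):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* vm_fd, slot_id, gpa and bitmap are assumed to be set up by the VMM. */
static int delete_dirty_slot(int vm_fd, __u32 slot_id, __u64 gpa, void *bitmap)
{
    struct kvm_dirty_log log = {
        .slot = slot_id,
        .dirty_bitmap = bitmap,
    };
    struct kvm_userspace_memory_region region = {
        .slot = slot_id,
        .guest_phys_addr = gpa,
        .memory_size = 0,            /* size 0 == delete the slot */
    };

    /* 1: harvest the dirty bitmap accumulated so far. */
    if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
        return -1;

    /*
     * Window: vCPUs that are still running can dirty pages in the slot
     * here, and those bits are gone once the slot is deleted below.
     */

    /* 2: delete the slot -- not atomic with step 1. */
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

Unless every vCPU is stopped across the two steps, the dirty status of
pages touched in that window is silently lost.
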
Paolo