Message-ID: <6812bb87-f355-eddb-c484-b3bb089dd630@redhat.com>
Date: Tue, 25 Oct 2022 12:05:01 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Emanuele Giuseppe Esposito <eesposit@...hat.com>,
kvm@...r.kernel.org
Cc: Jonathan Corbet <corbet@....net>,
Maxim Levitsky <mlevitsk@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] KVM: use signals to abort enter_guest/blocking and
retry
On 10/24/22 09:43, Emanuele Giuseppe Esposito wrote:
>> Since userspace should avoid going into this effectively-busy wait
>> anyway, what about clearing the request after the first exit?  The
>> cancellation ioctl can be kept for vCPUs that are never entered after
>> KVM_KICK_ALL_RUNNING_VCPUS. Alternatively, kvm_clear_all_cpus_request
>> could be done right before up_write().
>
> Clearing makes sense, but should we "trust" userspace not to go into a
> busy wait?
I think so; there are many other ways for userspace to screw up.
> What's the typical "contract" between KVM and userspace? Meaning,
> should we cover basic usage mistakes like forgetting to busy-wait on
> KVM_RUN?
Being able to remove the second ioctl would be worth it, I think, if you
do something like this (sort-of pseudocode based on this v1):

	kvm_make_all_cpus_request(kvm, KVM_REQ_USERSPACE_KICK);
	down_write(&kvm->memory_transaction);
	/* all running vCPUs have exited to userspace at this point */
	up_write(&kvm->memory_transaction);
	kvm_clear_all_cpus_request(kvm, KVM_REQ_USERSPACE_KICK);
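
For illustration only, a minimal sketch of the userspace side this would
enable (not part of the patch; it assumes the kick surfaces as -EINTR
from KVM_RUN, and vcpu_fd/run are placeholder names):

	#include <errno.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Placeholder retry loop; error handling trimmed for brevity. */
	static void run_vcpu(int vcpu_fd, struct kvm_run *run)
	{
		for (;;) {
			int ret = ioctl(vcpu_fd, KVM_RUN, 0);

			if (ret < 0 && errno == EINTR)
				continue;	/* kicked out, just retry */
			if (ret < 0)
				break;		/* real error */

			/* handle run->exit_reason (MMIO, PIO, ...) as usual */
		}
	}

The point of clearing KVM_REQ_USERSPACE_KICK in the kernel, as above, is
that this retry does not degenerate into a busy loop: by the time KVM_RUN
can be reentered, the request is already gone.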
Paolo