Message-ID: <23f11dc1-4fd1-4286-a69a-3892a869ed33@redhat.com>
Date: Tue, 23 Sep 2025 21:28:57 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>,
Ingo Molnar <mingo@...hat.com>, Thomas Gleixner <tglx@...utronix.de>,
x86@...nel.org, Borislav Petkov <bp@...en8.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] KVM: x86: Fix a semi theoretical bug in
kvm_arch_async_page_present_queued
On 9/23/25 20:55, Sean Christopherson wrote:
> On Tue, Sep 23, 2025, Paolo Bonzini wrote:
>> On 8/13/25 21:23, Maxim Levitsky wrote:
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 9018d56b4b0a..3d45a4cd08a4 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -13459,9 +13459,14 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
>>> void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu)
>>> {
>>> -	kvm_make_request(KVM_REQ_APF_READY, vcpu);
>>> -	if (!vcpu->arch.apf.pageready_pending)
>>> +	/* Pairs with smp_store_release in vcpu_enter_guest. */
>>> +	bool in_guest_mode = (smp_load_acquire(&vcpu->mode) == IN_GUEST_MODE);
>>> +	bool page_ready_pending = READ_ONCE(vcpu->arch.apf.pageready_pending);
>>> +
>>> +	if (!in_guest_mode || !page_ready_pending) {
>>> +		kvm_make_request(KVM_REQ_APF_READY, vcpu);
>>>  		kvm_vcpu_kick(vcpu);
>>> +	}
>>
>> Unlike Sean, I think the race exists in abstract and is not benign
>
> How is it not benign? I never said the race doesn't exist, I said that consuming
> a stale vcpu->arch.apf.pageready_pending in kvm_arch_async_page_present_queued()
> is benign.
In principle there is a possibility that a KVM_REQ_APF_READY is missed.
Just going by the documentation, without an smp_mb__after_atomic() this
is broken:
    kvm_make_request(KVM_REQ_APF_READY, vcpu);
    if (!vcpu->arch.apf.pageready_pending)
            kvm_vcpu_kick(vcpu);
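For reference, just as a sketch of what the documented rules alone would
require (not what the patch does), the barrier would sit between setting
the request bit and testing pageready_pending:

    kvm_make_request(KVM_REQ_APF_READY, vcpu);
    /* Order the request-bit store above before the load below. */
    smp_mb__after_atomic();
    if (!READ_ONCE(vcpu->arch.apf.pageready_pending))
            kvm_vcpu_kick(vcpu);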
In practice it won't happen, because set_bit() is written in asm with a
"memory" clobber, because the x86 set_bit() does prevent reordering at
the processor level, and so on.  In other words, the race is only
avoided because compiler reordering is prevented even in cases that
memory-barriers.txt does not promise.
Paolo