Message-ID: <69d0b2c3-0c14-83c8-0913-4ee163f9c1df@redhat.com>
Date:   Wed, 17 Aug 2022 18:49:21 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        mlevitsk@...hat.com, vkuznets@...hat.com
Subject: Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block

On 8/17/22 18:41, Sean Christopherson wrote:
> On Wed, Aug 17, 2022, Paolo Bonzini wrote:
>> On 8/17/22 01:34, Sean Christopherson wrote:
>>> Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
>>> just do:
>>>
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 9f11b505cbee..ccb9f8bdeb18 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>>>                   if (hv_timer)
>>>                           kvm_lapic_switch_to_hv_timer(vcpu);
>>>
>>> -               if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
>>> +               if (!kvm_arch_vcpu_runnable(vcpu))
>>>                           return 1;
>>>           }
>>>
>>>
>>> which IMO is more intuitive and doesn't require reworking halt-polling (again).
>>
>> This was indeed my first idea.  However, I didn't like calling
>> kvm_arch_vcpu_runnable() again, and "did it schedule()" seemed to be a less
>> interesting result from kvm_vcpu_block() (and in fact kvm_vcpu_halt() does
>> not bother passing it up the return chain).
> 
> The flip side of calling kvm_arch_vcpu_runnable() again is that KVM will immediately
> wake the vCPU if it becomes runnable after kvm_vcpu_check_block().  The edge cases
> where the vCPU becomes runnable late are unlikely to truly matter in practice, but
> on the other hand manually re-checking kvm_arch_vcpu_runnable() means KVM gets both
> cases "right" (waited=true iff vCPU actually waited, vCPU awakened ASAP), whereas
> squishing the information into the return of kvm_vcpu_check_block() means KVM gets
> both cases "wrong" (waited=true even if schedule() was never called, vCPU left in
> a non-running state even though it's runnable).
> 
> My only hesitation with calling kvm_arch_vcpu_runnable() again is that it could be
> problematic if KVM somehow managed to consume the event that caused kvm_vcpu_has_events()
> to return true, but I don't see how that could happen without it being a KVM bug.

No, I agree that it cannot happen, and especially so after getting rid 
of the kvm_check_nested_events() call in kvm_arch_vcpu_runnable().

I'll reorder the patches and apply your suggestion.

Paolo
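
For reference, here is a simplified sketch of vcpu_block() with the
suggestion applied.  It is reconstructed from the diff quoted above and is
illustrative only, not the literal committed code; the SRCU handling and the
remainder of the function are elided:

static inline int vcpu_block(struct kvm_vcpu *vcpu)
{
        bool hv_timer;

        if (!kvm_arch_vcpu_runnable(vcpu)) {
                /*
                 * Use the software timer while blocked: the hypervisor
                 * timer only runs while the CPU is in guest mode.
                 */
                hv_timer = kvm_lapic_hv_timer_in_use(vcpu);
                if (hv_timer)
                        kvm_lapic_switch_to_sw_timer(vcpu);

                if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
                        kvm_vcpu_halt(vcpu);
                else
                        kvm_vcpu_block(vcpu);

                if (hv_timer)
                        kvm_lapic_switch_to_hv_timer(vcpu);

                /*
                 * Re-check runnability instead of consuming KVM_REQ_UNHALT:
                 * if the vCPU became runnable only after
                 * kvm_vcpu_check_block() observed it as blocked, it is
                 * resumed immediately rather than left halted, and "waited"
                 * stays accurate to whether the vCPU actually waited.
                 */
                if (!kvm_arch_vcpu_runnable(vcpu))
                        return 1;
        }

        /* ... pending-event handling elided ... */
        return 1;
}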
