[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <77f30c15-9cae-46c2-ba2c-121712479b1c@oracle.com>
Date: Tue, 16 Apr 2024 19:37:09 -0400
From: boris.ostrovsky@...cle.com
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM/x86: Do not clear SIPI while in SMM
On 4/16/24 7:17 PM, Sean Christopherson wrote:
> On Tue, Apr 16, 2024, boris.ostrovsky@...cle.com wrote:
>> (Sorry, need to resend)
>>
>> On 4/16/24 6:03 PM, Paolo Bonzini wrote:
>>> On Tue, Apr 16, 2024 at 10:57 PM <boris.ostrovsky@...cle.com> wrote:
>>>> On 4/16/24 4:53 PM, Paolo Bonzini wrote:
>>>>> On 4/16/24 22:47, Boris Ostrovsky wrote:
>>>>>> Keeping the SIPI pending avoids this scenario.
>>>>>
>>>>> This is incorrect - it's yet another ugly legacy facet of x86, but we
>>>>> have to live with it. SIPI is discarded because the code is supposed
>>>>> to retry it if needed ("INIT-SIPI-SIPI").
>>>>
>>>> I couldn't find in the SDM/APM a definitive statement about whether SIPI
>>>> is supposed to be dropped.
>>>
>>> I think the manual is pretty consistent that SIPIs are never latched,
>>> they're only ever used in wait-for-SIPI state.
>>>
>>>>> The sender should set a flag as early as possible in the SIPI code so
>>>>> that it's clear that it was not received; and an extra SIPI is not a
>>>>> problem, it will be ignored anyway and will not cause trouble if
>>>>> there's a race.
>>>>>
>>>>> What is the reproducer for this?
>>>>
>>>> Hotplugging/unplugging cpus in a loop, especially if you oversubscribe
>>>> the guest, will get you there in 10-15 minutes.
>>>>
>>>> Typically (although I think not always) this is happening when OVMF if
>>>> trying to rendezvous and a processor is missing and is sent an extra SMI.
>>>
>>> Can you go into more detail? I wasn't even aware that OVMF's SMM
>>> supported hotplug - on real hardware I think there's extra work from
>>> the BMC to coordinate all SMIs across both existing and hotplugged
>>> packages(*)
>>
>>
>> It's been supported by OVMF for a couple of years (in fact, IIRC you were
>> part of at least initial conversations about this, at least for the unplug
>> part).
>>
>> During hotplug QEMU gathers all cpus in OVMF from (I think)
>> ich9_apm_ctrl_changed() and they are all waited for in
>> SmmCpuRendezvous()->SmmWaitForApArrival(). Occasionally it may so happen
>> that the SMI from QEMU is not delivered to a processor that was *just*
>> successfully hotplugged and so it is pinged again (https://github.com/tianocore/edk2/blob/fcfdbe29874320e9f876baa7afebc3fca8f4a7df/UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c#L304).
>>
>>
>> At the same time this processor is now being brought up by kernel and is
>> being sent INIT-SIPI-SIPI. If these (or at least the SIPIs) arrive after the
>> SMI reaches the processor then that processor is not going to have a good
>> day.
>
> It's specifically SIPI that's problematic. INIT is blocked by SMM, but latched,
> and SMIs are blocked by WFS, but latched. And AFAICT, KVM emulates all of those
> combinations correctly.
>
> Why is the SMI from QEMU not delivered? That seems like the smoking gun.
I haven't actually traced this but it seems that what happens is that
the newly-added processor is about to leave SMM and the count of in-SMM
processors is decremented. At the same time, since the processor is
still in SMM the QEMU's SMM is not taken.
And so when the count is looked at again in SmmWaitForApArrival() one
processor is missing.
-boris
Powered by blists - more mailing lists