Message-ID: <86c79601-2d3d-a6d7-a0a5-baba03e00709@amd.com>
Date: Thu, 16 Feb 2023 03:15:40 +0700
From: "Suthikulpanit, Suravee" <suravee.suthikulpanit@....com>
To: Sean Christopherson <seanjc@...gle.com>,
Joao Martins <joao.m.martins@...cle.com>
Cc: Igor Mammedov <imammedo@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Alejandro Jimenez <alejandro.j.jimenez@...cle.com>
Subject: Re: [PATCH v2 2/3] KVM: SVM: Modify AVIC GATag to support max number
of 512 vCPUs
On 2/7/2023 11:38 PM, Sean Christopherson wrote:
> On Tue, Feb 07, 2023, Joao Martins wrote:
>> On 07/02/2023 08:33, Igor Mammedov wrote:
>>> On Tue, 7 Feb 2023 00:21:55 +0000
>>> Sean Christopherson <seanjc@...gle.com> wrote:
>>>
>>>> From: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
>>>>
>>>> Define AVIC_VCPU_ID_MASK based on AVIC_PHYSICAL_MAX_INDEX, i.e. the mask
>>>> that effectively controls the largest guest physical APIC ID supported by
>>>> x2AVIC, instead of hardcoding the number of bits to 8 (and the number of
>>>> VM bits to 24).
>>>
>>> Is there any particular reason not to tie it to the max supported by KVM,
>>> KVM_MAX_VCPU_IDS?
>>>
>>> Another question:
>>> will a guest fail to start when configured with more than 512 vCPUs,
>>> or will it start broken?
>>>
>>
>> I think the problem is not so much the GATag (which can really be anything at
>> the resolution you want); it's more of an SVM limit AIUI. Provided you can't
>> have GATags if you don't have guest-mode/AVIC active, it makes sense to have
>> the same limit on both.
Correct.
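
To make the encoding concrete, here is a rough sketch of the idea (not the
actual patch; the SKETCH_* names and helpers are made up for illustration):
size the vCPU field of the 32-bit GATag from the physical ID table limit
(511, i.e. 9 bits) instead of hardcoding 8 bits, and leave the remaining
23 bits for the VM ID.

    #include <stdint.h>

    /* Sketch only: 511 is the x2AVIC physical ID table limit (9 bits). */
    #define SKETCH_VCPU_ID_MASK  0x1ffu                              /* bits 8:0 */
    #define SKETCH_VM_ID_SHIFT   9
    #define SKETCH_VM_ID_MASK    (0xffffffffu >> SKETCH_VM_ID_SHIFT) /* 23 bits  */

    static inline uint32_t sketch_make_gatag(uint32_t vm_id, uint32_t vcpu_id)
    {
            return ((vm_id & SKETCH_VM_ID_MASK) << SKETCH_VM_ID_SHIFT) |
                   (vcpu_id & SKETCH_VCPU_ID_MASK);
    }

    static inline uint32_t sketch_gatag_to_vcpu_id(uint32_t gatag)
    {
            return gatag & SKETCH_VCPU_ID_MASK;
    }
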
> Yep. The physical ID table, which is needed to achieve full AVIC benefits for a
> vCPU, is a single 4KiB page that holds 512 64-bit entries. AIUI, the GATag is
> used if and only if the interrupt target is in the physical ID table, so using
> more GATag bits for vCPU ID is pointless.
Correct.
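
Restating the arithmetic behind that (the numbers below are just the ones
from the paragraph above, not kernel definitions):

    #include <assert.h>

    int main(void)
    {
            /* One 4 KiB physical ID table page holds 512 64-bit (8-byte) entries. */
            assert(4096 / 8 == 512);
            /* The largest table index, 511, fits in 9 bits, so GATag bits beyond
             * a 9-bit vCPU field cannot address any additional table entry. */
            assert(512 - 1 == 0x1ff);
            return 0;
    }
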
>> SVM seems to be limited to 256 vCPUs in xAPIC mode or 512 vCPUs in x2APIC
>> mode[0]. IIUC you actually won't be able to create guests with more than
>> 512 vCPUs, as KVM bounds-checks those max limits very early in vCPU init (see
>> avic_init_vcpu()). I guess the alternative would be an AVIC inhibit if the vCPU
>> count goes beyond those limits -- probably a must-have once AVIC flips to
>> enabled by default, as on Intel.
>
> I don't _think_ KVM would have to explicitly inhibit AVIC. I believe the fallout
> would be that vCPUs >= 512 would simply not be eligible for virtual interrupt
> delivery, e.g. KVM would get an "Invalid Target in IPI" exit. I haven't dug into
> the IOMMU side of things though, so it's possible something in that world would
> necessitate disabling (x2)AVIC.
SVM-AVIC is independent of IOMMU-AVIC. We can enable SVM-AVIC and use
the legacy IOMMU interrupt remapping mode (IRTE[GuestMode]=0).
However, I have not explored the case of combining the two modes. I can
look into it and experiment with that case.
Thanks,
Suravee
>> [0] In APM Volume 2, Section 15.29.4.3 "Physical Address Pointer Restrictions":
>>
>> * All the addresses point to 4-Kbyte aligned data structures. Bits 11:0 are
>> reserved (except for offset 0F8h) and should be set to zero. The lower 8 bits of
>> offset 0F8h are used for the field AVIC_PHYSICAL_MAX_INDEX. VMRUN fails with
>> #VMEXIT(VMEXIT_INVALID) if AVIC_PHYSICAL_MAX_INDEX is greater than 255 in xAVIC
>> mode or greater than 511 in x2AVIC mode.
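
For reference, the restriction quoted above amounts to a VMRUN-time
consistency check of roughly the following shape (an illustrative sketch
only; the names below are made up and this is not real KVM or hardware code):

    #include <stdint.h>

    /* Returns non-zero if the (hypothetical) AVIC fields satisfy the quoted
     * APM restrictions; a real VMRUN would otherwise fail with
     * #VMEXIT(VMEXIT_INVALID). */
    static int sketch_avic_config_is_valid(uint64_t phys_id_table_pa,
                                           unsigned int physical_max_index,
                                           int x2avic_mode)
    {
            /* Table pointers must be 4 KiB aligned: bits 11:0 reserved/zero. */
            if (phys_id_table_pa & 0xfffull)
                    return 0;
            /* AVIC_PHYSICAL_MAX_INDEX: <= 255 in xAVIC, <= 511 in x2AVIC mode. */
            return physical_max_index <= (x2avic_mode ? 511u : 255u);
    }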