Message-ID: <113888e0-ba94-deca-57a1-d3549d55a315@linux.microsoft.com>
Date: Fri, 24 Feb 2023 17:17:18 +0100
From: Jeremi Piotrowski <jpiotrowski@...ux.microsoft.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Tianyu Lan <ltykernel@...il.com>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>
Subject: Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM
on Hyper-V

On 16/02/2023 15:40, Jeremi Piotrowski wrote:
> On 15/02/2023 23:16, Sean Christopherson wrote:
>> On Tue, Feb 14, 2023, Jeremi Piotrowski wrote:
>>> On 13/02/2023 20:56, Paolo Bonzini wrote:
>>>> On Mon, Feb 13, 2023 at 8:12 PM Sean Christopherson <seanjc@...gle.com> wrote:
>>>>>> Depending on the performance results of adding the hypercall to
>>>>>> svm_flush_tlb_current, the fix could indeed be to just disable usage of
>>>>>> HV_X64_NESTED_ENLIGHTENED_TLB.
>>>>>
>>>>> Minus making nested SVM (L3) mutually exclusive, I believe this will do the trick:
>>>>>
>>>>> + /* blah blah blah */
>>>>> + hv_flush_tlb_current(vcpu);
>>>>> +
>>>>
>>>> Yes, it's either this or disabling the feature.
>>>>
>>>> Paolo
>>>
>>> Combining the two sub-threads: both of the suggestions:
>>>
>>> a) adding a hyperv_flush_guest_mapping(__pa(root->spt)) after kvm_tdp_mmu_get_vcpu_root_hpa's call to tdp_mmu_alloc_sp()
>>> b) adding a hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa) to svm_flush_tlb_current()
>>>
>>> appear to work in my test case (L2 vm startup until panic due to missing rootfs).
>>>
>>> But in both these cases (and also when I completely disable HV_X64_NESTED_ENLIGHTENED_TLB)
>>> the runtime of an iteration of the test is noticeably longer compared to tdp_mmu=0.
>>
>> Hmm, what is the test doing?
>
> Booting through OVMF into a kernel with no rootfs provided and panic=-1 specified on the
> kernel command line. It's a pure startup-time test.
>
Hi Sean,
Have you been able to reproduce this by any chance?
I would be glad to see either of the two fixes merged in order to get this regression
resolved: b), or a) if it doesn't require special handling for nested (L3) guests.
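
For reference, a rough and untested sketch of what I have in mind for b) is below.
The placement and the VALID_PAGE() guard are my assumptions rather than a finished
patch, and a real version should presumably only do this when
HV_X64_NESTED_ENLIGHTENED_TLB is actually in use.

In svm_flush_tlb_current():

+	/*
+	 * Sketch of b): the existing ASID-based flush does not invalidate
+	 * the L0 hypervisor's cached translations for the current NPT
+	 * root, so flush them explicitly via the existing helper that
+	 * issues HvCallFlushGuestPhysicalAddressSpace.
+	 */
+	if (VALID_PAGE(vcpu->arch.mmu->root.hpa))
+		hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa);

a) would instead call hyperv_flush_guest_mapping(__pa(root->spt)) right after
tdp_mmu_alloc_sp() allocates a new root in kvm_tdp_mmu_get_vcpu_root_hpa(). Both
variants make the L2 boot correctly here, so the open question is mostly the
performance difference mentioned above.
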
Jeremi