Message-ID: <7e1d3590-2205-401c-c6f5-e4da534d85a7@linux.microsoft.com>
Date:   Thu, 16 Feb 2023 15:40:20 +0100
From:   Jeremi Piotrowski <jpiotrowski@...ux.microsoft.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, Tianyu Lan <ltykernel@...il.com>,
        "Michael Kelley (LINUX)" <mikelley@...rosoft.com>
Subject: Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM
 on Hyper-V



On 15/02/2023 23:16, Sean Christopherson wrote:
> On Tue, Feb 14, 2023, Jeremi Piotrowski wrote:
>> On 13/02/2023 20:56, Paolo Bonzini wrote:
>>> On Mon, Feb 13, 2023 at 8:12 PM Sean Christopherson <seanjc@...gle.com> wrote:
>>>>> Depending on the performance results of adding the hypercall to
>>>>> svm_flush_tlb_current, the fix could indeed be to just disable usage of
>>>>> HV_X64_NESTED_ENLIGHTENED_TLB.
>>>>
>>>> Minus making nested SVM (L3) mutually exclusive, I believe this will do the trick:
>>>>
>>>> +       /* Flush Hyper-V's cached NPT translations for the current root. */
>>>> +       hv_flush_tlb_current(vcpu);
>>>> +
>>>
>>> Yes, it's either this or disabling the feature.
>>>
>>> Paolo
>>
>> Combining the two sub-threads, both of the suggestions:
>>
>> a) adding a hyperv_flush_guest_mapping(__pa(root->spt)) after kvm_tdp_mmu_get_vcpu_root_hpa's call to tdp_mmu_alloc_sp()
>> b) adding a hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa) to svm_flush_tlb_current()
>>
>> appear to work in my test case (L2 VM startup until panic due to missing rootfs).
>>
>> But in both these cases (and also when I completely disable HV_X64_NESTED_ENLIGHTENED_TLB)
>> the runtime of an iteration of the test is noticeably longer compared to tdp_mmu=0.
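
(For concreteness, here is a sketch of what I tested for the two options. This is not a final patch: the svm_flush_tlb_current() body below is abbreviated from the current tree, feature/error checks are omitted, and hyperv_flush_guest_mapping() is the existing helper that issues the HvCallFlushGuestPhysicalAddressSpace hypercall:)

    /* Option a): in kvm_tdp_mmu_get_vcpu_root_hpa(), arch/x86/kvm/mmu/tdp_mmu.c */
    root = tdp_mmu_alloc_sp(vcpu);
    tdp_mmu_init_sp(root, NULL, 0, role);
    /*
     * Hyper-V's enlightened NPT TLB caches translations per root, so the
     * hypervisor needs to drop anything it has cached for this page before
     * it is used as a new root.
     */
    hyperv_flush_guest_mapping(__pa(root->spt));

    /* Option b): in svm_flush_tlb_current(), arch/x86/kvm/svm/svm.c */
    static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
    {
            struct vcpu_svm *svm = to_svm(vcpu);

            /* Flush Hyper-V's cached NPT translations for the current root. */
            hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa);

            if (static_cpu_has(X86_FEATURE_FLUSHBYASID))
                    svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ASID;
            else
                    svm->current_vmcb->asid_generation--;
    }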
> 
> Hmm, what is the test doing?

Booting through OVMF into a kernel with no rootfs provided and panic=-1 specified
on the kernel command line. It's a pure startup-time test.

> 
>> So in terms of performance the ranking is (fastest to slowest):
>> 1. tdp_mmu=0 + enlightened TLB
>> 2. tdp_mmu=0 + no enlightened TLB
>> 3. tdp_mmu=1 (enlightened TLB makes minimal difference)
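
(For reference, "disabling the feature" above means not opting in to EnlightenedNptTlb when the VMCB is initialized, i.e. dropping the opt-in in svm_hv_init_vmcb(). Sketch from memory of arch/x86/kvm/svm/svm_onhyperv.h; struct/field names may differ by tree:)

    static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
    {
            struct hv_vmcb_enlightenments *hve = &vmcb->control.hv_enlightenments;

            /*
             * Skipping this assignment leaves the enlightenment disabled,
             * and Hyper-V falls back to unenlightened NPT TLB flushing.
             */
            if (npt_enabled &&
                ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB)
                    hve->hv_enlightenments_control.enlightened_npt_tlb = 1;
    }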
