Message-ID: <ad3c51a3-f46e-c559-7ad8-573564f63875@redhat.com>
Date: Mon, 13 Feb 2023 19:26:41 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Jeremi Piotrowski <jpiotrowski@...ux.microsoft.com>,
Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Tianyu Lan <ltykernel@...il.com>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>
Subject: Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM
on Hyper-V
On 2/13/23 19:05, Jeremi Piotrowski wrote:
> So I looked at the ftrace (all kvm & kvmmu events + hyperv_nested_*
> events) and I see the following:
>
> With tdp_mmu=0:
>   kvm_exit
>   sequence of kvm_mmu_prepare_zap_page
>   hyperv_nested_flush_guest_mapping (always follows every sequence of
>     kvm_mmu_prepare_zap_page)
>   kvm_entry
>
> With tdp_mmu=1 I see: kvm_mmu_prepare_zap_page and
> kvm_tdp_mmu_spte_changed events from a kworker context, but they are
> not followed by hyperv_nested_flush_guest_mapping. The only
> hyperv_nested_flush_guest_mapping events I see happen from the qemu
> process context.
>
> Also the number of flush hypercalls is significantly lower: a 7-second
> sequence through OVMF with tdp_mmu=0 produces ~270 flush hypercalls.
> In the traces with tdp_mmu=1 I now see at most 3.
>
> So this might be easier to diagnose than I thought: the
> HvCallFlushGuestPhysicalAddressSpace calls are missing now.
Can you check if KVM is reusing an nCR3 value?
If so, perhaps you can just add
hyperv_flush_guest_mapping(__pa(root->spt), NULL) after
kvm_tdp_mmu_get_vcpu_root_hpa's call to tdp_mmu_alloc_sp()?
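
Something like this, completely untested (and if I remember the
prototype right, hyperv_flush_guest_mapping() only takes the address
space root):

	/* In kvm_tdp_mmu_get_vcpu_root_hpa(), right after allocating the root: */
	root = tdp_mmu_alloc_sp(vcpu);

	/*
	 * The page backing this root may have served as an nCR3 before,
	 * and Hyper-V caches guest physical mappings per address space,
	 * so make sure any stale entries for this root are dropped before
	 * it is put back into use.
	 */
	hyperv_flush_guest_mapping(__pa(root->spt));

That should at least tell us whether flushing when a root is
(re)allocated makes the problem go away.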
Paolo