Message-ID: <20220520095428.bahy37jxkznqtwx5@yy-desk-7060>
Date: Fri, 20 May 2022 17:54:28 +0800
From: Yuan Yao <yuan.yao@...ux.intel.com>
To: Yun Lu <luyun_611@....com>
Cc: pbonzini@...hat.com, seanjc@...gle.com, vkuznets@...hat.com,
wanpengli@...cent.com, jmattson@...gle.com, joro@...tes.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/mmu: optimizing the code in mmu_try_to_unsync_pages
On Fri, May 20, 2022 at 02:09:07PM +0800, Yun Lu wrote:
> There is no need to check can_unsync and prefetch in the loop
> every time, just move this check before the loop.
>
> Signed-off-by: Yun Lu <luyun@...inos.cn>
> ---
> arch/x86/kvm/mmu/mmu.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 311e4e1d7870..e51e7735adca 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2534,6 +2534,12 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
> if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
> return -EPERM;
>
> + if (!can_unsync)
> + return -EPERM;
> +
> + if (prefetch)
> + return -EEXIST;
> +
> /*
> * The page is not write-tracked, mark existing shadow pages unsync
> * unless KVM is synchronizing an unsync SP (can_unsync = false). In
> @@ -2541,15 +2547,9 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
> * allowing shadow pages to become unsync (writable by the guest).
> */
> for_each_gfn_indirect_valid_sp(kvm, sp, gfn) {
> - if (!can_unsync)
> - return -EPERM;
> -
> if (sp->unsync)
> continue;
>
> - if (prefetch)
> - return -EEXIST;
> -
Consider the case where the for_each_gfn_indirect_valid_sp() loop is
never entered, meaning the gfn is not used as an MMU page table page:
The old behavior: return 0;
The new behavior with this change: return -EPERM / -EEXIST;
This at least breaks FNAME(sync_page) -> make_spte(prefetch = true,
can_unsync = false), which unexpectedly removes PT_WRITABLE_MASK from
the last-level mapping.
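
A minimal userspace sketch of the control-flow difference (not the
kernel code; nr_sps is a hypothetical stand-in for the number of
shadow pages that for_each_gfn_indirect_valid_sp() would visit):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Old behavior: can_unsync and prefetch are only checked inside the
 * loop, so with zero shadow pages for the gfn the function returns 0. */
static int old_try_to_unsync(int nr_sps, bool can_unsync, bool prefetch)
{
	for (int i = 0; i < nr_sps; i++) {
		if (!can_unsync)
			return -EPERM;
		if (prefetch)
			return -EEXIST;
		/* ... mark the shadow page unsync ... */
	}
	return 0;
}

/* Patched behavior: the checks are hoisted above the loop, so they
 * now fire even when there are no shadow pages for the gfn. */
static int new_try_to_unsync(int nr_sps, bool can_unsync, bool prefetch)
{
	if (!can_unsync)
		return -EPERM;
	if (prefetch)
		return -EEXIST;
	for (int i = 0; i < nr_sps; i++) {
		/* ... mark the shadow page unsync ... */
	}
	return 0;
}

int main(void)
{
	/* FNAME(sync_page) -> make_spte(prefetch = true, can_unsync = false)
	 * with a gfn that is not a page table page (nr_sps == 0): */
	printf("old: %d\n", old_try_to_unsync(0, false, true)); /* 0 */
	printf("new: %d\n", new_try_to_unsync(0, false, true)); /* -EPERM */
	return 0;
}

Since the caller treats any non-zero return as "write-protect the
mapping", the -EPERM in the second case is what strips
PT_WRITABLE_MASK from the last-level SPTE.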
> /*
> * TDP MMU page faults require an additional spinlock as they
> * run with mmu_lock held for read, not write, and the unsync
> --
> 2.25.1
>