Message-ID: <YAh4q6ZCOw3qDzHP@google.com>
Date: Wed, 20 Jan 2021 10:38:35 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
Peter Feiner <pfeiner@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 04/24] kvm: x86/mmu: change TDP MMU yield function
returns to match cond_resched
On Tue, Jan 12, 2021, Ben Gardon wrote:
> Currently the TDP MMU yield / cond_resched functions either return
> nothing or return true if the TLBs were not flushed. These are confusing
> semantics, especially when making control flow decisions in calling
> functions.
>
> To clean things up, change both functions to have the same
> return value semantics as cond_resched: true if the thread yielded,
> false if it did not. If the function yielded in the _flush_ version,
> then the TLBs will have been flushed.
>
> Reviewed-by: Peter Feiner <pfeiner@...gle.com>
> Signed-off-by: Ben Gardon <bgardon@...gle.com>
> ---
> arch/x86/kvm/mmu/tdp_mmu.c | 38 +++++++++++++++++++++++++++++---------
> 1 file changed, 29 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 2ef8615f9dba..b2784514ca2d 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -413,8 +413,15 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
> _mmu->shadow_root_level, _start, _end)
>
> /*
> - * Flush the TLB if the process should drop kvm->mmu_lock.
> - * Return whether the caller still needs to flush the tlb.
> + * Flush the TLB and yield if the MMU lock is contended or this thread needs to
> + * return control to the scheduler.
> + *
> + * If this function yields, it will also reset the tdp_iter's walk over the
> + * paging structure and the calling function should allow the iterator to
> + * continue its traversal from the paging structure root.
> + *
> + * Return true if this function yielded, the TLBs were flushed, and the
> + * iterator's traversal was reset. Return false if a yield was not needed.
> */
> static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
> {
> @@ -422,18 +429,30 @@ static bool tdp_mmu_iter_flush_cond_resched(struct kvm *kvm, struct tdp_iter *it
> kvm_flush_remote_tlbs(kvm);
> cond_resched_lock(&kvm->mmu_lock);
> tdp_iter_refresh_walk(iter);
> - return false;
> - } else {
> return true;
> - }
> + } else
> + return false;
Kernel style is to have curly braces on all branches if any branch has 'em. Or,
omit the else since the taken branch always returns. I think I prefer the latter?
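E.g. something like this (untested, and assuming the flush variant uses the same
need_resched()/spin_needbreak() check as the non-flush variant below):

        if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                kvm_flush_remote_tlbs(kvm);
                cond_resched_lock(&kvm->mmu_lock);
                tdp_iter_refresh_walk(iter);
                return true;
        }

        return false;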
> }
>
> -static void tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
> +/*
> + * Yield if the MMU lock is contended or this thread needs to return control
> + * to the scheduler.
> + *
> + * If this function yields, it will also reset the tdp_iter's walk over the
> + * paging structure and the calling function should allow the iterator to
> + * continue its traversal from the paging structure root.
> + *
> + * Return true if this function yielded and the iterator's traversal was reset.
> + * Return false if a yield was not needed.
> + */
> +static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
> {
> if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
> cond_resched_lock(&kvm->mmu_lock);
> tdp_iter_refresh_walk(iter);
> - }
> + return true;
> + } else
> + return false;
Same here.
> }
>
> /*
> @@ -470,7 +489,8 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> tdp_mmu_set_spte(kvm, &iter, 0);
>
> if (can_yield)
> - flush_needed = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
> + flush_needed = !tdp_mmu_iter_flush_cond_resched(kvm,
> + &iter);
As with the existing code, I'd let this poke out past the 80 char limit rather
than wrap the argument. Alternatively, this could be written as:
flush_needed = !can_yield ||
!tdp_mmu_iter_flush_cond_resched(kvm, &iter);
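which also lets the bare else arm below go away, i.e. the tail of the loop body
would end up as something like (untested):

                tdp_mmu_set_spte(kvm, &iter, 0);

                /*
                 * A flush is still pending unless yielding was allowed and
                 * the helper actually yielded (and thus flushed).
                 */
                flush_needed = !can_yield ||
                               !tdp_mmu_iter_flush_cond_resched(kvm, &iter);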
> else
> flush_needed = true;
> }
> @@ -1072,7 +1092,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
>
> tdp_mmu_set_spte(kvm, &iter, 0);
>
> - spte_set = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
> + spte_set = !tdp_mmu_iter_flush_cond_resched(kvm, &iter);
> }
>
> if (spte_set)
> --
> 2.30.0.284.gd98b1dd5eaa7-goog
>