Message-ID: <CANgfPd-spmT1m9kGacpon9jmz-4YA_pwgp93xJGHrrS-2+F99g@mail.gmail.com>
Date: Mon, 6 Feb 2023 14:09:42 -0800
From: Ben Gardon <bgardon@...gle.com>
To: Vipin Sharma <vipinsh@...gle.com>
Cc: seanjc@...gle.com, pbonzini@...hat.com, dmatlack@...gle.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v2 1/5] KVM: x86/mmu: Make separate function to check for
SPTEs atomic write conditions
On Fri, Feb 3, 2023 at 11:28 AM Vipin Sharma <vipinsh@...gle.com> wrote:
>
> Move the condition checks in kvm_tdp_mmu_write_spte() that decide
> whether an SPTE must be written atomically into a separate function.
>
> The new function will be used in future commits to clear bits in SPTEs.
>
> Signed-off-by: Vipin Sharma <vipinsh@...gle.com>
Reviewed-by: Ben Gardon <bgardon@...gle.com>
> ---
> arch/x86/kvm/mmu/tdp_iter.h | 16 +++++++++++-----
> 1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index f0af385c56e0..30a52e5e68de 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -29,11 +29,10 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
> WRITE_ONCE(*rcu_dereference(sptep), new_spte);
> }
>
> -static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
> - u64 new_spte, int level)
> +static inline bool kvm_tdp_mmu_spte_has_volatile_bits(u64 old_spte, int level)
> {
> /*
> - * Atomically write the SPTE if it is a shadow-present, leaf SPTE with
> + * Atomically write SPTEs if it is a shadow-present, leaf SPTE with
Nit: SPTEs must be modified atomically if they are shadow-present,
leaf SPTEs with
> * volatile bits, i.e. has bits that can be set outside of mmu_lock.
> * The Writable bit can be set by KVM's fast page fault handler, and
> * Accessed and Dirty bits can be set by the CPU.
> @@ -44,8 +43,15 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
> * logic needs to be reassessed if KVM were to use non-leaf Accessed
> * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs.
> */
> - if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) &&
> - spte_has_volatile_bits(old_spte))
> + return is_shadow_present_pte(old_spte) &&
> + is_last_spte(old_spte, level) &&
> + spte_has_volatile_bits(old_spte);
> +}
> +
> +static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
> + u64 new_spte, int level)
> +{
> + if (kvm_tdp_mmu_spte_has_volatile_bits(old_spte, level))
> return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);
>
> __kvm_tdp_mmu_write_spte(sptep, new_spte);
> --
> 2.39.1.519.gcb327c4b5f-goog
>
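For reference, a rough sketch of the kind of future caller the changelog
hints at, i.e. one that clears bits in an SPTE via the new helper. The
function name and the Dirty-bit use case below are purely illustrative
and not taken from this series:

static inline u64 tdp_mmu_clear_dirty_bit(tdp_ptep_t sptep, u64 old_spte,
					  int level)
{
	u64 new_spte = old_spte & ~shadow_dirty_mask;

	/*
	 * If the SPTE has volatile bits, i.e. bits that can be set outside
	 * of mmu_lock, clear the bit with an atomic write so concurrent
	 * updates to the Writable/Accessed/Dirty bits aren't lost.
	 */
	if (kvm_tdp_mmu_spte_has_volatile_bits(old_spte, level))
		return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);

	__kvm_tdp_mmu_write_spte(sptep, new_spte);
	return old_spte;
}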