Date:   Thu, 04 Aug 2022 10:59:48 +1200
From:   Kai Huang <kai.huang@...el.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Michael Roth <michael.roth@....com>,
        Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Fully re-evaluate MMIO caching
 when SPTE masks change

On Wed, 2022-08-03 at 22:49 +0000, Sean Christopherson wrote:
> Fully re-evaluate whether or not MMIO caching can be enabled when SPTE
> masks change; simply clearing enable_mmio_caching when a configuration
> isn't compatible with caching fails to handle the scenario where the
> masks are updated, e.g. by VMX for EPT or by SVM to account for the C-bit
> location, and toggle compatibility from false=>true.
> 
> Snapshot the original module param so that re-evaluating MMIO caching
> preserves userspace's desire to allow caching.  Use a snapshot approach
> so that enable_mmio_caching still reflects KVM's actual behavior.
> 
> Fixes: 8b9e74bfbf8c ("KVM: x86/mmu: Use enable_mmio_caching to track if MMIO caching is enabled")
> Reported-by: Michael Roth <michael.roth@....com>
> Cc: Tom Lendacky <thomas.lendacky@....com>
> Cc: stable@...r.kernel.org
> Tested-by: Michael Roth <michael.roth@....com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>

Reviewed-by: Kai Huang <kai.huang@...el.com>

One nit below...

> ---
>  arch/x86/kvm/mmu/mmu.c  |  4 ++++
>  arch/x86/kvm/mmu/spte.c | 19 +++++++++++++++++++
>  arch/x86/kvm/mmu/spte.h |  1 +
>  3 files changed, 24 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index bf808107a56b..48f34016cb0b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6699,11 +6699,15 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
>  /*
>   * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
>   * its default value of -1 is technically undefined behavior for a boolean.
> + * Forward the module init call to SPTE code so that it too can handle module
> + * params that need to be resolved/snapshot.
>   */
>  void __init kvm_mmu_x86_module_init(void)
>  {
>  	if (nx_huge_pages == -1)
>  		__set_nx_huge_pages(get_nx_auto_mode());
> +
> +	kvm_mmu_spte_module_init();
>  }
>  
>  /*
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 7314d27d57a4..66f76f5a15bd 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -20,6 +20,7 @@
>  #include <asm/vmx.h>
>  
>  bool __read_mostly enable_mmio_caching = true;
> +static bool __ro_after_init allow_mmio_caching;
>  module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
>  
>  u64 __read_mostly shadow_host_writable_mask;
> @@ -43,6 +44,18 @@ u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
>  
>  u8 __read_mostly shadow_phys_bits;
>  
> +void __init kvm_mmu_spte_module_init(void)
> +{
> +	/*
> +	 * Snapshot userspace's desire to allow MMIO caching.  Whether or not
> +	 * KVM can actually enable MMIO caching depends on vendor-specific
> +	 * hardware capabilities and other module params that can't be resolved
> +	 * until the vendor module is loaded, i.e. enable_mmio_caching can and
> +	 * will change when the vendor module is (re)loaded.
> +	 */
> +	allow_mmio_caching = enable_mmio_caching;

... Perhaps 'use_mmio_caching' or 'want_mmio_caching' would be a better name,
since it reflects userspace's desire? Anyway, I'll leave it to you to decide.

> +}
> +
>  static u64 generation_mmio_spte_mask(u64 gen)
>  {
>  	u64 mask;
> @@ -340,6 +353,12 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
>  	BUG_ON((u64)(unsigned)access_mask != access_mask);
>  	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
>  
> +	/*
> +	 * Reset to the original module param value to honor userspace's desire
> +	 * to (dis)allow MMIO caching.  Update the param itself so that
> +	 * userspace can see whether or not KVM is actually using MMIO caching.
> +	 */
> +	enable_mmio_caching = allow_mmio_caching;
>  	if (!enable_mmio_caching)
>  		mmio_value = 0;
>  
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index cabe3fbb4f39..26b144ffd146 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -450,6 +450,7 @@ static inline u64 restore_acc_track_spte(u64 spte)
>  
>  u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn);
>  
> +void __init kvm_mmu_spte_module_init(void);
>  void kvm_mmu_reset_all_pte_masks(void);
>  
>  #endif
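
For reference, here is a minimal, self-contained sketch (not KVM code) of the
snapshot/restore flow the changelog describes.  The function and variable names
mirror the patch; the "incompatible masks" check is a simplified stand-in for
the real vendor-specific logic, and the whole thing is just an illustration of
the false=>true toggle case:

/*
 * Standalone sketch of the snapshot/restore pattern.  Not KVM code; the
 * compatibility check below is a placeholder for the vendor-specific logic.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool enable_mmio_caching = true;   /* module param (0444)            */
static bool allow_mmio_caching;           /* snapshot of userspace's desire */

static void kvm_mmu_spte_module_init(void)
{
	/* Taken once when kvm.ko loads; survives vendor module reloads. */
	allow_mmio_caching = enable_mmio_caching;
}

static void kvm_mmu_set_mmio_spte_mask(uint64_t mmio_value, uint64_t mmio_mask)
{
	/* Start from userspace's desire on every vendor (re)load... */
	enable_mmio_caching = allow_mmio_caching;

	/* ...then disable caching only if this configuration can't support it
	 * (simplified placeholder check). */
	if (!mmio_mask || !mmio_value)
		enable_mmio_caching = false;

	printf("mask=%#llx value=%#llx -> caching %s\n",
	       (unsigned long long)mmio_mask, (unsigned long long)mmio_value,
	       enable_mmio_caching ? "on" : "off");
}

int main(void)
{
	kvm_mmu_spte_module_init();

	/* First vendor load: masks incompatible with caching, so it's off. */
	kvm_mmu_set_mmio_spte_mask(0, 0);

	/*
	 * Without the snapshot, enable_mmio_caching would now be stuck false.
	 * With it, a later reload with compatible masks turns caching back on
	 * (the false=>true case from the changelog).
	 */
	kvm_mmu_set_mmio_spte_mask(0x3, 0x7);
	return 0;
}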
