Date:   Fri, 2 Aug 2019 15:59:40 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Krish Sadhukhan <krish.sadhukhan@...cle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] KVM: x86: Unconditionally call x86 ops that are
 always implemented

On Fri, Aug 02, 2019 at 03:48:27PM -0700, Krish Sadhukhan wrote:
> 
> On 08/02/2019 03:06 PM, Sean Christopherson wrote:
> >Remove a few stale checks for non-NULL ops now that the ops in question
> >are implemented by both VMX and SVM.
> >
> >Note, this is **not** stable material, the Fixes tags are there purely
> >to show when a particular op was first supported by both VMX and SVM.
> >
> >Fixes: 74f169090b6f ("kvm/svm: Setup MCG_CAP on AMD properly")
> >Fixes: b31c114b82b2 ("KVM: X86: Provide a capability to disable PAUSE intercepts")
> >Fixes: 411b44ba80ab ("svm: Implements update_pi_irte hook to setup posted interrupt")
> >Cc: Krish Sadhukhan <krish.sadhukhan@...cle.com>
> >Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> >---
> >
> >v2: Give update_pi_irte the same treatment [Krish].
> >
> >  arch/x86/kvm/x86.c | 13 +++----------
> >  1 file changed, 3 insertions(+), 10 deletions(-)
> >
> >diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >index 01e18caac825..e7c993f0cbed 100644
> >--- a/arch/x86/kvm/x86.c
> >+++ b/arch/x86/kvm/x86.c
> >@@ -3506,8 +3506,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu,
> >  	for (bank = 0; bank < bank_num; bank++)
> >  		vcpu->arch.mce_banks[bank*4] = ~(u64)0;
> >-	if (kvm_x86_ops->setup_mce)
> >-		kvm_x86_ops->setup_mce(vcpu);
> >+	kvm_x86_ops->setup_mce(vcpu);
> >  out:
> >  	return r;
> >  }
> >@@ -9313,10 +9312,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >  	kvm_page_track_init(kvm);
> >  	kvm_mmu_init_vm(kvm);
> >-	if (kvm_x86_ops->vm_init)
> >-		return kvm_x86_ops->vm_init(kvm);
> >-
> >-	return 0;
> >+	return kvm_x86_ops->vm_init(kvm);
> >  }
> >  static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
> >@@ -9992,7 +9988,7 @@ EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma);
> >  bool kvm_arch_has_irq_bypass(void)
> 
> Now that this always returns true and is called only in
> kvm_irqfd_assign(), can it perhaps be removed altogether?

No go, PowerPC has a conditional implementation.
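
For reference, a rough sketch of the asymmetry (from memory, not the
verbatim upstream code; the PowerPC symbol names kvmppc_hv_ops,
kvmppc_pr_ops and ->irq_bypass_add_producer in particular are
illustrative and should be checked against the tree): with this patch
x86 can answer unconditionally, while PowerPC's answer still depends
on which ops implementation is registered at runtime, so the
per-arch kvm_arch_has_irq_bypass() hook has to stay.

/* arch/x86/kvm/x86.c -- after this patch, x86 always supports bypass */
bool kvm_arch_has_irq_bypass(void)
{
	return true;
}

/* arch/powerpc/kvm/powerpc.c -- sketch only: bypass is available only
 * if the registered HV or PR implementation provides the producer
 * hook, so the result is conditional and cannot be hard-coded.
 */
bool kvm_arch_has_irq_bypass(void)
{
	return ((kvmppc_hv_ops && kvmppc_hv_ops->irq_bypass_add_producer) ||
		(kvmppc_pr_ops && kvmppc_pr_ops->irq_bypass_add_producer));
}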
