Message-ID: <YGZPhq9YI2m/OSBu@google.com>
Date:   Thu, 1 Apr 2021 22:56:06 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Wanpeng Li <wanpengli@...cent.com>,
        Borislav Petkov <bp@...en8.de>,
        Jim Mattson <jmattson@...gle.com>,
        "open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)" 
        <linux-kernel@...r.kernel.org>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, Joerg Roedel <joro@...tes.org>,
        Ingo Molnar <mingo@...hat.com>,
        "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>
Subject: Re: [PATCH 3/4] KVM: x86: correctly merge pending and injected
 exception

On Thu, Apr 01, 2021, Paolo Bonzini wrote:
> On 01/04/21 16:38, Maxim Levitsky wrote:
> > +static int kvm_do_deliver_pending_exception(struct kvm_vcpu *vcpu)
> > +{
> > +	int class1, class2, ret;
> > +
> > +	/* try to deliver current pending exception as VM exit */
> > +	if (is_guest_mode(vcpu)) {
> > +		ret = kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> > +		if (ret || !vcpu->arch.pending_exception.valid)
> > +			return ret;
> > +	}
> > +
> > +	/* No injected exception, so just deliver the payload and inject it */
> > +	if (!vcpu->arch.injected_exception.valid) {
> > +		trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> > +					vcpu->arch.pending_exception.has_error_code,
> > +					vcpu->arch.pending_exception.error_code);
> > +queue:
> 
> If you move the queue label to the top of the function, you can "goto queue" for #DF as well and you don't need to call kvm_do_deliver_pending_exception again.  In fact you can merge this function and kvm_deliver_pending_exception completely:
> 
> 
> static int kvm_deliver_pending_exception_as_vmexit(struct kvm_vcpu *vcpu)
> {
> 	WARN_ON(!vcpu->arch.pending_exception.valid);
> 	if (is_guest_mode(vcpu))
> 		return kvm_x86_ops.nested_ops->deliver_exception_as_vmexit(vcpu);
> 	else
> 		return 0;
> }
> 
> static int kvm_merge_injected_exception(struct kvm_vcpu *vcpu)
> {
> 	int ret;
> 	/*
> 	 * First check if the pending exception takes precedence
> 	 * over the injected one, which will be reported in the
> 	 * vmexit info.
> 	 */
> 	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
> 	if (ret || !vcpu->arch.pending_exception.valid)
> 		return ret;
> 
> 	if (vcpu->arch.injected_exception.nr == DF_VECTOR) {
> 		...
> 		return 0;
> 	}
> 	...
> 	if ((class1 == EXCPT_CONTRIBUTORY && class2 == EXCPT_CONTRIBUTORY)
> 	    || (class1 == EXCPT_PF && class2 != EXCPT_BENIGN)) {
> 		...
> 	}
> 	vcpu->arch.injected_exception.valid = false;
> }
> 
> static int kvm_deliver_pending_exception(struct kvm_vcpu *vcpu)
> {
> 	int ret;
> 	if (!vcpu->arch.pending_exception.valid)
> 		return 0;
> 
> 	if (vcpu->arch.injected_exception.valid)
> 		kvm_merge_injected_exception(vcpu);
> 
> 	ret = kvm_deliver_pending_exception_as_vmexit(vcpu);
> 	if (ret || !vcpu->arch.pending_exception.valid)

I really don't like querying arch.pending_exception.valid to see if the exception
was morphed to a VM-Exit.  I also find kvm_deliver_pending_exception_as_vmexit()
to be misleading; to me, that reads as being a command, i.e. "deliver this
pending exception as a VM-Exit".

It'd also be nice to make the helpers closer to pure functions, i.e. pass the
exception as a param instead of pulling it from vcpu->arch.

Now that we have static_call, the number of calls into vendor code isn't a huge
issue.  Moving nested_run_pending to arch code would help, too.  What about
doing something like:

static bool kvm_l1_wants_exception_vmexit(struct kvm_vcpu *vcpu, u8 vector)
{
	return is_guest_mode(vcpu) && kvm_x86_l1_wants_exception(vcpu, vector);
}

	...

	if (!kvm_x86_exception_allowed(vcpu))
		return -EBUSY;

	if (kvm_l1_wants_exception_vmexit(vcpu, vcpu->arch...))
		return kvm_x86_deliver_exception_as_vmexit(...);

> 		return ret;
> 
> 	trace_kvm_inj_exception(vcpu->arch.pending_exception.nr,
> 				vcpu->arch.pending_exception.has_error_code,
> 				vcpu->arch.pending_exception.error_code);
> 	...
> }
> 
