Message-ID: <20161215143054.GC6667@potion>
Date:   Thu, 15 Dec 2016 15:30:54 +0100
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     Roman Kagan <rkagan@...tuozzo.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Denis Plotnikov <dplotnikov@...tuozzo.com>, den@...tuozzo.com,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] KVM: x86: avoid redundant REQ_EVENT

2016-12-15 10:18+0300, Roman Kagan:
> On Wed, Dec 14, 2016 at 11:29:33PM +0100, Paolo Bonzini wrote:
>> On 14/12/2016 11:59, Denis Plotnikov wrote:
>> >  
>> >  	if ((exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT)
>> >  	    && nested_exit_intr_ack_set(vcpu)) {
>> > -		int irq = kvm_cpu_get_interrupt(vcpu);
>> > +		int irq = kvm_cpu_get_interrupt(vcpu, true);
>> >  		WARN_ON(irq < 0);
>> 
>> I think this is not needed, because all nested vmexits end with a KVM_REQ_EVENT:

I also think that it can safely be false and we could drop the parameter
from kvm_cpu_get_interrupt().
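
Something like this, just as a sketch of the direction -- the extint
path is simplified from memory and this is not the actual diff:

        int kvm_cpu_get_interrupt(struct kvm_vcpu *v)
        {
                int vector = kvm_cpu_get_extint(v);  /* PIC/ExtINT first */

                if (vector != -1)
                        return vector;

                /* no flag any more: always ack the vector in the local
                 * APIC (IRR -> ISR, PPR update), which is what
                 * kvm_get_apic_interrupt() does */
                return kvm_get_apic_interrupt(v);
        }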

(We have injected the highest-priority interrupt and put it into ISR,
 raising PPR back to that vector's level, so there should be nothing
 left for the KVM_REQ_EVENT processing to do due to any TPR changes.)
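
For reference, the PPR update boils down to roughly this (a simplified
sketch of the apic_update_ppr() logic; accessor names approximate, not
the exact lapic.c code):

        tpr  = kvm_lapic_get_reg(apic, APIC_TASKPRI);
        isrv = apic_find_highest_isr(apic);  /* -1 when ISR is empty */
        if (isrv < 0)
                isrv = 0;

        if ((tpr & 0xf0) >= (isrv & 0xf0))
                ppr = tpr & 0xff;
        else
                ppr = isrv & 0xf0;  /* class of the in-service vector */

i.e. PPR can never drop below the class of the vector we just moved
into ISR, no matter what the guest writes into TPR.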

>>         /*
>>          * the KVM_REQ_EVENT optimization bit is only on for one entry, and if
>>          * we did not inject a still-pending event to L1 now because of
>>          * nested_run_pending, we need to re-enable this bit.
>>          */
>>         if (vmx->nested.nested_run_pending)
>>                 kvm_make_request(KVM_REQ_EVENT, vcpu);
> 
> IIRC .nested_run_pending indicates we're emulating vmlaunch/vmresume and
> should not vmexit to L1, so this is not exactly "all nested vmexits"...
> 
>> This would allow you to always pass false from kvm_cpu_get_interrupt to
>> kvm_get_apic_interrupt.  Not sure if the additional complication in vmx.c
>> is worth the simplification in lapic.c.  Radim, second opinion? :)

This patch goes for a minimal change in the non-nested case, so I would
leave nVMX optimizations for another patch.

One useless round of KVM_REQ_EVENT is not going to change nested
performance by much and it is not the only thing we could improve wrt.
TPR ... I would just leave it for now and take care of it when we
 * don't update PPR at all with APICv -- it is already correct
 * drop the KVM_REQ_EVENT with flexpriority, because a lower TPR cannot
   unmask an interrupt (made-up numbers below)
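
To put made-up numbers on the TPR point: say the highest pending vector
was 0x81, so right after injection class 8 is in ISR and PPR is at
least 0x80.  Whatever TPR the guest writes, PPR cannot drop below 0x80,
and everything still left in IRR was below 0x81 at injection time, so
nothing of class > 8 is pending and nothing becomes deliverable -- e.g.
a pending 0x71 (class 7) stays masked even if TPR goes from 0x90 down
to 0x00.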
