Message-ID: <CANRm+CxzROx=eawemmzh==2Mz-DxKSyYFSxHqLxUiGFFnWkAYw@mail.gmail.com>
Date:   Wed, 22 Apr 2020 08:48:53 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Haiwei Li <lihaiwei@...cent.com>
Subject: Re: [PATCH 1/2] KVM: X86: TSCDEADLINE MSR emulation fastpath

On Tue, 21 Apr 2020 at 19:37, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 21/04/20 13:20, Wanpeng Li wrote:
> > +     case MSR_IA32_TSCDEADLINE:
> > +             if (!kvm_x86_ops.event_needs_reinjection(vcpu)) {
> > +                     data = kvm_read_edx_eax(vcpu);
> > +                     if (!handle_fastpath_set_tscdeadline(vcpu, data))
> > +                             ret = EXIT_FASTPATH_CONT_RUN;
> > +             }
> >               break;
>
> Can you explain the event_needs_reinjection case?  Also, does this break

This is used to catch the case where the vmexit occurred while another
event was being delivered to guest software. I move the
vmx_exit_handlers_fastpath() call after vmx_complete_interrupts(),
which decodes such an event and makes kvm_event_needs_reinjection()
return true.
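
Roughly, the intended ordering in vmx_vcpu_run() is the following (a
sketch, not the exact diff):

	/*
	 * vmx_complete_interrupts() decodes any event that was in
	 * flight when the vmexit occurred, so calling the fastpath
	 * afterwards lets the event_needs_reinjection() check in the
	 * MSR fastpath see that event and skip the fast TSCDEADLINE
	 * write.
	 */
	vmx_complete_interrupts(vmx);
	exit_fastpath = vmx_exit_handlers_fastpath(vcpu);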

> AMD which does not implement the callback?

For now I add the TSCDEADLINE MSR emulation and VMX preemption timer
fastpath pair for the Intel platform only.
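
To avoid breaking AMD, which leaves the callback unimplemented, the
call could be guarded, e.g. (a sketch, just a NULL check added on top
of the quoted hunk):

	case MSR_IA32_TSCDEADLINE:
		/*
		 * Treat the callback as optional so a platform that
		 * does not implement it (currently SVM/AMD) skips the
		 * fastpath instead of dereferencing a NULL pointer.
		 */
		if (kvm_x86_ops.event_needs_reinjection &&
		    !kvm_x86_ops.event_needs_reinjection(vcpu)) {
			data = kvm_read_edx_eax(vcpu);
			if (!handle_fastpath_set_tscdeadline(vcpu, data))
				ret = EXIT_FASTPATH_CONT_RUN;
		}
		break;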

>
> > +
> > +     reg = kvm_lapic_get_reg(apic, APIC_LVTT);
> > +     if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED)) {
> > +             vector = reg & APIC_VECTOR_MASK;
> > +             kvm_lapic_clear_vector(vector, apic->regs + APIC_TMR);
> > +
> > +             if (vcpu->arch.apicv_active) {
> > +                     if (pi_test_and_set_pir(vector, &vmx->pi_desc))
> > +                             return;
> > +
> > +                     if (pi_test_and_set_on(&vmx->pi_desc))
> > +                             return;
> > +
> > +                     vmx_sync_pir_to_irr(vcpu);
> > +             } else {
> > +                     kvm_lapic_set_irr(vector, apic);
> > +                     kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
> > +                     vmx_inject_irq(vcpu);
> > +             }
> > +     }
>
> This is mostly a copy of
>
>                if (kvm_x86_ops.deliver_posted_interrupt(vcpu, vector)) {
>                         kvm_lapic_set_irr(vector, apic);
>                         kvm_make_request(KVM_REQ_EVENT, vcpu);
>                         kvm_vcpu_kick(vcpu);
>                 }
>                 break;
>
> (is it required to do vmx_sync_pir_to_irr?).  So you should not special

I observe that sending the notification vector as in
kvm_x86_ops.deliver_posted_interrupt() is ~900 cycles worse than
calling vmx_sync_pir_to_irr() in my case. It has to wait for guest
vmentry, then for the physical CPU to ack the notification vector,
read the posted-interrupt descriptor, etc. For the non-APICv part, the
original copy has to wait for inject_pending_event() to do this work.
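
For comparison, the existing delivery path is roughly the following
(from memory, not new code); the extra cost is in the notification IPI
at the end:

	/* Roughly the existing vmx_deliver_posted_interrupt() flow: */
	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
		return;
	if (pi_test_and_set_on(&vmx->pi_desc))
		return;
	/*
	 * The notification vector sent here is where the ~900 cycles
	 * go: the CPU has to ack the vector and read the
	 * posted-interrupt descriptor, whereas the fastpath above
	 * syncs PIR to IRR directly on the current CPU.
	 */
	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
		kvm_vcpu_kick(vcpu);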

> case LVTT and move this code to lapic.c instead.  But even before that...
>
> >
> > +
> > +     if (kvm_start_hv_timer(apic)) {
> > +             if (kvm_check_request(KVM_REQ_PENDING_TIMER, vcpu)) {
> > +                     if (kvm_x86_ops.interrupt_allowed(vcpu)) {
> > +                             kvm_clear_request(KVM_REQ_PENDING_TIMER, vcpu);
> > +                             kvm_x86_ops.fast_deliver_interrupt(vcpu);
> > +                             atomic_set(&apic->lapic_timer.pending, 0);
> > +                             apic->lapic_timer.tscdeadline = 0;
> > +                             return 0;
> > +                     }
> > +                     return 1;
>
>
> Is it actually common that the timer is set back in time and therefore
> this code is executed?

It is used to handle an already-expired timer, i.e. the guest programs
a deadline that is already in the past.
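
Roughly (illustrative only, not the actual kvm_start_hv_timer() code):

	/*
	 * A deadline at or below the current guest TSC has already
	 * expired, so instead of arming the VMX preemption timer the
	 * expiry is raised immediately and the branch above delivers
	 * the interrupt in place.
	 */
	if (tscdeadline <= kvm_read_l1_tsc(vcpu, rdtsc()))
		apic_timer_expired(apic); /* raises KVM_REQ_PENDING_TIMER */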

    Wanpeng
