Message-ID: <CANRm+Cxet5FDQWy+aNb21KZBCqbLiRvZM1T6G64j=S7b8_9kfw@mail.gmail.com>
Date:   Wed, 15 Nov 2017 12:33:13 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Rik van Riel <riel@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm <kvm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        David Hildenbrand <david@...hat.com>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Radim Krcmar <rkrcmar@...hat.com>
Subject: Re: [PATCH 1/2] x86,kvm: move qemu/guest FPU switching out to vcpu_run

2017-11-15 11:03 GMT+08:00 Rik van Riel <riel@...hat.com>:
> On Wed, 2017-11-15 at 08:47 +0800, Wanpeng Li wrote:
>> 2017-11-15 5:54 GMT+08:00  <riel@...hat.com>:
>> > From: Rik van Riel <riel@...hat.com>
>> >
>> > Currently, every time a VCPU is scheduled out, the host kernel will
>> > first save the guest FPU/xstate context, then load the qemu
>> > userspace
>> > FPU context, only to then immediately save the qemu userspace FPU
>> > context back to memory. When scheduling in a VCPU, the same
>> > extraneous
>> > FPU loads and saves are done.
>> >
>> > This could be avoided by moving from a model where the guest FPU is
>> > loaded and stored with preemption disabled, to a model where the
>> > qemu userspace FPU is swapped out for the guest FPU context for
>> > the duration of the KVM_RUN ioctl.
>>
>> What will happen if CONFIG_PREEMPT is enabled?
>
> The scheduler will save the guest FPU context when a
> VCPU thread is preempted, and restore it when it is
> scheduled back in.

I mean the case where all the involved processes use the FPU. Before the
patch, if a kernel preemption occurs:

context_switch
  -> prepare_task_switch
        -> fire_sched_out_preempt_notifiers
              -> kvm_sched_out
                    -> kvm_arch_vcpu_put
                          -> kvm_put_guest_fpu
                               -> copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu)
                                    save xsave area to guest fpu buffer
                               -> __kernel_fpu_end
                                     -> copy_kernel_to_fpregs(&current->thread.fpu.state)
                                          restore prev vCPU qemu userspace FPU to the xsave area
  -> switch_to
        -> __switch_to
            -> switch_fpu_prepare
                  -> copy_fpregs_to_fpstate => save xsave area to prev vCPU qemu userspace FPU
            -> switch_fpu_finish
                  -> copy_kernel_to_fpregs => restore next task FPU to the xsave area
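
To make the before-patch ordering concrete, here is a rough C sketch of the
kvm_put_guest_fpu() step (not the literal kernel source, only the ordering the
trace above shows; the helper and field names are taken from the trace):

    /* Sketch of the pre-patch sched-out FPU handling, simplified. */
    static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
    {
            /* 1) save the live xsave area into the guest FPU buffer */
            copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);

            /* 2) __kernel_fpu_end() reloads qemu's userspace FPU state
             *    (current->thread.fpu.state) into the registers */
            __kernel_fpu_end();
    }

So by the time __switch_to()/switch_fpu_prepare() runs, the registers already
hold the qemu userspace FPU again, and saving them into the prev task's FPU
state is merely redundant, not harmful.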


After the patch:

context_switch
  -> prepare_task_switch
        -> fire_sched_out_preempt_notifiers
              -> kvm_sched_out

  -> switch_to
        -> __switch_to
            -> switch_fpu_prepare
                  -> copy_fpregs_to_fpstate => Oops
                       saves the xsave area to the prev vCPU's qemu userspace FPU
                       buffer; but the guest FPU context is what is currently loaded
                       in the xsave area, so the guest FPU state gets written into
                       the prev vCPU's qemu userspace FPU buffer
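
Put differently, a simplified sketch of the after-patch sched-out path (again
not the literal source; switch_fpu_prepare()'s argument list and the prev/cpu
names are illustrative):

    /* Sched-out of a preempted vCPU thread, after the patch (sketch). */
    kvm_sched_out(vcpu);    /* no kvm_arch_vcpu_put()/kvm_put_guest_fpu() any
                             * more, so the registers still hold guest xstate */

    switch_fpu_prepare(&prev->thread.fpu, cpu);
            /* -> copy_fpregs_to_fpstate(&prev->thread.fpu)
             *    writes the *guest* FPU contents into prev's (qemu's)
             *    userspace FPU buffer instead of vcpu->arch.guest_fpu */

so the prev vCPU thread's qemu userspace FPU buffer ends up holding guest
state instead of the qemu userspace state.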

Regards,
Wanpeng Li
