Message-ID: <CANRm+Cyb9J8vE4jamLRHmYCsTpr1u9N4pmFG3nj83YoCgCZaZQ@mail.gmail.com>
Date: Thu, 16 Nov 2017 13:54:12 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Rik van Riel <riel@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
David Hildenbrand <david@...hat.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Radim Krcmar <rkrcmar@...hat.com>
Subject: Re: [PATCH 1/2] x86,kvm: move qemu/guest FPU switching out to vcpu_run

2017-11-15 22:40 GMT+08:00 Rik van Riel <riel@...hat.com>:
> On Wed, 2017-11-15 at 12:33 +0800, Wanpeng Li wrote:
>> 2017-11-15 11:03 GMT+08:00 Rik van Riel <riel@...hat.com>:
>> > On Wed, 2017-11-15 at 08:47 +0800, Wanpeng Li wrote:
>> > > 2017-11-15 5:54 GMT+08:00 <riel@...hat.com>:
>> > > > From: Rik van Riel <riel@...hat.com>
>> > > >
>> > > > Currently, every time a VCPU is scheduled out, the host kernel
>> > > > will
>> > > > first save the guest FPU/xstate context, then load the qemu
>> > > > userspace
>> > > > FPU context, only to then immediately save the qemu userspace
>> > > > FPU
>> > > > context back to memory. When scheduling in a VCPU, the same
>> > > > extraneous
>> > > > FPU loads and saves are done.
>> > > >
>> > > > This could be avoided by moving from a model where the guest
>> > > > FPU is
>> > > > loaded and stored with preemption disabled, to a model where
>> > > > the
>> > > > qemu userspace FPU is swapped out for the guest FPU context for
>> > > > the duration of the KVM_RUN ioctl.
>> > >
>> > > What will happen if CONFIG_PREEMPT is enabled?
>> >
>> > The scheduler will save the guest FPU context when a
>> > VCPU thread is preempted, and restore it when it is
>> > scheduled back in.
>>
>> I mean that all of the involved processes use the FPU. Before the
>> patch, if a kernel preemption occurs:
>>
>> context_switch
>>   -> prepare_task_switch
>>     -> fire_sched_out_preempt_notifiers
>>       -> kvm_sched_out
>>         -> kvm_arch_vcpu_put
>>           -> kvm_put_guest_fpu
>>             -> copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu)
>>                save the xsave area into the guest FPU buffer
>>             -> __kernel_fpu_end
>>               -> copy_kernel_to_fpregs(&current->thread.fpu.state)
>>                  restore the prev vCPU's qemu userspace FPU into
>>                  the xsave area
>>   -> switch_to
>>     -> __switch_to
>>       -> switch_fpu_prepare
>>         -> copy_fpregs_to_fpstate => save the xsave area into the
>>            prev vCPU's qemu userspace FPU
>>       -> switch_fpu_finish
>>         -> copy_kernel_to_fpregs => restore the next task's FPU
>>            into the xsave area
>>
>>
>> After the patch:
>>
>> context_switch
>>   -> prepare_task_switch
>>     -> fire_sched_out_preempt_notifiers
>>       -> kvm_sched_out
>>
>>   -> switch_to
>>     -> __switch_to
>>       -> switch_fpu_prepare
>>         -> copy_fpregs_to_fpstate => Oops
>>            this saves the xsave area into the prev vCPU's qemu
>>            userspace FPU buffer, but what is actually loaded in the
>>            xsave area is the guest FPU, so the guest FPU state is
>>            transferred into the prev vCPU's qemu userspace FPU
>
> When entering kvm_arch_vcpu_ioctl_run we save the qemu userspace
> FPU context in &vcpu->arch.user_fpu, and we restore that before
> leaving kvm_arch_vcpu_ioctl_run.
>
> Userspace should always see the userspace FPU context, no?

You are right.
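
To make sure I understand the new model, a rough sketch (simplified
and untested; it uses the vcpu->arch.user_fpu / vcpu->arch.guest_fpu
names from your description, not the exact code in the patch):

/* Swap the qemu userspace FPU state out for the guest FPU state. */
static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
{
        preempt_disable();
        /* Stash qemu's userspace FPU registers in vcpu->arch.user_fpu. */
        copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
        /* Load the guest FPU state into the CPU registers. */
        copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state);
        preempt_enable();
}

/* Swap the qemu userspace FPU state back in before leaving KVM_RUN. */
static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
{
        preempt_disable();
        /* Save the live guest FPU registers back into the guest buffer. */
        copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);
        /* Restore qemu's userspace FPU state for the return to userspace. */
        copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);
        preempt_enable();
}

int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
        int r;

        kvm_load_guest_fpu(vcpu);  /* guest FPU is live from here on */
        r = vcpu_run(vcpu);        /* preemption uses the normal task FPU path */
        kvm_put_guest_fpu(vcpu);   /* userspace FPU context visible again */
        return r;
}

So on a preemption inside vcpu_run, switch_fpu_prepare() parks
whatever is live in the registers (the guest FPU) in
current->thread.fpu, and switch_fpu_finish() brings it back on
schedule-in; the qemu userspace context stays stashed in
vcpu->arch.user_fpu until kvm_put_guest_fpu() restores it, so
userspace always sees its own FPU context.
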
Regards,
Wanpeng Li