Message-ID: <BYAPR11MB325676AAA8A0785AF992A2B9A9B79@BYAPR11MB3256.namprd11.prod.outlook.com>
Date: Wed, 13 Oct 2021 07:46:50 +0000
From: "Liu, Jing2" <jing2.liu@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>
CC: "x86@...nel.org" <x86@...nel.org>,
"Bae, Chang Seok" <chang.seok.bae@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"Arjan van de Ven" <arjan@...ux.intel.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
Jing Liu <jing2.liu@...ux.intel.com>,
"seanjc@...gle.com" <seanjc@...gle.com>
Subject: RE: [patch 13/31] x86/fpu: Move KVMs FPU swapping to FPU core
> On 13/10/21 08:15, Liu, Jing2 wrote:
> > After KVM passes XFD through to the guest, when a vmexit opens the irq
> > window and KVM is interrupted, the kernel softirq path can call
> > kernel_fpu_begin() to touch xsave state. This function does XSAVES. If
> > the guest's XFD[18] is 1 and guest AMX state is live in the registers,
> > then the guest AMX state is lost by the XSAVES.
>
> Yes, the host value of XFD (which is zero) has to be restored after vmexit.
> See how KVM already handles SPEC_CTRL.
>
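If I understand the suggestion correctly, the idea is roughly the following
(only a sketch to check my understanding; the hook name and the guest_xfd
field are made up here, and MSR_IA32_XFD is the XFD MSR added by this
series):

/*
 * Rough sketch, not actual KVM code: restore the host's XFD value
 * (zero) right after VM-exit, before interrupts are enabled, so that
 * a softirq calling kernel_fpu_begin()/XSAVES cannot lose the guest's
 * AMX state. Analogous in spirit to the SPEC_CTRL restore on VM-exit.
 */
static void restore_host_xfd_after_vmexit(struct kvm_vcpu *vcpu)
{
	/* vcpu->arch.guest_xfd is a made-up field for this sketch. */
	if (vcpu->arch.guest_xfd)
		wrmsrl(MSR_IA32_XFD, 0);
}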
That said, I'm trying to understand why qemu's XFD would be zero after the
kernel supports AMX. Do you mean that in the guest #NM trap KVM also
allocates an extra user_fpu buffer and clears qemu's XFD? But why do we
need to do that?
I think the host kernel clears qemu's XFD[18] only when qemu userspace
requests AMX permission and then executes an AMX instruction, generating a
host #NM.
When a guest #NM is trapped, KVM does *not* need to clear the host's XFD;
it only needs to allocate the guest_fpu buffer and the current->thread.fpu
buffer, and clear the guest's XFD.
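In other words, something like the sketch below (the helper name
kvm_realloc_guest_fpu() is made up, and XFEATURE_MASK_XTILE_DATA is assumed
to be the mask for XFD bit 18):

/*
 * Sketch of the guest #NM flow described above: grow the buffers and
 * clear only the guest's XFD; the host's XFD is left untouched.
 */
static int handle_guest_nm(struct kvm_vcpu *vcpu)
{
	int ret;

	/* Reallocate both the guest_fpu and current->thread.fpu buffers. */
	ret = kvm_realloc_guest_fpu(vcpu);
	if (ret)
		return ret;

	/* Clear the guest's XFD[18] so the guest can use AMX state. */
	vcpu->arch.guest_xfd &= ~XFEATURE_MASK_XTILE_DATA;
	return 0;
}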
> Passthrough of XFD is only enabled after the guest has caused an #NM
> vmexit and the full XSAVE state has been dynamically allocated, therefore
> it is always possible to do an XSAVES even from atomic context.
>
> Paolo

Yes, passthrough is enabled in two cases: one is when a guest #NM is
trapped; the other is when the guest clears XFD before it ever generates an
#NM (which is possible for a guest).
In both cases, we enable passthrough and allocate buffers for guest_fpu and
current->thread.fpu, roughly as in the sketch below.
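The common path for both cases could look like this (only a sketch; the
helper names are invented, and disabling the MSR intercept stands in for
"passthrough"):

/*
 * Sketch of the common path for both passthrough triggers: make sure
 * the dynamic buffers exist for guest_fpu and current->thread.fpu,
 * then stop intercepting the XFD MSR so the guest owns it.
 */
static int enable_xfd_passthrough(struct kvm_vcpu *vcpu)
{
	int ret;

	/* Case 1: guest #NM trapped; case 2: guest wrote 0 to its XFD. */
	ret = kvm_realloc_guest_fpu(vcpu);
	if (ret)
		return ret;

	kvm_disable_xfd_intercept(vcpu);	/* made-up helper */
	return 0;
}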
Thanks,
Jing